question_id | answer_id | title | question | answer |
|---|---|---|---|---|
2,412,521 | 2,412,617 | Is there a C++ dependency index somewhere? | When trying new software and compiling with the classic ./configure, make, make install process, I frequently see something like:
error: ____.h: No such file or directory
Sometimes, I get really lucky and apt-get install ____ installs the missing piece and all is well. However, that doesn't always happen and I end up googling to find the package that contains what I need. And sometimes the package is the wrong version or flavor and is already used by another package that I downloaded.
How do people know which packages contain which .h files or whatever resource the compiler needs? Is there a dependency resolver website or something that people use to decode failed builds to missing packages? Is there a more modern method of automatically downloading and installing transitive dependencies for a build (somewhat like Java's Maven)?
| You can also use "auto-apt ./configure" (on Ubuntu, and probably also on Debian?) and it will attempt to install the missing dependencies automatically.
|
2,412,608 | 2,412,651 | g++: const discards qualifiers | Why do I get a 'discards qualifiers' error:
customExc.cpp: In member function ‘virtual const char* CustomException::what() const’:
customExc.cpp: error: passing ‘const CustomException’ as ‘this’ argument of ‘char customException::code()’ discards qualifiers
on the following code example
#include <iostream>
#include <string>
class CustomException: public std::exception {
public:
virtual const char* what() const throw() {
static std::string msg;
msg = "Error: ";
msg += code(); // <---------- this is the line with the compile error
return msg.c_str();
}
char code() { return 'F'; }
};
I have searched around on SO before regarding similar issues.
I have already added const in every possible place.
Please enlighten me; I don't get the point...
EDIT:
here are the steps to reproduce on Ubuntu Karmic 32-bit (g++ v4.4.1)
save example as customExc.cpp
type make customExc.o
EDIT: The error is related to CustomException. The class Foo has nothing to do with it. So I have deleted it.
| CustomException::what calls CustomException::code. CustomException::what is a const method, as signified by the const after what(). Since it is a const method, it cannot do anything that may modify itself. CustomException::code is not a const method, which means that it does not promise to not modify itself. So CustomException::what can't call CustomException::code.
Note that const methods are not necessarily related to const instances. Foo::bar can declare its exc variable as non-const and call const methods like CustomException::what; this simply means that CustomException::what promises not to modify exc, but other code might.
The C++ FAQ has a bit more information on const methods.
|
2,412,667 | 2,412,837 | Seg Fault when using std::string on an embedded Linux platform | I have been working for a couple of days on a problem with my application running on an embedded Arm Linux platform. Unfortunately the platform precludes me from using any of the usual useful tools for finding the exact issue. When the same code is run on the PC running Linux, I get no such error.
In the sample below, I can reliably reproduce the problem by uncommenting the string, list or vector lines. Leaving them commented results in the application running to completion. I expect that something is corrupting the heap, but I cannot see what? The program will run for a few seconds before giving a segmentation fault.
The code is compiled using an arm-linux cross-compiler:
arm-linux-g++ -Wall -o test fault.cpp -ldl -lpthread
arm-linux-strip test
Any ideas greatly appreciated.
#include <stdio.h>
#include <vector>
#include <list>
#include <string>
using namespace std;
/////////////////////////////////////////////////////////////////////////////
class TestSeg
{
static pthread_mutex_t _logLock;
public:
TestSeg()
{
}
~TestSeg()
{
}
static void* TestThread( void *arg )
{
int i = 0;
while ( i++ < 10000 )
{
printf( "%d\n", i );
WriteBad( "Function" );
}
pthread_exit( NULL );
}
static void WriteBad( const char* sFunction )
{
pthread_mutex_lock( &_logLock );
printf( "%s\n", sFunction );
//string sKiller; // <----------------------------------Bad
//list<char> killer; // <----------------------------------Bad
//vector<char> killer; // <----------------------------------Bad
pthread_mutex_unlock( &_logLock );
return;
}
void RunTest()
{
int threads = 100;
pthread_t _rx_thread[threads];
for ( int i = 0 ; i < threads ; i++ )
{
pthread_create( &_rx_thread[i], NULL, TestThread, NULL );
}
for ( int i = 0 ; i < threads ; i++ )
{
pthread_join( _rx_thread[i], NULL );
}
}
};
pthread_mutex_t TestSeg::_logLock = PTHREAD_MUTEX_INITIALIZER;
int main( int argc, char *argv[] )
{
TestSeg seg;
seg.RunTest();
pthread_exit( NULL );
}
| Maybe you're using a single-threaded version of the standard library, including the new and delete operators?
Those objects are being constructed within the guards of your mutex, but are destructed outside those bounds, so the destructors might be stepping on each other. One quick test would be to put scoping brackets {} around the declaration of killer.
See the gcc documentation for more.
|
2,412,718 | 2,412,752 | Declare float or cast float? | I've started to learn C++ using "C++ Primer Plus" by Stephen Prata and I'm currently trying to complete one of the exercises. I am wondering whether I should declare arc_to_minute and arc_to_degree as float, or cast them to float as I've done already. Any tips welcome!
#include <iostream>
int main()
{
using namespace std;
cout.setf(ios_base::fixed, ios_base::floatfield);
const int arc_to_minute = 60;
const int arc_to_degree = 60;
float degrees;
float minutes;
int seconds;
cout << "Degrees: ";
cin >> degrees;
cout << "Minutes: ";
cin >> minutes;
cout << "Seconds: ";
cin >> seconds;
//convert seconds to minutes and add
minutes = minutes + seconds / float (arc_to_minute);
//covert minutes to degrees and add
degrees = degrees + minutes / float (arc_to_degree);
cout << degrees;
}
| Make them floats, there's no reason for them to be integers when all your calculations are done in floating point:
const float arc_to_minute = 60.0f;
const float arc_to_degree = 60.0f;
Keep in mind in a constant-value case the cast will be done at compile-time anyway, so this is purely a design choice, with no performance changes. But in general, if you find yourself casting, you probably chose the incorrect data type to begin with.
For what it's worth, you should prefer C++ style casts when you do need to cast. For example:
static_cast<float>(arc_to_minute);
|
2,412,792 | 2,413,210 | Multi-Threaded MPI Process Suddenly Terminating | I'm writing an MPI program (Visual Studio 2k8 + MSMPI) that uses Boost::thread to spawn two threads per MPI process, and have run into a problem I'm having trouble tracking down.
When I run the program with: mpiexec -n 2 program.exe, one of the processes suddenly terminates:
job aborted:
[ranks] message
[0] terminated
[1] process exited without calling finalize
---- error analysis -----
[1] on winblows
program.exe ended prematurely and may have crashed. exit code 0xc0000005
---- error analysis -----
I have no idea why the first process is suddenly terminating, and can't figure out how to track down the reason. This happens even if I put the rank zero process into an infinite loop at the end of all of its operations... it just suddenly dies. My main function looks like this:
int _tmain(int argc, _TCHAR* argv[])
{
/* Initialize the MPI execution environment. */
MPI_Init(0, NULL);
/* Create the worker threads. */
boost::thread masterThread(&Master);
boost::thread slaveThread(&Slave);
/* Wait for the local test thread to end. */
masterThread.join();
slaveThread.join();
/* Shutdown. */
MPI_Finalize();
return 0;
}
Where the master and slave functions do some arbitrary work before ending. I can confirm that the master thread, at the very least, is reaching the end of its operations. The slave thread is always the one that isn't done before the execution gets aborted. Using print statements, it seems like the slave thread isn't actually hitting any errors... it's happily moving along and just gets taken out in the crash.
So, does anyone have any ideas for:
a) What could be causing this?
b) How should I go about debugging it?
Thanks so much!
Edit:
Posting minimal versions of the Master/Slave functions. Note that the goal of this program is purely for demonstration purposes... so it isn't doing anything useful. Essentially, the master threads send a dummy payload to the slave thread of the other MPI process.
void Master()
{
int myRank;
int numProcs;
MPI_Comm_size(MPI_COMM_WORLD, &numProcs);
MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
/* Create a message with numbers 0 through 39 as the payload, addressed
* to this thread. */
int *payload= new int[40];
for(int n = 0; n < 40; n++) {
payload[n] = n;
}
if(myRank == 0) {
MPI_Send(payload, 40, MPI_INT, 1, MPI_ANY_TAG, MPI_COMM_WORLD);
} else {
MPI_Send(payload, 40, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD);
}
/* Free memory. */
delete(payload);
}
void Slave()
{
MPI_Status status;
int *payload= new int[40];
MPI_Recv(payload, 40, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
/* Free memory. */
delete(payload);
}
| You have to use a thread-safe version of the MPI runtime.
Read up on MPI_Init_thread.
|
2,412,838 | 2,415,439 | Unable to access tables created with sqlite3 from a program using the C API | I generate an sqlite3 database file (call it db.sl3; invoked interactively as $ sqlite3 db.sl3 from a shell) from within the sqlite3 commandline program, for instance
create table people (
id integer,
firstname varchar(20),
lastname varchar(20),
phonenumber char(10)
);
insert into people (id, firstname, lastname, phonenumber) values
(1, 'Fred', 'Flintstone', '5055551234');
insert into people (id, firstname, lastname, phonenumber) values
(2, 'Wilma', 'Flintstone', '5055551234');
insert into people (id, firstname, lastname, phonenumber) values
(3, 'Barny', 'Rubble', '5055554321');
I am trying to use this in a program I have written which uses the sqlite3 C API; however, whenever I attempt to open up the database file in the C program using either
sqlite3* db;
rc = sqlite3_open( "db.sl3", &db );
or
rc = sqlite3_open_v2( "db.sl3", &db, SQLITE_READONLY, 0 );
followed by a query where the SQL is contained in the following string
std::string sqlCmd = "select * from sqlite_master where type='table' order by name";
to the sqlite3_get_table wrapper interface invoked as follows
rc = sqlite3_get_table( db, sqlCmd.c_str(), &result, &numRows, &numCols, &errorMsg );
The return code (rc) is 0 in either case implying that there was no problem with the operation but there are no tables entries recorded in the result variable. I have tried all sorts of pathing issues (e.g., using absolute paths to the db file, relative, only in the invocation directory, etc.) to no avail. Also, when I reopen the database file with the sqlite3 commandline program I can see the tables and their entries. As an aside, if I open, create and insert the lines into the table in the C program I can access them fine.
Any help would be much appreciated. Am I not properly initializing things?
| By default sqlite3_open will create the database if it does not exist (equivalent with calling sqlite3_open_v2 with flags=SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE) - and rc will be SQLITE_OK.
I've tried this test program against a database created using sqlite3 and each time (existing db with people table: full path, relative; existing 0 sized db: full path, relative; nonexistent db: full path, relative) sqlite3_open_v2 behaved as advertised.
Try doing a 'sqlite3 [full_path_to_db]', then run the query and compare the results with what your program or mine does (again with [full_path_to_db]), just in case your sqlite3_open tests created some 0-sized db.sl3 files which are then gladly opened by sqlite3_open_v2.
Hope this helps.
|
2,412,941 | 2,413,033 | QT 2d list causing segfault | I've got a program that worked until recently. The offending code is shown here:
void writeTimeInfo(fitsfile *fptr, QList<QList<double> > &in)
{
double data[in.size() * in[0].size()];
long naxes[2];
int status = 0;
naxes[1] = in.size();
naxes[0] = in[0].size();
for (int i=0; i<naxes[1]; i++)
{
for (int j=0; j<naxes[0]; j++)
{
data[j+i*naxes[0]] = in[i][j];
}
}
fits_insert_img(fptr, DOUBLE_IMG, 2, naxes, &status);
fits_write_key(fptr, TSTRING, "EXTNAME", (void*)"HJD", "Extension name", &status);
fits_write_2d_dbl(fptr, 0, naxes[0], naxes[0], naxes[1], data, &status);
if (status)
fits_report_error(stderr, status);
}
The specifics of the program are not important (all the fits stuff, it is used to manipulate the astronomy standard fits files), but currently the program segfaults on the line
naxes[1] = in.size();
I cannot see what's wrong; before this step I can use gdb to see in.size() and in[0].size() (the two array dimensions) and I've checked the array values. It just segfaults here.
in is a 2d QList array as you can see by the function argument list. The array is passed by reference so as to not duplicate memory. The only change I've made is running the program on a larger input set where before (the working stage): in was a 2d double array of 515*1508 elements, whereas now it is an array of 515*2480 elements. Could it be there is not enough memory on the stack?
Cheers
| An array of 515 * 1508 doubles is roughly 6 MB, a lot for the stack. This is probably a stack overflow. Try setting the stack limit using the --stack option of ld to ~10 MB (if possible) and test with the value of 515 * 2480.
On Windows, using VS2010 Beta, the following crashes the stack:
int main() { double x[ 515 * 1508 ]; }
Create the array on the heap. Use a scoped_array if you can use Boost.
|
2,412,971 | 2,412,984 | How can I find the real size of my C++ class? | I'm working on a homework assignment in which I'm required to use char arrays instead of strings and qsort/bsearch. In my call to bsearch below, I know I'm passing the wrong size of Entry, but I'm not sure how to get the real size, and my compareEntries function is therefore not finding the right objects.
Can anyone help me understand what I'm missing?
#include <iostream>
#include <cstdlib>   // qsort, bsearch
#include <cstring>   // strcmp
using namespace std;
class Entry;         // forward declaration for the typedef below
typedef Entry* EntryPtr;
class Dictionary
{
public :
Dictionary( const char *filename );
~Dictionary();
const char *lookupDefinition( const char *word );
private :
int m_numEntries;
EntryPtr *m_entries;
static int compareEntries(const void *leftSide, const void *rightSide);
};
int Dictionary::compareEntries(const void *leftSide, const void *rightSide)
{
EntryPtr lside = (EntryPtr) leftSide;
EntryPtr rside = (EntryPtr) rightSide;
return strcmp(lside->Word, rside->Word);
}
const char *Dictionary::lookupDefinition(const char *word)
{
if (m_numEntries == 0)
return 0;
EntryPtr result = (EntryPtr) bsearch(word, m_entries,
m_numEntries, sizeof(m_entries[0]), Dictionary::compareEntries);
return result->Definition;
}
class Entry
{
public:
Entry(const char *line);
char *Word;
char *Definition;
};
int main()
{
Dictionary dict( "C:\\path\\file.txt" );
dict.lookupDefinition("chair");
return 0;
}
| Why doesn't sizeof(Entry) work?
Changed again -- I think the size should be the size of the pointer....
EntryPtr tmp = new Entry("");
tmp->Word = word;
EntryPtr result = (EntryPtr) bsearch(tmp, m_entries,
m_numEntries, sizeof(EntryPtr), Dictionary::compareEntries);
|
2,413,034 | 2,413,410 | Problem to convert string binary (64 bits) to decimal (c++ in iphone) | I have a problem converting a binary string to a decimal.
I was using bitset
string bitstring = "1011010001111111";
unsigned long binToDec( string bitstring){
bitset<32> dec (bitstring);
return dec.to_ulong();
}
All of this works fine, but the problem comes when I try to do the same with a bit string of more than 32 bits. I know that bitset only does the conversion with 4 bytes, but I need to do the conversion with strings that have 48 or 56 bits (sometimes I need to get 14 or 15 digits).
string bitstring;
bitstring ="11100101001001000000100000100100110100110011010001111111";
I want to get this number: 64497387062899840
Any suggestions? Does anybody have a function to transform a binary string to decimal?
Note: I can't use Boost because it's not ported to the iPhone.
Thanks for your help.
| I've just put together this and it seems to work with your example, I haven't tested any bigger values, compared result with calculator.
Outputs:
64497387062899839
Code:
#include <iostream>
#include <limits>
#include <string>
using namespace std;
unsigned long long convert(string& bits)
{
if (bits.length() > (size_t)numeric_limits<unsigned long long>::digits)
return 0;
unsigned long long sum = 0;
unsigned long long shift = 1;
for (string::reverse_iterator it(bits.rbegin()), end(bits.rend());
it < end; ++it)
{
if (*it == '1') sum += shift;
shift = (shift << 1);
}
return sum;
}
int main()
{
string bits("11100101001001000000100000100100110100110011010001111111");
cout << "returned: " << convert(bits) << endl;
}
|
2,413,172 | 2,413,269 | Cross platform C++ code architecture | I'm having a bit of a go at developing a platform abstraction library for an application I'm writing, and struggling to come up with a neat way of separating my platform independent code from the platform specific code.
As I see it there are two basic approaches possible: platform independent classes with platform specific delegates, or platform independent classes with platform specific derived classes. Are there any inherent advantages/disadvantages to either approach? And in either case, what's the best mechanism to set up the delegation/inheritance relationship such that the process is transparent to a user of the platform independent classes?
I'd be grateful for any suggestions as to a neat architecture to employ, or even just some examples of what people have done in the past and the pros/cons of the given approach.
EDIT: in response to those suggesting Qt and similar, yes I'm purposely looking to "reinvent the wheel" as I'm not just concerned with developing the app, I'm also interested in the intellectual challenge of rolling my own platform abstraction library. Thanks for the suggestion though!
| I'm using platform-neutral header files, keeping any platform-specific code in the source files (using the PIMPL idiom where necessary). Each platform-neutral header has one platform-specific source file per platform, with extensions such as *.win32.cpp and *.posix.cpp. The platform-specific ones are only compiled on the relevant platforms.
I also use boost libraries (filesystem, threads) to reduce the amount of platform specific code I have to maintain.
It's platform independent classes declarations with platform specific definitions.
Pros: Works fairly well, doesn't rely on the preprocessor - no #ifdef MyPlatform, keeps platform specific code readily identifiable, allows compiler specific features to be used in platform specific source files, doesn't pollute the global namespace by #including platform headers.
Cons: It's difficult to use inheritance with pimpled classes, sometimes the PIMPL structs need their own headers so they can be referenced from other platform specific source files.
|
2,413,342 | 2,413,392 | OpenCV: Reading a YAML file into a CvMat structure | Using OpenCV, saving a CvMat structure into a YAML file on the disk is easy with
CvMat* my_matrix = cvCreateMat( row_no, col_no, CV_32FC1 );
cvSave("filename.yml", my_matrix);
However, I couldn't find an easy way to read the saved files back from the disk. Is there function in OpenCV that can handle this and create a CvMat structure from a YAML file?
| CvMat* my_matrix;
my_matrix = (CvMat*)cvLoad("filename.yml");
seems to do the trick!
|
2,413,533 | 2,413,649 | How should I go about generating every possible map<char, char> combination from map<char, vector<char> >? | I am looking to take a map<char, vector<char> > and generate each possible map<char, char> from it.
I understand this may use a sizeable amount of memory and take a bit of time.
Each map<char, char> needs to contain every letter a-z, and be mapped to a unique a-z character. ie.
ak
bj
cp
dy
ev
fh
ga
hb
ir
jq
kn
li
mx
nc
oo
pz
qs
rl
sd
te
uw
vf
wg
xm
yu
zt
Here is what I have concluded for myself so far:
To cut down the ridiculous number of possible combinations to a lower amount, if a vector<char> contains more than 5 elements, I will simply replace it with a vector<char> containing a single char from my 'master'/'original' map<char, char>.
Not all characters will be present over all of the vector<char>s in the map. These characters need to be found and put into some 'others' vector.
This should also contain characters where one character is the only possible character for more than one character key(ie. mw in the example I am working from - I'm unsure how to go about this).
This 'others' vector should be used for the cases where it is not possible to have a unique a-z character, or where more than one character has the same, single possible character.
Here’s an example of what I have so far.
I will be taking a map<char, vector<char> >, such as:
a: gjkpqvxz
b: gjkpqvxz
c: gjkpqvxyz
d: mw
e: gjkpqvxz
f: nr
g: at
h: cf
i: his
j: gjkpqvxz
k: r
l: h
m: gjkpqvxz
n: gjkpquvxyz
o: is
p: gjkpqvxz
q: is
r: dl
s: l
t: e
u: dgkpuvy
v: cf
w: bcf
x: dguy
y: f
z: at
This is my starting map, after cutting out the large character vectors of over 5 elements and replacing them with the best guess. Where there is a vector<char> of size 1, that character mapping has only one combination, and that character cannot be used in any other mapping, as that would make it non-unique. I have trimmed it down to:
a: k
b: j
c: p
d: mw
e: v
f: n
g: at
h: c
i: is
j: q
k: r
l: h
m: x
n: guy
o: is
p: z
q: is
r: d
s: l
t: e
u: dguy
v: c
w: bc
x: dguy
y: f
z: at
The 'others' vector contains 'o' (I think it is important to note that I think this should contain cases such as mw from the above example. As d is the only place mw can be used, but obviously with the need for each letter to only be used once, only one of them can be used, leaving the other to be lost somewhere. I'm not sure how to go about programming a general case to add these to the others vector.)
I am looking for help and pointers with generating every possible map<char, char> from map<char, vector<char> >s like this and in this format. They will be used as an argument in a function call. I'm not really sure where to start writing something that would work in a general sense. I would probably approach it with a large amount of for loops looking through every element against every other element against every other element ... etc etc, which I assume would be extremely inefficient and there are probably much more elegant ways of solving such a problem.
Sorry if this is too wall of text-ish or seems overly specific or poorly written/asked.
I appreciate any and all assistance.
| I guess I'd hope that I don't need them all to exist simultaneously. Then I could:
1) Create the first map by assigning the first possible element to each letter:
for (char c = 'a'; c <= 'z'; ++c) { // yes, I assume ASCII
new_map[c] = old_map[c][0];
}
int indexes[26] = {0};
2) Create the remaining maps in turn by modifying the existing map, repeatedly:
++indexes[0];
if (indexes[0] < old_map['a'].size()) {
new_map['a'] = old_map['a'][indexes[0]];
} else {
indexes[0] = 0;
new_map['a'] = old_map['a'][0];
// "carry the 1" by applying the same increment process to indexes[1]
}
do_something_with(new_map);
do_something_with can re-construct the "others" vector each time from the map, or else you can update it each time you change a character. Replace:
new_map['a'] = something;
with:
char removed = new_map['a'];
--counts[removed];
if (counts[removed] == 0) others.add(removed);
++counts[something];
if (counts[something] == 1) others.remove(something);
new_map['a'] = something;
In your trimmed-down example there are only about 6000 possibilities, which should fly by. In fact, if you did need them all simultaneously you could copy the previous map at every step, and it wouldn't exactly take until the next ice age.
Btw, have you considered that a map is a bit overkill for only 26 possible keys, each of which is required to be present in every map? A vector or array would be considerably cheaper to use and to copy.
|
2,413,786 | 2,413,815 | Using for_each and boost::bind with a vector of pointers | I have a vector of pointers. I would like to call a function for every element, but that function takes a reference. Is there a simple way to dereference the elements?
Example:
MyClass::ReferenceFn( Element & e ) { ... }
MyClass::PointerFn( Element * e ) { ... }
MyClass::Function()
{
std::vector< Element * > elements;
// add some elements...
// This works, as the argument is a pointer type
std::for_each( elements.begin(), elements.end(),
boost::bind( &MyClass::PointerFn, boost::ref(*this), _1 ) );
// This fails (compiler error), as the argument is a reference type
std::for_each( elements.begin(), elements.end(),
boost::bind( &MyClass::ReferenceFn, boost::ref(*this), _1 ) );
}
I could create a dirty little wrapper that takes a pointer, but I figured there had to be a better way?
| You could use boost::indirect_iterator:
std::for_each( boost::make_indirect_iterator(elements.begin()),
boost::make_indirect_iterator(elements.end()),
boost::bind( &MyClass::ReferenceFn, boost::ref(*this), _1 ) );
That will dereference the adapted iterator twice in its operator*.
|
2,414,095 | 2,414,125 | How to hide specific type completely using typedef? | I have a quick question about encapsulating specific types with typedef. Say I have a class Foo whose constructor takes a certain value, but I want to hide the specific type using typedef:
class Foo {
public:
typedef boost::shared_ptr< std::vector<int> > value_type;
Foo(value_type val) : val_(val) {}
private:
value_type val_;
};
But in this case, the main function still has to know the type (so it's explicitly using std::vector<int>):
int main() {
Foo::value_type val(new std::vector<int>());
val->push_back(123);
Foo foo(val);
return 0;
}
How can I fix that while still avoiding a deep copy of the vector in the Foo constructor?
| Various solutions:
Foo::value_type val(new Foo::value_type::element_type());
// least change from your current code, might be too verbose or too
// coupled to boost's smart pointer library, depending on your needs
Foo::value_type val(new Foo::element_type());
// add this typedef to Foo: typedef value_type::element_type element_type;
Foo::value_type val = Foo::new_value_type();
// static method in Foo, allows you to even easily change from new (as you
// encapsulate the whole smart pointer, and can specify another deleter to the
// boost::shared_ptr)
struct Foo {
static value_type new_value_type() { // function used above
return value_type(new value_type::element_type());
}
};
However, if all you want is to have a vector member in Foo initialized from outside data without copying it, instead of actually sharing through a shared_ptr, then I wouldn't use a shared_ptr at all. Take a reference in Foo's ctor and document that it changes the object.
struct Foo {
typedef std::vector<int> value_type;
explicit Foo(value_type& val) {
using std::swap;
swap(val, _val);
}
private:
value_type _val;
};
int main() {
Foo::value_type val;
val.push_back(123);
Foo foo(val);
return 0;
}
|
2,414,155 | 2,414,208 | Getting input from a file in C++ | I am currently developing an application which gets its input from a text file and proceeds accordingly. The concept is that the input file will have details in this format:
A AND B
B OR C
Each and every line will be separated by a blank space, and the input must be taken from the text file and processed by my logic. I use a TCPP compiler and I am facing problems reading the input. Please help me with the issue...
| Reading input a line at a time is normally done with std::getline, something like this:
#include <fstream>
#include <iostream>
#include <string>

std::string line;
std::ifstream infile("filename");
while (std::getline(infile, line))
    // show what we read
    std::cout << line << "\n";
If you're having trouble with things like this, you might consider looking for a (better) book on C++ than whatever you're now (hopefully) using.
|
2,414,261 | 2,415,084 | What is the current modern term for "Multi-byte Character Set" | I used to be confusing quite a while :
Confusion on Unicode and Multibyte Articles
After reading up the comments by all contributors, plus :
Looking at an old article (Year 2001) : http://www.hastingsresearch.com/net/04-unicode-limitations.shtml, which talk about unicode :
being a 16-bit character definition allowing a theoretical total of over 65,000 characters. However, the complete character sets of the world add up to over 170,000 characters.
and Looking at current "modern" article : http://en.wikipedia.org/wiki/Unicode
The most commonly used encodings are UTF-8 (which uses 1 byte for all ASCII characters, which have the same code values as in the standard ASCII encoding, and up to 4 bytes for other characters), the now-obsolete UCS-2 (which uses 2 bytes for all characters, but does not include every character in the Unicode standard), and UTF-16 (which extends UCS-2, using 4 bytes to encode characters missing from UCS-2).
It seems that among the compilation options in VC2008, the "Unicode" option under Character Set really means "Unicode encoded in UCS-2" (or UTF-16? I am not sure).
I try to verify this by running the following code under VC2008
#include <iostream>
int main()
{
// Use unicode encoded in UCS-2?
std::cout << sizeof(L"我爱你") << std::endl;
// Use unicode encoded in UCS-2?
std::cout << sizeof(L"abc") << std::endl;
getchar();
// Compiled using options Character Set : Use Unicode Character Set.
// print out 8, 8
// Compiled using options Character Set : Multi-byte Character Set.
// print out 8, 8
}
It seems that during compilation with Unicode Character Set options, the outcome matched my assumption.
But what about Multi-byte Character Set? What does Multi-byte Character Set means in current "modern" world? :)
| http://en.wikipedia.org/wiki/Multi-byte_character_set
MBCS is a term used to denote a class of character encodings whose characters cannot all be represented with a single byte, hence multi-byte character set. In order to properly decode a string in this format, you need a codepage that tells you which byte combinations map to which characters; classic examples are Shift-JIS, GB2312, and Big5. (ISO/IEC 8859, by contrast, is a family of single-byte encodings; according to Wikipedia, ISO stopped maintaining it in 2004, presumably to focus on Unicode.)
So I guess the modern term for MBCS is "deprecated in favor of Unicode".
|
2,414,359 | 2,457,919 | Microsecond resolution timestamps on Windows | How do I get microsecond resolution timestamps on Windows?
I am looking for something better than QueryPerformanceCounter and QueryPerformanceFrequency (these can only give you an elapsed time since boot, and are not necessarily accurate if they are called on different threads; that is, QueryPerformanceCounter may return different results on different CPUs. There are also some processors that adjust their frequency for power saving, which apparently isn't always reflected in their QueryPerformanceFrequency result.)
There is Implement a Continuously Updating, High-Resolution Time Provider for Windows, but it does not seem to be solid. When microseconds matter looks great, but it's not available for download any more.
Another resource is Obtaining Accurate Timestamps under Windows XP, but it requires a number of steps, running a helper program plus some init stuff also, I am not sure if it works on multiple CPUs.
I also looked at the Wikipedia article Time Stamp Counter which is interesting, but not that useful.
If the answer is just do this with BSD or Linux, it's a lot easier and that's fine, but I would like to confirm this and get some explanation as to why this is so hard in Windows and so easy in Linux and BSD. It's the same fine hardware...
| I believe this is still useful: System Internals: Guidelines For Providing Multimedia Timer Support.
It does a good job of explaining the various timers available and their limitations. It might be that your archenemy will not so much be resolution, but latency.
QueryPerformanceCounter will not always run at CPU speed. In fact, it might try to avoid RDTSC, especially on multi-processor(/multi-core) systems: it will use the HPET on Windows Vista and later if it is available or the ACPI/PM timer.
On my system (Windows 7 x64, dual core AMD) the timer runs at 14.31818 MHz.
The same is true for earlier systems:
"By default, Windows Server 2003 Service Pack 2 (SP2) uses the PM timer for all multiprocessor APIC or ACPI HALs, unless the check process to determine whether the BIOS supports the APIC or ACPI HALs fails."
The problem arises when the check fails. That simply means your computer/BIOS is broken in some way. Then you might either fix your BIOS (recommended), or at least switch to using the ACPI timer (/usepmtimer) for the time being.
It is easy from C# - without P/Invoke - to check for high-resolution timer support with Stopwatch.IsHighResolution and then peek at Stopwatch.Frequency. It will make the necessary QueryPerformanceCounter call internally.
Also consider that if the timers are broken, the whole system will go havoc and in general, behave strangely, reporting negative elapsed times, slowing down, etc. - not just your application.
This means that you can actually rely on QueryPerformanceCounter.
... and contrary to popular belief, QueryPerformanceFrequency() "cannot change while the system is running".
Edit: As the documentation on QueryPerformanceCounter() states, "it should not matter which processor is called" - and in fact the whole hacking around with thread affinity is only needed if the APIC/ACPI detection fails and the system resorts to using the TSC. It is a resort that should not happen. If it happens on older systems, there is likely a BIOS update/driver fix from the manufacturer. If there is none, the /usepmtimer boot switch is still there. If that fails as well, because the system does not have a proper timer apart from the Pentium TSC, you might in fact consider messing with thread affinity - even then, the sample provided by others in the "Community Content" area of the page is misleading as it has a non-negligible overhead due to setting thread affinity on every start/stop call - that introduces considerable latency and likely diminishes the benefits of using a high resolution timer in the first place.
Game Timing and Multicore Processors is a recommendation on how to use them properly. Please consider that it is now five years old, and at that time fewer systems were fully ACPI compliant/supported - that is why while bashing it, the article goes into so much detail about TSC and how to work around its limitations by keeping an affine thread.
I believe it is a fairly hard task nowadays to find a common PC with zero ACPI support and no usable PM timer. The most common case is probably BIOS settings, when ACPI support is incorrectly set (sometimes sadly by factory defaults).
Anecdotes tell that eight years ago, the situation was different in rare cases. (Makes a fun read, developers working around design "shortcomings" and bashing chip designers. To be fair, it might be the same way vice versa. :-)
|
2,414,399 | 2,414,432 | Problem linking to a DLL from C++ console application | I have a console application in C++. I need to access a functionality from a C++ DLL. How do I link the DLL from my console application?
Please help me in this regard.
Thanks,
Rakesh.
| It depends on whether you are using load-time or run-time dynamic linking.
With load-time dynamic linking, you simply link against the import library for the DLL and make sure the DLL is in the right location when you run. With run-time dynamic linking, you need to use LoadLibrary and GetProcAddress to make calls into the DLL.
Both are described in detail on MSDN, with examples.
|
2,414,463 | 2,414,469 | Application launch from c command in window OS | I want to make an console application of c which can run other applications (exe files). Kindly guide me how can I make it possible so that from my c code i can run other executable files in window OS.
| You can use the CreateProcess Windows API function.
|
2,414,477 | 2,414,503 | Stl Set find an item c++ | I have a set where i want to find items in it. Right now i have global objects that i am using to store my finds - (ItemSetMap allMusicByBand)
I would like to get away from this and just search the sets directly.
All the cd info are stored in the private section - (ItemSet allCDS;)
here is the library.cpp -
the commented code is where i was doing my search and adding to the global object...
I would like to do the search in the musicByBand function instead..
#include "Library.h"
#include "book.h"
#include "cd.h"
#include "dvd.h"
#include <iostream>
//ItemSetMap allBooksByAuthor; //these are what i am trying to get away from...
ItemSetMap allmoviesByDirector;
ItemSetMap allmoviesByActor;
//ItemSetMap allMusicByBand;
ItemSetMap allMusicByMusician;
const Item* Library::addMusicCD(const string& title, const string& band, const int nSongs)
{
CD* item = new CD(title,band,nSongs);
allCDS.insert(item);
//ItemSetMap::iterator myband = allMusicByBand.find(band);
//if(myband != allMusicByBand.end())
//{
//myband->second->insert(item);
//}
//else{
//ItemSet* obj = new ItemSet();
//obj->insert(item);
//allMusicByBand.insert(make_pair(band, obj));
//}
return item;
}
const ItemSet* Library::musicByBand(const string& band) const
{
return allMusicByBand[author];
}
i hope i was clear enough on what i wanted.
I have tried to iterate through it. I have tried just about everything i can think of..
CD class is a superclass of item class.
Thank you..
| The simplest way to do it would be this:
const ItemSet* Library::musicByBand(const string& band) const
{
ItemSet* bandMusic = new ItemSet();
for (ItemSet::const_iterator i = allCDS.begin(); i != allCDS.end(); ++i)
{
if ((*i)->getBand() == band) {
bandMusic->insert(*i);
}
}
return bandMusic;
}
Although this runs in O(n) time, which doesn't at all take advantage of the fact that you are using a set. These could just as well be in a vector. The way you were doing it before with the "index" sets is actually a faster-performing solution, although it will take somewhat more memory. Plus, presumably the retrieval methods will be called far more frequently than the insertion methods, so it makes sense to do more work on insertion in order to save on work during retrieval. But of course if you do this, you will want the index sets to be private members, not globals.
You should also be very careful about your memory management here. The fact that you are returning a const pointer to an ItemSet from the musicByBand method concerns me. Why can it not be just an ItemSet that you are returning?
|
2,414,478 | 2,414,490 | Escaping an apostrophe in a character literal | Could you please tell me how can one escape an apostrophe.
I need it to process non apostrophe characters through file operations so when I encounter an apostrophe(') I can't just give ch!='''. It doesn't work. Could you please tell me the right format. Thank you..:)
| Use \', for example:
if ( ch != '\'' )
\' is an escape sequence for the apostrophe.
Google for "escape sequence" to know more about it.
|
2,414,561 | 2,414,570 | Why some libraries must implement basic data structure? | Some open source libraries have tendency to re implement basic structures like string, list, stack, queue...
Why don't they use stl library? Is stl not good enough?
| Exposing STL types in headers can, in some cases, lead to nasty, nasty link times. On large projects, that can be sufficient reason to "hide" them behind a proprietary-looking API.
|
2,414,616 | 2,414,661 | How to make script/program to make it so an application is always running? | I have a simple .exe that needs to be running continuously.
Unfortunately, sometimes it crashes unexpectedly, and there's nothing that can be done for this.
I'm thinking of like a C# program that scans the running application tree on a timer and if the process stops running it re-launches it... ? Not sure how to do that though....
Any other ideas?
| It's fairly easy to do that, but the "crashes unexpectedly, and there's nothing that can be done for this" sounds highly suspect to me. Perhaps you mean the program in question is from a third party, and you need to work around problems they can't/won't fix?
In any case, there's quite a bit of sample code to do exactly what you're talking about.
|
2,414,828 | 2,414,852 | Get path to My Documents | From Visual C++, how do I get the path to the current user's My Documents folder?
Edit:
I have this:
TCHAR my_documents[MAX_PATH];
HRESULT result = SHGetFolderPath(NULL, CSIDL_MYDOCUMENTS, NULL, SHGFP_TYPE_CURRENT, my_documents);
However, result is coming back with a value of E_INVALIDARG. Any thoughts as to why this might be?
| It depends on how old of a system you need compatibility with. For old systems, there's SHGetSpecialFolderPath. For somewhat newer systems, there's SHGetFolderPath. Starting with Vista, there's SHGetKnownFolderPath.
Here's some demo code that works, at least on my machine:
#include <windows.h>
#include <iostream>
#include <shlobj.h>
#pragma comment(lib, "shell32.lib")
int main() {
CHAR my_documents[MAX_PATH];
HRESULT result = SHGetFolderPath(NULL, CSIDL_PERSONAL, NULL, SHGFP_TYPE_CURRENT, my_documents);
if (result != S_OK)
std::cout << "Error: " << result << "\n";
else
std::cout << "Path: " << my_documents << "\n";
return 0;
}
|
2,414,872 | 2,415,198 | IPhone compilation of ported code problems: calling a static templated function that's inside a templated class == fail | template<typename T> struct AClass
{
public:
template<typename T0>
static void AFunc()
{}
};
template<typename T>
void ATestFunc()
{
AClass<T>::AFunc<int>();
}
this works on other platforms, but not on the iPhone: I get the error " expected primary-expression before 'int' " on the line where I call the function.
it works fine if I was to do
AClass<int>::AFunc<int>();
and it works fine if we ditch the template parameter for the function as well:
template<typename T> struct AClass
{
public:
static void AFunc()
{}
};
template<typename T>
void ATestFunc()
{
AClass<T>::AFunc();
}
Any Ideas as to why it doesn't work with the iPhone?
| try changing the line AClass<T>::AFunc<int>() to AClass<T>::template AFunc<int>();
|
2,415,082 | 2,415,088 | When to use recursive mutex? | I understand recursive mutex allows mutex to be locked more than once without getting to a deadlock and should be unlocked the same number of times. But in what specific situations do you need to use a recursive mutex? I'm looking for design/code-level situations.
| For example when you have a function that calls itself recursively, and you want to get synchronized access to it:
void foo() {
... mutex_acquire();
... foo();
... mutex_release();
}
without a recursive mutex you would have to create an "entry point" function first, and this becomes cumbersome when you have a set of functions that are mutually recursive. Without recursive mutex:
void foo_entry() {
mutex_acquire();
foo();
mutex_release();
}
void foo() { ... foo(); ... }
|
2,415,153 | 2,415,167 | How often do you declare your functions to be const? | Do you find it helpful?
| Every time you know that a method won't change the state of the object, you should declare it const.
It helps when reading your code. And it helps when you accidentally try to change the state of the object - the compiler will stop you.
|
2,415,373 | 2,415,593 | Small block allocator on Linux (or RedHat Linux) to avoid memory fragmentation | I know that there is an allocator for user applications than handles lots of small block allocation on HP-UX link text and on Windows XP Low-fragmentation Heap. On HP-UX it is possible to tune the allocator and on Windows XP it considers block of size less than 16 K as small.
My problem is that I can't find any information about this kind of allocator for user programs running on Linux (RedHat Linux actually). If there is such an allocator I actually would like to find what maximum size of blocks it can handle.
Update
I have found jemalloc (http://www.canonware.com/jemalloc/). It handles small, large and huge blocks: http://www.canonware.com/download/jemalloc/jemalloc-latest/doc/jemalloc.html#size_classes.
| Redhat Linux or any Linux based distributions mostly use DL-Malloc (http://gee.cs.oswego.edu/dl/html/malloc.html).
For user applications as Kirill pointed out, better to use separate memory allocators if fragmentation is more because of smaller blocks.
If the user application is small, you can try using C++ placement new/delete which can override the default allocator pattern. (http://en.wikipedia.org/wiki/Placement_syntax)
|
2,415,927 | 2,416,048 | ITERATOR LIST CORRUPTED in std::string constructor | The code below compiled in Debug configuration in VS2005 SP1 shows two messages with “ITERATOR LIST CORRUPTED” notice.
Code Snippet
#define _SECURE_SCL 0
#define _HAS_ITERATOR_DEBUGGING 0
#include <sstream>
#include <string>
int main()
{
std::stringstream stream;
stream << "123" << std::endl;
std::string str = stream.str();
std::string::const_iterator itFirst = str.begin();
int position = str.find('2');
std::string::const_iterator itSecond = itFirst + position;
std::string tempStr(itFirst,itSecond); ///< errors are here
return 0;
}
Is it a bug in the compiler or standard library?
| What @dirkgently said in his edit.
Apparently, some code for std::string is located in the runtime dll, in particular the macro definition does not take effect for the constructor an the code for iterator debugging gets executed. You can fix this by linking the runtime library statically.
I would consider this a bug, though perhaps not in the Visual Studio itself, but in the documentation.
|
2,415,989 | 2,417,589 | Is it possible to compact the VC++ runtime heap? | Can I have the same effect as with HeapCompact() for the Visual C++ runtime heap? How can I achieve that?
| You can get a HANDLE for the CRT heap using _get_heap_handle, then call HeapCompact on it. Not sure whether this is supported/stable as I have not tried this myself. I imagine you would want to call HeapCompact in serialized mode to have any chance of this working.
If you are going to this trouble just call HeapSetInformation on the handle (per MDSN docs on _get_heap_handle) and let the built-in LFH handle compaction for you.
|
2,416,033 | 2,416,142 | Creating csv files in ObjC | Is it possible or any library available for creating .csv file in ObjC ?
Thanks
| A CSV file is a text file of comma seperated values.
You could write a routine that loops through the values, adding each one to a text file (or even building the values up in a string). After each field, add the ',' character. At the end of each row, add a newline. The first row can be the field titles.
E.g.
Year,Make,Model
1997,Ford,E350
2000,Mercury,Cougar
Here is a wikipedia article that describes what CSV is. I hope it can help.
|
2,416,091 | 2,416,259 | How applications can be protected from errors in DLL module | I have DLL and application that will call some function in this dll. For example...
DLL function:
char const* func1()
{
return reinterpret_cast<char const*>(0x11223344);
}
Application code:
func1 = reinterpret_cast<Func1Callback>(::GetProcAddress(hDll, "func1"));
blablabla
char const* ptr = func1();
cout << ptr;
That DLL is not under my control (plugin)..
Same code will cause access violation in my application, so... Is there any mechanism that will allow to determine such errors?
| Since the DLL can do anything your program could do, the only reliable way is to load it into a separate lightweight worker process and, once anything bad happens, just restart that process. You'll need some protocol to pass data into the worker process and receive results.
|
2,416,255 | 2,416,399 | Destruction of string temporaries in thrown exceptions | Consider the following code:
std::string my_error_string = "Some error message";
// ...
throw std::runtime_error(std::string("Error: ") + my_error_string);
The string passed to runtime_error is a temporary returned by string's operator+. Suppose this exception is handled something like:
catch (const std::runtime_error& e)
{
std::cout << e.what() << std::endl;
}
When is the temporary returned by string's operator+ destroyed? Does the language spec have anything to say about this? Also, suppose runtime_error took a const char* argument and was thrown like this:
// Suppose runtime_error has the constructor runtime_error(const char* message)
throw std::runtime_error((std::string("Error: ") + my_error_string).c_str());
Now when is the temporary string returned by operator+ destroyed? Would it be destroyed before the catch block tries to print it, and is this why runtime_error accepts a std::string and not a const char*?
| As a temporary object (12.2), the result of the + will be destroyed as the last step in the evaluation of the full-expression (1.9/9) that contains it. In this case the full-expression is the throw-expression.
A throw-expression constructs a temporary object (the exception-object) (15.1) (std::runtime_error in this case). All the temporaries in the throw-expression will be destroyed after the exception-object has been constructed. The exception is thrown only once the evaluation of the throw-expression has completed, as the destruction of temporaries is part of this evaluation they will be destroyed before the destruction of automatic variables constructed since the try block was entered (15.2) and before the handler is entered.
The post-condition on runtime_error's constructor is that what() returns something that strcmp considers equal to what c_str() on the passed in argument returns. It is a theoretical possiblility that once the std::string passed as a constructor argument is destroyed, runtime_error's what() could return something different, although it would be a questionable implementation and it would still have to be a null-terminated string of some sort, it couldn't return a pointer to a stale c_str() of a dead string.
|
2,416,653 | 2,497,534 | Tuples of unknown size/parameter types | I need to create a map, from integers to sets of tuples, the tuples in a single set have the same size. The problem is that the size of a tuple and its parameter types can be determined at runtime, not compile time. I am imagining something like:
std::map<int, std::set<boost::tuple> >
but not exctly sure how to exactly do this, bossibly using pointers.
The purpose of this is to create temporary relations (tables), each with a unique identifier (key), maybe you have another approach.
| The purpose of boost::tuple is to mix arbitrary types. If, as you say,
I am only inserting integers
then you should use map< int, set< vector< int > > >. (If I were you, I'd throw some typedefs at that.)
To answer the original question, though, boost::tuple doesn't allow arbitrary types at runtime. boost::any does. However, any does not support comparison so there's a little more work if you want to use it in a set.
typedef vector< boost::any > tuple;
struct compare_tuple { bool operator()( tuple const &l, tuple const &r ) const {
assert ( l.size() == r.size() );
for ( tuple::const_iterator lit = l.begin(), rit = r.begin();
lit != l.end(); ++ lit, ++ rit ) {
assert ( lit->type() == rit->type() );
if ( lit->type() == typeid( foo ) ) { // find the type and perform "<"
if ( boost::any_cast<foo>(*lit) < boost::any_cast<foo>(*rit) ) return true;
if ( boost::any_cast<foo>(*rit) < boost::any_cast<foo>(*lit) ) return false;
} else if ( lit->type() == typeid( bar ) ) {
if ( boost::any_cast<bar>(*lit) < boost::any_cast<bar>(*rit) ) return true;
if ( boost::any_cast<bar>(*rit) < boost::any_cast<bar>(*lit) ) return false;
} /* etc; you will need to enumerate all the types you can insert */
}
return false; // all elements compared equal
} };
typedef std::map< int, std::set< tuple, compare_tuple > > main_map;
|
2,416,932 | 2,416,954 | Virtual Inheritance : Base Ctor not calling in Most Derived Class? | class Base
{
public:
Base(){}
Base(int k):a(k)
{
}
int a;
};
class X:virtual public Base
{
public:
X():Base(10){}
int x;
};
class Y:virtual public Base
{
public:
Y():Base(10){}
int y;
};
class Z:public X,public Y
{
public:
Z():X(10){}
};
int main()
{
Z a;
cout << a.a;
return 1;
}
In the above case, for Z():X(10){} Base(int k):a(k) is not calling, but when i change to Z():Base(10){} the Base(int k):a(k) is called. Why ?
Thank you.
| Because you used the virtual keyword - that's exactly what it does.
You have to explicitly initialize Base in the initializer list of Z in order to disambiguate between the initialization in X and the initalization in Y.
|
2,417,195 | 2,417,536 | What's the shortest code to write directly to a memory address in C/C++? | I'm writing system-level code for an embedded system without memory protection (on an ARM Cortex-M1, compiling with gcc 4.3) and need to read/write directly to a memory-mapped register. So far, my code looks like this:
#define UART0 0x4000C000
#define UART0CTL (UART0 + 0x30)
volatile unsigned int *p;
p = UART0CTL;
*p &= ~1;
Is there any shorter way (shorter in code, I mean) that does not use a pointer? I looking for a way to write the actual assignment code as short as this (it would be okay if I had to use more #defines):
*(UART0CTL) &= ~1;
Anything I tried so far ended up with gcc complaining that it could not assign something to the lvalue...
| I'd like to be a nitpick: are we talking C or C++ ?
If C, I defer to Chris' answer willingly (and I'd like the C++ tag to be removed).
If C++, I advise against the use of those nasty C-Casts and #define altogether.
The idiomatic C++ way is to use a global variable:
volatile unsigned int& UART0 = *((volatile unsigned int*)0x4000C000);
volatile unsigned int& UART0CTL = *(&UART0 + 0x0C); // pointer arithmetic in ints: 0x0C * sizeof(int) == 0x30 bytes
I declare a typed global variable, which will obey scope rules (unlike macros).
It can be used easily (no need to use *()) and is thus even shorter!
UART0CTL &= ~1; // no need to dereference, it's already a reference
If you want it to be pointer, then it would be:
volatile unsigned int* const UART0 = reinterpret_cast<volatile unsigned int*>(0x4000C000); // Note the const to prevent rebinding
But what is the point of using a const pointer that cannot be null? Semantically, this is exactly what references were created for.
|
2,417,317 | 2,417,499 | Serialization over Pipes | I wrote several simulation programs in C++ and want to connect their outputs/inputs with pipes (best solution would probably be to use the C++ streams).
For this I would like to serialize some objects (for example the simulations output/input are tensors and matrices). How should I handle this problem? I searched around for some time for serialization + pipes on google but was not really able to find anything about this...
Any tips on a starting point? Is it easy for example to use the boost/serialization to exchange objects with pipes? Generally I would be pleased, if serialization is text-based (to avoid rounding errors I would send hex-values rather than decimal), so I could redirect the output streams into textfiles, split them up etc... Also this way I could probably write my simulations in different programs...
Thx for all the answers!
| Check these libraries:
http://en.wikipedia.org/wiki/Thrift_(protocol)
http://code.google.com/apis/protocolbuffers/
|
2,417,484 | 2,417,527 | Winsock only sending data at program close | I have a c++/windows program that receives data from another c++ program via a WM_COPYDATA message. It is then supposed to use Sockets/winsock to send this message on to a server written in Java. The client connects to the server fine, but it doesn't seem to be able to send messages in a timely fashion. However, once the client is closed down, all the messages it should have been sending get sent in one big lump. Here is an example of the terminal output of the Java server:
Server Starting up.
Client Accepted.
hi from clienttesttesttesttesttesttesttesttesttesttesttesttesttesttest
the first two lines are output by the Java server when those events happen. The last line is messages from the client. The client sends "hi from client" right after winsock is initialized, and then "test" at various points later in the program as it receives data from the other c++ program via WM_COPYDATA messages.
Here is the Java server code:
BufferedReader in = new BufferedReader(new InputStreamReader(
clientSocket.getInputStream()));
String incomingLine;
while((incomingLine = in.readLine()) != null)
System.out.println(incomingLine);
Here is the c++ function where the messages are sent:
void sendDataWinsock(char* text){
int result = send(ConnectSocket,text,(int)strlen(text),0);
}
And here is a section of WndProc where the WM_COPYDATA messages are processed:
case WM_COPYDATA:
sendDataWinsock("test");
break;
Does anyone know why it is doing this? It is as if the client program is adding all these messages to a queue of things it should be sending, but is too busy to send them immediately, and so only sends them as the program is closing down, when it no longer has to process Windows messages. Or, I suppose, the error could actually be in the Java code - I am fairly new to this.
| You are reading lines on the server, but you are not sending lines.
That means your server sits there, receiving data but waiting to return a line of text to your program from readLine(), which never happens since no newline (\n) is ever sent.
When the client exits, the connection closes and readLine() gives you back the data it read thus far. The fix is to terminate each message sent by the C++ client with a \n.
2,417,494 | 2,417,555 | Passing a string variable as a ref between a c# dll and c++ dll | I have a c# dll and a c++ dll . I need to pass a string variable as reference from c# to c++ . My c++ dll will fill the variable with data and I will be using it in C# how can I do this. I tried using ref. But my c# dll throwed exception . "Attempted to read or write protected memory. ... This is often an indication that other memory is corrupt". Any idea on how this can be done
| As a general rule you use StringBuilder for reference or return values and string for strings you don't want/need to change in the DLL.
StringBuilder corresponds to LPTSTR and string corresponds to LPCTSTR
C# function import:
[DllImport("MyDll.dll", CharSet = CharSet.Auto, SetLastError = true)]
public static extern void GetMyInfo(StringBuilder myName, out int myAge);
C++ code:
__declspec(dllexport) void GetMyInfo(LPTSTR myName, int *age)
{
*age = 29;
_tcscpy(myName, _T("Brian"));
}
C# code to call the function:
StringBuilder name = new StringBuilder(512);
int age;
GetMyInfo(name, out age);
|
2,417,583 | 2,417,628 | How to perform Cross-Platform Asynchronous File I/O in C++ | I am writing an application needs to use large audio multi-samples, usually around 50 mb in size. One file contains approximately 80 individual short sound recordings, which can get played back by my application at any time. For this reason all the audio data gets loaded into memory for quick access.
However, when loading one of these files, it can take many seconds to put into memory because I need to read a large amount of data with ifstream, meaning my program GUI is temporarily frozen. I have tried memory mapping my file but this causes huge CPU spikes and a mess of audio every time I need to jump to a different area of the file, which is not acceptable.
So this has led me to think that performing an Asynchronous file read will solve my problem, that is the data gets read in a different process and calls a function on completion. This needs to be both compatible for Mac OS X and Windows and in C++.
EDIT: Don't want to use the Boost library because I want to keep a small code base.
| boost has an asio library, which I've not used before (it's not on NASA's list of approved third-party libraries).
My own approach has been to write the file reading code twice, once for Windows, once for the POSIX aio API, and then just pick the right one to link with.
For Windows, use OVERLAPPED (you have to enable it in the CreateFile call, then pass an OVERLAPPED structure when you read). You can either have it set an event on completion (ReadFile) or call a completion callback (ReadFileEx). You'll probably need to change your main event loop to use MsgWaitForMultipleObjectsEx so you can either wait for the I/O events or allow callbacks to run, in addition to receiving WM_ window messages. MSDN has documentation for these functions.
For Linux, there's either fadvise plus epoll, which will use the readahead cache, or aio_read, which will allow actual async read requests. You'll get a signal when the request completes, which you should use to post an XWindows message and wake up your event processing loop.
Both are a little different in the details, but the net effect is the same -- you request a read which completes in the background, then your event dispatch loop gets woken up when the I/O finishes.
|
2,417,588 | 2,417,770 | Escaping a C++ string | What's the easiest way to convert a C++ std::string to another std::string, which has all the unprintable characters escaped?
For example, for the string of two characters [0x61,0x01], the result string might be "a\x01" or "a%01".
| Take a look at the Boost's String Algorithm Library. You can use its is_print classifier (together with its operator! overload) to pick out nonprintable characters, and its find_format() functions can replace those with whatever formatting you wish.
#include <iostream>
#include <boost/format.hpp>
#include <boost/algorithm/string.hpp>
struct character_escaper
{
template<typename FindResultT>
std::string operator()(const FindResultT& Match) const
{
std::string s;
for (typename FindResultT::const_iterator i = Match.begin();
i != Match.end();
i++) {
s += str(boost::format("\\x%02x") % static_cast<int>(*i));
}
return s;
}
};
int main (int argc, char **argv)
{
std::string s("a\x01");
boost::find_format_all(s, boost::token_finder(!boost::is_print()), character_escaper());
std::cout << s << std::endl;
return 0;
}
|
2,417,608 | 2,417,637 | is c++ STL algorithms and containers same across platforms and performance? | After learning good amount of c++, i'm now into STL containers and algorithms template library, my major concerns are,
1) Is this library same across different platforms like MS, linux n other os?
2) will quality or efficiency of program c++ module decrease with more use of STL containers and algorithms, i think i can't customize it to all needs.
3) Is this template library good to use in linux system programming, kernel modules?
4) lastly can i use this in programming contests, because it relives a lot of coding and pressure off shoulders.
|
1) Is this library same across different platforms like MS, linux n other os?
No. Except the standardized interface, the implementations are all different for each compiler suite, and sometimes they also provide custom extensions such as hash_map.
2) will quality or efficiency of program c++ module decrease with more use of STL containers and algorithms, i think i can't customize it to all needs.
I don't think so. The quality and efficiency should be much better than cooking one up on your own, unless you have very special needs.
3) Is this template library good to use in linux system programming, kernel modules?
Better use C for the kernel.
4) lastly can i use this in programming contests, because it relives a lot of coding and pressure off shoulders.
Depends on the rules of the contests.
|
2,417,614 | 2,417,729 | end of istream not detected when expected | I wrote a function to count vowels. If there is a vowel at the end of the stream it gets counted twice. Why?
#include <iostream>
#include <string>
#include <map>
using namespace std;
void countChars(istream& in, string theChars, ostream& out) {
map<char, int> charMap;
map<char, int>::iterator mapIt;
for (string::iterator i = theChars.begin(); i != theChars.end(); ++i) {
charMap[*i] = 0;
}
while (in) {
char c;
in >> c;
c = tolower(c);
if (charMap.count(c))
++charMap[c];
}
for (mapIt = charMap.begin(); mapIt != charMap.end(); ++mapIt) {
out << (*mapIt).first << ":" << (*mapIt).second << endl;
}
}
int main(int argc, char **argv) {
std::string s = "aeiou";
countChars(std::cin, s, std::cout);
}
| Because in evaluates as false when the last read failed due to running out of data, not because the next read would fail due to running out of data. It doesn't "look ahead", it only knows that the stream is finished if it has previously tried and failed to read.
So the following happens:
last char is read and processed
in evaluates as true, so the loop repeats
you try to read again, but there is no more data, so c is not modified
by undefined (although unsurprising) behavior, c happens to contain the value it had in the last run of the loop
hence, you process the same char again.
You should write:
char c;
while (in >> c) { etc }
|
2,417,697 | 2,419,231 | gluPerspective was removed in OpenGL 3.1, any replacements? | I'm trying to read some OpenGL tutorials on the net. the problem is that I found some old ones that use gluPerspective(). gluPerspective was deprecated in OpenGL 3.0 and removed in 3.1.
What function can I use instead?
I'm using C++ with latest FreeGlut installed.
| You have to compute the matrix manually and then pass it to OpenGL.
Computing the matrix
This snippet of code is based on the gluPerspective documentation.
void BuildPerspProjMat(float *m, float fov, float aspect,
float znear, float zfar)
{
float f = 1/tan(fov * PI_OVER_360); // PI_OVER_360 is assumed to be defined as pi/360 (fov is in degrees)
m[0] = f/aspect;
m[1] = 0;
m[2] = 0;
m[3] = 0;
m[4] = 0;
m[5] = f;
m[6] = 0;
m[7] = 0;
m[8] = 0;
m[9] = 0;
m[10] = (zfar + znear) / (znear - zfar);
m[11] = -1;
m[12] = 0;
m[13] = 0;
m[14] = 2*zfar*znear / (znear - zfar);
m[15] = 0;
}
There is a C++ library called OpenGL Mathematics that may be useful.
Loading the Matrix in OpenGL 3.1
I am still new to the OpenGL 3.1 API, but you need to update a matrix on the GPU and then make use of it in your vertex shader to get the proper perspective. The following code just loads the matrix using glUniformMatrix4fv onto the video card.
{
glUseProgram(shaderId);
glUniformMatrix4fv(glGetUniformLocation(shaderId, "u_proj_matrix"),
1, GL_FALSE, theProjectionMatrix);
RenderObject();
glUseProgram(0);
}
A simple vertex shader from a random blog (found through stack overflow).
attribute vec4 a_position;
attribute vec4 a_color;
varying vec4 v_color;
uniform mat4 u_proj_matrix;
uniform mat4 u_model_matrix;
void main() {
mat4 mvp_matrix = u_proj_matrix * u_model_matrix;
v_color = a_color;
gl_Position = mvp_matrix * a_position;
}
|
2,417,859 | 2,417,997 | Template mutual dependence | template
struct A{
typedef float atype;
typedef typename tB::btype typeB;
};
template <typename tA>
struct B{
typedef float btype;
typedef typename tA::atype typeA;
};
struct MyB;
struct MyA: public A<MyB>{};
struct MyB: public B<MyA>{};
int main(int argc, char *argv[])
{
}
does not compile because
"main.cpp:6: error: invalid use of incomplete type ‘struct MyB’".
Basically the compiler cannot resolve the loop because the definition
of A depends on the definition of B and vice versa.
Is there a way to sort this out?
thanks,
| This cannot be resolved directly. There was a very similar question (although not involving templates) posted yesterday: C++: How can I avoid "invalid covariant return type" in inherited classes without casting?
Your two options are to redesign your solutions so that this cross-dependency is no longer required, or to redesign your solution so that you are only using the template parameters to declare references and pointers (because their declarations do not require that a complete definition of the type is available at the time).
|
2,418,157 | 2,418,175 | C++ error: undefined reference to 'clock_gettime' and 'clock_settime' | I am pretty new to Ubuntu, but I can't seem to get this to work. It works fine on my school computers and I don't know what I am not doing. I have checked usr/include and time.h is there just fine. Here is the code:
#include <iostream>
#include <time.h>
using namespace std;
int main()
{
timespec time1, time2;
int temp;
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time1);
//do stuff here
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time2);
return 0;
}
I am using CodeBlocks as my IDE to build and run as well. Any help would be great, thank you.
| Add -lrt to the end of g++ command line. This links in the librt.so "Real Time" shared library.
|
2,418,170 | 2,418,748 | 'long long int' is interpreted as 'long int'. How do I get round this? | I'm working on a project involving C programming for my mathematics course at university.
I need to be able to handle large integers, larger than those that can be stored in a 'long int' datatype. So I tried using 'long long int', but if I try something like this:
long long int number;
number = 10000000000;
Then the error message says 'error: integer constant too large for "long" type'.
I've tried other datatypes like '___int64' and 'int_64t' I've tried including all the standard c libraries and I still get the same problem.
Strangely, when I try 'printf("LLONG_MAX = %lld\n", LLONG_MAX);', I get this:
LLONG_MAX = -1
I'm using Codeblocks 8.02 on windows xp, but I'm not sure what version of gcc compiler is installed since I'm using network computers on campus and I don't have permission to access the main filesystem. I don't want to have to bring my laptop into campus everyday. Please help! Thanks
| In Microsoft environment use printf with this syntax :
__int64 i64 = 10000000000;
unsigned __int64 u64 = 10000000000000000000;
printf ( "%I64d\n", i64 );
printf ( "%I64u\n", u64 );
printf ( "%I64d\n", u64 ); <-- note this typo
|
2,418,244 | 2,418,459 | Basic polynomial reading using linked lists | Ok, after failing to read a polynomial, I'm trying first a basic approach to this.
So I have a class polinom with functions read and print:
#ifndef _polinom_h
#define _polinom_h
#include <iostream>
#include <list>
#include <cstdlib>
#include <conio.h>
using namespace std;
class polinom
{
class term
{
public:
double coef;
int pow;
term(){
coef = 0;
pow = 0;
}
};
list<term> poly;
list<term>::iterator i;
public:
void read(int id)
{
term t;
double coef = 1;
int pow = 0;
int nr_term = 1;
cout << "P" << id << ":\n";
while (coef != 0) {
cout << "Term" << nr_term << ": ";
cout << "coef = ";
cin >> coef;
if (coef == 0) break;
cout << " grade = ";
cin >> pow;
t.coef = coef;
t.pow = pow;
if (t.coef != 0) poly.push_back(t);
nr_term++;
}
}
void print(char var)
{
for (i=poly.begin() ; i != poly.end(); i++ ) { //going through the entire list to retrieve the terms and print them
if (poly.size() < 2) {
if (i->pow == 0) //if the last term's power is 0 we print only it's coefficient
cout << i->coef;
else if (i->pow == 1) {
if (i->coef == 1)
cout << var;
else if (i->coef == -1)
cout << "-" << var;
else
cout << i->coef << var;
}
else
cout << i->coef << var << "^" << i->pow; //otherwise we print both
}
else {
if (i == poly.end()) { // if we reached the last term
if (i->pow == 0) //if the last term's power is 0 we print only it's coefficient
cout << i->coef;
else if (i->pow == 1)
cout << i->coef << var;
else
cout << i->coef << var << "^" << i->pow; //otherwise we print both
}
else {
if (i->coef > 0) {
if (i->pow == 1)//if the coef value is positive
cout << i->coef << var << " + "; //we also add the '+' sign
else
cout << cout << i->coef << var << "^" << i->pow << " + ";
}
else {
if (i->pow == 1)//if the coef value is positive
cout << i->coef << var << " + "; //we also add the '+' sign
else
cout << cout << i->coef << var << "^" << i->pow << " + ";
}
}
}
}
}
};
#endif
Well, it works when reading only one term, but when reading more the printed coefficients are some random values, and also after the last term it prints '+' or '-' when it shouldn't.
So any idea what's wrong?
Thanks!
FINAL UPDATE
Ok, i made it work perfectly by modifying Bill's code so thanks a lot Bill and everyone else who commented or answered!
Here's the final print function:
void print(char var)
{
list<term>::iterator endCheckIter;
for (i=poly.begin() ; i != poly.end(); i++ )
{
//going through the entire list to retrieve the terms and print them
endCheckIter = i;
++endCheckIter;
if (i->pow == 0)
cout << i->coef;
else if (i->pow == 1)
cout << i->coef << var;
else
cout << i->coef << var << "^" << i->pow;
if (endCheckIter != poly.end()) {
if (endCheckIter->coef > 0)
cout << " + ";
else {
cout << " - ";
endCheckIter->coef *= -1;
}
}
}
}
| if (i == poly.end()) { // if we reached the last term
This comment shows your error. For any given collection of items, items.end() returns the entry after the last item.
For instance, say I have a 5-item std::vector:
[0] [1] [2] [3] [4]
Then begin() points to:
[0] [1] [2] [3] [4]
/\
And end() points to:
[0] [1] [2] [3] [4] []
/\
Your for loop, it looks like:
for (i=poly.begin() ; i != poly.end(); i++ )
Note that comparing i to poly.end() happens before i is used. As soon as i == poly.end(), you're done.
Your code inside of if (i == poly.end()) { will never be executed because this can never be true.
You can test for the end using the following:
// get access to the advance function
#include <iterator>
....
std::list<term>::iterator endCheckIter = i;
std::advance(endCheckIter, 1);
if (endCheckIter == poly.end())
{
...
}
But a simpler way might be:
std::list<term>::iterator endCheckIter = i;
++endCheckIter;
if (endCheckIter == poly.end())
{
...
}
Edit:
I'm not sure why you're getting garbage. Add in your missing braces and handle the non-end case, and everything works here:
void print(char var)
{
list<term>::iterator endCheckIter;
for (i=poly.begin() ; i != poly.end(); i++ )
{ // <- MISSING BRACE
//going through the entire list to retrieve the terms and print them
endCheckIter = i;
++endCheckIter;
cout << i->coef << var << "^" << i->pow; // <- MISSING OUTPUT
if (endCheckIter != poly.end()) {
if (i->coef > 0)
cout << " + ";
else
cout << " - ";
}
} // <- MISSING BRACE
}
|
2,418,588 | 2,418,742 | Trying to switch a texture when player dies (OpenGL + C++) | I'm creating a 2D game, and when the player dies I want the texture to switch to another (to show an explosion). I also want the game to pause for a second or two so the user can see that the texture has changed.
My textures are loading correctly because I can apply one to a shape, and I can see it if I, say, switch it with the player's original texture.
I think that it must be that it is only rendering in one frame and then disappearing or something like that. Here is the code.
void Player::die(){
if(Player::lives > 0){
glPushMatrix();
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, explosionTex);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTranslatef(200, 200, 0.0);
glRotatef(heading, 0,0,1);
glColor3f(1.0,0.0,0.0);
glBegin(GL_POLYGON);
glTexCoord2f(0.0, 1.0); glVertex2f(-40,40);
glTexCoord2f(0.0, 0.0); glVertex2f(-40,-40);
glTexCoord2f(1.0, 0.0); glVertex2f(40,-40);
glTexCoord2f(1.0, 1.0); glVertex2f(40,40);
glEnd();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glPopMatrix();
Sleep ( 1000 );
*xscroll = 0;
*yscroll = 0;
Player::lives--;
Player::XPos = 0;
Player::YPos = 0;
Player::heading = 0;
Player::speed = 0;
}
}
How can I get it to switch texture, display that and then sleep for a time?
| You need to swap your buffers before you Sleep() if you want to see anything.
More generally, replace the Sleep() with a ExplodeStart, which you set to CurrentTimeInMilliseconds(). Then each time through your render loop check if CurrentTimeInMilliseconds()-ExplodeStart > 1000. If it is, switch to your regular player texture again.
|
2,418,739 | 2,424,619 | Empty Win32 Popup Menu | I'm trying to create a dynamic popup menu within my application, the generation code I use is something like that :
HMENU menu;
POINT pt;
menu = CreatePopupMenu();
SetForegroundWindow( receivingWindow );
GetCursorPos( &pt );
int i = 19;
AppendMenu( menu, MF_STRING, i++, _TEXT("meh meh") );
AppendMenu( menu, MF_STRING, i++, _TEXT("testo") );
AppendMenu( menu, MF_STRING, i++, _TEXT("foobar foobar") );
TrackPopupMenuEx( menu
, 0
, pt.x, pt.y
, receivingWindow
, NULL );
DestroyMenu( menu );
_TEXT is used to ensure text is in Unicode and receivingWindow is a Layered window created before and working well.
When calling TrackPopupMenuEx the menu is displayed with the good size and at the good position, but absolutely no text appear in the popup menu. Did someone got an idea why, and how to fix this problem?
EDIT: more information regarding my environment :
Windows 7 x64
x86 build in Visual Studio 2008
EDIT2: I've tested the same on Windows XP x86, and it works like a charm, and after further test, the menu is well displayed in Windows 7 x64 with the classic look.
| I found a workaround for this problem. Instead of using my main window (receivingWindow), I'm using a message only window to receive the event. For a reason that I don't understand, the text is displayed normally this way.
|
2,418,776 | 2,419,006 | CreateDIBSection leaving 'Not enough storage' error, but seems to still work anyway | Whenever my app tries to create a DIB section, either by calling CreateDIBSection(), or by calling LoadImage() with the LR_CREATEDIBSECTION flag, it seems to return successfully. The HBITMAP it returns is valid, and I can manipulate and display it just fine.
However, calls to GetLastError() will return 8: Not enough storage is available to process this command. This happens from the very first call to the last. The size of the bitmap requested seems inconsequential; 800x600 or 16x16, same result. Immediately prior to the function call, GetLastError() returns no error; additionally, calling SetLastError(0) before the function call has the same result.
I have found other people asking similar questions, but it either turns out they are using CreateCompatibleBitmap() and the problem goes away when they switch to CreateDIBSection(), or they are already using CreateDIBSection() and the result it returns is invalid and so is not working at all.
Since things seem to be working, I have thought I could just ignore it (and call SetLastError(0) after calls to either function), but there could be some subtle problem I am overlooking by doing so.
And of course, here's some of the basic code I'm using. First, the call to LoadImage(), which is part of a basic bitmap class that I use for a lot of things, and which I simplified quite a bit to show the more relevant aspects:
bool Bitmap::Load( const char* szBitmapName, /*...*/ )
{
m_hBitmap = (HBITMAP)LoadImage( GetModuleHandle( NULL ), szBitmapName,
IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION | LR_LOADFROMFILE );
//...
}
// ...
Bitmap::~Bitmap()
{
if( m_hBitmap ) DeleteObject( m_hBitmap );
}
bool Bitmap::Draw( HDC hDC, int iDstX, int iDstY, int iDstWidth,
int iDstHeight, int iSrcX, int iSrcY, int iSrcWidth,
int iSrcHeight, bool bUseMask ) const
{
HDC hdcMem = CreateCompatibleDC( hDC );
if( hdcMem == NULL ) return false;
HBITMAP hOld = (HBITMAP)SelectObject( hdcMem, m_hBitmap );
BLENDFUNCTION blendFunc;
blendFunc.BlendOp = AC_SRC_OVER;
blendFunc.BlendFlags = 0;
blendFunc.AlphaFormat = AC_SRC_ALPHA;
blendFunc.SourceConstantAlpha = 255;
AlphaBlend( hDC, iDstX, iDstY, iDstWidth, iDstHeight, hdcMem, iSrcX,
iSrcY, iSrcWidth, iSrcHeight, blendFunc );
SelectObject( hdcMem, hOld );
DeleteDC( hdcMem );
}
Calls to CreateDIBSection are typically done when updating a layered window:
HDC hDCScreen( GetDC(0) );
POINT tSourcePos = { 0, 0 };
HDC hDCSource( CreateCompatibleDC( hDCScreen ) );
// iWidth and iHeight are used both for the bitmap size and window size
// to keep this example simpler
BITMAPINFO bi32 = {0};
bi32.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bi32.bmiHeader.biWidth = iWidth;
bi32.bmiHeader.biHeight = iHeight;
bi32.bmiHeader.biPlanes = 1;
bi32.bmiHeader.biBitCount = 32;
bi32.bmiHeader.biCompression = BI_RGB;
void* pBits = NULL;
HBITMAP hBitmap = CreateDIBSection(NULL, &bi32, DIB_RGB_COLORS,
(void**)&pBits, NULL, NULL);
HBITMAP hOldBitmap = (HBITMAP)SelectObject( hDCSource, hBitmap );
POINT tWindowPos = { 0, 0 };
SIZE tWindowSize = { iWidth, iHeight };
BLENDFUNCTION blendFunction = {0};
blendFunction.BlendOp = AC_SRC_OVER;
blendFunction.SourceConstantAlpha = 255;
blendFunction.AlphaFormat = AC_SRC_ALPHA;
DWORD iFlags( ULW_ALPHA );
// m_tBitmap is an instance of Bitmap class previously mentioned
m_tBitmap.Draw( hDCSource, 0, 0, iWidth, iHeight, 0, 0, iWidth, iHeight );
UpdateLayeredWindow( GetHandle(), hDCScreen, &tWindowPos, &tWindowSize,
hDCSource, &tSourcePos, 0, &blendFunction, iFlags );
SelectObject( hDCSource, hOldBitmap );
DeleteObject( hBitmap );
DeleteDC( hDCSource );
ReleaseDC( 0, hDCScreen );
Any pointers about anything I'm completely off-base about would be appreciated.
| I think you aren't following what the documentation says (from CreateDIBSection):
If the function succeeds, the return value is a handle to the newly created DIB, and *ppvBits points to the bitmap bit values.
If the function fails, the return value is NULL, and *ppvBits is NULL. This function can return the following value. [...]
If the return value is not NULL, the function succeeded. Calling GetLastError won't necessarily return any reliably meaningful information on success (from GetLastError):
If the function is not documented to set the last-error code, the value returned by this function is simply the most recent last-error code to have been set; some functions set the last-error code to 0 on success and others do not.
|
2,418,777 | 2,714,580 | Compiling boost on Sunos | I have just started using the Boost libraries.
For one of our projects I want to compile Boost 1.39.0 on SunOS using the Sun compiler. However, if I compile it using the steps mentioned in http://www.boost.org/doc/libs/1_39_0/more/getting_started/unix-variants.html, not all of the targets are compiled. Can someone provide resources which would be helpful for compiling it on SunOS? Is there a separate set of instructions for compiling on SunOS?
| The SunOS compiler is notorious for not having conformant libraries and compilation. But we do have at least one tester that uses the platform (see Sandia-sun tester). And from what you can see there are many failures in the toolset. As for setting it up the key thing to do is not use the standard STD lib, but use the STLport STD lib. As you can see from the description of the setup of the Sandia-sun tester (see Sandia-sun info). What you can do is try and mirror that setup. First would be to create a user-config.jam that has the "using sun ..." part of that setup. And when you build you should build with something similar to: bjam sun-5.10 stdlib=sun-stlport address-model=64 .
|
2,418,841 | 2,418,908 | Differences among including <xstring>, <cstring>, <string> and <wstring> in C++ | I have seen the following #include directives:
#include <xstring>
#include <cstring>
#include <string>
#include <wstring>
What are the differences among these include directives? Did I miss any others that should be considered part of this group?
| <string> is where std::string is defined.
<xstring> is a Microsoft C++ header containing the actual implementation of the std::basic_string template. You never need to include <xstring> yourself. <string> includes it for the basic_string implementation.
<cstring> is the standard C string library (strcpy, strcat, etc) placed into the C++ std namespace.
wstring is not a header file that I'm aware of. std::wstring is the wchar_t version of std::string and is defined when including <string>.
|
2,418,921 | 2,418,951 | How do I get CPU Clock Speed in C++ (Linux)? | How can I get the CPU clock speed in C++?
I am running Ubuntu 9.10 if that makes any difference.
| Read the pseudo-file /proc/cpuinfo. See this link for an explanation of the fields it contains.
|
2,419,068 | 2,419,814 | How to debug a segmentation fault while the gdb stack trace is full of '??'? | My executable contains a symbol table. But it seems that the stack trace has been overwritten.
How can I get more information out of that core? For instance, is there a way to inspect the heap? Seeing the object instances populating the heap might give some clues. Any idea is appreciated.
| I am a C++ programmer for a living and I have encountered this issue more times than I like to admit. Your application is smashing a HUGE part of the stack. Chances are the function that is corrupting the stack is also crashing on return. The reason is that the return address has been overwritten, and this is why GDB's stack trace is messed up.
This is how I debug this issue:
1) Step through the application until it crashes. (Look for a function that is crashing on return.)
2)Once you have identified the function, declare a variable at the VERY FIRST LINE of the function:
int canary=0;
(The reason why it must be the first line is that this value must be at the very top of the stack. This "canary" will be overwritten before the function's return address.)
3) Put a variable watch on canary, step through the function, and when canary!=0 you have found your buffer overflow! Another possibility is to put a variable breakpoint for when canary!=0 and just run the program normally; this is a little easier, but not all IDEs support variable breakpoints.
EDIT: I have talked to a senior programmer at my office and in order to understand the core dump you need to resolve the memory addresses it has. One way to figure out these addresses is to look at the MAP file for the binary, which is human readable. Here is an example of generating a MAP file using gcc:
gcc -o foo -Wl,-Map,foo.map foo.c
This is a piece of the puzzle, but it will still be very difficult to obtain the address of function that is crashing. If you are running this application on a modern platform then ASLR will probably make the addresses in the core dump useless. Some implementation of ASLR will randomize the function addresses of your binary which makes the core dump absolutely worthless.
|
2,419,107 | 2,419,136 | Replacing special characters from HTML source | I'm new to HTML coding and I know HTML has some reserved characters for its use, and it also displays some characters by their character code. For example:
&OElig; is Œ
&copy; is ©
&reg; is ®
I have the HTML source in a std::string. How can I decipher these entities into their actual form and replace them in the std::string? Is there any library with source available, or can it be done using preprocessor macros?
| I would recommend using some HTML/XML parser that can automatically do the conversion for you. Parsing HTML correctly by hand is extremely difficult. If you insist on doing it yourself, Boost String Algorithms library provides useful replacement functions.
|
2,419,137 | 2,419,161 | Is filling memory with non zero values slower than filling it with zeros? | I'm not an expert on how processors work, but one might imagine that it is easier to set chunks of memory to zero than to non-zero values, and so it may be marginally faster.
| I think the only difference would be in setting up the register that has the value to store to memory. Some processors have a register that's fixed at zero (ia64 for example). Even so, whatever minuscule overhead there might be for setting up a register will be monstrously dwarfed by the writing to memory.
As far as the time to actually write to the memory - that'll be clocked the same on all architectures I'm familiar with.
|
2,419,562 | 2,419,597 | Convert seconds to Days, Minutes and Seconds | Hey everyone. I'm continuing to learn C++ and I've been set the 'challenge' of converting seconds into a days, minutes and seconds format.
For example: 31600000 = 365 days, 46 minutes, 40 seconds.
using namespace std;
const int hours_in_day = 24;
const int mins_in_hour = 60;
const int secs_to_min = 60;
long input_seconds;
cin >> input_seconds;
long seconds = input_seconds % secs_to_min;
long minutes = input_seconds / secs_to_min % mins_in_hour;
long days = input_seconds / secs_to_min / mins_in_hour / hours_in_day;
cout << input_seconds << " seconds = "
<< days << " days, "
<< minutes << " minutes, "
<< seconds << " seconds ";
return 0;
It works and comes up with the correct answer but after completing it I looked at how other people had tackled it and theirs was different. I'm wondering If I'm missing something.
Thanks, Dan.
| One of the things about programming is that there is never just one way to do something. In fact if I were to set my mind to it, I might be able to come up with a dozen completely different ways to accomplish this. You're not missing anything if your code meets requirements.
For your amusement, here's a way to format up hours:minutes:seconds under Windows (elapsed is a double & represents number of seconds elapsed since... something)
sprintf_s<bufSize>(buf, "%01.0f:%02.0f:%02.2f", floor(elapsed/3600.0), floor(fmod(elapsed,3600.0)/60.0), fmod(elapsed,60.0));
|
2,419,601 | 2,419,835 | Vim, C++, look up member function | I am using vim 7.x
I am using alternate file.
I have a mapping of *.hpp <--> *.cpp
Suppose I'm in
class Foo {
void some_me#mber_func(); // # = my cursor
}
in Foo.hpp
is there a way to tell vim to do the following:
Grab word under # (easy: expand("<cword>"))
Look up the class I'm inside of ("Foo") <-- I have no idea how to do this
Append 1 & 2 (easy: using ".") --> "Foo::some_member_func"
4: Switch files (easy, :A)
Do a / on 4
So basically, I can script all of this together, except the "find the name of the enclosing class I'm in part (especially if classes are nested).
I know about ctags. I know about cscope. I'm choosing to not use them -- I prefer solutions where I understand where they break.
| This is relatively easy to do crudely and very difficult to do well. C and C++ are rather complex languages to parse reliably. At the risk of being downvoted, I'd personally recommend parsing the tags file generated by ctags, but if you really want to do it in Vim, there are a few of options for the "crude" method.
Make some assumptions. The assumptions you make depend on how complicated you want it to be. At the simplest level: assume you're in a class definition and there are no other nearby braces. Based on your coding style, assume that the opening brace of the class definition is on the same line as "class".
let classlineRE = '^class\s\+\(\k\+\)\s\+{.*'
let match = search(classlineRE, 'bnW')
if match != 0
let classline = getline(match)
let classname = substitute(classline, classlineRE, '\1', '')
" Now do something with classname
endif
The assumptions model can obviously be extended/generalised as much as you see fit. You can just search back for the brace and then search back for class and take what's in between (to handle braces on a separate line to "class"). You can filter out comments. If you want to be really clever, you can start looking at what level of braces you're in and make sure it's a top level one (go to the start of the file, add 1 every time you see '{' and subtract one every time you see '}' etc). Your vim script will get very very very complicated.
Another one risking the downvote, you could use one of the various C parsers written in python and use the vim-python interface to make it act like a vim script. To be honest, if you're thinking of doing this, I'd stick with ctags/cscope.
Use rainbow.vim. This does highlighting based on depth of indentation, so you could be a little clever and search back (using search('{', 'bW') or similar) for opening braces, then interrogate the syntax highlighting of those braces (using synIDattr(synID(line("."), col("."),1), "name")) and if it's hlLevel0, you know it's a top-level brace. You can then search back for class and parse as per item 1.
I hope that all of the above gives you some food for thought...
|
2,419,650 | 2,419,720 | C/C++ macro/template blackmagic to generate unique name | Macros are fine.
Templates are fine.
Pretty much whatever works is fine.
The example is OpenGL; but the technique is C++ specific and relies on no knowledge of OpenGL.
Precise problem:
I want an expression E, where I do not have to specify a unique name, such that a constructor is called where E is defined, and a destructor is called where the block containing E ends.
For example, consider:
class GlTranslate {
public:
    GlTranslate(float x, float y, float z) {
        glPushMatrix();
        glTranslatef(x, y, z);
    }
    ~GlTranslate() { glPopMatrix(); }
};
Manual solution:
{
GlTranslate foo(1.0, 0.0, 0.0); // I had to give it a name
.....
} // auto popmatrix
Now, I have this not only for glTranslate, but lots of other PushAttrib/PopAttrib calls too. I would prefer not to have to come up with a unique name for each var. Is there some trick involving macros, templates ... or something else that will automatically create a variable whose constructor is called at the point of definition, and whose destructor is called at the end of the block?
Thanks!
| If your compiler supports __COUNTER__ (it probably does), you could try:
// boiler-plate
#define CONCATENATE_DETAIL(x, y) x##y
#define CONCATENATE(x, y) CONCATENATE_DETAIL(x, y)
#define MAKE_UNIQUE(x) CONCATENATE(x, __COUNTER__)
// per-transform type
#define GL_TRANSLATE_DETAIL(n, x, y, z) GlTranslate n(x, y, z)
#define GL_TRANSLATE(x, y, z) GL_TRANSLATE_DETAIL(MAKE_UNIQUE(_trans_), x, y, z)
For
{
GL_TRANSLATE(1.0, 0.0, 0.0);
// becomes something like:
GlTranslate _trans_1(1.0, 0.0, 0.0);
} // auto popmatrix
|
2,419,805 | 2,419,830 | When did "and" become an operator in C++ | I have some code that looks like:
static const std::string and(" AND ");
This causes an error in g++ like so:
Row.cpp:140: error: expected unqualified-id before '&&' token
so after cursing the fool that defined "and" as &&, I added
#ifdef and
#undef and
#endif
and now I get
Row.cpp:9:8: error: "and" cannot be used as a macro name as it is an operator in C++
Which leads to my question of WHEN did "and" become an operator in C++? I can't find anything that indicates it is, except of course this message from g++
| There are several such alternative tokens defined in standard C++. Depending on your compiler, you can probably use switches to turn these on/off.
|
2,419,857 | 2,420,060 | Java or C++ for my particular agent-based model (ABM)? | I unfortunately need to develop an agent-based model. My background is C++; I'm decent but not a professional programmer. My goal is to determine whether, my background aside for the moment, the following kind of algorithm would be faster or dramatically easier to write in C++ or Java.
My agents will be of class Host. Their private member variables include their infection and immune statuses (type int) with respect to different strains. (In C++, I might use an unordered_map or vector to hold this information, depending on the number of strains.) I plan to keep track of all hosts in a vector, vector< Host *> hosts .
The program will need to know at any time all the particular hosts infected with a particular strain or with immunity to a particular strain. For each strain, I could thus maintain two separate structures, e.g., vector< Host *> immune and vector< Host *> infectious (I might make each two-dimensional, indexed by strain and then host).
Hosts can die. It seems like this creates a mess in C++, in that I would have to find the right individual to kill in host and search through the other structures (immune and infectious) to find all pointers to this object. I'm under the impression that Java will delete all these pointers implicitly if I delete the underlying object. Is this true? Is there a dramatically better way to do this in C++ than what I have here?
Thanks in advance for any help.
I should add that if I use C++, I will use smart pointers. That said, I still don't see a slick way to delete all pointers to an object when the object needs to go. (When a host dies, I want to delete it from memory.)
I realize there's a lot to learn in Java. I'm hoping someone with more perspective on the differences between the languages, and who can understand what I need to do (above), can tell me if one language will obviously be more efficient than another.
| I'm under the impression that Java will delete all these pointers implicitly if I delete the underlying object. Is this true?
Nope. You actually have it backwards; if you delete all the pointers, Java will delete the underlying object. So you'll still need to search through all three of your data structures (hosts, immune, and infectious) to kill that particular host.
However, this "search" will be fast and simple if you use the right data structures; a HashSet will do the job very nicely.
private HashSet<Host> hosts;
private HashSet<Host> immune;
private HashSet<Host> infectious;
public void killHost(Host deadManWalking) {
hosts.remove(deadManWalking);
immune.remove(deadManWalking);
infectious.remove(deadManWalking);
}
It's really that simple, and each removal takes place in O(1) average time. (Though you will have to override the equals and hashCode methods in your implementation of Host; this is not technically challenging.)
My memories of C++ are too hazy for me to give any sort of authoritative comparison between the two languages; I did a ton of C++ work in college, haven't touched it since. Will C++ code run faster? Done right and assuming you don't have any memory leaks, I'd suspect it would, though Java's rep as a slow language is mostly a holdover from its youth; it's pretty decent these days. Easier to write? Well, give that you'd be learning the language, probably not. But the learning curve from C++ to Java is pretty gentle, and I personally don't miss C++ at all. Once you know the languages, Java is, in my opinion, vastly easier to work with. YMMV, natch, but it may well be worth the effort for you.
|
2,419,919 | 2,419,970 | A problem with random number generation | I am taking a course on programming, and we're using C++.
We had an assignment where, at some point, we needed to code a function that would return a random number in an [upper, lower] interval. I used the following:
lower + (int) (upper * (rand() / (RAND_MAX + 1.0)));
I did not forget to seed the generator by calling srand((unsigned int) time(0)).
However, I get the same value every time! I asked my professor for help and he, after some investigation, found out that the first number generated by rand() isn't that random... The higher order bits remained unchanged, and since this implementation uses them, the end result isn't quite what I expected.
Is there a more elegant, yet simple solution than to discard the first value or use remainders to achieve what I want?
Thanks a lot for your attention!
~Francisco
EDIT: Thank you all for your input. I had no idea rand() was such a sucky RNG :P
| Given that rand() is not a very strong random number generator, the small amount of bias added by the standard approach is probably not an issue: (higher-lower) needs to be smaller than RAND_MAX of course.
lower + rand() % (higher-lower+1);
fixed off by one error.
|
2,419,928 | 2,496,979 | Where to get pure C++ Lame MP3 encoder - PCM to MP3 example? | So all I need is a simple function that sets it up (eating incoming PCM RATE (for example: rate near to 44100) It's channels (for example: 2) and -bits (for example: 16) and desirable 128 kb\s rate) and another one that takes PCM data and encodes it into pure MP3 frames.
I know it looks like a silly homework task but I assure you - it is not.
I hope it will be of help to all C++ developers starting with MP3s.
So can anybody please help me with that?
| See the example I gave in your other question for the basic usage of Lame. It should contain everything you need.
|
2,420,043 | 2,420,097 | memmove, memcpy, and new | I am making a simple byte buffer that stores its data in a char array acquired with new and I was just wondering if the memcpy and memmove functions would give me anything weird if used on memory acquired with new or is there anything you would recommend doing instead?
| No, they are perfectly fine. new and malloc() are just two different ways in which you can acquire memory on the heap (actually they are quite similar, because new uses malloc() under the hood in most implementations). As soon as you have a valid char* variable in your hand (allocated by new, malloc() or on the stack) it is just a pointer to memory, hence memcpy() and other functions from this family will work as expected.
|
2,420,131 | 2,422,014 | Detect insertion of media into a drive using windows messages | I am currently using WM_DEVICECHANGE to be notified when new USB drives are connected to the computer. This works great for devices like thumb-drives where as soon as the device arrives it is ready to have files read from it. For devices like SD card readers it does not because the message is sent out once when the device is connected but no message is sent when a user actually inserts a card into the device.
Is it possible to detect the insertion of new media into an existing USB device without having to use polling?
| I just did this a few weeks ago. Technically speaking the RegisterDeviceNotification route is the proper way to go, but it requires a decent amount of work to get right. However, Windows Explorer already does all of the hard work for you. Just use SHChangeNotifyRegister with SHCNE_DRIVEADD / SHCNE_DRIVEREMOVED / SHCNE_MEDIAINSERTED / SHCNE_MEDIAREMOVED. Note that this method depends on the Shell Hardware Detection service (or whatever it is called), but it's much easier than trying to re-implement the functionality yourself.
|
2,420,346 | 2,420,382 | C API function callbacks into C++ member function code | So, I'm using the FMOD api and it really is a C api.
Not that that's bad or anything. Its just it doesn't interface well with C++ code.
For example, using
FMOD_Channel_SetCallback( channel, callbackFunc ) ;
It wants a C-style function for callbackFunc, but I want to pass it a member function of a class.
I ended up using the Win32 trick for this, making the member function static. It then works as a callback into FMOD.
Now I have to hack apart my code to make some of the members static, just to account for FMOD's C-ness.
I wonder if its possible in FMOD or if there's a work around to link up the callback to a specific C++ object's instance member function (not a static function). It would be much smoother.
| You cannot directly pass a member function. A member function has the implicit parameter this and C functions don't.
You'll need to create a trampoline (not sure the signature of the callback, so just doing something random here).
extern "C" int fmod_callback( ... args ...)
{
return object->member();
}
One issue is where does that object pointer come from. Hopefully, fmod gives you a generic context value that will be provided to you when your callback is made (you can then pass in the object pointer).
If not, you'll just need to make it a global to access it.
|
2,420,380 | 2,420,432 | XML Parsing: Checking for strings within string C++ | I have written a simple C++ shell program to parse large XML files and fix syntax errors.
I have so far covered everything I can think of except strings within strings, for example.
<ROOT>
<NODE attribute="This is a "string within" a string" />
<ROOT>
My program loops through the entire xml file character by character (keeping only a few characters in memory at a time for efficiency); it looks for things such as & < > etc. and escapes them with &amp; &lt; &gt; etc. A basic example of what I am doing can be found at the accepted answer for this Escaping characters in large XML files
The question is: What conditions or logic can I use to detect "string within" to be able to escape the quotes to this:
<ROOT>
<NODE attribute="This is a &quot;string within&quot; a string" />
<ROOT>
Is it even possible at all?
| I think it's difficult to decide where the attribute ends and another begins. I think you need to restrict the possible input you can parse otherwise you will have ambiguous cases such as this one:
<ROOT>
<NODE attribute="This is a "string within" a string" attribute2="This is another "string within" a string" />
<ROOT>
These are either two attributes or one attribute.
One assumption you could make is that after an equal number of double quotes and an equal sign a new attribute begins. Then you simply replace all the inner double quotes with your escape string. Or any equal sign after 2 ore more double quotes means new attribute. The same could be assumed for the end of node.
|
2,420,484 | 4,666,302 | How can I make gcc understand this template syntax? | I'm trying to use a delegate library in gcc http://www.codeproject.com/KB/cpp/ImpossiblyFastCppDelegate.aspx but the "preferred syntax" is not recognized by gcc 4.3, i.e. how to make the compiler understand the
template < RET_TYPE (ARG1, ARG2) > syntax instead of template ??
TIA
/Rob
| If a class has a template function as:
class A {
public:
template<typename T>
static void doThis() {...}
};
template<typename T>
class B {
public:
static void doThat() {
A::doThis<T>();
}
};
then VC++ recognizes the syntax in class B, but for GCC you have to insert keyword template:
template<typename T>
class B {
public:
static void doThat() {
A::template doThis<T>(); // <-- "template" inserted
}
};
and then it works in both GCC and VC++ (I wrote this off the top of my head, so I'm fairly sure it's correct ;)
|
2,420,496 | 2,437,785 | Drop target - where do I register the COleDropTarget variable if the view class doesn't have OnCreate? | The MSDN site says:
From your view class's function that handles the WM_CREATE message (typically OnCreate), call the new member variable's Register member function. Revoke will be called automatically for you when your view is destroyed.
But I don't have an OnCreate function in the ChildView class.
I do have OnCreate in the CMainFrame class. Can I register it there? What are the ramifications?
PLEASE NOTE: I have it working for dropping files but I want to drop the text as a file, not at a cursor location like a text cut and paste, but rather I want the application to make a buffer to hold it, and I will treat it like a file.
TIA,
Harvey
| Solved:
In using F1 to get the syntax for OnDrop and the others, MSDN gave me:
virtual BOOL OnDrop(
CWnd* pWnd,
COleDataObject* pDataObject,
DROPEFFECT dropEffect,
CPoint point
);
But the correct virtual function does not have the first parameter and should be:
virtual BOOL OnDrop(
COleDataObject* pDataObject,
DROPEFFECT dropEffect,
CPoint point
);
Same with the others. So I was never actually overriding the default functions.
|
2,420,506 | 2,420,687 | Finding cells in a grid with varying grid cell sizes | I have a grid of rectangular cells covering a plane sitting at some distance from the coordinate system origin and would like to identify the grid cell where a straight line starting at the origin would intersect it.
The cells on the grid have equal sizes (dx,dy) and there are no gaps between cells, but since every cell on the plane has a different distance from the origin, the solid angle they cover is not constant -- if it was I could find a simple function that translates a direction (theta,phi) into a cell index (ix,iy).
Currently I use something like a nearest-neighbor search to find cells, but this doesn't exploit the "gridded-ness" of my cells at all. Is there any algorithm that would help me improve on this?
EDIT
I know I could just use simple trigonometry to get the cell, but I am more interested in what algorithms there are that do nearest-neighbor searches on regularly spaced inputs.
|
[...]but I am more interested in what algorithms there are that do nearest-neighbor searches on regularly spaced inputs.
Though they are data structures to be very specific, I think you should take a look at the following:
R Tree
BK Tree
|
2,420,663 | 2,422,653 | windows equivalent of inet_aton | I'm converting some code written for a linux system to a windows system. I'm using C++ for my windows system and wanted to know the equivalent of the function inet_aton.
| Windows supports inet_pton, which has a similar interface to inet_aton (but that works with IPV6 addresses too). Just supply AF_INET as the first parameter, and it will otherwise work like inet_aton.
(If you can change the Linux source, inet_pton will also work there).
|
2,420,777 | 2,420,822 | Is there a way to use thread local variables when using ACE? | I am using ACE threads and need each thread to have its own int member.
Is that possible?
| ACE calls this "Thread Specific Storage". Check this out: ACE_TSS. That's about all I know about it, sorry can't be more help.
The Wikipedia page for thread-local storage says there is a pthreads way to do this too.
|
2,421,197 | 2,421,215 | How do acquire a xml string for a child using msxml4? | Using MSXML4, I am creating and saving an xml file:
MSXML2::IXMLDOMDocument2Ptr m_pXmlDoc;
//add some elements with data
SaveToDisk(static_cast<std::string>(m_pXmlDoc->xml));
I now need to acquire a substring from m_pXmlDoc->xml and save it. For example, if the full xml is:
<data>
<child1>
<A>data</A>
<One>data</One>
<B>data</B>
</child1>
</data>
I want to store this substring instead:
<A>data</A>
<One>data</One>
<B>data</B>
How do I get this substring using MXML4?
| Use XPath queries. See the MSDN documentaion for querying nodes. Basically you need to call the selectNodes API with the appropriate XPath expression that matches the part of the DOM you are interested in.
// Query a node-set.
MSXML2::IXMLDOMNodeListPtr pnl = pXMLDom->selectNodes(L"//child1/*");
|
2,421,219 | 2,421,250 | Best C++ static & run time tools | Apologies if I missed this question already, but I searched and couldn't find it.
I have been out the C/C++ world for a little while and am back on a project. I was wondering what tools are preferred today to help with development.
The types of tools I'm referring to are:
Purify
Electric Fence
PC-Lint
cscope
Thanks!
| You already have mentioned some of the (mostly free) alternatives. This depends on the platform again.
Windows:
VSTS 2008 is pretty good with its /analyze and profiling tools
Rational Purify (as you've mentioned)
BoundsChecker
Linux:
Valgrind
Mac:
Shark
CHUD
Sleuth
MalloDebug
|
2,421,254 | 2,421,438 | Static and global variable in memory |
Are static variables stored on the stack itself similar to globals? If so, how are they protected to allow for only local class access?
In a multi threaded context, is the fear that this memory can be directly accessed by other threads/ kernel? or why cant we use static/global in multi process/ thread enviornment?
| Variables stored on the stack are temporal in nature. They belong to a function, etc and when the function returns and the corresponding stack frame is popped off, the stack variables disappear with it. Since globals are designed to be accessible everywhere, they must not go out of context and thus are stored on the heap (or in a special data section of the binary) instead of on the stack. The same goes for static variables; since they must hold their value between invocations of a function, they cannot disappear when the function returns thus they cannot be allocated on the stack.
As far as protection of static variables goes, IIRC this is mainly done by the compiler. Even though the variable is on the heap, your compiler knows the limited context in which that variable is valid and any attempt to access the static from outside that context will result in an "unknown identifier" or similar error. The only other way to access the heap variable incorrectly is if you know the address of the static and you blindly de-reference a pointer to it. This should result in a run-time memory access error.
In a multi-threaded environment, it is still okay to use globals and static variables. However, you have to be a lot more careful. You must guarantee that only one thread can access the variable at a time (typically through some kind of locking mechanism such as a mutex). In the case of static local variables inside a function, you must ensure that your function will still function as expected if it is called from multiple threads sequentially (that is, called from thread 1, then from thread 2, then thread 1, then thread 2, etc etc). This is generally harder to do and many functions that rely on static member variables are not thread-safe because of this (strtok is a notable example).
|
2,421,485 | 2,421,651 | Could not send backspace key using ::SendInput() to wordpad application | I have used sendinput() function and windows keyboard hooks to develop a custom keyboard for indian languages.
Project is in google code here: http://code.google.com/p/ekalappai
The keyboad hook and sendinput functions are placed in a win32 dll. And they are called from a Qt exe.
Our application works fine for most keys and applications. I find the following issue:
I could not send Backspace key to few applications like Wordpad/Openoffice/MsOffice. I find same issue with Arrowkeys and delete keys.
Here is my code:
extern "C" __declspec(dllexport) void GenerateKey(int vk , bool bExtended)
{
//update previous characters
previous_2_character = previous_1_character;
previous_1_character = vk;
KEYBDINPUT kb={0};
INPUT Input={0};
//keydown
kb.wVk = 0;
kb.wScan = vk;/*enter unicode here*/;
kb.dwFlags = KEYEVENTF_UNICODE; // KEYEVENTF_UNICODE=4
Input.type = INPUT_KEYBOARD;
Input.ki = kb;
::SendInput(1,&Input,sizeof(Input));
//keyup
kb.wVk = 0;
kb.wScan = vk;/*enter unicode here*/;
kb.dwFlags = KEYEVENTF_UNICODE|KEYEVENTF_KEYUP; //KEYEVENTF_UNICODE=4
Input.type = INPUT_KEYBOARD;
Input.ki = kb;
::SendInput(1,&Input,sizeof(Input));
}
Full dll code is here: http://code.google.com/p/ekalappai/source/browse/trunk/ekhook/ekhook/dllmain.cpp
Calling code:
generatekey = (GenerateKey) myLib->resolve( "GenerateKey" );
generatekey(44,FALSE); //comma - THis works in wordpad/MsOffice/Openoffice
generatekey(2949,FALSE); //tamil character "a" works in Wordpad/Msoffice/Openoffice
generatekey(8,FALSE); //backspace - This is NOT working in Wordpad/Msoffice/Openoffice
Full calling code from Qt Exe is here:
http://code.google.com/p/ekalappai/source/browse/trunk/ekalappai/window.cpp
I tried searching in google but could not fine a solution yet. If anyone has clue on resolving this pls help. Thanks.
| You are mixing up the virtual key and the scan code. The wVk member is the important one, the scan code will only be used it the virtual key is ambiguous. Fix:
kb.wVk = vk;
kb.wScan = 0; // TODO: look at VkKeyScanEx()
|
2,421,492 | 2,421,671 | Visibility of privately inherited typedefs to nested classes | In the following example (apologies for the length) I have tried to isolate some unexpected behaviour I've encountered when using nested classes within a class that privately inherits from another. I've often seen statements to the effect that there is nothing special about a nested class compared to an unnested class, but in this example one can see that a nested class (at least according to GCC 4.4) can see the public typedefs of a class that is privately inherited by the closing class.
I appreciate that typdefs are not the same as member data, but I found this behaviour surprising, and I imagine many others would, too. So my question is twofold:
Is this standard behaviour? (a decent explanation of why would be very helpful)
Can one expect it to work on most modern compilers (i.e., how portable is it)?
#include <iostream>
class Base {
typedef int priv_t;
priv_t priv;
public:
typedef int pub_t;
pub_t pub;
Base() : priv(0), pub(1) {}
};
class PubDerived : public Base {
public:
// Not allowed since Base::priv is private
// void foo() {std::cout << priv << "\n";}
class Nested {
// Not allowed since Nested has no access to PubDerived member data
// void foo() {std::cout << pub << "\n";}
// Not allowed since typedef Base::priv_t is private
// void bar() {priv_t x=0; std::cout << x << "\n";}
};
};
class PrivDerived : private Base {
public:
// Allowed since Base::pub is public
void foo() {std::cout << pub << "\n";}
class Nested {
public:
// Works (gcc 4.4 - see below)
void fred() {pub_t x=0; std::cout << x << "\n";}
};
};
int main() {
// Not allowed since typedef Base::priv_t private
// std::cout << PubDerived::priv_t(0) << "\n";
// Allowed since typedef Base::pub_t is accessible
std::cout << PubDerived::pub_t(0) << "\n"; // Prints 0
// Not allowed since typedef Base::pub_t is inaccessible
//std::cout << PrivDerived::pub_t(0) << "\n";
// Works (gcc 4.4)
PrivDerived::Nested o;
o.fred(); // Prints 0
return 0;
}
| Preface: In the answer below I refer to some differences between C++98 and C++03. However, it turns out that the change I'm talking about haven't made it into the standard yet, so C++03 is not really different from C++98 in that respect (thanks to Johannes for pointing that out). Somehow I was sure I saw it in C++03, but in reality it isn't there. Yet, the issue does indeed exist (see the DR reference in Johannes comment) and some compilers already implement what they probably consider the most reasonable resolution of that issue. So, the references to C++03 in the text below are not correct. Please, interpret the references to C++03 as references to some hypothetical but very likely future specification of this behavior, which some compilers are already trying to implement.
It is important to note that there was a significant change in access rights for nested classes between C++98 and C++03 standards.
In C++98 a nested class had no special access rights to the members of the enclosing class. It was basically a completely independent class, just declared in the scope of the enclosing class. It could only access public members of the enclosing class.
In C++03 nested class was given access rights to the members of the enclosing class as a member of the enclosing class. More precisely, nested class was given the same access rights as a static member function of the enclosing class. I.e. now the nested class can access any members of the enclosing class, including private ones.
For this reason, you might observe the differences between different compilers and versions of the same compiler depending on when they implemented the new specification.
Of course, you have to remember that an object of the nested class is not tied in any way to any specific object of the enclosing class. As far as the actual objects are concerned, these are two independent classes. In order to access the non-static data members or methods of the enclosing class from the nested class you have to have a specific object of the enclosing class. In other words, once again, the nested class indeed behaves as just like a static member function of the enclosing class: it has no specific this pointer for the enclosing class, so it can't access the non-static members of the enclosing class, unless you make an effort to give it a specific object of the enclosing class to access. Without it the nested class can only access typedef-names, enums, and static members of the enclosing class.
A simple example that illustrates the difference between C++98 and C++03 might look as follows
class E {
enum Foo { A };
public:
enum Bar { B };
class I {
Foo i; // OK in C++03, error in C++98
Bar j; // OK in C++03, OK in C++98
};
};
This change is exactly what allows your PrivDerived::Nested::fred function to compile. It wouldn't pass compilation in a pedantic C++98 compiler.
|
2,421,771 | 2,421,781 | What is the Win compile switch to turn off #pragma deprecated warning? | Using Visual Studio .NET 2003 C++ and the wininet.dll
Am seeing many C4995 warnings
More info
Any help is appreciated.
Thanks.
| You can use #pragma warning as shown on that MSDN page:
#pragma warning(disable: 4995)
Or, you can turn the warning off for the whole project in the project's properties (right click project -> Properties -> C/C++ -> Advanced -> Disable Specific Warnings). On the command line, you can achieve the same effect using /wd4995.
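If you want C4995 to stay enabled for your own code and be suppressed only for the offending headers, the usual pattern is push/pop around the include (a fragment; wininet.h stands in for whichever header triggers the warning in your build):

```cpp
#pragma warning(push)
#pragma warning(disable: 4995)
#include <wininet.h>   // the header(s) that emit C4995
#pragma warning(pop)
// C4995 is active again for everything below
```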
|
2,421,783 | 2,421,831 | Queue giving erroneous data | So I'm attempting to use a queue to parse some input, turning prefix mathematical expressions into infix mathematical expressions with parentheses. For example: +++12 20 3 4 turns into (((12+20)+3)+4). For the most part, my algorithm works, except for one specific thing. When the numbers are greater than 2 digits long, the output becomes strange. I'll give you some examples instead of attempting to explain.
Examples: +++12 200 3 4 becomes (((12+3)+3)+4)
+++12 2000 3 4 becomes (((12+20004)+3)+4)
+++12 20005 3 4 becomes (((12+20004)+3)+4)
+++12 20005 3 45 becomes (((12+20004)+3)+45)
+++12 20005 3 456 becomes (((12+20004)+3)+()
Hopefully that's enough examples, if you need more, just ask.
I'm using GCC 4.2 in XCode on Mac OSX 10.6.2.
And here is the code that does this wonderful thing:
#include "EParse.h"
#include <iostream>
#include <iomanip>
EParse::EParse( char* s )
{
this->s = s;
len = strlen( s );
}
void EParse::showParsed()
{
parse( s, 0, len, new std::queue< char* >(), new std::queue< char >() );
}
void EParse::parse( char* str, int beg, int len, std::queue< char* > *n, std::queue< char > *ex )
{
//ex is for mathematical expressions (+, -, etc.), n is for numbers
if( beg == len )
{
if( ex->size() > n->size() )
{
std::cout << "Malformed expression. Too many mathematical expressions to too few numbers." << std::endl;
std::cout << ex->size() << " mathematical expressions." << std::endl;
std::cout << n->size() << " number(s)." << std::endl;
return;
}
else
{
std::string *s = new std::string();
output( n, ex, 0, s );
std::cout << s->c_str();
return;
}
}
if( str[ beg ] == ' ' && beg != ( len - 1 ) )
beg++;
if( num( str[ beg ] ) )
{
std::string *s = new std::string();
getNum( s, str, beg, len );
//std::cout << s->c_str() << std::endl;
n->push( const_cast< char* >( s->c_str() ) );
delete s;
parse( str, beg, len, n, ex );
}
else if( mathexp( str[ beg ] ) )
{
ex->push( str[ beg ] );
parse( str, beg + 1, len, n, ex );
}
}
void EParse::getNum( std::string *s, char* str, int &beg, int len )
{
if( num( str[ beg ] ) )
{
char *t = new char[ 1 ];
t[ 0 ] = str[ beg ];
s->append( t );
beg += 1;
getNum( s, str, beg, len );
}
}
bool EParse::num( char c )
{
return c == '0' || c == '1' || c == '2' || c == '3' || c == '4' ||
c == '5' || c == '6' || c == '7' || c == '8' || c == '9';
}
bool EParse::mathexp( char c )
{
return c == '+' || c == '*' || c == '/' || c == '%' || c == '-';
}
void EParse::output( std::queue< char* > *n, std::queue< char > *ex, int beg, std::string *str )
{
if( ex->empty() )
{
return;
}
char *t = new char[1];
t[ 0 ] = ex->front();
ex->pop();
if( beg == 0 )
{
str->insert( 0, "(" );
str->append( n->front() );
beg += 1 + strlen( n->front() );
n->pop();
str->append( t );
str->append( n->front() );
str->append( ")" );
beg += 2 + strlen( n->front() );
n->pop();
}
else
{
str->insert( 0, "(" );
str->insert( beg, t );
str->insert( beg + 1, n->front() );
beg += 1 + strlen( n->front() );
str->insert( beg, ")" );
n->pop();
beg++;
}
//ex->pop();
output( n, ex, beg + 1, str );
//std::cout << str << std::endl;
}
If you need any commenting or explaining of what exactly certain stuff does, please let me know, I will be checking back here fairly often tonight.
| While I don't have the exact answer to your problem, I did notice this:
std::string *s = new std::string();
getNum( s, str, beg, len );
//std::cout << s->c_str() << std::endl;
n->push( const_cast< char* >( s->c_str() ) );
delete s;
The problem there is that you are pushing a pointer to s's internal buffer (the result of c_str()) into the queue and then you are deleting s. The queue, then, will be referencing a string's value that is no longer valid, which could lead to the errors you are describing.
To make life a little easier for you, I would recommend changing your queue type to:
std::queue<std::string>
Then you can push and pop whole std::strings instead of pointers to their data:
n->push(s);
Note that you'll have to change the APIs of your routines from taking a char* to a std::string&, but you will be able to modify the string's value like you did the char*.
|
2,421,833 | 2,422,076 | Qt - How to do superscripts and subscripts in a QLineEdit? | I need to have the ability to use superscripts asnd subscripts in a QLineEdit in Qt 4.6. I know how to do superscripts and subscripts in a QTextEdit as seen below but I can't figure out how to do them in QLineEdit because the class doesn't contain a mergeCurrentCharFormat() function like QTextEdit does. Please help. Thanks
void MainWindow::superscriptFormat()
{
QTextCharFormat format;
format.setVerticalAlignment(QTextCharFormat::AlignSuperScript);
if(ui->txtEdit->hasFocus())
ui->txtEdit->mergeCurrentCharFormat(format);
}
| QLineEdit wasn't really made for this type of thing, as it was designed for simple text entry. You have a few options, however. The simplest one is to do as Hostile Fork suggested and use a QTextEdit, and add a style override to not show the scroll bar (which I assume would remove the arrows). The more complex one would be to either inherit QLineEdit and do your own drawing, or to make your own widget completely that appears similar to the QLineEdits do.
|
2,422,068 | 2,422,080 | How can I run an external program without waiting for it to exit? | I'm trying to execute an external program from inside my Linux C++ program.
I'm calling the method system("gedit") to launch an instance of the Gedit editor. However my problem is while the Gedit window is open, my C++ program waits for it to exit.
How can I call an external program without waiting for it to exit?
| You will need to use fork and exec
int fork_rv = fork();
if (fork_rv == 0)
{
// we're in the child
execl("/path/to/gedit", "gedit", (char *)NULL);
// in case execl fails
_exit(1);
}
else if (fork_rv == -1)
{
// error could not fork
}
You will also need to reap your child so as not to leave a zombie process.
void reap_child(int sig)
{
int status;
waitpid(-1, &status, WNOHANG);
}
int main()
{
signal(SIGCHLD, reap_child);
...
}
In regards to zombie processes, you have a second option. It uses a bit more resources (this flavor forks twice), but the benefit is you can keep your wait closer to your fork which is nicer in terms of maintenance.
int fork_rv = fork();
if (fork_rv == 0)
{
fork_rv = fork();
if (fork_rv == 0)
{
// we're in the child
execl("/path/to/gedit", "gedit", (char *)NULL);
// if execl fails
_exit(1);
}
else if (fork_rv == -1)
{
// fork fails
_exit(2);
}
_exit(0);
}
else if (fork_rv != -1)
{
// parent wait for the child (which will exit quickly)
int status;
waitpid(fork_rv, &status, 0);
}
else if (fork_rv == -1)
{
// error could not fork
}
What this last flavor does is create a child, which in turns creates a grandchild and the grandchild is what exec's your gedit program. The child itself exits and the parent process can reap it right away. So an extra fork but you keep all the code in one place.
|
2,422,155 | 2,535,751 | how to get namespace prefixes from XML document, using MSXML? | For example,
In this document
<?xml version="1.0" ?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:ns1="http://opcfoundation.org/webservices/XMLDA/1.0/"
xmlns:ns2="Service">
<SOAP-ENV:Body id="_0">
If I need to select the element "Body", I need to know the prefix "SOAP-ENV". How can I get that? Getting the root element's name and slicing off everything before the colon (:) seems like a dirty idea to me; I am sure there should be a neater way to do it. Google does not help (maybe I am not searching for the right thing).
| If you're doing XML processing you shouldn't need to know the prefix.
To select a node in an XML document, you need not know the prefix. You need to know the namespace, not the prefix.
If you are processing a SOAP document, then you know the namespace is http://schemas.xmlsoap.org/soap/envelope/. And that's all you need. In the XML application, you can assign your own namespace prefix.
|
2,422,273 | 2,422,279 | One question about vector push_back | I just noticed that for vector push_back it is push back a reference to the element.
void push_back ( const T& x );
My question is does the memory layout changed after push_back?
For example, I have an array first which containing five elements and the layout is like this.
| | | | | |
| A1 | A2 | A3 | A4 | A5 |
Now I have a vector v
v.push_back(A3)
Now, how does the memory look like?
How does the vector store the elements here?
How does the vector access the element?
| A vector stores by value not by reference.
When you re-add the same element, a copy will be stored at the end. If you do not want to make a copy of the values you are inserting into the vector, then you should use pointers instead.
Example:
std::vector<std::string> v;
string s = "";
v.push_back(s);
s = "hi";
v.push_back(s);
v now contains 2 different elements, one with an empty string, and one with a string which contains "hi". Both strings in the vector remain independent from s.
Note: The internal implementation details of an STL container can vary, there is no guarantee that it will be implemented a certain way; however, the semantics of how an STL container works, will remain the same no matter what the internal implementation is.
|
2,422,419 | 2,422,444 | Disable full keyboard and mouse when console of C is running in Windows OS | Is it possible to disable the full keyboard and mouse when I run my C program in Windows OS? Kindly guide me on how I can make this possible.
| What about BlockInput()?
|
2,422,430 | 2,424,210 | Hide the console of a C program in the Windows OS | I want to hide my console of C when I run my application. How can I make my application run in the background?
| Programs with main() by default are compiled as SUBSYSTEM:CONSOLE applications and get a console window. If you own the other processes your application is starting, you could modify them to be windowed applications by one of the following methods:
Modify them to use WinMain() instead of main(). This is the typical approach but requires modifying code. (If the reason for using main() is for easy access to argc/argv, MSVC provides global __argc/__argv equivalents for windowed applications.)
Explicitly specifying the subsystem and entry point via /SUBSYSTEM:WINDOWS /ENTRY:main arguments to link.exe.
Use editbin.exe (from the Windows SDK) to change the subsystem type after the fact. This one might be useful if you don't have source code access to the spawned processes.
|
2,422,431 | 2,422,457 | Is a "factory" method the right pattern? | So I'm working to improve an existing implementation. I have a number of polymorphic classes that are all composed into a higher level container class. The problem I'm dealing with at the moment is that the higher level container class, well, sucks. It looks something like this, which I really don't have a problem with (as the polymorphic classes in the container should be public). My real issue is the constructor...
/*
* class1 and class 2 derive from the same superclass
*/
class Container
{
public:
boost::shared_ptr<ComposedClass1> class1;
boost::shared_ptr<ComposedClass2> class2;
private:
...
}
/*
* Constructor - builds the objects that we need in this container.
*/
Container::Container(some params)
{
class1.reset(new ComposedClass1(...));
class2.reset(new ComposedClass2(...));
}
What I really need is to make this container class more re-usable. By hard-coding up the member objects and instantiating them, it basically isn't and can only be used once. A factory is one way to build what I need (potentially by supplying a list of objects and their specific types to be created?) Other ways to get around this problem? Seems like someone should have solved it before... Thanks!
| Dependency injection springs to mind.
|
2,422,588 | 2,424,136 | How to add existing project using environment variable? | I have a a project that resides on a "thumb drive" (a.k.a. memory stick). Due to Windows ability to change drive letters of thumb drives, I would like to specify the location of sub-projects using an environment variable. This allows me to set the thumb drive drive letter, depending on the PC that I am using; or change the variable when the drive letter changes (such as happens when adding a hard drive to the PC).
This issue has sub parts:
How do I tell Visual Studio (2008) to use environment variable when adding an existing project using the GUI?
Which files do I need to modify with the environment variable (*.dcp, *.vcproj, *.sln)?
Do I need to delete the platform specific vcproj files, such as *.vcproj.host_name?
{Since I use different host PCs with the thumb drive, there are different vcproj files.}
I am using MS Visual Studio 2008, C++, on Vista and Windows XP (at least two platforms).
| The best solution here is to use relative paths for your subprojects. The relative path from your solution file to the subprojects does not change, as both are on the same thumb drive.
|
2,422,625 | 2,422,637 | Using clang to analyze C++ code | We want to do some fairly simple analysis of user's C++ code and then use that information to instrument their code (basically regen their code with a bit of instrumentation code) so that the user can run a dynamic analysis of their code and get stats on things like ranges of values of certain numeric types.
clang should be able to handle enough C++ now to handle the kind of code our users would be throwing at it - and since clang's C++ coverage is continuously improving by the time we're done it'll be even better.
So how does one go about using clang like this as a standalone parser? We're thinking we could just generate an AST and then walk it looking for objects of the classes we're interested in tracking. Would be interested in hearing from others who are using clang without LLVM.
| clang is designed to be modular. Quoting from its page:
A major design concept for clang is
its use of a library-based
architecture. In this design, various
parts of the front-end can be cleanly
divided into separate libraries which
can then be mixed up for different
needs and uses.
Look at clang libraries like libast for your needs. Read more here.
|
2,422,704 | 2,423,530 | Unit Testing Refcounted Critical Section Class | I'm looking at a simple class I have to manage critical sections and locks, and I'd like to cover this with test cases. Does this make sense, and how would one go about doing it? It's difficult because the only way to verify the class works is to setup very complicated threading scenarios, and even then there's not a good way to test for a leak of a Critical Section in Win32. Is there a more direct way to make sure it's working correctly?
Here's the code:
CriticalSection.hpp:
#pragma once
#include <windows.h>
#include <boost/shared_ptr.hpp>
namespace WindowsAPI { namespace Threading {
class CriticalSectionImpl;
class CriticalLock;
class CriticalAttemptedLock;
class CriticalSection
{
friend class CriticalLock;
friend class CriticalAttemptedLock;
boost::shared_ptr<CriticalSectionImpl> impl;
void Enter();
bool TryEnter();
void Leave();
public:
CriticalSection();
};
class CriticalLock
{
CriticalSection &ref;
public:
CriticalLock(CriticalSection& sectionToLock) : ref(sectionToLock) { ref.Enter(); };
~CriticalLock() { ref.Leave(); };
};
class CriticalAttemptedLock
{
CriticalSection &ref;
bool valid;
public:
CriticalAttemptedLock(CriticalSection& sectionToLock) : ref(sectionToLock), valid(ref.TryEnter()) {};
bool LockHeld() { return valid; };
~CriticalAttemptedLock() { if (valid) ref.Leave(); };
};
}}
CriticalSection.cpp:
#include "CriticalSection.hpp"
namespace WindowsAPI { namespace Threading {
class CriticalSectionImpl
{
friend class CriticalSection;
CRITICAL_SECTION sectionStructure;
CriticalSectionImpl() { InitializeCriticalSection(&sectionStructure); };
void Enter() { EnterCriticalSection(&sectionStructure); };
bool TryEnter() { if (TryEnterCriticalSection(&sectionStructure)) return true; else return false; };
void Leave() { LeaveCriticalSection(&sectionStructure); };
public:
~CriticalSectionImpl() { DeleteCriticalSection(&sectionStructure); };
};
void CriticalSection::Enter() { impl->Enter(); };
bool CriticalSection::TryEnter() { return impl->TryEnter(); };
void CriticalSection::Leave() { impl->Leave(); };
CriticalSection::CriticalSection() : impl(new CriticalSectionImpl) {} ;
}}
| Here are three options and personally I favour the last one...
You could create a 'critical section factory' interface that can be passed to your constructor. This would have functions that wrapped the API level functions that you need to use. You could then mock this interface up and pass the mock to the code when under test and you can be sure that the right API functions are called. You would, generally, also have a constructor that didn't take this interface and that instead initialised itself with a static instance of the factory that called directly to the API. Normal creation of the objects wouldn't be affected (as you have them use a default implementation) but you can instrument when under test. This is the standard dependency injection route and results in you being able to parameterise from above. The downside of all this is that you have a layer of indirection and you need to store a pointer to the factory in each instance (so you're probably losing out in both space and time).
Alternatively you could try and mock the API out from underneath... A long time ago I looked into testing this kind of low level API usage with API hooking; the idea being that if I hooked the actual Win32 API calls I could develop a 'mock API layer' which would be used in the same way as more normal Mock Objects but would rely on "parameterise from below" rather than parameterise from above. Whilst this worked and I got quite a long way into the project, it was very complex to ensure that you were only mocking the code under test. The good thing about this approach was that I could cause the API calls to fail under controlled conditions in my test; this allowed me to test failure paths which were otherwise VERY difficult to exercise.
The third approach is to accept that some code is not testable with reasonable resources and that dependency injection isn't always suitable. Make the code as simple as you can, eyeball it, write tests for the bits that you can and move on. This is what I tend to do in situations like this.
However....
I'm dubious of your design choice. Firstly, there's too much going on in the class (IMHO). The reference counting and the locking are orthogonal. I'd split them apart so that I had a simple class that did critical section management, and then build on it if I found I really needed reference counting... Secondly, there's the reference counting and the design of your lock functions; rather than returning an object that releases the lock in its dtor, why not simply have an object that you create on the stack to create a scoped lock? This would remove much of the complexity. In fact you could end up with a critical section class that's as simple as this:
CCriticalSection::CCriticalSection()
{
::InitializeCriticalSection(&m_crit);
}
CCriticalSection::~CCriticalSection()
{
::DeleteCriticalSection(&m_crit);
}
#if(_WIN32_WINNT >= 0x0400)
bool CCriticalSection::TryEnter()
{
return ToBool(::TryEnterCriticalSection(&m_crit));
}
#endif
void CCriticalSection::Enter()
{
::EnterCriticalSection(&m_crit);
}
void CCriticalSection::Leave()
{
::LeaveCriticalSection(&m_crit);
}
Which fits with my idea of this kind of code being simple enough to eyeball rather than introducing complex testing ...
You could then have a scoped locking class such as:
CCriticalSection::Owner::Owner(
ICriticalSection &crit)
: m_crit(crit)
{
m_crit.Enter();
}
CCriticalSection::Owner::~Owner()
{
m_crit.Leave();
}
You'd use it like this
void MyClass::DoThing()
{
ICriticalSection::Owner lock(m_criticalSection);
// We're locked whilst 'lock' is in scope...
}
Of course my code isn't using TryEnter() or doing anything complex but there's nothing to stop your simple RAII classes from doing more; though, IMHO, I think TryEnter() is actually required VERY rarely.
|
2,422,755 | 2,423,930 | How can I force Doxygen to show full include path? | How can I force Doxygen to show full include path?
What do I mean:
I have a class foo::bar::bee defined in bee.hpp in following directory structure:
foo
foo/bar
foo/bar/bee.hpp
Doxygen, when it documents foo::bar::bee class tells that you need to include <bee.hpp>, but for my software I need <foo/bar/bee.hpp>
How can I cause Doxygen to do this? Is there any option to provide "Include flags" like "-I" so doxygen would know where the base is?
Notes:
FULL_PATH_NAMES is already set to default YES
I do not want to provide include header explicitly for each class, because there
too many of them. I want Doxygen to do this automatically.
Thanks.
Answer
Set:
STRIP_FROM_INC_PATH = relative/path/to/include/directory
| Taken directly from the hints in DoxyWizard:
STRIP_FROM_INC_PATH
The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the path mentioned in the documentation of a class, which tells the reader which header file to include in order to use a class. If left blank only the name of the header file containing the class definition is used. Otherwise one should specify the include paths that are normally passed to the compiler using the -I flag.
|
2,422,843 | 2,422,992 | Writing preprocessor directives to get string | Can you write preprocessor directives to return you a std::string or char*?
For example: In case of integers:
#define square(x) (x*x)
int main()
{
int x = square(5);
}
I'm looking to do the same but with strings like a switch-case pattern. if pass 1 it should return "One" and 2 for "Two" so on..
| You don't want to do this with macros in C++; a function is fine:
char const* num_name(int n, char const* default_=0) {
// you could change the default_ to something else if desired
static char const* names[] = {"Zero", "One", "Two", "..."};
if (0 <= n && n < (sizeof names / sizeof *names)) {
return names[n];
}
return default_;
}
int main() {
cout << num_name(42, "Many") << '\n';
char const* name = num_name(35);
if (!name) { // using the null pointer default_ value as I have above
// name not defined, handle however you like
}
return 0;
}
Similarly, that square should be a function:
inline int square(int n) {
return n * n;
}
(Though in practice square isn't very useful, you'd just multiply directly.)
As a curiosity, though I wouldn't recommend it in this case (the above function is fine), a template meta-programming equivalent would be:
template<unsigned N> // could also be int if desired
struct NumName {
static char const* name(char const* default_=0) { return default_; }
};
#define G(NUM,NAME) \
template<> struct NumName<NUM> { \
static char const* name(char const* default_=0) { return NAME; } \
};
G(0,"Zero")
G(1,"One")
G(2,"Two")
G(3,"Three")
// ...
#undef G
Note that the primary way the TMP example fails is you have to use compile-time constants instead of any int.
|
2,422,881 | 2,437,771 | Why is OnDragEnter not called? | I have added the COleDropTarget variable to my view class and registered it in the OnCreate(), which is being called at startup. I added the OnDragEnter and OnDrop virtual functions (not the others yet, as OnDragLeave). But they are not called when I drag (or drop) a piece of text over them.
I just happened to think about the fact that I had already implemented the dropfiles function to the same window. Is this preventing the text drag?
What else do I need?
TIA,
Harvey
| Solved:
In using F1 to get the syntax for OnDrop and the others, MSDN gave me:
virtual BOOL OnDrop(
CWnd* pWnd,
COleDataObject* pDataObject,
DROPEFFECT dropEffect,
CPoint point
);
But the correct virtual function does not have the first parameter and should be:
virtual BOOL OnDrop(
COleDataObject* pDataObject,
DROPEFFECT dropEffect,
CPoint point
);
Same with the others. So I was never actually overriding the default functions.
|
2,422,889 | 2,423,161 | How do you correctly use boost::make_shared_ptr? | This simple example fails to compile in VS2K8:
io_service io2;
shared_ptr<asio::deadline_timer> dt(make_shared<asio::deadline_timer>(io2, posix_time::seconds(20)));
As does this one:
shared_ptr<asio::deadline_timer> dt = make_shared<asio::deadline_timer>(io2);
The error is:
error C2664: 'boost::asio::basic_deadline_timer::basic_deadline_timer(boost::asio::io_service &,const boost::posix_time::ptime &)' : cannot convert parameter 1 from 'const boost::asio::io_service' to 'boost::asio::io_service &'
| The problem is that asio::deadline_timer has a constructor that requires a non-const reference to a service. However, when you use make_shared its parameter is const. That is, this part of make_shared is the problem:
template< class T, class A1 > // service is passed by const-reference
boost::shared_ptr< T > make_shared( A1 const & a1 )
{
// ...
::new( pv ) T( a1 ); // but the constructor requires a non-const reference
// ...
}
What you can do is wrap the service up into a reference_wrapper, using ref:
#include <boost/ref.hpp>
asio::io_service io1;
shared_ptr<asio::deadline_timer> dt = // pass a "reference"
make_shared<asio::deadline_timer>(boost::ref(io1));
This takes your instance, and puts it into an object that can be converted implicitly to a reference to your isntance. You've then essentially passed an object representing a non-const reference to your instance.
This works because the reference_wrapper really stores a pointer to your instance. It can therefore return that pointer dereferenced while still being const.
|
2,422,898 | 2,423,229 | Intel MKL memory management and exceptions | I am trying out Intel MKL and it appears that they have their own memory management (C-style).
They suggest using their MKL_malloc/MKL_free pairs for vectors and matrices and I do not know what is a good way to handle it. One of the reasons for that is that memory-alignment is recommended to be at least 16-byte and with these routines it is specified explicitly.
I used to rely on auto_ptr and boost::smart_ptr a lot to forget about memory clean-ups.
How can I write an exception-safe program with MKL memory management or should I just use regular auto_ptr's and not bother?
Thanks in advance.
EDIT
http://software.intel.com/sites/products/documentation/hpc/mkl/win/index.htm
this link may explain why I brought up the question
UPDATE
I used an idea from the answer below for allocator. This is what I have now:
template <typename T, size_t TALIGN=16, size_t TBLOCK=4>
class aligned_allocator : public std::allocator<T>
{
public:
pointer allocate(size_type n, const void *hint)
{
pointer p = NULL;
size_t count = sizeof(T) * n;
size_t count_left = count % TBLOCK;
if( count_left != 0 ) count += TBLOCK - count_left;
if ( !hint ) p = reinterpret_cast<pointer>(MKL_malloc (count,TALIGN));
else p = reinterpret_cast<pointer>(MKL_realloc((void*)hint,count,TALIGN));
return p;
}
void deallocate(pointer p, size_type n){ MKL_free(p); }
};
If anybody has any suggestions, feel free to make it better.
| You could use a std::vector with a custom allocator like the ones mentioned here to ensure 16 byte alignment. Then you can just take address of the first element as the input pointer to the MKL functions. It is important that you have 16 byte alignment since the MKL uses SIMD extensively for performance.
|
2,422,970 | 2,423,091 | C++ class object memory map | When we create an object of a class what does it memory map look like. I am more interested in how the object calls the non virtual member functions. Does the compiler create a table like vtable which is shared between all objects?
class A
{
public:
void f0() {}
int int_in_b1;
};
A * a = new A;
What will be the memory map of a?
| You can imagine this code:
struct A {
void f() {}
int int_in_b1;
};
int main() {
A a;
a.f();
return 0;
}
Being transformed into something like:
struct A {
int int_in_b1;
};
void A__f(A* const this) {}
int main() {
A a;
A__f(&a);
return 0;
}
Calling f is straight-forward because it's non-virtual. (And sometimes for virtual calls, the virtual dispatch can be avoided if the dynamic type of the object is known, as it is here.)
A longer example that will either give you an idea about how virtual functions work or terribly confuse you:
struct B {
virtual void foo() { puts(__func__); }
};
struct D : B {
virtual void foo() { puts(__func__); }
};
int main() {
B* a[] = { new B(), new D() };
a[0]->foo();
a[1]->foo();
return 0;
}
Becomes something like:
void B_foo(void) { puts(__func__); }
void D_foo(void) { puts(__func__); }
struct B_VT {
void (*foo)(void);
}
B_vtable = { B_foo },
D_vtable = { D_foo };
typedef struct B {
struct B_VT* vt;
} B;
B* new_B(void) {
B* p = malloc(sizeof(B));
p->vt = &B_vtable;
return p;
}
typedef struct D {
struct B_VT* vt;
} D;
D* new_D(void) {
D* p = malloc(sizeof(D));
p->vt = &D_vtable;
return p;
}
int main() {
B* a[] = {new_B(), new_D()};
a[0]->vt->foo();
a[1]->vt->foo();
return 0;
}
Each object only has one vtable pointer, and you can add many virtual methods to the class without affecting object size. (The vtable grows, but this is stored once per class and is not significant size overhead.) Note that I've simplified many details in this example, but it does work: destructors are not addressed (which should additionally be virtual here), it leaks memory, and the __func__ values will be slightly different (they're generated by the compiler for the current function's name), among others.
|
2,423,052 | 2,423,322 | Problem related to dll | Can anyone tell me what could be the problem mentioned below:
Screenshot: http://lh5.ggpht.com/_D1MfgvBDtsU/S5iLmYivj1I/AAAAAAAAABU/8Mquam_XxZ4/s912/dll%20issue.PNG
This PP folder is present in the following path at my desk "E:\WINCE600\PLATFORM\COMMON\SRC\SOC\COMMON_FSL_V2_PDK1_7\IPUV3"
In this IPUV3 folder, a PP folder is present which does the resize, rotation & conversion tasks on an image. This PP folder consists of PDK & SDK. Inside the PDK folder there is a file called Ppclass.cpp which I have modified.
After modifying the Ppclass.cpp I have rebuilt the PP folder to check whether the modification is reflected in my project. But later I found that the problem is with pp.dll: even after rebuilding the PP folder, the new pp.dll is not updated.
Also the path for iMX51-EVK-PDK1_7 is as follows:
"E:\WINCE600\PLATFORM\iMX51-EVK-PDK1_7\target"
So now i want advice that how to sort this problem. I am sure that this problem is related to pp.dll
Please guide me to follow the correct steps. I will be very thankful to you all.
Thanks in Advance
| Was everything working as expected before the code change?
Are you getting any build errors?
Do you have a DIRS file in the IPUV3 directory that specifies the two subdirectories?
What is the problem? State what you did, what you expect and what was the outcome. It is not clear right now.
Update:
According to the comment below it seems that the build process is having trouble parsing one of your SOURCES files. From the error my guess is you have someting similar to:
SOURCELIBS=E:\...
Try:
SOURCELIBS=\
E:\...
The \ symbol tells the tool that there are more values on the next line.
By the way, I don't know who wrote this on the SOURCES file, but I think it is bad practice to use absolute paths. You should use the macro for your platform path _TARGETPLATROOT. Use it like this: $(_TARGETPLATROOT)\...
|
2,423,270 | 2,424,390 | Adding an allocator to a C++ class template for shared memory object creation | In short, my question is: If you have class, MyClass<T>, how can you change the class definition to support cases where you have MyClass<T, Alloc>, similar to how, say, STL vector provides.
I need this functionality to support an allocator for shared memory. Specifically, I am trying to implement a ring buffer in shared memory. Currently it has the following ctor:
template<typename ItemType>
SharedMemoryBuffer<ItemType>::SharedMemoryBuffer( unsigned long capacity, std::string name )
where ItemType is the type of the data to be placed in each slot of the buffer.
Now, this works splendid when I create the buffer from the main program thus
SharedMemoryBuffer<int>* sb;
sb = new SharedMemoryBuffer<int>(BUFFER_CAPACITY + 1, sharedMemoryName);
However, in this case the buffer itself is not created in shared memory and so is not accessible to other processes. What I want to do is to be able to do something like
typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;
typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuffer;
managed_shared_memory segment(create_only, "MySharedMemory", 65536);
const ShmemAllocator alloc_inst (segment.get_segment_manager());
MyBuffer *mybuf = segment.construct<MyBuffer>("MyBuffer")(alloc_inst);
However, I don't know how to go about adding an explicit allocator to the class template.
What confuses me is why you need a separate mechanism to allocate or create an object in shared memory (SHM). For example, if you reserve a shared-memory segment of 65536 bytes and it is mapped at address 0x1ABC0000, then on success you have free, directly accessible memory from 0x1ABC0000 to 0x1ABCFFFF.
Then, when your application needs to "allocate" an object of size sizeof(SHMObject) in SHM, and your memory manager sees that the address 0x1ABC0000+0x1A is free, it should just return the value 0x1ABC001A, mark (0x1ABC001A to 0x1ABC001A+sizeof(SHMObject)) as occupied, and you just need to cast: SHMObject* shmObjectPtr = (SHMObject*)(0x1ABC001A);
Of course, that assumes you have your own custom memory allocator that works on a specified range of memory addresses.
As for the template, I don't really understand what your SHM ring buffer looks like, but I've done this before using SHM; my implementation is like this:
`
//memory SHM allocator
template<typename T> class ShmRingAllocator
{
protected:
void* baseAddress;
public:
ShmRingAllocator(void* baseAddress,int memSize);
void* allocate(); //this function does what I described earlier, or you can use placement new: new (baseAddress+offset)T;
}
//some kind of shared_ptr<> that handle object in SHM, this provides mechanishm to check is the pointer still valid in shm or not
template<typename T> class ShmRingObjectPtr
{
protected:
T* object; //mapped address of object at current process
ShmBuffer* shm; //every object has pointer to which SHM does this pointer pointing at
public:
virtual T* operator->(); //operator overload to access T object
}
class ShmBuffer //base class for all kind of SHM buffer
{
protected:
std::string shmName;
void* shmBasePtr;
}
template<typename T, class A=ShmRingAllocator<T> > class ShmRingBuffer : public ShmBuffer
{
protected:
A allocator;
public:
ShmRingObjectPtr<T> insert() //push one element to ring buffer
{
return ShmRingObjectPtr<T>((T*)this->allocator.allocate(),this);
}
}
`
|
2,423,352 | 2,424,358 | what's the deal with compile time evaluation of constant arithmetic, and can it be done in the pre processor? | template <int T>
void aFunc(){}
int main()
{
int anArray[45-32];
switch(4)
{
case 4<45:
break;
}
aFunc<4*3/7&8 == 45 - 5>();
}
so this all compiles in VC++ 2005
is this standard? if so, what do the conditional operators return? 0 and 1? are there limits?
and the thing that interests me the most, can you do it in macros? defines?
Edit:
to elaborate further on the preprocessor bit:
#define STRINGAFY(n) #n
#define STRINGAFY_2(n) STRINGAFY(n)
#define SOME_RANDOM_MACRO(n) n
printf("%s", STRINGAFY(1)); //prints out "1"
printf("%s", STRINGAFY(SOME_RANDOM_MACRO(1))); //prints out "SOME_RANDOM_MACRO(1)"
printf("%s", STRINGAFY_2(SOME_RANDOM_MACRO(1))); //causes SOME_RANDOM_MACRO to be evaluated and prints out "1"
printf("%s", STRINGAFY_2(1+1)); // doesn't evaluate and print "2", prints "1+1" :(
| I think you misunderstand a bit.
The actual evaluation of constant expressions is done by the compiler, not the preprocessor. The preprocessor only evaluates macros, which is about textual substitution.
If you check out Boost.Preprocessor, you'll realize that even simple operations like addition or subtraction cannot be expressed using common expressions if you want them to be evaluated by the preprocessor.
BOOST_PP_ADD(4, 3) // expands to 7
BOOST_PP_SUB(4, 3) // expands to 1
This is done, behind the scenes, by substitions means, for example you could define it (though it would be very tiresome) like so:
#define ADD_IMPL_4_3 7
#define BOOST_PP_ADD(lhs, rhs) ADD_IMPL_##lhs##_##rhs
So this is way different from what the compiler does ;)
As for testing whether or not your compiler is able to evaluate an expression or not, just use a template.
template <int x> struct f {};
typedef f< 3*4 / 5 > super_f_type;
If it compiles, then the compiler was able to properly evaluate the expression... since otherwise it would not have been able to instantiate the template!
Note: the actual definition of BOOST_PP_ADD is much more complicated, this is a toy example and this may not work properly > BOOST_PP_ADD(BOOST_PP_SUB(4,3),3).
|
2,423,415 | 2,423,431 | C++ - Unwanted characters printed in output file | This is the last part of the program I am working on. I want to output a tabular list of songs to cout. And then I want to output a specially formatted list of song information into fout (which will be used as an input file later on).
Printing to cout works great. The problem is that tons of extra character are added when printing to fout.
Any ideas?
Here's the code:
void Playlist::printFile(ofstream &fout, LinkedList<Playlist> &allPlaylists, LinkedList<Songs*> &library)
{
fout.open("music.txt");
if(fout.fail())
{
cout << "Output file failed. Information was not saved." << endl << endl;
}
else
{
if(library.size() > 0)
fout << "LIBRARY" << endl;
for(int i = 0; i < library.size(); i++) // For Loop - "Incremrenting i"-Loop to go through library and print song information.
{
fout << library.at(i)->getSongName() << endl; // Prints song name.
fout << library.at(i)->getArtistName() << endl; // Prints artist name.
fout << library.at(i)->getAlbumName() << endl; // Prints album name.
fout << library.at(i)->getPlayTime() << " " << library.at(i)->getYear() << " ";
fout << library.at(i)->getStarRating() << " " << library.at(i)->getSongGenre() << endl;
}
if(allPlaylists.size() <= 0)
fout << endl;
else if(allPlaylists.size() > 0)
{
int j;
for(j = 0; j < allPlaylists.size(); j++) // Loops through all playlists.
{
fout << "xxxxx" << endl;
fout << allPlaylists.at(j).getPlaylistName() << endl;
for(int i = 0; i < allPlaylists.at(j).listSongs.size(); i++)
{
fout << allPlaylists.at(j).listSongs.at(i)->getSongName();
fout << endl;
fout << allPlaylists.at(j).listSongs.at(i)->getArtistName();
fout << endl;
}
}
fout << endl;
}
}
}
Here's a sample of the output to music.txt (fout):
LIBRARY
sadljkhfds
dfgkjh
dfkgh
3 3333 3 Rap
sdlkhs
kjshdfkh
sdkjfhsdf
3 33333 3 Rap
xxxxx
PayröÈöè÷÷(÷H÷h÷÷¨÷È÷èøø(øHøhøø¨øÈøèùù(ùHùhùù¨ùÈùèúú(úHúhúú¨úÈúèûû(ûHûhûû¨ûÈûèüü(üHühüü¨üÈüèýý(ýHýhý
! sdkjfhsdf!õüöýÄõ¼5!
sadljkhfds!þõÜö|ö\
þx þ þÈ þð ÿ ÿ@ ÿh ÿ ÿ¸ ÿà 0 X ¨ Ð ø
| Most likely, one of your methods returns an improper char * string (not null terminated).
Edit: actually, not just one: getPlaylistName(), getSongName() and getArtistName().
|