| question_id | answer_id | title | question | answer |
|---|---|---|---|---|
3,181,040 | 3,181,111 | Dynamic memory and constructor exceptions | Early today I discovered function try-catch blocks (from here in fact) and then went on a bit of a research spree - apparently their main use is to catch exceptions thrown by a constructor initialiser list.
Anyway, this sent me thinking about failing constructors and I've got to a stage where I just need a little clarification. This is all just me trying to learn more about the language, so I don't have a practical example, but here goes...
Given this example code:
class A
{
private:
B b;
C *c; //classes B, C & D omitted for brevity as not really relevant
D d;
public:
A(int x, int y, int z);
};
A::A(int x, int y, int z)
try
: b( x )
, c( new C(y) )
, d( z )
{
//omitted
}
catch(...)
{
//omitted
}
What happens in these cases:
1. The initialisation of b throws an exception.
2. The initialisation of c throws an exception.
3. The initialisation of d throws an exception.
Specifically, I want to know at least:
what will/may cause a memory leak from new C(y)? I'm thinking only case 3? (see here)
could you just delete c in the catch? Is it dangerous in cases 1 and 2?
Obviously, I guess the safest thing to do is to make c a smart pointer. But disregarding that option for the moment, what's the best course of action?
Is it safe to set c to NULL in the initialiser, and then place the call to new in the constructor body?
That would then mean a delete c must be placed in the catch in case something else throws in the constructor body? Are there safety issues doing that (ie, if it's the c = new C(y); itself that throws)?
| Try/catch function blocks are frowned upon, in the same way as goto: there might be some corner case where they make sense, but they are better avoided. When an object fails to be constructed, the best thing you can do is fail, and fail fast.
On your specific questions: when an exception is thrown in a constructor, all fully constructed subobjects will be destroyed. That means that in the case of b, it will be destroyed; in the case of c, it being a raw pointer, nothing is done. The simplest solution is changing c to be a smart pointer that handles the allocated memory. That way, if d throws, the smart pointer will be destroyed and the memory released. This is not related to the try/catch block, but rather to how constructors work.
It is also, in general, unsafe to delete the pointer from the catch block, as there is no guarantee of the actual value of the pointer before the initialization is performed. That is, if b or c throw, it might be the case that c is not 0, but is not a valid pointer either, and deleting it would be undefined behavior. As always there are cases, as if you have a guarantee that neither b nor c will throw, and assuming that a bad_alloc is not something you usually recover from, then it might be safe.
Note that if you need to keep a raw pointer for some specific reason, it is better to initialize it to 0 in the initialization list, and then create the object inside the constructor body to avoid this problem. Remember also that holding more than one (raw) resource directly in a class makes it really hard to ensure that no resource is leaked -- if the first resource is created and assigned, and the second one fails, the destructor will not be called and the first resource will leak. Again, if you can use a smart pointer you will have one less problem to deal with.
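For illustration, here is a minimal sketch of the smart-pointer approach. It uses C++11's std::unique_ptr (at the time of the question, std::auto_ptr or boost::scoped_ptr would have played this role); the throwing D constructor and the c_destroyed flag are assumptions added purely to make the cleanup observable:

```cpp
#include <memory>
#include <stdexcept>

// Flag set by ~C so the test below can observe the cleanup.
static bool c_destroyed = false;

struct C {
    ~C() { c_destroyed = true; }
};

struct D {
    D() { throw std::runtime_error("D construction failed"); } // simulated failure
};

struct A {
    std::unique_ptr<C> c;  // smart pointer instead of raw C*
    D d;
    A() : c(new C), d() {} // c completes first, then d throws
};

// Returns true if the C object was freed even though A never finished constructing.
bool member_cleanup_demo() {
    c_destroyed = false;
    try {
        A a;
    } catch (const std::exception&) {
        // Stack unwinding destroyed the fully constructed unique_ptr member,
        // which in turn deleted the C object. ~A() itself never runs.
    }
    return c_destroyed;
}
```

Because the unique_ptr counts as a fully constructed subobject, the language itself performs the cleanup; no try/catch block is needed.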
|
3,181,081 | 3,181,093 | file reading: feof() for binary files | I am reading a binary file, and when it reaches the end, it does not seem to be terminated by the feof() function. Is it because there is no EOF character for binary files? If so, how can I solve it?
currently my code is using a while loop
while (!feof(f))
When it reaches the end of the file at position 5526900, it doesn't stop; it just keeps trying to read, and I am stuck in the loop.
Can anyone tell me why, and how to solve it?
Thanks
| You should not use feof() to loop on - instead, use the return value of fread() - loop until it returns zero. This is easy to see if you consider reading an empty file - feof() returns the EOF status AFTER a read operation, so it will always try to read bogus data if used as a loop control.
I don't know why so many people think feof() (and the eof() member of C++ streams) can predict if the next read operation will succeed, but believe me, they can't.
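For example, a read loop driven by fread()'s return value might look like this (a sketch; the 4 KB buffer size and the byte-summing payload are arbitrary choices for the example):

```cpp
#include <stdio.h>

// Sums all bytes of a binary file by looping on fread()'s return value
// instead of feof(). The loop ends when fread() returns 0, which covers
// both end-of-file and read errors. Returns -1 if the file cannot be opened.
long sum_bytes(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    unsigned char buf[4096];
    size_t n;
    long total = 0;

    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        for (size_t i = 0; i < n; i++)
            total += buf[i];
    }

    fclose(f);
    return total;
}
```

Note the loop body only ever processes the `n` bytes actually read, so a short final read is handled naturally.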
|
3,181,156 | 3,259,932 | Can a PyQt program consume a DBus interface that exposes custom C++ types (marshalled via Qt's MetaType system)? If so, how? | I have a Qt/C++ application that exposes some custom C++ classes via DBus methods (by registering them as MetaTypes, and using annotations in the xml), and I want my PyQt program to consume these methods.
The problem I see is that the exposed types are C++ classes, not python, so how can I make python aware of these classes?
| There is no such thing as 'C++ classes' in D-Bus, it is language-agnostic. All methods, functions, etc. have type signatures expressible in basic D-Bus types (see the spec). Just call those methods, and it should work.
|
3,181,211 | 3,181,240 | Prefix/Postfix increment operators | I'm wanting to make sure I understand pass-by-value vs pass-by-reference properly. In particular, I'm looking at the prefix/postfix versions of the increment ++ operator for an object.
Let's suppose we have the following class X:
class X{
private:
int i;
public:
X(){i=0;}
X& operator ++ (){ ++i; return *this; } //prefix increment
X operator ++ (int unused){ //postfix increment
X ret(*this);
i++;
return ret;
}
operator int(){ return i; } //int cast
};
First of all, have I implemented the prefix/postfix increment operators properly?
Second, how memory-efficient is the postfix operator, compared to the prefix operator? Specifically how many X object copies are created when each version of the operator is used?
An explanation of exactly what happens with return-by-reference vs return-by-value might help me understand.
Edit: For example, with the following code...
X a;
X b=a++;
...are a and b now aliases?
| This is a correct implementation. It is typical that a postfix operator will be worse on performance because you have to create another copy before doing the increment (and this is why I've gotten in the habit of always using prefix unless I need something else).
With return-by-reference, you're returning an l-value reference to the current object. The compiler would typically implement this by returning the address of the current object. This means that returning the object is as simple as returning a number.
However, with return-by-value, a copy must be done. This means there's more information to copy over during the return (instead of just an address) as well as a copy constructor to call. This is where your performance hit comes in.
The efficiency of your implementation looks on-par with typical implementations.
EDIT:
With regards to your addendum, no, they are not aliases. You have created two separate objects. When you return by value (and when you created a new object from within the postfix increment operator) this new object is placed in a distinct memory location.
However, in the following code, a and b are aliases:
int a = 0;
int& b = ++a;
b is an address which references a.
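To make the distinction concrete, here is a trimmed copy of the X class from the question (operator int() is marked const here so it can be called on the copies), with a small check that a and b really are independent objects:

```cpp
// Trimmed copy of the X class from the question, used to show that
// `X b = a++;` produces two independent objects, not aliases.
class X {
    int i;
public:
    X() : i(0) {}
    X& operator++()    { ++i; return *this; }             // prefix
    X  operator++(int) { X ret(*this); ++i; return ret; } // postfix
    operator int() const { return i; }                    // int cast
};

// Returns true if modifying `a` after the copy leaves `b` untouched.
bool are_independent()
{
    X a;
    X b = a++;  // b copies a's old value (0); a becomes 1
    ++a;        // a becomes 2; if b were an alias it would change too
    return int(a) == 2 && int(b) == 0;
}
```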
|
3,181,367 | 3,181,389 | What is an easy way to test whether any process of a given id is presently running on Linux? | In C++, I have a resource that is tied to a pid. Sometimes the process associated with that pid exits abnormally and leaks the resource.
Therefore, I'm thinking of putting the pid in the file that records the resource as being in use. Then when I go to get a resource, if I see an item as registered as being in use, I would search to see whether a process matching the pid is currently running, and if not, clean up the leaked resource.
I realize there is a very small probability that a new unrelated pid is now sharing the same number, but this is better than the leak with no cleanup that I have now.
Alternatively, perhaps there is a better solution for this; if so, please suggest it. Otherwise, I'll pursue the pid recording.
Further details: The resource is a port number for communication between a client and a server over tcp. Only one instance of the client may use a given port number on a machine. The port numbers are taken from a range of available port numbers to use. While the client is running, it notes the port number it is using in a special file on disk and then cleans this entry up on exit. For abnormal exit, this does not always get cleaned up and the port number is left annotated as being in use, when it is no longer being used.
| To check for existence of process with a given id, use kill(pid,0) (I assume you are on POSIX system). See man 2 kill for details.
Also, you can use waitpid call to be notified when the process finishes.
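A sketch of that existence check might look like this (the pid_exists name is made up for the example; the EPERM case covers a process that exists but that we are not allowed to signal):

```cpp
#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

// Returns 1 if a process with the given pid exists, 0 otherwise.
// kill(pid, 0) sends no signal at all; it only performs the existence
// and permission checks, which is exactly what we want here.
int pid_exists(pid_t pid)
{
    if (kill(pid, 0) == 0)
        return 1;               // process exists and we may signal it
    return errno == EPERM;      // exists, but owned by someone else
}
```

Remember that this is inherently racy: the pid can be reused between the check and whatever you do next, which is the small-probability caveat from the question.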
|
3,181,641 | 3,181,660 | How can I delete a file upon its close in C++ on Linux? | I wish for a file to be deleted from disk only when it is closed. Up until that point, other processes should be able to see the file on disk and read its contents, but eventually after the close of the file, it should be deleted from disk and no longer visible on disk to other processes.
| Open the file, then delete it while it's open. Other processes will be able to use the file, but as soon as all handles to file are closed, it will be deleted.
Edit: based on the comments WilliamKF added later, this won't accomplish what he wants -- it'll keep the file itself around until all handles to it are closed, but the directory entry for the file name will disappear as soon as you call unlink/remove.
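A sketch of the open-then-unlink technique (the file name scratch_file and the helper name are assumptions for the example):

```cpp
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

// Creates a file, unlinks it while it is still open, and keeps using it
// through the open descriptor. The directory entry disappears immediately;
// the data lives until the last close(). Returns 0 on success, -1 on failure.
int demo_unlink_while_open(void)
{
    int fd = open("scratch_file", O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0)
        return -1;

    unlink("scratch_file");             // name gone from the directory

    if (access("scratch_file", F_OK) == 0)
        return -1;                      // should not be visible anymore

    const char msg[] = "still usable";
    if (write(fd, msg, sizeof msg) != (ssize_t)sizeof msg)
        return -1;

    char buf[sizeof msg];
    lseek(fd, 0, SEEK_SET);
    if (read(fd, buf, sizeof buf) != (ssize_t)sizeof buf)
        return -1;

    close(fd);                          // storage reclaimed here
    return strcmp(buf, "still usable") == 0 ? 0 : -1;
}
```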
|
3,181,646 | 3,181,676 | Accessibility: Managed vs Unmanaged code | I'm about to begin working on an accessibility project for Windows (targeting XP through 7) and would like some advice on the pros and cons of using managed code vs unmanaged code. Basically the software will need to be able to read text from open windows, access menus, and do other common things that programs like JAWS or another screen reader are able to do. The question is, can I do this with C# or do I need to use C++?
For the last two years I've been developing a lot of C# code so it seems the quickest way to get started would be to play around with the System.Windows.Automation namespace. On the other hand, I haven't done much C++ or COM programming in quite a while and would have to spend some time refreshing before using unmanaged code.
What are the limitations of using C# code for Accessibility software? Are they severe enough to justify putting in a fair amount of time to refresh my C++ and COM skills because I'd run into a lot of things that just wouldn't be possible (or much more difficult) with managed code? Is the Automation namespace compatible with older applications (not using .NET or WPF)?
| Definitely managed code
Even if you run into situations that call for COM/unmanaged code, you can use it just for those modules.
.NET offers seamless interop between managed and unmanaged code.
With .NET 4.0 and C# 4.0, interop is even better.
And above all else, make sure you design a loosely coupled system so that there's always room for plugging in another module in an entirely different technology.
Just my two cents.
|
3,181,766 | 3,181,803 | How far to go with a strongly typed language? | Let's say I am writing an API, and one of my functions take a parameter that represents a channel, and will only ever be between the values 0 and 15. I could write it like this:
void Func(unsigned char channel)
{
if(channel > 15) // an unsigned char cannot be negative
{ /* throw some exception */ }
// do something
}
Or do I take advantage of C++ being a strongly typed language, and make myself a type:
class CChannel
{
public:
CChannel(unsigned char value) : m_Value(value)
{
if(value > 15) // an unsigned char cannot be negative
{ /* throw some exception */ }
}
operator unsigned char() { return m_Value; }
private:
unsigned char m_Value;
};
My function now becomes this:
void Func(const CChannel &channel)
{
// No input checking required
// do something
}
But is this total overkill? I like the self-documentation and the guarantee it is what it says it is, but is it worth paying the construction and destruction of such an object, let alone all the additional typing? Please let me know your comments and alternatives.
| If you want this, take the simpler approach and generalize it so you can get more use out of it, instead of tailoring it to a specific thing. Then the question is not "should I make an entire new class for this specific thing?" but "should I use my utilities?"; the latter is always yes. And utilities are always helpful.
So make something like:
template <typename T>
void check_range(const T& pX, const T& pMin, const T& pMax)
{
if (pX < pMin || pX > pMax)
throw std::out_of_range("check_range failed"); // or something else
}
Now you've already got this nice utility for checking ranges. Your code, even without the channel type, can already be made cleaner by using it. You can go further:
template <typename T, T Min, T Max>
class ranged_value
{
public:
typedef T value_type;
static const value_type minimum = Min;
static const value_type maximum = Max;
ranged_value(const value_type& pValue = value_type()) :
mValue(pValue)
{
check_range(mValue, minimum, maximum);
}
const value_type& value(void) const
{
return mValue;
}
// arguably dangerous
operator const value_type&(void) const
{
return mValue;
}
private:
value_type mValue;
};
Now you've got a nice utility, and can just do:
typedef ranged_value<unsigned char, 0, 15> channel;
void foo(const channel& pChannel);
And it's re-usable in other scenarios. Just stick it all in a "checked_ranges.hpp" file and use it whenever you need. It's never bad to make abstractions, and having utilities around isn't harmful.
Also, never worry about overhead. Creating a class simply consists of running the same code you would do anyway. Additionally, clean code is to be preferred over anything else; performance is a last concern. Once you're done, then you can get a profiler to measure (not guess) where the slow parts are.
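For reference, a condensed, compilable version of the utilities above (int is used instead of unsigned char purely for clarity, and the demonstration function at the end is an addition for the example):

```cpp
#include <stdexcept>

// Condensed version of the check_range / ranged_value utilities above.
template <typename T>
void check_range(const T& x, const T& min, const T& max)
{
    if (x < min || x > max)
        throw std::out_of_range("check_range failed");
}

template <typename T, T Min, T Max>
class ranged_value
{
public:
    ranged_value(const T& v = T()) : value_(v)
    {
        check_range(value_, Min, Max);  // validate on construction
    }
    const T& value() const { return value_; }
private:
    T value_;
};

typedef ranged_value<int, 0, 15> channel;

// Returns true if out-of-range construction throws as expected.
bool channel_is_checked()
{
    channel ok(15);             // in range: fine
    try {
        channel bad(16);        // out of range: must throw
        return false;
    } catch (const std::out_of_range&) {
        return ok.value() == 15;
    }
}
```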
|
3,181,910 | 3,220,369 | Is it possible to use custom c++ classes with overloaded operators in QtScript? | Does anyone know if it is possible to have a C++ class with overloaded operators such as +,-,* and declare it somehow (this is where the magic happens) to a QtScriptEngine such that js-expressions like "a+b" are evaluated as they would be on the C++ side?
| It seems to be impossible. At least that is what I received as an answer in the #qt-labs IRC.
However, I think I found a viable alternative: ChaiScript. It embeds itself wonderfully into C++, plays well with Qt and allows for the overloading of operators, and even better the direct use of any(?) C++ data type.
|
3,182,224 | 3,183,101 | C++ objects serialization for Linux | I'm writing a program that needs to send and receive data over the network. I have never dealt with object serialization. I have read some recommendations about Boost and Google Protocol Buffers. Which is the best for use on Linux?
If you know of any others, I would appreciate your help.
Thanks.
| I've used Boost.Serialization to serialize objects and transmit them over a socket. It's a very flexible library; objects can be serialized intrusively if you have access to them:
class Foo
{
public:
template<class Archive>
void serialize(Archive& ar, const unsigned int version)
{
ar & _foo;
ar & _bar;
}
int _foo;
int _bar;
};
or non-intrusively if you don't have access to the object you need to serialize
namespace boost {
namespace serialization {
template<class Archive>
void serialize(Archive& ar, Foo& f, const unsigned int version)
{
ar & f._foo;
ar & f._bar;
}
} // namespace serialization
} // namespace boost
There are tricks to serialize Foo if it does not expose its members (_foo and _bar here), the documentation explains this quite well. To serialize Foo, you use an object in the boost::archive namespace: text, binary, or xml.
std::stringstream ss;
boost::archive::text_oarchive ar( ss );
Foo foo;
foo._foo = 1;
foo._bar = 2;
ar << foo;
reconstructing the archive into a Foo object is done like so
boost::archive::text_iarchive ar( ss );
Foo foo;
ar >> foo;
Note this example is fairly trivial, and obviously when you introduce a network you'll be using sockets and buffers.
|
3,182,299 | 3,182,336 | How do event listeners work? | Do they repeatedly check for the condition and execute when the condition is met? For example, how does the OS know exactly when a USB device is plugged in, or how does MSN know exactly when you get an email? How does this work?
Thanks
| At the low level, the OS kernel "knows" when something happens, because the device in question sends the CPU a hardware interrupt.
So when, say a network packet arrives, the network controller sends an interrupt, and the OS kernel responds as appropriate.
At the program level, it works quite differently - most application programs run an "event loop", where they fetch a message (say, a message from the OS saying that "the mouse was clicked on this point in your application"), perform the appropriate actions in response to that, and then, listen for more messages. If there is no message, the OS sleeps the thread until it has a message to deliver.
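As a toy illustration of an event loop (everything here is simulated: a real loop would block in the fetch step while the OS sleeps the thread, and the event names are made up for the example):

```cpp
// A toy event loop. Real systems block in the "fetch next message" step
// rather than iterating over a pre-filled array, but the dispatch shape
// is the same: fetch, handle, repeat, until a quit message arrives.
enum event_type { EV_CLICK, EV_KEY, EV_QUIT };

struct event { event_type type; };

// Processes events until EV_QUIT is seen; returns how many were handled.
int run_event_loop(const event* queue, int count)
{
    int handled = 0;
    for (int i = 0; i < count; i++) {   // "fetch next message"
        if (queue[i].type == EV_QUIT)
            break;                      // leave the loop
        handled++;                      // "dispatch to a handler"
    }
    return handled;
}
```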
|
3,182,316 | 3,182,333 | what is the proper way to delete a pointer in an array? | //the setup
tiles = new Tile **[num_bands];
for( int i = 0 ; i < num_bands ; i++ )
tiles[i] = new Tile *[num_spokes];
for(int i=0; i < num_bands; i++){
for(int ii=0; ii < num_spokes; ii++){
tiles[i][ii] = 0;
}
}
//the problem
delete tiles[1][1];
When I delete a tile, tiles[1][1] still holds an address. I thought it should be a null pointer or 0x0, but its not. Am I deleting this wrong?
| delete isn't supposed to null the pointer; it's your own responsibility to do that if you want to.
Basically, delete just means "I no longer need the memory at this address, so you can use it for something else." - it doesn't say anything about what value pointers that pointed to the freed address will have.
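So if you want the slot to read as null afterwards, you have to null it yourself. A sketch (the Tile stand-in and the helper name are invented for the example):

```cpp
// delete only releases the memory; nulling the slot is the caller's job.
struct Tile { int id; };

// Deletes *slot and nulls it, so later code can test the pointer safely.
void destroy_tile(Tile*& slot)
{
    delete slot;   // fine even if slot is already 0
    slot = 0;      // now the slot no longer holds a stale address
}

bool reset_demo()
{
    Tile* t = new Tile();
    destroy_tile(t);
    return t == 0;
}
```

Passing the pointer by reference (Tile*&) is what lets the helper overwrite the caller's copy; a plain Tile* parameter could only null a local copy.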
|
3,182,443 | 3,182,833 | More on the mediator pattern and OO design | So, I've come back to ask, once more, a patterns-related question. This may be too generic to answer, but my problem is this (I am programming and applying concepts that I learn as I go along):
I have several structures within structures (note, I'm using the word structure in the general sense, not in the strict C struct sense (whoa, what a tongue twister)), and quite a bit of complicated inter-communications going on. Using the example of one of my earlier questions, I have Unit objects, UnitStatistics objects, General objects, Army objects, Soldier objects, Battle objects, and the list goes on, some organized in a tree structure.
After researching a little bit and asking around, I decided to use the mediator pattern because the interdependencies were becoming a trifle too much, and the classes were starting to appear too tightly coupled (yes, another term which I just learned and am too happy about not to use it somewhere). The pattern makes perfect sense and it should straighten some of the chaotic spaghetti that I currently have boiling in my project pot.
But well, I guess I haven't learned yet enough about OO design. My question is this (finally. PS, I hope it makes sense): should I have one central mediator that deals with all communications within the program, and is it even possible? Or should I have, say, an abstract mediator and one subclassed mediator per structure type that deals with communication of a particular set of classes, e.g. a concrete mediator per army which helps out the army, its general, its units, etc.
I'm leaning more towards the second option, but I really am no expert when it comes to OO design. So the third question is, what should I read to learn more about this kind of subject? (I've looked at Head First's Design Patterns and the GoF book, but they're more of a "learn the vocabulary" kind of book than a "learn how to use your vocabulary" kind of book, which is what I need in this case.)
As always, thanks for any and all help (including the witty comments).
| I don't think you've provided enough info above to be able to make an informed decision as to which is best.
From looking at your other questions it seems that most of the communication occurs between components within an Army. You don't mention much occurring between one Army and another. In which case it would seem to make sense to have each Mediator instance coordinate communication between the components comprising a single Army - i.e. the Generals, Soldiers etc. So if you have 10 Army objects then you will have 10 ArmyMediator objects.
If you really want to learn O-O Design you're going to have to try things out and run the risk of getting it wrong from time to time. I think you'll learn just as much, if not more, from having to refactor a design that doesn't quite model the problem correctly into one that does, as you will from getting the design right the first time around.
Often you just won't have enough information up front to be able to choose the right design from the go anyway. Just choose the simplest one that works for now, and improve it later when you have a better idea of the requirements and/or the shortcomings of the current design.
Regarding books, personally I think the GoF book is more useful if you focus less on the specific set of patterns they describe, and focus more on the overall approach of breaking classes down into smaller reusable components, each of which typically encapsulates a single unit of functionality.
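To make the one-mediator-per-Army idea concrete, here is a very small sketch; all class and member names are illustrative assumptions, not a prescription:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// One mediator instance per Army: components talk to their mediator
// instead of talking to each other directly.
class ArmyMediator {
public:
    void notify(const std::string& msg) { log_.push_back(msg); }
    std::size_t messages() const { return log_.size(); }
private:
    std::vector<std::string> log_;  // stands in for real coordination logic
};

class Soldier {
public:
    explicit Soldier(ArmyMediator& m) : mediator_(m) {}
    void report(const std::string& msg) { mediator_.notify(msg); }
private:
    ArmyMediator& mediator_;  // each Soldier knows only its own mediator
};

// Returns true if a report reaches only the reporting Soldier's own Army.
bool mediator_demo()
{
    ArmyMediator army1, army2;      // one mediator per Army
    Soldier s1(army1), s2(army2);
    s1.report("enemy sighted");
    return army1.messages() == 1 && army2.messages() == 0;
}
```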
|
3,182,733 | 3,182,787 | What is the pointer-to-pointer technique for the simpler traversal of linked lists? | Ten years ago, I was shown a technique for traversing a linked list: instead of using a single pointer, you used a double pointer (pointer-to-pointer).
The technique yielded smaller, more elegant code by eliminating the need to check for certain boundary/edge cases.
Does anyone know what this technique actually is?
| I think you mean double pointer as in "pointer to a pointer", which is very efficient for inserting at the end of a singly linked list or a tree structure. The idea is that once you find the end (a NULL pointer), you don't need a special case or a "trailing pointer" to follow your traversal pointer, since you can just dereference your pointer-to-pointer (it points to the last node's next pointer!) to insert. Something like this:
T **p = &list_start;
while (*p) {
p = &(*p)->next;
}
*p = new T;
instead of something like this:
T *p = list_start;
if (p == NULL) {
list_start = new T;
} else {
while (p->next) {
p = p->next;
}
p->next = new T;
}
NOTE: It is also useful for making efficient removal code for a singly linked list. At any point doing *p = (*p)->next will remove the node you are "looking at" (of course you still need to clean up the node's storage).
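Here is a sketch of that removal technique, including the storage cleanup (the node layout and the push helper are invented just to build a test list):

```cpp
// Remove every node matching `value` from a singly linked list using a
// pointer-to-pointer -- no special case for the head is needed, because
// `p` always points at the pointer that links in the current node.
struct node { int value; node* next; };

void remove_all(node** p, int value)
{
    while (*p) {
        if ((*p)->value == value) {
            node* victim = *p;
            *p = (*p)->next;    // rewrite the pointer we are "looking at"
            delete victim;      // clean up the node's storage
        } else {
            p = &(*p)->next;    // advance to the next node's link
        }
    }
}

// Helper for building a test list: pushes a new node at the head.
node* push(node* head, int value)
{
    node* n = new node;
    n->value = value;
    n->next = head;
    return n;
}
```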
|
3,182,843 | 3,182,864 | Writing stl compatible iterators | I'm trying to convert an iterator class I have to be stl compatible so that it can be used with the stl algorithms. In the following simple (and frankly useless) example, which should print the values 0 to 5 inclusive, I am getting the following errors,
ISO C++ forbids incrementing a pointer of type ‘Iterator (*)()’
and,
invalid conversion from ‘Iterator (*)()’ to ‘int’
What am I doing wrong?
Thanks.
#include <iterator>
#include <algorithm>
#include <iostream>
class Iterator : public std::iterator<std::bidirectional_iterator_tag, int> {
public:
Iterator(int i = 0) : val(i) {
if(val<0 || val>5) throw;
}
bool operator==(Iterator const& rhs) const {
return (val==rhs.val);
}
bool operator!=(Iterator const& rhs) const {
return !(*this==rhs);
}
Iterator& operator++() {
if(val!=6)
++val;
return *this;
}
Iterator operator++(int) {
Iterator tmp (*this);
++(*this);
return tmp;
}
Iterator& operator--() {
if(val!=-1)
--val;
return *this;
}
Iterator operator--(int) {
Iterator tmp (*this);
--(*this);
return tmp;
}
int operator* () const {
if(val==-1 || val==6) throw;
return val;
}
private:
int val;
};
Iterator begin() {
return Iterator();
}
Iterator end() {
return ++Iterator(5);
}
void print(int i) {
std::cout << i << std::endl;
}
int main(int argc, char* argv[]) {
std::for_each(begin,end,print);
}
| You are passing the functions begin and end to std::for_each, instead of the iterators that these functions would return:
std::for_each(begin,end,print);
It should be:
std::for_each(begin(),end(),print);
Also note that the empty throw statements, like in if(val==-1 || val==6) throw;, will not do anything good. You have to throw something, like throw std::out_of_range("out of bounds").
|
3,182,927 | 3,182,969 | Automata based alternative to re2 | I am needing to implement regular expressions in a C++ program I am writing, and I wanted to use re2 but I could not compile it on windows. Does anyone know of another regular expression library or whatever it's called that compiles easily on windows and isn't a "backtracking" regex engine, but an automata-theory based one (whatever that means) like re2 is?
Or just figuring out how to compile re2 on windows would be perfect.
| Regular expressions are part of the TR1 standard, so chances are you already have a <tr1/regex> header that contains a std::tr1::regex class and related functions.
|
3,183,319 | 3,183,585 | save gdb display to a variable | Is there a way to store the output of the last command in gdb to a string? What I would like to do is store the address information of selected machine level instructions. Redirecting the output is not a solution as it would generate too much output. A simulator would also be a solution, but I'd like to see if it would be possible with gdb as I only want the analysis on a small chunk of code.
So I would need something like this:
(gdb) display/i $pc
(gdb) 1: x/i $pc 0x100000d2e <main+61>: jle 0x100000d02 <main+17>
(gdb) set $foo = ??? somehow set this to display line 1
(gdb) call myFunc($foo)
(I excluded the looping controls to keep the example simple)
Or would there be another way of doing this?
| Not possible as far as I know, which is kind of surprising considering all the Lisp background of the author :) You'd need either redirection (grep, sed, and awk make wonders on large files, and there's always perl), or your own instruction decoding based on $pc, which I assume is not an option.
Then I don't really understand what you are trying to do. Figure out jump targets? Relocation correctness? What is it that you don't know about the code until runtime? More details can probably point in a better direction.
Edit:
Just some links - haven't tried it yet - you might want to play with script-extension setting and see if you can make Python command files work for you:
see Extending GDB and Python in GDB.
|
3,183,430 | 3,183,500 | remove_vertex when the graph VertexList=vecS | I have a Boost Graph with VertexList=vecS.
typedef adjacency_list <listS, vecS, undirectedS, TrackInformation, LinkInformation> TracksConnectionGraph;
Now I want to iterate through my vertices and remove those that have a specific property. How can I do this?
The problem is whenever I call remove_vertex, the iterator to the vertices in the graph along with the vertex descriptors are invalidated.
| I don't think it is possible (in a reasonable time) with vecS as a template parameter. Look what Boost documentation says:
If the VertexList template parameter of the adjacency_list was vecS, then all vertex descriptors, edge descriptors, and iterators for the graph are invalidated by this operation. <...> If you need to make frequent use of the remove_vertex() function the listS selector is a much better choice for the VertexList template parameter.
In case of listS the iterators are not invalidated by calling remove_vertex unless the iterator is pointing to the actual vertex that was removed.
|
3,183,670 | 3,183,690 | Can I combine setter and getter in one method, in C++? | I would like to combine setter/getter in one method, in C++, in order to be able to do the following:
Foo f;
f.name("Smith");
BOOST_CHECK_EQUAL("Smith", f.name());
I don't know how can I declare such a method inside Foo class:
class Foo {
public:
// how to set default value??
const string& name(const string& n /* = ??? */) {
if (false /* is it a new value? */) {
_name = n;
}
return _name;
}
private:
string _name;
};
I'm looking for some elegant solution, with a true C++ spirit :) Thanks!
| class Foo {
public:
const string& name() const {
return name_;
}
void name(const string& value) {
name_ = value;
}
private:
string name_;
};
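Putting the two overloads together, usage then matches the example from the question:

```cpp
#include <string>

// The overloaded accessor pair from the answer: the two member functions
// share the name `name` and are distinguished by their parameter lists.
class Foo {
public:
    const std::string& name() const { return name_; }          // getter
    void name(const std::string& value) { name_ = value; }     // setter
private:
    std::string name_;
};

// Returns true if the getter observes what the setter stored.
bool overload_demo()
{
    Foo f;
    f.name("Smith");             // setter overload
    return f.name() == "Smith";  // getter overload
}
```

Note there is no "default value" trick needed at all: overload resolution picks the right function from the argument list.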
|
3,183,678 | 3,183,709 | Operator overloading | From a language design point of view, what type of practice is supporting operator overloading?
What are the pros & cons (if any) ?
| EDIT: it has been mentioned that std::complex is a much better example than std::string for "good use" of operator overloading, so I am including an example of that as well:
std::complex<double> c;
c = 10.0;
c += 2.0;
c = std::complex<double>(10.0, 1.0);
c = c + 10.0;
Aside from the constructor syntax, it looks and acts just like any other built in type.
The primary pro is that you can create new types which act like the built-in types. A good example of this is std::string in C++ (see above for a better example), which is implemented in the library and is not a basic type. Yet you can write things like:
std::string s = "hello";
s += " world";
if(s == "hello world") {
//....
}
The downside is that it is easy to abuse. Poor choices in operator overloading can lead to accidentally inefficient or unclear code. Imagine if std::list had an operator[]. You may be tempted to write:
for(int i = 0; i < l.size(); ++i) {
l[i] += 10;
}
that's an O(n^2) algorithm! Ouch. Fortunately, std::list does not have operator[] since it is assumed to be an efficient operation.
|
3,183,710 | 3,183,731 | Forward declaration with friend function: invalid use of incomplete type | #include <iostream>
class B;
class A{
int a;
public:
friend void B::frndA();
};
class B{
int b;
public:
void frndA();
};
void B::frndA(){
A obj;
std::cout << "A.a = " << obj.a << std::endl;
}
int main() {
return 0;
}
When trying to compile this code, some errors occurred. E.g.
invalid use of incomplete type
What are the problems in this code?
| Place the whole of the class B ... declaration before class A. You haven't declared B::frndA(); yet.
#include <iostream>
using namespace std;
class B{
int b;
public:
void frndA();
};
class A{
int a;
public:
friend void B::frndA();
};
void B::frndA(){
A obj;
//cout<<"A.a = "<<obj.a<<endl;
}
int main() {
return 0;
}
|
3,183,826 | 3,184,063 | Deletion of objects send by signals, Ownership of objects in signals, Qt | Here, my signal declaration:
signals:
void mySignal(MyClass *);
And how I'm using it:
MyClass *myObject = new MyClass();
emit mySignal(myObject);
Here comes my problem: Who is responsible for deletion of myObject:
Sender code, what if it deletes before myObject is used? Dangling Pointer
The slot connected to signal, what if there is no slot or more than one slot which is connected to the signal? Memory Leak or Dangling Pointer
How does Qt manage this situation in its built-in signals? Does it use internal reference counting?
What are your best practices?
| You can connect a signal with as many slots as you want so you should make sure that none of those slots are able to do something you would not want them to do with your object:
if you decide to pass a pointer as a parameter then you will be running in the issues you describe, memory management - here nobody can to the work for you as you will have to establish a policy for dealing with allocation/deletion. To some ideas on how to address this see the Memory Management Rules in the COM world.
if you decide to pass a parameter as a reference then you don't have to worry about memory management but only about slots modifying your object in unexpected ways. The ideea is not to pass pointers unless you have to - instead use references if you can.
if you decide to pass a const reference then, depending on your connection type, QT will pass the value of the object for you (see this for some details)
avoid any problems and pass by value :)
See also this question for some thoughts about passing pointers in signals.
|
3,183,889 | 3,193,683 | How can I embed a licence.licx file in a cpp executable? | We are using a third party UI component which requires a licence.licx file. The problem is that the executable is a cpp project and embedding the licence.licx file into the dll which actually uses the third party component does not seem to work. By "does not work" I mean that we get runtime licensing errors when executing the program on a machine which does not have the third party lib installed. We are also using Prism, which means that the dll's are dynamically loaded at runtime.
Does anyone know how I can embed the licence.licx file into the cpp project or get the licensing system to resolve the licence file from a different location?
| It seems that the Assembly Linker can add resources to any PE file, including the EXE generated by the native C++ compiler. You'd do this as a post-build step.
|
3,184,030 | 3,184,348 | Is using implicit conversion for an upcast instead of QueryInterface() legal with multiple inheritance? | Assume I have a class implementing two or more COM interfaces (exactly as here):
class CMyClass : public IInterface1, public IInterface2 {
};
QueryInterface() must return the same pointer for each request of the same interface (it needs an explicit upcast for proper pointer adjustment):
if( iid == __uuidof( IUnknown ) ) {
*ppv = static_cast<IInterface1*>( this );
//call Addref(), return S_OK
} else if( iid == __uuidof( IInterface1 ) ) {
*ppv = static_cast<IInterface1*>( this );
//call Addref(), return S_OK
} else if( iid == __uuidof( IInterface2 ) ) {
*ppv = static_cast<IInterface2*>( this );
//call Addref(), return S_OK
} else {
*ppv = 0;
return E_NOINTERFACE;
}
now there're two IUnknowns in the object - one is the base of IInterface1 and the other is the base of IInterface2. And they are in different subobjects.
Let's pretend I called QueryInterface() for IInterface2 - the pointer returned will be different from the pointer returned when I call QueryInterface() for IUnknown. So far so good. Then I can pass the retrieved IInterface2* into any function accepting IUnknown* and thanks to C++ implicit conversion the pointer will be accepted, but it will be not the same pointer that QueryInterface() for IUnknown* would retrieve. In fact if that function calls QueryInterface() for IUnknown immediately upon being called it will retrieve a different pointer.
Is this legal in terms of COM? How do I handle situations when I have a pointer to a multiply-inherited object and I allow an implicit upcast?
| COM has no rules regarding interface identity, only object identity. The first rule of QI says that a QI for IID_IUnknown on two interface pointers must return the same pointer if they are implemented by the same object. Your QI implementation does this correctly.
Without a guarantee for interface identity, a COM method cannot assume that it gets the same IUnknown pointer passed that it will retrieve when it calls QI on that pointer. So if object identity needs to be proven then a separate QI is required.
|
3,184,088 | 3,184,189 | Help using semaphores and threads | I am using the pthread library to simulate a threaded buffer. I am also using semaphores as a solution to accessing critical section variables one at a time.
The main problem is that the producer is filling the entire buffer and the consumer is then emptying the entire buffer. Is this code correct? I was assuming that the production and consumption would occur before the buffer was full or empty.
Here is my code and any comments would help a lot, and yes this is for a class.
Thank you in advance
void *Producer(void *threadid)
{
long tid;
tid = (long)threadid;
while (c < Cycles) //While stuff to buffer
{
pthread_mutex_lock(&lock);
while(size == BUFFER_SIZE)
{
pthread_cond_wait(&cond, &lock);
}
buffer [full] = rand();
data << size+1 << ". Produce: " << buffer[full] << endl;
printBuffer();
full = (full + 1) % BUFFER_SIZE;
size++;
pthread_cond_signal(&cond1);
pthread_mutex_unlock(&lock);
c++;
}
pthread_exit(NULL);
}
You can also download all the code or see the log file...
download main.cpp
view the log file at funkohland.com/pthreads/log.txt
| This is a well-known problem with mutexes. A mutex operation is expensive and costs many cycles, and when you unlock the mutex the other thread has only a tiny window in which to wake up and acquire the lock. Basically, you need to spend less time inside the mutex to give the other thread an opportunity to run: identify the portion of code that actually requires the mutex, lock it, do only the work on the shared variable (and nothing more), and then unlock it.
|
3,184,118 | 3,184,129 | new, delete, malloc & free | This question was asked to me in an interview:
In C++,
what if we allocate memory using malloc and use delete to free that allocated memory?
what if we allocate the memory using new and free it using free?
What are the problems that we would face if the above things are used in the code?
My answer was there is no difference. Was I right in saying so?
| If you do so you will run into undefined behavior. Never try that. Although new might be implemented through malloc() and delete might be implemented through free() there's no guarantee that they are really implemented that way and also the user can overload new and delete at his discretion. You risk running into heap corruption.
Other than that don't forget that when you call malloc() you get raw memory - no constructor is invoked - and when you call free() no destructor is invoked. This can as well lead to undefined behavior and improper functioning of the program.
The bottom line is... never do this.
|
3,184,133 | 3,184,320 | Garbage values when writing and reading to a text file | Can someone help me with the problem in this code? I am getting a bunch of garbage values!
fstream fs("hello.txt");
if(fs.is_open())
{
string s = "hello";
string line;
fs << s;
while(getline(fs,line))
{
cout << line;
}
cin.get();
}
fs.close();
Thank you very much, but when I try to do this I am getting the same garbage. I am trying to overwrite the first "hello" with "world" and to print that line:
fstream fs("hello.txt");
if(fs.is_open())
{
string s = "hello";
string line;
fs << s << endl;
fs.seekg(0);
fs << "world" << endl;
fs.seekg(0);
while(getline(fs,line))
{
cout<<line;
}
cin.get();
}
fs.close();
| If hello.txt is empty prior to running the program then it seems to work for me. If the file contains more than 6 or 7 characters then your hello world code will overwrite the first 6 or 7 chars with "world" followed by the line terminator (which might be 1 or 2 chars depending on the platform). The remainder of the file won't be overwritten and will subsequently be printed by your getline loop.
|
3,184,205 | 3,269,905 | Undefined reference to external variable | Having problems with a custom logging system I've made. I am declaring an ofstream within my main file so that it is accessible by static functions within my class. This works for my static function (ilra_log_enabled). However, this does not work on my overloaded function for the class. I receive a undefined reference to "logfile" error.
Any ideas?
#ifndef ILRA_H_
#define ILRA_H_
// System libraries
#include <iostream>
#include <ostream>
#include <sstream>
#include <iomanip>
#include <fstream>
// Namespace
using namespace std;
// Classes
class ilra
{
static int ilralevel_set;
static int ilralevel_passed;
static bool relay_enabled;
static bool log_enabled;
static ofstream logfile;
public:
// constructor / destructor
ilra(const std::string &funcName, int toset)
{
// we got passed a loglevel!
ilralevel_passed = toset;
}
~ilra(){};
static void ilra_log_enabled(bool toset){
log_enabled = toset;
if (log_enabled == true){
// get current time
time_t rawtime;
time ( &rawtime );
// name of log file
string logname = "rclient-";
logname.append(rawtime + ".txt");
// open a log file
logfile.open(logname.c_str());
}
}
// output
template <class T>
ilra &operator<<(const T &v)
{
if(ilralevel_passed <= ilralevel_set)
std::cout << v;
if(log_enabled == true)
logfile << "Test"; // undefined reference to ilra::logfile
return *this;
}
}; // end of the class
#endif /* ILRA_H_ */
| I moved the variables to within the class and resolved the problem. Still not sure as to what was wrong with the previous method.
|
3,184,345 | 3,202,213 | fopen problem - too many open files | I have a multithreaded application running on Win XP. At a certain stage one of a threads is failing to open an existing file using fopen function. _get_errno function returns EMFILE which means Too many open files. No more file descriptors are available. FOPEN_MAX for my platform is 20. _getmaxstdio returns 512. I checked this with WinDbg and I see that about 100 files are open:
788 Handles
Type Count
Event 201
Section 12
File 101
Port 3
Directory 3
Mutant 32
WindowStation 2
Semaphore 351
Key 12
Thread 63
Desktop 1
IoCompletion 6
KeyedEvent 1
What is the reason that fopen fails ?
EDIT:
I wrote simple single threaded test application. This app can open 510 files. I don't understand why this app can open more files then multithreaded app. Can it be because of file handle leaks ?
#include <cstdio>
#include <cassert>
#include <cerrno>
void main()
{
int counter(0);
while (true)
{
char buffer[256] = {0};
sprintf(buffer, "C:\\temp\\abc\\abc%d.txt", counter++);
FILE* hFile = fopen(buffer, "wb+");
if (0 == hFile)
{
// check error code
int err(0);
errno_t ret = _get_errno(&err);
assert(0 == ret);
int maxAllowed = _getmaxstdio();
assert(hFile);
}
}
}
| I think that on Win32 all the CRT functions ultimately end up using the Win32 API underneath, so in this case fopen is most probably implemented on top of CreateFile/OpenFile. Now the CreateFile/OpenFile API is not meant only for files: it also opens directories, communication ports, pipes, mail slots, drive volumes, etc. So in a real application, depending on the number of these resources in use, your maximum number of open files may vary. Since you have not described much about the application, this is my first guess. If time permits, go through this: http://blogs.technet.com/b/markrussinovich/archive/2009/09/29/3283844.aspx
|
3,184,395 | 3,184,619 | Get previous value of QComboBox, which is in a QTableWidget, when the value is changed | Say I have a QTableWidget and in each row there is a QComboBox and a QSpinBox. Consider that I store their values in a QMap<QString /*Combo box val*/,int /*spin box val*/> theMap;
When a combo box's or spin box's value changes I want to update theMap. So I need to know the former value of the combo box in order to replace it with the new value, and also take care of the value of the spin box.
How can I do this?
P.S. I have decided to create a slot that stores the current value of the combo box of a row when you click on the table. But this works only when you press the row caption; in other places (clicking on a combo box or on a spin box) the itemSelectionChanged() signal of QTableWidget does not fire.
So in general my problem is to store the value of the combo box of the selected row; then I will get the combo box or spin box change event and process theMap easily.
| How about creating your own, derived QComboBox class, something along the lines of:
class MyComboBox : public QComboBox
{
Q_OBJECT
private:
QString _oldText;
public:
MyComboBox(QWidget *parent=0) : QComboBox(parent), _oldText()
{
connect(this,SIGNAL(editTextChanged(const QString&)), this,
SLOT(myTextChangedSlot(const QString&)));
connect(this,SIGNAL(currentIndexChanged(const QString&)), this,
SLOT(myTextChangedSlot(const QString&)));
}
private slots:
void myTextChangedSlot(const QString &newText)
{
emit myTextChangedSignal(_oldText, newText);
_oldText = newText;
}
signals:
void myTextChangedSignal(const QString &oldText, const QString &newText);
};
And then just connect to myTextChangedSignal instead, which now additionally provides the old combo box text.
I hope that helps.
|
3,184,401 | 3,184,423 | varargs(va_list va_start) doesn't work with pass-by-reference parameter |
Possible Duplicate:
Are there gotchas using varargs with reference parameters
Hi, I have a problem with varargs.
Look at my code(Microsoft Visual Studio 2005 or 2008).
#include <stdarg.h>
struct Test { int a; };
void T1(int n, ...) {
va_list args;
va_start(args, n);
char* p = va_arg(args, char*);
va_end(args);
}
void T2(Test n, ...) {
va_list args;
va_start(args, n);
char* p = va_arg(args, char*);
va_end(args);
}
void T3(const Test& n, ...) {
va_list args;
va_start(args, n);
char* p = va_arg(args, char*); // p corrupt!!
va_end(args);
}
int _tmain(int argc, _TCHAR* argv[]) {
const Test t;
T1(1, "Test1");
T2(t, "Test2");
T3(t, "Test3");
return 0;
}
function T1, T2 work well. But T3 function have a problem. The pointer p doesn't point "Test3". Can't I use va_start with pass-by-reference?
Thanks in advance.
| You cannot use references with va_start according to C++ Standard 18.7/3:
The restrictions that ISO C places on the second parameter to the va_start() macro in header <cstdarg>
are different in this International Standard. The parameter parmN is the identifier of the
rightmost parameter in the variable parameter list of the function definition (the one just before the ...).
If the parameter parmN is declared with a function, array, or reference type, or with a type that is not compatible
with the type that results when passing an argument for which there is no parameter, the behavior is
undefined.
|
3,184,445 | 3,184,473 | How to clear directory contents in c++ on Linux (basically, I want to do 'rm -rf <directorypath>/*') | I am writing a c++ program on Linux (Ubuntu). I would like to delete the contents of a directory. It can be loose files or sub-directories.
Essentially, I would like to do something equivalent to
rm -rf <path-to-directory>/*
Can you suggest the best way of doing this in c++ along with the required headers.
Is it possible to do this with sys/stat.h or sys/types.h or sys/dir.h ?!
| Boost.Filesystem's remove_all does exactly this: http://www.boost.org/doc/libs/1_31_0/libs/filesystem/doc/index.htm
|
3,184,453 | 3,338,915 | extract every audio-channel in a quicktime file | I want to extract every audio channel in a quicktime movie with the QuickTime-API. That means if the file has 5.1 surround, i want 6 audio files in the end.
But at the moment I don't know how to manage that. Until now I have:
OSStatus err = noErr;
MovieAudioExtractionRef extractionSessionRef = nil;
Boolean allChannelsDiscrete = true;
int flags;
int numFrames;
AudioBufferList *mBufferList;
err = MovieAudioExtractionBegin(movie, 0, &extractionSessionRef);
// disable mixing of audio channels
err = MovieAudioExtractionSetProperty(extractionSessionRef,
kQTPropertyClass_MovieAudioExtraction_Movie,
kQTMovieAudioExtractionMoviePropertyID_AllChannelsDiscrete,
sizeof (Boolean), &allChannelsDiscrete);
err = MovieAudioExtractionFillBuffer(extractionSessionRef, &numFrames,
mBufferList, &flags);
if (flags & kQTMovieAudioExtractionComplete)
{
// extraction complete!
}
err = MovieAudioExtractionEnd(extractionSessionRef);
The problem is that I don't know how to get mBufferList and how to export every channel as WAV 48kHz. Can you help me? The example is from this page.
| Have you checked with this?
|
3,184,682 | 3,184,716 | map iterator in template function unrecognized by compiler | I have the following code.
template<class key,class val>
bool has_key(key chkey,std::map<key,val> map){
for (std::map<key,val>::iterator it = map.begin(); #line 13 referenced by gcc
it!=map.end(); ++it){
if(chkey == it->first) return true;
}
return false;
}
GCC is giving me the following error.
objects.hpp: In function `bool has_key(key, std::map<key, val, std::less<_Key>,
std::allocator<std::pair<const _Key, _Tp> > >)':
objects.hpp:13: error: expected `;' before "it"
objects.hpp:14: error: `it' was not declared in this scope
Somehow "it" is not being initialized, what in Sam Hain is going on here?!
| You need the typename keyword:
for (typename std::map<key,val>::iterator it = map.begin(); #line 13 referenced by gcc
it!=map.end(); ++it){
See also: Why do we need typename here?
This is because you are in a template definition and iterator is a dependent name. This has been asked before.
g++ "is not a type" error
C++ Template: 'is not derived from type'
Trouble with dependent types in templates
|
3,184,844 | 3,202,295 | SIP RTP Stack for IVR Application | I have an IVR application which plays the prompts and records the user message and detects the DTMF. Currently the application is based on SS7 signaling and uses the PSTN based media boards for media play / record functionality.
Now I have to move this application to IP based solution. For this, I need any open-source / low-cost solution to enable SIP based signaling and RTP based media flow and G.711 encoding / decoding and support for Windows and Linux.
My application needs to handle around 500 concurrent calls, and the code is closed code so cannot be GPL'ed. I will prefer LGPL code.
I have shortlisted few solutions:
OSIP(LGPL) + ORTP(LGPL) + eXosip(GPL)
PJSIP + PJMEDIA (GPL / Licensed)
Radvision (Too costly).
Please suggest me what is the best option among these or else let me know if any other better option is available.
GJ
| Radvision has an extensive SIP stack and it can be used seamlessly if you are ready to pay for it.
I would suggest going for option 1, but you may have to tweak the code a lot to get the correct interfaces, depending on the platform you are going to use it on.
Exclude GPL-licensed software if you don't intend to make your code open.
|
3,184,893 | 3,184,971 | Use next_permutation to permute a vector of classes | Is it possible to use std::next_permutation() to permute the elements of a vector of a class I created?
How does the comparison parameter in next_permutation() work?
|
Is it possible to use std::next_permutation() to permute the elements of a vector of a class I created?
Yes!
Try this
#include<iostream>
#include<vector>
#include<algorithm>
int main()
{
typedef std::vector<int> V; //<or_any_class>
V v;
for(int i=1;i<=5;++i)
v.push_back(i*10);
do{
std::cout<<v[0]<<" "<<v[1]<<" "<<v[2]<<" "<<v[3]<<" "<<v[4]<<std::endl;;
}
while(std::next_permutation(v.begin(),v.end()));
}
How does the comparison parameter in next_permutation() work?
This might help
|
3,184,939 | 3,185,008 | why does creating a local type vector fail | #include <iostream>
#include <vector>
int main()
{
class Int {
public:
Int(int _i) : i(i) {}
private:
int i;
};
std::vector<Int> VI;
}
I try to compile the above code and got the following error message:
foo.cc: In function 'int main()':
foo.cc:13: error: 'main()::Int' uses local type 'main()::Int'
foo.cc:13: error: trying to instantiate 'template<class _Alloc> class std::allocator'
foo.cc:13: error: template argument 2 is invalid
foo.cc:13: error: invalid type in declaration before ';' token
Could anyone of you tell me why I can't do things like this in C++? Thanks in advance.
| The standard explicitly prohibits using local classes to instantiate templates in 14.3.1[temp.arg.type]/2.
A local type, a type with no linkage, an unnamed type or a type compounded from any of these types shall not be used as a template-argument for a template type-parameter.
This will be changed in C++0x.
|
3,185,132 | 3,185,346 | How to combine a function and a predicate in for_each? | How can you call a Function over some part of a container, using for_each() ?
I have created a for_each_if() to do a
for( i in shapes )
if( i.color == 1 )
displayShape(i);
and the call looks like
for_each_if( shapes.begin(), shapes.end(),
bind2nd( ptr_fun(colorEquals), 0 ),
ptr_fun( displayShape ) );
bool colorEquals( Shape& s, int color ) {
return s.color == color;
}
However, I feel that imitating STL-like algorithms is not something I should be doing.
Is there a way to use only existing STL keywords to produce this ?
I did not want to do a
for_each( shapes.begin(), shapes.end(),
bind2nd( ptr_fun(display_shape_if_color_equals), 0 ) );
because, in a more complicated case, the functor name would be misleading with respect to what the functor actually does.
*Is there a way to access a struct's member (like colorEquals) for functions like for_each without having to create a function?*
| To use a regular for_each with an if you would need a Functor that emulates an if condition.
#include <algorithm>
#include <vector>
#include <functional>
#include <iostream>
#include <boost/bind.hpp>
using namespace std;
struct incr {
typedef void result_type;
void operator()(int& i) { ++i; }
};
struct is_odd {
typedef bool result_type;
bool operator() (const int& value) {return (value%2)==1; }
};
template<class Fun, class Cond>
struct if_fun {
typedef void result_type;
void operator()(Fun fun, Cond cond, int& i) {
if(cond(i)) fun(i);
}
};
int main() {
vector<int> vec;
for(int i = 0; i < 10; ++i) vec.push_back(i);
for_each(vec.begin(), vec.end(), boost::bind(if_fun<incr, is_odd>(), incr(), is_odd(), _1));
for(vector<int>::const_iterator it = vec.begin(); it != vec.end(); ++it)
cout << *it << " ";
}
Unfortunately my template hackery isn't good enough to manage this with bind1st and bind2nd, as it somehow gets confusing with the returned binder being a unary_function, but it looks pretty good with boost::bind anyhow. My example is by no means perfect, as it doesn't allow the Fun passed into if_fun to return a value, and I guess somebody could point out more flaws. Suggestions are welcome.
|
3,185,243 | 3,185,297 | Regarding C++ class access/manipulation in C | I've been reading questions on Stack Overflow for a few weeks now... this'll be my first question.
So recently I've looked into making C access/manipulate a C++ class. I understand that ideally one shouldn't compile components in C and C++ separately under normal circumstances, but this isn't an option at the moment.
I looked into 3 Tutorials regarding being able to port/use a C++ in C. They are:
"A Guide to C++ and C Interoperability" on DevX
"Mixing C and C++ Code in the Same Program" article on Sun's site.
"[32] How to mix C and C++" on Parashift
First, what I already know:
You must use extern "C" to avoid
C++ function name mangling.
You need callback prototypes that are C-compatible.
G++ must compile the C++ into .o files, GCC compiles the C-specific code into .o files, then link both after.
As a result, the project I have is made of 4 files:
foo.h, header that'll list all prototypes that C/C++ will see (classes invisible to C of course)
foo.cpp containing the Foo class, and a set of C-compatible callback functions to invoke the class and methods.
fooWrap.c a set of C-specific wrappers that reference the callback functions in foo.cpp.
main.c the test method.
Here's the code I typed up, then my questions:
FOO.H
// Header File foo.h
#ifndef FOO_H
#define FOO_H
//Content set inside this #ifdef will be unseen by C compilers
#ifdef __cplusplus
class Foo
{
public:
void setBar(int);
void printBar();
private:
int bar;
};
#endif
//end of C++-only visible components.
#ifdef __cplusplus
extern "C" {
#endif
//Stuff made to be seen by C compilers only. fooWrap.c has definitions.
#if defined(__STDC__) && !defined(__cplusplus)
typedef struct Foo Foo;
//C-wrappers for C++ callback functions.
Foo * c_NewFoo();
void c_SetFooBar( Foo *, int);
void c_PrintFooBar( Foo *);
#endif
//These are the functions C++ AND C can both use...
Foo * newFoo(); //allocates the memory for Foo class, pass address back.
void setFooBar( Foo * , int ); //set internal contents of Foo object.
void printFooBar ( Foo * ); //print internal contents of Foo object.
#ifdef __cplusplus
}
#endif
#endif /*FOO_H*/
TEST.C
#include "foo.h"
// test.c test file for wrappers that manipulate C++ objects.
main()
{
//looks very C++ like... this makes C-Programmers cringe doesn't it?
Foo * cfoo = c_NewFoo();
Foo * cppfoo = newFoo();
//using the C-specific wrappers.
c_SetFooBar(cfoo,31415);
c_PrintFooBar(cfoo);
//using the C/C++ callback functions to Foo objects.
setFooBar(cppfoo,9001);
printFooBar(cppfoo);
}
So I split the definitions up into the 4 files as I mentioned before... and it compiles fine. But here's what I don't quite get.
Why do the sun and parashift articles suggest to create C-Wrappers whose only code is to pass it's arguments onto C/C++ compatible functions who then call C++ specific code?
i.e.
//in Stuff.cpp
void CallCppStuff () { /* c++ stuff */ }
//in wrapStuff.c
wrapCppStuff() { CallCppStuff() }
As you can see from my test.c file... I'm able to call up either set of calls without a problem (as far as I can tell). Are the c_ wrappers needless overhead, or am I missing the whole point of them altogether? My only guess has something to do with pointer addressing schemes of C/C++... but I'm not sure.
Also, I imagine there are more issues beyond just this... but those 3 sites are all I could find specific to this problem. So if there are any other glaring oversights on my part, I'd appreciate their mentioning.
Thanks in advance for any help/advice,
CX
| If you have a series of functions that are not object-orientated or in a namespace, there's no need to wrap them again. Your c_ series of functions are redundant.
Any C++ function that is extern C, has global (i.e., not namespace/static member) linkage, and only takes C-compat datatypes (normally we use opaque pointers like you have), then it doesn't need to be wrapped. That is the wrapping function. C++ uses member functions directly and doesn't need to use them, and they certainly don't need to be duped.
|
3,185,374 | 3,185,683 | observable container for C++ | Is there an implementation of container classes for C++ which support notification in a similar way as ObservableCollection for C#?
| There is no standard class like you describe, but Boost.Signals is quite a powerful notification library. I would create a wrapper for objects that raises a signal when it is changed, along the lines of this:
#include <boost/signals.hpp>
#include <vector>
#include <iostream>
// Wrapper to allow notification when an object is modified.
template <typename Type>
class Observable
{
public:
// Instantiate one of these to allow modification.
// The observers will be notified when this is destroyed after the modification.
class Transaction
{
public:
explicit Transaction(Observable& parent) :
object(parent.object), parent(parent) {}
~Transaction() {parent.changed();}
Type& object;
private:
Transaction(const Transaction&); // prevent copying
void operator=(const Transaction&); // prevent assignment
Observable& parent;
};
// Connect an observer to this object.
template <typename Slot>
void Connect(const Slot& slot) {changed.connect(slot);}
// Read-only access to the object.
const Type& Get() const {return object;}
private:
boost::signal<void()> changed;
Type object;
};
// Usage example
void callback() {std::cout << "Changed\n";}
int main()
{
typedef std::vector<int> Vector;
Observable<Vector> o;
o.Connect(callback);
{
Observable<Vector>::Transaction t(o);
t.object.push_back(1);
t.object.push_back(2);
} // callback called here
}
|
3,185,380 | 3,185,666 | Boost.Test output_test_stream fails with templated output operator | I have a class:
class foo {
private:
std::string data;
public:
foo &append(const char* str, size_t n) { data.append(str,n); return *this; }
// for debug output
template <typename T>
friend T& operator<< (T &out, foo const &f);
// some other stuff
};
template <typename T>
T& operator<< (T &out, foo const &f) {
return out << f.data;
}
I want this to work with any class that provides the << operator.
This works fine with std::cout as in:
std::cout << fooObject;
But the following fails:
BOOST_AUTO_TEST_CASE( foo_append_and_output_operator )
{
// fooObject is accessable here
const char* str = "hello";
fooObject.append(str, strlen(str));
output_test_stream output;
output << fooObject;
BOOST_CHECK( output.is_equal(str) );
}
g++ tells me that:
In function ‘T& operator<<(T&, const foo&)
[with T = boost::test_tools::output_test_stream]’:
error: invalid initialization of reference of type
‘boost::test_tools::output_test_stream&’ from expression of type
‘std::basic_ostream<char, std::char_traits<char> >’
What's going on?
I'm using Boost 1.34.1 on Ubuntu 8.04.
| So I think I have an explanation, but no solution yet. output_test_stream implements its stream functionality by subclassing wrap_stringstream. The insertion-operator for this is a free function-template that looks like this:
template <typename CharT, typename T>
inline basic_wrap_stringstream<CharT>&
operator<<( basic_wrap_stringstream<CharT>& targ, T const& t )
{
targ.stream() << t;
return targ;
}
// ... further down in the same header
typedef basic_wrap_stringstream<char> wrap_stringstream;
Your operator is called with output_test_stream as the stream type, and that makes it the return type. Your operator then calls the above operator and just propagates the return value. The return value of the above operator, however, is a superclass of the return type of your operator. When the compiler tries to create the reference you want to return, it chokes, because it cannot initialize a reference to a subclass from a reference to a superclass, even if both refer to the same object. Does that make any sense?
|
3,185,582 | 3,185,650 | Template subclass pointer problem | In a moment of madness, I decided to write a quadtree C++ template class. I've run into some weird compiler error that I don't understand with regard to subclasses and pointers to templates. I've found some hacky work arounds, but I wondered if anyone could shed some light on why my code wouldn't compile...
I'm on Linux, building with scons, using g++
My code looks something like this, I have a template class to describe a tree and a subclass describing 'leaves':
template <class value_type>
class QuadTree
{
public:
class Leaf //-Subclass--------------------------
{
friend class QuadTree< value_type >;
protected:
value_type* m_data;
Leaf();
~Leaf();
}; //-end-subclass------------------------------
QuadTree();
~QuadTree();
Leaf * Insert ( const value_type & _x );
protected:
QuadTree( QuadTree< value_type >* _parent );
QuadTree< value_type >* m_parent;
QuadTree< value_type >* m_children[4];
std::set< Leaf* > m_leaves;
};
First pointer problem I get is in the QuadTree destructor:
template <class value_type>
QuadTree< value_type >::~QuadTree()
{
// ... Delete children ...
// I allocate each leaf, so I need to delete them
std::set< Leaf* >::iterator it = m_leaves.begin(); // <-- bad
std::set< Leaf* >::iterator endit = m_leaves.end(); // <-- bad
for(;it != endit; ++it)
delete *it;
}
When I compile I get this error: expected ';' before ‘it’ and expected ';' before ‘endit’.
The other pointer error is in the Insert function definition:
template <class value_type>
Leaf * QuadTree< value_type >::Insert ( const value_type & _x ) // <-- bad
{
// Insert stuff...
}
I get the compile error: expected constructor, destructor, or type conversion before ‘*’ token
Anyone know why I'm getting these errors? I've got fixes for the problems, but I want to know why I can't do it this way.
Ps. I've edited the code to show it here, so it is possible I've missed something I thought utterly irrelevant.
Edit. Fixed Quadtree -> QuadTree typo
| You need
typename std::set< Leaf* >::iterator it = m_leaves.begin();
typename std::set< Leaf* >::iterator endit = m_leaves.end();
The type of std::set depends on another template argument and you have to tell the compiler that this is actually a type. gcc 4.5.0 produces a better error message.
The second error is similar:
template <class value_type>
typename QuadTree<value_type>::Leaf* QuadTree< value_type >::Insert ( const value_type & _x )
{
// Insert stuff...
}
Leaf is a inner class to QuadTree. You need to name it as such and you need to specify the type of the QuadTree as the inner class depends on the template parameter.
Another thing: You have a typo in QuadTree in many places.
|
3,185,593 | 3,185,957 | Is the primary implementation of *any* popular programming language interpreter written in C++? | At the moment I am considering whether or not to rewrite a programming language interpreter that I maintain in C++. The interpreter is currently implemented in C.
But I was wondering, is the primary implementation—because, certainly, people have made versions of many interpreters using a language other than the one used by the original authors—of any popular programming language interpreter currently in use today written in C++?
And, if not, is there a good reason for not writing an interpreter in C++? It is my understanding that C++ code, if written correctly, can be very portable and can potentially compile to run just as fast as compiled C code that does the same thing.
| I wrote an interpreter in C++ (after many in C over the years) and I think that C++ is a decent language for that. The one implementation choice I would travel back in time and change is supporting several different interpreter instances running at the same time (each one multithreaded), simply because it made the code more complex and it's something that was never used. Multithreading is quite useful, but multiple instances of the interpreter were pointless...
However, now my big regret is the very fact that I wrote that interpreter, because it's now used in production with a fair amount of code written and people trained for it, and because the language is quite a bit uglier and less powerful than Python... but switching to Python now would add costs. It has no bugs known to me... yet it's still worse than Python, and that is itself a bug (in addition to the error of paying the cost of writing it for no reason).
I simply should have used python initially instead (or lua or any other ready made interpreter that can easily be embedded and that has a reasonable licensing)... my only excuse for this is that I didn't know about python or lua at that time.
While writing an interpreter is a fun thing to do as a programming exercise, I'd suggest you avoid writing your own for production, especially (please don't take it personally) if the care that low-level complexity requires is out of your reach (I find, for example, the presence of several memory leaks quite shocking).
C++ is still a low level language and while you can get some help for example on the memory handling side still the main assumption of the language is that your code is 100% right as no runtime error is going to help you (only undefined behaviour daemons).
If you missed this assumption of 100% correct code in C (a much simpler language), then I don't see how you can be confident you'll write correct code in C++ (a complexity monster in comparison). I suspect you would just end up with another buggy interpreter that you'll have to throw away.
|
3,185,672 | 3,185,759 | Boost multi-index with slow insertion performance | I have the following code (which largely follows the first example here: http://www.boost.org/doc/libs/1_42_0/libs/multi_index/doc/examples.html). For some reason, with only 10000 insertions into the multi-index, it takes several minutes to run the program. Am I doing something wrong or is this expected?
struct A
{
int id;
int name;
int age;
A(int id_,int name_,int age_):id(id_),name(name_),age(age_){}
};
/* tags for accessing the corresponding indices*/
struct id{};
struct name{};
struct age{};
typedef multi_index_container<
A,
indexed_by<
ordered_unique<
tag<id>, BOOST_MULTI_INDEX_MEMBER(A,int,id)>,
ordered_non_unique<
tag<name>,BOOST_MULTI_INDEX_MEMBER(A,int,name)>,
ordered_non_unique<
tag<age>, BOOST_MULTI_INDEX_MEMBER(A,int,age)> >
> A_set;
int main()
{
A_set es;
for (int a = 0; a != 10000; a++) {
es.insert(A(a,a+1,a+2));
}
return 0;
}
| Are you by any chance compiling in debug mode? It finishes near instantly with the default release configuration in Visual Studio 2008. If you're compiling in debug mode and you almost followed the example to the letter, including the pre-processor stuff and still had this part:
#ifndef NDEBUG
#define BOOST_MULTI_INDEX_ENABLE_INVARIANT_CHECKING
#define BOOST_MULTI_INDEX_ENABLE_SAFE_MODE
#endif
Then removing/disabling these defines will also significantly speed up execution time. (By at least 180x on my machine; I didn't bother letting it finish.) What the consequences of disabling or removing these things in a debug build are, I don't know.
|
3,186,074 | 3,186,257 | On Passing a 2D-Array into a function | This is not so much a question on, "How do I pass it into the function?" but rather, "Is this acceptable?"
void func( int **ptr );
int main( int argc, char* argv[] )
{
int arr[][3] = {{1, 2,}, {3, 4}, {5, 6}};
int *pArr = *arr;
(&pArr[0])[1] = 3;
func(&pArr);
cin.get();
return 0;
}
void func( int **ptr )
{
cout << "In func()" << endl;
ptr[0][1] = 5;
}
This works as far as I can tell. It doesn't really feel safe to me, but I like it more than passing a 2D array into a function. Instead, a pointer that does the work for me.
Would this be very confusing for people who had to read my code? Should I use other methods instead? Is it a bad idea to work with pointers to arrays?
Also, question a little bit off-topic. Why can I write:
int arr[] = { 1, 2, 3, 4, 5 };
int *pArr = *arr;
but why can't I write
int arr[][3] = {{1, 2,}, {3, 4}, {5, 6}};
int *pArr = **arr;
or even use a **pArr?
| Let's dissect this carefully, since array-to-pointer conversions are sometimes confusing.
int arr[][3] = {{1, 2,}, {3, 4}, {5, 6}};
arr is now an array of 3 arrays of 3 ints.
int *pArr = *arr;
*arr uses the array arr in expression, so it decays to a pointer to the first element of arr -- that is pointer to array of 3 ints (the array containing {1,2,0}). Dereferencing that pointer (with *) gives you the array of 3 ints. Now you're using that array in an expression and it decays to a pointer to int, which is assigned to pArr.
(&pArr[0])[1] = 3;
pArr[0] gives the integer at which pArr is pointing (the number 1). &pArr[0] makes a pointer at that integer (which is actually equal to pArr). Indexing that pointer with [1] gives a reference to the next integer after the number 1, which is the number 2. To that reference you're assigning 3. Here's the catch: pointer to an element of an array can only be used to access other elements of the same array. Your pointer points at an element of the array {1, 2, 0}, which you've changed to {1, 3, 0}, and that's fine, but
func(&pArr);
Now you're creating a pointer to pointer to int (since pArr was a pointer to int), and passing that to your function.
ptr[0][1] = 5;
And now you've taken ptr[0], which evaluates to the pointed-to object, which is your original pointer pArr. This line is equivalent to pArr[1] = 5;, and that is still valid (changing your {1,2,0} array to {1,5,0}). However, ptr[1][... would be invalid, because ptr is not pointing at an element of an array of any kind. It's pointing at a standalone pointer. Incrementing ptr will make it point at uninitialized memory and dereferencing that will be undefined behavior.
And for the additional questions:
You should not be able to write this:
int arr[] = { 1, 2, 3, 4, 5 };
int *pArr = *arr;
The array arr decays to a pointer-to-int (pointing at the number 1), dereferencing that gives the integer, and an integer cannot be assigned to the pointer pArr. gcc says error: invalid conversion from 'int' to 'int*'.
Likewise, you cannot write this:
int arr[][3] = {{1, 2,}, {3, 4}, {5, 6}};
int *pArr = **arr;
for the same reason: *arr is the array of 3 ints {1, 2, 0}, **arr is the integer 1, and an integer cannot be assigned to a pointer.
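As a side note, the conventional way to pass such a built-in 2D array avoids the int** detour entirely: the array decays to a pointer to its first row, of type int (*)[3]. A minimal sketch (the function name is mine):

```cpp
// A built-in 2D array decays to a pointer to its first row; the column
// count (3) must be part of the type, only the row count is passed.
int sum_all(int (*rows)[3], int nrows) {
    int total = 0;
    for (int r = 0; r < nrows; ++r)
        for (int c = 0; c < 3; ++c)
            total += rows[r][c];
    return total;
}
```

Called as sum_all(arr, 3), this visits every element without any intermediate int* variable.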
|
3,186,226 | 3,186,255 | Why shouldn't I put "using namespace std" in a header? | Someone once hinted that doing this in a header file is not advised:
using namespace std;
Why is it not advised?
Could it cause linker errors like this: (linewrapped for convenience)
error LNK2005: "public: __thiscall std::basic_string<char,struct
std::char_traits<char>,class std::allocator<char> >::
~basic_string<char,struct std::char_traits<char>,class std::allocator<char> >
(void)" (??1?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@QAE@XZ)
already defined in tools.lib(Exception.obj)
| Because it forces anyone who uses your header file to bring the std namespace into global scope. This could be a problem if they have a class that has the same name as one of the standard library classes.
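The usual header-friendly alternative is to qualify names explicitly instead of writing a using-directive. A minimal sketch (the type and function here are my own invention):

```cpp
#include <string>

// Header-safe style: spell out std:: rather than pulling the whole
// namespace into every file that includes this header.
struct Person {
    std::string name;
};

inline std::string greet(const Person& p) {
    return "hello, " + p.name;
}
```

Anyone including such a header keeps their global scope clean and can still define their own class named string without a clash.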
|
3,186,238 | 3,186,258 | How do I assign a pointer to a variable? | Suppose we have a variable k which is equal 7:
int k=7;
int t=&k;
But this does not work. What's the mistake?
| &k takes the address of k. You probably mean
int *t = &k;
I have a good read for you: Alf P. Steinbach's pointer tutorial.
|
3,186,243 | 3,186,292 | Strange behaviour with templates and #defines | I have the following definitions:
template<typename T1, typename T2>
class Test2
{
public:
static int hello() { return 0; }
};
template<typename T>
class Test1
{
public:
static int hello() { return 0; }
};
#define VERIFY_R(call) { if (call == 0) printf("yea");}
With these, I try to compile the following:
VERIFY_R( Test1<int>::hello() );
this compiles fine
VERIFY_R( (Test2<int,int>::hello()) );
this also compiles fine, notice the parentheses around the call.
VERIFY_R( Test2<int,int>::hello() );
This, without the parentheses produces a warning and several syntax errors:
warning C4002: too many actual parameters for macro 'VERIFY_R'
error C2143: syntax error : missing ',' before ')'
error C2059: syntax error : ')'
error C2143: syntax error : missing ';' before '}'
error C2143: syntax error : missing ';' before '}'
error C2143: syntax error : missing ';' before '}'
fatal error C1004: unexpected end-of-file found
What's going on here?
This happens with VS2008 SP1.
| The preprocessor knows nothing about templates, so the comma inside Test2<int,int> is treated as a macro argument separator: VERIFY_R appears to receive two arguments, Test2<int and int>::hello(), hence the "too many actual parameters" warning and the syntax errors that follow. An extra set of parentheses (your second example) is one way of disambiguating, because the preprocessor does match parentheses when splitting arguments. The comma is genuinely ambiguous - consider a macro
#define VERIFY(A, B) { if ( (A) && (B) ) printf("hi"); }
where a call like VERIFY( a < b, x > y ) really does mean two arguments; the preprocessor cannot know whether a comma between < and > sits inside a template argument list.
Another way of disambiguating is with
typedef Test2<int,int> TestII;
VERIFY_R( TestII::hello() );
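Another option, assuming your compiler supports variadic macros (a C99 feature that gcc and VC++ 2005 and later accept, standardized for C++ in C++0x), is to let __VA_ARGS__ absorb the commas:

```cpp
#include <cstdio>

// __VA_ARGS__ re-joins whatever the preprocessor split on commas, so a
// template-id like Test2<int,int>::hello() passes through intact.
#define VERIFY_R(...) { if ((__VA_ARGS__) == 0) std::printf("yea"); }

template <typename T1, typename T2>
struct Test2 {
    static int hello() { return 0; }
};
```

With this definition, VERIFY_R( Test2<int,int>::hello() ); compiles without extra parentheses or a typedef.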
|
3,186,450 | 3,186,811 | Unicode Woes! Ms-Access 97 migration to Ms-Access 2007 | Problem is categorized in two steps:
Problem Step 1. Access 97 db containing XML strings that are encoded in UTF-8.
The problem boils down to this: the Access 97 db contains XML strings that are encoded in UTF-8. So I created a patch tool for separate conversion of the XML strings from UTF-8 to Unicode. In order to convert a UTF-8 string to Unicode, I have used the function
MultiByteToWideChar(CP_UTF8, 0, PChar(OriginalName), -1, @newName, Size); (where newName is an array declared as "newName : Array[0..2048] of WideChar;").
This function works well in most cases - I have checked it with Spanish and Arabic characters - but when I work with Greek and Chinese characters it chokes.
For some Greek characters like "Ευγ. ΚαÏαβιά" (as stored in Access 97), the resultant new string contains null characters in between, and when it is stored to a wide string the characters are getting clipped.
For some Chinese characters like "?¢»?µ?" (as stored in Access 97), the result is totally absurd, like "?¢»?µ?".
Problem Step 2. Access 97 db Text Strings, Application GUI takes unicode input and saved in Access-97
First I checked with Arabic and Spanish characters; it seemed then that no explicit character encoding was required. But again the problem comes with Greek and Chinese characters.
I tried the same function mentioned above for the text conversion (is that correct?); the result was again disappointing. The Spanish characters, which are OK without conversion, either get lost or are converted to plain ASCII alphabet characters.
The Greek and Chinese characters show behaviour similar to that mentioned in step 1.
Please guide me. Am I taking the right approach? Is there some other way around???
Well Right now I am confused and full of Questions :)
| There is no special requirement for working with Greek characters. The real problem is that the characters were stored in an encoding that Access doesn't recognize in the first place. When the application stored the UTF-8 values in the database, it tried to convert every single byte to the equivalent byte in the database's codepage. Every character that had no correspondence in that encoding was replaced with ?. That may mean that the Greek text is OK, while the Chinese text may be gone.
In order to convert the data to something readable you have to know the codepage they are stored in. Using this you can get the actual bytes and then convert them to Unicode.
|
3,186,472 | 3,186,584 | TypeLoadException while calling C++ method in C# file | I have a main program written in C# which creates and uses objects written in C++.
One of these objects, MODULE, uses a Behavior class (C++), which contains a lot of parameters, initialized by an interface managed by the C# main.
One of these parameters is a system::Collection::Generic < AnotherObject>, let's call it LIST. The behavior object is initialized well, LIST contains an element which is correct.
But when I create a MODULE and call its method BuildModule(BEHAVIOR), at the line of the call the LIST seems to be damaged. I get this in the debugger's Locals window:
Capacity error: an exception of type: System::TypeLoadException^ occurred>
Count error: an exception of type: System::TypeLoadException^ occurred>
Item cannot view indexed property>
System.Collections.Generic.ICollection.IsReadOnly error: an exception of type: System::TypeLoadException^ occurred>
System.Collections.ICollection.IsSynchronized error: an exception of type: System::TypeLoadException^ occurred>
System.Collections.ICollection.SyncRoot error: an exception of type: System::TypeLoadException^ occurred>
System.Collections.IList.IsFixedSize error: an exception of type: System::TypeLoadException^ occurred>
System.Collections.IList.IsReadOnly error: an exception of type: System::TypeLoadException^ occurred>
System.Collections.IList.Item cannot view indexed property>
This appears not to be loaded, but I don't know - the other objects are loaded, even more complex ones.
If anyone has a clue, I would be grateful.
| You are showing exceptions that the debugger suffers when it tries to display the list instance. That isn't going to help either you or us diagnose the problem; you'll need to take a look at the exception that the code generates. If that doesn't help, post what you see in the exception's Message and StackTrace properties. The InnerException is most important, in case it isn't null.
This kind of mishap is usually caused by heap corruption btw.
|
3,186,540 | 3,186,586 | visibility problems with namespaces | I have two source files, one named main.cpp (where the namespace M is defined) and the file engines.h (where several names are defined).
main.cpp includes engines.h.
engines.h needs to use the stuff inside M, and M needs to use the stuff inside engines.h.
I get an error doing using namespace M; in engines.h.
| You cannot do using namespace M before the namespace has been defined. If there is a cyclic dependency, you need to break it by using one or more of these techniques:
Forward declare if your uses don't need to know the members or size of classes, but just handle with pointers or references to them:
namespace M { class MyCow; }
Define stuff in engines.cpp instead of engines.h:
// engines.h
void f();
// engines.cpp
#include "main.h"
void f() { MyCow muuh; }
Use of the pimpl idiom reduces dependencies even more, as it keeps headers free of headers that are only used by implementations.
Split the part in .h files for the interface and .cpp files for the implementation to handle such dependencies. That way, headers are less dependent on other headers, and implementation files can include the headers.
|
3,186,577 | 3,186,599 | Returning references from C++ methods | Dear friends, I'm concerned that I'm making bad use of references in C++.
In the following method GCC complains warning "reference to local variable ‘me’ returned"
MatrizEsparsa& MatrizEsparsa::operator+(MatrizEsparsa& outra){
MatrizEsparsa me(outra.linhas(),outra.colunas());
return me;
}
But, with the following changes the warning disappears:
MatrizEsparsa& MatrizEsparsa::operator+(MatrizEsparsa& outra){
MatrizEsparsa me(outra.linhas(),outra.colunas());
MatrizEsparsa &ref = me;
return ref;
}
Is the former method ( returning the 'ref' variable ) correct\acceptable ?
| No. ref still refers to me which will be destroyed at the end of the call.
You should return a copy of your result (not prefixed by &).
MatrizEsparsa MatrizEsparsa::operator+(const MatrizEsparsa& outra) const {
return MatrizEsparsa(outra.linhas(),outra.colunas());
}
I also added two const specifiers (to the parameter and to the method) since I doubt outra or the calling instance need to be modified in this case. (I could be wrong, but then your operator+ would have a weird semantic)
By doing what you did, you just made the code more complex. The compiler probably was confused and couldn't warn you about your possible mistake.
Usually, when you have to use clever tricks to do simple things, it means something is wrong.
|
3,186,926 | 3,187,324 | Shared global variable in C++ static library | I have a MS C++ project (let's call it project A) that I am currently compiling as a static library (.lib). It defines a global variable foo. I have two other projects which compile separately (call them B and C, respectively) and each links the shared static library A in. Both B and C are dll's that end up loaded in the same process. I would like to share a single instance of foo from A between B and C in the same process: a singleton. I'm not sure how to accomplish the singleton pattern here with project A since it is statically compiled into B and C separately. If I declare foo as extern in both B and C, I end up with different instances in B and C. Using a standard, simple singleton class pattern with a static getInstance method results in two static foo instantiations.
Is there any way to accomplish this while project A is statically compiled into B and C? Or do I have to make A a DLL?
| Yes, you have to make A a shared DLL, or else define it as extern in B and C and link all three statically.
|
3,186,984 | 3,186,988 | What does ~ mean in C++? | Specifically, could you tell me what this line of code does:
int var1 = (var2 + 7) & ~7;
Thanks
| It's bitwise negation. This means that it performs the binary NOT operator on every bit of a number. For example:
int x = 15; // Binary: 00000000 00000000 00000000 00001111
int y = ~x; // Binary: 11111111 11111111 11111111 11110000
When coupled with the & operator it is used for clearing bits. So, in your example it means that the last 3 bits of the result of var2+7 are set to zeroes.
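Putting the two operators together, the expression in the question is a classic alignment idiom; a small self-contained sketch (the function name is mine):

```cpp
// (n + 7) & ~7 rounds n up to the nearest multiple of 8: adding 7
// carries n past the next boundary, and & ~7 clears the low 3 bits.
int round_up_to_8(int n) {
    return (n + 7) & ~7;
}
```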
As noted in the comments, it's also used to denote destructors, but that's not the case in your example.
|
3,187,019 | 3,187,085 | Does C++ enforce return statements? | Okay, little oddity I discovered with my C++ compiler.
I had a not-overly complex bit of code to refactor, and I accidentally managed to leave in a path that didn't have a return statement. My bad. On the other hand, this compiled, and segfaulted when I ran it and that path was hit, obviously.
Here's my question: Is this a compiler bug, or is there no guarantee that a C++ compiler will enforce the need for a return statement in a non-void return function?
Oh, and to be clear, in this case it was an unnecessary if statement without an accompanying else. No gotos, no exits, no aborts.
| Personally I think this should be an error:
int f() {
}
int main() {
int n = f();
return 0;
}
but most compilers treat it as a warning, and you may even have to use compiler switches to get that warning. For example, on g++ you need -Wall to get:
[neilb@GONERIL NeilB]$ g++ -Wall nr.cpp
nr.cpp: In function 'int f()':
nr.cpp:2: warning: no return statement in function returning non-void
Of course, with g++ you should always compile with at least -Wall anyway.
|
3,187,148 | 3,187,358 | Generating permutations via templates | I'd like a function, or function object, that can generate a permutation of its inputs with the permutation specified at compile time. To be clear, I am not looking to generate all of the permutations, only a specific one. For instance, permute<1,4,3,2>( a, b, c, d ) would return (a,d,c,b). Obviously, it is straightforward to do this with a permutation of a specific length, e.g. 2, like this
#include <boost/tuple/tuple.hpp>
template< unsigned a, unsigned b>
struct permute {
template< class T >
boost::tuple< T, T > operator()( T ta, T tb ) {
boost::tuple< T, T > init = boost::make_tuple( ta, tb );
return boost::make_tuple( init.get< a >(), init.get< b >() );
}
};
But, how would I go about doing this for an arbitrary-length permutation? Also, is there a cleaner way of writing the above code? Yes, the above code is not restricted to making permutations, as permute<2,2>(a,b) is allowed, but I don't see that as a flaw. However, can it be restricted to only allowing actual permutations?
| C++0x provides variadic templates, which you should be able to use to handle an arbitrary length permutation. They were added specifically because the current version of C++ doesn't have a clean way of dealing with this kind of problem.
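As an illustration of what that can look like, a hedged sketch (this is my own C++14-style code, the indices are 0-based unlike the 1-based example in the question, and it does not enforce that the indices form a true permutation):

```cpp
#include <cstddef>
#include <tuple>

// The explicit index pack Is... chooses which argument lands in each
// slot of the returned tuple: permute<0,3,2,1>(a,b,c,d) yields (a,d,c,b).
template <std::size_t... Is, typename... Ts>
auto permute(Ts... ts) {
    auto init = std::make_tuple(ts...);
    return std::make_tuple(std::get<Is>(init)...);
}
```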
|
3,187,293 | 3,187,486 | double pointer memory allocation | I am currently trying to allocate the same amount of memory for a double pointer. I take in a char** and want to use a bubble sort on that char** . So I create a temp char** and now I'm wondering how to correctly allocate enough memory so that I can return that temp char** to another method.
I know the way I'm allocating right now doesn't look right and it certainly doesn't work...otherwise I wouldn't be asking this question. If someone could respond with some helpful advice, I would greatly appreciate it!
char** bubble_sort(char **filenames, int n)
{
int i;
char **new_list;
new_list = malloc(sizeof(filenames));
for (i = 0; i < n; i++)
{
// malloc(file_list.size * sizeof(int));
new_list[i] = filenames[i];
}
for (i = 0; i < n; i++)
{
printf("%d: %s\n", i, new_list[i]);
}
int x;
int y;
for(x=0; x<n; x++)
{
for(y=0; y<n-1; y++)
{
if(new_list[y]>new_list[y+1])
{
char *temp = new_list[y+1];
new_list[y+1] = new_list[y];
new_list[y] = temp;
}
}
}
for (i = 0; i < n; i++)
{
printf("%d: %s\n", i, new_list[i]);
}
return new_list;
}
| Here is the working copy of the program:
#include <cstdio>
#include <cstdlib>
#include <cstring>
char** bubble_sort(const char **filenames, int n)
{
int i;
char **new_list;
new_list = (char**) malloc(sizeof(*new_list) * n);
for (i = 0; i < n; i++)
{
new_list[i] = (char*) filenames[i];
}
printf("Initial list:\n");
for (i = 0; i < n; i++)
{
printf("%d: %s\n", i, new_list[i]);
}
int x;
int y;
printf("List is sorted:\n");
for(x=0; x<n; x++)
{
for(y=0; y<n-1; y++)
{
if(strcmp(new_list[y],new_list[y+1])>0)
{
char *temp = new_list[y+1];
new_list[y+1] = new_list[y];
new_list[y] = temp;
}
}
}
for (i = 0; i < n; i++)
{
printf("%d: %s\n", i, new_list[i]);
}
return new_list;
}
int main(){
const char *ar[5]={
"eee", "aaa", "bbb", "ccc", "ddd",
};
bubble_sort(ar, 5);
return (0);
}
Still, keep in mind that your programming style resembles C more than C++ (which is not always a bad thing).
If you want to allocate new strings for your array elements, you should change the first for like this:
for (i = 0; i < n; i++)
{
//new_list[i] = (char*) filenames[i];
new_list[i] = (char*) malloc(sizeof(**new_list) * (strlen(filenames[i]) + 1));
strcpy(new_list[i], filenames[i]);
}
And this is the C version (the first one was the C++ version). Note that the string array has all its elements newly allocated, and does not reuse the original strings from the input parameter:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
char** bubble_sort(char **filenames, int n)
{
int i;
char **new_list;
new_list = malloc(sizeof(*new_list) * n);
for (i = 0; i < n; i++)
{
//new_list[i] = (char*) filenames[i];
new_list[i] = malloc(sizeof(**new_list) * (strlen(filenames[i]) + 1));
strcpy(new_list[i], filenames[i]);
}
printf("Initial list:\n");
for (i = 0; i < n; i++)
{
printf("%d: %s\n", i, new_list[i]);
}
int x;
int y;
printf("List is sorted:\n");
for(x=0; x<n; x++)
{
for(y=0; y<n-1; y++)
{
if(strcmp(new_list[y],new_list[y+1])>0)
{
char *temp = new_list[y+1];
new_list[y+1] = new_list[y];
new_list[y] = temp;
}
}
}
for (i = 0; i < n; i++)
{
printf("%d: %s\n", i, new_list[i]);
}
return new_list;
}
int main(){
char *ar[5]={
"eee", "aaa", "bbb", "ccc", "ddd",
};
bubble_sort(ar, 5);
return (0);
}
|
3,187,625 | 3,187,661 | Call QueryInterface before CoCreateInstance? | Is the above possible?
Can I do this:
IUnknown *punk;
punk->QueryInterface(IID_MyInterface, (void**)&m_pMyInterface);
I thought that this would tell me whether MyInterface is supported, placing it in m_pMyInterface...
| If you really mean what you've written above, then no: because your punk is an uninitialized pointer.
Normally you need to call CoCreateInstance to create an instance of something; after that you can call QueryInterface on that instance, to ask what interface[s] it supports.
|
3,187,728 | 3,187,732 | Does gcc emit different output in one-liners with/without braces | I tried to use astyle to format the code base I've to work with. When I use the option --add-brackets the executable is not identical (If I only use -t and/or -b the output is identical).
if(a) return b
is modified to
if(a)
{
return b
}
So my question is: does gcc generate the same output if I only add and/or delete braces (obviously only for one-liners like the above)? I tried some simple test cases and already got a bit-identical executable.
| 1. No - adding or removing redundant braces like this does not change the code gcc generates.
2. Use the -S flag to see the assembler (or see "Using GCC to produce readable assembly?" to get more readable assembler) if you want to verify this yourself.
|
3,187,770 | 3,188,607 | Partially initialize variable defined in other module | I'm considering a certain solution where I would like to initialize a cell of an array that is defined in other module (there will be many modules initializing one table). The array won't be read before running main (so there is not problem with static initialization order).
My approach:
/* secondary module */
extern int i[10]; // the array
const struct Initialize {
Initialize() { i[0] = 12345; }
} init;
/* main module */
#include <stdio.h>
int i[10];
int main()
{
printf("%d\n", i[0]); // check if the value is initialized
}
Compiler won't strip out init constant because constructor has side effects. Am I right? Is the mechanism OK? On GCC (-O3) everything is fine.
//EDIT
In a real world there will be many modules. I want to avoid an extra module, a central place that will gathered all minor initialization routines (for better scalability). So this is important that each module triggers its own initialization.
| This works with MSVC compilers, but not with GNU C++ (at least for me). The GNU linker will strip all symbols not used outside your compilation unit. I know of only one way to guarantee such initialization - the "init once" idiom. For example:
init_once.h:
template <typename T>
class InitOnce
{
    // static: one shared instance (and counter) for all InitOnce<T>
    // objects, so the last destructor deletes the pointer the first
    // constructor allocated.
    static T *instance;
    static unsigned refs;
public:
    InitOnce() {
        if (!refs++) {
            instance = new T();
        }
    }
    ~InitOnce() {
        if (!--refs) {
            delete instance;
        }
    }
};
template <typename T> T *InitOnce<T>::instance(0);
template <typename T> unsigned InitOnce<T>::refs(0);
unit.h:
#include "init_once.h"
class Init : public InitOnce<Init>
{
public:
Init();
~Init();
};
static Init module_init_;
secondary.cpp:
#include "unit.h"
extern int i[10]; // the array
Init::Init()
{
i[0] = 12345;
}
...
|
3,187,843 | 3,188,108 | Use Public Variable Globally | I'm attempting to modify a MIPS simulator to display the contents of its registers during run time. My question refers to the way in which I plan to do this. So...
I have a file, file1.cpp and file2.cpp. There is a local public variable in file1.cpp called
typedef long ValueGPR;
ValueGPR reg[33];
that I want to access in file2.cpp. Each of these files has a header file. File2.cpp contains a function which iteratively tracks the execution of a program, instruction by instruction, making it the perfect place to insert a printf("REG[%d]:%d\n",i,reg[i]); statement or something like that - but reg is a local variable in file1.cpp. How do I stitch together something that will allow me to access this reg variable?
This is what both files actually look like (after thinking about this a bit more):
"File1.h"
typedef long ValueGPR;
...
class ThreadContext {
...
public:
ValueGPR reg[33];
...
...
}
...
"File2.cpp"
...
#include ".../ThreadContext.h"
...
long ExecutionFlow::exeInst(void) {
...
//ADD PRINTF OF reg[1] - reg[32] HERE
...
}
...
| Cogwheel's answer is correct, but your comment indicates some possibility of confusion, so perhaps it's better to clarify a bit:
file1.h:
#ifndef FILE1_H_INCLUDED
#define FILE1_H_INCLUDED
typedef long ValueGPR;
extern ValueGPR reg[];
#define NUM_REGS 33
#endif
file1.c:
#include "file1.h"
ValueGPR reg[NUM_REGS];
file2.c:
#include "file1.h"
/* ... */
for (i=0; i<NUM_REGS; i++)
show(reg[i]);
Edit: Given the additional point that reg is really a member of a class, the code above clearly won't work, though the general idea remains the same. For a class member, you'd need to deal with the class as a whole, not just the reg itself. Taking a wild stab at what things might look like, you could have something like:
file1.h:
// include guard here
class CPU_state {
public:
typedef long ValueGPR;
static const int num_regs = 33;
ValueGPR reg[num_regs];
//or, preferably:
// std::vector<ValueGPR> regs;
// CPU_state() : regs(num_regs) {}
// ...
};
extern CPU_state cpu;
file1.cpp:
#include "file1.h"
CPU_state cpu;
file2.cpp:
#include "file1.h"
for (int i=0; i<cpu.num_regs; i++)
show(cpu.reg[i]);
If you might create more than one CPU at a time, then you'll probably need to pass a pointer or reference to a CPU_state (or whatever you call it) rather than relying on a global instance like this code uses.
|
3,188,156 | 3,188,175 | Difference between classes and namespaces? | I'm looking at namespaces and I don't really see a difference between these and classes.
I'm teaching myself C++; I've gotten several books online, so I know I'm not learning the most effectively. Anyway, can someone tell me the difference between the two, and when it would be best to use a namespace over a class? Also, I don't see much about structs in the book I'm reading.
Is this the format?
struct go
{
goNow(){ cout << "go Now"};
}
Thanks in advance for your assistance.
| Classes and structs define types. You can create an object of a type. Namespaces simply declare a scope inside which other types, functions, objects, or namespaces can exist. You can't create an object of type std (unless of course you created a type called std, which would hide the std namespace).
When you define a function inside a struct/class (a method) you're saying "This function is a fundamental operation on the associated data". When you define a function inside a namespace you're saying "This function is logically related to other functions, types, and objects in the namespace"
Edit
It's probably worth pointing out that "everything is an object" languages like Java and C# regularly use classes as if they were namespaces because they don't allow "free" functions. This may be where the confusion comes from. If you have a class in another language that contains nothing but static members, you would want to use a namespace and free functions in the C++ version.
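To make the contrast concrete, a minimal sketch (all names here are my own):

```cpp
// A namespace only groups related names; you never instantiate it.
namespace mathutil {
    inline int twice(int x) { return 2 * x; }
}

// A struct/class defines a type; you create objects of it, and its
// member functions operate on each object's own data.
struct Point {
    int x;
    int y;
    int sum() const { return x + y; }
};
```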
|
3,188,352 | 3,191,582 | Changing the datatype of a Mat class instance in the OpenCV C++ Interface | How can I change the data type used to store the pixels in a Mat class instance?
For example after reading an image using the line below
Mat I = imread(file,0);
I obtain a grayscale image with pixels of type unsigned char. I want to change this to double.
What's the best way to do the conversion? I wasn't able to find a function to do that.
Thanks in advance
| It is very simple. See the documentation at the OpenCV website.
Basically do
Mat double_I;
I.convertTo(double_I, CV_64F);
|
3,188,439 | 3,188,481 | question on struct in c++ | I have the following code
#include <iostream>
#include<string>
#include <sstream>
using namespace std;
struct product{
int weight;
float price;
};
int main(){
string mystr;
product prod;
product *pointer;
pointer=&prod;
getline(cin,pointer->price);
return 0;
}
but it shows me the mistake
no instance of overloaded function "getline" matches argument list
What is the mistake?
| The mistake is that getline reads into a string, not a float — there is no getline overload that takes a float. Read into a string first, then convert:
string str;
getline(cin, str);
pointer->price = atof(str.c_str());
|
3,188,554 | 3,188,591 | How to get OpenGL running on OSX | I normally program on Windows, but I got a macbook pro from my school, so I'm trying to make an OpenGL app for OSX. I downloaded and installed XCode, but I have no clue how to get a simple OpenGL app going. I would prefer not to use Objective-C, but I definitely don't want to use GLUT. Can someone point me in the right direction?
| The biggest difference between OpenGL on OS X compared to pretty much everything else is the location of the header files. On OS X:
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>
#include <GLUT/glut.h>
If you want to stay away from Objective-C/Cocoa and GLUT you can try SDL which is a cross platform gaming library (windows, 2D graphics, sound, input, etc.).
Edit: forgot about the compiler flags, which mipadi posted, namely:
-framework OpenGL
|
3,188,706 | 3,188,806 | How to delete certain characters from multiple text files, in a thread safe way? | What is the best way, to develop a multi threaded program, to delete certain characters from multiple text files that are passed in as parameters?
Thus, when someone runs a.out axvc f1 f2 f3 f4, the goal is to delete all occurrences of the characters a, x, v, c from the files f1, f2, f3 and f4.
| Are you not able to just make use of normal utilities like sed to help you do this?
Even if you aren't, are you sure that the CPU use is a significant enough portion of the processing time that it wouldn't be dwarfed by the file I/O? Most likely doing it in multiple threads won't save you much time at all vs doing it serially in one thread.
Otherwise probably the easiest mechanism would be to have the main thread doing the I/O and dispatching work to a pool of worker threads that do the character removal. It gets trickier if I/O speed actually improves when being done from multiple threads.
|
3,188,793 | 3,188,818 | When should the STL algorithms be used instead of using your own? | I frequently use the STL containers but have never used the STL algorithms that are to be used with the STL containers.
One benefit of using the STL algorithms is that they provide a method for removing loops so that code logic complexity is reduced. There are other benefits that I won't list here.
I have never seen C++ code that uses the STL algorithms. From sample code within web page articles to open source projects, I haven't seen their use.
Are they used more frequently than it seems?
| Short answer: Always.
Long answer: Always. That's what they are there for. They're optimized for use with STL containers, and they're faster, clearer, and more idiomatic than anything you can write yourself. The only situation you should consider rolling your own is if you can articulate a very specific, mission-critical need that the STL algorithms don't satisfy.
Edited to add: (Okay, so not really really always, but if you have to ask whether you should use STL, the answer is "yes".)
|
3,189,117 | 3,189,438 | How to listen to dll function calls | is there any way to "listen" to when a function of a dll is called?
I would like to know what functions of a dll is called and the parameters etc....
is it possible?
thanks!
| Check out WinApiOverride32. This is a really powerful monitor, with support of COM and .NET and easily customizable (you can monitor DLL internal functions as well). Also, you can write a custom DLL to override some APIs called by the target.
|
3,189,199 | 3,189,257 | Declaring and initializing variable in for loop | Can I write simply
for (int i = 0; ...
instead of
int i;
for (i = 0; ...
in C or C++?
(And will variable i be accessible inside the loop only?)
| It's valid in C++.
It was not legal in the original version of C.
But was adopted as part of C in C99 (when some C++ features were sort of back ported to C)
Using gcc
gcc -std=c99 <file>.c
The variable is valid inside the for statement and the statement that is looped over. If this is a block statement then it is valid for the whole of the block.
for(int loop = 0; loop < 10; ++loop)
{
// loop valid in here as well
}
// loop NOT valid here.
|
3,189,202 | 3,189,272 | Deleting nodes in a doubly linked list (C++) | I have problems understanding why, when I create two or more nodes (as shown below), the function void del_end() will only delete the char name[20] and not the whole node. How do I fix this problem without a memory leak?
#include <iostream>
using namespace std;
struct node
{
char name[20];
char profession[20];
int age;
node *nxt;
node *prv;
};
node *start_ptr = NULL;
void del_end()
{
node *temp, *temp2;
temp = start_ptr;
if (start_ptr == NULL)
cout << "Can't delete: there are no nodes" << endl;
else if (start_ptr != NULL && start_ptr->nxt == NULL)
{start_ptr=NULL;}
else
{
while (temp->nxt != NULL)
{
temp = temp->nxt;
}
temp2=temp->prv;
delete temp;
temp->nxt= NULL;
}
}
| Your code has some problems, the worst being here:
temp2=temp->prv;
delete temp2;
temp->nxt= NULL;
You're deleting the next-to-last node, leaving any pointers to it dangling, and losing the last node.
But if you post more of the real code, we can tell you more.
EDIT:
Here's a slightly cleaned-up version of del_end (and there's still plenty of room for improvement).
void del_end()
{
if (start_ptr == NULL)
{
cout << "Can't delete: there are no nodes" << endl;
return;
}
if (start_ptr->nxt == NULL)
{
delete start_ptr;
start_ptr = NULL;
return;
}
node *nextToLast = start_ptr;
node *last = start_ptr->nxt;
while(last->nxt != NULL)
{
nextToLast = last;
last = last->nxt;
}
delete last;
nextToLast->nxt = NULL;
return;
}
Note that this does not assume that the prev links are correct, which seems prudent here.
|
3,189,306 | 3,190,365 | Boost thread_interrupted exception terminate()s with MinGW gcc 4.4.0, OK with 3.4.5 | I've been "playing around with" boost threads today as a learning exercise, and I've got a working example I built quite a few months ago (before I was interrupted and had to drop multi-threading for a while) that's showing unusual behaviour.
When I initially wrote it I was using MinGW gcc 3.4.5, and it worked. Now I'm using 4.4.0 and it doesn't - incidentally, I've tried again using 3.4.5 (I kept that version in a separate folder when I installed 4.4.0) and it's still working.
The code is at the end of the question; in summary what it does is start two Counter objects off in two child threads (these objects simply increment a variable then sleep for a bit and repeat ad infinitum - they count), the main thread waits for the user via a cin.get() and then interrupts both threads, waits for them to join, then outputs the result of both counters.
Complied with 3.4.5 it runs as expected.
Complied with 4.4.0 it runs until the user input, then dies with a message like the below - it seems the the interrupt exceptions are killing the entire process?
terminate called after throwing an instance of '
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
boost::thread_interrupted'
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
From what I read, I think that any (?) uncaught exception that is allowed to propagate out of a child thread will kill the process? But, I'm catching the interrupts here, aren't I? At least I seem to be when using 3.4.5.
So, firstly, have I understood how interrupting works?
And, any suggestions as to what is happening and how to fix?
Code:
#include <iostream>
#include <boost/thread/thread.hpp>
#include <boost/date_time.hpp>
//fixes a linker error for boost threads in 4.4.0 (not needed for 3.4.5)
//found via Google, so not sure on validity - but does fix the link error.
extern "C" void tss_cleanup_implemented() { }
class CCounter
{
private:
int& numberRef;
int step;
public:
CCounter(int& number,int setStep) : numberRef(number) ,step(setStep) { }
void operator()()
{
try
{
while( true )
{
boost::posix_time::milliseconds pauseTime(50);
numberRef += step;
boost::this_thread::sleep(pauseTime);
}
}
catch( boost::thread_interrupted const& e )
{
return;
}
}
};
int main( int argc , char *argv[] )
{
try
{
std::cout << "Starting counters in secondary threads.\n";
int number0 = 0,
number1 = 0;
CCounter counter0(number0,1);
CCounter counter1(number1,-1);
boost::thread threadObj0(counter0);
boost::thread threadObj1(counter1);
std::cout << "Press enter to stop the counters:\n";
std::cin.get();
threadObj0.interrupt();
threadObj1.interrupt();
threadObj0.join();
threadObj1.join();
std::cout << "Counter stopped. Values:\n"
<< number0 << '\n'
<< number1 << '\n';
}
catch( boost::thread_interrupted& e )
{
std::cout << "\nThread Interrupted Exception caught.\n";
}
catch( std::exception& e )
{
std::cout << "\nstd::exception thrown.\n";
}
catch(...)
{
std::cout << "\nUnexpected exception thrown.\n";
}
return EXIT_SUCCESS;
}
| Solved.
It turns out adding the compiler flag -static-libgcc removes the problem with 4.4.0 (and has no apparent effect with 3.4.5) - or at least in this case the program returns the expected results.
|
3,189,429 | 3,189,922 | Redirecting ostream to file not working | I have a custom logging system which allows me to send information to a log file and the console depending on the verbosity currently selected. Right now, the trouble I am having is with the output to the file, with output to the console working fine.
Here is an example:
ilra_talk << "Local IP: " << systemIP() << " | Hostname: " << systemhostname() << endl;
// the systemIP() and systemhostname() functions have already been defined
This should result in the current local IP and hostname of the system being printed to a file. However, it is only resulting in the information being printed to the console, despite how the function is overloaded to result in it printing to both.
I've outlined the code below. Any assistance is appreciated (as always).
A definition currently exists for ilra_talk which results in a new class object being created:
#define ilra_talk ilra(__FUNCTION__,0)
The class definition the following:
class ilra
{
static int ilralevel_set; // properly initialized in my main .cpp
static int ilralevel_passed; // properly initialized in my main .cpp
static bool relay_enabled; // properly initialized in my main .cpp
static bool log_enabled; // properly initialized in my main .cpp
static ofstream logfile; // properly initialized in my main .cpp
public:
// constructor / destructor
ilra(const std::string &funcName, int toset)
{
ilralevel_passed = toset;
}
~ilra(){};
// enable / disable irla functions
static void ilra_verbose_level(int toset){
ilralevel_set = toset;
}
static void ilra_log_enabled(bool toset){
log_enabled = toset;
if (log_enabled == true){
// get current time
time_t rawtime;
time ( &rawtime );
// name of log file (based on time of application start)
stringstream logname_s;
string logname = "rclient-";
logname_s << rawtime;
logname.append(logname_s.str());
// open a log file
logfile.open(logname.c_str());
}
}
// output
template <class T>
ilra &operator<<(const T &v)
{
if(log_enabled == true){ // log_enabled is set to true
logfile << v;
logfile << "Test" << endl; // test will show up, but intended information will not appear
}
if(ilralevel_passed <= ilralevel_set)
std::cout << v;
return *this;
}
ilra &operator<<(std::ostream&(*f)(std::ostream&))
{
if(log_enabled == true) // log_enabled is set to true
logfile << *f;
if(ilralevel_passed <= ilralevel_set)
std::cout << *f;
return *this;
}
}; // end of the class
| I see nothing totally wrong with the code, though I personally would have made two changes:
Put the logfile.flush() into the ilra::~ilra(). Logging and buffering are no friends.
Change static ofstream logfile to static ofstream *logfile: allocate/delete it in ilra_log_enabled() and add a NULL check in the << operators. I prefer objects with an explicit life cycle.
Overall, since logging is a performance hog, I never use the iostreams for it and stick with the printf()-like macros: logging check is made without a function call, right in the macro. That has important side-effect: if no logging is needed then the parameter list isn't evaluated at all. In your case it is impossible to avoid the functions being called, e.g. systemIP() and systemhostname(), since the check for log level/etc is done after they are already have been called. With the macros I can also for example remove completely debug-level logging from release builds (or corollary: in a debug build I can have as much logging as I want).
|
3,189,436 | 3,189,464 | Access class functions from another thead? | I have a function in my class that creates a thread and gives it arguments to call a function which is part of that class but since thread procs must be static, I can't access any of the class's members. How can this be done without using a bunch of static members in the cpp file to temporarily give the data to be manipulated, this seems slow.
Heres an example of what I mean:
in cpp file:
void myclass::SetNumber(int number)
{
numberfromclass = number;
}
void ThreadProc(void *arg)
{
//Can't do this
myclass::SetNumber((int)arg);
}
I can't do that since SetNumber would have to be static, but I instance my class a lot so that won't work.
What can I do?
Thanks
| Usually you specify the address of the myclass object as the arg and cast it inside the ThreadProc. But then you'll be blocked on how to pass the int argument.
void ThreadProc(void *arg)
{
myclass* obj = reinterpret_cast<myclass*>(arg);
// ...but where does the int argument come from?
obj->SetNumber(???);
}
As you said, this is maybe not only a bit slow, but it also clutters the code. I would suggest using boost::bind for argument binding and, to create the threads in an OS-independent way (for your own source at least), boost::thread. Then there's no need for static methods for your threads.
Both are now part of the C++0x standard; here's a small tutorial
|
3,189,475 | 3,190,304 | Minor (unimportant) defect in the standard? | This question has no practical issues associated with it, it is more a matter of curiosity and wanting to know if I am taking things too literally ;).
So I have been trying to work towards understanding as much of the c++ standard as possible. Today in my delving into the standard I noticed this (ISO/IEC 14882:2003 21.3.4):
const_reference operator[](size_type pos) const;
reference operator[](size_type pos);
Returns: If pos < size(), returns data()[pos].
Otherwise, if pos == size(), the const version returns charT().
Otherwise, the behavior is undefined.
Seems pretty sane to me. But then I thought to myself, wait a sec, what's the definition of data()?
const charT* data() const;
yup, it returns a const charT*.
Clearly the non-const version of operator[] cannot be implemented as a simple return data()[pos] then since that would be initializing a reference of type char& from an expression of type const char.
I think that it is obvious that the intent is that data() be implemented something like return data_; and operator[] be implemented as return data_[pos]; or something functionally similar, but that's not what the standard says :-P.
If I recall correctly, implementors have some leeway in that they can implement things how they please as long as it meets the basic requirements given and has the same net effect.
So the question is, am I being way too literal, or is this the type of thing that would be considered a defect.
EDIT: It is worth noting that the c++0x draft has changed the wording to:
Returns: If pos < size(), returns *(begin() + pos).
Otherwise, if pos == size(), the const version returns charT().
Otherwise, the behavior is undefined.
So perhaps I have just stumbled onto something that has already been discussed.
| Yes, it was a defect and yes, this was the fix.
http://www.open-std.org/JTC1/SC22/WG21/docs/lwg-defects.html#259
|
3,189,545 | 3,189,588 | Proper way to make a global "constant" in C++ | Typically, the way I'd define a true global constant (lets say, pi) would be to place an extern const in a header file, and define the constant in a .cpp file:
constants.h:
extern const double pi;
constants.cpp:
#include "constants.h"
#include <cmath>
const double pi = std::acos(-1.0);
This works great for true constants such as pi. However, I am looking for a best practice when it comes to defining a "constant" in that it will remain constant from program run to program run, but may change, depending on an input file. An example of this would be the gravitational constant, which is dependent on the units used. g is defined in the input file, and I would like it to be a global value that any object can use. I've always heard it is bad practice to have non-constant globals, so currently I have g stored in a system object, which is then passed on to all of the objects it generates. However this seems a bit clunky and hard to maintain as the number of objects grow.
Thoughts?
| It all depends on your application size. If you are truly absolutely sure that a particular constant will have a single value shared by all threads and branches in your code for a single run, and that is unlikely to change in the future, then a global variable matches the intended semantics most closely, so it's best to just use that. It's also something that's trivial to refactor later on if needed, especially if you use distinctive prefixes for globals (such as g_) so that they never clash with locals - which is a good idea in general.
In general, I prefer to stick to YAGNI, and don't try to blindly placate various coding style guides. Instead, I first look if their rationale applies to a particular case (if a coding style guide doesn't have a rationale, it is a bad one), and if it clearly doesn't, then there is no reason to apply that guide to that case.
|
3,189,900 | 3,191,252 | Fetching remote database info from a client application | What would be the preferred method of pulling content from a remote database?
I don't think that I would want to pull directly from the database for a number of reasons.
(Such as easily being able to change where it is fetching the info from and a current lack of access from outside the server.)
I've been thinking of using HTTP as a proxy to the database basically just using some PHP to display raw text from the database and then grabbing the page and dumping it to a string for displaying.
I'm not exactly sure how I would go about doing that though. (Sockets?)
Right now I am building it around a blog/news type system. Though the content would expand in the future.
| I've got a similar problem at the moment, and the approach I'm taking is to communicate from the client app with a database via a SOAP web service.
The beauty of this approach is that on the client side the networking involved consists of a standard HTTP request. Most platforms these days include an API to perform basic HTTP client functions. You'll then also need an XML or JSON parser to parse the returned SOAP data, but they're also readily available.
As a concrete example, a little about my particular project: It's an iPhone app communicating with an Oracle database. I use a web service to read data from the database and send the data to the app formatted in XML using SOAP. The app can use Apple's NSURLConnection API to perform the necessary HTTP request. The XML is then parsed using the NSXMLParser API.
While the above are pretty iPhone-specific (and are Objective-C based) I think the general message still applies - there's tools out there that will do most of the work for you. I can't think of an example of an HTTP API offhand, but for the XML parsing part of the equation there's Xerces, TinyXML, Expat...
HTH!
|
3,189,944 | 3,190,001 | SSL handshake yields BIO errors | Fairly new to socket programming, but I've been assigned with a whopper of project.
My issue is this: I try initiating an SSL handshake with both SSL_accept() and SSL_connect(), as well as renegotiating the handshake and then attempting to reconnect with SSL_renegotiate() and SSL_do_handshake() in succession, but all of these give me the error of BIO routines:BIO_write:unsupported method
Before making any calls, I make sure to set my BIO and initialize all SSL libraries.
The BIO and SSL pointers are not null during the time of execution.
Any ideas?
| It is hard to tell without seeing any code, but the error 'unsupported method' means that you are probably trying to call a function with the wrong BIO as parameter. In other words, you cannot call BIO_write with an accept BIO (one created with, eg, a call to BIO_new_accept()). An accept BIO is for, well, accepting connections.
|
3,189,998 | 3,190,084 | C++: Two classes needing each other | I'm making a game and I have a class called Man and a class called Block in their code they both need each other, but they're in seperate files. Is there a way to "predefine" a class? Like Objective-C's @class macro?
| It's called a circular dependency. In class Two.h
class One;
class Two {
public:
One* oneRef;
};
And in class One.h
class Two;
class One {
public:
Two* twoRef;
};
The "class One;" and "class Two;" directives declare classes named "One" and "Two" respectively, but they don't define any details beyond the name. Therefore you can create pointers to the classes, but you cannot use the complete class like so:
class One;
class Two : public One {
};
class Three {
public:
One one;
};
The reason the two examples above won't compile is that while the compiler knows there is a class One, it doesn't know what fields, methods, or virtual methods class One might contain, because only the name has been declared, not the actual class definition.
|
3,190,096 | 3,190,130 | C++ constant temporary lifetime | Can you please tell me if such code is correct (according to standard):
struct array {
int data[4];
operator const int*() const { return data; }
};
void function(const int*) { ... }
function(array()); // is array data valid inside function?
Thank you
| Yes. The temporary object is valid until the end of the full expression in which it is created; that is, until after the function call returns.
I don't have my copy of the standard to hand, so I can't give the exact reference; but it's in 12.2 of the C++0x final draft.
|
3,190,158 | 3,190,264 | What am I doing wrong? (multithreading) | Here's what I'm doing in a nutshell.
In my class's cpp file I have:
std::vector<std::vector<GLdouble>> ThreadPts[4];
The thread proc looks like this:
unsigned __stdcall BezierThreadProc(void *arg)
{
SHAPETHREADDATA *data = (SHAPETHREADDATA *) arg;
OGLSHAPE *obj = reinterpret_cast<OGLSHAPE*>(data->objectptr);
for(unsigned int i = data->start; i < data->end - 1; ++i)
{
obj->SetCubicBezier(
obj->Contour[data->contournum].UserPoints[i],
obj->Contour[data->contournum].UserPoints[i + 1],
data->whichVector);
}
_endthreadex( 0 );
return 0;
}
SetCubicBezier looks like this:
void OGLSHAPE::SetCubicBezier(USERFPOINT &a, USERFPOINT &b, int &currentvector)
{
std::vector<GLdouble> temp;
if(a.RightHandle.x == a.UserPoint.x && a.RightHandle.y == a.UserPoint.y
&& b.LeftHandle.x == b.UserPoint.x && b.LeftHandle.y == b.UserPoint.y )
{
temp.clear();
temp.push_back((GLdouble)a.UserPoint.x);
temp.push_back((GLdouble)a.UserPoint.y);
ThreadPts[currentvector].push_back(temp);
temp.clear();
temp.push_back((GLdouble)b.UserPoint.x);
temp.push_back((GLdouble)b.UserPoint.y);
ThreadPts[currentvector].push_back(temp);
}
}
The code that calls the threads looks like this:
for(int i = 0; i < Contour.size(); ++i)
{
Contour[i].DrawingPoints.clear();
if(Contour[i].UserPoints.size() < 2)
{
break;
}
HANDLE hThread[4];
SHAPETHREADDATA dat;
dat.objectptr = (void*)this;
dat.start = 0;
dat.end = floor((Contour[i].UserPoints.size() - 1) * 0.25);
dat.whichVector = 0;
dat.contournum = i;
hThread[0] = (HANDLE)_beginthreadex(NULL,0,&BezierThreadProc,&dat,0,0);
dat.start = dat.end;
dat.end = floor((Contour[i].UserPoints.size() - 1) * 0.5);
dat.whichVector = 1;
hThread[1] = (HANDLE)_beginthreadex(NULL,0,&BezierThreadProc,&dat,0,0);
dat.start = dat.end;
dat.end = floor((Contour[i].UserPoints.size() - 1) * 0.75);
dat.whichVector = 2;
hThread[2] = (HANDLE)_beginthreadex(NULL,0,&BezierThreadProc,&dat,0,0);
dat.start = dat.end;
dat.end = Contour[i].UserPoints.size();
dat.whichVector = 3;
hThread[3] = (HANDLE)_beginthreadex(NULL,0,&BezierThreadProc,&dat,0,0);
WaitForMultipleObjects(4,hThread,true,INFINITE);
}
Is there something wrong with this?
I'd expect it to fill ThreadPts[4]; ... There should never be any conflicts the way I have it set up. I usually get error writing at... on the last thread where dat->whichvector = 3. If I remove:
dat.start = dat.end;
dat.end = Contour[i].UserPoints.size();
dat.whichVector = 3;
hThread[3] = (HANDLE)_beginthreadex(NULL,0,&BezierThreadProc,&dat,0,0);
Then it does not seem to crash, what could be wrong?
Thanks
| The problem is that you're passing the same dat structure to each thread as the argument to the threadproc.
For example, when you start thread 1, there's no guarantee that it will have read the information in the dat structure before your main thread starts loading that same dat structure with the information for thread 2 (and so on). In fact, you're constantly directly using that dat structure throughout the thread's loop, so the thread won't be finished with the structure passed to it until the thread is basically done with all its work.
Also note that currentvector in SetCubicBezier() is a reference to data->whichVector, which refers to the exact same location in all threads. So SetCubicBezier() will be performing push_back() calls on the same object in separate threads because of this.
There's a very simple fix: you should use four separate SHAPETHREADDATA instances - one to initialize each thread.
|
3,190,275 | 3,190,515 | Using C++ Macros To Check For Variable Existence | I am creating a logging facility for my library, and have made some nice macros such as:
#define DEBUG myDebuggingClass(__FILE__, __FUNCTION__, __LINE__)
#define WARNING myWarningClass(__FILE__, __FUNCTION__, __LINE__)
where myDebuggingClass and myWarningClass both have an overloaded << operator, and do some helpful things with log messages.
Now, I have some base class that users will be overloading called "Widget", and I would like to change these definitions to something more like:
#define DEBUG myDebuggingClass(__FILE__, __FUNCTION__, __LINE__, this)
#define WARNING myWarningClass(__FILE__, __FUNCTION__, __LINE__, this)
so that when users call 'DEBUG << "Some Message"; ' I can check to see if the "this" argument dynamic_casts to a Widget, and if so I can do some useful things with that information, and if not then I can just ignore it. The only problem is that I would like users to be able to also issue DEBUG and WARNING messages from non-member functions, such as main(). However, given this simple macro, users will just get a compilation error because "this" will not be defined outside of class member functions.
The easiest solution is to just define separate WIDGET_DEBUG, WIDGET_WARNING, PLAIN_DEBUG, and PLAIN_WARNING macros and to document the differences to the users, but it would be really cool if there were a way to get around this. Has anyone seen any tricks for doing this sort of thing?
| Declare a global Widget* const widget_this = NULL; and a protected member variable widget_this in the Widget class, initialized to this, and do
#define DEBUG myDebuggingClass(__FILE__, __FUNCTION__, __LINE__, widget_this)
|
3,190,514 | 3,190,555 | popen equivalent in c++ | Is there any C popen() equivalent in C++?
| You can use the "not yet official" boost.process if you want an object-oriented approach for managing the subprocess.
Or you can just use popen itself, if you don't mind the C-ness of it all.
|
3,190,569 | 3,190,658 | Determine type of templated function result at compile-time? | I'm not sure this is possible, but is there a way, using template programming magic, to define a function that has different return values depending on what input it takes?
Potentially:
template<typename resultType>
resultType GetResult(const std::string &key); // where the value of key may change resultType
template<typename keyType, typename resultType>
resultType GetResult(keyType key);
Now, I know that the above isn't correct. To use the first one, you'd have to know what resultType was before calling the function. However, I've learned that a lot of "impossible" things are often made possible with just another layer (or two) of indirection. I just can't seem to find the right way to do it.
The second option tickles my brain though. I feel like I should be able to define some other helper object that maps strings to types (or whatever) and then the compile-time result of that will call GetResult with the appropriate template parameter.
Edit: Assume that the types used for resultType are unrelated. There is not an interface that can be tested for the "real" type (maybe it could be an int and a MyClass *).
Edit 2: The real-world usage is that I've got a third-party object that contains a collection of Widgets, Gadgets, etc. You can ask for these by string id (prefixed with a type, conveniently), but you have to parse the string to find out that you need to call "collectionInstance.getWidget(id)". My plan was to write a thin wrapper object that would intelligently know how to get at these internal objects.
| No. You cannot make the return type, defined at compile-time, depend on a run-time value.
You could return a boost::variant or a boost::any, though.
|
3,190,571 | 3,190,676 | How do I use an int& parameter with a default value? | I have to add a default int& argument (Eg iNewArgument) to an existing function definition in a header file:
What is the syntax to initialize the last argument to make it take a default value?
..
virtual int ExecNew
(int& param1, int& RowIn,int& RowOut, char* Msg = '0',
bool bDebug=true, int& iNewArgument = NULL ) = 0 ;
.
.
NULL is #defined to 0
Error: default argument: cannot convert from int to int&
| Your example indicates that you are creating a virtual function. Default arguments in virtual (especially pure) functions are not a good idea, because they are not considered when you call an overridden function.
struct A {
virtual void f(int = 4711) = 0;
};
struct B : A {
virtual void f(int) { /* ... */ }
};
Now, if someone calls f on a B object, he will have to provide the argument - the default isn't used. You would have to duplicate the default argument in the derived class, which creates ugly redundancies. Instead, you can use the non-virtual interface pattern
struct A {
void f(int a) {
int out;
f(a, out);
}
void f(int a, int &out) {
doF(a, out);
}
protected:
virtual void doF(int a, int &out) = 0;
};
The user can now call f with one, and with two arguments, from whatever class type he calls the function, and the implementers of A don't have to worry about the default argument. By using overloading instead of default arguments, you don't need to break your head around lvalues or temporaries.
|
3,190,700 | 3,190,799 | Error with C++ operator overloading, in Visual Studio AND Xcode | I am working on a C++ assignment for class that wants me to overload the ">>" operator. I am encountering errors linking in both Visual Studio 2005 and Xcode 3.2.2. The C++ code is separated into a few files: the prototypes are stored in overload.h, the implementations are stored in overload.cpp, and main() is in overloadtester.cpp (although I have not been able to put anything related to this new overloaded operator in main yet because of the error). Below is the code I added before I started getting the errors (listed below).
// overload.h
// Class Name OverloadClass
#ifndef _OverloadClass_H
#define _OverloadClass_H
#include < string >
using namespace std ;
class OverloadClass
{
public:
// Constructors and methods
private:
x_;
y_;
};
istream& operator>> (istream& in, OverloadClass &p);
#endif
// overload.cpp
#include < iostream >
#include < string >
#include < sstream >
#include < istream >
extern "C"
{
#include "DUPoint.h"
}
using namespace std ;
void OverloadClass::input()
{
char leftParen;
char comma ;
cin >> leftParen >> x_ >> comma;
char rightParen;
cin >> y_ >> rightParen;
}
// Constructor and method implementations
istream& operator>> (istream& in, OverloadClass &p)
{
p.input();
return in;
}
The error I have been getting in Xcode is:
Link ./OverloadDirectory
Undefined symbols: "operator>>(std:basic_istream >&, OverloadClass&)" referenced from: _main in overloadtester.o
ld: symbol(s) not found collect2: ld returned 1 exit status
I dont have access to the computer with VS at this very moment hopefully when I do I can post the error I am getting from running the code on that IDE.
Edited in response to @Mark's request for more details
Edited in response to @sbi's request for more info on input()
| The error undefined symbol is a linker error. It means that when you are linking all your code into a single executable / library, the linker is not able to find the definition of the function. The main two reasons for that are forgetting to include the compiled object in the library/executable or an error while defining the function (say that you declare void f( int & ); but you implement void f( const int & ) {}).
I don't know your IDEs (I have never really understood Xcode) but you can try to compile from the command line: g++ -o exe input1.cpp input2.cpp input3.cpp ...
|
3,190,934 | 3,190,976 | Inheritance Costs in C++ | Taking the following snippet as an example:
struct Foo
{
typedef int type;
};
class Bar : private Foo
{
};
class Baz
{
};
As you can see, no virtual functions exist in this relationship. Since this is the case, are the following assumptions accurate as far as the language is concerned?
No virtual function table will be created in Bar.
sizeof(Bar) == sizeof(Baz)
Basically, I'm trying to figure out if I'll be paying any sort of penalty for doing this. My initial testing (albeit on a single compiler) indicates that my assertions are valid, but I'm not sure if this is my compiler's optimizer or the language specification that's responsible for what I'm seeing.
| According to the standard, Bar is not a POD (plain old data) type, because it has a base. As a result, the standard gives C++ compilers wide latitude with what they do with such a type.
However, very few compilers are going to do anything insane here. The one thing you have to look out for is the Empty Base Optimization (EBO). The C++ standard requires that every complete object occupy at least one byte of storage, so without special handling the empty Foo base would be allocated dedicated space inside the Bar class. Compilers which implement the Empty Base Optimization (nearly all in modern use) elide that space for empty bases, however.
If a given compiler did not implement EBO, then sizeof(Bar) would be larger than sizeof(Baz).
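A quick way to check this on your own compiler (assuming C++11 for static_assert; a runtime assert works just as well on older compilers):

```cpp
#include <cassert>
#include <cstddef>

struct Foo { typedef int type; };   // empty "base" carrying only a typedef
class Bar : private Foo { };        // inherits, but adds no data members
class Baz { };                      // plain empty class

// With the Empty Base Optimization the base contributes no storage,
// so both empty classes end up the same size (typically 1 byte).
static_assert(sizeof(Bar) == sizeof(Baz), "empty base adds no storage");
```

If this assertion fails on some exotic compiler, that compiler is not performing EBO.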
|
3,191,124 | 3,191,142 | Passing class template as a function parameter | This question is a result of my lack of understanding of a situation, so please bear if it sounds overly stupid.
I have a function in a class, like:
Class A {
void foo(int a, int b, ?)
{
----
}
}
The third parameter I want to pass, is a typed parameter like
classA<classB<double > > obj
Is this possible? If not, can anybody please suggest a workaround? I have just started reading about templates.
Thanks,
Sayan
| Doesn't it work if you just put it there as a third parameter?
void foo(int a, int b, classA< classB<double> > obj) { ... }
If it's a complex type it might also be preferable to make it a const reference, to avoid unnecessary copying:
void foo(int a, int b, const classA< classB<double> > &obj) { ... }
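If the function should accept any instantiation rather than one fixed type, you can make it a member template instead. A sketch, with classA/classB as hypothetical stand-ins for your actual class templates:

```cpp
#include <cassert>

// Stand-ins for the asker's templates; substitute the real ones.
template<typename T> struct classB { T value; };
template<typename T> struct classA { T inner; };

struct A {
    // A member template: foo now accepts classA<classB<U> > for any U,
    // taken by const reference to avoid copying.
    template<typename U>
    U foo(int a, int b, const classA<classB<U> >& obj) {
        return obj.inner.value + a + b;
    }
};
```

The template argument U is deduced from the call, so `instance.foo(1, 2, obj)` works without spelling out the type.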
|
3,191,331 | 3,191,474 | Multithreading not taking advantage of multiple cores? | My computer has a dual-core Core 2 Duo. I have implemented multithreading in a slow area of my application, but I still notice CPU usage never exceeds 50% and it still lags after many iterations. Is this normal? I was hoping it would get my CPU up to 100%, since I'm dividing the work into 4 threads. Why could it still be capped at 50%?
Thanks
See What am I doing wrong? (multithreading)
for my implementation, except I fixed the issue that that code was having
| Looking at your code, you are making a huge number of allocations in your tight loop--in each iteration you dynamically allocate two, two-element vectors and then push those back onto the result vector (thus making copies of both of those vectors); that last push back will occasionally cause a reallocation and a copy of the vector contents.
Heap allocation is relatively slow, even if your implementation uses a fast, fixed-size allocator for small blocks. In the worst case, the general-purpose allocator may even use a global lock; if so, it will obliterate any gains you might get from multithreading, since each thread will spend a lot of time waiting on heap allocation.
Of course, profiling would tell you whether heap allocation is constraining your performance or whether it's something else. I'd make two concrete suggestions to cut back your heap allocations:
Since every instance of the inner vector has two elements, you should consider using a std::array (or std::tr1::array or boost::array); the array "container" doesn't use heap allocation for its elements (they are stored like a C array).
Since you know roughly how many elements you are going to put into the result vector, you can reserve() sufficient space for those elements before inserting them.
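A sketch of both suggestions combined, assuming C++11's <array> (the tr1/boost versions mentioned above work the same way); POINT here is a hypothetical two-component point standing in for the asker's type:

```cpp
#include <array>
#include <cassert>
#include <vector>

// A fixed-size std::array element needs no per-point heap allocation:
// the two doubles live inline, inside the vector's own storage.
typedef std::array<double, 2> POINT;

std::vector<POINT> generate(int steps) {
    std::vector<POINT> result;
    result.reserve(steps);              // one allocation up front
    double fx = 0.0, fy = 0.0;
    for (int i = 0; i < steps; ++i) {
        fx += 1.0;
        fy += 2.0;
        POINT p = { { fx, fy } };       // cheap aggregate, no new/delete
        result.push_back(p);
    }
    return result;
}
```

Compared with pushing back heap-allocated two-element vectors, the loop body above performs no allocator calls at all once reserve() has run.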
|
3,191,500 | 3,191,505 | Enumeration relying on integer boolean conversion | In my compiler project, I have an enumeration that goes like
enum Result {
No,
Maybe,
Yes
};
I have put No explicitly at the first position, so that i can rely on the boolean evaluation to false. If my compiler is not sure about something, and has to wait for facts until runtime, its analysis functions will return Maybe. Used like
if(!typesEqual(t1, t2)) {
diagnose(types_unequal) << t1 << t2;
}
I wonder whether you or your company considers it bad style not to compare to No explicitly
if(typesEqual(t1, t2) == No) { /* ... */ }
Comparing explicitly seems wordy to me, but relying on the implicit boolean conversion somehow makes me feel guilty. Have you had that feeling before, and how have you dealt with it?
| I'd feel guilty about it as well, because from reading the code above what would you expect the boolean typesEqual() expression to return for a Maybe? Would it return true? Maybe! Would it return false? Maybe! We don't know - that's the entire point of the enum. That's why it makes sense to explicitly compare to No, even though it's more verbose.
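A small sketch of the trap that motivates the explicit comparison (the typesEqual stub here is illustrative only, not the asker's real analysis):

```cpp
#include <cassert>

enum Result { No, Maybe, Yes };

// Toy analysis: equal ints are definitely equal; unequal ones are "maybe",
// standing in for facts only known at runtime.
inline Result typesEqual(int t1, int t2) {
    return t1 == t2 ? Yes : Maybe;
}
```

With the implicit conversion, Maybe (== 1) evaluates to true, so `if (typesEqual(t1, t2))` silently treats "don't know yet" as "yes" -- exactly the ambiguity an explicit `== No` or `== Yes` makes visible at the call site.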
|
3,191,535 | 3,191,542 | Pushing a static array into a std::vector? | I'm trying to do the following:
I have:
std::vector<std::vector<GLdouble[2]>> ThreadPts(4);
then I try to do:
GLdouble tmp[2];
while(step--)
{
fx += dfx;
fy += dfy;
dfx += ddfx;
dfy += ddfy;
ddfx += dddfx;
ddfy += dddfy;
tmp[0] = fx;
tmp[1] = fy;
ThreadPts[currentvector].push_back(tmp);
}
But the compiler says:
Error 15 error C2440: 'initializing' : cannot convert from 'const GLdouble [2]' to 'double [2]' C:\Program Files\Microsoft Visual Studio 9.0\VC\include\vector 1211
How could I do this then?
I'm using VS 2008 and don;t have std::array, and I don't have boost.
Thanks
| A C-style array is not assignable, so it cannot be used as the value type of a vector.
If you are using at least C++11, you can #include <array> and use std::array. (Historically available in Visual C++ 2008 SP1 as std::tr1::array).
typedef std::vector<GLdouble[2]> pointList;
// Becomes
typedef std::vector<std::array<GLdouble, 2>> pointList;
https://en.cppreference.com/w/cpp/container/array
If you don't have that, you may be able to simply copy the Boost Array header into your project and use it on its own; it doesn't rely on many other parts of Boost, and those on which it does rely can be easily removed.
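Putting it together, a minimal sketch of the fixed declaration, assuming C++11's <array> (with VS2008 SP1 substitute std::tr1::array); the GLdouble typedef here is a stand-in for the real GL header:

```cpp
#include <array>
#include <cassert>
#include <vector>

typedef double GLdouble;                       // stand-in for the GL typedef
typedef std::array<GLdouble, 2> Point;

std::vector<std::vector<Point> > makeThreadPts() {
    std::vector<std::vector<Point> > threadPts(4);
    Point tmp = { { 1.0, 2.0 } };              // assignable and copyable,
    threadPts[0].push_back(tmp);               // so push_back now compiles
    return threadPts;
}
```

Unlike `GLdouble[2]`, `std::array<GLdouble, 2>` is copy-assignable, which is exactly the property vector's value type needs.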
|
3,191,610 | 3,192,421 | Writing a keyboard driver that accepts input from code | The ultimate goal of this project is to send low level input (so that it looks like it is coming from the keyboard) to my windows machine.
I know C++, Python, and Java. Though I would love to do this in python, C++ will probably be the only option.
I have been searching around the internet and have found something called a Keyboard Filter Driver that can inject keystrokes into the keyboard stream by adding an extra layer to the driver. Is this the best way to accomplish my goal? If yes, where could I find some material to help me code it?
Note: Windows Function SendInput() is not an option for me
| Download DDK or WDK and look at kbfiltr sample. You can't use Python or Java. Drivers are typically written in C. If you have no driver development background it will be not so easy (you need to read a lot of docs to understand what you are actually doing).
Good luck!
|
3,191,727 | 3,191,773 | Better way to copy several std::vectors into 1? (multithreading) | Here is what I'm doing:
I'm taking in bezier points and running bezier interpolation then storing the result in a std::vector<std::vector<POINT>.
The bezier calculation was slowing me down so this is what I did.
I start with a std::vector<USERPOINT> which is a struct with a point and 2 other points for bezier handles.
I divide these up into ~4 groups and assign each thread 1/4 of the work. To do this I created 4 std::vector<std::vector<POINT> > to store the results from each thread. In the end all the points have to be in 1 contiguous vector. Before I used multithreading I accessed this directly, but now I reserve the size of the 4 vectors produced by the threads and insert them into the original vector, in the correct order. This works, but unfortunately the copy part is very slow and makes it slower than without multithreading. So now my new bottleneck is copying the results into the final vector. How could I do this more efficiently?
Thanks
| Have all the threads put their results into a single contiguous vector just like before. You have to ensure each thread only accesses parts of the vector that are separate from the others. As long as that's the case (which it should be regardless -- you don't want to generate the same output twice) each is still working with memory that's separate from the others, and you don't need any locking (etc.) for things to work. You do, however, need/want to ensure that the vector for the result has the correct size for all the results first -- multiple threads trying (for example) to call resize() or push_back() on the vector will wreak havoc in a hurry (not to mention causing copying, which you clearly want to avoid here).
Edit: As Billy O'Neal pointed out, the usual way to do this would be to pass a pointer to each part of the vector where each thread will deposit its output. For the sake of argument, let's assume we're using the std::vector<std::vector<POINT> > mentioned as the original version of things. For the moment, I'm going to skip over the details of creating the threads (especially since it varies across systems). For simplicity, I'm also assuming that the number of curves to be generated is an exact multiple of the number of threads -- in reality, the curves won't divide up exactly evenly, so you'll have to "fudge" the count for one thread, but that's really unrelated to the question at hand.
std::vector<USERPOINT> inputs; // input data
std::vector<std::vector<POINT> > outputs; // space for output data
const int thread_count = 4;
struct work_packet { // describe the work for one thread
USERPOINT *inputs; // where to get its input
std::vector<POINT> *outputs; // where to put its output
int num_points; // how many points to process
HANDLE finished; // signal when it's done.
};
std::vector<work_packet> packets(thread_count); // storage for the packets.
std::vector<HANDLE> events(thread_count); // storage for parent's handle to events
outputs.resize(inputs.size());   // can't resize output after processing starts.
for (int i=0; i<thread_count; i++) {
    int offset = i * inputs.size() / thread_count;
    packets[i].inputs = &inputs[0]+offset;
    packets[i].outputs = &outputs[0]+offset;
    packets[i].num_points = inputs.size()/thread_count;
    events[i] = packets[i].finished = CreateEvent(NULL, FALSE, FALSE, NULL);
    threads[i].process(&packets[i]);   // thread creation details omitted
}
// wait for curves to be generated (Win32 style, for the moment).
WaitForMultipleObjects(thread_count, &events[0], TRUE, INFINITE);
Note that although we have to be sure that the outputs vector doesn't get resized while be operated on by multiple threads, the individual vectors of points in outputs can be, because each will only ever be touched by one thread at a time.
|
3,191,854 | 3,191,865 | Automatic Evaluation Strategy Selection in C++ | Consider the following function template:
template<typename T> void Foo(T)
{
// ...
}
Pass-by-value semantics make sense if T happens to be an integral type, or at least a type that's cheap to copy.
Using pass-by-[const]-reference semantics, on the other hand, makes more sense if T happens to be an expensive type to copy.
Let's assume for a second that you are writing a library. Ideally, as a library implementer, your job is to provide your consumers with a clean API that is both as generic and efficient as possible. How then, do you provide a generic interface that caters to both types of argument passing strategies?
Here is my first attempt at getting this to work:
#include <boost/type_traits.hpp>
template<typename T> struct DefaultCondition
{
enum {value = boost::is_integral<T>::value /* && <other trait(s)> */};
};
template< typename T, class Condition = DefaultCondition<T> > class Select
{
template<bool PassByValue = Condition::value, class Dummy = void> struct Resolve
{
typedef T type;
};
template<class Dummy> struct Resolve<false, Dummy>
{
typedef const T& type;
};
public: typedef typename Resolve<>::type type;
};
Typical usage:
template<typename T> class EnterpriseyObject
{
typedef typename Select<T>::type type;
public: explicit EnterpriseyObject(type)
{
// ...
}
};
struct CustomType {};
void Usage()
{
EnterpriseyObject<int>(0); // Pass-by-value.
(EnterpriseyObject<CustomType>(CustomType())); // Pass-by-const-reference.
}
This, of course, indirectly breaks implicit template argument deduction for non-class templates:
template<typename T> void Foo(typename Select<T>::type)
{
// ...
}
void Usage()
{
Foo(0); // Incomplete.
Foo<int>(0); // Fine.
}
This can be "fixed" with the Boost.Typeof library and a macro, a la the WinAPI:
#define Foo(Arg) ::Foo<BOOST_TYPEOF((Arg))>((Arg))
Though this is just a quasi-portable hack.
As you can see, my general approach is not really satisfactory for all cases.
As a hobbyist programmer, I neither have real-world experience nor do I have access to production-quality code for reference. I also realize that this might seem like a bad case of premature optimization, but I'm genuinely interested in a couple of things:
Do you, or have you used this type of optimization* in the past?
Does the Boost (or any other public) library already provide similar functionality?
If the answer to #1 or #2 is a 'yes' -- how is the non-class template case handled?
Are there any obvious pitfalls that I'm not seeing with something like this?
Finally, is this even a sane thing to do?
* Not profiled. ;)
|
Yes. All the time. I use it myself.
Yes, use Boost.Utility's Call Traits :)
Usage would be...
template <typename T>
void foo(typename boost::call_traits<T>::param_type param)
{
// Use param
}
As far as I know, non-class types are passed by value unless passing by reference would actually be faster. Thanks to partial template specialization, this can be customized relatively easily. Note that param_type is a dependent type, so T cannot be deduced from the argument; callers of such a function template must supply T explicitly.
Sorry, didn't really read what you did, it just looked like exactly what I went through a few months ago. Therefore, can't really answer this one. My recommendation is just to read through Boost.Utility.
Of course!
|
3,191,912 | 3,192,002 | Making worker threads? | Is there a way to make "worker threads" Basically I tried creating threads every time I needed them and this resulted in being slower than with 1 thread becase creating a new thread all the time is expensive. Would there be a way to create worker threads when the application first starts, then give them work when necessary?
Thanks
| Yes, you can create the threads up front and have them wait for signals to go and start their work.
These signals can be message queues or semaphores or any other sort of inter-thread communication method.
As an example, we once put together a system under UNIX which had a master thread to receive work over the net and farm the jobs out to slave threads. The slave threads would actually have to interact with another system by screen scraping (basically emulating a user) to get their desired information.
In other words, work could come in faster than slave threads could do it.
So we started up about fifty odd slaves and just created a condition variable for them (this was using pthreads). Each slave that wasn't doing active work simply waited on the condition variable.
When a job came in to the master thread, it loaded the relevant data into known memory and "kicked" the condition variable, waking up one of the slaves which would then grab the work and start processing it (after notifying the master that it could continue).
So there was no overhead in having to create threads on the fly - all were created on application start-up and simply waited for work to be handed over.
Of course, there's a potential downside to this sort of static sizing in that you may get into trouble if you actually need more threads. We solved it by simply monitoring the maximum number of threads and ensuring the process was restarted the following day with more if we were consistently running out.
If you want to have a more dynamic solution, you need to look into a concept called thread pooling. This is basically what I've described above but it generally lets you set a minimum and maximum number of threads along with a maximum time an inactive thread will be allowed to survive without work (unless you're already at the minimum).
Its implementation could be something as simple as:
master:
minslaves = 7
maxslaves = 20
maxtime = 600
numslaves = 0
freeslaves = 0
while running:
while numslaves < minslaves:
increment numslaves and freeslaves
start a slave
endwhile
wait for work
if freeslaves = 0 and numslaves < maxslaves:
start a slave
endif
queue work to slavequeue
endwhile
while numslaves > 0:
wait
endwhile
exit
and:
slave:
while running:
get work from slavequeue with timeout
if timed out:
if time since last work > maxtime and numslaves > minslaves:
break out of while loop
endif
restart while loop
endif
decrement freeslaves
do the work
increment freeslaves
endwhile
decrement numslaves and freeslaves
exit
(with proper semaphore protection of all shared variables of course, numslaves, freeslaves, slavequeue and the others if you decide to change them after threads have been started).
And, if your environment already has thread pooling, I suggest you use it rather than trying to convert my pseudo-code above. My code is from memory and is meant to illustrate a point - it may or may not be bug-free :-)
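As a concrete (and much smaller) sketch of the pre-started-worker idea, here is roughly what the master/slave structure looks like in C++11 with a condition variable; the question predates C++11, but the same shape works with pthreads as described above:

```cpp
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>
#include <cassert>

// Minimal worker pool: threads are created once up front and sleep on a
// condition variable until work is queued, so no per-job thread creation.
class WorkerPool {
public:
    explicit WorkerPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~WorkerPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();                 // wake everyone so they can exit
        for (std::size_t i = 0; i < workers_.size(); ++i)
            workers_[i].join();
    }
    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(m_);
            jobs_.push(job);
        }
        cv_.notify_one();                 // "kick" one sleeping worker
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (jobs_.empty())        // done_ must be true here
                    return;
                job = jobs_.front();
                jobs_.pop();
            }
            job();                        // run the work outside the lock
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()> > jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};
```

The destructor drains remaining jobs before joining, which corresponds to the "while numslaves > 0: wait" shutdown in the pseudo-code above.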
|
3,191,925 | 3,211,153 | On the fly font coloring in Tclsh via c++ | I am an amateur try to hack together a little project. It is a simple note storage and retrieval console app on Windows Vista (and XP - i'm hoping to run the whole thing off a USB Stick).
I use Sqlite as the store and Tcl/SQL scripts to add notes (and tags!) and also retrieve them by tag. 3 tables and a "Toxi" schema.
So anyway... I want to use it from either a "dos prompt" or more frequently tclsh (NOT wish!) I don't want the windowing shell or to use TK at all. But to help visually distinguish some things, stdin from stdout, notes from timestamps, etc, I want to change the font color on-the-fly with some kind of crude markup.
I found a c++ project that will do exactly this! Jaded Hobo put it up on: http://www.codeproject.com/script/Articles/ViewDownloads.aspx?aid=9130. Jaded Hobo says the header file "Console.H" is sufficient to include in a c++ project, but he doesn't know TCL.
I found SWIG, the interface compiler, and I'm going to give it a try. But I'm clueless on a few things:
Can just a header file be enough?
The SWIG Win32 examples aren't as edifying as the 'nix example and they use MS VC++ (VStudio)- I want to use Quincy/MinGW.
(Oh, btw this is my first ever attempt at using C of any kind. So can you show how to use SWIG with Quincy?)
How can I glean from the header source just what the heck to type in my Tcl script to use it?
Thank you for reading this, let alone answering. I started to put it on comp.lang.tcl but I don't like my email addr broadcast like that.
| A header isn't enough by itself. On the other hand, you really don't need to go to all that much work since this page indicates that the API is actually really simple. Here's the C code that you need:
#include <tcl.h>
#include <windows.h>
static int MySetConsoleColorCmd(
ClientData clientData, Tcl_Interp *interp,
int objc, Tcl_Obj *const objv[])
{
HANDLE hConsole;
int code;
/* Parse arguments, first for argument count, then for number format */
if (objc != 2) {
Tcl_WrongNumArgs(interp, 1, objv, "colorCode");
return TCL_ERROR;
} else if (Tcl_GetIntFromObj(interp, objv[1], &code) != TCL_OK) {
return TCL_ERROR;
}
/* Get console handle, checking for the error case */
hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
if (hConsole == INVALID_HANDLE_VALUE) {
Tcl_SetResult(interp, "not a console application", TCL_STATIC);
return TCL_ERROR;
}
/* Set the color! */
SetConsoleTextAttribute(hConsole, code);
return TCL_OK;
}
/* Standard entry point for loadable library */
int Consolecolor_Init(Tcl_Interp *interp) {
Tcl_CreateObjCommand(interp, "consolecolor", MySetConsoleColorCmd,
NULL, NULL);
return TCL_OK;
}
Compile this up into a DLL (it's got no fancy dependencies at all, beyond Tcl itself) called consolecolor.dll (the name should match the entry point function somewhat) and then you'll be able to use the load command to import the new consolecolor command into your code, like this:
load /path/to/consolecolor.dll
# Duplicate example from the page mentioned at the top of this answer
for {set k 1} {$k < 255} {incr k} {
consolecolor $k
puts "$k => I want to be nice today!"
}
For a guide to how to pick colors, see this MSDN page.
|
3,192,052 | 3,192,189 | Critique my concurrent queue | This is a concurrent queue I wrote which I plan on using in a thread pool I'm writing. I'm wondering if there are any performance improvements I could make. atomic_counter is pasted below if you're curious!
#ifndef NS_CONCURRENT_QUEUE_HPP_INCLUDED
#define NS_CONCURRENT_QUEUE_HPP_INCLUDED
#include <ns/atomic_counter.hpp>
#include <boost/noncopyable.hpp>
#include <boost/smart_ptr/detail/spinlock.hpp>
#include <cassert>
#include <cstddef>
namespace ns {
template<typename T,
typename mutex_type = boost::detail::spinlock,
typename scoped_lock_type = typename mutex_type::scoped_lock>
class concurrent_queue : boost::noncopyable {
struct node {
node * link;
T const value;
explicit node(T const & source) : link(0), value(source) { }
};
node * m_front;
node * m_back;
atomic_counter m_counter;
mutex_type m_mutex;
public:
// types
typedef T value_type;
// construction
concurrent_queue() : m_front(0), m_mutex() { }
~concurrent_queue() { clear(); }
// capacity
std::size_t size() const { return m_counter; }
bool empty() const { return (m_counter == 0); }
// modifiers
void push(T const & source);
bool try_pop(T & destination);
void clear();
};
template<typename T, typename mutex_type, typename scoped_lock_type>
void concurrent_queue<T, mutex_type, scoped_lock_type>::push(T const & source) {
node * hold = new node(source);
scoped_lock_type lock(m_mutex);
if (empty())
m_front = hold;
else
m_back->link = hold;
m_back = hold;
++m_counter;
}
template<typename T, typename mutex_type, typename scoped_lock_type>
bool concurrent_queue<T, mutex_type, scoped_lock_type>::try_pop(T & destination) {
node const * hold;
{
scoped_lock_type lock(m_mutex);
if (empty())
return false;
hold = m_front;
if (m_front == m_back)
m_front = m_back = 0;
else
m_front = m_front->link;
--m_counter;
}
destination = hold->value;
delete hold;
return true;
}
template<typename T, typename mutex_type, typename scoped_lock_type>
void concurrent_queue<T, mutex_type, scoped_lock_type>::clear() {
node * hold;
{
scoped_lock_type lock(m_mutex);
hold = m_front;
m_front = 0;
m_back = 0;
m_counter = 0;
}
if (hold == 0)
return;
node * it;
while (hold != 0) {
it = hold;
hold = hold->link;
delete it;
}
}
}
#endif
atomic_counter.hpp
#ifndef NS_ATOMIC_COUNTER_HPP_INCLUDED
#define NS_ATOMIC_COUNTER_HPP_INCLUDED
#include <boost/interprocess/detail/atomic.hpp>
#include <boost/noncopyable.hpp>
namespace ns {
class atomic_counter : boost::noncopyable {
volatile boost::uint32_t m_count;
public:
explicit atomic_counter(boost::uint32_t value = 0) : m_count(value) { }
operator boost::uint32_t() const {
return boost::interprocess::detail::atomic_read32(const_cast<volatile boost::uint32_t *>(&m_count));
}
void operator=(boost::uint32_t value) {
boost::interprocess::detail::atomic_write32(&m_count, value);
}
void operator++() {
boost::interprocess::detail::atomic_inc32(&m_count);
}
void operator--() {
boost::interprocess::detail::atomic_dec32(&m_count);
}
};
}
#endif
| I think you will run into performance problems with a linked list in this case because of calling new for each new node. And this isn't just because calling the dynamic memory allocator is slow. It's because calling it frequently introduces a lot of concurrency overhead because the free store has to be kept consistent in a multi-threaded environment.
I would use a vector that you resize to be larger when it's too small to hold the queue. I would never resize it smaller.
I would arrange the front and back values so the vector is a ring buffer. This will require that you move elements when you resize though. But that should be a fairly rare event and can be mitigated to some extent by giving a suggested vector size at construction.
Alternatively you could keep the linked list structure, but never destroy a node. Just keep adding it to a queue of free nodes. Unfortunately the queue of free nodes is going to require locking to manage properly, and I'm not sure you're really in a better place than if you called delete and new all the time.
You will also get better locality of reference with a vector. But I'm not positive how that will interact with the cache lines having to shuttle back and forth between CPUs.
Some others suggest a ::std::deque and I don't think that's a bad idea, but I suspect the ring buffer vector is a better idea.
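For illustration, here is a sketch of the growth-only ring-buffer idea with the locking left out so the structure is visible; your spinlock/scoped_lock pair would wrap the push() and try_pop() bodies exactly as it wraps the linked-list versions today. This is not a drop-in replacement for your class:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Growth-only ring buffer: never shrinks, reallocates (rarely) on push.
template<typename T>
class ring_queue {
    std::vector<T> buf_;
    std::size_t head_, count_;
public:
    explicit ring_queue(std::size_t hint = 16)
        : buf_(hint ? hint : 1), head_(0), count_(0) { }

    std::size_t size() const { return count_; }

    void push(const T& value) {
        if (count_ == buf_.size())
            grow();                                // rare once warmed up
        buf_[(head_ + count_) % buf_.size()] = value;
        ++count_;
    }

    bool try_pop(T& out) {
        if (count_ == 0)
            return false;
        out = buf_[head_];
        head_ = (head_ + 1) % buf_.size();
        --count_;
        return true;
    }

private:
    void grow() {                                  // double and unwrap the ring
        std::vector<T> bigger(buf_.size() * 2);
        for (std::size_t i = 0; i < count_; ++i)
            bigger[i] = buf_[(head_ + i) % buf_.size()];
        buf_.swap(bigger);
        head_ = 0;
    }
};
```

Note the steady state performs zero heap traffic per operation, which is the whole point versus new/delete per node.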
|
3,192,097 | 3,192,321 | Fastest way to calculate cubic bezier curves? | Right now I calculate it like this:
double dx1 = a.RightHandle.x - a.UserPoint.x;
double dy1 = a.RightHandle.y - a.UserPoint.y;
double dx2 = b.LeftHandle.x - a.RightHandle.x;
double dy2 = b.LeftHandle.y - a.RightHandle.y;
double dx3 = b.UserPoint.x - b.LeftHandle.x;
double dy3 = b.UserPoint.y - b.LeftHandle.y;
float len = sqrt(dx1 * dx1 + dy1 * dy1) +
sqrt(dx2 * dx2 + dy2 * dy2) +
sqrt(dx3 * dx3 + dy3 * dy3);
int NUM_STEPS = int(len * 0.05);
if(NUM_STEPS > 55)
{
NUM_STEPS = 55;
}
double subdiv_step = 1.0 / (NUM_STEPS + 1);
double subdiv_step2 = subdiv_step*subdiv_step;
double subdiv_step3 = subdiv_step*subdiv_step*subdiv_step;
double pre1 = 3.0 * subdiv_step;
double pre2 = 3.0 * subdiv_step2;
double pre4 = 6.0 * subdiv_step2;
double pre5 = 6.0 * subdiv_step3;
double tmp1x = a.UserPoint.x - a.RightHandle.x * 2.0 + b.LeftHandle.x;
double tmp1y = a.UserPoint.y - a.RightHandle.y * 2.0 + b.LeftHandle.y;
double tmp2x = (a.RightHandle.x - b.LeftHandle.x)*3.0 - a.UserPoint.x + b.UserPoint.x;
double tmp2y = (a.RightHandle.y - b.LeftHandle.y)*3.0 - a.UserPoint.y + b.UserPoint.y;
double fx = a.UserPoint.x;
double fy = a.UserPoint.y;
//a user
//a right
//b left
//b user
double dfx = (a.RightHandle.x - a.UserPoint.x)*pre1 + tmp1x*pre2 + tmp2x*subdiv_step3;
double dfy = (a.RightHandle.y - a.UserPoint.y)*pre1 + tmp1y*pre2 + tmp2y*subdiv_step3;
double ddfx = tmp1x*pre4 + tmp2x*pre5;
double ddfy = tmp1y*pre4 + tmp2y*pre5;
double dddfx = tmp2x*pre5;
double dddfy = tmp2y*pre5;
int step = NUM_STEPS;
while(step--)
{
fx += dfx;
fy += dfy;
dfx += ddfx;
dfy += ddfy;
ddfx += dddfx;
ddfy += dddfy;
temp[0] = fx;
temp[1] = fy;
Contour[currentcontour].DrawingPoints.push_back(temp);
}
temp[0] = (GLdouble)b.UserPoint.x;
temp[1] = (GLdouble)b.UserPoint.y;
Contour[currentcontour].DrawingPoints.push_back(temp);
I'm wondering if there is a faster way to interpolate cubic beziers?
Thanks
| There is another point that is also very important, which is that you are approximating your curve using a lot of fixed-length straight-line segments. This is inefficient in areas where your curve is nearly straight, and can lead to a nasty angular poly-line where the curve is very curvy. There is not a simple compromise that will work for high and low curvatures.
To get around this, you can dynamically subdivide the curve (e.g. split it into two pieces at the half-way point and then see if the two line segments are within a reasonable distance of the curve. If a segment is a good fit for the curve, stop there; if it is not, then subdivide it in the same way and repeat). You have to be careful to subdivide enough that you don't miss any localised (small) features when sampling the curve in this way.
This will not always draw your curve "faster", but it will guarantee that it always looks good while using the minimum number of line segments necessary to achieve that quality.
Once you are drawing the curve "well", you can then look at how to make the necessary calculations "faster".
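For reference, a common way to implement the dynamic subdivision described above is recursive de Casteljau splitting with a flatness test on the control polygon. This is a sketch under those assumptions, not a drop-in for your code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

static Pt mid(Pt a, Pt b) {
    Pt m = { (a.x + b.x) * 0.5, (a.y + b.y) * 0.5 };
    return m;
}

// Perpendicular distance from p to the line through a and b, used as a
// cheap flatness test for the control polygon.
static double distToLine(Pt p, Pt a, Pt b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0)
        return std::sqrt((p.x - a.x) * (p.x - a.x) + (p.y - a.y) * (p.y - a.y));
    return std::fabs(dx * (a.y - p.y) - dy * (a.x - p.x)) / len;
}

// Adaptive de Casteljau subdivision: split at t = 0.5 until both inner
// control points lie within `tol` of the chord, then emit the endpoint.
// Callers should seed `out` with p0 themselves if they want the start point.
void subdivide(Pt p0, Pt p1, Pt p2, Pt p3, double tol, std::vector<Pt>& out) {
    if (distToLine(p1, p0, p3) <= tol && distToLine(p2, p0, p3) <= tol) {
        out.push_back(p3);
        return;
    }
    Pt p01 = mid(p0, p1), p12 = mid(p1, p2), p23 = mid(p2, p3);
    Pt p012 = mid(p01, p12), p123 = mid(p12, p23);
    Pt p0123 = mid(p012, p123);      // the point on the curve at t = 0.5
    subdivide(p0, p01, p012, p0123, tol, out);
    subdivide(p0123, p123, p23, p3, tol, out);
}
```

Nearly straight spans terminate after one or two levels, while tight bends keep splitting, so the segment count adapts to curvature automatically.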
|
3,192,491 | 3,192,538 | How to get a list of hosts connected to a mysql server | I am trying to get a list of hosts connected to a mysql server. How can i get this?
What should i do after connecting to the mysql server.
Code snippets will really help.
Also whats the best api to use to connect to mysql using c++?
| One way you could do it is to execute the query show processlist, which will give you a table with Id, User, Host, db, Command, Time, State and Info columns. Remember that your show processlist query will be part of the output.
|
3,192,631 | 3,192,964 | How to design a big class header file in c++? | I have a big C++ class, which includes 5 other classes inside, and some of them have other classes inside. The total length of a header (.h) file is very big and it is unreadable. This is what I'm trying to do:
// Foo.h file
#ifndef __INCLUDE_FOO_H
#define __INCLUDE_FOO_H
namespace foo {
class Foo {
public:
#include "foo/Bar.h"
void f() { /* something here */ }
};
}
#endif
This is the sub-class:
// Foo/Bar.h file
class Bar {
};
What do you think? Is it a proper approach or I'm re-inventing a wheel?
| Instead of making the inner classes inner classes, you're better off leaving them on their own, eventually in their own namespace:
// in file foo/bar.h
namespace foo_detail
{
class Bar
{
};
};
// in file foo.h
#include "foo/bar.h"
class Foo
{
private:
foo_detail::Bar theBar;
};
This way your Foo class definition is not overly complicated, your definition files are small (and easy to skim over), you do not include things in your Foo class definition (respecting the principle of least surprise - or decreasing the number of WTF/LOC as they say) and when you look at the definition for Bar, you can clearly see it's an implementation details for foo (by the namespace name if nothing else).
Namespaces are good. Trust the namespaces.
|
3,193,020 | 3,193,099 | Calling C++ static member functions from C code | I have a bunch of C code. I have no intention to convert them into C++ code.
Now, I would like to call some C++ code (I don't mind to modify the C++ code so that they are callable by C code).
class Utils {
public:
static void fun();
}
class Utils2 {
public:
static std::wstring fun();
}
If I tend to call them with the following syntax, they wont compiled (I am using VC++ 2008, with C code files with .c extension)
Utils::fun();
// Opps. How I can access std::wstring in C?
Utils2::fun();
Any suggestion?
| // c_header.h
#if defined(__cplusplus)
extern "C" {
#endif
void Utils_func();
size_t Utils2_func(wchar_t* data, size_t size);
#if defined(__cplusplus)
}
#endif
//eof
// c_impl.cpp
// Beware, brain-compiled code ahead!
#include <algorithm>
#include <string>
#include "c_header.h"
// ...plus whatever header declares Utils and Utils2.
void Utils_func()
{
Utils::fun();
}
size_t Utils2_func(wchar_t* data, size_t size)
{
std::wstring wstr = Utils2::fun();
if( wstr.size() >= size ) return wstr.size();
std::copy( wstr.begin(), wstr.end(), data );
data[wstr.size()] = 0;
return wstr.size();
}
//eof
|