question_id int64 25 74.7M | answer_id int64 332 74.7M | title stringlengths 20 150 | question stringlengths 23 4.1k | answer stringlengths 20 4.1k |
|---|---|---|---|---|
2,051,462 | 2,051,489 | Question About Why String is Truncated on First Instance of \0 | I have a function which reads in a character, one byte at a time, through the serial port. After these bytes are collected, they are passed to a method to process the bytes and message.
I know how to fix the problem (fix is below), but why do my bytes get truncated when I don't perform the fix?
unsigned char dpBuf[255];
...
// Pretend dpBuf has values 0x01 0x02 0x03 0x00 0x00 0x00 0x04 0x05
..
ProcessMsg(dpBuf);
..
void ProcessMsg(unsigned char *buff)
{
// buff comes in as 0x01 0x02 0x03 0x00 and rest is truncated
char p[255];
...
for (int i = 0; i < sizeof(buff); i++)
{
sprintf(p, " 0x%X ", (unsigned char)buff[i]);
}
..
}
Fix:
ProcessMsg((unsigned char*)&dpBuf, length); // instead of sizeof() in the loop, use length
..
void ProcessMsg (unsigned char *buff, int length)
{
// buff comes in as the original character string and is not truncated
..
// do for loop, print out contents
}
| buff is declared as unsigned char *, so sizeof(buff) returns the size of a pointer, which is 4 bytes on your machine. Therefore only the first four bytes of the buffer are printed in the loop.
It doesn't matter that dpBuf is declared as an array of larger size because it is passed to the function as a pointer. To circumvent this problem you should add a parameter to the function where the size of the buffer is passed explicitly.
|
2,051,534 | 2,051,585 | Floating Point Math Execution Time | What accounts for the added execution time of the first data set? The assembly instructions are the same.
With DN_FLUSH flag not on, the first data set takes 63 milliseconds, the second set takes 15 milliseconds.
With DN_FLUSH flag on, the first data set takes 15 milliseconds, the second set takes ~0 milliseconds.
Therefore, in both cases the execution time of the first data set is much greater.
Is there any way to decrease the execution time to be closer in line with the second data set?
I am using C++ Visual Studio 2005, /arch:SSE2 /fp:fast running on Intel Core 2 Duo T7700 @ 2.4Ghz Windows XP Pro.
#define NUMLOOPS 1000000
// Denormal values flushed to zero by hardware on ALPHA and x86
// processors with SSE2 support. Ignored on other x86 platforms
// Setting this decreases execution time from 63 milliseconds to 16 millisecond
// _controlfp(_DN_FLUSH, _MCW_DN);
float denormal = 1.0e-38;
float denormalTwo = 1.0e-39;
float denormalThree = 1;
tickStart = GetTickCount();
// Run First Calculation Loop
for (loops=0; loops < NUMLOOPS; loops++)
{
denormalThree = denormal - denormalTwo;
}
// Get execution time
duration = GetTickCount()-tickStart;
printf("Duration = %dms\n", duration);
float normal = 1.0e-10;
float normalTwo = 1.0e-2;
float normalThree = 1;
tickStart = GetTickCount();
// Run Second Calculation Loop
for (loops=0; loops < NUMLOOPS; loops++)
{
normalThree = normal - normalTwo;
}
// Get execution time
duration = GetTickCount()-tickStart;
printf("Duration = %dms\n", duration);
| Quoting from Intel's optimization manual:
When an input operand for a SIMD floating-point instruction [here this includes scalar arithmetic done using SSE] contains values that are less than the representable range of the data type, a denormal exception occurs. This causes a significant performance penalty. An SIMD floating-point operation has a flush-to-zero mode in which the results will not underflow. Therefore subsequent computation will not face the performance penalty of handling denormal input operands.
As for how to avoid this, if you can't flush denormals: do what you can to make sure your data is scaled appropriately and you don't encounter denormals in the first place. Usually this means delaying applying some scale factor until you've finished all of your other computation.
Alternatively, do your computations in double which has a much larger exponent range, and therefore makes it much less likely that you will encounter denormals in the first place.
|
2,051,685 | 2,051,814 | Review: reusable safe_bool implementation | Trying to find a "simple to use" safe_bool idiom/implementation, I've ended up with my own.
Q: Is this implementation correct?
template <typename T>
class safe_bool
{
protected:
typedef void (safe_bool::*bool_type)() const;
bool_type to_bool_type(bool b) const
{ return b ? &safe_bool<T>::safe_bool_true : 0; }
private:
void safe_bool_true() const {}
private:
bool operator ==(safe_bool<T> const & rhs);
bool operator !=(safe_bool<T> const & rhs);
};
to be used like this:
struct A : public safe_bool<A>
{
// operator bool() const { return true; }
operator bool_type() const { return to_bool_type(true); }
};
The only addition to existing base classes would be to_bool_type, but I hope I've got everything else correct, too.
The test cases I used (VC9) can be found here.
The downsides I see in the implementation: bool_type and to_bool_type are visible in derived classes, which might not appeal to everyone. Also, using the wrong template argument (e.g. class B : public safe_bool<A>, introduced during copy-and-paste) will go unnoticed.
| Using a pointer to a member function as the bool alias is idiomatic, as you're doing here.
Your implementation looks correct for what is there, but slightly incomplete. See http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Safe_bool
IMO safe_bool falls into the category of things which do more harm than good; i.e. the complexity and confusion introduced by this idiom, along with the mental effort needed to understand it, are greater than the initial problem it is intended to solve.
|
2,051,705 | 2,880,215 | Has anyone used Facebook Scribe? (the tool for logging everything) | How does it work? (Explain it in terms of server, writes, GETs, values, whatever).
Does it work with Win32 apps?
| I'll try to explain:
There is an application with a Thrift class/interface. When an event that you want to log occurs, you send a message to the server, which collects logs from many sources (applications, server logs, etc.).
The server then decides what to do with it: generate a visualization, send it over TCP/IP, store it in a file, NFS, HDFS; you decide.
The server and the client can be the same app or machine, or the log data can be sent from the client over the internet.
It definitely works with Win32 apps.
|
2,052,224 | 2,052,254 | Virtual destructor for boost:noncopyable classes? | I have a question about the following code:
class MyClass : private boost::noncopyable
{
public:
MyClass() {}
virtual ~MyClass() {}
};
class OtherClass : private boost::noncopyable
{
private:
MyClass* m_pMyClass;
};
My thoughts are that MyClass cannot be copied using construction or assignment. Using a virtual destructor is needed if I want to support deriving classes from MyClass, which I do not want to support. I do not intend to create pointers to this class and pass them around.
I do not want a Singleton and I cannot see a downside to removing the virtual destructor.
Do I introduce a potential problem if I remove the virtual destructor for a noncopyable class? Is there a better practice for handling a class that does not need to be a Singleton, where I only want one instance inside another class and do not need to support inheritance?
| No, the entire point of a virtual destructor is so derived classes can properly destruct polymorphically. If this will never be a base class, you don't need it to be virtual.
|
2,052,497 | 2,052,555 | Is there any way to have dot (.) match newline in C++ TR1 Regular Expressions? | I couldn't find anything regarding this on http://msdn.microsoft.com/en-us/library/bb982727.aspx.
Maybe I could use '[^]+' to match everything but that seems like a hack?
| Boost.Regex has a mod_s flag to make the dot match newlines, but it's not part of the TR1 regex standard. (and not available as a Microsoft extension either, as far as I can see)
As a workaround, you could use [\s\S] (which means match any whitespace or any non-whitespace).
|
2,053,029 | 2,053,078 | How exactly does __attribute__((constructor)) work? | It seems pretty clear that it is supposed to set things up.
When exactly does it run?
Why are there two parentheses?
Is __attribute__ a function? A macro? Syntax?
Does this work in C? C++?
Does the function it works with need to be static?
When does __attribute__((destructor)) run?
Example in Objective-C:
__attribute__((constructor))
static void initialize_navigationBarImages() {
navigationBarImages = [[NSMutableDictionary alloc] init];
}
__attribute__((destructor))
static void destroy_navigationBarImages() {
[navigationBarImages release];
}
|
It runs when a shared library is loaded, typically during program startup.
That's how all GCC attributes are; presumably to distinguish them from function calls.
GCC-specific syntax.
Yes, this works in C and C++.
No, the function does not need to be static.
The destructor runs when the shared library is unloaded, typically at program exit.
So, the way the constructors and destructors work is that the shared object file contains special sections (.ctors and .dtors on ELF) which contain references to the functions marked with the constructor and destructor attributes, respectively. When the library is loaded/unloaded the dynamic loader program (ld.so or somesuch) checks whether such sections exist, and if so, calls the functions referenced therein.
Come to think of it, there is probably some similar magic in the normal static linker so that the same code is run on startup/shutdown regardless if the user chooses static or dynamic linking.
|
2,053,106 | 2,053,205 | initialize boost::multi_array in a class | For start I would like to say that I am newbie.
I am trying to initialized boost:multi_array inside my class. I know how to create a boost:multi_array:
boost::multi_array<int,1> foo ( boost::extents[1000] );
but as part of a class I have problems:
class Influx {
public:
Influx ( uint32_t num_elements );
boost::multi_array<int,1> foo;
private:
};
Influx::Influx ( uint32_t num_elements ) {
foo = boost::multi_array<int,1> ( boost::extents[ num_elements ] );
}
My program compiles, but at run time I get an error when I try to access an element of foo (e.g. foo[0]).
How to solve this problem?
| Use an initialisation list (BTW, I know zip about this bit of Boost, so I'm going by your code):
Influx::Influx ( uint32_t num_elements )
: foo( boost::extents[ num_elements ] ) {
}
|
2,053,798 | 2,053,826 | Extending Ruby with C++? | Is there any way to pass Ruby objects to a C++ application ? I have never done that kind of thing before and was wondering if that would be possible. Would it require to modify the Ruby core code ?
| Yes, and no, respectively.
Ruby is written in C. C++ is, by design, C-compatible.
All objects in Ruby are held by a VALUE object (which is a union type), which can be passed around quite easily.
Any directions you find for extending Ruby with C apply in C++ with little modification. Alternatively, you can use something like SWIG to simplify writing your extensions.
|
2,054,427 | 2,054,456 | VC ++ express, how do I fix this error? | I have experience programming in C#, but I'm taking a C++ class this semester, and I'm writing my second project, but I keep getting this error when I try to build a debug configuration of my program.
My build log is below, any ideas on what's going on? I'm at a loss.
Thanks everyone!
1>------ Rebuild All started: Project: Project_2, Configuration: Debug Win32 ------
1>Deleting intermediate and output files for project 'Project_2', configuration 'Debug|Win32'
1>Compiling...
1>main.cpp
1>Linking...
1>LINK : C:\Users\Alex\Documents\Visual Studio 2008\Projects\Project_2\Debug\Project_2.exe not found or not built by the last incremental link; performing full link
1>Embedding manifest...
1>Project : error PRJ0002 : Error result 31 returned from 'C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\mt.exe'.
1>Build log was saved at "file://c:\Users\Alex\Documents\Visual Studio 2008\Projects\Project_2\Project_2\Debug\BuildLog.htm"
1>Project_2 - 1 error(s), 0 warning(s)
========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========
| You should look at the buildlog.htm file that is given in the build output. It will give you more (useful) information about what has happened.
|
2,054,477 | 2,054,546 | Why is my C++ app faster than my C app (using the same library) on a Core i7 | I have a library written in C and I have 2 applications written in C++ and C. This library is a communication library, so one of the API calls looks like this:
int source_send( source_t* source, const char* data );
In the C app the code does something like this:
source_t* source = source_create();
for( int i = 0; i < count; ++i )
source_send( source, "test" );
Whereas the C++ app does this:
struct Source
{
Source()
{
_source = source_create();
}
bool send( const std::string& data )
{
source_send( _source, data.c_str() );
}
source_t* _source;
};
int main()
{
Source* source = new Source();
for( int i = 0; i < count; ++i )
source->send( "test" );
}
On an Intel Core i7, the C++ code produces almost exactly 50% more messages per second,
whereas on an Intel Core 2 Duo it produces almost exactly the same number of messages per second. (The Core i7 has 4 cores with 2 processing threads each.)
I am curious what kind of magic the hardware performs to pull this off. I have some theories but I thought I would get a real answer :)
Edit: Additional information from comments
Compiler is visual C++, so this is a windows box (both of them)
The implementation of the communication library creates a new thread to send messages on. The source_create is what creates this thread.
| From examining your source code alone, I can't see any reason why the C++ code should be faster.
The next thing I would do is check out the assembly code that is being generated. If you are using a GNU toolchain, you have a couple of ways to do that.
You can ask gcc and g++ to output the assembly code via the -S command line argument. Make sure that other than adding that argument, you use the exact same command line arguments that you do for a regular compile.
A second option is to load your program with gdb and use the disas command.
Good luck.
Update
You can do the same things with the Microsoft Toolchain.
To get the compiler to output assembly, you can use either /FA or /FAs. The first should output assembly only while the second will mix assembly and source (which should make it easier to follow).
As for using the debugger, once you have the debugger started in Visual Studio, navigate to "Debug | Windows | Disassembly" (verified on Visual Studio 2005, other versions may vary).
|
2,054,572 | 2,054,602 | Linker options for Boost | I'm wondering if there are any simple ways to link boost libraries (all or individual) via some entry like....
-lSDL_ttf
The above links SDL's TrueType font library. Can this be done with boost? If so, I'm not sure which file I'm supposed to link against. I'm currently using boost_1_40_0.
If this isn't possible, or there are better ways to do this, I'd be happy to hear them.
P.S. I'm using the CodeBlocks IDE.
| Most boost libraries don't need to be linked as they are header only.
For those that are not header only, see the instructions here on the naming conventions and make sure you put the folder containing the boost libraries in your library search path if you want to avoid specifying it explicitly.
|
2,054,710 | 2,054,745 | Can a C++ class determine whether it's on the stack or heap? | I have
class Foo {
....
}
Is there a way for Foo to be able to separate out:
void blah() {
Foo foo; // on the stack
}
and
void blah() {
Foo* foo = new Foo(); // on the heap
}
I want Foo to be able to do different things depending on whether it's allocated on the Stack or the Heap.
Edit:
A lot of people have asked me, "Why do this?"
The answer:
I'm using a ref-counted GC right now. However, I want to have ability to run mark & sweep too. For this, I need to tag a set of "root" pointers -- these are the pointers on the stack. Thus, for each class, I'd like to know whether they're in the stack or in the heap.
| You need to actually ask us the real question(a) :-) It may be apparent to you why you think this is necessary but it almost certainly isn't. In fact, it's almost always a bad idea. In other words, why do you think you need to do this?
I usually find it's because developers want to delete or not delete the object based on where it was allocated but that's something that should usually be left to the client of your code rather than your code itself.
Update:
Now that you've clarified your reasons in the question, I apologise, you've probably found one of the few areas in which what you're asking makes sense (running your own garbage collection processes). Ideally, you'd override all the memory allocation and de-allocation operators to keep track of what is created and removed from the heap.
However, I'm not sure it's a simple matter of intercepting new/delete for the class, since there could be situations where delete is not called and, because a collector that combines reference counting with mark/sweep has to know about every live pointer, you need to be able to intercept pointer assignments for it to work correctly.
Have you thought about how you're going to handle that?
The classic example:
myobject *x = new xclass();
x = 0;
will not result in a delete call.
Also, how will you detect the fact that the pointer to one of your instances is on the stack? The interception of new and delete can let you store whether the object itself is stack or heap-based but I'm at a loss as to how you tell where the pointer is going to be assigned to, especially with code like:
myobject *x1 = new xclass(); // yes, calls new.
myobject *x2 = x1; // no, it doesn't.
Perhaps you may want to look into C++'s smart pointers, which go a long way toward making manual memory management obsolete. Shared pointers on their own can still suffer from problems like circular dependencies but the judicious use of weak pointers can readily solve that.
It may be that manual garbage collection is no longer required in your scenario.
(a) This is known as the X/Y problem. Many times, people will ask a question that pre-supposes a class of solution whereas a better approach would be just to describe the problem with no preconceptions of what the best solution will be.
|
2,055,205 | 2,055,242 | There is an if-else, is there a Neither Nor statement? | Is there a neither A nor B syntax?
| While there isn't a built-in syntax to do this, I'd suggest you take a look at the list of supported logical operators and then carefully study De Morgan's laws. Sufficient knowledge in these two fields will allow you to write any logical statement in if–else if syntax.
EDIT: To completely answer your question (although this has been done already in other answers), you could write a neither–nor statement like this:
if (!A && !B) { DoStuff(); }
|
2,055,221 | 2,788,610 | How to build "Auto Detect Proxy Settings" In Windows and in Mac | What are the steps to implement that feature in 1) Windows and 2) in Mac? I went through these, still I am not very clear! I am using C/C++ in Windows and in Mac. So, Win API or Mac API will be enough.
I am also confused because Mac Firefox has also has a option "Use system proxy settings", which is not present in Windows Firefox.
These are some similar questions:
LINK-1: Programmatically detect system-proxy settings on Windows XP with Python
LINK-2: How does Windows actually detect LAN (proxy) settings when using Automatic Configuration
According the this Wiki WPAD article, we should traverse in this sequence:
http://wpad.branch.example.com/wpad.dat
http://wpad.example.com/wpad.dat
http://wpad.com/wpad.dat
But LINK-1 says "GET http://wpad/wpad.dat" is enough. Which one should I follow?
| I used libproxy, which solved this requirement.
|
2,055,350 | 2,055,373 | Any cross platform way to build cpp skeleton from a header? | I'm tired of copy pasting the header into my cpp file then hacking at it until its in the correct form. Has anyone made a program to read a header file and make a corresponding cpp skeleton? I need something that is cross platform or bare minimum works on Linux. A vim plugin would also be acceptable.
Example
class A
{
public:
int DoSomething( int number );
};
Would produce the following file
int A::DoSomething( int number )
{
;
}
| http://www.vim.org/scripts/script.php?script_id=2624
|
2,055,486 | 2,055,501 | Why can't I access a public var using the getter? | having a file containing these statements:
public:
boost::shared_ptr<TBFControl::TbfCmdHandler> _tbfCmdHandlerPtr;
// will be private later...
boost::shared_ptr<TBFControl::TbfCmdHandler> getTBFCmdHandler()
{ return _tbfCmdHandlerPtr; }
I can use it this way:
boost::shared_ptr<TBFControl::TbfCmdHandler>myTbfCmdHandlerPtr(
this->getTBFInstallation()-> _tbfCmdHandlerPtr );
but not, like i want, this way:
boost::shared_ptr<TBFControl::TbfCmdHandler>myTbfCmdHandlerPtr(
this->getTBFInstallation()->getTBFCmdHandler() );
Using the getter function, the following error occurs:
'Housekeeping::TBFInstallation::getTBFCmdHandler'
: cannot convert 'this' pointer from
'const Housekeeping::TBFInstallation'
to 'Housekeeping::TBFInstallation &'
what is going wrong here?
| Obviously, this->getTBFInstallation() returns a pointer to a const object. You need to make the function getTBFCmdHandler const as well.
boost::shared_ptr<TBFControl::TbfCmdHandler> getTBFCmdHandler() const
{
return _tbfCmdHandlerPtr;
}
Note the const keyword at the end of the first line.
Edit: By adding const, you're in effect changing the type of this from TBFInstallation * to TBFInstallation const *. Basically, by adding the const, you're saying that the function can be called even when the object on which the function is being called is const.
|
2,055,518 | 2,055,645 | Converting: #define xxxxxx ((LPCSTR) 4) | In WinCrypt.h I see:
#define CERT_CHAIN_POLICY_SSL ((LPCSTR) 4)
WINCRYPT32API BOOL WINAPI CertVerifyCertificateChainPolicy(
IN LPCSTR pszPolicyOID,
IN PCCERT_CHAIN_CONTEXT pChainContext,
IN PCERT_CHAIN_POLICY_PARA pPolicyPara,
IN OUT PCERT_CHAIN_POLICY_STATUS pPolicyStatus
);
The first argument takes CERT_CHAIN_POLICY_SSL. This appears to be a pointer to a C string, yet it is an integer!?
The pointer is obviously a 32bit integer, but what is it pointing at?
If the number is < 255 it will take up a single byte, so is the C string in fact a single byte "string" (ie a byte)?
When converting to another language that does support BYTE variables, can I just create a bVar (a BYTE variable), assign it 4, and then pass a pointer to that BYTE variable?
| Sometimes an API will take a parameter that can be either a 'cookie'/ID for a well-known object or a pointer to a name, which is what appears to be the case here. 4 is a cookie/handle/ID for the well-known CERT_CHAIN_POLICY_SSL policy. Some users of the API might specify a policy that's not known to the library ahead of time but is instead named by a string that the library can look up somewhere (the registry, a config file, or the like).
In a somewhat similar vein, GetProcAddress() can take a pointer to the name of the function you want the address for (which is how it's used 99% of the time today), or the pointer-to-a-string parameter can be a number that specifies the ordinal of the function.
Overloading pointer parameters like this is an unfortunate technique that's sometimes used to make an API more flexible. Fortunately, it's not particularly common.
Anyway, if you want to call this API from another language and specify the CERT_CHAIN_POLICY_SSL policy, you need to pass a 4 for the pointer's value (not a pointer pointing to the value 4).
|
2,055,818 | 2,055,884 | Why is Visual C++ 2010 complaining about 'Using uninitialized memory'? | I've got a function that takes a pointer to a buffer, and the size of that buffer (via a pointer). If the buffer's not big enough, it returns an error value and sets the required length in the out-param:
// FillBuffer is defined in another compilation unit (OBJ file).
// Whole program optimization is off.
int FillBuffer(__int_bcount_opt(*pcb) char *buffer, size_t *pcb);
I call it like this:
size_t cb = 12;
char *p = (char *)malloc(cb);
if (!p)
return ENOMEM;
int result;
for (;;)
{
result = FillBuffer(p, &cb);
if (result == ENOBUFS)
{
char *q = (char *)realloc(p, cb);
if (!q)
{
free(p);
return ENOMEM;
}
p = q;
}
else
break;
}
Visual C++ 2010 (with code analysis cranked to the max) complains with 'warning C6001: Using uninitialized memory 'p': Lines: ...'. It reports line numbers covering pretty much the entire function.
Visual C++ 2008 doesn't. As far as I can tell, this code's OK. What am I missing? Or what is VC2010 missing?
| This has to be a bug in Visual Studio 2010. Wrapping malloc removes the warning, as in the following tested code:
char * mymalloc(int i)
{
return (char *) malloc(i);
}
// ...
char *p = mymalloc(cb);
|
2,055,871 | 2,055,906 | Inline assembly inside loops | I use inline assembly massively in a project where I need to call functions with an unknown number of arguments at compile time and while I manage myself to get it to work, sometimes, in linux (in windows I don't recall having that problem) strange things like this happen:
If I have something like
for(int i = 1; i >= 0; i--)
asm("push %0"::"m"(someArray[i]));
It works.
If I have
for(int i = this->someVar; i >= 0; i--)
asm("push %0"::"m"(someArray[i]));
and I guarantee with my life that someVar is holding the value 1 it throws segmentation fault.
Also if I have
int x = 1;
for(int i = x; i >= 0; i--)
asm("push %0"::"m"(someArray[i]));
it works but
int x = this->someVar;
for(int i = x; i >= 0; i--)
asm("push %0"::"m"(someArray[i]));
does not.
Also, strangely, in some functions I don't have problems doing this, while in others I do, all in the same object file.
If someone can point me to some information that can clear up what the problem is, I would appreciate it.
Beware that I really have to push the arguments in a for loop so avoiding it is not an option.
I also tried using the asm qualifier volatile, but nothing changed.
| I can't tell exactly what the problem is, but try writing the whole loop in plain assembly, along these lines:
asm{
loop1:
mov ax, this->var
...
dec ax
cmp ax, 0
je exit
jmp loop1
}
...
exit:
Also, try making the var value static; that may help too.
|
2,056,033 | 2,064,969 | Netbeans C/C++ JavaDoc code-completion | I am developing C++ in NetBeans 6.7.1. When I press CTRL + space for autocomplete there is shown only method's signature. I am using JavaDoc for commenting my code but NetBeans doesn't show it. I have installed Doxygen plugin but it is only for generating complete documentation.
Is there any way how to force the IDE to show signature and JavaDoc for C++ please?
I think that it should not be a problem because this functionality is currently implemented for Java.
Thanks a lot.
| So I asked this question on the NetBeans forum (using a friend's account because I don't have my own), and the conclusion is: it is not currently possible, and it has been filed as a feature request.
|
2,056,380 | 2,056,425 | Linking error: Undefined Symbols, lots of them (cpp cross compiling) | I get to the very last linking command (the actual executable is being linked) but i get a BUNCH of undefined symbols (and they're in cpp and look so scary to me, a simple c programmer)
--its probably something simple but i cant get what im supposed to put as linker (its using gcc here...? is that appropriate? g++ told me too many input files lol) (ld returns much of the same)
anyway its ridiculous, i am completely stuck
thankyou for your help!
make
Making all in docs
Making all in en
make[2]: Nothing to be done for `all'.
make[2]: Nothing to be done for `all-am'.
/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.0 -arch armv6 -pipe -std=c99 -Wno-trigraphs -fpascal-strings -fasm-blocks -Wreturn-type -Wunused-variable -fmessage-length=0 -fvisibility=hidden -miphoneos-version-min=2.0 -gdwarf-2 -mthumb -miphoneos-version-min=2.0 -I../include -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS2.2.sdk -O0 -arch armv6 -pipe -std=c99 -gdwarf-2 -mthumb -I../include -L../libs -L../../libs -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS2.0.sdk -L/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib -o mutella -L/usr/local/lib uilocalsocket.o gnumarkedfiles.o uitextmode.o sha1.o sha1thread.o gnuwordhash.o gnulogcentre.o asyncdns.o gnuwebcache.o uiterminal.o uiremote.o asyncproxysocket.o messages.o lineinput.o rcobject.o event.o term_help.o mprintf.o readline4fix.o asyncfile.o tstring.o dir.o inifile.o property.o byteorder.o mui.o gnusearch.o mthread_unix.o asyncsocket.o controller.o preferences.o packet.o gnuupload.o gnusock.o gnushare.o gnunode.o gnuhash.o gnudownload.o gnudirector.o gnucache.o conversions.o common.o main.o -lpthread -lreadline -lcurses -lpoll -lz
Undefined symbols:
"std::__throw_bad_alloc()", referenced from:
__gnu_cxx::new_allocator<std::_List_node<MUILSocketCommunicate*> >::allocate(unsigned long, void const*)in uilocalsocket.o
__gnu_cxx::new_allocator<SMarkedFile>::allocate(unsigned long, void const*)in gnumarkedfiles.o
__gnu_cxx::new_allocator<std::_List_node<long> >::allocate(unsigned long, void const*)in gnumarkedfiles.o
__gnu_cxx::new_allocator<std::_Rb_tree_node<TString<char> > >::allocate(unsigned long, void const*)in gnumarkedfiles.o
__gnu_cxx::new_allocator<std::_Rb_tree_node<long> >::allocate(unsigned long, void const*)in gnumarkedfiles.o
__gnu_cxx::new_allocator<std::_Rb_tree_node<std::pair<long const, MFileSizeClass> > >::allocate(unsigned long, void const*)in gnumarkedfiles.o
| It seems you are trying to link C++ code with a C (gcc) linker call. That won't pull in the C++ runtime libraries, which is exactly what you are seeing. Try g++ instead of gcc (or throw out the C++ code/libraries).
|
2,056,778 | 2,058,296 | How to display a modal message box in C++ on Mac? | CFUserNotificationDisplayAlert and CFUserNotificationDisplayNotice creates a non-modal window and this is bad because it could bring your application UI in a very undesired state if you select the original application window (the message box is hidden but the applicaton does not respond).
The old SystemAlert was modal but this one doesn't fully support Unicode strings.
How can I display a message box as a modal window on the Mac? I'm looking for something similar to MessageBox on Windows.
| It looks like CreateStandardAlert is the right solution, because this one is modal.
DialogRef theItem;
DialogItemIndex itemIndex;
CreateStandardAlert(kAlertNoteAlert, CFSTR("aaa"), CFSTR("bbb"), NULL, &theItem);
RunStandardAlert(theItem, NULL, &itemIndex);
|
2,056,996 | 2,057,044 | Comparison is always false due to limited range ... with templates | I have a templated function that operates on a template-type variable, and if the value is less than 0, sets it to 0. This works fine, but when my templated type is unsigned, I get a warning about how the comparison is always false. This obviously makes sense, but since its templated, I'd like it to be generic for all data types (signed and unsigned) and not issue the warning.
I'm using g++ on Linux, and I'm guessing there's a way to suppress that particular warning via command line option to g++, but I'd still like to get the warning in other, non-templated, cases. I'm wondering if there's some way, in the code, to prevent this, without having to write multiple versions of the function?
template < class T >
T trim(T &val)
{
if (val < 0)
{
val = 0;
}
return (val);
}
int main()
{
char cval = 5;
unsigned char ucval = 5;
cout << "Untrimmed: " << (int)cval;
cval = trim(cval);
cout << " Trimmed: " << (int)cval << endl;
cout << "Untrimmed: " << (int)ucval;
cval = trim(ucval);
cout << " Trimmed: " << (int)ucval << endl;
return (0);
}
| #include <algorithm>
template<class T>
T& trim(T& val) {
val = std::max(T(0), val);
return val;
}
It's not apparent from the question that passing by non-const reference is appropriate. You can change the above to return nothing (void), to pass by value and return by value, or to pass by const& and return by value:
template<class T>
T trim(T const& val);
// example use:
value = trim(value); // likely the most clear solution
Generalize a bit more, even though outside the scope of your question:
template<class T>
T constrain(T const& value, T const& lower, T const& upper) {
// returns value if value within [lower, upper] (inclusive end points)
// returns lower if value < lower
// otherwise returns upper
assert(lower <= upper); // precondition
return std::min(std::max(value, lower), upper);
}
template<class T>
T constrain_range(T const& value, T const& lower, T const& upper) {
// returns value if value within [lower, upper) (exclusive upper)
// returns lower if value < lower
// otherwise returns upper - 1
assert(lower < upper); // precondition
if (value < lower) return lower;
else if (value >= upper) return upper - 1;
else return value;
}
|
2,057,047 | 2,057,067 | Freeing a Pointer if memory isn't being referenced by anything else | I have a method that takes a few pointers as parameters. This method can be called either with a named pointer from the caller, or with a pointer to a newly created object passed in directly as an argument when the method is called.
myClass *myPtr = new myClass(...);
myMethod(myPtr);
Versus
myMethod(new myClass(...));
The problem is that if both of these are valid options, how does one properly free the passed in pointer? Deleting myPtr within myMethod will cause a crash if myPtr is ever accessed again within the program. If I don't delete myPtr, the second option will cause a memory leak if it is used. There are benefits for using both options so both shouldn't break the program.
Aside from using STL, what are some solutions to this problem? Would I have to implement my own garbage collector?
| I would say that in this case the caller should be responsible for freeing the object. You can consider various options; the simplest is:
myClass myInstance; // or myClass myInstance(arg1, arg2, ...)
// and the pass it to your method like this:
myMethod(&myInstance);
You could also consider some smart pointer options like std::tr1::shared_ptr or something from boost.
UPDATE: If your method should be able to take a NULL pointer as its argument, there's no problem at all:
// this is your method declaration:
void myMethod(const myClass *myPtr);
// in your tests or wherever in your code you can call it like
myClass myInstance; // or myClass myInstance(arg1, arg2, ...)
myMethod(&myInstance);
// or like this:
myMethod(NULL);
// for as long as your method has something like this in it:
if (myPtr)
myPtr->someMethod();
|
2,057,350 | 2,057,392 | Determine input encoding by examining the input bytes | I'm getting console input from the user and want to encode it to UTF-8. My understanding is C++ does not have a standard encoding for input streams, and that it instead depends on the compiler, the runtime environment, localization, and what not.
How can I determine the input encoding by examining the bytes of the input?
| In general, you can't. If I shoot a stream of randomly generated bytes at your app how can it determine their "encoding"? You simply have to specify that your application accepts certain encodings, or make an assumption that what the OS hands you will be suitably encoded.
|
2,057,424 | 2,057,464 | LRU implementation in production code | I have some C++ code where I need to implement cache replacement using LRU technique.
So far I know two methods to implement LRU cache replacement:
Using timeStamp for each time the cached data is accessed and finally comparing the timeStamps at time of replacement.
Using a stack of cached items and moving them to the top if they are accessed recently, so finally the bottom will contain the LRU Candidate.
So, which of these is better to be used in production code?
Are there any other, better methods?
| Recently I implemented an LRU cache using a linked list spread over a hash map.
/// Typedef for URL/Entry pair
typedef std::pair< std::string, Entry > EntryPair;
/// Typedef for Cache list
typedef std::list< EntryPair > CacheList;
/// Typedef for URL-indexed map into the CacheList
typedef boost::unordered_map< std::string, CacheList::iterator > CacheMap;
/// Cache LRU list
CacheList mCacheList;
/// Cache map into the list
CacheMap mCacheMap;
It has the advantage of being O(1) for all important operations.
The insertion algorithm:
// create new entry
Entry iEntry( ... );
// push it to the front;
mCacheList.push_front( std::make_pair( aURL, iEntry ) );
// add it to the cache map
mCacheMap[ aURL ] = mCacheList.begin();
// increase count of entries
mEntries++;
// check if it's time to remove the last element
if ( mEntries > mMaxEntries )
{
// erase from the map the last cache list element
mCacheMap.erase( mCacheList.back().first );
// erase it from the list
mCacheList.pop_back();
// decrease count
mEntries--;
}
|
2,057,456 | 3,984,857 | OpenGL ES 2.0 SDK for Windows Mobile | I would like to get started developing native (C/C++) OpenGL ES 2.0 applications for Windows Mobile (version 5 or later, any version would do, really). I do however have trouble finding appropriate headers and libraries.
What I am looking for is a OpenGL ES 2.0 SDK for Windows Mobile, or an SDK which contains the appropriate headers and libraries. Previously, when I have developed OpenGL ES 1.0 applications, I have used the headers and libraries provided by the Vincent3D open source OpenGL ES 1.0 software rendering project in order to compile my applications, but for 2.0 applications, I have found no such solution.
I have searched Microsoft's developer site, some phone manufacturers developer sites, Qualcomm's developer site, nVidia's developer site, etc. for an SDK, without any luck, however. I know there are OpenGL ES 2.0 applications out there so I guess there are SDKs available, I just need help finding them.
| It would seem there is no such SDK -- not publicly available, anyhow. From what I gather, developers that have created GLES applications for Windows Mobile have used the strategy suggested by Virne in one of the comments and created a lib from the GLES DLL.
|
2,057,523 | 2,057,702 | String reference not updating in function call in C++ | I am writing an Arduino library to post HTTP requests on the web.
I am using the String class from http://arduino.cc/en/Tutorial/TextString
My code is behaving strangely when I am referring to my defined string objects after a function call.
Here I am trying to get the body of the GET request's response by removing the HTTP headers from it.
Following is the description:
Method Call:
String body;
if(pinger.Get(host,path,&body))
{
Serial.println("Modified String Outside :");
Serial.println(body);
Serial.println();
Serial.println("Modified String Outside Address");
Serial.println((int)&body);
}
Output
Modified String Outside :
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: text/html
Content-Length: 113
Date: Wed, 13 Jan 2010 14:36:28 GMT
<html>
<head>
<title>Ashish Sharma
</title>
</head>
<body>
Wed Jan 13 20:06:28 IST 2010
</body>
</html>
Modified String Outside Address
2273
Method Description:
bool Pinger::Get(String host, String path, String *response) {
bool connection = false;
bool status = false;
String post1 = "GET ";
post1 = post1.append(path);
post1 = post1.append(" HTTP/1.1");
String host1 = "Host: ";
host1 = host1.append(host);
for (int i = 0; i < 10; i++) {
if (client.connect()) {
client.println(post1);
client.println(host1);
client.println();
connection = true;
break;
}
}
int nlCnt = 0;
while (connection) {
if (client.available()) {
int c = client.read();
response->append((char) c);
if (c == 0x000A && nlCnt == 0) {
nlCnt++;
if (response->contains("200")) {
status = true;
continue;
} else {
client.stop();
client.flush();
break;
}
}
}
if (!client.connected()) {
client.stop();
connection = false;
}
}
response = &response->substring(response->indexOf("\n\r\n"),response->length());
Serial.println("Modified String: ");
Serial.println(*response);
Serial.println();
Serial.print("Modified String Address: ");
Serial.println((int)&response);
return status;
}
Output:
Modified String:
Ø
<html>
<head>
<title>Ashish Sharma
</title>
</head>
<body>
Wed Jan 13 20:06:28 IST 2010
</body>
</html>
Modified String Address: 2259
As can be seen from the example, the string object is giving me the correct string inside the Get method, but the string's contents change when the Get method returns.
| If I understood your code correctly, you probably want to do something like this:
*response = response->substring(response->indexOf("\n\r\n"),response->length());
instead of
response = &response->substring(response->indexOf("\n\r\n"),response->length());
Also, there's probably no need to pass in a pointer (a reference would probably make the code look much nicer).
|
2,057,610 | 2,057,629 | STL Map with custom compare function object | I want to use the STL's Map container to lookup a pointer by using binary data as a key so I wrote this custom function object:
struct my_cmp
{
bool operator() (unsigned char * const &a, unsigned char * const &b)
{
return (memcmp(a,b,4)<0) ? true : false;
}
};
And using it like this:
map<unsigned char *, void *, my_cmp> mymap;
This compiles and seems to work, but I'm not sure what an "unsigned char * const &" type is and why it didn't work with just "unsigned char *"?
| You need to provide a comparator that does not modify the passed values, hence the const (note that it applies to the pointer, not the char). As for the reference operator (&), you don't need it -- it's optional. This will also compile:
struct my_cmp
{
bool operator() (unsigned char * const a, unsigned char * const b) const
{
return memcmp(a,b,4) < 0;
}
};
|
2,057,784 | 2,057,879 | Locking files in linux with c/c++ | I am wondering if you can lock only a line or a single character in a file in Linux, with the rest of the file remaining accessible to other processes.
I received a task regarding simulating transactions on a file with C/C++ under Linux.
If the answer is yes, please give me some links where I could learn how to accomplish this task.
Thanks,
Madicemickael
| Yes, this is possible.
The Unix way to do this is via fcntl or lockf.
Whatever you choose, make sure to use only it and not mix the two. Have a look at this question (with answer) about it: fcntl, lockf, which is better to use for file locking?.
If you can, have a look at section 14.3 in Advanced Programming in the UNIX Environment.
|
2,057,823 | 2,057,904 | Issues with Partial Class Function Overrides in C++ | Is there any issue with partially overriding a set of virtual functions defined by a base class?
My compiler provides the following warning:
overloaded virtual function "MyBaseClass::setValue" is only partially overridden in class "MyDerivedClass".
The classes look like this:
class MyBaseClass
{
public:
virtual void setValue(int);
virtual void setValue(SpecialType*);
};
class MyDerivedClass : public MyBaseClass
{
public:
virtual void setValue(int);
};
The easy way to get rid of this warning is to use different names for the base functions, but I wanted to know if there was any compelling reason to fix this specific warning. I do not believe this violates the C++ standard. My guess is that it's to warn a programmer that they may have forgotten to implement the behavior for all possible input types. In our case, it is intentional to exclude some of the specific types.
Would you discourage suppressing this warning altogether?
| The override for setValue(int) hides setValue(SpecialType*) of the base class (see the C++ FAQ Lite), so if you try to call setValue(new SpecialType()) you will get an error.
You can avoid this by adding a using directive to the derived class that "imports" the overloads from the base class:
class MyDerivedClass : public MyBaseClass
{
public:
using MyBaseClass::setValue;
virtual void setValue(int);
};
|
2,057,946 | 2,057,987 | How to mix std::string with Win32 functions that take char[] buffers? | There are a number of Win32 functions that take the address of a buffer, such as TCHAR[256], and write some data to that buffer. It may be less than the size of the buffer or it may be the entire buffer.
Often you'll call this in a loop, for example to read data off a stream or pipe. In the end I would like to efficiently return a string that has the complete data from all the iterated calls to retrieve this data. I had been thinking to use std::string since it's += is optimized in a similar way to Java or C#'s StringBuffer.append()/StringBuilder.Append() methods, favoring speed instead of memory.
But I'm not sure how best to co-mingle the std::string with Win32 functions, since these functions take the char[] to begin with. Any suggestions?
| std::string has a function c_str() that returns its equivalent C-style string. (const char *)
Further, std::string has an overloaded assignment operator that takes a C-style string as input.
e.g. Let ss be a std::string instance and sc a C-style string; then the conversions can be performed as:
ss = sc; // from C-style string to std::string
sc = ss.c_str(); // from std::string to C-style string
UPDATE:
As Mike Weller pointed out, if the UNICODE macro is defined, then the strings will be wchar_t* and hence you would have to use std::wstring instead.
|
2,057,960 | 3,380,861 | how to set a threadname in MacOSX | In Windows, it is possible to set the threadname via this code. The threadname is then shown in debuggers.
In MacOSX, I have seen several hints which indicates that there are threadnames. I think the class NSThread also has a name-attribute. My goal is that I can set the threadname in my C++ application and see it in Xcode/gdb.
Other related questions:
Can I set the name of a thread in pthreads / linux? (with a very good answer/overview for pthread here)
How to name a thread in Linux?
How to set name to a Win32 Thread? (also interesting is this discussion by Bruce Dawson)
(Android) How to set name to the thread?
| I recommend the following:
[[NSThread currentThread] setName:@"My thread name"]; // For Cocoa
pthread_setname_np("My thread name"); // For GDB.
(You'll need to include pthread.h)
Works a treat in Xcode 3.2.3 (at least for iPhone development)
|
2,058,091 | 2,058,207 | If-Then-Else Conditionals in Regular Expressions and using capturing group | I have some difficulties in understanding if-then-else conditionals in regular expressions.
After reading If-Then-Else Conditionals in Regular Expressions I decided to write a simple test. I use C++, Boost 1.38 Regex and MS VC 8.0.
I have written this program:
#include <iostream>
#include <string>
#include <boost/regex.hpp>
int main()
{
std::string str_to_modify = "123";
//std::string str_to_modify = "ttt";
boost::regex regex_to_search ("(\\d\\d\\d)");
std::string regex_format ("(?($1)$1|000)");
std::string modified_str =
boost::regex_replace(
str_to_modify,
regex_to_search,
regex_format,
boost::match_default | boost::format_all | format_no_copy );
std::cout << modified_str << std::endl;
return 0;
}
I expected to get "123" if str_to_modify has "123" and to get "000" if str_to_modify has "ttt". However, I get ?123123|000 in the first case and nothing in the second one.
Could you tell me, please, what is wrong with my test?
The second example, which still doesn't work:
#include <iostream>
#include <string>
#include <boost/regex.hpp>
int main()
{
//std::string str_to_modify = "123";
std::string str_to_modify = "ttt";
boost::regex regex_to_search ("(\\d\\d\\d)");
std::string regex_format ("(?1foo:bar");
std::string modified_str =
boost::regex_replace(str_to_modify, regex_to_search, regex_format,
boost::match_default | boost::format_all | boost::format_no_copy );
std::cout << modified_str << std::endl;
return 0;
}
| I think the format string should be (?1$1:000) as described in the Boost.Regex docs.
Edit: I don't think regex_replace can do what you want. Why don't you try the following instead? regex_match will tell you whether the match succeeded (or you can use match[i].matched to check whether the i-th tagged sub-expression matched). You can format the match using the match.format member function.
#include <iostream>
#include <string>
#include <boost/regex.hpp>
int main()
{
boost::regex regex_to_search ("(\\d\\d\\d)");
std::string str_to_modify;
while (std::getline(std::cin, str_to_modify))
{
boost::smatch match;
if (boost::regex_match(str_to_modify, match, regex_to_search))
std::cout << match.format("foo:$1") << std::endl;
else
std::cout << "error" << std::endl;
}
}
|
2,058,141 | 2,058,587 | Reading SDL_RWops from a std::istream | I'm quite surprised that Google didn't find a solution. I'm searching for a solution that allows SDL_RWops to be used with std::istream. SDL_RWops is the alternative mechanism for reading/writing data in SDL.
Any links to sites that tackle the problem?
An obvious solution would be to pre-read enough data to memory and then use SDL_RWFromMem. However, that has the downside that I'd need to know the filesize beforehand.
Seems like the problem could somehow be solved by "overriding" SDL_RWops functions...
| I feel bad answering my own question, but it preoccupied me for some time, and this is the solution I came up with:
int istream_seek( struct SDL_RWops *context, int offset, int whence)
{
std::istream* stream = (std::istream*) context->hidden.unknown.data1;
if ( whence == SEEK_SET )
stream->seekg ( offset, std::ios::beg );
else if ( whence == SEEK_CUR )
stream->seekg ( offset, std::ios::cur );
else if ( whence == SEEK_END )
stream->seekg ( offset, std::ios::end );
return stream->fail() ? -1 : stream->tellg();
}
int istream_read(SDL_RWops *context, void *ptr, int size, int maxnum)
{
if ( size == 0 ) return -1;
std::istream* stream = (std::istream*) context->hidden.unknown.data1;
stream->read( (char*)ptr, size * maxnum );
return stream->bad() ? -1 : stream->gcount() / size;
}
int istream_close( SDL_RWops *context )
{
if ( context ) {
SDL_FreeRW( context );
}
return 0;
}
SDL_RWops *SDL_RWFromIStream( std::istream& stream )
{
SDL_RWops *rwops;
rwops = SDL_AllocRW();
if ( rwops != NULL )
{
rwops->seek = istream_seek;
rwops->read = istream_read;
rwops->write = NULL;
rwops->close = istream_close;
rwops->hidden.unknown.data1 = &stream;
}
return rwops;
}
This works under the assumption that istreams are never freed by SDL (and that they live through the operation). Also, only istream support is included; a separate function would be needed for ostream -- I know I could pass an iostream, but that would not allow passing an istream to the conversion function.
Any tips on errors or upgrades welcome.
|
2,058,159 | 2,058,318 | How does the compiler know to use a template specialization instead of its own instantiation? | Consider the following files:
Foo.H
template <typename T>
struct Foo
{
int foo();
};
template <typename T>
int Foo<T>::foo()
{
return 6;
}
Foo.C
#include "Foo.H"
template <>
int Foo<int>::foo()
{
return 7;
}
main.C
#include <iostream>
#include "Foo.H"
using namespace std;
int main()
{
Foo<int> f;
cout << f.foo() << endl;
return 0;
}
When I compile and run, 7 is printed. What's going on here? When are templates instantiated? If the compiler does it, how does the compiler know not to instantiate its own version of Foo?
| The issue is that you've violated the one definition rule. In main.C, you've included Foo.H but not Foo.C (which makes sense since it's a source file). When main.C is compiled, the compiler doesn't know that you've specialized the template in Foo.C, so it uses the generic version (that returns 6) and compiles a Foo class. Then when it compiles Foo.C, it sees a full specialization which it can compile right away -- it doesn't need to wait for it to be instantiated somewhere because all the types are filled in (if you had two template parameters and only specialized one this wouldn't be the case), and it compiles a new and distinct Foo class.
Normally, multiple definitions for the same thing cause a linker error. But template instantiations are "weak symbols", which means that multiple definitions are allowed. The linker assumes all definitions are really the same and then picks one at random (well, probably consistently the first one or the last one, but only as a coincidence of the implementation).
Why make them weak symbols? Because Foo might be used in multiple source files, each of which is compiled individually, and each time Foo is used in a compilation unit a new instantiation is generated. Normally, these are redundant, so it makes sense to throw them away. But you've violated this assumption, by providing a specialization in one compilation unit (foo.C) but not the other (main.C).
If you declare the template specialization in Foo.H, then when main.C is compiled it not generate an instantiation of Foo, thus making sure only one definition exists in your program.
|
2,058,271 | 2,058,321 | Comparing String Iterator to Char Pointer | I have a const char * const string in a function.
I want to use this to compare against elements in a string.
I want to iterate through the string and then compare against the char *.
#include <iostream>
#include <string>
#include <cstring>
using namespace std;
int main()
{
const char * const pc = "ABC";
string s = "Test ABC Strings";
string::iterator i;
for (i = s.begin(); i != s.end(); ++i)
{
if ((*i).compare(pc) == 0)
{
cout << "found" << endl;
}
}
}
How can I resolve a char* to resolve against a string iterator?
Thanks..
| Look at std::string::find:
const char* bar = "bar";
std::string s = "foo bar";
if (s.find(bar) != std::string::npos)
cout << "found!";
|
2,058,492 | 2,058,512 | PyQt vs PySide comparison | I currently develop many applications in a Qt heavy C++/Python environment on Linux, porting to PC/Mac as needed. I use Python embedded in C++ as well as in a stand alone GUI. Qt is used fro xml parsing/event handling/GUI/threading and much more. Right now all my Python work is in PyQt and I wanted to see how everyone views PySide. I'm interested because it is in house and as such should support more components with hopefully better integration. What are your experiences?
I know this has been asked before, but I want to revive the conversation.
| We were recently thinking about using PySide, but we haven't found any information about whether it is supported by py2exe. That's why we kept to PyQt. If you need to develop for Windows, it's safer to use good ol' PyQt :-)
|
2,058,527 | 2,058,568 | How to Set Baud Rate 28800 Using DCB Structure | Previously I was using CBR_9600 when communicating with 9600 baud devices. But there does not seem to be a CBR_28800 setting. Is it possible to set the baud rate using the DCB structure of 28800?
| According to MSDN, the baud rate can either be one of the defined constants (such as CBR_9600, CBR_38400, etc) or any integer value. The constants are just defined to the values, so it's not really an enumeration at all. From the link:
The baud rate at which the communications device operates. This member can be an actual baud rate value, or one of the following indexes.
|
2,058,634 | 2,059,001 | why is stroull() not working on a byte array with hexadecimal values? | here is some new test code with regards to my long issue.
I figure that if i code my stuff as long long then that is half the battle in porting.
the other half would be to make it into big endian so it can work on any 64 bit system.
so i did the following:
#include <iostream>
#include "byteswap.h"
#include "stdlib.h"
using namespace std;
int main()
{
char bytes[6] = {0x12,0x23,0xff,0xed,0x22,0x34};
//long *p_long = reinterpret_cast<long*> (bytes);
long long *p_long = reinterpret_cast<long long*> (bytes);
std::cout<<"hex="<<std::hex<<*p_long<<"LE"<<std::endl;
*p_long = bswap_64(*p_long);
std::cout<<"hex="<<std::hex<<*p_long<<"BE"<<std::endl;
return 0;
}
This seems to me the simplest way of doing it. The problem now is in using bswap... I get the following output:
hex=34563422edff2312LE
hex=0BE
I got all the bytes in the first line as LE, but now it seems the 64-bit swap function is not working. I think fixing this would solve the issue I am having.
Considering that I will be operating on a 20-byte array, I am also not sure how I would use pointers to do that. I am thinking I would need an array of long long pointers to store all this and then call the byte swap on each to swap the values in each of those pointers. I personally have not done pointer incrementation via sizeof(long) before.
| If you're expecting this program to output "ab3254cd44" then you're using the wrong function. bytes is not a string. It's just an array of 5 values.
Try this:
int bytes[5] = {0xab,0x32,0x54,0xcd,0x44};
cout << hex;
copy(&bytes[0], &bytes[sizeof(bytes)/sizeof(bytes[0])], ostream_iterator<int>(cout));
Program outputs:
ab3254cd44
|
2,058,700 | 2,058,710 | Language recommendations for expanding programming skills (For a semi-experienced software developer) | I have little (<1 year professional) experience with
Perl
Groovy/Java
I have limited (<2 year professional)
C
I have decent experience (>= 6 years professional) with
PHP
SQL
I have hobby experience with
C++/DX9 (some simple windows games/demos)
Obj-C (a few iphone app's)
ASM (http://www.amazon.com/Assembly-Language-Intel-Based-Computers-5th/dp/0132383101/ref=sr_1_1?ie=UTF8&s=books&qid=1263401280&sr=8-1) - I stopped when it got too windows specific.
So my question is what language can I next approach (on my free time) to give me some new insight to programming and problem solving in general - I was looking at maybe LISP - something which would be very foreign to me. I want to tackle something very, very different from the languages listed above.
EDIT: I think I'll investigate Haskell - thanks for the feedback! - and possibly Erlang; I also really liked Adrian Kosmaczewski's idea about a Mac app for Snow Leopard.
| Haskell, followed shortly by Python.
|
2,058,943 | 2,064,125 | Reading SIM contacts on Symbian S60 | I am looking for a working code snippet for Symbian S60 5th edition in which you can read SIM contact details.
If possible, I would skip using RPhoneBookSession, but if that is the only way, please provide a code snippet showing how to use it.
Thank you.
| What you want is the example code from the relevant chapter of the Quick recipes on Symbian OS book, which you can find here.
EDIT-1:
Should have read the question more carefully.
The CContactDatabase API should synchronize with the SIM Phonebook seamlessly by using RPhoneBookSession so you don't have to.
To figure out what's wrong, I would suggest calling RPhoneBookSession::GetLastSyncError, RPhoneBookSession::GetPhoneBookCacheState and RPhoneBookSession::GetSyncMode.
I would also suggest doing all this both before and after adding a new CContactICCEntry to the database yourself.
Of course, this is all supposing Nokia didn't just brutally disable Phonebook synchronization...
EDIT-2:
If Nokia disabled Symbian's phonebook synchronization, they may have replaced it with their own, which would mean that using the CPbkContactEngine::AllContactsView method could yield different results than the CContactDatabase approach.
Let's face it, though. If the Contacts application provided with the phone doesn't even allow you to save a contact on the SIM, Nokia may have removed all possibilities to interact with the SIM phonebook period.
EDIT-3:
You could try to develop against phbksyncsvr.lib using the binaries in the Product Development Kit (PDK) from the first real version of the Symbian Foundation operating system: Symbian^2. Binary compatibility between versions of Symbian OS can sometimes help you.
|
2,058,991 | 2,058,995 | what does this declaration mean? exception() throw() | The std::exception class is declared as follows:
exception() throw() { }
virtual ~exception() throw();
virtual const char* what() const throw();
what does the throw() syntax mean in a declaration?
Can throw() take parameters? What does no parameters mean?
| With no parameters, it means that the mentioned functions do not throw any exceptions.
If you specify a type as a parameter, you're saying that the function will throw only exceptions of that type. Note, however, that this is not enforced by the compiler at compile time. If an exception of some other type happens to be thrown, the program will call std::terminate().
|
2,059,058 | 2,059,110 | C++ Abstract class operator overloading and interface enforcement question | (edited from original post to change "BaseMessage" to "const BaseMessage&")
Hello All,
I'm very new to C++, so I hope you folks can help me "see the errors of my ways".
I have a hierarchy of messages, and I'm trying to use an abstract base class to enforce
an interface. In particular, I want to force each derived message to provide an overloaded
<< operator.
When I try doing this with something like this:
class BaseMessage
{
public:
// some non-pure virtual function declarations
// some pure virtual function declarations
virtual ostream& operator<<(ostream& stream, const BaseMessage& objectArg) = 0;
};
the compiler complains that
"error: cannot declare parameter ‘objectArg’ to be of abstract type ‘BaseMessage’
I believe there are also "friend" issues involved here, but when I tried to declare it as:
virtual friend ostream& operator<<(ostream& stream, const BaseMessage objectArg) = 0;
the compiler added an additional error:
"error: virtual functions cannot be friends"
Is there a way to ensure that all of my derived (message) classes provide an "<<" ostream operator?
Thanks Much,
Steve
| The common convention for this is to have a friend output operator at the base level and have it call a private virtual function:
class Base
{
public:
/// don't forget this
virtual ~Base();
/// std stream interface
friend std::ostream& operator<<( std::ostream& out, const Base& b )
{
b.Print( out );
return out;
}
private:
/// derivation interface
virtual void Print( std::ostream& ) const =0;
};
|
2,059,208 | 2,059,281 | Derived class can't see parent class properly | I'm seeing two problems in a setup like this:
namespace ns1
{
class ParentClass
{
protected:
void callback();
};
}
namespace ns1
{
namespace ns2
{
class ChildClass : public ParentClass
{
public:
void method()
{
registerCallback(&ParentClass::callback);
}
};
}
}
ChildClass::method() gives a compile error: "'ns1::ParentClass::callback' : cannot access protected member declared in class 'ns1::ParentClass'"
ParentClass *pObj = new ChildClass() gives an error, that it can't do the conversion without a cast. C++ can down-cast happily, no?
| Change:
registerCallback(&ParentClass::callback);
...to:
registerCallback(&ChildClass::callback);
The reason is that &ParentClass::callback names the protected member through the base class, and a derived class may only access protected members through itself (here, via &ChildClass::callback). In other words, it is the same kind of problem as this:
class Thingy
{
protected:
virtual int Foo() { return 0; }
};
int main()
{
Thingy t;
t.Foo();
return 0;
}
|
2,059,363 | 2,059,368 | Compile error when I use C++ inheritance | I am new to this website and I am trying a simple inheritance example in C++.
I checked my code lots of times and I really see nothing wrong with it; however, the compiler gives me errors:
my code:
#ifndef READWORDS_H
#define READWORDS_H
using namespace std;
#include "ReadWords.h"
/**
* ReadPunctWords inherits ReadWords, so MUST define the function filter.
* It chooses to override the default constructor.
*/
class ReadPunctWords: public ReadWords {
public:
bool filter(string word);
};
#endif
And the messages I get from the compiler:
ReadPunctWords.h:11: error: expected class-name before '{' token
ReadPunctWords.h:13: error: `string' has not been declared
ReadPunctWords.h:13: error: ISO C++ forbids declaration of `word' with no type
Tool completed with exit code 1
I am really not sure where I get it wrong as it looks just fine to me?
Thank you for any mistakes you might spot.
| You need to include string:
#include <string>
That said, don't use using namespace! Especially at file-scope, and definitely not in a header file. Now any unit that includes this file is forced to succumb to everything in the std namespace.
Take that out, and qualify your names:
bool filter(std::string word);
It's arguably more readable, too. Additionally, you should take your string as a const&:
bool filter(const std::string& word);
To avoid copying the string unnecessarily. Lastly, your header guards look wrong: they appear to be the same ones used in your other header, which would effectively stop it from being included.
If you define READWORDS_H and then include ReadWords.h, and if that also has:
#ifndef READWORDS_H
#define READWORDS_H
Then nothing in that file will be processed. If that's the case, ReadWords as a class won't be defined, and you cannot inherit from it. Your guard should probably be:
READPUNCTWORDS_H
|
2,059,483 | 2,060,461 | Installing poco library | I'm trying to install the Poco library for Visual C++ 2008, but when I type this command
buildwin.cmd 90
I get the following error
"'devenv' is not recognized as an internal or external command, operable program or batch file."
The readme file says there is an alternate way to install Poco from Visual Studio itself, but I don't quite know how to do that either.
Any pointers as to what the problem is?
| You can build the projects by opening the solution files in Visual Studio and building them from there.
|
2,059,665 | 2,059,705 | Why can't I forward-declare a class in a namespace using double colons? | class Namespace::Class;
Why do I have to do this?:
namespace Namespace {
class Class;
}
Using VC++ 8.0, the compiler issues:
error C2653: 'Namespace' : is not a class or namespace name
I assume that the problem here is that the compiler cannot tell whether Namespace is a class or a namespace? But why does this matter since it's just a forward declaration?
Is there another way to forward-declare a class defined in some namespace? The syntax above feels like I'm "reopening" the namespace and extending its definition. What if Class were not actually defined in Namespace? Would this result in an error at some point?
| Because you can't. In C++ language fully-qualified names are only used to refer to existing (i.e. previously declared) entities. They can't be used to introduce new entities.
And you are in fact "reopening" the namespace to declare new entities. If the class Class is later defined as a member of different namespace - it is a completely different class that has nothing to do with the one you declared here.
Once you get to the point of defining the pre-declared class, you don't need to "reopen" the namespace again. You can define it in the global namespace (or any namespace enclosing your Namespace) as
class Namespace::Class {
/* whatever */
};
Since you are referring to an entity that has already been declared in namespace Namespace, you can use qualified name Namespace::Class.
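Putting both halves together, a minimal compilable sketch (ns and Widget are illustrative names, not from the question):

```cpp
namespace ns {
    class Widget;                 // forward declaration: reopen the namespace
}

ns::Widget* g_w = nullptr;        // pointers to the incomplete type are fine

// The later definition may use the qualified name from the enclosing scope:
class ns::Widget {
public:
    int value() const { return 42; }
};

int use_widget() {
    ns::Widget w;                 // complete type from here on
    return w.value();
}
```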
|
2,059,725 | 2,060,697 | setting File Version automatically after compile | Is there any tool which can inject into an .exe or .dll information like File Version, Product name, Copyright, etc?
I did find a tool called StampVer but it can only modify resources that are already in the file itself. I could use it but would need to modify a bunch of Visual Studio projects to include some dummy information, and I would of course prefer to avoid that.
| I ended up adding a dummy resource version and will be using StampVer.
|
2,059,782 | 2,062,588 | Pointer to a class and function - problem | Firstly I'll show you few classes.
class A {
public:
B * something;
void callSomething(void) {
something->call();
}
};
class B {
public:
A * activeParent;
B(A * parent) {
activeParent = parent;
}
void call(void) {
activeParent->something = new C;
}
};
class C : public B {
public:
A * activeParent;
C(A * parent) {
activeParent = parent;
}
void call(void) {
// do something
}
};
A * object;
object = new A;
object->something = new B;
object->callSomething();
My app needs such a structure.
When I do callSomething(), it calls B's call(), but when B's call() changes "something" to a C, C's call() is triggered and I want to avoid that. What should I do?
| Aside from the design decisions (e.g., cyclical dependencies)...
The only reason A's callSomething() method would call C's call() method from a pointer to B is if the call() method is virtual. To avoid calling C's call() method, here are a couple of options:
Don't make the call() method virtual
Rename one of B or C's call() method (preferred over the first option)
Call B's call() method explicitly
To call B's call() method explicitly:
void callSomething(void) {
something->B::call();
}
|
2,059,804 | 2,059,888 | Image format and unsigned char arrays | I'm developing imaging functions (yes I REALLY want to reinvent the wheel for various reasons).
I'm copying bitmaps into unsigned char arrays but I'm having some problem with byte size versus image pixel format.
For example, a lot of images come as 24 bits per pixel for RGB representation, so that's rather easy: every pixel has 3 unsigned chars (bytes) and everyone is happy.
However, sometimes the RGB comes in weirder types, like 48 bits per pixel, i.e. 16 bits per color channel. Copying the whole image into the byte array works fine, but it's when I want to retrieve the data that things get blurry.
Right now I have the following code to get a single pixel for grayscale images
unsigned char NImage::get_pixel(int i, int j)
{
return this->data[j * pitch + i];
}
NImage::data is unsigned char array
This returns a single byte. How can I access my data array with different pixel formats?
| You should do it like this:
unsigned short NImage::get_pixel(int i, int j)
{
int offset = 2 * (j * pitch + i);
// image pixels are usually stored in big-endian format
return data[offset]*256 + data[offset+1];
}
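The same pattern extends to, say, 48-bpp RGB (two bytes per channel). This sketch assumes big-endian channel bytes and a pitch measured in pixels — both assumptions to check against the actual file format:

```cpp
#include <cstdint>
#include <vector>

struct Pixel48 { uint16_t r, g, b; };   // one 16-bit value per channel

// Hypothetical accessor for a 48-bpp RGB buffer: 6 bytes per pixel,
// high byte first within each channel (big-endian assumption).
Pixel48 get_pixel48(const std::vector<unsigned char>& data,
                    int i, int j, int pitch)
{
    const std::size_t off = 6u * (static_cast<std::size_t>(j) * pitch + i);
    const auto ch = [&](std::size_t k) {
        return static_cast<uint16_t>((data[off + 2 * k] << 8) |
                                      data[off + 2 * k + 1]);
    };
    return Pixel48{ ch(0), ch(1), ch(2) };
}
```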
|
2,059,895 | 2,060,648 | Are there any automated unit testing frameworks for testing an in-house threading framework? | We have created a common threading framework to manage how we want to use threads in our applications. Are there any frameworks out there like gtest or cppunit that solely focus on unit testing threads, thread pools, thread queues, and such?
Right now I just kind of manually go through some steps that I know I should cover and do checks in the code to make sure that certain conditions are met (like values aren't corrupted b/c a shared resource was accessed simultaneously by two or more threads at once) If I'm not able to create definitive check, then I step through the debugger but this seems like it's testing in the 1990's.
I would like to more systematically test the functionality of the threading framework for it's internal functionality that might not be the same as all threading frameworks, but I also want to test common functionality that all threading frameworks should have (like not deadlocking, not corrupting data a.k.a counts are what they should be, etc ...).
Any suggestions would be greatly appreciated.
| If your threads are built on OpenMP, you can use VivaMP for static checking.
But you want dynamic checking with unit tests. I'm not aware of any existing framework for this purpose. You could roll your own with one of the many unit test frameworks out there, but it would be hard to make it robust. Intel has a suite of parallel development tools that might be of interest, but I've never used them. They say that they can help with unit tests from within Visual Studio.
|
2,059,914 | 2,059,927 | Using Structs -- Odd Issue | Been awhile since I've used structs in C++.
Any idea why this isn't working? My compiler is complaining about DataStruct not being a recognized type but Intellisense in VC++ is still able to see the data members inside the struct so the syntax is ok...
Frustating. xD
struct DataStruct
{
int first;
};
int main(int argc, char **argv)
{
DataStruct test;
//test.first = 1;
}
| Are you sure you are compiling the file as C++? If you compile it as C (i.e. if the file has a .c rather than a .cpp extension), you will have problems.
|
2,060,200 | 2,060,250 | What is the best way to wait for a variable in a multithreaded application | I would like to do something like the below for a multi-threaded program:
// wait for variable to become true but don't hog resources
// then re-sync queues
Is something like this a good solution?
while (!ready) {
Thread.Sleep(250); // pause for 1/4 second;
};
| No, this is not a good solution. First, it might sleep too long. Second, it's easy for threads to get into lockstep. Here are a couple of links to MSDN articles on proper synchronization techniques:
Condition variables
Events
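For later readers: the wait-without-polling pattern those articles describe is what C++11's std::condition_variable provides portably (not available when this was asked; the Win32 primitives in the links were the way to go then). A minimal sketch:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Block until a flag becomes true -- no polling, no sleep loop.
bool wait_for_flag()
{
    std::mutex m;
    std::condition_variable cv;
    bool ready = false;

    // Producer thread: set the flag under the lock, then wake the waiter.
    std::thread producer([&] {
        { std::lock_guard<std::mutex> lock(m); ready = true; }
        cv.notify_one();
    });

    // Consumer (this thread): wait() atomically releases the lock while
    // suspended; the predicate re-check guards against spurious wakeups.
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [&] { return ready; });

    producer.join();
    return ready;
}
```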
|
2,060,403 | 2,060,440 | Is there a better way to load a dll in C++? | Right now I do something like this and it seems messy if I end having a lot of functions I want to reference in my DLL. Is there a better and cleaner way of accessing the functions without having to create a typedef for each function definition so that it will compile and load the function properly. I mean the function definitions are already in the .h file and I shouldn't have to redeclare them after I load the function (or do I?) Is there a better solution than using LoadLibary? I don't necessarily need that function if there is a way I can do the same thing within Visual Studio 2005 project settings.
BHannan_Test_Class.h
#include "stdafx.h"
#include <windows.h>
#ifndef BHANNAN_TEST_CLASS_H_
#define BHANNAN_TEST_CLASS_H_
extern "C" {
// Returns n! (the factorial of n). For negative n, n! is defined to be 1.
int __declspec (dllexport) Factorial(int n);
// Returns true iff n is a prime number.
bool __declspec (dllexport) IsPrime(int n);
}
#endif // BHANNAN_TEST_CLASS_H_
BHannan_Test_Class.cpp
#include "stdafx.h"
#include "BHannan_Test_Class.h"
// Returns n! (the factorial of n). For negative n, n! is defined to be 1.
int Factorial(int n) {
int result = 1;
for (int i = 1; i <= n; i++) {
result *= i;
}
return result;
}
// Returns true iff n is a prime number.
bool IsPrime(int n) {
// Trivial case 1: small numbers
if (n <= 1) return false;
// Trivial case 2: even numbers
if (n % 2 == 0) return n == 2;
// Now, we have that n is odd and n >= 3.
// Try to divide n by every odd number i, starting from 3
for (int i = 3; ; i += 2) {
// We only have to try i up to the square root of n
if (i > n/i) break;
// Now, we have i <= n/i < n.
// If n is divisible by i, n is not prime.
if (n % i == 0) return false;
}
// n has no integer factor in the range (1, n), and thus is prime.
return true;
}
dll_test.cpp
#include <BHannan_Test_Class.h>
typedef int (*FactorialPtr) (int);
FactorialPtr myFactorial=NULL;
// Tests factorial of negative numbers.
TEST(FactorialTest, Negative) {
HMODULE myDLL = LoadLibrary("BHannan_Sample_DLL.dll");
if(myDLL) {
myFactorial = (FactorialPtr) GetProcAddress(myDLL,"Factorial");
if(myFactorial)
{
EXPECT_EQ(1, myFactorial(-5));
EXPECT_EQ(1, myFactorial(-1));
EXPECT_TRUE(myFactorial(-10) > 0);
}
FreeLibrary(myDLL);
}
}
| After building your .dll, take the import library (.lib) generated alongside it and link your test application with it. Then use the functions as they are declared in the .h file.
There's a minor change you need to do in your header file:
#ifdef EXPORTS_API
#define MY_API_EXPORT __declspec (dllexport)
#else
#define MY_API_EXPORT __declspec (dllimport)
#endif
extern "C" {
int MY_API_EXPORT Factorial(int n);
// do the same for other functions
}
This way, when building your dll you define EXPORTS_API in your project settings and functions get exported, in the client application, no need to define anything.
|
2,060,578 | 2,060,631 | Is it possible to write a varargs function that sends it argument list to another varargs function? |
Possible Duplicate:
C Programming: Forward variable argument list.
What I'd like to do is send data to a logging library (that I can't modfify) in a printf kind of way.
So I'd like a function something like this:
void log_DEBUG(const char* fmt, ...) {
char buff[SOME_PROPER_LENGTH];
sprintf(buff, fmt, <varargs>);
log(DEBUG, buff);
}
Can I pass varargs to another vararg function in some manner?
You can't forward the variable argument list, since there's no way to express what's underneath the ... as parameters to another function.
However you can build a va_list from the ... parameters and send that to a function which will format it up properly. This is what vsprintf is for. Example:
void log_DEBUG(const char* fmt, ...) {
char buff[SOME_PROPER_LENGTH];
va_list args;
va_start(args, fmt);
vsprintf(buff, fmt, args);
va_end(args);
log(DEBUG, buff);
}
|
2,060,735 | 2,067,321 | Boost.MPL and type list generation | Background
This is for a memory manager in a game engine. I have a freelist implemented, and would like to have a compile-time list of these (an MPL or Fusion vector, for example). The freelists correspond to allocation sizes: when allocating/deallocating objects of size less than a constant, they go to the corresponding freelist.
In the end, this means small objects globally have amortized constant time allocation and constant time deallocation. (Yay.)
Problem
The problem is generating the types I need, so I may eventually use Fusion to instantiate those types. The types in use are (shortened, etc.):
template <size_t N>
struct data_block
{
size_t mSize; // = N
char mData[N];
};
template <typename T, size_t ElementsPerPage,
template <typename> class Allocator = std::allocator >
class freelist { /* ... */ };
template <typename T>
class callocator; // allocator that uses malloc/free
The freelists will manage data_blocks of power-of-2 sizes, from a minimum up to a maximum. So what I want is:
static const size_t MinimumSmallSize = 4; // anything smaller gets rounded up
static const size_t MaximumSmallSize = 512; // anything bigger goes to the large allocator
static const size_t ElementsPerPage = 4096;
// mpl magic
To generate this:
typedef boost::mpl::vector<
freelist<data_block<4>, ElementsPerPage, callocator>,
freelist<data_block<8>, ElementsPerPage, callocator>
// ...
freelist<data_block<256>, ElementsPerPage, callocator>
freelist<data_block<512>, ElementsPerPage, callocator>
> free_list_collection;
Obviously, I could do this by hand but I'd rather avoid that for a more general and tweakable interface. Using the Fusion vector in code should be simpler than hard-coded members, too.
Question
I'm not sure the best way to go about this; I've never used MPL extensively before. Any ideas? I had a few poor ideas such as making a range, then remove_if it's not power of 2, etc., but surely that's not best. Maybe something recursive instead, that doubles each time, pushing into my result vector? I'm not sure how to go about that.
| This is the best solution I came up with, and it's fairly simple. It requires a log and pow meta-template, which I've included for those who want to play or try it:
#include <boost/mpl/for_each.hpp>
#include <boost/mpl/range_c.hpp>
#include <boost/mpl/transform.hpp>
#include <boost/mpl/vector.hpp>
#include <iostream>
namespace bmpl = boost::mpl;
//// helpers
template <size_t N, size_t Base>
struct log
{
static const size_t value = 1 + log<N / Base, Base>::value;
};
template <size_t Base>
struct log<1, Base>
{
static const size_t value = 0;
};
template <size_t Base>
struct log<0, Base>
{
static const size_t value = 0;
};
template <size_t N, size_t Power>
struct pow
{
static const size_t value = N * pow<N, Power - 1>::value;
};
template <size_t N>
struct pow<N, 0>
{
static const size_t value = 1;
};
//// types and constants
template <size_t N>
struct data_block
{
size_t mSize; // = N
char mData[N];
};
template <typename T, size_t ElementsPerPage,
template <typename> class Allocator = std::allocator >
class freelist { /* ... */ };
template <typename T>
class callocator; // allocator that uses malloc/free
static const size_t MinimumSmallSize = 4;
static const size_t MaximumSmallSize = 512;
static const size_t ElementsPerPage = 4096;
//// type generation
// turn a power into a freelist
template <typename T>
struct make_freelist
{
static const size_t DataSize = pow<2, T::value>::value;
typedef data_block<DataSize> data_type;
typedef freelist<data_type, ElementsPerPage, callocator> type;
};
// list of powers
typedef bmpl::range_c<size_t, log<MinimumSmallSize, 2>::value,
log<MaximumSmallSize, 2>::value + 1> size_range_powers;
// transform that list into freelists, into a vector
typedef bmpl::transform<size_range_powers, make_freelist<bmpl::_1>,
bmpl::back_inserter<bmpl::vector<> > >::type size_range;
//// testing
struct print_type
{
template <typename T>
void operator()(const T&) const
{
std::cout << typeid(T).name() << "\n";
}
};
int main(void)
{
bmpl::for_each<size_range>(print_type());
std::cout << std::endl;
}
The core of it is just a struct and two typedef's. The log trick reduced the size of the range greatly, and pow of course just undoes the log. Works exactly how I'd like, and I don't see any way to make it simpler.
That said, I've decided to go with Boost.Pool, so I won't be needing my solution (because their pool sizes are dynamic, not compile-time.) But this was good fun.
|
2,060,742 | 2,060,783 | C++ object class problems when used in another class | Having trouble when trying to create a class using another class (and 2 inner classes); I think it might be a syntax problem.
The first class
class listitem
{
//listitem.h(11)
public:
//MONSTER CLASS
static class monster
{
public:
monster(string thename);
monster(void);
~monster(void);
private:
string name;
int level;
};
//PLAYER CLASS
static class player
{
public:
player(string _pname, int _plevel, int _maxhp, int _currhp);
player(void);
~player(void);
private:
string pname;
int plevel;
int maxhp;
int currhp;
};
public:
listitem(player member, monster head);
~listitem(void);
private:
player a;
monster b;
//other fields
};
The second class is where I encounter a problem:
class hatelist
{
private:
vector<listitem> thelist;
public:
hatelist(listitem newlist);
int addNewListItem(listitem item);
hatelist(void);
~hatelist(void);
};
The implementation of the offending code:
hatelist::hatelist(listitem inputlist)
{ //hatelist.cpp(6)
thelist.push_back(inputlist);
}
1>hatelist.cpp
1>c:\...\listitem.h(11) : error C2011: 'listitem' : 'class' type redefinition
1>c:\...\listitem.h(11) : see declaration of 'listitem'
1>c:\...\hatelist.cpp(6) : error C2027: use of undefined type 'listitem'
1>c:\...\listitem.h(11) : see declaration of 'listitem'
1>c:\...\hatelist.cpp(6) : error C2227: left of '->{dtor}' must point to class/struct/union/generic type
Any help would be appreciated.
| Did you protect your header with a
#ifndef LISTITEM_H
#define LISTITEM_H
// All of your code
#endif
If not, it could be getting included twice, causing your error.
|
2,060,900 | 2,060,921 | difference between foo[i] and foo->at(i) with stl vector | is there any reason why
foo = (bar->at(x))->at(y);
works but
foo = bar[x][y];
does not work, where bar is a vector of vectors (using the c++ stl)
the declaration is:
std::vector< std::vector < Object * > * >
| Is it a vector of vectors or a vector of pointers to vectors? Your code should work as advertised:
typedef std::vector<int> vec_int;
typedef std::vector<vec_int> multi_int;
multi_int m(10, vec_int(10));
m.at(2).at(2) = /* ... */;
m[2][1] = /* ... */;
But your code appears to have:
typedef std::vector<vec_int*> multi_int; // pointer!
multi_int* m; // more pointer!
If you have pointers, you'll need to dereference them first to use operator[]:
(*(*m)[2])[2] = /* ... */;
That can be ugly. Maybe use a reference temporarily:
multi_int& mr = *m;
(*mr[2])[2] = /* ... */;
Though that's still somewhat ugly. Maybe a free function is helpful:
template <typename Inner>
typename Inner::value_type& access_ptr(std::vector<Inner*>* pContainer,
unsigned pInner, unsigned pOuter)
{
return (*(*pContainer)[pInner])[pOuter];
}
access_ptr(m, 2, 2) = /* ... */
Most preferable is to be rid of the pointers, though. Pointers can leak and have all sorts of problems, like leaking when exceptions are thrown. If you must use pointers, use a pointer container from boost for the inner vector, and store the actual object in a smart pointer.
Also, your title is a bit misleading. The difference between at and operator[] is that at does range checks. Otherwise, they are the same.
|
2,060,935 | 2,060,970 | Easiest way to download HTML page from web? | I have a web page whose content I'd like to download into a wxString.
For example, let's say that page is this:
http://www.example.com/mypage.html
And wxString would contain HTML source. In some other languages, say PHP
for example, I would write something like this:
$html = file_get_contents('http://www.example.com/mypage.html');
I guess it is not a one-liner in wxWidgets, and I have peeked into wxHTTP class, but I wonder if there is some simple wrapper class that does the job with minimal code?
If you are running on Windows you could use the Microsoft WinHTTP library. However, having a quick look at the wxHTTP documentation, WinHTTP probably isn't any easier.
Have a look at this straightforward wxHTTP sample code. It is doing exactly what you are after.
|
2,061,154 | 2,062,008 | Byte swap of a byte array into a long long | I have a program where I simply copy a byte array into a long long array. There are a total of 20 bytes, so I just needed a long long array of 3. The reason I copied the bytes into a long long was to make it portable on 64-bit systems.
I just need to byte swap before I populate that array, such that the values that go into it are reversed.
There is a byteswap.h which has an _int64 bswap_64(_int64) function that I think I can use. I was hoping for some help with the usage of that function given my long long array. Would I just simply pass in the name of the long long and read it out into another long long array?
I am using C++, not .NET or C#.
Update:
Clearly there are issues I am still confused about. For example, working with byte arrays that just happen to be populated with a 160-bit hex string, which then has to be output in decimal form, made me think that if I just did a simple assignment to a long (4-byte) array my worries would be over. Then I found out that this code would have to run on a 64-bit Sun box. Then I thought that since the sizes of data types can change from one environment to another, a simple assignment would not cut it. This made me think about using a long long to make the code sort of immune to that size issue. However, then I read about endianness and how 64-bit reads MSB vs 32-bit which is LSB. So, taking my data and reversing it such that it is stored in my long long as MSB was the only solution that came to mind. Of course, there is the matter of the 4 extra bytes, which in this case does not matter, and I will simply take the decimal output and display any six digits I choose. However, programmatically, I guess it would be better to just work with 4-byte longs and not deal with that whole wasted-4-bytes issue.
| Between this and your previous questions, it sounds like there are several fundamental confusions here:
If your program is going to be run on a 64-bit machine, it sounds like you should compile and unit-test it on a 64-bit machine. Running unit tests on a 32-bit machine can give you confidence the program is correct in that environment, but doesn't necessarily mean the code is correct for a 64-bit environment.
You seem to be confused about how 32- and 64-bit architectures relate to endianness. 32-bit machines are not always little-endian, and 64-bit machines are not always big-endian. They are two separate concepts and can vary independently.
Endianness only matters for single values consisting of multiple bytes; for example, the integer 305,419,896 (0x12345678) requires 4 bytes to represent, or a UTF-16 character (usually) requires 2 bytes to represent. For these, the order of storage matters because the bytes are interpreted as a single unit. It sounds like what you are working with is a sequence of raw bytes (like a checksum or hash). Values like this, where multiple bytes are not interpreted in groups, are not affected by the endianness of the processor. In your case, casting the byte array to a long long * actually creates a potential endianness problem (on a little-endian architecture, your bytes will now be interpreted in the opposite order), not the other way around.
Endianness also doesn't matter unless the little-endian and big-endian versions of your program actually have to communicate with each other. For example, if the little-endian program writes a file containing multi-byte integers without swapping and the big-endian program reads it in, the big-endian program will probably misinterpret the data. It sounds like you think your code that works on a little-endian platform will suddenly break on a big-endian platform even if the two never exchange data. You generally don't need to be worried about the endianness of the architecture if the two versions don't need to talk to each other.
Another point of confusion (perhaps a bit pedantic). A byte does not store a "hex value" versus a "decimal value," it stores an integer. Decimal and hexadecimal are just two different ways of representing (printing) a particular integer value. It's all binary in the computer's memory anyway, hexadecimal is just an easy conversion to and from binary and decimal is convenient to our brains since we have ten fingers.
Assuming what you're trying to do is print the value of each byte of the array as decimal, you could do this:
unsigned char bytes[] = {0x12, 0x34, 0x56, 0x78};
for (int i = 0; i < sizeof(bytes) / sizeof(unsigned char); ++i)
{
printf("%u ", (unsigned int)bytes[i]);
}
printf("\n");
Output should be something like:
18 52 86 120
|
2,061,520 | 2,061,787 | Is there a simple way to get scaled unix timestamp in C++ | I'm porting some PHP to C++. Some of our database code stores time values as unix time stamps *100
The php contains code that looks a bit like this.
//PHP
static function getTickTime()
{
return round(microtime(true)*100);
}
I need something like this:
//C++
uint64_t getTickTime()
{
ptime Jan1st1970(date(1970, 1, 1));
ptime Now = microsecond_clock::local_time();
time_duration diff = Now - Jan1st1970;
return static_cast<uint64_t>(diff.total_seconds()*100);
}
Is something like this sensible? Is there a neater solution?
Is there something nasty in this code that I can't see? (Guess I'm not experienced enough with boost::date_time to know these things)
| The solution suggested by dauphic can be modified to something like this
uint64_t getTickTime()
{
timeval tim;
gettimeofday(&tim, NULL);
return static_cast<uint64_t>(tim.tv_sec)*100 + tim.tv_usec/10000; // widen before *100 to avoid 32-bit overflow
}
I can't think of a neater solution than that.
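For later readers: with C++11's <chrono> (not available when this was asked), the same value falls out directly; std::chrono::system_clock counts from the Unix epoch on the common implementations:

```cpp
#include <chrono>
#include <cstdint>
#include <ratio>

// Centiseconds (1/100 s) since the Unix epoch -- same scaling as the PHP code.
uint64_t getTickTime()
{
    using namespace std::chrono;
    using centiseconds = duration<uint64_t, std::centi>;
    return duration_cast<centiseconds>(
        system_clock::now().time_since_epoch()).count();
}
```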
|
2,061,558 | 2,072,109 | streaming video to and from multiple sources | I wanted to get some ideas on how some of you would approach this problem.
I've got a robot, that is running linux and uses a webcam (with a v4l2 driver) as one of its sensors. I've written a control panel with gtkmm. Both the server and client are written in C++. The server is the robot, client is the "control panel". The image analysis is happening on the robot, and I'd like to stream back the video from the camera to the control panel for two reasons:
A) for fun
B) to overlay image analysis results
So my question is, what are some good ways to stream video from the webcam to the control panel as well as giving priority to the robot code to process it? I'm not interested it writing my own video compression scheme and putting it through the existing networking port, a new network port (dedicated to video data) would be best I think. The second part of the problem is how do I display video in gtkmm? The video data arrives asynchronously and I don't have control over main() in gtkmm so I think that would be tricky.
I'm open to using things like vlc, gstreamer or any other general compression libraries I don't know about.
thanks!
EDIT:
The robot has a 1GHz processor, running a desktop like version of linux, but no X11.
| Gstreamer solves nearly all of this for you, with very little effort, and also integrates nicely with the Glib event system. GStreamer includes V4L source plugins, gtk+ output widgets, various filters to resize / encode / decode the video, and best of all, network sink and sources to move the data between machines.
For prototype, you can use the 'gst-launch' tool to assemble video pipelines and test them, then it's fairly simply to create pipelines programatically in your code. Search for 'GStreamer network streaming' to see examples of people doing this with webcams and the like.
|
2,061,593 | 2,061,613 | Why do C languages require parens around a simple condition in an if statement? | It sounds stupid, but over the years I haven't been able to come up with a use case that would require this. A quick google search didn't reveal anything worthwhile.
From memory there was a use case mentioned by Bjarne Stroustrup but i can't find a reference to it.
So why can't you have this in C languages:
int val = 0;
if val
doSomething();
else
doSomethingElse();
I can accept the "we couldn't be bothered adding support to the lexer" reason, I just want to figure out if this syntax breaks other language constructs. Considering how many wacky syntax features there are in C/C++, I hardly think this would have added much complexity.
| If there are no brackets around expressions in if constructs, what would be the meaning of the following statement?
if x * x * b = NULL;
Is it
if (x*x)
(*b) = NULL;
or is it
if (x)
(*x) * b = NULL;
(of course these are silly examples and don't even work for obvious reasons but you get the point)
TLDR: Brackets are required in C to remove even the possibility of any syntactic ambiguity.
|
2,061,715 | 2,061,814 | Unresolved external symbols in beginners CUDA program | I create a new Win32 Console App as an empty project
I am running Windows 7 64bit with Visual Studio 2008 C++. I am trying to get the sample code from the bottom of this article to build: http://www.ddj.com/architect/207200659
I add CUDA Build Rule v2.3.0 to the project's custom build rules. It is the only thing with a checkbox in the available rule files list
I create moveArrays.cu in the Source Files (folder/filter???)
In that file I add the following code:
// moveArrays.cu
//
// demonstrates CUDA interface to data allocation on device (GPU)
// and data movement between host (CPU) and device.
#include <stdio.h>
#include <assert.h>
#include <cuda.h>
int main(void)
{
float *a_h, *b_h; // pointers to host memory
float *a_d, *b_d; // pointers to device memory
int N = 14;
int i;
// allocate arrays on host
a_h = (float *)malloc(sizeof(float)*N);
b_h = (float *)malloc(sizeof(float)*N);
// allocate arrays on device
cudaMalloc((void **) &a_d, sizeof(float)*N);
cudaMalloc((void **) &b_d, sizeof(float)*N);
// initialize host data
for (i=0; i<N; i++) {
a_h[i] = 10.f+i;
b_h[i] = 0.f;
}
// send data from host to device: a_h to a_d
cudaMemcpy(a_d, a_h, sizeof(float)*N, cudaMemcpyHostToDevice);
// copy data within device: a_d to b_d
cudaMemcpy(b_d, a_d, sizeof(float)*N, cudaMemcpyDeviceToDevice);
// retrieve data from device: b_d to b_h
cudaMemcpy(b_h, b_d, sizeof(float)*N, cudaMemcpyDeviceToHost);
// check result
for (i=0; i<N; i++)
assert(a_h[i] == b_h[i]);
// cleanup
free(a_h); free(b_h);
cudaFree(a_d); cudaFree(b_d);
}
When I build I get these errors:
1>------ Build started: Project: CUDASandbox, Configuration: Debug x64 ------
1>Linking...
1>moveArrays.cu.obj : error LNK2019: unresolved external symbol cudaFree referenced in function main
1>moveArrays.cu.obj : error LNK2019: unresolved external symbol cudaMemcpy referenced in function main
1>moveArrays.cu.obj : error LNK2019: unresolved external symbol cudaMalloc referenced in function main
1>moveArrays.cu.obj : error LNK2019: unresolved external symbol __cudaUnregisterFatBinary referenced in function __cudaUnregisterBinaryUtil
1>moveArrays.cu.obj : error LNK2019: unresolved external symbol __cudaRegisterFatBinary referenced in function __sti____cudaRegisterAll_45_tmpxft_00001264_00000000_6_moveArrays_cpp1_ii_main
1>D:\Stuff\Programming\Visual Studio 2008\Projects\CUDASandbox\x64\Debug\CUDASandbox.exe : fatal error LNK1120: 5 unresolved externals
1>Build log was saved at "file://d:\Stuff\Programming\Visual Studio 2008\Projects\CUDASandbox\CUDASandbox\x64\Debug\BuildLog.htm"
1>CUDASandbox - 6 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
I can compile and run the example CUDA programs that came with the SDK. I know I am missing something simple here, but what is it?
| I guess you are failing to link to the correct library.
Make sure you have the CUDA library added under "Configuration Properties->Linker->Input".
Refer to this.
|
2,061,804 | 2,061,915 | openssl BF_cfb64_encrypt thread-safety | Is openssl's BF_cfb64_encrypt() thread safe?
A sample code to use it to encrypt / decrypt a blob of data would be much appreciated.
| According to the FAQ, the OpenSSL routines are thread safe. I looked at the source of that function, and it does indeed appear to be thread safe. Of course, that assumes you are not passing the same input/output buffers to the function on different threads.
For an example of a call to it, you should be able to look in the OpenSSL source. There is a file named bftest.c located in openssl/crypto/bf/ that has test calls to the function.
Edit: After thinking about this a bit more, it is probably still wise to use the CRYPTO_set_locking_callback functionality for multi-threaded locking. The Blowfish algorithm asked about in the original question does not currently (in the version of OpenSSL I am using) use those locks, but that does not guarantee it will not in the future for some reason. Plus it saves future pain if you end up using functionality (such as RAND_bytes) that does need those locks.
|
2,061,885 | 2,061,939 | boost regex sub-string match | I want to return output "match" if the pattern "regular" is a sub-string of variable st. Is this possible?
int main()
{
string st = "some regular expressions are Regxyzr";
boost::regex ex("[Rr]egular");
if (boost::regex_match(st, ex))
{
cout << "match" << endl;
}
else
{
cout << "not match" << endl;
}
}
| The boost::regex_match only matches the whole string, you probably want boost::regex_search instead.
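To make the distinction concrete, here is a small sketch using std::regex for self-containment (an assumption on my part; boost::regex behaves the same way for these two calls):

```cpp
#include <cassert>
#include <regex>
#include <string>

// regex_match succeeds only when the pattern matches the *entire* string;
// regex_search succeeds when the pattern matches any sub-string.
bool whole_match(const std::string& s)
{
    return std::regex_match(s, std::regex("[Rr]egular"));
}

bool sub_match(const std::string& s)
{
    return std::regex_search(s, std::regex("[Rr]egular"));
}
```

Swapping the regex_match call for regex_search in the original program makes it print "match" for the sample string.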
|
2,061,947 | 2,061,959 | Easy way to add text above all methods in a solution in VS 2005? | A colleague was working on a Perl script to consume a C++ source file and add text above all of the methods in the file. He was looking to develop code using regular expressions from the ground up to detect the top line of the method:
void MyClass::MyMethod(int somethingOrOther)
Trying to do this from scratch is fraught with landmines, like discriminating the method headers from macros, comments, conditionals, etc.
This may be the really, really hard way to do things, as VS 2005 seems to be able to figure out exactly where all of the methods start and end (so that I can click on the box to collapse the method source).
Is there an easy way within the VS 2005 IDE to add some text above each method, solution-wide?
| You can do a regular expression search and replace. Since you can place new lines into the replace box, you can go nuts and do anything you want (except for extracting parameters). Example forthcoming.
Search string: ^:b*{:i}:b{:i}\:\:{:i}:b*{\(.*\)}
Replace string: ///Regex Example\n///Class: \2\n///Method: \3 returning \1\n\1 \2::\3\4
Code:
///Regex Example
///Class: Class
///Method: Foo returning void
void Class::Foo(int oneParam)
///Regex Example
///Class: Class
///Method: Bar returning void
void Class::Bar(int noParam)
I am not aware of a method of hooking into Visual Studio parser, unless you write a plugin, which might be a bit of an overkill.
|
2,061,978 | 2,063,826 | How do I align QtWidget to right in the QtToolBar? | I have some QtWidget (QtLineEdit) and I would like to align it to the right in my QtToolBar.
Is there any simple way to do it?
Thanks.
| Try putting a spacer before it.
|
2,062,171 | 2,064,275 | HitTest not working as expected | I am wanting to display a context menu when a user right-clicks on an item within a CListCtrl. My code is as follows:
void DatastoreDialog::OnContextMenu(CWnd *pWnd, CPoint pos)
{
// Find the rectangle around the list control
CRect rectMainArea;
m_itemList.GetWindowRect(&rectMainArea);
// Find out if the user right-clicked the list control
if( rectMainArea.PtInRect(pos) )
{
LVHITTESTINFO hitTestInfo;
hitTestInfo.pt = pos;
hitTestInfo.flags = LVHT_ONITEM;
m_itemList.HitTest(&hitTestInfo);
if (hitTestInfo.flags & LVHT_NOWHERE)
{
// No item was clicked
}
else
{
MyContextHandler(hitTestInfo)
}
}
}
When I actually run the code, regardless of where I click; on an item, in empty space within the CListCtrl, anywhere else on the dialog (by removing the first if statement); hitTestInfo.flags is set to 48, which, if I'm reading this correctly, means "Below, and to the right of the whole CListCtrl". Which doesn't really make sense when I'm first checking if it's within the CListCtrl.
So do I have an incorrect assumption somewhere? Is my code incorrect? Am I missing something?
As a possibly-related, or maybe not, BONUS QUESTION, both LVHT_ONITEMSTATEICON and LVHT_ABOVE are #defined as 0x08 - why is this? This may be key to my misunderstanding.
| I think HitTest() needs a position in client coordinates. It's been a while since I last did this, but it doesn't make sense to me to pass screen coordinates into a client window hit testing routine. Add m_itemList.ScreenToClient(&pos); before hitTestInfo.pt = pos; and see if that helps.
Furthermore, note that OnContextMenu() may not be the call you're looking for. It is called in response to (by default) shift-f10 as well. The documentation of WM_CONTEXTMENU is (on reading it diagonally, I don't remember how it works from when I last did this) not very clear on what the content of 'pos' will be in that case; you may need to do an explicit GetCursorPos() to handle that case. Or just show your context in WM_RBUTTONDOWN.
|
2,062,242 | 2,062,333 | C++ - Deleting a vector element that is referenced by a pointer | Well, I don't know if it is possible, but the thing would be:
struct stPiece
{
/* some stuff */
stPiece *mother; // pointer to the piece that created this one
};
vector<stPiece> pieces;
Is it possible to erase the piece referenced by 'mother' from pieces, having just that pointer as a reference? How?
Would it mess with the other references? (i.e. if it is not the last element in the vector, by shifting the next elements to other memory positions, while the other '*mothers' remain constant). Of course, I assuming that all the child pieces will be deleted (so I won't need to update any pointer that goes to the same mother).
Thanks!
| If your mother pointers point directly to elements of the pieces vector you will get in all kinds of trouble.
Deleting an element from pieces will shift the positions of all the elements at higher indexes. Even inserting elements can make all the pointers invalid, since the vector might need to reallocate its internal array, which can move all the elements to new positions in memory.
To answer your main question: you can't delete the element you have the pointer to directly; you would first need to search through the vector to find it, or calculate its index in the vector.
Storing the indexes of the elements as mother, instead of pointers into pieces, would make it a bit more robust: at least inserting new elements could not break the existing mothers. But deleting from pieces would still shift elements to new indexes.
Using a std::list for pieces and storing iterators into it as mother might be a solution. Iterators of std::list are not invalidated when other elements of that list are removed or added. If different elements can have the same mother you still have the problem of finding out when to remove the mother elements; then maybe using boost::shared_ptr would be simpler.
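A small sketch of the index-based variant (my own illustration, not the asker's actual types): indexes survive the reallocations caused by push_back, whereas raw pointers into the vector would not:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustration: store the mother as an index into the vector rather than a
// raw pointer. push_back may reallocate and move every element, invalidating
// pointers, but an index stays correct as long as nothing before it is erased.
struct Piece
{
    int value;
    std::size_t mother; // index of the creating piece
};

int mother_value(const std::vector<Piece>& pieces, std::size_t child)
{
    return pieces[pieces[child].mother].value;
}
```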
|
2,062,259 | 2,062,329 | Using C/C++ to efficiently de-serialize a string comprised of floats, tokens and blank lines | I have large strings that resemble the following...
some_text_token
24.325973 -20.638823
-1.964366 0.753947
-1.290811 -3.547422
0.813014 -3.547227
0.472015 3.723311
-0.719116 3.676793
other_text_token
24.325973 20.638823
-1.964366 0.753947
-1.290811 -3.547422
-1.996611 -2.877422
0.813014 -3.547227
1.632365 2.083673
0.472015 3.723311
-0.719116 3.676793
...
...from which I'm trying to efficiently, and in the interleaved sequence they appear in the string, grab...
the text tokens
the float values
the blank lines
...but I'm having trouble.
I've tried strtod and successfully grabbed the floats from the string, but I can't seem to get a loop using strtod to report back to me the interleaved text tokens and blank lines. I'm not 100% confident strtod is the "right track" given the interleaved tokens and blank lines that I'm also interested in.
The tokens and blank lines are present in the string to give context to the floats so my program knows what the float values occurring after each token are to be used for, but strtod seems more geared, understandably, toward just reporting back floats it encounters in a string without regard for silly things like blank lines or tokens.
I know this isn't very hard conceptually, but being relatively new to C/C++ I'm having trouble judging what language features I should focus on to take best advantage of the efficiency C/C++ can bring to bear on this problem.
Any pointers? I'm very interested in why various approaches function more or less efficiently. Thanks!!!
| Using C, I would do something like this (untested):
#include <stdio.h>
#define MAX 128
char buf[MAX];
while (fgets(buf, sizeof buf, fp) != NULL) {
double d1, d2;
if (buf[0] == '\n') {
/* saw blank line */
} else if (sscanf(buf, "%lf%lf", &d1, &d2) != 2) {
/* buf has the next text token, including '\n' */
} else {
/* use the two doubles, d1, and d2 */
}
}
The check for blank line is first because it's relatively inexpensive. Depending upon your needs:
you might need to increase/change MAX,
you may need to check if buf ends with a newline, if it doesn't, then the line was too long (go to 1 or 3 in that case),
you might need a function that reads full lines from a file, using malloc() and realloc() to dynamically allocate the buffer (see this for more),
you might want to take care of special cases such as a single floating-point value on a line (which I assume is not going to happen). sscanf() returns the number of input items successfully matched and assigned.
I am also assuming that blank lines are really blank (just the newline character by itself). If not, you will need to skip leading white-space. isspace() in ctype.h is useful in that case.
fp is a valid FILE * object returned by fopen().
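For completeness, a hedged sketch of how the same classification could look in idiomatic C++, using getline and string streams; this avoids the fixed-size buffer and the long-line caveats above:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Sketch: read line by line with getline, then try to pull two doubles out
// of each line with a string stream. An empty line counts as a blank line;
// a line that doesn't parse as two doubles is treated as a text token.
struct Parsed
{
    std::vector<std::string> tokens;
    std::vector<std::pair<double, double> > points;
    int blank_lines;
    Parsed() : blank_lines(0) {}
};

Parsed parse(const std::string& input)
{
    Parsed out;
    std::istringstream in(input);
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty()) { ++out.blank_lines; continue; }
        std::istringstream fields(line);
        double a, b;
        if (fields >> a >> b)
            out.points.push_back(std::make_pair(a, b));
        else
            out.tokens.push_back(line);
    }
    return out;
}
```

Note one simplification: a malformed line such as "1.5 junk" would land in tokens here; a production parser would want a stricter per-line check.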
|
2,062,316 | 2,062,383 | Compiler optimization for fastest possible code | I would like to select the compiler optimizations to generate the fastest possible application.
Which of the following settings should I set to true?
Dead store elimination
Eliminate duplicate expressions within basic blocks and functions
Enable loop induction variable and strength reduction
Enable Pentium instruction scheduling
Expand common intrinsic functions
Optimize jumps
Use register variables
There is also the option 'Generate the fastest possible code.', which I have obviously set to true. However, when I set this to true, all the above options are still set at false.
So I would like to know if any of the above options will speed up the application if I set them to true?
|
So I would like to know if any of the above options will speed up the application if I set them to true?
I know some will hate me for this, but nobody here can answer you truthfully. You have to try your program with and without them, and profile each build and see what the results are. Guess-work won't get anybody anywhere.
Compilers already do tons(!) of great optimization, with or without your permission. Your best bet is to write your code in a clean and organized manner, and worry about maintainability and extensibility. As I like to say: Code now, optimize later.
|
2,062,817 | 2,062,840 | IcmpSendEcho and setting MTU size? | Does anyone know if it's possible to adjust the MTU while sending an ICMP echo packet (with IcmpSendEcho)?
I'm trying to do this under Windows using the IcmpSendEcho() function.
| The maximum transmission unit (MTU) is a property of your network subsystem, it's not something that can be changed on the fly. Typical Ethernet, for instance, has a maximum MTU of 1,500 bytes.
If you want to adjust the size of the request, it looks as if the fourth argument to IcmpSendEcho() is the size of the data to send.
If you're trying to implement path MTU discovery this is the way to adjust the size of your outgoing request, to figure out what the maximum allowable MTU to a host is. Note that you also need to set the DF (don't fragment) IPv4 header bit.
What are you trying to achieve?
|
2,062,837 | 2,063,116 | Help with type traits | Suppose we have the following template class
template<typename T> class Wrap { /* ... */ };
We can not change Wrap. It is important.
Let there are classes derived from Wrap<T>. For example,
class NewInt : public Wrap<int> { /* ... */ };
class MyClass : public Wrap<myclass> { /* ... */ };
class Foo : public Wrap<Bar> { /* ... */ };
We can not change these classes either. All the classes above are 3rd party. They are not mine.
I need the following compile time type_traits:
template<class T>
struct is_derived_from_Wrap {
static const bool value = /* */;
};
What do I need?
assert(is_derived_from_Wrap<NewInt>::value == true); // Indeed I need static assert
assert(is_derived_from_Wrap<MyClass>::value == true);
assert(is_derived_from_Wrap<char>::value == false);
struct X {};
assert(is_derived_from_Wrap<X>::value == false);
| You can do this using SFINAE, but it's kind of magical if you don't know what's going on...
template<typename T> class Wrap { };
struct myclass {};
struct X {};
class Int : public Wrap<int> { /* ... */ };
class MyClass : public Wrap<myclass> { /* ... */ };
template< typename X >
struct is_derived_from_Wrap
{
struct true_type { char _[1]; };
struct false_type { char _[2]; };
template< typename U >
static true_type test_sfinae( Wrap<U> * w);
static false_type test_sfinae( ... );
enum { value = sizeof( test_sfinae( (X*)(0) ) )==sizeof(true_type) };
};
#include <iostream>
#define test(X,Y) std::cout<<( #X " == " #Y )<<" : "<<( (X)?"true":"false") <<std::endl;
int main()
{
test(is_derived_from_Wrap <Int>::value, true);
test(is_derived_from_Wrap <MyClass>::value, true);
test(is_derived_from_Wrap <char>::value, false);
test(is_derived_from_Wrap <X>::value, false);
}
This gives the expected output
is_derived_from_Wrap <Int>::value == true : true
is_derived_from_Wrap <MyClass>::value == true : true
is_derived_from_Wrap <char>::value == false : false
is_derived_from_Wrap <X>::value == false : false
There are a couple of gotchas with my code. It will also return true if the type itself is a Wrap<U>, not just something derived from one.
assert( is_derived_from_Wrap< Wrap<char> >::value == 1 );
This can probably be fixed using a bit more SFINAE magic if needed.
It will return false if the derivation is not public (i.e. is private or protected):
struct Evil : private Wrap<T> { };
assert( is_derived_from_Wrap<Evil>::value == 0 );
I suspect this can't be fixed. (But I may be wrong ). But I suspect public inheritance is enough.
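For reference, with C++11 the same trick can be written a bit more compactly on top of std::true_type/std::false_type and decltype. This is my own variant of the answer above, with the same public-inheritance caveat:

```cpp
#include <cassert>
#include <type_traits>

template<typename T> class Wrap { };
struct Bar {};
class Foo : public Wrap<Bar> { };
struct X {};

// Overload resolution picks the Wrap<U>* overload for anything publicly
// derived from (or equal to) some Wrap<U>; everything else falls to "...".
// The functions are never defined: they are only named inside decltype.
template<typename U> std::true_type wrap_test(Wrap<U>*);
std::false_type wrap_test(...);

template<typename T>
struct is_derived_from_Wrap : decltype(wrap_test(static_cast<T*>(0))) {};
```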
|
2,062,881 | 2,063,075 | How to skip common classes in VS 2008 when stepping in? | How can I skip common classes in VS 2008 debugger when stepping in?
For example, I do not want debugger to step into any of the std:: classes.
How can I achieve that?
I've found ways of doing this in VS 2005 and earlier, but not 2008
| You can do this by entering entries into the registry (I know, it sucks). The key you are looking for varies from 32 to 64 bit systems. For 32-bit systems the key is
HKEY_LOCAL_MACHINE\Software\Microsoft\VisualStudio\9.0\NativeDE\StepOver
If you're running a 64 bit OS and a 32 bit Visual Studio the key is
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\9.0\NativeDE\StepOver
The Wow6432Node key is a key present for 32 bit applications running on 64 bit systems.
(Sidenote: searching the registry for "_RTC_CheckEsp" probably will lead you to the right place, it's a default entry in Visual Studio 9)
The syntax should be familiar to you, but as an example, a simple entry could be the string value boost::.*=NoStepInto, which will stop the debugger from stepping into Boost. See http://www.cprogramming.com/debugging/visual-studio-msvc-debugging-NoStepInto.html for some other examples.
Hope this helps :)
|
2,062,956 | 2,062,982 | Checking if an iterator is valid | Is there any way to check if an iterator (whether it is from a vector, a list, a deque...) is (still) dereferenceable, i.e. has not been invalidated?
I have been using try-catch, but is there a more direct way to do this?
Example: (which doesn't work)
list<int> l;
for (i = 1; i<10; i++) {
l.push_back(i * 10);
}
itd = l.begin();
itd++;
if (something) {
l.erase(itd);
}
/* now, in other place.. check if it points to somewhere meaningful */
if (itd != l.end())
{
// blablabla
}
| I assume you mean "is an iterator valid," that it hasn't been invalidated due to changes to the container (e.g., inserting/erasing to/from a vector). In that case, no, you cannot determine if an iterator is (safely) dereferenceable.
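Since there is no portable validity test, the usual discipline is to never hold an iterator across a mutation and instead refresh it from the mutating call; for example, list::erase hands back the next valid iterator. A sketch of that idiom (my own illustration, not part of the original answer):

```cpp
#include <cassert>
#include <list>

// Instead of asking "is itd still valid?", refresh the iterator at the point
// of mutation: list::erase returns an iterator to the element following the
// erased one, which is always safe to continue with.
int erase_evens_and_sum(std::list<int>& l)
{
    for (std::list<int>::iterator it = l.begin(); it != l.end(); ) {
        if (*it % 2 == 0)
            it = l.erase(it);   // 'it' is re-seated, never left dangling
        else
            ++it;
    }
    int sum = 0;
    for (std::list<int>::const_iterator it = l.begin(); it != l.end(); ++it)
        sum += *it;
    return sum;
}
```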
|
2,063,037 | 2,063,083 | Call my own Java code from C# | Having my own Java code, I'm using C# to call some unmanaged code that calls (via JNI) the Java code. I'm using JNI since I need to ensure:
the ability that the Java code will run on a real JVM and not on some .NET VM
the ability to attach to the VM for debugging (IKVM doesn't support it)
I need a free solution
The current free solutions are not applicable (e.g. IKVM)
Anyway, my question is how can I manage strings passed between these layers in the best manner without leaks.
I'm doing something like:
[DllImport(@"MyDll.dll")]
public extern static void receive_message(string receDest, StringBuilder response);
This means I'm allocating the memory for the response in the managed code.
I want to avoid that since I don't know the response length in advance. How can I write an appropriate JNI method that will allocate the right buffer for the managed code without leaks? The JNI code should be thread safe.
Any suggestions?
Thanks,
Guy
| You may think you need JNI, but your requirements don't really indicate it.
The requirement to use a real JVM does not dictate the use of JNI. I'd suggest sharpening your requirements, or considering looser coupling. For example, socket comms, web services, a shared database, a shared file, or a queue.
If you really need Java and .NET to be run in the same process, with tight coupling, consider JNBridge.
They've solved the problem you are confronting.
|
2,063,328 | 2,064,061 | Parallel for_each using openmp | Why does this code not parallelize std::for_each() when it works perfectly fine with std::sort()?
How do I fix it?
g++ -fopenmp -D_GLIBCXX_PARALLEL=1 -o p p.cc && time ./p sort
GCC 4.3 on Linux.
#include <cstdio>
#include <algorithm>
#include <vector>
#include <cstring>
void delay()
{
for(int c = 0; c < 1000000; c++) {
}
}
int lt(int a, int b)
{
delay();
return a < b;
}
void noop(int a)
{
delay();
}
int main(int argc, char **argv)
{
if (argc < 2) {
printf("%s <sort | for_each>\n", argv[0]);
return 1;
}
std::vector<int> foo(10000);
if (!strcmp(argv[1], "sort")) {
std::sort(foo.begin(), foo.end(), lt);
} else if (!strcmp(argv[1], "for_each")) {
std::for_each(foo.begin(), foo.end(), noop);
}
}
| Just compiling with -D_GLIBCXX_PARALLEL does not necessarily parallelize all algorithms (see here):
Please note that this doesn't necessarily mean that everything will end up being executed in a parallel manner, but rather that the heuristics and settings coded into the parallel versions will be used to determine if all, some, or no algorithms will be executed using parallel variants.
However the Configuration and tuning Chapter might help you to force parallelization.
Just a note to your "Benchmark":
std::sort and std::for_each won't necessarily call delay() the same number of times. std::for_each calls the delay method for N times, std::sort calls it for something between N log(N) and N^2 times (see reference). Thus measuring the execution time gives you only a weak indication about parallelization.
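That call-count difference is easy to observe directly with a counting function object; the following is my own sketch, independent of the parallel mode itself:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Count invocations of the user-supplied callable: for_each invokes it
// exactly N times, while sort's comparator runs on the order of N log N
// times, so equal wall-clock times would not mean equal per-call cost.
static long g_calls = 0;

bool counting_lt(int a, int b) { ++g_calls; return a < b; }
void counting_noop(int)        { ++g_calls; }

long calls_for_each(std::vector<int> v)
{
    g_calls = 0;
    std::for_each(v.begin(), v.end(), counting_noop);
    return g_calls;
}

long calls_sort(std::vector<int> v)
{
    g_calls = 0;
    std::sort(v.begin(), v.end(), counting_lt);
    return g_calls;
}
```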
|
2,063,488 | 2,063,539 | Interprocess communication | I have an API for a game, which calls methods in a C++ dll, you can write bots for the game by modifying the DLL and calling certain methods. This is fine, except I'm not a big fan of C++, so I decided to use named pipes so I can send game events down the pipe to a client program, and send commands back - then the C++ side is just a simple framework for sending a listening on a named pipe.
I have some methods like this on the C# side of things:
private string Read()
{
byte[] buffer = new byte[4096];
int bytesRead = pipeStream.Read(buffer, 0, (int)4096);
ASCIIEncoding encoder = new ASCIIEncoding();
return encoder.GetString(buffer, 0, bytesRead);
}
private void Write(string message)
{
ASCIIEncoding encoder = new ASCIIEncoding();
byte[] sendBuffer = encoder.GetBytes(message);
pipeStream.Write(sendBuffer, 0, sendBuffer.Length);
pipeStream.Flush();
}
What would be the equivalent methods on the C++ side of things?
| After you create the pipe and have pipe handles, you read and write using the ReadFile and WriteFile APIs: see Named Pipe Client in MSDN for a code example.
However, I'm at a loss exactly how to use them.
The "Named Pipe Client" section which I quoted above gives an example of how to use them.
For example, what are the types of all the arguments
The types of all the arguments are defined in MSDN: see ReadFile and WriteFile
how do I convert from a buffer of bytes which I presumably receive from the ReadFile method into a string and vice/versa?
You're sending the string using ASCIIEncoding, so you'll receive a string of non-Unicode characters.
You can convert that to a string, using the overload of the std::string constructor which takes a pointer to a character buffer plus a second parameter which specifies how many characters are in the buffer:
//chBuf and cbRead are as defined in the
//MSDN "Named Pipe Client" example code fragment
std::string received((const char*)chBuf, cbRead);
|
2,063,553 | 2,138,508 | ComboBoxEx32 (CComboBoxEx) keyboard behaviour | I have a WTL application that uses an extended combobox control (the Win32 class ComboBoxEx32) with the CBS_DROPDOWNLIST style. It works well (I can have images against each item in the box) but the keyboard behaviour is different to a normal combobox - pressing a key will not jump to the first item in the combo that starts with that letter.
For example, if I add the strings 'Arnold', 'Bob' and 'Charlie' to the combo, if I then select the combo and press 'B', then 'Bob' won't be selected.
Does anyone know how to make this work? Currently the only idea I can think of is to somehow subclass the 'actual' combobox (I can get the handle to this using the CBEM_GETCOMBOCONTROL message) and process WM_CHARTOITEM. This is a PITA so I thought I'd ask if anyone else has come across this issue before.
| In the end I hooked the combobox control (obtained with CBEM_GETCOMBOCONTROL) and trapped the WM_CHARTOITEM message and performed my own lookup. I can post code if anyone else is interested.
|
2,063,623 | 2,063,743 | Qt's pragma directives | Could anyone point me to an article where the pragma directives available in the Qt environment are discussed?
| AFAIK pragma directives are preprocessor and compiler directives and don't have much to do with Qt itself.
http://gcc.gnu.org/onlinedocs/cpp/Pragmas.html
http://gcc.gnu.org/onlinedocs/gcc/Diagnostic-Pragmas.html
https://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/gcc/pragmas.html
Qt provides some defines, which can be used to do things like enable/disable parts of the source code depending on which platform you are compiling:
http://qt.nokia.com/doc/4.6/qtglobal.html
You can use them like this:
#ifdef Q_WS_MAC
(some mac code goes here)
#endif
#ifdef Q_WS_WIN32
(some windows code goes here)
#endif
|
2,063,933 | 2,063,953 | Listview controls don't appear in dialog | C++ win32 application (not MFC), whose GUI comprises just one dialog box from the resource file [WinMain() calls DialogBox()]. This works fine.
However, adding any "common controls" (listview, tab control, etc) to the dialog and they don't appear when the program is run. Normal controls (textbox, button, radiobox etc) are displayed, just not listviews or tabs.
The controls are marked as Visible=True in the dialog box editor. The program is linked against comctl32.lib, and I even tried putting a copy of comctl32.dll in the same directory as the exe. Yet these listview and tab controls still don't appear in the dialog box. What could be causing that?
| Are you calling InitCommonControlsEx() in your program? Required.
|
2,064,103 | 2,064,823 | Interprocess communication between C# and C++ | I'm writing a bot for a game, which has a C++ API interface (ie. methods in a Cpp dll get called by the game when events occur, the dll can call back methods in the game to trigger actions).
I don't really want to write my bot in C++, I'm a fairly experienced C# programmer but I have no C++ experience at all. So, the obvious solution is to use ipc to send event to a C# program, and send actions back to the C++ one, that way all I need to write in C++ is a basic framework for calling methods and sending events.
What would be the best way to do this? Sample code would be greatly appreciated as I have no particular desire to learn C++ at this point!
| One solution is to create a managed C++ class library with regular __declspec(dllexport) functions which call managed methods in a referenced C# class library.
Example - C++ code file in managed C++ project:
#include "stdafx.h"
__declspec(dllexport) int Foo(int bar)
{
csharpmodule::CSharpModule mod;
return mod.Foo(bar);
}
C# Module (separate project in solution):
namespace csharpmodule
{
public class CSharpModule
{
public int Foo(int bar)
{
MessageBox.Show("Foo(" + bar + ")");
return bar;
}
}
}
Note that I am demonstrating that this is an actual .NET call by using a System.Windows.Forms.MessageBox.Show call.
Sample basic (non-CLR) Win32 console application:
__declspec(dllimport) int Foo(int bar);
int _tmain(int argc, _TCHAR* argv[])
{
std::cout << Foo(5) << std::endl;
return 0;
}
Remember to link the Win32 console application with the .lib file resulting from the build of the managed C++ project.
|
2,064,138 | 2,064,255 | Coding for ease of debugging | I am looking for tips on how to aid my debugging by adding code to my application. An example so that it becomes clearer what I'm after: in order to detect dangling objects held by shared_ptrs I have created a tracker class that allows me to keep track of how many objects are alive and where they were originally created, which is then used like this:
class MyClass {
TRACK_THIS_TYPE(MyClass);
};
boost::shared_ptr<MyClass> myObj(new MyClass);
TRACK_THIS_OBJECT(myObj);
where TRACK_THIS_TYPE(t) is a macro that makes sure that I get an instance count (and count of how many objects have been created) for a class and TRACK_THIS_OBJECT is a macro that stores the file and line where the object was created together with a weak_ptr to the object.
This allows me to detect dangling objects and where they were created. It does not allow me to find out which objects are holding a shared_ptr to my objects, which could be an improvement to the above. I guess one could create something like a TRACK_THIS_PTR(T) macro that would store the file and line where the shared_ptr instance is created.
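For readers wondering what such a TRACK_THIS_TYPE facility might expand to, here is a guessed minimal implementation via a counting CRTP mixin. This is hypothetical, not the asker's actual macro, and it omits the file/line recording:

```cpp
#include <cassert>

// Hypothetical sketch: a CRTP mixin whose per-type statics count live and
// total instances. A TRACK_THIS_TYPE-style macro could simply inject this
// base class (the real version would also record __FILE__/__LINE__).
template<typename T>
struct InstanceCounter
{
    InstanceCounter()                       { ++alive; ++created; }
    InstanceCounter(const InstanceCounter&) { ++alive; ++created; }
    ~InstanceCounter()                      { --alive; }
    static int alive;
    static int created;
};
template<typename T> int InstanceCounter<T>::alive = 0;
template<typename T> int InstanceCounter<T>::created = 0;

struct Tracked : InstanceCounter<Tracked> { };

int created_in_scope()
{
    Tracked a;
    Tracked b(a);                 // copy also counts as a creation
    assert(Tracked::alive == 2);  // both instances currently live
    return Tracked::created;
}
```

At shutdown, a nonzero alive count for a type flags leaked (dangling) instances of that type.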
Another example would be the old
assert(condition && "My descriptive text");
that allows you to put meaningful comments directly into your assert.
Does anyone have any neat c++ tricks to collect stats, automatic stack traces, tracking of objects/pointers/resources, deadlock/starvation or other threading issues detection, making sure that exceptions get handled somewhere, documentation help or similar? Anything goes really, whether it is something that helps prevent errors or something that helps after the fact.
Edit: In addition to the replies to this question, I've received tips about google-glog as a logging utility.
| The following are mainly for ease of debugging after the release.
Stackwalker and similar tools provide an easy way to get usable callstacks on end-user machines, with no debugger being active. Rather similar, Google Breakpad can be used to easily extract mini dumps from crashed processes.
|
2,064,334 | 2,064,394 | How to check that all exceptions thrown have their matching catch clause | In Java the compiler complains about uncaught exceptions.
I use exceptions in C++ and I miss that feature.
Is there a tool out there capable of doing it? maybe a compiler option (but I doubt it)
| A static analyzer can run over your code and warn you if a function might throw an unhandled exception.
For example, good old PC-lint,
or Coverity.
|
2,064,550 | 2,064,565 | C++ : why bool is 8 bits long? | In C++, I'm wondering why the bool type is 8 bits long (on my system), where only one bit is enough to hold the boolean value ?
I used to believe it was for performance reasons, but then on a 32 bits or 64 bits machine, where registers are 32 or 64 bits wide, what's the performance advantage ?
Or is it just one of these 'historical' reasons ?
| Because every C++ data type must be addressable.
How would you create a pointer to a single bit? You can't. But you can create a pointer to a byte. So a boolean in C++ is typically byte-sized. (It may be larger as well. That's up to the implementation. The main thing is that it must be addressable, so no C++ datatype can be smaller than a byte)
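A small check of this, plus the standard way to actually get one-bit flags when space matters: std::bitset packs bits internally while still exposing per-bit access.

```cpp
#include <bitset>
#include <cassert>
#include <cstddef>

// sizeof is measured in bytes and can never be less than 1, so each bool is
// at least one addressable byte. For genuinely bit-packed flags, use a
// packing container such as std::bitset (or std::vector<bool>).
bool bool_occupies_at_least_a_byte()
{
    return sizeof(bool) >= 1;
}

std::size_t bytes_for_64_packed_flags()
{
    std::bitset<64> flags;   // 64 one-bit flags packed together
    flags.set(3);            // individual bits remain accessible
    return sizeof(flags);    // typically 8 bytes, far less than 64 bools
}
```

The trade-off is that the packed bits are no longer individually addressable: bitset hands out proxy objects, not bool*.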
|
2,064,553 | 2,064,599 | What is a good resource to read about stack/heap and symbol table concepts? | Please suggest some website or some book that deals with these topics in really good detail.
I need to have a better understanding of these concepts (in reference to C++):
stack and heaps
symbol tables
implementation of scope rules
implementation of function calls
| You could read the Dragon Book, but I guess it might be too much.
|
2,064,692 | 2,064,722 | How to print function pointers with cout? | I want to print out a function pointer using cout, and found it did not work.
But it worked after I converted the function pointer to (void *), and so does printf with %p, such as
#include <iostream>
using namespace std;
int foo() {return 0;}
int main()
{
int (*pf)();
pf = foo;
cout << "cout << pf is " << pf << endl;
cout << "cout << (void *)pf is " << (void *)pf << endl;
printf("printf(\"%%p\", pf) is %p\n", pf);
return 0;
}
I compiled it with g++ and got results like this:
cout << pf is 1
cout << (void *)pf is 0x100000b0c
printf("%p", pf) is 0x100000b0c
So what does cout do with type int (*)()? I was told that the function pointer is treated as bool, is it true?
And what does cout do with type (void *)?
Thanks in advance.
EDIT: Anyhow, we can observe the content of a function pointer by converting it into (void *) and print it out using cout.
But it does not work for member function pointers; the compiler complains about an illegal conversion. I know that a member function pointer is a rather complicated structure compared to a simple pointer, but how can we observe the contents of a member function pointer?
| There actually is an overload of the << operator that looks something like:
ostream & operator <<( ostream &, const void * );
which does what you expect - outputs in hex. There can be no such standard library overload for function pointers, because there are infinite number of types of them. So the pointer gets converted to another type, which in this case seems to be a bool - I can't offhand remember the rules for this.
Edit: The C++ Standard specifies:
4.12 Boolean conversions
1 An rvalue of arithmetic,
enumeration, pointer, or pointer to
member type can be converted to an
rvalue of type bool.
This is the only conversion specified for function pointers.
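The two behaviours can be observed side by side with string streams — a sketch (the `reinterpret_cast` of a function pointer to `void*` is only conditionally supported by the standard, though it works on mainstream platforms):

```cpp
#include <sstream>
#include <string>

int foo() { return 0; }

// Streaming a function pointer selects operator<<(bool): a non-null
// pointer converts to true, so this yields the string "1".
std::string print_fn_ptr() {
    std::ostringstream os;
    os << &foo;  // boolean conversion kicks in
    return os.str();
}

// Casting to void* selects the const void* overload, which prints
// the actual address, e.g. "0x100000b0c".
std::string print_fn_addr() {
    std::ostringstream os;
    os << reinterpret_cast<void*>(&foo);  // conditionally-supported cast
    return os.str();
}
```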
|
2,064,811 | 2,065,460 | How do I compile a 64-bit version of ffmpeg on Windows? | I need to compile FFmpeg (a 64-bit shared DLL) for Windows.
However I configure it in MinGW, it always produces a 32-bit binary for me.
I tried this already:
./configure --enable-shared --disable-static --enable-memalign-hack --arch=amd64
./configure --enable-shared --disable-static --enable-memalign-hack --arch=x86_64
My guess is that an x86-to-x86_64 cross compiler is missing,
but I just can't find a way to make those 64-bit DLLs.
| See here for a 64-bit version of MinGW. The site even has 64-bit binaries of the ffmpeg library!
UPDATE: Meanwhile it's easier to build on/for Windows, e.g. through ffmpeg-windows-build-helpers scripts.
|
2,065,228 | 2,065,497 | is there a better way to select correct method overload? | Is this really the only way to get the correct address for an instance function:
typedef CBitmap * (CDC::* SelectObjectBitmap)(CBitmap*);
SelectObjectBitmap pmf = (SelectObjectBitmap)&CDC::SelectObject;
First, one has to create a typedef, and then one has to use that to force the compiler to select the correct overloaded method when taking its address?
Is there no syntax that is more natural, and self-contained, such as:
SelectObjectBitmap pmf = &CDC::SelectObject(CBitmap*);
I use ScopeGuard often in my code. And one obvious use is to ensure that any temporary CDC objects are first selected into a given DC, then removed at scope exit, making my code leak free even under exceptional circumstances - and simultaneously cleaning up the code written (stupid multiple exit paths and try/catch and so on to try to handle removing any selected objects from a given CDC).
So a more complete example of what I currently am forced to do looks like:
// get our client rect
CRect rcClient;
GetClientRect(rcClient);
// get the real DC we're drawing on
PAINTSTRUCT ps;
CDC * pDrawContext = BeginPaint(&ps);
// create a drawing buffer
CBitmap canvas;
canvas.CreateCompatibleBitmap(pDrawContext, rcClient.Width(), rcClient.Height());
CDC memdc;
memdc.CreateCompatibleDC(pDrawContext);
//*** HERE'S THE LINE THAT REALLY USES THE TYPEDEF WHICH i WISH TO ELIMINATE ***//
ScopeGuard guard_canvas = MakeObjGuard(memdc, (SelectObjectBitmap)&CDC::SelectObject, memdc.SelectObject(&canvas));
// copy the image to screen
pDrawContext->BitBlt(rcClient.left, rcClient.top, rcClient.Width(), rcClient.Height(), &memdc, rcClient.left, rcClient.top, SRCCOPY);
// display updated
EndPaint(&ps);
It has always struck me as goofy as hell that I need to typedef every overloaded function which I wish to take the address of.
So... is there a better way?!
EDIT: Based on the answers folks have supplied, I believe I have a solution to my underlying need: i.e. to have a more natural syntax for MakeGuard which deduces the correct SelectObject override for me:
template <class GDIObj>
ObjScopeGuardImpl1<CDC, GDIObj*(CDC::*)(GDIObj*), GDIObj*> MakeSelectObjectGuard(CDC & dc, GDIObj * pGDIObj)
{
return ObjScopeGuardImpl1<CDC, GDIObj*(CDC::*)(GDIObj*), GDIObj*>::MakeObjGuard(dc, (GDIObj*(CDC::*)(GDIObj*))&CDC::SelectObject, dc.SelectObject(pGDIObj));
}
Which makes my above code change to:
ScopeGuard guard_canvas = MakeSelectObjectGuard(memdc, &canvas);
//////////////////////////////////////////////////////////
For those who might look here for a non-MFC version of the same thing:
//////////////////////////////////////////////////////////////////////////
//
// AutoSelectGDIObject
// selects a given HGDIOBJ into a given HDC,
// and automatically reverses the operation at scope exit
//
// AKA:
// "Tired of tripping over the same stupid code year after year"
//
// Example 1:
// CFont f;
// f.CreateIndirect(&lf);
// AutoSelectGDIObject select_font(*pDC, f);
//
// Example 2:
// HFONT hf = ::CreateFontIndirect(&lf);
// AutoSelectGDIObject select_font(hdc, hf);
//
// NOTE:
// Do NOT use this with an HREGION. Those don't need to be swapped with what's in the DC.
//////////////////////////////////////////////////////////////////////////
class AutoSelectGDIObject
{
public:
AutoSelectGDIObject(HDC hdc, HGDIOBJ gdiobj)
: m_hdc(hdc)
, m_gdiobj(gdiobj)
, m_oldobj(::SelectObject(m_hdc, gdiobj))
{
ASSERT(m_oldobj != m_gdiobj);
}
~AutoSelectGDIObject()
{
VERIFY(m_gdiobj == ::SelectObject(m_hdc, m_oldobj));
}
private:
const HDC m_hdc;
const HGDIOBJ m_gdiobj;
const HGDIOBJ m_oldobj;
};
//////////////////////////////////////////////////////////
Thanks Everyone who replied & commented! :D
| What you're asking is similar to an earlier question, and the answer I gave there is relevant here as well.
Conditional operator can’t resolve overloaded member function pointers
From section 13.4/1 ("Address of overloaded function," [over.over]):
A use of an overloaded function name without arguments is resolved in certain contexts to a function, a pointer to function or pointer to member function for a specific function from the overload set. A function template name is considered to name a set of overloaded functions in such contexts. The function selected is the one whose type matches the target type required in the context. The target can be
an object or reference being initialized (8.5, 8.5.3),
the left side of an assignment (5.17),
a parameter of a function (5.2.2),
a parameter of a user-defined operator (13.5),
the return value of a function, operator function, or conversion (6.6.3), or
an explicit type conversion (5.2.3, 5.2.9, 5.4).
The overload function name can be preceded by the & operator. An overloaded function name shall not be used without arguments in contexts other than those listed. [Note: any redundant set of parentheses surrounding the overloaded function name is ignored (5.1). ]
In your case, the target from the above list is the third one, a parameter of your MakeObjGuard function. However, I suspect that's a function template, and one of the type parameters for the template is the type of the function pointer. The compiler has a Catch-22. It can't deduce the template parameter type without knowing which overload is selected, and it can't automatically select which overload you mean without knowing the parameter type.
Therefore, you need to help it out. You can either type-cast the method pointer, as you're doing now, or you can specify the template argument type explicitly when you call the function: MakeObjGuard<SelectObjectBitmap>(...). Either way, you need to know the type. You don't strictly need to have a typedef name for the function type, but it sure helps readability.
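The same disambiguation can be seen with a plain overload set. This sketch uses a hypothetical class `Dev` standing in for `CDC`, and shows `static_cast` as an alternative to the C-style cast in the question — an explicit type conversion is one of the resolving contexts listed above:

```cpp
struct Pen {};
struct Brush {};

// Hypothetical stand-in for CDC with an overloaded Select method.
struct Dev {
    Pen*   Select(Pen* p)   { return p; }
    Brush* Select(Brush* b) { return b; }
};

// Naming the exact pointer-to-member type as the cast target tells
// the compiler which overload of &Dev::Select is meant.
using SelectPen = Pen* (Dev::*)(Pen*);

SelectPen pick_pen_overload() {
    return static_cast<SelectPen>(&Dev::Select);
}
```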
|
2,065,342 | 2,065,400 | How to create a CString from an array of chars? | I need to log the content of buf using the LogMethod() below. The problem is that
LogMethod only accepts a "const CString&".
char buf[1024];
strcpy(buf, cErrorMsg);
// need to pass to LogMethod "buf" how do i do that?
log.LogMethod(const CString &);
Thanks
Rev
Reversed
| If you're talking about MFC CString, as far as I can tell, it should have a non-explicit constructor taking TCHAR const *. In other words, the following should work.
log.LogMethod(buf);
If it doesn't, please post the error message.
|
2,065,392 | 2,065,769 | C++ test if input is an double/char | I am trying to get input from the user and need to know a way to have the program recognize whether the input was or was not a double/char. This is what I have right now... but when you type an incorrect type of input:
1) the double test one just loops infinitely
2) the char one won't stop looping even with the correct input
int main () {
double _double = 0;
bool done = true;
while ( done ) {
cout << "Please enter a DOUBLE:\n" << endl;
cin >> _double;
if ( _double > 0 ) { done = false; }
if ( _double < 0 ) { cout << "\nthe number you entered was less than zero\nplease enter a valid number..." << endl; }
if(cin.fail()) { cin.clear(); }
}
done = false;
char _char = ' ';
while ( !done ) {
cout << "Please enter a CHAR" << "\n";
cout << "\t'y' = yes\n\t'n' = no" << endl;
cin >> _char;
if ( _char == 'y' || _char == 'n' ) { done = true; }
if ( ! (_char == 'y' || _char == 'n') ) { cout << "\nyou have entered an invalid symbol... \n" << endl; }
if(cin.fail()) { cin.clear(); }
}
| The problem is that when you read something and cin sees the input can never be a double, it stops reading, leaving the stuff in the buffer that it didn't consume. It will signal failure, which you clear but you won't eat the remaining input that cin didn't eat up. So, the next time the same wrong input is tried to read again, and again...
The problem with the char one is that you have to press the return key to make it process any characters on most terminals (this does not happen if you make your program read from a file, for instance). So if you press y then it won't go out of the read call, until you hit the return key. However, then it will normally proceed and exit the loop.
As others mentioned you are better off with reading a whole line, and then decide what to do. You can also check the number with C++ streams instead of C functions:
bool checkForDouble(std::string const& s) {
std::istringstream ss(s);
double d;
return (ss >> d) && (ss >> std::ws).eof();
}
This reads any initial double value and then any remaining whitespace. If the stream is then at eof (end of the file/stream), it means the string contained only a double.
std::string line;
while(!getline(std::cin, line) || !checkForDouble(line))
std::cout << "Please enter a double instead" << std::endl;
For the char, you can just test for length 1
std::string line;
while(!getline(std::cin, line) || line.size() != 1)
std::cout << "Please enter a single character instead" << std::endl;
If you want to read only 1 char and continue as soon as that char was typed, then you will have to use platform dependent functions (C++ won't provide them as standard functions). Look out for the conio.h file for windows for instance, which has the _getch function for this. On unix systems, ncurses provides such functionality.
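The parsing check above can be packaged so it compiles standalone and can be exercised directly — same logic as the answer's `checkForDouble`:

```cpp
#include <sstream>
#include <string>

// Returns true only if the whole string parses as a single double
// (trailing whitespace allowed), mirroring the answer's check:
// read a double, skip any whitespace, and require end-of-stream.
bool checkForDouble(std::string const& s) {
    std::istringstream ss(s);
    double d;
    return (ss >> d) && (ss >> std::ws).eof();
}
```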
|
2,065,938 | 2,065,961 | Virtual destructor: is it required when not dynamically allocated memory? | Do we need a virtual destructor if my classes do not allocate any memory dynamically ?
e.g.
class A
{
private:
int a;
int b;
public:
A();
~A();
};
class B: public A
{
private:
int c;
int d;
public:
B();
~B();
};
In this case do we need to mark A's destructor as virtual ?
| The issue is not whether your classes allocate memory dynamically. It is if a user of the classes allocates a B object via an A pointer and then deletes it:
A * a = new B;
delete a;
In this case, if there is no virtual destructor for A, the C++ Standard says that your program exhibits undefined behaviour. This is not a good thing.
This behaviour is specified in section 5.3.5/3 of the Standard (here referring to delete):
if the static type of the operand is
different from its dynamic type, the
static type shall be a base class of
the operand’s dynamic type and the
static type shall have a virtual
destructor or the behavior is
undefined.
|
2,066,090 | 2,066,268 | Plotting waveform of the .wav file | I wanted to plot the wave-form of the .wav file for the specific plotting width.
Which method should I use to display correct waveform plot ?
Any Suggestions , tutorial , links are welcomed....
| Basic algorithm:
Find number of samples to fit into draw-window
Determine how many samples should be presented by each pixel
Calculate RMS (or peak) value for each pixel from a sample block. Averaging does not work for audio signals.
Draw the values.
Let's assume that n(number of samples)=44100, w(width)=100 pixels:
then each pixel should represent 44100/100 == 441 samples (blocksize)
for (x = 0; x < w; x++)
draw_pixel(x_offset + x,
y_baseline - rms(&mono_samples[x * blocksize], blocksize));
Stuff to try for different visual appear:
rms vs max value from block
overlapping blocks (blocksize x but advance x/2 for each pixel etc)
Downsampling would not probably work as you would lose peak information.
|
2,066,126 | 2,066,146 | How can I count the number of characters that are printed as output? | Does anyone know how I can print and count the number of characters that I printed?
Say I have a number I am printing via printf or cout. How could I count the actual number of digits I have printed out?
| According to the printf man page, printf returns the number of characters printed.
int count = printf("%d", 1000);
If an output error is encountered, a negative value is returned.
|
2,066,180 | 2,066,195 | the specified module could not be found 0x8007007E | Inside the constructor of a Form when I am stepping through my code, a method declared in the very same form is called. Before I can step inside the method, I get a System.IO.FileNotFoundException with message "The specified module could not be found. (Exception from HRESULT: 0x8007007E)". The member method I try to enter is declared unsafe because it deals with unmanaged C++ code, but like I said I can never step into the method anyways.
Since it sounds like a DLL dependency issue, I ran Dependency Walker. Dependency walker only shows problems with MPR.DLL under SHLWAPI.DLL. The problem method is WNetRestoreConnectionA which I never call. The dependency walker FAQ suggests that this is not a problem http://dependencywalker.com/faq.html. Also, this is not a web application or anything. I am unfortunately stuck with VS2005.
What are some possible reasons for this problem to occur? Any ideas on what I could be missing or how I could debug this problem?
| The error is occurring when the .Net runtime JITs the method you're about to step into, because it couldn't find one of the types used by the method.
What exactly does the method that you can't step into do, and what types / methods does it use?
|
2,066,184 | 2,066,197 | How to use C++ String Streams to append int? | could anyone tell me or point me to a simple example of how to append an int to a stringstream containing the word "Something" (or any word)?
#include <sstream>

stringstream ss;
ss << "Something" << 42;
string s = ss.str();  // "Something42"
For future reference, check this out.
http://www.cplusplus.com/reference/iostream/stringstream/
|
2,066,325 | 2,066,344 | ways to add text to an image | In my C++ program, I want to be able to add some text over an image. I can already load/save the image, I just need to add text to it.
I want to be able to use different fonts, colors, italics, bold, solid/transparent background, have a border around the letters, etc.
I've done this in the past by using the old MFC functions to generate black text over white background, and doing my own processing to add the other effects.
1-Is there an easier way to generate the black text on white background? Using a DLL created from C# or .net would be fine.
2-Is there something which can directly do all I want to do?
| GDI+ is what you are looking for.
In .NET you will find the GDI+ classes in System.Drawing (requires assembly reference to System.Drawing.dll).
Especially the class Graphics and its method Graphics.FromImage will help you.
|