question_id int64 25 74.7M | answer_id int64 332 74.7M | title stringlengths 20 150 | question stringlengths 23 4.1k | answer stringlengths 20 4.1k |
|---|---|---|---|---|
1,859,499 | 1,859,643 | Using C++, how do I read a string of a specific length, from a non-binary file? | The cplusplus.com example for reading text files shows that a line can be read using the getline function. However, I don't want to get an entire line; I want to get only a certain number of characters. How can this be done in a way that preserves character encoding?
I need a function that does something like this:
ifstream fileStream;
fileStream.open("file.txt", ios::in);
resultStream << getstring(fileStream, 10); // read first 10 chars
fileStream.seekg(10, ios::cur); // move to the next item
resultStream << getstring(fileStream, 10); // read 10 more chars
I thought about reading to a char buffer, but wouldn't this change the character encoding?
| I really suspect that there's some confusion here regarding the term "character." Judging from the OP's question, he is using the term "character" to refer to a char (as opposed to a logical "character", like a multi-byte UTF-8 character), and thus for the purpose of reading from a text-file the term "character" is interchangeable with "byte."
If that is the case, you can read a certain number of bytes from disk using ifstream::read(), e.g.
ifstream fileStream;
fileStream.open("file.txt", ios::in);
char buffer[1024];
fileStream.read(buffer, sizeof(buffer));
Reading into a char buffer won't affect the character encoding at all. The exact sequence of bytes stored on disk will be copied into the buffer.
However, it is a different story if you are using a multi-byte character set where each character is variable-length. If characters are not fixed-size, there's no way to read exactly N characters from disk with a single disk read. This is not a limitation of C++, this is simply the reality of dealing with block devices (disks). At the lowest levels of your OS, block devices are addressed in terms of blocks, which in turn are made up of bytes. So you can always read an exact number of bytes from disk, but you can't read an exact number of logical characters from disk, unless each character is a fixed number of bytes. For character-sets like UTF-8 where each character is variable length, you'll have to either read in the entire file, or else perform speculative reads and parse the read buffer after each read to determine if you need to read more.
|
1,859,614 | 1,859,657 | A bot to Access data on grid of a windows application (like a human) | I'm stuck using a VPN application that has some limitations I've found no solution for. For example, multiple users can connect to the VPN server using one username at the same time. To stop that, I have to look at the 'connected vpn clients' list, see whether a username appears more than once, and then disconnect that user.
Here is a photo of the app page:
alt text http://img339.imageshack.us/img339/3937/kerioq.jpg
Is it possible to develop an application that accesses this specific grid in the application, reads its content, and triggers some of those actions (like the disconnect menu item you can see)?
| Any solution which reads the GUI will leave connections open for a short amount of time, and will be brittle against changes in the GUI. You would be better off asking Kerio support or Server Fault whether there is a proper, integrated way to achieve what you want.
It can be done, although C++ is probably the wrong choice; Python + pywinauto might be a better fit.
Use pywinauto to enumerate the components of the window till you work out how to get to the list. Work through the list, finding duplicates, and disconnecting them.
|
1,859,788 | 1,859,810 | Compiling a gnu program without sse3 | I'm compiling an app for a device whose architecture does not support SSE beyond SSE2, and was wondering: is it possible to disable compiling with SSE3 instructions from GNU autoconf generated configure scripts? I know you can turn it off in gcc/g++ with the -mno-sse3 option, but it would be nice if I could turn it off at the configuration level rather than generating a makefile and then manually inserting that compiler flag...
| Sure. Just set the required flags before calling configure:
$ CFLAGS="-mtune=i386" ./configure --enable-this --disable-that ...
You might want to try -march instead if -mtune does the wrong thing (I haven't tested this lately). The -mno-sse3 flag you mention can be passed the same way.
|
1,860,065 | 1,860,080 | Problem with operator == | I am facing some problem with use of operator == in the following c++ program.
#include <iostream>
#include <cstring>
using namespace std;
class A
{
public:
A(char *b)
{
a = b;
}
A(A &c)
{
a = c.a;
}
bool operator ==(A &other)
{
return strcmp(a, other.a);
}
private:
char *a;
};
int main()
{
A obj("test");
A obj1("test1");
if(obj1 == A("test1"))
{
cout<<"This is true"<<endl;
}
}
What's wrong with if(obj1 == A("test1")) line ?? Any help is appreciated.
| bool operator ==( const A &other)
Use a const reference, so that the temporary object constructed in the if statement can be bound to the parameter of operator==.
|
1,860,187 | 1,860,275 | How do I read binary C++ protobuf data using Python protobuf? | The Python version of Google protobuf gives us only:
SerializeAsString()
Where as the C++ version gives us both:
SerializeToArray(...)
SerializeAsString()
We're writing to our C++ file in binary format, and we'd like to keep it this way. That said, is there a way of reading the binary data into Python and parsing it as if it were a string?
Is this the correct way of doing it?
binary = get_binary_data()
binary_size = get_binary_size()
string = None
for i in range(len(binary_size)):
    string += i
message = new MyMessage()
message.ParseFromString(string)
Update:
Here's a new example, and a problem:
message_length = 512
file = open('foobars.bin', 'rb')
eof = False
while not eof:
    data = file.read(message_length)
    eof = not data
    if not eof:
        foo_bar = FooBar()
        foo_bar.ParseFromString(data)
When we get to the foo_bar.ParseFromString(data) line, I get this error:
Exception Type: DecodeError
Exception Value: Too many bytes when decoding varint.
Update 2:
It turns out, that the padding on the binary data was throwing protobuf off; too many bytes were being sent in, as the message suggests (in this case it was referring to the padding).
This padding comes from using the C++ protobuf function SerializeToArray on a fixed-length buffer. To eliminate it, I have used this temporary code:
message_length = 512
file = open('foobars.bin', 'rb')
eof = False
while not eof:
    data = file.read(message_length)
    eof = not data
    string = ''
    for i in range(0, len(data)):
        byte = data[i]
        if byte != '\xcc': # yuck!
            string += data[i]
    if not eof:
        foo_bar = FooBar()
        foo_bar.ParseFromString(string)
There is a design flaw here, I think. I will re-implement my C++ code so that it writes variable-length arrays to the binary file. As advised by the protobuf documentation, I will prefix each message with its binary size so that I know how much to read when I'm opening the file with Python.
| I'm not an expert with Python, but you can pass the result of a file.read() operation into message.ParseFromString(...) without having to build a new string type or anything.
|
1,860,404 | 1,860,431 | A C++ library for Arrays, Matrix, Vector, and classical linear algebra operations | Which library do you use for N-dimensional arrays?
I use blitz++ at work and I really dislike some aspects of it. Some of them are even dangerous: the need to resize before using operator=, the fact that A(Range::all(), Range::all()) throws for a (0,0) matrix, etc. And the linear algebra operations have to be done via CLAPACK.
I used and loved Eigen. I appreciate its all-in-header implementation, the C++ syntactic sugar, and the presence of all the linear algebra operations I need (matrix multiplication, solving linear systems, Cholesky...)
What are you using?
| boost::array and also boost::multi_array (Boost.MultiArray). There's also a pretty good linear algebra package in Boost called uBLAS.
|
1,860,461 | 1,860,704 | Why is `i = ++i + 1` unspecified behavior? | Consider the following C++ Standard ISO/IEC 14882:2003(E) citation (section 5, paragraph 4):
Except where noted, the order of
evaluation of operands of individual
operators and subexpressions of individual
expressions, and the order in
which side effects take place, is
unspecified. 53) Between the previous
and next sequence point a scalar
object shall have its stored value
modified at most once by the
evaluation of an expression.
Furthermore, the prior value shall be
accessed only to determine the value
to be stored. The requirements of this
paragraph shall be met for each
allowable ordering of the
subexpressions of a full expression;
otherwise the behavior is undefined.
[Example:
i = v[i++]; // the behavior is unspecified
i = 7, i++, i++; // i becomes 9
i = ++i + 1; // the behavior is unspecified
i = i + 1; // the value of i is incremented
—end example]
I was surprised that i = ++i + 1 gives an undefined value of i.
Does anybody know of a compiler implementation which does not give 2 for the following case?
int i = 0;
i = ++i + 1;
std::cout << i << std::endl;
The thing is that operator= has two args. First one is always i reference.
The order of evaluation does not matter in this case.
I do not see any problem except C++ Standard taboo.
Please, do not consider such cases where the order of arguments is important to evaluation. For example, ++i + i is obviously undefined. Please, consider only my case
i = ++i + 1.
Why does the C++ Standard prohibit such expressions?
| You make the mistake of thinking of operator= as a two-argument function, where the side effects of the arguments must be completely evaluated before the function begins. If that were the case, then the expression i = ++i + 1 would have multiple sequence points, and ++i would be fully evaluated before the assignment began. That's not the case, though. What's being evaluated is the intrinsic assignment operator, not a user-defined operator. There's only one sequence point in that expression.
The result of ++i is evaluated before the assignment (and before the addition operator), but the side effect is not necessarily applied right away. The result of ++i + 1 is always the same as i + 2, so that's the value that gets assigned to i as part of the assignment operator. The result of ++i is always i + 1, so that's what gets assigned to i as part of the increment operator. There is no sequence point to control which value should get assigned first.
Since the code is violating the rule that "between the previous and next sequence point a scalar object shall have its stored value modified at most once by the evaluation of an expression," the behavior is undefined. Practically, though, it's likely that either i + 1 or i + 2 will be assigned first, then the other value will be assigned, and finally the program will continue running as usual — no nasal demons or exploding toilets, and no i + 3, either.
|
1,860,615 | 1,860,953 | Code with undefined behavior in C# | In C++ there are a lot of ways that you can write code that compiles, but yields undefined behavior (Wikipedia). Is there something similar in C#? Can we write code in C# that compiles, but has undefined behavior?
| As others have mentioned, pretty much anything in the "unsafe" block can yield implementation-defined behaviour; abuse of unsafe blocks allows you to change the bytes of code that make up the runtime itself, and therefore all bets are off.
The division int.MinValue/-1 has an implementation-defined behaviour.
Throwing an exception and never catching it causes implementation-defined behaviour -- it may terminate the process, start a debugger, and so on.
There are a number of other situations in C# where we are forced to emit code which has implementation-determined behaviour. For example, this situation:
https://learn.microsoft.com/en-us/archive/blogs/ericlippert/odious-ambiguous-overloads-part-two
However, the situations in which a safe, well-behaved C# program has implementation-defined behaviour should be quite rare.
|
1,860,705 | 1,860,726 | On declarative programming in C++ | Often I face the problem of mapping the parameter space of one API onto the parameter space of another one. Often I see this solved by nested nested nested ... switch statements.
And I was wondering if there would happen to be a library or a technique that allows you to 'declare' the mapping instead of 'program' it.
A trivial example would consist of merging the values of two enumerates into one:
namespace sourceAPI {
    struct A { enum e { A1, A2, A3 }; };
    struct B { enum e { B1, B2 }; };
}
namespace targetAPI {
    struct AB { enum e { A1B1, A1B2, A2B1, A2B2, A3B1, A3B2 }; };
}
In which the mapping is often done like
switch( a ) {
    case A::A1: switch( b ) {
        case B::B1: return AB::A1B1;
        case B::B2: return AB::A1B2;
        ....
    }
    ....
}
And this mapping still needs a 'reverse' switch, too.
But I would rather like something 'dense' like
declare( source( A::A1, B::B1 ), target( AB::A1B1 ) );
declare( source( A::A1, B::B2 ), target( AB::A1B2 ) );
....
Has anyone seen such a technique or framework or library?
| You can use Boost.Bimap, which provides a bidirectional mapping between two types.
It has a bit of runtime overhead (generally, roughly the same amount of overhead you would get by using a pair of std::maps for this purpose, which isn't a whole lot).
It does allow you to define mappings about as densely as your example, though; generally you just add pairs to the map, one pair at a time.
|
1,860,783 | 1,860,807 | strange double to int conversion behavior in c++ | The following program shows the weird double to int conversion behavior I'm seeing in c++:
#include <stdlib.h>
#include <stdio.h>
int main() {
double d = 33222.221;
printf("d = %9.9g\n",d);
d *= 1000;
int i = (int)d;
printf("d = %9.9g | i = %d\n",d,i);
return 0;
}
When I compile and run the program, I see:
g++ test.cpp
./a.out
d = 33222.221
d = 33222221 | i = 33222220
Why is i not equal to 33222221?
The compiler version is GCC 4.3.0
| Floating point representation is almost never precise (only in special cases). Every programmer should read this: What Every Computer Scientist Should Know About Floating-Point Arithmetic
In short - your number is probably 33222220.99999999999999999999999999999999999999999999999999999999999999998 (or something like that), which becomes 33222220 after truncation.
|
1,860,796 | 1,860,862 | Your thoughts on "Large Scale C++ Software Design" | Reading the reviews at Amazon and ACCU suggests that John Lakos' book, Large-Scale C++ Software Design may be the Rosetta Stone for modularization.
At the same time, the book seems to be really rare: not many have ever read it, and no pirate electronic copies are floating around.
So, what do you think?
| I've read it, and consider it a very useful book on some practical issues with large C++ projects. If you have already read a lot about C++, and know a bit about physical design and its implications, you may not find that much which is terribly "new" in this book.
On the other hand, if your build takes 4 hours, and you don't know how to whittle it down, get a copy, read it, and take it all in.
You'll start writing physically better code quite quickly.
[Edit]
If you want to start somewhere, and can't immediately get a hold of the book, I found the Games From Within series on physical structure useful even after reading Large Scale C++ design.
|
1,860,955 | 1,862,211 | Why does 'unspecified_bool' for classes which have intrinsic conversions to their wrappered type fail? | I have recently read the safe bool idiom article. I had seen this technique used a few times, but had never understood quite why it works, or exactly why it was necessary (probably like many, I get the gist of it: simply using operator bool () const allowed some implicit type conversion shenanigans, but the details were for me always a bit hazy).
Having read this article, and then looked at a few of its implementations in boost's shared_ptr.hpp, I thought I had a handle on it. But when I went to implement it for some of the classes that we've borrowed and extended or developed over time to help manage working with Windows APIs, I found that my naive implementation fails to work properly (the source compiles, but the usage generates a compile-time error of no valid conversion found).
Boost's implementations are littered with conditions for various compilers level of support for C++. From using the naive operator bool () const, to using a pointer to member function, to using a pointer to member data. From what I gather, pointer to member data is the most efficient for compilers to handle IFF they handle it at all.
I'm using MS VS 2008 (MSVC++9). And below is a couple of implementations I've tried. Each of them results in Ambiguous user-defined-conversion or no operator found.
template<typename HandlePolicy>
class AutoHandleTemplate
{
public :
typedef typename HandlePolicy::handle_t handle_t;
typedef AutoHandleTemplate<HandlePolicy> this_type;
{details omitted}
handle_t get() const { return m_handle; }
operator handle_t () const { return m_handle; }
#if defined(NAIVE)
// The naive implementation does compile (and run) successfully
operator bool () const { return m_handle != HandlePolicy::InvalidHandleValue(); }
bool operator ! () const { return m_handle == HandlePolicy::InvalidHandleValue(); }
#elif defined(FUNC_PTR)
// handle intrinsic conversion to testable bool using unspecified_bool technique
typedef handle_t (this_type::*unspecified_bool_type)() const;
operator unspecified_bool_type() const // never throws
{
return m_handle != HandlePolicy::InvalidHandleValue() ? &this_type::get : NULL;
}
#elif defined(DATA_PTR)
typedef handle_t this_type::*unspecified_bool_type;
operator unspecified_bool_type() const // never throws
{
return m_handle != HandlePolicy::InvalidHandleValue() ? &this_type::m_handle : NULL;
}
#endif
private :
handle_t m_handle;
{details omitted}
};
And here's a snippet of code that either works (naive implementation), or errors (either of the unspecified_bool techniques, above):
// hModule is an AutoHandleTemplate<ModuleHandlePolicy>
if (!hModule)
and:
if (hModule)
I have already tried enabling the operator! in all cases - but although the first case then works, the second fails to compile (ambiguous).
This class seems to me to be so very like a smart_ptr (or auto_ptr). It should support implicit conversion to its underlying handle type (HMODULE) in this case, but it should also handle if (instance) and if (!instance). But if I define both the operator handle_t and the unspecified_bool technique, I get errors.
Can someone please explain to me why that is so, and perhaps suggest a better approach? (or should I be content with the naive approach, at least until C++0x is complete and explicit operators are implemented in my compiler)?
EDIT:
It seems that the answer may well be that if I define an implicit conversion to an integral, that C++ will use that conversion for any if (instance) type expressions. And that, at least for the above class, the only reason to define any other operators (operator bool) is to explicitly override using the implicit integral conversion to something else (in the above case, forcing it to be a comparison to INVALID_HANDLE_VALUE instead of the implicit NULL).
And using the unspecified_bool technique only really makes sense when you're not providing an integral conversion operator?
| AutoHandleTemplate<ModuleHandlePolicy> hModule( ... );
HMODULE raw_handle = hModule; // for this line to work,
// AutoHandleTemplate<ModuleHandlePolicy> must be
// implicitly convertible to its raw handle type - HMODULE.
If a smart pointer can be implicitly converted to its raw handle type, and the raw handle type can itself be used in a boolean test, like:
HMODULE the_raw_handle = ...;
if ( the_raw_handle ) {} // this line is ok
For those smart pointers there is no need (and you should not) define conversions to bool, void* or a safe-bool type; otherwise you get ambiguity.
operator bool(), operator void*() and the safe-bool conversion are for smart pointers which cannot be implicitly converted to their raw handle type, or whose raw handle cannot be used in a boolean context.
Try this code :
template<typename HandlePolicy>
class AutoHandleTemplate
{
public :
typedef typename HandlePolicy::handle_t handle_t;
typedef AutoHandleTemplate<HandlePolicy> this_type;
{details omitted}
operator handle_t () const {
return m_handle==HandlePolicy::InvalidHandleValue()? 0: m_handle;
}
// no more conversion functions
private :
handle_t m_handle;
{details omitted}
};
|
1,861,294 | 1,861,337 | How to Calculate Execution Time of a Code Snippet in C++ | I have to compute execution time of a C++ code snippet in seconds. It must be working either on Windows or Unix machines.
I use the following code to do this (with the appropriate include beforehand):
clock_t startTime = clock();
// some code here
// to compute its execution duration in runtime
cout << double( clock() - startTime ) / (double)CLOCKS_PER_SEC<< " seconds." << endl;
However for small inputs or short statements such as a = a + 1, I get "0 seconds" result. I think it must be something like 0.0000001 seconds or something like that.
I remember that System.nanoTime() in Java works pretty well in this case. However I can't get same exact functionality from clock() function of C++.
Do you have a solution?
| You can use this function I wrote. You call GetTimeMs64(), and it returns the number of milliseconds elapsed since the unix epoch using the system clock - just like time(NULL), except in milliseconds.
It works on both windows and linux; it is thread safe.
Note that the granularity is 15 ms on windows; on linux it is implementation dependent, but is usually 15 ms as well.
#ifdef _WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#include <ctime>
#endif
/* Remove if already defined */
typedef long long int64; typedef unsigned long long uint64;
/* Returns the amount of milliseconds elapsed since the UNIX epoch. Works on both
* windows and linux. */
uint64 GetTimeMs64()
{
#ifdef _WIN32
/* Windows */
FILETIME ft;
LARGE_INTEGER li;
/* Get the number of 100-nanosecond intervals elapsed since January 1, 1601 (UTC) and copy it
* to a LARGE_INTEGER structure. */
GetSystemTimeAsFileTime(&ft);
li.LowPart = ft.dwLowDateTime;
li.HighPart = ft.dwHighDateTime;
uint64 ret = li.QuadPart;
ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
ret /= 10000; /* From 100 nano seconds (10^-7) to 1 millisecond (10^-3) intervals */
return ret;
#else
/* Linux */
struct timeval tv;
gettimeofday(&tv, NULL);
uint64 ret = tv.tv_usec;
/* Convert from micro seconds (10^-6) to milliseconds (10^-3) */
ret /= 1000;
/* Adds the seconds (10^0) after converting them to milliseconds (10^-3) */
ret += (tv.tv_sec * 1000);
return ret;
#endif
}
|
1,861,506 | 1,862,286 | Prevent Modal Dialog on win32 process crash | We have a legacy build infrastructure for nightly builds (implemented in Perl) to compile, link and unit tests our applications/plugins. On Windows, if the unit testing process crashes, this pops up a Modal Dialog which "locks" our build farm.
Is there a way (win32 API call, system config, env var, something...) to disable this behavior to have the child process terminate immediately on crashes, with no Modal Dialog and a non-zero exit status instead?
Thanks, --DD
PS: We compile with SEH (Structured Exception Handling) on Windows, to be able to "catch" crashes using catch (...), therefore avoiding this issue most of the time, but sometimes that's not enough, since of course some crashes are not recoverable (if they corrupted the stack, for example).
| Depending on who's throwing the dialog, you may have to combine multiple approaches.
SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX);
...will shut up one set of dialogs.
|
1,861,541 | 1,861,574 | Array of char or std::string for a public library? | my question is simple:
Should I use array of char eg:
char *buf, buf2[MAX_STRING_LENGTH]
etc., or should I use std::string in a library that will be used by other programmers, who may use it on any OS and compiler of their choice?
Considering performance and portability...
From my point of view, std::string is easier, and performance is equal or the difference is way too small to justify not using std::string. About portability I don't know; I guess that as it is standard, there shouldn't be any compiler that compiles C++ without it, at least not any important compiler.
EDIT:
The library will be compiled on 3 major OSes and, theoretically, distributed as a lib.
Your thoughts?
ty,
Joe
| Depends on how this library will be used in conjunction with client code. If it will be linked in dynamically and you have a set of APIs exposed for the client -- you are better off using null terminated byte strings (i.e. char *) and their wide-character counterparts. If you are talking about using them within your code, you certainly are free to use std::string. If it is going to be included in source form -- std::string works fine.
|
1,861,568 | 1,861,595 | .c_str() weirdness? Data changes without rhyme or reason? | I have this simple function:
const wchar_t *StringManager::GetWCharTStar(int stringId)
{
std::wstring originalString = StringManager::GetString(stringId);
const wchar_t *retStr = originalString.c_str();
return retStr;
}
At the second line of that function, I have the correct wchar_t*. However, when I go to return, the data switches to garbage data. There are no functions in between. What gives?!
| originalString is allocated on the stack. The .c_str() method just returns a pointer to some contiguous internal memory of the wstring object. When the function returns, originalString goes out of scope and is destroyed, therefore the pointer value you return points to deleted memory.
If you need to do this, you should make a copy of the data into memory you allocate with new or malloc(), and then the caller must delete/free() that memory.
|
1,861,581 | 1,861,623 | Does defining a function inside a header always make the compiler treat it as inline? | I just learned that defining a c++ function inside a class's header file make the function inline. But I know that putting the inline keyword next to a function is only a suggestion and the compiler wont necessarily follow it. Is this the same for header defined c++ functions and is there a difference in behavior between a standalone c++ function and a c++ function that is part of a class?
| "Defining a C++ function inside a class's header file makes the function inline"
That's not true. Defining a function (that is to say, providing the body of the function instead of just a declaration) inside a class definition makes it inline. By "makes it inline", I mean it's the same as giving it the inline keyword. But class definitions don't have to be in headers, and headers can contain other things than class definitions.
So in this example, the function foo is implicitly inline. The function bar is not implicitly inline:
struct Foo {
void foo() {}
void bar();
};
void Foo::bar() {}
"putting the inline keyword next to a function is only a suggestion and the compiler wont necessarily follow it"
inline has two effects. One of them is a hint to the compiler which it can ignore. The other is not optional, and always has its effect. The "hint" is that the compiler is advised to replace calls to that function with a copy of the code for the function itself.
The guaranteed effect is that an inline function can be defined in multiple translation units, and those be linked together, without a multiple definition error, and all but one of the copies is removed by the linker. So, if the example above appears in a header file which is shared between multiple translation units, bar needs to be explicitly marked inline. Otherwise, the linker will discover multiple definitions of bar, which is not allowed.
Despite the name, inline in C++ is mostly about the second, compulsory effect, not the first, optional one. Modern optimising compilers have their own ideas about which calls should be inlined, and don't pay a whole lot of attention to inline when making that decision. For instance I've seen it have an effect in gcc at moderate optimisation levels, but at low levels approximately nothing is inlined, and at high levels approximately everything is (if the definition is available when the call is compiled) unless it makes the function too big.
Whether a function is defined in a header or in a cpp file has absolutely no effect on anything by itself. You can safely imagine that what #include does is copy and paste the header file into the cpp file in the preprocessor, before the compiler ever sees it. If a function is defined in the same translation unit as a call to it, then the function code is available to be inlined by the compiler. If they're in different translation units, then the code is not available and the call can only be inlined by the linker, with whole-program optimisation or similar. A "translation unit" more or less means, "a cpp file, after all the headers have been copy and pasted into it".
|
1,861,654 | 1,861,790 | Is there a way to prevent a header defined c++ function from being treated as inlined | I am making a Qt application and as I was coding, I took the habit of defining my slots in the header. I found it was easier for me to develop that way though I still define normal functions in the .cpp (unless the function is really small). But now there are some worries from my colleague that putting these in the header is bad practice because the fact of defining them in the header makes them inline so I am looking into the matter to understand everything that is going on. This is the reason I was given:
"Even in-lined functions (other than as required by classes) is a highly debatable practice. In theory, it creates faster, but larger code (avoids function calls and returns by duplicating code). However, several people have noticed that often using in-lining actually creates slower code. The reason why is because it can cause the code to get larger and exceed the size of what fits in one or more caches used at run-time. As a result it causes portions of the function to go in and out of cache every pass through some loop and the cache misses and subsequent reloads are far more costly than a function call to something already in another cache page. It’s an interesting situation and one that can’t be predicted, only observed by trial and error."
| Your colleague needs to check up on the meaning of inlining in C++.
There are two meanings of the word, and it's important to keep them separated:
According to the C++ standard, a function is inline if it is marked with the inline keyword, or defined inside the class definition.
The only required effect of this, is to disable the One Definition Rule -- that is, to make it legal for the definition to be seen in multiple translation units without producing a linker error. Basically, it allows you to put the full definition in a header file
Then there is the "inline" optimization, which consists of taking the function body, and inserting it instead of a function call.
These meanings are almost entirely orthogonal. A function can be inlined by the compiler whether or not you, the programmer, marked it as inline. (Although it is harder and less common for a compiler to be able to inline if the function is called in a different translation unit than the one in which it was defined) A function marked inline in C++ may or may not be inlined by the compiler. The compiler tries to estimate the possible benefit of this, based on code size, how frequently the function is called, number of call sites and such heuristics. The result is that the compiler is pretty good at determining when the inlining optimization is worthwhile, and your best bet is usually to leave it to do its thing alone.
You should simply mark functions as inline when 1) it is convenient for you, and 2) you want to ensure that the compiler has the option of applying the inlining optimization.
But you're not forcing the compiler to inline anything. You're merely arranging the code so that it can, if it chooses to do so, inline the function call.
|
1,861,679 | 1,861,695 | Is it possible to define multiple classes in just one .cpp file? | I'm working on a project for school and the instructor insists that all code go into one .cpp file (for easier grading on his part). I would like to define multiple classes within this file. Will I run into any problems in doing this?
| There is no rule you have to follow here (unlike in Java). You're free to place and name classes in whatever files you like (aside from the suffix).
However, it's another question whether that is good practice (it's not!).
|
1,861,837 | 1,861,977 | Workarounds for the forward-declared class enumeration problem? | I am maintaining a large code base and am using a combination of forward declarations and the pImpl idiom to keep compile times down and reduce dependencies (and it works really well).
The problem I have is with classes that contain public enumerations. These enumerations cannot be forward declared so I am left with no option but to include the class header. For example:
// Foo.h
class Foo
{
public:
enum Type
{
TYPE_A,
TYPE_B,
};
...
};
// Bar.h
#include "Foo.h" // For Foo::Type
class Bar
{
public:
void someFunction(Foo::Type type);
...
};
So, I'm looking for ways to avoid this and can only think of the following:
Move the class enumerations to a separate 'types' namespace
// FooTypes.h
namespace FooTypes
{
enum Type
{
TYPE_A,
TYPE_B,
};
}
// Bar.h
#include "FooTypes.h"
class Bar
{
public:
void someFunction(FooTypes::Type type);
...
};
Use int instead of the enumeration
// Bar.h
class Bar
{
public:
void someFunction(int type);
...
};
What have I missed? How do other people get around this limitation (not being able to forward-declare enumerations)?
| Put the enumeration into its own type:
struct FooEnum
{
enum Type
{
TYPE_A,
TYPE_B,
};
};
Then Foo and Bar can both access FooEnum::Type and Bar.h doesn't need to include Foo.h.
|
1,861,912 | 1,861,933 | Should "delete this" be called from within a member method? | I was just reading this article and wanted SO folks' advice:
Q: Should delete this; be called from within a member method?
| Normally this is a bad idea, but it's occasionally useful.
It's perfectly safe as long as you don't use any member variables after you delete, and as long as clients calling this method understand it may delete the object.
A good example of when this is useful is if your class employs reference counting:
void Ref() {
m_References++;
}
void Deref() {
m_References--;
if (m_References == 0) {
delete this;
}
}
|
1,861,964 | 1,862,017 | Qt QString cloning Segmentation Fault | I'm building my first Qt app using Qt Creator, and everything was going fine until I started getting a strange SIGSEGV from a line apparently harmless.
This is the error:
Program received signal SIGSEGV, Segmentation fault.
0x0804e2fe in QBasicAtomicInt::ref (this=0x0) at /usr/lib/qt/include/QtCore/qatomic_i386.h:120
By backtracing the exception on gdb, I found that a simple getter is passing a NULL pointer to the clone constructor when I return my attribute.
Backtrace output:
(gdb) backtrace
#0 0x0804e2fe in QBasicAtomicInt::ref (this=0x0) at /usr/lib/qt/include/QtCore/qatomic_i386.h:120
#1 0x0804eb1b in QString (this=0xbfcc8e48, other=@0xbfcc8e80) at /usr/lib/qt/include/QtCore/qstring.h:712
#2 0x0805715e in Disciplina::getId (this=0xbfcc8e7c) at disciplina.cpp:13
[...]
Inspecting the pointer passed to the QString constructor:
(gdb) x 0xbfcc8e80
0xbfcc8e80: 0x00000000
And this is disciplina.cpp:13
QString Disciplina::getId()
{
return id;
}
So, all points towards the copy constructor of QString receiving an empty pointer, which makes no sense to me. id was declared as a private QString.
private:
QString id;
Well, I have no clue of what could be going on, and my debugging skills only go so far, so if anyone could throw an idea I'd be really glad.
Thanks.
edit
More code, as requested.
disciplina.h
#ifndef DISCIPLINA_H
#define DISCIPLINA_H
#include <QString>
#include <QMap>
#include "curso.h"
#include "turma.h"
class Curso;
class Turma;
class Disciplina
{
private:
unsigned short int serie;
QString id;
QString nome;
Curso* curso;
QMap<unsigned int, Turma*> turmas;
public:
Disciplina(QString id, Curso* curso, QString nome, unsigned short int serie);
QString getId();
const Curso getCurso();
QString getNome();
void setNome(QString nome);
void addTurma(Turma* t, unsigned int id);
QMap<unsigned int, Turma*> getTurmas();
};
#endif // DISCIPLINA_H
disciplina.cpp
#include "disciplina.h"
Disciplina::Disciplina(QString id, Curso* curso, QString nome, unsigned short int serie)
{
this->id = id;
this->curso = curso;
this->nome = nome;
this->serie = serie;
}
QString Disciplina::getId()
{
return id;
}
const Curso Disciplina::getCurso()
{
const Curso c(*this->curso);
return c;
}
QString Disciplina::getNome()
{
return this->nome;
}
void Disciplina::setNome(QString nome)
{
this->nome = nome;
}
void Disciplina::addTurma(Turma* t, unsigned int id)
{
this->turmas.insert(id, t);
}
QMap<unsigned int, Turma*> Disciplina::getTurmas()
{
return this->turmas;
}
Caller function (I broke it down for easier debugging)
Disciplina*
MainWindow::getSelectedDisciplina()
{
if(ui->disciplinaTurma->count() > 0 && currentCurso)
{
QMap<QString, Disciplina*> qm(currentCurso->getDisciplinas());
QString key = ui->disciplinaTurma->itemText(ui->disciplinaTurma->currentIndex());
Disciplina* d = qm[key];
QMessageBox::information(this, d->getId(), d->getNome());
return d;
}
else
return NULL;
}
Solved
The Disciplina object inserted into the map was getting out of scope and therefore deleted. Since, as Jacinto pointed out, Map created a vanilla value when you try to access a nonexistent key, it looked like the object was there.
Thank you both Jacinto and sth for your help.
| In C++'s std::map, if an element doesn't exist when you try to access it by its key with operator[], the map just creates one for you. You are attempting to do the same thing here, and since QMap works the same way, this is what is causing your segfault.
What you should be doing is testing for the key's presence in the map before accessing it.
edit: for the C++ purists, please let me know if I have that right. I know in practice it's safer to test before accessing it, but I don't know if the phraseology of "it creates one for you" is a very good way to put it. It might just return the space in memory where such a value would be; I don't know if it would actually call the default constructor.
|
1,862,214 | 1,862,227 | Why are operators sometimes stand-alone and sometimes class methods? | Why is it that sometimes an operator overload is defined as a method in the class, like
MyClass& MyClass::operatorFoo(MyClass& other) { .... return *this; }
and sometimes it's a separate function, like
MyClass& operatorFoo(MyClass& first, MyClass& bar)
Are they equivalent? What rules govern when you do it one way and when you do it the other?
| If you want to be able to do something like 3 + obj you have to define a free (non-member) operator.
If you want to make your operators protected or private, you have to make them methods.
Some operators cannot be free functions, e.g., operator->.
This is already answered here:
difference between global operator and member operator
|
1,862,287 | 1,866,779 | Optimizing a pinhole camera rendering system | I'm making a software rasterizer for school, and I'm using an unusual rendering method instead of traditional matrix calculations. It's based on a pinhole camera. I have a few points in 3D space, and I convert them to 2D screen coordinates by taking the distance between it and the camera and normalizing it
Vec3 ray_to_camera = (a_Point - plane_pos).Normalize();
This gives me a directional vector towards the camera. I then turn that direction into a ray by placing the ray's origin on the camera and performing a ray-plane intersection with a plane slightly behind the camera.
Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);
float dot = ray_to_camera.GetDotProduct(m_Direction);
if (dot < 0)
{
float time = (-m_ScreenDistance - plane_pos.GetDotProduct(m_Direction)) / dot;
// if time is smaller than 0 the ray is either parallel to the plane or misses it
if (time >= 0)
{
// retrieving the actual intersection point
a_Point -= (m_Direction * ((a_Point - plane_pos).GetDotProduct(m_Direction)));
// subtracting the plane origin from the intersection point
// puts the point at world origin (0, 0, 0)
Vec3 sub = a_Point - plane_pos;
// the axes are calculated by saying the directional vector of the camera
// is the new z axis
projected.x = sub.GetDotProduct(m_Axis[0]);
projected.y = sub.GetDotProduct(m_Axis[1]);
}
}
This works wonderful, but I'm wondering: can the algorithm be made any faster? Right now, for every triangle in the scene, I have to calculate three normals.
float length = 1 / sqrtf(GetSquaredLength());
x *= length;
y *= length;
z *= length;
Even with a fast reciprocal square root approximation (1 / sqrt(x)) that's going to be very demanding.
My questions are thus:
Is there a good way to approximate the three normals?
What is this rendering technique called?
Can the three vertex points be approximated using the normal of the centroid? ((v0 + v1 + v2) / 3)
Thanks in advance.
P.S. "You will build a fully functional software rasterizer in the next seven weeks with the help of an expert in this field. Begin." I ADORE my education. :)
EDIT:
Vec2 projected;
// the plane is behind the camera
Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);
float scale = m_ScreenDistance / (m_Position - plane_pos).GetSquaredLength();
// times -100 because of the squared length instead of the length
// (which would involve a squared root)
projected.x = a_Point.GetDotProduct(m_Axis[0]).x * scale * -100;
projected.y = a_Point.GetDotProduct(m_Axis[1]).y * scale * -100;
return projected;
This returns the correct results, however the model is now independent of the camera position. :(
It's a lot shorter and faster though!
| It is difficult to understand exactly what your code is doing, because it seems to be performing a lot of redundant operations! However, if I understand what you say you're trying to do, you are:
finding the vector from the pinhole to the point
normalizing it
projecting backwards along the normalized vector to an "image plane" (behind the pinhole, natch!)
finding the vector to this point from a central point on the image plane
doing dot products on the result with "axis" vectors to find the x and y screen coordinates
If the above description represents your intentions, then the normalization should be redundant -- you shouldn't have to do it at all! If removing the normalization gives you bad results, you are probably doing something slightly different from your stated plan... in other words, it seems likely that you have confused yourself along with me, and that the normalization step is "fixing" it to the extent that it looks good enough in your test cases, even though it probably still isn't doing quite what you want it to.
The overall problem, I think, is that your code is massively overengineered: you are writing all your high-level vector algebra as code to be executed in the inner loop. The way to optimize this is to work out all your vector algebra on paper, find the simplest expression possible for your inner loop, and precompute all the necessary constants for this at camera setup time. The pinhole camera specs would only be the inputs to the camera setup routine.
Unfortunately, unless I miss my guess, this should reduce your pinhole camera to the traditional, boring old matrix calculations. (ray tracing does make it easy to do cool nonstandard camera stuff -- but what you describe should end up perfectly standard...)
|
1,862,360 | 1,862,367 | Error making C++ functions virtual | The error states:
"error: virtual outside class definition"
Cpp members in question:
virtual void Account::creditBalance(double plus)
{
if(plus > 0)
balance += plus;
else
cout << "Cannot credit negative.";
}
virtual void Account::debitBalance(double minus)
{
if(minus <= balance)
balance -= minus;
else
cout << "Debit amount exceeded account balance.";
}
The rest of the code is here (although I'm not sure it's necessary): http://pastebin.com/de5e9f77
| Simply strip the virtual off of the method definition. The virtual keyword is only necessary at the declaration.
Example:
class Account {
public:
virtual void creditBalance(double);
virtual void debitBalance(double);
};
void Account::creditBalance(double plus)
{
if(plus > 0)
balance += plus;
else
cout << "Cannot credit negative.";
}
void Account::debitBalance(double minus)
{
if(minus <= balance)
balance -= minus;
else
cout << "Debit amount exceeded account balance.";
}
|
1,862,453 | 1,862,592 | portable signed/unsigned byte cast,C++ | I am using signed to unsigned byte(int8_t) cast to pack byts.
uint32_t(uint8_t(byte)) << n
This works using GCC on Intel Linux. Is that portable for other platforms/compilers, for example PowerPC?
is there a better way to do it? using bitset is not possible in my case.
I am using stdint via boost
| If you are using boost/cstdint.hpp from the Boost Integer library, then yes, the typedefs are portable (cross-platform.) The boost/cstdint.hpp header is meant to implement C99 stdint.h functionality in C++.
From the Boost documentation:
The header
provides the typedef's useful for
writing portable code that requires
certain integer widths. All typedef's
are in namespace boost.
|
1,862,645 | 1,862,664 | should I put .hpp and .cpp in the same folder or in ./src and ./hdr respectively? | It seems that separating them into src and hdr is a popular solution. However, this is not the case in NetBeans: by default, NetBeans will put both hpp and cpp files in the same directory.
questions:
should I separate them or not? why?
if 1 is yes, is there any way to automatically set this in Netbeans?
| If you plan to distribute a library as binaries and headers, it might be easier to have the headers in a separate directory to begin with.
|
1,862,658 | 1,862,673 | what's the difference between hpp and hxx? | for gcc they should be the same, right? which one of them is more popular , i am now preparing a project from scratch and i would like to pick one among these 2.
thanks
| In C++, the file extension doesn't actually matter. The use of .h, .hpp, .hxx, or no file extension are all by convention.
The standard library uses no file extension for its header files. Many projects, including Boost, use .hpp. Many projects use .h. Just pick one and be consistent in your project.
|
1,862,821 | 1,862,863 | Who can tell me what this bit of C++ does? | CUSTOMVERTEX* pVertexArray;
if( FAILED( m_pVB->Lock( 0, 0, (void**)&pVertexArray, 0 ) ) ) {
return E_FAIL;
}
pVertexArray[0].position = D3DXVECTOR3(-1.0, -1.0, 1.0);
pVertexArray[1].position = D3DXVECTOR3(-1.0, 1.0, 1.0);
pVertexArray[2].position = D3DXVECTOR3( 1.0, -1.0, 1.0);
...
I've not touched C++ for a while - hence the topic - but this bit of code is confusing me. After m_pVB->Lock is called, the array is initialized.
This is great and all but the problem I'm having is how this happens. The code underneath uses nine elements, but another function (pretty much copy/paste) of the code I'm working with only access say four elements.
CUSTOMVERTEX is a struct, but I was under the impression that this matters not and that an array of structs/objects need to be initialized to a fixed size.
Can anyone clear this up?
Edit:
Given the replies, how does it know that I require nine elements in the array, or four etc...?
So as long as the buffer is big enough, the elements are legal. If so, this code is setting the buffer size if I'm not mistaken.
if( FAILED( m_pd3dDevice->CreateVertexBuffer( vertexCount * sizeof(CUSTOMVERTEX), 0, D3DFVF_CUSTOMVERTEX, D3DPOOL_DEFAULT, &m_pVB, NULL ) ) ) {
return E_FAIL;
}
| m_pVB points to a graphics object, in this case presumably a vertex buffer. The data held by this object will not generally be in CPU-accessible memory - it may be held in onboard RAM of your graphics hardware or not allocated at all; and it may be in use by the GPU at any particular time; so if you want to read from it or write to it, you need to tell your graphics subsystem this, and that's what the Lock() function does - synchronise with the GPU, ensure there is a buffer in main memory big enough for the data and it contains the data you expect at this time from the CPU's point of view, and return to you the pointer to that main memory. There will need to be a corresponding Unlock() call to tell the GPU that you are done reading / mutating the object.
To answer your question about how the size of the buffer is determined, look at where the vertex buffer is being constructed - you should see a description of the vertex format and an element count being passed to the function that creates it.
|
1,862,867 | 1,865,852 | What is the best single-source shortest path algorithm for programming contests? | I was working on this graph problem from the UVa problem set. It's a single-source-shortest-paths problem with no negative edge weights. From what I've gathered, the algorithm with the best big-O running time for such problems is Dijkstra with a Fibonacci heap as the priority queue, although practically speaking a binary heap is easier to implement and works pretty well too.
However, it would seem that even a binary heap takes quite some time to roll, and in a competition time is limited. I am aware that the STL provides some heap algorithms and priority queues, but they don't seem to provide a decrease-key function which Dijkstra's needs. Or am I wrong here?
It seems that another possibility is to simply not use Dijkstra's. This forum thread has people claiming that they solved the above problem with breadth-first search / Bellman-Ford, which are much easier to code up. (Edit: OTOH, Dijkstra's with an unsorted array for the priority queue timed out.) That BFS/Bellman-Ford worked surprised me a little as I thought that the input size was quite large. I guess different problems will require solutions of different complexity, but my question is, how often would I need to use Dijkstra's in such competitions? Should I practice more on the simpler-but-slower algorithms instead?
| Based on my own experience, I never needed to implement Dijkstra algorithm with a heap in a programming contest. You can get away most of the time, using a slower but efficient enough algorithm. You might use a best Dijkstra implementation to solve a problem which expects a different/simpler algorithm, but this is rare the case.
|
1,862,952 | 1,862,966 | Why do C++ class definitions on Windows often have a macro token after 'class'? | I am trying to understand an open source project, where I came across the following class declaration:
class STATE_API AttributeSubject : public AttributeGroup, public Subject
{
public:
AttributeSubject(const char *);
virtual ~AttributeSubject();
virtual void SelectAll() = 0;
virtual const std::string TypeName() const;
virtual void Notify();
virtual AttributeSubject *CreateCompatible(const std::string &) const;
virtual AttributeSubject *NewInstance(bool copy) const { return 0; };
virtual bool VarChangeRequiresReset(void) { return false; };
};
What does STATE_API before the class name AttributeSubject signify? Is it some sort of macro?
| It's probably a typedef to __declspec(dllimport) or __declspec(dllexport) and is used inside DLLs on windows platform to export classes.
Neil is right, it's a macro.
It usually looks like this:
#ifdef INSIDE_DLL
#define STATE_API __declspec(dllexport)
#else
#define STATE_API __declspec(dllimport)
#endif
You define INSIDE_DLL only in your dll and export all the classes declared with STATE_API macro.
|
1,863,153 | 1,863,219 | Why unsigned int 0xFFFFFFFF is equal to int -1? | In C or C++ it is said that the maximum number a size_t (an unsigned int data type) can hold is the same as casting -1 to that data type. for example see Invalid Value for size_t
Why?
I mean, (talking about 32 bit ints) AFAIK the most significant bit holds the sign in a signed data type (that is, bit 0x80000000 to form a negative number). then, 1 is 0x00000001.. 0x7FFFFFFFF is the greatest positive number a int data type can hold.
Then, AFAIK the binary representation of -1 int should be 0x80000001 (perhaps I'm wrong). why/how this binary value is converted to anything completely different (0xFFFFFFFF) when casting ints to unsigned?? or.. how is it possible to form a binary -1 out of 0xFFFFFFFF?
I have no doubt that in C: ((unsigned int)-1) == 0xFFFFFFFF or ((int)0xFFFFFFFF) == -1 is equally true than 1 + 1 == 2, I'm just wondering why.
| C and C++ can run on many different architectures, and machine types. Consequently, they can have different representations of numbers: Two's complement, and Ones' complement being the most common. In general you should not rely on a particular representation in your program.
For unsigned integer types (size_t being one of those), the C standard (and the C++ standard too, I think) specifies precise overflow rules. In short, if SIZE_MAX is the maximum value of the type size_t, then the expression
(size_t) (SIZE_MAX + 1)
is guaranteed to be 0, and therefore, you can be sure that (size_t) -1 is equal to SIZE_MAX. The same holds true for other unsigned types.
Note that the above holds true:
for all unsigned types,
even if the underlying machine doesn't represent numbers in Two's complement. In this case, the compiler has to make sure the identity holds true.
Also, the above means that you can't rely on specific representations for signed types.
Edit: In order to answer some of the comments:
Let's say we have a code snippet like:
int i = -1;
long j = i;
There is a type conversion in the assignment to j. Assuming that int and long have different sizes (most [all?] 64-bit systems), the bit-patterns at memory locations for i and j are going to be different, because they have different sizes. The compiler makes sure that the values of i and j are -1.
Similarly, when we do:
size_t s = (size_t) -1
There is a type conversion going on. The -1 is of type int. It has a bit-pattern, but that is irrelevant for this example because when the conversion to size_t takes place due to the cast, the compiler will translate the value according to the rules for the type (size_t in this case). Thus, even if int and size_t have different sizes, the standard guarantees that the value stored in s above will be the maximum value that size_t can take.
If we do:
long j = LONG_MAX;
int i = j;
If LONG_MAX is greater than INT_MAX, then the value in i is implementation-defined (C89, section 3.2.1.2).
|
1,863,380 | 1,863,392 | Cross platform c++ with libcurl | I am a perl developer that has never went into the client side programming of things. I'd like to think that I'm a pretty good developer, except I know that my severe lack of knowledge of the way desktop programming really takes away from my credibility.
That said, I really want to get into doing some desktop applications.
I want to try to develop a simple application that will connect to my server and grab an rss feed, then display it in the console. My plan of attack is to use libcurl (and curlpp) to grab the feed (I'd also like to do more curl stuff in the future). But I want to be able to run this small program on linux, windows, and mac because I want to understand developing cross platform.
So here is the question (and I know it is extremely noobish): How do I write c++ code that will use libcurl and curlpp, and will work on the 3 major OSes? The main thing I don't understand is if I have to compile libcurl and curlpp, then how does it work when trying to take it over to the other platforms?
| You need to write the code portably - basically make it a console application. You then transfer the source code (not the exe) to the other platforms and compile it there and link with the version of llibcurl on each specific platform.
|
1,863,597 | 1,863,988 | C++ "Scrolling" through items in an stl::map | I've made a method to scroll/wrap around a map of items, so that if the end is reached, the method returns the first item and vice-versa.
Is there more succinct way of doing this?
MyMap::const_iterator it = myMap.find(myKey);
if (it == myMap.end())
return 0;
if (forward) {
it++;
if (it == myMap.end()) {
it = myMap.begin();
}
} else {
if (it == myMap.begin()) {
it = myMap.end();
}
it--;
}
| You can do this with a template. As was stated by a previous poster, this can be cumbersome from the standpoint that it never reaches the end so the user must somehow control this. I'm assuming you have a good reason, perhaps producing some round robin behavior.
#include <iostream>
#include <string>
#include <vector>
#include <set>
#include <map>
using namespace std;
template <class T>
class ScrollIterator
{
public:
ScrollIterator(T &myCtr, typename T::iterator pos)
:ctr(myCtr),
it(pos)
{
}
ScrollIterator operator++()
{
if (++it == ctr.end()) { it = ctr.begin(); }
return *this;
}
bool operator!=(const ScrollIterator &rhs) const
{
return (this->it != rhs.it);
}
bool operator!=(const typename T::const_iterator &rhsIT) const
{
return (this->it != rhsIT);
}
typename T::value_type operator*() const
{
return *it;
}
private:
T &ctr;
typename T::iterator it;
};
int main (int argc, char *argv[])
{
vector<int> v;
v.push_back(2);
v.push_back(3);
v.push_back(5);
v.push_back(7);
int i = 0;
for (ScrollIterator<vector<int> > it(v,v.begin()); it != v.end() && i < 10; ++i, ++it)
{
cout << "Vector = " << i << " Value: " << *it << "\n";
}
set<string> s;
s.insert("c");
s.insert("a");
s.insert("b");
i = 0;
for (ScrollIterator<set<string> > it(s,s.begin()); it != s.end() && i < 10; ++i, ++it)
{
cout << "Set = " << i << " Value: " << *it << "\n";
}
map<string, int> y;
y["z"] = 10;
y["y"] = 20;
y["x"] = 30;
i = 0;
for (ScrollIterator<map<string, int> > it(y,y.begin()); it != y.end() && i < 10; ++i, ++it)
{
cout << "Map = " << i << " Iterator: " << (*it).first << " = " << (*it).second << "\n";
}
return 1;
}
|
1,863,613 | 1,863,924 | What does "symbol value" from nm command mean? | When you list the symbol table of a static library, like nm mylib.a, what does the 8 digit hex that show up next to each symbol mean? Is that the relative location of each symbol in the code?
Also, can multiple symbols have the same symbol value? Is there something wrong with a bunchof different symbols all having the symbol value of 00000000?
| Here's a snippet of code I wrote in C:
#include <stdio.h>
#include <stdlib.h>
void foo();
int main(int argc, char* argv[]) {
foo();
}
void foo() {
printf("Foo bar baz!");
}
I ran gcc -c foo.c on that code. Here is what nm foo.o showed:
000000000000001b T foo
0000000000000000 T main
U printf
For this example I am running Ubuntu Linux 64-bit; that is why the 8-digit hex you see is 16 digits here. :-)
The hex number you see is the address of the code in question within the object file, relative to the beginning of the .text section (assuming we address sections of the object file beginning at 0x0). If you run objdump -td foo.o, you'll see the following in the output:
Disassembly of section .text:
0000000000000000 <main>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 10 sub $0x10,%rsp
8: 89 7d fc mov %edi,-0x4(%rbp)
b: 48 89 75 f0 mov %rsi,-0x10(%rbp)
f: b8 00 00 00 00 mov $0x0,%eax
14: e8 00 00 00 00 callq 19
19: c9 leaveq
1a: c3 retq
000000000000001b <foo>:
1b: 55 push %rbp
1c: 48 89 e5 mov %rsp,%rbp
1f: b8 00 00 00 00 mov $0x0,%eax
24: 48 89 c7 mov %rax,%rdi
27: b8 00 00 00 00 mov $0x0,%eax
2c: e8 00 00 00 00 callq 31
31: c9 leaveq
32: c3 retq
Notice that these two symbols line right up with the entries we saw in the symbol table from nm. Bear in mind, these addresses may change if you link this object file to other object files. Also, bear in mind that the callq at 0x2c will change when you link this file to whatever libc your system provides, since that is currently an incomplete call to printf (it doesn't know where printf is right now).
As for your mylib.a, there is more going on here. The file you have is an archive; it contains multiple object files, each of which has its own text segment. As an example, here is part of an nm against /usr/lib/libm.a on my box here:
e_sinh.o:
0000000000000000 r .LC0
0000000000000008 r .LC1
0000000000000010 r .LC2
0000000000000018 r .LC3
0000000000000000 r .LC4
U __expm1
U __ieee754_exp
0000000000000000 T __ieee754_sinh
e_sqrt.o:
0000000000000000 T __ieee754_sqrt
e_gamma_r.o:
0000000000000000 r .LC0
U __ieee754_exp
0000000000000000 T __ieee754_gamma_r
U __ieee754_lgamma_r
U __rint
You'll see that multiple text segment entries -- indicated by the T in the second column rest at address 0x0, but each individual file has only one text segment symbol at 0x0.
As for individual files having multiple symbols resting at the same address, it seems like it would be possible perhaps. After all, it is just an entry in a table used to determine the location and size of a chunk of data. But I don't know for certain. I have never seen multiple symbols referencing the same part of a section before. Anyone with more knowledge on this than me can chime in. :-)
Hope this helps some.
|
1,863,751 | 1,863,767 | Array decay to pointers in templates | Please consider this code:
#include <iostream>
template<typename T>
void f(T x) {
std::cout << sizeof(T) << '\n';
}
int main()
{
int array[27];
f(array);
f<decltype(array)>(array);
}
Editor's Note: the original code used typeof(array), however that is a GCC extension.
This will print
8 (or 4)
108
In the first case, the array obviously decays to a pointer and T becomes int*. In the second case, T is forced to int[27].
Is the order of decay/substitution implementation defined? Is there a more elegant way to force the type to int[27]? Besides using std::vector?
| Use the reference type for the parameter
template<typename T> void f(const T& x)
{
std::cout << sizeof(T);
}
in which case the array type will not decay.
Similarly, you can also prevent decay in your original version of f if you explicitly specify the template argument T as a reference-to-array type
f<int (&)[27]>(array);
In your original code sample, forcing the argument T to have the array type (i.e. non-reference array type, by using typeof or by specifying the type explicitly), will not prevent array type decay. While T itself will stand for array type (as you observed), the parameter x will still be declared as a pointer and sizeof x will still evaluate to pointer size.
|
1,863,827 | 1,863,835 | GCC Compiler Warning: extended initializer lists only available with c++0x | Using this member initialization...
StatsScreen::StatsScreen( GameState::State level )
: m_Level( level ) {
...//
}
I get the following warning...
extended initializer lists only available with -std=c++0x or -std=gnu++0x
Any information regarding this warning?
Edit: Warning went away after I removed one of the member that was assigned to a value inside the constructor (couldn't be done through member initialization) and made it a local variable instead of a class member. Still want to know what that warnings means though.
| I think you are initializing the object with {...} instead of (...):
StatsScreen ss{...}; // only available in C++0x
StatsScreen ss(...); // OK in C++98
To compile your code as C++0x code, just add the following flag when compiling:
g++ test.cpp -std=c++0x
|
1,864,032 | 1,864,060 | In .cpp file, defining methods with "class Foo { void method() {} };" instead of "void Foo::method() {}"? | When you have inline definitions of functions in the header file, and you want to move the the function definition bodies out of the header and into a .cpp file, you can't just cut-and-paste the functions as they were defined in the header; you have to convert the syntax from this:
class Foo
{
void method1() { definition(); }
void method2() { definition(); }
void method3() { definition(); }
};
To this:
void Foo::method1() { definition(); }
void Foo::method2() { definition(); }
void Foo::method3() { definition(); }
Edit: Just wanted to point out that what I'm hoping to avoid is having to type the class name in front of every method name. It may seem like a small thing but when you're moving a lot of function definitions out of the header and into the cpp file, it adds up. And when the return type is especially complicated, you have to find where in the line each return type ends and each method name begins.
So my question is, do I have to do it like that second block of code above? What if I did this (is the following standards compliant C++?):
In Foo.h:
class Foo
{
void method1();
void method2();
void method3();
};
In Foo.cpp:
#include "Foo.h"
class Foo
{
void method1() { definition(); }
void method2() { definition(); }
void method3() { definition(); }
};
| This will not work: the compiler will see this as you redefining the class. I'm afraid there is no way around this, it's part of the language.
[ps: I know these kinds of jobs are tedious, but we all have to do them at one time or another. If it is that big a job, you could look at writing a script to do it for you, but it really would have to be a big job to justify the effort imho, parsing C++ is not fun]
|
1,864,103 | 1,864,138 | Reading Superblock into a C Structure | I have a disk image which contains a standard image using fuse. The Superblock contains the following, and I have a function read_superblock(*buf) that returns the following raw data:
Bytes 0-3: Magic Number (0xC0000112)
4-7: Block Size (1024)
8-11: Total file system size (in blocks)
12-15: FAT length (in blocks)
16-19: Root Directory (block number)
20-1023: NOT USED
I am very new to C and to get me started on this project I am curious what is a simple way to read this into a structure or some variables and simply print them out to the screen using printf for debugging.
I was initially thinking of doing something like the following, thinking I could see the raw data, but I think this is not the case. There is also no structure for me to grab data out of, and I am trying to read it in as a string, which also seems terribly wrong. Is there a way for me to specify the structure and define the number of bytes in each variable?
char *buf;
read_superblock(*buf);
printf("%s", buf);
| Yes, I think you'd be better off reading this into a structure. The fields containing useful data are all 32-bit integers, so you could define a structure that looks like this (using the types defined in the standard header file stdint.h):
typedef struct SuperBlock_Struct {
uint32_t magic_number;
uint32_t block_size;
uint32_t fs_size;
uint32_t fat_length;
uint32_t root_dir;
} SuperBlock_t;
You can cast the structure to a char* when calling read_superblock, like this:
SuperBlock_t sb;
read_superblock((char*) &sb);
Now to print out your data, you can make a call like the following:
printf("%u %u %u %u %u\n",
sb.magic_number,
sb.block_size,
sb.fs_size,
sb.fat_length,
sb.root_dir);
Note that you need to be aware of your platform's endianness when using a technique like this, since you're reading integer data (i.e., you may need to swap bytes when reading your data). You should be able to determine that quickly using the magic number in the first field.
Note that it's usually preferable to pass a structure like this without casting it; this allows you to take advantage of the compiler's type-checking and eliminates potential problems that casting may hide. However, that would entail changing your implementation of read_superblock to read data directly into a structure. This is not difficult and can be done using the standard C runtime function fread (assuming your data is in a file, as hinted at in your question), like so:
fread(&sb.magic_number, sizeof(sb.magic_number), 1, fp);
fread(&sb.block_size, sizeof(sb.block_size), 1, fp);
...
|
1,864,373 | 1,864,388 | What common application types are created with Visual C++? | C# and VB .net (higher level languages) tend to be good for n-tier business applications and such.
I find C++ a very interesting language and would like to spend more time developing in it.
What kinds of applications are better suited to C++ applications? Are many windows forms apps (for example) created using C++?
| C++ is best suited for system programming. For example, creating windows services. Also, traditionally it has been the language of choice for writing high-performance code such as in 3D graphics or scientific applications.
In general, C++ frameworks (MFC, etc) are tedious for creating graphical applications compared to modern alternatives such as WPF and Windows Forms.
|
1,864,506 | 1,864,694 | Native Makefile alternative for windows | What is a good alternative to Makefile on windows?
I'm compiling a collection of c++ files (.cpp's and .h's) using cl.exe.
I'd rather not use Makefile, as I want to minimise the amount of 3rd party utilities people will need to build my application.
Drew J. Sonne.
| VisualStudio comes with nmake which would not require any 3rd party tools.
|
1,864,615 | 1,864,643 | How to declare a static variable but not define it | Sometimes we need to pre-declare a static variable and then use it.
But the variable name in this declaration may be wrong, and the compiler cannot detect it!
Example:
/* lots of codes */
static some_type some_name; /* pre-declaration */
/* but it may define "some_name" */
/* use some_name */
/* lots of codes */
static some_type someName = initialization; /* definition */
/* use someName */
/* lots of codes */
"some_name" and "someName" are different; we used the wrong variable at the beginning.
If the pre-declaration statement did not define anything, the compiler would detect the mistake.
So, how do I declare a static variable without defining it? How can I change the pre-declaration so that the compiler can detect wrong names?
| gcc will give a warning in the case you've described:
./x.c:3010: warning: 'someName' defined but not used
Solution: Do what you're currently doing, but don't ignore compiler warnings ;)
Edit:
With your updated question: No, I don't believe there is a way to simply declare a static variable (without also defining it).
The common solution is just to make sure all your global scope variables are declared once only, with an initialiser if they need it.
|
1,865,069 | 8,582,538 | How to compile a 64-bit application using Visual C++ 2010 Express? | Is there a simple way to compile a 64 bit app with the 32-bit edition of Visual C++ 2010 Express? What configurations, if any, are necessary?
| Here are step by step instructions:
Download and install the Windows Software Development Kit version 7.1. Visual C++ 2010 Express does not include a 64 bit compiler, but the SDK does. A link to the SDK: http://msdn.microsoft.com/en-us/windowsserver/bb980924.aspx
Change your project configuration. Go to Properties of your project. On the top of the dialog box there will be a "Configuration" drop-down menu. Make sure that selects "All Configurations." There will also be a "Platform" drop-down that will read "Win32." Finally on the right there is a "Configuration Manager" button - press it. In the dialog that comes up, find your project, hit the Platform drop-down, select New, then select x64. Now change the "Active solution platform" drop-down menu to "x64." When you return to the Properties dialog box, the "Platform" drop-down should now read "x64."
Finally, change your toolset. In the Properties menu of your project, under Configuration Properties | General, change Platform Toolset from "v100" to "Windows7.1SDK".
These steps have worked for me, anyway. Some more details on step 2 can be found in a reference from Microsoft that a previous poster mentioned: http://msdn.microsoft.com/en-us/library/9yb4317s.aspx.
|
1,865,236 | 1,865,248 | Is it necessary to learn Java for contributing to an open source project? | I am more into C/C++. But many of my seniors here in college ask me to learn Java if I want to contribute to an open source project. I'm in a dilemma: what should I do? Can't we do a design project in C/C++?
| There are plenty of open source C and C++ projects - as well as loads in virtually any other language you can come up with.
Of course it's never a bad idea to learn another language, but don't feel too constrained by "only" knowing C and C++.
If you want to contribute to a specific open source project which is written in Java, of course, that's a different matter... but if you're trying to find C and C++ open source projects, some of the major hosting sites support querying by project language, I believe. For example, you can look at Google Code C++ projects and SourceForge projects tagged C++.
|
1,865,557 | 1,865,570 | Which C++ does Visual Studio 2008 (or later) use? | I find C++ a very controversial language in the Microsoft world. By default we have ISO C++, and then Microsoft has Managed C++ and now C++/CLI.
I just know standard (ISO) C++. I don't know Microsoft's versions of C++.
I'm confused about how Visual Studio 2008 (or later) interprets any given C++ code. That's why I'm using GNU tools for compiling my programs. But I do love Visual Studio.
What settings do I need to make if I only want to use
STRICTLY ISO C++
Managed C++ (its deprecated but I think they still support it for sake of backward compatibility)
C++ CLI (for .NET platform)
I want to build native assemblies using C++ not managed ones. So, is there anything else should I need to do?
| Everything is in the build settings:
Common Language Runtime Support (/clr) - add or remove CLR support
Advanced: Compile As: Compile as C++ Code (/TP) - to choose between C++ and C.
Language: Disable Language Extensions (/Za) - use this to force strict ANSI/ISO conformance.
|
1,865,629 | 1,865,653 | Generic VC++ vs g++ query | I have trouble understanding the compilers.
The following code works on UNIX under g++, but under VC++ it will not even compile. Can anyone provide valid reasons why?
#include <stdio.h>
#include <iostream>
#include <string.h>
using namespace std;
int main()
{
string tmp_nw_msg, crc_chksum, buffer;
cout << "Enter the string : ";
cin >> buffer;
if (strlen(buffer.c_str()) >15 ) {
tmp_nw_msg = buffer.substr(1,12);
crc_chksum = buffer.substr(13,2);
cout << " N/W msg : "<< tmp_nw_msg << endl;
cout << " crc chksum : "<< crc_chksum << endl;
}
else {
cout << "error" << endl;
}
std::cin.get();
return 0;
}
The following error is thrown by VC++, but in g++ it does work fine.
Error 1 error C2679: binary '>>' : no operator found which takes a right-hand operand of type 'std::string' (or there is no acceptable conversion) c:\documents and settings\my documents\visual studio 2005\projects\dummy_substr\dummy_substr\substr.cpp 13
Error 2 error C2679: binary '<<' : no operator found which takes a right-hand operand of type 'std::string' (or there is no acceptable conversion) c:\documents and settings\my documents\visual studio 2005\projects\dummy_substr\dummy_substr\substr.cpp 19
Error 3 error C2679: binary '<<' : no operator found which takes a right-hand operand of type 'std::string' (or there is no acceptable conversion) c:\documents and settings\my documents\visual studio 2005\projects\dummy_substr\dummy_substr\substr.cpp 20
Error 4 fatal error C1075: end of file found before the left brace '{' at 'c:\documents and settings\my documents\visual studio 2005\projects\dummy_substr\dummy_substr\substr.cpp(9)' was matched c:\documents and settings\my documents\visual studio 2005\projects\dummy_substr\dummy_substr\substr.cpp 29
Output from g++ :
Enter the string : BD2d1100mayor47E
N/W msg : D2d1100mayor
crc chksum : 47
| You need to replace #include <string.h> by #include <string>
C++ headers don't have the .h extension to differentiate them from C headers that would have the same name.
Also, you don't need the #include <stdio.h> header for your program -- and in case you need to call stdio functions from a C++ program you should #include <cstdio> anyway.
EDIT: "If that really was the problem, the error should be on the definition of the string variables" commented by PierreBdR
In MSVC++, #include <iostream> creates a cascade of includes which at some point reaches #include <stdexcept>. When you look at the stdexcept header file, you can see #include <xstring>. MSVC++'s definition and implementation of std::string really lives in this xstring header, which explains why the compiler knows the type even though you didn't #include <string>.
Then if you look at the content of the string header, you can see this is where binary operators compatible with std::string are defined which explains why the error only pops up on the line containing cin >> buffer; statement.
|
1,865,631 | 1,866,237 | Loading an EXE as a DLL, local vftable | I have an exe named test.exe which is usually used as a stand-alone application. I want to use this exe as a module (a dll) inside another application, app.exe.
The code in test.exe does something really simple like:
void doTest()
{
MyClass *inst = new MyClass();
inst->someMethod();
}
Where someMethod() is virtual and MyClass has a virtual d'tor.
doTest() is exported from test.exe and thus a lib called test.lib is created
app.exe is linked with this lib to statically load test.exe when it starts.
When I'm running test.exe stand-alone it runs just fine but when I'm running it loaded from within app.exe it crashes.
Stepping into the code with the debugger revealed that the crash is in the call to the virtual method. It turns out that the vftable somehow goes bad.
After some investigations it turns out that when the code inside the constructor of MyClass is running, the vftable is one thing, but when the call to new returns it is replaced with something else called a "local vftable". I found this obscure discussion about why this is.
After about a day of debugging it occurred to me that the pointers in this "local vftable" are the same in both cases, whether test.exe is stand-alone or loaded as a module. This can't be right because test.exe is loaded into a different address...
To test this theory I changed the loading address in the linker options to the one where test.exe is loaded when it is in app.exe and now, lo and behold, everything works.
Obviously, this is not a permanent solution because next time this randomly selected address may be occupied and the same problem will occur again.
So my question: Why is this "local vftable" tied to the static loading address of the exe? is loading an exe as a module a bad thing? why does the exe assume it is loaded to its static address?
Just for context: this is all done with MSVC 2008, Windows XP x64.
| The workaround I ended up using is to simply add a compile configuration and compile the exe as a real dll instead of forcing it to act like one.
using /fixed:no didn't solve the problem for some reason.
Another difference between exes and DLLs is that the entry point is different. A DLL's entry point is DllMain, whereas an exe has its entry point in the CRT, which eventually calls main() or WinMain().
|
1,865,652 | 1,866,840 | Using (void*) as a type of an identifier | In my program, I have objects (of the same class) that must all have a unique identifier. For simplicity and performance, I chose to use the address of the object as identifier. And to keep the types simple, I use (void*) as a type for this identifier. In the end I have code like this:
class MyClass {
public:
typedef void* identity_t;
identity_t id() const { return (void*)this; }
}
This seems to be working fine, but gcc gives me a strict-aliasing warning. I understand the code would be bad if the id was used for data transfer. Luckily it is not, but the question remains: will aliasing optimisations have an impact on the code produced? And how to avoid the warning?
Note: I am reluctant to use (char*) as this would imply the user can use the data for copy, which it can not!
| You are violating logical constness returning the object as mutable in a const method.
As Neil points out, no cast is needed.
class MyClass {
public:
typedef const void* identity_t;
identity_t id() const { return this; }
};
|
1,865,719 | 1,866,030 | How to NOT generate debug information for specific source files / source sections? | Is there a way to create a Debug build of our Vs2005 (C++) project and exclude specific modules or code sections from being included into the debug information? Or is there an option to have VS generate multiple PDB files from a single project?
It looks like our generated PDB file is getting too large for Visual Studio to handle/generate correctly and the result is that VS tells us that the debug symbols do not match. Sometimes it works, sometimes it does not.
I investigate the option of splitting the project into multiple smaller projects, but I guess this will take some time. But it would be great if we could debug the current project as it is in the meantime.
| I agree with Andreas' comment - you're almost certainly better splitting the project.
However, if you right-click a C++ source file (don't think you can do this with C#), and open the properties you've got complete control over how that specific file (or files) is built.
K
|
1,865,800 | 1,891,603 | C++ vs Java constructors | According to John C. Mitchell - Concepts in programming languages,
[...] Java guarantees that a constructor is called whenever an object is created. [...]
This is presented as a Java peculiarity which makes it different from C++ in its behaviour. So I must argue that C++ in some cases does not call any constructor for a class even if an object for that class is created.
I think that this happens when inheritance occurs, but I cannot figure out an example for that case.
Do you know any example?
| Giving an interpretation, I have a suggestion about why the author says this of Java, without hunting for corner cases which I think don't really address the problem: you could argue, for example, that PODs are not objects.
The fact that C++ has unsafe type casts is much more well known. For example, using a simple mixture of C and C++, you could do this:
#include <cstdlib>   // malloc/free
#include <iostream>
using namespace std;

class A {
int x;
public:
A() : x(0) {}
virtual void f() { x=x+1; }
virtual int getX() { return x; }
};
int main() {
A *a = (A *)malloc(sizeof(A));
cout << a->getX();
free(a);
}
This compiles without complaint in C++ and uses the unchecked form of type cast to avoid constructor invocation. In this case neither x nor the object's vtable pointer is ever initialized, so both the virtual call and the read of x are undefined behaviour.
However, there might be other cases in which Java also fails to apply this rule; the mention of serialized objects is perfectly reasonable and correct, even though you know for sure that the object has already been constructed in some way (unless you do some hacking on the serialized encoding, of course).
|
1,866,080 | 1,866,123 | Is there any standard to consume a webservice inside of native C++? | I am looking for resources to show me how I can consume web services inside native C++ . Are there any popular libraries I can use?
TIA
Andrew
| It really depends on what webservice architecture you are talking about... For XML-RPC IBM has a nice article showcasing XMLRPC++, for SOAP there is e.g. gSOAP or WSO2 WSF/C++, ...
|
1,866,181 | 1,867,920 | Why can I not perform a sizeof against a static char[] of another class? | Why does the following code generate a compile error?
Edit: My original code wasn't clear - I've split the code up into separate files...
First.h
class First
{
public:
static const char* TEST[];
public:
First();
};
First.cpp
const char* First::TEST[] = {"1234", "5678"};
First::First()
{
uint32_t len = sizeof(TEST); // fine
}
Determining the size within the First class seems fine, however...
Second.h
class Second
{
public:
Second();
};
Second.cpp
#include "First.h"
Second::Second()
{
uint32_t len = sizeof(First::TEST); // error
uint32_t elements = (sizeof(First::TEST) / sizeof(First::TEST[0])); // error
}
I get the following error: 'const char *[]': illegal sizeof operand
| sizeof only works on complete types. const char* TEST[] is not a complete type until it is defined in First.cpp.
sizeof(char*[10]) == sizeof(char*) * 10 == 40 (assuming 4-byte pointers)
sizeof(short[10]) == sizeof(short) * 10 == 20
// a class Foo will be declared
class Foo;
sizeof(Foo) == //we don't know yet
// an array bar will be defined.
int bar[];
sizeof(bar) == sizeof(int) * ? == //we don't know yet.
// actually define bar
int bar[/*compiler please fill this in*/] = { 1, 2, 3 };
sizeof(bar) == sizeof(int) * 3 == 12
// note bar is exactly the right size
// an array baz is defined.
int baz[4];
sizeof(baz) == sizeof(int) * 4 == 16
// initialize baz
int baz[4] = { 1, 2, 3 };
sizeof(baz) == sizeof(int) * 4 == 16
// note baz is still 4 big, the compiler doesn't control its size
To get this to work as you wish, you can:
add the size of the First::TEST array to its declaration (static const char* TEST[2];)
add a new static method that returns the sizeof First::TEST. The method cannot be inline, it would have to be defined in First.cpp.
|
1,866,193 | 1,866,211 | To what extent is using "delete this" compliant to C++ standard? | When implementing reference counting in objects the "release and possibly delete object" primitive is usually implemented like this:
void CObject::Release()
{
--referenceCount;
if( referenceCount == 0 ) {
delete this;
}
}
First of all, delete this looks scary. But since the member function returns immediately and doesn't try to access any member variables the stuff still works allright. At least that's how it is explained usually. The member function might even call some global function to write to a log that it deleted the object.
Does the C++ standard guarantee that a member function can call delete this and then do anything that will not require access to member variables and calling member functions and this will be defined normal behaviour?
| Yes, that will behave like deleting any other object.
|
1,866,461 | 1,866,543 | Why should I not try to use "this" value after "delete this"? | In this paragraph of C++ FAQ usage of delete this construct is discussed. 4 restrictions are listed.
Restrictions 1 to 3 look quite reasonable. But why is restriction 4 there that I "must not examine it, compare it with another pointer, compare it with NULL, print it, cast it, do anything with it"?
I mean this is yet another pointer. Why can't I reinterpret_cast it to an int or call printf() to output its value?
| The reason that you cannot do anything with a pointer after you delete it (this, or any other pointer) is that the hardware could (and some older machines did) trap when trying to load an invalid memory address into a register. Even though it may be fine on all modern hardware, the standard says that the only thing you can do with an invalid pointer (uninitialized or deleted) is to assign to it (either NULL, or from another valid pointer).
|
1,866,884 | 2,408,109 | GUI Control For Audio Presentation | I need GUI control for audio file presentation. The language is not very important but it should run on windows platform.
I should be able to :-
load the file
play the sound
put and move markers across the audio bar.
it would be nice if it can load itself from RTP wireshark captures (and not wav files).
An example may be seen in audacity (may be someone even had an experience extracting it from there). Writing nyquist scripts in audacity is not a good option because I have to operate on RTP captures and not on raw sound samples.
Another example of such control is wireshark RTP analyzer.
Any advise?
| Try AudioExCs example from Alvas.Audio library http://alvas.net/alvas.audio.aspx
|
1,867,030 | 1,867,286 | Combinations of Multiple Vector's Elements Without Repetition | I have n amount of vectors, say 3, and they have n amount of elements (not necessarily the same amount). I need to choose x amount of combinations between them. Like choose 2 from vectors[n].
Example:
std::vector<int> v1(3), v2(5), v3(2);
There cannot be combinations from one vector itself, like v1[0] and v1[1]. How can I do this?
I've tried everything, but cannot figure this out.
| If I understand you correctly you have N vectors, each with a different number of elements (call the size of the ith vector Si), and you wish to choose M combinations of elements from these vectors without repetition. Each combination would be N elements, one element from each vector.
In this case the number of possible permutations is the product of the sizes of the vectors, which, for lack of some form of equation setting I'll call P and compute in C++:
std::vector<size_t> S(N);
// ...populate S...
size_t P = 1;
for(size_t i=0;i<S.size();++i)
P *= S[i];
So now the problem becomes one of picking M distinct numbers between 0 and P-1, then converting each of those M numbers into N indices into the original vectors. I can think of a few ways to compute those M numbers, perhaps the easiest is to keep drawing random numbers until you get M distinct ones (effectively rejection sampling from the distribution).
The slightly more convoluted part is to turn each of your M numbers into a vector of indices. We can do this with
size_t m = /* ... one of the M permutations */;
std::vector<size_t> indices(N);
for(size_t i=0; i<N; ++i)
{
indices[i] = m % S[i];
m /= S[i];
}
which basically chops m up into chunks for each index, much like you would when indexing a 2D array represented as a 1D array.
Now if we take your N=3 example we can get the 3 elements of our permutation with
v1[indices[0]]
v2[indices[1]]
v3[indices[2]]
generating as many distinct values of m as required.
|
1,867,067 | 1,867,201 | Read from same file (until EOF) using ifstream after file contents change | Requirement :
I must read until EOF (16 bytes a time) from a particular file, and then say sleep for 5 seconds. Now, after 5 seconds, when I try to read from the file (whose contents would have been appended by that time), the intended design must be in such a way that it reads from the point where it left previously and again scan the contents (16 bytes a time) until EOF is reached.
#include <stdio.h>
#include <fstream>
#include <iostream>
#include <sstream>
using namespace std;
int main()
{
int fd, i, j, length, pos;
char buffer[100][16];
ifstream Read;
std::ostringstream oss;
int current_position = 0;
Read.open("from4to5", ios::binary);
//Get the size of the file
Read.seekg(0, ios::end);
length = Read.tellg();
Read.seekg(0, ios::beg);
for(i=0; i<length; i++)
{
buffer[i][16] = '\0';
}
//Read the file in 16byte segments or eof(), whichever comes first
//Testing the return condition of the function is preferred, as opposed to testing eof()
while(Read.get(buffer[i], 17))
{
for(j=0; j<=16; j++)
oss << buffer[i][j];
cout << "Contents : " << oss.str() << endl;
oss.seekp(0);
i++;
}
// Output is :
// Contents : BD8d3700indiaC#E
// Contents : BD6d4700godgeD3E
// Contents : BD9d1311badge3TE
return 0;
}
I need to modify this to suit my requirement. I tried using the seekg() call, but somehow failed. I was wondering if, when the first time I accessed and read from the file into filestream, somehow the program would have placed an exclusive lock on the file, which would mean that I'll not be able to read from it the next time around.
Can anyone show me how it's to be done?
Filename : "from4to5"
Contents:
BD8d3700indiaC#EBD6d4700godgeD3EBD9d1311badge3TE
Within 5 seconds, some other process writes(appends) to the same file "from4to5"
Now,
File Contents:
BD8d3700indiaC#EBD6d4700godgeD3EBD9d1311badge3TEBD6d1210clerk41EBD2d1100mayor47EBD4d2810bread6YE
Now, when the program reads from the file "from4to5", it must read from the point where it left previously, 16 bytes a time until it encounters EOF.
Intention for output this time around is :
// Output is :
// Contents : BD6d1210clerk41E
// Contents : BD2d1100mayor47E
// Contents : BD4d2810bread6YE
| You'll have to:
save your position
close the file
reopen the file
seek to your saved position and resume reading until EOF
|
1,867,262 | 1,867,368 | What is the fastest way to deconstruct a fixed length binary/alpha message? | What would you suggest as the fastest or best way to parse a fixed length message in c++ which has fields defined like
field = 'type', length = 2, type = 'alphanumeric'
field = 'length', length = 2, type = 'binary' (edit:length = 2 means 16 bit)
...
...
and so on
I read about making a struct and then using reinterpret_cast but I'm not sure how to use that or if there is any better method.
By parsing, I mean extracting human-readable output like 'Type = X', 'Length = 15', etc.
| Is this what you mean?
char* binaryMessage; //From somewhere
struct Fields {
short type; // 2 bytes
short length; // 2 bytes
};
Fields* fields = reinterpret_cast<Fields*>(binaryMessage);
std::cout << "Type = " << fields->type;
std::cout << "Length = " << fields->length;
A safer alternative is boost::interprocess::basic_bufferstream; note that binary fields should be pulled out with read() rather than the formatted >> operator, which would try to parse them as text:
basic_bufferstream<char> stream(binaryMessage, lengthOfMessage, std::ios_base::in);
Fields fields;
stream.read(reinterpret_cast<char*>(&fields.type), sizeof(fields.type));
stream.read(reinterpret_cast<char*>(&fields.length), sizeof(fields.length));
|
1,867,634 | 1,867,667 | Properly extending a COM interface (IDL) | I am working with some legacy c++ code and I need to extend an interface. The current interfaces are for example:
[
object,
uuid(guid),
version(1.0),
dual,
nonextensible,
oleautomation
]
interface IInfo : ITask {
// Methods here
}
[
object,
uuid(guid),
version(1.0),
dual,
nonextensible,
oleautomation
]
interface IExtendedInfoTask : IInfo {
// Methods here
}
What I would like to extend is the IInfo interface. Now from my understanding the proper way to do this would be to create an IInfo2 interface that inherits the IInfo interface, however I need my IExtendedInfoTask to inherit from this IInfo2. Changing its current inheritance would break the existing interface would it not?
Would the proper way to do this be creating an IExtendedInfoTask2 that extends IInfo2 and duplicates the methods of IExtendedInfoTask?
| The proper way to do this is to create an IExtendedInfoTask2 that extends the new IInfo2 interface. COM requires that an interface, once defined, is immutable.
You can have the same class implement both IExtendedInfoTask and IExtendedInfoTask2, so the caller can use either version. It's only a vtable difference -- you don't have to implement the methods separately.
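In IDL the resulting hierarchy would look something like this (attribute lists abbreviated to match the originals; each new interface needs its own freshly generated GUID):

```idl
[ object, uuid(/* new GUID */), version(1.0), dual, nonextensible, oleautomation ]
interface IInfo2 : IInfo {
    // new methods only
}

[ object, uuid(/* new GUID */), version(1.0), dual, nonextensible, oleautomation ]
interface IExtendedInfoTask2 : IInfo2 {
    // the IExtendedInfoTask methods, re-declared, plus any new ones
}
```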
|
1,868,492 | 1,870,536 | How to do hardware accelerated alpha blending in SDL? | I'm trying to find the most efficient way to alpha blend in SDL. I don't feel like going back and rewriting rendering code to use OpenGL instead (which I've read is much more efficient with alpha blending), so I'm trying to figure out how I can get the most juice out of SDL's alpha blending.
I've read that I could benefit from using hardware surfaces, but this means I'd have to run the game in fullscreen. Can anyone comment on this? Or anything else regarding alpha transparency in SDL?
| Decided to just not use alpha blending for that part. Pixel blending is too much for software surfaces, and OpenGL is needed when you want the power of your hardware.
|
1,868,993 | 1,869,052 | Continuous Streaming PCM data in C++? | I have a stream of PCM audio captured from a cell phone, and I want to play it.
I am trying to find a lightweight method of playing this audio in C++.
I can already slap on a wave header and create a file that plays in any media player, but I want to play the file in real time as it streams in. I would like to avoid writing the file to disc just to read it again, and I also don't want to have pauses in the audio as I stop one file and start another.
I realize that OpenAL provides audio streaming functionality, but I was hoping for something simpler. I only need to play a single channel PCM stream.
Does anyone know of a lightweight, free(for commercial use) library or windows API that can do this?
| Use the waveOut API in Windows
|
1,869,171 | 1,869,616 | Returning std::pair versus passing by non-const reference | Why is returning a std::pair or boost::tuple so much less efficient than returning by reference? In real code that I've tested, setting data by non-const reference rather than by std::pair in an inner kernel can speed up the code by 20%.
As an experiment, I looked at three simplest-case scenarios involving adding two (predefined) integers to two integers:
Use an inner, inlined function to modify the integers by reference
Use two inner, inlined functions to return ints by value
Use an inner, inlined function to return a std::pair which is copied to the result.
Compiling with g++ -c $x -Wall -Wextra -O2 -S results in the same assembly code for passing by reference and returning ints by value:
__Z7getPairiRiS_:
LFB19:
pushq %rbp
LCFI0:
leal 1023(%rdi), %eax
addl $31, %edi
movl %eax, (%rsi)
movq %rsp, %rbp
LCFI1:
movl %edi, (%rdx)
leave
ret
(Pass by reference code:
#include <utility>
inline void myGetPair(const int inp, int& a, int& b) {
a = 1023 + inp;
b = 31 + inp;
}
void getPair(const int inp, int& a, int& b) {
myGetPair(inp, a, b);
}
Using individual rvalues:
#include <utility>
inline int myGetPair1(int inp) {
return 1023 + inp;
}
inline int myGetPair2(int inp) {
return 31 + inp;
}
void getPair(const int inp, int& a, int& b) {
a = myGetPair1(inp);
b = myGetPair2(inp);
}
)
Using std::pair, however, adds five extra assembly statements:
__Z7getPairiRiS_:
LFB18:
leal 31(%rdi), %eax
addl $1023, %edi
pushq %rbp
LCFI0:
salq $32, %rax
movq %rsp, %rbp
LCFI1:
orq %rdi, %rax
movq %rax, %rcx
movl %eax, (%rsi)
shrq $32, %rcx
movl %ecx, (%rdx)
leave
ret
The code for that is nearly as simple as the previous examples:
#include <utility>
inline std::pair<int,int> myGetPair(int inp) {
return std::make_pair(1023 + inp, 31 + inp);
}
void getPair(const int inp, int& a, int& b) {
std::pair<int,int> result = myGetPair(inp);
a = result.first;
b = result.second;
}
Can anyone who knows the inner workings of compilers help with this question? The boost tuple page makes reference to a performance penalty for tuples vs. pass-by-reference, but none of the linked papers answer the question.
The reason I'd prefer std::pair to these pass-by-reference statements is that it makes the intent of the function much clearer in many circumstances, especially when other parameters are input as well as the ones that are to be modified.
| I tried that with VC++2008, using cl.exe /c /O2 /FAs foo.cpp (that's "compile only and do not link", "optimize for speed", and "dump assembly output with matching source code lines in comments"). Here's what getPair() ended up being.
"byref" version:
PUBLIC ?getPair@@YAXHAAH0@Z ; getPair
; Function compile flags: /Ogtpy
; COMDAT ?getPair@@YAXHAAH0@Z
_TEXT SEGMENT
_inp$ = 8 ; size = 4
_a$ = 12 ; size = 4
_b$ = 16 ; size = 4
?getPair@@YAXHAAH0@Z PROC ; getPair, COMDAT
; 9 : myGetPair(inp, a, b);
mov eax, DWORD PTR _inp$[esp-4]
mov edx, DWORD PTR _a$[esp-4]
lea ecx, DWORD PTR [eax+1023]
mov DWORD PTR [edx], ecx
mov ecx, DWORD PTR _b$[esp-4]
add eax, 31 ; 0000001fH
mov DWORD PTR [ecx], eax
; 10 : }
ret 0
?getPair@@YAXHAAH0@Z ENDP ; getPair
"byval" std::pair-returning version:
PUBLIC ?getPair@@YAXHAAH0@Z ; getPair
; Function compile flags: /Ogtpy
; COMDAT ?getPair@@YAXHAAH0@Z
_TEXT SEGMENT
_inp$ = 8 ; size = 4
_a$ = 12 ; size = 4
_b$ = 16 ; size = 4
?getPair@@YAXHAAH0@Z PROC ; getPair, COMDAT
; 8 : std::pair<int,int> result = myGetPair(inp);
mov eax, DWORD PTR _inp$[esp-4]
; 9 :
; 10 : a = result.first;
mov edx, DWORD PTR _a$[esp-4]
lea ecx, DWORD PTR [eax+1023]
mov DWORD PTR [edx], ecx
; 11 : b = result.second;
mov ecx, DWORD PTR _b$[esp-4]
add eax, 31 ; 0000001fH
mov DWORD PTR [ecx], eax
; 12 : }
ret 0
?getPair@@YAXHAAH0@Z ENDP ; getPair
As you can see, the actual assembly is identical; the only difference is in mangled names and comments.
|
1,869,305 | 1,869,474 | Guide to switch from Visual Studio to Emacs on windows? | I do not want to learn an IDE or similar software which is only made for one platform only. I want to spend my time+energy in learning something which is a timeless-truth.
I want to switch to an editor-religion, which has no religion but of development and progress, it sees & treats all with equality.
Yes, please provide me some guide about how to switch to Emacs on windows.
like, doing compiler settings, source setting, TFS binding ...and all things I do not know about.
PS most of (all) my code is in C++ (unmanaged)
| You'll need to consider whether you want to use Emacs as your editor only, but continue to maintain your project settings, source files and build/debug environment in Visual Studio, or switch completely to Emacs as you editor and use some other tools (e.g., make) to build your project using VS compilers or other compilers completely.
The former case is relatively easy - you can have your file open in Emacs, and the project open in Visual studio, and just Alt-tab over to VS to build and debug. There are a couple good ports of graphical Emacs for Windows, or you can just use Cygwin combined with the terminal version of the application.
The second option - switching to a fully UNIX-like build environment is more involved and extends far beyond what editor you will be using.
Update, given comment below on "baby steps":
If your goal is to get to a complete non-VS (with the possible exception of the actual compiler and linker executable) environment in baby steps, then I would recommend first simply using Emacs to edit your source, and becoming used to the various shortcut keys and so on. Speaking only about raw editing, I find myself considerably more productive in Emacs than VS given the power of the editing functionality - and less use of the mouse is another upside if you suffer from mouse-related RSI. That's the first baby step you can take.
Unfortunately, the next step - to move from the VS build environment to something cross platform isn't so simple, and I can't see a particularly gradual way to do it. You'll need to decide what your alternative would be - it could be as simple the classic GNU tool chain - make, makedepends, gcc, gdb and related components. Here, I'd recommend Cygwin on Windows - get used to this and you'll be immediately familiar with the tools when you make the jump to a UNIX environment. The details of how to set up a nice environment with this toolchain could probably fill a book or two, but if your needs are simple it is not difficult.
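As a first concrete step away from the VS project system, a minimal GNU Makefile is often enough. This is only a sketch -- the source file names and target here are placeholders for your own project (and note that recipe lines must start with a TAB character):

```make
# Hypothetical file names -- substitute your own sources and target.
CXX      = g++
CXXFLAGS = -Wall -g
SRCS     = main.cpp scene.cpp
OBJS     = $(SRCS:.cpp=.o)

myapp: $(OBJS)
	$(CXX) $(CXXFLAGS) -o $@ $(OBJS)

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@

clean:
	rm -f $(OBJS) myapp
```

From inside Emacs, M-x compile (which defaults to running make) will build this and let you jump straight to compile errors.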
There are certainly other more modern alternatives, although many of them are oriented towards Java - but you can still use things like ANT and Maven with other languages with the appropriate plugin or extension.
Once you've got your non-VS build set up (nothing to do with Emacs), only then can you go about the task of learning how to trigger your builds, fix compile errors and debug your programs using emacs in an integrated way.
|
1,869,308 | 1,869,351 | Trouble with objects and Pointers | vector < Shape* > shapes;
void createScene()
{
image = QImage(width, height, 32); // 32 Bit
Color amb(0.1,0.1,0.1);
Color difCoef(0.75,0.6,0.22);
Color spec(0.5,0.5,0.5);
double shine= 3.0;
Sphere *s = new Sphere(Point(0.0,0.0,-5), 100.0, amb, difCoef, spec, shine);
shapes.push_back(s);
}
int main(){
// initialize glut
init();
createScene();
Shape *x = shapes[0];
cout << x->shine << endl;
}
class Shape
{
public:
Shape() {}
~Shape(){}
Color ambient;
Color dif;
Color spec;
double shine;
virtual bool checkIntersect(Point p, Point d, Point &temp) = 0; // If intersects, return true else false.
virtual Point getNormal(Point intPt) = 0; // Get the normal at the point of intersection
//virtual void printstuff() = 0;
};
When it prints out shine, i get a value of zero? Why is that?
| I think object slicing occurs (since you're using a Shape object and assigning to it). What you would want to do in order to preserve polymorphism is use a pointer or a reference. In this case I would use a pointer:
Shape *x = shapes[0];
If shapes is an odd container which does de-reference (this is what I understand from your code) then I would use a reference:
Shape &x = shapes[0];
You could use a const reference, but it isn't mandatory here since your object is not a temporary one by any means.
Btw, hasn't anybody told you globals are a bad practice?
|
1,869,439 | 1,869,518 | Header inclusion optimization | Is there an automatic way to optimize inclusion of header files in C++, so that compilation time is improved ? With the word "automatic" I mean a tool or program. Is it possible to find which headers files are obsolete (e.g exposed functionality is not used) ?
Edit: Having each include header "included only once is one important thing, but is there a way to even change the contents of files so that frequently used "functionality" is on specific includes and less frequently used functionality is on other includes? Am i asking too much ? Unfortunately, we are talking about an existing code base with thousands of files. Could it be a refactoring tool what I am actually asking for ?
Thank you.
| Update
I think what you really want is "include what you use" rather than a minimal set of headers. IWYU means forward declare as much as possible, and include headers that directly declare the symbols you use. You cannot mindlessly convert a file to be IWYU clean as it may no longer compile. When that occurs, you need to find the missing header and add it. However, if every file is IWYU clean your compiles will be faster overall even if you have to add headers occasionally. Not to mention you headers will be more meaningful/self-documenting.
As my previous answer points out it is technically possible to include even fewer headers than necessary for IWYU, but it's generally a waste of time.
Now if only there was a tool to most of the IWYU refactoring grunt work for you :)
Google's IWYU
Include What You Use
I had considered a creating/using a tool like this once. The idea is to use binary search and repeated compilation to find the minimal set of includes. Upon further investigation it didn't seem that useful.
Some issues:
Changing the included header files can change the behavior, and still allow the file to compile. One example in particular, if you defined your own std::swap in a separate header file. You could remove that header and your code would still compile using the default std::swap implementation. However, the std::swap may be: inefficient, cause a runtime error, or worse produce subtly wrong logic.
Sometimes a header file inclusion works as documentation. For instance, to used std::foreach, often including <vector> is sufficient to get it to compile. The code is more meaningful with the extra #include <algorithm>.
The minimal compilation set may not be portable, between compilers or compiler versions. Using the std::foreach example again, there is no guarantee that std::foreach will provided in by <vector>.
The minimal set of includes may not affect compile time significantly anyway. Visual studio and gcc support #pragma once which make repeated included essentially non-existent performance wise. And at least gcc's preprocessor has been optimized to process include guards very fast (as fast as #pragma once).
|
1,869,504 | 1,869,688 | c++ registry read/write from a non-admin Windows Service | I'd like to read/write some registry information from my non-admin Windows Service, and have it applied regardless of the user logged in. Would using a subkey of HKEY_USERS/.DEFAULT do the trick?
Essentially, something like CSIDL_COMMON_APPDATA but in the registry.
Thanks!
| What do you mean by "have it applied"? I assume you mean write it to one place and all other users can read it; in that case, HLKM is the only answer. Why not change your service to run under one of the service accounts?
|
1,869,701 | 1,871,689 | Drawing text on a framebuffer in Linux from C | How can a program draw text on a frame buffer mapped in as an array? What is needed is both a means of representing the individual characters, and of drawing the characters pixel by pixel in a manner that is not too inefficient. The representation of the characters should ideally be defined solely in code, and no third party libraries would be required.
Does anyone know of code to do this available under a liberal license? Or a tool to generate data definitions for the font for use in program code e.g. the array of bitmap glyph/character values?
| I don't have any information specific to frame buffers, but I do have an interesting way of encoding a font.
If you have an application that can write to the XBM format, you can encode a font just by creating an image containing all the characters. The XBM file can be included as a C or C++ file, and by using the proper offsets you can easily access a single character. Make sure each character starts at an X-coordinate divisible by 8, because the image is coded as one bit per pixel; anything that doesn't line up on an 8-bit boundary will need masking and shifting.
|
1,869,970 | 1,870,046 | C++ Switch won't compile with externally defined variable used as case | I'm writing C++ using the MinGW GNU compiler and the problem occurs when I try to use an externally defined integer variable as a case in a switch statement. I get the following compiler error: "case label does not reduce to an integer constant".
Because I've defined the integer variable as extern I believe that it should compile, does anyone know what the problem may be?
Below is an example:
test.cpp
#include <iostream>
#include "x_def.h"
int main()
{
std::cout << "Main Entered" << std::endl;
switch(0)
{
case test_int:
std::cout << "Case X" << std::endl;
break;
default:
std::cout << "Case Default" << std::endl;
break;
}
return 0;
}
x_def.h
extern const int test_int;
x_def.cpp
const int test_int = 0;
This code will compile correctly on Visual C++ 2008. Furthermore a Montanan friend of mine checked the ISO C++ standard and it appears that any const-integer expression should work. Is this possibly a compiler bug or have I missed something obvious?
Here's my compiler version information:
Reading specs from C:/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs
Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug
Thread model: win32
gcc version 3.4.5 (mingw-vista special r3)
| A case label requires an integral constant expression which have strict requirements that enable their value to be determined at compile time at the point of use.
From 5.19 [expr.const], "an integral constant expression can involve only literals (2.13), enumerators, const variables or static data members of integral or enumeration types initialized with constant expressions (8.5),...".
At the point at which you use test_int where a constant expression is required, it is a const variable declared extern and without any initializer and does not meet the requirements for a constant expression, despite the fact that you do actually initialize it with a integral constant expression in another translation unit. (*This is not completely clear from the wording of the standard but is my current interpretation of it.)
The restrictions in the standard disallow usages such as:
void f(int a, int b)
{
const int c = b;
switch (a)
{
case c:
//...
}
}
In your example, when the compiler is compiling test.cpp, it has no way to determine what the initializer might be in x_def.cpp. You might have done:
const int test_int = (int)time(NULL);
Clearly, in neither of these examples could the value of the const int be determined at compile time which is the intention for integral constant expressions.
|
1,870,035 | 1,870,052 | struct vs class as STL functor when using not2 | Studing STL I have written a a simple program to test functors and modifiers. My question is about the difference aon using CLASS or STRUCT to write a functor and try to operate on it with function adaptors.
As far as I understand in C++ the difference beetween CLASS and STRUCT is that in the last case the members are public by default. This is also what I read many times in the answers in this site. So please explain me why this short piece of code will fail to compile even if I declared all members ( just a function overloading () ) public when I try to use the not2 modifier. (I have not tried other modifiers e.g. binders yet)
#include <iostream>
#include <vector>
#include <functional>
#include <algorithm>
using namespace std;
template <class T>
void print (T i) {
cout << " " << i;
}
// In the manual I read:
// "In C++, a structure is the same as a class except that its members are public by default."
// So if I declare all members public it should work....
template <class T>
class mystruct : binary_function<T ,T ,bool> {
public :
bool operator() (T i,T j) const { return i<j; }
};
template <class T>
class generatore
{
public:
generatore (T start = 0, T stp = 1) : current(start), step(stp)
{ }
T operator() () { return current+=step; }
private:
T current;
T step;
};
int main () {
vector<int> first(10);
generate(first.begin(), first.end(), generatore<int>(10,10) );
first.resize(first.size()*2);
generate(first.begin()+first.size()/2, first.end(), generatore<int>(1,17) );
cout << "\nfirst :";
for_each (first.begin(), first.end(), print<int>);
cout << "\nFORWARD SORT :";
sort(first.begin(),first.end(),mystruct<int>()); // OK ! even with CLASS
for_each (first.begin(), first.end(), print<int>);
sort(first.begin(),first.end(),not2(mystruct<int>())); // <--- THIS LINE WILL NOT COMPILE IF I USE CLASS INSTEAD OF STRUCT
cout << "\nBACKWARD SORT :";
for_each (first.begin(), first.end(), print<int>);
cout << endl;
}
Everything runs as expected if I use:
struct mystruct : binary_function<T ,T ,bool> {
public :
bool operator() (T i,T j) const { return i<j; }
};
Part of the error message I obtain is:
g++ struct.cpp
/usr/include/c++/4.2.1/bits/stl_function.h: In instantiation of ‘std::binary_negate<mystruct<int> >’:
struct.cpp:52: instantiated from here
/usr/include/c++/4.2.1/bits/stl_function.h:116: error: ‘typedef int std::binary_function<int, int, bool>::first_argument_type’ is inaccessible
/usr/include/c++/4.2.1/bits/stl_function.h:338: error: within this context
/usr/include/c++/4.2.1/bits/stl_function.h:119: error: ‘typedef int std::binary_function<int, int, bool>::second_argument_type’ is inaccessible ....
It seems that at least in this case a struct is not equivalent to a class with public members, but why?
| The difference you read from other answers is correct. struct is just a class with public accessibility by default. This includes the inheritance modifier. Basically, you should mention public before the base class name when you're using a class to make those definitions equivalent:
template <class T>
class mystruct : public binary_function<T ,T ,bool> {
public:
bool operator() (T i,T j) const { return i<j; }
};
Otherwise, the compiler will assume that mystruct is privately inheriting binary_function<T,T,bool>.
You can verify this fact by changing the struct to:
struct mystruct : private binary_function<T ,T ,bool> {
public: // not required here
bool operator() (T i,T j) const { return i<j; }
};
which is equivalent to your current definition of the class and see the compiler whine with a similar error message.
|
1,870,047 | 1,870,345 | What encryption scheme meets requirement of decimal plaintext & ciphertext and preserves length? | I need an encryption scheme where the plaintext and ciphertext are composed entirely of decimal digits.
In addition, the plaintext and ciphertext must be the same length.
Also the underlying encryption algorithm should be an industry-standard.
I don't mind if its symmetric (e.g AES) or asymmetric (e.g RSA) - but it must be a recognized algorithm for which I can get a FIPS-140 approved library. (Otherwise it won't get past the security review stage).
Using AES OFB is fine for preserving the length of hex-based input (i.e. where each byte has 256 possible values: 0x00 --> 0xFF). However, this will not work for my means as plaintext and ciphertext must be entirely decimal.
NB: "Entirely decimal" may be interpreted two ways - both of which are acceptable for my requirements:
Input & output bytes are characters '0' --> '9' (i.e. byte values: 0x30 -> 0x39)
Input & output bytes have the 100 (decimal) values: 0x00 --> 0x99 (i.e. BCD)
Some more info:
The max plaintext & ciphertext length is likely to be 10 decimal digits.
(I.e. 10 bytes if using '0'-->'9' or 5 bytes if using BCD)
Consider following sample to see why AES fails:
Input string is 8 digit number.
Max 8-digit number is: 99999999
In hex this is: 0x5f5e0ff
This could be treated as 4 bytes: <0x05><0xf5><0xe0><0xff>
If I use AES OFB, I will get 4 byte output.
Highest possible 4-byte ciphertext output is <0xFF><0xFF><0xFF><0xFF>
Converting this back to an integer gives: 4294967295
I.e. a 10-digit number.
==> Two digits too long.
One last thing - there is no limit on the length any keys / IVs required.
| Use AES/OFB, or any other stream cipher. It will generate a keystream of pseudorandom bits. Normally, you would XOR these bits with the plaintext. Instead:
For every decimal digit in the plaintext
Repeat
Take 4 bits from the keystream
Until the bits form a number less than 10
Add this number to the plaintext digit, modulo 10
To decrypt, do the same but subtract instead in the last step.
I believe this should be as secure as using the stream cipher normally. If a sequence of numbers 0-15 is indistinguishable from random, the subsequence of only those of the numbers that are smaller than 10 should still be random. Using add/subtract instead of XOR should still produce random output if one of the inputs are random.
|
1,870,249 | 1,870,267 | Question about Pointers and Objects | My last question was a mess. I'm getting the wrong output.
So here I have in my main:
image = QImage(width, height, 32); // 32 Bit
Color amb(0.1,0.1,0.1);
Color difCoef(0.75,0.6,0.22);
Color spec(0.5,0.5,0.5);
double shineExp = 3.0;
Sphere *s = new Sphere(Point(0.0,0.0,-5), 100.0, amb, difCoef, spec, shineExp);
shapes.push_back(s);
Where shapes is vector < Shape * > shapes;
Shape *x = shapes[0];
cout << "Shine" << x->shine << endl;
Prints out zero even though the answer should be 3.0.
The following are my classes:
#include "shape.h"
class Sphere : public Shape
{
public:
Point centerPt;
double radius;
Color ambient;
Color dif;
Color spec;
double shine;
Sphere(Point center, double rad, Color amb, Color difCoef, Color specu, double shineVal)
{
centerPt = center;
radius = rad;
ambient = amb;
dif = difCoef;
spec = specu;
shine = shineVal;
}
};
class Shape
{
public:
Shape() {}
~Shape(){}
Color ambient;
Color dif;
Color spec;
double shine;
virtual bool checkIntersect(Point p, Point d, Point &temp) = 0; // If intersects, return true else false.
virtual Point getNormal(Point intPt) = 0; // Get the normal at the point of intersection
//virtual void printstuff() = 0;
};
| The problem is that you're repeating your variable declarations in the derived class. You don't need to redeclare variables like double shine, which are already in Shape, in the derived class Sphere. Since Sphere inherits from Shape, all the public member variables in Shape are automatically inherited in Sphere, and do not need to be redeclared. Redeclaring them will result in two different member variables, i.e. Sphere::shine is a totally different variable from Shape::shine.
Therefore, when you assign a value to Sphere::shine, and then later access an instance of Sphere with a base-class Shape pointer, the value of shine is not going to be what you expect.
|
1,870,329 | 1,870,359 | (c/c++) do copies of string literals share memory in TEXT section? | If I call a function like
myObj.setType("fluid");
many times in a program, how many copies of the literal "fluid" are saved in memory? Can the compiler recognize that this literal is already defined and just reference it again?
| I believe that in C/C++ there is no specified handling for that case, but in most cases the compiler would emit multiple copies of that string.
|
1,870,389 | 1,871,322 | Qt tells me that my SLOT doesnt exist, but with a make clean, make it doesnt complain anymore | When I download a fresh copy from our SVN, make, then run my program, Qt tells me that one of my SLOTS doesn't work, but a handy-dandy make clean then make seems to solve the problem. I continue to make changes in the code on my PC and that message never shows again.
C++
Qt 4.6
gcc
has anyone had this problem?
and ideas?
thanks
| Qt creates a whole bunch of metadata about your Q_OBJECT classes when you build. That metadata is stored in 'moc' files, one of which may have become inconsistent with your C++ code. It's usually a bad idea to store intermediate build stages in your version control system. I'd suggest running make clean, then looking at your VCS to find out what files got deleted, then commit the result.
In svn:
make clean
svn st # you should see some files deleted
svn rm <all the deleted files that svn st showed>
svn ci -m "Cleaned out intermediate build stages"
|
1,870,627 | 1,870,664 | Type traits definition. Traits blobs & Metafunctions | Reading some source code, I have found next traits definition:
namespace dds {
template <typename Topic> struct topic_type_support { };
template <typename Topic> struct topic_data_writer { };
template <typename Topic> struct topic_data_reader { };
template <typename Topic> struct topic_data_seq { };
}
#define REGISTER_TOPIC_TRAITS(TOPIC) \
namespace dds { \
template<> struct topic_type_support<TOPIC> { \
typedef TOPIC##TypeSupport type; }; \
template<> struct topic_data_writer<TOPIC> { \
typedef TOPIC##DataWriter type; }; \
template<> struct topic_data_reader<TOPIC> { \
typedef TOPIC##DataReader type; }; \
template<> struct topic_data_seq<TOPIC> { \
typedef TOPIC##Seq type; }; \
}
That looks weird to me. I would have grouped all the traits in a unique class like this:
namespace dds {
template <typename Topic> struct topic_traits { };
}
#define REGISTER_TOPIC_TRAITS(TOPIC) \
namespace dds { \
template<> struct topic_traits<TOPIC> { \
typedef TOPIC##TypeSupport type_support; \
typedef TOPIC##DataWriter data_writter; \
typedef TOPIC##DataReader data_reader; \
typedef TOPIC##Seq seq_type; \
}; \
}
Can any of you figure out why second approach could be more fragile than the first one or significantly harder to add new traits?
| Having a single template class is now called a "traits blob". "Traits blob" are not recommended as they do not work well with meta-function (i.e. compile-time functions).
A meta-function is a template that takes a class and performs some operation on it. Something like:
template <class T>
class metafunction
{
typename T::type value = ...;
}
You can then call the meta function for any of your traits by doing:
metafunction<topic_type_support<int> >::value;
metafunction<topic_data_writer<int> >::value;
You would not be able to call the meta-function with your traits blob class because there is now way to tell the metafunction which typedef to use.
If you want to learn more about meta-functions, I recommend the book C++ Template Metaprogramming.
|
1,870,662 | 1,870,679 | How do you use a C++ iterator? | I have a vector like so:
vector<MyType*> _types;
And I want to iterate over the vector and call a function on each of MyTypes in the vector, but I'm getting invalid return errors from the compiler. It appears the pos iterator isn't a pointer to MyType, it's something else. What am I not understanding?
Edit: Some code..
for (pos = _types.begin(); pos < _types.end(); pos++)
{
InternalType* inst = *pos->GetInternalType();
}
The compiler errors are:
invalid return type 'InternalType**' for overloaded 'operator ->'
'GetInternalType' : is not a member of 'std::_Vector_iterator<_Ty,_Alloc>'
Edit pt2
Should my vector contain pointers or objects? What are the pros and cons? If I am using new to create an instance, I am guessing I can only use a vector of pointers to MyType is that correct?
| If the vector contained objects, not pointers, you could do pos->foo(). The iterator "acts like" a pointer. But your vector contains pointers, so an iterator will act like a pointer to a pointer, so needs to be dereferenced twice.
MyType *pMyType = *pos; // first dereference
if (pMyType) { // make sure the pointer is not null
pMyType->foo(); // second dereference
}
If you are sure the pointer is not null, you could do this:
(*pos)->foo();
The parenthesis around *pos are needed so the dereference applies to pos, not to pos->foo(). Order of operations.
If your vector needs to contain items from a class hierarchy (e.g., subclasses of MyType), then you have to make it a vector of pointers. Otherwise a vector of objects is probably simpler.
|
1,871,198 | 1,871,213 | Member function still const if it calls functions that break "constness"? | I'm wondering if a class's member function should be const if it calls other functions that modify its data members. A good example would be a public member function that uses private member functions to do the brunt of the work.
void Foo::Show() const { // Doesn't directly modify data members in this function
ShowPig(); // One or all of these functions modify data members
ShowCow();
ShowBar();
}
Maybe not the best example, but you get the idea.
| If a function calls functions that modify certain data, then it should be said that the function itself modifies data. It just happens to be abstracted.
So no, that function should not be const.
|
1,871,375 | 1,871,409 | Python ctypes: initializing c_char_p() | I wrote a simple C++ program to illustrate my problem:
#include <cstring> // for strcpy

extern "C"{
int test(int, char*);
}
int test(int i, char* var){
if (i == 1){
strcpy(var,"hi");
}
return 1;
}
I compile this into an so. From python I call:
from ctypes import *
libso = CDLL("Debug/libctypesTest.so")
func = libso.test
func.restype = c_int
for i in xrange(5):
charP = c_char_p('bye')
func(i,charP)
print charP.value
When I run this, my output is:
bye
hi
hi
hi
hi
I expected:
bye
hi
bye
bye
bye
What am I missing?
Thanks.
| The string which you initialized with the characters "bye", and whose address you keep taking and assigning to charP, does not get re-initialized after the first time.
Follow the advice here:
You should be careful, however, not to
pass them to functions expecting
pointers to mutable memory. If you
need mutable memory blocks, ctypes has
a create_string_buffer function which
creates these in various ways.
A "pointer to mutable memory" is exactly what your C function expects, and so you should use the create_string_buffer function to create that buffer, as the docs explain.
|
1,871,379 | 1,871,527 | C++ static member functions and their scope | I have two questions.
In C++, a static member function has direct access to a public non-static data member defined in the same class?
False
In C++, a non-static member function has direct access to a private static data member defined in the same class?
True
My note say false for the first question and true for the second one. I just cannot find out why? Can you explain why this is? Thank you.
P.S. I'm studying for my final and I cannot seem to figure out why.
| Everyone's in agreement, but should be very careful about their wording, because actually static member functions do have access to public non-static data members. For that matter, they have access to private non-static data members too. They just need an object to operate on, to access its members. This could be a parameter, or a global, or created in the static member function, or acquired via one of those things.
The following code is fine:
class foo {
public:
int a;
// static member function "get_a" ...
static int get_a(foo *f) {
// ... accesses public non-static data member "a"
return f->a;
}
};
So we ask ourselves, what's the difference between "access" and "direct access"?
I guess what's meant by "direct access" here must be "using only the name of the data member, without specifying an object". Everyone always needs to have an object in order to access non-static members - that's what non-static means. Non-static member functions just don't have to mention which object if they don't want to, because this is implicit. Hence their access to non-static data members can be direct.
The reason non-static member functions have direct access to private static data members is firstly that the code is in a member of the class, hence it can access private data members. Second, you never need an object in order to access static data members (you can specify one if you want, but all that's used is the static type of the expression, not the actual object), hence the access is direct.
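To make the second case concrete, here is a minimal sketch (counter is a made-up class) of a non-static member function directly naming a private static data member:

```cpp
#include <cassert>

class counter {
    static int count;             // private *static* data member
public:
    void bump() { ++count; }      // non-static member: names "count" directly
    int value() const { return count; }
};

int counter::count = 0;           // the one shared instance of "count"
```

Every counter object shares the single count, and neither bump() nor value() needs to qualify it with an object or class name.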
|
1,871,563 | 1,871,640 | Building libcurl library in c++, Noob Question | I wanna write a little c++ program using libcurl. It's for a school project so I need to be able to package everything in a zip file and email it to my instructor.
I've just downloaded the tar from the libcurl website but now I'm not sure what the next step is... What else do I gotta do in order to be able to do #include "curl/curl.h" and call curl functions from my main function? Once I do that, how would I zip it and make sure my instructor will be able to compile it too? I'm using Ubuntu. Any help will be appreciated!
| 1) Download the source from here.
2) unpack with "tar xvzf tarfilename"
3) cd to newly created directory from the unpack
4) enter "./configure"
5) enter "make" and "make install"
6) write your program and remember to link against the library.
7) When ready to send to prof, I would zip the original libcurl source along with the instructions above and any other you used to get your project to work.
Edit - Something like:
g++ -g -Wall -o myapp myapp.cpp -L/usr/local/lib -lcurl
|
1,871,570 | 1,871,583 | Operation on different data types | Considering the basic data types like char, int, float, double etc..in any standard language C/C++, Java etc
Is there anything like "operating on integers is faster than operating on characters"? By operating I mean assignment, arithmetic ops, comparison, etc.
Are data types slower than one another?
| For almost anything you're doing this has almost no effect, but purely for informational purposes, it is usually fastest to work with data types whose size is the machine word size (i.e. 32 bits on x86 and 64 bits on amd64). Additionally, SSE/MMX instructions give you benefits as well if you can group values and work on them at the same time.
|
1,871,688 | 1,871,699 | is it possible to have a C/C++ GUI application in linux bare-bone server? | I am very disappointed with my school linux server when doing the homework on it.
The reason is: my homework requires to make GUI application.
All the tool that I have is:
- ssh from my local machine to school machine
- gcc/g++ in my school machine
I have been thinking and tried out different solutions for a week.
I still haven't been able to figure out how to bring a GUI to my application.
Here is some solutions I tried:
- Install some graphical library (sdl, ncurses...), but the school computer does not allow this because I'm not the root user
- Try to compile with X11 to produce an X-GUI application, then run it through ssh (tunneling). This does not work either because the school computer does not have the X11 header files.
So, What CAN I DO? Anybody has suggestion?
I will thank you million times if you could help for a solution.
Thanks you much.
tsubasa
| It should be possible to install most things, like ncurses or even X11, in user space (in your home directory), if you install them from source. With a Gnu package, you just use --prefix= as an argument to configure, like this:
./configure --prefix=/name/of/directory/to/install/into
I'm not sure about the other packages.
|
1,872,571 | 1,872,860 | NULL pointer compatibility with static_cast | Q1. Why does using NULL pointers with static_cast cause crashes while dynamic_cast and reinterpret_cast give a NULL pointer in return?
The problem occurred in a method similar to the one given below:
void A::SetEntity(B* pEntity, int iMyEntityType)
{
switch (iMyEntityType)
{
case ENTITY1:
{
Set1(static_cast<C*>(pEntity));
return;
}
case ENTITY2:
{
Set2(static_cast<D*>(pEntity));
return;
}
case ENTITY3:
{
Set3(static_cast<E*>(pEntity));
return;
}
}
}
Inheritance:
class X: public B
class Y: public B
class Z: public B
class C: public X, public M
class D: public Y, public M
class E: public Z, public M
Q2. Is static_casting from B to C/D/E valid? (this worked ok till the input became NULL)
I'm using gcc version 3.4.3
| What compiler are you using? A static cast from a base type to a derived type might result in an adjustment to the pointer - especially likely if multiple inheritance is involved (which doesn't seem to be the case in your situation from your description). However, it's still possible without MI.
The standard indicates that if a null pointer value is cast, the result will be a null pointer value (5.2.9/8 Static cast). However, I think that on many compilers most downcasts (especially when single inheritance is involved) don't result in a pointer adjustment, so I could imagine that a compiler might have a bug such that it wouldn't make the special check for null that would be required to avoid 'converting' a zero-valued null pointer into some non-zero, senseless pointer. I would assume that for such a bug to exist you must be doing something unusual to get the compiler to have to adjust the pointer in the downcast.
It might be interesting to see what kind of assembly code was generated for your example.
And for detailed information about how a compiler might layout an object that might need pointer adjustment with static casts, Stan Lippman's "Inside the C++ Object Model" is a great resource.
Stroustrup's paper on Multiple Inheritance for C++ (from 1989) is also a good read. It's too bad if a C++ compiler has a bug like I speculate about here - Stroustrup discusses the null pointer issue explicitly in that paper (4.5 Zero Valued Pointers).
For your second question:
Q2. Is static_casting from B to C/D/E valid?
This is perfectly valid as long as when you perform the cast of the B pointer to a C/D/E pointer the B pointer is actually pointing to the B sub-object of a C/D/E object (respectively) and B isn't a virtual base. This is mentioned in the same paragraph of the standard (5.2.9/8 Static cast). I've highlighted the sentences of the paragraph most relevant to your questions:
An rvalue of type “pointer to cv1 B”, where B is a class type, can be converted to an rvalue of type “pointer to cv2 D”, where D is a class derived (clause 10) from B, if a valid standard conversion from “pointer to D” to “pointer to B” exists (4.10), cv2 is the same cv-qualification as, or greater cv-qualification than, cv1, and B is not a virtual base class of D. The null pointer value (4.10) is converted to the null pointer value of the destination type. If the rvalue of type “pointer to cv1 B” points to a B that is actually a sub-object of an object of type D, the resulting pointer points to the enclosing object of type D. Otherwise, the result of the cast is undefined.
As a final aside, you can workaround the problem using something like:
Set1(pEntity ? static_cast<C*>(pEntity) : 0);
which is what the compiler should be doing for you.
|
1,872,575 | 1,872,590 | How to write a binary algorithm for C/C++ | I am having trouble writing the binary search algorithm in C/C++.
My question is this:
Apply binary algorithm to search for a number from 1 to 100 in a number guessing game.
The user will respond with 'y' for a correct guess, 'h' if the guess is too high or 'l' if the guess is too low.
I don't have any idea to apply it. Can someone just give me an example of the code.
| Detailed instructions here plus various implementations.
int guessNumber() {
    int low = 1;
    int high = 100;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // same as (low + high) / 2, but avoids overflow
        char answer = evaluateGuess(mid);  // returns 'l', 'h' or 'y'
        if ('y' == answer) {
            return mid;
        }
        if ('l' == answer) {
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    // If you get here the human player lied and the answer wasn't in [1..100]
    return -1;
}
|
1,872,624 | 1,872,765 | how to design CPM algorithm? | How do I represent a graph with a list data structure? I have three classes (Graph, Node, Edge) and would like to find the critical path in the graph.
How do I calculate:
ES : Earliest Start
EC : Earliest Complete
LS : Latest Start
LC : Latest Complete
thanks
| Another alternative for storing the graph is the Boost Graph Library (BGL). From what I see at wikipedia, the critical path is the longest path between two vertices. Furthermore it seems like finding the longest path is NP Complete for the general case but for a directed acyclic graph (DAG), which I think is your case, there are more efficient algorithms.
The longest path algorithm isn't in BGL but the DAG algorithm on wikipedia looks reasonably easy to implement.
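If you end up implementing it yourself, the two CPM passes are short. Below is a hedged sketch (all names invented; it assumes an activity-on-node graph whose nodes are already indexed in topological order, with dur holding each activity's duration and succ[u] listing u's successors):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Cpm {
    std::vector<int> dur;                   // duration of each activity
    std::vector<std::vector<int> > succ;    // succ[u] = successors of node u
    std::vector<int> es, ec, ls, lc;        // the four CPM quantities

    void solve() {
        const int n = static_cast<int>(dur.size());
        es.assign(n, 0); ec.assign(n, 0);
        // Forward pass: ES(v) = max EC over v's predecessors; EC(u) = ES(u) + dur(u).
        for (int u = 0; u < n; ++u) {
            ec[u] = es[u] + dur[u];
            for (std::size_t k = 0; k < succ[u].size(); ++k) {
                int v = succ[u][k];
                es[v] = std::max(es[v], ec[u]);
            }
        }
        const int finish = *std::max_element(ec.begin(), ec.end());
        // Backward pass: LC(u) = min LS over u's successors; LS(u) = LC(u) - dur(u).
        lc.assign(n, finish); ls.assign(n, 0);
        for (int u = n - 1; u >= 0; --u) {
            for (std::size_t k = 0; k < succ[u].size(); ++k)
                lc[u] = std::min(lc[u], ls[succ[u][k]]);
            ls[u] = lc[u] - dur[u];
        }
    }
};
```

Activities with ES == LS have zero slack; chained together, they form the critical path.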
|
1,872,714 | 1,872,839 | Can I use std::basic_string with things that aren't character types? | I've got a loop in my code that uses std::basic_string<HANDLE>, and then waits on it like this:
DWORD dwWaitResult = WaitForMultipleObjects((DWORD)handles.size(),
handles.data(),
FALSE, POLL_INTERVAL_MS);
It works fine, but when I turn on /W4 and /analyze, Visual C++ 2008 warns with the following (cut down and wrapped for brevity):
iosfwd(266) : warning C6001: Using uninitialized memory '*_First'
iosfwd(262) : while compiling class template member function
'HANDLE *std::char_traits<_Elem>::assign(_Elem *,size_t,_Elem)'
with [ _Elem=HANDLE ]
xstring(2155) : see reference to class template instantiation
'std::char_traits<_Elem>' being compiled
with [ _Elem=HANDLE ]
xstring(2148) : while compiling class template member function
'void std::basic_string<_Elem>::_Tidy(bool,unsigned int)'
with [ _Elem=HANDLE ]
.\MyCode.cpp(231) : see reference to class template instantiation
'std::basic_string<_Elem>' being compiled
with [ _Elem=HANDLE ]
iosfwd(222) : warning C6001: Using uninitialized memory '*_First1'
iosfwd(216) : while compiling class template member function
'HANDLE *std::char_traits<_Elem>::_Copy_s
(_Elem *,size_t,const _Elem *,size_t)'
with [ _Elem=HANDLE ]
Questions:
Is it legal to use std::basic_string with something that's not a simple character type (i.e. char or wchar_t)?
If it is, what should I do to get rid of these warnings? Answering #pragma warning(disable) needs justification.
If it's not, why not? And what should I use instead?
Extra credit will be given for bearing in mind that Boost is out, and we're restricted to the STL implementation in Visual C++ 2008; the Visual C++ bits of TR1 are allowed.
|
Yes, if you provide a template specialization of the class traits for your type.
Provide your specialized class traits. (Second template parameter of class std::basic_string).
It is. But have you thought about using std::vector instead of std::basic_string? (It does not have this traits template argument that provides string helper functions such as compare)
|
1,872,759 | 2,048,835 | Statically linking against library built with different version of C Runtime Library, ok or bad? | Consider this scenario:
An application links to 3rd party library A.
A is built using MSVC 2008 and is statically linking (ie. built with /MT) to the C Runtime Library v9.0.
The application is built using MSVC 2005 and is statically linking to A and (using /MT) to the C Runtime Library v8.0.
I can see trouble with this - for instance if types are changed in the headers between runtime library versions.
Is care taken to keep the runtime library headers compatible between versions, or should one always make sure all statically linked libraries are linking to the same version of the runtime library?
| It should not be a problem. Each library links to its own runtime and mostly functions independently from the other libraries in the process. The problem comes about when the library's ABI is badly defined. If any kind of heap-allocated object is allocated in one library, passed across a library boundary and 'freed' in another library, there are going to be problems, as the heap manager freeing the block is different from the heap manager that allocated it.
Any kind of C-runtime-defined struct, object or entity should not be passed across boundaries where a different runtime version might be in use: FILE*s obtained from one library, for example, will have no meaning to a different library linked against a different runtime.
As long as the library APIs use only raw types, and do not try to free() passed-in pointers, or pass out pointers to internally malloc()'d memory that they expect the application (or another library) to free(), you should be ok.
It's easy to fall for the FUD that "anything can go wrong" if C runtimes are mixed, but you have to remember that libs and dynamic libraries (.so / .dll / .dylib) have traditionally been developed in a wide variety of languages: allowing code written in asm, C, C++, Fortran, Pascal etc. to communicate via an efficient binary interface.
Why suddenly panic when C is being linked to C?
|
1,872,799 | 1,872,854 | Simple Distributed Computation (similar to summation) (in C++) | I'm looking for a framework / approach to do message passing distributed computation in C++.
I've currently got an iterative, single-threaded algorithm that incrementally updates some data model. The updates are literally additive, and I'd like to distribute (or at least parallelize) the computation hereof over as many machines+cores as possible. The data model can be viewed as a big array of (independent) floating point values.
Since the updates are all additive (i.e. commutative and associative), it's OK to merge in updates from other nodes in arbitrary order or even to batch merge updates. When it comes to applying updates, the map/reduce paradigm would work fine.
On the other hand, the updates are computed with respect to the current model state. Each step "corrects" some flaw, so it's important that the model used for computing the update is as fresh as possible (the more out of date the model, the less useful the update). Worst case, the updates are fully dependent, and parallelism doesn't do any good.
I've never implemented anything flexibly distributable, but this looks like a prime candidate. So, I'm looking for some framework or approach to distribute the updates (which consist mostly of floating point numbers and a few indexes into the array to pinpoint where to add the update). But, I'm unsure as to how:
I can broadcast updates to all connected processes. But that means massive network traffic, so I'd realistically need to batch updates; and then updates will be less current. This doesn't look scalable anyhow.
I can do some kind of ring topology. Basically, a machine sends the next machine the sum of its own updates and those of its predecessors. But then I'd need to figure out how to not duplicate updates; after all, the ring is circular and eventually its own updates will arrive as part of the sum of its predecessors.
or some kind of tree structure...
To recap, to get decent convergence performance, low latency is critical; the longer between update computation and update application, the less useful the update is. Updates need to be distributed to all nodes as quickly as possible; but because of the commutative and associative nature of the updates, it doesn't matter whether these updates are individually broadcast (probably inefficient) or arrive as part of a merged batch.
Does anybody know of any existing frameworks or approaches to speed up development? Or even just general pointers? I've never done anything quite like this...
| You probably want MPI (Message Passing Interface.) It's essentially the industry-standard for distributed computing. There are many implementations, but I would recommend OpenMPI because it's both free, and highly regarded. It provides you with a C API to pass messages between nodes, and also provides higher-level functionality like broadcast, all-to-all, reduce, scatter-gather, etc. It works over TCP, as well as faster, lower-latency interconnects like Infiniband or Myrinet, and supports various topologies.
There is also a Boost wrapper around MPI (Boost.MPI) that will provide you with a more C++ friendly interface.
|
1,872,806 | 1,873,345 | Is there any heap compaction in C++? | I have a notion that C++ runtime doesn't do any heap compaction which means that the address of an object created on heap never changes. I want to confirm if this is true and also if it is true for every platform (Win32, Mac, ...)?
| The C++ standard says nothing about a heap, nor about compaction. However, it does require that if you take the address of an object, that address stays the same throughout the object's lifetime.
A C++ implementation could do some kind of heap compaction and move objects around behind the scenes. But then the "addresses" it returns to you when you use the address-of operator are not actually memory addresses but some other kind of mapping.
In other words, yes, it is safe to assume that addresses in C++ stay the same while the object you're taking the address of lives.
What happens behind the scenes is unknown. It is possible that the physical memory addresses change (although common C++ compilers wouldn't do this, it might be relevant for compilers targeting various forms of bytecode, such as Flash), but the addresses that your program sees are going to behave nicely.
|
1,872,949 | 1,872,959 | Loading text from a file into a 2-dimensional array (C++) | I'm making a game and I have stored the map data in a 2-dimensional array of size [34][10]. Originally I generated the map using a simple function to fill up the array and saved this data to a file using the following code:
ofstream myFile;
myFile.open("map.txt");
for ( int y = 0 ; y < MAP_HEIGHT ; ++y )
{
for ( int x = 0 ; x < MAP_WIDTH ; ++x )
{
myFile << m_acMapData[x][y];
}
myFile << '\n';
}
myFile.close();
This outputs a text file which looks something like how I want it to. However, when I try to read it back in using the following code, I get a load of access violations and it crashes at runtime:
ifstream myFile;
int i=0;
int j=0;
char line[MAP_WIDTH];
myFile.open("map.txt");
while (!myFile.eof())
{
myFile.getline(line, MAP_WIDTH);
for ( i=0; i<MAP_WIDTH; i++ )
{
m_acMapData[i][j] = line[i];
}
++j;
cout << line;
}
Does anyone know what the problem is?
| while (!myFile.eof())
{
myFile.getline(line, MAP_WIDTH);
should be:
while ( myFile.getline(line, MAP_WIDTH) )
{
It would however be safer to read into a std::string:
string line;
while ( getline( myFile, line ) )
{
You might also want to read my blog on this subject, at http://punchlet.wordpress.com.
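Putting the fixes together, a minimal sketch of a corrected read loop (reusing the question's MAP_WIDTH, MAP_HEIGHT and array layout; loadMap is an invented helper name):

```cpp
#include <cassert>
#include <fstream>
#include <string>

const int MAP_WIDTH  = 34;
const int MAP_HEIGHT = 10;
char m_acMapData[MAP_WIDTH][MAP_HEIGHT];

// Reads back the file produced by the question's write loop: one text line
// per row, one char per column. Returns the number of rows actually read.
int loadMap(const char* path) {
    std::ifstream myFile(path);
    std::string line;
    int j = 0;
    while (j < MAP_HEIGHT && std::getline(myFile, line)) {
        for (int i = 0; i < MAP_WIDTH && i < static_cast<int>(line.size()); ++i)
            m_acMapData[i][j] = line[i];
        ++j;
    }
    return j;
}
```

Because the getline call controls the loop, a truncated or missing file simply yields fewer rows instead of reading past the end of the stream.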
|
1,873,110 | 1,879,588 | How to check if application runs from \program files\ | Is there a reliable method to check if an application is run from somewhere beneath program files?
If the user installs the application to program files on local machine, we need to put writable files somewhere else to avoid virtualization on Vista and Win7. When installed to a network disk, though, we want to keep these files with the installation for shared access among users.
Today we do a string comparison between the startup path and CSIDL_PROGRAM_FILES, but something tells me this is a very unreliable method.
Any smart solution out there?
Is there a 'IsRunningFromProtectedFolder( )'-api that I do not know about?
Are there any other folders giving the same problems as program files do?
| This is not a terribly good idea, as a user can install it wherever they want to, and then the check might fail. Instead have a checkbox when the user installs the app, deciding if it is installed locally or on a server.
|
1,873,113 | 1,921,274 | How to implement a video widget in Qt that builds upon GStreamer? | I want to use Qt to create a simple GUI application that can play a local video file. I could use Phonon which does all the work behind the scenes, but I need to have a little more control. I have already succeeded in implementing an GStreamer pipeline using the decodebin and autovideosink elements. Now I want to use a Qt widget to channel the output to.
Has anyone ever succeeded in doing this? (I suppose so since there are Qt-based video players that build upon GStreamer.) Can someone point me in the right direction on how to do it?
Note: This question is similar to my previous posted question on how to connect Qt with an incoming RTP stream. This seemed to be quite challenging. This question will be easier to answer I think.
Update 1
Patrice's suggestion to use libVLC is very helpful already. Here's a somewhat cleaner version of the code found on VLC's website:
Sample for Qt + libVLC.
However, my original question remains: How do I connect GStreamer to a Qt widget?
Update 2
After some experimentation I ended up with this working sample. It depends on GstWidget.h and GstWidget.cpp from my own little GstSupport library. However, take note that it is currently only tested on the Mac version of Qt.
| To connect Gstreamer with your QWidget, you need to get the window handle using QWidget::winId() and you pass it to gst_x_overlay_set_xwindow_id();
Rough sample code:
sink = gst_element_factory_make("xvimagesink", "sink");
gst_element_set_state(sink, GST_STATE_READY);
QApplication::syncX();
gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), widget->winId());
Also, you will want your widget to be backed by a native window which is achieved by setting the Qt::AA_NativeWindows attribute at the application level or the Qt::WA_NativeWindow attribute at the widget level.
|
1,873,219 | 1,873,233 | Strange Behaviour Class Objects Inside Union | Hi, I wanted to know the reason for the behaviour of the following code:
void main()
{
class test
{
public:
test(){}
int k;
};
class test1
{
public:
test1(){}
int k;
};
union Test
{
test t1;
test1 t2;
};
}
For the Above code it gives error "error C2620: union 'Test' : member 't1' has user-defined constructor or non-trivial default constructor"
class test
{
public:
//test(){}
int k;
};
class test1
{
public:
//test()1{};
int k;
};
union Test
{
test t1;
test1 t2;
};
For the Above, No Errors.
I wanted to know the Reasons.
Thank you in Advance. :)
| According to the C++ standard (§9.5.1, cited as well in other answers):
A union can have member functions (including constructors and destructors), but not virtual functions. A union shall not have base classes. A union shall not be used as a base class. An object of a class with a non-trivial constructor, a non-trivial copy-constructor, a non-trivial destructor, or a non-trivial copy assignment operator cannot be a member of a union, nor can an array of such objects. If a union contains a static data member, or a member of a reference type, the program is ill-formed.
I first linked to the Wikipedia article about POD types which states:
A POD type in C++ is defined as either a scalar type or a POD class. POD class has no user-defined copy assignment operator, no user-defined destructor, and no non-static data members that are not themselves PODs. Moreover, POD class must be an aggregate, meaning it has no user-declared constructors, no private nor protected non-static data, no bases and no virtual functions. The standard includes statements about how PODs must behave in C++.
and
In certain contexts, C++ allows only POD types to be used. For example, a union in C++ cannot contain a class that has virtual functions, or nontrivial constructors or destructors. This restriction is imposed because the compiler cannot know which constructor or destructor should be called for a union.
The first sentence of the second paragraph might make you think C++ only allows POD types to be part of a union. This isn't exactly the case as it allows a class with private members to be part of a union:
#include <iostream>
using namespace std;
class test1
{
int i;
};
class test2
{
int i;
};
union test
{
test1 t1;
test2 t2;
};
int main()
{
cout << __is_pod(test1) << endl;
cout << __is_pod(test2) << endl;
cout << __is_pod(test) << endl;
return 0;
}
The program above compiled with MSVC++ prints out:
0
0
1
|
1,873,334 | 1,873,409 | Staying away from virtual memory in Windows\C++ | I'm writing a performance-critical application where it's essential to store as much data as possible in physical memory before dumping to disk.
I can use ::GlobalMemoryStatusEx(...) and ::GetProcessMemoryInfo(...) to find out what percentage of physical memory is reserved\free and how much memory my current process handles.
Using this data I can make sure to dump when ~90% of the physical memory is in use or ~90 of the maximum of 2GB per application limit is hit.
However, I would like a method for simply recieving how many bytes are actually left before the system will start using the virtual memory, especially as the application will be compiled for both 32bit and 64bit, whereas the 2 GB limit doesnt exist.
| Even if you're able to stop your application from having memory paged out to disk, you'll still run into the problem that the VMM might be paging out other programs to disk and that might potentially affect your performance as well. Not to mention that another application might start up and consume memory that you're currently occupying and thus resulting in some of your applications memory being paged out. How are you planning to deal with that?
There is a way to use non-pageable memory via the non-paged pool but (a) this pool is comparatively small and (b) it's used by device drivers and might only be usable from inside the kernel. It's also not really recommended to use large chunks of it unless you want to make sure your system isn't that stable.
You might want to revisit the design of your application and try to work around the possibility of having memory paged to disk before you either try to write your own VMM or turn a Windows machine into essentially a DOS box with more memory.
|
1,873,352 | 1,875,190 | How do I convert a value from host byte order to little endian? | I need to convert a short value from the host byte order to little endian. If the target was big endian, I could use the htons() function, but alas - it's not.
I guess I could do:
swap(htons(val))
But this could potentially cause the bytes to be swapped twice, rendering the result correct but giving me a performance penalty which is not alright in my case.
| Something like the following:
unsigned short swaps( unsigned short val)
{
return ((val & 0xff) << 8) | ((val & 0xff00) >> 8);
}
/* host to little endian */
#define PLATFORM_IS_BIG_ENDIAN 1
#if PLATFORM_IS_LITTLE_ENDIAN
unsigned short htoles( unsigned short val)
{
/* no-op on a little endian platform */
return val;
}
#elif PLATFORM_IS_BIG_ENDIAN
unsigned short htoles( unsigned short val)
{
/* need to swap bytes on a big endian platform */
return swaps( val);
}
#else
unsigned short htoles( unsigned short val)
{
/* the platform hasn't been properly configured for the */
/* preprocessor to know if it's little or big endian */
/* use potentially less-performant, but always works option */
return swaps( htons(val));
}
#endif
If you have a system that's properly configured (such that the preprocessor knows whether the target id little or big endian) you get an 'optimized' version of htoles(). Otherwise you get the potentially non-optimized version that depends on htons(). In any case, you get something that works.
Nothing too tricky and more or less portable.
Of course, you can further improve the optimization possibilities by implementing this with inline or as macros as you see fit.
You might want to look at something like the "Portable Open Source Harness (POSH)" for an actual implementation that defines the endianness for various compilers. Note, getting to the library requires going though a pseudo-authentication page (though you don't need to register to give any personal details): http://hookatooka.com/poshlib/
|
1,873,766 | 1,873,891 | c++ templated container scanner | here's today's dilemma:
suppose I've
class A{
public:
virtual void doit() = 0;
}
then various subclasses of A, all implementing their good doit method. Now suppose I want to write a function that takes two iterators (one at the beginning of a sequence, the other at the end). The sequence is a sequence of A subclasses like say list<A*> or vector... The function should call all the doit methods while scanning the iterators... How to do this? I've thought of:
template<typename Iterator> void doList(Iterator &begin, Iterator &end) {
for (; begin != end; begin++) {
A *elem = (serializable*) *begin;
elem.doIt();
}
}
but gives weird errors... do you have better ideas or specific information? Is it possible to use list<A> instead of list<A*>?
| You can use std::for_each for that:
std::for_each( v.begin(), v.end(), std::mem_fun( &A::doIt ) );
The std::mem_fun will create an object that calls the given member function for its operator() argument. The for_each will call this object for every element within v.begin() and v.end().
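A self-contained sketch of the same pattern. One caveat for later compilers: std::mem_fun was deprecated in C++11 and removed in C++17, and std::mem_fn (used below) is the drop-in replacement:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

struct A {
    virtual ~A() {}
    virtual void doit() = 0;
};

static int g_calls = 0;          // just to observe that doit() really ran
struct Impl : A {
    void doit() { ++g_calls; }
};

// Calls doit() on every element in the sequence of A* pointers.
void doList(std::vector<A*>& v) {
    std::for_each(v.begin(), v.end(), std::mem_fn(&A::doit));
}
```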
|
1,873,773 | 1,873,804 | How can a template function 'know' the size of the array given as template argument? | In the C++ code below, the templated Check function gives an output that is not what I would like: it's 1 instead of 3. I suspect that K is mapped to int*, not to int[3] (is that a type?). I would like it to give me the same output than the second (non templated) function, to which I explicitly give the size of the array...
Short of using macros, is there a way to write a Check function that accepts a single argument but still knows the size of the array?
#include <iostream>
using namespace std;
int data[] = {1,2,3};
template <class K>
void Check(K data) {
cout << "Deduced size: " << sizeof(data)/sizeof(int) << endl;
}
void Check(int*, int sizeofData) {
cout << "Correct size: " << sizeofData/sizeof(int) << endl;
}
int main() {
Check(data);
Check(data, sizeof(data));
}
Thanks.
PS: In the real code, the array is an array of structs that must be iterated upon for unit tests.
| template<class T, size_t S>
void Check(T (&)[S]) {
cout << "Deduced size: " << S << endl;
}
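Applied to the array from the question (the function is renamed array_size here for clarity), the size is deduced from the argument's type at compile time, so no second parameter is needed:

```cpp
#include <cassert>
#include <cstddef>

// The reference-to-array parameter keeps the array from decaying to a
// pointer, so the compiler can deduce S.
template <class T, std::size_t S>
std::size_t array_size(T (&)[S]) {
    return S;
}
```

A plain pointer will not match this template at all, which is exactly what prevents the silent pointer decay seen in the original Check.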
|
1,873,782 | 1,873,822 | Create thread with specific privilege c++ | I have a multi-threaded application in which I want to create a thread with different user privileges (for example, multi-domain admin privileges),
but I can't find any Win32 API like CreateThread to do that.
How do I create a thread with specific user privileges?
thanks.
| Call CreateThread() with CREATE_SUSPENDED flag, then call SetThreadToken(), then ResumeThread().
|
1,874,051 | 1,874,109 | c++ multiple enums in one function argument using bitwise or "|" | I recently came across some functions where you can pass multiple enums like this:
myFunction(One | Two);
Since I think this is a really elegant way I tried to implement something like that myself:
void myFunction(int _a){
switch(_a){
case One:
cout<<"!!!!"<<endl;
break;
case Two:
cout<<"?????"<<endl;
break;
}
}
now if I try to call the function with One | Two, I want that both switch cases get called. I am not really good with binary operators so I dont really know what to do. Any ideas would be great!
Thanks!
| For that you have to make enums like :
enum STATE {
STATE_A = 1,
STATE_B = 2,
STATE_C = 4
};
i.e. the enum element values should be powers of 2 so that each one occupies a distinct bit and can be tested with a bitwise AND in a case or if statement.
So when you do like:
void foo( int state) {
if ( state & STATE_A ) {
// do something
}
if ( state & STATE_B ) {
// do something
}
if ( state & STATE_C ) {
// do something
}
}
int main() {
foo( STATE_A | STATE_B | STATE_C);
}
|
1,874,056 | 1,874,149 | How to get started with game programming using VC++,C++,DirectX quickly? | Hi, I am working in VC++, I am quite interested in game programming, and I have a few queries.
1) What must one know before starting game programming?
2) Can anybody give me info about resources like tutorials, links, etc. which would help me to start as fast as possible?
3) Also, can you give me info about some good books on game programming?
Any help would be greatly appreciated.
| Before you start, you must have a good understanding of the language, how to program, and how to structure and test your code. Oh, and a huge amount of either patience or free time. On the maths front, vectors, matrices and quaternions are the main things I found I needed.
The other thing that often gets overlooked when a programmer starts writing a game is someone to create the assets. Preferably someone who specializes in it.
You mention DirectX, which is not actually a fast way to go, as you have to build everything from square one: a lot more maths, performance testing and manual work overall. I would suggest at least using a rendering engine like Ogre3D. There are plenty of tutorials and a very good community.
There is a good post here on why you should write games, not engines.
The main reason you would want to use DirectX is to enhance your understanding of the lower levels, all the things an engine is abstracting for you. While I think this is a good thing to do, I wouldn't want to do it for a major or first project.
The main site I used for help was gamedev.net, although I also found some interesting articles on Gamasutra.
|
1,874,061 | 1,874,169 | template class with overridden operators | I want to add an operator overload to perform assignments ("set"s) inline.
Template :-
class CBase {
public :
static void SetupVmeInterface(CVmeInterface *in);
protected :
static CVmeInterface *pVmeInterface;
};
template <class T> class TCVmeAccess : public CBase {
public:
TCVmeAccess(int address);
T get()
{
unsigned long temp = pVmeInterface->ReadAddress(Address);
T ret = *reinterpret_cast<T*>(&temp);
return ret;
};
T *operator->();
unsigned long asLong();
bool set(T data)
{
unsigned long write_data = *reinterpret_cast<unsigned long*>(&data);
return pVmeInterface->WriteAddress(Address, write_data);
};
// void operator->(T);
void operator=(T data)
{ set(data); }
private :
int Address;
};
A struct that will be used in the template :-
typedef struct
{
int a: 1; // 0
int b: 1; // 1
int c: 1; // 2
int d: 1; // 3
int NotUsed : 28; // 31-4
} _HVPSUControl;
Code body :-
TCVmeAccess<_HVPSUControl> HVPSUControl(constHVPSUControlBlock);
_HVPSUControl hvpsu = HVPSUControl.get(); // Yep, good, but not as nice as...
int a = HVPSUControl2.get().OperationalRequestPort; // yep, also good, but...
int b = HVPSUControl->a; // works, and is all go so far
HVPSUControl.set(hvpsu); // works, but need _HVPSUControl type
HVPSUControl = hvpsu; // also works, as operator = is used, but still need type
// this line does not work!
// as the = assignment is redirected into a copy of the struct, not the template
HVPSUControl->a = 1; // this line
So, is there a way to get this line above to work?
Edit:
As in, I want "this line" to perform a "set", as the set() in the template class does.
Edit:
1. Assign a value directly in-line to a member of the struct that the template is formed of.
2. Cause that assignment to go through the template accessor.
So that I don't have to do this on assignments :-
// HVPSUControl is predefined and used many times.
_HVPSUControl hvpsu;
hvpsu.a = 1;
HVPSUControl.set(hvpsu);
I want to do
HVPSUControl.a = 1; // or
HVPSUControl->a = 1; // or ?
Gets already work, as in this line:
if (HVPSUControl->a)
| Instead of overloading the "->" and "=" operators, you could have the template class derive from the struct T.
template <class T> class TCVmeAccess : public CBase, public T {
public:
TCVmeAccess(int address);
T get();
// T *operator->();
unsigned long asLong();
bool set(T);
// void operator->(T);
// void operator=(T);
private :
int Address;
};
HVPSUControl.a = 1; // and use this for setting a bitfield.
Edit: If you want to use a custom assignment operator, you should declare it in _HVPSUControl, or even in a base class of it, if you have more of these control-like structures.
struct _HVPSUControl
{
int a: 1; // 0
int b: 1; // 1
int c: 1; // 2
int d: 1; // 3
int NotUsed : 28; // 31-4
void operator = (int x);
};
or
struct _HVPSUBase {
void operator = (int x);
}
struct _HVPSUControl: public _HVPSUBase
{
int a: 1; // 0
int b: 1; // 1
int c: 1; // 2
int d: 1; // 3
int NotUsed : 28; // 31-4
};
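A minimal, compilable sketch of the inheritance approach. The VME hardware access from the question is stubbed out here with an in-memory memcpy, purely for illustration; note that bit-field layout is implementation-defined, so real register code must verify the mapping on its target compiler:

```cpp
#include <cstring>

struct _HVPSUControl {
    int a : 1;  // 0
    int b : 1;  // 1
    int c : 1;  // 2
    int d : 1;  // 3
    int NotUsed : 28;  // 31-4
};

// Deriving from T makes the bit-field members directly assignable
// on the accessor object itself: ctrl.a = 1 just works.
template <class T>
struct TCVmeAccess : public T {
    unsigned long asLong() const {
        unsigned long v = 0;  // stand-in for the VME read-back
        std::memcpy(&v, static_cast<const T*>(this), sizeof(T));
        return v;
    }
};

TCVmeAccess<_HVPSUControl> HVPSUControl{};  // zero-initialized
// HVPSUControl.a = 1;  // direct member assignment now compiles
```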
|