question_id | answer_id | title | question | answer |
|---|---|---|---|---|
2,287,621 | 2,287,706 | How to calculate length of an UCS-2 string and its size in C++? | I have a string in UCS-2 encoding. I need to copy this string to another UCS-2 string. Before copying I need to calculate the length of the UCS-2 string for memory allocation.
How to calculate length of an UCS-2 string?
| UCS2 does not carry information about string length itself. Your original string representation either is 0-terminated, in which case you can just check for a 0 code unit (i.e. a 16-bit 0 value), or it is not 0-terminated, in which case you need some out-of-band information about the length of the string (such as a separately stored buffer size).
In general, UCS-2 is a subset of UTF-16 (http://www.unicode.org/faq/basic_q.html#14). Hence, you should be fine with UTF-16 tools, such as the ICU library (http://site.icu-project.org/).
|
2,287,757 | 2,289,638 | Saving a simple image buffer to png in C++ | I'd like to do this in a platform-independent way, and I know libpng is a possibility, but I find it hard to figure out how. Does anyone know how to do this in a simple way?
| There is a C++ wrapper for libpng called Png++. Check it here or just google it.
They have a real C++ interface with templates and such that uses libpng under the hood. I've found the code I have written quite expressive and high-level.
Example of a "generator", which is the heart of the algorithm:
class PngGenerator : public png::generator< png::gray_pixel_1, PngGenerator>
{
typedef png::generator< png::gray_pixel_1, PngGenerator> base_t;
public:
typedef std::vector<char> line_t;
typedef std::vector<line_t> picture_t;
PngGenerator(const picture_t& iPicture) :
base_t(iPicture.front().size(), iPicture.size()),
_picture(iPicture), _row(iPicture.front().size())
{
} // PngGenerator
png::byte* get_next_row(size_t pos)
{
const line_t& aLine = _picture[pos];
for(size_t i(0), max(aLine.size()); i < max; ++i)
_row[i] = pixel_t(aLine[i] == Png::White_256);
// Pixel value can be either 0 or 1
// 0: Black, 1: White
return row_traits::get_data(_row);
} // get_next_row
private:
// To be transformed
const picture_t& _picture;
// Into
typedef png::gray_pixel_1 pixel_t;
typedef png::packed_pixel_row< pixel_t > row_t;
typedef png::row_traits< row_t > row_traits;
row_t _row; // Buffer
}; // class PngGenerator
And usage is like this:
std::ostream& Png::write(std::ostream& out)
{
PngGenerator aPng(_picture);
aPng.write(out);
return out;
}
There were some bits of libpng still missing from the wrapper (interleaving options and such), but frankly I did not need them, so it was okay for me.
|
2,287,804 | 2,288,344 | pClass1 = (Class1*)pBase->next without (Class1*) cast | class Base
{
public:
    Base* next;
};
class Class1 : public Base
{
};
Base* pBase = new Base();
Class1* pTest = new Class1();
pBase->next = pTest;
Class1* pClass1;
pClass1 = (Class1*)pBase->next;
I want to be able to write
pClass1 = pBase->next;
and get no compilation error C2440 (cannot convert). Or in other words I want pClass1 point to a class that pBase->next points to.
Is it possible with some operator overloading? How to do it?
| template<class T>
class Base
{
public:
    T* next;
};
class Class1 : public Base<Class1>
{
};
Class1* pTest1 = new Class1();
Class1* pTest2 = new Class1();
pTest1->next = pTest2;
Class1* pClass1;
pClass1 = pTest1->next;
|
2,287,879 | 2,287,923 | Why can't convert TCHAR* to char* | error C2664: 'strcpy' : cannot convert parameter 1 from 'TCHAR *' to 'char *'
code:
LPCTSTR name, DWORD value
strcpy (&this->valueName[0], name);
error C2664: 'strlen' : cannot convert parameter 1 from 'LPCTSTR' to 'const char *'
LPCTSTR name;
strlen (name)
The code above belongs to a class which works fine in another project; I can't find the reason why it doesn't work in this MS VS2010 project.
| You need to use a function such as wcstombs when _UNICODE is defined. Alternatively, just use _tcslen (look under Generic-Text Routine Mappings) on the TCHAR string, and the preprocessor will map it to either strlen or wcslen depending on whether you are building with Unicode enabled.
|
2,287,922 | 2,288,252 | What library implements asynchronous processing of messages? | Help find a library that implements:
1) Publisher-subscriber. The publisher sends (SendMessage - not the WinAPI function) a message without knowing how many subscribers will receive it - possibly none.
2) Asynchronous. If a thread is free, the subscriber(s) should run in parallel with the code following the SendMessage call.
3) Smart pointers to parameters. The parameter can have a different type for each message, is created on the heap, and is available to all subscribers for reading. After all subscribers have finished, the memory allocated for the parameters is released.
4) A pool of threads. A thread is not destroyed after processing a message; it waits for a new one.
5) Optional: priorities, several thread pools, and mapping of messages to different pools.
| Have a look at Boost.Asio
|
2,287,962 | 2,288,897 | C++ - How to read Unicode characters( Hindi Script for e.g. ) using C++ or is there a better Way through some other programming language? | I have a hindi script file like this:
3. भारत का इतिहास काफी समृद्ध एवं विस्तृत है।
I have to write a program which adds a position to each and every word in each sentence.
Thus the numbering for every line for a particular word position should start off with 1 in parentheses. The output should be something like this.
3. भारत(1) का(2) इतिहास(3) काफी(4) समृद्ध(5) एवं(6) विस्तृत(7) है(8) ।(9)
The meaning of the above sentence is:
3. India has a long and rich history.
If you observe, the '।' (which is a full stop in Hindi, equivalent to '.' in English) also has a word position, and similarly other special symbols would too, as I am trying to do English-Hindi word alignment (a part of Natural Language Processing (NLP)); so the full stop '.' in English should map to '।' in Hindi. Serial numbers remain untouched.
I thought reading character by character could be a solution. How can I do this?
The thing is, I am able to get word positions for my English text using C++, as I was able to read character by character using ASCII values, but I don't have a clue how to go about the same for the Hindi text.
The final aim of all this is to see which word position of the English text maps to which postion in Hindi. This way I can achieve bidirectional alignment.
Thank you for your time...:)
| I would seriously suggest that you use Python for an application like this.
It will lift the burden of decoding the strings (not to mention allocating memory for them and the like). You will be free to concentrate on your problem, instead of on problems of the language.
The example below assumes the sentence above is contained in a UTF-8 file and that you are using Python 2.x.
If you use Python 3.x it is even more readable, as you don't have to prefix the unicode strings with u"..." as in this example (but you will be missing a lot of 3rd-party libraries):
separators = [u"।", u",", u"."]
text = open("indiantext.txt").read()
#This converts the encoded text to an internal unicode object, where
# all characters are properly recognized as an entity:
text = text.decode("utf-8")
#this breaks the text on the white spaces, yielding a list of words:
words = text.split()
counter = 1
output = u""
for word in words:
    #if the last char is a separator, and is joined to the word:
    if word[-1] in separators and len(word) > 1:
        #word up to the second-to-last char:
        output += word[:-1] + u"(%d) " % counter
        counter += 1
        #last char
        output += word[-1] + u"(%d) " % counter
    else:
        output += word + u"(%d) " % counter
    counter += 1
print output
This is an "unfolded" example; as you get more used to Python, there are shorter ways to express this. You can learn the basics of the language in just a couple of hours by following a tutorial (for example, the one at http://python.org itself).
|
2,288,171 | 2,288,322 | How to get 2 random (different) elements from a c++ vector | I would like to get 2 random different elements from an std::vector. How can I do this so that:
It is fast (it is done thousands of times in my algorithm)
It is elegant
The elements selection is really uniformly distributed
| For elegance and simplicity:
void Choose (const int size, int &first, int &second)
{
// pick a random element
first = rand () * size / RAND_MAX;
// pick a random element from what's left (there is one fewer to choose from)...
second = rand () * (size - 1) / RAND_MAX;
// ...and adjust second choice to take into account the first choice
if (second >= first)
{
++second;
}
}
using first and second to index the vector.
For uniformity, this is very tricky, since as size approaches RAND_MAX there will be a bias towards the lower values, and if size exceeds RAND_MAX there will be elements that are never chosen. One solution to overcome this is to use a binary search:
int GetRand (int size)
{
int lower = 0, upper = size;
do
{
int mid = (lower + upper) / 2;
if (rand () > RAND_MAX / 2) // not a great test, perhaps use parity of rand ()?
{
lower = mid;
}
else
{
upper = mid;
}
} while (upper != lower); // this is just to show the idea,
// need to cope with lower == mid and lower != upper
// and all the other edge conditions
return lower;
}
|
2,288,238 | 2,288,384 | Converting method signatures | typedef void (__thiscall* LPVOIDPROC) (void);
class ClassA
{
LPVOIDPROC m_pProc;
void SetProc(LPVOIDPROC pProc) { m_pProc = pProc; }
void OnSomeEvent() { m_pProc(); }
};
class ClassB
{
ClassA* pCA;
void Proc() { /* ... */ }
void Init()
{
// Assume pCA != NULL
pCA->SetProc((LPVOIDPROC)&ClassB::Proc); // error C2440
}
};
How do I get rid of this error C2440: 'type cast' : cannot convert from 'void (__thiscall ClassB::* )(void)' to 'LPVOIDPROC'? I don't want to limit the LPVOIDPROC signature to ClassB only. It should work for any class, and the referenced proc should not be static.
| Workaround:
typedef void (* CLASSPROC) (void *);
template<class T, void (T::*proc)()>
void class_proc(void * ptr)
{
(static_cast<T*>(ptr)->*proc)();
}
class ClassA
{
CLASSPROC m_pProc;
void * m_pInstance;
public:
void SetProc(void *pInstance, CLASSPROC pProc) {
m_pInstance = pInstance;
m_pProc = pProc;
}
void OnSomeEvent() { m_pProc(m_pInstance); }
};
class ClassB
{
ClassA* pCA;
void Proc() { /* ... */ }
void Init()
{
// Assume pCA != NULL
pCA->SetProc(this, class_proc<ClassB, &ClassB::Proc>);
}
};
|
2,288,291 | 2,289,265 | Is it a good practice to free memory via a pointer-to-const | There are many questions discussing the details of C and C++ dealing with pointer-to-const deletion, namely that free() does not accept them and that delete and delete[] do and that constness doesn't prevent object destruction.
What I am interested in is whether you think it is good practice to do so, not what the languages (C and C++) allow.
Arguments for pointer-to-const deletion include:
Linus Torvalds' kfree(), unlike C's free(), takes a void const* argument because he thinks that freeing the memory does not affect what is pointed to.
free() was designed before the introduction of the const keyword.
C++'s delete operators allow deletion of const data.
Arguments against it include:
Programmers do not expect data to be modified (or deleted) when they pass a pointer-to-const to it.
Many think that pointer-to-const implies not getting ownership of the data (but not that non-const would imply getting ownership).
This is the common practice seen in most libraries and existing code.
Please argue your case well in your responses and, if possible, refer to authorities. My intention is not to start a poll here.
| Well, here's some relevant stuff, possibly too long to fit into a comment:
Some time ago the practice of freeing memory via a pointer-to-const was plainly forbidden; see this Dr. Dobb's article, the "Language Law" ( :) ) part.
This has been discussed twice on http://groups.google.ru/group/comp.lang.c++.moderated: "Delete a const pointer?" and "Why can operator delete be called on a const pointer" (both actually deal with the case in question, i.e. a pointer to const).
My own point (since you are asking for arguments): possibility of the operation in question in any given context is defined by the (explicitly or implicitly defined in the documentation) contract of a class or a function, not by just the method signature or parameter types.
|
2,288,293 | 2,288,355 | Windows & C++: extern & __declspec(dllimport) | What is the difference/relationship between "extern" and "__declspec(dllimport")? I found that sometimes it is necessary to use both of them, sometimes one is enough.
Am I right that:
"extern" is for statically linked libraries,
"__declspec(dllimport)" is for DLL (dynamically linked libraries),
both do actually the same job for their respective type of linking,
you need to use both when you use import libraries (small .lib files that help linking with dll)?
| extern means that the entity has external linkage, i.e. is visible outside its translation unit (C or CPP file). The implication of this is that a corresponding symbol will be placed in the object file, and it will hence also be visible if this object file is made part of a static library. However, extern does not by itself imply that the symbol will also be visible once the object file is made part of a DLL.
__declspec(dllexport) means that the symbol should be exported from a DLL (if it is indeed made part of a DLL). It is used when compiling the code that goes into the DLL.
__declspec(dllimport) means that the symbol will be imported from a DLL. It is used when compiling the code that uses the DLL.
Because the same header file is usually used both when compiling the DLL itself as well as the client code that will use the DLL, it is customary to define a macro that resolves to __declspec(dllexport) when compiling the DLL and __declspec(dllimport) when compiling its client, like so:
#if COMPILING_THE_DLL
#define DLLEXTERN __declspec(dllexport)
#else
#define DLLEXTERN __declspec(dllimport)
#endif
To answer your specific questions:
Yes, extern alone is sufficient for static libraries.
Yes -- and the declaration also needs an extern (see explanation here).
Not quite -- see above.
You don't strictly need the extern with a __declspec(dllimport) (see explanation linked to above), but since you'll usually be using the same header file, you'll already have the extern in there because it's needed when compiling the DLL.
|
2,288,456 | 2,356,594 | Using desktop as canvas on linux |
I was wondering if somebody could help me out. I plan to make a clone of GeekTool for Linux, but I have no idea whether you can somehow use the Linux desktop as a canvas for drawing text, etc. I tried to google it but found nothing. What I need is basically to be able to draw text on certain parts of the desktop so it appears to be part of the wallpaper (from C++). Either that, or to be able to create borderless, transparent windows that can be clicked through and always stay in the background. If anyone could give me any pointers on where to start, I would be very happy.
Thanks for your help in advance :]
| You already accepted a partial answer, but I hope you will still read this.
It is true that the desktop background by convention is the root window. However, there are two important mechanics going on on a typical modern desktop:
root pixmap setting (wallpaper), which is not drawn on the background of the root window
enrichment of the desktop background (for example clickable icons) by obstructing the root window with another base level window
If you only want to draw on the background, only the latter matters to you. If you however also want to read the background, for example for real transparency, also the first point plays a role.
Drawing to the background:
The authors of the program xsnow and xpenguins were the first to deal with this problem. They wrote a clever function that can derive KDE and Gnome desktop windows if they are present. As other window managers that obstruct the root window tend to obey these de-facto standards, it works very reliably. With their code, you instantly know which window to draw to.
Reading the root background (pixmap):
This is harder. The naive query for window pixels will fail because all foreground windows are also part of the root window; so this makes it easy to do a screenshot, but not to get the real background.
There is a convention however, on a global name of the root pixmap (as used by any decent background pixmap setter). The pixmap can be found by querying for that name. It gets nasty however, if either the background setter sucks and does not obey to that rule, or the background is not a pixmap, but only a pattern or whatever.
The second option, of which I only found recently, is to use the XDBE (double buffer) extension to get the root window's background. This is very clean, only takes like two or three lines of codes and works in any case. But Xorg sees XDBE as deprecated (or, more precisely, soon-to-be deprecated). So I don't know if using it only for that purpose is still a good idea. But I can give you code on request!
Finally the implementation:
Yes, there is an implementation available for both things. Check out http://fopref.meinungsverstaerker.de/xmms-rootvis/
In that archive, which is GPL, getroot.c is taken from xpenguins, without dependencies to other xpenguins code.
Also, starting in line 144 of rootvis.c you will find the code to grab the background pixmap.
Have fun!
|
2,288,692 | 2,288,727 | What happens if I use "throw;" without an exception to throw? | Here's the setup.
I have a C++ program which calls several functions, all of which potentially throw the same exception set, and I want the same behaviour for the exceptions in each function
(e.g. print error message & reset all the data to the default for exceptionA; simply print for exceptionB; shut-down cleanly for all other exceptions).
It seems like I should be able to set the catch behaviour to call a private function which simply rethrows the error, and performs the catches, like so:
void aFunction()
{
try{ /* do some stuff that might throw */ }
catch(...){handle();}
}
void bFunction()
{
try{ /* do some stuff that might throw */ }
catch(...){handle();}
}
void handle()
{
try{throw;}
catch(anException)
{
// common code for both aFunction and bFunction
// involving the exception they threw
}
catch(anotherException)
{
// common code for both aFunction and bFunction
// involving the exception they threw
}
catch(...)
{
// common code for both aFunction and bFunction
// involving the exception they threw
}
}
Now, what happens if "handle" is called outside the context of an active exception (i.e. not from within a catch block)?
I'm aware that this should never happen, but I'm wondering whether the behaviour is undefined by the C++ standard.
| If handle() is called outside the context of an exception, you will throw without an exception being handled. In this case, the standard (see section 15.5.1) specifies that
If no exception is presently being handled, executing a throw-expression with no operand calls terminate().
so your application will terminate. That's probably not what you want here.
|
2,288,698 | 2,289,021 | Communicating between a ruby script and a running c++ program | I have a c++ program which performs one function. It loads a large data-file into an array, receives an array of integers and performs a lookup in that array, returning a single integer. I am currently calling the program with each integer as an argument, like so:
$ ./myprogram 1 2 3 4 5 6 7
I also have a ruby script, and I would like this script to utilize the c++ program.
Currently, I am doing this like so.
Ruby Code:
arguments = "1 2 3 4 5 6 7"
an_integer = %x{ ./myprogram #{arguments} }
puts "The program returned #{an_integer}" #=> The program returned 2283
This is all working properly, but my problem is that each time ruby makes this call, the c++ program has to reload the data-file (which is over 100mb) - very slow, and very inefficient.
How can I rewrite my c++ program to load the file only once, allowing me to make many lookups via a ruby script without reloading the file each time? Would using sockets be a sensible approach? Writing the c++ program as a ruby extension?
Obviously I am not an experienced c++ programmer, so thanks for your help.
| A possible approach is to modify your C++ program so that it takes its input from the standard input stream (std::cin) instead of from the command line parameters, and returns its result through the standard ouput (std::cout) instead of as main's return value. Your Ruby script would then use popen to launch the C++ program.
Assuming the C++ program currently looks like:
// *pseudo* code
int main(int argc, char* argv[])
{
large_data_file = expensive_operation();
std::vector<int> input = as_ints(argc, argv);
int result = make_the_computation(large_data_file, input);
return result;
}
It would be transformed into something like:
// *pseudo* code
int main(int argc, char* argv[])
{
large_data_file = expensive_operation();
std::string input_line;
// Read a line from standard input
while(std::getline(std::cin, input_line)){
std::vector<int> input = tokenize_as_ints(input_line);
int result = make_the_computation(large_data_file, input);
//Write result on standard output
std::cout << result << std::endl;
}
return 0;
}
And the Ruby script would look like
io = IO.popen("./myprogram", "r+")
while i_have_stuff_to_compute
  arguments = get_arguments()
  # Write arguments on the program's input stream
  io.puts(arguments)
  # Read reply from the program's output stream
  result = io.readline.to_i
end
io.close
|
2,288,730 | 2,288,742 | What's the scope of a type declaration within a class? | If a new type is declared within a class, like:
class foo {
public :
struct s1 {
int a ;
};
private :
struct s2 {
int b ;
};
};
then in what scope can the following statements be used:
s1 ss1;
s2 ss2;
Thanks in advance.
| The type s1 can be used anywhere, but if used outside of foo's member functions, it must be qualified:
foo::s1 ss1;
The type s2 can only be used in member functions of foo.
|
2,288,834 | 2,288,894 | CComVariant vs. _variant_t, CComBSTR vs. _bstr_t | I am using ATL (VS2008, so ATL9 IIRC) to create COM objects and have been using the CComVariant class (defined in atlcomcli.h) to manage VARIANT types. However, there is also another VARIANT wrapper called _variant_t. Is there any difference between CComVariant and _variant_t and which one should I be using?
Similarly, there are two BSTR wrappers available - CComBSTR and _bstr_t. Again, which should I prefer and why?
| _variant_t and _bstr_t are provided by the compiler as COM support classes and get used when you use constructs like #import. You can use them if you like.
CComVariant and CComBSTR are provided by the ATL libraries.
Whether you use the COM Support classes or the ATL classes is up to you. If you often need to do operations like attaching to 'raw' BSTRs or VARIANTs, the COM Support classes may be a safer bet.
There are some behavioural differences (check the docs), the most important of which seems to be that the COM Support classes will throw a _com_error& exception when something fails. If you don't want to do exception-handling, go with the ATL classes.
|
2,288,931 | 2,288,962 | How could I simulate _set_abort_behavior in VC++7 and earlier? | In Visual C++ when terminate() is called the default behavior is to call abort() which by default shows a message box and then - after OK button on the message box is pressed - terminates the application. The "shows message box" part is not very good for programs that must work without human interaction since the program just hangs until the button is pressed.
In VC++8 Microsoft introduced _set_abort_behavior() function that can be called at application startup and prohibit showing the message box in abort().
How do I achieve the same in VC++7 and earlier? I could write my custom terminate() handler, but what is the best action to invoke inside it so that the program terminates the same way as with abort() but without the message box?
| Call the operating system's process terminate function. TerminateProcess() on Windows.
|
2,288,970 | 2,289,025 | C++: How to build Strings / char* | I'm new to C++. I want to make a char*, but I don't know how.
In Java it is just this:
int player = 0;
int cpu = 0;
String s = "You: " + player + " CPU: " + cpu;
How can I do this? I need a char*.
I'm focusing on pasting the integer after the string.
| You almost certainly don't want to deal with char * if you can help it - you need the C++ std::string class:
#include <string>
..
std::string name = "fred";
or the related stringstream class:
#include <sstream>
#include <string>
#include <iostream>
using namespace std;
int main() {
int player = 0;
int cpu = 0;
ostringstream os;
os << "You: " << player << " CPU: " << cpu;
string s = os.str();
cout << s << endl;
}
if you really need a character pointer (and you haven't said why you think you do), you can get one from a string by using its c_str() member function.
All this should be covered by any introductory C++ text book. If you haven't already bought one, get Accelerated C++. You cannot learn C++ from internet resources alone.
|
2,289,048 | 2,289,113 | ReadWrite lock using Boost.Threads (how to convert this simple class) | I am porting some code from windows to Linux (Ubuntu 9.10). I have a simple class (please see below), which uses windows functions to implement simple mutex locking. I want to use Boost.Threads to reimplement this, but that library is new to me.
Can someone point out the changes I need to make to the class below, in order to use Boost.Threads instead of the WIN-specific functions?
#ifndef __my_READWRITE_LOCK_Header__
#define __my_READWRITE_LOCK_Header__
#include <windows.h>
//Simple RW lock implementation is shown below.
#define RW_READERS_MAX 10
#define RW_MAX_SEMAPHORE_COUNT 10
#define RW_MUTEX_NAME L"mymutex"
#define RW_SEMAPHORE_NAME L"mysemaphore"
class CThreadRwLock
{
public:
CThreadRwLock()
{
InitializeCriticalSection(&m_cs);
m_hSem = CreateSemaphore(0, RW_READERS_MAX, RW_READERS_MAX, 0);
}
~CThreadRwLock()
{
DeleteCriticalSection(&m_cs);
CloseHandle(m_hSem);
}
void AcquireReaderLock()
{
EnterCriticalSection(&m_cs);
WaitForSingleObject(m_hSem, INFINITE);
LeaveCriticalSection(&m_cs);
}
void AcquireWriterLock()
{
EnterCriticalSection(&m_cs);
for(int i = 0; i < RW_READERS_MAX; i++)
{
WaitForSingleObject(m_hSem, INFINITE);
}
LeaveCriticalSection(&m_cs);
}
void ReleaseReaderLock()
{
ReleaseSemaphore(m_hSem, 1, 0);
}
void ReleaseWriterLock()
{
ReleaseSemaphore(m_hSem, RW_READERS_MAX, 0);
}
private:
CRITICAL_SECTION m_cs;
HANDLE m_hSem;
};
class CProcessRwLock
{
public:
CProcessRwLock()
{
m_h = CreateMutex(NULL, FALSE, RW_MUTEX_NAME);
m_hSem = CreateSemaphore(NULL, RW_MAX_SEMAPHORE_COUNT, RW_MAX_SEMAPHORE_COUNT, RW_SEMAPHORE_NAME);
}
~CProcessRwLock()
{
CloseHandle(m_h);
}
void AcquireReaderLock()
{
WaitForSingleObject(m_h, INFINITE);
ReleaseMutex(m_h);
}
void AcquireWriterLock()
{
WaitForSingleObject(m_h, INFINITE);
for(int i = 0; i < RW_READERS_MAX; i++)
{
WaitForSingleObject(m_hSem, INFINITE);
}
ReleaseMutex(m_h);
}
void ReleaseReaderLock()
{
ReleaseSemaphore(m_hSem, 1, 0);
}
void ReleaseWriterLock()
{
ReleaseSemaphore(m_hSem, RW_READERS_MAX, 0);
}
private:
HANDLE m_h, m_hSem;
};
class AutoThreadRwLock
{
public:
AutoThreadRwLock(const bool readlock = true):m_readlock(readlock)
{
if (readlock)
m_lock.AcquireReaderLock();
else
m_lock.AcquireWriterLock();
}
~AutoThreadRwLock()
{
if (m_readlock)
m_lock.ReleaseReaderLock();
else
m_lock.ReleaseWriterLock();
}
private:
AutoThreadRwLock(const AutoThreadRwLock&);
AutoThreadRwLock& operator= (const AutoThreadRwLock& );
CThreadRwLock m_lock ;
bool m_readlock ;
};
class AutoProcessRwLock
{
public:
AutoProcessRwLock(const bool readlock = true): m_readlock(readlock)
{
if (readlock)
m_lock.AcquireReaderLock();
else
m_lock.AcquireWriterLock();
}
~AutoProcessRwLock()
{
if (m_readlock)
m_lock.ReleaseReaderLock();
else
m_lock.ReleaseWriterLock();
}
private:
AutoProcessRwLock(const AutoProcessRwLock&);
AutoProcessRwLock& operator= (const AutoProcessRwLock&);
CProcessRwLock m_lock ;
bool m_readlock ;
};
#endif //__my_READWRITE_LOCK_Header__
| I'm not going to re-write all your code for you. However, you should look into Boost's shared_mutex class.
Also, this question from StackOverflow shows how to use a boost::shared_mutex
|
2,289,128 | 2,298,805 | CallBack function from c# to c++ | I have a C# exe and some VC++ DLLs. I am creating a callback function in C# which takes a structure as its parameter. My C++ DLL will fill this structure and return it back, but 95% of the time my exe crashes. My DLL is multi-threaded and my C# code uses a BackgroundWorker. I have put a try/catch block in place to check whether any exceptions occur, but no exception is being caught. Can anyone suggest a better way to send data from my VC++ DLL to my C# exe?
| This should work:
(c#)
namespace Test
{
public class CallbackClass
{
public void Callback(string s)
{
MessageBox.Show(s);
}
}
}
(c++/cli)
...
(gcnew Test::CallbackClass())->Callback(gcnew System::String("woof!"));
...
|
2,289,168 | 2,289,192 | Is this code valid, or is it the compiler? | int cpu = 0;
int player = 0;
char * getPoints()
{
using namespace std;
string str = "You: ";
str += player;
str += " CPU: ";
str += cpu;
char c [100];
strcpy(c, str.c_str());
return c;
}
This code doesn't compile. Is the code wrong or is there something wrong with my compiler?
I'm using Microsoft Visual Studio with the DarkGDK.
If it is me, can someone improve it?
This is the output:
1>------ Build started: Project: Pong, Configuration: Debug Win32 ------
1>Compiling...
1>Main.cpp
1>c:\users\martijn\documents\visual studio 2008\projects\pong\pong\main.cpp(42) : warning C4172: returning address of local variable or temporary
1>Linking...
1>libcpmtd.lib(xdebug.obj) : warning LNK4098: defaultlib 'libcmt.lib' conflicts with use of other libs; use /NODEFAULTLIB:library
1>libcpmtd.lib(xdebug.obj) : error LNK2019: unresolved external symbol __malloc_dbg referenced in function "void * __cdecl operator new(unsigned int,struct std::_DebugHeapTag_t const &,char *,int)" (??2@YAPAXIABU_DebugHeapTag_t@std@@PADH@Z)
1>libcpmtd.lib(xdebug.obj) : error LNK2019: unresolved external symbol __free_dbg referenced in function "void __cdecl operator delete(void *,struct std::_DebugHeapTag_t const &,char *,int)" (??3@YAXPAXABU_DebugHeapTag_t@std@@PADH@Z)
1>libcpmtd.lib(stdthrow.obj) : error LNK2019: unresolved external symbol __CrtDbgReportW referenced in function "void __cdecl std::_Debug_message(wchar_t const *,wchar_t const *,unsigned int)" (?_Debug_message@std@@YAXPB_W0I@Z)
1>Debug\Pong.exe : fatal error LNK1120: 3 unresolved externals
1>Build log was saved at "file://c:\Users\Martijn\Documents\Visual Studio 2008\Projects\Pong\Pong\Debug\BuildLog.htm"
1>Pong - 4 error(s), 2 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
| The errors seem to be from problems with the compiler. Are you sure you have the project set up correctly?
The warning about returning a local variable is serious. You allocate the c array on the stack in your function. When the function returns, the array will be gone from the stack and the pointer you returned ends up pointing to garbage. Consider returning a string instead, or allocate c on the heap.
|
2,289,193 | 2,289,277 | How to test if template parameter is a pair associative container? | Let's imagine I want to make a templated function that returns the first element of any stl container. The general way would be :
template<typename Container>
typename Container::value_type first(Container c){
    return *(c.begin());
}
This works for vectors, lists, deques, sets and so on.
However, for pair associative containers (std::map), if would like to have
return c.begin()->second;
How could I test (in the function or with template specialization) if I have an pair associative container ?
STL container seem to have no traits attached to it. Is it possible to check if it has a ::key_type ?
| You can do this quite easily:
namespace result_of // pillaged from Boost ;)
{
template <class Value>
struct extract { typedef Value type; };
template <class First, class Second>
struct extract < std::pair<First,Second> > { typedef Second type; };
}
template <class Value>
Value extract(Value v) { return v; }
template <class First, class Second>
Second extract(std::pair<First,Second> pair) { return pair.second; }
template <class Container>
typename result_of::extract< typename Container::value_type >::type
first(const Container& c) { return extract(*c.begin()); }
I should note though, that I would probably add a test to see if the container is empty... Because if the container is empty, you're in for undefined behavior.
In movement:
int main(int argc, char* argv[])
{
std::vector<int> vec(1, 42);
std::map<int,int> m; m[0] = 43;
std::cout << first(vec) << " " << first(m) << std::endl;
}
// outputs
// 42 43
Example shamelessly taken from litb ;)
|
2,289,305 | 2,289,345 | Are there Visual C++ runtime implementations for other platforms? | Does Visual C++ runtime imply Windows platform? I mean if I write a program that only directly uses functions specific to VC++ runtime and doesn't directly call Windows API functions can it be recompiled and run on any OS except Windows? I don't mean on Windows system emulator, I mean a ready implementation of VC++ runtime for some other OS.
| The Visual C++ runtime contains the standard C++ library and platform specific auxiliary functions.
The Windows API is part of the Windows SDK, and is not included in the Visual C++ runtime.
When you compile a C++ program on a different platform you will use that platform's C++ library implementation.
I mean if I write a program that only directly uses functions specific to VC++ runtime and doesn't directly call Windows API functions can it be recompiled and run on any OS except Windows?
As long as you only use standard C++ functions and classes, yes.
I don't mean on Windows system emulator, I mean a ready implementation of VC++ runtime for some other OS.
The runtime itself is only available on Windows, as the implementation is very platform specific. As I have mentioned above, you only get source level compatibility and only if you don't use MS specific functions.
|
2,289,389 | 5,729,795 | Can I have platform specific sections in my vsprops (Property Sheet) file? | I'm creating a vsprops file to contain include and lib paths that are common to all projects in my solution.
However, I have platform specific paths for the lib paths which can be Win32/x64. Is it possible to put these settings in one vsprops file? Or do I have to create a different vsprops file for each platform and then spend time with the Property Manager in visual studio to ensure the correct ones are referenced?
| No, there doesn't seem to be a way; I ended up creating two different vsprops files.
|
2,289,481 | 2,291,475 | What is the mistake in my code? | The sample code mentioned below is not compiling. Why?
#include "QprogressBar.h"
#include <QtGui>
#include <QApplication>
#include<qprogressbar.h>
#include <qobject.h>
class myTimer: public QTimer
{
public:
myTimer(QWidget *parent=0):QTimer(parent)
{}
public slots:
void recivetime();
};
void myTimer::recivetime()
{
}
class Progressbar: public QProgressDialog
{
public:
Progressbar(QWidget *parent=0):QProgressDialog(parent)
{
}
};
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QObject::connect(QTimer,SIGNAL(timeout()),QTimer,SLOT(recivetime()));
return a.exec();
}
It is giving me a problem when it tries to connect. I think that maybe it is fine to write the connect code in the main function.
| To sum up the previous comments and answers:
the compiler tells you at least what it does not understand, if not straight-up what's wrong with your code => if you don't understand what the compiler says, post the error message with your question so that it helps those who speak "compilese"
"connect" will connect a object's signal with another object's slot -> pass objects to connect, not classes
the connected objects must exist for the intended duration of your connection. You are connecting now, at best, automatic instances of QTimer which will be out of scope by the time the connect call ends.
The correct way of doing this:
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
myTimer myTimerObject;
QObject::connect(&myTimerObject, SIGNAL(timeout()), &myTimerObject, SLOT(recivetime()));
return a.exec();
}
As a side note this has nothing to do with Symbian, nor is it specific to Qt 4.x. Also Qt is not QT just as QT is not Qt ;)
|
2,289,548 | 2,289,586 | Array indexing starting at a number not 0 | Is it possible to start an array at an index not zero...I.E.
you have an array a[35], of 35 elements, now I want to index at say starting 100, so the numbers would be a[100], a[101], ... a[134], is that possible?
I'm attempting to generate a "memory map" for a board and I'll have one array called SRAM[10000] and another called BRAM[5000] for example, but in the "memory" visibility they're contiguous, I.E. BRAM starts right after SRAM, so therefore if I try to point to memory location 11000 I would read it, see that it's over 10000, then pass it to bram.
While typing this I realized I could I suppose then subtract the 10K from the number and pass that into BRAM, but for the sake of argument, is this possible to index passing 11000 to BRAM?
Thank you for any help.
Updated to fix the a[34] to a[134]
Updated for additional information:
In the actual architecture I will be implementing, there can/may be a gap between the sram and bram so for example the address 11008 might not be visible in the memory map, thus writing a giant array full of memory then "partitioning" it will work, but I'll still have to do logic to determine if it's within the ranges of "sram and bram". Which is what I wanted to avoid in the first place.
| Is it possible to start an array at an index not zero...I.E. you have an array a[35], of 35 elements, now I want to index at say starting 100, so the numbers would be a[100], a[101], ... a[134], is that possible?
No, you cannot do this in C. Arrays always start at zero. In C++, you could write your own class, say OffsetArray and overload the [] operator to access the underlying array while subtracting an offset from the index.
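A minimal sketch of that C++ wrapper idea (the `OffsetArray` name and interface are illustrative, not a standard class):

```cpp
#include <cassert>
#include <cstddef>

// Wraps a plain array so that indexing starts at `Offset` instead of 0:
// OffsetArray<int, 100, 35> a; makes a[100] ... a[134] valid indices.
template <typename T, std::size_t Offset, std::size_t Size>
class OffsetArray {
public:
    T& operator[](std::size_t i)             { return data_[i - Offset]; }
    const T& operator[](std::size_t i) const { return data_[i - Offset]; }
private:
    T data_[Size];
};
```

The subtraction happens once per access in `operator[]`, so the rest of the code can pretend the array really starts at 100.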
I'm attempting to generate a "memory map" for a board and I'll have one array called SRAM[10000] and another called BRAM[5000] for example, but in the "memory" visibility they're contiguous, I.E. BRAM starts right after SRAM, so therefore if I try to point to memory location 11000 I would read it, see that it's over 10000, then pass it to bram.
You could try something like this:
char memory[150000];
char *sram = &memory[0];
char *bram = &memory[100000];
Now, when you access sram[110000] you'll be accessing something that's "in bram"
|
2,289,569 | 2,289,926 | C++: How to add a library in Netbeans (DarkGDK + DirectX SDK) | I'm trying to learn how to make games with DarkGDK. But I have to write in Visual Studio.
I don't like Visual Studio. Its suggestions (Ctrl-Space for Completion) are bad (in my opinion) and the compiler is broken (See my previous questions).
So I want to migrate to Netbeans, with MSys and MinGW. But I'm not able to use the DarkGDK
library in Netbeans. I added two include folders:
C:\Program Files\The Game Creators\Dark GDK\Include
C:\Program Files\Microsoft DirectX SDK (August 2007)\Include
After adding this include directories, I can #include <DarkGDK.h>.
But it shows a warning: "There are unresolved includes inside <DarkGDK.h>"
And when I try to compile: main.cpp:9:21: warning: DarkGDK.h: No such file or directory
In Visual Studio there are Include files and Library files. And in Netbeans, there is only Include Directories when I go to Tools -> Options -> C/C++ -> Code Assistance.
So, my question is: "How can I add the Library files in Netbeans"?
Or does any-one did this yet and knows how to do this.
| Personally I found the include directories in Tools -> Options don't work. You need to right click on your project and go to properties -> C++ Compiler and add your include directories. Then from properties -> Linker to add your library directories and libraries.
|
2,289,593 | 2,289,740 | How to select the version of the VC 2008 DLLs the application should be linked to? | I'm using Visual Studio 2008 SP1 for C++. When compiling, Visual Studio needs to choose against which version of the CRT and MFC DLLs the application should be linked, version 9.0.21022.8 (= RTM), 9.0.30729.17 (= SP1) or 9.0.30729.4148 (= SP1 with security update). I'd like to know how you can choose which of both versions will be linked against. Does anyone know?
Note: this is important when using a private assembly, because you need to know which versions of the VC 9.0 DLLs to copy along with the .exe.
Note that the _BIND_TO_CURRENT_VCLIBS_VERSION flag only makes sure that the right version is included in the manifest. The DLL version selection at runtime is apparently not done based upon the version that is included in the manifest file. Even if the manifest file says that v21022 should be used, the .exe uses the v30729 .DLLs. I know this, because it uses std::tr1::weak_ptr, which is not present in v21022.
| _BIND_TO_CURRENT_VCLIBS_VERSION sets the current version in the manifest - or the RTM version if not.
And setting it in the manifest is the correct way to do this.
What you are seeing however is the effects of an assembly policy file :- When the VCRedist package containing the 2008 SP1 runtime is installed, it installs a policy file into the WinSxS store with a bindingRedirect entry that redirects attempts to load the RTM runtime to the SP1 runtime.
So applications that specify the RTM runtime in their manifest will load the SP1 runtime, and applications that specify the SP1 runtime, will load the SP1 runtime.
If you actually DO want to use the RTM runtime, even when the SP1 runtime and policy files are installed, then you need to specify the RTM version in your manifest, AND make use of an application configuration file. Basically "yourappname.exe.config" (or "yourdllname.dll.2.config" if it's an isolation-aware dll causing grief).
Application configuration files can supply a bindingRedirect element that overrides any assembly version specified in the manifest, or policy files.
This config file will tell the OS to load the RTM runtime even if the SP1 runtime is installed :-
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<configuration>
<windows>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity type="win32" name="Microsoft.VC90.CRT" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"/>
<bindingRedirect oldVersion="9.0.30729.1" newVersion="9.0.21022.8"/>
</dependentAssembly>
</assemblyBinding>
</windows>
</configuration>
Note: oldVersion is allowed to be a range: oldVersion="9.0.30729.1-9.1.0.0"
See: Application Configuration Files documented on MSDN.
|
2,289,637 | 2,358,297 | Directdraw: Rotate video stream | Problem
Windows Mobile / Directdraw: Rotate video stream
The video preview is working, all I need now is a way to rotate the image. I think the only way to handle this is to write a custom filter based on CTransformFilter that will rotate the camera image for you. If you can help me to solve this problem, e.g. by helping me to develop this filter with my limited DirectDraw knowledge, the bounty is yours.
Background / Previous question
I'm currently developing an application for a mobile device (HTC HD2, Windows Mobile 6). One of things the program needs to do is to take pictures using the built-in camera. Previously I did this with the CameraCaptureDialog offered by the Windows Mobile 6 SDK, but our customer wants a more user-friendly solution.
The idea is to preview the camera's video stream in a control and take a high resolution picture (>= 2 megapixels) using the camera's photo function, when the control is clicked. We did some research on the topic and found out the best way to accomplish this seems to be using Direct Draw.
The downsides are that I never really used any native Windows API and that my C++ is rather bad. In addition to this I read somewhere that the Direct Draw support of HTC phones is particularly bad and you will have to use undocumented native HTC library calls to take high quality pictures.
The good news is that a company offered us to develop a control that meets the specifications stated above. They estimated it would take them about 10 days, which lead to the discussion if we could develop this control ourself within a reasonable amount of time.
It's now my job to research which alternative is better. Needless to say it's far too little time to study the whole architecture and develop a demo, which led me to the following questions:
Questions no longer relevant!
Does any of you have experience with similar projects? What are your recommendations?
Is there a good Direct Draw source code example that deals with video preview and image capturing?
| Well if you look at the EZRGB24 sample you get the basics of a simple video transform filter.
There are 2 things you need to do to the sample to get it to do what you want.
1) You need to copy x,y to y,x.
2) You need to tell the media sample that the sample is now Height x Width instead of Width x Height.
Bear in mind that the final image will have exactly the same number of pixels.
To solve 1 is relatively simple. You can calculate the position of a pixel by doing "x + (y * Width)". So you step through each x and y, calculate the position that way, and then write it to "y + (x * Height)". This will transpose the image. Of course without step 2 this will look completely wrong.
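The index arithmetic from step 1 can be sketched outside of DirectShow; here it is over a single-channel 8-bit buffer (in the real filter the same loop would run per RGB24 pixel inside the transform callback):

```cpp
#include <cassert>

// Transpose a width x height buffer: the pixel at (x, y) moves to (y, x).
// src is indexed as x + y * width; dst is indexed as y + x * height,
// exactly the mapping described in step 1 above.
void transpose(const unsigned char* src, unsigned char* dst,
               int width, int height) {
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            dst[y + x * height] = src[x + y * width];
}
```

Note that this is a transpose (a mirror across the diagonal); for a true 90-degree rotation you would additionally flip one axis, e.g. read from `(width - 1 - x)` instead of `x`.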
To solve 2 you need to get the AM_MEDIA_TYPE of the input sample. You then need to find out what the formatType is (Probably FormatType_VideoInfo or FormatType_VideoInfo2). You can thus cast the pbFormat member of AM_MEDIA_TYPE to either a VIDEOINFOHEADER or a VIDEOINFOHEADER2 (Depending on the FormatType). You need to now set VIDEOINFOHEADER[2]::bmiHeader.biWidth and biHeight to the biHeight and biWidth (respectively) of the input media sample. Everything else should be the same as the input AM_MEDIA_TYPE.
I hope that helps a bit.
|
2,289,785 | 2,289,900 | Strange Serial MisComunication | Ok, so I have 3 devices.
an AVR Butterfly microcontroller, set up with USART
A Bifferboard, running Debian, using a custom made program for serial.
A Desktop machine running Br@y's.
So I'm trying to make the Bifferboard send serial to the AVR, but the AVR never receives the signal (we've checked the wires). But if I connect the AVR to the desktop box, and send with Br@y's, it receives just fine.
If I connect the Bifferboard to the Desktop, Br@y's receives just fine.
Heres the code for the Bifferboard.
#include "string2num.h" //a custom header
#include <cstdlib>
#include <iostream>
#include <SerialStream.h>
using namespace LibSerial;
//using namespace std;
int main(int argc, char*argv[])
{
if (argc<2)
{
std::cout<<argv[0]<<" requires the device name eg \'dev/tty0\' as a parameter\nterminating.\n";
return 1;
}
SerialStream theSerialStream(argv[1]); //open the device
if(!theSerialStream.IsOpen()) //did the device succesfuilly open
{ //open failed
std::cerr<<"Open " << argv[1] << " failed\n Terminating.\n";
return 1; //exit failure
}
theSerialStream.SetVMin(0);//no min number of characters to send
theSerialStream.SetVTime(0);// don't wait between characters
theSerialStream.SetBaudRate( SerialStreamBuf::BAUD_19200);
theSerialStream.SetCharSize(SerialStreamBuf::CHAR_SIZE_8); //8
theSerialStream.SetParity(SerialStreamBuf::PARITY_NONE);// N
theSerialStream.SetNumOfStopBits(1);// 1
theSerialStream.SetFlowControl(SerialStreamBuf::FLOW_CONTROL_NONE);
std::cout<<"Ready for serial trasmission. Press Ctrl+C to quit\n";
//insert basic instructions here
while (1)
{
char input[BUFSIZ];
std::cin>>input;
char* values=getAllValues(input); //DECODE any formatting (this function is in the custom header)
std::cout<<"about to transmit: " << values << "\n";
theSerialStream << values;
free(values);
}
theSerialStream.Close();
return 0;
}
I've also tried using minicom from the Bifferboard - it can talk to the desktop Windows machine, but not to the AVR.
|
(We've checked the wires)
This still sounds like a cabling problem. If Br@y's can communicate with both, then it doesn't seem to be a configuration issue. You should throw a logic analyzer or oscilloscope on the receive pin (and probably probe other pins) of the AVR and see what's happening electrically when you try to send data from the Bifferboard.
I'd bet that you see the data on some other pin. But I wouldn't bet a whole lot, because serial RS232 connectivity is such a touchy thing.
|
2,289,992 | 2,290,456 | How to hide menu? lpszMenuName | I managed to make the menu with this piece of code and using Visual Studio 2008:
WNDCLASS wc;
...
wc.lpszMenuName = MAKEINTRESOURCE(IDR_MENU1);
...
if(!RegisterClass(&wc))
...
But how can I hide the menu by pressing a button of my choice? There is the ShowWindow() function, but it doesn't work on menus... so what function do I use to hide the menu?
| I think you can do something like this:
// save the menu
HMENU hMenuOld = GetMenu(hWnd);
// hide the menu
SetMenu(hWnd, NULL);
// show the menu
SetMenu(hWnd, hMenuOld);
|
2,290,007 | 2,294,359 | Beginner extending C with Python (specifically Numpy) | I am working on a real time audio processing dynamically linked library where I have a 2 dimensional C array of floating point data which represents the audio buffer. One dimension is time (samples) and the other is channel. I would like to pass this to a python script as a numpy array for the DSP processing and then I would like to pass this back to C so the data can carry on down the processing chain in C. The member function in C++ which does the processing looks like this:
void myEffect::process (float** inputs, float** outputs, int buffersize)
{
//Some processing stuff
}
The arrays inputs and outputs are of equal size. The integer buffersize is the number of columns in the inputs and outputs arrays. On the python side I would like the processing to be carried out by a function which looks like the following:
class myPyEffect
...
...
def process(self,inBuff):
#inBuff and outBuff should be numpy arrays
outBuff = inBuff * self.whatever # some DSP stuff
return outBuff
...
...
Now, my question is, how can I go about getting the data in and out of C in the most efficient way possible (avoiding unnecessary memory copying etc.)? So far, for simple parameter changes I have been using C-API calls like the following:
pValue = PyObject_CallMethod(pInstance, "setParameter", "(f)", value);
Do I use something similar for my numpy arrays or is there a better way? Thanks for reading.
| You may be able to avoid dealing with the NumPy C API entirely. Python can call C code using the ctypes module, and you can access pointers into the numpy data using the array's ctypes attribute.
Here's a minimal example showing the process for a 1d sum-of-squares function.
ctsquare.c
#include <stdlib.h>
float mysumsquares(float * array, size_t size) {
float total = 0.0f;
size_t idx;
for (idx = 0; idx < size; ++idx) {
total += array[idx]*array[idx];
}
return total;
}
compilation to ctsquare.so
These command lines are for OS X, your OS may vary.
$ gcc -O3 -fPIC -c ctsquare.c -o ctsquare.o
$ ld -dylib -o ctsquare.so -lc ctsquare.o
ctsquare.py
import numpy
import ctypes
# pointer to float type, for convenience
c_float_p = ctypes.POINTER(ctypes.c_float)
# load the library
ctsquarelib = ctypes.cdll.LoadLibrary("ctsquare.so")
# define the return type and arguments of the function
ctsquarelib.mysumsquares.restype = ctypes.c_float
ctsquarelib.mysumsquares.argtypes = [c_float_p, ctypes.c_size_t]
# python front-end function, takes care of the ctypes interface
def myssq(arr):
# make sure that the array is contiguous and the right data type
arr = numpy.ascontiguousarray(arr, dtype='float32')
# grab a pointer to the array's data
dataptr = arr.ctypes.data_as(c_float_p)
# this assumes that the array is 1-dimensional. 2d is more complex.
datasize = arr.ctypes.shape[0]
# call the C function
ret = ctsquarelib.mysumsquares(dataptr, datasize)
return ret
if __name__ == '__main__':
a = numpy.array([1,2,3,4])
print 'sum of squares of [1,2,3,4] =', myssq(a)
|
2,290,154 | 2,290,227 | Multiple DLLs writing to the same text file? | I want to add logging support to a COM object (DLL) of mine, and there are usually at least two instances of this object loaded. I want both of the DLLs to be able to write lines to the same text file but am worried that this is going to give me problems. Is this possible to achieve? Am I right in thinking that a single Windows API WriteFile call is atomic? Can two processes both open the same file for writing?
I'd like to use the STL for the file handling here (std::ofstream) if possible. My other idea is to use a separate log per-DLL but a single log would be much easier to manage.
| #include <iostream>
#include <fstream>
#include <windows.h>
struct Mutex {
Mutex () {
h = ::CreateMutex(0, false, "{any-GUID-1247965802375274724957}");
}
~Mutex () {
::CloseHandle(h);
}
HANDLE h;
};
Mutex mutex; // GLOBAL mutex
void dll_write_func() {
::WaitForSingleObject(mutex.h, INFINITE);
////////////////
// Write here
std::ofstrem f("output.txt", std::ios::app); // std::ofstream; append so each write doesn't truncate the shared log
f << "Test" << std::endl;
////////////////
::ReleaseMutex(mutex.h);
}
Or
struct MutexLock {
explicit MutexLock(Mutex & m) : m(m) {
::WaitForSingleObject(m.h, INFINITE);
}
~MutexLock() {
::ReleaseMutex(m.h);
}
Mutex & m;
};
void dll_write_func2() {
MutexLock mlock(mutex);
// Write here
std::ofstream f("output.txt", std::ios::app); // append so each write doesn't truncate the shared log
f << "Auto mutex Release" << std::endl;
}
|
2,290,299 | 2,290,356 | Calling a member function from a member function templated argument | Given the following code which I can't get to compile.
template < typename OT, typename KT, KT (OT::* KM)() const >
class X
{
public:
KT mfn( const OT & obj )
{
return obj.*(KM)(); // Error here.
}
};
class O
{
public:
int func() const
{
return 3;
}
};
int main( int c, char *v[] )
{
int a = 100;
X< O, int, &O::func > x;
O o;
std::cout << x.mfn( o ) << std::endl;
}
I get the folling error message
error: must use '.*' or '->*' to call pointer-to-member function in '&O::func (...)'
I thought I was using .* but I've obviously got something wrong.
How do I call the member function ?
I've tried
return obj.*(template KM)();
return obj.*template (KM)();
return obj.template *(KM)();
None of which worked.
| The correct syntax is
return (obj.*KM)();
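With that one-line change the original example compiles and runs; the fixed template for reference:

```cpp
#include <cassert>

// Same template as in the question, with the corrected call syntax:
// the .* expression must be parenthesized before applying ().
template <typename OT, typename KT, KT (OT::* KM)() const>
class X {
public:
    KT mfn(const OT& obj) {
        return (obj.*KM)(); // fixed line
    }
};

class O {
public:
    int func() const { return 3; }
};
```

Without the parentheses the compiler parses `obj.*(KM)()` as `obj .* ((KM)())`, i.e. it tries to call `KM` first, which is what the error message complains about.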
|
2,290,306 | 2,319,494 | boost::thread_group - is it ok to call create_thread after join_all? | I have the following situation:
I create a boost::thread_group instance, then create threads for parallel-processing on some data, then join_all on the threads.
Initially I created the threads for every X elements of data, like so:
// begin = someVector.begin();
// end = someVector.end();
// batchDispatcher = boost::function<void(It, It)>(...);
boost::thread_group processors;
// create dispatching thread every ASYNCH_PROCESSING_THRESHOLD notifications
while(end - begin > ASYNCH_PROCESSING_THRESHOLD)
{
NotifItr split = begin + ASYNCH_PROCESSING_THRESHOLD;
processors.create_thread(boost::bind(batchDispatcher, begin, split));
begin = split;
}
// create dispatching thread for the remainder
if(begin < end)
{
processors.create_thread(boost::bind(batchDispatcher, begin, end));
}
// wait for parallel processing to finish
processors.join_all();
but I have a problem with this: When I have lots of data, this code is generating lots of threads (> 40 threads) which keeps the processor busy with thread-switching contexts.
My question is this: Is it possible to call create_thread on the thread_group after the call to join_all.
That is, can I change my code to this?
boost::thread_group processors;
size_t processorThreads = 0; // NEW CODE
// create dispatching thread every ASYNCH_PROCESSING_THRESHOLD notifications
while(end - begin > ASYNCH_PROCESSING_THRESHOLD)
{
NotifItr split = begin + ASYNCH_PROCESSING_THRESHOLD;
processors.create_thread(boost::bind(batchDispatcher, begin, split));
begin = split;
if(++processorThreads >= MAX_ASYNCH_PROCESSORS) // NEW CODE
{ // NEW CODE
processors.join_all(); // NEW CODE
processorThreads = 0; // NEW CODE
} // NEW CODE
}
// ...
Whoever has experience with this, thanks for any insight.
| I believe this is not possible. The solution you want might actually be to implement a producer-consumer or a master-worker (main 'master' thread divides the work in several fixed size tasks, creates pool of 'workers' threads and sends one task to each worker until all tasks are done).
These solutions will demand some synchronization through semaphores, but they will balance the load well: you can create one thread for each available core in the machine, avoiding wasted time on context switches.
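A sketch of the fixed-size pool idea: N workers repeatedly claim the next chunk index from a shared counter until the work runs out. Written here with C++11 `std::thread`/`std::atomic` for brevity; the same structure works with `boost::thread` and an explicitly locked counter.

```cpp
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// Square every element of `data` in place, processing it in chunks of
// `chunk` elements (must be > 0) with at most `workers` threads alive.
void parallel_square(std::vector<int>& data, std::size_t chunk, unsigned workers) {
    std::atomic<std::size_t> next(0); // next unclaimed chunk start
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.push_back(std::thread([&] {
            for (;;) {
                std::size_t begin = next.fetch_add(chunk); // claim a chunk
                if (begin >= data.size()) return;          // no work left
                std::size_t end = begin + chunk;
                if (end > data.size()) end = data.size();
                for (std::size_t i = begin; i < end; ++i)
                    data[i] *= data[i];
            }
        }));
    }
    for (std::size_t i = 0; i < pool.size(); ++i)
        pool[i].join();
}
```

The thread count stays fixed at `workers` no matter how much data there is, which is the property the while-loop in the question lacks.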
Another not-so-good-and-fancy option is to join one thread at a time. You can have a vector with 4 active threads, join one and create another. The problem of this approach is that you may waste processing time if your tasks are heterogeneous.
|
2,290,587 | 2,290,838 | GCC style weak linking in Visual Studio? | GCC has the ability to make a symbol link weakly via __attribute__((weak)). I want to use the a weak symbol in a static library that users can override in their application. A GCC style weak symbol would let me do that, but I don't know if it can be done with visual studio.
Does Visual Studio offer a similar feature?
| MSVC++ has __declspec(selectany) which covers part of the functionality of weak symbols: it allows you to define multiple identical symbols with external linkage, directing the compiler to choose any one of several available. However, I don't think MSVC++ has anything that would cover the other part of weak symbol functionality: the possibility to provide "replaceable" definitions in a library.
This, BTW, makes one wonder how the support for standard replaceable ::operator new and ::operator delete functions works in MSVC++.
|
2,290,733 | 2,290,749 | Initialize parent's protected members with initialization list (C++) | Is it possible to use the initialization list of a child class' constructor to initialize data members declared as protected in the parent class? I can't get it to work. I can work around it, but it would be nice if I didn't have to.
Some sample code:
class Parent
{
protected:
std::string something;
};
class Child : public Parent
{
private:
Child() : something("Hello, World!")
{
}
};
When I try this, the compiler tells me: "class 'Child' does not have any field named 'something'". Is something like this possible? If so, what is the syntax?
Many thanks!
| It is not possible in the way you describe. You'll have to add a constructor (could be protected) to the base class to forward it along. Something like:
class Parent
{
protected:
Parent( const std::string& something ) : something( something )
{}
std::string something;
};
class Child : public Parent
{
private:
Child() : Parent("Hello, World!")
{
}
};
|
2,290,861 | 2,290,984 | How to render non trivial particles in OpenGL | I have a particle system where the positions and various properties are stored in a vertex buffer object. The values are continuously updated by a CUDA kernel. Presently I am just rendering them using GL_POINTS as flat circles. What I am interested in is rendering these particles are more involved things like 3d animated bird models for instance. I am trying to figure out what the best approach for would be. For the sake of conversation lets say the animation of the bird model is made up of 3 frames (position of the wings and what not)
I can see loading up the models into display lists and looping over all the particles in translating, rotating, etc the matrix then calling the display list. This doesn't seem like an ideal approach because it would require bringing all the particle data over to the host from the gpu just to do matrix operations to shove back to the GPU (unless I can call the drawing functions from a cuda kernel???)
I don't know much about shaders, would they be able to handle something like this?
At this point I am mostly looking for advice on what avenue to pursue, but if you know of any articles or tutorials that deal the subject, kudos.
I am working with the OpenGL natively on Windows 7 64bit with c++. I am not using GLUT
You probably want to use: EXT_draw_instanced. I've never actually used Instancing, but most modern GPUs (GF6 and up I think) allow you to feed the GPU a model and a list of points and have it draw the model at every point.
I'll google some more info and see what I come up with....
Well this is the official spec, and it looks like they have a tutorial on it.
http://www.opengl.org/registry/specs/ARB/draw_instanced.txt
|
2,291,110 | 2,291,321 | How do I improve breaking substitution ciphers programmatically? | I have written (am writting) a program to analyze encrypted text and attempt to analyze and break it using frequency analysis.
The encrypted text takes the form of each letter being substituted for some other letter ie. a->m, b->z, c->t etc etc. all spaces and non alpha chars are removed and upper case letters made lowercase.
An example would be :
Original input - thisisasamplemessageitonlycontainslowercaseletters
Encrypted output - ziololqlqdhstdtllqutozgfsnegfzqoflsgvtkeqltstzztkl
Attempt at cracking - omieieaeanuhtnteeawtiorshylrsoaisehrctdlaethtootde
Here it has only got I, A and Y correctly.
Currently my program cracks it by analysing the frequency of each individual character, and mapping it to the character that appears in the same frequency rank in a non encrypted text.
I am looking for methods and ways to improve the accuracy of my program as at the moment I don't get too many characters right. For example when attempting to crack X amount of characters from Pride and Prejudice, I get:
1600 - 10 letters correct
800 - 7 letters correct
400 - 2 letters correct
200 - 3 letters correct
100 - 3 letters correct.
I am using Romeo and Juliet as a base to get the frequency data.
It has been suggested to me to look at and use the frequency of character pairs, but I am unsure how to use this because unless I am using very large encrypted texts I can imagine a similar approach to how I am doing single characters would be even more inaccurate and cause more errors than successes. I am hoping also to make my encryption cracker more accurate for shorter 'inputs'.
| I'm not sure how constrained this problem is, i.e. how many of the decisions you made are yours to change, but here are some comments:
1) Frequency mapping is not enough to solve a puzzle like this, many frequencies are very close to each other and if you aren't using the same text for frequency source and plaintext, you are almost guaranteed to have a few letters off no matter how long the text. Different materials will have different use patterns.
2) Don't strip the spaces if you can help it. This will allow you to validate your potential solution by checking that some percentage of the words exist in a dictionary you have access to.
3) Look into natural language processing if you really want to get into the language side of this. This book has all you could ever want to know about it.
Edit:
I would look into bigraphs and trigraphs first. If you're fairly confident of one or two letters, they can help predict likely candidates for the letters that follow. They're basically probability tables where AB would be the probability of an A being followed by a B. So assuming you have a given letter solved, that can be used to solve the letters next to it, rather than just guessing. For example, if you've got the word "y_u", it's obvious to you that the word is you, but not to the computer. If you've got the letters N, C, and O left, bigraphs will tell you that YN and YC are very uncommon whereas YO is much more likely, so even if your text has unusual letter frequencies (which is easy when it's short) you still have a fairly accurate system for solving for unknowns. You can hunt around for a compiled dataset, or do your own analysis, but make sure to use a lot of varied text; a lot of Shakespeare is not the same as half of Shakespeare and half journal articles.
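The bigraph idea can be sketched as a simple counting table; the counts here are built from whatever reference text you feed in rather than a published dataset, and the tiny sample string in the usage below is purely illustrative:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Count letter-pair occurrences in a lowercase reference text.
// Non-letters break the pair (so word boundaries aren't counted).
std::map<std::pair<char, char>, int> bigram_counts(const std::string& text) {
    std::map<std::pair<char, char>, int> counts;
    char prev = 0;
    for (std::string::size_type i = 0; i < text.size(); ++i) {
        char c = text[i];
        if (c < 'a' || c > 'z') { prev = 0; continue; }
        if (prev) ++counts[std::make_pair(prev, c)];
        prev = c;
    }
    return counts;
}

// Score a candidate plaintext: higher means its letter pairs are more
// common in the reference, so "yo" beats "yn" when choosing a mapping.
int score(const std::string& candidate,
          const std::map<std::pair<char, char>, int>& counts) {
    int total = 0;
    for (std::string::size_type i = 1; i < candidate.size(); ++i) {
        std::map<std::pair<char, char>, int>::const_iterator it =
            counts.find(std::make_pair(candidate[i - 1], candidate[i]));
        if (it != counts.end()) total += it->second;
    }
    return total;
}
```

A cracker can then prefer letter mappings that raise the total score of the decrypted text instead of matching single-letter frequency ranks alone.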
|
2,291,114 | 2,291,235 | Runtime array bounds checking in C++ built with g++ | Is there any way to do array bounds checking in C++ compiled using g++?
Valgrind's Memcheck can't detect overflows on arrays allocated on the stack.
The GCC extension enabled by -fbounds-checking is only implemented for the C front end.
Ideally, the source code shouldn't be modified in any way. Using std::vector, std::tr1::array or boost::array is not an option because the codebase is large and such shift would be infeasible.
| There is a Valgrind tool called SGCheck (formerly known as Ptrcheck) that does check stack array bounds overrun.
valgrind --tool=exp-sgcheck <program> <arguments>
The tool is still labeled experimental and it comes with several limitations. One of them is:
Platforms: the stack/global checks won't work properly on PowerPC, ARM
or S390X platforms, only on X86 and AMD64 targets. That's because the
stack and global checking requires tracking function calls and exits
reliably, and there's no obvious way to do it on ABIs that use a link
register for function returns.
|
2,291,369 | 2,309,123 | Set up Eclipse C++ compiler without auto-install or altering System Path on Windows | I am trying to install a C++ compiler on Eclipse without altering the Path variables as I can't, the machine has limited rights. Eclipse obviously runs fine, it's the build that doesn't, it complains about.
The first thing I noticed was a warning that said "Unresolved inclusion" for the library file stdio.h
I added the path variable inside Eclipse's "Windows > Preferences > C/C++ > Environment" with a new environment variable named "Path" with a path to my minGW/bin folder but to no avail. I also tried setting it to "Replace the native environment variable with specified one" but also no change.
The build errors out saying:
**** WARNING: The "Default" Configuration may not build ****
**** because it uses the "MinGW GCC" ****
**** tool-chain that is unsupported on this system. ****
and then
(Cannot run program "make": Launching failed)
And of course no more. It's a simple Hello World test, so the code shouldn't be an issue. I can see the includes under a folder in the "Includes" area that Eclipse generates (D:\MinGW\binutils\lib) but clicking on them in the Outline tab of Eclipse brings up the error "No include files were found that matched that name".
| It looks like you're trying to build a simple hello world program using Eclipse/CDT as the development environment and MinGW as the compiler tool chain. I was able to get this working just now without modifying my system path environment variable. This is what I did:
I already had Eclipse 3.5 (Galileo) installed with CDT
Installed MinGW to C:\MinGW (I assume you already had this done). Make sure the mingw-make component is installed (it's not installed by default, I had to check the box to install this component).
Create a new empty makefile project, add main.c, write hello world code (as you say this isn't the problem so I'm skipping detail here), add a new file called "makefile" and fill it in.
Contents of my main.c file
#include <stdio.h>
int main()
{
printf("Hello World!");
return 0;
}
Contents of my makefile:
all:
gcc -o HelloWorld.exe main.c
Open the project properties; Under C/C++ Build uncheck the "use default build command" and change the build command to "mingw32-make".
Under "C/C++ Build/Environment" add a new PATH variable with C:\Mingw\bin in the path
Under "C/C++ General/Paths and Symbols" add C:\mingw\include as an include path.
After doing this, my project built successfully and produced a HelloWorld.exe executable in my project.
Another option that doesn't require adding a PATH variable to the system or project properties, or adding the include path to the project properties, is to simply give full path info to the commands in the makefile. For small projects this is manageable. Here's an example makefile:
Contents of makefile:
all:
c:\mingw\bin\gcc -o HelloWorld.exe -I c:\mingw\include main.c
Of course you'll also have to change the build command from simply "mingw32-make" to "C:\mingw\bin\mingw32-make" as well.
Another downside of this approach is that the CDT code parser will not be able to locate include files, so you'll have warnings in the editor to that effect.
|
2,291,533 | 2,291,668 | Bitstream to Float Type Coercion | I'm having trouble getting the following code to work correctly. Using an online IEEE-754 converter, I wrote out (by hand) to the testData.txt file that is read with the bit string that should signify the floating point number 75.5; the actual cout.write does show that the bit string is as I expect as well. However, when I try to coerce the char* into a float using a union (as I have seen is a typical way to accomplish this conversion) the resulting float is not the number I expect.
#include<climits>
#include<iostream>
#include<fstream>
#include<bitset>
int main( int, char** )
{
std::ifstream inputFile( "testData.txt", std::ios_base::in | std::ios_base::binary );
if( !inputFile ) std::cout << "Failed to open input file!" << std::endl;
char buffer[ CHAR_BIT * sizeof(float) ];
inputFile.read( buffer, CHAR_BIT * sizeof(float) );
std::cout << "cout.write of input from file = ";
std::cout.write( buffer, CHAR_BIT * sizeof(float) );
std::cout << std::endl;
union { float f; char* c; } fToCharStarUnion;
fToCharStarUnion.c = buffer;
std::bitset< sizeof(float) * CHAR_BIT > bits( std::string( fToCharStarUnion.c ) );
std::cout << "fToCharStarUnion.f = " << fToCharStarUnion.f << " bits = " << bits << std::endl;
inputFile.close();
return 0;
}
The return result of running this is:
cout.write of input from file = 01000010100101110000000000000000
fToCharStarUnion.f = -1.61821e+38 bits = 01000010100101110000000000000000
Is there something fundamental I am not doing which will make this work correctly?
| You are translating the ASCII into bits using the constructor of bitset. That causes your decoded bits to be in the bitset object rather than the union. To get raw bits out of a bitset, use the to_ulong method:
#include<climits>
#include<iostream>
#include<fstream>
#include<bitset>
int main( int, char** )
{
std::ifstream inputFile( "testData.txt",
std::ios_base::in | std::ios_base::binary );
if( !inputFile ) std::cout << "Failed to open input file!" << std::endl;
char buffer[ CHAR_BIT * sizeof(float) ];
inputFile.read( buffer, CHAR_BIT * sizeof(float) );
std::cout << "cout.write of input from file = ";
std::cout.write( buffer, CHAR_BIT * sizeof(float) );
std::cout << std::endl;
union {
float f[ sizeof(unsigned long)/sizeof(float) ];
unsigned long l;
} funion;
funion.l = std::bitset<32>( std::string( buffer ) ).to_ulong();
std::cout << "funion.f = " << funion.f[0]
<< " bits = " << std::hex <<funion.l << std::endl;
inputFile.close();
return 0;
}
This generally assumes that your FPU operates with the same endianness as the integer part of your CPU, and that sizeof(long) >= sizeof(float)… less guaranteed for double, and indeed the trick is harder to make portable for 32-bit machines with 64-bit FPUs.
Edit: now that I've made the members of the union equal sized, I see that this code is sensitive to endianness. The decoded float will be in the last element of the array on a big-endian machine, first element on little-endian. :v( . Maybe the best approach would be to attempt to give the integer member of the union exactly as many bits as the FP member, and perform a narrowing cast after getting to_ulong. Very difficult to maintain the standard of portability you seemed to be shooting for in the original code.
|
2,291,551 | 2,291,568 | Question about Inheritance / Method overriding C++ | class Class1
{
public:
void print()
{
cout << "test" << endl;
}
void printl()
{
print();
}
};
class Class2 : public Class1
{
public:
void print()
{
cout << "test2" << endl;
}
};
Why does print() not get overridden in Class2, is there any way a function can be overridden like this? (Without virtual functions). Thanks
Class2 t;
t.printl();
| No. This is the entire reason for virtual functions.
Without a virtual method here, when printl() calls print(), it's calling Class1.print(), which prints "test". If you flag the method as virtual, then it will handle it as you were expecting.
|
2,291,702 | 2,291,870 | Can C++'s value_type be extended from iterator_traits to all types? | I would like to create a construct similar to std::iterator_traits::value_type that can work seamlessly for all types using the same syntax. Imagine we have the following:
template <typename T>
struct value_type {
typedef T type;
};
#define VALUE_TYPE(T) typename value_type<T >::type
This will work for POD types. I can specialize it for my own class:
struct MyClass {
typedef float value_type;
};
template <>
struct value_type<MyClass> {
typedef MyClass::value_type type;
};
though I would prefer to avoid extra value_type instantiations in an ideal world.
The problem is with STL iterators. I need a specialization that gets me to the iterator hierarchy. This fails because the compiler chooses the base case:
template <>
struct value_type<std::_Iterator_base_aux> { // MSVC implementation
typedef value_type type;
};
Choosing a class higher up the hierarchy (_Iterator_with_base would be most natural because that is where value_type is defined) fails because it requires specifying all the iterator traits as template arguments.
Is what I'm trying to do even possible in C++?
| You can use SFINAE to detect the presence of the value_type typedef. No need to specialize for individual types (which might not be possible, since you'd be relying entirely on internal implementation details).
#include <vector>
template <class T>
struct has_value_type
{
typedef char true_type;
typedef char false_type[2];
//template not available if there's no nested value_type in U's scope
template <class U>
static true_type test(typename U::value_type* );
//fallback
template <class U>
static false_type& test(...);
//tests which overload of test is chosen for T
static const bool value = sizeof(test<T>(0)) == sizeof(true_type);
};
template <class T, bool b>
struct value_type_impl;
template <class T>
struct value_type_impl<T, false> //if T doesn't define value_type
{
typedef T type;
};
template <class T>
struct value_type_impl<T, true> //if T defines value_type
{
typedef typename T::value_type type;
};
template <class T>
struct value_type: value_type_impl<T, has_value_type<T>::value>
{
};
struct MyClass {
typedef float value_type;
};
template <class T>
int foo(T )
{
return typename value_type<T>::type();
}
int main()
{
foo(MyClass());
std::vector<int> vec;
foo(vec.begin());
foo(10);
}
|
2,291,779 | 2,292,876 | Is Linux Standard Base (LSB) AppChecker reliable? | According to the LSB scanner, my binary is supposedly incompatible with a specific version of Linux because it uses GBLICXX_3.4.9 symbols. But when I tried to run the binary myself on that version, everything seems to work fine...
Can a binary even start on a Linux distro if that distro is missing the runtime libraries containing the required symbols?
| I don't know if I've understood the question well, but as far as I know, compiling your program against a modern glibc does not necessarily mean that you won't be able to execute it against an older version. The following Linux command:
objdump -T "your exe or lib file" | grep GLIB
will show you which version of the glibc the symbols of your program belong to.
For further information there is a paper called How to write shared libraries by Ulrich Drepper that explains a lot of things of how symbols work in linux not only for shared libraries but also for executables
|
2,291,802 | 2,291,845 | Is there a C++ iterator that can iterate over a file line by line? | I would like to get an istream_iterator-style iterator that returns each line of the file as a string rather than each word. Is this possible?
| EDIT: This same trick was already posted by someone else in a previous thread.
It is easy to have std::istream_iterator do what you want:
namespace detail
{
class Line : std::string
{
friend std::istream & operator>>(std::istream & is, Line & line)
{
return std::getline(is, line);
}
};
}
template<class OutIt>
void read_lines(std::istream& is, OutIt dest)
{
typedef std::istream_iterator<detail::Line> InIt;
std::copy(InIt(is), InIt(), dest);
}
int main()
{
std::vector<std::string> v;
read_lines(std::cin, std::back_inserter(v));
return 0;
}
|
2,291,995 | 2,292,070 | use c++ template class from objective c file | I would like to use a C++ class in an objective c application.
How to achieve this?
Thanks for your helps :)
| You can do this using "Objective-C++". You should read Apple's documentation, but the short version is that you may mix C++ and Objective-C in files with the extension ".mm". It's not magic (C++ classes and Objective-C classes are still separate things), but you can now use them in the same code.
|
2,292,202 | 2,292,262 | While loop with try catch fails at bad cin input | I can't seem to figure out why this falls into a loop after getting non-int input. I've tried cin.flush(), which doesn't seem to exist, cin.clear(), which seems like it should work, even cin.sync() after reading someone else post about it working, but didn't seem to make much sense. Also tried cin.bad().
Thank you very much for any help
Please enter the first number: f
Sorry, I don't think that's a number?
Please enter the first number: Sorry, I don't think that's a number?
Please enter the first number: Sorry, I don't think that's a number?
Please enter the first number: Sorry, I don't think that's a number?
Please enter the first number: Sorry, I don't think that's a number?
Sorry, you don't get any more tries. Press any key to continue . . .
#include <iostream>
using namespace std;
int main(){
int entry;
int attempts = 1;
int result;
while(attempts <= 5) {
try {
cout << "\n\nPlease enter the first number: ";
cin >> entry;
if (cin.fail())
throw "Sorry, I don't think that's a number?";
if (entry < 0)
throw "Sorry, no negative numbers. Try something else? ";
cout << "\nNow the second number: ";
cin >> entry;
cin.clear();
cin.get();
}
catch (char* error) {
cout << error;
attempts++;
}
}
if (attempts > 5)
cout << "Sorry, you don\'t get any more tries.\n";
system("pause");
return 0;
}
| You should think carefully what you want to do if user gives invalid input in this case. Usually in these cases the best solution is to read one line from the input and throw it away.
Try putting cin.clear() and std::cin.ignore(std::numeric_limits<streamsize>::max(),'\n'); in your catch clause. cin.clear() clears the failure state in cin, and cin.ignore() throws away rest of the line waiting in the input buffer.
(And yes, you probably should rethink your use of exceptions).
|
2,292,245 | 2,292,500 | obtain output of command to parse in in c / MacOSX | I'm working on a Command Line app to help me on launchd tasks to know if a task is running by returning a BOOL, the problem comes when i need to do a command line and obtain the output for further parsing.
i'm coding it in C/C++ so i can't use NSTask for it, any ideas on how to achieve the goal?
The Command
sudo launchctl list -x [job_label]
If i use system(), i'm unable to get the output so in further research I came with popen(), but no success there.
Thanks in advance.
| You'll want to create a pipe from which you can read the output of the program. This will involve using pipe, fork, exec*, and maybe even dup. There's a good tutorial on the linux documentation project.
|
2,292,294 | 2,293,741 | how do I clean up my lua state stack? | I am using the lua C-API to read in configuration data that is stored in a lua file.
I've got a nice little table in the file and I've written a query C-function that parses out a specific field in the table. (yay it works!)
It works by calling a few of these kinds of functions over and over:
...
lua_getglobal (...);
lua_pushinteger (...);
lua_gettable (...);
lua_pushstring (...);
lua_gettable (...);
lua_getfield (...);
...
you get the idea.
After I am done querying my data like this, do I have to clean up the stack?
| As long as your stack doesn't grow without bound, you'll be fine. When you return integer N from the C API into Lua, two things happen:
The Lua engine takes the top N values from the stack and considers them as the results of the call.
The Lua engine deallocates (and reuses) everything else on the stack.
David Seiler mentions the possibility of your C code being called from other C code and not from the Lua engine. This is an advanced technique, and if you are asking this question, you are unlikely to have to worry about that particular issue. (But the way it happens from Lua's perspective is the same—when all the C code finishes executing, it has to return an integer, and Lua peels that many values off the stack and then deallocates the rest.)
If you use too many stack slots, your program will halt with a sane and sensible error message (as I know from experience).
|
2,292,296 | 3,283,132 | Can somebody recommend a good U3D library? | I need to put some 3D images into PDF files, and PDF uses Universal 3D (U3D) formats. I don't like the U3D Sourceforge project (basically what Intel released after the ECMA standardization effort).
Does anybody know of good U3D libraries I could use? I'm using C++ on Microsoft Windows, FWIW.
| VCGLib is a mesh processing library that has a U3D exporter and a variety of importers (see http://vcg.sourceforge.net/index.php/Tutorial#File_Formats). MeshLab is a tool built on top of it.
|
2,292,304 | 2,292,331 | Are there any actively maintained tools that can transform C++ code to xml? | Are there any tools that can transform C++ code to xml, or some other format that would be easier to parse?
It would be great if it would also have the option of turning xml back to C++ . I already know of doxygen's xml format ... maybe it's just me, but I don't find it particularly helpful.
| Something like gcc xml?
|
2,292,466 | 2,292,479 | Dynamic memory and inherited structs in C++ | Say I have some structs like this:
struct A{
int someInt;
};
struct B : public A{
int someInt;
int someOtherInt;
};
And a class:
class C{
A *someAs;
void myFunc(A *someMoreAs){
delete [] someMoreAs;
}
};
would this cause a problem:
B *b=new B[10];
C c;
c.myFunc(b);
Because it's deleting b, thinking that it's of type A, which is smaller. Would this cause a memory leak?
Also, say I want to allocate more of the same as b within myFunc, using new, but without C knowing whether b is of A or B? A friend sugegsted typeof, but VC doesn't seem to support this.
| No memory will leak in this particular case because both A and B are POD (plain old data) and thus their contents do not require any destruction.
Still, it is a good practice to always have a virtual destructor in a (base) class that is supposed to be inherited from. If you add a virtual destructor to A, any deletion via A* will call the proper derived destructors too (including destruction of derived classes' members).
virtual ~A() {}
Notice, however, that you cannot use inheritance in arrays of base type, precisely because the actual size of the objects might differ.
What you really want is probably an array of pointers to base class:
std::vector<A*> someAs;
Or the Boost equivalent (with automatic memory management and nicer API):
boost::ptr_vector<A> someAs;
|
2,292,468 | 2,293,375 | Class names that start with C | The MFC has all class names that start with C. For example, CFile and CGdiObject. Has anyone seen it used elsewhere? Is there an official naming convention guide from Microsoft that recommends this style? Did the idea originate with MFC or was it some other project?
| Something a bit similar is used in Symbian C++, where the convention is that:
T classes are "values", for example TChar, TInt32, TDes
R classes are handles to kernel (or other) resources, for example RFile, RSocket
M classes are mixins, which includes interfaces (construed as mixins with no function implementations). The guideline is that multiple inheritance should involve at most 1 non-M class.
C classes are pretty much everything else, and derive from CBase, which has some stuff in it to help with resource-handling.
HBufC exists primarily to generate confused posts on Symbian forums, and having its very own prefix is just the start. The H stands for "huh?", or possibly "Haw, haw! You have no STL!" ;-)
This is close in spirit to Apps Hungarian Notation rather than Systems Hungarian notation. The prefix tells you something about the class which you could look up in the documentation, but which you would not know otherwise. The whole point of naming anything in programming is to provide such hints and reminders, otherwise you'd just call your classes "Class001", "Class002", etc.
Systems Hungarian just tells you the type of a variable, which IMO is nothing to get very excited about, especially in a language like C++ where types tend to be either repeated constantly or else completely hidden by template parameters. Its analogue when naming types is the Java practice of naming all interfaces with I. Again, I don't get very excited about this (and neither do the standard Java libraries), but if you're going to define an interface for every class, in addition to the interfaces which are actually used for polymorphism in non-test situations, then you need some way to distinguish the two.
|
2,292,647 | 2,292,678 | memory allocation and inherited classes in C++ | Say I have these structs:
struct Base{
...
}
struct Derived:public Base{
//everything Base contains and some more
}
I have a function in which I want to duplicate an array of these and then alter it.
void doStuff(Base *data, unsigned int numItems){
Base *newdata = new Base[numItems];
memcpy(newdata, data, numItems*sizeof(Base));
...
delete [] newdata;
}
But if I used this function like so:
Base *data = new Derived[100];
doStuff(data, 100);
It wouldn't work, would it? Because Derived is larger than Base, so allocating for Base is not enough memory?
| You could do this easily with a template:
template< class T >void doStuff(T *data, unsigned int numItems)
{
T *newdata = new T[numItems];
memcpy( newdata, data, sizeof( T ) * numItems );
...
delete [] newdata;
}
Edit as per the comments: If you wanted to do this for a mixed collection things will get more complicated quickly ... one possible solution is this:
struct Base{
virtual Base* CopyTo() { return new Base( *this ); }
};
struct Derived:public Base{
virtual Derived* CopyTo() { return new Derived( *this ); }
};
void doStuff( Base** ppArray, int numItems )
{
Base** ppNewArray = new Base*[numItems];
int count = 0;
while( count < numItems )
{
ppNewArray[count] = ppArray[count]->CopyTo();
count++;
}
// do stuff
count = 0;
while( count < numItems )
{
delete ppNewArray[count];
count++;
}
delete[] ppNewArray;
}
|
2,292,701 | 2,292,738 | What Visual C++ setting/option/flag is the counterpart of -ansi -pedantic in g++ | I have a C++ codebase, and I am porting from Visual Studio to g++, which should I set in Visual Studio so that build errors in gcc are reduced? With g++ this is achieved by -ansi -pedantic.
| I believe you are looking for /Za.
|
2,292,898 | 2,292,905 | Help rearranging /solving an Equation | I have the following C formula
bucket = (hash - _min) * ((_capacity-1) / range());
What I need to to rearrange the equation to return the _capacity instead of bucket (I have all other variables apart from _capacity). e.g.
96 = (926234929-805306368) * (( x -1) /1249540730)
836 = (1852139639-805306368) * ((x -1) /1249540730)
As you can see it's a fairly simple equation, all I need is x on the left. But my algebra is very rusty, so any help appreciated.
| capacity = (range() * bucket) / (hash - _min) + 1;
bucket = (hash - _min) * ((_capacity - 1) / range()); // start
bucket = ((hash - _min) * (_capacity - 1)) / range(); // rearrange
range() * bucket = (hash - _min) * (_capacity - 1); // multiply by range
(range() * bucket) / (hash - _min) = _capacity - 1; // divide by (hash - _min)
(range() * bucket) / (hash - _min) + 1 = _capacity; // add 1
capacity = (range() * bucket) / (hash - _min) + 1; // rearrange
|
2,292,995 | 2,293,006 | c++ allocation on the stack acting curiously | Curious things with g++ (maybe also with other compilers?):
struct Object {
Object() { std::cout << "hey "; }
~Object() { std::cout << "hoy!" << std::endl; }
};
int main(int argc, char* argv[])
{
{
Object myObjectOnTheStack();
}
std::cout << "===========" << std::endl;
{
Object();
}
std::cout << "===========" << std::endl;
{
Object* object = new Object();
delete object;
}
}
Compiled with g++:
===========
hey hoy!
===========
hey hoy!
The first type of allocation does not construct the object. What am I missing?
| The first type of construction is not actually constructing the object. In order to create an object on the stack using the default constructor, you must omit the ()'s
Object myObjectOnTheStack;
Your current style of definition instead declares a function named myObjectOnTheStack which returns an Object.
|
2,293,231 | 2,293,411 | sizeof(...) = 0 or conditional variable declaration in c++ templates | Suppose I have something like this:
struct EmptyClass{};
template<typename T1, typename T2 = EmptyClass,
typename T3 = EmptyClass, typename T4 = EmptyClass,
..., typename T20> class PoorMansTuple {
T1 t1;
T2 t2;
...
T20 t20;
};
Now, I may waste up to 19 bytes per PoorMansTuple.
Question is:
1) Is there a way to create a class of size 0?
2) Is there a way to conditionally define a variable? Something like:
T1 t1;
if (T2 != EmptyClass) T2 t2; // pseudo code
if (T3 != EmptyClass) T3 t3; // ...
Thanks!
The use of black magic macros is permitted.
I'm using g++ on MacOSX.
| Partial specialization may be what you are looking for the first part of the question. This program
#include <string>
#include <iostream>
struct EmptyClass {};
template<typename T1, typename T2>
class Tuple
{
T1 t1;
T2 t2;
};
template<typename T1>
class Tuple <T1, EmptyClass>
{
T1 t1;
};
int main (void)
{
Tuple<std::string, std::string> two;
Tuple<std::string, EmptyClass> one1;
Tuple<std::string> one2;
std::cout << "<string, string>: " << sizeof(two) << std::endl;
std::cout << "<string, empty> : " << sizeof(one1) << std::endl;
std::cout << "<string> : " << sizeof(one2) << std::endl;
return 0;
}
prints
<string, string>: 32
<string, empty> : 16
<string> : 16
|
2,293,270 | 2,293,286 | Is there any keyword to redefine "all" methods of templated base class in a templated derived class? | I know this looks like a silly question, but using object oriented stuff with templates in C++ is really troublesome.
For example, Foo is the base class:
template <typename T>
class Foo {
public:
virtual void Method1() { }
virtual void Method1(int a) { }
virtual void Method2() { }
virtual void Method2(int a) { }
//... lots of other methods
};
Is there something like:
template <typename T>
class Bar : public Foo<T> {
public:
using Foo<T>::*; //redefine all inherited methods from Foo
virtual void Method1(int a) { }
virtual void Method2(int a) { }
//other methods overloading..
};
Instead of:
template <typename T>
class Bar : public Foo<T> {
public:
using Foo<T>::Method1;
using Foo<T>::Method2;
//... lots of other methods
virtual void Method1(int a) { }
virtual void Method2(int a) { }
//other methods overloading..
};
So we can do:
int main() {
Bar<int> b;
b.Method1();
b.Method2();
//... lots of other methods
//This obviously works without the 'using' keyword:
Foo<int>* f = &b;
f->Method1();
f->Method2();
//etc
return 0;
}
| No, there is no functionality like that but it usually isn't needed. What you intend to do with using is already provided by the basic inheritance mechanism.
You need to use using if overloads in the deriving class hide methods from the base class or if you want to change the access mode, but not in general:
class A {
protected:
void f() {}
public:
void g(int) {}
void h(int) {}
};
struct B : A {
using A::f; // make f public
void g(double) {}
using A::g; // otherwise A::g is hidden by the overload
// using A::h isn't needed
};
Note that you can still call A::h() through a B instance because nothing hides it.
|
2,293,338 | 2,293,366 | Possible to pass name as argument to c++ template? | Is it possible to write a class:
template<typename T, ... name> struct Magic {
T name;
};
such that:
Magic<int, foo> gives:
Magic<int, foo> {
int foo;
}
and
Magic<float, bar> gives:
Magic<float, bar> {
float bar;
}
Basically, I want to be able to specify not only the Type, but also the name of the member variables.
| That is not possible, you have to resort to either macro-based solutions or use a predefined set of types that provide named members.
A possible macro-based approach:
#define MAGIC(name_) \
template<typename T> struct Magic1 { \
T name_; \
};
MAGIC(foo);
or:
#define MAGIC(type_, name_) \
struct Magic1 { \
type_ name_; \
};
MAGIC(int, foo);
Using preprocessor magic, e.g. utilizing Boost.Preprocessor, you should be able to generate n named members in a more convenient way.
Another approach might be using a predefined set of classes providing certain named members from that you inherit:
enum { MemberNameFoo, MemberNameBar };
template<class T, int id>
struct named_member;
template<class T>
struct named_member<T, MemberNameFoo> {
T foo;
};
template<class T>
struct named_member<T, MemberNameBar> {
T bar;
};
// holder for the above, just one member for this example:
template<class T, int name>
struct holder : named_member<T, name> {};
// using it:
typedef holder<int, MemberNameFoo> HasFoo;
typedef holder<int, MemberNameBar> HasBar;
Using compile-time lists you could then inherit from n named_member instantiations, Boost.MPL could help here.
|
2,293,404 | 2,293,437 | Simple modular guide in C/++? | I think modular is the correct term; to give a basic example if I was to create an encryption application which you could type in like notepad, and then save encrypted, however under the save menu there are options to save for the encryption methods that you have plugins for like AES, Blowfish etc, and also allow new methods to be coded into a plugin and distributed without having to recompile the main application.
I've found a couple of explanations online, but I'm mostly struggling to get my head around how you would get new options to appear under the save menu that originally didn't exist (this may be more of a Windows application question), if you get what I mean.
Seeing as modular development seems to be very platform specific I'll stick with Windows examples for now and hopefully try and scope out after that.
| Assuming Win32api, you do something like this:
Have a plugins directory for your application.
On load of your application, list all files in that directory
Any with the extension DLL, you load with the LoadLibrary call.
You get some information from the dll that tells you what the plugin's name is
You create menus/ui changes appropriately.
Now, when you create your dll, you have a standard set of functions common to all plugins. Or, a standard set of functions per type of plugin and a function that identifies this with your application. This way, you can test each plugin is of the correct form and call the methods in the dynamic library on the fly without having to compile / link them in to your main program.
The routine is broadly similar on any platform that supports shared libraries (DLLs, so's etc).
As a code example for what I mean, you might have a plugin.h file like this:
#ifndef PLUGIN_H_
#define PLUGIN_H_
#define PLUGIN_CRYPTO 1
#define PLUGIN_FORMAT 2
#define PLUGIN_EXAMPLE 3
#endif
Then you #include this header in both your main program and any plugins you create. In plugin-dll.cpp (example again) you have a method like this:
int GetPluginType()
{
return PLUGIN_CRYPTO;
}
Then you can switch between the results of this function and assign function pointers to the correct routines you want to run.
More info for implementation:
GetProcAddress() - get the address of a function in a library by name. Example with function pointers included.
FreeLibary - to release something opened with LoadLibrary.
FindFirstFileEx, FindNextFile and FindClose for directory searching.
Just because, Linux (POSIX) equivalents:
dlopen - Dynamic library open.
dlsym - Equivalent to GetProcAddress - get function ptr to symbol name.
dlclose - Equivalent to FreeLibrary
|
2,293,481 | 2,293,490 | Understanding c++ code; what do *datatype and classname::method mean? | I am new to C++ and I am trying to understand some code. What does it mean to have a * in front of the datatype ? and why is the class Name in front of the method name CAStar::LinkChild
void CAStar::LinkChild(_asNode *node, _asNode *temp)
{
}
|
A * in front of the data type says that the variable is a pointer to the data type, in this case, a pointer to a node. Instead of passing a copy of the entire "node" into the method, a memory address, or pointer, is passed in instead. For details, see Pointers in this C++ Tutorial.
The class name in front of the method name specifies that this is defining a method of the CAStar class. For details, see the Tutorial pages for Classes.
|
2,293,670 | 2,301,029 | C++ Swapping an array of integers passed through an int& parameter | I need to swap a couple of integers in the format int i[2] using a void swap(int& x) function. As you see the function takes an argument of type int&. Here is non-working version of the function:
int i[2] = {3, 7};
void swap (int& x)
{
int temp;
temp = x[1];
x[1] = x[0];
x[0] = temp;
}
int main()
{
cout << i[0] << ", " << i[1] << "\n"; // return the original: (3, 7)
swap(i);
cout << i[0] << ", " << i[1] << "\n"; // return i swapped: (7, 3)
}
How should I do this?
Edit: The answer CANNOT use anything else for the function parameter. It MUST use a int& parameter. This is a problem from Bjarne Stroustrup's book: "The C++ programming language", third edition. It is problem #4 from chapter 5. The problem first asks to write a function taking a int* as parameter, than to modify it to accept a int& as parameter.
| All right, thanks to @gf's suggestions, I found a solution :) Many thanks! Please tell me if you see anything not very C++ish in there.
// Swap integers
#include<iostream>
using namespace std;
int i = 3;
int j = 7;
void swap (int& x, int& y)
{
int temp = x;
x = y;
y = temp;
}
int main()
{
cout << i << ", " << j << "\n"; // return the original: (3, 7)
swap(i, j);
cout << i << ", " << j << "\n"; // return i swapped: (7, 3)
}
|
2,293,796 | 2,293,926 | PODs, non-PODs, rvalue and lvalues | Could anyone explain the details in terms of rvalues, lvalues, PODs, and non-PODs the reason why the first expression marked below is not ok while the second expression marked below is ok? In my understanding both int() and A() should be rvalues, no?
struct A {};
int main()
{
int i;
A a;
int() = i; //Not OK (error).
A() = a; //OK.
return 0;
}
| Rvalues are what you get from expressions (a useful simplification taken from the C standard, but not worded in C++ standardese). Lvalues are "locator values". Lvalues can be used as rvalues. References are always lvalues, even if const.
The major difference of which you have to be aware can be condensed to one item: you can't take the address of an rvalue (again, not standardese but a useful generalization of the rules). Or to put it another way, you can't fix a precise location for an rvalue—if you could, then you'd have an lvalue. (You can, however, bind a const& to an rvalue to "fix it in place", and 0x is changing the rules drastically.)
User-defined types (UDTs), however, are slightly special: you can convert any rvalue into an lvalue, if the class's interface allows it:
struct Special {
Special& get_lvalue() { return *this; }
};
void f() {
// remember "Special()" is an rvalue
Special* p = &Special().get_lvalue(); // even though you can't dereference the
// pointer (because the object is destroyed), you still just took the address
// of a temporary
// note that the get_lvalue() method doesn't need to operate on a const
// object (though that would be fine too, if the return type matched)
}
Something similar is happening for your A() = a, except through the compiler-supplied assignment operator, to turn the rvalue A() into *this. To quote the standard, 12.8/10:
If the class definition does not explicitly declare a copy assignment operator, one is declared implicitly. The implicitly-declared copy assignment operator for a class X will have the form
X& X::operator=(const X&)
And then it goes on with more qualifications and specs, but that's the important bit here. Since that's a member function, it can be called on rvalues, just like Special::get_lvalue can be, as if you had written A().operator=(a) instead of A() = a.
The int() = 1 is explicitly forbidden as you discovered, because ints don't have operator= implemented in the same way. However, this slight discrepancy between types doesn't matter in practice (at least not that I've found).
POD means Plain Old Data and is a collection of requirements that specify using memcpy is equivalent to copying. Non-POD is any type for which you cannot use memcpy to copy (the natural opposite of POD, nothing hidden here), which tends to be most types you'll write in C++. Being POD or non-POD doesn't change any of the above, and is really a separate issue.
|
2,293,923 | 2,293,951 | IDirect3DTexture9::SetData? | In XNA, you can do
texture = new Texture2D( GraphicsDevice, width, height ) ;
I'm guessing somewhere deep down in the MSFT bowels, this is equivalent to C++ code:
D3DXCreateTexture( GraphicsDevice, width, height, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &texture ) ;
In XNA there's this nifty function that lets you set the pixel values of a texture you've created:
texture.SetData<Color>( new Color[]{ pixel, values, pixel, values ) ;
Now I'm pretty sure there's got to be a C++ DirectX equivalent. Anyone know what it is?
| I found it.. IDirect3DTexture9::LockRect()
|
2,293,961 | 2,293,999 | c++ methods in a base class | When having a base class with pure virtual methods this makes it so that the class can not be instantiated. If I have regular methods and attributes in this base class does the derived classes still inherit those as normal?
For e.g. a getter and setter for an attribute.
| Yes, all methods are inherited.
|
2,293,970 | 2,293,978 | error: expected unqualified-id before ‘for’ | The following code returns this: error: expected unqualified-id before ‘for’
I can't find what is causing the error. Thanks for the help!
#include<iostream>
using namespace std;
const int num_months = 12;
struct month {
string name;
int n_days;
};
month *months = new month [num_months];
string m[] = {"Jan", "Feb", "Mar", "Apr", "May", "Jun",
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec"};
int n[] = {31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
for (int i=0; i<num_months; i++) {
// will initialize the months
}
int main() {
// will print name[i]: days[i]
return 0;
}
| Your for loop is outside a function body.
|
2,293,979 | 2,311,558 | Encoded character buffer storage problem in MySQL varchar using C | I have a encoded character buffer array of size 512 in C, and a database field of varchar in MySQL. Is it possible to store the encoded character buffer into varchar?
I have tried this, but the problem I face is that it only stores a limited portion of the buffer in the database and ignores the rest. What is the actual problem, and how do I solve it?
| It is not clear what you mean by encoded.
If you mean that you have an arbitrary string of byte values, then varchar is a bad fit because it will attempt to trim trailing spaces. A better choice in such cases is to use varbinary fields.
If the string you are inserting contains control characters, you might be best converting that into a hex string and inserting it like follows:
create table xx (
v varbinary(512) not null );
insert into xx values ( 0x68656C6C6F20776F726C64);
This will prevent any component in the tool chain from choking on NUL characters and so forth.
|
2,294,003 | 2,294,015 | How do I declare the size of a string array if it's a member function | I have a problem with setting the size of my array. In my code I have:
class Test {
public:
....//Functions
private:
string name[];
};
Test() {
//heres where i want to declare the size of the array
}
Is this possible?
| No. But you could use a vector of strings instead:
private:
std::vector<std::string> name;
Then in your constructor:
Test()
: name(sizeOfTheArray)
{
}
The vector will be sized for the number of strings you specify. This means all memory for the strings will be allocated at once. You can change the size of the array as you wish, but there's nothing saying you have to. Thus, you get all the benefits of using a dynamically allocated array, and then some, without the drawbacks.
|
2,294,032 | 2,294,086 | algorithm to find edges using vertices (2D and 3D) in a mesh | I have a mesh, with certain types of elements (e.g. triangular, tetra). For each element I know all its vertices i.e. a triangular 2D element will have 3 vertices v1, v2 and v3 whose x,y,z coords are known.
Question 1
I am looking for an algorithm that will return all the edges... in this case:
edge(v1, v2), edge(v1, v3) , edge(v2, v3). Based on how many vertices each element has , the algorithm should efficiently determine the edges.
Question 2
I am using C++, so, what will be the most efficient way to store the information about the edges returned by the above algorithm? Example, all I am interested in is a tuple (v1, v2) that I want to use for some computation and then forget about it.
Thank you
| You can use the half-edge data structure.
Basically your mesh also has a list of edges, and there is one edge structure per pair of verts in each direction. That means if you have verts A and B then there are two edge structures stored somewhere, one for A->B and one for B->A. Each edge has 3 pointers, one called previous, one called next and one called twin. Following the next and previous pointers walks you around the edges of the triangle or polygon in the mesh. Calling twin takes you to the adjacent edge in the adjacent polygon or triangle. (Look at the arrows int he picture) This is the most useful and verbose edge data structure I know of. I've used it to smooth meshes by creating new edges and updating the pointers. Btw, each edge should also point to a vertex so it knows where it is in space.
|
2,294,300 | 2,294,321 | What does 'Font(..)' mean when Font is a class? | I need help in understanding the following C++ code (in a .h file):
bool setFontDescription(const FontDescription& v)
{
if (inherited->font.fontDescription() != v) {
inherited.access()->font = Font(v, inherited->font.letterSpacing(), inherited->font.wordSpacing());
return true;
}
return false;
}
What does 'Font(..)' mean? Font is a C++ class. Does Font(...) mean new Font()? Or create a Font object on the stack?
| Create a Font object on stack, as a temporary. The object's scope is the line where it's created.
|
2,294,306 | 2,294,400 | Byte array to UTF8 CString | I'm using Visual Studio 2008 (C++). How do I create a CString (in a non-Unicode app) from a byte array that has a string encoded in UTF8 in it?
Thanks,
kreb
EDIT: Clarification: I guess what I'm asking is.. CStringA doesn't seem to be able to interpret a UTF8 string as UTF8, but rather as ASCII or the current codepage (I think).. How do I convert this UTF8 string to a CStringW? (UTF-16..?) Thanks
| CStringW filename= CA2W(null_terminated_byte_buffer, CP_UTF8) should do the trick.
|
2,294,443 | 2,294,931 | Base Conversion Problem | I'm trying to convert an integer to a string right now, and I'm having a problem.
I've gotten the code written and working for the most part, but it has a small flaw when carrying to the next place. It's hard to describe, so I'll give you an example. Using base 26 with a character set consisting of the lowercase alphabet:
0 = "a"
1 = "b"
2 = "c"
...
25 = "z"
26 = "ba" (This should equal "aa")
It seems to skip the character at the zero place in the character set in certain situations.
The thing that's confusing me is I see nothing wrong with my code. I've been working on this for too long now, and I still can't figure it out.
char* charset = (char*)"abcdefghijklmnopqrstuvwxyz";
int charsetLength = strlen(charset);
unsigned long long num = 5678; // Some random number, it doesn't matter
std::string key;
do
{
unsigned int remainder = (num % charsetLength);
num /= charsetLength;
key.insert(key.begin(), charset[remainder]);
} while(num);
I have a feeling the function is tripping up over the modulo returning a zero, but I've been working on this so long, I can't figure out how it's happening. Any suggestions are welcome.
EDIT: The fact that the generated string is little endian is irrelevant for my application.
| If I understand correctly what you want (the numbering used by Excel for columns, A, B, .. Z, AA, AB, ...) this is a base-26 notation without a zero digit, able to represent numbers starting from 1. The 26 digits have values 1, 2, ... 26 and the base is 26. So A has value 1, Z value 26, AA value 27... Computing this representation is very similar to the normal representation; you just need to adjust for the offset of 1 instead of 0.
#include <string>
#include <iostream>
#include <climits>
std::string base26(unsigned long v)
{
char const digits[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
size_t const base = sizeof(digits) - 1;
char result[sizeof(unsigned long)*CHAR_BIT + 1];
char* current = result + sizeof(result);
*--current = '\0';
while (v != 0) {
v--;
*--current = digits[v % base];
v /= base;
}
return current;
}
// for testing
#include <cstdlib>
int main(int argc, char* argv[])
{
for (int i = 1; i < argc; ++i) {
unsigned long value = std::strtol(argv[i], 0, 0);
std::cout << value << " = " << base26(value) << '\n';
}
return 0;
}
Running with 1 2 26 27 52 53 676 677 702 703 gives
1 = A
2 = B
26 = Z
27 = AA
52 = AZ
53 = BA
676 = YZ
677 = ZA
702 = ZZ
703 = AAA
|
2,294,646 | 2,294,776 | Strange vector initialization issue | I recently debugged a strange C++ problem, in which a newly declared vector somehow had a size of 477218589. Here's the context:
struct Triangle {
Point3 a,b,c;
Triangle(Point3 x, Point3 y, Point3 z) : a(x), b(y), c(z) {}
Vector3 flat_normal() { return (a-c)^(b-c); }
};
vector<Triangle> triangles;
Calling triangles.size() returns the value 477218589.
I 'fixed' the problem by changing struct Triangle to class Triangle, but I'm wondering why there's any difference. Should I have done that typedef struct Foo { ... } Foo; magic? If so, why would that help?
If it matters, I'm using g++-4.1.
| This
#include <vector>
#include <iostream>
struct Point3 {};
struct Triangle {
Point3 a,b,c;
Triangle(Point3 x, Point3 y, Point3 z) : a(x), b(y), c(z) {}
};
int main()
{
std::vector<Triangle> triangles;
std::cout << triangles.size() << '\n';
return 0;
}
prints 0 for me. If it also does for you, then the problem is in parts of the code not included in this snippet. If it prints anything else, something is fishy with your compiler/std lib/setup.
|
2,294,665 | 2,294,678 | Linker error LNK2019 while trying to compile prog with template declarations | Here the code
#include <iostream>
#include <conio.h>
using namespace std;
template <typename T> class grid
{
public:
grid();
~grid();
void createCells();
private:
T **cells;
};
int main(int argc, char **argv)
{
grid<int> intGrid;
_getch();
return 0;
}
While trying to compile - got a message:
1>main.obj : error LNK2019: unresolved external symbol "public: __thiscall
grid<int>::~grid<int>(void)" (??1?$grid@H@@QAE@XZ) referenced in function _main
1>main.obj : error LNK2019: unresolved external symbol "public: __thiscall
grid<int>::grid<int>(void)" (??0?$grid@H@@QAE@XZ) referenced in function _main
What need to do?
| You need to define the constructor and destructor (you just declared them):
template <typename T> class grid
{
public:
grid()
{} // here
~grid()
{} // and here
void createCells();
private:
T **cells;
};
|
2,294,809 | 2,294,851 | Going from Java imports to C++ includes | I've been struggling with understanding how C++ classes include other classes. I'm guessing this is easier to understand without any preconceived notions.
Assume my two classes are Library and Book. I have a .h and .cpp file for each. My "main.cpp" runs a simple console app to use them. Here is a simple example:
//Library.h
#ifndef LIBRARY_H_
#define LIBRARY_H_
#endif
class Library
{
public:
Library();
~ Library();
private:
Book *database;
};
This throws an error about how "Book does not name a type". In Java I would import some package like org.me.Domain.Book. Can someone please explain how this works in C++?
| In C++, source files are conceptually completely separate from class definitions.
#include and header files work at a basic text level. #include "myfile" simply includes the contents of the file myfile at the point at which the include directive is placed.
Only after this process has happened is the resulting block of text interpreted as C++ code. There is no language requirement that a class called Book has to be defined in a file called Book.h. Although it is highly recommended that you do follow such a convention, it's essential to remember that it's not a given when debugging missing declaration or definition issues.
When parsing your Library.h file the compiler must have seen a declaration for the identifier Book at the point at which it is used in the definition of the class Library.
As you are only declaring a member variable of type "pointer to Book", you only need a declaration and not a full definition to be available so if Book is a class then the simplest 'fix' is to add a forward declaration for it before the definition of Library.
e.g.
class Book;
class Library
{
// definition as before
};
Include Guards
It looks like you may have some include guard errors. Because you can only define classes once per translation unit, the definitions inside header files are usually protected with include guards. These ensure that if the same header is included multiple times via different include files, the definitions it provides aren't seen more than once. Include guards should be arranged something like this. Looking at your Library.h, it may be that your include guards are not terminated correctly.
myclass.h:
#ifndef MYCLASS_H
#define MYCLASS_H
class MyClass
{
};
// The #ifndef is terminated after all defintions in this header file
#endif //MYCLASS_H
|
2,294,908 | 2,294,965 | operator bool() converted to std::string and conflict with operator std::string() | How can operator bool() cause an error when declaring operator std::string in a class and also serving as an implicit conversion to string by itself?
#include <iostream>
#include <string>
using namespace std;
class Test {
public:
operator std::string() { cout << "op string" << endl; return "whatever";}
operator bool() { cout << "op bool" << endl; return true;}
};
int main(int argc, char *argv[]) {
string s;
Test t;
s = t;
}
| The problem you are facing is that implicit conversions trigger both when you want them and when you don't.
When the compiler sees s = t it identifies the following potential std::operator= matches:
// using std::string for compactness instead of the full template
std::string::operator=( std::string const & );
std::string::operator=( char );
Now, t is neither of them, so it tries to convert it to something that can fit and finds two paths: convert to bool that can be promoted to char or convert to std::string directly. The compiler cannot really decide and gives up.
This is one of the reasons that you want to avoid providing many different conversion operators. Anything that can be implicitly called by the compiler will eventually be called when you don't think it should.
This article specifically deals with this problem. The suggestion is instead of providing a conversion to bool, provide a conversion to a member function
class testable {
typedef void (testable::*bool_type)();
void auxiliar_function_for_true_value() {}
public:
operator bool_type() const {
return condition() ? &testable::auxiliar_function_for_true_value : 0;
}
bool condition() const;
};
If an instance of this class is used inside a condition (if (testable())) the compiler will try and convert to bool_type that can be used in a condition.
EDIT:
After the comment on how the code is more complex with this solution, you can always provide it as a generic small utility. Once you provide the first part of the code, the complexity is encapsulated in the header.
// utility header safe_bool.hpp
class safe_bool_t;
typedef void (safe_bool_t::*bool_type)();
inline bool_type safe_bool(bool);
class safe_bool_t {
void auxiliar_function_for_true_value() {}
friend bool_type safe_bool(bool);
};
inline bool_type safe_bool(bool condition)
{
    return condition ? &safe_bool_t::auxiliar_function_for_true_value : 0;
}
Your class now becomes much more simple, and it is readable in itself (by choosing appropriate names for the functions and types):
// each class with conversion
class testable {
public:
operator bool_type() {
return safe_bool(true);
}
};
Only if the reader is interested in knowing how the safe_bool idiom is implemented and reads the header will they be faced with the complexity (which can be explained in comments)
|
2,295,011 | 2,295,029 | Preventing implicit cast of numerical types in constructor in C++ | I have a constructor of the form:
MyClass(int a, int b, int c);
and it gets called with code like this:
MyClass my_object(4.0, 3.14, 0.002);
I would like to prevent this automatic conversion from double to int, or at least get warnings at compile time.
It seems that the "explicit" keyword does not work in these case, right?
| What's your compiler? Under gcc, you can use -Wconversion to warn you about these types of conversions.
|
2,295,296 | 2,295,373 | Debugging memory leaks with libMallocDebug | I want to use the MallocDebug app to find some memory leaks in my app. I'm running Mac OS X 10.6.2. Whenever I try and following the instructions listed in this guide, I get the following error:
dyld: could not load inserted library: /usr/lib/libMallocDebug.A.dylib
Trace/BPT trap
I have verified that the .dylib file exists, and I get the same error no matter which app I try and run (it's not limited to my application). Several others have reported this problem as well, but so far no one has found a solution.
Any ideas?
| libMallocDebug is not available for 64-bit executables.
% lipo -info /usr/lib/libMallocDebug.A.dylib
Architectures in the fat file: /usr/lib/libMallocDebug.A.dylib are: i386 ppc7400
It does appear to work with 32-bit executables in 10.6, though, for example:
% lipo -thin i386 /bin/ls -out foo
% DYLD_INSERT_LIBRARIES=/usr/lib/libMallocDebug.A.dylib ./foo
libMallocDebug[foo-9141]: initializing libMallocDebug on thread 903
[...]
I'm not sure whether this is an oversight or it was never ported to the 64-bit runtime. You might try filing a bug.
Update: Seems there are just more debugging features in the regular malloc now. This discussion is pretty good.
|
2,295,297 | 2,295,346 | Why does this C++ code fail? | I have the following code
#include <iostream>
#include <vector>
using namespace std;
int distance(vector<int>& set1, vector<int>& set2) {
int distance = 0;
unsigned int i1 = 0;
unsigned int i2 = 0;
while(i1 < set1.size() && i2 < set2.size()) {
if(set1[i1] == set2[i2]) {
++i1; ++i2;
} else {
++distance;
set1[i1] < set2[i2] ? ++i1 : ++i2;
}
}
unsigned int zero = 0;
distance += std::max(set1.size() - i1, zero) + std::max(set2.size() - i2, zero);
}
int main() {
vector<vector<int> > frequent_sets;
vector<int> vector3;
vector3.push_back(1);vector3.push_back(2);vector3.push_back(3);
vector<int> vector2;
vector2.push_back(1);vector2.push_back(2);
frequent_sets.push_back(vector3);
frequent_sets.push_back(vector3);
frequent_sets.push_back(vector2);
frequent_sets.push_back(vector3);
for(vector<vector<int> >::iterator itouter = frequent_sets.begin(); itouter != frequent_sets.end(); ++itouter)
for(vector<vector<int> >::iterator itinner = (itouter + 1); itinner != frequent_sets.end(); ++itinner)
if(distance(*itinner, *itouter) == 0) {
cout << "Hey" << endl;
}
}
When I try to compile I get the error:
make all Building file:
../src/TestIterator.cpp Invoking: GCC
C++ Compiler g++ -O0 -g3 -Wall -c
-fmessage-length=0 -MMD -MP -MF"src/TestIterator.d" -MT"src/TestIterator.d" -o"src/TestIterator.o" "../src/TestIterator.cpp"
/usr/include/c++/4.3/bits/stl_iterator_base_types.h: In instantiation of
'std::iterator_traits > >':
../src/TestIterator.cpp:50:
instantiated from here
/usr/include/c++/4.3/bits/stl_iterator_base_types.h:133:
error: no type named
'iterator_category' in 'class
std::vector
' make: *** [src/TestIterator.o] Error 1
Why is this? When I replace distance(*itouter, *itinner) == 0 with itinner->size() == itouter->size() the code is compiling and running fine.
| Your distance function is clashing with the one in std. That's why it's usually not recommended to write using namespace std; in your code. Try removing that or renaming your function to something like my_distance.
|
2,295,440 | 2,295,502 | C++ Exception Handler problem | I written an exception handler routine that helps us catch problems with our software. I use
SetUnhandledExceptionFilter();
to catch any uncaught exceptions, and it works very well.
However my handler pops up a dialog asking the user to detail what they were doing at the time of the crash. This is where the problem comes in: because the dialog is in the same thread context as the crash, the dialog continues to pump the messages of the application. This causes me a problem, as one of our crashes is in a WM_TIMER, which goes off every minute. As you can imagine, if the dialog has been on the screen for over a minute, a WM_TIMER is dispatched and the app re-crashes. Re-entering the exception handler under this situation is bad news.
If I let Windows handle the crash, Windows displays a dialog that appears to function, but stops the messages propagating to the rest of the application, hence the WM_TIMER does not get re-issued.
Does anyone know how I can achieve the same effect?
Thanks
Rich
| Perhaps you could launch a separate data collection process using CreateProcess() when you detect an unhandled exception. This separate process would prompt the user to enter information about what they were just doing, while your main application can continue to crash and terminate.
Alternatively, if you don't want to start another process, you could perhaps create another thread with a separate message queue, that blocks your main thread from doing anything at all while the dialog is on the screen. While your main thread is blocked it won't have the opportunity to handle WM_TIMER messages.
|
2,295,582 | 2,295,591 | Template class won't build properly | Header
class linkNode {
public:
linkNode(void *p) {
before = 0;
after = 0;
me = p;
}
linkNode *before;
void *me;
linkNode *after;
};
template <class T>
class list
{
public:
list(void) { first = last = NULL; size = 0; }
~list(void) { while(first) deleteNode(first); }
private:
void deleteNode(linkNode *l);
linkNode *first, *last;
unsigned int size;
};
.Cpp
template <class T>
inline void list<T>::deleteNode(linkNode *l) {
if(c->before)
if(c->after) {
c->before->after = c->after;
c->after->before = c->before;
} else
c->before->after = last = NULL;
else
if(c->after)
c->after = first = NULL;
delete c; size--;
}
I have this set to build as a .lib and it builds fine. When I try ePhys::list<int> myList; I get a linker error saying it can't find ePhys::list<int>::deleteNode(class ePhys::linkNode *). This is not a problem with setting up or using the library; I have tested with other dummy classes.
I am using MSVC 2010 beta.
Is there any way to get this to link properly?
| C++ does not really support the separate compilation of templates - you need to put all your template code in the header file(s).
|
2,295,639 | 3,423,041 | Why is event handling in native Visual C++ deprecated? | http://msdn.microsoft.com/en-us/library/ee2k0a7d.aspx
Event handling is also supported for
native C++ classes (C++ classes that
do not implement COM objects),
however, that support is deprecated
and will be removed in a future
release.
Anyone knows why? Couldn't find any explanation for this statement.
|
It's a totally non-standard kludge that probably has very few actual
users. And I mean non-standard kludge even in the WinNT and Microsoft-private world.
COM has much richer repertoire for event-like mechanisms and also allow
fully multi-threaded code these days
This one is lethal - that functionality is doing implicit locking (probably our grandpa's idea
of "synchronized" before templates and widespread safe use of normal critical sections). That
makes it more dangerous than COM's single apartment, ahem, thing :-) As in it can give you a deadlock out of nowhere (happened to Java's synchronized methods as well - nothing special :-)
Everyone and their dogs know how to use normal multi-threading and at least critical sections with smart pointers these days, so besides being dangerous, that thing is also irrelevant.
|
2,295,969 | 2,296,104 | Visual Studio 2010 and boost::bind | I have this simple piece of code that uses boost::bind:
#include <boost/bind.hpp>
#include <utility>
#include <vector>
#include <iterator>
#include <algorithm>
int main()
{
std::vector<int> a;
std::vector<std::pair<bool,int> > b;
a.push_back(1);
a.push_back(2);
a.push_back(3);
std::transform(a.begin(), a.end(), std::back_inserter(b),
boost::bind(std::make_pair<bool, int>, false, _1));
}
I'm getting a ton of errors in VS2010 RC, such as:
Error 1 error C2780: 'boost::_bi::bind_t<_bi::dm_result<MT::* ,A1>::type,boost::_mfi::dm<M,T>,_bi::list_av_1<A1>::type> boost::bind(M T::* ,A1)' : expects 2 arguments - 3 provided c:\projects\testtuple\main.cpp 18
Error 2 error C2780: 'boost::_bi::bind_t<Rt2,boost::_mfi::cmf8<R,T,B1,B2,B3,B4,B5,B6,B7,B8>,_bi::list_av_9<A1,A2,A3,A4,A5,A6,A7,A8,A9>::type> boost::bind(boost::type<T>,R (__thiscall T::* )(B1,B2,B3,B4,B5,B6,B7,B8) const,A1,A2,A3,A4,A5,A6,A7,A8,A9)' : expects 11 arguments - 3 provided c:\projects\testtuple\main.cpp 18
Am I doing something wrong? If this is a bug in the compiler, how can I workaround it?
EDIT: added the entire test case.
Clarification: the code compiles in VS2008.
| Update:
The problem is that make_pair seems to be overloaded in the STL that ships with VS2010 (it wasn't in previous versions of VS or in GCC). The workaround is to make explicit which of the overloads you want, with a cast:
#include <boost/bind.hpp>
#include <utility>
#include <vector>
#include <iterator>
#include <algorithm>
int main()
{
std::vector<int> a;
std::vector<std::pair<bool,int> > b;
a.push_back(1);
a.push_back(2);
a.push_back(3);
typedef std::pair<bool, int> (*MakePairType)(bool, int);
std::transform(a.begin(), a.end(), std::back_inserter(b),
boost::bind((MakePairType)&std::make_pair<bool, int>,
false, _1));
}
For additional details see the Boost bind manual.
|
2,295,994 | 2,296,020 | performance: sorting 'm' vectors with N/m elems Vs sorting single vector with N elements | Operation A
I have N vectors, each containing certain number of unique 3D points. For Example : std::vector<double*> vec1; and like that
I am performing sort operation on each of the vector like:
std::sort(vec1.begin(), vec1.end(), sortCriteria());
std::sort(vec2.begin(), vec2.end(), sortCriteria());
std::sort(vec3.begin(), vec3.end(), sortCriteria());
Operation B
Suppose I have a vector called "all_point_vector" which holds the 3D points from vec1, vec2, vec3 ...
i.e. 3D points in all_point_vector = points_in_vec1 +.... +points_in_vector3.
and I am performing the sort operation:
std::sort(all_point_vec.begin(), all_point_vec.end(), sortCriteria());
My question is , which of the above methods (Operation A or B) will be faster in general? sorting a single vector (all_point_vector) or sorting individual vectors. I am just interested in the speed of execution of these two operations.
Thanks
| Sorting is an O(n log n) operation. Sorting m vectors of N/m elements each costs O(m * (N/m) * log(N/m)) = O(N log(N/m)) in total, which is asymptotically less work than the O(N log N) needed to sort a single vector of N elements, and the gap widens as the total size grows.
Which one is actually faster for any fixed N can only be determined by profiling.
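To make the comparison concrete, here is a minimal sketch of the two operations (all names are mine, with int stand-ins for the 3D points); only profiling the real data settles which is faster in practice:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Operation A: sort each sub-vector independently.
void sort_each(std::vector<std::vector<int>>& groups) {
    for (auto& g : groups)
        std::sort(g.begin(), g.end());
}

// Operation B: concatenate all groups into one vector and sort once.
std::vector<int> sort_all(const std::vector<std::vector<int>>& groups) {
    std::vector<int> all;
    for (const auto& g : groups)
        all.insert(all.end(), g.begin(), g.end());
    std::sort(all.begin(), all.end());
    return all;
}
```

Note that Operation A only yields m individually sorted runs, not one globally sorted sequence, so the two are interchangeable only if that is acceptable for your use case.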
|
2,296,101 | 2,296,177 | File version information | How can I add version information to a file? The files will typically be executables, .so and .a files.
Note: I'm using C++, dpkg-build and Ubuntu 8.10 if any of those have support for this.
| For shared objects, pass -Wl,-soname,<soname> to gcc, or -soname <soname> to ld.
Executables and static libraries do not have version information per se, but you can add it to the filename if you like.
|
2,296,106 | 2,296,121 | Non static members as default parameters in C++ | I'm refactoring a large amount of code where I have to add an extra parameter to a number of functions, which will always have a value of a member of that object. Something like
class MyClass
{
public:
CMyObject A,B;
void MyFunc(CMyObject &Object);
// used to be void MyFunc();
};
Now, I'd actually like it to read
class MyClass
{
public:
CMyObject A,B;
void MyFunc(CMyObject &Object = A);
};
But I'm not allowed to have a default parameter that is a non-static member. I've read this similar question, which suggests this isn't possible, but I'm wondering if there is any reasonable workaround. The reason is that 95% of the time the default parameter will be used, so a default parameter would hugely reduce the amount of code I have to change. My best solution so far is something like this:
class MyClass
{
public:
CMyObject A,B;
void MyFunc(BOOL IsA = TRUE);
};
void MyClass::MyFunc(BOOL IsA)
{
CMyObject &Object = IsA ? A : B; // a reference must be bound once and cannot be
                                 // reseated; the two-step "Object = &B" version
                                 // would not even compile
}
This is less than elegant, but is there a better way of doing this that I'm missing?
Edit: FWIW, the reason for the extra parameter is to externalize some state related members from the object in question to aid multi-threading.
| How about :
class MyClass
{
public:
CMyObject A,B;
void MyFunc()
{
MyFunc(A);
}
void MyFunc(CMyObject &Object);
};
?
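To illustrate how call sites look with this overload approach (CMyObject is stubbed here purely for the sketch), existing no-argument calls keep working unchanged, while the minority of callers pass the other member explicitly:

```cpp
#include <cassert>

// Stub standing in for the real CMyObject (an assumption for this sketch).
struct CMyObject { int value = 0; };

class MyClass {
public:
    CMyObject A, B;
    void MyFunc() { MyFunc(A); }                   // old no-argument calls still work
    void MyFunc(CMyObject& Object) { Object.value += 1; }
};
```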
|
2,296,129 | 2,296,145 | Array of classes. Stack or heap? | class temp;
temp *t;
void foo() { temp foo2; t[1] = foo2; }
int main() {
t = new temp[100];
foo();
//t[1] is still in memory?
}
If I want an array of classes like this, am I going to have to use a pointer to pointer (and use 'new' on each element in the array)? E.g.:
temp **t;
If I want to make an array of 100 pointers-to-pointer, do I have to do
temp **t = new temp[100][1];
Is there a better way to do that without four square brackets?
| The code:
t = new temp[100];
constructs an array of 100 objects of type temp. A safer way to do the same thing is:
std::vector <temp> t(100);
which absolves you of ever having to call delete[] on the array.
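As a quick illustrative sketch (with a stubbed temp class), all 100 elements are constructed up front and destroyed automatically when the vector goes out of scope:

```cpp
#include <cstddef>
#include <vector>

struct temp { int x = 0; };   // stub for illustration

std::size_t make_and_count() {
    std::vector<temp> t(100); // 100 default-constructed temps
    t[1].x = 42;              // element access works exactly like an array
    return t.size();          // the vector cleans up on scope exit, no delete[]
}
```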
|
2,296,577 | 2,296,588 | When object is constructed statically inside a function, would it be allocated on the heap or on the stack? | if i have the following code:
for (...)
{
A a;
}
would a be allocated on the heap or on the stack?
| On the stack.
Memory is only allocated on the heap when doing new (or malloc and its friends if you are doing things C-style, which you shouldn't in C++).
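A small sketch making that visible with an instance counter (the counter is my own addition for illustration):

```cpp
#include <cassert>

struct A {
    static int alive;
    A()  { ++alive; }
    ~A() { --alive; }
};
int A::alive = 0;

void loop_demo() {
    for (int i = 0; i < 3; ++i) {
        A a;                    // stack: constructed and destroyed every iteration
        assert(A::alive == 1);
    }
    assert(A::alive == 0);      // all loop instances are gone, no delete needed

    A* p = new A;               // heap: lives until explicitly deleted
    assert(A::alive == 1);
    delete p;
    assert(A::alive == 0);
}
```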
|
2,296,634 | 2,296,663 | DRYing c++ structure | I have a simple c++ struct that is extensively used in a program. Now I wish to persist the structure in a sqlite database as individual fields (iow not as a blob).
What good ways are there to map the attributes of the struct to database columns?
| Since C++ isn't a very "dynamic" language, it comes up short on the kinds of ORMs you commonly find in other languages that make this task light work.
Personally speaking, I've always ended up having to write very thin wrapper classes for each table manually. Basically, you need a structure that maps to each table and an accessor class to get data in and out of the table as needed.
The structures should have a field per column and you'll need methods for each database operation you want to perform (CRUD for example).
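As a hedged illustration of such a thin wrapper, here is a sketch with invented names. The SQL is only built as a string; a real accessor would bind values through the sqlite3 API rather than concatenate them, to avoid SQL injection:

```cpp
#include <string>

// One struct per table, one field per column.
struct Person {
    int id;
    std::string name;
};

// A minimal accessor method: builds the INSERT statement for one row.
// (Real code: prepare a parameterized statement and bind p.id / p.name.)
std::string insert_sql(const Person& p) {
    return "INSERT INTO person (id, name) VALUES ("
           + std::to_string(p.id) + ", '" + p.name + "');";
}
```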
|
2,296,918 | 2,297,106 | Calling WNetAddConnection2 with empty local name | I have a small program that simply checks if a specified file is located on a specified network drive that is not mapped on the computer.
To check this I temporarily map to the network location, check if the file exists, and then unmap the drive. I have now figured out that I can call WNetAddConnection2 with an empty local name (MSDN: If the string is empty, or if lpLocalName is NULL, the function makes a connection to the network resource without redirecting a local device.).
Just for showing the code:
NETRESOURCE nr;
nr.dwType = RESOURCETYPE_DISK;
nr.lpLocalName = NULL; // explicitly set this to NULL
nr.lpRemoteName = "\\\\computer\\c$";
nr.lpProvider = NULL;
DWORD dwResult = WNetAddConnection2(&nr, cstrPassword, cstrUsername, FALSE);
if (dwResult != 0)
{
return false;
}
CPath cLocation(cstrFileLocation);
return cLocation.FileExists() != FALSE;
So far so good the code works fine. But what I now want to know is if there is any problem with that call of WNetAddConnection2? I cannot call WNetCancelConnection, as I do not have a local name. So do I have some kind of zombies on my computer now?
How can I see all my network connections on my computer? Best would be a short command for the Command Prompt (something like NET USE).
| Ok, figured it out. I can call WNetCancelConnection2(nr.lpRemoteName, 0, TRUE); to unmap the drive properly.
|
2,297,059 | 2,307,798 | Release management system for Linux | What we need in our firm is a sort of release management tool for Linux/C++. Our products consist of multiple libraries and config files. Here I will list the basic features we want such system to have:
Ability to track dependencies, easily increase major versions of libraries whose dependencies got their major version increased. It should build some sort of dependency graph internally so it can know who is affected by an update.
Know how to build the products it handle. Either a specific build file or even better - ability to read and understand makefiles.
Work with SVN so it can check for new releases from there and does the build.
Generate some installers - in rpm or tar.gz format. For that purpose it should be able to understand the rpm spec file format.
Currently we are working on such tool which is already pretty usable. However I believe that our task is not unique and there should be some tool out there which does the job.
| In the project I'm currently working on we use cmake and other Kitware tools to handle most of these issues for native code (C++). Answering point by point:
The cmake scripts handle the dependencies for our different projects. We have a dependency graph, but I don't know whether it is a home-made script or functionality that cmake provides.
cmake generates the makefiles appropriate for the platform. It can also generate projects for Eclipse CDT and Visual Studio when asked to, which is useful during development.
CMake has a couple of companion tools, CTest and CDash, that we use to run the daily build and see how the tests are doing.
To create the installer, CMake has CPack. From a single script it can generate tar.gz, deb or rpm files on Linux, or an automatically generated NSIS script to build installers on Windows.
For Java code we use Maven and Hudson, which have already been mentioned here.
|
2,297,064 | 2,297,160 | Typedeffing a function (NOT a function pointer) | typedef void int_void(int);
int_void is a function taking an integer and returning nothing.
My question is: can it be used "alone", without a pointer? That is, is it possible to use it as simply int_void and not int_void*?
typedef void int_void(int);
int_void test;
This code compiles. But can test be somehow used or assigned to something (without a cast)?
/* Even this does not work (error: assignment of function) */
typedef void int_void(int);
int_void test, test2;
test = test2;
| What happens is that you get a shorter declaration for functions.
You can call test, but you will need an actual test() function.
You cannot assign anything to test because a function name is not a modifiable lvalue; it is essentially a constant.
You can also use int_void to define a function pointer as Neil shows.
Example
typedef void int_void(int);
int main()
{
int_void test; /* Forward declaration of test, equivalent to:
* void test(int); */
test(5);
}
void test(int abc)
{
}
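Beyond forward declarations, the typedef's main practical use is declaring function pointers cheaply. A small sketch (the handler names are invented for illustration):

```cpp
#include <cassert>

typedef void int_void(int);

int last_seen = 0;
void record(int x) { last_seen = x; }   // matches the int_void signature

// int_void* is an ordinary function pointer type, free of the usual
// (*fp)(int) syntax noise.
void dispatch(int_void* handler, int value) {
    handler(value);
}
```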
|
2,297,164 | 2,308,670 | STL deque accessing by index is O(1)? | I've read that accessing elements by position index can be done in constant time in a STL deque. As far as I know, elements in a deque may be stored in several non-contiguous locations, eliminating safe access through pointer arithmetic. For example:
abc->defghi->jkl->mnop
The elements of the deque above consist of a single character each. The set of characters in one group indicates that it is allocated in contiguous memory (e.g. abc is in a single block of memory, defghi is located in another block of memory, etc.).
Update: Or is there any other common implementation for a deque?
| I found this deque implementation from Wikipedia:
Storing contents in multiple smaller arrays, allocating additional
arrays at the beginning or end as needed. Indexing is implemented by
keeping a dynamic array containing pointers to each of the smaller
arrays.
I guess it answers my question.
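The constant-time index arithmetic behind that description can be sketched in a toy fixed-block model (not any real library's layout; real deques also track a start offset so push_front works):

```cpp
#include <cstddef>
#include <vector>

// Toy model: a "map" of pointers to fixed-size blocks, as in the
// Wikipedia description quoted above.
const std::size_t BLOCK = 4;

char index_deque(const std::vector<const char*>& map, std::size_t i) {
    return map[i / BLOCK][i % BLOCK];   // O(1): pick the block, then the offset
}
```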
|
2,297,363 | 2,297,493 | What alternatives to the Windows registry exist to store software configuration settings | I have a C++ MFC app that stores all of its system wide configuration settings to the registry. Previously, we used .INI files, and changed over to using the registry some years back using
SetRegistryKey("MyCompanyName");
We now get regular support calls from users having difficulty migrating from one PC and Windows version to another, and I've come to the conclusion that using the registry causes many more problems than it solves. I don't particularly want to go back to .INI files either, as I'd like to store settings per user, so the plan is to write my own versions of the GetProfile... and SetProfile... functions to work with a database. Has anybody done this already, and does anyone know of an existing drop-in replacement library for registry usage that wouldn't require too much code modification? Ideally, I'd also like it to have options to read initial values from the registry to support existing users.
| I suggest moving over to an XML file in the same location as the executable. One benefit is that XML is portable across non-Windows machines (and even between Windows versions).
Edit: The idea behind an XML configuration file in the same location as the executable is that the configuration file is for program configurations (not user) and for encapsulation / data hiding (not to pollute global file system directories). User configuration data should be in another file in a directory of the user or user's choice.
|
2,297,390 | 2,297,415 | How can the order of inherited includes be controlled in vsprops Property Sheets? | I'm using some vsprops sheets that inherit from each other. My base property sheet defines some include paths. In a second vsprops file that inherits from it, I want to add some more include paths.
However, I want to be able to choose whether the additional include paths come before or after the base include paths. I'm sure I've seen something like an $(Inherits) variable that can be used to do this, but I can't remember what it was or find it again.
Any help would be appreciated.
| Ok, found it now:
$(Inherit) is what I want, as in this example:
c:\test2;$(Inherit);c:\mystuff
See:
http://msdn.microsoft.com/en-us/library/hx1tt59t(VS.80).aspx
|
2,297,402 | 2,297,498 | Element is removed from QList but static counter of existing objects doesn't decrease | I have question about removing element from QList.
"myclass.h":
class node2D : public QObject
{
Q_OBJECT
public:
node2D(){++s_NCount;};
~node2D(){--s_NCount;};
int checkCount(){return s_NCount;};
private:
static int s_NCount;
};
"myclass.cpp":
int node2D::s_NCount = 0;
"main.cpp":
void main()
{
int i,max_el(4);
QList<node2D*> *Nlist;
Nlist = new QList<node2D*>;
node2D controlNode;
for (i = 0 ;i < max_el ; i++)
{
Nlist->append(new node2D);
}
cout << "Nlist size before: " << Nlist->size() << endl;
cout << "Number of nodes before removing: " << controlNode.checkCount() << endl;
Nlist->clear();
cout << "NList size after: " << Nlist->size() << endl;
delete Nlist;
cout << "Number of nodes after removing: " << controlNode.checkCount() << endl;
}
After executing I get:
NList size before: 4
Number of nodes before removing: 5
NList size after: 0
Number of nodes after removing: 5
What's bothering me is the fact that number of node2D objects is still 5 instead of 1.
Of course it can be managed like this:
for (i = 0; i < Nlist->size(); i++)
{
delete (*Nlist)[i];
}
Nlist->clear();
but shouldn't node2D objects be automatically deleted while Nlist->clear()?
Or does it only happen when there is parent-child relation?
Thanks in advance,
Pawel
|
but shouldn't node2D objects be automatically deleted while Nlist->clear()?
Not at all. What if I want to use these objects somewhere else? That is mostly the case for me. Managing the objects pointed to by the pointers you add to the list is your concern, not QList's. Managing the copies of those pointers is, on the other hand, QList's concern.
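A sketch of that ownership rule in plain C++ (node2D stubbed with a counter for illustration); note that Qt also ships qDeleteAll in <QtAlgorithms>, which performs exactly this delete loop for you:

```cpp
#include <cassert>
#include <vector>

struct node2D {
    static int count;
    node2D()  { ++count; }
    ~node2D() { --count; }
};
int node2D::count = 0;

// The container owns only the pointers; the owner must delete the pointees.
void clear_and_delete(std::vector<node2D*>& list) {
    for (node2D* p : list)
        delete p;
    list.clear();
}
```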
|
2,297,567 | 2,297,584 | where should "include" be put in C++ | I'm reading some C++ code and noticed that there are "#include" directives both in the header files and the .cpp files. I guess that if I moved all the "#include"s in a file (say foo.cpp) to its header file foo.hh, and let foo.cpp include only foo.hh, the code should still work, leaving aside issues like drawbacks, efficiency, etc.
I know this off-the-cuff idea must be a bad one in some way, but what exactly are its drawbacks? I'm new to C++ and don't want to read lots of C++ books before I can answer this question myself, so I'm just dropping the question here for your help. Thanks in advance.
| As a rule, put your includes in the .cpp files when you can, and only in the .h files when that is not possible.
You can use forward declarations to remove the need to include headers from other headers in many cases: this can help reduce compilation time which can become a big issue as your project grows. This is a good habit to get into early on because trying to sort it out at a later date (when its already a problem) can be a complete nightmare.
The exception to this rule is templated classes (or functions): in order to use them you need to see the full definition, which usually means putting them in a header file.
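A compact sketch of the forward-declaration technique, with the file boundaries shown as comments and all names invented:

```cpp
#include <cassert>

// foo.hh (sketch): no #include of Bar's header is needed here.
class Bar;                 // forward declaration

class Foo {
public:
    int useBar(Bar& b);    // reference parameter: a declaration suffices
private:
    Bar* bar_ = nullptr;   // pointer member: a declaration suffices
};

// foo.cpp (sketch): the full definition is needed only where Bar's
// members are actually used.
class Bar {
public:
    int value() const { return 42; }
};

int Foo::useBar(Bar& b) { return b.value(); }
```

Only translation units that touch Bar's members need its full definition, so editing Bar's implementation no longer forces a rebuild of everything that merely names it.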
|
2,297,962 | 2,298,001 | extend boost.asio file i/o for linux | According to this question about the topic there is no asynchronous file io in asio anything but Windows...
So fine, does anyone know of any already written extensions to asio that do asynchronous file io on Linux?
Or does anyone know of any examples on how to extend asio to support asynchronous io to {insert-whatever-here}?
| My guess is that if this was easy to do, they would have done it.
|
2,298,099 | 2,298,120 | Why isn't C++ strtok() working for me? | The program is supposed to receive an input through cin, tokenize it, and then output each one to show me that it worked properly. It did not.
The program compiles with no errors, and takes an input, but fails to output anything.
What am I doing wrong?
int main(int argc, char* argv[])
{
string input_line;
while(std::cin >> input_line){
char* pch = (char*)malloc( sizeof( char ) *(input_line.length() +1) );
char *p = strtok(pch, " ");
while (p != NULL) {
printf ("Token: %s\n", p);
p = strtok(NULL, " ");
}
}
return 0;
}
I followed the code example here: http://www.cplusplus.com/reference/clibrary/cstring/strtok/
Thanks.
| Looks like you forgot to copy the contents of input_line to pch:
strcpy(pch, input_line.c_str());
But I'm not sure why you're doing string tokenization anyway. cin >> input_line does not read a whole line, only a single whitespace-delimited token, so you are getting tokens already.
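Putting the fix together, a hedged sketch of a corrected tokenizer, driven here by a plain string rather than std::cin (use std::getline if you want whole lines to feed it):

```cpp
#include <cstdlib>
#include <cstring>
#include <string>
#include <vector>

std::vector<std::string> tokenize(const std::string& input_line) {
    std::vector<std::string> tokens;
    char* pch = (char*)std::malloc(input_line.length() + 1);
    std::strcpy(pch, input_line.c_str());  // the missing step: copy before strtok
    for (char* p = std::strtok(pch, " "); p != NULL; p = std::strtok(NULL, " "))
        tokens.push_back(p);
    std::free(pch);                        // malloc'd memory must be freed
    return tokens;
}
```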
|