2,035,890
2,036,005
Unit Testing Concurrent Code
My weekend project consists of writing a cross-platform concurrency primitives library (critical sections, read/write mutexes, interlocked integers, events, etc.), and I was wondering how to unit test this stuff. I realize that testing concurrent code is hard in itself, but testing the primitives of said code couldn't be that hard, could it? Turns out, it is that hard. At least, for me it is. So how would you go about approaching this? Just as an example, I don't even know where to start with testing critical sections.
Don't think about unit tests, think about the behaviour you want to specify. For example:

Given_an_unlocked_lock
    It_should_be_possible_to_take_it
Given_a_locked_lock
    It_should_not_be_possible_to_take_it_from_another_thread
    It_should_be_possible_to_take_it_from_the_same_thread
Given_a_locked_lock_when_unlocked
    It_should_be_possible_to_take_it
Given_a_locked_lock_when_owning_thread_terminates
    It_should_be_possible_to_take_it

I think that will help you identify what to do. And yes, you will probably need a helper thread in your unit tests to make it happen. Maybe this example is helpful.
2,036,104
2,036,121
Validity of the code
Consider the following code:

void populate(int *arr)
{
    for (int j = 0; j < 4; ++j)
        arr[j] = 0;
}

int main()
{
    int array[2][2];
    populate(&array[0][0]);
}

There was a discussion regarding this on a local community about whether the code is valid or not (am I supposed to mention its name?). One guy was saying that it invokes UB because it violates the C++ Standard ($5.7/5 [expr.add]): "If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined." But I don't see anything wrong with the code; it looks perfectly OK to me. So, I just want to know: is this code valid or not? Am I missing something?
Your array is two arrays of int[2], while your function populate() treats it as a single array of int[4]. Although the elements of array are laid out contiguously in practice, arr points into the array object array[0], which has only two elements. Specifically, when j is 2 and you try to access arr[2], the pointer arithmetic goes outside the bounds of main's array[0], which is exactly the overflow the quoted paragraph makes undefined.
2,036,182
2,036,298
boost, shared ptr Vs weak ptr? Which to use when?
In my current project I am using boost::shared_ptr quite extensively. Recently my fellow teammates have also started using weak_ptr. I don't know which one to use, and when. Apart from this, what should I do if I want to convert a weak_ptr to a shared_ptr? Does putting a lock on a weak_ptr to create a shared_ptr affect my code in another thread?
In general and summary:

Strong pointers guarantee their own validity. Use them, for example, when:
- You own the object being pointed at; you create it and destroy it
- You do not have defined behavior if the object doesn't exist
- You need to enforce that the object exists

Weak pointers guarantee knowing their own validity. Use them, for example, when:
- You access it, but it's not yours
- You have defined behavior if the object doesn't exist

lock() on a weak pointer returns a strong pointer; this is how you access the weak pointer. If the object is no longer valid (it's been deleted, etc.), then the strong pointer will be NULL; otherwise, it will point at the object. You will need to check this. It's set up this way so that you cannot accidentally delete the object while you're using it: you've made a temporary (local) strong pointer, and thus guaranteed the object's existence while that strong pointer remains. When you're done using the object, you generally let the strong pointer fall out of scope (or reassign it), which then allows the object to be deleted.

For multithreading, treat them with the same care you treat other things that don't have built-in thread safety, noting that the guarantee I mentioned above will hold when multithreading. AFAIK they don't do anything special past that. The Boost shared pointers also have garbage-collector-like features: when the last strong pointer to an object goes away or points somewhere else, the object gets deleted. There are also the performance and circular-dependency considerations mentioned in the other answers.

Fundamentally, I would say that the Boost shared pointer library allows you to not mess up putting together a program, but it is no substitute for taking the time to properly design your pointers, object ownership and lifetimes. If you have such a design, you can use the library to enforce it. If you don't have such a design, you're likely to run into different problems than before.
2,036,473
2,036,487
Know what references an object
I have an object which implements a reference counting mechanism. If the number of references to it becomes zero, the object is deleted. I found that my object is never deleted, even when I am done with it. This is leading to memory overuse. All I have is the number of references to the object, and I want to know the places which reference it so that I can write appropriate cleanup code. Is there some way to accomplish this without having to grep through the source files? (That would be very cumbersome.)
A huge part of getting reference counting (refcounting) done correctly in C++ is to use Resource Acquisition Is Initialization (RAII), so it's much harder to accidentally leak references. However, this doesn't solve everything with refcounts.

That said, you can implement a debug feature in your refcounting which tracks what is holding references. You can then analyze this information when necessary, and remove it from release builds. (Use a configuration macro similar in purpose to how DEBUG macros are used.) Exactly how you should implement it is going to depend on all your requirements, but there are two main ways to do this (with a brief overview of differences):

- Store the information on the referenced object itself:
  - accessible from your debugger
  - easier to implement
- Output to a special trace file every time a reference is acquired or released:
  - still available after the program exits (even abnormally)
  - possible to use while the program is running, without running in your debugger
  - can be used even in special release builds and sent back to you for analysis

The basic problem of knowing what is referencing a given object is hard to solve in general, and will require some work. Compare: can you tell me every person and business that knows your postal address or phone number?
2,036,503
2,036,723
Java, C++, NIO, mmaped buffer, synchronization
Exposition: I am on Linux / Mac. Part of my code is in Java, part of my code is in C++. They both have the same file mmapped for fast communication. I want to synchronize the Java & C++ code. I know the following:

1) Given two threads in Java, I can use locks / monitors.
2) Given one piece of code in Java and one in C++, I can have them synchronize over TCP/IP.
3) Given two pieces of C++ code that have mmapped an area of memory, I can have them synchronize using gcc's compare_and_swap on an integer in the mmapped region.

Question: Given that part of my code is in Java and part is in C++, can I somehow do (3) -- does the JVM support some type of atomic compare & swap, so both my Java & C++ code can use this integer as a lock of sorts? Thanks!
You could write a small C/C++ library whose only purpose is to sync with your C++ code (using conventional IPC sync objects). Then you could call this library from your Java process using JNI.
2,036,592
2,036,613
how to convert decimal to binary in c++
I have a method to convert dec to bin:

QList<bool> widgetInput::decToBin(int number)
{
    int remainder;
    QList<bool> result;
    if (number <= 1)
    {
        result << number;
        return result;
    }
    remainder = number % 2;
    decToBin(number >> 1);
    result << remainder;
}

But unfortunately this method only ever holds one element in the list. When I replace the "result << number" with "cout << number", it works. Could you please help me and let me know where exactly my problem is? Regards.
On each recursive step, you are creating a new QList result which is local to that step, then inserting the remainder into it; the list built by the recursive call is simply thrown away. You don't need recursion (and in general it should be avoided when iteration will do):

QList<bool> result;
while (number > 0)
{
    result << (number % 2);
    number /= 2;
}
// Edited to add: just realized you would also have to reverse the QList here,
// since this produces the least significant bit first. Depends on its interface.
return result;

Or better yet, just use a standard container:

bitset<sizeof(int)*CHAR_BIT> bs(number);
2,036,745
2,037,079
My thread pool only makes 4~5 threads. Why?
I use the QueueUserWorkItem() function to invoke a threadpool, and I tried lots of work with it (about 30000 items). But according to Task Manager my application only makes 4~5 threads after I push the start button. I read on MSDN that the default thread limit is about 500. Why are only a few threads made in my application? I'm trying to speed up my application, and I suspect this threadpool is one of the reasons it is slow. Thanks.
It is important to understand how the threadpool scheduler works. It was designed to fine-tune the number of running threads against the capabilities of your machine. Your machine probably can run only two threads at the same time; dual-core CPUs are the current standard. Maybe four. So when you dump a bunch of threads in its lap, it starts out by activating only two threads. The rest of them are in a queue, waiting for CPU cores to become available. As soon as one of those two threads completes, it activates another one. Twice a second, it evaluates what's going on with active threads that didn't complete. It makes the rough assumption that those threads are blocking and thus not making progress, and allows another thread to activate. You've now got three running threads. Getting up to the 500 threads, the default max number of threads, will take 249 seconds. Clearly, this behavior spells out what a thread should do to be suitable to run as a threadpool thread. It should complete quickly and not block often. Note that blocking on I/O requests is dealt with separately. If this behavior doesn't suit you then you can use a regular Thread. It will start running right away and compete with other threads in your program (and the operating system) for CPU time. Creating 30,000 of such threads is not possible; there isn't enough virtual memory available for that. A 32-bit operating system poops out somewhere south of 2000 threads, consuming all available virtual memory. You can get about 50,000 threads on a 64-bit operating system before the paging file runs out. Testing these limits in a production program is not recommended.
2,036,892
2,036,902
STL container fails to add a structure defined inside a function
I have encountered a problem with code similar to this:

void aFuncion()
{
    struct entry
    {
        std::string field1;
        int field2;
        int field3;
        entry(const entry& ent)
        {
            // copy constructor code
        }
        entry()
        {
            // default constructor code
        }
        entry(std::string s, int a, int b)
        {
            field1 = s;
            field2 = a;
            field3 = b;
        }
    }; // end of structure definition

    std::vector<entry> vec;
    entry en("a string", 1, 2);
    vec.push_back(en); // vec has garbage in index 0
}

After pushing the entry into the vector, the debugger shows only garbage in the vector's first entry. The problem was resolved once we took the structure definition out of the function. Why did the problem occur, and how did moving the definition out of the function resolve it? (We are working with VS 2008 on XP 32-bit.)
The current C++ standard does not allow template arguments to be locally defined types. This is remedied in the upcoming version of the standard. From 14.3.1/2: "A local type, a type with no linkage, an unnamed type or a type compounded from any of these types shall not be used as a template-argument for a template type-parameter."
2,036,913
2,036,934
auto_ptr baffling behaviour
#include <iostream>
#include <memory>
#include <cstdio>
using namespace std;

class YourClass
{
    int y;
public:
    YourClass(int x) { y = x; }
};

class MyClass
{
    auto_ptr<YourClass> p;
public:
    MyClass() //: p(new YourClass(10))
    {
        p = (auto_ptr<YourClass>)new YourClass(10);
    }
    MyClass(const MyClass&) : p(new YourClass(10)) {}
    void show()
    {
        //cout << '\n' << p; // Was not working, hence commented
        printf("%p\n", (void*)p.get());
    }
};

int main()
{
    MyClass a;
    a.show();
    MyClass b = a;
    cout << '\n' << "After copying";
    a.show(); // If I remove the copy constructor from the class, this becomes NULL
              // (the value of the auto_ptr becomes NULL), but if the class has a
              // copy constructor it remains the same (unchanged)
    b.show(); // expected behavior both with and without the copy constructor
}

Making the problem more specific: currently the class has a copy constructor, so there is no problem with the value of the auto_ptr printed by a.show() when it is called the second time; it remains the same as when it was initialized. If I remove the copy constructor from the class MyClass, the value of the auto_ptr printed by a.show() (when it is called the second time) is NULL.
What's happening is due to the strange (but only justifiable if you think about it) semantics of assigning or copying an auto_ptr, e.g.

auto_ptr<T> a;
auto_ptr<T> b(new T());
a = b;

... or ...

auto_ptr<T> b(new T());
auto_ptr<T> a(b);

These will set a to b as expected, but they will also set b to NULL (see http://www.cplusplus.com/reference/std/memory/auto_ptr/auto_ptr/). If you don't define a copy constructor for MyClass, then the compiler will generate one for you, and it will do something similar to the above when it copies the auto_ptr member. Hence the copied-from object will have a NULL member after the copy constructor has been called.
2,036,985
2,037,002
How can I develop a virtual drive
I would like to create a virtual drive for Windows. I'm not looking to map a drive or something like that; I'm looking to map it to my DLL functions or something of that sort. How can I get this accomplished? I read that I would have to develop a device driver, or a shell extension? I have a lot of experience with C++ and C#. Where do I get started? I want to make it as simple as possible at first; how much time do you think it would take me? And what's the simplest way to do it, so that I can build my prototype? My motivation is to pipe a file into an application that currently only reads files from the file system. I'm only interested in pumping one file, so I think that a whole virtual drive would be overkill, as would writing kernel-mode code.
If you want to build it from scratch then yes, you have to build a driver. However, it would be much easier for you to use a proxy driver like Dokan, and create the file system in user mode. Take a look at the Wikipedia article on IFS, there are links to other useful tools at the bottom of the page.
2,037,155
2,037,242
std::string as C++ byte array
Google's Protocol Buffers uses the C++ standard string class std::string as a variable-size byte array (see here), similar to Python, where the string class is also used as a byte array (at least until Python 3.0). This approach seems good:

- It allows fast assignment via assign and fast direct access via data, which is not allowed with vector<byte>.
- It allows easier memory management and const references, unlike using byte*.

But I am curious: is that the preferred way to represent byte arrays in C++? What are the drawbacks of this approach (beyond a few static_casts)?
std::string may have a reference-counted implementation, which may or may not be an advantage/disadvantage for what you're writing -- always be careful about that, and note that such a std::string may not be thread safe. The potential advantage of std::string is easy concatenation; however, this can also be achieved easily using the STL. Also, all those problems in relation to protocols disappear when using boost::asio and its buffer objects.

As for the claimed drawbacks of std::vector:

- fast assign can be done with a trick using std::swap
- data can be accessed via &arr[0] -- vectors are guaranteed to be contiguous (at least all implementations implement them so, and the guarantee was added to the standard)

Personally I use std::vector for variable-sized arrays, and boost::array for statically sized ones.
2,037,209
2,037,215
What is a null-terminated string?
How does it differ from std::string?
A null-terminated string is a contiguous sequence of characters, the last one of which has the binary bit pattern all zeros. I'm not sure what you mean by a "usual string", but if you mean std::string, then a std::string is not required (until C++11) to be contiguous, and is not required to have a terminator. Also, a std::string's string data is always allocated and managed by the std::string object that contains it; for a null-terminated string, there is no such container, and you typically refer to and manage such strings using bare pointers. All of this should really be covered in any decent C++ text book - I recommend getting hold of Accelerated C++, one of the best of them.
2,037,212
2,037,338
Concatenating/Merging/Joining two AVL trees
Assume that I have two AVL trees, and that each element from the first tree is smaller than any element from the second tree. What is the most efficient way to concatenate them into one single AVL tree? I've searched everywhere but haven't found anything useful.
Assuming you may destroy the input trees:

1. Remove the rightmost element from the left tree, and use it to construct a new root node whose left child is the left tree and whose right child is the right tree: O(log n)
2. Determine and set that node's balance factor: O(log n). In (temporary) violation of the invariant, the balance factor may be outside the range {-1, 0, 1}
3. Rotate to get the balance factor back into range: O(log n) rotations: O(log n)

Thus, the entire operation can be performed in O(log n).

Edit: On second thought, it is easier to reason about the rotations in the following algorithm. It is also quite likely faster:

1. Determine the height of both trees: O(log n)
2. Assuming that the right tree is taller (the other case is symmetric): remove the rightmost element from the left tree (rotating and adjusting its computed height if necessary). Let n be that element. O(log n)
3. In the right tree, navigate left until you reach a node whose subtree is at most 1 taller than the left tree. Let r be that node. O(log n)
4. Replace that node with a new node with value n, and subtrees left and r. O(1). By construction, the new node is AVL-balanced, and its subtree is 1 taller than r
5. Increment its parent's balance accordingly. O(1)
6. And rebalance like you would after inserting. O(log n)
2,037,315
2,037,322
Creating a simple scripted 'language' - VARIANT-like value type
For a rules engine developed in C++, one of the core features is the value type. What I have so far is a bit like a COM-style VARIANT - each value knows its type. There are some rules for type conversion but it's a bit messy. I wondered if there are nice drop-in value classes I could use which solve this, without requiring me to use a whole pre-built system. For instance maybe boost has something?
Looking for boost::any or boost::variant? There are basically three types of variant implementations:

- A type that can be freely cast between types (think untyped languages) -- boost::lexical_cast is your friend here, or boost::variant...
- A type that can hold any type, but is typesafe -- e.g. initialized with an int, it stays an int and doesn't allow itself to be treated implicitly as anything else -- this is the boost::any type
- The evil allow-anything type -- cast to what you want without error checking, no type information held -- think void*
2,037,589
2,037,623
C++ special instance of template function for some type which is a template class itself
I have run into trouble creating a specialization of a member template function of a non-template class. I have, for example, class A with template member function F:

class A
{
public:
    template <class T> int F(T arg) const;
    ....
};

and I want to have a specialization of this template function F for type B:

class B;
...
template <> void A::F(B arg) const // GOOD!

and it works perfectly -- until it turns out that B is a template itself! This code:

template <class T> class B ...
...
template <> void A::F(B<T> arg) const // error, T undeclared

as well as

template <class T> class B ...
...
template <class T> template <> void A::F(B<T> arg) const // error, too many templates

gives compile errors. The second problem is how to declare this specialization (or the template as a whole) to be a friend function of class B. (It does not work even if B is not a template.)

class B
{
    friend template <> void A::F(B arg) const; // error
    // as well as
    template <> friend void A::F(B arg) const; // error
};

Is there a way to write the code the way I am trying to, or is it not possible?
You're attempting to create a partial specialization of a function template, which is illegal. What you can do is simply create an overload. To create a friend, you merely have to use the correct syntax. The following compiles without errors:

template <typename T> struct B {};

struct A
{
    template <typename T> void F(T arg) const;
    template <typename T> void F(B<T> arg) const;
    template <typename T> friend void G(B<T> arg);
    template <typename T> friend struct B;
};
2,037,765
2,037,995
What is the optimal multiplatform way of dealing with Unicode strings under C++?
I know that there are already several questions on StackOverflow about std::string versus std::wstring or similar, but none of them proposes a full solution. In order to obtain a good answer, I should define the requirements:

- multiplatform usage: must work on Windows, OS X and Linux
- minimal effort for conversion to/from platform-specific Unicode strings like CFStringRef, wchar_t*, or char* as UTF-8, or other types as they are required by OS APIs. Remark: I don't need code-page conversion support, because I expect to use only Unicode-compatible functions on all supported operating systems
- if it requires an external library, that library should be open source and under a very liberal license like BSD, but not LGPL
- be able to use a printf-style format syntax or similar
- easy string allocation/deallocation
- performance is not very important, because I assume the Unicode strings are used only for the application UI
- some example code would be appreciated

I would really appreciate only one proposed solution per answer; by doing this, people may vote for their preferred alternative. If you have more than one alternative, just add another answer. Please indicate something that actually worked for you.

Related questions: stdwstring-vs-stdstring does-c0x-support-stdwstring-conversion-to-from-utf-8-byte-sequence portable-wchart-in-c
Same as Adam Rosenfield's answer (+1), but I use UTFCPP instead.
2,037,826
2,037,848
I need some C++ guru's opinions on extending std::string
I've always wanted a bit more functionality in the STL's string. Since subclassing STL types is a no-no, the recommended method of extending these classes that I've mostly seen is just to write free functions (not member functions) that take the type as the first argument. I've never been thrilled with this solution. For one, it's not necessarily obvious where all such methods are in the code; for another, I just don't like the syntax -- I want to use . when I call methods!

A while ago I came up with the following:

class StringBox
{
public:
    StringBox(std::string& storage)
        : _storage(storage)
    {}

    // Methods I wish std::string had...
    void Format();
    void Split();
    double ToDouble();
    void Join();
    // etc...

private:
    StringBox();
    std::string& _storage;
};

Note that StringBox requires a reference to a std::string for construction... This puts some interesting limits on its use (and, I hope, means it doesn't contribute to the string class proliferation problem). In my own code, I'm almost always just declaring it on the stack in a method, just to modify a std::string. A usage example might look like this:

string OperateOnString(float num, string a, string b)
{
    string nameS;
    StringBox name(nameS);
    name.Format("%f-%s-%s", num, a.c_str(), b.c_str());
    return nameS;
}

My question is: what do the C++ gurus of the StackOverflow community think of this method of STL extension?
As most of us "gurus" seem to favour the use of free functions, probably contained in a namespace, I think it safe to say that your solution will not be popular. I'm afraid I can't see one single advantage it has, and the fact that the class contains a reference is an invitation to that becoming a dangling reference.
2,037,863
2,037,873
Waiting on WaitForMultipleObjects
I'm trying to write a unit test for my FileWatcher class. FileWatcher derives from a Thread class and uses WaitForMultipleObjects to wait on two handles in its thread procedure:

- The handle returned from FindFirstChangeNotification
- A handle for an Event that lets me cancel the above wait

So basically FileWatcher is waiting for whatever comes first: a file change, or being told to stop watching. Now, when trying to write code that tests this class, I need to wait for it to start waiting. Pseudo code:

FileWatcher.Wait(INFINITE)
ChangeFile()
// Verify that FileWatcher works (with some other event - unimportant...)

The problem is that there's a race condition: I need to first make sure that FileWatcher has started waiting (i.e. that its thread is now blocked on WaitForMultipleObjects) before I can trigger the file change in line #2. I don't want to use Sleeps because, well, it seems hacky and is bound to give me problems when debugging. I'm familiar with SignalObjectAndWait, but it doesn't really solve my problem, because I would need a "SignalObjectAndWaitOnMultipleObjects"... Any ideas?

Edit: To clarify a bit, here's a simplified version of the FileWatcher class:

// Inherit from this class, override OnChange, and call Start() to turn on monitoring.
class FileChangeWatcher : public Utils::Thread
{
public:
    // File must exist before constructing this instance
    FileChangeWatcher(const std::string& filename);
    virtual int Run();
    virtual void OnChange() = 0;
};

It inherits from Thread and implements the thread function, which looks something like this (very simplified):

_changeEvent = ::FindFirstChangeNotificationW(wfn.c_str(), FALSE, FILE_NOTIFY_CHANGE_LAST_WRITE);
HANDLE events[2] = { _changeEvent, m_hStopEvent };
DWORD hWaitDone = WAIT_OBJECT_0;
while (hWaitDone == WAIT_OBJECT_0)
{
    hWaitDone = ::WaitForMultipleObjects(2, events, FALSE, INFINITE);
    if (hWaitDone == WAIT_OBJECT_0)
        OnChange();
    else
        return Thread::THREAD_ABORTED;
}
return THREAD_FINISHED;

Notice that the thread function waits on two handles: one is the change notification, and the other is the "stop thread" event (inherited from Thread). Now the code that tests this class looks like this:

class TestFileWatcher : public FileChangeWatcher
{
public:
    bool Changed;
    Event evtDone;

    TestFileWatcher(const std::string& fname)
        : FileChangeWatcher(fname)
    {
        Changed = false;
    }

    virtual void OnChange()
    {
        Changed = true;
        evtDone.Set();
    }
};

And it is invoked from a CppUnit test:

std::string tempFile = TempFilePath();
StringToFile("Hello, file", tempFile);
TestFileWatcher tfw(tempFile);
tfw.Start();
::Sleep(100); // Ugly, but we have to wait for the monitor to kick in in the worker thread
StringToFile("Modify me", tempFile);
tfw.evtDone.Wait(INFINITE);
CPPUNIT_ASSERT(tfw.Changed);

The idea is to get rid of that Sleep in the middle.
There's no race, you don't have to wait for the FileWatcher to enter WaitForMultipleObjects. If you perform the change before the function is called, it will simply return immediately. Edit: I can see the race now. Why don't you move the following line _changeEvent = ::FindFirstChangeNotificationW(/*...*/); from the thread function to the constructor of FileChangeWatcher? That way, you can be certain that by the time the StringToFile function is called, the file is already being watched.
2,037,867
2,038,101
Can I convert a reverse iterator to a forward iterator?
I have a class called Action, which is essentially a wrapper around a deque of Move objects. Because I need to traverse the deque of Moves both forward and backwards, I have a forward iterator and a reverse_iterator as member variables of the class. The reason for this is becuase I need to know when I have gone one past the "end" of the deque, both when I am going forwards or backwards. The class looks like this: class Action { public: SetMoves(std::deque<Move> & dmoves) { _moves = dmoves; } void Advance(); bool Finished() { if( bForward ) return (currentfwd==_moves.end()); else return (currentbck==_moves.rend()); } private: std::deque<Move> _moves; std::deque<Move>::const_iterator currentfwd; std::deque<Move>::const_reverse_iterator currentbck; bool bForward; }; The Advance function is as follows: void Action::Advance { if( bForward) currentfwd++; else currentbck++; } My problem is, I want to be able to retrieve an iterator to the current Move object, without needing to query whether I am going forwards or backwards. This means one function returning one type of iterator, but I have two types. Should I forget returning an iterator, and return a const reference to a Move object instead?
This is exactly the sort of problem that prompted the design of the STL to start with. There are real reasons for:

- not storing iterators along with containers
- using algorithms that accept arbitrary iterators
- having algorithms evaluate an entire range instead of a single item at a time

I suspect what you're seeing right now is more or less the tip of the iceberg of the real problems. My advice would be to take a step back, and instead of asking about how to deal with the details of the design as it currently stands, ask a somewhat more general question about what you're trying to accomplish, and how best to accomplish that end result.

For those who care primarily about the question in the title, the answer is a heavily qualified "yes". In particular, a reverse_iterator has a base() member to do that. The qualifications are somewhat problematic, though. To demonstrate the problem, consider code like this:

#include <iostream>
#include <vector>
#include <iterator>

int main()
{
    int i[] = { 1, 2, 3, 4 };
    std::vector<int> numbers(i, i + 4);
    std::cout << *numbers.rbegin() << "\n";
    std::cout << *numbers.rbegin().base() << "\n";
    std::cout << *(numbers.rbegin() + 1).base() << "\n";
    std::cout << *numbers.rend() << "\n";
    std::cout << *numbers.rend().base() << "\n";
    std::cout << *(numbers.rend() + 1).base() << "\n";
}

Running this at this particular moment on my particular machine produces the following output:

4
0
4
-1879048016
1
-1879048016

Summary: with rbegin() we must add one before converting to a forward iterator to get an iterator that's valid -- but with rend() we must not add one before converting to get a valid iterator. As long as you're using X.rbegin() and X.rend() as the parameters to a generic algorithm, that's fine -- but experience indicates that converting to forward iterators often leads to problems.
In the end, however, for the body of the question (as opposed to the title), the answer is pretty much as given above: the problem stems from trying to create an object that combines the collection with a couple of iterators into that collection. Fix that problem, and the whole business with forward and reverse iterators becomes moot.
2,038,200
2,038,215
Write a program that will print "C" if compiled as an (ANSI) C program, and "C++" if compiled as a C++ program
Taken from http://www.ocf.berkeley.edu/~wwu/riddles/cs.shtml It looks very compiler-specific to me. I don't know where to start looking.
Simple enough:

#include <stdio.h>

int main(int argc, char **argv)
{
#ifdef __cplusplus
    printf("C++\n");
#else
    printf("C\n");
#endif
    return 0;
}

Or is there a requirement to do this without the official standard macro?
2,038,247
2,038,319
Integration of Python console into a GUI C++ application
I'm going to add a Python console widget (into a C++ GUI) below some other controls. Many classes are going to be exposed to the Python code, including some access to the GUI (maybe I'll consider PyQt). Should I run the Python code in a separate thread? I think it's a good approach, because the GUI won't freeze while executing long commands. But on the other hand, shouldn't the other controls be disabled to preserve objects' state and avoid conflicts?
Since you're apparently wanting to embed a Python interpreter to use Python as a scripting language in what seems to be a Qt application, I suggest you have a look at PythonQt. With the PythonQt module, Python scripts will be able to interact with the GUI of your host application. Unlike PyQt and Qt Jambi, PythonQt is not designed to provide support for developers writing standalone applications. Instead, it provides facilities to embed a Python interpreter and focuses on making it easy to expose parts of the application to Python. If I understood your needs correctly, that's all you need. PyQt and PySide (officially supported by Nokia) aim at accessing Qt features from a Python program by providing bindings. It's possible to embed PyQt in your application (even a Qt application) and your Python scripts will be able to provide their own GUI while interacting with your application scripting API. About thread safety, Qt offers a thread-safe way of posting events, and signal-slot connections across threads.

References:

- Embedding Python into Qt Applications
- Notes for embedding python in your C/C++ app
- EmbedingPyQtTutorial
2,038,302
2,038,869
Is storing iterators inside this class unwise? How else to iterate through this sequence?
Warning: this is a long question! I am implementing a Solitaire card game in C++ on Win32, and after asking this question, it's becoming clear that I may need a bit of guidance regarding actual class design rather than implementation details. I am using a Model View Controller pattern to implement the game. The Model is the game, cards, columns and move history. The View is responsible for painting to screen, and the Controller is responsible for handling messages and the timer. The aspect of the game that I am currently trying to implement is the Action history, which belongs to the Model - I want to be able to "undo" and "redo" any Action. I describe a Move object as an atomic move of a single card from one CardPile to another, and an Action as consisting of one or more Moves. E.g. a deal will be 10 Moves from the Deck to a particular Column. (Deck and Column are simply specializations of CardPile.) I define the Action as a deque of Moves, and have provided some functions to GetCurrentMove() and to Advance() a move when it has been performed.

class Action
{
public:
    void SetMoves(std::deque<Move> dmoves) { _m_deque = dmoves; }
    void Advance();
    std::deque<Move>::const_iterator GetCurrentMove();

private:
    std::deque<Move> _m_deque;
    std::deque<Move>::const_iterator currentmove;
};

When dealing (or setting up, or undoing), these Move objects are data for an animation. I need to be able to access a single Move object at a time. I retrieve the Move, parse it into x,y co-ords and then kick off an animation to move a card from one place to another on screen. When the card has reached its destination, I then pull another Move from the deque. I have been advised by others with more experience not to store iterators inside the Action class. The STL doesn't do this, and there are good reasons, apparently. But my question is - don't I have to store iterators inside the Action class?
You see, both my Model and View need access to the current Move, so where else can I store that iterator that refers to the current Move ... inside the Controller? My game animation is based (very broadly) on this model:

// pseudocode
void control.game_loop( message )
{
    switch( message )
    {
    case TIMER:
        {
            if( view.CardHasReachedDestination() )
            {
                game.AdvanceAnimation();
                if( !game.AnimationFinished() )
                    view.Parse(game.GetNextMove());
            }
            view.MoveCardXY();
            view.PaintToTheScreen();
            controller.StartTheTimerAgain();
        }
    }
}

Best wishes,
BeeBand
I would create a function template to animate an entire Action, not just one Move at a time. Invoke that with beginning and ending iterators for the Action that needs to be animated:

template <class iterator>
void animate_action(iterator first, iterator last)
{
    for (iterator i = first; i != last; ++i)
        animate_move(*i);
}

Where animate_move is pretty much what you already had for showing the animation of a single move. You'd invoke this with something like:

animate_action(action.begin(), action.end());

or to animate in reverse order:

animate_action(action.rbegin(), action.rend());

This is (a large part of) why you want to make animate_action a template -- this way it neither knows nor cares whether it receives a forward iterator or a reverse_iterator.

Edit: Based on the further comments, there seem to be a few alternatives. The standard stand-by would be to use a separate thread to handle the animated drawing, so it would just have something like:

while (current_position != final_position)
{
    draw_card(current_position);
    current_position = next_position();
    sleep(timer_period);
}

Another would be, rather than waiting for a timer to fire and requesting the current iterator at that point, to create and queue up an object representing each move in the animation; then when the timer fires, the timer function retrieves and executes the next item in the queue:

for (int i = 0; i < Move.size(); i++)
    for (int j = 0; j < num_positions; j++)
        enqueue(move, Move[i], position(j));
2,038,453
2,038,534
C++ Standard: Unexpected const_iterator in multiset
I recently ran into an odd issue where I'd get a const_iterator instead of the expected iterator when iterating through a multiset. It turned out to be a non-issue for MSVC, but g++ gave me an error:

error: invalid initialization of reference of type 'myPtr&' from expression of type 'const boost::shared_ptr'

Relevant code:

typedef std::multiset<myPtr> myList;
myList _mystuff;

void tick(float dt)
{
    for (myList::iterator i = _mystuff.begin(); i != _mystuff.end(); ++i)
    {
        myPtr &mine = *i;           // g++ problem here, not for MSVC
        // const myPtr &mine = *i;  // works fine for g++
        mine->tick(dt);
    }
}

Quite a bit of research revealed that this is a problem with lots of previous discussion. I found these relevant bits:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14990
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#322
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#103
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-closed.html#279
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-closed.html#528

My background knowledge and grasp on the issue is limited, and thus I'd like to know whether the standard doesn't define this behavior well enough (in which case g++ and MSVC implement the behavior to their liking) or whether either g++ or MSVC deviates from a well-defined standard. Thanks in advance.
The iterators for set and multiset were changed from the standard iterator/const_iterator pair to just being const iterators. The reason for this change is that they are ordered containers, and changing an element through an iterator could invalidate the ordering constraint. The version of GCC you're testing against has made this change; the version of VC that you're using has not. VC10 (and VC9 SP1, I believe) always returns const_iterators from sets and multisets.

23.2.4/6 of the latest draft of C++1x (n3000.pdf at the moment) says:

"For associative containers where the value type is the same as the key type, both iterator and const_iterator are constant iterators."

std::set and std::multiset are the associative containers where the value type is the same as the key type.
2,038,640
2,038,651
In C and C++, why is each .h file usually surrounded with #ifndef #define #endif directives?
Why does each .h file start with #ifndef and #define, and end with #endif? We can certainly compile the program without those directives.
It's a so-called "include guard". The purpose is to prevent the contents of the file from being compiled more than once if the header is included multiple times in the same translation unit -- without it, repeated inclusion would typically cause redefinition errors (and at best wastes parsing time).
2,038,705
2,038,710
C++ Unit Testing Libraries
I've come across cppunit but it didn't look super-easy to use (maybe I didn't look hard, maybe because C++ doesn't work like Java/C#). Are there widely used, simple alternatives? In fact, is cppunit the standard unit testing framework for C++?
There is no standard unit testing library for C++. There are many choices to choose from; cppunit being one of them. At my company we use Google Test along with its partner Google Mock for unit testing and object mocking. I find them both combined easier to use and much more powerful than cppunit.
2,038,717
2,038,793
C++ object size with virtual methods
I have some questions about object size with virtual functions.

1) Virtual functions

class A {
public:
    int a;
    virtual void v();
};

The size of class A is 8 bytes: one integer (4 bytes) plus one virtual table pointer (4 bytes). That's clear.

class B : public A {
public:
    int b;
    virtual void w();
};

What's the size of class B? I tested using sizeof(B); it prints 12. Does it mean that there is only one vptr even though both class B and class A have virtual functions? Why is there only one vptr?

class A {
public:
    int a;
    virtual void v();
};

class B {
public:
    int b;
    virtual void w();
};

class C : public A, public B {
public:
    int c;
    virtual void x();
};

The sizeof(C) is 20. It seems that in this case, two vptrs are in the layout. How does this happen? I think the two vptrs are one for class A and another for class B, so there is no vptr for the virtual function of class C? My question is: what's the rule about the number of vptrs in inheritance?

2) Virtual inheritance

class A {
public:
    int a;
    virtual void v();
};

class B : virtual public A {  // virtual inheritance
public:
    int b;
    virtual void w();
};

class C : public A {  // non-virtual inheritance
public:
    int c;
    virtual void x();
};

class D : public B, public C {
public:
    int d;
    virtual void y();
};

The sizeof(A) is 8 bytes: 4 (int a) + 4 (vptr) = 8.
The sizeof(B) is 16 bytes. Without virtual inheritance it should be 4 + 4 + 4 = 12. Why are there another 4 bytes here? What's the layout of class B?
The sizeof(C) is 12 bytes: 4 + 4 + 4 = 12. That's clear.
The sizeof(D) is 32 bytes. It should be 16 (class B) + 12 (class C) + 4 (int d) = 32. Is that right?
class A {
public:
    int a;
    virtual void v();
};

class B : virtual public A {  // virtual inheritance here
public:
    int b;
    virtual void w();
};

class C : virtual public A {  // virtual inheritance here
public:
    int c;
    virtual void x();
};

class D : public B, public C {
public:
    int d;
    virtual void y();
};

sizeof(A) is 8
sizeof(B) is 16
sizeof(C) is 16
sizeof(D) is 28

Does it mean 28 = 16 (class B) + 16 (class C) - 8 (class A) + 4 (what's this?)

My question is: why is there extra space when virtual inheritance is applied? What's the underlying rule for the object size in this case? What's the difference when virtual is applied on all the base classes versus only part of the base classes?
This is all implementation-defined. I'm using VC10 Beta 2.

The key to understanding this stuff (the implementation of virtual functions) is a secret switch in the Visual Studio compiler, /d1reportSingleClassLayoutXXX. I'll get to that in a second. The basic rule is that the vtable needs to be located at offset 0 for any pointer to an object. This implies multiple vtables for multiple inheritance.

A couple of questions here; I'll start at the top:

"Does it mean that only one vptr is there even both of class B and class A have virtual function? Why there is only one vptr?"

This is how virtual functions work: you want the base class and derived class to share the same vtable pointer (pointing to the implementation in the derived class).

"It seems that in this case, two vptrs are in the layout.....How does this happen? I think the two vptrs one is for class A and another is for class B....so there is no vptr for the virtual function of class C?"

This is the layout of class C, as reported by /d1reportSingleClassLayoutC:

class C size(20):
        +---
        | +--- (base class A)
 0      | | {vfptr}
 4      | | a
        | +---
        | +--- (base class B)
 8      | | {vfptr}
12      | | b
        | +---
16      | c
        +---

You are correct: there are two vtables, one for each base class. This is how it works in multiple inheritance; if the C* is cast to a B*, the pointer value gets adjusted by 8 bytes. A vtable still needs to be at offset 0 for virtual function calls to work. The vtable in the above layout for class A is treated as class C's vtable (when called through a C*).

"The sizeof B is 16 bytes -------------- Without virtual it should be 4 + 4 + 4 = 12. why there is another 4 bytes here? What's the layout of class B?"

This is the layout of class B in this example:

class B size(20):
        +---
 0      | {vfptr}
 4      | {vbptr}
 8      | b
        +---
        +--- (virtual base A)
12      | {vfptr}
16      | a
        +---

As you can see, there is an extra pointer to handle virtual inheritance. Virtual inheritance is complicated.
"The sizeof D is 32 bytes -------------- it should be 16(class B) + 12(class C) + 4(int d) = 32. Is that right?"

No, 36 bytes. Same deal with the virtual inheritance. Layout of D in this example:

class D size(36):
        +---
        | +--- (base class B)
 0      | | {vfptr}
 4      | | {vbptr}
 8      | | b
        | +---
        | +--- (base class C)
        | | +--- (base class A)
12      | | | {vfptr}
16      | | | a
        | | +---
20      | | c
        | +---
24      | d
        +---
        +--- (virtual base A)
28      | {vfptr}
32      | a
        +---

"My question is , why there is an extra space when virtual inheritance is applied?"

The virtual base class pointer; it's complicated. Base classes are "combined" in virtual inheritance. Instead of having a base class embedded into the class, the class will have a pointer to the base class object in the layout. If you have two base classes using virtual inheritance (the "diamond" class hierarchy), they will both point to the same virtual base class object, instead of each having a separate copy of that base class.

"What's the underneath rule for the object size in this case?"

Important point: there are no rules; the compiler can do whatever it needs to do.

And a final detail: to make all these class layout diagrams, I am compiling with:

cl test.cpp /d1reportSingleClassLayoutXXX

where XXX is a substring match of the structs/classes you want to see the layout of. Using this you can explore the effects of various inheritance schemes yourself, as well as why/where padding is added, etc.
2,038,871
2,038,934
Copy-protecting a static library
I will soon be shipping a paid-for static library, and I am wondering if it is possible to build in any form of copy protection to prevent developers copying the library. Ideally, I would like to prevent the library being linked into an executable at all, if (and only if!) the library has been illegitimately copied onto the developer's machine. Is this possible? Alternatively, it might be acceptable if applications linked to an illegitimate copy of the library simply didn't work; however, it is very important that this places no burden on the users of these applications (such as inputting a license key, using a dongle, or even requiring an Internet connection). The library is written in C++ and targets a number of platforms including Windows and Mac. Do I have any options?
I agree with other answers that fool-proof protection is simply impossible. However, as a gentle nudge...

If your library is precompiled, you could discourage excessive illegitimate use by requiring custom license info in the API. Change a function like:

jeastsy_lib::init()

to:

jeastsy_lib::init( "Licenced to Foobar Industries", "(hex string here)" );

where the first parameter identifies the customer, and the second parameter is an MD5 or other hash of the first parameter with a salt. When your library is purchased, you would supply both of those parameters to the customer.

To be clear, this is an easily-averted protection for someone smart and ambitious enough. Consider this a speed bump on the path to piracy. It may convince potential customers that purchasing your software is the easiest path forward.
2,038,881
2,038,977
GPL and libmysqlclient
I have an application that uses libmysqlclient.so. I wonder if I need to license this application under the GPL because libmysqlclient is GPL, or if I can keep the program closed source.

EDIT: According to this site, I can use libmysqlclient in closed-source software. I just don't understand why the GPL "infects" the code so much...

EDIT2: If a library is released under the GPL (not the LGPL), does that mean that any program which uses it has to be under the GPL or a GPL-compatible license?
libmysqlclient, the JDBC connector, and the other libraries for interfacing with MySQL are GPL (GPLv2). A strict reading of the license shows that you would need to distribute your source code under the GPL. There is the FLOSS Exception, which allows any open-source license to include libmysqlclient; however, this does not apply to you. Sun/Oracle aggressively license the connector libraries and server components, and in my experience they are quite expensive. There are some tricks you can use, such as a query proxy server: simply launch a child process which forwards your own SQL commands to libmysqlclient. You will need to ship the source of the proxy, but it's a self-contained piece.
2,039,152
2,039,176
C++ functions exposed to scripting system - self-describing parameter types
A C++ rules engine defines rules in XML where each rule boils down to "if X, then Y" where X is a set of tests and Y a set of actions. In C++ code, 'functions' usable in tests/actions are created as a class for each 'function', each having a "run(args)" method... each takes its own set of parameters. This works fine. But, a separate tool is wanted to save users hand-crafting XML; the rules engine is aimed at non-programmers. The tool needs to know all the 'functions' available, as well as their required input parameters. What's the best way to consider doing this? I considered a couple of possibilities: A config file describes the 'functions' and their parameters, and is read by the tool. This is pretty easy, and the actual C++ code can use it to perform argument validation, but still the C++ and XML are not guaranteed to be in sync - a programmer could modify C++ and forget to update the XML leading to validation bugs Each 'function' class has methods which describe it. Somehow the tool loads the C++ classes... this would be easy in a language supporting reflection but messier in C++, probably you'd have to build a special DLL with all 'functions' or something. Which means extra overhead. What makes sense given the nature of C++ specifically? EDIT: is the title descriptive? I can't think of a better one.
There's a 3rd way - IDL. Imagine you have a client-server app, and you have a code generator that produces wrapper classes that you can deploy on client and server, so the user can write an app using the client API while the processing occurs on the server... this is a typical RPC scenario and is used in DCE-RPC, ONC-RPC, CORBA, COM and others. The trick here is to define the signatures of the methods the client can call, which is done in an Interface Definition Language. This doesn't have to be difficult, but it is the single source for the client/server API: you run it through a generator and it produces the C++ classes that you compile up for the client to use. In your case, it sounds like the XML is the IDL. So you can create a tool that takes the XML and produces the C++ headers describing the functions that your code exposes. You don't really have to generate the cpp files (you could), but it's easier to just generate the headers, so the programmer who adds a new function/parameter cannot forget to update the implementation - it just won't compile once the headers have been regenerated. You can generate a header that is #included into the existing C++ headers if there is more there than just the function definitions. So - that's my suggestion, #3: generate the definitions from your definitive XML signatures.
2,039,444
2,039,453
Why are drivers and firmwares almost always written in C or ASM and not C++?
I am just curious why drivers and firmwares almost always are written in C or Assembly, and not C++? I have heard that there is a technical reason for this. Does anyone know this? Lots of love, Louise
Because, most of the time, the operating system (or a "run-time library") provides the stdlib functionality required by C++. In C and ASM you can create bare executables which contain no external dependencies. However, since Windows does support the C++ stdlib, most Windows drivers are written in (a limited subset of) C++. Also, when firmware is written in ASM it is usually because either (A) the platform it is executing on does not have a C++ compiler or (B) there are extreme speed or size constraints. Note that (B) hasn't generally been an issue since the early 2000s.
2,039,529
2,039,547
C++ Exception Design Pattern
I'd like to encapsulate Win32 errors (those returned from GetLastError()) in some form of exception class. Rather than having a single Win32 exception, however, I'd like to have a specialized exception catchable for common errors, such as ERROR_ACCESS_DENIED. For example, I'd have classes declared like this:

class WindowsException : public std::exception
{
public:
    static WindowsException Create(DWORD lastError);
    // blah
};

class ErrorAccessDeniedException : public WindowsException
{
public:
    // blah
};

However, I'd like the Win32 exception to be responsible for picking the right exception to return. That is, the thrower of the exception should look like:

int DangerousMethod()
{
    throw WindowsAPI::WindowsException::Create(GetLastError());
}

and the catcher might look like:

try {
    DangerousMethod();
} catch (WindowsAPI::ErrorAccessDeniedException ex) {
    // Code for handling ERROR_ACCESS_DENIED
} catch (WindowsAPI::WindowsException ex) {
    // Code for handling other kinds of error cases.
}

My problem is that if the WindowsException::Create factory method returns a WindowsException, then the subtype (potentially ErrorAccessDeniedException) is sliced down to the base type. That is, the instance can't be polymorphic. I don't want to use a new'd pointer, because that would force the exception handler to delete it when it's done. Does anyone know of a design solution that would solve this problem elegantly?

Billy3
Change:

int DangerousMethod()
{
    throw WindowsAPI::WindowsException::Create(GetLastError());
}

to:

int DangerousMethod()
{
    WindowsAPI::WindowsException::Throw(GetLastError());
}

Meaning: instead of returning the exception and then throwing it (which will slice, as you observed), have your helper/factory method throw it directly.
2,039,661
2,039,723
Fread skipping characters reading into object
I'm trying to read in a bitmap starting with its header, but fread is skipping characters for me. I'm using this typedef in my header:

#include <windows.h> // Used for other
#include <cstdio>

typedef struct tagBITMAPHEADER {
    WORD  wFileType;
    DWORD dwFileSize;
    WORD  dwReserved;
    WORD  dwReserved2;
    DWORD dwBmpDataOffset;
    DWORD dwBmpHeaderSize;
    DWORD dwWidth;
    DWORD dwHeight;
    WORD  wPlanes;
    WORD  wBitsPerPixel;
    DWORD dwCompression;
    DWORD dwBitmapDataSz;
    DWORD dwHRes;
    DWORD dwVRes;
    DWORD dwColors;
    DWORD dwImpColors;
} BITMAPHEADER, *PBITMAPHEADER;

And in my code, I just use a simple fopen and fread with binary:

#include "ImageLoader.h"

BITMAPHEADER pbhFileInfo;
FILE *fBitmap = fopen(FileName, "rb"); // Open file in read / binary
if (fBitmap) // File is now open
{
    fread(&pbhFileInfo, sizeof(BITMAPFILEHEADER), 1, fBitmap);
    fclose(fBitmap);
}

Although my bitmap starts with '424DF25A0D' (hex), the first two variables read in seem to skip the 'F25A':

wFileType = 0x4d42
dwFileSize = 0x0000000d

Any idea what might be up? Thanks in advance.
In my opinion it is highly unwise to be using a struct in this way. Yes, you can get what you want in this case with a compiler-specific pragma. I would consider that an acceptable solution if you were writing a Windows device driver or something else that was already very specific to a particular platform. But this is loading a file in a standard format; it's code that could run in any environment. Personally, I would write code that lifts the data out of a character array and plops it into the structure by hand, instead of relying on the compiler to lay the structure out in just the right way so that fread magically puts all the little bits of data in the right places. (The mismatch you're seeing comes from alignment padding: the compiler inserts two bytes after wFileType so that dwFileSize starts on a 4-byte boundary, which is why 'F25A' appears to be skipped.)
2,039,918
2,040,012
std::getline does not work inside a for-loop
I'm trying to collect the user's input in a string variable that accepts whitespace, a specified number of times. Since the usual cin >> str doesn't accept whitespace, I'd go with std::getline from <string>. Here is my code:

#include <iostream>
#include <vector>
#include <string>
#include <algorithm>

using namespace std;

int main()
{
    int n;
    cin >> n;
    for (int i = 0; i < n; i++)
    {
        string local;
        getline(cin, local); // This simply does not work. Just skipped without a reason.
        //............................
    }
    //............................
    return 0;
}

Any idea?
You can see why this is failing if you output what you stored in local (which is a poor variable name, by the way :P):

#include <iostream>
#include <vector>
#include <string>
#include <algorithm>

using namespace std;

int main()
{
    int n;
    cin >> n;
    for (int i = 0; i < n; i++)
    {
        string local;
        getline(cin, local);
        std::cout << "> " << local << std::endl;
    }
    return 0;
}

You will see it prints a newline after > immediately after inputting your number, and then moves on to inputting the rest. This is because getline is giving you the empty line left over from inputting your number. (operator>> reads the number but doesn't remove the trailing \n, so you're left with a blank line.) You need to get rid of any remaining whitespace first:

#include <iostream>
#include <vector>
#include <string>
#include <algorithm>

using namespace std;

int main()
{
    int n;
    cin >> n;
    cin >> ws; // stream out any whitespace
    for (int i = 0; i < n; i++)
    {
        string local;
        getline(cin, local);
        std::cout << "> " << local << std::endl;
    }
    return 0;
}

This then works as expected.

Off topic: perhaps it was only for the snippet at hand, but code tends to be more readable without using namespace std; -- it defeats the purpose of namespaces. I suspect it was only for posting here, though.
2,040,210
2,040,287
Is Foo* f = new Foo good C++ code
Reading through an old C++ Journal I had, I noticed something. One of the articles asserted that Foo *f = new Foo(); was nearly unacceptable professional C++ code by and large, and an automatic memory management solution was appropriate. Is this so? edit: rephrased: is direct memory management unacceptable for new C++ code, in general? Should auto_ptr(or the other management wrappers) be used for most new code?
This example is very Java-like. In C++ we only use dynamic memory management if it is required. A better alternative is often just to declare a local variable:

{
    Foo f;
    // use f
} // f goes out of scope and is immediately destroyed here.

If you must use dynamic memory, then use a smart pointer:

// In C++14
{
    std::unique_ptr<Foo> f = std::make_unique<Foo>(); // no need for new anymore
}

// In C++11
{
    std::unique_ptr<Foo> f(new Foo);
}

// In C++03
{
    std::auto_ptr<Foo> f(new Foo);
    // The smart pointer f owns the pointer. At some point f may give up
    // ownership to another object. If not, then f will automatically
    // delete the pointer when it goes out of scope.
}

There is a whole bunch of smart pointers provided in std:: and boost:: (some are now in std::tr1); pick the appropriate one and use it to manage the lifespan of your object. See Smart Pointers: Or who owns you baby?

Technically you can use new/delete to do memory management, but in real C++ code it is almost never done. There is nearly always a better alternative to doing memory management by hand. A simple example is std::vector: under the covers it uses new and delete, but you would never be able to tell from the outside; this is completely transparent to the user of the class. All that the user knows is that the vector will take ownership of the objects and they will be destroyed when the vector is destroyed.
2,040,348
2,040,501
Glass Effect - Artistic Effect
I wish to give an effect to images, where the resultant image would appear as if we are looking at it through a textured glass (not plain/smooth)... Please help me in writing an algo to generate such an effect. Here's an example of the type of effect I'm looking for The first image is the original image and the second image is the output im looking for.
Begin by creating a noise map with dimensions (width + 1) x (height + 1) that will be used to displace the original image. I suggest using some sort of Perlin noise so that the displacement is not too random. Here's a good link on how to generate Perlin noise. Once we have the noise we can do something like this:

Image noisemap;    // size is (width + 1) x (height + 1), gray scale values in [0, 255] range
Image source;      // source image
Image destination; // destination image

float displacementRadius = 10.0f; // displacement amount in pixels

for (int y = 0; y < source.height(); ++y) {
    for (int x = 0; x < source.width(); ++x) {
        const float n0 = float(noisemap.getValue(x, y)) / 255.0f;
        const float n1 = float(noisemap.getValue(x + 1, y)) / 255.0f;
        const float n2 = float(noisemap.getValue(x, y + 1)) / 255.0f;

        const int dx = int(floorf((n1 - n0) * displacementRadius + 0.5f));
        const int dy = int(floorf((n2 - n0) * displacementRadius + 0.5f));

        const int sx = std::min(std::max(x + dx, 0), source.width() - 1);  // clamp
        const int sy = std::min(std::max(y + dy, 0), source.height() - 1); // clamp

        const Pixel& value = source.getValue(sx, sy);
        destination.setValue(x, y, value);
    }
}
2,040,355
2,040,386
Is it good practice to initialize array in C/C++?
I recently encountered a case where I needed to compare two files (golden and expected) for verification of test results, and even though the data written to both files was the same, the files did not match. On further investigation, I found that there is a structure which contains some integers and a char array of 64 bytes; not all the bytes of the char array were being used in most cases, and the unused part of the array contained random data, which was causing the mismatch. This brought me to ask: is it good practice to initialize arrays in C/C++ as well, as is done in Java?
It is good practice to initialise memory/variables before you use them - uninitialised variables are a big source of bugs that are often very hard to track down. Initialising all the data is a very good idea when writing it to a file format: It keeps the file contents cleaner so they are easier to work with, less prone to problems if someone incorrectly tries to "use" the uninitialised data (remember it may not just be your own code that reads the data in future), and makes the files much more compressible. The only good reason not to initialise variables before you use them is in performance-critical situations, where the initialisation is technically "unnecessary" and incurs a significant overhead. But in most cases initialising variables won't cause significant harm (especially if they are only declared immediately before they are used), but will save you a lot of development time by eliminating a common source of bugs.
2,040,425
2,106,201
PostgreSQL : SQL timestamp to Unix timestamp using libpq
I know I can convert a SQL timestamp to a unix timestamp using the following:

SELECT extract(epoch FROM now());

Now, I have a stored procedure which will directly return a table row to the caller. One of the row fields is of "timestamp" type. In my application, I am using libpq. I wish to use libpq functions (or any C/C++ function) to convert "2010-01-11 13:10:55.283" into a unix timestamp. Of course, I could create another stored procedure named SQLTimestamp2UnixTimestamp:

SELECT extract(epoch FROM $1);

But I just wish to accomplish this task with a single C/C++ function call, without involving a stored procedure. Any suggestions? Thanks!
boost::posix_time::ptime t(boost::posix_time::time_from_string(ts));
boost::posix_time::ptime start(boost::gregorian::date(1970, 1, 1));
boost::posix_time::time_duration dur = t - start;
time_t epoch = dur.total_seconds();
long timestamp = static_cast<long>(epoch);
2,040,615
2,040,656
Show dialog/frame fullscreen on a second screen using Qt/C++
I have an application with a secondary view that should be shown fullscreen on the other monitor (the one the main app is not on). Displaying the frame works quite well with frame.showFullScreen(); But how can I tell it which screen it should be on? Is there a way to detect whether a second screen is available as well?
You can retrieve screen information from QDesktopWidget. To move a window to a specific screen, you can do something like this:

QRect screenres = QApplication::desktop()->screenGeometry(screenNumber);
widget->move(QPoint(screenres.x(), screenres.y()));
2,040,776
2,040,810
QtWebkit as a desktop application GUI
I was wondering if anyone knows of good tutorials or articles describing methods of creating an HTML GUI for a Windows desktop application using QtWebKit. I am mainly concerned about communicating messages, events and information between, let's say, a DLL (written in C++, for example) and the GUI (QtWebKit). I need good, reliable references.
This won't be easy: web browsers are fortresses because of security concerns, so it's pretty hard to get from JS in a web page to something outside of the browser. Also, QtWebKit isn't a very open API. The biggest obstacle in your case is that it doesn't offer you access to the DOM, so you can only replace the whole HTML. Therefore, you'll need to patch and write a lot of code to implement the missing APIs and functions. Since Qt 4.6 has been released, there is QWebElement (see the docs for examples), so you can at least access the DOM and modify it. That will make a lot of things simpler. I suggest deciding who controls the browser: will your app be JavaScript which calls outside, or is the app really in C++ and you use the browser as a smart UI renderer? A much simpler way to make your idea work might be to start an internal web server when your app starts and then open a QtWebKit view pointing to the URL of the local server. Then you could use all the standard web development tools. Eclipse uses this technique for its internal help system.
2,041,078
2,043,000
Why does my code result in "cannot instantiate abstract class"?
This is the line where the error occurs:

this->_tbfCmdHandler.reset(new Bar());

facade_impl.cpp(202): error C2259: 'FOO::Bar' : cannot instantiate abstract class due to following members:
    'void Subscriber::update(T)' : is abstract with T=char &
        observer.h(66) : see declaration of 'Subscriber::update' with T=char &
    'void Subscriber::update(T)' : is abstract with T=const char &
        observer.h(66) : see declaration of 'Subscriber::update' with T=const char &

This is the declaration of Facade::Implementation:

namespace FOO {
class Facade::Implementation : public Subscriber<const char& > {

facade.cpp:

FOO::Facade::Facade() : impl(new Implementation) {
    Singleton<SPM::Facade>::instance();
}

The update function:

void update( const char *aMsg) { printf("foo"); };

I hope this helps to figure out where the error is.
You are inheriting from an abstract class, so you need to implement void update(const char&) (and, per the error, void update(char&)) inside class Facade::Implementation. You did define an update function, but it takes const char* - a different signature - so it is not related in any way to Subscriber's pure virtual functions and does not override them. You have to put a matching override inside your implementation.
2,041,241
2,041,290
Convert CString to std::wstring
How can I convert from CString to std::wstring?
To convert CString to std::wstring:

CString hi("Hi");
std::wstring hi2(hi);

And to go the other way, use c_str():

std::wstring hi(L"Hi");
CString hi2(hi.c_str());

(This assumes a Unicode build, where CString holds wide characters.)
2,041,329
2,041,838
Migration from MSXML to Xerces
I am planning to port my application from Windows to Linux. Currently my application uses MSXML for XML parsing. I have decided to use the Xerces XML parser to provide a cross-platform solution. My code base is large and I do not want to touch all the internal parts of the code for this port, as that might break some functionality. Can anybody suggest the best way to do this?
It depends on what you mean by 'the internal part'; one pretty extensible way to do this would go in a few steps (having tests for your application would be beneficial so you can spot when something goes wrong):

1. Create an interface for all XML operations you use.
2. Provide an implementation of that interface that uses MSXML.
3. Make all your code talk to the interface instead of directly to MSXML. If you designed the interface well, this could be a matter of just a thorough find/replace, but more work might be needed. Now everything should still be working, but with the benefit that it's separated from the actual XML logic.
4. Provide another implementation of the interface, now using Xerces.
2,041,336
2,041,626
access declaration can only be applied to a base class member
I'm using the observer pattern. I have a class that implements the publisher class:

class foo : public Publisher<const RecoveryState &>, public Publisher<char &>,

Then I try to bring in the attach functions:

using Publisher<const RecoveryState &>::attach;
using Publisher<const char &>::attach;

The RecoveryState one works, but at the char line the following error occurs:

Error 5 error C3210: 'Publisher' : access declaration can only be applied to a base class member c:\projekte\ps3controlmodule\tbfcontrol\tbfcmdhandler.h 363
There is a discrepancy: the class inherits from Publisher<char&>, but the using-declaration names Publisher<const char&> ("char&" vs. "const char&").
2,041,355
2,041,372
C++: Constructor accepting only a string literal
Is it possible to create a constructor (or function signature, for that matter) that only accepts a string literal, but not, e.g., a char const *? Is it possible to have two overloads that can distinguish between string literals and char const *? C++0x would kind-of allow this with a custom suffix - but I'm looking for an "earlier" solution. Rationale: avoiding a heap copy of strings that won't be modified when given as string literals. These strings go directly to an API expecting a const char * without any processing. Most calls do use literals requiring no additional processing; only in a few cases are they constructed. I am looking for a possibility to preserve the native call behavior. Note - since it comes up in the answers: the code in question does not use std::string at all, but a good example would be:

class foo {
    std::string m_str;
    char const * m_cstr;
public:
    foo(<string literal> s) : m_cstr(s) {}
    foo(char const * s) : m_str(s) { m_cstr = m_str.c_str(); }
    foo(std::string const & s) : m_str(s) { m_cstr = m_str.c_str(); }
    operator char const *() const { return m_cstr; }
};

Results: (1) It can't be done. (2) I realized I am not even looking for a literal, but for a compile-time constant (i.e. "anything that needs not be copied"). I will probably use the following pattern instead:

const literal str_Ophelia = "Ophelia";

void Foo() {
    Hamlet(str_Ophelia, ...);  // can receive literal or string or const char *
}

with a simple struct:

struct literal {
    char const * data;
    literal(char const * p) : data(p) {}
    operator const char *() const { return data; }
};

That doesn't stop anyone from abusing it (I should find a better name...), but it allows the required optimization yet remains safe by default.
No, you just can't do this - a string literal decays to const char*, so overload resolution cannot tell them apart. One workaround could be to introduce a special class that holds pointers to string literals and make a constructor accept only that. This way, whenever you need to pass a literal, you construct a temporary object of that class and pass it. This doesn't completely prevent misuse, but it makes the code much more maintainable.
2,042,416
2,042,490
TI DSP: interfacing C++ and assembly
I posted this question to TI's 28xx DSP forum but haven't heard a response, and figured maybe someone here might know. I know how to write functions in assembly so that they are C-callable; if the C-callable name is foo(), then the assembly function is named _foo(). What if I want to use C++ and optimize a class method in assembly? How do I do that? I assume the only major issues are:

- naming
- accessing the "this" pointer
- accessing class members by somehow knowing offsets

and if I don't want to worry about the last two, then perhaps I would write a static member function and do this:

class MyClass {
    int x;
    static int _doSomething(int u);  // implement this in assembly
public:
    // lightweight C++ wrapper to handle the class member / "this" pointer stuff
    inline void doSomething() { x = _doSomething(x); }
};
The this pointer gets passed as an additional argument to the function, using the standard calling convention on your platform. On all the platforms I'm familiar with it is passed as the first argument, but I don't do a lot of C++ coding, so I'm not sure if this is guaranteed by the standard. You can always disassemble some C++ code on your platform to confirm. The C++ symbol naming is rather more painful than in C, and varies from compiler to compiler. I suppose you could figure out the right symbol name to use by disassembling a compiled function definition, just make sure that: the function is a member of the right class, and has the right number and type of arguments. Unless you really need to reproduce a C++ function in situ, I would probably just make a standard C function and do the usual extern "C" { ... } around its declaration.
2,042,512
2,042,533
How do I let a program D read a memory location within the memory allocated to a program A?
So I'd like to let program D read this memory location (within the memory allocated to program A) and do some work on it. Any thoughts? Is writing a debugger extension the only way - if so, any recommendations? I considered executing a memory dump to a file (I still don't know how; AFAIK I can only view memory in a window) and letting D work on the file, but is there a better way?
It is possible to read the memory of another process. You should use the ReadProcessMemory function, with a handle to the target process opened via OpenProcess with the PROCESS_VM_READ access right.
2,042,516
2,042,590
Using C++ Boost memory mapped files to create disk-back data structures
I have been looking into using Boost.Interprocess to create a disk-backed data structure. The examples on Boost Documentation (http://www.boost.org/doc/libs/1_41_0/doc/html/interprocess.html) are all for using shared memory even though they mention that memory mapped files can also be used. I am wondering whether anyone here has used memory mapped files? Any publicly available code samples to get started (say, a memory mapped file backed map or set)?
You might take a look at the stldb project that's being actively discussed on the Boost mailing list. It tries to build an ACID database on top of boost::interprocess.
2,042,582
2,042,619
Best way to create a string containing multiple copies of another string
I want to create a function that will take a string and an integer as parameters and return a string that contains the string parameter repeated the given number of times. For example: std::string MakeDuplicate( const std::string& str, int x ) { ... } Calling MakeDuplicate( "abc", 3 ); would return "abcabcabc". I know I can do this just by looping x number of times but I'm sure there must be a better way.
I don't see a problem with looping, just make sure you do a reserve first:

std::string MakeDuplicate( const std::string& str, int x )
{
    std::string newstr;
    newstr.reserve(str.length() * x);  // prevents multiple reallocations
    for (int i = 0; i < x; ++i)
        newstr += str;
    return newstr;
}
2,042,748
2,091,165
Bjam: ignore specific library
Using Visual Studio, it is possible to 'Ignore Specific Library' (Project Properties > Configuration Properties > Linker > Input > Ignore Specific Library). We found this useful in a project. Now we want to build that project using boost-build (bjam), but we need to reproduce that linker behaviour. Is there any ignore library feature with bjam?
You could set it at the command line:

bjam linkflags=/NODEFAULTLIB:xxx

Or from within a jamfile:

<linkflags>/NODEFAULTLIB:xxx

Or use Visual Studio's pragma comment feature in your code itself:

#pragma comment(linker, "/NODEFAULTLIB:xxx")
2,042,780
2,043,239
How to raise warning if return value is disregarded?
I'd like to see all the places in my code (C++) which disregard the return value of a function. How can I do it - with gcc or a static code analysis tool? Bad code example:

int f(int z) {
    return z + (z*2) + z/3 + z*z + 23;
}

int main()
{
    int i = 7;
    f(i);  ///// <<----- here I disregard the return value
    return 1;
}

Please note that:

- it should work even if the function and its use are in different files
- a free static-check tool is preferred
You want GCC's warn_unused_result attribute:

#define WARN_UNUSED __attribute__((warn_unused_result))

int WARN_UNUSED f(int z) {
    return z + (z*2) + z/3 + z*z + 23;
}

int main()
{
    int i = 7;
    f(i);  ///// <<----- here I disregard the return value
    return 1;
}

Trying to compile this code produces:

$ gcc test.c
test.c: In function `main':
test.c:16: warning: ignoring return value of `f', declared with attribute warn_unused_result

You can see this in use in the Linux kernel; they have a __must_check macro that does the same thing. It looks like you need GCC 3.4 or greater for this to work. You will then find that macro used in kernel header files:

unsigned long __must_check copy_to_user(void __user *to, const void *from, unsigned long n);
2,043,104
2,043,372
How to use boost lambda to populate a vector of pointers with new objects
I've recently started using Boost lambda and thought I'd try to use it in places where it will/should make things easier to read. I have some code similar to the following:

std::vector<X*> v;
for (int i = 0; i < 20; ++i)
    v.push_back(new X());

and later on, to delete it...

std::for_each(v.begin(), v.end(), boost::lambda::delete_ptr());

which neatly tidies up. However, I thought I'd have a go at "lambda-ising" the population of the vector... That's when the fireworks started. I tried:

std::generate_n(v.begin(), 20, _1 = new X());

but this threw all kinds of compiler errors. Any ideas which is the best "lambda" way to achieve this? Thx, Mark.
Here's a code snippet that does what you want:

#include <algorithm>
#include <vector>
#include <boost/lambda/lambda.hpp>
#include <boost/lambda/construct.hpp>

typedef int X;

int main()
{
    std::vector<X*> v;
    std::generate_n( std::back_inserter(v), 20, boost::lambda::new_ptr<X>() );
    std::for_each( v.begin(), v.end(), boost::lambda::delete_ptr() );
}

You might want to consider using boost::ptr_vector though, as using a std::vector of dynamically allocated pointers in an exception-safe way isn't easy.
2,043,381
2,043,561
Ok to provide constructor for behaviorless aggregates (bundle-o-data) in C++?
Please refer to rule #41 of C++ Coding Standards or Sutter's GotW #70, which states: Make data members private, except in behaviorless aggregates (C-style structs). I often would like to add a simple constructor to these C-style structs for the sake of convenience. For example:

struct Position {
    Position(double lat=0.0, double lon=0.0)
        : latitude(lat), longitude(lon) {}
    double latitude;
    double longitude;
};

void travelTo(Position pos) { ... }

int main() {
    travelTo(Position(12.34, 56.78));
}

While making it easier to construct a Position on the fly, the constructor also kindly zero-initializes default Position objects for me. Maybe I can follow std::pair's example and provide a "makePosition" free function? NRVO should make it as fast as the constructor, right?

Position makePosition(double lat, double lon) {
    Position p;
    p.latitude = lat;
    p.longitude = lon;
    return p;
}

travelTo(makePosition(12.34, 56.78));

Am I going against the spirit of the "behaviorless aggregate" concept by adding that measly little constructor?

EDIT: Yes, I was aware of Position p = {12.34, 56.78}. But I can't do travelTo({12.34, 56.78}) with pure C structs.
EDIT 2: For those curious about POD types: What are POD types in C++?
FOLLOW-UP: I've asked a follow-up question here that is closely related to this one.
We regularly define constructors for our aggregate types, with no adverse effects. In fact the only adverse effects I can think of are that in performance-critical situations you cannot avoid default initialisation, and that you can't use the type in unions. The alternatives are the curly-brace style of initialisation:

Position p = {a, b};

or a free "make" function:

Position makePosition(double a, double b) {
    Position p = {a, b};
    return p;
}

The problem with the former is that you can't use it to instantiate a temporary to pass into a function:

void func(Position p) {
    // ...
}
// func({a,b}) is an error

The latter is fine in this case, but is very slightly more typing for the lazy programmer. The real problem with a make function is that it leaves open the possibility that you forget to initialise your data structure. Because uninitialised variables leave me feeling rather uncomfortable, I prefer to define a constructor for my aggregate types. The main reason std::make_pair exists is actually not for this reason (std::pair has constructors), but because to call the constructor of a template type you have to pass the template arguments - which is inconvenient:

std::pair<int,int> func() {
    return std::pair<int,int>(1, 2);
}

Finally, in your example, you should at least make your constructor explicit:

explicit Position(double lat=0.0, double lon=0.0)

otherwise you allow an implicit conversion to a Position from a double:

Position p = 0.0;

which might lead to unintended behaviour. In fact I would define two constructors, one to initialise to zero and one to initialise with two values, because a Position probably doesn't make much sense without both a latitude and a longitude.
2,043,823
2,043,831
Why is a C++ bool var true by default?
The bool "bar" is true by default, but it should be false; it cannot be initialized in the constructor. Is there a way to init it as false without making it static? Simplified version of the code:

foo.h:

class Foo {
public:
    Foo();
private:
    bool bar;
};

foo.cpp:

Foo::Foo() {
    if (bar) {
        doSomething();
    }
}
In fact, by default it's not initialized at all. The value you see is simply whatever trash value happens to be in the memory used for the allocation. If you want a default value, you'll have to ask for it in the constructor:

class Foo {
public:
    Foo() : bar() {}  // default bool value == false
    // OR, to be explicit:
    // Foo() : bar( false ) {}
    void foo();
private:
    bool bar;
};

UPDATE C++11: If you can use a C++11 compiler, you can now use a default member initializer instead (most of the time):

class Foo {
public:
    // The constructor will be generated automatically, except if you need to write it yourself.
    void foo();
private:
    bool bar = false;  // Always false at construction, unless a constructor's initializer list changes it.
};
2,043,837
2,043,981
Detaching a native socket from Boost.ASIO's socket class
Is it possible to detach a native socket from Boost.ASIO's socket class? If so, how can it be done? I can't seem to find anything obvious in the documentation. As a quick overview of what I'm trying to accomplish: I have a class that makes a connection and does some negotiation using Boost.ASIO, then passes back a native Windows SOCKET on success or 0 on failure. Unless I'm mistaken, the native socket will be closed and deallocated when my boost::asio::basic_socket is destructed.
Answering my own question. Windows has a WSADuplicateSocket function, which can be used to duplicate the native socket. The underlying socket will remain open until all descriptors for this socket are deallocated. http://msdn.microsoft.com/en-us/library/ms741565(VS.85).aspx
2,043,974
2,044,050
Do C++ compilers optimize pass by const reference POD parameters into pass by copy?
Consider the following:

struct Point { double x; double y; };

double complexComputation(const Point& p1, const Point& p2) {
    // p1 and p2 used frequently in computations
}

Do compilers optimize the pass-by-reference into pass-by-copy to prevent frequent dereferencing? In other words, convert complexComputation into this:

double complexComputation(const Point& p1, const Point& p2) {
    double x1 = p1.x;
    double x2 = p2.x;
    double y1 = p1.y;
    double y2 = p2.y;
    // x1, x2, y1, y2 stored in registers and used frequently in computations
}

Since Point is a POD, there can be no side effect from making a copy behind the caller's back, right? If that's the case, then I can always just pass POD objects by const reference, no matter how small, and not have to worry about the optimal passing semantics. Right?

EDIT: I'm interested in the GCC compiler in particular. I guess I might have to write some test code and look at the ASM.
I can't speak for every compiler, but the general answer is no. It will not make that optimization. See GOTW#81 to read about how casting to const in C++ doesn't affect optimization as some might think.
2,044,124
2,045,870
What happens in C++ when an integer type is cast to a floating point type or vice-versa?
Do the underlying bits just get "reinterpreted" as a floating point value? Or is there a run-time conversion to produce the nearest floating point value? Is endianness a factor on any platforms (i.e., endianness of floats differs from ints)? How do different width types behave (e.g., int to float vs. int to double)? What does the language standard guarantee about the safety of such casts/conversions? By cast, I mean a static_cast or C-style cast. What about the inverse float to int conversion (or double to int)? If a float holds a small magnitude value (e.g., 2), does the bit pattern have the same meaning when interpreted as an int?
Do the underlying bits just get "reinterpreted" as a floating point value? No, the value is converted according to the rules in the standard. is there a run-time conversion to produce the nearest floating point value? Yes there's a run-time conversion. For floating point -> integer, the value is truncated, provided that the source value is in range of the integer type. If it is not, behaviour is undefined. At least I think that it's the source value, not the result, that matters. I'd have to look it up to be sure. The boundary case if the target type is char, say, would be CHAR_MAX + 0.5. I think it's undefined to cast that to char, but as I say I'm not certain. For integer -> floating point, the result is the exact same value if possible, or else is one of the two floating point values either side of the integer value. Not necessarily the nearer of the two. Is endianness a factor on any platforms (i.e., endianness of floats differs from ints)? No, never. The conversions are defined in terms of values, not storage representations. How do different width types behave (e.g., int to float vs. int to double)? All that matters is the ranges and precisions of the types. Assuming 32 bit ints and IEEE 32 bit floats, it's possible for an int->float conversion to be imprecise. Assuming also 64 bit IEEE doubles, it is not possible for an int->double conversion to be imprecise, because all int values can be exactly represented as a double. What does the language standard guarantee about the safety of such casts/conversions? By cast, I mean a static_cast or C-style cast. As indicated above, it's safe except in the case where a floating point value is converted to an integer type, and the value is outside the range of the destination type. If a float holds a small magnitude value (e.g., 2), does the bit pattern have the same meaning when interpreted as an int? No, it does not. The IEEE 32 bit representation of 2 is 0x40000000.
2,044,486
2,044,513
Allocating an array of Derived without new[]: Pointer to Base vtable is bad
Basically, I have a pure virtual class Base, and a concrete class Derived which inherits from Base. I then allocate a piece of memory and treat it as an array of Derived via a simple cast. Then, I populate the array using =. Finally, I loop through the array, trying to call the virtual method GetIndex that is declared in Base and defined in Derived. The problem is that I end up getting an access violation exception trying to read the pointer to the vtable for Base (in Visual Studio debugging, this is shown as __vfptr, and it is always 0xbaadf00d). Following is a simple example of the problem I am encountering:

#include "stdafx.h"
#include "windows.h"

struct Base {
    virtual int GetIndex() const = 0;
};

struct Derived : public Base {
    int index;
    Derived() {
        static int test = 0;
        index = test++;
    }
    int GetIndex() const { return index; }
};

int _tmain(int argc, _TCHAR* argv[])
{
    int count = 4;
    // Also fails with malloc
    Derived* pDerived = (Derived*)HeapAlloc(GetProcessHeap(), 0, sizeof(Derived) * count);
    for (int i = 0; i < count; i++) {
        Derived t;
        pDerived[i] = t;
    }
    // Should print 0 1 2 3
    for (int i = 0; i < count; i++) {
        Base& lc = pDerived[i];
        printf("%d\n", lc.GetIndex());  // FAIL!
    }
    return 0;
}

This behavior only occurs when allocating the memory via HeapAlloc or malloc; if new[] is used, it works fine. (Also, the cstor is called 4 times previously, so the output is 4 5 6 7.)
If you allocate raw memory without new, you always need to construct the objects in it yourself: call the constructor manually with placement new, and later call the destructor explicitly with x->~Derived(). Plain assignment into raw memory never runs a constructor, so the vtable pointer is never set up.
2,044,512
2,044,534
hexadecimal value in input string needs to be checked
I am basically trying to get a user to input a hexadecimal value via getline into a string, as I will do other operations on it (using C++; .NET stuff won't work). I do not want to break it into chars and go through each char in the string to see if it's in the range [0-9] or [Aa-Ff]. Instead, I wanted to know if there is a handy function that anyone knows of, or a better way to do it. I am aware of the strtoul function, but it returns a long; this would force me to pass it to a stream to turn it back into a string again. Another thing with the long I am not sure of: do I have to worry about 64-bit long vs 32-bit long? I am developing this on a Linux box using an Intel processor, but it could be used on a Unix box whose processor could be 64-bit; I am not sure. So I guess there are two questions here, really. Any help would be most welcome.

Could I also get an answer on my second question about the long? Even though I don't have to worry about that now... if I save a variable in a long on a 32-bit system, would that change? (I imagine the size of long should change on a 64-bit processor.) What would this mean for the info saved in the variable? And second, in order to avoid the whole little/big-endian thing, I saved it in a long, thinking that since it's a register of sorts it would not be an issue with porting. Was I wrong to think that? Thanks
Checking each char is the only way it can be done, period. However, you may be interested in isxdigit(int character), which returns 0 if the character passed isn't a valid hexadecimal character (note that x is not included as a valid character). You can test if it's a hex string in a single line using algorithms, though it's a bit ugly. If you're using Boost, you can pretty it up a lot by using boost::bind. The headers required by this snippet are <cctype>, <functional>, and <algorithm>:

bool is_hex_string(const std::string& str)
{
    // count the characters that are NOT hex digits; the string is
    // hex only when that count is zero
    return std::count_if(str.begin(), str.end(),
                         std::not1(std::ptr_fun((int(*)(int))std::isxdigit))) == 0;
}
2,044,734
2,044,753
Reading data from hard drive into a class
Every time I try to read a file from the hard drive and cast the data into a structure, I end up with problems of the data not casting properly. Is there a requirement with the reinterpret_cast() function that the number of bytes in a structure be a multiple of 4 bytes? If not, what am I doing wrong? If so, how do I get around that? My structure looks like this (they are in 50-byte chunks):

class stlFormat {
public:
    float normalX, normalY, normalZ;
    float x1, y1, z1;
    float x2, y2, z2;
    float x3, y3, z3;
    char byte1, byte2;
};

Rest of my code:

int main()
{
    int size;
    int numTriangles;
    int* header = new int[21];  // size of header

    ifstream stlFile("tetrahedron binary.STL", ios::in | ios::binary | ios::ate);
    size = stlFile.tellg();  // get the size of the file
    stlFile.seekg(0, ios::beg);

    // read the number of triangles in the file
    stlFile.read(reinterpret_cast<char*>(header), 84);
    numTriangles = header[20];

    stlFormat* triangles = new stlFormat[numTriangles];  // create data array to hold vertex data
    stlFile.seekg(84, ios::beg);

    // read vertex data and put it into the data array
    stlFile.read(reinterpret_cast<char*>(triangles), (numTriangles * 50));

    cout << "number of triangles: " << numTriangles << endl << endl;

    for (int i = 0; i < numTriangles; i++) {
        cout << "triangle " << i + 1 << endl;
        cout << triangles[i].normalX << " " << triangles[i].normalY << " " << triangles[i].normalZ << endl;
        cout << triangles[i].x1 << " " << triangles[i].y1 << " " << triangles[i].z1 << endl;
        cout << triangles[i].x2 << " " << triangles[i].y2 << " " << triangles[i].z2 << endl;
        cout << triangles[i].x3 << " " << triangles[i].z3 << " " << triangles[i].z3 << endl << endl;
    }

    stlFile.close();
    getchar();
    return 0;
}

Just for you John, although it's rather incomprehensible. It's in hex format.
73 6f 6c 69 64 20 50 61 72 74 33 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 04 00 00 00 ec 05 51 bf ab aa aa 3e ef 5b f1 be 00 00 00 00 00 00 00 00 f3 f9 2f 42 33 33 cb 41 80 e9 25 42 9a a2 ea 41 33 33 cb 41 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ab aa aa 3e ef 5b 71 3f 33 33 4b 42 00 00 00 00 f3 f9 2f 42 33 33 cb 41 80 e9 25 42 9a a2 ea 41 00 00 00 00 00 00 00 00 f3 f9 2f 42 00 00 ec 05 51 3f ab aa aa 3e ef 5b f1 be 33 33 cb 41 00 00 00 00 00 00 00 00 33 33 cb 41 80 e9 25 42 9a a2 ea 41 33 33 4b 42 00 00 00 00 f3 f9 2f 42 00 00 00 00 00 00 00 00 80 bf 00 00 00 00 33 33 cb 41 00 00 00 00 00 00 00 00 33 33 4b 42 00 00 00 00 f3 f9 2f 42 00 00 00 00 00 00 00 00 f3 f9 2f 42 00 00
Most likely, float has an alignment of four bytes on your system. This means that, because you use it in your structure, the compiler will make sure the start of the structure, when allocated using normal methods, will always be a multiple of four bytes. Since the raw size of your structure is 4*12 + 2 = 50 bytes, it needs to be rounded up to the next multiple of four bytes - otherwise, the second element of an array of this structure would be unaligned. So your struct ends up 52 bytes, throwing off your parsing. If you need to parse a binary format, it's often a good idea to either use compiler-specific directives to disable the padding, or read one field at a time, to avoid these problems. For example, on MSVC++ you might think of __declspec(align(1)) - but __declspec(align(X)) can only increase alignment restrictions, so it won't help here. You'll need to either load one field at a time, make the padding part of the binary format, or use a packing directive such as #pragma pack(1).
2,044,999
2,045,050
What is private MFC and why are they not accessible through the normal interface?
I am using MFC for GUI development and I stumbled upon a function that could be useful for what I'm trying to do. The function is _AfxCompareClassName. However, it is included in the file "afximpl.h", which is located in the directory "VC/atlmfc/src/mfc/afximpl.h". Normal MFC includes are in the directory "VC/atlmfc/include". From what I've gathered, the files and functions located in src/mfc are considered private MFC (according to this guy) and I shouldn't use them. Why? This function does look nice. It would help me know where in the UI I currently am. Ultimately what I want to do is change the escape/return keys' behavior when editing a field of text (Edit Control). My questions are the following:

1. What is a private MFC function?
2. Why shouldn't I use those functions? (From what I have gathered they change often, which is why I shouldn't use them. Is there another reason?)
3. Is there a cleaner way to do what I'm looking to do?

I thought it'd be nice to get some info about private MFC since there doesn't seem to be any on SO so far. Thanks a lot, JC
The 'private' MFC files are the implementation details of MFC. Just as you wouldn't want or expect users of your classes to get at the private: data or methods, you shouldn't rely on the MFC implementation-level utility code. Note that almost any cool thing you can find in the MFC implementation details is available publicly -- somewhere. You just have to dig. There is built-in functionality in MFC that does what you want. It's called RUNTIME_CLASS, and here's sample code from MSDN:

// Example for RUNTIME_CLASS
CRuntimeClass* prt = RUNTIME_CLASS( CAge );
ASSERT( lstrcmp( prt->m_lpszClassName, "CAge" ) == 0 );
2,045,314
2,045,452
Why can't I cause a seg fault?
OK, for whatever reason I'm having trouble causing a seg fault. I want to produce one so that I can use gdb to see how to debug one. I have tried both examples from the Wikipedia article yet neither works. The first one: char *s = "Hello World!"; *s = 'H'; And the second example: int main(void) { main(); } EDIT: I'm using Ubuntu 9.10 and g++ as my compiler. Can anyone show me some code that is guaranteed to segfault?
It is impossible to do it reliably by dereferencing pointers. This is because how the application lays out memory can vary from compiler to compiler, and even within the same compiler under different options (debug and release builds handle memory differently). What you can do is explicitly raise the segfault using a signal: #include <signal.h> int main() { raise(SIGSEGV); }
2,045,396
2,045,411
How to initialise a std::map once so that it can be used by all objects of a class?
I have an enum StackIndex defined as follows: typedef enum { DECK, HAND, CASCADE1, ... NO_SUCH_STACK } StackIndex; I have created a class called MoveSequence, which is a wrapper for a std::deque of a bunch of tuples of the form <StackIndex, StackIndex>. class MoveSequence { public: void AddMove( const tpl_move & move ){ _m_deque.push_back( move ); } void Print(); protected: deque<tpl_move> _m_deque; }; I thought I could create a static std::map member of the MoveSequence class, which would translate a StackIndex to a std::string, for use by the Print() function. But when I tried, I got the error: "error C2864: 'MoveSequence::m' : only static const integral data members can be initialized within a class" If it's not possible to create a std::map as a static member, is there another way to create a std::map that translates a StackIndex to a std::string that can be used to print out MoveSequence objects? Thanks, Beeband.
You can make a std::map a static member of the class. What you can't do is initialize it within the class definition. Note that this is what the error is telling you: error C2864: 'MoveSequence::m' : only static const integral data members can be *initialized* within a class So, you want to have this in the header: class MoveSequence { static std::map<StackIndex, std::string> _m_whatever; }; And then in a source (.cpp) file you want this: std::map<StackIndex, std::string> MoveSequence::_m_whatever( ..constructor args.. );
2,045,509
2,045,532
How to save settings in gdb?
Does anyone know how to save gdb settings (like "set print pretty on" or "set print elements 0", both from here)? I don't want to set my configuration every time I use gdb :/ I searched on Google and SO, but I found nothing.
Add any commands you want to auto run in the .gdbinit file in your home directory.
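For example, a ~/.gdbinit holding the settings from the question might look like this (the history line is simply another commonly saved option):

```
# ~/.gdbinit - read automatically every time gdb starts
set print pretty on
set print elements 0
set history save on
```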
2,045,541
2,076,957
Discrete Wavelet Transform integer Daub 5/3 lifting issue
I'm trying to run an integer-to-integer lifting 5/3 on an image of lena. I've been following the paper "A low-power Low-memory system for wavelet-based image compression" by Walker, Nguyen, and Chen (Link active as of 7 Oct 2015). I'm running into issues though. The image just doesn't seem to come out quite right. I appear to be overflowing slightly in the green and blue channels which means that subsequent passes of the wavelet function find high frequencies where there ought not to be any. I'm also pretty sure I'm getting something else wrong as I am seeing a line of the s0 image at the edges of the high frequency parts. My function is as follows: bool PerformHorizontal( Col24* pPixelsIn, Col24* pPixelsOut, int width, int pixelPitch, int height ) { const int widthDiv2 = width / 2; int y = 0; while( y < height ) { int x = 0; while( x < width ) { const int n = (x) + (y * pixelPitch); const int n2 = (x / 2) + (y * pixelPitch); const int s = n2; const int d = n2 + widthDiv2; // Non-lifting 5 / 3 /*pPixelsOut[n2 + widthDiv2].r = pPixelsIn[n + 2].r - ((pPixelsIn[n + 1].r + pPixelsIn[n + 3].r) / 2) + 128; pPixelsOut[n2].r = ((4 * pPixelsIn[n + 2].r) + (2 * pPixelsIn[n + 2].r) + (2 * (pPixelsIn[n + 1].r + pPixelsIn[n + 3].r)) - (pPixelsIn[n + 0].r + pPixelsIn[n + 4].r)) / 8; pPixelsOut[n2 + widthDiv2].g = pPixelsIn[n + 2].g - ((pPixelsIn[n + 1].g + pPixelsIn[n + 3].g) / 2) + 128; pPixelsOut[n2].g = ((4 * pPixelsIn[n + 2].g) + (2 * pPixelsIn[n + 2].g) + (2 * (pPixelsIn[n + 1].g + pPixelsIn[n + 3].g)) - (pPixelsIn[n + 0].g + pPixelsIn[n + 4].g)) / 8; pPixelsOut[n2 + widthDiv2].b = pPixelsIn[n + 2].b - ((pPixelsIn[n + 1].b + pPixelsIn[n + 3].b) / 2) + 128; pPixelsOut[n2].b = ((4 * pPixelsIn[n + 2].b) + (2 * pPixelsIn[n + 2].b) + (2 * (pPixelsIn[n + 1].b + pPixelsIn[n + 3].b)) - (pPixelsIn[n + 0].b + pPixelsIn[n + 4].b)) / 8;*/ pPixelsOut[d].r = pPixelsIn[n + 1].r - (((pPixelsIn[n].r + pPixelsIn[n + 2].r) >> 1) + 127); pPixelsOut[s].r = pPixelsIn[n].r + (((pPixelsOut[d - 1].r + pPixelsOut[d].r) >> 2) - 64); pPixelsOut[d].g = pPixelsIn[n + 1].g - (((pPixelsIn[n].g + pPixelsIn[n + 2].g) >> 1) + 127); pPixelsOut[s].g = pPixelsIn[n].g + (((pPixelsOut[d - 1].g + pPixelsOut[d].g) >> 2) - 64); pPixelsOut[d].b = pPixelsIn[n + 1].b - (((pPixelsIn[n].b + pPixelsIn[n + 2].b) >> 1) + 127); pPixelsOut[s].b = pPixelsIn[n].b + (((pPixelsOut[d - 1].b + pPixelsOut[d].b) >> 2) - 64); x += 2; } y++; } return true; } There is definitely something wrong but I just can't figure it out. Can anyone with slightly more brain than me point out where I am going wrong? It's worth noting that you can see the un-lifted version of the Daub 5/3 above the working code and this, too, gives me the same artifacts ... I'm very confused as I have had this working once before (It was over 2 years ago and I no longer have that code). Any help would be much appreciated :) Edit: I appear to have eliminated my overflow issues by clamping the low pass pixels to the 0 to 255 range. I'm slightly concerned this isn't the right solution though. Can anyone comment on this?
OK, I can losslessly forward then inverse as long as I store my post-forward-transform data in a short. Obviously this takes up a little more space than I was hoping for, but this does allow me a good starting point for going into the various compression algorithms. You can also, nicely, compress two 4-component pixels at a time using SSE2 instructions. This is the standard C forward transform I came up with: const int16_t dr = (int16_t)pPixelsIn[n + 1].r - ((((int16_t)pPixelsIn[n].r + (int16_t)pPixelsIn[n + 2].r) >> 1)); const int16_t sr = (int16_t)pPixelsIn[n].r + ((((int16_t)pPixelsOut[d - 1].r + dr) >> 2)); const int16_t dg = (int16_t)pPixelsIn[n + 1].g - ((((int16_t)pPixelsIn[n].g + (int16_t)pPixelsIn[n + 2].g) >> 1)); const int16_t sg = (int16_t)pPixelsIn[n].g + ((((int16_t)pPixelsOut[d - 1].g + dg) >> 2)); const int16_t db = (int16_t)pPixelsIn[n + 1].b - ((((int16_t)pPixelsIn[n].b + (int16_t)pPixelsIn[n + 2].b) >> 1)); const int16_t sb = (int16_t)pPixelsIn[n].b + ((((int16_t)pPixelsOut[d - 1].b + db) >> 2)); pPixelsOut[d].r = dr; pPixelsOut[s].r = sr; pPixelsOut[d].g = dg; pPixelsOut[s].g = sg; pPixelsOut[d].b = db; pPixelsOut[s].b = sb; It is trivial to create the inverse of this (a VERY simple bit of algebra). It's worth noting, btw, that you need to invert the image from right to left, bottom to top. I'll next see if I can shunt this data into uint8_ts and lose a bit or 2 of accuracy. For compression this really isn't a problem.
2,045,678
2,045,712
issues concerning a byte array to a long long(64 bit) array vs a long (32 bit)
I have a byte array that has hex values and I initially put those values in an unsigned long. I am using a 32-bit processor via Ubuntu at the moment, but I might have to port this program to a 64-bit processor. Now, I am aware of the strtoul function, but since I was able to convert without any issues via a direct assignment, I did not bother with that function. The reason I put it in an unsigned long was because I was thinking about little/big endian issues, and so using a type like signed long would just take care of that problem for me regardless of processor. Now, however, I have been thinking about how my program would work on a 64-bit processor. Since I am on a 32-bit processor it might only recognize a 32-bit long, vs. a 64-bit processor only recognizing a 64-bit long, which would put my signed long array in jeopardy. So, to fix this issue, I just made that signed array into long long. Would that address my concerns, or do I need to do something else? Some help and explanation would be appreciated. All my code is in C++.
Instead of using long or long long you should use a fixed-width typedef like uint32_t, or something similar, so it can be 32 bits on all platforms, unless this isn't what you want? It seems you do have a potential problem with endianness though, if you are simply doing: unsigned char bytes[4] = {0x12, 0x23, 0xff, 0xed}; uint32_t* p_value = reinterpret_cast<uint32_t*>(bytes); std::cout << std::hex << *p_value << std::endl; // prints edff2312 on a little endian platform, 1223ffed on a big endian one. since the actual value of the bytes when interpreted as an integer will change depending on endianness. (Note that casting to long* instead would read 8 bytes on a typical 64-bit platform, which is exactly the portability trap being asked about.) There is a good answer on converting endianness here.
2,045,735
2,045,768
Memoization in static Objective-C class
Say I have a class method like + (double)function:(id)param1 :(id)param2 { // I want to memoize this like... static NSMutableDictionary* cache = nil; // // test if (param1,param2) is in cache and return cached value, etc. etc // } Thanks!!
If you want to create the cache once and check against it, I generally use an +initialize method. This method is called before the first message sent to the class, so the cache would be created before +function:: (which, by the way, is a terrible selector name) could be called. In this case, I usually declare the cache variable in the .m file, but declaring it in the method definition may also work. Edit: Adding an example at request of OP: // MyClass.m static NSMutableDictionary* cache; + (void) initialize { cache = [[NSMutableDictionary alloc] init]; } + (double) cachedValueForParam1:(id)param1 param2:(id)param2 { // Test if (param1,param2) is in cache and return cached value. } Obviously, if a value doesn't exist in the cache, you should have some code that adds the value. Also, I have no idea how you intend to combine param1 and param2 as the key for the cache, or how you'll store the value. (Perhaps +[NSNumber numberWithDouble:] and -[NSNumber doubleValue]?) You'll want to make sure you understand dictionary lookups before implementing such a strategy.
2,045,774
2,045,860
Developing C wrapper API for Object-Oriented C++ code
I'm looking to develop a set of C APIs that will wrap around our existing C++ APIs to access our core logic (written in object-oriented C++). This will essentially be a glue API that allows our C++ logic to be usable by other languages. What are some good tutorials, books, or best-practices that introduce the concepts involved in wrapping C around object-oriented C++?
This is not too hard to do by hand, but will depend on the size of your interface. The cases where I've done it were to enable use of our C++ library from within pure C code, and thus SWIG was not much help. (Well maybe SWIG can be used to do this, but I'm no SWIG guru and it seemed non-trivial) All we ended up doing was: Every object is passed about in C as an opaque handle. Constructors and destructors are wrapped in pure functions. Member functions are pure functions. Other builtins are mapped to C equivalents where possible. So a class like this (C++ header) class MyClass { public: explicit MyClass( std::string & s ); ~MyClass(); int doSomething( int j ); } Would map to a C interface like this (C header): struct HMyClass; // An opaque type that we'll use as a handle typedef struct HMyClass HMyClass; HMyClass * myStruct_create( const char * s ); void myStruct_destroy( HMyClass * v ); int myStruct_doSomething( HMyClass * v, int i ); The implementation of the interface would look like this (C++ source) #include "MyClass.h" extern "C" { HMyClass * myStruct_create( const char * s ) { return reinterpret_cast<HMyClass*>( new MyClass( s ) ); } void myStruct_destroy( HMyClass * v ) { delete reinterpret_cast<MyClass*>(v); } int myStruct_doSomething( HMyClass * v, int i ) { return reinterpret_cast<MyClass*>(v)->doSomething(i); } } An earlier version derived the opaque handle from the original class to avoid needing any casting, but that didn't seem to work with my current compiler, so reinterpret_cast is used instead. We have to make the handle a struct as C doesn't support classes. So that gives us the basic C interface. If you want a more complete example showing one way that you can integrate exception handling, then you can try my code on github : https://gist.github.com/mikeando/5394166 The fun part is now ensuring that you get all the required C++ libraries linked into your larger library correctly. For gcc (or clang) that means just doing the final link stage using g++.
2,045,993
2,046,010
Abstract classes issue in C++ undo/redo implementation
I have defined an "Action" pure abstract class like this: class Action { public: virtual void execute () = 0; virtual void revert () = 0; virtual ~Action () = 0; }; And represented each command the user can execute with a class. For actual undo/redo I would like to do something like this: Undo Action a = historyStack.pop(); a.revert(); undoneStack.push(a); Redo Action a = undoneStack.pop(); a.execute(); historyStack.push(a); The compiler obviously does not accept this, because "Action" is an abstract class which cannot be instantiated. So, do I have to redesign everything or is there a simple solution to this problem?
You should store actions as pointers; that will keep the compiler happy. std::vector<Action*> historyStack; /*...*/ historyStack.push_back(new EditAction(/*...*/)); Action* a = historyStack.back(); historyStack.pop_back(); a->revert(); undoneStack.push_back(a); There is another reason why std::vector<Action> historyStack; will not work and that's slicing. When adding objects of derived classes to the vector they will be copied as base class objects and lose all their polymorphism. More about it here: What is object slicing? EDIT Look into using ptr_vector to manage the lifetime of the objects in the vector: http://www.boost.org/doc/libs/1_37_0/libs/ptr_container/doc/tutorial.html
2,046,331
2,046,509
Declaring pointer to base and derived classes
I just found that I am confused about one basic question in C++ class Base { }; class Derived : public Base { } Base *ptr = new Derived(); What does it mean? Is ptr pointing to a Base class or a Derived class? At this line, how much memory is allocated for ptr? Based on the size of Derived or Base? What's the difference between this and the following: Base *ptr = new Base(); Derived *ptr = new Derived(); Is there any case like this? Derived *ptr = new Base(); Thanks!
To understand the type system of C++, it's important to understand the difference between static types and dynamic types. In your example, you defined the types Base and Derived and the variable ptr which has a static type of Base *. Now when you call new Derived(), you get back a pointer with a static and dynamic type of Derived *. Since Derived is a subtype of Base this can be implicitly converted to a static type of Base * and assigned to ptr as the static types now match. The dynamic type remains Derived * however, which is very important if you call any virtual function of Base via ptr, as calling virtual functions is always based on the dynamic type of the object, not the static type.
2,046,432
2,090,530
Using ZODB directly from C++. Examples and design hints
I'd like to use ZODB directly from C++ and don't want to write Python code for that. Have you had any experience doing so? If I were to use C++ for the GUI and querying/writing data from/to ZODB, what should the design look like?
Seems like you have 2 choices: a) work out how to call the ZODB Python module from C++. Google shows Boost has a library, and I am sure python.org will tell you too. b) work out the file format and write the equivalent code in C++. Probably not impossible for reading, harder for writing. However you will eventually end up with the impedance mismatch of python->dynamic, c++->static. I don't know ZODB but I will guess it is tightly matched to the dynamic nature of Python's objects, and so having a general-purpose equivalent for C++ won't work. You would be able to create a particular object schema implementation though. I mean you could have a ZODB with Customer, Order, Product and you can create a layer that maps the ZODB data to equivalent C++ objects.
2,046,515
2,046,550
from file object to file name
I wonder if we can get the file name, including its path, from the file object that we have created from the file name, in C and in C++ respectively: FILE *fp = fopen(filename, mode); // in C ofstream out(filename); // in C++ ifstream in(filename); // in C++ Thanks!
You can't, in general. The file may not ever have had a file name, as it may be standard input, output, or error, or a socket. The file may have also been deleted; on Unix at least, you can still read to or write from a file that has been deleted, as the process retains a reference to it so the underlying file itself is not deleted until the reference count goes to zero. There may also be more than one name for a file; you can have multiple hard links to a single file. If you want to retain the information about where a file came from, I would suggest creating your own struct or class that consists of a filename and the file pointer or stream.
2,046,829
2,046,860
Write and read object of class into and from binary file
I am trying to write and read an object of a class to and from a binary file in C++. I don't want to write the data members individually but write the whole object at one time. For a simple example: class MyClass { public: int i; MyClass(int n) : i(n) {} MyClass() {} void read(ifstream *in) { in->read((char *) this, sizeof(MyClass)); } void write(ofstream *out){ out->write((char *) this, sizeof(MyClass));} }; int main(int argc, char * argv[]) { ofstream out("/tmp/output"); ifstream in("/tmp/output"); MyClass mm(3); cout<< mm.i << endl; mm.write(&out); MyClass mm2(2); cout<< mm2.i << endl; mm2.read(&in); cout<< mm2.i << endl; return 0; } However the running output shows that the value of mm.i supposedly written to the binary file is not read and assigned to mm2.i correctly: $ ./main 3 2 2 So what's wrong with it? What should I be aware of in general when writing or reading an object of a class to or from a binary file?
The data is being buffered so it hasn't actually reached the file when you go to read it. Since you are using two different objects to reference the in/out file, the OS has no clue how they are related. You need to either flush the file: mm.write(&out); out.flush(); or close the file (which does an implicit flush): mm.write(&out); out.close(); You can also close the file by having the object go out of scope: int main() { MyClass mm(3); { ofstream out("/tmp/output"); mm.write(&out); } ... }
2,046,903
2,049,826
How might one create an extra worker thread for a single threaded GUI application?
I am currently developing new features for an existing VCL application. The application creates charts and static images using a third-party package called TeeChart. There is one instance where I have to load in 2 million data points to create a static image chart. However, this takes a while to load and the user can't do anything in the application until it is completed. Therefore I would prefer to create a worker thread to process the data points so the GUI doesn't freeze. The method setData() sets the following member variables, which the VCL component will then go on and use to create the Chart: // Holds the Y position for the image (columns) DynamicArray<double>* mpda_XValues; // Holds the colour for the corresponding element in the x and y // position DynamicArray<double>* mpda_YValues; // Holds the z position for the image (rows) DynamicArray<double>* mpda_ZValues; What things should I consider when creating a worker thread? How might I create the thread using boost when all the data processing occurs in one method setData(){...}?
Since you are using the VCL, it might be a good idea to look at the TThread class. Create an inherited class from this, and use the Synchronize method when communicating with your main thread. You can try looking at: http://docwiki.embarcadero.com/VCL/en/Classes.TThread and http://docwiki.embarcadero.com/RADStudio/en/Defining_Thread_Objects
2,046,952
2,046,986
Limit the confusion caused by undefined-behavior?
As I understand from my reading, undefined behavior is the result of leaving the compiler with several non-identical alternatives at compile time. However, wouldn't that mean that if one were to follow strict coding practice (like putting each assignment and each equality in a separate statement, proper debugging and commenting) then it shouldn't pose a significant problem to find the source of the undefined behavior? Further, for each error that comes up, if you identify the code, you should know what statements can be used in that particular statement's stead, correct? EDIT: I'm not interested in places where you have written code that you didn't mean to write. I'm interested in examples where code that is sound by mathematical logic fails to work. Also, I consider 'good coding practice' to be strong informative comments every few lines, proper indentation, and debugging dumps on a regular basis.
Undefined behavior isn't necessarily leaving the compiler with multiple alternatives. Most commonly it is simply doing something that doesn't make sense. For example, take this code: int arr[2]; arr[200] = 42; this is undefined behavior. It's not that the compiler was given multiple alternatives to choose from. It's just that what I'm doing does not make sense. Ideally, it should not be allowed in the first place, but without potentially expensive runtime checking, we can't guarantee that something like this won't occur in our code. So in C++, the rule is simply that the language specifies only the behavior of a program that sticks to the rules. If it does something erroneous like in the above example, it is simply undefined what should happen. Now, imagine how you're going to detect this error. How is it going to surface? It might never seem to cause any problems. Perhaps we just so happen to write into memory that's mapped to the process (so we don't get an access violation), but is never otherwise used (so no other part of the program will read our garbage value, or overwrite what we wrote). Then it'll seem like the program is bug-free and works just fine. Or it might hit an address that's not even mapped to our process. Then the program will crash immediately. Or it might hit an address that's mapped to our process, but at some point later will be used for something. Then all we know is that sooner or later, the function reading from that address will get an unexpected value, and it'll behave weird. That part is easy to spot in the debugger, but it doesn't tell us anything about when or from where that garbage value was written. So there's no simple way to trace the error back to its source.
2,047,030
2,047,125
Hackable scheme implementation with decent C++ FFI
I have tried: scheme48, chicken, gambit, clojure. I am looking for a Scheme implementation that's readable (the implementation itself), has a decent C++ FFI, and good debugging support (so when I get an exception, I should get popped into a recursive REPL). What suggestions do people have?
MzScheme. It can be embedded in C/C++ programs. It can be extended with C libraries, with no special interfacing code. Check out "Inside PLT MzScheme" for detailed documentation.
2,047,072
2,048,346
Boost library for RTTI
Which Boost library should I use to implement RTTI in my project? Is it the Statechart library?
RTTI is built into the C++ language. However, you'll find a few RTTI-related, convenient tools in Boost, e.g. Boost.TypeTraits, which enables compile-time checks such as is_base_of<>, has_trivial_constructor<> and loads of other stuff. Boost.FunctionTypes might also be worth checking out.
2,047,220
2,047,273
What is/are the Python equivalent(s) to the Java Collections Framework?
The Java Collections Framework is like the C++ Standard Template Library: "a unified architecture for representing and manipulating collections (objects that group multiple elements into a single unit)." http://java.sun.com/docs/books/tutorial/collections/intro/index.html
Other than the built-ins you might want to check out collections. >>> import collections >>> dir(collections) ['Callable', 'Container', 'Hashable', 'ItemsView', 'Iterable', 'Iterator', 'KeysView', 'Mapping', 'MappingView', 'MutableMapping', 'MutableSequence', 'MutableSet', 'Sequence', 'Set', 'Sized', 'ValuesView', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '_abcoll', '_iskeyword', '_itemgetter', '_sys', 'defaultdict', 'deque', 'namedtuple'] >>>
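A few of those types in action (the examples are illustrative, roughly mapped to their Java Collections counterparts):

```python
from collections import defaultdict, deque, namedtuple

# defaultdict ~ a Map that supplies default values on missing keys
counts = defaultdict(int)
for word in ["a", "b", "a"]:
    counts[word] += 1
assert counts["a"] == 2

# deque ~ Java's ArrayDeque: O(1) appends/pops at both ends
d = deque([1, 2, 3])
d.appendleft(0)
assert list(d) == [0, 1, 2, 3]

# namedtuple ~ a tiny immutable value class
Point = namedtuple("Point", ["x", "y"])
assert Point(1, 2).x == 1
```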
2,047,414
2,048,377
Advantages of std::for_each over for loop
Are there any advantages of std::for_each over for loop? To me, std::for_each only seems to hinder the readability of code. Why do then some coding standards recommend its use?
The nice thing with C++11 (previously called C++0x), is that this tiresome debate will be settled. I mean, no one in their right mind, who wants to iterate over a whole collection, will still use this for(auto it = collection.begin(); it != collection.end() ; ++it) { foo(*it); } Or this for_each(collection.begin(), collection.end(), [](Element& e) { foo(e); }); when the range-based for loop syntax is available: for(Element& e : collection) { foo(e); } This kind of syntax has been available in Java and C# for some time now, and actually there are way more foreach loops than classical for loops in every recent Java or C# code I saw.
2,047,563
2,047,599
Dependency on Derived class constructor problem
I am working on a legacy framework. Let's say 'A' is the base class and 'B' is the derived class. Both the classes do some critical framework initialization. FWIW, it uses the ACE library heavily. I have a situation wherein an instance of 'B' is created. But the ctor of 'A' depends on some initialization that can only be performed from 'B'. As we know, when 'B' is instantiated the ctor for 'A' is invoked before that of 'B'. The virtual mechanism doesn't work from ctors, and using static functions is ruled out (due to the static-initialization-order fiasco). I considered using the CRTP pattern as follows :- template<class Derived> class A { public: A(){ static_cast<Derived*>(this)->fun(); } }; class B : public A<B> { public: B() : a(0) { a = 10; } void fun() { std::cout << "Init Function, Variable a = " << a << std::endl; } private: int a; }; But the class members that are initialized in the initializer list have undefined values as they are not yet executed (f.e. 'a' in the above case). In my case there are a number of such framework-based initialization variables. Are there any well-known patterns to handle this situation? Thanks in advance, Update: Based on the idea given by dribeas, I conjured up a temporary solution to this problem (a full-fledged refactoring does not fit my timelines for now). The following code will demonstrate the same:- // move all A's dependent data in 'B' to a new class 'C'. class C { public: C() : a(10) { } int getA() { return a; } private: int a; }; // enhance class A's ctor with a pointer to the newly split class class A { public: A(C* cptr) { std::cout << "O.K. B's Init Data From C:- " << cptr->getA() << std::endl; } }; // now modify the actual derived class 'B' as follows class B : public C, public A { public: B() : A(static_cast<C*>(this)) { } }; For some more discussion on the same see this link on c.l.c++.m. There is a nice generic solution given by Konstantin Oznobikhin.
Probably the best thing you can do is refactoring. It does not make sense to have a base class depend on one of its derived types. I have seen this done before, and it caused the developers quite some pain: the ACE_Task class was extended to provide a periodic thread that could be extended with concrete functionality, and the thread was activated from the periodic thread constructor, only to find out that, while it worked more often than not in testing, in some situations the thread actually started before the most derived object was initialized. Inheritance is a strong relationship that should be used only when required. If you take a look at the boost thread library (just the docs, no need to enter into detail), or the POCO library, you will see that they split the problem in two: thread classes control thread execution and call a method that is passed to them in construction: the thread control is separated from the actual code that will be run, and the fact that the code to be run is received as an argument to the constructor guarantees that it was constructed before the thread constructor was called. Maybe you could use the same approach in your own code. Divide the functionality in two: whatever the derived class is doing now should be moved outside of the hierarchy (boost uses functors, POCO uses interfaces, use whatever seems to fit you most). Without a better description of what you are trying to do, I cannot really go into more detail. Another thing you could try (this is fragile and I would recommend against it) is breaking the B class into a C class that is independent of A and a B class that inherits from both, first from C then from A (with HUGE warning comments there). This will guarantee that C will be constructed prior to A. Then make the C subobject an argument of A (through an interface or as a template argument). This will probably be the fastest hack, but not a good one. Once you are willing to modify the code, just do it right.
2,047,987
2,051,279
How do I write shell extension context menu in C++ Builder 2010?
I'm looking for some examples for writing a shell extension in C++ Builder 2010 (2007 and 2009 would also probably be relevant) so I can right-click a file in Explorer and get the file path in my VCL program. I have followed Clayton Todd's tutorial, but it's from 2001, and I have some trouble getting it to work. I can't get it to call my methods (Initialize, QueryContextMenu, etc.).
For many years Delphi and C++ Builder have included a sample project (in ActiveX\ShellExt) that adds a "compile" item to project files' context menus. You should start with that. Also read the MSDN discussion on how to create a context menu handler. Overall, I recommend not using much of the VCL in your shell extension. Keep it small. All it's going to do is implement the basic IContextMenu methods and then send the file names it collects to your main program. If you've followed the tutorial and read the documentation and some of your methods still aren't being called, then do some debugging to figure out why. Ask yourself: Which functions are being called? Is the DLL getting loaded at all?
2,048,155
2,048,236
Java & memory management
I'm new to the Java world, coming from a C++ background. I'd like to port some C++ code to Java. The code uses sparse vectors: struct Feature{ int index; double value; }; typedef std::vector<Feature> featvec_t; As I understand it, if one makes an object, there will be some overhead on memory usage. So a naive implementation of Feature would add significant overhead when there are 10-100 million Features in a set of featvec_t. How can I represent this structure memory-efficiently in Java?
If memory is really your bottleneck, try storing your data in two separate arrays: int[] index and double[] value. But in most cases with such big structures performance (time) will be the main issue. Depending on operations mostly performed on your data (insert, delete, get, etc.) you need to choose appropriate data structure to store objects of class Feature. Start your explorations with java.util.Collection interface, its subinterfaces (List, Set, etc) and their implementations provided in java.util package.
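A sketch of the two-parallel-arrays idea (class and method names are mine, not from the answer), keeping indices sorted so lookups can use binary search:

```java
import java.util.Arrays;

// Sparse vector as two parallel primitive arrays: no per-entry object
// header, no boxing. Indices must be kept sorted for binarySearch.
public class SparseVector {
    private final int[] indices;
    private final double[] values;

    public SparseVector(int[] indices, double[] values) {
        this.indices = indices;
        this.values = values;
    }

    public double get(int index) {
        int pos = Arrays.binarySearch(indices, index);
        return pos >= 0 ? values[pos] : 0.0; // absent entries read as 0
    }

    public static void main(String[] args) {
        SparseVector v = new SparseVector(new int[]{2, 7},
                                          new double[]{0.5, 1.5});
        if (v.get(7) != 1.5 || v.get(3) != 0.0)
            throw new AssertionError("lookup failed");
    }
}
```

Insertion into sorted arrays is O(n), so this layout suits build-once, read-many workloads; for heavy mutation a different structure would be needed.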
2,048,440
2,048,471
Giving up control: machine code generation vs memory layout?
This may be a bit off topic of "right answer, not discussion." However, I am trying to debug my thought process, so maybe someone can help me: I use compilers all the time, and the fact that I'm giving up control over machine code generation (the layout of my caches, and the flow of electrons) does not bother me. However, giving up control of memory layout (being able to place stuff in memory) and memory management (garbage collection) still bothers me these days. Have others dealt with this? If so, how did you get past it? (In particular, how I often feel "safer" in C++ than in Java.) Thanks!
Your feeling is, naturally, very subjective. You might feel comfortable managing your own memory space in C++. Others might appreciate the easiness of Java managing the heap for you, and reducing memory management overhead to a minimum. Programming domain has an influence as well. For example, in an embedded environment, you most likely will not have the privilege to enjoy a garbage collection mechanism, leaving you to manage your own memory, whether you like it or not. Bottom line - subjective and domain-dependent.
2,048,561
2,048,633
adding win32 app icon to task bar
I want to add a simple Win32 application's icon to the taskbar while the app is running in the background. During this time, I want to send some messages to that icon so that it pops up as I require. Unfortunately I know only C/C++ and I use Visual Studio 8; is there a way or an API to do this? Example: the Outlook icon or the Wi-Fi icon
Sure, there is an API: the Shell_NotifyIcon function does that. You have to fill in a NOTIFYICONDATA structure and then call the function. What Shell_NotifyIcon will do depends on the flags that you set.
2,048,577
2,262,178
Displaying a cvMatrix containing complex numbers (CV_64FC2)
I'm new to OpenCV, and I would like to compare the results of a Python program with my calculations in OpenCV. My matrix contains complex numbers since it's the result of a cvDFT. Python handles complex numbers well and displays them with scientific notation. My C++ program doesn't display them properly through std::cout. I tried to store my numbers array in a std::complex[] instead of a double[] but it does not compile. Here is my code, and its result: CvMat *dft_A; dft_A = cvCreateMat(5, 5, CV_64FC2); // complex matrix double a[] = { 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4 }; dft_A->data.db = a; std::cout << "before : " << a[0] << std::endl; cvDFT( dft_A, dft_A, CV_DXT_FORWARD); // DFT ! std::cout << "after : " << a[0] << std::endl; >> before : 0 Here is the same in python, with the output : >>> a = np.mgrid[:5, :5][0] >>> a array([[0, 0, 0, 0, 0], [1, 1, 1, 1, 1], [2, 2, 2, 2, 2], [3, 3, 3, 3, 3], [4, 4, 4, 4, 4]]) >>> np.fft.fft2(a) array([[ 50.0 +0.j , 0.0 +0.j , 0.0 +0.j , 0.0 +0.j , 0.0 +0.j ], [-12.5+17.20477401j, 0.0 +0.j , 0.0 +0.j , 0.0 +0.j , 0.0 +0.j ], [-12.5 +4.0614962j , 0.0 +0.j , 0.0 +0.j , 0.0 +0.j , 0.0 +0.j ], [-12.5 -4.0614962j , 0.0 +0.j , 0.0 +0.j , 0.0 +0.j , 0.0 +0.j ], [-12.5-17.20477401j, 0.0 +0.j , 0.0 +0.j , 0.0 +0.j , 0.0 +0.j ]]) >>> The problem is obviously coming from the second cout, which doesn't handle the type of data (CV_64FC2 for complex numbers). My question is: how can I dump the result so I can check that my Python code is doing the same as my cpp/opencv code? Thanks!
There is a dft example in OpenCV 2.0 code, which I am also studying right now. Here is a copy paste for you that might give you an idea. As you can see, it uses cvSplit to spilit to real and imaginary components. Hope that helps: im = cvLoadImage( filename, CV_LOAD_IMAGE_GRAYSCALE ); if( !im ) return -1; realInput = cvCreateImage( cvGetSize(im), IPL_DEPTH_64F, 1); imaginaryInput = cvCreateImage( cvGetSize(im), IPL_DEPTH_64F, 1); complexInput = cvCreateImage( cvGetSize(im), IPL_DEPTH_64F, 2); cvScale(im, realInput, 1.0, 0.0); cvZero(imaginaryInput); cvMerge(realInput, imaginaryInput, NULL, NULL, complexInput); dft_M = cvGetOptimalDFTSize( im->height - 1 ); dft_N = cvGetOptimalDFTSize( im->width - 1 ); dft_A = cvCreateMat( dft_M, dft_N, CV_64FC2 ); image_Re = cvCreateImage( cvSize(dft_N, dft_M), IPL_DEPTH_64F, 1); image_Im = cvCreateImage( cvSize(dft_N, dft_M), IPL_DEPTH_64F, 1); // copy A to dft_A and pad dft_A with zeros cvGetSubRect( dft_A, &tmp, cvRect(0,0, im->width, im->height)); cvCopy( complexInput, &tmp, NULL ); if( dft_A->cols > im->width ) { cvGetSubRect( dft_A, &tmp, cvRect(im->width,0, dft_A->cols - im->width, im->height)); cvZero( &tmp ); } // no need to pad bottom part of dft_A with zeros because of // use nonzero_rows parameter in cvDFT() call below cvDFT( dft_A, dft_A, CV_DXT_FORWARD, complexInput->height ); cvNamedWindow("win", 0); cvNamedWindow("magnitude", 0); cvShowImage("win", im); // Split Fourier in real and imaginary parts cvSplit( dft_A, image_Re, image_Im, 0, 0 ); // Compute the magnitude of the spectrum Mag = sqrt(Re^2 + Im^2) cvPow( image_Re, image_Re, 2.0); cvPow( image_Im, image_Im, 2.0); cvAdd( image_Re, image_Im, image_Re, NULL); cvPow( image_Re, image_Re, 0.5 ); // Compute log(1 + Mag) cvAddS( image_Re, cvScalarAll(1.0), image_Re, NULL ); // 1 + Mag cvLog( image_Re, image_Re ); // log(1 + Mag)
2,048,664
2,048,688
Passing by reference [C++], [Qt]
I wrote something like this: class Storage { public: Storage(); QString key() const; int value() const; void add_item(QString&,int); private: QMap<QString,int>* my_map_; }; void Storage::add_item(QString& key,int value)//------HERE IS THE SLOT FOR ADDING { *my_map_[key] = value; } and when I'm trying to add an item to the QMap via: class Dialog : public QDialog { Q_OBJECT public: Dialog(QWidget* = 0); public slots: void add_item() { strg_->add_item(ui->lineEdit->text(),ui->spinBox->value());//---HERE I'M "PASSING" TWO OBJECTS: QString AND int ui->lineEdit->clear(); } private: Ui::Dialog* ui; Storage* strg_; }; I'm getting this error: error: no matching function for call to 'Storage::add_item(QString, int) note: candidates are: void Storage::add_item(QString&, int) How am I supposed to pass the QString by reference other than the way I do it now? Thank you.
add_item should take a "const QString&" rather than a "QString&" as its parameter: lineEdit->text() returns a temporary QString, and a temporary cannot bind to a non-const reference.
2,048,967
2,049,011
Why does std::for_each(from, to, function) return function?
I just read the code for std::for_each: template<class InputIterator, class Function> Function for_each(InputIterator first, InputIterator last, Function f) { for ( ; first!=last; ++first ) f(*first); return f; } and could not see any good reasons for this template function to return the input function. Does anyone have any examples on where this would be useful?
It's to allow you to accrue state in your function and then return it to your calling code. For instance, your function (as a functor class) could have a member int for counting the number of times it had been called. Here is a page with some examples: https://web.archive.org/web/20171127171924/http://xenon.arcticus.com/c-morsels-std-for-each-functors-member-variables
2,049,190
2,049,264
Debugging a big double array
I am using a C++ library that provides an object that, for the sake of simplicity, is more or less like this: class ExampleSO { public: double* narray; }; I have an instance of ExampleSO whose narray has about 200 elements. Some other method ExampleSO::method() performs a lot of arithmetic operations on this array and assigns the results to different array elements: ExampleSO::method() { // a lot of operations narray[50] = narray[1] * narray[2] / narray[40]; // and so on This code is generated by another program and it uses a bunch of defines to handle the array elements, so the code looks like this: #define A narray[0] #define X narray[1] #define Y narray[2] // ... #define Z narray[40] // .... #define U narray[50] // ... more, until narray[199] ExampleSO::method() { // a lot of operations U = X * Y / Z; // and so on } My problem is that eventually some array elements are NaN and I am trying to debug the code to see why. I have already found out some of them, which are mainly caused by divisions by zero, others by exponentiation by very small numbers (small as in between 0 and +/-0.1). With my little knowledge of gdb magic, I managed to see the array elements by display *(this->narray) @ 200, but this array is very big and therefore, unreadable. So debugging this piece of code has turned out to be a burdensome task, because the #defines hide the position of the element from me, the array is way too big, and so many elements become NaN that I get lost. My question is: what ideas/suggestions do you have to help me debug this code? Perhaps a conditional breakpoint when the first array element becomes NaN would be useful? How could I do that with such a structure? Thanks!
Rewrite it. The structure you describe is horrible beyond description. Write a python script to turn the #defines into gdb variable aliases, so that you can refer to them symbolically. Use array syntax in gdb: p narray[12] Add some debugging helper functions and call them from the debugger: p printMyFavoriteValues(narray) Learn how to enable signalling NaNs. It is different on Windows versus Mac versus Linux. #ifdef DARWIN _mm_setcsr( _MM_MASK_MASK &~ (_MM_MASK_OVERFLOW|_MM_MASK_INVALID|_MM_MASK_DIV_ZERO) ); #else feenableexcept(FE_DIVBYZERO | FE_UNDERFLOW | FE_OVERFLOW | FE_INVALID); #endif
2,049,238
2,049,292
Inherited class "invalid pointer error" when calling virtual functions
As you can see in the code below, I have an Abstract Base Class "HostWindow", and class that derives from it "Chrome". All the functions are implemented in Chrome. The issue is, I can't call functions in Chrome if they're virtual. class HostWindow : public Noncopyable { public: virtual ~HostWindow() { } // Pure virtual functions: virtual void repaint(const IntRect&, bool contentChanged, bool immediate = false, bool repaintContentOnly = false) = 0; virtual void scrollbarsModeDidChange() const = 0; } class Chrome : public HostWindow { // HostWindow functions: virtual void repaint(const IntRect&, bool contentChanged, bool immediate = false, bool repaintContentOnly = false); virtual void scrollbarsModeDidChange() const; void focus() const; } So lets say we have an instance of Chrome, and we call a few functions: WebCore::Chrome *chrome = new Chrome(); chrome->repaint(IntRect(), true); // Null pointer error chrome->focus(); // returns void (works) The null pointer error I get whenever I call virtual functions is: Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_PROTECTION_FAILURE at address: 0x00000008 Any idea what's happening? Update: As many of you pointed out - this code actually runs. Unfortunately I can't provide a more full example, since the code is deep inside WebCore (WebKit). However, I have narrowed the problem down. If I create a Chrome instance manually, calling virtual functions work. So the issue is with this particular chrome instance - it can't instantiated properly. Now, the Chrome instance is instantiated in a constructor of another class. I'll investigate further... Update 2: Ok, examining the vtable on the offending instance shows that it's null; from GDB: p *(void **)chrome $52 = (void *) 0x0 A normal instance has a correct vtable. So, I've got to work out why the vtable is nil - I wonder how that could happen? Maybe because it's being instantiated in some other classes Constructor? 
Update 3: Looks like I'm correct about the issue being it's instantiation inside another class' constructor. So, before the instantiation looked like this: Page::Page(ChromeClient* chromeClient, ...) : m_chrome(new Chrome(this, chromeClient)) And m_chrome is an invalid instance, with a nil vtable. I've changed the instantiation so it happens when the first time the variable is needed (this involves saving ChromeClient for later): Page::Page(ChromeClient* chromeClient, ...) : m_chrome(0) , m_chrome_client(chromeClient) Chrome* Page::chrome() const { if(!m_chrome) { m_chrome = new Chrome(this, m_chrome_client); } return m_chrome; } Now the Page::chrome() instance is the correct one, with the proper vtable - rather odd! Update 4: Last update, I promise :). Ok, so I've pinpointed it down exactly. You get the correct instance, with the vtable, if you instantiate it in Page constructor's body. If you instantiate it in Page constructor's head, it doesn't have a vtable. Is there any limitation in the types of variable setting you can do in a constructor's head? I guess that's another Stackoverflow question. Thanks guys for being so helpful.
Yes, the 'this' pointer is zero. Add 8 to get an offset, and there's your fault. You apparently don't have any actual object at all. Since you haven't posted enough code to really come to grips, I'm guessing. Either the entire this pointer is 0, or the virtual function table pointer is 0, perhaps because the object has been deleted after it was created and before you try to call it. The best advice I can give you is to create a much smaller test-tube. Either you will find your problem or you will end up with a postable example. The vtbl isn't in place in an instance until the end of the construction process. In fact, the spec requires progressive modification of the vtbl to match the state of construction of the class hierarchy.
2,049,291
4,490,785
Force deletion of slot in boost::signals2
I have found that boost::signals2 uses sort of a lazy deletion of connected slots, which makes it difficult to use connections as something that manages lifetimes of objects. I am looking for a way to force slots to be deleted directly when disconnected. Any ideas on how to work around the problem by designing my code differently are also appreciated! This is my scenario: I have a Command class responsible for doing something that takes time asynchronously, looking something like this (simplified): class ActualWorker { public: boost::signals2<void ()> OnWorkComplete; }; class Command : boost::enable_shared_from_this<Command> { public: ... void Execute() { m_WorkerConnection = m_MyWorker.OnWorkDone.connect(boost::bind(&Command::Handle_OnWorkComplete, shared_from_this()); // launch asynchronous work here and return } boost::signals2<void ()> OnComplete; private: void Handle_OnWorkComplete() { // get a shared_ptr to ourselves to make sure that we live through // this function but don't keep ourselves alive if an exception occurs. shared_ptr<Command> me = shared_from_this(); // Disconnect from the signal, ideally deleting the slot object m_WorkerConnection.disconnect(); OnComplete(); // the shared_ptr now goes out of scope, ideally deleting this } ActualWorker m_MyWorker; boost::signals2::connection m_WorkerConnection; }; The class is invoked about like this: ... boost::shared_ptr<Command> cmd(new Command); cmd->OnComplete.connect( foo ); cmd->Execute(); // now go do something else, forget all about the cmd variable etcetera. the Command class keeps itself alive by getting a shared_ptr to itself which is bound to the ActualWorker signal using boost::bind. When the worker completes, the handler in Command is invoked. Now, since I would like the Command object to be destroyed, I disconnect from the signal as can be seen in the code above. 
The problem is that the actual slot object is not deleted when disconnected, it is only marked as invalid and then deleted at a later time. This in turn appears to depend on the signal to fire again, which it doesn't do in my case, leading to the slot never expiring. The boost::bind object thus never goes out of scope, holding a shared_ptr to my object that will never get deleted. I can work around this by binding using the this pointer instead of a shared_ptr and then keeping my object alive using a member shared_ptr which I then release in the handler function, but it kind of makes the design feel a bit overcomplicated. Is there a way to force signals2 to delete the slot when disconnecting? Or is there something else I could do to simplify the design? Any comments are appreciated!
I ended up doing my own (subset) implementation of a signal, the main requirement being that a slot should be destroyed by a call to connection::disconnect(). The implementation goes along the lines of the signal storing all slots in a map from slot implementation pointer to a shared_ptr for a slot implementation instead of a list/vector, thereby giving quick access to individual slots without having to iterate over all slots. A slot implementation is in my case basically a boost::function. Connections have a weak_ptr to the internal implementation class for the signal and a weak_ptr to the slot implementation type to allow the signal to go out of scope and to use the slot pointer as the key into the signal map as well as an indication on whether the connection is still active (can't use a raw pointer as that could potentially be reused). When disconnect is called, both of these weak pointers are converted to shared_ptrs and if both of these succeed, the signal implementation is asked to disconnect the slot given by the pointer. This is done by simple erasing it from the map. The map is protected by a mutex to allow for multithreaded use. To prevent deadlocks, the mutex is not held while calling the slots, however this means that a slot may be disconnected from a different thread just prior to being called by the signal. This is also the case with regular boost::signals2 and in both of these scenarios one needs to be able to handle a callback from a signal even after one has disconnected. To simplify the code for when the signal is fired, I am forcing all slots to be disconnected during this. This is different from boost::signals2, that does a copy of the list of slots before calling them in order to handle disconnections/connections while firing the signal. 
The above works well for my scenario, where the signal of interest is fired very seldom (and in that case only once) but there are a lot of short-lived connections that otherwise use up a lot of memory even when using the trick outlined in the question. For other scenarios, I've been able to replace the use of a signal with just a boost::function (thus requiring that there can only be a single connection) or just by sticking with the workaround in the question where the listener itself manages its lifetime.
2,049,793
5,514,417
MPI , Sungrid vs JPPF?
I have a little experience with Sun Grid Engine and MPI (using OpenMPI). What's the difference between these frameworks/APIs and JPPF?
All three of these are somehow related to parallel computing, but on pretty different levels. The Sun Grid Engine (SGE) is a queueing system. It is usually set up by the system administrator of a big computing site, and allows users to submit long-running computing "jobs". SGE checks whether any computing nodes are unoccupied, and if they are, it starts the job on that machine, otherwise the job will have to wait in the queue until a machine becomes available. SGE mainly cares about correct distribution of the jobs. For a single user, SGE is of very limited use. SGE is often used in high performance computing to schedule the user jobs. JPPF is a Java framework which can help an application developer to run and implement a parallel Java program. It allows a Java application to run independent parts of it on other machines in parallel. It is useful to split a computing-intensive Java application into several mostly independent parts (which are typically called "tasks"). Although I do not really know the framework, I guess that it is mostly used to distribute big business applications onto several computers. MPI (Message Passing interface) is an API (mainly for C/FORTRAN, but bindings for other languages exist) that allows developers to write massively parallel applications. MPI is mostly intended for data-parallel applications, where all parallel jobs do the same operations, but on different data, and where the different jobs have to communicate a lot. It is used in high performance computing, where a single application may run on up to several thousands of processors for up to several days.
2,049,944
2,050,086
boost memorybuffer and char array
I'm currently unpacking one of Blizzard's .mpq files for reading. For accessing the unpacked char buffer, I'm using a boost::interprocess::stream::memorybuffer. Because .mpq files have a chunked structure always beginning with a version header (usually 12 bytes, see http://wiki.devklog.net/index.php?title=The_MoPaQ_Archive_Format#2.2_Archive_Header), the char* array representation seems to truncate at the first \0, even if the filesize (something about 1.6mb) remains constant and (probably) always allocated. The result is a streambuffer with an effective length of 4 ('REVM' and byte nr.5 is \0). When attempting to read further, an exception is thrown. Here is an example: // (somewhere in the code) { MPQFile curAdt(FilePath); size_t size = curAdt.getSize(); // roughly 1.6 mb bufferstream memorybuf((char*)curAdt.getBuffer(), curAdt.getSize()); // bufferstream.m_buf.m_buffer is now 'REVM\0' (Debugger says so), // but internal length field still at 1.6 mb } ////////////////////////////////////////////////////////////////////////////// // wrapper around a file of the mpq_archive of libmpq MPQFile::MPQFile(const char* filename) // I apologize for my inconsistent naming convention :P { for(ArchiveSet::iterator i=gOpenArchives.begin(); i!=gOpenArchives.end();++i) { // gOpenArchives points to MPQArchive, wrapper around the mpq_archive, has mpq_archive * mpq_a as member mpq_archive &mpq_a = (*i)->mpq_a; // if file exists in that archive, tested via hash table in file, not important here, scroll down if you want mpq_hash hash = (*i)->GetHashEntry(filename); uint32 blockindex = hash.blockindex; if ((blockindex == 0xFFFFFFFF) || (blockindex == 0)) { continue; //file not found } uint32 fileno = blockindex; // Found!
size = libmpq_file_info(&mpq_a, LIBMPQ_FILE_UNCOMPRESSED_SIZE, fileno); // HACK: in patch.mpq some files don't want to open and give 1 for filesize if (size<=1) { eof = true; buffer = 0; return; } buffer = new char[size]; // note: size is 1.6 mb at this time // Now here comes the tricky part... if I step over the libmpq_file_getdata // function, I'll get my truncated char array, which I absolutely don't want^^ libmpq_file_getdata(&mpq_a, hash, fileno, (unsigned char*)buffer); return; } } Maybe someone could help me. I'm really new to STL and boost programming and also inexperienced in C++ programming anyway :P I hope to get a helpful answer (please don't suggest rewriting libmpq and the underlying zlib architecture^^). The MPQFile class and the underlying uncompress methods are actually taken from a working project, so the mistake is either somewhere in the use of the buffer with the streambuffer class or some internal char array arithmetic I have no clue about. By the way, what is the difference between using signed/unsigned chars as data buffers? Does it have anything to do with my problem (you might see that in the code, char* and unsigned char* are used interchangeably as function arguments)? If you need more info, feel free to ask :)
How are you determining that your char* array is being 'truncated' as you call it? If you're printing it or viewing it in a debugger it will look truncated because it will be treated like a string, which is terminated by \0. The data in 'buffer' however (assuming libmpq_file_getdata() does what it's supposed to do) will contain the whole file or data chunk or whatever.
2,049,952
2,049,993
How to get Boost libraries binaries that work with Visual Studio?
Here's a question you may have seen around the 'nets in various forms...summed up here for your googling pleasure :-) I have a project that is built with Microsoft's Visual Studio and uses functionality from boost (http://www.boost.org/). I already have my project working with some of the libraries that are header-only (no binary library needed to link with). How or where can I get the Windows binaries for the other libraries?
There are three different options for accessing the binary libraries: 1) Build them from source. Go into the boost directory and run: bootstrap .\bjam Or get more complicated and do something like: bjam --stagedir="c:\Program Files\Boost" --build-type=complete --toolset=msvc-9.0 --with-regex --with-date_time --with-thread --with-signals --with-system --with-filesystem --with-program_options stage 2) Use the BoostPro installer (http://www.boostpro.com/download) to get the specific libraries that you need. This is very nice because it only downloads and installs the files that you say you want. However, it never has the most current version available, and there are no 64-bit binaries. 3) Download the entire set of libraries (http://boost.teeks99.com). Easy to use, simple to do, but the libraries are huge (7GB unzipped!). Edit 2013-05-13: My builds are now available (starting from 1.53) directly from the SourceForge page.
2,050,369
2,314,650
Display image in second thread, OpenCV?
I have a loop to take in images from a high-speed frame grabber at 250fps. /** Loop processes 250 video frames per second **/ while(1){ AcquireFrame(); DoProcessing(); TakeAction(); } At the same time, I would like the user to be able to monitor what is going on. The user only needs to see images at around 30 fps (or less). How do I set up a second thread that displays the current frame every so often? Thread(){ cvShowImage(); Wait(30); /** Wait for 30 ms **/ } I am on Windows on a quad-core Intel machine using MinGW, gcc and OpenCV 1.1. The main criterion is that the display thread must take as little time away from my main processing loop as possible. Every millisecond counts. I have tried using CreateThread() to create a new thread with cvShowImage() and cvWaitKey() but apparently those functions are not thread-safe. I am considering using OpenMP, but some people report problems with OpenMP and OpenCV. I am also considering trying to use DirectX DirectDraw because apparently it is very fast, but it looks complicated and evidently there are problems using Windows DLLs with MinGW. Which of these avenues would be the best place to start?
Ok. So embarrassingly my question is also its own answer. Using CreateThread(), cvShowImage() and cvWaitKey() as described in my question actually works -- contrary to some postings on the web which suggest otherwise. In any event, I implemented something like this: /** Global Variables **/ bool DispThreadHasFinished; bool MainThreadHasFinished; IplImage* myImg; /** Main Loop that loops at >100fps **/ main() { DispThreadHasFinished = FALSE; MainThreadHasFinished = FALSE; CreateThread(..,..,Thread,..); while( !IsTheUserDone() ) { myImg=AcquireFrame(); DoProcessing(); TakeAction(); } MainThreadHasFinished = TRUE; while ( !DispThreadHasFinished ) { cvWaitKey(100); } return; } /** Thread that displays image at ~30fps **/ Thread() { while ( !MainThreadHasFinished ) { cvShowImage("display", myImg); cvWaitKey(30); } DispThreadHasFinished=TRUE; return; } When I originally posted this question, my code was failing for unrelated reasons. I hope this helps!
2,050,404
2,050,427
How to sprintf an unsigned char?
This doesn't work: unsigned char foo; foo = 0x123; sprintf("the unsigned value is:%c",foo); I get this error: cannot convert parameter 2 from 'unsigned char' to 'char'
Use the %u conversion in the printf() format string: printf("%u", foo);
2,050,460
2,050,486
C++ retrieve exception information
I have a C++ DLL which I need to debug. Due to the circumstances in which I am using the DLL, I am unable to debug it via the calling application. So, I created a try-catch, where the catch writes the exception to a file. The line which needs to be debugged involves imported classes from a third-party DLL, so I have no way of knowing what type of exception it is. When I tried catch(exception e), no message was written to the file. So I tried catch(...), which did trigger something: using std::exception::what, the only thing that got written to the file was "1". Using std::exception::exception, the file received the following code: "0579EF90". Is there any way for me to retrieve meaningful info about the exception that was thrown? TIA CG
If you can't use catch(KnownExceptionType ex) and apply your knowledge of KnownExceptionType to extract info, then no, you can't. When you catch with catch(...) you are pretty much lost: you know that you handled an exception, but there is no type information there, so there is little you can do. You are in the worst case: an exception coming out of a library, with no info on the exception; even if you had headers for the lib, that exception type doesn't need to be defined there.
2,050,462
2,054,884
Prevent a QMenu from closing when one of its QAction is triggered
I'm using a QMenu as context menu. This menu is filled with QActions. One of these QActions is checkable, and I'd like to be able to check/uncheck it without closing the context menu (and having to re-open it again to choose the option that I want). I've tried disconnecting the signals emitted by the checkable QAction with no luck. Any ideas? Thanks.
Use a QWidgetAction and QCheckBox for a "checkable action" which doesn't cause the menu to close. QCheckBox *checkBox = new QCheckBox(menu); QWidgetAction *checkableAction = new QWidgetAction(menu); checkableAction->setDefaultWidget(checkBox); menu->addAction(checkableAction); In some styles, this won't appear exactly the same as a checkable action. For example, for the Plastique style, the check box needs to be indented a bit.
2,050,551
2,050,777
Qt 4.5.3 QEvent::EnterEditFocus
In the Qt docs, EnterEditFocus is an event about an editor widget gaining focus for editing, but with Qt 4.5.3 the compilation fails with ‘EnterEditFocus’ is not a member of ‘QEvent’. What's wrong?
If Idan's suggestion doesn't work, note that QEvent::EnterEditFocus isn't defined unless you built Qt with QT_KEYPAD_NAVIGATION defined. Refer to the following documentation: http://doc.qt.io/archives/4.6/qapplication.html#keypadNavigationEnabled
2,050,766
2,051,695
How to run gdb against a daemon in the background?
I'm trying to debug a server I wrote with gdb as it segfaults under very specific and rare conditions. Is there any way I can make gdb run in the background (via quiet or batch mode?), follow children (as my server is a daemon and detaches from the main PID) and automatically dump the core and the backtrace (to a designated file) once the program crashes?
Why not just run the process interactively in a persistent screen session? Why must it be a daemon when debugging? Or just run gdb in the screen session and attach it to the running process (e.g. gdb /path/to/binary -p PID_of_binary) after it forks.
2,050,900
2,050,956
C++ templates: prevent instantiation of base template
I have an interface std::string get_string(Source const &s, std::string const &d); int get_int(Source const &s, int const &d); bool get_bool(Source const &s, bool const &d); which I'd like to change to template<class T> T get(Source const &s, T const &d); But there's no sensible base template, so the actual base definition is legal but useless (return d;). What can I do to force a compile-time failure if the base is instantiated? Is there an idiomatic solution for this situation?
Don't define the template, just declare it and define the three specializations. template <typename T> T get(Source const &, T const &); template<> std::string get(Source const &s, std::string const &d) { return d + s.stringval(); // or whatever } [Edit: removed stuff about overloads - just for once, template function specialization does actually seem to be better. Who woulda thunk?]
2,050,961
2,051,031
Is "argv[0] = name-of-executable" an accepted standard or just a common convention?
When passing argument to main() in a C or C++ application, will argv[0] always be the name of the executable? Or is this just a common convention and not guaranteed to be true 100% of the time?
Guesswork (even educated guesswork) is fun but you really need to go to the standards documents to be sure. For example, ISO C11 states (my emphasis): If the value of argc is greater than zero, the string pointed to by argv[0] represents the program name; argv[0][0] shall be the null character if the program name is not available from the host environment. So no, it's only the program name if that name is available. And it "represents" the program name, not necessarily is the program name. The section before that states: If the value of argc is greater than zero, the array members argv[0] through argv[argc-1] inclusive shall contain pointers to strings, which are given implementation-defined values by the host environment prior to program startup. This is unchanged from C99, the previous standard, and means that even the values are not dictated by the standard - it's up to the implementation entirely. This means that the program name can be empty if the host environment doesn't provide it, and anything else if the host environment does provide it, provided that "anything else" somehow represents the program name. In my more sadistic moments, I would consider translating it into Swahili, running it through a substitution cipher then storing it in reverse byte order :-). However, implementation-defined does have a specific meaning in the ISO standards - the implementation must document how it works. So even UNIX, which can put anything it likes into argv[0] with the exec family of calls, has to (and does) document it.