1,590,905
1,590,968
Is it possible to change the background color of an edit control from inside the edit control?
I am writing a GUI wrapper for the Windows API right now (I can't use Qt or MFC). The library itself is extremely basic. After subclassing the Windows common controls (and wrapping them into classes) I have run into a problem. As far as I know (and I hope I am wrong), only the parent control can handle messages like WM_CTLCOLOREDIT and the like. But I want to be able to write something like this: myedit->SetBkColor ( RGB ( 0, 0, 0 ) ); Is it possible to implement at all (like in Windows Forms, for example), or should I write a new control from scratch and do the painting myself? Thank you, #535.
You can do it, but it is a fair amount of work. The basic idea is that you create another window to act as the parent to the control you're subclassing. In that, you keep track of whether a notification message (e.g. WM_CTLCOLOREDIT) is being handled by the parent or the sub-classed control itself. If it's being handled by the parent, it just passes the message through, but if it's supposed to be handled by the subclassed control itself, it reflects it back to the control.
1,590,961
1,591,088
Call class member function in VC++ debugger
A while ago I read the Debugging Windows Programs book, and one of the tricks that it talked about extensively was calling functions from the Visual C++ debugger (quick)watch window. As luck would have it, I don't have a copy on hand and the little documentation that I could find about this is really really poor. So how DO you call a member function in the watch window? What if the function lives in a DLL? What if it is part of a namespace? Can you pass non-trivial parameters? Let's use this example: I want to call the size() method of QList<MyType>, where MyType is a custom type. Thanks!
It works and is hugely useful. You can evaluate expressions in the watch window or open the QuickWatch window (Ctrl-Alt-Q -- a very handy shortcut to know). It will let you call most forms of member functions. The only time it commonly tends to fail is if you've got overloaded operators, e.g. with smart pointers. For a simple class without overloaded operators you should find it works well. I think it should accept non-trivial parameters (though obviously it depends how non-trivial!). As well as calling functions that return values, you can also call functions that modify the object -- there's no constraint on only calling getter methods. The other kind-of-obvious thing to remember is that all variables are evaluated in the local stack frame, so ensure the variable is visible from the current point in the stack.
1,591,018
1,591,039
std::vector::clear() in constructor and destructor
I often encounter code where std::vector::clear() is called in the constructor and destructor on a class member of type std::vector. I don't see why it's required: constructor - the class member of type std::vector is empty by default, so there is no need to call clear(). destructor - the class member of type std::vector will be destroyed as part of the standard destruction of the object containing it. As part of the vector's destruction all value objects contained in it will be destroyed (if it holds heap-allocated pointers, they must be deleted "manually"), so again there is no need to call clear(). Am I missing something?
From the sound of things, the people who wrote that code were the ones who missed something. The only time it would make sense to call clear() in a ctor or dtor would be in the middle of some other code. For example, a ctor might read in some data, process it, then read in more data. In such a case, it's probably faster to use a single container for the data as you read it in, and clear it each time, than to create a new container every iteration.
1,591,114
1,591,148
Embedded scripting engine for DSL
I'm working on a project which needs an embedded DSL to fulfill its expected requirements. The DSL would be user-defined-event based. Here is a mockup of the desired syntax: user-defined-event-1 { // event body } user-defined-event-2 { // event body } Probably the most similar event-based language I know is LSL (from Second Life). So, after reading other similar questions on SO, I would like to ask for the best embeddable scripting engine (Ruby, Lua, Python, etc.) for C++ (I work in Qt) which allows me to create this DSL. In my project, I would check that the script properly uses the DSL syntax (at least one event defined) and give the user all the power of the underlying scripting engine and, if possible, Qt. It is not a requirement for the embedded language to work with Qt. It can be isolated, but it would be nice to have some integration too.
There are at least a few Qt-Lua bindings out there. Lua can somewhat do the syntax you've shown above; specifically, {} indicates a table (associative array) in Lua, and if you are only passing an anonymous table to a function, you don't need parentheses:

Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
> function LengthOfTable(t) print(#t) end
> LengthOfTable ({"a","b","c"})
3
> LengthOfTable {"a","b","c"}
3

Whether Lua is actually the best for your application depends on your application, of course. Either way, Lua is very easy (IMO) to embed in C or C++.
1,591,217
1,591,235
What is the meaning of leading underscores in a C++ constructor?
OK I am not a very experienced C++ programmer, but I was wondering what is the significance of the underscores in the arguments of the following constructor? class floatCoords { public: floatCoords(float _x, float _y, float _width, float _height) : x(_x), y(_y), width(_width), height(_height) { } float x, y, width, height; ...
It's just a convenient naming convention; it means nothing to the language. Just be sure you don't follow the underscore with an upper-case letter: What does double underscore ( __const) mean in C?
1,591,218
1,595,087
How can I compile boost::spirit for an unsigned char type?
boost::spirit asserts in boost::spirit::char_class::ascii::isalnum() when passing characters > 127. I changed all my private variables from std::string to a typedef std::basic_string<unsigned char, std::char_traits<unsigned char>, std::allocator<unsigned char> > u_string; but boost still uses std::string internally. What do I have to do?
The solution is quite simple: instead of using namespace boost::spirit::ascii; I now use using namespace boost::spirit::iso8859_1; This recognizes all characters in the ISO 8859-1 character set.
1,591,269
1,591,283
Using an abstract class to implement a stack of elements of the derived class
I have to do this for a basic C++ lecture at my university, so just to be clear: I would have used the STL if I were allowed to. The problem: I have a class named "shape3d" from which I derived the classes "cube" and "sphere". Now I have to implement "shape3d_stack", which is meant to be able to hold objects of the types "cube" and "sphere". I used arrays for this and it worked quite well when I tried it with a stack of ints. I tried to do it like so:

shape3d_stack.cpp:

15 // more stuff
16
17 shape3d_stack::shape3d_stack (unsigned size) :
18     array_ (NULL),
19     count_ (0),
20     size_ (size)
21 { array_ = new shape3d[size]; }
22
23 // more stuff

but, unfortunately, the compiler tells me:

g++ -Wall -O2 -pedantic -I../../UnitTest++/src/ -c shape3d_stack.cpp -o shape3d_stack.o
shape3d_stack.cpp: In constructor ‘shape3d_stack::shape3d_stack(unsigned int)’:
shape3d_stack.cpp:21: error: cannot allocate an object of abstract type ‘shape3d’
shape3d.hpp:10: note: because the following virtual functions are pure within ‘shape3d’:
shape3d.hpp:16: note: virtual double shape3d::area() const
shape3d.hpp:17: note: virtual double shape3d::volume() const

I guess this must be some kind of really ugly design error on my part. So what would be the correct way of using all kinds of objects derived from "shape3d" with my stack?
You can't create objects from abstract classes. You'll probably want to create an array of pointers to the abstract class, which is allowed, and fill them with derived instances:

// declaration somewhere:
shape3d** array_;

// initialization later:
array_ = new shape3d*[size];

// fill later, triangle is derived from shape3d:
array_[0] = new triangle;
1,591,444
1,591,761
Creating a Windows Forms Control (C++)
trying to run this basic form control example on msdn. At step 1 of the portion "To add a custom property to a control" we place the ClickAnywhere code in the public section of the class. First error: "error C2144: syntax error : 'bool' should be preceded by ';'" Is this syntax correct in C++? (see below) (removing the ClickAnywhere portion of code, it compiles fine...) #pragma once using namespace System; using namespace System::ComponentModel; using namespace System::Collections; using namespace System::Windows::Forms; using namespace System::Data; using namespace System::Drawing; namespace clickcounter { /// <summary> /// Summary for clickcounterControl /// </summary> /// /// WARNING: If you change the name of this class, you will need to change the /// 'Resource File Name' property for the managed resource compiler tool /// associated with all .resx files this class depends on. Otherwise, /// the designers will not be able to interact properly with localized /// resources associated with this form. public __gc class clickcounterControl : public System::Windows::Forms::UserControl { public: //Problem code***** property bool ClickAnywhere { //Is this syntax right in C++? bool get() { return (label1->Dock == DockStyle::Fill); } void set(bool val) { if (val) label1->Dock = DockStyle::Fill; else label1->Dock = DockStyle::None; } } //End Problem code***** clickcounterControl(void) { InitializeComponent(); } protected: void Dispose(Boolean disposing) { if (disposing && components) { components->Dispose(); } __super::Dispose(disposing); } private: System::Windows::Forms::Label * label1; private: /// <summary> /// Required designer variable. /// </summary> System::ComponentModel::Container* components; /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. 
/// </summary> void InitializeComponent(void) { this->label1 = new System::Windows::Forms::Label(); this->SuspendLayout(); // // label1 // this->label1->BorderStyle = System::Windows::Forms::BorderStyle::FixedSingle; this->label1->Location = System::Drawing::Point(32, 40); this->label1->Name = S"label1"; this->label1->Size = System::Drawing::Size(30, 20); this->label1->TabIndex = 0; this->label1->Text = S"0"; this->label1->TextAlign = System::Drawing::ContentAlignment::MiddleCenter; this->label1->Click += new System::EventHandler(this, label1_Click); // // clickcounterControl // this->Controls->Add(this->label1); this->Name = S"clickcounterControl"; this->Size = System::Drawing::Size(100, 100); this->ResumeLayout(false); } private: System::Void label1_Click(System::Object * sender, System::EventArgs * e) { int temp = System::Int32::Parse(label1->Text); temp++; label1->Text = temp.ToString(); } }; }
Since you are using Visual Studio .Net 2003, you are using Managed C++, not C++/CLI. There is a significant difference in syntax. For a property, you must use the __property keyword, not the C++/CLI property keyword and its new style. It should therefore be: __property bool get_ClickAnywhere() { return (label1->Dock == DockStyle::Fill); } __property void set_ClickAnywhere(bool value) { if (value) label1->Dock = DockStyle::Fill; else label1->Dock = DockStyle::None; } It looks like you are being tripped up by following a guide written for C++/CLI (Visual Studio 2005 and later) while still using Visual Studio 2003.
1,591,547
1,591,559
Template classes and their methods
Linking my program produces a bunch of errors like below.

/home/starlon/Projects/LCDControl/DrvQt.cpp:8: undefined reference to `Generic<LCDText>::Generic(Json::Value*, int)'
/home/starlon/Projects/LCDControl/DrvQt.cpp:18: undefined reference to `Generic<LCDText>::~Generic()'
/home/starlon/Projects/LCDControl/DrvQt.cpp:8: undefined reference to `Generic<LCDText>::Generic(Json::Value*, int)'
/home/starlon/Projects/LCDControl/DrvQt.cpp:18: undefined reference to `Generic<LCDText>::~Generic()'
DrvQt.o: In function `~DrvQt':
/home/starlon/Projects/LCDControl/DrvQt.cpp:23: undefined reference to `Generic<LCDText>::~Generic()'
/home/starlon/Projects/LCDControl/DrvQt.cpp:23: undefined reference to `Generic<LCDText>::~Generic()'
/home/starlon/Projects/LCDControl/DrvQt.cpp:23: undefined reference to `Generic<LCDText>::~Generic()'
/home/starlon/Projects/LCDControl/DrvQt.cpp:23: undefined reference to `Generic<LCDText>::~Generic()'
DrvQt.o:(.rodata._ZTV5DrvQt[vtable for DrvQt]+0xc): undefined reference to `Generic<LCDText>::CFG_Key()'
DrvQt.o:(.rodata._ZTC5DrvQt0_7GenericI7LCDTextE[vtable for DrvQt]+0xc): undefined reference to `Generic<LCDText>::CFG_Key()'

Does that mean I have to have a Generic<LCDText>::MethodName definition (and likewise a Generic<T>::MethodName) for every template parameter I'm going to use? I hope not. I was under the impression that templates were supposed to avoid that sort of scenario.
Edit: Here's DrvQt.cpp #include <iostream> #include <QMainWindow> #include "LCDControl.h" #include "DrvQt.h" DrvQt::DrvQt(LCDControl *visitor, Json::Value *config, int rows, int cols) : Generic<LCDText>(config, LCD_TEXT) { // line 8 display_ = new QtDisplay((Generic<LCDText> *)this, config, rows, cols, 8, 6); lcd_ = (LCDText *)display_; std::cout << "DrvQt" << std::endl; visitor_ = visitor; std::cout << "Eval: " << Eval("uptime.Uptime('%H')").toString().toStdString() << std::endl; std::cout << "Eval: " << Eval("cpuinfo.Cpuinfo('model name')").toString().toStdString() << std::endl; std::cout << "Eval: " << Eval("foo").toString().toStdString() << std::endl; std::cout << "Eval: " << Eval("name").toString().toStdString() << std::endl; } // line 18 DrvQt::~DrvQt() { delete display_; //delete window; } // line 23
More code would be nice to debug with; please post some code, as that error message is not enough to go on (for me at least). For now, it sounds like an undefined reference, so you might want to make sure that the declaration and implementation of the template class are in the same file, not separated into a .hpp and .cpp as usual. This is a shortcoming of template classes (that makes sense once you understand how they work). Edit: How are you supposed to use template classes in other files? I mean, if you have (in a normal project): someclass.hpp someclass.cpp main.cpp and then you make someclass a template class, then you must merge all of the implementation of someclass into its header file, so it would now be: someclass.hpp main.cpp In both situations you can still have a #include "someclass.hpp" in the main.cpp file and get access to that class both times. For more information look at the bottom paragraph here (cppreference).
1,591,568
1,591,638
Signal and Slot vs Multithreading in Boost Library
I have gone through similar questions on Stack Overflow but still can't get a good answer: how boost implements signals and slots, How signal and slots are implemented. I am quite puzzled about how this signal/slot mechanism is achieved. Q1: From the following code, sig is connected to two functions (Hello() and World()), and it seems that the functions are called in a serialized manner, which also implies that one function (Hello()) needs to complete before going into the other (World())? => single-threaded program. Q2: Is there any way to enable multi-threaded signals/slots? (=> World() will start instantly, without waiting for Hello() to complete.) Or, if it's not recommended, would you mind telling me why? Sample code from the Boost website:

struct Hello { void operator()() const { std::cout << "Hello";} };
struct World { void operator()() const { std::cout << ", World!" << std::endl;} };
boost::signal<void ()> sig;
sig.connect(Hello());
sig.connect(World());
sig();

Output: Hello, World!
Q1: The calls are serialized. What signals do internally is, greatly simplified: foreach connection: call handler Therefore you don't want to block in the handlers for long. If you need to do much work you can still invoke it from there, for example by creating a thread for it. Q2: Boost.Signals (version 1) is not even thread-safe; Signals2 is, but it still makes serialized calls. As signals are mostly used for event handling, it is common style not to do much actual work in the handlers. Thus there is no real benefit in calling them 'in parallel'; the benefits would not in general justify the overhead of the necessary thread invocations.
1,591,591
1,591,713
Can one leverage std::basic_string to implement a string having a length limitation?
I'm working with a low-level API that accepts a char* and numeric value to represent a string and its length, respectively. My code uses std::basic_string and calls into these methods with the appropriate translation. Unfortunately, many of these methods accept string lengths of varying size (i.e. max(unsigned char), max(short), etc...) and I'm stuck writing code to make sure that my string instances do not exceed the maximum length prescribed by the low-level API. By default, the maximum length of an std::basic_string instance is bound by the maximum value of size_t (either max(unsigned int) or max(__int64)). Is there a way to manipulate the traits and allocator implementations of a std::basic_string implementation so that I may specify my own type to use in place of size_t? By doing so, I am hoping to leverage any existing bounds checks within the std::basic_string implementation so I don't have to do so when performing the translation. My initial investigation suggests that this is not possible without writing my own string class, but I'm hoping that I overlooked something :)
you can pass a custom allocator to std::basic_string which has a max size of whatever you want. This should be sufficient. Perhaps something like this: template <class T> class my_allocator { public: typedef T value_type; typedef std::size_t size_type; typedef std::ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; pointer address(reference r) const { return &r; } const_pointer address(const_reference r) const { return &r; } my_allocator() throw() {} template <class U> my_allocator(const my_allocator<U>&) throw() {} ~my_allocator() throw() {} pointer allocate(size_type n, void * = 0) { // fail if we try to allocate too much if((n * sizeof(T))> max_size()) { throw std::bad_alloc(); } return static_cast<T *>(::operator new(n * sizeof(T))); } void deallocate(pointer p, size_type) { return ::operator delete(p); } void construct(pointer p, const T& val) { new(p) T(val); } void destroy(pointer p) { p->~T(); } // max out at about 64k size_type max_size() const throw() { return 0xffff; } template <class U> struct rebind { typedef my_allocator<U> other; }; template <class U> my_allocator& operator=(const my_allocator<U> &rhs) { (void)rhs; return *this; } }; Then you can probably do this: typedef std::basic_string<char, std::char_traits<char>, my_allocator<char> > limited_string; EDIT: I've just done a test to make sure this works as expected. The following code tests it. int main() { limited_string s; s = "AAAA"; s += s; s += s; s += s; s += s; s += s; s += s; s += s; // 512 chars... s += s; s += s; s += s; s += s; s += s; s += s; // 32768 chars... s += s; // this will throw std::bad_alloc std::cout << s.max_size() << std::endl; std::cout << s.size() << std::endl; } That last s += s will put it over the top and cause a std::bad_alloc exception, (since my limit is just short of 64k). 
Unfortunately gcc's std::basic_string::max_size() implementation does not base its result on the allocator you use, so it will still claim to be able to allocate more. (I'm not sure if this is a bug or not...) But this will definitely allow you to impose hard limits on the sizes of strings in a simple way. You could even make the max size a template parameter so you only have to write the code for the allocator once.
1,591,779
1,591,883
Why should I use Apache C++ Standard Library rather than any other STL implementation along with Boost?
What benefits do I get from Apache C++ standard library that I don't get from STL implementations that come with the compiler and from Boost libraries?
The Apache C++ Standard Library project is a complete implementation of the ISO/IEC 14882 C++ Standard Library. The most distinguishing characteristic of this implementation of the C++ Standard Library is its portability to a large number of C++ compilers, operating systems, and hardware architectures. Sounds like if you value extreme portability, this is for you. If you don't, most everyone defaults to their compiler's default implementation, so if you ever have trouble, the community will be bigger. If you don't run into implementation issues I don't see why it should make much of a difference. I typically get flamed for this opinion, but I like the idea of having a company I can pay to fix problems; my developers (and I) aren't smart enough (or don't have the time) to dive into internals and fix bugs.
1,591,873
1,592,035
How do I write a cpp __DIR__ macro, similar to __FILE__
The __FILE__ and __LINE__ macros are built into the C Pre-Processor, and are often used for printing debug output with file names and line numbers. I need something similar, but with just the name of the directory at the end of the path. For instance if my code is in: /home/davidc/some/path/to/some/code/foo/bar I need a macro that will just give me "bar", if the code is in /home/davidc/some/path/to/some/code/foo/bee then I need it to give me "bee". Any thoughts? (btw, this is for a C++ application). Update: to be clear, I'm after a macro that will give me a string containing the directory name at compile-time, I don't want to do any string-processing at runtime.
If you are using GNU make to build your project, then you might be able to do something like this: %.o: %.cpp $(CC) $(CFLAGS) -D__DIR__="$(strip $(lastword $(subst /, , $(dir $(abspath $<)))))" -c $< -o $@ That has to be about the most God-awful thing that I have thought about doing in a Makefile in quite a while. I don't think that you will find a quick or clean way to do this within the confines of the compiler so I'd look for clever ways to inject the information into the compilation process. Good luck.
1,591,924
1,591,989
using namespace issue
When I use the following #include <map> using namespace LCDControl; any reference to the std namespace ends up being associated with the LCDControl namespace. For instance: Generic.h:249: error: 'map' is not a member of 'LCDControl::std' How do I get around this? I didn't see anything specific to this in any documentation I looked over. Most of it said not to use: using namespace std;. Here's line 249: for(std::map<std::string,Widget *>::iterator w = widgets_.begin();
It looks like there's a std namespace within LCDControl that's hiding the global std namespace. Try using ::std::map instead of std::map. I would say that either there's a using namespace std somewhere within the LCDControl namespace, or possibly there's an #include of a STL header that defines std within the LCDControl namespace. e.g.: namespace LCDControl { #include <map> } Which would define all the symbols in <map> as part of LCDControl::std, which in turn would hide the global std, or at least any symbols defined in the inner namespace, I'm not sure. When I tried this under VS2008, I got an error: namespace testns { int x = 1; } namespace hider { namespace testns { int x = 2; } } int y = testns::x; using namespace hider; int z = testns::x; // <= error C2872: 'testns' : ambiguous symbol
1,592,039
1,593,484
recursively find subsets
Here is a recursive function that I'm trying to create that finds all the subsets of an STL set. The two params are an STL set to search for subsets, and a number i >= 0 which specifies how big the subsets should be. If the integer is bigger than the set, return an empty subset. I don't think I'm doing this correctly: sometimes it's right, sometimes it's not. The STL set gets passed in fine.

list<set<int> > findSub(set<int>& inset, int i)
{
    list<set<int> > the_list;
    list<set<int> >::iterator el = the_list.begin();
    if(inset.size()>i)
    {
        set<int> tmp_set;
        for(int j(0); j<=i;j++)
        {
            set<int>::iterator first = inset.begin();
            tmp_set.insert(*(first));
            the_list.push_back(tmp_set);
            inset.erase(first);
        }
        the_list.splice(el,findSub(inset,i));
    }
    return the_list;
}
From what I understand you are actually trying to generate all subsets of 'i' elements from a given set, right? Modifying the input set is going to get you into trouble; you'd be better off not modifying it. I think that the idea is simple enough, though I would say that you got it backwards. Since it looks like homework, I won't give you a C++ algorithm ;)

generate_subsets(set, sizeOfSubsets)
    # I assume sizeOfSubsets cannot be negative
    # use a type that enforces this for god's sake!
    if sizeOfSubsets is 0 then
        return {}
    else if sizeOfSubsets is 1 then
        result = []
        for each element in set do
            result <- result + {element}
        return result
    else
        result = []
        baseSubsets = generate_subsets(set, sizeOfSubsets - 1)
        for each subset in baseSubsets
            for each element in set
                if element not in subset then
                    result <- result + { subset + element }
        return result

The key points are: generate the subsets of lower rank first, as you'll have to iterate over them; and don't try to insert an element into a subset if it is already there, as that would give you a subset of incorrect size. Now, you'll have to understand this and transpose it to 'real' code.
1,592,277
1,593,006
Conversion from unsigned to signed type safety?
Is it safe to convert, say, from an unsigned char * to a signed char * (or just a char *)?
The access is well-defined, you are allowed to access an object through a pointer to signed or unsigned type corresponding to the dynamic type of the object (3.10/15). Additionally, signed char is guaranteed not to have any trap values and as such you can safely read through the signed char pointer no matter what the value of the original unsigned char object was. You can, of course, expect that the values you read through one pointer will be different from the values you read through the other one. Edit: regarding sellibitze's comment, this is what 3.9.1/1 says. A char, a signed char, and an unsigned char occupy the same amount of storage and have the same alignment requirements (3.9); that is, they have the same object representation. For character types, all bits of the object representation participate in the value representation. For unsigned character types, all possible bit patterns of the value representation represent numbers. So indeed it seems that signed char may have trap values. Nice catch!
1,592,360
1,592,411
C++ writing string to file = extra bytes
I am using C++ to loop through 256 counts and write the ASCII representation to a file. If I use the method of generating a 256-character string and then writing that string to the file, the file is 258 bytes.

string fileString = "";
//using the counter to attach the ASCII count to the string.
for(int i = 0; i <= 256; i++)
{
    fileString += i;
}
file << fileString;

If I use the method of writing to the file within the loop, the file is exactly 256 bytes.

//using the counter to attach the ASCII count to the string.
for(int i = 0; i <= 256; i++)
{
    file << (char)i;
}

What's going on here with the string? What extra information from the string is being written to the file?
Both of these create a 256 byte file:

#include <fstream>
#include <string>

int main(void)
{
    std::ofstream file("output.txt", std::ios_base::binary);
    std::string fileString;
    for(int i = 0; i < 256; i++)
    {
        fileString += static_cast<char>(i);
    }
    file << fileString;
}

And:

#include <fstream>
#include <string>

int main(void)
{
    std::ofstream file("output.txt", std::ios_base::binary);
    for (int i = 0; i < 256; ++i)
    {
        file << static_cast<char>(i);
    }
    file.close();
}

Note: before, you had an off-by-one error, as there is no 256th ASCII character, only 0-255. It will truncate to a char when printed. Also, prefer static_cast. If you do not open them as binary, it will append a newline to the end. My standards knowledge is weak in the field of output, but I do know text files are supposed to always have a newline at the end, and it is inserting this for you. I think this is implementation-defined, as so far all I can find in the standard is that "the destructor can perform additional implementation-defined operations." Opening as binary, of course, removes all of that and lets you control every detail of the file. Concerning Alterlife's concern, you can store 0 in a string, but C-style strings are terminated by 0. Hence:

#include <cstring>
#include <iostream>
#include <string>

int main(void)
{
    std::string result;
    result = "apple";
    result += static_cast<char>(0);
    result += "pear";
    std::cout << result.size() << " vs " << std::strlen(result.c_str()) << std::endl;
}

Will print two different lengths: one that is counted, one that is null-terminated.
1,592,476
1,592,854
Why isn't std::string::max_size() == std::string::allocator::max_size()
Recently I've noticed that the following statement is not true given std::string s: s.max_size() == s.get_allocator().max_size(); I find this interesting: by default std::string will use std::allocator<char>, which has a theoretical limit of size_type(-1) (yes, I know I'm assuming 2's complement, but that's unrelated to the actual question). I know that the practical limitations will be significantly less than this. On a typical 32-bit x86 system, the kernel will occupy 2GB (perhaps 1GB) of the address space, leaving a much smaller practical upper limit. Anyway, GNU libstdc++'s std::basic_string<>::max_size() appears to return the same value regardless of what the allocator it is using says (something like 1073741820). So the question remains: why doesn't std::basic_string<>::max_size() just return get_allocator().max_size()? It seems to me that this is the hypothetical upper limit. And if the allocation comes up short, it'll just throw a std::bad_alloc, so why not try? This is more of a curiosity than anything else; I was just wondering why the two are defined separately in at least this one implementation.
A bug related to your question was posted on Microsoft Connect. Microsoft has an interesting answer to it: We've resolved it as By Design according to our interpretation of the Standard, which doesn't clearly explain what the intended purpose for max_size() is. Allocator max_size() is described as "the largest value that can meaningfully be passed to X::allocate()" (C++03 20.1.5 [lib.allocator.requirements]/Table 32), but container max_size() is described as "size() of the largest possible container" (23.1 [lib.container.requirements]/Table 65). Nothing describes whether or how container max_size() should be derived from allocator max_size(). Our implementation for many years has derived container max_size() directly from allocator max_size() and then used this value for overflow checks and so forth. Other interpretations of the Standard, such as yours, are possible, but aren't unambiguously correct to us. The Standard's wording could certainly benefit from clarification here. Unless and until that happens, we've decided to leave our current implementation unchanged for two reasons: (1) other customers may be depending on our current behavior, and (2) max_size() fundamentally doesn't buy anything. At most, things that consume allocators (like containers) could use allocator max_size() to predict when allocate() will fail - but simply calling allocate() is a better test, since the allocator will then decide to give out memory or not. Things that consume containers could use container max_size() as a guarantee of how large size() could be, but a simpler guarantee is size_type's range. Additionally, here you can find Core Issue #197. The committee considered a request to improve the wording of the Standard, but it was declined. So the answer to your question "Why..?" is that the Standard doesn't clearly explain what the intended purpose of max_size() is.
1,592,535
1,592,545
Operator new and bad_alloc on linux
On Linux, malloc doesn't necessarily return a null pointer if you're out of memory. You might get back a pointer and then have the OOM killer start eating processes if you're really out of memory. Is the same true for C++'s operator new, or will you get the bad_alloc exception?
The same is true for operator new, alas :^(
1,592,632
1,592,652
De-referencing null in VS with Windows 7
I have noticed that when I was running Windows XP, if my code dereferenced null I would get a crash in debug and I could then easily identify where the bug was coming from. It seems that in Windows 7 (I'm running 64-bit), instead of crashing or creating any sort of notification, the code will simply break its current iteration and start a new one. This has been very frustrating for me, as when this happens it takes much longer to debug than it used to. Has anyone else with Windows 7 noticed this?
The access violation exception is probably being caught by something higher up in the program. If you're using MSVC, under your "debug" menu, look at "Exceptions..." and make sure that "Access violation" is checked (to tell the debugger to give you a first-chance break when the exception is thrown, before it gets percolated up to the usual exception handlers).
1,592,690
1,592,721
Setting a timeout when using connect() in C++
I'm using sockets in c++ on linux to connect to a server using connect(), but it doesn't seem to have a timeout value. What would be the easiest/best way to give it a timeout of a few seconds. I'll post the code I have up to, and including, the blocking call to connect(): using namespace std; int main( int argc, char* argv[] ) { // Interpret the command line arguments string uri_string = ""; if ( argc != 2 ) { cerr << "Usage: " << argv[0]; cerr << " <URI>" << endl; return 1; } else { uri_string = argv[1]; } // Create URI object and have it parse the uri_string URI *uri = URI::Parse(uri_string); if ( uri == NULL ) { cerr << "Error: Cannot parse URI." << endl; return 2; } // Check the port number specified, if none use port 80 unsigned port = 80; if ( uri->Is_port_defined() ) { port = uri->Get_port(); } // Create TCP socket and connect to server int tcp_sock = socket( AF_INET, SOCK_STREAM, 0 ); if ( tcp_sock < 0 ) { cerr << "Unable to create TCP socket." << endl; return 3; } sockaddr_in server; socklen_t slen = sizeof(server); server.sin_family = AF_INET; server.sin_port = htons( port ); cout << "Resolving " << uri->Get_host() << "... "; hostent *hostp = gethostbyname( uri->Get_host().c_str() ); memcpy( &server.sin_addr, hostp->h_addr, hostp->h_length ); struct in_addr **pptr; pptr = (struct in_addr **)hostp->h_addr_list; string ip_addr = ""; while( *pptr != NULL ) { ip_addr += inet_ntoa(**(pptr++)); } cout << ip_addr << endl; cout << "Connecting to " << uri->Get_host() << "|" << ip_addr << "|:"; cout << port << "... "; if ( connect( tcp_sock, (sockaddr*)&server, slen ) < 0 ) { cerr << "Unable to connect to server via TCP." << endl; close( tcp_sock ); return 4; } cout << "connected." << endl; // rest of code }
Make the socket a non-blocking socket, and then use select() or poll() with a timeout value to check for writability. If the select() returns with a timeout you did not connect in time, and you can close the socket and deal with the connection failure. If it returns with a completion, everything is fine and you can proceed.
1,592,901
1,600,458
n-values UUID generator, reusable IDs
I need a simple UUID generator. The ID is required to be unique for this single instance. Another requirement is that it has n hashes coexisting at a time, each being releasable. I don't know whether this fits the UUID concept or not. I already thought about a stack with n values using pop and push, but this practice seems bad memory-wise. Using random-based UUIDs (excluding cryptographic ones) isn't safe enough, as by bad luck there could be 2 matching IDs, which cannot be accepted (though the chance is negligible), as this is supposed to be used in a production environment.
Universally Unique Identifiers (UUID) / Globally Unique Identifier (GUID) The problem of generating unique IDs can be broken down as uniqueness over space and uniqueness over time which, when combined, aim to produce a globally unique sequence. UUIDs are officially and specifically defined as part of the ISO-11578 standard other specifications also exist, like RFC 4122, ITU-T Rec. X.667. OSSP uuid ( http://www.ossp.org/pkg/lib/uuid/ ) is an API for ISO C, ISO C++, Perl and PHP and a corresponding CLI for the generation of DCE 1.1, ISO/IEC 11578:1996, and RFC4122 compliant Universally Unique Identifiers (UUIDs). It supports DCE 1.1 variant UUIDs of version 1 (time and node based), version 3 (name based, MD5), version 4 (random number based), and version 5 (name based, SHA-1). UUIDs are 128-bit numbers that are intended to have a high likelihood of uniqueness over space and time and are computationally difficult to guess. They are globally unique identifiers that can be locally generated without contacting a global registration authority. It is Open Sourced under the MIT/X Consortium License. I have included some further explanations in the http://en.wikibooks.org/wiki/The_World_of_Peer-to-Peer_%28P2P%29/Building_a_P2P_System#Unique_ID On windows check the RPC library (see #include "Rpcdce.h" ) it has functions to generate UUIDs.
1,592,930
1,594,593
Send command to service from C++
How can I send a command to a Windows service from C++? The equivalent .NET code is: ServiceController sc = new ServiceController("MyService"); sc.ExecuteCommand(255);
From native C++, you will need to: Open a handle to the service control manager, Use the service control manager to obtain a service handle for the service you want to control, Send a control code or codes to the service, and Close the handles opened in steps 1 and 2. For example, this code restarts the time synchronization service. First, I create a wrapper class for the service handles, to close them automatically when leaving the block. class CSC_HANDLE { public: CSC_HANDLE(SC_HANDLE h) : m_h(h) { } ~CSC_HANDLE() { ::CloseServiceHandle(m_h); } operator SC_HANDLE () { return m_h; } private: SC_HANDLE m_h; }; Then, I open the service control manager (using OpenSCManager()) and the service I want to control. Note that the dwDesiredAccess parameter to OpenService() must include permissions for each control I want to send, or the relevant control functions will fail. BOOL RestartTimeService() { CSC_HANDLE hSCM(::OpenSCManager(NULL, SERVICES_ACTIVE_DATABASE, GENERIC_READ)); if (NULL == hSCM) return FALSE; CSC_HANDLE hW32Time(::OpenService(hSCM, L"W32Time", SERVICE_START | SERVICE_STOP | SERVICE_QUERY_STATUS)); if (NULL == hW32Time) return FALSE; To stop the service, I use ControlService() to send the SERVICE_CONTROL_STOP code, and then check the return value to make sure the command succeeded. If any error other than ERROR_SERVICE_NOT_ACTIVE is reported, I assume that starting the service is not going to succeed. 
SERVICE_STATUS ss = { 0 }; ::SetLastError(0); BOOL success = ::ControlService(hW32Time, SERVICE_CONTROL_STOP, &ss); if (!success) { DWORD le = ::GetLastError(); switch (le) { case ERROR_ACCESS_DENIED: case ERROR_DEPENDENT_SERVICES_RUNNING: case ERROR_INVALID_HANDLE: case ERROR_INVALID_PARAMETER: case ERROR_INVALID_SERVICE_CONTROL: case ERROR_SERVICE_CANNOT_ACCEPT_CTRL: case ERROR_SERVICE_REQUEST_TIMEOUT: case ERROR_SHUTDOWN_IN_PROGRESS: return FALSE; case ERROR_SERVICE_NOT_ACTIVE: default: break; } } After instructing the service to stop, I wait for the service manager to report that the service is in fact stopped. This code has two potential bugs, which you may wish to correct for production code: Sleep(1000) will suspend the message loop on this thread, so you should use another method to delay execution if this function will run on a UI thread. You can construct a suitable sleep-with-message-loop using MsgWaitForMultipleObjectsEx(). The DWORD returned from GetTickCount() will wrap around to zero eventually; if it wraps around while this function is waiting, the wait may give up sooner than I intended. DWORD waitstart(::GetTickCount()); while (true) { ZeroMemory(&ss, sizeof(ss)); ::QueryServiceStatus(hW32Time, &ss); if (SERVICE_STOPPED == ss.dwCurrentState) break; ::Sleep(1000); DWORD tick(::GetTickCount()); if ((tick < waitstart) || (tick > (waitstart + 30000))) return FALSE; } Finally, knowing that the service is in a stopped state, I call StartService() run it again. success = ::StartService(hW32Time, 0, NULL); if (!success) return FALSE; return TRUE; }
1,593,039
1,593,118
load libavcodec in a Qt project
I want to include libavcodec in my Qt project. Should I do that with #include <ffmpeg/libavcodec.h> or with something more Qt-like, for example QLibrary mylib("libavcodec"); And also a question to check whether I really got it: to use libavcodec, do I need to import the source files or the .dll?
You should use libavcodec like any other library. That is, include its headers and link against its import library. If you are using qmake, you will need to modify the INCLUDEPATH and LIBS variables; see the documentation for some examples. The QLibrary option is only useful for DLLs that you want to load at run-time (e.g. plugins).
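For example, assuming qmake is used, the .pro file might gain lines like the following (the paths are hypothetical -- point them at wherever your FFmpeg headers and import library actually live):

```
# Hypothetical install location of the FFmpeg development files
INCLUDEPATH += C:/ffmpeg/include
LIBS += -LC:/ffmpeg/lib -lavcodec -lavutil
```

With that in place you include the headers in your sources and the linker resolves the symbols against the import library at build time; only the .dll is needed at run-time.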
1,593,231
1,593,297
Possible to use precompiled headers with MIDL generated files?
We have a project which uses the MIDL tool to create specific header/IID and proxy files. Those files are compiled and linked with the rest of the project using a post-build step that calls nmake. Is it possible to use precompiled headers with those IDL-generated files? How can I inject #include "stdafx.h" and remove other included headers?
Use the /FI option (Force Include): "This option has the same effect as specifying the file with double quotation marks in an #include directive on the first line of every source file specified on the command line, in the CL environment variable, or in a command file." It won't remove the other headers, but this is not necessary for the Precompiled Header to be used... All the headers that you want to precompile should be included by stdafx.h. Then, provided the files have inclusion guards, it won't be a problem when they are included again in the sources. Example Generated a.cpp file: #include <a.h> #include <b.h> //rest of the code Suppose you want to pre-compile a.h and b.h. Then you create the file stdafx.h: #include <a.h> #include <b.h> And then you use the /FI option to have this stdafx.h included as the first file into a.cpp. If the files a.h and b.h have include guards, leaving them in a.cpp is not an issue...
1,593,233
1,593,241
How can I call a masked function in C++?
Let's say I have this C++ code: void exampleFunction () { // #1 cout << "The function I want to call." << endl; } class ExampleParent { // I have no control over this class public: void exampleFunction () { // #2 cout << "The function I do NOT want to call." << endl; } // other stuff }; class ExampleChild : public ExampleParent { public: void myFunction () { exampleFunction(); // how to get #1? } }; I have to inherit from the Parent class in order to customize some functionality in a framework. However, the Parent class is masking the global exampleFunction that I want to call. Is there any way I can call it from myFunction? (I actually have this problem with calling the time function in the <ctime> library if that makes any difference)
Do the following: ::exampleFunction() :: will access the global namespace. If you #include <ctime>, you should be able to access it in the namespace std: std::time(0); To avoid these problems, place everything in namespaces, and avoid global using namespace directives.
1,593,349
1,593,592
VS2008 win32 project defaults - remove default precompiled headers
I have been through every option to try to find a way to get the IDE to let me create a new win32 project without precompiled headers. I have read every thread on this forum with the words "precompiled headers" in it, and the closest I got was: Precompiled Headers Using 2008 Pro (not Express, although the behaviour seems to be similar) I go to: File -> New -> Project This opens the New Project dialog, in which I select Visual C++ Win32 Project, enter a name and hit OK. Then I get the "Win32 Application Wizard". With the Application Type set to "Windows Application", the application settings pane will not allow me to uncheck the precompiled headers; the check box is greyed out. If I choose "Console Application" I can uncheck it, but I am creating a GUI app. When I click Finish I get 6 yards of code in xxx.cpp, four header files and the obligatory stdafx.cpp. Perhaps I could remove and delete all this stuff and then go into the properties and turn off PCH, but that's a hassle for the many small project examples I want to write. I just want an empty project that will compile to a win32 app, so how do I change the PCH default to NONE?
You could make your own template to do this, or you could edit the default one. The relevant wizard can be found here: C:\Program Files\Microsoft Visual Studio 9.0\VC\VCWizards\AppWiz\Generic\Application Obviously if you're gonna edit the default template, backup the folder first. I'll show you how to get started on editing it. First of all you need to tell the wizard script that you don't want precompiled headers. Edit this file in your favourite text editor: \scripts\1033\default.js Find this line: var Pch = wizard.FindSymbol("PRE_COMPILED_HEADER"); and comment out some of the lines below it like this: // if ((strAppType == "LIB" || ((strAppType == "CONSOLE") && // !wizard.FindSymbol("SUPPORT_MFC") && !wizard.FindSymbol("SUPPORT_ATL"))) && !Pch) { AddFilesToProjectWithInfFile(selProj, strProjectName); SetNoPchSettings(selProj); } // else // { // AddFilesToProjectWithInfFile(selProj, strProjectName); // SetCommonPchSettings(selProj); // } Now open this file: \templates\1033\Templates.inf and find the first occurrence of [!else] and delete these 3 lines below it: stdafx.h targetver.h stdafx.cpp This will give you a project without stdafx.cpp/.h or targetver.h, and the CPP file will not try to use a PCH. However it won't build because we haven't added any #includes to the appropriate header files. I'll leave that for you to figure out :) (you can edit the files that get generated automatically by modifying the files in \templates\1033)
1,593,580
1,593,588
C++ how to get the address stored in a void pointer?
How can I get the memory address of the value a pointer points to? In my case it is a void pointer. Just assigning it to a uint gives me this error: Error 1 error C2440: 'return' : cannot convert from 'void *' to 'UInt32' Thanks!
std::size_t address = reinterpret_cast<std::size_t>(voidptr); // sizeof(size_t) must be greater or equal to sizeof(void*) // for the above line to work correctly. @Paul Hsieh I think it is sufficient to convert void* to size_t in this specific question for three reasons: The questioner didn't specify whether he wants a portable solution or not. He said that it worked for him. I don't know exactly what that means, but it is clear to me he is working on IA-32 on Windows or another system under protected mode. That means converting a pointer to an integer is a defined operation on that system even if it is not defined by standard C++. Second, I proposed first converting the pointer to int, which is clearly wrong as litb and jalf showed me. I corrected the mistake I'd made and replaced int with size_t. Finally, I tried hard to find something relevant to what you proposed as a solution in the standards. Unfortunately, I couldn't find anything relevant. I have this reference: ANSI ISO IEC 14882 2003. I think sellibitze pointed out that it will be part of the coming standards. I really don't know about C, and obviously C99 introduced this perfect solution. I would like someone to show me a portable solution in C++. Please, don't hesitate to correct my mistakes; I am still a student at uni :) Thanks,
1,593,737
1,593,763
ImageList and BitBlt-ing
I am having trouble in CE BitBlt-ing from a previously created compatible HDC to the device's HDC. The following code works: hdc = pdis->hDC; FillRect(hdc, &(pdis->rcItem), (HBRUSH)GetStockObject(BLACK_BRUSH)); ImageList_Draw(himl, imageIndex, hdc, 15 , 30, ILD_NORMAL); However the following just draws the black rectangle and does not put the image on top: hdc = pdis->hDC; hdcmem = CreateCompatibleDC(hdc); FillRect(hdc, &(pdis->rcItem), (HBRUSH)GetStockObject(BLACK_BRUSH)); ImageList_Draw(himl, imageIndex, hdcmem, 0 , 0, ILD_NORMAL); BitBlt(hdc, 15, 30, 130, 100, hdcmem, 0, 0, SRCCOPY); Any ideas most welcome. Best regards E
CreateCompatibleDC doesn't do what you think it does. From the linked page: Before an application can use a memory device context for drawing operations, it must select a bitmap of the correct width and height into the device context. This may be done by using CreateCompatibleBitmap to specify the height, width, and color organization required in the function call. Device contexts are an abstraction. There must be a storage behind them -- a screen or, in your case, a bitmap.
1,594,187
1,594,213
How does function-style cast syntax work?
I guess I am a bit puzzled by the syntax. What does the following mean? typedef char *PChar; hopeItWorks = PChar( 0x00ff0000 );
It is equivalent to (PChar) 0x00ff0000 or (char *) 0x00ff0000. Syntactically think of it as invoking a one-argument constructor.
1,594,582
1,594,619
C format specifier question
While I was working I came across some code which was written by somebody else. I see a statement such as: sprintf(o_params->o_file_name, "%s_%s_%04.4d_%s_%s.ASC", "OUTD", "RM", sequence_no, DateStamp_buf1, TimeStamp_buf1 ); In the above statement, I see %04.4d. Is this a correct format specifier? The variable sequence_no is a static int and it doesn't have a decimal.
From the FreeBSD manpage man 3 printf An optional precision, in the form of a period . followed by an optional digit string. If the digit string is omitted, the precision is taken as zero. This gives the minimum number of digits to appear for d, i, o, u, x, and X conversions, the number of digits to appear after the decimal-point for a, A, e, E, f, and F conversions, the maximum number of significant digits for g and G conversions, or the maximum number of characters to be printed from a string for s conversions. So in this case, %04.4d, the .4 specifies that all four digits of the number should be printed. Of course, the 04 part just pads the number with leading zeros if it is less than 1000. However, in this case, as the above manual page states, `0' (zero) Zero padding. For all conversions except n, the converted value is padded on the left with zeros rather than blanks. If a precision is given with a numeric conversion (d, i, o, u, i, x, and X), the 0 flag is ignored. Since surely all four digits would be printed anyway, my guess would be that it was just a leftover or typo or something. This syntax produces compiler warnings with gcc -Wall (see Sinan Unur's example) but it does not seem to be an actual error.
1,594,607
1,594,646
index operator constness
Why do we need two? Under which circumstance each of the following operator[]s are called? class X { public: //... int &operator [](int index); const int &operator [](int index) const; };
void foo( X& x ) { x[0]; // non-const operator[] is called } void bar( const X& x ) { x[0]; // const operator[] is called }
1,594,631
1,594,669
std::map difference between index and insert calls
What is the difference between the index overloaded operator and the insert method call for std::map? ie: some_map["x"] = 500; vs. some_map.insert(pair<std::string, int>("x", 500));
I believe insert() will not overwrite an existing value, and the result of the operation can be checked by testing the bool value in the iterator/pair value returned The assignment to the subscript operator [] just overwrites whatever's there (inserting an entry if there isn't one there already) Either of the insert and [] operators can cause issues if you're not expecting that behaviour and don't accommodate for it. Eg with insert: std::map< int, std::string* > intMap; std::string* s1 = new std::string; std::string* s2 = new std::string; intMap.insert( std::make_pair( 100, s1 ) ); // inserted intMap.insert( std::make_pair( 100, s2 ) ); // fails, s2 not in map, could leak if not tidied up and with [] operator: std::map< int, std::string* > intMap; std::string* s1 = new std::string; std::string* s2 = new std::string; intMap[ 100 ] = s1; // inserted intMap[ 100 ] = s2; // inserted, s1 now dropped from map, could leak if not tidied up I think those are correct, but haven't compiled them, so may have syntax errors
1,594,746
1,594,788
Win32 equivalent of getuid()
I'm in the process of porting a C++ library from Linux to Windows, and am having problems with getuid(), which is not supported in Windows. Any ideas what I can use in its place?
You can retrieve the name of the user associated with the current thread with GetUserName: // ANSI version; requires <windows.h> and <lmcons.h> (which defines UNLEN) string GetWindowsUserNameA() { char buffer[UNLEN + 1] = {0}; DWORD buffer_len = UNLEN + 1; if (!::GetUserNameA(buffer, & buffer_len)) { // error handling } return string(buffer); }
1,594,803
1,595,552
Is std::string thread-safe with gcc 4.3?
I'm developing a multithreaded program running on Linux (compiled with G++ 4.3) and if you search around for a bit you find a lot of scary stories about std::string not being thread-safe with GCC. This is supposedly due to the fact that internally it uses copy-on-write, which wreaks havoc with tools like Helgrind. I've made a small program that copies one string to another string, and if you inspect both strings they both share the same internal _M_p pointer. When one string is modified the pointer changes, so the copy-on-write stuff is working fine. What I'm worried about though is what happens if I share a string between two threads (for instance passing it as an object in a threadsafe data queue between two threads). I've already tried compiling with the '-pthread' option but that does not seem to make much difference. So my question: is there any way to force std::string to be threadsafe? I would not mind if the copy-on-write behaviour was disabled to achieve this. How have other people solved this? Or am I being paranoid? I can't seem to find a definitive answer so I hope you guys can help me.. Edit: Wow, that's a whole lot of answers in such a short time. Thank you! I will definitely use Jack's solution when I want to disable COW. But now the main question becomes: do I really have to disable COW? Or is the 'bookkeeping' done for COW thread-safe? I'm currently browsing the libstdc++ sources but that's going to take quite some time to figure out... Edit 2: OK, I browsed the libstdc++ source code and I found something like this in libstdc++-v3/include/bits/basic_string.h: _CharT* _M_refcopy() throw() { #ifndef _GLIBCXX_FULLY_DYNAMIC_STRING if (__builtin_expect(this != &_S_empty_rep(), false)) #endif __gnu_cxx::__atomic_add_dispatch(&this->_M_refcount, 1); return _M_refdata(); } // XXX MT So there is definitely something there about atomic changes to the reference counter...
Conclusion: I'm marking sellibitze's comment as the answer here because I think we've reached the conclusion that this area is still unresolved for now. To circumvent the COW behaviour I'd suggest Jack Lloyd's answer. Thank you everybody for an interesting discussion!
Threads are not yet part of the standard. But I don't think that any vendor can get away without making std::string thread-safe nowadays. Note: there are different definitions of "thread-safe" and mine might differ from yours. Of course, it makes little sense to protect a container like std::vector for concurrent access by default even when you don't need it. That would go against the "don't pay for things you don't use" spirit of C++. The user should always be responsible for synchronization if he/she wants to share objects among different threads. The issue here is whether a library component uses and shares some hidden data structures that might lead to data races even if "functions are applied on different objects" from a user's perspective. The C++0x draft (N2960) contains the section "data race avoidance" which basically says that library components may access shared data that is hidden from the user if and only if they actively avoid possible data races. It sounds like a copy-on-write implementation of std::basic_string must be as safe w.r.t. multi-threading as another implementation where internal data is never shared among different string instances. I'm not 100% sure about whether libstdc++ takes care of it already. I think it does. To be sure, check out the documentation
1,594,809
1,594,955
Convert float array image to a format usable for opencv
I wonder if there is an easy way to convert my float array image to an IplImage, which can be handled by OpenCV. Of course I could create an empty IplImage with the same size and just copy every pixel from my float array image to the empty IplImage, but is there a more elegant solution? Maybe a faster, less memory-consuming method, since the source images are pretty large and the copy process would take a while. Best regards, Zhengtonic
You can do something like this (assuming 32 bit floats): float* my_float_image_data; CvSize size; size.height = height ; size.width = width; IplImage* ipl_image_p = cvCreateImageHeader(size, IPL_DEPTH_32F, 1); ipl_image_p->imageData = my_float_image_data; ipl_image_p->imageDataOrigin = ipl_image_p->imageData;
1,594,841
1,594,860
Which language to use for implementing few Linux shell commands (homework) - plain C or C++?
I need to implement a few commands of the Linux shell for my homework - 5 or 6 of them, including ls. I don't know much about which parameters to implement for each of the commands... I planned to use C++, but when I asked my colleague for advice on which language to choose - plain C or C++ - he said that an interpreter was not a program in the traditional meaning, it's a functional tool, and it absolutely must be implemented in C. My arguments for C++ are great code reuse, better separation of concerns, and in fact I do not know C very well - actually, I learned C++ and enjoyed it. So, what is your point of view on this? Thanks in advance. It is an individual assignment - I mean for every person in my group, so no collaboration is supposed to happen. I have experience with low-level programming, pointer arithmetic, void*, etc.
First: Use what you know. There is no reason to enter uncharted waters if you can get there with a familiar route. C++ is a very viable option in your circumstance, anyways. So, you aren't making a mistake to just use it. Second: Your friend is wrong. (I would use harsher words, but I'll be nice.) C++ and C are both compiled languages. A C++ program absolutely is a program in the traditional sense. Both C and C++ are statically typed as well. PS: You can still use a C++ compiler to build C programs. You can do everything available in C with C++.
1,594,949
1,594,965
What is a good book/guide for socket programming in C?
Could anybody please tell me which is the best guide/book/material for socket programming in C? I am reading Beej's guide to network programming, but it just gives an overview. Can you suggest any other books or guides?
UNIX Network Programming, Volume 1, Second Edition: Networking APIs: Sockets and XTI. Then go from there.
1,595,063
1,595,127
How could a pointer to a structure be an array?
This is really a quick noob question. Imagine you have a struct called "No" and the following piece of code: No *v_nos; // What does this mean? In the place I took this from, they were calling "v_nos" an array. Isn't it simply a pointer to a struct "No"? Thanks.
In most expressions, an array name decays into a pointer to its first element, so arrays and pointers can often be used with the same syntax. The difference between No *v_nos; and No v_nos[3]; is that the latter sets aside memory for 3 elements of the array, while the pointer would need to have memory allocated using malloc (or new). You can still use pointer syntax with the second one, though: for example, *v_nos gives you the first element.
1,595,085
1,595,159
How to implement automatically select item in a html and click submit?
I have a website, and a username and password, and usually I will log in to the website with the username and password, select some items in check boxes and submit them to execute actions. But right now I need to write an application to select the checkboxes by some keywords and submit them automatically. Does anyone have a good idea? I used IBM AppScan before; it can automatically log in to my website. How does it implement that?
You may be interested in HttpUnit.
1,595,270
13,337,612
how does the stl's multimap insert respect orderings?
I have some data which comes with an integer index. I am continuously generating new data which needs to be added to the collection of data I have, sorted by that index; at the same time I want to be able to easily go to the start of the data and iterate through it. This sounds like std::multimap is just what I need. However, I also need data with the same index to be kept in the order in which it was inserted, in this case meaning that when I iterate through the data I get to the earlier data before the later data. Does multimap do this? I haven't found any guarantees that this is the case. In the SGI manual, I didn't see any mention of it. I tried it on the gcc 4.3.4 implementation and it seemed to be true for some limited test cases, but of course I was wondering whether the standard demands this and whether I can rely on this fact. Edit: To be clearer in response to some of the answers, I wanted the data sorted first by (non-unique) index and second by insertion time. I had hoped that maybe the second part came for free with multimap, but it seems like it doesn't.
It seems the new standard (C++11) changed this: The order of the key-value pairs whose keys compare equivalent is the order of insertion and does not change.[cppreference] I'm hesitating to use it though, as this seems like a detail easily overlooked when modifying the standard library to be C++11 compliant and it's the sort of detail that will silently cause errors if your compiler's library failed to implement properly.
1,595,355
1,595,414
syncing iostream with stdio
I am trying to add iostream to some legacy code and thus want to sync those two libraries. According to this article, I should use std::ios_base::sync_with_stdio. Now, I wonder how it is used in practice (examples please), and which side effects I should be aware of. Thanks
By default the streams are synchronized, it's guaranteed to work by the standard, you don't have to do anything. sync_with_stdio is only here to disable synchronisation if you want to. From the article you mentioned : For the predefined streams, it's safe to mix stdio and iostreams. For example, you can safely use stdin and cin in the same program; the C++ Standard guarantees that it will work the way you would naively expect it to. The only drawback is a potential performance hit (I guess that's why it can be disabled).
1,595,439
1,595,462
User Interface clarifications
As you know, many programs are written in C++. Some of these have fancy GUIs with a non-classical-Windows style (think of Photoshop, 3ds Max, Maya, etc.). Now my question is: how are they done? In pure Win32 API? MFC? DirectX/OpenGL? Or something else? I can reach similar results with C#/WPF, but how can I do it in C++?
Read Programming Windows by Petzold. In my experience, it seems to be the most practical way to learn Win32 programming. If you care about cool effects, Petzold can definitely help you. After you're somewhat familiar with Win32, you can skip to the chapter in Petzold on bitblt'ing and doing animation. I had to do some fancy animated graphs in a win32 app once, and I pretty much used Petzold (and some MSDN) as my primary reference.
1,595,859
1,596,017
Why is non-type template parameter expression handling inconsistent across compilers?
Here is something I observed across various compilers. It seems there are compiler bugs. template <int I> struct X { }; int main(void) { X<(16 > 1)> a; // Works on vc9, works on g++ 4.1.2, works on Comeau 4.3.10.1 X<(int(16) > 1)> b; // Works on vc9, works on g++ 4.1.2, works on Comeau 4.3.10.1 X<(16 >> 1)> c; // Works on vc9, works on g++ 4.1.2, works on Comeau 4.3.10.1 X<(int(16) >> 1)> d; // Fails on vc9, works on g++ 4.1.2, works on Comeau 4.3.10.1 X<16 > 1> e; // Fails on vc9, works on g++ 4.1.2, fails on Comeau 4.3.10.1 X<int(16) > 1> f; // Fails on vc9, fails on g++ 4.1.2, fails on Comeau 4.3.10.1 X<16 >> 1> g; // Fails on vc9, works on g++ 4.1.2, fails on Comeau 4.3.10.1 X<int(16) >> 1> h; // Fails on vc9, works on g++ 4.1.2, fails on Comeau 4.3.10.1 } Why is that inconsistency? What is allowed/disallowed by the standard? Such behavior is also responsible for syntax error while using BOOST_AUTO on vc9. It appears to me that Comeau is doing the right job by rejecting all the expressions without parenthesis.
The rules are as follows for C++03: After name lookup (3.4) finds that a name is a template-name, if this name is followed by a <, the < is always taken as the beginning of a template-argument-list and never as a name followed by the less-than operator. When parsing a template-id, the first non-nested > [foot-note: A > that encloses the type-id of a dynamic_cast, static_cast, reinterpret_cast or const_cast, or which encloses the template-arguments of a subsequent template-id, is considered nested for the purpose of this description. ] is taken as the end of the template-argument-list rather than a greater-than operator. So the result is: X<(16 > 1)> a; // works X<(int(16) > 1)> b; // works X<(16 >> 1)> c; // works X<(int(16) >> 1)> d; // works X<16 > 1> e; // fails X<int(16) > 1> f; // fails X<16 >> 1> g; // works (">>" is not a ">" token) X<int(16) >> 1> h; // works (">>" is not a ">" token). However, in C++0x the following are the rules After name lookup (3.4) finds that a name is a template-name, or that an operator-function-id refers to a set of overloaded functions any member of which is a function template, if this is followed by a <, the < is always taken as the delimiter of a template-argument-list and never as the less-than operator. When parsing a template-argument-list, the first non-nested > [foot-note: A > that encloses the type-id of a dynamic_cast, static_cast, reinterpret_cast or const_cast, or which encloses the template-arguments of a subsequent template-id, is considered nested for the purpose of this description.] is taken as the ending delimiter rather than a greater-than operator. Similarly, the first non-nested >> is treated as two consecutive but distinct > tokens, the first of which is taken as the end of the template-argument-list and completes the template-id. 
Result will be X<(16 > 1)> a; // works X<(int(16) > 1)> b; // works X<(16 >> 1)> c; // works X<(int(16) >> 1)> d; // works X<16 > 1> e; // fails X<int(16) > 1> f; // fails X<16 >> 1> g; // fails (">>" translated to "> >") X<int(16) >> 1> h; // fails (">>" translated to "> >") Be sure to disable C++0x mode in comeau when testing
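Whichever set of rules your compiler follows, the fully parenthesized forms are accepted everywhere, so parenthesizing is the portable spelling. A minimal check:

```cpp
// Parenthesized non-type template arguments parse the same way under both
// the C++03 and C++0x rules, on all of the compilers listed above.
template <int I>
struct X {
    enum { value = I };
};

int shifted()  { return X<(16 >> 1)>::value; } // 16 >> 1 == 8
int compared() { return X<(16 > 1)>::value;  } // (16 > 1) == 1 (true)
```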
1,596,053
1,596,120
Inlining std::inner_product
Allegedly std::inner_product() does NOT get inlined by gcc compilers older than 4.1, per the following bug. Hence I would like to implement my own version of inner_product. Are there existing implementations available? Thanks
You just need to look in your C++ header files, find the definition, and redefine it with the "inline" keyword (possibly in your namespace). For example, looking at my headers: template <class T1, class T2, class T> inline T inner_product(T1 first1, T1 last1, T2 first2, T init) { for (; first1 != last1; ++first1, ++first2) init = init + *first1 * *first2; return init; }
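For example, with that definition dropped into a namespace of your own (the body below is the one from the answer, reproduced verbatim), a dot product looks like this:

```cpp
namespace my {
    // Same signature and body as the standard header's version,
    // but guaranteed to carry the "inline" keyword.
    template <class T1, class T2, class T>
    inline T inner_product(T1 first1, T1 last1, T2 first2, T init) {
        for (; first1 != last1; ++first1, ++first2)
            init = init + *first1 * *first2;
        return init;
    }
}

int dot_example() {
    int a[] = { 1, 2, 3 };
    int b[] = { 4, 5, 6 };
    return my::inner_product(a, a + 3, b, 0); // 1*4 + 2*5 + 3*6 = 32
}
```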
1,596,104
1,596,666
How to change the color of a textual cue when sending an EM_SETCUEBANNER Message?
When you send an EM_SETCUEBANNER message, you get a grey textual cue in your edit control. How do you change the color of the textual cue in Win32/C ?
Edit controls do not support custom cue banner colors. You will have to subclass the Edit control and custom-draw it manually to get that kind of effect.
1,596,117
1,603,591
"Don't show this again" option in message boxes
In C++/MFC, what's the simplest way to show a message box with a "Don't show this again" option? In my case, I just want a simple MB_OK message box (one OK button).
Or just use the SHMessageBoxCheck() function.
1,596,239
1,596,326
Simple Flex/Bison C++
I already looked for my answer but I didn't get any quick response for a simple example. I want to compile a flex/bison scanner+parser using g++ just because I want to use C++ classes to create AST and similar things. Searching over internet I've found some exploits, all saying that the only needed thing is to declare some function prototypes using extern "C" in lex file. So my shady.y file is %{ #include <stdio.h> #include "opcodes.h" #include "utils.h" void yyerror(const char *s) { fprintf(stderr, "error: %s\n", s); } int counter = 0; extern "C" { int yyparse(void); int yylex(void); int yywrap() { return 1; } } %} %token INTEGER FLOAT %token T_SEMICOL T_COMMA T_LPAR T_RPAR T_GRID T_LSPAR T_RSPAR %token EOL %token T_MOV T_NOP %% ... GRAMMAR OMITTED ... %% main(int argc, char **argv) { yyparse(); } while shady.l file is %{ #include "shady.tab.h" %} %% "MOV"|"mov" { return T_MOV; } "NOP"|"nop" { return T_NOP; } ";" { return T_SEMICOL; } "," { return T_COMMA; } "(" { return T_LPAR; } ")" { return T_RPAR; } "#" { return T_GRID; } "[" { return T_LSPAR; } "]" { return T_RSPAR; } [1-9][0-9]? { yylval = atoi(yytext); return INTEGER;} [0-9]+"."[0-9]+ | "."?[0-9]? { yylval.d = atof(yytext); return FLOAT; } \n { return EOL; } [ \t] { /* ignore whitespace */ } . { printf("Mystery character %c\n", *yytext); } %% Finally in the makefile I use g++ instead of gcc: shady: shady.l shady.y bison -d shady.y -o shady.tab.c flex shady.l g++ -o $@ shady.tab.c lex.yy.c -lfl flex and bison work correctly but upon linking I get the following error: Undefined symbols: "_yylex", referenced from: _yyparse in ccwb57x0.o Of course if I try to change anything about the function in bison file it says that yylex is not declared in the scope of yyparse. Am I trying to solve simply something that is more complex than it seems? Actually I don't need a closed structure to have access to parse and lexer in a object oriented manner, I just want to make it work. 
I just want to be able to use C++ in bison file (to create AST) and to call yyparse() from C++ objects.. Thanks in advance
You need the extern "C" {} for yylex to be in shady.l: %{ extern "C" { int yylex(void); } #include "shady.tab.h" %} %% "MOV"|"mov" { return T_MOV; } "NOP"|"nop" { return T_NOP; } ...etc... Also, after adding a dummy grammar rule, I was able to build and run this with just: flex shady.l bison -d shady.y g++ shady.tab.c lex.yy.c
1,596,401
1,596,480
C++ Serial Port Question
Problem: I have a hand held device that scans those graphic color barcodes on all packaging. There is a track device that I can use that will slide the device automatically. This track device functions by taking ascii code through a serial port. I need to get this thing to work in FileMaker on a Mac. So no terminal programs, etc... What I've got so far: I bought a Keyspan USB/Serial adapter. Using a program called ZTerm I was successful in sending commands to the device. Example: "C,7^M^J" I was also able to do the same thing in Terminal using this command: screen /dev/tty.KeySerial1 57600 and then type in the same command above(but when I typed in I just hit Control-M and Control-J for the carriage return and line feed) Now I'm writing a plug-in for FileMaker(in C++ of course). I want to get what I did above happen in C++ so when I install that plug-in in FileMaker I can just call one of those functions and have the whole process take place right there. I'm able to connect to the device, but I can't talk to it. It is not responding to anything. I've tried connecting to the device(successfully) using these: FILE *comport; if ((comport = fopen("/dev/tty.KeySerial1", "w")) == NULL){...} and int fd; fd = open("/dev/tty.KeySerial1", O_RDWR | O_NOCTTY | O_NDELAY); This is what I've tried so far in way of talking to the device: fputs ("C,7^M^J",comport); or fprintf(comport,"C,7^M^J"); or char buffer[] = { 'C' , ',' , '7' , '^' , 'M' , '^' , 'J' }; fwrite (buffer , 1 , sizeof(buffer) , comport ); or fwrite('C,7^M^J', 1, 1, comport); Questions: When I connected to the device from Terminal and using ZTerm, I was able to set my baud rate of 57600. I think that may be why it isn't responding here. But I don't know how to do it here.... Does any one know how to do that? I tried this, but it didn't work: comport->BaudRate = 57600; There are a lot of class solutions out there but they all call these include files like termios.h and stdio.h. 
I don't have these and, for whatever reason, I can't find them to download. I've downloaded a few examples but there are like 20 files in them and they're all calling other files I can't find(like the ones listed above). Do I need to find these and if so where? I just don't know enough about C++ Is there a website where I can download libraries?? Another solution might be to put those terminal commands in C++. Is there a way to do that? So this has been driving me crazy. I'm not a C++ guy, I only know basic programming concepts. Is anyone out there a C++ expert? I ideally I'd like this to just work using functions I already have, like those fwrite, fputs stuff. Thanks!
Sending a ^ and then a M doesn't send control-M; that's just the way you write it. The easiest way to send a control character is to just use the ASCII control code. P.S. ^M is carriage return, i.e. "\r", and ^J is linefeed, "\n". edit: Probably more than you will (hopefully) ever need to know - but read The Serial Port Howto before going any further.
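On the baud rate question: termios.h is a standard system header on OS X (and every other POSIX system), and the usual approach is to fill in a termios structure and apply it to the open descriptor with tcsetattr(). A minimal sketch of just the configuration step; opening /dev/tty.KeySerial1 and the tcsetattr() call on the real device are omitted, and the function name is invented:

```cpp
#include <termios.h>
#include <cstring>

// Build a termios configuration for 57600 baud, 8 data bits, receiver enabled.
// In a real program you would apply it with tcsetattr(fd, TCSANOW, &tio)
// after opening the device with open("/dev/tty.KeySerial1", O_RDWR | O_NOCTTY).
struct termios make_57600_8n1() {
    struct termios tio;
    std::memset(&tio, 0, sizeof tio);
    cfsetispeed(&tio, B57600);            // input baud rate
    cfsetospeed(&tio, B57600);            // output baud rate
    tio.c_cflag |= CS8 | CREAD | CLOCAL;  // 8 data bits, enable receiver, ignore modem lines
    return tio;
}
```

Once the port is configured this way, plain write(fd, "C,7\r\n", 5) sends the command including the real carriage return and linefeed bytes.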
1,596,432
1,596,448
Getter and setter, pointers or references, and good syntax to use in c++?
I would like to know a good syntax for C++ getters and setters. private: YourClass *pMember; the setter is easy I guess: void Member(YourClass *value){ this->pMember = value; // forget about deleting etc } and the getter? should I use references or const pointers? example: YourClass &Member(){ return *this->pMember; } or YourClass *Member() const{ return this->member; } what's the difference between them? Thanks, Joe EDIT: sorry, I will edit my question... I know about references and pointers, I was asking about references and const pointers, as getters, what would be the difference between them in my code, like in the future, what should I expect to lose if I go one way or another... so I guess I will use const pointers instead of references. const pointers can't be deleted or assigned, right?
As a general law: If NULL is a valid parameter or return value, use pointers. If NULL is NOT a valid parameter or return value, use references. So if the setter should possibly be called with NULL, use a pointer as a parameter. Otherwise use a reference. If it's valid to call the getter of a object containing a NULL pointer, it should return a pointer. If such a case is an illegal invariant, the return value should be a reference. The getter then should throw a exception, if the member variable is NULL.
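A sketch of that rule applied to a hypothetical lookup class (all the names here are invented for illustration):

```cpp
#include <map>
#include <string>

class Registry {
public:
    void set(const std::string& key, int value) { items_[key] = value; }

    // Lookup may have no result: return a pointer, where NULL means "not found".
    const int* find(const std::string& key) const {
        std::map<std::string, int>::const_iterator it = items_.find(key);
        return it == items_.end() ? 0 : &it->second;
    }

    // The container always exists: return a reference, never NULL.
    const std::map<std::string, int>& items() const { return items_; }

private:
    std::map<std::string, int> items_;
};
```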
1,596,575
1,596,583
Good C++ Debugging/IDE Environment for Linux?
I have a friend who is trying to make the switch to Linux, but is hung up on the apparent lack of debugging/IDE environments for C++, especially as they relate to template programming. He has been using visual studio for years and is maybe a little spoiled by their awesome IDE. Does anyone have any good suggestions for an environment where he can, under Linux, develop and debug with all of the usual things (Breakpoints, line highlighting for compilation errors, step in/over/out/etc, etc) that he's accustomed to? Thanks!
Although many people think of it as a Java IDE, he could try NetBeans. I've used it on Windows for C and C++ development without a problem, and I know NetBeans is supported on Linux, so it would be worth a shot. It looks like most of the features he wants are included in the C/C++ development toolkit, including integration with GDB, a profiler, and more.
1,596,594
1,596,605
C++ Template + Iterator (noob question)
My disclaimer here is that I started teaching myself C++ about a week ago and my former experience with programming has been with dynamic languages (Python, javascript). I'm trying to iterate though the contents of a vector using a generic function to print out the items: #include <iostream> #include <algorithm> #include <vector> using std::vector; using std::cout; template <class T> void p(T x){ cout << x; } int main () { vector<int> myV; for(int i = 0; i < 10; i++){ myV.push_back(i); } vector<int>::const_iterator iter = myV.begin(); for_each(iter, myV.end(), p); return 0; } The code doesn't compile. Would someone explain why? Edit: The compiler error: error: no matching function for call to 'for_each(_gnu_debug::_Safe_iterator<__gnu_cxx::__normal_iterator<const int, _gnu_norm::vector<int, std::allocator<int> > >, __gnu_debug_def::vector<int, std::allocator<int> > >&, __gnu_debug::_Safe_iterator<__gnu_cxx::__normal_iterator<int, __gnu_norm::vector<int, std::allocator<int> > >, __gnu_debug_def::vector<int, std::allocator<int> > >, <unknown type>)' Thanks!
Try: for_each(myV.begin(), myV.end(), p<int>); There were two mistakes in your code: The iterators were not the same type The function pointer was not actually a pointer. Normally templated functions can be deduced from their parameters. But in this case you are not actually calling it; you are passing it (or its address) to a function (thus the normal rules on template function deduction did not work). As the compiler can not deduce which version of the function 'p' you need to use, you must be explicit. There is also a nice output iterator that does this: std::copy(myV.begin(),myV.end(), std::ostream_iterator<int>(std::cout)); Also note that very few compilers can optimise code across a function pointer call. Though most are able to optimise the call if it is a functor object. Thus the following may have been a viable alternative to a function pointer: template<typename T> struct P { void operator()(T const& value) const { std::cout << value; } }; .... for_each(myV.begin(), myV.end(), P<int>()); Another note: When you use templated methods/functions it is usually better to pass by const reference than by value. If the type is expensive to copy then passing by value will generate a copy construction which may not be what you expected.
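Putting the pieces together, here is a complete sketch using a functor, accumulating a sum instead of printing so the effect is easy to check (for_each returns a copy of the functor it was given, which is how the result gets back out):

```cpp
#include <algorithm>
#include <vector>

template <typename T>
struct Sum {
    T total;
    Sum() : total(0) {}
    void operator()(const T& value) { total += value; }
};

int sum_0_to_9() {
    std::vector<int> v;
    for (int i = 0; i < 10; i++)
        v.push_back(i);
    // for_each returns the (possibly modified) functor by value.
    Sum<int> s = std::for_each(v.begin(), v.end(), Sum<int>());
    return s.total; // 0 + 1 + ... + 9 = 45
}
```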
1,596,668
1,596,681
Logical XOR operator in C++?
Is there such a thing? It is the first time I encountered a practical need for it, but I don't see one listed in Stroustrup. I intend to write: // Detect when exactly one of A,B is equal to five. return (A==5) ^^ (B==5); But there is no ^^ operator. Can I use the bitwise ^ here and get the right answer (regardless of machine representation of true and false)? I never mix & and &&, or | and ||, so I hesitate to do that with ^ and ^^. I'd be more comfortable writing my own bool XOR(bool,bool) function instead.
The != operator serves this purpose for bool values.
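Applied to the example in the question, a tiny sketch:

```cpp
// Exactly one of A, B equals five.
// For bool operands, != behaves as logical XOR, and both sides of
// == produce genuine bools, so no machine-representation issues arise.
bool exactly_one_is_five(int A, int B) {
    return (A == 5) != (B == 5);
}
```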
1,596,691
1,596,807
cvCanny and float 32 bit (IPL_DEPTH_32F) problem
I have some problems with OpenCV's cvCanny(...) and the image data types it can handle. Well, maybe you guys/gals know a solution. I have a 32 bit float image and I want to perform cvCanny on it. The problem is cvCanny can only handle "IPL_DEPTH_8S" or U (signed / unsigned char), or at least that's what I suspect. The OpenCV manual does not indicate what it can handle and this line in cv/cvcanny.cpp didn't raise my hopes: ... if( CV_MAT_TYPE( src->type ) != CV_8UC1 || CV_MAT_TYPE( dst->type ) != CV_8UC1 ) CV_ERROR( CV_StsUnsupportedFormat, "" ); ... The images I have are greyscale / single channel float32 bit and the values in the image are between 0.0 and 16.0. Casting my float32 to unsigned char wouldn't help much since the values would lose their precision and I would miss edges with OpenCV's canny. Do you guys/gals happen to know a solution for my problem? (besides using ITK :) )
Sorry, since cvCanny only supports single-channel 8-bit images, the only thing I can think of is to scale each value in your image by 255/16 into a new image of type CV_8UC1 so that it ranges from 0 - 255 to minimize the precision you've lost.
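A sketch of just that scaling step in plain C++ (the cvCanny call itself stays as-is); values in [0.0, 16.0] are mapped linearly onto [0, 255], rounding to nearest:

```cpp
// Map a float in [0.0, 16.0] to an unsigned 8-bit value in [0, 255].
// Apply this per pixel into a new CV_8UC1 image before calling cvCanny.
unsigned char to_8u(float v) {
    float scaled = v * (255.0f / 16.0f) + 0.5f; // +0.5 rounds to nearest
    if (scaled < 0.0f)   scaled = 0.0f;          // clamp out-of-range input
    if (scaled > 255.0f) scaled = 255.0f;
    return static_cast<unsigned char>(scaled);
}
```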
1,596,837
1,596,877
Can I create an anonymous, brace-initialized aggregate in C++?
One can create an anonymous object that is initialized through constructor parameters, such as in the return statement, below. struct S { S(int i_, int j_) : i(i_), j(j_) { } int i, j; }; S f() { return S(52, 100); } int main() { cout << f().i << endl; return 0; } However, can one similarly create an anonymous aggregate that is initialized with a brace initializer? For example, can one collapse the body of f(), below, down to a single return statement without an "s?" struct S { int i, j; }; S f() { S s = { 52, 100 }; return s; } int main() { cout << f().i << endl; return 0; }
You can't in the current version of C++. You will be able to in C++ 0x -- I believe anyway. Of course, it's still open to revision -- at one time I believed you'd be able to specify concepts in C++ 0x, but that's gone... Edit: The reference would be [dcl.init] (§8.5/1) in N2960. The most relevant bit is the definition of 'braced-init-list' in the BNF (and the last bit of text, saying that the initialization described in that section can/does apply to return values).
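Under those C++0x rules the body of f() does collapse to a single return statement, with no named temporary (this compiles with any C++11 compiler):

```cpp
struct S {
    int i, j;
};

S f() {
    return { 52, 100 }; // braced-init-list in a return statement (C++11)
}
```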
1,597,373
1,597,412
Getters without shared ownership
How to write a getter that can not be deleted? I want to own the variables and not share them. Reading here and there I figured out that no matter what I return, the memory can be freed however I define it; is this true? references, const pointers, no matter what, the function which is calling the getter can delete it and my private variable would not be nullified but left as broken memory, right? I would like to develop a getter where I can return my private variable and be sure that the caller can't delete it... I am afraid that, while I am internally using the private variable, the caller has destroyed it and then it crashes my program on my next internal attempt to use it. In a first attempt I wouldn't like to use boost, as I am trying to learn the most from this project; boost would be used if there is no other way or if the other way is too complex/much work. Thanks, Joe My other question wasn't really focused so I did it again; it's not a problem to ask things here, right? =]
Depends on what you mean. Any time you have a pointer, it is possible to call delete on it. And if you have a reference, you can take the address of it, which gives you a pointer Anyway, if you have this class for example: class X { int getByVal() { return i; } // returns a copy of i int& getByRef() { return i; } // returns a reference to i private: int i; }; then I, as a user of your class, do not have an obvious way to delete your data. I can do the following: X x; int j = x.getByVal(); int& k = x.getByRef(); j = 42; // doesn't affect x.i because we returned a copy k = 42; // sets x.i to 42, because k is a reference And there's no obvious way for me to delete the class member. Of course, I could do this: delete &j; delete &k; (and of course, neither of these would do anything meaningful, but they would compile) but I wouldn't do so by accident. If you don't return a pointer, it's pretty clear that I'm not supposed to take ownership of the data. "Protect your code against Murphy, not Machiavelli" is usually a good rule of thumb. You can't prevent people from wrecking your code if they try. All you should worry about is preventing them from doing it accidentally. Edit In response to your comment under the question: as I said, I am learning... Copies make think that the callee must free the memory of the returning variable, which is more trouble to the callee(even thought it is me =p), so I wasn't talking about concepts, but the easyness of writing... and again, I am newbie on this memory stuff. I was developing in C#, PHP etc. I used to develop in C long time ago when I was learning with CircleMUD No, copies don't have to be deleted manually. Local variables are automatically deleted when they go out of scope. So in the above example, j is a copy of the class member i. When the calling function returns, j will be automatically deleted. Hope that helps. 
The variable lifetime rules in C++ are not very complicated, but it is extremely important to get them right as a lot of code depends on them. void foo() { int i = 0; // allocate a local (on the stack) int, and initialize it to 0 int* p = new int(1); // allocate an int on the heap, and initialize it to 1 int j = i; // create a *copy* of i. Now we have two ints on the stack int k = *p; // create a copy of the int pointed to by p. k is also on the stack, so even though it was copied from a heap-allocated variable, k does not have to be manually deleted int* q = p; // create a copy of p. q is not a separate pointer, which points to the *same* heap-allocated integer. } in the above example, all the copies are automatically cleaned up when foo returns. The only thing we have to do manually is to delete the integer we allocated on the heap. Both p and q point to it, but we must only delete the object once. But i, j, k, p, and q are all local variables, declared on the stack. Each of them are cleaned up when the function returns. For primitive types (such as ints as well as pointers), nothing really has to happen (they don't have destructors). When they go out of scope, they just disappear - even if they pointed to something important, like a heap-allocated object such as our integer. For non-POD objects, when they go out of scope, their destructors are called, so they too get cleaned up nicely, all by themselves. So even if we'd used a more complex type than int, the above would have worked just fine. We can still copy non-POD objects and pass them by value. I hope that helps clear things up a bit.
1,597,503
1,597,520
derive from an arbitrary number of classes
I have a class whose functionality I'd like to depend on a set of plug-in policies. But, I'm not sure how to get a class to derive from an arbitrary number of classes. The code below is an example of what I'm trying to achieve. // insert clever boost or template trickery here template< class ListOfPolicies > class CMyClass : public ListOfPolicies { public: CMyClass() { // identifiers should be the result of OR-ing all // of the MY_IDENTIFIERS in the TypeList. DWORD identifiers; DoSomeInitialization( ..., identifiers, ... ); } int MyFunction() { return 100; } // ... }; template< class T > class PolicyA { public: enum { MY_IDENTIFIER = 0x00000001 }; int DoSomethingA() { T* pT = static_cast< T* >( this ); return pT->MyFunction() + 1; }; // ... }; template< class T > class PolicyB { public: enum { MY_IDENTIFIER = 0x00000010 }; int DoSomethingB() { T* pT = static_cast< T* >( this ); return pT->MyFunction() + 2; }; // ... }; int _tmain(int argc, _TCHAR* argv[]) { CMyClass< PolicyA > A; assert( A.DoSomethingA() == 101 ); CMyClass< PolicyA, PolicyB > AB assert( AB.DoSomethingA() == 101 ); assert( AB.DoSomethingB() == 102 ); return 0; } Thanks, PaulH
With the Boost.MPL library: //Warning: Untested namespace bmpl = boost::mpl; template<class Typelist> class X : bmpl::inherit_linearly<Typelist, bmpl::inherit<bmpl::_1, bmpl::_2> >::type { ... }; Used as: X<bmpl::vector<Foo, Bar, Baz> > FooBarBaz; For the "OR-ing all MY_IDENTIFIER" part, something along the lines of the following: //Warning: still not tested: enum {OR_ED_IDENTIFIERS = bmpl::fold<Typelist, bmpl::int_<0>, bmpl::bitor_<_1, _2> >::value; }
1,597,695
1,597,721
Make my C++ Class iterable via BOOST_FOREACH
I have a class which I want to expose a list of structs (which just contain some integers). I don't want the outside to modify these data, just iterate over it and read them Example: struct TestData { int x; int y; // other data as well } class IterableTest { public: // expose TestData here }; now in my code I want to use my class like this: IterableTest test; BOOST_FOREACH(const TestData& data, test.data()) { // do something with data } I've already read this article http://accu.org/index.php/journals/1527 about memberspaces. However, I don't want to (or can't) save all TestData in an internal vector or something. This is because the class itself doesn't own the storage, i.e. there is actually no underlying container which can be accessed directly by the class. The class itself can query an external component to get the next, previous or ith element, though. So basically I want my class to behave as if it had a collection, but in fact it doesn't have one. Any ideas?
It sounds like you have to write your own iterators. The Boost.Iterator library has a number of helpful templates. I've used their Iterator Facade base class a couple of times, and it's nice and easy to define your own iterators using it. But even without it, iterators aren't rocket science. They just have to expose the right operators and typedefs. In your case, they're just going to be wrappers around the query function they have to call when they're incremented. Once you have defined an iterator class, you just have to add begin() and end() member functions to your class. It sounds like the basic idea is going to have to be to call your query function when the iterator is incremented, to get the next value. And dereference should then return the value retrieved from the last query call. It may help to take a look at the standard library stream_iterators for some of the semantics, since they also have to work around some fishy "we don't really have a container, and we can't create iterators pointing anywhere other than at the current stream position" issues. For example, assuming you need to call a query() function which returns NULL when you've reached the end of the sequence, creating an "end-iterator" is going to be tricky. But really, all you need is to define equality so that "iterators are equal if they both store NULL as their cached value". So initialize the "end" iterator with NULL. It may help to look up the required semantics for input iterators, or if you're reading the documentation for Boost.Iterator, for single-pass iterators specifically. You probably won't be able to create multipass iterators. So look up exactly what behavior is required for a single-pass iterator, and stick to that.
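A bare-bones sketch of the idea. Every name here is invented, and the index-based query function stands in for whatever your external component actually provides. Note how the end iterator problem is solved the way described above: equality compares the cached pointer, so an iterator holding NULL acts as the end:

```cpp
#include <cstddef>

struct TestData { int x; int y; };

class DataRange {
public:
    // Hypothetical query into the external component:
    // return the i-th element, or NULL past the end.
    typedef const TestData* (*QueryFn)(std::size_t);

    class iterator {
    public:
        iterator(QueryFn q, std::size_t i) : q_(q), i_(i), cur_(q ? q(i) : 0) {}
        const TestData& operator*() const { return *cur_; }
        iterator& operator++() { cur_ = q_(++i_); return *this; } // re-query on increment
        bool operator!=(const iterator& o) const { return cur_ != o.cur_; }
    private:
        QueryFn q_;
        std::size_t i_;
        const TestData* cur_; // NULL marks the end
    };

    explicit DataRange(QueryFn q) : q_(q) {}
    iterator begin() const { return iterator(q_, 0); }
    iterator end() const { return iterator(0, 0); }

private:
    QueryFn q_;
};

// Demo data source playing the role of the external component.
static const TestData demo[3] = { {1, 10}, {2, 20}, {3, 30} };
const TestData* demo_query(std::size_t i) { return i < 3 ? &demo[i] : 0; }

int sum_x() {
    int s = 0;
    DataRange r(demo_query);
    for (DataRange::iterator it = r.begin(); it != r.end(); ++it)
        s += (*it).x;
    return s;
}
```

To make BOOST_FOREACH happy you would additionally supply the usual iterator typedefs (iterator_category, value_type, etc.) or use iterator_facade, but the shape of the begin()/end() pair stays the same.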
1,597,827
1,598,151
using boost::mpl::bitor_
I have a class that accepts a list of policy classes using boost::mpl. Each policy class contains an identifying tag. I would like MyClass to produce the OR-ed result of each policy class' identifying tag. Unfortunately, I'm having some trouble figuring out how to correctly use the boost::mpl::fold<> functionality. If anybody can help, I would appreciate it. #include <boost/mpl/vector.hpp> #include <boost/mpl/bitor.hpp> #include <boost/mpl/inherit.hpp> #include <boost/mpl/inherit_linearly.hpp> namespace bmpl = boost::mpl; template< class ListOfPolicies > class CMyClass : public bmpl::inherit_linearly< ListOfPolicies, bmpl::inherit< bmpl::_1, bmpl::_2 > >::type { public: int identifier() const { // error C2039: 'tag' : is not a member of 'PolicyA' return bmpl::fold< ListOfPolicies, bmpl::int_< 0 >, bmpl::bitor_< bmpl::_1, bmpl::_2 > >::value } }; template< class T > class PolicyA { public: enum { MY_IDENTIFIER = 0x00000001 }; }; class PolicyB { public: enum { MY_IDENTIFIER = 0x00000010 }; }; int _tmain(int argc, _TCHAR* argv[]) { CMyClass< PolicyA, PolicyAB > AB assert( AB.identifier() == ( PolicyA::MY_IDENTIFIER | PolicyB::MY_IDENTIFIER )); return 0; } Thanks, PaulH
I haven't explicitly tested if it does what you intend to (aside from not getting the assert), but as fold returns a type containing a value, the line giving you an error should be: return bmpl::fold< ListOfPolicies, bmpl::int_<0>, bmpl::bitor_<bmpl::_1, bmpl::_2> >::type::value; Aside from that, bitor expects its arguments to be an integral constant (doc): class PolicyA { public: typedef boost::mpl::integral_c_tag tag; typedef int value_type; enum { value = 0x00000001 }; }; Continuing, fold works on mpl::vectors, thus you need a change in main: CMyClass< boost::mpl::vector<PolicyA, PolicyB> > AB; You also can't just hand an undefined type as a template parameter - thus i had to make PolicyA a non-template class. You'll have to see how to get working what you originally intended there.
1,598,207
1,598,217
Odd Circular Dependency Issue
So I have 2 classes, Bullet and Ship, that are dependent on each other, hence circular inclusion. Since I have Ship's interface #included into Bullet's interface, the obvious decision was to forward declare Bullet to Ship. However, when I first tried this I still got compiler errors. I read up a bit on forward declaration and realized that I was constructing a Bullet in one of Ship's methods, and Bullet's default constructor is member initialized, which (and I may be wrong) wouldn't work because a forward class declaration doesn't allow Ship to see definitions in the interface (i.e. member initialization). So I decided I could give up the member init and just defined the constructor in Bullet's implementation file, however I still receive the same problem with circular dependency. The message in particular is invalid use of undefined type struct Bullet. I could just put the interface for Bullet and Ship in the same file, but that's kind of a last resort. Any assistance regarding this problem is appreciated. Thanks. Here is the spot where the error occurs: case SDLK_UP: // Fire { Bullet(*this) fired_bullet; // Create bullet. Line where error occurs. fired_bullet.Move(); // Move bullet break; } Bullet's default constructor takes an argument of the Ship that is firing the bullet, and that code is in a Ship method.
You want: Bullet fired_bullet(*this); But your coupling is very tight. What does Bullet need from Ship, and what does Ship need from Bullet? I assume the bullet needs to know what ship it came from so enemy bullets don't hurt enemies and vice versa. Perhaps you need a team type? enum bullet_team { bullet_player, bullet_enemy, }; And your ships and enemies will only tell the bullet what team they are on, rather than forcing the bullet to keep track of where it came from. About firing, maybe make a BulletManager singleton. Tell the manager you want to make a bullet at position X, with team orientation Y, and properties Z, and the manager will take care of it for you. BulletManager::reference().fire(getPosition(), bullet_player);
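A tiny sketch of the team idea; the hurts() rule is invented for illustration, but the point is that Bullet no longer needs to see Ship's definition at all, which dissolves the circular dependency:

```cpp
enum bullet_team { bullet_player, bullet_enemy };

class Bullet {
public:
    explicit Bullet(bullet_team t) : team_(t) {}
    // A bullet only hurts targets on the other team -- no Ship dependency needed.
    bool hurts(bullet_team target) const { return team_ != target; }
private:
    bullet_team team_;
};
```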
1,598,351
1,598,678
emacs, etags and using emacs as an IDE
My usual tools are Emacs with g++ on a Linux system to implement my research algorithms. For the last some years, I have used emacs in a fairly basic way. I open C or C++ files, edit them with a syntax highlighting scheme of my choice and compile and do other stuff from within emacs (or maybe from a terminal), including using gdb within emacs for debugging. I know about etags and ctags and have played a bit with etags and emacs but don't seem to find that "sweet spot" with the tools. I was wondering, what do others do to configure emacs just so that it meshes nicely with etags and other tools? What tweaks does one need to do to emacs to make it a better IDE?
For just tagging info, I also recommend GNU Global. CScope can do a lot also. In both cases, they provide a way to find the location of a tag by name, and also the uses of a particular tag. For "IDE Stuff" there is more to it than just a tagging system. For that, I recommend the CEDET set of tools for Emacs. This provides a project management system (EDE) which can create Makefiles to compile your code for you. There is also a parsing and code analysis part (Semantic) which provides smart completion. There is a template / code generation system (SRecode) which can convert tags from semantic back into code. There is even a UML diagram editor (COGRE) which can generate code from a class diagram that you draw in Emacs. Most people using CEDET only use the parser and smart completion systems, possibly in combination with ECB, and it is ok to use only a subset of CEDET. For good measure CEDET will also integrate with GNU Global databases so you can reference symbol uses in addition to tag locations along with the regular GNU Global interface for Emacs.
1,598,397
1,598,409
Creating array of objects on the stack and heap
Consider the following code: class myarray { int i; public: myarray(int a) : i(a){ } }; How can you create an array of objects of myarray on the stack and how can you create an array of objects on the heap?
You can create an array of objects on the stack† via: myarray stackArray[100]; // 100 objects And on the heap† (or "freestore"): myarray* heapArray = new myarray[100]; delete [] heapArray; // when you're done But it's best not to manage memory yourself. Instead, use a std::vector: #include <vector> std::vector<myarray> bestArray(100); A vector is a dynamic array, which (by default) allocates elements from the heap.†† Because your class has no default constructor, to create it on the stack you need to let the compiler know what to pass into the constructor: myarray stackArray[3] = { 1, 2, 3 }; Or with a vector: // C++11: std::vector<myarray> bestArray{ 1, 2, 3 }; // C++03: std::vector<myarray> bestArray; bestArray.push_back(myarray(1)); bestArray.push_back(myarray(2)); bestArray.push_back(myarray(3)); Of course, you could always give it a default constructor: class myarray { int i; public: myarray(int a = 0) : i(a) {} }; † For the pedants: C++ doesn't really have a "stack" or "heap"/"freestore". What we have is "automatic storage" and "dynamic storage" duration. In practice, this aligns itself with stack allocation and heap allocation. †† If you want "dynamic" allocation from the stack, you'd need to define a max size (stack storage is known ahead of time), and then give vector a new allocator so it uses the stack instead.
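As a runnable sketch of the two storage forms (the value() accessor is added here purely for the demonstration; it is not in the original class):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

class myarray {
    int i;
public:
    myarray(int a) : i(a) {}
    int value() const { return i; }  // accessor added for the demo
};

// Automatic ("stack") storage: the array lives until the function returns.
int sum_stack() {
    myarray arr[3] = { myarray(1), myarray(2), myarray(3) };
    return arr[0].value() + arr[1].value() + arr[2].value();
}

// Dynamic ("heap") storage via vector: no explicit delete needed.
int sum_vector() {
    std::vector<myarray> v;
    for (int k = 1; k <= 3; ++k) v.push_back(myarray(k));
    int s = 0;
    for (std::size_t j = 0; j < v.size(); ++j) s += v[j].value();
    return s;
}
```

Both forms hold the same three objects; only the storage duration differs.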
1,598,514
1,598,994
Infinite loop on EOF in C++
This code works as desired for the most part, which is to prompt the user for a single character, perform the associated action, prompt the user to press return, and repeat. However, when I enter ^D (EOF) at the prompt, an infinite loop occurs. I am clearing the error state via std::cin.clear() and calling std::cin.ignore(...) to clear the buffer. What could be causing the infinite loop? #include <iostream> #include <limits> void wait() { std::cout << std::endl << "press enter to continue."; std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); std::cin.clear(); std::cin.get(); } int main() { char response; while (true) { std::cout << "enter a character at the prompt." << std::endl << "> "; std::cin >> response; switch (response) { case 'q': exit(0); break; } wait(); } } I am running this in the Mac OS X terminal, if it matters. UPDATE: What I am really asking here is, when the user enters EOF (^D) at the prompt, how do I (a) detect it and (b) reset the stream so that the user can continue to enter data. The following example is different from the code above, but illustrates the same principle of clearing the stream after a ^D has been detected and continuing to read from that stream. > a you entered: a > b you entered: b > ^D you entered EOF > c you entered: c ...
You should always check whether any of a stream's failure flags are set after calling a formatted extraction operation; in your example you are checking response without checking whether response was correctly extracted. Also, you are using std::endl in your prompt output where it doesn't make sense. std::endl prints \n and then flushes the buffer, but you then immediately print more characters so the flush is redundant. As cin and cout are (usually) tied, calling an input function for std::cin will cause std::cout to be flushed in any case so you may as well put a \n into your prompt string and save on the verbose extra << operators. Why not make a prompting function that prints the prompt, retrieves the input and returns a reference to the stream so that you can test it for success using the usual stream to boolean type conversion? This way you can get rid of the while true and explicit break. std::istream& prompt_for_input( std::istream& in, std::ostream& out, char& response ) { out << "enter a character at the prompt.\n> "; in >> response; return in; } int main() { char response; while ( prompt_for_input( std::cin, std::cout, response ) && response != 'q' ) { wait(); } }
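The clear-and-continue behaviour the questioner asked about can be exercised without a terminal by using a stringstream (a sketch; a real ^D on std::cin behaves similarly on most terminals, where clear() resets the eofbit/failbit so the stream can be read again):

```cpp
#include <cassert>
#include <sstream>

// Returns true only if a character was successfully extracted.
bool extract_ok(std::istream& in, char& c) {
    in >> c;
    return !in.fail();
}
```

After an extraction fails at end of input, the failure flags stay set (and every further read fails) until clear() is called.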
1,598,673
1,598,760
Dynamic arrays size and dynamic arrays allocators in VC++
I'm a little confused after writing my own tiny discovery program to clear up how Visual C++ allocates the memory for dynamic arrays. I must note, I have never come across technical documents that describe this behaviour of the new[]/delete[] operators for any C++ implementation. Initially I thought that new[] and delete[] work something like the following, interpreted as simple C: void fake_int_ctor(int _this) { printf("borns with 0x%08X in the heap\n", _this); } void fake_int_dtor(int _this) { printf("dies with %d\n", _this); } void *new_array(unsigned int single_item_size, unsigned int count, void (*ctor)()) { unsigned int i; unsigned int *p = malloc(sizeof(single_item_size) + sizeof(count) + single_item_size * count); p[0] = single_item_size; // keep single item size for delete_array p[1] = count; // and then keep items count for delete_array p += 2; for ( i = 0; i < count; i++ ) { ctor(p[i]); // simulate constructor calling } return p; } void delete_array(void *p, void (*dtor)()) { unsigned int *casted_p = p; unsigned int single_item_size = casted_p[-2]; unsigned int count = casted_p[-1]; unsigned int i; for ( i = 0; i < count; i++ ) { dtor(casted_p[i]); // simulate destructor } free(casted_p - 2); } void test_allocators(void) { unsigned int count = 10; unsigned int i; int *p = new_array(sizeof(int), count, fake_int_ctor); // allocate 10 ints and simulate constructors for ( i = 0; i < count; i++ ) { p[i] = i + i; // do something } delete_array(p, fake_int_dtor); // deletes the array printing death-agony-values from 0 to 19 stepping 2 } This code implies the following structure for dynamic arrays:
-2..-1..0.....|.....|.....|.....
^   ^   ^
|   |   +-- start of user data, slots may have variable size
|   |       depending on "single item size" slot
|   +------ "items count" slot
+---------- "single item size" slot

My VC++ compiler generated a program that produces the following output:

borns with 0xCDCDCDCD in the heap
borns with 0xCDCDCDCD in the heap
borns with 0xCDCDCDCD in the heap
borns with 0xCDCDCDCD in the heap
borns with 0xCDCDCDCD in the heap
borns with 0xCDCDCDCD in the heap
borns with 0xCDCDCDCD in the heap
borns with 0xCDCDCDCD in the heap
borns with 0xCDCDCDCD in the heap
borns with 0xCDCDCDCD in the heap
dies with 0
dies with 2
dies with 4
dies with 6
dies with 8
dies with 10
dies with 12
dies with 14
dies with 16
dies with 18

Obviously, everything is fine in this case. But when I tried to discover the nature of the "native" VC++ dynamic array allocators, I understood that I'm wrong (at least for VC++). So I've got several questions. Where are the sizes of dynamic arrays stored? How do the dynamic array allocators work? What byte-by-byte structure do they use for dynamic arrays? Or could you provide any links that would clarify this for me (VC++ has the highest priority), please?
I'm not sure what you are looking for here but fake_int_ctor(int) is printing uninitialized memory in the allocated array. Try something like this instead: void fake_int_ctor(int& _this) { printf("born at %p\n", (void*)&_this); } void fake_int_dtor(int& _this) { printf("dies at %p\n", (void*)&_this); } This should print out the addresses. I'm guessing that this is more along the lines of what you want to see. This little program isn't really showing anything since you are just allocating a chunk of contiguous storage (ala malloc) and printing out the range of addresses. Nothing really surprising there. The actual storage of arrays is implementation defined. The only thing that is guaranteed is that when you do something like C *p = new C[10], p will point to enough contiguous storage for 10 C objects. How the environment keeps track of what was allocated so that delete [] p calls the destructors for each allocated element is completely implementation defined. If you really want to dig into this, then start with something like the following snippet. Compile it with assembly listings enabled and look at the generated assembly code. struct C { C(): x(0) {} int x; }; int main() { C *p = new C[10]; for (int i=0; i<10; ++i) { p[i].x = i; } delete [] p; return 0; } You should be able to figure out how the compiler represents arrays as long as you turn off all of the optimizations.
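One portable way to observe that delete [] knows the element count, without relying on any particular compiler's bookkeeping, is to count constructor and destructor calls (this sketch is an addition, not from the answer above):

```cpp
#include <cassert>

struct C {
    static int live;   // number of currently constructed C objects
    C()  { ++live; }
    ~C() { --live; }
};
int C::live = 0;

// new[] constructs every element; delete[] must destroy exactly as many,
// so the implementation has to record the count somewhere.
int probe() {
    C* p = new C[10];
    int during = C::live;
    delete [] p;
    return during;
}
```

Where that count lives (e.g. a hidden prefix before the array) is the implementation-defined part you would see in the assembly listing.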
1,598,703
1,598,977
Profiling DLL/LIB Bloat
I've inherited a fairly large C++ project in VS2005 which compiles to a DLL of about 5MB. I'd like to cut down the size of the library so it loads faster over the network for clients who use it from a slow network share. I know how to do this by analyzing the code, includes, and project settings, but I'm wondering if there are any tools available which could make it easier to pinpoint what parts of the code are consuming the most space. Is there any way to generate a "profile" of the DLL layout? A report of what is consuming space in the library image and how much?
When you build your DLL, you can pass /MAP to the linker to have it generate a map file containing the addresses of all symbols in the resulting image. You will probably have to do some scripting to calculate the size of each symbol. Using a "strings" utility to scan your DLL might reveal unexpected or unused printable strings (e.g. resources, RCS IDs, __FILE__ macros, debugging messages, assertions, etc.). Also, if you're not already compiling with /Os enabled, it's worth a try.
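Once you have symbol addresses out of the .map file, a rough per-symbol size is just the gap to the next address. A sketch of that post-processing step (parsing the actual map-file lines, whose format varies, is left out, and estimate_sizes is a hypothetical helper):

```cpp
#include <cassert>
#include <map>
#include <string>

// Given symbols keyed (and therefore sorted) by start address, estimate
// each symbol's size as the distance to the next symbol's address, or to
// the end of the image for the last one.
std::map<std::string, unsigned long>
estimate_sizes(const std::map<unsigned long, std::string>& symbols,
               unsigned long image_end)
{
    std::map<std::string, unsigned long> sizes;
    std::map<unsigned long, std::string>::const_iterator it = symbols.begin();
    while (it != symbols.end()) {
        std::map<unsigned long, std::string>::const_iterator next = it;
        ++next;
        unsigned long end = (next == symbols.end()) ? image_end : next->first;
        sizes[it->second] = end - it->first;
        it = next;
    }
    return sizes;
}
```

Sorting the results by size then points straight at the biggest contributors to the DLL.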
1,598,710
1,598,758
How to design an efficient image buffer in C++?
I am trying to create a data buffer, more specifically, an image buffer, which will be shared among multiple modules. Those modules only read from the buffer and don't communicate with each other at all. My difficulty is: 1. Large data size: larger than 10M per image, which means copying the data around for different threads is not desirable. 2. I don't want the memory to grow wild: as new data continuously comes in (in real-time), the oldest data must be deleted when all the modules have finished using it. However, to make things even more complex, the modules that consume the data work at different paces: some are faster/slower, some need more data (multiple images) to get a result, some need less (only one image). I have been thinking about using shared_ptr to solve the 1st problem: create a queue of boost shared_ptrs, where each shared_ptr points to an image (char array), then pass a subset of those pointers to each module. I am a total newbie in this smart-pointer field. What would be a good solution to this problem? Thanks.
Assuming you hand the shared_ptrs to the modules as soon as the buffer is created, they are a good fit. You don't even need to store them centrally in that case. It gets more complicated, however, if you create the buffers at one point and the modules only request them at some later point. In that case you have to figure out what behaviour you want. Do you want to hold the buffers for some time? Or until at least one module has used them? Or until some new data comes in? Integration of comment: As you want all your readers/modules to handle all incoming data, you can simply give each of them an input queue. On incoming data, just hand the modules a shared_ptr/shared_array to the new buffer, which they add to their queues. Remember, though, to handle the multi-threading issues for the queue access.
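A sketch of the per-module queue idea (using std::shared_ptr here in place of the question's boost::shared_ptr, which behaves the same way; Module is a made-up name, and the queue locking is omitted):

```cpp
#include <cassert>
#include <deque>
#include <memory>
#include <vector>

typedef std::vector<unsigned char> Image;
typedef std::shared_ptr<Image> ImagePtr;

// Each consumer keeps its own queue of pointers; the pixel data itself is
// never copied, and the image is destroyed automatically once the last
// queue has dropped its pointer.
struct Module {
    std::deque<ImagePtr> input;
    void push(const ImagePtr& img) { input.push_back(img); }
    void consume_one() { if (!input.empty()) input.pop_front(); }
};
```

Fast and slow modules can drain their queues at their own pace; reference counting handles "delete when everyone is done" for free.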
1,598,742
1,606,768
Sending large chunks of data over Boost TCP?
I have to send mesh data via TCP from one computer to another... These meshes can be rather large. I'm having a tough time thinking about what the best way to send them over TCP will be as I don't know much about network programming. Here is my basic class structure that I need to fit into buffers to be sent via TCP: class PrimitiveCollection { std::vector<Primitive*> primitives; }; class Primitive { PRIMTYPES primType; // PRIMTYPES is just an enum with values for fan, strip, etc... unsigned int numVertices; std::vector<Vertex*> vertices; }; class Vertex { float X; float Y; float Z; float XNormal; float ZNormal; }; I'm using the Boost library and their TCP stuff... it is fairly easy to use. You can just fill a buffer and send it off via TCP. However, of course this buffer can only be so big and I could have up to 2 megabytes of data to send. So what would be the best way to get the above class structure into the buffers needed and sent over the network? I would need to deserialize on the receiving end also. Any guidance in this would be much appreciated. EDIT: I realize after reading this again that this really is a more general problem that is not specific to Boost... It's more of a problem of chunking the data and sending it. However I'm still interested to see if Boost has anything that can abstract this away somewhat.
Have you tried it with Boost's TCP? I don't see why 2MB would be an issue to transfer. I'm assuming we're talking about a LAN running at 100mbps or 1gbps, a computer with plenty of RAM, and don't have to have > 20ms response times? If your goal is to just get all 2MB from one computer to another, just send it, TCP will handle chunking it up for you. I have a TCP latency checking tool that I wrote with Boost, that tries to send buffers of various sizes, I routinely check up to 20MB and those seem to get through without problems. I guess what I'm trying to say is don't spend your time developing a solution unless you know you have a problem :-) --------- Solution Implementation -------- Now that I've had a few minutes on my hands, I went through and made a quick implementation of what you were talking about: https://github.com/teeks99/data-chunker There are three big parts: The serializer/deserializer, boost has its own, but it's not much better than rolling your own, so I did. Sender - Connects to the receiver over TCP and sends the data Receiver - Waits for connections from the sender and unpacks the data it receives. I've included the .exe(s) in the zip, run Sender.exe/Receiver.exe --help to see the options, or just look at main. More detailed explanation: Open two command prompts, and go to DataChunker\Debug in both of them. Run Receiver.exe in one of them. Run Sender.exe in the other one (possibly on a different computer, in which case add --remote-host=IP.ADD.RE.SS after the executable name; if you want to try sending more than once, add --num-sends=10 to send ten times). Looking at the code, you can see what's going on, creating the receiver and sender ends of the TCP socket in the respective main() functions.
The sender creates a new PrimitiveCollection and fills it in with some example data, then serializes and sends it...the receiver deserializes the data into a new PrimitiveCollection, at which point the primitive collection could be used by someone else, but I just wrote to the console that it was done. Edit: Moved the example to github.
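A sketch of the flat serialization step itself (not taken from the linked repository; it assumes sender and receiver share the same float representation and endianness, which is fine on a homogeneous LAN):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

struct Vertex { float X, Y, Z, XNormal, ZNormal; };

// Length-prefixed byte buffer: one send() call, and TCP chunks it for you.
std::vector<char> serialize(const std::vector<Vertex>& vs) {
    unsigned int n = static_cast<unsigned int>(vs.size());
    std::vector<char> buf(sizeof(n) + n * sizeof(Vertex));
    std::memcpy(&buf[0], &n, sizeof(n));
    if (n != 0)
        std::memcpy(&buf[sizeof(n)], &vs[0], n * sizeof(Vertex));
    return buf;
}

std::vector<Vertex> deserialize(const std::vector<char>& buf) {
    unsigned int n = 0;
    std::memcpy(&n, &buf[0], sizeof(n));
    std::vector<Vertex> vs(n);
    if (n != 0)
        std::memcpy(&vs[0], &buf[sizeof(n)], n * sizeof(Vertex));
    return vs;
}
```

The receiver first reads the length prefix, then loops on the socket until that many bytes have arrived before deserializing.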
1,598,807
1,598,842
MapViewOfFile with pointers between threads
I have some programs that use MapViewOfFile to share data, but I am getting strange access violations that seem to be from accessing the mapped file data. Some of the shared data has pointers, however these pointers are only set and used by one process, but by several threads within the process. I understand that you can't use pointers in mapped view across different processes, as obviously they could be mapped to different memory for each process, but is it safe to use pointers in mapped memory between threads on the same process?
Yes, it is safe to share pointers (in mapped memory or not) between threads in the same process, since the threads share the same address space.
1,598,967
1,598,980
Benefits of Initialization lists
From what I know, the benefit of using initialization lists is that they provide efficiency when initializing class members which are not built-in. For example, Fred::Fred() : x_(whatever) { } is preferable to Fred::Fred() { x_ = whatever; } if x is an object of a custom class. Other than that, this style is used even with built-in types for the sake of consistency. The most common benefit of doing this is improved performance. If the expression whatever is the same type as member variable x_, the result of the whatever expression is constructed directly inside x_ — the compiler does not make a separate copy of the object. With the other style, the expression whatever causes a separate, temporary object to be created, and this temporary object is passed into the x_ object's assignment operator. Then that temporary object is destructed at the ;. That's inefficient. Question: Is there any efficiency gain in the following example from using the initialization list? I think there is no gain. The first version calls string's copy constructor and the other calls string's assignment operator (there isn't any temporary that's created). Is that correct? class MyClass { public: MyClass(string n):name(n) { } private: string name; }; class MyClass { public: MyClass(string n) { name=n; } private: string name; };
The second version is calling string's default ctor and then string's copy-assignment operator -- there could definitely be (minor) efficiency losses compared to the first one, which directly calls string's copy-ctor (e.g., depending on string's implementation, there might be useless allocation-then-release of some tiny structure). Why not just always use the right way?-)
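The difference can be made observable with a counting type (a sketch; Tracer and the two wrapper structs are made-up names):

```cpp
#include <cassert>

struct Tracer {
    static int defaults, copies, assigns;
    Tracer() { ++defaults; }
    Tracer(const Tracer&) { ++copies; }
    Tracer& operator=(const Tracer&) { ++assigns; return *this; }
};
int Tracer::defaults = 0;
int Tracer::copies = 0;
int Tracer::assigns = 0;

// First style: member copy-constructed directly in the init list.
struct InitList   { Tracer t; InitList(const Tracer& x)   : t(x) {} };
// Second style: member default-constructed, then copy-assigned in the body.
struct AssignBody { Tracer t; AssignBody(const Tracer& x) { t = x; } };
```

The init-list style does one copy-construction; the body-assignment style does a default construction plus an assignment.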
1,599,171
1,599,191
C++ Class member access problem with templates
I've got a problem: if I have a template class, which in turn has a template method that takes a parameter of another instance of the class (with different template arguments), it cannot access protected or private members of the class passed as a parameter, eg: template<typename T>class MyClass { T v; public: MyClass(T v):v(v){} template<typename T2>void foo(MyClass<T2> obj) { std::cout << v << " "; //error C2248: 'MyClass<T>::v' : cannot access private member declared in class 'MyClass<T>' std::cout << obj.v << " "; std::cout << v + obj.v << std::endl; } }; int main() { MyClass<int> x(5); MyClass<double> y(12.3); x.foo(y); } Is there some way to say that methods in MyClass<T> have full access to MyClass<SomeOtherT>?
They are different types: the template constructs a distinct type for each set of template arguments. You have to make other instantiations of your class friends: template <typename T>class MyClass { T v; public: MyClass(T v):v(v){} template<typename T2>void foo(MyClass<T2> obj) { std::cout << v << " "; std::cout << obj.v << " "; std::cout << v + obj.v << std::endl; } // Any other type of MyClass is a friend. template <typename U> friend class MyClass; // You can also specialize the above: friend class MyClass<int>; // only if this is a MyClass<int> will the // other class let us access its privates // (that is, when you try to access v in another // object, only if you are a MyClass<int> will // this friend apply) };
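Putting the friend declaration to use in a compiling variant of the code above (sum_with is a hypothetical member added for the demonstration, in place of the original printing foo):

```cpp
#include <cassert>

template <typename T> class MyClass {
    T v;
public:
    MyClass(T value) : v(value) {}
    // Every other instantiation of MyClass may read our private v.
    template <typename U> friend class MyClass;

    template <typename T2>
    double sum_with(const MyClass<T2>& other) const {
        return v + other.v;   // ok thanks to the friend declaration
    }
};
```

Without the member-template friend declaration, the access to other.v would fail with the same C2248 error as in the question.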
1,599,382
1,599,391
Is it worth passing values of "simple" types by reference?
Consider the following: int increment1 (const int & x) { return x+1; } int increment2 (const int x) { return x+1; } I understand passing references to class objects and such, but I'm wondering if it's worth passing a reference for simple types? Which is more optimal? Passing by reference or passing by value (in the case of a simple type)?
Unless you need the "call by reference" semantics, i.e. you want to access the actual variable in the callee, you shouldn't use call by reference for simple types. For a similar, more general discussion see: "const T &arg" vs. "T arg"
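The semantic difference in one small example (function names are made up):

```cpp
#include <cassert>

// A non-const reference aliases the caller's variable...
void bump_ref(int& x)  { ++x; }
// ...while by-value only touches a local copy.
void bump_copy(int x)  { ++x; }
```

For a plain int, passing by value is also at least as cheap as passing a reference, so by value is the default choice unless you need the aliasing.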
1,599,414
1,599,489
Active Wait in Windows I/O Driver
Continuing the question in: Keep windows trying to read a file Thanks to the accepted answer in that question I realized that keeping Windows waiting for data is a driver responsibility. As I'm using Dokan, I am able to look into the driver code. Dokan completes the IRP request with STATUS_END_OF_FILE when you return no data, which obviously forces Windows to stop waiting for data and close the file. What I want to do is hold the application that requests file data until data is available, and as I said in the original question, the user must be able to cancel the process at any time. The code that completes the request is: PIRP irp; irp->IoStatus.Status = STATUS_END_OF_FILE; IoCompleteRequest(irp, IO_NO_INCREMENT); Actually, I can return any error code, and I wanted to know if some STATUS code (one of the NTSTATUS values) forces Windows to wait for data, and whether returning that status code is enough to hold Windows in the read operation. I already tried returning STATUS_WAIT_0, but it doesn't seem to work. Thanks again :)
You should return STATUS_PENDING and set CancelRoutine for the IRP. Complete your IRP when the data is available or an error occurred. See Asynchronous I/O Responses and Canceling IRPs for more info.
1,599,416
1,599,535
can you have a private member of the same class as the base class you're inheriting?
I'm using the Qt library. I'm currently trying to create my own QDockWidget (the class MY class is inheriting). Right now MY class has a ptr to QDockWidget. Does this even make sense? Is that a legal statement? Is there a better way to separate the QDockWidget from the rest of my program in Qt? I'm a little lost on how to implement a new Dock Widget. Here is a copy of MY QDockWidget class: #ifndef DOC_MENU_WIDGET_H #define DOC_MENU_WIDGET_H #include "App_interface.h" #include <QObject> #include <QWidget> #include <QDockWidget> class Doc_menu_widget : public QWidget { //Q_OBJECT public: Doc_menu_widget(App_interface *parent); ~Doc_menu_widget(); private: QDockWidget *dock_widget; }; #endif
You seem to be confusing the IS-A relationship and the HAS-A relationship. IS-A relations are implemented by inheritance. For instance, a QWidget IS-A QObject. HAS-A relations are implemented by members. For instance, a QWidget HAS-A size. Now, what's the relation between the class you are trying to develop and a QDockWidget? That will tell you which of the two you should choose.
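The two relations in miniature, without any Qt (all names here are hypothetical):

```cpp
#include <cassert>
#include <string>

struct Widget { virtual ~Widget() {} };

// IS-A: DockMenu is a Widget, so it inherits from Widget.
// HAS-A: DockMenu has a title, so the title is a member.
struct DockMenu : Widget {
    std::string title;
    explicit DockMenu(const std::string& t) : title(t) {}
};
```

If your class is meant to *be* a dock widget, inherit from QDockWidget; if it merely *owns* one, keep the pointer member as you have now.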
1,599,536
1,599,566
How can I create an interface without parameterless constructor in C++?
How can I hide the default constructor from consumers? I tried declaring it private but got compilation issues. The solution is: class MyInterface { public: MyInterface(SomeController *controller) {} }; class Inherited : public MyInterface { private: Inherited () {} public: Inherited(SomeController *controller) { } };
In your case, since you have already provided a constructor that takes one parameter (SomeController*), the compiler doesn't generate a default constructor for you. Hence, the default constructor is not available; i.e., MyInterface a; will cause the compiler to complain that there is no appropriate constructor. If you want to make the default constructor explicitly unavailable, declare it private. EDIT for the code you have posted: You need to call the base class MyInterface constructor (with a single parameter) explicitly. Otherwise, the derived class constructor (Inherited) will by default look for the base class default constructor, which is missing. class Inherited : public MyInterface { private: Inherited (); public: Inherited(SomeController *controller):MyInterface(controller) {} };
1,599,604
1,620,994
Mouse jiggling / message processing loop
I have written a multithreaded program which does some thinking and prints out some diagnostics along the way. I have noticed that if I jiggle the mouse while the program is running then the program runs quicker. Now I could go into detail here about how exactly I'm printing... but I will hold off just for now because I've noticed that in many other programs, things happen faster if the mouse is jiggled. I wonder if there is some classic error that many people have made in which the message loop is somehow slowed down by a non-moving mouse. EDIT: My method of "printing" is as follows... I have a rich edit control window to display text. When I want to print something, I append the new text on to the existing text within the window and then redraw the window with SendMessage(,WM_PAINT,0,0). Actually it's a bit more complicated: I have multiple rich edit control windows, one for each thread (4 threads on my 4-core PC). A rough outline of my "my_printf()" is as follows: void _cdecl my_printf(char *the_text_to_add) { EnterCriticalSection(&my_printf_critsec); GetWindowText(...); // get the existing text SetWindowText(...); // append the_text_to_add SendMessage(...WM_PAINT...); LeaveCriticalSection(&my_printf_critsec); } I should point out that I have been using this method of printing for years in a non-multithreaded program without even noticing any interaction with mouse-jiggling. EDIT: Ok, here's my entire message loop that runs on the root thread while the child threads do their work. The child threads call my_printf() to report on their progress.
for(;;) { DWORD dwWake; MSG msg; dwWake = MsgWaitForMultipleObjects( current_size_of_handle_list, hThrd, FALSE, INFINITE, QS_ALLEVENTS); if (dwWake >= WAIT_OBJECT_0 && dwWake < (WAIT_OBJECT_0 + current_size_of_handle_list)) { int index; index = dwWake - WAIT_OBJECT_0; int j; for (j = index+1;j < current_size_of_handle_list;j++) { hThrd[j-1] = hThrd[j]; } current_size_of_handle_list--; if (current_size_of_handle_list == 0) { break; } } else if (dwWake == (WAIT_OBJECT_0 + current_size_of_handle_list)) { while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { TranslateMessage(&msg); DispatchMessage(&msg); } } else if (dwWake == WAIT_TIMEOUT) { printmessage("TIMEOUT!"); } else { printmessage("Goof!"); } } EDIT: Solved! This may be an ugly solution - but I just changed the timeout from infinite to 20ms, then in the if (dwWake == WAIT_TIMEOUT) section I swapped printmessage("TIMEOUT!"); for: while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { TranslateMessage(&msg); DispatchMessage(&msg); } I'm not closing this question yet because I'd still like to know why the original code did not work all by itself.
I can see 3 problems here: The documentation for WM_PAINT says: The WM_PAINT message is generated by the system and should not be sent by an application. Unfortunately I don't know any workaround, but I think SetWindowText() will take care of repainting the window, so this call may be useless. SendMessage() is a blocking call and does not return until the message has been processed by the application. Since painting may take a while to be processed, your program is likely to hang in your critical section, especially when considering my 3rd point. PostMessage() would be much better here, since you have no reason to need your window to be repainted "right now". You are using QS_ALLEVENTS in MsgWaitForMultipleObjects(), but this mask DOES NOT include the QS_SENDMESSAGE flag. Thus your SendMessage() call is likely ignored and does not wake your thread. You should be using QS_ALLINPUT. Can you check the behavior of your application with an INFINITE timeout and the above 3 modifications included?
1,599,662
1,715,999
Black border around characters when draw Image to a transparent Bitmap
I have to draw a string on a transparent bitmap first, then draw it to the destination canvas. However, in certain cases there is a black border around the characters. Bitmap* tempImg = new Bitmap(1000, 1000, PixelFormat32bppARGB); Graphics tempGr(tempImg); tempGr.Clear(Color(0, 255,255,255)); Gdiplus::SolidBrush* brush = new SolidBrush(Color(255, 255, 0, 0 )); Gdiplus::FontFamily fontFamily(L"Times New Roman"); Gdiplus::Font* font = new Gdiplus::Font(&fontFamily, 19, FontStyleRegular, UnitPixel); RectF rec(400, 400, 1000, 10000); tempGr.DrawString( L"Merry Chrismas", -1, font, rec, NULL, brush ); Graphics desGr(hdc); desGr.Clear(Color::Gray); desGr.DrawImage(tempImg , 0,0, 1000, 1000); The characters drawn on desGr have a black border for some font sizes. How can I avoid this problem? Many thanks!
I think the problem here is that you are drawing the text onto a transparent background. You could try adding this line after the call to tempGr.Clear... tempGr.SetTextRenderingHint(TextRenderingHintAntiAlias); (That is the GDI+ C++ form; the C# equivalent would be tempGr.TextRenderingHint = TextRenderingHint.AntiAlias;)
1,599,702
1,599,724
C++ Console Application, hiding the title bar
I have a Windows console application written in C++ and want to hide/remove the complete title bar of the console window, including the close, min/max controls etc. I searched a lot but didn't find anything useful yet. I obtain the console HWND with GetConsoleWindow and tried to change the console window style with SetWindowLong by removing the WS_CAPTION flag, but this seems to have no effect at all: HWND hwnd = GetConsoleWindow(); LONG style = GetWindowLong(hwnd, GWL_STYLE); style &= ~(WS_BORDER|WS_CAPTION|WS_THICKFRAME); SetWindowLong(hwnd, GWL_STYLE, style); SetWindowPos( hwnd, NULL, 0,0,0,0, SWP_NOSIZE|SWP_NOMOVE|SWP_NOZORDER|SWP_NOACTIVATE |SWP_FRAMECHANGED ); I also tried GetSystemMenu/RemoveMenu but this seems only to disable controls like the close button.
You can't. Generally the hWnd of a console window is not guaranteed to be suitable for all window handle operations as, for example, documented here.
1,599,869
1,599,942
Project dependency in Visual Studio
In Visual Studio, I have two C++ projects - Gui.vcproj and Dll.vcproj. Gui is an application and Dll produces a DLL. What's the best way to make the dependency resolution automatic? I tried adding Dll.vcproj into Gui.vcproj's references, but it doesn't seem to work.
Create a solution. Add Gui.vcproj and Dll.vcproj to the solution. From the Solution Explorer window, right click the solution you just created. Choose Project dependencies. Select the project you want of these two, and check the 'Depends on' check box.
1,599,895
1,599,917
Multiple definitions of Split
Maybe I should still be in bed. I woke up wanting to program. At any rate, now I'm getting some linker errors that I'm baffled over. What do you make of all this? I hope I'm not posting too much of it. I was going to post just a piece, but that didn't feel right. I checked some of the header files mentioned in the errors, but I didn't see Split anywhere. Oddly enough it started out named split, but I got similar errors to this. /home/starlon/Projects/LCDControl/WidgetIcon.h:59: multiple definition of `LCD::Split(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char)' LCDControl.o:/home/starlon/Projects/LCDControl/WidgetIcon.h:59: first defined here QtDisplay.o: In function `LCD::Split(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char)': /usr/lib/gcc/i586-redhat-linux/4.4.1/../../../../include/c++/4.4.1/new:101: multiple definition of `LCD::Split(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char)' LCDControl.o:/home/starlon/Projects/LCDControl/WidgetIcon.h:59: first defined here DrvQt.o: In function `LCD::Split(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char)': /usr/lib/gcc/i586-redhat-linux/4.4.1/../../../../include/c++/4.4.1/bits/stl_deque.h:79: multiple definition of `LCD::Split(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char)' LCDControl.o:/home/starlon/Projects/LCDControl/WidgetIcon.h:59: first defined here LCDText.o: In function `LCD::Split(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char)': /usr/lib/gcc/i586-redhat-linux/4.4.1/../../../../include/c++/4.4.1/new:101: multiple definition of `LCD::Split(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char)' LCDControl.o:/home/starlon/Projects/LCDControl/WidgetIcon.h:59: first defined here Property.o: In function `LCD::Split(std::basic_string<char, std::char_traits<char>, 
std::allocator<char> > const&, char)': /usr/include/QtCore/qatomic_i386.h:125: multiple definition of `LCD::Split(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char)' LCDControl.o:/home/starlon/Projects/LCDControl/WidgetIcon.h:59: first defined here moc_QtDisplay.o: In function `LCD::Split(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char)': /home/starlon/Projects/LCDControl/WidgetIcon.h:59: multiple definition of `LCD::Split(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char)' LCDControl.o:/home/starlon/Projects/LCDControl/WidgetIcon.h:59: first defined here Here's Split: std::vector<std::string> Split(const std::string &s, char delim) { std::vector<std::string> elems; return elems; //Split(s, delim, elems); }
A usual cause for multiple definition errors like this is defining the function in a header file without the inline keyword. Also, if the Split function you posted belongs to an LCD class (or namespace), its definition is missing the LCD:: part.
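A minimal sketch of the fix (the splitting logic is filled in hypothetically, since the posted body is stubbed out): marking the header-defined function inline suspends the one-definition rule for it, so every .cpp that includes the header may carry its own copy without upsetting the linker.

```cpp
#include <sstream>
#include <string>
#include <vector>

// WidgetIcon.h (sketch) -- 'inline' is what prevents the
// "multiple definition" errors when several translation units
// include this header.
inline std::vector<std::string> Split(const std::string &s, char delim) {
    std::vector<std::string> elems;
    std::stringstream ss(s);
    std::string item;
    while (std::getline(ss, item, delim))  // split on each delimiter
        elems.push_back(item);
    return elems;
}
```

Alternatively, keep only the declaration in the header and move the definition into exactly one .cpp file.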
1,600,282
1,600,346
Guideline: while vs for
Disclaimer: I tried to search for similar question, however this returned about every C++ question... Also I would be grateful to anyone that could suggest a better title. There are two eminent loop structure in C++: while and for. I purposefully ignore the do ... while construct, it is kind of unparalleled I know of std::for_each and BOOST_FOREACH, but not every loop is a for each Now, I may be a bit tight, but it always itches me to correct code like this: int i = 0; while ( i < 5) { // do stuff ++i; // I'm kind and use prefix notation... though I usually witness postfix } And transform it in: for (int i = 0; i < 5; ++i) { // do stuff } The advantages of for in this example are multiple, in my opinion: Locality: the variable i only lives in the scope of the loop Pack: the loop 'control' is packed, so with only looking at the loop declaration I can figure if it is correctly formed (and will terminate...), assuming of course that the loop variable is not further modified within the body It may be inlined, though I would not always advised it (that makes for tricky bugs) I have a tendency therefore not to use while, except perhaps for the while(true) idiom but that's not something I have used in a while (pun intended). Even for complicated conditions I tend to stick to a for construct, though on multiple lines: // I am also a fan of typedefs for (const_iterator it = myVector.begin(), end = myVector.end(); it != end && isValid(*it); ++it) { /* do stuff */ } You could do this with a while, of course, but then (1) and (2) would not be verified. I would like to avoid 'subjective' remarks (of the kind "I like for/while better") and I am definitely interested to references to existing coding guidelines / coding standards. 
EDIT: I tend to really stick to (1) and (2) as far as possible, (1) because locality is recommended >> C++ Coding Standards: Item 18, and (2) because it makes maintenance easier if I don't have to scan a whole loop body to look for possible alterations of the control variable (which I take for granted using a for when the 3rd expression references the loop variables). However, as gf showed below, while does have its uses: while (obj.advance()) {} Note that this is not a rant against while but rather an attempt to find which one of while or for to use depending on the case at hand (and for sound reasons, not merely liking).
Not all loops are for iteration: while(condition) // read e.g.: while condition holds { } is ok, while this feels forced: for(;condition;) { } You often see this for any input sources. You might also have implicit iteration: while(obj.advance()) { } Again, it looks forced with for. Additionally, when forcing for instead of while, people tend to misuse it: for(A a(0); foo.valid(); b+=x); // a and b don't relate to loop-control
1,600,399
1,600,413
Are C++ libs created with different versions of Visual Studio compatible with each other?
I am creating a open-source C++ library using Visual Studio 2005. I would like to provide prebuilt libs along with the source code. Are these libs, built with VS2005, also going to work with newer versions of Visual Studio (esp VS Express Edition 2008)? Or do I need to provide separate libs per VS version?
If you are distributing static libraries, you may be able to distribute version-independent libraries, depending on exactly what you are doing. If you are only making calls to the OS, then you may be OK. C RTL functions, maybe. But if you use any C++ Standard Library functions, classes, or templates, then probably not. If distributing DLLs, you will need separate libraries for each VS version. Sometimes you even need separate libraries for various service-pack levels. And as mentioned by VolkerK, users of your library will have to use compatible compiler and linker settings. And even if you do everything right, users may need to link with other libraries that are somehow incompatible with yours. Due to these issues, instead of spending time trying to build all these libraries for your users, I'd spend the time making them as easy to build as possible, so that users can can build them on their own with minimal fuss.
1,600,464
1,600,561
templates problem ('typename' as not template function parameter)
Actually I have a problem compiling some library with the intel compiler. The same library has been compiled properly with g++. The problem is caused by templates. What I'd like to understand is the declaration of typename as a non-template function parameter and as a variable declaration inside a function body, for example: void func(typename sometype){.. ... typename some_other_type; .. } Compiling this kind of code produces the following errors with intel (gcc doesn't complain): ../../../libs/log/src/attribute_set.cpp(415): error: no operator "!=" matches these operands operand types are: boost::log_st::basic_attribute_set<wchar_t>::iter<'\000'> != boost::log_st::basic_attribute_set<wchar_t>::iter<'\000'> while (begin != end) ^ detected during instantiation of "void boost::log_st::basic_attribute_set<CharT>::erase(boost::log_st::basic_attribute_set<CharT>::iter<'\000'>, boost::log_st::basic_attribute_set<CharT>::iter<'\000'>) [with CharT=wchar_t]" at line 438 ../../../boost/log/attributes/attribute_set.hpp(115): error: no operator "!=" matches these operands operand types are: boost::log_st::basic_attribute_set<wchar_t>::iter<'\000'> != boost::log_st::basic_attribute_set<wchar_t>::iter<'\000'> if (it != m_pContainer->end()) What I'd like to understand is the usage of typename inside the body of functions and in parameter declarations, e.g.: template< typename CharT > struct basic_attribute_values_view< CharT >::implementation { public: .. .. void adopt_nodes( typename attribute_set_type::const_iterator& it, typename attribute_set_type::const_iterator end) { for (; it != end; ++it) push_back(it->first, it->second.get()); } In a different file I have: template< typename CharT > class basic_attribute_set { friend class basic_attribute_values_view< CharT >; //! Self type typedef basic_attribute_set< CharT > this_type; public: //! Character type typedef CharT char_type; //! String type typedef std::basic_string< char_type > string_type; //!
Key type typedef basic_slim_string< char_type > key_type; //! Mapped attribute type typedef shared_ptr< attribute > mapped_type; //! Value type typedef std::pair< const key_type, mapped_type > value_type; //! Allocator type typedef std::allocator< value_type > allocator_type; //! Reference type typedef typename allocator_type::reference reference;
You need to use typename for so-called "dependent types". Those are types that depend on a template argument and are not known until the template is instantiated. It's probably best explained using an example: struct some_foo { typedef int bar; }; template< typename Foo > struct baz { typedef Foo::bar barbar; // wrong, shouldn't compile barbar f(); // would be fine if barbar were a type // more stuff... }; That typedef defining barbar is one that requires a typename in order for the compiler to be able to check the template for blatant syntactic errors before it is instantiated with a concrete type. The reason is that, when the compiler sees the template for the first time (when it's not instantiated with concrete template parameters yet), the compiler doesn't know whether Foo::bar is a type. For all it knows, I might intend baz to be instantiated with types like this one struct some_other_foo { static int bar; }; in which case Foo::bar would refer to an object, not a type, and the definition of baz::barbar would be syntactic nonsense. Without knowing whether Foo::bar refers to a type, the compiler has no chance to check anything within baz that's directly or indirectly using barbar for even the most stupid typos until baz is instantiated. Using the proper typename, baz looks like this: template< typename Foo > struct baz { typedef typename Foo::bar barbar; barbar f(); // more stuff... }; Now the compiler at least knows that Foo::bar is supposed to be the name of a type, which makes barbar a type name, too. So the declaration of f() is syntactically OK, too. By the way, there's a similar problem with templates instead of types: template< typename Foo > struct baz { Foo::bar<Foo> create_wrgl(); // wrong, shouldn't compile }; When the compiler "sees" Foo::bar it doesn't know what it is, so bar<Foo could just as well be a comparison, leaving the compiler confused about the trailing >. 
Here, too, you need to give the compiler a hint that Foo::bar is supposed to be the name of a template: template< typename Foo > struct baz { Foo::template bar<Foo> create_wrgl(); }; Beware: notably, Visual C++ still doesn't implement proper two-phase lookup (in essence: it doesn't really check templates until they are instantiated). Therefore it often accepts erroneous code that is missing a typename or a template.
1,600,936
1,600,968
Officially, what is typename for?
On occasion I've seen some really indecipherable error messages spit out by gcc when using templates... Specifically, I've had problems where seemingly correct declarations were causing very strange compile errors that magically went away by prefixing the typename keyword to the beginning of the declaration... (For example, just last week, I was declaring two iterators as members of another templated class and I had to do this)... What's the story on typename?
Following is the quote from Josuttis' book: The keyword typename was introduced to specify that the identifier that follows is a type. Consider the following example: template <class T> class MyClass { typename T::SubType * ptr; ... }; Here, typename is used to clarify that SubType is a type of class T. Thus, ptr is a pointer to the type T::SubType. Without typename, SubType would be considered a static member. Thus T::SubType * ptr would be a multiplication of the value SubType of type T with ptr.
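A compilable version of that example might look like the sketch below (SomeType and the default constructor are illustrative additions, not from the book):

```cpp
#include <cstddef>

struct SomeType {
    typedef int SubType;  // nested type name inside the class
};

template <class T>
class MyClass {
public:
    // Without 'typename', T::SubType would be assumed to name a static
    // member, and '* ptr' would parse as a multiplication rather than
    // a pointer declaration.
    typename T::SubType *ptr;
    MyClass() : ptr(NULL) {}
};
```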
1,600,939
1,600,976
Storing heterogeneous objects in vector with stack-allocated objects
Hello, Say I have an abstract class CA, derived into CA1, CA2, and maybe others. I want to put objects of these derived types into a vector that I embed into a class CB. To get polymorphism right, I need to store a vector of pointers: class CB { std::vector <CA*> v; }; Now, say I have the following main function: int main() { CB b; CA1 a1; CA2 a2; b.Store( a1 ); b.Store( a2 ); } How do I write the method void CB::Store(const CA&) in a simple way, so the stored objects survive when the original objects get destroyed (which doesn't occur in the simple example above)? My problem is that I need to first copy the objects on the heap before storing their address in the vector, but how can I create an object of a derived type? Sure, I could use RTTI, and search for all possible types, create and allocate a pointer, and copy (with proper casting) the object into the allocated space before pushing it into the vector. But this seems quite complicated, no? Is there a simpler way? (And without using dynamic allocation in the main!)
Commonly, you will provide a clone function: struct CA { virtual CA *clone(void) const = 0; virtual ~CA() {} // And so on for base classes. }; struct CA1 : public CA { virtual CA *clone(void) const { return new CA1(*this); } }; struct CA2 : public CA { virtual CA *clone(void) const { return new CA2(*this); } }; This is called a virtual constructor; you can construct copies of objects at runtime: void CB::Store(const CA& pObject) { CA *cloned = pObject.clone(); } You should consider using the Boost.Pointer Container library. Your code would be: boost::ptr_vector<CA> objects; void CB::Store(const CA& pObject) { objects.push_back(pObject.clone()); } And now you have no need to manage memory yourself. The library also respects clone functions, and will call them when making copies of your objects. Tutorial here.
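If Boost isn't available, the same clone idiom works with a plain std::vector of pointers and manual cleanup in CB's destructor. This is only a sketch: the value() accessor is hypothetical, added so the heap copies can be observed after the originals go away.

```cpp
#include <cstddef>
#include <vector>

struct CA {
    virtual CA *clone() const = 0;   // "virtual constructor"
    virtual int value() const = 0;   // hypothetical accessor for the demo
    virtual ~CA() {}
};

struct CA1 : public CA {
    virtual CA *clone() const { return new CA1(*this); }
    virtual int value() const { return 1; }
};

struct CA2 : public CA {
    virtual CA *clone() const { return new CA2(*this); }
    virtual int value() const { return 2; }
};

class CB {
    std::vector<CA*> v;
public:
    // The heap copy survives even after the caller's object is destroyed.
    void Store(const CA &obj) { v.push_back(obj.clone()); }
    int ValueAt(std::size_t i) const { return v[i]->value(); }
    ~CB() {
        for (std::size_t i = 0; i < v.size(); ++i)
            delete v[i];             // CB owns its clones
    }
};
```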
1,601,008
1,601,044
identify the exact header file
I am using some macro in my source file (*.c). Is there any way, during compilation or from the library, that I can identify the exact header file from which this particular macro is getting resolved? The issue is that we are using a macro #defined to 10 in some header file, but the value being received in the code is 4. So instead of going and checking all the dependency files, we want to know whether there is some direct way to identify the source from which the macro got resolved.
If you just run cpp (the C preprocessor) on the file, the output will contain #line directives of the form #line 45 "silly-file-with-macros.h" for the compiler saying where everything came from. So one way is to use cpp my-file.c | more and look for the #line directive. Depending on your compiler, another trick you could use is to redefine the macro to something else, and the compiler will spit out a warning like test-eof.c:5:1: warning: "FRED" redefined test-eof.c:3:1: warning: this is the location of the previous definition (this is from gcc) which should tell you where the macro was previously defined. But come to think of it, how is it that you aren't getting that warning already? Another idea is to use makedepend to get a list of all the included files, then grep them for #define lines in them.
1,601,030
1,601,082
Internet Explorer window in Qt?
Is there a way to show an Internet Explorer instance/frame inside a Qt Widget? I need to show a web page in my application (just show, no need for interaction), and while I read about WebKit for Qt, I'd like to know if there is another way without it, since I'm trying to keep the application as small as possible, and it would make me very unhappy to include such a large library (and nobody wants that, right?)
Yes: you need the commercial edition of Qt, and then you can use ActiveQt.
1,601,060
1,602,142
STL like container with O(1) performance
I couldn't find an answer but I am pretty sure I am not the first one looking for this. Did anyone know / use / see an STL like container with bidirectional access iterator that has O(1) complexity for Insert/Erase/Lookup ? Thank you.
In practice, it may be sufficient to use an array (vector) and defer the costs of inserts and deletes. Delete an element by marking it as deleted; insert an element into a bin at the desired position and remember the offset for larger indices. Inserts and deletes will be O(1) plus an O(N) cleanup at a convenient time; lookup will be O(1) average, O(number of changes since last cleanup) worst case.
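A minimal sketch of that deferred-cost idea (the container name and interface are hypothetical): erase is an O(1) tombstone, and an occasional cleanup() pays the O(N) compaction in one go.

```cpp
#include <cstddef>
#include <vector>

template <class T>
class LazyVector {
    std::vector<T> data;
    std::vector<bool> dead;   // tombstone flags, parallel to data
public:
    void push_back(const T &x) { data.push_back(x); dead.push_back(false); }
    void erase(std::size_t i)  { dead[i] = true; }   // O(1): just mark it
    const T &operator[](std::size_t i) const { return data[i]; }
    bool alive(std::size_t i) const { return !dead[i]; }
    std::size_t size() const { return data.size(); }

    void cleanup() {          // O(N): compact away the tombstones
        std::size_t out = 0;
        for (std::size_t i = 0; i < data.size(); ++i)
            if (!dead[i])
                data[out++] = data[i];
        data.resize(out);
        dead.assign(out, false);
    }
};
```

Lookups between cleanups must skip dead slots, which is where the "O(changes since last cleanup)" worst case comes from.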
1,601,129
1,601,374
DLL Injection/IPC question
I'm work on a build tool that launches thousands of processes (compiles, links etc). It also distributes executables to remote machines so that the build can be run accross 100s of slave machines. I'm implementing DLL injection to monitor the child processes of my build process so that I can see that they opened/closed the resources I expected them to. That way I can tell if my users aren't specifying dependency information correctly. My question is: I've got the DLL injection working but I'm not all that familiar with windows programming. What would be the best/fastest way to callback to the parent build process with all the millions of file io reports that the children will be generating? I've thought about having them write to a non-blocking socket, but have been wondering if maybe pipes/shared memory or maybe COM would be better?
First, since you're apparently dealing with communication between machines, not just within one machine, I'd rule out shared memory immediately. I'd think hard about trying to minimize the amount of data instead of worrying a lot about how fast you can send it. Instead of sending millions of file I/O reports, I'd batch together a few kilobytes of that data (or something on that order) and send a hash of that packet. With a careful choice of packet size, you should be able to reduce your data transmission to the point that you can simply use whatever method you find most convenient, rather than trying to pick the one that's the fastest.
1,601,261
1,601,482
Marking library functions as deprecated/unusable without modifying their source code
I have a large codebase that uses a number of unsafe functions, such as gmtime and strtok. Rather than trying to search through the codebase and replace these wholesale, I would like to make the compiler emit a warning or error when it sees them (to highlight the problem to maintenance developers). Is this possible with GCC? I already know about __attribute__((deprecated)), but AFAIK I can't use it since I don't have control of the header files where these functions are declared.
Create a custom header deprecated.h. In there, create your own wrapper functions, deprecated_strtok() etcetera that merely call strtok. Mark those with __attribute__((deprecated)). Below those definitions, #define strtok deprecated_strtok. Finally, use -include deprecated.h
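A sketch of what that deprecated.h might contain (GCC-specific; the wrapper name is just a convention). The #define must come after the wrapper's definition so the wrapper itself still reaches the real strtok; every later use of strtok in the codebase then goes through the deprecated wrapper and triggers the warning.

```cpp
// deprecated.h (sketch) -- pull in with: gcc -include deprecated.h ...
#include <string.h>

__attribute__((deprecated))
static char *deprecated_strtok(char *s, const char *delim) {
    return strtok(s, delim);   // forward to the real, not-yet-renamed strtok
}

// From here on, any call to strtok expands to the deprecated wrapper.
#define strtok deprecated_strtok
```

The same pattern works for gmtime and the other functions you want flagged. Code still compiles and behaves identically; maintainers just see a deprecation warning at each call site.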
1,601,431
1,612,131
Is Lua the best/fastest choice for a gaming server?
I am working on a project where I want users to be able to modify and customize as much as possible. Open source might be a good choice but not due to the fact that I want to keep a few internal classes closed. Two other options that I thought about were plug-ins as external libraries and Lua scripting. The problem with libraries (DLLs) are that cross-platform compatibility is a must-have because it is some kind of a game server and it is mainly designed for use on dedicated servers (often Linux) yet many people will also use it on their local machine (mostly Windows). Due to the fact that it's a game server application that should be able to handle lots of connections and actions related to the game performance is very important so I have doubts with Lua scripts. Are my doubts reasonable or would Lua be a good solution? Also can you think of any better / other option for my concern? To sum up the important aspects: cross-platform compatibility good performance (-> online game) plug-ins / scripts that anyone can create as long as he/she knows about the language, may it be Lua, C or whatever option for closed source plug-ins / scripts (not so important, but would be fine :)
I'm afraid the only one who can answer if Lua will be fast enough for you is... you. We have no idea what exactly are you doing and how are you implementing it. My suggestion is to prototype and measure. Write a small, but relevant, part of your system in both Lua and C/C++, measure the performance of both and decide if Lua is fast enough. Having WoW as a case study, Lua seems to be fast enough for the client/UI part of the game, but I cannot say anything about the server. But anyway, I doubt there's language out there that's faster and easily embeddable compared to Lua (disclaimer: I haven't measured Lua performance myself, especially not against other similar languages, so take this with a grain of salt) You mention something about DLLs not being cross-platform, so just FYI: if you want to use DLLs for plugins and load them dynamically, the same functionality exists on Linux. The "DLLs" are called "shared libraries" or "shared objects" and usually go by the extension of .so. And instead of the windows LoadLibrary, GetProcAddress and FreeLibrary, there are dlopen, dlsym and dlclose.
1,601,457
1,601,486
C++ Interfaces in stl::list
LessonInterface class ILesson { public: virtual void PrintLessonName() = 0; virtual ~ILesson() {} }; stl container typedef list<ILesson> TLessonList; calling code for (TLessonList::const_iterator i = lessons.begin(); i != lessons.end(); i++) { i->PrintLessonName(); } The error: Description Resource Path Location Type passing ‘const ILesson’ as ‘this’ argument of ‘virtual void ILesson::PrintLessonName()’ discards qualifiers
You can't "put" objects of a class that has pure virtual functions into the list (because you can't instantiate such a class). Maybe you mean: // store a pointer which points to a child actually. typedef list<ILesson*> TLessonList; OK, as others pointed out, you have to make PrintLessonName a const member function. I would add that there is another small pitfall here. PrintLessonName must be const in both the base and the derived classes, otherwise they will not have the same signature: class ILesson { public: virtual void PrintLessonName() const = 0; virtual ~ILesson() {} }; class SomeLesson : public ILesson { public: // const is mandatory in the child virtual void PrintLessonName() const { // } virtual ~SomeLesson() {} }; To be honest, I find Jerry Coffin's answer helpful for redesigning the printing functionality.
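Putting both fixes together -- pointers in the list, and a const member function -- a compilable sketch might look like this. The name-returning variant and the MathLesson class are hypothetical, used here so the result is easy to check instead of printing:

```cpp
#include <list>
#include <string>

class ILesson {
public:
    // const, so it can be called through a const_iterator
    virtual std::string LessonName() const = 0;
    virtual ~ILesson() {}
};

class MathLesson : public ILesson {   // hypothetical concrete lesson
public:
    virtual std::string LessonName() const { return "math"; }
};

// Store pointers: the abstract ILesson itself can't live in the list.
typedef std::list<ILesson*> TLessonList;

inline std::string JoinNames(const TLessonList &lessons) {
    std::string out;
    for (TLessonList::const_iterator i = lessons.begin(); i != lessons.end(); ++i)
        out += (*i)->LessonName();    // dereference the pointer, then call
    return out;
}
```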
1,601,598
1,601,891
Confusion with the problems of inline function
In the "Problems" section of the Wikipedia article on inline expansion (http://en.wikipedia.org/wiki/Inline_expansion#Problems) it says: "A language specification may allow a program to make additional assumptions about arguments to procedures that it can no longer make after the procedure is inlined." Could somebody elaborate this point? How do you prevent GCC from inlining a C++ function?
In C++, the inline keyword really only has one required meaning: that the One-Definition Rule is suspended for that function (e.g., the function can be defined in several translation units, and the code still conforms). Specifically, using the inline keyword does not ensure that the code for that function will be generated inline. Defining a function inside a class definition also makes it an inline function -- but, again, that doesn't ensure that its code will be generated inline either. Conversely, a function that is defined outside a class definition, without the inline keyword, can and may still have its code generated inline. The only difference is that in this case multiple definitions of the function render the code non-conforming. The bottom line is that portable code cannot assure that code either is or is not generated inline. If you don't mind making your code non-portable, however, you can use __attribute__((noinline)). I would not, however, do this on the basis of the cited quote from Wikipedia. Wikipedia is hardly an authoritative source, and even if it was, what you're quoting is just a vague statement about what could happen with some hypothetical language on some hypothetical compiler under some hypothetical conditions. You're generally better off writing your code to be clear and readable, and letting the compiler worry about generating good results from that.
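For the second part of the question, a tiny GCC-specific sketch (the function itself is arbitrary): the noinline attribute asks the compiler to keep the function out-of-line even at high optimization levels, while the program behaves exactly the same.

```cpp
// GCC extension, not portable C++: forbid inlining of this function.
__attribute__((noinline))
int add_one(int x) {
    return x + 1;
}
```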
1,601,904
1,602,088
stuck in a template requirement loop
I have a class that uses an "add-on" template to add additional functionality as below: template< class T > class AddOn_A { public: int SomeFuncA() { T* pT = static_cast< T* >( this ); return pT->DoSomething() + 1; }; }; class CMyClass : public AddOn_A< CMyClass > { public: int DoSomething() { return 100; }; }; int _tmain(int argc, _TCHAR* argv[]) { CMyClass A; _ASSERT( A.SomeFuncA() == 101 ); return 0; } Now I would like to extend this such that CMyClass can accept different add-ons like AddOn_B. template< class T > class AddOn_B { public: int SomeFuncB() { T* pT = static_cast< T* >( this ); return pT->DoSomething() + 2; }; }; template< class AddOn > class CMyClass : public AddOn { public: int DoSomething() { return 100; }; }; int _tmain(int argc, _TCHAR* argv[]) { // error C3203: 'AddOn_A' : unspecialized class template can't be used as a template argument for template parameter 'AddOn', expected a real type // error C2955: 'AddOn_A' : use of class template requires template argument list CMyClass< AddOn_A > A; _ASSERT( A.SomeFuncA() == 101 ); // same errors here CMyClass< AddOn_B > B; _ASSERT( B.SomeFuncB() == 102 ); return 0; } Unfortunately, each Add_On requires CMyClass as a template parameter which requires an Add_On, etc... I'm in a requirement loop. Is there some template magic I can use to get the functionality I'm looking for? Is there a better method of doing this? Thanks, PaulH
Apparently you are trying to use the famous Curiously Recurring Template Pattern. I am not sure of what you exactly want to do, but you might get away with another solution: What if you used two classes: class Base {}; class MyClass: public AddOn<Base> {}; You may also use a Policy Based approach: class PolicyA_A {}; class PolicyA_B {}; class PolicyB_A {}; class PolicyB_B {}; template <class PolicyA, class PolicyB> class MyClass: private PolicyA, private PolicyB {}; typedef MyClass<PolicyA_A, PolicyB_A> MyClassAA; The idea is to delegate part of the job to policies to add flexibility. Last but not least you may use a Decorator approach: class Base {}; template <class T> class AddOn_A: public T {}; class MyClass: public AddOn_A< AddOn_B< Base > > {}; It allows you to get rid of the virtual inheritance by suppressing the Multi Inheritance and making the hierarchy linear.
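A compilable sketch of the decorator approach, reusing the add-ons from the question. The key change is that Base supplies DoSomething(), so the add-ons inherit it from their template parameter and the circular "add-on needs the class, class needs the add-on" dependency disappears:

```cpp
// Base already provides what the add-ons need -- no CRTP required.
class Base {
public:
    int DoSomething() { return 100; }
};

template <class T>
class AddOn_A : public T {
public:
    // 'this->' is needed because DoSomething lives in a dependent base.
    int SomeFuncA() { return this->DoSomething() + 1; }
};

template <class T>
class AddOn_B : public T {
public:
    int SomeFuncB() { return this->DoSomething() + 2; }
};

// Stack the add-ons in a linear hierarchy: CMyClass gets both.
class CMyClass : public AddOn_A< AddOn_B< Base > > {};
```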
1,601,943
1,601,974
Mutex lock on write only
I have a multithreaded C++ application which holds a complex data structure in memory (cached data). Everything is great while I just read the data. I can have as many threads as I want access the data. However the cached structure is not static. If the requested data item is not available it will be read from database and is then inserted into the data tree. This is probably also not problematic and even if I use a mutex while I add the new data item to the tree that will only take few cycles (it's just adding a pointer). There is a Garbage Collection process that's executed every now and then. It removes all old items from the tree. To do so I need to lock the whole thing down to make sure that no other process is currently accessing any data that's going to be removed from memory. I also have to lock the tree while I read from the cache so that I don't remove items while they are processed (kind of "the same thing the other way around"). "Pseudocode": function getItem(key) lockMutex() foundItem = walkTreeToFindItem(key) copyItem(foundItem, safeCopy) unlockMutex() return safeCopy end function function garbageCollection() while item = nextItemInTree if (tooOld) then lockMutex() deleteItem(item) unlockMutex() end if end while end function What's bothering me: This means, that I have to lock the tree while I'm reading (to avoid the garbage collection to start while I read). However - as a side-effect - I also can't have two reading processes at the same time anymore. Any suggestions? Is there some kind of "this is a readonly action that only collides with writes" Mutex?
Look into read-write locks. You didn't specify which framework you can use, but both pthreads and Boost have implemented that pattern.
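A sketch of how the cache from the question might use a POSIX read-write lock (the std::map stand-in and method names are illustrative; Boost's shared_mutex offers the same semantics). Any number of readers may hold the lock at once, while the garbage collector's write lock excludes everyone:

```cpp
#include <pthread.h>
#include <map>

class Cache {
    std::map<int, int> tree;           // stand-in for the cached data tree
    mutable pthread_rwlock_t lock;     // mutable: readers lock in const methods
public:
    Cache()  { pthread_rwlock_init(&lock, NULL); }
    ~Cache() { pthread_rwlock_destroy(&lock); }

    void insert(int key, int value) {
        pthread_rwlock_wrlock(&lock);  // exclusive: mutates the tree
        tree[key] = value;
        pthread_rwlock_unlock(&lock);
    }
    int getItem(int key) const {
        pthread_rwlock_rdlock(&lock);  // shared: many readers concurrently
        std::map<int, int>::const_iterator it = tree.find(key);
        int result = (it == tree.end()) ? -1 : it->second;
        pthread_rwlock_unlock(&lock);
        return result;
    }
    void garbageCollect() {
        pthread_rwlock_wrlock(&lock);  // exclusive: blocks all readers
        tree.clear();                  // safe: nobody is mid-read
        pthread_rwlock_unlock(&lock);
    }
};
```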
1,602,058
1,602,165
Why is the copy-constructor argument const?
Vector(const Vector& other) // Copy constructor { x = other.x; y = other.y; Why is the argument a const?
You've gotten answers that mention ensuring that the ctor can't change what's being copied -- and they're right, putting the const there does have that effect. More important, however, is that a temporary object cannot bind to a non-const reference. The copy ctor must take a reference to a const object to be able to make copies of temporary objects.
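A small sketch making that point concrete (the two-argument constructor and the makeCopy helper are illustrative): copy-constructing from a temporary only compiles because the copy constructor's parameter is a reference to const.

```cpp
struct Vector {
    int x, y;
    Vector(int px, int py) : x(px), y(py) {}
    Vector(const Vector &other) : x(other.x), y(other.y) {}
    // With Vector(Vector &other) instead, the line below would not
    // compile: a temporary cannot bind to a non-const reference.
};

inline Vector makeCopy() {
    Vector copy(Vector(3, 4));   // copy-construct from a temporary
    return copy;                 // returning by value also copies
}
```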
1,602,083
1,605,107
Async operations with I/O Completion Ports return 0 bytes transferred
Asynchronous operations with I/O Completion Ports return 0 bytes transferred, although the I/O operations work as expected (my read buffers become full). BYTE buffer[1024] = {0}; OVERLAPPED o = {0}; HANDLE file = CreateFile( _T("hello.txt"), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL ); HANDLE completion_port = CreateIoCompletionPort( file, NULL, 0, 0 ); ReadFile( file, buffer, 1024, NULL, &o ); In the work thread: DWORD numBytes = 0; LPOVERLAPPED po; GetQueuedCompletionStatus( completion_port, &numBytes, 0, &po, INFINITE ); GetOverlappedResult(file, &o, &numBytes, FALSE); Both functions return 0 bytes in numBytes, but buffer is filling. Is this expected behaviour? Thanks.
For GetQueuedCompletionStatus to work correctly, you need to specify a non-null pointer to a ULONG_PTR for it to write the 'key' value to: ULONG_PTR key; GetQueuedCompletionStatus( completion_port, &numBytes, &key, &po, INFINITE ); To use GetOverlappedResult successfully, I believe you need to specify an event handle in the OVERLAPPED structure (strongly recommended in any case): o.hEvent = CreateEvent(NULL, FALSE, TRUE, NULL); Calling the two in succession as you were doesn't really accomplish much -- they both tell you about the same things. Though if you do call both in succession, you'll need to change the event to be manual-reset by changing the second parameter of CreateEvent to TRUE. My guess is that you were just trying both to see if you could get one to work. All things considered, I'd probably just use GetQueuedCompletionStatus, and leave it at that. Of course, you'll usually do more than call it once and quit. You normally call it in a loop, processing the current buffer you've read, then calling ReadFile again to read another buffer of info, something like this: DWORD numBytes; LPOVERLAPPED po; while (GetQueuedCompletionStatus(completion_port, &numBytes, &key, &po, INFINITE)) { std::cout << "\rRead: " << numBytes; // just to show it's set correctly. process(buffer); po->Offset += sizeof(buffer); ReadFile(file, buffer, sizeof(buffer), NULL, po); } At least in a quick test on my machine, this showed the number of bytes read correctly (sizeof(buffer) up to the last packet, then the remaining size of the file).
1,602,451
1,602,594
C++ valarray vs. vector
I like vectors a lot. They're nifty and fast. But I know this thing called a valarray exists. Why would I use a valarray instead of a vector? I know valarrays have some syntactic sugar, but other than that, when are they useful?
Valarrays (value arrays) are intended to bring some of the speed of Fortran to C++. You wouldn't make a valarray of pointers, so the compiler can make assumptions about the code and optimise it better. (The main reason that Fortran is so fast is that there is no pointer type, so there can be no pointer aliasing.) Valarrays also have classes which allow you to slice them up in a reasonably easy way, although that part of the standard could use a bit more work. Resizing them is destructive, and they lacked iterators until C++11 added them. So, if it's numbers you are working with and convenience isn't all that important, use valarrays. Otherwise, vectors are just a lot more convenient.
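A small sketch of those numeric conveniences (the helper names are illustrative): arithmetic operators apply element-wise to the whole array, and std::slice selects strided subsets, Fortran-style.

```cpp
#include <valarray>

// Element-wise multiply the whole array with itself, then reduce.
inline double sumOfSquares(const std::valarray<double> &v) {
    return (v * v).sum();
}

// slice(start, size, stride): here elements v[0], v[2], v[4], ...
inline double sumEveryOther(std::valarray<double> &v) {
    std::valarray<double> every_other = v[std::slice(0, (v.size() + 1) / 2, 2)];
    return every_other.sum();
}
```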
1,602,692
1,602,765
Pointer stability under Windows Vista
I have been using Visual Studio 2005 under Windows XP Pro 64-bit for C and C++ projects for a while. One of the popular tricks I have been using from time to time in the debugger was to remember a numeric pointer value from the previous debugging run of the program (say 0x00000000FFAB8938), add it to watch window with a proper typecast (say, ((MyObject *) 0x00000000FFAB8938)->data_field) and then watch the memory occupied by the object during the next debugging run. In many cases this is quite a convenient and useful thing to do, since as long as the code remains unchanged, it is reasonable to expect that the allocated memory layout will remain unchanged as well. In short, it works. However, relatively recently I started using the same version of Visual Studio on a laptop with Windows Vista (Home Premium) 64-bit. Strangely enough, it is much more difficult to use this trick in that setup. The actual memory address seems to change rather often from run to run for no apparent reason, i.e. even when the code of the program was not changed at all. It appears that the actual address is not changing entirely randomly, it just selects one value from a fixed more-or-less stable set of values, but in any case it makes it much more difficult to do this type of memory watching. Does anyone know the reason of this behavior in Windows Vista? What is causing the change in memory layout? Is that some external intrusion into the process address space from other [system] processes? Or is it some quirk/feature of Heap API implementation under Vista? Is there any way to prevent this from happening?
Windows Vista implements address space layout randomization, heap randomization, and stack randomization. This is a security mechanism, trying to prevent buffer overflow attacks that rely on the knowledge of where each piece of code and data is in memory. It's possible to turn off ASLR by setting the MoveImages registry value. I couldn't find a way to disable heap randomization, but some Microsoft guy recommends computing addresses relative to _crtheap. Even if the heap moves around, the relative address may remain stable.
1,602,998
1,603,039
Fastest way to obtain the largest X numbers from a very large unsorted list?
I'm trying to obtain the top, say, 100 scores from a list of scores being generated by my program. Unfortunately the list is huge (on the order of millions to billions) so sorting is a time-intensive portion of the program. What's the best way of doing the sorting to get the top 100 scores?

The only two methods I can think of so far are either first generating all the scores into a massive array and then sorting it and taking the top 100, or second, generating X number of scores, sorting them and truncating to the top 100 scores, then continuing to generate more scores, adding them to the truncated list and then sorting it again.

Either way I do it, it still takes more time than I would like. Any ideas on how to do it in an even more efficient way? (I've never taken programming courses before; maybe those of you with comp sci degrees know about efficient algorithms to do this, at least that's what I'm hoping.)

Lastly, what's the sorting algorithm used by the standard sort() function in C++?

Thanks,
-Faken

Edit: Just for anyone who is curious... I did a few time trials on the before and after and here are the results:

Old program (performs sorting after each outer loop iteration):
top 100 scores: 147 seconds
top 10 scores: 147 seconds
top 1 scores: 146 seconds
Sorting disabled: 55 seconds

New program (implementing tracking of only top scores and using default sorting function):
top 100 scores: 350 seconds <-- hmm... worse than before
top 10 scores: 103 seconds
top 1 scores: 69 seconds
Sorting disabled: 51 seconds

New rewrite (optimizations in data stored, hand-written sorting algorithm):
top 100 scores: 71 seconds <-- Very nice!
top 10 scores: 52 seconds
top 1 scores: 51 seconds
Sorting disabled: 50 seconds

Done on a Core 2, 1.6 GHz... I can't wait till my Core i7 860 arrives...
There are a lot of other, even more aggressive optimizations for me to work out (mainly in the area of reducing the number of iterations I run), but as it stands right now, the speed is more than good enough; I might not even bother to work out those algorithm optimizations. Thanks to everyone for their input!
1. Take the first 100 scores, and sort them in an array.
2. Take the next score, and insertion-sort it into the array (starting at the "small" end).
3. Drop the 101st value.
4. Continue with the next value, repeating from step 2, until done.

Over time, the list will resemble the 100 largest values more and more, so more often you will find that the insertion sort immediately aborts, finding that the new value is smaller than the smallest of the candidates for the top 100.
1,603,000
1,603,272
decorator with a base that requires a constructor argument
I have a decorator-like pattern with a base that requires a constructor parameter. The decorator is constructed such that it can take an arbitrary number of add-on components as template parameters (up to 3 in this example). Unfortunately, I can't figure out how to pass the base's constructor parameter to it when more than one add-on is specified. In the example below, CMyClass< AddOn_A > A( 100 ); works perfectly, but CMyClass< AddOn_A, AddOn_B > AB( 100 ); generates an error at the CMyClass constructor.

template< class Base >
class AddOn_A : public Base
{
public:
    AddOn_A( int x ) : Base( x ) { };
    int AddOne() { return static_cast< Base* >( this )->DoSomething() + 1; };
};

template< class Base >
class AddOn_B : public Base
{
public:
    AddOn_B( int x ) : Base( x ) { };
    int AddTwo() { return static_cast< Base* >( this )->DoSomething() + 2; };
};

class CBase
{
public:
    explicit CBase( int x ) : x_( x ) { };
    int DoSomething() { return x_; };
private:
    int x_;
};

// define an empty AddOn
template< class > class empty {};

// forward declaration and Add-On defaults
template< template< class > class AddOn1 = empty,
          template< class > class AddOn2 = empty,
          template< class > class AddOn3 = empty >
class CMyClass;

// specialized template for the default case
template<> class CMyClass< empty, empty, empty > {};

// actual definition
template< template< class > class AddOn1,
          template< class > class AddOn2,
          template< class > class AddOn3 >
class CMyClass : public AddOn1< CBase >,
                 public CMyClass< AddOn2, AddOn3 >
{
public:
    // what needs to go here???
    CMyClass( int x ) : AddOn1< CBase >( x ) {};
};

int _tmain( int argc, _TCHAR* argv[] )
{
    // works
    CMyClass< AddOn_A > A( 100 );
    _ASSERT( A.AddOne() == 101 );

    // works
    CMyClass< AddOn_B > B( 100 );
    _ASSERT( B.AddTwo() == 102 );

    // generates an error at the CMyClass ctor:
    // error C2512: 'CMyClass<AddOn1>' : no appropriate default constructor available
    CMyClass< AddOn_A, AddOn_B > AB( 100 );
    _ASSERT( AB.AddOne() == 101 );
    _ASSERT( AB.AddTwo() == 102 );

    return 0;
}

If anybody can point out what I may be doing wrong, please let me know. Thanks, PaulH
Your errors are generally originating from the fact that CMyClass does not have a default constructor (because you define a CMyClass(int) instead), so it is necessary to explicitly instantiate your parents with the CMyClass(int) constructor that you have. So, for example, in your definition of CMyClass you need to add the call to CMyClass(int) in the initializer list:

CMyClass( int x )
    : AddOn1< CBase >( x ),
      CMyClass< AddOn2, AddOn3 >( x ) // send x down
{}

Now that we have CMyClass sending x down the line, it is necessary for your base-case specialization (CMyClass<empty, empty, empty>) to have a constructor that accepts x but does nothing with it:

template<>
class CMyClass< empty, empty, empty >
{
public:
    CMyClass( int ) {} // do nothing
};

Now the compiler can find the right constructors and create your classes as you expect.

Just to explain why lines like CMyClass<AddOn_A> A(100) work: it's because A (in that example) has only one CMyClass parent, CMyClass<empty, empty, empty>, and your specialization

template<> class CMyClass< empty, empty, empty > {};

does have a default constructor, because it's empty (or, more formally, because it defines no other constructors). This breaks down immediately once you call CMyClass<AddOn_A, AddOn_B> AB(100), because that has two CMyClass instantiations in its inheritance chain, CMyClass<AddOn_B, empty, empty> and CMyClass<empty, empty, empty>; the former does not have a default constructor, so the compiler does not know how to construct it. That's why we must add that one line to the initializer list, so we tell the compiler to create CMyClass<AddOn_B, empty, empty> using its CMyClass(int x) constructor (note how that means the compiler will also try to make CMyClass<empty, empty, empty> with the x parameter, so we need to add a constructor to that specialization which will accept the parameter).