1,222,418
1,222,941
When exactly is the postfix increment operator evaluated in a complex expression?
Say I have an expression like this:

short v = ( ( p[ i++ ] & 0xFF ) << 4 | ( p[ i ] & 0xF0000000 ) >> 28 );

with p being a pointer to a dynamically allocated array of 32-bit integers. When exactly will i be incremented? I noticed that the above code delivers a different value for v than the following code:

short v = ( p[ i++ ] & 0xFF ) << 4;
v |= ( p[ i ] & 0xF0000000 ) >> 28;

My best guess for this behaviour is that i is not incremented before the right side of the above | is evaluated. Any insight would be appreciated! Thanks in advance, \Bjoern
The problem is order of evaluation: the C++ standard does not define the order of evaluation of sub-expressions. This is done so that the compiler can be as aggressive as possible in its optimizations. Let's break it down:

        a1                       a2
v = ( ( p[ i++ ] & 0xFF ) << 4 | ( p[ i ] & 0xF0000000 ) >> 28 );
-----
(1) a1 = p[i]
(2) i  = i + 1 (i++)        after (1)
(3) a2 = p[i]
(4) t3 = a1 & 0xFF          after (1)
(5) t4 = a2 & 0xF0000000    after (3)
(6) t5 = t3 << 4            after (4)
(7) t6 = t4 >> 28           after (5)
(8) t7 = t5 | t6            after (6) and (7)
(9) v  = t7                 after (8)

Now the compiler is free to re-arrange these sub-expressions as long as the above 'after' clauses are not violated. So one quick and easy optimization is to move (3) up one slot and then do common sub-expression elimination: (1) and (3) (now beside each other) are the same, and thus we can eliminate (3). But the compiler does not have to do that optimization (and is probably better at it than me, with other tricks up its sleeve). You can see how the value of (a1) will always be what you expect, but the value of (a2) will depend on what order the compiler decides to evaluate the other sub-expressions in.

The only guarantee you have is that the compiler cannot move sub-expressions past a sequence point. Your most common sequence point is ';' (the end of the statement). There are others, but I would avoid relying on this knowledge, as most people don't know the compiler's workings that well. If you write code that uses sequence-point tricks, then somebody may re-factor the code to make it look more readable, and now your trick has just turned into undefined behavior.
short v = ( p[ i++ ] & 0xFF ) << 4;
v |= ( p[ i ] & 0xF0000000 ) >> 28;
-----
(1) a1 = p[i]
(2) i  = i + 1 (i++)        after (1)
(4) t3 = a1 & 0xFF          after (1)
(6) t5 = t3 << 4            after (4)
(A) v  = t5                 after (6)
------ Sequence Point
(3) a2 = p[i]
(5) t4 = a2 & 0xF0000000    after (3)
(7) t6 = t4 >> 28           after (5)
(8) t7 = v | t6             after (7)
(9) v  = t7                 after (8)

Here everything is well defined, as the write to i is used in place and not re-read in the same expression. Simple rule: don't use the ++ or -- operators inside a larger expression. Your code looks just as readable like this:

++i; // prefer pre-increment (it makes no difference here, but is a useful habit)
v = ( ( p[ i ] & 0xFF ) << 4 | ( p[ i ] & 0xF0000000 ) >> 28 );

See this article for a detailed explanation of evaluation order: What are all the common undefined behaviours that a C++ programmer should know about?
1,222,608
1,356,502
ATL Security update broke compatibility for DLLs depending on the older version
The recent ATL security update updated the C++ runtimes to version 8.0.50727.4053. Unfortunately, this update broke one of our DLLs that dynamically links to the runtime, as we only have 8.0.50727.762 available to us on the target machine (we don't even use ATL). Is there a way we can get Visual Studio to dynamically link to the older DLL? I'd like to avoid statically linking if possible.
Another solution is forcing VS to link against the old versions of the WinSxS DLLs as explained in this article.
1,222,626
1,222,936
What should I do to develop a well-structured C++ program?
Right now, I want to develop a C++ program, and the UI design is the difficult issue. My questions are: 1. Is there any good practice for developing a well-structured C++ program? 2. Is there any good practice for developing a UI in C++? 3. I have often heard of ActiveX in C++; can it be used to encapsulate a UI, and is it good for maintenance after the software is finished? Thanks in advance!
I'll try to give some answers to your questions:

1) Good program structure

This really depends on what matters most: cost of development, ease of deployment/update, maintainability, target machine requirements. It is hard to give you a good answer because the topic is so large. I suggest this as a good place to start reading: Software Design

2) Good practice for developing a UI

This really does depend on what technology you are going to use. If you're running on Windows, you have a handful of options:

a) Win32 API programming. This is the hardest and involves writing code to call functions like 'CreateWindow' to create your UI a piece at a time.

b) ATL - Active Template Library. This is a bit easier than (a) but uses really hardcore C++ - you need to know about templates, multiple inheritance, and some patterns, and you'll end up learning Win32 anyway.

c) Microsoft Foundation Classes (MFC). If you have Visual C++ or Visual Studio, you can create an MFC project, which has dialog editors and a UI framework to more easily create rich user interfaces. Microsoft Outlook is written in MFC.

d) Use C#/.NET. If you have Visual Studio, then I would recommend that you make your UI in Visual C# using the Forms Designer tool, as it is quite easy to create a flexible and responsive UI. You can still do all your business logic in C++ and link to it from the C#. It is also the newest of all these options.

3) ActiveX

For all options (a), (b), (c), (d) you can make an ActiveX control to make your program re-usable. You can also make an ActiveX control in Visual Basic.

Hope this helps! James
1,222,632
1,223,499
MVC with Qt widget that uses a QAbstractTableModel subclass
I'm doing some refactoring, implementing a Model-View-Controller pattern. The view is a Qt widget. Originally, the Qt widget created a new instance of a QAbstractTableModel subclass on the heap. Let's call it FooTableModel. e.g.

Widget::Widget(QWidget* parent)
    : QWidget(parent), m_model(new FooTableModel(this))
{

Should I create the new instance of FooTableModel in the MVC model instead? By doing so, I would create a dependency on the view (assuming I still pass the widget's pointer to the FooTableModel constructor). Alternatively, I could pass nothing to the FooTableModel constructor and manually delete the FooTableModel in my MVC model. * The last option would be to leave the creation of the FooTableModel in the widget (and let the widget handle the FooTableModel directly?). Any suggestions or preferences? My guess is to go with * at the moment.
Generally you want to avoid passing the view to the model. If your MVC model is a QObject and the FooTableModel instance is a child of it, then you don't need to worry about the cleanup because Qt will do it for you. Ideally, if you are using Qt, the FooTableModel would be THE model, or whatever holds the instance of it would be. Qt follows the Model/View pattern, since the controller work is handled by the view. Check out: http://doc.trolltech.com/4.5/model-view-introduction.html for more. Short answer: pass nothing to FooTableModel, and delete it when done.
1,222,806
1,223,938
Some questions about MFC development
How do you develop a UI in MFC? Do you use any free library, or do you usually develop from scratch? There are always so many DLL files in C++-developed software; what are they used for? What's the difference between an MFC ActiveX Control and an MFC DLL?
Visual Studio 2008 enhances MFC by adding the 'Feature Pack'. This allows you to create MS Office 2007-style GUIs (amongst others), complete with a Ribbon Bar. http://msdn.microsoft.com/en-us/library/bb982354.aspx I cut my C++ teeth using MFC, but I'd recommend you look at Qt instead - it's a much more modern framework, plus you get cross-platform support (Linux, Mac, etc.) for free. MFC is pretty much a dead framework IMHO (the Feature Pack was bought in, and is actually a cut-down version of the BCG library). http://www.bcgsoft.com/ If you want to stick with MFC there is another popular GUI framework, by CodeJock: http://www.codejock.com/products/overview.asp?platform=mfc
1,222,914
1,301,884
QGraphicsView and QGraphicsItem: don't scale item when scaling the view rect
I am using Qt's QGraphicsView and QGraphicsItem subclasses. Is there a way to not scale the graphical representation of the item in the view when the view rectangle is changed, e.g. when zooming in? The default behavior is that my items scale in relation to my view rectangle. I would like to visualize 2D points, which should be represented by a thin rectangle that does not scale when zooming in the view. See a typical 3D modelling software for reference, where vertex points are always shown at the same size. Thanks!
Does setting the QGraphicsItem flag QGraphicsItem::ItemIgnoresTransformations on the item (via setFlag) to true not work for you?
1,222,926
1,226,955
boost::asio, asynchronous read error
For some reason this results in an access violation; however, not having any detailed documentation/help on this, I'm not sure where I'm going wrong, since going by what I've seen on the boost site this should be correct, and should print the contents of each asio::write call from the client on a new line. The client seems to work fine, although at the point the server crashes it hasn't sent anything yet. The access violation occurs in basic_stream_socket.hpp on line 275. The cause seems to be that the object (boost::asio::stream_socket_service) is not initialized (the value of the this pointer is 0xfeeefeee), however I don't see why it isn't.

The program's output:

Start server
Server::startAccept()
Server::handleAccept()
Connection accepted
Connection::startRead()
Server::startAccept()
Connection::handleRead()
READ ERROR: The I/O operation has been aborted because either a thread exited or an application request
Connection::startRead()

The code:

#include "precompiled.h"
#include "db.h"

class Connection : public boost::enable_shared_from_this<Connection>
{
public:
    typedef boost::shared_ptr<Connection> Pointer;

    static Pointer create(boost::asio::io_service& ioService)
    {
        return Pointer(new Connection(ioService));
    }

    ip::tcp::socket& getSocket() { return socket; }

    void startRead()
    {
        std::cout << "Connection::startRead()" << std::endl;
        socket.async_read_some(boost::asio::buffer(readBuffer),
            boost::bind(&Connection::handleRead, this, _1, _2));
    }

private:
    Connection(asio::io_service& ioService) : socket(ioService) { }

    void handleWrite(const boost::system::error_code&, size_t) { }

    void handleRead(const boost::system::error_code& error, size_t len)
    {
        std::cout << "Connection::handleRead()" << std::endl;
        if (error)
        {
            std::cout << "READ ERROR: ";
            std::cout << boost::system::system_error(error).what();
            std::cout << std::endl;
        }
        else
        {
            std::cout << "read: ";
            std::cout.write(readBuffer.data(), len);
            std::cout << std::endl;
        }
        startRead();
    }

    boost::array<char, 256> readBuffer;
    ip::tcp::socket socket;
};

class Server
{
public:
    Server(asio::io_service& ioService)
        : acceptor(ioService, ip::tcp::endpoint(ip::tcp::v4(), getPort()))
    {
        startAccept();
    }

private:
    void startAccept()
    {
        std::cout << "Server::startAccept()" << std::endl;
        Connection::Pointer newConn = Connection::create(acceptor.io_service());
        acceptor.async_accept(newConn->getSocket(),
            boost::bind(&Server::handleAccept, this, newConn, asio::placeholders::error));
    }

    void handleAccept(Connection::Pointer newConn, const boost::system::error_code& error)
    {
        std::cout << "Server::handleAccept()" << std::endl;
        if (error)
        {
            std::cout << "CONNECTION ERROR: ";
            std::cout << boost::system::system_error(error).what();
            std::cout << std::endl;
        }
        else
        {
            std::cout << "Connection accepted" << std::endl;
            startAccept();
            newConn->startRead();
        }
    }

    ip::tcp::acceptor acceptor;
};

int main()
{
    std::cout << "Start server" << std::endl;
    asio::io_service ioService;
    Server server(ioService);
    boost::system::error_code error;
    ioService.run(error);
}
You should change this code snippet:

void startRead()
{
    std::cout << "Connection::startRead()" << std::endl;
    socket.async_read_some(boost::asio::buffer(readBuffer),
        boost::bind(&Connection::handleRead, this, _1, _2));
}

to:

void startRead()
{
    std::cout << "Connection::startRead()" << std::endl;
    socket.async_read_some(boost::asio::buffer(readBuffer),
        boost::bind(&Connection::handleRead, this->shared_from_this(), _1, _2));
}

Notice that I passed a shared pointer to bind. This will keep your Connection instance around until the handler is invoked. Otherwise, the use count goes to zero in Server::startAccept and the object is deleted. Then, when the handler is invoked, the memory is invalid and you experience the dreaded "undefined behavior."
1,223,172
1,223,202
What is this piece of c++ code doing?
I don't know how and why this piece of code works:

// postorder dfs
Iterator< Index<String<char> >, TopDown<ParentLink<Postorder> > >::Type myIterator(myIndex);
while (goDown(myIterator));
for (; !atEnd(myIterator); goNext(myIterator))
    // do something with myIterator (traverse through (suffix) tree)

It's an example from SeqAn, and the interface is described here: API. How can the while affect the for loop? Why isn't the for loop initialized?
You've run into one of the fun parts of C++ - using language constructs in syntactically valid but difficult-for-human-parsing ways.

while (goDown(myIterator));

This will goDown(myIterator) until it returns false. Then it will continue on to the for loop. It's looping over nothing - but that's okay, because the function goDown is doing the work.

for (; !atEnd(myIterator); goNext(myIterator))

This doesn't initialize anything, but tests that it's not atEnd(myIterator) - while it's not, it will goNext(myIterator). It could also be written as a while loop to make it slightly easier to understand:

while (!atEnd(myIterator)) {
    goNext(myIterator);
}

So the code will:

goDown() until goDown() returns false - this means goDown is modifying myIterator each time
goNext() until it is atEnd()
1,223,297
1,223,522
Library plans for C++0x?
Lately I've been getting very excited about the support for lambdas in VC2010. I'm slowly starting to grasp the full potential this feature has in transforming C++ into something a lot better. But then I realized that this potential greatly depends on mainstream support for lambdas in day-to-day libraries like Boost and Qt. Does anyone know if there are plans to extend these libraries with the new features of C++0x? Lambdas practically replace the need for boost::lambda and everything in Boost that interacts with it. Qt could add support for lambdas in all of their containers, and maybe even as an alternative way of defining SLOTs.
Lambdas already fit very well into existing libraries - anywhere that a function accepts a function object of a type given by a template parameter. This is one of the great things about them - they're a classic example of a language feature that codifies existing practice in a nifty syntax. Obviously the boost lambda library becomes redundant, but that means it doesn't require any new features to be added to it.
1,223,616
1,223,642
How to implement a queued map?
The problem: I want to be able to FIFO queue outgoing messages. For update/deletion reasons, I also want to be able to access every message in the queue based upon an object ID. I've currently implemented a solution where data is pushed into a deque, and an iterator to that data is kept. The iterator, keyed by an object ID, is then placed into a map. This was fine in the one place that I did it, but I now find myself wanting to do this elsewhere. Am I over-complicating the problem? Is there a data structure out there that does this already?
Why not make the deque a deque of IDs, and the map a map from ID to object? Then when you access an ID in the deque, you look up the ID in the map. If the IDs are globally unique, you only need one map to service all the deques.
1,223,690
1,223,697
PInvoke error when marshalling struct with a string in it
I have a C++ struct:

struct UnmanagedStruct
{
    char* s;
    // Other members
};

and a C# struct:

struct ManagedStruct
{
    [MarshalAs(UnmanagedType.LPStr)]
    string s;
    // Other members
}

The C++ library exposes:

extern "C" UnmanagedStruct __declspec(dllexport) foo( char* input );

and it is imported like:

[DllImport("SomeDLL.dll", CharSet = CharSet.Ansi)]
static extern ManagedStruct foo( string input );

However when I call this function I get:

MarshalDirectiveException was unhandled
Method's type signature is not PInvoke compatible.

The thing is, this function call works if I remove the char* s and the string s from the structs.
For this type of scenario, do not use a String directly. Instead switch the type to be an IntPtr value and use Marshal.PtrToStringAuto/Ansi/Uni as appropriate. In this case, since your native code uses char*, PtrToStringAnsi is the best choice.

struct ManagedStruct
{
    IntPtr s;
    public string sAsString { get { return Marshal.PtrToStringAnsi(s); } }
}
1,223,962
1,224,132
Container with two indexes (or a compound index)
I have a class like this:

class MyClass
{
    int Identifier;
    int Context;
    int Data;
}

and I plan to store it in an STL container like:

vector<MyClass> myVector;

but I will need to access it either by the external index (using myVector[index]) or by the combination of Identifier and Context, in which case I would perform a search with something like:

vector<MyClass>::iterator myIt;
for( myIt = myVector.begin(); myIt != myVector.end(); myIt++ )
{
    if( ( myIt->Identifier == target_id ) && ( myIt->Context == target_context ) )
        return *myIt; //or do something else...
}

Is there a better way to store or index the data?
Boost.MultiIndex has this exact functionality, if you can afford the Boost dependency (it is header-only). You would use a random_access index for the array-like index, and either hashed_unique, hashed_non_unique, ordered_unique, or ordered_non_unique (depending on your desired traits) with a functor that compares Identifier and Context together.
1,223,999
1,224,056
C++ Map can't insert with pair
Why can't I insert as shown below?

#include <map>

struct something
{
} some_object;

typedef std::map<std::string, something*> list;
typedef std::pair<std::string, something*> pair;

int main()
{
    list l;
    pair p("abc", &some_object); // working fine!!!
    l.insert(p);                 // 17 errors
    return 0;
}

Visual Studio gives me many errors and I don't understand any of them. The first one is:

error C2784: 'bool std::operator <(const std::_Tree<_Traits> &,const std::_Tree<_Traits> &)' : could not deduce template argument for 'const std::_Tree<_Traits> &' from 'const std::string'

I can post more but I don't want to spam here. Thanks a lot for your help.
You need to #include <string>
1,224,306
1,224,357
Template Metaprogramming - I still don't get it :(
I have a problem: I don't understand template metaprogramming. I've read a lot about it, but it still doesn't make much sense to me.

Fact #1: Template metaprogramming is faster.

template <int N>
struct Factorial
{
    enum { value = N * Factorial<N - 1>::value };
};

template <>
struct Factorial<0>
{
    enum { value = 1 };
};

// Factorial<4>::value == 24
// Factorial<0>::value == 1
void foo()
{
    int x = Factorial<4>::value; // == 24
    int y = Factorial<0>::value; // == 1
}

So this metaprogram is faster... because of the constant literal. BUT: where in the real world do we have constant literals? Most programs I use react to user input.

Fact #2: Template metaprogramming can accomplish better maintainability.

Yeah, the factorial example may be maintainable, but when it comes to complex functions, I and most other C++ programmers can't read them. Also, the debugging options are very poor (or at least I don't know how to debug). When does template metaprogramming make sense?
Just as factorial is not a realistic example of recursion in non-functional languages, neither is it a realistic example of template metaprogramming. It's just the standard example people reach for when they want to show you recursion. In writing templates for realistic purposes, such as in everyday libraries, often the template has to adapt what it does depending on the type parameters it is instantiated with. This can get quite complex, as the template effectively chooses what code to generate, conditionally. This is what template metaprogramming is; if the template has to loop (via recursion) and choose between alternatives, it is effectively like a small program that executes during compilation to generate the right code. Here's a really nice tutorial from the boost documentation pages (actually excerpted from a brilliant book, well worth reading). http://www.boost.org/doc/libs/1_39_0/libs/mpl/doc/tutorial/representing-dimensions.html
1,224,361
1,224,501
Determine static initialization order after compilation?
In C++, I know that the compiler can choose to initialize static objects in any order that it chooses (subject to a few constraints), and that in general you cannot choose or determine the static initialization order. However, once a program has been compiled, the compiler has to have made a decision about what order to initialize these objects in. Is there any way to determine, from a compiled program with debugging symbols, in what order static constructors will be called? The context is this: I have a sizeable program that is suddenly segfaulting before main() when it is built under a new toolchain. Either this is a static initialization order problem, or it is something wrong with one of the libraries that it is loading. However, when I debug with gdb, the crash location is simply reported as a raw address without any symbolic information or backtrace. I would like to decide which of these two problems it is by placing a breakpoint at the constructor of the very first statically-initialized object, but I don't know how to tell which object that is.
Matthew Wilson provides a way to answer this question in this section (Safari Books Online subscription required) of Imperfect C++. (Good book, by the way.) To summarize, he creates a CUTrace.h header that creates a static instance of a class that prints the filename of the including source file (using the nonstandard preprocessor macro __BASE_FILE__) when created, then he includes CUTrace.h in every source file. This requires a recompilation, but the #include "CUTrace.h" can easily be added and removed via a script, so it shouldn't be too hard to set up.
1,224,464
1,224,474
C/C++ Packing and Compression
I'm working on a commercial project that requires a couple of files to be bundled (packed) into an archive and then compressed. Right now we have zlib in our utility library, but it doesn't look like zlib has the functionality to compress multiple files into one archive. Does anyone know of free libraries I'd be able to use for this?
Perhaps libtar? Also under a BSD license.
1,225,177
1,226,393
Initializing member variables
I've started to pick up this pattern:

template <typename T>
struct DefaultInitialize
{
    DefaultInitialize() : m_value(T()) {}
    // ... conversions, assignments, etc ....
};

so that when I have classes with primitive members, I can set them to be initialized to 0 on construction:

struct Class
{
    ...
    DefaultInitialize<double> m_double;
    ...
};

The reason I do this is to avoid having to remember to initialize the member in each constructor (if there are multiple constructors). I'm trying to figure out if: this is a valid pattern? I am using the right terminology?
This is a valid pattern? It's a known "valid" pattern, I would say. Boost has a class template called value_initialized that does exactly that, too.

I am using the right terminology? Well, your template can be optimized to have fewer requirements on the type parameter. As of now, your type T requires a copy constructor, unfortunately. Let's change the initializer to the following:

DefaultInitialize() : m_value() {}

Then, technically, this kind of initialization is called value initialization, starting with C++03. It's a little bit weird, since no kind of value is provided in the first place. This kind of initialization looks like default initialization, but is intended to fill things with zero, while respecting any user-defined constructor and executing that instead.

To summarize: what you did was to value-initialize an object of type T, then copy that object to m_value. What my version above does is value-initialize the member directly.
1,225,411
1,225,429
Boost's Linear Algebra Solution for y=Ax
Does boost have one? Here A is a matrix (sparse and possibly very large) and y and x are vectors; either y or x can be the unknown. I can't seem to find it here: http://www.boost.org/doc/libs/1_39_0/libs/numeric/ublas/doc/index.htm
Linear solvers are generally part of the LAPACK library, which is a higher-level extension of the BLAS library. If you are on Linux, the Intel MKL has some good solvers, optimized both for dense and sparse matrices. If you are on Windows, MKL has a one-month free trial... and to be honest I haven't tried any of the other ones out there. I know the ATLAS package has a free LAPACK implementation, but I'm not sure how hard it is to get running on Windows. Anyway, search around for a LAPACK library which works on your system.
1,225,589
1,225,599
Most Compact Way to Count Number of Lines in a File in C++
What's the most compact way to compute the number of lines of a file? I need this information to create/initialize a matrix data structure. Later I have to go through the file again and store the information inside a matrix.

Update: based on Dave Gamble's answer. But why doesn't this compile? Note that the file could be very large, so I try to avoid using a container, to save memory.

#include <iostream>
#include <vector>
#include <fstream>
#include <sstream>

using namespace std;

int main ( int arg_count, char *arg_vec[] )
{
    if (arg_count != 2) {
        cerr << "expected one argument" << endl;
        return EXIT_FAILURE;
    }
    string line;
    ifstream myfile (arg_vec[1]);
    FILE *f=fopen(myfile,"rb");
    int c=0,b;
    while ((b=fgetc(f))!=EOF) c+=(b==10)?1:0;
    fseek(f,0,SEEK_SET);
    return 0;
}
FILE *f=fopen(filename,"rb");
int c=0,b;
while ((b=fgetc(f))!=EOF) c+=(b==10)?1:0;
fseek(f,0,SEEK_SET);

Answer in C. That kind of compact?
1,225,695
1,225,726
C++ STL map typedef errors
I'm having a really nasty problem with some code that I've written. I found someone else that had the same problem on Stack Overflow, and I tried the solutions, but none worked for me. I typedef several common STL types that I'm using; none of the others have any problem, except when I try to typedef a map. I get a "some_file.h:83: error: expected initializer before '<' token" error when including my header in a test program. Here's the important part of the header (some_file.h):

#ifndef SOME_FILE_H
#define SOME_FILE_H

// some syntax-correct enums + class prototypes

typedef std::string str;

typedef std::vector<Column> col_vec;
typedef col_vec::iterator col_vec_i;

typedef std::vector<Row> row_vec;
typedef row_vec::iterator row_vec_i;

typedef std::vector<str> str_vec;
typedef str_vec::iterator str_vec_i;

typedef std::vector<Object> obj_vec;
typedef obj_vec::iterator obj_vec_i;

typedef std::map<Column, Object> col_obj_map; // error occurs on this line
typedef std::pair<Column, Object> col_obj_pair;

The includes in some_file.cpp are:

#include <utility>
#include <map>
#include <vector>
#include <iostream>
#include <string>
#include <stdio.h>
#include <cc++/file.h>
#include "some_file.h"

The test file simply includes string, vector, and my file, in that order. It has a main method that just does a hello-world sort of thing. The funny thing is that I quickly threw together a templated class to see where the problem was (replacing the "std::map<Column..." with "hello<Column...") and it worked without a problem. I've already created the operator overload required by the map if you're using a class without a '<' operator.
You are getting this problem because the compiler doesn't know what a map is. It doesn't know because the map header hasn't been included yet. Your header uses the STL templates string, vector, map, and pair; however, it doesn't define them, or have any reference to where they are defined. The reason your test file barfs on map and not on string or vector is that you include the string and vector headers before some_file.h, so string and vector are defined, but map is not. If you include map's header, it will work, but then it may complain about pair (unless your particular STL implementation includes pair in map's header).

Generally, the best policy is to include the proper standard header for every type you use in your own header. So some_file.h should have, at least, these headers:

#include <string>
#include <map>
#include <utility> // header for pair
#include <vector>

The downside to this approach is that the preprocessor has to load each file every time and go through the #ifdef ... #endif conditional-inclusion processing, so if you have thousands of files, and dozens of includes in each file, this could increase your compilation time significantly. However, on most projects, the added aggravation of having to manage header inclusion manually is not worth the minuscule gain in compilation time. That is why Scott Meyers' Effective STL book has "Always #include the proper headers" as item #48.
1,225,741
1,240,024
Performance impact of -fno-strict-aliasing
Is there any study or set of benchmarks showing the performance degradation due to specifying -fno-strict-aliasing in GCC (or equivalent in other compilers)?
It will vary a lot from compiler to compiler, as different compilers implement it with different levels of aggression. GCC is fairly aggressive about it: enabling strict aliasing will cause it to think that pointers that are "obviously" equivalent to a human (as in, foo *a; bar *b = (bar *) a;) cannot alias, which allows for some very aggressive transformations, but can obviously break non-carefully written code. Apple's GCC disables strict aliasing by default for this reason. LLVM, by contrast, does not even have strict aliasing, and, while it is planned, the developers have said that they plan to implement it as a fall-back case when nothing else can judge equivalence. In the above example, it would still judge a and b equivalent. It would only use type-based aliasing if it could not determine their relationship in any other way. In my experience, the performance impact of strict aliasing mostly has to do with loop invariant code motion, where type information can be used to prove that in-loop loads can't alias the array being iterated over, allowing them to be pulled out of the loop. YMMV.
1,225,769
1,225,787
Is setting parent for a window from different process correct?
I have two applications with two different top-level windows: App1 -- Window1, App2 -- Window2. Now I am creating a dialog Dlg1 in App1, and I want to set Window2 (App2) as its parent window (that is because I want my Dlg1 to come on top of Window2). I created the dialog by setting Window2 as parent, and it worked. But is it the correct way? Are there any known issues/restrictions in setting a parent across processes? I checked the Windows documentation and found not much information.
This is more or less supported and it does work with some restrictions. You will need to be careful that the two processes are running as the same user, and that you have no security or elevation issues that would prevent the two processes communicating. Secondly, you may run into issues if the window in question has some inbuilt assumptions about which window is the parent - this is less of an issue if you have created both processes. Although I just read what you said here: That is because I want my Dlg1 to come on top of Window2 This sounds kind of morally and technologically dicey. What happens if the author of the first program objects? Might you not get into some kind of war between the two windows? If this is all you are trying to do, why not just set your window as TOPMOST or TOP and leave it at that?
1,225,828
1,225,838
FireFox Com Function
For IE microsoft provides COM to access it programatically. Is there any function to access Firefox from our Program
The Mozilla ActiveX Control has a largely compatible interface (IWebBrowser/IWebBrowser2/...). Of course, the native XPCOM interfaces are a possibility for C++ programs.
1,225,958
1,225,996
What is the correct way to handle timezones in datetimes input from a string in Qt
I'm using Qt to parse an XML file which contains timestamps in UTC. Within the program, of course, I'd like them to change to local time. In the XML file, the timestamps look like this: "2009-07-30T00:32:00Z". Unfortunately, when using the QDateTime::fromString() method, these timestamps are interpreted as being in the local timezone. The hacky way to solve this is to add or subtract the correct timezone offset from this time to convert it to "true" local time. However, is there any way to make Qt realize that I am importing a UTC timestamp and then automatically convert it to local time?
Do it like this: QDateTime timestamp = QDateTime::fromString(thestring); timestamp.setTimeSpec(Qt::UTC); // mark the timestamp as UTC (but don't convert it) timestamp = timestamp.toLocalTime(); // convert to local time
1,226,044
1,226,639
Do you know a tool for building a tree of include files in a project\file?
Say, I'd like to have a tool (or script?) that takes a project (or .h file) and builds a searchable tree of the "includes" included into it (and of those includes, and so on). Does something like this exist? Should I write this myself [of course I am :), but maybe somebody has already written it, or maybe has an idea how to get it]?
Not entirely sure this is what you're after, but you can easily get a list of includes by generating the post-CPP-processed file from the base c file, and grepping out the file/line number comments, e.g., using gcc gcc -E main.c {usual flags} | grep '#' | cut -d' ' -f3 | sort | uniq where main.c is your base c file.
1,226,634
1,226,957
How to use base class's constructors and assignment operator in C++?
I have a class B with a set of constructors and an assignment operator. Here it is: class B { public: B(); B(const string& s); B(const B& b) { (*this) = b; } B& operator=(const B & b); private: virtual void foo(); // and other private member variables and functions }; I want to create an inheriting class D that will just override the function foo(), and no other change is required. But, I want D to have the same set of constructors, including copy constructor and assignment operator as B: D(const D& d) { (*this) = d; } D& operator=(const D& d); Do I have to rewrite all of them in D, or is there a way to use B's constructors and operator? I would especially want to avoid rewriting the assignment operator because it has to access all of B's private member variables.
You can explicitly call constructors and assignment operators: class Base { //... public: Base(const Base&) { /*...*/ } Base& operator=(const Base&) { /*...*/ } }; class Derived : public Base { int additional_; public: Derived(const Derived& d) : Base(d) // dispatch to base copy constructor , additional_(d.additional_) { } Derived& operator=(const Derived& d) { Base::operator=(d); additional_ = d.additional_; return *this; } }; The interesting thing is that this works even if you didn't explicitly define these functions (it then uses the compiler generated functions). class ImplicitBase { int value_; // No operator=() defined }; class Derived : public ImplicitBase { const char* name_; public: Derived& operator=(const Derived& d) { ImplicitBase::operator=(d); // Call compiler generated operator= name_ = strdup(d.name_); return *this; } };
1,226,652
1,226,678
Thread-local singletons
I would like to create a singleton class that is instantiated once in each thread where it is used. I would like to store the instance pointers in TLS slots. I have come up with the following solution but I am not sure whether there are any special considerations with multithreaded access to the singleton factory when thread local storage is involved. Maybe there is also a better solution to implement thread local singletons. class ThreadLocalSingleton { static DWORD tlsIndex; public: static ThreadLocalSingleton *getInstance() { ThreadLocalSingleton *instance = static_cast<ThreadLocalSingleton*>(TlsGetValue(tlsIndex)); if (!instance) { instance = new ThreadLocalSingleton(); TlsSetValue(tlsIndex, instance); } return instance; } }; DWORD ThreadLocalSingleton::tlsIndex = TlsAlloc(); The Tls*-functions are of course win32 specific but portability is not the main issue here. Your thoughts concerning other platforms would still be valuable. Major Edit: I had originally asked about using double-checked locking in this scenario. However as DavidK pointed out, the singletons are to be created on a per thread basis anyway. The two remaining questions are: is it appropriate to rely on TlsGetValue/TlsSetValue to ensure that each thread gets one instance and that the instance is created only once for each thread? Is it possible to register a callback that allows me to clean up an instance that was associated with a particular thread when that thread finishes?
Since your objects are thread-local, why do you need locking to protect them at all? Each thread that calls getInstance() will be independent of any other thread, so why not just check that the singleton exists and create it if needed? The locking would only be needed if multiple threads tried to access the same singleton, which isn't possible in your design as it is above. EDIT: Moving on to the two other questions... I can't see any reason why using TlsAlloc/TlsGetValue etc. wouldn't work as you'd expect. Since the memory holding the pointer to your singleton is only accessible to the relevant thread, there won't be any problems with lazy initialization of it. However there's no explicit callback interface to clean them up. The obvious solution to that would be to have a method that is called by all your thread main functions that cleans up the created singleton, if any. If it's very likely that the thread will create a singleton, a simpler pattern might be to create the singleton at the start of the thread main function and delete it at the end. You could then use RAII by either creating the singleton on the stack, or holding it in a std::auto_ptr<>, so that it gets deleted when the thread ends. (Unless the thread terminates abnormally, but if that happens all bets are off and a leaked object is the least of your problems.) You could then just pass the singleton around, or store it in TLS, or store it in a member of a class, if most of the thread functionality is in one class.
1,226,876
1,227,524
How can I open a help file (chm or so) from my GUI developed in VC++ 2008?
I'm trying to add some help to my GUI developed in VC++ 2008. I want to compile a chm file, or a hlp file that can be accessed from my menu. Anyone can give me any idea about how to do this? Thanks a lot
Under HKLM\Software\Microsoft\Windows\HTMLHelp, create an entry named help.chm whose value is C:\path to\help file.chm Then to open the chm at a particular topic call HtmlHelp(m_hWnd, "Help.chm", HH_DISPLAY_TOPIC, NULL);
1,227,020
1,227,269
What is function __tcf_0? (Seen when using gprof and g++)
We use g++ 4.2.4 and I'm trying to track down some performance problems in my code. I'm running gprof to generate the profile, and I'm getting the following "strangeness" in that the most expensive function is __tcf_0: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 40.00 0.04 0.04 1 40.00 95.00 __tcf_0 This function then appears to calls most of my user functions (ie. it is the one that's called from main). The nearest explanation that I found for this was here, but that link refers to static objects and atexit, and I don't think this applies in my case. If it's helpful, I'm using Boost (program_options and fusion) and the HDF5 libraries. UPDATE: The command I use when building is: g++ -Wreturn-type -Wunused -Winline -pg -DLINUX -DHAS_SETENV \ -DFUSION_MAX_MAP_SIZE=15 -DFUSION_MAX_VECTOR_SIZE=15 -g -O0 \ --param large-function-growth=300 --param inline-unit-growth=200
__tcf_0 does indeed seem to be a function which calls the destructors of static objects, and which is registered for each static object to be called at exit (taking for granted what is said on this page). Now, the result of your gprof is quite strange, since the function that takes most of the time only takes 0.04 seconds, which means the whole program takes 0.1 s to execute. If I'm not mistaken, my guess is that you didn't profile correctly. Did you compile your code with profiling enabled?
1,227,379
1,227,422
When would you use an std::auto_ptr instead of boost::shared_ptr?
We've pretty much moved over to using boost::shared_ptr in all of our code, however we still have some isolated cases where we use std::auto_ptr, including singleton classes: template < typename TYPE > class SharedSingleton { public: static TYPE& Instance() { if (_ptrInstance.get() == NULL) _ptrInstance.reset(new TYPE); return *_ptrInstance; } protected: SharedSingleton() {}; private: static std::auto_ptr < TYPE > _ptrInstance; }; I've been told that there's a very good reason why this hasn't been made a shared_ptr, but for the life of me I can't understand why? I know that auto_ptr will eventually get marked as deprecated in the next standard, so I'd like to know what/how I can replace this implementation. Also, are there any other reasons why you'd consider using an auto_ptr instead of a shared_ptr? And do you see any problems moving to shared_ptr in the future? Edit: So in answer to "can I safely replace auto_ptr with shared_ptr in the above code", the answer is yes - however I'll take a small performance hit. When auto_ptr is eventually marked as deprecated and we move over to std::shared_ptr, we'll need to thoroughly test our code to make sure we're abiding by the different ownership semantics.
auto_ptr and shared_ptr solve entirely different problems. One does not replace the other. auto_ptr is a thin wrapper around pointers to implement RAII semantics, so that resources are always released, even when facing exceptions. auto_ptr does not perform any reference counting or the like at all, it does not make multiple pointers point to the same object when creating copies. In fact, it's very different. auto_ptr is one of the few classes where the assignment operator modifies the source object. Consider this shameless plug from the auto_ptr wikipedia page: int *i = new int; auto_ptr<int> x(i); auto_ptr<int> y; y = x; cout << x.get() << endl; // Print NULL cout << y.get() << endl; // Print non-NULL address i Note how executing y = x; modifies not only y but also x. The boost::shared_ptr template makes it easy to handle multiple pointers to the same object, and the object is only deleted after the last reference to it went out of scope. This feature is not useful in your scenario, which (attempts to) implement a Singleton. In your scenario, there's always either 0 references to 1 reference to the only object of the class, if any. In essence, auto_ptr objects and shared_ptr objects have entirely different semantics (that's why you cannot use the former in containers, but doing so with the latter is fine), and I sure hope you have good tests to catch any regressions you introduced while porting your code. :-}
1,227,506
1,227,714
Unix Makefile in Windows Visual Studio 2008
I've done a decent search, but can't seem to find a way to get Visual Studio 2008 to use a unix Makefile, or even to create some MSVC compatible equivalent from the Makefile. Does anyone have ideas or similar issues? Note: I already know the benefits/drawbacks of using Makefiles or not, and I don't want to hear your opinion. All I'm interested in right now is creating a Windows library from some originally unix code which only has a Makefile, and getting something functional out of it. TIA.
You can also use cccl with make for windows. cccl is a wrapper around Microsoft Visual C++'s cl.exe and link.exe. It converts Unix compiler parameters into parameters understood by cl and link.
1,227,653
1,227,918
Linking against library in release and .exe in debug crashes in Visual studio
I'm using Visual C++ 2008 SP1. I have an app that is compiled in debug mode, but links against a library in release mode. I'm getting a crash at the start-up of the application. To make the problem smaller, I created a simple solution with 2 projects: lib_release (generates a .lib, in release mode) exec_using_lib_release (genereates a .exe, in debug mode) The 'lib_release' project is simple enough to have a simple class: //Foo.h #include <vector> class Foo { std::vector<int> v; public: void doSomething(); }; //Foo.cpp #include "Foo.h" void Foo::doSomething() {} The 'exec_using_lib_release' project is simple like this: //main.cpp #include "Foo.h" int main() { Foo foo; foo.doSomething(); return 0; } And it crashes, it's the same problem reported by How do you build a debug .exe (MSVCRTD.lib) against a release built lib (MSVCRT.lib)?, but his answer didn't work for me. I get the same linker warnings, I tried the same steps, but none worked. Is there something I'm missing? EDIT: On the lib_release (that creates a library in release mode), I'm using Multi Threaded (/MT), and at the exec_using_lib_release, I'm using Multi Threaded Debug (/MTd). I think this is the expected way of doing it, since I want the .lib to be created without debug info. I read the document at MSDN Runtime library and those are the settings of linking against the CRT in a static way. I don't have 'Common Language Runtime Support' either.
You don't have to use the same runtimes for release and debug modules (but it helps), as long as you follow very specific rules: never mix and match access to memory allocated by different runtimes. To put this more simply, if you have a routine in a dll that allocates some memory and returns it to the caller, the caller must never free it - you must create a function in the original dll that frees the memory. That way you're safe from runtime mismatches. If you consider that the Windows dlls are built release only (unless you have the debug version of Windows), yet you use them from your debug applications, you'll see how this matters. Your problem now is that you're using a static library, there is no dll boundary anymore, and the calls in the lib are compiled using the static version of the C runtime. If your exe uses the dynamic dll version of the runtime, you'll find that the linker is using that one instead of the one your static lib used... and you'll get crashes. So, you could rebuild your lib as a dll; or you could make sure they both use the same CRT library; or you could make sure they both use the same type of CRT - ie the dll version or the static version, whilst keeping debug/release differences. At least, I think this is your problem - what are the 'code generation, runtime library' settings?
1,227,842
1,227,846
(C++ and gcc) error: expected constructor, destructor, or type conversion before 'inline'
I have a header file with some inline template methods. I added a class declaration to it (just a couple of static methods...it's more of a namespace than a class), and I started getting this compilation error, in a file that uses that new class. There are several other files that include the same .h file that still compile without complaint. Googling for the error gives me a bunch of links to mailing lists about bugs on projects that have a similar error message (the only difference seeming to be what the constructor, destructor, or type conversion is supposed to precede). I'm about ready to start stripping everything else away until I have a bare-bones minimal sample so I can ask the question intelligently, but I figured I'd take a stab at asking it the stupid way first: Can anyone give me a basic clue about what this error message actually means so I might be able to begin to track it down/google it? Just for the sake of completeness, the first example of where I'm seeing this looks more or less like namespace Utilities { template <typename T> GLfloat inline NormalizeHorizontally(T x) { GLfloat scaledUp = x*2.0; GLfloat result = scaledUp / Global::Geometry::ExpectedResolutionX; return result; } }
It means that you put the "inline" keyword in the wrong place. It needs to go before the method's return type, e.g. template <typename T> inline GLfloat NormalizeHorizontally(T x) Simple as that. The reason that you got this message on one compilation unit and not others may be because it is a templated function that was not being instantiated from those other compilation units. Generally, if you get an "expected blah blah before foobar" error, this is a parsing error and it often indicates a simple syntax mistake such as a missing semicolon, missing brace, or misordered keywords. The problem is usually somewhere around the portion mentioned, but could actually be a while back, so sometimes you have to hunt for it.
1,228,025
1,228,110
pthread_key_t and pthread_once_t?
Starting with pthreads, I cannot understand what is the business with pthread_key_t and pthread_once_t? Would someone explain in simple terms with examples, if possible? thanks
No, it can't be explained in layman terms. Laymen cannot successfully program with pthreads in C++. It takes a specialist known as a "computer programmer" :-) pthread_once_t is a little bit of storage which pthread_once must access in order to ensure that it does what it says on the tin. Each once control will allow an init routine to be called once, and once only, no matter how many times it is called from how many threads, possibly concurrently. Normally you use a different once control for each object you're planning to initialise on demand in a thread-safe way. You can think of it in effect as an integer which is accessed atomically as a flag whether a thread has been selected to do the init. But since pthread_once is blocking, I guess there's allowed to be a bit more to it than that if the implementation can cram in a synchronisation primitive too (the only time I ever implemented pthread_once, I couldn't, so the once control took any of 3 states (start, initialising, finished). But then I couldn't change the kernel. Unusual situation). pthread_key_t is like an index for accessing thread-local storage. You can think of each thread as having a map from keys to values. When you add a new entry to TLS, pthread_key_create chooses a key for it and writes that key into the location you specify. You then use that key from any thread, whenever you want to set or retrieve the value of that TLS item for the current thread. The reason TLS gives you a key instead of letting you choose one, is so that unrelated libraries can use TLS, without having to co-operate to avoid both using the same value and trashing each others' TLS data. The pthread library might for example keep a global counter, and assign key 0 for the first time pthread_key_create is called, 1 for the second, and so on.
1,228,161
1,228,199
Why use prefixes on member variables in C++ classes
A lot of C++ code uses syntactical conventions for marking up member variables. Common examples include m_memberName for public members (where public members are used at all) _memberName for private members or all members Others try to enforce using this->member whenever a member variable is used. In my experience, most larger code bases fail at applying such rules consistently. In other languages, these conventions are far less widespread. I see it only occasionally in Java or C# code. I think I have never seen it in Ruby or Python code. Thus, there seems to be a trend with more modern languages to not use special markup for member variables. Is this convention still useful today in C++ or is it just an anachronism. Especially as it is used so inconsistently across libraries. Haven't the other languages shown that one can do without member prefixes?
You have to be careful with using a leading underscore. A leading underscore before a capital letter in a word is reserved. For example: _Foo _L are all reserved words while _foo _l are not. There are other situations where leading underscores before lowercase letters are not allowed. In my specific case, I found the _L happened to be reserved by Visual C++ 2005 and the clash created some unexpected results. I am on the fence about how useful it is to mark up local variables. Here is a link about which identifiers are reserved: What are the rules about using an underscore in a C++ identifier?
1,228,170
1,280,286
How does Visual Build (kinook) build c++ projects?
The bld file has the sln file specified, but what does it call to build it? MSDev? MSBuild? other? I want to add some command line params, but I am not sure which executable it calls for unmanaged C++ solutions.
It depends. For Visual Studio 2002/2003, it always calls devenv.com. For Visual Studio 2005 and up, it calls msbuild.exe by default, or devenv or vcbuild if specified in the Override field on the Options tab. ... the action will automatically locate the correct devenv.com or msbuild.exe compiler, based on the version of the project or solution being built. For Visual Studio 2005 and later and Delphi Prism, this action locates and calls msbuild.exe (installed with the .NET Framework 2.0 or later); for Visual Studio 2002/2003, it invokes the appropriate devenv.com compiler for the specified project or solution version. http://www.kinook.com/VisBuildPro/Manual/vsnetoptionstab.htm
1,228,362
1,228,392
Boost::Asio read/write operations
What is the difference between calling boost::asio::ip::tcp::socket's read_some/write_some member functions and calling the boost::asio::read/boost::asio::write free functions? More specifically: Is there any benefit to using one over the other? Why are both included in the library?
read_some and write_some may return as soon as even a single byte has been transferred. As such you need to loop if you want to make sure you get all of the data - but this may be what you want. The free functions are wrappers around read_some and write_some, and have different termination conditions depending on the overload. Typically they wait for the buffer to be fully transferred (or an error to occur, or in some overloads an explicit completion condition to occur)
1,228,402
1,228,898
How does one include TR1?
Different compilers seem to have different ideas about TR1. G++ only seems to accept includes of the type: #include <tr1/unordered_map> #include <tr1/memory> ... While Microsoft's compiler only accepts: #include <unordered_map> #include <memory> ... As far as I understand TR1, the Microsoft way is the correct one. Is there a way to get G++ to accept the second version? How does one in general handle TR1 in a portable way?
Install boost on your machine. Add the following directory to your search path. <Boost Install Directory>/boost/tr1/tr1 see here boost tr1 for details Now when you include <memory> you get the tr1 version of memory that has std::tr1::shared_ptr and then it includes the platform specific version of <memory> to get all the normal goodies.
1,228,545
1,234,191
What configuration file format allows the inclusions of otherfiles and the inheritance of settings?
I'm writing a multiplayer C++ based game. I need a flexible file format to store information about the game characters. The game characters will often not share the same attributes, or use a basew For example: A format that would allow me to do something like this: #include "standardsettings.config" //include other files which this file //then changes FastSpaceship: Speed: 10 //pixels/sec Rotation: 5 //deg/sec MotherShip : FastSpaceship //inherits all the settings of the FastSpaceship ShieldRecharge: 4 WeaponA [ power:10, range:20, style:fireball] SlowMotherShip : MotherShip //inherits all the settings of the MotherShip Speed: 4 // override speed I've been searching for a pre-existing format that does all this, or is similar, but with no luck. I'm keen not to reinvent the wheel unless I have to, so I was wondering if anyone knows any good configuration file formats that support these features
After a lot of searching I've found a pretty good solution using Lua. Lua, I found out, was originally designed as a configuration file language, but then evolved into a complete programming language. Example util.lua -- helper function needed for inheritance function inherit(t) -- return a deep copy (includes all subtables) of the table t local new = {} -- create a new table local i, v = next(t, nil) -- i is an index of t, v = t[i] while i do if type(v)=="table" then v=inherit(v) end -- deep copy new[i] = v i, v = next(t, i) -- get next index end return new end globalsettings.lua require "util" SpaceShip = { speed = 1, rotation = 1 } myspaceship.lua require "globalsettings" -- include file FastSpaceship = inherit(SpaceShip) FastSpaceship.Speed = 10 FastSpaceship.Rotation = 5 MotherShip = inherit(FastSpaceship) MotherShip.ShieldRecharge = 4 MotherShip.WeaponA = { Power = 10, Range = 20, Style = "fireball" } SlowMotherShip = inherit(MotherShip) SlowMotherShip.Speed = 4 Using the print function in Lua it's also easy to test whether the settings are correct. The syntax isn't quite as nice as I would like, but it's so close to what I want that I won't mind writing out a bit more. Then, using the code at http://windrealm.com/tutorials/reading-a-lua-configuration-file-from-c.php I can read the settings into my C++ program
1,228,777
1,228,882
Visual Studio 2008, Runtime Libraries usage advice
I would like some information on the runtime libraries for Visual Studio 2008. Most specifically when should I consider the DLL versions and when should I consider the Static versions. The Visual Studio documentation delineates the technical differences in terms of DLL dependencies and linked libraries. But I'm left wondering why I should want to use one over the other. More important, why should I want to use the multi-threaded DLL runtime when this obviously forces my application into a DLL dependency, whereas the static runtime has no such requirement on my application user machine.
Larry Osterman feels that you should always use the multi-threaded DLL for application programming. To summarize: Your app will be smaller Your app will load faster Your app will support multiple threads without changing the library dependency Your app can be split into multiple DLLs more easily (since there will only be one instance of the runtime library loaded) Your app will automagically stay up to date with security fixes shipped by Microsoft Please read his whole blog post for full details. On the downside, you need to redistribute the runtime library, but that's commonly done and you can find documentation on how to include it in your installer.
1,229,050
1,229,100
How to pass bool from c# through c++ com interface in idl
I know I'm missing something simple, I have next to no experience with these COM things. I would like to do this within an interface in an IDL: [id(5), helpstring("Returns true if the object is in a valid state.")] HRESULT IsValid([out, retval] boolean bValid); However this gives: [out] parameter is not a pointer. Ok, I understand that. However, in the C# code implementing this, I can't return a bool* from the method IsValid() because it is unsafe. What is the correct way for me to return the boolean value?
Try: HRESULT IsValid([out, retval] VARIANT_BOOL *bValid); In order to work as an output, it has to be a pointer to the value; this is how it will be written to on the C++ side: *bValue = VARIANT_TRUE; I don't know if you can write the type as boolean - I've only ever seen VARIANT_BOOL being used. On the C# side, it will become equivalent to: public bool IsValid() In other words, the runtime callable wrapper (RCW) will implement a "nicer" version of the method signature and take care of the unsafe translation for you. If the C++ implementation returns E_FAIL (or E_WHATEVER), then the RCW's method will throw a ComException. You might also consider adding the [propget] attribute, which will make IsValid available as a property instead of a method. As with any property, only do this if it is cheap to evaluate and has no side effects (the debugger will evaluate it as you step through C# code).
1,229,241
1,229,277
How do I force a program to appear to run out of memory?
I have a C/C++ program that might be hanging when it runs out of memory. We discovered this by running many copies at the same time. I want to debug the program without completely destroying performance on the development machine. Is there a way to limit the memory available so that a new or malloc will return a NULL pointer after, say, 500K of memory has been requested?
Try turning the question on its head and asking how to limit the amount of memory an OS will allow your process to use. Try looking into http://ss64.com/bash/ulimit.html Try say: ulimit -v Here is another link that's a little old but gives a little more background: http://www.network-theory.co.uk/docs/gccintro/gccintro_77.html
1,229,429
1,229,530
Using SQL statements to query in-memory objects
Suppose I have a collection of C++ objects in memory and would like to query them using an SQL statement. I’m willing to implement some type of interface to expose the objects’ properties like columns of a database row. Is there a library available to accomplish this? In essence, I’m trying to accomplish something like LINQ without using the .NET platform.
C++ objects are not the same thing as SQL tables. If you want to use SQL syntax to query the objects, you will first need to map/persist them into a table structure (ORM, object-relational-mapping). There are a number of fine ORM solutions out there besides Linq. Once you have your objects represented in SQL tables, you should look to the SQL engine to do the heavy lifting. Most SQL platforms can be configured to keep a table mostly or always in memory. As an alternative, you might consider a system specifically designed to cache objects. On Linux, memcached is a leading choice.
1,229,430
1,229,542
How do I prevent my 'unused' global variables being compiled out?
I'm using static initialisation to ease the process of registering some classes with a factory in C++. Unfortunately, I think the compiler is optimising out the 'unused' objects which are meant to do the useful work in their constructors. Is there any way to tell the compiler not to optimise out a global variable? class SomeClass { public: SomeClass() { /* do something useful */ } }; SomeClass instance; My breakpoint in SomeClass's constructor doesn't get hit. In my actual code, SomeClass is in a header file and instance is in a source file, more or less alone. EDIT: As guessed by KJAWolf, this code is actually compiled into a static lib, not the executable. Its purpose is to register some types also provided by the static lib with a static list of types and their creators, for a factory to then read from on construction. Since these types are provided with the lib, adding this code to the executable is undesirable. Also I discovered that by moving the code to another source file that contains other existing code, it works fine. It seems that having a file purely consisting of these global objects is what's causing the problem. It's as if that translation unit was entirely ignored.
The compiler is not allowed to optimize away global objects, even if they are never used. Something else is happening in your code. Now, if you built a static library with your global object, and that global object is not referenced from the executable, it will not be pulled into the executable by the linker.
1,229,433
1,229,448
Manual invocation of constructor?
Suppose I am allocating an arbitrary block of memory. Part of this block is atomic data (ints, bytes, etc.) and some of this block of data I want to be occupied by objects. Can I turn any arbitrary piece of memory into an object through a constructor call, such as data->MyObject () and subsequently destroying the object via data->~MyObject(), or is this impossible?
What you are looking for is called placement new.
1,229,441
1,229,459
Simultaneous C++ development on Linux and Windows
We have a handful of developers working on a non-commercial (read: just for fun) cross-platform C++ project. We've already identified all the cross-platform libraries we'll need. However, some of our developers prefer to use Microsoft Visual C++ 2008, others prefer to code in Emacs on GNU/Linux. We're wondering if it is possible for all of us to work more or less simultaneously out of both environments, from the same code repository. Ultimately we want the project to compile cleanly on both platforms from the start. Any of our developers are happily willing to switch over to the other environment if this is not possible. We all use both Linux and Windows on a regular basis and enjoy both, so this isn't a question of trying to educate one set devs about the virtues of the other platform. This is about each of us being able to develop in the environment we enjoy most yet still collaborate on a fun project. Any suggestions or experiences to share?
Use CMake to manage your build files. This will let you setup a single repository, with one set of text files in it. Each dev can then run the appropriate cmake scripts to build the correct build environment for their system (Visual Studio 2008/2005/GNU C++ build scripts/etc). There are many advantages here: Each dev can use their own build environment Dependencies can be handled very cleanly, including platform specific deps. Builds can be out of source, which helps prevent accidentally committing inappropriate files Easy migration to new dev. environments (ie: when VS 2010 is released, some devs can migrate there just by rebuilding their build folder)
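A minimal top-level CMakeLists.txt for such a setup might look like this (project name and source paths are invented for illustration):

```cmake
cmake_minimum_required(VERSION 2.6)
project(FunProject CXX)

add_executable(funapp src/main.cpp)

# Platform-specific dependencies are handled cleanly in one place:
if(WIN32)
    target_link_libraries(funapp ws2_32)
endif()
```

Each developer then generates their own build environment out of source, e.g. `cmake -G "Visual Studio 9 2008" ..` on Windows, or `cmake .. && make` for the Emacs/Linux crowd.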
1,229,728
1,229,746
Serialize a structure in C# to C++ and vice versa
Is there an easy way to serialize a C# structure and then deserialize it from c++. I know that we can serialize csharp structure to xml data, but I would have to implement xml deserializer in c++. what kind of serializer in C# would be the easiest one to deserialize from c++? I wanted two applications (one C++ and another csharp ) to be able to communicate using structures of data
Try Google Protocol Buffers. There are a bunch of .NET implementations of it.
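The idea is that both sides generate code from one shared schema file. A tiny .proto (proto2 syntax; message and field names are invented for illustration) could be:

```proto
// order.proto -- compiled with protoc for the C++ side, and with one of
// the .NET implementations (e.g. protobuf-net) for the C# side.
message Order {
    required int32 id = 1;
    optional string customer = 2;
    repeated double amounts = 3;
}
```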
1,229,786
1,229,965
Using Boost.Thread headers with MSVC Language Extensions disabled
I just discovered that when Language Extensions are disabled in MSVC, you get this error if you try to include boost/thread/thread.hpp: fatal error C1189: #error : "Threading support unavaliable: it has been explicitly disabled with BOOST_DISABLE_THREADS" It seems that when Boost detects that language extensions are disabled (_MSC_EXTENSIONS isn't defined), they define BOOST_DISABLE_WIN32, to indicate that it is not safe to include windows.h (which won't compile without extensions enabled). And as a consequence of that #define, BOOST_DISABLE_THREADS is defined, even though Boost.Thread isn't a header-only library, and windows.h is only included in the .cpp files. The headers should in principle be safe to use without language extensions. All the actual win32 calls are isolated in the compiled library (the .dll or .lib). I can see here that they're aware of the problem, but as it's remained untouched for the last two years, it's probably naive to hope for a quick fix. It seems like it should be a fairly simple case of modifying some of the #ifdef's and #defines in the various Boost configuration files, but there are a lot of them, and they define and use a lot of macros whose purpose isn't clear to me. Does anyone know of a simple hack or workaround to allow inclusion of the Boost.Thread headers when language extensions are disabled?
I don't see any simple way to turn off the behavior. You could wrap the block with your own #ifdef starting at boost\config\suffix.hpp(214): #ifndef TEMP_HACK_DONT_DISABLE_WIN32_THREADS // XXX TODO FIXME #if defined(BOOST_DISABLE_WIN32) && defined(_WIN32) \ && !defined(BOOST_DISABLE_THREADS) && !defined(BOOST_HAS_PTHREADS) # define BOOST_DISABLE_THREADS #endif #endif // ndef TEMP_HACK_DONT_DISABLE_WIN32_THREADS Not a perfect fix, but it should be temporary until you can get them to fix it upstream. The boost stuff is good, but it's not immutable in its perfection. Of course, make some kind of tracking item so you don't lose track of your divergence from upstream.
1,230,006
1,230,021
C++ Overriding Methods
I can't figure out what is up with this. I have a Scene class that has a vector of Entities and allows you to add and get Entities from the scene: class Scene { private: // -- PRIVATE DATA ------ vector<Entity> entityList; public: // -- STRUCTORS --------- Scene(); // -- PUBLIC METHODS ---- void addEntity(Entity); // Add entity to list Entity getEntity(int); // Get entity from list int entityCount(); }; My Entity class is as follows (output is for testing): class Entity { public: virtual void draw() { cout << "No" << endl; }; }; And then I have a Polygon class that inherits from Entity: class Polygon: public Entity { private: // -- PRIVATE DATA ------ vector<Point2D> vertexList; // List of vertices public: // -- STRUCTORS --------- Polygon() {}; // Default constructor Polygon(vector<Point2D>); // Declare polygon by points // -- PUBLIC METHODS ---- int vertexCount(); // Return number of vertices void addVertex(Point2D); // Add vertex void draw() { cout << "Yes" << endl; }; // Draw polygon // -- ACCESSORS --------- Point2D getVertex(int); // Return vertex }; As you can see, it has a draw() method that should override the draw() method it inherits from the Entity class. But it doesn't. When using the following code: scene->getEntity(0).draw(); where entity 0 is a Polygon (or at least should be), it prints "No" from the parent method (as though it's not a Polygon, just an Entity). In fact, it doesn't seem to let me call any methods unique to Polygon without getting: 'some method name' : is not a member of 'Entity' So any idea what's up? Thanks for the help. UPDATE: So I've implemented the code given in the first answer, but I'm not sure how to add my polygon to the list. Something like this? const tr1::shared_ptr<Entity>& poly = new Polygon; poly->addVertex(Point2D(100,100)); poly->addVertex(Point2D(100,200)); poly->addVertex(Point2D(200,200)); poly->addVertex(Point2D(200,100)); scene->addEntity(poly); I'm just not used to this shared_ptr business.
I think that you need to post your calling code, but the essential problem is this. You have a concrete class Polygon deriving from another concrete class Entity. Your addEntity and getEntity functions take and return an Entity by value so if you try to pass in or retrieve an Entity, you will copy only the Entity part of that object (slicing it) and the information about the derived part of the object will be lost. In addition you have a vector of Entity, which is a vector of base class objects, so you have no way of storing anything other than the base type of object. If you need to have a collection of a mixed type of objects, but all derived from Entity, you may need to use dynamically created objects and some sort of smart pointer such as a tr1::shared_ptr or a boost::shared_ptr. E.g. class Scene { private: // -- PRIVATE DATA ------ vector< std::tr1::shared_ptr<Entity> > entityList; public: // -- STRUCTORS --------- Scene(); // -- PUBLIC METHODS ---- void addEntity( const std::tr1::shared_ptr<Entity>& ); // Add entity to list const std::tr1::shared_ptr<Entity> getEntity(int); // Get entity from list int entityCount(); }; Edit Your updated calling code is essentially correct, although using a local const reference to a shared pointer is a bit obscure. I'd probably go with something like: std::tr1::shared_ptr<Polygon> poly( new Polygon ); poly->addVertex(Point2D(100,100)); poly->addVertex(Point2D(100,200)); poly->addVertex(Point2D(200,200)); poly->addVertex(Point2D(200,100)); scene->addEntity(poly);
1,230,065
1,234,024
Speedup Matlab to C++ Conversion
I have some Matlab image processing code which runs pretty slowly and I'm prepared to convert it over to C/C++. I don't really know much about how matlab works and how code is executed but I'm just interested to hear what kind of speedups I might expect. Clearly there are many variables that will affect this but I'm just looking for a guide perhaps from your own experiences. Thanks Zenna
It mostly depends on the tightness of your loops in Matlab. If you are simply calling a series of built-in Matlab image processing functions, you will most likely not be able to improve performance (most likely you will hurt it). If you are looping over image pixels or doing some kind of block processing, you may see big improvements. If you are doing some looping, but the amount of processing within each iteration is substantial, you may only see little or no improvement. The way I look at Matlab is that every executed line has some amount of overhead. If you can put your solution into the form of a matrix multiply, or some other vector/matrix operation, you only suffer that overhead once and it is negligible. However, with loops, you suffer that overhead every time the loop iterates. Also, most of Matlab's image processing functions are just making calls out to optimized libraries, so don't try to recreate them unless you know for sure where they can be improved. I found that the best approach is to use a combination of C and Matlab. I use Matlab when the operation can be easily vectorized (put in terms of vector/matrix operations). This may mean coming at the solution from a different angle than what seems the most straightforward. Also, it is hard to beat Matlab's plotting and visualization so I would definitely not move to an all C/C++ solution unless you have a plan for how to display with C/C++ (if that is part of your project). If I can't come up with a relatively easy way to vectorize, I just implement the part of processing that needs tight loops in a C mex function that can be called from Matlab. I tend to use C instead of C++ in this case since the process should be relatively small and not need a lot of complicated data abstraction, but C++ would work fine too. Make sure you access image data in column-major order to maximize cache hits since this is how Matlab organizes its matrices.
1,230,222
1,230,530
Selected Rows in QTableView, copy to QClipboard
I have an SQLite database and I loaded it into a QSqlTableModel. To show the database, I put that model into a QTableView. Now I want to create a method where the selected rows (or the whole line) will be copied into the QClipboard. After that I want to insert it into my OpenOffice Calc document. But I have no idea what to do with the selected SIGNAL and the QModelIndex, and how to put this into the clipboard.
To actually capture the selection you use the item view's selection model to get a list of indices. Given that you have a QTableView * called view you get the selection this way: QAbstractItemModel * model = view->model(); QItemSelectionModel * selection = view->selectionModel(); QModelIndexList indexes = selection->selectedIndexes(); Then loop through the index list calling model->data(index) on each index. Convert the data to a string if it isn't already and concatenate each string together. Then you can use QClipboard::setText to paste the result to the clipboard. Note that, for Excel and Calc, each column is separated from the next by a tab ("\t") and each row is separated by a newline ("\n"). You have to check the indices to determine when you move to the next row. QString selected_text; // You need a pair of indexes to find the row changes QModelIndex previous = indexes.first(); indexes.removeFirst(); selected_text.append(model->data(previous).toString()); foreach(const QModelIndex &current, indexes) { // If the row number changed, the previous cell ended a row, so insert // a row separator (newline); otherwise insert a column separator (tab). if (current.row() != previous.row()) { selected_text.append('\n'); } else { selected_text.append('\t'); } QVariant data = model->data(current); QString text = data.toString(); // At this point `text` contains the text in one cell selected_text.append(text); previous = current; } QApplication::clipboard()->setText(selected_text); Warning: I have not had a chance to try this code, but a PyQt equivalent works.
1,230,260
1,230,319
MFC CEdit Ctrl Question
I have a CEdit control that I want to be able to take time input from. Now I want this input to come in the form hh:mm:ss. Currently I am using a separate CEdit control for hour, mins, & secs. I know I could require the user enter in colons to separate hours, mins, secs, but this I believe will get confusing for my users. I actually want my control to show the colons, and have the different sections of the control to be tab stops, so that it is clear to the user what time exactly they are entering in. I know I have seen this elsewhere, and I just don't know how to do it myself. Ideally these would come in as 3 separate strings, because I am not using Epoch time, or any other type of system time, but am using my own time count. (ie. how many data samples we are into the file.) Meaning each time, my clock starts at zero, and counts up from there. Thanks Dan
Reformatting the text is simple enough, although I would wait until a lost focus message rather than insert colons while the user is typing, it gets confusing especially if they need to edit or delete a character. You can implement tab stops within the field by getting VK_TAB but I'm not sure I would do this - users are used to tabs jumping to the next control not to positions within a control. Another way to do this is to have 3 separate controls but detect when the user has entered enough characters for the first, or entered a tab (or colon) and then automatically switch focus to the next one. I think this is neater, it's what the IP_ADDRESS control does.
1,230,423
1,230,558
C++ : handle resources if constructors may throw exceptions (Reference to FAQ 17.4]
Thanks for all the responses. I reformatted my question to ask about the state of the member pointer after the containing class's constructor throws an exception. Again, my example class :) class Foo { public: Foo() { int error = 0; p = new Fred; throw error; // Force throw, trying to understand what will happen to p } ~Foo() { if (p) { delete p; p = 0; } } private: Fred* p; }; int main() { try { Foo* lptr = new Foo; } catch (...) {} } The constructor for class Foo would throw an exception for some random reason. I understand that the destructor of Foo will never be called, but in this case will the destructor for p get called? What difference does it make to have p as a boost smart pointer rather than a raw pointer to Fred? Thanks.
There is a similar question here that covers what you're asking. In this case, if the call to new fails, then the memory for the pointer is guaranteed to be freed. If the call succeeds and the constructor throws after that, you will have a memory leak. The destructor of the class will not be called, because the object was never fully constructed. There are two ways to fix this. 1) Have exceptions fully managed in the constructor: class Foo { public: Foo() { p = new int; try { throw /* something */; } catch (...) { delete p; throw; // rethrow. no memory leak } } private: int *p; }; 2) Or use a smart pointer. By the time a constructor's body is entered, all of its members have been constructed. And when a constructor throws, any members that were already constructed are destructed. A smart pointer takes advantage of that: class Foo { public: Foo() : p(new int) { throw /* something */; } private: std::auto_ptr<int> p; };
1,230,450
1,234,159
error: syntax error before '@' token (why?)
I include the amalgamation sqlite code in my iPhone project, and remove the reference to the iPhone sqlite framework. My main target compile fine. I have a second target for unit testing with the google framework. When compile I get: error: syntax error before '@' token I don't understand why. I have set both projects to sdk 2. UPDATE: I include the link to the sqlite code & google. I must add that the target compile just fine for months before I added the sqlite code. I don't post sample code because I get 1263 errors - so I get error in all files -, but this is a sample traceback: @class NSString, Protocol; <== ERROR HERE Traceback: cd /Users/trtrrtrtr/mamcx/projects/JhonSell/iPhone setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin" /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.0 -x c-header -arch i386 -fmessage-length=0 -pipe -std=c99 -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -D__IPHONE_OS_VERSION_MIN_REQUIRED=20000 -isysroot /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator2.2.1.sdk -fvisibility=hidden -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/trtrrtrtr/mamcx/projects/JhonSell/iPhone/build/JhonSell.build/Debug-iphonesimulator/Testing.build/Testing-generated-files.hmap -I/Users/trtrrtrtr/mamcx/projects/JhonSell/iPhone/build/JhonSell.build/Debug-iphonesimulator/Testing.build/Testing-own-target-headers.hmap -I/Users/trtrrtrtr/mamcx/projects/JhonSell/iPhone/build/JhonSell.build/Debug-iphonesimulator/Testing.build/Testing-all-target-headers.hmap -iquote /Users/trtrrtrtr/mamcx/projects/JhonSell/iPhone/build/JhonSell.build/Debug-iphonesimulator/Testing.build/Testing-project-headers.hmap -F/Users/trtrrtrtr/mamcx/projects/JhonSell/iPhone/build/Debug-iphonesimulator -F/Volumes/CrashReporter-1.0-rc2/CrashReporter-iPhone -F/Users/trtrrtrtr/mamcx/projects/JhonSell/iPhone 
-I/Users/trtrrtrtr/mamcx/projects/JhonSell/iPhone/build/Debug-iphonesimulator/include -I/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator2.2.1.sdk/usr/include/libxml2 "-I/Developer/RemObjects Software/Source" -I/Users/trtrrtrtr/mamcx/projects/JhonSell/iPhone/build/JhonSell.build/Debug-iphonesimulator/Testing.build/DerivedSources/i386 -I/Users/trtrrtrtr/mamcx/projects/JhonSell/iPhone/build/JhonSell.build/Debug-iphonesimulator/Testing.build/DerivedSources -c /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator2.2.1.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIKit.h -o /var/folders/EA/EAmC8fuyElexZfnpnjdyr++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/UIKit-dqqtnrciylhdtjbmyglpcezxchmz/UIKit.h.gch In file included from /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator2.2.1.sdk/System/Library/Frameworks/Foundation.framework/Headers/Foundation.h:12, from /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator2.2.1.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIAccelerometer.h:8, from /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator2.2.1.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIKit.h:9: /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator2.2.1.sdk/System/Library/Frameworks/Foundation.framework/Headers/NSObjCRuntime.h:120: error: syntax error before '@' token
I finally figured out the problem. I copied this from the iPhone target to the Testing target: GCC_DYNAMIC_NO_PIC = NO GCC_OPTIMIZATION_LEVEL = 0 GCC_PRECOMPILE_PREFIX_HEADER = YES GCC_PREFIX_HEADER = JhonSell_Prefix.pch GCC_PREPROCESSOR_DEFINITIONS = DEBUG But why did I not have this issue before? I truly don't understand.
1,230,598
1,230,614
Non-destructible read from a stream
Is it possible to try to read from a stream without changing the stream itself (and return a bool indicating whether it was a success)? template <typename T> bool SilentRead (stringstream& s, T& value) { stringstream tmp = s; tmp >> value; return tmp; } This doesn't work because stringstream doesn't have a public copy constructor. How can it be done, then? Is it possible to solve it if we replace stringstream with istream?
stringstream, referring to this, allows you to use tellg and seekg to get / set the read position. So you could: 1. Get the current position 2. Read 3. Seek back to the position you saved in step 1.
1,230,677
1,230,684
How does the compiler determine which member functions mutate?
A comment to one of my posts interested me: Me too. I also give accessors/mutators the same name. I was wondering about this, because I have always used setBar(int bar) instead of a mutator named the same thing. I want to know: can the compiler determine based on a const identifier what mutates at runtime, or can it use the same function name because it has a parameter? Will this compile fine: class Foo { int bar_; public: int bar() { return bar_; } void bar(int bar) { bar_ = bar; } } Or do I have to do this (I realize I should be doing this anyways, just run with me on this): int bar() const { return bar_; } I don't know which is which. Const correctness is important, so I think I would want the compiler to object to the overloading since one mutates and one does not. Why does it work this way?
The first thing the compiler looks at is the number and type of parameters you're passing to the function. This resolves the overload on bar before it even needs to look at const-ness. If you fail to mark bar() as const, the compiler will inform you of this the first time you attempt to call bar() on a const instance of the object.
1,230,915
1,230,979
Static Pointer to Dynamically allocated array
So the question is relatively straightforward, I have several semi-large lookup tables ~500kb apiece. Now these exact same tables are used by several class instantiations (maybe lots), with this in mind I don't want to store the same tables in each class. So I can either dump the entire tables onto the stack as 'static' members, or I can have 'static' pointers to these tables. In either case the constructor for the class will check whether they are initialized and do so if not. However, my question is, if I choose the static pointers to the tables (so as not to abuse the stack space) what is a good method for appropriately cleaning these up. Also note that I have considered using boost::shared_ptr, but opted not to, this is a very small project and I am not looking to add any dependencies. Thanks
Static members will never be allocated on the stack. When you declare them (which of course, you do explicitly), they're assigned space somewhere (a data segment?). If it makes sense that the lookup tables are members of the class, then make them static members! When a class is instanced on the stack, the static member variables don't form part of the stack cost. If, for instance, you want: class MyClass { ... static int LookUpTable[LARGENUM]; }; int MyClass::LookUpTable[LARGENUM]; When you instance MyClass on the stack, MyClass::LookUpTable refers to the object that you've explicitly defined on the last line of the code sample above. Best of all, there's no need to deallocate it, since it's essentially a global variable; it can't leak, since it's not on the heap.
1,231,178
3,803,333
Load an PEM encoded X.509 certificate into Windows CryptoAPI
I need to load a PEM encoded X.509 certificate into a Windows Crypto API context to use with C++. They are the ones that have -----BEGIN RSA XXX KEY----- and -----END RSA XXX KEY-----. I found examples for Python and .NET but they use specific functions I can't relate to the plain Windows Crypto API. I understand how to encrypt/decrypt once I've got a HCRYPTKEY. BUT, I just don't get how to import the Base64 blob in the .PEM file(s) and get a HCRYPTKEY that I can use out of it. I have that strange feeling that there is more to it than simply calling CryptDecodeObject(). Any pointers that can put me on track? I've already lost two days doing "trial & error" programming and getting nowhere.
KJKHyperion said in his answer: I discovered the "magic" sequence of calls to import a RSA public key in PEM format. Here you go: decode the key into a binary blob with CryptStringToBinary; pass CRYPT_STRING_BASE64HEADER in dwFlags decode the binary key blob into a CERT_PUBLIC_KEY_INFO with CryptDecodeObjectEx; pass X509_ASN_ENCODING in dwCertEncodingType and X509_PUBLIC_KEY_INFO in lpszStructType decode the PublicKey blob from the CERT_PUBLIC_KEY_INFO into a RSA key blob with CryptDecodeObjectEx; pass X509_ASN_ENCODING in dwCertEncodingType and RSA_CSP_PUBLICKEYBLOB in lpszStructType import the RSA key blob with CryptImportKey This sequence really helped me understand what's going on, but it didn't work for me as-is. The second call to CryptDecodeObjectEx gave me an error: "ASN.1 bad tag value met". After many attempts at understanding Microsoft documentation, I finally realized that the output of the first decode cannot be decoded as ASN again, and that it is actually ready for import. With this understanding I found the answer in the following link: http://www.ms-news.net/f2748/problem-importing-public-key-4052577.html Following is my own program that imports a public key from a .pem file to a CryptApi context: int main() { char pemPubKey[2048]; DWORD readLen; BYTE derPubKey[2048]; DWORD derPubKeyLen = 2048; CERT_PUBLIC_KEY_INFO *publicKeyInfo; DWORD publicKeyInfoLen; HANDLE hFile; HCRYPTPROV hProv = 0; HCRYPTKEY hKey = 0; /* * Read the public key cert from the file */ hFile = CreateFileA( "c:\\pub.pem", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL ); if ( hFile == INVALID_HANDLE_VALUE ) { fprintf( stderr, "Failed to open file. error: %d\n", GetLastError() ); return -1; } if ( !ReadFile( hFile, pemPubKey, sizeof(pemPubKey) - 1, &readLen, NULL ) ) { fprintf( stderr, "Failed to read file. error: %d\n", GetLastError() ); return -1; } pemPubKey[readLen] = '\0'; /* CryptStringToBinaryA with cchString = 0 expects a nul-terminated string */ /* * Convert from PEM format to DER format - removes header and footer and decodes from base64 */ if ( !CryptStringToBinaryA( pemPubKey, 0, CRYPT_STRING_BASE64HEADER, derPubKey, &derPubKeyLen, NULL, NULL ) ) { fprintf( stderr, "CryptStringToBinary failed. Err: %d\n", GetLastError() ); return -1; } /* * Decode from DER format to CERT_PUBLIC_KEY_INFO */ if ( !CryptDecodeObjectEx( X509_ASN_ENCODING, X509_PUBLIC_KEY_INFO, derPubKey, derPubKeyLen, CRYPT_DECODE_ALLOC_FLAG, NULL, &publicKeyInfo, &publicKeyInfoLen ) ) { fprintf( stderr, "CryptDecodeObjectEx 1 failed. Err: %d\n", GetLastError() ); return -1; } /* * Acquire context */ if ( !CryptAcquireContext( &hProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT ) ) { printf( "CryptAcquireContext failed - err=0x%x.\n", GetLastError() ); return -1; } /* * Import the public key using the context */ if ( !CryptImportPublicKeyInfo( hProv, X509_ASN_ENCODING, publicKeyInfo, &hKey ) ) { fprintf( stderr, "CryptImportPublicKeyInfo failed. error: %d\n", GetLastError() ); return -1; } LocalFree( publicKeyInfo ); /* * Now use hKey to encrypt whatever you need. */ return 0; }
1,231,433
1,231,475
Strange backtrace - where is the error?
I'm developing an image processing application in C++. I've seen a lot of compiler errors and backtraces, but this one is new to me. #0 0xb80c5430 in __kernel_vsyscall () #1 0xb7d1b6d0 in raise () from /lib/tls/i686/cmov/libc.so.6 #2 0xb7d1d098 in abort () from /lib/tls/i686/cmov/libc.so.6 #3 0xb7d5924d in ?? () from /lib/tls/i686/cmov/libc.so.6 #4 0xb7d62276 in ?? () from /lib/tls/i686/cmov/libc.so.6 #5 0xb7d639c5 in malloc () from /lib/tls/i686/cmov/libc.so.6 #6 0xb7f42f47 in operator new () from /usr/lib/libstdc++.so.6 #7 0x0805bd20 in Image<Color>::fft (this=0xb467640) at ../image_processing/image.cpp:545 What's happening here? The operator new is crashing, ok. But why? It's not an out-of-memory condition (it tries to allocate about 128Kb, a 128x64 pixel image with two floats each). Also, it doesn't seem as if it's an error in my own code (the constructor doesn't get touched!). The code in the mentioned line (#7) is: Image<Complex> *result = new Image<Complex>(this->resX, resY); // this->resX = 128, resY = 64 (both int), Complex is a typedef for std::complex<float> Almost the same instantiation works in other places in my code. If I comment out this part of the code, it will crash a bit later on a similar part. I don't understand it, and I also don't have any ideas how to debug it. Any help? Compiler is gcc 4.3.3, libc is 2.9 (both from Ubuntu Jaunty) Update: I've included the following lines just above the faulty line in the same method and in main() Image<Complex> *test = new Image<Complex>(128, 64); delete test; The strange thing: in the same method it will crash, in main() it won't. As I mentioned, Complex is a typedef of std::complex<float>. The constructor doesn't get called, I've inserted a cout just before this line and in the constructor itself. Update 2: Thanks to KPexEA for this tip!
I tried this: Image<Complex> *test = new Image<Complex>(128, 64); delete test; kiss_fft_cpx *output = (kiss_fft_cpx*) malloc( this->resX * this->resY/2 * sizeof(kiss_fft_cpx) ); kiss_fftndr( cfg, input, output ); Image<Complex> *test2 = new Image<Complex>(128, 64); delete test2; It crashes at - can you guess? - test2! So the malloc for my kissfft seems to be the faulty one. I'll take a look at it. Final update: Ok, it's done! Thanks to all of you! Actually, I should have noticed it before. Last week, I noticed that kissfft (a fast fourier transform library) made a 130x64 pixel fft image from a 128x128 pixel source image. Yes, 130 pixels wide, not 128. Don't ask me why, I don't know! So 130x64x2xsizeof(float) bytes had to be allocated, not 128x64x... as I thought before. Strange that it didn't crash immediately, but only some days later. For the record, my final code is: int resY = (int) ceil(this->resY/2); kiss_fft_cpx *output = (kiss_fft_cpx*) malloc( (this->resX+2) * resY * sizeof(kiss_fft_cpx) ); kiss_fftndr( cfg, input, output ); Image<Complex> *result = new Image<Complex>(this->resX, resY); Thanks! craesh
Perhaps a previously allocated chunk of memory has a buffer overflow that is corrupting the heap?
1,231,685
1,231,693
How do I display more decimals in the output console?
I want to output the value of a double at its full precision. However, when using cout, it only displays the first 6 digits even though there are around 15-16 digits of precision. How do I get my program to display the entire value, including the magnitude (power) component?
Use the setprecision() manipulator: http://www.cplusplus.com/reference/iostream/manipulators/setprecision/ You can also force scientific notation with the scientific manipulator: http://www.cplusplus.com/reference/iostream/manipulators/scientific/ cout << scientific << setprecision(15) << my_number << endl;
1,231,788
1,231,794
How do I initialize a const std::pair?
Let's say that I've got a : #include <utility> using namespace std; typedef pair<int, int> my_pair; how do I initialize a const my_pair ?
Use its constructor: const my_pair p( 1, 2 );
1,231,899
1,231,918
Check if a char* buffer contains UTF8 characters?
In the absence of a BOM is there a quick and dirty way in which I can check if a char* buffer contains UTF8 characters?
Not reliably. See Raymond Chen's series of posts on the subject. The problem is that UTF-8 without a BOM is all too often indistinguishable from equally valid ANSI encoding. I think most solutions (like the win32 API IsTextUnicode) use various heuristics to give a best guess to the format of the text.
1,231,942
1,232,035
Learning C++ without an IDE
I've recently started to learn C++ and am completely confused with the choices of IDEs and compilers out there. I am competent with interpreted languages and like the simplicity of using any IDE or text editor and then running the interpreter from the command line. Everything works as I expect, regardless of the IDE used, because I use the same interpreter each time. Now that I have started learning C++ I am overwhelmed by the choice of different compilers and more importantly, their differences. It seems that things will be simpler for me (not necessarily easier) if, while learning, I use a text editor and a compiler that I run from the command line. I have a basic understanding of how compiling and linking works and I understand the role of header files. Firstly, are there any books or websites that teach C++ from this approach? (IDE-less) Many books try to point out the differences between IDEs and compilers by selecting two and comparing them, which confuses me. Secondly, how should I set up my workflow? (Ignore the choice of text editor, I am talking about compilers, linkers etc.) I am struggling to understand what differences different compilers have and so please bear this in mind when answering. It seems like the most popular compilers are g++ and CL. Similar question but I am more interested in why some programs will work with some compilers and not others: C++ Compiler for Windows without IDE? Further information: I am developing on Windows and from what I understand, it seems that there is 'pure' C++ and then C++ that is somehow related to windows, is this Visual C++? I would like to write programs that make use of Windows features but I want to know when I am using windows features and when I am writting code that would work on any platform. Update: So it seems that I shouldn't be worrying about compilers when I am just starting out. 
The reason for me wanting to understand the differences is because I don't want to write code for a specific compiler and get into bad habits. Is this a non-issue?
Firstly, are there any books or websites that teach C++ from this approach? (IDE-less) Yes, definitely. Stroustrup's book has already been mentioned. For learning C++ I'd also recommend two other books: if you like thorough explanations and don't shy away from 1000 pages, look at Lippman et al.; if you prefer a short introduction and don't fear a steep learning curve, look at Koenig/Moo. Both are excellent books. (BTW, a good place to look for good books has always been the book review section at the ACCU.) As for which tool chain you want to use: if you'd rather have a standalone editor and invoke the compiler from the command line, you can do this with either GCC or VC. This approach has the advantage that it is less likely to lure you into using something proprietary (like C++/CLI). If you would like to try an IDE, VC Express is fine, once you're past setting up a new C++ project. Of course, the number of options you can tweak for a new project can be very overwhelming. But on the other hand you get things like an integrated debugger. Note that there are other integrated solutions, too. The most mature and prominent is probably Eclipse. Edit: If you don't mind spending a little money, look at Comeau. It's not free, but it's not expensive either, and it's usually considered to be the most standard-conforming C++ compiler around, with excellent error messages. (You can test-drive it at the website.) Note that it emits C code, though. That means you have to have another compiler to create an executable program. But both GCC and VC Express will do, so there's no other cost. (Note that using VC you will get Dinkumware's std lib implementation, which is also considered to be a very good one.)
1,231,991
1,286,591
Unlock a thread from another process, in c++
I'm programming an interprocess communication module (Process A and Process B). Is there any way the communication thread in B can run (be unlock) as soon as process A finishes a specific action, I mean without B doing any polling nor B waiting too much after A finishes its action? Which are the concepts/models/design patterns governing these issues? (Something more precise than interprocess synchronization). Which libraries/methods do you recommend? Thanks. Edit: I'm looking for methods suitable for each of the three main OSes: Ms Windows, Apple Mac OS X, GNU/Linux.
This is quite a hard job.

For Unix OSes you can use pthread conditions and mutexes with the process-shared (pshared) attribute. Note: this is well supported under Linux 2.6 and Solaris, but it is not supported on FreeBSD or Cygwin (I don't know about Mac OS X).

On Unixes you may also use named semaphores, but I don't know their level of support.

On Windows there are events...

This is a hard job, especially for IPC... So if you want something portable, I'd suggest taking a look at Boost.Interprocess, which has conditions and mutexes. But make sure that every feature you need is supported on all the OSes you want to support.

Things you should note about Boost.Interprocess: check carefully the level of support for each Unix OS you need to work with, because Boost.Interprocess uses pthread_* functions that are not always supported... and then falls back to emulation -- check the quality of such emulation. Also, check how this works on Windows -- as far as I know there are no "in-shared-memory" mutexes in the Win32 API; generally named objects should be used, so check what is supported and how.
1,232,006
1,232,068
Initializing aggregate unions
I've got a union : union my_union { short int Int16; float Float; }; I'd like to create : const my_union u1 = ???; const my_union u2 = ???; and initialize their values to be of different types respectively : u1 -> int16 u2 -> float How do I do that ? If the above is not possible, are there any workarounds?
A union can have any number of constructors. This works for any data type without a constructor, so your example is fine as long as you exclude the string (or use a pointer to a string):

#include <string>
using namespace std;

union my_union
{
    my_union(short i16): Int16(i16){}
    my_union(float f): Float(f){}
    my_union(const string *s): str(s){}

    short int Int16;
    float Float;
    const string *str;
};

int main()
{
    const my_union u1 = (short)5;
    const my_union u2 = (float)7.;
    static const string refstr = "asdf";
    const my_union u3 = &refstr;
}

There is a more complicated way: create a class that owns the union. The class must have a selector (recording which of the data types is in use) in order to destroy the string correctly.
1,232,081
1,346,381
Heap randomization in Windows
Windows 7 has Heap randomization and Stack randomization features. How could I manage it? How they are affects performance of my application? Where I could find more information on how it works? I'm using Visual Studio 2008 for developing C++ programs. I can't find any compiler's options for that features.
Ok, heap randomization and stack randomization are Windows features, but they have to be explicitly enabled for each process at link time - which is why you won't find a compiler option for them (in Visual Studio 2008 the relevant switch is the /DYNAMICBASE linker option). Mark Russinovich describes how they work in the 5th edition of his Windows Internals book:

Stack randomization consists of first selecting one of 32 possible stack locations separated by either 64 KB or 256 KB. This base address is selected by finding the first appropriate free memory region and then choosing the xth available region, where x is once again generated based on the current processor's TSC shifted and masked into a 5-bit value. <...>

Finally, ASLR randomizes the location of the initial process heap (and subsequent heaps) when created in user mode. The RtlCreateHeap function uses another pseudo-random, TSC-derived value to determine the base address of the heap. This value, 5 bits this time, is multiplied by 64 KB to generate the final base address, starting at 0, giving a possible range of 0x00000000 to 0x001F0000 for the initial heap. Additionally, the range before the heap base address is manually deallocated in an attempt to force an access violation if an attack is doing a brute-force sweep of the entire possible heap address range.
1,232,176
1,232,195
How do I put two increment statements in a C++ 'for' loop?
I would like to increment two variables in a for-loop condition instead of one. So something like: for (int i = 0; i != 5; ++i and ++j) do_something(i, j); What is the syntax for this?
A common idiom is to use the comma operator, which evaluates both operands and returns the second operand. Thus:

for(int i = 0; i != 5; ++i,++j)
    do_something(i,j);

But is it really a comma operator?

Now, having written that, a commenter suggested it was actually some special syntactic sugar in the for statement, and not a comma operator at all. I checked that in GCC as follows:

int i=0;
int a=5;
int x=0;

for(i; i<5; x=i++,a++){
    printf("i=%d a=%d x=%d\n",i,a,x);
}

I was expecting x to pick up the original value of a, so it should have displayed 5,6,7... for x. What I got was this:

i=0 a=5 x=0
i=1 a=6 x=0
i=2 a=7 x=1
i=3 a=8 x=2
i=4 a=9 x=3

However, if I bracketed the expression to force the parser into really seeing a comma operator, I get this:

int main(){
    int i=0;
    int a=5;
    int x=0;

    for(i=0; i<5; x=(i++,a++)){
        printf("i=%d a=%d x=%d\n",i,a,x);
    }
}

i=0 a=5 x=0
i=1 a=6 x=5
i=2 a=7 x=6
i=3 a=8 x=7
i=4 a=9 x=8

Initially I thought that this showed it wasn't behaving as a comma operator at all, but as it turns out, this is simply a precedence issue - the comma operator has the lowest possible precedence, so the expression x=i++,a++ is effectively parsed as (x=i++),a++.

Thanks for all the comments - it was an interesting learning experience, and I've been using C for many years!
1,232,262
1,232,272
Memory leak in C,C++; forgot to do free,delete
We allocate memory in C using malloc and in C++ using new. I know that memory allocated must be freed or given back to OS using free in C and delete in C++. If I forgot to use free/delete after allocating memory, it means there will be memory leak. Now, my question is, is this memory leak only during the time period of execution of program; or is it permanent leak/loss or is it gained again once I restart the system ? What is the internal process actually ? What does memory leak/loss mean exactly? I would be very grateful if someone could explain this in detail or provide me with some nice references. UPDATE 1 After reading some answers, I learnt that memory is given back to OS/system after program terminates, if that is so, why everyone needs to care so much about memory leak, and why preventing memory leak is very important ? UPDATE 2 So, memory leak should be prevented so that the system doesn't crash due to lack of sufficient memory for allocation purposes ?? UPDATE 3 So, after reading all answers, I realized that memory leak is quiet important issues to prevent system crashes. But, for a beginner like me, how can I be sure if my program is completely free from memory leak. I try to do free,delete if I am using malloc,new but sometimes, it gets messy. Is there any tool or method which I can use to know if my program is doing any memory leak ? UPDATE 4 After reading answers, I have now understood the importance of memory leak free code, less use of new/delete, more use of STL, learnt new stuffs like RAII,valgrind and good programming practices. Thanks to all :)
It's per-process. Once your process exits, the allocated memory is returned to the OS for use by other processes (new or existing). To answer your edited question, there's only a finite amount of memory in your machine. So if you have a memory leak, then the major problem is that the memory isn't available for other processes to use. A secondary, but not negligible, effect is that your process image grows, you'll swap to disc and performance will be hit. Finally your program will exhaust all the memory in the system and fail, since it's unable to allocate any memory for itself. It's arguable that for a small process with a short lifetime, memory leaks are tolerable, since the leaked memory will be small in quantity and short-lived. Take a look at this resource, for possibly more info than you'll ever need. What we're discussing here is dynamic or heap allocation.
1,232,329
1,232,343
How to take output from .NET executable and convey to MFC application?
I have a dialog based MFC application through which I have to call a .NET executable. My question are: How will the MFC application know that the .NET executable is closed? if suppose a .Net executable process some information and want to convey the output to the MFC application, how can this be achieved. Please help!!
The MFC application can just wait for the .NET process to exit in the normal way - either using a wait handle or by polling it. As for collecting output - the simplest mechanisms is likely to be for the .NET executable to write to a file, and then the MFC app can read it afterwards. It's crude but very easy to implement!
1,232,505
1,232,515
Register a C# COM component?
I have developed a C# com component which I am using from managed c++. On my dev machine when everything works fine. However when I distribute the files, I received an error that the component has not been registered. When I try a regsvr32 on the dll it gives me an error (C# dlls cannot be registered). How do I properly register this COM dll?
You use regasm with /codebase (and it needs to be ComVisible [but as Patrick McDonald correctly points out, you've already got past that as it works locally])
1,232,791
1,232,923
How can I make a file selector with a combobox in VC++ 2008?
I have this dialog: ID__BATERIA __FAX DIALOGEX 0, 0, 235, 86 STYLE DS_SETFONT | DS_MODALFRAME | DS_FIXEDSYS | WS_POPUP | WS_CAPTION | WS_SYSMENU CAPTION "Nueva batería de fax" FONT 8, "MS Shell Dlg", 400, 0, 0x1 BEGIN DEFPUSHBUTTON "OK",IDOK,120,65,50,14 PUSHBUTTON "Cancel",IDCANCEL,175,65,50,14 LTEXT "Archivo",IDC_STATIC,20,12,25,8 LTEXT "Descripción",IDC_STATIC,20,40,37,8 EDITTEXT IDC_DESCBATER,65,38,120,13,ES_AUTOHSCROLL COMBOBOX IDC_ARCH2,65,10,120,60,CBS_DROPDOWN | CBS_AUTOHSCROLL | CBS_SORT | WS_VSCROLL | WS_TABSTOP END I want the combobox to be a file selector. So I wrote this: BOOL CALLBACK BateriaFaxDlg(HWND hDlg, UINT msg, WPARAM wParam, LPARAM lParam){ char descripcion[100]; char archivo[20]; switch (msg) /* manipulador del mensaje */ { case WM_INITDIALOG: SendMessage(GetDlgItem(hDlg, IDC_ARCH2), CB_DIR, DDL_READWRITE | DDL_DIRECTORY, (LPARAM)"*"); return TRUE; case WM_COMMAND: switch (LOWORD(wParam)) { case IDOK: SendDlgItemMessage(hDlg, IDC_ARCH2, WM_GETTEXT, 20, (LPARAM)archivo); GetDlgItemText(hDlg, IDC_DESCBATER, descripcion , 100); actualizarBaterias("FAX", archivo, descripcion); EndDialog(hDlg, FALSE); break; case IDCANCEL: EndDialog(hDlg, FALSE); break; case IDC_ARCH2: switch(HIWORD(wParam)) { case CBN_DBLCLK: if(DlgDirSelectEx(hDlg, archivo, 512, IDC_ARCH2)) { // DlgDirList(hDlg, "*", IDC_ARCH2, ID_TITULO, DDL_DIRECTORY | DDL_DRIVES); SendMessage(GetDlgItem(hDlg, IDC_ARCH2), CB_DIR, 0, (LPARAM)"*"); // IniciarLista(hwnd, cad); } break; } break; default: break; return TRUE; } } return FALSE; } It shows correctly the files and the directorys, but when I try to enter a directory it won't work. The thing I select is [dir] instead going inside and showing the files. Can anyone help me? Thanks a lot. UPDATE: Ok, I changed it and now it is a Simple ComboBox. Still when I double click on the directories it won't enter and list the files inside. Any ideas? UPDATE: It finally works.
From http://msdn.microsoft.com/en-us/library/bb775808.aspx "This notification message occurs only for a combo box with the CBS_SIMPLE style. In a combo box with the CBS_DROPDOWN or CBS_DROPDOWNLIST style, a double-click cannot occur because a single click closes the list box."
1,232,951
1,233,018
Are there good Patterns/Idioms for Data Translation/Transformation?
I'm sorry for the generic title of this question but I wish I was able to articulate it less generically. :-} I'd like to write a piece of software (in this case, using C++) which translates a stream of input tokens into a stream of output tokens. There are just five input tokens (lets call them 0, 1, 2, 3, 4) and each of them can have a few different attributes (like, There might be an 4.x property or 0.foo). There are a few more output tokens, about ten, let's call them (Out0..Out9) each of them also has a few properties. Now, we've been working on a mapping of sequences from input tokens to possible output tokens, like this: 01 -> Out0 34 -> Out1 0101 -> Out3 ...so different sequences of input tokens map to a single output token. In my scenario, the set of input tokens is fixed, but the set of output tokens is not - we might decide to introduce new 'productions'. My question is: Does anybody know whether there are good patterns and/or idioms which help in such a situation? Right now I have a set of 'Compressor' object, each of which can eat the input tokens and eventually produces the output tokens. The problem is that some input tokens clash, consider 'Out0' and 'Out3' in the above case. The input '0101' should yield Out3 but not Out0. However, the input '0104 should yield Out0 and then leave 0 and 4 in the queue. I'm wondering whether there are maybe patterns from data compression or other areas which might be beneficial. This work of 'reducing' an input of lowlevel tokens to highlevel tokens and dealing with possible conflicts is common among parser writers, no? Are there are useful patterns there? UPDATE: A bit more information: in my concrete case, the input tokens are C structs, and the output tokens are C++ objects. I have no control whatsoever over the input stream of tokens, but I can queue them and then modify the queue in case that is beneficial. I solved clashes (like Out3 (0101) vs. 
Out0 (01)) by trying to match Out3 first and then Out0, but it's a bit ugly. The possible productions are in a list and I simply try to apply them to the input stream, one after the other The list of possible productions can be extended by the user, so I cannot generate one huge DAG and then have a state machine which implements that to handle every possible transition. Of course, this means that the user can add clashes, but that's just the way it is.
You could define a graph, where each node contains an input token and an associated output. The links of each node describe the possible next tokens. Thus, a path in the graph describe a possible transformation rule. To transform the data, start from the node corresponding to the first input token, and try to navigate the graph on the longest path possible, matching the next input token to the nodes linked to the current node. When no linked node matches the next input node, take the output associated with the current node as the result.
1,232,964
1,232,986
Reading data from file into array of structs C++
I have a sample txt file and want to read the contents of the file into an array of structs. My persons.txt file contains 5 arbitrary nos one on each line. 7 6 4 3 2 My program looks like this: #include <iostream> #include <fstream> using namespace std; struct PersonId { typedef PersonId* ptr; PersonId(); int fId; }; istream& operator >> (istream& is, PersonId &p) { is >> p; // return is; // return p.read(is); } // istream& PersonData::read(std::istream& is) // { // is >> fId; // return is; // } int main () { ifstream indata; int i, is; indata.open("persons.txt", ios::in); // opens the file if(!indata) { // file couldn't be opened cout << "Error: file could not be opened" << endl; exit(1); } int n = 5; PersonId* p; p = (PersonId*) malloc (n * sizeof(PersonId)); while ( !indata.eof() ) { // keep reading until end-of-file // p[i].read(); indata >> is; i++; cout << "The next number is "<< is << endl; cout << "PersonId [" << i << "] is " << p[i].fId << endl; // indata >> is; // sets EOF flag if no value found } return 0; } My output looks like this: test6.cpp: In function ‘std::istream& operator>>(std::istream&, PersonId&)’: test6.cpp:27: warning: control reaches end of non-void function The next number is 7 PersonId [1] is 0 The next number is 6 PersonId [2] is 0 The next number is 4 PersonId [3] is 0 The next number is 3 PersonId [4] is 0 The next number is 2 PersonId [5] is 0
istream& operator >> (istream& is, PersonId &p)
{
    is >> p.fId;
    return is;
}

(Reading the member fId of p, not the entire structure - your version calls itself recursively.) And in the while loop in main, read the structure, not a value: instead of

indata >> is;

put

indata >> p[i];
1,233,040
1,233,351
Why should I setup a plugin interface in c++ instead of c
As a result of my previous questions I asked myself: Is it usefull at all to setup a C++ interface for a plugin system? The following points are speaking against it: No common ABI between different compilers and their versions, no common layout of the objects in memory No direct class export. You have to export factories and destructors. Problems arises if your objects are held by other objects which only delete them, for example smart pointers. Different implementations of the STL, you can't pass a std::list<T> to the plugin Different versions of used libraries like Boost If you restrain yourself to the remaining parts of the C++ language you nearly end up with the "C subset". Are there any points speaking for using C++? How do the Qt-Toolkit solve the mentioned problems? Remark: I'm referring mostly to the Linux system. Nevertheless I'm interested in solutions on other platforms. Additional question: What are the problems using a C interface? The memory layout of structs? Which language parts of C should be avoided?
Although this is more about the "how" than the "why", you may be interested in the (not yet)Boost.Extension library, as well as the author's blog on the topic. For the "why" part, my 2 (Canadian) cents: It depends on the audience (the plugin writers) and on the richness of the interface between your application and its plugins: If the audience is large or heterogeneous, the limitations of a C++ plugin system (keeping the plugin side and the app side in synch with respect to compiler and library versions) gets impractical, and a C interface is more maintainable. If the audience is small, homogeneous, or under your control, these problems are not as significant. If the interface is rich (hand-waving on the precise meaning of "rich"), a C interface may get cumbersome to write, and the balance tilts on the C++ side. However, the first criterion (the audience) is more important, and a C++ interface thus makes sense only if the audience is homogeneous and the interface significantly benefits from the expressiveness gains.
1,233,042
1,233,174
How can I separate headers, classes and main functions in C++?
Please help me in separating the classes, headers and main() in the following program. I tried my best but there is problem. #include "stdafx.h" #include<iostream> #include<string> using namespace std; class player { public: string name; string type; void getdata() { cout<<"Enter the name of the Player : "<<endl; cin>>name; cout<<"Enter the Game he play : "<<endl; cin>>type; } void display() { cout<<"The name of the Player is : "<<name<<endl; cout<<"The game he will play is : "<<type<<endl; } }; int main() { player sachin; sachin.getdata(); sachin.display(); system("pause"); return(0); }
If you want to separate your classes you should create two files: a .h and a .cpp. In the header file you place your definitions and declarations, and in the .cpp file you implement your methods.

Player.h

#ifndef PLAYER_H_
#define PLAYER_H_

#include <string>

class Player
{
public:
    Player();
    ~Player();

    // Methods
    void GetData();
    void Display();

private:
    std::string Name;
    std::string Type;
};

#endif

Player.cpp

#include <iostream>
#include "Player.h"

Player::Player(): Name(""), Type("") {}

Player::~Player() {}

void Player::GetData()
{
    std::cout << "Enter the name of the Player : " << std::endl;
    std::cin >> Name;
    std::cout << "Enter the Game he plays : " << std::endl;
    std::cin >> Type;
}

void Player::Display()
{
    std::cout << "The name of the Player is : " << Name << std::endl;
    std::cout << "The game he will play is : " << Type << std::endl;
}

Edit: Class member variables should never be public; write a set method if you have a need to modify a member variable.
1,233,381
1,233,460
linking and using a C++ library with an Objective-C application
I'm writing a graphical application using Objective-C for the front end and C++ for the graphics processing and network communication. I read around on Apple's site looking for a way to link either a .dylib or .so with my C++ code in it to my Xcode project, but nothing seemed to work. I was able to get the project to reference it and link against it, but when I tried to call functions from that .dylib, it was saying that it didn't know what I was trying to do. Does anyone know what is going on here? I know that Objective-C has all the libraries I would need to do graphics and networking, but I just feel like doing it like this. I haven't done much C++ in a while and I want to learn more Objective-C, so what better way than to use them together? Thanks, Robbie
You're going to hit one obstacle in the form of what's called "name mangling". C++ stores function names in a way not compatible with Obj-C. Objective-C doesn't implement classes in the same way as C++, so it's not going to like it. One way around this is to implement a set of simple C functions which call the C++ functions. It'll be a good challenge to keep the number of C functions as low as possible! You'll end up with a nice compact interface! :) To declare these functions in a C++ file, you'll need to mark them as C with: extern "C" int function_name(char *blob,int number, double foo) {...} This disables the standard name-mangling. Build a header file with the prototypes for all these functions that you can share with your objective C code. You won't be able to pass classes around in the same way (because your ObjC code can't use them), but you'll be able to pass pointers (although you might have to lie about the types a little).
1,233,400
1,233,577
How to circumvent Symbian naming conventions?
I'm about to write a C++ library that is to be used by a Windows application as well as on Symbian. Linux is not a current requirement but should generally be possible, too. For this reason I would like to use the STL/Boost naming conventions instead of Symbian's, which I think, are hard to get used to. This seems to already present a problem, when compiling the code with Carbide.c++ as it enforces the Symbian naming convention. How can I use "normal" names and still be Symbian compatible? I first thought about conditionally re-#define-ing class names for the Symbian platform but I fear, that this will lead to confusion. Could there occur other problems by not complying with Symbian's naming convention?
Coding conventions are not strict. They are there to make understanding code easier for us humans. If you're writing a multi-platform library, feel free to use whatever convention you are comfortable with. Of course, your library probably needs to interface with the underlying operating system in some ways. With the help of Open C/C++ libraries, you can do many things without needing to use native Symbian C++ APIs and their naming conventions. In Carbide.c++ you may want to disable CodeScanner static analysis as it is really only useful to code written in native Symbian C++. So in summary, the problems are as follows: People coming from native Symbian C++ background are not immediately familiar with your conventions Using native Symbian C++ APIs can expose some platform-specific peculiarities (exceptions vs. leaves, trap harnesses, active schedulers etc.) Symbian-specific static analyzers such as CodeScanner assume Symbian C++ code style and may generate errors/warnings you really don't need to care about
1,233,435
1,233,827
Detect compiler with #ifdef
I'm trying to build a small code that works across multiple platforms and compilers. I use assertions, most of which can be turned off, but when compiling with PGI's pgicpp using -mp for OpenMP support, it automatically uses the --no_exceptions option: everywhere in my code with a "throw" statement generates a fatal compiler error. ("support for exception handling is disabled") Is there a defined macro I can test to hide the throw statements on PGI? I usually work with gcc, which has GCC_VERSION and the like. I can't find any documentation describing these macros in PGI.
Take a look at the Pre-defined C/C++ Compiler Macros project on Sourceforge. PGI's compiler has a __PGI macro. Also, take a look at libnuwen's compiler.hh header for a decent way to 'normalize' compiler versioning macros.
1,233,501
1,237,981
read from file to array of structs within structs in C++
I have asked this question previously here and a similar question was closed. SO based on a comment from another user, I have reframed my question: In the first post, I was trying to read tha data from a file into an array with a struct.By using indata << p[i] and is >> p.fId, I was able to read values from data file into PersonId. Now I want to try this: struct PersonId { int fId; }; struct PersonData { public: typedef PersonData* Ptr; PersonData(); PersonId fId; istream& read(std::istream&); }; istream& PersonData::read(std::istream& is) { is >> fId; return is; } istream& operator >> (istream& is, PersonData &p) { // is >> p.fId; return p.read(is); } int main () { ifstream indata; // indata is like cin int i; indata.open("persons.txt", ios::in); // opens the file if(!indata) { // file couldn't be opened cout << "Error: file could not be opened" << endl; exit(1); } int n = 5; PersonData* p; p = (PersonData*) malloc (n * sizeof(PersonData)); while ( !indata.eof() ) { indata >> p[i]; i++; } for(i = 0; i < n; ++i) { cout << "PersonData [" << i << "] is " << p[i] << endl; } return 0; } I want to use member function "read" to actually read values into structures defined by PersonData. My question: How to read the data from file into PersonId struct which is stored in the PersonData struct?? While reading PersonData[i], I should see it have a struct PersonId with updated value. I hope my questions are clear now?
OK, first some grumbling :-) You say what you want. You wrote how you try. Great. I guess the result is not what you expected. But you didn't tell us what result you get and why you are disappointed with it.

As I look at your code, it shouldn't compile. The problem is here:

istream& PersonData::read(std::istream& is)
{
    is >> fId;
    return is;
}

I can't see any operator >> defined for the PersonId type, and fId is of type PersonId. Am I right? Or maybe there is an operator >> defined somewhere and you just didn't paste it into your question? My crystal ball is unclear.

If I guessed properly, the solution is given by Dave Gamble:

istream& operator >> (istream& is, PersonId &p)
{
    is >> p.fId;
    return is;
}

You wrote "still getting errors in trying to access PersonData". It seems that this time Dave's crystal ball is also unclear; he can't say what problems you have. Neither can I. You have to either provide us details or send us better crystal balls.

Maybe you missed his other advice: "Also, fix the cout to use p[i].fId.fId." It means that instead of writing

cout << "PersonData [" << i << "] is " << p[i] << endl;

you should write

cout << "PersonData [" << i << "] is " << p[i].fId.fId << endl;

There can also be another problem - you are not referring to std namespace members consistently. Sometimes you write istream, and sometimes std::istream; you write endl instead of std::endl. Maybe Koenig lookup works it out for you, I'm not good at it, but adding the std:: prefix may help (of course, only if this is your problem).
1,233,612
1,233,677
Comparing 2 graphs created by Boost Graph Library
This may be a rather novice or even wrong question so please be forgiving. Is there a way to compare 2 graphs created using the Boost Graph Library => with 1 graph created in memory and the 2nd loaded from an archive (i.e. 2nd was serialized out previously)? I don't see an operator== provided in BGL's documentation, but not sure if that means that I have to write both traversal and comparison. Any pointers to tutorials, reference pages or samples would be most helpful Thanks in advance Ganesh
Boost.Graph can do this, but not with the == operator: http://www.boost.org/doc/libs/1_39_0/libs/graph/doc/isomorphism.html It is a hard problem, so it will take a long time for large graphs.
1,233,963
1,234,050
How Operating System callbacks work
Follow up question to: This question As described in the linked question, we have an API that uses an event look that polls select() to handle user defined callbacks. I have a class using this like such: class example{ public: example(){ Timer* theTimer1 = Timer::Event::create(timeInterval,&example::FunctionName); Timer* theTimer2 = Timer::Event::create(timeInterval,&example::FunctionName); start(); cout<<pthread_self()<<endl; } private: void start(){ while(true){ if(condition) FunctionName(); sleep(1); } } void FunctionName(){ cout<<pthread_self()<<endl; //Do stuff } }; The idea behind this is that you want FunctionName to be called both if the condition is true or when the timer is up. Not a complex concept. What I am wondering, is if FunctionName will be called both in the start() function and by the callback at the same time? This could cause some memory corruption for me, as they access a non-thread safe piece of shared memory. My testing tells me that they do run in different threads (corruption only when I use the events), even though: cout<<pthread_self()<<endl; says they have the same thread id. Can someone explains to me how these callbacks get forked off? What order do they get exectued? What thread do they run in? I assume they are running in the thread that does the select(), but then when do they get the same thread id?
The real answer would depend on the implementation of Timer, but if you're getting callbacks run from the same thread, it's most likely using signals or posix timers. Either way, select() isn't involved at all. With signals and posix timers, there is very little you can do safely from the signal handler. Only certain specific signal safe calls, such as read() and write() (NOT fread() and fwrite(), or even new and cout) are allowed to be used. Typically what one will do is write() to a pipe or eventfd, then in another thread, or your main event loop running select(), notice this notification and handle it. This allows you to handle the signal in a safe manner.
1,234,031
1,234,062
How do I forward declare a class that has been typedef'd?
I have a string class that, unsurprisingly, uses a different implementation depending on whether or not UNICODE is enabled. #ifdef UNICODE typedef StringUTF16 StringT; #else typedef StringUTF8 StringT; #endif This works nicely but I currently have a problem where I need to forward declare the StringT typedef. How can I do this? I can't do typedef StringT; so it makes forward declaration tricky. Is it possible to do a forward declare of this typedef'd type without having to past the code above into the top of the header file?
Follow the example set by the iosfwd standard header. Write a header file that contains this, and call it StringTFwd.h class StringUTF16; class StringUTF8; #ifdef UNICODE typedef StringUTF16 StringT; #else typedef StringUTF8 StringT; #endif At least this is reusable and doesn't ugly up the headers that refer to it.
1,234,107
1,234,134
Why don't >?= and <?= work in VC++?
Why don't >?= and <?= work in VC++? They work fine in gcc/g++, e.g.: a >?= b; Is this valid usage?
Because those are the old G++-specific extensions for minimum and maximum. From 6. Extensions to the C++ Language: The GNU compiler provides these extensions to the C++ language (and you can also use most of the C language extensions in your C++ programs). If you want to write code that checks whether these features are available, you can test for the GNU compiler the same way as for C programs: check for a predefined macro __GNUC__. You can also use __GNUG__ to test specifically for GNU C++ (see section 'Predefined Macros' in The GNU C Preprocessor). 6.1 Minimum and Maximum Operators in C++ ... Instead, you should use std::min and std::max.
1,234,303
1,234,338
How can I get an HDC object from a CDC object?
I have an object, dc, of type CDC and I'd like to get an HDC object. I read the MSDN documentation here, but don't really understand it. Can someone provide me with a brief example/explanation on how to do this?
CDC class has operator HDC() defined which allows the compiler to convert a CDC object to HDC implicitly. Hence if you have CDC* and a function which takes HDC then you just dereference the pointer and send it to the function.
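A hypothetical sketch of what that looks like in use (MFC; `DrawSomething` is a made-up function standing in for any API that takes a raw HDC):

```cpp
// A function declared to take a raw device-context handle.
void DrawSomething(HDC hdc);

void Render(CDC* pDC)
{
    // Dereference the pointer; CDC::operator HDC() converts implicitly.
    DrawSomething(*pDC);

    // Or be explicit about it:
    HDC hdc = pDC->GetSafeHdc();
    DrawSomething(hdc);
}
```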
1,234,582
1,234,618
Purpose of Trigraph sequences in C++?
According to C++'03 Standard 2.3/1: Before any other processing takes place, each occurrence of one of the following sequences of three characters (“trigraph sequences”) is replaced by the single character indicated in Table 1. ---------------------------------------------------------------------------- | trigraph | replacement | trigraph | replacement | trigraph | replacement | ---------------------------------------------------------------------------- |   ??=    |      #      |   ??(    |      [      |   ??<    |      {      | |   ??/    |      \      |   ??)    |      ]      |   ??>    |      }      | |   ??'    |      ^      |   ??!    |      |      |   ??-    |      ~      | ---------------------------------------------------------------------------- In real life that means that the code printf( "What??!\n" ); will result in printing What| because ??! is a trigraph sequence that is replaced with the | character. My question is: what is the purpose of using trigraphs? Is there any practical advantage of using trigraphs? UPD: Answers mentioned that some European keyboards don't have all the punctuation characters, so non-US programmers have to use trigraphs in everyday life? UPD2: Visual Studio 2010 has trigraph support turned off by default.
This question (about the closely related digraphs) has the answer. It boils down to the fact that the ISO 646 character set doesn't have all the characters of the C syntax, so there are some systems with keyboards and displays that can't deal with the characters (though I imagine that these are quite rare nowadays). In general, you don't need to use them, but you need to know about them for exactly the problem you ran into. Trigraphs are the reason the '?' character has an escape sequence: '\?' So a couple ways you can avoid your example problem are: printf( "What?\?!\n" ); printf( "What?" "?!\n" ); But you have to remember when you're typing the two '?' characters that you might be starting a trigraph (and it's certainly never something I'm thinking about). In practice, trigraphs and digraphs are something I don't worry about at all on a day-to-day basis. But you should be aware of them because once every couple years you'll run into a bug related to them (and you'll spend the rest of the day cursing their existence). It would be nice if compilers could be configured to warn (or error) when they come across a trigraph or digraph, so I could know I've got something I should knowingly deal with. And just for completeness, digraphs are much less dangerous since they get processed as tokens, so a digraph inside a string literal won't get interpreted as a digraph. For a nice education on various fun with punctuation in C/C++ programs (including a trigraph bug that would definitely have me pulling my hair out), take a look at Herb Sutter's GOTW #86 article. Addendum: It looks like GCC will not process (and will warn about) trigraphs by default. Some other compilers have options to turn off trigraph support (IBM's for example). Microsoft started supporting a warning (C4837) in VS2008 that must be explicitly enabled (using -Wall or something).
1,234,750
1,238,315
C++ Socket Server - Unable to saturate CPU
I've developed a mini HTTP server in C++, using boost::asio, and now I'm load testing it with multiple clients and I've been unable to get close to saturating the CPU. I'm testing on an Amazon EC2 instance, and getting about 50% usage of one CPU, 20% of another, and the remaining two are idle (according to htop). Details: The server fires up one thread per core Requests are received, parsed, processed, and responses are written out The requests are for data, which is read out of memory (read-only for this test) I'm 'loading' the server using two machines, each running a Java application, running 25 threads, sending requests I'm seeing about 230 requests/sec throughput (this is application requests, which are composed of many HTTP requests) So, what should I look at to improve this result? Given the CPU is mostly idle, I'd like to leverage that additional capacity to get a higher throughput, say 800 requests/sec or whatever. Ideas I've had: The requests are very small, and often fulfilled in a few ms, I could modify the client to send/compose bigger requests (perhaps using batching) I could modify the HTTP server to use the Select design pattern, is this appropriate here? I could do some profiling to try to understand what the bottlenecks are
boost::asio is not as thread-friendly as you would hope - there is a big lock around the epoll code in boost/asio/detail/epoll_reactor.hpp which means that only one thread can call into the kernel's epoll syscall at a time. And for very small requests this makes all the difference (meaning you will only see roughly single-threaded performance). Note that this is a limitation of how boost::asio uses the Linux kernel facilities, not necessarily the Linux kernel itself. The epoll syscall does support multiple threads when using edge-triggered events, but getting it right (without excessive locking) can be quite tricky. BTW, I have been doing some work in this area (combining a fully-multithreaded edge-triggered epoll event loop with user-scheduled threads/fibers) and made some code available under the nginetd project.
1,234,760
1,234,794
Is it a bad idea to use pointers as loop incrementers instead of the usual "int i"?
An example of this would be: char str[] = "Hello"; int strLength = strlen(str); for ( char * pc = str; pc < str + strLength; pc++) { *pc += 2; } Edit: Accounted for write-protected memory issue.
My one issue is that you'd have a lot of fun if you leave out the * in *pc in the for loop. Whoops? More generally, it is slightly harder to tell the difference between reassigning the pointer and modifying the value. However, (though I don't have it handy), Stroustrup himself endorses (see edit) pointer iteration in the C++ Programming Language book. Basically, you can have a pretty terse implementation of string comparison between two char arrays using pointer arithmetic. In short, I would recommend using such pointers in a "read only" fashion. If you need to write to the array, I would use the more traditional i. This is, of course, all my personal preference. Edit: Stroustrup doesn't endorse pointer iteration OVER integer -- he simply uses it at one point in the book, so my reasoning is that he doesn't think it's anathema to good practice.
1,234,988
1,236,098
How to get a Win32 Thread to wait on a work queue and a socket?
I need a client networking thread to be able to respond both to new messages to be transmitted, and the receipt of new data on the network. I wish to avoid this thread performing a polling loop, but rather to process only as needed. The scenario is as follows: A client application needs to communicate to a server via a protocol that is largely, but not entirely, synchronous. Typically, the client sends a message to the server and blocks until a response is received. The server may process client requests asynchronously, in which case the response to the client is not a result, but a notification that processing has begun. A result message is sent to the client at some point in the future, when the server has finished processing the client request. The asynchronous result notifications can arrive at the client at any time. These notifications need to be processed when they are received, i.e. it is not possible to process a backlog only when the client transmits again. The client's networking thread receives and processes notifications from the server, and transmits outgoing messages from the client. To achieve this, I need to make a thread wake to perform processing either when network data is received OR when a message to transmit is enqueued into an input queue. How can a thread wake to perform processing of an enqueued work item OR data from a socket? I am interested primarily in using the plain Win32 APIs. A minimal example or relevant tutorial would be very welcome!
An alternative to I/O Completion Ports for sockets is using WSAEventSelect to associate an event with the socket. Then as others have said, you just need to use another event (or some sort of waitable handle) to signal when an item has been added to your input queue, and use WaitForMultipleObjects to wait for either kind of event.
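A hedged Win32 sketch of that pattern (error handling omitted; `NetworkThread` and the queue-drain comments are my own, and the outgoing-message queue itself is assumed to exist elsewhere):

```cpp
#include <winsock2.h>
#include <windows.h>

void NetworkThread(SOCKET sock, HANDLE queueEvent /* signaled when work is enqueued */)
{
    WSAEVENT sockEvent = WSACreateEvent();
    // Associate the socket with the event for read/close notifications.
    WSAEventSelect(sock, sockEvent, FD_READ | FD_CLOSE);

    HANDLE handles[2] = { sockEvent, queueEvent };
    for (;;) {
        DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        if (which == WAIT_OBJECT_0) {
            WSANETWORKEVENTS ev;
            // Also resets sockEvent to non-signaled.
            WSAEnumNetworkEvents(sock, sockEvent, &ev);
            if (ev.lNetworkEvents & FD_READ) { /* recv() and process the data */ }
            if (ev.lNetworkEvents & FD_CLOSE) break;
        } else if (which == WAIT_OBJECT_0 + 1) {
            /* drain the outgoing-message queue and send() each message */
        }
    }
    WSACloseEvent(sockEvent);
}
```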
1,235,165
1,235,208
C++ cross-platform dynamic libraries for Linux and Windows
I want to write some cross-platform library code. I am creating a library both static and dynamic with most of the development done in Linux, I have got the static and shared library generated in Linux but now wanted to generate a Windows version of a static and dynamic library in the form of .lib and .dll using the same source code. Is this possible? I'm a bit worried because I noticed generating Windows .dll files required using _dllspec or something similar in your source code. I am seeking the best and quickest solution to getting my code compiled on Windows. I don't need to do the compiling under Linux; I am happy to do it directly under Windows. Also I am using two external libraries which are Boost and Xerces XML which I have installed on both my Windows and Linux system so hopefully they shouldn't be a problem. What I really want is to have a single source code copy that can be compiled under both Linux and Windows to generate libraries specific to each platform. I don't really care if I have to edit my code in favour of Windows or Linux as long as I can have a single source code copy.
In general, there are two issues you need to be concerned with: The requirement that, on Windows, your DLL explicitly exports symbols that should be visible to the outside world (via __declspec(dllexport), and Being able to maintain the build system (ideally, not having to maintain a separate makefile and Microsoft Visual C++ Project/Solution) For the first, you will need to learn about __declspec(dllexport). On Windows only projects, typically this is implemented in the way I describe in my answer to this question. You can extend this a step further by making sure that your export symbol (such as MY_PROJECT_API) is defined but expands to nothing when building for Linux. This way, you can add the export symbols to your code as needed for Windows without affecting the linux build. For the second, you can investigate some kind of cross-platform build system. If you're comfortable with the GNU toolset, you may want to investigate libtool (perhaps in conjunction with automake and autoconf). The tools are natively supported on Linux and supported on Windows through either Cygwin or MinGW/MSYS. MinGW also gives you the option of cross-compiling, that is, building your native Windows binaries while running Linux. Two resources I've found helpful in navigating the Autotools (including libtool) are the "Autobook" (specifically the section on DLLs and Libtool) and Alexandre Duret-Lutz's PowerPoint slides. As others have mentioned, CMake is also an option, but I can't speak for it myself.
1,235,286
1,235,294
Why does pointer to array fail to return as **?
I don't understand why the following fails: #include<string> class Foo { public: std::string** GetStr(){return str;} private: std::string * str[10]; }; Thanks
First, you tag this as C++ and C. Which is it? C does not have a string class. If it is C++, please remove the C tag; it is misleading (they are not the same language!). Edit: I misunderstood what you are trying to do. Your method should compile. You just have to remember to dereference the returned str to get at the strings. I rarely deal with double indirection, but you have to do something like this to set a string in the str array: *(*str) = "STR"; //or *(str[i]) = "STR"; I don't know how you would use the address operator here, because it returns a reference and not a pointer. Your method is really weird. The problem is that the compiler doesn't know that you want to dereference a string, so it tries to dereference a char*. I do not understand why you want to do it this way, though. It would be better to do this: std::string str[10]; std::string* GetStr() { return str; }
1,235,299
1,235,353
C++ multiple processes?
I've got a project that consists of two processes and I need to pass some data between them in a fast and efficient manner. I'm aware that I could use sockets to do this using TCP, even though both processes will always exist on the same computer, however this does not seem to be a very efficient solution. I see lots of information about using "pipes" on Linux. However I primarily want this for Windows and Linux (preferably via a cross platform library), ideally in a type safe, non-blocking manner. Another important thing is I need to support multiple instances of the whole application (i.e. both processes), each with their own independent copy of the communication objects. Also, is there a cross platform way to spawn a new process?
For IPC, Windows supports named pipes just like Linux does, except that the pipe names follow a different format, owing to the difference in path formats between the two operating systems. This is something that you could overcome with simple preprocessor defines. Both operating systems also support non-blocking IO on pipes and IO multiplexing with select().
1,235,371
1,235,674
Fastest base conversion method?
Right now I'm working on a project which requires an integer to be converted to a base 62 string many times a second. The faster this conversion is completed, the better. The problem is that I'm having a hard time getting my own base conversion methods to be fast and reliable. If I use strings, it's generally reliable and works well, but it's slow. If I use char arrays, it's generally much faster, but it's also very messy, and unreliable. (It produces heap corruption, comparison of strings that should match return a negative, etc.) So what's the fastest and most reliable way of converting from a very large integer to a base 62 key? In the future, I plan on utilizing SIMD model code in my application, so is this operation parallelizable at all? EDIT: This operation is performed several million times a second; as soon as the operation finishes, it begins again as part of a loop, so the faster it runs, the better. The integer being converted is of arbitrary size, and can easily be as large as a 128 bit integer (or larger). EDIT: This is the function I am currently using. char* charset = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"; int charsetLength = (int)(strlen(charset)); //maxChars is an integer specifying the maximum length of the key char* currentKey = new char[maxChars]; void integerToKey(unsigned long long location) { unsigned long long num = location; int i = 0; for(; num > 0; i++) { currentKey[i] = charset[num % (charsetLength)]; num /= charsetLength + 1; } currentKey[i + 1] = '\0'; } I ripped this out of a class that is part of my application, and some of the code is modified so that it makes sense sans its owning class.
Probably what you want is some version of itoa. Here is a link that shows various versions of itoa with performance tests: http://www.strudel.org.uk/itoa/ In general, I know of two ways to do this. One way is to perform successive divisions to strip off one digit at a time. Another way is to precompute conversions in "blocks". So you could precompute a block of int-to-text conversions of size 62^3, then do the digits 3 at a time. Provided you do the memory layout and lookup efficiently this can be slightly faster at runtime but incurs a startup penalty.
1,235,425
1,235,515
coclass in .idl import interface defined elsewhere?
I have an IDL file that defines a few interfaces followed by a coclass. Can I make this class import interfaces that are not defined in this class?
Yes. You need to use the import directive to load the .idl for the external interfaces, or use importlib to load the type library. Something like this: import "otherlibrary.idl"; library MyLibrary { coclass MyClass { interface OtherInterface; }; }; Or this: library MyLibrary { importlib "otherlibrary.tlb"; coclass MyClass { interface OtherInterface; }; };
1,235,447
1,235,471
std::getline and eol vs eof
I've got a program that is tailing a growing file. I'm trying to avoid grabbing a partial line from the file (e.g. reading before the line is completely written by the other process.) I know it's happening in my code, so I'm trying to catch it specifically. Is there a sane way to do this? Here's what I'm trying: if (getline (stream, logbuffer)) { if (stream.eof()) { cout << "Partial line found!" << endl; return false; } return true; } return false; However, I can't easily reproduce the problem so I'm not sure I'm detecting it with this code. std::getline strips off newlines, so I can't check the buffer for a trailing newline. My log message (above) is NEVER tripping. Is there some other way of trying to check what I want to detect? Is there a way to know if the last line I read hit EOF without finding a EOL character? Thanks.
This will never be true: if (getline (stream, logbuffer)) { if (stream.eof()) { /// will never get here If getline() worked, the stream cannot be in an eof state. The eof() and related state tests only work on the results of a previous read operation such as getline() - they do not predict what the next read will do. As far as I know, there is no way of doing what you want. However, if the other process writes a line at a time, the problems you say you are experiencing should be very rare (non-existent in my experience), depending to some extent on the OS you are using. I suspect the problem lies elsewhere, probably in your code. Tailing a file is a very common thing to do, and one does not normally need to resort to special code to do it. However, should you find you do need to read partial lines, the basic algorithm is as follows: forever do wait for file change read all possible input using read or readsome (not getline) chop input into lines and possible partial line process as required end
1,235,798
1,235,956
How do I use CharNext in the Windows API properly?
I have a multi-byte string containing a mixture of japanese and latin characters. I'm trying to copy parts of this string to a separate memory location. Since it's a multi-byte string, some of the characters uses one byte and other characters uses two. When copying parts of the string, I must not copy "half" japanese characters. To be able to do this properly, I need to be able to determine where in the multi-byte string characters starts and ends. As an example, if the string contains 3 characters which requires [2 byte][2 byte][1 byte], I must copy either 2, 4 or 5 bytes to the other location and not 3, since if I were copying 3 I would copy only half the second character. To figure out where in the multi-byte string characters starts and ends, I'm trying to use the Windows API function CharNext and CharNextExA but without luck. When I use these functions, they navigate through my string one byte at a time, rather than one character at a time. According to MSDN, CharNext is supposed to The CharNext function retrieves a pointer to the next character in a string.. Here's some code to illustrate this problem: #include <windows.h> #include <stdio.h> #include <wchar.h> #include <string.h> /* string consisting of six "asian" characters */ wchar_t wcsString[] = L"\u9580\u961c\u9640\u963f\u963b\u9644"; int main() { // Convert the asian string from wide char to multi-byte. LPSTR mbString = new char[1000]; WideCharToMultiByte( CP_UTF8, 0, wcsString, -1, mbString, 100, NULL, NULL); // Count the number of characters in the string. int characterCount = 0; LPSTR currentCharacter = mbString; while (*currentCharacter) { characterCount++; currentCharacter = CharNextExA(CP_UTF8, currentCharacter, 0); } } (please ignore memory leak and failure to do error checking.) Now, in the example above I would expect that characterCount becomes 6, since that's the number of characters in the asian string. 
But instead, characterCount becomes 18 because mbString contains 18 characters: é–€é˜œé™€é˜¿é˜»é™„ I don't understand how it's supposed to work. How is CharNext supposed to know whether "é–€é" in the string is an encoded version of a Japanese character, or in fact the characters é – € and é? Some notes: I've read Joel's blog post about what every developer needs to know about Unicode. I may have misunderstood something in it though. If all I wanted to do was to count the characters, I could count the characters in the Asian string directly. Keep in mind that my real goal is copying parts of the multi-byte string to a separate location. The separate location only supports multi-byte, not widechar. If I convert the content of mbString back to wide char using MultiByteToWideChar, I get the correct string (門阜陀阿阻附), which indicates that there's nothing wrong with mbString. EDIT: Apparently the CharNext functions don't support UTF-8, but Microsoft forgot to document that. I threw/copy-pasted together my own routine, which I won't use and which needs improving. I'm guessing it's easily crashable. LPSTR CharMoveNext(LPSTR szString) { if (szString == 0 || *szString == 0) return 0; if ( (szString[0] & 0x80) == 0x00) return szString + 1; else if ( (szString[0] & 0xE0) == 0xC0) return szString + 2; else if ( (szString[0] & 0xF0) == 0xE0) return szString + 3; else if ( (szString[0] & 0xF8) == 0xF0) return szString + 4; else return szString +1; }
Here is a really good explanation of what is going on here at the Sorting it All Out blog: Is CharNextExA broken?. In short, CharNext is not designed to work with UTF8 strings.
1,236,117
1,236,131
attached process error VS C++ .NET
When I go to Debug -> Start I get the error: "Unable to attach to machine 'mypc'. Do you want to continue anyway? YES/NO" I did not attach a process and am not sure why it is coming up. (Also, when I hit YES to the error, it does not run.) How do I remove all attachments on the debugger?
It's probably a problem with your project configuration settings. Right click on the project in Solution Explorer and click Properties. Go to the Debugging Tab. Make sure that you're debugging on your machine. In the "Remote Settings" option your connection should be Local. You also want to make sure the option to "Attach" is set to No.
1,236,161
1,236,262
Why does the original CString get overwritten when passing a copy to the DrawText function with the DT_MODIFYSTRING option?
I've already found a workaround to this problem, but was just wondering if anyone knew what was actually happening to cause the problem I was seeing. My guess is that it has something to do with mutability of strings, but I thought the CString object accounted for that in the copy constructor. The following code causes mFileName to be overwritten: class File { public: ... CString GetFilename() {return mFileName;} private: CString mFileName; }; class FileContainer { private: File* mFile; public: FileContainer() { mFile = new File("C:\temp.txt"); } GetFilename(CString& fileName) { fileName = mFile->GetFileName(); } } void UpdateText() { FileContainer fileCnt; CString filePath(L""); this->fileCnt.GetFilename(filePath); ... ::DrawText(hDC, filePath, -1, &destRect, DT_PATH_ELLIPSIS | DT_MODIFYSTRING | DT_CALCRECT); } What happens is that the first time UpdateText is called, GetFilename returns C:\temp.txt. Assuming that the bounding rect caused the text to be truncated to "...\temp.txt" on the first call, "...\temp.txt" is what is returned from GetFilename on the second call to UpdateText. Even more perplexing is that this didn't cause mFileName to be changed: void UpdateText() { FileContainer fileCnt; CString filePath(L""); this->fileCnt->GetFilename(filePath); filePath = L"TEST"; } GetFilename always returned C:\temp.txt. So it would seem that the DrawText function is somehow finding the original CString and modifying it. But how? UPDATE: I figured I'd throw another odd chunk of code that also causes mFileName to be overwritten: class File { public: ... CString GetFilename() {return CString(mFileName);} private: CString mFileName; }; That seems like it should create a new object and return that new object. Yet, somehow, DrawText still overwrites mFileName. If I change the code to the following, I don't have any issues: class File { public: ... 
CString GetFilename() {return CString(mFileName.GetBuffer());} private: CString mFileName; }; The only thing that seems to solve the problem is to construct a new CString the way I showed in the workaround. What is DrawText doing when I pass the DT_MODIFYSTRING option?
First, note that CString can be used as a raw string pointer in two ways: operator LPCSTR - gives a pointer which should never be modified. GetBuffer - gives a pointer to memory specifically for the purpose of modifying the string. Now, DrawText is declared to accept a LPCSTR. So when you pass a CString object directly as in your code, it implicitly uses operator LPCSTR to give the function what it says it wants, a constant string pointer. However, DT_MODIFYSTRING says that DrawText can modify the string it was given. So internally, DrawText must be throwing away the constness of the pointer and modifying the string anyway. This combination is a bad thing. But the fault is mainly in the implementation of DrawText which is violating its own declaration. As for why this modifies other CString objects: Apparently when a CString object is copied, it delays copying the internal string memory until something tries to modify the string through a CString member function. But until that happens, the operator LPCSTR of each CString object would still point to the same shared internal memory. This is normally fine, as long as any code using it is obeying the rules of const-correctness. However, as we've already seen, DrawText with DT_MODIFYSTRING is not playing by the rules. Thus, it is overwriting memory shared by multiple CString objects. So to fix this problem, you either need to stop using DT_MODIFYSTRING if you don't actually need the modified text. Or else you need to pass the string to DrawText using filePath.GetBuffer() and then call filePath.ReleaseBuffer() afterwards.
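A hedged sketch of the GetBuffer/ReleaseBuffer fix described above (MFC; `DrawEllipsized` is a made-up wrapper, and per the DrawText docs the buffer should leave room for up to four extra characters when DT_MODIFYSTRING is used):

```cpp
void DrawEllipsized(HDC hDC, CString& filePath, RECT& destRect)
{
    // Force the CString to own a private, writable copy of its characters,
    // so DT_MODIFYSTRING cannot clobber memory shared with other CStrings.
    LPTSTR buf = filePath.GetBuffer(filePath.GetLength() + 4);
    ::DrawText(hDC, buf, -1, &destRect,
               DT_PATH_ELLIPSIS | DT_MODIFYSTRING | DT_CALCRECT);
    filePath.ReleaseBuffer(); // re-sync the CString's length with the buffer
}
```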
1,236,485
1,236,492
How to access elements of a C++ map from a pointer?
Simple question but difficult to formulate for a search engine: if I make a pointer to a map object, how do I access and set its elements? The following code does not work. map<string, int> *myFruit; myFruit["apple"] = 1; myFruit["pear"] = 2;
You can do this: (*myFruit)["apple"] = 1; or myFruit->operator[]("apple") = 1; or map<string, int> &tFruit = *myFruit; tFruit["apple"] = 1; or (C++ 11) myFruit->at("apple") = 1;
1,236,550
1,236,559
Incorrect floating point math?
Here is a problem that has had me completely baffled for the past few hours... I have an equation hard coded in my program: double s2; s2 = -(0*13)/84+6/42-0/84+24/12+(6*13)/42; Every time I run the program, the computer spits out 3 as the answer, however doing the math by hand, I get 4. Even further, after inputting the equation into Matlab, I also get the answer 4. What's going on here? The only thing I can think of that is going wrong here would be round-off error. However with a maximum of 5 rounding errors, coupled with using double precision math, my maximum error would be very very small so I doubt that is the problem. Anyone able to offer any solutions? Thanks in advance, -Faken
You're not actually doing floating point math there, you're doing integer math, which will floor the results of divisions. In C++, 5/4 = 1, not 1.25 - because 5 and 4 are both integers, so the result will be an integer, and thus the fractional part of the result is thrown away. On the other hand, 5.0/4.0 will equal approx. 1.25 because at least one of 5.0 and 4.0 is a floating-point number so the result will also be floating point.
1,236,670
1,237,091
How to make OpenGL apps in 64-bit Windows?
My project compiles, links, and runs on 32-bit XP. Then I tried to cross-compile it to x64 and I came across a lot of questions: There's no native x64 installable OpenGL SDK, so what do I link against? I saw someone saying that x64 apps use the 32-bit OpenGL DLL. I tried to run my compiled 64-bit app on 64-bit XP with drivers for my video card (Radeon 4850), the same card I use on 32-bit XP, and I got that typical error "bla bla bla, maybe reinstalling your application will resolve the problem". If I use video card drivers, how do I keep it working with other cards - should I build a version for each? (No sense.) Should I load an available library dynamically? (Same - no sense.) Which is the reference implementation for x64? Where do I find its libs to link against? I'm really lost on this matter. I did a lot of searches and found nothing that helped me understand so far. So, what is the path? What do I need to know to make native x64 OpenGL apps?
The 64-bit OpenGL import library is included in the Windows SDK and gets installed to %ProgramFiles%\Microsoft SDKs\Windows\<version>\Lib\x64\OpenGL32.lib. The corresponding DLL is named opengl32.dll and is located in %SystemRoot%\system32. The 32-bit version is also named opengl32.dll and is located in %SystemRoot%\syswow64 on 64-bit Windows. You can't load 32-bit DLLs in a 64-bit process, so whatever you read about x64 apps using the 32-bit OpenGL DLL was incorrect. There is definitely a 64-bit OpenGL DLL, but it has "32" in its name, presumably to make porting easier.