1,415,779
2,045,189
Why are C++ inheritance mechanisms opaque?
Why, for example, is there no language support to examine a vtable? Why can't I replace a member function with a new one? I have a gut feeling that there are ways to put such features to good use. Are there any other languages out there which allow me to do such things?
The main reason is that keeping the vtable as an implementation detail allows any concrete implementation to optimize it as it sees fit; this means that it can e.g. trim or even eliminate the vtable altogether if it can prove that there are no virtual calls for a given method (or all methods). Or it may replace a vtable dispatch with an if-else type check if e.g. it sees that there are only a few alternatives (this can be advantageous because branch prediction will work in this case, but not with vtables, and also because the if-else branches can then be inlined). It can reorder methods in the vtable such that the most commonly called ones come earlier, or such that those that are commonly called one right after another fill adjacent slots in the vtable to take advantage of caching. And so on, and so forth. Of course, all those optimizations would also make the vtable layout entirely unpredictable, and thus useless, if it were to be exposed (by the language spec) to the programmer. As well, vtables aren't as simple as they sound. For example, compilers often have to generate thunks to fix up the this pointer for things such as virtual inheritance, or multiple inheritance combined with covariant return types. This is again something that doesn't have a "single best way" to do it (which is why different compilers do it differently), and standardizing it would effectively require settling on one particular way. That said, "vtable switching" is a potentially useful technique if exposed as a higher-level construct (so that optimizations are still possible). For an example, see UnrealScript, which allows one to define several states for a class (one default, the others named), and override some methods in named states. Derived classes can override more methods in existing states, or add their own states and override in them. 
Furthermore, states can extend other states (so if a method isn't overridden for a particular state, it falls back to the "parent" state, and so on until the chain reaches the default state). For actor modelling (which games essentially are) this all makes a lot of sense, which is why UnrealScript has it. And the obvious efficient implementation mechanism for all of this is vtable switching, with each state having a separate vtable.
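The if-else devirtualization described above is easy to sketch by hand. Below is a minimal illustration of what a compiler might generate; the Shape/Square/Circle classes and the integer area math are purely hypothetical stand-ins:

```cpp
#include <cassert>

// Hand-written equivalent of the "if-else type check" devirtualization
// described in the answer. A compiler would emit this automatically when
// it can prove only a few concrete types reach the call site.
struct Shape {
    virtual ~Shape() {}
    virtual int area() const = 0;
};
struct Square : Shape {
    int side;
    explicit Square(int s) : side(s) {}
    int area() const override { return side * side; }
};
struct Circle : Shape {
    int r;
    explicit Circle(int r_) : r(r_) {}
    int area() const override { return 3 * r * r; } // crude integer "pi"
};

// Ordinary vtable dispatch.
int area_virtual(const Shape& s) { return s.area(); }

// The if-else equivalent: a type check plus direct, qualified calls that
// can be inlined and branch-predicted, with a virtual-call fallback.
int area_devirtualized(const Shape& s) {
    if (const Square* sq = dynamic_cast<const Square*>(&s))
        return sq->Square::area();   // direct, non-virtual call
    if (const Circle* c = dynamic_cast<const Circle*>(&s))
        return c->Circle::area();
    return s.area();                 // fallback: vtable dispatch
}
```

Real compilers drive this with profile data or whole-program knowledge; the sketch only shows why the transformed form is friendlier to inlining and branch prediction than a vtable load.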
1,415,913
1,415,962
Sleep() becomes less accurate after replacing a PC? (C++)
I have a program that was built in C++ (MFC, Visual Studio 6.0) several years ago and has been running on a certain Windows machine for quite some time (more than 5 years). The PC was replaced a month ago (the old one died), and since then the program's timing behavior changed. I need help understanding why. The main functionality of the program is to respond to keystrokes by sending out ON and OFF signals to an external card, with very accurate delay between the ON and the OFF. An example program flow: > wait for keystroke... > ! keystroke occurred > send ON message > wait 150ms > send OFF message Different keystrokes have different waiting periods associated with them, between 20ms and 150ms (a very deterministic time depending on the specific keystroke). The timing is very important. The waiting is executed using simple Sleep(). The accuracy of the sleep on the old PC was 1-2ms deviation. I can measure the timing externally to the computer (on the external card), so my measurement of the sleep time is very accurate. Please take into account that this machine executed such ON-sleep-OFF cycles thousands of times a day for years, so the accuracy data I have is sound. Since the PC was replaced, the timing deviation is more than 10ms. I did not install the previous PC, so it may have had some additional software packages installed. Also, I'm ashamed to admit I don't remember whether the previous PC was Windows 2000 or Windows XP. I'm quite sure it was XP, but not 100% (and I can't check now...). The new one is Windows XP. I tried changing the sleeping mechanism to be based on timers, but the accuracy did not improve. Can anything explain this change? Is there a software package that may have been installed on the previous PC that may fix the problem? Is there a best practice to deal with the problem?
The timer resolution on XP is around 10ms - the system basically "ticks" every 10ms, so Sleep() is not a very good way to do accurate timing. I'm pretty sure Win2000 has the same resolution, but if I'm wrong that could be the reason. You can change that resolution, at least down to 1ms - see http://technet.microsoft.com/en-us/sysinternals/bb897569.aspx or use http://www.lucashale.com/timerresolution/ - there's probably a registry key as well. (Windows Media Player will change that timer too, probably only while it's running.) It could be that the resolution was somehow altered on your old machine.
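You can observe the effective granularity yourself with a portable measurement. This sketch uses std::chrono rather than the Win32 API; the function name and the 1 ms request are illustrative only:

```cpp
#include <cassert>
#include <chrono>
#include <thread>

// Measure how long a nominal short sleep actually takes, in microseconds.
// On a system that "ticks" every ~10-15 ms, a 1 ms request comes back far
// above 1 ms; after raising the timer resolution (e.g. timeBeginPeriod(1)
// on Windows) it should drop close to the requested value.
long long measured_sleep_us(int request_ms) {
    using namespace std::chrono;
    auto start = steady_clock::now();
    std::this_thread::sleep_for(milliseconds(request_ms));
    return duration_cast<microseconds>(steady_clock::now() - start).count();
}
```

Running this before and after a timer-resolution change is a quick way to confirm whether the old and new PCs really differ in tick granularity.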
1,416,009
1,416,071
Get year from boost ptime
I'm converting an existing program to C++, and in it I need to manipulate Sybase timestamps. These timestamps contain date and time info, which to my knowledge can best be handled by a boost::posix_time::ptime variable. In a few places in the code I need to get the year from the variable. My question is: how can I most efficiently extract the year from a boost ptime variable? Below is a sample program in which it takes three lines of code, with the overhead of an extra ostringstream variable and a boost::gregorian::date variable. According to the boost documentation: "Class ptime is dependent on gregorian::date for the interface to the date portion of a time point" - however gregorian::date doesn't seem to be a base class of ptime. Somehow I'm missing something here. Isn't there an easier way to extract the year from the ptime? Sample: #include <boost/date_time/local_time/local_time.hpp> #include <iostream> int main() { boost::posix_time::ptime t(boost::posix_time::second_clock::local_time()); boost::gregorian::date d = t.date(); std::ostringstream os; os << d.year(); std::cout << os.str() << std::endl; return 0; }
Skip the ostringstream; it isn't needed. You may also benefit from a "using namespace...": #include <boost/date_time/local_time/local_time.hpp> #include <iostream> int main() { using namespace boost::posix_time; std::cout << second_clock::local_time().date().year() << std::endl; return 0; }
1,416,082
1,416,104
Drawing issues with c++
I'm sort of new to C++ and I'm trying to create a game. I have a 2D array RECT_GRID of rectangles. I have a 2D array GRID of unsigned short. I fill the rectangle array during WM_CREATE. The WM_PAINT event paints rectangles for all the elements in the array. The color of the rectangle is based on the value of GRID[x][y]. I made it so that when the down key is pressed, it changes the color of one of the rectangles by setting GRID[1][XMOVE] to a different color, then it invalidates the client rectangle. Basically what happens is, it works well for a while, but eventually it just stops drawing stuff. I checked my XMOVE variable during debug, I checked my grid values, and everything is fine. When I remove the for loop from the paint event and focus on 1 specific rectangle, it never fails, but if I try to redraw all of them at once, after about 20 times, it stops painting things. What could cause this? I'm new to C++ and I bet I'm not painting properly and causing an overflow or something. If anyone could explain what's going wrong, or a proper way to do this, I'd really appreciate it. I could not find anything like this example on Google. Thanks EDIT: I'm using 3 global brushes HBRUSH A; HBRUSH B; HBRUSH C; and when I modify them, I always say A = MakeBrush(NUM); Am I using brushes properly?
My first guess, if you're a total GDI/C++ newbie, is that you are probably creating a lot of Pens and Brushes. These are constrained resources in Windows. You can only create so many of them before you start to tax your resources. So either make your Brushes and Pens and Windows, etc all at once and re-use them, or dispose of them properly when you're done. I recommend getting a copy of "the Bible" (http://www.amazon.com/Programming-Windows%C2%AE-Fifth-Microsoft/dp/157231995X/ref=sr_1_1?ie=UTF8&s=books&qid=1252788457&sr=8-1) and reading the chapters in there about drawing. EDIT: It doesn't sound like you're modifying your brushes properly, but since I can't see the code for MakeBrush, I don't know. You're probably creating a lot of brushes behind the scenes and you don't even know it. Seriously, get a copy of Petzold's book and spend an hour or two. You'll end up with more hair on your head later! ;-)
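The "dispose of them properly" advice is usually automated with a small RAII wrapper. This is only a sketch: FakeBrush and the create/destroy functions below are stand-ins for HBRUSH, CreateSolidBrush and DeleteObject, so the idea can be shown without GDI:

```cpp
#include <cassert>

// Counts live handles in this demo; with real GDI the OS keeps this count
// for you, and exhausting it is exactly the failure mode described above.
static int g_live_brushes = 0;
struct FakeBrush { int color; };

FakeBrush* create_brush(int color) { ++g_live_brushes; return new FakeBrush{color}; }
void destroy_brush(FakeBrush* b)   { --g_live_brushes; delete b; }

// RAII guard: the handle is released exactly once, even on early return
// or exception, so repeated WM_PAINT handling can't leak brushes.
class BrushGuard {
    FakeBrush* h_;
public:
    explicit BrushGuard(int color) : h_(create_brush(color)) {}
    ~BrushGuard() { destroy_brush(h_); }
    BrushGuard(const BrushGuard&) = delete;            // one owner per handle
    BrushGuard& operator=(const BrushGuard&) = delete;
    FakeBrush* get() const { return h_; }
};

int live_brush_count() { return g_live_brushes; }
```

With real GDI you would either create the three brushes once at startup and reuse them, or wrap each per-paint brush in a guard like this so DeleteObject always runs.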
1,416,094
1,416,187
Revert exception specifications behavior under VC++ 9.0
I'm working on old code that relies heavily on the exception specifications behavior described in the language standard. Namely, calls to std::unexpected() on exception specification violations of the form described below. foo() throw(T) { /*...*/ } Nothrow specifications are indeed guaranteed not to throw, but throw(T) ones are expected to be violated both by design and... well, because the standard expects as much and provides a mechanism to handle it. The reasons for this are tied to the designers' decision of using EH also as an error handling mechanism (controlled by its own error class hierarchy) in addition to exception handling. The idiom presented in EH closely mapped to their needs and they took the path of least effort. This is at least how I see it and isn't particularly shocking to me, given the size and complexity of the system. However, I'm now tasked with including new and unrelated functionality, and the code isn't behaving as expected under VC++ 9.0, due to the deviation from the standard regarding exception specifications introduced in 8.0. (reference: Microsoft) I'm trying to find a way to force the standard behavior. I was hoping for a fallback to be offered by the compiler. But there is none. Am I out of luck, and forced to change correctly written, standard-conforming code across 350,000 lines of code with a fully developed error handling class hierarchy? Or can you think of a way that will help me force std::unexpected() behavior? EDIT: I'm providing some background information. The system in question is a School Year Calendars Generator for a school serving a little over 4,000 students distributed among, I'm unsure as to some of the numbers yet, 6 grades and ~190 classes, plus 12 virtual (long-distance teaching) classes. MinGW is out of the question, as is any compiler other than VC++ 8.0 or 9.0. This is due to regulations pertaining to software serving the Educational System in this country. 
The changes needed to the code are exactly to accommodate the introduction of the virtual classes with a vastly different schema for calendar generation. And then I bumped into this problem. The software makes heavy use of the exceptions mechanism on a few parts of the calendar generation process as a means to control workflow through both unexpected() mappings (saved and restored) and bad_exception mappings, none of which work under VC++. On a purely personal note, I find the mechanism in place actually very elegant even if entirely uncommon. But I digress.
As you mentioned, Visual Studio has an "interesting" way of dealing with exception specifications: throw() has its normal meaning (the function must not throw), while anything else (including no exception specification at all) is interpreted as throw(...). There is no way to circumvent this. However, the C++ community pretty much agrees that exception specifications are useless. Do you really need runtime checking of the error types thrown? Perhaps proper unit testing can replace your runtime checks.
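If the runtime dispatch really must be preserved under VC++, one possible workaround is to re-implement it by hand: wrap each call in a helper that lets the permitted type escape and routes everything else to an installed handler, mimicking throw(T) plus std::unexpected(). This is only a sketch under that assumption; all names here are illustrative:

```cpp
#include <cassert>
#include <functional>

// Hand-rolled substitute for the std::unexpected() machinery: run `f`,
// let exceptions of the allowed type escape, and route anything else to
// a user-installed handler, like a violated throw(Allowed) specification.
static std::function<void()> g_unexpected_handler;

void set_unexpected_handler(std::function<void()> h) { g_unexpected_handler = h; }

template <class Allowed, class F>
void call_with_spec(F f) {
    try {
        f();
    } catch (const Allowed&) {
        throw;                       // permitted by the "specification"
    } catch (...) {
        if (g_unexpected_handler) g_unexpected_handler();
        else throw;                  // no handler installed: propagate
    }
}
```

Retrofitting this onto 350,000 lines is still invasive, but it keeps the save/restore handler idiom working portably, independent of the compiler's treatment of throw(T).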
1,416,096
1,417,002
C++ Debug builds broke in Snow Leopard Xcode
After upgrading to Xcode 3.2 and Snow Leopard, my debug builds are broken and fail at runtime. Stringstreams do not seem to work. They work in Release mode. I've narrowed it down to a combination of GCC 4.2, OSX SDK 10.6 and the _GLIBCXX_DEBUG pre-processor symbol. These are the defaults for new Xcode projects' Debug configurations. This code shows the problem: #include <iostream> #include <string> #include <sstream> int main (int argc, char * const argv[]) { std::stringstream stream; std::cout << " expected actual" << std::endl; std::cout << "stream.bad: 0 " << stream.bad() << std::endl; std::cout << "stream.fail: 0 " << stream.fail() << std::endl; std::cout << "stream.eof: 0 " << stream.eof() << std::endl; std::cout << "stream.good: 1 " << stream.good() << std::endl; stream.exceptions(std::ios::badbit | std::ios::failbit | std::ios::eofbit); try{ stream << 11; //< Does not work as expected (see output) }catch (std::bad_cast &e) { std::cout << "Unexpected bad_cast: " << e.what() << std::endl; }catch(std::exception &e){ std::cout << "Unexpected exception: " << e.what() << std::endl; } std::cout << " expected actual" << std::endl; std::cout << "stream.bad: 0 " << stream.bad() << std::endl; std::cout << "stream.fail: 0 " << stream.fail() << std::endl; std::cout << "stream.eof: 0 " << stream.eof() << std::endl; std::cout << "stream.good: 1 " << stream.good() << std::endl; std::cout << std::endl; std::cout << "EXPECT: " << 11 << std::endl; std::cout << "ACTUAL: " << stream.str() << std::endl; std::cout << std::endl << "Done" << std::endl; return 0; } The stringstream insertion should work, but when using GCC 4.2 and _GLIBCXX_DEBUG, the '<<' operator throws an exception, and the bad and fail bits are set. I've tried various combinations of compiler and SDK with these results: - Using GCC 4.2, LLVM-GCC, or CLANG with SDK 10.6 does NOT work. - Using GCC 4.2, LLVM-GCC, or CLANG with SDK 10.5 does work. - Using GCC 4.0 with either SDK 10.5 or 10.6 works. 
If _GLIBCXX_DEBUG is broken or not supported (with SDK 10.6 and GCC 4.2), then why is this the default for Debug configurations in new projects (C++ command line)?
STL debug mode is not supported in gcc 4.2 at this time. You can use gcc 4.0 with STL debug mode, or remove the debug mode preprocessor macros from your Debug configuration and keep using gcc 4.2.
1,416,273
1,424,817
Parse out Non-Alpha Numeric characters from SQLCHAR object
I currently have a bunch of SQLCHAR objects from a database query. The query results are stored in a std::string and then bound to the individual SQLCHAR variables. Some of these variables need to be parsed in order to remove any non-alphanumeric characters. What is the best approach here? I have implemented a basic parsing of a std::string ... for (std::string::iterator i = str.end()-1; i >= str.begin(); --i) { if ( !isalpha(*i) && !isdigit(*i) ) { str.erase(i); } } But now I have the problem of converting a SQLCHAR to a std::string and then back again. Is there a better way to do this?
Consider this sketch (SQLCHAR is typically a typedef for unsigned char, so you can work on the buffer in place, with no std::string round-trip): bool is_not_alnum(char c){ return !isalnum((unsigned char)c); } char* s = reinterpret_cast<char*>(your_sqlchar_buffer); char* new_end = std::remove_if(s, s + strlen(s), is_not_alnum); *new_end = '\0'; // remove_if only moves the kept characters to the front; you must terminate at the returned position http://www.cplusplus.com/reference/clibrary/cctype/isalnum/ http://www.sgi.com/tech/stl/remove_if.html
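A complete, compilable version of that idea, treating the SQLCHAR buffer as a NUL-terminated char array (the function name is illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <cstring>

// In-place removal of non-alphanumeric characters from a NUL-terminated
// buffer, as you would have with a SQLCHAR array. No copy to std::string
// and back is needed.
void strip_non_alnum(char* s) {
    char* new_end = std::remove_if(s, s + std::strlen(s),
        [](unsigned char c) { return !std::isalnum(c); });
    *new_end = '\0';   // remove_if does not shrink the buffer itself
}
```

The cast to unsigned char in the predicate matters: passing a negative char to isalnum is undefined behavior.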
1,416,345
1,416,382
C++ template specialization of function: "illegal use of explicit template arguments"
The following template specialization code: template<typename T1, typename T2> void spec1() { } Test case 1: template< typename T1> //compile error void spec1<int>() { } Test case 2: template< typename T2> //compile error void spec1<int>() { } generates the following compilation error: error C2768: 'spec1' : illegal use of explicit template arguments Does anyone know why?
Function templates cannot be partially specialised, only fully, i.e. like this: template<> void spec1<char, int>() { } For why function templates cannot be partially specialised, you may want to read this. When you specialise partially (only possible for classes), you'd have to do it like this: template <typename T1> class class1<T1, int> { }; so you have to list T1 again. The way your specialisations are written, they would be ambiguous for spec1<int, int>.
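The standard workaround is to forward from the function template to a class template, since class templates can be partially specialised. A minimal sketch with illustrative names:

```cpp
#include <cassert>
#include <string>

// The function template delegates to a class template, and the class
// template carries the partial specialisation the function cannot have.
template <class T1, class T2>
struct Spec1Impl {
    static std::string run() { return "generic"; }
};

template <class T1>                      // partial specialisation: T2 = int
struct Spec1Impl<T1, int> {
    static std::string run() { return "T2 is int"; }
};

template <class T1, class T2>
std::string spec1() { return Spec1Impl<T1, T2>::run(); }
```

Callers keep writing spec1<A, B>() and never see the helper class.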
1,416,468
1,416,499
c++ operator overload and usage
bool operator()(Iterator it1, Iterator it2) const { return (*it1 < *it2); } Can someone explain this function for me? Does this mean the () operator is overloaded? After overloading it, how do I use it? Thanks!
It means something like this: if you have a class called Compare, for example: Compare cmp; .... if(cmp(it1, it2)) { std::cout << "First element is smaller"; } else { std::cout << "First element is not smaller"; } Your object can be called like a function; in the C++ world such an object is called a functor.
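In practice you rarely call the functor yourself; you hand it to an algorithm, which invokes operator() for you. A small sketch, with the iterator type narrowed to int* for simplicity:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A functor like the one in the question: it compares what two
// iterators/pointers point at, not the pointers themselves. std::sort
// calls its operator() for every comparison it needs.
struct DerefLess {
    bool operator()(const int* a, const int* b) const { return *a < *b; }
};

// Sort a vector of pointers by the values they refer to.
void sort_by_pointee(std::vector<int*>& ptrs) {
    std::sort(ptrs.begin(), ptrs.end(), DerefLess());
}
```

Without the functor, std::sort would order the pointers by address, which is almost never what you want here.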
1,416,474
1,416,931
How is variant_row implemented in database template library(C++)?
Has anyone read the source code of DTL (the Database Template Library) in C++? I found a class there called variant_row, used to store all kinds of data. I tried to read the source code, but it is really hard for me. Can someone explain how it is implemented, and the class structure? Thanks!
Consider investigating the implementations of Boost.Variant and Boost.Optional; they are general-purpose "generic" types. http://www.boost.org/doc/libs/1_40_0/doc/html/variant.html http://www.boost.org/doc/libs/1_40_0/libs/optional/doc/html/index.html
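The core idea behind a variant_row cell can be shown in miniature: a type tag plus storage for each supported type, with tag-checked accessors. This is only an illustrative sketch, far simpler than DTL's real implementation (which overlaps storage and supports many more types):

```cpp
#include <cassert>
#include <string>
#include <vector>

// A toy discriminated "cell": remembers which type it holds and refuses
// to hand the value back as anything else.
class Cell {
public:
    enum Type { INT, DOUBLE, STRING };
    Cell(int v)                : type_(INT)    { i_ = v; }
    Cell(double v)             : type_(DOUBLE) { d_ = v; }
    Cell(const std::string& v) : type_(STRING), s_(v) {}

    Type type() const { return type_; }
    int         as_int()    const { assert(type_ == INT);    return i_; }
    double      as_double() const { assert(type_ == DOUBLE); return d_; }
    std::string as_string() const { assert(type_ == STRING); return s_; }
private:
    Type type_;
    int i_ = 0;        // a real variant overlaps these in a union;
    double d_ = 0;     // separate members keep the sketch simple
    std::string s_;
};

typedef std::vector<Cell> Row;   // a row is just a sequence of tagged cells
```

Boost.Variant generalizes exactly this pattern with template machinery, which is what makes its source hard to read at first.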
1,416,797
1,418,635
Reference to Lua function in C/C++
I have a function nested relatively deeply in a set of tables. Is there a way in C/C++ to get a "reference" to that function and push it (and its arguments) onto the stack when I need to call it?
This is what the reference system is for. The function call r = luaL_ref(L, LUA_REGISTRYINDEX) stores the value on the top of the stack in the registry and returns an integer that can be stored on the C side and used to retrieve the value with the function call lua_rawgeti(L, LUA_REGISTRYINDEX, r). See the PiL chapter, as well as the documentation of luaL_ref(), lua_rawgeti(), and luaL_unref() for the full story.
1,417,061
1,417,881
Automatic increment of build number in Qt Creator
I would like to have a variable (or #define) in C++ source that will increment each time I use Qt Creator to build the source code. Is there any way I can do this, perhaps with some Qt Creator plugin or similar? Failing that, is there a way to do it if I use "make" on the command line to build?
In your .pro file, you can create a variable that contains the results of a command-line program. You can then use that to create a define. BUILDNO = $$(command_to_get_the_build_number) DEFINES += BUILD=$${BUILDNO} If you just want a simple incrementing number, you could use a pretty simple script: #!/bin/bash number=$(cat build_number) number=$((number + 1)) echo "$number" | tee build_number #<-- output and save the number back to file I should note that this would cause the build number to increment every time you build, and also increment if you try to build but it fails. A better way is to get a build number based on the state of the code; many version control tools can give you a text string for that, if not a number.
1,417,121
1,417,124
Getting ring 0 mode in C++ (Windows)
How can I get ring 0 operating mode for my process in Windows 7 (or Vista)?
Allowing arbitrary code to run in ring 0 violates basic OS security principles. Only the OS kernel and device drivers run in ring 0. If you want to write ring 0 code, write a Windows device driver. This may be helpful. Certain security holes may allow your code to run in ring 0 also, but this isn't portable because the hole might be fixed in a patch :P
1,417,298
1,417,348
How is insert iterator work in c++
There is an insert iterator in the Database Template Library and in other libraries. Can someone tell me how it works? Thanks!
It is a template class so you should be able to look it up in the implementation. However, the idea is that it stores an iterator (current location) and a reference (pointer) to a container (that is being inserted in). Then it overloads operator= like this: insert_iterator& operator= (typename Container::const_reference value) { m_iter = m_container->insert(m_iter, value); ++m_iter; return *this; } So it requires a container that supports the insert method and at least a forward iterator, and has the standard typedefs (const_reference or perhaps value_type), so it can declare the right-hand type of its operator=. The other output iterator operators (*, ++) just return *this.
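You can see the same mechanism in the standard library's std::insert_iterator, obtained from std::inserter: every assignment through the iterator becomes an insert() call at the remembered position, exactly as in the operator= shown above.

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Copy all of `src` into `dst` starting at position 1, using an insert
// iterator as the output. Each element written through std::inserter is
// turned into dst.insert(pos, value), and pos advances past the insert.
std::vector<int> insert_after_first(std::vector<int> dst,
                                    const std::vector<int>& src) {
    std::copy(src.begin(), src.end(), std::inserter(dst, dst.begin() + 1));
    return dst;
}
```

This is why insert iterators make algorithms like std::copy grow the container instead of overwriting it.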
1,417,355
1,417,582
'There is no source code available for the current location.' when throwing an exception in C++ Visual Studio
I have a problem catching an exception. I am trying to rethrow an exception and I get the message: "There is no source code available for the current location." The code is very simple: #include <exception> #include <iostream> using namespace std; try { throw exception("Asas"); } catch (const exception& e) { cout << "Error msg" << e.what() << endl; throw; //This is where I get the error message from the IDE. } It also happens if I throw a string message and try to rethrow it.
Your question is so misleading, it's very hard to give you back anything but more questions. You write that you get this message when you're trying to rethrow, but it's very unclear what you mean: Is this a compiler error, a run-time error, or something you get while you're debugging? If the latter (which I assume), why are you debugging? Isn't the thrown exception caught? If so, what's the code that attempts to catch it? I suggest you change your question to show a small but complete piece of code (ideally self-contained and compilable) plus an exact description of A) what happens and B) what you think should happen instead. Meanwhile, all I can say is that the error message you quote might mean that you are running/debugging something that isn't compiled from the code you are looking at. But that's about as vague as your question...
1,417,473
1,417,598
Call Python from C++
I'm trying to call a function in a Python script from my main C++ program. The python function takes a string as the argument and returns nothing (ok.. 'None'). It works perfectly well (never thought it would be that easy..) as long as the previous call is finished before the function is called again, otherwise there is an access violation at pModule = PyImport_Import(pName). There are a lot of tutorials how to embed python in C and vice versa but I found nothing about that problem. int callPython(TCHAR* title){ PyObject *pName, *pModule, *pFunc; PyObject *pArgs, *pValue; Py_Initialize(); pName = PyUnicode_FromString("Main"); /* Name of Pythonfile */ pModule = PyImport_Import(pName); Py_DECREF(pName); if (pModule != NULL) { pFunc = PyObject_GetAttrString(pModule, "writeLyricToFile"); /* function name. pFunc is a new reference */ if (pFunc && PyCallable_Check(pFunc)) { pArgs = PyTuple_New(1); pValue = PyUnicode_FromWideChar(title, -1); if (!pValue) { Py_DECREF(pArgs); Py_DECREF(pModule); showErrorBox(_T("pValue is false")); return 1; } PyTuple_SetItem(pArgs, 0, pValue); pValue = PyObject_CallObject(pFunc, pArgs); Py_DECREF(pArgs); if (pValue != NULL) { //worked as it should! Py_DECREF(pValue); } else { Py_DECREF(pFunc); Py_DECREF(pModule); PyErr_Print(); showErrorBox(_T("pValue is null")); return 1; } } else { if (PyErr_Occurred()) PyErr_Print(); showErrorBox(_T("pFunc null or not callable")); return 1; } Py_XDECREF(pFunc); Py_DECREF(pModule); } else { PyErr_Print(); showErrorBox(_T("pModule is null")); return 1; } Py_Finalize(); return 0; }
When you say "as long as the previous call is finished before the function is called again", I can only assume that you have multiple threads calling from C++ into Python. The Python interpreter is not thread-safe, so this is going to fail! Read up on the Global Interpreter Lock (GIL) in the Python manual. Perhaps the following links will help: http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock http://docs.python.org/c-api/init.html#PyEval_InitThreads http://docs.python.org/c-api/init.html#PyEval_AcquireLock http://docs.python.org/c-api/init.html#PyEval_ReleaseLock The GIL is mentioned on Wikipedia: http://en.wikipedia.org/wiki/Global_Interpreter_Lock
1,417,484
1,417,499
Memory and Register panels in Visual Studio 2008 missing
When I still had VS2005 there were Memory and Register panels available while debugging C/C++ projects. I think they could be activated from the Debug menu, but I'm not sure anymore. The problem is that in VS2008 (Pro) I can't find them anywhere. I thought it might be some corruption of the program files, but after installing Win7 and all the programs from scratch, they still don't appear :( Thanks in advance.
I've seen something similar, where if you've installed SQL Server first, then you get the "Business Intelligence" configuration of Visual Studio, which is missing a bunch of stuff. Go to Tools / Import and Export Settings... / Reset all settings.
1,417,907
1,417,924
Sizeof in C++ and how to calculate pointer length?
Can someone explain the following code snippet for me? // Bind base object so we can compute offsets // currently only implemented for indexes. template<class DataObj> void BindAsBase(DataObj &rowbuf) { // Attempting to assign working_type first guarantees exception safety. working_type = DTL_TYPEID_NAME (rowbuf); working_addr = reinterpret_cast<BYTE*>(&rowbuf); working_size = sizeof(rowbuf); } My problem is: what is the result of sizeof(rowbuf)? Is it the size of DataObj or the size of BYTE*? Why? Another question: why is there a need to calculate the offset of a pointer? What is the usual use of that? And what is sizeof(working_addr) equal to?
sizeof(rowbuf) returns the length in bytes of an object of type DataObj. Note that rowbuf is not a pointer but a reference, which is quite a difference. If you want to calculate the size of a DataObj pointer, use sizeof(&rowbuf) or sizeof(DataObj*).
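This is easy to verify directly; DataObj below is an illustrative stand-in for the real row type:

```cpp
#include <cassert>
#include <cstddef>

// sizeof applied through a reference yields the referred-to object's size,
// not the size of any pointer.
struct DataObj { char payload[64]; };

std::size_t size_via_reference(DataObj& rowbuf) { return sizeof(rowbuf); }
std::size_t size_of_pointer(DataObj& rowbuf)    { return sizeof(&rowbuf); }
```

On a typical 64-bit platform the first function returns 64 and the second returns 8, which is exactly the distinction the question asks about.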
1,418,015
1,418,703
How to get Python exception text
I want to embed Python in my C++ application. I'm using the Boost library - a great tool. But I have one problem: if a Python function throws an exception, I want to catch it and print the error in my application, or get some detailed information like the line number in the Python script that caused the error. How can I do it? I can't find any functions to get detailed exception information in the Python API or Boost. try { module=import("MyModule"); //this line will throw an exception if MyModule contains an error } catch ( error_already_set const & ) { //Here I can tell that I have an error, but I can't determine what caused it std::cout << "error!" << std::endl; } PyErr_Print() just prints the error text to stderr and clears the error, so it can't be the solution
Well, I found out how to do it. Without boost (only error message, because code to extract info from traceback is too heavy to post it here): PyObject *ptype, *pvalue, *ptraceback; PyErr_Fetch(&ptype, &pvalue, &ptraceback); //pvalue contains error message //ptraceback contains stack snapshot and many other information //(see python traceback structure) //Get error message char *pStrErrorMessage = PyString_AsString(pvalue); And BOOST version try{ //some code that throws an error }catch(error_already_set &){ PyObject *ptype, *pvalue, *ptraceback; PyErr_Fetch(&ptype, &pvalue, &ptraceback); handle<> hType(ptype); object extype(hType); handle<> hTraceback(ptraceback); object traceback(hTraceback); //Extract error message string strErrorMessage = extract<string>(pvalue); //Extract line number (top entry of call stack) // if you want to extract another levels of call stack // also process traceback.attr("tb_next") recurently long lineno = extract<long> (traceback.attr("tb_lineno")); string filename = extract<string>(traceback.attr("tb_frame").attr("f_code").attr("co_filename")); string funcname = extract<string>(traceback.attr("tb_frame").attr("f_code").attr("co_name")); ... //cleanup here
1,418,019
1,418,055
Casting pointer as template argument: Comeau & MSVC compile, GCC fails
Consider the following code: template<int* a> class base {}; int main() { base<(int*)0> test; return 0; } Both Comeau and MSVC compile this without issues (except for Comeau warning about an unused variable), while GCC fails on the base<(int*)0> test; line, stating In function `int main()': a casts to a type other than an integral or enumeration type cannot appear in a constant-expression template argument 1 is invalid What exactly is it complaining about? And who's right -- should this code compile? It's worth noting that my GCC version is extremely old (3.4.2) so that may have something to do with it. Thanks.
From a draft standard (emphasis added): 14.1.3 A non-type template-parameter shall have one of the following (option- ally cv-qualified) types: ... --pointer to object, accepting an address constant expression desig- nating a named object with external linkage, ... Apparently, it's not legal to instantiate a template with a null pointer, as a null pointer doesn't point to a "named object with external linkage".
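For comparison, here is the form the quoted wording does accept: the address of a named object with external linkage, rather than a cast or a literal 0. The names below are illustrative:

```cpp
#include <cassert>

// A non-type pointer parameter bound to the address of a named object
// with external linkage - valid under the rule quoted above.
template <int* P>
struct base {
    int read() const { return *P; }
};

int global_slot = 41;   // named object with external linkage

int bump_through_template() {
    base<&global_slot> b;  // OK: &global_slot, not (int*)0
    ++global_slot;
    return b.read();       // the template reads through the bound address
}
```

All three compilers mentioned in the question accept this form; it is the cast expression `(int*)0` that the rule rejects.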
1,418,036
1,418,247
C++ implicit function calls
Will implicit function calls be a feature of C++0x? It is an interesting feature, but I haven't seen any progress on it, and the GCC C++0x page doesn't even mention it. See http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1611.pdf
No, they will not be included in the next standard update to C++ (C++0x). The idea of implicit function calls (informally: use of a niladic function name in an expression evaluates to a function call instead of decaying to its address) is interesting, and it wasn't dismissed by the committee as a bad idea. It was classified as: "Not ready for C++0x, but open to resubmit in future." (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2869.html).
1,418,068
1,418,152
What are the operations supported by raw pointer and function pointer in C/C++?
Which operations supported by function pointers differ from those supported by raw (object) pointers? Are the >, <, <= and >= operators supported by raw pointers? If so, what is their use?
For both function and object pointers, they compile but their result is only guaranteed to be consistent for addresses to sub-objects of the same complete object (you may compare the addresses of two members of a class or array) and if you compare a function or object against itself. Using std::less<>, std::greater<> and so on will work with any pointer type, and will give consistent results, even if the result of the respective built-in operator is unspecified: void f() { } void g() { } int main() { int a, b; ///// not guaranteed to pass assert((&a < &b) == (&a < &b)); ///// guaranteed to pass std::less<int*> lss1; assert(lss1(&a, &b) == lss1(&a, &b)); // note: we don't know whether lss1(&a, &b) is true or false. // But it's either always true or always false. ////// guaranteed to pass int c[2]; assert((&c[0] < &c[1]) == (&c[0] < &c[1])); // in addition, the smaller index compares less: assert(&c[0] < &c[1]); ///// not guaranteed to pass assert((&f < &g) == (&f < &g)); ///// guaranteed to pass assert((&g < &g) == (&g < &g)); // in addition, a function compares not less against itself. assert(!(&g < &g)); ///// guaranteed to pass std::less<void(*)()> lss2; assert(lss2(&f, &g) == lss2(&f, &g)); // note: same, we don't know whether lss2(&f, &g) is true or false. ///// guaranteed to pass struct test { int a; // no "access:" thing may be between these! int b; int c[1]; // likewise here int d[1]; test() { assert((&a < &b) == (&a < &b)); assert((&c[0] < &d[0]) == (&c[0] < &d[0])); // in addition, the previous member compares less: assert((&a < &b) && (&c[0] < &d[0])); } } t; } Everything of that should compile though (although the compiler is free to warn about any code snippet it wants). Since function types have no sizeof value, operations that are defined in terms of sizeof of the pointee type will not work, these include: void(*p)() = ...; // all won't work, since `sizeof (void())` won't work. // GCC has an extension that treats it as 1 byte, though. 
p++; p--; p + n; p - n; The unary + works on any pointer type, and will just return the value of it, there is nothing special about it for function pointers. + p; // works. the result is the address stored in p. Finally note that a pointer to a function pointer is not a function pointer anymore: void (**pp)() = &p; // all do work, because `sizeof (void(*)())` is defined. pp++; pp--; pp + n; pp - n;
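The guaranteed parts of the above can be condensed into a runnable check (f and g here are placeholder functions, not from any real codebase):

```cpp
#include <functional>

void f() {}
void g() {}

// std::less is required to give a total order over pointers of the same
// type, even where the built-in `<` yields only unspecified results:
bool consistent_order() {
    std::less<void (*)()> lt;
    bool first  = lt(&f, &g);
    bool second = lt(&f, &g);
    return first == second   // the ordering never changes between calls
        && !lt(&f, &f);      // strict: nothing compares less than itself
}
```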
1,418,125
18,652,863
Eclipse CDT generated getters / setters name
Is there a way (either via the UI, or in config files) to change the names of the C++ getters/setters generated by Eclipse CDT from the Java-style getSomething() to the more C++ like something() ?
It's now possible via the following menu:
1,418,141
1,418,299
C++ error when opening file
when I try to open a file for reading in my console application i get this error message: "Unhandled exception at 0x1048766d (msvcp90d.dll) in homework1.exe: 0xC0000005: Access violation writing location 0x00000000." It works fine when I compile and run the program on my macbook but when I run it on my desktop using VS 2008 it gives me this error. here is my code: int main (void){ //Open 1st file (baseproduct.dat) ifstream fin; //fin.open(filename.c_str()); fin.open("baseproduct.dat"); int tries; tries = 0; while( fin.bad() ) { if( tries >= 4 ) { cout > filename; fin.open(filename.c_str()); tries++; } SodaPop inventory[100]; //read file into array string strName; double dblPrice; int i; i = 0; fin >> strName; while( !fin.eof() ) { inventory[i].setName(strName); fin >> dblPrice; inventory[i].setPrice(dblPrice); fin >> strName; i++; } fin.close(); cout > filename; //fin.open(filename.c_str()); fin.open("soldproduct.dat"); tries = 0; while( fin.bad() ) { if( tries >= 4 ) { cout > filename; fin.open(filename.c_str()); tries++; } //read file into array i = 0; fin >> strName; while( !fin.eof() ) { cout > dblPrice; inventory[i].setPrice(dblPrice);*/ fin >> strName; i++; //1. search array for name //2. get price (what should happen with it?) //3. add # sold to quantity } fin.close(); cout
If you want to check whether the file is open or not, don't use fin.bad(); instead: while( !fin.is_open() ) { ... }
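A fuller retry loop along those lines might look like this sketch (the helper name and prompt text are made up for illustration; the point is that is_open(), not bad(), reports whether open() succeeded):

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Hypothetical retry helper: keep prompting for a filename until the
// stream actually opened or we run out of tries.
std::ifstream open_with_retries(const std::string& first_try, int max_tries) {
    std::ifstream fin(first_try);
    int tries = 1;
    while (!fin.is_open() && tries < max_tries) {
        std::cout << "Could not open file, enter another name: ";
        std::string filename;
        if (!(std::cin >> filename))
            break;                 // no more input: give up
        fin.open(filename);
        ++tries;
    }
    return fin;   // caller checks is_open() on the result
}
```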
1,418,225
1,418,397
In the Visual Studio debugger, what does {null=???} mean?
I was debugging a C++ program in VS 2003, and a boost variable showed up as having the value {null=???}. What does that mean?
Typically when you see ??? in the C++ debugger, it means the underlying expression evaluator had problems accessing the memory for the particular expression. So it's likely the value points to invalid or inaccessible memory. It's also possible that this session is using an autoexp.dat file and it points to a member that is not accessible / available in the underlying expression. I believe this will also lead to the ??? display.
1,418,399
1,418,727
Gradient Brush in Native C++?
In c#, you can use drawing2d.lineargradientbrush, but in c++ right now I only found the CreateSolidBrush function. Is there a function in the native gdi dll to create a gradient brush? I couldn't find anything like this at msdn. Thanks
To draw a vertical gradient with GDI's GradientFill function (link against msimg32.lib): void VerticalGradient(HDC hDC, const RECT& rect, COLORREF rgbTop, COLORREF rgbBottom) { GRADIENT_RECT gradientRect = { 0, 1 }; TRIVERTEX triVertex[ 2 ] = { rect.left - 1, rect.top - 1, GetRValue(rgbTop) << 8, GetGValue(rgbTop) << 8, GetBValue(rgbTop) << 8, 0x0000, rect.right, rect.bottom, GetRValue(rgbBottom) << 8, GetGValue(rgbBottom) << 8, GetBValue(rgbBottom) << 8, 0x0000 }; GradientFill(hDC, triVertex, 2, &gradientRect, 1, GRADIENT_FILL_RECT_V); }
1,418,476
1,418,520
BHO Handle OnSubmit event
Basically I want to develop a BHO that validates certain fields on a form and auto-places disposable e-mails in the appropriate fields (more for my own knowledge). So in the DOCUMENTCOMPLETE event I have this: for(long i = 0; i < *len; i++) { VARIANT* name = new VARIANT(); name->vt = VT_I4; name->intVal = i; VARIANT* id = new VARIANT(); id->vt = VT_I4; id->intVal = 0; IDispatch* disp = 0; IHTMLFormElement* form = 0; HRESULT r = forms->item(*name,*id,&disp); if(S_OK != r) { MessageBox(0,L"Failed to get form dispatch",L"",0);// debug only continue; } disp->QueryInterface(IID_IHTMLFormElement2,(void**)&form); if(form == 0) { MessageBox(0,L"Failed to get form element from dispatch",L"",0);// debug only continue; } // Code to listen for onsubmit events here... } How would I use the IHTMLFormElement interface to listen for the onsubmit event?
Once you have the pointer to the element you want to sink events for, you would QueryInterface() it for IConnectionPointContainer and then connect to that: REFIID riid = DIID_HTMLFormElementEvents2; CComPtr<IConnectionPointContainer> spcpc; HRESULT hr = form->QueryInterface(IID_IConnectionPointContainer, (void**)&spcpc); if (SUCCEEDED(hr)) { CComPtr<IConnectionPoint> spcp; hr = spcpc->FindConnectionPoint(riid, &spcp); if (SUCCEEDED(hr)) { DWORD dwCookie; hr = spcp->Advise((IDispatch *)this, &dwCookie); } } Some notes: You probably want to cache dwCookie and spcp, since you need them later when you call spcp->Unadvise() to disconnect the sink. In the call to spcp->Advise() above, I pass this. You can use any object you have that implements IDispatch, which may or may not be this object. Design left to you. riid will be the event dispinterface you want to sink. In this case, you probably want DIID_HTMLFormElementEvents2. Here's how to disconnect: spcp->Unadvise(dwCookie); Let me know if you have further questions. Edit-1: Yeah, that DIID was wrong. It should be: DIID_HTMLFormElementEvents2. Here is how I found it: C:\Program Files (x86)\Microsoft Visual Studio 8\VC\PlatformSDK>findstr /spin /c:"Events2" *.h | findstr /i /c:"form"
1,418,756
1,418,783
How to use bind1st and bind2nd?
I would like to learn how to use binding functions. Here is the idea: I have this function which takes two parameters: void print_i(int t, std::string separator) { std::cout << t << separator; } And I would like to do: std::vector<int> elements; // ... for_each(elements.begin(), elements.end(), std::bind2nd(print_i, '\n')); But it does not work! Here is what I get: /usr/include/c++/4.3/backward/binders.h: In instantiation of ‘std::binder2nd<void ()(int, std::string)>’: main.cpp:72: instantiated from here /usr/include/c++/4.3/backward/binders.h:138: error: ‘void ()(int, std::string)’ is not a class, struct, or union type /usr/include/c++/4.3/backward/binders.h:141: error: ‘void ()(int, std::string)’ is not a class, struct, or union type /usr/include/c++/4.3/backward/binders.h:145: error: ‘void ()(int, std::string)’ is not a class, struct, or union type /usr/include/c++/4.3/backward/binders.h:149: error: ‘void ()(int, std::string)’ is not a class, struct, or union type /usr/include/c++/4.3/backward/binders.h:155: error: ‘void ()(int, std::string)’ is not a class, struct, or union type /usr/include/c++/4.3/backward/binders.h:140: error: field ‘std::binder2nd<void ()(int, std::string)>::op’ invalidly declared function type /usr/include/c++/4.3/backward/binders.h: In function ‘std::binder2nd<_Operation> std::bind2nd(const _Operation&, const _Tp&) [with _Operation = void ()(int, std::string), _Tp = char]’: main.cpp:72: instantiated from here /usr/include/c++/4.3/backward/binders.h:164: error: ‘void ()(int, std::string)’ is not a class, struct, or union type /usr/include/c++/4.3/bits/stl_algo.h: In function ‘_Funct std::for_each(_IIter, _IIter, _Funct) [with _IIter = __gnu_cxx::__normal_iterator<int*, std::vector<int, std::allocator<int> > >, _Funct = std::binder2nd<void ()(int, std::string)>]’: main.cpp:72: instantiated from here /usr/include/c++/4.3/bits/stl_algo.h:3791: error: no match for call to ‘(std::binder2nd<void ()(int, std::string)>) (int&)’ make: *** [all] 
Error 1 I could use functor, but it is quicker to use binding. Thanks!
The argument to bind2nd must be an AdaptableBinaryFunction. A plain binary function does not fulfill this requirement (an adaptable function requires typedefs for its return and argument types; a plain function type does not provide any typedefs). You could wrap the function with std::ptr_fun to supply those typedefs (and pass std::string("\n") rather than '\n' as the bound value), or use std::bind, which is probably the better choice anyway.
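As a sketch of the std::bind route: the function below has the same shape as the question's print_i, except that the stream is made an explicit parameter (our addition, not in the original) so the output can be captured rather than printed:

```cpp
#include <algorithm>
#include <functional>
#include <sstream>
#include <string>
#include <vector>

// Like the question's print_i, but writing to a caller-supplied stream:
void print_i(std::ostream& os, int t, std::string separator) {
    os << t << separator;
}

std::string print_all(const std::vector<int>& elements, const std::string& sep) {
    std::ostringstream os;
    // std::bind fixes the stream (by reference) and the separator,
    // leaving the element as the single free argument, which is exactly
    // what bind2nd cannot do with a plain, non-adaptable function:
    std::for_each(elements.begin(), elements.end(),
                  std::bind(print_i, std::ref(os), std::placeholders::_1, sep));
    return os.str();
}
```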
1,418,831
1,420,944
C++ 2D tessellation library?
I've got some convex polygons stored as an STL vector of points (more or less). I want to tessellate them really quickly, preferably into fairly evenly sized pieces, and with no "slivers". I'm going to use it to explode some objects into little pieces. Does anyone know of a nice library to tessellate polygons (partition them into a mesh of smaller convex polygons or triangles)? I've looked at a few I've found online already, but I can't even get them to compile. These academic types don't give much regard to ease of use.
CGAL has packages to solve this problem. The best would probably be to use the 2D Polygon Partitioning package. For example you could generate a y-monotone partition of a polygon (works for non-convex polygons as well) and you would get something like this: The running time is O(n log n). In terms of ease of use, this is a small example generating a random polygon and partitioning it (based on this manual example): typedef CGAL::Exact_predicates_inexact_constructions_kernel K; typedef CGAL::Partition_traits_2<K> Traits; typedef Traits::Point_2 Point_2; typedef Traits::Polygon_2 Polygon_2; typedef std::list<Polygon_2> Polygon_list; typedef CGAL::Creator_uniform_2<int, Point_2> Creator; typedef CGAL::Random_points_in_square_2<Point_2, Creator> Point_generator; int main( ) { Polygon_2 polygon; Polygon_list partition_polys; CGAL::random_polygon_2(50, std::back_inserter(polygon), Point_generator(100)); CGAL::y_monotone_partition_2(polygon.vertices_begin(), polygon.vertices_end(), std::back_inserter(partition_polys)); // at this point partition_polys contains the partition of the input polygons return 0; } To install CGAL, if you are on Windows you can use the installer to get the precompiled library, and there are installation guides for every platform on this page. It might not be the simplest to install, but you get the most used and robust computational geometry library out there, and the CGAL mailing list is very helpful in answering questions...
1,418,965
1,434,328
C++ Executable distribution strategy
Recently I have asked a question about what I should use to create self-contained executables that would be deployed under a number of Linux distributions. I got very scared at first, but after reading about C++ a little, I managed to get the first version of my executable going. After a day full of joy, I just hit the wall again with another dilemma. The resulting executable must be installed in a number of Linux distributions (Slackware, Arch, Ubuntu, Debian, CentOS and a few more), and I am completely clueless on how to achieve it. All I know is that CentOS and Debian-based OSes have package managers, like apt or yum, but I am not sure those apply to my case. The code I wrote depends on a couple of libraries (more specifically RudeSocket and yaml-cpp). I have been told that I would be able to compile the executable and link it dynamically, so I just needed to distribute the executable. It happens that I could not find the .a file for the yaml-cpp library (just for RudeSocket). And here's my problem so far: At first, I went with dynamic linking but (obviously) when I copied the executable to another box: $ ./main ./main: error while loading shared libraries: libyaml-cpp.so.0.2: cannot open shared object file: No such file or directory When trying to compile it statically, I get an error too (because I don't have the yaml-cpp .a file as I mentioned): $ g++ main.cpp parse.cpp parse.h rudesocket-1.3.0/.libs/librudesocket.a -o main -static -L/usr/local/librudesocket-1.3.0/.libs/librudesocket.a(socket_connect_normal.o): In function `rude::sckt::Socket_Connect_Normal::simpleConnect(int&, char const*, int)': /root/webbyget/sockets/rudesocket-1.3.0/src/socket_connect_normal.cpp:250: warning: Using 'gethostbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking /tmp/cc3cEVK1.o: In function `operator>>(YAML::Node const&, Job&)': parse.cpp:(.text+0x1a83): undefined reference to `YAML::Node::size() const' 
/tmp/cc3cEVK1.o: In function `handle_job(rude::Socket, char const*)': parse.cpp:(.text+0x1b79): undefined reference to `YAML::Parser::Parser(std::basic_istream<char, std::char_traits<char> >&)' parse.cpp:(.text+0x1bfd): undefined reference to `YAML::Node::Node()' parse.cpp:(.text+0x1c10): undefined reference to `YAML::Parser::GetNextDocument(YAML::Node&)' parse.cpp:(.text+0x1dc6): undefined reference to `YAML::Node::size() const' parse.cpp:(.text+0x1dee): undefined reference to `YAML::Node::~Node()' parse.cpp:(.text+0x1e18): undefined reference to `YAML::Node::~Node()' parse.cpp:(.text+0x1e37): undefined reference to `YAML::Parser::~Parser()' parse.cpp:(.text+0x1e61): undefined reference to `YAML::Parser::~Parser()' (...) It's pretty obvious to me that g++ cannot compile it statically without telling it where to find the classes for yaml-cpp. It is very important that the installation should happen without human interaction, in an automated fashion. So my question is really twofold: how can I distribute this compiled program in the least complex way targeting all those distributions? is there any de facto standard solution for this kind of problem? Thank you in advance, Felipe.
You might give this technique a try.
1,419,099
1,420,198
Reading/writing QObjects
I think I can write a QObject like this by taking advantage of the Q_PROPERTYs: QDataStream &operator<<(QDataStream &ds, const Object &obj) { for(int i=0; i<obj.metaObject()->propertyCount(); ++i) { if(obj.metaObject()->property(i).isStored(&obj)) { ds << obj.metaObject()->property(i).read(&obj); } } return ds; } Which, if that's true, I don't know why QObjects don't already have that method implemented because it's pretty generic. But that's besides the point. How would I read the file? i.e., implement this function? QDataStream &operator>>(QDataStream &ds, Object &obj) { return ds; } I'm thinking I can somehow use ds.readBytes but how would I get the length of the property? PS: If it wasn't obvious, Object is my custom class that inherits from QObject.
This seems to work. QDataStream &operator>>(QDataStream &ds, Object &obj) { QVariant var; for(int i=0; i<obj.metaObject()->propertyCount(); ++i) { if(obj.metaObject()->property(i).isStored(&obj)) { ds >> var; obj.metaObject()->property(i).write(&obj, var); } } return ds; } Thanks to Eugene.
1,419,169
1,419,201
std::string::assign() causes segfault
I have a std::vector<uint8_t> that contains strings at specific offsets. Here's a shortened dump: ... @128 00 00 00 00 00 00 00 00 73 6F 6D 65 74 68 69 33 ........somethin @144 38 36 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ng.............. @160 00 00 00 00 00 00 00 00 31 2E 32 2E 33 00 00 00 ........1.2.3... @176 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ ... I am trying to extract the data at offset 136 and put it into a std::string: std::string x; x.assign(vec.begin()+136, vec.begin()+168); This however, causes my application to segfault. Now I'm pretty new at software development under Linux, but I do know how to start my app in GDB and get a backtrace, and tracked the problem down here: (gdb) backtrace #0 0xb7536d78 in ?? () from /lib/i686/cmov/libc.so.6 #1 0xb7538cd5 in malloc () from /lib/i686/cmov/libc.so.6 #2 0xb7708957 in operator new(unsigned int) () from /usr/lib/libstdc++.so.6 #3 0xb76e4146 in std::string::_Rep::_S_create(unsigned int, unsigned int, std::allocator<char> const&) () from /usr/lib/libstdc++.so.6 #4 0xb76e63b0 in std::string::_M_mutate(unsigned int, unsigned int, unsigned int) () from /usr/lib/libstdc++.so.6 #5 0xb76e654a in std::string::_M_replace_safe(unsigned int, unsigned int, char const*, unsigned int) () from /usr/lib/libstdc++.so.6 #6 0x0806d651 in std::string::_M_replace_dispatch<__gnu_cxx::__normal_iterator<unsigned char const*, std::vector<unsigned char, std::allocator<unsigned char> > > > (this=0xbfffe464, __i1=..., __i2=..., __k1=..., __k2=...) at /usr/include/c++/4.3/bits/basic_string.tcc:637 #7 0x0806d26e in std::string::replace<__gnu_cxx::__normal_iterator<unsigned char const*, std::vector<unsigned char, std::allocator<unsigned char> > > > (this=0x811c730, vec=...) at /usr/include/c++/4.3/bits/basic_string.h:1390 #8 std::string::assign<__gnu_cxx::__normal_iterator<unsigned char const*, std::vector<unsigned char, std::allocator<unsigned char> > > > ( this=0x811c730, vec=...) 
at /usr/include/c++/4.3/bits/basic_string.h:958 #9 myclass::somemethod (this=0x811c730, vec=...) at myclass.cpp:135 Printing vec.size() returns 200 and even looping over the vector and printing the data causes me no problems (exactly above the crashing snippet!). I am compiling in Debian with g++ 4.3.4. Any pointers on what this problem could be?
There is likely a mismatched free/delete somewhere else in your code that is delaying the symptom until now. When you use freed memory, the operating system is free to continue as long as it sees fit. Try running the program in valgrind. valgrind uses its own malloc and free so it can alert you to incorrect uses of new and delete. Make sure to compile without optimisations and with -g (see the note on -g below): g++ -g main.cc -o binary valgrind --leak-check=full ./binary Make sure you do not create a pointer to a stack variable that goes out of scope. For example, this is a common mistake among newer developers: int *foo() { int a = 0; // do something to a here return &a; } As a has gone out of scope, you are returning a pointer to freed memory. About -g, from the manpage: Produce debugging information in the operating system's native format (stabs, COFF, XCOFF, or DWARF 2). GDB can work with this debugging information.
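To make the two points above concrete, here is a small sketch (the function names are illustrative): returning by value is the safe counterpart of the buggy foo(), and the questioner's iterator-range string construction is itself fine whenever the offsets stay inside the buffer:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Safe counterpart of foo(): return by value, not a pointer to a local
// that is about to go out of scope.
int bar() {
    int a = 0;
    a += 42;    // do something to a here
    return a;   // copied out; nothing dangles
}

// The extraction from the question, as a helper. The iterator-range
// constructor copies the bytes, so this is safe as long as
// offset + len <= vec.size().
std::string extract(const std::vector<unsigned char>& vec,
                    std::size_t offset, std::size_t len) {
    return std::string(vec.begin() + offset, vec.begin() + offset + len);
}
```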
1,419,186
1,419,833
Troubles at Including (Linking) a static library inside another one
I'll try to explain briefly what I want to do: a project using a static library which has another one as a dependency. It produces a project called MyProject linking to MyLib1, which links to MyLib2. Here is the compile order: MyLib2 MyLib1 (linking to MyLib2) MyProject (linking to MyLib1) I'm using Visual Studio 2008 and I have some trouble defining includes. When linking, I use the property "Additional Include Directories" (on the project property C/C++ node). This seems to work between MyProject and MyLib1 but not between MyLib1 and MyLib2. For example: I have a file in MyLib2 called foo.cpp; using #include "foo.cpp" makes Visual Studio report that foo.cpp is unknown (missing file or folder). To ensure it's NOT a wrong path I gave, I've made many attempts like the following: copy-pasting the path shown in Command Line (used to compile the library) into Windows Explorer: I can see the source code of my second library. I've remade the project many times and each time I used different names (forcing me to pay attention to this) and everything seems well defined (but not "including"). The only way I have actually found to make it work: using #include "c:\\foo.cpp" as the include... Very nice for portability! Here is a Zip of the Solution to test it yourself and tell me what's wrong: MyProject.rar Thanks for taking some time to help me! Lucyberad
First, never include *.cpp files. Second, use forward declarations of your external functions: void appellib2(void); void appellib1(void) { appellib2(); } Third, right-click each project in the Solution Explorer, select "Project dependencies..." and set up proper dependencies: MyProject -> MyLib1 -> MyLib2. Finally, in properties for MyProject, set up the additional dependency MyLib1.lib. Now I can build and run your project without errors. UPDATE Never rename *.cpp to *.h just to solve linking problems. If you have a definition in your *.h file you will be unable to include it twice or more. Here is your project YourProject.rar.
1,419,314
1,419,557
A confusion about parallel_accumulate in C++ Concurrency in action
In the following example (Chapter 2), Anthony Williams is trying to parallelize the standard accumulate function. my question is why is he doing this: unsigned long const max_threads=(length+min_per_thread-1)/min_per_thread; why add length and subtract 1? why not just: unsigned long const max_threads=length/min_per_thread; ................................................................................... template<typename Iterator,typename T> struct accumulate_block { void operator()(Iterator first,Iterator last,T& result) { result=std::accumulate(first,last,result); } }; template<typename Iterator,typename T> T parallel_accumulate(Iterator first,Iterator last,T init) { unsigned long const length=std::distance(first,last); if(!length) return init; unsigned long const min_per_thread=25; unsigned long const max_threads=(length+min_per_thread-1)/min_per_thread; unsigned long const hardware_threads=std::thread::hardware_concurrency(); unsigned long const num_threads= std::min(hardware_threads!=0?hardware_threads:2,max_threads); unsigned long const block_size=length/num_threads; std::vector<T> results(num_threads); std::vector<std::thread> threads(num_threads-1); Iterator block_start=first; for(unsigned long i=0;i<(num_threads-1);++i) { Iterator block_end=block_start; std::advance(block_end,block_size); #6 threads[i]=std::thread( accumulate_block<Iterator,T>(), block_start,block_end,std::ref(results[i])); block_start=block_end; } accumulate_block()(block_start,last,results[num_threads-1]); std::for_each(threads.begin(),threads.end(), std::mem_fn(&std::thread::join)); return std::accumulate(results.begin(),results.end(),init); }
The problem with using unsigned long const max_threads=length/min_per_thread; is the truncation rounding used during integer division. If length = 7 and min_per_thread = 5, then max_threads = length / min_per_thread = 1, while max_threads should actually be 2. Adding min_per_thread - 1 before dividing rounds up instead: length + min_per_thread - 1 = 11, so max_threads = (length + min_per_thread - 1) / min_per_thread = 2.
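The expression is the usual ceiling-division idiom for positive integers; as a standalone sketch:

```cpp
// Ceiling division for positive integers: truncating division rounds
// down, so adding (divisor - 1) to the numerator makes it round up.
unsigned long ceil_div(unsigned long length, unsigned long per_thread) {
    return (length + per_thread - 1) / per_thread;
}
```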
1,419,342
1,419,513
C++: How to localize an already-written program
I want to localize a program I have already written. It's fairly big (almost 50k lines) and ideally I want a system that allows me (the programmer) to do the least amount of work possible, and without major changes to the program - if possible none at all. I looked at gettext() and liked it a lot, but it's unclear to me how it would translate strings such as these: const char *Colors[] = { { "Red" }, { "Blue" }, { "Yellow" }, .... }; which are VERY common in my program.. Here replacing "Red" with gettext("Red") would obviously not work. So I thought I would do something like, OutputFunction(gettext(Colors[Id])), but then how can I get a list of strings to localize? I doubt any program is smart enough to be able to get "Red", "Blue", "Yellow" from that in a to-localize list statically. Since it's basically a server there is no need for the ability to change the language without recompiling (I can compile it for every supported language without any major problem or annoyance), I thought about C++0x's constexpr, which would be perfect! It would work in arrays/etc and I would easily get a list of strings to localize at compile time.. Too bad that no compiler has implemented it yet. Changing all the strings to an ID is not an option since it would require a massive amount of work on my part and especially creating a new id for every new string would be annoying as hell. The same applies to converting all the arrays like the one above to something else. So, any ideas? :/
After a lot of playing around with gettext() and xgettext I think I found a way myself (sorry onebyone but I didn't like your approach.. There must be hundreds of arrays like that and I would have to import all of them in main(), that's a lot of extern and a lot of extra work :/). Anyways, this is how I think it can theoretically be done (I haven't tried yet to actually translate but I don't see why it wouldn't work) Two #define's: #define _ gettext #define __(x) x Then you use _ to actually translate and __ to simply mark strings as "to be translated": const char *Colors[] = { { __("Red") }, { __("Blue") }, { __("Yellow") }, .... }; void PrintColor(int id) { cout << _("The color is: ") << _(Colors[id]); } Then you run: xgettext -k_ -k__ *.cpp And you get the following .po file: #: test.cpp:2 msgid "Red" msgstr "" #: test.cpp:3 msgid "Blue" msgstr "" #: test.cpp:4 msgid "Yellow" msgstr "" #: test.cpp:9 msgid "The color is: " msgstr "" So, you use __ (or any other name, doesn't really matter) as a "dummy function" to just let xgettext know that the string needs to be translated, and _ to actually call gettext(). If you call _ with a string then the string will be marked to-be-translated as well, if you call it with a variable, array, whatever then it appears to be simply ignored by xgettext. Great! Now all I have to do is go through 5 trillion files and add underscores around, as if I was a monkey :/
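The macro mechanics can be sketched portably by substituting a hard-coded lookup for gettext() (fake_gettext and its "Rouge" entry are stand-ins invented here; the real program would use gettext from <libintl.h> backed by a compiled .mo catalog):

```cpp
#include <cstring>
#include <string>

// Stand-in for gettext() so the sketch runs anywhere: a one-entry
// hard-coded "catalog".
const char* fake_gettext(const char* msgid) {
    if (std::strcmp(msgid, "Red") == 0)
        return "Rouge";   // pretend we loaded a French catalog
    return msgid;         // untranslated strings fall back to the msgid
}

#define _ fake_gettext    // actually translates at runtime
#define __(x) x           // merely marks the literal for xgettext

const char* Colors[] = { __("Red"), __("Blue"), __("Yellow") };

const char* color_name(int id) { return _(Colors[id]); }
```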
1,419,449
1,419,453
Piping to process with c++
I have finally worked out how to get stdin and stdout to pipe between the main app and a process created with CreateProcess (win32) or exec (linux). Now I am interested in harnessing the piping nature of an app. The app I am running can be piped into: eg: cat file.txt | grep "a" If I want to run "grep", sending the contents of "file.txt" to it (which I have in a buffer in my c++ app), how do I do this? I assume I don't just pump it down stdin, or am I wrong. Is that what I do?
Yes, that's exactly what you do: read from stdin and write to stdout. One of the strokes of genius behind linux is the simplicity of redirecting input and output almost effortlessly, as long as your apps obey some very simple, basic rules. For example: send data to stdout and errors or informational messages to stderr. That makes it easy for a user to keep track of status, and you can still use your app to send data to a pipe. You can also redirect data (from stdout) and messages (from stderr) independently: myapp | tail -n 5 > myapp.data # Save the last 5 lines, display msgs myapp 2> myapp.err | sort # Sort the output, send msgs to a file myapp 2> /dev/null # Throw msgs away, display output myapp > myapp.out 2>&1 # Send all output (incl. msgs) to a file Redirection may be a bit confusing at first, but you'll find the time spent learning will be well worth it!
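For instance, a minimal grep-like filter only has to read stdin and write stdout to compose with pipes. Written against generic streams (names are illustrative), it can also be exercised in-process:

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Copy the lines of `in` that contain `pattern` to `out`. Hook it to
// std::cin/std::cout in main and it composes with shell pipes just
// like `grep` itself.
void filter(std::istream& in, std::ostream& out, const std::string& pattern) {
    std::string line;
    while (std::getline(in, line))
        if (line.find(pattern) != std::string::npos)
            out << line << '\n';
}

// Convenience wrapper so the behaviour is easy to check without a pipe:
std::string filter_string(const std::string& text, const std::string& pattern) {
    std::istringstream in(text);
    std::ostringstream out;
    filter(in, out, pattern);
    return out.str();
}
```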
1,419,463
1,419,504
Forward typedef declarations, effect on build times, and naming conventions
I am curious about the impact my typedef approach has on my builds. Please consider the following example. #include "SomeClass.h" class Foo { typedef SomeClass SomeOtherName; SomeOtherName* storedPointer; void setStoredPointer(SomeOtherName* s); } void Foo::setStoredPointer(SomeOtherName* s) { storedPointer = s; } Whenever I end up with situations like above, this drives the typedef into the header file and thus, requiring I #include it in the header file. I am concerned the lack of forward declarations may be causing longer build times. Based on comments from this post: Forward declaration of a typedef in C++ I can forward declare the class, typedef a reference or pointer, and then #include inside the .cpp file. This should then permit for faster build times. Am I correct in my conclusions about this? If so, I would end up with a typedef such as this: typedef SomeClass* SomeOtherNamePtr; typedef SomeClass& SomeOtherNameRef; typedef const SomeClass* SomeOtherNameConstPtr; typedef const SomeClass& SomeOtherNameConstRef; This doesn't look like very clean code to me, and I think I have read articles/postings (not necessarily on SO) recommending against this. Do you find this acceptable? Better alternatives? Update: Using Michael Burr's answer, I was able to solve the case of pointers and references only. However, I ran into a problem when trying to take the sizeof() in my function. For example, say the class has the following function: //Foo.h class Foo { typedef class SomeClass SomeOtherName; void doSomething(const SomeOtherName& subject) } //Foo.cpp #include "Foo.h" #include "SomeClass.h" void Foo::doSomething(const SomeOtherName& subject) { sizeof(subject); //generates error C2027: use of undefined type 'SomeClass'; sizeof(SomeClass); //generates same error, even though using the sizeof() //the class that has been #include in the .cpp. Shouldn't //the type be known by now? } Alternatively, this would work. 
//Foo.h class SomeClass; class Foo { void doSomething(const SomeClass& subject) } //Foo.cpp #include "Foo.h" #include "SomeClass.h" void Foo::doSomething(const SomeClass& subject) { sizeof(subject); sizeof(SomeClass); } I'm using Microsoft Visual C++ 6.0. Is this a bug of the compiler or is this in general against the standard? In the example that has the error, please note that a sizeof(SomeClass) is the original class that is being typedef, not the new typedef type being created in Foo. I'm surprised that by doing a forward declaration in a typedef is restricting my ability to do anything with the class that is being typedef. Followup: Just tested it using the XCode compiler and I believe my sizeof question was a Visual C++ 6.0 compiler issue. I'd guess that the XCode compiler is probably correct, but I don't have anything else to try at the moment. So, while this was informative, I personally am out of luck on my current task since the best answer doesn't work for my situation.
Would typedef class SomeClass SomeOtherName; do the trick for you? With that, the compilation unit that's using the typedef only for pointers or references doesn't need to #include the SomeClass header.
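A single-file sketch of the idea follows; SomeClass's body stands in for what #include "SomeClass.h" would normally provide in the .cpp file, and the getter and `value` member are invented just to make the point checkable:

```cpp
// What would be Foo.h: no #include "SomeClass.h" needed, because the
// typedef itself forward-declares the class:
typedef class SomeClass SomeOtherName;

class Foo {
    SomeOtherName* storedPointer;   // a pointer to an incomplete type is fine
public:
    Foo() : storedPointer(0) {}
    void setStoredPointer(SomeOtherName* s) { storedPointer = s; }
    SomeOtherName* getStoredPointer() const { return storedPointer; }
};

// What would be Foo.cpp: only here is the full definition required.
class SomeClass {
public:
    int value;
    SomeClass() : value(7) {}
};

bool demo() {
    SomeClass sc;
    Foo f;
    f.setStoredPointer(&sc);
    return f.getStoredPointer() == &sc && f.getStoredPointer()->value == 7;
}
```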
1,419,581
1,419,628
How to access most frequently used programs in OS and most recent files of programs programmatically?
I need to gain access to the list of most recently used programs and the list of recently opened files in Windows programmatically. These are the items you generally see once you click Start in Windows. I am looking to use C#, but if it's better in Managed C++ I will do that too.
I think for files you can access the Recent folder: string folderName = Environment.GetFolderPath(Environment.SpecialFolder.Recent); DirectoryInfo recentFolder = new DirectoryInfo(folderName); FileInfo[] files = recentFolder.GetFiles();
1,419,681
1,419,925
Boost Library, how to get determinant from lu_factorize()?
I am trying to calculate a determinant using the boost c++ libraries. I found the code for the function InvertMatrix() which I have copied below. Every time I calculate this inverse, I want the determinant as well. I have a good idea how to calculate, by multiplying down the diagonal of the U matrix from the LU decomposition. There is one problem, I am able to calculate the determinant properly, except for the sign. Depending on the pivoting I get the sign incorrect half of the time. Does anyone have a suggestion on how to get the sign right every time? Thanks in advance. template<class T> bool InvertMatrix(const ublas::matrix<T>& input, ublas::matrix<T>& inverse) { using namespace boost::numeric::ublas; typedef permutation_matrix<std::size_t> pmatrix; // create a working copy of the input matrix<T> A(input); // create a permutation matrix for the LU-factorization pmatrix pm(A.size1()); // perform LU-factorization int res = lu_factorize(A,pm); if( res != 0 ) return false; Here is where I inserted my best shot at calculating the determinant. T determinant = 1; for(int i = 0; i < A.size1(); i++) { determinant *= A(i,i); } End my portion of the code. // create identity matrix of "inverse" inverse.assign(ublas::identity_matrix<T>(A.size1())); // backsubstitute to get the inverse lu_substitute(A, pm, inverse); return true; }
The permutation matrix pm contains the information you'll need to determine the sign change: you'll want to multiply your determinant by the determinant of the permutation matrix. Perusing the source file lu.hpp we find a function called swap_rows which tells how to apply a permutation matrix to a matrix. It's easily modified to yield the determinant of the permutation matrix (the sign of the permutation), given that each actual swap contributes a factor of -1: template <typename size_type, typename A> int determinant(const permutation_matrix<size_type,A>& pm) { int pm_sign=1; size_type size=pm.size(); for (size_type i = 0; i < size; ++i) if (i != pm(i)) pm_sign *= -1; // swap_rows would swap a pair of rows here, so we change sign return pm_sign; } Another alternative would be to use the lu_factorize and lu_substitute methods which don't do any pivoting (consult the source, but basically drop the pm in the calls to lu_factorize and lu_substitute). That change would make your determinant calculation work as-is. Be careful, however: removing pivoting will make the algorithm less numerically stable.
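Stripped of the uBLAS types, the sign computation is just this (a plain vector stands in here for the permutation_matrix, using the same convention that entry i records the row swapped with row i during factorization):

```cpp
#include <cstddef>
#include <vector>

// lu_factorize stores its permutation as a transposition list: entry i
// is the row that row i was swapped with. Each genuine swap (i != pm[i])
// flips the determinant's sign.
int permutation_sign(const std::vector<std::size_t>& pm) {
    int sign = 1;
    for (std::size_t i = 0; i < pm.size(); ++i)
        if (i != pm[i])
            sign = -sign;
    return sign;
}
```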
1,419,883
1,424,514
How to prevent Flickering on picturebox of windows mobile
I have a transparent rectangle on a picture box. If I click next, the next image comes up and the transparent rectangle is drawn. The problem is flickering: while moving from one image to another image, the transparent rectangle flickers. Please help me get rid of this problem; I want to eliminate the flicker. Thanks
How are you implementing it? I had a similar problem and implemented my own picturebox by inheriting from Control, overriding OnPaint to draw my image and transparent background etc, and also overriding OnPaintBackground and doing nothing. (The default behaviour of OnPaintBackground is to paint the background of the control, which you don't need to do if you are controlling the painting of the whole control) You can also implement double buffering in the OnPaint if needed to reduce tearing if your paint operations take time.
1,419,963
1,419,983
does C++ automatically cast const ints to floats?
I am aware that casting ints to floats (and vice versa) is fairly expensive. However, does the compiler automatically do it at compile time for constants in your code? For example, is there any difference between

float y = 123;
float x = 1 / y;

and

float y = 123.f;
float x = 1.f / y;

I see some code that does the latter, but I'm not sure if it's for optimization or safety issues (i.e. just making sure that the divide is floating point even if y happens to be an int). I'm using gcc (since the answer might be compiler specific.) Also, any pointers to a list of what the compiler can and cannot optimize in general would be appreciated. Thanks!
Yes, the compiler will do the conversion automatically. Your two blocks of code are identical. It is not an optimization. Turning off optimization won't make the compiler include the int-to-float conversion in the executable code, unless it's a very poor-quality implementation. It's not for safety, either. The compiler never does anything "just in case" an operand happens to be of a different type. The compiler knows the types of everything in your code. If you change the type of a variable, everything that uses that variable gets recompiled anyway; the compiler doesn't try to keep everything else untouched and just update the changed sections.
1,420,009
1,420,072
I don't get this C/C++ Joke
After reading this article on thedailywtf.com, I'm not sure that I really got the joke. It says there that some guy changed the code from

int function() {
    int x;
    char data_string[15];
    ...
    x = 2;
    strcpy(data_string, "data data data");
    ...
}

to

int function() {
    int x = 2;
    char data_string[15] = "data data data";
    ...
}

everywhere in the code and that for some reason did inflate the size of the executable from 1 to 2 CDs (or maybe it didn't do that?). Obviously I'm not familiar enough with C/C++ to get this joke, but what seems strangest is that the 2nd code listing seems "cleaner"—at least from what I've been told in school (that is, that initializing variables is a good thing, not a bad one).
Depending on the compiler and compiler options, initialization like this char data_string[15] = "data data data"; results in a lot of move instructions to copy the literal data to stack. Calling strcpy requires less instructions. Doing this kind of thing all over a large codebase can increase the binary size significantly. And of course, he was not spending his time on adding any value.
1,420,029
1,420,100
How to break out of a loop from inside a switch?
I'm writing some code that looks like this:

while (true) {
    switch (msg->state) {
    case MSGTYPE: // ...
        break;
    // ... more stuff ...
    case DONE:
        break; // **HERE, I want to break out of the loop itself**
    }
}

Is there any direct way to do that? I know I can use a flag, and break from the loop by putting a conditional break just after the switch. I just want to know if C++ has some construct for this already.
Premise

The following code should be considered bad form, regardless of language or desired functionality:

while( true ) { }

Supporting Arguments

The while( true ) loop is poor form because it:

- Breaks the implied contract of a while loop. The while loop declaration should explicitly state the only exit condition.
- Implies that it loops forever. Code within the loop must be read to understand the terminating clause. Loops that repeat forever prevent the user from terminating the program from within the program.
- Is inefficient. There are multiple loop termination conditions, including checking for "true".
- Is prone to bugs. Cannot easily determine where to put code that will always execute for each iteration.
- Leads to unnecessarily complex code.
- Automatic source code analysis. To find bugs, program complexity analysis, security checks, or automatically derive any other source code behaviour without code execution, specifying the initial breaking condition(s) allows algorithms to determine useful invariants, thereby improving automatic source code analysis metrics.
- Infinite loops. If everyone always uses while(true) for loops that are not infinite, we lose the ability to concisely communicate when loops actually have no terminating condition. (Arguably, this has already happened, so the point is moot.)

Alternative to "Go To"

The following code is better form:

while( isValidState() ) {
  execute();
}

bool isValidState() {
  return msg->state != DONE;
}

Advantages

No flag. No goto. No exception. Easy to change. Easy to read. Easy to fix. Additionally the code:

- Isolates the knowledge of the loop's workload from the loop itself.
- Allows someone maintaining the code to easily extend the functionality.
- Allows multiple terminating conditions to be assigned in one place.
- Separates the terminating clause from the code to execute.
- Is safer for Nuclear Power plants. ;-)

The second point is important.
Without knowing how the code works, if someone asked me to make the main loop let other threads (or processes) have some CPU time, two solutions come to mind:

Option #1

Readily insert the pause:

while( isValidState() ) {
  execute();
  sleep();
}

Option #2

Override execute:

void execute() {
  super->execute();
  sleep();
}

This code is simpler (thus easier to read) than a loop with an embedded switch. The isValidState method should only determine if the loop should continue. The workhorse of the method should be abstracted into the execute method, which allows subclasses to override the default behaviour (a difficult task using an embedded switch and goto).

Python Example

Contrast the following answer (to a Python question) that was posted on StackOverflow:

Loop forever. Ask the user to input their choice. If the user's input is 'restart', continue looping forever. Otherwise, stop looping forever. End.

Code

while True:
  choice = raw_input('What do you want? ')
  if choice == 'restart':
    continue
  else:
    break
print 'Break!'

Versus:

Initialize the user's choice. Loop while the user's choice is the word 'restart'. Ask the user to input their choice. End.

Code

choice = 'restart'
while choice == 'restart':
  choice = raw_input('What do you want? ')
print 'Break!'

Here, while True results in misleading and overly complex code.
1,420,044
1,428,407
Generically reading a well-formed binary file
I'm trying to read contents of a game's map/model files into a program for the purposes of writing a small model viewer and testing out some DirectX features. The model/map file formats are chunked in nature, and I know the format of these files. I can easily read the files by parsing through the individual chunks, using an approach like this:

class FileType1
{
    private Chunk1 c1;
    private Chunk2 c2;
    // etc

    public void Read(BinaryReader reader)
    {
        c1 = new Chunk1(reader);
        c2 = new Chunk2(reader);
    }
}

However, I am trying to think of some way to generically read these files, by specifying the format the file adheres to (i.e. Chunk1 is followed by Chunk2 etc etc) so that the reader can ensure the file is of an appropriate structure. I can use a Chunk super class and a Chunk factory to generically read all the chunks in any given file. Essentially, I would like to augment this with the additional functionality of a structure validator (or something similar) to result in a method similar to this:

public void Read(BinaryReader reader, ChunkFileFormat format)
{
    while (!EOF)
    {
        char[] chunkID = reader.ReadChars(4);
        Chunk c = chunkFactory.Create(chunkID);
        if (c.GetType() != format.Next.GetType())
            throw new Exception("File format is invalid");
        format.SetCurrentRecord(c);
    }
}

The idea here is that the ChunkFileFormat class specifies the structure of the file, indicating what chunk type is expected to be the next read in from the binary stream. This would allow subclasses of ChunkFileFormat to specify the layout of that particular format, and the single read method could be used for reading all different chunked file formats, rather than writing a long-winded and repetitive method for each. My question is, is anyone aware of design patterns or approaches that could deal with this situation? The project I'm working on is currently in C# although I would be interested in solutions in C++ (or any language for that matter). Thanks in advance!
These kinds of rules are easily coded using a finite-state machine. Each chunk should change the state you are in; each state waits for specific chunks afterwards. If you encounter a chunk that is not allowed in the current state, that's an error.
1,420,145
1,420,740
visiting all free slots in a bitfield
I have an array of uint64 and for all unset bits (0s), I do some evaluations. The evaluations are not terribly expensive, but very few bits are unset. Profiling says that I spend a lot of time in the finding-the-next-unset-bit logic. Is there a faster way (on a Core2duo)? My current code can skip lots of high 1s:

for(int y=0; y<height; y++) {
    uint64_t xbits = ~board[y];
    int x = 0;
    while(xbits) {
        if(xbits & 1) {
            ... with x and y
        }
        x++;
        xbits >>= 1;
    }
}

(And any discussion about how/if to SIMD/CUDA-ise this would be an intriguing tangent!)
Here's a quick micro-benchmark; please run it if you can to get stats for your system, and please add your own algorithms! The commandline:

g++ -o bit_twiddle_mirco_opt bit_twiddle_mirco_opt.cpp -O9 -fomit-frame-pointer -DNDEBUG -march=native

And the code:

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <stdint.h>

static unsigned long get_usecs() {
    struct timeval tv;
    gettimeofday(&tv,NULL);
    return tv.tv_sec*1000000+tv.tv_usec;
}

enum { MAX_HEIGHT = 64 };

uint64_t board[MAX_HEIGHT];
int xsum, ysum;

void evaluate(int x,int y) {
    xsum += x;
    ysum += y;
}

void alphaneo_unrolled_8(int height) {
    for(int y=0; y < height; y++) {
        uint64_t xbits = ~board[y];
        int x = 0;
        while(xbits) {
            if(xbits & (1 << 0)) evaluate(x,y);
            if(xbits & (1 << 1)) evaluate(x+1,y);
            if(xbits & (1 << 2)) evaluate(x+2,y);
            if(xbits & (1 << 3)) evaluate(x+3,y);
            if(xbits & (1 << 4)) evaluate(x+4,y);
            if(xbits & (1 << 5)) evaluate(x+5,y);
            if(xbits & (1 << 6)) evaluate(x+6,y);
            if(xbits & (1 << 7)) evaluate(x+7,y);
            x+=8;
            xbits >>= 8;
        }
    }
}

void will_while(int height) {
    for(int y=0; y<height; y++) {
        uint64_t xbits = ~board[y];
        int x = 0;
        while(xbits) {
            if(xbits & 1) evaluate(x,y);
            xbits >>= 1;
            x++;
        }
    }
}

void will_ffs(int height) {
    for(int y=0; y<height; y++) {
        uint64_t xbits = ~board[y];
        int x = __builtin_ffsl(xbits);
        while(x) {
            evaluate(x-1,y);
            xbits >>= x;
            xbits <<= x;
            x = __builtin_ffsl(xbits);
        }
    }
}

void rnd_board(int dim) {
    for(int y=0; y<dim; y++) {
        board[y] = ~(((uint64_t)1 << dim)-1);
        for(int x=0; x<dim; x++)
            if(random() & 1) board[y] |= (uint64_t)1 << x;
    }
}

void test(const char* name,void(*func)(int)) {
    srandom(0);
    printf("testing %s... ",name);
    xsum = ysum = 0;
    const unsigned long start = get_usecs();
    for(int i=0; i<100000; i++) {
        const int dim = (random() % MAX_HEIGHT) + 1;
        rnd_board(dim);
        func(dim);
    }
    const unsigned long stop = get_usecs();
    printf("%lu usecs (check %d,%d)\n",stop-start,xsum,ysum);
}

int main() {
    test("will_while()",will_while);
    test("will_ffs()",will_ffs);
    test("alphaneo_unrolled_8()",alphaneo_unrolled_8);
    return 0;
}
1,420,234
1,420,248
Converting C++ function to Delphi: what to do with void* parameter?
I'm writing a DLL in Delphi using the below C++ example:

USERDLL_API double process_message (const char* pmessage, const void* param)
{
  if (pmessage==NULL) { return 0; }
  if (param==NULL) { return 0; }
  if (strcmp(pmessage,"state")==0)
  {
    current_state *state = (current_state*) param;
    return process_state( (current_state*)param );
  }
}

Unfortunately, I know next to nothing about C++ and pointers. What should I use instead of char* (PChar?) and void*?

function process_message (const pmessage: PChar; const param: ???): Double; export;
begin
  ???
end;

exports process_message;

Any help with the body of the function will be highly appreciated, too. I realize it's not rocket science, but I wouldn't learn the basics of C++ just to convert a couple of lines, if someone's kind enough to do that for me :-)
function process_message (const pmessage: PChar; const param: Pointer): Double; export; stdcall;
begin
  If (pmessage = nil) Or (param = nil) Then
    Result := 0
  Else If StrComp(pmessage, 'state') = 0 Then
    // assumes a pointer type Pcurrent_state = ^current_state is declared,
    // matching the (current_state*) cast in the C++ version
    Result := process_state(Pcurrent_state(param));
  // missing a return statement for cases where pmessage is not 'state' here!
end;

exports process_message;

Untested, but should help to get you started.
1,420,280
1,420,285
Returning pointers in functions
Is the following code legal?

char* randomMethod1() {
    char* ret = "hello";
    return ret;
}

And this one?

char* randomMethod2() {
    char* ret = new char[10];
    for (int i = 0; i < 9; ++i) {
        ret[i] = (char)(65 + i);
    }
    ret[9] = '\0';
    return ret;
}

I'd say the first one is legal, as you are actually returning a pointer to a string literal that I think is loaded from the string table of the program. However, I'd say the second is not. I'd say in the second method you are allocating memory on the stack, which as soon as you leave the function, might be used by another method, turning to trash the pointer you are returning. Is that how it really works? Here is the disassembled code. How can I see it is being allocated on the heap?

char* randomMethod2() {
000536E0  push        ebp
000536E1  mov         ebp,esp
000536E3  sub         esp,0E4h
000536E9  push        ebx
000536EA  push        esi
000536EB  push        edi
000536EC  lea         edi,[ebp-0E4h]
000536F2  mov         ecx,39h
000536F7  mov         eax,0CCCCCCCCh
000536FC  rep stos    dword ptr es:[edi]
    char* ret = new char[10];
000536FE  push        0Ah
00053700  call        operator new (511E0h)
00053705  add         esp,4
00053708  mov         dword ptr [ebp-0E0h],eax
0005370E  mov         eax,dword ptr [ebp-0E0h]
00053714  mov         dword ptr [ret],eax
    for (int i = 0; i < 9; ++i) {
00053717  mov         dword ptr [i],0
0005371E  jmp         randomMethod2+49h (53729h)
00053720  mov         eax,dword ptr [i]
00053723  add         eax,1
00053726  mov         dword ptr [i],eax
00053729  cmp         dword ptr [i],9
0005372D  jge         randomMethod2+5Fh (5373Fh)
        ret[i] = (char)(65 + i);
0005372F  mov         eax,dword ptr [i]
00053732  add         eax,41h
00053735  mov         ecx,dword ptr [ret]
00053738  add         ecx,dword ptr [i]
0005373B  mov         byte ptr [ecx],al
    }
0005373D  jmp         randomMethod2+40h (53720h)
    ret[9] = '\0';
0005373F  mov         eax,dword ptr [ret]
00053742  mov         byte ptr [eax+9],0
    return ret;
00053746  mov         eax,dword ptr [ret]
}
00053749  pop         edi
0005374A  pop         esi
0005374B  pop         ebx
0005374C  add         esp,0E4h
00053752  cmp         ebp,esp
00053754  call        @ILT+320(__RTC_CheckEsp) (51145h)
00053759  mov         esp,ebp
0005375B  pop         ebp
0005375C  ret
Both are legal. In the second one, you are not allocating memory from the stack. You are using new, and it allocates memory from the heap. If you don't free the pointer returned from the second method using delete, you'll have a memory leak. By the way, stack-allocated arrays are declared like this: char x[10]; // Note that there isn't any `new`. This line calls operator new which allocates memory from the heap and initializes the object. 00053700 call operator new (511E0h)
1,420,354
1,420,379
making a object equal to another object
I know you can make two objects equal to each other when one of them is being declared; I tested this in my program. But when I went to use an assignment statement, it freaked out. Can you make two objects equal to each other with an assignment statement, or can you only do that when one object is being declared?
You have to provide operator= to a class so as to copy the contents of another object. For example:

class A
{
public:
    // Default constructor
    A() {}
    // Copy constructor
    A(const A&) {}
    // Assignment operator
    A& operator=(const A& a) { return *this; }
};

int main()
{
    A a;    // Invokes default constructor
    A b(a); // Invokes copy constructor
    A c;
    c = a;  // Invokes assignment operator
}
1,420,515
1,433,858
Causes for ILINK32 Error: Unresolved external '__fastcall System::TObject::NewInstance(System::TMetaClass *)' referenced from XXX.obj?
I am getting the following error from C++ Builder 2009's linker:

Unresolved external '__fastcall System::TObject::NewInstance(System::TMetaClass *)' referenced from XXX.obj

We have a set of Delphi files (.pas) and a set of C++ Builder files (.hpp and .obj), which were generated from these .pas files. The set of files is copied to another machine. Both machines have the very same C++ Builder 2009 version with the same updates (latest: 3+4) installed. When I create an empty VCL application in C++ Builder on the other machine and include one obj file from this set in the active project, I get the above mentioned error at the linking stage.

The strange things about this error are:

- This error can be reproduced not on every machine or C++ Builder installation (I've checked at least 5 of them).
- If you remove the obj-file and instead add the corresponding pas file to the project - the error will disappear. But if you remove the pas-file and include the obj-file again - there will be no error.
- None of the obj or pas files gets modified in the process. I.e. if you delete this set of files from the machine and bring them from the first machine again (where they were created) - you still will have no error.
- Once you do that sequence on one particular machine (include/exclude pas-file from the project) - you can no longer get this error on that machine, no matter how hard you try (move files between folders, playing with settings, etc, etc).

Actually, I already have no machines where I can reproduce this error right now :( I do not see how the situation "after" is different from the situation "before" (after/before inclusion of the pas-file), so the error is visible only before and not after. The only mention of this error (or a very similar error) on the internet is this. But there is no solution. There are no "+" chars in the path nor spaces (" "). Am I missing something? Right now it looks like a C++ Builder bug to me.

P.S. We cannot use the "just include pas-file" solution, as we need to deploy only .hpp and .obj (no .pas files) to certain machines.
Okay, I've found the answer: the reason was some wrong IDE or project settings (I do not know for sure). I have several versions of C++ Builder and Delphi installed, and for some reason C++ Builder 2009's linker picked up the wrong obj files - the ones which should be used for another version (possibly 2007). The reason for the error was that NewInstance was changed between the 2007 and 2009 versions - see here: https://forums.codegear.com/thread.jspa?messageID=161105
1,420,546
1,420,554
Does C or C++ have a standard regex library?
Does it? If yes, where can I get the documentation for it... if not, then which would be the best alternative?
C++11 now finally does have a standard regex library - std::regex. If you do not have access to a C++11 implementation, a good alternative could be boost regex. It isn't completely equivalent to std::regex (e.g. the "empty()" method is not in the std::regex) but it's a very mature regex implementation for C++ none the less.
1,420,552
1,420,564
What's the difference between virtual function instantiations in C++?
What's the difference between the following two declarations?

virtual void calculateBase() = 0;
virtual void calculateBase();

I read the first one (=0) is a "pure abstract function" but what does that make the second one?
The first one is called a pure virtual function. Normally pure virtual functions will not have any implementation, and you cannot create an instance of a class containing a pure virtual function. The second one is a virtual function (i.e. a 'normal' virtual function). A class provides the implementation for this function, but its derived class can override this implementation by providing its own implementation for this method.
1,420,602
1,420,821
Compiling JVMTI agent (using GCC, on OSX Snow Leopard)
I am trying to build a JVMTI agent using the g++ command on Snow Leopard and I get the following error:

$ g++ -o agent.so -I `/usr/libexec/java_home`/include agent.cpp
Undefined symbols:
  "_main", referenced from:
      start in crt1.10.6.o
ld: symbol(s) not found
collect2: ld returned 1 exit status

I am a total novice when it comes to gcc and C++ programming so I have no idea what that error means. The agent itself is extremely basic:

#include <jvmti.h>
#include <iostream>

JNIEXPORT jint JNICALL Agent_OnLoad(JavaVM *vm, char *options, void *reserved)
{
    std::cout << "Loading aspect..." << std::endl;
    return JNI_OK;
}

Any help with the message would be greatly appreciated.
The command line options you've supplied to g++ are telling it that you're trying to build an executable, not a shared library. g++ is complaining that you haven't defined a main function, as every executable requires one. Build your agent as a shared library instead, so that g++ knows to compile and link your code into a loadable library rather than an executable file. On OS X that means passing -dynamiclib (the equivalent flag on most other platforms is -shared):

g++ -dynamiclib -o agent.so -I `/usr/libexec/java_home`/include agent.cpp
1,420,825
1,420,967
Windows Mobile/C: Wait until variable changes
I'm currently writing a wrapper library for windows mobile in C/C++. I have to implement and export the following functions:

void start_scanning();
int wait_for_scanning_result();
void stop_scanning();

start_scanning() is called to start the scanning process. wait_for_scanning_result() will wait until a result is available and return it, and stop_scanning will abort the process. The library I am using has a callback function that is executed when a result is available.

void on_scanning_result(int result)
{
    /* My code goes here */
}

Unfortunately I have to implement the functions above, so my plan was to solve it like this:

void on_scanning_result(int result)
{
    scan_result_available = 1;
    scan_result = result;
}

int wait_for_scanning_result()
{
    /* ... wait until scan_result_available == 1 */
    return scan_result;
}

I have no idea how to do this in windows/C and I would be very glad if someone could help me or tell me which functions I have to use to accomplish this.
You can use windows Synchronization Functions. Basically all you have to do is:

- CreateEvent - create an event
- WaitForSingleObject - wait for this event to become signaled
- SetEvent - signal the event
1,420,972
1,421,025
Accessing an array with a negative number!
I am converting an extremely large and very old (25 years!) program from C to C++. In it there are many (very very many) places where I access a global one dimensional UBYTE array using a variety of integer indexes. Occasionally this index may be negative. I sometimes, but not always, trapped this case and made sure nothing went wrong, but as a belt and braces measure I actually went to the trouble of making sure that there was another chunk of memory immediately preceding the array and filled it with the right values such that if I accidentally omitted to trap the negative number condition then a correct answer would still be fetched in the array access. This actually worked fine for many many years. But now under C++ it seems that accessing an array with a negative number behaves differently and now I have a program behaving badly. I fixed one case of an unhandled negative number and the program appears to be working fine, but I am nervous that I have not trapped all the negative numbers and there may be problems ahead. So my question now is, is there a way, at runtime, for me to detect any instances of accessing arrays with negative indexes? I'll be impressed if anyone can come up with an answer. If you're pretty certain it can not be done in any automated way then telling me that is valuable information too. I should just add that I'm not really a C++ programmer (yet). So far all I've done is the absolute bare minimum (almost nothing) to get the program to compile under a C++ compiler. So if your answer involves fancy "experts only, C++ solutions", then please try and explain in words of one syllable or give me a link so I can look it up.
Can you replace the global one-dimensional ubyte array with an object with overloaded operator[]? Using the absolute value of the int input might solve some of your issues. Edit: Depending on the usage pattern of your array (no pointer shenanigans), using an object with overloaded operator[] could actually be entirely transparent to the users of the array, hence my suggestion.
1,421,277
1,421,522
How do you design a C++ application so that Mock Objects are easiest to use?
I've never developed using Test Driven Development, and I've never used Mock Objects for unit testing. I've always unit tested simple objects that don't incorporate other aspects of the application, and then moved on to less simple objects that only reference objects that have already been unit tested. This tends to progress until the final "unit" test is a component test. What design techniques are used to make the replacing of internal classes with Mock Objects as easy as possible? For example, in my code, I would include the header file for myDataClass within myWorkerClass. myDataClass is constructed by myWorkerClass, and its lifetime is tied to myWorkerClass. How can you set it up so that it would include a mock myDataClass when the include is hard-wired?
You could look to adapt your code to follow an (Abstract) Factory Design pattern, whereby a different factory could be used in a unit test environment that would create your mock objects.
1,421,367
1,431,430
Does the isSelect-method of QSqlQuery return true when a stored procedure is executed?
Will the isSelect-method of QSqlQuery return true when a stored procedure containing a SELECT-statment is executed on sqlserver?
The documentation states that isSelect: "Returns true if the current query is a SELECT statement; otherwise returns false" During my testing I found that it also returns true for an EXEC statement on sqlserver if there is a result-set to be fetched.
1,421,485
1,421,497
template class, implementation code causing linking issues
I currently have a program where my main code is in a file main.cpp. Main.cpp includes a header file "class.h" that declares a class that is used within main.cpp. Also in main.cpp I have function declarations that declare the functions I use within main.cpp. The code for these functions is in a separate .cpp file, functions.cpp. Like main.cpp, functions.cpp also includes class.h as the class type is used within the functions. class.h contains the class declaration only. The implementation code for class.h is in a separate .cpp file, classimplementation.cpp.

It all works fine until I try to make the class in class.h a template class. Then I get linking problems. Research and testing has shown me that this is because the definition of the template class functions needs to reside in class.h with the declaration. I therefore took the required code out of classimplementation.cpp and put it into class.h. This did solve my original linking issues but instead I get more linking errors that seem to be telling me I am trying to redefine the functions that I moved into class.h. This I think is because class.h is being included by main.cpp and again by functions.cpp. Therefore the functions in class.h are being defined twice:

Error 41 error LNK2005: "public: __thiscall RecordPocket::RecordPocket(int)" (??0?$RecordPocket@VT@@@@QAE@H@Z) already defined in classimplementation.obj functions.obj

I know that class implementation code should really be kept out of include files but due to the template class limitation of having to keep the class functions local I appear (in my novice mind) to have no choice. Has anyone been in this scenario and can offer any advice? I have tried surrounding the functions I moved from classimplementation.cpp to class.h with the standard #ifndef CLASSIMP / #define CLASSIMP include guards and #pragma once, but neither makes any difference.
If all else fails I will move the functions from functions.cpp into main.cpp so that class.h gets called just the once but I’d rather find out what I’m doing wrong as I’m sure it will happen again.
You could keep the template member functions inside the class template definition itself, in the header:

template<typename T>
class MyTempClass {
    void myFunction() {
        // code here
    }
};

EDITED: I removed the code corrected by Glen
1,421,487
1,421,511
DirectX 9 or 10 Overlay
How is it possible to draw an overlay over a game with DirectX 9 or 10? I found code with deprecated DirectShow code, but it will not run.
If this is what you have already found then ignore it, but try this: Direct3D Hooking Example
1,421,658
30,828,487
Qt Creator: “XYZ does not name a type”
This is a very frustrating error message in Qt Creator: ’XYZ’ does not name a type. This usually means that there is an error in the class XYZ that prevents the compiler from generating the type, but there are no additional hints as to what went wrong. Any suggestions?
I found this problem on Qt Creator 3.4.1 and Qt 5.4. When I replaced includes such as #include <QTextEdit> with the forward declaration class QTextEdit; the problem was gone.
1,421,666
1,421,730
Qt Creator: “inline function used but never defined” – why?
Why am I getting this warning in Qt Creator: inline function 'bool Lion::growl()' used but never defined? I double-checked my code, and have a declaration inline bool growl() in Lion (lion.h) and the corresponding implementation in lion.cpp: inline bool Lion::growl(). What's going on?

EDIT: My assumption has been that it is legal to define the actual inline method in the .cpp file (the inline keyword alerts the compiler to look for the method body elsewhere), or am I mistaken? I don't want to clutter my header files with implementation details.
Well, I don't know the exact problem, but for starters: Inline methods are supposed to be implemented in the header file. The compiler needs to know the code to actually inline it. Also using the "inline" keyword in the class declaration doesn't have any effect. But it cannot hurt either. See also: c++ faq lite
1,421,668
1,421,678
C++ tutorial for experienced C programmer
I have been programming exclusively in C for 25 years but have never used C++. I now need to learn the basics of C++ programming. Can anyone recommend an online tutorial (or failing that a book) that would be most suitable for me. Thanks. Edit: I actually needed the C++ purely for the purposes of adding a couple of dirty hacks to a huge and old C program. Converting the entire program in to properly written OO code is entirely economically unfeasible. Some people have criticized the suggested solutions based on the fact that they will lead me down the path of becoming a "C programmer who knows some C++ without getting in to the proper spirit of C++" - but actually that fits my requirements perfectly. Edit: The link in the top voted answer seems to be broken right now but the file appears to exist in multiple places - e.g. here.
This might be of some use: C++ tutorial for C users. If you're looking for a book, check out "C++ for C Programmers" by Ira Pohl (Amazon).
1,421,671
1,421,780
When are static C++ class members initialized?
There appears to be no easy answer to this, but are there any assumptions that can be safely made about when a static class field can be accessed? EDIT: The only safe assumption seems to be that all statics are initialized before the program commences (call to main). So, as long as I don't reference statics from other static initialization code, I should have nothing to worry about?
The standard guarantees two things - that objects defined in the same translation unit (usually that means the same .cpp file) are initialized in order of their definitions (not declarations): 3.6.2 The storage for objects with static storage duration (basic.stc.static) shall be zero-initialized (dcl.init) before any other initialization takes place. Zero-initialization and initialization with a constant expression are collectively called static initialization; all other initialization is dynamic initialization. Objects of POD types (basic.types) with static storage duration initialized with constant expressions (expr.const) shall be initialized before any dynamic initialization takes place. Objects with static storage duration defined in namespace scope in the same translation unit and dynamically initialized shall be initialized in the order in which their definition appears in the translation unit. The other guaranteed thing is that initialization of static objects from a translation unit will be done before the use of any object or function from this translation unit: It is implementation-defined whether or not the dynamic initialization (dcl.init, class.static, class.ctor, class.expl.init) of an object of namespace scope is done before the first statement of main. If the initialization is deferred to some point in time after the first statement of main, it shall occur before the first use of any function or object defined in the same translation unit as the object to be initialized. Nothing else is guaranteed (in particular, the order of initialization of objects defined in different translation units is implementation defined). EDIT As pointed out in Suma's comment, it is also guaranteed that they are initialized before main is entered.
1,421,684
1,421,835
Converting float to double
How expensive is the conversion of a float to a double? Is it as trivial as an int to long conversion? EDIT: I'm assuming a platform where float is 4 bytes and double is 8 bytes
Platform considerations This depends on the platform used for float computation. With the x87 FPU the conversion is free, as the register content is the same - the only price you may sometimes pay is the memory traffic, but in many cases there is even no traffic, as you can simply use the value without any conversion. x87 is actually a strange beast in this respect - it is hard to properly distinguish between floats and doubles on it, as the instructions and registers used are the same; what is different are the load/store instructions, and the computation precision itself is controlled using status bits. Using mixed float/double computations may result in unexpected results (and there are compiler command line options to control the exact behaviour and optimization strategies because of this). When you use SSE (and sometimes Visual Studio uses SSE by default), it may be different, as you may need to transfer the value out of the FPU registers or do something explicit to perform the conversion. Memory savings performance As a summary, and answering your comment elsewhere: if you want to store the results of floating computations into 32b storage, the result will be the same speed or faster, because: If you do this on x87, the conversion is free - the only difference is that fstp dword[] will be used instead of fstp qword[]. If you do this with SSE enabled, you may even see some performance gain, as some float computations can be done with SSE once the precision of the computation is only float instead of the default double. In all cases the memory traffic is lower.
1,421,697
1,421,768
C# running faster than C++?
A friend and I have written an encryption module and we want to port it to multiple languages so that it's not platform specific encryption. Originally written in C#, I've ported it into C++ and Java. C# and Java will both encrypt at about 40 MB/s, but C++ will only encrypt at about 20 MB/s. Why is C++ running this much slower? Is it because I'm using Visual C++? What can I do to speed up my code? Is there a different compiler that will optimize C++ better? I've already tried optimizing the code itself, such as using x >> 3 instead of x / 8 (integer division), or y & 63 instead of y % 64 and other techniques. How can I build the project differently so that it is more performant in C++? EDIT: I must admit that I have not looked into how the compiler optimizes code. I have classes that I will be taking here in College that are dedicated to learning about compilers and interpreters. As for my code in C++, it's not very complicated. There are NO includes, there is "basic" math along with something we call "state jumping" to produce pseudo random results. The most complicated things we do are bitwise operations that actually do the encryption and unchecked multiplication during an initial hashing phase. There are dynamically allocated 2D arrays which stay alive through the lifetime of the Encryption object (and are properly released in a destructor). There are only 180 lines in this. Ok, so my micro-optimizations aren't necessary, but I should believe that they aren't the problem; it's about time. To really drill the point in, here is the most complicated line of code in the program: input[L + offset] ^= state[state[SIndex ^ 255] & 63]; I'm not moving arrays, or working with objects. Syntactically the entire set of code runs perfectly, and it'll work seamlessly if I were to encrypt something with C# and decrypt it with C++, or Java; all 3 languages interact as you'd expect they would.
I don't necessarily expect C++ to run faster than C# or Java (which are within 1 MB/s of each other), but I'm sure there's a way to make C++ run just as fast, or at least faster than it is now. I admit I'm not a C++ expert; I'm certainly not as seasoned in it as many of you seem to be, but if I can cut and paste 99% of the code from C# to C++ and get it to work in 5 mins, then I'm a little put out that it takes twice as long to execute. RE-EDIT: I found an optimization in Visual Studio I forgot to set before. Now C++ is running 50% faster than C#. Thanks for all the tips, I've learned a lot about compilers in my research.
Without source code it's difficult to say anything about the performance of your encryption algorithm/program. I reckon though that you made a "mistake" while porting it to C++, meaning that you used it in an inefficient way (e.g. lots of copying of objects happens). Maybe you also used VC 6, whereas VC 9 would/could produce much better code. As for the "x >> 3" optimization... modern compilers convert integer division to bitshifts by themselves. Needless to say, this optimization may not be the bottleneck of your program at all. You should profile it first to find out where you're spending most of your time :)
1,422,056
1,422,184
Weird behaviour of Koenig Lookup
Consider the following program: namespace NS2 { class base { }; template<typename T> int size(T& t) { std::cout << "size NS2 called!" << std::endl; return sizeof(t); } }; namespace NS1 { class X : NS2::base { }; } namespace NS3 { template<typename T> int size(T& t) { std::cout << "size NS3 called!" << std::endl; return sizeof(t) + 1; } template<typename T> class tmpl { public: void operator()() { size(*this); } }; }; int main() { NS3::tmpl<NS1::X> t; t(); return 0; } My compiler (gcc 4.3.3) does not compile the program because the call to size is ambiguous. The namespace NS2 seems to be added to the set of associated namespaces for the size call in the class tmpl. Even after reading the section about Koenig Lookup in the ISO Standard I am not sure if this behaviour is standard conformant. Is it? Does anyone know a way to work around this behaviour without qualifying the size call with the NS3 prefix? Thanks in advance!
Template arguments and base classes both affect ADL, so I think GCC is correct, here: NS3 comes from the current scope, NS1 from the X template argument, and NS2 from the base class of the template argument. You have to disambiguate somehow; I'd suggest renaming one or more of the functions, if feasible, or perhaps use SFINAE to disambiguate the functions. (Similar Situation: Note that boost::noncopyable is actually "typedef noncopyable_::noncopyable noncopyable;" so that the boost namespace doesn't get added to the ADL set of types that derive from it.)
1,422,064
1,422,077
In C++, how can I hold a list of an abstract class?
I have two implemented classes: class DCCmd : public DCMessage class DCReply : public DCMessage Both are protocol messages that are sent and received both ways. Now in the protocol implementation I'd need to make a message queue, but with DCMessage being abstract it won't let me do something like this: class DCMsgQueue{ private: vector<DCMessage> queue; public: DCMsgQueue(void); ~DCMsgQueue(void); bool isEmpty(); void add(DCMessage &msg); bool deleteById(unsigned short seqNum); bool getById(unsigned short seqNum, DCMessage &msg); }; The problem is that, as the compiler puts it, "DCMessage cannot be instantiated", since it has a pure abstract method: virtual BYTE *getParams()=0; Removing the =0 and putting empty curly braces in DCMessage.cpp fixes the problem, but it is just a hack. The other solution is that I should make two DCMsgQueues: DCCmdQueue and DCReplyQueue, but this is just duplicated code for something trivial. Any ideas? =)
You cannot instantiate the object because it is abstract, as you said. You can however hold a vector of pointers to the DCMessage class, which will work; you just push the object's address onto the vector instead of a copy of the object. Remember that the queue's owner then has to delete the messages once it is done with them. vector<DCMessage*> queue; DCCmd* commandObject = new DCCmd(...params...); queue.push_back(commandObject); BYTE* params = queue[0]->getParams();
1,422,144
1,428,607
Designing a better API?
What are the best practices and patterns to be followed for designing APIs? How to achieve implementation hiding the best way (C++/Java)? Designing APIs which are generic in nature? Any reference books/links which guide with neat examples to beginners?
This might be useful for you. The Little Manual of API Design (wayback machine) The Little Manual of API Design (original; dead)
1,422,145
1,422,255
How should I organize test cases in my project?
I have a project that looks like this: xdc/ hubactions/ hubconnection.cpp hubconnection.h uiinterface/ readme uiconnection.cpp uiconnection.h ... uiactions/ readme connectaction.cpp connectaction.h quitaction.cpp quitaction.h ... utils/ parser.cpp parser.h ... Now I want to start testing before the project becomes too big. So how should I got about organising my tests? I have come up with two options: Option 1 xdc/ hubactions/ hubconnection.cpp hubconnection.h uiinterface/ readme uiconnection.cpp uiconnection.h ... uiactions/ readme connectaction.cpp connectaction.h quitaction.cpp quitaction.h ... utils/ parser.cpp parser.h ... tests/ utils/ parsertest.cpp uiinterface/ uiconnectiontest.cpp uiactions/ connectactiontest.cpp quitactiontest.cpp hubactions/ fakehubconnection.cpp fakehubconnection.h ... Option 2 xdc/ hubactions/ tests/ fakehubconnection.cpp fakehubconnection.h hubconnection.cpp hubconnection.h uiinterface/ tests/ uiconnectiontest.cpp readme uiconnection.cpp uiconnection.h ... uiactions/ tests/ connectactiontest.cpp quitactiontest.cpp readme connectaction.cpp connectaction.h quitaction.cpp quitaction.h ... utils/ tests/ parsertest.cpp parser.cpp parser.h ... Which method is better? Can I do it in a different/better way? Thanks!
I like the code structure followed by the Apache Software Foundation (ASF) and its primary build tool, Maven. This structure is Java-centric, but can be applied to other languages. The best C++ plug-in for Maven, in my opinion, follows the ASF structure for C++ and looks like this: project/ /src /main /include /c++ /test /include /c++ I use this structure and it works out well, and is consistent with my projects written in other languages.
1,422,151
1,422,234
How to print a double with a comma
In C++ I've got a float/double variable. When I print this with for example cout, the resulting string uses a period as the decimal separator. cout << 3.1415 << endl $> 3.1415 Is there an easy way to force the double to be printed with a comma? cout << 3.1415 << endl $> 3,1415
imbue() cout with a locale whose numpunct facet's decimal_point() member function returns a comma. Obtaining such a locale can be done in several ways. You could use a named locale available on your system (std::locale("fr"), perhaps). Alternatively, you could derive your own numpunct and implement the do_decimal_point() member in it. Example of the second approach: template<typename CharT> class DecimalSeparator : public std::numpunct<CharT> { public: DecimalSeparator(CharT Separator) : m_Separator(Separator) {} protected: CharT do_decimal_point()const { return m_Separator; } private: CharT m_Separator; }; Used as: std::cout.imbue(std::locale(std::cout.getloc(), new DecimalSeparator<char>(',')));
1,422,228
1,422,517
any good method to insert a control just like excel into MFC/c++ program?
I need an Excel-like grid control in MFC; does anyone have a good suggestion for implementing that? With the control I can filter the data by clicking on the header; it will then display the distinct data of the current column for selection. Thanks!
Codeproject's MFC Grid control is very popular for this task. You will have to hack it to your own needs. For filtering and other more advanced features you might consider buying BCGSuite for MFC. Here is what they say about their Grid Control: MFC Document/View integration Integrated Field Chooser In-place cell editing Single and multiple row and cell selection Printing and Print Preview Filters Merged cells and more Microsoft has added parts of BCGControlBar Pro into Visual Studio 2008 as the famous "Feature Pack" (renamed all CBCG to CMFC, changed some function names, fixed some typos); BCGSuite contains the parts they didn't sell to Microsoft.
1,422,402
1,425,974
What Are Binding Generators For?
A friend raised this on Twitter: @name_removed doesn't understand why binding generators seem to think writing pages of XML is vastly superior to writing pages of C++... Having never encountered binding generators before, I decided to look them up. Seems pretty self-explanatory, convert C++ classes to XML format. But now I need someone to explain what they're for. Yes I have googled, for example, http://www.google.co.uk/search?hl=en&q=binding+generator+useful&meta= . Note that the resulting pages do not actually contain the word useful. I suppose I can see advantages if you wanted to auto-generate web documentation, but this seems like a demolition ball to crush a walnut. You'd end up with very poor documentation, and you might as well just release the source code. Any suggestions?
Several reasons: You focus on writing the protocol itself, not parsers. Writing parsing code is tedious, error-prone work, and most of the code is boilerplate anyway. If you have the protocol specified as XML, you can have the server written in one language and the client in another. In this way you can generate clients in many languages very fast. For example, our server is written in Java, but we have clients written in C++ on Symbian and Windows Mobile, Objective-C for iPhone and Java ME on BlackBerry. Writing the same code on three different platforms is redundant work. BTW, you don't need to write just schemas today - there are similar tools which do not use XML either for definition or for transport: Protocol Buffers and Thrift
1,422,425
1,422,449
Need help allocating space for vector within class definition using boost
I am trying to allocate space for a boost vector type in a class definition. I am not a good c++ programmer, but shown below is my best attempt. There are no error messages, but when I try to access the vector from my main function it believes that the vector has zero elements. I know this is because I did not tell the compiler how much space to allot when I declared the vector in the class definition, but I do not know how to do this without getting an error. I tried to circumvent this by telling it how big I wanted it in the constructor, but I know the compiler treats this as a redefinition that does not exist outside of the scope of the constructor. Can someone lead me in the right direction? Thanks in advance. namespace ublas = boost::numeric::ublas; class Phase { ublas::vector<cdouble> lam; public: // Constructor: Phase() { ublas::vector<cdouble> lam(2); for(int i = 0; i < 2; i++) { lam(i) = 1.0; } } // Destructor: ~Phase() {} // Accessor Function: ublas::vector<cdouble> get_lam() { return lam; } };
In your constructor you are creating a local variable lam that shadows the class variable lam. You want to initialize the vector in the constructor's initialization list: Phase() : lam(2) { for(int i = 0; i < 2; i++) { lam(i) = 1.0; } } This calls the vector constructor you want as the class is being initialized, instead of the default constructor for the class.
1,422,433
1,422,783
How do you set system time using C/C++?
I have an embedded system (ARM 9263) running an RTOS, IAR tools. The system supports the standard time() function which gives me the current time. I need the reverse call, that is I need to set the time - is there a "C" standard way to do this? I've googled around, sure thought it would be obvious, but perhaps it is platform dependent? I'm not sure why, since time() is not - any ideas? Thanks!
Using the IAR toolset the time of day C runtime API (time()) can be overridden using the example in ARM\src\lib\time.c. The default routine always returns -1, an indication that the CRT has no idea what time it is. Once you provide your own implementation of time(), which will obtain the time of day from a source that depends on your target platform and/or RTOS, you can set the time of day by updating whatever that time source is. IAR may well have already done this for their RTOS - I haven't used IAR's PowerPac RTOS. The details of how this works for another RTOS or a system with no RTOS are outlined in the IAR C/C++ Development Guide. For example, on a system I've worked on that uses an ST Micro STM32 microcontroller, the real time clock (RTC) is set to tick once per second, and the time() library function simply returns the value in the RTC. Setting a new date/time is a matter of setting the RTC with a new value. The RTC's time counter is set with a Unix epoch value (seconds since 1 Jan 1970), which allows the rest of the library functions from time.h to work just fine (up to some time around 2035 when 32-bit overflows start wreaking havoc). The calendar routines in the IAR DLIB C runtime library support dates through 2035-12-31 (they overflow before 2038, I suspect, because internal calculations use a 1 Jan 1900 epoch). If you use the Unix epoch, the other DLIB routines more or less just work - I'm not sure what level of effort would be required to use a different epoch.
1,422,601
1,422,653
How do I turn on multi-CPU/Core C++ compiles in the Visual Studio IDE (2008)?
I have a Visual Studio 2008 C++ project that has support for using multiple CPUs/cores when compiling. In the VCPROJ file I see this: <Tool Name="VCCLCompilerTool" AdditionalOptions="/MP" ... I can't find where that was turned added via the IDE and I want to set up another project that uses all of my cores during compilation. I found tons of references to the MSDN /MP page but that is for using the command line; I have yet to find any references to setting that with the IDE. How do I do that? EDIT: To clarify, the two projects are completely separate and are not in the same VCPROJ file. I wanted to turn on support for multiple cores during the C++ compilation phase.
To enable /MP option you could add it to Project Settings->C/C++->Command Line|Additional options. This is the only way to switch it on in vcproj.
1,422,817
1,422,854
How to read a float from binary file in C?
Everything I'm finding via google is garbage... Note that I want the answer in C, however if you supplement your answer with a C++ solution as well then you get bonus points! I just want to be able to read some floats into an array from a binary file EDIT: Yes I know about Endian-ness... and no I don't care how it was stored.
How you have to read the floats from the file completely depends on how the values were saved there in the first place. One common way could be: void writefloat(float v, FILE *f) { fwrite((void*)(&v), sizeof(v), 1, f); } float readfloat(FILE *f) { float v; fread((void*)(&v), sizeof(v), 1, f); return v; }
1,423,031
1,423,044
How do I write to shared memory in C++?
I'd like to write to shared memory and then dump the contents to a file in the win32 api. Currently I have this code: HANDLE hFile, hMapFile; LPVOID lpMapAddress; hFile = CreateFile("input.map", GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); hMapFile = CreateFileMapping(hFile, NULL, PAGE_READWRITE, 0, 0, TEXT("SharedObject")); lpMapAddress = MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, 0); sprintf(MapViewOfFile, "<output 1>"); UnmapViewOfFile(lpMapAddress); CloseHandle(hFile); CloseHandle(hMapFile); However, line 31 (the sprintf call) gives the error: error: cannot convert `void*(*)(void*, DWORD, DWORD, DWORD, DWORD)' to `char*' for argument `1' to `int sprintf(char*, const char*, ...)' I've tried casting the lpMapAddress to LPTSTR, but it has no effect. What am I doing wrong? Or is there a better way to do it?
In the sprintf(MapViewOfFile, "<output 1>"); line, you wanted lpMapAddress, not MapViewOfFile. Or (char*)lpMapAddress to be precise.
1,423,251
1,424,893
talking between python tcp server and a c++ client
I am having an issue trying to communicate between a Python TCP server and a C++ TCP client. After the first call, which works fine, the subsequent calls cause issues. As far as WinSock is concerned, the send() function worked properly: it returns the proper length and WSAGetLastError() does not return anything of significance. However, when watching the packets using Wireshark, I notice that the first call sends two packets, a PSH,ACK with all of the data in it and an ACK right after, but the subsequent calls, which don't work, only send the PSH,ACK packet, and not a subsequent ACK packet. The receiving computer's Wireshark corroborates this, and the Python server does nothing; it doesn't have any data coming out of the socket, and I cannot debug deeper, since socket is a native class. When I run a C++ client and a C++ server (a hacked replica of what the Python one would do), the client faithfully sends both the PSH,ACK and ACK packets the whole time, even after the first call. Is the WinSock send function supposed to always send a PSH,ACK and an ACK? If so, why would it do so when connected to my C++ server and not the Python server? Has anyone had any issues similar to this?
client sends a PSH,ACK and then the server sends a PSH,ACK and a FIN,PSH,ACK There is a FIN, so could it be that the Python version of your server is closing the connection immediately after the initial read? If you are not explicitly closing the server's socket, it's probable that the server's remote socket variable is going out of scope, thus closing it (and that this bug is not present in your C++ version). Assuming that this is the case, I can cause a very similar TCP sequence with this code for the server: # server.py import socket from time import sleep def f(s): r,a = s.accept() print r.recv(100) s = socket.socket() s.bind(('localhost',1234)) s.listen(1) f(s) # wait around a bit for the client to send its second packet sleep(10) and this for the client: # client.py import socket from time import sleep s = socket.socket() s.connect(('localhost',1234)) s.send('hello 1') # wait around for a while so that the socket in server.py goes out of scope sleep(5) s.send('hello 2') Start your packet sniffer, then run server.py and then client.py. Here is the output of tcpdump -A -i lo, which matches your observations: tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on lo, link-type EN10MB (Ethernet), capture size 96 bytes 12:42:37.683710 IP localhost:33491 > localhost.1234: S 1129726741:1129726741(0) win 32792 <mss 16396,sackOK,timestamp 640881101 0,nop,wscale 7> E..<R.@.@...............CVC.........I|....@.... &3.......... 12:42:37.684049 IP localhost.1234 > localhost:33491: S 1128039653:1128039653(0) ack 1129726742 win 32768 <mss 16396,sackOK,timestamp 640881101 640881101,nop,wscale 7> E..<..@.@.<.............C<..CVC.....Ia....@.... &3..&3...... 12:42:37.684087 IP localhost:33491 > localhost.1234: . ack 1 win 257 <nop,nop,timestamp 640881102 640881101> E..4R.@.@...............CVC.C<......1...... &3..&3..
12:42:37.684220 IP localhost:33491 > localhost.1234: P 1:8(7) ack 1 win 257 <nop,nop,timestamp 640881102 640881101> E..;R.@.@...............CVC.C<......./..... &3..&3..hello 1 12:42:37.684271 IP localhost.1234 > localhost:33491: . ack 8 win 256 <nop,nop,timestamp 640881102 640881102> E..4.(@.@...............C<..CVC.....1}..... &3..&3.. 12:42:37.684755 IP localhost.1234 > localhost:33491: F 1:1(0) ack 8 win 256 <nop,nop,timestamp 640881103 640881102> E..4.)@.@...............C<..CVC.....1{..... &3..&3.. 12:42:37.685639 IP localhost:33491 > localhost.1234: . ack 2 win 257 <nop,nop,timestamp 640881104 640881103> E..4R.@.@...............CVC.C<......1x..... &3..&3.. 12:42:42.683367 IP localhost:33491 > localhost.1234: P 8:15(7) ack 2 win 257 <nop,nop,timestamp 640886103 640881103> E..;R.@.@...............CVC.C<......./..... &3%W&3..hello 2 12:42:42.683401 IP localhost.1234 > localhost:33491: R 1128039655:1128039655(0) win 0 E..(..@.@.<.............C<......P...b... 9 packets captured 27 packets received by filter 0 packets dropped by kernel
1,423,297
1,423,387
Printing the contents of a file using the #include directive (preprocessor)
Say I have a file, t.txt, that contains the following two lines: one two Now, I would like to write a program which will #include that file somehow and print its contents, nothing more. That is, I want the contents of that file to appear in my code as a static text, at compile time. Any ideas? The reason im asking is this: I would like to create a quine by including my own file (with ifndefs to prevent recursive inclusion after the first two): http://en.wikipedia.org/wiki/Quine_(computing). So I'd still love to get an answer.
Alternative solution (since the original one won't work without limitations, as mentioned in the comments): As part of your build process, use a script (perl or python would do it easily) to generate staticstring.h from staticstring.txt, adding quotes and \n's as necessary, then use the other solution. This way your original file does not change. You want to change Text file with text on multiple lines to "Text file with text\n" "on multiple\n" "lines" I think that doing it purely with the preprocessor is not possible.
1,423,357
1,423,382
Writing to shared memory
How can I write from a file to shared memory using the Win32 API? I have this code: hFile = CreateFile("input.map", GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL); hMapFile = CreateFileMapping(hFile, NULL, PAGE_READWRITE, 0, 0, TEXT("SharedObject")); lpMapAddress = (LPTSTR) MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, 0); ReadFile( hFile, lpMapAddress, 75, &bytesRead, NULL); sprintf((char*)lpMapAddress, "<output 1>"); printf((char*) lpMapAddress); However, the printf call only returns "< output 1 >" and not the contents of the file. EDIT: Found the problem. I'm writing to the input file when I call sprintf. But I still don't know why...
Is this the entire code sample? It looks to me like the call to sprintf places a null-terminated string at lpMapAddress, which effectively overwrites whatever you read from the file--at least for the purposes of your printf statement. If you want to replace the first part of what you read with the string "<output 1>", you could do this after reading the file: char *tmp = "<output 1>"; strncpy((char*)lpMapAddress, tmp, strlen(tmp)); That copies the text of the string but not its null terminator.
1,423,489
1,423,546
Is there a caching penalty for mixing binary data and instructions within close proximity of each other?
I'm procedurally generating 128-byte blocks with some set n-byte header reserved for machine-language functions that I'm simply calling via in-line assembly. They aren't defined anywhere and are generated at run-time into pages allocated into memory with access for execution. However, I want to reserve the end (128 - n) bytes of these blocks for storing data for use within these functions due to being able to shrink the memory offset calls to 8 bits instead of 32 bits and also (possibly?) aiding with caching. However, caching is what I'm worried about. Assuming I have a processor that has both cache(s) for data and also an instruction cache, how well does the typical processor of this kind deal with this sort of formatting? Will it attempt to load the data after my instructions as instructions themselves into the instruction cache? Could this cause a significant performance penalty as the processor tries to figure out how to deal with these junk and possibly invalid "instructions" considering they'll be floating around in near proximity for essentially every call? Will it load this data into the normal L1/L2 caches once I do my first access of it at the head of the data segment or will it just be all confused at this point? Edit: I guess I should add that optimization of through-put is, obviously, rather important. How confusing or difficult the optimization is doesn't matter in this case, just minimizing the execution time of the code.
There will be some penalty since the blocks will be loaded into both the L1 instruction and data caches, which will waste space. The amount of space wasted depends on the size of a cache block, but it probably won't be offset by the savings of a reduced instruction size. L2 caches and below are usually shared between instructions and data and will not be affected. The CPU probably won't attempt to decode the data in the blocks, since you probably have a return or unconditional branch as the last instruction. Any sane CPU will not fetch or decode instructions following this.
1,423,560
1,423,845
How do I use basic_filebuf with element type other than char?
Say I want to read the contents of a file using basic_filebuf. I have a type called boost::uintmax_t which has a size of 8 bytes. I am trying to write the following: typedef basic_filebuf<uintmax_t> file; typedef istreambuf_iterator<uintmax_t> ifile; file f; vector<uintmax_t> data, buf(2); f.open("test.txt", std::ios::in | std::ios::binary); f.pubsetbuf(&buf[0], 1024); ifile start(&f), end; while(start != end) { data.push_back(*start); start++; } The problem is that some of the bytes get read, others don't. For example, lets say there are 9 bytes in the file numbered 1-9: |1|2|3|4|5|6|7|8|9| When I run the above code, only one element is pushed back into data, which contains 4 bytes only from the original data in f: [0|0|0|0|4|3|2|1] --> only element in [data] What am I doing wrong? This is my first time to use basic_filebuf directly, though I know how to use filebuf.
A basic_filebuf deals with an "internal" char type and an "external" one. The "external" one is the contents of the file, and is always bytes. The "internal" one is the template parameter, and is the one used in its interface with the program. To convert between the two, basic_filebuf uses the codecvt facet of its locale. So if you want it to pass through the bytes you give it unchanged, you have two options: use a "degenerate" codecvt that only casts between the "internal" and "external" encodings instead of trying to perform a conversion. use a basic_filebuf<char>, make sure to use the "classic" locale, and do the cast to your element type yourself
1,423,566
1,423,681
Template Type Conversion
I'm building a matrix template. The operators, functions and everything else work fine - except when I try to convert a double type matrix to an int type matrix (or vice versa). The = operator cannot be defined external to the class, so it's not possible to override it for two differently parameterized basic_Matrix2D types from outside. I know I can write in-class = operators to convert from another type, but in this case there will be two = operators with the same parameters: when using double as the template parameter, converting from double will be the same as converting from the template parameter. The class definition is as follows; the code can be accessed from SourceForge template <class _T> class basic_Matrix2D { } There is also another problem I remembered about templates: converting to a template type pointer works while converting to a template type does not. This might be compiler specific. observe: operator _T() { return something; } operator _T*() { return somethingelse; } Thanks in advance, Cem
Your question is very unclear, but there's nothing wrong with making the operator= something like this: // incomplete, but you get the idea template<class U> basic_Matrix2D<T> & operator=(const basic_Matrix2D<U> &x) { rows = x.rows; cols = x.cols; delete [] data; data = new T[rows * cols]; for (size_t i = 0; i < rows * cols; ++i) data[i] = x.data[i]; return *this; } This will allow you to assign from any matrix where the expression T t; t = U(); is well formed. If it isn't, the assignment will fail to compile. You can also include a simple basic_Matrix2D<T> & operator=(const basic_Matrix2D<T> &); assignment operator as well - maybe you can get some additional efficiency or something out of it.
1,423,696
1,423,708
How to initialize a const field in constructor?
Imagine I have a C++ class Foo and a class Bar which has to be created with a constructor in which a Foo pointer is passed, and this pointer is meant to remain immutable in the Bar instance lifecycle. What is the correct way of doing it? In fact, I thought I could write like the code below but it does not compile.. class Foo; class Bar { public: Foo * const foo; Bar(Foo* foo) { this->foo = foo; } }; class Foo { public: int a; }; Any suggestion is welcome.
You need to do it in an initializer list: Bar(Foo* _foo) : foo(_foo) { } (Note that I renamed the incoming variable to avoid confusion.)
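For completeness, a minimal compilable version of the pattern might look like this; the explicit keyword and the member bodies are my additions.

```cpp
// The const pointer member can only be bound in the constructor's
// initializer list; assignment in the constructor body would not compile.
class Foo { public: int a; };

class Bar {
public:
    Foo* const foo;                    // fixed for the lifetime of the Bar
    explicit Bar(Foo* f) : foo(f) {}
};
```

Note that only the pointer itself is const; the Foo it points at can still be modified through it.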
1,423,739
1,471,891
Waiting for a DBus service to be available in Qt
With a Qt DBus proxy built on QDBusAbstractInterface (via qdbusxml2cpp), what's the best way to handle the service/object you want to interface to not being available when you start? Note: I'm not interested in simply knowing it (you can use BlahService.isValid() to find that out); I want to be able to know if it's valid, and know when it becomes valid so I can change state (and broadcast that state change with a signal), and on that state change do other stuff. Conversely, I want to know when it's no longer valid for similar reasons. Without tracking the state of the service: #define CONNECT_DBUS_SIG(x,y) connect(blah,SIGNAL(x),this,SLOT(y)) // FIX - should watch for service, and also handle it going away and // coming back blah = new BlahService("com.xyzzy.BlahService", "/com/xyzzy/BlahService", QDBusConnection::sessionBus(), this); if (!blah) return 0; if (blah.isValid()) { CONNECT_DBUS_SIG(foo(),Event_foo()); } else { // Since we aren't watching for registration, what can we do but exit? } Probably we need to watch for NameOwnerChanged on the DBus connection object - unless Qt's dbus code does this for us - and then when we get that signal change state, and if needed connect or disconnect the signals from the object. All the examples I find either ignore the issue or simply exit if the server object doesn't exist, and don't deal with it going away. The Car/Controller Qt example at least notices if the server goes away and prints "Disconnected" if isValid() becomes false during use, but it's polling isValid(). Added: Note that QDBusAbstractInterface registers for changes of ownership of the server (NameOwnerChanged), and updates isValid() when changes occur. So I suspect you can connect to that serviceOwnerChanged signal directly to find out about changes to ownership and use that as an indicator to try again - though you won't be able to trust isValid since it may be updated before or after you get signaled. 
Alternatively (ugly) you can set up a timer and poll for isValid().
Ok, since no one answered, I've found the answer in the meantime: You want to watch NameOwnerChanged: // subscribe to notifications about when a service is registered/unregistered connect(QDBusConnection::sessionBus().interface(), SIGNAL(serviceOwnerChanged(QString,QString,QString)), this,SLOT(serviceOwnerChanged(QString,QString,QString))); and void VcsApplicationController::serviceOwnerChanged(const QString &name, const QString &oldOwner, const QString &newOwner) { Q_UNUSED(oldOwner); if (name == "com.foo.bar.FooService") { qLog(Whatever) << "serviceOwnerChanged" << name << oldOwner << newOwner; if (!newOwner.isEmpty()) { // New owner in town emit Initialized(); // or if you control the interface and both sides, you can wait for // a "Ready()" signal before declaring FooService ready for business. } else { // indicate we've lost connection, etc emit Uninitialized(); } } } Note that there may be race conditions when calling methods on FooService from within serviceOwnerChanged - I'm not sure yet if they're a side-effect of the binding (dbus-c++ in my test case), or inherent in the design of dbus (possibly - no one on the dbus mailing list will answer the question). If there is a real race condition, you can wait on a Ready()/whatever signal, if you control the DBus API. If you don't control the other end, you can add a very short delay, or you can also watch AddMatch() to make sure the new owner has added a match on the name as well.
1,423,786
1,423,808
What is the difference between declaring and defining a structure?
struct { char a; int b; } x; Why would one define a struct like that instead of just declaring it as: struct x { char a; int b; };
In the first case, only variable x can be of that type -- strictly, if you defined another structure y with the same body, it would be a different type. So you use it when you won't ever need any other variables of the same type. Note that you cannot cast things to that type, declare or define functions with prototypes that use that type, or even dynamically allocate variables of that type - there is no name for the type to use. In the second case, you do not define a variable - you just define a type struct x, which can then be used to create as many variables as you need of that type. This is the more normal case, of course. It is often combined with, or associated with, a typedef: typedef struct x { char a; int b; } x; Usually, you'd use a more informative tag name and type name. It is perfectly legal and safe to use the same name for the structure tag (the first 'x') and the typedef name (the second 'x'). To a first approximation, C++ automatically 'creates a typedef' for you when you use the plain 'struct x { ... };' notation (whether or not you define variables at the same time), though that approximation has caveats.
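A short sketch of the practical difference; the identifier names are made up for illustration.

```cpp
// The tagged form defines a reusable type; the anonymous form defines
// exactly one variable of a type that cannot be named again.
struct point { char a; int b; };      // type only, no variable

struct point p1, p2;                  // as many variables as you like

struct { char a; int b; } only_one;   // one variable, unnameable type
```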
1,424,177
1,424,346
Using GCC through Xcode to compile basic programs
So, I'm a brand new CS student, on a Mac, and I'm learning C++ for one of my classes. And I have a dumb question about how to compile my super basic C++ program. I installed Xcode, and I'm looking through the documentation to try and figure out how to use it (and I highly suspect it's extremely overpowered for what I'm doing right now) and eventually end up going into Terminal and going "gcc [filename]". And I've got a screen full of text that starts with "Undefined Symbols", and goes on about trying to reference things, so I'm wondering if I didn't hook up something somewhere, especially as when I'm actually in Xcode with a C++ program open, most of the menu items are greyed out. So. In really really basic terms. What did I miss doing, and how do I fix it? Is there a basic guide to Xcode? Most of the documentation is aimed at real developers, and I'm totally missing a lot of what is being assumed.
If XCode is installed then everything is set up correctly. If you typed gcc on the command line then you invoked the 'C' compiler (not the C++ compiler). Usually this does not matter as GCC compensates by looking at the file extension. But what does matter is that it does not invoke the linker with the correct C++ flags. What you should do (from the command line) is use g++ g++ <fileName>.cpp By default the output file is a.out and placed in the same directory. g++ has a flag to specify a different output name -o g++ -o <outputName> <fileName>.cpp
1,424,239
1,425,111
Static array of const pointers to overloaded, templatized member function
Static array initialization... with const pointers... to overloaded, templatized member functions. Is there a way it can be done (C++03 standard code)? I mean, if I have the template class template <class T1, class U1, typename R1> class Some_class { public: typedef T1 T; typedef U1 U; typedef R1 R; R operator()(T& v) { /* dereference pointer to a derived class (U), overloaded member function (U::f) */ }; private: static R (U::* const pmfi[/* # of overloaded functions in U */])(T&); }; Used as template <class BASE, typename RET> class Other_class : public Some_class<BASE, Other_class<BASE, RET>, RET> { RET f(/* type derived from BASE */) {} RET f(/* other type derived from BASE */) {} RET f(/* another type derived from BASE */) {} ... }; Question: how can I initialize the array pmfi (no typedefs, please)? Notes: 1. As a static array MUST be initialized at file scope, template parameters and pmfi must be fully qualified (the only way I know to access template parameters outside a class scope is to typedef them...). 2. So far so good. No problems with the compiler (Comeau 4.3.10.1). Problems start popping up when I try to fill in the initializer list { ... }. 2.1. The compiler complains the template argument list is missing, no matter what I do. 2.2. I have no idea how to select the correct overloaded U::f function. BTW, this is a kind of "jump table" generator from a boost.preprocessor list. The code I am trying to implement is of course much more complex than this one, but this is its essence. Thanks for any help
To use BOOST_PP_ENUM in the way that you've shown, you would need a macro that takes a 'number' and yields an expression that is the address of an appropriate member of the appropriate class. I don't see a good way to do this without an explicit list unless the desired functions all have manufactured names (e.g. memfun1, memfun2, etc.). Except in that case, it's going to be easier to list the function address expressions explicitly than to use BOOST_PP_ENUM. You are using identifiers in this array that are the same as the template parameters in Some_class. R (U::* const pmfi[])(T&) = { /* ... */ } Is this really supposed to be the templated member of Some_class? template< class T, class U, class R > R (U::* const Some_class<T, U, R>::pmfi[])(T&) = { /* ... */ } If so, is the same instantiation going to work with all combinations of types that you are going to use the template Some_class with? If so, you have a very constrained set of classes; perhaps you can do away with the template. If not, you are going to have to specialize Some_class for every combination of template parameters, in which case the template is not gaining you very much. Edit, post edit: If I've understood you correctly then you can't do what you've suggested, because the array of pointers must be of exactly the right signature. Reducing it to a simple function pointer example, you can't do this: void f(Derived&); void (*p)(Base&) = &f; otherwise, it would subvert type safety: OtherDerived od; // derived from Base, but not from Derived // I've managed to pass something that isn't a Derived reference to f // without an explicit (and dangerous) cast (*p)(od); In your array of function pointers, the initializers must all be to functions of the right signature.
1,424,261
1,424,314
Conditional operator can't resolve overloaded member function pointers
I'm having a minor issue dealing with pointers to overloaded member functions in C++. The following code compiles fine: class Foo { public: float X() const; void X(const float x); float Y() const; void Y(const float y); }; void (Foo::*func)(const float) = &Foo::X; But this doesn't compile (the compiler complains that the overloads are ambiguous): void (Foo::*func)(const float) = (someCondition ? &Foo::X : &Foo::Y); Presumably this is something to do with the compiler sorting out the return value of the conditional operator separately from the function pointer type? I can work around it, but I'm interested to know how the spec says all this is supposed to work since it seems a little unintuitive and if there's some way to work around it without falling back to 5 lines of if-then-else. I'm using MSVC++, if that makes any difference. Thanks!
From section 13.4/1 ("Address of overloaded function," [over.over]): A use of an overloaded function name without arguments is resolved in certain contexts to a function, a pointer to function or pointer to member function for a specific function from the overload set. A function template name is considered to name a set of overloaded functions in such contexts. The function selected is the one whose type matches the target type required in the context. The target can be an object or reference being initialized (8.5, 8.5.3), the left side of an assignment (5.17), a parameter of a function (5.2.2), a parameter of a user-defined operator (13.5), the return value of a function, operator function, or conversion (6.6.3), or an explicit type conversion (5.2.3, 5.2.9, 5.4). The overload function name can be preceded by the & operator. An overloaded function name shall not be used without arguments in contexts other than those listed. [Note: any redundant set of parentheses surrounding the overloaded function name is ignored (5.1). ] The target you were hoping would be selected from the above list was the first one, an object being initialized. But there's a conditional operator in the way, and conditional operators determine their types from their operands, not from any target type. Since explicit type conversions are included in the list of targets, you can type-cast each member-pointer expression in the conditional expression separately. I'd make a typedef first: typedef void (Foo::* float_func)(const float); float_func func = (someCondition ? float_func(&Foo::X) : float_func(&Foo::Y));
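Putting the typedef-and-cast workaround together, a self-contained sketch might look like this. The data members and function bodies are invented so the example can actually run; the question's class only declared the interface.

```cpp
// The question's class, given just enough state that the setters can be
// observed; the bodies are illustrative assumptions.
class Foo {
public:
    float x_, y_;
    Foo() : x_(0), y_(0) {}
    float X() const { return x_; }
    void  X(const float x) { x_ = x; }
    float Y() const { return y_; }
    void  Y(const float y) { y_ = y; }
};

typedef void (Foo::* float_func)(const float);

// Each cast is its own target context, so overload resolution succeeds
// on each arm before the conditional operator unifies the types.
float_func pick(bool someCondition) {
    return someCondition ? float_func(&Foo::X) : float_func(&Foo::Y);
}
```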
1,424,471
1,424,516
C++ Timer not working?
I'm trying to make a timer in c++. I'm new to c++. I found this code snippet UINT_PTR SetTimer(HWND hWnd, UINT_PTR nIDEvent, UINT uElapse, TIMERPROC lpTimerFunc); I put it in my global variables and it tells me Error 1 error C2373: 'SetTimer' : redefinition; different type modifiers I'm not sure what this means. Is there a more proper way to define a timer? I'm not using mfc / afx Thanks
You should call it like this: void CALLBACK TimerProc( HWND hwnd, UINT uMsg, UINT idEvent, DWORD dwTime ) { //do something } SetTimer(NULL, NULL, 1000, TimerProc); This would set a timer for 1 second and will call TimerProc when it expires. Read TimerProc MSDN here: http://msdn.microsoft.com/en-us/library/ms644907%28VS.85%29.aspx
1,424,510
1,424,535
My attempt at value initialization is interpreted as a function declaration, and why doesn't A a(()); solve it?
Among the many things Stack Overflow has taught me is what is known as the "most vexing parse", which is classically demonstrated with a line such as A a(B()); //declares a function While this, for most, intuitively appears to be the declaration of an object a of type A, taking a temporary B object as a constructor parameter, it's actually a declaration of a function a returning an A, taking a pointer to a function which returns B and itself takes no parameters. Similarly the line A a(); //declares a function also falls under the same category, since instead of an object, it declares a function. Now, in the first case, the usual workaround for this issue is to add an extra set of brackets/parenthesis around the B(), as the compiler will then interpret it as the declaration of an object A a((B())); //declares an object However, in the second case, doing the same leads to a compile error A a(()); //compile error My question is, why? Yes I'm very well aware that the correct 'workaround' is to change it to A a;, but I'm curious to know what it is that the extra () does for the compiler in the first example which then doesn't work when reapplying it in the second example. Is the A a((B())); workaround a specific exception written into the standard?
There is no enlightened answer; it's just not defined as valid syntax by the C++ language. So it is so, by definition of the language. If you do have an expression within the parentheses, then it is valid. For example: ((0));//compiles Even more simply put: because (x) is a valid C++ expression, while () is not. To learn more about how languages are defined and how compilers work, you should learn about formal language theory, or more specifically context-free grammars (CFGs) and related material like finite state machines. If you are interested in that, though, the Wikipedia pages won't be enough; you'll have to get a book.
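To see the rule in action, here are the two variants that do compile, sketched as runnable code; the v member is added purely so the two constructors can be told apart.

```cpp
// B and A are minimal stand-ins for the question's classes.
struct B {};

struct A {
    int v;
    A() : v(1) {}
    A(B) : v(2) {}
};

// Extra parentheses around B() force it to be parsed as an expression,
// so this line is an object definition, not a function declaration.
A make_with_temp() {
    A a((B()));
    return a;
}

// For the zero-argument case there is nothing to wrap; the plain
// declaration without parentheses default-constructs the object.
A make_default() {
    A a;
    return a;
}
```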
1,424,606
1,424,622
Lost Focus and GotFocus in c++
How do you add code to these events in native C++? I couldn't find WM_LOSTFOCUS or WM_GOTFOCUS; I only found WM_SETFOCUS. I need code to run when my window loses focus, and when it regains it. Thanks.
JUST BEFORE your window loses focus it will be sent: WM_KILLFOCUS AFTER your window gains focus, it will be sent: WM_SETFOCUS Sending a WM_SETFOCUS message does not set the focus. You need to call SetFocus for that.
1,424,779
1,424,807
is there any good library for printing preview in MFC?
I need to print records in a grid view, and I need to preview them before printing. Is there a solid library for print preview? With it, I'd like to be able to change the position and layout of the data to print. More importantly, I need to change the data's layout - how can I do that?
MFC itself supports Print Preview, there shouldn't be a need for an additional library.
1,424,934
1,425,031
Question About CFile Seek
I am using the MFC CFile Seek function, and I have a question about seeking beyond the file length. CFile cfile; BOOL bResult = cfile.Open( L"C:\\2.TXT", CFile::modeReadWrite | CFile::modeCreate | CFile::modeNoTruncate | CFile::typeBinary | CFile::shareDenyNone); cfile.Seek(10000, CFile::End); cfile.Close(); MSDN: Remarks The Seek function permits random access to a file's contents by moving the pointer a specified amount, absolutely or relatively. No data is actually read during the seek. If the requested position is larger than the size of the file, the file length will be extended to that position, and no exception will be thrown. According to MSDN, the file length will be extended. However, after cfile.Close(), the file remains the same size. Why? Many thanks!
I think MSDN misstates the matter slightly. When you call Seek, only the file pointer is adjusted; the actual file on disk doesn't change yet. If you then call Write (or SetLength) at that position, the file will really be extended to the expected length - possibly as a sparse file on NTFS, or as an ordinary longer file on FAT. Since your code seeks but never writes, closing the file leaves it at its original size.
1,424,948
1,424,964
C++ Console Progress Indicator
What would be an easy way of implementing a console-based progress indicator for a task that's being executed, but I can't anticipate how much time it would take? I used to do this back when I coded in Clipper, and it was only a matter of iterating through the chars '/', '-', '\', '|' and positioning them in the same place. Any way / links / libs for doing that (or something similar) in C++? The target for this is *nix environments. Edits: changed the title to be more coherent and generic; added target environment.
A very simple way to do it is to print out a string followed by a '\r' character. That is carriage return by itself and on most consoles, it returns the cursor to the beginning of the line without moving down. That allows you to overwrite the current line. If you are writing to stdout or cout or clog remember to fflush or std::flush the stream to make it output the line immediately. If you are writing to stderr or cerr then the stream is unbuffered and all output is immediate (and inefficient). A more complicated way to do it is to get into using a screen drawing library like curses. The Windows consoles have some other ways of setting them for direct screen writing but I don't know what they are.
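A minimal sketch of the carriage-return trick; the glyph-picking helper is split out only so it can be checked independently.

```cpp
#include <cstdio>

// Pick the next spinner character: '/', '-', '\', '|' cycle, as in Clipper.
char spinner_glyph(int frame) {
    static const char glyphs[] = "/-\\|";
    return glyphs[frame % 4];
}

// Print one frame: the glyph, then '\r' so the next frame overwrites it.
// Flushing matters because stdout is usually line-buffered on a terminal.
void spin_once(int frame) {
    std::printf("%c\r", spinner_glyph(frame));
    std::fflush(stdout);
}
```

Calling spin_once(i) with an increasing counter inside your work loop produces the rotating indicator on one console line.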
1,425,227
1,425,267
how to create files named with current time?
I want to create a series of files under a "log" directory, with every file named based on the execution time. And in each of these files, I want to store some log info for my program, like the prototype of the function that ran, etc. Usually I use a hard-coded fopen("log/***","a"), which doesn't fit this purpose. I've written a timestamp function: char* timeStamp(char* txt){ char* rc; char timestamp[16]; time_t rawtime = time(0); tm *now = localtime(&rawtime); if(rawtime != -1) { strftime(timestamp,16,"%y%m%d_%H%M%S",now); rc = strcat(txt,timestamp); } return(rc); } But I don't know what to do next. Please help me with this!
Declare a char array big enough to hold 16 + "log/" (so 20 characters total) and initialize it to "log/", then use strcat() or something related to add the time string returned by your function to the end of your array. And there you go! Note how the string addition works: Your char array is 16 characters, which means you can put in 15 characters plus a nul byte. It's important not to forget that. If you need a 16 character string, you need to declare it as char timestamp[17] instead. Note that "log/" is a 4 character string, so it takes up 5 characters (one for the nul byte at the end), but strcat() will overwrite starting at the nul byte at the end, so you'll end up with the right number. Don't count the nul terminator twice, but more importantly, don't forget about it. Debugging that is a much bigger problem. EDIT: While we're at it, I misread your code. I thought it just returned a string with the time, but it appears that it adds the time to a string passed in. This is probably better than what I thought you were doing. However, if you wanted, you could just make the function do all the work - it puts "log/" in the string before it puts the timestamp. It's not that hard.
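A sketch of building the whole name in one buffer, as described above. The directory prefix, buffer size, and format string are assumptions taken from the question's timeStamp() function.

```cpp
#include <cstring>
#include <ctime>

// Build "log/<yymmdd_HHMMSS>" in one buffer: 4 bytes for "log/", up to
// 15 for the stamp, 1 for the terminating NUL; 32 leaves headroom.
void make_log_name(char out[32]) {
    std::time_t raw = std::time(0);
    std::tm *now = std::localtime(&raw);
    std::strcpy(out, "log/");
    std::strftime(out + 4, 16, "%y%m%d_%H%M%S", now);
}
```

The resulting string can be passed straight to fopen(name, "a").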
1,425,256
1,425,308
How do I read a java object in C++?
I am implementing a log server in C++; that accepts log messages from a Java program (via log4j socket appender). How do I read these java logging objects in C++?
You should configure the log4j appender to send XML format messages. Then it is simply a matter of reading XML in C++.
1,425,349
1,425,683
How do I find an element position in std::vector?
I need to find an element position in an std::vector to use it for referencing an element in another vector: int find( const vector<type>& where, int searchParameter ) { for( int i = 0; i < where.size(); i++ ) { if( conditionMet( where[i], searchParameter ) ) { return i; } } return -1; } // caller: const int position = find( firstVector, parameter ); if( position != -1 ) { doAction( secondVector[position] ); } however vector::size() returns size_t which corresponds to an unsigned integral type that can't directly store -1. How do I signal that the element is not found in a vector when using size_t instead of int as an index?
You could use std::numeric_limits<size_t>::max() for elements that were not found. It is a valid size_t value, but it is impossible for a container to have an element at that index: even if a std::vector had a size equal to std::numeric_limits<size_t>::max(), the maximum allowed index would be std::numeric_limits<size_t>::max()-1, since elements are counted from 0.
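A sketch of the suggested sentinel in use; the names are illustrative, and std::string::npos is defined in essentially the same way.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// An index that no element can ever have doubles as "not found".
const std::size_t not_found = std::numeric_limits<std::size_t>::max();

std::size_t find_index(const std::vector<int>& where, int value) {
    for (std::size_t i = 0; i < where.size(); ++i)
        if (where[i] == value)
            return i;
    return not_found;
}
```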
1,425,648
1,425,748
ListView Movement Problem
I am using a ListView with the view mode set to LargeIcon. The problem I am facing is with selection: using the arrow keys I can navigate only within the first row. Suppose I have 3 images in a row; if I press the right arrow key, the selection moves to the end of the row and then comes back to the first image of the same row. The expected behavior is that it should go on to the first icon of the next row. As a remedy I have handled the KeyDown event, but if I focus the next item manually it doesn't land on that item; instead it goes to the one after it, so a two-item jump takes place. I don't know how to avoid this. Do I need to set some style, or is this just how the ListView behaves? Please help me with this. Thanks
If you are only handling keydown then it may be that the current keypress is still being processed (e.g. the move right), but you have already moved the focus to the item on the next row, and so when the key up happens it moves the focus on to the 2nd item. Try setting the Handled property to true of the KeyEventArgs object that is passed to your keydown method, as well as moving the focus.
1,425,695
1,426,834
Safe way to initialize a derived class
I have a base class: class CBase { public: virtual void SomeChecks() {} CBase() { /* Do some checks */ SomeChecks(); /* Do some more checks */ } }; and a derived class: class CDerived : public CBase { public: virtual void SomeChecks() { /* Do some other checks */ } CDerived() : CBase() {} }; This construction seems to be a bit weird but in my case this is required, because CBase does some checks and CDerived can mix some checks in between them. You can see it as a way to "hook" functions in the constructor. The problem with this construction is that while constructing CDerived first a CBase is constructed and there is no awareness of CDerived (so overloaded function SomeChecks() is not called). I could do something like this: class CBase { public: void Init() { /* Do some checks */ SomeChecks(); /* Do some more checks */ } virtual void SomeChecks() {} CBase(bool bDoInit=true) { if (bDoInit) { Init(); } } }; class CDerived : public CBase { public: virtual void SomeChecks() { /* Do some other checks */ } CDerived() : CBase(false) { Init() } }; This isn't really safe, because I want the constructor with the false parameter be protected, so only derived classes can call it. But then I'll have to create a second constructor (that is protected) and make it take other parameters (probably unused because is constructor is called when Init() does not have to be called). So I'm quite stuck here. EDIT Actually I want something like this: class CBase { protected: void Init() { /* Implementation of Init ... */ } CBase() { /* Don't do the Init(), it is called by derived class */ } public: CBase() { Init(); } // Called when an object of CBase is created }; class CDerived : public CBase { public: CDerived() : CBase() { Init(); } }; It seems to me it is impossible to have 2 constructors with the same arguments being protected and public?
Calling a virtual method from a constructor or destructor doesn't do what you want here: the call dispatches to the version for the class whose constructor is currently running, not to the most derived override. The reasoning behind this is that while the constructor has not finished, the most derived part of the object has not been initialized yet, so a virtual call into it would risk using an invalid object. What you are looking for is a wrapper in the style of the PIMPL idiom: class CBase { ... }; class CDerived: public CBase { ... } template<typename T> class PIMPL { public: PIMPL() :m_data(new T) { // T is fully constructed now; do the checks. m_data->SomeChecks(); } // Add appropriate copy/assignment/delete as required. private: // Use appropriate smart pointer. std::auto_ptr<T> m_data; }; int main() { PIMPL<CDerived> data; }
1,425,905
1,425,910
C++: Performance impact of BIG classes (with a lot of code)
I wonder if and how writing "almighty" classes in c++ actually impacts performance. If I have for example, a class Point, with only uint x; uint y; as data, and have defined virtually everything that math can do to a point as methods. Some of those methods might be huge. (copy-)constructors do nothing more than initializing the two data members. class Point { int mx; int my; Point(int x, int y):mx(x),my(y){}; Point(const Point& other):mx(other.x),my(other.y){}; // .... HUGE number of methods.... }; Now. I load a big image and create a Point for every pixel, stuff em into a vector and use them. (say, all methods get called once) This is only meant as a stupid example! Would it be any slower than the same class without the methods but with a lot of utility functions? I am not talking about virtual functions in any way! My Motivation for this: I often find myself writing nice and relatively powerful classes, but when I have to initialize/use a ton of them like in the example above, I get nervous. I think I shouldn't. what I think I know is: Methods exist only once in memory. (optimizations aside) Allocation only takes place for the data members, and they are the only thing copied. So it shouldn't matter. Am I missing something?
You are right: methods exist only once in memory; they're just like normal functions with an extra hidden this parameter. And of course, only data members are taken into account for allocation. (Inheritance with virtual functions would add a vptr or two to the object size, but that's not a big deal, and you ruled virtual functions out.)
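A quick way to convince yourself: compare the size of a bare data struct with a method-laden one. The classes below are illustrative stand-ins for the Point in the question.

```cpp
// Methods live once in the code segment; only data (and a vptr, if any
// function were virtual) contributes to the per-object size.
struct PlainPoint {
    int mx, my;
};

struct BusyPoint {
    int mx, my;
    BusyPoint(int x, int y) : mx(x), my(y) {}
    int  sum()  const { return mx + my; }
    int  diff() const { return mx - my; }
    void flip()       { int t = mx; mx = my; my = t; }
    // ...imagine a huge number more of these non-virtual methods...
};
```

sizeof reports the same value for both, so a vector of a million BusyPoints costs exactly what a vector of a million PlainPoints does.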