2,895,377
2,908,849
General advice about scaling on QGraphicsView/QGraphicsScene
In my project I am using the QGraphicsView/QGraphicsScene framework. My scene will contain regions of 2D graphics. The region count is limited (let's say 20). Users can choose to display one or more regions: if the user chooses one region, I show one region on the scene; if the user chooses n regions, I show n regions on the scene. I need a scaling logic to fit n regions on the same scene. How can I achieve this?
QGraphicsView::fitInView() should do what you want (note that it is a member of the view, not the scene):

QRectF bounding;
foreach (QRectF r, selectedRegionRects)
    bounding |= r;
view->fitInView(bounding, Qt::KeepAspectRatio);
2,895,529
2,895,558
How to read input until the user enters ^X
I am creating an interpreter for my esolang, and I need the user to enter some text which will then be interpreted as an INTERCAL program. I want the user to enter text, which may contain any character including newlines, until the user presses ^X (Ctrl-X), like this: Enter your code followed by ^X: Bla Blablabla Bla^X Thank you for entering your code (lines 2, 3 and 4 were entered by the user). Can anyone explain to me how I can read input, including newlines, until the user enters ^X? Thanks
^X has ASCII code 24; try checking for that. http://www.unix-manuals.com/refs/misc/ascii-table.html
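To make that concrete, here's a minimal sketch (the function name and the istream-based design are my own, not from the answer) that reads characters, newlines included, until it sees ASCII 24:

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <cassert>

// Read characters (including newlines) until the ^X control character
// (ASCII 24, a.k.a. CAN) or end-of-stream is reached.
std::string read_until_ctrl_x(std::istream& in) {
    std::string code;
    char c;
    while (in.get(c) && c != 24)  // 24 == Ctrl-X
        code += c;
    return code;
}
```

In the interpreter you would pass std::cin to this function. Note that on a line-buffered terminal, input is only delivered after the user presses Enter, so the ^X is detected once the line containing it arrives.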
2,895,614
2,895,676
Do I need to use locking with integers in c++ threads
If I am accessing a single integer type (e.g. long, int, bool, etc.) from multiple threads, do I need to use a synchronisation mechanism such as a mutex to lock it? My understanding is that, as atomic types, I don't need to lock access, but I see a lot of code out there that does use locking. Profiling such code shows that there is a significant performance hit for using locks, so I'd rather not. So if the item I'm accessing is a bus-width integer (e.g. 4 bytes on a 32-bit processor), do I need to lock access to it when it is used across multiple threads? Put another way, if thread A is writing to integer variable X at the same time as thread B is reading from it, is it possible that thread B could end up with a few bytes of the previous value mixed in with a few bytes of the value being written? Is this architecture-dependent, e.g. OK for 4-byte integers on 32-bit systems but unsafe for 8-byte integers on 64-bit systems? Edit: Just saw this related post, which helps a fair bit.
You are never locking a value - you are locking an operation ON a value. C and C++ (before C11/C++11) do not explicitly mention threads or atomic operations - so operations that look like they could or should be atomic are not guaranteed by the language specification to be atomic. It would admittedly be a pretty deviant compiler that managed a non-atomic read on an int: if you have an operation that only reads a value, there's probably no need to guard it. However, it might be non-atomic if it spans a machine word boundary. Operations as simple as m_counter++ involve a fetch, an increment, and a store - a race condition: another thread can change the value after the fetch but before the store - and hence need to be protected by a mutex, OR by your compiler's support for interlocked operations. MSVC has functions like _InterlockedIncrement() that will safely increment a memory location as long as all other writers similarly use interlocked APIs to update that memory location - which is orders of magnitude more lightweight than entering even a critical section. GCC has intrinsic functions like __sync_add_and_fetch which can also be used to perform interlocked operations on machine-word values.
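For what it's worth, since C++11 the standard answers this directly: std::atomic gives portable atomic access (lock-free where the hardware supports it). A sketch of the counter case discussed above:

```cpp
#include <atomic>
#include <thread>
#include <cassert>

// Since C++11, std::atomic makes single-variable updates safe across
// threads without a mutex: fetch_add is an indivisible
// read-modify-write, so no increments are lost and reads never tear.
std::atomic<long> counter{0};

void work() {
    for (int i = 0; i < 100000; ++i)
        counter.fetch_add(1, std::memory_order_relaxed);
}
// Running work() on two threads and joining them leaves counter at
// exactly 200000 -- something a plain long cannot guarantee.
```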
2,895,669
15,982,337
How to use OPENGL API in wince 6.0 r3 platform builder
How to use the following interfaces in WinCE 6.0 R3 for implementing hardware acceleration for graphics in Silverlight: IRenderer, ICustomSurface, ICustomGraphicsDevice
You can find here a guide on how to implement hardware acceleration for Silverlight in WEC7: http://msdn.microsoft.com/en-us/library/jj919286.aspx (I recommend using Windows Embedded Compact 7 for Silverlight if possible; Microsoft made quite a few improvements compared to WinCE 6.0 R3.) Alternatively, you can use a module which provides the HW acceleration out of the box. Here is a video which shows the difference with HW acceleration enabled/disabled: http://developer.toradex.com/knowledge-base/microsoft-silverlight If you mainly want a fancy 3D user interface, also take a look at the Nvidia UI Composer, which is available for WinCE 6 too.
2,895,964
2,895,995
Ofstream writes empty file on linux
I have a program which writes its output using ofstream. Everything works perfectly fine on Windows when compiled with Visual Studio, but it only writes an empty file on Linux when compiled with GCC.

ofstream out(path_out_cstr, ofstream::out);
if (out.bad()) {
    cout << "Could not write the file" << flush;
}
else {
    cout << "writing";
    out << "Content" << endl;
    if (out.fail())
        cout << "writing failed";
    out.flush();
    out.close();
}

The directory being written into has 0777 permissions. The weird thing is: nothing is written, but no error is reported. The gcc --version is: (Gentoo 4.3.4 p1.0, pie-10.1.5) 4.3.4. I'm aware the code SHOULD work, so I'm looking for suggestions as to what could be wrong rather than a direct code fix. EDIT: fwrite seems to fail in exactly the same fashion (nothing is written, no error is reported). EDIT: I'm executing both GCC and the program over SSH in my university directory, if that has any significance. I have sufficient permissions to execute and write files (ls . > out.txt works just fine); it's only my program that has trouble. Thanks for help
Works for me on Ubuntu with g++ 4.1. Have you tried running strace ./test to see whether there are write() calls on the file?
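One thing worth checking in the asker's code itself: a failed open sets failbit, not badbit, so the out.bad() test can report success even when the file was never opened. A small sketch (the path is a made-up example assumed not to exist):

```cpp
#include <fstream>
#include <cassert>

// A failed open sets failbit, not badbit -- so a check like out.bad()
// (as in the question) can claim success even though the file was
// never opened. Prefer testing out.fail() or simply !out.
bool open_sets_failbit(const char* path) {
    std::ofstream out(path);
    return !out.is_open() && out.fail();
}
```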
2,896,238
2,896,267
Does dereferencing a deleted pointer always result in an Access Violation?
I have a very simple piece of C++ code here:

char *s = new char[100];
strcpy(s, "HELLO");
delete [] s;
int n = strlen(s);

If I run this code from Visual C++ 2008 by pressing F5 (Start Debugging), it always results in a crash (Access Violation). However, starting the executable outside the IDE, or using the IDE's Ctrl+F5 (Start Without Debugging), doesn't result in any crash. What could be the difference? I also want to know if it's possible to reliably reproduce the Access Violation caused by accessing deleted memory. Is this kind of crash rare in real life?
Accessing memory through a deleted pointer is undefined behavior. You can't expect any reliable/repeatable behavior. Most likely it "works" in the one case because the string is still "sitting there" in the now-available memory, but you cannot rely on that. In debug builds, VS fills freed memory with debug values to help force crashes, to help find these errors.
2,896,286
2,896,310
What's the difference between the dot operator and the scope resolution operator
I just wanted to know the difference between . operator and :: operator?
The former (dot, .) is used to access members of an object; the latter (double colon, ::) is used to access members of a namespace or a class. Consider the following setup:

namespace ns {
    struct type {
        int var;
    };
}

In this case, to refer to the structure, which is a member of a namespace, you use ::. To access the variable in an object of type type, you use .:

ns::type obj;
obj.var = 1;
2,896,600
2,896,627
How to replace all occurrences of a character in string?
What is an effective way to replace all occurrences of a character with another character in a std::string?
std::string doesn't contain such a function, but you can use the stand-alone std::replace algorithm from the <algorithm> header:

#include <algorithm>
#include <string>

void some_func() {
    std::string s = "example string";
    std::replace(s.begin(), s.end(), 'x', 'y'); // replace all 'x' with 'y'
}
2,896,654
4,533,524
Is there open-source java/c/c++ framework for pipes-and-filters paradigm?
I'm interested in an open-source Java/C/C++ framework for the pipes-and-filters paradigm (as described in that book). Can you recommend some? EDIT: I'm looking for as "pure" (or lightweight) a framework as possible: one which encourages programming in the "pipes-and-filters" style, relying only on the language itself rather than on standards such as XML or JMS.
I just discovered Spring integration framework. Looks like the sample of what I was asking for.
2,896,689
2,896,975
dereferencing the null pointer
int* p = 0;
int* q = &*p;

Is this undefined behavior or not? I browsed some related questions, but this specific aspect didn't show up.
The answer to this question is: it depends which language standard you are following :-). In C90 and C++, this is not valid because you perform indirection on the null pointer (by doing *p), and doing so results in undefined behavior. However, in C99, this is valid, well-formed, and well-defined. In C99, if the operand of the unary & was obtained as the result of applying the unary * or by performing subscripting ([]), then neither the & nor the * or [] is applied. For example:

int* p = 0;
int* q = &*p;   // In C99, this is equivalent to int* q = p;

Likewise,

int* p = 0;
int* q = &p[0]; // In C99, this is equivalent to int* q = p + 0;

From C99 §6.5.3.2/3: If the operand [of the unary & operator] is the result of a unary * operator, neither that operator nor the & operator is evaluated and the result is as if both were omitted, except that the constraints on the operators still apply and the result is not an lvalue. Similarly, if the operand is the result of a [] operator, neither the & operator nor the unary * that is implied by the [] is evaluated and the result is as if the & operator were removed and the [] operator were changed to a + operator. (and its footnote, #84): Thus, &*E is equivalent to E (even if E is a null pointer)
2,897,067
3,097,849
Sorting a string in array, making it sparsely populated
For example, say I have a string like: duck duck duck duck goose goose goose dog. And I want it to be as sparsely populated as possible, say in this case: duck goose duck goose dog duck goose duck. What sort of algorithm would you recommend? Snippets of code or general pointers would be useful; languages welcome: Python, C++, and extra kudos if you have a way to do it in bash.
If I understood correctly your definition of "sparse", this function should be exactly what you want:

# python ≥ 2.5
import itertools, heapq

def make_sparse(sequence):
    grouped = sorted(sequence)
    item_counts = []
    for item, item_seq in itertools.groupby(grouped):
        count = max(enumerate(item_seq))[0] + 1
        item_counts.append((-count, item)) # negative count for heapq purposes
    heapq.heapify(item_counts)
    count1, item1 = heapq.heappop(item_counts)
    yield item1; count1 += 1
    while True:
        try:
            count2, item2 = heapq.heappop(item_counts)
        except IndexError: # no other item remains
            break
        yield item2; count2 += 1
        if count1 < 0:
            heapq.heappush(item_counts, (count1, item1))
        item1, count1 = item2, count2
    # loop is done, produce remaining item1 items
    while count1 < 0:
        yield item1; count1 += 1

if __name__ == "__main__":
    # initial example
    print list(make_sparse("duck duck duck duck goose goose goose dog".split()))
    # updated example
    print list(make_sparse(['duck', 'duck', 'duck', 'duck', 'duck', 'duck',
                            'goose', 'goose', 'goose', 'goose', 'dog', 'dog']))
    # now a hard case: item 'a' appears more than:
    # > total_len//2 times if total_len is even
    # > total_len//2+1 times if total_len is odd
    print list(make_sparse("aaaaaabbcc"))

These examples produce this output:

['duck', 'goose', 'duck', 'goose', 'duck', 'dog', 'duck', 'goose']
['duck', 'goose', 'duck', 'goose', 'duck', 'dog', 'duck', 'goose', 'duck', 'dog', 'duck', 'goose']
['a', 'b', 'a', 'c', 'a', 'b', 'a', 'c', 'a', 'a']

A subtle note: in the first and second examples, reversing the output order might look more optimal.
2,897,167
2,897,174
wchar to char in c++
I have a Windows CE console application whose entry point looks like this:

int _tmain(int argc, _TCHAR* argv[])

I want to check the contents of argv[1] for "-s" and convert argv[2] into an integer. I am having trouble narrowing the arguments or accessing them to test. I initially tried the following, with little success:

if (argv[1] == L"-s")

I also tried using the narrow function of wostringstream on each character, but this crashed the application. Can anyone shed some light? Thanks
It should be:

if (_tcscmp(argv[1], _T("-s")) == 0)

(Comparing with == as in your attempt compares pointer addresses, not string contents.)
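A sketch of the full parse, using the wide-character functions that _tcscmp and _tstol map to when _UNICODE is defined (the helper names here are my own, for illustration):

```cpp
#include <cwchar>
#include <cassert>

// Stand-ins for checking argv[1] and converting argv[2]; with _UNICODE
// defined, _TCHAR is wchar_t and the tchar.h macros map _tcscmp to
// wcscmp and _tstol to wcstol (check your SDK's tchar.h to confirm).

bool is_flag_s(const wchar_t* arg) {
    // wcscmp compares contents; == would only compare pointers.
    return wcscmp(arg, L"-s") == 0;
}

long to_long(const wchar_t* arg) {
    return wcstol(arg, nullptr, 10);  // argv[2] -> integer
}
```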
2,897,317
3,093,504
writing XML with Xerces 3.0.1 and C++ on windows
I have the following function I wrote to create an XML file using Xerces 3.0.1. If I call this function with a filePath of "foo.xml" or "../foo.xml" it works great, but if I pass in "c:/foo.xml" then I get an exception on this line:

XMLFormatTarget *formatTarget = new LocalFileFormatTarget(targetPath);

Can someone explain why my code works for relative paths but not absolute paths, please? Many thanks.

const int ABSOLUTE_PATH_FILENAME_PREFIX_SIZE = 9;

void OutputXML(xercesc::DOMDocument* pmyDOMDocument, std::string filePath)
{
    // Return the first registered implementation that has the desired features.
    // In this case, we are after a DOM implementation that has the LS feature... or Load/Save.
    DOMImplementation *implementation = DOMImplementationRegistry::getDOMImplementation(L"LS");

    // Create a DOMLSSerializer which is used to serialize a DOM tree into an XML document.
    DOMLSSerializer *serializer = ((DOMImplementationLS*)implementation)->createLSSerializer();

    // Make the output more human readable by inserting line feeds.
    if (serializer->getDomConfig()->canSetParameter(XMLUni::fgDOMWRTFormatPrettyPrint, true))
        serializer->getDomConfig()->setParameter(XMLUni::fgDOMWRTFormatPrettyPrint, true);

    // The end-of-line sequence of characters to be used in the XML being written out.
    serializer->setNewLine(XMLString::transcode("\r\n"));

    // Convert the path into Xerces compatible XMLCh*.
    XMLCh *tempFilePath = XMLString::transcode(filePath.c_str());

    // Calculate the length of the string.
    const int pathLen = XMLString::stringLen(tempFilePath);

    // Allocate memory for a Xerces string sufficient to hold the path.
    XMLCh *targetPath = (XMLCh*)XMLPlatformUtils::fgMemoryManager->allocate((pathLen + ABSOLUTE_PATH_FILENAME_PREFIX_SIZE) * sizeof(XMLCh));

    // Fixes a platform dependent absolute path filename to standard URI form.
    XMLString::fixURI(tempFilePath, targetPath);

    // Specify the target for the XML output.
    XMLFormatTarget *formatTarget = new LocalFileFormatTarget(targetPath);
    //XMLFormatTarget *myFormTarget = new StdOutFormatTarget();

    // Create a new empty output destination object.
    DOMLSOutput *output = ((DOMImplementationLS*)implementation)->createLSOutput();

    // Set the stream to our target.
    output->setByteStream(formatTarget);

    // Write the serialized output to the destination.
    serializer->write(pmyDOMDocument, output);

    // Cleanup.
    serializer->release();
    XMLString::release(&tempFilePath);
    delete formatTarget;
    output->release();
}
Are you using Windows Vista? perhaps you don't have the necessary permissions? See this question: Exception in two line Xerces program
2,897,560
2,897,702
How to change the meaning of pointer access operator
This may be a very obvious question; pardon me if so. I have the below code snippet out of my project:

#include <iostream>
using namespace std;

class X {
public:
    int i;
    X() : i(0) {}
};

int main(int argc, char *argv[])
{
    X *ptr = new X[10];
    unsigned index = 5;
    cout << ptr[index].i << endl;
    return 0;
}

Question: Can I change the meaning of ptr[index]? Because I need to return the value of ptr[a[index]], where a is an array for subindexing. I do not want to modify existing source code; any newly added function which can change the behavior is fine. The index subscript operator is used in too many places (536, to be precise) in my code, with complex formulas inside the subscripts, so I am not inclined to change the code in all those locations.

PS:
1. I tried operator overloading and came to the conclusion that it is not possible.
2. Also, p[i] is transformed into *(p+i), and I cannot redefine the built-in operator '+'.

So I just want to reconfirm my understanding, and ask whether there are any possible shortcuts. Otherwise I need to fix it by the royal method of changing every line of code :)
As Alex says, your 'subindexing' usage of [] would be totally non-obvious for anyone reading your code. That said, you can define a class such as this:

template<class T>
class SubindexingList {
    vector<T> data;
    vector<int> subindexes;
public:
    SubindexingList(int count) : data(count) { }
    void set_subindexes(vector<int> const& newval) { subindexes = newval; }
    T& operator[](int index) { return data[subindexes[index]]; }
    T const& operator[](int index) const { return data[subindexes[index]]; }
};

And replace your X *ptr = new X[10]; with SubindexingList<X> stuff(10);.
2,897,706
2,898,251
Can I broadcast a UDP packet to part of a network?
I am trying to broadcast a UDP packet on a subnet. I want to broadcast my packet to 192.168.1.255. Can I do that? And how, using C++?
If you're using C++, I'd recommend using the Boost ASIO package for networking. The only gotcha is to be sure to set the broadcast ability on your UDP socket via:

boost::asio::socket_base::broadcast option(true);
socket.set_option(option);

The "Examples" section of the Boost documentation should have plenty of references to get you up and running.
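If you'd rather not depend on Boost, the same gotcha exists with plain BSD/POSIX sockets; a sketch (the function name is mine) of enabling broadcast before sending to 192.168.1.255:

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <cassert>

// POSIX sockets sketch: create a UDP socket and enable broadcast.
// Without SO_BROADCAST, sendto() to a broadcast address such as
// 192.168.1.255 fails (typically with EACCES) on most systems.
int make_broadcast_socket() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return -1;
    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof on) != 0) {
        close(fd);
        return -1;
    }
    return fd;  // caller fills a sockaddr_in with 192.168.1.255,
                // calls sendto(), and close()s the descriptor
}
```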
2,897,907
2,897,945
How to simulate a file read error in the CRT
Using VS2008, we would like to simulate a file that has a size of X but fails to read at X-Y bytes, so that we get an error indication. Does anyone have an idea of how to do this on Windows? There seems to be a solution for Linux, but I can't really come up with a way to do this on Windows. We have multiple developers, multiple machines, and the CppUnit testing framework, so I want a software-only design. I'm trying to simulate the actual CRT failing, so I can test the code that deals with the failure.
Wrap the file I/O functions in a class; override those in a testing derived class; simulate failure with a fake or mock.
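A minimal sketch of that idea (all names here are illustrative, not from any real CRT wrapper): the production code reads through an interface, and the test substitutes a reader that fails after a chosen number of bytes:

```cpp
#include <cstddef>
#include <cassert>

// Minimal I/O seam: production code reads through this interface.
struct Reader {
    virtual ~Reader() {}
    // Returns bytes read, 0 at end of file, or -1 on error
    // (mirroring the CRT's read()).
    virtual int read(char* buf, std::size_t n) = 0;
};

// Test double: serves `good` bytes successfully, then fails,
// simulating a device error at offset X-Y.
struct FailingReader : Reader {
    std::size_t good;
    std::size_t served = 0;
    explicit FailingReader(std::size_t good) : good(good) {}
    int read(char* buf, std::size_t n) override {
        if (served >= good) return -1;  // simulated read failure
        std::size_t give = n < good - served ? n : good - served;
        for (std::size_t i = 0; i < give; ++i) buf[i] = 'x';
        served += give;
        return static_cast<int>(give);
    }
};

// Example "code under test": drains a reader and reports whether it
// reached end of file (true) or hit an error (false).
bool drain(Reader& r) {
    char buf[16];
    int got;
    while ((got = r.read(buf, sizeof buf)) > 0) {}
    return got == 0;  // false -> hit the simulated failure
}
```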
2,897,921
2,898,002
How to use openssl crypto lib headers in C++?
I am trying to test the crypto library that comes with OpenSSL. I downloaded OpenSSL from http://www.openssl.org/source/ and it contains a /crypto folder with subfolders for each encryption type. I wanted to try BIO_f_base64, so I created an empty console app, added the needed includes, added the paths to the /bio and /evp folders to the C++ include directories, and also added the main /openssl folder. When I try to compile I get: Cannot open include file: 'openssl/e_os2.h': No such file or directory. But the file is there. Should I use the crypto lib in a different way? How can I use it adding only the /openssl path and not all the crypto subfolders? Also, I don't have any .lib files; where can I get them?
You need a version of OpenSSL that is built for Windows, not the source release. I recommend installing a version from here, which has some nice installers for .lib files and headers. Once you have it installed you will have to update your VS project with the proper include paths to pick up the headers from wherever the installer put them.
2,897,936
2,899,879
Is it possible to have an out-of-process COM server where a separate O/S process is used for each object instance?
I have a legacy C++ "solution engine" that I have already wrapped as an in-process COM object for use by client applications that only require a single "solution engine". However I now have a client application that requires multiple "solution engines". Unfortunately the underlying legacy code has enough global data, singletons and threading horrors that given available resources it isn't possible to have multiple instances of it in-process simultaneously. What I am hoping is that some kind soul can tell me of some COM magic where with the flip of a couple of registry settings it is possible to have a separate out-of-process COM server (separate operating system process) for each instance of the COM object requested. Am I in luck?
Yes, this is possible. The key is to register your coclass by calling CoRegisterClassObject, and OR in the value REGCLS_SINGLEUSE in the flags parameter. If your project is an ATL 7.0+ project, you can do this by overriding CAtlExeModuleT::PreMessageLoop(), which is responsible for registering the class object, thusly:

HRESULT CATLHacksModule::PreMessageLoop(int nShow)
{
    HRESULT hr = RegisterClassObjects(CLSCTX_LOCAL_SERVER, REGCLS_SINGLEUSE);
    if (hr == S_OK) {
        if (m_bDelayShutdown && !StartMonitor()) {
            hr = E_FAIL;
        }
    }
    else {
        m_bDelayShutdown = false;
    }
    return hr;
}
2,898,094
2,898,310
Why C++ virtual function defined in header may not be compiled and linked in vtable?
The situation is the following. I have a shared library which contains a class definition:

class QueueClass : public IClassInterface {
    virtual void LOL() { /* do some magic */ }
};

My shared library initializes a class member:

QueueClass *globalMember = new QueueClass();

My shared library exports a C function which returns a pointer to globalMember:

void * getGlobalMember(void) { return globalMember; }

My application uses globalMember like this:

((IClassInterface*)getGlobalMember())->LOL();

Now the very uber stuff - if I do not reference LOL from the shared library, then LOL is not linked in, and calling it from the application raises an exception. Reason: the VTABLE contains null in place of the pointer to the LOL() function. When I move the LOL() definition from the .h file to the .cpp, it suddenly appears in the VTABLE and everything works just great. What explains this behavior? (GCC compiler + ARM architecture)
The linker is the culprit here. When a function is inline it has multiple definitions, one in each cpp file where it is referenced. If your code never references the function, it is never generated. However, the vtable layout is determined at compile time from the class definition: the compiler can easily tell that LOL() is a virtual function and needs an entry in the vtable. When it gets to link time for the app, it tries to fill in all the values of the QueueClass vtable, doesn't find a definition of LOL(), and leaves it blank (null). The solution is to reference LOL() in a file in the shared library. Something as simple as &QueueClass::LOL; will do. You may need to assign it to a throwaway variable to get the compiler to stop complaining about statements with no effect.
2,898,155
2,898,186
Is rand() predictable in C++
Possible Duplicate: What's the Right Way to use the rand() Function in C++? When I run the program below I always get the same values each time. Is rand not a true random function?

int main()
{
    while (1) {
        getch();
        cout << rand() << endl;
    }
}

In each run I am getting the values below: 41 18467 6334 26500 19169 15724 ......
Yes and no. rand() is a pseudo random number generator that will always return the same sequence of numbers given the same seed value. Typically one 'seeds' the random number generator with some random data and then uses rand() to return a sequence of seemingly random numbers. If your random data isn't needed for something requiring 'true' randomness (such as cryptography based security) just using the current system time is sufficient. However, if you are using it for security purposes, look into obtaining more truly random data from entropy gathering utilities and use that to seed the random number generator. As aa mentioned, the seed function is referenced here
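A small sketch illustrating the determinism (and the usual time-based seeding):

```cpp
#include <cstdlib>
#include <ctime>
#include <vector>
#include <cassert>

// rand() is a deterministic pseudo-random generator: the same seed
// always reproduces the same sequence, which is why the asker sees
// identical values on every run (the default seed is 1).
std::vector<int> sequence(unsigned seed, int n) {
    std::srand(seed);
    std::vector<int> v;
    for (int i = 0; i < n; ++i)
        v.push_back(std::rand());
    return v;
}

// For run-to-run variation (when cryptographic quality isn't needed),
// seed once at startup with the current time:
//   std::srand(static_cast<unsigned>(std::time(nullptr)));
```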
2,898,316
2,898,328
Using a member function pointer within a class
Given an example class:

class Fred {
public:
    Fred() { func = &Fred::fa; }
    void run() {
        int foo, bar;
        *func(foo, bar);
    }
    double fa(int x, int y);
    double fb(int x, int y);
private:
    double (Fred::*func)(int x, int y);
};

I get a compiler error at the line calling the member function through the pointer, *func(foo,bar), saying: "term does not evaluate to a function taking 2 arguments". What am I doing wrong?
The syntax you need looks like:

((object).*(ptrToMember))

So your call would be:

((*this).*(func))(foo, bar);

I believe an alternate syntax would be:

(this->*func)(foo, bar);
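Putting that syntax back into the example class, a complete sketch (the bodies of fa/fb are made up here so the dispatch is observable):

```cpp
#include <cassert>

class Fred {
public:
    Fred() { func = &Fred::fa; }
    // Invoke the current target through the member-function pointer.
    double run(int foo, int bar) { return (this->*func)(foo, bar); }
    void use_fb() { func = &Fred::fb; }
    double fa(int x, int y) { return x + y; }  // illustrative body
    double fb(int x, int y) { return x - y; }  // illustrative body
private:
    double (Fred::*func)(int x, int y);
};
```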
2,898,590
2,898,597
How to see c++ and c# dll dependencies?
I have a Python project that calls a C++ wrapper DLL, which in turn calls a C# COM interop DLL. On my computer, with all frameworks and programs installed, my project runs very well. But on a computer that has just been formatted, it doesn't. I already installed the C++ 2008 redistributable and the C++ part is working, but when I call a function from it (which calls the corresponding C# one), it gives an error. I want to know the DLL dependencies of both the C++ and C# DLLs to see what is missing :)
Looks like you need Dependency Walker.
2,898,758
2,900,125
ptr to c++ source for a simple mac (cocoa) based console to use as a command interpreter
I have the command language part sorted out; I'm looking for a good sample of how to build a custom console in Cocoa. I need features like copy/paste, a command stack, Ctrl-Z processing, etc. Thanks in advance.
There's the open source iTerm console application, that might give you the example you want.
2,898,870
2,899,689
Suggestions on syntax to express mathematical formula concisely
I am developing a functional domain-specific embedded language within C++ to translate formulas into working code as concisely and accurately as possible. I posted a prototype in the comments; it is about two hundred lines long. Right now my language looks something like this (well, actually, is going to look like):

// implies two nested loops j=0:N, i=0,j
(range(i) < j < N)[T(i,j) = (T(i,j) - T(j,i))/e(i+j)];

// implies summation over above expression
sum(range(i) < j < N)[(T(i,j) - T(j,i))/e(i+j)];

I am looking for possible syntax improvements/extensions, or just different ideas about expressing mathematical formulas as clearly and precisely as possible (in any language, not just C++). Can you give me some syntax examples relating to my question, in your language of choice, which you consider useful? In particular, if you have ideas about how to translate the above code segments, I would be happy to hear them. Thank you. Just to clarify and give an actual formula: my short-term goal is to express the following expression concisely, where values in <> are already computed as 4-dimensional arrays.
If you're going to be writing this for the ab-initio world (which I'm guessing from your MP2 equation), you want to make it very easy and clear to express things as close to the mathematical definition as you can. For one, I wouldn't have the complicated range function. Have it define a loop, but if you want nested loops, specify them both. So instead of

(range(i) < j < N)[T(i,j) = (T(i,j) - T(j,i))/e(i+j)];

use

loop(j,0,N)[loop(i,0,j)[T(i,j) = (T(i,j) - T(j,i))/e(i+j)]]

And for things like sum and product, make the syntax "inherit" from the fact that it's a loop. So instead of

sum(range(i) < j < N)[(T(i,j) - T(j,i))/e(i+j)];

use

sum(j,0,n)[loop(i,0,j)[(T(i,j) - T(j,i))/e(i+j)]]

or, if you need a double sum,

sum(j,0,n)[sum(i,0,j)[(T(i,j) - T(j,i))/e(i+j)]]

Since it looks like you're trying to represent quantum mechanical operators, try to make your language constructs match the operators on a 1-1 basis as closely as possible. That way it's easy to translate (and clear what's being translated).

EDITED TO ADD: since you're doing quantum chemistry, it's fairly easy (at least as syntax goes). You define operators that always work on what's to the right of them, and then the only other thing you need are parentheses to group where an operator stops. Einstein notation is fun, where you don't specify the indices or bounds and they're implied by convention; however, that doesn't make for clear code and it's harder to think about. For sums, even if the bounds are implied, they're always easy to figure out from context, so you should always make people specify them:

sum(i,0,n)sum(j,0,i)sum(a,-j,j)sum(b,-i,i)....

Since each operator works to the right, its variables are known: j can know about i, a can know about i and j, and b can know about i, j, and a. From my experience with quantum chemists (I am one too!) they don't like complicated syntax that differs much from what they write. They are happy to separate double and triple sums and integrals into a collection of singles, because those are just shorthand anyway. Symmetry isn't going to be that hard either; it's just a collection of swaps and adds or multiplies. I'd do something where you specify the operation, which contains a list of the elements that are the same and can be swapped:

c2v(sigma_x,a,b)a+b

This says that a and b can be considered identical particles under a C2v operation. That means that any equation with a and b (such as the a+b after it) should be transformed into a linear combination of the C2v transformations. The sigma_x is the operation in C2v that you want applied to your function, (a+b). If I remember correctly, that's 1/sqrt(2)((a+b)+(b+a)). But I don't have my symmetry book here, so that could be wrong.
2,899,013
2,899,042
How do I get the application data path in Windows using C++?
I looked all over the internet and couldn't find a decent solution. I want to be able to programmatically obtain, in C++, the path "%ALLUSERSPROFILE%\Application Data" that Explorer can translate into a real path. Can I do this without relying on third-party code?
Use SHGetFolderPath with CSIDL_COMMON_APPDATA as the CSIDL:

TCHAR szPath[MAX_PATH];
if (SUCCEEDED(SHGetFolderPath(NULL, CSIDL_COMMON_APPDATA, NULL, 0, szPath)))
{
    //....
}
2,899,207
2,899,333
Implementing Dijkstra's Algorithm
I've been tasked (coursework @ university) to implement a form of path-finding. In spec, I could just implement brute force, since there's a limit on the number of nodes to search (begin, two in the middle, end), but I want to re-use this code, so I came to implement Dijkstra's algorithm. I've seen the pseudocode on Wikipedia and a friend wrote some for me as well, but it flat out doesn't make sense to me. The algorithm itself seems pretty simple and understanding it is not the problem, but I just can't for the life of me visualize the code that would realize such a thing. Any suggestions/tips? Edit for some confusions: Yes, there is a target node and a source node. I'm looking to implement Dijkstra's for the general case, not the "only two intermediate stops" case, because I want to use the code again afterwards. Otherwise, I'd just write a brute-force implementation. The specific issue that I'm having a little trouble with is storing the sub-optimal half-formed paths, in case they may become optimal. When I'm visiting a given node, I just don't see how I'm going to update all the connections that go through it. More edit: Going through a couple of the answers now and having a go. REALLY EDIT: I forgot to mention a serious complication: any two vertices can have up to UINT_MAX different distances between them. Sorry. In fact, forgetting to deal with this is probably the cause of the damn problem in the first place, although the solution (pick the shortest) is fortunately obvious to me. No wonder other people's pseudocode with a single distance variable didn't account for my variable distances.
Here's a high-level breakdown of Dijkstra's algorithm: You stick all of the vertices in a priority queue where every vertex has a priority (distance) of infinity, except for the source vertex, which has a distance of zero (the source vertex is zero units of distance away from itself, right?). Pop the priority queue. The vertex removed is the vertex with the shortest distance from the source. Obviously the first vertex popped from the queue is the source. We'll call that popped vertex v. Look at each of the neighbors of v. All of them will have a distance greater than v's distance (otherwise we would have already popped them from the queue, right?). Let's say v has a distance of 3, and v has 3 neighbors: x (dist-from-source: 5), y (dist-from-source: 10) and z (dist-from-source: infinity). Now we look at each neighbor's distance from v. Let's say they are: x - 3, y - 2, z - 4. This means that the path from the source to x that goes through v has a distance of 3 + 3 = 6, the path to y has a distance of 3 + 2 = 5, and the path to z has a distance of 3 + 4 = 7. The path to x through v is longer than the current shortest path to x, so we ignore it. However, the paths to y and z that go through v are shorter than the previously known shortest paths, so we update them. You keep doing this until you have gone through all the vertices. At each point, when you pop the min from the priority queue, you know you have found the shortest path to that vertex, because any possible shorter path would have to pass through a vertex with a distance less than v's, which means we would have already popped it from the queue.
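A sketch of that breakdown in C++ (using std::priority_queue with lazy deletion instead of a decrease-key operation, which it doesn't support; parallel edges of different weights, the asker's complication, are just extra adjacency entries and the shorter one wins automatically):

```cpp
#include <vector>
#include <queue>
#include <functional>
#include <limits>
#include <utility>
#include <cassert>

// Adjacency list: adj[u] holds (neighbor, edge weight) pairs. Multiple
// edges between the same two vertices are simply multiple entries.
using Graph = std::vector<std::vector<std::pair<int, unsigned>>>;

std::vector<unsigned> dijkstra(const Graph& adj, int src) {
    const unsigned INF = std::numeric_limits<unsigned>::max();
    std::vector<unsigned> dist(adj.size(), INF);
    // Min-heap of (distance-so-far, vertex). Instead of decreasing a
    // key in place, we push duplicates and skip stale entries on pop.
    using Item = std::pair<unsigned, int>;
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;  // stale entry, skip
        for (auto [v, w] : adj[u])
            if (dist[u] != INF && dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;  // relax edge u -> v
                pq.push({dist[v], v});
            }
    }
    return dist;  // shortest distance from src to every vertex
}
```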
2,899,511
2,899,546
Should I call class destructor in this code?
I am using this sample to decode/encode some data I am retrieving from / sending to a web server, and I want to use it like this:

```cpp
BOOL HandleMessage(UINT uMsg, WPARAM wParam, LPARAM lParam, LRESULT* r)
{
    if (uMsg == WM_DESTROY)
    {
        PostQuitMessage(0);
        return TRUE;
    }
    else if (uMsg == WM_CREATE)
    {
        // Start timer
        StartTimer();
        return TRUE;
    }
    else if (uMsg == WM_TIMER)
    {
        // get data from server
        char * test = "test data";
        Base64 base64;
        char *temp = base64.decode(test);
        MessageBox(TEXT(temp), 0, 0);
    }
}
```

The timer is set every 5 minutes. Should I use `delete base64` at the end? Does delete deallocate everything used by base64?
base64 is in local storage. It goes out of scope and is destructed at the end of the block. The only question left is ownership of temp. If its memory is owned by base64, then you do not need to delete anything.
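To illustrate the scope rule the answer describes, here is a small, self-contained sketch (the Logger class is my own stand-in for Base64, not from the question):

```cpp
#include <cassert>

// Stand-in for any class with automatic (local) storage.
struct Logger {
    static int live;       // how many instances currently exist
    Logger()  { ++live; }
    ~Logger() { --live; }  // destructor runs automatically at end of scope
};
int Logger::live = 0;

int scope_demo() {
    {
        Logger base64;             // local object; no new/delete involved
        assert(Logger::live == 1); // alive inside the block
    }                              // destructor called here, automatically
    return Logger::live;           // 0: nothing to delete manually
}
```

delete is only for objects created with new; calling it on an object with automatic storage is undefined behavior.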
2,899,604
2,899,848
Using sem_t in a Qt Project
I'm working on a simulation in Qt (C++), and would like to make use of a Semaphore wrapper class I made for the sem_t type. Although I am including semaphore.h in my wrapper class, running qmake produces the following error:

    'sem_t' does not name a type

I believe this is a library/linking error, since I can compile the class without problems from the command line. I've read that you can specify external libraries to include during compilation. However, I'm a) not sure how to do this in the project file, and b) not sure which library to include in order to access semaphore.h. Any help would be greatly appreciated.

Thanks, Tom

Here's the wrapper class for reference:

Semaphore.h

```cpp
#ifndef SEMAPHORE_H
#define SEMAPHORE_H

#include <semaphore.h>

class Semaphore
{
public:
    Semaphore(int initialValue = 1);
    int getValue();
    void wait();
    void post();

private:
    sem_t mSemaphore;
};

#endif
```

Semaphore.cpp

```cpp
#include "Semaphore.h"

Semaphore::Semaphore(int initialValue)
{
    sem_init(&mSemaphore, 0, initialValue);
}

int Semaphore::getValue()
{
    int value;
    sem_getvalue(&mSemaphore, &value);
    return value;
}

void Semaphore::wait()
{
    sem_wait(&mSemaphore);
}

void Semaphore::post()
{
    sem_post(&mSemaphore);
}
```

And the Qt project file:

```
TARGET = RestaurantSimulation
TEMPLATE = app
QT +=
SOURCES += main.cpp \
    RestaurantGUI.cpp \
    RestaurantSetup.cpp \
    WidgetManager.cpp \
    RestaurantView.cpp \
    Table.cpp \
    GUIFood.cpp \
    GUIItem.cpp \
    GUICustomer.cpp \
    GUIWaiter.cpp \
    Semaphore.cpp
HEADERS += RestaurantGUI.h \
    RestaurantSetup.h \
    WidgetManager.h \
    RestaurantView.h \
    Table.h \
    GUIFood.h \
    GUIItem.h \
    GUICustomer.h \
    GUIWaiter.h \
    Semaphore.h
FORMS += RestaurantSetup.ui
LIBS +=
```

Full compiler output:

```
g++ -c -pipe -g -gdwarf-2 -arch i386 -Wall -W -DQT_GUI_LIB -DQT_CORE_LIB -DQT_SHARED -I/usr/local/Qt4.6/mkspecs/macx-g++ -I. -I/Library/Frameworks/QtCore.framework/Versions/4/Headers -I/usr/include/QtCore -I/Library/Frameworks/QtGui.framework/Versions/4/Headers -I/usr/include/QtGui -I/usr/include -I. -I. -F/Library/Frameworks -o main.o main.cpp
In file included from RestaurantGUI.h:10,
                 from main.cpp:2:
Semaphore.h:14: error: 'sem_t' does not name a type
make: *** [main.o] Error 1
make: Leaving directory `/Users/thauburger/Desktop/RestaurantSimulation'
Exited with code 2.
Error while building project RestaurantSimulation
When executing build step 'Make'
```
I was able to compile and link your Semaphore class using qmake without any unexpected steps (including linking in the rt or pthread libraries). I created the following main:

```cpp
#include "Semaphore.h"

int main(int argc, char* argv[])
{
    Semaphore sem;
    return 0;
}
```

And then I generated the following project file using qmake -project:

```
######################################################################
# Automatically generated by qmake (2.01a) Mon May 24 12:50:02 2010
######################################################################

TEMPLATE = app
TARGET =
DEPENDPATH += .
INCLUDEPATH += .

# Input
HEADERS += Semaphore.h
SOURCES += main.cpp Semaphore.cpp
```

Whatever error you're seeing is caused by something other than your Semaphore class. I'd recommend taking a good look at your RestaurantGUI.h file. You may need to look at the preprocessed output (gcc's -E flag) in order to see what's really happening.

NOTE: I'd recommend renaming your semaphore files to something that won't collide with the system's semaphore.h on case-insensitive filesystems, such as Windows.
2,899,634
2,899,667
event handling in C++
I'm writing a game in C++, and I'm trying to get it to recognize keyboard and mouse events. Google tells me that boost.signal is suitable for event handling, but none of the code samples or tutorials I've found tell me how to associate a keypress or mouseclick with a function. Can anyone shed any light on this?
I think taking a look at SDL and its input subsystem might give you some hints. It all depends on what the source of your keyboard and mouse events is. Is it SDL? DirectX? If neither, then you're probably doing it wrong :).
2,899,655
4,646,999
BitBlt ignores CAPTUREBLT and seems to always capture a cached copy of the target
I am trying to capture screenshots using the BitBlt function. However, every single time I capture a screenshot, the non-client area NEVER changes no matter what I do. It's as if it's getting some cached copy of it. The client area is captured correctly. If I close and then re-open the window and take a screenshot, the non-client area will be captured as it is. Any subsequent captures after moving/resizing the window have no effect on the captured screenshot. Again, the client area will be correct. Furthermore, the CAPTUREBLT flag seems to do absolutely nothing at all. I notice no change with or without it. Here is my capture code:

```cpp
QPixmap WindowManagerUtils::grabWindow(WId windowId, GrabWindowFlags flags, int x, int y, int w, int h)
{
    RECT r;
    switch (flags)
    {
    case WindowManagerUtils::GrabWindowRect:
        GetWindowRect(windowId, &r);
        break;
    case WindowManagerUtils::GrabClientRect:
        GetClientRect(windowId, &r);
        break;
    case WindowManagerUtils::GrabScreenWindow:
        GetWindowRect(windowId, &r);
        return QPixmap::grabWindow(QApplication::desktop()->winId(), r.left, r.top, r.right - r.left, r.bottom - r.top);
    case WindowManagerUtils::GrabScreenClient:
        GetClientRect(windowId, &r);
        return QPixmap::grabWindow(QApplication::desktop()->winId(), r.left, r.top, r.right - r.left, r.bottom - r.top);
    default:
        return QPixmap();
    }

    if (w < 0)
    {
        w = r.right - r.left;
    }
    if (h < 0)
    {
        h = r.bottom - r.top;
    }

#ifdef Q_WS_WINCE_WM
    if (qt_wince_is_pocket_pc())
    {
        QWidget *widget = QWidget::find(winId);
        if (qobject_cast<QDesktopWidget*>(widget))
        {
            RECT rect = {0, 0, 0, 0};
            AdjustWindowRectEx(&rect, WS_BORDER | WS_CAPTION, FALSE, 0);
            int magicNumber = qt_wince_is_high_dpi() ? 4 : 2;
            y += rect.top - magicNumber;
        }
    }
#endif

    // Before we start creating objects, let's make CERTAIN of the following so we don't have a mess
    Q_ASSERT(flags == WindowManagerUtils::GrabWindowRect || flags == WindowManagerUtils::GrabClientRect);

    // Create and setup bitmap
    HDC display_dc = NULL;
    if (flags == WindowManagerUtils::GrabWindowRect)
    {
        display_dc = GetWindowDC(NULL);
    }
    else if (flags == WindowManagerUtils::GrabClientRect)
    {
        display_dc = GetDC(NULL);
    }
    HDC bitmap_dc = CreateCompatibleDC(display_dc);
    HBITMAP bitmap = CreateCompatibleBitmap(display_dc, w, h);
    HGDIOBJ null_bitmap = SelectObject(bitmap_dc, bitmap);

    // copy data
    HDC window_dc = NULL;
    if (flags == WindowManagerUtils::GrabWindowRect)
    {
        window_dc = GetWindowDC(windowId);
    }
    else if (flags == WindowManagerUtils::GrabClientRect)
    {
        window_dc = GetDC(windowId);
    }
    DWORD ropFlags = SRCCOPY;
#ifndef Q_WS_WINCE
    ropFlags = ropFlags | CAPTUREBLT;
#endif
    BitBlt(bitmap_dc, 0, 0, w, h, window_dc, x, y, ropFlags);

    // clean up all but bitmap
    ReleaseDC(windowId, window_dc);
    SelectObject(bitmap_dc, null_bitmap);
    DeleteDC(bitmap_dc);

    QPixmap pixmap = QPixmap::fromWinHBITMAP(bitmap);
    DeleteObject(bitmap);
    ReleaseDC(NULL, display_dc);
    return pixmap;
}
```

Most of this code comes from Qt's QWidget::grabWindow function, as I wanted to make some changes so it'd be more flexible. Qt's documentation states that:

> The grabWindow() function grabs pixels from the screen, not from the window, i.e. if there is another window partially or entirely over the one you grab, you get pixels from the overlying window, too.

However, I experience the exact opposite... regardless of the CAPTUREBLT flag. I've tried everything I can think of... nothing works. Any ideas?
Your confusion about BitBlt with CAPTUREBLT behaviour comes from the fact that the official BitBlt documentation is unclear and misleading. It states:

> CAPTUREBLT -- Includes any windows that are layered on top of your window in the resulting image. By default, the image only contains your window.

What it actually means (at least for any Windows OS without Aero enabled) is:

> CAPTUREBLT -- Includes any layered(!) windows (see the WS_EX_LAYERED extended window style) that overlap your window. Non-layered windows that overlap your window are never included.

Windows without the WS_EX_LAYERED extended window style that overlap your window are not included, with or without the CAPTUREBLT flag (at least on any Windows OS without Aero enabled).

The Qt developers also misunderstood the BitBlt/CAPTUREBLT documentation, so the Qt documentation is actually wrong about QPixmap::grabWindow behaviour on WIN32 without Aero enabled.

ADD: If you want to capture your window as it appears on the screen, you have to capture the entire desktop with the CAPTUREBLT flag and then extract the rectangle containing your window. (The Qt developers should do the same thing.) It will work correctly in both cases: with and without Aero enabled/available.
2,899,675
2,984,126
getElementsByTagName returns 0-length list when called from didFinishLoad delegate
I'm using the Chromium port of WebKit on Windows and I'm trying to retrieve a list of all of the images in my document. I figured the best way to do this was to implement WebKit::WebFrameClient::didFinishLoading like so:

```cpp
WebNodeList list = document->getElementsByTagName(L"img");
for (size_t i = 0; i < list.length(); ++i)
{
    // Manipulate images here...
}
```

However, when this delegate fires, list.length() returns 0. The only times I've seen it return a non-zero-length list are when I substitute "body" or "head" for "img". Strangely enough, if I call getElementsByTagName(L"img") outside of the delegate, it works correctly. I'm guessing that the DOM isn't fully loaded when didFinishLoading is called, but that would seem to contradict the delegate's name. Does anyone know what I may be missing here?
It turns out that the mistake was purely on my side. I was caching a pointer to the DOM document in my frame wrapper. Of course, since a frame can outlive a DOM document, I ended up referencing an out-of-date document once I loaded a new page.
2,899,723
2,899,757
C++ hook process and show status
Ok, so I am learning C++ slowly. I am familiar with all the console syntax and everything, but now I'm moving on to Windows programming. What I'm trying to do is create a DLL that I inject into a process, so it's hooked in. All I want the C++ application to do is display the text "Hooked" if the DLL was successfully injected, and an error if something went wrong. Alternatively, I could do it without a DLL: just run an executable that, when the process I'm trying to hook is opened, changes its status to "Hooked". Also, I have a safaribooksonline.com account, so if there are any good reads you would recommend, just write them down. Thanks.
I think you might be looking at this backwards. In C/C++ an application 'pulls' a DLL in rather than having a DLL 'injected' into an application. Typically for plugins/hooks, there is some mechanism to inform an application of a DLL's availability (often just its presence in a specific directory) and a configuration file or some other logic is used to instruct the application to explicitly load the library, extract a function or two, and call them. For Windows programming, I'd suggest doing a search for examples of the LoadLibrary() API call. You'll likely find a tutorial or two on how to do it.
2,899,764
2,900,190
How can one make a 'passthru' function in C++ using macros or metaprogramming?
So I have a series of global functions, say:

```cpp
foo_f1(int a, int b, char *c);
foo_f2(int a);
foo_f3(char *a);
```

I want to make a C++ wrapper around these, something like:

```cpp
MyFoo::f1(int a, int b, char* c);
MyFoo::f2(int a);
MyFoo::f3(char* a);
```

There's about 40 functions like this; 35 of them I just want to pass through to the global function, and the other 5 I want to do something different with. Ideally the implementation of MyFoo.cpp would be something like:

```cpp
PASSTHRU( f1, (int a, int b, char *c) );
PASSTHRU( f2, (int a) );

MyFoo::f3(char *a) {
    //do my own thing here
}
```

But I'm having trouble figuring out an elegant way to make the above PASSTHRU macro. What I really need is something like the mythical X getArgs() below:

```cpp
MyFoo::f1(int a, int b, char *c) {
    X args = getArgs();
    args++;     //skip past implicit this..
    ::f1(args); //pass args to global function
}
```

But short of dropping into assembly I can't find a good implementation of getArgs().
You could use Boost.Preprocessor to let the following:

```cpp
struct X {
    PASSTHRU(foo, void, (int)(char))
};
```

... expand to:

```cpp
struct X {
    void foo ( int arg0 , char arg1 ) {
        return ::foo ( arg0 , arg1 );
    }
};
```

... using these macros:

```cpp
#define DO_MAKE_ARGS(r, data, i, type) \
    BOOST_PP_COMMA_IF(i) type arg##i

#define PASSTHRU(name, ret, args) \
    ret name ( \
        BOOST_PP_SEQ_FOR_EACH_I(DO_MAKE_ARGS, _, args) \
    ) { \
        return ::name ( \
            BOOST_PP_ENUM_PARAMS(BOOST_PP_SEQ_SIZE(args), arg) \
        ); \
    }
```
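As an aside: with a C++11 compiler (which post-dates this question), perfect forwarding achieves the same pass-through without any preprocessor metaprogramming. The names below are illustrative stand-ins, not from the question:

```cpp
#include <cassert>
#include <utility>

// Hypothetical global C-style functions to be wrapped.
int foo_f1(int a, int b) { return a + b; }
int foo_f2(int a)        { return a * 2; }

struct MyFoo {
    // One variadic template forwards any argument list to the global function,
    // so each pass-through wrapper is a single line.
    template <typename... Args>
    int f1(Args&&... args) { return ::foo_f1(std::forward<Args>(args)...); }

    template <typename... Args>
    int f2(Args&&... args) { return ::foo_f2(std::forward<Args>(args)...); }
};
```

Each wrapper is still written out once per name, but its body is uniform, so a trivial one-line macro (or editor snippet) covers the 35 pass-through cases.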
2,899,803
2,903,075
Monochrome BitMap Library
I am trying to create a piece of software that can be used to create VERY large (10000x10000) sized bitmaps. All I need is something that can work in monochrome, since the required output is a matrix containing details of black and white pixels in the bitmap. The closest thing I can think of is a font editor, but the size is a problem. Is there any library out there that I can use to create the software, or will I have to write the whole thing from the start? Edited on May 25: OK, so I've been searching around and I have found that using the GtkTree Widget is a good way to create grids. Has anybody tried that with the large sizes that I require? And if so, can it be made to look like a drawing surface rather than a Spreadsheet like view?
Why don't you use bitmap objects, like gdk pixmaps if you use GTK? 10,000 x 10,000 pixels with a depth of 1 (monochrome) is 100,000,000 bits, which is 12,500,000 bytes, around 12 megabytes. Not that large.
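To make the size estimate concrete, here is a minimal sketch (my own, not from the answer) of a 10,000 x 10,000 monochrome bitmap stored as a packed bit array:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One bit per pixel: a 10,000 x 10,000 monochrome image fits in 12.5 MB.
class MonoBitmap {
public:
    MonoBitmap(std::size_t w, std::size_t h)
        : width_(w), bits_((w * h + 7) / 8, 0) {}

    void set(std::size_t x, std::size_t y, bool black) {
        std::size_t i = y * width_ + x;
        if (black) bits_[i / 8] |=  (1u << (i % 8));
        else       bits_[i / 8] &= static_cast<unsigned char>(~(1u << (i % 8)));
    }
    bool get(std::size_t x, std::size_t y) const {
        std::size_t i = y * width_ + x;
        return (bits_[i / 8] >> (i % 8)) & 1u;
    }
    std::size_t bytes() const { return bits_.size(); }

private:
    std::size_t width_;
    std::vector<unsigned char> bits_;
};
```

The full matrix of black/white pixels the questioner needs is exactly this bit array, so exporting it is a straight copy.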
2,900,244
2,900,368
Is this usage of test_and_set thread safe?
I've been thinking of how to implement a lock-free singly linked list. And to be honest, I don't see many bullet-proof ways to do it. Even the more robust ways out there that use CAS end up having some degree of the ABA problem. So I got to thinking: wouldn't a partially lock-free system be better than always using locks? Maybe some operations can be atomic and lock-free? If I can do that, it should still be thread safe.

So, onto the question. I am thinking of a simple singly linked list with two main operations: push and pop. push always inserts at the front. Something like this:

```cpp
void push(int n)
{
    T *p = new T;
    p->n = n;
    p->next = root;
    root = p;
}
```

And pop always takes the first element. Something like this:

```cpp
T *pop()
{
    T *p = root;
    root = root->next;
    return p;
}
```

Obviously push is non-trivial enough that a simple lock-free approach is probably not going to happen. But pop looks maybe doable. Using gcc intrinsics, I've thought of this:

```cpp
T *pop()
{
    return __sync_lock_test_and_set(&root, root->next);
}
```

Functionally equivalent? Yup. Lock-free? Yup. Thread safe? I don't know. My gut reaction is no, and here's why: I am concerned about the fact that one of the parameters to the test_and_set has to dereference memory. What if root changes between the root->next and the call to __sync_lock_test_and_set? I suppose that this code is equivalent to this:

```cpp
T *pop()
{
    T *temp = root->next; // are we broken if a push/pop happens here?
    return __sync_lock_test_and_set(&root, temp);
}
```

So, like I said, I think that this code is not correct. But can anyone say for certain that I am drawing the right conclusions? (I'd hate to write off something that works nicely.) If it is in fact broken as I suspect, is there any simple solution?
You're correct. In C++, the arguments to a function are evaluated in any order, but certainly your compiler has no way of knowing that root->next is an atomic operation in your sequence. Consider two threads calling pop(): One thread evaluates root->next, then the other evaluates root->next, and both call test_and_set(). Now you've only popped one node.
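For completeness, a compare-and-swap retry loop (shown here with C++11 std::atomic rather than the GCC intrinsics in the question) at least detects when root moved between the read and the write. As the question itself notes, node reclamation still exposes the ABA problem, so this is a sketch of the technique, not a production-ready structure:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

struct Node {
    int n;
    Node* next;
};

std::atomic<Node*> root(NULL);

void push(int n) {
    Node* p = new Node;
    p->n = n;
    p->next = root.load();
    // Retry until no other thread changed root between our read and write;
    // on failure, compare_exchange_weak reloads root into p->next for us.
    while (!root.compare_exchange_weak(p->next, p)) {}
}

Node* pop() {
    Node* p = root.load();
    // If root still equals p, swing it to p->next; otherwise p is reloaded
    // with the current root and we retry.
    while (p && !root.compare_exchange_weak(p, p->next)) {}
    return p;  // caller owns the node; ABA/reclamation issues remain!
}
```

The key difference from the single test_and_set is that the write only succeeds if root is still the value whose ->next we read, which closes the race the questioner identified.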
2,900,357
2,900,523
Producer/Consumer Implementation -- Feedback Wanted
I'm preparing for an interview in a few weeks and I thought I would give threads in Boost a go, as well as do the simple producer/consumer problem I learned in school. Haven't done it in quite a while, so I was curious what you guys think of this. What should I add to make it a better example, etc.? Thanks for the feedback! :)

```cpp
//////////////////////////////////////////////////////////////////////////
boost::mutex bufferMutex;
deque<int> buffer;
const int maxBufferSize = 5;
//////////////////////////////////////////////////////////////////////////

bool AddToBuffer(int i)
{
    if (buffer.size() < maxBufferSize)
    {
        buffer.push_back(i);
        return true;
    }
    else
    {
        return false;
    }
}

bool GetFromBuffer(int& toReturn)
{
    if (buffer.size() == 0)
    {
        return false;
    }
    else
    {
        toReturn = buffer[buffer.size()-1];
        buffer.pop_back();
        return true;
    }
}

struct Producer
{
    int ID;

    void operator()()
    {
        while (true)
        {
            boost::mutex::scoped_lock lock(bufferMutex);
            int num = dice();
            bool result = AddToBuffer(num);
            lock.unlock(); //safe area done

            if (result)
            {
                cout << "Producer " << this->ID << " Added " << num << endl;
            }
            else
            {
                cout << "!!Buffer was Full!!" << endl;
            }

            //Added
            //Now wait
            boost::xtime xt;
            xtime_get(&xt, boost::TIME_UTC);
            xt.nsec += 1000000 + 100000 * (rand() % 1000);
            boost::thread::sleep(xt);
        }
    }
};

struct Consumer
{
    int ID;

    void operator()()
    {
        while (true)
        {
            int returnedInt = 0;
            boost::mutex::scoped_lock lock(bufferMutex);
            bool result = GetFromBuffer(returnedInt);
            lock.unlock(); //safe area done

            if (result)
            {
                cout << "\tConsumer " << this->ID << " Took Out " << returnedInt << endl;
            }
            else
            {
                cout << "!!Buffer was Empty!!" << endl;
            }

            //Added
            //Now wait
            boost::xtime xt;
            xtime_get(&xt, boost::TIME_UTC);
            xt.nsec += 1000000 + 100000 * (rand() % 1000);
            boost::thread::sleep(xt);
        }
    }
};

void main()
{
    Producer p, p2;
    Consumer c, c2;
    p.ID = 1; p2.ID = 2;
    c.ID = 1; c2.ID = 2;

    boost::thread thread1(boost::ref(p));
    boost::thread thread2(boost::ref(c));
    boost::thread thread3(boost::ref(p2));
    boost::thread thread4(boost::ref(c2));

    int x;
    cin >> x;
}
```
If you are already wrapping your buffer object in calls like AddToBuffer and GetFromBuffer, it would make more sense to put the locking inside your wrapper functions. In addition, you are making an explicit call to unlock, which completely defeats the purpose of scoped_lock; scoped_lock uses Resource Acquisition Is Initialization (RAII) to both acquire and release the lock. A better use would be to place the critical section within a block so that the mutex is released when the lock goes out of scope rather than by an explicit call to the unlock function, as scopes are less fragile. For example:

```cpp
// Code that doesn't need locking
{
    boost::mutex::scoped_lock lck(bufferMutex); // Lock is acquired here
    // Code that needs to be synchronized
} // Lock is automatically released here without explicit call to unlock()
// More code that doesn't need locking
```
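Beyond the locking style, a classic improvement to this exercise is to block on condition variables instead of sleeping and printing "buffer full/empty". Here is a minimal sketch with C++11 primitives (the Boost equivalents, boost::condition_variable and friends, are nearly identical; all names here are mine, not from the post):

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t cap) : capacity_(cap) {}

    void put(int v) {
        std::unique_lock<std::mutex> lock(m_);
        notFull_.wait(lock, [&] { return q_.size() < capacity_; });
        q_.push_back(v);
        notEmpty_.notify_one();   // wake a waiting consumer
    }

    int take() {
        std::unique_lock<std::mutex> lock(m_);
        notEmpty_.wait(lock, [&] { return !q_.empty(); });
        int v = q_.front();
        q_.pop_front();
        notFull_.notify_one();    // wake a waiting producer
        return v;
    }

private:
    std::size_t capacity_;
    std::deque<int> q_;
    std::mutex m_;
    std::condition_variable notFull_, notEmpty_;
};
```

Producers and consumers then simply call put/take and sleep inside the buffer when it is full or empty, with locking encapsulated in the wrapper as the answer recommends.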
2,900,392
2,900,574
Truncate C++ string fields generated by ostringstream, iomanip:setw
In C++ I need string representations of integers with leading zeroes, where the representation has 8 digits and no more than 8 digits, truncating digits on the right side if necessary. I thought I could do this using just ostringstream and iomanip's setw(), like this:

```cpp
int num_1 = 3000;
ostringstream out_target;
out_target << setw(8) << setfill('0') << num_1;
cout << "field: " << out_target.str() << " vs input: " << num_1 << endl;
```

The output here is:

    field: 00003000 vs input: 3000

Very nice! However, if I try a bigger number, setw lets the output grow beyond 8 characters:

```cpp
int num_2 = 2000000000;
ostringstream out_target;
out_target << setw(8) << setfill('0') << num_2;
cout << "field: " << out_target.str() << " vs input: " << num_2 << endl;
out_target.str("");
```

Output:

    field: 2000000000 vs input: 2000000000

The desired output is "20000000". There's nothing stopping me from using a second operation to take only the first 8 characters, but is field truncation truly missing from iomanip? Would Boost formatting do what I need in one step?
I can't think of any way to truncate a numeric field like that. Perhaps it has not been implemented because it would change the value. ostream::write() allows you to truncate a string buffer simply enough, as in this example...

```cpp
int num_2 = 2000000000;
ostringstream out_target;
out_target << setw(8) << setfill('0') << num_2;
cout << "field: ";
cout.write(out_target.str().c_str(), 8);
cout << " vs input: " << num_2 << endl;
```
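Equivalently, std::string::substr gives the truncated-and-padded field as a value rather than only writing it to a stream; this is a small sketch of the two-step approach the questioner mentioned:

```cpp
#include <cassert>
#include <iomanip>
#include <sstream>
#include <string>

// Pad to 8 digits with leading zeroes, then keep only the first 8 characters.
std::string fixed_field(int n) {
    std::ostringstream out;
    out << std::setw(8) << std::setfill('0') << n;
    return out.str().substr(0, 8);
}
```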
2,900,512
2,900,537
Ctypes "symbol not found" for dynamic library in OSX
I have made a C++ library and have built a .dylib dynamic library from it. However, when I load it with ctypes, it fails. Something doesn't seem to have linked properly. I have no idea why.

The error (the relevant part):

```
cscalelib.setup_framebuffer(flip,surface.frame_buffer,surface.texture,surface._scale[0],surface._scale[1])
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ctypes/__init__.py", line 325, in __getattr__
  func = self.__getitem__(name)
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ctypes/__init__.py", line 330, in __getitem__
  func = self._FuncPtr((name_or_ordinal, self))
AttributeError: dlsym(0x56ecd0, setup_framebuffer): symbol not found
```

Here's the C++ code, which is still in progress but should work with what I have so far:

```cpp
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>
#include <vector.h>

void setup_framebuffer(bool flip, GLuint frame_buffer_id, GLuint texture_id, int width, int height)
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frame_buffer_id);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, texture_id, 0);
    glPushAttrib(GL_VIEWPORT_BIT);
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); //Load the projection matrix
    if (flip) {
        gluOrtho2D(0, width, height, 0);
    } else {
        gluOrtho2D(0, width, 0, height);
    }
}

void end_framebuffer()
{
    glPopAttrib();
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); //Load the projection matrix
    gluOrtho2D(0, 1280, 720, 0); //Set an orthographic view
}

void add_lines(bool antialias, vector< vector<double> > coordinates, double w, double r, double g, double b, double a)
{
    glDisable(GL_TEXTURE_2D);
    if (antialias) {
        glEnable(GL_LINE_SMOOTH); //Enable line smoothing.
    }
    glColor4d(r, g, b, a);
    glLineWidth(w);
    glBegin(GL_LINE_STRIP);
    for (int x = 0; x < coordinates.size(); x++) {
        glVertex2d(coordinates[x][0], coordinates[x][1]);
    }
    glEnd();
    if (antialias) {
        glDisable(GL_LINE_SMOOTH); //Disable line smoothing.
    }
    glEnable(GL_TEXTURE_2D);
}
```

I compiled it with:

    g++ -dynamiclib CPPEXTSCALELIB.cp -framework opengl -arch i386 -o CPPEXTSCALELIB.dylib

Here's the Python code, with "..." to represent irrelevant parts:

```python
...
from ctypes import *
...
cscalelib = CDLL(os.path.dirname(sys.argv[0]) + "/CPPEXTSCALELIB.dylib")
...
def setup_framebuffer(surface, flip=False):
    #Create texture if not done already
    if surface.texture is None:
        create_texture(surface)

    #Render child to parent
    if surface.frame_buffer is None:
        surface.frame_buffer = glGenFramebuffersEXT(1)

    cscalelib.setup_framebuffer(flip, surface.frame_buffer, surface.texture, surface._scale[0], surface._scale[1])
...
```

Thank you.
The problem is most likely the fact that you are using C++, and hence the function name will be mangled and use C++ calling conventions. If you declare the function with extern "C" then it should be exported in such a way as to callable from C code (and from Python's CTypes module).
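A minimal sketch of the fix the answer describes: wrapping the exported function in extern "C" so its name is not mangled. The function body here is a trivial stand-in, not the OpenGL code from the question:

```cpp
#include <cassert>

// extern "C" disables C++ name mangling, so dlsym()/ctypes can find the
// symbol under its plain name (e.g. "setup_framebuffer" in the question).
extern "C" int add_ints(int a, int b) {
    return a + b;
}

// Without extern "C", the exported symbol would be a mangled, platform-
// dependent name (something like _Z8add_intsii), which is why dlsym
// reported "symbol not found" for the plain name.
```

extern "C" can also wrap a whole block of declarations, which is convenient when exporting several functions from one header.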
2,900,531
2,900,561
Help with memory leak (malloc)
I've followed a tutorial to use OpenGL tessellation. In one of the callbacks there is a malloc, and it creates a leak every time I render a new frame.

```cpp
void CALLBACK combineCallback(GLdouble coords[3], GLdouble *vertex_data[4], GLfloat weight[4], GLdouble **dataOut)
{
    GLdouble *vertex;

    vertex = (GLdouble *) malloc(6 * sizeof(GLdouble));
    vertex[0] = coords[0];
    vertex[1] = coords[1];
    vertex[2] = coords[2];

    for (int i = 3; i < 6; i++)
    {
        vertex[i] = weight[0] * vertex_data[0][i] +
                    weight[1] * vertex_data[0][i] +
                    weight[2] * vertex_data[0][i] +
                    weight[3] * vertex_data[0][i];
    }

    *dataOut = vertex;
}
```

I've tried to free(vertex), but then the polygons did not render. I also tried allocating with new and then doing delete(vertex), but then the polygon rendered awkwardly. I'm not sure what to do. Thanks
You should call free on whatever dataOut points to. For example, if you did this from the calling function:

```cpp
combineCallback(coords, vertex_data, weight, &dataOut);
```

then you should call free(dataOut) after you're done using it later. If you free(vertex), that effectively means whatever dataOut points to is free to be overwritten, because you assigned the address stored in vertex to *dataOut. In other words, don't free vertex inside the callback; free whatever dataOut points to, once it is no longer needed.
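The ownership rule the answer describes can be shown in isolation: the allocation made inside the callback must be freed by whoever ultimately receives the dataOut pointer (in the real GLU tessellator the library holds it until tessellation finishes; this sketch just models the handoff, and all names are mine):

```cpp
#include <cassert>
#include <cstdlib>

// Simplified model of the combine callback: allocate, fill, hand back.
static void combine(const double coords[3], double **dataOut) {
    double *vertex = (double *) std::malloc(6 * sizeof(double));
    for (int i = 0; i < 3; i++) vertex[i] = coords[i];
    for (int i = 3; i < 6; i++) vertex[i] = 0.0;
    *dataOut = vertex;   // the receiver of dataOut now owns this allocation
}

double sum_and_release(const double coords[3]) {
    double *out = NULL;
    combine(coords, &out);
    double s = out[0] + out[1] + out[2];
    std::free(out);      // free the handed-back pointer, not inside combine
    return s;
}
```

In the questioner's case this means keeping a list of the pointers produced by combineCallback and freeing them after rendering, rather than freeing inside the callback (which is why the polygons stopped rendering).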
2,900,729
2,901,007
C++ templated factory constructor/de-serialization
I was looking at the Boost serialization library, and the intrusive way to provide support for serialization is to define a member function with this signature (simplifying):

```cpp
class ToBeSerialized
{
public:
    //Define this to support serialization
    //Notice not virtual function!
    template<class Archive>
    void serialize(Archive & ar)
    {.....}
};
```

Moreover, one way to support serialization of derived classes through base pointers is to use a macro of the type:

```cpp
//No mention of the base class(es) from which Derived_class inherits
BOOST_CLASS_EXPORT_GUID(Derived_class, "derived_class")
```

where Derived_class is some class which inherits from a base class, say Base_class. Thanks to this macro, it is possible to serialize classes of type Derived_class through pointers to Base_class correctly.

The question is: I am used in C++ to writing abstract factories implemented through a map from std::string to (pointers to) functions which return objects of the desired type (and everything is fine thanks to covariant types). However, I fail to see how I could use the above non-virtual serialize template member function to properly de-serialize (i.e. construct) an object without knowing its type (but assuming that the type information has been stored by the serializer, say in a string). What I would like to do (keeping the same nomenclature as above) is something like the following:

```cpp
XmlArchive xmlArchive; //A type of archive
xmlArchive.open("C:/ser.txt"); //Contains type information for the serialized class
Base_class* basePtr = Factory<Base_class>::create("derived_class", xmlArchive);
```

with the function on the right-hand side creating an object on the heap of type Derived_class (via default constructor; this is the part I know how to solve) and calling the serialize function of xmlArchive (here I am stuck!), i.e. do something like:

```cpp
Base_class* Factory<Base_class>::create("derived_class", xmlArchive)
{
    Base_class* basePtr = new Base_class; //OK, doable, usual map string to pointer to function
    static_cast<Derived_class*>( basePtr )->serialize( xmlArchive ); //De-serialization, how?????
    return basePtr;
}
```

I am sure this can be done (Boost serialization does it, but its code is impenetrable! :P), but I fail to figure out how. The key problem is that the serialize function is a template function, so I cannot have a pointer to a generic templated function. As the point of writing the templated serialize function is to make the code generic (i.e. not having to re-write the serialize function for different archivers), it does not make sense then to have to register all the derived classes for all possible archive types, like:

```cpp
MY_CLASS_REGISTER(Derived_class, XmlArchive);
MY_CLASS_REGISTER(Derived_class, TxtArchive);
...
```

In fact, in my code I rely on overloading to get the correct behaviour:

```cpp
void serialize( XmlArchive& archive, Derived_class& derived );
void serialize( TxtArchive& archive, Derived_class& derived );
...
```

The key point to keep in mind is that the archive type is always known, i.e. I am never using runtime polymorphism for the archive class... (again, I am using overloading on the archive type).

Any suggestion to help me out? Thank you very much in advance! Cheers
All you need is to store some sort of identifier before storing the information from the derived type. Then, upon reading, you use that identifier (which you've read first) to direct you to a factory that can then interpret the next block of information correctly and generate your derived type. This is probably what boost::serialization does at a very basic level.

Maybe something like so:

```cpp
ar >> type;
Base_class* basePtr = Factory<Base_class>::create(type, xmlArchive);
```

Then you have a map of objects that look something like so:

```cpp
struct reader_base
{
    virtual void load(xmlArchive, base_ptr) = 0;
};

template < typename T >
struct reader : reader_base
{
    virtual void load(xmlArchive, base_ptr)
    {
        static_cast<T*>(base_ptr)->serialize(xmlArchive);
    }
};
```
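A compilable sketch of that registry idea, with the archive type fixed at compile time as the questioner requires. All names are illustrative, and a string stream stands in for the archive:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

struct Base {
    virtual ~Base() {}
    virtual int value() const = 0;
};

struct Derived : Base {
    int n;
    Derived() : n(0) {}
    // Non-virtual, templated on the archive type, as in Boost.Serialization.
    template <class Archive> void serialize(Archive& ar) { ar >> n; }
    int value() const { return n; }
};

// ReaderBase erases the concrete object type; the archive type stays static,
// so the templated serialize() is instantiated per (Archive, T) pair only
// where a reader is actually registered.
template <class Archive>
struct ReaderBase {
    virtual ~ReaderBase() {}
    virtual Base* create(Archive& ar) = 0;
};

template <class Archive, class T>
struct Reader : ReaderBase<Archive> {
    Base* create(Archive& ar) {
        T* p = new T;       // default-construct...
        p->serialize(ar);   // ...then let the object read its own state
        return p;
    }
};

template <class Archive>
Base* factory_create(const std::string& tag, Archive& ar) {
    // Leaked singletons for brevity; a real registry would manage lifetime.
    static std::map<std::string, ReaderBase<Archive>*> registry;
    if (registry.empty())
        registry["derived_class"] = new Reader<Archive, Derived>;
    return registry[tag]->create(ar);
}
```

Because the registry is itself templated on the archive type, each archive gets its own map, so nothing needs runtime polymorphism on the archive side.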
2,900,785
2,900,792
What's the difference between cstdlib and stdlib.h?
When writing C++ code is there any difference between: #include <cstdlib> and #include <stdlib.h> other than the former being mostly contained within the std:: namespace? Is there any reason other than coding standards and style to use one over the other?
The first one is a C++ header and the second is a C header. Since the first uses a namespace, that would seem to be preferable.
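A small illustration of the namespace difference. Note that many implementations also inject the names into the global namespace when you include <cstdlib>, but only the std:: forms are guaranteed portable:

```cpp
#include <cassert>
#include <cstdlib>   // C++ header: declares functions in namespace std

int parse_demo() {
    // Guaranteed to exist after including <cstdlib>:
    int a = std::atoi("42");
    // With <stdlib.h> you would write plain atoi("42") instead; whether
    // plain atoi also works after <cstdlib> is implementation-defined.
    return a;
}
```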
2,900,862
2,900,890
sort an array of floats in c++
I have an array of (4) floating point numbers and need to sort the array in descending order. I'm quite new to c++, and was wondering what would be the best way to do this? Thanks.
Use std::sort with a non-default comparator:

```cpp
float data[SIZE];
data[0] = ...;
...
std::sort(data, data + SIZE, std::greater<float>());
```
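A complete version for the four-element case from the question (the values are made up for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>

// Sort a float array in descending order using std::greater as comparator.
void sort_descending(float* data, std::size_t n) {
    std::sort(data, data + n, std::greater<float>());
}
```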
2,901,152
2,901,178
What is the purpose of the garbage (files) that Qt Creator auto-generates and how can I tame them?
I'm fairly new to Qt. I'm using the new Nokia Qt SDK beta, and I'm working to develop a small application for my Nokia N900 in my free time. Fortunately, I was able to set up everything correctly, and also to run my app on the device. I've learned C++ in school, so I thought it wouldn't be so difficult.

I use Qt Creator as my IDE, because it doesn't work with Visual Studio. I also wish to port my app to Symbian, so I have run the emulator a few times, and I also compile for Windows to debug the most evil bugs. (The debugger doesn't work correctly on the device.)

I come from a .NET background, so there are some things that I don't understand. When I hit the build button, Qt Creator generates a bunch of files in my project directory:

- moc_*.cpp files - what is their purpose?
- *.o files - I assume these are the object code
- *.rss files - I don't know their purpose, but they definitely don't have anything to do with RSS
- Makefile and Makefile.Debug - I have no idea
- AppName (without extension) - the executable for Maemo, and AppName.sis - the executable for Symbian, I guess?
- AppName.loc - I have no idea
- AppName_installer.pkg and AppName_template.pkg - I have no idea
- qrc_Resources.cpp - I guess this is for my Qt resources

(where AppName is the name of the application in question)

I noticed that these files can be safely deleted; Qt Creator simply regenerates them. The problem is that they pollute my source directory, especially because I use version control: if they can be regenerated, there is no point in uploading them to SVN.

So, what is the exact purpose of these files, and how can I ask Qt Creator to place them in another directory?

Edit

What Rob recommended seems to be the most convenient solution, but I marked Kotti's answer accepted, because he provided me with the best explanation of how Qt's build mechanism works.

The solution

It seems that neither the Maemo nor the Symbian toolchain supports shadow builds as of yet, so I use these settings in my project file to solve the situation:

```
DESTDIR = ./NoSVN
OBJECTS_DIR = ./NoSVN
MOC_DIR = ./NoSVN
RCC_DIR = ./NoSVN
UI_HEADERS_DIR = ./NoSVN
```
Not a full answer to your question, but just part of it :) Also, it's googlable. If you develop in C++, you should know what a Makefile is for. Also, I think the .loc file is generally a file with localized strings / content. (source: thelins.se) Comparing the C++ build system to the Qt build system, you can see that the C++ build system (the gray boxes) is left unmodified. We are still building C++ code here. However, we add more sources and headers. There are three code generators involved here: The meta-object compiler (moc in the illustration) – the meta-object compiler takes all classes starting with the Q_OBJECT macro and generates a moc_*.cpp C++ source file. This file contains information about the class being moc’ed, such as class name, inheritance tree, etc., but also the implementation of the signals. This means that when you emit a signal, you actually call a function generated by the moc. The user interface compiler (uic in the illustration) – the user interface compiler takes designs from Designer and creates header files. These header files are then included into source files as usual, making it possible to call setupUi to instantiate a user interface design. The Qt resource compiler (rcc in the illustration) – the resource compiler is something we have not talked about yet. It makes it possible to embed images, text files, etc. into your executable, but still access them as files. We will look at this later; I just want to include it in this picture where it belongs. I hope this illustration clarifies what Qt really does to add new nice keywords to C++. If you are curious – feel free to read some of the generated files. Just don’t alter them – they are regenerated each time you build your application. If you are using QtCreator, the moc files are generated in the debug and release sub-directories of your project directory. The uic files are stored in the root of the project directory. 
The rcc files are generally boring, but I’m sure that you can find them somewhere in your project directory hierarchy. Edit: You don't have to include these files in your SVN. This is pretty much the same as committing .ncb, .pdb and other temporary files. Every time you change something in your Qt application, these temporary files are regenerated to reflect your changes, so there is no sense in committing them to SVN.
2,901,186
2,905,653
SQL Server Native Client API examples
I am writing a C++ application that needs to execute SQL queries against a SQL Server DB, and I want to do it using SQL Server Native Client. The MSDN documentation has no full reference on it and only a few examples, so I am looking for a site with more information on how to connect, execute queries and retrieve results using this API. Do you guys know where I can find more info on it? Thanks in advance, Ernesto
In addition to ODBC as Brian mentions, you can also use OLE DB and/or ADO (which actually makes OLE DB "easy" to use). The three options are briefly introduced in this blog entry. Of the ODBC, OLE DB and ADO options, I think the simplest route would be to use ADO. Using ODBC or OLE DB directly is, in my opinion, a somewhat painful process. It can certainly result in very fast code, but you pay for it in development time. This page has some simple examples. Edit Since this post was made (both question and answers), OLE DB has been deprecated by Microsoft. So going forward, it probably makes sense to use a solution that does not go through OLE DB. This blog post talks about it some.
2,901,305
2,901,339
Why doesn't this work?
I've tried to solve a memory leak in the GLU callback by creating a global variable, but now it does not draw anything: GLdouble *gluptr = NULL; void CALLBACK combineCallback(GLdouble coords[3], GLdouble *vertex_data[4], GLfloat weight[4], GLdouble **dataOut) { GLdouble *vertex; if(gluptr == NULL) { gluptr = (GLdouble *) malloc(6 * sizeof(GLdouble)); } vertex = (GLdouble*)gluptr; vertex[0] = coords[0]; vertex[1] = coords[1]; vertex[2] = coords[2]; for (int i = 3; i < 6; i++) { vertex[i] = weight[0] * vertex_data[0][i] + weight[1] * vertex_data[0][i] + weight[2] * vertex_data[0][i] + weight[3] * vertex_data[0][i]; } *dataOut = vertex; } Basically, instead of doing malloc each time in the loop (thus the memory leak), I'm using a global pointer, but this doesn't work (drawing to the screen not working), which means dataOut is not receiving the vertex data pointed to by my pointer. Why would malloc to a pointer created in the function work any differently than a global variable? Thanks
You allocate the data only once -- but the GLUtesselator needs more than one set of data at a time! What you are doing here is putting all the vertex data into a single place in memory, whereas in the original code you had memory per vertex. The GLUtesselator needs more than one vertex to function properly. You do call void gluDeleteTess(GLUtesselator *tessobj); ...afterwards, do you?
2,901,327
2,901,366
Protecting an Application's Memory From Tampering
We are adding AES 256 bit encryption to our server and client applications for encrypting the TCP/IP traffic containing sensitive information. We will be rotating the keys daily. Because of that, the keys will be stored in memory with the applications. Key distribution process: Each server and client will have a list of initial Key Encryption Key's (KEK) by day If the client has just started up or the server has just started up, the client will request the daily key from the server using the initial key. The server will respond with the daily key, encrypted with the initial key. The daily key is a randomly generated set of alphanumeric characters. We are using AES 256 bit encryption. All subsequent communications will be encrypted using that daily key. Nightly, the client will request the new daily key from the server using the current daily key as the current KEK. After the client gets the new key, the new daily key will replace the old daily key. Is it possible for another bad application to gain access to this memory illegally or is this protected in Windows? The key will not be written to a file, only stored in a variable in memory. If an application can access the memory illegally, how can you protect the memory from tampering? We are using C++ and XP (Vista/7 may be an option in the future so I don't know if that changes the answer).
I think that you may have a more fundamental problem on your hands. If there is even the faintest chance that this machine might catch a rootkit, then all of your keys are ours, as it were. On Windows, process A can read the memory of process B if any of the below are true: it has privileges to open the memory device. it has privileges to open the virtual memory of process B. it has a friend in the kernel. If you have complete control over what's running on the machine, and complete confidence that no one can introduce any surprises, you're golden. This is, of course, not unique to Windows. What is unique to Windows is the volume of rootkit malware.
2,901,556
2,901,638
How to multi-thread this?
I wish to have two threads. The first, thread1, occasionally calls the following pseudo function: void waitForThread2() { if (thread2 is not idle) { return; } notifyThread2IamReady(); // i.e. via 1st condition variable Wait for thread2 to finish exclusive access. // i.e. via 2nd condition variable. } The second, thread2, is forever in the following pseudo loop: for (;;) { Notify thread1 I am idle. Wait for thread1 to be ready. // i.e. via 1st condition variable. Notify thread1 I am exclusive. Do some work while thread1 is blocked. Notify thread1 I am busy. // i.e. via 2nd condition variable. Do some work in parallel with thread1. } What is the best way to write this such that both thread1 and thread2 are kept as busy as possible on a machine with multiple cores? I would like to avoid long delays between notification in one thread and detection by the other. I tried using pthread condition variables, but found the delay between thread2 doing 'notify thread1 I am busy' and the check in waitForThread2() on thread2IsExclusive() can be almost one second. I then tried using a volatile sig_atomic_t shared variable to control the same, but something is going wrong, so I must not be doing it correctly.
It looks to me like you are trying to do a rendezvous (a term from Ada). The second thread is sitting, waiting for the first thread to call it, then it does some work immediately while the first thread waits, and some more work after the first thread is finished. The first thread is "calling" the second thread - with an immediate timeout if the second thread is unavailable to take the call. Ada supports this directly in the language, but assuming that porting to Ada isn't an option... This could be implemented with three semaphores. Semaphore 1 indicates that thread 1 is ready to rendezvous. Semaphore 2 indicates thread 2 is ready to rendezvous. Semaphore 3 indicates the rendezvous is complete. Thread 1: Defaults with Semaphore 1 acquired. if Semaphore 2.acquire(timeout = 0) is successful # Thread 2 is ready Semaphore 1.release() # Indicate I am ready Semaphore 3.acquire() # Wait until the rendezvous is complete. Semaphore 3.release() Semaphore 1.acquire() # Indicate I am not ready Semaphore 2.release() # I am no longer using thread 2. Do concurrent work Thread 2: Defaults with Semaphore 2 acquired. Loop forever Semaphore 3.acquire() # Indicate Rendezvous is not complete. Semaphore_2.release() # Indicate I am ready Semaphore_1.acquire() # Wait for Thread 1 to be ready Joint processing Semaphore 1.release() # I am no longer using thread 1. Semaphore 3.release() # Rendezvous is complete. Semaphore 2.acquire() # I am not ready Post-processing NOTE: Written from scratch, not tested. Looks a lot more complicated than I thought it would be when I started; have I missed something?
2,901,590
2,901,621
Cannot run an executable binary file on another Linux System?
I'm using Ubuntu 10.04 and Qt 4.6, and I've created an executable binary file on my own computer through QtCreator. Now I want to put my executable file on CentOS 5, but it seems that this executable file cannot run on CentOS. The error message is bash: ./[filename]: cannot execute binary file Now I know this comes from the 32-bit vs. 64-bit problem, and I successfully created a 32-bit executable file. However, this executable file still cannot run on CentOS because of a dynamic linkage problem; it always shows: Error while loading shared libraries: libQtGUI.so.4: cannot open shared object file: No such file or directory I tried to add the "-static" flag in the .pro file QMAKE_CFLAGS_RELEASE += -Os -static QMAKE_CPPFLAGS_RELEASE += -Os -static QMAKE_CXXFLAGS_RELEASE += -Os -static QMAKE_CCFLAGS_RELEASE += -Os -static however, it looks like that only generates a "static binary" but not a "statically linked" one; the dependency still exists. I also tried to add the following line to the .pro file: QMAKE_LFLAGS += static But the project cannot compile after doing this. I don't have permission to install Qt on CentOS; how can I compile this project with static linkage so that the executable file can run independently? Thanks for your help!
Check 64-bit vs. 32-bit - file(1) is your friend here. Then check what libraries are missing with ldd(1). Edit: Take a look at this SO question Qt static linking and deployment.
2,901,738
2,901,762
How do bezier handles work?
On Wikipedia I found information about Bezier curves and made a function to generate the in-between points for a Bezier polygon. I noticed that Expression Design uses Bezier handles. This allows a circle to be made with 4 points, each with a Bezier handle. I'm just not sure mathematically how this works in relation to the formula for the Bezier point at time T. How do these handle vectors work to modify the shape? Basically, what's their relation to the Bezier formula? Thanks
Basically, the 4 points used in the cubic bezier formula are the 2 points the curve is between, plus the two points of the handles on that "side" of the first two points (1 handle from each of the first points). If there are double handles on each point, the handles on the "opposite" side of the points from the curve currently being calculated are ignored (they're used for generating the curve that comes out of the opposite side). The actual generation method used for cubic bezier curves is outlined on the Wikipedia page you linked in your question.
2,902,186
2,902,277
pass fortran 77 function to C/C++
Is it possible to pass a FORTRAN 77 function as a callback function pointer to C/C++? If so, how? The information I found on the web relates to Fortran 90 and above, but my legacy code base is in 77. Many thanks
If it can be done in FORTRAN 77, it will be compiler and platform specific. The new ISO C Binding of Fortran 2003 provides a standard way of mixing Fortran and C, and any language that follows or can follow the calling conventions of C, such as C++. While formally a part of Fortran 2003, and while there are extremely few Fortran compilers that fully support the entirety of Fortran 2003, the ISO C Binding is supported by numerous Fortran 95 compilers, including gfortran, g95, Sun, ifort, etc. So I recommend using one of these Fortran 95 compilers and the ISO C Binding method rather than figuring out a compiler-specific method. Since FORTRAN 77 is a subset of Fortran 95, why not compile your legacy code with one of these compilers, using Fortran 95 to add this new feature? I have called Fortran procedures from C using the ISO C Binding, but haven't passed them as pointers. It should be possible. The steps are: 1) you declare the Fortran function with the Bind(C) attribute, 2) you declare all of the arguments using special types, such as integer(c_int), that match the types of C. Steps 1 & 2 make the Fortran function interoperable with C. 3) You obtain a C pointer to this Fortran function with the Fortran intrinsic function "c_funloc", assigning the pointer value to a pointer of type "c_funptr". 4) In the Fortran code, you declare the C routine that you want to pass the function pointer to with an Interface, declaring it in Fortran terms, but using the Bind(C) attribute and interoperable types so that the Fortran compiler knows to use the C calling convention -- making the C routine interoperable with Fortran. Then when you call the C routine in the Fortran code, you can pass it the function pointer created in step 3. UPDATE: Code example: The Fortran main program "test_func_pointer" passes a pointer to the Fortran function "my_poly" to the C routine "C_Func_using_Func_ptr" and receives the result back from that C function. 
module func_pointer_mod use, intrinsic :: iso_c_binding implicit none interface C_func_interface function C_Func_using_Func_ptr ( x, Func_ptr ) bind (C, name="C_Func_using_Func_ptr") import real (c_float) :: C_Func_using_Func_ptr real (c_float), VALUE, intent (in) :: x type (c_funptr), VALUE, intent (in) :: Func_ptr end function C_Func_using_Func_ptr end interface C_func_interface contains function my_poly (x) bind (C, name="my_poly") real (c_float) :: my_poly real (c_float), VALUE, intent (in) :: x my_poly = 2.0 * x**2 + 3.0 * x + 5.0 return end function my_poly end module func_pointer_mod program test_func_pointer use, intrinsic :: iso_c_binding use func_pointer_mod implicit none type (c_funptr) :: C_func_ptr C_func_ptr = c_funloc ( my_poly ) write (*, *) C_Func_using_Func_ptr ( 2.5_c_float, C_func_ptr ) stop end program test_func_pointer and float C_Func_using_Func_ptr ( float x, float (*Func_ptr) (float y) ) { return ( (*Func_ptr) (x) ); }
2,902,511
2,902,820
C++ double division by 0.0 versus DBL_MIN
When finding the inverse square root of a double, is it better to clamp invalid non-positive inputs at 0.0 or DBL_MIN? (In my example below, double b may end up being negative due to floating point rounding errors and because the laws of physics are slightly fudged in the game.) Both division by 0.0 and DBL_MIN produce the same outcome in the game because 1/0.0 and 1/DBL_MIN are effectively infinity. My intuition says DBL_MIN is the better choice, but would there be any case for using 0.0? Like perhaps sqrt(0.0), 1/0.0 and multiplication by 1.#INF000000000000 execute faster because they are special cases. double b = 1 - v.length_squared()/(c*c); #ifdef CLAMP_BY_0 if (b < 0.0) b = 0.0; #endif #ifdef CLAMP_BY_DBL_MIN if (b <= 0.0) b = DBL_MIN; #endif double lorentz_factor = 1/sqrt(b); double division in MSVC: 1/0.0 = 1.#INF000000000000 1/DBL_MIN = 4.4942328371557898e+307
When dealing with floating point math, "infinity" and "effectively infinity" are quite different. Once a number stops being finite, it tends to stay that way. So while the value of lorentz_factor is "effectively" the same for both methods, depending on how you use that value, later computations can be radically different. sqrt(lorentz_factor) for instance remains infinite if you clamp to 0, but will actually be calculated if you clamp to some very very small number. So the answer will largely depend on what you plan on doing with that value once you've clamped it.
2,902,717
2,902,768
C++: Is there a way to limit access to certain methods to certain classes without exposing other private members?
I have a class with a protected method Zig::punt() and I only want it to be accessible to the class "Avocado". In C++, you'll normally do this using the "friend Avocado" specifier, but this will cause all of the other variables to become accessible to "Avocado" class; I don't want this because this breaks encapsulation. Is what I want impossible, or does there already exist an obscure trick out there that I can use to achieve what I want? Or possibly alternative class design patterns that'll achieve the same thing? Thanks in advance for any ideas!
Here's an ugly yet working trick: class AvocadoFriender { protected: virtual void punt() = 0; friend class Avocado; }; class Zig : public AvocadoFriender { ... protected: void punt(); }; Basically you add a mixin class that exposes to Avocado only the part of the interface that you want. We take advantage of the fact that by inheriting from a class that declares Avocado a friend, you expose nothing more than what the base itself exposed.
2,902,749
2,903,128
Singleton class issue in Qt
I created a singleton class and am trying to access it from another class, but I get the error "cannot access private member". Setupconfig is my singleton class, and I am trying to access it from another class which has a QMainWindow. Here is the error message: Error 'Setupconfig::Setupconfig' : cannot access private member declared in class 'Setupconfig' Setupconfig.h static Setupconfig *buiderObj() { static Setupconfig *_setupObj= new Setupconfig(); return _setupObj; } private: Setupconfig(); ////////////////////////////////////// EasyBudget.h class EasyBudget : public QMainWindow, public Ui::EasyBudgetClass, public Setupconfig { Q_OBJECT public: Setupconfig *setupObj; } ////////////////////////////////////// EasyBudget.cpp EasyBudget.cpp EasyBudget::EasyBudget(QWidget *parent, Qt::WFlags flags) : QMainWindow(parent,Qt::FramelessWindowHint) { setupObj=Setupconfig::buiderObj(); }
Why are you deriving "EasyBudget" from the singleton class "SetupConfig"? Remove that part to resolve your problem. EasyBudget.h class EasyBudget : public QMainWindow, public Ui::EasyBudgetClass {......
2,902,752
2,907,986
Best way to have common class shared by both C++ and Ruby?
I am currently working on a project where a team of us are designing a game, all of us are proficient in ruby and some (but not all) of us are proficient in c++. Initially we made the backend in ruby but we ported it to c++ for more speed. The c++ port of the backend has exactly the same features and algorithms as the original ruby code. However we still have a bunch of code in ruby that does useful things but we would rather not have to port it all, so we want to keep using the ruby code and get data from the c++ classes. Is this unrealistic? Our first thought was that we could save some of the data structures in something like XML or redis and call that, but some of the developers don't like that idea. We don't need any particularly complex data structures to be passed between the different parts of the code, just tuples, strings and ints. Is there any way of integrating the ruby code so that it can call the c++ stuff natively? Will we need to embed code? Will we have to make a ruby extension? If so are there any good resources/tutorials you could suggest? 
For example say we have some code like this in the c++ backend: class The_game{ private: bool printinfo; //print the player diagnostic info at the beginning if true int numplayers; std::vector<Player*> players; string current_action; int action_is_on; // the index of the player in the players array that the action is now on //more code here public: Table(std::vector<Player *> in_players, std::vector<Statistics *> player_stats ,const int in_numplayers); ~Table(); void play_game(); History actions_history; }; class History{ private: int action_sequence_number; std::vector<Action*> recent_actions; public: void print_history(); void add_action(Action* the_action_to_be_added); int get_action_sequence_number(){ return action_sequence_number;} bool history_actions_are_equal(); int last_action_size(int street,int number_of_actions_ago); History(); ~History(); }; Is there any way to natively call something in the actions_history via The_game object in ruby? (The objects in the original ruby code all had the same names and functionality) By this I mean: class MyRubyClass def method1(arg1) puts arg1 self.f() # ... but still available puts cpp_method.the_current_game.actions_history.get_action_sequence_number() end # Constructor: def initialize(arg) puts "In constructor with arg #{arg}" #get the c++ object here and call it cpp_method end end Is this possible? Any advice or suggestions are appreciated.
For creating ruby extensions you also might want to have a look at: Rice RubyInline
2,902,917
2,902,943
static assert for const variables?
Static asserts are very convenient for checking things in compile time. A simple static assert idiom looks like this: template<bool> struct StaticAssert; template<> struct StaticAssert<true> {}; #define STATIC_ASSERT(condition) do { StaticAssert<(condition)>(); } while(0) This is good for stuff like STATIC_ASSERT(sizeof(float) == 4) and: #define THIS_LIMIT (1000) ... STATIC_ASSERT(THIS_LIMIT > OTHER_LIMIT); But using #define is not the "C++" way of defining constants. C++ would have you use an anonymous namespace: namespace { const int THIS_LIMIT = 1000; } or even: static const int THIS_LIMIT = 1000; The trouble with this is that with a const int you can't use STATIC_ASSERT() and you must resort to a run-time check which is silly. Is there a way to properly solve this in current C++? I think I've read C++0x has some facility to do this... EDIT Ok so this static const int THIS_LIMIT = 1000; ... STATIC_ASSERT(THIS_LIMIT > 0); compiles fine But this: static const float THIS_LIMIT = 1000.0f; ... STATIC_ASSERT(THIS_LIMIT > 0.0f); does not. (in Visual Studio 2008) How come?
Why, you can still static assert with const int: #define static_assert(e) extern char (*ct_assert(void)) [sizeof(char[1 - 2*!(e)])] static_assert( THIS_LIMIT > OTHER_LIMIT ) Also, use boost! BOOST_STATIC_ASSERT( THIS_LIMIT > OTHER_LIMIT ) ... you'll get a lot nicer error messages...
2,902,976
2,903,018
Programming Language to manipulate an Access Database
By quote, what is the appropriate language to manipulate an Access database? A Windows user interface to manipulate an existing Access Database. ... and why?
Visual Basic .NET or C# would be my choice, as there are enough built-in objects and classes to support creating medium-size database-driven applications without writing much code :) Objects in the OleDb namespace can be used to connect to the database and insert/retrieve/update data. Here is a C# tutorial http://msdn.microsoft.com/en-us/library/aa288452(VS.71).aspx
2,903,144
2,909,765
How to set input focus to a shown dialog in Qt?
In a button click slot, I create and exec() a dialog with a NULL parent. Inside the dialog's constructor, I have: this->activateWindow(); this->raise(); this->setFocus(); The dialog is application modal and has strong focus. However, it does NOT respond to keyboard events until I click on it. How do I make the dialog get focus without having to click it?
The problem was that I was setting the Qt::Tool window flag. Using Qt::Popup or Qt::Window instead will cause input focus to be set automatically when the dialog is shown. I used Qt::Window myself. Some of the other flags will probably work as well, but the main thing is that a QDialog with the Qt::Tool flag will not automatically receive input focus when the dialog is shown.
2,903,162
2,903,223
Reading from istream
How can I extract data (contents) from an istream without using operator>>()?
If you want to read characters from the istream, then by using get and getline: std::istream::get std::istream::getline For general reading you may want to use read: std::istream::read
2,903,179
39,926,700
intrusive_ptr: Why isn't a common base class provided?
boost::intrusive_ptr requires intrusive_ptr_add_ref and intrusive_ptr_release to be defined. Why isn't a base class provided which will do this? There is an example here: http://lists.boost.org/Archives/boost/2004/06/66957.php, but the poster says "I don't necessarily think this is a good idea". Why not? Update: I don't think the fact that this class could be misused with Multiple Inheritance is reason enough. Any class which derives from multiple base classes with their own reference count would have the same issue. Whether these refcounts are implemented via a base class or not makes no difference. I don't think there's any issue with multithreading; boost::shared_ptr offers atomic reference counting and this class could too.
Boost provides a facility for that. It can be configured for either thread-safe or thread-unsafe refcounting: #include <boost/intrusive_ptr.hpp> #include <boost/smart_ptr/intrusive_ref_counter.hpp> class CMyClass : public boost::intrusive_ref_counter< CMyClass, boost::thread_unsafe_counter> ... boost::intrusive_ptr<CMyClass> myPtr; http://www.boost.org/doc/libs/1_62_0/libs/smart_ptr/intrusive_ref_counter.html
2,903,360
2,903,388
How to check whether a file exists in C:\ drive using VC++?
I want to check whether a file is present on the C drive or not. Can anyone tell me how? Update: I got errors. I am using VC++ 2008 #include "stdafx.h" #include <stdio.h> int main(int argc, _TCHAR argv[]) { FILE * f = fopen("C:\\Program Files (x86)\\flower.jpeg"); if (f == NULL) { file_exists = FALSE: } else { file_exists = TRUE; fclose(f); } return 0; } Update 2 When trying to cut and paste code from the linked example below: #include "stdafx.h" #include <windows.h> #include "Shlwapi.h" int tmain(int argc, _TCHAR argv[]) { // Valid file path name (file is there). TCHAR buffer_1[ ] = _T("C:\\TEST\\file.txt"); TCHAR *lpStr1; lpStr1 = buffer_1; // Return value from "PathFileExists". int retval; // Search for the presence of a file with a true result. retval = PathFileExists(lpStr1); return 0; } I am getting this error: files.obj : error LNK2019: unresolved external symbol __imp__PathFileExistsW@4 referenced in function _wmain
Given you mention the C drive, I'm assuming you can use the Windows API. If so, PathFileExists(LPCTSTR) will do the job. Note that besides including the header you must link against Shlwapi.lib - that is what the unresolved __imp__PathFileExistsW linker error in your update means.
2,903,394
2,903,413
How to compare two vectors, in C++
This is my code: #include <algorithm> void f() { int a[] = {1, 2, 3, 4}; int b[] = {1, 2, 100, 101}; // I want to do something like this: // int* found = compare(a[0], a[3], b[0]); // in order to get a pointer to a[2] } Maybe I missed this algorithm in the manual… Please help :)
Sort your array and use mismatch.
2,903,427
2,903,498
plug-in architecture based c/c++ application
I need to know how to start writing an application based on a plug-in architecture. I mean, how to write the base code and let others develop the application further by adding the plug-ins they write. I know that there are some problems in doing so in C++; most people use another language, such as Python, to add plug-ins to their C++ application.
I think this is not the answer you expect, but you could try to examine the Rainmeter sources. It's written in C++ (some places could be done better, to my mind, but overall it's ok) and the whole application is built so that it just handles plugins. Even the simple API is done via plugins; there is also a bunch of examples of contributed plugins, I mean, written by someone else (I did that too, one day). I think you could actually learn a lot of new tricks in plugin-based development by looking at other applications. Also worth mentioning, another good sample is Miranda IM. Edit: Also, if I had the same task, I would actually add a Python (or something like that) backend to my application and use it as the language for the SDK (for example, using boost::python).
2,903,629
2,903,658
Why doesn't this inner class compile?
This is my code: #include <algorithm> class A { void f() { struct CompareMe { bool operator() (int i, int j) { return i < j; } } comp; int a[] = {1, 2, 3, 4}; int found = std::min_element(a[0], a[3], comp); } } Error message: no matching function for call to ‘min_element(int&, int&, A::f()::CompareMe&) What am I doing wrong?
The error has nothing to do with your inner class. STL algorithms work on iterators. An iterator into an array of ints is an int*. The second of those iterators must always point one past the last element of the range. This int* found = std::min_element(&a[0], &a[4], comp); works fine for me. However, as far as I remember, C++98 did not allow templates to be instantiated with function-local class types. Unless C++03 fixed that, it's still forbidden. (I think C++11 will allow it, though.) So in order to be fully compliant with the current C++ standard, you would have to move your comparator out of the function into the class. (The class or a file-local unnamed namespace seem good candidates for where to move it to.)
2,903,804
2,903,824
can't compile min_element in c++
This is my code: #include <algorithm> #include <vector> #include <string> using namespace std; class A { struct CompareMe { bool operator() (const string*& s1, const string*& s2) const { return true; } }; void f() { CompareMe comp; vector<string*> v; min_element(v.begin(), v.end(), comp); } }; And this is the error: error: no match for call to ‘(A::CompareMe) (std::string*&, std::string*&)’ test.cpp:7: note: candidates are: bool A::CompareMe::operator()(const std::string*&, const std::string*&) const I feel that there is some syntax defect, but can't find out which one. Please, help!
Your placement of const is wrong. A T*& cannot be implicitly converted to a const T*&. Try bool operator() (const string* const& s1, const string* const& s2) const { ... // ^^^^^ ^^^^^ instead. Or just pass by value (thanks Mike): bool operator() (const string* s1, const string* s2) const { ... which will be more efficient for simple objects like a pointer, if the compiler uses a standard ABI.
2,903,882
2,903,915
c++: strange syntax in what() method of std::exception
When I inherit from std::exception in order to define my own exception type, I need to override the what() method, which has the following signature: virtual const char* what() const throw(); This definitely looks strange to me, as if there were two method names in the signature. Is this some very specific syntax, like with pure virtual methods, e.g.: virtual int method() const = 0; or is this a feature that could somehow be used in other contexts, too? And if so, what could it be used for?
These are called exception specifications. throw() doesn't allow any exception to be thrown from inside this method; throw(int) would only allow exceptions of type int to be thrown. Exception specifications will be dropped in C++0x. This gives a very good explanation of the reasons.
2,904,010
5,906,553
What is the difference between C++, Java and JavaScript exception handling?
They are very different kinds of languages and the way they handle exceptions might be rather different. How is exception handling implemented and what are the implementation differences between these languages? I am asking this question also because I noticed that C++ exception handling seems to be very slow compared to the JavaScript version.
The most detailed answer I found regarding Exception handling and performance/implementation is on this page: http://lazarenko.me/tips-and-tricks/c-exception-handling-and-performance
2,904,104
2,905,078
How to differ between Windows Mobile 6.5.3 and previous versions during runtime?
Is there an established or unofficial way of finding out if my application is running on a Windows Mobile 6.5.3 device or if it's a previous version? Managed or native doesn't matter and I don't mind interop-ing.
Since I want some reputation ;) Here is the information I found on the web: How to detect Windows Mobile 6.1 (Detecting AKUs) List of AKUs on channel9 Windows CE / Windows Mobile Versions
2,904,137
2,904,827
How best to deal with warning c4305 when type could change?
I'm using both Ogre and NxOgre, which both have a Real typedef that is either float or double depending on a compiler flag. This has resulted in most of our compiler warnings now being: warning C4305: 'argument' : truncation from 'double' to 'Ogre::Real' When initialising variables with 0.1 for example. Normally I would use 0.1f but then if you change the compiler flag to double precision then you would get the reverse warning. I guess it's probably best to pick one and stick with it but I'd like to write these in a way that would work for either configuration if possible. One fix would be to use #pragma warning (disable : 4305) in files where it occurs, I don't know if there are any other more complex problems that can be hidden by not having this warning. I understand I would push and pop these in header files too so that they don't end up spreading across code. Another is to create some macro based on the accuracy compiler flag like: #if OGRE_DOUBLE_PRECISION #define INIT_REAL(x) (x) #else #define INIT_REAL(x) static_cast<float>( x ) #endif which would require changing all the variable initialisation done so far but at least it would be future proof. Any preferences or something I haven't thought of?
The simple solution would be to just add a cast: static_cast<Ogre::Real>(0.1); or you could write a function to do the conversion for you (similar to your macro, but avoiding all the yucky problems macros bring: template <typename T> inline Ogre::Real real(T val) { return static_cast<Ogre::Real>(val); } Then you can just call real(0.1) and get the value as an Ogre::Real.
2,904,244
2,904,307
C++ Grid Controls For Desktop Applications
Is there a C++ library like ExtJS that can be used in desktop applications written in C++?
There is also Qt, which has pretty customizable grid widgets and a lot of examples to learn how to use them.
2,904,246
2,904,270
How to debug without Visual Studio?
Python -> C++ DLL -> C# DLL I have a COM interop C# DLL that is loaded in a wrapper C++ DLL through the .tlb file generated in C#, to be used in a Python project. When I run it on my computer it works fine, but when I run it on a computer that just got formatted it gives: WindowsError: exception code 0xe0434f4d I have the C++ redistributable installed and the .NET Compact Framework 3.5 on the formatted computer. How can I see what the actual exception is on a computer that does not have Visual Studio installed? How can I debug all of this? I can't debug the DLLs themselves, can I? Note: on my computer all works well, so maybe some DLL or file is missing. I already used Dependency Walker to see if there's some DLL missing, and nope!
Download the Microsoft Debugging Tools for Windows. It contains the WinDbg debugger, which you can use for debugging on machines without Visual Studio. Advantage of WinDbg over Visual Studio is that you have many more low-level commands to find problems. Disadvantage of WinDbg is that it's not that user friendly (compared to Visual Studio).
2,904,304
2,904,339
overloading -> operator in c++
I saw this code but I couldn't understand what it does: inline S* O::operator->() const { return ses; //ses is a private member of type S* } So what happens now if I use ->?
If you have an instance of class O and you do obj->func(), then operator-> returns ses, and the returned pointer is then used to call func(). Full example: struct S { void func() {} }; class O { public: inline S* operator->() const; private: S* ses; }; inline S* O::operator->() const { return ses; } int main() { O object; object->func(); return 0; }
2,904,341
3,418,216
Exceptions and Access Violations in Paint events in Windows
After executing some new code, my C++ application started to behave strangely (incorrect or incomplete screen updates, sometimes no screen updates at all). After a while we found out that the new code was causing an Access Violation. Strangely enough, the application simply keeps on running (but with the incorrect screen updates). At first we thought the problem was caused by a "try-catch(...)" construction (put there by an overactive ex-colleague), but several hours later (carefully inspecting the call stacks, adding many breakpoints, ...) we found out that if there's an Access Violation in a paint event, Windows catches it, and simply continues running the application. Is this normal behavior? Is it normal that Windows catches exceptions/errors during a paint event? Is there a way to disable this? (if not, it would mean that we have to always run in the debugger with all exceptions enabled while testing our code). EDIT: On XP the application correctly crashes (the wanted behavior after an Access Violation) On Vista and Windows 7 the application keeps on running
It's a known defect. Check the hotfix. http://support.microsoft.com/kb/976038
2,904,376
2,904,392
Use a template parameter in a preprocessor directive?
Is it possible to use a non-type constant template parameter in a preprocessor directive? Here's what I have in mind: template <int DING> struct Foo { enum { DOO = DING }; }; template <typename T> struct Blah { void DoIt() { #if (T::DOO & 0x010) // some code here #endif } }; When I try this with something like Blah<Foo<0xFFFF>>, VC++ 2010 complains something about unmatched parentheses in the line where we are trying to use #if. I am guessing the preprocessor doesn't really know anything about templates and this sort of thing just isn't in its domain. What say?
No, this is not possible. The preprocessor is pretty dumb, and it has no knowledge of the structure of your program. If T::DOO is not defined in the preprocessor (and it can't be, because of the ::), it cannot evaluate that expression and will fail. However, you can rely on the compiler to do the smart thing for you: if (T::DOO & 0x010) { // some code here } Constant expressions and dead branches are optimized away even at lower optimization settings, so you can safely do this without any runtime overhead.
2,904,451
2,904,481
refactoring my code. My headers (Header Guard Issues)
I had a post similar to this awhile ago based on a error I was getting. I was able to fix it but since then I been having trouble doing things because headers keep blocking other headers from using code. Honestly, these headers are confusing me and if anyone has any resources that will address these types of issues, that will be helpful. What I essentially want to do is be able to have rModel.h be included inside RenderEngine.h. every time I add rModel.h to RenderEngine.h, rModel.h is no longer able to use RenderEngine.h. (rModel.h has a #include of RenderEngine.h as well). So in a nutshell, RenderEngine and rModel need to use each others functionalities. On top of all this confusion, the Main.cpp needs to use RenderEngine. stdafx.h #include "targetver.h" #define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers // Windows Header Files: #include <windows.h> // C RunTime Header Files #include <stdlib.h> #include <malloc.h> #include <memory.h> #include <tchar.h> #include "resource.h" main.cpp #include "stdafx.h" #include "RenderEngine.h" #include "rModel.h" // Global Variables: RenderEngine go; rModel *g_pModel; ...code........... rModel.h #ifndef _MODEL_H #define _MODEL_H #include "stdafx.h" #include <vector> #include <string> #include "rTri.h" #include "RenderEngine.h" ........Code RenderEngine.h #pragma once #include "stdafx.h" #include "d3d10.h" #include "d3dx10.h" #include "dinput.h" #include "rModel.h" .......Code......
As I wrote in my previous answer on this question, read up on forward declarations in C++. This may solve your problems, but, again, circular header dependencies indicate poor application design.
2,904,467
2,904,618
Use of const double for intermediate results
I am writing a simulation program and am wondering if the use of const double is of any use when storing intermediate results. Consider this snippet: double DoSomeCalculation(const AcModel &model) { (...) const double V = model.GetVelocity(); const double m = model.GetMass(); const double cos_gamma = cos(model.GetFlightPathAngleRad()); (...) return m*V*cos_gamma*Chi_dot; } Note that the sample is there only to illustrate -- it might not make too much sense from the engineering side of things. The motivation for storing, for example, cos_gamma in a variable is that this cosine is used many times in other expressions covered by (...) and I feel that the code gets more readable when using cos_gamma rather than cos(model.GetFlightPathAngleRad()) in various expressions. Now the actual question is this: since I expect the cosine to be the same throughout the code section and I actually created the thing only as a placeholder and for convenience, I tend to declare it const. Is there an established opinion on whether this is good or bad practice, or whether it might bite me in the end? Does a compiler make any use of this additional information, or am I actually hindering the compiler from performing useful optimizations? Arne
Given your code: const double V = model.GetVelocity(); const double m = model.GetMass(); const double cos_gamma = cos(model.GetFlightPathAngleRad()); I would probably leave cos_gamma as it is. I'd consider changing V and m to references though: const double &V = model.GetVelocity(); const double &m = model.GetMass(); This way you're making it clear that these are strictly placeholders. It does, however, raise the possibility of lifetime issues -- if you use a reference, you clearly have to ensure that what it refers to has sufficient lifetime. At least from the looks of things, this probably won't be a problem though. First of all, GetVelocity() and GetMass() probably return values, not references (in which case you're initializing the references with temporaries, and the lifetime of the temporary is extended to the lifetime of the reference it initializes). Second, even if you return an actual reference, it's apparently to a member of the model, which (at a guess) will exist throughout the entire calculation in question anyway.
2,904,470
2,904,612
Learning Win32 to develop GUI Applications
If you're a C++ programmer, would you go for the Win32 API or .NET to develop GUI applications?
Win32 is an API (Application Programming Interface). So is .NET. So is POSIX. The first two have GUI toolkits integrated into the main API, but you can use other toolkits such as Qt (as suggested by Skildrick) or wxWidgets instead if you choose. For *nix, the main API is POSIX and almost all of them use X11 as the low-level graphics layer, then you need some GUI toolkit on top (none is integrated into POSIX). Depending on the type of displays you want, OpenGL is another very good, highly portable graphics API, though it focuses on high-speed vector graphics rather than UI widgets. One good reason for using the Win32 API's integrated GUI toolkit is that many of the other parts of the Win32 API use it, e.g. WSAAsyncSelect and MsgWaitForMultipleObjectsEx are non-GUI functions which are integrated into GUI message processing. A good wrapper toolkit will give you enough control to continue using these, but few do since this approach is very different with non-Windows OSes and most alternative toolkits value portability above capability. Even .NET, which is designed from the ground up to run optimally on Windows, can't use asynchronous procedure calls or waitable timers from a UI thread, since none of the message processing in .NET uses MsgWaitForMultipleObjects. So you end up forced to use multiple threads and a ton of yucky synchronization code. But stay away from MFC. It is basically an academic exercise in implementing exceptions without compiler support, not the kind of framework you want for serious applications. Most of the other "features" got added after modern C++ design was much better understood but continue to use the dangerous messy style started by the early hacks on exceptions and virtual inheritance in the name of keeping things consistent. There are much better choices available today.
2,904,622
2,906,029
Resource allocation and automatic deallocation
In my application I got many instances of class CDbaOciNotifier. They all share a pointer to only one instance of class OCIEnv. What I like to achieve is that allocation and deallocation of the resource class OCIEnv will be handled automatically inside class CDbaOciNotifier. The desired behaviour is, with the first instance of class CDbaOciNotifier the environment will be created, after that all following notifiers use that same environment. With the destruction of the last notifier, the environment will be destroyed too (call to custom deleter). Later on, this cycle can start again with the creation of a new environment. What I've got so far (using a static factory method to create notifiers): #pragma once #include <string> #include <memory> #include "boost\noncopyable.hpp" class CDbaOciNotifier : private boost::noncopyable { public: virtual ~CDbaOciNotifier(void); static std::auto_ptr<CDbaOciNotifier> createNotifier(const std::string &tnsName, const std::string &user, const std::string &password); private: CDbaOciNotifier(OCIEnv* envhp); // All notifiers share one environment static OCIEnv* m_ENVHP; // Custom deleter static void freeEnvironment(OCIEnv *env); OCIEnv* m_envhp; }; CPP: #include "DbaOciNotifier.h" using namespace std; OCIEnv* CDbaOciNotifier::m_ENVHP = 0; CDbaOciNotifier::~CDbaOciNotifier(void) { } CDbaOciNotifier::CDbaOciNotifier(OCIEnv* envhp) :m_envhp(envhp) { } void CDbaOciNotifier::freeEnvironment(OCIEnv *env) { OCIHandleFree((dvoid *) env, (ub4) OCI_HTYPE_ENV); *env = null; } auto_ptr<CDbaOciNotifier> CDbaOciNotifier::createNotifier(const string &tnsName, const string &user, const string &password) { if(!m_ENVHP) { OCIEnvCreate( (OCIEnv **) &m_ENVHP, OCI_EVENTS|OCI_OBJECT, (dvoid *)0, (dvoid * (*)(dvoid *, size_t)) 0, (dvoid * (*)(dvoid *, dvoid *, size_t))0, (void (*)(dvoid *, dvoid *)) 0, (size_t) 0, (dvoid **) 0 ); } //shared_ptr<OCIEnv> spEnvhp(m_ENVHP, freeEnvironment); ...got so far... 
return auto_ptr<CDbaOciNotifier>(new CDbaOciNotifier(m_ENVHP)); } I'd like to avoid counting references (notifiers) myself, and use something like shared_ptr. Do you see an easy solution to my problem?
There is a lot going on in your code. Here is the solution, but simplified to just the bare essentials. class CDbaOciNotifier { public: CDbaOciNotifier() : m_env(get_env()) { } private: shared_ptr<OCIEnv> m_env; struct Delete_env { void operator()(OCIEnv* env) { OCIHandleFree( ... ); } }; static shared_ptr<OCIEnv> get_env() { // make sure a mutex is involved if CDbaOciNotifier // can be constructed concurrently. static weak_ptr<OCIEnv> s_env; shared_ptr<OCIEnv> env = s_env.lock(); if( ! env ) { OCIEnv* env_ptr = OCIEnvCreate( ... ); env.reset( env_ptr, Delete_env() ); s_env = env; } return env; } }; As written you cannot construct CDbaOciNotifier concurrently. You'll need a static mutex to protect s_env if you want that ability. The weak_ptr needs to be a function local static otherwise your app might explode if a global or static CDbaOciNotifier is created (static initialization order is undefined).
2,904,694
2,905,083
Parser problem - Else-If and a Function Declaration
A quick, fun question - What is the difference between a function declaration in C/C++ and an else-if statement block from a purely parsing standpoint? void function_name(arguments) { [statement-block] } else if(arguments) { [statement-block] } Looking for the best solution! =) Edit: Thanks for the insight guys. I was actually writing a regex to match all functions in a program and I started getting these else-if blocks with the results. That is when I realized the unique connection between the two. =)
The two are actually completely different. A function follows the pattern: return-type function([argument1, argument2... argumentN]) // arguments optional { [statement-block] } An else-if on the other hand, the way you've written it in C style, is a special case of a single statement else block. Just like you can have one statement under an else when the curly braces are omitted: if (boolean-condition) // ... else single-statement; The single statement is also allowed to be an if-else statement: if (boolean-condition) // ... else if (boolean-condition) { // ... } more usually written the way you have (else if (...)). Further, there is no parameter list, just a required boolean condition, and there is no return type in an else if. So one's the definition of a subroutine, and the other is two conditional blocks chained together - there is nothing in particular connecting the two. This is a good example why regex can't be used to parse C++/HTML/XML/anything with complex grammar.
2,904,839
2,904,843
How can I use a class before defining it?
class Node { string name; Node previous; }; Error: Node::previous uses "Node" which is being defined. How can I get this to work in C++? It works in C#. EDIT: Why does Node* previous work?
Use pointers. Node* previous; would solve the problem. As you're doing it now, you actually try to make your class infinitely large.
2,904,887
2,905,082
Sub-millisecond precision timing in C or C++
What techniques / methods exist for getting sub-millisecond precision timing data in C or C++, and what precision and accuracy do they provide? I'm looking for methods that don't require additional hardware. The application involves waiting for approximately 50 microseconds +/- 1 microsecond while some external hardware collects data. EDIT: OS is Windows, probably with VS2010. If I can get drivers and SDKs for the hardware on Linux, I can go there using the latest GCC.
When dealing with off-the-shelf operating systems, accurate timing is an extremely difficult and involved task. If you really need guaranteed timing, the only real option is a full real-time operating system. However if "almost always" is good enough, here are a few tricks you can use that will provide good accuracy under commodity Windows & Linux. Use a Shielded CPU Basically, this means turn off IRQ affinity for a selected CPU & set the processor affinity mask for all other processes on the machine to ignore your targeted CPU. On your app, set the CPU affinity to run only on your shielded CPU. Effectively, this should prevent the OS from ever suspending your app as it will always be the only runnable process for that CPU. Never let your process willingly yield control to the OS (which is inherently non-deterministic for non-realtime OSes). No memory allocation, no sockets, no mutexes, nada. Use the RDTSC to spin in a while loop waiting for your target time to arrive. It'll consume 100% CPU but it's the most accurate way to go. If number 2 is a bit too draconian, you can 'sleep short' and then burn the CPU up to your target time. Here, you take advantage of the fact that the OS schedules the CPU at set intervals. Usually 100 times per second or 1000 times per second depending on your OS and configuration (On Windows you can change the default scheduling period of 100/s to 1000/s using the multimedia API). This can be a little hard to get right, but essentially you need to determine when the OS scheduling periods occur and calculate the one prior to your target wake time. Sleep for this duration and then, upon waking, spin on RDTSC (if you're on a single CPU... use QueryPerformanceCounter or the Linux equivalent if not) until your target time arrives. Occasionally, OS scheduling will cause you to miss but, generally speaking, this mechanism works pretty well. 
It seems like a simple question, but attaining 'good' timing gets exponentially more difficult the tighter your timing constraints are. Good luck!
2,905,046
2,905,079
Why are some Microsoft languages called "visual"? (Visual C#, Visual Basic .NET, Visual C++)
I understand visual programming languages to be those languages that allow the programmer to manipulate graphical--rather than textual--objects onscreen to build functionality. The closest thing I see in C#, VB, etc. is RAD controls, but that is just composing UI and the very simplest functionality -- it has nothing to do with the language itself, even. Why, then, is C# called "Visual C#", Basic .NET called "Visual Basic .NET," etc.? What is "visual," or what is the rationale or history behind the nomenclature?
I don't think it has to do with the languages themselves being "visual." From the Wikipedia article: The term Visual denotes a brand-name relationship with other Microsoft programming languages such as Visual Basic, Visual FoxPro, Visual J# and Visual C++. All of these products are packaged with a graphical IDE and support rapid application development of Windows-based applications.
2,905,329
2,905,352
How to make the compiler work out template class arguments at assignment?
Here's the code. Is it possible to make last line work? #include<iostream> using namespace std; template <int X, int Y> class Matrix { int matrix[X][Y]; int x,y; public: Matrix() : x(X), y(Y) {} void print() { cout << "x: " << x << " y: " << y << endl; } }; template < int a, int b, int c> Matrix<a,c> Multiply (Matrix<a,b>, Matrix<b,c>) { Matrix<a,c> tmp; return tmp; } int main() { Matrix<2,3> One; One.print(); Matrix<3,5> Two; (Multiply(One,Two)).print(); // this works perfect Matrix Three=Multiply(One,Two); // !! THIS DOESNT WORK return 0; }
In C++11 you can use auto to do that: auto Three=Multiply(One,Two); In current C++ you cannot do this. One way to avoid having to spell out the type's name is to move the code dealing with Three into a function template: template< int a, int b > void do_something_with_it(const Matrix<a,b>& One, const Matrix<a,b>& Two) { Matrix<a,b> Three = Multiply(One,Two); // ... } int main() { Matrix<2,3> One; One.print(); Matrix<3,5> Two; do_something_with_it(One,Two); return 0; } Edit: A few more notes to your code. Be careful with using namespace std;, it can lead to very nasty surprises. Unless you plan to have matrices with negative dimensions, using unsigned int or, even more appropriate, std::size_t would be better for the template arguments. You shouldn't pass matrices per copy. Pass per const reference instead. Multiply() could be spelled operator*, which would allow Matrix<2,3> Three = One * Two; print should probably take the stream to print to as std::ostream&. And I'd prefer it to be a free function instead of a member function. I would contemplate overloading operator<< instead of naming it print.
2,905,332
2,905,515
Why does /MANIFESTUAC:NO work?
Windows 7, C++, VS2008 I have a COM DLL that needs to be registered using "runas administrator" (it is a legacy app that writes to the registry) The DLL is used by a reports app which instantiates it using CoCreateInstance. This failed unless I also ran the reports app as administrator; until I changed the linker setting from /MANIFESTUAC to /MANIFESTUAC:NO Can anyone tell me why this works? Does it mean that I can write apps that bypass the UAC using this setting?
If your installer/registerer app has a manifest, and it says "don't run elevated", when you try to write to HKLM it fails. If you have a manifest and it says "run elevated", when you try to write to HKLM it succeeds. If you have no manifest (which you request with /MANIFESTUAC:NO), when you try to write to HKLM it writes to a virtualized location instead. When you run the reports app, a similar triple applies, although it can read HKLM. Therefore if the reports app has a manifest, whether elevated or not, it reads HKLM. Without a manifest it reads the virtualized location. This is why you have success when both apps have a manifest or don't have a manifest. It would probably be preferable to have your installer app with a manifest that requests elevation, and your reports app have a manifest that does not request elevation. That way all your apps are telling the truth and everything works. Plus you know why it's happening.
2,905,377
2,905,419
"Temporary object" warning - is it me or the compiler?
The following snippet gives the warning: [C++ Warning] foo.cpp(70): W8030 Temporary used for parameter '_Val' in call to 'std::vector<Base *,std::allocator<Base *> >::push_back(Base * const &)' .. on the indicated line. class Base { }; class Derived: public Base { public: Derived() // << warning disappears if constructor is removed! { }; }; std::vector<Base*> list1; list1.push_back(new Base); list1.push_back(new Derived); // << Warning on this line! Compiler is Codegear C++Builder 2007. Oddly, if the constructor for Derived is deleted, the warning goes away... Is it me or the compiler? EDIT: The only way I've found to remove the warning is to something similar to this: Derived * d; list1.push_back(d = new Derived); // << No warning now...
Simply try: list1.push_back(new Derived()); I am afraid there is something about POD (with trivial constructors) vs non-POD going on here. EDIT: Given that the code compiles fine with gcc 3.4.2 (--pedantic) I would say it's a compiler quirk. I am leaning toward MarkB's explanation, i.e. the compiler creating a temporary even though I don't understand why it would be required and then complaining when assigning it to the const&... but I'm still perplexed.
2,905,578
2,905,716
c++ setting string attribute value in class is throwing "Access violation reading location"
I am having some trouble getting this simple code to work: #pragma once #include <iostream> #include <string> using std::string; class UserController; #include "UserController.h" class CreateUserView { public: CreateUserView(void); ~CreateUserView(void); UserController* controller; void showView(); string name; string lastname; string address; string email; string dateOfBirth; }; All I need is to set these attributes in the implementation with getline(). CreateUserView::CreateUserView(void) { } void CreateUserView::showView() { cout << endl << " New User" << endl; cout << "--------------------------" << endl; cout << " Name\t\t: "; getline(cin, name); cout << " Lastname\t: "; getline(cin, lastname); cout << " Email\t\t: "; getline(cin, email); cout << " ===============================" << endl; cout << " 1. SAVE 2.CHANGE 3.CANCEL" << endl; cout << " ===============================" << endl; cout << " choice: "; int choice; cin >> choice; cin.ignore(); controller->createUser_choice(choice); } I keep getting this "Access violation reading location" error at this line: getline(cin, name); What's the best way of assigning a value to an std::string attribute of a class? Even name = "whatever" is throwing that error!! Thanks. EDIT: a UserController is instantiating the CreateUserView: CreateUserView *_createUserView; This is how the CreateUserView is being instantiated: void UserController::createUser() { //Init the Create User View if(_createUserView == NULL) { _createUserView = new CreateUserView(); _createUserView->controller = this; } _createUserView->showView(); }
You don't seem to initialize your variable properly: CreateUserView *_createUserView; Therefore it is a dangling pointer, not NULL (in C++, with a few exceptions, variables are not initialized automatically to 0). So here if(_createUserView == NULL) { _createUserView = new CreateUserView(); _createUserView->controller = this; } the if block is not executed, and here _createUserView->showView(); you get an access violation. Initialize your pointer properly to NULL: CreateUserView *_createUserView = NULL;
2,905,624
2,905,677
Getting the Dimensions of an LPDIRECT3DTEXTURE9 in Direct X 9.0c?
Does anyone know if there is a function in DirectX to get the dimensions of an LPDIRECT3DTEXTURE9? I just need the width and height. If there isn't, anyone know of a quick and dirty way to accomplish this?
An LPDIRECT3DTEXTURE9 may contain multiple images of different sizes. You'll have to specify which one you want. Usually, 0 is the original size; the others are mipmaps that are used for optimizing performance on the GPU. D3DSURFACE_DESC surfaceDesc; int level = 0; //The level to get the width/height of (probably 0 if unsure) myTexture->GetLevelDesc(level, &surfaceDesc); size_t size = surfaceDesc.Width * surfaceDesc.Height;
2,905,834
2,907,124
Is calling of overload operator-> resolved at compile time?
when I tried to compile the code: (note: func and func2 are not typos) struct S { void func2() {} }; class O { public: inline S* operator->() const; private: S* ses; }; inline S* O::operator->() const { return ses; } int main() { O object; object->func(); return 0; } there is a compile error reported: D:\code>g++ operatorp.cpp -S -o operatorp.exe operatorp.cpp: In function `int main()': operatorp.cpp:27: error: 'struct S' has no member named 'func' It seems that the call to the overloaded "operator->" is resolved at compile time? I'd added the "-S" option to compile only.
object->func() is just syntactic sugar for object.operator->()->func() for user-defined types. Since O::operator->() yields an S*, this requires the existence of the method S::func() at compile time.
2,906,095
2,907,582
Boost.Test: Looking for a working non-Trivial Test Suite Example / Tutorial
The Boost.Test documentation and examples don't really seem to contain any non-trivial examples and so far the two tutorials I've found here and here while helpful are both fairly basic. I would like to have a master test suite for the entire project, while maintaining per module suites of unit tests and fixtures that can be run independently. I'll also be using a mock server to test various networking edge cases. I'm on Ubuntu 8.04, but I'll take any example Linux or Windows since I'm writing my own makefiles anyways. Edit As a test I did the following: // test1.cpp #define BOOST_TEST_MODULE Regression #include <boost/test/included/unit_test.hpp> BOOST_AUTO_TEST_SUITE(test1_suite) BOOST_AUTO_TEST_CASE(Test1) { BOOST_CHECK(2 < 1); } BOOST_AUTO_TEST_SUITE_END() // test2.cpp #include <boost/test/included/unit_test.hpp> BOOST_AUTO_TEST_SUITE(test2_suite) BOOST_AUTO_TEST_CASE(Test1) { BOOST_CHECK(1<2); } BOOST_AUTO_TEST_SUITE_END() Then I compile it: g++ test1.cpp test2.cpp -o tests This gives me about a bazillion "multiple definition of" errors during linking. When it's all in a single file it works fine.
C++ Unit Testing With Boost.Test (permanent link: http://web.archive.org/web/20160524135412/http://www.alittlemadness.com/2009/03/31/c-unit-testing-with-boosttest/) The above is a brilliant article and better than the actual Boost documentation. Edit: I also wrote a Perl script which will auto-generate the makefile and project skeleton from a list of class names, including both the "all-in-one" test suite and a stand alone test suite for each class. It's called makeSimple and can be downloaded from Sourceforge.net. What I found to be the basic problem is that if you want to split your tests into multiple files you have to link against the pre-compiled test runtime and not use the "headers only" version of Boost.Test. You have to add #define BOOST_TEST_DYN_LINK to each file and when including the Boost headers for example use <boost/test/unit_test.hpp> instead of <boost/test/included/unit_test.hpp>. So to compile as a single test: g++ test_main.cpp test1.cpp test2.cpp -lboost_unit_test_framework -o tests or to compile an individual test: g++ test1.cpp -DSTAND_ALONE -lboost_unit_test_framework -o test1 . // test_main.cpp #define BOOST_TEST_DYN_LINK #define BOOST_TEST_MODULE Main #include <boost/test/unit_test.hpp> // test1.cpp #define BOOST_TEST_DYN_LINK #ifdef STAND_ALONE # define BOOST_TEST_MODULE Main #endif #include <boost/test/unit_test.hpp> BOOST_AUTO_TEST_SUITE(test1_suite) BOOST_AUTO_TEST_CASE(Test1) { BOOST_CHECK(2<1); } BOOST_AUTO_TEST_SUITE_END() // test2.cpp #define BOOST_TEST_DYN_LINK #ifdef STAND_ALONE # define BOOST_TEST_MODULE Main #endif #include <boost/test/unit_test.hpp> BOOST_AUTO_TEST_SUITE(test2_suite) BOOST_AUTO_TEST_CASE(Test1) { BOOST_CHECK(1<2); } BOOST_AUTO_TEST_SUITE_END()
2,906,350
2,906,355
Botan::SecureVector - Destructor called in Constructor?
When using the Botan::SecureVector in the following unit test:

    void UnitTest()
    {
        std::vector<byte> vbData;
        vbData.push_back(0x04);
        vbData.push_back(0x04);
        vbData.push_back(0x04);

        Botan::SecureVector<Botan::byte> svData(&vbData[0], vbData.size());

        CPPUNIT_ASSERT(vbData == std::vector<byte>(svData.begin(), svData.end()));
    }

a segmentation fault occurs when trying to allocate the SecureVector, as it tries to deallocate a buffer during its construction.
Add the line

    LibraryInitializer botanInit;

to the function, before any Botan types are constructed. This seemed to me to be odd behavior, so I figured I should post it.
2,906,386
2,906,601
Change IP settings using C++
How do I change the IP settings of a Windows CE 6 box programmatically via C++? Functions for Windows might also work. I found that I can change the hostname via sethostname, but couldn't find how to change IP address settings such as:

- IP Address
- DHCP
- Subnet
- Gateway
- DNS1 / DNS2
- WINS1 / WINS2

Any advice / pointers would be great. Thanks.

P.S. How would you get the box to update to those settings - is a refresh or the programming equivalent of ipconfig /renew required?
Have you checked out the IP Helper Routines on MSDN? I think these provide some, if not all, of what you need.

EDIT: Updated link. Thanks ctacke.
2,906,405
2,906,550
Error: default parameter given for parameter 1
Here is my class definition:

    class MyClass
    {
    public:
        void test(int val = 0);
    };

    void MyClass::test(int val = 0)
    {
        //
    }

When I try to compile this code I get the error: "default parameter given for parameter 1". It's just a simple function; I don't know what's wrong. I'm using Eclipse + MinGW.
Formally, you can specify the default argument wherever you want, but you can do it only once per parameter. Even if the value is the same, it has to be specified either in the function declaration or in the definition, but not in both. Of course, if the declaration is in the header file (and the definition is in the implementation file), common sense says that the default argument has to be specified in the header file, so that all translation units can "see" it.
2,906,478
2,907,021
sorting char* arrays
I have a data structure:

    struct record
    {
        char cont[bufferSize];
        record *next;
    };

When I add new records to this structure, I want them to be sorted alphabetically. I made this function, which adds a record in the right place (by alphabet) in the linked list:

    record *start=NULL, *p, *x;

    void recAdd(char* temp)
    {
        p = new record;
        temp[strlen(temp)] = '\0';
        for (int j=0;j<bufferSize;j++) p->cont[j] = temp[j];
        if (start==NULL) start=p;
        else
        {
            x=start;
            int c=0;
            while (recComp(x->cont,p->cont) <= 0 && x->next != NULL)
            {
                x=x->next;
                c++;
            }
            if (c == 0)
            {
                p->next=start;
                start=p;
            }
            else
            {
                x=start;
                for (int i=0;i<c;i++) x=x->next;
                p->next=x->next;
                x->next=p;
            }
        }
        for (int j=0;j<bufferSize;j++) temp[j] = NULL;
    };

But somehow it doesn't sort things right. What is wrong with my function?
Your code is a mess. There are a number of problems, both semantic and logical, but fundamentally the logic that decides where to insert new nodes is the most flawed. Change it to this (note my new code in the else block):

    void recAdd(const char* t)
    {
        char temp[bufferSize];
        strcpy(temp, t);

        p = new record;
        temp[strlen(temp)] = '\0';
        for (int j=0;j<bufferSize;j++) p->cont[j] = temp[j];
        if (start==NULL)
        {
            start=p;
            start->next = 0;
        }
        else
        {
            record* x = start;
            record* prev = 0;
            while( x && recComp(x->cont, p->cont) <= 0 )
            {
                prev = x;
                x = x->next;
            }

            // p is a new node. p, x and prev are arranged thusly:
            //     prev -> p -> x
            // if prev is null, p is a new head
            // if x is null, p is a new tail
            // otherwise, p is inserted between prev and x
            if( !prev )
            {
                p->next = start;
                start = p;
            }
            else if( !x )
            {
                // note this block and the next one could be combined.
                // done this way for clarity.
                prev->next = p;
                p->next = 0;
            }
            else
            {
                p->next = x;
                prev->next = p;
            }
        }
        for (int j=0;j<bufferSize;j++) temp[j] = NULL;
    };

BUT the fact that you had enough difficulty writing this code that you would ask SO for help in fixing it illustrates an important point: the best code is code that you never have to write. You have written both a linked-list-type structure (bare bones though it may be) and a sorting algorithm. Both are flawed, and both have working, tested and efficient versions available as part of the standard C++ libraries. You should be using them. Use strings instead of char*s. Use vectors instead of your linked list. Use sort instead of your hand-rolled sorting algorithm.
Taken together, all your code can be replaced by this:

    vector<string> records;

    // this for block just populates the vector with random strings
    for( int i = 0; i < 10; ++i )
    {
        string s;
        for( int j = 0, jx = 3+(rand()/(RAND_MAX/10)); j < jx; ++j )
            s += 'A'-1+(rand()/(RAND_MAX/26));
        cout << s << endl;
        records.push_back(s);
    }

    sort(records.begin(), records.end());

    copy( records.begin(), records.end(), ostream_iterator<string>(cout, " "));

Why hand-roll a bunch of stuff and expose yourself to countless defects when you can use tools that already work and do what you want?
2,906,500
2,906,588
Can't cast a class with multiple inheritance
I am trying to refactor some code while leaving existing functionality intact. I'm having trouble casting a pointer to an object into a base interface and then getting the derived class out later. The program uses a factory object to create instances of these objects in certain cases. Here are some examples of the classes I'm working with.

    // This is the one I'm working with now that is causing all the trouble.
    // Some, but not all methods in NewAbstract and OldAbstract overlap, so I
    // used virtual inheritance.
    class MyObject : virtual public NewAbstract, virtual public OldAbstract
    { ... }

    // This is what it looked like before
    class MyObject : public OldAbstract
    { ... }

    // This is an example of most other classes that use the base interface
    class NormalObject : public ISerializable

    // The two abstract classes. They inherit from the same object.
    class NewAbstract : public ISerializable
    { ... }

    class OldAbstract : public ISerializable
    { ... }

    // A factory object used to create instances of ISerializable objects.
    template<class T>
    class Factory
    {
    public:
        ...
        virtual ISerializable* createObject() const
        {
            return static_cast<ISerializable*>(new T()); // current factory code
        }
        ...
    }

This question has good information on what the different types of casting do, but it's not helping me figure out this situation. Using static_cast and regular casting gives me error C2594: 'static_cast': ambiguous conversions from 'MyObject *' to 'ISerializable *'. Using dynamic_cast causes createObject() to return NULL. The NormalObject-style classes and the old version of MyObject work with the existing static_cast in the factory.

Is there a way to make this cast work? It seems like it should be possible.
You have to virtually inherit from ISerializable (I just tested it with VS2010). This is a common issue called the Diamond Problem, where the compiler does not know which hierarchy path to take.

EDIT: This should do it:

    class NewAbstract : public virtual ISerializable
    { ... }

    class OldAbstract : public virtual ISerializable
    { ... }
2,906,638
2,908,457
C++0x class factory with variadic templates problem
I have a class factory where I'm using variadic templates for the c'tor parameters (code below). However, when I attempt to use it, I get compile errors; when I originally wrote it without parameters, it worked fine. Here is the class:

    template< class Base, typename KeyType, class... Args >
    class GenericFactory
    {
    public:
        GenericFactory(const GenericFactory&) = delete;
        GenericFactory &operator=(const GenericFactory&) = delete;

        typedef Base* (*FactFunType)(Args...);

        template <class Derived>
        static void Register(const KeyType &key, FactFunType fn)
        {
            FnList[key] = fn;
        }

        static Base* Create(const KeyType &key, Args... args)
        {
            auto iter = FnList.find(key);
            if (iter == FnList.end())
                return 0;
            else
                return (iter->second)(args...);
        }

        static GenericFactory &Instance()
        {
            static GenericFactory gf;
            return gf;
        }

    private:
        GenericFactory() = default;

        typedef std::unordered_map<KeyType, FactFunType> FnMap;
        static FnMap FnList;
    };

    template <class B, class D, typename KeyType, class... Args>
    class RegisterClass
    {
    public:
        RegisterClass(const KeyType &key)
        {
            GenericFactory<B, KeyType, Args...>::Instance().Register(key, FactFn);
        }

        static B *FactFn(Args... args)
        {
            return new D(args...);
        }
    };

Here is the error when calling (e.g.):

    // Tucked out of the way
    RegisterClass<DataMap, PDColumnMap, int, void *> RC_CT_PD(0);

GCC 4.5.0 gives me:

    In constructor 'RegisterClass<B, D, KeyType, Args>::RegisterClass(const KeyType&)
    [with B = DataMap, D = PDColumnMap, KeyType = int, Args = {void*}]':
    no matching function for call to 'GenericFactory<DataMap, int, void*>::Register(const int&, DataMap* (&)(void*))'

I can't see why it won't compile, and after extensive googling I couldn't find the answer. Can anyone tell me what I'm doing wrong (aside from the strange variable name, which makes sense in context)?
I think it's barfing here:

    template <class Derived>
    static void Register(const KeyType &key, FactFunType fn)
    {
        FnList[key] = fn;
    }

You don't use Derived in this function, but it's probably messing up gcc's attempt to resolve GenericFactory<...>.Register(...). You might also want to change that to GenericFactory<...>::Register(...).
2,906,655
2,906,836
Using Windows media foundation
Ok, so my new gig is high-performance video (think Google Street View but movies) - the hard work is all embedded capture and image processing, but: I was looking at the new MS video offerings to display content, i.e. Windows Media Foundation. Is anyone actually using this? There are no books on the topic. The only documentation is a developer team blog with a single entry 9 months old. I thought we had got past having to learn an MS API by spying on the COM control messages! Is it just another wrapper around the same old ActiveX control?
Did you read Media Foundation Programming Guide? It looks pretty complete.
2,907,009
2,907,456
How to extract the current state of the registry? (in C/C++, XP)
I was wondering how one might extract the current state of the registry of Windows XP, in C or C++ (while the OS is active). I've been trying to use BackupRead() on the registry files, but it is impossible to CreateFile() them. I managed to create a Shadow Copy of the registry files, but it wasn't the current state of the registry. I would appreciate any hint... (I know ERUNT is able to do it) Thanks, Doori Bar
RegSaveKey used to be the preferred method, but the documentation now states that you should use the Volume Shadow Copy Service. I think RegSaveKey should continue to work, though (assuming you have the required privileges). Of course you could always roll your own implementation as is demonstrated in the link in one of the other answers.
2,907,087
2,907,116
Embedding a scripting engine in C++
I'm researching how to best extend a C++ application with scripting capability, and I am looking at either Python or JavaScript. User-defined scripts will need the ability to access the application's data model. Have any of you had experiences with embedding these scripting engines? What are some potential pitfalls?
It's sure easy to embed Python by using the Boost::Python library (ok, ok, sarcasm.) Nothing is "easy" when it comes to cross-language functionality. Boost has done a great deal to aid such development. One of the developers I've worked with swears on the Boost->Python interface. His code can be programmed by a user in Python, with a REPL built right into the UI. Amazing. However, my experience has been better observed using SWIG and other languages such as Java. I'm currently working with SWIG to wrap C++ with Python. There's all sorts of gotchas with exceptions, threading, cross-language polymorphism and the like. I'd look at these two places first. As I said, nothing will be "easy" but both these make life more livable.
2,907,091
2,907,556
How can I tell if CString allocates memory on the heap or stack?
How can I tell if the MFC CString allocates memory on the heap or stack? I am compiling for the Windows Mobile/Windows CE platform. I am working on a project developed by someone else and I have witnessed stack overflows under certain circumstances. I am trying to figure out if the custom SQLite recordset classes (with many CString member variables) allocated on the stack are causing the stack overflows.
If you're putting an object onto the stack that contains "many" CStrings, you'll have some data on the stack and some on the heap. The CString "management" data is what the object itself is. sizeof(CString) will tell you how big it is. It includes information about its size and the pointer to the actually character array. The character array itself is taken from the heap. CString::GetLength() or whatever the call is will tell you how much space is taken on the heap. sizeof(YourCustomRecordset) will tell you how much stack space is taken up by your object when you put it on the stack.
2,907,094
2,907,240
C++ conversion operator between types in other libraries
For convenience, I'd like to be able to cast between two types defined in other libraries. (Specifically, QString from the Qt library and UnicodeString from the ICU library.) Right now, I have created utility functions in a project namespace:

    namespace MyProject
    {
        const icu_44::UnicodeString ToUnicodeString(const QString& value);
        const QString ToQString(const icu_44::UnicodeString& value);
    }

That's all well and good, but I'm wondering if there's a more elegant way. Ideally, I'd like to be able to convert between them using a cast operator. I do, however, want to retain the explicit nature of the conversion. An implicit conversion should not be possible.

Is there a more elegant way to achieve this without modifying the source code of the libraries? Some operator overload syntax, perhaps?
If what you're striving for is to be able to say

    QString qs;
    UnicodeString us(qs);

or

    UnicodeString us;
    QString qs(us);

then no, you can't do that unless you can change either of the classes. You can, of course, introduce a new string:

    NewString ns;
    UnicodeString us(ns);
    QString qs(us);
    NewString nsus(us);
    NewString nsqs(qs);

I'm not sure about this approach's elegance, though, compared with your two explicit conversion functions.
2,907,221
2,907,979
Get the lua command when a c function is called
Suppose I register many different function names in Lua to the same function in C. Now, every time my C function is called, is there a way to determine which function name was invoked? For example:

    int runCommand(lua_State *lua)
    {
        const char *name = // getFunctionName(lua) ? how would I do this part
        for(int i = 0; i < functions.size; i++)
            if(functions[i].name == name)
                functions[i].Call();
    }

    int main()
    {
        ...
        lua_register(lua, "delay", runCommand);
        lua_register(lua, "execute", runCommand);
        lua_register(lua, "loadPlugin", runCommand);
        lua_register(lua, "loadModule", runCommand);
        lua_register(lua, "delay", runCommand);
    }

So, how do I get the name of whatever function called it?
Another way to attack your question is by using upvalues. Basically, you register the C functions with the function below instead of lua_register:

    void my_lua_register(lua_State *L, const char *name, lua_CFunction f)
    {
        lua_pushstring(L, name);
        lua_pushcclosure(L, f, 1);
        lua_setglobal(L, name);
    }

Then, getFunctionName is straightforward:

    const char* getFunctionName(lua_State* L)
    {
        return lua_tostring(L, lua_upvalueindex(1));
    }

That said, what you are trying to do seems fishy - what are you trying to achieve? The runCommand function posted in the question looks like a horribly inefficient way to do something that Lua does for you anyway.