question_id | answer_id | title | question | answer |
|---|---|---|---|---|
1,334,069 | 1,334,112 | Are const arrays declared within a function stored on the stack? | if this was declared within a function, would it be declared on the stack?
(it being const is what makes me wonder)
void someFunction()
{
const unsigned int actions[8] =
{ e1,
e2,
etc...
};
}
Yes, they're on the stack. You can see this by running the following snippet: it has to print the destruction message five times before "inner scope closed".
#include <cstdio>
struct A { ~A() { printf( "A destructed\n" ); } };
int main() {
    {
        const A anarray[5] = {A()};
    }
    printf( "inner scope closed\n" );
}
|
1,334,284 | 1,334,319 | Debugging Visual Studio builds from Eclipse | I'm just starting out on a cross-platform (Windows, Linux, OS X) C++ project, and we've decided to use Scons for our build system and Eclipse as our IDE. I've figured out how to trigger Scons to do a Visual C++ build from Eclipse, and for errors etc. to get reflected in Eclipse, so all good so far. However, what would be really nice is if we could use Eclipse for debugging as well, but Eclipse's various gdb debugging options can't read the debug symbols that VC puts into the build. So does anyone know a way round this, or (as I suspect) will I have to use Visual Studio for debugging?
Obviously this is by no means a bad solution, but using a single IDE would be even better!
Thanks in advance for any help....
| Visual C++ creates PDB files for its own symbols, which map into the binary. The only provision for other debuggers is to use the C7 format and hope that is enough for gdb.
Go to Properties | C/C++ | General | Debug Information = C7 Compatible (instead of the default PDB). The corresponding command-line switch is /Z7 instead of the other /Z? options (which select Program Database, or Program Database for Edit and Continue).
|
1,334,582 | 1,334,635 | Where to find a description of all the math functions like floorf and others? | math.h is not really documentation. Is there something else that will describe these functions in a bit more detail?
| floorf() is a variant for the floor() function, accepting and returning float values instead of double.
The GNU Libc documentation is very helpful.
The iPhone documentation also has a man page about floor() functions.
|
1,334,596 | 1,334,644 | C++ vs C# for GUI programming | I am going to write a GUI program under Windows (about 10,000 lines of code by my estimate) and don't know whether to choose C# or C++ (with the Qt library) for my needs. Please help me choose.
| If you have to debate between C# and C++, the correct answer is probably C#. I would stay away from a lower-level language like C++ unless you absolutely have to, as the time required to develop and debug with it will be much greater. C# gets a lot of GUI functionality from the .NET framework; there isn't much you can't do with it right out of the box, whereas in C++ you'd have to hand-code a lot of that functionality.
|
1,334,858 | 6,952,193 | Why don't iostream objects overload operator bool? | In this answer I talk about using a std::ifstream object's conversion to bool to test whether the stream is still in a good state. I looked in the Josuttis book for more information (p. 600 if you're interested), and it turns out that the iostream objects actually overload operator void*. It returns a null pointer when the stream is bad (which can be implicitly converted to false), and a non-null pointer otherwise (implicitly converted to true). Why don't they just overload operator bool?
| This is an instance of the "safe bool" problem.
Here is a good article: http://www.artima.com/cppsource/safebool.html .
C++0x helps the situation with explicit conversion functions, as well as the change that Kristo mentions. See also Is the safe-bool idiom obsolete in C++11? .
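As a hedged illustration beyond the original answer, the C++11 fix mentioned above (explicit conversion functions) looks roughly like this; the Stream class here is a made-up stand-in, not the real iostream implementation:

```cpp
#include <cassert>

// Sketch of the C++11 solution: an explicit conversion operator permits
// boolean tests like "if (s)" but blocks accidental uses such as "s + 1"
// or comparing two unrelated streams via their bool values.
class Stream {
    bool good_;
public:
    explicit Stream(bool good) : good_(good) {}
    explicit operator bool() const { return good_; }
};

bool usable(const Stream& s) {
    if (s)                        // contextual conversion: allowed
        return true;
    return static_cast<bool>(s);  // explicit request: allowed
}
```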
|
1,334,989 | 1,337,988 | Debugging data in 'anonymous namespaces' (C++) | Recently, I got a crash dump file from a customer. I could track the problem down to a class that could contain incorrect data, but I only had a void pointer to the instance, not a typed pointer (it came from a window property, hence the void pointer).
Unfortunately, the class to which I wanted to cast the pointer was in an anonymous namespace, like this:
namespace
{
class MyClass
{
...
};
}
...
void *ptr = ...
// I know ptr points to an instance of MyClass,
// and at this location I want to cast ptr to (MyClass *) in the debugger.
When I use ptr in the watch window, Visual Studio 2005 just shows the pointer value.
If I use (MyClass *)ptr, the debugger tells me it cannot cast to it.
How can I cast ptr to a MyClass pointer?
Note: I could eventually use a silly-named namespace (like the name of the source file) and then add a "using namespace" directive, but I would expect a better solution.
| This is mentioned in MSDN. It doesn't look like there's a nice solution within the Watch window (you can get the decorated name of your class from a listing I guess).
Your "silly-named namespace" idea would work okay; you could also just declare an identical class with a silly name and cast to that type instead.
|
1,335,040 | 1,335,112 | Boost linking, Visual Studio & version control | I'm using Visual Studio 2008, and writing some stuff in C++. I'm using a Boost library (that is not header only).
So, linking to Boost requires one to add the directory to Boost binaries to the project's "additional linker paths" setting.
However, doesn't this conflict with source control? If I check in the project files, wouldn't the absolute path to Boost libs on my computer also be included in them?
I obviously don't want this to happen, so what should I do? Just adding the Boost directory to "Visual C++ Directories/Libraries" doesn't work.
| Adding the Boost paths to "Visual C++ Directories" should work.
You should add the include path <Full path here>\boost_1_39_0 (no \boost at the end)
and the library path <Full path here>\boost_1_39_0\bin.v2\lib (bin.v2 is a stage directory that could be different in your case).
Personally, I store boost sources in my source control and use relative paths in project settings.
|
1,335,052 | 1,338,991 | c++, syntax for passing parameters | This code snippet is part of an ISAPI redirect filter, written in managed C++, that captures URL requests with the prefix "http://test/". Once a URL is captured, the filter redirects the request to a test.aspx file I have at the root of my web app.
I need some syntax help how to:
1) pass the "urlString" parameter to be displayed in my "test.aspx" page. Problem line:
urlString.Replace(urlString, "/test.aspx?urlString");
2) syntax for my aspx page to display urlString
DWORD CRedirectorFilter::OnPreprocHeaders(CHttpFilterContext* pCtxt,
PHTTP_FILTER_PREPROC_HEADERS pHeaderInfo)
{
char buffer[256];
DWORD buffSize = sizeof(buffer);
BOOL bHeader = pHeaderInfo->GetHeader(pCtxt->m_pFC, "url", buffer, &buffSize);
CString urlString(buffer);
urlString.MakeLower(); // for this exercise
if(urlString.Find("/test/") != -1) //insert url condition
{
urlString.Replace(urlString, "/test.aspx?urlString");
char * newUrlString= urlString.GetBuffer(urlString.GetLength());
pHeaderInfo->SetHeader(pCtxt->m_pFC, "url", newUrlString);
return SF_STATUS_REQ_HANDLED_NOTIFICATION;
}
//we want to leave this alone and let IIS handle it
return SF_STATUS_REQ_NEXT_NOTIFICATION;
}
-------------- aspx page
<html>
<body>
<%
dim url as string = request.querystring("urlString")
response.write(url)
%>
</body>
</html>
| The CString::Replace method takes the string-to-be-replaced and the string-to-be-put-in-place as arguments. s.Replace( "foo", "bar" ) will convert "tadafoo" into "tadabar".
Now your code will replace "anystring" with "/test.aspx?urlString". Literally.
My guess is that you want your url to be appended to the "/test.aspx" url as a GET argument, in which case you can do this:
CString newurl = "/test.aspx?urlString=";
newurl += urlString;
This will compose the url "/test.aspx?urlString=http://test/somethingelse.html": a GET request with a variable named "urlString" containing your original url.
Your ASP page should read the urlString GET variable with request.QueryString("urlString"); otherwise it looks just fine, but I'm not really into ASP.
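A portable sketch of the same composition with std::string (the answer uses MFC's CString; the function name here is illustrative):

```cpp
#include <cassert>
#include <string>

// Append the captured URL as the value of a "urlString" GET parameter,
// mirroring the CString concatenation shown in the answer.
std::string make_redirect(const std::string& original_url) {
    std::string newurl = "/test.aspx?urlString=";
    newurl += original_url;
    return newurl;
}
```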
|
1,335,137 | 1,335,202 | Using STL to bind multiple function arguments | In the past I've used the bind1st and bind2nd functions to do straightforward operations on STL containers. I now have a container of MyBase class pointers that are, for simplicity's sake, the following:
class X
{
public:
std::string getName() const;
};
I want to call the following static function using for_each and binding both the 1st and 2nd parameters as such:
StaticFuncClass::doSomething(ptr->getName(), funcReturningString() );
How would I use for_each and bind both parameters of this function?
I'm looking for something along the lines of:
for_each(ctr.begin(), ctr.end(),
bind2Args(StaticFuncClass::doSomething(),
mem_fun(&X::getName),
funcReturningString());
I see Boost offers a bind function of its own that looks like something that would be of use here, but what is the STL solution?
Thanks in advance for your responses.
| A reliable fallback when the bind-syntax gets too weird is to define your own functor:
struct callDoSomething {
void operator()(const X* x){
StaticFuncClass::doSomething(x->getName(), funcReturningString());
}
};
for_each(ctr.begin(), ctr.end(), callDoSomething());
This is more or less what the bind functions do behind the scenes anyway.
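A self-contained sketch of this functor approach; since the asker's static class and string function aren't shown, the stand-ins below record each call into a vector so the effect is observable:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

struct X {
    std::string name;
    std::string getName() const { return name; }
};

// Stand-in for the asker's funcReturningString().
std::string funcReturningString() { return "suffix"; }

// Hand-written functor: holds a reference to an output vector so the
// "doSomething" effect can be checked.
struct CallDoSomething {
    std::vector<std::string>& out;
    explicit CallDoSomething(std::vector<std::string>& o) : out(o) {}
    void operator()(const X* x) const {
        out.push_back(x->getName() + ":" + funcReturningString());
    }
};

std::vector<std::string> run_demo() {
    X a; a.name = "a";
    X b; b.name = "b";
    std::vector<const X*> xs;
    xs.push_back(&a);
    xs.push_back(&b);
    std::vector<std::string> calls;
    std::for_each(xs.begin(), xs.end(), CallDoSomething(calls));
    return calls;
}
```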
|
1,335,301 | 1,336,533 | Using boost::bind with a constructor | I'm trying to create new objects and add them to a list of objects using boost::bind. For example.
struct Stuff {int some_member;};
struct Object{
Object(int n);
};
....
list<Stuff> a;
list<Object> objs;
....
transform(a.begin(),a.end(),back_inserter(objs),
boost::bind(Object,
boost::bind(&Stuff::some_member,_1)
)
);
This doesn't appear to work. Is there any way to use a constructor with boost::bind, or should I try some other method?
| If Stuff::some_member is int and Object has a non-explicit ctor taking an int, this should work:
list<Stuff> a;
list<Object> objs;
transform(a.begin(),a.end(),back_inserter(objs),
boost::bind(&Stuff::some_member,_1)
);
Otherwise, you could use boost::lambda::constructor
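A boost-free sketch of why the simplified version works: transform yields the int member, and back_inserter's push_back implicitly converts it through Object's non-explicit constructor. A plain function replaces boost::bind(&Stuff::some_member, _1) here purely for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <list>

struct Stuff { int some_member; };

struct Object {
    int n;
    Object(int n) : n(n) {}  // non-explicit: int converts to Object implicitly
};

// Plain-function stand-in for boost::bind(&Stuff::some_member, _1).
int member_of(const Stuff& s) { return s.some_member; }

std::list<Object> convert(const std::list<Stuff>& a) {
    std::list<Object> objs;
    // transform produces ints; back_inserter converts each to an Object.
    std::transform(a.begin(), a.end(), std::back_inserter(objs), member_of);
    return objs;
}
```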
|
1,335,590 | 1,335,610 | Removing duplicate string from List (.NET 2.0!) | I'm having issues finding the most efficient way to remove duplicates from a list of strings (List<string>).
My current implementation is a dual foreach loop that checks that the instance count of each object is only 1, and otherwise removes the second occurrence.
I know there are MANY other questions out there, but all the best solutions require something above .NET 2.0, which is the current build environment I'm working in. (GM and Chrysler are very resistant to changes ... :) )
This rules out any LINQ, as well as HashSets.
The code I'm using is Visual C++, but a C# solution will work just fine as well.
Thanks!
| This probably isn't what you're looking for, but the most efficient way would be to not add the duplicates in the first place...
Do you have control over that? If so, all you need to do is a myList.Contains(currentItem) call before you add the item, and you're set.
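In C++ terms (the asker mentions working in Visual C++), the Contains-before-add idea is just a linear search before insertion; this is a sketch, not the asker's code:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Only append the string if it is not already present,
// mirroring the myList.Contains(currentItem) check.
void add_unique(std::vector<std::string>& v, const std::string& s) {
    if (std::find(v.begin(), v.end(), s) == v.end())
        v.push_back(s);
}
```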
|
1,335,965 | 1,336,246 | Howto make Java JNI KeyListener with C++ | I'm trying to make a program like AutoHotKey, but with a graphical interface.
I'm using java.awt.Robot
Now I want to write the code for checking the state of a key (in AHK: GetKeyState).
Of course, something like a KeyListener, but without having focus.
I read already something with JNI and C++, but....
I can't find some information.
Can somebody help me??
| There are a lot of good JNI resources for starting out with JNI programming, like the Sun JNI Tutorial. Almost all tutorials assume a good knowledge of C/C++, because the Java Native Interface (JNI) is the bridge between native C/C++ code, the Java Virtual Machine and everything running in there (meaning your Java bytecode).
What you may want to do first is to find a key capturing library for your operating system of choice (you didn't mention anything specific here) in C++ and try that out as well as checking if there are already some Java bindings (libraries that use JNI and offer Java classes) to interact with. I didn't find any promising on a quick search unfortunately.
|
1,336,426 | 1,336,509 | FindFirstFile and FindNextFile question | Output:
The first file found is LOG_09.TXT
Next file name is LOG_10.TXT
Next file name is LOG_11.TXT
Next fi (cut off word "file"?)
Function:
//Find last modified log file
hFind = FindFirstFile("..\\..\\LOGS\\LOG*.TXT", &FindFileData);
if (hFind == INVALID_HANDLE_VALUE)
{
printf ("FindFirstFile failed (%d)\n", GetLastError());
return;
}
else
{
printf("The first file found is %s<br>",FindFileData.cFileName);
//List all the other files in the directory.
while (FindNextFile(hFind, &FindFileData) != 0)
{
printf ("Next file name is %s<br>", FindFileData.cFileName); //NOT DISPLAYING ALL NAMES CONSISTENTLY??
}
dwError = GetLastError();
FindClose(hFind);
if (dwError != ERROR_NO_MORE_FILES)
{
printf ("FindNextFile error. Error is %u.\n", dwError);
return (-1);
}
}
The word "file" is actually cut short in my printf. Sometimes it displays all the file names sometimes it displays a few, sometimes it doesn't even finish the printf quoted line, like shown above. What is causing this and am I being mislead by the printf functionality? In the debugger it looks like everything is OK, but I want to be certain and understand this. For example, I don't have a null char after i in file right? Why is it being cut off here? Thanks.
EDIT: Incorrect - Single threaded application library. (Was multithreaded before, sorry)
printing to a file gives the complete list of files while printf concurrently is "unstable". Not sure I understand why....
| Since your app is multithreaded, the printf may get cut short halfway through by another thread that then gets control. Try this:
After all printf() calls, use fflush(stdout);, to make sure that the buffer is flushed.
If that doesn't fix it you can protect the stdout resource with a named mutex, or a critical section. Basically wrap all printf + fflush calls with a Wait, followed by a Signal on the named mutex.
(Not sure if step 2 will be necessary).
|
1,336,476 | 1,336,609 | Stack Frame Question: Java vs C++ | Q1. In Java, all objects, arrays and class variables are stored on the heap. Is the same true for C++? Is the data segment a part of the heap?
What about the following code in C++?
class MyClass{
private:
static int counter;
static int number;
};
int MyClass::number = 100;
Q2. As far as my understanding goes, variables which are given a specific value by the compiler are stored in the data segment, and uninitialized global and static variables are stored in the BSS (Block Started by Symbol). In this case, MyClass::counter, being static, is zero-initialized by the compiler and so is stored in the BSS, while MyClass::number, initialized to 100, is stored in the data segment. Am I correct in drawing this conclusion?
Q3. Consider following piece of codes:
void doHello(MyClass &localObj){
// 3.1 localObj is a reference parameter, where will this get stored in Heap or Stack?
// do something
}
void doHelloAgain(MyClass localObj){
// 3.2 localObj is a parameter, where will this get stored in Heap or Stack?
// do something
}
int main(){
MyClass *a = new MyClass(); // stored in heap
MyClass localObj;
// 3.3 Where is this stored in heap or stack?
doHello(localObj);
doHelloAgain(localObj);
}
I hope I have made my questions clear to all
EDIT:
Please refer this article for some understanding on BSS
EDIT1: Changed the class name from MyInstance to MyClass as it was a poor name. Sincere Apologies
EDIT2: Changed the class member variable number from non-static to static
| This is somewhat simplified but mostly accurate to the best of my knowledge.
In Java, all objects are allocated on the heap (including all your member variables). Most other stuff (parameters) are references, and the references themselves are stored on the stack along with native types (ints, longs, etc.), except String, which is more of an object than a native type.
In C++, if you were to allocate all objects with the "new" keyword it would be pretty much the same situation as java, but there is one unique case in C++ because you can allocate objects on the stack instead (you don't always have to use "new").
Also note that Java's heap performance is closer to C's stack performance than C's heap performance, the garbage collector does some pretty smart stuff. It's still not quite as good as stack, but much better than a heap. This is necessary since Java can't allocate objects on the stack.
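A minimal sketch of the C++-only choice the answer describes: the same type can have automatic (stack) or dynamic (heap) storage, at the caller's discretion:

```cpp
#include <cassert>

struct MyClass { int v; MyClass() : v(42) {} };

// In Java every object would take the "new" route; in C++ both are available.
int demo() {
    MyClass onStack;                // automatic storage, destroyed at scope exit
    MyClass* onHeap = new MyClass;  // dynamic storage, lives until delete
    int sum = onStack.v + onHeap->v;
    delete onHeap;
    return sum;
}
```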
|
1,336,561 | 1,337,408 | Change resize behavior in Qt layouts | I want my custom widgets to gain extra space when the dialog is resized. This was working when I only had a handful of widgets, but after adding several more columns of these same widgets and putting them in a QGridLayout, the extra space merely goes in as padding between the widgets.
| I've had trouble with this in the past and here are some of the things I've found:
First make sure all the widgets you want to expand have sizePolicy set to "Expanding".
Make sure the widgets that make up your custom widgets are in a layout that allows for expanding. You can check this by just adding one of your custom widgets to the window and seeing that it expands as expected.
Make sure any widgets on the form that you do not want to expand have a fixed (minimum=maximum) size in the dimension you want them to stay static.
Sometimes the grid layout causes some weird spacing issues because rows are resized based on the largest widget in the entire row and similarly for columns. For some layouts, it is better to use a vertical layout that contains horizontal layouts or vica versa to create a grid-like effect. Only this way, each sub-layout is spaced independently of the other rows or columns.
|
1,336,688 | 1,336,700 | Looking for an application GUI library for C++ | I'm thinking about writing a very simple paint program. I would like a more advanced method of inputting data into my program like colors, thickness of the brush, etc. I would like to use a GUI library so I can program buttons and menus to make input easier.
Any suggestions?
(I'm running Visual C++ 2005 SP1)
| Qt is a pretty solid GUI application framework. It is cross-platform, well documented, supported, and free.
|
1,336,895 | 1,337,671 | boost::bind, boost::asio, boost::thread, and classes | sau_timer::sau_timer(int secs, timerparam f) : strnd(io),
t(io, boost::posix_time::seconds(secs))
{
assert(secs > 0);
this->f = f;
//t.async_wait(boost::bind(&sau_timer::exec, this, _1));
t.async_wait(strnd.wrap(boost::bind(&sau_timer::exec, this)));
boost::thread thrd(&io,this);
io.run();
//thrd(&sau_timer::start_timer);
}
This is the code I have in the constructor for the class 'sau_timer' (which should hopefully run a timer in a separate thread and then call another function).
Unfortunately, atm when I try to compile, I get the following error:
1>c:\program files\boost\boost_1_39\boost\bind\bind.hpp(246) : error C2064: term does not evaluate to a function taking 1 arguments
As well as a whole bunch of warnings. What am I doing wrong? I've tried everything I can think of. Thank you.
| The explanation is at the end of the error messages:
c:\users\ben\documents\visual studio 2008\projects\sauria\sauria\sau_timer.cpp(11) :
see reference to function template instantiation
'boost::thread::thread<boost::asio::io_service*,sau_timer*>(F,A1)' being compiled
The error occurs while generating the ctor of boost::thread. It expects a function object (something with an operator()()), and you pass it what I guess is an io_service. If what you want is a thread calling io_service::run, write:
boost::thread thrd(boost::bind(&io_service::run, &io));
If you use a relatively recent version of Boost, I believe that thread's ctor has a convenience overload that takes care of the bind(), allowing to simply write:
boost::thread thrd(&io_service::run, &io);
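The same two forms translate directly to std::thread; in this sketch a toy Service class stands in for io_service so the effect is checkable without Boost or Asio:

```cpp
#include <cassert>
#include <functional>
#include <thread>

struct Service {
    int runs;
    Service() : runs(0) {}
    void run() { ++runs; }  // stand-in for io_service::run
};

int demo() {
    Service io;
    std::thread t1(std::bind(&Service::run, &io));  // explicit bind form
    t1.join();
    std::thread t2(&Service::run, &io);             // convenience-overload form
    t2.join();
    return io.runs;
}
```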
|
1,336,950 | 1,337,020 | Call Tiny C Compiler from a C++ code | I'm trying to compile a C file from a program written in C++. When I run my program, it should call the Tiny C Compiler and generate a DLL from the compiled C code.
I have tried a lot of approaches, but none worked. Has anyone done something like this before?
Thanks
| What platform are you on?
On most platforms, you can use the C standard library's system() function to launch a separate process from your C++ program.
#include <stdlib.h>
int main (int argc, char *argv[])
{
system ("tcc -o myproc a.c");
return 0;
}
This will block until the spawned process exits.
On Windows, if you're not concerned about portability, you can use CreateProcess().
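Since system() returns a status, a cautious caller can also check whether the spawned compiler succeeded; this is a sketch (the exact status encoding is implementation-defined, but zero conventionally means success on both POSIX and Windows):

```cpp
#include <cassert>
#include <cstdlib>

// Returns true when the command reports success (exit status 0),
// e.g. run_ok("tcc -o myproc a.c") for the answer's compile step.
bool run_ok(const char* cmd) {
    return std::system(cmd) == 0;
}
```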
|
1,337,210 | 1,337,265 | Programs run in 2 seconds on my machine but 15 seconds on others | I have two programs written in C++ that use Winsock. They both accept TCP connections and one sends data the other receives data. They are compiled in Visual Studio 2008. I also have a program written in C# that connects to both C++ programs and forwards the packets it receives from one and sends them to the other. In the process it counts and displays the number of packets forwarded. Also, the elapsed time from the first to the most recent packet is displayed.
The C++ program that sends packets simply loops 1000 times sending the exact same data. When I run all three apps on my development machine (using loopback or actual IP) the packets get run through the entire system in around 2 seconds. When I run all three on any other PC in our lab it always takes between 15 and 16 seconds. Each PC has different processors and amounts of memory but all of them run Windows XP Professional. My development PC actually has an older AMD Athlon with half as much memory as one of the machines that takes longer to perform this task. I have watched the CPU time graph in Task Manager on my machine and one other and neither of them is using a significant amount of the processor (i.e. more than 10%) while these programs run.
Does anyone have any ideas? I can only think to install Visual Studio on a target machine to see if it has something to do with that.
Problem Solved ====================================================
I first installed Visual Studio to see if that had any effect and it didn't. Then I tested the programs on my new development PC and it ran just as fast as my old one. Running the programs on a Vista laptop yielded 15 second times again.
I printed timestamps on either side of certain instructions in the server program to see which was taking the longest, and I found that the delay was being caused by a Sleep() call of 1 millisecond. Apparently on my old and new systems the Sleep(1) was effectively ignored, because I would have anywhere from 10 to more than 20 packets sent in the same millisecond. Occasionally I would have a break in execution of around 15 or 16 milliseconds, which led to the total time of around 2 seconds for 1000 packets. On the systems that took around 15 seconds to run through 1000 packets, there was a 15 or 16 millisecond gap between sending each packet.
I commented out the Sleep() method call and now the packets get sent immediately. Thanks for the help.
| You should profile your application on the good, 2 second case, and the 15 second lab case and see where they differ. The difference could be due to any number of a problems (disk, antivirus, network) - without any data backing it up we'd just be shooting in the dark.
If you don't have access to a profiler, you can add timing instrumentation to various phases of your program to see which phase is taking longer.
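One way to add that timing instrumentation portably is sketched below with std::chrono; a 2009-era Windows build would more likely have used QueryPerformanceCounter or GetTickCount, so treat this as an illustration of the idea, not the era's API:

```cpp
#include <cassert>
#include <chrono>

// Times a callable and returns the elapsed wall-clock milliseconds.
// Wrap each phase of the program in a call like this and compare machines.
template <class F>
long long millis_taken(F f) {
    using namespace std::chrono;
    const steady_clock::time_point start = steady_clock::now();
    f();
    return duration_cast<milliseconds>(steady_clock::now() - start).count();
}
```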
|
1,337,470 | 1,337,487 | Type limitation in loop variables in Java, C and C++ | Why do Java, C and C++ (and maybe other languages) not allow more than one type among for-loop variables? For example:
for (int i = 0; i < 15; i++)
in this case we have a loop variable i, which is the loop counter.
But I may want to have another variable which scope is limited to the loop, not to each iteration. For example:
for (int i = 0, variable = obj.operation(); i < 15; i++) { ... }
I'm storing obj.operation() return data in variable because I want to use it only inside the loop. I don't want variable to be kept in memory, nor stay visible after the loop execution. Not only to free memory space, but also to avoid undesired behaviour caused by the wrong use of variable.
Therefore, loop variables are useful, but can't be used to their full extent because of this type limitation. Imagine that the operation() method returns a long value. If so, I can't enjoy the advantages of loop variables without casting and losing data. The following code does not compile in Java:
for (int i = 0, long variable = obj.operation(); i < 15; i++) { ... }
Again, anybody know why this type limitation exists?
| This limitation exists because your requirement is fairly unusual, and the same effect can be achieved with a very similar (and only slightly more verbose) construct. Java supports anonymous code blocks to restrict scope if you really want to do this:
public void method(int a) {
int outerVar = 4;
{
long variable = obj.operation();
for (int i = 0; i < 15; i++) { ... }
}
}
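C++ allows exactly the same anonymous-block trick, so the idea carries over to two of the three languages in the question; a minimal sketch:

```cpp
#include <cassert>

int demo() {
    int total = 0;
    {
        const long variable = 7;  // scope limited to this block, type differs from i
        for (int i = 0; i < 15; i++)
            total += static_cast<int>(variable);
    }
    // 'variable' no longer exists here; no accidental reuse is possible
    return total;
}
```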
|
1,337,517 | 1,337,692 | Debugging/tracing inside a shared library during runtime? | I'm trying to understand how a certain library works. I've compiled it with my added printfs and everything is great. Now I want to stop the example program during runtime to look at the call stack, but I can't quite figure out how to do it with gdb. The function I want to break on is inside a shared library. I've reviewed a previous question here on SO, but the approach doesn't work for me. The language in question is C++. I've attempted to provide the filename and line number, but gdb refuses to understand that; it only lists the source files from the demo app.
Any suggestions?
| You can do "break main" first. By the time you hit that, the shared library should be loaded, and you can then set a breakpoint in any of its routines.
|
1,337,523 | 56,544,100 | Measuring text width in Qt | Using the Qt framework, how do I measure the width (in pixels) of a piece of text rendered with a given font/style?
| Since Qt 5.11 you must use the horizontalAdvance() method of the QFontMetrics class instead of width(); width() is now obsolete.
QFont myFont(fontName, fontSize);
QString str("I wonder how wide this is?");
QFontMetrics fm(myFont);
int width=fm.horizontalAdvance(str);
|
1,337,529 | 1,337,546 | How to update a printed message in terminal without reprinting | I want to make a progress bar for my terminal application that would work something like:
[XXXXXXX ]
which would give a visual indication of how much time there is left before the process completes.
I know I can do something like printing more and more X's by adding them to the string and then simply printf, but that would look like:
[XXXXXXX ]
[XXXXXXXX ]
[XXXXXXXXX ]
[XXXXXXXXXX ]
or something like that (obviously you can play with the spacing). But this is not visually aesthetic. Is there a way to update the printed text in a terminal with new text, without reprinting? This is all under Linux, in C++.
| Try using \r instead of \n when printing the new "version", and flush stdout so the partial line appears immediately:
for(int i=0;i<=100;++i){ printf("\r[%3d%%]",i); fflush(stdout); }
printf("\n");
|
1,337,722 | 1,337,886 | How to place objects that seem not to be comparable in a C++ std::set? | Suppose I want to put objects that identify a server into an STL set. Then I would have to make sure that I also implement operator< for these objects, otherwise I would run into a compiler error:
struct ServerID
{
std::string name; // name of the server
int port;
};
std::set<ServerID> servers; // compiler error, no operator< defined
This is just one example of a common problem where I want to make an object comparable.
My current solution usually goes like this:
bool operator< (const ServerID & lhs, const ServerID & rhs)
{
if (lhs.name != rhs.name)
{
return lhs.name < rhs.name;
}
else
{
return lhs.port < rhs.port;
}
}
This is just a solution that I found myself. But I suspect that this problem might also have been recognized in computer science. So if I'm lucky there is a better solution for this. Can anyone hint me towards that?
| I would recommend not implementing it as operator<, to avoid possible confusion, but rather passing the ordering function as the comparator template argument of std::set.
struct server
{
std::string name;
int port;
};
struct name_then_port : public std::binary_function<server,server,bool>
{
bool operator()( server const & lhs, server const & rhs ) {
// using litb's approach (more efficient, as it does not call both < and == on the strings):
int cmp = lhs.name.compare(rhs.name);
return ( cmp < 0 ) || ((cmp==0) && ( lhs.port < rhs.port));
}
};
struct port_then_name : public std::binary_function<server,server,bool>
{
bool operator()( server const & lhs, server const & rhs ) {
return (lhs.port < rhs.port) || ((lhs.port==rhs.port) && (lhs.name<rhs.name));
}
};
int main()
{
std::set< server, name_then_port > servers; // or:
std::set< server, port_then_name > servers2;
}
About the question of whether this problem has been identified before: it has. The general solution is exactly what you posted, lexicographical order. The term usually refers to string ordering, but the idea is the same: compare the first element; if it does not decide the order, take the next data element and iterate.
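As a side note beyond the original answer: in C++11 the lexicographic comparator collapses to one line with std::tie, since std::tuple's operator< already compares lexicographically (this wasn't available when the answer was written):

```cpp
#include <cassert>
#include <string>
#include <tuple>

struct server { std::string name; int port; };

// Compare name first, then port, via tuple's lexicographic operator<.
bool name_then_port(const server& a, const server& b) {
    return std::tie(a.name, a.port) < std::tie(b.name, b.port);
}
```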
|
1,337,923 | 1,337,959 | Authenticating users using Active Directory in Client-Server Application | I've been asked to provide support for authenticating users against an Active Directory in our existing client server application.
At the moment a user supplies a user name and password from a client machine, passed over the wire (encrypted) to our server process and matched against a user name/password stored in a database.
Initially, I thought this would be an easy problem to solve, since I could simply authenticate the user's name/password against Active Directory from our server process. However, it turns out that users shouldn't have to enter a password in our client application; instead it should take its credentials from the current Windows login session.
I'm now faced with a problem of how to authenticate using Active Directory without having a password? I'm sure there must be a way of somehow passing some sort of "token" from the client to our server process that could be used as an alternative authentication method, but my research so far has drawn a blank.
Our server is written in C++, so we'll be using the win32 API. I also intend to develop and debug this using a virtual machine running Windows 2008 AD LDS - I'm hoping this will be sufficient for what I'm trying to achieve.
Any help or advice is much appreciated.
| You do an NTLM/Kerberos/Negotiate SSPI exchange loop. There is a full sample on MSDN for both the client and the server. To be clear: you do not use any sort of LDAP access explicitly. It is the LSA (Local Security Authority) that talks to the directory and establishes the identity of the client. If you complete the entire SSPI loop successfully, the authentication has already succeeded and the client identity is already authenticated against the directory. If your server needs to know the client identity (e.g. the user name), it retrieves it from the security context produced by the SSPI loop using QueryContextAttributes(..., SECPKG_ATTR_NAMES, ...) and reads the user name from the SecPkgContext_Names structure.
|
1,337,938 | 1,337,991 | What are some reasons not to statically link to the VC CRT? | I'm finding that with dynamic linking, even with SxS, Windows Update will come along and replace a version of the VC8 CRT (for example, when it has a security flaw), and then my app will fail to run against the older version.
What are some of the important reasons to stay with dynamic linking to the VC CRT, other than avoiding an increase in the size of your binaries?
|
Staying up to date on security fixes is a good reason. Otherwise, you're responsible for rebuilding your application with a fixed CRT and deploying it to your customers.
Using a shared CRT should result in lower memory footprint for the system, since most of the DLL's pages can be shared between processes.
|
1,338,179 | 1,338,188 | How to take these parameters the same way this function does? | For example:
- (BOOL)compare:(NSDecimal)leftOperand greaterThan:(NSDecimal)rightOperand {
NSComparisonResult result = NSDecimalCompare(&leftOperand, &rightOperand);
// rest not important
}
As you can see, the method just receives these two NSDecimal parameters, leftOperand and rightOperand. Then it passes them on to a C API function which likes to have them by reference. Sorry if that's the wrong term, I didn't study that stuff. Correct me if I'm wrong :-)
I want to modify this method in such a way, that I can also accept parameters the way that function does. I think that's clever, because the method won't copy the parameters (I believe it does). What would I have to add in there in order to get this reference thing right? And after that, my parameters are just references, right? How would I pass these then along to the NSDecimalCompare function?
I slightly remember there was some dereferencing operator around for that?
| Try:
- (BOOL)compare:(const NSDecimal*)leftOperand greaterThan:(const NSDecimal*)rightOperand {
NSComparisonResult result = NSDecimalCompare(leftOperand, rightOperand);
// rest not important
}
|
1,338,436 | 1,338,467 | How to initial static member in C++ using function | I am using C++.
in .h:
static CRITICAL_SECTION g_CS;
in .cpp:
CRITICAL_SECTION CQCommon::g_CS;
but I want to use
QGUID temp;
EnterCriticalSection(&g_CS);
temp = g_GUID++;
LeaveCriticalSection(&g_CS);
return temp;
in one static function.
How can I invoke InitializeCriticalSection(PCRITICAL_SECTION pcs);?
Can I use the following one:
QGUID func(XXX)
{
static {
InitializeCriticalSection(&g_CS);
}
QGUID temp;
EnterCriticalSection(&g_CS);
temp = g_GUID++;
LeaveCriticalSection(&g_CS);
return temp;
}
And how can I invoke DeleteCriticalSection(&g_CS) after app leave?
Using MFC, it seems CCriticalSection is a solution.
| If you want a different approach you can create an object to manage it:
class CriticalSectionManager
{
public:
CriticalSectionManager()
{
InitializeCriticalSection(&g_CS);
}
~CriticalSectionManager()
{
DeleteCriticalSection(&g_CS);
}
};
void Func(void)
{
static CriticalSectionManager man;
//Do stuff
}
This will now be managed automatically by C++. The critical section will be initialized when the function is first entered, and deleted when the program exits.
Furthermore you can extend this by having the actual CRITICAL_SECTION variable inside the class, and so on.
|
1,338,645 | 1,344,934 | TokyoCabinet: Segmentation fault at hdb->close() | I'm stuck at a Segmentation fault after inserting about 8 million records in a TC Hash Database. After everything is inserted I close the DB but I caught a Segmentation Fault at this part of the code (tchdb.c):
static void tchdbsetflag(TCHDB *hdb, int flag, bool sign){
assert(hdb);
char *fp = (char *)hdb->map + HDBFLAGSOFF;
if(sign){
*fp |= (uint8_t)flag; //SEGFAULT HERE!
} else {
*fp &= ~(uint8_t)flag;
}
hdb->flags = *fp;
}
More specifically at the commented line.
The DB was opened like this:
tchdbopen(hdb, db_file, HDBOWRITER | HDBOCREAT))
The DB is tuned with:
tchdbtune(hdb, 25000000, -1, -1, HDBTLARGE);
tchdbsetcache(hdb, 100000);
The .tch file is about 2GB (2147483647 bytes). The interesting thing is that it is only happening when I insert around 8 million records. With 2 or 3 millions the DB closes all right. Inserting 8 million records takes around 3 hours because I read data from text files.
Any ideas?
Thanks
| Just solved the problem.
I'm on a 32-bit system and TC can only handle databases up to 2GB on such systems.
The solution is building TC with the "--enable-off64" option. Something like this:
./configure --enable-off64
make
make install
|
1,338,742 | 1,338,783 | Initialize a pointer to a class with NULL values | I am trying to initialize an array of pointers to a NODE struct that I made
struct Node{
int data;
Node* next;
};
the private member of my other class is declared as
Node** buckets;
It is currently initialised as
buckets = new Node*[SIZE]
Is there anyway to initialize the array so that its members point to NULL or some other predefined Node pointer?
EDIT: I'm looking for a means to initialize it without writing a for loop that traverses the full length of the array. The size of the array is determined at runtime.
EDIT 2: I tried std::fill_n(buckets, SIZE_OF_BUCKET, NULL); but the compiler gives the error "cannot convert from 'const int' to 'Node *'" I am using visual studio 2008. Is there something wrong that I am doing?
| First of all, the simplest solution is to do the following:
Node** buckets = new Node*[SIZE]();
As litb previously stated, this will value initialize SIZE pointers to null pointers.
However, if you want to do something like Node **buckets and initialize all of the pointers to a particular value, then I recommend std::fill_n from <algorithm>
Node **buckets = new Node*[SIZE];
std::fill_n(buckets, SIZE, p);
this will set each Node* to p after allocation.
In addition, if you want the Node to have sane member values upon construction, the proper way is to have a constructor. Something like this:
struct Node {
Node() : data(0), next(NULL){}
Node(int d, Node *n = NULL) : data(d), next(n) {}
int data;
Node* next;
};
That way you can do this:
Node *p = new Node();
and it will be properly initialized with 0 and NULL, or
Node *p = new Node(10, other_node);
Finally, doing this:
Node *buckets = new Node[N]();
will allocate N Node objects and default-construct each of them.
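On the fill_n error in the question's second edit: NULL is just the integer constant 0, so the template deduces the fill value as int, which can no longer convert to Node*. Passing an explicitly typed null pointer fixes it. A minimal compilable sketch (Node matches the question's struct; function names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

struct Node {
    int data;
    Node* next;
};

// Value-initialization: every slot starts out as a null pointer.
Node** make_buckets(std::size_t size) {
    return new Node*[size]();   // note the trailing ()
}

// fill_n with an explicitly typed null pointer avoids the
// "cannot convert from 'const int' to 'Node *'" error seen with NULL.
void reset_buckets(Node** buckets, std::size_t size) {
    std::fill_n(buckets, size, static_cast<Node*>(0));
}
```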
|
1,338,846 | 1,338,851 | C global static - shared among threads? | In C, declaring a variable static in the global scope makes it a global variable. Is this global variable shared among threads or is it allocated per thread?
Update:
If they are shared among threads, what is an easy way to make globals in a preexisting library unique to a thread/non-shared?
Update2:
Basically, I need to use a preexisting C library with globals in a thread-safe manner.
| It's visible to the entire process, i.e., all threads. Of course, this is in practice. In theory, you couldn't say because threads have nothing to do with the C standard (at least up to c99, which is the standard that was in force when this question was asked).
But all thread libraries I've ever used would have globals accessible to all threads.
Update 1:
Many thread libraries (pthreads, for one) will allow you to create thread-specific data, a means for functions to create and use data specific to the thread without having it passed down through the function.
So, for example, a function to return pseudo random numbers may want each thread to have an independent seed. So each time it's called it either creates or attaches to a thread-specific block holding that seed (using some sort of key).
This allows the functions to maintain the same signature as the non-threaded ones (important if they're ISO C functions for example) since the other solution involves adding a thread-specific pointer to the function call itself.
Another possibility is to have an array of globals of which each thread gets one, such as:
int fDone[10];
int idx;
: : :
for (i = 0; i < 10; i++) {
idx = i;
startThread (function, i);
while (idx >= 0)
yield();
}
void function () {
int myIdx = idx;
idx = -1;
while (1) {
: : :
}
}
This would allow the thread function to be told which global variable in the array belongs to it.
There are other methods, no doubt, but short of knowing your target environment, there's not much point in discussing them.
Update 2:
The easiest way to use a non-thread-safe library in a threaded environment is to provide wrapper calls with mutex protection.
For example, say your library has a non-thread-safe doThis() function. What you do is provide a wrapper for it:
void myDoThis (a, b) {
static mutex_t serialize;
mutex_claim (&serialize);
doThis (a, b);
mutex_release (&serialize);
}
What will happen there is that only one thread at a time will be able to claim the mutex (and hence call the non-thread-safe function). Others will be blocked until the current one returns.
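A compilable version of the same wrapper idea, sketched with C++11 std::mutex (the mutex_claim/mutex_release calls above are pseudocode); doThis and its global counter are made-up stand-ins for the non-thread-safe library function:

```cpp
#include <cassert>
#include <mutex>

// Stand-in for a non-thread-safe library function touching shared state.
static int g_counter = 0;
void doThis() { ++g_counter; }      // unsafe to call concurrently

// Wrapper: only one thread at a time may run doThis().
void myDoThis() {
    static std::mutex serialize;    // one lock shared by all callers
    std::lock_guard<std::mutex> lock(serialize);
    doThis();
}
```

In C++11 the initialization of the function-local static mutex is itself guaranteed to be thread-safe, so no extra setup call is needed.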
|
1,338,976 | 1,338,984 | Converting integer identifiers to pointers | I have ID values of the type unsigned int. I need to map an Id to a pointer in constant time.
Key Distribution:
ID will have a value in the range of 0 to uint_max. Most of keys will be clustered into a single group, but there will be outliers.
Implementation:
I thought about using the C++ ext hash_map stuff, but I've heard their performance isn't too great when keys have a huge potential range.
I've also thought of using some form of chained lookup (equivalent to recursively subdividing the range into C chunks). If there are no keys in a range, that range will point to NULL.
N = Key Range
Level 0 (divided into C = 16, so 16 pieces) = [0, N/16), [N/16, 2*(N/16)), ...
Level 1 (divided into C = 16, so 16 * 16 pieces) = ...
Does anyone else have ideas on how this mapping can be more efficiently implemented?
Update:
By constant, I just meant each key lookup is not significantly influenced by the # of values in the item. I did not mean it had to be a single op.
| Use a hash map (unordered_map). This gives ~O(1) look-up times. You "heard" it was bad, but did you try it, test it, and determine it to be a problem? If not, use a hash map.
After your code gets close to completion, profile it and determine if the look-up times are the main cause of slowness in your program. Chances are, it won't be.
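A minimal sketch of that suggestion using C++11's std::unordered_map (the standardized descendant of ext hash_map); the void* mapped type and names are illustrative:

```cpp
#include <cassert>
#include <unordered_map>

// Average O(1) lookups: a hash table's cost depends on the number of
// stored entries, not on the span of possible key values, so a key
// range of 0..UINT_MAX with clustered keys plus outliers is fine.
typedef std::unordered_map<unsigned int, void*> IdMap;

void* lookup(const IdMap& m, unsigned int id) {
    IdMap::const_iterator it = m.find(id);
    return it == m.end() ? 0 : it->second;
}
```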
|
1,339,118 | 1,339,125 | c++ problem with enum and struct | why doesn't this compile:
enum E { a, b}
typedef struct { int i; E e; } S;
int main(){return 0;}
I get different errors on different system.
| You need a semicolon after the enum.
enum E { a, b};
|
1,339,270 | 1,339,327 | How do I prefix the length of message in TCP/IP | I'm sending messages over TCP/IP, I need to prefix message length in a char array and then send it. How do I do it?
Also, can you please provide an example of how to extract it at the other end? And if possible, please explain.
I'm using C++ and Winsock.
EDIT:
string writeBuffer = "Hello";
unsigned __int32 length = htonl(writeBuffer.length());
It's not returning the correct length, but rather a very large number.
For the receiving part, if I use ntohl(), then I also get a large number instead of the correct length? Why is that so? I'm receiving like this
bool Server::Receive(unsigned int socketIndex)
{
// Read data from the socket
if (receivingLength)
{
bytesReceived = recv(socketArray[socketIndex - WSA_WAIT_EVENT_0],
((char*)&messageLength) + bytesReceived, MESSAGE_LENGTH_SIZE - bytesReceived, 0);
if (bytesReceived == SOCKET_ERROR)
{
return false;
}
if (bytesReceived == MESSAGE_LENGTH_SIZE)
{
// If uncomment the following line,
// I won't get the correct length, but a large number
//messageLength = ntohl(messageLength);
receivingLength = false;
bytesReceived = 0;
bytesLeft = messageLength;
}
}
else
{
if (bytesLeft > BUFFER_SIZE)
{
return false;
}
bytesReceived = recv(socketArray[socketIndex - WSA_WAIT_EVENT_0],
&receiveBuffer[bytesReceived], bytesLeft, 0);
if (bytesReceived == SOCKET_ERROR)
{
return false;
}
if (bytesReceived == messageLength)
{
// we have received full message
messageReceived = true;
receiveBuffer[bytesReceived] = '\0';
// wait for next message
receivingLength = true;
}
bytesLeft -= bytesReceived;
}
return true;
}
| When sending a length field on a TCP stream, you need to decide two things:
what length should the length have (1 byte, 2 bytes, 4 bytes, variable length)
what endianness should I use
I recommend to use 4 bytes length, and network byte order (i.e. big-endian). For network byte order, the macros htonl and ntohl will convert between host (native) byte order (little-endian, in your case), and network byte order.
To send data, the fragment should look like this:
size_t length = strlen(data);
uint32_t nlength = htonl(length);
send(sock, &nlength, 4, 0);
send(sock, data, length, 0);
On the receiving side, you extract first the length, then the data:
uint32_t length, nlength;
recv(sock, &nlength, 4, 0);
length = ntohl(nlength);
data = malloc(length+1);
recv(sock, data, length, 0);
data[length] = 0;
What this code is missing is error handling: each of the send and receive calls may fail; the recvs may receive less data than expected. But this should give you an idea.
Edit: To deal with the case that the recv returns too few data, run it in a loop, keeping a count of what you have read so far, e.g.
int length_bytes = 0;
while(length_bytes < 4){
int read = recv(sock, ((char*)&nlength)+length_bytes, 4-length_bytes, 0);
if (read == -1) some_error_occurred_check_errno();
length_bytes += read;
}
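Regarding the question's edit: the "very large number" is expected, not a bug. On a little-endian machine htonl returns the same 32-bit value with its bytes swapped, so printing it directly looks wrong; converting back once with ntohl on receipt recovers the original. A round-trip sketch using the POSIX header (Winsock offers the same functions):

```cpp
#include <arpa/inet.h>
#include <cassert>
#include <stdint.h>

// Host order -> network (big-endian) order, and back again.
uint32_t to_wire(uint32_t host_len)   { return htonl(host_len); }
uint32_t from_wire(uint32_t wire_len) { return ntohl(wire_len); }
```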
|
1,339,428 | 1,339,436 | Constant reference to temporary object | Let's say there is a function like
void SendToTheWorld(const Foo& f);
and I need to preprocess Foo object before sending
X PreprocessFoo(const Foo& f)
{
if (something)
{
// create new object (very expensive).
return Foo();
}
// return original object (cannot be copied)
return f;
}
Usage
Foo foo;
SendToTheWorld(PreprocessFoo(foo));
So the X PreprocessFoo() function should be able to return the original object, or copy/modify it and then return a new one. I cannot return const Foo& as it may refer to a temporary object. Also I don't want to create Foo on the heap.
Ideally, X should be some kind of union of const Foo& and Foo that may be treated as const Foo&. Any idea how to do that in a more elegant way?
My current solution:
Foo PreprocessFoo(const Foo& f)
{
// create new object (very expensive).
return Foo();
}
Usage:
Foo foo;
if (something)
{
SendToTheWorld(PreprocessFoo(foo));
}
else
{
SendToTheWorld(foo);
}
| I’m not 100% clear on what you mean but if you just want to omit an unnecessary copy of Foo in the return value, make your function small (right now, it is) and rely on the optimizing compiler to take care of your problem.
Once the function has been inlined, the compiler will elide unnecessary copies of Foo.
(Note for the interested: NRVO (named return value optimization) cannot be applied here since no unique name can be assigned to all instances of possible return values.)
|
1,339,470 | 1,339,487 | How to get the address of the std::vector buffer start most elegantly? | I want to use std::vector for dynamically allocating memory. The scenario is:
int neededLength = computeLength(); // some logic here
// this will allocate the buffer
std::vector<TCHAR> buffer( neededLength );
// call a function that accepts TCHAR* and the number of elements
callFunction( &(buffer[0]), buffer.size() );
The code above works, but this &(buffer[0]) looks ugly. Is there a more elegant way to achieve the same?
| Well, you can remove one set of parens:
&buffer[0]
but that is the common, idiomatic way of doing it. If it really offends you, I suppose you could use a template - something like:
template <typename T>
T * StartOf( std::vector <T> & v ) {
return &v[0];
}
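For what it's worth, C++11 later added a dedicated accessor, vector::data(), which returns a pointer to the first element and, unlike &buffer[0], is valid even for an empty vector. A sketch using char instead of TCHAR to keep it portable; callFunction is a stand-in for the question's callee:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Stand-in for a function taking (TCHAR*, element count).
void callFunction(char* buf, std::size_t n) {
    if (n > 0) std::memset(buf, 'x', n);
}

std::vector<char> demo(std::size_t neededLength) {
    std::vector<char> buffer(neededLength);
    callFunction(buffer.data(), buffer.size());       // C++11
    // pre-C++11 spelling: callFunction(&buffer[0], buffer.size());
    return buffer;
}
```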
|
1,339,601 | 1,339,685 | warning: returning reference to temporary | I have a function like this
const string &SomeClass::Foo(int Value)
{
if (Value < 0 or Value > 10)
return "";
else
return SomeClass::StaticMember[i];
}
I get warning: returning reference to temporary. Why is that? I thought both values the function returns (a reference to the const char* "" and a reference to a static member) cannot be temporary.
| This is an example of an unwanted implicit conversion taking place. "" is not a std::string, so the compiler tries to find a way to turn it into one. And by using the string( const char* str ) constructor it succeeds in that attempt.
Now a temporary instance of std::string has been created that will be deleted at the end of the method call. Thus it's obviously not a good idea to reference an instance that won't exist anymore after the method call.
I'd suggest you either change the return type to const string or store the "" in a member or static variable of SomeClass.
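A sketch of the "static variable" suggestion — both return paths then refer to objects that outlive the call, so the returned reference stays valid (the table stands in for SomeClass::StaticMember):

```cpp
#include <cassert>
#include <string>

// Stand-in for the class's static member array.
static std::string table[3] = { "a", "b", "c" };

const std::string& Foo(int value) {
    static const std::string empty;   // lives for the whole program
    if (value < 0 || value > 2)
        return empty;                 // no temporary string is created
    return table[value];
}
```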
|
1,339,605 | 1,339,667 | Remote debugging error with eclipse CDT | I am using Galileo CDT on mac os x 10.5.7. I want to remotelly debug a c++ application on a linux machine. I found this guide:
http://www.embedded-linux.co.uk/tutorial/eclipse-rse
But when it comes to the step
"Install Remote System Explorer", when I go to "available software", I only get error messages like "No repository found at http::/downlo" and so.
So please,
1- Is there any other way to install this package or to solve this issue?
2- Is there any other better/different guide to remotely debug c++ application with eclipse(galileo)?
Thanks a lot
|
I only get error messages like "No repository found at http::/downlo" and so.
What is the exact error message?
Do not forget Eclipse has its own HTTP proxy settings. Please check them out. (Preferences / General / Network Connections)
(See the system proxy settings screenshot: http://help.eclipse.org/stable/topic/org.eclipse.platform.doc.user/whatsNew/images/system-proxy-settings.png)
If your browser is using a proxy then you need to configure Eclipse to use one too.
If your proxy has a cache and you are using eclipse < 3.4.2, you may get some further problems with that until release 3.4.2 which will include a fix for bug 249990:
|
1,339,922 | 1,340,030 | prevent accidental object copying in C++ | In our company's coding standard, we have been told to "be aware of the ways (accidental) copying can be prevented".
I am not really sure what this means, but assume that they mean we should stop classes from being copied if this is not required.
What I can think of is as follows:
Make the copy constructor of a class private.
Make the assignment operator (operator=) of a class private.
Make the constructor of a class explicit (to stop classes from being created using incorrect variables).
For all classes that carry out memory allocation and where copying is required, make sure that the copy constructor and assignment operator carry out deep copying rather than shallow copying.
Am I on the right track? Is there anything I might have missed out?
| If your coding standard states "be aware of the ways (accidental) copying can be prevented", I'm guessing they aren't just talking about preventing copies from within the classes itself, but about the performance implications of unnecessary / accidental copies when using the classes.
One of the main causes of unnecessarily wasted performance in the code of people new to C++ is unnecessary copying, usually through temporaries. Compilers are getting better and better at deciding when temporaries are not necessary (see "Want speed? Pass by Value", thanks to Konrad's comment), but the best thing to do is to learn to be aware of the inner workings of copying and temporaries in C++ (among others). For me, reading Efficient C++ really got me started.
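For the prevention side of the question's list, the classic idiom is to make the copy constructor and copy assignment unusable; since C++11 this is spelled = delete, and a static_assert can verify it at compile time. A sketch with an illustrative class name:

```cpp
#include <type_traits>

class Connection {
public:
    Connection() {}
    Connection(const Connection&) = delete;             // no copying
    Connection& operator=(const Connection&) = delete;  // no assignment
};

// Accidental copies now fail to compile; prove it without running anything:
static_assert(!std::is_copy_constructible<Connection>::value,
              "Connection must not be copyable");
static_assert(!std::is_copy_assignable<Connection>::value,
              "Connection must not be copy-assignable");
```

In C++03 the same effect is achieved by declaring both members private and leaving them undefined.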
|
1,339,997 | 1,340,124 | Is it possible to use __func__ in gcc 3.3+ the old way? (C++) | With gcc versions before 3.3 and with the MS compiler I use the following macro:
#define DEBUG_WARNING(...) printf(">WARNING: "__FUNCTION__"() " __VA_ARGS__);
Use:
DEBUG_WARNING("someFunction returned %d", ret);
Output:
>WARNING: Class::FunctionName() someFunction returned -1
It's extremely handy when we have lots of systems, all sending output. It's a single line macro that allows us to filter the output accordingly. Small code, big use, happy me.
As the __FUNCTION__ (and __func__ in C++) definition has changed (to make it standards compliant I believe) it has also made that macro unworkable.
I've got it working using a function that builds the string by hand, but I like my macro.
Am I missing an easy way to get this simple one line macro to still work under Gcc 3.3?
: D
| Since __FUNCTION__ and __func__ is a predefined identifier and not a string literal, you cannot use it in preprocessor string literal concatenation. But you can use it in printf formatting. Also note the use of ##args instead of __VA_ARGS__ to use GNU style variadic macro arguments to work around the issue with the comma between __func__ and possibly zero args.
#define DEBUG_WARNING(fmt, args...) \
printf(">WARNING: %s() " fmt "\n", __func__, ##args)
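Usage sketch of that macro style; this variant writes through std::snprintf into a caller buffer so the result can be inspected, but the printf form behaves identically (GNU C/C++ only, because of the named ##args extension):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <cstring>

// Buffer-writing cousin of the macro above.
#define DEBUG_WARNING_TO(buf, len, fmt, args...) \
    std::snprintf(buf, len, ">WARNING: %s() " fmt "\n", __func__, ##args)

int failing_call(char* out, std::size_t len) {
    // __func__ expands here, so the prefix names this function.
    return DEBUG_WARNING_TO(out, len, "someFunction returned %d", -1);
}
```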
|
1,340,099 | 1,340,203 | Worker threads stop their work after a moment | I have a serial application that I parallelized using OpenMP. I simply added the following to my main loop :
#pragma omp parallel for default(shared)
for (int i = 0; i < numberOfEmitters; ++i)
{
computeTrajectoryParams* params = new computeTrajectoryParams;
// defining params...
outputs[i] = (int*) ComputeTrajectory(params);
delete params;
}
It seems to work well: at the beginning, all my worker threads execute an iteration of the loop, everything goes fast, and I have a 100% CPU load (on a quad-core machine). However, after a moment, one of the worker threads stops, and stays in a function called _vcomp::PersistentThreadFunc from vcomp90.dll (the file is vctools\openmprt\src\ttpool.cpp), and then another, etc... until only the main thread remains working.
Does anybody have an idea why this happens ? This starts to happen after about half of the iterations have been executed.
| It might depend on the scheduling scheme, and the computation size in each cycle.
If the scheduling is static - each thread is assigned with work before it is run. Each thread will get 1/4 of the indexes. It is possible that some threads finish before others because their work is easier than that of other threads (or maybe they are just less loaded with other things).
Try to work with dynamic scheduling, and see if it works better.
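The clause goes directly on the pragma. A toy sketch (the loop body is a stand-in for ComputeTrajectory); built without OpenMP support the pragma is simply ignored and the loop runs serially, with the same result:

```cpp
#include <cassert>
#include <vector>

// schedule(dynamic) hands out iterations in chunks as threads become
// free, which helps when per-iteration cost is uneven.
std::vector<long> compute_all(int n) {
    std::vector<long> outputs(n);
    #pragma omp parallel for schedule(dynamic) default(shared)
    for (int i = 0; i < n; ++i) {
        outputs[i] = static_cast<long>(i) * i;   // each thread writes its own slot
    }
    return outputs;
}
```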
|
1,340,181 | 1,341,039 | Own Object as Data in stl Multimap | I'm writing an application where I want to store strings as keys and a custom object as the value
multimap<string, owncreatedobject> mymap;
Compiling does well, but I get a "Segmentation fault" when using the function insert
during runtime.
mymap.insert(string,myobject); --> Segmentation Error
I already added a copy constructor and an assignment function (which calls the copy constructor).
Any idea about the "Segmentation fault"?
| I fiddled with your problem for the last hour and I think I got it.
Try using something in the matter of this code segment:
multimap<string, yourclass const*> mymap;
void addtomap(string s,yourclass const* c)
{
mymap.insert(pair<string, yourclass const*>(s,c));
}
...addtomap("abc", &z);
In your given example you passed the key and value as two separate arguments, but multimap::insert expects a single key/value pair (its two-argument overload takes a hint iterator as the first argument, not the key). That mismatch is the reason for your trouble and why one should use pair.
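The same fix shown with the value type stored directly (no pointers), using std::make_pair; OwnObject and the names are illustrative:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

struct OwnObject { int id; };

typedef std::multimap<std::string, OwnObject> MyMap;

// insert() takes one key/value pair, not two separate arguments.
void addToMap(MyMap& m, const std::string& key, const OwnObject& obj) {
    m.insert(std::make_pair(key, obj));
}
```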
|
1,340,268 | 1,340,289 | What is Microsoft using as the data type for Unicode Strings? | I am in the process of learning C++ and came across an article on the MSDN here:
http://msdn.microsoft.com/en-us/magazine/dd861344.aspx
In the first code example the one line of code which my question relates to is the following:
VERIFY(SetWindowText(L"Direct2D Sample"));
More specifically that L prefix. I had a little read up, and correct me if I am wrong :-), but this is to allow for unicode strings, i.e. to prep for a long character set. Now in during my read up on this I came across another article on Adavnced String Techniques in C here http://www.flipcode.com/archives/Advanced_String_Techniques_in_C-Part_I_Unicode.shtml
It says there are a few options including the inclusion of the header:
#define UNICODE
OR
#define _UNICODE
in C , again point out if I am wrong, appreciate your feedback. Further it shows the datatype suitable for these unicode strings being:
wchar_t
It throws into the mix a macro and a kind of hybrid datatype, the macro being:
_TEXT(t)
which simply prefixes the string with the L and the hybrid data type as
TCHAR
Which it points out will allow for unicode if the header is there and ASCII if not. Now my question is, or more of an asumption which I would like to confirm, would Microsoft use this TCHAR data type which is more flexible or is there any benefit to committing to using the wchar_t.
Also, when I say does Microsoft use this, more specifically for example in the ATL and WTL libraries, does anyone of you have a preference or some advice regarding this?
Cheers,
Andrew
| For all new software you should define UNICODE and use wchar_t directly. Using ANSI strings will come back to haunt you.
You should just use wchar_t and the wide versions of all the CRT functions (ex: wcscmp instead of strcmp). The TEXT macros and TCHAR etc just exist if your code needs to work in both ANSI and UNICODE environments which I feel code rarely needs to do.
When you create a new windows application using Visual Studio UNICODE is automatically defined and wchar_t will work like a built-in.
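A tiny sketch of the wide CRT family mentioned above, using the portable <cwchar> functions (on Windows these operate on UTF-16 wchar_t; the strings and helper name are illustrative):

```cpp
#include <cassert>
#include <cwchar>

// wcscmp is the wide-character counterpart of strcmp.
bool titles_equal(const wchar_t* a, const wchar_t* b) {
    return std::wcscmp(a, b) == 0;
}
```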
|
1,340,485 | 1,360,952 | Identify user and machine on the local network | In my company we use a small application called IPMsg, a messenger-like tool to pass messages and files to other fellows in the company; it even allows multicasting a message.
And also it lists the user name, host name and IP addresses of users.
How can it do that? There is no server present for message routing and when checked through netstat command in CMD it does not show any details like what protocol and port it is using to communicate.
There is source code also available on the same site which is in VC++. I didn't understand a line of code... (I'm a C# guy)
Can anyone explain me how it can do that?
| IPMsg is a daemon which listens for incoming connections on a specific port, which is the connection port. You can find out which port it uses by using Wireshark.
Start Wireshark, start listening on the interface connected to the LAN, and then send any message; Wireshark will show you the message on the screen along with the port number.
The application is peer-to-peer software and doesn't require central server software to route messages. It only has a small daemon which accepts incoming connections. This is the way the Jabber instant messaging protocol also works.
As you said it lists username, hostname and ip address of users, do you mean it pings the network and finds it? If yes, then it is actually possible to find the IP addresses of computers on the Local Network which requires you to know the subnet on which you are connected.
You can use ARP/ICMP Ping to know the hosts present on your network provided you enter the correct subnet information
Multicasting a message is also nothing special. It is a feature provided by all networking stacks.
If you want multicasting in .NET, it is allowed. Check this page on Code Project which gives a nice example.
|
1,340,577 | 1,340,615 | C++ wstring how to assign from NULL-terminated wchar_t array | Most texts on the C++ standard library mention wstring as being the equivalent of string, except parameterized on wchar_t instead of char, and then proceed to demonstrate string only.
Well, sometimes, there are some specific quirks, and here is one: I can't seem to assign a wstring from a NULL-terminated array of 16-bit characters. The problem is the assignment happily uses the null character and whatever garbage follows as actual characters. Here is a very small reduction:
typedef unsigned short PA_Unichar;
PA_Unichar arr[256];
fill(arr); // sets to 52 00 4b 00 44 00 61 00 74 00 61 00 00 00 7a 00 7a 00 7a 00
// now arr contains "RKData\0zzz" in its 10 first values
wstring ws;
ws.assign((const wchar_t *)arr);
int l = ws.length();
At this point l is not the expected 6 (numbers of chars in "RKData"), but much larger. In my test run, it is 29. Why 29? No idea. A memory dump doesn't show any specific value for the 29th character.
So the question: is this a bug in my standard C++ library (Mac OS X Snow Leopard), or a bug in my code?
How am I supposed to assign a null-terminated array of 16-bit chars to a wstring?
Thanks
| Under most Unixes (Mac OS X as well), wchar_t represents a single UTF-32 code point, not a 16-bit UTF-16 code unit as on Windows.
So you need to:
Either:
ws.assign(arr, arr + length_of_string);
That would use arr as an iterator range and copy each short int to a wchar_t.
But this works only if your characters lie in the BMP or represent UCS-2
(the 16-bit legacy encoding).
Or, to handle UTF-16 correctly, convert UTF-16 to UTF-32: you need to find surrogate pairs and merge each pair into a single code point.
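A compilable sketch of the first option — find the 16-bit terminator by hand, then use the iterator-pair overload so each unit is widened to wchar_t (valid for BMP-only data, as noted; PA_Unichar matches the question):

```cpp
#include <cassert>
#include <string>

typedef unsigned short PA_Unichar;   // 16-bit code units, as in the question

std::wstring from_utf16_bmp(const PA_Unichar* arr) {
    const PA_Unichar* end = arr;
    while (*end) ++end;              // stop at the 16-bit NUL ourselves
    return std::wstring(arr, end);   // each unit widens to a wchar_t
}
```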
|
1,340,658 | 1,340,770 | -fno-omit-frame-pointer without optimization | I was wondering what -fno-omit-frame-pointer will do without optimization?
CXXFLAGS = -Wall -ggdb3 -DDEBUG -fno-omit-frame-pointer
Isn't it that fomit-frame-pointer auto turned on at all levels of -O (except -O0)? I assume in my example it is -O0 by default.
Thanks and regards!
| As you already imply yourself, -fno-omit-frame-pointer is just ignored in your case, as the frame pointer wouldn't be ommitted anyways in the default -O0.
|
1,340,729 | 26,853,142 | How do you generate a random double uniformly distributed between 0 and 1 from C++? | How do you generate a random double uniformly distributed between 0 and 1 from C++?
Of course I can think of some answers, but I'd like to know what the standard practice is, to have:
Good standards compliance
Good randomness
Good speed
(speed is more important than randomness for my application).
Thanks a lot!
PS: In case that matters, my target platforms are Linux and Windows.
| In C++11 and C++14 we have much better options with the random header. The presentation rand() Considered Harmful by Stephan T. Lavavej explains why we should eschew the use of rand() in C++ in favor of the random header and N3924: Discouraging rand() in C++14 further reinforces this point.
The example below is a modified version of the sample code on the cppreference site and uses the std::mersenne_twister_engine engine and the std::uniform_real_distribution which generates numbers in the [0,1) range (see it live):
#include <iostream>
#include <iomanip>
#include <map>
#include <random>
#include <cmath>
int main()
{
std::random_device rd;
std::mt19937 e2(rd());
std::uniform_real_distribution<> dist(0, 1);
std::map<int, int> hist;
for (int n = 0; n < 10000; ++n) {
++hist[std::round(dist(e2))];
}
for (auto p : hist) {
std::cout << std::fixed << std::setprecision(1) << std::setw(2)
<< p.first << ' ' << std::string(p.second/200, '*') << '\n';
}
}
output will be similar to the following:
0 ************************
1 *************************
Since the post mentioned that speed was important then we should consider the cppreference section that describes the different random number engines (emphasis mine):
The choice of which engine to use involves a number of tradeoffs: the
linear congruential engine is moderately fast and has a very small
storage requirement for state. The lagged Fibonacci generators are
very fast even on processors without advanced arithmetic instruction
sets, at the expense of greater state storage and sometimes less
desirable spectral characteristics. The Mersenne twister is slower and
has greater state storage requirements but with the right parameters
has the longest non-repeating sequence with the most desirable
spectral characteristics (for a given definition of desirable).
So if there is a desire for a faster generator perhaps ranlux24_base or ranlux48_base are better choices over mt19937.
rand()
If you are forced to use rand(), then the C FAQ, in its guide on "How can I generate floating-point random numbers?", gives us an example similar to this for generating a number on the interval [0,1):
#include <stdlib.h>
double randZeroToOne()
{
return rand() / (RAND_MAX + 1.);
}
and to generate a random number in the range from [M,N):
double randMToN(double M, double N)
{
return M + (rand() / ( RAND_MAX / (N-M) ) ) ;
}
|
1,341,399 | 1,341,433 | Rasterizing a 2D polygon | I need to create a binary bitmap from a closed 2D polygon represented as a list of points. Could you please point me to efficient and sufficiently simple algorithms to do that, or, even better, some C++ code?
Thanks a lot!
PS: I would like to avoid adding a dependency to my project. However if you suggest an open-source library, I can always look at the code, so it can be useful too.
| The magic google phrase you want is either "non-zero winding rule" or "even odd polygon fill".
See the wikipedia entries for:
non-zero winding rule
even odd polygon fill
Both are very easy to implement and sufficiently fast for most purposes. With some cleverness, they can be made antialiased as well.
|
1,341,528 | 1,343,973 | Find mapping between Windows heap and modules | I am searching for a way to find a mapping between a heap and the module which owns the heap.
I retrieve the heaps in the following way:
HANDLE heaps[1025];
DWORD nheaps = GetProcessHeaps((sizeof(heaps) / sizeof(HANDLE)) - 1, heaps);
for (DWORD i = 0; i < nheaps; ++i) {
// find module which created for heap
// ...
}
The reason why I want to do that is that in my application I find around 40 heaps; some are standard heaps, others are low-fragmentation heaps. Now I am trying to figure out which module uses which kind of heap.
Thanks a lot!
| Add a HeapCreate call to the very beginning of your program and put a breakpoint on it. Run. Step into the call (going to the disassembly level). Set a new breakpoint. Now continue and the breakpoint should be hit each time a new heap is created. The call stack will show you where it came from.
If the heaps are being created by global objects, those will happen before main(). You can poke around in your C run-time start-up code to set your breakpoint even earlier.
|
1,341,793 | 1,341,863 | Symbian character printing | I am attempting to construct a very simple proof of concept that I can write a web service and actually call the service from a symbian environment. The service is a simple Hello service which takes a name in the form of a const char* and returns back a greeting of the form "hello " + name in the form of a char*. My question is, how do I convert a char* to a TPtrC16 so that I can use the console->Write function to print out the response to screen? I know I could search through the API and figure this out, but for a basic conceptual demo I'd rather not spend the time (not sure that Symbian is something I will ever work with again).
Thanks!
| If the const char* string is in US-ASCII, you can use TDes::Copy to copy it wrapped in a TPtrC8 to a 16-bit descriptor:
const char *who = "world";
TBuf<128> buf;
buf.Copy(TPtrC8((TText8*)who));
console->Printf(_L("hello %S\n"), &buf);
If it is in some other encoding, have a look at the charconv API in the SDK help.
|
1,341,796 | 1,346,204 | Print n levels of callstack? | Using C++ with Visual Studio, I was wondering if there's an API that will print the callstack for me. Preferably, I'd like to print a callstack 5 levels deep. Does windows provide a simple API to allow me to do this?
| There are a number of ways to do this.
See How to Log Stack Frames with Windows x64
In my opinion, the simplest and as well the most reliable way is the Win32 API function:
USHORT WINAPI CaptureStackBackTrace(
__in ULONG FramesToSkip,
__in ULONG FramesToCapture,
__out PVOID *BackTrace,
__out_opt PULONG BackTraceHash
);
The FramesToCapture parameter determines the maximum call stack depth returned.
|
1,341,903 | 1,342,557 | C++-like usage of Moose with Perl for OOP | I've been playing around with Moose, getting a feel for it. I'd like an example of pure virtual functions like in C++ but in Moose parlance (specifically in a C++-looking way). I know that even with Moose imposing a stricter model than normal Perl, there's still more than one way to do what I'm asking (via method modifiers or SUPER:: calls). That is why I'm asking specifically for an implementation resembling C++ as much as possible. As for the "why?" of this restriction? Mostly curiosity, but also planning to port some C++ code to Perl with Moose in a way that C++-centric people could mostly identify with.
| I can think of this way using roles instead of subclassing:
{
package AbstractRole;
use Moose::Role;
requires 'stuff';
}
{
package Real;
use Moose;
with 'AbstractRole';
}
This will give a compilation error because Real doesn't have stuff defined.
Adding stuff method to Real will now make it work:
{
package Real;
use Moose;
with 'AbstractRole';
sub stuff { print "Using child function!\n" }
}
|
1,342,045 | 1,342,077 | How do I find the largest int in a std::set<int>? | I have a std::set<int>, what's the proper way to find the largest int in this set?
| What comparator are you using?
For the default this will work:
if(!myset.empty())
*myset.rbegin();
else
//the set is empty
This will also be constant time instead of linear like the max_element solution.
|
1,342,118 | 1,342,131 | Is it possible to override the array access operator for pointers to an object in C++? | I'm trying to do some refactoring of code, and have run into a problem. The program has a data manager that returns pointers to arrays of structures as a void*. One of the new types of data, instead of having a single pointer to an array of structures, has two pointers to arrays of numbers. The problem is that all the processing code is done by accessing array[index].qwTimestamp and array[index].snSample which is common to all record types.
I thought that doing an override of the array access operator( [] ) like the following might solve the problem:
class ADRec {
public:
ADRec(unsigned __int64* ts, __int32* data, unsigned index = 0): mTimestamps(ts), mDataPoints(data), mIndex(index) {
qwTimeStamp = mTimestamps[mIndex];
snSample = mDataPoints[mIndex];
}
ADRec operator[](unsigned i) {
return ADRec(mTimestamps, mDataPoints, i);
}
unsigned __int64 qwTimeStamp;
__int32 snSample;
private:
unsigned __int64* mTimestamps;
__int32* mDataPoints;
unsigned mIndex;
};
This approach works fine if you are using an object:
unsigned __int64 ts[] = { 2, 3, 4, 5};
__int32 data[] = {4, 6, 8, 10};
ADRec tmp = ADRec(ts, data, 0);
ASSERT(tmp[0].qwTimeStamp == 2);
ASSERT(tmp[0].snSample == 4);
ASSERT(tmp[1].qwTimeStamp == 3);
ASSERT(tmp[1].snSample == 6);
But fails if you use a pointer to an object:
unsigned __int64 ts[] = { 2, 3, 4, 5};
__int32 data[] = {4, 6, 8, 10};
ADRec* tmp = new ADRec(ts, data, 0);
ASSERT(tmp[0].qwTimeStamp == 2);
ASSERT(tmp[0].snSample == 4);
ASSERT(tmp[1].qwTimeStamp == 3); //fails
ASSERT(tmp[1].snSample == 6); //fails
C++ is indexing off of the pointer when tmp[1] is called, and thus pointing to random memory.
Is it possible to override the way C++ indexes off of a pointer to an object, or some other mechanism that would accomplish the same goal?
| No, it isn't possible - pointers are considered to be of a built-in type and so cannot have their operators overloaded. However, you can certainly
create smart pointer classes (classes that act like pointers, but with added abilities) and overload their operators - take a look at your compiler's implementation of std::auto_ptr, for example.
|
1,342,126 | 1,342,207 | Virtual Table layout in memory? | How are virtual tables stored in memory? What is their layout?
e.g.
class A{
public:
virtual void doSomeWork();
};
class B : public A{
public:
virtual void doSomeWork();
};
How will be the layout of virtual tables of class A and class B in memory?
| As others have said, this is compiler dependant, and not something that you ever really need to think about in day-to-day use of C++. However, if you are simply curious about the issue, you should read Stan Lippman's book Inside the C++ Object Model.
|
1,342,158 | 1,342,195 | How to figure out what value MSVC is using for a preprocessor macro | I'm attempting to use a /D compiler option on MSVC6 to define a string, but there's something weird about using double quotes around it. To debug this problem, it would be extremely helpful for me to be able to see what value the preprocessor is actually substituting into my code where the macro is expanded. Is there any way I can do this? I tried creating a Listing file with "assembly and source", but the source contains the original macro name and the ASM is some incomprehensible gibberish at that line. Is there a way to get the macro value at compile time?
Failing that (or perhaps more useful), how do I specify a string with the /D option? It needs to substitute into my source with double quotes around it, since I'm using it as a string literal.
| Try one of the following options to CL.exe:
/E preprocess to stdout
/P preprocess to file
If you're building within Visual Studio, you can specify custom command-line options in one of the project property dialogs.
|
1,342,211 | 1,346,223 | Does SHGetPathFromIDList() (and similar) put a terminating 0 in its argument? | This is actually a question about a huge number of winapi functions.
A typical MS documentation says (from http://msdn.microsoft.com/en-us/library/bb762194(VS.85).aspx ):
BOOL SHGetPathFromIDList(
PCIDLIST_ABSOLUTE pidl,
LPTSTR pszPath
);
pidl [in] The address of an item identifier list that specifies a file
or directory location relative to the root of the namespace (the desktop).
pszPath [out] The address of a buffer to receive the file system path.
This buffer must be at least MAX_PATH characters in size.
Nowhere does it say about whether a terminating 0 is written to pszPath. Also, it doesn't say whether the path can fill the pszPath, leaving no room for 0 there.
Googling around yields about a 50/50 distribution of users who allocate a buffer with MAX_PATH+1 chars and users who only deal with MAX_PATH.
While I can certainly do something like char buf[MAX_PATH+1]={0} to be on the safe side, I would really like to know - is there some place where this stuff is described? Some page for all path-related functions maybe, I don't know...
| To answer the title question: Yes. It's part of the definition of LPTSTR - a pointer to a string. It is also reflected in the prefix: psz - "Pointer (to) String (terminated by) Zero".
There is a non-null-terminated string type as well, but it's rare in userland APIs: UNICODE_STRING. You see it mostly in kernel-level APIs.
|
1,342,299 | 1,342,413 | Windows Mobile - Stop Main Phone App | Is there a way to make Windows Mobile not use the main phone app? I have my own phone app that I want to handle phone transactions for a business device.
My app works fine (detects the call and can hang up), but the main phone app still wants to allow the user to answer a call normally. I can try to hide the incoming call window or programmatically press the ignore key, but that is a bit clunky.
Basically, I need a way to make the built in phone app not know about incoming calls.
Any advice would be appreciated!
In case it matters I am using a Symbol MC70 running Windows Mobile 5.
Thanks!
EDIT: Thanks to djhowell's answer to this question I now know that the offending app is cprog.exe. But apparently it is hard to kill because services.exe keeps bringing it back.
| First of all, you should not do it. Replacing system dialer will create you more troubles than you can expect.
If you still want to do it, there is no nice way to do it, even if you opt to use RIL directly. So, there is a trick in which you create a dummy cprog.exe (which does absolutely nothing), and put in the root folder . After the phone boots, that program will be started instead of the native one that is located in the \Windows folder. Then no program will be listening for incoming calls.
|
1,342,321 | 1,342,352 | Template Partial Specialization - any real-world example? | I am pondering about partial specialization. While I understand the idea, I haven't seen any real-world usage of this technique. Full specialization is used in many places in STL so I don't have a problem with that. Could you educate me about a real-world example where partial specialization is used? If the example is in STL that would be superior!
| C++0x comes with unique_ptr which is a replacement for auto_ptr which is going to be deprecated.
If you use unique_ptr with an array type, it uses delete[] to free it, and to provide operator[] etc. If you use it with a non-array type, it uses delete. This needs partial template specialization like
template<typename T>
struct my_unique_ptr { ... };
template<typename T>
struct my_unique_ptr<T[]> { ... };
Another use (although a very questionable) is std::vector<bool, Allocator> in the standard library. The bool specialization uses a space optimization to pack bools into individual bits
template<typename T, typename Allocator = std::allocator<T> >
struct vector { ... };
template<typename Allocator>
struct vector<bool, Allocator> { ... };
Yet another use is with std::iterator_traits<T>. Iterators are required to define the nested typedefs value_type, reference and others to the correct types (for a const iterator, reference would usually be T const&, for example) so algorithms may use them for their work. The primary template uses type-members of the iterator type in turn
template<typename T>
struct iterator_traits {
typedef typename T::value_type value_type;
...
};
For pointers, that of course doesn't work. There is a partial specialization for them
template<typename T>
struct iterator_traits<T*> {
typedef T value_type;
...
};
|
1,342,480 | 1,342,641 | Check if input is blank when input is declared as double [C++] | I have three variable declared as doubles:
double Delay1 = 0;
double Delay2 = 0;
double Delay3 = 0;
I Then get their values from the user:
cout << "Please Enter Propogation Delay for Satellite #1:";
cin >> Delay1;
...
But when I check these values to see if they are null (user just hit enter and did not put a number) it doesn't work:
if(Delay1 || Delay2 || Delay3 == NULL)
print errror...
This runs every time.
What is the proper way to check if an input that has been declared a double is blank?
| Something like
cin >> Delay1;
if(cin) { ... }
won't work according to your specification, because cin will skip leading whitespace. The user can't just hit enter. He first has to enter some text. If he enters the following
3a
Then the input is read into the double, up to a, where it stops. cin won't find anything wrong, and leaves a in the stream. Often, this is enough error handling, i think. But if it's a requirement that you want to actually repeat when the user enters something like above, then you need a bit more code.
If you want to test whether the whole input up to the newline is a number, then you should use getline, read into a string and then try to convert to a number
string delay;
if(!getline(std::cin, delay) || !isnumber(delay)) {
...
}
The isnumber function can use a stringstream to test the string
bool isnumber(string const &str) {
std::istringstream ss(str);
double d;
// allow leading and trailing whitespace, but no garbage
return (ss >> d) && (ss >> std::ws).eof();
}
The operator>> will eat leading whitespace, and std::ws will consume trailing whitespace. If it hits to the end of the stream, it will signal eof. This way, you can signal the error to the user immediately, instead of erroring out at the next time you try to read from cin.
Write a similar function that returns the double, or pass the address of a double to isnumber, so that it can write the result in case of a successful parse.
It's also worth to have a look at the various error flags and how they relate to operator void*, operator!, good(), fail(), bad() and eof() which can be quite confusing:
flag | badbit | failbit | eofbit
function | | |
-----------------+---------+-----------+--------
op void* | x | x |
-----------------+---------+-----------+--------
op ! | x | x |
-----------------+---------+-----------+--------
good() | x | x | x
-----------------+---------+-----------+--------
fail() | x | x |
-----------------+---------+-----------+--------
bad() | x | |
-----------------+---------+-----------+--------
eof() | | | x
-----------------+---------+-----------+--------
There is an x if the respective bit influences the result. operator void* is used when converting to bool (if(cin) ...) while operator! is used for code doing !cin
|
1,343,046 | 1,343,098 | Speed comparison of 2 loop styles | I'm reading about STL algorithms and the book pointed out that algorithms like find use a while loop rather than a for loop because it is minimal, efficient, and uses one less variable. I decided to do some testing and the results didn't really match up.
The forfind consistently performed better than the whilefind. At first I simply tested by pushing 10000 ints back into a vector, and then using find to get a single value from it and return it to the iterator. I timed it and output that time.
Then I decided to change it so that the forfind and whilefind functions were used multiple times (in this case 10000 times). However, the for loop find still came up with better performance than the while find. Can anyone explain this? Here is the code.
#include "std_lib_facilities.h"
#include<ctime>
template<class ln, class T>
ln whilefind(ln first, ln last, const T& val)
{
while (first!=last && *first!=val) ++first;
return first;
}
template<class ln, class T>
ln forfind(ln first, ln last, const T& val)
{
for (ln p = first; p!=last; ++p)
if(*p = val) return p;
return last;
}
int main()
{
vector<int> numbers;
vector<int>::iterator whiletest;
vector<int>::iterator fortest;
for (int n = 0; n < 10000; ++n)
numbers.push_back(n);
clock_t while1 = clock(); // start
for (int i = 0; i < 10000; ++i)
whiletest = whilefind(numbers.begin(), numbers.end(), i);
clock_t while2 = clock(); // stop
clock_t for1 = clock(); // start
for (int i = 0; i < 10000; ++i)
fortest = forfind(numbers.begin(), numbers.end(), i);
clock_t for2 = clock(); // stop
cout << "While loop: " << double(while2-while1)/CLOCKS_PER_SEC << " seconds.\n";
cout << "For loop: " << double(for2-for1)/CLOCKS_PER_SEC << " seconds.\n";
}
The while loop consistently reports taking around .78 seconds and the for loop reports .67 seconds.
| if(*p = val) return p;
That should be a ==. So forfind will only go through the entire vector for the first value, 0, and return immediately for numbers 1-9999.
|
1,343,320 | 1,343,365 | When debugging on Windows where does stderr go? | When trying to debug a program on Windows I can't seem to find where the output I push to stderr is going. How do I get a hold of my stderr output? Is there a debugger-level setting (MSVC 9) I can change to redirect stderr to some part of the UI?
Update: I have not looked into TRACE or OutputDebugString, but the code base is cross-platform, so platform-specific APIs, while not totally off the table, are secondary to a standards-compliant solution.
| When you have a GUI process, stderr should show up in the Output window in Visual Studio. You can open a new console window if you want to have the output go there instead. See my answer to this question for details.
|
1,343,324 | 1,343,351 | Find out what a random number generator was seeded with in C++ | I've got an unmanaged c++ console application in which I'm using srand() and rand(). I don't need this to solve a particular problem, but was curious: is the original seed passed to srand() stored somewhere in memory that I can query? Is there any way to figure out what the seed was?
| The seed is not required to be stored, only the last random number returned is.
Here's the example from the manpage:
static unsigned long next = 1;
/* RAND_MAX assumed to be 32767 */
int myrand(void) {
next = next * 1103515245 + 12345;
return((unsigned)(next/65536) % 32768);
}
void mysrand(unsigned seed) {
next = seed;
}
|
1,343,577 | 1,343,599 | Checking if a registry key exists | I am looking for a clean way to check if a registry key exists. I had assumed that RegOpenKey would fail if I tried to open a key that didn't exist, but it doesn't.
I could use string processing to find and open the parent key of the one I'm looking for, and then enumerate the subkeys of that key to find out if the one I'm interested in exists, but that feels both like a performance hog and a weird way to have to implement such a simple function.
I'd guess that you could use RegQueryInfoKey for this somehow, but MSDN doesn't give too many details on how, even if it's possible.
Update: I need the solution in Win32 api, not in managed code, .NET or using any other library.
The docs in MSDN seem to indicate that you should be able to open a key for read permission and get an error if it doesn't exist, like this:
lResult = RegOpenKeyEx (hKeyRoot, lpSubKey, 0, KEY_READ, &hKey);
if (lResult != ERROR_SUCCESS)
{
if (lResult == ERROR_FILE_NOT_FOUND) {
However, I get ERROR_SUCCESS when I try this.
Update 2: My exact code is this:
HKEY subKey = nullptr;
LONG result = RegOpenKeyEx(key, subPath.c_str(), 0, KEY_READ, &subKey);
if (result != ERROR_SUCCESS) {
... but result is ERROR_SUCCESS, even though I'm trying to open a key that does not exist.
Update 3: It looks like you guys are right. This fails on one specific test example (mysteriously). If I try it on any other key, it returns the correct result. Double-checking it with the registry editor still does not show the key. Don't know what to make of all that.
| First of all don't worry about performance for stuff like this. Unless you are querying it 100x per sec, it will be more than fast enough. Premature optimization will cause you all kinds of headaches.
RegOpenKeyEx will return ERROR_SUCCESS if it finds the key. Just check against this constant and you are good to go.
|
1,343,626 | 1,343,648 | Interprocess Communication in C++ | I have a simple c++ application that generates reports on the back end of my web app (simple LAMP setup). The problem is the back end loads a data file that takes about 1.5GB in memory. This won't scale very well if multiple users are running it simultaneously, so my thought is to split into several programs :
Program A is the main executable that is always running on the server, and always has the data loaded, and can actually run reports.
Program B is spawned from php, and makes a simple request to program A to get the info it needs, and returns the data.
So my questions are these:
What is a good mechanism for B to ask A to do something?
How should it work when A has nothing to do? I don't really want to be polling for tasks or otherwise spinning my tires.
| Use a named mutex/event. Basically, what this does is allow one thread (process A in your case) to sit there waiting. Then process B comes along, needing something done, and signals the mutex/event; this wakes up process A, and you proceed.
If you are on Microsoft :
Mutex, Event
Ipc on linux works differently, but has the same capability:
Linux Stuff
Or alternatively, for the c++ portion you can use one of the boost IPC libraries, which are multi-platform. I'm not sure what PHP has available, but it will no doubt have something equivalent.
|
1,343,654 | 1,343,762 | What are the pros & cons of pre-compiled headers specifically in a GNU/Linux environment/tool-chain? | Pre-compiled headers seem like they can save a lot of time in large projects, but also seem to be a pain-in-the-ass that have some gotchas.
What are the pros & cons of using pre-compiled headers, and specifically as it pertains to using them in a Gnu/gcc/Linux environment?
| The only potential benefit to precompiled headers is that if your builds are too slow, precompiled headers might speed them up. Potential cons:
More Makefile dependencies to get right; if they are wrong, you build the wrong thing fast. Not good.
In principle, not every header can be precompiled. (Think about putting some #define's before a #include.) So which cases does gcc actually get right? How much do you want to trust this bleeding edge feature.
If your builds are fast enough, there is no reason to use precompiled headers. If your builds are too slow, I'd consider
Buying faster hardware, which is cheap compared to salaries
Using a tool like AT&T nmake or like ccache (Dirk is right on), both of which use trustworthy techniques to avoid recompilations.
|
1,343,864 | 1,343,892 | Automatically port conversion elements of C code to C++ | Is there some tool that can automatically convert the following c style code
A *a = b;
to
A *a = (A*)b;
Thanks,
James
| Assuming this is to eliminate compiler errors, I would probably write one myself. Run the compiler on the source, and redirect error messages to a file. Filter out the errors where it complains about the type. For example, in gcc, they will look like this:
a.cc:3: error: invalid conversion from ‘int’ to ‘int*’
This gives you all you need: file and line number, as well as the type you need to cast to (i.e. int*). Find a likely place in the line to insert the cast (i.e. after the = character, or after the return statement), and try again. Keep track of the lines that you already edited, and skip them for human intervention.
|
1,344,040 | 1,344,086 | documentation for STL | I have spent the last several years fighting tooth and nail to avoid working with C++ so I'm probably one of a very small number of people who likes systems programming and template meta programming but has absolutely no experience when it comes to the STL and very little C++ template experience.
Does anyone know of a good document for getting started using STL?
I'd prefer PDF or something else I can kill trees with and I'm looking for something more along the lines of a reference than a tutorial (although an 80/20 split would be nice there).
I ended up using the docs from here, printing them out via a PDF driver and tacking them together with this idea. Now I'm off to print them off 2-up double sided (190 pages even so, but I have >1k pages in my quota and only 4 months till graduation).
| Here is the reference I'm using: SGI, Offline Download
Here is another reference
|
1,344,052 | 1,350,332 | boost microsec_time_clock.hpp warning C4244 | I'm new in using boost and have a problem. I need shared_mutex function in my project. So I've done
#include "boost/thread/shared_mutex.hpp"
And compiled my project. My MSVC 2005 with "treat warnings as errors" stops compilation because of a warning:
c:\\...\microsec_time_clock.hpp(103) : warning C4244: 'argument' : conversion from 'int' to 'unsigned short', possible loss of data
I have no idea, why shared_mutex needs microseconds function (I've read than boost libraries have rather big dependences list), but i can't compile my project. I've googled a bit, found same problem, but no decision.
UPDATE: I'm compiling boost now, but i want to put all sources to my open-source project, including boost.thread.shared_mutex.
| I'll bet they're doing a += on an unsigned short. The result of the addition gets cast to an int implicitly, then needs to be downcast back to an unsigned short for the assignment.
|
1,344,631 | 1,344,674 | How can I create an executable to run on a certain processor architecture (instead of certain OS)? | So I take my C++ program in Visual studio, compile, and it'll spit out a nice little EXE file. But EXEs will only run on windows, and I hear a lot about how C/C++ compiles into assembly language, which is runs directly on a processor. The EXE runs with the help of windows, or I could have a program that makes an executable that runs on a mac. But aren't I compiling C++ code into assembly language, which is processor specific?
My Insights:
I'm guessing I'm probably not. I know there's an Intel C++ compiler, so would it make processor-specific assembly code? EXEs run on windows, so they advantage of tons of things already set up, from graphics packages to the massive .NET framework. A processor-specific executable would be literally starting from scratch, with just the instruction set of the processor.
Would this executable be a file-type? We could be running windows and open it, but then would control switch to processor only? I assume this executable would be something like an operating system, in that it would have to be run before anything else was booted up, and have only the processor instruction set to "use".
| Let's think about what "run" means...
Something has to load the binary codes into memory. That's an OS feature. The .EXE or binary executable file or bundle or whatever, is formatted in a very OS-specific way so that the OS can load it into memory.
Something has to turn control over to those binary codes. There's the OS, again.
The I/O routines (in C++, but this is true in most places) are just a library that encapsulate OS API's. Drat that OS, it's everywhere.
Reminiscing.
In the olden days (yes, I'm this old) I worked on machines that didn't have OS's. We also didn't have C.
We wrote machine codes using tools like "assemblers" and "linkers" to create big binary images that we could load into the machine. We had to load these binary images through a painful bootstrap process.
We'd use front panel keys to load enough code into memory to read a handy device like a punched paper-tape reader. This would load a small piece of fairly standard boot linking loader software. (We used mylar tape so it wouldn't wear out.)
Then, when we had this linking loader in memory, we could feed the tape we'd prepared earlier with the assembler.
We wrote our own device drivers. Or we used library routines that were in source form, punched on paper tapes.
A "patch" was actually patched pieces of paper tape. Plus, since there were also little bugs, we'd have to adjust the memory image based on hand-written instructions -- patches that hadn't been put into the tape.
Later, we had simple OS's that had simple API's, simple device drivers, and a few utilities like a "file system", an "editor" and a "compiler". It was for a language called Jovial, but we also used Fortran sometimes.
We had to solder serial interface boards so we could plug in a device. We had to write device drivers.
Bottom Line.
You can easily write C++ programs that don't require an OS.
Learn about the hardware BIOS (or BIOS-like) facilities that are part of your processor's chipset. Most modern hardware has a simple OS wired into ROM that does power-on self-test (POST), loads a few simple drivers, and locates boot blocks.
Learn how to write your own boot block. That is the first proper "software" thing that's loaded after POST. This isn't all that hard. You can use various partitioning tools to force your boot block program onto a disk and you'll have complete control over the hardware. No OS.
Learn how GRUB, LILO or BootCamp launch an OS. It's not complicated. Once they're booted, they can load your program and you're off and running. This is slightly simpler because you create the kind of partition that a boot loader wants to load. Base yours on the Linux kernel and you'll be happier. Don't try to figure out how Windows boots -- it's too complicated.
Read up on ELF. http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
Learn how device drivers are written. If you don't use an OS, you'll need to write device drivers.
|
1,344,684 | 1,344,704 | why a variable declared but not used may cause a error? | If i declare a variable but not use it later in the program, the complier will give me a warning, and since "every warning should not be ignored", why the warning is there? and how can it cause a error? thanks!
| First, a minor point: declaring a variable that's never used is a waste of memory, and thus is itself a bug.
Second, and more importantly: you took the trouble of writing out a declaration for a variable you then never used. Since you would not have bothered to declare a variable if you had no plan to use it, this suggests you've forgotten to use it! Is it possible you typed the wrong variable name in its place? Is it possible you forgot to perform a critical calculation whose result you'd store in that variable?
Of course, you might just have declared something you ended up not needing, which is why it's a warning and not an error, but it's easy to see situations where that warning can point you to an important piece of missing code, which would indeed be a bug.
|
1,345,115 | 1,345,139 | C++ class design problem | I have a class that executes the MSNP15 protocol. The protocol requires clients to perform frequent connection/disconnection to various servers like the dispatch server, login server and the switchboard server.
I decided to store the protocol related variables ( like ticket tokens, nonce etc ) as static member variables in a utility class like below:
class MsnUtility
{
public:
static void SetChallengeStringL ( const char *string );
static const char* GetChallengeString ( );
static void SetContactTicketL ( const char *ticket );
static const char* GetContactTicket ( );
private:
MsnUtility();
static char *iChallengeString;
static char *iContactTicket;
};
The static variables above are initialized to NULL at startup and then newed when the tokens become available as the protocol executes.
Since I don't have access to the C++ standard library (as I am developing on the Symbian S60 platform), I cannot use the string library. Will the allocated character pointers be freed when the program exits, or is there any other mechanism by which I could ensure they are freed?
I am also open to alternative design suggestions.
| If you allocate the memory, then you are the one who must release it. Since the members are static, they do not belong to any instance of the class. So you will have to ensure that the memory is released after the last possible use of the character pointers. This is often very difficult to determine.
I think a better idea in this case would be to have a singleton class with all the needed tokens. Make this class globally accessible and provide the necessary setters/getters for token manipulation. Then, when the dtor of the singleton class is called at program exit, you can de-allocate the memory.
Something on the following lines :-
class TokenDict {
public:
static TokenDict& instance() {
static TokenDict instance;
return instance;
}
// getters \ setters for tokens
void setToken(char* tptr) {
if(token1)
delete[] token1;
// allocate memory for the new token (+1 for the terminating NUL)
token1 = new char[strlen(tptr) + 1];
strcpy(token1, tptr); // copy tptr into token1
}
char const* getToken() const { return token1; }
private:
~TokenDict()
{
delete[] token1;
}
TokenDict() : token1(0) // ctor hidden
{ }
TokenDict(TokenDict const&); // copy ctor hidden
TokenDict& operator=(TokenDict const&); // assign op. hidden
char* token1;
};
|
1,345,179 | 1,345,198 | How to Draw Two Detached Rectangles in DirectX using the D3DPT_TRIANGLESTRIP Primitive Type | I am new to DirectX and I am trying to draw two rectangles in one scene using D3DPT_TRIANGLESTRIP. One Rectangle is no problem but two Rectangles is a whole different ball game. Yes i could draw them using four triangles drawn with the D3DPT_TRIANGLELIST primitive type. My curiosity is on the technic involved using D3DPT_TRIANGLESTRIP.
Parts of the code I am using for one Rectangle using D3DPT_TRIANGLESTRIP are as follows:
CUSTOMVERTEX recVertex[] = {
{ 10.0f, 10.0f, 0.10f, 1.0f, 0xffffffff, }, // x, y, z, rhw, color
{ 220.0f, 10.0f, 0.10f, 1.0f, 0xffffffff, },
{ 10.0f, 440.0f, 0.10f, 1.0f, 0xffffffff, },
{ 220.0f, 440.0f, 0.10f, 1.0f, 0xffffffff, },
};
if( FAILED( g_pd3dDevice->CreateVertexBuffer( 4 * sizeof( CUSTOMVERTEX ),
0, D3DFVF_CUSTOMVERTEX, D3DPOOL_DEFAULT, &g_pVB, NULL ) ) )
{
return E_FAIL;
}
More vital code...
VOID* precVertex;
if( FAILED( g_pVB->Lock( 0, sizeof( recVertex ), ( void** )&precVertex, 0 ) ) )
{
return E_FAIL;
}
memcpy( precVertex, recVertex, sizeof( recVertex ) );
then Render like so...
g_pd3dDevice->SetStreamSource( 0, g_pVB, 0, sizeof( CUSTOMVERTEX ) );
g_pd3dDevice->SetFVF( D3DFVF_CUSTOMVERTEX );
g_pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 0, 2 );
Based on this model I could easily duplicate the code change the x and y values on the Custom Vertex and create another vertex buffer and it would work.
Personally I feel this is not the best way to go, especially considering a situation where I have to draw, say, 100 rectangles or something like that. The thing is, I don't have other ideas. So my question is, what is the most efficient way to draw two Rectangles with D3DPT_TRIANGLESTRIP? Also, is there a possible way of duplicating and transforming the current rectangle?
| Add the four vertices for the second rectangle to the vertex buffer, then call DrawPrimitive twice.
g_pd3dDevice->SetStreamSource( 0, g_pVB, 0, sizeof( CUSTOMVERTEX ) );
g_pd3dDevice->SetFVF( D3DFVF_CUSTOMVERTEX );
g_pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 0, 2 );
g_pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 4, 2 );
|
1,345,240 | 1,346,014 | Do you know a good freeware that creates C++ or C# classes (data model) from XSD? | I know of examples, but they are not free tools. I need one that is free and reliable.
Thanks!
| I am not sure what type of "free" you are looking for, but I have used the open source CodeSynthesis XSD tool www.codesynthesis.com/products/xsd/ for C++ data binding.
|
1,345,300 | 1,345,336 | Equivalent code in managed C++ & C# (Events in VB6) | In VB6, events created in an ActiveX component were declared like this:
Public Event ProcessingComplete()
and called in that ActiveX component like:
RaiseEvent ProcessingComplete
I am creating a managed C++ DLL that I want to do the same thing with. It doesn't look like delegates are exactly what I want. I think the more appropriate item is an __event declaration. Help?!?
In the end, I have a C# application that I want to have a function like this:
MyObject::ProcessingComplete() <--- This being the called function when "RaiseEvent" occurs.
{
}
Thanks.
| It does sound like you want an event. In .NET, an event is just a delegate which by convention has a special signature. Here is a C# example of declaring an event in a class:
public class MyObject
{
// ...
public event EventHandler ProcessingComplete;
// ...
}
EventHandler is a delegate with two parameters:
public delegate void EventHandler(object sender, EventArgs e);
The sender is the object which raised the event and the EventArgs encode any information you want to pass to an event subscriber.
Every event is expected to follow this convention. If you wish to communicate specialized information for your event, you can create your own class derived from EventArgs. .NET defines a generically typed EventHandler delegate for this purpose, EventHandler<TEventArgs>. C# example:
class ProcessingCompleteEventArgs : EventArgs
{
public ProcessingCompleteEventArgs(int itemsProcessed)
{
this.ItemsProcessed = itemsProcessed;
}
public int ItemsProcessed
{
get;
private set;
}
}
// ...
// event declaration would look like this:
public event EventHandler<ProcessingCompleteEventArgs> ProcessingComplete;
To subscribe to an event, use the += operator. To unsubscribe, use the -= operator.
void Start()
{
this.o = new MyObject();
this.o.ProcessingComplete += new EventHandler(this.OnProcessingComplete);
// ...
}
void Stop()
{
this.o.ProcessingComplete -= new EventHandler(this.OnProcessingComplete);
}
void OnProcessingComplete(object sender, EventArgs e)
{
// ...
}
Inside your class, to fire the event, you can use the normal syntax to invoke a delegate:
void Process()
{
// ...
// processing is done, get ready to fire the event
EventHandler processingComplete = this.ProcessingComplete;
// an event with no subscribers is null, so always check!
if (processingComplete != null)
{
processingComplete(this, EventArgs.Empty);
}
}
|
1,345,305 | 1,373,515 | VTK Delete() and data deletion | I was browsing through the VTK 5.4.2 code, and can't seem to understand how does the Delete() function works. I mean, the Delete() function is in vtkObjectBase and is virtual, but, through what chain of commands is the destructor of the vtkDoubleArray class (for example) executed?
best regards,
mightydodol
| vtkObjectBase's Delete() will call UnRegisterInternal. If the object's ReferenceCount is less than or equal to 1, it will call delete on the object; since the destructor is virtual, that runs the destructor of the most-derived class (e.g. vtkDoubleArray).
|
1,345,382 | 1,345,418 | Binding temporary to a lvalue reference | I have the following code
string three()
{
return "three";
}
void mutate(string& ref)
{
}
int main()
{
mutate(three());
return 0;
}
You can see I am passing three() to the mutate method. This code compiles fine. My understanding is that temporaries can't be bound to non-const references. If so, how does this program compile?
Any thoughts?
Edit:
Compilers tried : VS 2008 and VS2010 Beta
| It used to compile under the VC6 compiler, so I guess VS2008 supports this non-standard extension to maintain backward compatibility. Try the /Za (disable language extensions) flag; you should get an error then.
|
1,345,425 | 1,346,810 | How to access Digital I/O using USB | How do I access digital I/O over USB using C, C++, VB.NET, or C#?
| I use the Velleman K8055 USB EXPERIMENT INTERFACE BOARD
It is simple to program for, and has several inputs and outputs
I got one from Maplin for less than £30
|
1,345,478 | 1,345,911 | How to detect the amount of stack space available to my program? | My Win32 C++ application acts as an RPC server - it has a set of functions for processing requests and RPC runtime creates a separate thread and invokes one of my functions in that thread.
In my function I have an std::auto_ptr which is used to control a heap-allocated char[] array of size known at compile time. It accidentally works when compiled with VC++, but it's undefined behaviour according to the C++ standard and I'd like to get rid of it.
I have two options: std::vector or a stack-allocated array. Since I have no idea why there's a heap-allocated array I would like to consider replacing it with a stack-allocated one. The array is 10k elements and I can hypothetically face a stack overflow if the RPC runtime spawns a thread with a very small stack.
I would like to detect how much stack space is typically allocated to the thread and how much of it is available to my function (its callees certainly consume some of the allocated space). How could I do that?
| I'm not sure what you're after.
If you just want typical numbers, then go ahead and try! Create a function with nested scopes, each of which allocates some more stack space. Output in each scope. See how far the thing gets.
If you want concrete numbers in a concrete situation, ask yourself what you would want to do once you have them? Branch into different implementations? This sounds like a maintenance problem the use of which should be very well justified. What do you expect to gain? Is this really worth such a hassle?
I agree that 10k usually shouldn't be a problem. So if your code isn't mission critical, go ahead and use boost::array (or std::tr1::array, if your std lib comes with it). Otherwise just use std::vector or, if you feel you must, boost::scoped_array.
|
1,345,745 | 1,346,124 | Visual C++ - Why bother with Debug Mode? | So I have just followed the advice in enabling debug symbols for Release mode and after enabling debug symbols, disabling optimization and finding that break-points do work if symbols are compiled with a release mode, I find myself wondering...
Isn't the purpose of Debug mode to help you to find bugs?
Why bother with Debug mode if it lets bugs slip past you?
Any advice?
| Debug mode doesn't "let bugs slip past you". It inserts checks to catch a large number of bugs, but the presence of these checks may also hide certain other bugs. All the error-checking code catches a lot of errors, but it also acts as padding, and may hide subtle bounds errors.
So that in itself should be plenty of reason to run both. MSVC performs a lot of additional error checking in Debug mode.
In addition, many debug tools, such as assert rely on NDEBUG not being defined, which is the case in debug builds, but not, by default, in release builds.
|
1,345,885 | 1,345,903 | Best approach for common functionality | I have built up a number of common modules which I have been hitherto keeping in one directory and referencing from the project directories that need them.
I was wondering if there was a better way of doing this ?
Both the common modules and the code using them are written in C++.
| I would suggest creating a static library and linking with it. It's much simpler than a DLL, you don't have to worry about versioning, and all-around you work with them just as you would with your common modules.
EDIT: Basically, a static library is a collection of object files. If you're working with Visual Studio, you can create a static library project (or change the output type of your project) which will create a .lib file that you can link in your application. If you're using GCC, you can use "ar" to create the library file from a bunch of .o files, and again link it to your application.
|
1,345,937 | 1,346,022 | How to get print preview of webpage in smartphone application | How do I get print-preview content of a web page using the HTML control or web-browser control in a Windows Mobile smartphone application using C#, C++, or an ATL control?
please guide us with any technical detail or any sample application associated with it.
-Thanks in advance.
GrabIt
| If you can use Qt, you can try this
|
1,346,187 | 1,346,379 | WinForms or WPF or Qt for Windows GUI with C/C++ as backend | I am to develop an application on windows. I have never done that before ;-)
I need to do some heavy audio calculation, which has to be written in C/C++. This part will be a room correction algorithm which currently takes about 10 seconds per channel to run in Matlab. It has to be written in C/C++, since it might be ported to a DSP later on, which has to be programmed in C/C++.
Additionally, I need a GUI to review calculations, visualize results and modify calculation parameters. The tough part of this GUI will be lots of plotting of spectra, spectrograms, audio waveforms and the like.
Now, I hear that WPF is all the rage in Windows GUIs, but it seems to be limited to C#. Is there a simple way to integrate my C/C++ code with some C# GUI code? Or should I rather take WinForms and just write the whole thing in C++? Or would Qt work just as well and provide some cross-platform capabilities "for free"?
I have some experience with C/C++, Matlab and VST-development, but I never wrote a real application and honestly, I don't even know where to start.
Thank you in advance!
| I think the biggest drawback to using WPF or WinForms is that you will have to program in two programming languages, which is a big logistics overhead.
I've seen this type of argument before: use C or C++ for low level, something else for high level. In this case Qt/C++ is as high level as WPF/WinForms, with the benefit of very easy integration of UI to your other C++ code.
For spectrograms and other graphs check out Qwt.
P.S: WPF is not all the rage on Windows, in fact the market is quite fragmented and WPF is one of the lesser used GUI toolkits. Most of the code out there uses MFC, WTL, Delphi, Win32, etc.
|
1,346,207 | 1,346,278 | Qt Application Performance vs. WinAPI/MFC/WTL/ | I'm considering writing a new Windows GUI app, where one of the requirements is that the app must be very responsive, quick to load, and have a light memory footprint.
I've used WTL for previous apps I've built with this type of requirement, but as I use .NET all the time in my day job WTL is getting more and more painful to go back to. I'm not interested in using .NET for this app, as I still find the performance of larger .NET UIs lacking, but I am interested in using a better C++ framework for the UI - like Qt.
What I want to be sure of before starting is that I'm not going to regret this on the performance front.
So: Is Qt fast?
I'll try and qualify the question by examples of what I'd like to come close to matching: My current WTL app is Programmer's Notepad. The current version I'm working on weighs in at about 4mb of code for a 32-bit, release compiled version with a single language translation. On a modern fast PC it takes 1-3 seconds to load, which is important as people fire it up often to avoid IDEs etc. The memory footprint is usually 12-20 mb on 64-bit Win7 once you've been editing for a while. You can run the app non-stop, leave it minimized, whatever and it always jumps to attention instantly when you switch to it.
For the sake of argument let's say I want to port my WTL app to Qt for potential future cross-platform support and/or the much easier UI framework. I want to come close to if not match this level of performance with Qt.
| Going native API is the most performant choice by definition - anything other than that is a wrapper around native API.
What exactly do you expect to be the performance bottleneck? Any strict numbers? Honestly, vague "very responsive, quick to load, and have a light memory footprint" sounds like a requirement gathering bug to me. Performance is often overspecified.
To the point:
Qt's signal-slot mechanism is really fast. It's statically typed and translates with MOC to quite simple slot method calls.
Qt offers nice multithreading support, so that you can have responsive GUI in one thread and whatever else in other threads without much hassle. That might work.
|
1,346,287 | 1,346,371 | How to use boost::lambda together with std::find_if? | I have a std::vector<SomeStruct> and I want to check a specific attribute
of each element. SomeStruct has an attribute 'type'. I want to check this attribute
to be either Type1 or Type2.
My plan is to use boost::lambda.
std::vector<SomeStruct>::const_iterator it =
std::find_if(
vec.begin(), vec.end(),
_1.type == SomeStruct::Type1 || _1.type == SomeStruct::Type2);
Because I need to access a specific attribute of each element, I'm not sure if I can
use boost::lambda at all.
Any hints?
| std::find_if(
vec.begin(), vec.end(),
bind(&SomeStruct::type, _1) == SomeStruct::Type1 ||
bind(&SomeStruct::type, _1) == SomeStruct::Type2);
|
1,346,355 | 1,346,374 | C++ game and decorator pattern | I'm porting a 2D platformer and I need a good way of getting some extensibility out of my level tiles. I'm not sure if Decorator is correct here, but nothing else comes to mind. Maybe I need something unique.
Anyway, say I have a basic Tile that has things like its image, and whether the player can pass through it (background or foreground). Now I want to add properties to them like making some tiles act as ladders, or making some hurt or kill the player on contact. Maybe some can disappear. Some properties can be combined.
This sounds like decorator, but I really don't like how that requires my base class to implement dummy versions of everything, like isLadder(), isLethal(), etc. Is there a better way? I'm not going to be the guy who changes my design just so it looks like something out of the GoF book.
Sorry if this has been asked a million times before, I didn't quite find it in the related questions.
| How you design your class methods can make a huge difference in how difficult Decorator is to implement - for instance, instead of isLadder() and isLethal() and so on, why not have methods based on the ways in which the player could interact with a tile (enter(from_dir), exit(to_dir)), etc? A lethal tile could override the enter() method to signal that the player should be killed; a ladder tile could override either enter() or exit() depending on the desired functionality.
|
1,346,408 | 1,346,426 | Need help attaching gdb to my project | I use VS2k8 to write and compile (but not run) a program using the MPICH2 libraries on Vista x64. I then use mpiexec from the command line to launch the program (with only 1 process for the purposes of debugging), and I'd like to attach gdb to it. Simply using attach or gdb --pid=### doesn't work (I get the error Can't attach to process), presumably because VS doesn't compile the code with the right debug info. On the other hand, despite several google sessions I have yet to find the actual command line that VS uses to compile, so I can't just go in and edit it.
Note that the only reason I use VS is because I couldn't get g++ to find the MPI libraries when trying to compile from command line, whereas VS only needed a couple clicks to make everything work. (Yes, I tried the -I and -l switches, but to no avail)
All I need is attaching gdb to the process running my MPI program, I don't really care how it's done. Any help is appreciated.
| The debug-information formats of cl.exe (Visual Studio) and gdb are unfortunately incompatible. You won't be able to use gdb for debugging unless you can figure out a way to rebuild the code with gcc. In the meantime, you can debug your program with Visual Studio directly, by going to Tools > Attach to Process (or pressing Ctrl+Alt+P).
|
1,346,583 | 1,346,631 | Most common reasons for unstable bugs in C++? | I am currently working on a large project, and I spend most of the time debugging. While debugging is a normal process, there are bugs that are unstable, and these bugs are the greatest pain for the developer. The program does not work, well, sometimes... Sometimes it does, and there is nothing you can do about it.
What can be done about these bugs? Most common debugging tools (interactive debuggers, watches, log messages) may lead you nowhere, because the bug will disappear ... just to appear once again, later. That is why I am asking for some heuristics: what are the most common reasons for such bugs? What suspicious code should we investigate to locate such a bugs?
Let me start the list:
using uninitialized variables;
common misprints like mMember = mMember;
thread synchronization (sometimes it can be a matter of luck);
working with non-smart pointers, dereferencing invalid ones;
what else?
| IME the underlying problem in many projects is that developers use low-level features of C++ like manual memory management, C-style string handling, etc. even though they are very rarely ever necessary (and then only well encapsulated in classes). This leads to memory corruption, invalid pointers, buffer overflows, resource leaks and whatnot. All the while nice and clean high-level constructs are available.
I was part of the team for a large (several MLoC) application for several years and the number of crashing bugs for different parts of the application nicely correlated to the programming style used within these parts. When asked why they wouldn't change their programming style some of the culprits answered that their style in general yields more performance. (Not only is this wrong, it's also a fact that customers rather have a more stable but slower program than a fast one that keeps crashing on them. Also, most of their code wasn't even required to be fast...)
As for multi-threading: I don't feel expert enough to offer solutions here, but I think Herb Sutter's Effective Concurrency columns are a very worthwhile read on the subject.
Edit to address the discussions in the comments:
I did not write that "C-style string handling is not more performant". (Certainly a lot of negation in this sentence, but since I feel misread, I try to be precise.) What I said is that high level constructs are not in general less performant: std::vector isn't in general slower than manually doing dynamically allocated C arrays, since it is a dynamically allocated C array. Of course, there are cases where something coded according to special requirements will perform better than any general solution -- but that doesn't necessarily mean you'll have to resort to manual memory management. This is why I wrote that, if such things are necessary, then only well-encapsulated in classes.
But what's even more important: in most code the difference doesn't matter. Whether a button depresses 0.01secs after someone clicked it or 0.05secs simply doesn't matter, so even a factor 5 speed gain is irrelevant in the button's code. Whether the code crashes, however, always matters.
To sum up my argument: First make it work correctly. This is best done using well-proven off-the-shelf building blocks. Then measure. Then improve performance where it matters, using well-proven off-the-shelf idioms.
|
1,346,989 | 1,347,058 | Catch a type error in C++ | How do I check if a result is of the right type (int, float, double, etc.) and then throw and catch an exception in case it's not?
Thanks all,
Vlad.
| Could you give more detail about what is giving you "a result"? You may be able to determine what you need from there, and more likely in a better way.
If all you really want is to check the type, use typeid.
More info here
Following Daniel's model of editing posts to actually answer the question after stating something else...
From my other comment:
You have to do this BEFORE you have just the result. Checking for overflow after is not a good idea. Do a check on the numbers before adding to see if they will overflow, or restrict input to be less than half the max value of the type.
|
1,347,248 | 1,347,259 | .Net - Can a Class Library (dll) written in .Net be used by an application written in C or C++? | Let's say I have written a Class Library (dll) in .Net and now I have developers using it in their .Net applications.
However, the library itself could probably be useful also for developers writing natively (in C or C++). So my question is if my managed dll can be used in C or C++?
If not, why? Maybe I must add some specific code to make it available to native coders?
Thanks.
EDIT: In case anyone else is interested in this issue, I found this article from Google Books which gives an introduction how to use Net.classes from COM.
| The only way to expose a .NET assembly outside of a CLR-based language is through COM Interop. There is nothing that you can do to reference a .NET .dll directly in an unmanaged language.
The reason for this is that the ".dll" file extension used is purely for appearances and the illusion of consistency. .NET generated .dll files do not contain any machine code, they contain IL (intermediate language) that is compiled at runtime (called Just-In-Time compilation, or JIT). The code is not compiled at the time that it is placed in the .dll file. As a result, there is nothing for the unmanaged language to execute.
COM interop allows the CLR to load the .dll, perform that JIT compilation, and use the COM system to handle communication between the native code and the .NET .dll.
|
1,347,592 | 1,348,403 | iptables c++ control | I need to control inbound and outbound traffic to/from a linux box from within a C++ program. I could call iptables from within my program, but I'd much rather cut out the middle man and access the kernel API functions myself.
I believe I need to use libnfnetlink, however, I have not been able to find any API documentation or example programs.
The rules I need to construct are fairly simple - things like dropping packets with a destination port equal to X etc. I do NOT intend to write a full firewall application.
can anyone suggest a better approach, or provide a link to some documentation or example apps? I'd rather avoid reading the iptables code, but i guess I may have to, if I can't find any better resources.
| A year back I had the same requirement and probed around. After contacting some open-source kernel developers, this is what I came to know:
The kernel APIs of iptables are not externalised; that is to say, they are not documented APIs and can change at any moment. They should be used only by the iptables tool itself, not by application developers.
-satish
|
1,347,691 | 1,347,786 | Static vs dynamic type checking in C++ | I want to know what are static and dynamic type checking and the differences between them.
| Static type checking means that type checking occurs at compile time. No type information is used at runtime in that case.
Dynamic type checking occurs when type information is used at runtime. C++ uses a mechanism called RTTI (runtime type information) to implement this. The most common example where RTTI is used is the dynamic_cast operator which allows downcasting of polymorphic types:
// assuming that Circle derives from Shape...
Shape *shape = new Circle(50);
Circle *circle = dynamic_cast<Circle*>(shape);
Furthermore, you can use the typeid operator to find out about the runtime type of objects. For example, you can use it to check whether the shape in the example is a circle or a rectangle. Here is some further information.
|
1,348,078 | 1,348,138 | Why STL containers are preferred over MFC containers? | Previously, I used to use MFC collection classes such CArray and CMap. After a while I switched to STL containers and have been using them for a while. Although I find STL much better, I am unable to pin point the exact reasons for it. Some of the reasoning such as :
It requires MFC: does not hold because other parts of my program uses MFC
It is platform dependent: does not hold because I run my application only on windows.(No need for portability)
It is defined in the C++ standard: OK, but MFC containers still work
The only reason I could come up is that I can use algorithms on the containers. Is there any other reason that I am missing here - what makes STL containers better than MFC containers?
| Ronald Laeremans, VC++ Product Unit Manager, even said to use STL in June 2006:
And frankly the team will give you the same answer. The MFC collection classes are only there for backwards compatibility. C++ has a standard for collection classes and that is the Standards C++ Library. There is no technical drawback for using any of the standard library in an MFC application.
We do not plan on making significant changes in this area.
Ronald Laeremans, Acting Product Unit Manager, Visual C++ Team
However, at one point where I was working on some code that ran during the installation phase of Windows, I was not permitted to use STL containers, but was told to use ATL containers instead (actually CString in particular, which I guess isn't really a container). The explanation was that the STL containers had dependencies on runtime bits that might not actually be available at the time the code had to execute, while those problems didn't exist for the ATL collections. This is a rather special scenario that shouldn't affect 99% of the code out there.
|
1,348,636 | 1,349,957 | How to get started with Drivers Programming under windows | I want to start learning driver programming under Windows.
I have never programmed drivers, and I am looking for information on how to get started.
Any tutorials, links, book recommendations, and what development toolkit should I start with? (Will WDF be a good one?)
I really want to program following clock link text
Thanks for your help .
| To interact with USB hardware you would be best served by looking at WinUSB or the Usermode Driver Framework. Usermode drivers are orders of magnitude easier, being able to use a C++/COM(kind of) framework and a normal debugging environment.
Writing kernelmode drivers should be reserved for stuff like video card, disk, and other latency/throughput sensitive drivers.
An even easier method would be to use libusb-win32 which is a C library that makes talking to a USB endpoint almost as easy as writing data to a file.
|
1,348,692 | 1,348,731 | Why does C++ define the norm as the Euclidean norm squared? | This may sound like a bit of a rhetorical question, but I ask it here for two reasons:
It took me a while to figure out what C++ std::norm() was doing differently from MATLAB/Octave, so others may stumble upon it here.
I find it odd to define the norm() function as being something different (though closely related) to what is generally considered to be the norm (or L2-norm, or Euclidean norm, etc., etc.)
Specifically the C++ standard library defines norm() for complex numbers to be the square of the modulus (or absolute value), where the modulus is sqrt(a^2 + b^2) when the complex number is in the form a + i*b.
This goes against my understanding of the norm, which when specified as the Euclidean norm (which corresponds to the modulus used here), is the square root of the sum of squares. I'll reference Mathworld's definition of the complex modulus.
Is this something others have run into? I found it as a result of porting some signal processing code from Octave to C++, and the only other place I found reference to this difference was on the GCC mailing list.
| The C++ usage of the word "norm" is rather confusing, since most people have only ever come across norms in the context of vector spaces. If you view the complex numbers as a vector space over the reals, this is definitely not a norm. In fairness to C++, the std::norm() function does compute the so-called field norm from the complex numbers to the reals.
Fortunately, there is the std::abs() function, which does what you want.
|
1,348,847 | 1,348,861 | What is it called when you can fill a string with <<< and an end-delimiter? | I know that in C++ and in PHP you can fill a string or a file with hard-coded text. If I remember correctly, this is how it is supposed to look:
var <<< DELIMITER
Menu for program X
1.Add two numbers
2.Subtract two numbers
3.Multiply two numbers
Please pick an option from (0-3);
DELIMITER
This can be used for menus or text that remains the same no matter what like a header. But without having to do:
foobar << "Menu for program X" << endl << "1.Add two numbers" << endl << "2.Subtract two numbers"
| It's called the HEREDOC syntax in PHP:
<?php
$str = <<<EOD
Example of string
spanning multiple lines
using heredoc syntax.
EOD;
?>
|
1,349,060 | 1,349,072 | Easiest implement pattern 1 writer - multiple readers with boost library | I am developing a module with multiple threads and one cache in a std::map. Sometimes I need to update the cache; during that time all readers must wait while I update the map.
How can I do this synchronization with the Boost library?
P.S.: some time ago Boost had a read_write_mutex, but it is missing from current releases of Boost.
|
will shared_mutex replace read_write_mutex?
Yes.
...
Basically unique_lock<shared_mutex> will give you a write lock,
shared_lock<shared_mutex> will give you a read lock, and
upgrade_lock<shared_mutex> will give you a read lock that you can upgrade by
transferring ownership (with move) to a unique_lock<shared_mutex>.
http://lists.boost.org/Archives/boost/2008/01/132656.php
|
1,349,313 | 1,349,333 | lock free arena allocator implementation - correct? | For a simple pointer-increment allocator (do they have an official name?) I am looking for a lock-free algorithm. It seems trivial, but I'd like to get some feedback on whether my implementation is correct.
Not thread-safe implementation:
byte * head; // current head of remaining buffer
byte * end; // end of remaining buffer
void * Alloc(size_t size)
{
if (end-head < size)
return 0; // allocation failure
void * result = head;
head += size;
return result;
}
My attempt at a thread-safe implementation:
void * Alloc(size_t size)
{
byte * current;
do
{
current = head;
if (end - current < size)
return 0; // allocation failure
} while (CMPXCHG(&head, current + size, current) != current);
return current;
}
where CMPXCHG is an interlocked compare exchange with (destination, exchangeValue, comparand) arguments, returning the original value
Looks good to me - if another thread allocates between the get-current and cmpxchg, the loop attempts again. Any comments?
| Your current code appears to work. It behaves the same as the code below, which is a simple pattern you can use for implementing any lock-free algorithm that operates on a single word of data without side effects:
do
{
original = *data; // Capture.
result = DoOperation(original); // Attempt operation
} while (CMPXCHG(data, result, original) != original);
EDIT: My original suggestion of an interlocked add won't quite work here, because you support trying to allocate and failing if there is not enough space left. With InterlockedAdd you would have already modified the pointer, causing subsequent allocs to fail.
|
1,349,375 | 1,349,400 | Qt UI for existing C++ project | I have already written a C++ program and I would like to write a GUI for it. I realize Qt is a wonderful tool; however, Qt has its own classes, which I find quite confusing. E.g., instead of std::string, Qt has a class named QString.
I am wondering: can I mix standard C++ code and Qt code?
| Yes you can intermix Qt and STL very easily.
The GUI takes QStrings but will silently create these form std::string or char*, QStrings returned from Qt can be converted with toStdString() or toAscii().
Qt includes a set of collection classes but you don't have to use them.
Qt does a good job of looking like modern C++, there are a few 'extras' to handle the signal and slot event mechanism but these are more a complication to the build environment than to the code.
edit:thanks bill
|
1,349,447 | 1,349,475 | MSVC's _M_X64 Predefined Macro Clarification | The documentation for MSVC's Predefined Macros state "_M_X64 [is] Defined for x64 processors." What does that mean, exactly? Will it be defined:
When I'm building for x64 processors, or
When I'm building with x64 processors?
Specifically, I'm looking for a compiler switch for the former case, not the latter. Will _M_X64 suffice for that purpose?
| It means that _M_X64 is the target processor. It is what you are building for, not what you are building on.
|