| question_id | answer_id | title | question | answer |
|---|---|---|---|---|
1,990,963 | 1,990,971 | Stuck at defining a static pointer and using it | I am stuck defining a static pointer inside one of my classes that points to another class.
Here is an outline of what I've done:
#busineslogic.h
class BussinesLogic {
private :
static Samenamespace::otherclass_DataLogic::clsDL *DL;
};
#busineslogic.cpp
samenamespace {
businessnamespace{
clsBL{
Samenamespace::businessnamespace::clsBL *Samenamespace::businessnamespace::clsBL::DL;
}
}
}
With the above definition I get errors every time I compile the code. I've tried several other ways to overcome this problem, but only the wording of the errors changes, not the underlying problem.
I want to know how I can access another class from my class through a static member, as in the example above. How should I change my code, or should I add something extra?
| // header:
#include <other/b.hpp>
namespace example {
struct A {
static other::B* name;
};
}
// implementation: (.cpp)
namespace example {
other::B* A::name;
}
Edit: With the cleanup of the question, it looks like B and A are in the same namespace, which would simplify the example:
// header:
#include <example/b.hpp>
namespace example {
struct A {
static B* name;
};
}
// implementation: (.cpp)
namespace example {
B* A::name;
}
|
1,991,065 | 1,991,076 | Role of C, C++, Python, Perl in web development | Please bear with me, experts; I'm a newbie in web development.
HTML and CSS can take care of web pages,
JavaScript and Ajax of some dynamic content,
and PHP of server-side scripting, accessing databases, sending emails, and all the other stuff.
What role do these programming languages play?
Can they do any other important task that cannot be done by PHP?
| All of these languages can do basically any task that any other one of them can do, as they are all Turing complete.
PHP works as a server-side scripting language, but you can also use Perl, Python, Ruby, Haskell, Lisp, Java, C, C++, assembly, or pretty much any other language that can access standard input and standard output for CGI communication with web content.
PHP is widely used because a) it's easy to learn a little and go, and b) the rather tedious CGI protocols are skipped, as the language handles them for you, so you can just plug your PHP script into an HTML page and not have to know how your program reads the information at all. This makes web programming easier for PHP, but the PHP interpreter is written in C, which does all the heavy lifting, so logically if PHP can do server-side scripting, so can C. Since most other languages are written in C, they too can do server-side scripting. (And since C compiles down to assembly, assembly can do it too, and so can any language that compiles down to assembly. Which is all of them not already covered.)
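To make the stdin/stdout point concrete, here is a sketch of the response side of a CGI program in C++ (the `cgi_response` helper is a name invented for this illustration; a real CGI program would read `QUERY_STRING` from the environment and write exactly this kind of string to standard output for the web server to relay):

```cpp
#include <cassert>
#include <string>

// Hypothetical helper for illustration: builds the full CGI response.
// A real CGI program reads QUERY_STRING from the environment and writes
// this string to standard output; the web server relays it to the browser.
std::string cgi_response(const std::string& query) {
    std::string body = "<html><body>Query was: " + query + "</body></html>\n";
    // Header block, then a blank line, then the body.
    return "Content-Type: text/html\r\n\r\n" + body;
}
```

Any language that can build strings and write them to standard output can produce the same bytes, which is the answer's point.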
|
1,991,136 | 1,991,150 | How to teach Object Oriented Programming - Any idea where to start? | I'm teaching C++ for about 2 years in high schools, computer training institutes and etc. After teaching basics about variables, arrays, structures, functions, I always start object oriented examples part with traditional examples, like this one:
class Person {
public:
Person();
~Person();
char* getFirstName(); //we can use std::string instead of char* in optimization part
char* getLastName();
char* getFullName();
int getAge();
bool getGender();
void printFullProfile();
void setFirstName(char*);
void setLastName(char*);
void setAge(unsigned int);
void setGender(bool);
void setGender(char);//f for female, m for male.
private:
char* first_name; //or std::string but i prefer to use pointers in the beginning.
char* last_name;
unsigned int age;
bool gender; //ladies 1(st) , male 0
};
I then complete this Person class and teach new things: why getter and setter methods can be considered evil and how to avoid accessors, then inheritance and polymorphism by creating other classes (like Student, Employee, Moderator, etc.), and the other necessary OOP skills and concepts.
[EDIT]: And make these classes useful for solving programming problems (like calculating a salary for each Employee object, averaging Students' marks, and many others).
Other basic examples are the Vehicle class, the Shape class, etc.
I want to know your ideas about how to (just) start an OOP classroom.
Looking forward to great ideas.
| I would start without code, with CRC cards. Let the class play out the roles on the cards and run a real OO design session. There you can introduce the single responsibility principle and talk about has-a vs. is-a, inheritance, and encapsulation. I meet too many programmers who don't have a clue about OO and are still programming in C++, C#, Java, or Delphi.
[edit]
Later you might want to compare class-based OO with the prototype-based OO of JavaScript (or Self) to talk about different ways of classification.
|
1,991,147 | 1,991,220 | Pure Virtual Function called error | I find this strange. In the ctor of Sample_Base, I call bar(), which internally calls fun(), a pure virtual function. I get the error "pure virtual function called", which is fine. Now, if I call fun() directly from Sample_Base's ctor, I don't get that error. I tried it on VC++ 2010 Beta 2 and on g++ 4.4.1 on Ubuntu 9.10. I agree that giving an implementation for a pure virtual function, other than a pure virtual destructor, is of little use, but I am a bit surprised by this behaviour.
class Sample_Base
{
public:
Sample_Base()
{
bar();
// fun();
}
/* This code does not throw the error.
Sample_Base()
{
fun();
}
*/
void bar()
{
fun();
}
virtual void fun() = 0;
virtual ~Sample_Base();
};
Sample_Base::~Sample_Base()
{
}
void Sample_Base::fun()
{
std::cout << "Sample_Base::fun\n";
}
class Sample_Derived : public Sample_Base
{
public:
Sample_Derived() : Sample_Base()
{
fun();
}
void fun()
{
std::cout << "Sample_Derived::fun\n";
}
~Sample_Derived()
{
}
};
| When you call the function directly, since you are in the constructor, the compiler resolves the static type of your object (Sample_Base) and calls Sample_Base::fun() directly. Since you provided an implementation for it, the compiler finds the function and it works.
When you call it indirectly, through bar(), the compiler must use the dynamic type, so it does a virtual call that gets resolved at runtime. And there it fails, because it calls a pure virtual function.
So the difference is in the moment it binds the function to the call.
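The binding difference is easy to observe with a non-pure variant (a separate illustration; the original code would instead abort with "pure virtual function called"): a virtual call made during the base constructor dispatches to the base version, because the derived part does not exist yet.

```cpp
#include <cassert>
#include <string>

static std::string call_log;  // records which override each call reached

struct Base {
    Base() { fun(); }          // virtual call while the object is still a Base
    virtual ~Base() {}
    virtual void fun() { call_log += "Base::fun;"; }
};

struct Derived : Base {
    Derived() { fun(); }       // by now the Derived part exists
    void fun() override { call_log += "Derived::fun;"; }
};
```

Constructing a Derived appends "Base::fun;" first (from Base's ctor) and only then "Derived::fun;".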
|
1,991,628 | 1,994,885 | Arduino web client class not working | I am trying to use the Arduino client class to fetch an HTML page from the Internet (example from the Arduino library itself), but it's not working (connection is not getting established).
It's failing at:
client.connect();
I have tried both Ethernet and Ethernet2 libraries.
My Arduino development platform version is 0017, OS is Windows XP.
Following is my code and configurations inline:
#include <Client.h>
#include <Ethernet2.h>
// #include <Ethernet.h>
#include <Print.h>
#include <Server.h>
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
byte ip[] = { 192,168,1,7 };
byte server[] = { 74,125,47,103 }; // Google home page
byte gateway[] = { 192,168,1,1 };
byte mask[] = { 255,255,255,0 };
Client client(server, 80);
void setup()
{
Ethernet.begin(mac, ip, gateway, mask);
Serial.begin(9600);
delay(1000);
Serial.println("connecting...");
if (client.connect()) {
Serial.println("connected");
client.println("GET /search?q=arduino HTTP/1.0");
client.println();
} else {
Serial.println("connection failed");
}
}
void loop()
{
if (client.available()) {
char c = client.read();
Serial.print(c);
}
if (!client.connected()) {
Serial.println();
Serial.println("disconnecting.");
client.stop();
for(;;);
}
}
| I don't know the reason, but I had to modify the setup() function as follows to get the code working:
void setup() {
Ethernet.begin(mac, ip, gateway, mask);
Serial.begin(9600);
delay(1000);
Serial.println("connecting...");
for(int i = 0;i <100 ; i++) {
if (client.connect()) {
Serial.println("connected");
client.println("GET /search?q=arduino HTTP/1.0");
client.println();
break;
} else {
Serial.println("connection failed");
}
}
}
The call:
client.connect()
does fail two or three times, but eventually it connects to the Google web server (as well as my local web server) in the 3rd or 4th iteration of the for loop.
If anyone knows the reason for this behavior, please reply.
|
1,991,713 | 1,991,834 | Using dynamic allocations in a mission-critical / life-critical software | Is it safe to use dynamic allocations in a mission-critical / life-critical system, or should it be avoided?
| With critical software you want your system to have behaviour that is as deterministic as possible.
Dynamic memory, memory fragmentation, possible leaks, and in some corner cases (not too rare) misbehaviour of malloc will make it that much harder to gain 100% determinism.
That said, if part of your program (say an algorithm) requires dynamic allocation and you can prove that your memory allocation and de-allocation (free) will be deterministic (see valuable notes by RickNZ) then you're closer to having a deterministic system.
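For illustration, one classic way to get deterministic allocation (a generic sketch, not something prescribed by the answer) is a fixed-size block pool reserved at startup: allocation and release are constant-time free-list operations, and there is no fragmentation because all blocks are the same size.

```cpp
#include <cassert>
#include <cstddef>

// Toy fixed-size block pool: all memory is reserved up front, and
// allocate/release are O(1) free-list operations, so timing is predictable
// and there is no fragmentation. Single-threaded sketch; not thread-safe.
template <std::size_t BlockSize, std::size_t BlockCount>
class FixedPool {
    struct Node { Node* next; };
    static_assert(BlockSize >= sizeof(Node), "blocks must hold a pointer");
    static_assert(BlockSize % sizeof(void*) == 0, "keep blocks pointer-aligned");
public:
    FixedPool() : free_head_(nullptr) {
        for (std::size_t i = 0; i < BlockCount; ++i) {  // thread all blocks once
            Node* n = reinterpret_cast<Node*>(storage_ + i * BlockSize);
            n->next = free_head_;
            free_head_ = n;
        }
    }
    void* allocate() {
        if (!free_head_) return nullptr;  // pool exhausted: fails predictably
        Node* n = free_head_;
        free_head_ = n->next;
        return n;
    }
    void release(void* p) {
        Node* n = static_cast<Node*>(p);
        n->next = free_head_;
        free_head_ = n;
    }
private:
    alignas(void*) unsigned char storage_[BlockSize * BlockCount];
    Node* free_head_;
};
```

A real system would add whatever alignment and locking guarantees the application requires; the point is that the worst case is known in advance.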
|
1,991,939 | 1,991,957 | Languages with direct C compatibility | Apart from C++, which non-toy languages have direct or easy-to-use compatibility with C? As in "I can take a C library out there, and compile my code against it without having to find, write, or configure some kind of wrapper."
I know that lots of languages have compatibility with C through some form of external call or binding (I've been using bindings in Java, Ruby, Python, etc., so I know it can be done). But you rely on someone (possibly you) to write and maintain a binding for each library you want to use, and the binding has to work on all platforms, etc.
Do more expressive languages than C++ have this feature?
Thanks to all for the mentions of SWIG and related wrapper-generation tools.
I am aware that those exist, but I don't think they're really as easy as C-to-C++ integration... but then integrating with C might be the only thing that is easier in C++. ;)
| Objective-C, the bastard child of C and Smalltalk.
Objective-C is a direct superset of C (you can't get more compatible than that), but there are languages which compile to C. Some recent examples would be Vala and Lisaac.
Most statically compiled languages allow interfacing with C libraries. Examples of such languages are Ada, Fortran, Pascal and D. The details are platform and compiler specific; for x86, this basically means supporting the cdecl calling convention.
|
1,991,984 | 1,991,997 | Algorithm for finding the number which appears the most in a row - C++ | I need help designing an algorithm to solve this problem: there is a row of numbers in which each number can appear several times, and I need to find the number that appears the most and how many times it appears in the row. For example:
1-1-5-1-3-7-2-1-8-9-1-2
That would be 1 and it appears 5 times.
The algorithm should be fast (that's my problem).
Any ideas?
| You could keep a hash table and store a count of every element in it, like this:
h[1] = 5
h[5] = 1
...
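A concrete C++ sketch of this idea (`mostFrequent` is a name chosen for the illustration): one pass builds the counts, and a scan of the table then picks the winner, for expected O(n) overall.

```cpp
#include <cassert>
#include <unordered_map>
#include <utility>
#include <vector>

// Returns (value, count) of the most frequent element; expected O(n) time.
std::pair<int, int> mostFrequent(const std::vector<int>& row) {
    std::unordered_map<int, int> counts;      // h[x] = occurrences of x
    for (int x : row) ++counts[x];
    std::pair<int, int> best(0, 0);           // count 0 if the row is empty
    for (const auto& kv : counts)
        if (kv.second > best.second) best = kv;
    return best;
}
```

For the example row 1-1-5-1-3-7-2-1-8-9-1-2 this returns the pair (1, 5).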
|
1,992,147 | 1,992,175 | Is there a C++ IDE which handles templates well? | Every IDE I've tried fails to provide code-completion when something template-related is used.
For example,
boost::shared_ptr<Object> ptr;
ptr->[cursor is here]
Is there IDE that can provide code completion in this case?
| Actually this is a fairly simple template use case; Qt Creator can handle this easily, and more complex template code as well.
|
1,992,708 | 2,037,263 | How to make a pluggable factory work with Lua? | The data classes look like this:
struct Base_data
{
public:
Base_data(){
protocolname = "Base";
}
string protocolname;
};
class HttpData : public Base_data
{
public:
HttpData(){
protocolname = "Http";
}
};
The professor classes:
class Base_Professor
{
public:
void Process(Base_data &data)
{
std::map<std::string, Base_Professor*>::const_iterator it = ListProfessor.find(data.protocolname);
if(it != ListProfessor.end())
it->second->Do(data);
}
virtual void Do(Base_data &data){}
virtual std::string GetProfessorname(){
return "Base";
}
~Base_Professor(){
std::map<std::string, Base_Professor*>::const_iterator it;
for(it = ListProfessor.begin(); it != ListProfessor.end(); ++it)
delete it->second;
}
bool Register(Base_Professor *Professor){
std::map<std::string, Base_Professor*>::const_iterator it = ListProfessor.find(Professor->GetProfessorname());
if(it != ListProfessor.end())
return false;
ListProfessor.insert(std::make_pair(Professor->GetProfessorname(), Professor));
return true;
}
private:
std::map<std::string, Base_Professor*> ListProfessor;
};
class HttpProfessor : public Base_Professor
{
public:
std::string GetProfessorname(){
return "Http";
}
void Do(Base_data &data){
std::cout << data.protocolname << std::endl;
}
};
I can add a new protocol by inheriting from Base_Professor and registering the new class, but I have no idea how to do that in Lua. Do you have any idea how to do that?
| Lua doesn't provide "classes" out of the box. It has other, somewhat different, features.
However, you can simulate classes and inheritance by using some of those features (tables and metatables).
If you are not interested in the technical details, you can use an already-built Lua library.
I've created a library called "MiddleClass" that provides this functionality. You can find a manual on its LÖVE wiki page.
This library provides most of what you need, except for "public" and "private" functionality (everything is "public"). You may find other libraries on the lua-users wiki.
|
1,992,879 | 1,992,926 | C++/SDL 'void*' is not a pointer-to-object type | I'm new to C++ and I'm trying to do some testing with C++ and SDL. SDL has the function:
SDL_TimerID SDL_AddTimer(Uint32 interval, SDL_NewTimerCallback callback, void *param);
which I can pass a callback for the timer created.
But apparently it converts my this instance to void*, so I can't retrieve it again in the update method, which is static; interestingly, SDL_AddTimer doesn't work with a non-static callback function.
So my problem is that when I try to call the public method render through the void* param argument, it complains about it not being a pointer-to-object type...
Is there any way I can get the Character instance back inside the update method, given that I don't have control over the SDL_AddTimer function and I have to pass the required parameters?
Thanks
#include "Character.h"
Character::Character(void)
{
timer = SDL_AddTimer(33, update, this);
this->render(); // is called without problem
}
//static method
Uint32 Character::update(Uint32 interval,void* param)
{
param->render(); // yields: 'void*' is not a pointer-to-object type;
SDL_Event event;
event.type = SDL_USEREVENT;
event.user.code = 1020;
event.user.data1 = param;
SDL_PushEvent(&event);
return interval;
}
void Character::render(void)
{
printf("rendering character \n");
}
| You don't need a reinterpret_cast - a static_cast should be OK:
Character * cp = static_cast <Character *>( param );
You should avoid reinterpret_cast - it is almost always implementation specific, and may hide problems - just like old-style C casts.
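Stripped of SDL specifics, the whole pattern looks like this (the callback signature below merely mimics SDL's timer callback; it is not SDL code): the instance travels through the void* parameter, and the static trampoline restores its type with static_cast before calling members.

```cpp
#include <cassert>

// The callback signature only mimics SDL's timer callback; this is not SDL
// code. The library stores your pointer as void*, and the static trampoline
// restores the real type before touching members.
class Character {
public:
    Character() : render_calls(0) {}
    void render() { ++render_calls; }
    unsigned render_calls;

    // Static trampoline: 'param' carries the instance passed at registration.
    static unsigned update(unsigned interval, void* param) {
        Character* self = static_cast<Character*>(param);  // recover the type
        self->render();                                    // member calls work again
        return interval;
    }
};
```

The registration side (SDL_AddTimer in the question) only has to pass `this` as the void* argument, exactly as the asker already does.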
|
1,993,216 | 3,068,106 | boost::asio cleanly disconnecting | Sometimes boost::asio seems to disconnect before I want it to, i.e. before the server has properly handled the disconnect. I'm not sure how this is possible, because the client seems to think it has fully sent the message, yet when the server emits the error it hasn't even read the message header... During testing this only happens maybe 1 time in 5; otherwise the server receives the client's shutdown message and disconnects the client cleanly.
The error: "An existing connection was forcibly closed by the remote host"
The client disconnecting:
void disconnect()
{
boost::system::error_code error;
//just creates a simple buffer with a shutdown header
boost::uint8_t *packet = createPacket(PC_SHUTDOWN,0);
//sends it
if(!sendBlocking(socket,packet,&error))
{
//didn't get here in my tests, so it's not that the write failed...
logWrite(LOG_ERROR,"server",
std::string("Error sending shutdown message.\n")
+ boost::system::system_error(error).what());
}
//actually disconnect
socket.close();
ioService.stop();
}
bool sendBlocking(boost::asio::ip::tcp::socket &socket,
boost::uint8_t *data, boost::system::error_code* error)
{
//get the length section from the message
boost::uint16_t len = *(boost::uint16_t*)(data - 3);
//send it
asio::write(socket, asio::buffer(data-3,len+3),
asio::transfer_all(), *error);
deletePacket(data);
return !(*error);
}
The server:
void Client::clientShutdown()
{
//not getting here in problem cases
disconnect();
}
void Client::packetHandler(boost::uint8_t type, boost::uint8_t *data,
boost::uint16_t len, const boost::system::error_code& error)
{
if(error)
{
//error handled here
delete[] data;
std::stringstream ss;
ss << "Error receiving packet.\n";
ss << logInfo() << "\n";
ss << "Error: " << boost::system::system_error(error).what();
logWrite(LOG_ERROR,"Client",ss.str());
disconnect();
}
else
{
//call handlers based on type, most will then call startRead when
//done to get the next packet. Note however, that clientShutdown
//does not
...
}
}
void startRead(boost::asio::ip::tcp::socket &socket, PacketHandler handler)
{
boost::uint8_t *header = new boost::uint8_t[3];
boost::asio::async_read(socket,boost::asio::buffer(header,3),
boost::bind(&handleReadHeader,&socket,handler,header,
boost::asio::placeholders::bytes_transferred,boost::asio::placeholders::error));
}
void handleReadHeader(boost::asio::ip::tcp::socket *socket, PacketHandler handler,
boost::uint8_t *header, size_t len, const boost::system::error_code& error)
{
if(error)
{
//error "thrown" here, len always = 0 in problem cases...
delete[] header;
handler(0,0,0,error);
}
else
{
assert(len == 3);
boost::uint16_t payLoadLen = *((boost::uint16_t*)(header + 0));
boost::uint8_t type = *((boost::uint8_t*) (header + 2));
delete[] header;
boost::uint8_t *payLoad = new boost::uint8_t[payLoadLen];
boost::asio::async_read(*socket,boost::asio::buffer(payLoad,payLoadLen),
boost::bind(&handleReadBody,socket,handler,
type,payLoad,payLoadLen,
boost::asio::placeholders::bytes_transferred,boost::asio::placeholders::error));
}
}
void handleReadBody(ip::tcp::socket *socket, PacketHandler handler,
boost::uint8_t type, boost::uint8_t *payLoad, boost::uint16_t len,
size_t readLen, const boost::system::error_code& error)
{
if(error)
{
delete[] payLoad;
handler(0,0,0,error);
}
else
{
assert(len == readLen);
handler(type,payLoad,len,error);
//delete[] payLoad;
}
}
| I think you should probably have a call to socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec) in there before the call to socket.close().
The boost::asio documentation for basic_stream_socket::close states:
For portable behaviour with respect to graceful closure of a connected socket, call shutdown() before closing the socket.
This should ensure that any pending operations on the socket are properly cancelled and any buffers are flushed prior to the call to socket.close.
|
1,993,297 | 1,993,342 | Cygwin port not working => exits immediately on launch | I am trying to port a C++ program from Linux to Windows using Cygwin. I have it building and linking fine now, but when I launch the program, it exits immediately with an error. When I try it in gdb, I get the following 'unknown target exception' result:
$ gdb ../../bin/ARCH.cygwin/release/myApp
GNU gdb 6.8.0.20080328-cvs (cygwin-special)
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This GDB was configured as "i686-pc-cygwin"...
(no debugging symbols found)
(gdb) run
Starting program: bin/ARCH.cygwin/release/myApp.exe
[New thread 1452.0x99c]
gdb: unknown target exception 0xc0000139 at 0x77149eed
Program exited with code 030000000471.
You can't do that without a process to debug.
When not run under gdb, it raises a dialog that reads: "A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available."
Any ideas what I may have done wrong?
Thanks.
-William
| Microsoft describes 0xC0000139 as STATUS_ENTRYPOINT_NOT_FOUND. That suggests your program isn't being linked properly. Double-check your build scripts to make sure it compiles and links all relevant files.
If you are using any libraries, then you might have a linking issue there (or maybe you are missing a DLL of some sort).
You might be able to get more information by checking out the error report it has generated - the error message Microsoft associates with that error should include exactly what entry point it couldn't find.
|
1,993,309 | 1,993,331 | Complete C++ "from scratch" frameworks | What C++ frameworks provide a complete skeleton, in the fashion of Ruby on Rails?
I think Poco C++ does it, are there other options?
| It's hard to provide a skeleton for client applications, because there is no common functionality like in the case of web applications. Qt does a pretty good job at providing what you might need in a new application though (yes, it does much more than just GUI).
|
1,993,356 | 1,993,421 | Help with async_read_until | I'm having trouble implementing the 3rd parameter of the function documented here:
http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/reference/async_read_until/overload4.html
What I'd like to be able to do is use the callback on the 3rd parameter of async_read_until to detect when a complete chunk has arrived. My packets have the following format.
1 byte id (semantic meaning of the data)
unsigned int (the number of bytes in the data, since some data chunks can change size)
payload
Looking at the example code in the documentation, I'm a little confused about how I'm supposed to extract a byte, let alone an unsigned int, from the begin and end iterators.
I've instantiated my iterators as
typedef boost::asio::buffers_iterator<
boost::asio::streambuf::const_buffers_type> iterator;
but even then I'm not sure what type that is, since I don't know what const_buffers_type is. I followed some links in the documentation and found out it was "implementation defined", but I guess I could be wrong.
So my two concrete questions are:
how can I use those two iterators to read an unsigned int?
what type are those iterators pointing to?
Thanks!
| A sample match function is presented in the documentation.
std::pair<iterator, bool>
match_whitespace(iterator begin, iterator end)
{
iterator i = begin;
while (i != end)
if (std::isspace(*i++))
return std::make_pair(i, true);
return std::make_pair(i, false);
}
Dereferencing i here pulls out one byte. You need to pull out enough bytes to match an int.
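For the "how do I read an unsigned int" part: since the iterators dereference to single bytes, a multi-byte field has to be assembled byte by byte. The helper below works with any byte iterator, including buffers_iterator; the little-endian layout is an assumption about your wire format, and the 32-bit width is just an example.

```cpp
#include <cassert>
#include <cstdint>

// Reassembles a little-endian 32-bit value from any iterator over bytes
// (char, unsigned char, uint8_t, ...). The caller must guarantee that at
// least four bytes are available starting at 'begin'.
template <typename Iter>
std::uint32_t readUint32LE(Iter begin) {
    std::uint32_t v = 0;
    for (int i = 0; i < 4; ++i, ++begin)
        v |= static_cast<std::uint32_t>(static_cast<unsigned char>(*begin)) << (8 * i);
    return v;
}
```

The cast through unsigned char matters: if the iterator yields plain (signed) char, values above 0x7F would otherwise sign-extend and corrupt the result.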
Remember however, that a callback is not the only option for read_until. Actually it's the most complex. Are you sure that it wouldn't be enough to use a regex instead?
template<
typename AsyncReadStream,
typename Allocator,
typename ReadHandler>
void async_read_until(
AsyncReadStream & s,
boost::asio::basic_streambuf< Allocator > & b,
const boost::regex & expr,
ReadHandler handler);
Anyway, considering that your read is not delimited, a better way would be to async_read until you've read the size, and then async_read again with transfer_at_least to get the payload.
|
1,993,390 | 1,993,407 | Static linking vs dynamic linking | Are there any compelling performance reasons to choose static linking over dynamic linking or vice versa in certain situations? I've heard or read the following, but I don't know enough on the subject to vouch for its veracity.
1) The difference in runtime performance between static linking and dynamic linking is usually negligible.
2) (1) is not true if using a profiling compiler that uses profile data to optimize program hotpaths because with static linking, the compiler can optimize both your code and the library code. With dynamic linking only your code can be optimized. If most of the time is spent running library code, this can make a big difference. Otherwise, (1) still applies.
|
Dynamic linking can reduce total resource consumption (if more than one process shares the same library (including the version in "the same", of course)). I believe this is the argument that drives its presence in most environments. Here "resources" include disk space, RAM, and cache space. Of course, if your dynamic linker is insufficiently flexible there is a risk of DLL hell.
Dynamic linking means that bug fixes and upgrades to libraries propagate to improve your product without requiring you to ship anything.
Plugins always call for dynamic linking.
Static linking means that you can know the code will run in very limited environments (early in the boot process, or in rescue mode).
Static linking can make binaries easier to distribute to diverse user environments (at the cost of sending a larger and more resource-hungry program).
Static linking may allow slightly faster startup times, but this depends to some degree on both the size and complexity of your program and on the details of the OS's loading strategy.
Some edits to include the very relevant suggestions in the comments and in other answers. I'd like to note that the way you break on this depends a lot on what environment you plan to run in. Minimal embedded systems may not have enough resources to support dynamic linking. Slightly larger small systems may well support dynamic linking because their memory is small enough to make the RAM savings from dynamic linking very attractive. Full-blown consumer PCs have, as Mark notes, enormous resources, and you can probably let the convenience issues drive your thinking on this matter.
To address the performance and efficiency issues: it depends.
Classically, dynamic libraries require some kind of glue layer which often means double dispatch or an extra layer of indirection in function addressing and can cost a little speed (but is the function calling time actually a big part of your running time???).
However, if you are running multiple processes which all call the same library a lot, you can end up saving cache lines (and thus winning on running performance) when using dynamic linking relative to using static linking. (Unless modern OS's are smart enough to notice identical segments in statically linked binaries. Seems hard, does anyone know?)
Another issue: loading time. You pay loading costs at some point. When you pay this cost depends on how the OS works as well as what linking you use. Maybe you'd rather put off paying it until you know you need it.
Note that static-vs-dynamic linking is traditionally not an optimization issue, because they both involve separate compilation down to object files. However, this is not required: a compiler can in principle, "compile" "static libraries" to a digested AST form initially, and "link" them by adding those ASTs to the ones generated for the main code, thus empowering global optimization. None of the systems I use do this, so I can't comment on how well it works.
The way to answer performance questions is always by testing (and use a test environment as much like the deployment environment as possible).
|
1,993,391 | 1,994,132 | Boost numeric_cast<> with a default value instead of an exception? | Whenever boost's numeric_cast<> conversion fails, it throws an exception. Is there a similar template in boost that lets me specify a default value instead, or is catching the exception the only thing I can do in this case?
I'm not too worried about the performance of all the extra exception handling, but I'd rather use a standard template than write useless wrapper functions. Besides, from past experience, I thought it's likely that boost actually has what I'm thinking of, and I simply haven't found it.
| The numeric_cast function simply calls the boost::numeric::converter template class with the default arguments. One of the arguments is OverflowHandler, and the default value for that is def_overflow_handler, but you can specify silent_overflow_handler to suppress the exception instead.
Then specify the Float2IntRounder argument that will provide your desired default value if the input argument is outside your desired range. The argument is normally used for providing an integer for rounding from a floating-point type, but you can really use it for whatever you want. More information, plus code describing the sequence of events, is in the converter documentation.
As far as I know, Boost doesn't have what you're thinking of, but it provides the facility for you to build it yourself.
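If configuring converter policies feels heavyweight, a small hand-rolled wrapper is another option (this is an illustration, not a Boost API; it only handles integral targets whose full range fits in long long, which keeps the range check trivial):

```cpp
#include <cassert>
#include <limits>

// Hypothetical helper, not a Boost API: numeric_cast-like narrowing that
// returns a fallback instead of throwing. Restricted to integral targets
// whose full range fits in long long.
template <typename Target>
Target narrow_or(long long value, Target fallback) {
    const long long lo = std::numeric_limits<Target>::min();
    const long long hi = std::numeric_limits<Target>::max();
    return (value < lo || value > hi) ? fallback : static_cast<Target>(value);
}
```

Usage reads like numeric_cast with an extra argument: `narrow_or<short>(someInt, -1)` yields the converted value when it fits and -1 otherwise.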
|
1,993,431 | 1,993,457 | OpenGL: Rendering more than 8 lights, how? | How should I implement more than 8 lights in OpenGL?
I would like to render unlimited amounts of lights efficiently.
So, what's the preferred method for doing this?
| Deferred shading.
In a nutshell, you render your scene without any lights. Instead you store the normals and world positions along with the textured pixels in multiple frame-buffers (so-called render targets). You can even do this in a single pass if you use a multiple-render-target extension.
Once you have your buffers prepared you start to render a bunch of full-screen quads, each with a pixel shader program that reads out the normals and positions and computes the light for one or multiple light-sources.
Since light is additive, you can render as many full-screen quads as you want and accumulate the light from as many light sources as you want.
A final step does a composition between your light and the unlit textured frame-buffer.
That's more or less the state-of-the-art way to do it. Getting fog and transparency working with such a system is a challenge though.
|
1,993,482 | 1,993,523 | Compiler error in declaring template friend class within a template class | I have been trying to implement my own linked list class for didactic purposes.
I specified the List class as a friend inside the Iterator declaration, but it doesn't seem to compile.
These are the interfaces of the 3 classes I've used:
Node.h:
#define null (Node<T> *) 0
template <class T>
class Node {
public:
T content;
Node<T>* next;
Node<T>* prev;
Node (const T& _content) :
content(_content),
next(null),
prev(null)
{}
};
Iterator.h:
#include "Node.h"
template <class T>
class Iterator {
private:
Node<T>* current;
Iterator (Node<T> *);
public:
bool isDone () const;
bool hasNext () const;
bool hasPrevious () const;
void stepForward ();
void stepBackwards ();
T& currentElement () const;
friend class List<T>;
};
List.h
#include <stdexcept>
#include "Iterator.h"
template <class T>
class List {
private:
Node<T>* head;
Node<T>* tail;
unsigned int items;
public:
List ();
List (const List<T>&);
List& operator = (const List<T>&);
~List ();
bool isEmpty () const {
return items == 0;
}
unsigned int length () const {
return items;
}
void clear ();
void add (const T&);
T remove (const T&) throw (std::length_error&, std::invalid_argument&);
Iterator<T> createStartIterator () const throw (std::length_error&);
Iterator<T> createEndIterator () const throw (std::length_error&);
};
And this is the test program I've been trying to run:
trial.cpp
using namespace std;
#include <iostream>
#include "List/List.cc"
int main ()
{
List<int> myList;
for (int i = 1; i <= 10; i++) {
myList.add(i);
}
for (Iterator<int> it = myList.createStartIterator(); !it.isDone(); it.stepForward()) {
cout << it.currentElement() << endl;
}
return 0;
}
When I try to compile it, the compiler gives me the following errors:
Iterator.h:26: error: ‘List’ is not a template
Iterator.h: In instantiation of ‘Iterator’:
trial.cpp:18: instantiated from here
Iterator.h:12: error: template argument required for ‘struct List’
List.cc: In member function ‘Iterator List::createStartIterator() const [with T = int]’:
trial.cpp:18: instantiated from here
Iterator.h:14: error: ‘Iterator::Iterator(Node*) [with T = int]’ is private
List.cc:120: error: within this context
Seems like it is not recognizing the friend declaration.
Where did I go wrong?
| try adding a forward declaration
template <class T> class List;
at the start of Iterator.h -- that might be what you need to allow the friend declaration inside the Iterator class to work.
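Condensed into a single self-contained file (names shortened for illustration), the fix looks like this:

```cpp
#include <cassert>

template <class T>
struct Node {
    T content;
    Node<T>* next;
};

template <class T> class List;   // forward declaration: this is the missing piece

template <class T>
class Iterator {
    Node<T>* current;
    explicit Iterator(Node<T>* n) : current(n) {}   // private, as in the question
public:
    bool isDone() const { return current == nullptr; }
    T& currentElement() const { return current->content; }
    void stepForward() { current = current->next; }
    friend class List<T>;   // now names the declared template, so it compiles
};

template <class T>
class List {
    Node<T> head;
public:
    explicit List(const T& v) { head.content = v; head.next = nullptr; }
    Iterator<T> createStartIterator() { return Iterator<T>(&head); }  // friend access
};
```

Without the forward declaration, the compiler reaches `friend class List<T>` without knowing that List is a template, which is exactly the "'List' is not a template" error in the question.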
|
1,993,621 | 1,993,714 | compilation error about exceptions | I get a compilation error but do not know what the problem is. The code doesn't seem to use exceptions, but the error is about them.
//in misc.h:
char *basename(char *name); // line 94
// in misc.cc:
char *basename(char *name) { // line 12
char *result = name;
while(*name) {
if(*name == '/') result = name + 1;
name++;
}
return result;
}
Compilation error
g++ -pipe -W -Wall -fopenmp -ggdb3 -O2 -c -o misc.o ../../src/misc.cc
../../src/misc.cc: In function ‘char* basename(char*)’:
../../src/misc.cc:12: error: declaration of ‘char* basename(char*)’ throws different exceptions
../../src/misc.h:94: error: from previous declaration ‘char* basename(char*) throw ()’
make: *** [misc.o] Error 1
Does someone have some clue? Thanks and regards!
EDIT:
Files included in misc.h are
#include <iostream>
#include <cmath>
#include <fstream>
#include <cfloat>
#include <stdlib.h>
#include <string.h>
EDIT:
in misc.i generated by -E option,
extern "C++" char *basename (char *__filename)
throw () __asm ("basename") __attribute__ ((__nonnull__ (1)));
extern "C++" __const char *basename (__const char *__filename)
throw () __asm ("basename") __attribute__ ((__nonnull__ (1)));
# 640 "/usr/include/string.h" 3 4
# 1 "/usr/include/bits/string3.h" 1 3 4
# 23 "/usr/include/bits/string3.h" 3 4
extern void __warn_memset_zero_len (void) __attribute__((__warning__ ("memset used with constant zero length parameter; this could be due to transposed parameters")));
# 48 "/usr/include/bits/string3.h" 3 4
extern __inline __attribute__ ((__always_inline__)) __attribute__ ((__gnu_inline__, __artificial__)) void *
memcpy (void *__restrict __dest, __const void *__restrict __src, size_t __len) throw ()
{
return __builtin___memcpy_chk (__dest, __src, __len, __builtin_object_size (__dest, 0));
}
...
# 641 "/usr/include/string.h" 2 3 4
...
| You may be picking up the definition of basename() from libgen.h. On my OpenSUSE system, the version in libgen.h is defined with "throw ()" at the end (via the __THROW macro).
One thing you can try is to tell gcc to only run the preprocessor stage by adding the -E flag and then search for basename to see what is being defined:
g++ -pipe -W -Wall -fopenmp -ggdb3 -O2 -E -o misc.i ../../src/misc.cc
If that is happening, you'll either need to drop the include of libgen.h, match the throw specifier or change the name of your function.
|
1,993,682 | 1,994,108 | Why does my QGraphicsView not show up in my MainWindow in Qt4? | This is probably something very obvious, but I am new to Qt and can't figure it out. I have a simple MainWindow which has one button. When that button is clicked, I want to create a QGraphicsScene, add a few lines, and then show that in the window. However, when I run this code in a window it does not show up.
BUT, if I run this as a QApplication, it shows up just fine. What am I missing?
Here is the code in the MainWindow:
void TheDrawings::drawScene() {
qDebug() << "Setting up Scene";
QGraphicsScene scene(QRect(-50, -50, 400, 200));
QPen pen(Qt::red, 3, Qt::DashDotDotLine);
QGraphicsRectItem *rectItem = new QGraphicsRectItem(QRect(-50, -50, 400,
200), 0, &scene);
rectItem->setPen(pen);
rectItem->setBrush(Qt::gray);
QGraphicsSimpleTextItem *textItem = new QGraphicsSimpleTextItem(
"Amit Bahree", 0, &scene);
textItem->setPos(50, 0);
QGraphicsEllipseItem *eclipseItem = new QGraphicsEllipseItem(QRect(170, 20,
100, 75), 0, &scene);
eclipseItem->setPen(QPen(Qt::darkBlue));
eclipseItem->setBrush(Qt::darkBlue);
QGraphicsPolygonItem *polygonItem = new QGraphicsPolygonItem(QPolygonF(
QVector<QPointF> () << QPointF(10, 10) << QPointF(0, 90)
<< QPointF(40, 70) << QPointF(80, 110) << QPointF(70, 20)),
0, &scene);
polygonItem->setPen(QPen(Qt::darkGreen));
polygonItem->setBrush(Qt::yellow);
qDebug() << "Setting up the view";
QGraphicsView view;
view.setScene(&scene);
view.show();
}
| Your QGraphicsView needs the main window's central widget (or whatever widget you want to put it on top of) set as its parent. Also, you need to "new" your view and scene objects to put them on the heap so they don't get destroyed once drawScene finishes. See if the following changes to your code work for you:
QGraphicsScene* scene = new QGraphicsScene(QRect(-50, -50, 400, 200));
...
QGraphicsView* view = new QGraphicsView(ui->centralWidget);
view->setScene(scene);
view->setGeometry(QRect(50, 50, 400, 200));
view->show();
hope this helps, regards
|
1,993,738 | 1,993,754 | malloc Error in specific makefile | I have a piece of software whose code I have modified, and I ran make again.
If I run the modified code in a blank QtCreator project it runs well (nothing specific to Qt, just an example), but if I compile with the software's original makefile, I get an error on a line such as:
(*F)=(double**) malloc((size_arr)*sizeof(double*));
Not at compile time, but a segmentation fault at runtime. F is a double***, by the way.
What should I check in makefile? Any guess?
| Probably F is NULL or pointing to an invalid memory location. Since F gets dereferenced on the left side of the assignment, it needs to be properly initialized, so that it points to a memory location that can store the double** returned by malloc.
|
1,993,907 | 1,993,912 | What is 'v' in vtable? | What does v indicate in vtable or vptr
| The 'v' stands for 'Virtual'.
|
1,994,186 | 1,994,384 | Casting between integers and pointers in C++ | #include<iostream>
using namespace std;
int main()
{
int *p,*c;
p=(int*)10;
c=(int*)20;
cout<<(int)p<<(int)c;
}
Somebody asked me "What is wrong with the above code?" and I couldn't figure it out. Someone please help me.
| Some wanted a quote from the C++ standard (I'd have put this in the comments of that answer if the format of comments wasn't so restricted), here are two from the 1999 one:
5.2.10/3
The mapping performed by reinterpret_cast is implementation defined.
5.2.10/5
A value of integral type or enumeration type can be explicitly converted to a pointer.
A pointer converted to an integer of sufficient size (if any such exists on the implementation)
and back to the same pointer type will have its original value; mappings between pointers and
integers are otherwise implementation-defined.
And I see nothing mandating that such implementation-defined mapping must give a valid representation for all input. Otherwise said, an implementation on an architecture with address registers can very well trap when executing
p = (int*)10;
if the mapping does not give a representation valid at that time (yes, what is a valid representation for a pointer may depend of time. For instance delete may make invalid the representation of the deleted pointer).
|
1,994,258 | 1,994,276 | Resolving ambiguous calls in C++ with namespaces | I'm merging a static library (assimp) into an existing project (Spring RTS) where both the library and the project are under regular development. I'm trying to add the library in such a way that I can easily repeat the integration as new releases come out.
Anyway, the issue is that Spring requires the library to perform all maths using the streflop maths library. In practice this means min(x,y) should be replaced with streflop::min(x,y) everywhere it is used (which is a lot, considering the issue applies to all maths functions).
I could do a mass regex replace but I was hoping for something a little more elegant. After some research and testing it seemed that adding using namespace streflop; at the top of each .cpp file would do the trick but it didn't.
The exact error is:
/mnt/work/workspace/spring-patch-git/spring/rts/lib/assimp/code/FixNormalsStep.cpp:147: error: call of overloaded sqrtf(const float&) is ambiguous
/usr/include/bits/mathcalls.h:157: note: candidates are: float sqrtf(float)
/mnt/work/workspace/spring-patch-git/spring/rts/lib/streflop/SMath.h:347: note: streflop::Simple streflop::sqrtf(streflop::Simple)
I thought the whole point of namespaces was to resolve this kind of thing but it doesn't seem to work here. I'm a bit confused by the reference to streflop::Simple as well. Is this a nested namespace and could that be part of the reason it isn't working as expected?
| If you only need the min function from the streflop namespace, you can use
using streflop::min;
instead of
using namespace streflop;
This will import only the name min, not the whole namespace.
Your error is because what you are doing imports every name from the streflop namespace so that they can be used unqualified, and sqrtf already exists unqualified. Are you perhaps including C header files as they are in C? That is, using math.h instead of cmath? Because if you use the C++ headers like cmath, the functions from the standard library will be in the std namespace and you shouldn't get a clash even if you import the whole streflop namespace.
Another option is that if the places where you now get errors from are few, you can explicitly qualify them. Like in this case, you can replace sqrtf with either streflop::sqrtf or ::sqrtf, depending on which version you want to use.
The streflop::Simple has little to do with your issue; it is just the parameter type and return value for streflop::sqrtf. The only way it is involved is that in overload resolution it gets treated like float so that both of the sqrtf functions listed are possible to call and the compiler cannot determine which one you meant.
|
1,994,374 | 1,994,452 | Passing a Delegate as Callback to a Native C++ API Call | Could someone point out what's wrong with this code, please? I'm having a very hard time mixing C++ and MC++. I have read a lot of blogs and tutorials regarding this subject (passing delegates), but now that my code looks ok (it compiles and runs well in debug mode when stepping through) it crashes.
The main problem is that it needs to have a Delegate that is a member function (which needs to access other class members).
I remember there's a note in the waveInProc documentation which says that inside the callback you cannot call any system function. Could this be what's crashing the application, since the callback tries to use other members and the managed environment ends up calling other system methods?
ref class CWaveIn
{
public:
void CWaveIn::Open(int currentInputDeviceId)
private:
void AllocateBuffer(void);
void WaveInProc(HWAVEIN hwi, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2);
delegate void CallBack(HWAVEIN hwi, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2);
CallBack^ myDelegate;
protected:
WAVEFORMATEX* waveFormat;
int bufferDuration; // in seconds
BYTE* waveInBuffer;
int bufferSize;
};
void CWaveIn::AllocateBuffer(void)
{
free(waveInBuffer);
bufferSize = waveFormat->nAvgBytesPerSec * bufferDuration;
waveInBuffer = new BYTE[bufferSize];
Debug::WriteLine("BufferSize: " + bufferSize);
}
void CWaveIn::WaveInProc(HWAVEIN hwi, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
switch(uMsg) {
case WIM_CLOSE:
Debug::WriteLine("WIM_CLOSE");
break;
case WIM_DATA:
for(int i=0;i<bufferSize; i++) {
Debug::Write(waveInBuffer[i] + " ");
}
Debug::WriteLine("WIM_DATA");
break;
case WIM_OPEN:
Debug::WriteLine("WIM_OPEN");
break;
}
}
void CWaveIn::Open(int currentInputDeviceId)
{
MMRESULT result = ::waveInOpen(0, currentInputDeviceId, waveFormat, 0, 0, WAVE_FORMAT_QUERY);
Debug::WriteLine(L"CWaveIn::Open() WAVE_FORMAT_QUERY: device " + currentInputDeviceId.ToString());
DebugError(result);
if(result == MMSYSERR_NOERROR)
{
myDelegate = gcnew CallBack(this, &CWaveIn::WaveInProc);
pin_ptr<CallBack^> ptrMyDelegate= &myDelegate;
IntPtr delegatePointer = System::Runtime::InteropServices::Marshal::GetFunctionPointerForDelegate(myDelegate);
HWAVEIN hWaveIn;
MMRESULT result = ::waveInOpen(&hWaveIn, currentInputDeviceId, waveFormat, (DWORD_PTR)delegatePointer.ToPointer(), 0, CALLBACK_FUNCTION | WAVE_FORMAT_DIRECT);
Debug::WriteLine(L"CWaveIn::Open() : device " + currentInputDeviceId.ToString());
DebugError(result);
AllocateBuffer();
WAVEHDR WaveInHdr;
WaveInHdr.lpData = (LPSTR)waveInBuffer;
WaveInHdr.dwBufferLength = bufferSize;
WaveInHdr.dwBytesRecorded=0;
WaveInHdr.dwUser = 0L;
WaveInHdr.dwFlags = 0L;
WaveInHdr.dwLoops = 0L;
::waveInPrepareHeader(hWaveIn, &WaveInHdr, sizeof(WAVEHDR));
result = ::waveInAddBuffer(hWaveIn, &WaveInHdr, sizeof(WAVEHDR));
result = ::waveInStart(hWaveIn);
Debug::WriteLine(L"CWaveIn::Start() : device " + currentInputDeviceId.ToString());
DebugError(result);
}
}
| You are declaring the pin_ptr locally and then passing the delegate's function pointer to an unmanaged function.
All the managed references to this delegate are inside CWaveIn::Open(int currentInputDeviceId),
so I guess the GC sees no reason to keep the object alive after CWaveIn::Open exits.
Try to create it at class scope instead of function scope.
|
1,994,649 | 1,995,013 | Windows network packet modification | I'm looking to write a small program which will intercept network packets (on the local machine) and modify them before they go out on the network. I need to be able to modify the headers as well, not just the data.
I've already looked through several possibilities but am unsure which one is best to pursue. There are open source packet filters out there, but filtering only seems to be able to either allow or reject packets, not much else.
The other solution would be to write an NDIS intermediate driver, but writing drivers is a beyond me. Even the simple pass-thru example in the WinDDK is thousands of lines. I'm also not looking forward to having to constantly reinstall a driver and reboot to test my code.
I'd ideally like the program to be self contained, and not rely on the installation of 3rd party drivers/software/whatever.
So if you people could point me in the right direction, throw some helpful links my way, whatever, I'd appreciate it.
| It depends on what kind of packets you want to filter/modify.
If you're after application-level filtering, and want to get your hands on HTTP or similar packets, your best bet would probably be an LSP. Note however, following this path has certain disadvantages. First MS seems to be trying to get rid of this technology, and IIRC a part of Windows 7 logo requirements is "no LSP in your product", they seem to be promoting the Windows Filtering Platform. Second, you'd be very surprised with how much trouble you're getting into in terms of 3rd party LSP compatibility. Third, a very dummy LSP is still around 2 KLOC :)
If you're after an IP level packet filtering you'd need to go for a driver.
Windows Filtering Platform provides you with functionality needed in either case. However, it's only available on Windows Vista and later products, so no XP there. Another thing to take into consideration, WFP was only capable of allow/reject packets in user-land, and if you need to modify them, you'd need to go kernel-mode. (At least that what the situation was at the time it appeared, maybe they've improved something by now).
|
1,994,676 | 1,994,722 | Hooking DirectX EndScene from an injected DLL | I want to detour EndScene from an arbitrary DirectX 9 application to create a small overlay. As an example, you could take the frame counter overlay of FRAPS, which is shown in games when activated.
I know the following methods to do this:
Creating a new d3d9.dll, which is then copied to the games path. Since the current folder is searched first, before going to system32 etc., my modified DLL gets loaded, executing my additional code.
Downside: You have to put it there before you start the game.
Same as the first method, but replacing the DLL in system32 directly.
Downside: You cannot add game specific code. You cannot exclude applications where you don't want your DLL to be loaded.
Getting the EndScene offset directly from the DLL using tools like IDA Pro 4.9 Free. Since the DLL gets loaded as is, you can just add this offset to the DLL starting address, when it is mapped to the game, to get the actual offset, and then hook it.
Downside: The offset is not the same on every system.
Hooking Direct3DCreate9 to get the D3D9, then hooking D3D9->CreateDevice to get the device pointer, and then hooking Device->EndScene through the virtual table.
Downside: The DLL cannot be injected, when the process is already running. You have to start the process with the CREATE_SUSPENDED flag to hook the initial Direct3DCreate9.
Creating a new Device in a new window, as soon as the DLL gets injected. Then, getting the EndScene offset from this device and hooking it, resulting in a hook for the device which is used by the game.
Downside: as of some information I have read, creating a second device may interfere with the existing device, and it may bug with windowed vs. fullscreen mode etc.
Same as the third method. However, you'll do a pattern scan to get EndScene.
Downside: doesn't look that reliable.
How can I hook EndScene from an injected DLL, which may be loaded when the game is already running, without having to deal with different d3d9.dll's on other systems, and with a method which is reliable? How does FRAPS, for example, perform its DirectX hooks?
The DLL should not apply to all games, just to specific processes where I inject it via CreateRemoteThread.
| You install a system wide hook. (SetWindowsHookEx) With this done, you get to be loaded into every process.
Now when the hook is called, you look for a loaded d3d9.dll.
If one is loaded, you create a temporary D3D9 object, and walk the vtable to get the address of the EndScene method.
Then you can patch the EndScene call with your own method (replace the first instruction in EndScene with a call to your method).
When you are done, you have to patch the call back, to call the original EndScene method. And then reinstall your patch.
This is the way FRAPS does it. (Link)
You can find a function address from the vtable of an interface.
So you can do the following (Pseudo-Code):
IDirect3DDevice9* pTempDev = ...;
const int EndSceneIndex = 42; // commonly cited slot for EndScene in the IDirect3DDevice9 vtable; verify against your headers
typedef HRESULT (STDMETHODCALLTYPE *EndSceneFunc)( IDirect3DDevice9* );
void** pVtable = *reinterpret_cast<void***>( pTempDev ); // the first pointer-sized field of the object is the vptr
EndSceneFunc pEndScene = reinterpret_cast<EndSceneFunc>( pVtable[EndSceneIndex] );
pEndScene now contains a pointer to the function itself. We can now either patch all call-sites or patch the function itself.
Beware that this all depends on knowledge of the implementation of COM interfaces in Windows. But this works on all Windows versions (either 32- or 64-bit, not both at the same time).
|
1,994,841 | 1,994,857 | Initialising C structures in C++ code | Is there a better way to initialise C structures in C++ code?
I can use initialiser lists at the variable declaration point; however, this isn't that useful if all arguments are not known at compile time, or if I'm not declaring a local/global instance, eg:
Legacy C code which declares the struct, and also has API's using it
typedef struct
{
int x, y, z;
} MyStruct;
C++ code using the C library
void doSomething(std::vector<MyStruct> &items)
{
items.push_back(MyStruct(5,rand()%100,items.size()));//doesn't work because there is no such constructor
items.push_back({5,rand()%100,items.size()});//not allowed either
//works, but much more to write...
MyStruct v;
v.x = 5;
v.y = rand()%100;
v.z = items.size();
items.push_back(v);
}
Creating local instances and then setting each member one at a time (myStruct.x = 5; etc) is a real pain, and somewhat hard to read when trying to add say 20 different items to the container...
| If you can't add a constructor (which is the best solution in C++03 but you probably have compatibility constraint with C), you can write a function with the same effect:
MyStruct makeAMyStruct(int x, int y, int z)
{
MyStruct result = { x, y, z };
return result;
}
items.push_back(makeAMyStruct(5,rand()%100,items.size()));
Edit: I've now checked; C++0x offers something for this precise problem:
items.push_back(MyStruct{5,rand()%100,items.size()});
which is available in g++ 4.4.
|
1,994,910 | 1,994,917 | convert Class member callback from __stdcall to DWORD_PTR | I'm trying to use a class member as a callback but the compiler gives me the following error:
Error 2 error C2440: 'type cast' : cannot convert from 'void (__stdcall CWaveIn::* )(HWAVEIN,UINT,DWORD_PTR,DWORD_PTR,DWORD_PTR)' to 'DWORD_PTR'
Is it possible to use a member function as a callback this way? And how do I convert the __stdcall member pointer to the DWORD_PTR requested by the WinAPI function?
class CWaveIn
{
private:
void CALLBACK WaveInProc(HWAVEIN hwi, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2);
};
void CWaveIn::Open()
{
(...)
MMRESULT result = ::waveInOpen(&hWaveIn, currentInputDeviceId, waveFormat, (DWORD_PTR)CWaveIn::WaveInProc, 0, CALLBACK_FUNCTION | WAVE_FORMAT_DIRECT);
}
| You cannot directly pass in class methods.
This is the right way:
class CWaveIn
{
private:
static void CALLBACK staticWaveInProc(HWAVEIN hwi, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
CWaveIn* pThis = reinterpret_cast<CWaveIn*>( dwInstance ); // the 'this' passed to waveInOpen arrives as dwInstance
pThis->WaveInProc( ... );
}
void WaveInProc(HWAVEIN hwi, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
// your code
}
};
void CWaveIn::Open()
{
(...)
MMRESULT result = ::waveInOpen(&hWaveIn, currentInputDeviceId, waveFormat, (DWORD_PTR)CWaveIn::staticWaveInProc, (DWORD_PTR)this, CALLBACK_FUNCTION | WAVE_FORMAT_DIRECT);
}
|
1,994,970 | 1,995,712 | C++ - Failed loading a data file ! | I have a simple data file that I want to load, in a C++ program. For weird reasons, it doesn't work:
I tried it on Windows assuming the file was in the same directory: failed.
I tried it on Windows by moving the file in C:\ directory: worked.
I tried it on Linux putting the file in the same directory: failed.
The snippet:
void World::loadMap(string inFileName) {
ifstream file(inFileName.c_str(), ios::in);
if (file) {
}
else
{
cout<<"Error when loading the file \n";
exit(-1);
}
}
I call the loadMap method like this:
World::Instance()->loadMap("Map.dat");
(World is a singleton class).
How can I find the exact error, by using try-catch or anything else?
| The problem is the working directory.
When you specify a relative path for a file it uses the working directory (which may not be the same as the directory where you application is stored on the file system).
Thus you either need to use an absolute path.
Or you need to find the current working directory and specify the file relative to that.
Or change the current working directory.
|
1,995,051 | 1,995,058 | While(1) in constructor or using threads? | Is it recommended to put a while loop which never ends in a constructor? Or should I use threads to get the same result?
Is it good when a constructor never terminates? Or is it more secure to avoid segmentation faults?
Hope you understand my bad English..
| An object does not exist if its constructor does not finish. So putting a while(1) loop in a constructor will prevent objects being created using that constructor. You need to describe what problem you think doing this will solve.
|
1,995,053 | 1,995,057 | const char* concatenation | I need to concatenate two const chars like these:
const char *one = "Hello ";
const char *two = "World";
How might I go about doing that?
I am passed these char*s from a third-party library with a C interface so I can't simply use std::string instead.
| In your example one and two are char pointers, pointing to char constants. You cannot change the char constants pointed to by these pointers. So anything like:
strcat(one,two); // append string two to string one.
will not work. Instead you should have a separate variable(char array) to hold the result. Something like this:
char result[100]; // array to hold the result.
strcpy(result,one); // copy string one into the result.
strcat(result,two); // append string two to the result.
|
1,995,062 | 1,995,084 | Why can't I run my OpenGL program outside of Visual Studio? | I have an OpenGL-program using GLSL, that I can run just fine with the Play-button in Visual Studio (2008) -- both in the standard Release and Debug configurations.
However, when I try to run the executable from Explorer, all I get is a flashing cmd-prompt with no text in it to indicate any kind of failure loading something.
I have tried copying the required DLL-files (glut32.dll, glew.dll, etc.) to the same folder as the executable is located in, but that didn't make a difference. I also tried copying the GLSL-files to the same folder, but that didn't help either.
What do I need to do, to make the program run without Visual Studio?
| Have you tried checking the paths of any external resources. The run button in Visual Studio will by default run the program with a different working directory than if you use explorer.
The other thing you should do is try adding some logging, (even just writing text to stderr at critical points). That way you can see, for example, if you application ever got to the main function.
|
1,995,109 | 1,995,131 | Why use string::iterator rather than index? |
Possible Duplicate:
Why use iterators instead of array indices?
string::iterator it;
for (it = str.begin(); it < str.end(); it++)
cout << *it;
cout << endl;
Why not:
for (int i = 0; i < str.size(); i++)
cout << str[i];
cout << endl;
It seems that string::iterator does not provide range check either. Why should we use string::iterator rather than index?
Thanks.
| The index can only be used for containers that support random access - direct access to a given position.
The iterator offers a unified way to access any collection/data structure. The flexibility when refactoring your code is immense.
|
1,995,269 | 1,995,287 | My Function chain doesn't want to work, why? | I have the following class:
GLRectangle.h
#include "XPView.h"
class GLRectangle
{
public:
int top, left, bottom, right;
public:
GLRectangle(void);
~GLRectangle(void);
GLRectangle* centerRect(int rectWidth, int rectHeight, int boundWidth=0, int boundHeight=0);
};
GLRectangle.cpp
#include "GLRectangle.h"
GLRectangle::GLRectangle(void)
{
}
GLRectangle::~GLRectangle(void)
{
}
GLRectangle* GLRectangle::centerRect(int rectWidth, int rectHeight, int boundWidth, int boundHeight)
{
if(boundWidth == 0)
{
boundWidth = XPView::getWindowWidth();
}
if(boundHeight == 0)
{
boundHeight = XPView::getWindowHeight();
}
// Set rectangle attributes
left = boundWidth / 2 - rectWidth / 2;
top = boundHeight / 2 + rectHeight / 2;
right = boundWidth / 2 + rectWidth / 2;
bottom = boundHeight / 2- rectHeight / 2;
return this;
}
and I'm trying to chain a function onto the construction of the object as follows:
wndRect = new GLRectangle()->centerRect(400, 160);
but getting the following error:
error C2143: syntax error:missing ';' before '->'
Is there a way to get this to work?
| It's an operator precedence problem. Try
// Add some brackets
wndRect = (new GLRectangle())->centerRect(400, 160);
|
1,995,290 | 1,995,322 | Should I make my functions as general as possible? | template<class T>
void swap(T &a, T &b)
{
T t;
t = a;
a = b;
b = t;
}
replace
void swap(int &a, int &b)
{
int t;
t = a;
a = b;
b = t;
}
This is the simplest example I could come up with, but there are many other, more complicated functions. Should I make all methods I write templated if possible?
Any disadvantages to doing this?
thanks.
| Genericity has the advantage of being reusable. However, write things generic, only if:
It doesn't take much more time to do that, than do it non-generic
It doesn't complicate the code more than a non-generic solution
You know will benefit from it later
However, know your standard library. The case you presented is already in STL as std::swap.
Also, remember that when writing generically using templates, you can optimize special cases by using template specialization. However, always to it when it's needed for performance, not as you write it.
Also, note that you have the question of run-time and compile-time performance here. Template-based solutions increase compile-time. Inline solutions can but not must decrease run-time.
`Cause "Premature optimization and genericity is the root of all evil". And you can quote me on that -_-.
|
1,995,311 | 1,995,621 | Drawing Text On Window | I want to make a chat application, and as a first step I need to know which API to use to display text in lines and also erase it if needed. Thanks!
| Charles Petzold's classic book Programming Windows is one of the best ways to learn the Win32 API.
|
1,995,328 | 2,031,708 | Are there any better methods to do permutation of string? | void permute(string elems, int mid, int end)
{
static int count;
if (mid == end) {
cout << ++count << " : " << elems << endl;
return ;
}
else {
for (int i = mid; i <= end; i++) {
swap(elems, mid, i);
permute(elems, mid + 1, end);
swap(elems, mid, i);
}
}
}
The above function shows the permutations of str(with str[0..mid-1] as a steady prefix, and str[mid..end] as a permutable suffix). So we can use permute(str, 0, str.size() - 1) to show all the permutations of one string.
But the function uses a recursive algorithm; maybe its performance could be improved?
Are there any better methods to permute a string?
| Here is a non-recursive algorithm in C++ from the Wikipedia entry for unordered generation of permutations. For the string s of length n, for any k from 0 to n! - 1 inclusive, the following modifies s to provide a unique permutation (that is, different from those generated for any other k value on that range). To generate all permutations, run it for all n! k values on the original value of s.
#include <algorithm>
void permutation(int k, string &s)
{
for(int j = 1; j < s.size(); ++j)
{
std::swap(s[k % (j + 1)], s[j]);
k = k / (j + 1);
}
}
Here std::swap exchanges the characters at positions k % (j + 1) and j of the string s.
|
1,995,495 | 1,995,513 | What are static variables? | What are static variables designed for? What's the difference between static int and int?
| The static keyword has four separate uses, only two of which are closely related:
static at global and namespace scope (applied to both variables and functions) means internal linkage
this is replaced by unnamed namespaces and is unrelated to the rest
in particular, others tend to imply some sort of uniqueness, but internal linkage means the opposite: you can have many objects with the same name, as long as each has internal linkage and you only have one per translation unit
static data members are "shared" among all instances of the class
it's more like they are independent of any class instance
this is sometimes grouped with static methods
static methods do not "operate" on a current instance
no this pointer; can call without an instance
static local variables (in functions) persist across the scope of each function call
Both static data members and static local variables can become hidden global state, and should be used carefully.
Now which two are closely related? It's not the two class members—the warning about global state gives it away. You can consider static data members as static local variables, where the functions to which they belong are all methods of the class, instead of a single function.
I found many related questions, but, surprisingly, no duplicates.
|
1,995,546 | 1,995,601 | linking objective c++ | I am trying to figure out why when I convert my main.m file to a main.mm file, it no longer will link properly.
I have reduces the problem to the following example code:
#import <Foundation/Foundation.h>
#import <AppKit/AppKit.h>
int main( int argc, const char ** argv ) {
return NSApplicationMain( argc, argv);
}
I am using gnustep and linux. I enter the following commands and everything works as expected:
g++ -g -c main.m -I/usr/GNUstep/Local/Library/Headers -I/usr/GNUstep/System/Library/Headers
g++ -g -o test main.o -L/usr/GNUstep/Local/Library/Libraries -L/usr/GNUstep/System/Library/Libraries -lgnustep-base -lgnustep-gui
Now if I rename main.m to main.mm and use these two commands ( same exept main.m now main.mm):
g++ -g -c main.mm -I/usr/GNUstep/Local/Library/Headers -I/usr/GNUstep/System/Library/Headers
g++ -g -o test main.o -L/usr/GNUstep/Local/Library/Libraries -L/usr/GNUstep/System/Library/Libraries -lgnustep-base -lgnustep-gui
I get the following error:
main.mm:7: undefined reference to `NSApplicationMain(int, char const**)'
Can someone please find what I am doing wrong? I do not see why it is now failing to link.
I am trying to add some C++ classes to an objective c program and this is preventing me from continuing.
Thank you for any help you can provide.
| The problem is that when you compile it as C++, the compiler mangles the name of the symbol NSApplicationMain, so it can't find it, since it's looking for something like __Z17NSApplicationMainiPPKc. You can use the nm program (from binutils) to see what symbols the object files are referencing:
$ # When compiled as Objective-C:
$ nm main.o | grep NSApplicationMain
U NSApplicationMain
$ # When compiled as Objective-C++:
$ nm main.o | grep NSApplicationMain
U _Z17NSApplicationMainiPPKc
In order to avoid this problem, C functions need to be declared with an extern "C" modifier in order to tell the compiler not to mangle the name. Looking into <AppKit/NSApplication.h>, the header file where NSApplicationMain is declared, I see this:
APPKIT_EXPORT int
NSApplicationMain(int argc, const char **argv);
Alas, APPKIT_EXPORT is defined to be one of extern, __declspec(dllexport), extern __declspec(dllexport), or nothing in <AppKit/AppKitDefines.h>. Since it's also used for global variable declarations, we can't get around this by redefining it to extern "C" (which would be extremely hacky and kludgy anyways). The AppKit header files do not seem to contain any extern "C" declarations at all, although I do see them in various header files under Foundation/ and GNUStepBase/.
So what can you do? The solution is to wrap your includes with an extern "C":
extern "C"
{
#import <Foundation/Foundation.h>
#import <AppKit/AppKit.h>
}
int main( int argc, const char ** argv ) {
return NSApplicationMain( argc, argv);
}
This will give the functions defined in those header files the proper linkage, and it will all work out. But you shouldn't have to do this -- I would file a bug report with GNUstep, telling them to add proper extern "C" declarations to their header files.
|
1,995,554 | 2,127,821 | An Optimum 2D Data Structure | I've given this a lot of thought but haven't really been able to come up with something.
Suppose I want an m X n collection of elements sortable by any column and any row in under O(m*n), and also the ability to insert or delete a row in O(m+n) or less... is it possible?
What I've come up with is a linked grid, where the nodes are inserted into a vector so I have indices for them, and I indexed the first row and column to remove the necessity of traversing the list in any one direction. With my method I've achieved the above complexity, but I was just wondering if it is possible to reduce that further by a non-constant factor.
Example for sortability:
1 100 25 34
2 20 15 16
3 165 1 27
Sorted by 3rd row:
25 1 34 100
15 2 16 20
1 3 27 165
Sorting THAT by 1st column:
1 3 27 165
15 2 16 20
25 1 34 100
| I would create two index arrays, one for the columns, and one for the rows. So for your data
1 100 25 34
2 20 15 16
3 165 1 27
You create two arrays:
cols = [0, 1, 2, 3]
rows = [0, 1, 2]
Then when you want to sort the matrix by the 3rd row, you keep the original matrix intact, but just change the indices array accordingly:
cols = [2, 0, 3, 1]
rows = [0, 1, 2]
The trick now is to access your matrix with one indirection. So instead of accessing it with m[x][y] you access it by m[cols[x]][rows[y]]. You also have to use m[cols[x]][rows[y]] when you perform the reordering of the rows/cols array.
This way sorting is O(n*log(n)), and access is O(1).
For the data structure, I would use an array with links to another array:
+-+
|0| -> [0 1 2 3 4]
|1| -> [0 1 2 3 4]
|2| -> [0 1 2 3 4]
+-+
To insert a row, just insert it at the last position and update the rows index array accordingly, with the correct position. E.g. when rows was [0, 1, 2] and you want to insert the new row at the front, rows becomes [3, 0, 1, 2]. This way insertion of a row is O(n).
To insert a column, you also add it as the last element, and update cols accordingly. Inserting a column is O(m), row is O(n).
Deletion is also O(n) or O(m), here you just replace the column/row you want to delete with the last one, and then remove the index from the index array.
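The column-index indirection described above can be sketched as follows (the helper name is invented; the matrix itself is never reordered, only the index array):

```cpp
#include <algorithm>
#include <vector>

// Return a column index array ordered by the values in row r.
// Element access afterwards goes through the indirection:
// logical position (x, y) reads m[rows[y]][cols[x]].
std::vector<int> sortColsByRow(const std::vector<std::vector<int>>& m, int r) {
    std::vector<int> cols(m[0].size());
    for (int i = 0; i < static_cast<int>(cols.size()); ++i)
        cols[i] = i;
    // O(n log n) sort of the indices; the matrix stays untouched.
    std::sort(cols.begin(), cols.end(),
              [&m, r](int a, int b) { return m[r][a] < m[r][b]; });
    return cols;
}
```

With the matrix from the question, sorting by the 3rd row yields cols = [2, 0, 3, 1], matching the index array shown above.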
|
1,995,734 | 1,995,979 | How are exceptions implemented under the hood? | Just about everyone uses them, but many, including me simply take it for granted that they just work.
I am looking for high-quality material. Languages I use are: Java, C, C#, Python, C++, so these are of most interest to me.
Now, C++ is probably a good place to start since you can throw anything in that language.
Also, C is close to assembly. How would one emulate exceptions using pure C constructs and no assembly?
Finally, I heard a rumor that Google employees do not use exceptions for some projects due to speed considerations. Is this just a rumor? How can anything substantial be accomplished without them?
Thank you.
| Exceptions are just a specific example of a more general case of advanced non-local flow control constructs. Other examples are:
notifications (a generalization of exceptions, originally from some old Lisp object system, now implemented in e.g. CommonLisp and Ioke),
continuations (a more structured form of GOTO, popular in high-level, higher-order languages),
coroutines (a generalization of subroutines, popular especially in Lua),
generators à la Python (essentially a restricted form of coroutines),
fibers (cooperative light-weight threads) and of course the already mentioned
GOTO.
(I'm sure there's many others I missed.)
An interesting property of these constructs is that they are all roughly equivalent in expressive power: if you have one, you can pretty easily build all the others.
So, how you best implement exceptions depends on what other constructs you have available:
Every CPU has GOTO, therefore you can always fall back to that, if you must.
C has setjmp/longjmp which are basically MacGyver continuations (built out of duct-tape and toothpicks, not quite the real thing, but will at least get you out of the immediate trouble if you don't have something better available).
The JVM and CLI have exceptions of their own, which means that if the exception semantics of your language match Java's/C#'s, you are home free (but if not, then you are screwed).
The Parrot VM has both exceptions and continuations.
Windows has its own framework for exception handling, which language implementors can use to build their own exceptions on top.
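To make the setjmp/longjmp point concrete, here is a minimal sketch of those "MacGyver" exceptions (the function names are invented; note that longjmp runs no destructors, so this is only safe for C-style code):

```cpp
#include <csetjmp>

static std::jmp_buf recovery_point;

// "throw": jump back to the active recovery point with a nonzero code.
static void might_fail(int n) {
    if (n < 0)
        std::longjmp(recovery_point, 1);
}

// "try/catch": setjmp returns 0 when establishing the recovery point,
// and returns the longjmp code when control comes flying back to it.
int run_guarded(int n) {
    if (setjmp(recovery_point) == 0) {
        might_fail(n);
        return 0;      // normal completion
    }
    return -1;         // the error was "caught"
}
```

A real implementation would also need a stack of recovery points to support nesting, which is essentially what language runtimes layer on top.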
A very interesting use case, both of the usage of exceptions and the implementation of exceptions is Microsoft Live Lab's Volta Project. (Now defunct.) The goal of Volta was to provide architectural refactoring for Web applications at the push of a button. So, you could turn your one-tier web application into a two- or three-tier application just by putting some [Browser] or [DB] attributes on your .NET code and the code would then automagically run on the client or in the DB. In order to do that, the .NET code had to be translated to JavaScript source code, obviously.
Now, you could just write an entire VM in JavaScript and run the bytecode unmodified. (Basically, port the CLR from C++ to JavaScript.) There are actually projects that do this (e.g. the HotRuby VM), but this is both inefficient and not very interoperable with other JavaScript code.
So, instead, they wrote a compiler which compiles CIL bytecode to JavaScript sourcecode. However, JavaScript lacks certain features that .NET has (generators, threads, also the two exception models aren't 100% compatible), and more importantly it lacks certain features that compiler writers love (either GOTO or continuations) and that could be used to implement the above-mentioned missing features.
However, JavaScript does have exceptions. So, they used JavaScript Exceptions to implement Volta Continuations and then they used Volta Continuations to implement .NET Exceptions, .NET Generators and even .NET Managed Threads(!!!)
So, to answer your original question:
How are exceptions implemented under the hood?
With Exceptions, ironically! At least in this very specific case, anyway.
Another great example is some of the exception proposals on the Go mailing list, which implement exceptions using Goroutines (something like a mixture of concurrent coroutines and CSP processes). Yet another example is Haskell, which uses Monads, lazy evaluation, tail call optimization and higher-order functions to implement exceptions. Some modern CPUs also support basic building blocks for exceptions (for example the Vega-3 CPUs that were specifically designed for the Azul Systems Java Compute Accelerators).
|
1,995,744 | 1,995,780 | C++ (C3867) Passing a member function to a function call | I'm trying to pass a member function as an argument. Basically I have a class called AboutWindow, and the header looks as this (trimmed for brevity):
class AboutWindow
{
private:
AboutWindow(void);
~AboutWindow(void);
public:
int AboutWindowCallback(XPWidgetMessage inMessage, XPWidgetID inWidget, long inParam1, long inParam2);
};
and in the source I am trying to pass the AboutWindowCallback member function as a (pointer to / reference to) the function.
It looks something like this:
XPAddWidgetCallback(widgetId, this->AboutWindowCallback);
but I am getting the following IntelliSense warning:
A pointer to a bound function may only
be used to call the function
Is it possible to pass the member function to XPAddWidgetCallback. Note it has to be that specific function, of that specific instance, as inside the AboutWindowCallback function, the this keyword is used.
Thanks in advance!
| I assume you're using x-plane here.
Unfortunately XPAddWidgetCallback expects a callback in the form of
int callback(XPWidgetMessage inMessage, XPWidgetID inWidget,
long inParam1, long inParam2)
You provided a class member function. It would work if this were a global function. So this :
class AboutWindow
{
private:
AboutWindow(void);
~AboutWindow(void);
};
int AboutWindowCallback(XPWidgetMessage inMessage, XPWidgetID inWidget,
long inParam1, long inParam2);
Would be correct.
However, it's not nice to have to have global functions for that, as noted in this thread on x-plane.
To circumvent that, have a look at the Wrapper class example "XPCWidget.cpp" that is supplied with x-plane, or use tr1/boost bind to wrap the callback.
XPAddWidgetCallback( std::tr1::bind( &AboutWindow::AboutWindowCallback, this ) );
You can also make the function in question static:
class AboutWindow
{
private:
AboutWindow(void);
~AboutWindow(void);
static int AboutWindowCallback(XPWidgetMessage inMessage, XPWidgetID inWidget,
long inParam1, long inParam2);
};
|
1,995,963 | 2,001,251 | Is (1 + sqrt(2))^2 = 3 + 2*sqrt(2) satisfied in Floating Point arithmetics? | In mathematics the identity (1 + sqrt(2))^2 = 3 + 2*sqrt(2) holds true. But in floating point (IEEE 754, using single precision i.e. 32 bits) calculations it's not the case, as sqrt(2) doesn't have an exact representation in binary.
So does using a approximated value of sqrt(2) provide different results for left and right hand sides? If so why? Does squaring the approximated value reduce accuracy significantly?
Which of the equivalent expressions then gives the most accurate result?
| This identity happens to hold when computed as written in IEEE-754 double precision. Here's why:
The square root of two correctly rounded to double precision is:
sqrt(2) = 0x1.6a09e667f3bcd * 2^0
(I'm using hexadecimal here because the representations are tidier, and the translation into the IEEE754 format is much easier). Multiplication by two is exact in binary floating-point if no overflow occurs, as is the case here, so:
2*sqrt(2) = 0x1.6a09e667f3bcd * 2^1
When we add three, we get:
3 + 2*sqrt(2) = 0x1.7504f333f9de68 * 2^2
This, however, is not a representable double-precision number (it is one bit too wide), so the result is rounded to the nearest representable number. It happens that this value is exactly halfway between two representable numbers, so we pick the one with a trailing zero bit:
3 + 2*sqrt(2) = 0x1.7504f333f9de6 * 2^2
Now the other side of the computation. When we add one to the double-precision square root of two, we get:
1 + sqrt(2) = 0x1.3504f333f9de68 * 2^1
This is also an exact halfway case between two representable double-precision numbers, and again it is rounded to the nearest "even" representable number:
1 + sqrt(2) = 0x1.3504f333f9de6 * 2^1
When this value is squared, the result is:
(1 + sqrt(2))*(1 + sqrt(2)) = 0x1.7504f333f9de599cacbc97eaa4 * 2^2
Which is not a representable double-precision number either. This one is not an exact halfway case, so it merely rounds to the nearest representable number, which is:
(1 + sqrt(2))*(1 + sqrt(2)) = 0x1.7504f333f9de6 * 2^2
Summary: Computing this value in two different ways incurs two different sequences of roundings, but the final result is the same. We only looked at the computation in double precision, however; this may not be the case when the computation is carried out using different arithmetic types.
In general, however, the expression 3 + 2*sqrt(2) should be expected to be the more accurate (in cases where they differ), because it incurs only two roundings (the square root and the add) for any binary IEEE-754 type, whereas (1 + sqrt(2))*(1 + sqrt(2)) incurs three roundings (square root, add, and multiply). It should also be noted that the difference between the two will be at most one or two bits, and is probably negligible for your purposes.
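The derivation can be checked directly; this sketch assumes the platform's double is IEEE-754 binary64 (the function name is invented):

```cpp
#include <cmath>

// Evaluate both sides of (1 + sqrt 2)^2 == 3 + 2*sqrt 2 in double
// precision and compare them bit-for-bit (deliberately using ==,
// which is normally discouraged but is exactly the point here).
bool identity_holds_in_double() {
    double s   = std::sqrt(2.0);
    double lhs = (1.0 + s) * (1.0 + s);
    double rhs = 3.0 + 2.0 * s;
    return lhs == rhs;
}
```

On an IEEE-754 double this returns true, as the rounding analysis above predicts; with other precisions or arithmetic types the outcome may differ.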
|
1,996,548 | 1,996,586 | segmentation fault in overloading operator = | I just got a seg fault in overloading the assignment operator for a class FeatureRandomCounts, which has _rects as its pointer member pointing to an array of FeatureCount and size rhs._dim, and whose other date members are non-pointers:
FeatureRandomCounts & FeatureRandomCounts::operator=(const FeatureRandomCounts &rhs)
{
if (_rects) delete [] _rects;
*this = rhs; // segment fault
_rects = new FeatureCount [rhs._dim];
for (int i = 0; i < rhs._dim; i++)
{
_rects[i]=rhs._rects[i];
}
return *this;
}
Does someone have some clue? Thanks and regards!
| As mentioned, you have infinite recursion; however, to add to that, here's a foolproof way to implement op=:
struct T {
T(T const& other);
T& operator=(T copy) {
swap(*this, copy);
return *this;
}
friend void swap(T& a, T& b);
};
Write a correct copy ctor and swap, and exception safety and all edge cases are handled for you!
The copy parameter is passed by value and then changed. Any resources which the current instance must destroy are handled when copy is destroyed. This follows current recommendations and handles self-assignment cleanly.
#include <algorithm>
#include <iostream>
struct ConcreteExample {
int* p;
std::string s;
ConcreteExample(int n, char const* s) : p(new int(n)), s(s) {}
ConcreteExample(ConcreteExample const& other)
: p(new int(*other.p)), s(other.s) {}
~ConcreteExample() { delete p; }
ConcreteExample& operator=(ConcreteExample copy) {
swap(*this, copy);
return *this;
}
friend void swap(ConcreteExample& a, ConcreteExample& b) {
using std::swap;
//using boost::swap; // if available
swap(a.p, b.p); // uses ADL (when p has a different type), the whole reason
swap(a.s, b.s); // this 'method' is not really a member (so it can be used
// the same way)
}
};
int main() {
ConcreteExample a (3, "a"), b (5, "b");
std::cout << a.s << *a.p << ' ' << b.s << *b.p << '\n';
a = b;
std::cout << a.s << *a.p << ' ' << b.s << *b.p << '\n';
return 0;
}
Notice it works with either manually managed members (p) or RAII/SBRM-style members (s).
|
1,996,703 | 2,005,516 | Specializing a template by a template base class | I'm writing a template for which I'm trying to provide a specialization on a class which itself is a template class. When using it I'm actually instanciating it with derivitives of the templated class, so I have something like this:
template<typename T> struct Arg
{
static inline const size_t Size(const T* arg) { return sizeof(T); }
static inline const T* Ptr (const T* arg) { return arg; }
};
template<typename T> struct Arg<Wrap<T> >
{
static inline const size_t Size(const Wrap<T>* arg) { return sizeof(T); }
static inline const T* Ptr (const Wrap<T>* arg) { return arg.Raw(); }
};
class IntArg: public Wrap<int>
{
//some code
}
class FloatArg: public Wrap<float>
{
//some code
}
template<typename T>
void UseArg(T argument)
{
SetValues(Arg<T>::Size(argument), Arg<T>::Ptr(&argument));
}
UseArg(5);
UseArg(IntArg());
UseArg(FloatArg());
In all cases the first version is called. So basically my question is: Where did I go wrong, and how do I make it call the version which returns arg when calling UseArg(5), but the other one when calling UseArg(intArg)? Other ways to do something like this (without changing the interface of UseArg) are of course welcome too.
As a note, the example is a little simplified, meaning that in the actual code I'm wrapping some more complex things and the derived class has some actual operations.
| I think there are three approaches:
1) Specialize Arg for derived types:
template <typename T> struct Arg ...
template <> struct Arg <IntArg> ...
template <> struct Arg <FloatArg> ...
// and so on ...
This sucks, because you can't know in advance what types you will have. Of course you you can specialize once you have these types, but this has to be done by someone who implements these types.
2) Do not provide default one and specialize for basic types
template <typename T> struct Arg;
template <> struct Arg <int> ...
template <> struct Arg <float> ...
// and so on...
template <typename T> struct Arg <Wrap<T> > ...
It's not ideal too (depends on how many "basic types" you expect to use)
3) Use IsDerivedFrom trick
template<typename T> struct Wrap { typedef T type; };
class IntArg: public Wrap<int> {};
class FloatArg: public Wrap<float> {};
template<typename D>
class IsDerivedFromWrap
{
class No { };
class Yes { No no[3]; };
template <typename T>
static Yes Test( Wrap<T>* ); // not defined
static No Test( ... ); // not defined
public:
enum { Is = sizeof(Test(static_cast<D*>(0))) == sizeof(Yes) };
};
template<typename T, bool DerivedFromWrap> struct ArgDerived;
template<typename T> struct ArgDerived<T, false>
{
static inline const T* Ptr (const T* arg)
{
std::cout << "Arg::Ptr" << std::endl;
return arg;
}
};
template<typename T> struct ArgDerived<T, true>
{
static inline const typename T::type* Ptr (const T* arg)
{
std::cout << "Arg<Wrap>::Ptr" << std::endl;
return 0;
}
};
template<typename T> struct Arg : public ArgDerived<T, IsDerivedFromWrap<T>::Is> {};
template<typename T>
void UseArg(T argument)
{
Arg<T>::Ptr(&argument);
};
void test()
{
UseArg(5);
UseArg(IntArg());
UseArg(FloatArg());
}
Calling test() outputs (as I understand that's your goal):
Arg::Ptr
Arg<Wrap>::Ptr
Arg<Wrap>::Ptr
Extending it to work with more types like Wrap is possible, but messy too, but it does the trick - you don't need to do a bunch of specializations.
One thing worth mentioning is that in my code ArgDerived is specialized with
IntArg instead of Wrap<int>, so calling sizeof(T) in ArgDerived<T, true> returns the size of IntArg instead of Wrap<int>, but you can change it to sizeof(Wrap<typename T::type>) if that was your intention.
|
1,996,867 | 1,996,888 | Controlling access to output in multi-threaded applications | I have an application that creates a job queue, and then multiple threads execute the jobs. By execute them, I mean they call system() with the job string.
The problem is that output to stdout looks like the output at the bottom of the question. I would like each application run to be separated, so the output would look like:
flac 1.2.1 ...
...
...
flac 1.2.1 ...
...
...
etc.
I'm using programs I have no control over, and so cannot wrap the IO in mutexes.
How do I get the output to look like the above?
ffllaacc 11..22..11,, CCooppyyrriigghhtt ((CC)) 2000,2001,2002, 2003,220004,2
005,0200,0260,0210,0270 0 2J,o2s0h0 3C,o2a004,2005,2l0s0o6n,
007f l aJco scho mCeosa lwsiotnh
AfBl
OcL UcfTolEmaLecYs 1Nw.Oi2t .hW1 A,AR BRCSAoONpLTyUYrT.iE gL hYTt h Ni(OsC )W
i AsR2 R0fA0rN0eT,eY2 .0s 0o 1fT,th2wi0as0r 2ei,,s2 0af0nr3de, e2y 0os0uo4 f,at
2rw0ea0
e,,2 0wa0en6ld,c 2oy0mo0eu7 ta orJ eor
hd iCswotearllicsbooumnte
tiotf lruaencdd iecsrot mrceiesbr utwtaieit nhi tcA oBunSndOdiLetUriT oEcnLes
Yr. t Na OiT nyW pAceRo Rn`AdfNilTtaYic.o' n sfT.oh ri sTd yeiptsea if`lrfsel
.ea
s
'f tfwor adreet,a iandl sy.ou
a
e
welcome to redistribute it under certain conditions. Type `flac' for details.
| Rather than using system(), you could use popen().
Then, read from each child's output in the parent program, and do what you want with it (e.g. synchronize on some mutex when outputting each line).
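A rough sketch of that approach, assuming POSIX popen and C++11 (runJob is an invented name): each worker buffers its child's entire output first, then prints it under a mutex, so whole reports never interleave.

```cpp
#include <cstdio>
#include <mutex>
#include <string>

std::mutex output_mutex;

// Run one job, buffer everything it writes to stdout, then emit the
// buffered text in a single locked burst.
std::string runJob(const std::string& cmd) {
    std::string out;
    FILE* pipe = popen(cmd.c_str(), "r");
    if (!pipe)
        return out;
    char buf[256];
    while (fgets(buf, sizeof buf, pipe))
        out += buf;
    pclose(pipe);

    std::lock_guard<std::mutex> lock(output_mutex);
    std::fputs(out.c_str(), stdout);
    return out;
}
```

The tradeoff is that you see each job's output only when it finishes, rather than streaming; locking per line instead would stream but could still interleave jobs.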
|
1,997,119 | 1,997,256 | View function template instantiations | I have a simple function template:
#include <iostream>
using namespace std;
template <class T>
T GetMax (T a, T b) {
T result;
result = (a > b) ? a : b;
return (result);
}
int main () {
cout << GetMax<int>(5, 6) << endl;
cout << GetMax<long>(10, 5) << endl;
return 0;
}
The above example will generate 2 function template instantiations, one for int and another for long. Is there any g++ option to view the function template instantiations?
| You can use the nm program (part of binutils) to see the list of symbols used by your program. For example:
$ g++ test.cc -o test
$ nm test | grep GetMax
00002ef0 T __Z6GetMaxIiET_S0_S0_
00002f5c T __Z6GetMaxIiET_S0_S0_.eh
00002f17 T __Z6GetMaxIlET_S0_S0_
00002f80 T __Z6GetMaxIlET_S0_S0_.eh
I don't know why each one has two copies, one with a .eh suffix, but otherwise you can tell that this particular function was instantiated twice. If you version of nm supports the -C/--demangle flag, you can use that to get readable names:
$ nm --demangle test | grep GetMax
00002ef0 T int GetMax<int>(int, int)
00002f5c T _Z6GetMaxIiET_S0_S0_.eh
00002f17 T long GetMax<long>(long, long)
00002f80 T _Z6GetMaxIlET_S0_S0_.eh
If that option isn't supported, you can use c++filt to demangle them:
$ nm test | grep GetMax | c++filt
00002ef0 T int GetMax<int>(int, int)
00002f5c T __Z6GetMaxIiET_S0_S0_.eh
00002f17 T long GetMax<long>(long, long)
00002f80 T __Z6GetMaxIlET_S0_S0_.eh
So, you can see that GetMax was instantiated with int and long respectively.
|
1,997,778 | 1,997,788 | C++ comparing c string troubles | I have written the following code, which does not work, but the second snippet, after my change, does.
int main( int argc, char *argv[] )
{
if( argv[ 1 ] == "-i" ) //This is what does not work
//Do Something
}
But if I write the code like so this will work.
int main( int argc, char *argv[] )
{
string opti = "-i";
if( argv[ 1 ] == opti ) //This is what does work
//Do Something
}
Is it because the string class has == as an overloaded member and hence can perform this action?
Thanks in advance.
|
Is it because the string class has == as an overloaded member and hence can perform this action?
You are correct. Regular values of type char * do not have overloaded operators. To compare C strings,
if (strcmp(argv[1], "-i") == 0) {
...
}
By comparing the strings the way you did (with == directly), you are comparing the values of the pointers. Since "-i" is a compile time constant and argv[1] is something else, they will never be equal.
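A minimal illustration of both working approaches (the helper names are invented):

```cpp
#include <cstring>
#include <string>

// C style: strcmp compares the characters, not the pointer values.
bool matchesFlag(const char* arg) {
    return std::strcmp(arg, "-i") == 0;
}

// C++ style: std::string's overloaded operator== compares contents,
// which is why the second snippet in the question works.
bool matchesFlagCpp(const char* arg) {
    return std::string("-i") == arg;
}
```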
|
1,998,094 | 1,998,421 | Java JNI - associating resources allocated in C with java objects? | I want to allocate some memory in C and keep it associated with a java object instance, like this:
void configure(JNIEnv *object, jobject obj, ....) {
char *buf = new char[1024];
// associated <buf> with <obj> somehow
}
And then later free the memory when the java object gets garbage collected - I could do this by calling a JNI function from the finalize() method of the java object.
The question is, how do I associate a C pointer with the java object? Keep a long field in the object and cast the pointer to long? Is there a better way?
| Generally, if you want to transfer a pointer from C to Java, it's recommended to use long so that there are enough bits to hold the pointer value in case the platform is 64 bits.
Then, have a look at ByteBuffer.allocateDirect(), which creates a ByteBuffer instance whose memory can be shared with C. You can allocate such a direct ByteBuffer from the Java side, then pass it as a jobject to a JNI function, and inside this JNI function you use the GetDirectBufferAddress function.
Another way is to wrap a native area of memory with the NewDirectByteBuffer JNI function from the native side. It gives you a jobject you pass back to the Java side (pay attention to local and global references). Pay attention to the fact that once the direct ByteBuffer that wraps the native memory has been created, you are still responsible for managing the native memory: at some point, you will have to call delete buf; in your native code, Java won't do it for you.
|
1,998,207 | 1,998,231 | Defining global array | I have the following static array in header file:
static MyStruct_t MyStructArray[] = {
......
......
......
}
But gcc issues a warning:
warning: `MyStructArray' defined but not used
What is the correct way to handle the situation?
UPD:
Defining the array as const:
const MyStruct_t MyStructArray[] = {
......
fixes the situation. So which is the preferred way in the header, extern or const?
| Because you've declared the array static in a header file, each compilation unit (i.e. preprocessed .cpp file) gets its own copy of the array--almost certainly not what you intended, and the sure reason that you get the "defined but not used" error.
Instead, you probably want this in your header file:
extern MyStruct_t *MyStructArray;
...and then in exactly 1 .cpp file:
MyStruct_t MyStructArray[] = { ...};
|
1,998,251 | 1,998,344 | Basic C++ memory question | a friend of mine declared a new type using
typedef GLfloat vec3_t[3];
and later used vec3_t to allocate memory
vertices=new vec3_t[num_xyz* num_frames];
He freed the memory using
delete [] vertices;
Question:
1. Since vec3_t is an alias for GLfloat[3], does it mean that
vec3_t[num_xyz* num_frames]
is equivalent to
GLfloat[3][num_xyz* num_frames];
2. If the above is a 2 dimentional array, How is it supporsed to be properly deleted from memory?
thanks in advance;
from deo
| 1. A two-dimensional array can be thought of as a one-dimensional array in which each element is itself an array.
By this definition, new vec3_t[num_xyz* num_frames] is a two-dimensional array: num_xyz* num_frames elements, each of which is a GLfloat[3]. Note the index order, though: it is equivalent to GLfloat[num_xyz* num_frames][3], not GLfloat[3][num_xyz* num_frames].
2. This array is made of num_xyz* num_frames members, each taking sizeof (vec3_t) bytes.
When new is carried out, it allocates num_xyz* num_frames such memory blocks on the heap and takes note of this number, so that when delete[] is called it knows how many blocks of sizeof (vec3_t) to mark as free.
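A small sketch of that layout, assuming GLfloat is float (the helper name is invented): new vec3_t[n] yields a float (*)[3], i.e. n contiguous rows of 3 floats, and a single delete[] releases the whole block.

```cpp
#include <cstddef>

typedef float vec3_t[3];   // stand-in for GLfloat

// Allocate n vec3_t rows, fill the first component of each row,
// sum them, and release everything with one delete[].
float sumFirstComponents(std::size_t n) {
    vec3_t* v = new vec3_t[n];          // v has type float (*)[3]
    for (std::size_t i = 0; i < n; ++i)
        v[i][0] = static_cast<float>(i);
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        sum += v[i][0];
    delete[] v;                          // matches the new[]
    return sum;
}
```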
|
1,998,632 | 1,999,004 | Invoke PostgreSQL Stored Procedure using C | I am referring to http://www.postgresql.org/docs/8.1/static/libpq.html
I am trying to find an example of C/C++ code that calls a PostgreSQL stored procedure; however, I cannot find one. Can anyone point me in the right direction?
| As has previously been answered, the easiest way is to use SELECT myStoredProcedure(1,2,3). You can also use the fast-path call interface to call a function directly. See http://www.postgresql.org/docs/current/static/libpq-fastpath.html for reference. But note that if you are working on modern versions of PostgreSQL, you're likely better off using the regular interface and a prepared statement.
|
1,998,744 | 1,998,883 | Benefits of a swap function? | Browsing through some C++ questions I have often seen comments that an STL-friendly class should implement a swap function (usually as a friend). Can someone explain what benefits this brings, how the STL fits into this, and why this function should be implemented as a friend?
| For most classes the default swap is fine; however, it is not optimal in all cases. The most common example of this would be a class using the Pointer to Implementation idiom. Whereas with the default swap a large amount of memory would get copied, with a specialized swap you could speed things up significantly by only swapping the pointers.
If possible, it shouldn't be a friend of the class; however, it may need to access private data (for example, the raw pointers) which your class probably doesn't want to expose in its API.
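A sketch of that pimpl case (the Widget class is invented for this example): the friend swap exchanges only the private pointer, O(1), where a memberwise swap would copy the whole implementation.

```cpp
#include <string>
#include <utility>

class Widget {
    struct Impl { std::string big_payload; };   // imagine this is large
    Impl* p_;
public:
    explicit Widget(const std::string& s) : p_(new Impl{s}) {}
    ~Widget() { delete p_; }
    Widget(const Widget&) = delete;             // kept minimal for the sketch
    Widget& operator=(const Widget&) = delete;

    const std::string& payload() const { return p_->big_payload; }

    // friend so it can reach the private pointer; swapping two pointers
    // is O(1) regardless of how big Impl is.
    friend void swap(Widget& a, Widget& b) {
        std::swap(a.p_, b.p_);
    }
};
```

This is also why generic code calls swap unqualified: argument-dependent lookup then finds this friend instead of the generic std::swap.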
|
1,998,752 | 2,000,646 | memset() or value initialization to zero out a struct? | In Win32 API programming it's typical to use C structs with multiple fields. Usually only a couple of them have meaningful values and all others have to be zeroed out. This can be achieved in either of the two ways:
STRUCT theStruct;
memset( &theStruct, 0, sizeof( STRUCT ) );
or
STRUCT theStruct = {};
The second variant looks cleaner - it's a one-liner, it doesn't have any parameters that could be mistyped and lead to an error being planted.
Does it have any drawbacks compared to the first variant? Which variant to use and why?
| Those two constructs are very different in meaning. The first one uses the memset function, which is intended to set a buffer of memory to a certain value. The second initializes an object. Let me explain it with a bit of code:
Let's assume you have a structure that has members only of POD types ("Plain Old Data" - see What are POD types in C++?)
struct POD_OnlyStruct
{
int a;
char b;
};
POD_OnlyStruct t = {}; // OK
POD_OnlyStruct t;
memset(&t, 0, sizeof t); // OK as well
In this case writing a POD_OnlyStruct t = {} or POD_OnlyStruct t; memset(&t, 0, sizeof t) doesn't make much difference, as the only difference we have here is the alignment bytes being set to zero-value in case of memset used. Since you don't have access to those bytes normally, there's no difference for you.
On the other hand, since you've tagged your question as C++, let's try another example, with member types different from POD:
struct TestStruct
{
int a;
std::string b;
};
TestStruct t = {}; // OK
{
TestStruct t1;
memset(&t1, 0, sizeof t1); // ruins member 'b' of our struct
} // Application crashes here
In this case using an expression like TestStruct t = {} is good, while using memset on it will lead to a crash. Here's what happens if you use memset - an object of type TestStruct is created, thus creating an object of type std::string, since it's a member of our structure. Next, memset sets the memory where the object b was located to a certain value, say zero. Now, once our TestStruct object goes out of scope, it is going to be destroyed, and when the turn comes to its member std::string b you'll see a crash, as all of that object's internal structures were ruined by the memset.
So, the reality is, those things are very different, and although you sometimes need to memset a whole structure to zeroes in certain cases, it's always important to make sure you understand what you're doing, and not make a mistake as in our second example.
My vote - use memset on objects only if it is required, and use the default initialization x = {} in all other cases.
|
1,998,926 | 1,999,957 | mfc autosuggest textbox (like in Windows Start->Run dialog) | In Windows XP if you click Start, Run (or Windows-key + R) you get a little dialog for running things directly. If you start typing, a resizable scroll-list pops up underneath the edit-box.
I want something similar, so when a user is typing in a name to an edit-box, a list will suddenly appear if suggestions can be made. But I don't know if I need to write it all myself, or can use some existing controls/code from somewhere.
| I'm currently looking at this one: http://www.codeproject.com/KB/combobox/akautocomplete.aspx Will try to remember to post my findings.
|
1,999,049 | 1,999,072 | c++ identify current thread in function? | Let's say I have a singleton object with a static function:
static int MySingletonObject::getInt()
now i would like to return a different int depending on which worker thread (MFC threading) is calling the function.
I know that I can pass parameters to the thread function when creating the thread. But is there a way to identify the thread without info from those parameters?
Thanks!
| You can call GetCurrentThreadId() - will return an integer identifier - or GetCurrentThread() - will return a handle which can be cast to an integer identifier - from any thread - those values will be unique for any thread within the process.
|
1,999,150 | 1,999,210 | Is it possible to have identically named source files in one visual studio c++ project? | I'm working on a static library project for a c++ course I'm taking. The teacher insists that we define only one function per source file, grouping files/functions belonging to the same class in subdirectories for each class. This results in a structure like:
MyClass
\MyClass.cc (constructor)
\functionForMyClass.cc
\anotherFunctionForMyClass.cc
OtherClass
\OtherClass.cc (constructor)
Whether this is good practice or not is something I'd not like to discuss, since I'm simply obliged to organize my project in this manner.
I'm working in visual studio 2008, and somehow got strange link errors when using an identically named function (and thus filename) in two classes. This appears to be caused by the fact that visual studio puts all .obj files (one for each source file) in one intermediate directory, overwriting earlier generated object files when compiling identically named source files.
This could be solved by putting the object files in subdirectories based on the relative path of the input file. Visual studio allows one to configure the names of object files it generates and has macros to use in there, but there appears to be no macro for 'relative path of input file'.
So, is there some way to get this to work? If not, is using one project for each class the best work-around?
| You are right, by default all object files are put into the same directory and their filenames are based on the source file name. The only solution I can think of is to change conflicting file's output file path in here:
Project Properties-C/C++-Output Files-Object File Name http://img37.imageshack.us/img37/3695/outputfile.png
PS. It sounds like the lecturer has a crappy (probably written by the lecturer himself) automatic code verifier that imposes this restriction. To get extra marks, offer to rewrite the parser so it works with a normal/sane/non-weird project layout.
|
1,999,348 | 1,999,371 | Pointers or references for dynamically allocated members that always exist? | I have a class CContainer that has some members CMemberX, CMemberY, which are independent of each other and other CClientA, CClientB classes that use CContainer.
#include "MemberX.h"
#include "MemberY.h"
class CContainer
{
public:
CMemberX & GetX() const { return m_x; }
CMemberY & GetY() const { return m_y; }
private:
CMemberX m_x;
CMemberY m_y;
};
I want to avoid having to recompile all CClient classes when modifying one of the CMember classes using forward declarations and dynamic allocation of m_x and m_y.
Initially, I made the members pointers:
// Container.h
class CMemberX;
class CMemberY;
class CContainer
{
public:
CContainer();
~CContainer();
CMemberX & GetX() const { ASSERT(m_pX != NULL); return *m_pX; }
CMemberY & GetY() const { ASSERT(m_pY != NULL); return *m_pY; }
private:
CMemberX* m_pX;
CMemberY* m_pY;
};
// Container.cpp
#include "Container.h"
#include "MemberX.h"
#include "MemberY.h"
// Allocate members on heap
CContainer::CContainer() : m_pX(new CMemberX()), m_pY(new CMemberY()) {}
CContainer::~CContainer() { delete m_pX; delete m_pY; }
Then I thought, that I could as well use references instead of pointers, so it looks more like the original code:
// Container.h
class CMemberX;
class CMemberY;
class CContainer
{
public:
CContainer();
~CContainer();
CMemberX & GetX() const { return m_x; }
CMemberY & GetY() const { return m_y; }
private:
CMemberX & m_x;
CMemberY & m_y;
};
// Container.cpp
#include "Container.h"
#include "MemberX.h"
#include "MemberY.h"
// Allocate members on heap
CContainer::CContainer() : m_x(*new CMemberX()), m_y(*new CMemberY()) {}
CContainer::~CContainer() { delete &m_x; delete &m_y; }
What I don't like about the pointer members is that it looks like the pointers could be NULL or the objects be replaced at runtime, which is not the case.
What I don't like about the references is that the code in the CTor and DTor looks a bit hacky.
Which approach is preferable? Is there a better solution?
Note regarding copying/assigning: Instances of the CContainer class will not under any circumstances be copied or assigned to each other.
| I think that's what the const variables are for:
CMember * const m_x;
Cannot change m_x after initialization...
|
1,999,706 | 1,999,817 | C++/POSIX how to get a millisecond time-stamp the most efficient way? | I'm using a open-source library for i2c bus actions. This library frequently uses a function to obtain an actual time-stamp with millisecond resolution.
Example Call:
nowtime = timer_nowtime();
while ((i2c_CheckBit(dev) == true) && ((timer_nowtime() - nowtime) < I2C_TIMEOUT));
The application using this i2c library uses a lot of CPU capacity. I figured out that the running program spends most of its time calling the function timer_nowtime().
The original function:
unsigned long timer_nowtime(void) {
static bool usetimer = false;
static unsigned long long inittime;
struct tms cputime;
if (usetimer == false)
{
inittime = (unsigned long long)times(&cputime);
usetimer = true;
}
return (unsigned long)((times(&cputime) - inittime)*1000UL/sysconf(_SC_CLK_TCK));
}
My aim now is, to improve the efficiency of this function. I tried it this way:
struct timespec systemtime;
clock_gettime(CLOCK_REALTIME, &systemtime);
//convert the to milliseconds timestamp
// incorrect way, because (1 / 1000000UL) always returns 0 -> thanks Pace
//return (unsigned long) ( (systemtime.tv_sec * 1000UL) + (systemtime.tv_nsec
// * (1 / 1000000UL)));
return (unsigned long) ((systemtime.tv_sec * 1000UL)
+ (systemtime.tv_nsec / 1000000UL));
Unfortunately, I can't declare this function inline (no clue why).
Which way is more efficient to obtain an actual timestamp with ms resolution?
I'm sure there is a more performant way to do so. Any suggestions?
thanks.
| It's clear that your example call spends most of its CPU time in the timer_nowtime() function. You are polling, and the loop eats your CPU time. You could exchange the timer function for a faster alternative and so achieve more loop iterations, but the program will still spend most of its CPU time in that function! You will not use less CPU time merely by changing your timer function.
You may change your loop and introduce wait times - but only if it makes sense in your application, e.g.:
start = timer_nowtime();
while( i2c_CheckBit(dev) ) {
now = timer_nowtime();
diff = now - start;
if( diff >= I2C_TIMEOUT ) break;
else if( diff > SOME_THRESHOLD ) usleep( 1000*std::max(1,I2C_TIMEOUT-diff-SOME_SMALL_NUMBER_1_TO_10_MAYBE) );
}
The timer: I think gettimeofday() would be a good decision, it has high precision and is available in most (all?) Unices.
|
1,999,967 | 2,000,033 | Odd socket() error -- returns -1, but errno=ERROR_SUCCESS | I'm developing a dedicated game server on a linux machine, in C/C++ (mixed). I have the following snippet of code:
int sockfd=socket(AF_INET, SOCK_DGRAM, 0);
if(sockfd==-1)
{
int err=errno;
fprintf(stderr,"%s",strerror(err));
exit(1);
}
My problem here, is that socket is returning -1 (implying a failure) and the error string is being printed, but it is "Success" (ERROR_SUCCESS).
Other notes:
I am requesting a socket on a port >1024 (out of context, but thought I'd mention)
I'm executing the app as the super user
| I feel incredibly stupid. Carefully looking over my code, on my dev-computer shows:
if(sockfd==-1);
...
|
2,001,141 | 2,001,169 | Why doesn't g++ link with the dynamic library I create? | I've been trying to make some applications which all rely on the same library, and dynamic libraries were my first thought: So I began writing the "Library":
/* ThinFS.h */
class FileSystem {
public:
static void create_container(string file_name); //Creates a new container
};
/* ThinFS.cpp */
#include "ThinFS.h"
void FileSystem::create_container(string file_name) {
cout<<"Seems like I am going to create a new file called "<<file_name.c_str()<<endl;
}
I then compile the "Library"
g++ -shared -fPIC FileSystem.cpp -o ThinFS.o
I then quickly wrote a file that uses the Library:
#include "ThinFS.h"
int main() {
FileSystem::create_container("foo");
return (42);
}
I then tried to compile that with
g++ main.cpp -L. -lThinFS
But it won't compile with the following error:
/usr/bin/ld: cannot find -lThinFS
collect2: ld returned 1 exit status
I think I'm missing something very obvious, please help me :)
| -lfoo looks for a library called libfoo.a (static) or libfoo.so (shared) in the current library path, so to create the library, you need to use g++ -shared -fPIC FileSystem.cpp -o libThinFS.so
|
2,001,203 | 2,001,371 | Referencing an unmanaged C++ project within another unmanaged C++ project in Visual Studio 2008 | I am working on a neural network project that requires me to work with C++. I am working with the Flood Neural Network library. I am trying to use a neural network library in an unmanaged C++ project that I am developing. My goal is to create an instance of a class object within the Flood library from within another project.
There is plenty of documentation online regarding how to reference an unmanaged C++ project from within a C# project, but there is not enough information on how to reference one C++ project within another. Similar to how I would do it in C#, I added the Flood project as a reference in my other project, but I have tried all sorts of techniques to work with the object. I have attempted to use the #include directive to reference the header file, but that gives me errors stating that I need to implement the methods declared in the header file.
How to add a reference in unmanaged C++ and work with the class objects?
| Yes. You need to do two things:
#include the respective header files, as you did
Add a reference (Visual C++ supports two types, "dependencies" which are outdated and should not be used anymore, and "references" which are the correct ones). Use them to reference the other project, which must be a part of your solution. Meaning, in this case you must be able to COMPILE the other project.
Alternatively, if you do not have the source code, or you do not wish to compile the 3rd-party code for any other reason, you may also reference a compiled binary. The best way to do it is pragma comment lib. If this is what you need, please comment and I will edit my response.
|
2,001,215 | 2,001,423 | STL algorithms on containers of boost::function objects | I have the following code that uses a for loop and I would like to use transform, or at least for_each instead, but I can't see how.
typedef std::list<boost::function<void (void)> > CallbackList;
CallbackList callbacks_;
//...
for(OptionsMap::const_iterator itr = options.begin(); itr != options.end(); ++itr)
{
callbacks_.push_back(boost::bind(&ClassOutput::write_option_,this,*itr));
}
Later on in the code, I actually want to call this collection of nullary function objects. I'm using a for loop here as well, and it seems like I should be able to use for_each somehow.
for(CallbackList::iterator itr = callbacks_.begin(); itr != callbacks_.end(); ++itr)
{
(*itr)();
}
| To do all this in a single transform call, I think you need a functor which calls boost::bind. That's something I've never attempted. Would you settle for something like this (untested)?
struct GetFunc {
ClassOutput *obj;
boost::function<void(void) > operator()(const OptionsMap::value_type &v) {
return boost::bind(&ClassOutput::write_option_, obj, v);
}
GetFunc(ClassOutput *obj) : obj(obj) {}
};
transform(options.begin(), options.end(), back_inserter(callbacks_), GetFunc(this));
In C++0x you can use a lambda instead of the functor class:
transform(options.begin(), options.end(), back_inserter(callbacks_),
[this](const OptionsMap::value_type &v) {
return boost::bind(&ClassOutput::write_option_, this, v);
}
);
|
2,001,286 | 2,001,307 | const char* 's in C++ | How does string expressions in C++ work?
Consider:
#include <iostream>
using namespace std;
int main(int argc, char *argv[]){
const char *tmp="hey";
delete [] tmp;
return 0;
}
Where and how is the "hey" expression stored and why is there segmentation fault when I attempt to delete it?
| Where it's stored is left to the compiler to decide in this (somewhat special) case. However, it doesn't really matter to you - if you don't allocate memory with new, it's not very nice to attempt to deallocate it with delete. You cannot delete memory allocated in the way you have allocated it.
If you want to control the deallocation of that resource, you should use a std::string, or allocate a buffer using malloc().
|
2,001,354 | 2,003,522 | How can MATLAB function wavread() be implemented in C++? | How do I implement the MATLAB function wavread in C++?
It means read a WAV file into a vector array.
| If you want to do it in C++, there are two options. Use a library, or write your own function that can extract information from WAV files. Several C/C++ libraries such as Juce, SDL etc. have functions/classes that can read WAV files. This is probably total overkill for your case. If you want a simple(ish) library specialised to read audio files, libsndfile sounds (pun not intended) like a good bet. If you must roll your own implementation, a description of the WAV format in C can be found here (Warning: The link leads to a page that allows you to download a zipped MS Word file).
|
2,001,604 | 2,001,784 | Class member function as callback using boost::bind and boost::function | I'm working through setting up a member function as a callback for a C-library that I'm using. The C-library sets up callbacks like this:
typedef int (*functionPointer_t)(myType1_t*, myType2_t*, myType3_t*);
setCallback(param1, param2, functionPointer, param4)
I would like to use boost::bind (if possible) to pass in the function pointer. I would prefer that the function being pointed to was a member of the instantiated class, not a static member. E.g.
Class A {
public:
A();
protected:
int myCallback(myType1_t*, myType2_t*, myType3_t*); //aka functionPointer_t
}
Can this be done using boost::bind and boost::function? Per How can I pass a class member function as a callback? (the 3rd answer) it appears that I could declare the following (somewhere, or as a typedef):
boost::function<int (A*, myType1_t*, myType2_t*, myType3_t*)> myCallbackFunction
And then somewhere in A (the ctor) call boost::bind on that type, and pass it into the C-library call.
Is this possible, or am I off base? Thanks much.
| No. Functor types like boost::function don't convert to function pointers for use with C callback mechanisms.
However, most C callback mechanisms have some kind of token mechanism, so your callback function (which is static) has some kind of context information. You can use this to write a wrapper class which maps these tokens to functor objects, and passes execution along to the right one:
class CallbackManager {
public:
    typedef boost::function<int (type1*, type2*, type3*)> callback;
    static void setCallback(CallbackManager::callback cb)
    {
        // assumes the C library's setCallback hands back a context token
        void *token = ::setCallback(staticCallback);
        callbacks[token] = cb;
    }
    static int staticCallback(void* token, type1* a, type2* b, type3* c)
    { return callbacks[token](a, b, c); }
private:
    static std::map<void*, callback> callbacks;
};
|
2,001,913 | 2,002,038 | C++0x memory model and speculative loads/stores | So I was reading about the memory model that is part of the upcoming C++0x standard. However, I'm a bit confused about some of the restrictions for what the compiler is allowed to do, specifically about speculative loads and stores.
To start with, some of the relevant stuff:
Hans Boehm's pages about threads and the memory model in C++0x
Boehm, "Threads Cannot be Implemented as a Library"
Boehm and Adve, "Foundations of the C++ Concurrency Memory Model"
Sutter, "Prism: A Principle-Based Sequential Memory Model for Microsoft Native Code Platforms", N2197
Boehm, "Concurrency memory model compiler consequences", N2338
Now, the basic idea is essentially "Sequential Consistency for Data-Race-Free Programs", which seems to be a decent compromise between ease of programming and allowing the compiler and hardware opportunities to optimize. A data race is defined to occur if two accesses to the same memory location by different threads are not ordered, at least one of them stores to the memory location, and at least one of them is not a synchronization action. It implies that all read/write access to shared data must be via some synchronization mechanism, such as mutexes or operations on atomic variables (well, it is possible to operate on the atomic variables with relaxed memory ordering for experts only, but the default provides for sequential consistency).
In light of this, I'm confused about the restrictions about spurious or speculative loads/stores on ordinary shared variables. For instance, in N2338 we have the example
switch (y) {
case 0: x = 17; w = 1; break;
case 1: x = 17; w = 3; break;
case 2: w = 9; break;
case 3: x = 17; w = 1; break;
case 4: x = 17; w = 3; break;
case 5: x = 17; w = 9; break;
default: x = 17; w = 42; break;
}
which the compiler is not allowed to transform into
tmp = x; x = 17;
switch (y) {
case 0: w = 1; break;
case 1: w = 3; break;
case 2: x = tmp; w = 9; break;
case 3: w = 1; break;
case 4: w = 3; break;
case 5: w = 9; break;
default: w = 42; break;
}
since if y == 2 there is a spurious write to x which could be a problem if another thread were concurrently updating x. But why is this a problem? This is a data race, which is prohibited anyway; in this case, the compiler just makes it worse by writing to x twice, but even a single write would be enough for a data race, no? I.e. a proper C++0x program would need to synchronize access to x, in which case there would no longer be a data race, and the spurious store wouldn't be a problem either?
I'm similarly confused about Example 3.1.3 in N2197 and some of the other examples as well, but maybe an explanation for the above issue would explain that too.
EDIT: The Answer:
The reason why speculative stores are a problem is that in the switch statement example above, the programmer might have elected to conditionally acquire the lock protecting x only if y != 2. Hence the speculative store might introduce a data race that was not there in the original code, and the transformation is thus forbidden. The same argument applies to Example 3.1.3 in N2197 as well.
| I'm not familiar with all the stuff you refer to, but notice that in the y==2 case, in the first bit of code, x is not written to at all (or read, for that matter). In the second bit of code, it is written twice. This is more of a difference than just writing once vs. writing twice (at least, it is in existing threading models such as pthreads). Also, storing a value which would not otherwise be stored at all is more of a difference than just storing once vs. storing twice. For both these reasons, you don't want compilers just replacing a no-op with tmp = x; x = 17; x = tmp;.
Suppose thread A wants to assume that no other thread modifies x. It's reasonable to want it to be allowed to expect that if y is 2, and it writes a value to x, then reads it back, it will get back the value it has written. But if thread B is concurrently executing your second bit of code, then thread A could write to x and later read it, and get back the original value, because thread B saved "before" the write and restored "after" it. Or it could get back 17, because thread B stored 17 "after" the write, and stored tmp back again "after" thread A reads. Thread A can do whatever synchronisation it likes, and it won't help, because thread B isn't synchronised. The reason it's not synchronised (in the y==2 case) is that it's not using x. So the concept of whether a particular bit of code "uses x" is important to the threading model, which means compilers can't be allowed to change code to use x when it "shouldn't".
In short, if the transformation you propose were allowed, introducing a spurious write, then it would never be possible to analyse a bit of code and conclude that it does not modify x (or any other memory location). There are a number of convenient idioms which would therefore be impossible, such as sharing immutable data between threads without synchronisation.
So, although I'm not familiar with C++0x's definition of "data race", I assume that it includes some conditions where programmers are allowed to assume that an object is not written to, and that this transformation would violate those conditions. I speculate that if y==2, then your original code, together with concurrent code: x = 42; x = 1; z = x in another thread, is not defined to be a data race. Or at least if it is a data race, it's not one which permits z to end up with value either 17, or 42.
Consider that in this program, the value 2 in y might be used to indicate, "there are other threads running: don't modify x, because we aren't synchronised here, so that would introduce a data race". Perhaps the reason there's no synchronisation at all, is that in all other cases of y, there are no other threads running with access to x. It seems reasonable to me that C++0x would want to support code like this:
if (single_threaded) {
x = 17;
} else {
sendMessageThatSafelySetsXTo(17);
}
Clearly then, you don't want that transformed to:
tmp = x;
x = 17;
if (!single_threaded) {
x = tmp;
sendMessageThatSafelySetsXTo(17);
}
Which is basically the same transformation as in your example, but with only 2 cases, instead of there being enough to make it look like a good code-size optimisation.
|
2,002,282 | 2,002,342 | C++ std::queue::pop() calls destructor. What of pointer types? | I have a std::queue that is wrapped as a templated class to make a thread-safe queue. I have two versions of this class: one that stores value types, one that stores pointer types.
For the pointer type, I'm having trouble deleting the elements of the queue on destruction. The reason is that I don't know a way to remove the items from the queue safely.
This reference states (vacuously, so I guess it doesn't actually STATE it) that the only way to remove elements from the queue is to call pop(). The reference also says that pop() calls the destructor for the item.
Well, this causes problems with my pointer types because they may or may not actually point to aggregates. If one of them points to an aggregate, they all will, but because the wrapper is templated, there is no guarantee which type (aggregated or non-aggregated) we are dealing with.
So, when pop() calls the destructor, what happens? How do I ensure that everything is being removed and the memory deallocation properly?
Lastly, my solution is using an older version of GCC for ARM9. I don't have control over this. I understand that there are libraries that have smart pointers and containers that would assist here, but they are off-limits for me.
| Online sources are worth what you pay for them - get a proper reference like Josuttis's book. pop() does not "call the destructor" - it simply removes an element from the queue adaptor's underlying representation (by default a std::deque) by calling pop_front() on it. If the thing being popped has a destructor, it will be used when the popped object goes out of scope, but the queue class has nothing to do with it.
|
2,002,509 | 2,002,536 | Is it possible to refresh two lines of text at once using something like a CR? (C++) | Right now, I have a console application I'm working on, which is supposed to display and update information to the console at a given interval. The problem I'm having is that with a carriage return, I can only update one line of text at a time. If I use a newline, the old line can no longer be updated using a carriage return.
What can I do here?
| You might be able to find a curses library variant that works on your platform.
|
2,002,567 | 2,002,585 | Array of pointers member, is it initialized? | If I have a
class A
{
private:
Widget* widgets[5];
};
Is it guaranteed that all pointers are NULL, or do I need to initialize them in the constructor? Is it true for all compilers?
Thanks.
| It depends on the platform and how you allocate or declare instances of A. If it's on the stack or heap, you need to explicitly initialize it. If it's with placement new and a custom allocator that initializes memory to zero or you declare an instance at file scope AND the platform has the null pointer constant be bitwise zero, you don't. Otherwise, you should.
EDIT: I suppose I should have stated the obvious which was "don't assume that this happens".
Although in reality the answer is "it depends on the platform". The standard only tells you what happens when you initialize explicitly or at file scope. Otherwise, it is easiest to assume that you are in an environment that will do the exact opposite of what you want it to do.
And if you really need to know (for educational or optimizational purposes), consult the documentation and figure out what you can rely on for that platform.
|
2,002,752 | 2,002,972 | What will happen to namespace tr1 when c++ xx is approved? | I'm writing some stuff using the tr1 namespace in VS2008. What will happen when C++xx becomes ratified? Has this happened before with other C++ revisions? Will the tr1 stuff still work or will I have to change all of my include? I realize that I'm making the very large assumption that this ratification will someday occur. I know that most likely none of you work for MS or contribute to GCC, but if you have experience with these kinds of changes, I would appreciate the advice.
| std::tr1 will become part of std in C++1x (std::tr1::shared_ptr becomes std::shared_ptr, etc). std::tr1 will continue to exist as long as that compiler claims to implement TR1. At some point your compiler may drop that claim, and drop std::tr1 as a result. This probably will never happen.
std::tr1 has already been "copied" into namespace std in Visual Studio 2010 Beta (via a using directive)
|
2,002,792 | 2,068,395 | OpenGL: Enabling multisampling draws messed up edges for polygons at high zoom levels | When im using this following code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 6);
and then i enable multisampling, i notice that my program no longer cares about the max mip level.
Edit: It renders the last mip levels as well; that is the problem, I don't want them rendered.
Edit3:
I tested and confirmed that it doesn't forget the mip limits at all, so it does follow my GL_TEXTURE_MAX_LEVEL setting. ...So the problem isn't mipmap related, I guess...
Edit2: Screenshots: this is the world map zoomed out a lot, using a low angle to show the effect in the worst possible way; there is also a water plane rendered under the map, so there is no possibility of taking black pixels from anywhere other than the map textures:
alt text http://img511.imageshack.us/img511/6635/multisamplingtexturelim.png
Edit4: All those pics should look like the top right corner pic (just smoother edges depending on multisampling). But apparently theres something horribly wrong in my code. I have to use mipmaps, the mipmaps arent the problem, they work perfectly.
What im doing wrong, or how can i fix this?
| Ok. So the problem was not TEXTURE_MAX_LEVEL after all. Funny how a simple test helped figure that out.
I had 2 theories that were about the LOD being picked differently, and both of those seem to be disproved by the solid color test.
Onto a third theory then. If I understand your scene correctly, you have a model that's using a texture atlas, and what we're observing is that some polygons that should fetch from a specific item of the atlas actually fetch from a different one. Is that right?
This can be explained by the fact that a multisampled fragment usually gets sampled at the middle of the pixel, even when that center is not inside the triangle that generated the sample. See the bottom of this page for an illustration.
The usual way to get around that is called centroid sampling (this page has nice illustrations of the issue too). It forces the sampling to bring back the sampling point inside the triangle.
Now the bad news: I'm not aware of any way to turn on centroid filtering outside of the programmable pipeline, and you're not using it. Do you think you want to switch to get access to that feature ?
Edit to add:
Also, not using texture atlases would be a way to work around this. The reason it is so visible is because you start fetching from another part of the atlas with the "out-of-triangle" sampling pattern.
|
2,002,947 | 2,004,727 | Why is my call to TransmitFile performing poorly compared to other methods? | First, a bit of background --
I am writing a basic FTP server for a personal project. I'm currently working on retrieving files. My current implementation looks like this:
HANDLE hFile = CreateFile("file.tar.gz", GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
TransmitFile(sd, hFile, fileSize, 65536, NULL, NULL, TF_USE_KERNEL_APC | TF_WRITE_BEHIND);
CloseHandle(hFile);
It works, but the performance is questionable. At first, the transfer starts out at about 10 MB/s, but slowly decreases to about 3 MB/s. Using FileZilla Server and IIS FTP, it maintains consistent >30 MB/s transfer speeds. Therefore, I know it's not working to its full capacity. I've tried tinkering with the buffer size but it hasn't improved the performance.
If anyone has any suggestions for a more efficient way to transfer the file, please let me know. The API documentation seems to suggest that TransmitFile was optimized for my application, which is why I chose to use it.
[Please excuse my lack of Windows API knowledge.]
Also, all of the sockets are opened on localhost.
| Have you increased the socket's TCP buffer size (and potentially the TCP window size) by setting the the SO_SNDBUF and SO_RCVBUF socket options before you start transmitting? (do it after bind and before connecting)?
From the sound of the problem, faster start which then slows down, I'd guess at it being a TCP flow control issue (probably due to the TCP window being smaller than you'd like). It would be useful to look at the data flow using Wireshark (ideally before and after the change that I suggest above).
See:
http://msdn.microsoft.com/en-us/library/ms819736.aspx - TCP window tuning
http://msdn.microsoft.com/en-us/library/ms740476(VS.85).aspx - setsockopt
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Flow_control - TCP flow
control
http://www.serverframework.com/asynchronousevents/2011/06/tcp-flow-control-and-asynchronous-writes.html - beware TCP flow control if using async networking APIs.
|
2,003,255 | 2,004,027 | Why the std::swap of Bits in a std::bitset instance doesn't work? | In the following example i expected the swap of the bits. Instead the second bit becomes overwritten, but why and how could i achieve the expected behavior?
#include <iostream>
#include <string>
#include <bitset>
#include <algorithm>
using namespace std;
int main()
{
bitset<2> test(string("10"));
cout << test; // Prints "10"
swap(test[0], test[1]);
cout << test; // Prints "11", why not "01"?
}
| This is pure nasty. First we have to look at the declaration of swap:
template<class T>
void swap(T &left, T &right);
Now, operator[]() on bitset has two overloads:
bool operator[](size_type _Pos) const;
reference operator[](size_type _Pos);
Here reference is bitset::reference, a nested class in bitset that effectively acts as a proxy reference to the one of the underlying bits. What it encapsulates is the bitset and a position in the bitset. Because of the declaration of swap, the second overload is chosen and we are swapping two bitset::references. Now here's where it gets nasty. Let's look at a typical implementation of swap:
template<class T> void swap(T &left, T &right) {
T temp = left;
left = right;
right = temp;
}
The problem is that left and right are both references to a bitset::reference. They have the same underlying data (because they are proxies, meaning they both point to the same bitset!); they just encapsulate different positions in that bitset. Thus, think of it like this: left is position 0 in some bitset, right is position 1 in some bitset, and it is the same bitset in both cases! Let's forever refer to this bitset as BS (chosen intentionally).
So,
T temp = left;
says that temp is position 0 in BS.
left = right;
sets position 0 in left to position 1 in BS (which simultaneously changes position 0 in temp!)
right = temp;
sets position 1 in right to position 0 in BS (which was just set to position 1 in BS!). So at the end of this mess we have that position 0 is whatever position 1 was, and position 1 is unchanged! Now, because position 0 is the LSB and position 1 is the MSB, we have that "10" becomes "11". Ugly.
You can get around this with a template specialization:
namespace std {
template<>
void swap<bitset<2>::reference>(
bitset<2>::reference &left,
bitset<2>::reference &right
) {
bool temp = (bool)left;
left = (bool)right;
right = (bool)temp;
}
}
Then:
int main() {
bitset<2> test(string("10"));
cout << test; // Prints "10"
swap(test[0], test[1]);
cout << test; // Prints "01", hallelujah!
}
|
2,003,266 | 2,004,287 | C++ explicit constructors and iterators | Consider the following code:
#include <vector>
struct A
{
explicit A(int i_) : i(i_) {}
int i;
};
int main()
{
std::vector<int> ints;
std::vector<A> As(ints.begin(), ints.end());
}
Should the above compile? My feeling is that it should not, due to the constructor being marked explicit.
Microsoft Visual C++ agrees, giving a clear error message: cannot convert from 'int' to 'const A'; Constructor for struct 'A' is declared 'explicit'
However, using Comeau's online compiler, the code compiles successfully.
Which is correct?
Edit:
Interestingly, changing vector to set (after adding an operator < to A) causes both compilers to give an error.
However, changing vector<int> to map<int, int> and vector<A> to map<A, A> causes both compilers to accept the code!
| I looked through GCC's STL implementation and it should have similar behavior. Here's why.
Elements of a vector are initialized by a generic function template which accepts any two types X and V and calls new( p ) X( v ) where v is a V (I'm paraphrasing a bit). This allows explicit conversion.
Elements of a set or map are initialized by a private member function of _tree<T,…> which specifically expects a T const & to be passed in. This member function isn't a template (beyond being a member of a template), so if the initial value can't be implicitly converted to T, the call fails. (Again I'm simplifying the code.)
The standard doesn't require that explicit conversion work or that implicit conversion not work when initializing a container with a range. It simply says that the range is copied into the container. Definitely ambiguous for your purpose.
It's surprising that such ambiguity exists, considering how they've already refined the standard for problems like the one I had a couple of weeks ago.
|
2,003,395 | 2,188,476 | Finding boost::shared_ptr cyclic references | Is there any tips/tricks for finding cyclic references of shared_ptr's?
This is an example of what I'm trying to find - unfortunately I can't seem to find the loop in my code.
struct A
{
boost::shared_ptr<C> anC;
};
struct B
{
boost::shared_ptr<A> anA;
};
struct C
{
boost::shared_ptr<B> anB;
};
| I used a combination of the above posts. I used a memory profiler, came up with some suspected cycles and broke those by using weak_ptr's.
I've used the built-in CRT memory leak detection before, but unfortunately in my case there are several static singletons that don't get deallocated until module unload, which I believe is after the CRT detector's lifecycle. Basically it gives a lot of spew that consists of false positives.
|
2,003,506 | 2,055,958 | How to build a boost dependent project using regular makefiles? | I'm working on a C++ project, and we recently needed to include a small part of Boost in it. The Boost part is really minimal (Boost.Python), thus using bjam to build everything looks like overkill (besides, everyone working on the project feels comfortable with make, and has no knowledge of jam).
I have made quite some tests already, but I can't find a way to include the aforementioned library in my makefile and make the build successful.
All your help is deeply appreciated. :)
| I had the same problem and found a solution in this tutorial. You 1) need to compile the source into an object file with the -fPIC gcc option, and 2) compile this object into a library with the -shared gcc option. Of course you also have to link against the Boost.Python library (generally -lboost_python; however on my Debian system it is, for example, -lboost_python-mt-py25, and I also have to add -I/usr/include/python2.5). In my makefile I end up doing those two steps in one command. See also p. 13 of this presentation.
|
2,003,756 | 2,010,979 | nginx not forwarding POST to @fallback | I wrote a high-performance HTTP event server in C++ and I want to make it work flawlessly with nginx and PHP-FPM (fastcgi). This is a snippet of my nginx configuration.
location ~ \.eve$ {
gzip off;
proxy_redirect off;
proxy_buffering off;
proxy_pass http://127.0.0.1:9001;
proxy_intercept_errors on;
error_page 505 = @fallback; # this is actually BACKEND.php
}
My event server returns 505 errors if there IS an event, otherwise it hangs, and eventually returns a "NO STATE CHANGE" directive that I handle with JS or what have you (this is basically comet). The point is that I would like nginx to catch the 505 error and forward that request to PHP so PHP can handle the event accordingly. My server is basically just an event hub, allowing many many users to be able to connect and see if there are any new events. If there IS an event, PHP handles the event distribution, including permissions and other volatile stuff.
The problem is that nginx isn't passing the POST (or GET) variables that were passed to *.eve, to BACKEND.php. Now I presume this is by design (due to the error_page directive), but I figured that there must be some way of making it work. My server runs on 9001, PHP-FPM runs on 9000. Any ideas?
| I fixed the problem by simply rebuilding the most recent version of nginx. The config, as well as the POST and GET forwarding works perfectly. Weirdness.
|
2,003,895 | 2,003,920 | In C++ what causes an assignment to evaluate as true or false when used in a control structure? | So can someone help me grasp all the (or most of the relevant) situations of an assignment inside something like an if(...) or while(...), etc?
What I mean is like:
if(a = b)
or
while(a = &c)
{
}
etc...
When will it evaluate as true, and when will it evaluate as false? Does this change at all depending on the types used in the assignment? What about when there are pointers involved?
Thanks.
| In C++ an assignment evaluates to the value being assigned:
int c = 5; // evaluates to 5, as you can see if you print it out
float pi = CalculatePi(); // evaluates to the result
// of the call to the CalculatePi function
So, your statements:
if (a = b) { }
while (a = &c) { }
are roughly equivalent to:
a = b
if (b) { }
a = &c
while (&c) { }
which are the same as
a = b
if (a) { }
a = &c
while (a) { }
And what about those if (a) tests when they are not booleans? Well, if they are integers, 0 is false and everything else is true. This (one "zero" value -> false, the rest -> true) usually holds, but you should really refer to a C++ reference to be sure (however, note that writing if (a == 0) is not much more difficult than if (!a), while being much clearer to the reader).
Anyways, you should always avoid side-effects that obscure your code.
You should never need to do if (a = b): you can achieve exactly the same thing in other ways that are clearer and that won't look like a mistake (if I read code like if (a = b), the first thing that comes to my mind is that the developer who wrote it made a mistake; the second, after I triple-check that it is correct, is that I hate him! :-)
Good luck
|
2,003,963 | 2,004,011 | MSVC and FreeGlut Compiler Error | Receiving a lot of these messages when compiling, which is making compiling a simple program very time-consuming.
freeglut_static.lib(freeglut_callbacks.obj) : warning LNK4204: 'z:\CST328\Lab1\block\Release\vc90.pdb' is missing debugging information for referencing module; linking object as if no debug info
1>freeglut_static.lib(freeglut_cursor.obj) : warning LNK4204: 'z:\CST328\Lab1\block\Release\vc90.pdb' is missing debugging information for referencing module; linking object as if no debug info
1>freeglut_static.lib(freeglut_display.obj) : warning LNK4204: 'z:\CST328\Lab1\block\Release\vc90.pdb' is missing debugging information for referencing module; linking object as if no debug info
Is there any way to prevent this? It would make working on my assignments much more pleasant. :)
EDIT:
I am using Microsoft Visual C++ Express 2008.
| Your PDB file is out of sync with the library binary.
In Windows, the PDB holds the debug information for a module. It is linked to a particular build. If you rebuild your library you have to produce a new PDB file. The PDB file you have is out of sync, so you either have to delete (or rename) it (find a file named freeglut_static.pdb) or recompile the library and produce an up-to-date PDB file.
|
2,004,035 | 2,004,069 | Design alternative? Composition and construction | So I'm using composition to bring together a collection of objects, all of which are derived from a base class, let's say Component. E.g:
class Component {
public:
Component();
...
private:
int m_address;
...
};
class SpecializedComponent: public Component {
public:
SpecializedComponent()
... //and so on
};
class SpecializedComponent2: public Component {
public:
SpecializedComponent2()
... //and so on
};
class ComponentHolder{
SpecializedComponent* m_descriptiveName;
SpecializedComponent2* m_descriptiveName2;
// and so on... many different types of components
}
So, each SpecializedComponentX will communicate over a network with an individual data source, each with their own unique address. These addresses are specified in a parameter file. At the moment, I'm parsing the parameter file and the m_address is initialized in the derived class constructor - this is because each m_address is specified by the type of object we're initializing.
Every SpecializedComponentX has some common functionality that I want to execute in the base class Component. So, I spin up a thread associated with the base class Component, right? Sure - makes sense. Until I realize that I don't have the destination address for that component yet - because the object hasn't been fully constructed. I want to spin up the base class thread in the ctor, but I don't know the m_address yet.
The only way I can think of getting around this is providing a (simple) virtual function, void start(), that the derived class can call to spin up the thread once the object is fully constructed. Is this a valid and appropriate design choice or is there a pattern that I could be overlooking? Thanks.
| Component could have a constructor with one argument that initializes m_address.
|
2,004,089 | 2,004,104 | When using variable length argument lists (...), can you do anything besides step forward through va_list? | If each invocation of va_arg modifies the object declared with va_list so that the object points to the next argument in the list, is there any way to step back so that it points to the previous one, jump back to the first one, jump to the end? Go three quarters of the way though the list and then... you get the idea. Or is it a once passed it's gone type thing?
| You can "jump back" to the beginning by executing va_start again. There is no other supported operation other than going on to the next argument.
However, most implementations use trivial pointer arithmetic. If you guarantee the code runs only on a particular architecture, then you can do the reverse of the arithmetic. But this completely defeats the purpose of va_list which is to safely abstract the operations for all architectures.
|
2,004,323 | 2,004,330 | Is there any way I can find how many objects are instantiated from stack and how many objects from Heap | Is there any way I can find how many objects are instantiated on the stack and how many on the heap? I don't want the scoping restrictions of stack objects.
If I use a static counter in the constructor and destructor, it will be called in both cases (objects from the stack and the heap). One way is to exploit the fact that stack objects are scoped and their destructor is called when they go out of scope, but I want a better solution, so as to know the number of objects at any point in time.
| Override operator new and operator delete for the class. Have another counter there that is incremented/decremented in these operators. This will keep track of objects created on the heap. The constructor/destructor can increment/decrement another counter that will count all objects. The difference between the two is the set of objects on the stack (as well as global objects).
|
2,004,419 | 2,004,459 | How to check if a link exists or not in VC++? | I have a link. I have checked that the link is a valid URL through regular expressions. Now, I want to check if the link is a valid http link or not. i.e. it should not be a non-existing link.
Is there a way in VC++ 6.0 (MFC) to check that?
| One option is to try to get data from that URL by using the URLOpenBlockingStream function.
Example:
#include <Urlmon.h>
IStream* pStream = NULL;
if (SUCCEEDED(URLOpenBlockingStream(0, "URL string", &pStream, 0, 0))) {
// Release the stream immediately since we don't use the data.
pStream->Release();
return TRUE;
}
else {
return FALSE;
}
|
2,004,743 | 2,004,796 | different type of instantiating on c++ | Since I came from C# to C++, everything in C++ looks crazy to me.
I'm just wondering if someone could explain why we have these kinds of instantiation in C++:
method 1:
ClassA obj1; // this is going to stack
method 2:
ClassA *obj1 = new ClassA(); //this is going to heap
whereas in C++ we don't have the common C# way of instantiating:
ClassA obj2 = new ClassA();
And one more question: in method 1 I get an instance of ClassA but without the (), and this is exactly where I got confused. Why do we instantiate like that?
Our ClassA has a constructor, but we instantiate without parentheses???
How does its constructor get called?
p.s : I've read these topics :
Different methods for instantiating an object in C++
Stack, Static, and Heap in C++
What and where are the stack and heap?
| Indeed moving to C++ from a language like Java or C# can be daunting, I've gone through it as well.
The first and foremost difference is that in C++ you almost always manage your own memory. When creating an object on the heap you are responsible for deleting it so it does not leak memory - this in turn means you can delete it when you see fit. When creating an object on the stack, it is automatically deleted when it goes out of scope - you must be careful not to use it after it goes out of scope.
Example:
void do_queue(B& queue)
{
Evt *e = new Evt;
queue.queueEvent(e);
} // all well, e can be popped and used (also must be deleted by someone else!)
versus
void do_queue(B& queue)
{
Evt e;
queue.queueEvent(&e);
} // e is out of scope here, popping it from the queue and using it will most likely cause a sigseg
That being said, the two methods are also significantly different in one aspect: the first one creates an object. The second one creates a pointer to an object. The nice thing about having pointers is that you can pass them around as parameters with only minimal memory being copied on the stack (the pointer is copied, instead of the whole object). Of course, you can always get the address of an object allocated on the stack by using "&", or pass it around as a reference - however, when using objects allocated on the stack you must be especially careful with their scope.
I've found this website a great resource when I moved from Java to C++: http://www.parashift.com/c++-faq-lite/ - you will probably find it useful too; it offers a lot of good explanations
|
2,004,808 | 2,006,360 | boost-test application initialisation | I'm just getting started with boost-test and unit testing in general with a new application, and I am not sure how to handle the application's initialisation (e.g. loading config files, connecting to a database, starting an embedded Python interpreter, etc).
I want to test this initialisation process, and also most of the other modules in the application require that the initialisation occurred successfully.
Some way to run some shut down code would also be appreciated.
How should I go about doing this?
| It seems what you intend to do is more integration testing than unit testing. This isn't nitpicking about wording, but it makes a difference. Unit testing means testing methods in isolation, in an environment called a fixture, created just for one test and then deleted. Another instance of the fixture will be re-created if the next case requires the same fixture. This is done to isolate the tests so that an error in one test does not affect the outcome of the subsequent tests.
Usually, one test has three steps:
Arrange - prepare the fixture : instantiate the class to be tested, possibly other objects needed
Act - call the method to be tested
Assert - verify the expectations
Unit tests typically stay away from external resources such as files and databases. Instead, mock objects are used to satisfy the dependencies of the class to be tested.
However, depending on the type of your application, you can try to run tests from the application itself. This is not "pure" unit testing, but it can be valuable anyway, especially if the code has not been written with unit testing in mind: it might not be "flexible" enough to be unit tested.
This needs a special execution mode, with a "-test" parameter for instance, which will initialize the application normally and then invoke tests that simulate inputs and use assertions to verify the application reacted as expected. Likewise, it might be possible to invoke the shutdown code and verify with assertions that the database connection has been closed (if the objects are not deleted).
This approach has several drawbacks compared to unit tests: it depends on the config files (the software may behave differently depending on the parameters) and on the database (on its content and on the ability to connect to it), and the tests are not isolated ... The first two can be overcome by using default values for the configuration and connecting to a test database in test mode.
|
2,004,820 | 2,005,142 | Inherit interfaces which share a method name | There are two base classes that have the same function name. I want to inherit both of them and override each method differently. How can I do that with separate declaration and definition (instead of defining in the class definition)?
#include <cstdio>
class Interface1{
public:
virtual void Name() = 0;
};
class Interface2
{
public:
virtual void Name() = 0;
};
class RealClass: public Interface1, public Interface2
{
public:
virtual void Interface1::Name()
{
printf("Interface1 OK?\n");
}
virtual void Interface2::Name()
{
printf("Interface2 OK?\n");
}
};
int main()
{
Interface1 *p = new RealClass();
p->Name();
Interface2 *q = reinterpret_cast<RealClass*>(p);
q->Name();
}
I failed to move the definitions out in VC8. I found that the Microsoft-specific keyword __interface can do this job successfully; code below:
#include <cstdio>
__interface Interface1{
virtual void Name() = 0;
};
__interface Interface2
{
virtual void Name() = 0;
};
class RealClass: public Interface1,
public Interface2
{
public:
virtual void Interface1::Name();
virtual void Interface2::Name();
};
void RealClass::Interface1::Name()
{
printf("Interface1 OK?\n");
}
void RealClass::Interface2::Name()
{
printf("Interface2 OK?\n");
}
int main()
{
Interface1 *p = new RealClass();
p->Name();
Interface2 *q = reinterpret_cast<RealClass*>(p);
q->Name();
}
but is there another, more general way to do this that will work in other compilers?
| This problem doesn't come up very often. The solution I'm familiar with was designed by Doug McIlroy and appears in Bjarne Stroustrup's books (presented in both Design & Evolution of C++ section 12.8 and The C++ Programming Language section 25.6). According to the discussion in Design & Evolution, there was a proposal to handle this specific case elegantly, but it was rejected because "such name clashes were unlikely to become common enough to warrant a separate language feature," and "not likely to become everyday work for novices."
Not only do you need to call Name() through pointers to base classes, you need a way to say which Name() you want when operating on the derived class. The solution adds some indirection:
class Interface1{
public:
virtual void Name() = 0;
};
class Interface2{
public:
virtual void Name() = 0;
};
class Interface1_helper : public Interface1{
public:
virtual void I1_Name() = 0;
void Name() override
{
I1_Name();
}
};
class Interface2_helper : public Interface2{
public:
virtual void I2_Name() = 0;
void Name() override
{
I2_Name();
}
};
class RealClass: public Interface1_helper, public Interface2_helper{
public:
void I1_Name() override
{
printf("Interface1 OK?\n");
}
void I2_Name() override
{
printf("Interface2 OK?\n");
}
};
int main()
{
RealClass rc;
Interface1* i1 = &rc;
Interface2* i2 = &rc;
i1->Name();
i2->Name();
rc.I1_Name();
rc.I2_Name();
}
Not pretty, but the decision was it's not needed often.
|
2,004,952 | 2,005,216 | How to set Native Microsoft compiler for VS 2003 if Intel Compiler is default compiler? | I am using a development environment with VS 2003 as the IDE and the Intel compiler as the default compiler. I have to set the Microsoft compiler as the default for compiling my project, but I could not find where to set the compiler in VS 2003.
Thanks
Anil
| You cannot set the compiler explicitly; VS simply runs cl.exe. Windows finds which cl.exe to start by searching the directories listed in the PATH environment variable, and you can explicitly set which directories are on that path.
Not sure about VS2003; in 2005 and up it's Tools + Options, Projects and Solutions, VC++ Directories, Executable files. Remove the Intel compiler directory from that list.
|
2,005,005 | 2,015,953 | C++ - global setlocale works, the same locale passed to _vsnprintf_l doesn't | I have following C++ code sample:
void SetVaArgs(const char* fmt, const va_list argList)
{
setlocale( LC_ALL, "C" );
// 1
m_FormatBufferLen = ::_vsnprintf(m_FormatBuffer, Logger::MAX_LOGMESSAGE_SIZE, fmt, argList);
setlocale( LC_ALL, "" );
//2
m_FormatBufferLen = ::_vsnprintf(m_FormatBuffer, Logger::MAX_LOGMESSAGE_SIZE, fmt, argList);
_locale_t locale = _create_locale(LC_ALL, "C");
//3
m_FormatBufferLen = ::_vsnprintf_l(m_FormatBuffer, Logger::MAX_LOGMESSAGE_SIZE, fmt,locale, argList);
The arg list contains an LPCTSTR with extended ASCII characters. Command //1 copies it to the buffer, as expected. Command //2 stops copying at the first character in the range 129-161 (with a few exceptions).
I'd like to address this issue without changing the global locale for the process, but command //3 behaves like //2; why? I'm passing the "C" locale, so I would expect the same effect as command //1.
By default I'm using a Polish locale on English Windows XP.
| It turned out to be a CRT bug in VS2005 and above (2008 and 2010). Submitted to Microsoft here: https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=523503#details
Bug applies to _sprintf_l, _vsnprintf_l, _sprintf_s_l, _vsnprintf_s_l and possibly other relatives.
|
2,005,075 | 2,582,882 | Add Exception to firewall on Mac either during installation of application or when application is launched | I have a client-server application, where I want to add an exception to the firewall so that my applications can communicate properly.
I want to add an exception to the firewall (without changing the setup for the other firewall options).
I am using Carbon, Qt, C++. However, I feel this has more to do with some install time settings.
| I'm not sure this would be good practice without notifying the user or asking the user for permission, since OS X has a built-in system for application signing and the user must explicitly enable this level of security.
That being said, you should have a look at
/usr/libexec/ApplicationFirewall/socketfilterfw
and
/usr/libexec/ApplicationFirewall/com.apple.alf.plist
OS X also uses ipfw, and I'm pretty sure it supersedes any rules set by the application filter. So you could make an ipfw rule as well.
|
2,005,385 | 2,031,790 | Qt app text size incorrect under MacOSX | Designing UIs with QtCreator under Windows, and porting the same .ui file under MacOSX leads to designs with some text parts very small -- actually, the HTML ones. It seems it comes from the fact that QtCreator uses pt instead of px as text size unit, and that the default screen resolutions are quite different under Windows and MacOSX.
Is there a reason I'm not getting more consistent results? Apart from editing each pt into px, are there any workarounds?
Thanks.
| As a rule of thumb you should not specify the font sizes for controls manually in Qt Designer/Creator, as this leads to the problems you have. The reason for the inconsistency is the fact that different platforms use different DPI settings (96 DPI on Windows vs. 72 DPI on Mac OS X).
This results in fonts being displayed with different sizes.
Also, you mentioned HTML. I assume you have set some HTML text in a QTextEdit-like widget using the built-in editor. When you select a font size there, Qt Creator will produce some HTML like this:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta name="qrichtext" content="1" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body style=" font-family:'Sans'; font-size:11pt; font-weight:400; font-style:normal;">
<p style=" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;">Hello World</p></body></html>
As you can see, it sets some font-size attributes, which is really nasty. A simple, easy solution to this disaster is to remove the style= attributes entirely. This causes the QTextEdit to use the default application font instead (which should be fine on all platforms):
<html><head></head><body><p>Hello World</p></body></html>
As a sidenote, this is much friendlier for translators, as they don't have to fight through all the useless CSS.
Unfortunately Qt's QTextEdit does not support the "percent" font-size specification (just px and pt). If it did, you could have used something like "90%" to make the text smaller than the default font while still being on the safe side.
Another option would be a QWebView, which you make editable. This allows for good text formatting while having the full CSS subset. But that might be overkill.
Hope that helps!
|
2,005,426 | 2,007,380 | Message queuing solutions? | (Edited to try to explain better)
We have an agent, written in C++ for Win32. It needs to periodically post information to a server. It must support disconnected operation. That is: the client doesn't always have a connection to the server.
Note: This is for communication between an agent running on desktop PCs, to communicate with a server running somewhere in the enterprise.
This means that the messages to be sent to the server must be queued (so that they can be sent once the connection is available).
We currently use an in-house system that queues messages as individual files on disk, and uses HTTP POST to send them to the server when it's available.
It's starting to show its age, and I'd like to investigate alternatives before I consider updating it.
It must be available by default on Windows XP SP2, Windows Vista and Windows 7, or must be simple to include in our installer.
This product will be installed (by administrators) on a couple of hundred thousand PCs. They'll probably use something like Microsoft SMS or ConfigMgr. In this scenario, "frivolous" prerequisites are frowned upon. This means that, unless the client-side code (or a redistributable) can be included in our installer, the administrator won't be happy. This makes MSMQ a particularly hard sell, because it's not installed by default with XP.
It must be relatively simple to use from C++ on Win32.
Our client is an unmanaged C++ Win32 application. No .NET or Java on the client.
The transport should be HTTP or HTTPS. That is: it must go through firewalls easily; no RPC or DCOM.
It should be relatively reliable, with retries, etc. Protection against replays is a must-have.
It must be scalable -- there's a lot of traffic. Per-message impact on the server should be minimal.
The server end is C#, currently using ASP.NET to implement a simple HTTP POST mechanism.
(The slightly odd one). It must support client-side in-memory queues, so that we can avoid spinning up the hard disk. It must allow flushing to disk periodically.
It must be suitable for use in a proprietary product (i.e. no GPL, etc.).
| How is your current solution showing its age?
I would push the logic on to the back end, and make the clients extremely simple.
Messages are simply stored in the file system. Have the client write to c:/queue/{uuid}.tmp. When the file is written, rename it to c:/queue/{uuid}.msg. This makes writing messages to the queue on the client "atomic".
A C++ thread wakes up, scans c:\queue for "*.msg" files, and if it finds one it then checks for the server, and HTTP POSTs the message to it. When it receives the 200 status back from the server (i.e. it has got the message), then it can delete the file. It only scans for *.msg files. The *.tmp files may still be being written to, and you'd have a race condition trying to send a msg file that was still being written. That's what the rename from .tmp is for. I'd also suggest scanning by creation date so early messages go first.
Your server receives the message, and here it can do any necessary dupe checking. Push this burden onto the server to centralize it. You could simply record every uuid for every message to do duplicate elimination. If that list gets too long (I don't know your traffic volume), perhaps you can cull items older than 30 days (I also don't know how long your clients can remain offline).
This system is simple, but pretty robust. If the file sending thread gets an error, it will simply try to send the file next time. The only time you should be getting a duplicate message is in the window between when the client gets the 200 ack from the server and when it deletes the file. If the client shuts down or crashes at that point, you will have a file that has been sent but not removed from the queue.
If your clients are stable, this is a pretty low risk. With dupe checking based on the message ID, you can mitigate it at the cost of some bookkeeping; maintaining a list of uuids isn't spectacularly daunting, but again it does depend on your message volume and other performance requirements.
The fact that you are allowed to work "offline" suggests you have some "slack" in your absolute messaging performance.
|
2,005,596 | 2,007,777 | Quartz display services replacement for deprecated functions? | The Quartz display services reference manual lists several functions as deprecated (for example CGDisplayCurrentMode), but doesn't mention what the replacement function is.
What should I be using to find information about the current video mode?
Is there a way to find out this kind of information? The reference manual on the apple developer site seems very hard to navigate.
| I think CGDisplayCopyDisplayMode() looks like the replacement. It is new in 10.6.
http://developer.apple.com/mac/library/documentation/Carbon/Reference/ApplicationServicesRefUpdate/Articles/ApplicationServices_10.5-10.6_SymbolChanges.html#//apple_ref/doc/uid/TP40009185-SW41
|
2,006,146 | 2,006,260 | Linking MTL (Matrix Template Library) in Visual Studio | I have MTL header files; I want to use those header files in Visual Studio 2008. How can I link those header files so that I can write a matrix program using the MTL library?
| Maybe you're referring to how to tell the IDE where to find them? In that case, add the directory containing them to your project: in VS, right-click the project and select Properties, go to Configuration Properties -> C/C++ -> General, and add the MTL directory (and any sub-directories) to the Additional Include Directories field.
|
2,006,225 | 2,006,406 | Getting location of file tnsnames.ora by code | How can I get the location of the tnsnames.ora file by code, in a machine with the Oracle client installed?
Is there a Windows registry key indicating the location of this file?
| Some years ago I had the same problem.
Back then I had to support Oracle 9 and 10 so the code only takes care of those versions, but maybe it saves you from some research.
The idea is to:
search the registry to determine the oracle client version
try to find the ORACLE_HOME
finally get the tnsnames from HOME
using System;
using System.IO;
using Microsoft.Win32;
public enum OracleVersion
{
Oracle9,
Oracle10,
Oracle0
};
private OracleVersion GetOracleVersion()
{
RegistryKey rgkLM = Registry.LocalMachine;
RegistryKey rgkAllHome = rgkLM.OpenSubKey(@"SOFTWARE\ORACLE\ALL_HOMES");
/*
* 10g Installationen don't have an ALL_HOMES key
* Try to find HOME at SOFTWARE\ORACLE\
* 10g homes start with KEY_
*/
string[] okeys = rgkLM.OpenSubKey(@"SOFTWARE\ORACLE").GetSubKeyNames();
foreach (string okey in okeys)
{
if (okey.StartsWith("KEY_"))
return OracleVersion.Oracle10;
}
if (rgkAllHome != null)
{
// ALL_HOMES exists, so this is a pre-10g (9i) client
return OracleVersion.Oracle9;
}
return OracleVersion.Oracle0;
}
private string GetOracleHome()
{
RegistryKey rgkLM = Registry.LocalMachine;
RegistryKey rgkAllHome = rgkLM.OpenSubKey(@"SOFTWARE\ORACLE\ALL_HOMES");
OracleVersion ov = this.GetOracleVersion();
switch(ov)
{
case OracleVersion.Oracle10:
{
string[] okeys = rgkLM.OpenSubKey(@"SOFTWARE\ORACLE").GetSubKeyNames();
foreach (string okey in okeys)
{
if (okey.StartsWith("KEY_"))
{
return rgkLM.OpenSubKey(@"SOFTWARE\ORACLE\" + okey).GetValue("ORACLE_HOME") as string;
}
}
throw new Exception("No Oracle Home found");
}
case OracleVersion.Oracle9:
{
string strLastHome = rgkAllHome.GetValue("LAST_HOME").ToString();
RegistryKey rgkActualHome = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\ORACLE\HOME" + strLastHome);
return rgkActualHome.GetValue("ORACLE_HOME").ToString();
}
default:
{
throw new Exception("No supported Oracle Installation found");
}
}
}
public string GetTNSNAMESORAFilePath()
{
string strOracleHome = GetOracleHome();
if (strOracleHome != "")
{
string strTNSNAMESORAFilePath = strOracleHome + @"\NETWORK\ADMIN\TNSNAMES.ORA";
if (File.Exists(strTNSNAMESORAFilePath))
{
return strTNSNAMESORAFilePath;
}
else
{
strTNSNAMESORAFilePath = strOracleHome + @"\NET80\ADMIN\TNSNAMES.ORA";
if (File.Exists(strTNSNAMESORAFilePath))
{
return strTNSNAMESORAFilePath;
}
else
{
throw new SystemException("Could not find tnsnames.ora");
}
}
}
else
{
throw new SystemException("Could not determine ORAHOME");
}
}
|
2,006,402 | 2,006,553 | Compiling CUDA examples gives build error | I am running Windows 7 64bit, with Visual Studio 2008. I installed the CUDA drivers and SDK. The SDK comes with quite a few examples including compiled executables and source code. The compiled executables run wonderfully. When I open the vc90 solutions and go to build in Win32 configuration I get this error:
Error 1 fatal error LNK1181: cannot open input file '.\Release\bandwidthTest.cu.obj' bandwidthTest bandwidthTest
Build log:
1>------ Build started: Project: bandwidthTest, Configuration: Release Win32 ------
1>Compiling with CUDA Build Rule...
1>"C:\CUDA\bin64\nvcc.exe" -arch sm_10 -ccbin "c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\CUDA\include" -I"../../common/inc" -maxrregcount=32 --compile -o "Release\bandwidthTest.cu.obj" "c:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK\C\src\bandwidthTest\bandwidthTest.cu"
1>nvcc fatal : Visual Studio configuration file '(null)' could not be found for installation at 'c:/Program Files (x86)/Microsoft Visual Studio 9.0/VC/bin/../..'
1>Linking...
1>LINK : fatal error LNK1181: cannot open input file '.\Release\bandwidthTest.cu.obj'
1>Build log was saved at "file://c:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK\C\src\bandwidthTest\Release\BuildLog.htm"
1>bandwidthTest - 1 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
If I attempt to compile in x64 it doesn't build at all and just skips the project
1>------ Skipped Build: Project: bandwidthTest ------
1>
========== Build: 0 succeeded or up-to-date, 0 failed, 1 skipped ==========
I am new to C++, having been doing C# for a while. I'm certain there is something small that I am missing, but any clues you could provide would be appreciated.
| Check whether you have the x64 compiler installed, then change the project platform to x64. I had the same problem when trying to compile a 32-bit CUDA program on 64-bit Win7.
Also make sure you have added the 64-bit libs and includes to the search path.
|
2,006,627 | 2,006,652 | Print Hexadecimal Numbers Of a File At C And C++ | I'm now developing a home project, but before I start, I need to know how I can print (cout) the content of a file (*.bin, for example) in hexadecimal?
I'd like to learn, so a good tutorial would be very nice too ;-)
Remember that I need to develop this without using external applications, because this home project is meant to teach me more about hexadecimal manipulation in C++ and to practice what I know.
Some other questions
Is there any way to do this using C?
How can I store this value into a variable?
I already got the way in C++, but how to make it in C?
| To print hex:
std::cout << std::hex << 123 << std::endl;
but yes, use the od tool :-)
A good file reading/writing tutorial is here. You will have to read the file into a buffer then loop over each byte/word of the file.
|
2,006,634 | 2,006,759 | Compiling Qt app against latest VC++ 2008 runtime | Hi, I have a problem compiling my Qt app with Visual Studio 2008 SP1.
For the sake of purity I created a Windows XP virtual machine, installed VS 2008, then SP1, and then compiled Qt 4.6. Now, looking at the manifest of the Qt DLLs (using XN Resource Editor), I can see that they depend on the 9.0.21022.8 CRT.
But at the same time, the latest CRT that comes with SP1 is 9.00.30729.4926. Why were the Qt DLLs generated with the older manifest embedded? How can I fix this?
Besides this, on my dev machine in the winsxs folder I can only find the 9.00.21022.8 CRT (note the double zero). Where the 9.0.21022.8 CRT is, I have no idea, but it must be somewhere ((c) Bush), since the app actually starts on the dev machine.
Anton
| This should fix it: App does not run with VS 2008 SP1 DLLs, previous version works with RTM versions
|