question_id int64 25 74.7M | answer_id int64 332 74.7M | title stringlengths 20 150 | question stringlengths 23 4.1k | answer stringlengths 20 4.1k |
|---|---|---|---|---|
2,178,109 | 2,178,136 | Win32 exe not finding resources compiled in a .lib | I have a pretty plain Win32 application that links against a library I wrote that contains resources in an .rc file. The problem is that the .exe doesn't seem to find those resources. But if I move the .rc file from the library project to the .exe project resource loading works.
What step am I missing? I have the impression I have to call some kind of general resource setup/loading function because the resources aren't directly in the executable but I don't know what.
| VC++ resources in a static library
|
2,178,171 | 2,178,272 | Unix program console vs something else | I am writing a program in Unix. At the moment, it has a console interface. I am just curious: like DLLs in Windows, does a similar concept exist in Unix for when another program wants to call your program? I have been asked to simply provide a binary, with little else in terms of details. I have a feeling that there might be another program that will be calling this. If I go with that, what would I need to do? I cannot share source; I can only provide a compiled binary (which I am a bit confused about as well... when we talk about a binary in Unix, some are executable while others are not. In the case of my program I assume it's an executable they are asking for, at least until I get confirmation). Would I need to do anything special, like provide APIs as they do with DLLs? I am just not sure how that all works out in Unix.
| The unix equivalent of a Windows dll is a shared library, e.g. libfoobar.so.
With regards to how to distribute your code to a third party in binary form your options are:
a static library: libfoobar.a
a shared / dynamic library: libfoobar.so
an executable
The first two cases are effectively the same. People tend to prefer dynamic libraries these days, because the library code can be shared by multiple executables making both the size of the executables and the amount of memory required smaller.
In both cases the user of your code will have to write their code to use your API, and they need to compile their code against your library.
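As a rough sketch (the library and file names here are just illustrative, and exact flags vary by platform and compiler), building a shared library with g++ and linking a client against it typically looks like:
g++ -fPIC -c foobar.cpp -o foobar.o
g++ -shared -o libfoobar.so foobar.o
g++ client.cpp -L. -lfoobar -o client
You would then ship libfoobar.so together with a header that declares your API.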
In the third case you would provide the third party with an executable that they run. They would call into your application via some sort of Inter process communication mechanism, e.g. pipes or shared memory, or over the network, e.g. UDP or TCP as a low level mechanism, or some sort of RPC mechanism like SunRPC, SOAP, HTTP, REST, what have you.
|
2,178,252 | 2,178,339 | Pimpl idiom: What size_type to use if implementation is unknown? | I have a class that holds an array of elements, and I want to give it a GetSize member function. But what return type should I give that function?
I'm using the pimpl idiom, and so in the header file it is not known what the implementation will use to store the elements. So I cannot just say std::vector<T>::size_type, for example:
class FooImpl;
class Foo {
FooImpl* impl_;
public:
TYPE GetSize(); // what TYPE??
};
| If the client code can only see Foo (which is the purpose of the pimpl idiom), then there's no use in defining a specific size_type in the concrete implementation - it won't be visible/accessible to the client anyway. Standard containers can do that since they are built on so-called "compile-time polymorphism", while you are specifically trying to use a [potentially] run-time implementation hiding method.
In your situation the only choice would be to choose an integer type that "should be enough for all possible implementations" (like unsigned long, for example) and stick with it.
Another possibility is to use the uintptr_t type, if it is available in your implementation (it is standardized in C99, but not in C++). This integer type is supposed to cover the entire storage address range available to the program, which means that it will always be sufficient for representing the size of any in-memory container. Note that other posters often use the same logic, but incorrectly arrive at the conclusion that the appropriate type to use here is size_t. (This is usually a result of lack of experience with non-flat memory model implementations.) If your containers are always based on physical arrays, size_t will work. However, if your containers are not always array-based, size_t is not even remotely the correct type to use here, since its range is generally smaller than the maximum size of a non-contiguous (non-array-based) container.
But in any case, regardless of what size type you end up using, it is a good idea to hide it behind a typedef-name, just as is done in standard containers.
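A minimal sketch of that approach (the choice of unsigned long is just an assumption about what is "big enough for all possible implementations"):
class FooImpl;
class Foo {
    FooImpl* impl_;
public:
    typedef unsigned long size_type; // hide the actual choice behind a typedef
    size_type GetSize() const;
};
Client code then writes Foo::size_type, and you can change the underlying type later without touching the callers.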
|
2,178,281 | 2,178,694 | small string optimization for vector? | I know several (all?) STL implementations implement a "small string" optimization where instead of storing the usual 3 pointers for begin, end and capacity a string will store the actual character data in the memory used for the pointers if sizeof(characters) <= sizeof(pointers). I am in a situation where I have lots of small vectors with an element size <= sizeof(pointer). I cannot use fixed size arrays, since the vectors need to be able to resize dynamically and may potentially grow quite large. However, the median (not mean) size of the vectors will only be 4-12 bytes. So a "small string" optimization adapted to vectors would be quite useful to me. Does such a thing exist?
I'm thinking about rolling my own by simply brute force converting a vector to a string, i.e. providing a vector interface to a string. Good idea?
| You can borrow the SmallVector implementation from LLVM. (header only, located in LLVM\include\llvm\ADT)
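If you do pull it in, usage is essentially the same as std::vector; a small hedged example (assuming the LLVM headers are on your include path):
#include "llvm/ADT/SmallVector.h"
llvm::SmallVector<int, 8> v; // up to 8 elements stored inline, heap allocation only beyond that
v.push_back(1);
v.push_back(2);
The second template parameter is the number of elements kept in place before any heap allocation happens, so you would size it around your median case.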
|
2,178,303 | 2,178,531 | C++ Read file from bottom to top | I have a very large file I need to parse, so reading it into memory all at once is non-ideal. The way the file is structured, it would be much, much easier if I could start at eof and go up to the beginning. Does anyone have a good trick for doing this? I'm using Visual Studio 2008 and C++. Thanks
| If your operating system supports it, consider using a memory mapped file. You can then treat the file contents as a very large array of bytes, with the operating system managing bringing the data into memory (and releasing it) as necessary.
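On Windows (you mention Visual Studio 2008) a minimal sketch would use the Win32 file-mapping API; error handling is omitted, the file name is a placeholder, and the file is assumed to fit in a single view (under 4 GB):
HANDLE hFile = CreateFileA("big.txt", GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
HANDLE hMap = CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, NULL);
const char* data = static_cast<const char*>(MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0));
DWORD size = GetFileSize(hFile, NULL);
for (const char* p = data + size; p != data; ) { --p; /* scan backwards, e.g. looking for '\n' */ }
UnmapViewOfFile(data);
CloseHandle(hMap);
CloseHandle(hFile);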
|
2,178,316 | 2,179,873 | Private members in pimpl class? | Is there any reason for the implementation class as used in the pimpl idiom to have any private members at all? The only reason I can really think of is to protect yourself from yourself -- i.e. the private members serve to enforce some kind of contract between the class and the user, and in this case the class and the user are rather intimately related, so it seems unnecessary.
| I think people are confusing the Pimpl idiom with Adapter/Bridge/Strategy patterns. Idioms are specific to a language. Patterns can apply to many languages.
The Pimpl idiom was devised to address the following problem in C++: private members of a class are visible in the class declaration, which adds unnecessary #include dependencies for the user of the class. This idiom is also known as the compiler firewall.
If the implementation is written directly in the outer class's corresponding *.cpp file, and is not accessible outside the module, then I think it's perfectly fine to simply use a struct for the Pimpl class. To further reinforce the idea that implementations are not meant to be directly re-used, I define them as a private inner struct:
// foo.h
class Foo : boost::noncopyable
{
public:
...
private:
struct Impl;
boost::scoped_ptr<Impl> impl_;
};
// foo.cpp
struct Foo::Impl
{
// Impl method and member definitions
};
// Foo method definitions
As soon as there's a header file for the implementation class, I think we are no longer talking about the Pimpl idiom. We are rather talking about Adapter, Bridge, Strategy, interface classes, etc...
Just my 2 cents.
|
2,178,760 | 2,178,914 | Public new private constructor | When I try compiling the following:
#include <iostream>
class Test
{
public:
void* operator new (size_t num);
void operator delete (void* test);
~Test();
private:
Test();
};
Test::Test()
{
std::cout << "Constructing Test" << std::endl;
}
Test::~Test()
{
std::cout << "Destroying Test" << std::endl;
}
void* Test::operator new (size_t num)
{
::new Test;
}
void Test::operator delete(void* test)
{
::delete(static_cast<Test*>(test));
}
int main()
{
Test* test = new Test;
delete test;
}
I get :
$ g++ -o test test.cpp
test.cpp: In function ‘int main()’:
test.cpp:14: error: ‘Test::Test()’ is private
test.cpp:36: error: within this context
If the new is a member function, why can it not call the private constructor?
Edit:
My idea is to create a class that can only be instantiated on the heap using totally standard syntax. I was hoping since new is a data member, it could call the private constructor but since new is not used for stack objects, you would not be allowed to create the object on the stack.
| I think you have a misunderstanding on what the operator new does. It does not create objects, but rather allocates memory for the object. The compiler will call the constructor right after calling your operator new.
struct test {
void * operator new( std::size_t size );
};
int main()
{
test *p = new test;
// compiler will translate this into:
//
// test *p = test::operator new( sizeof(test) );
// new (static_cast<void*>(p)) test() !!! the constructor is private in this scope
}
The main usage of the operator new is having a memory allocator different to the default allocator for the system (usually malloc), and it is meant to return an uninitialized region of memory on which the compiler will call the constructor. But the constructor is called after the memory is allocated in the scope where the new call was written (main in this case).
After acceptance note
The complete solution to the unformulated question - how do I force users of my class to instantiate it on the heap? - is to make the constructors private and offer a factory function, as some other answers (such as the one by villintehaspam) point out.
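A hedged sketch of that factory approach (the names are illustrative):
class HeapOnly {
public:
    static HeapOnly* Create() { return new HeapOnly; } // the only way to construct one
    void Destroy() { delete this; } // destructor is private, so expose deletion too
private:
    HeapOnly() {}
    ~HeapOnly() {}
};
Because both the constructor and the destructor are private, HeapOnly h; on the stack won't compile, while HeapOnly* p = HeapOnly::Create(); works as intended.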
|
2,178,909 | 2,178,915 | How to initialize 3D array in C++ | How do you initialize a 3D array in C++?
int min[1][1][1] = {100, { 100, {100}}}; //this is not the way
| The array in your question has only one element, so you only need one value to completely initialise it. You need three sets of braces, one for each dimension of the array.
int min[1][1][1] = {{{100}}};
A clearer example might be:
int arr[2][3][4] = { { {1, 2, 3, 4}, {1, 2, 3, 4}, {1, 2, 3, 4} },
{ {1, 2, 3, 4}, {1, 2, 3, 4}, {1, 2, 3, 4} } };
As you can see, there are two groups, each containing three groups of 4 numbers.
|
2,179,017 | 2,183,343 | Can I call multiple times JNI_CreateJavaVM? | I'm trying to launch two threads which call the "DispFrontEnd" function.
The first thread ended OK; the second failed to start the JVM. Why?
Thanks
#include "jni.h"
#include <process.h>
#include "Stdafx.h"
//DISPATCH Thread Check
bool DispatchThreadCreated = FALSE;
if (DispatchThreadCreated == FALSE)
{
HANDLE hDispThread;
hDispThread = (HANDLE)_beginthread(DispFrontEnd,0,(void *)dispatchInputs);
if ((long)hDispThread == -1)
{
log.LogError("Thread DispFrontEnd Returned********BG ", (long)hDispThread);
log.LogError("errno", errno);
log.LogError("_doserrno", _doserrno);
}
else
{
logloc->LogMethod("Dispatch Thread CREATED");
DispatchThreadCreated= TRUE;
//Espera que a thread termine
WaitForSingleObject( hDispThread, INFINITE );
DispatchThreadCreated= FALSE; // 01_02_2010
logloc->LogMethod("Dispatch Thread ENDED");
}
}
if (DispatchThreadCreated == FALSE)
{
HANDLE hDispThread3;
logloc->LogMethod("3 : Dispatch Thread CREATED");
hDispThread3 = (HANDLE)_beginthread(DispFrontEnd,0,(void *)dispatchInputs);
if ((long)hDispThread3 == -1)
{
log.LogError("3 : Thread DispFrontEnd Returned********BG ", (long)hDispThread3);
log.LogError("errno", errno);
log.LogError("_doserrno", _doserrno);
}
else
{
logloc->LogMethod("3 : Dispatch Thread CREATED");
DispatchThreadCreated= TRUE;
//Espera que a thread termine
WaitForSingleObject( hDispThread3, INFINITE );
DispatchThreadCreated= FALSE; // 01_02_2010
logloc->LogMethod("3 : Dispatch Thread ENDED");
}
}
void DispFrontEnd(void * indArr)
{
JNIEnv *env;
JavaVM *jvm;
env = create_vm(&jvm); // return null on second call ???
}
JNIEnv* create_vm(JavaVM ** jvm) {
CString str;
JNIEnv *env;
JavaVMInitArgs vm_args;
JavaVMOption options;
options.optionString = "-Djava.class.path=C:\\dispatch\\lib\\Run.jar;C:\\dispatch\\classes"; //Path to the java source code
vm_args.version = JNI_VERSION_1_6; //JDK version. This indicates version 1.6
vm_args.nOptions = 1;
vm_args.options = &options;
vm_args.ignoreUnrecognized = 0;
int ret = JNI_CreateJavaVM(jvm, (void**)&env, &vm_args);
if(ret < 0)
{
env = NULL;
str.Format("ERROR! create JVM (%d)",ret); // show this on second call!! ?
logloc->LogMethod( str );
}
else
{
str.Format("JVM %x created Success!",env->GetVersion());
logloc->LogMethod( str );
}
return env;
}
| Do you really have to start more than one JVM? Could you use
jint AttachCurrentThread(JavaVM *vm, JNIEnv **p_env, void *thr_args);
instead ?
The only thing I know is that a native thread cannot attach to two different JVMs at the same time.
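A minimal sketch of the attach approach (the JVM is created once with JNI_CreateJavaVM; each additional native thread then attaches to it rather than creating its own; most error checks are omitted):
JNIEnv* env = NULL;
jint rc = jvm->AttachCurrentThread((void**)&env, NULL); // jvm from the single successful JNI_CreateJavaVM
if (rc == JNI_OK)
{
    // ... use env in this thread ...
    jvm->DetachCurrentThread();
}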
|
2,179,065 | 2,179,248 | std::vector overwriting final value, rather than growing? | I'm having an issue where using vector.push_back(value) is overwriting the final value, rather than appending to the end. Why might this happen? I have a sample item in the vector, so it's size never hits zero. Below is the code..
void UpdateTable(vector<MyStruct> *Individuals, MyStruct entry)
{
MyStruct someEntry;
bool isNewEntry = true;
for (int i = 0; i < Individuals->size(); i++)
{
if (!(strcmp(Individuals->at(i).sourceAddress, entry.sourceAddress)))
{
isNewEntry = false;
//snip. some work done here.
}
}
if(isNewEntry)
{
Individuals->push_back(entry);
}
}
This lets my first "sample" value stay in, and will allow for just one more item in the vector. When 2 new entries are added, the second overwrites the first, so the size is never larger than 2.
edit: More code, since this is apparently not the issue?
void *TableManagement(void *arg)
{
//NDP table to store discovered devices.
//Filled with a row of sample data.
vector<MyStruct> discoveryTable;
MyStruct sample;
sample.sourceAddress = "Sample";
sample.lastSeen = -1;
sample.beaconReceived = 1;
discoveryTable.push_back(sample);
srand(time(NULL));
while(1)
{
int sleepTime = rand() % 3;
sleep(sleepTime);
MyStruct newDiscovery = ReceivedValue();
if (newDiscovery.lastSeen != -1000) //no new value from receivedValue()
{
UpdateTable(&discoveryTable, newDiscovery);
}
printTable(&discoveryTable);
}
return NULL;
}
| I'm going to hazard a guess:
Suppose MyStruct is declared like
struct MyStruct
{
const char *sourceAddress;
// Other Gubbins ...
};
And that ReceivedValue does something like
MyStruct ReceivedValue()
{
static char nameBuffer[MAX_NAME_LEN];
// Do some work to get the value, put the name in the buffer
MyStruct s;
s.sourceAddress = nameBuffer;
// Fill out the rest of MyStruct
return s;
}
Now, every structure you push into your vector has sourceAddress pointing to the same global buffer, every time you call ReceivedValue it overwrites that buffer with the new string - so every entry in your vector ends up with the same string.
I can't be sure without seeing the rest of your code, but I can be sure that if you follow some of the good C++ style suggestions in the comments to your question this possibility would go away.
Edit for clarification: there's no need to heap allocate your structures, simply declaring sourceAddress as a std::string would be sufficient to eliminate this possibility.
|
2,179,270 | 2,179,603 | Pass C# string to C++ and pass C++ result (string, char*.. whatever) to C# | I tried different things but I'm going mad with Interop.
(Here the word string does not refer to a specific variable type but to "a collection of chars".)
I have an unmanaged C++ function, defined in a dll, that i'm trying to access from C#, this function has a string parameter and a string return value like this:
string myFunction(string inputString)
{
}
What should the string be on the C++ side? And on the C# side? And what parameters does DllImport need for this?
| What I've found to work best is to be more explicit about what's going on here. Having a string as return type is probably not recommended in this situation.
A common approach is to have the C++ side be passed the buffer and buffer size. If it's not big enough for what GetString has to put in it, the bufferSize variable is modified to indicate what an appropriate size would be. The calling program (C#) would then increase the size of the buffer to the appropriate size.
If this is your exported dll function (C++):
extern "C" __declspec void GetString( char* buffer, int* bufferSize );
Matching C# would be the following (the DLL name in the DllImport attribute is a placeholder for your actual native DLL; the calling convention matches the cdecl default of the export above):
[DllImport( "MyNativeLib.dll", CallingConvention = CallingConvention.Cdecl )] // placeholder DLL name
static extern void GetString( StringBuilder buffer, ref int bufferSize );
So to use this in C# you would then do something like the following:
int bufferSize = 512;
StringBuilder buffer = new StringBuilder( bufferSize );
GetString( buffer, ref bufferSize );
|
2,179,345 | 2,181,710 | Python method to boost function | I have a method exported to Python using boost python that takes a boost::function as an argument.
From what I have read boost::python should support boost::function without much fuss, but when I try to call the function with a python method it gives me this error
Boost.Python.ArgumentError: Python argument types in
Class.createTimer(Class, int, method, bool)
did not match C++ signature:
createTimer(class Class {lvalue}, unsigned long interval,
class boost::function<bool _cdecl(void)> function, bool recurring=False)
I am calling it from python with this code
self.__class.createTimer( 3, test.timerFunc, False )
and in C++ it is defined as
boost::int32_t createTimer( boost::uint32_t interval, boost::function< bool() > function, bool recurring = false );
The goal here is a timer class where I can do something like
class->createTimer( 3, boost::bind( &funcWithArgs, arg1, arg2 ) )
to create a timer that executes the funcWithArgs. Thanks to boost bind this will work with pretty much any function or method.
So what is the syntax I need to use for boost::python to accept my python functions as a boost::function?
| Got an answer on the python mailing list, and after a bit of reworking and more research I got exactly what I wanted :)
I did see that post before mithrandi but I did not like the idea of having to declare the functions like that. With some fancy wrappers and a bit of python magic this can work and look good at the same time!
To start, wrap up your python object with code like this
struct timer_func_wrapper_t
{
timer_func_wrapper_t( bp::object callable ) : _callable( callable ) {}
bool operator()()
{
// These GIL calls make it thread safe, may or may not be needed depending on your use case
PyGILState_STATE gstate = PyGILState_Ensure();
bool ret = _callable();
PyGILState_Release( gstate );
return ret;
}
bp::object _callable;
};
boost::int32_t createTimerWrapper( Class* obj, boost::uint64_t interval, bp::object function, bool recurring = false )
{
return obj->createTimer( interval, boost::function<bool ()>( timer_func_wrapper_t( function ) ), recurring );
}
when in your class define the method like so
.def( "createTimer", &createTimerWrapper, ( bp::arg( "interval" ), bp::arg( "function" ), bp::arg( "recurring" ) = false ) )
With that little bit of wrapper you can work magic like this
import MyLib
import time
def callMePls():
print( "Hello world" )
return True
obj = MyLib.Class()
obj.createTimer( 3, callMePls )
time.sleep( 1 )
To mimic the C++ completely, we also need a boost::bind implementation which can be found here: http://code.activestate.com/recipes/440557/
With that, we can now do something like this
import MyLib
import time
def callMePls( str ):
print( "Hello", str )
return True
obj = MyLib.Class()
obj.createTimer( 3, bind( callMePls, "world" ) )
time.sleep( 1 )
EDIT:
I like to follow up on my questions when I can. I was using this code successfully for a while but I found out that this falls apart when you want to take boost::function's in object constructors.
There is a way to make it work similarly to this but the new object you construct ends up with a different signature and will not work with other objects like itself.
This finally bugged me enough to do something about it and since I know more about boost::python now I came up with a pretty good 'fits all' solution using converters.
This code here will convert a Python callable to a boost::function< bool() > object; it can be easily modified to convert to other boost::function signatures.
// Wrapper for timer function parameter
struct timer_func_wrapper_t
{
timer_func_wrapper_t( bp::object callable ) : _callable(callable) {}
bool operator()()
{
return _callable();
}
bp::object _callable;
};
struct BoostFunc_from_Python_Callable
{
BoostFunc_from_Python_Callable()
{
bp::converter::registry::push_back( &convertible, &construct, bp::type_id< boost::function< bool() > >() );
}
static void* convertible( PyObject* obj_ptr )
{
if( !PyCallable_Check( obj_ptr ) ) return 0;
return obj_ptr;
}
static void construct( PyObject* obj_ptr, bp::converter::rvalue_from_python_stage1_data* data )
{
bp::object callable( bp::handle<>( bp::borrowed( obj_ptr ) ) );
void* storage = ( ( bp::converter::rvalue_from_python_storage< boost::function< bool() > >* ) data )->storage.bytes;
new (storage)boost::function< bool() >( timer_func_wrapper_t( callable ) );
data->convertible = storage;
}
};
Then in your init code, ie, BOOST_PYTHON_MODULE(), just register the type by creating the struct
BOOST_PYTHON_MODULE(Foo)
{
// Register function converter
BoostFunc_from_Python_Callable();
// ... the rest of the module definitions go here ...
}
|
2,179,426 | 2,179,811 | How to structure data for optimal speed in a CUDA app | I am attempting to write a simple particle system that leverages CUDA to do the updating of the particle positions. Right now I am defining a particle has an object with a position defined with three float values, and a velocity also defined with three float values. When updating the particles, I am adding a constant value to the Y component of the velocity to simulate gravity, then adding the velocity to the current position to come up with the new position. In terms of memory management is it better to maintain two separate arrays of floats to store the data or to structure in a object oriented way. Something like this:
struct Vector
{
float x, y, z;
};
struct Particle
{
Vector position;
Vector velocity;
};
It seems like the size of the data is the same with either method (4 bytes per float, 3 floats per Vector, 2 Vectors per Particle, 24 bytes total). It seems like the OO approach would allow more efficient data transfer between the CPU and GPU because I could use a single memory copy statement instead of 2 (and in the long run more, as there are a few other bits of information about particles that will become relevant, like Age, Lifetime, Weight/Mass, Temperature, etc.). And then there's also just the simple readability of the code and ease of dealing with it that also makes me inclined toward the OO approach. But the examples I have seen don't utilize structured data, so it makes me wonder if there's a reason.
So the question is which is better: individual arrays of data or structured objects?
| It's common in data parallel programming to talk about "Struct of Arrays" (SOA) versus "Array of Structs" (AOS), where the first of your two examples is AOS and the second is SOA. Many parallel programming paradigms, in particular SIMD-style paradigms, will prefer SOA.
In GPU programming, the reason that SOA is typically preferred is to optimise the accesses to the global memory. You can view the recorded presentation on Advanced CUDA C from GTC last year for a detailed description of how the GPU accesses memory.
The main point is that memory transactions have a minimum size of 32 bytes and you want to maximise the efficiency of each transaction.
With AOS:
position[base + tid].x = position[base + tid].x + velocity[base + tid].x * dt;
// ^ write to every third address ^ read from every third address
// ^ read from every third address
With SOA:
position.x[base + tid] = position.x[base + tid] + velocity.x[base + tid] * dt;
// ^ write to consecutive addresses ^ read from consecutive addresses
// ^ read from consecutive addresses
In the second case, reading from consecutive addresses means that you have 100% efficiency versus 33% in the first case. Note that on older GPUs (compute capability 1.0 and 1.1) the situation is much worse (13% efficiency).
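A hedged sketch of the layout the SOA indexing above assumes (the arrays would be device pointers allocated elsewhere, e.g. with cudaMalloc):
struct Vectors
{
    float *x, *y, *z; // one contiguous array per component
};
Vectors position; // position.x[i], position.y[i], position.z[i]
Vectors velocity;
Each component lives in its own contiguous array, so threads with consecutive tid values touch consecutive addresses and the accesses coalesce.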
There is one other possibility - if you had two or four floats in the struct then you could read the AOS with 100% efficiency:
float4 lpos;
float4 lvel;
lpos = position[base + tid];
lvel = velocity[base + tid];
lpos.x += lvel.x * dt;
//...
position[base + tid] = lpos;
Again, check out the Advanced CUDA C presentation for the details.
|
2,179,477 | 2,179,588 | Best way to parse HTML in Qt? | How would I go about parsing all of the "a" html tags "href" properties on a page full of BAD html, in Qt?
| I would use the builtin QtWebKit. Don't know how it does in terms of performance, but I think it should catch all "bad" HTML.
Something like:
class MyPageLoader : public QObject
{
Q_OBJECT
public:
MyPageLoader();
void loadPage(const QUrl&);
public slots:
void replyFinished(bool);
private:
QWebView* m_view;
};
MyPageLoader::MyPageLoader()
{
m_view = new QWebView();
connect(m_view, SIGNAL(loadFinished(bool)),
this, SLOT(replyFinished(bool)));
}
void MyPageLoader::loadPage(const QUrl& url)
{
m_view->load(url);
}
void MyPageLoader::replyFinished(bool ok)
{
QWebElementCollection elements = m_view->page()->mainFrame()->findAllElements("a");
foreach (QWebElement e, elements) {
// Process element e
}
}
To use the class
MyPageLoader loader;
loader.loadPage("http://www.example.com")
and then do whatever you like with the collection.
|
2,179,506 | 2,179,535 | LoadString, static library and executables | My project is set up so all the framework code and modules are compiled to a static .lib (let's call it framework.lib), and many test projects use framework.lib and compile to executable files.
For error handling, I'm trying to put the resource strings in framework.rc (part of the framework.lib project) and load the strings in the executable files. However, LoadString() just fails. Using GetLastError() / FormatMessage() I get the following message:
"The specified resource type cannot be found in the image file."
Here is how I call LoadString, which returns 0:
char szString[256];
int iNbOfChars = LoadStringA(GetModuleHandle(NULL), iStringID, szString, 256);
Should what I do be failing because the resource is not defined in the app, but in the lib? If so, any suggestions so I can have a centralized resource file?
| Static libraries are just concatenations of .OBJ files - they don't have features like resources. To do this you need to put the resources in a DLL.
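Once the strings live in a DLL, you load them through that DLL's module handle rather than GetModuleHandle(NULL); a rough sketch (the DLL name is a placeholder and error checks are omitted):
HMODULE hRes = LoadLibraryExA("framework_resources.dll", NULL, LOAD_LIBRARY_AS_DATAFILE);
char szString[256];
int iNbOfChars = LoadStringA(hRes, iStringID, szString, 256);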
|
2,179,543 | 2,179,742 | C++ standard/de facto STL algorithm wrappers | Are there any standard/de facto standard (boost) wrappers around standard algorithms which work with containers defining begin and end. Let me show you what I mean with the code:
// instead of specifying begin and end
std::copy(vector.begin(), vector.end(), output);
// write as
xxx::copy(vector, output);
I know it can be written easily, but I am looking specifically for something ubiquitous.
Thanks.
| There is an extension to the Boost Range library called RangeEx which contains range wrappers for all STL algorithms, plus some new ones.
It has recently been accepted into Boost and so it's not yet in the current "official" release (1.41). Until this changes, you can download the latest version from the Boost Vault.
Don't know if this will ever become part of the C++ standard, but the fact that it's in Boost means that it will be the de facto standard.
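A small sketch of what the range-based calls look like (assuming a Boost version that ships the range algorithms):
#include <boost/range/algorithm/copy.hpp>
#include <iterator>
#include <iostream>
#include <vector>
std::vector<int> v; // ... filled elsewhere ...
boost::copy(v, std::ostream_iterator<int>(std::cout, " ")); // whole-container equivalent of std::copy(v.begin(), v.end(), ...)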
|
2,179,623 | 2,180,056 | How does QDebug() << stuff; add a newline automatically? | I'm trying to implement my own qDebug() style debug-output stream, this is basically what I have so far:
struct debug
{
#if defined(DEBUG)
template<typename T>
std::ostream& operator<<(T const& a) const
{
std::cout << a;
return std::cout;
}
#else
template<typename T>
debug const& operator<<(T const&) const
{
return *this;
}
/* must handle manipulators (endl) separately:
* manipulators are functions that take a stream& as argument and return a
* stream&
*/
debug const& operator<<(std::ostream& (*manip)(std::ostream&)) const
{
// do nothing with the manipulator
return *this;
}
#endif
};
Typical usage:
debug() << "stuff" << "more stuff" << std::endl;
But I'd like not to have to add std::endl;
My question is basically, how can I tell when the return type of operator<< isn't going to be used by another operator<< (and so append endl)?
The only way I can think of to achieve anything like this would be to create a list of things to print, associated with each temporary object created by qDebug(), then to print everything, along with a trailing newline (and I could do clever things like inserting spaces), in ~debug(), but obviously this is not ideal since I don't have a guarantee that the temporary object is going to be destroyed until the end of the scope (or do I?).
| Qt uses a method similar to @Evan. See a version of qdebug.h for the implementation details, but they stream everything to an underlying text stream, and then flush the stream and an end-line on destruction of the temporary QDebug object returned by qDebug().
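A hedged sketch of that approach applied to your own class: buffer everything in the temporary and emit the newline in the destructor, which runs at the end of the full expression (not the end of the enclosing scope), so each debug() << ... statement gets exactly one newline:
#include <iostream>
#include <sstream>
struct debug
{
    std::ostringstream buf_;
    ~debug() { std::cout << buf_.str() << std::endl; } // newline added when the temporary dies
    template<typename T>
    debug& operator<<(T const& a) { buf_ << a; return *this; }
};
// usage: debug() << "stuff" << "more stuff";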
|
2,179,624 | 2,179,803 | Can I force a parent window to redraw without causing its children to redraw? | Is it possible to invalidate a window without invalidating its children? (display invalidation to cause a repaint of the parent window, but not redraw its children)
This assumes that the parent window already has the "clipchildren" style, so that its painting wouldn't inherently invalidate the children.
| InvalidateRect() already does this. Another way is RedrawWindow() with the RDW_NOCHILDREN option.
|
2,179,724 | 2,179,840 | Why can't I index a std::vector in the immediate window? | So, I have a vector
std::vector<std::string> lines.
I fill this vector up, and can access it like
std::string temp = lines[0];
However, in the immediate window, both
lines[0] - error:overloaded operator not found
and
lines.at(0) - error:symbol is ambiguous
don't work at all. Is there a trick to using the immediate window with c++. I'm mostly coming from a C# background, where everything works nicely (and I have intellisense in the Immediate Window). I wasn't expecting C++ to be great, but I figured it would work for things besides ints. Can anyone tell me what I'm doing wrong? Thanks.
EDIT: I should be clear, nothing really works in the immediate window, this is just a simplified example
EDIT: I'm in debug mode
| The immediate and watch windows don't support overloaded operators. There is some support in there for printing standard containers as a whole in a sensible fashion (see, e.g., http://www.virtualdub.org/blog/pivot/entry.php?id=120), but this doesn't extend to being able to use operator[] on them.
Hopefully this will be improved in later revisions of the debugger, but for now, to look at the i'th element of a vector, try lines._Myfirst[i].
(_Myfirst, in the standard libraries that come with VC++, happens to be the member variable in a std::vector that points to the first element of the sequence. So this is just examining a vector as if it were any other object. To work this out, I had to look at the headers... not very convenient, but hopefully this will help you. You can probably do something similar with the other containers, but you'll have to look in the headers to work out how.)
(By the way, if you've been working in C#, the C++ debugger will probably seem by comparison a bit less slick in general, and this is just one example of that. I get the impression there's been much more work put into the CLR side.)
|
2,179,946 | 2,179,985 | I would like to see a hash_map example in C++ | I don't know how to use the hash function in C++, but I know that we can use hash_map. Does g++ support that by simply including #include <hash_map>? What is a simple example using hash_map?
| The current C++ standard does not have hash maps, but the coming C++0x standard does, and these are already supported by g++ in the shape of "unordered maps":
#include <unordered_map>
#include <iostream>
#include <string>
using namespace std;
int main() {
unordered_map <string, int> m;
m["foo"] = 42;
cout << m["foo"] << endl;
}
In order to get this compile, you need to tell g++ that you are using C++0x:
g++ -std=c++0x main.cpp
These maps work pretty much as std::map does, except that instead of providing a custom operator<() for your own types, you need to provide a custom hash function - suitable functions are provided for types like integers and strings.
|
2,179,999 | 2,180,304 | C++ Adobe source libraries impressions? | I just stumbled upon Adobe source libraries, ASL. It is set of templates and functions similar to boost, under MIT license.
Some of the utilities in the library I found quite useful and now I consider using it.
the library seems pretty straightforward, however.
Have you used ASL yourself? if so, what were your impressions? do you recommend it?
does it work well with a range of compilers and platforms e.g. IBM C++, ICC, g++?
have you encountered quirks/unexpected things?
thanks
|
ASL uses Boost heavily, so it's not so much similar to Boost as (in some cases) a relatively thin wrapper around Boost.
The "big" pieces of ASL are Adam and Eve. Most of the rest appears to be (and if memory serves, really is) little more than support for those.
ASL hasn't been updated in a while, and if I'm not mistaken some of what it provides in wrappers around Boost has now been incorporated into the Boost libraries themselves (most Boost authors have been aware of ASL at least since they featured in Sean Parent's keynote presentation at Boostcon 1).
My own experience with them has been somewhat mixed. At one time, I used a couple of their Boost-wrapper classes a bit, but IIRC, within the next release or two, the bits I cared about were available in Boost without any wrappers (though offhand, I don't remember exactly what those pieces were...)
Adam and Eve are kind of cool for playing around with different UI layouts and such -- but I've never used them for a finished version of a program. At least to me, it appears that they're useful primarily with a relatively complex UI. My impression was that if you find them very useful, your UI probably needs work. If you need Adam and Eve to help understand what's going on, chances are your users can't figure out either.
OTOH, there are probably at least a few cases where a dialog is clear to a user, but the code is much less so to a developer. If you do a lot of things like disabling some controls until values have been entered in other controls, it can make it a lot easier to ensure controls stay disabled until all the values they depend upon have been entered.
|
2,180,161 | 2,180,814 | Can anyone explain event handling in C++ please? | I should mention that i am using Mac OS X, XCode.
When a buffer has finished writing to file, it generates an event to tell the gui to read the data off the file.
I am not sure what kind of event I would need in this case. Is it possible to do it without using an event?
Thank you.
| Event handling in C++ primarily consists of exceptions and signals. The exact details of how these are handled is best described in the specification or one of Stroustrup's books.
Other event handling, such as mouse clicks, interrupts, and semaphores, is handled by the OS. Different OSes have different API and set up requirements for handling events. Many multi-thread and multi-tasking OSes allow a program to sleep until an event occurs (such as a setting a semaphore, generating a signal or sending a message).
You need to have your program, or thread, signal the GUI when finished writing to a file. Signal is defined by your OS or GUI framework.
FYI, in most designs, buffers don't write to files. Programs, tasks, or execution threads write buffers to files. Having a buffer write to a file may generate more signaling or context switching than having a thread write a buffer to a file.
|
2,180,368 | 2,180,477 | State machine implementation | I have a state machine as described below.
We can start in one of two starting states, but we must hit all 4 states of the handshake. From there, we can either transfer a payload of data or receive a payload of data. Then, we return to our original starting state.
Handshake:
-> StartingState1 -> FinalState1 -> StartingState2 -> FinalState2
-> StartingState2 -> FinalState2 -> StartingState1 -> FinalState1
Payload Transfer:
-> SendPayload -> SendEnd -> StartingState?
-> ReceivePayload -> ReceiveEnd -> StartingState?
The code below represents my current architecture. Unfortunately, at the end of each process, I don't have enough information from within the states to know what the next state is I should hit.
Does anybody have any suggestions on how to improve this architecture based on my requirements?
Thanks,
PaulH
class MyMachine;
class Payload;
class IState
{
MyMachine* context_;
IState( MyMachine* context ) : context_( context) {};
virtual void Consume( byte data );
void ChangeState( IState* state )
{
context_->SetState( state );
}
}
class FinalState1 : IState
{
void Consume( byte data )
{
// Either go to StartingState1, SendPayload, or ReceivePayload.
// How can I tell from within the context of this state where I
// should go?
}
}
class StartingState1 : IState
{
void Consume( byte data )
{
if ( /*some condition*/ )
{
ChangeState( new FinalState1( context_ ) );
}
}
}
class MyMachine
{
IState* state_;
Payload* payload_;
void Start1( Mode mode )
{
state_ = new StartingState1( this );
}
void Start2( Mode mode )
{
state_ = new StartingState2( this );
}
void Consume( byte data )
{
state_->Consume( data );
}
void SetPayload( const Payload* payload )
{
payload_ = payload;
}
const Payload* GetPayload()
{
return payload_;
}
void SetState( State* state )
{
delete state_;
state_ = state;
}
}
// get a byte of data from some source
byte GetData();
void main()
{
MyMachine machine;
Payload payload;
machine.SetPayload( payload );
machine.Start1( Mode::SendPayload );
// could also call:
// machine.Start1( Mode::ReceivePayload );
// machine.Start2( Mode::SendPayload );
// machine.Start2( Mode::ReceivePayload );
for(;;)
{
machine.Consume( GetData() );
}
}
| What you have doesn't represent the possible states of your system completely, but it's easy to transform it so that it does. You need additional states to represent the difference between being in state 1 and not having been in state 2, and being in state 1, whilst having been in state 2 (and the same for state 2). So you need:
S1 S2 F1 F2 S12 F12 S21 F21
SP SE
RP RE
with transitions
S1 --> F1
F1 --> S12
S12 --> F12
F12 --> SP or F12 --> RP
S2 --> F2
F2 --> S21
S21 --> F21
F21 --> SP or F21 --> RP
SP --> SE
RP --> RE
SE --> S1 or SE --> S2
RE --> S1 or RE --> S2
The key difference is the introduction of new states S12, F12, S21 and F21. In terms of implementation you could almost certainly just derive S12 from S2, F12 from F2, S21 from S1 and F21 from F1 and override the transition function to go to the correct state.
(Apologies for acronymising all your states).
|
2,180,501 | 2,180,769 | In C++, are static initializations of primitive types to constant values thread-safe? | i.e., would the following be expected to execute correctly even in a multithreaded environment?
int dostuff(void) {
static int somevalue = 12345;
return somevalue;
}
Or is it possible for multiple threads to call this, and one call to return whatever garbage was at &somevalue before execution began?
| From the C++ Standard, section 6.7:
A local object of POD type (3.9) with static storage duration initialized with constant-expressions is initialized before its block is first entered.
This means that a function-level static object must be initialised by the first time the function is entered, not necessarily when the process as a whole is initialised. At this point, multiple threads may well be running.
|
2,180,527 | 2,180,545 | sort() function in C++ | I found two forms of sort() in C++:
1) sort(begin, end);
2) XXX.sort();
One can be used directly without an object, and one is working with an object.
Is that all? What's the differences between these two sort()? They are from the same library or not? Is the second a method of XXX?
Can I use it like this
vector<int> myvector
myvector.sort();
or
list<int> mylist;
mylist.sort();
| std::sort is a function template that works with any pair of random-access iterators. Consequently, the algorithm implemented by std::sort is tailored (optimized) for random access. Most of the time it is going to be some flavor of quick-sort.
std::list::sort is a dedicated list-oriented version of sort. std::list is a container that does not support [efficient] random access, which means that you can't use std::sort with it. This creates the need for a dedicated sorting algorithm. Most of the time it will be implemented as some flavor of merge-sort.
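To make the usage question concrete, a small sketch of the standard way to do each:
#include <algorithm>
#include <vector>
#include <list>
std::vector<int> myvector; // ... fill it ...
std::sort(myvector.begin(), myvector.end()); // free-function sort; vector has no .sort() member
std::list<int> mylist; // ... fill it ...
mylist.sort(); // member sort; std::sort cannot be used on list iterators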
|
2,180,561 | 2,180,583 | Visual Studio resource editor: there can only be one string table? | I created a string table in my .rc file containing my English strings - now I need to add another string table for a different language.
If I try to do:
Add Resource... -> String Table -> New
I get the error: "there cannot be more than one instance of this type".
I know I can open up the .rc file in notepad and add language in there but how am I suppose to do this from inside Visual Studio?
| Yes, it is very well hidden. Double-click the .rc file in Solution Explorer to open the Resource View window. Expand the String Table node, right-click "String Table" and select "Insert Copy". That takes you to the language selection combo.
|
2,180,698 | 2,180,751 | How do I create a non managed Windows GUI in Visual C++? | When I create a 'Windows Forms Application', the resultant program is a managed one. Creating a 'Win32 Application' results in a native one, but when I try to add a form I'm informed that the project will be converted to CLI if I continue. How do I design a native Windows GUI with Visual C++ 2008 Express Edition? I'm probably being very silly here, but I just can't figure it out.
| As Reed Copsey said, MFC would be the "default" way of creating a native unmanaged GUI on the Windows platform. However, MFC is not included with Visual Studio Express. Consequently, you would either need to upgrade to the full version or you could look into using a freely available C++ GUI library such as wxWidgets.
There is also wxFormBuilder if you want a GUI editor.
You could also go down to the "bare metal" and code right to the Win32 API, maybe take some help from the common controls library. But you'll be entering a world of pain ;)
|
2,180,755 | 2,180,782 | C++ GUI Tutorial: undefined reference to TextOut | So after a bit of searching for Win32 GUI tutorials (I decided a tutorial on making GUIs might make me more active in making C++ applications and therefore stronger at programming in C++ in general,) I came across a rohitab tutorial. There are two parts that I have been able to find. Part 1 worked fine, and I'm now working on Part 2, however, I'm getting this error in Code::Blocks:
C:\Users\John\Documents\Windows GUIs\first_gui.cpp||In function 'C:\Users\John\Documents\Windows GUIs\first_gui.o:first_gui.cpp:(.text+0x281)||undefined reference to '_TextOutA@20'|
My code can be found here (broken link).
I would greatly appreciate any help.
| Did you link your app against GDI32.LIB?
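A hedged example of what that looks like with the MinGW toolchain Code::Blocks typically uses (file names are placeholders; in the IDE you would instead add the library under the project's linker settings):
g++ first_gui.cpp -o first_gui.exe -mwindows -lgdi32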
|
2,180,909 | 2,185,061 | How to use ALSA's snd_pcm_writei()? | Can someone explain how snd_pcm_writei
snd_pcm_sframes_t snd_pcm_writei(snd_pcm_t *pcm, const void *buffer,
snd_pcm_uframes_t size)
works?
I have used it like so:
for (int i = 0; i < 1; i++) {
f = snd_pcm_writei(handle, buffer, frames);
...
}
Full source code at http://pastebin.com/m2f28b578
Does this mean, that I shouldn't give snd_pcm_writei() the number of
all the frames in buffer, but only
sample_rate * latency = frames
?
So if I e.g. have:
sample_rate = 44100
latency = 0.5 [s]
all_frames = 100000
The number of frames that I should give to snd_pcm_writei() would be
sample_rate * latency = frames
44100*0.5 = 22050
and the number of iterations the for-loop should be?:
(int) 100000/22050 = 4; with frames=22050
and one extra, but only with
100000 mod 22050 = 11800
frames?
Is that how it works?
Louise
http://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m.html#gf13067c0ebde29118ca05af76e5b17a9
| frames should be the number of frames (samples) you want to write from the buffer. Your system's sound driver will start transferring those samples to the sound card right away, and they will be played at a constant rate.
The latency is introduced in several places. There's latency from the data buffered by the driver while waiting to be transferred to the card. There's at least one buffer full of data that's being transferred to the card at any given moment, and there's buffering on the application side, which is what you seem to be concerned about.
To reduce latency on the application side you need to write the smallest buffer that will work for you. If your application performs a DSP task, that's typically one window's worth of data.
There's no advantage in writing small buffers in a loop - just go ahead and write everything in one go - but there's an important point to understand: to minimize latency, your application should write to the driver no faster than the driver is writing data to the sound card, or you'll end up piling up more data and accumulating more and more latency.
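A rough sketch of a write loop that respects this (assumes interleaved frames; 'channels' and the buffer type are placeholders for whatever your stream actually uses):
const short *ptr = buffer;
snd_pcm_uframes_t remaining = all_frames;
while (remaining > 0) {
    snd_pcm_sframes_t n = snd_pcm_writei(handle, ptr, remaining);
    if (n == -EPIPE) { snd_pcm_prepare(handle); continue; } // underrun: recover and retry
    if (n < 0) break; // some other error
    ptr += n * channels; // writei returns the number of frames actually written, possibly fewer than requested
    remaining -= n;
}
In the default blocking mode, snd_pcm_writei waits for room in the driver's buffer, so this loop naturally runs in lockstep with playback.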
For a design that makes producing data in lockstep with the sound driver relatively easy, look at jack (http://jackaudio.org/) which is based on registering a callback function with the sound playback engine. In fact, you're probably just better off using jack instead of trying to do it yourself if you're really concerned about latency.
|
2,181,025 | 2,181,810 | Draw World Map [WGS84] with OpenGL in a wide Range | I want to draw a Map, consisting of 256x256 pixel Tiles (a lot of them). These cover a big area.
The transformation to convert lat|lon(from the globe) to x|y (map) spawns quite big values.
With these big values goes accuracy, and so, if I go far away from the 0/0 Point, I get inaccuracies of multiple pixels between textures of two Tiles being next to each other (e.g. 2E8 | 0 and 2E8-1 | 0).
How can I fix these messy unwanted grid appearances? The current failing implementation is to use float to draw the primitives (this should not matter, as all Tiles are clipped to multiples of 256 in both coordinate directions).
Note: I already tried to use glTranslated for the offset, but either double's accuracy is also not enough, or this is not the reason for the glitches.
| Actually, I have recently run into the exact same problem in my implementation of rendering entire planets from space to ground. In fact, what you have to do is create a new position structure that splits the accuracy between different floats. For example,
struct location
{
vec3 metre;
vec3 megametre;
vec3 terametre;
vec3 exametre;
vec3 yottametre;
};
Then you code all of the operators and type cast functions for this structure (+, -, *, /, toVec3), you then use this location struct to encode the location of your camera and each grid tile. On rendering time, you dont translate the camera, but instead you translate the tiles by difference. For example:
void render()
{
// ...
location diff = this->position - camera.position;
vec3 diffvec = diff.toVec3();
glPushMatrix();
glTranslatef(diffvec.x, diffvec.y, diffvec.z);
// render the tile
glPopMatrix();
}
What this does is remove the difference calculation from the OpenGL pipeline, which only has up to double precision, and put the work on your program, which can essentially have infinite precision. Now the precision and accuracy fall off the further away you are from the camera instead of the further away you are from the origin.
Happy Coding.
|
2,181,028 | 2,181,129 | C++ Seg fault on reference to stored base class pointer | I'm getting some nasty segmentation faults through the g++ compiler on the following code. Any ideas on why this would happen and how to fix it would be great.
#include <iostream>
using namespace std;
class Base {
public:
Base() {}
virtual ~Base() {};
virtual int getNum(int) = 0;
};
class Derived: public Base {
public:
Derived() :
Base() {}
~Derived() {}
int getNum(int num) {
return num;
}
};
class Foo {
public:
Foo() {
};
void init() {
Derived n;
*baseId = n;
}
void otherStuff() {
cout << "The num is" << baseId->getNum(14) << baseId->getNum(15) << baseId->getNum(16) << baseId->getNum(15) << endl;
}
Derived* baseId;
};
int main() {
Foo f;
f.init();
f.otherStuff();
return 0;
}
| void init() {
Derived n;
*baseId = n;
}
Apart from what Neil noted, derived n is local to your init function. It "dies" when you exit the function, so even if you assigned it correctly, it won't work.
What you want is not assigning on the stack but on the heap:
void init() {
baseId = new Derived();
}
or even better:
void init() {
delete baseId;
baseId = new Derived();
}
and a destructor and constructor pair to prevent problems :
Foo() : baseId(0) {};
~Foo() { delete baseId; }
If going for this method, be sure to either block copy constructor and assignment operator, or implement them properly. To implement them however, you'd need to implement copying of Derived too -- or best: use a safe shared_ptr to store the pointer.
|
2,181,135 | 2,181,198 | Finite State Machine : Bad design? | Are Finite State Machines generally considered as bad design in OOP ?
I hear that a lot. And, after I had to work on a really old, undocumented piece of C++ making use of it, I tend to agree. It was a pain to debug.
what about readability/maintainability concerns?
| FSMs should never be considered bad. They are far too useful, but people who aren't accustomed to them will often consider them burdensome.
There are numerous ways to implement one with OOP. Some are uglier than others. Your low-level guys will use switch statements, jump tables or even "goto."
If you're looking for a cleaner way to do it, I'd recommend Boost's State Chart library, which is built just for implementing UML state diagrams in C++. It makes use of modern template techniques, to make things more readable. It also performs very well.
|
2,181,205 | 2,181,226 | utf-8 to/from utf-16 problem | I based these two conversion functions and an answer on StackOverflow, but converting back-and-forth doesn't work:
std::wstring MultiByteToWideString(const char* szSrc)
{
unsigned int iSizeOfStr = MultiByteToWideChar(CP_ACP, 0, szSrc, -1, NULL, 0);
wchar_t* wszTgt = new wchar_t[iSizeOfStr];
if(!wszTgt) assert(0);
MultiByteToWideChar(CP_ACP, 0, szSrc, -1, wszTgt, iSizeOfStr);
std::wstring wstr(wszTgt);
delete[] wszTgt;
return(wstr);
}
std::string WideStringToMultiByte(const wchar_t* wszSrc)
{
int iSizeOfStr = WideCharToMultiByte(CP_ACP, 0, wszSrc, -1, NULL, 0, NULL, NULL);
char* szTgt = new char[iSizeOfStr];
if(!szTgt) return(NULL);
WideCharToMultiByte(CP_ACP, 0, wszSrc, -1, szTgt, iSizeOfStr, NULL, NULL);
std::string str(szTgt);
delete[] szTgt;
return(str);
}
[...]
// はてなブ in utf-16
wchar_t wTestUTF16[] = L"\u306f\u3066\u306a\u30d6\u306f\u306f";
// shows the text correctly
::MessageBoxW(NULL, wTestUTF16, L"Message", MB_OK);
// convert to UTF8, and back to UTF-16
std::string strUTF8 = WideStringToMultiByte(wTestUTF16);
std::wstring wstrUTF16 = MultiByteToWideString(strUTF8.c_str());
// this doesn't show the proper text. Should be same as first message box
::MessageBoxW(NULL, wstrUTF16.c_str(), L"Message", MB_OK);
| Check the docs for WideCharToMultiByte(). CP_ACP converts using the current system code page. That's a very lossy one. You want CP_UTF8.
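Concretely, that means changing only the code-page argument in the conversion calls; a sketch against the functions above:
// in WideStringToMultiByte:
int iSizeOfStr = WideCharToMultiByte(CP_UTF8, 0, wszSrc, -1, NULL, 0, NULL, NULL);
WideCharToMultiByte(CP_UTF8, 0, wszSrc, -1, szTgt, iSizeOfStr, NULL, NULL);
// in MultiByteToWideString:
unsigned int iSizeOfStr = MultiByteToWideChar(CP_UTF8, 0, szSrc, -1, NULL, 0);
MultiByteToWideChar(CP_UTF8, 0, szSrc, -1, wszTgt, iSizeOfStr);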
|
2,181,474 | 2,181,510 | How to use Redis within a C++ program? | What would be the best way to use a Redis DB within a C++ program?
| Using a C bindings library? There doesn't seem to be a C++ wrapper available anywhere.
|
2,181,594 | 2,181,761 | Select mutex or dummy mutex at runtime | I have a class that is shared between several projects, some uses of it are single-threaded and some are multi-threaded. The single-threaded users don't want the overhead of mutex locking, and the multi-threaded users don't want to do their own locking and want to be able to optionally run in "single-threaded mode." So I would like to be able to select between real and "dummy" mutexes at runtime.
Ideally, I would have a shared_ptr<something> and assign either a real or fake mutex object. I would then "lock" this without regard to what's in it.
unique_lock<something> guard(*mutex);
... critical section ...
Now there is a signals2::dummy_mutex but it does not share a common base class with boost::mutex.
So, what's an elegant way to select between a real mutex and a dummy mutex (either the one in signals2 or something else) without making the lock/guard code more complicated than the example above?
And, before you point out the alternatives:
I could select an implementation at compile time, but preprocessor macros are ugly and maintaining project configurations is painful for us.
Users of the class in a multi-threaded environment do not want to take on the responsibility of locking the use of the class rather than having the class do its own locking internally.
There are too many APIs and existing usages involved for a "thread-safe wrapper" to be a practical solution.
| How about something like this?
It's untested but should be close to OK.
You might consider making the template class hold a value rather than a pointer
if your mutexes support the right kinds of constructions. Otherwise you could specialise the MyMutex class to get value behaviour.
Also it's not being careful about copying or destruction .. I leave that as an exercise to the reader ;) ( shared_ptr or storing a value rather than a pointer should fix this)
Oh, and the code would be nicer using RAII rather than explicit lock/unlock... but that's a different question. I assume that's what the unique_lock in your code does?
struct IMutex
{
virtual ~IMutex(){}
virtual void lock()=0;
virtual bool try_lock()=0;
virtual void unlock()=0;
};
template<typename T>
class MyMutex : public IMutex
{
public:
MyMutex(T* t) : t_(t) {}
void lock() { t_->lock(); }
bool try_lock() { return t_->try_lock(); }
void unlock() { t_->unlock(); }
protected:
T* t_;
};
IMutex * createMutex()
{
if( isMultithreaded() )
{
return new MyMutex<boost::mutex>( new boost::mutex );
}
else
{
return new MyMutex<boost::signals2::dummy_mutex>( new boost::signals2::dummy_mutex );
}
}
int main()
{
IMutex * mutex = createMutex();
...
{
unique_lock<IMutex> guard( *mutex );
...
}
}
|
2,181,600 | 2,181,645 | Need basic help parsing a string in C++ | C++ is not my preferred language.
I have a file that contains this:
e 225,370 35,75
I want to separate e, 225, 370, 35 and 75 from each other into a char and ints but I'm having trouble. I tried doing everything I found online and in my C++ book and still it's not working out. Please help.
I would have an easier time doing this in Java.
| #include <iostream>
#include <fstream>
using namespace std;
int main()
{
ifstream f("a.txt"); // check for errors.
char ch,dummy;
int i1,i2,i3,i4;
f>>ch>>i1>>dummy>>i2>>i3>>dummy>>i4;
cout<<ch<<endl<<i1<<endl<<i2<<endl<<i3<<endl<<i4<<endl;
return 0;
}
|
2,181,659 | 2,181,678 | Help understanding multidimensional array and pointer notation in c++ | I understand the basic idea that when an array is the sole operand of the & or sizeof() operator, it decays to a pointer to the first element in the array. I'm unsure how these notations work though. In our text, there is the 1-D case, vs the 3-D case for an array. The first example is the function declaration for a function called average. The 1-D case is
double average(double set[]) or
double average(double *set)
Those make sense to me. The equivalent multi-D case does not. Their declaration is
double average (double set[][DIM1][DIM2]) or
double average (double (*set)[DIM1][DIM2])
Similarly, the function declaration for printing a value for 1-D is:
double *printvalue(double value)
The multi-D case is:
double (*printvalue(double value))[DIM1][DIM2]
Can anyone shed any light on this? Thanks.
| The parameter still decays to a pointer. The important part is that DIM1 and DIM2 specify the size of all but one dimension. So, if we have:
double average (double myset[][DIM1][DIM2])
myset[0][0] is DIM2 * sizeof(double) before myset[0][1]. Together, the two dimensions say that myset[0] is DIM1 * DIM2 * sizeof(double) before myset[1]. You don't need a DIM0 on the left, because the number of rows doesn't affect the pointer arithmetic. You can always leave out the leftmost dimension for this reason.
|
2,181,742 | 2,181,757 | Identical Class Member Names and Function Argument Names in C++ | I have a simple object that holds some [public] data.
I want to keep my interface clean, so I don't want to pre-/post- fix anything to the names of the publically accessible variables nor to the names of my function arguments.
That said, I ended up doing something like this:
template<typename T> struct Foo
{
explicit Foo(T x) : x(x) // This [i.e., x(x)] seems to be doing the "Right Thing", but is this well defined?
{
/* No pre-/post- fixing. */
}
T x; // No pre-/post- fixing.
};
Just to reiterate: All I'm asking is whether this is well defined behavior. Not whether I should or shouldn't be doing this...
Thanks.
| Yes, that's fine, and perfectly standard.
Local variables always come first in a name lookup, but the x(...) in an initialization list can obviously only refer to member variables [edit:or a base class].
If you didn't use the initialization list, you would have to write:
explicit Foo(T x)
{
this->x = x;
}
|
2,181,920 | 2,232,375 | Qt::What needs to be included in the configuration to use dbus? | I'm using a configuration of Qt that is stripped down as much as possible, but now I need to use D-Bus and can't figure out what I need to include to be able to use it. There doesn't seem to be anything obvious to me using the qconfig tool. The errors I get at the moment when making are:
qdbus_symbols.cpp:53: error: expected initializer before ‘*’ token
qdbus_symbols.cpp: In function ‘void qdbus_unloadLibDBus()’:
qdbus_symbols.cpp:57: error: ‘qdbus_libdbus’ was not declared in this scope
qdbus_symbols.cpp: In function ‘bool qdbus_loadLibDBus()’:
qdbus_symbols.cpp:67: error: ‘QLibrary’ was not declared in this scope
qdbus_symbols.cpp:67: error: ‘lib’ was not declared in this scope
qdbus_symbols.cpp:67: error: ‘qdbus_libdbus’ was not declared in this scope
qdbus_symbols.cpp:71: error: expected type-specifier before ‘QLibrary’
qdbus_symbols.cpp:71: error: expected ‘;’ before ‘QLibrary’
qdbus_symbols.cpp:85: error: type ‘<type error>’ argument given to ‘delete’, expected pointer
qdbus_symbols.cpp: In function ‘void* qdbus_resolve_conditionally(const char*)’:
qdbus_symbols.cpp:93: error: ‘qdbus_libdbus’ was not declared in this scope
qdbus_symbols.cpp: In function ‘void* qdbus_resolve_me(const char*)’:
qdbus_symbols.cpp:103: error: ‘qdbus_libdbus’ was not declared in this scope
make[1]: *** [.obj/release-static-emb-x86/qdbus_symbols.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: Leaving directory `/home/mark/qt-qvfb-4.5.3-static/src/dbus'
make: *** [sub-dbus-make_default-ordered] Error 2
Does anyone know which necessary module I might not be including, or how to find out? Thanks
| QT += dbus
should be enough to include the dbus option in the .pro project file, isn't it?
I need more info to give an appropriate answer.
|
2,181,933 | 2,181,941 | Is there a way to find the cardinality (size) of an enum in C++? | Could one write a function that returns the number of elements in an enum? For example, say I have defined:
enum E {x, y, z};
Then f(E) would return 3.
| Nope.
If there were, you wouldn't see so much code like this:
enum E {
VALUE_BLAH,
VALUE_OTHERBLAH,
...
VALUE_FINALBLAH,
VALUE_COUNT
}
Note that this code is also a hint for a (nasty) solution -- if you add a final "guard" element, and don't explicitly state the values of the enum fields, then the last "COUNT" element will have the value you're looking for -- this happens because enum numbering is zero-based:
enum B {
ONE, // has value = 0
TWO, // has value = 1
THREE, // has value = 2
COUNT // has value = 3 - cardinality of enum without COUNT
}
|
2,181,955 | 2,181,999 | Center an OpenGL window with GLUT | I have an openGL window that is 640x480 that I need to center in the middle of the screen. I previously used:
glutInitWindowPosition((GetSystemMetrics(SM_CXSCREEN)-640)/2,
(GetSystemMetrics(SM_CYSCREEN)-480)/2);
which WORKED.
But now all of a sudden when I compile...
Linking...
1>Project1.obj : error LNK2028: unresolved token (0A000372) "extern "C" int __stdcall GetSystemMetrics(int)" (?GetSystemMetrics@@$$J14YGHH@Z) referenced in function "int __cdecl main(int,char * *)" (?main@@$$HYAHHPAPAD@Z)
1>Project1.obj : error LNK2019: unresolved external symbol "extern "C" int __stdcall GetSystemMetrics(int)" (?GetSystemMetrics@@$$J14YGHH@Z) referenced in function "int __cdecl main(int,char * *)" (?main@@$$HYAHHPAPAD@Z)
1>C:\Users\My Computer\Documents\School Stuff\CS445\Project1\Debug\Project1.exe : fatal error LNK1120: 2 unresolved externals
Someone please help. This is very annoying and frustrating for me as I don't know a lot about OpenGL and GLUT.
| Also, instead of linking user32.lib you can do it solely using glut:
glutGet(GLUT_SCREEN_WIDTH) // returns Screen width
and
glutGet(GLUT_SCREEN_HEIGHT) // returns Screen height
Why depend on Windows when you can be cross-platform?
Hence, your code would look:
glutInitWindowPosition((glutGet(GLUT_SCREEN_WIDTH)-640)/2,
(glutGet(GLUT_SCREEN_HEIGHT)-480)/2);
|
2,182,235 | 2,189,582 | OUTDATED - Error modes for OpenCV | I am using OpenCV 1 to do some image processing, and am confused about the cvSetErrMode function (which is part of CxCore).
OpenCV has three error modes.
Leaf: The program is terminated after the error handler is called.
Parent: The program is not terminated, but the error handler is called.
Silent: Similar to Parent mode, but no error handler is called
At the start of my code, I call cvSetErrMode(CV_ErrModeParent) to switch from the default 'leaf' mode to 'parent' mode so my application is not terminated with an exception/assertion pop up.
Unfortunately 'parent' mode doesn't seem to be working. I still get the message dialog pop up, and my application still terminates.
If I call cvSetErrMode(CV_ErrModeSilent) then it actually goes silent, and no longer quits the application or throws up a dialog... but this also means that I don't know that an error has occurred. In this case, I think the mode is being set correctly.
Has anyone else seen this behaviour before and might be able to recommend a solution?
References:
cvSetErrMode function reference
Open CV Error handling mode reference
| I am going to answer my own question, because after some fiddling around I have worked out what happens.
When you switch to 'parent' mode instead of leaf mode, there is an error handler that gets called: cvGuiBoxReport(), which is the default error handler. It seems that even in parent mode, cvGuiBoxReport() still terminates your application! Oops.
So, to get around that you can write your own error handler, and redirect the error to be handled and NOT terminate the application.
An example error handler:
int MyErrorHandler(int status, const char* func_name, const char* err_msg, const char* file_name, int line, void*)
{
std::cerr << "Woohoo, my own custom error handler" << std::endl;
return 0;
}
You can set up parent mode and redirect your error with:
cvSetErrMode(CV_ErrModeParent);
cvRedirectError(MyErrorHandler);
|
2,182,408 | 2,182,451 | Return a const reference or a copy in a getter function? | What's better as default, to return a copy (1) or a reference (2) from a getter function?
class foo {
public:
std::string str () { // (1)
return str_;
}
const std::string& str () { // (2)
return str_;
}
private:
std::string str_;
};
I know (2) could be faster, but it doesn't have to be, due to (N)RVO. (1) is safer concerning dangling references, but the object will probably outlive the use of the returned reference anyway, or the reference is never stored.
What's your default when you write a class and don't know (yet) whether performance and lifetime issues matter?
Additional question: Does the game change when the member is not a plain string but rather a vector?
| Well it really depends on what you expect the behaviour to be, by default.
Do you expect the caller to see changes made to str_ unbeknownst(what a word!) to them? Then you need to pass back a reference. Might be good if you can have a refcounted data member and return that.
If you expect the caller to get a copy, do 1).
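To make the lifetime point concrete, here is a tiny sketch (make_foo is an illustrative factory, not from the question):
foo make_foo();   // returns a foo by value

void use()
{
    const std::string& r = make_foo().str();
    // With (2), r refers into a temporary foo that is destroyed at the end of
    // this statement, so using r afterwards is undefined behaviour.
    // With (1), r binds to the returned copy and lifetime extension keeps it
    // valid for the rest of the scope.
}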
|
2,182,454 | 2,182,473 | Why does this code using `::boost::bind` get a compiler error? | This code:
#include <boost/signals.hpp>
#include <boost/bind.hpp>
#include <boost/mem_fn.hpp>
#include <iostream>
class Recorder : public ::boost::signals::trackable {
public:
void signalled() {
const void *me = this;
::std::cerr << "Recorder at " << me << " signalled!\n";
}
};
void signalled()
{
::std::cerr << "Signalled!\n";
}
int main(int argc, const char *argv[])
{
::boost::signal<void ()> sig;
sig.connect(&signalled);
{
Recorder r;
sig.connect(::boost::bind(&Recorder::signalled, &r, _1));
sig();
}
sig();
return 0;
}
is generating these compiler errors:
In file included from move_constructor.cpp:2:
/usr/include/boost/bind.hpp: In instantiation of ‘boost::_bi::result_traits<boost::_bi::unspecified, void (Recorder::*)()>’:
/usr/include/boost/bind/bind_template.hpp:15: instantiated from ‘boost::_bi::bind_t<boost::_bi::unspecified, void (Recorder::*)(), boost::_bi::list2<boost::_bi::value<Recorder*>, boost::arg<1> > >’
move_constructor.cpp:25: instantiated from here
/usr/include/boost/bind.hpp:67: error: ‘void (Recorder::*)()’ is not a class, struct, or union type
In file included from /usr/include/boost/function/detail/maybe_include.hpp:13,
from /usr/include/boost/function/function0.hpp:11,
from /usr/include/boost/signals/signal_template.hpp:38,
from /usr/include/boost/signals/signal0.hpp:24,
from /usr/include/boost/signal.hpp:19,
from /usr/include/boost/signals.hpp:9,
from move_constructor.cpp:1:
/usr/include/boost/function/function_template.hpp: In static member function ‘static void boost::detail::function::void_function_obj_invoker0<FunctionObj, R>::invoke(boost::detail::function::function_buffer&) [with FunctionObj = boost::_bi::bind_t<boost::_bi::unspecified, void (Recorder::*)(), boost::_bi::list2<boost::_bi::value<Recorder*>, boost::arg<1> > >, R = void]’:
/usr/include/boost/function/function_template.hpp:904: instantiated from ‘void boost::function0<R>::assign_to(Functor) [with Functor = boost::_bi::bind_t<boost::_bi::unspecified, void (Recorder::*)(), boost::_bi::list2<boost::_bi::value<Recorder*>, boost::arg<1> > >, R = void]’
/usr/include/boost/function/function_template.hpp:720: instantiated from ‘boost::function0<R>::function0(Functor, typename boost::enable_if_c<boost::type_traits::ice_not::value, int>::type) [with Functor = boost::_bi::bind_t<boost::_bi::unspecified, void (Recorder::*)(), boost::_bi::list2<boost::_bi::value<Recorder*>, boost::arg<1> > >, R = void]’
/usr/include/boost/function/function_template.hpp:1040: instantiated from ‘boost::function<R()>::function(Functor, typename boost::enable_if_c<boost::type_traits::ice_not::value, int>::type) [with Functor = boost::_bi::bind_t<boost::_bi::unspecified, void (Recorder::*)(), boost::_bi::list2<boost::_bi::value<Recorder*>, boost::arg<1> > >, R = void]’
/usr/include/boost/signals/slot.hpp:111: instantiated from ‘boost::slot<SlotFunction>::slot(const F&) [with F = boost::_bi::bind_t<boost::_bi::unspecified, void (Recorder::*)(), boost::_bi::list2<boost::_bi::value<Recorder*>, boost::arg<1> > >, SlotFunction = boost::function<void()>]’
move_constructor.cpp:25: instantiated from here
/usr/include/boost/function/function_template.hpp:152: error: no match for call to ‘(boost::_bi::bind_t<boost::_bi::unspecified, void (Recorder::*)(), boost::_bi::list2<boost::_bi::value<Recorder*>, boost::arg<1> > >) ()’
This is with g++ 4.4.1 on a Fedora 11 box with the Fedora 11 boost-1.37.0 package installed.
This code seems perfectly kosher to me. I don't understand what's going on here, and the maze of template expansion related errors is very confusing. Does anybody know what the problem is?
| sig.connect(::boost::bind(&Recorder::signalled, &r, _1));
What is the _1 placeholder there for? It's not needed: connect expects a void -> void function. If you remove the needless placeholder, the code will compile.
You provide &Recorder::signalled -- a member function of type void (Recorder::*)() -- correctly bind a Recorder pointer, changing it to void -> void, and then additionally leave a placeholder _1 -- which is obviously wrong.
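Concretely, the connect call from the question would become:
sig.connect(::boost::bind(&Recorder::signalled, &r));   // no _1 placeholder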
|
2,182,598 | 2,182,667 | Static function access in other files | Is there any chance that a function defined with static can be accessed outside the file scope?
| It depends upon what you mean by "access". Of course, the function cannot be called by name in any other file since it's static in a different file, but you can have a function pointer to it.
$ cat f1.c
/* static */
static int number(void)
{
return 42;
}
/* "global" pointer */
int (*pf)(void);
void initialize(void)
{
pf = number;
}
$ cat f2.c
#include <stdio.h>
extern int (*pf)(void);
extern void initialize(void);
int main(void)
{
initialize();
printf("%d\n", pf());
return 0;
}
$ gcc -ansi -pedantic -W -Wall f1.c f2.c
$ ./a.out
42
|
2,182,784 | 2,182,901 | Length of a BYTE array in C++ | I have a program in C++ that has a BYTE array that stores some values. I need to find the length of that array i.e. number of bytes in that array. Please help me in this regard.
This is the code:
BYTE *res;
res = (BYTE *)realloc(res, (byte_len(res)+2));
byte_len is a fictitious function that returns the length of the BYTE array and I would like to know how to implement it.
| Given your code:
BYTE *res;
res = (BYTE *)realloc(res, (byte_len(res)+2));
res is a pointer to type BYTE. The fact that it points to a contiguous sequence of n BYTEs is only because you allocated it that way. The information about the length is not part of the pointer. In other words, res points to only one BYTE, and if you point it to a location you have access to, you can use it to reach BYTE values before or after it.
BYTE data[10];
BYTE *res = &data[2];
/* Now you can access res[-2] to res[7] */
So, to answer your question: you definitely know how many BYTEs you allocated when you called malloc() or realloc(), so you should keep track of the number.
Finally, your use of realloc() is wrong, because if realloc() fails, you leak memory. The standard way to use realloc() is to use a temporary:
BYTE *tmp;
tmp = (BYTE *)realloc(res, n*2);
if (tmp == NULL) {
/* realloc failed, res is still valid */
} else {
/* You can't use res now, but tmp is valid. Reassign */
res = tmp;
}
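A small sketch of keeping the length alongside the pointer (the names are illustrative; BYTE is the usual Windows typedef for unsigned char):
#include <cstdlib>
typedef unsigned char BYTE;

void grow_example()
{
    size_t res_len = 10;
    BYTE *res = (BYTE *)malloc(res_len);

    BYTE *tmp = (BYTE *)realloc(res, res_len + 2);   // grow by 2 bytes
    if (tmp != NULL) {
        res = tmp;
        res_len += 2;   // the length always travels with the pointer
    }
    free(res);
}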
|
2,183,067 | 2,185,691 | boost::Spirit Grammar for unsorted schema | I have a section of a schema for a model that I need to parse. Lets say it looks like the following.
{
type = "Standard";
hostname="x.y.z";
port="123";
}
The properties are:
The elements may appear unordered.
All elements that are part of the schema must appear, and no other.
All of the elements' synthesised attributes go into a struct.
(optional) The schema might in the future depend on the type field -- i.e., different fields based on type -- however I am not concerned about this at the moment.
| According to the Spirit forums, the following is the answer.
You might want to have a look at the permutation parser:
a ^ b ^ c
which matches a or b or c (or a combination thereof) in any sequence.
If the objective is to parse into a struct, then the best way to test whether all essential members have been initialized is to wrap the struct members with boost::optional<>. The attribute presence may then be easily tested post-parsing at run-time.
|
2,183,087 | 2,183,106 | Why can't I use float value as a template parameter? | When I try to use float as a template parameter, the compiler cries for this code, while int works fine.
Is it because I cannot use float as a template parameter?
#include<iostream>
using namespace std;
template <class T, T defaultValue>
class GenericClass
{
private:
T value;
public:
GenericClass()
{
value = defaultValue;
}
T returnVal()
{
return value;
}
};
int main()
{
GenericClass <int, 10> gcInteger;
GenericClass < float, 4.6f> gcFlaot;
cout << "\n sum of integer is "<<gcInteger.returnVal();
cout << "\n sum of float is "<<gcFlaot.returnVal();
return 0;
}
Error:
main.cpp: In function `int main()':
main.cpp:25: error: `float' is not a valid type for a template constant parameter
main.cpp:25: error: invalid type in declaration before ';' token
main.cpp:28: error: request for member `returnVal' in `gcFlaot',
which is of non-class type `int'
I am reading "Data Structures for Game Programmers" by Ron Penton, the author passes a float, but when I try it it doesn't seem to compile.
| The current C++ standard does not allow float (i.e. real number) or character string literals to be used as template non-type parameters. You can of course use the float and char * types as normal arguments.
Perhaps the author is using a compiler that doesn't follow the current standard?
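One common workaround, sketched here (not something the book necessarily suggests), is to encode the default value as an integer ratio instead:
template <class T, int Num, int Den>
class GenericClass
{
private:
    T value;
public:
    GenericClass() : value(static_cast<T>(Num) / Den) {}
    T returnVal() { return value; }
};

// usage: GenericClass<float, 46, 10> gcFloat;   // behaves like a default of 4.6f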
|
2,183,113 | 2,184,971 | using catch(...) (ellipsis) for post-mortem analysis | Someone in a different question suggested using catch(...) to capture all otherwise unhandled - unexpected/unforeseen - exceptions by surrounding the whole main() with the try{}catch(...){} block.
It sounds like an interesting idea that could save a lot of time debugging the program and leave at least a hint of what happened.
The essence of the question is what information can be recovered that way (other than whatever debug globals I leave behind), and how to recover it (how to access and recognize whatever catch was called with)
Also, what caveats are connected with it. In particular:
will it play nice with threads that sprout later?
will it not break handling segfaults (captured elsewhere as signal)
will it not affect other try...catch blocks inevitably nested inside, that are there to handle expected exceptions?
| Yes it is a good idea.
If you let an exception escape main it is implementation defined whether the stack is unwound before the application is shut down. So in my opinion it is essential you catch all exceptions in main.
The question then becomes what to do with them.
Some OS (See MS and SE) provide some extra debugging facilities so it is useful to just re-throw the exception after you catch it (because the stack has been unwound now anyway).
int main()
{
try
{
/// All real code
}
// I see little point in catching other exceptions at this point
// (apart from better logging maybe). If the exception could have been caught
// and fixed you should have done it before here.
catch(std::exception const& e)
{
// Log e.what() Slightly better error message than ...
throw;
}
catch(...) // Catch all exceptions. Force the stack to unwind correctly.
{
// You may want to log something it seems polite.
throw; // Re-throw the exception so OS gives you a debug opportunity.
}
}
will it play nice with threads that sprout later?
It should have no effect on threads. Usually you have to manually join any child threads to make sure that they have exited. The exact details of what happens to child threads when main exits are not well defined (so read your documentation), but usually all child threads will die instantly (a nasty and horrible death that does not involve unwinding their stacks).
If you are talking about exceptions in child threads: again, this is not well defined (so read your documentation), but if a thread exits via an exception (i.e. the function used to start the thread exits because of an exception and not a return) then this usually causes the application to terminate (same effect as above). So it is always best to stop ALL exceptions from exiting a thread.
will it not break handling segfaults (captured elsewhere as signal)
Signals are not affected by the exception handling mechanism.
But because signal handlers may place an odd structure on the stack (for their own return handling back to normal code) it is not a good idea to throw an exception from within a signal handler as this may cause unexpected results (and is definitely not portable).
will it not affect other try...catch blocks inevitably nested inside, that are there to handle expected exceptions?
Should have no effect on other handlers.
|
2,183,119 | 2,183,156 | Should I use XML or Binary to send data from server to client? | I have two separate apps - one a client (in C#), one a server (in C++). They need to exchange data in the form of "structs" and ~ about 1 MB of data a minute is sent from server to client.
What's better to use - XML or my own binary format?
With XML:
Translating XML to a struct using a parser would be slow I believe? ("good",but: load parser, load XML, parse)
The other option is parsing XML with regex (bad!)
With Binary:
compact data sizes
no need for meta information like tags;
but structs cannot be changed easily to accommodate new structs/new members in structs in future;
no conversion from text (XML) to binary (struct) necessary, so it is faster to receive and "assemble" into a struct
Any pointers? Should I not be considering binary at all?? A bit confused about what approach to take.
| 1MB of data per minute is pretty tiny if you've got a reasonable network connection.
There are other choices between binary and XML - other human-readable text serialization formats, such as JSON.
When it comes to binary, you don't have to have versioning problems - technologies like Protocol Buffers (I'm biased: I work for Google and I've ported PB to C#) are explicitly designed with backward and forward compatibility in mind. There are other binary formats to consider as well, such as Thrift.
If you're worried about performance though, you should really measure it. I'm pretty sure my phone could parse 1MB of XML sufficiently quickly for it not to be a problem in this case... basically work out what you're most concerned about, in terms of:
Simplicity of code
Interoperability
Performance in terms of CPU
Network traffic
Backward/forward compatibility
Human readability of on-the-wire format
It's all a balancing act - but you're the one who has to decide how much weight to give each of those factors.
|
2,183,129 | 2,183,337 | C# and C++ Synchronize between processes | We have 2 applications. One written in C# and the other in C++. We need to maintain a counter (in memory) shared between these processes. Every time one of these applications start, it needs to check for this counter and increase it and every time the application shut-down it needs to decrease the counter. If the application has a crash or shut-down using task manager, we also need the counter to decrease.
We thought of using one of the OS synchronization objects like MUTEX.
My question: What kind of sync object is the best for cross process (when one is C# and the other C++)
Hope my question was clear.
Thank you very much,
Adi Barda
| You might get away with a named semaphore. A semaphore is basically a count; it exists to allow developers to limit the number of threads/processes that are accessing some resource. Usually it works like this:
You create a semaphore with maximum count N.
N threads call a waiting function on it, WaitForSingleObject or similar, and each of them goes on without waiting. Each time, the internal semaphore counter goes down.
N+1 thread also calls waiting function but because internal counter of our semaphore is 0 now, it has to wait.
One of our first N threads releases the semaphore by calling ReleaseSemaphore function. This function increments internal counter of semaphore.
Our waiting thread doesn't have to wait now, so it resumes, but the semaphore counter goes back to 0.
I don't think this is how you want to use it though. So, instead, you should:
Create named semaphore with initial counter set to zero.
When application start, immediately release it, increasing the counter. You'll get previous counter value during that call.
When application ends, call WaitForSingleObject(hSemaphore, 0), decreasing the counter. 0 means you don't want to wait.
This is all quite easy.
In C++
//create semaphore
HANDLE hSemaphore = CreateSemaphore(NULL, 0, BIG_NUMBER, "My cool semaphore name");
//increase counter
LONG prev_counter;
ReleaseSemaphore(hSemaphore, 1, &prev_counter);
//decrease counter
WaitForSingleObject(hSemaphore, 0);
In C#
using System.Threading;
//create semaphore
Semaphore sem = new Semaphore(0, BIG_NUMBER, "My cool semaphore name");
//increase counter
int prev_counter = sem.Release();
//decrease counter
sem.WaitOne(0);
Names and BIG_NUMBERs should be the same obviously.
If this is not sufficient for your task, you will have to look into shared memory and lock access to it though named mutex, but that is a little bit more complicated.
|
2,183,145 | 2,183,183 | Best C++ library to interact with ICQ? | I have a Qt-based windows application written in C++ that needs to report its status via ICQ and react to some ICQ messages. Does anyone know a good library to interact with ICQ? I need to connect to ICQ with a registered login and password, send messages to specified contacts and receive messages from them.
| I know of two streamline options:
libpurple - The core library behind Pidgin
Telepathy - The core behind Empathy
|
2,183,505 | 2,183,609 | ways to improve the launch speed of C++ application | Recently, my boss asked me to improve the launch speed of our application, the AP was written with C++.
The AP is a little big, it uses 200+ DLLs, and Windows needs a long time to enter the main() function. I tried these two ways, but still can't make our boss happy.
delay load dll http://msdn.microsoft.com/en-us/library/yx9zd12s(VS.80).aspx
use EDITBIN to modify the EXE http://msdn.microsoft.com/en-us/library/xd3shwhf(VS.80).aspx
Are there other ways to improve it? Thanks in advance.
| You need to profile your application in order to determine the true cause of the slowdown. For example, it could be that you are spending most of the time in some initialization routine of one the .dll's you are loading. Go find yourself a good profiling tool and then determine where the bottleneck is.
|
2,183,557 | 2,183,593 | SMTP email on Windows using C++ | Is there any Windows Native API to send SMTP based message using C++
| There is not a native API. You will need a 3rd party component or to build this yourself.
Useful information in this thread:
https://stackoverflow.com/questions/58210/c-smtp-example
|
2,183,649 | 2,183,699 | what is invalidate,update methods do in VC++ | I have a small doubt regarding the window functions in VC++.
What exactly does the Invalidate() function do?
What message does it send? When do we need to call it? Also, what is the Update() function?
Does InvalidateRect() work similarly to Invalidate()?
Thanks
| CWnd::Invalidate() invalidates the entire client area of a window, which indicates that the area is out of date, and should be repainted. You would typically call this on a control that needs to be redrawn. CWnd::InvalidateRect() invalidates only part of the window.
With the Invalidate functions, the WM_PAINT message will be posted [not strictly true; see the comments] to the message queue and handled at some point in the future. CWnd::UpdateWindow() sends (as opposed to posts) a WM_PAINT message, causing the invalidated regions to be redrawn immediately.
Really, this is all in the docs.
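For concreteness, a typical MFC usage sketch (CMyView and OnDataChanged are illustrative names, not from the question):
void CMyView::OnDataChanged()
{
    CRect dirty(10, 10, 110, 60);
    InvalidateRect(&dirty);   // mark only this rectangle as needing repainting
    // Invalidate();          // or mark the whole client area instead
    UpdateWindow();           // force the pending WM_PAINT to be handled right now
}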
|
2,183,650 | 2,183,797 | C# shared memory between C++ application and C# application using COM object | Is it OK/possible to have a shared memory in a COM object which will be consumed by applications built using C# and/or C++?
Can C# access shared memory in COM object without crash?
Thanks,
Adi Barda
| I don't really see what role the COM object plays in this. But, no, there isn't anything that makes MMFs fundamentally incompatible with managed code. They are merely very awkward to use since you need pointers to access them.
Support for MMFs is coming in .NET 4.0. It is probably too soon to use that in production code. But do make sure to take a look at the API so that what you do now is going to fit the API well when you switch.
|
2,183,712 | 2,183,739 | A little problem in handling 2-D arrays with class | class linklist4x4
{
private:
struct node4x4
{
double data[4][4];
node4x4 *link;
}*p;
public:
linklist4x4();
void append( double* num );
void add_as_first( double* num );
void addafter( int c, double* num );
//void del( double* num );
void display();
int count();
double* getdata(int c);
~linklist4x4();
};
The above is my class declaration.
Now below is a function declaration.
void linklist4x4::append(double* num)
{
node4x4 *q,*t;
if( p == NULL )
{
p = new node4x4;
for(int i=0;i<4;i++)
for(int j=0;j<4;j++)
p->data[i][j]=num[i][j];//error line
p->link = NULL;
}
else
{
q = p;
while( q->link != NULL )
q = q->link;
t = new node4x4;
for(int i=0;i<4;i++)
for(int j=0;j<4;j++)
t->data[i][j]=num[i][j];//error line
t->link = NULL;
q->link = t;
}
}
When i try to compile, it gives me an error saying that "error C2109: subscript requires array or pointer type" at the marked commented lines as shown.
Can anyone please help?
| You have defined the method append to take a 1-D array of type double. But in the function you are making use of it as a 2-D array.
The data in each of your linked list nodes is a 2-D array of double.
So all your member functions should accept a 2-D array of double as an argument.
Change:
void append( double* num );
void add_as_first( double* num );
void addafter( int c, double* num );
to
void append( double num[][4] );
void add_as_first( double num[][4] );
void addafter( int c, double num[][4] );
|
2,183,961 | 2,184,244 | Is creating a base class for all applications of a particular type good design? | I am trying to write a graphics application in C++. It currently uses OGRE for display, but I'd like it to work with Irrlicht or any other engine, even a custom rendering engine which supports my needs. This is a rather long question, so I'd appreciate help on re-tagging/ cleanup (if necessary). I'll start with a little background.
The application has three major states:
1. Display rasterized scene
2. Display a ray traced version of the same scene
3. Display a hybrid version of the scene
Clearly, I can divide my application into four major parts:
1. A state management system to switch between the above modes.
2. An input system that can receive both keyboard and mouse input.
3. The raster engine used for display.
4. The ray tracing system.
Any application encompassing the above needs to be able to:
1. Create a window.
2. Do all the steps needed to allow rendering in that window.
3. Initialize the input system.
4. Initialize the state manager.
5. Start looping (and rendering!).
I want to be able to change the rendering engine/state manager/input system/ ray tracing system at any time, so long as certain minimum requirements are met. Imho, this requires separating the interface from the implementation. With that in mind, I created the interfaces for the above systems.
At that point, I noticed that the application has a common 'interface' as well. So I thought to abstract it out into an ApplicationBase class with virtual methods. A specific application, such as one which uses OGRE for window creation, rendering etc would derive from this class and implement it.
My first question is - is it a good idea to design like this?
Here is the code for the base class:
#ifndef APPLICATION_H
#define APPLICATION_H
namespace Hybrid
{
//Forward declarations
class StateManager;
class InputSystem;
//Base Class for all my apps using hybrid rendering.
class Application
{
private:
StateManager* state_manager;
InputSystem* input_system;
public:
Application()
{
try
{
//Create the state manager
initialise_state_manager();
//Create the input system
initialise_input_system();
}
catch(...) //Change this later
{
//Throw another exception
}
}
~Application()
{
delete state_manager;
delete input_system;
}
//If one of these fails, it throws an
//exception.
virtual void initialise_state_manager() = 0;
virtual void initialise_input_system() = 0;
virtual void create_window() = 0;
//Other methods.
};
} //namespace Hybrid
#endif
When I use OGRE, I rely on OGRE to create the window. This requires OGRE to be initialised before the createWindow() function is called in my derived class. Of course, as it is, createWindow is going to be called first! That leaves me with the following options:
1. Leave the base class constructor empty.
2. In the derived class implementation, make initialising OGRE part of the createWindow function.
3. Add an initialize render system pure virtual function to my base class. This runs the risk of forcing a dummy implementation in derived classes which have no use for such a method.
My second question is- what are your recommendations on the choice of one of these strategies for initialising OGRE?
| You are mixing two unrelated functions in one class here. First, it serves as a syntactic shortcut for declaring and initializing StateManager and InputSystem members. Second, it declares abstract create_window function.
If you think there should be a common interface - write an interface (pure abstract class).
Additionally, write something like OgreManager self-contained class with initialization (looping etc) methods and event callbacks. Since applications could create and initialize this object at any moment, your second question is solved automatically.
Your design may save a few lines of code for creating new application objects, but the price is maintaining soup-like master object with potentially long inheritance line.
Use interfaces and callbacks.
P.S.: not to mention that calling virtual functions in constructor doesn't mean what you probably expect.
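To illustrate that P.S.: during a base-class constructor the object's dynamic type is still the base class, so a virtual call never reaches the derived override (and calling a pure virtual function there, as the question's constructor does, is undefined behaviour). A minimal sketch:
#include <iostream>

struct Base
{
    Base() { init(); }   // dispatches to Base::init, not Derived::init
    virtual ~Base() {}
    virtual void init() { std::cout << "Base::init\n"; }
};

struct Derived : Base
{
    void init() { std::cout << "Derived::init\n"; }
};

int main()
{
    Derived d;   // prints "Base::init" -- the Derived part doesn't exist yet
    return 0;
}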
|
2,184,123 | 2,184,257 | Detect removable drive (e.g. USB flash drive) C/C++ | How can I detect when a removable disk drive is (dis)connected to the system? How to get the mount path (for Linux) and the drive letter (for windows)?
EDIT: Is there a way to detect the currently connected devices?
| For Windows, the API RegisterDeviceNotification will let you know when a USB device is added. The information about the volume is given in the DEV_BROADCAST_VOLUME structure. The dbcv_unitmask gives the drive letter.
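For volume arrival/removal specifically, the WM_DEVICECHANGE broadcast is usually enough on its own; a rough sketch of decoding the drive letter (the usual window-procedure boilerplate is omitted):
#include <windows.h>
#include <dbt.h>

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_DEVICECHANGE && wParam == DBT_DEVICEARRIVAL)
    {
        PDEV_BROADCAST_HDR hdr = reinterpret_cast<PDEV_BROADCAST_HDR>(lParam);
        if (hdr && hdr->dbch_devicetype == DBT_DEVTYP_VOLUME)
        {
            PDEV_BROADCAST_VOLUME vol = reinterpret_cast<PDEV_BROADCAST_VOLUME>(hdr);
            for (int i = 0; i < 26; ++i)                 // bit 0 = A:, bit 1 = B:, ...
            {
                if (vol->dbcv_unitmask & (1 << i))
                {
                    char driveLetter = static_cast<char>('A' + i);
                    (void)driveLetter;                   // the new volume's drive letter
                }
            }
        }
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}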
|
2,184,125 | 2,184,140 | Getting input from user using cin | I am using Turbo C++ 3.0 Compiler
While using the following code ..
char *Name;
cin >> Name;
cout << Name;
When I give input with a space ... it's only saving the characters typed before the space ..
like if I gave input "QWERT YUIOP" ... Name will contain "QWERT";
Any explanation why??
| You need to allocate space for the char array into which you want to read the Name. char *Name; will not work as it only declares a char pointer not a char array. Something like char Name[30];
Also, cin >> only allows us to enter one word into a string (char name[30]).
However, there is a cin function that reads text containing blanks.
cin.get(name, MAX)
get will read all characters including spaces until MAX characters have been read or the end of line character ('\n') is reached, and will put them into the name variable.
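A minimal standard-C++ sketch of that (Turbo C++ 3.0 would use <iostream.h> and no std:: namespace):
#include <iostream>
using namespace std;

int main()
{
    const int MAX = 30;
    char Name[MAX];
    cin.get(Name, MAX);     // reads up to MAX-1 characters or until '\n'
    cout << Name << endl;   // e.g. prints "QWERT YUIOP"
    return 0;
}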
|
2,184,276 | 2,184,723 | Modifying items of boost multi index container | struct tagEnumdef{}; struct tagName{}; struct tagWidget{};
template< class type > class ParamTags;
template<> class ParamTags<int> { public: typedef tagEnumdef tag; };
template<> class ParamTags<QString> { public: typedef tagName tag; };
template<> class ParamTags<QWidget*>{ public: typedef tagWidget tag; };
typedef boost::multi_index::multi_index_container
<
ParamRegistrationEntry,
boost::multi_index::indexed_by
<
boost::multi_index::ordered_unique< boost::multi_index::tag<tagEnumdef>, BOOST_MULTI_INDEX_CONST_MEM_FUN( ParamRegistrationEntry, int, enumdef ) >,
boost::multi_index::ordered_unique< boost::multi_index::tag<tagName>, BOOST_MULTI_INDEX_CONST_MEM_FUN( ParamRegistrationEntry, QString, name ) >,
boost::multi_index::ordered_unique< boost::multi_index::tag<tagWidget>, BOOST_MULTI_INDEX_CONST_MEM_FUN( ParamRegistrationEntry, QWidget*, widget ) >
>
> ParamRegisterIndexContainer;
T t_; // int, QString or QWidget*
ParamRegisterIndexContainer* const register_;
register_->modify( register_->get<ParamTags<T>::tag>().find( t_ ), ... ); // C2664
error C2664: 'bool boost::multi_index::detail::ordered_index<KeyFromValue,Compare,SuperMeta,TagList,Category>::modify<boost::lambda::lambda_functor<T>>(boost::multi_index::detail::bidir_node_iterator<Node>,Modifier)' :
cannot convert parameter 1
from 'boost::multi_index::detail::bidir_node_iterator<Node>'
to 'boost::multi_index::detail::bidir_node_iterator<Node>'
With
Node=ordered_index_node<index_node_base<...>>
Node=ordered_index_node<ordered_index_node<ordered_index_node<index_node_base<...>>
I've stripped down parts which shouldn't matter. Are the 3 ordered_index_node's related to the 3 keys I've defined in the container? I get an iterator from 1 index with get(), but modify() seems to require some sort of combination?
| It is my understanding that modify() should be called on an index, not on a container. So what you want to write is probably more like:
typedef typename ParamTags<T>::tag TagType;
// Get the proper index
ParamRegisterIndexContainer::index<TagType>::type& index = register_->get<TagType>();
// Modify a value found in this index
index.modify(index.find(t_), ...);
|
2,184,421 | 2,184,431 | Problem compiling simple C++ progam | I was provided this simple C++ [I think] program to investigate the maximum size of int that can be stored:
#include <limits.h>
#include <iostream>
void main ( int argc , char * argv[])
{
cout << "INT_MAX " << INT_MAX << endl ;
cout << "INT_MAX +1 = " << INT_MAX + 1 << endl ;
cout << "INT_MAX -1 = " << INT_MAX - 1 << endl ;
cout << "INT_MAX / INT_MAX " << INT_MAX /INT_MAX << endl ;
cout << "(INT_MAX +1) / INT_MAX " << (INT_MAX +1) /INT_MAX << endl;
cout << "(INT_MAX -1) / INT_MAX " << (INT_MAX -1) /INT_MAX <<endl;
cout << "INT_MAX / (INT_MAX +1) " << INT_MAX /(INT_MAX+1) <<endl;
cout << "INT_MAX / (INT_MAX -1) " << INT_MAX /(INT_MAX -1) <<endl;
}
I'm attempting to compile with:
gcc -o int_max int_max.cpp
But I get the following error:
int_max.cpp:4: error: '::main' must return 'int'
int_max.cpp: In function 'int main(int, char**)':
int_max.cpp:8: error: 'cout' was not declared in this scope
int_max.cpp:8: error: 'endl' was not declared in this scope
int_max.cpp:9: warning: integer overflow in expression
int_max.cpp:13: warning: integer overflow in expression
int_max.cpp:15: warning: integer overflow in expression
I tried adding a return 0 at the end of main but that didn't help. Any idea what I've done wrong?
P.S It's possible this is actually a C snippet but I seem to remember the lecturer saying it was C++.
Cheers
| You are compiling C++ code with gcc in a file with .c extension?
// Use new C++ header files instead of their .h version.
#include <climits>
#include <iostream>
// cout and endl are declared in the std namespace.
using namespace std;
int main (int argc, char * argv[])
{
cout << "INT_MAX " << INT_MAX << endl ;
cout << "INT_MAX +1 = " << INT_MAX + 1 << endl ;
cout << "INT_MAX -1 = " << INT_MAX - 1 << endl ;
cout << "INT_MAX / INT_MAX " << INT_MAX /INT_MAX << endl ;
cout << "(INT_MAX +1) / INT_MAX " << (INT_MAX +1) /INT_MAX << endl;
cout << "(INT_MAX -1) / INT_MAX " << (INT_MAX -1) /INT_MAX <<endl;
cout << "INT_MAX / (INT_MAX +1) " << INT_MAX /(INT_MAX+1) <<endl;
cout << "INT_MAX / (INT_MAX -1) " << INT_MAX /(INT_MAX -1) <<endl;
return 0;
}
and use g++ to compile.
|
2,184,656 | 2,184,756 | Making wide application (with plugins) | I'm going to make my application extensible.
Where can I read information about writing programs which support plugins?
C++
| A plug-in architecture is what you need to look up and read about. An SO answer will not help beyond providing a few stray links. I'll try to explain as briefly as I can: Typically, plug-ins are a set of dynamic libraries that the host application loads (usually at start up, sometimes delay loaded for efficiency purposes). They then become part of the application and behave as if they were a native/core component. Hence, you need to rethink your application's architecture and module design as well. Here is a set of questions you'll need to answer (a minimal loading sketch follows the list):
What do you call the core?
What do you want the plug-ins to do?
What set of core functionality will the plug-ins need?
If your application is cross-platform you'll need to make sure your plug-in APIs are cross-platform too -- which usually involves some work.
Do you want the plug-ins to modify the UI? This opens up a whole new box of surprises.
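As a rough illustration of the dynamic-library pattern described above, here is a minimal loading sketch (IPlugin, create_plugin and plugin.so are made-up names; on Windows the analogous calls are LoadLibrary/GetProcAddress):
#include <dlfcn.h>     // POSIX dynamic loading
#include <iostream>

struct IPlugin
{
    virtual ~IPlugin() {}
    virtual void run() = 0;
};

typedef IPlugin* (*create_fn)();   // each plug-in exports this with extern "C"

int main()
{
    void* handle = dlopen("./plugin.so", RTLD_NOW);
    if (!handle) { std::cerr << dlerror() << "\n"; return 1; }

    create_fn create = reinterpret_cast<create_fn>(dlsym(handle, "create_plugin"));
    if (create)
    {
        IPlugin* p = create();   // from here on it behaves like a native component
        p->run();
        delete p;
    }
    dlclose(handle);
    return 0;
}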
|
2,184,702 | 2,184,727 | How to generate a good random seed to pass to srand()? | I am writing a C++ program which needs to create a temporary file for its internal usage. I would like to allow concurrent executions of the program by running multiple processes, so the temporary file name needs to be randomized, that way each spawned process will generate a unique temporary file name for its own use.
I am using rand() to generate random characters for part of the file name, so I need to initialize the random number generator's seed using srand().
What options are there for passing a good argument to srand() such that two processes will not be initialized with the same seed value?
My code needs to work both on Windows and on Linux.
| The question is actually asking how to create a uniquely-named temporary file.
The operating system probably provides an API for this, which means you do not have to generate your own name.
On Windows, its called GetTempFileName() and GetTempPath().
On Unix, use tmpfile().
(Windows supports tmpfile() too; however, I've heard reports that from others that, whilst it works nicely on XP, it fails on Vista if you're on the C: drive and you are not an administrator; best to use the GetTempFileName() method with a custom, safe path)
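A minimal sketch of the Windows route mentioned above (error handling trimmed):
#include <windows.h>
#include <iostream>

int main()
{
    char tempPath[MAX_PATH] = {0};
    char tempFile[MAX_PATH] = {0};

    GetTempPathA(MAX_PATH, tempPath);                 // e.g. "C:\Users\...\Temp\"
    GetTempFileNameA(tempPath, "app", 0, tempFile);   // creates a unique, empty file
    std::cout << "Temporary file: " << tempFile << std::endl;
    return 0;
}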
|
2,184,932 | 2,847,043 | Opaque object for template in another namespace | I know how to do an opaque object in C++ as following:
// my_class.hpp
class opaque_object;
class my_class {
my_class();
~my_class();
opaque_object *m_opaque_object;
};
// my_class.cpp
#include <my_class.hpp>
class opaque_object {
// ...
};
my_class::my_class() { m_opaque_object = new opaque_object(); }
my_class::~my_class() {delete m_opaque_object; }
Now how to do it when the opaque object is an existing class template in a different namespace without including the header file of this one. The following code is not good, it is just here to illustrate my problem.
// my_class.hpp
class third::party::library::opaque_object<
third::party::library::templated_class>;
class my_class {
my_class();
~my_class();
third::party::library::opaque_object<
third::party::library::templated_class> *m_opaque_object;
};
// my_class.cpp
#include <my_class.hpp>
#include <third/party/library/opaque_object.hpp>
#include <third/party/library/template_class.hpp>
typedef third::party::library::opaque_object<
third::party::library::templated_class> opaque_object;
my_class::my_class() { m_opaque_object = new opaque_object(); }
my_class::~my_class() {delete m_opaque_object; }
The actual class in my source code is even more heavily templated than this example (4 template arguments, some of which are themselves templated classes).
Since my_class.hpp is used almost everywhere in my project, the overall compilation takes a lot of time (5 sec just to include my_class.hpp for each cpp file), so I would like to avoid including opaque_object.hpp, template_class.hpp, ... in my_class.hpp.
How can this be done? Any comments or ideas are welcome.
Thanks in advance for your time.
| You need to forward-declare each templated class, as well as each class used as a template argument, inside the correct namespace.
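A sketch of what that can look like for the names in the question (this assumes opaque_object really takes a single type parameter with no defaults; the forward declaration has to match the real template exactly):
// my_class.hpp -- no third-party headers included
namespace third { namespace party { namespace library {
    class templated_class;                       // forward declaration
    template <typename T> class opaque_object;   // forward declaration of the template
} } }

class my_class {
public:
    my_class();
    ~my_class();
private:
    third::party::library::opaque_object<
        third::party::library::templated_class> *m_opaque_object;
};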
|
2,185,443 | 2,185,470 | Does C++ support constant arrays of type string? | I'm a programming student in my first C++ class, and for a recent project I did, I was unable to create an array of strings like I could do in C#:
string MONTHS[ARRAY_CAPACITY] = { "Jan", "Feb", "Mar", "April", "May", "June", "July", "Aug", "Sep", "Oct", "Nov", "Dec" };
// this yields many compiler errors in C++
Is it possible to do something similar in C++?
Thanks!
| If you initialise the array in C++ then it doesn't require a size to be set (although it'll accept one), so:
std::string months[] = { "Jan", "Feb", "Mar", "April", "May", "June", "July", "Aug", "Sep", "Oct", "Nov", "Dec" };
compiles fine with g++ for me and I'd expect it to compile elsewhere too. I expect your errors are due to the lack of std:: namespace.
|
2,185,495 | 2,185,529 | use of char * vs std::string in different environments | I have been using std::string in my code. I was going to make a std::string and pass it by reference. However, someone suggested using a char * instead. Something about std::string is not reliable when porting code. Is that true? I have avoided using char * as I would need to do some memory management for it. Instead I find using the std::string much easier to use.
Basically I have a 10 digit output that I am storing in this string. Atm, I am not sure which would be better to use.
| In C++, almost every string should be std::string unless another library requires a cstring, in which case you should still be using an std::string and passing string.c_str(), unless you're using functions that work with buffers.
However, if you're writing a library and exporting functions, it's better to use const char* parameters rather than std::string parameters for portability.
|
2,185,583 | 2,185,992 | How to guard against memory leaks? | I was recently interviewing for a C++ position, and I was asked how I guard against creating memory leaks. I know I didn't give a satisfactory answer to that question, so I'm throwing it to you guys. What are the best ways to guard against memory leaks?
Thanks!
| What all the answers given so far boil down to is this: avoid having to call delete.
Any time the programmer has to call delete, you have a potential memory leak.
Instead, make the delete call happen automatically. C++ guarantees that local objects have their destructors called when they go out of scope. Use that guarantee to ensure your memory allocations are automatically deleted.
At its most general, this technique means that every memory allocation should be wrapped inside a simple class, whose constructor allocates the necessary memory, and destructor releases it.
Because this is such a commonly-used and widely applicable technique, smart pointer classes have been created that reduce the amount of boilerplate code. Rather than allocating memory, their constructors take a pointer to the memory allocation already made, and stores that. When the smart pointer goes out of scope, it is able to delete the allocation.
Of course, depending on usage, different semantics may be called for. Do you just need the simple case, where the allocation should last exactly as long as the wrapper class lives? Then use boost::scoped_ptr or, if you can't use boost, std::auto_ptr. Do you have an unknown number of objects referencing the allocation with no knowledge of how long each of them will live? Then the reference-counted boost::shared_ptr is a good solution.
But you don't have to use smart pointers. The standard library containers do the trick too. They internally allocate the memory required to store copies of the objects you put into them, and they release the memory again when they're deleted. So the user doesn't have to call either new or delete.
There are countless variations of this technique, changing whose responsibility it is to create the initial memory allocation, or when the deallocation should be performed.
But what they all have in common is the answer to your question: The RAII idiom: Resource Acquisition Is Initialization. Memory allocations are a kind of resource. Resources should be acquired when an object is initialized, and released by the object itslef, when it is destroyed.
Make the C++ scope and lifetime rules do your work for you. Never ever call delete outside of a RAII object, whether it is a container class, a smart pointer or some ad-hoc wrapper for a single allocation. Let the object handle the resource assigned to it.
If all delete calls happen automatically, there's no way you can forget them. And then there's no way you can leak memory.
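A tiny sketch of the idea, using the smart pointers and containers mentioned above:
#include <memory>    // std::auto_ptr (or boost::scoped_ptr / std::unique_ptr today)
#include <string>
#include <vector>

void no_leaks()
{
    std::auto_ptr<std::string> s(new std::string("owned"));   // deleted automatically

    std::vector<int> v;   // the container owns its own storage
    v.push_back(42);
}   // both allocations are released here, even if an exception was thrown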
|
2,185,829 | 2,186,451 | Maven learning curve & overhead for small/medium projects? | what would be (rough estimation, average, of course) the initial learning and setup curve and subsequent overhead for using Maven for C++/Eclipse/Linux project of small to medium size?
We are 4 developers at the beginning of the way. We currently have ~20 native eclipse C++ (CDT) "projects", which we compile interactively. We would like to have an automated checkout & build script.
It seems a bit overkill at this stage, but perhaps we should adopt it sooner then later, provided that it does not incur an overhead. We don't have bandwidth for extensive configuration management right now. Thanks a lot!
EDITED / DETAILED:
I realize I haven't described my needs well enough. Having read the references provided below, I see that CI tool seems an overkill for us at the moment. What I'd like to have is a build tool that is well integrated with eclipse on one hand, and allows offline, non-interactive builds on the other. I enjoy the simplicity of working with eclipse projects: you just add files, add references to internal components and 3rd part libs as they add up, and that's it. You don't need to manually maintain makefiles or the like. The trouble with it, as with MSVS a few years ago when I worked with it, is that it does not give you an option of non-interactive builds. So, does such tool exist?
| First, while Maven has some support to build C++ projects with the maven-native-plugin or, if you already are using Make, with the maven-make-plugin from the c-builds suite, this is not a common use case and there aren't widely used. So while it should be possible, you won't get support and find resources easily (just Google a bit or browse the maven users list to get an idea).
Second, if you add to this that you'll have to learn Maven in the same time, then it seems reasonable to say that you are not taking the easiest path.
So, instead, I'd stick with more traditional tools and/or Ant. For the continuous integration itself, I've seen several references mentioning the use of CruiseControl to build a C++ project. Refer to What continuous integration tool is best for a C++ project? or UsingCruiseControlWithCplusPlus for example. But I guess the principles are transposable to another CI engine (like Hudson that I find much more easy to use than CruiseControl).
|
2,185,953 | 2,186,393 | machine precision and max and min value of a double-precision type | (1) I have met several cases where epsilon is added to a non-negative variable to guarantee nonzero value. So I wonder why not add the minimum value that the data type can represent instead of epsilon? What are the difference problems that these two can solve?
(2) Also I notice that the inverse of the maximum value of a double precision type is bigger than its min value, and inverse of its min value is inf, way bigger than its max value. Is it useful to compute the reciprocals of its max and min values?
(3) For a very small positive number of double type, to compute its reciprocal, how small it is when its reciprocal starts to not make sense? Is it better to put an upper bound on the reciprocal? How much is the bound?
Thanks and regards
| You need to understand how floating point numbers are represented in the CPU. In the data type, 1 bit is reserved for the sign, i.e. whether it is a positive or negative number, (yes you can have positive and negative 0 in floating point numbers,) then a number of bits is reserved for the significand (or mantissa,) these are the significant digits in the floating point number and finally a number of bits is reserved for the exponent. The value of the floating point number now is:
(-1)^sign * significand * 2^exponent
This means the smallest number is a very small value, namely the smalles significand with the lowest exponent. The rounding error however is much larger and depends on the magnitude of the number, namely the smallest number with a given exponent. The epsilon is the difference between 1.0 and the next representable larger value. That's why epsilon is used in code that is robust for rounding errors, and really you should scale the epsilon with the magnitude of the numbers you work with if you do it right. The smallest representable value is not really of any significant use normally.
You're seeing the difference between the normalized and denormalized minimum. The problem is that due to the way the significand is used it is possible to make a larger negative exponent than a positive one, say the bit pattern of the significand is all zeros except the last bit, which is one, then the exponent is effectively lowered by the number of bits in the significand. For the maximum you cannot do this, even if you set the significand to all ones, the effective exponent will still only be the exponent that is given. i.e. think of the difference between 0.000001e-10 and 9.999999e+10, the first is much smaller than the second is big. The first is actually 1e-16 while the second is approx 1e+11.
It depends on the precision of the floating point number of course. In the case of double precision, the difference between the maximum and the next smaller value is already huge, (along the lines of 10^292,) so your rounding errors will be very big. If the value is too small you will simply get inf instead, as you already saw. Really, there is no strict answer, it depends entirely on the precision of numbers you need. Given that the rounding error is approx epsilon*magnitude, the reciprocal of (1/epsilon) already has a rounding error of around 1.0 if you need numbers to be accurate to 1e-3 then even epsilon would be too big to divide by.
See these wikipedia pages on IEEE754 and Machine epsilon for some background info.
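A small program that prints the quantities discussed above for double precision (the values in the comments are approximate):
#include <iostream>
#include <limits>

int main()
{
    typedef std::numeric_limits<double> dbl;
    std::cout << "epsilon      : " << dbl::epsilon()          << "\n"  // ~2.22e-16
              << "min (normal) : " << dbl::min()              << "\n"  // ~2.23e-308
              << "denorm_min   : " << dbl::denorm_min()       << "\n"  // ~4.94e-324
              << "max          : " << dbl::max()              << "\n"  // ~1.80e+308
              << "1.0 / max    : " << 1.0 / dbl::max()        << "\n"  // denormal, not zero
              << "1.0 / denorm : " << 1.0 / dbl::denorm_min() << "\n"; // inf
    return 0;
}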
|
2,185,954 | 2,186,059 | Implications of template declaration & definition | From what I understand template classes and template functions (for the most part) must be declared and defined in the same header file. With that said:
Are there any other ways to achieve separate compilation of template files other than using particular compilers? If yes, what are those?
What, if any, are the drawbacks of having the declaration and definition in the same file?
What is considered best-practice when it comes to template declaration & definition?
| How To Organize Template Source Code
Basically, you have the following options:
Make the template definition visible to compiler in the point of instantiation.
Instantiate the types you need explicitly in a separate compile unit so that the linker can find them (a short sketch follows below).
Use keyword export (if available)
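A short sketch of the second option, with made-up names (Widget, widget.h/widget.cpp):
// widget.h -- declaration only
template <typename T>
class Widget
{
public:
    void frob(const T& value);
};

// widget.cpp -- definition plus explicit instantiations for the types you need
#include "widget.h"

template <typename T>
void Widget<T>::frob(const T& /*value*/) { /* ... */ }

template class Widget<int>;      // emits code the linker can find
template class Widget<double>;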
|
2,185,960 | 2,188,368 | Particle stream should be the same length regardless of emitter speed | I'm writing a particle system for our student game, and I've run into a bit of a snag. I want to improve the effect on the ships' rockets, but I can't seem to figure out how.
Here's how the effect looks on a stationary ship:
And here's how it looks on a moving ship:
I want the flames to be the same length consistently. Here's Particle's Tick function:
void Particle::Tick(float a_DT)
{
// temporarily turned off to see the effect of the rest of the code more clearly
//m_Pos += m_Vel;
if (m_Owner) { m_Pos += m_Owner->GetParentSpeed(); }
m_Life -= 1;
if (m_Life <= 0) { m_Alive = false; }
}
Thanks in advance.
EDIT: To clear things up a bit, I want the effect to trail, but I want it to trail the same way regardless of the emitter's speed.
| You're making the particles move faster or slower according to the parent ship's speed, but their lifetime is some constant that you decrement by one until you reach zero, correct?
What you probably want to do is set the lifetime to a distance value, rather than some number of ticks. Then, subtract the ship's speed (or whatever you're adding to each particle on each tick) from the lifetime. When lifetime goes negative, kill the particle.
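A rough sketch of that idea applied to the Tick function from the question (this assumes m_Life now stores a remaining distance, and that your vector type has a Length() member -- both are assumptions about your code, not facts from it):
void Particle::Tick(float a_DT)
{
    Vector2 step = m_Vel;                              // hypothetical vector type
    if (m_Owner) { step += m_Owner->GetParentSpeed(); }
    m_Pos  += step;
    m_Life -= step.Length();                           // spend lifetime as distance travelled
    if (m_Life <= 0.0f) { m_Alive = false; }
}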
I think that's what you want... but it might be cooler (and more realistic) if you make two changes to your algorithm:
The current behavior (length of the tail) is correct if the particle speed coming out of your engines is based upon thrust (acceleration rather than just speed).
Once a particle leaves the engine, any changes in speed/direction of the ship have no effect on it. Once the particle is emitted, its speed and direction are constant until it fizzles out. This should actually look pretty cool when you're turning the ship, or dramatically changing acceleration.
Cheers.
|
2,186,122 | 2,188,033 | Using enum with string variables | I'm trying to do something like this,
enum Data_Property
{
NAME,
DATETIME,
TYPE,
};
struct Item_properties
{
char* Name;
char* DateTime;
int Type;
};
int main() {
std::string data = "/Name Item_Name /DATETIME [TimeStamp] /TYPE [Integer value]";
std::list <std::string> known_properties;
known_properties.push_back("/Name");
known_properties.push_back("/DATETIME");
known_properties.push_back("/TYPE");
Item_properties item_p = extract_properties(data); //I use a function to get properties
//in a structure.
//here I want to use a switch-case statement to get the property by enum value
}
Is there any way I can make this simpler? Or how would I combine property keys like (/NAME /DATETIME /TYPE) with an enum and avoid using a std::list such as known_properties?
| First, let me say that there is always another alternative, one of my philosophies.
Let's look at the Big Picture. You are trying read an Item from string. The Item is a container of datums. In Object Oriented terms, the Item would like each datum to load its own data from a string. The Item doesn't need to know how to load a datum, as it can request this from each datum.
Your datums are actually more complex and intelligent than the C++ simple POD types. (Simple POD types don't have field names.) So, you need to create classes to represent these (or to encapsulate the extra complexity). Trust me, this will be better in the long run.
Here is a simple example of Refactoring the Name member:
struct Name_Field
{
    std::string value; // Change your habits to prefer std::string to char *
    void load_from(const std::string& text,
                   const std::string::size_type& starting_position = 0)
    {
        value.clear();
        std::string::size_type position = text.find("/Name", starting_position);
        if (position != std::string::npos) // the field name was found.
        {
            // Skip past the field name, then past the space(s) that follow it
            position = text.find_first_not_of(" ", position + 5); // 5 == length of "/Name"
            // Check for end of text
            if (position != std::string::npos)
            {
                // The value runs until the next space (or the end of the text)
                std::string::size_type end_position = text.find_first_of(" ", position);
                if (end_position == std::string::npos)
                {
                    end_position = text.length();
                }
                value = text.substr(position, end_position - position);
            }
        }
    }
};
You can now add a load_from to the Item class:
struct Item
{
Name_Field Name;
char * DateTime;
char * Type;
void load_from(const std::string& text)
{
std::string::size_type position = 0;
// Notice, the Name member is responsible for load its data,
// relieving Item class from knowing how to do it.
Name.load_from(text, position);
}
};
To load the item:
Item new_item;
new_item.load_from(data);
As you refactor, be aware of common methods. For example, you may want to put common methods between Name_Field and DateTime_Field into a base (parent) class, Field_Base. This will simplify your code and design as well as support re-usability.
|
2,186,150 | 2,192,336 | Objective C in C++ – Out of Scope | I have a little problem with the WOsclib. Not with the library itself, really; it's more about the callback function. To listen for specific OSC commands I have to set up a callback method like
void TheOscStartMethod::Method(
const WOscMessage *message,
const WOscTimeTag& when,
const TheNetReturnAddress* networkReturnAddress)
{
std::cout << "Got the start signal";
start.alpha = 1.0;
}
start is IBOutlet UIImageView.
But the compiler tells me that start is out of scope. If I try to access start in Objective-C code, it works like it should.
How can I get my Objective-C objects into the C++ code, or at least call an Objective-C function from it?
Thank you
| The Solution:
I don't know if this is the best way to do it, but it works.
There must be a file-scope pointer, initially empty, which will later point to our Objective-C object that holds all the stuff we want to access.
static gsSearchForIp* delegate = NULL;
We must define a function to set the Objective-C object:
void setCallbackDelegate(gsSearchForIp* del)
{
delegate = del;
}
And then call it. ( I called it in the initWithFrame method)
setCallbackDelegate(self);
Now I can call a method with [delegate methodName:firstPara] in my C++ method. In this method I have access to everything I need from the gsSearchForIp class.
|
2,186,197 | 2,186,265 | Static library API question (std::string vs. char*) | I have not worked with static libraries before, but now I need to.
Scenario:
I am writing a console app in Unix. I freely use std::string everywhere because it's easy to do so. However, I recently found out that I have to support it on Windows, and a third-party application would need APIs into my code (I will not be sharing source, just the DLL).
With this in mind, can I still use std::string everywhere in my code but then provide them with char * when I code the APIs? Would that work?
| Yep. Use std::string internally and then just use const char * on the interface functions (which will be converted to std::string on input).
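As a rough sketch of what that boundary can look like (the function names and the fixed-size output buffer convention here are made up for illustration, not part of any existing API):
#include <cstddef>
#include <string>
// internal code keeps using std::string freely
static std::string build_greeting(const std::string& name)
{
    return "Hello, " + name;
}
// exported C-style function: plain char pointers at the boundary
extern "C" int api_greet(const char* name, char* out, unsigned out_size)
{
    if (!name || !out || out_size == 0) return -1;
    std::string result = build_greeting(name);            // const char* converts to std::string here
    std::size_t copied = result.copy(out, out_size - 1);  // the caller owns the buffer
    out[copied] = '\0';
    return 0;
}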
|
2,186,301 | 2,186,339 | inverse distance weighting interpolation | I would like to compute a weight as reciprocal of a distance for something like inverse distance weighting interpolation.
double wgt = 0, wgt_tmp, result = 0;
for (int i = 0; i < num; i++) {
wgt_tmp = 1.0/dist[i];
wgt += wgt_tmp;
result += wgt_tmp * values[i];
}
result /= wgt;
However, a distance can be 0 and I need to keep the weight suitable for computation. If only one distance dist[i] is 0, I would like its corresponding value values[i] to be dominant. If several distances are 0, I would like their values to contribute equally to the result. Also, even if dist[i] is not zero but very small, I would like a reasonable criterion to detect and handle it. Any idea how to implement this?
| I don't see any way besides piecewise - you need a different function than reciprocal distance for small distances. The simplest thing would be to just chop off the top:
modified_dist[i] = dist[i] < MIN_DIST ? MIN_DIST : dist[i]
but you could replace that with something still decreasing if you want, like (MIN_DIST + dist[i])/2.
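Applied to the loop from the question, the clamping idea could look like this (MIN_DIST is a tuning constant you choose based on the precision you need; everything else mirrors the original code):
const double MIN_DIST = 1e-6; // assumption: the smallest distance you still trust
double wgt = 0.0, result = 0.0;
for (int i = 0; i < num; i++) {
    // clamp zero / tiny distances so the reciprocal stays finite and bounded
    double d = (dist[i] < MIN_DIST) ? MIN_DIST : dist[i];
    double wgt_tmp = 1.0 / d;
    wgt += wgt_tmp;
    result += wgt_tmp * values[i];
}
result /= wgt; // samples at (or very near) the query point now share the weight equally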
|
2,186,438 | 2,215,574 | Read() from file descriptor hangs | Hey, hopefully this should be my last PTY-related question and I can move onto more exciting issues. (c;
Here's a set of small functions I have written for creating and reading/writing to a pty: http://pastebin.com/m4fcee34d The only problem is that they don't work! After I run the initializer and writeToPty( "ls -l" ) , 'output' from readFromPty is still empty.
Ubuntu, QT C++
EDITED: OK, I can confirm all this stuff works except for the read loop. In the debugger's locals/watches tab I can see that the QString 'output' actually does get the right data put into it, but once the read() runs out of characters to read, it blocks and the loop hangs. What is going on and how can I fix it?
Thanks! (c:
#include <iostream>
#include <unistd.h>
#include <utmp.h>
#include <pty.h>
#include <QString>
#include <QThread>
// You also need libutil in your .pro file for this to compile.
class CMkPty
{
public:
CMkPty( int *writeChannel, int *readChannel );
~CMkPty();
int runInPty( char *command );
int writeToPty( char *input );
int readFromPty( QString output );
int m_nPid;
private:
int m_nMaster, m_nSlave, m_nPosition, m_nBytes;
char *m_chName;
void safe_print( char *s );
char m_output;
};
CMkPty::CMkPty( int *masterFD, int *slaveFD )
{
openpty( &m_nMaster, &m_nSlave, (char*)0, __null, __null );
m_nPid = fork();
*masterFD = m_nMaster;
*slaveFD = m_nSlave;
if( m_nPid == 0 )
{
login_tty( m_nSlave );
execl( "/bin/bash", "-l", (char*)0 );
return;
}
else if( m_nPid > 0 )
{
return;
}
else if( m_nPid < 0 )
{
std::cout << "Failed to fork." ;
return;
}
}
CMkPty::~CMkPty()
{
close( m_nMaster );
close( m_nSlave );
}
int CMkPty::writeToPty( char *szInput )
{
int nWriteTest;
write( m_nMaster, szInput, sizeof( szInput ) );
nWriteTest = write( m_nMaster, "\n", 1 );
if( nWriteTest < 0 )
{
std::cout << "Write to PTY failed" ;
return -1;
}
return 0;
}
int CMkPty::readFromPty( QString output )
{
char buffer[ 160 ];
m_nBytes = sizeof( buffer );
while ( ( m_nPosition = read( m_nMaster, buffer, m_nBytes ) ) > 0 )
{
buffer[ m_nPosition ] = 0;
output += buffer;
}
return 0;
}
EDIT: Here's a link to the question with the code that finally worked for me.
| I'm not entirely familiar with POSIX, but after reading this page http://pwet.fr/man/linux/fonctions_bibliotheques/posix/read I had some insight. What's more, I don't see you adjusting your m_nBytes value if you haven't read as much as you were expecting on the first pass of the loop.
edit: from that link, perhaps this will be of some help:
If some process has the pipe open for writing and O_NONBLOCK is clear, read() shall block the calling thread until some data is written or the pipe is closed by all processes that had the pipe open for writing.
When attempting to read a file (other than a pipe or FIFO) that supports non-blocking reads and has no data currently available:
If O_NONBLOCK is clear, read() shall block the calling thread until some data becomes available.
so essentially, if you're not in an error state, and you tell it to keep reading, it will block until it finds something to read.
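In other words, one way to stop readFromPty() from hanging is to put the master descriptor into non-blocking mode and treat "no data right now" (EAGAIN) as the end of the loop. A sketch of that idea, not a drop-in replacement for the class above:
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>
// once, e.g. right after openpty(): switch the master fd to non-blocking mode
int flags = fcntl(m_nMaster, F_GETFL, 0);
fcntl(m_nMaster, F_SETFL, flags | O_NONBLOCK);
// later, in the read loop:
char buffer[160];
ssize_t n;
while ((n = read(m_nMaster, buffer, sizeof(buffer) - 1)) > 0)
{
    buffer[n] = 0;
    output += buffer;
}
if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
{
    // a real error, not just "nothing left to read yet"
}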
|
2,186,788 | 2,187,054 | Is there an open-source c/c++ implementation of IEEE-754 operations? | I am looking for a reference implementation of IEEE-754 operations. Is there such a thing?
| I believe the C libraries SoftFloat and fdlibm are suitable for what you are looking for. Others include Linux (GNU libc, glibc) or *BSD libc's math functions. Finally, CRlibm should also be of interest to you.
Ulrich Drepper has an interesting look at different math libraries that might also be worth reading through.
|
2,186,825 | 2,187,020 | C/C++: How to store data in a file in B tree | It appears to me that one way of storing a B-tree as a file can be done efficiently in C using a binary file containing a sequence (array) of structs, with each struct representing a node. One can thus connect the individual nodes with an approach similar to creating linked lists using arrays. But then the problem that crops up is deletion of a node, as erasing only a few bytes in the middle of a huge file is not possible.
One way of deleting could be to keep track of 'empty' nodes until a threshold cutoff is reached and then make another file that will discard the empty nodes. But this is tedious.
Is there a better approach from the simplicity/efficiency point of view for deleting, or even representing a B-tree in a file?
TIA,
-Sviiya
| I did a very quick search and dug up this: http://people.csail.mit.edu/jaffer/WB C source: http://cvs.savannah.gnu.org/viewvc/wb/wb/c/ - it seems to offer disk-based B-tree style databases - although taking a look at "delete.c" it seemed to imply if you delete a node everything down from it would be taken out - if that's the correct behaviour then it sounds like something that might help?
Also - B-trees are often used in filesystems - could you not take a look at some filesystem code?
My own inclination is that of a file-system - if you have a B-tree of fixed-size, whenever you "delete" a node rather than attempting to remove the reference, just set the value to whatever means nothing in your code. Then, have a clean-up thread running that checks if anyone has the file open for reading and if all's quiet blocks the file and tidies up.
|
2,186,829 | 2,189,487 | Is it possible to package WPF window as COM Object | I am trying to use a WPF window from a legacy c++ unmanaged gtk gui application. Is it possible to package the WPF window (including the xaml file) and use it in the c++ gui application as a regular com object. Do you foresee any issues or problems with this approach?
If possible any links or tutorials or any suggestions on how to do it will be very helpful.
Thanks.
| I'm not aware of any tutorials online for doing this, but it shouldn't be a big problem at all. I've tried implementing something like this and it worked fine for me; below is the sequence of steps I followed:
1. Add a "WPF User Control" or "WPF Custom Control" library to your solution.
2. Add a new WPF Window class (Add->Window->...) to the new project. Then add whatever WPF controls you like to your new window, just to check later that it works.
3. Add a new class and an interface to the library project and define them like the example below:
[ComVisible(true)]
[Guid("694C1820-04B6-4988-928F-FD858B95C881")]
public interface ITestWPFInterface
{
[DispId(1)]
void TestWPF();
}
[ComVisible(true)]
[Guid("9E5E5FB2-219D-4ee7-AB27-E4DBED8E123F"),
ClassInterface(ClassInterfaceType.None)]
public class TestWPFInterface : ITestWPFInterface
{
public void TestWPF()
{
Window1 form = new Window1();
form.Show();
}
}
4. Make your assembly COM-visible (the "Register for COM interop" option in the Build tab of the project properties) and assign a strong name to it (see the Signing tab); generate a key with the sn utility.
5. Once all of the above is done, you will have a your_wpf_lib.tlb file generated in the debug/release folder.
6. In your C++ application (I guess you have its sources and can recompile), add the following line:
#import "C:\full_path_to_your_tlb\your_wpf_lib.tlb"
This should generate the appropriate .tlh file in your win32 project's debug output folder.
7. Now you can call your form from the C++ code:
TestWPFForms::ITestWPFInterfacePtr comInterface(__uuidof(TestWPFForms::TestWPFInterface));
comInterface->TestWPF();
this should show your wpf form.
Also I believe links below might be useful for you:
Calling Managed .NET C# COM Objects from Unmanaged C++ Code
WPF and Win32 Interoperation Overview
hope this helps, regards
|
2,187,068 | 2,187,082 | How can I assign to an instance variable in C++ when a local variable has same name? | I have a class defined like this:
class MyClass
{
int x;
public:
MyClass(int x);
};
MyClass::MyClass(int x)
{ //Assign x here
}
However, I can't initialize x in the constructor because it has the same name as an instance variable. Is there any way around this(other than changing the name of the argument)?
| The best option is to use the constructor's initializer list:
MyClass::MyClass(int x) : x(x) { /* body */ }
But you could also try this approach:
MyClass::MyClass(int x) { this->x = x; }
|
2,187,128 | 2,187,232 | where to store .properties file for use in c++ dll | I created a .properties file that contains a few simple key = value pairs.
I tried it out from a sample c++ console application, using imported java classes, and I was able to access it, no problem.
Now, I am trying to use it in the same way, from a C++ dll, which is being called by another (unmanaged) c++ project.
For some reason, the file is not being accessed.
Maybe my file location is wrong. Where should I be storing it?
What else might be the issue?
TIA
| Since you mention "DLL", I guess you are using MS Windows. Finding a data file from a DLL there, independently of which user is logged on, is a restricted matter. The best way is to store the file in a path assembled from the environment variable ALLUSERSPROFILE. This is the only location that is the same for all users and where all users usually have write access. Your application's data should reside in a private subdirectory named something like <MyCompany> or <MyApplicationName>. Type
echo %ALLUSERSPROFILE%
on a windows command line prompt to find out the actual location on a machine.
Store your data in i.e.:
%ALLUSERSPROFILE%\MyApp\
Your dll can then query the location of ALLUSERSPROFILE using getenv:
char *allUsersData = getenv("ALLUSERSPROFILE");
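For example, building the full path in the DLL could look like this (the MyApp folder and file name are just placeholders for whatever your application uses):
#include <cstdlib>
#include <string>
std::string propertiesPath()
{
    const char* base = std::getenv("ALLUSERSPROFILE"); // per-machine profile directory
    if (!base) return std::string();                   // be defensive: the variable may be unset
    return std::string(base) + "\\MyApp\\config.properties";
}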
|
2,187,189 | 2,188,050 | creating arrays in nvidia cuda kernel | Hi, I just wanted to know whether it is possible to do the following inside an NVIDIA CUDA kernel
__global__ void compute(long *c1, long size, ...)
{
...
long d[1000];
...
}
or the following
__global__ void compute(long *c1, long size, ...)
{
...
long d[size];
...
}
| You can do the first example, I haven't tried the second.
However, if you can help it, you might want to redesign your program not to do this. You do not want to allocate 4000 bytes of memory in your kernel. That will lead to a lot of use of CUDA local memory, since you will not be able to fit everything into registers. CUDA local memory is slow (400 cycles of memory latency).
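If each thread really does need its own scratch space, one common workaround is to allocate a single large buffer from the host and let every thread index its own slice of it. A sketch of that idea (the names and the per-thread size are made up for illustration):
__global__ void compute(long *c1, long size, long *scratch, int per_thread)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    long *d = scratch + tid * per_thread; // this thread's private slice of the big buffer
    // ... work with d[0 .. per_thread-1] instead of a local array ...
}
// host side, before the launch: one allocation that covers every thread's slice
// cudaMalloc(&scratch, num_threads * per_thread * sizeof(long));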
|
2,187,263 | 2,187,297 | What is the best technique to create a thread in C++ while passing the class and an index? | I am creating a data-parallel program using pthreads and C++. From pthread function from a class, I found out how to supply pthread_create with a function pointer to a static C++ function(and supply it a this argument).
However, I also need to supply the thread with an index, so it knows what data it's working on. I could malloc a struct for each thread(with both the pointer to the C++ class and an index), but this seems like it would add some bookkeeping code, and could lead to leaks if the struct isn't freed. Is there a better way to do this?
| You can use Boost.Thread. It provides a type-safe way for you to pass more than one argument into your callable.
Yes, it has similar kinds of bookkeeping as your question stated, but it uses C++ mechanisms to ensure that it doesn't leak.
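For example, with Boost.Thread you can bind the object pointer and the index directly, so there is no hand-rolled struct to allocate or free (MyClass and worker are made-up names for illustration):
#include <boost/thread.hpp>
#include <boost/bind.hpp>
class MyClass {
public:
    void worker(int index) { /* per-index work for this thread goes here */ }
};
void spawn(MyClass& obj, int index)
{
    // the thread object copies the bound arguments, so nothing needs to be freed later
    boost::thread t(boost::bind(&MyClass::worker, &obj, index));
    t.join();
}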
|
2,187,425 | 2,187,865 | How do I use a COM DLL with LoadLibrary in C++ | First, COM is like black magic for me. But I need to use COM dll in one project I'm working on.
So, I have a DLL I am developing and I need some functionalities that are available in a separate COM DLL. When I look to the COM DLL with Depends.exe I see methods like DllGetClassObject() and other functions but none of the functions I'm interested in.
I have access to the COM DLL (legacy) source code but it's a mess and I'd rather like to use the COM DLL in binary like a big black box not knowing what's going on inside.
So, how can I call the COM DLL functions from my code using LoadLibrary? Is it possible? If, yes, could you give me an example of how to do it?
I'm using Visual Studio 6 for this project.
Thanks a lot!
| Typically you would use CoCreateInstance() to instantiate an object from a COM DLL. When you do this, there's no need to load the DLL first and get proc addresses like you would need to do with a normal DLL. This is because Windows "knows" about the types that a COM DLL implements, what DLL they are implemented in, and how to instantiate them. (Assuming of course that the COM DLL is registered, which it typically is).
Suppose you have a COM DLL with the IDog interface you want to use. In that case,
dog.idl
interface IDog : IUnknown
{
HRESULT Bark();
};
coclass Dog
{
[default] interface IDog;
};
myCode.cpp
IDog* piDog = 0;
CoCreateInstance(CLSID_Dog, 0, CLSCTX_INPROC_SERVER, IID_IDog, (void**)&piDog); // windows will instantiate the IDog object and place the pointer to it in piDog
piDog->Bark(); // do stuff
piDog->Release(); // were done with it now
piDog = 0; // no need to delete it -- COM objects generally delete themselves
All this memory management stuff can get pretty grungy, though, and the ATL provides smart pointers that make the task of instantiating & managing these objects a little easier:
CComPtr<IDog> dog;
dog.CoCreateInstance(CLSID_Dog);
dog->Bark();
EDIT:
When I said above that:
Windows "knows" about the types that a COM DLL implements [...and]
what DLL they are implemented in
...I really glossed over exactly how Windows knows this. It's not magic, although it might seem a little occult-ish at first.
COM libraries come with Type Libraries, which list the Interfaces and CoClasses that the library provides. This Type Library is in the form of a file on your hard drive -- very often it is embedded directly in the same DLL or EXE as the library itself. Windows knows where to find the Type Library and the COM Library itself by looking in the Windows Registry. Entries in the Registry tell Windows where on the hard drive the DLL is located.
When you call CoCreateInstance, Windows looks the clsid up in the Windows Registry, finds the corresponding DLL, loads it, and executes the proper code in the DLL that implements the COM object.
How does this information get into the Windows Registry? When a COM DLL is installed, it is registered. This is typically done by running regsvr32.exe, which in turn loads your DLL into memory and calls a function named DllRegisterServer. That function, implemented in your COM server, adds the necessary information to the Registry. If you are using ATL or another COM framework, this is probably being done under the hood so that you don't have to interface with the Registry directly. DllRegisterServer only needs to be called once, at install-time.
If you try to call CoCreateInstance for a COM object that has not yet been registered via the regsvr32/DllRegisterServer process, then CoCreateInstance will fail with an error that says:
Class Not Registered
Fortunately, the fix for this is to simply call regsvr32 on your COM server, and then try again.
|
2,187,609 | 2,187,640 | c and c++ operators help | Can someone explain to me why the following results in b = 13?
int a, b, c;
a = 1|2|4;
b = 8;
c = 2;
b |= a;
b&= ~c;
| It is using bitwise operators. (For brevity, only the low 8 bits of each int are shown below; two's complement storage is assumed.)
a = 1|2|4 means a = 00000001 or 00000010 or 00000100, which is 00000111, or 7.
b = 8 means b = 00001000.
c = 2 means c = 00000010.
b |= a means b = b | a which means b = 00001000 or 00000111, which is 00001111, or 15.
~c means not c, which is 11111101.
b &= ~c means b = b & ~c, which means b = 00001111 and 11111101, which is 00001101, or 13.
|
2,187,648 | 2,188,001 | How can I use a std::valarray to store/manipulate a contiguous 2D array? | How can I use a std::valarray to store/manipulate a 2D array?
I'd like to see an example of a 2D array with elements accessed by row/column indices. Something like this pseudo code:
matrix(i,j) = 42;
An example of how to initialize such an array would also be nice.
I'm already aware of Boost.MultiArray, Boost.uBlas, and Blitz++.
Feel free to answer why I shouldn't use valarray for my use case. However, I want the memory for the multidimensional array to be a contiguous (columns x rows) block. No Java-style nested arrays.
| Off the top of my head:
template <class element_type>
class matrix
{
public:
matrix(size_t width, size_t height): m_stride(width), m_height(height), m_storage(width*height) { }
element_type &operator()(size_t row, size_t column)
{
    // row-major layout: consecutive elements of a row are adjacent in memory
    return m_storage[row * m_stride + column];
    // for column-major storage you would instead use:
    // return m_storage[column * m_height + row];
}
private:
std::valarray<element_type> m_storage;
size_t m_stride;
size_t m_height;
};
std::valarray provides many interesting ways to access elements, via slices, masks, multidimentional slices, or an indirection table. See std::slice_array, std::gslice_array, std::mask_array, and std::indirect_array for more details.
|
2,187,679 | 2,187,735 | C++ containers behavior | My question is simple. When I use STL containers, do they copy the value I store there (by using the copy constructor) or not? What if I give them an array of characters (char *) instead of a string instance? How do they behave? Is it guaranteed that the information will be stored on the heap instead of the stack?
Thanks for answers.
| Values in STL containers are stored by-value. If you have a vector like this:
class BigObject
{
...
};
vector<BigObject> myObjs;
myObjs.push_back(obj1);
myObjs.push_back(obj2);
...
The vector will make a copy of the object you're pushing in. Also in the case of a vector, it may make new copies later when it has to reallocate the underlying memory, so keep that in mind.
The same thing is true when you have a vector of pointers, like vector<char*> -- but the difference here is that the value that is copied is the pointer, not the string it points to. So if you have:
vector<char*> myStrings;
char* str = new char[256]; // suppose str points to mem location 0x1234 here
sprintf(str, "Hello, buffer");
myStrings.push_back(str);
delete [] str;
...the vector will get a copy of the pointer. The pointer it gets will have the same value (0x1234), and since you deleted that pointer after pushing in the pointer, your vector contains a wild pointer and your code will eventually crash (sooner than later, hopefully).
Which, by the way, could have been avoided if instead of using char*s you used strings:
typedef vector<string> strings;
strings myStrings;
myStrings.push_back("Hello, buffer");
|
2,187,724 | 2,187,757 | X-Macros Redefinition | When i include the "xmacro.h" in header file which is used by multiple header files i get linking error:
Error LNK2005: "char const * * iD_Strings" (?iD_Strings@@3PAPBDA)
already defined in header_file.obj
1.//"xmacro.h"
2.
3.// (1) Define code generating macro
4.
5. #ifndef XMACRO_H
6. #define XMACRO_H
7.
8. #define GENERATE_IDS \
9. X(_Name, "/Name") \
10. X(_ID, "/ID")
11.
12. // (2) Define X-Macro for generating enum members
13.
14. #define X(id, idString) id,
15. enum IDs
16. {
17. ID_LOWERBOUND = -1,
18. GENERATE_IDS
19. NUM_IDS
20. };
21. #undef X
22.
23. // (3) Define X-Macro for generating string names
24.
25. #define X(id, idString) idString,
26. const char* iD_Strings[] =
27. {
28. GENERATE_IDS
29. NULL
30. };
31. #undef X
32.
33. #endif
It generates an error when I define the X-Macro for generating string names at line 23. How would I use a single macro without redefining it?
| Your header file contains the definition of iD_Strings. When you include it from different source files, it gets linked in multiple times. That results in a conflict even if the definitions are identical.
You could declare iD_Strings as static (the C way) or wrap it in an anonymous namespace (the C++ way) to get around this problem.
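For example, the string-table part of the header could be changed like this; with static (or an anonymous namespace) each translation unit gets its own private copy of the array, so the linker no longer sees duplicate definitions. Alternatively, put only an extern declaration in the header and move the definition into exactly one .cpp file.
// (3) Define X-Macro for generating string names
#define X(id, idString) idString,
static const char* iD_Strings[] =   // 'static' gives the array internal linkage
{
    GENERATE_IDS
    NULL
};
#undef X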
|
2,187,927 | 2,188,123 | ICU Unicode Normal vs Fullwidth | I am somewhat new to unicode and unicode strings. I'm trying to determine the difference between "fullwidth" symbol and a normal one.
Take these two for example:
Normal: http://www.fileformat.info/info/unicode/char/20a9/index.htm
Fullwidth: http://www.fileformat.info/info/unicode/char/ffe6/index.htm
I notice that the fullwidth is defined as U+20A9 and coincidentally 20A9 is the normal one. So what is the value of U?
When using libraries like ICU, is there a way to specify always returning the normal form rather than the fullwidth one?
Thanks,
| U+number is a notational convention for a Unicode code point. There is no 'value' of U.
U+0020, for example, is a space. The value in memory is 32 decimal, 20 hex.
Full width characters are a whole other story.
Back in the days of the 3270, Hanzi took up two positions in display memory, so they also took up two columns on the screen. To make things line up neatly, IBM defined a set of 'full-width' (better would have been 'double-width') letters and numbers.
If some ICU API is delivering full-width, you can use the Normalizer to get rid of it. You might also post a ticket to their ticket system, this seems odd.
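If you do want to fold full-width forms programmatically, compatibility normalization (NFKC) maps U+FFE6 to U+20A9 and so on. A sketch using ICU's classic Normalizer C++ API (check the exact API against your ICU version):
#include <unicode/normlzr.h>
#include <unicode/unistr.h>
icu::UnicodeString foldFullwidth(const icu::UnicodeString& input)
{
    UErrorCode status = U_ZERO_ERROR;
    icu::UnicodeString result;
    // NFKC replaces compatibility variants (including fullwidth forms) with their normal forms
    icu::Normalizer::normalize(input, UNORM_NFKC, 0, result, status);
    return U_FAILURE(status) ? input : result;
}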
|
2,188,024 | 2,188,324 | C++ calling managed COM object can't find dependent assemblies | I've created and registered a managed COM library in C# on my development machine. I've successfully registered it and created a .tlb file with regasm, and successfully imported the tlb into a c++ console app used for testing.
My COM assembly is called "efcAPI.dll" and it references another assembly that has not been set up for COM or registered in anyway called "efcServerDiscovery.dll". This second dll contains some code used by my COM dll and exists in the same folder as efcAPI.dll.
Everything concerning loading the COM assembly works fine. I can create instances of my classes defined in the COM and call methods from them. However when I call certain methods that use the code defined in efcServerDiscovery.dll I get a _com_error which reports that it could not load file or assembly 'efcServerDiscovery'.
I've verified that everywhere on my hard drive where efcAPI.dll exists there's a copy of efcServerDiscovery.dll (which is just the location I built and registered efcAPI.dll from). I've also attempted to place efcAPI.dll and efcServerDiscovery.dll in the same directory as the c++ app with no success.
Any suggestions as to where the c++ app is looking for the assembly or how to discover where it's looking would be great!
| Yes, this is a problem with COM components having non-COM dependencies. Windows doesn't consider the location of the COM DLL when it searches for dependent DLLs. The normal search rules are in effect, the folder that contains the EXE first, Windows directories, current working directory, PATH environment. The location of the COM server does not play a role.
Assuming you don't want to deploy to the EXE folder, none of these are good places to store your DLL, although plenty of installers made the desperation move of storing it in c:\windows\system32 or modify the system PATH environment variable.
One thing you could do is P/Invoke SetDllDirectory() in your C# code before running any code in the DLL. Using Assembly.GetExecutingAssembly().Location will do it. That is however not a safe thing to do though, it might alter the search rules for the app that uses your component.
The only real fix is to install the DLL in the Windows side-by-side cache (WinSxS) and to include a manifest in your C# executable. Given the state of the documentation, I can only wish you the best of luck.
|
2,188,490 | 2,188,531 | Static library inspector for windows? | I know there are tools like PE Explorer for inspecting the contents of DLLs on windows (exported symbols, etc). Is there something similar for static libraries? I'm linking against a third party library that's generating some linking errors, and I want to double check that the symbols I expect are indeed being provided.
| Dumpbin
The DUMPBIN utility, which is provided with the 32-bit version of Microsoft Visual C++, combines the abilities of the LINK, LIB, and EXEHDR utilities. The combination of these tools features the ability to provide information about the format and symbols provided in executable, library, and DLL files.
|
2,188,559 | 2,188,573 | C++ How to use and pass a 3-dimensional char array? | I'm trying to build a char array for storing the return value of a function. In the following function the data is stored in *valv. How do I build an extern variable to access the data?
int credis_lrange(REDIS rhnd, const char *key,
int start, int end, char ***valv)
{
int rc;
if ((rc = cr_sendfandreceive(rhnd, CR_MULTIBULK, "LRANGE %s %d %d\r\n",
key, start, end)) == 0)
{
*valv = rhnd->reply.multibulk.bulks;
rc = rhnd->reply.multibulk.len;
}
return rc;
}
Solution:
char **elements;
int size = credis_lrange(this->redis,"object_2",600,603,&elements);
for (int i=0; i<size;i++) {
cout << "element: " << elements[i] << endl;
}
Thanks to everyone!
| char ***element[size];
This is not exactly a 3D array, but an array of size elements, each of which is a pointer-to-pointer-to-pointer to char.
Use any one of the following:
char e[ D1 ][ D2 ][ D3 ]; /* D1, D2, D3 are integral constants */
char *e[ D2 ][ D3 ];
char e[][ D2 ][ D3 ];
Also, you can pass it on by simply specifying e as the argument to your function.
On further reading, it appears that the parameter is not really a 3D array but a pointer to an array of C-style strings. Note, the syntax may be the same, the intent is different.
In that case, you'll need to do two things:
Specify the number of strings you want to store in the array
For each string
Allocate memory
Copy string data to the char array
And finally, you'll be passing in the address of this array of strings on to the credis_lrange function.
|
2,188,680 | 2,188,723 | Is fastcall really faster? | Is the fastcall calling convention really faster than other calling conventions, such as cdecl?
Are there any benchmarks out there that show how performance is affected by calling convention?
| It depends on the platform. For a Xenon PowerPC, for example, it can be an order of magnitude difference due to a load-hit-store issue with passing data on the stack. I empirically timed the overhead of a cdecl function at about 45 cycles compared to ~4 for a fastcall.
For an out-of-order x86 (Intel and AMD), the impact may be much less, because the registers are all shadowed and renamed anyway.
The answer really is that you need to benchmark it yourself on the particular platform you care about.
|
2,188,698 | 2,188,753 | A method of creating simple game GUI | I have been able to find a lot of information on actual logic development for games. I would really like to make a card game, but I just don't understand how, based on the mouse position, an object can be selected (or at least the proper way). First I thought of bounding-box checking, but not all my bitmaps are rectangles. Then I thought of making a hidden buffer with each object having a different color, but it seems ridiculous to have to do it this way. I'm wondering how it is really done. For example, how does Adobe Flash know the object under the mouse?
Thanks
| Your question is how to tell if the mouse is above a non-rectangular bitmap. I am assuming all your bitmaps are really rectangular, but they have transparent regions. You must already somehow be able to tell which part of your (rectangular) bitmap is transparent, depending on the scheme you use (e.g. if you designate a color as transparent or if you use a bit mask). You will also know the z-order (layering) of bitmaps on your canvas. Then when you detect a click at position (x,y), you need to find the list of rectangular bitmaps that span over that pixel. Sort them by z-order and for each one check whether the pixel is transparent or not. If yes, move on to the next bitmap. If no, then this is the selected bitmap.
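A small sketch of that hit-test, assuming a hypothetical Sprite type that knows its rectangle and can answer whether a local pixel is opaque (none of these names come from a real library):
#include <vector>
#include <cstddef>
struct Sprite {
    int x, y, w, h;                 // screen-space rectangle
    const bool* mask;               // per-pixel opacity, row-major, w*h entries
    bool opaqueAt(int lx, int ly) const { return mask[ly * w + lx]; }
};
// sprites must be ordered topmost-first (by z-order)
Sprite* pick(const std::vector<Sprite*>& spritesTopFirst, int mx, int my)
{
    for (std::size_t i = 0; i < spritesTopFirst.size(); ++i)
    {
        Sprite* s = spritesTopFirst[i];
        if (mx >= s->x && mx < s->x + s->w &&
            my >= s->y && my < s->y + s->h &&
            s->opaqueAt(mx - s->x, my - s->y))
            return s;               // first opaque hit wins
    }
    return 0;                       // nothing under the cursor
}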
|
2,188,751 | 2,188,765 | linking <iostream.h> in linux using gcc | I'm trying to run my very first c++ program in linux (linux mint 8). I use either gcc or g++, both with the same problem: the compiler does not find the library I am trying to import.
I suspect something like I should either copy the iostream.h file (which I don't know where to look for) in the working folder, move my file to compile somewhere else or use an option of some sort.
Thanks for your suggestions.
Here's the gcc command, the c++ code, and the error message:
gcc -o addition listing2.5.c
.
#include <iostream.h>
int Addition(int a, int b)
{
return (a + b);
}
int main()
{
cout << "Resultat : " << Addition(2, 4) << "\n";
return 0;
}
.
listing2.5.c:1:22: error: iostream.h: No such file or directory
listing2.5.c: In function ‘main’:
listing2.5.c:10: error: ‘cout’ undeclared (first use in this function)
listing2.5.c:10: error: (Each undeclared identifier is reported only once
listing2.5.c:10: error: for each function it appears in.)
Now the code compiles, but I cannot run it from the command line using the file name: "addition: command not found". Any suggestion?
|
cout is defined in the std:: namespace, you need to use std::cout instead of just cout.
You should also use #include <iostream> not the old iostream.h
use g++ to compile C++ programs, it'll link in the standard c++ library. gcc will not. gcc will also compile your code as C code if you give it a .c suffix. Give your files a .cpp suffix.
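Putting those three points together, the corrected listing would look roughly like this; and since the current directory is usually not on PATH, run the result as ./addition rather than just addition:
// listing2.5.cpp -- note the .cpp suffix
#include <iostream>
int Addition(int a, int b)
{
    return a + b;
}
int main()
{
    std::cout << "Resultat : " << Addition(2, 4) << "\n";
    return 0;
}
// build and run:
//   g++ -o addition listing2.5.cpp
//   ./addition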
|
2,188,879 | 2,188,942 | C C++ array.... need help understanding code | Can you please explain this code? It seems a little confusing to me
Is "a" a double array? I would think it's just an integer, but then in the cout statement it's used as a double array. Also in the for loop condition it says a<3[b]/3-3, it makes no sense to me, however the code compiles and runs. i'm just having trouble understanding it, it seems syntactically incorrect to me
int a,b[]={3,6,5,24};
char c[]="This code is really easy?";
for(a=0;a<3[b]/3-3;a++)
{
cout<<a[b][c];
}
| Wow. This is really funky. This isn't really a 2-dimensional array. It works because c is an array and there is an identity in the C language that treats this
b[3]
as the same as this
3[b]
so this code translates into a loop that increments a while a < (24/3-3) since 3[b] is the same as b[3] and b[3] is 24. Then it uses a[b] (which is the same as b[a]) as an index into the array c.
so, un-obfuscated this code is
int a;
int b[] = {3,6,5,24};
char c[] = "This code is really easy?";
for (a = 0; a < 5; a++)
{
cout << c[b[a]];
}
which is broken since b[4] doesn't exist, so the output should be the characters of the string c at indices 3, 6, 5 and 24, i.e.
soc?
followed by some random character or a crash.
|
2,188,980 | 2,189,002 | Dock a control in native C++ | I know how to do this in .Net but in native Win32, how does one dock a control. I'm trying to dock a trackbar I made like vlc's trackbar.
Thanks
dock it to the bottom of its parent window
| You write the code to do it, or use a library which provides that functionality. If you are intent on using raw Win32, handle the WM_SIZE message for the container, calculate where the trackbar should be, and put it there. Roughly (0,height-trackbar_height,width,trackbar_height).
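A rough sketch of that WM_SIZE handling in the parent's window procedure (hwndTrack and TRACK_HEIGHT stand in for your trackbar handle and its fixed height):
case WM_SIZE:
{
    int width  = LOWORD(lParam);   // new client-area width
    int height = HIWORD(lParam);   // new client-area height
    // pin the trackbar to the bottom edge of the parent's client area
    MoveWindow(hwndTrack, 0, height - TRACK_HEIGHT, width, TRACK_HEIGHT, TRUE);
    return 0;
}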
Apparently, Pearl Jam are still touring.
|
2,188,991 | 2,189,018 | What is useful about a reference-to-array parameter? | I recently found some code like this:
typedef int TenInts[10];
void foo(TenInts &arr);
What can you do in the body of foo() that is useful, that you could not do if the declaration was:
void foo(int *arr); // or,
void foo(int arr[]); // or,
void foo(int arr[10]); // ?
I found a question that asks how to pass a reference to an array. I guess I am asking why.
Also, only one answer to "When is pointer to array useful?" discussed function parameters, so I don't think this is a duplicate question.
| The reference-to-array parameter does not allow array type to decay to pointer type. i.e. the exact array type remains preserved inside the function. (For example, you can use the sizeof arr / sizeof *arr trick on the parameter and get the element count). The compiler will also perform type checking in order to make sure the array argument type is exactly the same as the array parameter type, i.e. if the parameter is declared as a array of 10 ints, the argument is required to be an array of exactly 10 ints and nothing else.
In fact, in situations when the array size is fixed at compile-time, using reference-to-array (or pointer-to-array) parameter declarations can be perceived as the primary, preferred way to pass an array. The other variants (when the array type is allowed to decay to pointer type) are reserved for situations when it is necessary to pass arrays of run-time size.
For example, the correct way to pass an array of compile-time size to a function is
void foo(int (&arr)[10]); // reference to an array
or
void foo(int (*arr)[10]); // pointer to an array
An arguably incorrect way would be to use a "decayed" approach
void foo(int arr[]); // pointer to an element
// Bad practice!!!
The "decayed" approach should be normally reserved for arrays of run-time size and is normally accompanied by the actual size of the array in a separate parameter
void foo(int arr[], unsigned n); // pointer to an element
// Passing a run-time sized array
In other words, there's really no "why" question when it comes to reference-to-array (or pointer-to-array) passing. You are supposed to use this method naturally, by default, whenever you can, if the array size is fixed at compile-time. The "why" question should really arise when you use the "decayed" method of array passing. The "decayed" method is only supposed to be used as a specialized trick for passing arrays of run-time size.
The above is basically a direct consequence of a more generic principle. When you have a "heavy" object of type T, you normally pass it either by pointer T * or by reference T &. Arrays are no exception from this general principle. They have no reason to be.
Keep in mind though that in practice it often makes sense to write functions that work with arrays of run-time size, especially when it comes to generic, library-level functions. Such functions are more versatile. That means there is often a good reason to use the "decayed" approach in real-life code. Nevertheless, this does not excuse the author of the code from recognizing the situations when the array size is known at compile time and using the reference-to-array method accordingly.
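A related trick worth knowing: making the size a template parameter lets the compiler deduce it, so one function covers every compile-time-sized array (a small sketch):
#include <cstddef>
template <std::size_t N>
void foo(int (&arr)[N])
{
    // N is known at compile time; no separate size parameter is needed
    for (std::size_t i = 0; i < N; ++i)
        arr[i] = 0;
}
int data[10];
// foo(data);   // N is deduced as 10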
|
2,189,189 | 2,189,206 | map vs. hash_map in C++ | I have a question with hash_map and map in C++. I understand that map is in STL, but hash_map is not a standard. What's the difference between the two?
| They are implemented in very different ways.
hash_map (unordered_map in TR1 and Boost; use those instead) uses a hash table where the key is hashed to a slot in the table and the value is stored in a list tied to that key.
map is implemented as a balanced binary search tree (usually a red/black tree).
An unordered_map should give slightly better performance for accessing known elements of the collection, but a map will have additional useful characteristics (e.g. it is stored in sorted order, which allows traversal from start to finish). unordered_map will be faster on insert and delete than a map.
|