1,236,827
1,236,907
Compiling C++ Code With Boost's Numeric Binding Library to Solve Ax=b Linear System
I am using Numeric Library Bindings for Boost UBlas to solve a simple linear system: #include<boost/numeric/ublas/matrix.hpp> #include<boost/numeric/ublas/io.hpp> #include<boost/numeric/bindings/traits/ublas_matrix.hpp> #include<boost/numeric/bindings/lapack/gesv.hpp> #include <boost/numeric/bindings/traits/ublas_vector2.hpp> namespace ublas = boost::numeric::ublas; namespace lapack= boost::numeric::bindings::lapack; int main() { ublas::matrix<float,ublas::column_major> A(3,3); ublas::vector<float> b(3); for(unsigned i=0;i < A.size1();i++) for(unsigned j =0;j < A.size2();j++) { std::cout << "enter element "<<i << j << std::endl; std::cin >> A(i,j); } std::cout << A << std::endl; b(0) = 21; b(1) = 1; b(2) = 17; lapack::gesv(A,b); std::cout << b << std::endl; return 0; } I tried compiling it with the following command: g++ -I/home/foolb/.boost/include/boost-1_38 -I/home/foolb/.boostnumbind/include/boost-numeric-bindings solve_Axb_byhand.cc -o solve_Axb_byhand but fail with the following error: /media/disk/tmp/ccbd973l.o: In function `boost::numeric::bindings::lapack::detail::gesv(int, int, float*, int, int*, float*, int, int*)': solve_Axb_byhand2.cc:(.text._ZN5boost7numeric8bindings6lapack6detail4gesvEiiPfiPiS4_iS5_[boost::numeric::bindings::lapack::detail::gesv(int, int, float*, int, int*, float*, int, int*)]+0x59): undefined reference to `sgesv_' collect2: ld returned 1 exit status What's wrong with my approach in the code?
sgesv_ is a symbol from the LAPACK library; the Boost bindings only declare it, they don't contain the implementation, so you have to link against LAPACK yourself. Add -llapack (and, on many systems, -lblas as well) to the end of your g++ command and the undefined reference should go away.
1,237,042
1,237,077
Retrieving paths of FileSystemInfo instances
How does one retrieve (or resolve, for that matter) the absolute and relative (FullPath and OriginalPath fields) paths of FileSystemInfo/DirectoryInfo/FileInfo instances? I'm trying to get the paths of the files/directories returned by a FileSystemInfos call on a DirectoryInfo class object.
the FullName property gets you the full path of the directory or file.
1,237,188
1,237,199
compiling c++ into "real" programs
I know how to use g++ and all that to compile c++ programs. My question is, if I have some code which depends on various libraries, how can I compile it into a simple executable that I can send anyone. For this I would be happy with just keeping it on os x. I would like to know how to compile a "real" program not just an executable I can run locally. I have tried googling this but haven't found much. Do I have to use installing software? I know in windows you can make some simple .exe stuff that use common DLL files.
You are looking for "static linking". That will import all the needed code from the libraries into your executable. Note that the executable will get larger. If you are using standard libraries, they should be present on a standard OS installation. You should try the "-static" flag of g++, though note that fully static linking is not supported on Mac OS X (Apple only ships a dynamic libSystem); there you would statically link just the specific third-party libraries you need. Running "ldd your_executable_name" on Linux, or "otool -L your_executable_name" on OS X, displays all libraries your executable links dynamically.
1,237,259
1,827,599
About write buffer in general network programming
I'm writing a server using boost.asio. I have a read buffer and a write buffer for each connection and use the asynchronous read/write functions (async_write_some / async_read_some). With the read buffer and async_read_some there's no problem: just invoking async_read_some is fine, because the read buffer is only touched in the read handler (which usually means the same thread). But the write buffer needs to be accessed from several threads, so it needs to be locked for modification. FIRST QUESTION! Is there any way to avoid a LOCK for the write buffer? I write my own packet into a stack buffer and copy it to the write buffer, then call async_write_some to send the packet. If I send two packets in a row this way, is it okay to invoke async_write_some twice? SECOND QUESTION! What is the common way to do asynchronous writing in socket programming? Thanks for reading.
Answer #1: You are correct that locking is a viable approach, but there is a much simpler way to do all of this. Boost has a nice little construct in ASIO called a strand. Any callback that has been wrapped using the strand will be serialized, guaranteed, no matter which thread executes the callback. Basically, it handles any locking for you. This means that you can have as many writers as you want, and if they are all wrapped by the same strand (so, share your single strand among all of your writers) they will execute serially. One thing to watch out for is to make sure that you aren't trying to use the same actual buffer in memory for doing all of the writes. For example, this is what to avoid: char buffer_to_write[256]; // shared among threads /* ... in thread 1 ... */ memcpy(buffer_to_write, packet_1, std::min(sizeof(packet_1), sizeof(buffer_to_write))); my_socket.async_write_some(boost::asio::buffer(buffer_to_write, sizeof(buffer_to_write)), &my_callback); /* ... in thread 2 ... */ memcpy(buffer_to_write, packet_2, std::min(sizeof(packet_2), sizeof(buffer_to_write))); my_socket.async_write_some(boost::asio::buffer(buffer_to_write, sizeof(buffer_to_write)), &my_callback); There, you're sharing your actual write buffer (buffer_to_write). 
If you did something like this instead, you'll be okay: /* A utility class that you can use */ class PacketWriter { private: typedef std::vector<char> buffer_type; static void WriteIsComplete(boost::shared_ptr<buffer_type> op_buffer, const boost::system::error_code& error, std::size_t bytes_transferred) { // Handle your write completion here } public: template<class IO> static bool WritePacket(const std::vector<char>& packet_data, IO& asio_object) { boost::shared_ptr<buffer_type> op_buffer(new buffer_type(packet_data)); if (!op_buffer) { return (false); } asio_object.async_write_some(boost::asio::buffer(*op_buffer), boost::bind(&PacketWriter::WriteIsComplete, op_buffer, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); return (true); } }; /* ... in thread 1 ... */ PacketWriter::WritePacket(packet_1, my_socket); /* ... in thread 2 ... */ PacketWriter::WritePacket(packet_2, my_socket); Here, it would help if you passed your strand into WritePacket as well. You get the idea, though. Answer #2: I think you are already taking a very good approach. One suggestion I would offer is to use async_write instead of async_write_some so that you are guaranteed the whole buffer is written before your callback gets called.
1,237,361
1,237,545
How sets, multisets, maps and multimaps work internally
How do multisets work? If a set can't have a value mapped to a key, does it only hold keys? Also, how do associative containers work? I mean, vector and deque are laid out sequentially in memory, which means that deleting/removing elements (except at the beginning [deque] and end [vector, deque]) is slow if they are large. And list is a chain of nodes linked by pointers, not sequential in memory, which makes searching slower but deleting/removing faster. How are sets, maps, multisets and multimaps stored, and how do they work?
These 4 containers are typically all implemented using "nodes". A node is an object that stores one element. In the [multi]set case, the element is just the value; in the [multi]map case each node stores one key and its associated value. A node also stores multiple pointers to other nodes. Unlike a list, the nodes in sets and maps form a tree. You'd typically arrange it such that branches on the "left" of a certain node have values less than that node, while branches on the "right" of a certain node have values higher than that node. Operations like finding a map key/set value are now quite fast. Start at the root node of the tree. If that matches, you're done. If the root is larger, search in the left branch. If the root is smaller than the value you're looking for, follow the pointer to the right branch. Repeat until you find a value or an empty branch. Inserting an element is done by creating a new node, finding the location in the tree where it should be placed, and then inserting the node there by adjusting the pointers around it. Finally, there is a "rebalancing" operation to prevent your tree from ending up all out of balance. Ideally each right and left branch is about the same size. Rebalancing works by shifting some nodes from the left to the right or vice versa. E.g. if you have values {1 2 3} and your root node would be 1, you'd have 2 and 3 on the left branch and an empty right branch: 1 \ 2 \ 3 This is rebalanced by picking 2 as the new root node: 2 / \ 1 3 The STL containers use a smarter, faster rebalancing technique but that level of detail should not matter. It's not even specified in the standard which better technique should be used so implementations can differ.
1,237,536
1,237,583
Controlling Firefox from C/C++
I'm thinking of creating an application that can use Firefox as a download manager. Is there any way to control Firefox (add downloads, start/stop downloads, etc) from an external program in C/C++? If that is not possible, then perhaps an extension that can do that? If an extension is the only way, then how do I communicate with the extension from outside of Firefox?
You're starting with a solution, not a problem. The easier idea is to use XULRunner, the platform on which Firefox is built. You'd effectively implement your own application as a XULRunner application and use Necko (the network layer of XULRunner and Firefox) from there.
1,237,555
1,238,201
Design advice for a personal project - "Files Renamer"?
I've just started learning the Win APIs and C++ programming. I was thinking about starting a personal project (to enhance my coding, and to help me understand the Win APIs better), and I've decided to program a command-line file renamer that basically takes: 1) a path 2) a keyword 3) the desired format 4) versioned or not (i.e. numbered; like if you had 20 episodes of the same show, you wouldn't want to truncate the episode number) 5) special cases to delete (like when you're downloading a torrent, the files have a [309u394] attached to the name, and most of the time an initial [WE-RIP-TV-SHOWS-HDTV-FANSUBS-GROUPS-ETC]). I am building the logic as follows: the program takes the path (input 1) and performs a full file indexing; then it compares the files found against the keyword (input 2) (use regex?); then comes the reformat-file-name step (inputs 3, 4, 5); then save the file name. Questions: A) Is my logic flow proper? Any suggestions to improve it? B) Should I use regex to check against the file name, keyword, and desired format? (I'm not good with regex yet.) I mean, is it the best way to perform the huge number of comparisons?
Regular expressions should do the trick. You could also use the Boost library; it has some really neat functionality, including Boost.Regex, which is probably faster than most of the ad-hoc string matching you would write by hand (:
1,237,571
1,237,619
Problems deleting a 2D dynamic array in C++ (which is eventually store in a vector)
So I have this 2d dynamic array which content I want to free when I am done with it. However I keep running into a heap corruption after the destructor. The code works fine (of course with memory leaks) if I comment out the destructor. (Visual Studio 2005) FrameData::FrameData(int width, int height) { width_ = width; height_ = height; linesize[0] = linesize[1] = linesize[2] = linesize[3] = 0; // Initialise the 2d array // Note: uint8_t is used by FFMPEG (typedef unsigned char uint8_t) red = new uint8_t* [height]; green = new uint8_t* [height]; blue = new uint8_t* [height]; for (int i=0; i < height; i++) { red[i] = new uint8_t [width]; green[i] = new uint8_t [width]; blue[i] = new uint8_t [width]; } } FrameData::~FrameData() { // Delete each column for (int i=0; i < height_; i++) { delete[] ((uint8_t*) red[i]); delete[] ((uint8_t*)green[i]); delete[] ((uint8_t*)blue[i]); } // Final cleanup delete[] red; red = NULL; delete[] green; green = NULL; delete[] blue; blue = NULL; } I have no idea what is wrong with the code. The only another thing is somewhere else, I did this in a loop where the crash occurred FrameData myFrame; std::vector<FrameData> frames; ...snipped... frames.push_back(myFrame); This shouldn't be causing any problem, right? If I remember correct, push_back makes a copy instead of storing a pointer or a reference. PS. Yes, I should use vectors. But I am not allowed to. Additional Info: The operator= and copy constructor are not defined. I guess that's a reason for the problem.
Your problem is as you guessed in here: FrameData myFrame; std::vector<FrameData> frames; ...snipped... frames.push_back(myFrame); The vector makes copies of the elements that you push in. What do you have for your copy constructor and/or operator= for your class? If you have none defined, the default version that the compiler creates for you simply makes copies of the members of your class. This will copy the pointer members red, green and blue to the new instance. Then the old instance that you copied will be destroyed when it goes out of scope, causing the pointers to be deleted. The one you copied into the vector will then have invalid pointers since the target of the pointer is thus deleted. A good rule of thumb is that if you have any raw pointer members, then you need to make a copy constructor and operator= that will handle this situation correctly, by making sure that the pointers are given new values and not shared, or that ownership is transferred between the instances. For example, the std::auto_ptr class has a raw pointer - the semantics of the copy constructor is to transfer ownership of the pointer to the target. The boost::shared_ptr class has a raw pointer - the semantics is to share ownership by means of reference counting. This is a nice way to handle std::vectors containing pointers to your class - the shared pointers will control the ownership for you. Another way might be to use vectors to take the place of your member pointers - the member pointers are simply aliases for your arrays anyway, so the vector is a good substitute.
1,237,723
1,237,927
How to assign / copy a Boost::multi_array
I want to assign a copy of a boost::multi_array. How can I do this. The object where I want to assign it to has been initialized with the default constructors. This code does not work, because the dimensions and size are not the same class Field { boost::multi_array<char, 2> m_f; void set_f(boost::multi_array<short, 2> &f) { m_f = f; } } What to use instead of m_f = f ?
You should resize m_f before assigning. It could look like the following sample: void set_f(boost::multi_array<short, 2> &f) { std::vector<size_t> ex; const size_t* shape = f.shape(); ex.assign( shape, shape+f.num_dimensions() ); m_f.resize( ex ); m_f = f; } Maybe there is a better way. The conversion from short to char will be implicit; you should consider using std::transform if you want an explicit conversion.
1,237,756
1,237,777
How to Sum Column of a Matrix and Store it in a Vector in C++
Is there a straight forward way to do it? I'm stuck here: #include <iostream> #include <vector> #include <cstdlib> using std::size_t; using std::vector; int main() { vector<vector<int> > Matrix; //Create the 2x2 matrix. size_t rows = 2; size_t cols = 2; // 1: set the number of rows. Matrix.resize(rows); for(size_t i = 0; i < rows; ++i) { Matrix[i].resize(cols); } // Create Matrix Matrix[0][0] = 1; Matrix[0][1] = 2; Matrix[1][0] = 3; Matrix[1][1] = 4; // Create Vector to store sum vector <int> ColSum; for(size_t i = 0; i < rows; ++i) { for(size_t j = 0; j < cols; ++j) { std::cout <<"["<<i<<"]"<<"["<<j<<"] = " <<Matrix[i][j]<<std::endl; // I'm stuck here } } return 0; } Given the matrix above: 1 2 3 4 In the end we hope to print the result of a vector (that keeps the sum of each column): 4 6 Note that the matrix can be of any dimension.
vector<int> ColSum(cols, 0); // one accumulator per column for( size_t row = 0; row < Matrix.size(); row++ ) { for( size_t column = 0; column < Matrix[row].size(); column++ ) { ColSum[column] += Matrix[row][column]; } } Note that ColSum has to be given a size before you index into it, and that the accumulator is indexed by the column, not the row, since you want column sums.
1,237,768
1,237,803
Where is the memory leak in this C++?
I have been told be a couple of tools that the following code is leaking memory, but we can't for the life of us see where: HRESULT CDatabaseValues::GetCStringField(ADODB::_RecordsetPtr& aRecordset, CString& strFieldValue, const char* strFieldName, const bool& bNullAllowed) { HRESULT hr = E_FAIL; try { COleVariant olevar; olevar = aRecordset->Fields->GetItem(_bstr_t(strFieldName))->Value; if (olevar.vt == VT_BSTR && olevar.vt != VT_EMPTY) { strFieldValue = olevar.bstrVal; hr = true; } else if ((olevar.vt == VT_NULL || olevar.vt == VT_EMPTY) && bNullAllowed) { //ok, but still did not retrieve a field hr = S_OK; strFieldValue = ""; } } catch(Exception^ error) { hr = E_FAIL; MLogger::Write(error); } return hr; } We assume it is something to do with the olevar variant as the size of the leak matches the size of the string being returned from the recordset. I have tried olevar.detach() and olevar.clear(), both had no effect, so if this is the cause, how do I release the memory that is presumably allocated in GetItem. And if this is not the cause, what is? EDIT I read the article suggested by Ray and also the comments associated with it and then tried: HRESULT CDatabaseValues::GetCStringField(ADODB::_RecordsetPtr& aRecordset, CString& strFieldValue, const char* strFieldName, const bool& bNullAllowed) { HRESULT hr = E_FAIL; try { COleVariant* olevar = new COleVariant(); _bstr_t* fieldName = new _bstr_t(strFieldName); *olevar = aRecordset->Fields->GetItem(*fieldName)->Value; if (olevar->vt == VT_BSTR && olevar->vt != VT_EMPTY) { strFieldValue = olevar->bstrVal; hr = true; } else if ((olevar->vt == VT_NULL || olevar->vt == VT_EMPTY) && bNullAllowed) { //ok, but still did not retrieve a field hr = S_OK; strFieldValue = ""; } delete olevar; delete fieldName; } catch(Exception^ error) { hr = E_FAIL; MLogger::Write(error); } return hr; } Main differences being the olevariant and bstr are now explicitly created and destroyed. 
This has roughly halved the volume of leak, but there is still something in here that is leaking. Solution? Looking at the advice from Ray about using Detach, I came up with this: HRESULT CDatabaseValues::GetCStringField(ADODB::_RecordsetPtr& aRecordset, CString& strFieldValue, const char* strFieldName, const bool& bNullAllowed) { HRESULT hr = E_FAIL; try { COleVariant olevar; _bstr_t fieldName = strFieldName; olevar = aRecordset->Fields->GetItem(fieldName)->Value; if (olevar.vt == VT_BSTR && olevar.vt != VT_EMPTY) { BSTR fieldValue = olevar.Detach().bstrVal; strFieldValue = fieldValue; ::SysFreeString(fieldValue); hr = true; } else if ((olevar.vt == VT_NULL || olevar.vt == VT_EMPTY) && bNullAllowed) { //ok, but still did not retrieve a field hr = S_OK; strFieldValue = ""; } ::SysFreeString(fieldName); } catch(Exception^ error) { hr = E_FAIL; MLogger::Write(error); } return hr; } According to the tool (GlowCode) this is no longer leaking, but I am worried about using SysFreeString on fieldValue after it has been assigned to the CString. It appears to run, but I know that is no indication of being memory corruption free!
You have to release the memory allocated for the BSTR. See the article. Oh, and you have to do a Detach before assigning the bstr value of the VARIANT to the CString: strFieldValue = olevar.Detach().bstrVal; and then make sure the detached BSTR itself gets freed (with ::SysFreeString) once the CString has taken its copy.
1,237,948
1,297,928
Problem with directories and file selector (VC++ 2008)
I have implemented a file selector with a combobox. I want to write the selected filename to a log. The problem is that when I select a file from the original directory it goes well but when I choose a file from another directory it won't work. Can anybody help with this? Here is the code for the file selector, it is inside a dialog. BOOL CALLBACK BateriaFaxDlg(HWND hDlg, UINT msg, WPARAM wParam, LPARAM lParam){ char descripcion[100]=""; char archivo[100]=""; char cad[100]; int i,l; switch (msg) { case WM_INITDIALOG: InitCombo(hDlg, "*.*"); return TRUE; break; case WM_COMMAND: switch(LOWORD(wParam)) { case IDOK: i = SendDlgItemMessage(hDlg, IDC_ARCH2, CB_GETCURSEL, 0, 0); if(i >= 0) { SendDlgItemMessage(hDlg, IDC_ARCH2, CB_GETLBTEXT, (WPARAM)i, (LPARAM)archivo); } if (!GetDlgItemText(hDlg, IDC_DESCBATER, descripcion , 100)) { MessageBox(hDlg, "Ambos campos son obligatorios", "ERROR", MB_ICONEXCLAMATION | MB_OK); break; } actualizarBaterias(GetParent(hDlg), "FAX", archivo, descripcion); EndDialog(hDlg, FALSE); break; case IDCANCEL: EndDialog(hDlg, FALSE); break; case IDC_ARCH2: switch(HIWORD(wParam)) { case CBN_CLOSEUP: case CBN_KILLFOCUS: if(DlgDirSelectComboBoxEx(hDlg, cad, 100, IDC_ARCH2)) { strcat(cad, "*.*"); InitCombo(hDlg, cad); } break; } break; default: break; return TRUE; } } return FALSE; } This is InitCombo: void IniciarCombo(HWND hwnd, char* p) { char path[100]; strcpy(path, p); DlgDirListComboBox( hwnd, path, IDC_ARCH2, ID_TITULO, DDL_DIRECTORY | DDL_DRIVES ); SendDlgItemMessage(hwnd, IDC_ARCH2, CB_SETCURSEL, 0, 1); } and finally this is where i write the filename to a file. void actualizarBaterias(HWND hWnd, char *tipo, char *archivo, char *descripcion) { FILE *fp; HWND hctrl; int i; HFONT hfont; fp = fopen("conf\\Baterias.conf", "a" ); if (fp) { MessageBox(hWnd, "Actuali","error", MB_ICONEXCLAMATION | MB_OK); fprintf(fp, "\n%s %s %s", tipo, archivo, descripcion); fclose(fp); } } Thanks in advance.
From the documentation for DlgDirListComboBox: If lpPathSpec specifies a directory, DlgDirListComboBox changes the current directory to the specified directory before filling the combo box. The text of the static control identified by the nIDStaticPath parameter is set to the name of the new current directory. You probably want to cache the current directory (GetCurrentDirectory) before calling DlgDirSelectComboBoxEx, then set it back after it returns. Or, don't call fopen with a relative directory.
1,237,963
1,238,014
Alignment along 4-byte boundaries
I recently got thinking about alignment... It's something that we don't ordinarily have to consider, but I've realized that some processors require objects to be aligned along 4-byte boundaries. What exactly does this mean, and which specific systems have alignment requirements? Suppose I have an arbitrary pointer: unsigned char* ptr Now, I'm trying to retrieve a double value from a memory location: double d = **((double*)ptr); Is this going to cause problems?
It can definitely cause problems on some systems. For example, on ARM-based systems you cannot address a 32-bit word that is not aligned to a 4-byte boundary. Doing so will result in an access violation exception. On x86 you can access such non-aligned data, though the performance suffers a little since two words have to be fetched from memory instead of just one.
1,238,319
1,238,326
Check whether a shutdown is initiated or not
What is the win32 function to check whether a shutdown is initiated or not? EDIT: I need to check that inside a windows service (COM). How to do that?
There's no actual Win32 function to check for that. Instead Windows sends the WM_QUERYENDSESSION message to every application when a shutdown is initiated. You can respond to that message and for example cancel the shutdown. (Although you shouldn't do that unless it is absolutely necessary) Before the actual shutdown the WM_ENDSESSION message is sent. You should do any of your cleanup only after this message, because it is not guaranteed that the system actually shuts down after WM_QUERYENDSESSION. EDIT: If you want to listen for these messages from a Service you have to put some more work into it. Services normally don't have windows, so you cannot simply hook into an existing window message queue. Instead you have to create a dummy window, which is meant only to processes messages and use it to handle the messages above. See the MSDN documentation for more information about message-only windows.
1,238,376
1,281,662
VC++: KB971090 and selecting Visual C Runtime DLL dependencies
As you might know, Microsoft recently deployed a security update for Visual Studio: KB971090. Among other things, this updated the Visual C Runtime DLL from version 8.0.50727.762 to 8.0.50727.4053. So after this update, everything I compile that uses the runtime dynamically linked, gets their dependencies updated to the new runtime. Of course, for new applications it is fine to update to the new, presumably more secure, version. But I would also like to be able to retain the old dependency - for example, I might like to build a fixpack that only require a single DLL to be replaced (if I try to do that after the update, I will get the dreaded "This application has failed to start because the application configuration is incorrect." unless I also distribute the updated runtime). Is there any way to do this, or will I need to retain two installations of Visual Studio: one updated and one non-updated?
You can specify the version by using the workaround found here
1,238,379
1,238,410
Detecting if a process is still running
I need to check if a process with a given HANDLE is still running, I tried to do it using the following code however it always returns at the second return false, even if the process is running. bool isProcessRunning(HANDLE process) { if(process == INVALID_HANDLE_VALUE)return false; DWORD exitCode; if(GetExitCodeProcess(process, &exitCode) != 0) return false;//always returns here return GetLastError() == STILL_ACTIVE;//still running }
You can test the process life by using bool isProcessRunning(HANDLE process) { return WaitForSingleObject( process, 0 ) == WAIT_TIMEOUT; } As for the posted code: GetExitCodeProcess returns nonzero on success, so your early return fires on every successful call; after a successful call you would compare exitCode against STILL_ACTIVE rather than calling GetLastError().
1,238,609
1,238,719
static_cast wchar_t* to int* or short* - why is it illegal?
In both Microsoft VC2005 and g++ compilers, the following results in an error: On win32 VC2005: sizeof(wchar_t) is 2 wchar_t *foo = 0; static_cast<unsigned short *>(foo); Results in error C2440: 'static_cast' : cannot convert from 'wchar_t *' to 'unsigned short *' ... On Mac OS X or Linux g++: sizeof(wchar_t) is 4 wchar_t *foo = 0; static_cast<unsigned int *>(foo); Results in error: invalid static_cast from type 'wchar_t*' to type 'unsigned int*' Of course, I can always use reinterpret_cast. However, I would like to understand why it is deemed illegal by the compiler to static_cast to the appropriate integer type. I'm sure there is a good reason...
You cannot static_cast between unrelated pointer types; the size of the type pointed to is irrelevant. Consider the case where the types have different alignment requirements: allowing a cast like this could generate illegal code on some processors. It is also possible for pointers to different types to have different sizes, which could result in the pointer you obtain being invalid and/or pointing at an entirely different location. reinterpret_cast is one of the escape hatches you have if you know that, for your program, compiler, architecture and OS, you can get away with it.
1,238,741
1,365,903
Does the latest Visual Studio 2005 Security Update cause C runtime library issues when hot fixing customer sites
As you might be aware an update to visual studio 2005 was auto updated on most machines last week. This update included a new version of the visual c runtime library. As a result any binaries built after the update also require a new redistributable installed on client systems. See http://support.microsoft.com/kb/971090/ And here is the installer for the new redistributable: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=766a6af7-ec73-40ff-b072-9112bab119c2 This is fine for distributing new binaries to customers, I will ship the new redistributable with the installer and it will all work. However I am really worried about my ability to hotfix existing customer sites if they discover a bug. In this case normally I would just send the dll or exe that was fixed. However if I do this now, I will have to send these customers the new redistributable and now I will be using two different versions of the c runtime library in the same executable. Is this a problem? Can this cause my application to crash? What happens if I allocate memory in one dll then deallocate it in another? Normally this works if the same release runtime library is used. I went through the our code about 3 years ago cleaning this up, but I cannot be sure that I have found and fixed all occurrences. Is the allocate/deallocate in different dlls still a problem? Now that in the era of smart pointers etc it is very had to enforce this. Can I control what runtime library version I depend on by changing manifests? Any pointers or advice would be grateful. Updated: I have just noticed this question VC++: KB971090 and selecting Visual C Runtime DLL dependencies This is very similar, but my question is more concerned with using two different version of the runtime in one executable.
The version number specified in the application’s manifest file/resource only specifies the minimum version required to run the application. The default behavior of the loader is to first check the WINDOWS\WinSxS folder for the identical version or a superseding version of a dependency identified in an application manifest, and to use that version regardless of whether or not a private assembly containing the dependency has been provided with the application. (See http://msdn.microsoft.com/en-us/library/aa375674(VS.85).aspx). So chances are your old binaries will also use the latest version of the Microsoft run time library anyway. Try to run the release build of your application (built before you update your Visual Studio) on a fully patched machine and use process explorer to see which DLLs it loads. The only problem is you will have to include the new run time redistributable file in your patch. If you are still worried, you can try the method described here: http://tedwvc.wordpress.com/2009/08/10/avoiding-problems-with-vc2005-sp1-security-update-kb971090/
1,239,235
1,239,457
How lazy can C++ global initialization be?
I'm used to thinking of all initialization of globals/static-class-members as happening before the first line of main(). But I recently read somewhere that the standard allows initialization to happen later to "assist with dynamic loading of modules." I could see this being true when dynamic linking: I wouldn't expect a global initialized in a library to be initialized before I dlopen'ed the library. However, within a grouping of statically linked together translation units (my app's direct .o files) I would find this behavior very unintuitive. Does this only happen lazily when dynamically linking or can it happen at any time? (or was what I read just wrong? ;)
The standard has the following in 3.6.2/3: It is implementation-defined whether or not the dynamic initialization (8.5, 9.4, 12.1, 12.6.1) of an object of namespace scope is done before the first statement of main. If the initialization is deferred to some point in time after the first statement of main, it shall occur before the first use of any function or object defined in the same translation unit as the object to be initialized. Of course you can never officially tell when the initialization takes place, since it must have occurred by the time you access the variable. For example: // t1.cc #include <iostream> int i1 = 0; int main () { std::cout << i1 << std::endl; } // t2.cc extern int i1; int i2 = ++i1; I can confirm that g++ 4.2.4 at least appears to perform the initialization of 'i2' before main.
1,239,297
1,239,305
What environment should I use for 3d programming on Linux?
One thing I always shy away from is 3d graphics programming, so I've decided to take on a project working with 3d graphics for a learning experience. I would like to do this project in Linux. I want to write a simple 3d CAD type program. Something that will allow the user to manipulate objects in 3d space. What is the best environment for doing this type of development? I'm assuming C++ is the way to go, but what tools? Will I want to use Eclipse? What tools will I want?
OpenGL/SDL, and the IDE is kind-of irrelevant. My personal IDE preference is gedit/VIM + Command windows. There are tons of IDE's, all of which will allow you to program with OpenGL/SDL and other utility libraries. I am presuming you are programming in C, but the bindings exist for Python, Perl, PHP or whatever else, so no worries there. Have a look online for open-source CAD packages, they may offer inspiration! Another approach might be a C#/Mono implementations ... these apps are gaining ground ... and you might be able to make it a bit portable.
1,239,364
1,239,468
When are two elements of an STL set considered identical?
From cplusplus.com: template < class Key, class Compare = less<Key>, class Allocator = allocator<Key> > class set; "Compare: Comparison class: A class that takes two arguments of the same type as the container elements and returns a bool. The expression comp(a,b), where comp is an object of this comparison class and a and b are elements of the container, shall return true if a is to be placed at an earlier position than b in a strict weak ordering operation. This can either be a class implementing a function call operator or a pointer to a function (see constructor for an example). This defaults to less, which returns the same as applying the less-than operator (a<b). The set object uses this expression to determine the position of the elements in the container. All elements in a set container are ordered following this rule at all times." Given that the comparison class is used to decide which of the two objects is "smaller" or "less", how does the class check whether two elements are equal (e.g. to prevent insertion of the same element twice)? I can imagine two approaches here: one would be calling (a == b) in the background, but not providing the option to override this comparison (as with the default less<Key>)doesn't seem too STL-ish to me. The other would be the assumption that (a == b) == !(a < b) && !(b < a) ; that is, two elements are considered equal if neither is "less" than the other, but somehow this doesn't feel right to me either, considering that the comparison can be an arbitrarily complex bool functor between objects of an arbitrarily complex class. So how is it really done?
Not an exact duplicate, but the first answer here answers your question. Your second guess as to the behaviour is correct: two elements are considered equivalent when neither compares "less" than the other.
1,239,380
1,239,792
pass a callable object to a member function
class Action { public: void operator() () const; } class Data { public: Data(); ~Data(); Register(Action action) { _a = action; } private: Action _a; } class Display { public: Display(Data d) { d.Register( bind(Display::SomeTask, this, _1) ); } ~Display(); void SomeTask(); } I want to bind the private member _a of Data to a member function of Display, but I get compile errors saying my argument types don't match when I call d.Register, what am I doing wrong? Thanks.
What you're trying to do is not completely clear, but I'll assume that "bind" is boost::bind (or tr1::bind). A couple of problems with bind(Display::SomeTask, this, _1): It should be &Display::SomeTask The _1 placeholder makes no sense because that creates an unary function object and: Display::SomeTask takes no arguments Action::operator() takes no arguments Using Boost.Function and Boost.Bind, here's what you could write to acheive what I guess you're trying to do: typedef boost::function<void(void)> Action; class Data { public: Data(); ~Data(); Register(Action action) { _a = action; } private: Action _a; }; class Display { public: Display(Data d) { d.Register( bind(&Display::SomeTask, this) ); } ~Display(); void SomeTask(); };
1,239,613
1,241,852
Use Objective-C game engine in C++ iPhone game?
You often hear that C++ is preferable to Objective-C for games, especially in a resource-constrained environment like the iPhone. (I know you still need some Objective-C to initially talk to iPhone services.) Yet, the 2D game engine of choice these days seems to be Cocos2d, which is Objective-C. I understand that what Apple calls "Objective-C++" allows you to mix C++ and Objective-C classes in a single file, but you can't mix and match the languages' constructs within the same class or function. So, is it sensible/possible to use Cocos2d for a C++ game? Do you have to write a lot of "glue" code? I'd like to avoid some of the heavy lifting that a direct OpenGL-ES approach would require.
I'm currently prototyping a game with Cocos2d. I'm writing the game logic in C++ with Chipmunk and then using Cocos to implement the view layer. You can indeed mix C++ and Objective-C freely in the same class, function and line of code. I'm sure there are limits, like you probably can't mix Objective-C and C++ method definition syntax in a class interface (I actually hadn't thought to try), but for most practical purposes you can mix freely. If you are only targeting iPhone then I wouldn't be too worried about writing everything in Objective-C. As others have mentioned, if anything is actually a performance bottleneck you can just profile and optimize it. I am writing my game core in C++ because I may want to deploy on other platforms and in that case Objective-C will become a liability.
1,239,845
1,239,947
CMake build mode RelWithDebInfo
I think that I understand the difference between Release and Debug build modes. The main differences are that in Debug mode the executable produced isn't optimized (as this could make debugging harder) and the debug symbols are included. While building PCRE, one of the external dependencies for WinMerge, I noticed a build mode that I hadn't seen before: RelWithDebInfo. The difference between Debug and RelWithDebInfo is mentioned here: http://www.cmake.org/pipermail/cmake/2001-October/002479.html. Excerpt: "RelwithDebInfo is quite similar to Release mode. It produces fully optimized code, but also builds the program database, and inserts debug line information to give a debugger a good chance at guessing where in the code you are at any time." This sounds like a really good idea, but it's not necessarily obvious how to set it up. This link describes how to enable it for VC++: http://www.cygnus-software.com/papers/release_debugging.html Am I missing something, or does it not make sense to compile all release code as RelWithDebInfo?
Am I missing something, or does it not make sense to compile all release code as RelWithDebInfo? It depends on how much you trust your customer with the debugging information. Additional Info: gcc encodes the debugging information into the object code. Here is the pdb equivalent for gcc: How to generate gcc debug symbol outside the build target? Note, that cmake doesn't appear to support this approach out of the box.
1,239,855
1,240,032
Pad a C++ structure to a power of two
I'm working on some C++ code for an embedded system. The I/O interface the code uses requires that the size of each message (in bytes) is a power of two. Right now, the code does something like this (in several places): #pragma pack(1) struct Message { struct internal_ { unsigned long member1; unsigned long member2; unsigned long member3; /* more members */ } internal; char pad[64-sizeof(internal_)]; }; #pragma pack() I'm trying to compile the code on a 64-bit Fedora for the first time, where long is 64-bits. In this case, sizeof(internal_) is greater than 64, the array size expression underflows, and the compiler complains that the array is too large. Ideally, I'd like to be able to write a macro that will take the size of the structure and evaluate at compile time the required size of the padding array in order to round the size of the structure out to a power of two. I've looked at the Bit Twiddling Hacks page, but I don't know if any of the techniques there can really be implemented in a macro to be evaluated at compile time. Any other solutions to this problem? Or should I perpetuate the problem and just change the magical 64 to a magical 128?
Use a template metaprogram. (Edited in response to comment). #include <iostream> #include <ostream> using namespace std; template <int N> struct P { enum { val = P<N/2>::val * 2 }; }; template <> struct P<0> { enum { val = 1 }; }; template <class T> struct PadSize { enum { val = P<sizeof (T) - 1>::val - sizeof (T) }; }; template <class T, int N> struct PossiblyPadded { T payload; char pad[N]; }; template <class T> struct PossiblyPadded<T, 0> { T payload; }; template <class T> struct Holder : public PossiblyPadded<T, PadSize<T>::val> { }; int main() { typedef char Arr[6]; Holder<Arr> holder; cout << sizeof holder.payload << endl; // Next line fails to compile if sizeof (Arr) is a power of 2 // but holder.payload always exists cout << sizeof holder.pad << endl; }
1,239,908
1,239,940
Why doesn't a derived template class have access to a base template class' identifiers?
Consider: template <typename T> class Base { public: static const bool ZEROFILL = true; static const bool NO_ZEROFILL = false; } template <typename T> class Derived : public Base<T> { public: Derived( bool initZero = NO_ZEROFILL ); // NO_ZEROFILL is not visible ~Derived(); } I am not able compile this with GCC g++ 3.4.4 (cygwin). Prior to converting these to class templates, they were non-generic and the derived class was able to see the base class's static members. Is this loss of visibility in a requirement of the C++ spec or is there a syntax change that I need to employ? I understand that each instantiation of Base<T> will have it's own static member "ZEROFILL" and "NO_ZEROFILL", that Base<float>::ZEROFILL and Base<double>::ZEROFILL are different variables, but i don't really care; the constant is there for readability of the code. I wanted to use a static constant because that is more safe in terms of name conflicts rather than a macro or global.
That's two-phase lookup for you. Base<T>::NO_ZEROFILL (all caps identifiers are boo, except for macros, BTW) is an identifier that depends on T. Since, when the compiler first parses the template, there's no actual type substituted for T yet, the compiler doesn't "know" what Base<T> is. So it cannot know any identifiers you assume to be defined in it (there might be a specialization for some Ts that the compiler only sees later) and you cannot omit the base class qualification from identifiers defined in the base class. That's why you have to write Base<T>::NO_ZEROFILL (or this->NO_ZEROFILL). That tells the compiler that NO_ZEROFILL is something in the base class, which depends on T, and that it can only verify it later, when the template is instantiated. It will therefore accept it without trying to verify the code. That code can only be verified later, when the template is instantiated by supplying an actual parameter for T.
1,239,938
1,239,977
Accessing an array out of bounds gives no error, why?
I am assigning values in a C++ program out of the bounds like this: #include <iostream> using namespace std; int main() { int array[2]; array[0] = 1; array[1] = 2; array[3] = 3; array[4] = 4; cout << array[3] << endl; cout << array[4] << endl; return 0; } The program prints 3 and 4. It should not be possible. I am using g++ 4.3.3 Here is compile and run command $ g++ -W -Wall errorRange.cpp -o errorRange $ ./errorRange 3 4 Only when assigning array[3000]=3000 does it give me a segmentation fault. If gcc doesn't check for array bounds, how can I be sure if my program is correct, as it can lead to some serious issues later? I replaced the above code with vector<int> vint(2); vint[0] = 0; vint[1] = 1; vint[2] = 2; vint[5] = 5; cout << vint[2] << endl; cout << vint[5] << endl; and this one also produces no error.
Welcome to every C/C++ programmer's bestest friend: Undefined Behavior. There is a lot that is not specified by the language standard, for a variety of reasons. This is one of them. In general, whenever you encounter undefined behavior, anything might happen. The application may crash, it may freeze, it may eject your CD-ROM drive or make demons come out of your nose. It may format your harddrive or email all your porn to your grandmother. It may even, if you are really unlucky, appear to work correctly. The language simply says what should happen if you access the elements within the bounds of an array. It is left undefined what happens if you go out of bounds. It might seem to work today, on your compiler, but it is not legal C or C++, and there is no guarantee that it'll still work the next time you run the program. Or that it hasn't overwritten essential data even now, and you just haven't encountered the problems, that it is going to cause — yet. As for why there is no bounds checking, there are a couple aspects to the answer: An array is a leftover from C. C arrays are about as primitive as you can get. Just a sequence of elements with contiguous addresses. There is no bounds checking because it is simply exposing raw memory. Implementing a robust bounds-checking mechanism would have been almost impossible in C. In C++, bounds-checking is possible on class types. But an array is still the plain old C-compatible one. It is not a class. Further, C++ is also built on another rule which makes bounds-checking non-ideal. The C++ guiding principle is "you don't pay for what you don't use". If your code is correct, you don't need bounds-checking, and you shouldn't be forced to pay for the overhead of runtime bounds-checking. So C++ offers the std::vector class template, which allows both. operator[] is designed to be efficient. The language standard does not require that it performs bounds checking (although it does not forbid it either). 
A vector also has the at() member function which is guaranteed to perform bounds-checking. So in C++, you get the best of both worlds if you use a vector. You get array-like performance without bounds-checking, and you get the ability to use bounds-checked access when you want it.
1,240,218
1,927,684
Does a Qt application work in Google Native Client?
I'm not familiar with Qt or with Google Native Client. Is it possible for a TRIVIAL Qt console application to be ported to Google Native Client? I understand that some work would be involved. But the question is, how much if it's even possible?
A Qt developer has managed to get some Qt examples running under Native Client: http://blog.qt.io/blog/2009/12/17/take-it-with-a-grain-of-salt/
1,240,242
1,240,279
Should destructors be exported in Windows DLL Libraries?
In generating Windows DLL dynamic libraries, you are asked to declare which functions should be exported so that some functions maybe left private to the DLL and not accessible by other applications. I haven't seen anything mentioned regarding whether destructors need to be exported or are they automatically handled by the compiler or windows kernel? As in if I don't export the destructor and they dynamically allocate a class which I declared to be exportable, can they successfully call delete on it if the destructor is not exported?
In general, any class with a constructor should export the destructor as well. That being said, there are a couple of things to be wary of here... If you're building on Windows, you need to be careful about mixing VS versions with libraries. If you're only going to be distributing your library as a DLL, exporting constructors and destructors is a bad idea. The problem is in the C++ runtimes. It's pretty much a requirement that the same runtime that handles memory allocation needs to handle the deallocation. This is the #1 cause of "bad things" that happen when you try to use a library compiled in VS 2005 from within VS 2008, for example. The solution for this is to provide factory methods to create your class (allocation is handled by the runtime with which you compiled) as well as methods to delete/destruct your class (so the deallocation happens in the same runtime).
1,240,634
1,241,325
How to get rid of warning LNK4006 when not using templates?
I know the question is not very descriptive, but I couldn't phrase it better. I'm trying to compile a statically linked library that has several objects, and all the objects contain the following: #include "foo.h" foo.h is something along these lines: #pragma once template<class T> class DataT{ private: T m_v; public: DataT(T v) : m_v(v){} }; typedef DataT<double> Data; Now, everything works fine, but if I change DataT to be just Data with double instead of T, I get a LNK4006 warning at link time for each .obj stating that the .ctor was already defined. Edit 1: #pragma once class Data{ private: double m_v; public: Data(double v) : m_v(v){} }; Edit 2: I'm using MSVC7. The .ctor is actually included in both cases as in ... public: Data(double v); #include "foo.inl" ... //foo.inl Data::Data(double v): m_v(v) {} What I'm trying to accomplish, though, is not to have that compiled into each object, but provided as a header the user can use.
I'm not sure what you're trying to do in the example for edit #2, but I think it might help if you have the following in foo.inl: inline Data::Data(double v): m_v(v) {} If the contents of foo.inl is also being included in something where the inline keyword won't work or shouldn't be, you can probably use the preprocessor to handle the difference by using a macro that expands to nothing or inline as appropriate.
1,240,703
1,240,717
storage of user, error, exception messages (c++)
Rather simple question. Where should I store error,exception, user messages? By far, I always declared local strings inside the function where it is going to be invoked and did not bother. e.g. SomeClass::function1(...) { std::string str1("message1"); std::string str2("message2"); std::string str3("message3"); ... // some code ... } Suddenly I realized that since construction & initialization are called each time and it might be quite expensive. Would it be better to store them as static strings in class or even in a separate module? Localization is not the case here. Thanks in advance.
Why not just use a string constant when you need it? SomeClass::function1(...) { /* ... */ throw std::runtime_error("The foo blortched the baz!"); /* ... */ } Alternately, you can use static const std::strings. This is appropriate if you expect to copy them to a lot of other std::strings, and your C++ implementation does copy-on-write: SomeClass::function1(...) { static const std::string str_quux("quux"); // initialized once, at program start xyz.someMember = str_quux; // might not require an allocation+copy } If you expect to make lots of copies of these strings, and you don't have copy-on-write (or can't rely on it being present), you might want to look into using boost::flyweight.
1,240,876
1,240,898
Stylistic question concerning returning void
Consider the following contrived example: void HandleThat() { ... } void HandleThis() { if (That) return HandleThat(); ... } This code works just fine, and I'm fairly sure it's spec-valid, but I (perhaps on my own) consider this unusual style, since the call appears to return the result of the function, despite the fact that both functions are prototyped to be void. Typically, I would expect to see: if (That) {HandleThat(); return;} which, I feel, leaves no ambiguity as to what's going on. SO community, can I get your opinion on whether the returning-void coding style is confusing or problematic? It has the feel of an idiom; should I use this or avoid it? Generally I'd strive for clarity and use the second style. On the other hand, there's a neatness to the first form that draws me to it somewhat.
I agree with you, the first style is confusing because there's the implication that some sort of value is getting returned. In fact I had to read it over a couple times because of that. When returning from a function prototyped void, it should just be return;
1,241,000
1,241,031
Unmanaged C++ Get the current process id? (Console Application)
How can I get the current process id from an unmanaged C++ console application? I see that GetWindowThreadProcessId Works when you have an HWND, but what can I do for a console application?
Have you tried GetCurrentProcessId? http://msdn.microsoft.com/en-us/library/ms683180(VS.85).aspx
1,241,099
1,241,592
C++: access const member vars through class or an instance?
In C++, is there any reason to not access static member variables through a class instance? I know Java frowns on this and was wondering if it matters in C++. Example: class Foo { static const int ZERO = 0; static const int ONE = 1; ... }; void bar(const Foo& inst) { // is this ok? int val1 = inst.ZERO; // or should I prefer: int val2 = Foo::ZERO ... }; I have a bonus second question. If I declare a static double, I have to define it somewhere and that definition has to repeat the type. Why does the type have to be repeated? For example: In a header: class Foo { static const double d; }; In a source file: const double Foo::d = 42; Why do I have to repeat the "const double" part in my cpp file?
For the first question, aside from the matter of style (it makes it obvious it's a class variable and has no associated object), Fred Larsen, in comments to the question, makes reference to a previous question. Read Adam Rosenthal's answer for a very good reason why you want to be careful with this. (I'd up-vote Fred if he'd posted it as an answer, but I can't, so credit where it's due. I did up-vote Adam.) As to your second question: Why do I have to repeat the "const double" part in my cpp file? You have to repeat the type primarily as an implementation detail: it's how the C++ compiler parses a declaration. This isn't strictly ideal for local variables either, and C++1x (formerly C++0x) makes use of the auto keyword to avoid needing to be repetitive for regular function variables. So this: vector<string> v; vector<string>::iterator it = v.begin(); can become this: vector<string> v; auto it = v.begin(); There's no clear reason why this couldn't work with static as well, so in your case this: const double Foo::d = 42; could well become this: static Foo::d = 42; The key is to have some way of identifying this as a declaration. Note I say no clear reason: C++'s grammar is a living legend: it is extremely hard to cover all of its edge cases. I don't think the above is ambiguous, but it might be. If it isn't, they could add that to the language. Tell them about it ... for C++2x :/.
1,241,144
1,241,199
Socket remains open after program has closed (C++)
I'm currently writing a small server application, and my problem is, that when I close my app (or better, press the terminate button in eclipse), the socket sometimes stays open, so when I execute my app the next time, bind() will fail with "Address already in use". How can I properly close my sockets when the program exits? I already put close(mySocket); in the class destructors, but that doesn't seem to change anything.
http://hea-www.harvard.edu/~fine/Tech/addrinuse.html should answer a lot of your questions. I tend to use SO_REUSEADDR to work around that problem.
1,241,399
1,241,548
What is a .h.gch file?
I recently had a class project where I had to make a program with G++. I used a makefile and for some reason it occasionally left a .h.gch file behind. Sometimes, this didn't affect the compilation, but every so often it would result in the compiler issuing an error for an issue which had been fixed or which did not make sense. I have two questions: 1) What is a .h.gch file and what is one used for? and 2) Why would it cause such problems when it wasn't cleaned up?
A .gch file is a precompiled header. If a .gch is not found then the normal header files will be used. However, if your project is set to generate pre-compiled headers it will make them if they don’t exist and use them in the next build. Sometimes the *.h.gch will get corrupted or contain outdated information, so deleting that file and compiling it again should fix it.
1,241,848
1,241,892
Why doesn’t WPF support C++.NET - the way WinForms does?
As a C++ stickler, this has really been bugging me. I've always liked the idea of the "language-independant framework" that Microsoft came up with roughly a decade ago. Why have they dropped the ball on this idea? Does anyone know the reasoning behind it?
Part of the reason will be that C++ support is actually two languages in one -- the native and the CLI variants; that extra development load has been acknowledged by the Visual C++ team as the reason that proper MSBuild integration lagged (lags? I haven't checked in 2008 or later) behind other languages. Another part will be to do with the code generation during compilation that goes on in a C# build to support e.g. the binding "magic"; I've found that even in F#, you don't get it "just happening".
1,241,973
1,241,998
push_back(this) pushes wrong pointer onto vector
I have a vector of UnderlyingClass pointers stored in another object, and inside a method in UnderlyingClass I want to add the "this" pointer to the end of that vector. When I look at the contents of the vector immediately after the push_back call, the wrong pointer is in there. What could be going wrong? cout << "this: " << this << endl; aTextBox.callbacks.push_back(this); cout << "size is " << aTextBox.callbacks.size() << endl; cout << "size-1: " << aTextBox.callbacks[aTextBox.callbacks.size()-1] << endl; cout << "back: " << aTextBox.callbacks.back() << endl; cout << "0: " << aTextBox.callbacks[0] << endl; cout << "this: " << this << endl; cout << "text box ptr: " << &aTextBox << endl; cout << "text box callbacks ptr: " << &(aTextBox.callbacks) << endl; Here's the output: this: 0x11038f70 size is 1 size-1: 0x11038fa8 back: 0x11038fa8 0: 0x11038fa8 this: 0x11038f70 text box ptr: 0x11039070 text box callbacks ptr: 0x11039098 By the way, callbacks is a vector of WebCallback pointers, and UnderlyingClass implements WebCallback: std::vector<WebCallback*> callbacks; class UnderlyingClass :public WebCallback Copied from comments: (see Answer below) output: this: 0x6359f70 size is 1 size-1: 0x6359fa8 back: 0x6359fa8 0: 0x6359fa8 this: 0x6359f70 WebCallback This: 0x6359fa8 text box ptr: 0x635a070 text box callbacks ptr: 0x635a098 okay, so that explains why the pointers don't match up. My real question, then, is this: how do I get the correct version of a method to be called? Specifically, WebCallback stipulates that a function onWebCommand() be implemented, and right now callbacks[0]->onWebCommand() is not causing the onWebCommand() that I wrote in UnderlyingClass to be executed.
This can happen with multiple inheritance, if your layout looks like this: class UnderlyingBase { char d[56]; }; class UnderlyingClass :public UnderlyingBase, public WebCallback { }; Then the layout can be like this, for each object involved. The last one is the complete object containing the first two ones as base-class sub-objects, and that you take the pointer of, and which will be converted to WebCallback*. [UnderlyingBase] > char[56]: 56 bytes, offset 0x0 [WebCallback] > unknown: x bytes, offset 0x0 [UnderlyingClass] > [UnderlyingBase]: 56 bytes (0x38 hex), offset 0x0 > [WebCallback]: x bytes, offset 0x38 Now since your vector contains WebCallback*, the compiler adjusts the pointer to point to the WebCallback sub-object, while when it would point to UnderlyingClass or UnderlyingBase, it would start 0x38 (56) bytes earlier.
1,242,005
2,671,834
What is the most efficient way to display decoded video frames in Qt?
What is the fastest way to display images to a Qt widget? I have decoded the video using libavformat and libavcodec, so I already have raw RGB or YCbCr 4:2:0 frames. I am currently using a QGraphicsView with a QGraphicsScene object containing a QGraphicsPixmapItem. I am currently getting the frame data into a QPixmap by using the QImage constructor from a memory buffer and converting it to QPixmap using QPixmap::fromImage(). I like the results of this and it seems relatively fast, but I can't help but think that there must be a more efficient way. I've also heard that the QImage to QPixmap conversion is expensive. I have implemented a solution that uses an SDL overlay on a widget, but I'd like to stay with just Qt since I am able to easily capture clicks and other user interaction with the video display using the QGraphicsView. I am doing any required video scaling or colorspace conversions with libswscale so I would just like to know if anyone has a more efficient way to display the image data after all processing has been performed. Thanks.
Thanks for the answers, but I finally revisited this problem and came up with a rather simple solution that gives good performance. It involves deriving from QGLWidget and overriding the paintEvent() function. Inside the paintEvent() function, you can call QPainter::drawImage(...) and it will perform the scaling to a specified rectangle for you using hardware if available. So it looks something like this: class QGLCanvas : public QGLWidget { public: QGLCanvas(QWidget* parent = NULL); void setImage(const QImage& image); protected: void paintEvent(QPaintEvent*); private: QImage img; }; QGLCanvas::QGLCanvas(QWidget* parent) : QGLWidget(parent) { } void QGLCanvas::setImage(const QImage& image) { img = image; } void QGLCanvas::paintEvent(QPaintEvent*) { QPainter p(this); //Set the painter to use a smooth scaling algorithm. p.setRenderHint(QPainter::SmoothPixmapTransform, 1); p.drawImage(this->rect(), img); } With this, I still have to convert the YUV 420P to RGB32, but ffmpeg has a very fast implementation of that conversion in libswscale. The major gains come from two things: No need for software scaling. Scaling is done on the video card (if available) Conversion from QImage to QPixmap, which is happening in the QPainter::drawImage() function is performed at the original image resolution as opposed to the upscaled fullscreen resolution. I was pegging my processor on just the display (decoding was being done in another thread) with my previous method. Now my display thread only uses about 8-9% of a core for fullscreen 1920x1200 30fps playback. I'm sure it could probably get even better if I could send the YUV data straight to the video card, but this is plenty good enough for now.
1,242,185
1,242,214
loop condition evaluation
Just a quick question. I have a loop that looks like this: for (int i = 0; i < dim * dim; i++) Is the condition in a for loop re-evaluated on every loop? If so, would it be more efficient to do something like this?: int dimSquare = dim * dim; for (int i = 0; i < dimSquare; i++) Thanks -Faken
In general, if you would for example change the value of "dim" inside your loop, it would be re-evaluated every time. But since that is not the case in your example, a decent compiler would optimize your code and you wouldn't see any difference in performance.
1,242,190
1,279,744
C++ Memory Efficient Solution for Ax=b Linear Algebra System
I am using Numeric Library Bindings for Boost UBlas to solve a simple linear system. The following works fine, except it is limited to handling matrices A(m x m) for relatively small 'm'. In practice I have a much larger matrix with dimension m= 10^6 (up to 10^7). Is there existing C++ approach for solving Ax=b that uses memory efficiently. #include<boost/numeric/ublas/matrix.hpp> #include<boost/numeric/ublas/io.hpp> #include<boost/numeric/bindings/traits/ublas_matrix.hpp> #include<boost/numeric/bindings/lapack/gesv.hpp> #include <boost/numeric/bindings/traits/ublas_vector2.hpp> // compileable with this command //g++ -I/home/foolb/.boost/include/boost-1_38 -I/home/foolb/.boostnumbind/include/boost-numeric-bindings solve_Axb_byhand.cc -o solve_Axb_byhand -llapack namespace ublas = boost::numeric::ublas; namespace lapack= boost::numeric::bindings::lapack; int main() { ublas::matrix<float,ublas::column_major> A(3,3); ublas::vector<float> b(3); for(unsigned i=0;i < A.size1();i++) for(unsigned j =0;j < A.size2();j++) { std::cout << "enter element "<<i << j << std::endl; std::cin >> A(i,j); } std::cout << A << std::endl; b(0) = 21; b(1) = 1; b(2) = 17; lapack::gesv(A,b); std::cout << b << std::endl; return 0; }
Short answer: Don't use Boost's LAPACK bindings, these were designed for dense matrices, not sparse matrices, use UMFPACK instead. Long answer: UMFPACK is one of the best libraries for solving Ax=b when A is large and sparse. http://www.cise.ufl.edu/research/sparse/umfpack/ http://www.cise.ufl.edu/research/sparse/umfpack/UMFPACK/Doc/QuickStart.pdf Below is sample code (based on umfpack_simple.c) that generates a simple A and b and solves Ax = b. #include <stdlib.h> #include <stdio.h> #include "umfpack.h" int *Ap; int *Ai; double *Ax; double *b; double *x; /* Generates a sparse matrix problem: A is n x n tridiagonal matrix A(i,i-1) = -1; A(i,i) = 3; A(i,i+1) = -1; */ void generate_sparse_matrix_problem(int n){ int i; /* row index */ int nz; /* nonzero index */ int nnz = 2 + 3*(n-2) + 2; /* number of nonzeros*/ int *Ti; /* row indices */ int *Tj; /* col indices */ double *Tx; /* values */ /* Allocate memory for triplet form */ Ti = malloc(sizeof(int)*nnz); Tj = malloc(sizeof(int)*nnz); Tx = malloc(sizeof(double)*nnz); /* Allocate memory for compressed sparse column form */ Ap = malloc(sizeof(int)*(n+1)); Ai = malloc(sizeof(int)*nnz); Ax = malloc(sizeof(double)*nnz); /* Allocate memory for rhs and solution vector */ x = malloc(sizeof(double)*n); b = malloc(sizeof(double)*n); /* Construct the matrix A*/ nz = 0; for (i = 0; i < n; i++){ if (i > 0){ Ti[nz] = i; Tj[nz] = i-1; Tx[nz] = -1; nz++; } Ti[nz] = i; Tj[nz] = i; Tx[nz] = 3; nz++; if (i < n-1){ Ti[nz] = i; Tj[nz] = i+1; Tx[nz] = -1; nz++; } b[i] = 0; } b[0] = 21; b[1] = 1; b[2] = 17; /* Convert Triplet to Compressed Sparse Column format */ (void) umfpack_di_triplet_to_col(n,n,nnz,Ti,Tj,Tx,Ap,Ai,Ax,NULL); /* free triplet format */ free(Ti); free(Tj); free(Tx); } int main (void) { double *null = (double *) NULL ; int i, n; void *Symbolic, *Numeric ; n = 500000; generate_sparse_matrix_problem(n); (void) umfpack_di_symbolic (n, n, Ap, Ai, Ax, &Symbolic, null, null); (void) umfpack_di_numeric (Ap, Ai, Ax, Symbolic, 
&Numeric, null, null); umfpack_di_free_symbolic (&Symbolic); (void) umfpack_di_solve (UMFPACK_A, Ap, Ai, Ax, x, b, Numeric, null, null); umfpack_di_free_numeric (&Numeric); for (i = 0 ; i < 10 ; i++) printf ("x [%d] = %g\n", i, x [i]); free(b); free(x); free(Ax); free(Ai); free(Ap); return (0); } The function generate_sparse_matrix_problem creates the matrix A and the right-hand side b. The matrix is first constructed in triplet form. The vectors Ti, Tj, and Tx fully describe A. Triplet form is easy to create but efficient sparse matrix methods require Compressed Sparse Column format. Conversion is performed with umfpack_di_triplet_to_col. A symbolic factorization is performed with umfpack_di_symbolic. A sparse LU decomposition of A is performed with umfpack_di_numeric. The lower and upper triangular solves are performed with umfpack_di_solve. With n as 500,000, on my machine, the entire program takes about a second to run. Valgrind reports that 369,239,649 bytes (just a little over 352 MB) were allocated. Note this page discusses Boost's support for sparse matrices in Triplet (Coordinate) and Compressed format. If you like, you can write routines to convert these boost objects to the simple arrays UMFPACK requires as input.
1,242,357
1,242,442
How to release memory from std::deque?
I'm using a std::deque to store a fairly large number of objects. If I remove a bunch of those objects, it appears to me that its memory usage does not decrease, in a similar fashion to std::vector. Is there a way to reduce it? I know that in a vector you have to use the 'swap trick', which I assume would work here too, but I'd rather avoid that since it would require copying all the elements left in the container (and thus requires that you have enough memory to store every object twice). I'm not intimately familiar with the implementation of deque, but my understanding of it is that it might be possible to achieve such a thing without lots of copies (whereas with a vector it's clearly not). I'm using the VC++ (Dinkumware) STL, if that makes any difference.
There is no way to do this directly in a std::deque. However, it's easy to do by using a temporary (which is basically what happens in a std::vector when you shrink its capacity). Here is a good article on std::deque, comparing it to std::vector. The very bottom shows a clean way to swap out and shrink a vector, which works the same with deque.
1,242,742
1,242,748
Compiling PARDISO linear solver test case with GCC
I am trying to compile a linear system solver using PARDISO. The test case (pardiso_sym.c) was also downloaded from the same website above. I have the following files inside the directory: [gv@emerald my-pardiso]$ ls -lh total 1.3M -rw-r--r-- 1 gv hgc0746 1.3M Aug 7 11:59 libpardiso_GNU_IA64.so -rw-r--r-- 1 gv hgc0746 7.2K Nov 13 2007 pardiso_sym.c Then I try to compile it with the following command: [gv@emerald my-pardiso]$ gcc pardiso_sym.c -o pardiso_sym -L . -llibpardiso_GNU_IA64.so -L/home/gv/.boost/include/boost-1_38 -llapack But it gives this error: /usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.3.2/../../../../x86_64-unknown-linux-gnu/bin/ld: cannot find -llibpardiso_GNU_IA64.so collect2: ld returned 1 exit status What's wrong with my compilation method? This is the additional info of my system: [gv@emerald my-pardiso]$ uname -a Linux gw05 2.6.18-92.1.13.el5 #1 SMP Wed Sep 24 19:32:05 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux [gv@emerald my-pardiso]$ gcc --version gcc (GCC) 4.3.2 Update: The library is recognized using Dave Gamble's suggestion. But now it gives a different error: $ gcc pardiso_sym.c -o pardiso_sym -L . -lpardiso_GNU_IA64 -L/home/gv/.boost/include/boost-1_38 -llapack ./libpardiso_GNU_IA64.so: undefined reference to `s_stop' ./libpardiso_GNU_IA64.so: undefined reference to `s_wsfe' ./libpardiso_GNU_IA64.so: undefined reference to `e_wsfe' ./libpardiso_GNU_IA64.so: undefined reference to `z_abs' ./libpardiso_GNU_IA64.so: undefined reference to `s_cat' ./libpardiso_GNU_IA64.so: undefined reference to `s_copy' ./libpardiso_GNU_IA64.so: undefined reference to `do_fio'
EDIT: I read the pardiso manual. Here's the fix: gcc pardiso_sym.c -o pardiso_sym -L . -lpardiso_GNU_IA64 -L/home/gv/.boost/include/boost-1_38 -llapack Here I've removed the "lib" from the start and the ".so" from the end of -lpardiso_GNU_IA64
1,242,820
1,243,108
Can a C++ Static Library link to shared library?
Say I have a static C++ lib, static.lib and I want to call some functions from a C++ shared lib, say shared.lib. Is it possible? Now assume that I have another shared lib, say shared2.lib which links to static.lib but does not link to shared.lib. Does the linker automatically link shared2.lib to shared.lib in this case? I am using Microsoft Visual Studio 2003.
Static libraries are not linked. They are just a collection of object files (*.obj or *.o) that are archived together into a library file (kind of like a tar/zip file) to make it easier for the linker to find the symbols it needs. A static lib can call functions that are not defined (but are only declared in a header file), as it is only compiled. Then when you link an exe or dll that uses the static lib you will have to link with another library that provides the functions called from the static lib but not defined in it. If you want the linker to automatically link other libraries, Stephen's suggestion will work and is used by very reputable libraries like boost and stlport. To do this, put the pragma in the main header file for the static library. You should include the static library and its dependencies. However, IMO this feature is really meant for library writers, where the library is in the system library path so the linker will easily find it. Also, in the case of boost and stlport, they use this feature to support multiple versions of the same libraries, with options defined with #defines, where different options require different versions of the library to be linked. This means that users are less likely to configure boost one way and link with a library configured another. My preference for application code is to explicitly link the required parts.
1,242,830
1,242,835
Constructor initialization-list evaluation order
I have a constructor that takes some arguments. I had assumed that they were constructed in the order listed, but in one case it appears they were being constructed in reverse resulting in an abort. When I reversed the arguments the program stopped aborting. This is an example of the syntax I'm using. The thing is, a_ needs to be initialized before b_ in this case. Can you guarantee the order of construction? e.g. class A { public: A(OtherClass o, string x, int y) : a_(o), b_(a_, x, y) { } OtherClass a_; AnotherClass b_; };
It depends on the order of member variable declaration in the class. So a_ will be the first one, then b_ will be the second one in your example.
1,242,947
1,242,960
Quickest way to find substrings in text files
What's the fastest way to find strings in text files? Case scenario: Looking for a particular path in a text file with around 50000 file paths listed (each path has its own line).
A file of that size should easily fit in memory and you can make it into a std::set (or even better a hashset, if you have a library of that at hand) with the paths as its items. Checking if an exact path is there will then be very fast. If you need to look for sub-paths as well, a sorted std::vector (if you're looking for prefixes only) may be the only useful approach -- or if you're looking for completely general substrings of paths then you'll need to scan through all the vector anyway, but unless you have to do it a zillion times even that wouldn't be too bad.
1,243,241
1,243,317
Given an Array, is there an algorithm that can allocate memory out of it?
I'm doing some graphics programming and I'm using vertex pools. I'd like to be able to allocate a range out of the pool and use this for drawing. What's different about the solution I need, compared to a C allocator, is that I never call malloc. Instead I preallocate the array and then need an object that wraps that up and keeps track of the free space and allocates a range (a pair of begin/end pointers) from the allocation I pass in. Much thanks.
In general: you're looking for a memory manager, which uses a (see Wikipedia) memory pool (like the boost::pool as answered by TokenMacGuy). They come in many flavours. Important considerations: block size (fixed or variable; number of different block sizes; can the block size usage be predicted (statistically)?) efficiency (some managers have 2^n block sizes, i.e. for use in network stacks where they search for the best-fit block; very good performance and no fragmentation at the cost of wasting memory) administration overhead (I presume that you'll have many, very small blocks; so the number of ints and pointers maintained by the memory manager is significant for efficiency) In the case of boost::pool, I think the simple segregated storage is worth a look. It will allow you to configure a memory pool with many different block sizes for which a best match is searched.
1,243,331
1,243,344
Disjoint set as linked list
Can anyone point me to some info on disjoint sets as linked lists? I can't find any code on this. Language: C++
Well, I think you can find information in this page of Wikipedia. Of course, that information is written in pseudo-code, but it is not difficult to translate.
1,243,428
1,243,435
Convert string to int with bool/fail in C++
I have a std::string which could be a string or could be a value (such as 0). What is the best or easiest way to convert the std::string to int with the ability to fail? I want a C++ version of C#'s Int32.TryParse.
Use boost::lexical_cast. If the cast cannot be done, it will throw an exception. #include <boost/lexical_cast.hpp> #include <iostream> #include <string> int main(void) { std::string s; std::cin >> s; try { int i = boost::lexical_cast<int>(s); /* ... */ } catch(...) { /* ... */ } } Without boost: #include <iostream> #include <sstream> #include <string> int main(void) { std::string s; std::cin >> s; try { std::stringstream ss(s); int i; if ((ss >> i).fail() || !(ss >> std::ws).eof()) { throw std::bad_cast(); } /* ... */ } catch(...) { /* ... */ } } Faking boost: #include <iostream> #include <sstream> #include <string> template <typename T> T lexical_cast(const std::string& s) { std::stringstream ss(s); T result; if ((ss >> result).fail() || !(ss >> std::ws).eof()) { throw std::bad_cast(); } return result; } int main(void) { std::string s; std::cin >> s; try { int i = lexical_cast<int>(s); /* ... */ } catch(...) { /* ... */ } } If you want no-throw versions of these functions, you'll have to catch the appropriate exceptions (I don't think boost::lexical_cast provides a no-throw version), something like this: #include <iostream> #include <sstream> #include <string> template <typename T> T lexical_cast(const std::string& s) { std::stringstream ss(s); T result; if ((ss >> result).fail() || !(ss >> std::ws).eof()) { throw std::bad_cast(); } return result; } template <typename T> bool lexical_cast(const std::string& s, T& t) { try { // code-reuse! you could wrap // boost::lexical_cast up like // this as well t = lexical_cast<T>(s); return true; } catch (const std::bad_cast& e) { return false; } } int main(void) { std::string s; std::cin >> s; int i; if (!lexical_cast(s, i)) { std::cout << "Bad cast." << std::endl; } }
1,243,962
1,243,998
C++ Operator overloading - casting from class
While porting Windows code to Linux, I encountered the following error message with GCC 4.2.3. (Yes, I'm aware that it's a slightly old version, but I can't easily upgrade.) main.cpp:16: error: call of overloaded ‘list(MyClass&)’ is ambiguous /usr/include/c++/4.2/bits/stl_list.h:495: note: candidates are: std::list<_Tp, _Alloc>::list(const std::list<_Tp, _Alloc>&) [with _Tp = unsigned char, _Alloc = std::allocator<unsigned char>] /usr/include/c++/4.2/bits/stl_list.h:484: note: std::list<_Tp, _Alloc>::list(size_t, const _Tp&, const _Alloc&) [with _Tp = unsigned char, _Alloc = std::allocator<unsigned char>] I'm using the following code to generate this error. #include <list> class MyClass { public: MyClass(){} operator std::list<unsigned char>() const { std::list<unsigned char> a; return a; } operator unsigned char() const { unsigned char a; return a; } }; int main() { MyClass a; std::list<unsigned char> b = (std::list<unsigned char>)a; return 0; } Has anyone experienced this error? More importantly, how can I get around it? (It's possible to completely avoid the overload, sure, by using functions such as GetChar(), GetList() etc, but I'd like to avoid that.) (By the way, removing "operator unsigned char()" removes the error.)
It compiles properly if you remove the cast, and I've checked that the operator std::list is being executed. int main() { MyClass a; std::list<unsigned char> b = a; return 0; } Or if you cast it to a const reference. int main() { MyClass a; std::list<unsigned char> b = (const std::list<unsigned char>&)a; return 0; }
1,244,001
1,244,036
Deleting from vector in for loop crashes?
I'm having a problem with my looping over a vector, and deleting values from another vector sometimes crashes my program. I have this vector of ints to keep track of which elements should be removed. std::vector<int> trEn; Then I loop through this vector: struct enemyStruct { float x, y, health, mhealth, speed, turnspeed; double angle, tangle; }; std::vector<enemyStruct> enemies; The loop looks like this: for ( unsigned int i = 0; i < bullets.size(); i++ ) { for ( unsigned int j = 0; j < enemies.size(); j++ ) { if ( bullets[i].x > enemies[j].x-10 && bullets[i].x < enemies[j].x+10 && bullets[i].y > enemies[j].y-10 && bullets[i].y < enemies[j].y+10 ) { enemies[j].health-=bullets[i].dmg; if(enemies[j].health<=0){trEn.push_back(j);break;} } } } The bullets vector is just another vector similar to the enemies vector, but with bullets in it. That one does not seem to be the problem. All this code works well, but when it comes to actually delete the items in my enemies vector the program sometimes crashes. std::reverse( trEn.begin(), trEn.end() ); for ( unsigned int g = 0; g < trEn.size(); g++ ) { unsigned int atmp = trEn.at(g); if(atmp<=enemies.size()&&atmp>=0)enemies.erase(enemies.begin()+atmp,enemies.begin()+atmp+1); } trEn.clear(); First I reverse the vector of ints so that it will go from back to front. If I didn't do this all values after trEn[0] would be invalid. This is the loop which gives me a crash, but only sometimes. What I'm trying to do is a top-down shooter game, and it seems that when lots of things should be removed at the same time it just crashes. Please help me with this! Just ask if I was unclear or if there is anything missing.
The only seemingly obvious thing would be: if(atmp<=enemies.size() ... Are you sure you do not mean (atmp < enemies.size()) here? Otherwise your code enemies.erase(enemies.begin()+atmp, ... will for sure produce some serious issues.
1,244,085
1,244,138
Replacing Value of Diagonals in (m x m) Matrix With its Column Sum with Memory Efficient Way in C++
I have the following matrix of size m=4 0.00000 0.09130 0.09130 0.00000 0.04565 0.00000 0.00000 0.00000 0.04565 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 And I want to replace the diagonal of that matrix with (1 - sum of its column). Resulting matrix: 0.90870 0.09130 0.09130 0.00000 0.04565 0.90870 0.00000 0.00000 0.04565 0.00000 0.90870 0.00000 0.00000 0.00000 0.00000 1.00000 So for example for (1,1) we have 1 - (0.04565 + 0.04565 + 0.00000) = 0.90870 Now in actual practice the size of m is very large, on the scale of 10^6 to 10^7. So I can't afford to store the initial matrix in a container. Is there any memory-efficient alternative way to do it? The following is the current implementation I have for slurping it into a vector of vectors. It cannot handle large m (10^6). #include <iostream> #include <vector> #include <fstream> #include <sstream> #include <map> using namespace std; // Initialize Matrix Before Slurping vector <vector<double> > Matrix; Matrix.resize(nofRow); for(size_t i = 0; i < nofRow; ++i) { Matrix[i].resize(nofCol); } if (arg_count !=2 ) { cerr << "expected one argument" << endl; return EXIT_FAILURE; } string line; ifstream myfile (arg_vec[1]); // Slurping it int count1=0; if (myfile.is_open()) { while (getline(myfile,line) ) { stringstream ss(line); double Value; int count2=0; while (ss >> Value) { Matrix[count1][count2] = Value; count2++; } count1++; } myfile.close(); } else { cout << "Unable to open file"; } // Summing up columns vector <double> ColSum; ColSum.resize(nofCol); for(size_t i = 0; i < nofRow; ++i) { for(size_t j = 0; j < nofCol; ++j) { //std::cout <<"["<<i<<"]"<<"["<<j<<"] = " <<Matrix[i][j]<<std::endl; ColSum[j] += Matrix[i][j]; } } // Printing it for(size_t k = 0; k < nofRow; ++k) { for(size_t l = 0; l < nofCol; ++l) { if (k == l ) { double OneMinusSum = 1 - ColSum[k]; //if (OneMinusSum < 0) { OneMinusSum = 1; }; std::cout << OneMinusSum << "\t"; } else { std::cout<< Matrix[k][l] << "\t"; } } std::cout << std::endl; }
Create a vector of size m to store the diagonal. Then go through the file and add the ith column of each line to diag[i]. Now go through the file again and output each line, but replace the value of the ith element on the ith line with diag[i]. This way you only need to store a vector of size m in memory.
1,244,231
1,400,713
TrayIcon balloon does not show up
I compiled my tray icon utility C++ code in Visual Studio 2005 Express Edition and tray icon balloons showed up successfully, but later I deleted my firewall, switched to Windows Firewall, and am now on different firewall software. Now I ran the same project and balloons showed up successfully, but when I rebuilt it, I can't see balloons anymore. My Shell_NotifyIcon method still returns true. Here is the code: nid.cbSize = sizeof( NOTIFYICONDATA_V2_SIZE ); // I've tested NOTIFYICONDATA //and nid as well nid.hWnd = hWnd; nid.uID = uID; nid.uFlags = NIF_ICON | NIF_MESSAGE | NIF_TIP|NIF_INFO; nid.dwInfoFlags = 0x00000004; strcpy(nid.szInfoTitle , balloonTitle); strcpy(nid.szInfo , balloonMsg); int ret = Shell_NotifyIcon( NIM_MODIFY, &nid ); Can anybody suggest where the problem lies? It seems to be related to the OS; my OS is XP and I've even modified "EnableBalloonTips" to 1. Faran Shabbir
Ok I figured it out myself. nid.cbSize = sizeof( NOTIFYICONDATA_V2_SIZE ); should be nid.cbSize = NOTIFYICONDATA_V2_SIZE;
1,244,269
1,244,296
Placement of a method in a Class
I have a C++ class in which many of its member functions have a common set of operations. Putting these common operations in a separate function is important for avoiding redundancy, but where should I place this function ideally? Making it a member function of the class is not a good idea since it makes no sense as a member function of the class, and putting it as a lone function in a header file also doesn't seem to be a nice option. Any suggestion regarding this design question?
If the "set of operations" can be encapsulated in a function that is not inherently tied to the class in question then it probably should be a free function (perhaps in an appropriate namespace). If it's somehow tied to the class but doesn't require a class instance it should probably be a static member function, probably a private function if it doesn't form part of the class interface.
1,244,468
1,244,601
QPixmap of a QGraphicsTextItem
How do you convert/paint a QGraphicsTextItem into a QPixmap?
You can add it to a QGraphicsScene (if it's not already inside one) and then render() the scene to a QPixmap using a QPainter QPixmap pix(100, 100); QPainter paint(&pix); scene.render(&paint); Or, you can save yourself the trouble and just use QPainter::drawText() after changing the current font of the painter. it should provide the same capabilities. Maybe something like this- QPixmap pix(100, 100); QPainter paint(&pix); paint.drawText(0, 0, "Hello World");
1,244,694
1,244,726
makefile/script for small program
I have to frequently compile small programs and run them. Since it was tedious to write the compile command g++ -W -Wall file.cpp -o out every time for each cpp file, I wrote a small script which does the compiling. Here is the script that I wrote #!/bin/bash g++ -W -Wall $1 -o $1.out So, if I have to compile file.cpp, I will do compile file.cpp and it will create the file.cpp.out executable for me. And such file.cpp files don't have any header files or any other dependencies. I know how to write a makefile for a particular file.cpp like this (very simple case) file: file.cpp g++ -W -Wall file.cpp -o file but if I have to compile file2.cpp, I have to change the above makefile again or write a new one. So, what I want to do is, when I give the make file.cpp command it will produce file as executable. And when I give make file2.cpp it will produce file2 as executable, and similarly for other cpp files.
export CXXFLAGS="-W -Wall" rm Makefile make file1 Make has sane defaults. You don't have to write a makefile to use make. Make has a set of generic rules, which are applied automatically when there is no specific rule. One of them is to make 'file' out of 'file.cpp' using a C++ compiler with flags from the environment variable CXXFLAGS. This works a bit like you want...
1,244,847
1,244,908
Error converting a pipe (Handler) to fd on vs 2003
I am trying to notify the main GTK thread (from a separate thread) that some event occurred, using pipes. I get the following warning when I am trying to set up the pipes. What is a good workaround? When I call g_io_channel_win32_new_fd, I see this warning, and thus the pipe isn't created at all :( GLib-WARNING **: giowin32.c:1564: 3 isn't a C library file descriptor int fds[2]; GIOChannel* gioChannels[2]; HANDLE rdPipe, wrPipe; SECURITY_ATTRIBUTES saAttr; saAttr.nLength = sizeof(SECURITY_ATTRIBUTES); saAttr.bInheritHandle = TRUE; saAttr.lpSecurityDescriptor = NULL; if (CreatePipe(&rdPipe, &wrPipe, NULL, 1024)) { fds[0] =_open_osfhandle((gssize)rdPipe, O_RDONLY); fds[1] = _open_osfhandle((long)wrPipe,_O_APPEND); gioChannels[0] =g_io_channel_win32_new_fd(fds[0] ); gioChannels[1] =g_io_channel_win32_new_fd(fds[0] ); g_io_add_watch( gioChannels[1],(GIOCondition) (G_IO_IN | G_IO_HUP), (GIOFunc)SomeCallaback,(gpointer)this ); } The goal is to notify the main application that something occurred in the spawned thread. In my case, I can't use GTK in a multi-threaded way (calling functions of the main thread from the spawned one), so I am trying to do it via pipes. I also saw that it could be a Visual Studio issue in this thread Any suggestions?
Like the error says, handles created by CreatePipe are not file descriptors. The Windows programming model does not use file descriptors, so you cannot normally mix and match Windows and non-Windows I/O functions. I suspect if you removed some of the casts in your code, your compiler would pinpoint the problem - C-style (or reinterpret) casts in C++ code are almost always a sign you are doing something wrong.
1,245,075
1,245,123
XML vs Hardcoded interface?
I'm working on a flexible GUI application that can have ~12 varied layouts. These layouts are all well-defined and won't change. Each layout consists of multiple widgets that interface with a DLL using bit patterns. While the majority of the widgets are the same, the bit patterns used vary depending on the interface type being presented. My gut instinct is to use inheritance: define a generic 'Panel' and have subclasses for the different configurations. However, there are parts of the interface that are user-defined and are spec'd to come from an XML file. Should the entire panel be defined in XML, or just the user-configured sections?
YAGNI: Design your screens for the current requirements, which you specifically state aren't going to change. If a year down the line more customization is needed, make it more customizable then, not now. KISS: If using XML results in less overall code and is simpler than subclassing, use XML. If subclassing results in less code, use subclassing. Experience tells me subclassing is simpler.
1,245,191
1,245,211
where can I find a prime forum for gtk+ (c++) type questions?
I am sorry if it defeats the purpose of this forum, but I see very limited GTK activity here and would like to get heavily involved in it. What are the prime forum(s) where GTK is discussed? I use it primarily with C/C++.
http://www.gtkforums.com? :-) Or, better, use mailing lists: http://www.gtk.org/development.html#MailingLists
1,245,430
1,245,443
Over the last 7-8 years what are the biggest influences on C++ programming?
I started programming in C++. It was my first language, but I have not used it in many years. What are the new developments in the C++ world? What are the BIG things - technologies, books, frameworks, libraries, etc? Over the last 7-8 years what are the biggest influences on C++ programming? Perhaps we could do one influence per post, and that way we can vote on them.
Boost: free peer-reviewed portable C++ source libraries. We emphasize libraries that work well with the C++ Standard Library... We aim to establish "existing practice" and provide reference implementations so that Boost libraries are suitable for eventual standardization. Ten Boost libraries are included in the C++ Standards Committee's Library Technical Report (TR1) and in the new C++11 Standard. C++11 also includes several more Boost libraries in addition to those from TR1. More Boost libraries are proposed for standardization in C++17...
1,245,445
1,245,562
Asymmetric virtual Inheritance diamond in C++
So I have this idea and I think it's basically impossible to implement in C++... but I want to ask. I read through chapter 15 of Stroustrup and didn't get my answer, and I don't think the billion other questions about inheritance diamonds answer this one, so I'm asking here. The question is, what happens when you inherit from two base classes which share a common base class themselves, but only one of the two inherits from it virtually? For example: class CommonBase { ... }; class BaseA : CommonBase { ... }; class BaseB : virtual CommonBase { ... }; class Derived : BaseA, BaseB { ... }; The reason I think I want to do this is because I'm trying to extend an existing library without having to recompile the whole library (don't want to open that can of worms). There already exists a chain of inheritance that I would like to modify. Basically something like this (excuse the ascii art) LibBase | \ | \ | MyBase | | | | LibDerived | | \ | | \ | | MyDerived | | LibDerived2 | | \ | | \ | | MyDerived2 | | LibDerived3 | | \ | | \ | | MyDerived3 | | LibConcrete | \ | MyConcrete Get the picture? I want an object of each of "My" classes to be an object of the class they are essentially replacing, but I want the next class in the inheritance diagram to use the overridden method implementation from "My" base class, but all the other methods from the library's classes. The library classes do not inherit virtually so it's like this class LibDerived : LibBase But if I make my class inherit virtually class MyBase : virtual LibBase {}; class MyDerived: virtual MyBase, virtual LibDerived {}; Since MyDerived will have a vtable, and MyBase will have a vtable, will there be only one LibBase object? I hope this question is clear enough.
To simplify the answer, let's think about virtual/non-virtual as duplicated or non-duplicated content. class LibDerived : LibBase declares: I allow LibBase to appear twice (or more) among the bases of LibDerived. class MyBase : virtual LibBase {}; declares: I allow the compiler to collapse multiple occurrences of LibBase among MyBase's bases into a single one. When these two declarations meet, the first takes priority, so MyDerived gets 2 copies of LibBase. But the power of C++ is that you can resolve this! Just override the virtual functions in MyDerived to select which one you want to use. Or, another way: create a universal wrapper for MyDerived, derived from the LibBase interface, that aggregates any instance (LibDerived, MyBase, ...) and calls the expected method on the aggregate.
1,245,840
1,245,844
Can you guarantee destructor order when objects are declared on a stack?
I have code that controls a mutex lock/unlock based on scope: void PerformLogin() { ScopeLock < Lock > LoginLock( &m_LoginLock ); doLoginCommand(); ScopeLock < SharedMemoryBase > MemoryLock( &m_SharedMemory ); doStoreLogin(); ... } Can I guarantee that MemoryLock will be destructed before LoginLock?
Yes, it is. In any particular scope local objects are destroyed in the reverse order that they were constructed.
1,245,905
1,245,921
Question about include directory order in g++
Somehow this is the first time I've ever encountered this problem in many years of programming, and I'm not sure what the options are for handling it. I am in the process of porting an application to Linux, and there is a header file in one of the directories that apparently has the same name as a header file in the standard C library, and when "cstdlib" is included from another file in that directory, it's trying to include the local file rather than the correct one from the standard library. In particular, the file is named "endian.h", which is trying to be included from /usr/include/bits/waitstatus.h. So instead of including /usr/include/bits/endian.h it is trying to include ./endian.h. makes no difference Is my only option to rename the endian.h in the project to something else, or is there a way that I can force the compiler to look in the same directory as the file that it's being included from first? Edit: Okay it was just a stupid mistake on my part. My Makefile was setting -I. , so it was looking in the current directory first. D'oh.
There is an important difference between: #include "endian.h" // Look in current directory first. And #include <endian.h> // Look in the standard search paths. If you want the one in the current directory, use the quotes. If you want the system one, then use the angle brackets. Note that if you have put the current directory in the include path via the "-I" flag, then both might resolve to the one in the current directory, in which case you shouldn't use "-I" with the current directory.
1,245,979
1,246,097
C/C++ call-graph utility for Windows platform
I have a large 95% C, 5% C++ Win32 code base that I am trying to grok. What modern tools are available for generating call-graph diagrams for C or C++ projects?
Have you tried SourceInsight's call graph feature? http://www.sourceinsight.com/docs35/ae1144092.htm
1,246,119
1,246,276
why this conversion doesn't work?
Below is my function. I call it with if(try_strtol(v, rhs)) and rhs = "15\t// comment" bool try_strtol(int64_t &v, const string& s) { try { std::stringstream ss(s); if ((ss >> v).fail() || !(ss >> std::ws).eof()) throw std::bad_cast(); return true; } catch(...) { return false; } } It returns false; I expect true with v=15. How do I fix this?
If you want it to return a boolean, just do this: bool try_strtol(int64_t &v, const string& s) { std::stringstream ss(s); return !(ss >> v).fail() && (ss >> std::ws).eof(); } Note the sense is inverted from your condition so that true means success. And your version fails because the stream is not at EOF after reading 15: the trailing "// comment" is still there, so the bad_cast is thrown. Were you hoping the comment would be ignored?
1,246,260
1,246,366
Why don't C header files increase the binary's size?
I wrote the following C++ program class MyClass { public: int i; int j; MyClass() {}; }; int main(void) { MyClass inst; inst.i = 1; inst.j = 2; } and I compiled. # g++ program.cpp # ls -l a.out -rwxr-xr-x 1 root wheel 4837 Aug 7 20:50 a.out Then, I #included the header file iostream in the source file and I compiled again. # g++ program.cpp # ls -l a.out -rwxr-xr-x 1 root wheel 6505 Aug 7 20:54 a.out The file size, as expected, was increased. I also wrote the following C program int main(void) { int i = 1; int j = 2; } and I compiled # gcc program.c # ls -l a.out -rwxr-xr-x 1 root wheel 4570 Aug 7 21:01 a.out Then, I #included the header file stdio.h and I compiled again # gcc program.c # ls -l a.out -rwxr-xr-x 1 root wheel 4570 Aug 7 21:04 a.out Oddly enough, the executable files' size remained the same.
By including iostream in your source file, the compiler needs to generate code to setup and tear down the C++ standard I/O library. You can see this by looking at the output from nm, which shows the symbols (generally functions) on your object file: $ nm --demangle test_with_iostream 08049914 d _DYNAMIC 08049a00 d _GLOBAL_OFFSET_TABLE_ 08048718 t global constructors keyed to main 0804883c R _IO_stdin_used w _Jv_RegisterClasses 080486d8 t __static_initialization_and_destruction_0(int, int) 08048748 W MyClass::MyClass() U std::string::size() const@@GLIBCXX_3.4 U std::string::operator[](unsigned int) const@@GLIBCXX_3.4 U std::ios_base::Init::Init()@@GLIBCXX_3.4 U std::ios_base::Init::~Init()@@GLIBCXX_3.4 080485cc t std::__verify_grouping(char const*, unsigned int, std::string const&) 0804874e W unsigned int const& std::min<unsigned int>(unsigned int const&, unsigned int const&) 08049a3c b std::__ioinit 08049904 d __CTOR_END__ ... (remaining output snipped) ... (--demangle takes the C++ function names "mangled" by by the compiler and produces more meaningful names. The first column is the address, if the function is included in the executable. The second column is the type. "t" is code in the "text" segment. "U" are symbols linked in from other places; in this case, from the C++ shared library.) Compare this with the functions generated from your source file without including iostream: $ nm --demangle test_without_iostream 08049508 d _DYNAMIC 080495f4 d _GLOBAL_OFFSET_TABLE_ 080484ec R _IO_stdin_used w _Jv_RegisterClasses 0804841c W MyClass::MyClass() 080494f8 d __CTOR_END__ ... (remaining output snipped) ... When your source file included iostream, the compiler generated several functions not present without iostream. When your source file includes only stdio.h, the generated binary is similar to the test without iostream, since the C standard I/O library doesn't need any extra initialization above and beyond what's already happening in the C dynamic library. 
You can see this by looking at the nm output, which is identical. In general, though, trying to intuit information about the amount of code generated by a particular source file based on the size of the executable is not going to be meaningful; there's too much that could change, and simple things like the location of the source file may change the binary if the compiler includes debugging information. You may also find objdump useful for poking around at the contents of your executables.
1,246,301
1,246,312
C/C++, can you #include a file into a string literal?
I have a C++ source file and a Python source file. I'd like the C++ source file to be able to use the contents of the Python source file as a big string literal. I could do something like this: char* python_code = " #include "script.py" " But that won't work because there need to be \'s at the end of each line. I could manually copy and paste in the contents of the Python code and surround each line with quotes and a terminating \n, but that's ugly. Even though the python source is going to effectively be compiled into my C++ app, I'd like to keep it in a separate file because it's more organized and works better with editors (emacs isn't smart enough to recognize that a C string literal is python code and switch to python mode while you're inside it). Please don't suggest I use PyRun_File, that's what I'm trying to avoid in the first place ;)
The C/C++ preprocessor acts in units of tokens, and a string literal is a single token. As such, you can't intervene in the middle of a string literal like that. You could preprocess script.py into something like: "some code\n" "some more code that will be appended\n" and #include that, however. Or you can use xxd -i to generate a C static array ready for inclusion.
1,246,449
1,246,473
How can an application write text to the screen?
How can an application write text to the screen without using any DrawText type methods, and how can I catch it? I've hooked the following: DrawText DrawTextA DrawTextW DrawTextEx DrawTextExA DrawTextExW TextOut TextOutA TextOutW ExtTextOut ExtTextOutA ExtTextOutW PolyTextOut PolyTextOutA PolyTextOutW None of them yields a thing.
Many applications will write their own proprietary Text Drawing API, for the exact reason that they don't want you to hook it... easily. Take a look at James Devlin's Poker Botting series, he talks about this and how certain poker sites have their own API. He also talks about methods to get around this, OCR, memory scraping. Coding The Wheel
1,246,813
1,247,224
(simple) boost thread_group question
I'm trying to write a fairly simple threaded application, but am new to boost's thread library. A simple test program I'm working on is: #include <iostream> #include <boost/thread.hpp> int result = 0; boost::mutex result_mutex; boost::thread_group g; void threaded_function(int i) { for(; i < 100000; ++i) {} { boost::mutex::scoped_lock lock(result_mutex); result += i; } } int main(int argc, char* argv[]) { using namespace std; // launch three threads boost::thread t1(threaded_function, 10); boost::thread t2(threaded_function, 10); boost::thread t3(threaded_function, 10); g.add_thread(&t1); g.add_thread(&t2); g.add_thread(&t3); // wait for them g.join_all(); cout << result << endl; return 0; } However, when I compile and run this program I get an output of $ ./test 300000 test: pthread_mutex_lock.c:87: __pthread_mutex_lock: Assertion `mutex->__data.__owner == 0' failed. Aborted Obviously, the result is correct but I'm worried about this error message, especially because the real program, which has essentially the same structure, is getting stuck at the join_all() point. Can someone explain to me what is happening? Is there a better way to do this, i.e. launch a number of threads, store them in a external container, and then wait for them all to complete before continuing the program? Thanks for your help.
I think your problem is caused by the thread_group destructor, which is called when your program exits. Thread group wants to take responsibility for destroying your thread objects. See also the boost::thread_group documentation. You are creating your thread objects on the stack as local variables in the scope of your main function. Thus, they have already been destructed when the program exits and thread_group tries to delete them. As a solution, create your thread objects on the heap with new and let the thread_group take care of their destruction: boost::thread *t1 = new boost::thread(threaded_function, 10); ... g.add_thread(t1); ...
1,247,119
12,319,593
Is there a way to forbid subclassing of my class?
Say I've got a class called "Base", and a class called "Derived" which is a subclass of Base and accesses protected methods and members of Base. What I want to do now is make it so that no other classes can subclass Derived. In Java I can accomplish that by declaring the Derived class "final". Is there some C++ trick that can give me the same effect? (Ideally I'd like to make it so that no class other than Derived can subclass Base as well. I can't just put all the code into the same class or use the friend keyword, since Base and Derived are both templated, with Base having fewer template arguments than Derived does....)
As of C++11, you can add the final keyword (technically a special identifier since it is not actually a keyword) to your class, eg class Derived final { ... You can read more about the final keyword at http://en.wikipedia.org/wiki/C++11#Explicit_overrides_and_final
1,247,129
1,247,190
Fast generation of random set, Monte Carlo Simulation
I have a set of numbers ~100, I wish to perform MC simulation on this set, the basic idea is I fully randomize the set, do some comparison/checks on the first ~20 values, store the result and repeat. Now the actual comparison/check algorithm is extremely fast; it actually completes in about 50 CPU cycles. With this in mind, and in order to optimize these simulations, I need to generate the random sets as fast as possible. Currently I'm using a Multiply With Carry algorithm by George Marsaglia which provides me with a random integer in 17 CPU cycles, quite fast. However, using the Fisher-Yates shuffling algorithm I have to generate 100 random integers, ~1700 CPU cycles. This overshadows my comparison time by a long way. So my question is: are there other well known/robust techniques for doing this type of MC simulation, where I can avoid the long random set generation time? I thought about just randomly choosing 20 values from the set, but I would then have to do collision checks to ensure that 20 unique entries were chosen. Update: Thanks for the responses. I have another question with regard to a method I just came up with after my post. The question is, will this provide robust, truly random output (assuming the RNG is good)? Basically my method is to set up an array of integer values the same length as my input array, set every value to zero. Now I begin randomly choosing 20 values from the input set like so: int pcfast[100]; memset(pcfast,0,sizeof(int)*100); int nchosen = 0; while (nchosen<20) { int k = rand(100); // [0,99] if ( pcfast[k] == 0 ) { pcfast[k] = 1; r[nchosen++] = s[k]; // r is the length 20 output, s the input set. } } Basically what I mentioned above, choosing 20 values at random, except it seems like a somewhat optimized way of ensuring no collisions. Will this provide good random output? It's quite fast.
If you only use the first 20 values in the randomised array, then you only need to do 20 steps of the Fisher-Yates algorithm (Knuth's version). Then 20 values have been randomised (actually at the end of the array rather than at the beginning, in the usual formulation), in the sense that the remaining 80 steps of the algorithm are guaranteed not to move them. The other 80 positions aren't fully shuffled, but who cares? C++ code (iterators should be random-access): using std::swap; template <typename Iterator, typename Rand> // you didn't specify the type void partial_shuffle(Iterator first, Iterator middle, Iterator last, Rand rnd) { size_t n = last - first; while (first != middle) { size_t k = rnd(n); // random integer from 0 to n-1 swap(*(first+k),*first); --n; ++first; } } On return, the values from first through to middle-1 are shuffled. Use it like this: int arr[100]; for (int i = 0; i < 100; ++i) arr[i] = i; while (need_more_samples()) { partial_shuffle(arr, arr+20, arr+100, my_prng); process_sample(arr, arr+20); }
1,247,493
1,247,530
char[] (c lang) to string (c++ lang) conversion
I can see that almost all modern APIs are developed in the C language. There are reasons for that: processing speed, low level language, cross platform and so on. Nowadays, I program in C++ because of its Object Orientation, the use of string, the STL, but mainly because it is a better C. However, when my C++ programs need to interact with C APIs I really get upset when I need to convert char[] types to C++ strings, then operate on these strings using their powerful methods, and finally convert from these strings to char[] again (because the API needs to receive char[]). If I repeat these operations for millions of records, the processing times are higher because of the conversion task. For that simple reason, I feel that char[] is an obstacle to treating C++ as a better C. I would like to know if you feel the same; if not (I hope so!) I would really like to know the best way for C++ to coexist with char[] types without doing those awful conversions. Thanks for your attention.
The C++ string class has a lot of problems, and yes, what you're describing is one of them. More specifically, there is no way to do string processing without creating a copy of the string, which may be expensive. And because virtually all string processing algorithms are implemented as class members, they can only be used on the string class. A solution you might want to experiment with is the combination of Boost.Range and Boost.StringAlgo. Range allows you to create sequences out of a pair of iterators. They don't take ownership of the data, so they don't copy the string; they just point to the beginning and end of your char* string. And Boost.StringAlgo implements all the common string operations as non-member functions that can be applied to any sequence of characters. Such as, for example, a Boost range. The combination of these two libraries pretty much solves the problem. They let you avoid having to copy your strings to process them. Another solution might be to store your string data as std::string's all the time. When you need to pass a char* to some API function, simply pass it the address of the first character (&str[0]). The problem with this second approach is that std::string doesn't guarantee that its string buffer is null-terminated, so you either have to rely on implementation details, or manually add a null byte as part of the string.
1,247,555
1,247,571
Why do you sometimes need to write `typename T` instead of just `T`?
I was reading the Wikipedia article on SFINAE and encountered following code sample: struct Test { typedef int Type; }; template < typename T > void f( typename T::Type ) {} // definition #1 template < typename T > void f( T ) {} // definition #2 void foo() { f< Test > ( 10 ); //call #1 f< int > ( 10 ); //call #2 without error thanks to SFINAE } Now I've actually written code like this before, and somehow intuitively I knew that I needed to type "typename T" instead of just "T". However, it would be nice to know the actual logic behind it. Anyone care to explain?
In general, C++'s syntax (inherited from C) has a technical defect: the parser MUST know whether something names a type, or not, otherwise it just can't solve certain ambiguities (e.g., is X * Y a multiplication, or the declaration of a pointer Y to objects of type X? it all depends on whether X names a type...!-). The typename "adjective" lets you make that perfectly clear and explicit when needed (which, as another answer mentions, is typical when template parameters are involved;-).
1,247,745
1,247,753
default visibility of C++ class/struct members
In C++, why is private the default visibility for members of classes, but public for structs?
C++ was introduced as a superset of C. Structs were carried over from C, where the semantics of their members was that of public. A whole lot of C code exists, including libraries that were desired to work with C++ as well, that use structs. Classes were introduced in C++, and to conform with the OO philosophy of encapsulation, their members are private by default.
1,247,778
1,248,484
Is D's scope failure/success/exit necessary?
When using a language that has try/catch/finally, are D's failure/success/exit scope statements still useful? D doesn't seem to have finally which may explain why those statements are used in D. But with a language like C# is it useful? I am designing a language so if I see many pros I'll add it in.
scope(X) isn't necessary in the same way that for isn't necessary provided you have if and goto. Here's a paraphrased example from some code I've been writing today: sqlite3* db; sqlite3_open("some.db", &db); scope(exit) sqlite3_close(db); sqlite3_stmt* stmt; sqlite3_prepare_v2(db, "SELECT * FROM foo;", &stmt); scope(exit) sqlite3_finalize(stmt); // Lots of stuff... scope(failure) rollback_to(current_state); make_changes_with(stmt); // More stuff... return; Contrast this to using try/catch: sqlite3* db; sqlite3_open("some.db", &db); try { sqlite3_stmt* stmt; sqlite3_prepare_v2(db, "SELECT * FROM foo;", &stmt); try { // Lots of stuff... try { make_changes_with(stmt); // More stuff... } catch( Exception e ) { rollback_to(current_state); throw; } } finally { sqlite3_finalize(stmt); } } finally { sqlite3_close(db); } The code has turned into spaghetti, spreading the error recovery all over the shop and forcing a level of indentation for every try block. The version using scope(X) is, in my opinion, significantly more readable and easier to understand.
1,247,857
1,247,867
Way to increase memory allocated on free store
Is it possible to incrementally increase the amount of allocated memory on a free store that a pointer points to? For example, I know that this is possible. char* p = new char; // allocates one char to free store char* p = new char[10]; // allocates 10 chars to free store but what if I wanted to do something like increase the amount of memory that a pointer points to. Something like... char input; char*p = 0; while(cin >> input) // store input chars into an array in the free store char* p = new char(input); obviously this will just make p point to the new input allocated, but hopefully you understand that the objective is to add a new char allocation to the address that p points to, and store the latest input there. Is this possible? Or am I just stuck with allocating a set number.
You can do this using the function realloc(), though that may only work for memory allocated with malloc() rather than new. Having said that, you probably don't want to allocate more memory a byte at a time. For efficiency's sake you should allocate in blocks substantially larger than a single byte and keep track of how much you've actually used.
1,247,968
1,247,970
Fast C++ program, C# GUI, possible?
I'm looking into developing an application that will process data from a line-scan camera at around 2000 lines (frames) per second. For this real-time application, I feel that C/C++ are the way to go. (It is my feeling, and others will agree that Managed code just isn't right for this task.) However, I've done very little MFC, or any other C++ GUI. I am really getting to do C# GUIs very well, though. So it seems natural to me to write the data-intensive code in C/C++, and the GUI in C#. The GUI will be used for set-up/calibration/on-line monitoring (and possibly outputting of data via UDP, because it's easier in C#. So first, I'd like to see if anyone agrees that this would be the way to go. Based on my programming experience (good at low-level C algorithms, and high-level C# GUI design), it just feels right. Secondly, I'm not sure the right way to go about it. I just threw together a solution in VS2005, which calls some (extern "C") DLL functions from a C# app. And to make sure I could do it, I wrote to some global variables in the DLL, and read from them: test.h int globaldata; extern "C" __declspec(dllexport) void set(int); extern "C" __declspec(dllexport) int get(); test.cpp extern int data=0; __declspec(dllexport) void set(int num) { data = num; } __declspec(dllexport) int get() { return data; } test.cs [DllImport("test")] private static extern void set(int num); [DllImport("test")] private static extern int get(); Calling get() and set() work properly (get() returns the number that I passed to set()). Now, I know that you can export a C++ class as well, but does it have to be managed? How does that work? Am I going about this the right way? Thanks for all your help! *** EDIT *** First of all, THANK YOU for your fantastic answers so far! I'm always incredibly impressed with Stack Overflow... I guess one thing I should have hit on more, was not necessarily raw speed (this can be prototyped and benchmarked). 
One thing that has me more concerned is the non-deterministic behavior of the Garbage Collector. This application would not be tolerant of a 500ms delay while performing garbage collection. I am all for coding and trying this in pure C#, but if I know ahead of time that the GC and any other non-deterministic .NET behavior (?) will cause a problem, I think my time would be better spent coding it in C/C++ and figuring out the best C# interface.
There is no reason that you can't write high performance code entirely in C#. Performance (C# Programming Guide) Rico Mariani's Performance Blog (an excellent resource) Tuning .NET Application Performance SO questions on the same/similiar topic: C++ performance vs. Java/C# How much faster is c++ than c#? Other articles: Microbenchmarking C++, C#, and Java Harness the Features of C# to Power Your Scientific Computing Projects Find Application Bottlenecks with Visual Studio Profiler Debunking C# vs C++ Performance
1,248,079
1,248,533
Ways to Determine the Version of Firebird SQL?
Is there any way to determine the version of Firebird SQL that is running, using SQL or code (Delphi, C++)? Bye
If you want to find it via SQL you can use get_context to find the engine version with the following: SELECT rdb$get_context('SYSTEM', 'ENGINE_VERSION') as version from rdb$database; You can read more about it in the Firebird FAQ, but it requires Firebird 2.1, I believe.
1,248,140
1,248,193
MinGW linking problem
I have a linking problem with MinGW. These are the calls: g++ -enable-stdcall-fixup -Wl,-enable-auto-import -Wl,-enable-runtime-pseudo-reloc -mthreads -Wl -Wl,-subsystem,windows -o debug/Simulation.exe debug/LTNetSender.o debug/main.o debug/simulation.o debug/moc_simulation.o -L'c:/Programmieren/Qt/4.5.2/lib' -lmingw32 -lqtmaind -LC:\Programmieren\Qt\boost_1_39_0\distrib\lib -LC:\Programmieren\MinGW\lib -llibboost_system-mgw34-mt -llibws2_32 -lQtSqld4 -lQtGuid4 -lQtNetworkd4 -lQtCored4 C:\Programmieren\MinGW\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ ld.exe: cannot find -llibws2_32 The MinGW library directory is included in the library path and libws2_32.a is in this directory. Why isn't the linker finding the library?
I think the linker command should be -lws2_32. The "lib" prefix and ".a" suffix are filled in automatically.
1,248,255
1,248,261
Are C++ zero (null) pointers supposed to return false?
I'm not sure if my understanding of C++ is wrong.. I've read that 1) all non-zero values are equivalent to TRUE, and zero is equivalent to FALSE; 2) null pointers are stored as zero. Yet code like this: void ViewCell::swapTiles (ViewCell *vc) { ViewTile *tmp = vc->tile(); [stuff ...] if (tmp) addTile(tmp); } Gives me a segfault from dereferencing a null pointer, but if (tmp != 0) addTile(tmp); works fine. Any idea why?
For a pointer, p and (p != 0) are exactly equivalent in a boolean context. If it gives you a segfault, then either it's not a plain pointer, or the problem is elsewhere.
1,248,506
1,248,522
How to put std::dec/hex/oct into a look-up array
I have this generic string to number conversion : enum STRING_BASE : signed int { BINARY = -1, OCTAL = 0, DECIMAL = 1, HEX = 2, }; template <class Class> static bool fromString(Class& t, const std::string& str, STRING_BASE base = DECIMAL) { if (base == BINARY) { t = (std::bitset<(sizeof(unsigned long)*8)>(str)).to_ulong(); return true; } std::istringstream iss(str); std::ios_base& (*f)(std::ios_base&); /// have no idea how to turn this into a look-up array switch (base) { case OCTAL: f = std::oct; break; case DECIMAL: f = std::dec; break; case HEX: f = std::hex; break; } return !(iss >> f >> t).fail(); }; I would like to turn the switch case into a fine look-up array, something along these lines: std::ios_base arr[2] = {std::oct, std::dec, std::hex}; return !(iss >> arr[(int)base] >> t).fail(); This produces : *error C2440: 'initializing' : cannot convert from 'std::ios_base &(__cdecl )(std::ios_base &)' to 'std::ios_base' This won't work either : std::ios_base& arr[2] = {std::oct, std::dec, std::hex}; I get : error C2234: 'arr' : arrays of references are illegal So, is there any solution to this problem?
Try: std::ios_base& (*arr[])( std::ios_base& ) = { std::oct, std::dec, std::hex }; Or with a typedef for the function pointer: typedef std::ios_base& (*ios_base_setter)( std::ios_base& ); ios_base_setter arr[] = { std::oct, std::dec, std::hex }; You can omit the array size; it will be determined from the number of initializers. I noticed this because you specified an array of size 2, but provided 3 initializers.
1,248,706
1,249,143
Accurate evaluation of the series 1/1 + 1/2 + ... + 1/n
I need to evaluate the sum of the series: 1/1+1/2+1/3+...+1/n. Considering that in C++ evaluations are not completely accurate, the order of summation plays an important role. The expression 1/n+1/(n-1)+...+1/2+1/1 gives a more accurate result. So I need to find out the order of summation which provides the maximum accuracy. I don't even know where to begin. The preferred language of realization is C++. Sorry for my English, if there are any mistakes.
Actually, if you're doing the summation for large N, adding in order from smallest to largest is not the best way -- you can still get into a situation where the numbers you're adding are too small relative to the sum to produce an accurate result. Look at the problem this way: You have N summations, regardless of ordering, and you wish to have the least total error. Thus, you should be able to get the least total error by minimizing the error of each summation -- and you minimize the error in a summation by adding values as nearly close to each other as possible. I believe that following that chain of logic gives you a binary tree of partial sums: Sum[0,i] = value[i] Sum[1,i/2] = Sum[0,i] + Sum[0,i+1] Sum[j+1,i/2] = Sum[j,i] + Sum[j,i+1] and so on until you get to a single answer. Of course, when N is not a power of two, you'll end up with leftovers at each stage, which you need to carry over into the summations at the next stage. (The margins of StackOverflow are of course too small to include a proof that this is optimal. In part because I haven't taken the time to prove it. But it does work for any N, however large, as all of the additions are adding values of nearly identical magnitude. Well, all but log(N) of them in the worst not-power-of-2 case, and that's vanishingly small compared to N.)
1,248,774
1,248,835
External sorting of ints with O(N log N) reads and O(N) writes
I'm interested in which algorithm I should use to meet the requirements of external sorting of ints with O(N log N) reads and O(N) writes.
If you're after an algorithm for that type of sorting (where the data can't all fit into core at once), my solution comes from the very earliest days of the "revolution" when top-end machines had less memory than most modern-day calculators. I haven't worked out the big-O properties but I think it would be O(n) reads, O(n log n) sort phase (depends on the sort method chosen) and O(n) writes. Let's say your data set has one million elements and you can only fit 100,000 in memory at a time. Here's what I'd do: read in the first 100,000, sort them and write that sorted list back out. do this for each group of 100,000. run a merge operation on the 10 groups. In other words, once your 10 groups are sorted within the group, grab the first entry from each group. Then write that the lowest of those 10 (which is the lowest of the whole million) to the output file and read the next one from that group in its place. Then just continue selecting the lowest of the 10, writing it out and replacing it from the same group. In that way, the final output is the entire sorted list of a million entries.
1,248,941
1,248,965
Visual-C++ Linker Error
I have a class called MODEL in which public static int theMaxFrames resides. The class is defined in its own header file. theMaxFrames is accessed by a class within the MODEL class and by one function, void set_up(), which is also in the MODEL class. The Render.cpp source file contains a function which calls a function in the Direct3D.cpp source file which in turn calls the set_up() function through a MODEL object. This is the only connection between these two source files and theMaxFrames. When I try to compile my code I get the following error messages: 1>Direct3D.obj : error LNK2001: unresolved external symbol "public: static int MODEL::theMaxFrames" (?theMaxFrames@MODEL@@2HA) 1>Render.obj : error LNK2001: unresolved external symbol "public: static int MODEL::theMaxFrames" (?theMaxFrames@MODEL@@2HA) 1>C:\Users\Byron\Documents\Visual Studio 2008\Projects\xFileViewer\Debug\xFileViewer.exe : fatal error LNK1120: 1 unresolved externals
It sounds very much like you have declared theMaxFrames in the class, but you haven't provided a definition for it. If this is the case you need to provide a definition for it in a .cpp somewhere. e.g. int MODEL::theMaxFrames; There's a FAQ entry for this question: static data members.
1,249,264
1,249,279
Visual Studio and Boost::Test
I'm getting started with Boost::Test driven development (in C++), and I'm retrofitting one of my older projects with Unit Tests. My question is -- where do I add the unit test code? The syntax for the tests themselves seems really simple according to Boost::Test's documentation, but I'm confused as to how I tell the compiler to generate the executable with my unit tests. Ideally, I'd use a precompiled header and the header-only version of the boost::test library. Do I just create a new project for tests and add all my existing source files to it? Billy3
They way I've added Boost unit tests to existing solutions was to create new projects and put the test code in those projects. You don't need to worry about creating a main() function or setting up the tests. Boost takes care of all that for you. Here is a project I put on Google Code that uses Boost for its unit tests.
1,249,402
1,249,418
Everything inside < > lost, not seen in html?
I have many source/text files, say file.cpp or file.txt. Now, I want to see all my code/text in a browser, so that it will be easy for me to navigate many files. My main motive for doing all this is that I am learning C++ myself, so whenever I learn something new, I create some sample code and then compile and run it. Also, alongside this code there are comments/tips for me to be aware of. I then create links to each file for easy navigation. Since there are many such files, I thought it would be easy to navigate them if I use this HTML method. I am not sure if it is a good approach; I would like some feedback. What I did was save file.cpp/file.txt as file.html and then use the pre and code HTML tags for formatting, plus the other necessary HTML tags for viewing HTML files. But when I view it, everything inside < > is lost, e.g. #include <iostream> is just seen as #include, and <iostream> is lost. Is there any way to see it; is there any tag or method that I can use? I know I can use the regular HTML escape codes &lt; and &gt; for this, to see < >, but since I have many include files, changing all of them is a bit time-consuming, so I want to know if there is any other idea. So, is there any other solution than s/</&lt; and s/>/&gt;? I would also like to know if there are any other ideas/tips besides converting cpp files into HTML. What I want to have in my main page is something like this: tip1 Do this tip2 Do that When I click tip1, it will open tip1.html, which has my code for that tip. There is also a back link in tip1.html, which will take me back to the main page on clicking it. Everything is OK except that everything inside < > is lost, not seen. Thanks.
You might want to take a look at online tools such as CodeHtmler, which allows you to copy into the browser, select the appropriate language, and it'll convert to HTML for you, together with keyword colourisation etc.
1,249,646
39,934,452
When using boost::program_options, how does one set the name of the argument?
When using boost::program_options, how do I set the name of an argument for boost::program_options::value<>()? #include <iostream> #include <boost/program_options.hpp> int main() { boost::program_options::options_description desc; desc.add_options() ("width", boost::program_options::value<int>(), "Give width"); std::cout << desc << std::endl; return 0; } The above code gives: --width arg Give width What I want is to replace the arg name with something more descriptive like NUM: --width NUM Give width
In recent versions of Boost (only tested for >= 1.61) this is fully supported. Below a slight modification of the first example in the tutorial, where "LEVEL" is printed instead of "arg": po::options_description desc("Allowed options"); desc.add_options() ("help", "produce help message") ("compression", po::value<int>()->value_name("LEVEL"), "set compression level") ; Live Example
1,249,673
1,250,768
UCS-2LE text file parsing
I have a text file which was created using some Microsoft reporting tool. The text file includes the BOM 0xFFFE in the beginning and then ASCII character output with nulls between characters (i.e "F.i.e.l.d.1."). I can use iconv to convert this to UTF-8 using UCS-2LE as an input format and UTF-8 as an output format... it works great. My problem is that I want to read in lines from the UCS-2LE file into strings and parse out the field values and then write them out to a ASCII text file (i.e. Field1 Field2). I have tried the string and wstring-based versions of getline – while it reads the string from the file, functions like substr(start, length) still interpret the string as 8-bit values, so the start and length values are off. How do I read the UCS-2LE data into a C++ String and extract the data values? I have looked at boost and icu as well as numerous google searches but have not found anything that works. What am I missing here? Please help! My example code looks like this: wifstream srcFile; srcFile.open(argv[1], ios_base::in | ios_base::binary); .. .. wstring srcBuf; .. .. while( getline(srcFile, srcBuf) ) { wstring field1; field1 = srcBuf.substr(12, 12); ... ... } So, if, for example, srcBuf contains "W.e. t.h.i.n.k. i.n. g.e.n.e.r.a.l.i.t.i.e.s." then the substr() above returns ".k. i.n. g.e" instead of "g.e.n.e.r.a.l.i.t.i.e.s.". What I want is to read in the string and process it without having to worry about the multi-byte representation. Does anybody have an example of using boost (or something else) to read these strings from the file and convert them to a fixed width representation for internal use? BTW, I am on a Mac using Eclipse and gcc. Is it possible my STL does not understand wide character strings? Thanks!
substr works fine for me on Linux with g++ 4.3.3. The program #include <string> #include <iostream> using namespace std; int main() { wstring s1 = L"Hello, world"; wstring s2 = s1.substr(3,5); wcout << s2 << endl; } prints "lo, w" as it should. However, the file reading probably does something different from what you expect. It converts the files from the locale encoding to wchar_t, which will cause each byte becoming its own wchar_t. I don't think the standard library supports reading UTF-16 into wchar_t.
1,249,750
1,251,000
Is there an elegant way to bridge two devices/streams in Asio?
Given two stream-oriented I/O objects in Asio, what is the simplest way to forward data from one device to the other in both directions? Could this be done with boost::iostreams::combination or boost::iostreams::copy perhaps? Or is a manual approach better--waiting for data on each end and then writing it out to the other stream? In other words, how does one leverage Boost and Asio to produce a minimal amount of code? An example application would be streaming between a serial port and TCP socket as requested in this question.
With standard C++ streams you can do the following, can't you do something similar with Asio? // Read all data from in and write to out. void forward_data( std::istream& in, std::ostream& out ) { out << in.rdbuf(); }
1,249,814
1,249,848
Templated copy-constructor fails with specific templated type
As some of my code required implicit conversion between matrices of different types (e.g. Matrix<int> to Matrix<double>), I defined a templated copy constructor Matrix<T>::Matrix(Matrix<U> const&) instead of the standard Matrix<T>::Matrix(Matrix<T> const&): template <typename T> class Matrix { public: // ... template <typename U> Matrix(Matrix<U> const&); // ... private: unsigned int m_rows, m_cols; T *m_data; // ... }; With an appropriate typecast added to the copy-constructor, this method flawlessly converted between matrices of different types. Surprisingly, it fails with a malloc error in the very situation where a simple copy-constructor would function: where U == T. Sure enough, overloading the copy-constructor with the default Matrix<T>::Matrix(Matrix<T> const&) signature solves the problem. This is a poor solution, as it results in the wholesale duplication of the copy-constructor code (literally an unchanged copy-and-paste). More importantly, I do not understand why there is a double-free malloc error without the duplicate code. Furthermore, why is the extremely verbose template <typename T> template <typename U> syntax required here as opposed to the standard, and much more succinct, template <typename T, typename U>? Full source of the templated method, compiled using G++ v4.0.1 on Mac OS 10.5. template <typename T> template <typename U> Matrix<T>::Matrix(Matrix<U> const& obj) { m_rows = obj.GetNumRows(); m_cols = obj.GetNumCols(); m_data = new T[m_rows * m_cols]; for (unsigned int r = 0; r < m_rows; ++r) { for (unsigned int c = 0; c < m_cols; ++c) { m_data[m_rows * r + c] = static_cast<T>(obj(r, c)); } } }
It fails because a template doesn't suppress the implicit declaration of a copy constructor. It will serve as a simple converting constructor, which can be used to copy an object when overload resolution selects it. Now, you probably copied your matrix somewhere, which would use the implicitly defined copy constructor which does a flat copy. Then, the copied matrix and the copy would both in their destructor delete the same pointer. Furthermore, why is the extremely verbose template <typename T> template <typename U> syntax required Because there are two templates involved: The Matrix, which is a class template, and the converting constructor template. Each template deserves its own template clause with its own parameters. You should get rid of the <T> in your first line, by the way. Such a thing does not appear when defining a template. This is a poor solution, as it results in the wholesale duplication of the copy-constructor code You can define a member function template, which will do the work, and delegate from both the converting constructor and the copy constructor. That way, the code is not duplicated. Richard made a good point in the comments which made me amend my answer. If the candidate function generated from the template is a better match than the implicitly declared copy constructor, then the template "wins", and it will be called. 
Here are two common examples: struct A { template<typename T> A(T&) { std::cout << "A(T&)"; } A() { } }; int main() { A a; A b(a); // template wins: // A<A>(A&) -- specialization // A(A const&); -- implicit copy constructor // (prefer less qualification) A const a1; A b1(a1); // implicit copy constructor wins: // A(A const&) -- specialization // A(A const&) -- implicit copy constructor // (prefer non-template) } A copy constructor can have a non-const reference parameter too, if any of its members has struct B { B(B&) { } B() { } }; struct A { template<typename T> A(T&) { std::cout << "A(T&)"; } A() { } B b; }; int main() { A a; A b(a); // implicit copy constructor wins: // A<A>(A&) -- specialization // A(A&); -- implicit copy constructor // (prefer non-template) A const a1; A b1(a1); // template wins: // A(A const&) -- specialization // (implicit copy constructor not viable) }
1,249,904
1,249,918
convert batch files to exes
I'm wondering if it's possible to convert batch files to executables using C++? I have plenty of batch files here and I would like to convert them to executables (mainly to obfuscate the code). I understand that there are 3rd party tools that can do this but I was thinking that this would be a good opportunity for a programming project. I'm not sure where to start. Do I need to code some sort of parser or something?
You don't just need a parser, you need to write a compiler that accepts .BAT or .CMD files as its input and outputs C++ as its "machine code". I would class this as a "hard to very hard" project (mainly because of the weirdo syntax and semantics of the input language) but if you want to go for it, the definitive SO question on compiler writing is here.
1,250,219
1,250,314
Using Visual Studio 2008 with C/C++
I've decided to dive into some code written in C, and I'd like to use Visual Studio. I have Visual Studio 2008 Professional, which I'm using now primarily for C#, but I've noticed that there are no options for C in Visual Studio. Also, I've noticed that although Visual Studio has projects and whatnot for C++, the build options are all greyed out, so I cannot build C++. What do I need to build C++? Can I add projects and building for C in Visual Studio?
Visual Studio doesn't distinguish much between C++ and C. Instead, you create a C++ project, and then simply add .c files to it. It will by default compile .c files as C code, and .cpp files as C++.
1,250,253
1,250,761
Optimizing bit array accesses
I'm using Dipperstein's bitarray.cpp class to work on bi-level (black and white) images where the image data is natively stored as simply as one pixel one bit. I need to iterate through each and every bit, on the order of 4--9 megapixels per image, over hundreds of images, using a for loop, something like: for( int i = 0; i < imgLength; i++) { if( myBitArray[i] == 1 ) { // ... do stuff ... } } Performance is usable, but not amazing. I run the program through gprof and find out there is significant time and millions of calls to std::vector methods like iterator and begin. Here's the top-sampled functions: Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls s/call s/call name 37.91 0.80 0.80 2 0.40 1.01 findPattern(bit_array_c*, bool*, int, int, int) 12.32 1.06 0.26 98375762 0.00 0.00 __gnu_cxx::__normal_iterator<unsigned char const*, std::vector<unsigned char, std::allocator<unsigned char> > >::__normal_iterator(unsigned char const* const&) 11.85 1.31 0.25 48183659 0.00 0.00 __gnu_cxx::__normal_iterator<unsigned char const*, std::vector<unsigned char, std::allocator<unsigned char> > >::operator+(int const&) const 11.37 1.55 0.24 49187881 0.00 0.00 std::vector<unsigned char, std::allocator<unsigned char> >::begin() const 9.24 1.75 0.20 48183659 0.00 0.00 bit_array_c::operator[](unsigned int) const 8.06 1.92 0.17 48183659 0.00 0.00 std::vector<unsigned char, std::allocator<unsigned char> >::operator[](unsigned int) const 5.21 2.02 0.11 48183659 0.00 0.00 __gnu_cxx::__normal_iterator<unsigned char const*, std::vector<unsigned char, std::allocator<unsigned char> > >::operator*() const 0.95 2.04 0.02 bit_array_c::operator()(unsigned int) 0.47 2.06 0.01 6025316 0.00 0.00 __gnu_cxx::__normal_iterator<unsigned char*, std::vector<unsigned char, std::allocator<unsigned char> > >::__normal_iterator(unsigned char* const&) 0.47 2.06 0.01 3012657 0.00 0.00 __gnu_cxx::__normal_iterator<unsigned char*, std::vector<unsigned char, 
std::allocator<unsigned char> > >::operator*() const 0.47 2.08 0.01 1004222 0.00 0.00 std::vector<unsigned char, std::allocator<unsigned char> >::end() const ... remainder omitted ... I'm not really familiar with C++'s STL, but can anyone shed light on why, for instance, std::vector::begin() is being called a few million times? And, of course, whether there's something I can be doing to speed it up? Edit: I just gave up and optimized the search function (the loop) instead.
A quick peek into the code for bitarray.cpp shows: bool bit_array_c::operator[](const unsigned int bit) const { return((m_Array[BIT_CHAR(bit)] & BIT_IN_CHAR(bit)) != 0); } m_Array is of type std::vector. The [] operator on STL vectors is of constant complexity, but it's likely implemented as a call to vector::begin to get the base address of the array, after which it calculates an offset to get to the value you want. Since bitarray.cpp makes a call to the [] operator on EVERY BIT ACCESS, you are getting a lot of calls. Given your use case, I would create a custom implementation of the functionality contained in bitarray.cpp and tune it for your sequential, bit-by-bit access pattern. Don't use unsigned chars; use 32- or 64-bit values to reduce the number of memory accesses needed. I would use a normal array, not a vector, to avoid the lookup overhead. Create a sequential access function, nextbit(), that doesn't do all the lookups. Store a pointer to the current "value"; all you need to do is increment it on the 32/64-bit boundary, and all accesses between boundaries are simple mask/shift operations that should be very fast.
1,250,432
1,250,440
Limiting Singleton instance to thread
What is a good way to implement a singleton that will be restricted only to the thread that seeks its instance? Is there a thread id or something that I can use to do that? I'm using Carbon threading API but will have to implement this on windows and pure POSIX later too, so any technique is appreciated.
In the past, I have leveraged a hashmap or index to store per-thread data structures inside of a single global thread-safe data structure. For instance, if you provide the id for each thread as an incrementing integer, you can store your data structure in a pre-allocated array at the index of the thread id. If you are leveraging thread IDs that are provided by the operating system, or need to be more flexible, then a thread-safe HashMap or HashTable will come in quite handy. Jacob
1,250,459
1,250,476
Return value for a << operator function of a custom string class in C++
I am trying to create my own std::string wrapper to extend its functionality. But I got a problem when declaring the << operator. Here's my code so far: my custom string class: class MyCustomString : private std::string { public: std::string data; MyCustomString() { data.assign(""); } MyCustomString(char *value) { data.assign(value); } void Assign(char *value) { data.assign(value); } // ...other useful functions std::string & operator << (const MyCustomString &src) { return this->data; } }; the main program: int main() { MyCustomString mystring("Hello"); std::cout << mystring; // error C2243: 'type cast' : conversion from 'MyCustomString *' to 'const std::basic_string<_Elem,_Traits,_Ax> &' exists, but is inaccessible return 0; } I wanted cout to treat the class as a std::string, so that I won't need to do something like: std::cout << mystring.data; Any kind of help would be appreciated! Thanks. Just fyi: my IDE is Microsoft Visual C++ 2008 Express Edition.
Firstly, you seem to have an issue with the definition of MyCustomString. It inherits privately from std::string as well as containing an instance of std::string itself. I'd remove one or the other. Assuming you are implementing a new string class and you want to be able to output it using std::cout, you'll need a cast operator to return the string data which std::cout expects: operator const char *() { return this->data.c_str(); }
1,250,522
1,250,589
Does Qt work well with STL & Boost?
I am interested in learning Qt. I am fairly good with C++, STL and Boost. I like STL/Boost style very much, and I use them with C++ whenever I can in uni projects. However, I always miss the GUI. It seems that Qt is the best solution in my case. Qt does have a good collection of containers, but I am greatly familiar with STL/Boost stuff. What should I take care of when learning Qt and using it side by side with STL/Boost?
Yes, Qt works just fine with both Boost and the STL. Most of the STL functionality is duplicated in Qt to ensure that such features are supported on all of the platforms that support Qt. However, nothing prohibits you from using the STL/Boost counterparts of the Qt constructs, or functionality therein that Qt lacks. Although Qt has its own string, container and algorithm objects, it also contains a great deal of functions for compatibility with the STL. For example, a QString can be converted to a std::string and a QVector can be used with std::for_each. Qt also contains some features that overlap with Boost, such as QPointer (compare/contrast with std::auto_ptr and boost::shared_ptr). The Creating Custom Qt Types article may provide some insight into using advanced features with Boost objects, such as transmitting custom objects between threads with queued connections. See also: Mixing Qt and Boost Mixing Qt with STL and Boost - are there any bridges to make it easy?
1,250,599
1,250,624
How to unordered_set<tuple<int,int>>?
I have encountered a strange problem while constructing an unordered_set<tuple<int,int>>. I have tried VC++8, gcc 3.2 and gcc 4.3, and all give the same result. I have no idea what's wrong with the code; the following is my code: #include <boost/unordered_set.hpp> #include <boost/tuple/tuple.hpp> // For unordered containers, the declaration of operator== #include <boost/tuple/tuple_comparison.hpp> using namespace std ; using namespace boost ; // definition of the hash_value function for tuple<int, int> size_t hash_value(tuple<int, int> const& t) { return get<0>(t) * 10 + get<1>(t) ; } int main () { unordered_set<tuple<int, int>> s ; tuple<int, int> t ; s.insert(t) ; } Here is the compile error message: 1>c:\libs\boost_1_37_0\boost\functional\hash\extensions.hpp(72) : error C2665: 'boost::hash_value' : none of the 16 overloads could convert all the argument types 1> c:\libs\boost_1_37_0\boost\functional\hash\hash.hpp(33): could be 'size_t boost::hash_value(bool)' 1> c:\libs\boost_1_37_0\boost\functional\hash\hash.hpp(34): or 'size_t boost::hash_value(char)' 1> c:\libs\boost_1_37_0\boost\functional\hash\hash.hpp(35): or 'size_t boost::hash_value(unsigned char)' .... It seems the compiler cannot see the definition of hash_value(tuple<int, int>). But if I replace tuple<int, int> with another data type like struct F{int a, b;}, it works. That's really strange. Do I miss anything? Thank you very much.
Put the hash function in namespace boost. #include <boost/unordered_set.hpp> #include <boost/tuple/tuple.hpp> #include <boost/tuple/tuple_comparison.hpp> using namespace std; using namespace boost; namespace boost { size_t hash_value(tuple<int, int> const & t) { return get<0>(t) * 10 + get<1>(t) ; } } int main () { unordered_set< tuple<int, int> > s ; tuple<int, int> t ; s.insert(t) ; }
1,250,991
1,407,600
Visual Studio 2008 folder browser dialog
In Visual Studio 2008 there is a folder browser dialog that looks like this (very similar to file open dialog): Does anyone know how to invoke it from code?
In the end I just used the VistaBridge library to open it.
1,251,147
1,251,505
Boost::Test -- generation of Main()?
I'm a bit confused on setting up the boost test library. Here is my code: #include "stdafx.h" #define BOOST_TEST_DYN_LINK #define BOOST_TEST_MODULE pevUnitTest #include <boost/test/unit_test.hpp> BOOST_AUTO_TEST_CASE( TesterTest ) { BOOST_CHECK(true); } My compiler generates the wonderfully useful error message: 1>MSVCRTD.lib(wcrtexe.obj) : error LNK2019: unresolved external symbol _wmain referenced in function ___tmainCRTStartup 1>C:\Users\Billy\Documents\Visual Studio 10\Projects\pevFind\Debug\pevUnitTest.exe : fatal error LNK1120: 1 unresolved externals It seems that the Boost::Test library is not generating a main() function -- I was under the impression it does this whenever BOOST_TEST_MODULE is defined. But ... the linker error continues. Any ideas? Billy3 EDIT: Here's my code to work around the bug described in the correct answer below: #include "stdafx.h" #define BOOST_TEST_MODULE pevUnitTests #ifndef _UNICODE #define BOOST_TEST_MAIN #endif #define BOOST_TEST_DYN_LINK #include <boost/test/unit_test.hpp> #ifdef _UNICODE int _tmain(int argc, wchar_t * argv[]) { char ** utf8Lines; int returnValue; //Allocate enough pointers to hold the # of command items (+1 for a null line on the end) utf8Lines = new char* [argc + 1]; //Put the null line on the end (Ansi stuff...) 
utf8Lines[argc] = new char[1]; utf8Lines[argc][0] = NULL; //Convert commands into UTF8 for non wide character supporting boost library for(unsigned int idx = 0; idx < argc; idx++) { int convertedLength; convertedLength = WideCharToMultiByte(CP_UTF8, NULL, argv[idx], -1, NULL, NULL, NULL, NULL); if (convertedLength == 0) return GetLastError(); utf8Lines[idx] = new char[convertedLength]; // WideCharToMultiByte handles null term issues WideCharToMultiByte(CP_UTF8, NULL, argv[idx], -1, utf8Lines[idx], convertedLength, NULL, NULL); } //From boost::test's main() returnValue = ::boost::unit_test::unit_test_main( &init_unit_test, argc, utf8Lines ); //End from boost::test's main() //Clean up our mess for(unsigned int idx = 0; idx < argc + 1; idx++) delete [] utf8Lines[idx]; delete [] utf8Lines; return returnValue; } #endif BOOST_AUTO_TEST_CASE( TesterTest ) { BOOST_CHECK(false); } Hope that's helpful to someone. Billy3
I think the problem is that you're using the VC10 beta. It has a fun little bug where, when Unicode is enabled, it requires the entry point to be wmain, not main. (Older versions allowed you to use both wmain and main in those cases). Of course this will be fixed in the next beta, but until then, well, it's a problem. :) You can either downgrade to VC9, disable Unicode, or try manually setting the entry point to main in project properties. Another thing that might work is if you define your own wmain stub, which calls main. I'm pretty sure this is technically undefined behavior, but as a workaround for a compiler bug in an unreleased compiler it might do the trick.
1,251,389
1,251,501
What is the smallest embedded browser I can use in C++?
I need to build my application GUI using HTML/CSS/JavaScript, with a C++ backend, all cross-platform. I made simple tests with QtWebKit, XULRunner and Mozilla. From these simple tests I noticed something that really bothers me: the deployment size of the browser libs/frameworks. It's big: 8 MB and above. Is there some kind of smaller embedded browser I'm missing?
I think dillo requires C calling conventions, but it might do for your needs. No javascript or flash and so on, but it does support CSS. On reading the question again, I see that you need javascript, which dillo does not currently support. Sorry.