72,271,761
72,274,361
Same OpenGL 2D texture becomes all black in GTK but works in GLUT
I have a program using OpenGL + GLUT that captures an image from a camera and uses it as a 2D texture, then displays a point cloud with that texture in the window. It works fine in a GLUT window, but when I change to a GTK OpenGL area, the point cloud (the vertices) still shows, but it becomes all black. With only the window-related code changed, what can possibly go wrong? The following is the code. I only post the texture-related code because the whole program does work in GLUT, so I believe the code itself is correct. Maybe GTK needs some additional setting?

Generate texture

glGenTextures(1, &TEXTURE); glBindTexture(GL_TEXTURE_2D, TEXTURE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

Update texture content

glBindTexture(GL_TEXTURE_2D, TEXTURE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frame.get_width(), frame.get_height(), 0, GL_RGB, GL_UNSIGNED_BYTE, frame.get_data());

Update the vertices and their texture coordinates

for (int i = 0; i < points.size(); i++) { if (rsVertices[i].z) { vertices[count * 3] = rsVertices[i].x * 0.5f; vertices[count * 3 + 1] = rsVertices[i].y * -0.5f; vertices[count * 3 + 2] = rsVertices[i].z - 1.5f; textureCoord[count * 2] = rsTextureCoord[i].u; textureCoord[count * 2 + 1] = rsTextureCoord[i].v; count++; } } glBindBuffer(GL_ARRAY_BUFFER, VBO_VERTEX); glBufferData(GL_ARRAY_BUFFER, sizeof(float) * count * 3, vertices, GL_DYNAMIC_DRAW); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0); glBindTexture(GL_TEXTURE_2D, TEXTURE); glBindBuffer(GL_ARRAY_BUFFER, VBO_TEX); glBufferData(GL_ARRAY_BUFFER, sizeof(float) * count * 2, textureCoord, GL_DYNAMIC_DRAW); glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void *)0);

Vertex shader

#version 330 core layout (location = 0) in vec3 aPos; layout (location = 2) in vec2 aTexCoord; uniform mat4 model; uniform mat4 view; uniform mat4 proj; out vec2 TexCoord; void main() { gl_Position = proj * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0); TexCoord = aTexCoord; }

Fragment shader

#version 330 core out vec4 FragColor; in vec2 TexCoord; uniform sampler2D ourTexture; void main(){ FragColor = texture(ourTexture, TexCoord); }
Problem solved. I update the texture in a different thread, so gdk_gl_context_make_current() must be called every time before updating the texture buffer. (Because the GTK window is controlled by third-party code, maybe it changes the current context somehow.) gdk_gl_context_make_current(gdkContext); glBindTexture(GL_TEXTURE_2D, TEXTURE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frame.get_width(), frame.get_height(), 0, GL_RGB, GL_UNSIGNED_BYTE, frame.get_data());
72,271,935
72,272,029
How to access C++ map inner values
This is my map: std::map<std::string,ProductInfo> mymap, and these are the members inside ProductInfo: bool isActive = false; char name[80]; I am already able to access a specific key-value pair (std::string - ProductInfo) using an ::iterator, but what I actually need is the name property inside ProductInfo. Also, this is what ProductInfo looks like during debugging.
You want to access the name member of the ProductInfo object stored in the map. std::map::operator[] returns a reference, not a pointer, so use your_map["the key"].name directly (or your_map["the key"].get_name() if you add a getter to ProductInfo). Note that -> is for pointers and iterators: if you are iterating with an ::iterator it, the element is a std::pair, so you would write it->second.name.
72,272,061
72,273,287
Qt c++ nested classes / solve illegal call of non-static member function
I'm dealing with sample code from a camera SDK, and I have trouble getting the frame data "outside" the CSampleCaptureEventHandler class. class DahengCamera : public QObject { Q_OBJECT class CSampleCaptureEventHandler : public ICaptureEventHandler { void DoOnImageCaptured(CImageDataPointer& objImageDataPointer, void* pUserParam) { [grab some data ...] CopyToImage(objImageDataPointer); // illegal call of non-static member function } }; public: DahengCamera(); ~DahengCamera(); private: void CopyToImage(CImageDataPointer pInBuffer); // I want to have my frame data here QImage m_data; //and here }; I'm using a callback registration call so that the camera's DoOnImageCaptured event is called once a frame is grabbed by the system. But I'm stuck getting the data outside this method. CopyToImage() is supposed to get a reference to a QImage or to write into m_data, but I get "illegal call of non-static member function" errors. I tried to make CopyToImage() static, but that just moves the problem... How can I solve this? Thanks!
CopyToImage is a private non-static member function of the class DahengCamera. The fact that CSampleCaptureEventHandler is a nested class inside DahengCamera allows it to access DahengCamera's private members and functions (as if it were declared a friend class), but this does not provide CSampleCaptureEventHandler with a pointer to any DahengCamera object. You need to provide the actual CSampleCaptureEventHandler instance on which DoOnImageCaptured is called with a pointer/reference to the DahengCamera object on which CopyToImage should be called. You might consider passing this pointer/reference to CSampleCaptureEventHandler's constructor (i.e. dependency injection). (And, for your own sake, do not try to "fix" this by making CopyToImage or m_data static; that would only create a horrible mess.)
72,273,350
72,273,676
Why does the speedup I get by parallelizing with OpenMP decrease after a certain workload size?
I'm trying to get into OpenMP and wrote up a small piece of code to get a feel for what to expect in terms of speedup: #include <algorithm> #include <chrono> #include <functional> #include <iostream> #include <numeric> #include <vector> #include <random> void SingleThreaded(std::vector<float> &weights, int size) { auto totalWeight = 0.0f; for (int index = 0; index < size; index++) { totalWeight += weights[index]; } for (int index = 0; index < size; index++) { weights[index] /= totalWeight; } } void MultiThreaded(std::vector<float> &weights, int size) { auto totalWeight = 0.0f; #pragma omp parallel shared(weights, size, totalWeight) default(none) { // clang-format off #pragma omp for reduction(+ : totalWeight) // clang-format on for (int index = 0; index < size; index++) { totalWeight += weights[index]; } #pragma omp for for (int index = 0; index < size; index++) { weights[index] /= totalWeight; } } } float TimeIt(std::function<void(void)> function) { auto startTime = std::chrono::high_resolution_clock::now().time_since_epoch(); function(); auto endTime = std::chrono::high_resolution_clock::now().time_since_epoch(); std::chrono::duration<float> duration = endTime - startTime; return duration.count(); } int main(int argc, char *argv[]) { std::vector<float> weights(1 << 24); std::srand(std::random_device{}()); std::generate(weights.begin(), weights.end(), []() { return std::rand() / static_cast<float>(RAND_MAX); }); for (int size = 1; size <= weights.size(); size <<= 1) { auto singleThreadedDuration = TimeIt(std::bind(SingleThreaded, std::ref(weights), size)); auto multiThreadedDuration = TimeIt(std::bind(MultiThreaded, std::ref(weights), size)); std::cout << "Size: " << size << std::endl; std::cout << "Speed up: " << singleThreadedDuration / multiThreadedDuration << std::endl; } } I compiled and ran the above code with MinGW g++ on Win10 like so: g++ -O3 -static -fopenmp OpenMP.cpp; ./a.exe The output (see below) shows a maximum speedup of around 4.2 at a vector size 
of 524288. That means that the multi-threaded code ran 4.2 times faster than the single-threaded code for a vector size of 524288. Size: 1 Speedup: 0.00614035 Size: 2 Speedup: 0.00138696 Size: 4 Speedup: 0.00264201 Size: 8 Speedup: 0.00324149 Size: 16 Speedup: 0.00316957 Size: 32 Speedup: 0.00315457 Size: 64 Speedup: 0.00297177 Size: 128 Speedup: 0.00569801 Size: 256 Speedup: 0.00596125 Size: 512 Speedup: 0.00979021 Size: 1024 Speedup: 0.019943 Size: 2048 Speedup: 0.0317662 Size: 4096 Speedup: 0.181818 Size: 8192 Speedup: 0.133713 Size: 16384 Speedup: 0.216568 Size: 32768 Speedup: 0.566396 Size: 65536 Speedup: 1.10169 Size: 131072 Speedup: 1.99395 Size: 262144 Speedup: 3.4772 Size: 524288 Speedup: 4.20111 Size: 1048576 Speedup: 2.82819 Size: 2097152 Speedup: 3.98878 Size: 4194304 Speedup: 4.00481 Size: 8388608 Speedup: 2.91028 Size: 16777216 Speedup: 3.85507 So my questions are: Why is the multi-threaded code slower for a smaller vector size? Is it purely because of the overhead of creating the threads and distributing the work or am I doing something wrong? Why does the speedup I get decrease after a certain size? What would be the best case speedup I could theoretically achieve on the CPU I used (i7 7700k)? Does the distinction between physical CPU cores and logical CPU cores matter in terms of speedup? Did I make any blatant mistakes in my code? Can I improve something?
I agree with your theory; it's likely the overhead of setting things up. While the CPU cores on your processor have their own L1 and L2 caches, they all share an 8 MB L3 cache, and once the vector becomes too big to fit into that L3 cache, there is the risk of the threads mutually evicting each other's data from the cache. I assume by "logical core" you mean a hyperthread? Those cannot actually compute in parallel; they can merely "fill in" while the other thread is e.g. blocked waiting for memory. In cache-friendly, compute-bound code, that can limit their potential for parallelism considerably. I don't know to what extent your compiler vectorizes the code it compiles; I would benchmark your two functions against a fully vectorized implementation (e.g. using cblas_sasum and cblas_sscal from a good BLAS implementation). It's quite possible that you're leaving a lot of single-thread performance on the table at the moment.
72,274,781
72,281,766
Show failed function instead of macro in gtest
Currently I'm working on creating new tests using gtest. There are some cases where I use the same group of EXPECT_EQs, so I wrap them up in a function. Now, when a particular test fails, it prints out the line where the EXPECT that failed was written, instead of the line where the wrapper function was called. class TestSuite : public ::testing::Test { public: void wrapperForExpects(const std::string& str) { EXPECT_EQ(0, 0); EXPECT_EQ("zero", str); } }; TEST_F(TestSuite, exampleName) { std::string exampleVariable = "one"; wrapperForExpects(exampleVariable); } In this example line number 7 will be printed to the console as failed, but I would like to see line number 14. How can I do that?
This is done by adding SCOPED_TRACE(...) before the function call. It will add something similar to a stack trace (including the line of the SCOPED_TRACE itself) to the failure output: TEST_F(TestSuite, exampleName) { SCOPED_TRACE("exampleName_scope"); // <------ Add this std::string exampleVariable = "one"; wrapperForExpects(exampleVariable); } Live example: https://godbolt.org/z/nnzvjGc8P See here for more examples.
72,274,888
72,275,136
C++ default member initialization and constructors
I would like to know where I can find some documentation about the following behavior: class Foo { public: Foo(int argX) : Foo(argX, defaultYValue) {} Foo(int argX, int argY) : x(argX), y(argY) {} private: const int x; const int y; const int defaultYValue = -1; }; Might it be possible that the value of y is undefined? Or is there some documentation in the standard that says this works? (I did notice that the default member initializer is discarded if the member is otherwise initialized in the constructor.) PS: this was discovered after forgetting the static for defaultYValue.
Yes, the code has undefined behavior. When using a delegating constructor, it is the target constructor that initializes the class members. When you pass defaultYValue to the target constructor, it has not yet been initialized, so you are passing an uninitialized value, and the target constructor uses that value to set y. This is called out by [class.base.init]/7: The expression-list or braced-init-list in a mem-initializer is used to initialize the designated subobject (or, in the case of a delegating constructor, the complete class object) according to the initialization rules of [dcl.init] for direct-initialization.
72,275,689
72,275,855
What kinds of expressions are allowed in a `#if` (the conditional inclusion preprocessor directive)?
Many sources online (for example, https://en.cppreference.com/w/cpp/preprocessor/conditional#Condition_evaluation) say that the expression need only be an integer constant expression. The following are all integral constant expressions without any identifiers in them: #include <compare> #if (1 <=> 2) > 0 #error 1 > 2 #endif #if (([]{}()), 0) #error 0 #endif #if 1.2 < 0.0 #error 1.2 < 0.0 #endif #if ""[0] #error null terminator is true #endif #if *"" #error null terminator is true #endif Yet they fail to compile with clang or gcc, so there obviously are some limitations. The grammar for the #if directive is given in [cpp.pre] in the standard as: if-group: # if constant-expression new-line groupopt All of the previous expressions fit the grammar of constant-expression. It goes on later to say (in [cpp.cond]): 1/ The expression that controls conditional inclusion shall be an integral constant expression except that identifiers (including those lexically identical to keywords) are interpreted as described below 8/ Each preprocessing token that remains (in the list of preprocessing tokens that will become the controlling expression) after all macro replacements have occurred shall be in the lexical form of a token. All of the preprocessing tokens seem to be in the form of [lex.token]: token: identifier keyword literal operator-or-punctuator <=>, >, [, ], {, }, (, ), * are all an operator-or-punctuator 1, 2, 0, 1.2, 0.0, "" are all literals So what part of the standard rules out these forms of expressions? And what subset of integral constant expressions are allowed?
I think that all of these examples are intended to be ill-formed, although as you demonstrate the current standard wording doesn't have that effect. This seems to be tracked as active CWG issue 1436. The proposed resolution would disqualify string literals, floating point literals and also <=> from #if conditions. (Although <=> was added to the language after the issue description was written.) I suppose it is also meant to disallow lambdas, but that may not be covered by the proposed wording.
72,275,713
72,276,600
C++ Explicit instantiation of a template function results in an error "no definition available"
When trying to compile the files below, this error occurs: The error Logging.h: In instantiation of 'void Sudoku::printBoardWithCandidates(const Sudoku::Board<BASE>&) [with int BASE = 3]': Logging.h:10:64: required from here Logging.h:10:64: error: explicit instantiation of 'void Sudoku::printBoardWithCandidates(const Sudoku::Board<BASE>&) [with int BASE = 3]' but no definition available [-fpermissive] template void printBoardWithCandidates<3>(const Board<3>& b); ^ As I am not very experienced in C++, I don't see any possible cause for this problem. Since the definition is present in the .cpp file, I don't see any reason for it not to compile. Would someone mind explaining that to me, please? .h file #pragma once #include "Board.h" namespace Sudoku { template<int BASE> void printBoardWithCandidates(const Board<BASE> &b); template void printBoardWithCandidates<2>(const Board<2>&); template void printBoardWithCandidates<3>(const Board<3>&); template void printBoardWithCandidates<4>(const Board<4>&); } .cpp file #include "Logging.h" #include <iostream> template<int BASE> void Sudoku::printBoardWithCandidates(const Board<BASE> &board) { // definition... } Edit: A similar implementation I have employed several times throughout the program. 
For example Board.h #pragma once #include <vector> #include <cstring> #include "stdint.h" namespace Sudoku { struct Cell { int RowInd; int ColInd; int BoxInd; friend bool operator==(const Cell& cell1, const Cell& cell2); }; bool operator==(const Cell& cell1, const Cell& cell2); template<int BASE> class Board { public: static constexpr int WIDTH = BASE * BASE; static constexpr int CELL_COUNT = WIDTH * WIDTH; static constexpr uint16_t CELL_COMPLETELY_OCCUPIED = 65535 >> (sizeof(uint16_t) * 8 - WIDTH); private: const int EMPTY_VALUE; uint16_t rowOccupants[WIDTH] = {0}; uint16_t colOccupants[WIDTH] = {0}; uint16_t boxOccupants[WIDTH] = {0}; int* solution; void Init(); void Eliminate(Cell& cell, uint16_t value); public: std::vector<Cell> EmptyCells; Board(const int* puzzle, int* solution, int emptyValue = -1); void SetValue(Cell cell, uint16_t value); void SetValue(Cell cell, int value); int* GetSolution() const; inline uint16_t GetOccupants(Cell cell) const { return rowOccupants[cell.RowInd] | colOccupants[cell.ColInd] | boxOccupants[cell.BoxInd]; } }; template class Board<2>; template class Board<3>; template class Board<4>; } // namespace Sudoku
The problem is that at the point inside the header file where you have provided the 3 explicit template instantiations, the definition of the corresponding function template printBoardWithCandidates is not available. Thus, the compiler cannot generate the definitions for these instantiations and gives the mentioned error: error: explicit instantiation of 'void Sudoku::printBoardWithCandidates(const Sudoku::Board<BASE>&) [with int BASE = 3]' but no definition available [-fpermissive] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ There are two ways to solve this, shown below. Note also that I have used an empty struct Board, since the Board class you provided is quite long and would take a lot of space if pasted here twice. The concept is the same, though. Method 1 Provide the definition of the function template printBoardWithCandidates in the header before the explicit template instantiations, as shown below: header.h #pragma once #include "Board.h" namespace Sudoku { template<int BASE> void printBoardWithCandidates(const Board<BASE> &b) { //note this is a definition } //explicit template instantiation declaration extern template void printBoardWithCandidates<2>(const Board<2>&); extern template void printBoardWithCandidates<3>(const Board<3>&); extern template void printBoardWithCandidates<4>(const Board<4>&); } Board.h #pragma once template<int> struct Board { }; main.cpp #include <iostream> #include "header.h" //explicit template instantiation definition template void Sudoku::printBoardWithCandidates<2>(const Board<2>&); template void Sudoku::printBoardWithCandidates<3>(const Board<3>&); template void Sudoku::printBoardWithCandidates<4>(const Board<4>&); int main() { return 0; } Working demo Method 2 Here we provide the definition of the function template as well as the 3 explicit template instantiations in the source file. In the header file we only provide the declaration of the function template, and no explicit template instantiations appear in the header file. header.h #pragma once #include "Board.h" namespace Sudoku { //this is a declaration not a definition template<int BASE> void printBoardWithCandidates(const Board<BASE> &b); //no explicit template instantiation here since we have provided only the declaration above and not the definition } Board.h #pragma once template<int> struct Board { }; source.cpp #include "header.h" //provide the definition here template<int BASE> void Sudoku::printBoardWithCandidates(const Board<BASE> &board) { // definition... } template void Sudoku::printBoardWithCandidates<2>(const Board<2>&); template void Sudoku::printBoardWithCandidates<3>(const Board<3>&); template void Sudoku::printBoardWithCandidates<4>(const Board<4>&); main.cpp #include <iostream> #include "header.h" int main() { return 0; } Working demo
72,275,836
72,276,155
Why can't I manually define a template parameter?
I have a simple sample which provides: a struct template: #include <iostream> #include <vector> template <typename T> struct range_t { T b, e; range_t(T x, T y) : b(x), e(y) {} T begin() { return b; } T end() { return e; } }; a function template: template <typename T> range_t<T> range(T b, T e) { return range_t<T>(b, e); } I can use it to skip items in foreach loop of a (i.e) std::vector: int main() { std::vector<int> v{ 1, 2, 3, 4 }; for (auto p : range(v.begin()+1, v.end())) { std::cout << p << " "; } } This works as intended, however I don't really understand the need of the function template (2). I tried to write the foreach loop as this: for (auto p : range_t<std::vector::const_iterator>(v.begin()+1, v.end())) But for this I always got error: template argument 1 is invalid This might be a duplicate question, feel free to mark it as duplicate and please let me know the duplicated question which answers to all of these questions: Why is the template argument invalid here? (How) can I skip the function template? How can I create a function template which would work as this: myskipper would get only v as parameter in the foreach loop: template<typename T> range_t<T::const_iterator> myskipper(T c) { return range_t<T::const_iterator>(c.begin()+1, c.end()); } ... for (auto p : myskipper(v)) ...
Based on the comments and this article about iterator overflow, here is a complete working example. The template argument was invalid because std::vector is itself a class template, so you must name the element type: std::vector<int>::const_iterator. Also, inside a function template, dependent names such as T::const_iterator must be prefixed with the typename keyword: #include <iostream> #include <vector> template <typename T> struct range_t { T b, e; range_t(T x, T y) : b(x), e(y) {} T begin() { return b; } T end() { return e; } }; template <typename T> range_t<T> range(T b, T e) { return range_t<T>(b, e); } template<typename T> range_t<typename T::iterator> skip(T &c, typename T::size_type skipCount) { return range_t<typename T::iterator>(c.begin() + std::min(c.size(), skipCount), c.end()); } int main() { std::vector<int> v{ 1, 2, 3, 4 }; for (auto p : range(v.begin()+1, v.end())) { std::cout << p << " "; } std::cout << std::endl; for (auto p : range_t(v.begin()+1, v.end())) { std::cout << p << " "; } std::cout << std::endl; for (auto p : skip(v, 3)) { std::cout << p << " "; } std::cout << std::endl; }
72,276,829
72,277,321
Vector of set insert elements
I'm trying to write a function that returns a vector of sets of strings representing the members of teams. A group of n names should be divided into k teams for a game. Teams should be the same size, but this is not possible unless n is exactly divisible by k. Therefore, the first n mod k teams have n / k + 1 members, and the remaining teams have n / k members. #include <iostream> #include <vector> #include <string> #include <set> #include <list> typedef std::vector<std::set<std::string>>vek; vek Distribution(std::vector<std::string>names, int k) { int n = names.size(); vek teams(k); int number_of_first = n % k; int number_of_members_first = n / k + 1; int number_of_members_remaining = n / k; int l = 0; int j = 0; for (int i = 1; i <= k; i++) { if (i <= number_of_first) { int number_of_members_in_team = 0; while (number_of_members_in_team < number_of_members_first) { teams[l].insert(names[j]); number_of_members_in_team++; j++; } } else { int number_of_members_in_team = 0; while (number_of_members_in_team < number_of_members_remaining) { teams[l].insert(names[j]); number_of_members_in_team++; j++; } } l++; } return teams; } int main () { for (auto i : Distribution({"Damir", "Ana", "Muhamed", "Marko", "Ivan", "Mirsad", "Nikolina", "Alen", "Jasmina", "Merima" }, 3)) { for (auto j : i) std::cout << j << " "; std::cout << std::endl; } return 0; } OUTPUT should be: Damir Ana Muhamed Marko Ivan Mirsad Nikolina Alen Jasmina Merima MY OUTPUT: Ana Damir Marko Muhamed Ivan Mirsad Nikolina Alen Jasmina Merima Could you explain to me why the names are not printed in the right order?
teams being a std::vector<...> supports random access via an index. auto & team_i = teams[i]; (0 <= i < teams.size()) will give you an element of the vector. team_i is a reference to type std::set<std::list<std::string>>. As a std::set<...> does not support random access via an index, you will need to access the elements via iterators (begin(), end() etc.), e.g.: auto set_it = team_i.begin();. *set_it will be of type std::list<std::string>. Since std::list<...> also does not support random access via an index, again you will need to access it via iterators, e.g.: auto list_it = set_it->begin();. *list_it will be of type std::string. This way it is possible to access every set in the vector, every list in each set, and every string in each list (after you have added them to the data structure). However, using iterators with std::set and std::list is not as convenient as using indexed random access with std::vector. std::vector has additional benefits (simple and efficient implementation, contiguous memory block). If you use std::vectors instead of std::set and std::list, vek will be defined as: typedef std::vector<std::vector<std::vector<std::string>>> vek; std::list, being a linked list, offers some benefits (like being able to add an element in O(1)). std::set guarantees that each value is present only once. But if you don't really need these features, you can make your code simpler (and often more efficient) by using only std::vectors as your containers. Note: if every set will ever contain only 1 list (of strings), you can consider getting rid of one level of the hierarchy, i.e. store the lists (or vectors, as I suggested) directly as elements of the top-level vector. UPDATE: Since the question was changed, here's a short update: In my answer above, ignore all the mentions of std::list. So when you iterate over the std::set, the elements are already std::strings. 
The reason the names are not in the order you expect: std::set keeps the elements sorted, and when you iterate it you will get the elements by that sorting order. See the answer here: Is the std::set iteration order always ascending according to the C++ specification?. Your set contains std::strings and the default sort order for them is alphabetically. Using std::vector instead of std::set like I proposed above, will get you the result you wanted (std::vector is not sorted automatically). If you want to try using only std::vector: Change vek to: typedef std::vector<std::vector<std::string>>vek; And replace the usage of insert (to add an element to the set) with push_back to do the same for a vector.
72,276,889
72,286,137
How to handle this kind of c++ error in python
I have a problem with this line of code: self.db.bulk_insert('input', lines, columns_numbers = (2,3)) where db is my wrapper for pymssql. Here is the method of that class: def bulk_insert(self, table_name:str, dataset:Sequence, columns_numbers:Sequence=None): self._connection.bulk_copy(table_name, dataset, columns_numbers) where _connection is just a pymssql connection. When I run it the data is inserted, but free(): invalid next size (fast) appears in the console and the code does not go any further. The table and the data I am inserting are quite simple: CREATE TABLE input ( id BIGINT IDENTITY(1,1) PRIMARY KEY, ipn VARCHAR(12) NOT NULL, info_id INT NOT NULL FOREIGN KEY REFERENCES info(id), fetched BIT DEFAULT 0, inserted DATETIME DEFAULT CURRENT_TIMESTAMP, updated DATETIME DEFAULT NULL ); An element of lines looks like: ('1234567890', Decimal('1')) Why is this happening and how can I solve it? Thanks!
So, I found the issue. Before that bulk_insert I made a DB request to get the id to insert. In the table it is an int, but it is returned as Decimal('1'), as I mentioned in my question. So the problem was the data type I was inserting (Decimal), because the table's column type is int.
72,277,334
72,277,842
Comparing characters at the same index of two strings
I am trying to compare the characters of two strings at a given index. However, I am getting the following error: Line 9: Char 26: error: invalid operands to binary expression ('const __gnu_cxx::__alloc_traits<std::allocator<char>, char>::value_type' (aka 'const char') and 'const std::__cxx11::basic_string<char>') if (s[i] != letter) { return s.substr(0, i); } ~~~~ ^ ~~~~~~ Here is my code: string longestCommonPrefix(vector<string>& strs) { auto comp = [](const string& a, const string& b) { return a.size() < b.size(); }; const auto smallest = min_element(strs.begin(), strs.end(), comp); for (int i = 0; i < smallest->size(); ++i) { const auto letter = smallest[i]; for (const auto& s : strs) { if (s[i] != letter) { return s.substr(0, i); } // <- error occurs here } } return smallest; } Could someone please explain why this does not work?
In your code, smallest is a std::vector<std::string>::iterator const, which is a random access iterator and hence provides operator[]. However, smallest[i] is equivalent to *(smallest + i), so it will return a reference to the i-th string after the one pointed to by smallest, instead of the i-th character of the string pointed to by smallest (which is what you want). To get the i-th character of the string pointed to by smallest, you should first dereference it: (*smallest)[i] And you should dereference it in the return statement as well. Also, I recommend you pass strs as a reference to const and check that smallest is not the end iterator (which may occur if the input vector is empty): string longestCommonPrefix(vector<string> const& strs) { auto comp = [](const string& a, const string& b) { return a.size() < b.size(); }; const auto smallest = std::min_element(strs.begin(), strs.end(), comp); if (smallest == strs.end()) return ""; for (int i = 0; i < smallest->size(); ++i) { const auto letter = (*smallest)[i]; for (const auto& s : strs) { if (s[i] != letter) return s.substr(0, i); } } return *smallest; }
72,277,442
72,277,570
Overloading different types in C++
Suppose we have the following class: class Rational { // Represents a rational number n/d, e.g. 1/2. public: int n = 0; int d = 1; }; Rational x = Rational(); x.n = 1; x.d = 2; Is it possible to overload an operator such that 3 * x would give 3/2 instead of an error? My teacher said that overloading happens only between objects of the same type, but then why can we overload between cout, which is of type ostream, and an object of type Rational, yet not between int and Rational?
You may write for example Rational operator *( const Rational &r, int x ) { return { r.n * x, r.d }; } Rational operator *( int x, const Rational &r ) { return { r.n * x, r.d }; } You may overload operators for user defined types. For a binary operator at least one of operands must be of a user defined type. From the C++ 20 Standard (12.4.2.3 Operators in expressions) 2 If either operand has a type that is a class or an enumeration, a user-defined operator function can be declared that implements this operator or a user-defined conversion can be necessary to convert the operand to a type that is appropriate for a built-in operator. In this case, overload resolution is used to determine which operator function or built-in operator is to be invoked to implement the operator. Therefore, the operator notation is first transformed to the equivalent function-call notation as summarized in Table 15 (where @ denotes one of the operators covered in the specified subclause). However, the operands are sequenced in the order prescribed for the built-in operator (7.6).
72,277,533
72,297,026
The remote system does not have CMake 3.8 or greater
Introduction to the problem I'm trying to create a macOS app that prints "Hello World" in C++, using Visual Studio 2022 (latest release 17.2.0) on Windows and the CMake template, so that I can connect remotely (using SSH) to the Mac. I've been following this official Microsoft tutorial. Problem occurred The problem is that when I get to the CMake installation step on macOS, I can't get Visual Studio to recognize the CMake version installed, since when I open the project on Windows I get the following message in the console: 1> Copying files to the remote machine. 1> Starting copying files to remote machine. 1> Finished copying files (elapsed time 00h:00m:00s:650ms). 1> CMake generation started for configuration: 'macos-debug'. 1> Found cmake executable at /Users/maria/.vs/cmake/bin/cmake. 1> The remote system does not have CMake 3.8 or greater. An infobar to automatically deploy CMake to the remote system will be displayed if you are building on a supported architecture. See https://aka.ms/linuxcmakeconfig for more info. It also shows the following message above: Supported CMake version is not present on 'remote address'. Install latest CMake binaries from CMake.org? Yes No And when I press "yes", the deployment fails because a CMake installation apparently already exists on the remote Mac: 1> Copying files to the remote machine. 1> Starting copying files to remote machine. 1> Finished copying files (elapsed time 00h:00m:00s:650ms). 1> CMake generation started for configuration: 'macos-debug'. 1> Found cmake executable at /Users/maria/.vs/cmake/bin/cmake. 1> The remote system does not have CMake 3.8 or greater. An infobar to automatically deploy CMake to the remote system will be displayed if you are building on a supported architecture. See https://aka.ms/linuxcmakeconfig for more info. CMake binary deployment to the remote machine started. CMake generation will continue automatically after deployment finishes. 
CMake binary deployment to the remote machine failed: Installation directory '/Users/maria/.vs/cmake' already exists. Solution attempts I have tried to install CMake using brew (latest version available 3.23.1) and making sure that cmake was accessible directly from the MacOS terminal (included in PATH), I also tried doing the procedure following the official guide by installing the image .dmg by copying the "CMake.app" to "/Applications" and adding it to the path using the following command: export PATH=/Applications/CMake.app/Contents/bin:$PATH And I even tried to install older versions of CMake (like 3.8.0 or 3.8.1) but the same thing still happened. The expected result is the same as the Microsoft guide shown here: 1> Copying files to the remote machine. 1> Starting copying files to remote machine. 1> Finished copying files (elapsed time 00h:00m:00s:650ms). 1> CMake generation started for configuration: 'macos-debug'. 1> Found cmake executable at /Applications/CMake.app/Contents/bin/cmake. 1> /Applications/CMake.app/Contents/bin/cmake -G "Ninja" DCMAKE_BUILD_TYPE_STRING="Debug" -DCMAKE_INSTALL_PREFIX 1> [CMake] -- Configuring done 1> [CMake] -- Generating done 1> [CMake] -- Build files have been written to: /Users/cti/.vs/CMakeProject90/out/build/macos-debug 1> Extracted CMake variables. 1> Extracted source files and headers. 1> Extracted code model. 1> Extracted includes paths. 1> CMake generation finished. Does anyone know why this is happening or what could be the solution to this problem?
This seems to be a Visual Studio bug. You can keep track of it here. Workaround It looks like Visual Studio always looks for CMake under the local folder of the user currently connected via SSH (i.e. ~/.vs/cmake/bin/cmake), no matter how you installed it. Then, when Visual Studio offers to install it: Supported CMake version is not present on ‘192.168.1.180'. Install latest CMake binaries from CMake.org? If you agree to do that, it actually rolls it out locally in said folder. The binaries Visual Studio uses are broken and throw an error if you try to use them on the Mac machine locally: $: ~/.vs/cmake/bin/cmake zsh: exec format error: /Users/User/.vs/cmake/bin/cmake That's why Visual Studio keeps struggling to find the working CMake binary. You can get around it by creating a symbolic link to the folder with working CMake binaries in place of the folder Visual Studio looks for them in: $: rm -rf ~/.vs/cmake/bin $: ln -s /Applications/CMake.app/Contents/bin ~/.vs/cmake/bin At this point Visual Studio will be able to locate the CMake, but won't be able to locate the default compilers and generators: 1> /Users/User/.vs/cmake/bin/cmake -G "Ninja" -DCMAKE_BUILD_TYPE:STRING="Debug" -DCMAKE_INSTALL_PREFIX:PATH="/Users/User/.vs/CrossPlatform/out/install/macos-debug" /Users/User/.vs/CrossPlatform/CMakeLists.txt; 1> [CMake] CMake Error: CMake was unable to find a build program corresponding to "Ninja". CMAKE_MAKE_PROGRAM is not set. You probably need to select a different build tool. 1> [CMake] CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage 1> [CMake] CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage The simplest way to fix that is just by running this command on the Mac machine. It will generate the cache Visual Studio can use then, however be advised that whenever this cache is invalidated for whatever reason (e.g. when switching between configurations), you will have to re-generate it the same way. 
Another option is to specify all missing parameters explicitly (either via CMakeLists.txt or as command-line arguments under the cacheVariables property of CMakePresets.json).
72,278,141
72,278,653
Using std::async with a method receiving a cv::OutputArray in order to assign it doesn't work
I have the following function: void MyClass::myFunc(cv::OutputArray dst) const { cv::Mat result; ... dst.assign(result); } If I run it like this: cv::Mat result; myFunc(result); otherFunc(result); It works fine and otherFunc receives the result modified by myFunc. But if I use std::async like this: cv::Mat result; auto resultFuture = std::async(&MyClass::myFunc, this, result); resultFuture.wait(); otherFunc(result); otherFunc receives an empty result. What am I doing wrong?
The root cause is that passing arguments by reference (&) to a function to run via std::async is problematic. You can read about it here: Passing arguments to std::async by reference fails (in your case there is no compilation error, but the link explains the issue in general). And in your case you use cv::OutputArray which is defined in opencv as a reference type: typedef const _OutputArray& OutputArray; I assume you wanted the reference semantics, since you expected your result object to be updated by myFunc. The solution is to use std::ref. But since the result object you pass is a cv::Mat, it is preferable and more straightforward for myFunc to receive a cv::Mat&. I also managed to produce a solution using a cv::OutputArray, but it requires an ugly cast (in addition to the std::ref). It works fine on MSVC, but I am not sure it will be generally valid. Below is the code demonstrating these 2 options. I recommend using the 1st approach if you can. You can call otherFunc(result); at the point where I print the dimensions of result after it is initialized. #include <opencv2/core/core.hpp> #include <future> #include <iostream> // Test using a cv::Mat &: class MyClass1 { void myFunc(cv::Mat & dst) const { cv::Mat result(4, 3, CV_8UC1); result = 1; // ... initialize some values dst = result; // instead of using cv::OutputArray::assign } public: void test() { cv::Mat result; std::cout << "MyClass1: before: " << result.cols << " x " << result.rows << std::endl; auto resultFuture = std::async(&MyClass1::myFunc, this, std::ref(result)); resultFuture.wait(); // Here result will be properly set. std::cout << "MyClass1: after: " << result.cols << " x " << result.rows << std::endl; } }; // Test using a cv::OutputArray: class MyClass2 { void myFunc(cv::OutputArray dst) const { cv::Mat result(4, 3, CV_8UC1); result = 1; // ... 
initialize some values dst.assign(result); } public: void test() { cv::Mat result; std::cout << "MyClass2: before: " << result.cols << " x " << result.rows << std::endl; auto resultFuture = std::async(&MyClass2::myFunc, this, std::ref(static_cast<cv::OutputArray>(result))); resultFuture.wait(); // Here result will be properly set. std::cout << "MyClass2: after: " << result.cols << " x " << result.rows << std::endl; } }; int main() { // Test receiving a cv::Mat&: MyClass1 m1; m1.test(); // Test receiving a cv::OutputArray: MyClass2 m2; m2.test(); return 0; }
72,278,520
72,278,603
Sort Algorithm creates error message when changing objects in vector
#include<vector>; using namespace std; int main() { vector<int>Liste; Liste = { 5,2,3,6,3,4,7 }; int n = Liste.size(); int i, j, k_1, k_2 ; int m_1, m_2 ; for (i; i = 0; i = n - 1) { k_1 = i; m_1 = Liste[i]; for (j; j = i + 1; j = n) { if (Liste[j] < m_1) { k_2 = j; m_2 = Liste[j]; } Liste.insert(k_1, m_2); Liste.insert(k_2 + 1, m_1); Liste.erase(k_1+1); Liste.erase(k_2 + 2); } } cout << Liste << endl; return 0; } When running the code an error occurs in lines 19, 20, 21, 22: Keine Instanz von Überladene Funktion "std::vector<_Ty, _Alloc>::insert [mit _Ty=int, _Alloc=std::allocator]" stimmt mit der Argumentliste überein (no instance of overloaded function "std::vector<_Ty, _Alloc>::insert [with _Ty=int, _Alloc=std::allocator]" matches the argument list) Since I am new to coding, I am not sure why this error occurs.
The insert and erase methods of std::vector take an iterator as their first parameter, not a plain integer; this is roughly the sense of the error returned by your compiler. It does not find any version of insert that takes an int as its first parameter. https://www.cplusplus.com/reference/vector/vector/insert/ Anyway, you can use this method to achieve what you want to do: Liste.insert(Liste.begin() + k_1, m_2); Liste.insert(Liste.begin() + (k_2 + 1), m_1); Liste.erase(Liste.begin() + (k_1 + 1)); Liste.erase(Liste.begin() + (k_2 + 2));
72,278,593
72,278,662
Is this causing a dangling pointer when using map of pointers
This simple code generates a warning about "Object backing the pointer will be destroyed at the end of the full expression". What does that mean? Can I not use the object entry after I use get_map? And also, why is this warning showing up? static std::map<std::string, int *> get_map() { static std::map<std::string, int*> the_map; return the_map; } int main() { (...) auto entry = get_map().find("HEY"); (...) use entry , is that wrong ? }
Can I not use the object entry after I use get_map? No, you cannot. static std::map<std::string, int *> get_map() returns a copy of the map. auto entry = get_map().find("HEY"); returns an iterator pointing into the copy. The copy is destroyed immediately after entry is assigned (because the copy was not saved in any variable, it remained a temporary). So, entry can't be safely used.
72,278,918
72,279,288
vector move operation vs element move operation
For the following example, why is the vector move operation not triggered? How do I know when I should explicitly use a move operator? #include <iostream> #include <vector> using namespace std; class Test { public: Test() { std::cout << " default " << std::endl; } Test(const Test& o) { std::cout << " copy ctor " << std::endl; } Test& operator=(const Test& o) { std::cout << " copy assign " << std::endl; return *this; } Test(Test&& o) { std::cout << " move ctor" << std::endl; } Test& operator=(Test&& o) { std::cout << " move assign " << std::endl; return *this; } }; int main() { std::cout << " vector: " << std::endl; std::vector<Test> p; p = {Test()}; // expect vector move here since the RHS is temporary. std::cout << std::endl; std::cout << " single value " << std::endl; Test tt; tt = Test(); } Output: vector: default copy ctor single value default default move assign I was under the impression that when we assign a temporary variable to an lvalue (the single value case in the example), it would trigger a move operation if it exists. Seems that I was wrong and my understanding was overly simplified; I need to carefully check case by case to ensure there's no redundant copy.
std::vector has an assignment operator that takes an std::initializer_list: vector& operator= (initializer_list<value_type> il); So when you wrote p = {Test()}; you're actually using the above assignment operator. Now why a call to the copy constructor is made can be understood from dcl.init.list, which states: An object of type std::initializer_list<E> is constructed from an initializer list as if the implementation allocated a temporary array of N elements of type const E, where N is the number of elements in the initializer list. Each element of that array is copy-initialized with the corresponding element of the initializer list, and the std::initializer_list object is constructed to refer to that array.
72,279,026
72,279,266
Execution speed of code with `function` object as compared to using template functions
I know that std::function is implemented with the type erasure idiom. Type erasure is a handy technique, but as a drawback it needs to store on the heap a register (some kind of array) of the underlying objects. Hence when creating or copying a function object there are allocations to do, and as a consequence the process should be slower than simply manipulating functions as template types. To check this assumption I have run a test function that accumulates n = cycles consecutive integers, and then divides the sum by the number of increments n. First coded as a template: #include <iostream> #include <functional> #include <chrono> using std::cout; using std::function; using std::chrono::system_clock; using std::chrono::duration_cast; using std::chrono::milliseconds; double computeMean(const double start, const int cycles) { double tmp(start); for (int i = 0; i < cycles; ++i) { tmp += i; } return tmp / cycles; } template<class T> double operate(const double a, const int b, T myFunc) { return myFunc(a, b); } and the main.cpp: int main() { double init(1), result; int increments(1E9); // start clock system_clock::time_point t1 = system_clock::now(); result = operate(init, increments, computeMean); // stop clock system_clock::time_point t2 = system_clock::now(); cout << "Input: " << init << ", " << increments << ", Output: " << result << '\n'; cout << "Time elapsed: " << duration_cast<milliseconds>(t2 - t1).count() << " ms\n"; return 0; } This was run a hundred times and get a mean result of 10024.9 ms. Then I introduce the function object in the main, plus a template specialization for operate so the code above can be recycled: // as above, just add the template specialization template<> double operate(const double a, const int b, function<double (const double, const int)> myFunc) { cout << "nontemplate called\n"; return myFunc(a, b); } // and inside the main int main() { //... 
// start clock system_clock::time_point t1 = system_clock::now(); // new lines function<double (const double, const int)> computeMean = [](const double init, const int increments) { double tmp(init); for (int i = 0; i < increments; ++i) { tmp += i; } return tmp / increments; }; // rest as before // ... } I expected the function version to be faster, but the average is about the same, actually even slower, result = 9820.3 ms. Checked the standard deviations and they are about the same, 1233.77 against 1234.96. What sense can be made of this? I would have expected the second version with the function object to be slower than the template version. Here the whole test can be run on GDB.
I know that std::function is implemented with the type erasure idiom. Type erasure is a handy technique, but as a drawback it needs to store on the heap a register (some kind of array) of the underlying objects. Type erasure does not necessarily require heap allocations. In this case, it is likely the implementation of std::function will not have to do any heap allocation, since the lambda doesn't capture any variables. Therefore, std::function only has to store the function pointer, which it will do in the object itself, not in heap-allocated memory. Apart from that, even if std::function did do a heap allocation, some compilers might even elide those heap allocations. Last but not least, while heap allocations are more expensive than stack allocations, if you only need to allocate something on the heap once for the entire duration of your program, you probably won't notice any difference in timing due to that allocation.
72,279,087
72,279,200
opengl won't overlap more than 1 texture
I'm trying to create an opengl program that creates a 2d square, and applies 2 textures on it. I followed this tutorial: https://learnopengl.com/Getting-started/Textures This is my fragment shader: #version 330 core //in vec3 Color; in vec2 TexCoord; out vec4 FragColor; uniform sampler2D Texture1; uniform sampler2D Texture2; void main() { FragColor = mix(texture(Texture1, TexCoord), texture(Texture2, TexCoord), 0.5); } This is the code that sends the textures and the uniforms: GLuint Tex1, Tex2; int TexWidth, TexHeight, TexNrChannels; unsigned char* TexData = stbi_load("container.jpg", &TexWidth, &TexHeight, &TexNrChannels, 0); glGenTextures(1, &Tex1); glBindTexture(GL_TEXTURE_2D, Tex1); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TexWidth, TexHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, TexData); glGenerateMipmap(GL_TEXTURE_2D); glUniform1i(glGetUniformLocation(Program, "Texture1"), 0); stbi_image_free(TexData); TexData = stbi_load("awesomeface.png", &TexWidth, &TexHeight, &TexNrChannels, 0); glGenTextures(1, &Tex2); glBindTexture(GL_TEXTURE_2D, Tex2); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, TexWidth, TexHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, TexData); glGenerateMipmap(GL_TEXTURE_2D); glUniform1i(glGetUniformLocation(Program, "Texture2"), 1); stbi_image_free(TexData); And this is the render loop: while (!glfwWindowShouldClose(window)) { processInput(window); // GL render here glClearColor(0.05f, 0.0f, 0.1f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); glActiveTexture(GL_TEXTURE0); 
glBindTexture(GL_TEXTURE_2D, Tex1); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, Tex2); glUseProgram(Program); glBindVertexArray(VAO); glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0); glfwSwapBuffers(window); glfwPollEvents(); } When I run it, only the first texture shows up on the square, and the a argument (last argument) of the mix function in the shader won't make a difference for any value. I tried activating (glActiveTexture + GlBindTexture) the second texture first in the render loop, and it caused the second texture to be shown exclusively. How can I make the textures mix together like in the tutorial? If this approach is wrong, I would like to learn about another way to accomplish the same result.
glUniform1i sets a value in the default uniform block of the currently installed program. You have to install the program with glUseProgram before you can set the value of a uniform variable: GLint t1_loc = glGetUniformLocation(Program, "Texture1"); GLint t2_loc = glGetUniformLocation(Program, "Texture2"); glUseProgram(Program); glUniform1i(t1_loc, 0); glUniform1i(t2_loc, 1); Alternatively you can use glProgramUniform1i: glProgramUniform1i(Program, t1_loc, 0); glProgramUniform1i(Program, t2_loc, 1);
72,279,095
72,279,177
How to check if an object is an instance of a template class of multiple template arguments in C++?
I have the following class: template <typename T, typename U = UDefault> class A; How to check whether types such as A<float> or A<float, int> are instances of the above templated class? I tried modifying How to check if an object is an instance of a template class in C++? into: template <typename T, typename U> struct IsA : std::false_type {}; template <typename T, typename U> struct IsA<A<T, U>> : std::true_type {}; but it's giving the following error 20:18: error: wrong number of template arguments (1, should be 2) 16:8: error: provided for 'template<class T, class U> struct IsA' In function 'int main()': 40:34: error: wrong number of template arguments (1, should be 2) 16:8: error: provided for 'template<class T, class U> struct IsA' How can this be solved?
Your IsA class should take just one template argument: the type you are testing. template <typename Type> struct IsA : std::false_type {}; template <typename T, typename U> struct IsA< A<T,U> > : std::true_type {}; // ^^^^^^ The specialization of your one template argument. Put another way, since the individual template parameters to A do not matter here: template <typename ...AParams> struct IsA< A<AParams...> > : std::true_type {}; // ^^^^^^^^^^^^^ The specialization of your one template argument. See it work in Compiler Explorer
72,279,211
72,279,228
Can new still throw an exception?
I'm a new C++ programmer, so I never wrote C++ code for anything older than C++11. I was reading Scott Meyers "Effective C++, 2nd Edition" (I know it is old, but I think it still has some valid points). In the book, according to "Item 7", new can throw exceptions which should be handled. Can new still throw exceptions? Should I be prepared about exceptions that could be thrown by a smart pointer or new?
Yes, on allocation failure new throws a std::bad_alloc exception. That is, unless you use the non-throwing form by passing std::nothrow (a const std::nothrow_t&), in which case you are guaranteed a return value of nullptr instead. See details here
72,279,221
72,279,258
Passing arguments to child boost::process
void mainParent() { string str = ".\\childProcess.exe"; boost::process::child c(str,bp::args({stringArg}) ); c.wait(); } int mainChild(int argc, const char* argv[]) { cout << "test == " << argv[0] << endl; } string stringArg ="text"; I tried: boost::process::child c(str,bp::args({stringArg}) ); but cout << "test == " << argv[0] << endl; outputs its own path to the exe instead of the text I want.
but cout << "test == " << argv[0] << endl; outputs its own path to the exe instead of the text I want As it should be, because argv[0] is supposed to hold the path to the exe file. The 1st command-line parameter will be in argv[1] instead, and the 2nd parameter will be in argv[2], and so on. Use argc to know how many strings are actually in argv[], eg: int mainChild(int argc, const char* argv[]) { for(int i = 0; i < argc; ++i) { cout << "argv[" << i << "] = " << argv[i] << endl; } }
72,279,666
72,279,951
Reading multiple lines from a text file in C++
I am trying to make a login program that reads and writes a text file. For some reason, only the first line of the text file logs in successfully, but the rest won't. #include <iostream> #include <fstream> #include <string> using namespace std; bool loggedIn() { string username, password, un, pw; cout << "Enter username >> "; cin >> username; cout << "Enter password >> "; cin >> password; ifstream read("users.txt"); while (read) { getline(read, un, ' '); getline(read, pw); if (un == username && pw == password) { return true; } else { return false; } } } Text File: user1 pass1 user2 pass2 Alternatives I tried: read.getline(un, 256, ' '); read.getline(pw, 256);
while (read) is the same as while (!read.fail()), which is the wrong loop condition to use in your situation. You are not checking if both getline() calls are successful before comparing the strings they output. You also need to move the return false; statement out of the loop. Since you have a return in both the if and else blocks, you are comparing only the 1st user in the file and then stopping the loop regardless of the result. You want to keep reading users from the file until a match is found or the EOF is reached. Try this instead: #include <iostream> #include <fstream> #include <string> using namespace std; bool loggedIn() { string username, password, un, pw; cout << "Enter username >> "; cin >> username; cout << "Enter password >> "; cin >> password; ifstream read("users.txt"); while (getline(read, un, ' ') && getline(read, pw)) { if ((un == username) && (pw == password)) { return true; } } return false; } Alternatively, use 1 call to std::getline() to read an entire line, and then use std::istringstream to read values from the line, eg: #include <iostream> #include <fstream> #include <sstream> #include <string> using namespace std; bool loggedIn() { string username, password, un, pw, line; cout << "Enter username >> "; cin >> username; cout << "Enter password >> "; cin >> password; ifstream read("users.txt"); while (getline(read, line)) { istringstream iss(line); if ((iss >> un >> pw) && (un == username) && (pw == password)) { return true; } } return false; }
72,280,052
72,282,181
How to copy an RGBA image to Windows' Clipboard
How might one copy a 32bit (per pixel) RGBA image to Windows' Clipboard? I've arrived at this function after a lot of trial, but no luck in having my image data "paste" at all. It does not appear in the Clipboard's history either. Slightly editing it to use CF_DIB and the BITMAPINFOHEADER header has yielded a "copy" entry in that history and an image of the correct size when pasted, though sticking a png on the back of a CF_DIB has caused programs to glitch out in incredibly interesting and non-benign ways. My goal is to copy an image with an alpha channel to the Clipboard, and to have the colors not be multiplied against this alpha during the hand-off. What am I doing wrong? bool copyBitmapIntoClipboard(Window & window, const Bitmap & in) { // this section is my code for creating a png file StreamWrite stream = StreamWrite::asBufferCreate(); in.savePng(stream); uint64 bufSize = 0; char * buf = stream._takeBuffer(bufSize, false); // "buf" <-- contains the PNG payload // "bufSize" <-- is the size of this payload // beyond this point, it's just standard windows' stuff that doesn't rely on my code BITMAPV5HEADER header; header.bV5Size = sizeof(BITMAPV5HEADER); header.bV5Width = in.getX(); // <-- size of the bitmap in pixels, width and height header.bV5Height = in.getY(); header.bV5Planes = 1; header.bV5BitCount = 0; header.bV5Compression = BI_PNG; header.bV5SizeImage = bufSize; header.bV5XPelsPerMeter = 0; header.bV5YPelsPerMeter = 0; header.bV5ClrUsed = 0; header.bV5ClrImportant = 0; header.bV5RedMask = 0xFF000000; header.bV5GreenMask = 0x00FF0000; header.bV5BlueMask = 0x0000FF00; header.bV5AlphaMask = 0x000000FF; header.bV5CSType = LCS_sRGB; header.bV5Endpoints; // ignored header.bV5GammaRed = 0; header.bV5GammaGreen = 0; header.bV5GammaBlue = 0; header.bV5Intent = 0; header.bV5ProfileData = 0; header.bV5ProfileSize = 0; header.bV5Reserved = 0; HGLOBAL gift = GlobalAlloc(GMEM_MOVEABLE, sizeof(BITMAPV5HEADER) + bufSize); if (gift == NULL) return false; HWND win 
= window.getWindowHandle(); if (!OpenClipboard(win)) { GlobalFree(gift); return false; } EmptyClipboard(); void * giftLocked = GlobalLock(gift); if (giftLocked) { memcpy(giftLocked, &header, sizeof(BITMAPV5HEADER)); memcpy((char*)giftLocked + sizeof(BITMAPV5HEADER), buf, bufSize); } GlobalUnlock(gift); SetClipboardData(CF_DIBV5, gift); CloseClipboard(); return true; }
At least in my experience, trying to transfer png data with a BITMAPV5HEADER is nearly a complete loss, unless you're basically planning on using it strictly as an internal format. One strategy that does work at least for a fair number of applications, is to register the PNG clipboard format, and just put the contents of a PNG file into the clipboard (with no other header). Code would look something like this: bool copyBitmapIntoClipboard(Window & window, const Bitmap & in) { // this section is my code for creating a png file StreamWrite stream = StreamWrite::asBufferCreate(); in.savePng(stream); uint64 bufSize = 0; char * buf = stream._takeBuffer(bufSize, false); // "buf" <-- contains the PNG payload // "bufSize" <-- is the size of this payload HGLOBAL gift = GlobalAlloc(GMEM_MOVEABLE, bufSize); if (gift == NULL) return false; HWND win = window.getWindowHandle(); if (!OpenClipboard(win)) { GlobalFree(gift); return false; } EmptyClipboard(); auto fmt = RegisterClipboardFormat("PNG"); // or L"PNG", as applicable void * giftLocked = GlobalLock(gift); if (giftLocked) { memcpy((char*)giftLocked, buf, bufSize); } GlobalUnlock(gift); SetClipboardData(fmt, gift); CloseClipboard(); return true; } I've used code like this, and successfully pasted the contents into recent versions of at least LibreOffice Write and Calc, MS Word, and Paint.Net. This is also the format Chrome (for one example) will produce as the first (preferred) format if you tell it to copy a bitmap. On the other hand, FireFox produces a whole plethora of formats, but not this one. It will produce a CF_DIBV5, but at least if memory serves, it has pre-multiplied alpha (or maybe it loses alpha completely--I don't remember for sure. Doesn't preserve it as you'd want anyway). Gimp will accept 32-bit RGB format DIB, with alpha in the left-over byte, and make use of that alpha. 
For better or worse, as far as I've been able to figure out that's about the only thing that works to paste something into Gimp with its alpha preserved (not pre-multiplied). Notes As versions are updated, the formats they support may well change, so even though (for example) PNG didn't work with Gimp the last time I tried, it might now. You can add the same data into the clipboard in different formats. You want to start from the "best" format (the one that preserves the data most faithfully), and work your way down to the worst. So when you do a copy, you might want to do PNG, then RGB with an alpha channel, then CF_BITMAP (which will pre-multiply alpha, but may still be better than nothing).
72,280,094
72,280,715
Is it possible to include files in macOS bundle without using Xcode?
So, I've been trying to include shaders inside a macOS bundle for a while, and the only way I have found was adding them through Xcode. It would be nice if it were possible to do something like that with only CMake:
Well, it looks like I found a solution: set(VS_SHADER_NAME "cgui_tri_vertex.vs") set(VS_SHADER_PATH ${PROJECT_SOURCE_DIR}/resources/${VS_SHADER_NAME}) file(COPY ${VS_SHADER_PATH} DESTINATION "${PROJECT_NAME}.app/Contents/Resources")
72,280,166
72,281,307
.pgm images don't seem to fit in array
Working on some basic image processing and I need to manipulate a P2 .pgm image. If there are no comments, then line two should contain a width and height value. If we take the FEEP example from this website https://people.sc.fsu.edu/~jburkardt/data/pgma/pgma.html then we can see the value given is 24 and the array later in the file is 24 wide. Perfect! However, whenever I go to read in any other file from the same website, the width measurement is, say, 512, but the text data inside the file is only 17 digits wide. Am I looking at this file wrong? How can I parse out this file into a multidimensional array if the width of the lines doesn't match the width from the header?
In the .pgm formats, one line of text does not correspond to one line of the image. The first width numbers are the first row of the image. Then the next width numbers are the second row of the image and so on. How many numbers are on one line of text is inconsequential. That means when reading the file, you should ignore newlines and treat them like any other whitespace.
72,280,185
72,280,651
Is this GCC 12.1 const problem a bug or feature? "Attempts to call non-const function with const object"
We're seeing C++ code, that compiles successfully in GCC 11.3 and Visual Studio 2022, have issues with GCC 12.1. The code is on Compiler Explorer: https://godbolt.org/z/6PYEcsd1h (Thanks to @NathanPierson for simplifying it some.) Basically, a template class is deciding to try to call a non-const base class function in a const function, even though a const overload is available. This appears to be some sort of compiler bug, but it could be some weird new C++ rule I don't understand. Does this represent a compiler bug? struct BaseClass { // Commenting this non-const function out will also fix the compilation. int* baseDevice() { return nullptr; } const int* baseDevice() const { return nullptr; } }; template <class ObjectClass> struct DerivedClass : BaseClass { }; template <class ObjectClass> struct TopClass : DerivedClass<ObjectClass> { public: virtual int failsToCompile() const { // This should choose to call the const function, but it tries to call the non-const version. if (BaseClass::baseDevice()) return 4; return 1; } }; int main() { TopClass<int> x; } <source>: In instantiation of 'int TopClass<ObjectClass>::failsToCompile() const [with ObjectClass = ConcreteObject]': <source>:27:17: required from here <source>:30:32: error: passing 'const TopClass<ConcreteObject>' as 'this' argument discards qualifiers [-fpermissive] 30 | if (BaseClass::baseDevice()) | ~~~~~~~~~~~~~~~~~~~~~^~ <source>:14:15: note: in call to 'MyDevice* BaseClass::baseDevice()' 14 | MyDevice* baseDevice() { return nullptr; } | ^~~~~~~~~~ ASM generation compiler returned: 1
Is this gcc 12.1 const problem a bug or feature It's a bug. I filed a bug report and the issue has already been verified coming from this commit. The ticket has been assigned and the resolution has a targeted milestone of version 12.2 - so we can hope for a quick fix.
72,280,650
72,280,697
Running into "error: [...] is a c++ extension"
After running: g++ --std=c++11 -ansi -pedantic-errors -Wall -o test_database test_database.cpp I am receiving the following errors: ./database.h:40:10: error: 'auto' type specifier is a C++11 extension [-Werror,-Wc++11-extensions] for (auto x:composerMap_) { ^ ./database.h:40:16: error: range-based for loop is a C++11 extension [-Werror,-Wc++11-extensions] for (auto x:composerMap_) { Yet note that I have already added the '--std=c++11' flag recommended in many other stackoverflow posts similar to this one. This has wasted several hours for me. How do I get past this? I am on macOS monterey 12.1. Here are details about the version of g++ I am using: g++ --version Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1 Apple clang version 13.0.0 (clang-1300.0.29.30) Target: arm64-apple-darwin21.2.0 Thread model: posix InstalledDir: /Library/Developer/CommandLineTools/usr/bin
From the GCC manual: -ansi: In C mode, this is equivalent to -std=c90. In C++ mode, it is equivalent to -std=c++98. When conflicting dialect flags are given, the later one wins, so the -ansi here overrides the earlier --std=c++11. Remove -ansi; -std=c++11 -pedantic-errors is enough. I also suggest adding -Wextra...
72,281,084
72,281,520
How to get the type underlying std::complex<T> and use it in a class
I am writing a data processor, and would like to be able to perform real-to-real, and real-to-complex computations. The setup I have right now: // class to hold various data types template <typename T> class DataArray { public: DataArray(){}; T *ptr; }; // Processing configuration template <typename T> class Config { public: Config(){}; T var; }; // The data processor itself, takes input, output array and the processing configuration template <typename Tin, typename Tout> class Process { public: DataArray<Tin> *in; DataArray<Tout> *out; Config<Tout> *config; Process(){}; Process(DataArray<Tin> *in_, DataArray<Tout> *out_, Config<Tout> *config_) { in = in_; out = out_; config = config_; } }; Currently, I can handle real-to-real processing as follows: int main() { DataArray<int16_t> input; DataArray<float> output; Config<float> config; // real 2 real processing Process<int16_t, float> P1(&input, &output, &config); // works fine return 0; } The problem I am facing is that when I want to do real-to-complex processing, I still need my Config to be in the underlying type of std::complex<T>. How can I change the Process class (or any other class) such that I can create a class Process<int16_t, std::complex<float>> that will know that it expects a Config<float>? int main() { DataArray<int16_t> input; DataArray<std::complex<float>> output_complex; Config<float> config; // real 2 complex processing Process<int16_t, std::complex<float>> P2(&input, &output, &config); // Obviously does not work return 0; } Follow up question Thanks to the Remy's answer, I can now get the type of std::complex in the class Process. If I have a namespace with templated functions, how would I obtain the underlying type in that function/namespace? The following works, but feels a bit clunky. This means that I have to repeat the line using T = typename helper::value_type_of<Tout>::type; in every kernel, am I right? 
// namespace with computation kernels namespace kernels { // define ConfigType in the kernel namespace? template <typename T> using ConfigType = Config<typename helper::value_type_of<T>::type>; template <typename Tin, typename Tout, bool IQ> void bfpw(ConfigType<Tout> *BFC, Tin *RF, Tout *BF) { // get type of Tout: std::complex<float> or float using T = typename helper::value_type_of<Tout>::type; T variable = BFC->var; } } template <typename Tin, typename Tout> Process<Tin, Tout>::Process(DataArray<Tin> *in_, DataArray<Tout> *out_, ConfigType *config_) { in = in_; out = out_; config = config_; kernels::bfpw<Tin, Tout, true>(config, in->ptr, out->ptr); // call the kernel }
One way is to define a helper template which Process can use to detect whether Tout is a std::complex or not, and if so then it can use Tout's value_type member, otherwise it can use Tout as-is. For example: namespace helper { template<typename T> struct value_type_of { using type = T; }; template<typename T> struct value_type_of<std::complex<T>> { using type = typename std::complex<T>::value_type; // or simply: using type = T; }; } template <typename Tin, typename Tout> class Process { public: using ConfigType = Config<typename helper::value_type_of<Tout>::type>; ... ConfigType *config; Process(){}; Process(..., ConfigType *config_) { ... config = config_; } }; Online Demo This way, Process<..., float> and Process<..., std::complex<float>> will both accept Config<float>.
72,281,629
72,281,675
C++ Most effective way to grab a substring with a value in the middle of a long string
I want to find the most effective way to do something like this: A big string containing all kinds of data, for example: plushieid:5637372&plushieposition:12757&plushieowner:null&totalplushies:5637373 I want to make a function that would have the input to be, let's say "plushieposition", and I would have it find and return the string with plushieposition:12757. The only way I can think of is find the position of plushieposition and then scan for & and delete the rest. But, is there a cleaner way? If not, what would be the best way to do this in code? I'm having a little bit of trouble understanding string scan practices.
Use std::string::find() to find the starting and stopping positions, and then use std::string::substr() to extract what is between them, eg: string extract(const string &s, const string &name) { string to_find = name + ":"; string::size_type start = s.find(to_find); if (start == string::npos) return ""; string::size_type stop = s.find('&', start + to_find.size()); return s.substr(start, stop - start); } string s = "plushieid:5637372&plushieposition:12757&plushieowner:null&totalplushies:5637373"; string found = extract(s, "plushieposition"); Online Demo
72,281,992
72,282,101
Runtime error: reference binding to null pointer of type 'int' (stl_vector.h) c++
I know that this error refers to undefined behavior (I think), but I've reread my code 20 times and I don't see what the UB is!? I'm doing Leetcode 238. Product of Array Except Self and here's my code (btw I don't know if this solution is right): class Solution { public: vector<int> productExceptSelf(vector<int>& nums) { vector<int> result; map<int, int> map; for(int i = 0; i < nums.size(); i++){ map[i] = nums[i]; } for(int i = 0; i < nums.size(); i++){ for(auto& it : map){ if(it.first != i){ result[i] = nums[i] * result[i]; } } } return result; } };
You can replace vector<int> result; with vector<int> result(nums.size()); This will initialize all of the values of result to 0. It won't solve the problem, but it will get rid of the runtime error you're getting.
72,282,042
72,282,873
Get every combination (order is important) in vector with given size and elements
I want to create every possible coloring in a vector for a given size of the vector (amount of vertices) and given possible elements (possible colors) as an example: for a graph with 3 vertices and I want to color it with 3 colors, I want the following possible vectors, that are gonna be my possible colorings: 0 0 0 0 0 1 0 0 2 ... 2 1 1 2 1 2 ... 2 0 0 1 0 0 as you can see I want both combinations like "0 0 1" and "1 0 0". is there any way to do this efficiently?
This is definitely possible. Refer to the below code. It also works with all the other ASCII characters. You can modify it in order to meet your demands. Note that the initial all-minimum combination has to be pushed before the loop, otherwise "00...0" would be skipped: #include <iostream> #include <vector> #include <string> inline std::vector<std::string> GetCombinations(const char min_dig, const char max_dig, int len) { std::vector<std::string> combinations; std::string combination(len, min_dig); combinations.push_back(combination); // include "00...0" itself while (true) { if (combination[len - 1] == max_dig) { combination[len - 1] = min_dig; int increment_index = len - 2; while (increment_index >= 0 && combination[increment_index] == max_dig) { combination[increment_index] = min_dig; increment_index--; if (increment_index == -1) break; } if (increment_index == -1) break; combination[increment_index]++; combinations.push_back(combination); continue; } combination[len - 1]++; combinations.push_back(combination); } return combinations; } int main() { std::cout << "Enter the number of digits: "; int len; std::cin >> len; std::cout << std::endl; // '0' is the minimum character. '2' is the maximum character. len is the length. std::vector<std::string> combinations = GetCombinations('0', '2', len); for (auto& i : combinations) { std::cout << i << std::endl; } }
72,282,372
72,669,979
ROOT(CERN): How to draw a figure with title in unicode
I'm trying to draw a scatter figure via root-framework(cern). I want to set the titles of the figure in chinese, but I failed. My code for setting the title is TGraphErrors graph(x,y,x_err,y_err); char title[]=u8"圖表標題;x座標;y座標";//chinese title graph.SetTitle(title); But in the figure, all the titles are shown in garbled text as shown in the following picture shown: Also, if I use a unicode string(i.e. wchar_t title[]=L"圖表標題;x座標;y座標"), I will get an error since graph.SetTitle() doesn't support that. But all strings above can show properly in the standard input/output in the terminal. So it seems that the question is not the string contains chars with wrong encoding, but root-framework can't perform them properly. Is there any way to show unicode in the figure? p.s. I run the code by root code.cpp and compile it with g++, but the results are the same. My root's version is 6.26/02 and my OS version is Ubuntu 22.04. p.s. Also, if there is a solution for both way of building the code (compiled by g++ or use as a root marco),it will be great.
There is a dirty hack: instead of providing Chinese characters via TGraph::SetTitle(), you can place TMathText instances wherever you want your characters to appear: gStyle->SetOptTitle(0); // no graph title please, we'll create our own double x1[5]{0., 1., 2., 3., 4.}, y1[5]{1., 2., 3., 4., 5.}; TGraph* graph_one = new TGraph(5, x1, y1); graph_one->Draw("AP"); (new TMathText(2.0, 5.5, "\\hbox{圖表標題}"))->Draw(); // graph title TMathText l_x; // x-axis title l_x.DrawMathText(4.0, 0.2, "\\hbox{x座標}"); TMathText l_y; // y-axis title l_y.SetTextAngle(90); l_y.DrawMathText(-0.3, 4.5, "\\hbox{y座標}");
72,282,741
72,282,947
How to use the member type iterator of std::list with a while loop to make simple changes to a list
I create and modify a simple list. I replace the element at index 1 of the list. How would I semantically accomplish the same thing with a while loop. The tutorial instructor remarked that the current code is quite ugly and a while loop would accomplish the same thing in a much more simple and pretty fashion. I can't figure it out. #include <iostream> #include <list> int main() { std::list<int> numbers; numbers.push_back(1); numbers.push_back(2); numbers.push_back(3); numbers.push_front(0); std::list<int>::iterator it = numbers.begin(); it++; numbers.insert(it, 100); std::cout << "Current element is: " << *it << '\n'; std::list<int>::iterator eraseIt = numbers.begin(); eraseIt++; eraseIt = numbers.erase(eraseIt); std::cout << "erasing at element: " << *eraseIt << '\n'; for (std::list<int>::iterator it = numbers.begin(); it != numbers.end();) { if (*it == 2) { numbers.insert(it, 1234); } if (*it == 1) { it = numbers.erase(it); } else { it++; } } for (std::list<int>::iterator it = numbers.begin(); it != numbers.end(); it++) { std::cout << *it << '\n'; } return 0; }
You can probably do this if iterators are required: // ... std::list<int>::iterator it = ++numbers.begin(); numbers.insert(it, 100); std::cout << "Current element is: " << *it << '\n'; std::list<int>::iterator eraseIt = ++numbers.begin(); eraseIt = numbers.erase(eraseIt); std::cout << "erasing at element: " << *eraseIt << '\n'; it = numbers.begin(); while (it != numbers.end()) { if (*it == 2) { numbers.insert(it, 1234); } if (*it == 1) { it = numbers.erase(it); } else { ++it; } } for (auto& i : numbers) { std::cout << i << std::endl; } // ...
72,282,784
72,283,015
Multiple linking of a static library across different shared objects
Currently I have a setup where there is a 3rd-party supplied shared library, libfoo.so. Internally this links in (without using something like --whole-archive) a static library (specifically Intel performance primitives) ipps.a. This is third party so cannot be modified. I then build a separate shared library, libbar.so, which also statically links ipps.a. These two libraries are eventually (along with some others) linked into a plugin, plugin.so, which is then loaded at runtime. Unfortunately this (sometimes) causes runtime issues (I've seen both hanging in the initialization function called when IPP starts up, as well as corrupted data) due to the fact that there are functions from ipps.a that exist in both libbar.so and libfoo.so. What are the options to deal with this, given that I can't modify the third-party library?
What are the options to deal with this, given that I can't modify the third-party library? Both you and the 3rd party developer have committed a sin -- you are exposing symbols from ipps.a in your own interface (this is the default on UNIX). You should hide these symbols instead, using e.g. a linker version script. Example. If you hide all the ipps.a symbols in libbar.so, then the fact that libfoo.so was also linked with ipps.a should become irrelevant.
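As a concrete sketch of such a version script (the `bar_*` pattern is a hypothetical naming convention; adapt it to whatever libbar actually exports), everything not explicitly listed as global, including the ipps.a symbols, becomes local to the shared object:

```
LIBBAR_1.0 {
    global:
        bar_*;      /* only libbar's own public symbols stay visible */
    local:
        *;          /* everything else, including ipps.a symbols, is hidden */
};
```

It would be applied when linking the shared library, e.g. `g++ -shared -o libbar.so bar.o ipps.a -Wl,--version-script=bar.map`, so the dynamic loader can no longer interpose libfoo.so's copies of the ipps.a functions into libbar.so (or vice versa).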
72,283,549
72,283,643
What is a proper way to iterate a char* string? Getting "corrupt" values
When I was trying to iterate a number (transformed to binary with the bitset library) I came up with this solution #include <iostream> #include <bitset> void iterateBinary(int num) { char *numInBinary = const_cast<char*>(std::bitset<32>(num).to_string().c_str()); for (int i = 31; i >= 0; i--) { char bit = numInBinary[i]; std::cout << i << ": " << bit << std::endl; } } But I found weird characters in the output. I already implemented a solution to my initial idea with for (char bit : numInBinary) and without the c_str() conversion, but I'm still curious about what happened (maybe memory problems?) and what a better way to iterate a char* string would be. Also note that the "corrupt" values in the output are not the same on each execution and only appear at the end. Why? Thanks in advance
The lifetime of the string returned by to_string(), and into which the pointer returned by c_str() points, ends at the end of the full expression. This means after the line char *numInBinary = const_cast<char*>(std::bitset<32>(num).to_string().c_str()); the pointer numInBinary will be dangling and trying to access through it will result in undefined behavior. You need to store the return value from to_string() so that it lives long enough, e.g. auto numInBinary = std::bitset<32>(num).to_string(); There is also no need for a char* pointer, since std::string can just be indexed directly. Also, if you think you need to use const_cast anywhere, rethink it. Except for very specific scenarios where you take care of const correctness by unusual means, const_cast is almost surely the wrong approach and likely to result in undefined behavior down the line.
72,284,547
72,286,626
How to encode and decode vector in google protobuff
I have the following structure in main.cpp typedef struct s1 { uint8 plmn[3]; }tai_s; typedef struct s2 { tai_s tai; }tailist_s; std::vector<tailist_s> tallist; I have the following structure in main.proto message tai_s { google.protobuf.BytesValue plmn[3]; } message tailist_s { tai_s tai; } repeated tailist_s tallist; I'm trying to encode the protobuf like below, for(int i1=0; i1<tailist.size(); i1++) { const tailist_s *tailistproto = proto->add_tailist(); tailistproto->mutable_tai()->mutable_plmn()->set_value(tailist.tai.plmn, 3); } I'm trying to decode the protobuf like below, for(int i1=0; i1<proto->tailist_size(); i1++) { memcpy(tailist.tai.plmn, proto->tailist(i1).tai().plmn().value(), 3); } But it is giving a segmentation fault during memcpy. Please let me know what I'm doing wrong.
for(int i1=0; i1<proto->tailist_size(); i1++) { memcpy(tailist.tai.plmn, proto->tailist(i1).tai().plmn().value(), 3); } You are trying to decode a vector. Where is that vector? Where do you create the tailist you are trying to write to? You aren't adding the tailist to the vector, and you overwrite it in every iteration. This should be something like this: std::vector<tailist_s> tallist(proto->tailist_size()); for(int i1=0; i1<proto->tailist_size(); i1++) { memcpy(&tallist[i1].tai.plmn, proto->tailist(i1).tai().plmn().value(), 3); }
72,284,840
72,285,273
Why is strcat_s causing problems
I'm facing a problem where I get random chars as output instead of the first, mid, and last name combined, which is the purpose of the program. When I run the debugger it says the problem is in strcat_s, but I don't know what the problem with it is. #include <iostream> #include <string.h> class name { private: char first[20], mid[20], last[20]; public: name(); name(const char*, const char*, const char*); ~name(); char* show(); }; name::name() { first[0] = mid[0] = last[0] = '\0'; } name::name(const char* f, const char* m, const char* l) { size_t lenF = strlen(f), lenM = strlen(m), lenL = strlen(l); if (strlen(f) > 20) lenF = 20; else if (strlen(m) > 20) lenM = 20; else if (strlen(l) > 20) lenL = 20; strncpy_s(first, f, lenF); strncpy_s(mid, m, lenM); strncpy_s(last, l, lenL); } name::~name() { std::cout << "distructing..." << std::endl; } char* name::show() { char temp[62]; strcpy_s(temp, first); strcat_s(temp, " "); strcat_s(temp, mid); strcat_s(temp, " "); strcat_s(temp, last); return temp; } int main() { name a("kinan", "fathee", "ayed"); std::cout << a.show() << std::endl; }
Your logic for returning the full name is incorrect. You have a local variable temp and you are returning a pointer to that variable. However, this variable is destroyed once the show() function completes. So, in your main function you have a pointer, but it's pointing to something that is already destroyed. That's why you will see random characters when you print it. Here is the solution to your problem. You need to create a dynamic array, i.e. a heap allocation, so that it does not get destroyed. char *name::show() { int size = strlen(first) + strlen(mid) + strlen(last) + 3; char *temp = new char[size]; strcpy_s(temp, size, first); strcat_s(temp, size, " "); strcat_s(temp, size, mid); strcat_s(temp, size, " "); strcat_s(temp, size, last); return temp; } int main() { name a("kinan", "fathee", "ayed"); char *temp = a.show(); std::cout << temp << std::endl; delete[] temp; } Edit: In your original code, if you print temp at the end of the show function before returning it, you will see that temp contains the full name. Edit: temp is a local variable. Every local variable is destroyed at the end of the function's execution. In the suggested solution, I am creating an array dynamically on the heap. When we dynamically create an array, it's not removed from the heap automatically. So, when I return a pointer to that array and use it in main it still works. Edit: I have added delete[] in the main function to deallocate memory.
72,284,958
72,285,121
C++ Vector content is being deleted?
I've been trying to create a directed graph following https://www.youtube.com/watch?v=V_TulH374hw class Digraph { public: Digraph(); void addNode(Node); void addEdge(Edge); void print(); private: //This is a vector which contains a node source and a vector of node destinations vector< tuple< Node, vector<Node>>> nodes; }; but after I add 2 nodes and an Edge it seems like the vector with the destinations is being emptied void Digraph::addEdge(Edge e){ Node src = e.getSrc(); Node dest = e.getDest(); for(auto node : nodes){ if(get<0>(node).getName() == src.getName()){ get<1>(node).push_back(dest); //cout << "added conection " << get<0>(node).getName() << " -> " << get<1>(node).back().getName() << " now " << get<0>(node).getName() << " has " << get<1>(node).size() << " destinations" <<"\n"; return; } } cout << "node " << src.getName() << " does not exist \n"; return; } void Digraph::print(){ for(auto node : nodes){ get<0>(node).print(); cout << " has " << get<1>(node).size() << " destinations"; cout << "\n"; for(auto destination : get<1>(node)){ cout << "\t->"; destination.print(); cout << "\n"; } } } In main.cpp I add the nodes and the edge graph.addNode(NY); graph.addNode(CHICAGO); graph.addEdge(road); graph.print(); It ends up adding the edge successfully, but when it prints the final result it does not recognize the edge it just added added conection NY -> Chicago now NY has 1 destinations NY has 0 destinations Chicago has 0 destinations When I tried with more Nodes and Edges I realized it never adds more than one Edge. Maybe it has to do with how I defined the class? Are vectors not the right choice?
Your auto in the for loop needs to be a reference (auto&), and after the code I'll tell you why. void Digraph::addEdge(Edge e){ Node src = e.getSrc(); Node dest = e.getDest(); // use references here for(auto& node : nodes){ if(get<0>(node).getName() == src.getName()){ // now this modifies the original, not a copy get<1>(node).push_back(dest); //cout << "added conection " << get<0>(node).getName() << " -> " << get<1>(node).back().getName() << " now " << get<0>(node).getName() << " has " << get<1>(node).size() << " destinations" <<"\n"; return; } } cout << "node " << src.getName() << " does not exist \n"; return; } The entire point of this function is to change the nodes in some way. If you iterate over them with for (auto node : nodes) you are getting a copy of each node. It's the same as Node node : nodes, the auto doesn't make a difference. You then call push_back on this copy, which gets destroyed as soon as the next loop iteration comes up. Effectively, your original stays entirely untouched since you only take copies of its member nodes, not references. If you use a reference here like I did, you take a reference to each node in the nodes vector, and modify the reference, which modifies the original.
72,284,984
72,285,907
Draw function with glDrawArrays() needs to be called twice for anything to show
This is a strange problem. I have a function: void drawLines(std::vector<GLfloat> lines) { glBindVertexArray(VAO2); //positions glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)0); glEnableVertexAttribArray(0); //colors glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat))); glEnableVertexAttribArray(1); glBindBuffer(GL_ARRAY_BUFFER, VBO2); glBufferData(GL_ARRAY_BUFFER, sizeof(lines[0]) * lines.size(), &lines[0], GL_STATIC_DRAW); glUseProgram(pShaderProgram); glDrawArrays(GL_TRIANGLES, 0, lines.size() / 6); } It draws lines of a specified thickness (this is built into the data of lines which contains three floats for position and three floats for color, per point with over 200 points to mark line endings). Calling the function once does not yield any result, even after following with SwapBuffers(HDC) and glFlush(). However, when I call the function twice, then everything shows. I suspect there is some nuance that I am missing with pushing the buffers through the render pipeline, that might be activated by binding the buffer multiple times. I am unsure. Any thoughts?
glVertexAttribPointer expects a buffer bound to the GL_ARRAY_BUFFER binding point in order to establish an association between the generic vertex attribute and the buffer object to source the data from. However, you bind the buffer with glBindBuffer(GL_ARRAY_BUFFER, VBO2) after the call to glVertexAttribPointer. This is why you need two calls, because the first call will not associate any buffer object to the generic vertex attributes, since likely no buffer was bound before. The way you are using VAOs is dubious, however, since everything you do in this method/function will already be saved as VAO state. So, you neither need to reestablish the association between generic vertex attributes and buffer objects every time, nor do you need to enable the generic vertex attributes every time you draw. All of this can be done once when doing the vertex specification for the VAO.
72,285,375
72,286,042
Qt: Can emitting signals cause an Stack Overflow (or memory leak)? What happens if their connected slot/thread is blocked?
When the target thread which is going to capture the signal is blocked, what will happen to the signal and the memory it occupies? Do the signals go inside a queue? Does the queue overflow and we lose some signals? Do we get an stack overflow?
In general it may happen that signals are produced faster than they are consumed. This can happen only if you use queued connections. This happens typically in multithreaded code (which uses queued connections by default) or if you set your connection with the flag Qt::QueuedConnection. If your connection is not queued, then this situation does not happen because the signal is processed by the slot synchronously, immediately after it is emitted. So unprocessed signals do not wait in the queue. So when you have a queued connection and generate and emit signals faster than the consuming event loop can process them, they of course are enqueued, they occupy memory (heap) and if running long enough, the memory can eventually be exhausted (you would probably observe RAM swapping to disk, slowing down your system and making it unusable). As you were asking about memory leaks - that would probably not happen. But memory leaks are your least concern here. So you must avoid this situation of generating signals too fast. There are many options for how to do it. For example you can have a timer in the emitting party which does not allow emitting a signal if the latest signal was emitted less than, say, 100 ms ago. (I am using this in my progressbars in my app.) Another option is to implement two-way communication, where the emitter will send a signal and the receiver will process it and will emit back another signal as a response confirming that the processing was done, and this signal will be received by the emitter of the original signal, informing it that now it is safe to emit another signal. Yet another option is to not use signals and slots and call methods directly, but of course you need to have a proper synchronization mechanism in place using atomics or locking mutexes. Note that in this case the signals will not wait in a queue but threads can perform badly because they block each other too often. So it is up to you which method you choose.
But you must definitely avoid the situation when you are emitting signals faster than you are able to process them in a slot connected with queued connection.
72,285,987
72,289,587
fstream::write and fstream::read changing both reading and writing pointers
I am learning about how to write records on file and read it . I created a class called student , the class has function like enterStudent , showStudent , printInsideFile (student info) ,...... . when I try to write student info into the file , it works . when I try to read all student info from the file , it works when I try to do both , something unexpected happens (nothing show up) i thought that the problem with file.flush() , so when i deleted it the output was unreadable text i could close the file and open it again but i think that is silly the code is: #include <iostream> #include <string> #include <fstream> using namespace std; class student { private: char id[5], name[20], age[5], address[50], gender[5]; public: void enterStudent() { cout << "Enter student id : "; cin >> id; cout << "\nEnter student name : "; cin >> name; cout << "\nEnter student age : "; cin >> age; cout << "\nEnter student address : "; cin >> address; cout << "\nEnter student gender : "; cin >> gender; } void showStudent() { cout << "#########student data##########\n"; cout << "student id : "<< id; cout << "\nstudent name : " << name; cout << "\nstudent age : " << age; cout << "\nstudent address : " << address; cout << "\nstudent gender : " << gender<<endl; } void printInsideFile(fstream &file) { file.write(id,sizeof(id)); file.write(name, sizeof(name)); file.write(age, sizeof(age)); file.write(address, sizeof(address)); file.write(gender, sizeof(gender)); file.flush(); } bool readFromFile(fstream &file) { if (file.eof()) return 0; file.read(id, sizeof(id)); file.read(name, sizeof(name)); file.read(age, sizeof(age)); file.read(address, sizeof(address)); file.read(gender, sizeof(gender)); if (file.eof()) return 0; return 1; } void showAllFromFile(fstream &file) { while (this->readFromFile(file)) this->showStudent(); } }; int main() { student s; fstream file; file.open("a.txt", ios::in | ios::out | ios::app); if (!file) goto k270; s.enterStudent(); s.printInsideFile(file); //when i read 
one student , it works okay //s.readFromFile(file); //s.showStudent(); //when i try to read multiple students , it doesn't work at all s.showAllFromFile(file); file.close(); k270: system("pause"); return 0; }
The problem was not the ios::app. It turns out that file.write() and file.read() each move both the reading pointer and the writing pointer: for a file stream they share a single underlying file position, so even file << "text"; or file >> arrayOfChars; advances both. I searched for an explanation and found the implementations of ostream::write and istream::read, but they were too advanced for me to follow, so if anybody can check them and explain why, please do. Note that when the reading pointer points at the end of the file, a read will produce nothing, so you have to reposition it (e.g. with seekg()) before reading back what you just wrote.
72,286,056
72,287,425
Template function deduction fail on std::conditional argument
Please, before marking this as a duplicate of This question read the entirety of the post This piece of code fails to compile, with a template deduction error: #include <iostream> #include <type_traits> template<typename T = float, int N> class MyClass { public: template<typename DATA_TYPE> using MyType = std::conditional_t<(N>0), DATA_TYPE, double>; MyType<T> Var; void Foo() { Bar(Var); } template<typename TYPE> void Bar(MyType<TYPE> Input) { std::cout << typeid(Input).name() << std::endl; } }; int main() { MyClass<float, 1> c; c.Foo(); return 0; } I understand the point that was made in the question i linked above, which is that "the condition which allows to choose the type to be deduced depends on the type itself", however, why would the compiler fail in the specific case i provided as the condition here seems to be fully independent from the type, or is there something i'm missing? I would be more than happy if someone could refer to a section of the c++ standard that would allow me to fully understand this behaviour.
As the linked question, TYPE is non deducible. MyType<TYPE> is actually XXX<TYPE>::type. You have several alternatives, from your code, I would say one of Bar no longer template: template<typename T = float, int N> class MyClass { public: template<typename DATA_TYPE> using MyType = std::conditional_t<(N>0), DATA_TYPE, double>; MyType<T> Var; void Foo() { Bar(Var); } void Bar(MyType<T> Input) { std::cout << typeid(Input).name() << std::endl; } }; requires (or SFINAE/specialization for pre-c++20): template<typename T = float, int N> class MyClass { public: template<typename DATA_TYPE> using MyType = std::conditional_t<(N>0), DATA_TYPE, double>; MyType<T> Var; void Foo() { Bar(Var); } template<typename TYPE> void Bar(TYPE Input) requires(N > 0) { std::cout << typeid(Input).name() << std::endl; } void Bar(double Input) requires(N <= 0) { std::cout << typeid(Input).name() << std::endl; } };
72,286,758
72,292,313
Is it safe to disable threads on boost::asio in a multi-threaded program?
I read in this SO answer that there are locks around several parts of asio's internals. In addition I'm aware that asio is designed to allow multiple threads to service a single io_context. However, if I only have a single thread servicing a single io_context, but I want to have more than 1 io_context in my application, is it safe to disable threads (per BOOST_ASIO_DISABLE_THREADS) That is: I have one io_context and one thread which has entered its io_context::run() loop, and it is servicing a number of sockets etc. All interaction with these sockets are done within the context of that thread. I then also have another thread, and another io_context, and that thread services that io_context and its sockets etc. Inter-thread communication is achieved using a custom thread-safe queue and an eventfd wrapped with an asio::posix::stream_descriptor which is written to by the initiating thread, and read from the receiving thread which then pops items off the thread-safe queue. So at no point will there be user code which attempts to call asio functions from a thread which isn't associated with the io_context servicing its asio objects. With the above use-case in mind, is it safe to disable threads in asio?
It'll depend. As far as I know it ought to be fine. See below for caveats/areas of attention. Also, you might want to take a step back and think about the objectives. If you're trying to optimize areas containing async IO, there may be quick wins that don't require such drastic measures. That is not to say that there are certainly situations where I imagine BOOST_ASIO_DISABLE_THREADS will help squeeze just that little extra bit of performance out. Impact What BOOST_ASIO_DISABLE_THREADS does is replace selected mutexes/events with null implementations disable some internal thread support (boost::asio::detail::thread throws on construction) removes atomics (atomic_count becomes non-atomic) make globals behave as simple statics (applies to system_context/system_executor) disables TLS support System executor It's worth noting that system_executor is the default fallback when querying for associated handler executors. The library implementation specifies that async initiations will override that default with the executor of any IO object involved (e.g. the one bound to your socket or timer). However, you have to scrutinize your own use and that of third-party code to make sure you don't accidentally rely on fallback. Update: turns out system_executor internally spawns a thread_group which uses detail::thread - correctly erroring out when used IO Services Asio is extensible. Some services may elect to run internal threads as an implementation detail. docs: The implementation of this library for a particular platform may make use of one or more internal threads to emulate asynchronicity. As far as possible, these threads must be invisible to the library user. [...] I'd trust the library implementation to use detail::thread - causing a runtime error if that were to be the case. However, again, when using third-party code/user services you'll have to make sure that they don't break your assumptions. 
Also, specific operations will not work without the thread support, like: Live On Coliru #define BOOST_ASIO_DISABLE_THREADS #include <boost/asio.hpp> #include <iostream> int main() { boost::asio::io_context ioc; boost::asio::ip::tcp::resolver r{ioc}; std::cout << r.resolve("127.0.0.1", "80")->endpoint() << std::endl; // fine // throws "thread: not supported": r.async_resolve("127.0.0.1", "80", [](auto...) {}); } Prints 127.0.0.1:80 terminate called after throwing an instance of 'boost::wrapexcept<boost::system::system_error>' what(): thread: Operation not supported [system:95] bash: line 7: 25771 Aborted (core dumped) ./a.out
72,287,201
72,287,415
How to overload + operator in a Template array class to add every element with the same Index together
I somewhat successfully overloaded the + operator to add 2 arrays of the same size together. This is my current code: //.h #pragma once #include <iostream> template<typename T, size_t S> class MyArray { public: T dataArray[S]; T& operator[](size_t arrayIndex); const T& operator[](size_t arrayIndex) const; MyArray<T, S> operator+(const MyArray& secondSummand) const; constexpr size_t getSize() const; const T at(unsigned int arrayIndex) const; void place(int arrayIndex, T arrayValue); T* acces();//pointer to first array element }; template<typename T, size_t S> inline constexpr size_t MyArray<T, S>::getSize() const { return S; } template<typename T, size_t S> inline T& MyArray<T, S>::operator[](size_t arrayIndex) { return dataArray[arrayIndex]; } template<typename T, size_t S> inline const T& MyArray<T, S>::operator[](size_t arrayIndex) const { return dataArray[arrayIndex]; } template<typename T, size_t S> inline MyArray<T, S> MyArray<T, S>::operator+(const MyArray& secondSummand) const { MyArray returnArray{}; for (unsigned int i = 0; i < S; i++) { returnArray[i] = this->at(i) + secondSummand[i]; } return returnArray; } template<typename T, size_t S> inline const T MyArray<T, S>::at(unsigned int arrayIndex) const { return dataArray[arrayIndex]; } template<typename T, size_t S> inline void MyArray<T, S>::place(int arrayIndex, T arrayValue) { dataArray[arrayIndex] = arrayValue; } template<typename T, size_t S> inline T* MyArray<T, S>::acces() { return dataArray; } //main.cpp #include <iostream> #include <random> #include "MyArray.h" int main() { { srand((unsigned)time(0)); //Working fine MyArray<int, 5> firstArray = {10, 5, 3, 2, 8}; MyArray<int, 5> secondArray = {5, 3, 5, 6, 2}; std::cout << "The first Array numbers are:\n"; for (unsigned int i = 0; i < firstArray.getSize(); i++) { std::cout << firstArray[i] << " "; } std::cout << "\n\nThe second Array numbers are:\n"; for (unsigned int i = 0; i < secondArray.getSize(); i++) { std::cout << secondArray[i] << " "; } 
MyArray<int, firstArray.getSize()> tempArray = firstArray + secondArray; std::cout << "\n\nAdd every position of 2 Arrays together:\n"; for (unsigned int i = 0; i < tempArray.getSize(); i++) { std::cout << firstArray[i] << " + " << secondArray[i] << " = " << tempArray[i] << "\n"; } } //Not working MyArray<int, 5> firstArray = {10, 5, 3, 2, 8}; MyArray<int, 4> secondArray = {5, 3, 5, 6}; std::cout << "\n\nThe first Array numbers are:\n"; for (unsigned int i = 0; i < firstArray.getSize(); i++) { std::cout << firstArray[i] << " "; } std::cout << "\n\nThe second Array numbers are:\n"; for (unsigned int i = 0; i < secondArray.getSize(); i++) { std::cout << secondArray[i] << " "; } } So my overloaded operator works fine for objects (arrays) with the same size. If I try to add 2 objects with different sizes I get this error that the type is not the same: https://i.stack.imgur.com/7cZG4.png If my understanding is correct, the return type of the + operator is a MyArray object that has the same template arguments as the summand on the left side of +. In my second example ("Not working") this should be T = int, S = 5, and the right side of the operator would be a const reference to my array with T = int, S = 4. I don't understand why this is not working, because I did the same without templates and it worked fine. Can someone explain why I can't add 2 arrays with different sizes together with my code, or what I can do so that it accepts objects with different sizes?
In the declaration of the operator function: MyArray<T, S> operator+(const MyArray& secondSummand) const; When you use plain MyArray it's implied to be MyArray<T, S>. The template arguments T and S are the same for both "this" class and the class of the function argument. If you want to use different sizes for your argument then you need to make the operator function a template as well, to provide for the different size of the argument: template<size_t R> MyArray<T, S> operator+(const MyArray<T, R>& secondSummand) const; Note that I kept the size of the resulting array object the same size as "this" array object, as per the OP's comment. Be careful to not go out of bounds when implementing the operator: you should only loop to the minimum of the two objects' sizes, and if S > R then the rest of the result array should probably be zero-initialized.
72,289,340
72,289,826
'const static' STL container inside reentrant function
Let's say that this is a function that serves several threads. They read kHKeys, which is not protected, since Read-Read from the same memory address is not a data race. But, on the 1st Read, kHKeys is constructed. It is possible that during construction, another thread enters the reentrantFunction function. Is it necessary to construct kHKeys before unleashing the threads that simultaneously call reentrantFunction? Example: int reentrantFunction(const std::wstring& key_parent_c) { // const 'static' means that kHKeys is constructed only once — // the 1st time the function is run — and put into a shared memory space. // Otherwise, kHKeys is local and it must be constructed each time the function is called. const static std::map<std::wstring, HKEY> kHKeys{ { L"HKEY_CURRENT_USER", HKEY_CURRENT_USER } , { L"HKEY_LOCAL_MACHINE", HKEY_LOCAL_MACHINE } , { L"HKEY_CLASSES_ROOT", HKEY_CLASSES_ROOT } , { L"HKEY_CURRENT_CONFIG", HKEY_CURRENT_CONFIG } , { L"HKEY_CURRENT_USER_LOCAL_SETTINGS", HKEY_CURRENT_USER_LOCAL_SETTINGS } , { L"HKEY_PERFORMANCE_DATA", HKEY_PERFORMANCE_DATA } , { L"HKEY_PERFORMANCE_NLSTEXT", HKEY_PERFORMANCE_NLSTEXT } , { L"HKEY_PERFORMANCE_TEXT", HKEY_PERFORMANCE_TEXT } , { L"HKEY_USERS", HKEY_USERS } }; // Use kHKeys
It is not a must to construct kHKeys before the threads start to use reentrantFunction. As you can see here: static local variables, since C++11 it is guaranteed by the standard that a static local variable will be initialized only once. There is a specific note regarding locks that can be applied to ensure single initialization in a multi-threaded environment: If multiple threads attempt to initialize the same static local variable concurrently, the initialization occurs exactly once (similar behavior can be obtained for arbitrary functions with std::call_once). Note: usual implementations of this feature use variants of the double-checked locking pattern, which reduces runtime overhead for already-initialized local statics to a single non-atomic boolean comparison. However - if you use a static variable that requires a relatively long initialization (not the case in your example), and your threads are required to perform according to some realtime requirements (with minimum delay), you can consider doing it in a separate initialization phase, before the threads start to run.
72,289,530
72,291,507
Moving through list elements
I need to move through list elements and add them to the set. However, while moving through list I need to skip elements that are already added to set. First element of list is added to set before moving through list. For example: {"Damir", "Ana", "Muhamed", "Marko", "Ivan","Mirsad", "Nikolina", "Alen", "Jasmina", "Merima"} Enter shift: 5 Mirsad Enter shift: 6 Muhammed Enter shift: 7 Ana EXPLANATION: #include <iostream> #include <vector> #include <string> #include <list> #include <set> void Moving_Trough_List(std::vector<std::string>names) { int n = names.size(), shift = 0; std::list<std::string>lista; for (int i = 0; i < n; i++) { lista.push_back(names[i]); } std::set<std::string>team; auto it = lista.begin(); auto temp = it; int index_list = 0; while (shift != -1) { std::cout << "Enter shift: "; std::cin >> shift; std::cin.ignore(100, '\n'); for (int i = 0; i < shift; i++) { index_list++; } if (index_list > n - 1) index_list = index_list - n + 1; while (it != temp) it--; for (int i = 0; i < index_list; i++) it++; std::cout << *it << "\n"; team.insert(*it); } std::cout << std::endl; for (auto i : team) std::cout << i << " "; } int main () { Moving_Trough_List({"Damir", "Ana", "Muhamed", "Marko", "Ivan", "Mirsad", "Nikolina", "Alen", "Jasmina", "Merima" }); return 0; } MY OUTPUT: Enter shift: 5 Mirsad Enter shift: 6 Muhammed Enter shift: 7 Merima So it worked correctly for shift 5 and 6, but after that it didn't skip elements already added to set. Could you help me to modify this to skip already added elements to set?
Here's a way to use the best data structure for this without losing the list or mutating it: Linked list of indexes. Linked lists react well to having nodes deleted. So, as you shift, traverse the index list, use the number stored in there to index into the name list. Add that name to the set and delete the node from the index list. Repeat as needed. You will need to construct the index list before taking input. It should end up just as long as the name list, each node should index to a name in the name list, and as you delete nodes names will become inaccessible. Please note: I haven't read the "full task setting" you linked that Marcus claims contradicts the question posted here.
72,289,873
72,289,912
Type defined inside class not recognized as return type of method in implementation
I am surprised that a type defined with using inside a class is not recognized when I use it as the return type of a method of that same class. In this example Pair is recognized fine in the class definition but not in the implementation of createPair(): #include <utility> class A { public: using Pair = std::pair<int, int>; Pair createPair(); }; Pair A::createPair() { return {0, 0}; } The error shown is: error: ‘Pair’ does not name a type | Pair A::createPair() | ^~~~ Why? Is there a way to solve this?
The problem is that to use the alias Pair we have to be in the scope of the class A which we can do by qualifying Pair with A using the scope resolution operator :: as shown below: //--vvv----------------------->note the A:: part A::Pair A::createPair() { return {0, 0}; } Working demo
72,290,269
72,290,397
How do I correctly destruct a derived object that was constructed using placement new
Say we have a C++ program with this sort of class inheritance: class A { public: virtual ~A() {/* ... */} }; class B : public A { public: virtual ~B() {/* ... */} }; class C : public A { public: virtual ~C() {/* ... */} }; And furthermore, there are specialized memory constraints which requires that B and C must always be allocated in a special region of RAM (e.g. a reserved region of physical SRAM that guarantees faster response times than normal SDRAM) and so we must never allocate instances of B or C from the general heap. So we might have something like: A * ptr; if(condition) { ptr = specialized_allocator(sizeof(B)); new(ptr) B; } else { ptr = specialized_allocator(sizeof(C)); new(ptr) C; } /* Do something, which persists beyond the scope of the function where allocation occurred... */ ptr->~A(); specialized_deallocator(ptr); In this scenario, will the complete chain of derived class destructors be invoked correctly, or will it end up only invoking the top-level A destructor?
Run this and it may help a little: #include <iostream> class A { public: virtual ~A() { std::cout << "A\n"; } }; class B : public A { public: virtual ~B() { std::cout << "B\n"; } }; class C : public A { public: virtual ~C() { std::cout << "C\n"; } }; int main() { A* ptr = new C(); // or A* ptr = new B() // ... ptr->~A(); }
72,290,542
72,290,792
Why does gcc use the size-aware delete operator by default when optimizing?
If I define my own new and delete operators as shown below: #include <cstdio> #include <cstdlib> #include <new> void* operator new (size_t count) { printf("Calling custom new!\n"); return malloc(count); } void operator delete(void *p) noexcept { printf("Called size unaware delete!\n"); free(p); } int main() { int *a = new int{1}; delete a; } and compile using gcc version 12.1, with -O2 -Wall options specified, I get a mismatched-new-delete warning. Looking at the compiled output, I see that the compiler uses the size-aware delete operator (signature void operator delete(void *p, std::size_t sz);) instead of the custom one I defined (see compiler explorer output for details). Other compilers such as clang use the delete operator I defined, and thus do not result in that mismatched operator warning. Why does gcc use the size-aware version when optimizing?
References are to the post-C++20 draft (n4861). I am also assuming C++14 or later, which introduced size-aware deallocation functions. For your particular example, the delete expression is required to call the size-aware operator delete, since the type to be destroyed is complete. So GCC is behaving correctly. (see [expr.delete]/10.5) It is however not a problem that the size-aware one is chosen, because the default behavior of the global operator delete(void*, size_t) overload, if not replaced, is to just call the corresponding operator delete(void*), so your custom implementation will still be used in the end. (see [new.delete.single]/16) There is however a recommendation in [new.delete.single]/11 that a program which replaces the operator delete version without size_t parameter should also replace the one with the size_t parameter. A note clarifies that although currently the standard library supplied default behavior of the size-aware versions call the custom non-size-aware implementation anyway, that may change in future standard revisions. Also, the compiler is allowed to elide both the call to operator new and operator delete in the given example to either provide storage in a different manner or more likely to just elide the whole body of main which has no other observable side effects. So it is possible that no operator delete is called at all. So, to future-proof the code and avoid linter warnings add a replacement for the size-aware global overload as well void operator delete(void *p, std::size_t) noexcept { ::operator delete(p); } Also note that the standard requires the replacement of this overload to behave in such a way that it could always be replaced by a call to the size-unaware version without affecting memory allocation. (see [new.delete.single]/15) Although required since C++14, Clang doesn't seem to enable size-aware deallocation functions yet by default. You need to add the -fsized-deallocation flag to enable them. 
Some discussion of a patch enabling it by default seems to be going on here. Also note that your implementation of operator new is broken. The throwing version of operator new is not allowed to return a null pointer. So you must check the return value of malloc and throw std::bad_alloc if it is null.
72,290,587
72,290,885
Get path of file that is called
Suppose I've got the following folder structure: /dir/dir2/dir3/program.exe I want to obtain program.exe's file path as it is called. E.g. // program.exe #include <iostream> #include <filesystem> int main(int argc, char** argv) { std::cout << std::filesystem::current_path() << "\n"; } But this program.exe works differently if called from different locations: being in dir3 user@command_line:/dir/dir2/dir3$ ./program.exe output: "/dir/dir2/dir3" being in dir2 user@command_line:/dir/dir2$ ./dir3/program.exe output: "/dir/dir2" being in dir user@command_line:/dir$ ./dir2/dir3/program.exe output: "/dir" I wish I could obtain the exact path of program.exe no matter what location it is called from. Is it possible? Thank you for your time.
You are looking for the location of the running executable, but current_path() gives you the process's current working directory, which depends on where the program was launched from. To obtain the executable path instead: On Windows you can use the GetModuleFileNameA Win API function. For example: char exe_name[ MAX_PATH+1 ] = {'\0'}; ::GetModuleFileNameA(nullptr,exe_name,MAX_PATH); On Linux you can obtain the executable path with the readlink system call on the /proc/<pid>/exe symlink. For example: char exe_name[ PATH_MAX+1 ] = {'\0'}; char query[64] = {'\0'}; std::snprintf(query, 64, "/proc/%u/exe", ::getpid() ); ::readlink(query, exe_name, PATH_MAX);
72,291,244
72,293,774
I am having trouble cloning a linked list, what is the problem in my code?
Structure of Node: class Node{ public: int data; Node *next; Node *arb; Node(int value){ data=value; next=NULL; arb=NULL; } }; Now, I wrote the following code, but I am getting a segmentation fault runtime error. I can't find out what is causing this error. Node *copyList(Node *head) { Node* ptr=head; Node *temp; Node *clonehead; Node *clonetail; while(ptr!=NULL){ temp=ptr->next; Node* newnode=new Node(ptr->data); if(ptr==head){ clonehead=newnode; clonetail=clonehead; } else{ clonetail->next=newnode; clonetail=newnode; } clonetail->arb=ptr->arb; ptr->next=clonetail; ptr=temp; } ptr=clonehead; while(ptr!=NULL){ temp=ptr->arb; ptr->arb=temp->next; ptr=ptr->next; } return clonehead; } What is wrong with my code? Link to the problem: Clone a linked list with next and random pointer
There are several mistakes in your code: clonetail->arb=ptr->arb; The instructions you provided are very clear that the next and arb pointers in the cloned list need to point at nodes in the cloned list, not at nodes in the original list. ptr->next=clonetail; You are modifying the next pointer of the nodes in the original list, which you should not be doing at all. This code makes no sense at all: while(ptr!=NULL){ temp=ptr->arb; ptr->arb=temp->next; ptr=ptr->next; } You are iterating through the cloned list, and for each arb (which is pointing at a node in the original list, not in the cloned list), you are updating it to point at the referred node's next node rather than at the referred node itself. You are not taking into account the possibility that arb may be NULL, or the fact that the cloned arbs are pointing at nodes in the wrong list to begin with. Since each node's arb is pointing at a random node in the same list, you can't clone the arbs in the same loop that is cloning the nodes, as any given arb may be referring to a later node that hasn't been cloned yet. To clone the arbs, you would have to first finish cloning the nodes from the original list, and then iterate through the cloned list updating its arbs to point at the correct nodes within the cloned list, not the original list. I believe this is what you are attempting to do, but you are not doing it correctly. 
With that said, try something more like this: struct Node{ int data; Node *next; Node *arb; Node(int value){ data = value; next = NULL; arb = NULL; } }; Node* resolveNode(Node *head, Node *clone, Node *target) { while (head && clone){ if (head == target) return clone; head = head->next; clone = clone->next; } return NULL; } Node* copyList(Node *head) { Node *clonehead = NULL; Node *ptr, **newnode = &clonehead; ptr = head; while (ptr != NULL){ *newnode = new Node(ptr->data); newnode = &((*newnode)->next); ptr = ptr->next; } Node *cloneptr = clonehead; ptr = head; while (ptr != NULL){ cloneptr->arb = resolveNode(head, clonehead, ptr->arb); cloneptr = cloneptr->next; ptr = ptr->next; } return clonehead; } Alternatively, if you can spare some extra memory, you can avoid the repeated list iterations in the 2nd loop by using a std::(unordered_)map to keep track of which nodes in the original list correspond to which nodes in the cloned list, eg: #include <map> struct Node{ int data; Node *next; Node *arb; Node(int value){ data = value; next = NULL; arb = NULL; } }; Node* copyList(Node *head) { Node *clonehead = NULL; Node *ptr, **newnode = &clonehead; std::map<Node*, Node*> node_lookup; ptr = head; while (ptr != NULL){ *newnode = new Node(ptr->data); node_lookup.insert(std::make_pair(ptr, *newnode)); newnode = &((*newnode)->next); ptr = ptr->next; } Node *cloneptr = clonehead; ptr = head; while (ptr != NULL){ cloneptr->arb = node_lookup[ptr->arb]; cloneptr = cloneptr->next; ptr = ptr->next; } return clonehead; }
72,291,344
72,291,553
Perform Memory Allocation To Store Data Obtained In Interrupt Handler
I am writing a program that uses PortAudio to get audio input from the computer into my program. PortAudio, in their Writing a Callback tutorial, says that the callback is triggered as an Interrupt Handler, and explains that code written in the callback needs to not do: memory allocation/deallocation, I/O (including file I/O as well as console I/O, such as printf()), context switching (such as exec() or yield()), mutex operations, or anything else that might rely on the OS The problem I'm having is figuring out how would I go about processing the audio coming from the callback without being able to use malloc. My current (and working) callback looks like this. int Audio::paCallback( const void *inputBuffer, void *outputBuffer, unsigned long framesPerBuffer, const PaStreamCallbackTimeInfo* timeInfo, PaStreamCallbackFlags statusFlags, void *userData ) { // Cast data passed through stream to our structure. auto *in = (uint16_t *) inputBuffer; // Get number of audio channels, for instance stereo would be two int numberOfChannels = Pa_GetDeviceInfo(Pa_GetDefaultOutputDevice())->maxInputChannels; for (int i = 0; i < numberOfChannels; i++) { auto storedStream = (uint16_t *) malloc(sizeof(uint16_t) * (framesPerBuffer + 2)); storedStream[0] = uint16_t(i); storedStream[1] = uint16_t(framesPerBuffer); for (int j = 0; j < storedStream[1]; j++) { storedStream[j + 2] = in[numberOfChannels*j+i]; } audioQueue->push(&storedStream); } return 0; } I know I shouldn't be using malloc, so how would I go about fixing this? While looking for examples of code, I found that Audacity, a free audio editing software, uses PortAudio. I have looked at Audacity's PortAudio callback (line 2391 in AudioIO.cpp), and all they do is call two functions. In the second function, they call alloca, which allocates memory. Is it ok to call functions from an interrupt handler that then do memory allocation, or would the functions also be executed within the interrupt context?
Generally to avoid dynamically-allocated memory, we'll employ the use of various 'static containers.' Things like a circular buffer of pre-allocated and reserved data, or a blit buffer (two static buffers, where new data is added to one buffer, while previously-added data is processed from a second buffer. Periodically their roles are 'swapped' when all the data in the 'processing' buffer is empty). In both cases, it helps to know the 'maximum' amount of data you're likely to need, and pre-allocate that much data. Is it ok to call functions from an interrupt handler that then do memory allocation, or would the functions also be executed within the interrupt context? This really depends on your target platform. Sometimes avoiding dynamic memory allocation is very important, and sometimes on modern embedded platforms, the concern is quite a bit overblown.
72,291,579
72,292,335
overflow instead of saturation on 16bit add AVX2
I want to add 2 unsigned vectors using AVX2 __m256i i1 = _mm256_loadu_si256((__m256i *) si1); __m256i i2 = _mm256_loadu_si256((__m256i *) si2); __m256i result = _mm256_adds_epu16(i2, i1); however I need to have overflow instead of saturation that _mm256_adds_epu16 does to be identical with the non-vectorized code, is there any solution for that?
Use normal binary wrapping _mm256_add_epi16 instead of saturating adds. Two's complement and unsigned addition/subtraction are the same binary operation, that's one of the reasons modern computers use two's complement. As the asm manual entry for vpaddw mentions, the instructions can be used on signed or unsigned integers. (The intrinsics guide entry doesn't mention signedness at all, so is less helpful at clearing up this confusion.) Compares like _mm_cmpgt_epi32 are sensitive to signedness, but math operations (and cmpeq) aren't. The intrinsics names Intel chose might look like they're for signed integers specifically, but they always use epi or si for things that work equally on signed and unsigned elements. But no, epu implies a specifically unsigned thing, while epi can be specifically signed operations or can be things that work equally on signed or unsigned. Or things where signedness is irrelevant. For example, _mm_and_si128 is pure bitwise. _mm_srli_epi32 is a logical right shift, shifting in zeros, like an unsigned C shift. Not copies of the sign bit, that's _mm_srai_epi32 (shift right arithmetic by immediate). Shuffles like _mm_shuffle_epi32 just move data around in chunks. Non-widening multiplication like _mm_mullo_epi16 and _mm_mullo_epi32 are also the same for signed or unsigned. Only the high-half _mm_mulhi_epu16 or widening multiplies _mm_mul_epu32 have unsigned forms as counterparts to their specifically signed epi16/32 forms. That's also why 386 only added a scalar integer imul ecx, esi form, not also a mul ecx, esi, because only the FLAGS setting would differ, not the integer result. And SIMD operations don't even have FLAGS outputs. The intrinsics guide unhelpfully describes _mm_mullo_epi16 as sign-extending and producing a 32-bit product, then truncating to the low 32-bit. The asm manual for pmullw also describes it as signed that way, it seems talking about it as the companion to signed pmulhw. 
(And has some bugs, like describing the AVX1 VPMULLW xmm1, xmm2, xmm3/m128 form as multiplying 32-bit dword elements, probably a copy/paste error from pmulld) And sometimes Intel's naming scheme is limited, like _mm_maddubs_epi16 is a u8 x i8 => 16-bit widening multiply, adding pairs horizontally (with signed saturation). I usually have to look up the intrinsic for pmaddubsw to remind myself that they named it after the output element width, not the inputs. The inputs have different signedness so if they have to pick one side, I guess it makes sense to name it for the output, with the signed saturation that can happen with some inputs, like for pmaddwd.
72,291,750
72,291,972
How to calculate time taken to execute C++ program excluding time taken to user input?
I'm using the below code to calculate the execution time. It works well when I take input from ./a.out < input.txt. But when I type my input manually, that time is also included. Is there a way to exclude the time the user takes to enter input? auto begin = chrono::high_resolution_clock::now(); // my code here has cin for input auto end = chrono::high_resolution_clock::now(); cout << chrono::duration_cast<chrono::duration<double>>(end - begin).count() << " seconds"; Edit: I know that we can measure the time before and after cin and then subtract it. Is there any other way?
A straightforward approach would be to "Freeze time" when user input is required, so instead of creating the end variable after the input lines, create it before the input lines and restart time calculation again after the input: double total = 0; auto begin = chrono::high_resolution_clock::now(); // code that needs time calculation auto end = chrono::high_resolution_clock::now(); total += chrono::duration_cast<chrono::duration<double>>(end - begin).count(); // your code here that has cin for input begin = chrono::high_resolution_clock::now(); // code that needs time calculation end = chrono::high_resolution_clock::now(); total += chrono::duration_cast<chrono::duration<double>>(end - begin).count(); cout << total << " seconds";
72,291,832
72,318,543
How do you upload a file using Emscripten in C++?
I'm trying to upload a file to a server. I have been successful in downloading data using Emscripten's Fetch API with a GET request, but so far have been unsuccessful with POST requests. Here is my current implementation: (the file is being opened and read as expected, but the server is not receiving the file) void uploadSucceeded(emscripten_fetch_t* fetch) { printf("Successful upload of %llu bytes to %s.\n", fetch->numBytes, fetch->url); // The data is now available at fetch->data[0] through fetch->data[fetch->numBytes-1]; emscripten_fetch_close(fetch); // Free data associated with the fetch. } void uploadFailed(emscripten_fetch_t* fetch) { printf("Failed upload to %s - HTTP failure status code: %d.\n", fetch->url, fetch->status); emscripten_fetch_close(fetch); // Also free data on failure. } bool UploadFile(const std::string& url, const std::string& file_name) { emscripten_fetch_attr_t attr; emscripten_fetch_attr_init(&attr); strcpy(attr.requestMethod, "POST"); attr.attributes = EMSCRIPTEN_FETCH_LOAD_TO_MEMORY; attr.onsuccess = uploadSucceeded; attr.onerror = uploadFailed; // Set headers: const char* headers[] = { "Content-Type", "application/x-www-form-urlencoded", 0 }; attr.requestHeaders = headers; // Read file data: std::ifstream in_file(file_name.c_str(), std::ios::binary); // in_file.seekg(0, std::ios::end); int file_size = in_file.tellg(); // in_file.seekg(0, std::ios::beg); std::stringstream buffer; buffer << in_file.rdbuf(); // char *cstr = new char[buffer.str().length() + 1]; strcpy(cstr, buffer.str().c_str()); // attr.requestData = cstr; attr.requestDataSize = file_size; // Send HTTP request: emscripten_fetch(&attr, url.c_str()); return true; }
You need to make sure that the request header has: "Content-Type", "multipart/form-data; boundary=[custom-boundary]\r\n" ...where [custom-boundary] is a string of your choice. Then in the request data, you start with that custom boundary prefixed by two hyphens ("--[custom-boundary]"), followed by "\r\n", then you have another header, such as: "Content-Disposition: form-data; name=\"myFile\"; filename=\"G0000U00000R01.html\"\r\n" "Content-Transfer-Encoding: binary\r\n" "Content-Type: text/html\r\n\r\n" ...followed by the file contents, followed by "\r\n" again, and finally followed by the closing delimiter, which is the same custom boundary wrapped in two hyphens on each side ("--[custom-boundary]--").
72,292,118
72,488,349
find memory allocated between time A and time B which remains unfreed at time C
I know that Visual Studio allows you to compare memory between two time snapshots in order to find leaks, using the debugger-integrated Memory Usage diagnostic tool. However, is there a way to filter out of the diff any memory that was allocated after another time point (B) between start time (A) and end time (C)? Time A = start caring about memory that gets allocated Time B = stop caring Time C = all memory that was allocated between Time A and Time B should now be freed; if not, let's see the callstack that allocated each chunk This does not necessarily need to be done using the Visual Studio interactive diagnostics tool if there is, for example, a way to do this using _CrtMemCheckpoint instead, or another way. Although I have tagged this as a Visual Studio 2019 question, I will accept any solution that uses freely available Microsoft tools such as WinDbg or Free Open Source tools such as VLD (Visual Leak Detector) to achieve the same result.
Assuming the target platform is Windows since VS2019 is mentioned, I've found a tool similar to what you are looking for. https://www.codeproject.com/Articles/11221/Easy-Detection-of-Memory-Leaks It is pretty old but still compiles. Code from MemoryHooks can be used for new/delete override implementation. (void* operator new (std::size_t count ); and void operator delete (void* ptr);)
72,292,461
72,293,189
how to return a template list in C++?
I am learning C++ in school and in my homework, my task is to create the FooClass for these: int main() { int x[] = {3, 7, 4, 1, 2, 5, 6, 9}; FooClass<int> ui(x, sizeof(x) / sizeof(x[0])); std::string s[] = {"Car", "Bike", "Bus"}; FooClass<std::string> us(s, sizeof(s) / sizeof(s[0])); } then modify the code so it can write out the size of the lists and the elements of the lists. I managed to write the code for the size function. But I am struggling with the element part, getting an error: missing template arguments before '.' token. Here is my code so far: template <typename T> class FooClass { private: T *items; int itemsSize; bool mergeOn; public: FooClass(T items[], int itemsSize) { items = new T[itemsSize]; this->itemsSize = itemsSize; }; int getItemsSize() { return this->itemsSize; } void print(const FooClass <T>& items) { for (int i=0; i< items.getItemsSize(); ++i) { std::cout<<items[i]<<std::endl; } } }; int main() { int x[] = {3, 7, 4, 1, 2, 5, 6, 9}; FooClass<int> ui(x, sizeof(x) / sizeof(x[0])); std::string s[] = {"Car", "Bike", "Bus"}; FooClass<std::string> us(s, sizeof(s) / sizeof(s[0])); std::cout<<ui.getItemsSize()<<std::endl; FooClass.print(us); //this is where I get the compilation error. } How should I implement the print function?
Your constructor is not copying the source elements into the array that it allocates. You also need a destructor to free the allocated array when you are done using it. And your print() method is not static, so it should act on this instead of taking a FooClass object as a parameter. Try this: template <typename T> class FooClass { private: T *m_items; int m_itemsSize; bool m_mergeOn; public: FooClass(T items[], int itemsSize) { m_items = new T[itemsSize]; for (int i = 0; i < itemsSize; ++i) { m_items[i] = items[i]; } m_itemsSize = itemsSize; }; ~FooClass() { delete[] m_items; } int getItemsSize() const { return m_itemsSize; } void print() const { for (int i = 0; i < m_itemsSize; ++i) { std::cout << m_items[i] << std::endl; } } }; int main() { int x[] = {3, 7, 4, 1, 2, 5, 6, 9}; FooClass<int> ui(x, sizeof(x) / sizeof(x[0])); std::cout << ui.getItemsSize() << std::endl; ui.print(); std::string s[] = {"Car", "Bike", "Bus"}; FooClass<std::string> us(s, sizeof(s) / sizeof(s[0])); std::cout << us.getItemsSize() << std::endl; us.print(); }
72,292,600
72,293,093
Maxheap giving wrong result
I wrote the following code to build a max heap from an already existing array. The downadjust function should make the array a max heap, but it is not producing the results I expect. Please check the code and tell me where I am going wrong. It would also be very helpful if someone could suggest what changes to the downadjust function would turn it into a min heap (that is the next question I have to code). #include <iostream> using namespace std; void downadjust(int heap[],int i){ // n is size int n=heap[0]; int j,flag=1; while(2*i<=n && flag==1){ j=2*i; if(j+1<=n && heap[j+1]>heap[j]){ j=j+1; } if(heap[i]>heap[j]) flag=0; else { swap(heap[i],heap[j]); i=j; } } } void disp(int heap[],int n){ for(int i=1;i<n;i++){ cout<<heap[i]<<" "; } } int main() { int n; cout<<"no of stud"; cin>>n; n++; int heap[n]; heap[0]=n-1; for(int i=1;i<n;i++){ cin>>heap[i]; } for(int i=n/2;i>=1;i--){ downadjust(heap,i); } disp(heap,n); cout<<endl; cout<<"max is "<<heap[1]; return 0; }
The result for input 5 1 9 2 11 50 6 100 7 is the valid heap 100 11 50 7 5 9 6 2 1. Perhaps you expected the sequence 50 11, and other ordered pairs of child nodes, but heap construction does not provide strict mutual ordering between children (as a binary search tree does). To make a min heap, you just need to change two comparisons: if (j + 1 <= n && heap[j + 1] < heap[j]) { j = j + 1; } if (heap[i] < heap[j]) flag = 0;
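Putting the answer's two changed comparisons together, a stand-alone sketch of the min-heap variant (same 1-based layout as the question, with heap[0] holding the element count) could look like this:

```cpp
#include <algorithm>
#include <utility>

// Min-heap variant of the question's downadjust: heap[0] holds the element
// count and the data lives in heap[1..n], matching the question's layout.
void downadjust_min(int heap[], int i) {
    int n = heap[0];
    int j, flag = 1;
    while (2 * i <= n && flag == 1) {
        j = 2 * i;
        if (j + 1 <= n && heap[j + 1] < heap[j])  // pick the smaller child
            j = j + 1;
        if (heap[i] < heap[j])                    // parent already smaller: done
            flag = 0;
        else {
            std::swap(heap[i], heap[j]);
            i = j;
        }
    }
}

// Bottom-up heapify, same loop as the question's main()
void build_min_heap(int heap[]) {
    int n = heap[0];
    for (int i = n / 2; i >= 1; --i)
        downadjust_min(heap, i);
}
```

After build_min_heap, heap[1] is the minimum, and every parent is less than or equal to both of its children.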
72,293,698
72,294,254
What is the use of a custom unique_ptr deleter that calls delete?
In the C++ samples provided by NVidia's TensorRT library, there is a file named common.h that contains definitions of structures used throughout the examples. Among other things, the file contains the following definitions: struct InferDeleter { template <typename T> void operator()(T* obj) const { delete obj; } }; template <typename T> using SampleUniquePtr = std::unique_ptr<T, InferDeleter>; The SampleUniquePtr alias is used throughout the code samples to wrap various pointers to interface classes returned by some functions, e.g. SampleUniquePtr<INetworkDefinition>(builder->createNetworkV2(0)); My question is, in what practical aspects are std::unique_ptr and SampleUniquePtr different? The behavior of SampleUniquePtr is pretty much what I would expect from std::unique_ptr, at least now. Could it be for compatibility with old versions of C++?
From the version history I see that for a while it was template <typename T> void InferDeleter::operator()(T* obj) const { if (obj) { obj->destroy(); } } Then they declared the destroy() methods deprecated: Destructors for classes with destroy() methods were previously protected. They are now public, enabling use of smart pointers for these classes. The destroy() methods are deprecated. Although deprecated, destroy() is still not removed, and the old-ABI InferDeleter has to be kept for backward compatibility with applications linked against the old TensorRT. Thus, since recently, they use template <typename T> void InferDeleter::operator()(T* obj) const { delete obj; } without removing struct InferDeleter or the alias using SampleUniquePtr = std::unique_ptr<T, InferDeleter>. But it may be removed in the future: when they remove struct InferDeleter and change the alias to using SampleUniquePtr = std::unique_ptr<T>, it will make the TensorRT library incompatible with old software.
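As a side note, keeping the stateless class-type deleter costs nothing at runtime: an empty deleter is stored via the empty-base optimization, so the smart pointer stays one word wide. A minimal sketch, where INetworkDefinition is a hypothetical stand-in for the TensorRT interface, not the real class:

```cpp
#include <memory>

// Hypothetical stand-in for a TensorRT interface class; it counts
// destructions so the deleter's effect is observable.
int g_destroyed = 0;
struct INetworkDefinition {
    ~INetworkDefinition() { ++g_destroyed; }
};

// Same shape as the sample's deleter: forwards to plain delete.
struct InferDeleter {
    template <typename T>
    void operator()(T* obj) const { delete obj; }
};

template <typename T>
using SampleUniquePtr = std::unique_ptr<T, InferDeleter>;

// An empty deleter adds no storage: the alias is the size of a raw pointer
// on the major implementations, which apply the empty-base optimization here.
static_assert(sizeof(SampleUniquePtr<INetworkDefinition>) ==
                  sizeof(INetworkDefinition*),
              "stateless deleter stored via EBO");
```

So the alias behaves exactly like std::unique_ptr<T> today, while leaving TensorRT room to change what "deleting" an interface object means without touching user code.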
72,293,711
72,293,765
How to static assert whether all types of a tuple fulfill some condition?
I have some type traits SomeTraits from which I can extract whether a type T fulfills some condition, through SomeTraits<T>::value. How would one go over all the types of a given std::tuple<> and check (through say a static assert) whether they all fulfill the above condition? e.g. using MyTypes = std::tuple<T1, T2, T3>; // Need some way to do something like static_assert(SomeTupleTraits<MyTypes>::value, "MyTypes must be a tuple that blabla..."); where SomeTupleTraits would check whether SomeTraits<T>::value == true for each type inside MyTypes? I am restricted to C++14.
As a one liner (newlines optional), you can do something like: // (c++20) static_assert([]<typename... T>(std::type_identity<std::tuple<T...>>) { return (SomeTrait<T>::value && ...); }(std::type_identity<MyTypes>{})); Or you can create a helper trait to do it: // (c++17) template<template<typename, typename...> class Trait, typename Tuple> struct all_of; template<template<typename, typename...> class Trait, typename... Types> struct all_of<Trait, std::tuple<Types...>> : std::conjunction<Trait<Types>...> {}; static_assert(all_of<SomeTrait, MyTypes>::value); Or in C++11, you can reimplement std::conjunction inside the helper trait: template<template<typename, typename...> class Trait, typename Tuple> struct all_of; template<template<typename, typename...> class Trait> struct all_of<Trait, std::tuple<>> : std::true_type {}; template<template<typename, typename...> class Trait, typename First, typename... Rest> struct all_of<Trait, std::tuple<First, Rest...>> : std::conditional<bool(Trait<First>::value), all_of<Trait, std::tuple<Rest...>>, std::false_type>::type::type {}; static_assert(all_of<SomeTrait, MyTypes>::value, "");
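For completeness, the C++11 helper above can be exercised with a standard trait standing in for SomeTraits; all of these static_asserts compile under C++14 as the question requires:

```cpp
#include <tuple>
#include <type_traits>

// The answer's C++11-compatible all_of, applied to std::is_integral as a
// stand-in for the question's SomeTraits.
template<template<typename, typename...> class Trait, typename Tuple>
struct all_of;

template<template<typename, typename...> class Trait>
struct all_of<Trait, std::tuple<>> : std::true_type {};

template<template<typename, typename...> class Trait, typename First, typename... Rest>
struct all_of<Trait, std::tuple<First, Rest...>>
    : std::conditional<bool(Trait<First>::value),
                       all_of<Trait, std::tuple<Rest...>>,
                       std::false_type>::type::type {};

static_assert(all_of<std::is_integral, std::tuple<int, long, char>>::value,
              "every element is integral");
static_assert(!all_of<std::is_integral, std::tuple<int, double>>::value,
              "double breaks the condition");
static_assert(all_of<std::is_integral, std::tuple<>>::value,
              "vacuously true for an empty tuple");
```

Note that the std::conditional keeps the recursion lazy: when Trait<First> is false, ::type is taken from std::false_type, so all_of over the remaining types is never instantiated.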
72,294,123
72,306,568
Task with inserting elements to list - tough a little
This is just a continuation of several past questions. My function should return std::vector<std::set<std::string>> A group of names should be classified into teams for a game. Teams should be the same size, but this is not always possible unless n is exactly divisible by k. Therefore, they decided that the first mode (n, k) teams have n / k + 1 members, and the remaining teams have n / k members. Vector of strings which function accepts must be converted to list of strings. I need to move through list elements and add them to the set. However, while moving through list I need to skip elements from list that are already added to set. First element of list is added to set before moving through list. Moving is calculated based on length of last inserted member. For example if Damir is inserted, then move (shift) will be 5. EXPLANATION: UPDATE [after @stefaanv's suggestion]: #include <iostream> #include <list> #include <set> #include <string> #include <vector> typedef std::vector<std::set<std::string>> vek; vek Distribution(const std::vector<std::string>&names, int k) { vek teams(k); int num = names.size(); int number_of_first = num % k; int number_of_members_first = num / k + 1; int number_of_members_remaining = num / k; std::list<std::string> lista; for (int i = 0; i < num; i++) lista.push_back(names[i]); auto it = lista.begin(); auto temp = it; int n = num, new_member = 0, index_list = 0; for (int i = 0; i < k; i++) { if (i <= number_of_first) { int number_of_members_in_team = 0; while (number_of_members_in_team < number_of_members_first) { for (int i = 0; i < new_member; i++) index_list++; if (index_list > n - 1) index_list = index_list - n; while (it != lista.begin()) it--; for (int i = 0; i < index_list; i++) it++; teams[i].insert(*it); number_of_members_in_team++; new_member = it->length(); it = lista.erase(it); n--; } } else { int number_of_members_in_team = 0; while (number_of_members_in_team < number_of_members_remaining) { for (int i = 0; i < new_member; 
i++) index_list++; if (index_list > n - 1) index_list = index_list - n; while (it != lista.begin()) it--; for (int i = 0; i < index_list; i++) it++; teams[i].insert(*it); number_of_members_in_team++; new_member = it->length(); it = lista.erase(it); n--; } } } return teams; } int main() { for (auto i : Distribution({"Damir", "Ana", "Muhamed", "Marko", "Ivan", "Mirsad", "Nikolina", "Alen", "Jasmina", "Merima"}, 3)) { for (auto j : i) std::cout << j << " "; std::cout << std::endl; } return 0; } Correct output would be: Ana Damir Mirsad Muhammed Ivan Merima Nikolina Alen Jasmina Marko I get nothing in the output! I just need simple fix in algorithm. Reason why I have nothing in output is this: it = lista.erase(it); n--; If I don't use those lines of code I would get 3 teams with 4, 3, and 3 names respectively, just with wrong names. So now I get the right number of teams and the right number of team members. The only problem here remains to add correct names to teams... Could you give me better approach?
One wrong thing in your check function: you only check the names in the current team to skip that, but you should skip any name that was already put in a team. I already gave you an alternative on another question, so the check function isn't needed (erasing from the list). Also, when iterating over a list, you should take into account that when you reach end, you must continue at begin. Adding the names a 100 times isn't a good alternative. Hints: try to put in comments what your code is supposed to do so you and we can see why the code doesn't work as expected. When possible, use const reference to pass data-structures to functions For the number of next shifts: it->length() works too, no need to look for the name. You can easily avoid the code duplication, the only difference is one number. i and l have the same meaning and when you iterate i from 0 to < k, they become the same. EDIT after changing code in the question based on my comments Code how I would do it based on your edited code (there are two names switched with the expected output), make_team() and Distribution() are working together on the data, so they could be joined in a class: #include <iostream> #include <list> #include <set> #include <string> #include <vector> // data structure for result typedef std::vector<std::set<std::string>> ResultType; // put players in one team according to shift algorithm (shifts is length of name of last player) to make more random // list_players and it_players are references and are being changed in this function std::set<std::string> make_team(std::list<std::string>& list_players, std::list<std::string>::iterator& it_players,int size) { std::set<std::string> team; int number_shifts = 0; int number_of_members_in_team = 0; while (number_of_members_in_team < size) { // shift to new selection (rotate: end becomes begin) for (int i = 0; i < number_shifts - 1; i++) { it_players++; if (it_players == list_players.end()) it_players = list_players.begin(); } // move to team 
by inserting and deleting from list team.insert(*it_players); number_of_members_in_team++; number_shifts = it_players->length(); // distribute algorithm: shift number of times as length of name of last chosen player it_players = list_players.erase(it_players); if (list_players.empty()) // just in case return team; } return team; } // distribute players to given number of teams ResultType Distribution(const std::vector<std::string>&names, int number_teams) { // init ResultType teams(number_teams); int number_players = names.size(); int number_of_first = number_players % number_teams; int number_of_members = number_players / number_teams + 1; // adjusted after number_of_first std::list<std::string> list_players(names.begin(), names.end()); auto it_players = list_players.begin(); // do for all teams for (int tm = 0; tm < number_teams; ++tm) { if (tm == number_of_first) // adjust because not all teams can have the same size, so first teams can be larger by 1 number_of_members--; teams[tm] = make_team(list_players, it_players, number_of_members); if (list_players.empty()) return teams; } return teams; } // test harnass int main() { for (auto i : Distribution({"Damir", "Ana", "Muhamed", "Marko", "Ivan", "Mirsad", "Nikolina", "Alen", "Jasmina", "Merima"}, 3)) { for (auto j : i) std::cout << j << " "; std::cout << std::endl; } return 0; }
72,295,582
72,300,454
How to create a new terminal and run a command in it?
I have a function like this void smbProcess(){ string smbTargetIP; cout<<"Target IP: "; cin>>smbTargetIP; string commandSmb_S = "crackmapexec smb " + smbTargetIP; int smbLength = commandSmb_S.length(); char commandSmb_C[smbLength + 1]; strcpy(commandSmb_C, commandSmb_S.c_str()); system("xterm -hold -e commandSmb_C"); } I want to create a new terminal and run my command (like this "crackmapexec smb 192.168.1.0/24"), but it doesn't work. When I try this, it works: system("xterm -hold -e date"); These also don't work: system("xterm -hold -e 'commandSmb_C'"); system("xterm -hold -e "commandSmb_C""); If you know another way to do this, that will work too.
Add "xterm -hold -e" to commandSmb_S void smbProcess(){ string smbTargetIP; cout<<"Target IP: "; cin>>smbTargetIP; string commandSmb_S = "xterm -hold -e crackmapexec smb " + smbTargetIP; int smbLength = commandSmb_S.length(); char commandSmb_C[smbLength + 1]; strcpy(commandSmb_C, commandSmb_S.c_str()); system(commandSmb_C); }
72,296,089
72,296,137
i'm making a console game (on cmd), when i touch the screen with my mouse. how do i make it ignore the mouse clicks
literally just that. in this pic when i clicked here. the car stopped coming down. you can find the original code in this link from github. #include<iostream> #include <windows.h> #include <time.h> using namespace std; //don't hate me for it i started coding a day ago. HANDLE console = GetStdHandle(STD_OUTPUT_HANDLE); COORD coords; int carY[3]; int carX[3]; int carFlag[3]; void gotoxy(int x, int y) { //change the cordinates the text outputs to coords.X = x; coords.Y = y; SetConsoleCursorPosition(console, coords); } void gencar(int ind) {//ind means indeterminate. carX[ind] = 40; } void drawcar(int ind) { if (carFlag[ind] != false) { gotoxy(carX[ind], carY[ind]); cout << "****"; gotoxy(carX[ind], carY[ind] + 1); cout << " ** "; gotoxy(carX[ind], carY[ind] + 2); cout << "****"; gotoxy(carX[ind], carY[ind] + 3); cout << " ** "; } } void erasecar(int ind) { //fills old places with spaces if (carFlag[ind] != false) { gotoxy(carX[ind], carY[ind]); cout << " "; gotoxy(carX[ind], carY[ind] + 1); cout << " "; gotoxy(carX[ind], carY[ind] + 2); cout << " "; gotoxy(carX[ind], carY[ind] + 3); cout << " "; } } it basically prints a car as it's going down and over prints the previous print's position with spaces. int main() { carFlag[0] = 1; carY[0] = 1; gencar(0); while (1) { drawcar(0); Sleep(60); erasecar(0); if (carFlag[0] == 1) //makes it move on the Y axis carY[0] += 1; } return 0; }
In your console options (the top left button), go to Properties and turn off the "QuickEdit Mode" setting. This is a Windows feature where clicking the mouse suspends whatever program is running and lets you select text from the screen. You don't want that for your program.
72,296,440
72,308,158
How to prevent std::min and max to return NAN if the first element of the array is NAN?
Is there a way to make min/max (std::min_element) ignore all NANs? I mean, it seems to ignore NANs in the middle but not if the first element is NAN. Sample: template <typename T> inline void GetMinMax(const T* data, const int len, T& min, T& max) { min = *std::min_element(data, data + len); max = *std::max_element(data, data + len); } float min, max; std::array<float, 4> ar1 = { -12, NAN, NAN, 13 }; GetMinMax<float>(&ar1[0], ar1.size(), min, max); //min: -12. max: 13 std::array<float, 4> ar2 = { -12, 3, 13, NAN }; GetMinMax<float>(&ar2[0], ar2.size(), min, max);//min: -12. max: 13 std::array<float, 4> ar3 = { NAN, -12, 3, 13 }; GetMinMax<float>(&ar3[0], ar3.size(), min, max);//min: -nan(ind). max: -nan(ind) !!!!
The safest path is to remove all the NaNs values from the range before applying any min-max standard algorithm. Consider a possible implementation of std::min_element1: template<class ForwardIt> ForwardIt min_element(ForwardIt first, ForwardIt last) { if (first == last) return last; ForwardIt smallest = first; // <-- If the first is a NaN... ++first; for (; first != last; ++first) { if (*first < *smallest) { // <-- This condition will always be FALSE smallest = first; } } return smallest; // <-- An iterator to a NaN is returned } More formally, the C++ Standard2 specifies: 27.8.1 General [alg.sorting.general] The operations in [alg.sorting] defined directly in namespace std have two versions: one that takes a function object of type Compare and one that uses an operator<. Compare is a function object type ([function.objects]) that meets the requirements for a template parameter named BinaryPredicate ([algorithms.requirements]). The return value of the function call operation applied to an object of type Compare, when contextually converted to bool ([conv]), yields true if the first argument of the call is less than the second, and false otherwise. Compare comp is used throughout for algorithms assuming an ordering relation. For all algorithms that take Compare, there is a version that uses operator< instead. That is, comp(*i, *j) != false defaults to *i < *j != false. For algorithms other than those described in [alg.binary.search], comp shall induce a strict weak ordering on the values. The term strict refers to the requirement of an irreflexive relation (!comp(x, x) for all x), and the term weak to requirements that are not as strong as those for a total ordering, but stronger than those for a partial ordering. 
If we define equiv(a, b) as !comp(a, b) && !comp(b, a), then the requirements are that comp and equiv both be transitive relations: (4.1) comp(a, b) && comp(b, c) implies comp(a, c) (4.2) equiv(a, b) && equiv(b, c) implies equiv(a, c) The problem, given any float value x, is that the following hold: x < NaN == false and NaN < x == false, but x != NaN Only considering the subset of the float values which are not NaNs we can fulfill the requirement. 1) https://en.cppreference.com/w/cpp/algorithm/min_element 2) I'm quoting the draft at https://eel.is/c++draft/alg.sorting.general , emphasis mine.
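If pre-filtering is inconvenient, another option (a sketch, not the only way) is a comparator that moves all NaNs into one equivalence class at one end of the ordering; that keeps the strict-weak-ordering requirement intact. Note that min and max need mirrored comparators, otherwise max_element would return a NaN:

```cpp
#include <algorithm>
#include <cmath>

// NaNs compare after every real number. This is irreflexive and transitive,
// and all NaNs are mutually equivalent, so it is a valid strict weak ordering.
bool lessNanLast(float a, float b) {
    if (std::isnan(a)) return false;  // a NaN is never "less"
    if (std::isnan(b)) return true;   // any number is less than a NaN
    return a < b;
}

// Mirror image for max_element: NaNs compare before every real number.
bool lessNanFirst(float a, float b) {
    if (std::isnan(b)) return false;
    if (std::isnan(a)) return true;
    return a < b;
}

void GetMinMaxIgnoreNan(const float* data, int len, float& min, float& max) {
    min = *std::min_element(data, data + len, lessNanLast);
    max = *std::max_element(data, data + len, lessNanFirst);
}
```

This assumes the range contains at least one non-NaN value; an all-NaN input still yields NaN, which is arguably the only sensible result in that case.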
72,296,686
72,296,722
How do I return a string with variables and characters in c++?
I have a string function that I would like to output the following cout lines. string print_ticket(void){ if(sold_status == true){ cout<<seat_number<<" "<<seat_number<<"sold"; } else{ cout<<seat_number<<" "<<seat_number<<"available"; } } The problem is the function must return a string and I'm not sure the best way to turn these cout statement into a string in this scenario. Thank you for any assistance.
Use ostringstream, available when including <sstream>: string print_ticket(void){ std::ostringstream sout; if (sold_status) { sout << seat_number << " " << seat_number << "sold"; } else { sout << seat_number << " " << seat_number << "available"; } return sout.str(); }
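Since the two branches differ only in the final word, the same idea collapses to a single stream statement. Shown here as a free function with the two members passed in, because the rest of the class isn't visible in the question:

```cpp
#include <sstream>
#include <string>

// Builds the ticket description instead of printing it, so the caller
// decides what to do with the string.
std::string print_ticket(int seat_number, bool sold_status) {
    std::ostringstream sout;
    sout << seat_number << " " << seat_number
         << (sold_status ? "sold" : "available");  // only the last word differs
    return sout.str();
}
```

Like the original cout version, this produces no space before "sold"/"available"; add one inside the ternary branches if that was unintended.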
72,296,737
72,296,779
How can I set a var for the url in libcurl
I followed a tutorial to fetch a webpage. It worked but they manually set the URL. I tried changing it to use a URL from a var, but that did not work. I get an error in the terminal "Couldn't resolve host name" I tried main(char*) which gave the same error. I can't seem to find anything online for this. How can I make it so a user-defined var can be used as the URL? code below. #include <iostream> #include <curl/curl.h> #include <string> using namespace std; int main() { string website; getline(cin, website); CURL* curl = curl_easy_init(); if (!curl) { fprintf(stderr, "init failed\n"); return EXIT_FAILURE; } // set up curl_easy_setopt(curl, CURLOPT_URL, "$website"); // perform CURLcode result = curl_easy_perform(curl); if (result != CURLE_OK) { fprintf(stderr, "download prob: %s\n", curl_easy_strerror(result)); } curl_easy_cleanup(curl); return EXIT_SUCCESS; }
"$website" is just a string, a piece of text. The variable should be referenced as website, and since the function is expecting a pointer to an array of characters, you use the c_str() or data() method of the class. curl_easy_setopt(curl, CURLOPT_URL, website.data());
72,296,860
72,301,649
Saving a nested initializer list as a variable for vector construction
I'm currently initializing a vector like this: struct Foo{ Foo(double a, double b){ a_ = a; b_ = b; }; double a_; double b_; }; std::vector<Foo> foo_vec{{1, 2}, {2, 3}}; This correctly constructs a vector with two initialized elements. I'd like to pull this initialization out to a const global variable because I'm using the same one multiple times for different vectors. I tried this: // this fails to compile: result type must be constructible from value type of input range const std::array<std::initializer_list<double>, 2> bar = {{{1, 2}, {2, 3}}}; std::vector<Foo> foo_vec{bar.begin(), bar.end()}; // this works const std::array<Foo, 2> bar = {{{1, 2}, {2, 3}}}; std::vector<Foo> foo_vec{bar.begin(), bar.end()}; Are there any other shorter/better ways to do this or is this it?
Why do you want to mess with initializer lists? Just copy a vector: const std::vector<Foo> default_vec{{1, 2}, {2, 3}}; std::vector<Foo> foo_vec{default_vec};
72,296,910
72,301,754
Why allocation and sort of std::pair is faster than std::vector?
Today I just tried to solve a problem in programming. I noticed that allocation and sorting of the vector<vector> are much much slower than vector<pair<int, pair<int, int>>. I took some benchmarks and came to know that nested vector code is 4x slower than nested pair code for the given input (https://pastebin.com/izWGNEZ7). Below is the code I used for benchmarking. auto t_start = std::chrono::high_resolution_clock::now(); vector<pair<int, pair<int, int>>> edges; for (int i = 0; i < points.size(); i++) for (int j = i + 1; j < points.size(); j++) edges.push_back({abs(points[i][0] - points[j][0]) + abs(points[i][1] - points[j][1]), {i, j}}); sort(edges.begin(), edges.end()); auto t_end = std::chrono::high_resolution_clock::now(); double elapsed_time_ms = std::chrono::duration<double, std::milli>(t_end - t_start).count(); cout << elapsed_time_ms << endl; auto t_start1 = std::chrono::high_resolution_clock::now(); vector<vector<int>> edges1; for (int i = 0; i < points.size(); i++) for (int j = i + 1; j < points.size(); j++) edges1.push_back({abs(points[i][0] - points[j][0]) + abs(points[i][1] - points[j][1]), i, j}); sort(edges1.begin(), edges1.end()); auto t_end1 = std::chrono::high_resolution_clock::now(); double elapsed_time_ms1 = std::chrono::duration<double, std::milli>(t_end1 - t_start1).count(); cout << elapsed_time_ms1 << endl; Output: 241.917 1188.11 Does anyone know why there is a big difference in performance?
A std::pair or std::array has a fixed size known at compile time and will include the objects directly in the class itself. A std::vector on the other hand has to deal with dynamic size and needs to allocate a chunk of memory on the heap to hold the objects. For small objects the std::pair or std::array will be better because the overhead of allocating and freeing memory will eat into your performance. That's what you are seeing. The extra indirection involved with the pointer will also cost you when e.g. comparing the elements as well as having to check the size at run time. On the other hand for large objects the std::vector should be better because it supports move semantics. Swapping 2 vectors will just swap the pointer to the data while std::pair or std:array will have to move/copy each element, which would be costly for large objects. So what you see is not that pair is faster than vector but that pair is faster than vector in that use case.
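Independent of the pair-versus-vector choice, one cheap improvement for the benchmark itself: the edge count is known up front (n·(n-1)/2), so reserve() eliminates every reallocation during push_back. A sketch of the pair variant with that change:

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

using Edge = std::pair<int, std::pair<int, int>>;

// Builds the Manhattan-distance edge list with a single allocation.
std::vector<Edge> buildEdges(const std::vector<std::vector<int>>& points) {
    const std::size_t n = points.size();
    std::vector<Edge> edges;
    edges.reserve(n * (n - 1) / 2);  // exact final size, known in advance
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j)
            edges.push_back({std::abs(points[i][0] - points[j][0]) +
                                 std::abs(points[i][1] - points[j][1]),
                             {static_cast<int>(i), static_cast<int>(j)}});
    return edges;
}
```

The vector<vector<int>> variant benefits from the same reserve() on the outer vector, but it still pays one heap allocation per inner element, which is exactly the overhead the answer describes.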
72,297,845
72,298,054
How to check if an object is an instance of template class of multiple template arguments and that one of said arguments fulfills some condition?
From How to check if an object is an instance of a template class of multiple template arguments in C++? I got the following trait to check whether a type is a particular template instantiation of a templated type of several template arguments: template <typename T1, typename T2> struct A { }; template <typename Type> struct IsA: std::false_type { }; template <typename T1, typename T2> struct IsA<A<T1, T2>> : std::true_type { }; How would one additionally add the condition that the second type T2 fulfills some other condition? I tried doing template <typename Type> struct IsA: std::false_type { }; template <typename T1, typename T2, std::enable_if_t<SomeCondition<T2>::value>> struct IsA<A<T1, T2>> : std::true_type { }; but am getting the error error: template parameters not deducible in partial specialization:
You were on the right track: #include <type_traits> #include <iostream> template <typename T1, typename T2> struct A { }; template <typename Type, typename=void> struct IsA: std::false_type { }; template <typename T1, typename T2> struct IsA<A<T1, T2>, std::enable_if_t<std::is_same_v<T2, int>>> : std::true_type { }; int main() { std::cout << IsA<int>::value << "\n"; std::cout << IsA<A<char, char>>::value << "\n"; std::cout << IsA<A<char, int>>::value << "\n"; return 0; } In this trivial example the "some condition" is just a std::is_same_v<T2, int>, only for the sake of an example.
72,297,871
72,297,985
How to implement one loop with different frequencies
Assume the following while loop runs at 1kHz. What is the proper way to run another piece of code inside this loop but with different frequency (i.e. say 500Hz) without multithreading. while (1){ // running 1kHz (i.e. outer loop) do stuff if (){ // running 500Hz (i.e. inner loop) do another stuff } } Another question is assume the outer loop runs at the maximum speed of the CPU, is it possible to run the inner loop at a percentage of outer loop (i.e. 50% of outer loop).
The easiest way is something like this: int counter = 0; while (1) { // do stuff if (++counter == 2) { // inner loop counter = 0; // do other stuff } } Note that in a spin-loop like this there's no guarantee that the outer loop will run at 1kHz; it will run at a speed determined by the CPU speed and the amount of work that occurs within the loop. If you really need exactly 1kHz execution, you'll probably want to program a timer-interrupt instead. What is guaranteed is that the code inside the inner if() block will be executed on every second iteration of the outer loop.
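The counter idea generalizes directly to the second part of the question: to run the inner block at any fraction 1/N of the outer rate, tick a counter modulo N. A small sketch:

```cpp
// Fires once every n passes of the outer loop, i.e. at outer_rate / n.
struct Divider {
    unsigned n;            // divisor: 2 -> 50% of the outer loop rate
    unsigned counter = 0;

    bool tick() {
        if (++counter >= n) {
            counter = 0;
            return true;   // time to run the slower task
        }
        return false;
    }
};
```

In the 1 kHz loop this reads as: Divider half{2}; then inside while (1) { ... if (half.tick()) { /* 500 Hz work */ } }. As with the hand-rolled counter, the inner block runs on exactly every n-th iteration, whatever the outer loop's actual speed turns out to be.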
72,298,100
72,298,770
How to add the READONLY style to a wxTextCtrl text box in C++
was wondering how to add the READONLY style to a TextCtrl in C++ for the wxWidgets framework. I'm a complete noob to C++ and wxWidgets and couldn't find a comprehensible answer online. All I want to do is have a basic on-screen text box holding a label text for an input text box below it. So, if I'm just ignorant of a better method, please let me know. m_txt_box = new wxTextCtrl (this, wxID_ANY, "Test", wxPoint(100, 500), wxSize(30, 30));
m_txt_box = new wxTextCtrl (this, wxID_ANY, "Test", wxPoint(100, 500), wxSize(30, 30), wxTE_READONLY); The wxTextCtrl constructor takes a long style = 0 parameter right after the wxSize size parameter; the style you need here is wxTE_READONLY.
72,298,249
72,298,310
Why is it OK to assign a std::string& to a std::string variable in C++?
class MyClass: public: MyClass(const std::string& my_str); private: std::string _myStr In the implementation: MyClass::MyClass(const std::string& my_str): _mystr(my_str) How can you assign a reference (my_str is const std::string&) to a non-reference variable (_mystr, which is std::string)? In my mental model, a variable is a block of space in memory can be filled with stuff that is the same type as the variable's declared type. So a std::string variable is a block of space that can hold a string (eg: "abcd"), while a const std::string& is a block of space in memory that can hold a "reference" (kind of vague what it is, unlike a pointer which is an address to another block in memory). When you assign one variable to another, you are copying the content stored in one memory block into another. So how can you copy a std::string& into a std::string - their types are different. What am I misunderstanding here?
In your case, you do not actually make an assignment. This line: _mystr(my_str) is invoking the copy constructor of your _mystr member. The copy constructor receives a const std::string& (my_str, in your case) and constructs a clone of the object it refers to into your member _mystr. But to answer your question in a more general way: it is possible to initialize or assign a non-reference variable from a reference. Initialization (as in the example below) invokes the copy constructor; assignment to an already-constructed object invokes the assignment operator. Both (for std::string) accept a const std::string& and clone the object it refers to into the target. Therefore, this works as well: void f(std::string const & str) { std::string local_str = str; // ... } All of the above holds in general for all classes. Non-class types behave in a similar manner, e.g.: int i1; int & ri1 = i1; int i2 = ri1; // Will assign a copy of the value referenced by ri1, i.e. the value of i1, into i2 See more here about Initialisation and assignment. And Why are initialization lists preferred over assignments?.
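A small self-contained check makes the distinction visible: the local is a distinct object with equal contents, so changing it leaves the caller's string alone.

```cpp
#include <cassert>
#include <string>

// Takes a reference, initializes a non-reference local from it.
void demo(const std::string& str) {
    std::string local = str;  // copy construction from the referenced object
    assert(local == str);     // same characters...
    assert(&local != &str);   // ...but a different object in memory
    local += "!";             // mutating the copy
    assert(local != str);     // does not touch the original
}
```

This is exactly what the constructor's member initializer does with _mystr: it allocates and fills a fresh std::string, leaving the caller's argument untouched.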
72,298,569
72,298,708
How would I find the height of each node and assign it postorder in a binary search tree?
I have a templated class with an additional Node class with a height attribute. I want to be able to validate the height of each node after a new node has been inserted. My insert node function is working well, I am just confused on how I would change the height of each node after a new node is inserted. template <typename T> class BST { public: class Node { public: T key; int height = 0; Node* left = nullptr; Node* right = nullptr; Node* parent = nullptr; Node(){} Node(T k, Node* input_node = nullptr) { key = k; parent = input_node; } }; private: Node* root_ = nullptr; unsigned int size_ = 0; public: BST(); ~BST(); void insert(T k); private: void fix_height(Node* node); template <typename T> void BST<T>::insert(T k) { Node* node = root_; // insert function Node* prev_node = node; bool went_right; if(node == nullptr) { root_ = new Node(k); ++size_; return; } while(node != nullptr) { prev_node = node; if(k < node->key) { node = node->left; went_right = false; } else if (k > node->key) { node = node->right; went_right = true; } else { return; } } if(went_right) { prev_node->right= new Node(k, prev_node); // assigning the new node } else { prev_node->left= new Node(k, prev_node); } ++size_; } template <typename T> void BST<T>::fix_height(Node* node) { }```
The height of a node is defined as the maximum of its child nodes' heights plus 1. Although your question title speaks of "post-order" (a term related to recursion), your code is not actually recursive. Normally, a post-order update happens after a recursive call. Anyway, with your iterative solution the height can still be updated easily because you have stored parent pointers in your tree nodes. And so all you need to do is walk back through each node and update the height. template <typename T> void BST<T>::fix_height(Node* node) { while (node) { node->height = 1 + std::max( node->left ? node->left->height : -1, node->right ? node->right->height : -1); node = node->parent; } } It should be noted that your tree heights appear to treat a leaf node's height (that is, the height of a node that has no children) as 0. In order to make your fix_height function well-behaved even if called on a leaf node, then the height of "no child" is treated as -1. If you decide that a leaf node's height should actually be 1, then you should change those "no child" heights to 0, and change the default height of a new node to 1. To use this, you would call fix_height(prev_node); just before returning from your insert function.
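To see the walk-up in isolation, here is a minimal non-template sketch (a bare Node rather than the question's BST<T>::Node) with a hand-built three-node chain:

```cpp
#include <algorithm>

struct Node {
    int height = 0;
    Node* left = nullptr;
    Node* right = nullptr;
    Node* parent = nullptr;
};

// Recompute heights from `node` up to the root, exactly as in the answer;
// a missing child counts as height -1, so a leaf ends up with height 0.
void fix_height(Node* node) {
    while (node) {
        node->height = 1 + std::max(node->left  ? node->left->height  : -1,
                                    node->right ? node->right->height : -1);
        node = node->parent;
    }
}
```

Calling fix_height on the newly inserted node updates every ancestor, which is exactly the call site insert() needs: fix_height(prev_node->left) or fix_height(prev_node->right) just before returning.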
72,298,630
72,299,030
Recursive concept/type_traits on tuple-like types
Say I was trying to implement a concept meowable that Integral types are meowable. Class types with member function meow are meowable. This is in the final target but the current question doesn't focus on it. Tuple-like types with only meowable elements are meowable. std::ranges::range with meowable elements are meowable. This is in the final target but the current question doesn't focus on it. Then I came up with this implementation(simplified as I could): #include <concepts> #include <type_traits> #include <ranges> #include <utility> #include <tuple> template<class T> concept meowable_builtin = std::integral<T>; template<class T, std::size_t I> concept has_tuple_element = requires (T t) { typename std::tuple_element<I, T>::type; { get<I>(t) } -> std::convertible_to<std::tuple_element_t<I, T>&>; }; template<class T> concept tuple_like = requires { typename std::tuple_size<T>::type; { std::tuple_size_v<T> } -> std::convertible_to<std::size_t>; } && []<std::size_t...I>(std::index_sequence<I...>) { return (has_tuple_element<T, I> && ...); } (std::make_index_sequence<std::tuple_size_v<T>>{}); template<class T> struct is_meowable: std::false_type{}; template<meowable_builtin T> struct is_meowable<T>: std::true_type{}; template<tuple_like T> struct is_meowable<T> : std::bool_constant< []<std::size_t...I>(std::index_sequence<I...>) { return (is_meowable<std::tuple_element_t<I, T>>::value && ...); } (std::make_index_sequence<std::tuple_size_v<T>>{}) > {}; template<class T> concept meowable_tuple = tuple_like<T> && is_meowable<T>::value; template<class T> concept meowable = is_meowable<T>::value; static_assert(meowable<int>); //static_assert(tuple_like<std::tuple<int>>); static_assert(is_meowable<std::tuple<int>>::value); But some compilers don't like it (https://godbolt.org/z/5vMTEhTdq): 1. GCC-12 and above: internal compiler error. 2. GCC-11: accepted. 3. Clang-13 and above: static_assert fired. 4. MSVC-v19: accepted. 
However, if I uncomment the second last line of code, all compilers are happy. (Instantiation point of concepts?) So my questions are: Why this behavior? (compiler bug or something like "ill-formed NDR"?) How can I achieve my target?
Why this behavior? (compiler bug or something like "ill-formed NDR"?) This is apparently a bug of GCC-trunk and Clang-trunk, the issue here is that GCC/Clang doesn't properly handle the template partial specialization based on the concept initialized by the lambda. Reduced template<class> concept C = [] { return true; } (); template<class T> struct S {}; template<class T> requires C<T> struct S<T> { constexpr static bool value = true; }; // static_assert(C<int>); static_assert(S<int>::value); How can I achieve my target? Replace lambda with the template function based on the reduced result template<class T, std::size_t...I> constexpr bool all_has_tuple_element(std::index_sequence<I...>) { return (has_tuple_element<T, I> && ...); } template<class T> concept tuple_like = requires { typename std::tuple_size<T>::type; { std::tuple_size_v<T> } -> std::convertible_to<std::size_t>; } && all_has_tuple_element<T>(std::make_index_sequence<std::tuple_size_v<T>>{}); Demo
72,298,677
72,298,703
C++ : second child class is unable to inherit the properties from the parent class
Write a c++ program using inheritance to display the count of apples and mangoes in a basket of fruits. Make three classes fruit, apple and mango. Fruit as the base class and apple and mango as child classes. The fruit class should contain all the variables and two functions to input values and calculate the total number of fruits in the basket. The child classes apple and mango should each contain a function that prints the number of apples/mangoes in the basket I have come up with the following code as the answer : #include <iostream> using namespace std; class fruit { public: int apples = 0, mangoes = 0; int total_fruits = 0; void input_fruits() { cout << "Enter the number of apples : "; cin >> apples; cout << "Enter the number of mangoes : "; cin >> mangoes; } void calculate_total() { total_fruits = apples + mangoes; cout << "The total fruits in the basket are : " << total_fruits << endl; } }; class apple : public fruit { public: void show_apples() { cout << "The number of apples in the basket is : " << apples << endl; } }; class mango : public fruit { public: void show_mangoes() { cout << "The number of mangoes in the basket is : " << mangoes << endl; } }; int main() { apple a1; mango m1; a1.input_fruits(); a1.show_apples(); m1.show_mangoes(); a1.calculate_total(); return 0; } Here is the output, where the mangoes count doesn't work: When the show_mangoes() function is called, it is returning the previously declared value of the mangoes i.e. 0. If I remove the value at declaration, the function is returning the pointer value. I couldn't figure out the reason of why the mango class is not able to access the data from the fruit class. But I was able to come up with two hacky ways of getting the output. 
Declaring the variables in global scope: #include <iostream> using namespace std; int apples = 0, mangoes = 0; int total_fruits = 0; class fruit { public: void input_fruits() { cout << "Enter the number of apples : "; cin >> apples; cout << "Enter the number of mangoes : "; cin >> mangoes; } void calculate_total() { total_fruits = apples + mangoes; cout << "The total fruits in the basket are : " << total_fruits << endl; } }; class apple : public fruit { public: void show_apples() { cout << "The number of apples in the basket is : " << apples << endl; } }; class mango : public fruit { public: void show_mangoes() { cout << "The number of mangoes in the basket is : " << mangoes << endl; } }; int main() { apple a1; mango m1; a1.input_fruits(); a1.show_apples(); m1.show_mangoes(); a1.calculate_total(); return 0; } Output: Inheriting apple class instead of mango: #include <iostream> using namespace std; class fruit { public: int apples = 0, mangoes = 0; int total_fruits = 0; void input_fruits() { cout << "Enter the number of apples : "; cin >> apples; cout << "Enter the number of mangoes : "; cin >> mangoes; } void calculate_total() { total_fruits = apples + mangoes; cout << "The total fruits in the basket are : " << total_fruits << endl; } }; class apple : public fruit { public: void show_apples() { cout << "The number of apples in the basket is : " << apples << endl; } }; class mango : public apple { public: void show_mangoes() { cout << "The number of mangoes in the basket is : " << mangoes << endl; } }; int main() { mango m1; m1.input_fruits(); m1.show_apples(); m1.show_mangoes(); m1.calculate_total(); return 0; } Ouput : Can you please explain why the mango class was unable to access the variable from the parent class in the first version of the code. Any suggestions for better approach are also appreciated.
When show_mangoes() function is called it is returning the previously declared value of the mangoes i.e. 0 The problem is that you never called m1.input_fruits() for m1, while for a1 you did call a1.input_fruits(). Remember that a1 and m1 are two separate objects, each with its own copy of the inherited data members, so filling in a1 does not affect m1. And since you never called input_fruits on m1, its data member mangoes still has the value (0) from the in-class initializer. So to solve this just call input_fruits() on m1 and it will give the desired output (which is the value entered by the user), as shown below. Working demo int main() { apple a1; mango m1; a1.input_fruits(); a1.show_apples(); m1.input_fruits(); //ADDED THIS which was missing before in your example 1 m1.show_mangoes(); a1.calculate_total(); }
72,298,878
72,298,929
Initialize array on the heap without specifying its length
I was reading Bjarne Stroustrup's Programming Principles and Practice Using C++ (second edition). On page 597: double* p5 = new double[] {0,1,2,3,4}; ...; the number of elements can be left out when a set of elements is provided. I typed the code above into Visual Studio 2022, and I get red underlines saying that "incomplete type is not allowed" (same thing happens when I define p5 as a data member), nevertheless, the code compiles and runs successfully. May I ask if it is fine to define array in such way? If so, why would Visual Studio show those red underlines...?
May I ask if it is fine to define array in such way? Yes, starting from C++11 it is valid. From new expression's documentation: double* p = new double[]{1,2,3}; // creates an array of type double[3] This means in your example: double* p5 = new double[] {0,1,2,3,4}; creates an array of type double[5]. Demo Note This was proposed in p1009r2.
72,298,972
72,299,071
How to use Insert in Set for Custom Data Type ? C++
class Game() { void add(set<Velocity> & v); } class Velocity() { private: // Member Variables public: // Constructors and methods } void Game::add(set<Velocity> &velocities) { Velocity v; v.setVelocity(); v.setSource(); velocities.insert(v); } As you can see I have a custom class called Game and it has a public method called Add which adds a velocity object to the set. When the insert(v) code executes it throws me an error: invalid operands to binary expression ('const Velocity' and 'const Velocity') {return __x < __y;} I am not sure how to fix this, I would appreciate any help or suggestions. Thanks a bunch.
In std::set... sorting is done using the key comparison function... You need to look toward something like this: bool operator<(const Velocity&, const Velocity&); class Velocity { friend bool operator<(const Velocity&, const Velocity&); private: unsigned velocity_value; // ... }; bool operator<(const Velocity& a, const Velocity& b) { return a.velocity_value < b.velocity_value; } Note, however, that in this example it won't be possible to have two different elements of type Velocity with the same velocity_value, since... std::set is an associative container that contains a sorted set of unique objects of type Key If you need to add all the supplied Velocity objects to an instance of the Game, you may need to reconsider the choice of the container, or some other means of comparison.
72,299,026
72,300,411
Convert if constexpr based C++17 templatized code to C++14
I am working on downgrading a project written in C++ 17 to C++ 14. While downgrading, I came across a piece of code involving if constexpr and I wish to convert it to C++ 14 (From what I know, if constexpr is a C++ 17 feature). Boost's is_detected is used to check if a given type has star operator or get method. #include <iostream> #include <boost/type_traits/is_detected.hpp> #include <type_traits> #include <boost/optional/optional.hpp> #include <memory> #include <typeinfo> template < template < typename... > typename Operation, typename... Args > constexpr bool is_detected_v = boost::is_detected< Operation, Args... >::value; template < typename T > using has_star_operator = decltype( *std::declval< T >( ) ); template < typename T > using has_get_method = decltype( std::declval< T >( ).get( ) ); There is a function call deref which is used to dereference types like pointers, arrays, iterators, smart pointers, etc. template < typename T > inline constexpr const auto& deref( const T& value ) { if constexpr ( is_detected_v< has_star_operator, T > ) { return deref( *value ); } else if constexpr ( is_detected_v< has_get_method, T > ) { return deref( value.get( ) ); } else { return value; } } I tried to form a solution without if constexpr by using std::enable_if as below: template <typename T> typename std::enable_if< !is_detected_v<has_get_method, T> && is_detected_v<has_star_operator, T>, decltype( *std::declval< const T >( ) )>::type deref(const T& value) { std::cout << "STAR " << typeid(*value).name() << std::endl; return *value; } template <typename T> typename std::enable_if< is_detected_v<has_get_method, T>, decltype( std::declval< const T >( ).get( ) ) >::type deref(const T& value) { std::cout << "GET " << typeid(value.get()).name() << std::endl; return value.get(); } template <typename T> typename std::enable_if< !is_detected_v<has_get_method, T> && !is_detected_v<has_star_operator, T>, const T>::type deref(const T& value) { std::cout << "NONE\n"; return value; 
} int main() { int VALUE = 42; boost::optional<int> optional_value = boost::make_optional(VALUE); int a = 42; int *b = &a; const int array[ 4 ] = {VALUE, 0, 0, 0}; //const auto list = {std::make_unique< int >( VALUE ), std::make_unique< int >( 0 ), // std::make_unique< int >( 0 )}; //const auto iterator = list.begin( ); //std::unique_ptr<int> u = std::make_unique< int >( VALUE ); std::cout << deref(a) << std::endl; std::cout << deref(optional_value) << std::endl; std::cout << deref(b) << std::endl; std::cout << deref(array) << std::endl; //std::cout << deref(iterator) << std::endl; //std::cout << deref(u) << std::endl; } But, the above fails for cases like iterators and smart pointers where multiple dereference has to be made. For example, for a std::unique_ptr, first p.get() will be called (auto q = p.get()) followed by star operator (*q). I am a beginner with templates and require some help in this. Please let me know how this can be solved. I am using GCC 5.4 to compile.
How about a solution exploiting tag dispatch? The idea is to move the code from your branches to three auxiliary functions. These functions are overloaded on the last parameter, whose only purpose is to allow you calling the right one later on: template <typename T> constexpr const auto& deref(const T& value); template <typename T> constexpr const auto& deref(const T& value, std::integral_constant<int, 0>) { return deref(*value); } template <typename T> constexpr const auto& deref(const T& value, std::integral_constant<int, 1>) { return deref(value.get()); } template <typename T> constexpr const auto& deref(const T& value, std::integral_constant<int, 2>) { return value; } template <typename T> constexpr const auto& deref(const T& value) { using dispatch_t = std::integral_constant< int, is_detected_v<has_star_operator, T> ? 0 : (is_detected_v<has_get_method, T> ? 1 : 2)>; return deref(value, dispatch_t{}); } With the above implementation, the following compiles: int main() { int VALUE = 42; boost::optional<int> optional_value = boost::make_optional(VALUE); int a = 42; int* b = &a; const int array[4] = {VALUE, 0, 0, 0}; const auto list = {std::make_unique<int>(VALUE), std::make_unique<int>(0), std::make_unique<int>(0)}; const auto iterator = list.begin(); std::unique_ptr<int> u = std::make_unique<int>(VALUE); std::cout << deref(a) << std::endl; std::cout << deref(optional_value) << std::endl; std::cout << deref(b) << std::endl; std::cout << deref(array) << std::endl; std::cout << deref(iterator) << std::endl; std::cout << deref(u) << std::endl; } and outputs: 42 42 42 42 42 42 Also note that, until C++14, when declaring a template parameter that's a template itself, the syntax is template <template <typename...> class Operation, typename... Args> // ^ class: you can use typename since C++17 constexpr bool is_detected_v = boost::is_detected<Operation, Args...>::value;
72,299,327
72,302,561
Why do we need to specify namespace if we also need to include standard library headers?
Completely new to C++, but have done some work in C. Have just seen the Hello, World example: #include <iostream> int main() { std::cout << "Hello, World!" << std::endl; return 0; } My question is why we must specify that cout is from the standard library, when I have already included the declarations for cout from the iostream header file? I suspect that it's so that if we had another header file, say myFirstHeader.h, which also had a cout identifier, it would avoid ambiguity about which cout is being used? Appreciate any help or redirection.
Generally namespaces prevent name clashes between different modules or libraries. Now you might say that the std namespace is the standard that everyone uses, so nobody should name their variables, classes or functions to clash with the standard. But that is short-sighted. What is used in today's standard is not the same as yesterday's standard or tomorrow's standard. The standard changes and things are added over time (and sometimes removed). So any variable, class or function you use today could have a name clash tomorrow. Having the namespace std avoids that problem because it will never clash with anything you define outside the namespace std. Plus std::... automatically tells me where to find the documentation when I see some unknown thing, and tells me it's not something thought up by the project I'm looking at.
72,299,536
72,299,612
Reference over array into array of reference
I have an array std::array<T, N> arr for some T, N and I'd like to get an array of reference over arr's elements like so std::array<std::reference_wrapper<T>, N> arr_ref. But as a reference needs to be set at its initialization, I did not work out a solution. Therefore I would like to do something like that: std::array<std::reference_wrapper<T>, N> ref{} for (std::size_t i{0}; i < N; ++i) ref[i] = arr[i]; But at compile-time and at the initialization of ref. I thought of using some variadic template magic to convert my initial array to a parameter pack and then take a reference to each element but I am not sure this is possible. My last option would be an array of raw ptrs or of std::optional<std::reference_wrapper<T>>.
#include <array> #include <functional> #include <utility> #include <cstddef> template<typename x_Item, ::std::size_t x_count, ::std::size_t... x_index___> auto wrap_impl(::std::array<x_Item, x_count> & items, ::std::index_sequence<x_index___...>) { return ::std::array<::std::reference_wrapper<x_Item>, x_count>{items[x_index___]...}; } template<typename x_Item, ::std::size_t x_count> auto wrap(::std::array<x_Item, x_count> & items) { return wrap_impl(items, ::std::make_index_sequence<x_count>{}); } #include <iostream> int main() { ::std::array items{1, 2, 3}; auto wrapped_items{wrap(items)}; for (auto & wrapped_item: wrapped_items) { ::std::cout << wrapped_item.get() << ::std::endl; } return 0; } online compiler
72,300,721
72,300,795
How to sort diferent type of lists with template in C++
In my homework my task is to create the FooCl class for these: double d[] = {1.3, 0.7, 2.4, 1.5, 6.2, 5.7, 8.6, 9.1}; FooCl<double> itemsD(d, sizeof(d) / sizeof(d[0])); std::string s[] = {"C++", "Haskell", "Python", "Java"}; FooCl<std::string> itemsS(s, sizeof(s) / sizeof(s[0])); itemsD.mySort(); itemsS.mySort(); I made a constructor/destructor for it, but I don't know how to create two different functions with templates for the two different types of lists. I think I would need to use some kind of overloading but don't know how. template <typename T> class FooCl { private: T *mItems; int mItemsSize; public: FooCl(T items[], int itemsSize) { mItems = new T[itemsSize]; for (int i=0; i<itemsSize; ++i) { this->mItems[i] = items[i]; } this->mItemsSize = itemsSize; }; ~FooCl() { delete[] mItems; } void mySort() { //I have no idea how to write this function, so it can sort two different types of lists. } };
One way is to use std::sort as shown below: void mySort() { //--vvvvvvvvv------------------------------------>use std::sort std::sort(mItems, mItems + mItemsSize); } Remember to #include <algorithm> for std::sort. You could also write your own sort implementation by hand, operating directly on mItems and mItemsSize.
72,300,831
72,381,813
Is there a (portable) way to detect layout change in C++ classes?
For example say I have a class Foo with a typical serialization pattern. struct Foo { int a; bool b; template <typename Archive> void serialize(Archive & ar) { ar & a; ar & b; } }; Now suppose somebody comes along and adds a field. struct Foo { int a; bool b; std::string c; template <typename Archive> void serialize(Archive & ar) { ar & a; ar & b; } }; This compiles just fine but the Archive method is wrong. I could add something like namespace detail { template <int a, int b> void AssertSizeOfImplementation() { static_assert(a == b, "Size does not match (check stack trace)"); } } template <typename T, int size> void AssertSizeOf() { detail::AssertSizeOfImplementation<sizeof(T), size>(); } struct Foo { int a; bool b; template <typename Archive> void serialize(Archive& ar) { ar& a; ar& b; AssertSizeOf<Foo,8>(); } }; and now if I add the extra field I will see a compile error : In instantiation of 'void detail::AssertSizeOfImplementation() [with int a = 40; int b = 8]': I can then fix my serialization code and update the assertion. However this is not very portable. sizeof will return different results depending on packing and architecture. Are there any other alternatives to using sizeof? ( play with this on godbolt https://godbolt.org/z/fMo8ETjnr )
There is a cool library in boost: boost pfr. I've never used it in a real project (just some toys), but it seems to work quite well: struct Foo { int a; bool b; template <typename Archive> void serialize(Archive& ar, const unsigned int) { boost::pfr::for_each_field(*this, [&ar](const auto& field) { ar& field; }); } }; Live demo. Now any change in the list of fields will be taken into account without any need to alter the serialization code. I tried it on godbolt, but I failed to resolve the linking issues (it can't find the boost libraries), so I couldn't actually run it there.
72,301,113
72,302,798
does mkl_vml_serv_threader in the gprofile means MKL is not running sequentially
We're running an application that's in the process of being MKL BLAS enhaced. We've been told not to hyperthread. In order for multithreaded (so-called parallel?) version to not be considered during compilation, i.e. to disable hyperthreading but only wanting MKL sequential vectorization, we removed the threaded library from the FindMKL Cmake file. The compiler was icc 2019. In order to disable multithreading at runtime we launched the tasks in slurm setting --threads-per-core=1 in the slurmfile. Yet we are not sure how to double-check that MKL is only running sequentially, so we collected a (summed over 4 cores, single cluster node) profile w/ gprof. The following functions appear on the flat profile albeit consuming less than 0.3% each. Are they evidence to support the idea that MKL is hyperthreading, i.e. "not running in sequential mode"? mkl_vml_serv_threader_d_2iI_1oI mkl_vml_serv_threader_d_1i_1o mkl_vml_serv_threader_d_1iI_1oI mkl_vml_serv_threader_d_2i_1o
By default, Intel® oneAPI Math Kernel Library uses a number of OpenMP threads equal to the number of physical cores on the system, and it runs on all the available physical cores unless one of the options mentioned below is used. Intel compilers like icc (latest) have a compiler option -qmkl=[lib], where lib indicates which library files should be linked; the values are as follows. parallel: Tells the compiler to link using the threaded libraries in oneMKL. This is the default if the option is specified with no lib. sequential: Tells the compiler to link using the sequential libraries in oneMKL. cluster: Tells the compiler to link using the cluster-specific libraries and the sequential libraries in oneMKL. So if you want to run it sequentially, use -qmkl=sequential. Since you are using icc 2019, check icc --help and search for the options (I guess it is -mkl, not -qmkl). Additionally, you can also make use of the link line advisor tool (https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html?wapkw=link%20line%20advisor#gs.0myxfc) which helps you to see the required libraries specific to your use case. As mentioned in the comments, using MKL_VERBOSE=1 helps to get details about the version of MKL, the parameters to the MKL calls, the time taken by each function, and also NThr, which indicates the number of threads, among other details; you can refer to the given link. eg: MKL_VERBOSE=1 ./a.out
72,301,682
72,301,781
c++ Array class template with template parameters
i have Created an Array class template with template parameters <element type, size > and array class members, input, sort, and output functions. but code does not work below what might i be doing wrong? #include <iostream> using namespace std; template <class T, int n> class array { T mass[n]; public: void input(); void output(); void sort(); }; template <class T, int n> void array < T, n > ::input() { for (int i = 0; i < n; i++) cin >> mass[i]; } template <class T, int n> void array < T, n > ::output() { for (int i = 0; i < n; i++) cout << mass[i] << '\0'; } template <class T, int n> void array < T, n > ::sort()[T x; int p = 1, m = n; while (p) { p = 0; for (int i = 0; i < m - 1; i++) if (mas[i] > mas[i + 1]) [x = mass[i]; mass[i] = mass[i + 1]; mass[i + 1] = x; p = 1; } m--; } } int main() { array < int, 10 > a; array < float, 5 > b; a.input(); a.sort(); a.output(); b.input(); b.sort(); b.output(); return 0; } i get the following compiler error what might i be doing wrong in this code ? 25 | void array < T, n > ::sort()[T x; int p = 1, m = n; | ^ /tmp/ZOMErK6tKN.cpp:25:33: error: expected ']' before 'x' 25 | void array < T, n > ::sort()[T x; int p = 1, m = n; | ^~ | ] /tmp/ZOMErK6tKN.cpp:25:52: error: 'n' was not declared in this scope 25 | void array < T, n > ::sort()[T x; int p = 1, m = n; | ^ /tmp/ZOMErK6tKN.cpp:26:5: error: expected unqualified-id before 'while' 26 | while (p) { | ^~~~~ /tmp/ZOMErK6tKN.cpp:32:7: error: 'm' does not name a type 32 | m--; | ^ /tmp/ZOMErK6tKN.cpp:33:5: error: expected declaration before '}' token 33 | } | ^ /tmp/ZOMErK6tKN.cpp:34:3: error: expected declaration before '}' token 34 | }
You have some typos in your code. In particular, you have used [ instead of { and mas instead of mass. These are corrected and highlighted using comments in the code below: template <class T, int n> //----------------------------v------------------->[ changed to { void array < T, n > ::sort(){T x; int p = 1, m = n; while (p) { p = 0; for (int i = 0; i < m - 1; i++) //----------vvvv------vvvv----------------------->mas changed to mass if (mass[i] > mass[i + 1]) //--------v-------------------------------------->[ changed to { {x = mass[i]; mass[i] = mass[i + 1]; mass[i + 1] = x; p = 1; } m--; } }
72,302,070
72,302,201
Set the bounds of an array after object initialisation in cpp
I'm working on an image renderer in C++ that I wrote from scratch (I don't want to use anything but standard libraries), but I'm having some trouble when trying to store the image. The class I use to store images looks like this: class RawImage { private: RGB pixels[][][3] = {}; public: int width = 0; int height = 0; RawImage(int width, int height) { this->width = width; this->height = height; }; RGB GetPixel(int x, int y) { if (x < 0 || x > width - 1) return RGB(0.f, 0.f, 0.f); if (y < 0 || y > height - 1) return RGB(0.f, 0.f, 0.f); return pixels[x][y]; }; int SetPixel(int x, int y, RGB color) { if (x < 0 || x > width - 1) return -1; if (y < 0 || y > height - 1) return -1; this->pixels[x][y] = color; return 0; } }; When I try to compile this code, the g++ compiler gives the following error: declaration of ‘pixels’ as multidimensional array must have bounds for all dimensions except the first. How do I use a multidimensional array of which the 2 first dimensions vary in size, but the third dimension is of a fixed size?
Set the bounds of an array after object initialisation in cpp The size of an array never changes through its lifetime. It's set upon creation. Technically this isn't a problem for you because you can initialise the array in the constructor. But the size of an array variable must be a compile-time constant, so you cannot accept the size as a constructor parameter. You can use a dynamic array instead. The most convenient way is to use std::vector.
72,302,797
72,302,831
Why &x[0]+x.size() instead of &x[x.size()]?
I'm reading A Tour of C++ (2nd edition) and I came across this code (6.2 Parameterized Types): template<typename T> T* end(Vector<T>& x) {     return x.size() ? &x[0]+x.size() : nullptr;     // pointer to one-past-last element } I don't understand why we use &x[0]+x.size() instead of &x[x.size()]. Does it mean that we take the address of the first element in x and just add to that number x.size() bytes?
&x[x.size()] would result in (attempting to) take the address of x[x.size()]. However x[x.size()] attempts to access an out-of-bounds element; depending on the API of Vector<T>::operator[] for the particular T, a number of bad things could happen: | Vector<T>::operator[] semantics: return type \ contract | unchecked | checked | | reference | UB (1) | FH | | value | UB (2) | FH | with UB (1): undefined behavior when creating a reference to an out-of-range element. UB (2): undefined behaviour when attempting to read the value of an out-of-range element. FH: some fault-handling action from the API if it is checked (e.g. throwing an exception, terminating, ...). For std::vector, as an example, you would run into UB (1) as its operator[] is unchecked and returns a reference type. Whilst you may perform pointer arithmetic to compute a pointer to one-past-last (semantically end()) of a buffer, you may not dereference a one-past-last pointer.
72,302,914
72,303,868
Creating list of unique_ptr using initialization list and make_unique fails in GCC 5.4
I am using GCC 5.4 for compiling a test program in C++ 14. #include <type_traits> #include <list> #include <iostream> #include <memory> int main() { int VALUE = 42; const auto list_ = { std::make_unique<int>(VALUE), std::make_unique<int>(0), std::make_unique<int>(0) }; } GCC 5.4 fails with the below error message: <source>: In function 'int main()': <source>:13:5: error: use of deleted function 'std::unique_ptr<_Tp, _Dp>::unique_ptr(const std::unique_ptr<_Tp, _Dp>&) [with _Tp = int; _Dp = std::default_delete<int>]' }; ^ In file included from /opt/compiler-explorer/gcc-5.4.0/include/c++/5.4.0/memory:81:0, from <source>:4: /opt/compiler-explorer/gcc-5.4.0/include/c++/5.4.0/bits/unique_ptr.h:356:7: note: declared here unique_ptr(const unique_ptr&) = delete; ^ The same code compiles properly with Clang 3.5. See https://godbolt.org/z/PM776xGP4 The issue seems to be there until GCC 9.2 where it compiles properly. Is this a known bug in GCC 9.1 and below? If yes, is there a way to solve this using the initializer list?
You can use: const std::initializer_list<std::unique_ptr<int>> list{ std::make_unique< int >( 42 ), std::make_unique< int >( 0 ), std::make_unique< int >( 0 ) }; Demo (old gcc-5.4 tested).
72,304,199
72,304,328
No matching Constructor Error For Initialization in c++. Whats wrong with my constructor?
I have a class in a header file as follows: #include <iostream> #include <string> #include <sstream> using namespace std; class ShowTicket { public: bool is_sold(void){ if (sold_status == true){ return true; } else{ return false; } } void sell_seat(void){ sold_status = true; } string print_ticket(void){ ostringstream sout; if(sold_status == true){ sout<<row<<" "<<seat_number<<"sold"; } else{ sout<<row<<" "<<seat_number<<"available"; } return sout.str(); } bool sold_status; const char* row; const char* seat_number; ShowTicket(const char* Row, const char* SeatNumber): sold_status{false}, row(Row), seat_number(SeatNumber) {} }; The main function to test this class is as follows: #include <iostream> #include <string> #include <sstream> #include "showticket.h" using namespace std; int main () { ShowTicket myticket1("AA","101"); ShowTicket myticket2("AA","102"); if(!myticket1.is_sold()) myticket1.sell_seat (); cout << myticket1.print_ticket() << endl; cout << myticket2.print_ticket() << endl; return 0; } When myticket 1 and 2 are created there is an error "No matching constructor for initialization of 'ShowTicket" but I believe my constructor accepts these parameters so I'm not sure how to resolve. Any advice would be appreciated.
By the looks of it, you only want to supply two arguments, but your constructor requires three: ShowTicket(const char* row, const char* seat_number, bool sold_status){ sold_status = false; } The body of the constructor makes me believe that you want to initialize a newly created ShowTicket with sold_status set to false. You can do that directly in the member initializer list: ShowTicket(const char* row, const char* seat_number) : // note the colon sold_status{false} // <- here {} If that's not the case, you can make sold_status have a default value, below it's set to false: ShowTicket(const char* row, const char* seat_number, bool sold_status = false) : sold_status{sold_status} {} I also recommend using different names for the arguments and the member variables. It can easily get messy otherwise: ShowTicket(const char* row, const char* seat_number, bool SoldStatus = false) : sold_status{SoldStatus} {} Also note that you don't actually allocate memory for and save the row and seat_number in the constructor. Calling your print_ticket function will therefore cause undefined behavior. I recommend that you replace the raw pointers with std::strings: std::string row; std::string seat_number; You can now save it all in the constructor and not have to worry about memory management: ShowTicket(const char* Row, const char* SeatNumber, bool SoldStatus = false) : sold_status{SoldStatus}, row(Row), seat_number(SeatNumber) {} You may also want to consider taking the row and seat_number arguments as const std::string&s or std::string_views (since C++17) to make it easier to work with in general.
72,304,669
72,305,262
Z-Function and unique substrings: broken algorithm parroted everywhere?
I am not a huge math nerd so I may easily be missing something, but let's take the algorithm from https://cp-algorithms.com/string/z-function.html and try to apply it to, say, the string baz. This string definitely has the substring set 'b', 'a', 'z', 'ba', 'az', 'baz'.

Let's see how the z-function works (at least how I understand it): we take an empty string and add 'b' to it. By definition of the algorithm, z[0] = 0 since it's undefined for size 1. We take 'b' and add 'a' to it, invert the string, and we have 'ab'... now we calculate the z-function... and it produces {0, 0}. The first element is "undefined", as is supposed; the second element should be defined as:

    i-th element is equal to the greatest number of characters starting from the position i that coincide with the first characters of s.

So, at i = 1 we have 'b', our string starts with 'a', and 'b' doesn't coincide with 'a', so of course z[i=1] = 0. And this will be repeated for the whole word. In the end we are left with a z-array of all zeroes that doesn't tell us anything, despite the string having 6 substrings.

Am I missing something? There are tons of websites recommending the z-function for counting distinct substrings, but it... doesn't work? Am I misunderstanding the meaning of distinct here?

See test case: https://pastebin.com/mFDrSvtm
When you add a character x to the beginning of a string S, all the substrings of S are still substrings of xS, but how many new substrings do you get? The new substrings are all prefixes of xS. There are length(xS) of these, but max(Z(xS)) of them are already substrings of S, so you get length(xS) - max(Z(xS)) new ones.

So, given a string S, just add up length(P) - max(Z(P)) for every suffix P of S.

Your test case baz has 3 suffixes: z, az, and baz. All the letters are distinct, so their Z functions are zero everywhere. The result is that the number of distinct substrings is just the sum of the suffix lengths: 3 + 2 + 1 = 6.

Try baa: the only non-zero entry in the Z functions is Z('aa')[1] = 1, so the number of unique substrings is 3 + 2 - 1 + 1 = 5.

Note that the article you linked to mentions that this is an O(n^2) algorithm. That is correct, although its overhead is low. It's possible to do this in O(n) time by building a suffix tree, but that is quite complicated.
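The suffix-summing recipe above can be sketched directly (function names here are mine; the z-function is the standard implementation from the linked article):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Standard Z-function: z[i] = length of the longest common prefix of s and s.substr(i).
std::vector<int> z_function(const std::string& s) {
    int n = static_cast<int>(s.size());
    std::vector<int> z(n, 0);
    for (int i = 1, l = 0, r = 0; i < n; ++i) {
        if (i < r) z[i] = std::min(r - i, z[i - l]);
        while (i + z[i] < n && s[z[i]] == s[i + z[i]]) ++z[i];
        if (i + z[i] > r) { l = i; r = i + z[i]; }
    }
    return z;
}

// For every suffix P of s, the number of new substrings contributed by P's
// prefixes is length(P) - max(Z(P)); summing over all suffixes counts the
// distinct substrings. O(n^2) overall, matching the article.
long long count_distinct_substrings(const std::string& s) {
    long long total = 0;
    for (std::size_t start = 0; start < s.size(); ++start) {
        std::string p = s.substr(start);
        std::vector<int> z = z_function(p);
        int zmax = *std::max_element(z.begin(), z.end());
        total += static_cast<long long>(p.size()) - zmax;
    }
    return total;
}
```

For "baz" this yields 6 and for "baa" it yields 5, matching the hand computations above.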
72,304,983
72,309,003
Casting structs to add definition to a shared-memory block in a SIMD application
I am building an application that requires the use of a large block of shared memory of type double. This block needs to be byte-aligned to ensure proper loading into SIMD registers. For example:

    double *ptr_x = (double *)_mm_malloc(sizeof(double) * 40, 32);

Internally, there are several calculations that use the allocated memory (this is where the SIMD processing comes in). It is more convenient to use variable names with appropriate class functions to make the code legible. Rather than performing the calculations and then moving the values to this memory block, I want to use local variables to make the calculations but have those variables point back to the memory.

One way I have tried is to form data structures like:

    struct Position{
        double xCoord;
        double yCoord;
        double zCoord;
        double zeroPad;
    };

    struct Velocity{
        double xCoord;
        double yCoord;
        double zCoord;
        double zeroPad;
    };

and then define pointers to these structs and reinterpret_cast pointers from the block of memory as follows:

    Position *posCar;
    Velocity *velCar;

    posCar = reinterpret_cast<Position*>(ptr_x + 16);
    velCar = reinterpret_cast<Velocity*>(ptr_x + 20);

Is there a preferred way of performing this mapping? Is this compiler-safe? For this case, the structs are always of type double and come in groups of 4 to match the __m256d vector definition. I'd appreciate any insight into a preferred approach, or the wisdom of experience about issues that may crop up.
Is there a preferred way to performing this mapping?

It's subjective. C++ books say the preferred one is reinterpret_cast, like you are doing. Personally, I think C-style casts like (Position*)( ptr_x + 16 ) are more readable. Also, if you have these things at sequential addresses, consider defining a larger structure with both position and velocity.

Is this compiler safe?

I think the language standard says "undefined behavior". In reality, on AMD64 processors this works fine in all 4 major compilers.

And one more thing:

    I want to use the local variables to make the calculations but have those variables pointing back to the memory.

You can, but if these calculations are complicated and involve several steps, consider the performance implications. Memory is generally slow, several orders of magnitude slower than registers. For optimal performance, you should do something like this:

    __m256d pos = _mm256_loadu_pd( &posCar->xCoord );
    __m256d vel = _mm256_loadu_pd( &velCar->xCoord );
    // ..update these vectors somehow
    _mm256_storeu_pd( &posCar->xCoord, pos );
    _mm256_storeu_pd( &velCar->xCoord, vel );
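The overlay itself can be checked without any intrinsics. A plain-C++ sketch (with alignas standing in for _mm_malloc, and the offsets taken from the question — note the strict-aliasing caveat above still applies, even though this works on the major compilers):

```cpp
#include <cstddef>

struct Position { double xCoord, yCoord, zCoord, zeroPad; };
struct Velocity { double xCoord, yCoord, zCoord, zeroPad; };

// The structs must be exactly four doubles wide for the offsets to line up
// with the raw buffer; static_assert catches any surprise padding.
static_assert(sizeof(Position) == 4 * sizeof(double), "unexpected padding");
static_assert(sizeof(Velocity) == 4 * sizeof(double), "unexpected padding");

alignas(32) double ptr_x[40];  // stand-in for _mm_malloc(sizeof(double) * 40, 32)

Position* posCar = reinterpret_cast<Position*>(ptr_x + 16);
Velocity* velCar = reinterpret_cast<Velocity*>(ptr_x + 20);
```

Writes through the struct pointers then land at the expected slots of the shared block (e.g. posCar->xCoord is ptr_x[16], velCar->zCoord is ptr_x[22]).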
72,305,082
72,305,353
Using a self calling function within a while loop C++
I am new to C++ and I am exploring the behavior of a self-calling (recursive) function used inside a while loop. Here is the code I wrote:

    #include <iostream>
    using namespace std;

    void self_calling_function(int);

    void self_calling_function(int i){
        cout << "Inside function :" << i ;
        while(i < 5){
            i++;
            cout << " I: " << i << endl;
            self_calling_function(i);
        }
    };

    int main() {
        int i = 0;
        cout << "Hello world!" << endl;
        self_calling_function(i);
        return 0;
    }

Basically, I was expecting that once execution enters the while loop and subsequently calls the function (which calls back into itself), program execution should return to the original while iterations. I can predict what happens if this were Java, but I am unable to understand the result given below in C++.

Edit 1: The behavior I am expecting is that the state/value of i should be bound to the scope of its caller. Meaning, say for the very first iteration of the while loop the value of i changes from 0 to 1, and 1 is passed to the function. The next value of i in the sequence is 2. But since we are calling the function, which again changes i via the new while loop, the state/value of i of the original while sequence is changed.

Edit 2: Here is the expected output instead:

    Hello world!
    Inside function :0 I: 1
    Inside function :1 I: 2
    Inside function :2 I: 3
    Inside function :3 I: 4
    Inside function :4 I: 5
    // after this the code shouldn't go into the while as i is not < 5
    Inside function :2 I: 3
    // here we are continuing with the original while sequence i = 2 (which got lost)
    Inside function :3 I: 4
    Inside function :4 I: 5
    // after this the code shouldn't go into the while as i is not < 5
    Inside function :3 I: 4
    // here we are continuing with the original while sequence i = 3
    Inside function :4 I: 5
    // after this the code shouldn't go into the while as i is not < 5
    Inside function :4 I: 5
    // here we are continuing with the original while sequence i = 4, and after this the code shouldn't go into the while as i = 5 is not < 5
Looking at your comments, I think you wanted the recursive calls to end once 5 is reached. This can be done by making the parameter of the function a reference type, as shown below:

    //-----------------------------v----->i is an lvalue reference to non-const int
    void self_calling_function(int &i){
        cout << "Inside function :" << i ;
        while(i < 5){
            i++;
            cout << " I: " << i << endl;
            self_calling_function(i);
        }
    };

    int main() {
        int i = 0;
        cout << "Hello world!" << endl;
        self_calling_function(i); // pass i by reference
        return 0;
    }

The output of the above program is:

    Hello world!
    Inside function :0 I: 1
    Inside function :1 I: 2
    Inside function :2 I: 3
    Inside function :3 I: 4
    Inside function :4 I: 5
    Inside function :5
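One way to see the difference concretely is to count how many times the function is entered under each parameter-passing style (a small instrumentation sketch I added, not part of the original code; printing is stripped out to keep only the counting):

```cpp
static int calls = 0;  // incremented on every function entry

// Pass by value: every recursive call gets its own copy of i, so the
// caller's loop resumes from its old value and the call tree fans out.
void by_value(int i) {
    ++calls;
    while (i < 5) { ++i; by_value(i); }
}

// Pass by reference: all levels share one i, so once it reaches 5
// every pending loop condition fails and the recursion unwinds.
void by_reference(int& i) {
    ++calls;
    while (i < 5) { ++i; by_reference(i); }
}
```

Starting from 0, the by-value version is entered 32 times, while the by-reference version is entered only 6 times and leaves i at 5 — which matches the expected output in the question.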
72,305,152
72,320,526
Error trying to build a qml lib for Android
I'm using a QML lib with a Qt C++ project, and it works well on Linux and Windows, but when I try to build for Android I receive this error:

    make: *** No rule to make target 'install'. Stop.
    09:40:07: The process "/home/ysaakue/Android/Sdk/ndk/21.3.6528147/prebuilt/linux-x86_64/bin/make" exited with code 2.
    Error while building/deploying project IracemaCharts (kit: Android Qt 5.15.2 Clang Multi-Abi)
    When executing step "Copy application data"
    09:40:07: Elapsed time: 00:01.

This is my build config, and this is my Android device config:

EDIT: this is my .pro file:

    TEMPLATE = lib
    TARGET = IracemaCharts
    QT += qml quick
    CONFIG += plugin c++11 qmltypes

    QML_IMPORT_NAME = IracemaCharts
    QML_IMPORT_MAJOR_VERSION = 0
    QML_IMPORT_MINOR_VERSION = 1

    TARGET = $$qtLibraryTarget($$TARGET)
    uri = IracemaCharts

    # Input
    SOURCES += \
        iracemalineseries.cpp \
        iracemacharts_plugin.cpp \
        iracemacharts.cpp \
        iracemalineseriesview.cpp

    HEADERS += \
        iracemalineseries.h \
        iracemacharts_plugin.h \
        iracemacharts.h \
        iracemalineseriesview.h

    DISTFILES = qmldir

    !equals(_PRO_FILE_PWD_, $$OUT_PWD) {
        copy_qmldir.target = $$OUT_PWD/qmldir
        copy_qmldir.depends = $$_PRO_FILE_PWD_/qmldir
        copy_qmldir.commands = $(COPY_FILE) "$$replace(copy_qmldir.depends, /, $$QMAKE_DIR_SEP)" "$$replace(copy_qmldir.target, /, $$QMAKE_DIR_SEP)"
        QMAKE_EXTRA_TARGETS += copy_qmldir
        PRE_TARGETDEPS += $$copy_qmldir.target
    }

    qmldir.files = qmldir
    qmldir.files += plugins.qmltypes

    unix {
        installPath = $$[QT_INSTALL_QML]/$$replace(uri, \., /)
        qmldir.path = $$installPath
        target.path = $$installPath
        copy_qmltypes.path = $$installPath
        copy_qmltypes.files = $$OUT_PWD/plugins.qmltypes
        INSTALLS += target qmldir copy_qmltypes
    }

    windows {
        installPath = $$[QT_INSTALL_QML]/$$replace(uri, \., /)
        installPath = $$replace(installPath, /, \\)
        qmldir.path = $$installPath
        target.path = $$installPath
        copy_qmltypes.path = $$installPath
        copy_qmltypes.files = $$OUT_PWD/plugins.qmltypes
        INSTALLS += target qmldir copy_qmltypes
    }

I'm using Qt version 5.15.2.
Apparently the problem was that this project is a lib, and the default build steps include some that copy the files to the target device. In my case that isn't required, so I disabled the last two steps (using the Qt Creator interface as below) and the problem was solved (for now, at least).
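An alternative, if you would rather keep the install steps for desktop builds, is to guard them in the .pro file so they are simply never registered on Android. A hedged sketch (untested on Android; the variable names match the .pro in the question):

```qmake
# Only register the install/copy targets when not building for Android;
# otherwise the Android "Copy application data" step runs `make install`
# against a Makefile that has no install rule.
!android {
    INSTALLS += target qmldir copy_qmltypes
}
```

The idea is the same as disabling the deploy steps in the IDE, but it travels with the project instead of the kit configuration.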
72,305,314
72,306,718
How to link libraries in cmake
I am trying to port my C++ project, which I was developing on Linux, to Windows. I am using CLion, so CMake. This is my CMakeLists.txt:

    cmake_minimum_required(VERSION 3.10) # common to every CLion project
    project(PackMan) # project name

    set(GLM_DIR C:/libs/GLM/glm)
    set(GLAD_DIR C:/libs/GLAD/include)
    include_directories(${GLM_DIR})
    include_directories(${GLAD_DIR})

    find_package(PkgConfig REQUIRED)
    pkg_search_module(GLFW REQUIRED glfw)

    ADD_LIBRARY(mainScr
        scr/Carte.cpp
        scr/Enemy.cpp
        scr/MoveableSquare.cpp
        scr/Palette.cpp
        scr/Player.cpp
        scr/Square.cpp
        scr/Wall.cpp
        scr/glad.c
    )

    add_executable(PackMan scr/main.cpp)
    target_link_libraries(PackMan libglfw3.a)
    target_link_libraries(PackMan mainScr)

Every include folder works fine. I copy-pasted the dll file to the System32 folder inside the Windows folder. Like I said, in my project I have all the external includes; I can see where each definition is made and everything, but it seems like I can't link them with the dll. I get this error when I try to build:

    -- Checking for one of the modules 'glfw'
    CMake Error at C:/Program Files/JetBrains/CLion 2022.1.1/bin/cmake/win/share/cmake-3.22/Modules/FindPkgConfig.cmake:890 (message):
      None of the required 'glfw' found
    Call Stack (most recent call first):
      CMakeLists.txt:12 (pkg_search_module)

    -- Configuring incomplete, errors occurred!
    See also "C:/Users/tanku/Documents/Projects/PackMan/cmake-build-debug/CMakeFiles/CMakeOutput.log".
You are doing that the wrong way: you should use package discovery instead of hard-coding paths. This should go more or less like this:

    find_package(PkgConfig REQUIRED)
    pkg_search_module(GLFW REQUIRED glfw3)

    add_library(mainScr
        scr/Carte.cpp
        scr/Enemy.cpp
        scr/MoveableSquare.cpp
        scr/Palette.cpp
        scr/Player.cpp
        scr/Square.cpp
        scr/Wall.cpp
        scr/glad.c)

    target_link_libraries(mainScr PUBLIC ${GLFW_LIBRARIES})
    target_include_directories(mainScr PUBLIC ${GLFW_INCLUDE_DIRS})

    add_executable(PackMan scr/main.cpp)

This should work if GLFW is properly installed. On Windows you can use vcpkg to manage C++ libraries. This is based on the GLFW documentation — I didn't test it.
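One caveat worth noting: pkg-config itself is often not available on Windows at all, which is what the "None of the required 'glfw' found" error usually means there. A hedged alternative sketch that sidesteps pkg-config entirely is to build GLFW from source via FetchContent (the `glfw` target name comes from GLFW's own CMake build; the tag is just an example release):

```cmake
include(FetchContent)
FetchContent_Declare(glfw
    GIT_REPOSITORY https://github.com/glfw/glfw.git
    GIT_TAG        3.3.7)
FetchContent_MakeAvailable(glfw)

# Link against the imported target instead of a hard-coded libglfw3.a path.
target_link_libraries(PackMan PRIVATE glfw)
```

This makes the build self-contained on both Linux and Windows, at the cost of compiling GLFW once per build directory.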
72,305,432
72,306,849
Ignore comments while parsing a txt file in C++
I have a large text file and I am parsing it using a string stream. The text file looks like this:

    #####
    ##bjhbv nvf vbhjbj vfjbvjf
    *bj
    *bvjbv
    .
    .
    .
    .
    +FILE data I want to parse from here
    .
    .
    .
    .
    -FILE till here
    #shv again comments
    .
    .

How can I parse only the part between +FILE and -FILE? I can parse what is inside it, but I just want to ignore the comments above and below. Please help me figure out how to ignore them while reading or parsing the .txt file. Any leads will be appreciated.
As you can see in the comments, you can discard everything above +FILE. You can use a flag-and-condition method while reading line by line:

    bool flag = false;            // declared outside the loop so it survives between lines
    std::string line;
    while (std::getline(file, line)) {
        // ignore comment lines
        if (!line.empty() && line[0] == '#') {
            continue;
        }
        if (line.rfind("+FILE", 0) == 0) {
            flag = true;
            continue;
        }
        if (line.rfind("-FILE", 0) == 0) {
            flag = false;
            continue;
        }
        if (flag) {
            // parse the data line here
        }
    }

You can also use an enum and array-index method if you have different marker names. If you only care about the first character (as with the '#' comments), checking line.at(0) is enough instead of searching the whole line.
72,305,519
72,305,758
Does the C++ standard guarantee that when the return value of 'rdbuf' is passed to the stream output operator, the content of the buffer gets printed out?
Consider the following code snippet:

    std::stringstream ss;
    ss << "hello world!\n";
    auto a = ss.rdbuf();
    std::cout << a; // prints "hello world!"

The variable a is a pointer to an object of type std::stringbuf. When it is passed to the stream output operator <<, with GCC 9.4, the content of the stream buffer pointed to by a gets printed out. My question is: is this behavior just an accident of the way std::stringbuf is implemented in GCC, or does the language standard guarantee this will always work?
A std::basic_stringbuf is derived from std::basic_streambuf. Cppreference describes its use:

    The I/O stream objects std::basic_istream and std::basic_ostream, as well as all objects derived from them (std::ofstream, std::stringstream, etc), are implemented entirely in terms of std::basic_streambuf.

What does that mean? Well, let's take a look at the overload set for std::basic_ostream::operator<< here:

    basic_ostream& operator<<( std::basic_streambuf<CharT, Traits>* sb ); (10)

    Behaves as an UnformattedOutputFunction. After constructing and checking the sentry object, checks if sb is a null pointer. If it is, executes setstate(badbit) and exits. Otherwise, extracts characters from the input sequence controlled by sb and inserts them into *this until one of the following conditions is met:

    - end-of-file occurs on the input sequence;
    - inserting in the output sequence fails (in which case the character to be inserted is not extracted);
    - an exception occurs (in which case the exception is caught).

    If no characters were inserted, executes setstate(failbit). If an exception was thrown while extracting, sets failbit and, if failbit is set in exceptions(), rethrows the exception.

So, yes, it's guaranteed by the standard that std::cout << ss.rdbuf(); will have the effect you observed.
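The guarantee is easy to exercise without touching std::cout by streaming the buffer into another ostringstream (a small sketch; the function name is mine):

```cpp
#include <sstream>
#include <string>

// Copy a stringstream's buffered content into another stream via rdbuf(),
// relying on overload (10) of basic_ostream::operator<< quoted above.
std::string drain_via_rdbuf() {
    std::stringstream ss;
    ss << "hello world!\n";
    std::ostringstream out;
    out << ss.rdbuf();  // extracts from ss's buffer, inserts into out
    return out.str();
}
```

The returned string is exactly what was written into the original stream, confirming the standard-mandated behavior on any conforming implementation, not just GCC.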
72,305,685
72,306,619
Erasing list elements step by step
I want to erase list elements one by one. Before removing any of the list elements I want to see the whole list.

    #include <iostream>
    #include <list>

    int main() {
        std::list<int>numbers{0,1,2,3,4,5,6,7,8,9};
        auto it=numbers.begin();
        for(int i=0;i<10;i++){
            for(auto j:numbers)
                std::cout<<j<<" ";
            std::cout<<std::endl;
            it=numbers.erase(it);
            it++;
        }
        return 0;
    }

OUTPUT:

    0 1 2 3 4 5 6 7 8 9
    1 2 3 4 5 6 7 8 9
    1 3 4 5 6 7 8 9
    1 3 5 6 7 8 9
    1 3 5 7 8 9
    1 3 5 7 9
    double free or corruption (out)

Why does this process only get through half of the elements? How can I delete all list elements one after another, step by step? I know I could use numbers.clear(), but that's not what I need. Also, why doesn't the erasing go in order? (0 is deleted, then 2, and then 4.)
The issue is with these two lines:

    it=numbers.erase(it);
    it++;

The function list::erase returns an iterator pointing to the element that followed the last element erased. Here, your code removes the item from the list and sets it to the next element in the list. Then the instruction it++ advances the iterator one more place, hence skipping one item in the list. The simple solution is to comment out the it++ line:

    #include <iostream>
    #include <list>

    int main() {
        std::list<int>numbers{0,1,2,3,4,5,6,7,8,9};
        auto it=numbers.begin();
        for(int i=0;i<10;i++) {
            for(auto j:numbers)
                std::cout<<j<<" ";
            std::cout<<std::endl;
            it=numbers.erase(it);
            //it++;
        }
        return 0;
    }

This gives this output:

    0 1 2 3 4 5 6 7 8 9
    1 2 3 4 5 6 7 8 9
    2 3 4 5 6 7 8 9
    3 4 5 6 7 8 9
    4 5 6 7 8 9
    5 6 7 8 9
    6 7 8 9
    7 8 9
    8 9
    9
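If skipping every other element really was the intent, the it++ has to be guarded so the iterator never advances past end() — incrementing the end iterator is undefined behavior, which is what produced the "double free or corruption" crash in the question. A sketch of the safe version (the helper function is mine):

```cpp
#include <list>

// Erase every other element, starting with the first one.
// erase() returns the iterator after the erased element; we then
// advance once more only if we are not already at the end.
std::list<int> erase_every_other(std::list<int> numbers) {
    auto it = numbers.begin();
    while (it != numbers.end()) {
        it = numbers.erase(it);           // element after the erased one
        if (it != numbers.end()) ++it;    // skip one surviving element
    }
    return numbers;
}
```

On {0,1,2,3,4,5,6,7,8,9} this removes 0, 2, 4, 6, 8 and leaves {1, 3, 5, 7, 9} — the same pattern the original output showed before it crashed.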
72,306,477
72,306,597
How to parse floating point number using sscanf
This works well, outputting 1 3:

    std::string s("driver at 1 3");
    int c, d;
    sscanf( s.c_str(), "%*s %*s %d %d", &c, &d );
    std::cout << c <<" "<< d <<"\n";

But this fails, outputting 6.95129e-310 6.95129e-310:

    std::string s("driver at 1 3");
    double c, d;
    sscanf( s.c_str(), "%*s %*s %f %f", &c, &d );
    std::cout << c <<" "<< d <<"\n";

I tried changing the input to std::string s("driver at 1.0 3.0"); but it fails in exactly the same way.

c++17, Windows, MinGW g++ v11.2
You have a bug in your format string. You should increase the warning level your compiler uses: https://godbolt.org/z/Pfd414o45

    #include <string>
    #include <cstdio>
    #include <iostream>

    int main() {
        std::string s("driver at 1 3");
        double c, d;
        sscanf( s.c_str(), "%*s %*s %f %f", &c, &d );
        std::cout << c <<" "<< d <<"\n";
    }

    <source>: In function 'int main()':
    <source>:10:19: warning: format '%f' expects argument of type 'float*', but argument 3 has type 'double*' [-Wformat=]
    <source>:10:22: warning: format '%f' expects argument of type 'float*', but argument 4 has type 'double*' [-Wformat=]
    Compiler returned: 0

You need to use %lf for double.
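With the conversion fixed to %lf, the original snippet works as intended. A small wrapper makes it easy to check (the function name is mine):

```cpp
#include <cstdio>
#include <string>

// Parse "driver at <x> <y>" into two doubles.
// Returns the number of successful conversions (2 on success),
// which is sscanf's return value.
int parse_coords(const std::string& s, double& c, double& d) {
    return std::sscanf(s.c_str(), "%*s %*s %lf %lf", &c, &d);
}
```

Unlike %f (which writes through a float*), %lf tells sscanf the destinations are double*, so both integer-looking and decimal inputs parse correctly.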
72,306,529
72,319,993
DLLNotFound when using DLLImport with shared library
I am creating a .NET Framework 4.0 application that uses a DLL when launched on Windows and a shared library written in C++ when launched on Linux (Debian 10). The C# code looks like this:

    [DllImport("graf")]
    private static extern int Method1();

On Windows, everything is fine, and the application works very well. On Linux, I use Wine to start the application. The problem is that when I try to use any method from my library, I get a DllNotFoundException: graf. My shared library is in /lib, /usr/lib and in the exe folder. I tried renaming my library libgraf.so and just graf.dll, but it's not working. I followed every step of this link. But I can't use my .so library. Do you have any clue how to fix this?

EDIT: OK, it seems to be a problem in my shared library compilation.
My problem came from the fact that 1) my library was badly compiled (Makefile problem); and 2) I had to use the keyword extern "C" { in the definitions of my functions. See What is the effect of extern "C" in C++? for more details.
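For reference, the exported side of such a library looks something like this (the function body and build command are illustrative; only the name Method1 comes from the [DllImport] declaration in the question):

```cpp
// graf.cpp -- build with e.g.: g++ -shared -fPIC -o libgraf.so graf.cpp
// extern "C" disables C++ name mangling, so P/Invoke can resolve the
// symbol by its plain name "Method1" instead of a mangled C++ name.
extern "C" int Method1() {
    return 42;
}
```

Without extern "C", the compiler exports a mangled symbol (something like _Z7Method1v with g++), and the runtime's symbol lookup for "Method1" fails even though the library itself loads.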