74,044,607
74,046,828
gmock multiple sequences with duplicate EXPECT_CALL with same arguments
I am having trouble properly setting up a test for a turtle-graphics interface. As a sample, I have a simplified interface for just drawing a line. Now I want to write a test for drawing a grid (draw_grid) and ensure that each line of the grid is drawn, but the actual order of the drawn lines does not matter. I only need to ensure that the method calls to move_to and line_to are properly paired. I tried this with InSequence, but this leads to several issues, because a couple of the line_to and move_to calls are used for two lines. For example, move_to(0,0) is used for the top edge of the grid and for the left border of the first cell. So the test will generate two EXPECT_CALL(sot, move_to(0,0)) in two different sequences, but each of them implicitly with Times(1). So I guess this is the main problem. This will happen for each left and top border of a row, and similarly for line_to for the right and bottom border of a row. I also tried to use After for an EXPECT_CALL, but this just leads to different test errors. Is there any nice way to specify the requested behavior? Thanks for your help!
using testing::InSequence; struct ITurtle { virtual void move_to(int x, int y) = 0; virtual void line_to(int x, int y) = 0; void line(int x0, int y0, int x1, int y1) { move_to(x0, y0); line_to(x1, y1); } }; class TurtleMock : public ITurtle { public: MOCK_METHOD(void, move_to, (int x, int y), (override)); MOCK_METHOD(void, line_to, (int x, int y), (override)); }; void draw_grid(ITurtle &t) { for (int r = 0; r < 100; r += 10) { // top t.line(0,r,100,r); for (int c = 0; c < 100; c += 10) { // left t.line(c,r,c,r+10); } // right t.line(100,r,100,r+10); } // bottom t.line(0,100,100,100); } TEST(TurtleTest, lines) { TurtleMock sot; for (int r = 0; r < 100; r += 10) { { InSequence s; EXPECT_CALL(sot, move_to(0, r)); EXPECT_CALL(sot, line_to(100, r)); } for (int c = 0; c < 100; c += 10) { InSequence s; EXPECT_CALL(sot, move_to(c,r)); EXPECT_CALL(sot, line_to(c,r+10)); } { InSequence s; EXPECT_CALL(sot, move_to(100, r)); EXPECT_CALL(sot, line_to(100, r + 10)); } } { InSequence s; EXPECT_CALL(sot, move_to(0, 100)); EXPECT_CALL(sot, line_to(100, 100)); } draw_grid(sot); } EDIT: Extension to proposed solution of Sedenion to support polygon drawing with one initial move_to followed by a sequence of line_to statements. void move_to(int x, int y) final { m_move_to_data = {x, y}; } void line_to(int x, int y) final { ASSERT_TRUE(m_move_to_data.has_value()); auto const [x0, y0] = *m_move_to_data; line_mock(x0, y0, x, y); m_move_to_data = {x, y}; }
Consider the following simplified example of drawing just 2 lines (live example): void draw_two_lines(ITurtle &t) { t.line(0, 0, 10, 0); t.line(0, 0, 0, 10); } TEST(TurtleTest, two_lines) { TurtleMock sot; { InSequence s; EXPECT_CALL(sot, move_to(0, 0)); // (2) EXPECT_CALL(sot, line_to(10, 0)); } { InSequence s; EXPECT_CALL(sot, move_to(0, 0)); // (1) EXPECT_CALL(sot, line_to(0, 10)); } draw_two_lines(sot); } Forgetting about InSequence for a moment, expectations are matched in reverse order (from bottom to top) by default by Google Mock. Quote from the manual: By default, when a mock method is invoked, gMock will search the expectations in the reverse order they are defined, and stop when an active expectation that matches the arguments is found (you can think of it as “newer rules override older ones.”). Now, here we do have InSequence, i.e. two groups. However, the two groups themselves are not "in sequence". That means gMock will match the first call to move_to(0, 0) with the second group (marked by (1) in the code above). Thus, afterwards, line_to(0, 10) is expected but line_to(10, 0) gets called, resulting in a test failure. If you exchange the order of the two InSequence-groups, the test will pass. However, this is not really worth anything since your goal is to have the order independent. What you want is basically to specify something like one "atomic" match of all 4 parameters. I am not aware of any way to directly express this with the InSequence or After machinery of GoogleMock. 
Thus, I propose to take another approach and store the call of move_to in a temporary variable, and in the call of line_to take the remembered two values and the two given values to call a dedicated mock function (live example): struct ITurtle { virtual void move_to(int x, int y) = 0; virtual void line_to(int x, int y) = 0; void line(int x0, int y0, int x1, int y1) { move_to(x0, y0); line_to(x1, y1); } }; class TurtleMock : public ITurtle { public: std::optional<std::pair<int, int>> move_to_data; virtual void move_to(int x, int y) final { ASSERT_FALSE(move_to_data.has_value()); move_to_data = {x, y}; } virtual void line_to(int x1, int y1) final { ASSERT_TRUE(move_to_data.has_value()); auto const [x0, y0] = *move_to_data; line_mock(x0, y0, x1, y1); move_to_data.reset(); } MOCK_METHOD(void, line_mock, (int x0, int y0, int x1, int y1)); }; void draw_two_lines(ITurtle &t) { t.line(0, 0, 10, 0); t.line(0, 0, 0, 10); } TEST(TurtleTest, two_lines) { TurtleMock sot; EXPECT_CALL(sot, line_mock(0, 0, 10, 0)); EXPECT_CALL(sot, line_mock(0, 0, 0, 10)); draw_two_lines(sot); } This allows to specify all 4 parameters in one "atomic" match, making the whole stuff with InSequence unnecessary. The above example passes regardless of the order of the line() calls in draw_two_lines(). Your test for draw_grid() would then become (live example): TEST(TurtleTest, grid) { TurtleMock sot; for (int r = 0; r < 100; r += 10) { EXPECT_CALL(sot, line_mock(0, r, 100, r)); for (int c = 0; c < 100; c += 10) { EXPECT_CALL(sot, line_mock(c, r, c, r+10)); } EXPECT_CALL(sot, line_mock(100, r, 100, r + 10)); } EXPECT_CALL(sot, line_mock(0, 100, 100, 100)); draw_grid(sot); } Note: This solution assumes that you cannot or do not want to make ITurtle::line() virtual. If it were, you could of course ditch the helper move_to_data and line_mock() and instead mock line() directly.
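The pairing idea in this answer also works without any mocking framework, which makes the order-independence easy to see in isolation. Below is a minimal plain-C++ sketch (LineRecorder and drew_line are illustrative names, not part of the question's code) that records each completed line as one atomic tuple and checks membership regardless of call order:

```cpp
#include <cassert>
#include <optional>
#include <set>
#include <tuple>
#include <utility>

// Hypothetical recorder demonstrating the pairing idea: remember the last
// move_to, and on line_to emit one atomic (x0, y0, x1, y1) record.
struct LineRecorder {
    std::optional<std::pair<int, int>> pen;
    std::multiset<std::tuple<int, int, int, int>> lines;

    void move_to(int x, int y) { pen = {x, y}; }

    void line_to(int x, int y) {
        assert(pen.has_value());   // line_to without a preceding move_to is an error
        auto [x0, y0] = *pen;
        lines.insert({x0, y0, x, y});
        pen = {x, y};              // polygon-style: keep the pen at the end point
    }
};

// Order-independent check: was this exact line drawn at least once?
bool drew_line(const LineRecorder& r, int x0, int y0, int x1, int y1) {
    return r.lines.count({x0, y0, x1, y1}) > 0;
}
```

Checking the multiset at the end plays the role of the unordered EXPECT_CALL(line_mock, ...) expectations, without any sequencing machinery.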
74,044,672
74,130,000
Why does shutting down this grpc::CompletionQueue cause an assertion?
At this question, I asked how to unblock a grpc::CompletionQueue::Next() that is waiting on a grpc::Channel::NotifyOnStateChange(..., gpr_inf_future(GPR_CLOCK_MONOTONIC), ...). That question, specifically, is still unanswered, but I am trying a workaround, where the CompletionQueue is instead waiting on a grpc::Channel::NotifyOnStateChange() with a non-infinite deadline: // main.cpp #include <chrono> #include <iostream> #include <memory> #include <thread> #include <grpcpp/grpcpp.h> #include <unistd.h> using namespace std; using namespace grpc; void threadFunc(shared_ptr<Channel> ch, CompletionQueue* cq) { void* tag = NULL; bool ok = false; int i = 1; grpc_connectivity_state state = ch->GetState(false); std::chrono::time_point<std::chrono::system_clock> now = std::chrono::system_clock::now(); std::chrono::time_point<std::chrono::system_clock> deadline = now + std::chrono::seconds(2); cout << "state " << i++ << " = " << (int)state << endl; ch->NotifyOnStateChange(state, //gpr_inf_future(GPR_CLOCK_MONOTONIC), deadline, cq, (void*)1); while (cq->Next(&tag, &ok)) { state = ch->GetState(false); cout << "state " << i++ << " = " << (int)state << endl; now = std::chrono::system_clock::now(); deadline = now + std::chrono::seconds(2); ch->NotifyOnStateChange(state, //gpr_inf_future(GPR_CLOCK_MONOTONIC), deadline, cq, (void*)1); } cout << "thread end" << endl; } int main(int argc, char* argv[]) { ChannelArguments channel_args; CompletionQueue cq; channel_args.SetInt(GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA, 0); channel_args.SetInt(GRPC_ARG_MIN_RECONNECT_BACKOFF_MS, 2000); channel_args.SetInt(GRPC_ARG_MAX_RECONNECT_BACKOFF_MS, 2000); channel_args.SetInt(GRPC_ARG_HTTP2_BDP_PROBE, 0); channel_args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 60000); channel_args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 30000); channel_args.SetInt(GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS, 60000); { shared_ptr<Channel> ch(CreateCustomChannel("my_grpc_server:50051", InsecureChannelCredentials(), 
channel_args)); std::thread my_thread(&threadFunc, ch, &cq); cout << "sleeping" << endl; sleep(5); cout << "slept" << endl; cq.Shutdown(); cout << "shut down cq" << endl; my_thread.join(); } } Output of the running executable: $ ./a.out sleeping state 1 = 0 state 2 = 0 state 3 = 0 slept shut down cq state 4 = 0 E1012 15:29:07.677225824 54 channel_connectivity.cc:234] assertion failed: grpc_cq_begin_op(cq, tag) Aborted (core dumped) This version periodically unblocks, as expected, but why does it assert? My question is ultimately: how do you cleanly exit from a loop/thread that is waiting on a grpc::CompletionQueue that is waiting on a grpc::Channel::NotifyOnStateChange() ? My experience has been that with an infinite deadline, it's impossible to unblock grpc::CompletionQueue::Next(), and with a non-infinite deadline, shutting down the grpc::CompletionQueue results in an assert, which is presumably a non-clean exit.
The documentation for CompletionQueue::Shutdown() (https://grpc.github.io/grpc/cpp/classgrpc_1_1_completion_queue.html#a40efddadd9073386fbcb4f46e8325670) says: Also note that applications must ensure that no work is enqueued on this completion queue after this method is called. In other words, once you shut down the CQ, it is illegal to call NotifyOnStateChange() again, because that is enqueuing new work. In this case, what you should expect to see after you call CompletionQueue::Shutdown() is that the already-invoked call to CompletionQueue::Next() will return the already-requested NotifyOnStateChange() completion, and the next call to CompletionQueue::Next() will return false, thus indicating that the CQ is shut down. However, your code is not making a call to Next() to see if the CQ is shut down before it calls NotifyOnStateChange() again to request another state change notification, so that is happening after the CQ is shut down, which is why you're seeing this assertion. In general, the right way to use a CQ is to have a separate, dedicated set of threads that always call Next() in a loop but do not themselves start any new work on the CQs. Starting new work on the CQs should be done in separate thread(s) and should not be done after the CQ is shut down. I hope this information is helpful.
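The contract described above (Next() keeps delivering until the queue is shut down and drained, then returns false) can be sketched without gRPC. MiniCQ below is a hypothetical stand-in, not the real CompletionQueue; it shows the consumer shape that avoids the assertion: the drain loop only consumes and never enqueues.

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal stand-in mimicking the CompletionQueue contract: Next() blocks
// until an item arrives, and returns false once Shutdown() has been called
// AND all already-enqueued items have been drained.
class MiniCQ {
public:
    void Post(int tag) {                       // must not be called after Shutdown()
        std::lock_guard<std::mutex> lk(m_);
        q_.push(tag);
        cv_.notify_one();
    }
    void Shutdown() {
        std::lock_guard<std::mutex> lk(m_);
        shutdown_ = true;
        cv_.notify_all();
    }
    bool Next(int* tag) {                      // false means "shut down and drained"
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty() || shutdown_; });
        if (q_.empty()) return false;
        *tag = q_.front();
        q_.pop();
        return true;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<int> q_;
    bool shutdown_ = false;
};

// Correct consumer shape: the worker only calls Next() in a loop and never
// enqueues new work, so nothing can be posted after Shutdown().
std::size_t run_drain_demo() {
    MiniCQ cq;
    std::vector<int> seen;
    std::thread worker([&] {
        int tag;
        while (cq.Next(&tag)) seen.push_back(tag);
    });
    cq.Post(1);
    cq.Post(2);
    cq.Shutdown();                             // no Post() after this point
    worker.join();                             // worker exits cleanly via Next() == false
    return seen.size();
}
```

The worker thread terminates cleanly because Next() returns false exactly once the queue is both shut down and empty, which is the same exit condition the question is after.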
74,046,063
74,046,096
nested for loop pattern in C++
In C++, when I am making a nested for loop and trying to calculate the factorial, I don't get the correct factorials, and I don't know why. For example, the factorial of 5 is 120, but here it results in 34560. Why? Here is the code:

int fact=1;
for (int number=1; number<=10; number++)
{
    for (int i=1; i<=number; i++)
        fact=fact*i;
    cout <<"factorial of "<<number<<"="<<fact<<"\n";
}

Here it is pictured:
You need to re-initialize fact for each number.

int fact=1;
for (int number=1; number<=10; number++)
{
    fact = 1;
    for (int i=1; i<=number; i++)
        fact=fact*i;
    cout <<"factorial of "<<number<<"="<<fact<<"\n";
}
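Since number grows by one per iteration, the inner loop can also be dropped entirely by carrying the running product forward; a small sketch (factorials is an illustrative helper, not from the question):

```cpp
#include <cstdint>
#include <vector>

// Build 1! .. n! incrementally: n! = (n-1)! * n, so one multiplication
// per step replaces the nested loop (and the re-initialization).
std::vector<std::uint64_t> factorials(int n) {
    std::vector<std::uint64_t> result;
    std::uint64_t fact = 1;
    for (int number = 1; number <= n; ++number) {
        fact *= number;          // extend the previous factorial
        result.push_back(fact);
    }
    return result;
}
```

For n up to 10 this fits easily in 64 bits; 20! is the last factorial representable in an unsigned 64-bit integer.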
74,046,187
74,046,251
type for aliasing constexpr char[]
Modified the question to make it clear. Sorry for my sloppy English wording. I am looking for the type of a variable that is an alias of another variable of constexpr char [] type. I've tried a few but none worked. Here is an example:

constexpr char FieldX[] = "source";
WhatType OptY = FieldX;

The value of FieldX will not change at runtime -- obviously. However, later on, someone could change it in the source code and recompile. I would like to avoid having to change the value of OptY manually according to the change in the value of FieldX. They have essentially the same values; they are in different .cc/.cpp files, and readability would be better if the names of the variables could be different.
If you want an alias of a variable, you want a reference.

constexpr char FieldX[] = "my_field_or_option";
constexpr auto& OptY = FieldX;
static_assert( sizeof FieldX == 19 );
static_assert( sizeof OptY == 19 ); // No array decay

Note that FieldX must be evaluated at compile time to be able to create a constexpr reference to it, meaning it should be global or static. Demo
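If the array type itself does not matter at the use site, a decayed-pointer alias is a possible alternative; a short sketch reusing the question's names (OptZ and aliases_match are illustrative additions):

```cpp
#include <cstring>

constexpr char FieldX[] = "source";

// Reference alias: same type, same size, no decay.
constexpr auto& OptY = FieldX;
static_assert(sizeof OptY == sizeof FieldX);

// Pointer alias: decays to const char*, so sizeof gives the pointer size,
// but the contents automatically stay in sync with FieldX.
constexpr const char* OptZ = FieldX;

bool aliases_match() {
    return std::strcmp(OptY, FieldX) == 0 && std::strcmp(OptZ, FieldX) == 0;
}
```

Either way, changing the string literal behind FieldX and recompiling updates both aliases with no further edits.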
74,046,668
74,047,039
How to iterate over a parameter pack and switch on parameter type?
I have a variadic function accepting any number of mixed parameters. The following code works as expected:

template <typename ... Args>
void call_snippet(lua_State *L, const std::string& name, Args... args) {
    lua_rawgeti(L, LUA_REGISTRYINDEX, snippets[name]);
    int nargs = 0;
    for (auto &&x : {args...}) {
        lua_pushinteger(L, x);
        nargs++;
    }
    lua_pcall(L, nargs, LUA_MULTRET, 0);
}

but falls short of my needs because it assumes all parameters are int (or convertible to int). I would need something along the lines of:

template <typename ... Args>
void call_snippet(lua_State *L, const std::string& name, Args... args) {
    lua_rawgeti(L, LUA_REGISTRYINDEX, snippets[name]);
    int nargs = 0;
    for (auto &&x : {args...}) {
        switch (typeof(x)) {
        case int:
            lua_pushinteger(L, x);
            nargs++;
            break;
        case float:
            lua_pushnumber(L, x);
            nargs++;
            break;
        case std::string:
            lua_pushcstring(L, x.c_str());
            nargs++;
            break;
        case char*:
            lua_pushcstring(L, x);
            nargs++;
            break;
        default:
            //raise error
            ;
        }
    }
    lua_pcall(L, nargs, LUA_MULTRET, 0);
}

How should I actually implement the above pseudocode?
You should be able create function overloads for calling lua_push... and use a fold expression instead of the loop. The sizeof... operator can be used to determine the number of parameters: void Push(lua_State* l, std::nullptr_t) = delete; void Push(lua_State* l, std::string const& str) { lua_pushcstring(l, str.c_str()); } void Push(lua_State* l, char const* str) { lua_pushcstring(l, str); } void Push(lua_State* l, int value) { lua_pushinteger(l, value); } void Push(lua_State* l, float value) { lua_pushnumber(l, value); } template <typename ... Args> void call_snippet(lua_State *L, const std::string& name, Args&&... args) { lua_rawgeti(L, LUA_REGISTRYINDEX, snippets[name]); ((Push(L, std::forward<Args>(args))), ...); int nargs = sizeof...(Args); lua_pcall(L, nargs, LUA_MULTRET, 0); } The following complete example should demonstrate this in a similar scenario: #include <iostream> #include <utility> void PrintNumber(float f) { std::cout << f << "(float)\n"; } void PrintInt(int i) { std::cout << i << "(int)\n"; } void PrintCstring(char const* str) { std::cout << str << "(char const*)\n"; } void Print(std::nullptr_t) = delete; void Print(std::string const& str) { PrintCstring(str.c_str()); } void Print(char const* str) { PrintCstring(str); } void Print(int value) { PrintInt(value); } void Print(float value) { PrintNumber(value); } template <typename ... Args> void PrintArgs(Args&&... args) { ((Print(std::forward<Args>(args))), ...); int nargs = sizeof...(Args); std::cout << "nargs = " << nargs << '\n'; } int main() { PrintArgs("foo", std::string("bar"), 42, 99.9f); } Note: You may need to add some overloads to resolve ambiguity, e.g. when passing 99.9 instead of 99.9f, since the former is a double which results in ambiguity during overload resolution.
74,046,805
74,047,008
Wrapping the EXPECT_NE, EXPECT_EQ into a validation function
I have a few unit tests that validate whether certain values equate to 0 or not. In some cases they're supposed to be 0 and in some they're not, like the following: testA expects valueA to be non-zero and valueB to be zero, whereas testB expects the opposite. What I am looking to do is to somehow wrap the validation part in a function, so instead of invoking EXPECT_NE/EXPECT_EQ for each member, I just invoke a function that takes care of the validation part.

TEST(UnitTest, testA) {
    Object object;
    // do stuff that modifies object's values
    EXPECT_NE(object.valueA, 0);
    EXPECT_EQ(object.valueB, 0);
}

TEST(UnitTest, testB) {
    Object object;
    // do stuff that modifies object's values
    EXPECT_EQ(object.valueA, 0);
    EXPECT_NE(object.valueB, 0);
}

This is what I came up with, but it's a bit too verbose. Wondering if there's a better approach to it?

void Validate(Object* obj, bool valA, bool valB) {
    // verify valueA
    if (valA) {
        EXPECT_EQ(obj->valueA, 0);
    } else {
        EXPECT_NE(obj->valueA, 0);
    }
    // verify valueB
    if (valB) {
        EXPECT_EQ(obj->valueB, 0);
    } else {
        EXPECT_NE(obj->valueB, 0);
    }
}

TEST(UnitTest, testA) {
    Object object;
    // do stuff that modifies object's values
    Validate(&object, false, true);
}

TEST(UnitTest, testB) {
    Object object;
    // do stuff that modifies object's values
    Validate(&object, true, false);
}
With FieldsAre and structured binding

With C++17 and a recent GoogleTest version (>= v1.12.0), you can simply use FieldsAre(), in case Object allows structured binding (see live example):

using ::testing::FieldsAre;
using ::testing::Eq;
using ::testing::Ne;

struct Object {
    int valueA;
    int valueB;
};

TEST(UnitTest, testA) {
    Object object{42, 0};
    EXPECT_THAT(object, FieldsAre(Ne(0), Eq(0)));
}

TEST(UnitTest, testB) {
    Object object{0, 42};
    EXPECT_THAT(object, FieldsAre(Eq(0), Ne(0)));
}

With a combination of matchers

Otherwise (if your GoogleTest is too old or Object does not allow structured binding), you can write a simple matcher-like function:

using ::testing::Field;
using ::testing::AllOf;

template <class M1, class M2>
auto MatchesValues(M1 m1, M2 m2) {
    return AllOf(Field(&Object::valueA, m1),
                 Field(&Object::valueB, m2));
}

and use it just like FieldsAre (live example):

TEST(UnitTest, testA) {
    Object object{42, 0};
    EXPECT_THAT(object, MatchesValues(Ne(0), Eq(0)));
}

TEST(UnitTest, testB) {
    Object object{0, 42};
    EXPECT_THAT(object, MatchesValues(Eq(0), Ne(0)));
}

With a custom matcher

As noted in the comment, your original Object is a template, in which case Field cannot be used. In this case you can write a proper custom matcher like so (live example):

template <typename T>
struct Object {
    int valueA;
    int valueB;
    T otherStuff;
    Object(int a, int b) : valueA(a), valueB(b) {}
};

MATCHER_P2(MatchesValues, m1, m2, "") {
    return ExplainMatchResult(m1, arg.valueA, result_listener)
        && ExplainMatchResult(m2, arg.valueB, result_listener);
}

TEST(UnitTest, testA) {
    Object<int> object{42, 0};
    EXPECT_THAT(object, MatchesValues(Ne(0), Eq(0)));
}
74,047,043
74,066,652
Error passing an Eigen tensor to function
I am trying to turn the following code to a function: Update: added full working example to test and run. Thanks. static const int nx = 4; static const int ny = 4; static const int nz = 4; double Lx = 2*EIGEN_PI; double Ly = 2*EIGEN_PI; double A = (2 * EIGEN_PI)/Lx; double A1 = (2 * EIGEN_PI)/ Ly; Eigen::Tensor<double, 3> eXX(nx,ny,nz); eXX.setZero(); Eigen::Tensor<double, 3> eYY(nx,ny,nz); eYY.setZero(); Eigen::Tensor<double, 3> eZZ(nx,ny,nz); eZZ.setZero(); double dx = Lx / nx; double dy = Ly / ny; double dz = Lz / nz; for(int i = 0; i< nx; i++){ for(int j = 0; j< ny; j++){ for(int k = 0; k< nz; k++){ eXX(k,i,j) = i*dx; eYY(j,i,k) = j*dy; eZZ(j,i,k) = k*dz; } } } Eigen::Tensor<double, 3> uFun(nx,ny,nz); uFun.setZero(); for(int i = 0; i< nx; i++){ for(int j = 0; j< ny; j++){ for(int k = 0; k< nz; k++){ uFun(k,i,j) = sin(3. * A * eZZ(k,i,j)) * sin(A * eXX(k,i,j)) * cos(A1 * eYY(k,i,j)); } } } //Turn this to function #define IMAG 1 #define REAL 0 fftw_complex *input_array; fftw_complex *output_array; input_array = (fftw_complex*) fftw_malloc(nx*ny*nz * sizeof(fftw_complex)); output_array = (fftw_complex*) fftw_malloc(nx*ny*nz * sizeof(fftw_complex)); for (int i = 0; i < nx; ++i) { for (int j = 0; j < ny; ++j) { for (int k = 0; k < nz; ++k) { { input_array[k + nz * (j + ny * i)][REAL] = uFun(k,i,j); input_array[k + nz * (j + ny * i)][IMAG] = 0; } } } } fftw_plan forward = fftw_plan_dft_3d(nx, ny, nz, input_array, output_array, FFTW_FORWARD, FFTW_ESTIMATE); fftw_execute(forward); fftw_destroy_plan(forward); fftw_cleanup(); My attempt: void r2cfft3d(Eigen::Tensor<double, 3>& rArr, Eigen::Tensor<std::complex<double>, 3> cArr){ fftw_complex *input_array; fftw_complex *output_array; input_array = (fftw_complex*) fftw_malloc(nx*ny*nz * sizeof(fftw_complex)); output_array = (fftw_complex*) fftw_malloc(nx*ny*nz * sizeof(fftw_complex)); for (int i = 0; i < nx; ++i) { for (int j = 0; j < ny; ++j) { for (int k = 0; k < nz; ++k) { { input_array[k + nz * (j + ny * i)][REAL] = 
rArr(k,i,j); input_array[k + nz * (j + ny * i)][IMAG] = 0; } } } } //this is correct 3D fft of uFun = fftn(uFun) in MATLAB fftw_plan forward = fftw_plan_dft_3d(nx, ny, nz, input_array, output_array, FFTW_FORWARD, FFTW_ESTIMATE); fftw_execute(forward); fftw_destroy_plan(forward); fftw_cleanup(); } But I get all these errors: error: variable or field ‘r2cfft3d’ declared void 27 | void r2cfft3d(Eigen::Tensor<double, 3>& rArr, Eigen::Tensor<std::complex<double>, 3> cArr); | ^~~~~~ spectralFunctions3D.h:27:22: error: ‘Tensor’ is not a member of ‘Eigen’ spectralFunctions3D.h:27:29: error: expected primary-expression before ‘double’ 27 | void r2cfft3d(Eigen::Tensor<double, 3>& rArr, Eigen::Tensor<std::complex<double>, 3> cArr); error: ‘rArr’ was not declared in this scope 27 | void r2cfft3d(Eigen::Tensor<double, 3>& rArr, Eigen::Tensor<std::complex<double>, 3> cArr); error: ‘cArr’ was not declared in this scope 27 | void r2cfft3d(Eigen::Tensor<double, 3>& rArr, Eigen::Tensor<std::complex<double>, 3> cArr); | ^~~~ I don't understand these errors, especially since the code works fine before trying to turn it into a function. I am more familiar with passing Eigen matrices to functions, but not Eigen tensors. Thanks
I managed to compile this fine by including all the relevant headers (especially #include <unsupported/Eigen/CXX11/Tensor>). Since you are using constant sizes for the Eigen tensors, you could define them as fixed-size with Eigen::TensorFixedSize<double, Eigen::Sizes<nx, ny, nz>>. Also take a look at https://www.fftw.org/fftw3_doc/Column_002dmajor-Format.html -- maybe you can avoid the conversion from column-major to row-major and pass the pointer to the Eigen tensor directly.
74,047,120
74,288,006
Mediapipe palm detection model outputs
I want to add Mediapipe hand landmark detection to my C++ project, but Mediapipe doesn't support CMake, so I had to find another way. I found that hand landmark detection is two models run in series: the first model is palm detection and the second is landmark detection. From the Mediapipe website I got to the two models. The models are tflite models, so adding them shouldn't be difficult, but I had a problem figuring out how to convert the palm detection output to bboxes. The model gives me two outputs, one with shape (2016, 18) and a second with shape (2016,). The first one should be [number of anchors, 18]: 0 - 4 are bounding box offset, width, and height: dx, dy, w, h 4 - 18 are 7 hand keypoint x and y coordinates: x1,y1,x2,y2,...x7,y7 The second should be the confidence score for each bbox. (2016, 18)[0] ---> [-3896.9226 5079.4067 6987.4683 7181.9116 992.45654 4032.2664 -7006.974 -2635.5786 -4408.5684 -3171.507 -2381.8406 -3177.1763 -1996.8119 -2633.921 2559.212 5521.417 4017.0728 4059.862 ] (2016,)[0] ---> -2090.7869 Could you please help me figure out the math needed to end up with a bbox? During my research, I found the same problem at https://github.com/google/mediapipe/issues/3751 and in https://github.com/aashish2000/hand_tracking but I couldn't understand how to end up with a bbox.
The main steps needed to convert the Mediapipe palm model's output to a rectangle are explained in the repo terryky/tflite_gles_app.git. They used the old models, but the main steps are the same. In my repo, I made the necessary changes to run the new models, both the palm model and the hand landmark detection; you can find the source code at hand-landmarks-cpp.git. I noticed that the Python version runs at least 3X faster than the C++ version, probably because of lag in the TensorFlow C++ API (tflite), or maybe I have a bug in my code, who knows :)
74,047,260
74,047,681
Why is the destructor being invoked in this scenario?
In the following code, I am not able to understand why the destructor of the class Buf is invoked twice. When debugging I can see that it is being invoked the first time when the running thread is leaving the function Test::produce. The second time is when leaving the main function, which essentially is when the class EventQueue is destructed, something I would expect. However, I don't understand why the destructor of Buf is invoked when leaving the function Test::produce. Specifically, I create the class Buf as an r-value, passing it to the EventQueue and moving it to its internal cache. In fact, this has created a problem for me in that I end up trying to free the same pointer twice, which throws an exception.

template<typename T>
class EventQueue{
public:
    void offer(T&& t) {
        m_queue.try_emplace(std::this_thread::get_id()).first->second.push(std::move(t));
    };
    std::unordered_map<std::thread::id, std::queue<T>> m_queue;
};

class Buf{
    const uint8_t *m_data;
    const size_t m_size;
public:
    Buf(const uint8_t *data, size_t size) : m_data(data), m_size(size) { }
    size_t size() const { return m_size; }
    const uint8_t *data() const { return m_data; }
    ~Buf() {
        std::cout << "dtor called " << std::endl;
        free((void *)m_data);
    }
};

class Test{
public:
    Test(shared_ptr<EventQueue<Buf>> buf) : m_buf(buf) {
        std::thread t1 = std::thread([this] { this->produce(10); });
        t1.detach();
    };
    void produce(int msg_size) {
        m_buf->offer(Buf(new uint8_t[msg_size], 10));
    }
    std::shared_ptr<EventQueue<Buf>> m_buf;
};

int main() {
    auto event_queue = std::make_shared<EventQueue<Buf>>();
    Test tt(event_queue);
    return 0;
}
The destructor is called two times because you have two objects to destroy. First - the temporary you created as an argument for the offer function parameter:

void produce(int msg_size) {
    m_buf->offer(Buf(new uint8_t[msg_size], 10));
}

Second - when you add this temporary to the std::queue container, it makes a copy under the hood:

void offer(T&& t) {
    m_queue.try_emplace(std::this_thread::get_id()).first->second.push(std::move(t));
};

Every temporary object created must always be destructed. However, the problem is not how many objects were destructed, but that you ignore the rules of zero, three and five here. I.e. if you create any of a destructor, a copy constructor or a copy-assignment operator, you are supposed to take care of all three. Another side effect is that the compiler will not generate the move constructor and move assignment operator for you when any of the big three are explicitly defined. Thus, when passing an rvalue reference to a Buf constructor, you actually end up with a copy constructor. However, even if it were a defaulted move constructor, it would not solve your problem, because resources represented with raw pointers which your class instance "owns" (and is supposed to delete at some point) are not quite compatible with the implicit move constructor, which merely does a member-wise std::move: For non-union class types (class and struct), the move constructor performs full member-wise move of the object's bases and non-static members, in their initialization order, using direct initialization with an xvalue argument. For any built-in types (including raw pointers), it means that nothing actually happens and they are just copied.
Long story short: you have to nullify the source object's member raw pointer explicitly:

Buf(Buf&& other) noexcept
    : m_data{ std::exchange(other.m_data, nullptr) },
      m_size{ other.m_size }
{}

The better solution would be to not mess with the rules of three/five and stick to the rule of zero, by leveraging the RAII idiom and letting automatic storage duration handle the resources without explicitly allocating/releasing them:

class Buf{
    const std::vector<std::uint8_t> m_data;
public:
    Buf(std::vector<std::uint8_t> data) : m_data{ std::move(data) } { }
    const std::vector<std::uint8_t>& data() const { return m_data; }
};
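The effect of the nullify-on-move constructor can be demonstrated with a small instrumented sketch (the free counter is an illustrative addition; malloc is used here so that the free matches the allocation, unlike the new[] in the question's code):

```cpp
#include <cstdint>
#include <cstdlib>
#include <utility>

// Counts how many times free() is reached, to show that the std::exchange
// move constructor prevents a double free.
inline int g_free_count = 0;

struct Buf {
    const std::uint8_t* m_data;
    std::size_t m_size;

    Buf(const std::uint8_t* data, std::size_t size) : m_data(data), m_size(size) {}

    // Move constructor: steal the pointer and null out the source,
    // so only one of the two objects ever frees the buffer.
    Buf(Buf&& other) noexcept
        : m_data(std::exchange(other.m_data, nullptr)), m_size(other.m_size) {}

    ~Buf() {
        if (m_data != nullptr) {
            ++g_free_count;
            std::free(const_cast<std::uint8_t*>(m_data));
        }
    }
};

int moved_free_count() {
    g_free_count = 0;
    {
        Buf a(static_cast<std::uint8_t*>(std::malloc(10)), 10);
        Buf b(std::move(a));  // a.m_data is now nullptr; a's destructor is a no-op
    }
    return g_free_count;      // exactly one free for one allocation
}
```

Without the std::exchange (i.e. with a plain member-wise copy of the pointer), both destructors would reach free() and the counter would read 2 — the double free from the question.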
74,047,308
74,047,340
Replace pointers with std::optional in a recursive data structure
Is it possible to replace pointers with std::optional in a recursive data structure? For example, how would I replace the following pointer based Tree

template< typename T >
struct Tree {
    T data;
    Tree* left;
    Tree* right;
};

with a Tree that uses std::optional instead of pointers? I have tried this:

template< typename T >
struct Tree {
    T data;
    std::optional< Tree< T > > left;
    std::optional< Tree< T > > right;
};

but the compiler greeted me with several screens of error messages about incomplete type Tree<int> used in type trait expression.
It doesn't work this way. The idea behind std::optional<> is that it already contains the storage for its Tree<T> inline, so your Tree<T> would have to embed two complete Tree<T> objects and would be of infinite size.
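What does work is indirection through a pointer-like type with out-of-line storage, such as std::unique_ptr, which is allowed to point at an incomplete type; a brief sketch (sum is an illustrative helper):

```cpp
#include <memory>

// unique_ptr stores only a pointer, so Tree<T> has a fixed, finite size
// even though it refers to further Tree<T> nodes.
template <typename T>
struct Tree {
    T data;
    std::unique_ptr<Tree> left;
    std::unique_ptr<Tree> right;
};

// Walk the tree recursively; an empty unique_ptr marks a missing child,
// playing the role std::nullopt was meant to play.
int sum(const Tree<int>* t) {
    if (t == nullptr) return 0;
    return t->data + sum(t->left.get()) + sum(t->right.get());
}
```

Unlike raw pointers, the unique_ptr members also own their children, so the whole tree is freed automatically when the root goes out of scope.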
74,047,689
74,047,763
How do I get make to just compile?
I want to just compile using make and not link. Within my directory I have the following: Makefile, pi*, pi.cpp, pi.o

In the Makefile this is the code:

pi: pi.o
	c++ -O0 pi.o -o pi

pi.o: pi.cpp
	c++ -O0 pi.cpp -c

How do I get make to just compile: c++ -O0 pi.cpp -c
Do you mean running only the part that produces the object file? Try running make pi.o
74,047,851
74,048,038
Why does the copy constructor of an object that is being used for initializing another object get invoked?
class point { public: point(double x, double y) : x(x), y(y) { std::cout << "point parameterized constructor of: " << getThis() << std::endl; } point(const point& that) : x(that.x), y(that.y) { std::cout << "point copy constructor of: " << getThis() << std::endl; } ~point() { std::cout << "point destructor of: " << getThis() << std::endl; } private: double x; double y; point* getThis() { return this; } }; class line { public: line(const point& startPoint, const point& endPoint) : startPoint(startPoint) endPoint(endPoint) { std::cout << "line parameterized constructor: " << getThis() << std::endl; } ~line() { std::cout << "line destructor of: " << getThis() << std::endl; } private: point startPoint; point endPoint; line* getThis() { return this; } }; int main() { point p1(3.0, 4.0); point p2(5.0, 6.0); line l1(p1, p2); return 0; } Output of the program: point parameterized constructor of: 0x577fffc00 point parameterized constructor of: 0x577fffbe0 point copy constructor of: 0x577fffbb0 point copy constructor of: 0x577fffbc8 lineSegment parameterized constructor of: 0x577fffbb0 lineSegment destructor of: 0x577fffbb0 point destructor of: 0x577fffbc8 point destructor of: 0x577fffbb0 point destructor of: 0x577fffbe0 point destructor of: 0x577fffc00 I don't understand how point's copy constructor gets invoked 2 times (1 for each point parameter) Reason why is originally, line constructors parameters were not const references. 
And the compiler was giving this warning for the line constructor Clang-Tidy: The parameter 'endPoint' is copied for each invocation but only used as a const reference; consider making it a const reference line(point startPoint, point endPoint) : startPoint(startPoint), endPoint(endPoint) {...} And this was the output: point parameterized constructor: 0x8feedffa40 point parameterized constructor: 0x8feedffa20 point copy constructor: 0x8feedffa60 point copy constructor: 0x8feedffa80 point copy constructor: 0x8feedff9f0 point copy constructor: 0x8feedffa08 lineSegment parameterized constructor: 0x8feedff9f0 point destructor: 0x8feedffa80 point destructor: 0x8feedffa60 lineSegment destructor: 0x8feedff9f0 point destructor: 0x8feedffa08 point destructor: 0x8feedff9f0 point destructor: 0x8feedffa20 point destructor: 0x8feedffa40 As you can see the copy constructor of the point gets invoked 4 times(2 for each point parameter). I assumed all 4 invocations would go away when I made the parameters const point references. Instead, they halved. Why is that?
In this constructor of the class line

line(const point& startPoint, const point& endPoint) : startPoint(startPoint) endPoint(endPoint) { std::cout << "line parameterized constructor: " << getThis() << std::endl; }

the copy constructor of the type point is used for the data members startPoint and endPoint in this mem-initializer list

startPoint(startPoint) endPoint(endPoint)

In this constructor

line(point startPoint, point endPoint) : startPoint(startPoint), endPoint(endPoint) {...}

where the arguments are not accepted by reference, the copy constructor is called two more times to initialize the parameters.
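The 2-versus-4 difference can be made mechanical by counting copy-constructor invocations; a condensed sketch of both constructor styles (the counter and the two line variants are illustrative additions to the question's classes):

```cpp
// Instrumented point: counts copy-constructor invocations globally.
inline int g_copies = 0;

struct point {
    double x, y;
    point(double x, double y) : x(x), y(y) {}
    point(const point& that) : x(that.x), y(that.y) { ++g_copies; }
};

// By-reference constructor: one copy per member initializer -- two in total.
struct line_by_ref {
    point startPoint, endPoint;
    line_by_ref(const point& s, const point& e) : startPoint(s), endPoint(e) {}
};

// By-value constructor: one copy to create each parameter, plus one copy
// per member initializer -- four in total.
struct line_by_value {
    point startPoint, endPoint;
    line_by_value(point s, point e) : startPoint(s), endPoint(e) {}
};

int copies_by_ref() {
    g_copies = 0;
    point p1(3.0, 4.0), p2(5.0, 6.0);
    line_by_ref l(p1, p2);
    return g_copies;
}

int copies_by_value() {
    g_copies = 0;
    point p1(3.0, 4.0), p2(5.0, 6.0);
    line_by_value l(p1, p2);
    return g_copies;
}
```

Copy elision does not help in the by-value case because p1 and p2 are lvalues: the copies into the parameters are required, and the members are then copied from those parameters.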
74,048,033
74,049,234
Why is it required to add certain fields as a parameter to the constructor in C++
Probably a simple question, but I haven't found anything yet. For the code below, the IDE complains about the field and suggests to "Add as parameter to the constructor". #include <chrono> class TestClass { private: std::chrono::time_point start; public: TestClass(){ start = std::chrono::steady_clock::now(); } }; Question 1: What is wrong about the code? Question 2: How is it possible to define a time_point as a field?
std::chrono::time_point is a class template, meaning you cannot instantiate it without supplying the required template arguments. In this case that's the first template parameter, Clock. I.e. you need to tell for which clock you want to store a time point. You can fix that by adding your clock: std::chrono::time_point<std::chrono::steady_clock> start; But clocks also have a time_point type alias, so you can write the shorter: std::chrono::steady_clock::time_point start;
74,048,046
74,048,151
How to know which included header a function comes from?
I have a big class A that has dozens of included headers, and each header has its own includes as well. I'm creating a new class that would use a function which is also used in class A. I do not want to include the whole of class A in my new class, so I'm trying to find the header that brought that function into class A. What's the best way to do it?
If you are not using an IDE or an appropriate editor plugin (you should), then the easiest way is to add a deliberate error to a file and look at the error message. Note, this may or may not work with your compiler. int foo(); // defined somewhere but we don't know where // ask the compiler foo(42); Error messages: test.cpp:42:8: error: too many arguments to function ‘int foo()’ 42 | foo(42); | ~~~^~~~ foo.h:38:5: note: declared here 38 | int foo(); | ^~~ You should not blindly #include <foo.h> if it comes from a third party library. It might be a file that end users are not supposed to include directly. Double check.
74,048,208
74,048,246
Can't understand the behaviour of dynamic_bitset
I am working on a Hill Climber algorithm and I need to represent data as bitsets. To summarize my issue, I have written this piece of code: #include <iostream> #include <boost/dynamic_bitset.hpp> void print(const boost::dynamic_bitset<> bitset) { std::cout << bitset[0]; //bitset.at(0) - same result } int main() { boost::dynamic_bitset<> myBitset(4, 10); // will be 1010 std::cout << myBitset<<"\n"; print(myBitset); return 0; } I can't understand why this prints 1010 0
As indicated in the comments, boost::dynamic_bitset indexes from the least-significant bit, i.e. the rightmost bit in the usual printed representation of a binary number. So bitset[0] is the zero on the far right of 1010, and the 1 that you expect to be printed is in fact bitset[3].
74,048,666
74,048,827
How can I return user to original switch menu from do-while loop?
How can I get the user to go back to the original switch menu once they select N at the end? When the user selects N, would I use another loop to get them back to the original menu? Any help is greatly appreciated. cout << "Total Chips: " << chips << endl; cout << "1) xxxxx" << endl; cout << "2) xxx" << endl; cout << "Please enter an option" << endl; int option; cin >> option; switch(option) { case 1: { char again; do { /* code */ cout << "Would you like to play again? Y/N" << endl; cin >> again; }while(towlower(again) == 'y'); // I'm not sure whether to use another do-while loop.
When the user selects N, would I use another loop to get them back to the original menu? Yes, one that is put around the original menu, eg: bool keepRunning = true; do { cout << "Total Chips: " << chips << endl; cout << "1) xxxxx" << endl; cout << "2) xxx" << endl; cout << "Please enter an option" << endl; int option; cin >> option; switch (option) { case 1: { char again; do { /* code */ cout << "Would you like to play again? Y/N" << endl; cin >> again; } while (again == 'y' || again == 'Y'); break; } ... } } while (keepRunning); Note that one of the other menu options (for example an Exit choice in the switch) must set keepRunning = false, otherwise the outer loop never ends.
74,048,690
74,048,709
Why do I see C++ code in file_reader.cc, shouldn't it be C?
I'm doing a university project where professors give us some base code supposedly in C. But inside these C files I just see C++ features like cout instead of printf, vectors from the STL, inheritance? This is an example of file_reader.cc: void read (const char *nombre_archivo_pse, vector <float> &vertices, vector <int> &caras){ unsigned num_vertices = 0, num_caras = 0; ifstream src; string na = nombre_archivo_pse; if (na.substr (na.find_last_of (".") + 1) != "ply") na += ".ply"; abrir_archivo (na, src); leer_cabecera (src, num_vertices, num_caras, true); leer_vertices (num_vertices, vertices, src); leer_caras (num_vertices, num_caras, caras, src); cout << "archivo ply leido." << endl << flush; } Shouldn't this be file_reader.cpp or file_reader.c++? How should I code? In C or C++?
That is C++ code. The file extension for C++ source files can be .cc, .cpp, or .cxx, among others; .cc is simply a less common choice than .cpp.
74,048,795
74,051,357
Verify (EXPECT_NE/EXPECT_EQ) struct members for a Template class in a function
I have a unit test that validates whether certain member values (valueA, valueB) equate to 0 or not. And the object being tested, i.e. Object, is a template, and what I am looking to accomplish is to: wrap the validation part in a function so instead of invoking EXPECT_NE/EXPECT_EQ for each member value, I just invoke a function that takes care of the validation. This is the original snippet: template<typename T> struct Object { struct Values { int valueA; int valueB; }; Values values = {}; T otherStuff; void setValues(int valueA, int valueB) { values.valueA = valueA; values.valueB = valueB; } }; TEST(UnitTest, testA) { Object<int> object; // do stuff that modified object's values via setValues() EXPECT_NE(object.values.valueA, 0); EXPECT_EQ(object.values.valueB, 0); } Following is what I came up with but I get the following error gmock-matchers.h:2074:31: error: no matching function for call to 'testing::internal::FieldMatcher<Object<int>::Values, int>::MatchAndExplainImpl(std::integral_constant<bool, false>::type, const Object<int>&, testing::MatchResultListener*&) const' 2074 | return MatchAndExplainImpl( | ~~~~~~~~~~~~~~~~~~~^ 2075 | typename std::is_pointer<typename std::remove_const<T>::type>::type(), | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2076 | value, listener); If I were to use a member variable outside of a struct, i.e. testVar in MatchesStruct, it would compile fine. Why the complaint with a struct member? 
using ::testing::Eq; using ::testing::Ne; using ::testing::Field; using ::testing::AllOf; template<typename T> struct Object { int testVar; struct Values { int valueA; int valueB; }; Values values = {}; T otherStuff; void setValues(int valueA, int valueB) { values.valueA = valueA; values.valueB = valueB; } }; template <typename T, class M1, class M2> auto MatchesStruct(M1 m1, M2 m2) { return AllOf(Field(&Object<T>::Values::valueA, m1), Field(&Object<T>::Values::valueB, m2)); } TEST(UnitTest, testA) { Object<int> object; // do stuff that modified object's values via setValues() EXPECT_THAT(object, MatchesStruct<int>(Ne(0), Eq(0))); } Here's a live sample
This is almost the same question as the one you asked a few hours ago and is actually based on my initial answer there, but with the twist that here Object is a template and its members valueA and valueB are in another struct. You cannot form a member pointer to a member within a member variable like you attempt to do (i.e. &Object<T>::Values::valueA is not allowed by the C++ standard), so Field cannot be used that way. In any case, the answer to this is almost the same as my answer in the other post (which I amended just now after you said in a comment there that your Object is a template): Write a proper matcher using MATCHER_P2, like so (live example): template<typename T> struct Object { int ran; struct Values { int valueA = 42; int valueB = 0; }; Values values = {}; T otherStuff; }; MATCHER_P2(MatchesStruct, m1, m2, "") { return ExplainMatchResult(m1, arg.values.valueA, result_listener) && ExplainMatchResult(m2, arg.values.valueB, result_listener); } TEST(UnitTest, testA) { Object<int> object; EXPECT_THAT(object, MatchesStruct(Ne(0), Eq(0))); }
74,049,670
74,049,808
How do I replace all "unsigned long" in my project files with "unsigned long long" without affecting compiler system files?
I need to replace all occurrences of "unsigned long" with "unsigned long long" in my solution. I need to replace it only in my own files. Files like "string.h" etc. should not be affected. How can I make sure that no system / compiler files are affected? When I press Ctrl + F, a box opens up which allows me to select either Current Block Current Document All Open Documents Current Project Entire Solution I have already managed to wreck string.h and other files, so "Entire Solution" is obviously not the correct choice. Thank you!
I have now opened each and every one of my files manually and used "Only this document".
74,049,759
74,049,780
Is there any difference between the vsnprintf function under xcode and the vsnprintf of other platforms?
I wrote a very simple test code to test vsnprintf, but in the Xcode and Visual Studio environments the results are very different. The test code is as follows: #define _CRT_SECURE_NO_WARNINGS #include <iostream> #include <string.h> #include <cstdarg> void p(const char* fmt, ...) { static const int DefaultLength = 256; char defaultBuf[DefaultLength] = { 0 }; va_list args; va_start(args, fmt); vsprintf(defaultBuf, fmt, args); printf("%s\n", defaultBuf); memset(defaultBuf, 0, sizeof(defaultBuf)); vsnprintf(defaultBuf, DefaultLength, fmt, args); printf("%s\n", defaultBuf); va_end(args); } int main(int argc, const char* argv[]) { // if you uncomment this line(std::cout ...), it will crash at vsnprintf in xcode std::cout << "Tests...!\n"; p("Create:%s(%d)", "I'm A String", 0x16); return 0; } this is the output in visual studio : Tests...! Create:I'm A String(22) Create:I'm A String(22) This is normal and doesn't seem to be a problem. But with the same code, I created a macOS command line project, pasted this code in, and something strange happened: when the code executes vsnprintf, it raises EXC_BAD_ACCESS directly. What's more outrageous is that if I comment out the std::cout in the main function, it will not crash, but the output is wrong. So the question is, what is the reason for this difference? Shouldn't these functions all be part of the C standard library, with their behavior constrained by the standard? Or is my usage wrong? The output when I delete std::cout: Create:I'm A String(22) Create:\310\366\357\277\367(3112398) Program ended with exit code: 0 If only one is right, is it Xcode or Visual Studio? In the end I used the latest Xcode 14 and Visual Studio 2022.
You must reinitialize the va_list between calls to vsprintf. Failure to do so is undefined behavior. See https://en.cppreference.com/w/c/variadic/va_list : If a va_list instance is created, passed to another function, and used via va_arg in that function, then any subsequent use in the calling function should be preceded by a call to va_end. void p(const char* fmt, ...) { static const int DefaultLength = 256; char defaultBuf[DefaultLength] = { 0 }; va_list args; va_start(args, fmt); vsprintf(defaultBuf, fmt, args); va_end(args); printf("%s\n", defaultBuf); memset(defaultBuf, 0, sizeof(defaultBuf)); va_start(args, fmt); vsnprintf(defaultBuf, DefaultLength, fmt, args); va_end(args); printf("%s\n", defaultBuf); }
74,050,256
74,050,313
What is the difference between compiling a C++ file with the 'gcc' and 'c++' commands?
While learning C++, I tried to compile a HelloWorld program using the 'gcc' command' and found that I needed to add the '-lstdc++' option for it to compile successfully: gcc HelloWorld.cpp -lstdc++ However, I idly tried to use 'c++' as a command to compile a file, and much to my surprise, it worked without me needing to use the -lstdc++ option, and it produced an output executable file that ran just as well as the one produced by the 'gcc' command with the '-lstdc++' option: c++ HelloWorld.cpp Does anyone know if there are any hidden differences in output between the two commands, and if the 'c++' command may be safely used in place of the 'gcc' command? I have searched a dozen or so websites, and not a single one of them had any documentation or samples for code featuring 'c++' used as a command to compile a C++ executable file in the OS that I'm running (Linux Ubuntu 20.04).
c++ is a soft link to g++. You can then find the difference explained in this question: What is the difference between g++ and gcc?
74,050,371
74,072,643
C++ CMake error in VisualStudio - can't find CRABMEAT library
I'm new to Visual Studio and have had very little experience with C++. I have a project that I'm trying to open in VS Community 2022. All I've done so far is open a folder that has a CMakeLists.txt file in it, so it is automatically running through things. It hits an error and stops: CMake Error at Lconfig/packages.d/crabmeat.cmake-inc:7 (message): Did not find CRABMEAT library. In the crabmeat.cmake-inc file, it just looks for the package/library "crabmeat": # vim: ft=cmake find_package( crabmeat QUIET ) if ( CRABMEAT_FOUND) message( STATUS "found CRABMEAT library. [lib=${CRABMEAT_LIBRARY},include=${CRABMEAT_INCLUDE_DIR}]") else( CRABMEAT_FOUND) message( FATAL_ERROR "Did not find CRABMEAT library.") endif( CRABMEAT_FOUND) I have been searching online to find out what crabmeat is, with zero success. Then I found crabmeat mentioned in a compiler.h file: /*! * @brief * set our own macros for compilers * */ #ifndef LDNDC_COMPILERS_H_ #define LDNDC_COMPILERS_H_ /** compiler detection **/ #include "crabmeat-compiler.h" /* clang (llvm), note: have before gcc because clang also identifies as gcc.. */ #if defined(CRABMEAT_COMPILER_CLANG) # define LDNDC_COMPILER_CLANG /* pgi */ #elif defined(CRABMEAT_COMPILER_PGI) # define LDNDC_COMPILER_PGI ... etc etc "crabmeat-compiler.h" doesn't appear to exist as a file. Could someone please explain what crabmeat is and how I get it so I can move forward? I've also searched for it in the components of the VS Installer and nothing comes up, so I'm at a loss. Thanks in advance.
It appears that it's either developed in-house, or at least it's a local version. I was given a Subversion link to download it from their repository, in any case.
74,050,391
74,050,439
multi-dimensional array printing negative number in c++
I am learning C++ and was trying to print out the result from a multi-dimensional array. My code is int numbers[3][3][4] = { { {5,3,8,7}, {1,2,3,4}, {8,9,10,11} }, { {12,13,14,15}, {16,17,18,19}, {20,21,22,23} }, { {121,131,141,151}, {161,171,181,119}, {210,211,212,213} } }; int i,j,k; // int arraylength = sizeof(numbers)/sizeof(int); // cout<<arraylength; for(i=0;i<=3;i++){ for(j=0;j<=3;j++){ for(k=0;k<=4;k++){ cout<<numbers[i][j][k]; } cout<<endl; } } return 0; } I am trying to print out the numbers but I am getting negative numbers instead. Also, I was trying to get the length of the array to use in the loop with int arraylength = sizeof(numbers)/sizeof(int); With this I am getting the total length of the array, but how can I get the nested array length?
Indices of arrays in C++ (and C) are 0..(n-1), where n is the number of elements. Therefore your loops should use < instead of <= (to avoid accessing memory out of bounds): for(i=0;i<3;i++){ for(j=0;j<3;j++){ for(k=0;k<4;k++){ // ... } } } In order to get the size of the nested arrays, you can use: std::cout << sizeof(numbers[0]) << std::endl; std::cout << sizeof(numbers[0][0]) << std::endl; Output: 48 16 Side notes: In C++ it is usually recommended to use std::vector instead of raw C arrays. In order to manage multidimensional arrays I always prefer to keep a 1D std::vector, and manage the indices manually. Look for strided arrays for more details. You can see an example for 2D in my answer here: Two-dimensional dynamic array pointer access. Better to avoid using namespace std - see here Why is "using namespace std;" considered bad practice?.
74,050,524
74,050,673
What does the --gpu-architecture (-arch) flag of NVCC do?
I am a beginner at CUDA and I encountered a somewhat confusing behavior of NVCC when trying out this simple "hello world from gpu" example: // hello_world.cu #include <cstdio> __global__ void hello_world() { int i = threadIdx.x; printf("hello world from thread %d\n", i); } int main() { hello_world<<<1, 10>>>(); cudaDeviceSynchronize(); printf("Execution ends\n"); } If compiled with nvcc hello_world.cu the output is: hello world from thread 0 hello world from thread 1 hello world from thread 2 hello world from thread 3 hello world from thread 4 hello world from thread 5 hello world from thread 6 hello world from thread 7 hello world from thread 8 hello world from thread 9 Execution ends However, if compiled with: nvcc hello_world.cu -arch=sm_86 Then the output is only Execution ends I thought -arch=sm_86 was only to specify the compute architecture, but it seems to change the behavior of the program as well. Why? I am using RTX2060 and NVCC 11.1. A note: This is exercise 1-4 from Professional CUDA C Programming by John Cheng et al., which asks the reader to see what happens when the program is compiled with/without the -arch flag.
The -arch flag of NVCC controls the minimum compute capability that your program will require from the GPU in order to run properly. As you can see here, the RTX 2060 compute capability is 7.5 (i.e. sm_75). This means that it will not be able to run code built for a higher capability (like sm_86). You can use -arch=sm_75 to specify this compute capability to NVCC. You can use cudaGetLastError to check if there was an error launching the kernel.
74,050,878
74,051,835
Why non-volatile T with conversion from volatile T to T should be non-trivial?
struct T { int a; T() = default; T(const T &) = default; T(const volatile T &src) { a = src.a; } }; If we don't provide the copy ctor which takes a cvref parameter, then T is trivial, not otherwise. By C++ standard, looks like it's expected. I can understand volatile T is not trivial, because a volatile type may need different copying method. But, I don't quite understand why a non-volatile type T which has an user-provided conversion from volatile T to T should be considered as non-trivial as well? Any comment would be appreciated.
I don't disagree with Ted Lyngmo's answer, but I think this may provide more context: A type is trivial if: it has a trivial default constructor; [*] every eligible copy constructor, move constructor, copy-assignment operator, or move-assignment operator it has is trivial, and moreover it has at least one of these; and its destructor is trivial. Your type is nontrivial because it doesn't satisfy condition (2). It has one eligible copy constructor that's trivial, and one that isn't. To me, this just pushes the question back a step (although maybe to a real expert all the following is obvious): why can't we write struct T { int a; T() = default; T(const T &) = default; T(const volatile T &src) = default; }; If this were allowed, presumably the generated copy constructor for a const volatile reference would match what you've written here, and then T would satisfy the definition of a trivial type. The problem with this is that it would mean that any time you write a flat struct it would automatically come equipped with a copy constructor taking a volatile instance, and that could be dangerous. Suppose we have a struct like struct X { char data[256]; int num_reads; int num_writes; }; and we have a volatile X instance mapped to memory in such a way that every time we read some data[i] it causes num_reads to increment. This is a perfectly legitimate situation when we're using volatile, and a situation in which we obviously don't want the compiler generating a default copy constructor. In the other direction, I guess we could ask why the constructor taking a const volatile reference is even considered a copy constructor at all. If you do provide such a constructor, it can be used to copy a non-volatile instance (assuming you didn't also provide another constructor for that case), which is maybe a point in favor of calling it a copy constructor. 
On the other hand, I guess it wouldn't be that hard to write something like S(const S& s) : S(static_cast<const volatile S&>(s)) { } to forward to it. (But there's a fair chance I'm missing some good reason that we need to regard this as a copy constructor to make something work properly -- maybe to let generic collection types take a volatile type parameter?) That said, if this is causing you a problem, and you don't need the constructor taking a const volatile reference to count as a copy constructor, it would be easy enough to write this: struct T { int a; T() = default; T(const T &) = default; T(const volatile T &src, bool) { a = src.a; } }; Unfortunately you can't go a step further and provide a default argument for that dummy parameter, because that makes it a copy constructor again! But here's a version that does let you get away with a little more: struct T { int a; T() = default; T(const T &) = default; template<int = 0> T(const volatile T &src) { a = src.a; } }; This works because template constructors do not count as copy constructors. I'm not sure of the exact rationale there, but my assumption is that in general it's probably equivalent to the halting problem to determine if a template constructor has some set of arguments that give it the same signature as a copy constructor. Assuming that's the case, it would make it uncomputable in general to determine whether a type was trivially copyable or not, which would kind of blunt the usefulness of the concept. [*] Technically, the condition is actually that every eligible default constructor is trivial, and moreover there's at least one of these. What's the difference? A type can have more than one default constructor: struct S { S() = default; S(int x = 0) { } };
74,050,975
74,051,203
Can someone please tell what is wrong in this(Run time error)(want to get the maximum number out of all four integers)
This question is from the HackerRank Functions in C++ section. I am getting the answer that I want, but the output is repeated so many times that I have to stop the code from running manually #include <iostream> #include <cstdio> using namespace std; int max_of_four(int a,int b,int c,int d){ if (a>b){ cout<<a; }else if(b>c){ cout<<b; }else if(c>d){ cout<<c; }else if(d>c){ cout<<d; } return max_of_four( a, b, c, d); } int main() { int a, b, c, d; scanf("%d %d %d %d", &a, &b, &c, &d); int ans = max_of_four(a, b, c, d); printf("%d", ans); return 0; }
The problem is that you have written max_of_four() as a recursive function. It calls itself over and over in an endless loop, and it is printing on each iteration. The function should not be printing anything at all (the caller is printing the value that is returned), and it certainly should not be calling itself at all. Also, the logic is just plain wrong. It doesn't actually report the largest value of the 4 values given. For instance, if a>b is true then it doesn't even consider the values of c and d at all, which might be larger than a. And similarly, if b>c is true then the value of d is not considered. The correct logic would look more like this: int max_of_four(int a,int b,int c,int d){ int max_value = a; if (b > max_value){ max_value = b; } if (c > max_value){ max_value = c; } if (d > max_value){ max_value = d; } return max_value; } Which can be simplified using the std::max() algorithm, eg: #include <algorithm> int max_of_four(int a,int b,int c,int d){ return std::max(a, std::max(b, std::max(c, d))); // Or: return std::max({a,b,c,d}); } Alternatively, using the standard std::max_element() algorithm, eg: #include <algorithm> int max_of_four(int a,int b,int c,int d){ int arr[] = {a,b,c,d}; return *std::max_element(arr, arr+4); }
74,051,474
74,054,090
How do I execute an existing binary that's in the same location as the main cpp file?
I'm making a program that depends heavily on another C binary. Since I don't feel like learning how to use headers and whatnot yet, I wanted to take the simple route and just run a pre-compiled binary from the same folder in my cpp program. Right now, my folder is set up like this: It has main.cpp, CMakeLists.txt, and the ibootim binary. Inside of main.cpp, how would I call ibootim? Coding in Python has taught me that I should be able to run system("./ibootim"); but that doesn't work. Terminal tells me that there's no file found. Obviously if I were to put the entire path to that binary, it would work. However, if other users were to download this, it would not work for them since they don't have the same computer, username, etc. as I do. So my first question, my primary concern, would be: How do you run another binary that's in the same directory in a C++ program? If this isn't possible for some reason, then I can try downloading ibootim from source and maybe using the header file: How do you execute code from a C header in a C++ program?
In C++, if you want to run another binary you can use the std::system() function (declared in <cstdlib>). If the binary is not on the PATH you can invoke it with a relative path like this: #include <cstdlib> #include <iostream> int main(){ #if _WIN32 std::system("./mybinarie.exe"); #else std::system("./mybinarie"); #endif return 0; } Keep in mind that "./mybinarie" is resolved against the process's current working directory, which is wherever the user launched your program from, not necessarily the directory the executable itself lives in, so this only works when the program is started from that folder.
74,052,463
74,052,581
std::string::erase not working when using it inside loop
I have written a simple C++ code which takes an integer as input, converts it into a string, and when iterating through it erases any '0' it encounters. The program successfully removes multiple zeroes only when they are not consecutive. Can anyone help me understand why it fails when the zeroes are consecutive. #include <iostream> #include <string> using namespace std; int main() { int N; cin >> N; string str = to_string(N); auto it = str.begin(); for (it; it < str.end(); ++it) { if (*it == '0') { str.erase(it); } } cout << str << endl; } Input = 1509 --> output 159 Input = 10509 --> output 159 Input = 15009 --> output 1509 Input = 105009--> output 1509
When manipulating the string while looping over it, you have to take care not to lose your position in it. str.erase(it) invalidates it and shifts the following characters one position to the left; since your loop then increments it, the character right after each erased zero is skipped, which is why consecutive zeroes survive. Use the iterator returned by erase() (and only increment when you did not erase), or better, use an STL algorithm for this (as also pointed to in the comments; problems like this are glowing examples of why to use std-functions).
74,053,870
74,054,629
Does self-defined headers count as preprocessor directives
All statements with the symbol # are known as preprocessor directives. My question is: do self-defined headers count as preprocessor directives? # include "example.cpp" // Does this count as a preprocessor directive, or are only the header files defined by programmers allowed to be called preprocessor directives?
All the statements with the symbol # are known as preprocessor directive # include "example.cpp" // Does it count as preprocessor directive Yes, it is a preprocessor directive. This line starts with #.
74,053,930
74,054,780
Same helper methods for two different classes (design problem) C++
I have a class hierarchy as follows: struct Arg { Info someSpecificInfo; OtherInfo anotherInfo } class BaseEvaluator { public: BaseEvaluator (Info info) {}; virtual Result evaluate(); } class SpecificEvaluator1 : public BaseEvaluator { public: Derived(Info info): Base(info) {}; Result evaluate(); } class SpecificEvaluator2: public BaseEvaluator { public: SpecificEvaluator2(Info info): Base(info) {}; Result evaluate(); } However, I currently need to implement two specific Evaluator classes. One of which uses the Info object (essentially another SpecificEvaluator3 class). However, the other Evaluator requires the entire Arg object. Now I know that for OOP, I should not be mixing these two classes, and essentially I should change BaseEvaluator to InfoEvaluator and create another ArgEvaluator class, and maybe both of these can inherit from a base BaseEvaluator class that has the evaluate() method. However, the problem is that these two Evaluator classes largely utilize the same helper methods, where these helper methods utilize the Info member variable. I can visualize my two Evaluator classes as follows: class ArgSpecificEvaluator { public: ArgSpecificEvaluator (Arg arg) {}; Result evaluate(); // Implementation Uses OtherInfo protected: void helperMethod1(Info); void helperMethod2(Info); } class SpecificEvaluator3 : BaseEvaluator { public: SpecificEvaluator3(Info info): {}; Result evaluate(); protected: void helperMethod1(Info); void helperMethod2(Info); } As you can see there is repeated code for the helper methods, since for ArgSpecificEvaluator the helper method is the same when we use arg.Info instead of just info as arguments directly. In such a case, how do I reconcile this? Do I switch to just using Arg for all the classes? The reason I do not do so is because there is no need for the SpecificEvaluator classes to even touch OtherInfo. 
SpecificEvaluator3 is kind of a special case where it definitely is a type of evaluator, but has characteristics of the ArgSpecificEvaluator as they both utilize the same helper methods. Initially, the helperMethods() are supposed to have no arguments and utilize the member variable Info but this scenario is much harder to reconcile so I decided to abstract and put it in the arguments instead. In a way, what I want to achieve is the Java equivalent of SpecificEvaluator3 inheriting from base class, but implementing an interface with the helper methods. How do I approach this in C++? Multiple Inheritance?
I think what you are trying to achieve is either something like this: class ArgSpecificEvaluator : public SpecificEvaluator3 { public: ArgSpecificEvaluator (Arg arg) : Base(arg.info) {}; Result evaluate() override; // Implementation Uses OtherInfo } class SpecificEvaluator3 : BaseEvaluator { public: SpecificEvaluator3(Info info): {}; virtual Result evaluate(); protected: void helperMethod1(Info); void helperMethod2(Info); } Or this: class Helper { protected: Helper(Whatever youneed); void helperMethod1(Info); void helperMethod2(Info); } class ArgSpecificEvaluator : public BaseEvaluator, protected Helper { public: ArgSpecificEvaluator (Arg arg) {}; Result evaluate() override; // Implementation Uses OtherInfo } class SpecificEvaluator3 : public BaseEvaluator , protected Helper { public: SpecificEvaluator3(Info info): {}; Result evaluate() override; } Note that I set ArgSpecificEvaluator to also inherit from the base BaseEvaluator because it has the same evaluate function. In general, I try to get around the caveat of inheritance by having an instance of a common class as a member rather than inheriting from a class. class Helper { public: Helper(Whatever youneed); void helperMethod1(Info); void helperMethod2(Info); } class ArgSpecificEvaluator : public BaseEvaluator { public: ArgSpecificEvaluator (Arg arg) {}; Result evaluate() override; // Implementation Uses OtherInfo protected: Helper helper; } class SpecificEvaluator3 : public BaseEvaluator { public: SpecificEvaluator3(Info info): {}; Result evaluate() override; protected: Helper helper; } I would imagine that using Arg for all classes and having no value or a default value in some cases is probably not a bad solution either.
74,053,939
74,054,014
Question about typecast behavior in arithmetic
I have the following code. My main question lies in line 5. int x1 = extRes1.at(1).toInt(); //A fairly large int value. This is from Qt, where extRes is a QStringList. the key is, the function returns an int value. int x2 = extRes2.at(1).toInt(); int y1 = extRes1.at(2).toInt(); int y2 = extRes2.at(2).toInt(); double c = (double)(y2*x1-y1*x2)/(x1-x2); //Typecasting, as I want this arithmetic to return a floating point properly. My question is, what is the exact behavior of the typecasting on line 5? Based on what I've found on the topic so far, I believe that the result of Line 5 RHS (y2 *x1-y1 * x2)/(x1-x2) is represented by a double. But does typecasting work by turning all individual elements (such as y2, x1) in the arithmetic into the type (in this case double)? Or does it work by only converting the result of the final solution? I am aware that on a technical level, my issue can be solved by converting the preexisting ints to doubles. Please let me know if more information is required.
Only the result of (y2*x1-y1*x2) (which is an int) is converted to double. This is due to the precedence of the casting operator: a cast has higher precedence than all "normal" arithmetic operations like multiplication and division. Then this double is divided by the result of (x1-x2) (an int promoted to double for the division) using floating-point division, yielding the double result assigned to c.
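A minimal sketch illustrating the difference (the input values are chosen arbitrarily for demonstration, not taken from the question):

```cpp
#include <cassert>

// Cast binds to the parenthesized numerator only: the numerator is computed
// in int arithmetic, then converted to double before the division.
double with_cast(int x1, int x2, int y1, int y2) {
    return (double)(y2 * x1 - y1 * x2) / (x1 - x2);
}

// Same expression without the cast: int/int division truncates the result
// before it is ever stored in the double.
double without_cast(int x1, int x2, int y1, int y2) {
    return (y2 * x1 - y1 * x2) / (x1 - x2);
}
```

With x1=7, x2=4, y1=2, y2=3 the numerator is 13 and the denominator 3, so the cast version yields 4.333… while the uncast version yields 4.0.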
74,054,832
74,055,451
OpenCV detect which pins are bent
I have this image of a pin header, and I need to detect if there are bent pins in the header using OpenCV. UPDATE, solved Thanks to Nick, I have made something that works pretty well, not perfect but okay! I use the findContours function to find all contours. Then I loop over all the items and find the minAreaRect, and draw a box of the given size. When a box has a width greater than a set threshold, the pin is too bent (out of spec). for (const auto &entry: fs::directory_iterator(SAMPLES)) { try { // Load src image src = imread(entry.path(), IMREAD_COLOR); // Blur medianBlur(src, blurred, 3); // Set threshold threshold(blurred, blurred, 100, 255, cv::THRESH_BINARY); // Edge detection Canny(blurred, detected_edges, thres1, thres2, 3); // imshow("blurred", blurred); vector<vector<Point> > contours; // Find all the contours in the image findContours(detected_edges, contours, RETR_TREE, CHAIN_APPROX_SIMPLE); vector<RotatedRect> minRect(contours.size()); vector<vector<Point> > contours_poly(contours.size()); vector<Rect> boundRect(contours.size()); int bendPins = 0; for (size_t i = 0; i < contours.size(); i++) { // bind all shapes to the vectors minRect[i] = minAreaRect(contours[i]); approxPolyDP(contours[i], contours_poly[i], 3, true); boundRect[i] = boundingRect(contours_poly[i]); // Draw the min area rect need to fill the contour Rect rect(boundRect[i].tl(), boundRect[i].br()); rectangle(src, rect, Scalar(0, 0, 255), 2); // When a pin is too bent if (rect.width > threshold_bend) { Point centerRect = (boundRect[i].br() + boundRect[i].tl()) * 0.5; circle(src, centerRect, 20, Scalar(255, 0, 255), 2); bendPins++; } // Draw a rect around the pin, bent or not Point2f rect_points[4]; minRect[i].points(rect_points); for (int j = 0; j < 4; j++) { line(src, rect_points[j], rect_points[(j + 1) % 4], Scalar(0, 255, 255), 1); } } char buffer[100]; snprintf(buffer, 100, "Found bent pin(s) : %d", bendPins); putText(src, buffer, Point(10, 25), FONT_HERSHEY_SIMPLEX, 0.75, Scalar(255, 255, 255), 2); imshow(entry.path().filename(), src); waitKey(); } catch (const std::exception &e) { cout << e.what() << endl; } } The result:
Have a look at cv::findContours. You should be able to extract the pins with that, maybe binarize first with cv::threshold(). Then using center-of-mass and the moment-of-area for the contours found, you can describe the position and angle of the pins. Or just using the bounding rectangle might even be enough.
74,055,345
74,063,706
Compact form for read and return frame from VideoCapture
I have for example this easy function, but I would like to make it more compact; do you have any suggestions for me? VideoCapture camera = VideoCapture(0); cv::Mat& OpenCvCamera::getFrame() { Mat frame; camera >> frame; return frame; } I'd like to make it inline without using the temporary variable "frame". Is it possible?
It looks like you want to hide the existence of the VideoCapture object. If so, do just that, i.e. just wrap VideoCapture::read(). No other change will be needed. //This object is invisible to the function user. VideoCapture camera = VideoCapture(0); //The type of this function (argument and return) is the same as VideoCapture::read(). bool OpenCvCamera::getFrame( cv::Mat &frame ) { return camera.read( frame ); }
74,056,712
74,092,708
parsing packet to get application layer protocols such as http and tls using the dpdk packet framework without being computationally expensive
I have the following packet inspection function that parses transport layer protocols such as TCP and UDP. I need to get deeper into the packet and get application layer protocols such as HTTP and TLS. My current theory is to implement a pattern matching function on the payload but that would be computationally expensive. Any leads on how to proceed? void inspect_packet(struct rte_mbuf *pkt, unsigned port_id, int i) { uint8_t *data = (uint8_t *)(pkt->buf_addr + pkt->data_off); unsigned int offset = 0; struct rte_ether_hdr *eth = (struct rte_ether_hdr *)data; offset += sizeof(struct rte_ether_hdr); a_counter[i].pkts_counter++; a_counter[i].bits_counter += pkt->pkt_len; if (eth->ether_type != htons(RTE_ETHER_TYPE_IPV4) && eth->ether_type != htons(RTE_ETHER_TYPE_IPV6) && eth->ether_type != htons(RTE_ETHER_TYPE_ARP)) { return; } if (eth->ether_type == RTE_ETHER_TYPE_ARP) { a_counter[i].arp_counter++; return; } struct rte_ipv4_hdr *iph = (struct rte_ipv4_hdr *)(data + offset); struct rte_ipv6_hdr *iph6 = (struct rte_ipv6_hdr *)(data + offset); struct rte_tcp_hdr *tcph = NULL; struct rte_udp_hdr *udph = NULL; if(eth->ether_type == htons(RTE_ETHER_TYPE_IPV4)) { offset += 20; //header length switch (iph->next_proto_id) { case PROTOCOL_TCP: a_counter[i].tcp_counter++; tcph = (struct rte_tcp_hdr *)(data + offset); break; case PROTOCOL_UDP: a_counter[i].udp_counter++; udph = (struct rte_udp_hdr *)(data + offset); break; default: break; } } else if (eth->ether_type == htons(RTE_ETHER_TYPE_IPV6)) { offset += 40; //header length switch (iph6->proto) { case PROTOCOL_TCP: tcph = (struct rte_tcp_hdr *)(data + offset); break; case PROTOCOL_UDP: udph = (struct rte_udp_hdr *)(data + offset); break; } } data = nullptr; }
[based on the live discussion] Question: My current theory is to implement a pattern-matching function on the payload but that would be computationally expensive. Any leads on how to proceed? Answer: The application logic to identify the protocols and applications is a fixed cost. So the first goal is to ensure the pre-processing and post-processing stages cause minimal loss. These vary between physical and virtual NICs too. So let me explain a few pointers I follow for the best result. On a virtual NIC, I use memif or vhost for the best performance. If I know the specific IP, VLAN, MPLS, or VXLAN traffic to be filtered, I prefer to use the AF_XDP PMD on the virtual NIC to bifurcate the traffic. In the case of a physical NIC, use a NIC that supports PTYPES, RSS, and queue redirection (physical NIC offload). Using RSS, one can spread traffic across all queues. Combining queue redirection with RSS, one can send the desired traffic such as IPv4, IPv6, VxLAN, GRE, Geneve, or GTPu to be RSS'd onto selected queues. This allows all ARP, ND, LLDP and other packets to fall onto the default queue. Use the DPDK API rte_eth_dev_set_ptypes to enable rte_mbuf to carry ptype information from the descriptor. Ensure vector mode (AVX2 and/or AVX512) is used, with a smaller descriptor size if possible for certain NICs. With well-tuned BIOS and kernel command-line settings, lcores will now receive DMA'd RX packets with enough metadata. This reduces the extra overhead of parsing each packet, since selected traffic is sent to a specific queue and the PTYPE on the RSS queue helps with early filtering. Then divide the development into multiple stages. stage-1: rx packets, increment protocol counters, then use rte_mbuf_free without drops. stage-2: increment ref_cnt for the packets of interest and do tx_burst. stage-3: run a single instance of the optimized application parser and measure how many cycles of overhead it adds and whether there are packet drops. If packet drops are present, it means traffic on a single queue needs to be spread over multiple queues (more RX queues) in the run-to-completion model; otherwise, explore using eventdev or a flow distributor with multiple lcore threads as application parsers. Note: as pointed out, one can use an HW FPGA lookaside or a SW regex engine like Hyperscan for optimized application parsing. This works with clear text; in the case of TLS, connections for which no keys are available for termination can be bypassed by using rte_flow to carry a 32-bit marker. The above logic holds for IDS mode. In IPS mode, a requirement to preserve packet ordering sometimes comes up, so use atomic eventdev for all 3 stages.
74,057,063
74,117,751
How to build libcpr/cpr static library using mingw?
I'm trying to build libcpr/cpr on Windows with Mingw64 and the output is always a libcpr.dll libcurl-d.dll libzlib.dll file in the ./lib folder. How can I make the build provide a lib file instead of dll files?
Use CMake flag -DBUILD_SHARED_LIBS:BOOL=OFF to build static library files (*.a).
74,057,119
74,057,204
One liner tuple/pair unpack in c++ with reusing same variable multiple times
I have already seen Is there a one-liner to unpack tuple/pair into references? and know how to unpack values from tuples/pairs in a single line like the following auto [validity, table] = isFieldPresentAndSet(r, "is_federated"); here isFieldPresentAndSet returns a tuple. Now I want to reuse these two variables in multiple successive calls of isFieldPresentAndSet like the following auto [validity, table] = isFieldPresentAndSet(r, "is_federated"); auto [validity, table] = isFieldPresentAndSet(r, "gslb_sp_enabled"); and then check the values of validity and table. But this gives me a compile error because I am redefining the validity and table variables a second time. If I change the second line to [validity, table] = isFieldPresentAndSet(r, "gslb_sp_enabled"); or validity, table = isFieldPresentAndSet(r, "gslb_sp_enabled"); it still gives me a compile error. Is there any way to do this?
You can use std::tie. It returns a tuple of references, which makes the assignment possible: std::tie(validity, table) = isFieldPresentAndSet(r, "gslb_sp_enabled");
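A self-contained sketch of this pattern; isFieldPresentAndSet here is a hypothetical stand-in returning a (validity, table) tuple, not the asker's real function:

```cpp
#include <string>
#include <tuple>

// Hypothetical stand-in: reports whether the field is present/set and the
// table it belongs to.
std::tuple<bool, std::string> isFieldPresentAndSet(const std::string& field) {
    return {field == "is_federated", "config_table"};
}

// The first call declares the variables via a structured binding; later
// calls reuse the same variables via std::tie, which assigns through
// references instead of redeclaring them.
bool reuseAcrossCalls() {
    auto [validity, table] = isFieldPresentAndSet("is_federated");
    bool first = validity;  // true for "is_federated"
    std::tie(validity, table) = isFieldPresentAndSet("gslb_sp_enabled");
    return first && !validity && table == "config_table";
}
```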
74,057,507
74,067,974
Predefined instances of C++ class enum vs static
I have a class that is a bit complex to initialize. It is basically a tree structure and to create an instance the current constructor takes the root node. Nevertheless there are some instances that will be used more often than others. I would like to make it easier for the user to instantiate these faster and more easily. I was debating what the best option would be. First option: using enum to choose between different options in the constructor. enum CommonPatterns {TRIANGLE, DIAMOND}; typedef struct PatternNode { int id; vector<PatternNode*> child; } PatternNode; class Pattern { private: PatternNode root; public: //Constructor that takes the root of the tree Pattern (PatternNode root) { this->root = root; } //Constructor that takes enum to create some common instances Pattern (CommonPatterns pattern) { PatternNode predefined_root; if (pattern == CommonPatterns::TRIANGLE) { //Build tree structure for the triangle } else if (pattern == CommonPatterns::DIAMOND) { //Build tree structure for the diamond } Pattern(predefined_root); } } Second option: predefining some static instances Pattern.h enum CommonPatterns {TRIANGLE, DIAMOND}; typedef struct PatternNode { int id; vector<PatternNode*> child; } PatternNode; class Pattern { private: PatternNode root; static Pattern createTriangle(); static Pattern createDiamond(); public: //Constructor that takes the root of the tree Pattern (PatternNode root) { this->root = root; } //Predefined common instances of patterns const static Pattern TRIANGLE; const static Pattern DIAMOND; } Pattern.cc Pattern::Pattern createTriangle() { PatternNode root; //Create the tree for the triangle return Pattern(root); } Pattern::Pattern createDiamond() { PatternNode root; //Create the tree for the diamond return Pattern(root); } Pattern Pattern::TRIANGLE = Pattern::createTriangle(); Pattern Pattern::DIAMOND = Pattern::createDiamond(); I don't understand very well the implications of using static performance-wise, so I would appreciate some suggestions.
As usual when people ask for the performance benefits, the first rule of optimization of code applies: If you think you have a performance problem, measure the performance. So my (and many people's) opinion is that you should treat this problem with other things in mind, e.g. what is clearer to the user and/or the reader of the code (which is often yourself, so be extra nice to them!) or what code structure makes it easier to test. Unfortunately those are a bit a matter of opinion, so now I will share mine: Having separate functions for these seems cleaner to me. It means that for testing purposes you have more but smaller tests, which makes it easier to spot the exact problem when a test fails. Related: The constructor is smaller and hence less error prone. For the user it is extremely specific: They get a function in the class namespace whose name says what it does. If you go that route, remember to document these static functions in a way that a user will stumble upon them, e.g. mention them in the class documentation and/or the constructor documentation. Although the same holds for documentation of the enum. Lastly let me hazard a guess regarding performance: Although I don't expect any noticeable performance issues either way, the static function version has the advantage that the compiler may optimize it more easily, as it seems to depend only on compile-time data. Again, to really find out about performance, you would have to measure the performance differences or, even better, disassemble the code and see what the compiler actually did with your code.
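A minimal sketch of the named-static-factory variant the answer leans toward; the Shape class here is a hypothetical stand-in, not the asker's Pattern:

```cpp
#include <string>
#include <utility>

// The constructor stays small, and each common instance gets its own
// clearly named static factory function in the class namespace.
class Shape {
    std::string name_;
public:
    explicit Shape(std::string name) : name_(std::move(name)) {}
    static Shape triangle() { return Shape("triangle"); }
    static Shape diamond()  { return Shape("diamond"); }
    const std::string& name() const { return name_; }
};
```

A caller then writes Shape::triangle() and the name documents exactly what is being built.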
74,058,653
74,064,250
Advantage to Declaring Constructors and Destructors inside vs. outside of the class?
I've been following along a class about constructors, destructors and constructor overloading in C++. (Granted, it's from 2018, I don't know if that changes anything.) Is there any reason that he defines constructors and everything else outside of the class (still inside the same .cpp file)? What's the difference between: const std::string unk = "unknown"; const std::string prefix = "copy-of-"; class Human { std::string _name = ""; int _height = 0; int _age = 0; public: Human(); Human(const std::string& name, const int& height, const int& age); Human(const Human& right); Human& operator = (const Human& right); ~Human(); void print() const; }; Human::Human() : _name(unk), _height(0), _age(0) { puts("Default Constructor"); } Human::Human(const std::string& name, const int& height, const int& age) : _name(name), _height(height), _age(age) { puts("Constructor w/ arguments"); } Human::Human(const Human& right) { puts("Copy Constructor"); _name = prefix + right._name; _height = right._height; _age = right._age; } Human& Human::operator = (const Human& right) { puts("Copy Operator!"); if (this != &right) { _name = prefix + right._name; _height = right._height; _age = right._age; } } Human::~Human() { printf("Destructor: %s ", _name.c_str()); } void Human::print() const { printf("Hello, I'm %s, %dcm tall and %d years old.\n", _name.c_str(), _height, _age); } and const std::string unk = "unknown"; const std::string prefix = "copy-of-"; class Human { std::string _name = ""; int _height = 0; int _age = 0; public: Human() : _name(unk), _height(0), _age(0) { puts("Default Constructor"); } Human(const std::string& name, const int& height, const int& age) : _name(name), _height(height), _age(age) { puts("Constructor w/ arguments"); } Human(const Human& right) { puts("Copy Constructor"); _name = prefix + right._name; _height = right._height; _age = right._age; } Human& operator = (const Human& right) { puts("Copy Operator!"); if (this != &right) { _name = prefix + right._name; 
_height = right._height; _age = right._age; } } ~Human() { printf("Destructor: %s ", _name.c_str()); } void print() const { printf("Hello, I'm %s, %dcm tall and %d years old.\n", _name.c_str(), _height, _age); } }; since both work perfectly fine? Wouldn't it be more efficient (more readable) to declare everything inside the class on the first go?
It makes a difference if you are going to be using the class Human in (possibly many) different .cpp files. In that case the information regarding the structure of the class needs to be placed in a separate header (i.e. .h ) file. Behind the scenes, the information from the .h file is automatically copied to every .cpp file that contains a #include "Human.h" statement somewhere. This needs to be done for every .cpp file that uses the class Human before it can be compiled ( and later linked ). The information needed externally from Human.h will be the class definition, which, at a minimum, contains all data members and method declarations of the class Human. If the class definition also contains method definitions, these will be duplicated for each .cpp file in which the header is included. Now you should see why it might not be a good idea to place very long method definitions inside the class definition. In some cases duplication of methods can increase performance, but it's usually better to do that using compiler optimization settings. The problem with having too much function code in the header file is that it can greatly increase the compilation time. As constructors and destructors are usually short, there is virtually no functional performance difference in placing them inside vs outside the class. If you are developing a module that will be included as part of a much bigger piece of software though, sometimes it's a good idea to design your header file as documentation for your module's API. You don't want to overwhelm someone using your module with tons of code in the header. Instead, just document what the classes and methods do with comments and place the routines themselves, even short ones, in the .cpp file. When you are just playing around with a single-file project it makes no difference. In multi-file projects it will make a difference. That's the gist of it. 
Edited to add one more thing: When your class has private member attributes ( _age for instance is private by default inside your Human class ) that some code in a different file needs access to, the class will need a get_age() method. Something like... int get_age() { return _age; } If another file has some code like... double getAverageAge( const std::vector<Human>& staff_directory ) { int age_sum = 0; for ( const Human& human : staff_directory ) { age_sum += human.get_age(); } return static_cast<double>( age_sum ) / staff_directory.size(); } and the staff directory contains a thousand or more Human objects, there will be some performance benefit to keeping the get_age() function in the header. If get_age() is defined in the header file inside the class, the compiler will effectively duplicate the code everywhere get_age() appears, even in a different compilation unit ( i.e. code from another .cpp file that happens to include Human.h ). This is called automatic inlining. So, very trivial functions like getters or setters should stay inside the class definition in the header file, especially if they will likely be called inside a loop in some other .cpp file.
74,059,547
74,059,604
Why is this code not printing the value returned by the binarySearch() function?
#include<bits/stdc++.h> using namespace std; int binarySearch(int [], int, int, int); int main() { int n, ar[50], givensum; cout << "Enter the size of the array: "; cin >> n; for(int i = 0; i<n; i++) { cout << "ar[" << i << "] = "; cin >> ar[i]; } cout << "Enter the given sum: "; cin >> givensum; cout << "The closest sum possible is: " << binarySearch(ar, 0, n-1, givensum) << endl; } int binarySearch(int arr[], int l, int r, int key) { int mid = l+(r-l)/2; while(l<=r) { if(arr[mid]==key) return arr[mid]+1; else if(arr[mid] > key) r = mid-1; else l = mid+1; } return arr[mid]; } The code is not printing the value returned by the function. Is the code wrong or the compiler is nuts? I tried storing the return value in another variable but it didn't work out. My interview for Blueflame Labs is scheduled for tomorrow. PLS HELP!!
Your binary search algorithm itself is wrong: it's stuck in an infinite loop, because mid is never updated inside the loop. Corrected code is as follows (note it also returns arr[mid] on a match rather than arr[mid]+1): int binarySearch(int arr[], int l, int r, int key) { int mid = l+(r-l)/2; while(l<=r) { if(arr[mid]==key) return arr[mid]; else if(arr[mid] > key) r = mid-1; else l = mid+1; mid = l+(r-l)/2; //update the mid point so you're checking new points } return arr[mid]; }
74,060,092
74,060,370
How to find if a program is installed on WIndows via command line/C++
I'm writing a program in C++ and at one point I want to open a file with a certain program (either Libreoffice or Word, depending on what is installed on the pc). For that I need to check which program is installed on the pc first. I'm usually using linux and for that I have found if (!system("which libreoffice --writer > /dev/null 2>&1")) { const char* command = "libreoffice --writer myfile.rtf &"; system(command); } which works perfectly. However, I cannot figure out how to do the same for Windows (the program is meant to run on a Windows pc). I know that I can query if a program is installed in Windows using the where command, however apparently I don't quite understand how to use it, for I cannot get it to work for me. Help would be very appreciated.
You do not need to check which program is installed on this PC before executing a data file on Windows. You can use ShellExecute directly on a data file, and the system will find the program that has been associated with that file type, and execute it appropriately. Depending on your needs, you may prefer to use ShellExecuteEx instead. In particular, if you want to find when the child process finishes execution (or similar), ShellExecuteEx gives you a handle to the child process, which ShellExecute does not. If you really want to find the executable associated with a data file (even though it's unnecessary for the case you've cited), you can use FindExecutable to do that.
74,060,402
74,061,451
Accessing private members inside a MATCHER
I am verifying the private members of Object (Values storing the internal state of the class) via a MATCHER in a unit test however I have the following concern: I created GetValueA() and GetValueB() public interfaces solely so that unit test could access inside a MATCHER. Doesn't sound like a right idea (specially if it's not supposed to be publicly exposed) but is there a way to access valueA and valueB inside a MATCHER somehow without having to create public methods? (could be set to private/protected perhaps so it's not publicly exposed) MATCHER_P2 could be brought inside the Object class but how would the caller invoke it? Live sample template<typename T> class Object { // internal to the class struct Values { int valueA = 100; int valueB = 0; }; Values values = {}; T otherStuff; public: // only exposing for the sake of Unit test access int GetValueA() const { return values.valueA; } int GetValueB() const { return values.valueA; } }; MATCHER_P2(Match, m1, m2, "") { return ExplainMatchResult(m1, arg.GetValueA(), result_listener) && ExplainMatchResult(m2, arg.GetValueB(), result_listener); } class UnitTest { TEST(UnitTest, testA) { Object<int> object; EXPECT_THAT(object, Match(Ne(0), Eq(0))); } };
You can pass the private member to the matcher and make the test a friend of the class; use the FRIEND_TEST macro rather than a plain friend class declaration. template<typename T> class Object { // internal to the class struct Values { int valueA = 42; int valueB = 0; }; FRIEND_TEST(UnitTest, testA); Values values = {}; T otherStuff; }; MATCHER_P2(Match, m1, m2, "") { return ExplainMatchResult(m1, arg.valueA, result_listener) && ExplainMatchResult(m2, arg.valueB, result_listener); } TEST(UnitTest, testA) { Object<int> object; EXPECT_THAT(object.values, Match(Ne(0), Eq(0))); } Live example: https://godbolt.org/z/WWr3vrWsn
74,061,116
74,061,450
How to resolve ambiguity in template base class template method?
I am trying to figure out how to resolve an ambiguity problem with function names in base classes. #include <type_traits> template <typename T, typename PARENT> class BaseA { public: BaseA(PARENT& p) : _parent(p) {} public: template <typename P_ = PARENT> auto& parent() { if constexpr (std::is_same_v<P_, PARENT>) { return _parent; } else { return _parent.template parent<P_>(); } } private: PARENT& _parent; }; class AbstractBaseB { }; class BaseB : public AbstractBaseB { public: AbstractBaseB* parent() { return _parent; } private: AbstractBaseB* _parent; }; class Z { public: void foo() {} }; class Y : public BaseA<Y, Z>, public BaseB { public: Y(Z& z) : BaseA(z) { } void foo() {} }; class X : public BaseA<X, Y>, public BaseB { public: X(Y& y) : BaseA(y) { //This will compile BaseA::parent().foo(); //This will NOT compile BaseA::parent<Z>().foo(); } }; int main() { Z z; Y y(z); X x(y); } This is a very specific/odd use case, so I have a working example here: https://cppinsights.io/s/08afbad9 To get it to compile, just comment out line 58. With 58 enabled, this is where I get the ambiguity which is due to line 16: return _parent.template parent<P_>(); Since _parent is of a different type than this instance of the BaseA template, I can't just do: return _parent.template BaseA::parent<P_>(); like I did on line 57. How do I go about fixing this? For those who ask, the purpose of the templated parent method is to get the "Nth" nested parent without having to do something like parent().parent().parent()
If you want member function (templates) of the same name to be considered from multiple base classes you need to explicitly import them into the derived class scope: class Y : public BaseA<Y, Z>, public BaseB { public: /*...*/ using BaseA::parent; using BaseB::parent; };
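The same rule in a stripped-down, standalone form (hypothetical bases A and B, unrelated to the asker's code):

```cpp
// Without the using-declarations, name lookup for d.f(...) finds f in two
// different base classes and stops with an ambiguity error. The
// using-declarations merge both names into D's scope, so normal overload
// resolution can pick between them by arguments.
struct A { int f() { return 1; } };
struct B { int f(int x) { return x; } };

struct D : A, B {
    using A::f;
    using B::f;
};
```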
74,061,126
74,061,246
C++ - segmentation fault when using vectors
I was working on a problem from LeetCode about finding intersection elements in two different arrays, and when I pass the input it throws a segmentation fault. Here is my solution (the solve() function only): vector<int> solve(){ int n,m; cin >> n >> m; vector<int> arr1, arr2; for (int i = 0; i < n; i++){ cin >> arr1[i]; } for (int i = 0; i < m; i++){ cin >> arr2[i]; } sort(arr1.begin(), arr1.end()); sort(arr2.begin(), arr2.end()); vector<int> result; int i = 0, j = 0; while (i < n && j < m){ if (arr1[i] == arr2[j]){ result.push_back(arr1[i]); i++; j++; } else if(arr1[i] > arr2[j]){ j++; } else { i++; } } return result; }
This happens because your vectors, arr1 and arr2, start off with a length of 0 each. When you try to set a certain index of them, this assumes that the index is already allocated, meaning the vector is long enough to contain that index, which it isn't in your code. To solve this, the best solution would be to simply call push_back instead of indexing the vector. This works because push_back will allocate more memory if needed. for (int i = 0; i < n; i++) { int x; cin >> x; arr1.push_back(x); }
74,061,465
74,062,448
CUDA functions failing after initializing Spinnaker API
I am using the C++ Spinnaker API to capture images from cameras, and then using CUDA to process the images. The CUDA code works if I do not call the Spinnaker API, but once I call the Spinnaker API various CUDA functions start crashing (such as cudaMemset, or cudaMemcpy, or my custom CUDA kernels). The Spinnaker API works if I do not use CUDA code, so it is like the two API's cannot co-exist. The error is consistently: terminate called after throwing an instance of 'std::system_error' what(): Bad address The code looks like this: // get cameras SystemPtr system = System::GetInstance(); CameraList camList = system->GetCameras(); // fails on any of the following CUDA functions cudaMemset(...); myKernel(); cudaMemcpy(...); Any ideas what is going on here?
Downgrading from spinnaker-2.6.0.160 to spinnaker-2.0.0.146 fixed this issue on ARM64 Ubuntu 18.04.
74,061,590
74,071,302
QtSql Not Showing Query Results Anywhere
My SQL results were showing on Label but then I decided to drop that specific and made a new one. Then I changed the query to be executed but nothing shows up anymore! I have been debugging for the past 2 hours to no avail. I used boolean to check if the query was executed and it showed '1' however, when I print the query.value, it shows nothing. QSqlQuery query; query.exec("SELECT * FROM artists;"); QString name = query.value(0).toString(); ui->label_2->setText(name); I can also confirm that my database is connected since it shows '1' on if (db.open()). I have imported sql in the .pro file, obviously. I use mariadb => QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL"); Could it be a case of permission denied?
As drescherjm pointed out, query.next() must be used to move the query's pointer forward: after exec(), a QSqlQuery is positioned before the first record, so query.value() returns nothing until next() is called to advance to the first record.
74,061,650
74,061,731
How can I constrain one template's variadic 'type-parameters' with various specializations of some certain second template in C++?
I need to have a template struct Lists, whose variadic parameters can only be the types represented by specializations of some certain container template List. Different Lists instances should be able to depend on different (but fixed for one instance) templates List. That said, I want to produce the code, equivalent to following pseudocode: template<typename...> struct Lists; // pseudocode. Is there a way to do something similar? template<template<typename...> typename List, typename... Types> struct Lists<List<Types...>...>{}; int main() { Lists<std::pair<int, char>,std::pair<double, char>> lists; Lists<std::tuple<int, char>,std::tuple<double, char>> lists; //Lists<std::pair<int, char>,std::tuple<double, char>> lists; // must cause a compilation error return 0; } How can I do that in modern C++?
template <template <typename...> typename, typename> inline constexpr bool is_specialization_of = false; template <template <typename...> typename Template, typename... Ts> inline constexpr bool is_specialization_of<Template, Template<Ts...>> = true; template <typename...> struct Lists; template <template <typename...> typename Template, typename... Ts, typename... Others> requires(is_specialization_of<Template, Others> && ...) struct Lists<Template<Ts...>, Others...> {}; int main() { Lists<std::pair<int, char>, std::pair<double, char>> lists0; // OK Lists<std::tuple<int, char>, std::tuple<double, char>> lists1; // OK /* Lists<std::pair<int, char>,std::tuple<double, char>> lists2; */ // ERROR } live example on godbolt.org
74,062,481
74,072,393
How to use python subprocess to run c++ executable file in another folder with providing arguments, inside a python script?
I am running a python script file in which it should run a c++ executable file from another folder with some arguments. The executable file is located in root home ubuntu i.e. (~/camera_intrinsic_calibration) folder Generally I run on the terminal in that folder location as follows: ./pngCamCalStep1 /home/nvi/Perception/09-22-22/data/60_left/%04d.png 12 8 0.05 where ./pngcamcalstep1 is my c++ executable file and others are arguments needed to be passed. Hence in the script file I tried the following using subprocess but none of them work: result = subprocess.call(["./pngCamCalStep1", "home/nvi/Perception/sensor_0/left-%04d.png", "12" ,"8", "0.05"], check =True, capture_output=True, cwd='/home/nvi/camera_intrinsic_calibration/',shell =True) or result = subprocess.run(shlex.split("./pngCamCalStep1 home/nvi/Perception/sensor_0/left-%04d.png 12 8 0.05"), check =True, capture_output=True, cwd='/home/nvi/camera_intrinsic_calibration/', shell =True) It doesn't work and I get output as : Traceback (most recent call last): File "/home/nvi/catkin_ws/src/camera_calibration/src/camera_calibration/camera_calibrator.py", line 340, in on_mouse self.c.do_calibration() File "/home/nvi/catkin_ws/src/camera_calibration/src/camera_calibration/calibrator.py", line 1280, in do_calibration result = subprocess.call(["./pngCamCalStep1", "home/nvi/Perception/sensor_0/left-%04d.png", "12" ,"8", "0.05"], check =True, capture_output=True, cwd='/home/nvi/camera_intrinsic_calibration/',shell =True) File "/usr/lib/python3.8/subprocess.py", line 340, in call with Popen(*popenargs, **kwargs) as p: TypeError: __init__() got an unexpected keyword argument 'check' Can anyone please let me know how to solve this problem? What is the right command to call or run a C++ executable file from another folder with providing it's input arguments?
It worked once I kept the whole command as a single string (which is what shell=True expects) and removed check and capture_output, keyword arguments that subprocess.call does not accept: subprocess.call(["./pngCamCalStep1 home/nvi/Perception/sensor_0/left-%04d.png 12 8 0.05"], cwd='/home/nvi/camera_intrinsic_calibration/', shell=True)
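A cleaner alternative (a sketch, not tested against the real binary): subprocess.run does accept check and capture_output, and with the default shell=False the program and each argument go in as separate list items. Since ./pngCamCalStep1 only exists on the asker's machine, the example substitutes the Python interpreter for it.

```python
import subprocess
import sys

# Sketch: subprocess.run supports check= and capture_output=, unlike
# subprocess.call. With shell=False (the default), the program and each
# argument are separate list items. sys.executable stands in for the
# hypothetical ./pngCamCalStep1 executable from the question.
result = subprocess.run(
    [sys.executable, "-c", "print('12 8 0.05')"],
    check=True,           # raise CalledProcessError on non-zero exit
    capture_output=True,  # collect stdout/stderr
    text=True,            # decode bytes to str
    cwd="/",              # like cwd='/home/nvi/camera_intrinsic_calibration/'
)
print(result.stdout.strip())
```

The real call would then look like subprocess.run(["./pngCamCalStep1", "/home/nvi/Perception/sensor_0/left-%04d.png", "12", "8", "0.05"], check=True, capture_output=True, cwd='/home/nvi/camera_intrinsic_calibration/'), with no shell=True needed.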
74,062,801
74,062,836
Why isn't a function parameter used here?
I'm going through this tutorial: https://www.learncpp.com/cpp-tutorial/how-to-design-your-first-programs/ I noticed the author didn't use a parameter in this function: int getUserInput() { std::cout << "Enter an integer "; int input{}; std::cin >> input; return input; } Would it be okay to do something like this? int getUserInput(int input) { std::cout << "Enter an integer "; std::cin >> input; return input; }
It would work, but it wouldn't make much sense. The first version of your function is used something like this: int some_number = getUserInput(); That makes sense; the caller isn't providing any input to the function, so it takes no parameters. The second version takes a parameter though, so the caller has to provide it. The function doesn't actually do anything with that value though. All of the following behave exactly the same: int some_number1 = getUserInput(0); int some_number2 = getUserInput(123456); int some_number3 = getUserInput(some_number2); It makes no sense for the caller to provide a parameter to the function since the function doesn't use it at all.
74,063,233
74,063,376
Getting the correct type information for a function returning a function-pointer that uses variadic templates
I have a variadic function defined within class Foo AddCodeChunkInner(Type, DerivativeStatus, bInlined, Format, args...); And I am trying to write a function that returns its function pointer static auto getAddCodeChunkInner(){return &Foo::AddCodeChunkInner;} However I'm getting an error stating that it "cannot deduce type for 'auto' from 'overloaded-function'. I believe the solution should look something like this: template <typename... Args> static auto getAddCodeChunkInner(Args...) -> decltype(int32 (Foo::*) (EMaterialValueType Type, EDerivativeStatus DerivativeStatus, bool bInlined, const TCHAR* Format, ...)) I'm struggling a little bit to find the correct syntax here though. I have the general idea, but my knowledge of templates is a bit lacking minimum reproducible example: class FHLSLMaterialTranslator : public FMaterialCompiler { int32 AddCodeChunkInner(uint64 Hash, const TCHAR* FormattedCode, EMaterialValueType Type, EDerivativeStatus DerivativeStatus, bool bInlined); } int32 FHLSLMaterialTranslator::AddCodeChunkInner(uint64 Hash, const TCHAR* FormattedCode, EMaterialValueType Type, EDerivativeStatus DerivativeStatus, bool bInlined) { return 1; } class myHLSLMaterialTranslator : public FHLSLMaterialTranslator { public: static auto getAddCodeChunkInner(){return &FHLSLMaterialTranslator::AddCodeChunkInner;} };
Whatever your problem is, it's nothing to do with the function being variadic or the return type of your function being inferred. The following code compiles with no problem on gcc, clang, and MSVC: struct S { void f(int, ...) { } }; static auto getF() { return &S::f; } Given the error message you're describing, it sounds like maybe the problem is instead that you have more than one function in Foo called AddCodeChunkInner. If that's the case, you can explicitly specify which overload you're talking about by doing a cast which the desired overload will be the best match for: struct T { void g(int) { } void g(int, ...) { } }; static auto getG1() { return static_cast<void(T::*)(int)>(&T::g); } static auto getG2() { return static_cast<void(T::*)(int, ...)>(&T::g); }
74,064,332
74,076,369
How to accelerate array_t construction in pybind11
I use C++ to call Python with PyTorch. C++ generates a vector and sends it to Python for neural network inference, but sending the vector is a time-consuming process: a vector containing 500000 floats takes 0.5 seconds to turn into an array_t. Is there a faster way to transfer the vector to an array_t? Any help will be appreciated! Here is the relevant part of the code: int main(){ float list[500000]; std::vector<float> v(list, list+length); py::array_t<float> args = py::cast(v); //consume 0.5 second py::module_ nd_to_tensor = py::module_::import("inference"); py::object result = nd_to_tensor.attr("inference")(args); } I also tried a second way, shown below, but it takes 1.4 seconds in Python to turn the vector into a tensor: PYBIND11_MAKE_OPAQUE(std::vector<float>); PYBIND11_EMBEDDED_MODULE(vectorbind, m) { m.doc() = "C++ type bindings created by py11bind"; py::bind_vector<std::vector<float>>(m, "Vector"); } int main(){ std::vector<float> v(list, list+length); py::module_ nd_to_tensor = py::module_::import("inference"); py::object result = nd_to_tensor.attr("inference")(&v); } Here is the Python code: def inference(): tensor = torch.Tensor(Vector)
Problem solved: py::array_t<float> args = py::array_t<float>({length}, {4}, &list[0]); Initializing the array_t directly from the buffer (shape {length}, stride {4} bytes per float element) is the best way.
74,064,467
74,064,929
Converting for loop in R to Rcpp
I've been playing around with using more efficient data structures and parallel processing and a few other things. I've made good progress getting a script from running in ~60 seconds down to running in about ~9 seconds. The one thing I can't for the life of me get my head around though is writing a loop in Rcpp. Specifically, a loop that calculates line-by-line depending on previous-line results and updates the data as it goes. Wondering if someone could convert my code into Rcpp that way I can back-engineer and figure out, with an example that I'm very familiar with, how its done. It's a loop that calculates the result of 3 variables at each line. Line 1 has to be calculated separately, and then line 2 onwards calculates based on values from the current and previous lines. This example code is just 6 lines long but my original code is many thousands: temp <- matrix(c(0, 0, 0, 2.211, 2.345, 0, 0.8978, 1.0452, 1.1524, 0.4154, 0.7102, 0.8576, 0, 0, 0, 1.7956, 1.6348, 0, rep(NA, 18)), ncol=6, nrow=6) const1 <- 0.938 for (p in 1:nrow(temp)) { if (p==1) { temp[p, 4] <- max(min(temp[p, 2], temp[p, 1]), 0) temp[p, 5] <- max(temp[p, 3] + (0 - const1), 0) temp[p, 6] <- temp[p, 1] - temp[p, 4] - temp[p, 5] } if (p>1) { temp[p, 4] <- max(min(temp[p, 2], temp[p, 1] + temp[p-1, 6]), 0) temp[p, 5] <- max(temp[p, 3] + (temp[p-1, 6] - const1), 0) temp[p, 6] <- temp[p-1, 6] + temp[p, 1] - temp[p, 4] - temp[p, 5] } } Thanks in advance, hopefully this takes someone with Rcpp skills just a minute or two!
Here is a sample Rcpp equivalent of the code: #include <Rcpp.h> using namespace Rcpp; // [[Rcpp::export]] NumericMatrix getResult(NumericMatrix x, double const1){ for (int p = 0; p < x.nrow(); p++){ if (p == 0){ x(p, 3) = std::max(std::min(x(p, 1), x(p, 0)), 0.0); x(p, 4) = std::max(x(p, 2) + (0.0 - const1), 0.0); x(p, 5) = x(p, 0) - x(p, 3) - x(p, 4); } if (p > 0){ x(p, 3) = std::max(std::min(x(p, 1), x(p, 0) + x(p - 1, 5)), 0.0); x(p, 4) = std::max(x(p, 2) + (x(p - 1, 5) - const1), 0.0); x(p, 5) = x(p - 1, 5) + x(p, 0) - x(p, 3) - x(p, 4); } } return x; } A few notes: Save this in a file and do Rcpp::sourceCpp("myCode.cpp") in your session to compile it and make it available within the session. We use NumericMatrix here to represent the matrix. You'll see that we call std::max and std::min respectively. These functions require two common data types, i.e. if we do max(x, y), both x and y must be of the same type. Numeric matrix entries are double (I believe), so you need to provide a double; hence, the change from 0 (an int in C++) to 0.0 (a double). In C++, indexing starts from 0 instead of 1. 
As such, you convert R code like temp[1, 4] to temp(0, 3) Have a look at http://adv-r.had.co.nz/Rcpp.html for more information to support your development Update: If x was a list of vectors, here's an approach: #include <Rcpp.h> using namespace Rcpp; // [[Rcpp::export]] List getResult(List x, double const1){ // Create a new list from x called `res` Rcpp::List res(x); for (int p = 0; p < x.size(); p++){ // Initiate a NumericVector `curr` with the contents of `res[p]` Rcpp::NumericVector curr(res[p]); if (p == 0){ curr(3) = std::max(std::min(curr(1), curr(0)), 0.0); curr(4) = std::max(curr(2) + (0.0 - const1), 0.0); curr(5) = curr(0) - curr(3) - curr(4); } if (p > 0){ // Initiate a NumericVector `prev` with the contents of `res[p-1]` Rcpp::NumericVector prev(res[p-1]); curr(3) = std::max(std::min(curr(1), curr(0) + prev(5)), 0.0); curr(4) = std::max(curr(2) + (prev(5) - const1), 0.0); curr(5) = prev(5) + curr(0) - curr(3) - curr(4); } } return x; }
74,064,749
74,075,130
C++ class template specialization with value template parameters - how to prefer one over another?
I have the following code: template<typename T, typename U> struct combine; template<template<typename...> typename Tpl, typename... Ts, typename... Us> struct combine< Tpl<Ts...>, Tpl<Us...> > { using type = Tpl<Ts..., Us...>; }; template<size_t Ind, size_t Curr, typename Tpl> struct pack_upto_impl; // SPECIALIZATION 1 template<size_t Matched, template<typename...> typename Tpl, typename... Ts> struct pack_upto_impl<Matched, Matched, Tpl<Ts...> > { using type = Tpl<>; }; // SPECIALIZATION 2 template<size_t Ind, size_t Curr, template<typename...> typename Tpl, typename T, typename... Ts> struct pack_upto_impl<Ind, Curr, Tpl<T, Ts...> > { using remaining_type = typename pack_upto_impl<Ind, Curr+1, Tpl<Ts...>>::type; using type = typename combine<Tpl<T>, remaining_type>::type; }; template<size_t Ind, typename Tpl> using pack_upto = pack_upto_impl<Ind, 0, Tpl >; What I want this to do is something like... using T = tuple<int, double, short, float>; pack_upto<0, T> var1; // this is tuple<> pack_upto<1, T> var2; // this is tuple<int> pack_upto<2, T> var3; // this is tuple<int, double> ... When I try to do this, I get an error about ambiguous template specialization - when the first two template parameters of pack_upto_impl are the same, the compiler doesn't get the hint that I want SPECIALIZATION 1 rather than SPECIALIZATION 2. What's the most elegant way of making this work?
First of all, some typos: The , bool in your definition of remaining_type needs to be removed. You probably wanted to write using pack_upto = pack_upto_impl<Ind, 0, Tpl >::type;. (Before C++20, you also need typename here.) The core issue here is that you want specialization 1 to be considered "more specialized" than specialization 2 so that if a set of template arguments could match either specialization, specialization 1 is chosen. As you have currently written them, specialization 1 is not more specialized than specialization 2. In order for specialization 1 to be more specialized than specialization 2, it must be the case that you could supply arbitrary values of the template arguments for specialization 1 and have the result match specialization 2 (i.e., the template arguments of specialization 2 could be successfully deduced from anything you could instantiate from specialization 1). At present that condition is not met, because if Ts... is empty in specialization 1, it won't match specialization 2, which only accepts Tpl<T, Ts...> (i.e. Tpl must have at least 1 argument, not 0). We can fix this by adding an extra specialization; call it specialization 3: template<size_t Matched, template<typename...> typename Tpl, typename T, typename... Ts> struct pack_upto_impl<Matched, Matched, Tpl<T, Ts...> > { using type = Tpl<>; }; So in this case, when we have a nonempty argument list to Tpl, this specialization will be chosen because it is more specialized than specialization 2. (When the argument list is empty, only specialization 1 matches in the first place, and there's no ambiguity.) See the complete example here on Godbolt.
74,064,946
74,064,983
counting digits in C++ using log10
#include <iostream> #include <math.h> using namespace std; int main() { int n, temp, rem, digits=0, sum=0; cout << "Enter a armstrong number: "; cin >> n; temp = n; digits = (int)log10(n) + 1; while (n != 0) { rem = n % 10; sum = sum + pow(rem, digits); n = n / 10; } if (temp == sum) { cout << "yes"; } else { cout << "not"; } } How does the " digits = (int)log10(n) + 1; " line actually calculates the digits? can anyone explain?
Math. Logarithms are basically "exponents in reverse." log10(100) is 2.0, as 10 to the second power is 100. Cast it to int and add one and you get 3, which is the number of digits. Note that this only works for positive n: log10(0) is undefined, and floating-point rounding can make the result off by one near exact powers of 10.
74,066,327
74,078,612
UML:Inheritance between template classes with parameter dependencies
Pardon my poor English. Let me just show you the situation that I'm trying to draw in a UML class diagram. template<typename TB1, typename TB2, typename TB3> class Base { ... }; template<typename TD1, typename TD2> class Derived : public Base<typename TD1, typename TD2, int> { .... }; I know how to draw a UML class diagram when a class inherits a generic class, as below (a similar question was asked before). But what should I do when the base class's template parameter type is set by the derived class's template parameter type? Just add another arrow indicating that the bound type depends on the derived class's template parameter?
In short You could show the relationship between the template Base and the template Derived either with a parameter binding (like in your picture, but between two template classes) or inheritance between template classes. But neither alternative is completely accurate regarding the C++ and the UML semantics at the same time. For this you would need to decompose the template inheritance into a binding and an inheritance. More explanations What does your C++ code mean? The C++ generalization between Derived and Base makes three things at once: it binds parameters of the Base template class (i.e. substituting TD1 for TB1, TD2 for TB2 and int for TB3); it keeps TD1 and TD2 substituable in the resulting bound class; it creates a generalization between the classes obtained by binding the parameters. For the readers who are less familiar with C++, let's illustrate this by using aliases to clarify: template<typename TB1, typename TB2, typename TB3> class Base { }; template<typename TD1, typename TD2> class Derived : public Base<TD1, TD2, int> { }; int main() { using MyDerived = Derived<string, Test>; // class corresponding to binding parameters using MyBase = Base<string, Test, int>; // also binding parameters MyBase *p = new MyDerived(); // this assignment works because the bound // MyBase generalization is a generalization // from MyDerived } So this code means that there is a generic specialization of Base into Derived which is true, whatever the parameter bindings, and in particular for the bound MyBase and MyDerived. How to show it in UML? Option 1 - binding A first possibility is to simply use <<bind>> between template classes: UML specs, section 9.3.3.1: (...) the details of how the contents are merged into a bound element are left open. (...) A bound Classifier may have contents in addition to those resulting from its bindings. 
Derived would be a bound classifier obtained by binding parameters of Base and adding "own content", including redefinitions of base elements ("overrides"). This is not wrong, but would not appropriately reflect that there is an inheritance also between bound classes obtained from Derived and bound classes obtained directly from Base. Option 2 - inheritance Another approach could be inheritance between the templates: It corresponds to the C++ semantics. But the UML section 9.9.3.2 Template classifier specializations gives another semantic to this diagram: A RedefinableTemplateSignature redefines the RedefinableTemplateSignatures of all parent Classifiers that are templates. All the formal TemplateParameters of the extended (redefined) signatures are included as formal TemplateParameters of the extending signature, along with any TemplateParameters locally specified for the extending signature. I understand this as meaning that the template parameters increase (i.e. the set would be TB1, TB2, TB3, TD1 and TD2) and there is no semantics nor notation foreseen to define a local binding of some parents elements. So UML readers might misunderstand the design intent. Option 3 - binding and inheritance The cleanest way would therefore be to decompose the binding and the inheritance (I've used a bound class that is itself templated with the new parameter name to align, but this could be overkill) :
74,067,291
74,067,412
Can't sort arrays with even numbers followed by odd numbers
I first wrote this: (which works as expected) #include<iostream> using namespace std; int main() { int a[5],cpy[5],ctr = 0; for (int i = 0 ; i<5 ; i++) { cout<<"Enter Value for index "<<i<<": "; cin>>a[i]; } for (int i = 0 ; i<5 ; i++) if (a[i]%2==0) { cpy[ctr]=a[i]; ctr++; } for (int i = 0 ; i<5 ; i++) if (a[i]%2!=0) { cpy[ctr]=a[i]; ctr++; } for (int i = 0 ; i<5 ; i++) cout<<cpy[i]<<" "; return 0; } Wanted to make it more condensed/cleaner by improving my logic, this is what I came up with: #include<iostream> using namespace std; int main() { int a[5],cpy[5],ctr = 0; for (int i = 0 ; i<5 ; i++) { cout<<"Enter Value for index "<<i<<": "; cin>>a[i]; } for (int i = 0 ; i<5 && a[i]%2==0 ; i++,ctr++) cpy[ctr]=a[i]; for (int i = 0 ; i<5 && a[i]%2!=0 ; i++,ctr++) cpy[ctr]=a[i]; for (int i = 0 ; i<5 ; i++) cout<<cpy[i]<<" "; return 0; } Expected Result: Enter Value for index 0: 1 Enter Value for index 1: 2 Enter Value for index 2: 3 Enter Value for index 3: 4 Enter Value for index 4: 5 2 4 1 3 5 What i get after running 2nd version: Enter Value for index 0: 1 Enter Value for index 1: 2 Enter Value for index 2: 3 Enter Value for index 3: 4 Enter Value for index 4: 5 1 0 24 0 0 Can you suggest where I am wrong in the 2nd block of code. The first block works correctly.
The problem here is that you never enter the first loop: the parity check is part of the loop condition, so the loop stops at the first element that fails it (your first input, 1, is odd) instead of skipping that element. A filter like this belongs in the loop body, not in the loop test. I suggest you try the following with std::vector: #include<iostream> #include<vector> using namespace std; int main() { vector<int> a, cpy; for (int i = 0 ; i<5 ; i++) { a.push_back(i+1); } for (int i = 0 ; i<5; i++) { if (a[i]%2 == 0) cpy.push_back(a.at(i)); } for (int i = 0 ; i<5 ; i++) { if (a[i]%2 != 0) cpy.push_back(a.at(i)); } for (int i = 0 ; i<5 ; i++) cout<<cpy[i]<<" "; return 0; } It works as expected, and in a more condensed manner.
74,069,277
74,069,673
Initializing an array of objects created on the heap
Given the non-trivial data structure: class MyClass { public: MyClass():x(0), p(nullptr) {} private: int x; int* p; }; Is there any guarantee provided by the C++ specification that the default constructor will be called for each instance of MyClass in the array pointed to by ptr? int main() { MyClass* ptr = new MyClass[5]; }
Is there any guarantee provided by the C++ specification that the default constructor will be called for each instance of MyClass in the array pointed to by ptr? Yes, it is guaranteed as explained below. From new expression's documentation: ::(optional) new new-type initializer(optional) (2) The object created by a new-expression is initialized according to the following rules: If type or new-type is an array type, an array of objects is initialized. If initializer is absent, each element is default-initialized. And further from default initialization documentation: new T (2) Default initialization is performed in three situations: 2) when an object with dynamic storage duration is created by a new-expression with no initializer; Moreover, The effects of default initialization are: if T is an array type, every element of the array is default-initialized; (emphasis mine) Note the very last statement which says that "every element is default-initialized", which means (in your example) the default constructor will be called as per bullet point 1: if T is a (possibly cv-qualified) non-POD (until C++11) class type, the constructors are considered and subjected to overload resolution against the empty argument list. The constructor selected (which is one of the default constructors) is called to provide the initial value for the new object; This means that it is guaranteed that the default constructor will be called in your example.
74,069,545
74,075,035
Float results difference on sin function compiled with g++ on two versions of ubuntu
I have tested my code developed on a ubuntu 18.04 bionic docker image on a ubuntu 20.04 focal docker image. I saw that there was a problem with my unit test and I have narrowed the root cause to a simple main.cpp #include <iostream> #include <iomanip> #include <math.h> int main() { const float DEG_TO_RAD_FLOAT = float(M_PI / 180.); float theta = 22.0f; theta = theta * DEG_TO_RAD_FLOAT; std::cout << std::setprecision(20) << theta << ' ' << sin(theta) << std::endl; return 0; } On the bionic docker image, I have upgraded my version of g++ using the commands : sudo apt-get install -y software-properties-common sudo add-apt-repository ppa:ubuntu-toolchain-r/test sudo apt install -y gcc-9 g++-9 sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90 --slave /usr/bin/g++ g++ /usr/bin/g++-9 --slave /usr/bin/gcov gcov /usr/bin/gcov-9 sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 70 --slave /usr/bin/g++ g++ /usr/bin/g++-7 --slave /usr/bin/gcov gcov /usr/bin/gcov-7 My versions of g++ are the same : 9.4.0. On ubuntu 18.04, the program outputs : 0.38397243618965148926 0.37460657954216003418 On ubuntu 20.04, the program outputs : 0.38397243618965148926 0.37460660934448242188 As you can see the difference is on the sin(theta), on the 7th decimal. The only difference I can think of is the version of libc which is 2.27 on the ubuntu 18.04 and 2.31 on the ubuntu 20.04. I have tried several g++ options -mfpmath=sse, -fPIC, -ffloat-store, -msse, -msse2 but they had no effect. The real problem is that on my Windows version of the code compiled with /fp:precise, I get the same results as on Ubuntu 18.04 : 0.38397243618965148926 0.37460657954216003418 Is there any way to force the g++ compiler to keep the same results as my Windows compiler please?
Well, investigating a slightly modified version of your test program: #include <iostream> #include <iomanip> #include <cmath> int main() { const float DEG_TO_RAD_FLOAT = float(M_PI / 180.); float theta = 22.0f; theta = theta * DEG_TO_RAD_FLOAT; std::cout << std::setprecision(20) << theta << ' ' << std::sin(theta) << ' ' << std::hexfloat << std::sin(theta) << std::endl; return 0; } The changes are that 1) use cmath and std::sin instead of math.h, and 2) also print the hex representation of the calculated sine value. Using GCC 11.2 on Ubuntu 22.04 here. Without optimizations I get $ g++ prec1.cpp $ ./a.out 0.38397243618965148926 0.37460660934448242188 0x1.7f98ep-2 which is the result you got on Ubuntu 20.04. With optimization enabled, however: $ g++ -O2 prec1.cpp $ ./a.out 0.38397243618965148926 0.37460657954216003418 0x1.7f98dep-2 which is what you got on Ubuntu 18.04. So why does it produce different results depending on optimization level? Investigating the generated assembler code gives a clue: $ g++ prec1.cpp -S $ grep sin prec1.s .section .text._ZSt3sinf,"axG",@progbits,_ZSt3sinf,comdat .weak _ZSt3sinf .type _ZSt3sinf, @function _ZSt3sinf: call sinf@PLT .size _ZSt3sinf, .-_ZSt3sinf call _ZSt3sinf call _ZSt3sinf So what does this mean? Well, it calls sinf (which lives in libm, the math library part of glibc). Now, for the optimized version: $ g++ -O2 prec1.cpp -S $ grep sin prec1.s $ Empty! What does that mean? It means that rather than calling sinf at runtime, the value was computed at compile time (GCC uses the MPFR library for constant folding floating point expressions). So the results differ because, depending on the optimization level, one is using two different implementations of the sine function. Now, finally, lets look at the hex values my modified test program printed. You can see the unoptimized value ends in e0 (the zero not being printed since it's a fractional value) vs de for the optimized one. 
If my mental hex arithmetic is correct, that is a difference of 2 ulp, and well, you can't really expect implementations of trigonometric functions to differ by less than that.
74,070,050
74,070,424
(C++) A question about "insert" function in vector
https://en.cppreference.com/w/cpp/container/vector/insert Cppreference shows: iterator insert( const_iterator pos, const T& value ); and four other overloads. But why is the parameter const_iterator and not iterator?
Whether or not the iterator is const doesn't matter, since the container is the thing being modified (and insert is not a const-qualified member function), not the passed in iterator. And this just makes it easier to use. A non-const iterator is convertible to a const_iterator (but not the other way around) so you can still easily use an iterator. A somewhat relevant paper: https://wg21.link/N2350
74,070,098
74,077,178
Z Buffer Not Working Correctly (Not Displaying Anything)
Here is the code for my Z buffer; it renders a black screen when I draw with it. sf::VertexArray ZOrder(sf::VertexArray verticies, std::vector<float> z_buffer) { std::vector<float> order; for (int i = 0; i < verticies.getVertexCount(); i++) { order.push_back(i); // {1, 2, 3, 4 ... } for (int i = 0; i < z_buffer.size(); i++) { for (int i = 0; i < z_buffer.size(); i++) { if (z_buffer[i] < z_buffer[i + 1]) { std::iter_swap(z_buffer.begin() + i, z_buffer.begin() + i + 1); std::iter_swap(order.begin() + i, order.begin() + i + 1); } } } sf::VertexArray darray(verticies.getPrimitiveType()); for (int i = 0; i < order.size(); i++) { darray.append(verticies[order[i]]); } return darray; } // Draw Code: dvertexa = ZOrder(dvertexa, z_buffer); window.draw(dvertexa); Without "dvertexa = ZOrder(dvertexa, z_buffer);" it acts like normal, just without depth testing. Honestly I'm really tired right now so I'm probably just being an idiot, but I'm stuck.
FIXED: sf::VertexArray ZOrder(sf::VertexArray verticies, std::vector<float> z_buffer) { std::vector<float> order; for (int i = 0; i < verticies.getVertexCount(); i++) { order.push_back(i); } for (int i = 0; i < z_buffer.size(); i += 3) { for (int i = 0; i < z_buffer.size(); i += 3) { if (z_buffer[i] + z_buffer[i + 1] + z_buffer[i + 2] < z_buffer[i + 3] + z_buffer[i + 1 + 3] + z_buffer[i + 2 + 3]) { std::iter_swap(z_buffer.begin() + i, z_buffer.begin() + i + 3); std::iter_swap(z_buffer.begin() + i + 1, z_buffer.begin() + i + 1 + 3); std::iter_swap(z_buffer.begin() + i + 2, z_buffer.begin() + i + 2 + 3); std::iter_swap(order.begin() + i, order.begin() + i + 3); std::iter_swap(order.begin() + i + 1, order.begin() + i + 1 + 3); std::iter_swap(order.begin() + i + 2, order.begin() + i + 2 + 3); } } } sf::VertexArray darray(verticies.getPrimitiveType()); for (int i = 0; i < order.size(); i++) { darray.append(verticies[order[i]]); } return darray; }
74,070,654
74,070,742
fread struct with vector from binary file gives Access violation reading error
I'm trying to read and write a struct with vectors to a file in C++. I'm getting read violation error, why is that and how can I fix it? Here's the code. #pragma warning(disable : 4996) #include <iostream> #include <stdio.h> #include <stdlib.h> #include <vector> using namespace std; struct A { vector<int> int_vector; }; int main() { A a1 = A(); a1.int_vector.push_back(3); FILE* outfile = fopen("save.dat", "w"); if (outfile == NULL) { cout << "error opening file for writing " << endl; return 1; } fwrite(&a1, sizeof(A), 1, outfile); fclose(outfile); struct A ret; FILE* infile; infile = fopen("save.dat", "r"); if (infile == NULL) { cout << "error opening file for reading " << endl; return 1; } while (fread(&ret, sizeof(A), 1, infile)) { } fclose(infile); cout << ret.int_vector.at(0) << endl; return 0; } As a side note: If I change the struct A to struct A { int int_vector; }; the program works as expected without errors, so there's something about the vector which is causing the problem.
std::vector, as you know, is dynamic: it only contains a pointer to the data, which lives on the heap. sizeof(std::vector) is a constant value, so you cannot just write the object to a file and then read it back. What you need is serialization; there are some excellent open-source libraries on GitHub that will solve your problem.
74,070,977
74,071,377
How to define Iterators for an abstract class in C++
I'm working on a C++ project where I have an abstract class called Aggregate, which represents a container for another abstract class called Primitive. I want to be able to iterate through an Aggregate, without worrying with the details of how the Primitive objects are actually stored. Since I'm not really proficient with C++, I have two questions: First, is it even possible to do something like this? Second, what exactly should my Aggregate class and its derived classes do in order for this to work? Any explanation/references are very much appreciated.
First, is it even possible to do something like this? Second, what exactly should my Aggregate class and its derived classes do in order for this to work? Yes, you "simply" add to Aggregate two virtual functions returning a begin() and an end() iterator. struct Aggregate { struct iterator { /* ... */ }; virtual ~Aggregate() {} virtual iterator begin() { return {}; } virtual iterator end() { return begin(); } }; You can then use range for loops and algorithms from the Standard Library for "free": for (auto& p : aggregate) { p.value = 0; } std::copy(aggregate.begin(), aggregate.end(), aggregate_copy.begin()); You'd need some boilerplate code to implement a working iterator though, but you'll manage with a good Google search. Working demo
74,071,230
74,071,290
Understanding what (void) does when placed in front of a function call
My question is: why does the (void) cause a different value to be returned? What exactly is happening? struct S { int operator,(int) { return 0; } }; std::cout << (S(), 42) << '\n'; // prints '0' std::cout << ((void) S(), 42) << '\n'; // prints '42'
The problem here is that the comma operator has been overloaded. So the first line, (S(), 42), will invoke the custom overload of the comma operator, since the arguments S and int match it. Note this overload always returns 0. In the second line, ((void) S(), 42), your custom overload of the comma operator doesn't match, since the first argument has type void after the cast. So the built-in comma operator kicks in, which means the second argument is returned. That is why the second line prints 42. Side note: yes, C++ allows you to overload the comma operator, but it is unusual and quite often treated as a bad practice; many coding standards forbid it.
74,071,288
74,075,421
Switch context in coroutine with boost::asio::post
I'm trying to understand C++ coroutines. My expectation in the example below would be, that each asio::post will switch the context/thread to the given threadpool. But something very strange happens. I get the following output: thread0: 7f0239bfb3c0 thread2: 7f02391f6640 (tp2) thread3: 7f02387f5640 (tp) thread4: 7f02387f5640 (tp2) thread5: 7f02373f3640 (tp) thread6: 7f02373f3640 (tp2) thread7: 7f0235ff1640 (tp) done So the thread-id of 3 and 4 are the same, but they should run on a different context/threadpool. And 3, 5 and 7 should have the same ID since it's the same context (with just one thread). I assume that I understand some concept wrong. Can you give me a hint? Thanks #include <boost/asio.hpp> #include <boost/thread.hpp> #include <iostream> boost::asio::thread_pool tp {1}; boost::asio::thread_pool tp2 {10}; boost::asio::awaitable<void> test() { std::cout << "thread2: " << boost::this_thread::get_id() << std::endl; co_await boost::asio::post(tp, boost::asio::use_awaitable); std::cout << "thread3: " << boost::this_thread::get_id() << std::endl; co_await boost::asio::post(tp2, boost::asio::use_awaitable); std::cout << "thread4: " << boost::this_thread::get_id() << std::endl; co_await boost::asio::post(tp, boost::asio::use_awaitable); std::cout << "thread5: " << boost::this_thread::get_id() << std::endl; co_await boost::asio::post(tp2, boost::asio::use_awaitable); std::cout << "thread6: " << boost::this_thread::get_id() << std::endl; co_await boost::asio::post(tp, boost::asio::use_awaitable); std::cout << "thread7: " << boost::this_thread::get_id() << std::endl; } int main() { std::cout << "thread0: " << boost::this_thread::get_id() << std::endl; boost::asio::co_spawn(tp2, &test, [](std::exception_ptr e) { std::cout << "done" << std::endl; }); tp2.join(); tp.join(); }
I have a near identical answer up here: asio How to change the executor inside an awaitable? I'll leave the explanation there, but your code can act as expected like this: Live On Coliru #include <boost/asio.hpp> #include <iomanip> #include <iostream> #include <sstream> #include <thread> namespace asio = boost::asio; asio::thread_pool tp1{1}, tp2{1}; static inline auto trace(auto msg) { static std::mutex mx; std::lock_guard lg(mx); std::cout // << "trace " << msg << ": thread " << std::hex << std::setw(2) << std::setfill('0') << (std::hash<std::thread::id>{}(std::this_thread::get_id()) % 256) << std::endl; } asio::awaitable<void> test() { trace(2); auto a1 = asio::bind_executor(tp1.get_executor(), asio::use_awaitable); auto a2 = asio::bind_executor(tp2.get_executor(), asio::use_awaitable); co_await post(a1); trace(3); co_await post(a2); trace(4); co_await post(a1); trace(5); co_await post(a2); trace(6); co_await post(a1); trace(7); } int main() { post(tp1, [] { trace("tp1"); }); post(tp2, [] { trace("tp2"); }); co_spawn(tp2, test, [](std::exception_ptr const&) { std::cout << "done" << std::endl; }); tp2.join(); tp1.join(); } Prints e.g. trace tp2: thread 33 trace 2: thread 33 trace tp1: thread d6 trace 3: thread d6 trace 4: thread 33 trace 5: thread d6 trace 6: thread 33 trace 7: thread d6 done
74,072,188
74,076,297
How to filter and transform cpp vector to another type of vector?
I have a class called InfoBlob and two enums called Action and Emotion. My function is supposed to take in a vector<InfoBlobs> blobs, and return a vector<Action> actions, corresponding to certain attributes of the blob. However, I have to perform this conversion only in the range of 'first HAPPY blob to the last SAD blob'. Further, within that range, it can only convert blobs that are either HAPPY or SAD. I have 2 questions: How do I apply this to only the range of 'first HAPPY blob to last SAD blob' How do I transform the vector of InfoBlobs to the vector of actions given the filter condition 'The blobs must be either HAPPY or SAD'. Right now I am trying to filter it and then transform but I get the error error: no match for ‘operator=’ (operand types are ‘std::vector<ex4::Action>’ and ‘std::ranges::transform_view<std::ranges::filter_view<std::ranges::ref_view<std::vector<ex4::InfoBlob> >, task03(std::vector<ex4::InfoBlob>)::<lambda(ex4::InfoBlob)> >, task03(std::vector<ex4::InfoBlob>)::<lambda(ex4::InfoBlob)> >’) 35 | }); This is the class: enum class Emotion : char { HAPPY, SAD, PERPLEXED, STRESSED }; enum class Action : char { RUN, LAUGH, WORRY, WEEP, PLAN, PLOT, READ }; class InfoBlob { public: InfoBlob(int entropy, float spin, Emotion emotion) noexcept : entropy{entropy}, spin{spin}, emotion{emotion} { } int getEntropy() const noexcept { return entropy; } float getSpin() const noexcept { return spin; } Emotion getEmotion() const noexcept { return emotion; } operator==(const InfoBlob& other) const noexcept { return entropy == other.entropy && spin == other.spin && emotion == other.emotion; } This is my code: #include "task03.h" #include <iostream> #include <algorithm> #include <ranges> using namespace ex4; std::vector<ex4::Action> task03(std::vector<InfoBlob> blobs){ std::vector<ex4::Action> actions; actions = blobs | std::ranges::views::filter([](ex4::InfoBlob blob){ return blob.getEmotion() == ex4::Emotion::HAPPY || blob.getEmotion() == ex4::Emotion::SAD; }) 
| std::ranges::views::transform([](ex4::InfoBlob blob){ if(blob.getSpin() > blob.getEntropy()) return ex4::Action::PLOT; else return ex4::Action::RUN; }); return actions; }
How do I apply this to only the range of 'first HAPPY blob to last SAD blob' You want drop_while. To remove elements past the last SAD blob, you can do a reverse, then a drop_while, then another reverse. How do I transform the vector of InfoBlobs to the vector of actions given the filter condition 'The blobs must be either HAPPY or SAD' You've figured out the filter part, so what remains is to create a new vector from a view. See c++20 ranges view to vector for a general answer to this problem, but in this case I believe you can just use the iterator pair constructor (you don't know the size in advance, so no missed reserve opportunities, and you shouldn't need a common_view, since the views used model common_range when their underlying views do). C++23 std::ranges::to should make this nicer. Working code: template<typename Pred> constexpr auto drop_while_end(Pred &&pred) { using namespace std::views; return reverse | drop_while(std::forward<Pred>(pred)) | reverse; } std::vector<Action> task03(std::vector<InfoBlob> const &blobs) { constexpr auto notHappy = [](InfoBlob const &blob) { return blob.getEmotion() != Emotion::HAPPY; }; constexpr auto notSad = [](InfoBlob const &blob) { return blob.getEmotion() != Emotion::SAD; }; constexpr auto happyOrSad = [](InfoBlob const &blob) { return blob.getEmotion() == Emotion::HAPPY || blob.getEmotion() == Emotion::SAD; }; constexpr auto blobToAction = [](InfoBlob const &blob) { return blob.getSpin() > blob.getEntropy() ? Action::PLOT : Action::RUN; }; using namespace std::views; auto actions = blobs | drop_while(notHappy) | drop_while_end(notSad) | filter(happyOrSad) | transform(blobToAction); return {actions.begin(), actions.end()}; }
74,072,212
74,073,587
`uint_fast32_t` not found in namespace `boost` for boost > 1.74.0
I am trying to compile boost from sources but getting the error below. It works fine for all versions of boost up to 1.74.0 but it breaks for anything newer than that. Note that I am compiling a subset of boost modules, std::regex only. Is there anything that changed on this version that makes these types unavailable? clang-linux.compile.c++ bin.v2/libs/regex/build/clang-linux-14/release/link-static/visibility-hidden/posix_api.o libs/regex/build/../src/posix_api.cpp:90:4: error: no type named 'uint_fast32_t' in namespace 'boost'; did you mean simply 'uint_fast32_t'? boost::uint_fast32_t flags = (f & REG_PERLEX) ? 0 : ((f & REG_EXTENDED) ? regex::extended : regex::basic); ^~~~~~~~~~~~~~~~~~~~ uint_fast32_t This script causes the error: #!/bin/bash set -exo pipefail INSTALL_DIR=/ssd/tmp/install TMPDIR=/ssd/tmp TAG=boost-1.78.0 cd $TMPDIR rm -rf boost git clone https://github.com/boostorg/boost.git cd boost git checkout $TAG allsm=" tools/build tools/bcp tools/boost_install tools/boostdep libs/regex libs/config libs/predef libs/core libs/detail libs/headers libs/integer" for sm in $allsm; do git submodule update --init $sm done #export LD=/usr/local/bin/ld.lld #export CC='/usr/local/bin/clang' #export CXX='/usr/local/bin/clang++' #export CXXFLAGS='-O3 -stdlib=libc++ -std=c++20 -stdlib=libc++' #export LDFLAGS="-lc++abi -lc++" ./bootstrap.sh #--with-toolset=clang ./b2 headers ./b2 install -q -a \ --prefix=$INSTALL_DIR/local \ --build-type=minimal \ --layout=system \ --disable-icu \ --with-regex \ variant=release link=static runtime-link=static \ threading=single address-model=64 architecture=x86 # toolset=clang However it works if you change the git tag from boost-1.78.0 to boost-1.71.0.
You aren't checking out all of the submodules, the simplest solution is to just follow the docs and run: git clone --recursive https://github.com/boostorg/boost.git or: git clone git@github.com:boostorg/boost.git git submodule update --init After adding libs/throw_exception, and libs/assert to the submodules in your script it works for me on Ubuntu 20.04. Here's the dockerfile I used for testing: From ubuntu:20.04 RUN apt update RUN apt-get install g++ git -y RUN git clone https://github.com/boostorg/boost.git RUN cd boost && git checkout boost-1.78.0 RUN cd boost && git submodule update --init tools/build RUN cd boost && git submodule update --init tools/bcp RUN cd boost && git submodule update --init tools/boost_install RUN cd boost && git submodule update --init tools/boostdep RUN cd boost && git submodule update --init libs/regex RUN cd boost && git submodule update --init libs/config RUN cd boost && git submodule update --init libs/predef RUN cd boost && git submodule update --init libs/core RUN cd boost && git submodule update --init libs/detail RUN cd boost && git submodule update --init libs/headers RUN cd boost && git submodule update --init libs/integer RUN cd boost && git submodule update --init libs/assert RUN cd boost && git submodule update --init libs/throw_exception RUN cd boost && ./bootstrap.sh RUN cd boost && ./b2 headers RUN cd boost && ./b2 install -q -a \ --prefix=$INSTALL_DIR/local \ --build-type=minimal \ --layout=system \ --disable-icu \ --with-regex \ variant=release link=static runtime-link=static \ threading=single address-model=64 architecture=x86
74,072,237
74,072,327
Lifetime of a reference that has been returned from a function
For a function T& f() {...} what will be the lifetime of the reference entity created in T x = f(); ? According to the standard "The lifetime of a reference begins when its initialization is complete and ends as if it were a scalar object.", and while there is a section concerning temporary objects, a reference is not an object so it doesn't seem to apply. Does this mean that according to the standard, in the example above the reference must actually exist throughout the whole scope block in which T x = f(); lies? That would seem redundant. I can't see any issue if the reference here were treated similarly to how "temporary" objects are - it seems safe for it to stop existing at the end of the full expression in which it is contained.
Does this mean that according to the standard, in the example above the reference must actually exist throughout the whole scope block in which T x = f(); lies? Yes, the type of the expression f() is actually T and not T& (because in C++ the type of an expression is never a reference type), so you're not actually creating a reference variable x but a normal non-reference variable x. And so the normal scoping rules apply. Basically, x is being initialized with the value referred to by the reference. From expression's documentation: Each expression has some non-reference type, and each expression belongs to exactly one of the three primary value categories: prvalue, xvalue, and lvalue. For example, int& f() { static int i = 0; return i; } int main() { { int x = f(); //x is a copy of i //you can use x here in this block }//scope of x ends //can't use x here }
74,072,830
74,076,352
Multiple arguments to binary fold expression?
I am trying to write variadic template printing using fold expressions rather than template recursion. Currently I have template <typename... Ts, typename charT, typename traits> constexpr std::basic_ostream<charT, traits>& many_print(std::basic_ostream<charT, traits>& os, Ts... args){ os << '{'; (os << ... << args); return os << '}'; } For a call to many_print(1, 2);, the output is {12}. I would like to have my output be {1, 2}. The only close attempt I have made is template <typename... Ts, typename charT, typename traits> constexpr std::basic_ostream<charT, traits>& many_print(std::basic_ostream<charT, traits>& os, Ts... args){ os << '{'; (os << ... << ((void)(os << ", "), args)); return os << '}'; } This uses the comma operator to print ", " for every arg. Unfortunately due to the sequencing order, the comma prints before the arg, resulting in {, 1, 2}; Are there any solutions without using template recursion? I understand that having n-1 commas will be an issue. I would appreciate if I could even get code that outputs {1, 2, }.
In this case, where nothing is being computed, you can just use the comma operator for the fold itself: ((os << args << ", "),...) With a state variable trick, you can even omit one comma: int n=0; ((os << (n++ ? ", " : "") << args),...);
74,073,045
74,092,196
Is it ok to distribute api-ms-win-core-xxx.dll with my app?
I compile a C++ exe with vs2022 on win11, and run it on a win7 device, but the system prompts that a bunch of api-ms-win-core-xxx.dll is missing. So, I check the documentation, it says these DLLs are introduced after win7, so can I just distribute them with my app?
I suggest you don't try to obtain these DLLs; distributing these files is a violation of the Windows End User License Agreement. According to this issue: Those DLLs are an implementation detail of Windows and are subject to change at any time. Files you take from a higher version of Windows won't work if your Windows version is too low. If you are a developer, just use the APIs documented in the Windows SDK (unless absolutely necessary, like writing antivirus software) and do not take any dependency on those DLLs, as they may only exist in one Windows version.
74,073,081
74,073,703
How do I parameterize a consteval lambda?
Background I am using a NTTP (non-type template parameter) lambda to store a string_view into a type at compile time: template<auto getStrLambda> struct MyType { static constexpr std::string_view myString{getStrLambda()}; }; int main() { using TypeWithString = MyType<[]{return "Hello world";}>; return 0; } This works, and achieves my main intention. Question My question now is, to make this easier to use, how can I write a wrapper function to create the lambda for me? I'm thinking something like: // This helper function generates the lambda, rather than writing the lambda inline consteval auto str(auto&& s) { return [s]() consteval { return s; }; }; template<auto getStrLambda> struct MyType { static constexpr std::string_view myString{getStrLambda()}; }; int main() { using TypeWithString = MyType<str("Hello world")>; return 0; } The above fails on clang since the lambda isn't structural, since it needs to capture the string: error: type '(lambda at <source>:4:12)' of non-type template parameter is not a structural type using TypeWithString = MyType<str("Hello world")>; ^ note: '(lambda at <source>:4:12)' is not a structural type because it has a non-static data member that is not public return [s]() consteval { ^ Given that it's possible for a lambda to use a variable without capturing it if the variable is initialized as a constant expression (source), how can I define a function to parameterize this lambda return value at compile time?
GCC and clang are both not actually incorrect in accepting or rejecting the program. The lambda's type simply has a data member of type char[12], which is allowed to be public or private. It seems that clang treats them as private members. The obvious solution is to write out the closure type explicitly ensuring the data member is always public: consteval auto str(auto&& s) { static_assert(std::is_array_v<std::remove_reference_t<decltype(s)>>); return [&]<std::size_t... I>(std::index_sequence<I...>) { struct { std::remove_cvref_t<decltype(s)> value; consteval std::decay_t<decltype(s)> operator()() const { return value; } } functor{{std::forward<decltype(s)>(s)[I]...}}; return functor; }(std::make_index_sequence<std::extent_v<std::remove_reference_t<decltype(s)>>>{}); } The great thing about this is that it will have the same type for strings of the same length, so MyType<str("xyz")> in one translation unit will mangle to the same name as MyType<str("xyz")> in another, since it stores an array. Your goal of str("string literal") being a function call and return something "without any capture" is impossible, since the function argument auto&& s is not usable in a constant expression. In particular, you can't convert it to a pointer nor can you access any of its items. You can also have str be a type and skip the step of a function: template<typename T> struct str; template<typename T, std::size_t N> struct str<T[N]> { T value[N]; constexpr str(const str&) = default; consteval str(const T(& v)[N]) : str(std::make_index_sequence<N>{}, v) {} consteval auto operator()() const { return value; } private: template<std::size_t... I> consteval str(std::index_sequence<I...>, const T(& v)[N]) : value{ v[I]... } {} }; template<typename T, std::size_t N> str(const T(&)[N]) -> str<T[N]>; Where str("string literal") is now a structural type holding an array.
74,073,275
74,089,623
Keying an (unordered_)map using a multimap iterator
I'm building a software where one class is responsible to log info sources and commands (both are grouped as requests), where all requests are inserted inside a multimap, wherein the multimap is keyed by the request name, and each element points to request structure that holds management information and callback function pointer, insighted from this software. The callbacks are executed to issue a command, or to get an info, and everything is ok until here. To enable subscription-based information delivery, I've introduced a new map keyed by the request iterator, so where calling subscribe("infoID") the software looks for the exact match request and return its iterator. Because these iterators are unique per request, I've found it useful to key the subscriptions map using it. Where the key points to info subscriber's callback-functions. The error is: error: no match for 'operator<' (operand types are 'const std::__detail::_Node_iterator<std::pair<const std::__cxx11::basic_string, request>, false, true>' and 'const std::__detail::_Node_iterator<std::pair<const std::__cxx11::basic_string, request>, false, true>') { return __x < __y; } Followed by 15 compiling notes 'template argument deduction/substitution failed': 'const std::__detail::_Node_iterator<std::pair<const std::__cxx11::basic_string, request>, false, true>' is not derived from 'const std::pair<_T1, _T2>' { return __x < __y; } each one with a unique source: const std::pair<_T1, _T2>, const std::reverse_iterator<_Iterator> (stl_function.h), const std::reverse_iterator<_Iterator> (stl_iterator.h), ... etc. Full error here. 
Code: #include <iostream> #include <unordered_map> #include <vector> #include <string> #include <functional> #include <map> using namespace std; struct request { string f1; }; using SYS_REQMAP =unordered_multimap<string, request, hash<string>>; using SYS_REQMAP_I =SYS_REQMAP::iterator; using SYS_INFOSUB_CBF = function<void(string, string)>; using SYS_INFOSUB_CBFS = vector<SYS_INFOSUB_CBF>; using SYS_REQINF_SUBS = map<SYS_REQMAP_I, SYS_INFOSUB_CBFS>; void cbf(const string& a, const string& b){} int main() { SYS_REQINF_SUBS infoSubr; SYS_REQMAP vm{{"cmd1", {"foo"}}, {"cmd2", {"bar"}}}; for (SYS_REQMAP_I it = vm.begin(); it != vm.end(); it++) { infoSubr[it].push_back(cbf); // Compile error } } void compilesOK() { using SYS_REQINF_SUBS_1 = std::map<int, SYS_INFOSUB_CBFS>; SYS_REQINF_SUBS_1 subs1; subs1[1].push_back(cbf); // Compiles OK } And here's OnlineGDB link to compile and observe output.
The iterator type has no operator<, so the map requires a custom comparison function _Compare. Compare the addresses of the elements the iterators point to (node addresses are stable in an unordered container), not the addresses of the iterator objects themselves; otherwise the same iterator passed twice would never compare equal and operator[] would keep inserting new entries: struct Compare_REQMAP_I { bool operator()(const SYS_REQMAP_I& lhs, const SYS_REQMAP_I& rhs) const { return &*lhs < &*rhs; } }; using SYS_REQINF_SUBS = std::map<SYS_REQMAP_I, SYS_INFOSUB_CBFS, Compare_REQMAP_I>;
74,073,346
74,073,948
Automatic template deduction for function pointers
In the following code template<class T> void f(T); int main(){ f(3); return 0; } the template argument int for f is deduced automatically, as usual. But in template<class T> void f(T); template<class T> void (*p)(T) = f<T>; int main(){ p(3); return 0; } the compiler (clang++) insists that p(3) needs a template parameter. Why? Besides, if I put the line template<class T> void (*p)(T) = f<T>; in a header to be included by several files, will that cause problems?
Template argument deduction works with function templates and, since C++17, with class templates (CTAD). It does not happen for variable templates such as p, so using p requires an explicit argument like p<int>(3). Writing a deducing wrapper is trivial for your example: template<class T> void f(T); template<class T> void (*p)(T) = f<T>; template<typename T> void Wrapper(T&& t) { p<T>(std::forward<T>(t)); } int main(){ Wrapper(3); }
74,073,399
74,094,653
Passing SQLite3 database pointer to a Function having a sqlite3_close(db) gets a 21 error - bad parameter or other API issue
My goal is to have a successful SQLite3 close after an open from functions I created. I expected the close to return a code of zero. I'm passing the SQLite3 *db pointer after a successful open to a function of my construction having an rc = sqlite3_close(db). Seems in the act of passing the db pointer something is lost and the close errors out. As a design feature I intend to have many functions built around SQLite's db call functions so getting just a simple open and close to work helps me in the future. This may just be a miss-understanding on how to pass a pointer to a function. The errors generated after code run: Return code: 0 |Error code: 0 |Error message: not an error |Message: Database open success New message -->Return code: 0 |Message: Database close success Old message -->Return code: 21 |Error code: 21 |Error message: bad parameter or other API misuse |Message: Database close failed My code: #include <iostream> #include <string> #include <sqlite3.h> using namespace std; // Function prototypes int openDB(sqlite3**, string); // Mod - Added additional * int closeDB(sqlite3**); // Mod - Added additional * int main() { **// Open database** sqlite3 *db; string errmsg; int rc; string dbStr = "/home/steven/sparks-robotics/data/myDB.db"; const char* database = dbStr.c_str(); rc = openDB(&db,database); // Mod - Added & if (rc != EXIT_SUCCESS) {return EXIT_FAILURE;} **// Close database** rc = closeDB(&db); // Mod - Added & if (rc != EXIT_SUCCESS) {return EXIT_FAILURE;} return 0; }; int openDB(sqlite3 **db, string dbStr) { // Mod - Added * int rc; // SQLite return code const char* em; // SQLite error message int ec; // SQLite error code string errmsg; const char* database = dbStr.c_str(); rc = sqlite3_open_v2(database, &*db, SQLITE_OPEN_READONLY, NULL); // Mod - Added * errmsg = "Database open success"; ec = sqlite3_errcode(*db); // Mod - Added * em = sqlite3_errmsg(*db); // Mod - Added * if(rc != SQLITE_OK) {errmsg = "Database open failed";} cout << "Return code: 
" << rc <<" |Error code: " << ec << " |Error message: " << em << " |Message: " + errmsg << endl; return rc; }; int closeDB(sqlite3 **db) { // Mod - Added * int rc; // SQLite return code //const char* em; // SQLite error message // Mod - Deleted //int ec; // SQLite error code // Mod - Deleted string errmsg; rc = sqlite3_close(*db); // Mod - Added * errmsg = "Database close success"; //ec = sqlite3_errcode(*db); // Mod - deleted, db already closed //em = sqlite3_errmsg(*db); // Mod - deleted, db already closed if(rc != SQLITE_OK) {errmsg = "Database close failed";} //cout << "Return code: " << rc <<" |Error code: " << ec << " |Error message: " << em << " |Message: " + errmsg << endl; cout << "Return code: " << rc << " |Message: " + errmsg << endl; return rc; };
As I wrote in my comment, changes to db in openDB do not show up in the variable db in main. There are two important things to understand: Pointer variables have an address like any other variable, but their value is also an address. C++ defaults to "pass by value" I will explain the problem by interpreting your code myself: Before the call to openDB, the db in main (henceforth main_db) has address=0xMAIN_DB and value=GARBAGE (because you failed to initialize it) At the start of openDB, openDB_db has address=0xOPENDB_DB (different from main_db) and value=GARBAGE (because C++ is pass by value) You call sqlite3_open with &openDB_db, so it dutifully writes a new value to address 0xOPENDB_DB. 0xMAIN_DB is untouched. By contrast, if you change the signature of openDB to take a sqlite3 *&, openDB_db is a reference to main_db, which means that openDB_db will have the same address as main_db: 0xMAIN_DB. When you now pass &openDB_db to sqlite3_open, it will effectively write a pointer value into 0xMAIN_DB, which means that main_db also has that pointer value.
74,073,670
74,074,455
Iterator returned by std::max_element(vList.begin(), vList.end()) changes value after clearing and updating the vector(vList)
For a BucketSort program using a vector of lists. I used std::max_element to find out the maximum element from the vector. But it looks like once the original vector is cleared and updated, the iterator returned by max_element also changes the value to the updated value from vector (vList) from same index. #include <iostream> #include <list> #include <vector> #include <algorithm> using namespace std; void BucketSort(std::vector<int>& vList) { int i = 0; auto maxElem = std::max_element(vList.begin(), vList.end()); std::vector<std::list<int>> tempList; std::cout << "Max element = " << *maxElem << "\n"; for (i = 0; i <= *maxElem; i++) tempList.push_back({}); for (auto x : vList) { tempList[x].push_back(x); } vList.clear(); std::cout << "*max = " << *maxElem << "\n"; i = 0; while(i <= *maxElem) { std::cout << "*max = " << *maxElem << " i = " << i << " tempList[i].size() = " << tempList[i].size() << "\n"; if (tempList[i].empty() == false) { vList.push_back(tempList[i].front()); tempList[i].pop_front(); } else i++; } } int main() { std::vector<int> vList = {1, 5, 4, 1 }; BucketSort(vList); return 0; } Produces below output Max element = 5 *max = 5 *max = 5 i = 0 tempList[i].size() = 0 *max = 5 i = 1 tempList[i].size() = 2 *max = 5 i = 1 tempList[i].size() = 1 *max = 1 i = 1 tempList[i].size() = 0 Does the iterator retain the reference to the index returned even after clearing the vector?
A C++ iterator essentially stores the address of the element it refers to. Strictly speaking, clear() invalidates every iterator into the vector, so dereferencing maxElem afterwards is undefined behavior; in practice the vector usually reuses the same buffer, so the iterator appears to show whatever new value now sits at that location. Take a look at this example: [0,1,2,3,4] ^ You have an iterator pointing to the third element of the vector. Now imagine you clear the vector and fill it with new values: [5,6,7,8,9] ^ The iterator still points to the same address, but that memory now holds a new value. You can get around this by copying the value into a local variable before modifying the vector, and using that throughout the rest of your code: int maxValue = *maxElem;
74,073,781
74,075,487
malloc fails when asking for over 8 GiB
I am trying to load 1024 matrices into an OpenCV Mat. Each matrix is width*height=2200x2200 and each element is float, so it is about 19.36 MB for each matrix. I need to assign 1024 of these matrices which require over 19 GB of memory. This is okay as I have 128GB RAM in my virtual machine. However, I have problem getting the code to run once I am over 443 matrices and the code produce segmentation fault. I suspect that the gcc compiler is producing 32bit binary instead of 64bit, but it still failed with -m64 option to g++. Could you have a look at my code and how can I load all these matrices at once? int frame_num = 1; int frames = 444; // max frames to process int hpixels = 2200; // number of roi horizontal pixels int vpixels = 2200; // number of roi vertical pixels unsigned int roi_size = vpixels * hpixels; float_t *roi = (float_t *)malloc(frames * vpixels * hpixels * sizeof(float_t)); Mat cv_source(vpixels, hpixels, CV_32FC1, roi + frame_num*roi_size); waitKey(0); free(roi); return 0;
The problem is with this expression: frames * vpixels * hpixels * sizeof(float_t) All three variables are of type int, so the subexpression frames * vpixels * hpixels is evaluated in int first: 444 * 2200 * 2200 = 2,148,960,000 overflows a 32-bit int before sizeof ever enters the multiplication (443 * 2200 * 2200 = 2,144,120,000 still fits, which is why 443 frames worked). Use size_t for those variables, or cast the first factor to size_t so the whole product is computed in size_t. Best regards.
74,073,977
74,074,018
Does C/C++ program performance depend on compiler?
I read an article in which different compilers were compared to infer which is the best in different circumstances. It gave me a thought. Even though I tried to google, I didn't manage to find a clear and lucid answer: will the program run faster or slower if I use different compilers to compile it? Suppose, it's some uncommon complicated algorithm that is used along with templating.
Yes. The compiler is what writes a program that implements the behavior you've described with your C or C++ code. Different compilers (or even the same compiler, given different options) can come up with vastly different programs that implement the same behavior. Remember, your CPU does not execute C or C++ code. It only executes machine code. There is no defined standard for how the former gets transformed into the latter.
74,074,312
74,077,959
Standard math functions reproducibility on different CPU's
I am working on project with a lot math calculations. After switching on a new test machine, I have noticed that a lot of tests failed. But also important to notice that tests also failed on my develop machine, and on some machines of other developers. After tracing values and comparing with values from the old machine I found that some functions (At this moment I found only cosine) from math.h sometimes returns slightly different values (for example: 40965.8966304650828827e-01 and 40965.8966304650828816e-01, -3.3088623618085204e-08 and -3.3088623618085197e-08). New CPU: Intel Xeon Gold 6230R (Intel64 Family 6 Model 85 Stepping 7) Old CPU: Exact model is unknown (Intel64 Family 6 Model 42 Stepping 7) My CPU: Intel Core i7-4790K Tests results doesn't depend on Windows version (7 and 10 were tested). I have tried to test with binary that was statically linked with standard library to exclude loading of different libraries for different processes and Windows versions, but all results were the same. Project compiled with /fp:precise, switching to /fp:strict changed nothing. MSVC from Visual Studio 15 is used: 19.00.24215.1 for x64. How to make calculations fully reproducible?
Since you are on Windows, I am pretty sure the different results are because the UCRT detects during runtime whether FMA3 (fused-multiply-add) instructions are available for the CPU and if yes, use them in transcendental functions such as cosine. This gives slightly different results. The solution is to place the call set_FMA3_enable(0); at the very start of your main() or WinMain() function, as described here. If you want to have reproducibility also between different operating systems, things become harder or even impossible. See e.g. this blog post. In response also to the comments stating that you should just use some tolerance, I do not agree with this as a general statement. Certainly, there are many applications where this is the way to go. But I do think that it can be a sensible requirement to get exactly the same floating point results for some applications, at least when staying on the same OS (Windows, in this case). In fact, we had the very same issue with set_FMA3_enable a while ago. I am a software developer for a traffic simulation, and minor differences such as 10^-16 often build up and lead to entirely different simulation results eventually. Naturally, one is supposed to run many simulations with different seeds and average over all of them, making the different behavior irrelevant for the final result. But: Sometimes customers have a problem at a specific simulation second for a specific seed (e.g. an application crash or incorrect behavior of an entity), and not being able to reproduce it on our developer machines due to a different CPU makes it much harder to diagnose and fix the issue. Moreover, if the test system consists of a mixture of older and newer CPUs and test cases are not bound to specific resources, means that sometimes tests can deviate seemingly without reason (flaky tests). This is certainly not desired. Requiring exact reproducibility also makes writing the tests much easier because you do not require heuristic thresholds (e.g. 
a tolerance or some guessed value for the amount of samples). Moreover, our customers expect the results to remain stable for a specific version of the program since they calibrated (more or less...) their traffic networks to real data. This is somewhat questionable, since (again) one should actually look at averages, but the naive expectation in reality usually wins.
74,074,633
74,078,309
How do you make a pipeable function like ranges::to<T>() with range-v3 ranges?
My general question is how do you make something like ranges::to<T>() for classes for which ranges::to<T>() does not work? But specifically I am looking for a pipeable way to construct a boost geometry multi_linestring from a range-v3 range view of linestrings. Somewhat surprisingly ranges::to just works when constructing a linestring but fails to compile when constructing a multi_linestring, as below. namespace r = ranges; namespace rv = ranges::views; namespace bg = boost::geometry; using point = bg::model::point<double, 2, bg::cs::cartesian>; using polyline = bg::model::linestring<point>; using polylines = bg::model::multi_linestring<polyline>; int main() { std::vector<point> some_points = { {1,1},{2,2},{3,3},{4,4},{5,5} }; auto poly = some_points | r::to<polyline>(); // <- this works std::vector<std::vector<point>> vec_of_vec_of_pts = { {{1,1},{2,2},{3,3},{4,4},{5,5}}, {{6,6},{7,7},{8,8}}, {{9,9},{10,10},{11,11},{12,12}} }; auto polys = vec_of_vec_of_pts | rv::transform( [](const auto& v) { return v | r::to<polyline>(); } ) | r::to<polylines>(); // <- this does not compile. 
return 0; } The particular error message from Visual Studio is 1>[...]: error C2678: binary '|': no operator found which takes a left-hand operand of type 'ranges::transform_view<ranges::ref_view<std::vector<std::vector<point,std::allocator<point>>,std::allocator<std::vector<point,std::allocator<point>>>>>,Arg>' (or there is no acceptable conversion) 1> with 1> [ 1> Arg=main::<lambda_1> 1> ] 1>C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include\cstddef(42,27): message : could be 'std::byte std::operator |(const std::byte,const std::byte) noexcept' [found using argument-dependent lookup] 1>C:\libraries\range-v3\include\range\v3\view\any_view.hpp(66,24): message : or 'ranges::category ranges::operator |(ranges::category,ranges::category) noexcept' [found using argument-dependent lookup] 1>C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include\regex(1191,1): message : or 'std::_Node_flags std::operator |(std::_Node_flags,std::_Node_flags) noexcept' [found using argument-dependent lookup] 1>[...]: message : while trying to match the argument list '(ranges::transform_view<ranges::ref_view<std::vector<std::vector<point,std::allocator<point>>,std::allocator<std::vector<point,std::allocator<point>>>>>,Arg>, ranges::detail::to_container::closure<meta::id<polylines>,ranges::detail::to_container::fn<meta::id<polylines>>>)' 1> with 1> [ 1> Arg=main::<lambda_1> 1> ] My current work-around is a function template that will do the conversion: template<typename Rng> polylines to_polylines(Rng lines) { polylines polys; polys.resize(r::distance(lines)); for (auto&& [i, line] : rv::enumerate(lines)) { polys[i] = line; } return polys; } which I can use like auto polys = to_polylines( vec_of_vec_of_pts | rv::transform( [](const auto& v) { return v | r::to<polyline>(); } ) ); but it can't be a target of a range-v3 pipeline. How can I implement something like the above that can be piped to?
The pipeline is just an overloaded operator |, so you can overload it yourself. struct to_polyline_tag{} to_polylines; template<typename Range> polylines operator | (Range&& lines, to_polyline_tag) { // the body of your to_polylines() polylines polys; polys.resize(r::distance(lines)); for (auto&& [i, line] : rv::enumerate(lines)) { polys[i] = line; } return polys; } // use: some_range_like | to_polylines
74,074,742
74,075,112
When Importing a dll to python using ctypes, istream and ostream can be found, but not iostream. Why?
When importing c.dll to Python using ctypes, with iostream included in a.cpp, I encounter the following error: Could not find module 'C:\Test\foo.dll' (or one of its dependencies). Try using the full path with constructor syntax. As explained at bugs.python.org this is an ambiguous error, indicative of a deeper issue. A few quick notes to be thorough: the path is correct and using exists(path) reveals as much. istream, ostream, and cmath all link without issue. Looking at the contents of iostream, the only file that iostream includes other than istream and ostream is bits/c++config.h. However, cmath also includes bits/c++config.h and I am still able to load the .dll in Python with cmath included. I am using Windows 10 x64, ample storage space, relatively new m.2 drive with no reported errors. Python version is 3.10.8 I have tried switching C++ standards (tried all of them) and recompiling. I have tried linking using the method and flags recommended here. I have tried using extern "C" __declspec(dllexport) instead of extern "C" I am compiling using g++ from mingw64. Dependency Walker shows nearly 100 missing dependencies, which doesn't make sense to me, given that it compiles and runs without iostream. The contents of my files for reproduction are as follows: a.cpp #include <ostream> extern "C" void Print() { printf("Hello!"); } b.py from ctypes import * core = cdll.LoadLibrary("C:/Test/c.dll") core.Print() makefile main: g++ -shared -o c.dll a.cpp python3 -B b.py If I compile this using make it works perfectly, as it should. However, if I change a.cpp to the following and compile, I receive the above noted error: a.cpp #include <iostream> extern "C" void Print() { printf("Hello!"); } As far as I can tell, the linker cannot find a dependency that I cannot see in the mingw64 version of iostream, but that is as much as I can discern. Does anyone else encounter this error using the above files? Any thoughts?
At this moment, for Python 3.10, the solution is to statically link against both libgcc and libstdc++ (or copy them from the mingw/bin folder to your dll folder); this doesn't seem to be necessary for versions of Python prior to 3.10: g++ -shared -o c.dll a.cpp -static-libstdc++ -static-libgcc It seems that Python 3.10 currently doesn't look for those dlls in your PATH. Edit: according to this Stack Overflow answer, you have to add the dlls' path manually using os.add_dll_directory; alternatively you can use the CDLL winmode parameter to allow Windows to search for dlls in your PATH, but that is discouraged, and linking them statically is the best solution to run the dll on systems that don't have mingw installed or on PATH. core = CDLL(r"c.dll",winmode=0x8)
74,074,885
74,075,107
How can I make my 'for' statement work fine, and how can I calculate the total?
class score { private: int marks; int total; public: public: score(){ marks = 0; total = 0; } void getM(); void tot(); void displayM(); void cinM(); }; void score::displayM() { cout << "The score is " << total << endl; } void score::getM() { for (int i = 0; i <= subjects; i++) cout << "Enter the score of the subject " << i << endl; cin >> marks; } void score::tot() { total = total + marks; } Output is: Enter the score of the subject 0 Enter the score of the subject 1 Enter the score of the subject 2 Enter the score of the subject 3 Enter the score of the subject 4 Enter the score of the subject 5 // once i write any number it just print The score is 3 The output in my mind is: Enter the score of the subject 0 3 Enter the score of the subject 1 3 Enter the score of the subject 2 1 Enter the score of the subject 3 3 Enter the score of the subject 4 3 Enter the score of the subject 5 2 The score is 15
void score::getM() { cin >> marks; tot(); } And then in main, // main.cpp int main() { score s; for(int i=0; i<5; i++) { cout << "Enter the score of the subject " << i+1 << endl; s.getM(); } s.displayM(); return 0; } Here is a demo with full code: // main.cpp #include<iostream> using namespace std; class score { private: int marks; int total; public: public: score(){ marks = 0; total = 0; } void getM() { cin >> marks; tot(); } void tot() { total = total + marks; } void displayM() { cout << "The score is " << total << endl; } }; int main() { score s; for(int i=0; i<5; i++) { cout << "Enter the score of the subject " << i+1 << endl; s.getM(); } s.displayM(); return 0; } To compile and run g++ -Wall main.cpp ./a.out The output: ❯ ./a.out Enter the score of the subject 1 1 Enter the score of the subject 2 2 Enter the score of the subject 3 3 Enter the score of the subject 4 4 Enter the score of the subject 5 5 The score is 15
74,075,178
74,075,858
How to initialize const data members in initializer list that share common attributes?
Suppose we have a class Bar which has two data members, a and b respectively pointing to two other objects of type A. Within this type A, we have a data member i that points to an integer. In the code below, in order to allow the data members of Bar, i.e. a and b to share the same pointer i. I have to create another data member inside Bar that represent the common pointer a and b are going to share. However, I'm wondering if there's any way of sharing the same intermediate pointer i when initializing a and b in the initializer list. #include <memory> using namespace std; struct A { const shared_ptr<int> i; explicit A(const shared_ptr<int> &i): i(i) {} }; struct Bar { const shared_ptr<int> _i; const unique_ptr<A> a; const unique_ptr<A> b; explicit Bar(const int &i): _i(make_shared<int>(i)), a(make_unique<A>(_i)), b(make_unique<A>(_i)) {} };
Since A::i is public and is a shared_ptr, you can eliminate the need for Bar::_i like this: struct Bar { const unique_ptr<A> a; const unique_ptr<A> b; explicit Bar(const int &i): a(make_unique<A>(make_shared<int>(i))), b(make_unique<A>(a->i)) {} }; If A::i is later made private, simply make Bar be a friend of A, then it should still work. With the old code, you could have done this: struct Bar { const A* a; const A* b; explicit Bar(const int &i): a(new A(new int(i))), b(new A(a->i)) {} }; But you would have had a management problem in deciding who is responsible for freeing the int*. With shared_ptr, you don't have to worry about that.
74,075,179
74,075,323
Why does referencing an xvalue not extend the lifetime of the object it refers to?
The compiler has no way of knowing whether the xvalue is actually referencing a temporary. Therefore if the xvalue is a reference to some specific persistent "non-temporary" object, we would not want to tie the lifetime of this object to the new reference variable - otherwise we could actually end up destroying the persistent object prematurely. So the real issue is that "lifetime extension" is implemented using "transfer of lifetime ownership", and this transfer is not something that we can always apply if we want to guarantee that we don't end up destroying an object prematurely. 1 Is my description above correct? It would seem a lot clearer and more useful if the lifetime extension concept was truly an "extension" internally - meaning if the object's lifetime was longer than the reference to begin with, it would not change anything, and the object would keep living even after the reference is destroyed. 2 What are the reasons that "extension" is not implemented in that way? I can imagine that this might only be possible if additional runtime instructions are added, e.g. some type of "reference counting" as a garbage collector does, and that likely goes against C++ philosophy. Or perhaps it would be possible to implement without any additional runtime instructions; it would just be difficult and not worth the effort.
Reference lifetime extension only ever applies to prvalues, and it is only applied immediately when they're created. There's no "transfer of lifetime ownership". When a temporary object is created, it can immediately be bound to a reference. If it is, then it is created with the lifetime of that reference. The object isn't created only to later have its lifetime extended. Its lifetime is extended from the moment it starts. In this way, the compiler does know if an xvalue is referencing a temporary object at the time that the lifetime extension takes place. Consider some examples: const T& ref = T{}; Here the temporary T object's lifetime is extended to that of ref immediately as it's created. T someFunction() { return {}; }; const T& ref = someFunction(); Here the return value of someFunction has its lifetime extended. Again, this happens immediately when that object is being created. Remember a function's return value is always created by the caller. Note that the following are not examples of reference lifetime extension: void someFunction(const T& ref) {} someFunction(T{}); This is often mistaken for reference lifetime extension, but it isn't. The temporary T object passed to someFunction already has a lifetime that extends to the end of the full expression in which it's created, which is already longer than the lifetime of ref. const T& someFunction() { return {}; } const T& ref = someFunction(); Here, ref does not extend the lifetime of the object returned by someFunction because there is an intermediate reference involved. ref is left dangling.
74,076,082
74,076,212
CPP won't generate text?
I have some CPP code to generate Lorem Ipsum style text. It works when I ask it for one sentence at a time, but when I tell it to mass generate sentences it generates tons of sentences that are just spaces and then periods. Here's the code (modified for confidentiality): srand(time(NULL)); string a[9327] = {"Str1", "Str2", "Str3" . . .}; int loop_1 = 0; int loop_2 = 0; while (loop_2 <= 100000) { while (loop_1 <= (rand() % 38) + 2) { int value = rand() % (9327 - (rand() % (9327 - (rand() % (9327 - (rand() % 9327)))))); cout << a[value] << " "; loop_1 = loop_1 + 1;} cout << "\b. "; loop_2 = loop_2 + 1; } I'm sorry if this is an incompetent question. I'm a conlanger/composer normally but I had to throw together some code for a project––so I'm still just barely learning C++.
Okay, I mean, this code doesn't make a lot of sense but to answer the question, note that the only way the inner loop can fail to issue random strings is if it never runs, and it will not run if loop_1 is greater than (rand() % 38) + 2, which is a random number from 2 to 39. Once loop_1 is greater than 39 the inner loop can never run, because loop_1 only increases. But anyway, before that occurs, if you want the inner loop to definitely run then test that it does ... Also might as well get rid of loop_2 because it isn't doing anything once loop_1 is greater than 39. Replacing 9327 with 7, I get int main() { srand(time(NULL)); string a[7] = { "aaaaaaaaaa ", "bbbbbbbb ", "ccccccccccccc ", "dddddddd ", "eeeeeeeeeee ", "ffffffffff ", "ggggggg "}; int loop_1 = 0; while (loop_1 < 40) { auto num = (rand() % 38) + 2; if (loop_1 > num) { continue; } while (loop_1 <= num) { int value = rand() % (7 - (rand() % (7 - (rand() % (7 - (rand() % 7)))))); cout << a[value] << " "; loop_1 = loop_1 + 1; } cout << "\b. "; } }
74,076,492
74,077,617
Teensy 3.1/3.2 - region `FLASH' overflowed by 86948 bytes while program is 40kb
I’m using Teensy 3.2 and cannot build my teensy code due to two warnings resulting in an error 1 return. Warning 1 - .pio/build/teensy31/firmware.elf section .text' will not fit in region FLASH’ Warning 2 - region `FLASH’ overflowed by 86948 bytes Error - collect2: error: ld returned 1 exit status From what I read it basically means that the file is too large but my src folder is 40129 bytes and Teensy 3.2 flash size is 262144 bytes as it is written in the platforms/teensy/boards/teensy31.json file. Even the build begins with > Verbose mode can be enabled via -v, --verbose option CONFIGURATION: https://docs.platformio.org/page/boards/teensy/teensy31.html PLATFORM: Teensy (4.16.0) > Teensy 3.1 / 3.2 HARDWARE: MK20DX256 72MHz, 64KB RAM, 256KB Flash DEBUG: Current (jlink) External (jlink) PACKAGES: - framework-arduinoteensy @ 1.156.0 (1.56) - toolchain-gccarmnoneeabi @ 1.50401.190816 (5.4.1) The src folder is a cpp file (with setup and loop functions) + 4 header files surrounding it with functions used in the cpp file. Also, the 2 warnings in the .h files are unrelated to the issue. Tree for more clarity
From what I read it basically means that the file is too large but my src folder is 40129 bytes and Teensy 3.2 flash size is 262144 The size of your src folder does not have much to do with the size of the generated program. If you are interested in where all that memory goes, you can use an ELF viewer. For example, here you find an online viewer: http://www.sunshine2k.de/coding/javascript/onlineelfviewer/onlineelfviewer.html. Upload your elf file and scroll down to the symbol table section to find out what eats up that huge amount of memory.
74,076,785
74,077,136
Value of a variable is not updating, it is either 1 or 0
Hey there! In the following code, I am trying to count frequency of each non zero number My intention of the code is to update freq after testing each case using nested loop but value of freq is not updating. freq value remains to be either 0 or 1. I tried to debug but still ending up with the same bug. Code: #include <bits/stdc++.h> using namespace std; int main() { int size; cin>>size; int freq=0; int d[size]; for(int i=0;i<size;i++){ //To create array and store values in it cin>>d[i]; } for(int i=0;i<size;i++){ if(d[i]==0 )continue; for(int j=0;j<size;j++){ if(d[i]==d[j]){ freq=freq+1; d[j]=0; } } cout<<"Frequency of number "<<d[i]<<" is "<<freq<<endl; d[i]=0; freq=0; } } Input: 5 1 1 2 2 5 Expected output: Frequency of number 1 is 2 Frequency of number 2 is 2 Frequency of number 5 is 1 Actual output: Frequency of number 0 is 1 Frequency of number 0 is 1 Frequency of number 0 is 1 Frequency of number 0 is 1 Frequency of number 0 is 1 Some one please debug the code and fix it. Open for suggestions.
#include <bits/stdc++.h> This is not standard C++. Don't use this. Include individual standard headers as you need them. using namespace std; This is a bad habit. Don't use this. Either use individual using declarations for identifiers you need, such as using std::cout;, or just prefix everything standard in your code with std:: (this is what most people prefer). int d[size]; This is not standard C++. Don't use this. Use std::vector instead. for(int j=0;j<size;j++){ if(d[i]==d[j]){ Assume i == 0. The condition if(d[i]==d[j]) is true when i == j, that is, when j == 0. So the next thing that happens is you zero out d[0]. Now assume i == 1. The condition if(d[i]==d[j]) is true when i == j, that is, when j == 1. So the next thing that happens is you zero out d[1]. Now assume i == 2. The condition if(d[i]==d[j]) is true when i == j, that is, when j == 2. So the next thing that happens is you zero out d[2]. Now assume i == 3 ... So you zero out every element of the array the first time you see it, and if(d[i]==d[j]) never becomes true when i != j. This can be fixed by changing the inner loop to for (int j = i + 1; j < size; j++) { This will output freq which is off by one, because this loop doesn't count the first element. Change freq = 0 to freq = 1 to fix that. I recommend having one place where you have freq = 1. A good place to place this assignment is just before the inner loop. Note, I'm using spaces around operators and you should too. Cramped code is hard to read. Here is a live demo of your program with all the aforementioned problems fixed. No other changes are made.
74,077,309
74,078,452
Flatten vector of classes which contain vectors of structs
I have a vector of a class Planters which contain vectors of Plants. My goal is to return a vector of plants containing the plants from planter 1, followed by plants from planter 2, etc. example: planter{{1,2,3,4}, {2,3,4,5}} should lead to {1,2,3,4,2,3,4,5}. note that the numbers represent plant objects. I'm trying to use join_view to flatten it but I am getting the error error: class template argument deduction failed: 18 | plants = std::ranges::join_view(planterView); | ^ /home/parallels/CMPT373/se-basic-cpp-template/lib/solutions/task05.cpp:18:52: error: no matching function for call to ‘join_view(std::ranges::ref_view<std::vector<ex4::Planter> >&)’ I have tried the following : for (auto it : planters){ plants.insert(plants.end(), it.getPlants().begin(), it.getPlants().end()); } This works however I am only allowed to use one loop (excluding loops inside STL function calls) and can only allocate memory once. The above approach allocates memory multiple times. How do I approach this? my code: std::vector<Plant> task05(std::vector<Planter> planters){ std::vector<Plant> plants; auto planterView = std::views::all(planters); std::views::transform(planterView, [](Planter planter){ return planter.getPlants();}); plants = ranges::views::all(std::ranges::join_view(planterView)); return plants; } Class: struct Plant { int plantiness = 0; Plant(int plantiness) : plantiness{plantiness} { } bool operator==(const Plant& other) const noexcept { return plantiness == other.plantiness; } }; class Planter { public: Planter(std::initializer_list<Plant> plants) : plants{plants} { } const std::vector<Plant>& getPlants() const noexcept { return plants; } private: std::vector<Plant> plants; };
std::vector<Plant> task05(std::vector<Planter> planters) { auto plants = planters | std::views::transform([](Planter const& planter) -> decltype(auto) { return planter.getPlants();}) | std::views::join | std::views::common ; return std::vector<Plant>(plants.begin(), plants.end()); } https://godbolt.org/z/T75anE15c Firstly you go wrong here auto planterView = std::views::all(planters); std::views::transform(planterView, [](Planter planter){ return planter.getPlants();}); std::transform is in-place but views::transform is not and returns a view, which you ignore; it is this returned view that needs to be passed into join_view. Secondly, begin and end return different types, but the constructor of std::vector needs these to have the same type, so you have to use views::common to achieve this. Thirdly, the deduced return type of the lambda doesn't propagate the const&-ness of getPlants, resulting in a copy. You have to declare the return type as decltype(auto). Also, you don't need to call views::all. It will be automatically called when calling transform, and you should use the join adapter rather than the join_view type. Also, ranges are very ergonomic when used in a pipeline, so just writing one expression spanning multiple lines is best practice. Also, there's not much point wrapping int and std::vector<int> in structs. It's easy just to use std::tuple, std::array, std::variant, std::optional and never actually write your own types.
74,077,404
74,077,421
Is DLL function called from multithreaded application run on same thread?
I have a multithreaded application. Now, I have written one function in a Windows DLL. I am calling this DLL function from a thread function (called by multiple threads) which is in my application. So, my question is, does this DLL function also execute on the same calling thread? (And is there no need to handle multithreading separately in the DLL?)
Yes, the DLL function will execute on the same thread as the caller. There is no need to handle multithreading separately in the DLL.
74,077,744
74,077,769
I got a negative number while trying to return a long long value
I created a function seriesSum to return the sum of the series of a number, and I used a long long return type, but it returns a negative number: if I pass for example 46341, the output is -1073716337 while what I expected is 1073767311. Here is my code: #include <iostream> using namespace std; long long seriesSum(int n) {return n*(n+1)/2;} int main() { cout<<seriesSum(46341); // expected 1073767311 but output is -1073716337 return 0; }
The argument variable n is an int. All operations you perform in the function are done using int values. Which you will overflow, leading to undefined behavior. Change the argument type to unsigned long long. I also recommend you change the return type to be unsigned as well, if you're not going to get negative results.
74,077,844
74,077,866
Reuse structured binding variables when using a local struct
I have a function as follows: auto foo(int i) { struct s { int i; std::string name; }; return s{i+10, "hello there"}; } And it works fine with structured bindings: auto [i, name] = foo(10); Is there a way to reuse the variables? i.e. // Later in the code. [i, name] = foo(20); It doesn't work with std::tie because I'm not returning a tuple. I prefer a struct as it feels cleaner to me than a tuple. But if there is no other alternative, I'm open to switching to a tuple.
Since the hidden variable introduced by a structured binding is unnamed, there's no easy way to refer to it to assign a new value. You could do auto x = foo(10); const auto &[i, name] = x; Then assign to x to change the values seen through i and name.
74,078,097
74,078,382
Is there something like std::unconvertible_to?
I am trying to have a template parameter, that is allowed to be every type, except for one. And I have no clue how. I'm new to concepts, and don't fully understand them yet, but this is how I implemented std::convertible_to: template <typename T> concept notSomeType = requires(T v) { {v} -> std::convertible_to<SomeType>; }; Is there anything like std::unconvertible_to built in? If not, is there any other way to do this?
You can simply create a concept that is a negation of std::convertible_to: template<class From, class To> concept NotSomeType = !std::convertible_to<From, To>; template<NotSomeType<int> T> void f(T) { std::cout << "Not convertible to int\n"; } template<std::convertible_to<int> T> void f(T) { std::cout << "convertible to int\n"; } int main() { f(1); f('a'); f("Hello world"); f([]() {}); }
74,078,132
74,078,234
qualified reference to 'Edge' is a constructor name rather than a type in this context
I don't know Java at all, but I need this program in C++. Can anyone help me convert it? I know OOP in C++, but I'm used to another syntax and do not understand exactly what and how to modify it. class Graph { class Edge { int src, dest; } int vertices, edges; Edge[] edge; Graph(int vertices, int edges) { this.vertices = vertices; this.edges = edges; edge = new Edge[edges]; for (int i = 0; i < edges; i++) { edge[i] = new Edge(); } } public static void main(String[] args) { int i, j; int numberOfVertices = 6; int numberOfEdges = 7; int[][] adjacency_matrix = new int[numberOfEdges][numberOfEdges]; Graph g = new Graph(numberOfVertices, numberOfEdges); g.edge[0].src = 1; g.edge[0].dest = 2; //do something with graph } } I tried to do that and I optinut the code below: class Graph { public: class Edge { private : Graph *parentClass; public: Edge(Graph *parentClass) { this->parentClass = parentClass; } int src; int dest; }; int vertices; int edges; std::vector<Graph::Edge::Edge*> edge; //error 1 qualified reference to 'Edge' is a constructor name rather than a type in this context Graph(int vertices, int edges) { this->vertices = vertices; this->edges = edges; edge = std::vector<Graph::Edge::Edge>(edges); //err 2 no viable overloaded '=' //err 3 qualified reference to 'Edge' is a constructor name rather than a type in this context for (int i = 0; i < edges; i++) { edge[i] = new Graph::Edge::Edge(); //err 4 no matching constructor for initialization of 'Graph::Edge::Edge' } } }; But the code that I got myself, has a number of errors here's my code in full
In the case of errors 1 and 3, "qualified reference to 'Edge' is a constructor name rather than a type in this context", which you mention in your question title, the error occurs because you are using the constructor name where the compiler expects to see a type. You have: Graph - this is a class; Edge - this is a nested class; Edge - the constructor for the Edge class. So when you refer to Graph::Edge that is the class, but Graph::Edge::Edge names the constructor function. Replace your usage of Graph::Edge::Edge with Graph::Edge to solve that error. You also have problems where you mix vectors of pointers and vectors of instances. Your final error 4 occurs because you are not using the constructor that you have defined (wrong parameters).
74,078,223
74,078,282
how to check that const array members grow monotonically at compile time
assume we have const array: const int g_Values[] = { ... }; how check that members grow monotonically at compile time, i.e. g_Values[i] < g_Values[i + 1] in runtime this possible to check like this: bool IsMonotonously() { int i = _countof(g_Values); int m = MAXINT; do { int v = g_Values[--i]; if (v >= m) return false; m = v; } while (i); return true; } but how rewrite this with constexpr and if IsMonotonously() return false - generate compile time error.
This is impossible for an array that is just const. You need to make it constexpr to be able to use it in a constexpr context. All you need to do in addition to this is to implement the function for checking the array as constexpr: template<class T, size_t N> constexpr bool IsStrictlyMonotonouslyIncreasing(T (&arr)[N]) { bool result = true; if (N > 1) { for (size_t i = 0; result && (i != N - 1); ++i) { result = (arr[i] < arr[i + 1]); } } return result; } const int g_Values[] = { 1, 2, 3, 4 }; static_assert(IsStrictlyMonotonouslyIncreasing(g_Values)); // compiler error g_Values is not usable in a constexpr context constexpr int g_Values2[] = { 1, 2, 3, 4 }; static_assert(IsStrictlyMonotonouslyIncreasing(g_Values2)); // ok
74,078,523
74,078,542
Is there a way to make a single loop for two different variables?
I am looking for a way to make a single loop for two variables. I have a "p1" (player 1) and a "p2". Both have the particularity to use the same loop to have a certain number of spaces. Is there a solution to make a single loop instead of creating two? Here is the loop in question for the player one: for (string::size_type i = 1; i < (SIZE - p1.name.length()) / 2; i++) { cout << " "; }
Yes, you separate the initialization and updates with commas (,) and convert the condition to a compound one. for (string::size_type i = 1, j = 1; i < (SIZE - p1.name.length()) / 2 && j < (SIZE - p2.name.length()) / 2; i++, j++) { cout << " "; }
74,078,646
74,078,681
Why replace an existing keyword in C/C++ with a macro?
I often come across keywords or one-word identifiers defined with another name. For example, boost defines noexcept as BOOST_NOEXCEPT, some C++ standard libraries replace [[nodiscard]] with _NODISCARD, Windows API is prone to introduce their own macros as well, and so forth. I didn't really manage to find anything which would explain this. This makes an unfamiliar person search for what such macros mean which therefore makes the code a bit harder to understand (at least this is how I see it). Why is such tendency so widespread? What is the purpose of replacing existing one-word constructs with macros?
The most useful case of these is when you target both compilers that support a new feature and ones that do not. For instance: #if CompilerSupportsNoexcept #define NOEXCEPT noexcept #else #define NOEXCEPT #endif
74,079,280
74,079,322
What does this syntax "success |= MAX35101_Read_2WordValue(TOF_DIFF_AVG_REG, &TOF_DIFF_Results->TOF_DiffData);" mean? in C/C++ language?
I am trying to decode the reference code to access the registers in MAX35101 IC. The code has this syntax at various lines. What does this mean? bool MAX35101_Update_TOF_AVG_DIFFData(Flow_ResultsStruct* TOF_DIFF_Results) { bool success = false; success |= MAX35101_Read_2WordValue(TOF_DIFF_AVG_REG, &TOF_DIFF_Results->TOF_DiffData); return success; } Simply what does Z |= X(a, &b->c); mean?
Z |= X(a, &b->c); could be rewritten as Z = Z | X(a, &(b->c)); We have a pointer to a struct, Flow_ResultsStruct* b. To access a member of a struct through a pointer we use ->. So &b->c means: take the address of member c of the struct that b points to. Pass a and that address to X. Now bitwise-OR (|) the return value with Z, which is false (0), so Z ends up equal to the return value of the function.
74,079,454
74,091,282
Qt error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument
I packed a very easy Qt application (with shared libraries found by ldd) on my debian 11. But it failed to run on ubuntu(20.04) virtual machine. The error is : error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument Both the glibc is version 2.31.  The version of qt is 6.3.2. I used to set LD_LIBRARY_PATH to . or /usr/local/lib,it has no effect.
You should never pack GLIBC or its constituent libraries (libc.so.6, libm.so.6, librt.so.1, libdl.so.2, libpthread.so.0) for reasons explained here. The most likely cause of the error is a mismatch between /lib64/ld-linux-x86-64.so.2 and ./libdl.so.2. Removing the 5 libraries listed above from this directory should make the application work.
74,079,605
74,079,686
Optimization of the condition in the if-statement
Well I know that the title makes almost no sense but I could not find a better one to explain my question here. So I've just started doing challenges on LeetCode and I am on the very first steps for now. But one situation confused me. So I was solving the question named "Number of 1 Bits" which basically gives you an unsigned integer and wants to know how many 1's are in its binary representation. So first, I've written this code; class Solution { public: int hammingWeight(uint32_t n) { int answer=0; while(n>0) { if(n%2)answer++; n/=2; } return answer; } }; Then I realised that it has a runtime of 3 miliseconds. Then I was trying other solutions to optimize it and I've written the fastest code(i think). class Solution { public: int hammingWeight(uint32_t n) { int answer=0; while(n>0) { if(n%2==1)answer++; n/=2; } return answer; } }; So this one had a runtime of 0 miliseconds. I thought since if(i%2) makes less comparisons, it would be faster. The only difference is the condition in the "if command". So why is if(i%2==1) is faster than if(i%2) ?
It's not. Both versions of your code produce the same machine code. Your measuring method is flawed: you need to call the function millions of times in a loop to get an unbiased result, and then the timings will be the same. The lesson? Don't try to micro-optimize the if statement; in most cases you won't be smarter than the compiler.
74,079,805
74,079,996
Eigen error while initializing fixed size array
I was following the Eigen documentation; b prints, but the code fails to compile when a is declared. #include <iostream> #include <eigen3/Eigen/Dense> using namespace std; using namespace Eigen; int main() { Vector3d b(5.0, 6.0, 7.0); MatrixXi a { // construct a 2x2 matrix {1, 2}, // first row {3, 4} // second row }; cout << b << endl; } The error I get when compiling with: g++ -std=c++11 -I /usr/include/eigen3/Eigen eigen.cpp -o eigen no matching function for call to ‘Eigen::Matrix<int, -1, -1>::Matrix(<brace-enclosed initializer list>)’ 17 | }; | ^ In file included from /usr/include/eigen3/Eigen/Core:458, from /usr/include/eigen3/Eigen/Dense:1, from eigen.cpp:2:
I solved the problem by deleting the system Eigen first and then cloning the current version from the git repository into a folder called eigen that I created in my home directory. I then included the whole directory with #include </home/zornic/eigen/eigen/Eigen/Dense> and it worked. The likely reason the system headers didn't work as in the documentation: brace-enclosed initializer lists for matrices were only added in Eigen 3.4, and the distribution package is an older release, which only supports the comma initializer (e.g. a << 1, 2, 3, 4;) for this kind of construction.
74,080,016
74,081,147
How to terminate child process in other child process? (win32api)
I created two child processes, called p_1 (notepad) and p_2 (a clone of the parent process), using CreateProcess. What I want is to terminate p_1 from p_2, not from the parent process. First, I created the processes with the following code: case WM_LBUTTONDOWN: { STARTUPINFO si = { 0, }; WCHAR notepad[32] = L"C:\\Windows\\notepad.exe"; WCHAR wndproc[32] = L"WindowProject.exe"; if (true == CreateProcess(NULL, notepad, NULL, NULL, true, 0, NULL, NULL, &si, &p_1)) { CreateProcess(NULL, wndproc, NULL, NULL, true, 0, NULL, NULL, &si, &p_2); } } break; When the left mouse button is clicked, the parent process creates the two processes. And here's the code to terminate the p_1 process by right-clicking: case WM_RBUTTONDOWN: { TerminateProcess(p_1.hProcess, 0); } break; However, p_2 can't terminate p_1; only the parent process can. I thought it was because p_2 doesn't have a handle to p_1, so I added DuplicateHandle after CreateProcess (I don't know if this is the correct way to use DuplicateHandle): if (true == CreateProcess(NULL, notepad, NULL, NULL, true, 0, NULL, NULL, &si, &p_1)) { CreateProcess(NULL, wndproc, NULL, NULL, true, 0, NULL, NULL, &si, &p_2); //give handle of p_1 to p_2 DuplicateHandle(&p_1.hProcess, &p_1.hProcess, &p_2.hProcess, &p_2.hProcess, 0, NULL, DUPLICATE_SAME_ACCESS); } But the result was the same. So instead of DuplicateHandle I wrote p_2.hProcess = OpenProcess(PROCESS_TERMINATE, true, p_1.dwProcessId);, but still the same result. I wonder how p_2 can terminate p_1, or which function is the correct one to use. (I am a beginner in programming and this is my first time using Stack Overflow. Please forgive me even if this is an inexperienced question.)
Using DuplicateHandle(), the parent can start P1 and P2, then duplicate its P1 handle into P2's context, then pass the value of the duplicate handle to P2 via an IPC mechanism of your choosing, such as a pipe, window message, etc. P2 can then store the value into a HANDLE variable and terminate P1 using that handle. Another option is to have the parent start P1 first, then use SetHandleInformation() to make its P1 handle inheritable, then start P2 with the bInheritHandles parameter set to true (and preferably with the P1 handle specified explicitly via STARTUPINFOEX instead of being inherited implicitly). Then the parent can pass the value of its P1's handle as a command-line parameter when starting P2, and then P2 can parse the parameter into a HANDLE variable that it can use to terminate P1. Another option is the parent can start P1 first, and then pass P1's PID as a command-line parameter when starting P2, then P2 can parse the parameter into a DWORD variable and OpenProcess() the PID so it can then terminate P1 using the opened handle. This 3rd approach does have a small risk, though, in that P1 could be terminated through other means after P2 has started and before P2 opens the PID, thus risking the PID being reused for a completely unrelated process. The 1st and 2nd approaches avoid that risk.
74,080,233
74,080,560
Where does the C++ standard state that subobjects of an array declared as static are static objects?
Given the following code (it compiles successfully): int main(void) { const int a{}; const int &r = a; static int arr[1] = {r}; constexpr const int &ref = arr[0]; // OK } The initializer of ref is a glvalue, so the full-expression of the initialization has to be a glvalue core constant expression. That is, the reference ref has to refer to an entity that is a permitted result of a constant expression (an object with static storage duration). It refers to the element arr[0], which is of type int. Here's my question: Is this element a subobject with static storage duration? If yes, what's the wording that states that?
Here's my question: Is this element a subobject with static storage duration? [basic.stc.inherit]/1 says: The storage duration of subobjects and reference members is that of their complete object. An array element is a subobject of the array it is an element of. So it has the same storage duration as the array.
74,080,878
74,081,560
Separating of tests build form application build in CI/CD without rebuilding
I have a project with the files: main.cpp - application sum.h sum.cpp tester.cpp - tester application I build it with CMake, CMakeLists.txt: cmake_minimum_required(VERSION 3.10) set(CMAKE_CXX_STANDARD 17) set(CMAKE_CXX_STANDARD_REQUIRED ON) set (ProjectName hello) project(hello VERSION 1.0) set(SOURCES sum.h sum.cpp main.cpp) add_executable(${ProjectName} ${SOURCES}) enable_testing() add_executable(tester sum.h sum.cpp tester.cpp) add_test(Tester tester) Then I set up CI/CD on GitLab: image: gcc build: stage: build before_script: - apt update && apt -y install make cmake libcrypto++-dev script: - cd SimpleCiCdForCpp - mkdir build - cd build - cmake .. - make - ls artifacts: paths: - SimpleCiCdForCpp/build/hello - SimpleCiCdForCpp/build/tester test: stage: test before_script: - apt update && apt -y install make cmake libcrypto++-dev script: - ls - cd SimpleCiCdForCpp - cd build - cmake .. - make - ctest Everything works - I have 2 applications, the main one and the tester, and the tests pass. The problem is that I have to build everything in the build stage and again in the test stage, otherwise I end up with an empty testing config for ctest. Is there any good practice for separating testing projects from main applications?
Your CMakeLists.txt may look like this: cmake_minimum_required(VERSION 3.10) project(hello VERSION 1.0) set(CMAKE_CXX_STANDARD 17) set(CMAKE_CXX_STANDARD_REQUIRED ON) include(CTest) # Defines the BUILD_TESTING option # Do not build twice. add_library(sum sum.cpp sum.h) # There is already a PROJECT_NAME CMake variable. add_executable(${PROJECT_NAME} main.cpp) target_link_libraries(${PROJECT_NAME} PRIVATE sum) # Do not build tests if not needed! if(BUILD_TESTING) add_executable(tester tester.cpp) target_link_libraries(tester PRIVATE sum) # Use the same name for test and executable. add_test(NAME tester COMMAND tester) endif() Then: build: - cmake -S. -Bbuild -DBUILD_TESTING=0 - cmake --build build artifacts: paths: # Cache the whole build directory, so that the test stage doesn't rebuild! - SimpleCiCdForCpp/build test: - cmake -S. -Bbuild -DBUILD_TESTING=1 - cmake --build build # --test-dir requires CMake >= 3.20. - ctest --test-dir build