71,677,501
71,724,924
Radix sort: getting the right key to sort signed integers
I'm having trouble finding the right key for sorting with the radix sort algorithm. I have implemented the sort itself, but I have an issue with negatives. I tried just flipping the value's bits, which helped a little, but the negatives come out in descending order instead of ascending order. The funny thing is that the sort works fine with floating-point and unsigned values. Here is my function to build a key for sorting:

```cpp
uint32_t SortKeyToU32(int Value)
{
    uint32_t Result = (uint32_t)Value;
    if(Result & 0x80000000)
    {
        Result = ~Result;
    }
    else
    {
        Result |= 0x80000000;
    }
    return Result;
}
```

And here is my sorting function:

```cpp
void RadixSort(int** Array, int Size)
{
    int* Dest = new int[Size];
    int* Source = *Array;
    memset(Dest, 0, Size*sizeof(int));
    for(int Bit = 0; Bit < 32; Bit += 8)
    {
        uint32_t ArrayOfValues[256] = {};
        for(int Index = 0; Index < Size; ++Index)
        {
            uint32_t SortKey = SortKeyToU32(Source[Index]);
            uint32_t Key = (SortKey >> Bit) & 0xFF;
            ++ArrayOfValues[Key];
        }
        int Sum = 0;
        for(int Index = 0; Index < ArraySize(ArrayOfValues); ++Index)
        {
            uint32_t Count = ArrayOfValues[Index];
            ArrayOfValues[Index] = Sum;
            Sum += Count;
        }
        for(int Index = 0; Index < Size; ++Index)
        {
            uint32_t SortKey = SortKeyToU32(Source[Index]);
            uint32_t Key = (SortKey >> Bit) & 0xFF;
            Dest[ArrayOfValues[Key]++] = Source[Index];
        }
        int* Temp = Source;
        Source = Dest;
        Dest = Temp;
    }
}
```

But how can I deal with sorting signed integers? Sorry if it looks obvious. Thanks.

EDIT. Here is a sample input array: 1, 6, 9, 2, 3, -4, -10, 8, -30, 4 and the output I get: -4, -10, -30, 1, 2, 3, 4, 6, 8, 9
The sort key formula that you need is very simple:

```cpp
uint32_t SortKeyToU32(int32_t Value)
{
    return uint32_t(Value) + (1U << 31);
}
```

The reason is the following. First we cast from 32-bit signed to unsigned. This cast just reinterprets the bits of the signed value as unsigned. Let's see how signed values are laid out in the value space of a number. For simplicity, consider a 4-bit signed number; it has 16 different values in total, mapped as follows:

```
Signed form:   0, 1, 2, 3, 4, 5, 6, 7, -8, -7, -6, -5, -4, -3, -2, -1
Unsigned form: 0, 1, 2, 3, 4, 5, 6, 7,  8,  9, 10, 11, 12, 13, 14, 15
```

In the signed case, the first half of the values are the non-negative ones in ascending order, and the second half are the negative values, also in ascending order. For sorting we need all negative values to go before the positive ones, also in ascending order. For the signed 4-bit values above (in unsigned form) we just need to add 8, which is exactly the middle of the range. The whole range then shifts right with wraparound, and the negative values appear before the positives (below, the first number is the value after adding 8, and the number in parentheses is the original value):

```
0 (-8), 1 (-7), 2 (-6), 3 (-5), 4 (-4), 5 (-3), 6 (-2), 7 (-1),
8 (0), 9 (1), 10 (2), 11 (3), 12 (4), 13 (5), 14 (6), 15 (7)
```

This is exactly what the sort key formula `uint32_t(Value) + (1U << 31)` does: it reinterpret-casts the signed value to unsigned form and adds the mid-range value, which is `(1U << 31)`. With this corrected formula your code works the right way. In the snippet below I left your original sort key formula too, under the name `SortKeyToU32Old()`; you may swap the names of the sort key functions to see how it behaves before (incorrectly) and after (correctly) the change on my example array. Try it online!
```cpp
#include <cstdint>
#include <cstring>
#include <iostream>

#define ArraySize(a) (sizeof(a) / sizeof(a[0]))

uint32_t SortKeyToU32(int32_t Value)
{
    return uint32_t(Value) + (1U << 31);
}

uint32_t SortKeyToU32Old(int Value)
{
    uint32_t Result = (uint32_t)Value;
    if(Result & 0x80000000)
        Result = ~Result;
    else
        Result |= 0x80000000;
    return Result;
}

void RadixSort(int** Array, int Size)
{
    int* Dest = new int[Size];
    int* Source = *Array;
    memset(Dest, 0, Size*sizeof(int));
    for(int Bit = 0; Bit < 32; Bit += 8)
    {
        uint32_t ArrayOfValues[256] = {};
        for(int Index = 0; Index < Size; ++Index)
        {
            uint32_t SortKey = SortKeyToU32(Source[Index]);
            uint32_t Key = (SortKey >> Bit) & 0xFF;
            ++ArrayOfValues[Key];
        }
        int Sum = 0;
        for(int Index = 0; Index < ArraySize(ArrayOfValues); ++Index)
        {
            uint32_t Count = ArrayOfValues[Index];
            ArrayOfValues[Index] = Sum;
            Sum += Count;
        }
        for(int Index = 0; Index < Size; ++Index)
        {
            uint32_t SortKey = SortKeyToU32(Source[Index]);
            uint32_t Key = (SortKey >> Bit) & 0xFF;
            Dest[ArrayOfValues[Key]++] = Source[Index];
        }
        int* Temp = Source;
        Source = Dest;
        Dest = Temp;
    }
}

int main()
{
    int a[] = {1000, -200, 800, -300, 900, -100};
    int* p = &a[0];
    RadixSort(&p, ArraySize(a));
    for (auto x : a)
        std::cout << x << " ";
}
```

Input: `{1000, -200, 800, -300, 900, -100}`

Output (correct, new sort key): `-300 -200 -100 800 900 1000`

Output (incorrect, original sort key): `-100 -200 -300 800 900 1000`
71,677,547
71,677,645
Does C++ constructor return an object?
All books and internet pages I read say that C++ constructors do not have a return value and that they just initialize an object:

```cpp
#include <iostream>

class Number
{
    int m_val{};
public:
    Number() = default;
    Number(int val) : m_val(val) {}
    int val() { return m_val; }
};

int main()
{
    Number n; // Initializing object with default constructor
    std::cout << n.val() << '\n';
    return 0;
}
```

But it turns out that I can also use constructors for assignment and for calling methods of an object, as if the constructor returned the value of that object:

```cpp
Number n = Number(10);                 // This works
std::cout << Number(29).val() << '\n'; // And this
```

In other similar Stack Overflow questions like this and this, people write that this syntax creates a value-initialized temporary object of type `Number`, but that does not answer my question. So does a constructor return an object, or is this some C++ entity that I've never heard of?
Indeed, a constructor doesn't have a return value. But the expression `Number(10)` denotes an object of type `Number`: it is an expression with a value, not a function call whose result comes from the constructor.
71,678,240
71,736,898
CGAL read_OFF discards face depending on vertex order
When reading an OFF file with CGAL, it appears that the vertex order of a face decides whether or not it is read in by `read_OFF`. But the OFF file definition does not say anything about the vertex order of a face. I am reading in self-generated OFF files using the `read_OFF` method of CGAL:

```cpp
using Kernel = CGAL::Exact_predicates_inexact_constructions_kernel;
using Point_3 = typename Kernel::Point_3;
...
CGAL::Surface_mesh<Point_3> test_mash;
CGAL::IO::read_OFF(file_name, test_mash);
std::cout << "Number of vertices: " << test_mash.vertices().size()
          << ", Number of faces: " << test_mash.faces().size() << std::endl;
```

two_faces_read.off:

```
OFF
4 2 0
1 1 1
2 -2 2
3 3 -3
-4 4 4
3 0 1 2
3 0 3 1
```

one_face_read.off:

```
OFF
4 2 0
1 1 1
2 -2 2
3 3 -3
-4 4 4
3 0 1 2
3 0 1 3
```

Reading two_faces_read.off works as expected, printing `Number of vertices: 4, Number of faces: 2`. But when I read one_face_read.off I get `Number of vertices: 4, Number of faces: 1`. The only difference between these two files is the last line; the vertex order of the second face is different. After trying all possible combinations, it seems that with 031, 103, 310 two faces are read in, while with 013, 130, 301 only one face is read in. The OFF file specification referenced by CGAL does not mention any rules concerning the vertex order of a face. Why does this happen, and how can I ensure that all faces are read in?
one_face_read.off does not define a valid surface mesh, as the orientations of the two faces are not compatible. You can read the points and faces as a polygon soup and call `CGAL::Polygon_mesh_processing::is_polygon_soup_a_polygon_mesh()` to check whether the input is a valid surface mesh. The function `CGAL::Polygon_mesh_processing::orient_polygon_soup()` can be used to fix the orientations, and `CGAL::Polygon_mesh_processing::polygon_soup_to_polygon_mesh()` can then be used to create the mesh.
71,678,278
71,678,443
How to declare member template of class template as friend?
Given the following code:

```cpp
template <typename T, typename D>
class B;

template <typename T>
class A
{
public:
    A() { }

    template <typename D>
    A(B<T, D>);
};

template <typename T, typename D>
class B
{
    friend A<T>::A(B<T, D>);
    int x;
};

template <typename T>
template <typename D>
A<T>::A(B<T, D> b)
{
    b.x = 42;
}

int main()
{
    B<int, double> b;
    A<int> a(b);
    return 0;
}
```

I want to declare the member template `A(B<T, D>)` of class template `A<T>` as a friend. So I declared:

```cpp
friend A<T>::A(B<T, D>);
```

But I got a compile error:

```
test.cc: In instantiation of ‘class B<int, double>’:
test.cc:24:18:   required from here
test.cc:13:10: error: prototype for ‘A<int>::A(B<int, double>)’ does not match any in class ‘A<int>’
   friend A<T>::A(B<T, D>);
          ^~~~
test.cc:4:7: error: candidates are: constexpr A<int>::A(A<int>&&)
 class A {
       ^
test.cc:4:7: error:                 constexpr A<int>::A(const A<int>&)
test.cc:8:3: error:                 template<class D> A<T>::A(B<T, D>) [with D = D; T = int]
   A(B<T, D>);
   ^
test.cc:6:3: error:                 A<T>::A() [with T = int]
   A() { }
   ^
test.cc: In instantiation of ‘A<T>::A(B<T, D>) [with D = double; T = int]’:
test.cc:25:13:   required from here
test.cc:20:5: error: ‘int B<int, double>::x’ is private within this context
     b.x = 42;
     ~~^
test.cc:14:7: note: declared private here
   int x;
       ^
```

How to fix it?
The important part of the error message seems to be this line:

```
test.cc:8:3: error: template<class D> A<T>::A(B<T, D>) [with D = D; T = int]
```

Notice how the template type `D` is not expanded to an actual type. That led me to believe that adding a new template parameter for the friend declaration might help:

```cpp
template<typename U>
friend A<T>::A(B<T, U>);
```

And it works in my testing. After thinking a little about the reasons behind this, I think it's because there is really no such (constructor) function as `A::A(B<T, D>)`, only the template `template<typename D> A::A(B<T, D>)`.
71,678,379
71,680,560
How to use boost::multi_index with a struct inside a struct?
I have a vector containing information called `ST_ThepInfo`. My problem occurs when using struct `ST_ThepInfo` inside struct `Infovalue_t`:

```cpp
struct ST_ThepInfo
{
    int length;
    string ex;
    int weight;
};

struct Infovalue_t
{
    ST_ThepInfo s;
    int i;
};

struct ST_ThepInfo_tag {};

typedef boost::multi_index_container<
    Infovalue_t,
    boost::multi_index::indexed_by<
        boost::multi_index::random_access<>, // this index represents insertion order
        boost::multi_index::hashed_unique<
            boost::multi_index::tag<ST_ThepInfo_tag>,
            boost::multi_index::member<Infovalue_t, ST_ThepInfo, &Infovalue_t::s>>
    >
> myvalues_t;
```

Then I call this code:

```cpp
myvalues_t s;
ST_ThepInfo k;
....
auto t = count.emplace_back(k, 0);
```

However, I get an error (only a screenshot of it was included). How do I fix it?
Hashed indexes require the key type to be hashable and equality-comparable. You need to provide these for the info struct: Live On Coliru

```cpp
#include <boost/multi_index/hashed_index.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/multi_index/random_access_index.hpp>
#include <boost/multi_index_container.hpp>
namespace bmi = boost::multi_index;

struct STInfo {
    int         length;
    std::string ex;
    int         weight;

    auto key_fields() const { return std::tie(length, ex, weight); }

    friend size_t hash_value(STInfo const& info) {
        using boost::hash_value;
        return hash_value(info.key_fields());
    }

    bool operator==(STInfo const& other) const {
        return key_fields() == other.key_fields();
    }
};

struct Infovalue_t {
    STInfo s;
    int    i;
};

using Table = bmi::multi_index_container< //
    Infovalue_t,                          //
    bmi::indexed_by<                      //
        bmi::random_access<>,             // represents insertion order
        bmi::hashed_unique<               //
            bmi::tag<struct byInfo>,      //
            bmi::member<Infovalue_t, STInfo, &Infovalue_t::s>> //
        >>;

#include <fmt/ranges.h>
#include <fmt/ostream.h>
struct Format : fmt::formatter<int> {
    auto format(STInfo const& info, auto& ctx) const {
        return fmt::format_to(ctx.out(), "{}", info.key_fields());
    }
    auto format(Infovalue_t const& iv, auto& ctx) const {
        return fmt::format_to(ctx.out(), "({}, {})", iv.s, iv.i);
    }
};
template <> struct fmt::formatter<Infovalue_t> : Format {};
template <> struct fmt::formatter<STInfo> : Format {};

int main() {
    Table count;
    STInfo k;
    count.push_back({STInfo{42, "LtUaE", 99}, 30});
    count.push_back({STInfo{43, "SomethingElse", 98}, 40});
    count.push_back({STInfo{44, "SomethingElse", 97}, 30});

    fmt::print("{}\n", count);
}
```

Prints:

```
[((42, "LtUaE", 99), 30), ((43, "SomethingElse", 98), 40), ((44, "SomethingElse", 97), 30)]
```

Note that it is very important that equality matches hashing. If they don't agree, there's going to be Undefined Behaviour.
This is the main reason why I don't recommend defaulting the equality operator as you can in C++20:

```cpp
#ifdef __cpp_impl_three_way_comparison
    auto operator<=>(STInfo const&) const = default;
#endif
```

That makes it less explicit that `hash_value` needs to agree with the members, and risks them going out of sync when you e.g. add a member.
71,678,559
71,678,795
How can you change the value of a string pointer that is passed to a function in C++?
I need to change the value of a `std::string` using a function. The function must be void, and the parameter must be a pointer to a string, as shown.

```cpp
#include <iostream>

void changeToBanana(std::string *s)
{
    std::string strGet = "banana";
    std::string strVal = strGet;
    s = &strVal;
}

int main()
{
    std::cout << "Hello, World!" << std::endl;
    std::string strInit = "apple";
    std::string* strPtr;
    strPtr = &strInit;
    changeToBanana(strPtr);
    std::cout << *strPtr << std::endl;
    return 0;
}
```

I would like the resulting print to say "banana". Other answers involve changing the parameter. I have tried assigning the string using a for loop, going element by element, but that did not work: the value remained the same.
"The function must be void, and the parameter must be a pointer to a string as shown."

With these requirements you cannot change the value of the pointer that is passed to the function, because it is passed by value. Don't confuse the pointer with what it points to. Parameters are passed by value (unless you pass them by reference): a copy is made, and any changes you make to `s` in the function do not apply to the pointer in `main`. However, you can change the string pointed to by the pointer (because `s` points to the same string as the pointer in `main`):

```cpp
void changeToBanana(std::string *s)
{
    std::string str = "banana";
    *s = str;
}
```

However, this is not idiomatic C++. You should rather pass a reference, `void changeToBanana(std::string& s)`, or return the string, `std::string returnBanana()`.
71,679,130
71,682,074
Storing variadic unique_ptr pack into a tuple
I am trying to write a constructor that takes a variadic pack of `unique_ptr`s as argument and stores it in a tuple:

```cpp
template<class... E>
class A
{
    std::tuple<std::unique_ptr<E>...> a_;
public:
    A(std::unique_ptr<E>&&... a)
        : a_(std::make_tuple(std::move(a)...))
    {}
};
```

but this fails to compile when I call the constructor with more than one argument, e.g.

```cpp
A<double> obj(std::make_unique<double>(2.0), std::make_unique<double>(3.0));
```

fails to compile with an error in `tuple::test_method()`. My questions are: Is there anything inherently wrong in what I am trying to do? Is it doable? Thanks
It looks like in this case the issue is just that your variable has type `A<double>` but you are passing two values, so you need to use `A<double, double>`:

```cpp
A<double, double> obj(std::make_unique<double>(2.0), std::make_unique<double>(3.0));
```

Alternatively, you can eliminate the need to state the template parameters if you declare the variable with `auto`:

```cpp
auto obj = A(std::make_unique<double>(2.0), std::make_unique<double>(3.0));
```
71,679,224
71,679,513
error C2672 and C2784 when using lambdas with template functions
I have written the following function, which hides the loops when iterating over a 2D vector:

```cpp
template<typename ElementType>
void iterateOver2DVector(std::vector<std::vector<ElementType>> & vec,
                         std::function<void(ElementType & element)> function)
{
    for(auto & row : vec)
    {
        for(auto & element : row)
        {
            function(element);
        }
    }
}
```

but I get the errors 'function': no matching overloaded function found and 'declaration' : could not deduce template argument for 'type' from 'type' when using it with lambdas like this:

```cpp
iterateOver2DVector(graph, [](Node & node) { node.update(); });
```

Does someone know what I am doing wrong?
The call will try to deduce `ElementType` from both the first and the second parameter/argument pair. It will fail for the second pair, since the second argument to the function is not a `std::function` but a closure type. If deduction fails for one pair, the whole deduction fails, even if the other pair would deduce the template argument for `ElementType` correctly.

Your function doesn't need to deduce `ElementType` from the second parameter/argument pair, so you can make it a non-deduced context, so that no deduction for it will be attempted. A common approach is to use `std::type_identity_t`:

```cpp
template<typename ElementType>
void iterateOver2DVector(std::vector<std::vector<ElementType>> & vec,
                         std::type_identity_t<std::function<void(ElementType & element)>> function)
```

`std::type_identity_t<T>` is an alias for `std::type_identity<T>::type`, which is an alias for `T`; however, since the type `T` is now to the left of a `::`, it is in a non-deduced context. `std::type_identity_t` is only available since C++20, but it can be defined easily in previous versions of C++:

```cpp
template<typename T>
struct type_identity { using type = T; };

template<typename T>
using type_identity_t = typename type_identity<T>::type;
```

However, in this case `std::function` is just unnecessary overhead anyway. Simply accept the closure type directly, instead of a `std::function`:

```cpp
template<typename ElementType, typename F>
void iterateOver2DVector(std::vector<std::vector<ElementType>> & vec, F function)
```

A `std::function` is only useful if you intend to store the callable without dependence on its specific type, e.g. in a class member, and even in that case the conversion to `std::function` can be done when assigning to the member.
71,679,270
71,679,386
Ordering coordinates in C++
I'm trying to order a list of coordinates in C++, but it's not working. I'm using the C++ `sort` function, first ordering the x values and then the y values. If n = 9 and the coordinates are (2,2) (2,3) (1,2) (1,3) (2,1) (1,1) (3,2) (3,3) (3,1), the output should be (1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3). But for some reason, it is (1,1) (1,2) (1,3) (2,1) (2,2) (2,3) (3,2) (3,3) (3,1).

```cpp
#include <iostream>
#include <algorithm>
using namespace std;

typedef struct{
    short x;
    short y;
} coord;

bool comparex(coord a, coord b){
    if(a.x < b.x)
        return 1;
    else
        return 0;
}

bool comparey(coord a, coord b){
    if(a.x == b.x && a.y < b.y){
        return 1;
    }
    else
        return 0;
}

int main(){
    short n;
    coord v[1001];
    while(cin >> n){
        for (int i=1; i<=n; i++){
            cin >> v[i].x;
            cin >> v[i].y;
        }
        sort(v+1, v+n, comparex);
        sort(v+1, v+n, comparey);
        for (int i=1; i<=n; i++){
            cout << v[i].x << ' ' << v[i].y << endl;
        }
    }
    return 0;
}
```
Your comparison is flawed:

```cpp
bool comparey(coord a, coord b){
    if(a.x == b.x && a.y < b.y){
        return 1;
    }
    else
        return 0;
}
```

You always return 0 (it should be `false`) when `a.x != b.x`. For example `comparey({0,1}, {1,1}) == false`, but also `comparey({1,1}, {0,1}) == false`. You can use `std::pair` for the comparison:

```cpp
bool comparey(coord a, coord b){
    return std::pair(a.x, a.y) < std::pair(b.x, b.y);
}
```

or use `std::tie` to avoid constructing the pairs:

```cpp
bool comparey(coord a, coord b){
    return std::tie(a.x, a.y) < std::tie(b.x, b.y);
}
```
71,679,698
71,680,854
Initialization of a base class reference from a derived class member
I have two classes, Base and Derived. Derived constructs Base using its own member object, which inherits from `Base::BaseChild`.

```cpp
struct Base
{
    struct BaseChild {
        int x = 5;
    };

    Base(BaseChild& c): baseData(c), constantVar(c.x)
    {
        assert(constantVar == 5);
    }

    int getX() const { return baseData.x; }

private:
    const int constantVar;
    BaseChild& baseData;
};

struct Derived: public Base
{
    struct DerivedChild: public BaseChild {
        double y = 4.0;
    };

    Derived(): Base(data) {}

private:
    DerivedChild data;
};

Derived myObject;
assert(myObject.getX() == 5);
```

Reasoning: I do it this way because everything seems pretty encapsulated for my case, where I need Childs to swap their content (vector, shared_ptr, unique_ptr) with other Childs while keeping the child's memory address, and I can still access the Child object from the base class without the need for a virtual function, which was killing my app's performance.

Question: I've read another post like this one, which states that initialization of a Derived member before the Base isn't possible, so the `constantVar` assert would always fail. However `getX()` works fine after the constructor, and I'm interested in these functions, which are called once the constructor ends. Is this safe? Or is there any hidden danger here?
The base class `Base` of `Derived` is constructed before the member `data`. As a result, `data` will not be initialized when you pass a reference to it to `Base`'s constructor; its initialization happens after that constructor call. You are, however, trying to read the member `x` of `data` in `Base`'s constructor. At this point `data`'s lifetime has not started yet, and accessing the value of a non-static data member of an object outside its lifetime causes undefined behavior. Whether or not the assertions succeed isn't significant; undefined behavior allows for either outcome. The situation would be potentially different (although not spelled out precisely in the standard's rules) if you were not trying to access the value of `data` inside `Base`'s constructor, but only storing the reference to it.
71,680,595
71,680,635
Is a constructor a function, and is it possible to call a constructor?
I came across this question, where one of the user comments says:

"A constructor cannot be called, it is not a function. It is invoked automatically when a new object is created."

My question is: is the above comment true/correct? If yes, then why isn't a constructor considered a function, and why can't we call it?
Formally, in the C++ Standard it is (along with several others) a special member function, so yes, it is a function; but it is a special function, and not all of the normal rules apply. There is no syntax to write code that calls a constructor directly or to form a function pointer to it. The Standard specifically says "Constructors do not have names."

The compiler will automatically call a constructor when an object is created. The compiler will also automatically call constructors for subobjects (bases and members) of a class object. "Delegating constructors" are sort of a degenerate case of initialization of subobjects (in formal algebra, we say that any set is a subset of itself, and say "strict" subset when we mean a subset that is not the entire set).

There are a variety of ways to create an object, and some of them look like a function call, but that's actually a cast which results in creation of a new object, on which the constructor is called implicitly by the compiler. There's also placement-new syntax, which doesn't do very much besides causing the compiler to implicitly call the constructor -- but even there a brand new object is being created.

One important way in which the compiler's implicit call to a constructor differs from an explicit function call found in user code is that the implicit call occurs within an implicit try/catch scope that will result in destruction of subobjects if an exception occurs. A direct call to the constructor, if one were possible, wouldn't have such extra behavior.
71,680,931
71,769,940
Why does this OpenMP code compile with g++, but fail with nvcc?
I'm trying to compile this code that uses OpenMP. When I compile it with nvcc, it gives an error that appears to complain about a token that isn't even there. Here's a minimal version of my code:

```cpp
int main() {
    // this loop somehow prevents the second one from compiling
    for (int foo = 0; foo < 10; foo++) {
        int bar;
        continue;
    }
    #pragma omp parallel for
    for (int baz = 0; baz < 10; baz++) { }
    return 0;
}
```

Here's the error message it produces:

```
exp.cu:10:1: error: for statement expected before ‘}’ token
   10 |     for (int baz = 0; baz < 10; baz++) { }
      | ^
```

I'm compiling it with this command:

```
nvcc -Xcompiler -fopenmp exp.cu
```

Without the first loop, this program compiles correctly. It also works if I remove either of the lines in the first loop. How does the first loop prevent the second one from compiling? Am I using invalid OpenMP syntax? If I rename the file to exp.cpp and compile it with `g++ -fopenmp exp.cpp`, that works without errors. Is there any possibility that this is a bug in nvcc? Unfortunately, I can't just use g++, because I need to be able to use CUDA kernels in other places.

Edit: I'm using CUDA 11.2.
There is evidently a defect in CUDA 11.2 as far as this code example goes. The problem appears to be resolved in CUDA 11.4 and later. The solution is to upgrade the CUDA install to CUDA 11.4 or later.
71,680,983
71,681,678
Why is newer version of g++ saying `static_assert(is_trivial_v<_CharT> && is_standard_layout_v<_CharT>);` when class did not change?
In the following, the `my_char` class is said to not be trivial. I'm thinking that maybe the compiler is wrong, but maybe you know better than me what is wrong.

```
In file included from /usr/include/c++/11/bits/basic_string.h:48,
                 from /usr/include/c++/11/string:55,
                 from /usr/include/c++/11/bits/locale_classes.h:40,
                 from /usr/include/c++/11/bits/ios_base.h:41,
                 from /usr/include/c++/11/ios:42,
                 from /usr/include/c++/11/ostream:38,
                 from /usr/include/c++/11/iostream:39,
                 from /home/alexis/my_char.cpp:2:
/usr/include/c++/11/string_view: In instantiation of ‘class std::basic_string_view<main()::my_char, std::char_traits<main()::my_char> >’:
/home/alexis/my_char.cpp:24:20:   required from here
/usr/include/c++/11/string_view:101:21: error: static assertion failed
  101 |       static_assert(is_trivial_v<_CharT> && is_standard_layout_v<_CharT>);
      |                     ^~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/string_view:101:21: note: ‘std::is_trivial_v<main()::my_char>’ evaluates to false
```

Here is the compilable code my_char.cpp:

```cpp
#include <iostream>

struct my_char
{
    typedef std::basic_string<my_char> string_t;

    bool is_null() const { return f_char == CHAR_NULL; }

    static my_char::string_t to_character_string(std::string const & s)
    {
        my_char::string_t result;
        for(auto const & ch : s)
        {
            my_char c;
            c.f_char = ch;
            result += c;
        }
        return result;
    }

    char32_t f_char = CHAR_NULL;
    std::uint32_t f_line = 0;
    std::uint32_t f_column = 0;
};

int main()
{
    constexpr char32_t CHAR_NULL = '\0';

    my_char::string_t str;
    my_char c{ 'c' };
    str += c;
    std::cerr << "char = [" << static_cast<char>(str[0].f_char) << "]\n";
    return 0;
}
```

g++ version: g++ (Ubuntu 11.2.0-7ubuntu2) 11.2.0

Command line used to compile the above: `g++ -Wall my_char.cpp`

When I remove the `to_character_string()` static function, it works. If I define that function outside of the class, it doesn't help; it is still not trivial. On the other hand, the `is_null()` function causes no issue. Why would that one function make the class non-trivial?

Note that this class works under Ubuntu 18.04. The non-trivial issue appeared on Ubuntu 21.10. I suppose that's either a new check, or the old check just let it go. For those interested in the complete class, it can be found here.
Your class `my_char` is not suitable as a character type for `basic_string`. From cppreference:

"The class template basic_string stores and manipulates sequences of char-like objects, which are non-array objects of trivial standard-layout type."

And if you follow the definition of trivial, we have, among the other requirements of a trivial default constructor:

"T has no non-static members with default initializers. (since C++11)"

If you remove the default initializers from the class members, your class should be good. The reason that earlier compilers did not complain is that they did not have the conformity check.
71,681,003
71,682,298
C++ map fast 1,2,3 integers to hardcoded chars?
I need to map the int values 1, 2, 3 to the chars 'C', 'M', 'A'. What's the fastest way (this will be called hundreds of times per second, 24/7)? A macro, or an inline function with a bunch of `?:` operators or ifs, or a switch? Or an array?
A lookup table seems the most obvious approach, as it is also branch-free:

```cpp
constexpr char map(std::size_t i)
{
    constexpr char table[] = "0CMA";
    // if in doubt, add bounds checking for i, but it will cost performance
    return table[i];
}
```

Observe that with optimisation, the lookup table boils down to an integer constant.

Edit: You can shave off an additional instruction if you are less lazy than me and specify the lookup table thusly:

```cpp
constexpr char table[] = {0, 'C', 'M', 'A'};
```
71,681,204
71,681,334
constructor for a polynomial class that must initialize the coefficients although the degree is unknown
I have this question:

"You will implement a polynomial class that uses a dynamic array to store the polynomial's coefficients. The Polynomial class has two private member variables: a dynamic array to store the coefficients, and the degree of the polynomial, like so:

```cpp
private:
    double *coef; // Pointer to the dynamic array
    int degree;   // the polynomial degree
```

1. Write the constructors permitting the initialization of simple polynomials in the following way:

```cpp
Polynomial p1(3,1);     // polynomial of degree 1: p1 = 3x + 1
Polynomial p2(1,4,2);   // polynomial of degree 2: p2 = x^2 + 4x + 2
Polynomial p3(1,3,3,1); // polynomial of degree 3: p3 = x^3 + 3x^2 + 3x + 1
```
"

These constructors are being handed the coefficients, but the number of coefficients is chosen by the user. How do I write such constructor(s)? And I can't put a limit on the degree of the polynomial the user wants to enter (if that were the case, I could give default values of zero, so that if the user doesn't give all the coefficients, the rest would be zero). Is the dynamic array member going to help with this problem?
I suggest creating a struct `Term`:

```cpp
struct Term
{
    int coefficient;
    int power;
};
```

A polynomial, by definition, is a container (or sum) of terms:

```cpp
class Polynomial
{
public:
    std::vector<Term> terms;
    Polynomial(int coef1, int constant1);
};
```

In the above class, the constructor will create two terms:

```cpp
Polynomial::Polynomial(int coef1, int constant1)
{
    Term t1;
    Term c;
    t1.coefficient = coef1;
    t1.power = 1;
    c.coefficient = constant1;
    c.power = 0;
    terms.push_back(c);
    terms.push_back(t1);
}
```

The next constructor, in your requirements, creates 3 terms:

```cpp
Polynomial::Polynomial(int coef1, int coef2, int constant1)
{
    Term t1 = {coef1, 2};
    Term t2 = {coef2, 1};
    Term constant_term = {constant1, 0};
    terms.push_back(constant_term);
    terms.push_back(t2);
    terms.push_back(t1);
}
```

One of the theorems of addition is that the terms can be in any order. You can change the order in which you append them to the container so that you print them in the common order (highest exponent term first).

Array of Coefficients

In the requirements, there is `double * coef`, which is supposed to be an array of coefficients (one for each term). Here's one example of a constructor:

```cpp
Polynomial::Polynomial(double coef1, double constant)
{
    degree = 1;
    coef = new double[2]; // 2 terms.
    coef[0] = coef1;
    coef[1] = constant;
}
```

The other constructors are similar to the above one. Remember, your destructor should contain `delete[] coef;`.
71,681,274
71,681,632
Does GCC 7.3 ignore the [[nodiscard]] attribute for reference-returning member functions?
I've got the following code utilizing the `[[nodiscard]]` attribute of C++17:

```cpp
class SomeClass
{
public:
    /** Methods **/
    [[nodiscard]] int getValue() { return n; }
    [[nodiscard]] int &getRef() { return n; }
    [[nodiscard]] int *getPtr() { return &n; }

private:
    /** Members **/
    int n{5};
};

int main()
{
    SomeClass object;
    object.getValue();
    object.getRef();
    object.getPtr();
    return 0;
}
```

When I compile it with GCC 7.3, I get two warnings stating that the return value of two functions is ignored. The two functions detected by the compiler are the ones that don't return a reference: `getValue()` and `getPtr()`. On the other hand, when compiled with GCC 8.1 and later versions, `getRef()` also causes a warning. The C++ support table provided by GCC shows that the `[[nodiscard]]` attribute is fully supported as of version 7. Its white paper also states:

"Appearance of a [[nodiscard]] call as a potentially-evaluated discarded-value expression is discouraged unless explicitly cast to void."

So, is it a bug, or am I missing something?
Yes, it is a bug. It was fixed in GCC 8 as you have already realized. Bug report: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80896
71,681,431
71,681,598
c++ standard_layout for ref types
I'm trying to understand why standard_layout does not apply to ref types? #include <type_traits> struct X { int y; }; static_assert(std::is_standard_layout_v<X>); static_assert(std::is_standard_layout_v<X&>); // does not compile
As you've found, is_standard_layout only works with an object type, not a reference. As such, if you want either a type or a reference to a type, you could use: static_assert(std::is_standard_layout_v<std::remove_reference_t<X>>); remove_reference_t will yield the referred-to type for a reference, or the type itself if what you pass isn't a reference (note: for older compilers, you can use std::remove_reference<T>::type).
71,681,457
71,681,529
accessing struct members using pointer to pointer
I have a program like below, when I try to access a struct member using pointer to pointer it says the expression must have a pointer to class type. Please tell me how can I access data element of the struct object using a pointer to pointer #include "stdafx.h" #include<iostream> struct Node{ int data; Node* next; }; void addnode(Node** node, int val) { if (node == NULL){ *node = new Node(); *node->data = val; } } int _tmain(int argc, _TCHAR* argv[]) { Node* node; addnode(&node, 10); return 0; }
In *node->data = val; there are 3 operators involved: operator* to dereference, operator-> to access a member, and operator= to do the assignment. In which order will these happen? According to operator precedence, operator-> (precedence 2) binds tighter than operator* (precedence 3), which binds tighter than operator= (precedence 16). So node->data will happen first and then the result will be dereferenced, meaning you have *(node->data) on the left-hand side. You need to dereference node first, and you can use (...) to override the precedence rules: (*node)->data = val; Also note that your original Node* is uninitialized, and reading it (like you do in if(node == NULL)) will cause the program to have undefined behavior. Initialize it to nullptr: Node* node = nullptr; // here addnode(&node, 10);
71,681,675
72,745,846
How to recompile a single .cc file in a project built previously with CMake tool in Ubuntu 20.04?
I am using the ORB_SLAM3 project (https://github.com/UZ-SLAMLab/ORB_SLAM3) as a baseline for a monocular odometry system. To understand how the ORB_SLAM3 software ingests the EuRoCV dataset, I am modifying some of the initial codes in the mono_euroc.cc file available in /Examples/Monocular folder. However, each time I change the .cc file, I cannot compile just the mono_euroc.cc file by itself, but need to run the ./build.sh command from the parent directory which executes the entire CMake. The process which takes a while to complete. My question is, is there a tool within CMake that would allow me to only change the "mono_euroc.cc" file directly from the "/Examples/Monocular" subdirectory rather than having to constantly invoke the "./build.sh" from the parent directory?
For the time being, I am following this process. I opened two terminal windows, both pointing to the parent directory (i.e. ~/Dev/ORB_SLAM3). Every time I change something in the target file (here it is ./Examples/Monocular/euroc_mono) I execute the ./build.sh command in one and run the file in the other. I can confirm that though the cmake command looks over all the files, it only rebuilds the one that was changed. I guess this method works when one is using the CMake tool to build a C++ project in Linux.
71,681,765
71,682,448
Ambiguous constructor overload on GCC, but not on Clang
Let's say we have the following simple code: #include <iostream> #include <type_traits> struct S { template <typename T> explicit S(T) noexcept requires std::is_signed<T>::value { std::cout << "T\n"; } template<typename T> explicit S(const T&) noexcept { std::cout << "const T&\n"; } }; int main() { S s(4); } This code compiles with Clang and prints 'T', but with GCC we have the following error: error: call of overloaded 'S(int)' is ambiguous My question is: what compiler has a bug, GCC or Clang?
GCC is correct. It's ambiguous. First, we have to look at the implicit conversion sequences. In both cases, the identity conversion sequence is involved: int to int, and int to const int& (the latter is considered the identity conversion sequence thanks to [over.ics.ref]/1). Second, we look at the tie-breaker rules regarding standard conversion sequences, [over.ics.ref]/3.2. None of these tie-breakers apply in this situation. This means that neither implicit conversion sequence is better than the other. We next have to go to the global tie-breakers. These can allow one overload to be considered better than another even when all implicit conversion sequences for one overload are neither better nor worse than the corresponding implicit conversion sequences for the other. The global tie-breakers are defined in [over.match.best.general]/2. According to the fifth bullet point (and none of the others could possibly apply to this situation), one of the overloads could be better than the other if both are template specializations but one template is more specialized than the other. To determine whether this is the case, we refer to [temp.func.order], which refers to [temp.deduct.partial]. We are in the context of a function call, so according to (3.1), "the types used are those function parameter types for which the function call has arguments." Then, paragraph 5 strips references, and paragraph 7 strips top-level cv-qualifiers. The result of this is that deduction succeeds in both directions. (That is, even though not every T is a const U&, the deduction succeeds in this direction anyway because const U& gets replaced by U before the actual deduction occurs.) Going back to [temp.func.order], since deduction succeeds in both directions, the final tie-breaker mentioned in paragraph 2 is whether one template is more constrained than the other. For that, we scroll down to paragraph 6. 
The bullet point that applies is (6.2.2), according to which: Otherwise, if the corresponding template-parameters of the template-parameter-lists are not equivalent ([temp.over.link]) or if the function parameters that positionally correspond between the two templates are not of the same type, neither template is more specialized than the other. Note that in this case, the stripping of references and cv-qualifiers doesn't apply, because that is only done as part of deduction, and we're not doing deduction anymore, so the function parameters types that positionally correspond are T and const T&, which are not the same. Therefore, neither template is more specialized than the other, meaning that the final tie-breaker has failed to prefer one overload over the other.
71,683,269
71,683,425
Pattern matching with variadic templates and default argument
I'm trying to add a default "hidden" setting into a templated class: template<bool DebugMode=false, typename... Args> struct A { A() {}; }; int main() { A<double, double> a; } which fails to compile with g++ 8.3.1 and C++17: error: type/value mismatch at argument 1 in template parameter list for ‘template<bool DebugMode, class ... Args> struct A’ note: expected a constant of type ‘bool’, got ‘double’ Yet I don't understand why g++ can't do any pattern matching in template arguments. Will it be fixed in a newer C++ version?
It's basically the same as with default function arguments: you can only omit parameters from the right. And I don't expect this to change, also because what you want to do can be achieved by adding a layer of indirection: template<bool DebugMode=false> struct Wrap { template <typename ...T> struct A {}; }; template <typename...T> using A = Wrap<>::A<T...>; int main() { A<double, double> a; } Alternatively: template <bool DebugMode=false,typename ...T> struct A_impl {}; template <typename...T> using A = A_impl<false,T...>; Though here the default false isn't really usable; for the using alias you still have to specify it.
71,684,134
71,684,178
Why is this partial template specialization failing?
Here is my code. #include <iostream> template<class> struct IsInteger; template<class> struct IsInteger { using value = std::false_type; }; template<> struct IsInteger<int> { using value = std::true_type; }; int main() { std::cout << std::boolalpha << IsInteger<5>::value::value << '\n'; } Above code results in an error saying Source.cpp(9,36): error C2974: 'IsInteger': invalid template argument for '<unnamed-symbol>', type expected Source.cpp(9,50): error C2955: 'IsInteger': use of class template requires template argument list I don't understand why the compiler doesn't pick template<> struct IsInteger<int> { using value = std::true_type; }; in this case. Why does it result in an error?
You need to use your trait as IsInteger<int> instead of IsInteger<5>. Also, the idiomatic way to use std::true_type and std::false_type in cases like this is to inherit from them, instead of aliasing them as value: template<class> struct IsInteger : std::false_type {}; template<> struct IsInteger<int> : std::true_type {}; int main() { std::cout << std::boolalpha << IsInteger<int>::value << '\n'; }
71,684,205
71,684,414
C++ Improve time complexity
I would like to ask for tips on how to improve the time complexity of the program. I can't change the interface (function headers) For example, if I do sort before find (), will it have any effect? Or are there any alternatives to my code. Thank you for all the advice In the link to the whole program, here is a part of the code https://onecompiler.com/cpp/3xxpa4w9q bool Company::operator == (Company cmpx) const { return ( ( (strcasecmp(addr.c_str(), cmpx.addr.c_str()) == 0) && (strcasecmp(name.c_str(), cmpx.name.c_str()) == 0) ) || (id == cmpx.id) ); } void CVATRegister::sortI (vector<unsigned int> &TotalInvoice) const { sort(TotalInvoice.begin(), TotalInvoice.end(), greater<unsigned int>()); } bool CVATRegister::cancelCompany ( const string &name, const string &addr ) { Company cmp(name, addr, "-1"); auto itr = find(DCompany.begin(), DCompany.end(), cmp); if(itr != DCompany.end()) { DCompany.erase(itr); return true; } return false; } bool CVATRegister::newCompany ( const string &name, const string &addr, const string &taxID ) { Company cmp(name, addr, taxID); if ( find(DCompany.begin(), DCompany.end(), cmp) == DCompany.end() ) { DCompany.push_back(cmp); return true; } return false; } bool CVATRegister::invoice ( const string &taxID, unsigned int amount ) { Company cmp("", "", taxID); auto itr = find(DCompany.begin(), DCompany.end(), cmp); if(itr != DCompany.end()) { TotalInvoice.push_back(amount); DCompany[distance(DCompany.begin(), itr)].saveInvoice(amount); return true; } return false; } bool CVATRegister::audit ( const string &name, const string &addr, unsigned int &sumIncome ) const { Company cmp(name, addr,"-1"); auto itr = find(DCompany.begin(), DCompany.end(), cmp); if(itr != DCompany.end()) { sumIncome = DCompany[distance(DCompany.begin(), itr)].getTotalIncome(); return true; } return false; } void CVATRegister::sortC (vector<Company> &c) const { sort(c.begin(), c.end()); } bool CVATRegister::firstCompany ( string &name, string &addr ) const { 
vector<Company> tmp = DCompany; sortC(tmp); if( tmp.size() > 0 ) { name = tmp[0].getName(); addr = tmp[0].getAddr(); return true; } return false; }
You are storing companies in a vector. In multiple methods, the vector is sorted and then searched. Some options:
- do not sort the vector again if it is already sorted
- as pointed out, use a binary search instead of std::find
- or use a hash-based container, e.g. std::unordered_set, to store the companies; this should make most operations O(1) on average
- or use multiple containers, accessing/updating each as needed
- or write your own :)
71,684,499
71,684,656
I cant read a binary file with C-STYLE. Problem with strings c++
For some reason, I can't read a file that contains "string"s with C-style. If I use an array of characters, then I can do it. But I want to do strings and I would like to know how to do it. When I print the b."x attribute" it shows random characters. And yes, I know I should be using c++ files. But this is purely for an educational purpose. Code: struct Boleta { string name; string surename; string legajo //156.455-6; int cod_materia; string date // 2022/10/26; }; int main() { Boleta boleta; FILE * f = fopen("DIAFINALES.DAT", "wb"); if(!f) { cout<<"Error al abrir el archivo 'DIAFINALES.DAT'"<<endl; return 1; } while(true) { cout<<"Name: ", cin>>boleta.name; cout<<"Surname: ", cin>>boleta.surename; if(boleta.name == "NULO" && boleta.surename == "NULO") break; cout<<"Legajo: ", cin>>boleta.legajo; cout<<"Exam date: ",cin>>boleta.date; fwrite(&boleta, sizeof(boleta), 1, f); } fclose(f); FILE * f1 = fopen("DIAFINALES.DAT", "rb"); if(!f1) { cout<<"Error al abrir el archivo 'DIAFINALES.DAT'"<<endl; return 1; } Boleta b; while(fread(&b, sizeof(b),1,f1)) { cout<<"************************"<<b.legajo<<"******************************"<<endl; cout<<"EXAM DATE: "<<b.date<<endl; cout<<"Name and surname: "<<b.name<<" "<<b.surename<<endl; cout<<"Code of subject: "<<b.cod_materia<<endl; } fclose(f1); return 0; }
You try to write 'boleta' struct Boleta { string name; string surename; string legajo; // 156.455-6 int cod_materia; string date; // 2022/10/26 }; to a file like this fwrite(&boleta, sizeof(boleta), 1, f); and this will not work. std::string holds a pointer to the actual string data; the characters themselves are not stored inside the struct. So first you need to decide on the format of your binary file: what does each record in it look like? I suggest you use fixed-size strings and the cod_materia as a 4-byte int on the end: name (20 bytes) | surname (20) | legajo (10) | date (10) | cod_materia (4). There are several ways to write this; I would do struct Bol_iobuf { char name[20]; char surname[20]; char legajo[10]; char date[10]; int cod_materia; }; Now you need to marshal a Boleta struct into this: Boleta b; // loaded with data Bol_iobuf buff; strcpy(buff.name, b.name.c_str()); strcpy(buff.surname, b.surename.c_str()); strcpy(buff.legajo, b.legajo.c_str()); strcpy(buff.date, b.date.c_str()); buff.cod_materia = b.cod_materia; Now buff has all the bytes for one row and you can write it: fwrite(&buff, sizeof(Bol_iobuf), 1, f); Reading is the same but in reverse: read into a Bol_iobuf, then marshal it field by field into a Boleta instance. Note that the marshalling code for writing does not check that the strings fit in their target char arrays (20 and 10 bytes). You could use strncpy to truncate them, or you can have guard code in your input functions to ensure you never have names too long.
71,684,608
71,684,792
How to properly print enum type
I have this class code in c++: #include <iostream> #include <cstring> using namespace std; enum tip{txt, pdf, exe }; class File{ private: char *imeDatoteka{nullptr}; tip t; char *imeSopstvenik{nullptr}; int goleminaFile = 0; public: File(){} File(char *i, char *imeS, int golemina, tip tip){ t = tip; goleminaFile = golemina; imeDatoteka = new char[strlen(i)+1]; imeSopstvenik = new char[strlen(imeS)+1]; strcpy(imeDatoteka, i); strcpy(imeSopstvenik, imeS); } File(const File &f){ t = f.t; goleminaFile = f.goleminaFile; imeDatoteka = new char[strlen(f.imeDatoteka)+1]; imeSopstvenik = new char[strlen(f.imeSopstvenik)+1]; strcpy(imeDatoteka, f.imeDatoteka); strcpy(imeSopstvenik, f.imeSopstvenik); } ~File(){ delete [] imeDatoteka; delete [] imeSopstvenik; } File &operator=(const File &f){ if(this != &f){ t = f.t; goleminaFile = f.goleminaFile; delete [] imeDatoteka; delete [] imeSopstvenik; imeDatoteka = new char[strlen(f.imeDatoteka)+1]; imeSopstvenik = new char[strlen(f.imeSopstvenik)+1]; strcpy(imeDatoteka, f.imeDatoteka); strcpy(imeSopstvenik, f.imeSopstvenik); } return *this; } void print(){ cout<<"File name: "<<imeDatoteka<<"."<<(tip) t<<endl; cout<<"File owner: "<<imeSopstvenik<<endl; cout<<"File size: "<<goleminaFile<<endl; } bool equals(const File & that){ if((strcmp(imeDatoteka, that.imeDatoteka) == 0) && (t == that.t) && (strcmp(imeSopstvenik, that.imeSopstvenik) == 0)) return true; else return false; } bool equalsType(const File & that){ if((strcmp(imeDatoteka, that.imeDatoteka) == 0) && (t == that.t)) return true; else return false; } }; And i have a problem. So i have an private member 'tip' that is enum type. The problem is it doesn't print it correctly(pdf,txt or exe), it just prints 0,1 or 2. I've seen some people try to cast it in the cout but it doesn't work for me. Any help?
You could create a lookup table using map and string: #include <map> #include <string> std::map<tip, std::string> tip_to_string = { {txt, "txt"}, {pdf, "pdf"}, {exe, "exe"} }; And then when you want to print some tip t: std::cout << tip_to_string.at(t) << std::endl; Or you could do a function: std::string tip_to_string(tip t) { switch(t) { case txt: return "txt"; case pdf: return "pdf"; case exe: return "exe"; default: return "You forgot to add this tip to the tip_to_string function."; } } And then when you want to print some tip t: std::cout << tip_to_string(t) << std::endl; I don't think there's a way to just print an enum as a string, but somebody who knows more about C++ could probably answer that. This might be a helpful read: https://en.cppreference.com/w/cpp/language/enum
71,684,938
71,693,525
Two versions of a code based on a #define
I'm working with a microcontroller and writing in C/C++ and I want to separate stuff that's supposed to work only in the transmissor and stuff that will work for the receiver. For this I thought about having a #define DEVICE 0 being 0 for transmissor and 1 for receiver. How would I use this define to cancel other defines? I have multiple defines that should only work on one of the devices.
You have the following directives: #if (DEVICE == 0) ... #else ... #endif to make sure the two code paths are exclusive. That said, I'd recommend doing it dynamically: have a global boolean attribute/function parameter and execute code according to its value. The unused code will be optimized out on a given target (even with the lowest optimization setting), and one compilation is enough to check for compilation errors, instead of two compilations with a define change. Bear in mind you will still need a define to set the boolean value, and you still have to test every case, but that can be done automatically with dynamic code analysis, which is not possible with a pure #define implementation.
71,685,067
71,701,798
Class Inside a Class in C++: Is that heavy?
Suppose I do something like: class A { public: class B { public: void SomeFunction1() const; using atype = A; }; using btype = B; void SomeFunction2() const; private: B b; }; Then I create an instance of class A and copy it: A a; A acopy = a; Does that make the class A heavy, what I mean is, what happens in the background? I would hope that C++ doesn't really literally "consider" the definition of class B everytime I declare an instance of class A, I thin what happens in the background is that class B will be treated as a new definition under a namespace named A, so A::B is defined. My question is does defining a class inside a class (B inside A) create any overhead when declaring or copying the class A, or it's treated exactly as if B is defined outside? Thank you everyone :)
Both possibilities (B as nested class and B as external class) will yield exactly the same performance. In fact, the compiler will generate the same assembly code in both cases. B as external class: https://godbolt.org/z/7voYGd6Mf B as nested class: https://godbolt.org/z/731dPdrqo B is a member of A. Hence it resides in A's memory layout and B's constructor will be called every time you construct/copy A. The introduced overhead depends on B's implementation, but it will be identical in both cases (B as nested or external class).
71,685,424
71,685,535
(C++) How do I display an error when more than one "." is used in a calcualtor?
I am making a calculator using command line arguments, and one of the problems I am having is that I can't find a way to display an error to inputs that have more than one ".". 3.33 can be accepted, but 3.3.3.2 cannot because its an invalid number. int main(int argc, char *argv[]) { if (argc == 1) { cout << "E\n"; return 0; } if (argc <= 2) { cout << "P\n"; return 0; } if (argc > 4) { cout << "P\n"; return 0; } if (argc == 3) { cout << endl << (atof(argv[1]) + atof(argv[2])) << endl; return 0; } else if (argc == 4) { // Addition operation if (argv[3][0] == 'a') cout << endl << (atoi(argv[1]) + atoi(argv[2])) << endl; // Subtraction operation else if (argv[3][0] == 's') cout << endl << (atof(argv[1]) - atof(argv[2])) << endl; // Multiplication operation else if (argv[3][0] == 'm') cout << endl << (atof(argv[1]) * atof(argv[2])) << endl; // Division operation else if (argv[3][0] == 'd') if (argv[2][0] == '0') { cout << endl << "error"; return 0; } else { cout << endl << (atof(argv[1]) / atof(argv[2])) << endl; } // Exponential operation else if (argv[3][0] == 'p') if (argv[2][0] > -1.00 && argv[2][0] < 1.00) { cout << endl << "Y"; return 0; } else if (argv[1][0] == '-') { cout << endl << "Y"; return 0; } else cout << endl << pow(atof(argv[1]), atof(argv[2])) << endl; else cout << endl << "V" << endl; // Any other operator } }
One way for checking the validity of a string is to use regular expressions, for example, you can use this. You can write something like this to extract your operands: float op1, op2; std::string p1 = argv[1]; std::string p2 = argv[2]; std::regex pattern("[+-]?([0-9]*[.])?[0-9]+"); if (std::regex_match(p1, pattern) && std::regex_match(p2, pattern)) { op1 = atof(argv[1]); op2 = atof(argv[2]); // ... } else { cout << "Error!" << std::endl; } Alternatively, you could use the return value of atof to check if an error happened: Return value Floating point value corresponding to the contents of str on success. If the converted value falls out of range of corresponding return type, range error occurs and HUGE_VAL, HUGE_VALF or HUGE_VALL is returned. If no conversion can be performed, ​0​ is returned and *str_end is set to str.
71,685,459
71,685,499
I'm trying to build a dice program using C++ it isn't displaying my return
The code is printing the greeting and all the messages except the number. I need to see what is being generated by my random number generator. #include <iostream> #include <cstdlib> #include <ctime> void greeting(int pnum){ if(pnum == 1) { std::cout << "Please press \"ENTER\" to roll the die"; } else { std::cout << "Please press \"ENTER\" to roll the die AGAIN"; } std::cin.ignore(); } int dieroll(void){ int ran; srand(time(NULL)); ran = rand()%6+1; std::cout << "You have rolled :" << std::endl; return ran; } int main(void){ int counter, firstdie, ran; char firststart; do { greeting(1); firstdie = dieroll(); } while (ran > 0); { return ran; } std::cin.ignore(); return 0; } I'm a beginner so i'm unsure where to start trouble shooting. I'm looking into making local variables.
In main you do this: while (ran > 0); { return ran; } First, you never give 'ran' a value, so it's either > 0, in which case you exit with a random completion code, or 'ran' is <= 0, in which case you exit with a value of 0. It's not clear what you are trying to do here, but either way your program terminates immediately. To be clear, a return in main will cause your program to stop immediately. Then here: int dieroll(void){ int ran; srand(time(NULL)); ran = rand()%6+1; std::cout << "You have rolled :" << std::endl; return ran; } you intend to print 'ran' but in fact do not; you need std::cout << "You have rolled :" << ran << std::endl; I wonder if you think that the 'ran' here is the same as the 'ran' in main; it is not, there is no relationship between them.
71,685,465
71,685,497
How can I initialize a custom Array class by aggregate initialization?
I have my own basic version of std::array Here's how it looks: template<typename T, size_t N> class Array { public: Array()=default; T& operator[](size_t n) {return m_data[n];} size_t Size() {return N;} private: T m_data[N]; }; I can initialize it this way: Array<int, 3> arr; arr[0] = 11; arr[1] = 22; arr[2] = 33; But what if I'd like to initialize it in aggregate, like this: Array<int, 3> arr = { 1, 2, 3 }; How could I go about doing this?
In order for aggregate initialization to work you need to make the class an aggregate class. To achieve this you need to make the array member public. Depending on the standard version you may also need to remove the defaulted default constructor Array()=default; and let it be defined implicitly instead. Don't declare any constructors at all. The class will then be an aggregate class and the initialization will then work as shown performing aggregate initialization. This is also how std::array works.
71,685,930
71,685,992
Data race about map::operator[]
Is there any potential problem in this code snippet? #include <mutex> #include <map> #include <vector> #include <thread> constexpr int FOO_NUM = 5; int main() { std::map<int, std::mutex> mp; std::vector<std::thread> vec; for(int i=0; i<2; i++) { vec.push_back(std::thread([&mp](){ std::lock_guard<std::mutex> lk(mp[FOO_NUM]); //I think there is some potential problem here, am I right? //do something })); } for(auto& thread:vec) { thread.join(); } As per the document,which says that: Inserts value_type(key, T()) if the key does not exist. This function is equivalent to return insert(std::make_pair(key, T())).first->second; I think there is a potential problem in the aforementioned code snippet. You see this may happen: 1.the first thread created a mutex, and is locking the mutex. 2.the second thread created a new one, and the mutex created in the first thread needs to be destroyed while it's still used in the first thread.
Yes, there is a data race, but it is even more fundamental. None of the containers in the C++ library are thread-safe, in any way. None of their operators are thread-safe. mp[FOO_NUM] In the shown code multiple execution threads invoke the map's [] operator. The operator itself is not thread-safe; what's contained in the map is immaterial. the second thread created a new one, and the mutex created in the first thread needs to be destroyed while it's still used in the first thread. The only thing that destroys any mutex in the shown code is the map's destructor, when the map itself gets destroyed on returning from main(). A std::lock_guard<std::mutex> does not destroy its mutex; when the std::lock_guard gets destroyed it only releases the mutex, of course. An execution thread's invocation of the map's [] operator may default-construct a new std::mutex, but there's nothing that would destroy it when the execution thread gets joined. A default-constructed value in a std::map, by its [] operator, gets destroyed only when something explicitly destroys it. And it's the [] operator itself that's not thread-safe; it has nothing to do with a mutex's construction or destruction.
71,686,081
71,686,131
Why isn't constexpr guaranteed to run during compilation?
Why isn't constexpr guaranteed to run during compilation? Additionally, why was consteval added instead of changing constexpr to guarantee a compile-time execution?
constexpr already guarantees compile-time evaluation when used on a variable. If used on a function it is not supposed to enforce compile-time evaluation since you want most functions to be usable at both compile-time and runtime. consteval allows forcing functions to not be usable at runtime. But that is not all that common of a requirement.
71,686,401
71,687,139
C++: How can I sort a string vector of numbers in numerical order?
In my program, I have an empty string vector that gets filled in via user input. The program is meant to take numbers from user input, then sort those numbers in order from smallest to largest (the data type is string to make it easier to check for undesired inputs, such as whitespaces, letters, punctuation marks, etc.). As it is, the program sorts the numbers according to starting digit instead of size. How can I change the program to sort the way I want it to? #include <iostream> #include <vector> #include <algorithm> #include <limits> #include <string> #include <sstream> using namespace std; int main() { vector<string> vect; string input; int intInput; int entries = 0; int i = 0; int x = 0; while (x < 1) { i++; cout << "Please input an integer. When finished providing numbers to organize, input any character that isn't an integer:\n"; vect.resize(i+1); getline(cin, input); cout << endl; stringstream ss(input); if (ss >> intInput) { if (ss.eof()) { vect.push_back(input); entries++; } else { cout << "Error: Invalid input.\n\n"; } } else if (entries < 1) { cout << "Error: Invalid input.\n\n"; i = -1; continue; } else if (entries >= 1) { break; } } cout << "All done? Organizing numbers!\n"; sort(vect.begin(), vect.end()); for (int j = 0; j < vect.size(); j++) { cout << vect[j] << endl; } return 0; } I've tried various methods to convert string data to int data, such as lexical cast & stoi(), but it didn't work, so I would like to know if there's another way, such as sorting the data without changing the data type.
You can specify a comparision function that returns whether the first argument is "less than" the second argument to the std::sort function. While testing, I found that some empty strings, which make std::stoi throw std::invalid_argument, are pushed into the vector (it looks like by vect.resize(i+1);). Therefore, I added some code to detect the error and evaluate the invalid strings as smaller than any valid integers. sort(vect.begin(), vect.end(), [](const string& a, const string& b) { bool aError = false, bError = false; int aInt = 0, bInt = 0; try { aInt = stoi(a); } catch (invalid_argument&) { aError = true; } try { bInt = stoi(b); } catch (invalid_argument&) { bError = true; } if (aError && !bError) return true; if (bError) return false; return aInt < bInt; }); #include <stdexcept> should be added to use std::invalid_argument. References: std::sort - cppreference.com std::stoi, std::stol, std::stoll - cppreference.com std::invalid_argument - cppreference.com
71,686,910
71,687,864
How to appropriately read in user input without manually inputting it in?
I have been trying to figure out how to read in user input without having to manually type every example out. I am building a stone game that is supposed to familiarize me with circularly linked lists. I have to manually type out examples like this to achieve the output. Is there another approach to replace this, and read example texts?** Here is the code I want to include the implementation in: #include <iostream> int main() { int nodes, moves = 0; std::cin >> nodes; for (int index = 0; index < nodes; index++) { link_one.add(1); } link_one.print(); }
A useful workaround to obnoxiously long inputs is temporarily using a text file with the std::getline function. Just replace #include <iostream> with #include <fstream> until the program is done. you can find a pretty good explanation of that in this SO answer. Edit: you'll also need to change your iostream declaration to an fstream one
71,687,088
71,687,353
C++ template recursively print a vector of vector using template
#include <any> #include <iostream> #include <string> #include <sstream> #include <vector> using namespace std; template<class T> struct is_vector : std::false_type {}; template<class T> inline constexpr bool is_vector_v = is_vector<T>::value; template <typename T> string VectorToString(const vector<T> &vec) { string res = "["; int n = vec.size(); for (size_t i=0; i<n; i++) { if constexpr(is_vector_v<T>) res += VectorToString(vec[i]); else res += std::to_string(vec[i]); if (i < n-1) res += ", "; } res += "]"; return res; } int main( int argc, char** argv ) { vector<int> a = {1,2,3}; cout << VectorToString(a) << "\n"; vector<vector<int>> b = {{1,2,3}, {4,5,6}, {7,8,9}}; //cout << VectorToString(b); vector<vector<vector<double>>> c = {{{1,2,3}, {4,5,6}}, {{7,8,9}}}; //cout << VectorToString(c); return 0; } I'm trying to make a print function that works with any vector type, like Python. I wish to use template if possible, but not sure how. What should struct is_vector looks like to do this? If a template solution is not possible, then I'd like to see any solution possible.
What should struct is_vector looks like to do this? It looks like what template partial specialization looks like template<class T> struct is_vector : std::false_type {}; template<class T, class Alloc> struct is_vector<std::vector<T, Alloc>> : std::true_type {}; Demo
71,687,711
71,687,954
C++: Implementation of virtual destructor necessary when using inherited structs with only properties?
I know that I need to define a virtual destructor (best option even if my class is final). In my case, I am using C-like structures (no functions, no defaults, just plain members) and use inheritance to compose a new structure. I then store a pointer to the base class in std::unique_ptr and let RAII do the rest. I am now curious if there is a need to also explicitly add a virtual destructor to avoid memory problems. An example might be: #include <chrono> #include <memory> struct A { std::chrono::milliseconds duration = std::chrono::milliseconds{-1}; int count = 0; }; struct B { int mode = 0; }; struct C : public A, public B { int foo = 1; }; int main() { std::unique_ptr<A> base = std::make_unique<C>(); base.reset(); // I expect here that A,B and C are destructed properly return 0; }
It doesn't matter whether the class is polymorphic or whether it is trivial. If delete is called on a pointer of different type (up to cv-qualification) than the most-derived type of the object it points to and the pointed-to-type doesn't have a virtual destructor, then the behavior is undefined. One obvious reason for this rule is that the base class subobject might not be located at the same address as the most-derived object. So the compiler would have no way of knowing what the offset to pass to the deallocation function needs to be. One could maybe argue that a standard-layout class with trivial destructor would not need to follow this rule if a pointer to the first base class subobject is used, but the standard doesn't make that exception and your class C isn't standard-layout anyway. See CWG issue 1259 closed as not-a-defect. The size-aware global deallocation functions mentioned in the issue were also introduced with C++14, which is another reason that using a base class pointer may cause you problems in practice, even if the destructor is trivial and the address without offset.
71,688,949
71,689,061
Standard way of implementing C++ multi-threading for collecting data streams and processing
I'm new to C++ development. I'm trying to run infinite functions that are independent of each other. The problem statement is similar to this: The way I'm trying to implement this is #include <iostream> #include <cstdlib> #include <pthread.h> #include <unistd.h> #include <mutex> int g_i = 0; std::mutex g_i_mutex; // protects g_i // increment g_i by 1 void increment_itr() { const std::lock_guard<std::mutex> lock(g_i_mutex); g_i += 1; } void *fun(void *s) { std::string str; str = (char *)s; std::cout << str << " start\n"; while (1) { std::cout << str << " " << g_i << "\n"; if(g_i > 1000) break; increment_itr(); } pthread_exit(NULL); std::cout << str << " end\n"; } void *checker(void *s) { while (1) { if(g_i > 1000) { std::cout<<"**********************\n"; std::cout << "checker: g_i == 100\n"; std::cout<<"**********************\n"; pthread_exit(NULL); } } } int main() { int itr = 0; pthread_t threads[3]; pthread_attr_t attr; void *status; // Initialize and set thread joinable pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE); int rc1 = pthread_create(&threads[0], &attr, fun, (void *)&"foo"); int rc2 = pthread_create(&threads[1], &attr, fun, (void *)&"bar"); int rc3 = pthread_create(&threads[2], &attr, checker, (void *)&"checker"); if (rc1 || rc2 || rc3) { std::cout << "Error:unable to create thread," << rc1 << rc2 << rc3 << std::endl; exit(-1); } pthread_attr_destroy(&attr); std::cout << "main func continues\n"; for (int i = 0; i < 3; i++) { rc1 = pthread_join(threads[i], &status); if (rc1) { std::cout << "Error:unable to join," << rc1 << std::endl; exit(-1); } std::cout << "Main: completed thread id :" << i; std::cout << " exiting with status :" << status << std::endl; } std::cout << "main end\n"; return 0; } This works, but I want to know if this implementation is a standard approach or whether it can be done in a better way?
You correctly take a lock inside increment_itr, but your fun function is accessing g_i without acquiring the lock. Change this: void increment_itr() { const std::lock_guard<std::mutex> lock(g_i_mutex); g_i += 1; } To this int increment_itr() { std::lock_guard<std::mutex> lock(g_i_mutex); // the const wasn't actually needed g_i = g_i + 1; return g_i; // return the updated value of g_i } This is not thread safe: if(g_i > 1000) break; // access g_i without acquiring the lock increment_itr(); This is better: if (increment_itr() > 1000) { break; } A similar fix is needed in checker: void *checker(void *s) { while (1) { int i; { std::lock_guard<std::mutex> lock(g_i_mutex); i = g_i; } if(i > 1000) { std::cout<<"**********************\n"; std::cout << "checker: g_i == 100\n"; std::cout<<"**********************\n"; break; } } return NULL; } As to your design question. Here's the fundamental issue. You're proposing a dedicated thread that continuously takes a lock and does some sort of checking on a data structure. And if a certain condition is met, it would do some additional processing such as writing to a database. The thread spinning in an infinite loop would be wasteful if nothing in the data structure (the two maps) has changed. Instead, you only want your integrity check to run when something changes. You can use a condition variable to have the checker thread pause until something actually changes. Here's a better design. uint64_t g_data_version = 0; std::condition_variable g_cv; void *fun(void *s) { while (true) { << wait for data from the source >> { std::lock_guard<std::mutex> lock(g_i_mutex); // update the data in the map while under a lock // e.g. g_n++; // // increment the data version to signal a new revision has been made g_data_version += 1; } // notify the checker thread that something has changed g_cv.notify_all(); } } Then your checker function only wakes up when fun signals that something has changed.
void *checker(void *s) { while (1) { // lock the mutex std::unique_lock<std::mutex> lock(g_i_mutex); // do the data comparison check here // now wait for the data version to change uint64_t version = g_data_version; while (version == g_data_version) { // loop to guard against spurious wake-ups g_cv.wait(lock); // this atomically unlocks the mutex and waits for a notify() call on another thread to happen } } }
71,689,137
71,689,236
What is the best way to drop last element using c++20 ranges
Is there any better way to drop last element in container using c++20 ranges than reverse it twice? #include <iostream> #include <vector> #include <ranges> int main() { std::vector<int> foo{1, 2, 3, 4, 5, 6}; for (const auto& d: foo | std::ranges::views::reverse | std::ranges::views::drop(1) | std::ranges::views::reverse) { std::cout << d << std::endl; } }
What you need is views::drop_last which comes from p2214 and has a priority of Tier 2. As the paper says: We'll go through the other potential range adapters in this family and discuss how they could be implemented in terms of existing adapters: take_last(N) and drop_last(N). views::take_last(N) is equivalent to views::reverse | views::take(N) | views::reverse. But this is somewhat expensive, especially for non-common views. For random-access, sized ranges, we'd probably want r | views::take_last(N) to evaluate as r | views::drop(r.size() - N), and that desire is really the crux of this whole question — is the equivalent version good enough or should we want to do it right? Since vector is a random-access, sized range, you can just do for (const auto& d: foo | std::views::take(foo.size() - 1)) { std::cout << d << std::endl; }
71,689,408
71,689,471
showing multiple windows in opencv
with the following function, I am trying to plot the calculated histogram of a 3 channel photo but when I use the function multiple times it only shows the windows of the last time that it was called and does not show previous ones. How can I change it to show all windows? void showHistogram(std::vector<cv::Mat>& hists, vector<string> titles) { // Min/Max computation double hmax[3] = { 0,0,0 }; double min; cv::minMaxLoc(hists[0], &min, &hmax[0]); cv::minMaxLoc(hists[1], &min, &hmax[1]); cv::minMaxLoc(hists[2], &min, &hmax[2]); std::string wname[3] = { "blue", "green", "red" }; cv::Scalar colors[3] = { cv::Scalar(255,0,0), cv::Scalar(0,255,0), cv::Scalar(0,0,255) }; std::vector<cv::Mat> canvas(hists.size()); // Display each histogram in a canvas for (int i = 0, end = hists.size(); i < end; i++) { canvas[i] = cv::Mat::ones(125, hists[0].rows, CV_8UC3); for (int j = 0, rows = canvas[i].rows; j < hists[0].rows - 1; j++) { cv::line( canvas[i], cv::Point(j, rows), cv::Point(j, rows - (hists[i].at<float>(j) * rows / hmax[i])), hists.size() == 1 ? cv::Scalar(200, 200, 200) : colors[i], 1, 8, 0 ); } if (hists.size() == 3) { namedWindow(titles[i]); cv::imshow(titles[i], canvas[i]); } else { cv::imshow(hists.size() == 1 ? "value" : wname[i], canvas[i]); } } }
Only the last image is shown when the same window names are used, so you should add a unique suffix to the window name to prevent them from being the same. For example: #include <sstream> void showHistogram(std::vector<cv::Mat>& hists, vector<string> titles) { static int windowId = 0; std::stringstream ss; ss << (windowId++); // generate a unique ID for each call // Min/Max computation double hmax[3] = { 0,0,0 }; double min; cv::minMaxLoc(hists[0], &min, &hmax[0]); cv::minMaxLoc(hists[1], &min, &hmax[1]); cv::minMaxLoc(hists[2], &min, &hmax[2]); std::string wname[3] = { "blue", "green", "red" }; cv::Scalar colors[3] = { cv::Scalar(255,0,0), cv::Scalar(0,255,0), cv::Scalar(0,0,255) }; std::vector<cv::Mat> canvas(hists.size()); // Display each histogram in a canvas for (int i = 0, end = hists.size(); i < end; i++) { canvas[i] = cv::Mat::ones(125, hists[0].rows, CV_8UC3); for (int j = 0, rows = canvas[i].rows; j < hists[0].rows - 1; j++) { cv::line( canvas[i], cv::Point(j, rows), cv::Point(j, rows - (hists[i].at<float>(j) * rows / hmax[i])), hists.size() == 1 ? cv::Scalar(200, 200, 200) : colors[i], 1, 8, 0 ); } // add the ID to make the window titles unique if (hists.size() == 3) { namedWindow(titles[i] + ss.str()); cv::imshow(titles[i] + ss.str(), canvas[i]); } else { cv::imshow((hists.size() == 1 ? "value" : wname[i]) + ss.str(), canvas[i]); } } }
71,689,518
71,689,578
Array assignment error in C++, What is wrong with my code?
I am trying this code in CodeBlocks but the result is some random numbers! I appreciate if you check the code and let me know what my mistake is. It should result in multipliers of 25: #include <iostream> using namespace std; main(){ const int array_size = 10; int numbers[array_size]; int counter = 0; while(counter < array_size){ numbers[counter] = 25 * counter; counter++; cout << "number[" << counter << "] = " << numbers[counter] << endl; } } Thanks in advance.
You are incrementing counter after setting the value of numbers[counter]. After that, numbers[counter] (with the new value of counter) is a new uninitialized element. You should move the increment after the printing: #include <iostream> using namespace std; int main(){ const int array_size = 10; int numbers[array_size]; int counter = 0; while(counter < array_size){ numbers[counter] = 25 * counter; cout << "number[" << counter << "] = " << numbers[counter] << endl; counter++; } } Or simply avoid using an array (in this case): #include <iostream> int main(){ const int array_size = 10; int counter = 0; while(counter < array_size){ std::cout << "number[" << counter << "] = " << 25 * counter << std::endl; counter++; } } Another way is using two loops -- one for initialization and one for printing: #include <iostream> using namespace std; int main(){ const int array_size = 10; int numbers[array_size]; int counter = 0; while(counter < array_size){ numbers[counter] = 25 * counter; counter++; } counter = 0; while(counter < array_size){ cout << "number[" << counter << "] = " << numbers[counter] << endl; counter++; } }
71,690,353
71,690,553
C++ std::any function that convert std::any of C char-array to string
#include <iostream> #include <any> #include <string> #include <vector> #include <map> using namespace std; string AnyPrint(const std::any &value) { cout << size_t(&value) << ", " << value.type().name() << " "; if (auto x = std::any_cast<int>(&value)) { return "int(" + std::to_string(*x) + ")"; } if (auto x = std::any_cast<float>(&value)) { return "float(" + std::to_string(*x) + ")"; } if (auto x = std::any_cast<double>(&value)) { return "double(" + std::to_string(*x) + ")"; } if (auto x = std::any_cast<string>(&value)) { return "string(\"" + (*x) + "\")"; } if (auto x = std::any_cast<char*>(&value)) { return string(*x); } } int main() { int a = 1; float b = 2; double c = 3; string d = "4"; char *e = "555"; cout << AnyPrint(a) << "\n"; cout << AnyPrint(b) << "\n"; cout << AnyPrint(c) << "\n"; cout << AnyPrint(d) << "\n"; cout << AnyPrint("555") << "\n"; cout << AnyPrint(e) << "\n"; return 0; } I'm trying to make a function that converts a std::any object to string, given that the list of possible types is hard-coded. However, there's a problem when the user passes a raw string like AnyPrint("555"). I use the method from Checking std::any's type without RTTI. I got the following output when I run the program: 140722480985696, i int(1) 140722480985696, f float(2.000000) 140722480985696, d double(3.000000) 140722480985696, NSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE string("4") 140722480985696, PKc string("4") 140722480985696, Pc 555 How can I handle a std::any of a raw string? I don't want to write AnyPrint("555"s) unless it's the only way. Edit: I use this to run the example https://www.onlinegdb.com/online_c++_compiler
Type of "555" is const char[4] which might decays to const char*. You handle char*, but not const char*. Handling const char* fixes your issue: std::string AnyPrint(const std::any &value) { std::cout << size_t(&value) << ", " << value.type().name() << " "; if (auto x = std::any_cast<int>(&value)) { return "int(" + std::to_string(*x) + ")"; } if (auto x = std::any_cast<float>(&value)) { return "float(" + std::to_string(*x) + ")"; } if (auto x = std::any_cast<double>(&value)) { return "double(" + std::to_string(*x) + ")"; } if (auto x = std::any_cast<std::string>(&value)) { return "string(\"" + (*x) + "\")"; } if (auto x = std::any_cast<const char*>(&value)) { return *x; } return "other"; } Demo
71,690,714
71,751,904
OpenCV numpy to cv::Mat conversion
I inherited an application with opencv, shiboken and pyside and my first task was to update to qt6, pyside6 and opencv 4.5.5. This has gone well so far, I can import the module and make class instances etc. However I have a crash when passing numpy arrays: I am passing images in the form of numpy arrays through python to opencv and I am using pyopencv_to to convert from the array to cv::Mat. This worked in a previous version of opencv (4.5.3), but with 4.5.5 it seems to be broken. When I try to pass an array through pyopencv_to, I get the exception opencv_ARRAY_API was nullptr. My predecessor solved this by directly calling PyInit_cv2(), which was apparently previously included via a header. But I cannot find any header in the git under the tag 4.5.3 that defines this function. Is this a file that is generated? I can see there is a pycompat.hpp, but that does not include the function either. Is there a canonical way to initialize everything so that numpy arrays can be passed properly? Or a tutorial anyone can point me to? My searches have so far not produced any useful hints. Thanks a lot in advance! :)
I finally found a solution. I don't know if this is the correct way of doing it, but it works. I made a header file that contains PyMODINIT_FUNC PyInit_cv2(); as a forward declaration and then copied over everything in the modules/python/src2 directory. I assumed this was already happening in the cv2.cpp file, because there is already exactly that line (in cv2.cpp). But just adding that include works perfectly fine, apparently. Now I can call the init function when my own module is initialized and it seems to properly set all the needed state.
71,690,762
71,692,030
How can I connect two classes (which don't know each other) through a public interface (C++)
I'm currently working on a project where everything is horribly mixed with everything. Every file includes some others etc. I want to focus on separating part of this spaghetti code into a library which has to be completely independent from the rest of the code. The current problem is that some functions FunctionInternal of my library use some functions FunctionExternal declared somewhere else, hence my library is including some other files contained in the project, which does not conform to the requirement "independent from the rest of the code". It goes without saying that I can't move FunctionExternal into my library. My first idea to tackle this problem was to implement a public interface such as the one described below: But I can't get it to work. Is my global pattern a way I could implement it, or is there another way, if possible, to interface two functions without including one file in another, causing an unwanted dependency? How could I abstract my ExternalClass so my library would still be independent of the rest of my code? Edit 1: External.h #include "lib/InterfaceInternal.h" class External : public InterfaceInternal { private: void ExternalFunction() {}; public: virtual void InterfaceInternal_foo() override { ExternalFunction(); }; }; Internal.h #pragma once #include "InterfaceInternal.h" class Internal { // how can i received there the InterfaceInternal_foo overrided in External.h ? }; InterfaceInternal.h #pragma once class InterfaceInternal { public: virtual void InterfaceInternal_foo() = 0; };
You can do it like you suggested: override the internal interface in your external code. Then // how can i received there the InterfaceInternal_foo overrided in External.h ? just pass a pointer/reference to your class External that extends class InterfaceInternal. Of course your class Internal needs to have methods that accept InterfaceInternal*. Or you can just pass the function to your internal interface as an argument. Something along the lines of: class InterfaceInternal { public: void InterfaceInternal_foo(std::function<void()> f); }; or more generic: class InterfaceInternal { public: template <typename F> // + maybe some SFINAE magic, or C++20 concept to make sure it's actually callable void InterfaceInternal_foo(F f); };
71,691,109
71,691,359
Copy constructor and default assignment operator
I have made the following Car class: class Car { private: int id; int* data; public: Car(int id, int data) : id(id) , data(new int(data)){} //Car(const Car& rhs) : id(rhs.id) , data(new int(*rhs.data)){} void print(){std::cout << id << " - " << *data << " - " << data << std::endl;} }; With the following main code: int main() { Car A(1,200); A.print(); Car B=A; B.print(); } When I run this code I get the following output: 1 - 200 - 0x14bdc20 1 - 200 - 0x14bdc20 This is also what I expected as the default assignment operator simply copies the values of id and data. When I comment in the copy constructor and run the same code, I get the following output: 1 - 200 - 0x71bc20 1 - 200 - 0x71c050 Hence, the data pointer of B points to a new address. I do not quite understand why this happens. I thought the default assignment operator still would only copy the values from A and the only way to solve this was to overload the assignment operator. How is it the default assignment operator seems to use the copy constructor in this case?
Let's consider what happens in the 2 cases individually. Also, note that there is a difference between initialization and assignment in C++. In particular, Car B=A; is copy-initialization and not copy-assignment. Case 1 Here we consider the case where there is no user defined copy-constructor. That is, the case where you've commented out the copy constructor. class Car { private: int id; int* data; public: Car(int id, int data) : id(id) , data(new int(data)){} void print(){std::cout << id << " - " << *data << " - " << data << std::endl;} }; In this case, the compiler implicitly synthesizes a copy-constructor. That copy constructor does a memberwise copy of the data members. That is, it simply copies the id and data data members from the passed argument. This explains why you get the mentioned output: they are nothing but a copy of the data members of the passed object. In particular, the data member data is initialized with a copy of the data member data of the passed argument. That is, the data member data of the passed argument as well as that of the current instance point to the same int object. Hence, the output is the same in this case. Case 2 Here we consider the case where there is a user defined copy-constructor. That is, the case where you've commented the copy constructor back in. class Car { private: int id; int* data; public: Car(int id, int data) : id(id) , data(new int(data)){} Car(const Car& rhs) : id(rhs.id) , data(new int(*rhs.data)){} void print(){std::cout << id << " - " << *data << " - " << data << std::endl;} }; In this case, the data member data points to a newly created int object due to the expression data(new int(*rhs.data)). Note that this is different from case 1 because in case 1 the data member data simply gets a copy of the data member data of the passed object. That is, in case 1, both data members point to the same int. While in case 2, the data member data points to a separate int object created due to new int(*rhs.data).
Hence the output is different in this case.
71,691,584
71,691,664
Why are the results performed using Code Runner in VScode different from the results performed in a shell?
I try to learn the function fork(). However, the results produced using Code Runner in VSCode are different from the results produced in a shell. Just like the pictures below show, I used the same commands but got different results. I know that the second output, in the shell, is right, and I would like to know why the first output is printed when using Code Runner. Is something wrong with the plugin? The code is like this. #include <stdio.h> #include <unistd.h> #include <stdlib.h> int main() { printf("Hello World\n"); fork(); fork(); fork(); exit(0); }
It's a buffering issue. When stdout is connected to an actual terminal it will be line buffered. That means the output is actually written to the terminal when the buffer is full, is explicitly flushed, or when you print a newline ('\n'). This is what happens in the second case. In the first case, VSCode will not run it directly in a terminal; instead, stdout will be connected to a pipe which VSCode will then output in its "terminal". When connected to a pipe (or anything else that isn't a direct terminal) then stdout will be fully buffered, where the output is flushed only when the buffer is full or explicitly flushed. In this first case the buffer for stdout will be inherited by the child processes, so each child-process you fork will have its own copy of the existing buffer. And when each of the child-processes exits, their buffer will be flushed and actually written. Since there are a total of eight processes running, you will get the output eight times. From your program to the actual output there might be multiple buffers. But the buffer I'm talking about above is the high-level stdout buffer only. When output to stdout goes to an actual command-line environment then output will be written on each newline. Otherwise the output will be buffered until the buffer is full (or explicitly flushed with fflush(stdout)). When running in an IDE of some kind, the output pane or window is usually a GUI control of the IDE itself; it's not a command-line environment. Therefore all output written on stdout will be stored in the stdout buffer until it's full, and then the IDE will receive it so it can write it to the output pane or window. Since the stdout buffer is not flushed, when you do fork() the new child process, being an almost exact copy of the parent process, will start with a stdout buffer that already has data in it. When the child process exits then the stdout buffer will be flushed and output written to the layers below, and the IDE will receive it to print.
Since you have multiple child (and grandchild) processes, in addition to the original parent process, you will get output from each of the processes.
71,691,612
71,692,064
2D array to find sum of columns, doesn't display properly
#include <iostream> #include <iomanip> using namespace std; void table(int i[], int j[]); int m[4][5] = { {2,5,4,7}, {3,1,2,9}, {4,6,3,0}, }; int main() { table({}, {}); } void table(int i[], int j[]) { for (int k = 0; k < 5; k++) { int sum = 0; for (int l = 0; l < 4; l++) { sum += m[l][k]; } cout << "column: " << " " << sum << '\n'; } } Basically I want it to display like this: Column Sum of Column Entries 1 9 2 12 3 9 4 16 and I'm not sure how to go about doing this. Do I write a loop?
The presented code does not make sense. For starters, the parameters of the function table are not used. Moreover, they are initialized as null pointers after this call table({}, {}); Also the array m is declared with 4 rows and 5 columns. It means that the last row contains all zeroes and the last column also contains zeroes. It seems you mean an array with three rows and four columns. The program can look the following way #include <iostream> #include <iomanip> const size_t COLS = 4; void table( const int a[][COLS], size_t rows ); int main() { int m[][COLS] = { { 2, 5, 4, 7 }, { 3, 1, 2, 9 }, { 4, 6, 3, 0 }, }; table( m, sizeof( m ) / sizeof( *m ) ); } void table( const int a[][COLS], size_t rows ) { std::cout << "Column Sum of Column Entries\n"; for (size_t j = 0; j < COLS; j++) { int sum = 0; for (size_t i = 0; i < rows; i++) { sum += a[i][j]; } std::cout << j + 1 << ":" << std::setw( 14 ) << ' ' << sum << '\n'; } std::cout << '\n'; } The program output is Column Sum of Column Entries 1: 9 2: 12 3: 9 4: 16
71,691,700
71,692,217
Reducing memory alignment
I want to know if it is possible to "reduce" the alignment of a datatype in C++. For example, the alignment of int is 4; I want to know if it's possible to set the alignment of int to 1 or 2. I tried using the alignas keyword but it didn't seem to work. I want to know if this is something not being done by my compiler or the C++ standard doesn't allow this; for either case, I would like to know the reason why it is as such.
I want to know if it is possible to "reduce" the alignment of a datatype in C++. It is not possible. From this Draft C++ Standard: 10.6.2 Alignment specifier      [dcl.align] … 5     The combined effect of all alignment-specifiers in a declaration shall not specify an alignment that is less strict than the alignment that would be required for the entity being declared if all alignment-specifiers appertaining to that entity were omitted. The 'reason' for this is that, in most cases, alignment requirements are dictated by the hardware that is being targeted: if a given CPU requires that an int be stored in a 4-byte-aligned address then, if the compiler were allowed to generate code that puts such an int in a less strictly aligned memory location, the program would cause a hardware fault, when run. (Note that, on some platforms, the alignment requirement for an int is only 1 byte, even though access may be optimized when more strictly aligned.) Some compilers may offer ways that appear to allow alignment reduction; for example, MSVC has the __declspec(align(#)) extension, which can be applied in a typedef statement. However, from the documentation: __declspec(align(#)) can only increase alignment restrictions: #include <iostream> typedef __declspec(align(1)) int MyInt; // No compiler error, but... int main() { std::cout << alignof(int) << "\n"; // "4" std::cout << alignof(MyInt) << "\n"; // "4" ...doesn't reduce the aligment requirement return 0; }
71,691,749
71,691,787
Is Passing Reference From Child To Parent During Construction UB?
The following is a simplified version of some code. struct Test { Test( int &id ) : id( id ) {} int &id; }; struct B : Test { B() : Test( a ) {} int a; }; Now, I'm aware that the parent, in this case Test, would be created before the B object when a B object is created. Does that then mean that the a variable, being passed in to the Test constructor, does not yet have an address and is thus Undefined Behaviour? Or is this safe? Just to clarify, the value of id is not used until after B is fully constructed.
Yes, your code is fine. You can use memory addresses of, and references to, not-yet-initialized members in the constructor. What you cannot do is use the value before it has been initialized. This would be undefined behavior: struct BROKEN { BROKEN( int* id ) : id(*id) {} int id; // ^ -------- UB }; struct B : BROKEN { B() : BROKEN( &a ) {} int a; }; "[...] being passed in to the Test constructor, does not yet have an address and is thus Undefined Behaviour" Consider what happens when an object is created. First memory is allocated, then the constructor is called. Hence "does not yet have an address" is not correct.
71,691,881
71,691,919
How to avoid "Undefined Reference" error when using my own library in an executable, building the project with CMake?
I'm trying to set up a C++ project using CMake but I think I'm missing something. When I'm trying to use my library in an executable I get the error: Scanning dependencies of target dynamic-shadows-lib [ 33%] Linking CXX static library libdynamic-shadows-lib.a [ 33%] Built target dynamic-shadows-lib Scanning dependencies of target main [ 66%] Building CXX object CMakeFiles/main.dir/main.cpp.o [100%] Linking CXX executable main /usr/bin/ld: CMakeFiles/main.dir/main.cpp.o: in function `main': main.cpp:(.text+0x83): undefined reference to `num3()' collect2: error: ld returned 1 exit status make[2]: *** [CMakeFiles/main.dir/build.make:85: main] Error 1 make[1]: *** [CMakeFiles/Makefile2:78: CMakeFiles/main.dir/all] Error 2 make: *** [Makefile:84: all] Error 2 My file structure looks like this: . ├── build ├── CMakeLists.txt ├── include │   └── vec2f.hpp ├── main.cpp └── src └── vec2f.cpp 3 directories, 4 files My root (and only) CMakeLists.txt looks like this: cmake_minimum_required(VERSION 3.16) project( dynamic-shadows VERSION 1.0 LANGUAGES CXX ) # Set C++ to version 14 set(CMAKE_CXX_STANDARD 14) # Set a name for the target set(TARGET_LIB ${CMAKE_PROJECT_NAME}-lib) # Make library ${TARGET_LIB} add_library(${TARGET_LIB} STATIC) # Set linker language to CXX (Gets error without it) set_target_properties(${TARGET_LIB} PROPERTIES LINKER_LANGUAGE CXX) # Set include directory for ${TARGET_LIB} target_include_directories( ${TARGET_LIB} PUBLIC ${PROJECT_SOURCE_DIR}/include ) # Set sources for ${TARGET_LIB} target_sources( ${TARGET_LIB} PUBLIC ${PROJECT_SOURCE_DIR}/src ) # Add a simple test executable to test the library add_executable(main main.cpp) # Link the ${TARGET_LIB} to main executable target_link_libraries( main PUBLIC ${TARGET_LIB} ) I suspect the problem lies in my CMakeLists.txt since I'm new to this, but I can't figure out what it is. What am I missing? Could it be something else I'm doing wrong? 
The code I'm trying to run is very simple but I'll include it for reference: ./include/vec2.hpp #ifndef __VEC2F_HPP__ #define __VEC2F_HPP__ #include <iostream> namespace ds { class vec2f { public: float x; float y; vec2f(float x_value, float y_value) : x(x_value), y(y_value) {} }; } // End of namespace ds std::ostream & operator<<(std::ostream &out, const ds::vec2f &v); ds::vec2f operator+(const ds::vec2f &left, const ds::vec2f &right); float num3(); #endif ./src/vec2f.cpp #include "../include/vec2f.hpp" /** * @brief Overload of << operator for ds::vec2f class to allow for printing it in std::cout. * * @param out std::ostream reference (&) * @param v ds::vec2f reference (&) * @return std::ostream& out */ std::ostream & operator<<(std::ostream &out, const ds::vec2f &v) { return out << "[" << v.x << ", " << v.y << "]"; } /** * @brief Overload of + operator for ds::vec2f class to allow for vector addition. * * @param left ds::vec2f * @param right ds::vec2f * @return ds::vec2f sum */ ds::vec2f operator+(const ds::vec2f &left, const ds::vec2f &right) { return ds::vec2f( left.x + right.x, left.y + right.y ); } float num3() { return 3; } ./main.cpp #include "vec2f.hpp" int main(int argc, char* argv[]) { std::cout << "Hello world!" << std::endl; ds::vec2f v1 = ds::vec2f(8, -2); ds::vec2f v2 = ds::vec2f(2, 5); float n = num3(); std::cout << "Res: " << n << std::endl; return 0; } I've tried to follow solutions to similar problems, which usually seem to have something to do with linking. Most haven't really helped since I'm required to solve this using CMake. I've tried a variety of CMakeLists.txt configurations but ended up with this one since it looked the cleanest and seemed to be using the latest implementations of commands (target_include_directories instead of include_directories etc.)
# Set sources for ${TARGET_LIB} target_sources( ${TARGET_LIB} PUBLIC ${PROJECT_SOURCE_DIR}/src ) You add source files, not directories. Just: add_library(... STATIC src/vec2f.cpp ) Do not use PROJECT_SOURCE_DIR, it will change when someone does add_subdirectory from above. If you want the current project source dir, that's ${CMAKE_CURRENT_SOURCE_DIR}. # Set linker language to CXX (Gets error without it) set_target_properties(${TARGET_LIB} PROPERTIES LINKER_LANGUAGE CXX) Remove it. Yes, without source files, no one knows what language your library is in.
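Putting the answer's points together, a minimal corrected CMakeLists.txt might look like the sketch below. This is untested; it keeps the target names from the question and assumes the file layout shown there:

```cmake
cmake_minimum_required(VERSION 3.16)
project(dynamic-shadows VERSION 1.0 LANGUAGES CXX)

set(CMAKE_CXX_STANDARD 14)
set(TARGET_LIB ${CMAKE_PROJECT_NAME}-lib)

# Give the library its source file directly; with a real source file,
# the LINKER_LANGUAGE workaround is no longer needed.
add_library(${TARGET_LIB} STATIC src/vec2f.cpp)

target_include_directories(${TARGET_LIB} PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)

add_executable(main main.cpp)
target_link_libraries(main PUBLIC ${TARGET_LIB})
```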
71,691,888
71,692,095
Why can I not pass a numeric template parameter to my templated function?
I have a custom class encapsulating a std::tuple ("MyTuple") and another class implementing a custom interface for a std::tuple ("MyInterface"). I need this separate interface in the code base, the code below is simplified. Since elements of std::tuple need to be accessed with the key as template parameter, the interface's functions have a numeric template parameter size_t Key which is then given to std::get for the tuple for example. This interface works fine, but not when calling it from another templated function which passes a numeric parameter as "key": #include <iostream> #include <functional> #include <tuple> #include <string> template <typename... Types> class MyInterface { public: MyInterface(const std::tuple<Types...>& tuple) : tuple(tuple) {} template <size_t Key> std::string getString() { return std::to_string(std::get<Key>(tuple)); } private: const std::tuple<Types...>& tuple; }; template <typename... Types> class MyTuple { public: MyTuple(Types... values) : value(std::tuple<Types...>(values...)) {} template <size_t Key> std::string asString() { MyInterface<Types...> interface(value); return interface.getString<Key>(); // here I get the compiler error } private: std::tuple<Types...> value; }; int main() { MyInterface<int, float, long> interface(std::tuple<int, float, long>(7, 3.3, 40)); std::cout << interface.getString<0>() << std::endl; // this works fine MyTuple<int, float, long> tuple(7, 3.3, 40); std::cout << tuple.asString<0>() << std::endl; } Complete output of g++: templated_function_parameter_pack.cpp: In member function ‘std::__cxx11::string MyTuple<Types>::asString()’: templated_function_parameter_pack.cpp:28:39: error: expected primary-expression before ‘)’ token return interface.getString<Key>(); // here I get the compiler error ^ templated_function_parameter_pack.cpp: In instantiation of ‘std::__cxx11::string MyTuple<Types>::asString() [with long unsigned int Key = 0; Types = {int, float, long int}; std::__cxx11::string = 
std::__cxx11::basic_string<char>]’: templated_function_parameter_pack.cpp:40:34: required from here templated_function_parameter_pack.cpp:28:33: error: invalid operands of types ‘<unresolved overloaded function type>’ and ‘long unsigned int’ to binary ‘operator<’ return interface.getString<Key>(); // here I get the compiler error Why is not valid syntax to call interface.getString<Key>() inside MyTuple::asString<size_t Key>?
When you want to call a template method of an instance, you need to write this: return interface.template getString<Key>(); You'll find every details of why in this answer: Where and why do I have to put the "template" and "typename" keywords?
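A stripped-down sketch of the same situation (the Holder/callGet names are hypothetical, not from the question) shows where the disambiguator goes:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

struct Holder {
    template <std::size_t Key>
    std::string get() const { return std::to_string(Key); }
};

// Inside a template, 'h.get<Key>()' would parse '<' as less-than, so
// the 'template' keyword is needed to say that 'get' names a member
// template of the dependent-ish call.
template <std::size_t Key>
std::string callGet(const Holder& h) {
    return h.template get<Key>();
}
```

Outside a template (as in the question's main), plain `interface.getString<0>()` is fine; the keyword is only needed from within another template.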
71,692,007
71,692,102
Array of pointers holds the same value for all elements
I'm currently deep-diving into the way pointers work. Something unexplainable (to me) happened when executing the following lines of code: std::vector<OptimizerPlanOperatorPtr> sources; for (const auto &source : sourceOperators){ OptimizerPlanOperator planOperator = OptimizerPlanOperator(source); sources.push_back(static_cast<std::shared_ptr<OptimizerPlanOperator>>(&planOperator)); } all sourceOperators differ, however when checking the elements of sources, they all point to the same OptimizerPlanOperator. When I ran the debugger, I realized that in every loop step, all values of sources change to the most recent value. My assumption is that I poorly initialized the pointer here, which somehow results in the value the pointer refers to being overridden. Can somebody show a solution or explain what I did wrong here?
You are storing the location of an object whose lifetime ends with the current iteration and handing ownership of it to a shared_ptr. Both are problems that lead to undefined behaviour. Casting a pointer to std::shared_ptr does not automagically make the pointed-to object into a shared object and extend its lifetime, and it is equivalent to std::shared_ptr<OptimizerPlanOperator>(&planOperator). The simplest solution is to not do this stepwise but all at once: for (const auto &source : sourceOperators){ sources.push_back(std::make_shared<OptimizerPlanOperator>(source)); }
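A minimal sketch of the fixed loop, using a hypothetical PlanOperator stand-in since the real OptimizerPlanOperator isn't shown in the question:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Stand-in for the question's OptimizerPlanOperator.
struct PlanOperator {
    int id;
    explicit PlanOperator(int i) : id(i) {}
};

// Each make_shared call allocates a distinct heap object whose
// lifetime is owned by the shared_ptr, so no element dangles and no
// two elements alias the same stack slot.
std::vector<std::shared_ptr<PlanOperator>> makeSources(const std::vector<int>& ids) {
    std::vector<std::shared_ptr<PlanOperator>> sources;
    for (int i : ids) {
        sources.push_back(std::make_shared<PlanOperator>(i));
    }
    return sources;
}
```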
71,692,143
71,871,181
UHD USRP crash in debug mode
I have a simple receiver application with USRP B200. It works fine in release mode but crashes in debug mode. Program crashes when following method is called. uhd::usrp::multi_usrp::make(args) Here the stack view when it crashes: The program only requires libboost_thread from the boost library. I tried with different versions (libboost_thread-vc141-mt-x64-1_69.lib, libboost_thread-vc141-mt-gd-x64-1_69.lib, libboost_thread-vc141-mt-sgd-x64-1_69.lib) of that library but got the same result. Environment : OS: Windows 10 and 11 Compiler: MSVC2017, MSVC2015 64 bit UHD version: 3.15.0.0 and 4.1.0.5 Boost versions : 1.69, 1.69, 1.77 and 1.79 Libusb version: 1.0 (debug mode dll) Edit: This program works stably in release mode. Also, a similar program like this one works fine in release and debug modes on Ubuntu, but crashes in debug mode on Windows. So, I don't think it's a hidden bug causing the crash. I suspect there is a point between UHD, Boost, and MSVC for the debug mode in Windows. I would be grateful for any help.
I found the problem. The same build configuration must be used for UHD binaries. Using a release-built uhd.dll in debug mode causes the crash. Unfortunately, the official build of UHD doesn't contain debug builds. Those who need the debug version need to compile it themselves. Here is the build guide: https://files.ettus.com/manual/page_build_guide.html And here are the debug builds of mine for testing purposes. https://github.com/huzeyfe-erkek/UHD-binaries
71,692,281
71,692,389
Why is it not possible to return a const reference while overloading the [] operator
Let us take this code as an example #include <iostream> using namespace std; struct valStruct { double& operator[](int i){return values[i];}; //line 6 double operator[](int i) const {return values[i];}; //line 7 double values[4]; }; int main () { valStruct vals = {0,1,2,3}; cout << "Value before change" << endl; for ( int i = 0; i < 3; i++ ) { cout << "vals[" << i << "] = "<< vals[i] << endl; } vals[1] = 2.2; // change 2nd element cout << "Value after change" << endl; for ( int i = 0; i < 3; i++ ) { cout << "vals[" << i << "] = "<< vals.values[i] << endl; } return 0; } I understand that line 6 (see comment in code) enables the writing (and reading!?) of a value to the index in array values while line 7 only reads that value. I understand the need of the const declaration in line 7 as preventing changing the value while not intended (although I do not understand how since line 6 exists), but my question is, why cannot I write the line as double& operator[](int i) const {return values[i];}; //line 7 which throws out the error: binding reference of type ‘double&’ to ‘const double’ discards qualifiers. This also raises the question of why do we need line 7 at all since line 6 exists and can do both writing and reading. EDIT: I understand the idea of a const func() const [suggested here][1] and I do not understand how this answers my question. I did not understand the mechanism explained by the two answers given which answer my question. I now understand that the second line is needed to deal with const objects of my function. I also understand that when I have a func() const, it implicitly makes the members const. This means that the returned value needs to be constant and that is why this does not work ´double& operator[](int i) const { return values[i]; };´ while this does ´const double& operator[](int i) const { return values[i]; };´ [1]: Why use the keyword 'const' twice in a class member function C++
why do we need line 7 at all since line 6 exists and can do both writing and reading. We need line 7 to work on const objects. Line 6 can't be used on const objects of type valStruct. This is because const class objects can only explicitly call const member functions, and the overloaded operator[] in line 6 has not been marked as a const member function. So it can't be used on const object of type valStruct. Thus, we need line 7 which "marks" the overloaded operator[] as a const member function. More info about this can be found here. Now, if you change the return type in line 7 to double&, then the problem is that here you've overloaded operator[] as a const member function. This means that the data members are also const. And since we cannot bind an "lvalue reference to non-const object", to a const object, we get the mentioned error. For example, const double d = 43.4; double& ref = d;//here we'll get the same error The situation(of why you're getting the error) is similar to the above given snippet. To solve(get rid of) this error, we need to change the return type from double& to const double&.
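A small sketch of both overloads side by side (with the const reference return type on the const one, as the answer suggests; helper names are hypothetical):

```cpp
#include <cassert>

struct ValStruct {
    double& operator[](int i) { return values[i]; }              // picked for non-const objects
    const double& operator[](int i) const { return values[i]; }  // picked for const objects
    double values[4];
};

ValStruct makeVals() { return ValStruct{1.0, 2.0, 3.0, 4.0}; }

// A const reference parameter forces the const overload to be chosen.
double readFirst(const ValStruct& v) { return v[0]; }

// A non-const object can still be written through operator[].
double writeThenRead(ValStruct v) { v[1] = 9.0; return v[1]; }
```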
71,692,411
71,692,547
How to sort a vector<any>?
Is it possible to sort vector<any> by using std::sort or somehow else? I was trying to do smth like this vector<any> va{ 55ll, 'a', -1}; sort(va.begin(), va.end(), [](const any& lhs, const any& rhs) { return any_cast<decltype(lhs.type())>(lhs) > any_cast<decltype(lhs.type())>(rhs); });
Is it possible to sort vector by using std::sort or somehow else? It is possible. You can do it the same way as sorting anything else: By defining a function to compare two std::any with strict total order. any_cast<decltype(lhs.type())>(lhs) This won't work. std::any::type returns std::type_info, and unless you store an object of type std::type_info in std::any, the std::any_cast will fail. There are many ways to order objects of heterogeneous types. A relatively simple way is to primarily order by the type of the object. A caveat here is that the order of types is not portable across systems: bool any_less_type(const std::any& l, const std::any& r) { auto& lt = l.type(); auto& rt = r.type(); return std::type_index(lt) < std::type_index(rt); } Then, objects of same type that are orderable may be further ordered, but that feature may have to be limited to a small set of types as if you were using std::variant.
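A rough sketch of such a comparator, ordering first by std::type_index and then by value for a few hard-coded types. Any type not listed compares equal within its type group, which is an assumption of this sketch, not part of the answer:

```cpp
#include <algorithm>
#include <any>
#include <cassert>
#include <typeindex>
#include <typeinfo>
#include <vector>

// Primary order: the (non-portable) order of type_index values.
// Secondary order: the stored value, for a small set of known types.
bool anyLess(const std::any& l, const std::any& r) {
    if (std::type_index(l.type()) != std::type_index(r.type()))
        return std::type_index(l.type()) < std::type_index(r.type());
    if (l.type() == typeid(int))
        return std::any_cast<int>(l) < std::any_cast<int>(r);
    if (l.type() == typeid(char))
        return std::any_cast<char>(l) < std::any_cast<char>(r);
    if (l.type() == typeid(long long))
        return std::any_cast<long long>(l) < std::any_cast<long long>(r);
    return false;  // unknown types: treated as equivalent
}

std::vector<std::any> sorted(std::vector<std::any> v) {
    std::sort(v.begin(), v.end(), anyLess);
    return v;
}
```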
71,693,165
71,694,292
C++ Fastest numerical string to long parsing
Here is what I came up with. len is guaranteed to have meaningful value (positive and true size of the char array) s is long unsigned number as a string without null-termination (received from 3rd party lib) typically 11-12 symbols e.g. "123456789000" running on x86 linux I am not C++ dev, could you help make it faster? inline uint64_t strtol(char* s, int len) { uint64_t val = 0; for (int i = 0; i < len; i++) { char c = *(s + i) - '0'; val = val * 10 + c; } return val; };
You might want to have a look at loop unrolling. When the body of a loop is short enough, checking the loop condition every iteration might be relatively expensive. A specific and interesting way of implementing loop unrolling is called Duff's device: https://en.wikipedia.org/wiki/Duff%27s_device Here's the version for your function: inline uint64_t strtol_duff(char* s, int len) { uint64_t val = 0; int n = (len + 7) / 8; int i = 0; switch (len % 8) { case 0: do { val = val * 10 + (*(s + i++) - '0'); case 7: val = val * 10 + (*(s + i++) - '0'); case 6: val = val * 10 + (*(s + i++) - '0'); case 5: val = val * 10 + (*(s + i++) - '0'); case 4: val = val * 10 + (*(s + i++) - '0'); case 3: val = val * 10 + (*(s + i++) - '0'); case 2: val = val * 10 + (*(s + i++) - '0'); case 1: val = val * 10 + (*(s + i++) - '0'); } while (--n > 0); } return val; }; To be honest, in your case I believe you will not see a huge benefit because the loop's body is not that tiny. It's all very much system dependent and requires experimentation (like most optimizations). Good compiler optimizers might unroll the loop automatically if it is actually beneficial. But it's worth to try.
71,693,329
71,693,391
Is returning a pointer to a local variable always undefined behavior
I have read that we should not return a pointer or a reference to a local variable. So in the below given example, i understand that when i wrote: return f; inside function foo, i am returning a pointer to a local variable. And using that pointer outside the function, will lead to undefined behavior. #include <iostream> const char* foo() { const char* f = "ffdf"; return f;//returning pointer to a local variable } const char* func() { return "fsfs"; } int main() { const char* ptr = func(); const char* f = foo(); std::cout<<f<<std::endl; //I know this is undefined behavior because we're using a pointer that points to a local variable std::cout<<ptr; //But IS THIS UNDEFINED BEHAVIOR too? } My question is that does the same hold true for the return statement return "fsfs"; inside function func. I am aware that in C++17 there is mandatory copy elison. So my question is directed towards all modern C++ version(C++11, C++17, etc). Does the behavior depends/differs on C++ version. In particular, i know that the statement std::cout<<f<<std::endl; inside main is always undefined behavior because we're using a pointer(dangling) that points to a local variable. But does the statement std::cout<<ptr; also leads to undefined behavior. If not why and what will happen here. PS: I might be wrong in describing what is actually happening in the first cout statement too. So please correct me if i am wrong. Also, my original intention was not restricted to a particular type like a string literal. For example, i could've have chosen to return an int instead of a string literal and have the return type as const int&. But since people have already started answering i am not changing the example to use int.
Returning a pointer to a non-static function-local variable will cause the pointer you get at the call site to be a dangling pointer, and using it will have undefined behavior. Here, this is not the case. A string literal has static storage duration, meaning it will live until the end of the program. This means it is safe to return a pointer to a string literal that was declared in a function. So both foo and func are safe, but if you had const char * bar() { std::string text = "some text"; // stuff return text.c_str(); } Then you would be returning a pointer to an object that no longer exists and would have UB trying to read from that returned pointer.
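A compilable sketch of the safe case (the function name is hypothetical): the pointer is local, but the characters it points at have static storage duration, so the caller can use it freely.

```cpp
#include <cassert>
#include <cstring>

// Safe: the string literal lives for the whole program, so the
// returned pointer never dangles even though 'name' itself is local.
const char* literalName() {
    const char* name = "static-lifetime";
    return name;
}
```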
71,693,532
71,694,961
Assembly: Why there is an empty memory on stack?
I use online complier wrote a simple c++ code : int main() { int a = 4; int&& b = 2; } and the main function part of assembly code complied by gcc 11.20 shown below main: push rbp mov rbp, rsp mov DWORD PTR [rbp-4], 4 mov eax, 2 mov DWORD PTR [rbp-20], eax lea rax, [rbp-20] mov QWORD PTR [rbp-16], rax mov eax, 0 pop rbp ret I notice that when initializing 'a', the instruction just simply move an immediate operand directly to memory while for r-value reference 'b', it first store the immediate value into register eax,then move it to the memory, and also there is an unused memory bettween [rbp-8] ~ [rbp-4], I think that whatever immediate value,they just exist, so it has to be somewhere or it just simply use signal to iniltialize(my guess), I want to know more about the underlying logic. So my question is that: Why does inilization differs? Why there is an empty 4-bytes unused memory on stack?
Let me address the second question first. Note that there are actually three objects defined in this function: the int variable a, the reference b (implemented as a pointer), and the unnamed temporary int with a value of 2 that b points to. In unoptimized compilation, each of these objects needs to be stored at some unique location on the stack, and the compiler allocates stack space naively, processing the variables one by one and assigning each one space below the previous. It evidently chooses to handle them in the following order: The variable a, an int needing 4 bytes. It goes in the first available stack slot, at [rbp-4]. The reference b, stored as a pointer needing 8 bytes. You might think it would go at [rbp-12], but the x86-64 ABI requires that pointers be naturally aligned on 8-byte boundaries. So the compiler moves down another 4 bytes to achieve this alignment, putting b at [rbp-16]. The 4 bytes at [rbp-8] are unused so far. The temporary int, also needing 4 bytes. The compiler puts it right below the previously placed variable, at [rbp-20]. True, there was space at [rbp-8] that could have been used instead, which would be more efficient; but since you told the compiler not to optimize, it doesn't perform this optimization. It would if you used one of the -O flags. As to why a is initialized with an immediate store to memory, whereas the temporary is initialized via a register: to really answer this, you'd have to read the details of the GCC source code, and frankly I don't think you'll find that there is anything very interesting behind it. Presumably there are different code paths in the compiler for creating and initializing named variables versus temporaries, and the code for temporaries may happen to be written as two steps. It may be that for convenience, the programmer chose to create an extra object in the intermediate representation (GIMPLE or RTL), perhaps because it simplifies the compiler code in handling more general cases. 
They wouldn't take any trouble to avoid this, because they know that later optimization passes will clean it up. But if you have optimization turned off, this doesn't happen and you get actual instructions emitted for this unnecessary transfer.
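The alignment rule discussed above can be observed without reading assembly, e.g. with a struct that mimics the frame's three objects. This is only an illustrative sketch; the sizes assume a typical 64-bit target:

```cpp
#include <cassert>
#include <cstddef>

// Mirrors the frame discussed above: a 4-byte int, then a pointer that
// must be naturally aligned, then another int. The padding the
// compiler inserts before 'b' corresponds to the unused bytes at
// [rbp-8] in the question's disassembly.
struct Frame {
    int a;        // like the variable 'a'
    const int* b; // like the reference 'b' (stored as a pointer)
    int temp;     // like the unnamed temporary
};
```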
71,693,714
71,693,879
In a C++ function template, why can't I use a lambda to specify the array size of a parameter?
I stumbled on the following while trying to implement some SFINAE trickery (what I was actually trying to achieve is irrelevant; I wan't to understand this behavior): I define a constexpr function that takes a reference to an array of size 1, but I specify the array size through a lambda call: constexpr bool f(const char(&)[+[](){return 1;}()]) { return true; } (The + before the lambda is because the compiler complains about two consecutive left brackets.) I add a caller function: constexpr bool g() { char x[1] = {}; return f(x); } This compiles fine. Now I templatize and instantiate: template<typename T> constexpr bool f(const char(&)[+[](){return 1;}()]) { return true; } constexpr bool g() { char x[1] = {}; return f<int>(x); } This time I get a strange compiler error: ERROR: maps/suggest/indexer/nhr/nhr_flume_flags.cc:134:45 no matching function for call to 'f' constexpr bool g() { char x[1] = {}; return f<int>(x); } ^~~~~~~ maps/suggest/indexer/nhr/nhr_flume_flags.cc:130:16 candidate function [with T = void] not viable: no known conversion from 'char[1]' to 'const char[+[]() { return 1; }()]' for 1st argument constexpr bool f(const char(&)[+[](){return 1;}()]) { return true; } ^ 1 error generated. Why am I getting this error? The command I'm using is: /usr/lib/llvm-11/bin/clang++ -stdlib=libstdc++ -std=c++17 myprog.cc The version info from the compiler is: Debian clang version 11.1.0-4+build3 Target: x86_64-pc-linux-gnu Thread model: posix InstalledDir: /usr/lib/llvm-11/bin
Why am I getting this error? /usr/lib/llvm-11/bin/clang++ -stdlib=libstdc++ -std=c++17 myprog.cc Using lambdas in function signature isn't allowed in C++17: [expr.prim.lambda] A lambda-expression is a prvalue whose result object is called the closure object. A lambda-expression shall not appear in an unevaluated operand, in a template-argument, in an alias-declaration, in a typedef declaration, or in the declaration of a function or function template outside its function body and default arguments. [ Note: The intention is to prevent lambdas from appearing in a signature.  — end note ] [ Note: A closure object behaves like a function object. — end note ] The program is ill-formed. The diagnostic message has room for improvement. Not diagnosing the non-template is a compiler bug. It's easy to work around using a constant. Much easier to read too: constexpr inline auto s = [](){return 1;}(); template<typename T> constexpr bool f(const char(&)[s]) Since proposal P0315, it should be allowed in C++20 because the highlighted part of the rule is removed. Clang however still fails to compile it in C++20 which is a bug as far as I can tell. At the moment, Clang's support for P0315 is listed as "partial".
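For completeness, here is the workaround assembled into a self-contained sketch (C++17; the arraySize name is hypothetical). The lambda's value is hoisted into a named constant, so the function signature itself no longer contains a lambda-expression:

```cpp
#include <cassert>

// Evaluate the lambda once, outside any signature.
constexpr inline int arraySize = [](){ return 1; }();

template <typename T>
constexpr bool f(const char (&)[arraySize]) { return true; }

constexpr bool g() {
    char x[arraySize] = {};
    return f<int>(x);  // compiles in C++17, unlike the lambda-in-signature form
}
```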
71,693,878
71,693,991
My rand() isn't really working in C++, specifically in Visual Studio 2019. I do have #include <time.h> and <stdlib.h>
My rand() gives out the same number (GPA) for every student. srand(time(NULL)); int gpa = 0 + (rand() % (10 - 0 + 1)); for (int i = 0; i < number; i++) { cout << "Enter the student #" << i + 1 << "'s name: "; getline(cin, pStudents[i]); cout << endl; } for (int i = 0; i < number; i++) { cout << "Student " << pStudents[i] << " has GPA of: " << gpa << endl; }
You only compute it once. Here is how to fix it on your code: srand(time(NULL)); for (int i = 0; i < number; i++) { cout << "Enter the student #" << i + 1 << "'s name: "; getline(cin, pStudents[i]); cout << endl; } for (int i = 0; i < number; i++) { int gpa = 0 + (rand() % (10 - 0 + 1)); cout << "Student " << pStudents[i] << " has GPA of: " << gpa << endl; }
71,694,096
71,694,194
What is the difference between class and struct in the "Type Erasure" code by using std::make_shared in C++?
I am trying to understand the behavior of "Type Erasure" by using std::make_shared. The basic idea is to use a class Object to wrap some different classes, such as class Foo and class Bar. I write the following code, and it does work. // TypeErasure.cpp #include <iostream> #include <memory> #include <string> #include <vector> class Base { public: virtual ~Base() {} virtual std::string getName() const = 0; }; template< typename T > struct Derived : Base { public: explicit Derived(const T&& t) : objD(t) {} std::string getName() const override { return objD.getName(); } private: T objD; }; class Object { public: template <typename T> explicit Object(T&& t) : objPtr(std::make_shared<Derived<T>>(std::forward<T>(t))) {} std::string getName() const { return objPtr->getName(); } std::shared_ptr<const Base> objPtr; }; void printName(std::vector<Object> vec) { for (auto v: vec) std::cout << v.getName() << std::endl; } class Bar { public: std::string getName() const { return "Bar"; } }; class Foo { public: std::string getName() const { return "Foo"; } }; int main() { std::vector<Object> vec{Object(Foo()), Object(Bar())}; printName(vec); } but when I change "struct Derived : Base" into "class Derived : Base", it shows the following error. error: no matching function for call to 'std::shared_ptr::shared_ptr(std::shared_ptr)'| The code is as following. 
// TypeErasure.cpp #include <iostream> #include <memory> #include <string> #include <vector> class Base { public: virtual ~Base() {} virtual std::string getName() const = 0; }; template< typename T > class Derived : Base { public: explicit Derived(const T&& t) : objD(t) {} std::string getName() const override { return objD.getName(); } private: T objD; }; class Object { public: template <typename T> explicit Object(T&& t) : objPtr(std::make_shared<Derived<T>>(std::forward<T>(t))) {} std::string getName() const { return objPtr->getName(); } std::shared_ptr<const Base> objPtr; }; void printName(std::vector<Object> vec) { for (auto v: vec) std::cout << v.getName() << std::endl; } class Bar { public: std::string getName() const { return "Bar"; } }; class Foo { public: std::string getName() const { return "Foo"; } }; int main() { std::vector<Object> vec{Object(Foo()), Object(Bar())}; printName(vec); } What is the root cause of this error? Is it about the difference between class and struct? Is it because class is a reference and struct is a value?
The only real difference between a class and a struct in C++ is that, for a struct, the default member access and inheritance is public, whereas, for a class, the default is private. So, to make your code work for the class Derived template, just make its inheritance of Base public: template< typename T > class Derived : public Base { // public inheritance public: //... Such public inheritance gives the Derived class access to the Base class constructors. Alternatively, to make your struct template case fail – most likely with the exact same error message(s) – you can make its inheritance of Base private: template< typename T > struct Derived : private Base { // private inheritance - fails to compile! public: //...
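Putting the fix into a reduced version of the question's code (Bar omitted for brevity; the makeFoo helper is added here just to exercise the conversion):

```cpp
#include <cassert>
#include <memory>
#include <string>

struct Base {
    virtual ~Base() {}
    virtual std::string getName() const = 0;
};

// 'class' defaults to private inheritance; spelling out 'public'
// restores the implicit Derived<T>* -> Base* conversion that
// shared_ptr<const Base> needs.
template <typename T>
class Derived : public Base {
public:
    explicit Derived(const T& t) : objD(t) {}
    std::string getName() const override { return objD.getName(); }
private:
    T objD;
};

struct Foo {
    std::string getName() const { return "Foo"; }
};

std::shared_ptr<const Base> makeFoo() {
    return std::make_shared<Derived<Foo>>(Foo{});
}
```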
71,694,352
72,045,831
Why is C++ getline() non-blocking when program is called from python subprocess?
I have a C++ program that waits for some text input with getline(), and it works well from the command line. However, I would like to call it from Python - send some text, get the output, and have it wait for more input. I tried with subprocess, but it seems that getline() in this case doesn't wait for input but gets an empty line. It works as intended if I constantly send input, but as soon as I stop doing that, it starts reading empty strings. Adding if (!input_command.empty()) in C++ works, but in this way the program consumes a lot of resources (I suppose because it keeps cycling the loop). Is it possible to have getline() stop and wait for some actual input? C++: bool ExitProg = false; do { string input_command; getline(cin, input_command); if (input_command == std::string("something")){ cout << "something" << endl; } if (input_command == std::string("exit")){ ExitProg = true; } } while (!ExitProg); Python: process = subprocess.Popen('c_program.exe', stdin=subprocess.PIPE, stdout=subprocess.PIPE) process.stdin.write('something\n') process.stdin.flush() print(process.stdout.readline()) UPDATE: I assumed that the program was reading empty lines for the following reason. In the C++ program, I split the input line into an array and, when the python code was finished, I was getting an error from the C++ program about an element of the array not existing.
I created a class and added the start of the subprocess to the __init__ method. The methods of the class are used to interact with the C++ program. At this point however I was still having the same issue. I solved it by adding a __del__ method that terminates the subprocess.
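The description above might look roughly like the following sketch. It uses cat as a stand-in for the C++ executable so the pattern is self-contained, and passes text=True for line-based string I/O; both choices, and the class/method names, are assumptions, not the answerer's actual code:

```python
import subprocess

class LineTalker:
    """Wrap a line-oriented child process; terminate it on cleanup.

    'cat' stands in here for the compiled C++ program, since cat also
    echoes each input line back on stdout.
    """

    def __init__(self, cmd=("cat",)):
        self.proc = subprocess.Popen(
            cmd,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,  # str-based line I/O instead of bytes
        )

    def ask(self, line):
        # Send one line, flush so the child sees it, read one reply line.
        self.proc.stdin.write(line + "\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().rstrip("\n")

    def __del__(self):
        # The key point from the answer: make sure the child does not
        # outlive the wrapper object.
        if self.proc.poll() is None:
            self.proc.terminate()
            self.proc.wait()
```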
71,694,365
71,694,691
std::queue and std::deque cleanup
Suppose we have a situation where we need FIFO data structure. For example, consume some events in the order they came in. Additionally, we need to clear the entire queue from time to time. std::queue seems like the perfect fit for doing that, but unfortunately it lacks a function for clearing the container. So at this point, we have 2 alternatives: std::queue we asked the STL lib what we need. Granted, the STL lib will give us more: it will give us an std::deque disguised as a std::queue we got back only a part from what we need, namely the pop front and push back but without clear we will have to "emulate" clear somehow, without the naive way of looping and popping std::deque we asked the STL lib what we need we got back what we asked for, but we've got too much: we also got push front and pop back Overall, we either received too few or too much, never exactly what we really wanted. Here is the thing that took me by surprise, while I was trying to provide clear functionality for using with std::queue which is a member var of my object struct message { }; struct consumer { std::queue<message> _pending; void clear_ver_1() { auto will_be_deleted_when_out_of_scope = std::move(_pending); } void clear_ver_2() { std::queue<message> will_be_deleted_when_out_of_scope; _pending.swap(will_be_deleted_when_out_of_scope); } }; I've read the specs and I can not say for sure if clear_ver_1 will leave the _pending in a valid but unspecified state or not. See the string example there. I'm quite surprised that the specs are so vague about this topic. Am I not looking in the right place? Thank you all! Update It seems there is a non-ignorable difference between assigning and clearing. Internally, queue and deque are almost the same (one is using the other one)
we got back what we asked for, but we've got too much: we also got push front and pop back You got exactly what you asked for, a dequeue is a data structure that allows efficient inserts and deletes at either end point. It might not be the data structure for you, but that's your fault for choosing it. we will have to "emulate" clear somehow, without the naive way of looping and popping For the record, popping is extremely cheap in terms of performance, it simply decrements a number. A pop in a while loop translates to decrementing an integer until 0, which unless you have a lot of numbers is very fast. In fact, it's probably much faster than allocating memory, which brings us to: The STL way of clearing these collection classes is to swap them with an empty collection (which is what you figured out on your own) or to just straight up re-allocate them in place (apple's answer). Both of those will (probably, the standard is vague about this point) allocate memory, which is a very expensive linear operation. You have all the pieces to do it, though I'd suggest profiling to see which way is faster if it really matters to you. Personally I just pop the queue in a loop, it leaves the allocated memory in place for the next time I need to push more, so it saves on potentially multiple allocations and re-allocations (when compared to resetting the queue), depending on the amount of data you have.
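Both clearing strategies from this discussion, side by side, in a small sketch (helper names are made up for the example):

```cpp
#include <cassert>
#include <cstddef>
#include <queue>

// std::queue has no clear(); these are the two usual workarounds.
void clearBySwap(std::queue<int>& q) {
    std::queue<int> empty;
    q.swap(empty);  // old contents are destroyed with 'empty'
}

void clearByPop(std::queue<int>& q) {
    while (!q.empty()) q.pop();  // one cheap pop per element
}

std::queue<int> makeQueue(int n) {
    std::queue<int> q;
    for (int i = 0; i < n; ++i) q.push(i);
    return q;
}

std::size_t sizeAfterClear(bool bySwap) {
    std::queue<int> q = makeQueue(5);
    if (bySwap) clearBySwap(q); else clearByPop(q);
    return q.size();
}
```

Which one is faster depends on allocation behavior, so profiling, as the answer says, is the way to decide.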
71,694,462
71,694,603
Can I define begin and end on an input iterator?
Lets say I have an input iterator type MyInputIter (which I use to traverse a tree-like structure) that satisfies the std::input_iterator concept. Are there any reasons why I shouldn't define begin() and end() on the iterator itself? struct MyInputIter { // iterator stuff omitted auto begin() const { return *this; } auto end() const { return MySentinel{}; } }; Reason being that I don't have to create another type just to wrap begin and end so I can use it in a for loop: MyInputIter iterate(TreeNode root, FilterPattern pattern) { return MyInputIter{ root, pattern }; } void foo() { for (auto item : iterate(someRandomTreeNode, "*/*.bla")) process(item); } while also being able to use it as an iterator: std::vector<TreeNode> vec(iterate(someRandomTreeNode, "*"), MySentinel{});
Are there any reasons why I shouldn't define begin() and end() on the iterator itself? Potential issues to consider: Implementing those functions for the iterator may be expensive. Either because of the need to traverse the structure to find them, or because of extra state stored in the iterator. It may be confusing since it deviates from common patterns. Edit: As pointed out by 康桓瑋, there's precedent for iterators that are ranges in std::filesystem::directory_iterator, so this may not be a significant issue in general. There is another consideration whether your range implementation works in an expected way. Reason being that I don't have to create another type As far as I can tell, you don't need to create another type. You can use: std::ranges::subrange(MyInputIter{ root, pattern }, MySentinel{})
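A reduced sketch of an iterator that is its own range. It uses an end iterator instead of a separate sentinel type to keep the example simple, and all names are hypothetical:

```cpp
#include <cassert>
#include <vector>

// A minimal input-style iterator counting down to zero: begin()
// returns a copy of itself and end() returns the "done" state, so the
// iterator can be used directly in a range-based for loop.
struct CountdownIter {
    int n;
    int operator*() const { return n; }
    CountdownIter& operator++() { --n; return *this; }
    bool operator!=(const CountdownIter& o) const { return n != o.n; }
    CountdownIter begin() const { return *this; }
    CountdownIter end() const { return CountdownIter{0}; }
};

std::vector<int> collect(CountdownIter it) {
    std::vector<int> out;
    for (int v : it) out.push_back(v);  // uses begin()/end() on the iterator
    return out;
}
```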
71,694,522
71,697,647
How to solve "std::basic_ofstream<char, std::char_traits<char> >::open(std::string&)"
I get an error about std::basic_ofstream<char, std::char_traits<char> >::open(std::string&) when compiling this code: FileEXt = ".conf"; const char* FileEX = FileEXt.c_str(); const char* File = Uname + FileEX; string File = Uname + FileEXt; ofstream outFile; outFile.open(File); Full code: LPCSTR lpPathName = ".\\DB"; SetCurrentDirectoryA(lpPathName); string Uname, Pword; cout << "Please enter a name: "; cin >> Uname; cout << '\n' << '\n'; cin >> Pword; system("CLS"); cout << "Username: " << Uname << '\n' << "Password: " << Pword << '\n'; const char* FileEXt = ".conf"; const char* Unames = Uname.c_str(); const char* FileEX = FileEXt; string File = Uname + FileEXt; ofstream outFile; outFile.open(File); if ( outFile.fail() ) { outFile << Uname << '\n' << Pword; outFile.close(); } else { cout << Uname << " already exists!" << '\n'; Sleep(3000); return 0; } This code is supposed to create a file that stores a name in the DB directory.
You are passing a std::string to ofstream::open(). Prior to C++11, open() did not accept a std::string as input, only a const char* pointer, eg: outFile.open(File.c_str());
71,694,553
71,694,591
How to solve "error C2078: too many initializers" when moving the same members from the parent class to its child?
I am facing a relatively tricky situation here that seemed quite easy at first sight. After moving those three members from the parent class Parent to its child class Child it seems that I'm no longer able to take advantage of the default constructor. Why? And is there a way out here without having to specifically implement a Child(...) constructor. Seems counterintuitive at first... Actually I would have thought that the first example is where it fails (thinking that the Child class' constructor would overshadow the one of its parent). struct Parent { std::string mText; int mInt; bool mBool; }; struct Child : public Parent { }; Child test{ "", 0, false}; // Compiles But in this latter case, the default constructor won't be created if the members are defined in the child class. struct Parent { }; struct Child : public Parent { std::string mText; int mInt; bool mBool; }; Child test{ "", 0, false}; // error C2078: too many initializers
You need empty braces for the base subobject in aggregate initialization. (The default constructor is irrelevant in this case; both Parent and Child are aggregates, and aggregate initialization is performed.) However, if the object has a sub-aggregate without any members (an empty struct, or a struct holding only static members), brace elision is not allowed, and an empty nested list {} must be used. Child test{ {}, "", 0, false}; // ^^
71,694,567
71,694,615
push_back of an integer doesn't work on my vector of strings
I am trying to push back 3 vectors in parallel, and when I get to push_back() into the string vector, I get this error: no instance of overloaded function "std::vector<_Ty, _Alloc>::push_back [with _Ty=std::string, _Alloc=std::allocator<std::string>]" matches the argument listC/C++(304) ask3.cpp(38, 8): argument types are: (int) ask3.cpp(38, 8): object type is: std::vector<std::string, std::allocator<std::string>> Here is the chunk of code that I'm in: #include <iostream> #include <fstream> #include <sstream> #include <vector> #include <string> using namespace std; int main() { int length, count = 0, moviecount = 0, spacecount = 0; ; vector<int> status, price; vector<string> movies; string FileName, text, line, dummy; FileName = "MovieList.txt"; ifstream InFile; InFile.open(FileName); while (!InFile.eof()) { getline(InFile, line); text += line + "\n"; } cout << text; length = text.length(); for (int i = 0; i <= length; i++) { if (text[i] == ' ') { spacecount++; } } if (spacecount == 2) { moviecount = 1; } else if (spacecount > 2) { int temp = spacecount; temp = temp - 2; temp = temp / 3; moviecount = 1 + temp; } movies.push_back(moviecount); //<-- problem line status.push_back(moviecount); price.push_back(moviecount); }
movies is a vector of string, so you cannot push int directly. If you are using C++11 or later, you can use std::to_string to convert integers to strings. Another way to convert integers to strings is using std::stringstream like this: std::stringstream ss; ss << moviecount; movies.push_back(ss.str());
71,695,153
71,696,329
Enable Perfect Forward Secrecy In Indy 10?
In How to enable Perfect Forward Secrecy In Indy 10?, the question is answered for Delphi. As I am trying to achieve the same in C++, I get stuck at the SSL_CTX_set_ecdh_auto() method. It is present in the source of Indy, and thus (I assume) in the installed version (I am running C++Builder 11), but there is no reference in the C++ header file IdSSLOpenSSLHeaders.hpp. However, I might add this manually in the header, assuming the DCU contains the source, but searching the web for OpenSSL I found SSL_CTX_set_ecdh_auto() and SSL_set_ecdh_auto() are deprecated and have no effect. How can I best enable perfect forward secrecy using C++ and Indy 10? TIdServerIOHandlerSSLOpenSSL * LIOHandleSSL; LIOHandleSSL = new TIdServerIOHandlerSSLOpenSSL(FServer); LIOHandleSSL->SSLOptions->Mode = TIdSSLMode::sslmServer; LIOHandleSSL->SSLOptions->Method = TIdSSLVersion::sslvTLSv1_2; LIOHandleSSL->SSLOptions->SSLVersions = TIdSSLVersions() << TIdSSLVersion::sslvTLSv1_2; LIOHandleSSL->SSLOptions->CertFile = AppRoot + CertFile; if (RootCertFile.Trim().Length() > 0) LIOHandleSSL->SSLOptions->RootCertFile = AppRoot + RootCertFile; LIOHandleSSL->SSLOptions->KeyFile = AppRoot + KeyFile; LIOHandleSSL->SSLOptions->CipherList = "" "ECDHE-RSA-AES256-GCM-SHA384:" "ECDHE-ECDSA-AES256-GCM-SHA384:" "ECDHE-RSA-WITH-AES-256-GCM-SHA384:" "ECDHE-ECDSA-CHACHA20-POLY1305:" "ECDHE-ECDSA-AES128-GCM-SHA256:" "ECDHE-ECDSA-AES256-SHA384:" "ECDHE-ECDSA-AES128-SHA256:" "HIGH:" "!aNULL:" "!eNULL:" "!EXPORT:" "!DES:" "!RC4:" "!MD5:" "!PSK:" "!SRP:" "!CAMELLIA:" "@STRENGTH"; // this is what is needed according to the post // auto sslContext = TMyIdSSLContext(LIOHandleSSL->SSLContext); // SSL_CTX_set_ecdh_auto(FSSLContext.fContext, 1); LIOHandleSSL->OnGetPassword = OnGetSSLPassword; FServer->IOHandler = LIOHandleSSL; FServer->OnQuerySSLPort = OnQuerySSLPort;
[SSL_CTX_set_ecdh_auto()] is present in the source of Indy, and thus (I assume) in the installed version (I am running C++Builder 11), but there is no reference in the C++ header file IdSSLOpenSSLHeaders.hpp. That is because all of the OpenSSL functions used in the IdSSLOpenSSLHeaders.pas unit are marked as {$EXTERNALSYM} specifically so that they won't appear in the IdSSLOpenSSLHeaders.hpp file. This is customary when Delphi units use external SDKs that are otherwise available to C/C++ natively. So, to use the OpenSSL functions in C++, you will have to download the OpenSSL 1.0.2 SDK and #include its .h header files in your code (or, as you said, you can simply declare the functions yourself, since they are present in the Delphi DCUs). Delphi can't use .h files, which is (mostly) why IdSSLOpenSSLHeaders.pas exists. searching the web for OpenSSL I found SSL_CTX_set_ecdh_auto() and SSL_set_ecdh_auto() are deprecated and have no effect. In OpenSSL 1.1.0 and later, yes. But not in OpenSSL 1.0.2, which is what TIdSSLIOHandlerSocketOpenSSL uses. If you want to use OpenSSL 1.1.x+, you need to use this (wip) SSLIOHandler instead. // this is what is needed according to the post // auto sslContext = TMyIdSSLContext(LIOHandleSSL->SSLContext); // SSL_CTX_set_ecdh_auto(FSSLContext.fContext, 1); In C++, that would look something like this: #include <openssl/ssl.h> // or simply: // long __fastcall SSL_CTX_set_ecdh_auto(PSSL_CTX ctx, long m); class TMyIdSSLContext : public TIdSSLContext { public: __property PSSL_CTX Context = {read=fContext}; }; auto sslContext = (TMyIdSSLContext*) LIOHandleSSL->SSLContext; SSL_CTX_set_ecdh_auto(sslContext->Context, 1);
71,695,335
71,695,493
What is the variadic function template overloading precedence rule?
I'm using variadic function templates in the common recursive format and I need to change the behaviour of the function whenever I'm handling a vector. If the functions templates were not variadic, overloading works well, but with variadic function templates, the overloading resolution seems to change when unpacking the argument pack. Below some code to explain better what I mean. #include <iostream> #include <vector> template<typename T> void complexfun(T x) { std::cout << "1 end" << std::endl; } template<typename T, typename... Args> void complexfun(T x, Args... args) { std::cout << "1 "; complexfun(args...); } template<typename T> void complexfun(std::vector<T> x) { std::cout << "2 end" << std::endl; } template<typename T, typename... Args> void complexfun(std::vector<T> x, Args... args) { std::cout << "2 "; complexfun(args...); } int main() { std::vector<int> vint = {2, 3, 4}; float x1 = 9.4; complexfun(vint); // output: 2 end -> OK complexfun(vint, x1); // output: 2 1 end -> OK complexfun(x1, vint); // output: 1 1 end -> WRONG: need 1 2 end return 0; } In the execution of complexfun(x1, vint) we should have complexfun(vint), but it does not behave as the "standalone" call complexfun(vint). Any help on why this is the case and how to fix it is greatly appreciated!
You need to declare template<typename T> void complexfun(std::vector<T>) before the function that is supposed to be using it. Just swap the order of those function templates so you get: template<typename T> // this function template void complexfun(std::vector<T>) { std::cout << "2 end" << std::endl; } template<typename T, typename... Args> // ...before this function template void complexfun(T, Args... args) { std::cout << "1 "; complexfun(args...); } Demo
71,696,301
71,697,738
Declare static array in class with size passed to constructor?
Is there any way, to declare static array in class with size that was passed to constructor? It is alright if the size has to be const and it makes it impossible to set it in runtime. I tried doing something like this: class class_name { public: float* map; class_name(int n, const int d) { float arr[d]; map = arr; } }; but I feel like it could be very bad idea. Is it bad? If it is, then why is it?
Yes, this code class_name(int n, const int d) { float arr[d]; map = arr; } is a bad idea, for 2 reasons float arr[d]; creates a local variable on the stack, so it ceases to exist at the end of the block. So map becomes a dangling pointer. If you needed dynamic size allocation, you should just use std::vector<float> map and avoid a lot of hassle. float arr[d]; is a variable length array, and C++ does not support those. Making d be const does not help, it has to be an actual constant, not a const variable. Solution: Since you say the array length can be determined at compile time, this is a perfect fit for a template: template <std::size_t N> class class_name { public: std::array<float, N> map { {} }; // { {} } causes value initialization of everything to 0 // actually above could be `float map[N];` but it has the C array gotchas class_name(int n) { // not sure what n is for... } }; And to declare a variable of this class: class_name<5> obj; // obj.map size is 5
71,696,714
71,705,396
MQTT client waits indefinitely during publish of message
I try to implement an asynchronous MQTT client with the paho library, that receives messages on topic "request", formulates a string and puts the response out on topic "response". I use the callbacks to handle the incoming messages. #include "mqtt/async_client.h" #include "mqtt/topic.h" const std::string SERVER_ADDRESS {"tcp://localhost:2883"}; const std::string CLIENT_ID {"test_client"}; class TestCallback : public virtual mqtt::callback { // the mqtt client mqtt::async_client& cli_; // (re)connection success void connected(const std::string& cause) override { cli_.subscribe("request", 0); } // callback for when a message arrives. void message_arrived(mqtt::const_message_ptr msg) override { if( msg->get_topic() == "request" ) { /* format response message here and put it into (string) msg */ mqtt::message_ptr pubmsg = mqtt::make_message("response", msg); pubmsg->set_qos(2); //// PROBLEMATIC CODE //// cli_.publish(pubmsg)->wait(); ////////////////////////// } } public: TestCallback(mqtt::async_client& cli) : cli_(cli) {} }; int main(int argc, char** argv) { mqtt::async_client cli(SERVER_ADDRESS, CLIENT_ID); TestCallback cb(cli); cli.set_callback(cb); mqtt::connect_options connOpts = mqtt::connect_options_builder() .clean_session(false) .automatic_reconnect() .finalize(); try { cli.connect(connOpts)->wait(); } catch (const mqtt::exception& exc) { std::cerr << "[ERROR] " << exc.what() << std::endl; return 1; } // run until the application is shut down while (std::tolower(std::cin.get()) != 'q') ; try { cli.disconnect()->wait(); } catch (const mqtt::exception& exc) { std::cerr << "[ERROR] " << exc.what() << std::endl; return 1; } return 0; } The problem arises when I try to publish the response message, as the client seems to wait indefinitely. Responsible for this is the wait function which is used on a token to track the status of the published message (reference). 
To my understanding, this has to be done especially when using higher levels of QoS to ensure everything went well. Upon removal of the call to wait(), it works as expected. But I am not sure if this ensures the correct publishing of messages. What is the correct way to do this?
I'm going to make a guess here, because I don't really know how async works in C++. The MQTT client has a single message handling thread, this deals with all the incoming and outgoing TCP packets as they arrive/depart on the socket. When a new MQTT message arrives it then calls the message handler callback (message_arrived), in which you call publish and wait for it to complete. But because the call to wait effectively blocks message_arrived the message handling thread can not continue. This means it can not deal with the 3 legged QOS2 handshake required for the publish to complete, hence it hangs. I will also guess that if you changed the publish to QOS 0 it would complete, but would also fail with QOS 1 as that requires the message handling thread to send/receive multiple messages to continue. Not waiting for the publish to complete is probably the correct solution.
71,696,964
71,697,336
C++ Union Array differs in 32/64 bits
My code: union FIELD { int n; char c; const char *s; FIELD(){} FIELD(int v){ n = v; } FIELD(char v){ c = v; } FIELD(const char* v){ s = v; } }; struct SF { const char* s0; char s1; int s2; const char* s3; }; int main() { printf("sizeof(long) = %ld\n", sizeof(long)); printf("now is %d bit\n", sizeof(long) == 8?64:32); FIELD arrField[] = { FIELD("any 8 words 0 mixed"), FIELD('d'), FIELD(251356), FIELD("edcba") }; SF* sf0 = (SF*)&arrField; printf("sf0->s0 = %s, ", sf0->s0); printf("sf0->s1 = %c, ", sf0->s1); printf("sf0->s2 = %d, ", sf0->s2); printf("sf0->s3 = %s\n", sf0->s3); } When I use the default 64-bit execution output: I add the compilation parameters in CMakeLists.txt: set_target_properties(untitled PROPERTIES COMPILE_FLAGS "-m32" LINK_FLAGS "-m32") It will compile the 32-bit program, then run and output: My question is, how can I make a 64-bit program have the same output behavior as a 32-bit program?
Apply alignas(FIELD) to every single member variable of SF. Additionally, you cannot rely on the size of long to tell 64-bit and 32-bit systems apart. Check the size of a pointer to do this. On some 64-bit systems long is 32 bit; this is the case for my system, for example. Furthermore, %ld requires a long parameter, but the sizeof operator yields size_t, which is unsigned in addition to not necessarily matching long in size. You need to add a cast there to be safe (or just go with std::cout which automatically chooses the correct conversion based on the second operand of the << operator). union FIELD { int n; char c; const char* s; FIELD() {} FIELD(int v) { n = v; } FIELD(char v) { c = v; } FIELD(const char* v) { s = v; } }; struct SF { alignas(FIELD) const char* s0; alignas(FIELD) char s1; alignas(FIELD) int s2; alignas(FIELD) const char* s3; }; int main() { printf("sizeof(long) = %ld\n", static_cast<long>(sizeof(long))); printf("now is %d bit\n", static_cast<int>(sizeof(void*)) * 8); FIELD arrField[] = { FIELD("any 8 words 0 mixed"), FIELD('d'), FIELD(251356), FIELD("edcba") }; SF* sf0 = (SF*)&arrField; printf("sf0->s0 = %s, ", sf0->s0); printf("sf0->s1 = %c, ", sf0->s1); printf("sf0->s2 = %d, ", sf0->s2); printf("sf0->s3 = %s\n", sf0->s3); }
71,697,593
71,707,107
Why the CPU usage is higher when using OpenCV on C++ than on Python
I am using Ubuntu 20.04.4, and I compiled OpenCV as release mode. Whenever I read frames, it consumes quite a lot of my CPU. I tested this in other machines as well. However, using a very similar script on python, it uses much less CPU. I found this question that seems to have a similar problem as mine. Although I am using the Release version. Also, my python seems to be using the same OpenCV version as the one I compiled: 4.5.5. Here is the C++ test code: #include "opencv2/opencv.hpp" int main(){ cv::VideoCapture vo = cv::VideoCapture(2); //Set fourc for better performance. vo.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M','J','P','G')); vo.set(cv::CAP_PROP_FPS,30); //Setting buffersize to one will make vi.read() blocking until next frame is available. vo.set(cv::CAP_PROP_BUFFERSIZE,1); vo.set(cv::CAP_PROP_FRAME_WIDTH,1920); vo.set(cv::CAP_PROP_FRAME_HEIGHT,1080); cv::Mat frame; while (vo.isOpened()) { vo.read(frame); } } And the python code: import cv2 vo = cv2.VideoCapture(2) vo.set(cv2.CAP_PROP_FPS,30) vo.set(cv2.CAP_PROP_BUFFERSIZE,1) vo.set(cv2.CAP_PROP_FRAME_WIDTH,1920) vo.set(cv2.CAP_PROP_FRAME_HEIGHT,1080) while(vo.isOpened()): ret, frame = vo.read() The python script consumes around 10% of my CPU while the C++ consumes around 30%. I work in an environment where CPU resource is critical. I'd like to know if there is any way to decrease this usage. Am I missing something?
Thanks to @ChristophRackwitz. Apparently, it was the fourcc configuration that was causing the high CPU usage. Using a resolution of 1920x1080 caps the FPS at 5 with the default YUYV encoding. This is likely why I got lower CPU usage with Python. If I set the fourcc on Python to MJPG, the CPU usage spikes as well.
71,699,002
71,699,431
Variadic template returning a N-tuple based on an unknown number of arguments
I would like to have a variadic function template that takes pointers of a certain type T, fills those pointers, and for each of them generate an object as a result, the final result being a tuple of all these generated objects. Given a function from a library (that uses a C api): struct opaque {}; bool fill_pointer(opaque **); And: struct MyType {}; MyType gen_object(opaque *); I would like to have a variadic template function that would look like this (sort of): std::tuple<bool, MyType...> fill_and_gen_objects(opaque **...); (where the bool result is false if and only one of fill_pointer return value is false). This is what I would like to achieve: opaque *oa, *ob, *oc; auto [failed, ta, tb, tc] = fill_and_gen_objects(oa, ob, oc); Thanks
That's heavy pseudocode, I'll answer with heavy pseudocode: template<typename ... Ts> constexpr auto fill_and_gen_objects(Ts* ... os) { bool some_status = true; //whatever return std::make_tuple(some_status, gen_object(os) ...); } Ok, actually it even compiles, see here EDIT: downgraded to C++14 ... that's what you've tagged. Same for C++17 using CTAD template<typename ... Ts> constexpr auto fill_and_gen_objects(Ts* ... os) { bool some_status = true; //whatever return std::tuple{some_status, gen_object(os) ...}; } Same for C++20 using abbreviated function template syntax constexpr auto fill_and_gen_objects(auto* ... os) { bool some_status = true; //whatever return std::tuple{some_status, gen_object(os) ...}; } C++20 with indices by using integer sequence (untested): constexpr auto fill_and_gen_objects(auto* ... os) { bool some_status = true; //whatever return []<int ... I>(std::index_sequence<I...>, auto tup){ return std::tuple{some_status, gen_object(std::get<I>(tup)) ...};} (std::make_index_sequence<sizeof...(os)>{}, std::tuple{os...}) } Furthermore, here is the C++27 solution: void do_my_fckng_work() { bool asap = true; }
71,699,081
71,700,665
Can't assign a QString as a value for a LineEdit in Qt
I am working on a simple calculator app in Qt. I want to display an error message when the user tries to divide by zero. I have tried the code below but the output just stays as 0. if(dblDisplayVal == 0.0){ QString error = "Can't divide by zero!"; ui->display->insert(error); } else{ result = calcVal / dblDisplayVal; divTrigger = false; } Any idea on how I can solve this issue?
You should use the setText() function instead of insert(). Here is an example: QLineEdit *lineEdit = new QLineEdit(this); lineEdit->setReadOnly(true); ui->gridLayout->addWidget(lineEdit, 0, 0, 1, 1); QString error = "Can't divide by zero!"; lineEdit->setText(error);
71,699,562
71,699,676
How to convert a string into a hexadecimal and store that hexadecimal value into a string in c++
I want to store the hex values into a string, but I don't know to do that when my string is not giving me the hex values when it is printed out. I'm pretty sure it has something to do with hex, but I don't know how to get those int values that print out the correct hex values to be stored into a string without it being changed. I tried different ways of manipulating this and searched on the web but have not found much of a solution in solving this. #include <iostream> #include <string> #include <sstream> #include <iterator> #include <iomanip> #include <vector> #include <stdlib.h> using std::cout; using std::endl; using std::string; using std::hex; using std::stringstream; using namespace std; int main(){ string s2 = "HelloWorld"; cout << "string: " << s2 << endl; cout << "hexval: "; vector<int> character; // converting each character to its ascii value string bytes; for(int i = 0; i < s2.size(); i++) { character.push_back(int(s2[i])); bytes = to_string(character.at(i)); cout << hex << character.at(i) << " "; cout << bytes << endl; } cout << endl; cout << bytes << endl; return 0; } Here is the output that 'bytes' my string is printing out: 48 72 65 101 6c 108 6c 108 6f 111 57 87 6f 111 72 114 6c 108 64 100 Left is the hexadecimals and right is the string. Two different values. How can I store these hexadecimals that is being converted from a string be stored into a string as a hexadecimal value?
I see 2 different ways: The first one uses a char array and writes to it with sprintf with %X. The second way uses a stringstream and streams the int values into it with the hex specifier. You can get the string with the .str() method of stringstream. #include <iostream> #include <string> #include <sstream> #include <iterator> #include <iomanip> #include <vector> #include <stdlib.h> using std::cout; using std::endl; using std::string; using std::hex; using std::stringstream; using namespace std; int main(){ string s2 = "HelloWorld"; cout << "string: " << s2 << endl; string result; for(int i = 0; i < s2.size(); i++) { char buffer[20]; sprintf(buffer, "%X ", s2[i]); result += buffer; } cout << "hexval1: " << result << endl; stringstream res; for (int val : s2) res << hex << val << " "; cout << "hexval2: " << res.str() << endl; return 0; }
71,699,687
71,699,791
C++ Code keeps crashing after a validation
I have written a piece of code to validate a keyword, it validates and makes sure that the word is 5 letters long and it is all letters with no numbers in it. However, when I run it, the code seems to stop working at all and doesn't prompt me for the next question, I've tested it without this code and this part of the code is the problem, as it works fine without it. The code: cout<<name1<<", please enter the keyword (5 characters): "<<endl; cin>>key; for(int i = 0; i < keylength; i++){ if(isalpha(key[i]) == 1){ validnum += 1; } } if(validnum == keylength && key.length() == keylength){ validated = true; } else{ validated = false; }
Before the for loop you need to check that key.length() is equal to keyLength. Otherwise the loop can invoke undefined behavior when the user enters a string with a length less than keyLength. Also, the function isalpha does not necessarily return 1; it can return any non-zero value. Change your code to something like the following: validated = key.length() == keyLength; if ( validated ) { size_t i = 0; while ( i < keyLength && isalpha( ( unsigned char )key[i] ) ) ++i; validated = i == keyLength; }
71,699,793
71,700,155
nested 'while' loop not looping back to outer 'for' loop after finishing (c++)
I'm needing to find how much tax $ any given number of taxpayers have to pay over any number of years. At the beginning of the program, # of taxpayers is entered, and # of years is entered. The while loop executes fine, and does what it's supposed to do the first time; however, it never loops back to the 'for' loop & asks for the next taxpayer's income. (I will note I have to do it this way as it's for a class) for (int i = 1; i <= taxpayers; i++) { while (year <= years) { cout << "\n\nPlease enter payer " << i << "'s income for year " << year << ": $"; cin >> income; if (income >= 0) { ....... year++; } else { cout << "\n *Error*" cin.clear(); cin.ignore(INT_MAX, '\n'); continue; } } }
it never loops back to the 'for' loop & asks for the next taxpayer's income That is because after the while loop is finished the 1st time through, year has caught up to years, and so on subsequent iterations of the for loop, year <= years is always false. You need to reset year back to its starting value on each iteration of the for loop, before entering the while loop: for (int i = 1; i <= taxpayers; i++) { year = <your 1st year>; // <-- HERE while (year <= years) { ... } }
71,699,898
71,699,994
c++: passing arrays by reference
I am trying to define a function prototype which takes an array of char of different lengths. I understand that I must pass the array by reference to avoid the array decaying to a pointer to its first element. So I've been working on this simple example to get my understanding correct. #include <stdio.h> // size_t //template to accept different length arrays template<size_t len> //pass array of char's by reference void func(const char (&str)[len]) { //check to see if the array was passed correctly printf(str); printf("\n"); //check to see if the length of the array is known printf("len: %lu",len); } int main(){ //create a test array of chars const char str[] = "test12345"; //pass by reference func(&str); return 0; } This gives me the compiler errors: main.cpp: In function ‘int main()’: main.cpp:19:14: error: no matching function for call to ‘func(const char (*)[10])’ func(&str); ^ main.cpp:6:6: note: candidate: template<long unsigned int len> void func(const char (&)[len]) void func(const char (&str)[len]) ^~~~ main.cpp:6:6: note: template argument deduction/substitution failed: main.cpp:19:14: note: mismatched types ‘const char [len]’ and ‘const char (*)[10]’ func(&str); I thought that the function signature func(const char (&str)[len]) indicates a pointer to a char array of length len, which is what I am passing by func(&str). I tried func(str), which I would expect to be wrong, since I am passing the value str, instead of its reference. However, this actually works and I dont understand why. What is going on here? What does it actually mean to pass by reference?
Your function is declared correctly, but you are not passing the array to it correctly. func(*str); first decays the array to a pointer to the 1st element, and then dereferences that pointer, thus passing just the 1st character to func(). But there is no func(char) function defined, so this is an error. func(&str); takes the address of the array, thus passing a pointer to the array, not a reference to it. But there is no func(char(*)[len]) function defined, so this is also an error. To pass str by reference, you need to simply pass str as-is without * or &: func(str); This is no different than passing a reference to a variable of any other type, e.g.: void func(int &value); int i; func(i); On a side note: printf(str); is dangerous, since you don't know if str contains any % characters in it. A safer call would be either: printf("%s", str); Or: puts(str); But those only work if str is null-terminated (which it is in your case). Even safer would be: printf("%.*s", (int)len, str); Which doesn't require a null terminator.
71,699,928
71,700,136
inline static constexpr vs global inline constexpr
Suppose that I have a few inline constexpr variables (named as default_y and default_x) in a header file and I decided to move them to a class that they are completely related to and mark them static (cause it seems better in terms of design). namespace Foo { inline constexpr std::streamsize default_size { 160 }; // not closely related to the class Bar class Bar { public: inline static constexpr std::uint32_t default_y { 20 }; // closely related to the class Bar inline static constexpr std::uint32_t default_x { 20 }; // closely related to the class Bar }; } So the question is will this make a difference in terms of how and when they are initialized at the start of the program (and overall efficiency)? Will the inline keyword in this particular use case force the compiler to add some guard for these two variables and make accessing them slower? Or maybe because they're constexpr there is no need to do those stuff at runtime since their value can be retrieved from the read-only section of the executable and then be assigned to them at the start of the main thread? I built the program once with inline static and once with static and there was no difference in the size of the binary compared to the previous solution so maybe the linker generated the exact same code (hopefully).
Placing static inline constexpr variables in a class should not impact efficiency in any way. Due to constexpr, they are constant-initialized at compile time whenever possible. The inline keyword here is what lets you initialize a static data member inside the body of the class. You might find this material on the inline keyword interesting: https://pabloariasal.github.io/2019/02/28/cpp-inlining/
71,700,710
71,700,782
How to write wrappers for classes in c++ that overrides some member and inherits other members
I want to implement a custom vector class myVector<dataT> which is identical to std::vector expect that its index starts from an offset which is given as parameter. Example usage below: myVector<int> vec(3,0,1); // length=3, initial_value=0, offset=1 assert(vec.size()==3); vec[1]=1, vec[2]=2, vec[3]=3; assert(vec[1]==1); assert(vec[3]==3); Basically I want to override the dataT& operator[] method while all other methods remain the same. But I want handle cases where offset is a variable instead of a constant. If offset is constant, I can declare it as template parameter. But this way I cannot pass a variable as offset. template <typename dataT, size_t offset> class myVector{ //definition }; myVector<int, 1> vec; //valid int offset=1; myVector<int, offset> vec; //invalid I have several possible way to implement myVector in mind, each requires some boiler-plate code: Method 1: Inherit std::vector, accept offset as parameter of constructor template <typename dataT> class myVector : public std::vector { myVector(int n, int v0, int offset) {} dataT& operator [] (int index) {return this->at(index-offset);} } By inheriting, methods like size, push_back works automatically. But to accept offset as parameter of constructor, I have to make changes to all overloaded constructors, which can be verbose. Method 2: Inherit std::vector, make offset a data member. Similar to method 1, but we don't pass offset as constructor parameter. Instead, we use setOffset() to assign value to it. While this method gets rid of most boilerplate code, every myVector definition requires a setOffset, which is also not elegant. Method 3: std::vector as data member of myVector. template <typename dataT> class myVector { std::vector<dataT> stdVector; myVector(const std::vector<dataT> _stdVector, int offset) {} } This way definition/initialization of myVector becomes simple, but we need to write myVector::size(), myVector::push_back(), which is also verbose. 
So, I have concerns about all 3 methods mentioned above. Is there any elegant implementation that both makes initialization simple and do not involve boilerplate code? Thanks!
Method 4: make offset the first parameter to the constructor, use variadic parameters for the rest, and perfect-forward them. template<typename ...Args> myVector(int offset, Args && ...args) : std::vector<dataT>{std::forward<Args>(args)...} { } This solves the immediate problem of a single implementation for overriding all of std::vector's constructors. Now, there are other issues with subclassing std::vector, and all other C++ library containers, which have been talked about ad infinitum before. That may or may not be of concern to you; you should carefully consider the implications of that (the usual objection is the lack of a virtual destructor). But, that wasn't in the scope of the original question...
71,702,086
71,703,353
Why the size of class showing 12 and 16 byte?
I have five classes here, class A B C I can explain the size of them. class D I expect the result is 12 bytes, but the output is 16 bytes, I found the reason is that after adding the virtual function, the alignment will become 8 bytes, So I created the Test class again, according to the above reasoning, my expected result is 16 bytes, but after running, the result is 12 bytes, Should not the Test class is also 8 bytes alignment, and the result should be 16 bytes? Or what causes the class to perform 8 bytes alignment? Code: #include <iostream> using namespace std; class A { //empty }; class B { int a = 123; }; class C { public: void print(){ cout << "C" << endl; } private: int i = 123; }; class D { public: virtual void print(){ std::cout << "D" << std::endl; } virtual int d(){ return 0; } void add(){ std::cout << "D add" << std::endl; } private: int i; }; class Test { private: int i; int j; int l; }; int main(){ cout << sizeof(A) << endl;//1 byte:avoid null pointer cout << sizeof(B) << endl;//4 bytes:one int cout << sizeof(C) << endl;//4 bytes:one int cout << sizeof(D) << endl;//16 bytes(using 12byte):one int + one pointer and 8 alignment cout << sizeof(Test) << endl;//12 bytes:Why not 16 bytes? return 0; }
class D I expect the result is 12 bytes, but the output is 16 bytes Your expectation is misguided. I found the reason is that after adding the virtual function, the alignment will become 8 bytes, That's the reason. 12 is not aligned to 8 bytes. 16 is. So I created the Test class again, according to the above reasoning, my expected result is 16 bytes, but after running, the result is 12 bytes, You didn't add virtual functions to Test, so your expectation is wrong. Should not the Test class is also 8 bytes alignment, There's no reason to expect that. Or what causes the class to perform 8 bytes alignment? In this case, it was caused by having a virtual member function. In other cases it can also be caused by having a sub object with alignment of 8.
71,702,201
71,702,321
Different instruction address shown in ltrace and objdump
I use ltrace and objdump to analyse the simple code below. But I find there is a difference on instruction address shown between ltrace and objdump. #include <iostream> int main() { std::cout << "Hello"; return 0; } As the following info, you can see that the address of [call std::basic_ostream] is [0x400789] in ltrace. (0x400789 is the address of the instruction "call", not std::basic_ostream) binary@binary-VirtualBox:~/code/chapter5/test$ ltrace -i -C ./a.out [0x4006a9] __libc_start_main(0x400776, 1, 0x7fff06c6ad28, 0x4007f0 <unfinished ...> [0x4007b7] std::ios_base::Init::Init()(0x601171, 0xffff, 0x7fff06c6ad38, 160) = 0 [0x4007cb] __cxa_atexit(0x400650, 0x601171, 0x601048, 0x7fff06c6ab00) = 0 [0x400789] std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*)(0x601060, 0x400874, 0x7fff06c6ad38, 192) = 0x601060 [0x7f220180aff8] std::ios_base::Init::~Init()(0x601171, 0, 0x400650, 0x7f2201b96d10Hello) = 0x7f2201f19880 [0xffffffffffffffff] +++ exited (status 0) +++ However, the address of [call std::basic_ostream] shown in objdump is [0x400784] and another instruction [mov eax,0x0] is on [0x400789]. The same is true for other "call" instructions. 0000000000400776 <main>: 400776: 55 push rbp 400777: 48 89 e5 mov rbp,rsp 40077a: be 74 08 40 00 mov esi,0x400874 40077f: bf 60 10 60 00 mov edi,0x601060 400784: e8 d7 fe ff ff call 400660 <_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc@plt> 400789: b8 00 00 00 00 mov eax,0x0 40078e: 5d pop rbp 40078f: c3 ret I really want to know what causes the gap. Thank you a lot.
Those are return addresses (instruction after the call in the parent, the address which call pushes on the stack). ltrace can't know how long the instruction was that called a function, e.g. call reg with a function pointer is only 2 bytes vs. 5 for call rel32 vs. 6 for call [RIP + rel32] (memory-indirect call which GCC will use if you compile with -fno-plt.) Or if it was tail-called, execution would have reached it from a jmp or something, so even on an ISA with fixed-length instructions like MIPS or AArch64, ltrace still couldn't reliably print where it was called from. Best to not try to make too much stuff up and keep it simple, printing the address it can actually see on the callstack.
71,703,032
71,703,198
Regex pattern issue remove specific digits
I'm trying to use a regex to extract a time string in this format only "01 Apr 2022". But I'm having trouble getting these digits out "07:28:00". std::string test = "Fri, 01 Apr 2022 07:28:00 GMT"; std::string get_date(std::string str) { static std::vector<std::regex> patterns = { std::regex{"Fri,(.+)([0-9]+)GMT"}, }; for (auto& regex : patterns) { std::smatch m; if (std::regex_search(str, m, regex)) { return m[1]; } } return str; }
Here is a regex which will do the job: std::regex reg{R"(\d{2} \w+ \d{4})"};. And in your code, use m[0], not m[1]. But if your format is stable (and it sure looks like one) you don't need regex at all. Just do something like this: str.substr(5, 11) or std::string(str.begin() + 5, str.begin() + 16).
71,703,047
71,704,227
invalid type 'int[int]' for array subscript
I'm trying to convert a QByteArray to QVector<QVector3D>. extern "C" { typedef struct { double **vertexes; int top_rows_vertexes; int top_column_vertexes; double **edges; int top_rows_edges; int top_column_edges; }MATRIX; int start_processing(const char * file_name, MATRIX *date); } MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent) , ui(new Ui::MainWindow) { ui->setupUi(this); // C part: here we create the structure, get the file name and run our C function MATRIX A; QString name = "../../test.obj"; QByteArray str_name; str_name += name; int result = start_processing(str_name, &A); // here we fill the array with 3D coordinates QVector<QVector3D> coords; for (int i = 0; i < A.top_rows_vertexes; ++i) { coords.append(QVector3D(A.top_rows_vertexes[0][0], A.vertexes[0][1], A.vertexes[0][2])); } but I get an error: "invalid type 'int[int]' for array subscript". I need help. Please explain as simply as possible because I'm just learning. I tried changing the array type from double to float... it didn't help
int top_rows_vertexes; That is a plain integer A.top_rows_vertexes[0][0] Here you are trying to index into an integer, which obviously will not work. You probably want to index into vertexes like for the other cases: coords.append(QVector3D(A.vertexes[0][0], A.vertexes[0][1], A.vertexes[0][2])); But that is rather suspicious also. Presumably you would want to use the loop variable here, so maybe coords.append(QVector3D(A.vertexes[i][0], A.vertexes[i][1], A.vertexes[i][2]));
71,703,459
71,703,524
Runtime error in following leetcode problem
Problem link - https://leetcode.com/problems/maximum-average-subarray-i/ class Solution { public: double findMaxAverage(vector<int>& nums, int k) { deque<pair<int,int>> d; //deque since its a sliding window problem vector<double> ans; //to return answer double avg; double sum=0; int n=nums.size(); for(int i=0;i<n;i++) { if(!d.empty() && d.front().second<=(i-k)) //out of window size { avg=sum/k; ans.push_back(avg); //push the average of k elements in answer vector sum=sum-d.front().first; //remove the sum of first element when moving to next d.pop_front(); //remove front element of deque } d.push_back(make_pair(nums[i],i)); //push current element in deque sum=sum+d.back().first; //add it to sum } return *max_element(ans.begin(), ans.end()); //return maximum average of subarray } }; I was trying to solve leetcode problem 643 Maximum Average Subarray I, which is an easy problem based on the sliding window approach. I wrote the code and got a runtime error saying "Line 811: Char 16: runtime error: reference binding to null pointer of type 'double' (stl_iterator.h) SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c" Can anyone please help with this? Thanks in advance.
The error is here return *max_element(ans.begin(), ans.end()); If ans is empty, you dereference ans.end().
71,703,838
71,703,983
Cannot find a key of a std::map with customized compare function
#include <cstdio> #include <iostream> #include <vector> #include <map> using namespace std; struct mycmp { bool operator()(const int &a, const int &b) const { return abs(a) <= abs(b); } }; int main(){ map<int, int, mycmp> M1; map<int, int> M2; for(int i = 0; i < 5; i++) { M1[i]++; M2[i]++; } cout << (int)(M1.find(4) == M1.end()) << endl; cout << (int)(M2.find(4) == M2.end()) << endl; return 0; } the output of codes above is 1 0 which implies can't find the key 4 of M1, while 4 can be found in M2. But everything looks fine when I use an iterator to iterate M1 like for ( auto &x: M1) cout << x.first << " " << x.second << endl; it outputs 0 1 1 1 2 1 3 1 4 1 it seems to be caused by compare function, but why and how?
Given two elements a and b, the comparator is used to decide if a should come before b. If comp(a,b) == true then a comes before b. The same element cannot be placed before itself. Your comparator, however, requires exactly that, because mycmp(a,a) == true. More specifically, the comparator must impose a strict weak ordering. The constraints are listed here: https://en.cppreference.com/w/cpp/named_req/Compare It says: For all a, comp(a,a)==false If comp(a,b)==true then comp(b,a)==false if comp(a,b)==true and comp(b,c)==true then comp(a,c)==true Your comparator violates the first two. Note that the comparator and sorting do not care at all about equality. Certainly a==a, but even if this was not the case (because your elements have some odd operator==) comp(a,a) must return false. Using a comparator for sort or for a std::map that does not adhere to the requirements listed in the page linked above results in undefined behavior.
71,703,881
71,710,551
iOS Keychain in C++, how to call SecItemAdd?
I'm trying to call iOS Keychain framework from C++, following various documentation pages and StackOverflow questions I arrived at the following: // library is multiplaform so for now using macros for platform imports #ifdef ANDROID // Android specific imports #else #include <CoreFoundation/CoreFoundation.h> #include <Security/Security.h> #endif // ANDROID // ...some glue code CFStringRef keys[4]; keys[0] = kSecClass; keys[1] = kSecAttrAccount; keys[2] = kSecAttrService; keys[3] = kSecValueData; CFTypeRef values[4]; values[0] = kSecClassGenericPassword; values[1] = CFSTR("accountname2"); // TODO change this to bundle identifier values[2] = CFSTR("jsi-rn-wallet-core"); values[3] = CFSTR("testvalueblahblah"); CFDictionaryRef query = CFDictionaryCreate( kCFAllocatorDefault, (const void**) keys, (const void**) values, 4, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); OSStatus status = SecItemAdd(query, NULL); if(status != errSecSuccess) { cout << "DID NOT STORE DATA, status: " << status << endl; return {}; } cout << "Data stored" << endl; No matter what I do, I get a status of -50, which according to documentation is an incorrect param, however, I don't know which one exactly. Any idea what might be wrong?
I figured it out, as it turns out, on macOS you can put CFStrings directly in the query dictionary, but on iOS you need to store binary data. Something like this: string value = "testvalueblahblah"; std::vector<uint8_t> vec(value.begin(), value.end()); ... values[3] = CFDataCreate(kCFAllocatorDefault, &vec[0], vec.size()); After data is converted, everything started working.
71,704,144
71,749,693
Xcode, CMake unable to link C++ library with ObjC++ library
I have created a minimalistic C++ library that I want to use in my Xcode project. It has this directory structure - library/ - CMakeLists.txt - build/ // build files will reside here - iOS.cmake // toolchain file - core/ - CMakeLists.txt - squareroot.h - squareroot.cpp - platform/ - CMakeLists.txt - squrerootwrapper.h - squarerootwrapper.mm Project link - https://github.com/devojoyti/CMakeLibrary Inside build/ I am doing - cmake ../ -G Xcode -D CMAKE_TOOLCHAIN_FILE=../iOS.cmake -D IOS_PLATFORM=SIMULATOR xcodebuild -target install -configuration Debug Basically what is happening, is I am building two libraries, one inside core/ which calculates the squareroot of a number, another in platform/ which is a ObjC wrapper on the C++ squareroot computation library. I am linking the core/ library inside platform/ library. Problem: Problem is, I am unable to link the library generated inside core/, with that of generated inside platform/. The code builds just fine, and the library (and the corresponding .h header file) is generated. However, if I try to use it in my Xcode, it says this: Clearly, the core/ library functions are not accessible inside platform/. I have tried fiddling with C and CXX flags, trying to add library using find_library first, and using different toolchain, for example, this one here as well. Platform specifications: MacOS Big Sur, 11.6.5, 2.3 Ghz i9 CMake - 3.22.2 Xcode - 13.2.1
Alright, I finally figured out what the problem was. The libraries I was generating were static libraries. Static libraries cannot resolve their dependencies on their own (i.e., here, the library in platform/ cannot resolve its dependency on core/) unless I explicitly add both of them to Xcode. It started working after I added both libraries (the core/ library and the platform/ library) in Xcode. Explanation: Static Library: Though you specify a dependency, it's not resolved by the toolchain. You'd have to specify all dependencies to the final executable. E.g. if libB depends on libA, then when linking hello.exe, which only calls methods of libB, you have to specify both libB.a AND libA.a for it to work. Dynamic/Shared Library: Dependencies between libraries are resolved; just specifying the dependency needed by the final executable is good enough. You'd still have to make sure both .dylibs are in the same directory for libB.dylib to load libA.dylib. So, either create a dynamic library or add all the dependencies of the static library in Xcode as well.
71,704,403
71,704,516
Returning a boolean literal from function as reference
I have encountered this code during trying to find a bug: int x = 10; // for example const bool& foo() { return x == 10; } int bar() { bool y = foo(); // error here } This code block causes a crash when compiled with gcc11.2, while it works correctly and sets y as true in Visual Studio 2019. Both compilers give a warning about returning reference to local variable. I would like to know whether this behaviour is an UB or not. (We fixed this bug by changing bool& to bool) Edit: I forgot to put const before bool&, sorry about that.
This is undefined behaviour: the expression x == 10 creates a temporary bool, the returned const reference binds to that temporary, and the temporary is destroyed when the function returns, so the caller reads through a dangling reference.
71,704,975
71,705,200
how can i convert a class which uses template to a normal class which uses Double
I thought I was doing well using class templates. But as soon as I started going backwards I had some difficulties. My task is to remove the template RealType parameter and replace it with a normal double. I really have no idea how to get started. My idea was to simply remove this line and I thought everything would work fine, but I get errors. ArcBasis.hpp: template <typename RealType> //line to replace class K_Arc_Basis { public: DECLARE_K_STANDARD (Arc_Basis) private: typedef K_Arc_Basis<Double> ThisType; typedef K_Circle_Basis <Double> CircleType; public: K_Arc_Basis(); K_Arc_Basis( const CircleType& Circle ); private: PointType m_Center; Double m_Radius; Double m_StartAngle; Double m_Arc; }; ArcBasis.inl template <typename RealType>//line to replace inline K_Arc_Basis<RealType>::K_Arc_Basis() : m_Center(), m_Radius (1), m_StartAngle( 0 ), m_Arc( 2*KSRealPi( Double(0) ) ) { } template <typename RealType>//line to replace inline K_Arc_Basis<RealType>::K_Arc_Basis( const CircleType& Circle ) : m_Center( Circle.Center() ), m_Radius( Circle.Radius() ), m_StartAngle( 0 ), m_Arc( 2*KSRealPi( Double(0) ) ) { }
You haven't used the template type anywhere, but you have used a specialisation of that template (and also K_Circle_Basis?). You need to remove all the <Double> too. class K_Arc_Basis { public: DECLARE_K_STANDARD (Arc_Basis) // what does this expand to?? private: typedef K_Arc_Basis ThisType; typedef K_Circle_Basis CircleType; public: K_Arc_Basis(); K_Arc_Basis( const CircleType& Circle ); private: PointType m_Center; // What is PointType? double m_Radius; double m_StartAngle; double m_Arc; }; Alternately, you could start using the type parameter in your template, and provide some explicit instantiations template <typename RealType> class K_Arc_Basis { public: DECLARE_K_STANDARD (Arc_Basis) private: using ThisType = K_Arc_Basis<RealType>; using CircleType = K_Circle_Basis<RealType>; public: K_Arc_Basis(); K_Arc_Basis( const CircleType& Circle ); private: PointType m_Center; RealType m_Radius; RealType m_StartAngle; RealType m_Arc; }; using K_Arc_Basis_F = K_Arc_Basis<float>; // or whatever name using K_Arc_Basis_D = K_Arc_Basis<double>; // or whatever name
71,705,383
71,705,454
Why smart pointer type member variable can't be initialized at the declaring place in a class?
When I wanted to add a member variable of smart pointer type to a class, I found that it can't be initialized at its declaration: class Foo { public: std::shared_ptr<int> intSharedPtr = new int; // not ok Foo() {} }; But I can do this: class Foo { public: std::shared_ptr<int> intSharedPtr; // ok int* intPtr = new int; // ok Foo() { intSharedPtr.reset(new int); } }; It seems that a smart pointer is quite different from a normal pointer. Why does this happen?
std::shared_ptr can't be copy-initialized from raw pointer, the conversion constructor is marked as explicit. You can use direct-initialization: class Foo { public: std::shared_ptr<int> intSharedPtr {new int}; Foo() {} }; Or initialize from an std::shared_ptr: class Foo { public: std::shared_ptr<int> intSharedPtr = std::shared_ptr<int>(new int); Foo() {} }; And better to use std::make_shared: class Foo { public: std::shared_ptr<int> intSharedPtr = std::make_shared<int>(); Foo() {} };
71,706,321
71,706,987
Definition of static data member without repeating its type
When I have a class with a static const or constexpr data member, defining that variable requires me to repeat stuff: /// my_class.hpp class my_class { constexpr static int x = 1; }; /// my_class.cpp #include "my_class.hpp" // auto my_class::x; // error: declaration of 'auto my_class::x' has no initializer // decltype(auto) my_class::x; // error: declaration of 'decltype(auto) my_class::x' has no initializer decltype(my_class::x) my_class::x; // ok Of course I could do #define DEFINE_STATIC_DATA_MEMBER(x) decltype(x) x DEFINE_STATIC_DATA_MEMBER(my_class::x); but I wonder if there’s a non-macro solution. The question arose because both the type and the fully-qualified name of the static data member are lengthy and I’m likely to get more of these.
Starting from C++17 you don't need to separately define static constexpr variables. Just class my_class { constexpr static int x = 1; }; is enough, without a .cpp file.
71,706,827
71,706,919
What's the use of <ratio> when we have contexpr values?
The <ratio> header lets you use template meta-programming to work with and manipulate rational values. However - it was introduced in C++11, when we already had constexpr. Why is it not good enough to have a fully-constexpr'ifed library type for rationals, i.e. basically: template<typename I> struct rational { I numerator; I denominator; }; and use that instead? Is there some concrete benefit to using std::ratio that C++11 constexpr functionality would not be well-suited enough for? And if so, is it still relevant in C++20 (with the expanded "reach" of constexpr)?
Is there some concrete benefit to using std::ratio that C++11 constexpr functionality would not be well-suited enough for? You can pass ratio as a template type argument, which is what std::chrono::duration does. To do that with a value-based ratio, you need C++20 or newer. In C++20 and newer I don't see any benefits of the current design.
71,706,909
71,723,497
C++ thread still joinable after calling `join()`
I have a piece of code that simulates the provider/consumer scenario where each provider and consumer is a thread. I've paired each consumer with a provider and the consumer will wait until its provider has finished (by calling join() on the provider thread) before executing. The code is as follows: std::vector<std::thread> threads; const uint32_t num_pairs = 3; auto provider = [&](uint32_t idx) { /* produce resources and put them in a global container */ }; auto consumer = [&](uint32_t idx) { /* consumer i is paired with provider (i-num_pairs) */ uint32_t provider_idx = idx - num_pairs; threads[provider_idx].join(); assert(threads[provider_idx].joinable() == false); /* access the resources */ }; for (uint32_t i = 0; i < 2 * num_pairs; i++) { if (i < num_pairs) { /* 0, 1, 2 are providers */ threads.emplace_back(provider, i); } else { /* 3, 4, 5 are consumers */ threads.emplace_back(consumer, i); } } /* join the consumer threads later */ Most of the time it works fine but sometimes the assertion in consumer fails and the provider thread is still joinable after it has been joined. Is the implementation incorrect or is there something I am not aware of happening? Please help and thanks in advance!
The vector threads grows as you push the threads into it, so there's a good chance that some of the threads will start executing before you've finished adding all the threads. When the vector's capacity increases, all iterators and references to its elements become invalid. This means that in threads[provider_idx] the returned reference could be invalidated between the return of the [] operator and the execution of the thread method. As this is undefined behaviour, your observed misbehaviour of join and joinable isn't unexpected. Reserving the capacity of the vector before creating the threads should fix the problem: threads.reserve(2 * num_pairs);
71,707,275
71,707,513
C++ `using namespace` directive makes global-scope operator disappear?
For reasons I do not understand, the following C++ code fails to compile on VS 2022 (dialect set to C++20): #include <compare> namespace N1 {} namespace N1::N2 { class A {}; A operator-(A&); } std::strong_ordering operator-(std::strong_ordering o); namespace N1 { using namespace N2; // (1) !!! std::strong_ordering foo(); inline std::strong_ordering bar() { return -foo(); // (2) !!! } } At (2), the compiler files a complaint: error C2678: binary '-': no operator found which takes a left-hand operand of type 'std::strong_ordering' (or there is no acceptable conversion). When the using namespace directive at (1) is removed, the compiler happily finds the operator- defined at global scope for the std::strong_ordering type. This gives rise to a set of questions: Is this VS 2022 behavior (a) a bug, (b) allowed or even (c) mandatory according to the language standard? In case of (b) or (c), how? Which specific sentences in the standard allow/mandate the compiler to not find the operator- at global scope? How would you suggest to work around the issue, presuming that the using namespace directive is there to stay? Live demo
Your compiler is correct, and I would expect other compilers to agree. For the behaviour of a using-directive, see C++20 [namespace.udir]/2: A using-directive specifies that the names in the nominated namespace can be used in the scope in which the using-directive appears after the using-directive. During unqualified name lookup (6.5.2), the names appear as if they were declared in the nearest enclosing namespace which contains both the using-directive and the nominated namespace. In other words, if N1 contains using namespace N2;, then only for the purposes of unqualified name lookup, names in N2 will appear as if they are in the lowest common ancestor namespace of N1 and N2. Since N2 is inside N1, the lowest common ancestor namespace is just N1, and that means the operator- in N2 appears inside N1 when unqualified name lookup is performed. This means that unqualified lookup for operator- will find N2::operator-, and it won't proceed to the global namespace to continue searching for additional declarations. See [basic.lookup.unqual]/1: In all the cases listed in 6.5.2, the scopes are searched for a declaration in the order listed in each of the respective categories; name lookup ends as soon as a declaration is found for the name. If no declaration is found, the program is ill-formed. To work around the issue, there are two strategies. One is to place the operator- for your type in the namespace where that type is declared, so it can be found via argument-dependent lookup. However, you are not allowed to add operator overloads to the std namespace. The other strategy is to redeclare the operator- you want using a using-declaration: using namespace N2; using ::operator-; This effectively brings the operator- that you want "one level deeper", putting it at the same level as the other operator- that appears thanks to the using-directive, so unqualified name lookup will find both, and the compiler will perform overload resolution.
71,707,566
71,719,001
OpenGL first person realistic keyboard movement
So I'm making this FPS game where I want to move forward, backward, left and right with keyboard input and look around with the camera like in a real FPS game like Battlefield. The camera movement in combination with the keyboard input works great, but now my camera can fly around. And I just want to be able to stay on the "ground" and move forward, backward, left and right while looking up or down, like in a game like Battlefield, without flying around (like no-clip does). Now if I look down or up and press forward on my keyboard I can "fly", but I don't want this to happen. A friend of mine suggested using something like this to go forward: Position += glm::vec3(glm::cos(Yaw), glm::sin(Yaw), 0) * velocity instead of: Position += Front * velocity; But I don't fully understand how this would work. This is the current keyboard input code: void Camera::ProcessKeyboard(Camera_Movement direction, float deltaTime) { float velocity = MovementSpeed * deltaTime; if (direction == FORWARD) Position += Front * velocity; if (direction == BACKWARD) Position -= Front * velocity; if (direction == LEFT) Position -= Right * velocity; if (direction == RIGHT) Position += Right * velocity; } Tips or help would be appreciated! Yes this code comes from LearnOpenGL.com What we are doing here is trying to simulate movement by moving all objects in the scene in the reverse direction, giving the illusion that we are moving.
I want to thank everyone for helping me out! I understand better how everything works now. I want to move in the x-z plane and y must be zero (yes I chose the Y-axis as "up" or "the sky"), because I don't want the camera to move up or down. So when I go forward I only want to change the x and z parameter of the glm vector! void Camera::ProcessKeyboard(Camera_Movement direction, float deltaTime) { float velocity = MovementSpeed * deltaTime; if (direction == FORWARD) { // glm::vec3(X,Y,Z)!!! we only want to change the X-Z position Position += glm::vec3(glm::cos(glm::radians(Yaw)), 0, glm::sin(glm::radians(Yaw))) * velocity; //Y is not affected, Y is looking up } if (direction == BACKWARD) { // glm::vec3(X,Y,Z)!!! we only want to change the X-Z position Position -= glm::vec3(glm::cos(glm::radians(Yaw)), 0, glm::sin(glm::radians(Yaw))) * velocity; //Y is not affected, Y is looking up } if (direction == LEFT) { Position -= Right * velocity; } if (direction == RIGHT) { Position += Right * velocity; } } Now everything works great!
71,707,916
71,708,175
Sending a large text via Boost ASIO
I am trying to send a very large string to one of my clients. I am mostly following code in HTTP server example: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/examples/cpp11_examples.html Write callbacks return with error code 14, that probably means EFAULT, "bad address" according to this link: https://mariadb.com/kb/en/operating-system-error-codes/ Note that I could not use message() member function of error_code to read error message, that was causing segmentation fault. (I am using Boost 1.53, and the error might be due to this: https://github.com/boostorg/system/issues/50) When I try to send small strings, let's say of size 10 for example, write callback does not return with an error. Here is how I am using async_write: void Connection::do_write(const std::string& write_buffer) { auto self(shared_from_this()); boost::asio::async_write(socket_, boost::asio::buffer(write_buffer, write_buffer.size()), [this, self, write_buffer](boost::system::error_code ec, std::size_t transfer_size) { if (!ec) { } else { // code enters here **when** I am sending a large text. // transfer_size always prints 65535 } }); } Here is how I am using async_read_some: void Connection::do_read() { auto self(shared_from_this()); socket_.async_read_some(boost::asio::buffer(buffer_), [this, self](boost::system::error_code ec, std::size_t bytes_transferred) { if (!ec) { do_write(VERY_LARGE_STRING); do_read(); } else if (ec != boost::asio::error::operation_aborted) { connection_manager_.stop(shared_from_this()); } }); } What could be causing write callback to return with error with large string?
The segfault indicates likely Undefined Behaviour to me. Of course there's too little code to tell, but one strong smell is from you using a reference to a non-member as the buffer: boost::asio::buffer(write_buffer, write_buffer.size()) Besides that could simply be spelled boost::asio::buffer(write_buffer), there's not much hope that write_buffer stays around for the duration of the asynchronous operation that depends on it. As the documentation states: Although the buffers object may be copied as necessary, ownership of the underlying memory blocks is retained by the caller, which must guarantee that they remain valid until the handler is called. I would check that you're doing that correctly. Another potential cause for UB is when you cause overlapping writes on the same socket/stream object: This operation is implemented in terms of zero or more calls to the stream's async_write_some function, and is known as a composed operation. The program must ensure that the stream performs no other write operations (such as async_write, the stream's async_write_some function, or any other composed operations that perform writes) until this operation completes. If you checked both these causes of concern and find that something must be wrong, please post a new question including a fully self-contained example (SSCCE or MCVE)
71,708,233
71,708,437
How to avoid multiple definition of overridden method
I have a Base class and a Derived class (which derives from the Base class). Both classes have a method called "getNumber ()". In my main (), I want to run the method "getNumber()" of the Derived class. My main instantiates an object of the Animal class. In the Animal class there is a member variable: a pointer to the Base class, called "base". However, in the constructor for the Animal class, I want to set that "base" pointer to an object of type Derived. However, when I try to compile this, I get the following error message: Animal.cc:(.text+0x0): multiple definition of `Derived::getNumber()'; /tmp/ccwfn1zo.o:Derived.cc:(.text+0x0): first defined here /usr/bin/ld: /tmp/ccwfn1zo.o: in function `Derived::~Derived()': Derived.cc:(.text._ZN7DerivedD2Ev[_ZN7DerivedD5Ev]+0x26): undefined reference to `Base::~Base()' /usr/bin/ld: /tmp/ccwfn1zo.o:(.data.rel.ro._ZTI7Derived[_ZTI7Derived]+0x10): undefined reference to `typeinfo for Base' /usr/bin/ld: /tmp/ccpsXsim.o: in function `Derived::Derived()': Animal.cc:(.text._ZN7DerivedC2Ev[_ZN7DerivedC5Ev]+0x18): undefined reference to `Base::Base()' collect2: error: ld returned 1 exit status Here is the main: #include <iostream> #include "Animal.h" #include "Base.h" using namespace std; int main() { Animal * a = new Animal; cout << a->base->getNumber(); } Here is the code of the Base class in Base.h: class Base { public: Base (); virtual ~Base (); public: virtual unsigned int getNumber() { return 1; } }; Here is the code of the Derived.cc: #include "Base.h" class Derived : public Base { public: ~Derived () {} public: unsigned int getNumber () override; }; unsigned int Derived::getNumber () { return 5; } Here is the code of Animal.h class Base; class Animal { public: Animal (); ~Animal (); public: Base* base = nullptr; }; And finally Animal.cc #include "Animal.h" #include "Derived.cc" Animal::Animal () { base = new Derived; } What am I doing wrong? I think it's something basic, but I don't see it.
The problem is that you're including a source file named Derived.cc instead of a header file. This results in a multiple-definition error because that source file is then included (directly and indirectly) into other files, so there are multiple definitions of the member function Derived::getNumber(). To solve this, create a header file called Derived.h that contains the declarations of the member functions, and a separate source file that contains the implementations of those member functions, as shown below. Working Demo main.cpp #include <iostream> #include "Animal.h" #include "Base.h" int main() { Animal * a = new Animal; std::cout << a->base->getNumber(); } Base.h class Base { public: Base (); virtual ~Base (); public: virtual unsigned int getNumber() { return 1; } }; Base.cc #include "Base.h" Base::Base() { } Base::~Base() { } Animal.h class Base; class Animal { public: Animal (); ~Animal (); public: Base* base = nullptr; }; Animal.cc #include "Animal.h" #include "Derived.h" Animal::Animal () { base = new Derived; } Derived.h #include "Base.h" class Derived : public Base { public: ~Derived () {} public: unsigned int getNumber () override; }; Derived.cc #include "Derived.h" unsigned int Derived::getNumber () { return 5; } Working Demo Note: We should not include source files.
71,708,334
71,709,941
Frama-Clang: Invalid integer constant
While working with Frama-Clang, I ran into a problem. The following code shows the problem broken down to the minimum: const long long value = -1; int main(){ return 0; } Running the Frama-C (Frama-Clang) analysis leads to the following output. > frama-c invalid_integer.cpp [kernel] Parsing invalid_integer.cpp (external front-end) Now output intermediate result [kernel] invalid_integer.cpp:3: Failure: Invalid integer constant: -1 [kernel] User Error: stopping on file "invalid_integer.cpp" that has errors. [kernel] Frama-C aborted: invalid user input. There are several ways to work around this error: it works if value is of type short or int, but fails for long and long long; it works without the const keyword; it works if the variable value is defined inside the main function instead of at global scope. Where could this error come from, and can it be solved?
The error comes from a miscommunication between Frama-Clang and the Frama-C kernel itself. Specifically, in the case of initializers of global integer variables, Frama-Clang neglects to convert a negative constant into the application of unary minus to a positive integer, which is what the kernel is expecting at this point. I've taken the liberty of opening an issue on Frama-Clang's BTS.
71,709,402
71,709,634
Build error serializing struct to XML using Boost
I'm trying to serialize some structs to XML using Boost. I can't change the structs, so I'm trying to do it non-invasively. Following the simple "non intrusive version" example in the docs, I managed to get flat text serialization to work. However, when I try to extend it to XML by looking at the XML example, I'm unable to build and get errors from within the Boost libraries. I find only one hit on the error message, and I'm unable to see how to apply it to my situation to solve it. Looking generally for other posts and examples, I only see the referenced ones, which don't involve putting code inside the structs. The code and error are below. Is anyone familiar enough with this to point out what I'm doing wrong? The code is #include <iostream> #include <iomanip> #include <iostream> #include <fstream> #include <string> #include <boost/config.hpp> #include <boost/archive/xml_oarchive.hpp> #include <boost/archive/xml_iarchive.hpp> void boostTest(); int main() { boostTest(); } struct s_FileInfo { //const char* comment; int test; int motionCases; //number of motion cases }; void boostTest() { struct s_FileInfo test = { 3,2 }; std::ofstream ofs("c:\\sandpit\\test.xml", std::ofstream::out); boost::archive::xml_oarchive oa(ofs); oa << BOOST_SERIALIZATION_NVP(test); ofs.close(); } namespace boost { namespace serialization { template<class Archive> void serialize(Archive& ar, s_FileInfo& g, const unsigned int version) { ar& g.test; ar& g.motionCases; } } // namespace serialization } // namespace The errors I'm getting on attempting to build are 'mpl_assertion_in_line_6': const object must be initialized 'int boost::mpl::assertion_failed<false>(boost::mpl::assert<false>::type)': cannot convert argument 1 from 'boost::mpl::failed ************boost::serialization::is_wrapper<T>::* ***********' to 'boost::mpl::assert<false>::type' Both are in boost\archive\basic_xml_oarchive.hpp
The assert says it all: // If your program fails to compile here, its most likely due to // not specifying an nvp wrapper around the variable to // be serialized. BOOST_MPL_ASSERT((serialization::is_wrapper< T >)); So, let's add that Live On Coliru #include <boost/archive/xml_iarchive.hpp> #include <boost/archive/xml_oarchive.hpp> #include <fstream> void boostTest(); int main() { boostTest(); } struct s_FileInfo { // const char* comment; int test; int motionCases; // number of motion cases }; void boostTest() { s_FileInfo test {3, 2}; std::ofstream ofs("test.xml"); { boost::archive::xml_oarchive oa(ofs); oa << BOOST_SERIALIZATION_NVP(test); } } namespace boost::serialization { template <class Ar> void serialize(Ar& ar, s_FileInfo& g, unsigned /*unused*/) { ar& BOOST_SERIALIZATION_NVP(g.test); ar& BOOST_SERIALIZATION_NVP(g.motionCases); } } // namespace boost::serialization Creates test.xml: <!DOCTYPE boost_serialization> <boost_serialization signature="serialization::archive" version="19"> <test class_id="0" tracking_level="0" version="0"> <g.test>3</g.test> <g.motionCases>2</g.motionCases> </test> </boost_serialization>