72,338,226
72,338,287
Creating an alpha channel in OpenCV
I have a program in C++ using OpenCV 2 that loads an MP4 file and gives me frame pixels. The problem is that OpenCV outputs 24 bits per pixel, but I need 32 bits to display the frames in a WinAPI window. Is it possible to create an empty alpha channel using standard OpenCV tools?
You can use cv::cvtColor to convert a 3-channel RGB/BGR image to a 4-channel image with an additional alpha channel. As you can see in the documentation link above, the 3rd parameter specifies the required conversion:

```cpp
cv::Mat img24; // initialized somehow
// ...
cv::Mat img32;
cv::cvtColor(img24, img32, cv::COLOR_BGR2BGRA); // CV_BGR2BGRA in older OpenCV versions
```
72,338,378
72,407,467
QPushButton position does not update after window resize
I'm currently using Qt 5.14.2 and Qt Creator 4.11.1. In my current project I placed a QPushButton in the UI, then in the .cpp file I maximized the window so it uses the whole screen with the close/resize/minimize buttons available. I wrote a function that creates a popup menu when I click the button and places it right under the button. This is fine as long as you don't resize the window, but after resizing, the button's position attributes don't change, and clicking it creates the popup menu as if the window were still maximized (so outside of the actual program's window). Is this a bug in a certain Qt version? (I also went through the whole list of functions a QPushButton offers, but to no avail; none of them updates the button's coordinates based on the window's size and position on the screen so the popup menu would appear right beneath the button.) Here is my snippet:

```cpp
void DropMenu::on_dropMenu_clicked()
{
    QMenu* menu = new QMenu(this);
    menu->addAction(new QAction("Help"));
    // Using * 2 because the program window's y and x pos is 0, which is the
    // window's border height (where the app icon and app name are).
    int ypos = ui->dropMenu->pos().y() + ui->dropMenu->height() * 2;
    qDebug() << ui->dropMenu->pos().x();
    qDebug() << ui->dropMenu->pos().y() << " "
             << (ui->dropMenu->pos().y() + ui->dropMenu->height()) << " " << ypos;
    QPoint point;
    point.setX(ui->dropMenu->pos().x());
    point.setY(ypos); // to render it just under the button's y pos
    menu->popup(point);
    connect(menu, SIGNAL(triggered(QAction*)), this, SLOT(DisplayHelp(QAction*)));
}
```
So to get your QMainWindow position, the following code should be used:

```cpp
// Initialising QPoint with mapToGlobal, which gives us the x/y based on
// where our window is on the screen.
QPoint point(ui->centralwidget->mapToGlobal(ui->centralwidget->pos()));
// Adding the button's height so the popup menu is created just below the
// button each time. Note that my button is in the top-left corner, which is
// why I don't add the button's x coordinate within the window.
point.setY(point.y() + ui->dropButton->height());
menu->popup(point); // Displaying the popup menu.
// Hooking up to a signal to do things for us.
connect(menu, SIGNAL(triggered(QAction*)), this, SLOT(DisplayHelp(QAction*)));
```
72,338,665
72,338,769
Modify maps by reference in template class
Hey guys! I have a school project to make. In the project I have to create a map_storage template class that is able to store/know which maps it's going to modify and perform different operations on them. The main.cpp file was already created for me, so that I can check whether my code is correct. Template class:

```cpp
#include <map>
#include <utility>
#include <vector>

#ifndef MAPALIGN_H
#define MAPALIGN_H

template<class Key, class T, class Compare = std::less<Key> >
class map_storage {
public:
    void add(std::map<Key, T, Compare>& temp_map) {
        data_.push_back(temp_map);
    }
    void doSomething() {
        std::cout << data_.size() << std::endl;
        // This does something to all maps in the vector.
    }
private:
    std::vector<std::map<Key, T, Compare> > data_;
};
#endif
```

When I modify the maps in data_ (add new keys/values etc.) I want to see the changes in main too, and the same the other way around. My question is: how can I store references to the maps in a template class? Example of the main method:

```cpp
#include <iostream>
#include "mapalign.h"
#include <string>
#include <map>

int main() {
    std::map<std::string, int> map1;
    map_storage<std::string int> ms;
    ms.add(map1);
    map1["asd"] = 1;
    map1["dsa"] = 2;
    ms.doSomething();
    // Tests check that operations on map1 were performed on the map with
    // size 2 and not on the empty one.
    // ...
    return 0;
}
```

So in the main.cpp file it is mandatory that all modifications show up on the map whose reference was added to the template class. In this case the doSomething() function works with the map that has the "asd" → 1, "dsa" → 2 key/value pairs, and not just the empty copy of the map that was added earlier to the vector. During the doSomething() operation we have to know all map data to modify data in the other maps accordingly. If anyone has any idea about this problem I would really appreciate it. (I know that C++ does not support storing references in an array or a vector, so there must be another way that I can't figure out.) Thank you again.
We cannot have a vector of lvalue references. You could instead store pointers inside the vector, as shown below:

```cpp
template<class Key, class T, class Compare = std::less<Key> >
class map_storage {
public:
    //------------------------------------v--------------->pointer
    void add(std::map<Key, T, Compare>* temp_map) {
        data_.push_back(temp_map);
    }
    void doSomething() {
        std::cout << data_.size() << std::endl;
        std::cout << data_.at(0)->size(); // prints 2
    }
private:
    //---------------------------------------v------------->pointer
    std::vector<std::map<Key, T, Compare>* > data_;
};

int main() {
    std::map<std::string, int> map1;
    map_storage<std::string, int> ms;
    //-----------------------^----------->added comma here which was missing in the original code
    ms.add(&map1);
    map1["asd"] = 1;
    map1["dsa"] = 2;
    ms.doSomething();
}
```

Working demo

Also, make sure that you don't dereference pointers that point to objects that no longer exist.

Method 2

```cpp
template<class Key, class T, class Compare = std::less<Key> >
class map_storage {
public:
    //------------------------------------v--------------->reference
    void add(std::map<Key, T, Compare>& temp_map) {
        data_.push_back(&temp_map);
    }
    void doSomething() {
        std::cout << data_.size() << std::endl;
        std::cout << data_.at(0)->size(); // prints 2
    }
private:
    //---------------------------------------v------------->pointer
    std::vector<std::map<Key, T, Compare>* > data_;
};

int main() {
    std::map<std::string, int> map1;
    map_storage<std::string, int> ms;
    //-----------------------^----------->added comma here which was missing in the original code
    ms.add(map1);
    map1["asd"] = 1;
    map1["dsa"] = 2;
    ms.doSomething();
}
```

Working demo
72,338,833
72,338,919
Why can't I pass an operator to a function from another function in C++?
In my C++ homework (where I have to sort different arrays with different methods), I ran into a problem: I can't pass comp from one function to another. Here is a simplified version of my code:

```cpp
template <typename T, typename Compare = std::less<T>>
void fooFunction(T arr[], int arraySize, Compare comp = Compare{})
{
    int endElem = arraySize - 1;
    int beginElem = 0;
    fooFunction2(arr, beginElem, endElem, arraySize, comp()); // I am getting the errors here
}

template <typename T, typename Compare = std::less<T>>
void fooFunction2(T arr[], int beginElem, int endElem, int arraySize, Compare comp = Compare{})
{
}

struct string_size_less
{
    bool operator()(const std::string& a, const std::string& b) const
    {
        return a.size() < b.size();
    }
};

int main()
{
    int arrI[] = { 4, 5, 1, 4, 2 };
    std::string arrS[] = { "Car", "Bicycle", "Metro", "Bike" };
    fooFunction(arrI, 5);
    fooFunction(arrS, 4, string_size_less());
    return 0;
}
```

At the moment I am getting:

error: no match for call to '(std::less<int>) ()'
error: 'fooFunction2' was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]

What would be the correct way to solve this?
I changed two things:

In your call to fooFunction2(...), don't pass comp() but comp. If you add parentheses, you invoke the comparator, which fails because a comparison function needs arguments passed to it. Passing comp instead hands over the function object itself.

Place the definition of fooFunction2(...) above the point where you call it. If you define it later, the compiler won't have seen it at the moment it encounters the first call. Just swapping the definitions is enough to fix it, though a forward declaration (or a declaration in a header file, especially in larger projects) is the smoother way.

This code compiles just fine:

```cpp
template <typename T, typename Compare = std::less<T>>
void fooFunction2(T arr[], int beginElem, int endElem, int arraySize, Compare comp = Compare{})
{
}

template <typename T, typename Compare = std::less<T>>
void fooFunction(T arr[], int arraySize, Compare comp = Compare{})
{
    int endElem = arraySize - 1;
    int beginElem = 0;
    fooFunction2(arr, beginElem, endElem, arraySize, comp); // no more errors here
}

struct string_size_less
{
    bool operator()(const std::string& a, const std::string& b) const
    {
        return a.size() < b.size();
    }
};

int main()
{
    int arrI[] = { 4, 5, 1, 4, 2 };
    std::string arrS[] = { "Car", "Bicycle", "Metro", "Bike" };
    fooFunction(arrI, 5);
    fooFunction(arrS, 4, string_size_less());
    return 0;
}
```
72,338,886
72,338,939
How to return a variable from a destructor
So I have a struct called timer which determines how much time a block of code took to execute. I'm going to run a few benchmarks on my sorting algorithms and take the average time each one took.

```cpp
struct Example {
    std::chrono::time_point<std::chrono::steady_clock> start, end;

    Example()
    {
        start = std::chrono::high_resolution_clock::now();
    }

    ~Example()
    {
        end = std::chrono::high_resolution_clock::now();
        std::chrono::duration<float> duration = end - start;
        float ms = duration.count() * 1000.0f;
        std::cout << ms << " milliseconds\n";
        // a way to return ms?
    }
};
```

However, I was not able to find a way to get the variable ms out of the destructor and assign it to something after measuring the time. Is there any way to get it out, or can I write my struct in a better way?
You can't return anything from the destructor, but you can assign the value to a variable that you supply to Example upon creation. Example:

```cpp
#include <chrono>
#include <iostream>

template <class Clock = std::chrono::steady_clock>
struct Example {
    std::chrono::time_point<Clock> start;
    std::chrono::duration<float>& duration; // a reference

    Example(std::chrono::duration<float>& dur) : // take the duration as an argument
        start(Clock::now()), duration(dur) {}

    ~Example() {
        auto end = Clock::now();
        duration = end - start; // assign the value
    }
};

int main() {
    std::chrono::duration<float> duration;
    {
        Example<> x(duration);
    }
    std::cout << duration.count() << '\n'; // read it afterwards
}
```
72,339,157
72,339,280
How to approach this coding challenge?
Source

Create a function that performs an even-odd transform to an array, n times. Each even-odd transformation:

Adds two (+2) to each odd integer.
Subtracts two (-2) from each even integer.

Examples:

evenOddTransform([3, 4, 9], 3) ➞ [9, -2, 15]
// Since [3, 4, 9] => [5, 2, 11] => [7, 0, 13] => [9, -2, 15]
evenOddTransform([0, 0, 0], 10) ➞ [-20, -20, -20]
evenOddTransform([1, 2, 3], 1) ➞ [3, 0, 5]

It seems to me that in order to do this challenge, I will need to define a new function evenOddTransform which takes an array and an integer as input. Within evenOddTransform I will also need a for loop which scans to the size of the passed array (for (i = 0, i < sizeofarray, i=i+2) in pseudocode). However, C++ is apparently unable to calculate the size of the array when it's passed to a function. In other words, I need to calculate the size of the array first and then pass that size together with the array to evenOddTransform. But if I do that, then evenOddTransform needs at least three inputs, while the examples only take two. How should I approach this coding challenge?
I suggest that you take a std::vector<int-type> by reference and do the transformation on that instead, since raw array sizes need to be known at compile time. Example:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

void evenOddTransform(std::vector<std::intmax_t>& vec, unsigned times) {
    // Parity never changes (odd + 2 stays odd), so n transforms collapse
    // into a single +/- 2*n adjustment.
    times *= 2;
    std::transform(vec.begin(), vec.end(), vec.begin(),
                   [&times](std::intmax_t val) {
                       if (val % 2) return val + times; // odd
                       return val - times;              // even
                   });
}

int main() {
    std::vector<std::intmax_t> vec{3, 4, 9};
    evenOddTransform(vec, 3);
    for (auto val : vec) std::cout << val << ' '; // 9 -2 15
}
```
72,339,931
72,340,007
"multiple definition of" while variable is not defined anywhere else in the scope
I have these three source files:

test.h

```cpp
#ifndef __TESTH
#define __TESTH

#ifdef __cplusplus
#define EXTERNC extern "C"
#else
#define EXTERNC
#endif

typedef struct {
    uint8_t value;
} my_struct;

EXTERNC void initialise();
EXTERNC void load(my_struct**);
#endif
```

test.cpp:

```cpp
#include <cstdint>
#include "test.h"

my_struct test;

void initialise() {
    test.value = 200;
}

void load(my_struct** struct_ptr) {
    *struct_ptr = &test;
}
```

main.cpp:

```cpp
#include <cstdint>
#include <iostream>
#include "test.h"

my_struct *test;

int main() {
    initialise();
    load(&test);
    while (true) {
        std::cout << test->value << std::endl;
    }
}
```

When I compile it, the linker gives me an error telling me that test has been defined multiple times (first defined in test.cpp). Why? To me it seems like it doesn't leave the scope of test.cpp. And when I remove the definition of test in main.cpp, it gives me an undefined error! Thank you for taking the time out of your day to help me.
Both files define an object named test with external linkage (the fact that they have different types doesn't help; the linker only sees the name), so the linker rightly reports a multiple definition. You need to scope test.cpp's test variable to that file only, assuming your test pointer in main.cpp is meant to be a different object than test in test.cpp:

```cpp
namespace {
    my_struct test;
}
```

See here
72,340,339
72,340,510
How to pass a name for convenient use of tuple?
I would like to improve the code so that it is convenient to interact with it.

```cpp
struct prototype {
    template <class... T1>
    prototype(T1&&... args) {
        auto p = std::tie(args...);
        std::cout << std::get<0>(p) << std::endl;
        if constexpr (std::tuple_size_v<decltype(p)> >= 3) {
            std::cout << std::get<2>(p) << std::endl;
        }
    }
};

int option_1 = 10;
std::string option_2 = "test2";
auto option_3 = 0.41;
std::vector<int> option_4(10);

int main() {
    prototype p1(option_1, option_2, option_3, option_4);
    prototype p2(option_1, option_2, option_3);
    prototype p3(option_1, option_2);
    prototype p4(option_1);
}
```

I would like to write it like this:

```cpp
std::cout << option_1 << std::endl;
if constexpr (std::tuple_size_v<decltype(p)> >= 3) {
    std::cout << option_2 << std::endl;
}
```

I don't like this option:

```cpp
std::get<0>(p)
```

Any ideas how to replace the call to tuple? You can also see the option on https://godbolt.org/z/bT4Wzjco8
You can create a variable template out of a lambda. At the end of the day, all you want is a compile-time constant to pass to std::get:

```cpp
template <std::size_t N>
constexpr auto option = [](auto p) -> auto&& { return std::get<N - 1>(p); };
```

This can be used as

```cpp
option<1>(p)
```

Demo

The familiar template syntax for lambdas may seem like another alternative:

```cpp
constexpr auto option = []<std::size_t N>(auto p) { return std::get<N - 1>(p); };
```

Here the argument to std::get is passed as a non-type template parameter. As @Davis Herring mentions, this unfortunately does not mean the lambda can then be used as option<1>(p). The reason is that the lambda is not itself a template; its function call operator is. The proposal changes nothing about the templateness of the lambda itself. As a result, the lambda above is invocable as

```cpp
option.operator()<1>(p)
```

Demo
72,340,344
72,340,486
How to transform 2D tuple [x][y] to [y][x] and call variadic function for each result set
I want to "rotate" the axes of a 2D tuple and call a variadic function for each of the result sets. All tuple elements have the same type, but the element items/attributes might have different types. Starting from

```cpp
constexpr auto someData = std::make_tuple(
    std::make_tuple(1, 2, 3.0),
    std::make_tuple(4, 5, 6.0),
    std::make_tuple(7, 8, 9.0));
```

the result I want to achieve is calls to a variadic function like this:

```cpp
someFunction(1, 4, 7);
someFunction(2, 5, 8);
someFunction(3.0, 6.0, 9.0);
```

I was trying to solve this using std::get<index>(tuple) in a lambda, creating indices with std::make_index_sequence and invoking a variadic function, passing tuple elements via std::apply, like this (without success):

```cpp
#include <iostream>
#include <tuple>

constexpr auto someFunction(auto&&... args) {
    // do some stuff
    ((std::cout << args), ...);
}

int main() {
    constexpr auto someData = std::make_tuple(
        std::make_tuple(1, 2, 3.0),
        std::make_tuple(4, 5, 6.0),
        std::make_tuple(7, 8, 9.0)
    );

    // want to get
    // someFunction(1, 4, 7);
    // someFunction(2, 5, 8);
    // someFunction(3.0, 6.0, 9.0);

    using t0_t = typename std::tuple_element<0, decltype(someData)>::type;

    [] <std::size_t... I> (auto&& tuple, std::index_sequence<I...>) {
        ([&] (std::size_t i) {
            std::apply([&](auto&&... args) {
                //(std::get<i>(args), ...); // someFunction
                ((std::get<i>(args), ...));
            }, tuple);
        }(I), ...);
    }(std::forward<decltype(someData)>(someData),
      std::make_index_sequence<std::tuple_size<t0_t>::value>{});
}
```

How can that be done correctly?
Use nested lambdas, one to expand the index and one to expand the tuples:

```cpp
constexpr auto someData = std::make_tuple(
    std::make_tuple(1, 2, 3.0),
    std::make_tuple(4, 5, 6.0),
    std::make_tuple(7, 8, 9.0)
);

// someFunction(1, 4, 7);
// someFunction(2, 5, 8);
// someFunction(3.0, 6.0, 9.0);
std::apply([](auto first_tuple, auto... rest_tuples) {
    [&]<std::size_t... I>(std::index_sequence<I...>) {
        ([]<size_t N>(auto... tuples) {
            someFunction(std::get<N>(tuples)...);
        }.template operator()<I>(first_tuple, rest_tuples...), ...);
    }(std::make_index_sequence<std::tuple_size_v<decltype(first_tuple)>>{});
}, someData);
```

Demo
72,340,706
72,349,736
Unable to get my software C++ glsl implementation to behave correctly
So, I am writing a couple of functions to run GLSL fragment shaders on the CPU. I implemented all the basic mathematical functions to do so. However, I can't even get the simplest things to execute correctly. I was able to trace it down to one of these functions:

```cpp
template<unsigned vsize>
_SHADERH_INLINE vec<float, vsize> normalize(const vec<float, vsize>& vs) {
    vec<float, vsize> fres;
    float mod = 0.0f;
    for (unsigned i = 0; i < vsize; ++i) {
        mod += vs[i] * vs[i];
    }
    float mag = glsl::sqrt(mod);
    if (mag == 0) {
        std::logic_error("In normalize the input vector is a zero vector");
    }
    for (unsigned i = 0; i < vsize; ++i) {
        fres[i] = vs[i] / mag;
    }
    return fres;
}

template<unsigned vsize>
_SHADERH_INLINE vec<float, vsize> length(const vec<float, vsize>& vs) {
    float fres = 0;
    for (unsigned it = 0; it != vsize; it++) {
        fres += vs[it] * vs[it];
    }
    return glsl::sqrt(fres);
}

template<unsigned vsize>
_SHADERH_INLINE vec<float, vsize> dot(const vec<float, vsize>& vs1, const vec<float, vsize>& vs2) {
    return std::inner_product(vs1.begin(), vs1.end(), vs2.begin(), 0);
}
```

But it could still be something in my vec implementation (I'm pretty certain it's not). So, does anyone see anything wrong with the code above, or anything that does not align with GLSL behavior? If nobody finds anything wrong here, I will make a follow-up question with my vec implementation.
There are a few things that may be the origin of the problems.

Failing dot function

After a bit of testing, it turned out that your dot function fails because you are providing an integer value 0 as the initial value in your call to inner_product, which results in wrong calculations. See this post for the reason: Zero inner product when using std::inner_product

Simply write 0.0f to ensure a float as the initial value for the accumulator:

```cpp
vec<float, vsize> dot(const vec<float, vsize>& vs1, const vec<float, vsize>& vs2) {
    return std::inner_product(vs1.begin(), vs1.end(), vs2.begin(), 0.0f);
}
```

More generally, I recommend always writing any hardcoded value with a clear indication of its type. You can also write your dot product manually, as you did in normalize and length:

```cpp
vec<float, vsize> dot(const vec<float, vsize>& vs1, const vec<float, vsize>& vs2) {
    float f = 0.0f;
    for (unsigned i = 0; i < vsize; ++i) {
        f += vs1[i] * vs2[i];
    }
    return f;
}
```

No exception thrown when normalizing a null vector

In the normalize function, you are not actually throwing any error when mag == 0. When passing a null vector to normalize, it returns (-nan, -nan, -nan) instead of throwing the error you want. std::logic_error does not throw any error; it merely creates an object that can be thrown (see: https://en.cppreference.com/w/cpp/error/logic_error). Thus instead of writing:

```cpp
if (mag == 0) {
    // Doesn't do anything...
    std::logic_error("In normalize the input vector is a zero vector");
}
```

You must write:

```cpp
if (mag == 0.0f) {
    // Actually throws a `logic_error` exception
    throw std::logic_error("In normalize the input vector is a zero vector");
}
```

Handling of floats as vec types

This depends on your implementation of vec. length and dot return scalar values (float), not vectors, yet both are declared to return vec<float, vsize>. Since (to my understanding) you didn't mention compilation errors, I assume your vec type can handle floats as 1-component vectors. Be sure that this is indeed working.

Here is the code I used to test your functions. I quickly implemented a simple vec class and adapted your functions to this type:

```cpp
#include <iostream>
#include <vector>
#include <math.h>
#include <numeric>
#include <stdexcept>

class vec {
public:
    int vsize;
    std::vector<float> vals;
    vec(int s) : vsize(s) {
        vals = std::vector<float>(vsize, 0.0f);
    }
};

void print_vec(vec& v) {
    for (unsigned i = 0; i < v.vsize; i++) {
        std::cout << v.vals[i] << " ";
    }
    std::cout << std::endl;
}

vec normalize(const vec& vs) {
    vec fres(vs.vsize);
    float mod = 0.0f;
    for (unsigned i = 0; i < vs.vsize; ++i) {
        mod += vs.vals[i] * vs.vals[i];
    }
    float mag = sqrt(mod);
    if (mag == 0.0f) {
        throw std::logic_error("In normalize the input vector is a zero vector");
    }
    for (unsigned i = 0; i < vs.vsize; ++i) {
        fres.vals[i] = vs.vals[i] / mag;
    }
    return fres;
}

float length(const vec& vs) {
    float fres = 0;
    for (unsigned it = 0; it != vs.vsize; it++) {
        fres += vs.vals[it] * vs.vals[it];
    }
    return sqrt(fres);
}

float dot(const vec& vs1, const vec& vs2) {
    return std::inner_product(vs1.vals.begin(), vs1.vals.end(), vs2.vals.begin(), 0.0f);
}
```

In main I simply tested some hardcoded vectors and printed the results of normalize, length and dot. I ran the code on https://www.onlinegdb.com/online_c++_compiler
72,341,025
72,341,264
Why does this code behave differently on C-stdio function overloads? (vfprintf vs. putchar)
I'm trying to define various functions with the same names as C stdio functions to prevent unwanted usage. I encountered an odd situation where the technique works on some functions, but not others. I cannot explain why A::fn calls the stdio version of vfprintf instead of the function definition in the A namespace.

```cpp
#include <stdio.h>
#include <stdarg.h>

namespace A {
    template <typename... Ts>
    void putchar(int ch) {
        static_assert(sizeof...(Ts) == -1);
    }

    template <typename... Ts>
    void vfprintf(FILE* file, const char* fmt, va_list vlist) {
        static_assert(sizeof...(Ts) == -1);
    }

    void fn(const char* fmt, ...) {
        putchar('A'); // fails to compile (as expected)

        va_list vlist;
        va_start(vlist, fmt);
        vfprintf(stdout, "Hello!\n", vlist); // does not fail (not expected)
        va_end(vlist);
    }
}

int main() {
    A::fn("hello");
    return 0;
}
```

P.S. Happy to hear comments indicating that there is a better way to restrict C-style I/O (maybe clang-tidy).
Replacing standard library routines is UB (citation to follow). See examples here and here for the kind of trouble this can cause.

Edit: OK, here's the promised citation. The C++ standard library reserves the following kinds of names: ... names with external linkage ... If a program declares or defines a name in a context where it is reserved, other than as explicitly allowed by [library], its behavior is undefined.

But, as discussed in the comments, I'm not sure whether you're doing this in a permitted context or not (although, on reflection, I don't think you are), so I'm going to change tack. There is, in fact, a very simple way to do what you want: #pragma GCC poison. So, taking your example, all you need is:

```cpp
#pragma GCC poison putchar vfprintf
```

and you're done. clang also supports this pragma. Live demo.

Hats, rabbits, we can do it all :) (on good days)
72,341,040
72,341,076
Why does getline() not work unless I call it twice in the function charmodifier?
What's wrong if I use the getline function only once in the charmodifier function? The program seems to ignore it unless I call it twice. Why can't I use it only once? I tried other approaches that worked, but I want to understand this one.

```cpp
#include <iostream>
#include <string>
using namespace std;

class egybest {
    string link, m;
    char sys, type, restart;
    int s = 1, e = 1, date;

public:
    string charmodifier() {
        // here
        getline(cin, m);
        getline(cin, m);
        for (int x = 0; x <= m.size(); x++) {
            if (m[x] == ' ')
                m[x] = '-';
        }
        return m;
    }

    ~egybest() {
        system("cls");
        cout << "do you want to restart the program? y:n;" << endl;
        cin >> restart;
        system("cls");
        if (restart == 'y' || restart == 'Y')
            egybest();
        else if (restart == 'n' || restart == 'N') {
            system("exit");
        }
    }

    egybest() {
        cout << "do you want to watch a movie or a series? 1:2;" << endl;
        cin >> type;
        system("cls");
        if (type == '1')
            linkmovie();
        else if (type == '2')
            series();
        else
            cout << "wrong input!" << endl;
    }

    void linkmovie() {
        cout << "enter the name of the movie:" << endl;
        charmodifier();
        cout << "enter the release date: " << endl;
        cin >> date;
        link = "start https://cape.egybest.cool/movie/" + m + "-" + to_string(date);
        cout << endl;
        system(link.c_str());
    }

    void series() {
        cout << "do you want it to open links for a particular season, particular episode or all seasons? s:e:a;" << endl;
        cin >> sys;
        system("cls");
        if (sys == 'S' || sys == 's')
            linkseason();
        else if (sys == 'A' || sys == 'a')
            linkall();
        else if (sys == 'E' || sys == 'e')
            linkepisode();
        else
            cout << "wrong input!" << endl;
    }

    void linkall() {
        cout << "season No." << endl;
        cin >> s;
        cout << "episode No." << endl;
        cin >> e;
        cout << "enter the name of the show:" << endl;
        charmodifier();
        for (int j = 1; j <= s; j++) {
            for (int i = 1; i <= e; i++) {
                link = "start https://cape.egybest.cool/episode/" + m + "-season-" + to_string(j) + "-ep-" + to_string(i);
                system(link.c_str());
            }
        }
        cout << endl;
    }

    void linkepisode() {
        cout << "season No." << endl;
        cin >> s;
        cout << "episode No." << endl;
        cin >> e;
        cout << "enter the name of the show:" << endl;
        charmodifier();
        link = "start https://cape.egybest.cool/episode/" + m + "-season-" + to_string(s) + "-ep-" + to_string(e);
        cout << endl;
        system(link.c_str());
    }

    void linkseason() {
        cout << "season No." << endl;
        cin >> s;
        cout << "episodes No." << endl;
        cin >> e;
        cout << "enter the name of the show:" << endl;
        charmodifier();
        for (int i = 1; i <= e; i++) {
            link = "start https://cape.egybest.cool/episode/" + m + "-season-" + to_string(s) + "-ep-" + to_string(i);
            cout << endl;
            system(link.c_str());
        }
    }
};

int main() {
    egybest egy;
    return 0;
}
```
The problem is that after reading an integer or a character, as in for example:

```cpp
cout << "episode No." << endl;
cin >> e;
cout << "enter the name of the show:" << endl;
charmodifier();
//...
```

the input buffer still contains the newline character '\n' that corresponds to the pressed Enter key. So the following call of getline reads an empty string, stopping as soon as that newline character is encountered. In such a case, before calling getline you need to remove the newline character from the input buffer, for example:

```cpp
#include <limits>

//...

cout << "episode No." << endl;
cin >> e;
cout << "enter the name of the show:" << endl;
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
charmodifier();
//...
```
72,341,290
72,347,075
How can I clear console to the right of the cursor in Windows NT command prompt with C++?
I need output on the same line, like this:

```cpp
std::cout << "\rIt's " << leisure << " time!";
```

I want to make sure I do not see leftovers like time!e! at the end of the line if the next value of leisure is shorter. \t does not overwrite the symbols (when run from cmd), so I still see time!e!. \033[0K is a VT100 escape sequence and is not supported by the classic Windows NT console. What is the best way to solve the problem and make it as cross-platform as possible?
I wanted to avoid additional coding for a single line terminator, but since there is seemingly no other solution, I resorted to Console Virtual Terminal Sequences, as suggested in the comments:

```cpp
#ifndef WIN32_LEAN_AND_MEAN
#define WIN32_LEAN_AND_MEAN
#endif
#include <windows.h>
#include <string>

static const bool enable_VT_mode(void)
{
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    if (hOut == INVALID_HANDLE_VALUE)
        return false;
    DWORD dwMode = 0;
    if (!GetConsoleMode(hOut, &dwMode))
        return false;
    dwMode |= ENABLE_VIRTUAL_TERMINAL_PROCESSING;
    if (!SetConsoleMode(hOut, dwMode))
        return false;
    return true;
}

const std::string get_terminator(void) noexcept
{
    if (enable_VT_mode())
        return "\x1b[0K";
    else
        return " ";
}
```

Then go like this:

```cpp
static const std::string terminator = get_terminator();
std::cout << "\rIt's " << leisure << " time!" << terminator;
```

As suggested, I added WIN32_LEAN_AND_MEAN to exclude unnecessary stuff from windows.h.
72,341,483
72,342,952
How can I generate a C++23 stacktrace with GCC 12.1?
In the release notes for GCC 12, under the section "Runtime Library (libstdc++)", it says:

Improved experimental C++23 support, including: [...] <stacktrace> (not built by default, requires linking to an extra library).

What library do I need to link against to use <stacktrace>? I'm on an x86 Linux system, if that matters.
You need to link with -lstdc++_libbacktrace (as now documented here). In order for this to work, gcc needs to have been configured with --enable-libstdcxx-backtrace.
72,341,710
72,341,746
How to make two Forms access the same variables in C++Builder
I have two Forms, both including the same .cpp file that has these global variables:

```cpp
static vector<News> allNews;
static vector<user> allUsers;
static admin appAdmin("admin", "adminpassword");
static int userIndex = 0;
```

The problem is that when Form A adds News objects to the vector, Form B seems to be viewing a different vector that is empty. How can I solve this?
"Both are including the same .cpp" — never include .cpp files. Make a .h file with the content:

```cpp
extern vector<News> allNews;
extern vector<user> allUsers;
extern admin appAdmin;
extern int userIndex;
```

And then update the .cpp file by removing static:

```cpp
vector<News> allNews;
vector<user> allUsers;
admin appAdmin("admin", "adminpassword");
int userIndex = 0;
```

Include the newly created .h file into the forms.

If you include the .cpp file with the static variables into two forms, you get two translation units, each of which has its own unique static variables that are not visible to other translation units.
72,341,945
72,342,154
How does a compiler store information about an array's size?
Recently I read on IsoCpp about how the compiler knows the size of an array created with new. The FAQ describes two ways of implementing it, but only at a basic level and without any internal details. I tried to find an implementation of these mechanisms in the standard library sources from Microsoft and GCC, but as far as I can see, both of them just call malloc internally. I tried to go deeper and found an implementation of the malloc function in GCC, but I couldn't figure out where the magic happens. Is it possible to find out how this works, or is it implemented in the system runtime libraries?
Here is where the compiler stores the size in the source code for GCC: https://github.com/gcc-mirror/gcc/blob/16e2427f50c208dfe07d07f18009969502c25dc8/gcc/cp/init.c#L3319-L3325 And the equivalent place in the source code for Clang: https://github.com/llvm/llvm-project/blob/c11051a4001c7f89e8655f1776a75110a562a45e/clang/lib/CodeGen/ItaniumCXXABI.cpp#L2183-L2185 What the compilers do is store a "cookie" which is the number of elements allocated (the N in new T[N]) immediately before the pointer that new T[N] returns. This in turn means that a few extra bytes have to be allocated in the call to operator new[]. The compiler generates code to do this at runtime. operator new[](std::size_t x) itself does no work: It simply allocates x bytes. The compiler makes new T[N] call operator new[](sizeof(T) * N + cookie_size). The compiler does not "know" the size (it's a run-time value), but it knows how to generate code to retrieve the size on a subsequent delete[] p.
72,342,270
72,343,567
Limited space iterators
I've implemented a tree (not a binary tree, every node can have several child nodes). For every node, we can access its level in the tree, its children and its parent node. The next phase is to implement 2 iterators for this tree but the catch is I can not save more than a constant amount of information to help complete these iterators (i.e constant space complexity). My questions are, given a node n that I'm currently at: What would be the algorithm to find the next node to traverse in a BFS order? What would be the algorithm to find the next node to traverse in a *Reverse BFS order? *Note: Reverse BFS is traversing the levels in reverse order e.g Reverse BFS of the following tree 1 / | \ 2 3 4 / / \ \ 5 6 7 8 would be 5 6 7 8 then 2 3 4 and then 1.
Here's a sketch of the algorithm. Very inefficient, but satisfies your requirement of only using O(1) additional space. Node* GoRight(Node* c) If c is root (there's no parent), return NULL Let p be the parent of c. Find its child r immediately to the right of c (may need to do a linear search of p's child links). If found, return r If not found (c is the right-most child), set c=p, repeat from the start. The node thus found may be at a higher level than the node we started with. Node* GoDownToLevel(Node* p, int k) If p is NULL, return NULL If p is at level k, return p. Starting from p, follow the left-most child links down until level k is reached or there are no links to follow. Let c be the node thus found. If c is at level k, return c. Otherwise, c is a leaf node at a level above k. Set p = GoRight(c), repeat from the start. Node* NextAtLevel(Node* c, int k) Return GoDownToLevel(GoRight(c), k) Node* NextInBFSOrder(Node* c) Let k be the level of c. Let r = NextAtLevel(c, k). If r is not NULL, return r. Otherwise, traverse the parent chain all the way to the root, return GoDownToLevel(root, k+1). Alternatively, root could be stored in the iterator. Alternatively, the iterator could keep track of the leftmost child of the first non-leaf node it encountered while traversing level k, and jump to that child once NextAtLevel fails; this child starts the iteration at level k+1. Reverse BFS would work similarly. The hard part is finding the node to start the traversal from. Basically, perform GoDownToLevel(root, infinity) while keeping track of the deepest level encountered and the first node encountered at that level. And of course, do GoDownToLevel(root, k-1) instead of GoDownToLevel(root, k+1) when NextAtLevel fails. If you keep track of the height h of the tree while its being built, then you can start the traversal with GoDownToLevel(root, h)
72,342,288
72,342,461
How to create loops and use push_back()
I am trying to have the user choose how many names they would like to add, and then use push_back() to add that many to the list. I am new to programming and very confused. Here is the code I have so far: int main() { int numNames; std::cin >> numNames; vector<string> names; numNames = read_integer("How many names would you like to add?"); for (int i = 0; i < numNames; i++) { names[i] = read_string("Enter name:"); i++; while (cin >> numNames) { names.push_back(read_string("First value:")); } }
When using the std::vector there is no need to have the user enter the number of names beforehand. That is the benefit of having std::vector handle the memory management for you. You simply read and validate the input and .push_back() to the vector until the user is done. Using getline() for input allows you to read names with whitespace (e.g. "first last") while using std::cin would stop reading at the first whitespace encountered. Using getline() also provides a simple way for the user to indicate they are done entering names. Having the user press Enter alone on a new line instead of a name will result in the string filled by getline() having .size() == 0. You simply break your read-loop at that point. A short example of reading names and adding to std::vector<std::string> can be done similar to: #include <iostream> #include <string> #include <vector> int main() { std::vector<std::string> names {}; /* vector of strings */ std::string tmp {}; /* temp string */ std::cout << "Enter names below, [Enter] alone when done\n\n name: "; /* read line while input size isn't 0 */ while (getline (std::cin, tmp) && tmp.size() != 0) { names.push_back (tmp); /* add input to vector */ std::cout << " name: "; /* next prompt */ } std::cout << "\nnames collected:\n\n"; for (const auto& name : names) { /* loop over each name and output */ std::cout << name << '\n'; } } Example Use/Output $ ./bin/vector-string-names Enter names below, [Enter] alone when done name: Mickey Mouse name: Minnie Mouse name: Pluto (the dog) name: Donald Duck name: Daffy Duck name: Daisy Duck name: Hughie Duck name: Louie Duck name: Dewy Duck name: names collected: Mickey Mouse Minnie Mouse Pluto (the dog) Donald Duck Daffy Duck Daisy Duck Hughie Duck Louie Duck Dewy Duck Additional Notes While using namespace std; is a convenience for short example programs, see Why is “using namespace std;” considered bad practice?. Never std::cin >> numNames; without checking the stream-state after the input. 
(especially when a numeric conversion is involved). A simple: if (!(std::cin >> numNames)) { std::cerr << "error: invalid integer input.\n"; return 1; } will do. To ensure the user provides valid integer input, you can loop continually until a valid int is entered, then break the read loop. You can use .clear() and .ignore() to remove any invalid characters from stdin after a failed input and before your next input.
72,342,528
72,344,316
Question about the type of `&"hello"` and `"hello"`
As per the output of this code snippet, the type of &"hello" is const char(*)[6], so char* ptr = &"hello"; is illegal, for char* and const char(*)[6] are different types. And since char* ptr1 = "hello"; compiles with C++11 and later, is the type of "hello" char*? If the type of "hello" is char*, then shouldn't &"hello" be a pointer which points to char* (i.e. char**)? I have written char* ptr1 = "hello"; for many years, and I know that "hello" is a prvalue. A question arises: how can I acquire the address of the string literal? After I found that the type of &"hello" is const char(*)[6], I am totally confused now. Could anybody shed some light on this matter? Here is the aforementioned code snippet: #include<memory> #include<thread> #include<iostream> #include<typeinfo> int main() { char* ptr = &"hello"; char* ptr1 = "hello"; } Here is what the compiler complains: <source>: In function 'int main()': <source>:8:17: error: cannot convert 'const char (*)[6]' to 'char*' in initialization 8 | char* ptr = &"hello"; | ^~~~~~~~ | | | const char (*)[6] <source>:9:18: warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings] 9 | char* ptr1 = "hello"; | ^~~~~~~ UPDATE: Thanks to ShadowRanger's clarification. I realise that the type of a string literal is const char[6].
"Hello" is a string literal of type const char [6] which decays to const char* due to type decay. Now let's see what is happening for each of the statements in your program. Case 1 Here we consider the statement: char* ptr = &"hello"; As I said, "hello" is of type const char [6]. So, applying the address-of operator & on it gives us a const char (*)[6], which is read as a pointer to an array of size 6 with elements of type const char. This means that there is a mismatch in the type on the right-hand side (which is const char (*)[6]) and the left-hand side (which is char*). And since there is no implicit conversion from a const char (*)[6] to a char*, the compiler gives the mentioned error saying: cannot convert 'const char (*)[6]' to 'char*' Case 2 Here we consider the statement: char* ptr1 = "hello"; //invalid C++ Since "Hello" is of type const char[6], the char elements inside the array are immutable (unchangeable). As I said, the type const char[6] decays to const char*. Thus on the right-hand side we have a const char*. So if we were allowed to write char* ptr1 = "Hello"; then that would mean that we're allowed to change the elements of the array since there is no low-level const on ptr1. Thus, allowing char* ptr1 = "Hello"; would allow changing const marked data, which should not happen (since the data was not supposed to change as it was marked const). This is why the mentioned warning said: ISO C++ forbids converting a string constant to 'char*' So to prevent this from happening we have to add a low-level const as shown below: vvvvv---------------------------> note the low-level const const char* ptr1 = "Hello"; //valid c++ so that the pointer ptr1 is not allowed to change the const marked data. By adding the low-level const highlighted above, it is meant that we're not allowed to change the underlying characters of the array.
72,342,770
72,342,855
What to pass as a Sender in a button OnClick method?
I have a function that creates a button dynamically void createBtn(News obj,TForm *Form1){ TButton *spam = new TButton(Form1); spam->Parent = newsCard; spam->Position->X = 280; spam->Position->Y = 256; spam->Text = "Spam"; } I need to assign an OnClick event to it, so I added the following line to the function above: spam->OnClick = spamClick; The code for spamClick is: void __fastcall TForm1::spamClick(TObject *Sender, News obj) { allUsers[userIndex].spamNews(obj); ShowMessage("Done!"); } The problem is, I need to pass the obj in the line, but the function requires 2 arguments, which are the Sender and the obj. spam->OnClick = spamClick(obj); // error here But I do not know what to pass. I've tried Form1, spam, and newsCard. But nothing works. What do I pass as a Sender? Or, is there a way to assign an OnClick event to the button inside createBtn()? Edit: class News has the following definition class News { public: News(string, string, string, Dates); string title; string description; Dates date; int rate; string category; vector<Comment> comments; int spamCount; static int newsCount; int newsID; int numOfRatedUsers; }; and spamNews is a function in the user class that pushes the obj.newsID into a vector in the user then increases the spamCount. void user::spamNews(News& obj) { //check that the news in not already spammed if(!findNews(spammedNews,obj)){ spammedNews.push_back(obj.newsID); obj.spamCount++; } }
Your second approach doesn't work, because you are trying to call spamClick() first and then assign its return value to the OnClick event. Your first approach is the correct way, however you can't add parameters to the OnClick event handler. TButton has Tag... properties for holding user-defined data. However, since the News object is not being passed around by pointer, the Tag... properties are not very helpful in this case (unless the News object is held in an array/list whose index can then be stored in the Tag). Otherwise, I would suggest deriving a new class from TButton to hold the News object, eg: class TMyButton : public TButton { public: News NewsObj; __fastcall TMyButton(TComponent *Owner, const News &obj) : TButton(Owner), NewsObj(obj) {} }; void TForm1::createBtn(const News &obj) { TMyButton *spam = new TMyButton(this, obj); spam->Parent = newsCard; spam->Position->X = 280; spam->Position->Y = 256; spam->Text = _D("Spam"); spam->OnClick = &spamClick; } void __fastcall TForm1::spamClick(TObject *Sender) { TMyButton *btn = static_cast<TMyButton*>(Sender); allUsers[userIndex].spamNews(btn->NewsObj); ShowMessage(_D("Done!")); } UPDATE: Since your News objects are being stored in a vector that you are looping through, then a simpler solution would be to pass the News object to createBtn() by reference and then store a pointer to that object in the TButton::Tag property, eg: void TForm1::createBtn(News &obj) { TButton *spam = new TButton(this); spam->Parent = newsCard; spam->Position->X = 280; spam->Position->Y = 256; spam->Text = _D("Spam"); spam->Tag = reinterpret_cast<NativeInt>(&obj); spam->OnClick = &spamClick; } void __fastcall TForm1::spamClick(TObject *Sender) { TButton *btn = static_cast<TButton*>(Sender); News *obj = reinterpret_cast<News*>(btn->Tag); allUsers[userIndex].spamNews(*obj); ShowMessage(_D("Done!")); } Or, using the TMyButton descendant: class TMyButton : public TButton { public: News *NewsObj; __fastcall TMyButton(TComponent *Owner) :
TButton(Owner) {} }; void TForm1::createBtn(News &obj) { TMyButton *spam = new TMyButton(this); spam->Parent = newsCard; spam->Position->X = 280; spam->Position->Y = 256; spam->Text = _D("Spam"); spam->NewsObj = &obj; spam->OnClick = &spamClick; } void __fastcall TForm1::spamClick(TObject *Sender) { TMyButton *btn = static_cast<TMyButton*>(Sender); allUsers[userIndex].spamNews(*(btn->NewsObj)); ShowMessage(_D("Done!")); }
72,343,016
72,343,310
Why is flatbuffers output different between Python and C++?
I use the same protocol files, but I find that they have different output in Python and C++. My protocol file: namespace serial.proto.api.login; table LoginReq { account:string; //account passwd:string; //password device:string; //device info token:string; } table LoginRsp { account:string; //account passwd:string; //password device:string; //device info token:string; } table LogoutReq { account:string; } table LogoutRsp { account:string; } My python code: builder = flatbuffers.Builder() account = builder.CreateString('test') paswd = builder.CreateString('test') device = builder.CreateString('test') token = builder.CreateString('test') LoginReq.LoginReqStart(builder) LoginReq.LoginReqAddPasswd(builder, paswd) LoginReq.LoginReqAddToken(builder, token) LoginReq.LoginReqAddDevice(builder, device) LoginReq.LoginReqAddAccount(builder, account) login = LoginReq.LoginReqEnd(builder) builder.Finish(login) buf = builder.Output() print(buf) with open("layer.bin1","wb") as f: f.write(buf) My C++ code: flatbuffers::FlatBufferBuilder builder; auto account = builder.CreateString("test"); auto device = builder.CreateString("test"); auto passwd = builder.CreateString("test"); auto token = builder.CreateString("test"); auto l = CreateLoginReq(builder, account = account, passwd = passwd, device = device, token = token); builder.Finish(l); auto buf = builder.GetBufferPointer(); flatbuffers::SaveFile("layer.bin", reinterpret_cast<char *>(buf), builder.GetSize(), true); output: md5 layer.bin MD5 (layer.bin) = 496e5031dda0f754fb4462fadce9e975
Flatbuffers generated by different implementations (i.e. generators) don't necessarily have the same binary layout, but can still be equivalent. It depends on how the implementation decide to write out the contents. So taking the hash of the binary is not going to tell you equivalence.
72,343,660
72,369,546
Semicolon(;) After Class Constructors or Destructors
I am currently maintaining and studying the language using a legacy source, and I want to clear up some confusion on the use of semi-colons inside a class. Here is the bit where confusion strikes me. class Base { public: Base(int m_nVal = -1 ): nVal(m_nVal) {} // Confused here virtual ~Base() {} // Confused here public: virtual void SomeMethod(); virtual int SomeMethod2(); protected: int nVal; }; class Derived : public Base { public: Derived(int m_nVal):nVal2(m_nVal) {}; // Confused here virtual ~Derived(){}; // Confused here public: virtual void SomeMethod(); virtual int SomeMethod2(); protected:/* Correction Here */ int nVal2; }; I have noticed that some of the class destructors/constructors have a semi-colon after them and some of them don't. I do understand that a semi-colon is a statement terminator. My question is: does the semi-colon after the constructors or destructor tell something specific to the compiler? Or is it something that doesn't really matter.
does the Semi-colon after the constructors or destructor tells something specific to the compiler? After (or before) a member function definition it does not.† After (but not before) a member function declaration it is mandatory. 'Probably just an oversight. †: Unless the definition has no body: struct A { A() = default; // Mandatory semicolon. Definition ~A() {} // Accessory semicolon. Definition void foo(); // Mandatory semicolon. Declaration };
72,343,995
72,344,555
How is OpenMP communicating between threads with what should be a private variable?
I'm writing some code in C++ using OpenMP to parallelize some chunks. I run into some strange behavior that I can't quite explain. I've rewritten my code such that it replicates the issue minimally. First, here is a function I wrote that is to be run in a parallel region. void foo() { #pragma omp for for (int i = 0; i < 3; i++) { #pragma omp critical printf("Hello %d from thread %d.\n", i, omp_get_thread_num()); } } Then here is my whole program. int main() { omp_set_num_threads(4); #pragma omp parallel { for (int i = 0; i < 2; i++) { foo(); #pragma omp critical printf("%d\n", i); } } return 0; } When I compile and run this code (with g++ -std=c++17), I get the following output on the terminal: Hello 0 from thread 0. Hello 1 from thread 1. Hello 2 from thread 2. 0 0 Hello 2 from thread 2. Hello 1 from thread 1. 0 Hello 0 from thread 0. 0 1 1 1 1 i is a private variable. I would expect that the function foo would be run twice per thread. So I would expect to see eight "Hello from %d thread %d.\n" statements in the terminal, just like how I see eight numbers printed when printing i. So what gives here? Why is it that in the same loop, OMP behaves so differently?
From the documentation of omp parallel: Each thread in the team executes all statements within a parallel region except for work-sharing constructs. Emphasis mine. Since the omp for in foo is a work-sharing construct, it is only executed once per outer iteration, no matter how many threads run the parallel block in main.
72,344,637
72,344,841
Is there any point in returning an object (e.g., std::string) by reference when the method has no parameters?
Take the following snippet of code #include <iostream> #include <string> class Foo { private: std::string m_name; public: Foo(std::string name) : m_name { name } {} const std::string & get_name() const { return m_name; } }; int main() { Foo x { "bob" }; x.get_name(); } Because I initialized an object and name exists somewhere in memory, is a temporary object made when I call x.get_name()? If a temporary object is made, then is there a point to returning by reference? My understanding is you return by reference to avoid the cost of creating a large object, or when using an std::ostream& object because you have to.
Yes, there is a point: you return by reference if you want to return a reference to some object. Why would you want to have a reference to some object? Exactly because you need to access it and not a copy of it. Reasons might vary, basic ones are that you do not want to make an extra copy - e.g. the get_name you posted, maybe you want to store it and access it later, and/or because you want to modify it. Returning a reference is not much different from passing a parameter by reference. No temporary std::string object is made in x.get_name(). The method returns an lvalue reference. Since references are usually implemented as pointers, the true return value is a pointer. So a copy of the pointer is made during each call, but that is like returning an int - it can be done in registers or on the stack. So it's as cheap as it gets. Yes, your understanding is correct, although I would say that const T& is used when we want to avoid a copy for whatever reason, and T& should only be used when we need to get mutable access to the object - e.g. std::ostream& in operator<< which mutates the stream by printing into it. BTW, you make an extra copy in your ctor - the name parameter is copied into the m_name member. Instead you should move it there, like Foo(std::string name) : m_name(std::move(name)) {}.
72,344,698
72,344,817
A compilation error occurs when using clang in a Windows environment
I compile the code with VSCode. The clang -v: clang version 14.0.3 Target: x86_64-w64-windows-gnu Thread model: posix InstalledDir: C:/msys64/mingw64/bin You can see I get clang from msys. The file I compile uses these header files: #include <windows.h> #include <windowsx.h> #include <tchar.h> #include <d2d1.h> The task I use : "args": [ "-fdiagnostics-color=always", "C:\\Program Files (x86)\\Windows Kits\\10\\Lib\\10.0.22000.0\\um\\x86\\user32.lib", "C:\\Program Files (x86)\\Windows Kits\\10\\Lib\\10.0.22000.0\\um\\x86\\ole32.lib", "C:\\Program Files (x86)\\Windows Kits\\10\\Lib\\10.0.22000.0\\um\\x86\\d2d1.lib", "-g", "${file}", "-o", "${fileDirname}\\${fileBasenameNoExtension}.exe", "-mwindows" ], And the compiler messages: C:\msys64\mingw64\bin\clang-cpp.exe -fdiagnostics-color=always "C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x86\user32.lib" "C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x86\ole32.lib" "C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x86\d2d1.lib" -g C:\Users\cookie\Desktop\GameEngineFromScratch\Platform\Windows\helloengine_d2d.cpp -o C:\Users\cookie\Desktop\GameEngineFromScratch\Platform\Windows\helloengine_d2d.exe -mwindows clang-cpp: warning: C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x86\user32.lib: 'linker' input unused in cpp mode [-Wunused-command-line-argument] clang-cpp: warning: C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x86\ole32.lib: 'linker' input unused in cpp mode [-Wunused-command-line-argument] clang-cpp: warning: C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x86\d2d1.lib: 'linker' input unused in cpp mode [-Wunused-command-line-argument] clang-cpp: warning: argument unused during compilation: '-mwindows' [-Wunused-command-line-argument] I got an executable file, but when I run it my terminal gives me an error message: Unable to run: The specified executable is not a valid application for this operating system platform It looks like I got an
executable file for Linux but not Windows. How can I solve this?
clang-cpp is the Clang preprocessor, not the C++ compiler. You should use clang++ for the C++ compiler front-end program.
72,344,701
72,345,736
different behavior for different "for"s in benchmark
We can write a simple benchmark using google benchmark or https://www.quick-bench.com/, static void range_based_for(benchmark::State &state) { for (auto _ : state) { std::to_string(__LINE__); } } BENCHMARK(range_based_for); We can also rewrite it with std::for_each, static void std_for_each(benchmark::State &state) { std::for_each (std::begin(state), std::end(state), [] (auto _) { std::to_string(__LINE__); }); } BENCHMARK(std_for_each); Everything is good. However, when we use an old-school for statement, it runs without finishing. static void old_school_for(benchmark::State &state) { for (auto iter = std::begin(state); iter != std::end(state); ++iter) { std::to_string(__LINE__); }; } BENCHMARK(old_school_for); In fact, std::for_each is implemented with this style. How can they behave differently?
The begin/end functions are documented with a warning: "These functions should not be called directly. REQUIRES: The benchmark has not started running yet. Neither begin nor end have been called previously." end calls StartKeepRunning. And what does StartKeepRunning do? It resets the number of iterations to the maximum. It is clear that you're only supposed to call begin and end once. The difference in your third loop is that std::end(state) is called once per iteration, which apparently resets the iteration count back to the maximum. I have no idea why the library is designed this way.
72,345,326
72,345,969
Allocation free std::vector copy when using assignment operator
When having two instances of std::vector with a primitive data type, having same size and capacity, is there a guarantee that copying via the copy assignment operator will not re-allocate the target vector? Example: const int n = 3; std::vector<int> a, b; // ensure defined capacity a.reserve(n); b.reserve(n); // make same size a.resize(n); b.resize(n); // set some values for a for (int i = 0; i < n; i++) { a[i] = i; } // copy a to b: allocation free? b = a; I've only found "Otherwise, the memory owned by *this may be reused when possible." (since C++11) on cppreference.com. I was hoping for a "must" instead of "may". If there should be a positive answer for a more general case such as "same size is enough", even better. If there should be no guarantee, this case could be an answer to Copying std::vector: prefer assignment or std::copy?, when std::copy would be preferred.
The Standard doesn't guarantee that there will be no allocations. According to the C++11 Standard, the effect of b = a; is as if b.assign(a.begin(), a.end()) (with surplus elements of b destroyed, if any), whose result is "Replaces elements in b with a copy of [a.begin(), a.end())". Nothing about allocations, but with the C++20 Standard (maybe earlier) we have an additional statement: "Invalidates all references, pointers and iterators referring to the elements of b". Which means allocation is possible, and capacity() isn't mentioned anywhere in these guarantees to prevent it in your particular case. On the other hand, in practice, why would it reallocate the memory if there is enough already?
72,346,114
72,346,191
Are explicit template instantiation definition for a function template allowed in header files
I was reading about explicit template instantiation when I came across the following answer: Assuming by "explicit template instantiation" you mean something like template class Foo<int>; // explicit type instantiation // or template void Foo<int>(); // explicit function instantiation then these must go in source files as they are considered definitions and are consequently subject to the ODR. My question is: is the above claim that an explicit template instantiation definition cannot be put into header files (and must be put into source files) technically correct? I am looking for an exact reference from the standard (or equivalent source) where it is specified that these ETI definitions cannot be put into header files. I also tried this in a sample program which compiles and links fine without giving any multiple definition error (demo) in both gcc and clang even though I have put the ETIs into the header. Is the below given program well-formed according to the standard? Header.h #ifndef MYHEADER_H #define MYHEADER_H #include <string> template<class T> int func( const T& str) { return 4; } template int func<std::string>( const std::string& str); //first ETI in header. Will the program be well formed if this header is included in multiple source files? template int func<double>(const double& d); //second ETI in header #endif source2.cpp #include "Header.h" source3.cpp #include "Header.h" main.cpp #include <iostream> #include "Header.h" int main(){ std::string input = "123"; auto result = func(input); std::cout<<result<<std::endl; } Demo
From explicit instantiation's documentation: An explicit instantiation definition forces instantiation of the class, struct, or union they refer to. It may appear in the program anywhere after the template definition, and for a given argument-list, is only allowed to appear once in the entire program, no diagnostic required. (emphasis mine) This means that the shown program in question is in violation of the above quoted statement and thus ill-formed no diagnostic required. The same can be found in temp.spec: For a given template and a given set of template-arguments, an explicit instantiation definition shall appear at most once in a program An implementation is not required to diagnose a violation of this rule. (emphasis mine) This again leads to the conclusion that the given example program is ill-formed NDR. Thus as long as the ETI definition occurs only once in the entire program the program is valid. For example, if you have a header that has the ETI definitions, then the program will be ill-formed if that header is included in more than one source file(note a possible workaround for this here). But if the header is included in exactly one source file then the program will be well-formed. The point is that they should appear at most once in the program. From where they come from(like header or a source file) is irrelevant.
72,346,416
72,346,567
Why does vector of same size takes more memory than array in leetcode
I'm trying to solve Ones and Zeros question from leetcode and for the same code but using vector occupies ~3x more memory than using array of same size. Here is my code that uses 3-D vector: int findMaxForm(vector<string>& strs, int m, int n) { int S = strs.size(); vector<vector<vector<int>>> dp(S+1, vector<vector<int>>(m+1, vector<int>(n+1, 0))); // int dp[S+1][m+1][n+1]; // memset(dp, 0, sizeof dp); for(int i = 0; i < S; i++) { for(int j = 0; j <= m; j++) { for(int k = 0; k <= n; k++) { if(i == 0) { int zeros = count(strs[i].begin(), strs[i].end(), '0'); int ones = strs[i].length() - zeros; if(zeros <= j && ones <= k) dp[i][j][k] = 1; else dp[i][j][k] = 0; continue; } int skip = dp[i - 1][j][k]; int take = INT_MIN; int zeros = count(strs[i].begin(), strs[i].end(), '0'); int ones = strs[i].length() - zeros; if(zeros <= j && ones <= k) take = 1 + dp[i - 1][j - zeros][k - ones]; dp[i][j][k] = max(skip, take); } } } return dp[S-1][m][n]; } Submission details: Using vector: Runtime (~500ms); Memory (102.6 MB) Using array: Runtime (~500ms); Memory (32.5 MB)
An array (I assume you used plain C arrays) uses only as much memory as its elements. A vector uses some memory to store some housekeeping information like the length and location of the data. Because you made a vector of vector of vectors, this housekeeping information is created for all of the nested vectors, which occupies a lot of space. This gets worse and worse if you increase the "dimension" of your "multidimensional" vector.
72,348,126
72,348,264
Getting device twin in C Sdk - Azure IoT Hub
Is there a way to get the device twin of a device from Azure IoT-Hub, using the Azure SDK for C? As far as I know, I am able to get the device twin using the Azure SDK for NodeJS. In NodeJS we do it like this: const Client = require('azure-iot-device').Client; const Protocol = require('azure-iot-device-mqtt').Mqtt; var client = Client.fromConnectionString(connectionString, Protocol); function main() { client.open(function (err) { //If connection succeeds client.getTwin(); } }); Is there any way to get the twin data for a device and get twin-change notifications in the Azure SDK for C, i.e. a callback function when there is a change in twin data?
Is there any way to get the twin data for a device and get twinchange notification in Azure SDK for C i.e A callback function when there is a change in twin data? Yes, you can refer to example of callback function as per Get updates on the device side: Try the following code snippet taken from the document: static void deviceTwinCallback(DEVICE_TWIN_UPDATE_STATE update_state, const unsigned char* payLoad, size_t size, void* userContextCallback) { (void)userContextCallback; printf("Device Twin update received (state=%s, size=%zu): %s\r\n", MU_ENUM_TO_STRING(DEVICE_TWIN_UPDATE_STATE, update_state), size, payLoad); } You can also refer to iothub_devicetwin_sample.c and iothub_device_client.c
72,348,293
72,358,779
Deletion in array implementation of queues reduces capacity?
In all queue array implementations I have seen, when they 'pop an element from front', they basically change the front tag of the queue to the next element. but then the capacity of the queue is technically reduced (since array is used). How hasn't this caused problems yet or how is this considered valid? Edit : https://www.softwaretestinghelp.com/queue-in-cpp/ Take the illustration in this link under consideration. When we perform the dequeue operation, we change the pointer of front to the next element. From this point on, any operation we perform will be done with respect to the 2nd position of array as the front element. Now if we go on adding elements to the full capacity of queue, we would, the maximum no. of elements that we could fit in the queue would be 1 less than the capacity of the array (which we had defined earlier).
You are right with your concern about the C++ implementation given in the article https://www.softwaretestinghelp.com/queue-in-cpp/. With that implementation, basically when you dequeue an element, the pointer to the "first" of the queue shift 1 unit to the right (in the underlying array), and that reduce the capacity of the queue. The correct way to implement such a queue should be similar to the Java implementation provided in that very article. As you can see, every array indices are pre-processed with % this.max_size. That makes the array accessing become "circular", i.e. when we access an index k >= this.max_size, the real array index is back to the range [0, this.max_size - 1]. As a result, all the slot in the underlying array are used, which makes the capacity of the queue remain the same after the dequeue or enqueue operation. Here is the corrected version of the C++ implementation. #include <iostream> #define MAX_SIZE 5 using namespace std; class Queue { private: int myqueue[MAX_SIZE], front, rear; public: Queue () { front = -1; rear = -1; } bool isFull () { if ((rear - front + MAX_SIZE) % MAX_SIZE == MAX_SIZE - 1) { return true; } return false; } bool isEmpty () { if (front == -1) return true; else return false; } void enQueue (int value) { if (isFull ()) { cout << endl << "Queue is full!!"; } else { if (front == -1) front = 0; rear++; myqueue[rear % MAX_SIZE] = value; cout << value << " "; } } int deQueue () { int value; if (isEmpty ()) { cout << "Queue is empty!!" << endl; return (-1); } else { value = myqueue[front % MAX_SIZE]; if (front >= rear) { //only one element in queue front = -1; rear = -1; } else { front++; } cout << endl << "Deleted => " << value << " from myqueue"; return (value); } } /* Function to display elements of Queue */ void displayQueue () { int i; if (isEmpty ()) { cout << endl << "Queue is Empty!!" 
<< endl; } else { cout << endl << "Front = " << front; cout << endl << "Queue elements : "; for (i = front; i <= rear; i++) cout << myqueue[i % MAX_SIZE] << "\t"; cout << endl << "Rear = " << rear << endl; } } }; int main () { Queue myq; myq.deQueue (); //deQueue cout << "Queue created:" << endl; myq.enQueue (10); myq.enQueue (20); myq.enQueue (30); myq.enQueue (40); myq.enQueue (50); //enqueue 60 => queue is full myq.enQueue (60); myq.displayQueue (); //deQueue =>removes 10, 20 myq.deQueue (); myq.deQueue (); //queue after dequeue myq.displayQueue (); myq.enQueue (70); myq.enQueue (80); myq.enQueue (90); //enqueue 90 => queue is full myq.displayQueue (); return 0; }
72,348,328
72,348,648
how to fix on devc++ collect2.exe [Error] ld returned 1 exit status
It tells me error id returned 1 exit status when I try to run the code and have search it up from what i have seen is that you mostly get this error from misspelling main() function but that is not the case here. #include <iostream> #include <string> using namespace std; int main(){ double gallons(); double miles(); double meters(); char option; cout<<"This program converts english units to metrics:"<<endl; cout<<"--------------------------------------------------"<<endl; cout<<"--------------- Select any option ----------------"<<endl; cout<<"press g to convert gallon to liters"<<endl; cout<<"press m to convert miles to kilometers"<<endl; cout<<"press f to convert meters to feet"<<endl; cin>>option; if(option =='G' || option == 'g'){ gallons(); } if(option =='M' || option == 'm'){ miles(); } if(option =='F' || option == 'f'){ meters(); } gallons();{ double gallon; cout<<"Enter the amount of gallon you want to convert to liters"<<endl; cin>>gallon; gallon = gallon * 3.78541; cout<<"You have "<<gallon<<" liters"<<endl; } miles();{ double mile; cout<<"Enter the miles covered that you want to convert to kilometers"<<endl; cin>>mile; mile = mile * 1.60934; cout<<"You have covered "<<mile<<" kilometers"<<endl; } meters();{ double meter; cout<<"How many meters do you want to convert to feet?"<<endl; cin>>meter; meter = meter * 3.28084; cout<<"You have covered "<<meter<<" kilometers"<<endl; } return (0); } This is the end of the code.
One of the mistakes you've made is that you haven't properly declared or defined your functions. If you want to take the declare-then-define approach, you'll want to move your declarations to the global scope and out of the main() function. Also, because they don't return anything, they should be declared void: #include <iostream> #include <string> using namespace std; void gallons(); void miles(); void meters(); int main() { /// Code goes here } This has the benefit of allowing you to call the code and work on the definitions later. One thing with functions is that a semi-colon after a function is only necessary for a declaration, not for the definition. As such, to define the functions you can do this: #include <iostream> #include <string> using namespace std; void gallons(); void miles(); void meters(); int main() { /// Code goes here } void gallons() { double gallon; cout << "Enter the amount of gallon you want to convert to litres" << endl; cin >> gallon; gallon = gallon * 3.78541; cout << "You have " << gallon << " litres" << endl; } void miles() { double mile; cout<<"Enter the miles covered that you want to convert to kilometers"<<endl; cin>>mile; mile = mile * 1.60934; cout<<"You have covered "<<mile<<" kilometers"<<endl; } void meters() { double meter; cout << "How many meters do you want to convert to feet?" << endl; cin >> meter; meter = meter * 3.28084; cout << "You have covered "<<meter<<" feet" << endl; } Finally, it is considered bad practice to use the line using namespace std; as later on, when you use other namespaces from libraries, frameworks etc., you might do the same thing, and it's generally not good as this brings naming conflicts between libraries, so try to avoid that. A better approach, so you don't have to keep writing std::cout, std::cin and std::endl everywhere with the std:: prefix, is to write: using std::cout; using std::cin; using std::endl; This has the same effect but only applies to those three names.
Overall, the code should look something like this. #include <iostream> #include <string> using std::cout; using std::cin; using std::endl; void gallons(); void miles(); void meters(); int main() { char option; cout << "This program converts English units to metrics:" << endl; cout << "--------------------------------------------------" << endl; cout << "--------------- Select any option ----------------" << endl; cout << "press g to convert gallon to litres" << endl; cout << "press m to convert miles to kilometres" << endl; cout << "press f to convert meters to feet" << endl; cin >> option; if(option == 'G' || option == 'g') { gallons(); } if(option == 'M' || option == 'm') { miles(); } if(option == 'F' || option == 'f') { meters(); } return 0; } void gallons() { double gallon; cout << "Enter the amount of gallon you want to convert to litres" << endl; cin >> gallon; gallon = gallon * 3.78541; cout << "You have " << gallon << " litres" << endl; } void miles() { double mile; cout<<"Enter the miles covered that you want to convert to kilometers"<<endl; cin>>mile; mile = mile * 1.60934; cout<<"You have covered "<<mile<<" kilometers"<<endl; } void meters() { double meter; cout << "How many meters do you want to convert to feet?" << endl; cin >> meter; meter = meter * 3.28084; cout << "You have covered " << meter << " feet" << endl; } Hope all this makes sense.
72,348,680
72,348,776
what is wrong with this template metaprogram to find square root?
I was trying to code the O(N) solution to find the square root of a perfect square number using template metaprogramming in C++. Algorithm: algorithm sqrt(N, start): if(start*start == N) return start else return sqrt(N, start+1) i.e: template<int value, int N> struct sqrt { enum { val = ((value*value == N) ? value : (sqrt<(value+1), N>::val)) }; }; and instantiating sqrt<1, 4> in main(). I'm running into "template instantiation depth exceeds maximum of 900" error, eventhough it has to stop try and instantiate when value is 2? Am I missing something specific for template metaprogramming? Does it executes both side of the ternary operator, irrespective of the condition? Kindly help me figure this out?
sqrt<X,N> instantiates sqrt<X+1,N>, which instantiates sqrt<X+2,N>, and so on; it never stops. Both branches are evaluated by the compiler, even if only one of them is taken. The compiler is not clever enough to see that the condition is false at some point and that sqrt<value+1,N> therefore does not need to be instantiated. You have to tell it more explicitly. (Actually, more correctly: the compiler does need to know both sides of the : to determine their common type, because that's what the conditional operator's type is.) Since C++17 you can use constexpr if to get the false branch discarded: #include <iostream> template<int value, int N> int sqrt(){ if constexpr (value*value == N) { return value; } else { return sqrt<value+1,N>(); } }; int main() { std::cout << sqrt<1,4>(); } Before C++17 you could use template specialization as the stop condition for the recursion: #include <iostream> template<int value, int N> struct sqrt { enum { val = ((value*value == N) ? value : (sqrt<(value+1), N>::val)) }; }; template <int value> struct sqrt<value,value*value> { enum { val = value }; }; int main() { std::cout << sqrt<1,4>::val; } However, both will fail when N is not a square number. You should change the condition to value*value >= N to make sure the recursion always stops (and additionally check whether N == value*value). Further, I suggest swapping the order of the arguments so you can use a default of 1 for value. Also, the enum trick looks a little outdated. I don't remember what restriction it was meant to overcome. Anyhow, you can simply use a static const int val = ... instead.
72,348,805
72,348,894
the difference of automatic and dynamic variables rules in zero initialization
code like this, #include <iostream> class obj { public: int v; }; int main(int argc, char *argv[]) { obj o1; std::cout << o1.v << std::endl; // print 32766, indeterminate values obj *o2 = new obj(); std::cout << o2->v << std::endl; // print 0,but why? int v1; std::cout << v1 << std::endl; // print 22024, indeterminate values int *v2 = new int; std::cout << *v2 << std::endl; // print 0,but why? return 0; } I know the global or static variables will be initialize zero. and automatic does the indeterminate values. but the heap object use new keyword, has any reference to explain it?
obj *o2 = new obj(); is value initialization, meaning the object will be zero initialized and hence the data member v will be initialized to 0. This can be seen from value initialization: This is the initialization performed when an object is constructed with an empty initializer. new T() (case 2): when an object with dynamic storage duration is created by a new-expression with the initializer consisting of an empty pair of parentheses or braces (since C++11). On the other hand, int *v2 = new int; //this uses default initialization std::cout << *v2 << std::endl; //this is undefined behavior the above leads to undefined behavior because you're dereferencing v2 and the allocated int object is uninitialized and so has an indeterminate value. Undefined behavior means anything can happen. But never rely (or base conclusions) on the output of a program that has UB. The program may just crash. This can be seen from default initialization: This is the initialization performed when an object is constructed with no initializer. new T (case 2): when an object with dynamic storage duration is created by a new-expression with no initializer. The effects of default initialization are: otherwise, no initialization is performed: the objects with automatic storage duration (and their subobjects) contain indeterminate values.
72,349,768
72,705,236
Convert steady_clock::time_point (C++) to System::DateTime (C++/CLI)
I need to convert C++ std::chrono::steady_clock::time_point to C++/CLI System::DateTime. Background: I am wrapping a C++ library with a C++/CLI interface, to be used by a .NET app. One of the C++ methods return a std::chrono::steady_clock::time_point. I thought it is appropriate to returns a System::DateTime from the C++/CLI wrapper method. Thus the need to convert. I am aware that if I had a system_clock::time_point, I could have converted it to time_t as explained here: How to convert std::chrono::time_point to calendar datetime string with fractional seconds?. Then I could have used DateTimeOffset.FromUnixTimeMilliseconds, and from it get a System::DateTime. Another approach could have been to use time_since_epoch. But neither to_time_t nor time_since_epoch are available for std::chrono::steady_clock (about time_since_epoch see: chrono steady_clock not giving correct result?). However - I cannot change the C++ interface. Also didn't manage to properly convert steady_clock::time_point to e.g. system_clock::time_point. The solution I came up with: I take current time from both std::chrono::steady_clock and System::DateTime, then calculate offset from the steady_clock::time_point, and finally apply this offset in reverse to the DateTime time. I calculate the offset in milliseconds, and since the precision I am interested in is of seconds, it works well. This method is shown in the code below. But it feels a bit awkward. It is also sensitive to the requested precision. My question: can you suggest a better way to do the conversion ? 
using namespace System; #include <chrono> System::DateTime SteadyClockTimePointToDateTime(std::chrono::steady_clock::time_point const & tCPP) { auto nowCPP = std::chrono::steady_clock::now(); auto nowCLI = System::DateTime::Now; long long milliSecsSinceT = std::chrono::duration_cast<std::chrono::milliseconds>(nowCPP - tCPP).count(); System::DateTime tCLI = nowCLI - System::TimeSpan::FromMilliseconds(static_cast<double>(milliSecsSinceT)); return tCLI; } int main(array<System::String ^> ^args) { System::Console::WriteLine("System::DateTime::Now (for debug): " + System::DateTime::Now.ToString()); // print reference time for debug auto tCPP = std::chrono::steady_clock::now(); // got the actual value from a C++ lib. System::Threading::Thread::Sleep(100); // pass some time to simulate stuff that was executed since the time_point was received. System::DateTime tCLI = SteadyClockTimePointToDateTime(tCPP); System::Console::WriteLine("System::DateTime (converted): " + tCLI.ToString()); // should show a time very close to System::DateTime::Now above return 0; } Output example: System::DateTime::Now (for debug): 23-May-22 16:41:04 System::DateTime (converted): 23-May-22 16:41:04 Note: I added the C++ tag because the question is not a pure C++/CLI issue. E.g. there might be a solution involving conversion between std::chrono clocks that will enable an easy further conversion to System::DateTime (as mentioned above regarding DateTimeOffset.FromUnixTimeMilliseconds).
The approach is sound, but the code can be made shorter and easier to read with a couple of small changes: template<typename Rep, typename Period> System::TimeSpan DurationToTimeSpan(std::chrono::duration<Rep, Period> const& input) { auto milliSecs = std::chrono::duration_cast<std::chrono::milliseconds>(input).count(); return System::TimeSpan::FromMilliseconds(milliSecs); } System::DateTime SteadyClockTimePointToDateTime( std::chrono::steady_clock::time_point const & tCPP) { auto const nowCPP = std::chrono::steady_clock::now(); auto nowCLI = System::DateTime::Now; auto tCLI = nowCLI + DurationToTimeSpan(tCPP - nowCPP); return tCLI; } The specific changes made are: Hide use of the duration cast and TimeSpan factory function invocation in another helper function.1 Reverse the order of subtraction of nowCPP and tCPP, to avoid having to reverse the sign later. Add const to local variables having native type and which will not be changed. .NET types sadly are not const-correct, because const-ness is only respected by the C++/CLI compiler and not the languages in which the .NET library is written. Avoid explicit type conversion between the return value of count() and the parameter of AddMilliseconds. If for some platform there is a different duration representation and implicit conversion doesn't work, it's better to have the compiler tell the maintenance programmer. Note that the result of this function does NOT provide the "steady clock" guarantee. In order to do so, one should generate a single time_point/DateTime pair and save it for later reuse.
1 Another choice would be to use the AddMilliseconds member function, which replaces a call to TimeSpan factory function and overloaded operator-: System::DateTime SteadyClockTimePointToDateTime( std::chrono::steady_clock::time_point const & tCPP) { auto const nowCPP = std::chrono::steady_clock::now(); auto nowCLI = System::DateTime::Now; auto const milliSecsUntilT = std::chrono::duration_cast<std::chrono::milliseconds>(tCPP - nowCPP).count(); auto tCLI = nowCLI.AddMilliseconds(milliSecsUntilT); return tCLI; }
72,350,046
72,366,408
How to work with 2 HCSR04 Arduino Component?
Any ideas for the code of working with 2 different ultrasonic sensors? The idea is when either one of the sensors detects an obj in front of the sensor, it automatically turns on a buzzer. But for now, I only use the 2 ultrasonic sensors. This is my code, doesnt work as expected: #define trigPin1 3 #define echoPin1 2 #define trigPin2 4 #define echoPin2 5 long duration, distance, RightSensor,LeftSensor; void setup() { Serial.begin (9600); pinMode(trigPin1, OUTPUT); pinMode(echoPin1, INPUT); pinMode(trigPin2, OUTPUT); pinMode(echoPin2, INPUT); } void loop() { SonarSensor(trigPin1, echoPin1); RightSensor = distance; SonarSensor(trigPin2, echoPin2); LeftSensor = distance; Serial.print(LeftSensor); Serial.print(" | "); Serial.println(RightSensor); } void SonarSensor(int trigPin,int echoPin) { digitalWrite(trigPin, LOW); delay(2); digitalWrite(trigPin, HIGH); delay(2); digitalWrite(trigPin, LOW); duration = pulseIn(echoPin, HIGH); distance = (duration/2) / 29.1; }
A better way to approach this problem would be to make a function that returns the distance as a long (note the return type must be long, not void, since the function returns a value). The code would look like this: long duration, distance, RightSensor, LeftSensor; void setup() { Serial.begin (9600); pinMode(trigPin1, OUTPUT); pinMode(echoPin1, INPUT); pinMode(trigPin2, OUTPUT); pinMode(echoPin2, INPUT); } long SonarSensor(int trigPin, int echoPin) { digitalWrite(trigPin, LOW); delay(2); digitalWrite(trigPin, HIGH); delay(2); digitalWrite(trigPin, LOW); duration = pulseIn(echoPin, HIGH); distance = (duration/2) / 29.1; return distance; } void loop() { RightSensor = SonarSensor(trigPin1, echoPin1); LeftSensor = SonarSensor(trigPin2, echoPin2); Serial.print(LeftSensor); Serial.print(" | "); Serial.println(RightSensor); } If you are getting wrong readings try using this code for the function: long SonarSensor(int trigPin, int echoPin) { digitalWrite(trigPin, LOW); delayMicroseconds(2); digitalWrite(trigPin, HIGH); delayMicroseconds(10); digitalWrite(trigPin, LOW); duration = pulseIn(echoPin, HIGH); distance = duration * 0.034 / 2; return distance; } If you want to use the original code, move the SonarSensor function before loop and make distance a global long.
72,350,127
72,350,215
Incompatible two void functions declaration
I have a problem about declaring two void functions in my template "Wallet" class, which are going to remove and add existing template class "CreditCard" to the vector. Compiler writes that "declaration is incompatible" #pragma once #include<iostream> #include<string> #include"CreditCard.h" #include<vector> using namespace std; template<class T> class Wallet { protected: vector<CreditCard<T>>cards; public: Wallet(vector<CreditCard<T>>cards); void addCreditCard(const CreditCard card); void removeCreditCard(const CreditCard card); }; template<class T> Wallet<T>::Wallet(vector<CreditCard<T>>cards) { } template<class T> void Wallet<T>::addCreditCard(const CreditCard card) { } template<class T> void Wallet<T>::removeCreditCard(const CreditCard card) { }
The problem is that CreditCard is a class template which is different from a class-type. So we have to specify the template argument list to make it a type. To solve this you can specify the template arguments to CreditCard as shown below: template<class T> class Wallet { protected: vector<CreditCard<T>>cards; public: Wallet(vector<CreditCard<T>>cards); //-------------------------------------vvv------------>specify template argument explicitly void addCreditCard(const CreditCard<T> card); //----------------------------------------vvv------------>specify template argument explicitly void removeCreditCard(const CreditCard<T> card); }; template<class T> //--------------------------------------------vvv---------->specify template argument void Wallet<T>::addCreditCard(const CreditCard<T> card) { } template<class T> //-----------------------------------------------vvv---------->specify template argument void Wallet<T>::removeCreditCard(const CreditCard<T> card) { } Demo.
72,350,241
72,350,390
Im trying to sort a list of Cities and their Temperature using a bubblesort
Im fairly new to C++ and im trying to convert an int "city.temp[4]" to a string and then adding that to an already existing string "city.name[4]" which would then sort the cities and their temperatures based on low to high. The user is the one to name all four cities and assign them a temperature. The bubbleSort is working fine sorting the temperatures by them self but im not sure how to get the cities to follow along with their respective temperatures #include <iostream> using namespace std; class City{ public: string name[4]; int temp[4]; }; int main() { City city; city.name[4]; city.temp[4]; cout << " Please Input the name of 4 different cities and their temperature\n\n\n"; for(int i=0; i < 4; i++){ cout << " Input the name and temperature of city (" << i+1 << "): "; getline(cin, city.name[i]); cout << " "; cin >> city.temp[i]; cin.ignore(); } int length = 4; for(int i = 0; i < length; i++){ for(int j = 0; j < length - 1; j++){ if(city.temp[j] > city.temp[j+1]){ int hold = city.temp[j]; city.temp[j] = city.temp[j+1]; city.temp[j+1] = hold; } } } for(int i = 0; i < length; i++) cout << " " << city.temp[i] << " " << city.name[i] << "\n"; return 0; }
The good solution is to have a City-class with a name and a temperature and swap cities based on the order of the temperature with std::sort. The easy fix for now is to use std::swap to swap the temperatures and at the same time swap the names: if(city.temp[j] > city.temp[j+1]){ std::swap(city.temp[j], city.temp[j+1]); std::swap(city.name[j], city.name[j+1]); }
72,350,951
72,351,086
Why are finding elements most efficient in arrays in c++?
I need a fast STL container for finding if an element exists in it, so I tested arrays, vectors, sets, and unordered sets. I thought that sets were optimized for finding elements, because of unique and ordered values, but the fastest for 10 million iterations are: arrays (0.3 secs) vectors (1.7 secs) unordered sets (1.9 secs) sets (3 secs) Here is the code: #include <algorithm> #include <iostream> #include <set> #include <unordered_set> #include <vector> int main() { using std::cout, std::endl, std::set, std::unordered_set, std::vector, std::find; int i; const long ITERATIONS = 10000000; int a[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}; for (int i = 0; i < ITERATIONS; i++) { if (find(a, a + 16, rand() % 64) == a + 16) {} else {} } vector<int> v{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}; for (i = 0; i < ITERATIONS; i++) { if (find(v.begin(), v.end(), rand() % 64) == v.end()) {} else {} } set<int> s({0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}); for (i = 0; i < ITERATIONS; i++) { if (find(s.begin(), s.end(), rand() % 64) == s.end()) {} else {} } unordered_set<int> us({0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}); for (i = 0; i < ITERATIONS; i++) { if (find(us.begin(), us.end(), rand() % 64) == us.end()) {} else {} } }
Please remember that in C and C++ there is the as-if rule! This means the compiler can transform code by any means (even by dropping code) as long as the observable result of running the code remains unchanged. Here is godbolt of your code. Now note what the compiler did for if (find(a, a + 16, rand() % 64) == a + 16) {}: .L206: call rand sub ebx, 1 jne .L206 Basically the compiler noticed that the result is not used and removed everything except calling rand(), which has side effects (visible changes in results). The same happened for std::vector: .L207: call rand sub ebx, 1 jne .L207 And even for std::set and std::unordered_set the compiler was able to perform the same optimization. The difference you are seeing (you didn't specify how you measured it) is just the result of initializing all of these variables, which is time consuming for the more complex containers. Writing a good performance test is hard and should be approached with caution. There is also a second problem with your question: the time complexity of the given code. Searching an array and searching a std::set or std::unordered_set scale differently with the size of the data. For a small data set a simple array will be fastest thanks to its simple implementation and optimal memory access. As the data size grows, the time complexity of std::find on an array grows as O(n); on the other hand, finding an item in a std::set grows as O(log n) and in a std::unordered_set it is constant time O(1) on average. (Note this requires the member function, e.g. s.find(x); calling the free std::find on the set's iterators is still O(n).) So for a small amount of data the array will be fastest, for medium sizes std::set is the winner, and if the amount of data is large std::unordered_set will be best. Take a look at this benchmark example which uses google benchmark.
72,351,233
72,351,348
Why is it advantageous to return by reference?
I know that the rule of thumb is that we return by reference iff the returned variable exists in the caller. Say we have a function: int& f(int& a){ return a; } Now, in the caller, I can call in 2 ways: int a = 5; int b1 = f(a); // 1 int& b2 = f(a); // 2 The difference is that change in b1 or a in the caller, doesn't effect one another, but changes in b2 or a in caller effects one another. That means, what a was referring to int the callee is copied to b2, whereas, it seems like only the value a was referring to in the callee was copied to b1 because of the above statement. Then what is the advantage of returning by referrence like 1st case if it copies as well. Example where statements like 1 are seen is copy assignment operator which returns by reference for chaining of assignments: eg: obj1 = obj2 = obj3; Am I missing something here?
Frankly your argument is moot. Along the same line of reasoning you could argue that there is no advantage of using a reference in general, because it can be used to make a copy: int x = 42; int& ref = x; int y = ref; // makes a copy However, just because you can use a reference to make a copy does not decrease usability of a reference: ref = 24; // does modify x In your example: int a = 5; int b1 = f(a); // 1 int& b2 = f(a); // 2 It has almost nothing to do with f that the caller decided to make a copy in // 1. Of course you can use a reference to make a copy. But you cannot use a copy to modify a: f(a) = 42; // modifies a This only works as inteded (modify a) when f returns a reference. As an analogy you could ask: Whats the advantage of using an airplane? Airplanes can drive on the ground just like cars can do ;)
72,351,798
72,351,834
How to force set in c++ to store values in descending order?
I have been stuck on an algorithm that requires unique values sorted in descending order. Since the need is unique, I thought set is the best data structure to be used here, but I guess set by default stores the value in non-decreasing order, how do I make it store in non-increasing order? Other than the fact that I can let it store in ascending order and then reverse the set, is there any other modification that I can do?
How about using std::set<int, std::greater<int>> mySet{}? By default it's using std::less if I recall correctly.
72,351,917
72,354,444
OpenMP. Parallelization of two consecutive cycles
I am studying OpenMP and have written an implementation of shaker sorting. There are 2 consecutive cycles here, and in order for them to be called sequentially, I added blockers in the form of omp_init_lock, omp_destroy_lock, but still the result is incorrect. Please tell me how you can parallelize two consecutive cycles. My code is below: int Left, Right; Left = 1; Right = ARR_SIZE; while (Left <= Right) { omp_init_lock(&lock); #pragma omp parallel reduction(+:Left) num_threads(4) { #pragma omp for for (int i = Right; i >= Left; i--) { if (Arr[i - 1] > Arr[i]) { int temp; temp = Arr[i]; Arr[i] = Arr[i - 1]; Arr[i - 1] = temp; } } Left++; } omp_destroy_lock(&lock); omp_init_lock(&lock); #pragma omp parallel reduction(+:Right) num_threads(4) { #pragma omp for for (int i = Left; i <= Right; i++) { if (Arr[i - 1] > Arr[i]) { int temp; temp = Arr[i]; Arr[i] = Arr[i - 1]; Arr[i - 1] = temp; } } Right--; } omp_destroy_lock(&lock); }
You seem to have several misconceptions about how OpenMP works. Two parallel sections don't execute in parallel. This is fork-join parallelism. The parallel section itself is executed by multiple threads which then join back up at the end of the parallel section. Your code looks like you expected them to work like pragma omp sections. Side note: Unless you have absolutely no other choice and/or you know exactly what you are doing, don't use sections. They don't scale well. Your use of the lock API is wrong. omp_init_lock initializes a lock object. It doesn't acquire it. Likewise the destroy function deallocates it, it doesn't release the lock. If you ever want to acquire a lock, use omp_set_lock and omp_unset_lock on locks that you initialize once before you enter a parallel section. Generally speaking, if you need a lock for an extended section of your code, it will not parallelize. Read up on Amdahl's law. Locks are only useful if used rarely or if the chance of two threads competing for the same lock at the same time is low. Your code contains race conditions. Since you used pragma omp for, two different threads may execute the i'th and (i-1)'th iteration at the same time. That means they will touch the same integers. That's undefined behavior and will lead to them stepping on each other's toes, so to speak. I have no idea what you wanted to do with those reductions. How to solve this Well, traditional shaker sort cannot work in parallel because within one iteration of the outer loop, an element may travel the whole distance to the end of the range. That requires an amount of inter-thread coordination that is infeasible. What you can do is a variation of bubble sort where each thread looks at two values and swaps them. Move this window back and forth and values will slowly travel towards their correct position. 
This should work: #include <utility> // using std::swap void shake_sort(int* arr, int n) noexcept { using std::swap; const int even_to_odd = n / 2; const int odd_to_even = (n - 1) / 2; bool any_swap; do { any_swap = false; # pragma omp parallel for reduction(|:any_swap) for(int i = 0; i < even_to_odd; ++i) { int left = i * 2; int right = left + 1; if(arr[left] > arr[right]) { swap(arr[left], arr[right]); any_swap = true; } } # pragma omp parallel for reduction(|:any_swap) for(int i = 0; i < odd_to_even; ++i) { int left = i * 2 + 1; int right = left + 1; if(arr[left] > arr[right]) { swap(arr[left], arr[right]); any_swap = true; } } } while(any_swap); } Note how you can't exclude the left and right border because one outer iteration cannot guarantee that the value there is correct. Other remarks: Others have already commented on how std::swap makes the code more readable You don't need to specify num_threads. OpenMP can figure this out itself
72,352,743
72,353,040
Z3 Prover: Equivalent to Python Datatype in the C++ API
Is there an equivalent to the Python Datatype() API for C++? For example in Python you can do: >>> List = Datatype('List') >>> List.declare('cons', ('car', IntSort()), ('cdr', List)) >>> List.declare('nil') >>> List = List.create() >>> # List is now a Z3 declaration >>> List.nil nil >>> List.cons(10, List.nil) cons(10, nil) >>> List.cons(10, List.nil).sort() List >>> cons = List.cons >>> nil = List.nil >>> car = List.car >>> cdr = List.cdr >>> n = cons(1, cons(0, nil)) >>> n cons(1, cons(0, nil)) >>> simplify(cdr(n)) cons(0, nil) >>> simplify(car(n)) 1 How to declare such algebraic datatypes using the C++ API?
Yes. In general, everything you can do from the Python API (and other APIs), you can do from C/C++. The function you're looking for is called Z3_mk_datatype. See: https://z3prover.github.io/api/html/group__capi.html#ga34875df69093aca24de67ae71542b1b0 for details. Note that the C/C++ APIs are much lower level, so while possible to do so, I'd strongly recommend against using these APIs unless you've a requirement that forces you to use C/C++.
72,353,536
72,356,119
C++ code compiled with cygwin needs cygwin1.dll to run
I have no special code to share to ask this, but any C++ code I wrote (which could even be a simple Hello World program), compiled to an exe file, requires cygwin1.dll to be available either via %path% or in the same folder as the exe to run (for runtime). If it were libstdc++-6.dll that was needed, I could have used a flag like -static or -static-libstdc++. But what can I do to not depend upon the cygwin1.dll file for exe execution? Should I use some other compiler instead? PS: I am expecting some kind of solution similar to this, if it makes sense: MinGW .exe requires a few gcc dll's regardless of the code? Also, I do not want to create or use a separate installer to add the dll to the right place for my case.
As discussed in comments, the native gcc inside Cygwin targets Cygwin. You want the GCC targeting mingw32, not the one targeting Cygwin. While you can install that GCC inside Cygwin as a cross compiler, the native GCC from MSYS works just fine and I too would recommend that. Note that if you need a library that isn't available in MINGW, pulling the one from Cygwin will essentially never work. Consider this a hard line: never mix Cygwin-dependent dlls with non-Cygwin-dependent dlls. Treat Cygwin as a subsystem that happens to be startable directly from Win32 rather than a Win32 library. MINGW has had pthread for a while; however, the port often doesn't work. If the pthread-using program calls fork(), it will not work. That's when you need Cygwin.
72,353,673
72,353,754
How to add variable to derived initialization list from base class initialization list?
I have a base class ShowTicket with a parameterized constructor: //constructor ShowTicket(const char* Row, const char* SeatNumber): sold_status{false}, row(Row), seat_number(SeatNumber) {} I am creating a derived class, SportTicket, that will take the same parameters as ShowTicket but will add a new boolean value to keep track of beer_sold. The problem is I do not know how to tell C++ that I still want sold_status to be initialized to false in the SportTicket constructor. I tried doing this: //Constructor SportTicket(const char* Row, const char* SeatNumber): ShowTicket(Row, SeatNumber), beer_sold{false}, sold_status{false} {} But I received the following error message: Member initializer 'sold_status' does not name a non-static data member or base class Is sold_status already initialized to false because the variable is inherited from the base class's initialization list, or is there a different syntax I can use to bring this variable into my derived class?
The constructor of the class ShowTicket itself initializes its data member sold_status to false //constructor ShowTicket(const char* Row, const char* SeatNumber): sold_status{false}, row(Row), seat_number(SeatNumber) {} So in the derived class just remove the line sold_status{false} because it is incorrect and redundant. //Constructor SportTicket(const char* Row, const char* SeatNumber): ShowTicket(Row, SeatNumber), beer_sold{false} {}
72,353,688
72,354,047
Iterator for native Matrix class
I wrote my own Matrix class with such fields (you can't use STL containers in it) template <typename T> class Matrix { private: T *data = nullptr; size_t rows; size_t cols; I also made an iterator for this class: public: class Iterator { friend Matrix; private: T *curr; public: Iterator() : curr(nullptr) {} Iterator(Matrix *matrix) : curr(matrix->data) {} ~Iterator() = default; ...... Iterator begin() { Iterator it(this); std::cout << *it.curr << std::endl; return it; } Iterator end() { Iterator it(this); /*Iterator it(this->data + rows * cols); if (it == nullptr) { return it; }*/ return it; } How can I properly write an end() method that would return the next element after the last one? I understand that using address arithmetic you can get the latter, but how can I do it in my code?
I suggest that you let the iterator take a T* as argument. You can then supply data to the begin() iterator and data + rows * cols to the end() iterator. Example: template <class T> class Matrix { private: T* data = nullptr; size_t rows; size_t cols; public: Matrix(size_t Rows, size_t Cols) : data{new T[Rows * Cols]}, rows{Rows}, cols{Cols} {} Matrix(const Matrix&) = delete; Matrix(Matrix&&) = delete; ~Matrix() { delete[] data; } class iterator { private: T* curr; // store a T* public: iterator(T* t) : curr{t} {} // take a T* and store it iterator& operator++() { ++curr; return *this; } bool operator==(const iterator& rhs) const { return curr == rhs.curr; } bool operator!=(const iterator& rhs) const { return !(curr == rhs.curr); } T& operator*() { return *curr; } T* operator->() { return curr; } }; iterator begin() { return {data}; } // begin() takes data iterator end() { return {data + rows * cols}; } // end() takes data + rows *cols }; Demo However, you don't need to implement an iterator yourself for this. You can simply use pointers: template <class T> class Matrix { private: T* data = nullptr; size_t rows; size_t cols; public: Matrix(size_t Rows, size_t Cols) : data{new T[Rows * Cols]}, rows{Rows}, cols{Cols} {} Matrix(const Matrix&) = delete; Matrix(Matrix&&) = delete; ~Matrix() { delete[] data; } using iterator = T*; // iterator is now simply a T* iterator begin() { return data; } iterator end() { return data + rows * cols; } };
72,354,043
72,362,447
How to pass python lambda func to c++ std::function<> using Boost.Python
Let's consider the following example: #include <functional> class Model { function<bool(const vector<double>&, float, float, float)> q_bifurcate_pointer; } Now in a C++ environment I can simply assign a lambda value to q_bifurcate_pointer: model.q_bifurcate_pointer = [](const vector<double>& a, float branch_lenght, float bifurcation_threshold, float bifurcation_min_dist)->bool { return (a.at(2) / a.at(0) <= bifurcation_threshold) && branch_lenght >= bifurcation_min_dist; }; Next I try to export that class to Python using 'Boost.Python': BOOST_PYTHON_MODULE(riversim) { class_<vector<double>> ("t_v_double") .def(vector_indexing_suite<vector<double>>()); class_<Model>("Model") .def_readwrite("q_bifurcate_pointer", &Model::q_bifurcate_pointer) } And now we finally get to the problem. In Python I can execute the following lines: import riversim model = riversim.Model() model.q_bifurcate_pointer = lambda a, b, c, d: True # or more complicated example with type specification from typing import Callable func: Callable[[riversim.t_v_double, float, float, float], bool] = lambda a, b, c, d: True model.q_bifurcate_pointer = func And in both cases I am getting the following error: --------------------------------------------------------------------------- ArgumentError Traceback (most recent call last) /home/oleg/Documents/riversim/riversimpy/histogram.ipynb Cell 2' in <module> 3 model = riversim.Model() 4 func: Callable[[riversim.t_v_double, float, float, float], bool] = lambda a, b, c, d: True ----> 5 model.q_bifurcate_pointer = func ArgumentError: Python argument types in None.None(Model, function) did not match C++ signature: None(River::Model {lvalue}, std::function<bool (std::vector<double, std::allocator<double> > const&, float, float, float)>) So, how do I create a proper lambda function and pass it to my C++ code?
A lambda expression in Python is byte code, and C++ needs machine code. I just googled a problem similar to mine, but more general, here: https://stackoverflow.com/a/30445958/4437603
72,354,800
72,379,101
Malloc error when trying to run node js server due to ibm_db module
I have a nodejs application configured to run on port 5001. When I try to run the node server using node server.js, it throws a malloc error like the one below: node(6080,0x1067aa600) malloc: *** error for object 0x7ffb503d2670: pointer being freed was not allocated node(6080,0x1067aa600) malloc: *** set a breakpoint in malloc_error_break to debug zsh: abort node server.js My machine configs are Processor - 2.4 GHz 8-Core Intel Core i9 Memory - 32 GB 2667 MHz DDR4 When I try to run this server, I do not run any other node server. I also checked all the processes running, but nothing clashes with it. Maybe I am missing something. I tried running it on different ports as well, but I get the same error. My node js version is v14.16.1, npm version is 6.14.12, Xcode version is 13.4.0.0.1.1651278267
The issue was not related to Xcode or the node version; it was related to one of the npm packages I was using, ibm_db, and it only occurs on macOS Monterey. Follow these steps if you have this package installed to rectify the error. Delete the ibm_db package from your project and delete package-lock.json as well. Install the latest ibm_db package, currently 2.8.1. Even if that does not resolve the error, do the last step. Go to node_modules/ibm_db/installer/clidriver/lib and rename the file libstdc++.6.dylib to anything like libstdc++.7.dylib. You can find detailed discussion here: https://github.com/ibmdb/node-ibm_db/issues/824 https://github.com/ibmdb/node-ibm_db/issues/801 Hopefully the issue will be resolved. Thanks
72,355,184
72,355,211
Parallel version of the `std::generate` performs worse than the sequential one
I'm trying to parallelize some old code using the Execution Policy from C++17. My sample code is below: #include <cstdlib> #include <chrono> #include <iostream> #include <algorithm> #include <execution> #include <vector> using Clock = std::chrono::high_resolution_clock; using Duration = std::chrono::duration<double>; constexpr auto NUM = 100'000'000U; double func() { return rand(); } int main() { std::vector<double> v(NUM); // ------ feature testing std::cout << "__cpp_lib_execution : " << __cpp_lib_execution << std::endl; std::cout << "__cpp_lib_parallel_algorithm: " << __cpp_lib_parallel_algorithm << std::endl; // ------ fill the vector with random numbers sequentially auto const startTime1 = Clock::now(); std::generate(std::execution::seq, v.begin(), v.end(), func); Duration const elapsed1 = Clock::now() - startTime1; std::cout << "std::execution::seq: " << elapsed1.count() << " sec." << std::endl; // ------ fill the vector with random numbers in parallel auto const startTime2 = Clock::now(); std::generate(std::execution::par, v.begin(), v.end(), func); Duration const elapsed2 = Clock::now() - startTime2; std::cout << "std::execution::par: " << elapsed2.count() << " sec." << std::endl; } The program output on my Linux desktop: __cpp_lib_execution : 201902 __cpp_lib_parallel_algorithm: 201603 std::execution::seq: 0.971162 sec. std::execution::par: 25.0349 sec. Why does the parallel version perform 25 times worse than the sequential one? Compiler: g++ (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0
The thread-safety of rand is implementation-defined. Which means either: Your code is wrong in the parallel case, or It's effectively serial, with a highly contended lock, which would dramatically increase the overhead in the parallel case and get incredibly poor performance. Based on your results, I'm guessing #2 applies, but it could be both. Either way, the answer is: rand is a terrible test case for parallelism.
72,355,376
72,355,530
Plus sign "+" before wchar_t variable
I have found on the cpp reference website (link) the following code which I do not understand completely: void try_widen(const std::ctype<wchar_t>& f, char c) { wchar_t w = f.widen(c); std::cout << "The single-byte character " << +(unsigned char)c << " widens to " << +w << '\n'; } What I do not understand is what the plus sign "+" does before the w variable (+w) and before (unsigned char)c (+(unsigned char)c). Can somebody explain this to me?
Like all arithmetic operations, unary plus triggers integer promotion of its operand. This results in printing the numerical value of a character. Without it, a different overload of operator << could be chosen. << (unsigned char)c would print a character rather than a number. As for std::cout << w: before C++20 it would print the numeric value (wchar_t promotes to int for the narrow stream); since C++20 the narrow-stream overload for wchar_t is deleted, so it would not compile.
72,355,415
72,355,497
How to set BaseClass variable from DerivedClass parameter
I have the classes Player and HumanPlayer. HumanPlayer is derived from Player. class Player { private: int id; protected: string name; public: Player(int id); ~Player(); } class HumanPlayer : public Player { public: HumanPlayer(int id, string name); } I want to make a constructor for HumanPlayer that sets the Player id and name, but I can't seem to figure out how to set Player::id from HumanPlayer. This is what I have worked out, but it gives an error: "'int Player::id' is private within this context" HumanPlayer::HumanPlayer(int id, string name) : Player(id){ this -> id = id; this -> name = name; }
For your understanding. class Player { private: int id; protected: std::string name; public: //Base class constructor, initialize id. Player(int i):id(i) {} ~Player() {} //Test int GetId() { return id; } std::string GetName() { return name; } }; class HumanPlayer : public Player { public: HumanPlayer(int i, std::string s) :Player(i) //Pass id on to the base class constructor { name = s; //Protected variable accessible. } }; int main() { HumanPlayer hPlayer(10, "Test"); std::cout << hPlayer.GetId() << std::endl; std::cout << hPlayer.GetName() << std::endl; return 0; }
72,355,450
73,317,668
Object generation from different id types slow compilation
I have a templated class that can generate an object instance from an ID. The context is networking code with object replication. The code below shows a way that I can manage to do this, but it has the drawback of being very slow to compile. Does anyone know a "better" way to achieve what my example shows? I'm not sure how to make this question clearer; I hope the code speaks for itself. I have looked at extern templates, but I do not see how to apply that to templated functions in templated classes. If anyone knows how to do that, that would solve the issue. Alternatively, a way to fix the ambiguity problem of MyRegistersSimple would also be greatly helpful! template<typename ID, typename Base> class Register { public: void create(ID id) { m_types.at(id).second(); } private: std::map<ID, std::function<std::unique_ptr<Base>(ID)>> m_types; }; template<typename tag> struct ID { int value; }; class ABase {}; class BBase {}; class CBase {}; using ID_A = ID<struct ID_A_TAG>; using ID_B = ID<struct ID_B_TAG>; using ID_C = ID<struct ID_C_TAG>; class MyRegistersSimple : public Register<ID_A, ABase>, public Register<ID_B, BBase>, public Register<ID_C, CBase> { }; template<typename... Registers> class MultiRegister : public Registers... { public: template<typename ID> void create(ID) { // lots of complex template code to find the correct Register from 'Registers...' // and call 'create' on it // this makes compilation very slow } }; class MyRegistersComplex : public MultiRegister< Register<ID_A, ABase>, Register<ID_B, BBase>, Register<ID_C, CBase>> {}; void test() { MyRegistersSimple simple; simple.create(ID_A(0)); // -> ambiguous, does not compile MyRegistersComplex complex; complex.create(ID_A(0)); // -> very slow compilation }
Basic solution Bring all the bases into scope via using: // a helper to avoid copy pasting `using`s template<typename... Registers> struct MultiRegister : Registers... { using Registers::create...; }; class MyRegisters : public MultiRegister< Register<ID_A, ABase>, Register<ID_B, BBase>, Register<ID_C, CBase>> {}; void test() { MyRegisters registers; registers.create(ID_A(0)); // IDE shows that `Register<ID<ID_A_TAG>, ABase>` is chosen } I hope the built-in overload resolution is faster than "lots of complex template code" in the ...Complex version. Offtopic improvement I didn't like that manual Register<ID_A, ABase> ID_x <-> xBase dispatch and dummy ID_x_TAGs, so I removed all of that (if using xBase as the ID's template parameter is fine). Then Register<ID_A, ABase>, Register<ID_B, BBase> etc. become template<typename Base> using MakeRegister = Register<ID<Base>, Base>; and the code suggested above is just (test() omitted - it's the same) template<typename... Registers> struct MultiRegister : Registers... { using Registers::create...; }; template<typename... Bases> using MakeMultiRegister = MultiRegister<MakeRegister<Bases>...>; class MyRegisters : public MakeMultiRegister<ABase, BBase, CBase> {};
72,355,478
72,357,013
Global Constants in .h included in multiple c++ project
I want to run a small simulation in C++. To keep everything nice and readable I separate each thing (like all the SDL stuff, all the main sim stuff, ...) into its own .h file. I have some variables that I want all files to know, but when I #include them in more than one file, the g++ compiler sees it as a redefinition. I understand why it does this, but this still leaves me with my wish to have one file where all important variables and constants for each run are defined and known to all other files, to easily find and change them when running my simulation. So my question here: Is there a good workaround to achieve that or something similar?
You can put the declarations for all the globals in a header and then define them in a source file and then you will be able to use those global variables in any other source file by just including the header as shown below: header.h #ifndef MYHEADER_H #define MYHEADER_H //declaration for all the global variables extern int i; extern double p; #endif source.cpp #include "header.h" //definitions for all the globals declared inside header.h int i = 0; double p = 34; main.cpp #include <iostream> #include "header.h" //include the header to use globals int main() { std::cout << i <<std::endl;//prints 0 std::cout<< p << std::endl;//prints 34 return 0; } Working demo
72,355,483
72,355,712
Calculate class Cylinder using class Circle
The constructor of class "Circle" allows the radius to be specified via a parameter, while it is not possible to create objects of the Circle type without specifying the parameter. Also, automatic conversion of real numbers into Circle objects must not be allowed. The Set method, which does the same thing as a constructor, should also be supported, except that it allows the radius of an already created object to be changed later. The Cylinder class constructor requires two parameters that represent the base radius and the height of the cylinder, respectively. Instances of this class also cannot be created without specifying the mentioned information. It should also support the "Set" function, which does the same thing as a constructor, except that it allows you to modify an already created object. Both classes must have other methods (listed in code). I need to use class Circle inside class Cylinder to enable calculating volume, area, and other functions. #include <cmath> #include <iostream> class Circle { double radius; public: Circle(double r); void Set(double r); double GetRadius() const; double GetPerimeter() const; double GetArea() const; void Scale(double s); void Print() const; }; class Cylinder { Circle baze; double height; public: Cylinder(double r_baze, double h); void Set(double r_baze, double h); Circle GetBaze() const; double GetRadiusOfBaze() const; double GetHeight() const; double GetArea() const; double GetVolume() const; void Scale(double s); void Print() const; }; int main() { return 0; } Circle::Circle(double r) { radius = r; } void Circle::Set(double r) { radius = r; } double Circle::GetRadius() const { return radius; } double Circle::GetPerimeter() const { return 2 * 4 * atan(1) * radius; } double Circle::GetArea() const { return radius * radius * 4 * atan(1); } void Circle::Scale(double s) { radius *= s; } void Circle::Print() const { std::cout << "R= " << GetRadius() << " O= " << GetPerimeter() << " P= " << GetRadius(); }
Cylinder::Cylinder(double r_baze, double h) { baze.GetRadius() = r_baze; height = h; } void Cylinder::Set(double r_baze, double h) { baze.GetRadius() = r_baze; height = h; } Circle Cylinder::GetBaze() const { return baze; } double Cylinder::GetRadiusOfBaze() const { return baze.GetRadius(); } double Cylinder::GetHeight() const { return height; } double Cylinder::GetArea() const { return baze.GetArea() * 2 + baze.GetPerimeter() * height; } double Cylinder::GetVolume() const { return baze.GetArea() * height; } void Cylinder::Scale(double s) { baze.GetRadius() *= s; height *= s; } void Cylinder::Print() const { std::cout << "R= " << baze.GetRadiusOfBaze() << " H= " << height << " P= " << GetArea() << " V= " << GetVolume(); } I'm new to the object-oriented programming concept. Could you help me understand where I'm making mistakes? I cannot compile this, because I get errors: 57 : no matching function for call to ‘Circle::Circle()’ 14: note: candidate: ‘Circle::Circle(double)’ 14: note: candidate expects 1 argument, 0 provided 3: note: candidate: ‘constexpr Circle::Circle(const Circle&)’ 3: note: candidate expects 1 argument, 0 provided 62, 70, 91 : lvalue required as left operand of assignment
Cylinder::Cylinder(double r_baze, double h) { baze.GetRadius() = r_baze; height = h; } In your Cylinder class, when your constructor is called, baze is implicitly initialized with a default constructor that does not exist. You want to use an initializer list to handle that initialization, at which point the code inside your Cylinder constructor becomes unnecessary. Cylinder::Cylinder(double r_baze, double h) : baze(r_baze), height(h) { } Alternatively, you could effectively provide a default constructor for your Circle class, and then Set the radius in Cylinder's constructor, but that's more work. Circle::Circle(double r=0.0) { radius = r; } Cylinder::Cylinder(double r_baze, double h) { baze.Set(r_baze); height = h; } Also... Please note that GetRadius returns a double by value and cannot be assigned to, so you will get an error on that line of code.
72,355,676
72,356,023
Why does Bazel http_archive rule not download archive?
I'm a Bazel newbie and am working through the C++ guide here, trying to include an external testing library (gtest): https://bazel.build/tutorials/cpp-use-cases#include-external-libraries This is my file structure and WORKSPACE and BUILD file contents: $ tree . ├── gtest.BUILD ├── lib │   ├── BUILD │   ├── hello-time.cc │   └── hello-time.h ├── main │   ├── BUILD │   ├── hello-greet.cc │   ├── hello-greet.h │   └── hello-world.cc ├── README.md ├── test │   ├── BUILD │   └── hello-test.cc └── WORKSPACE Contents of WORKSPACE: $ cat WORKSPACE load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") http_archive( name = "gtest", build_file = "@//:gtest.BUILD", url = "https://github.com/google/googletest/archive/release-1.10.0.zip", sha256 = "94c634d499558a76fa649edb13721dce6e98fb1e7018dfaeba3cd7a083945e91", ) Contents of gtest.BUILD: $ cat gtest.BUILD cc_library( name = "main", srcs = glob( ["src/*.cc"], exclude = ["src/gtest-all.cc"] ), hdrs = glob([ "include/**/*.h", "src/*.h" ]), copts = ["-Iexternal/gtest/include"], linkopts = ["-pthread"], visibility = ["//visibility:public"], ) Contents of test/BUILD: $ cat test/BUILD cc_test( name = "hello-test", srcs = ["hello-test.cc"], copts = ["-Iexternal/gtest/include"], deps = [ "@gtest//:main", "//main:hello-greet", ], ) I then try to run "bazel test test:hello-test" but it throws an issue complaining about a missing "BUILD" file: ERROR: An error occurred during the fetch of repository 'gtest': Traceback (most recent call last): ... Error in read: Unable to load package for //:gtest.BUILD: BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package. - /home/user/code/bazelbuild_examples/cpp-tutorial/stage4 I then ran "touch BUILD" in the top level directory, after finding a GitHub issue with a similar error message, which got rid of that error. 
Bazel is now downloading gtest library (can see it under "bazel-stage4/external/gtest") but it doesn't seem to be making it available to the test target: ERROR: /home/user/code/bazelbuild_examples/cpp-tutorial/stage4/test/BUILD:1:8: Compiling test/hello-test.cc failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer '-std=c++0x' -MD -MF ... (remaining 25 arguments skipped) Use --sandbox_debug to see verbose messages from the sandbox test/hello-test.cc:1:10: fatal error: gtest/gtest.h: No such file or directory 1 | #include "gtest/gtest.h" | ^~~~~~~~~~~~~~~ compilation terminated. Why is it unable to find the gtest headers/library? How does the directory layout work when you're running "bazel test"? (i.e. where is the test code directory relative to the 3rd party library directory?)
I think the problem has to do with your gtest build file. First, google test comes with a supported Bazel BUILD file already, so why write your own instead of using theirs? Second: cc_library( name = "main", srcs = glob( ["src/*.cc"], exclude = ["src/gtest-all.cc"] ), hdrs = glob([ "include/**/*.h", "src/*.h" ]), copts = ["-Iexternal/gtest/include"], linkopts = ["-pthread"], visibility = ["//visibility:public"], ) The path that C++ source code would have to #include for headers in this rule is "include/.../*.h". That's not proper. For one, copts affects this rule, but not other targets that depend on it. In general, compiler options should be part of the toolchain, not the rule like this. Second, cc_library rules have an includes = [] argument specifically for fixing include paths to strip leading prefixes. Instead of copts you should use includes to fix the path. BUT you should use the official google test BUILD file instead of writing your own and not have to deal with problems like this.
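For completeness, this is roughly what the hand-written gtest.BUILD would look like with includes instead of copts (an untested sketch; in practice, prefer depending on the BUILD file that googletest itself ships):

```starlark
cc_library(
    name = "main",
    srcs = glob(
        ["src/*.cc"],
        exclude = ["src/gtest-all.cc"],
    ),
    hdrs = glob([
        "include/**/*.h",
        "src/*.h",
    ]),
    # 'includes' propagates the -I path to targets that depend on this
    # rule, which 'copts' does not.
    includes = ["include"],
    linkopts = ["-pthread"],
    visibility = ["//visibility:public"],
)
```

With this, dependents like the hello-test target no longer need their own copts = ["-Iexternal/gtest/include"] line.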
72,355,799
72,355,883
How do I actually delete the node(s)?
I am solving a question on LeetCode.com: Given the root of a binary tree, collect a tree's nodes as if you were doing this: a. Collect all the leaf nodes. b. Remove all the leaf nodes. c. Repeat until the tree is empty. For the input root = [1,2,3,4,5] (1 at the root, children 2 and 3, with 4 and 5 under 2), the output should be: [[4,5,3],[2],[1]]. This comes down to finding the height of each node of the tree: /** * Definition for a binary tree node. * struct TreeNode { * int val; * TreeNode *left; * TreeNode *right; * TreeNode() : val(0), left(nullptr), right(nullptr) {} * TreeNode(int x) : val(x), left(nullptr), right(nullptr) {} * TreeNode(int x, TreeNode *left, TreeNode *right) : val(x), left(left), right(right) {} * }; */ class Solution { public: vector<vector<int>> m; int dfs(TreeNode* root) { if(!root) return -1; int l=dfs(root->left); int r=dfs(root->right); int height=max(l,r)+1; if(height==m.size()) m.push_back({}); m[height].push_back(root->val); //delete root; return height; } vector<vector<int>> findLeaves(TreeNode* root) { dfs(root); return m; } }; The above code gets accepted, but that is because the OJ does not actually check if the nodes were deleted or not. My question is, how do I delete the nodes? Specifically: a. If I add delete (commented line above), I get a runtime error; b. I cannot just set root=nullptr like the Java folks, since C++ does not have garbage collection by default and so the root node would not really be deleted (the memory would continue to be occupied). c. I don't think we can delete the root node anywhere else. So how do I go about actually deleting the node? Thanks!
Your placement of delete is correct, but since we don't know how root is allocated we can't be sure whether delete is logically correct. BTW, delete deallocates the memory but it doesn't remove the pointers themselves. You can do root->left = root->right = nullptr to take care of that. Also, have findLeaves take a reference to the pointer and set root to nullptr after the call to dfs() to fully delete the tree.
72,356,247
72,356,633
Unicode to integer conversion visual studio bug
I'm trying to convert a Unicode character to an integer and encountered a bug in Visual Studio; I'm not sure if it's a bug or something I'm doing wrong. The project uses the Unicode character set, not multibyte. #include <windows.h> #include <iostream> int main() { constexpr int a = L'🦀'; printf("%i\n", a); std::cout << a << std::endl; return 0; } Problem: Hovering over variable 'a' shows that it's 129408 or 0x1F980, which is correct, but when it prints to the console I get 55358. I have created a new project and wrote the same code and it printed out the correct value, but after switching the same project from unicode to multibyte and back to unicode it produces this issue; not sure how to fix this.
Wide characters in Visual Studio are only 16 bits, meaning they won't hold a value greater than 65535. You're getting the first half of the character encoded in UTF-16, which is d83e dd80.
72,356,735
72,356,814
Struct of array global scope
I need a struct of array[length] to be seen by all my methods (global). The problem I have is that the struct needs to be initialized with a specific length inside a specific function. To be more precise, the initialization of the struct with the length of size has to happen when importantFunction() is called, and I need all my other functions to have access to the struct. This is the solution I've come up with. I'm unaware of any other way to make the struct global. I can't use the hashMap or vector class. This template is part of my own vector class. template<class A> class Table { public: struct hash { T value; int key; }; struct hash *hash1; //struct hash hash1[size]; I could do this but I don't have access to size from here. private: A *elements; int size; int capacity; }; template<class A> Table<A>::Table(const Table &otro) { //function that will use the struct of array } template<class A> void Table<A>::importantFunction() { hash1 = malloc(size); //struct hash hash1[size]; if I do this the scope of my struct hash1 is only within this function }
If using std::vector is not allowed, then you can use dynamic memory allocation, either manually using new and delete, or better, using smart pointers. But since you're not allowed to use std::vector, I suppose you're also not allowed to use smart pointers in your project. template<class A> class Table { public: struct hash { A value; int key; }; //--------vvvvvvvv-------------->a pointer to a hash object hash *ptrArray = nullptr; private: A *elements; int size; int capacity; }; //other member functions here template<class A> void Table<A>::importantFunction() { //-------------vvvvvvvvvvvvvvvv-->assign to ptrArray the pointer returned from new hash[size]() ptrArray = new hash[size](); } //don't forget to use delete[] in the destructor
72,356,820
72,357,195
Compiler can't find SDL2 classes. How to properly include and link them?
I just started venturing into C++. I downloaded this simple helicopter game and I'm trying to compile it, but I don't know how to properly include and link the SDL2 dependencies. My first approach was trying to compile it with gcc. I got to the following command: gcc main.cpp ^ -I C:\code\SDL2\SDL2-2.0.22\include ^ -I C:\code\SDL2\SDL2_image-2.0.5\include ^ -I C:\code\SDL2\SDL2_mixer-2.0.4\include ^ -I C:\code\SDL2\SDL2_ttf-2.0.18\include ^ -L C:\code\SDL2\SDL2-2.0.22\lib\x64 ^ -L C:\code\SDL2\SDL2_image-2.0.5\lib\x64 ^ -L C:\code\SDL2\SDL2_mixer-2.0.4\lib\x64 ^ -L C:\code\SDL2\SDL2_ttf-2.0.18\lib\x64 But the compiler complains about not being able to find classes and functions from the SDL2 libs. For instance, the first of the many errors it gives is In file included from heli.h:4:0, from main.cpp:7: loader.h: In function 'SDL_Surface* load_image(std::__cxx11::string, int)': loader.h:27:57: error: 'SDL_DisplayFormat' was not declared in this scope optimizedImage = SDL_DisplayFormat( loadedImage ); I then tried to turn the code base into a Visual Studio project and configured it following this tutorial, basically having the following configuration: Include Directories: C:\code\SDL2\SDL2-2.0.22\include;C:\code\SDL2\SDL2_image-2.0.5\include;C:\code\SDL2\SDL2_mixer-2.0.4\include;C:\code\SDL2\SDL2_ttf-2.0.18\include;$(IncludePath) Library Directories: C:\code\SDL2\SDL2-2.0.22\lib\x64;C:\code\SDL2\SDL2_image-2.0.5\lib\x64;C:\code\SDL2\SDL2_mixer-2.0.4\lib\x64;C:\code\SDL2\SDL2_ttf-2.0.18\lib\x64;$(LibraryPath) Linker/Input/Additional Dependencies: SDL2.lib;SDL2main.lib;SDL2_image.lib;SDL2_mixer.lib;SDL2_ttf.lib;%(AdditionalDependencies) But then I got the exact same missing-dependency errors, such as "identifier 'foo' not found" or "'bar': undeclared identifier". What am I missing about including and linking C++ dependencies?
The source code of the game references SDL 1, not SDL2. Function names and implementations have changed since version 1. In fact, if you download SDL and SDL2 and look at the SDL_video.h files in both versions, you will see that SDL_DisplayFormat is in the SDL_video header of version 1 but not in version 2. As for the include procedure, the best way to go about it is to set it up in the IDE that you are using. For example, in Visual Studio, you can do that by adding the directories in "Solution Properties -> General Properties -> C/C++ -> All Options -> Additional Include Directories". Otherwise, I believe you need to know the right order in which to link the libraries or gcc will complain. Check this discussion: Why does the order in which libraries are linked sometimes cause errors in GCC?
72,356,890
72,357,086
How to define the different template structs with different enums
I've two enums as below: enum TTT {t = 2}; enum XXX {x = 2}; I'm trying to make some struct to help me get the name of an enum. Here is what I've done: template<TTT> struct StrMyEnum { static char const* name() { return "unknown"; } }; template<> struct StrMyEnum<t> { static char const* name() { return "tt"; } }; It works. Now I can get the name of the enum t: StrMyEnum<t>::name(). However, if I write another group of template structs for XXX, it doesn't seem to work. template<XXX> struct StrMyEnum { static char const* name() { return "unknown"; } }; template<> struct StrMyEnum<x> { static char const* name() { return "xx"; } }; Now I get a compile error: could not convert 'x' from 'XXX' to 'TTT'. It seems that the compiler is trying to match the StrMyEnum<x> with TTT... I don't know why. So template can't distinguish the different enums? Is it possible to write some structs for different enums?
StrMyEnum is the name of the template. You can have one general declaration for it, and then some specializations. But you cannot have a second set of declaration + specializations (like you did with the set for XXX). What you can do is have one template parametrized with both the enum type and the enum value. Then you can have 2 levels of specializations: For a specific value of a specific enum. Default for a specific enum. See the code below: #include <iostream> enum TTT { t = 2, t2 = 3 }; enum XXX { x = 2, x2 = 3 }; template <typename T, T val> struct StrMyEnum { static char const* name() { return "unknown"; } }; // Default specialization for TTT: template <TTT t> struct StrMyEnum<TTT, t> { static char const* name() { return "TTT"; } }; // Specialization for TTT::t: template <> struct StrMyEnum<TTT, t> { static char const* name() { return "TTT::t"; } }; // Specialization for XXX::x: template <> struct StrMyEnum<XXX, x> { static char const* name() { return "XXX::x"; } }; int main() { std::cout << StrMyEnum<TTT, t2>().name() << std::endl; std::cout << StrMyEnum<TTT, t>().name() << std::endl; std::cout << StrMyEnum<XXX, x2>().name() << std::endl; std::cout << StrMyEnum<XXX, x>().name() << std::endl; return 0; } Output: TTT TTT::t unknown XXX::x Demo: https://godbolt.org/z/zW547T9nf.
72,356,992
72,357,450
Use multiple sampler2D over one input texture in OpenGL
Now I have a noise texture generated by this website: https://aeroson.github.io/rgba-noise-image-generator/. I want to use 4 uniform samplers in my computing shader to get 4 random rgba values from a single noise texture. My computing shader source codes look like: #version 430 core layout (local_size_x = 1, local_size_y = 1, local_size_z = 1) in; uniform sampler2D noise_r0; uniform sampler2D noise_i0; uniform sampler2D noise_r1; uniform sampler2D noise_i1; layout (binding = 0, rgba32f) writeonly uniform image2D tilde_h0k; layout (binding = 1, rgba32f) writeonly uniform image2D tilde_h0minusk; uniform int N = 256; .... // Box-Muller algorithm vec2 texCoord = vec2(gl_GlobalInvocationID.xy) / float(N); // Here every invocation refers to a pixel of output image float noise00 = clamp(texture(noise_r0, texCoord).r, 0.001, 1.0); float noise01 = clamp(texture(noise_i0, texCoord).r, 0.001, 1.0); float noise02 = clamp(texture(noise_r1, texCoord).r, 0.001, 1.0); float noise03 = clamp(texture(noise_i1, texCoord).r, 0.001, 1.0); .... and in the main program, I use this code to upload my downloaded noise texture to the computing shader: unsigned int tex_noise; glGenTextures(1, &tex_noise); glBindTexture(GL_TEXTURE_2D, tex_noise); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); int w_noise, h_noise, nrChannels; unsigned char* data = stbi_load("noise.png", &w_noise, &h_noise, &nrChannels, 0); if (data) { glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w_noise, h_noise, 0, GL_RGBA, GL_UNSIGNED_BYTE, data); glGenerateMipmap(GL_TEXTURE_2D); } else std::cout << "Failed to load noise texture." << std::endl; stbi_image_free(data); .... .... My question is: Can I set up those sampler2D's in computing shader using this code? 
glUseProgram(computeProgram); glActiveTexture(GL_TEXTURE1); glUniform1i(glGetUniformLocation(computeProgram, "noise_r0"), 1); glActiveTexture(GL_TEXTURE2); glUniform1i(glGetUniformLocation(computeProgram, "noise_i0"), 2); glActiveTexture(GL_TEXTURE3); glUniform1i(glGetUniformLocation(computeProgram, "noise_r1"), 3); glActiveTexture(GL_TEXTURE4); glUniform1i(glGetUniformLocation(computeProgram, "noise_i1"), 4); If it is wrong, what should I do to set up those sampler2D's, and make sure that the random rgba values I get from those sampler2D's are not the same? (cause if they are the same, the Box-Muller algorithm won't work). Thanks so much for your help!
The type of the uniform is image2D, not sampler2D. To load and store an image, you must bind the texture to an image unit using glBindImageTexture. See Image Load Store. e.g.: glBindImageTexture(1, tex_noise, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F); If you want to bind a texture to a texture unit you need to select the active texture unit with glActiveTexture and then bind the texture with glBindTexture: glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, texture_object); You can access this texture in the shader with a uniform of type sampler2D and the texture* functions: layout(binding = 1) uniform sampler2D myTexture; If you want to bind a texture to an image unit, you have to use glBindImageTexture: glBindImageTexture(1, texture_object, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F); You can access this texture in the shader with a uniform of type image2D and the image* functions: layout (binding = 1, rgba32f) writeonly uniform image2D myImage;
72,357,168
72,357,291
C++ Code doesn't output in Visual Studio Code when certain data structures are used
I'm running into a weird bug in Visual Studio Code - I'm writing code in C++ using standard extensions (C/C++ extension pack) and if I write a simple program like this, it works fine: int main() { #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); #endif cout << "Hello World!" << endl; } However, the second I declare a variable using STD include statements, such as a map, vector, etc., the code runs without errors but doesn't print anything. int main() { #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); #endif vector<int> test; cout << "Hello World!" << endl; // this no longer prints } Any ideas why this error is occurring? I have all the include statements I need as well: #include <bits/stdc++.h> #include <complex> #include <queue> #include <set> #include <unordered_set> #include <list> #include <chrono> #include <random> #include <iostream> #include <algorithm> #include <cmath> #include <string> #include <vector> #include <map> #include <unordered_map> #include <stack> #include <iomanip> #include <fstream> using namespace std;
I tested your code, and it ran as expected. Some ideas: When ONLINE_JUDGE is not defined, freopen("output.txt", "w", stdout); redirects the output to output.txt. Maybe you looked at the console instead of checking the file the second time you ran your program. Besides, since you were using Visual Studio Code, maybe you used an extension to run your program. The extension may not use your current working directory to run the program, which would place output.txt elsewhere. Also, check whether ONLINE_JUDGE is set by the extension or your script. BTW, if you use g++, #include <bits/stdc++.h> includes all headers of the standard library, so there is no need to include the other headers. But doing so will dramatically increase your compilation time.
72,357,225
72,357,290
OpenCV C++ memory leak issue
Just see the below code snippet - # include "opencv4/opencv2/opencv.hpp" # include "iostream" int main() { while (true) { cv::Mat* mat = new cv::Mat(2000, 2000, CV_8UC3); std::cout << "mat size" << mat->size() << std::endl; mat->release(); std::cout << "mat size after" << mat->size() << std::endl; } } Problem after running is - ram keep filling. I have 48 gb of ram, which got filled in just some minutes as the loop runs. If i am releasing the memory, then why it keeps acquiring my ram.
A cv::Mat object contains metadata (width, height etc.) and a pointer to the image data. As you can see in the link, the cv::Mat::release method frees the memory allocated for the cv::Mat data (assuming the ref-count is 0). It does not free the memory for the cv::Mat object itself (i.e. the instance of the class containing the medadata and data pointer). In your case the object is allocated on the heap using new and therfore should be freed with a corresponding delete. However - it is not clear why you use new at all. You can have the cv::Mat on the stack as an automatic variable: cv::Mat mat(2000, 2000, CV_8UC3); This way it's destructor (and deallocation) will be called automatically at the end of the scope. Note that you can still use release if you need to free the data pointed by the cv::Mat object manually. In your case above it is not needed because the destrcutor of cv::Mat will take care of it for you.
72,357,301
72,357,384
What exactly does most specialized class mean in C++?
Let's say we have the following: template<typename T1, typename T2> class A {} template<typename T1, typename T2> class A<T1*, T2*> {} template<typename T> class A<T, T> {} Now, I know that we need to select the most specialized class, but for A<double*, double*>, there is a ambiguity error for both specializations and my teacher told that they are having the same specialization level. But at the first look, I would have said A<T, T> is more specialized. How exactly did we come to this conclusion? Or how can we say if 2 different specializations are at same level or not?
First, regarding terminology: Each of these definitions are not definitions for classes. The first definition defines a primary class template. The other definitions define partial specializations of that primary template. It is not possible to categorize partial specializations by "levels" of specialization. The actual rules are rather complicated and a reference can be found e.g. here. But roughly, you can consider a partial specialization more specialized than another, if the latter would accept all template argument lists that the former would accept, but not the other way around. In your case the first specialization accepts A<int*, long*>, but the second one doesn't, and the second one accepts A<int, int>, but the first one doesn't. So informally, we can see that neither is more specialized than the other. The primary template is not more specialized than the partial specializations either, so A<double*, double*> which would be accepted by all three without one of them being more specialized than all the others, is ambiguous and causes the program to be ill-formed.
72,358,050
72,377,912
Avoid reusing thread ids in C++
I noticed that based on this, Linux reuses the thread ids of terminated threads instead of generating new ones. For some reason, I need to avoid this behavior. How can I make sure that newly created threads, will have a freshly generated thread id instead of reusing the old ones? (Update for interested people: I'm working on a DNN scheduler for GPU using PyTorch's C++ API, I need to create a new thread to call each layer/operation, and whenever the newly created thread shares the thread id with a terminated thread, I get CUDNN_STATUS_MAPPING_ERROR. I have reached this after a long time and if I can create threads with unique ids, I might be able to track down the main reason behind this.) Update 2: POSIX Thread avoids generating new thread ids (thread objects in glibc implementation) as long as there are terminated threads to reuse, I want to avoid this behavior. Maybe somehow deallocating terminated thread would solve this problem. But I don't know how. Update 3: Based on lines 84-97 in link, Linux tends to reuse previously allocated but terminated threads. Is it somehow possible to deallocate these threads to prevent from reusing previous thread ids?
There is a way to prevent the stack allocation of terminated threads from being reused: you have to self-allocate the stack memory. pthread_attr_setstack can help here. Notice that this adds complexity - handling stack overflow becomes the responsibility of the API user. Following are some tests that I made by playing around with the POSIX thread library Created thread id: 139938008069888 in __pthread_create_2 Created thread id: 139937999677184 in __pthread_create_2 Created thread id: 139937999677184 in __pthread_create_2 Created thread id: 139938008069888 in __pthread_create_2 Thread 1 : db42f700 Thread 2 : dac2e700 Thread 3 : dac2e700 Thread 4 : db42f700 As the result shows, the stacks of the first two threads are reused for threads 3 and 4. With the self-allocated stack Created thread id: 139891916830464 in __pthread_create_2 Set stackaddr to 139891916849184 Set stacksize to 32768 Created thread id: 139891916879616 in __pthread_create_2 Set stackaddr to 139891916898352 Set stacksize to 32768 Created thread id: 139891916928768 in __pthread_create_2 Set stackaddr to 139891916947520 Set stacksize to 32768 Created thread id: 139891916977984 in __pthread_create_2 Thread 1 : 139891916830464 Thread 2 : 139891916879616 Thread 3 : 139891916928768 Thread 4 : 139891916977984
72,358,267
72,358,367
How to get the true disk usage of a sparse file on windows?
I am watching a book called "windows via c/c++". It says a programmer can create a sparse file with VirtualAlloc() on a FileMapping. And I can see that this sparse file taking 1MB in file's properties in the book. And the book say it only takes 64KB on disk actually . So how can I get the actual size of a sparse file? Beside, I create a spare file with code as follow: #include <iostream> #include <Windows.h> int main() { using namespace std; HANDLE fileHandle = CreateFile(TEXT("D:\\sfile.txt"), GENERIC_READ | GENERIC_WRITE, 0, nullptr, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr); if(fileHandle==INVALID_HANDLE_VALUE) { cout << "INVALID FILE HANDLE" << endl; return 1; } HANDLE hFileMapping = CreateFileMapping(fileHandle, nullptr, PAGE_READWRITE|SEC_RESERVE, 0, 4*1024 * 1024, nullptr); if(hFileMapping==INVALID_HANDLE_VALUE||hFileMapping==NULL) { cout << "INVALID FILE MAPPING HANDLE" << endl; return 1; } PVOID view = MapViewOfFile(hFileMapping, FILE_MAP_READ | FILE_MAP_WRITE, 0, 0, 1024 * 1024); if(view == nullptr) { cout << "map failed" << endl; return 1; } PVOID revisePos = static_cast<PLONG_PTR>(view) + 512; auto allocatePos = (char*)VirtualAlloc(revisePos, 1024 * 1024, MEM_COMMIT, PAGE_READWRITE); char mess[] = "123456789 hello world!"; strcpy_s(static_cast<char*>(revisePos), sizeof(mess), mess); UnmapViewOfFile(view); CloseHandle(hFileMapping); CloseHandle(fileHandle); } It will create 4MB file, although I just try to VirtualAlloc 1MB . It seems that this file actually take 4MB on windows by watching file's properties. Why window's won't compress a sparse file? If windows won't compress it, when do I need a sparse file.
You should check the documentation rather than rely on the book. You create a file mapping backed by the normal file with the PAGE_READWRITE|SEC_RESERVE options. The file size is supposed to increase to 4 MB to match the size of the mapping object: If an application specifies a size for the file mapping object that is larger than the size of the actual named file on disk and if the page protection allows write access (that is, the flProtect parameter specifies PAGE_READWRITE or PAGE_EXECUTE_READWRITE), then the file on disk is increased to match the specified size of the file mapping object. SEC_RESERVE has no effect at all: This attribute has no effect for file mapping objects that are backed by executable image files or data files (the hfile parameter is a handle to a file).
72,358,544
72,358,860
Abstract class with template function issue
I have an abstract struct I with method a. B and B2 will inherit from it. X struct has an I type member and will instantiate it via createInsance template method based on type. I want to have on B2 an additional function b2Exclusive but I got compilation error that it is not present in A. error: ‘using element_type = struct B’ {aka ‘struct B’} has no member named ‘b2Exclusive’ Is any way for solving this without defining b2Exclusive for B as well and to keep to structure this way? #include <iostream> #include <memory> using namespace std; struct I { virtual void a() = 0; }; struct B : public I { B() { std::cout<<"B\n"; } void a() { std::cout<<"-a from B\n"; } }; struct B2 : public I { B2() { std::cout<<"B2\n"; } void a() { std::cout<<"-a from B2\n"; } void b2Exclusive() { std::cout<<"-something for B2\n"; } }; using Iptr = std::shared_ptr<I>; struct X { void createI() { if (type == "B") { createInstance<B>(); } else { createInstance<B2>(); } } template <typename T> void createInstance() { auto i = std::make_shared<T>(); if (type == "B2") { i->b2Exclusive(); } } std::string type = "None"; }; int main() { X x; x.type = "B2"; x.createI(); return 0; }
You can only call b2Exclusive if the template function is instantiated with the type B2. One way to do so is to create a specialization for that type, for example: struct X { void createI(); template <typename T> void createInstance() { //do something } std::string type = "None"; }; template<> void X::createInstance<B2> () { auto i = std::make_shared<B2>(); i->b2Exclusive(); } void X::createI() { if (type == "B") { createInstance<B>(); } else { createInstance<B2>(); } } int main() { X x; x.type = "B2"; x.createI(); return 0; }
72,358,852
72,358,973
Int subtraction from a string
Why doesn't the code below give an error that says the array is out of range? #include <iostream> int main() { std::cout << "Hello, world!" - 50; return 0; }
It doesn't give an error because it's your job to make sure you honor array bounds. Like many other things, this is impossible to always detect, so it's left up to the user not to do it. Compilers have gotten better at warning about it though: <source>:4:36: warning: offset '-50' outside bounds of constant string [-Warray-bounds] <source>:5:36: warning: array subscript 50 is outside array bounds of 'const char [14]' [-Warray-bounds] Both underflow and overflow on a constant string are detected just fine by gcc and clang.
72,358,922
72,372,395
Java, C++ JNI - java.lang.UnsatisfiedLinkError : No .dylib in java.library.path
I was trying to link java and c++ code using java JNI but when I ran the java file using dylib path, getting an error that no file was available. However, my lib file is available in the current working directory. Also, I tried moving same dylib to /Library/Java/Extensions but still the same error. Java File: JNIJava.java public class JNIJava { static { System.loadLibrary("JNI_CPP"); } public native void printString(String name); public static void main(final String[] args) { JNIJava jniJava = new JNIJava(); jniJava.printString("Invoked C++ 'printString' from Java"); } } Header file : JNIJava.h /* DO NOT EDIT THIS FILE - it is machine generated */ #include "/Library/Java/JavaVirtualMachines/jdk1.8.0_261.jdk/Contents/Home/include/jni.h" /* Header for class JNIJava */ #ifndef _Included_JNIJava #define _Included_JNIJava #ifdef __cplusplus extern "C" { #endif /* * Class: JNIJava * Method: printString * Signature: (Ljava/lang/String;)V */ JNIEXPORT void JNICALL Java_JNIJava_printString (JNIEnv *, jobject, jstring); #ifdef __cplusplus } #endif #endif C++ file : JNIJava.cpp #include <iostream> #include "JNIJava.h" using namespace std; JNIEXPORT void JNICALL Java_JNIJava_printString(JNIEnv *env, jobject jthis, jstring string) { const char *stringInC = env->GetStringUTFChars(string, NULL); if (NULL == stringInC) return; cout << stringInC << endl; env->ReleaseStringUTFChars(string, stringInC); } Used the below commands to link and run the code : javac JNIJava.java -h . g++ -dynamiclib -O3 \ -I/Library/Java/JavaVirtualMachines/jdk1.8.0_261.jdk/Contents/Home/include \ -I/Library/Java/JavaVirtualMachines/jdk1.8.0_261.jdk/Contents/Home/include \ JNIJava.cpp -o JNI_CPP.dylib java -cp . 
-Djava.library.path=$(pwd) JNIJava When I do ls: JNIJava.class JNIJava.cpp JNIJava.h JNIJava.java JNI_CPP.dylib Error : Exception in thread "main" java.lang.UnsatisfiedLinkError: no JNI_CPP in java.library.path: /Users/tkapadn/Documents/Documents_Data/Lens-Eclipse-Workspace/Java_JNI/CPP at java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2447) at java.base/java.lang.Runtime.loadLibrary0(Runtime.java:809) at java.base/java.lang.System.loadLibrary(System.java:1893) at JNIJava.<clinit>(JNIJava.java:3) Here is a screenshot of the error (image omitted). Note - I tried linking C with Java using JNI and I was successfully able to run the java file. Java Version - jdk1.8.0_261, System - macOS Big Sur (11.6.1) Please provide your suggestions.
Instead of using 'System.loadLibrary("")', use 'System.load("lib path")' and check if the library is working. Then you can move it to the java library path. For example (tested and working): try { String libPath = System.getProperty("user.dir") + System.getProperty("file.separator") + System.mapLibraryName("JNI_CPP"); System.load(libPath); } catch (Exception e) { System.out.println(e.toString()); }
72,359,283
72,366,191
IntelliSense VSCode show "name must be a namespace nameC/C++(725)" on "using namespace nvcuda;"
Hi everyone actually I'm programming on Cuda and I'm testing a simple tensor core example, but i have a problem with intelliSense, practically its show me errore on this commands (see image) and I dont know why, because when i compile and run programm (with tasks) its work correctly, some ideas ? Errors: Entire code: #include <stdio.h> #include <stdlib.h> #include <cuda.h> #include <mma.h> using namespace nvcuda; // The only dimensions currently supported by WMMA const int WMMA_M = 16; const int WMMA_N = 16; const int WMMA_K = 16; __global__ void wmma_example(half *a, half *b, float *c, int M, int N, int K, float alpha, float beta) { // Leading dimensions. Packed with no transpositions. int lda = M; int ldb = K; int ldc = M; // Tile using a 2D grid int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize; int warpN = (blockIdx.y * blockDim.y + threadIdx.y); // Declare the fragments wmma::fragment<wmma::matrix_a, WMMA_M, WMMA_N, WMMA_K, half, wmma::col_major> a_frag; wmma::fragment<wmma::matrix_b, WMMA_M, WMMA_N, WMMA_K, half, wmma::col_major> b_frag; wmma::fragment<wmma::accumulator, WMMA_M, WMMA_N, WMMA_K, float> acc_frag; wmma::fragment<wmma::accumulator, WMMA_M, WMMA_N, WMMA_K, float> c_frag; wmma::fill_fragment(acc_frag, 0.0f); // Loop over the K-dimension for (int i = 0; i < K; i += WMMA_K) { int aRow = warpM * WMMA_M; int aCol = i; int bRow = i; int bCol = warpN * WMMA_N; // Bounds checking if (aRow < M && aCol < K && bRow < K && bCol < N) { // Load the inputs wmma::load_matrix_sync(a_frag, a + aRow + aCol * lda, lda); wmma::load_matrix_sync(b_frag, b + bRow + bCol * ldb, ldb); // Perform the matrix multiplication wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag); } } // Load in current value of c, scale by beta, and add to result scaled by alpha int cRow = warpM * WMMA_M; int cCol = warpN * WMMA_N; if (cRow < M && cCol < N) { wmma::load_matrix_sync(c_frag, c + cRow + cCol * ldc, ldc, wmma::mem_col_major); for(int i=0; i < c_frag.num_elements; i++) { 
c_frag.x[i] = alpha * acc_frag.x[i] + beta * c_frag.x[i]; } // Store the output wmma::store_matrix_sync(c + cRow + cCol * ldc, c_frag, ldc, wmma::mem_col_major); } }
Set CUDA_ARCH variable in c_cpp_properties as follow: { "configurations": [ { "name": "Linux", "includePath": [ "${workspaceFolder}/**" ], "defines": [ "__CUDA_ARCH__=750" ], "compilerPath": "/usr/bin/gcc", "cStandard": "gnu17", "cppStandard": "gnu++17", "intelliSenseMode": "linux-gcc-x64" } ], "version": 4 }
72,359,355
73,552,753
Extend display range of ArrayItems/IndexListItems using natvis
I am trying to visualize a memory content using natvis which is pointed by a pointer. I have also tried to declare the memory as a vector. But every time the problem I am facing is that, during debugging the visualizer can show only first 50 entry. I am giving here a very minimal example. Suppose, the pointer_array is a member of Foo class. In the driver file an array of size 5000 is created which is pointed by the array. I would like to observe the value of the array with the variable pointer_array. Also I have tried to understand how natvis reacts with std::vector and that's why as a member variable a vector (foo_vec) is also declared. foo.h: #include <iostream> #include <vector> class Foo { public: Foo(){} uint32_t *pointer_array; std::vector<uint32_t> foo_vec; }; main.cpp: #include "foo.h" # define ARRAY_SIZE 5000 int main() { Foo obj_1; uint32_t foo_array[ARRAY_SIZE]; for(int i = 0; i < ARRAY_SIZE; i++) { foo_array[i] = i*2; } obj_1.pointer_array = foo_array; for(uint32_t i = 0; i < ARRAY_SIZE; i++) { obj_1.foo_vec.push_back(i*3); } return 0; } The following natvis file I have used. <?xml version="1.0" encoding="utf-8"?> <AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010"> <Type Name="Foo"> <DisplayString>Testing_Natvis</DisplayString> <Expand> <ArrayItems> <Size>5000</Size> <ValuePointer>pointer_array</ValuePointer> </ArrayItems> <!-- Tested with IndexListItems but failed to fetch all data, still only first 49 entry --> <!-- <IndexListItems> <Size>5000</Size> <ValueNode>pointer_array[$i]</ValueNode> </IndexListItems> --> <!-- Same result as like as pointer_array. 
Only first 49 entry is appeared --> <!-- <IndexListItems> <Size>foo_vec.size()</Size> <ValueNode>foo_vec[$i]</ValueNode> </IndexListItems> --> <!-- <ArrayItems> <Size>foo_vec.size()</Size> <ValuePointer>&amp;foo_vec[0]</ValuePointer> </ArrayItems> --> </Expand> </Type> </AutoVisualizer> In the launch.json I have added extra only the following two lines: "visualizerFile": "${workspaceFolder}/natvis_file/file.natvis", "showDisplayString": true, For better understanding I am giving here a screenshot of the output where in natvis file I have used IndexListItems and given size 80 to see value from index 0 to 79 but the displayed last value is from index 49. And the following is showing that I have given the size value 6 and natvis perfectly is showing value from index 0 to 5. Any workaround to achieve all entry of the memory using Natvis?
According to this release the problem is solved, although there is still a bug when displaying more than 1000 values using the ArrayItems node. See this issue to get more information. This display range limitation is gone if IndexListItems is used. The following snippet can be used to see more than 1000 elements: <IndexListItems> <Size>5000</Size> <ValueNode>pointer_array[$i]</ValueNode> </IndexListItems>
72,359,915
72,360,105
Is there a way to see what's inside the stdio.h or how it's implemented?
Is there a way to see what's inside stdio.h or how it's implemented? I learned that the standard functions are declared in the stdio.h file, but I can't find it on my computer. Plus, I heard that there is another file where the bodies of the functions are written, called the stdio.c file. Can anyone tell me WHERE this file is on my computer (I am using the gcc compiler), or any way to see how it is implemented?
As far as I know the C++ header files are stored in C:\Program Files (x86)\Windows Kits\10\Include\"some_version"\ucrt on Windows, and in /usr/include on Linux. There you can find the stdio.h file and any other of the standard C++ header files. Note that stdio.h only contains the declarations; the function bodies live in the C runtime library (for example glibc on Linux), whose sources are not installed alongside the compiler. Otherwise, looking on the internet for the stdio.h / C library source code is also an option.
72,360,372
72,365,328
Including a C++ library into another?
I'm trying to build a c++ library, which will be using itself another library. I would like to output at the end a single .so file, so it is easily copied and used in any other project. In this library I am using another library, GLFW. Now, I can create my library fine, but when I am using it I am getting linking errors, where the GLFW functions are not defined. This makes me think that the GLFW lib is not exported with my library. I've seen this that seemed to be a solution, but i gave me lot of duplicate symbol errors. I'm quite a beginner with cmake, so maye there is something obvious I'm not seeing. Here is my CMakeLists.txt : cmake_minimum_required(VERSION 3.22) project(MyLib) set(CMAKE_CXX_STANDARD 23) # define folders path get_filename_component(ROOT_DIR ${CMAKE_CURRENT_LIST_FILE} PATH) set(HEADER "${ROOT_DIR}/include") set(SRCS_PATHS "${ROOT_DIR}/src") set(TESTS_SRC "${ROOT_DIR}/tests") # add dependencies set(DEP_HEADERS "${ROOT_DIR}/dependencies/GLFW/include") # set the project sources and headers files include_directories(${HEADER}) include_directories(${DEP_HEADERS}) set(SRCS [...]) add_library(MyLib SHARED ${SRCS}) # set the project property linker language set_target_properties(MyLib PROPERTIES LINKER_LANGUAGE CXX) # target tests add_executable(window ${TESTS_SRC}/window.cpp) target_link_libraries(window MyLib) I've seen I'm not the only one with this issue, but most of the answers I've tried won't work and lead to the same problem.
From what I can deduce from your CMakeLists.txt, you should do something like this (I don't like vendoring and I'm not an expert in this approach, so maybe there is something more elegant): cmake_minimum_required(VERSION 3.20) project(MyLib) # glfw static PIC set(CMAKE_POSITION_INDEPENDENT_CODE_SAVED ${CMAKE_POSITION_INDEPENDENT_CODE}) set(BUILD_SHARED_LIBS_SAVED ${BUILD_SHARED_LIBS}) set(CMAKE_POSITION_INDEPENDENT_CODE ON) set(BUILD_SHARED_LIBS OFF) add_subdirectory(dependencies/GLFW EXCLUDE_FROM_ALL) set(CMAKE_POSITION_INDEPENDENT_CODE ${CMAKE_POSITION_INDEPENDENT_CODE_SAVED}) set(BUILD_SHARED_LIBS ${BUILD_SHARED_LIBS_SAVED}) # MyLib add_library(MyLib SHARED [...]) target_include_directories(MyLib PUBLIC $<BUILD_INTERFACE:${PROJECT_SOURCE_DIR}/include>) target_link_libraries(MyLib PRIVATE glfw) target_compile_features(MyLib PUBLIC cxx_std_23) # Tests add_executable(window tests/window.cpp) target_link_libraries(window PRIVATE MyLib) target_compile_features(window PRIVATE cxx_std_23) But honestly it's bad to hardcode all this information in a CMakeLists; you should have a generic CMakeLists and avoid vendoring: cmake_minimum_required(VERSION 3.20) project(MyLib) # MyLib find_package(glfw3 REQUIRED) add_library(MyLib [...]) target_include_directories(MyLib PUBLIC $<BUILD_INTERFACE:${PROJECT_SOURCE_DIR}/include>) target_link_libraries(MyLib PRIVATE glfw) target_compile_features(MyLib PUBLIC cxx_std_23) # Tests add_executable(window tests/window.cpp) target_link_libraries(window PRIVATE MyLib) target_compile_features(window PRIVATE cxx_std_23) And then you would decide at build time how to build each lib: // build & install glfw once, as static PIC (glfw is not vendored in MyLib source code here) cd <glfw_source_dir> cmake -B build -S . -DCMAKE_BUILD_TYPE=Release -DCMAKE_POSITION_INDEPENDENT_CODE=ON -DBUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=<glfw_install_dir> cmake --build build cmake --build build --target install // build MyLib as shared cd <mylib_source_dir> cmake -B build -S . -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_PREFIX_PATH=<glfw_install_dir> cmake --build build
72,360,602
72,363,006
Fast 1D Convolution with Eigen C++?
Suppose I have two data arrays: double data[4096] = { .... }; double b[3] = {.25, .5, .25}; I would like a fast and portable implementation of convolution. To use NumPy syntax result = numpy.convolve(data, b, "same") Kernel size is small, 3 or 5 and I may have to convolve with a kernel with zeros (giving scope maybe for further optimisations). double b[5] = {.25, .0, .5, .0, .25}; I have a feeling Eigen C++ has optimised code for this, but I can't figure out how to use it. Alternatively, are there other libraries with a portable implementation of convolution, ideally optimised for common platforms?
Armadillo should have you covered. An Eigen implementation may look like this: Eigen::VectorXd convolve(const Eigen::Ref<const Eigen::VectorXd>& in, const Eigen::Vector3d& weights) { const Eigen::Index innersize = in.size() - 2; Eigen::VectorXd out(in.size()); out.segment(1, innersize) = in.head(innersize) * weights.x() + in.segment(1, innersize) * weights.y() + in.tail(innersize) * weights.z(); // Treat borders separately return out; } Unroll similarly for 5 weights.
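Since the question asks for something portable, here is a plain-C++ reference sketch (no Eigen) of numpy.convolve(data, b, "same"): zero-padded at the borders, centered, with the kernel flipped as convolution requires. It is useful for validating an optimised Eigen version against; the 5-tap kernel with zeros from the question needs no special handling here, although an optimised version could skip the zero taps.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Naive zero-padded "same" convolution, matching numpy.convolve(data, b, "same").
// Assumes an odd kernel length so the output aligns with the input.
std::vector<double> convolve_same(const std::vector<double>& data,
                                  const std::vector<double>& kernel)
{
    const std::ptrdiff_t n = static_cast<std::ptrdiff_t>(data.size());
    const std::ptrdiff_t k = static_cast<std::ptrdiff_t>(kernel.size());
    const std::ptrdiff_t half = k / 2;
    std::vector<double> out(data.size(), 0.0);
    for (std::ptrdiff_t i = 0; i < n; ++i) {
        double acc = 0.0;
        for (std::ptrdiff_t j = 0; j < k; ++j) {
            const std::ptrdiff_t src = i + half - j;  // convolution flips the kernel
            if (src >= 0 && src < n)                  // zero padding at the borders
                acc += data[src] * kernel[j];
        }
        out[i] = acc;
    }
    return out;
}
```

For a symmetric kernel such as {.25, .5, .25} the kernel flip makes no difference, which is why the Eigen segment formulation above gives the same interior values.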
72,361,092
72,361,743
Passing parameter pack in constexpr
i am trying to determine the size of all passed objects at compile time and then abort the build process via static_assert when a maximum size is exceeded. #include <iostream> template<class T> class Test { public: T value; constexpr size_t size() const { return sizeof(T) + 3; } }; template<typename ...T> constexpr int calc(const T&...args) { return (args.size() + ...); } template<typename ...T> void wrapper(const T& ...args) { // error: 'args#0' is not a constant expression constexpr int v = calc(args...); static_assert(v <= 11, "oops"); } int main() { Test<int> a; Test<char> b; // a.size() + b.size() == 11 // works constexpr int v = calc(a, b); static_assert(v <= 11, "oops"); // wrapper function wrapper(a, b); } run on godbolt it works perfectly if i call the calculation function directly with the objects. but if i use a wrapper function and pass the parameter pack, suddenly the parameters don't seem to be constant anymore. does anyone know how i can fix this problem?
Function arguments are not constexpr expressions (for good reasons) even if part of constexpr or consteval functions. If you are willing to make Test::size static, independent of objects: #include <iostream> template<class T> class Test { public: T value; constexpr static size_t size() { return sizeof(T) + 3; } }; template<typename ...T> constexpr size_t calc_types() { return (T::size() + ...); } template<typename ...T> constexpr size_t calc_vals(const T&...) { return calc_types<T...>(); } template<typename ...T> constexpr void wrapper_types() { static_assert(calc_types<T...>() <= 11, "oops"); } template<typename ...T> constexpr void wrapper_vals(const T&...) { wrapper_types<T...>(); } int main() { Test<int> a; Test<char> b; // a.size() + b.size() == 11 // works constexpr int v = calc_vals(a, b); static_assert(v <= 11, "oops"); // wrapper function wrapper_vals(a, b); }
72,361,515
72,430,374
Getting huge random numbers error while trying to get the maximum element in array - C++ error
I am making a program in which the program takes 3 numbers as input: "l", "r" and "a". I get all the values of "x" between l and r, (l and r inclusive). example, l = 1, r = 3, x values are 1, 2, 3. so now I have a function, f(n) = ((x/a) + (x % a)),(note: [x/a] is rounded down to an integer). so I have implemented this in c++ and my code is below. #include<iostream> using namespace std; int main() { int l; int r; int a; cin>>l>>r>>a; int nums[(r-l)+2]; int answers[(r-l)+2]; for (int i = 1; i < (r-l)+2; i++) { nums[i] = i; } for (int i = 1; i < sizeof(nums)/sizeof(nums[0])-1; i++) { answers[i] = ((nums[i]/a) + (nums[i] % a)); } int j = 0; j = answers[0]; for (int i = 0; i < sizeof(answers); i++) { if (j < answers[i]) { j = answers[i]; } } cout<<j; } but whenever I run this code, I get huge random numbers like 230984084 and all.So please point out what's wrong with my Code. Thanks in advance.
Request from Levi to post my comment as answer: The point is that one of the successes of C++ is that it started from C, but it moved on quite a bit and the code in your question is still mostly C-code. Here is an example how it could be handled in C++ with more knowledge needed but less chance for errors: #include <iostream> #include <vector> #include <algorithm> #include <iterator> #include <numeric> int main() { std::cout << "Give 'left', 'right' and 'process' values: \n"; int l, r, a; std::cin >> l >> r >> a; std::vector<int> nums((r-l)+2); std::vector<int> answers; // fill container with ascending numbers std::iota(nums.begin(), nums.end(), 1); // transform as needed std::transform(nums.begin(), nums.end(), std::back_inserter(answers), [a](int i) { return i/a + i%a; }); // find maximum element in container (returns iterator to element) auto maxv = std::max_element(answers.begin(), answers.end()); std::cout << *maxv; }
72,361,930
72,362,064
why are elements of unordered_set not unique for custom equal_to
I am trying to understand unordered_set better. Mainly, to my understanding elements in an unordered_set should be unique up to equal_to operator. Therefore, I decided to test that by a small piece of code #include <iostream> #include <unordered_set> using namespace std; struct m_int { int x; }; // template specialization for std::hash template<> struct std::hash<m_int> { std::size_t operator()(m_int const& x) const noexcept { return hash<int>{}(x.x); } }; //template specialization for std::equal_to template<> struct std::equal_to<m_int> { bool operator()(const m_int &lhs, const m_int &rhs) const { return abs(lhs.x-rhs.x)<=3; } }; int main() { unordered_set<m_int> m={{1},{2},{3},{4}}; for(auto&x:m) cout<<x.x<<' '; cout<<endl; cout<<m.count({2})<<endl; // your code goes here return 0; } My initial thought is that 1 would be inserted 2, and 3 would but 4 would be inserted. The output I got was 4 3 2 1 1 since according to my equal_to 1 and 2 are the same, then this set actually contains repetition, which is against the definition of the set. Why is that the case and how can I solve that?
Unordered containers have the following requirements for their hash and key equality functions: If two Keys are equal according to Pred, Hash must return the same value for both keys. This is not true for your container. 1 and 4, for example, compare equal but they'll (almost certainly) have different hash values. This results in undefined behavior. Another logical problem here: 1 is equal 4, and 4 is equal to 7, but 1 is not equal to 7. This does not compute, either.
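One way to restore the required invariant is to make equality a genuine equivalence relation, for instance by grouping values into buckets; the choice of x / 4 below is an assumption for illustration, not a drop-in replacement for the |a-b| <= 3 semantics of the question. Passing the functors as template arguments also avoids specializing std::equal_to, which is questionable in itself.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <unordered_set>

struct m_int { int x; };

// Equality must be transitive and equal keys must hash identically.
// Bucketing by x / 4 satisfies both: 0..3 compare equal to each other,
// 4..7 form the next class, and the hash only looks at the bucket.
struct BucketEq {
    bool operator()(const m_int& a, const m_int& b) const {
        return a.x / 4 == b.x / 4;
    }
};
struct BucketHash {
    std::size_t operator()(const m_int& v) const {
        return std::hash<int>{}(v.x / 4);
    }
};
```

With these, inserting {1}, {2}, {3}, {4} leaves two elements: one representative for the 0..3 bucket and one for the 4..7 bucket.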
72,362,234
72,362,470
How to use QHash::removeIf(Predicate Pred)
Qt 6.1 introduced the method removeIf(Predicate Pred) to a number of its collection classes: QByteArray, QHash, QList, QMap, QMultiHash, QMultiMap, QString and QVarLengthArray. But how do I write a predicate? Let's take a QHash example: struct MishMash { int i; double d; QString str; enum Status { Inactive=0, Starting, Going, Stopping }; Status status; }; QHash<QString, MishMash> myHash; // ... fill myHash with key-value pairs // Now remove elements where myHash[key].status == MishMash::Status::Inactive; myHash.removeIf(???);
From the documentation... The function supports predicates which take either an argument of type QHash<Key, T>::iterator, or an argument of type std::pair<const Key &, T &>. That being the case, you should be able to use a lambda something along the lines of (untested)... myHash.removeIf( [](QHash<QString, MishMash>::iterator i) { return i.value().status == MishMash::Status::Inactive; } );
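For Qt versions before 6.1, where removeIf does not exist, the classic alternative is an erase-while-iterating loop. Sketched here with std::unordered_map as a stand-in since Qt may not be available to test against; the loop shape is the same for QHash (whose erase(it) likewise returns the iterator following the erased element), except that with QHash you would test pred(it.value()) rather than it->second.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

struct MishMash {
    enum Status { Inactive = 0, Starting, Going, Stopping };
    Status status = Inactive;
};

// Generic predicate-erase over a map's mapped values. erase(it) invalidates
// only the erased iterator and hands back the next one, so the loop is safe.
template <class Map, class Pred>
void erase_if_value(Map& map, Pred pred)
{
    for (auto it = map.begin(); it != map.end();) {
        if (pred(it->second))
            it = map.erase(it);
        else
            ++it;
    }
}
```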
72,362,615
72,420,796
Is it really impossible to suspend two std/posix threads at the same time?
I want to briefly suspend multiple C++ std threads, running on Linux, at the same time. It seems this is not supported by the OS. The threads work on tasks that take an uneven and unpredictable amount of time (several seconds). I want to suspend them when the CPU temperature rises above a threshold. It is impractical to check for suspension within the tasks, only inbetween tasks. I would like to simply have all workers suspend operation for a few milliseconds. How could that be done? What I'm currently doing I'm currently using a condition variable in a slim, custom binary semaphore class (think C++20 Semaphore). A worker checks for suspension before starting the next task by acquiring and immediately releasing the semaphore. A separate control thread occupies the control semaphore for a few milliseconds if the temperature is too high. This often works well and the CPU temperature is stable. I do not care much about a slight delay in suspending the threads. However, when one task takes some seconds longer than the others, its thread will continue to run alone. This activates CPU turbo mode, which is the opposite of what I want to achieve (it is comparatively power inefficient, thus bad for thermals). I cannot deactivate CPU turbo as I do not control the hardware. In other words, the tasks take too long to complete. So I want to forcefully pause them from outside.
I want to suspend them when the CPU temperature rises above a threshold. In general, that is putting the cart before the horse. Properly designed hardware should have adequate cooling for maximum load and your program should not be able to exceed that cooling capacity. In addition, since you are talking about Turbo, we can assume an Intel CPU, which will thermally throttle all on their own, making your program run slower without you doing anything. In other words, the tasks take too long to complete You could break the tasks into smaller parts, and check the semaphore more often. A separate control thread occupies the control semaphore for a few milliseconds It's really unlikely that your hardware can react to millisecond delays -- that's too short a timescale for anything thermal. You will probably be better off monitoring the temperature and simply reducing the number of tasks you are scheduling when the temperature is rising and getting close to your limits. I've now implemented it with pthread_kill and SIGRT. Note that suspending threads in unknown state (whatever the target task was doing at the time of signal receipt) is a recipe for deadlocks. The task may be inside malloc, may be holding arbitrary locks, etc. etc. If your "control thread" also needs that lock, it will block and you lose. Your control thread must execute only direct system calls, may not call into libc, etc. etc. This solution is ~impossible to test, and ~impossible to implement correctly.
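The "check a semaphore between smaller sub-tasks" approach suggested above can be factored into a small gate object; this is a sketch with illustrative names, not the original code. The control thread closes the gate when the temperature rises; workers call wait_if_paused() at convenient points and block only while the gate is closed, so all of them settle at roughly the same time without signals or unknown-state suspension.

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// A cooperative pause point. Cheap when open (one mutex lock),
// blocking when closed.
class PauseGate {
    std::mutex m_;
    std::condition_variable cv_;
    bool paused_ = false;
public:
    void pause() {
        std::lock_guard<std::mutex> lock(m_);
        paused_ = true;
    }
    void resume() {
        {
            std::lock_guard<std::mutex> lock(m_);
            paused_ = false;
        }
        cv_.notify_all();  // wake every blocked worker at once
    }
    void wait_if_paused() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !paused_; });
    }
};
```

The condition-variable predicate makes spurious wakeups harmless, and notify_all after clearing the flag under the lock avoids lost wakeups.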
72,363,129
72,363,576
Wrapping std::getline()
I am struggling with the problem of reading input from file on a per-line basis, in a cross-platform way. Different platforms use different sequences of characters to represent a new line/end of line. std::getline doesn't deal with these in a cross platform way. What do I mean by this? std::getline changes its behavior depending on the platform on which an executable is compiled. On Windows platforms, it expects to see CRLF to denote line endings. On Linux, it expects just LF. It does not handle cases where a file contains a line ending which is not what the platform expects. For example a file created on a Windows machine is likely to have CRLF line endings. If that file is copied to a Linux machine without changing the line ending format then std::getline "breaks". It seemed to me that the easiest way to work around this would be to create a new function which wraps std::getline. Something like this: return_type GetLine(stream_type ifs, string_type s) { return_type ret = std::getline(ifs, s); s.erase(std::remove(s.begin(), s.end(), '\r' ), s.end()); s.erase(std::remove(s.begin(), s.end(), '\n' ), s.end()); return ret; } However at this point I'm stuck. From some searching, although getline returns a stream object (?) it also has an implicit cast-to-bool operator. I could force return_type to be bool, but then this prevents my wrapper function from returning a stream object, if such a thing were to be required in future. I also haven't been able to make sense of the STL templates in a sufficient enough way to determine what stream_type and string_type should be. I can force them to be std::ifstream and std::string, but I think this decision would also make the function less generic. How should I proceed here?
You should take the stream by reference because streams typically cannot be copied. Also the string should be passed by reference because you want to write to it. To be generic you can use the same interface as std::getline does. As you want to use specific delimiters, they need not be passed as arguments. If you make the function a template then it will work with any stream that also works for std::getline: #include <iostream> #include <sstream> #include <string> template< class CharT, class Traits, class Allocator > std::basic_istream<CharT,Traits>& my_getline( std::basic_istream<CharT,Traits>& input, std::basic_string<CharT,Traits,Allocator>& str) { return std::getline(input,str); } int main() { std::istringstream s{"hello world"}; std::string foo; my_getline(s,foo); std::cout << foo; } However at this point I'm stuck. From some searching, although getline returns a stream object (?) it also has an implicit cast-to-bool operator. It's not getline that converts to bool but the stream returned by getline can be converted to bool. Your line is almost correct, but it needs to be a reference (and you need not spell out the type explicitly): auto& ret = std::getline(ifs, s); // more code return ret; Note that I didn't address the actual issue of extracting characters until any of the delimiters is encountered (rather than only the platform specific newline that you already get with bare std::getline).
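Putting the two halves together, a hedged sketch of a wrapper that also does the cross-platform part: std::getline already splits on '\n', so after a read the only leftover from a Windows CRLF file is a trailing '\r', which can simply be stripped. (This does not handle old Mac-style CR-only line endings.)

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// getline that tolerates both LF and CRLF line endings regardless of the
// platform the binary was built on. Returns the stream, like std::getline,
// so it can be used in loop conditions.
template <class CharT, class Traits, class Allocator>
std::basic_istream<CharT, Traits>&
getline_crlf(std::basic_istream<CharT, Traits>& in,
             std::basic_string<CharT, Traits, Allocator>& line)
{
    auto& ret = std::getline(in, line);
    if (!line.empty() && line.back() == CharT('\r'))
        line.pop_back();  // drop the CR left over from a CRLF ending
    return ret;
}
```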
72,363,664
72,365,018
Embedding Python to C++ Segmentation fault
I am trying to track the execution of python scripts with C++ Threads (If anyone knows a better approach, feel free to mention it) This is the code I have so far. #define PY_SSIZE_T_CLEAN #include </usr/include/python3.8/Python.h> #include <iostream> #include <thread> void launchScript(const char *filename){ Py_Initialize(); FILE *fd = fopen(filename, "r"); PyRun_SimpleFile(fd, filename); PyErr_Print(); Py_Finalize(); } int main(int argc, char const *argv[]) { Py_Initialize(); PyRun_SimpleString("import sys"); PyRun_SimpleString("sys.path.append(\".\")"); std::thread first (launchScript,"script.py"); std::cout << "Thread 1 is running with thread ID: " << first.get_id() << std::endl; std::thread second (launchScript,"script2.py"); std::cout << "Thread 2 is running with thread ID: " << second.get_id() << std::endl; first.join(); second.join(); Py_Finalize(); return 0; } Script.py just has a print statement that prints "Hello World" Script2.py has a print statement that prints "Goodbye World" I build the application with the following commands g++ -pthread -I/usr/include/python3.8/ main.cpp -L/usr/lib/python3.8/config-3.8-x86_64 linux-gnu -lpython3.8 -o output When I run ./output, I receive the following on my terminal Thread 1 is running with thread ID: 140594340370176 Thread 2 is running with thread ID: 140594331977472 GoodBye World ./build.sh: line 2: 7864 Segmentation fault (core dumped) ./output I am wondering why I am getting Segmentation Fault. I have tried to debug with PyErr_Print(); but that has not given me any clues. Any feed back is appreciated.
After testing and debugging the program for about 20 minutes I found that the problem is caused because in your example you've created the second std::thread named second before calling join() on the first thread. Thus, to solve this just make sure that you've used first.join() before creating the second thread as shown below: int main(int argc, char const *argv[]) { Py_Initialize(); PyRun_SimpleString("import sys"); PyRun_SimpleString("sys.path.append(\".\")"); std::thread first (launchScript,"script.py"); std::cout << "Thread 1 is running with thread ID: " << first.get_id() << std::endl; //--vvvvvvvvvvvvv-------->call join on first thread before creating the second std::thread first.join(); std::thread second (launchScript,"script2.py"); std::cout << "Thread 2 is running with thread ID: " << second.get_id() << std::endl; second.join(); Py_Finalize(); return 0; }
72,363,792
72,364,162
Eigen: Comparing each element of vector with constant
Is there a way to compare each element of a vector with a constant? So far, I am comparing a 2D vector Eigen::Vector2d with a constant double tolerance like this: if (x(0) > tolerance && x(1) > tolerance) { ... } I have found the function isApprox(), but somehow it did not work. Is there a nicer or recommended way to do this?
One way to do this is to use the array method of the Vector class. Like this: #include <Eigen/Dense> #include <iostream> int main(int argc, char * argv[]) { Eigen::Vector2d A{ 7.5, 8.2 }; std::cout << A << '\n'; auto res = A.array() >= 8.0; std::cout << res << '\n'; if (res.all()) { std::cout << "True" << '\n'; } else { std::cout << "False" << '\n'; } A(0) = 10.2; auto res2 = A.array() >= 8.0; std::cout << res2 << '\n'; if (res2.all()) { std::cout << "True" << '\n'; } else { std::cout << "False" << '\n'; } return 0; } In this case res and res2 are CwiseBinaryOp which contains booleans for each element in A. Use all to find when both are True.
72,364,063
72,364,848
C++ functions declaration with macros
I am asking about a way to improve code readability I was able to make a macro like this for a small library I'm making #define fun(name, arg1, arg2) void name(int arg1, int arg2) NOTE: int is an existent class, but I replace it with int so anyone can run it This would allow me to use this code to create a function: fun(testFunction, x, y) { // do stuff std::cout << x << y << std::endl; } and then in my main: int main() { testFunction(1, 2); return 0; } This works great (at least in Visual Studio, haven't tested in GCC but I think it works there too). Is it possible to make a macro that would be like: #define fun name(arg1, arg2) void name(int arg1, int arg2) so a macro that would allow me to declare a function like: fun testFunction(x, y) { // do stuff } The actual thing I am asking if is there a way to make a macro that allows me to do this (for example) CustomClassTemplate doStuff(CustomClass& arg1, CustomClassTemplate arg2, Library::Binary::Binary bin) { // do stuff return CustomClassTemplate(/*blah, blah, blah*/); } to this: fun doStuff(arg1, arg2, bin) { // do stuff return CustomClassTemplate(/* blah blah blah*/); } you can create an empty class for each argument
As you already discovered: technically it can be done, but not in a #define fun name(arg1, arg2), and I think that's a good thing, because such a macro would hide the fact that you're using macros, while that should be clear. Also, fun doStuff(arg1, arg2, bin) looks like a regular function declaration with empty parameter types (no parameter names) and returning fun. As you already saw in the comments, a lot of people don't agree that using macros is a good thing, and that will also be true for the people who will use your library or work on it, so you might take that into account when deciding to use it anyway. Other points to take into account: copying and adapting code lines isn't that costly; you can align the parameters to show that they're the same; variations in function declarations will need more macros and longer names; problems in functions with parameters are more easily spotted when they are explicit.
72,364,329
72,376,327
boost::beast ssl tcp stream: gracefully disconnect and then reconnect
I have a boost::beast ssl stream data member: class HttpsClient { ... asio::ssl::context _ssl_ctx; beast::ssl_stream<beast::tcp_stream> _stream; }; At construction I initialise the stream with an asio::io_context and an ssl context as follows: namespace ssl = boost::asio::ssl; ssl::context sslContext() { ssl::context ssl_ctx {ssl::context::tlsv12_client}; ssl_ctx.set_verify_mode(ssl::context::verify_peer | ssl::context::verify_fail_if_no_peer_cert); ssl_ctx.set_default_verify_paths(); boost::certify::enable_native_https_server_verification(ssl_ctx); return ssl_ctx; } HttpsClient::HttpsClient(asio::io_context& ctx) : _ssl_ctx(sslContext()) , _stream(ctx, _ssl_ctx) {} My connect function synchronously resolves the endpoint, connects to it, and performs the SSL handshake. void HttpsClient::connect(const std::string& host, std::uint16_t port, const std::string& path) { tcp::resolver resolver {_stream.get_executor()}; tcp::resolver::results_type end_point = resolver.resolve(host, std::to_string(port)); beast::get_lowest_layer(_stream).connect(end_point); beast::get_lowest_layer(_stream).socket().set_option(tcp::no_delay {true}); SSL_set_tlsext_host_name(_stream.native_handle(), host.c_str()); _stream.handshake(ssl::stream_base::client); } Once connected, I start an async_read, and when I want to send a request, I use the synchronously version: void HttpsClient::write(beast::http::request<beast::http::string_body>& request) { request.prepare_payload(); http::write(_stream, request); } This is all works as expected. The problem I'm coming up against is that I would like to disconnect and then reconnect the stream. I have tried several different ways to close the connection: Cancel outstanding async_read and shutdown the stream: _stream.next_layer().cancel(); _stream.shutdown(); Once the shutdown completes I am seemingly able to connect again, but attempts to write to the stream fail with "protocol is shutdown". 
After receiving "protocol is shutdown" any attempts to reconnect receive "Operation canceled" Cancel outstanding async_read and close the underlying TCP stream: _stream.next_layer().cancel(); _stream.next_layer().close(); Once the close completes I am still seemingly able to connect again, but now attempts to write to the stream fail with "wrong version number". After receiving "wrong version number" any attempts to reconnect receive "unspecified system error" Questions: Is it possible to disconnect my stream, and then reconnect and reuse it? If so, what procedure do I need to do to allow this?
This issue, relating to websockets (though the SSL stream possibly has similar requirements), says to recreate the entire stream... which suggests reusing a disconnected stream is not possible. Certainly recreating the stream does work, which suggests this is indeed the case.
72,365,350
72,365,403
Can I test the value of a preprocessor directive?
I have a preprocessor directive that I do not set, so I cannot change it, it is either true or false. Normally I would have done : #ifdef DIRECTIVE // code #endif But this will always run, since DIRECTIVE is always defined. Is there a way that I can do basically the equivalent of: #if DIRECTIVE #endif I guess I could do bool DirectiveValue = DIRECTIVE; if (DirectiveValue){ } But I was really hoping the second code block was possible in some way. Thanks for any insight!
The preprocessor has an #if statement, so you can do things like: #if DIRECTIVE, which (just like in C normally) tests as false if the value of the expression is zero, and true if the value of the expression is non-zero. Although it's not clear whether it provides a real advantage for you, there's also a kind of intermediate form: if constexpr (DIRECTIVE), which is evaluated at compile time, not run time, so it resembles an #if to some degree in that way--but still using normal C++ syntax, and integrated with the language in general, instead of being its own little thing with slightly different rules than the rest of the compiler.
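A compact sketch of both forms, assuming DIRECTIVE always expands to 0 or 1 as described in the question (the #define here only stands in for the externally supplied macro):

```cpp
#include <cassert>

#define DIRECTIVE 1  // stand-in for the macro set by the build system

int pick()
{
#if DIRECTIVE          // preprocessor: the dead branch is not even compiled
    return 1;
#else
    return 0;
#endif
}

int pick_constexpr()
{
    if constexpr (DIRECTIVE)  // compile-time branch, but both sides must parse
        return 1;
    else
        return 0;
}
```

The practical difference: #if can guard code that would not compile at all in the other configuration, while if constexpr (in a non-template context) still requires both branches to be valid C++.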
72,365,688
72,461,717
wxChoice not visible in wxPanel
I need to add a drop down box to a panel but it doesn't seem to show up when I add it. WeldProfileDialog::WeldProfileDialog(cMainWindow* parent, wxWindowID id) : wxDialog(parent,id, "Weld Profile Editor") { wxBoxSizer* mainSizer = DBG_NEW wxBoxSizer(wxHORIZONTAL); this->SetSizer(mainSizer); wxBoxSizer* col1Sizer = DBG_NEW wxBoxSizer(wxVERTICAL); wxPanel* sidebar = DBG_NEW wxPanel(this, wxID_ANY); sidebar->SetSizer(col1Sizer); mainSizer->Add(sidebar); wxChoice* selectProfileType = DBG_NEW wxChoice(this, wxID_ANY); vector<wxString> choices = { "Single V", "Double V", "J Groove", "Compound"}; selectProfileType->Append(choices); selectProfileType->SetSelection(0); col1Sizer->Add(selectProfileType, 1, wxEXPAND); } However, when I remove the panel and add it directly to a box sizer it works just fine. And I'm not really sure what I'm missing. WeldProfileDialog::WeldProfileDialog(cMainWindow* parent, wxWindowID id) : wxDialog(parent, id, "Weld Profile Editor") { wxBoxSizer* mainSizer = DBG_NEW wxBoxSizer(wxHORIZONTAL); this->SetSizer(mainSizer); wxBoxSizer* col1Sizer = DBG_NEW wxBoxSizer(wxVERTICAL); //wxPanel* sidebar = DBG_NEW wxPanel(this, wxID_ANY); //sidebar->SetSizer(col1Sizer); mainSizer->Add(col1Sizer); wxChoice* selectProfileType = DBG_NEW wxChoice(this, wxID_ANY); vector<wxString> choices = { "Single V", "Double V", "J Groove", "Compound"}; selectProfileType->Append(choices); selectProfileType->SetSelection(0); col1Sizer->Add(selectProfileType, 1, wxEXPAND); }
As spotted by @Igor: The problem was that my control was a child of the window instead the panel, which placed it under the panel. this: wxChoice* selectProfileType = DBG_NEW wxChoice(this, wxID_ANY); into this: wxChoice* selectProfileType = DBG_NEW wxChoice(sidebar, wxID_ANY);
72,366,296
72,366,365
Count how many times class member was printed
I need to count how many times class members were printed using function Print which is inspector. Constructor should set private elements of class. #include <cmath> #include <iostream> class Vector3d { double x, y, z; mutable int count = 0; public: Vector3d(); Vector3d(double x, double y, double z); void Print() const; int GetCount() const; }; Vector3d::Vector3d() { count = 0; } Vector3d::Vector3d(double x, double y, double z) { count = 0; Vector3d::x = x; Vector3d::y = y; Vector3d::z = z; } void Vector3d::Print() const { count++; std::cout << "{" << x << "," << y << "," << z << "}"; } int Vector3d::GetCount() const { return count; } int main() { Vector3d v1(1, 2, 3); v1.Print();v1.Print();v1.Print(); Vector3d v2(v1); v2.Print();v2.Print(); std::cout << v2.GetCount(); return 0; } I used mutable int to enable changing element of const function. For v1.GetCount() I get output 3, which is correct. However, for v2.GetCount() I get output 5, which is wrong (correct is 2). Could you help me to fix this? Where am I making mistake?
You need to overload copy constructor and copy assign operator for Vector3d class. Now you are copying state of count field into v2 object, therefore it starts from 3 not from 0. #include <cmath> #include <iostream> class Vector3d { double x, y, z; mutable int count = 0; public: Vector3d(double x, double y, double z); Vector3d(const Vector3d&); Vector3d& operator=(const Vector3d&); Vector3d(Vector3d&&) = delete; Vector3d& operator=(Vector3d&&) = delete; void Print() const; int GetCount() const; }; Vector3d::Vector3d(double x, double y, double z) { Vector3d::x = x; Vector3d::y = y; Vector3d::z = z; } Vector3d::Vector3d(const Vector3d& that) : Vector3d(that.x, that.y, that.z) { } Vector3d& Vector3d::operator=(const Vector3d& that) { x = that.x; y = that.y; z = that.z; return *this; } void Vector3d::Print() const { count++; std::cout << "{" << x << "," << y << "," << z << "}"; } int Vector3d::GetCount() const { return count; } int main() { Vector3d v1(1, 2, 3); v1.Print();v1.Print();v1.Print(); Vector3d v2(v1); v2.Print();v2.Print(); std::cout << v2.GetCount(); return 0; } UPDATE: Someone mentioned that explicitly deleted move ctor and operator is not ok, I understand that, but for me it is not clear should we move counter to other instance or not. 
Therefore here possible implementation: #include <cmath> #include <iostream> class Vector3d { double x, y, z; mutable int count = 0; public: Vector3d(double x, double y, double z); Vector3d(const Vector3d&); Vector3d(Vector3d&&); Vector3d& operator=(Vector3d); void Print() const; int GetCount() const; private: void swap(Vector3d&); }; Vector3d::Vector3d(double x, double y, double z) { Vector3d::x = x; Vector3d::y = y; Vector3d::z = z; } Vector3d::Vector3d(const Vector3d& that) : Vector3d(that.x, that.y, that.z) { } Vector3d::Vector3d(Vector3d&& that) : Vector3d(that.x, that.y, that.z) { count = that.count; } Vector3d& Vector3d::operator=(Vector3d that) { swap(that); return *this; } void Vector3d::swap(Vector3d& that) { std::swap(x, that.x); std::swap(y, that.y); std::swap(z, that.z); std::swap(count, that.count); } void Vector3d::Print() const { count++; std::cout << "{" << x << "," << y << "," << z << "}"; } int Vector3d::GetCount() const { return count; } int main() { Vector3d v1(1, 2, 3); v1.Print();v1.Print();v1.Print(); Vector3d v2 = std::move(v1); v2.Print();v2.Print(); std::cout << v2.GetCount(); return 0; } But these more for commenters than for question author.
72,366,669
72,366,986
struct fwd-declared in member-function parameter list not nested in enclosing struct
I was trying to forward-declare nested type struct A::impl and run into this issue. The following code snippet compiles fine: struct A { struct impl; // `struct impl` fwd declaration void f(impl); // A::f declaration }; struct A::impl {}; // definition void A::f(impl) {} However, if I move the forward-declaration into the parameter list of member-function A::f, I get a compiler error: struct A { void f(struct impl); // A::f declaration and `struct impl` fwd declaration }; struct A::impl {}; // compile-time error //struct impl {}; // < This compiles, instead void A::f(impl) {} It looks like I'm forward-declaring struct impl instead of struct A::impl. The compiler error is (on Clang 14.0.0): error: no struct named 'impl' in 'A' struct A::impl {}; Question: why is struct A { void f(struct impl); }; different from struct A { struct impl; void f(impl); }; with respect to the scope of struct impl?
Yes, there's a difference in what the forward declaration means here. C++ Standard [basic.scope.pdecl]/7 reads: The point of declaration of a class first declared in an elaborated-type-specifier is as follows: for a declaration of the form        class-key attribute-specifier-seqopt identifier; the identifier is declared to be a class-name in the scope that contains the declaration, otherwise for an elaborated-type-specifier of the form        class-key identifier if the elaborated-type-specifier is used in the decl-specifier-seq or parameter-declaration-clause of a function defined in namespace scope, the identifier is declared as a class-name in the namespace that contains the declaration; otherwise, except as a friend declaration, the identifier is declared in the smallest namespace or block scope that contains the declaration. The original struct impl; within the definition of class A matches the first case, so it does forward declare impl in the immediate class scope, so as a member type. In void f(struct impl); the struct impl is not a declaration all on its own with a semicolon, so the first case doesn't match. It's within a parameter-declaration-clause, but the function declaration is neither a definition nor at namespace scope. So we're left with the third case: the smallest enclosing namespace or block scope. It's not in any block scope at all, and the only namespace here is the global namespace, so it forward declares struct ::impl. Because these details are tricky, I recommend always putting the forward declaration all on its own in the class/struct/union name ; form, not inside any other syntax.
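The "forward declaration on its own line" recommendation, as a minimal compilable sketch (a pointer parameter is used so the incomplete type is usable even before its definition):

```cpp
#include <cassert>

struct A {
    struct impl;   // matches the first bullet: declares A::impl as a member type
    int f(impl*);  // fine: impl already names the nested class here
};

struct A::impl { int v; };       // out-of-line definition of the member type

int A::f(impl* p) { return p->v; }
```

Had f been declared as int f(struct impl*) without the prior member declaration, impl would instead have been injected into the global namespace, and struct A::impl {} would fail exactly as in the question.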
72,366,851
72,367,118
Why does operator==(std::variant<T, U>, T) not work?
Consider the following: #include <iostream> #include <variant> int main () { std::variant<int, float> foo = 3; if(foo == 3) { std::cout << "Equals 3\n"; } } Godbolt demo here This does not compile because of the foo == 3: <source>:7:12: error: no match for 'operator==' (operand types are 'std::variant<int, float>' and 'int') 7 | if(foo == 3) | ~~~ ^~ ~ | | | | | int | std::variant<int, float> In file included from /opt/compiler-explorer/gcc-12.1.0/include/c++/12.1.0/iosfwd:40, from /opt/compiler-explorer/gcc-12.1.0/include/c++/12.1.0/ios:38, from /opt/compiler-explorer/gcc-12.1.0/include/c++/12.1.0/ostream:38, from /opt/compiler-explorer/gcc-12.1.0/include/c++/12.1.0/iostream:39, from <source>:1: /opt/compiler-explorer/gcc-12.1.0/include/c++/12.1.0/bits/postypes.h:192:5: note: candidate: 'template<class _StateT> bool std::operator==(const fpos<_StateT>&, const fpos<_StateT>&)' 192 | operator==(const fpos<_StateT>& __lhs, const fpos<_StateT>& __rhs) | ^~~~~~~~ // Many, many more rejected operator== candidates omitted There's a free function operator== that compares two std::variant<int, float>s. And there's an implicit conversion from int to std::variant<int, float>; that's how I was able to initialize foo in the first place. So why doesn't this comparison compile? Strictly speaking, I suppose there are a few related questions here. One is why this doesn't already work, explaining how the rules for overload resolution apply to this section. And second is if there's anything that can be done in user-written code to make this comparison work sensibly.
Neither parameter of that operator== overload is an undeduced context. So template argument deduction for the overload will fail if it fails in either parameter/argument pair. Since int is not a std::variant, deduction will fail for the corresponding parameter/argument pair and so the template overload is not viable. Implicit conversions are not considered when deducing template arguments. Types must (with few minor exceptions) match exactly.
72,367,123
72,367,153
What is a call to `char()`, `uint8_t()`, `int64_t()`, integer `T()`, etc, as a function in C++?
I've never seen this call to char() as a function before. Where is this described and what does it mean? This usage is part of the example on this cppreference.com community wiki page: https://en.cppreference.com/w/cpp/string/basic_string/resize: short_string.resize( desired_length + 3 ); std::cout << "6. After: \""; for (char c : short_string) { std::cout << (c == char() ? '@' : c); // <=== HERE === } This wording in the description also doesn't make any sense to me and I don't understand what it's saying: Initializes appended characters to CharT(). Highlighted in context: Adjacently related What motivated me to study the std::string::resize() method was trying to learn how to pre-allocate a std::string for use in C function calls as a char* buffer. This is possible by first pre-allocating the std::string by calling the my_string.resize() function on it. Then, you can safely write into &my_string[0] as a standard char* up to index my_string.size() - 1. See also: Directly write into char* buffer of std::string Is there a way to get std:string's buffer How to convert a std::string to const char* or char* See my detailed answer to this question here. Update: 3.5 months after asking the original question, I was made aware of this question also: What does int() do in C++?
It's the constructor† for char; with no arguments it constructs '\0'. Rarely used since primitives offer other ways to initialize them, but you initialize them with () just like you would a user-defined class, which ensures they get initialized to something; char foo; has undefined value, while char foo = char(); or char foo{}; is definitely '\0'. †As HolyBlackCat notes, it's not technically a constructor, because it's not a class, but it behaves like one for most purposes.
72,367,571
72,379,812
Seg fault while calling glfwSwapBuffers
It seems that I am having a seg fault while using GLFW and OpenGL on ArchLinux, DWM (fully updated and patched). I retraced the code and it is having the segFault in the glfwSwapBuffers(window). Here is my code : main.cpp #include <iostream> #include "gui/window.h" int main(int, char**) { Window window("Test GL", 800, 600); if(!window.hasCorrectlyLoaded()) { return 1; } while (!window.shouldClose()) { glClearColor(0.2f, 0.3f, 0.3f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); window.pollEvents(); } } window.h #ifndef __WINDOW_H__ #define __WINDOW_H__ #include <string> #include <glad/gl.h> #include <GLFW/glfw3.h> class Window { private: GLFWwindow *window; bool correctlyLoaded; public: Window(const std::string&, int, int); ~Window(); const bool hasCorrectlyLoaded(); const bool shouldClose(); const void pollEvents(); }; #endif // __WINDOW_H__ window.cpp #include "window.h" #include <spdlog/spdlog.h> Window::Window(const std::string& title, int width, int height) { correctlyLoaded = false; if(!glfwInit()) { spdlog::default_logger()->critical("Could not load GLFW"); return; } glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE); GLFWwindow* window = glfwCreateWindow(width, height, title.c_str(), nullptr, nullptr); if (!window) { spdlog::default_logger()->critical("Failed to create GLFW window !"); return; } glfwMakeContextCurrent(window); if (!gladLoadGL(glfwGetProcAddress)) { spdlog::default_logger()->critical("Failed to load OpenGL !"); return; } spdlog::default_logger()->info("Loaded OpenGL {}", glfwGetVersionString()); glViewport(0, 0, width, height); correctlyLoaded = true; } const void Window::pollEvents() { glfwSwapBuffers(window); glfwPollEvents(); //<- Seg fault here } Window::~Window() { glfwTerminate(); } const bool Window::hasCorrectlyLoaded() { return correctlyLoaded; } const bool Window::shouldClose() { 
return glfwWindowShouldClose(window); } While further researching, I stumbled upon an answer that told me to set the glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_API) window hint but I still got a segfault, but at a different place : GLFW source code GLFWAPI void glfwSwapBuffers(GLFWwindow* handle) { _GLFWwindow* window = (_GLFWwindow*) handle; assert(window != NULL); _GLFW_REQUIRE_INIT(); if (window->context.client == GLFW_NO_API) { _glfwInputError(GLFW_NO_WINDOW_CONTEXT, "Cannot swap buffers of a window that has no OpenGL or OpenGL ES context"); //<- Seg fault without window hint return; } window->context.swapBuffers(window); //<- Seg fault with window hint } Here is the output I get from the logging : [2022-05-24 20:01:04.252] [info] Loaded OpenGL 3.4.0 X11 GLX Null EGL OSMesa monotonic [1] 432406 segmentation fault (core dumped) /home/lygaen/code/testgl/build/testgl
Your problem occurs in Window.cpp, at this line: //... glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE); GLFWwindow* window = glfwCreateWindow(width, height, title.c_str(), nullptr, nullptr); //<--- if (!window) { //... You've redeclared window as a local variable to this constructor, and as a result, the pointer never escapes the constructor, and is dangled. A good habit when trying to assign class members is to use the this keyword. It is often redundant, but it does help indicate intent. So the code should be changed to this: //... glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE); this->window = glfwCreateWindow(width, height, title.c_str(), nullptr, nullptr); //<--- if (!this->window) { //... If your style guidelines don't permit it, you can omit the this->; the only important part is that you're not declaring an entirely new variable that's shadowing the class member.
72,368,147
72,368,244
How does "to_string" work in array of string?
I want to combine the first 4 characters of each line in a txt file and compare it with the keyword I have, but when I combine the characters, I get the sum of these characters' ASCII codes instead. How can I solve this problem? My code is here (when I debugged, I saw the string search variable was 321): int main() { ifstream file("sentence.txt"); if (file.is_open()) { string line; while (getline(file, line)) { string search = to_string(line[0] + line[1] + line[2]); // you see what I mean if ("dog" == search) { cout << "there is dog"; } else { cout << "there is no dog"<<endl; } } } }
line[0], line[1], and line[2] are chars, not std::strings. char is an integer type, so adding two chars together results in a single integer that is the sum of the two operands. It does not produce a std::string that is the concatenation of the two chars. To get a substring of a std::string use the substr member function: std::string search = line.substr(0, 3); Or, if you actually need to construct a std::string from individual chars, use the constructor that accepts a std::initializer_list<char>: std::string search{line[0], line[1], line[2]};
72,369,144
72,371,888
Iterate an array from json using jsoncpp
I have the following json: { "laureates": [{ "id": "1", "firstname": "Wilhelm Conrad", "surname": "Röntgen", "born": "1845-03-27", "died": "1923-02-10", "bornCountry": "Prussia (now Germany)", "bornCountryCode": "DE", "bornCity": "Lennep (now Remscheid)", "diedCountry": "Germany", "diedCountryCode": "DE", "diedCity": "Munich", "gender": "male", "prizes": [{ "year": "1901", "category": "physics", "share": "1", "motivation": "\"in recognition of the extraordinary services he has rendered by the discovery of the remarkable rays subsequently named after him\"" }] }] } I have tried this: bool ok = reader.parse(txt, root, false); if(! ok) { std::cout << "failed parse\n"; } std::vector<std::string> keys = root.getMemberNames(); for (std::vector<std::string>::const_iterator it = keys.begin(); it != keys.end(); ++it) { if (root[*it].isString()) { std::string value = root[*it].asString(); std::cout << value << std::endl; } else if (root[*it].isInt()) { int value = root[*it].asInt(); std::cout << value << std::endl; } else if (root[*it].isArray()){ // what to do here? } } The code works fine, but the problem is when I have an array like "prizes". I can't realize how to iterate and show the values without hardcoded it. Can anyone help me with this? Thanks in advance.
I can't realize how to iterate and show the values without hardcoded it. I think the problem is that you don't have a great handle on recursion or the key->value nature of JSON, because when you have an array like "prizes", you could have a nested Json object, such as an array inside an array. You could use a recursion to handle that: #include <jsoncpp/json/json.h> #include <iostream> void PrintJSONValue(const Json::Value &val) { if (val.isString()) { std::cout << val.asString(); } else if (val.isBool()) { std::cout << val.asBool(); } else if (val.isInt()) { std::cout << val.asInt(); } else if (val.isUInt()) { std::cout << val.asUInt(); } else if (val.isDouble()) { std::cout << val.asDouble(); } else { } } void HandleJsonTree(const Json::Value &root, uint32_t depth = 0) { depth += 1; if (root.size() > 0) { std::cout << '\n'; for (Json::Value::const_iterator itr = root.begin(); itr != root.end(); itr++) { // print space to indicate depth for (int tab = 0; tab < depth; tab++) { std::cout << " "; } std::cout << "key: "; PrintJSONValue(itr.key()); std::cout << " "; HandleJsonTree(*itr, depth); } } else { std::cout << ", value: "; PrintJSONValue(root); std::cout << "\n"; } } int main(int argc, char **argv) { std::string json = R"###( { "laureates": [ { "id": "1", "firstname": "Wilhelm Conrad", "surname": "Röntgen", "born": "1845-03-27", "died": "1923-02-10", "bornCountry": "Prussia (now Germany)", "bornCountryCode": "DE", "bornCity": "Lennep (now Remscheid)", "diedCountry": "Germany", "diedCountryCode": "DE", "diedCity": "Munich", "gender": "male", "prizes": [ { "year": "1901", "category": "physics", "share": "1", "motivation": "in recognition of the extraordinary services he has rendered by the discovery of the remarkable rays subsequently named after him" } ] } ] } )###"; Json::Value root; Json::Reader reader; bool ok = reader.parse(json, root, false); if (!ok) { std::cout << "failed parse\n"; } HandleJsonTree(root); } The output looks like this: key: laureates key: 0 
-- the first item of an array key: born , value: 1845-03-27 key: bornCity , value: Lennep (now Remscheid) key: bornCountry , value: Prussia (now Germany) key: bornCountryCode , value: DE key: died , value: 1923-02-10 key: diedCity , value: Munich key: diedCountry , value: Germany key: diedCountryCode , value: DE key: firstname , value: Wilhelm Conrad key: gender , value: male key: id , value: 1 key: prizes key: 0 -- the first item of an array key: category , value: physics key: motivation , value: in recognition of the extraordinary services he has rendered by the discovery of the remarkable rays subsequently named after him key: share , value: 1 key: year , value: 1901 key: surname , value: Röntgen
72,369,308
72,369,874
c++ const ref vs template concept
I'm studying the C++20 template specification, in particular concepts. Currently, for passing parameters to functions I use a const reference: void writeMsg(const std::string& msg) { std::cout << "msg = " << msg << "\n"; } I have only now discovered that templates can also be used, and that with concepts I can constrain the parameters passed. Example: template<typename T> concept is_string = std::is_convertible<T, std::string>::value; template<is_string T> void writeMsg2(T&& msg) { std::cout << "msg =" << msg << "\n"; } Personally I can't see the benefits; maybe my example is the wrong case to show what concepts offer? Do you have any suggestions or links that could help with this question? Thanks
As you presented it: types which are convertible to std::string wouldn't need to be converted before the call to std::cout (potentially saving memory [de]allocations). It's the only big advantage I can see looking at your code. However, flipping a bit or two: template <class T> concept String = std::is_convertible<T, std::string>::value; void writeMsg2(String auto const& msg) { std::cout << "msg = " << msg << "\n"; } Live on Compiler Explorer
72,369,593
72,376,158
Does vkQueuePresentKHR prevent later commands from executing while it is waiting on the semaphore?
This is kind of a follow-up question for this question, and it is also based on the code provided by the same Vulkan tutorial. Here is a simplified example: // Vulkan handles defined and initialized elsewhere VkDevice device; VkQueue queue; VkSempahore semaphore; VkSwapchain swapchain; VkCommandBuffer cmd_buffer; // Renderer code uint32_t image_index; // image acquisition is omitted VkPresentInfoKHR present_info{}; present_info.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR; present_info.waitSemaphoreCount = 1; present_info.pWaitSemaphores = &semaphore; present_info.swapchainCount = 1; present_info.pSwapchains = &swapchain; present_info.pImageIndices = &image_index; VkSubmitInfo submit_info{}; submit_info.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO; // ... irrelevant code omitted submit_info.pCommandBuffers = &cmd_buffer; vkQueuePresentKHR(queue, &present_info); vkQueueSubmit(queue, 1, &submit_info, VK_NULL_HANDLE); In the above example, will the commands in cmd_buffer also have to wait until semaphore is signaled? I am asking about this because a comment below the tutorial mentioned that: However, if the graphics and present queue do end up being the same, then the renderFinished semaphore guarantees proper execution ordering. This is because the vkQueuePresentKHR command waits on that semaphore and it must begin before later commands in the queue begin (due to implicit ordering) and that only happens after rendering from the previous frame finished.
In the above example, will the commands in cmd_buffer also have to wait until semaphore is signaled? Only if you use the semaphore as a waitSemaphore for the later submit. This is because the vkQueuePresentKHR command waits on that semaphore and it must begin before later commands in the queue begin (due to implicit ordering) and that only happens after rendering from the previous frame finished. I don't believe this is true. Commands start in implicit order with respect to other commands in the queue, but this is pipelined on a stage-to-stage basis. Also note the spec wording says "start in order" not "complete in order", which is a specification sleight of hand. Hardware is perfectly free to overlap and out-of-order execution of individual commands that are otherwise sequential in the stream unless the stream contains synchronization primitives that stop it doing so.
72,369,737
72,372,146
How do we display pixel data calculated in an OpenCL kernel to the screen using OpenGL?
I am interested in writing a real-time ray tracing application in c++ and I heard that using OpenCL-OpenGL interoperability is a good way to do this (to make good use of the GPU), so I have started writing a c++ project using this interoperability and using GLFW for window management. I should mention that although I have some coding experience, I do not have so much in c++ and have not worked with OpenCL or OpenGL before attempting this project, so I would appreciate it if answers are given with this in mind (that is, beginner-friendly terminology is preferred). So far I have been able to get OpenCL-OpenGL interoperability working with an example using a vertex buffer object. I have also demonstrated that I can create image data with an RGBA array (at least on the CPU), send this to an OpenGL texture with glTexImage2D() and display it using glBlitFramebuffer(). My problem is that I don't know how to create an OpenCL kernel that is able to calculate pixel data such that it can be given as the data parameter in glTexImage2D(). I understand that to use the interoperability, we must first create OpenGL objects and then create OpenCL objects from these to write the data on as these objects share memory, so I am assuming I must first create an empty OpenGL array object then create an OpenCL array object from this to apply an appropriate kernel to which would write the pixel data before using the OpenGL array object as the data parameter in glTexImage2D(), but I am not sure what kind of object to use and have not seen any examples demonstrating this. A simple example showing how OpenCL can create pixel data for an OpenGL texture image (assuming a valid OpenCL-OpenGL context) would be much appreciated. Please do not leave any line out as I might not be able to fill in the blanks! 
It's also very possible that the method I described above for implementing a ray tracer is not possible or at least not recommended, so if this is the case please outline an advised alternate method for sending OpenCL kernel calculated pixel data to OpenGL and subsequently drawing this to the screen. The answer to this similar question does not go into enough detail for me and the CL/GL interop link is not working. The answer mentions that this can be achieved using a renderbuffer rather than a texture, but it says at the bottom of the Khronos OpenGL wiki for Renderbuffer Objects that the only way to send pixel data to them is via pixel transfer operations but I can not find any straightforward explanation for how to initialize data this way. Note that I am using OpenCL c (no c++ bindings).
From your second paragraph, you are creating an OpenCL context with a platform specific combination of GLX_DISPLAY / WGL_HDC and GL_CONTEXT properties to interoperate with OpenGL, and you can create a vertex buffer object that can be read/written as necessary by both OpenGL and OpenCL. That's most of the work. In OpenGL you can copy any VBO into a texture with glBindBuffer(GL_PIXEL_UNPACK_BUFFER, myVBO); glTexSubImage2D(GL_TEXTURE_2D, level, x, y, width, height, format, type, NULL); with the NULL at the end meaning to copy from GPU memory (the unpack buffer) rather than CPU memory. As with copying from regular CPU memory, you might also need to change the pixel alignment if it isn't 32 bit.
72,369,740
72,370,003
Operator overloading not working as intended for class pointers
I've made a very simple program trying to understand operator overloading in C++. However as you too will see, the result of the dimention d3 is not updated even though the appropriate values are returned from the operator overloading. #include <iostream> using namespace std; class dimention{ protected: int width, height; public: dimention(int w = 0, int h = 0){ width = w; height = h; } int getWidth(){ return width; } int getHeight(){ return height; } dimention& operator = (const dimention &d){ dimention *temp = new dimention; temp->height = d.height; temp->width = d.width; return *temp; } dimention& operator + (const dimention &d){ dimention *newDimention = new dimention; newDimention->width = this->getWidth() + d.width; newDimention->height = this->getHeight() + d.height; return *newDimention; } }; int main(){ dimention *d1 = new dimention(5, 5); dimention *d2 = new dimention(1, 1); dimention *d3 = new dimention; *d3 = *d1; cout << d3->getHeight() << endl; cout << d3->getWidth() << endl; *d3 = *d1 + *d2; cout << d3->getHeight() << endl; cout << d3->getWidth() << endl; return 0; } Thanks for your help.
I think you misunderstand the way methods operate on an object. Consider the assignment operator: dimention& operator = (const dimention &d){ dimention *temp = new dimention; temp->height = d.height; temp->width = d.width; return *temp; } You are never editing the object itself (this) that is being assigned to. Instead, you are creating (and leaking) a new temp object and changing it. That object is not d3. A correct implementation would be: dimention& operator = (const dimention &d){ this->height = d.height; this->width = d.width; return *this; } Will give you the expected result.
72,370,293
72,370,674
How is modification order of a variable defined in C++?
I've read this Q&A: What is the significance of 'strongly happens before' compared to '(simply) happens before'? The author gives an outline of an interesting evaluation that was not possible until C++20 but apparently is possible starting C++20: .-- T3 y.store(3, seq_cst); --. (2) | | | strongly | | sequenced before | happens | V | before | T3 a = x.load(seq_cst); // a = 0 --. <-' (3) | : coherence- | : ordered | : before | T1 x.store(1, seq_cst); <-' --. --. (4) | | |st | | | sequenced before |h | | V |b | | . T1 y.store(1, release); <-' | (x) | | : | strongly | | : synchronizes with | happens | | V | before | > T2 b = y.fetch_add(1, seq_cst); // b = 1 --. | (1) | | |st | | | sequenced before |h | | V |b | '-> T2 c = y.load(relaxed); // c = 3 <-' <-' The numbers on the right denote a possible seq_cst order (and I added (x) for convenience; this line does not participate in SC order since it's not an SC operation). I was trying to understand what is the modification order of y in this example, but I don't know how to determine it. (Or are there multiple possible modification orders of y for this evaluation?..) More generally, how is modification order of an atomic variable defined in C++? For example there's this: https://en.cppreference.com/w/cpp/atomic/memory_order Write-write coherence: If evaluation A that modifies some atomic M (a write) happens-before evaluation B that modifies M, then A appears earlier than B in the modification order of M So it seems that modification order has to be consistent with a write-write happens-before one. Is it the only thing that defines modification order? In the above example, AFAIU there's no happens-before between (2) and (1); so which one is first in the modification order of y? Is the mod order of y (x 1 2) (for this evaluation)? I believe it might also help reasoning about seq_cst order...
The modification order for an object is the order a thread would see if it was spinning in a tight loop running while(1) { y.load(relaxed); }, and happened to see every every change. (Not that that's a useful way to actually observe it, but every object has its own modification order that all threads can always agree on, like on real CPUs thanks to MESI exclusive ownership being required to commit a store to L1d cache. Or see it early via private store-forwarding between SMT threads on POWER CPUs.) Some random facts: The modification order for a single object is compatible with program order within one thread, even with relaxed After a thread sees a value with a load, it can only see that value or later ones in the modification order, even if the loads and stores are relaxed. The observed value of a variable can't change more times than there are stores by other threads. If you had a bunch of stores from a bunch of threads, one of them will be last, and that's the value that all readers will see (eventually). And while the dust is settling, any given reader won't see the value changing back and forth, other than actually seeing later stores in the mod order. (See the "coherency" rules in [intro.races] in the standard) I think this evaluation is showing actual effective order, so the mod order for y is just reading top to bottom, 2 x 1. (Because it's using enough seq_cst operations that all threads can agree on the order, and showing that some other things end up ordering the release store (x) after the seq_cst store (2).) This eval order is saying that the (2) store did become visible before the (x) store, so the (x) store replaced it. And the dust has settled on that before the y.fetch_add (1), otherwise it would have synced-with (2) instead of (x).
72,370,312
72,370,598
C++ what happen if I convert pointer type?
When we allocate a piece of memory, such as: double* ptr = new double[10]; it gives me a pointer to the first of 10 contiguous memory locations of double. Now if I do: unsigned char* ptr2 = (unsigned char*)ptr; does it create another unsigned char pointer that points to the first location of ptr, and translate the array into unsigned char (whose size is no longer 10)? If not, how can I do so?
A C-style cast of a pointer type (type *)ptr does nothing at all to the bit pattern. It simply tells the compiler to consider the pointer to be of the new type. That is rather dangerous, as in most cases using the result is just UB. Now char is a special case because it is used to access the raw memory underlying any data. So you can cast any pointer (not function or member pointers) to a char pointer and use it to access the bit pattern of the data. You can also cast a char pointer back to the original type safely. Pretty much everything else quickly becomes UB. Look at the C++ style casts for better and safer methods: const_cast, static_cast, dynamic_cast, reinterpret_cast.
72,370,560
72,370,697
How to get the next line in a recursive function
I'm trying to create a recursive function, that will go through a file, and for each line, will create an instance of an object, give this object a name, list of attributes, and a previous name (all names are unique, so I don't need to worry about similarities). Here's the code: void getTags(string prevname){ fstream file; file.open("source.txt"); string name; // Getting current line, and splitting it. string str; getline(file, str); cout << str << endl; vector<string> currentLine = splitString(str, ' '); // Getting tag name if(!isClosing(str)){ for (int i = 1; i < (int)currentLine[0].size(); i++){ name += currentLine[0][i]; } Tag tag = Tag(name, prevname); if(hasAttr(str)){ vector<pair<string, string>> temp = getAttr(currentLine); for(int i = 0; i < (int)temp.size(); i++){ tag.addAttribute(temp[i]); } } tags.push_back(tag); getTags(name); } else { getTags(prevname); } } }; For some reason, my getline(file, str) isn't getting the next line, and is instead just recursing until I get a segmentation fault. I've also tried file >> str. But that didn't work either. I've tried printing the line to check whether it does go to the next line, but it will always stay on the first line. I'm not sure exactly where I've gone wrong, any help would be appreciated. Each line in the file would look something like: name value="value" value2 = "value2"
Your first problem is that your recursion has no base case (i.e. terminating condition), and so getTags is being called until you fill up the call stack, at which point you get a segmentation violation. The way to fix this is to create a terminating condition which, in this case, would be when you exhaust all of the input in the file. Your second problem is that each recursive call opens the file again and assigns it to a new instance of fstream file. This means that your getline call is always going to read the first line, regardless of how many recursive calls you've made so far (keep in mind that local variables defined in one recursive call are not carried over into subsequent calls). One way to fix this is to open the file prior to kicking off the recursion and pass the fstream as a parameter to getTags. That way, you're not needlessly opening the file more than once and you're actually able to read the file properly. Given the above two points, the general structure of getTags might look something like this: void getTags(fstream &file, string prevName) { string line; if (!getline(file, line)) { // This is your base case: getline failed (due to EOF or some other reason) return; } // Main function logic here... vector<string> currentLine = splitString(line, ' '); if (!isClosing(line) ) { // ... }