72,606,772
72,607,196
How does CoffeeCatch jump back to the COFFEE_CATCH clause?
I recently discovered CoffeeCatch, which I want to use to log the C/C++ native crashes on Android. I haven't managed yet, but I am still curious about how it works internally. My understanding is that it basically catches an emitted signal (e.g. SIGSEGV) and allows the user to do something with it; in my case I would like to save the stack trace and crash. It is used like this: COFFEE_TRY_JNI(env, *retcode = call_dangerous_function(env, object)); Where call_dangerous_function() is a function that could crash and therefore emit a signal. The macro is defined here: #define COFFEE_TRY_JNI(ENV, CODE) \ do { \ COFFEE_TRY() { \ CODE; \ } COFFEE_CATCH() { \ coffeecatch_throw_exception(ENV); \ } COFFEE_END(); \ } while(0) Which resolves to something like this: do { if (coffeecatch_inside() || (coffeecatch_setup() == 0 && sigsetjmp(*coffeecatch_get_ctx(), 1) == 0)) { call_dangerous_function(env, object); } else { coffeecatch_throw_exception(ENV); } coffeecatch_cleanup(); } while(0) Here, my understanding is that sigsetjmp(*coffeecatch_get_ctx(), 1) == 0 somehow sets a pointer to the current "context", which I guess is something like the execution state of this thread at this moment. So somehow, later, siglongjmp(t->ctx, code); will be called (after the environment has somehow been prepared for it) to jump back here. What's not clear to me is whether we then jump to the if statement (and evaluate the condition again), or somehow end up in the else statement for some reason. My expectation is that siglongjmp now results in going into the else statement (where in my case a Java exception could be thrown). But: First, that's not what I see (my program emits the SIGSEGV signal right after siglongjmp is called, without ever reaching the else statement). Second, I don't get where siglongjmp should actually jump. If it goes to the if statement again and evaluates the condition again, it will evaluate to true again and that's a loop. How would it go to else?
siglongjmp jumps back to where sigsetjmp was called, and makes it look like sigsetjmp returned the value that siglongjmp was passed. So in this case, if siglongjmp is called with a non-zero value, then it will jump back to the sigsetjmp(*coffeecatch_get_ctx(), 1) == 0 condition, which will evaluate to false and thus the if will not be satisfied and the else block will execute. It's very unlikely you can meaningfully recover from a SIGSEGV however. By the time a SIGSEGV happens your program has wandered so far off into undefined behavior that it is impossible to reason about its current state. There is a high chance that data has been corrupted and/or your call stack has been destroyed. The only meaningful action is to terminate the process. Note: Using (sig)?setjmp/(sig)?longjmp in C++ is a very bad idea. They do not execute object destructors, and thus can easily leak memory and/or violate class invariants.
72,606,999
72,607,621
Changing the lambda function while the lambda arguments and return type stay the same
I am using std::semiregular to hold some functors in a class. Ideally, what I really want is to be able to instantiate such a template class, but define the lambda implementation at a later stage using the register function. However, I am struggling to find a way to do that. Even the simplest case down below does not seem to work. The compiler error says: error: cannot convert 'main()::<lambda(int, float)>' to 'main()::<lambda(int, float)>' 30 | api.register_get(get1); | ^~~~ | | | main()::<lambda(int, float)> #include <concepts> #include <functional> #include <iostream> template<std::semiregular F> class RestApiImpl { F m_get_method; public: RestApiImpl(F get = F{}) : m_get_method{std::move(get)} {} void register_get(F functor) { m_get_method = std::move(functor); } }; int main(){ auto get = [](int, float intf){ std::string dummy = "dummy"; }; RestApiImpl api(get); auto get1 = [](int, float intf){ std::string dummy = "dummy"; }; api.register_get(get1); return 0; };
That's because the types of the two lambdas are different. You could use a function pointer, or a std::function. I believe the following change is valid, and should be the only one required: RestApiImpl<void(*)(int,float)> api(get); The only difference from your code is that the template parameter type is explicitly specified. The type is a pointer to a void-returning function accepting an int for the first parameter and a float for the second. (A pointer to main would be int(*main)(int,char**)…) If you use std::function then the type would be std::function<void(int,float)>.
72,608,088
72,608,257
SFINAE when using lvalue ref but success when using rvalue ref
I searched but really couldn't find an answer as to why SFINAE happens only when the argument is passed by lvalue ref, while the build succeeds when the arg is passed by rvalue ref: template <typename T> class A { public: using member_type = T; }; template <typename AType> typename AType::member_type f(AType&& m) { typename AType::member_type res{}; return res; } void demo() { A<int> a; // ERROR: candidate template ignored: substitution failure // [with AType = A<int> &]: type 'A<int> &' // cannot be used prior to '::' because it has no members f(a); // BUILD SUCCESS f(std::move(a)); }
When you have template <typename AType> typename AType::member_type f(AType&& m) you have what is called a forwarding reference. Even though it looks like an rvalue reference, this reference type can bind to lvalues or rvalues. The way it works is that when you pass an lvalue to f, AType gets deduced to T&, and when you pass an rvalue, AType gets deduced to just T. So, when you do f(a); AType gets deduced as A<int>&, and you try to form the return type A<int>&::member_type, which is invalid as references do not have type members. Conversely, when you do f(std::move(a));, AType gets deduced to A<int>, and A<int> does have a member_type type member. To fix this you can remove the reference-ness of the type by using std::decay_t, like template <typename AType> auto f(AType&& m) { typename std::decay_t<AType>::member_type res{}; return res; }
72,608,807
72,609,287
Attempting to use OpenCV 2.4 C++ library when .so files installed in a non-standard location
I've read some other posts about doing something similar to this, and I know about the existence of the -L and -l flags for G++, however I can't seem to get it right. All of the .so files for opencv 2.4 are currently installed in $HOME/.local/lib, since this is a VM I do not have root access to, and cannot get the administrator to install openCV. I am trying to compile a project with #include <opencv2/opencv.hpp> at the top. Here are a few of the things I have tried: g++ -L$HOME/.local/lib -lopencv OpenCVTest.cpp g++ -L$HOME/.local/lib OpenCVTest.cpp g++ -I$HOME/.local/lib OpenCVTest.cpp I have tried changing to: #include <opencv2/core/core.hpp> and then running: g++ -L$HOME/.local/lib -lopencvcore OpenCVTest.cpp Whichever of these I run, I get the error: OpenCVTest.cpp:1:10: fatal error: opencv2/opencv.hpp: No such file or directory When I used opencv/core/core.hpp, it had the same error but with that instead. $HOME does not expand to a path with spaces in it, and it works the same way when I write out the entire path instead of using $HOME. What am I getting wrong? Are the names after -l not correct? Where do I find the correct names? Any help appreciated, thanks.
Question resolved in comments, the command that successfully compiles my project is: g++ -I$HOME/.local/include -L$HOME/.local/lib -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_ocl -lopencv_photo -lopencv_stitching -lopencv_superres -lopencv_ts -lopencv_video -lopencv_videostab -lrt -lpthread -lm -ldl OpenCVTest.cpp -o OpenCVTest and to run ./OpenCVTest successfully, I had to call: export LD_LIBRARY_PATH=$HOME/.local/lib:$LD_LIBRARY_PATH
72,608,845
72,609,088
C++ concept: Requiring a static variable to be present in a policy class
I want to constrain the template parameters of a policy class. That is, when I call Foo<policy>, I want the compiler to stop here if the policy class does not fulfill the requirements I want. Complete non-working example To simplify the problem, let's consider just the requirement that the policy class has to declare a static variable that itself fulfills another concept (here, the Acceleration concept from the mp-units library). #include <units/isq/si/si.h> using units::isq::Acceleration; // A policy struct earth { // requirement seems to be fulfilled static inline constexpr Acceleration auto gravity = standard_gravity<>; }; // Let's define a concept because I will soon need to use a set of more than one requirement template <typename T> concept SphericBody = requires(T) { { T::gravity } -> Acceleration; }; // The host class that has a constraint on the template argument template<SphericBody T> class Foo { // ... }; int main() { Foo<earth> foo; // does not compile :'( } It fails with the following compiler message: ‘T::gravity’ does not satisfy return-type-requirement { T::gravity } -> units::isq::Acceleration; In the current version of the mp-units library, the Acceleration concept declaration is the following: #include <units/concepts.h> #include <units/isq/dimensions/length.h> #include <units/isq/dimensions/time.h> namespace units::isq { template<typename Child, Unit U, typename...> struct dim_acceleration; template<typename Child, Unit U, DimensionOfT<dim_length> L, DimensionOfT<dim_time> T> struct dim_acceleration<Child, U, L, T> : derived_dimension<Child, U, exponent<L, 1>, exponent<T, -2>> {}; template<typename T> concept Acceleration = QuantityOfT<T, dim_acceleration>; } // namespace units::isq What am I doing wrong? I am aware of this related question: C++ Concepts - Can I have a constraint requiring a function be present in a class? but it focuses on non-static member variables.
Minimal working example As requested by @HolyBlackCat, I tried my best to come up with a minimal working example. The member variable is now a simple integer. Simply adding the requires clause works: template <typename T> concept HasGravity = requires(T t) { { t.gravity } -> std::same_as<int&>; }; struct myearth { int gravity; }; // The host class that has a constraint on the template argument template<HasGravity T> class Foo {}; // using policy_t = Foo<myearth> // compiles Minimal NON working example In this case, the requirement is exported to a concept, and it does not compile anymore. template <typename T> concept IsAcceleration = std::same_as<T, int>; // Let's define a concept because I will soon need to use a set of more than one requirement template <typename T> concept HasGravity = requires(T t) { { t.gravity } -> IsAcceleration; }; // A policy struct myearth { int gravity; }; // The host class that has a constraint on the template argument template<HasGravity T> class Foo {}; // using policy_t = Foo<myearth> // does not compile Error: note: constraints not satisfied test.cpp:45:9: required for the satisfaction of ‘HasGravity<T>’ [with T = myearth] test.cpp:45:22: in requirements with ‘T t’ [with T = myearth] test.cpp:47:7: note: ‘t.gravity’ does not satisfy return-type-requirement 47 | { t.gravity } -> IsAcceleration;
I'm not familiar with this library, but my guess is that the Acceleration concept rejects references. { expr } -> concept requirements determine the type as if by decltype((expr)), which for your variable yields an lvalue reference. decltype inspects the value category of the expression, and adds & to types of lvalues and && to types of xvalues (prvalue types are unchanged). Since expressions can't have reference types, this doesn't lose any information. decltype has a special case for variables - for them it returns the type as written, discarding the value category. By adding a second pair of parentheses, you disable this feature, falling back to the behavior described above.
72,608,991
72,609,142
Why can't I use std::optional with Boost Asio sockets without moving them
I'm creating a simple network game in C++. I have a server class where a single socket is stored for usage. The socket is not known at the creation of the class, so I've chosen to use a std::optional<tcp::socket> (is this the correct way or is there a better one?) which is initialized to std::nullopt and later a socket will be stored inside. I've seen that, since it only has an rvalue (move) assignment operator, I have to move the socket before assigning it, in this way: std::optional<tcp::socket> optSocket(std::nullopt); .... optSocket = std::move(mySocket); On the other hand, I've seen that if I use a simple "std::string" variable (so not a primitive type) I don't need to move it and I can simply do a copy assignment: std::optional<std::string> optString(std::nullopt); .... optString = myString; While, if I try to do the same with a socket, it gives me the following error: No viable overloaded '=' candidate template ignored: requirement '__and_v<std::__not_<std::is_same<std::optional<boost::asio::basic_stream_socket<boost::asio ::ip::tcp, boost::asio::any_io_executor>>, boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost:... candidate template ignored: could not match 'optional' against 'basic_stream_socket' candidate template ignored: could not match 'optional' against 'basic_stream_socket' Why is there a difference between the two types, and why do I need to move the socket (or any other such object) to pass it to a std::optional? Wouldn't it be better to have both a copy and a move assignment? Thank you in advance!
Simply put, sockets aren't copyable because it's not clear what a copy of a socket would be. When the remote end sends data which socket instance would receive that data; the original or the copy? What happens when you close the original what should happen to the copy? You could design a socket class that acts as a shared handle to a socket instead of representing the socket itself, but that would go against the general design theory that most C++ objects follow. That's what something like std::shared_ptr is for. Something as simple as a string doesn't have most of these sorts of concerns. It's fairly clear what it means to make a copy of a string: you just copy the bytes that represent the characters in the string.
72,609,121
72,794,470
Installed C++ with VS Build Tools, but can't find CL.exe
We have a Jenkins build agent based on docker pull mcr.microsoft.com/dotnet/framework/sdk:4.8 Part of the Docker file for the container pulls in additional workloads as follows vs_buildtools.exe --quiet --wait --norestart --nocache modify \ --installPath "%ProgramFiles(x86)%\Microsoft Visual Studio\2022\BuildTools" \ --add Microsoft.VisualStudio.Workload.VCTools \ --add Microsoft.VisualStudio.Workload.DataBuildTools \ --add Microsoft.VisualStudio.Workload.UniversalBuildTools But builds of C++ projects fail saying they can't find CL.EXE. I've Googled this problem and everybody who's had errors saying they couldn't find CL.EXE got the answer to just run vsvars.bat and that fixed it for them. But the CL.exe is physically not there. We go to C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.31.31103 and there is no bin folder. We searched the whole container for cl.exe and do see it in some c:\windows\WinSxS\ folder, and we tried adding that to the PATH environment, but it got an error about it not being compatible with the version of Windows. Is there some reason it won't install the actual compiler?
You also need to pass either --includeRecommended or --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 as an argument. MSVC v143 - VS 2022 C++ x64/x86 build tools (Latest) is listed among Components included by VCTools workload as Recommended and thus is not installed with Microsoft.VisualStudio.Workload.VCTools by default. Here's the description of the --add parameter from VS Installer command-line parameter reference: During an install or modify command, this repeatable parameter specifies one or more workload or component IDs to add. The required components of the artifact are installed, but not the recommended or optional components. You can control additional components globally using --includeRecommended and/or --includeOptional parameters.
72,609,196
72,610,195
WriteProcessMemory, program's value + user's input value at the same time
There is a program that stores a value in memory, like 100. I read that value using ReadProcessMemory(): ReadProcessMemory(processHandle, (LPVOID)(programBaseAddress + offsetProgramToBaseAdress), &baseAddress, sizeof(baseAddress), NULL); After ReadProcessMemory(), baseAddress contains 100. With this code: int value{}; cin >> value; WriteProcessMemory(processHandle, (LPVOID)(pointsAddress), &value, 4, 0); I can change the value in the other program. But I don't want to set the other program's value to just any value. I want to add the user's input to that value. I mean, if the user inputs a number like 50, the result should be 150, not 50. I tried this but it didn't work: WriteProcessMemory(processHandle, (LPVOID)(pointsAddress), &baseAddress + value, 4, 0);
You need to read the value first, then add the user's input to the value, then write the value back. Those are separate operations, don't try to mix them together (ie, &baseAddress + value doesn't do what you think it does). Try something like this instead: int32_t value{}; ReadProcessMemory(processHandle, (LPVOID)(programBaseAddress + offsetProgramToBaseAdress), &value, sizeof(value), NULL); int input{}; cin >> input; value += input; WriteProcessMemory(processHandle, (LPVOID)pointsAddress, &value, sizeof(value), 0);
72,609,454
72,614,418
Using enable_if to decide the type of a member variable
template <typename ...T> class BaseEvent { BaseEvent(const unsigned int index, const uint8_t id, const std::variant<T...> data) : m_index(index), m_id(id), m_data(m_data){}; virtual ~BaseEvent(); template <typename V> const V get() { static_assert(constexpr std::is_same_v<V, T...>); return std::get<V>(m_data); }; protected: unsigned int m_index; uint8_t m_id; std::variant<T...> m_data; // pseudocode: // enable_if(sizeof(T...) > 1) // then: std::variant<T...> m_data // else: T m_data }; However, later in the code: template <class T> class StringEvent : public BaseEvent<T> { virtual ~StringEvent(); const T string() { return get<T>(); }; }; Passing only a single type to BaseEvent will still create a variant, which is useless in that case anyway. How can I use enable_if to make m_data of type T when T... is a single type only?
I have done exactly what you want to do, so I know what you need. To handle both single message and multiple message types, use std::variant<std::monostate, T...>. In addition, your use of is_same_v<> is incorrect. You can only use a single type there, not a parameter pack. So you need code like this: template <typename ...T> class BaseEvent { BaseEvent(const unsigned int index, const uint8_t id, const std::variant<T...> data) : m_index(index), m_id(id), m_data(m_data){}; // ^-- There is a small bug here in creating the variant (different types); you need to solve it, and use in-place construction for the variant. virtual ~BaseEvent(); template <typename V> const V get() { static_assert(is_valid_type<V>() || std::is_same_v<V, std::monostate>); return std::get<V>(m_data); }; protected: unsigned int m_index; uint8_t m_id; std::variant<std::monostate, T...> m_data; private: template<typename U> constexpr static bool is_valid_type() { return (std::is_same_v<U, T> || ...); } }; My main code is much more complex and I just extracted a small part for this, so use it with care.
72,610,082
72,610,118
How to use CreateCompatibleDC(), SetPixel(), and BitBlt() to display an image?
I'm trying to draw and display an image(s) on a device context (variable: dc) by using CreateCompatibleDC(), SetPixel(), and BitBlt() as seen in the code below: HDC Layout = CreateCompatibleDC(0); HBITMAP image = CreateCompatibleBitmap(Layout, symbol->bitmap_width, symbol->bitmap_height); // Draw the image int bit = 0; for (int j = 0; j < symbol->bitmap_height; j++) { for (int k = 0; k < symbol->bitmap_width; k++) { if (symbol->bitmap[bit] == '1') SetPixel(Layout, j, k, rgbBlue); else SetPixel(Layout, j, k, rgbGreen); bit++; } } BOOL success = BitBlt(dc, 1000, 1000, 1000, 1000, BCLayout, 0, 0, SRCCOPY); I expected the image to be displayed in said device context but the image does not display in the end. Does anyone know why that is? A few things I should clarify: the variable "symbol" is a struct variable that holds all the information for the image the symbol->bitmap array is a character array that has characters that denote the color of a pixel on the bitmap representation of the image (why it's one-dimensional, I don't know. It was designed that way by a third party)
CreateCompatibleDC() creates an in-memory HDC with a 1x1 monochrome HBITMAP assigned to it by default. You need to use SelectObject() to replace that default HBITMAP with your own HBITMAP before you then use SetPixel() to change the HDC's pixels, eg: // create an HDC... HDC Layout = CreateCompatibleDC(0); // create a bitmap for the HDC... HBITMAP image = CreateCompatibleBitmap(Layout, symbol->bitmap_width, symbol->bitmap_height); // replace the default bitmap with the new one... // remember the old bitmap for later... HBITMAP oldBmp = (HBITMAP) SelectObject(Layout, image); // Draw the image as needed... // restore the previous bitmap... SelectObject(Layout, oldBmp); // destroy the new bitmap... DeleteObject(image); // destroy the HDC... DeleteDC(Layout);
72,610,264
72,610,373
Custom destructor x default constructors in C++
I have a class with four members and no destructor implemented on my part. If I delete the object, the 4 members will be deleted by the default destructor, right? If I make a blank custom destructor, will none of them be deleted? If I make a custom destructor that only deletes one of them, will the other three be deleted as well?
Strictly answering your questions I don't think that answering your questions literally actually helps you, because the way you framed the problem doesn't help you. But here it is anyway: If I delete the object, the 4 members will be deleted by the default destructor, right? Right. If I make a blank custom destructor none of them will be deleted Incorrect. All the members will still be properly destroyed. If I make a custom destructor that only deletes one of them, will the other three be deleted as well? A destructor (or any other method, for that matter) cannot legally explicitly delete the object's own members. Addressing your confusion If p is a pointer, when you do delete p you are not deleting the pointer p, you are deleting the object pointed to by p. p is there to tell you what object needs deletion. After delete p, p itself is still alive. Sure, it points to invalid memory, but it itself is not deleted. You can assign to it and make it point to something else. Let's consider a simple class with 3 members: a float, an int pointer, and a std::vector object: struct X { float f; int* p; std::vector<double> v; }; The 3 members of your class are: a float, a pointer and a std::vector object. These members will always get properly destroyed by the destructor, regardless of whether you have a user-defined destructor or an implicit destructor. Always! All members! No exception. You cannot inhibit or modify this behavior in any way. For f I don't think there is anything to explain. Now for the interesting part: for p you need to see a very important distinction: the pointer p (which is your data member) and the potential int object that is pointed to by p. Like any other data member, the pointer p will be properly disposed of by the destructor. But what should happen to the object that is pointed to by p? Well, there are several cases: your pointer might point to an int object, point to null, be uninitialized, or point to an invalid address.
If it points to an int object, it might point to an int object that is a subobject of another class type, it might be an element in an array, it might be a static object or an object with automatic storage duration, or it might be an int object created by new. If it is created by new, it might or might not be the X class's responsibility to delete it, and there might be a complex decision behind whether it must destroy it (think shared pointers). X should delete the object pointed to by p iff X has ownership over this pointer, and proper logic should be implemented for that. As you can see, the compiler cannot possibly know what p points to and whether X should delete it. That's why the default destructor doesn't do it. For v again, the destructor (default or user defined) will properly destroy v. You might know that internally std::vector has a pointer member of its own, but: it's std::vector's responsibility to destroy the array that pointer might point to, not X's. It all works out of the box; X does not need to do anything special. How you should think about it In C++ there is the RAII principle which, despite its poor name, is one of the most important concepts in C++. Basically, if your class acquires a resource that needs manual management then that class has ownership over that resource and is responsible for releasing that resource. So if your class does manual memory allocation (new) then it is responsible for calling the corresponding delete. If it acquires a resource by a fictitious resource_acquire() then it is the one responsible for also calling the accompanying resource_destroy(). Another important principle is the single responsibility principle: if your class is responsible for managing a resource then that class should do only that thing.
72,610,677
72,611,180
What are JSON schemas practically used for?
Reference: Getting started with JSON schema I have been reading about JSON schema. I understand that When you’re talking about a data format, you want to have metadata about what keys mean, including the valid inputs for those keys. JSON Schema is a proposed IETF standard how to answer those questions for data. Alright, so these schemas define what is and what is not permitted in the JSON structure I am building. My question is, how are these schemas practically used? For example if I am using a JSON file in a C++ program (or a python script), I can use the json file as it is (of course without any validation). But if I want to validate it, how can I use the json schemas to do that? Are there any recommended libraries for that? (I am interested in C++ but additional info on python would be welcomed too) EDIT: I would like to emphasize that the main purpose of this question is to understand how are these schemas practically used? Are schemas used only for validation? or are there other uses? (I am new to the concept of schemas)
One use is validation. Beyond pass/fail, you get a meaningful error message like e.g. "unexpected value W for field A.B.C, allowed values are X, Y, Z" or "invalid type for field A.B.C, expected date, found int", "missing field A.B.C" etc. They can also serve as self-documentation. They are also used for autocomplete. For instance, a JSON settings file for a program like VS Code. When you edit the settings.json or c_cpp_properties.json from within VS Code you get autocomplete for that particular JSON file. That is built in. But you can also define your own schemas with a file pattern match, and you can get autocomplete in the editor for your own JSON files.
72,610,959
72,611,001
How to understand using :: (scope resolution operator) to access an in-class class (nested class) or typedef
I'm trying to understand the scope resolution operator ::. I know I can only access static class members via the scope resolution operator. But I can also use it to access a typedef or a nested class, like this: class test{ public: class testinner{ public: int _val; testinner(){} testinner(int x):_val(x){} }; test(){} typedef int testdef; int s; }; int main() { test::testinner tt1 = test::testinner(5); //OK LINE(1) test::testinner tt2; //OK LINE(2) test::testdef tt3 = 5; //OK LINE(3) test::s = 5; //non static member ERROR LINE(4) return 0; } I can instantiate an in-class type object via :: such as line 1 and line 2 I can use a typedef to instantiate an object such as line 3 I can't access non-static members via :: such as line 4 Does that mean an in-class class and a typedef are static members of a class? I know a namespace is quite similar to a class name, but I'm still really confused about it. By the way, for the typedef part, can I simply think of tt3 as being of int type rather than test::testdef type?
I can't access non-static members via :: such as line 4 False. The problem is that you cannot access non-static members without an object. If you have an object, you can use a qualified name (with ::). int main() { test t; t.test::s = 5; //^^^^^^ } Is that mean an in-class class and typedef is a static member in a class? It depends what you mean by "static member". They do not require an object of the class, but at the same time they do not require the static keyword. (My understanding is that in the official terminology, nested types and typedefs are not considered "members", so in that respect they are not static members. However, I think that might be side-stepping the intended question.) By the way, for the typedef part, can I simply think tt3 is int type rather than test::testdef type? Up to you. A typedef creates an alias, so int and test::testdef are two names for the same thing. If you prefer thinking in terms of int, do that. If you prefer thinking in terms of test::testdef, do that. The type doesn't mind which name you use. (The same would hold if the typedef was outside a class definition. You are making the situation less clear for yourself by thinking that the class makes a difference in this case. A typedef defines an alias for a type.)
72,611,080
72,614,035
How to add a callback in an event handler in a legacy MFC code?
This is a toy implementation of legacy code using MFC. OnBnClickedButton is an event handler, but it contains code which is executed asynchronously in a different thread (maybe a bad idea). The declaration syntax is accepted by the message map. //declaration afx_msg void OnBnClickedButton(); //message map ON_BN_CLICKED(IDC_BUTTON, &CMFCApplicationDlg::OnBnClickedButton) Now I want to add a callback to the event handler like so, but the message map won't accept the new declaration syntax. Where do I go from here? afx_msg void OnBnClickedButton(std::optional<std::function<CString(void)>> callback);
The function signatures and return values for entries in MFC message maps are fixed. You have to follow the protocol; it doesn't offer any customization points. In case of the ON_BN_CLICKED button handler, the prototype must abide by the following signature: afx_msg void memberFxn(); It doesn't accept or return any values. The only state available is that implied from the message map entry (i.e. OnBnClickedButton is called whenever the child control of the dialog represented by CMFCApplicationDlg with ID IDC_BUTTON is clicked). In your implementation of OnBnClickedButton you are free to do whatever you like, such as querying for additional information (e.g. from data stored in the class instance or thread-local storage), spinning up threads, either explicitly or using C++20 coroutines, etc. MFC doesn't help you with any of that; specifically, it doesn't provide any sort of support for asynchronous operations. That's something you will have to implement yourself.
72,611,116
72,611,206
"If the deriving class does not inherit the base class virtually, then all virtual methods must be defined".How to understand that in the right way?
As per the wiki, which says that [emphasis mine]: Note the code snippet in the quotation is seen here. Suppose a pure virtual method is defined in the base class. If a deriving class inherits the base class virtually, then the pure virtual method does not need to be defined in that deriving class. However, if the deriving class does not inherit the base class virtually, then all virtual methods must be defined. The code below may be explored interactively here. #include <string> #include <iostream> class A { protected: std::string _msg; public: A(std::string x): _msg(x) {} void test(){ std::cout<<"hello from A: "<<_msg <<"\n"; } virtual void pure_virtual_test() = 0; }; // since B,C inherit A virtually, the pure virtual method pure_virtual_test doesn't need to be defined class B: virtual public A { public: B(std::string x):A("b"){} }; class C: virtual public A { public: C(std::string x):A("c"){} }; // since B,C inherit A virtually, A must be constructed in each child // however, since D does not inherit B,C virtually, the pure virtual method in A *must be defined* class D: public B,C { public: D(std::string x):A("d_a"),B("d_b"),C("d_c"){} void pure_virtual_test() override { std::cout<<"pure virtual hello from: "<<_msg <<"\n"; } }; // it is not necessary to redefine the pure virtual method after the parent defines it class E: public D { public: E(std::string x):A("e_a"),D("e_d"){} }; int main(int argc, char ** argv){ D d("d"); d.test(); // hello from A: d_a d.pure_virtual_test(); // pure virtual hello from: d_a E e("e"); e.test(); // hello from A: e_a e.pure_virtual_test(); // pure virtual hello from: e_a } How to understand the statement in bold in the right way? It seems that if the deriving class (i.e.
class B) does not inherit the base class virtually, then virtual methods can be left undefined.Here is my demo code snippet to support what I say: #include <string> #include <iostream> class A { protected: std::string _msg; public: A(std::string x): _msg(x) {} void test(){ std::cout<<"hello from A: "<<_msg <<"\n"; } virtual void pure_virtual_test() = 0; }; // Attention: B does not inherit A ***virtually***, the pure virtual method pure_virtual_test doesn't need to be defined, either. class B: public A { public: B(std::string x):A("b"){} }; class D: public B { public: D(std::string x):B("d_b"){} void pure_virtual_test() override { std::cout<<"pure virtual hello from: "<<_msg <<"\n"; } }; // it is not necessary to redefine the pure virtual method after the parent defines it class E: public D { public: E(std::string x):D("e_d"){} }; int main(int argc, char ** argv){ D d("d"); d.test(); d.pure_virtual_test(); E e("e"); e.test(); e.pure_virtual_test(); }
The description in the wikipedia article is wrong/misleading. "If the deriving class does not inherit the base class virtually, then all virtual methods must be defined" is only true if the deriving class gets instantiated. A mere declaration, without instantiation, does not require definition of pure virtual methods. The wikipedia article's claim that "since D does not inherit B,C virtually, the pure virtual method in A must be defined" is simply not true, and the following compiles without any issues, without either D or E instantiating the pure virtual method: #include <string> #include <iostream> class A { protected: std::string _msg; public: A(std::string x): _msg(x) {} void test(){ std::cout<<"hello from A: "<<_msg <<"\n"; } virtual void pure_virtual_test() = 0; }; // since B,C inherit A virtually, the pure virtual method pure_virtual_test doesn't need to be defined class B: virtual public A { public: B(std::string x):A("b"){} }; class C: virtual public A { public: C(std::string x):A("c"){} }; class D: public B,C { public: D(std::string x):A("d_a"),B("d_b"),C("d_c"){} }; class E: public D { public: E(std::string x):A("e_a"),D("e_d"){} }; int main() { return 0; } main is left empty, and D and E are declared without issues. Now, if you try to instantiate one or the other, then you're going to have problems.
72,611,170
72,611,285
Breaking a module into multiple implementation files
C++20 modules question. Let's say I have the following code files, where '.ixx' are module files. Main.cc, A.ixx, B.ixx, ..., Z.ixx, Group.ixx If I want to make all the files [A-Z] part of the same module, 'TheModule', does each file need a unique module partition name? ie: // A.ixx export module TheModule:A // B.ixx export module TheModule:B // ... // Z.ixx export module TheModule:Z // Group.ixx export module TheModule; export import :A; export import :B; // ... export import :Z; Is there a way for '.ixx' files to declare their stuff into 'TheModule', without each needing to have a separate partition name? And if so, how would importing work between the various '.ixx' files -- how do I access the stuff from 'A.ixx' within 'B.ixx'?
If I want to make all the files [A-Z] part of the same module, 'TheModule', does each file need a unique module partition name? Yes. Any module unit that can be independently imported either is the primary module interface unit or is a module partition. In both cases, it must have a name. A unique name. Is there a way for '.ixx' files to declare their stuff into 'TheModule', without each needing to have a separate partition name? No. Module imports represent a directed, acyclic graph of dependencies between files, not a hodge-podge collection of code.
72,611,247
72,621,838
Register width and parsing for a fast-loading file format
For the past approx. 20 years I've been working on a program for 3D graphics that implements a METAFONT-like language. It's in C++. I now have started working on a format and functions for writing the data for the 3D objects to a binary file and then reading them in again. It is intended for saving and fast-loading data that has been calculated in order to avoid calculating it again each time the program is run. The syntax for the file format is intended to be for a machine-like language that allows for the highest possible efficiency without having to worry about being comfortable for people to read or write. My question relates to the way data is read into registers: The architecture of my computer is x86_64, so obviously I have 64-bit registers. Does it pay at all to read data into objects smaller than 64 bit, i.e., chars, ints or floats? Isn't anything that's read read into a 64-bit register? As I understand it, any unused bits of a register are set to 0, which is an extra step, so less efficient than just reading a long int or a double in the first place. Is this correct and does anyone have any suggestion on how I should proceed? This is what I tried in response to Scheff's Cat's comment. 
/* ttemp.c */ #include <stdlib.h> #include <stdio.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <unistd.h> void write_uint(unsigned int i); void write_ulong(unsigned long int li); int fd = 0; int main(int argc, char *argv[]) { printf("Entering ttemp.\n"); fd = open("ttemp.output", O_WRONLY | O_CREAT, S_IRWXU); printf("fd == %d\n", fd); write_uint(~0U); write_ulong(~0UL); close(fd); printf("Exiting ttemp.\n"); return 0; } void write_uint(unsigned int i) { write(fd, &i, 4); return; } void write_ulong(unsigned long int li) { write(fd, &li, 8); return; } Then I ran: gcc -pg -o ttemp ttemp.c ttemp gprof ttemp This is the contents of ttemp.output, according to Emacs in Hexl mode, so the objects were obviously written to the output file: 00000000: ffff ffff ffff ffff ffff ffff ............ This was the relevant portion of the output of gprof: Call graph (explanation follows) granularity: each sample hit covers 2 byte(s) no time propagated index % time self children called name 0.00 0.00 1/1 main [8] [1] 0.0 0.00 0.00 1 write_uint [1] ----------------------------------------------- 0.00 0.00 1/1 main [8] [2] 0.0 0.00 0.00 1 write_ulong [2] ----------------------------------------------- So, not very illuminating. My guess is that the nulling in the registers is performed at the level of the processor and any time it takes won't show up on the system call level. However, I'm not a systems programmer and my grasp of these topics isn't particularly firm.
Does it pay at all to read data into objects smaller than 64 bit, i.e., chars, ints or floats? This depends on the architecture. On most platforms it is very cheap, often a single cycle if not outright free, depending on the exact target code. For more information about this, please read Should I keep using unsigned ints in the age of 64-bit computers?. Note that float-double conversion can be significantly slower, but it is still a matter of dozens of cycles on most mainstream x86 platforms (it can be very slow on embedded devices though). Isn't anything that's read read into a 64-bit register? Actually, the processor does not read files per block of 64 bits. Nearly all IO operations are buffered (otherwise they would be very slow due to the high latency of storage devices and even system calls). For example, the system can fetch a buffer of 256 KiB when you request only 4 bytes, because it knows that applications often read files contiguously and also because most storage devices are optimized for contiguous operations (the number of IO operations per second is generally small). For more information about the latency of IO operations compared to other ones, please read this (note the numbers are approximations). Put shortly, the latency of an IO operation is far bigger than that of a type cast, so the latter should be completely negligible on most platforms (at least all mainstream ones). And even though reads/writes are buffered, the cost of a function call that reads/writes from/to an internal buffer is still higher than a cast. Thus, you should not care much about that in such a case.
72,611,565
72,613,499
CMake dependencies between libraries and programs
I'm a beginner with CMake and since yesterday I try something without result :-( I explain my goal... I've a C++ project with dynamic libraries and programs using these libraries. Here is the structure of my project: libA libB program1 program2 program3 Inside each directory, I've an include and a src directory. libB uses libA, program1 program2 and program3 use libA and libB. I've a CMakeLists.txt in each directory (even in the root directory). I'm able to build each lib and program individually (without dependencies) but I don't know how to define dependencies between them. For example, I would like the include files of the libraries to be known by the programs, the same for the link of .dll Could someone help me on this topic? Thanks for your answers :-) Regards.
how to define dependencies between them. Just, in the respective CMakeLists: target_link_libraries(program1 PRIVATE libA libB) target_link_libraries(program2 PRIVATE libA libB) target_link_libraries(program3 PRIVATE libA libB) target_link_libraries(libB PUBLIC libA)
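To also make the libraries' include directories known to the programs, target_include_directories with PUBLIC visibility propagates them to every consumer. A fuller sketch; the src/include paths and source file names here are illustrative assumptions, not taken from the question:

```cmake
# Top-level CMakeLists.txt
cmake_minimum_required(VERSION 3.15)
project(MyProject CXX)
add_subdirectory(libA)
add_subdirectory(libB)
add_subdirectory(program1)

# libA/CMakeLists.txt
add_library(libA SHARED src/a.cpp)
# PUBLIC: consumers of libA automatically see its headers
target_include_directories(libA PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)

# libB/CMakeLists.txt
add_library(libB SHARED src/b.cpp)
target_include_directories(libB PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)
target_link_libraries(libB PUBLIC libA)   # libB uses libA

# program1/CMakeLists.txt
add_executable(program1 src/main.cpp)
target_link_libraries(program1 PRIVATE libA libB)
```

Because libB links libA as PUBLIC and both libraries export their include directories as PUBLIC, program1 inherits the headers and the .dll/.so link dependencies transitively from its single target_link_libraries call.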
72,611,859
72,625,772
Is there a portable way to implement variadic CHECK and PROBE macros for detecting the number of macro arguments in C++?
In C Preprocessor tricks, tips, and idioms, it suggests the following macros which detect the number of arguments created by a macro: #define CHECK_N(x, n, ...) n #define CHECK(...) CHECK_N(__VA_ARGS__, 0,) #define PROBE(x) x, 1, and then states that: CHECK(PROBE(~)) // Expands to 1 CHECK(xxx) // Expands to 0 However in MSVC 2019, compiling for C++17, the above two CHECK()'s both expand to 0. Godbolt shows me that GCC and Clang expand the macros as expected, but MSVC does not. It seems to be pasting __VA_ARGS__ from CHECK(...) as a single token even if it contains commas...? I have heard that MSVC is nonconforming with regard to macros in some ways, but I am not clear on the details. Is there a way to make these macros work for MSVC, and ideally still work for GCC/Clang? (I'd rather not #ifdef a separate implementation if possible.)
After some more digging I found this answer from the VS Developer Community, which provides the solution: an extra layer of indirection and some funky rebracketing. Rewriting to match the original question: #define CHECK_N(x, n, ...) n #define CHECK_IMPL(tuple) CHECK_N tuple //note no brackets here #define CHECK(...) CHECK_IMPL((__VA_ARGS__, 0)) //note the double brackets here #define PROBE(x) x, 1 godbolt demonstrates that this works across MSVC, gcc, and clang. Some of the other macro tools from the original link also require some adjustments (eg IIF(x)) but again, more layers of indirection seem to solve those too. I hope one day to be able to use the compiler option /Zc:preprocessor as was mentioned elsewhere, which also fixes these macros, unfortunately that breaks certain other libraries (such as the Windows SDK).
72,612,034
72,613,153
Is there a way to get or notice the default arguments of the function?
Example: I have lots of classes, each with its own constructor with default arguments, and each with a fake_constructor function that has the same arguments as the constructor, so I can take a member function pointer from it. class someRandomClass { public: someRandomClass(int a = 0, float b = 0.f, double c = 0.0, const char* d = "") {} void fake_constructor(int a = 0, float b = 0.f, double c = 0.0, const char* d = "") {} }; And I have a function which takes a class type and constructs it with the parameter pack args: template<typename Cls, typename... T> void callClassConstructor(T... args) { // check if the arguments are the same as the class constructor's. if (compare_Class_Augments(&Cls::fake_constructor, args...) == true) Cls(args...); // construct the class. } Also, I have a function to compare whether the parameter pack is the same as the class constructor arguments: template<typename type, typename Cls, typename...Arg1, typename...Arg2> bool compare_Class_Augments(type(Cls::* func)(Arg1...), Arg2...args) { return (std::is_same_v<std::tuple<Arg1...>, std::tuple<Arg2...>>); } This is how it works: first I call callClassConstructor and pass it the class type and the arguments; it then gets the fake_constructor, which matches the constructor, and compares its arguments with the arguments I gave. If they are the same, it constructs that class type with those arguments. int main() { callClassConstructor<someRandomClass>(69, 5.f, 1.23, "hello"); } But it only works if I pass all the arguments. If I do this: callClassConstructor<someRandomClass>(69, 5.f); it won't work, because it doesn't understand the default arguments; it only checks whether the argument types are the same: Example: int, float, double, const char* int, float Sorry for the unclear explanation. PS: I know I can just remove the default-arguments part and always pass all the arguments, but I want to know if there's a way to solve this.
std::is_constructible traits might help (And you might get rid of fake_constructor :-) ): template<typename Cls, typename... Ts> // requires(std::is_constructible_v<Cls, Ts&&...>) // C++20, // or SFINAE for previous versions void callClassConstructor(Ts&&... args) { if constexpr (std::is_constructible_v<Cls, Ts&&...>) { Cls myClass(std::forward<Ts>(args)...); // .... } }
72,612,340
72,612,539
the class that i defined as student is storing the variable but not processing it and doing the desired action
#include <iostream> #include <string> class student { public : int total_percentage {}; public: int eng_marks {31}; int maths_marks {64}; int sst_marks {98}; int comp_marks {89}; int sports_marks {56}; public: int percentage(){ total_percentage = ((eng_marks + maths_marks + sst_marks + comp_marks + sports_marks)/500)*100; return total_percentage; } public: int grade() { if (total_percentage >= 90){ std::cout<<"The grade of the student is A. Congratulations ! " << std::endl; } else if(total_percentage >= 80 && total_percentage<90 ) { std::cout<<"The grade of the student is B. COOL !! "<<std::endl; } else if (total_percentage >= 70 && total_percentage<80){ std::cout<<"The grade of the student is C. UH huh !! "<<std::endl; } else if (total_percentage >= 60 && total_percentage<70){ std::cout<<"The grade of the student is D. UH huh !! "<<std::endl; } else { std::cout<<"The grade of the student is F. FAIL WORK HARD !! !! "<<std::endl; } return 0; } }; int main (){ student student1; std::cout<<"The percentage of default student is = " << student1.percentage() << std::endl; std::cout<< student1.grade() << std::endl; std::cout<< std::endl; std::cout<< std::endl; student student2; student2.comp_marks = 34; student2.sst_marks = 78; student2.eng_marks = 42; std::cout<<"The percentage of student2 is = " << student2.percentage() << std::endl; std::cout<<" student2 eng_marks = " << student2.eng_marks << std::endl; std::cout<<student2.grade() << std::endl; std::cout<< std::endl; std::cout<< std::endl; /* the value is being stored in the eng_marks variable but still the code is unable to calculate the total percentage */ student student3; student3.maths_marks = 95; student3.sports_marks = 90; student3.sst_marks = 93; student3.comp_marks = 98 ; std::cout<< "The percentage of student3 is = " << student3.percentage() << std::endl; std::cout<<" student3 sst_marks = " << student3.sst_marks << std::endl; std::cout<<student3.grade() << std::endl; std::cout<< std::endl; std::cout<< 
std::endl; std::cout<<"Program end hit !! Thanks " << std::endl; return 0; } /* OUTPUT The percentage of default student is = 0 The grade of the student is F. FAIL WORK HARD !! !! 0 The percentage of student2 is = 0 student2 eng_marks = 42 The grade of the student is F. FAIL WORK HARD !! !! 0 The percentage of student3 is = 0 student3 sst_marks = 93 The grade of the student is F. FAIL WORK HARD !! !! 0 */
You are performing an integer division. The following division will produce incorrect results. total_percentage = ((eng_marks + maths_marks + sst_marks + comp_marks + sports_marks)/500)*100; Since all variables involved are integers, the compiler will perform the following division. total_percentage = ( (31+64+98+89+56) / 500 ) * 100 total_percentage = ( (338) / 500 ) * 100 total_percentage = ( 0 ) * 100 Please change the data type of total_percentage to double and update the total_percentage calculation as follows: total_percentage = ((eng_marks + maths_marks + sst_marks + comp_marks + sports_marks)/500.)*100.; Note 500. instead of 500 in the above calculation. For more details kindly refer to implicit conversion.
72,612,451
72,612,718
How to get a second cin to work when the first has a while loop to take in an unknown size input
I have been trying to figure out how to get a simple program to work, however I am getting hung up on taking user input from the console. I am able to take in a list of integers (eg. 3 5 3 2 1 8 9) into a vector, however I need to also take in one more user input for the number I need to check if it is inside the vector. When I run the code, it always skips over the second cin and does not allow any more console input, finishing the program. My best understanding is that since cin does not take white space into account, using a second line will not work. However, I do not understand after breaking the while loop it will skip over my next cin statement. Another way I can think of getting it to work is possibly using getline for the first line of input, however I am not sure of how to get that to work especially when converting back to an integer. Sample Input for line 1 on console: 2 7 6 7 8 5 67 54 3 (these will go into a vector) Sample Input for line 2 on console: 54 (this will just go into another variable num) int i; vector<int> v; int num; cout << "When finished entering numbers type any letter and hit enter" << endl << "Enter list of numbers: "; while (cin >> i) { v.push_back(i); } cout << endl << "Enter number to be found: "; cin >> num; cout << endl;
Your while loop is reading integers from the input stream std::cin, so if you enter a letter std::cin goes into an error state and will remain there until you explicitly clear the error state. To clear the error state, call cin.clear(). But the invalid input remains in the stream. To ignore all the remaining characters in the stream, call cin.ignore() like this: std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); So please add the following two lines after the while loop, then your program should work as expected. std::cin.clear(); std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); Output: When finished entering numbers type any letter and hit enter Enter list of numbers: 1 2 3 4 abcde Enter number to be found: 3
72,612,665
72,612,792
Can I edit a global vector using multiple threads in C++?
I currently have a code which works well, but I am learning C++, and hence would like to rid myself of any newbie mistakes. Basically the code is vector<vector<float>> gAbs; void functionThatAddsEntryTogAbs(){ ... gAbs.push_back(value); } int main(){ thread thread1 = thread(functionThatAddsEntryTogAbs,args); thread thread2 = thread(functionThatAddsEntryTogAbs,args); thread1.join(); thread2.join(); std::sort(gAbs.begin(),gAbs.end()); writeDataToFile(gAbs,"filename.dat"); } For instance I remember learning that there are only a few instances where global variables are the right choice. My initial thought was just to have the threads write to the file, but then I cannot guarantee that the data is sorted (which I need), which is why I use std::sort. Are there any suggestions of how to improve this, and what are some alternatives that more experienced programmers would use instead? The code needs to be as fast as possible. Thanks in advance
You can access and modify global resources, including containers from different threads, but you have to protect them from doing that at the same time. Some exceptions are: no modifications are possible, the container itself is not changed and the threads are working on separate entries. In your code, entries are added to the container, so you need mutexes, but by doing that your parallel code probably doesn't gain you much in speed. A better way could be to know how many entries need to be added, add empty entries (just initialize) and then assign ranges to the threads, so they can fill in the entries.
72,612,728
72,613,688
GCC but not Clang changes ref-qualifier of function type for a pointer to qualified member function
Following snippet compiles in Clang but not in GCC 12. // function type (c style) //typedef int fun_type() const&; // C++ style using fun_type = int() const&; struct S { fun_type fun; }; int S::fun() const& { return 0; } int main() { fun_type S::* f = &S::fun; } Produces error in GCC: prog.cc: In function 'int main()': prog.cc:21:25: error: cannot convert 'int (S::*)() const &' to 'int (S::*)() const' in initialization 21 | fun_type S::* f = &S::fun; | ^~~~~~~ Declaration of S should be equivalent of following declaration struct S { int fun() const&; }; Using this declaration doesn't change behaviour of either compiler. Is this a bug in compiler's translation module related to an under-used feature of language? Which compiler is correct standard-wise?
Which compiler is correct standard-wise? Clang is correct in accepting the program. The program is well-formed as fun_type S::* f is equivalent to writing: int (S::*f)() const & which can be initialized by the initializer &S::fun.
72,613,068
72,613,172
Lifetime of the returned range-v3 object in C++
I want to make a function that works like np.arange(). With range-v3, the code is auto arange(double start, double end, double step){ assert(step != 0); const auto element_count = static_cast<int>((end - start) / step) + 1; return ranges::views::iota(0, element_count) | ranges::views::transform([&](auto i){ return start + step * i; }); } and to use it, auto range = arange(1, 5, 0.5); for (double x : range){ std::cout << x << ' '; // expect 1 1.5 2 2.5 3 3.5 4 4.5 5 } However, the result told me a dummy value. I think the lifetime of returned range object is expired, and I found that by making them to vector can pass the result well. (And it will cause overhead for constructing vector.) Is there any way to return range itself without expired lifetime ?
You fell victim to Undefined Behaviour due to capturing of local variables via [&]. If you capture by value [start, step](auto i){ return start + step * i; }, the code will work correctly. Note that views are always non-owning, can be copied around and are generally O(1) in their storage. Since iota is a generating view and stores its full state inside itself, the code is safe.
72,613,136
72,613,620
How to check if a parameter pack contains all elements of another parameter pack
Example: I have a function A with some default arguments, and I want a function that takes all the arguments of that function A and checks them against all the arguments I am giving. If function A accepts all those arguments, then it will call function A with those arguments. Here is my sample code: void A(int a = 0, float b = 0.f, double c = 0.0, const char* d = "") {} template<typename T1, typename...Arg1, typename...Arg2> void compareArguments(T1(*func)(Arg1...), Arg2...args) { // some code here: if (Arg1... contains Arg2...) // something to check func(args...); // call function. } int main() { compareArguments(A, 69, 5.5f); } Any idea?
As stated in the comments, when passing T1(*func)(Arg1...), you lose the default parameters. So instead of passing a function pointer, you might pass a functor: [](auto... args) -> decltype(A(args...)){ return A(args...); } and then std::is_invocable might be used: template<typename F, typename... Ts> void call(F func, Ts...args) { if constexpr (std::is_invocable_v<F, Ts...>) { func(args...); } } with usage call([](auto... args) -> decltype(A(args...)){ return A(args...); }, 69, 5.5f);
72,613,849
72,613,974
Max Pairwise Product problem integer overflow
#include<algorithm> #include <iostream> #include<vector> using namespace std; long long MaxPairwise(const std::vector<int>& nums){ long long product = 0; int n; n=nums.size(); int index1=-1; for(int i=0;i<n;i++){ if(index1==-1 || nums[index1]<nums[i]){ index1=i; } } int index2=-1; for(int j=0;j<n;j++){ if(index1!=j && nums[index2]<nums[j]){ index2=j; } } product=nums[index2]*nums[index1]; return ((long long)(product)); } int main() { int n; cin>>n; std::vector<int> nums(n); for(int i=0;i<n;i++){ std::cin>>nums[i]; } long long result; result= MaxPairwise(nums); cout<<result<<'\n'; return 0; } It causes an integer overflow for inputs 900000 100000, even though I have assigned a long long type to the variables. How do I fix this? I have tried changing the types but cannot figure it out and need help.
product = 1LL * nums[index2] * nums[index1]; forces conversion of the coefficients on the right hand side to the long long type. Otherwise the type of the product is an int, with possible overflow effects. Using std::vector<long long> is another option. Note that nums.size(); is a std::vector<int>::size_type type. That's certainly unsigned, and likely to be a std::size_t. In other words there's another possibility of overflow in using an int there.
72,614,028
72,614,092
C++ Unix and Windows support
I want to make my project available for Linux. Therefore, I need to substitute functions from windows.h library. In my terminal.cpp I highlight error messages in red. This step I only want to do in windows OS (ANSI don't work for my console, so i don't have a cross-platform solution for this). On windows it works, but on Linux i get the following error: /usr/bin/ld: /tmp/ccvTgiE8.o: in function `SetConsoleTextAttribute(int, int)': Terminal.cpp:(.text+0x0): multiple definition of `SetConsoleTextAttribute(int, int)'; /tmp/cclUawx7.o:main.cpp:(.text+0x0): first defined here collect2: error: ld returned 1 exit status In my main.cpp file I do nothing but include terminal.h and run it. terminal.cpp if (OS_Windows) { SetConsoleTextAttribute(dependency.hConsole, 4); cout << "Error: " << e.getMessage() << endl; SetConsoleTextAttribute(dependency.hConsole, 7); } else { cout << "Error: " << e.getMessage() << endl; } terminal.h #ifdef _WIN32 #define OS_Windows 1 #include "WindowsDependency.h" #else #define OS_Windows 0 #include "UnixDependency.h" #endif WindowsDependency.h #pragma once #include <Windows.h> class Dependency { public: HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE); }; UnixDependency.h #pragma once class Dependency { public: int hConsole = 0; }; void SetConsoleTextAttribute(int hConsole, int second) {};
Header files are supposed to contain declarations. By adding the {} you made a definition, and C++ does not allow multiple definitions of the same function with identical signatures. Either remove the {} and provide the definition in a separately compiled .cpp file, or mark the function as inline.
72,614,417
72,616,562
How to use arc co-ordinates from .slib file to draw an arc in Qt?
I am trying to generate various gate symbols ( AND,NOT,XNOR,MUX etc) by reading .slib file. But I faced a problem while reading an arc related co-ordinate from .slib file. I am not understanding how to use those co-ordinates and draw an arc ? The format of an arc in .slib file is confusing. Here is the example: .slib format for an arc and for line line (66 * SCALE, 80 * SCALE, 0 * SCALE, 80 * SCALE); line (94 * SCALE, 70 * SCALE, 62 * SCALE, 70 * SCALE); . . arc (145 * SCALE, 100 * SCALE, 94 * SCALE, 70 * SCALE,94.9268 * SCALE,126.774 * SCALE); arc (94 * SCALE, 130 * SCALE, 145 * SCALE, 100 * SCALE,94.9268 * SCALE, 73.2256 * SCALE); arc (61 * SCALE, 130 * SCALE, 61 * SCALE, 70 * SCALE,8.75 * SCALE, 100 * SCALE); 1st line says draw an arc from O (145,100) to F(94,70) 2nd line says draw an arc from L(94,130) to O(145,100) 3rd line says draw an arc from K(62,30) to E(62,70) I tried to draw an arc by using 1st 4 co-ordinates from line ( but do not know how to use remaining 2 co-ordinates ? ) QPainterPath path; // arc from L ---> F path.moveTo(94,70); QRect bound1 (44,70,102,60); path.arcTo(bound1,90,-180); QPainterPath path1; // arc from K ---> E path1.moveTo(62,70); QRect bound2 (42,70,40,60); path1.arcTo(bound2,90,-180); And I got following output : But, Input lines to OR gate are not attached to 1st arc. I am using only first four co-ordinates. How to use remaining 2 co-ordinates to draw an arc ? So how to use all given co-ordinates from.slib to draw an arc ? Note : SCALE is defined at the start of the file.
It looks like the first two coordinate pairs are two points on an imaginary circle and the third pair is the center of that circle. Together, those describe a circle arc section. For this to work with arcTo, we construct a QRectF bounding the circle, ie with the given center and side 2*radius. Thus, the following ought to work: QPointF from, to; // first and second coordinate pair QPointF center; // third coordinate pair // Bounding rectangle is a square around center. QLineF lineFrom{center, from}; QLineF lineTo{center, to}; qreal radius = lineFrom.length(); QRectF bounding{ center - QPointF{radius, radius}, center + QPointF{radius, radius}}; // Use QLineF to calculate angles wrt horizontal axis. qreal startAngle = lineFrom.angle(); qreal sweep = lineFrom.angleTo(lineTo); QPainterPath path; path.moveTo(from); path.arcTo(bounding, startAngle, sweep);
72,615,134
72,615,233
Ranges algorithm in LLVM 14 libc++
I have this snippet. #include <algorithm> #include <vector> int main() { std::vector<int> v1 = {1, 2, 3}; std::vector<int> v2 = {4, 5, 6}; return std::ranges::equal(v1, v2); } I compile it with GCC 10 (Debian stable) and everything's alright: $ g++ -std=c++20 test.cpp -o test <compiles fine> I compile it with Clang 14 and libc++14 (Debian stable, installed from packages from apt.llvm.org): $ clang++-14 -std=c++20 -stdlib=libc++ test.cpp -o test test.cpp:8:25: error: no member named 'equal' in namespace 'std::ranges' return std::ranges::equal(v1, v2); ~~~~~~~~~~~~~^ 1 error generated. Same for a lot of other things. Is libc++ support for the ranges library really so behind or am I missing something?
You can find an exhaustive table of implementations' feature support here: https://en.cppreference.com/w/cpp/compiler_support For C++20's "The One Ranges Proposal", which std::ranges::equal is part of, the table says "13 (partial)". There is another overview for clang here: https://clang.llvm.org/cxx_status.html#cxx20. Though it only lists language features.
72,616,930
72,620,163
Is extracting the binaries from a GLSL shader a standard, supported operation? If so, how do we build glad.c to support it?
We have been working on an OpenGL program where glad was built two summers ago, working on Linux and windows on cards such as NVIDIA 2060 under Ubuntu 20.04LTS, Intel on Windows and Ubuntu, GeForce 940mx, and others. On Linux the driver I personally am using is nouveau on this laptop. *-display description: VGA compatible controller product: HD Graphics 620 vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 02 width: 64 bits clock: 33MHz capabilities: pciexpress msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:129 memory:a2000000-a2ffffff memory:b0000000-bfffffff ioport:4000(size=64) memory:c0000-dffff *-display description: 3D controller product: GM108M [GeForce 940MX] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a2 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list rom configuration: driver=nouveau latency=0 resources: irq:131 memory:a3000000-a3ffffff memory:90000000-9fffffff memory:a0000000-a1ffffff ioport:3000(size=128) In a previous question, I asked why we were getting a segfault when trying to get the binaries from a shader program. The fragmentary answer given was that perhaps glad.c was built wrong. This isn't, in my view, an acceptable answer but perhaps I need to construct a better question. Is there any way to debug OpenGL segfaulting when extracting code from binary shader Is extracting binary from shader programs a standard feature that will work on all modern OpenGL and drivers? Let's say windows/Intel, windows/NVIDIA, linux/Intel, linux/NVIDIA Nouveau, and/or Linux/NVIDIA with an NVIDIA driver. If it doesn't work on some platforms, what is the clean programmatic way to test for this? How do I tell if the feature is not supported so I can dynamically disable it if it does not exist? If we have generated glad.c incorrectly, and that is the reason the feature is not working, how do I generate it correctly? 
I just went to glad.david.de, selected opengl 4.6 core and generated. Is that right? If not, what do I do?
Is extracting binary from shader programs a standard feature that will work on all modern OpenGL and drivers? Let's say Windows/Intel, Windows/NVIDIA, Linux/Intel, Linux/NVIDIA Nouveau, and/or Linux/NVIDIA with an NVIDIA driver. Retrieving the binary representation of a compiled shader program is specified in the ARB_get_program_binary OpenGL extension. This feature is also available in core OpenGL since version 4.1. This means that you can use this feature if any of the following is true: The GL context you're using has at least version 4.1 The GL implementation you are using advertises the availability of this feature (on this context) by including GL_ARB_get_program_binary in the GL extension string. Every reasonably modern GPU should support GL 4.1, so this feature should be widely available. However, some implementations may support OpenGL 4.x only in the core profile. If you work with compatibility or legacy profiles, you may be out of luck. If it doesn't work on some platforms, what is the clean programmatic way to test for this? How do I tell if the feature is not supported so I can dynamically disable it if it does not exist? This is one of the main points of having an extension mechanism at all. Since you used the glad GL loader, this can be done via glad quite easily. After you have created the context and initialized glad, you can query the availability of this feature at runtime with if (GLAD_GL_VERSION_4_1 || GLAD_GL_ARB_get_program_binary) { // feature is available... } Since core OpenGL and the extension specify exactly the same function and enum names without any extension suffix, you can just use these functions no matter whether they were acquired via the core OpenGL feature set or the extension. Please note that it will matter how you create the context, and which version you request when you create the context. If you ask for a context below version 4.1, you might not get one even if the implementation technically would support that version. 
Typically, the extension would be available in that case anyway, but that isn't a requirement. If we have generated glad.c incorrectly, and that is the reason the feature is not working, how do I generate it correctly? I just went to glad.david.de, selected opengl 4.6 core and generated. Is that right? If not, what do I do? The only requirements for the above code to work are that you generated the GLAD loader for at least OpenGL 4.1 and for the GL_ARB_get_program_binary extension. If you generated for 4.6 and left out the extension, then glad will never look for that extension and GLAD_GL_ARB_get_program_binary will not be defined. Then you will miss out on the ability to use the extension if you work with GL contexts < 4.1, even if it would be supported by your GL implementation.
72,618,271
72,618,412
C++ class templates can be implicitly specialized and instantiated without angle brackets?
This actually compiles and works, but it's unclear to me why. #include <iostream> template <class T> class LikeA { T m_val{}; public: LikeA() = default; explicit LikeA(T iv): m_val(std::move(iv)) {} LikeA(LikeA<T> const &) = default; LikeA(LikeA<T> &&) noexcept = default; ~LikeA() noexcept = default; operator T const &() const { return m_val; } LikeA<T> &operator=(T nv) { m_val = std::move(nv); return *this; } LikeA<T> &operator=(LikeA<T> const &n) { m_val = n.m_val; return *this; } LikeA<T> &operator=(LikeA<T> &&n) { m_val = std::move(n.m_val); return *this; } }; template <class T> T f (LikeA<T> i) { return i; } int main() { std::cout << f(LikeA{3.1415927}) << '\n'; // No template argument? Not a syntax error? return 0; } I was previously calling f like f(3.1415927) before I let a lint checker talk me into making one of LikeAs constructors explicit. After that, of course, it couldn't implicitly convert the constant to a LikeA. If you just add braces (i.e. f({3.1415927}) the compiler still doesn't know what to select. In my full code the actual template argument is a lot more verbose, so just for grins I put the template name LikeA in front of the brace initializers, fully expecting a syntax error. To my surprise, it compiled and ran. Since this was MSVC, at first I though it was just Microsoft lulling me into a sense of false security. But I tested it against several compilers (gcc, clang, zigcc) in Compiler Explorer, and it works on all of them. How does C++ select the correct template specialization? On the surface, argument-dependent lookup would seem to be the answer, but notice there are no angle brackets, and the template doesn't have a default argument. I definitely remember this being a syntax error at some point in the past. (Function template specialization without templated argument doesn't answer this because OP actually specifies the arguments). 
The cppreference on function template arguments has a quick aside about omitting <> but this is a class template. The syntax here appears to require the angle brackets all the time.
Since C++17, the compiler can automatically deduce the template arguments of a class template by using class template argument deduction (CTAD). You can skip writing the template arguments explicitly if the constructors are able to deduce all template parameters. So you simply write int main() { std::vector v{2, 4, 6, 8}; // same as std::vector<int> std::list l{1., 3., 5.}; // same as std::list<double> std::pair p{false, "hello"}; // same as std::pair<bool, const char *> std::cout << typeid(v).name() << std::endl; std::cout << typeid(l).name() << std::endl; std::cout << typeid(p).name() << std::endl; } Under MSVC, it produces the following output class std::vector<int,class std::allocator<int> > class std::list<double,class std::allocator<double> > struct std::pair<bool,char const * __ptr64> Kindly refer to CTAD for more details.
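Applied to the LikeA class from the question, CTAD means the compiler synthesizes implicit deduction guides from LikeA's constructors, so LikeA{3.1415927} is read as LikeA<double>{3.1415927}. A minimal compile-time sketch (a trimmed copy of the question's class):

```cpp
#include <type_traits>
#include <utility>

// Trimmed copy of the question's class template.
template <class T>
class LikeA {
    T m_val{};
public:
    LikeA() = default;
    explicit LikeA(T iv) : m_val(std::move(iv)) {}
    operator T const &() const { return m_val; }
};

// CTAD (C++17): the compiler derives an implicit deduction guide from
// each constructor, roughly "template <class T> LikeA(T) -> LikeA<T>;",
// so LikeA{3.1415927} names the specialization LikeA<double>.
inline constexpr bool deduces_double =
    std::is_same_v<decltype(LikeA{3.1415927}), LikeA<double>>;
inline constexpr bool deduces_int =
    std::is_same_v<decltype(LikeA{42}), LikeA<int>>;
static_assert(deduces_double && deduces_int);
```

Note that explicit does not block CTAD here: LikeA{3.1415927} is direct-initialization, which explicit permits; explicit only forbids copy-initialization such as f(3.1415927) or LikeA<double> x = 3.1415927;.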
72,619,077
72,620,192
How to get the actual size of a protocol buffer message before serialization?
I defined a message in *.proto file and set the values using reflection. I need to find out how many bytes are parsed per second with SerializeToString() API. Is it possible to get the actual size of the message before calling SerializeToString?
It depends on which size you're interested in. If you want to know how large the serialized protobuf message returned by MessageLite::SerializeToString() is going to be you can use Message::ByteSizeLong(). Example: ExampleMessage msg; msg.set_example(12); std::size_t expectedSize = msg.ByteSizeLong(); std::string result; msg.SerializeToString(&result); assert(expectedSize == result.size()); This is also the way SerializeToString() calculates the size of the message internally to resize the std::string to have enough space for the entire message. On the other hand if you want to know how much memory the message currently requires in unserialized form you can use Message::SpaceUsedLong() - which will give you an estimate of that size. Example: ExampleMessage msg; msg.set_example(12); std::size_t approximateInMemorySize = msg.SpaceUsedLong();
72,619,779
72,621,553
Correct calling convention for exporting windows DLL functions for Excel VBA without mangled names
I am writing a DLL to export functions to be used in Excel VBA - I have found a way to be able to pass parameters in but with mangled names. If I set up without name mangling then I can not pass parameters and get a calling convention error I use the standard declaration for calling DLL exported functions from VBA: VBA Public Declare Function foo Lib "C:\ ... \helloworld.dll" (ByVal bar As Long) As Long My function is set up as so: helloworld.cpp extern "C" __declspec(dllexport) long foo(long bar){ return bar * 2; } I compile with cl.exe /LD helloworld.cpp using cl.exe (Microsoft (R) C/C++ Optimizing Compiler Version 19.29.30145 for x86) and dumplib/exports helloworld.dll yields Dump of file helloworld.dll File Type: DLL Section contains the following exports for helloworld.dll 00000000 characteristics FFFFFFFF time date stamp 0.00 version 1 ordinal base 1 number of functions 1 number of names ordinal hint RVA name 1 0 00001000 foo Summary 2000 .data 6000 .rdata 1000 .reloc A000 .text If I call the function from VBA VBA dim x as long x = foo(2) I get the VBA error Bad DLL calling convention (Error 49) If I add __stdcall to the function signature, extern "C" __declspec(dllexport) long __stdcall foo(long bar){ return bar * 2; } I get the following DLL export Dump of file helloworld.dll File Type: DLL Section contains the following exports for helloworld.dll 00000000 characteristics FFFFFFFF time date stamp 0.00 version 1 ordinal base 1 number of functions 1 number of names ordinal hint RVA name 1 0 00001000 _foo@4 Summary 2000 .data 6000 .rdata 1000 .reloc A000 .text And the function now works if I use the alias in the VBA declaration Public Declare Function foo Lib "C:\ ... \helloworld.dll" Alias "_foo@4" (ByVal bar As Long) As Long VBA dim x as long x = foo(2) 'foo sets x = 4 Is it possible to pass parameters to functions but not have a mangled/ordinal name?
Per Microsoft's documentation: https://learn.microsoft.com/en-us/office/client-developer/excel/developing-dlls When compilers compile source code, in general, they change the names of the functions from their appearance in the source code. They usually do this by adding to the beginning and/or end of the name, in a process known as name decoration. You need to make sure that the function is exported with a name that is recognizable to the application loading the DLL. This can mean telling the linker to associate the decorated name with a simpler export name. The export name can be the name as it originally appeared in the source code, or something else. The way the name is decorated depends on the language and how the compiler is instructed to make the function available, that is, the calling convention. The standard inter-process calling convention for Windows used by DLLs is known as the WinAPI convention. It is defined in Windows header files as WINAPI, which is in turn defined using the Win32 declarator __stdcall. A DLL-export function for use with Excel (whether it is a worksheet function, macro-sheet equivalent function, or user-defined command) should always use the WINAPI / __stdcall calling convention. It is necessary to include the WINAPI specifier explicitly in the function's definition as the default in Win32 compilers is to use the __cdecl calling convention, also defined as WINAPIV, if none is specified. You can tell the linker that a function is to be exported, and the name it is to be known by externally in one of several ways: Place the function in a DEF file after the EXPORTS keyword, and set your DLL project setting to reference this file when linking. Use the __declspec(dllexport) declarator in the function's definition. Use a #pragma preprocessor directive to send a message to the linker. 
Although your project can use all three methods and your compiler and linker support them, you should not try to export one function in more than one of these ways. For example, suppose that a DLL contains two source code modules, one C and one C++, which contain two functions to be exported, my_C_export and my_Cpp_export respectively. For simplicity, suppose that each function takes a single double-precision numerical argument and returns the same data type. The alternatives for exporting each function by using each of these methods are outlined in the following sections. ... The article then goes on to provides examples of each method. In your case, since you are already doing the 2nd method and not getting the result you want, you will have to employ the 1st or 3rd method as well.
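For completeness, here is a sketch of the first (DEF-file) method applied to the question's example; the file name is illustrative. It keeps the __stdcall calling convention VBA expects while exporting the plain name foo:

```text
; helloworld.def (hypothetical file name)
LIBRARY helloworld
EXPORTS
    foo
```

Built with something like cl.exe /LD helloworld.cpp /link /DEF:helloworld.def, dumpbin /exports should then list foo rather than _foo@4, so the VBA Declare statement needs no Alias clause.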
72,620,283
72,620,314
How does std::is_polymorphic identify polymorphism?
I tried to understand the working of std::is_polymorphic in C++. This is defined in type_traits.h: template <class _Ty> struct is_polymorphic : bool_constant<__is_polymorphic(_Ty)> {}; // determine whether _Ty is a polymorphic type template <class _Ty> _INLINE_VAR constexpr bool is_polymorphic_v = __is_polymorphic(_Ty); I am not able to find the source code for __is_polymorphic. Could someone help me understand how __is_polymorphic works?
__is_polymorphic is a reserved keyword, so it's built-in to the compiler i.e. it's not implemented in library, it's implemented directly in the compiler. So, there is no source code to see, unless you look at the compiler's source code. On cppreference, you can see a possible implementation: namespace detail { template <class T> std::true_type detect_is_polymorphic( decltype(dynamic_cast<const volatile void*>(static_cast<T*>(nullptr))) ); template <class T> std::false_type detect_is_polymorphic(...); } // namespace detail template <class T> struct is_polymorphic : decltype(detail::detect_is_polymorphic<T>(nullptr)) {}; This works by using the fact that dynamic_cast requires a polymorphic type in order to compile. detect_is_polymorphic is an overloaded function that uses SFINAE to check if dynamic_cast is valid on T.
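A quick compile-time check of the trait itself (a type counts as polymorphic if it declares or inherits at least one virtual function):

```cpp
#include <type_traits>

struct Plain { void f(); };                   // no virtual members
struct Poly { virtual ~Poly() = default; };   // declares a virtual function
struct Derived : Poly {};                     // inherits one, so also polymorphic

static_assert(!std::is_polymorphic_v<Plain>);
static_assert(std::is_polymorphic_v<Poly>);
static_assert(std::is_polymorphic_v<Derived>);
```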
72,620,480
72,620,573
bubble sort not sorting properly
I'm trying to sort this randomly generated array, but my output is incorrect; it looks like this: sorted round: 0: 87 sorted round: 0: 78 sorted round: 0: 16 sorted round: 0: 94 sorted round: 0: 36 sorted round: 0: 93 sorted round: 0: 50 sorted round: 0: 22 sorted round: 0: 63 sorted round: 0: 28 sorted round: 0: 91 sorted round: 0: 60 sorted round: 0: 64 sorted round: 0: 27 sorted round: 0: 41 sorted round: 0: 73 sorted round: 0: 37 sorted round: 0: 12 sorted round: 0: 69 84 78 16 87 36 93 50 22 63 28 91 60 64 27 41 73 37 12 69 0 I've been at this for hours and haven't been able to figure out what I'm doing wrong; any help is appreciated. #include <stdio.h> #include <iostream> #include <algorithm> // for std::find #include <iterator> // for std::begin, std::end using namespace std; int main() { int random_array[20]; int i; int j; random_array[0]=rand()%(100-1+1)+1; for (i=1; i<20;) { j= rand()%(100-1+1)+1; bool exists; exists = find(begin(random_array), end(random_array), j) != end(random_array); if(!exists) { random_array[i]=j; i=i+1; } else { ; } } int size=20; i=0; j=0; int k =0; for (i; i < size; i++) { for (j; j < size - i; j++) { for(k; k<20; k++) { cout<<"sorted round: "<<j<<": "<<random_array[k]<<endl; } if (random_array[j] > random_array[j + 1]) { swap(random_array[j], random_array[j + 1]); } else { ; } } } for (i=0; i<20;i++) { cout<<random_array[i]<<endl; } return 0; } The output should be a sorted array called random_array, and the issue I'm running into starts on this line of code: int size=20; i=0; j=0; int k =0; for (i; i < size; i++) {
In these nested for loops for (i; i < size; i++) { for (j; j < size - i; j++) { for(k; k<20; k++) { cout<<"sorted round: "<<j<<": "<<random_array[k]<<endl; } if (random_array[j] > random_array[j + 1]) { swap(random_array[j], random_array[j + 1]); } else { ; } } } neither variable j nor the variable k are reset to 0 in each iteration of the most outer loop for (i; i < size; i++) { At least you should write for (i = 0; i < size; i++) { for (j = 0; j < size - i; j++) { for(k = 0; k<20; k++) { cout<<"sorted round: "<<j<<": "<<random_array[k]<<endl; } if (random_array[j] > random_array[j + 1]) { swap(random_array[j], random_array[j + 1]); } else { ; } } } Pay attention to that the program invokes undefined behavior in the first iteration of the most outer loop that is when i is equal to 0 because in this if statement if (random_array[j] > random_array[j + 1]) { the expression random_array[j + 1] accesses memory beyond the array when j is equal size - 1. It is better to start the inner for loop with 1 for (j = 1; j < size - i; j++) { and in if statement to write if (random_array[j-1] > random_array[j]) {
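Putting those fixes together, a minimal corrected bubble sort might look like this (extracted into a function for clarity; the function name is mine, not the OP's):

```cpp
#include <cstddef>
#include <utility>

// Minimal corrected bubble sort: j restarts at 0 on every outer pass,
// and the inner bound stops at size - i - 1 so the comparison of
// a[j] with a[j + 1] never reads past the end of the array.
void bubble_sort(int *a, std::size_t size) {
    for (std::size_t i = 0; i + 1 < size; i++) {
        for (std::size_t j = 0; j + 1 < size - i; j++) {
            if (a[j] > a[j + 1]) {
                std::swap(a[j], a[j + 1]);
            }
        }
    }
}
```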
72,620,535
72,620,643
\K operator does not work in std::regex_replace regex
The value of name is: regex101: build, test, and debug regex - 3 running windows I'm trying to remove everything from the last - onward std::wstring name = accvector[i].name; std::wregex regexp(L".*\\K( -\\s+\\d+\\srunning.*$)"); name = std::regex_replace(name, regexp, L""); nothing is being replaced, what's wrong?
You should keep in mind that \K operator is not supported by ECMAScript regex flavor (used by default in std::wregex/std::regex). It is mainly supported by PCRE, Boost, Onigmo, Python PyPi regex library. Here, you may use (.*) -\s+\d+\srunning.* The replacement pattern should be $1. See the C++ demo: #include <iostream> #include <regex> int main() { std::wstring name = L"regex101: build, test, and debug regex - 3 running windows"; std::wregex regexp(LR"((.*) -\s+\d+\srunning.*)"); name = std::regex_replace(name, regexp, L"$1"); std::wcout << name; } Output: Note you do not need the $ at the end of the pattern, .* matches till the end of the line/string.
72,622,304
72,622,612
Can I use std::copy to copy arrays allocated on heap?
I am using Qt6, C++ 11, I declare two 2d arrays of dynamic sizes: int **A; int **B; A = new int*[rowCount](); for(int i = 0; i < rowCount; i++) { A[i] = new int[colCount](); //Same for B } // Then feed A with some incoming values and I want to copy all A's values to B, I know that using std::copy is faster and cleaner than using for -loop, so I tried: std::copy(&A[0][0], &A[0][0]+rowCount*colCount,&B[0][0]); However I got error message: code: 0xc0000005: read access violation at: 0x0, flags=0x0 (first chance) Looks like I am trying to access memories not allocated? But I have already allocated two arrays on the heap Why I don't use 2d vector or list is that I need to process large amount of data and accessing array by index is O(1), if you think this is caused by my compiler I can provide make file and project file snippets. Thank you very much Edit: @Miles Budnek pointed out that std::vector and raw C++ array have similar indexing performances (both O(1)). I am handing large amount of data, the way I store and read data is basically indexing. I have tested std::vector and C++ array indexing performances under MSVC 2019 64-bit, C++ 11 using Qt creator and I found they are similar(std::vector even a little bit faster), if under most environments(like various compilers) std::vector and raw C++ array are both O(1), I would say std::vector is safer and more convenient than C++ raw arrays. But it looks like QVector indexing speed is much lower?
You cannot use std::copy to copy your array as a single chunk because you do not have a single array. What you have is a pointer to the first element of an array of pointers to the first element of arrays of ints. That is, assuming rowCount and colCount are both 3, you have this: A ┌───┐ │ │ │ │ │ │ │ │ └─┼─┘ │ ▼ ┌───┐ │ │ ┌───┬───┬───┐ │ ──┼───────►│ 0 │ 0 │ 0 │ │ │ └───┴───┴───┘ ├───┤ │ │ ┌───┬───┬───┐ │ ──┼───────►│ 0 │ 0 │ 0 │ │ │ └───┴───┴───┘ ├───┤ │ │ ┌───┬───┬───┐ │ ──┼───────►│ 0 │ 0 │ 0 │ │ │ └───┴───┴───┘ └───┘ As you can see, there is no contiguous chunk of elements for std::copy to copy. If you want to be able to efficiently copy (and access) elements of your array, you should allocate a single array that is rowCount*colCount long. If you want nice syntax you could wrap it up in a class and overload the () or [] operator to make the access nicer. For example: class Matrix { private: int rowSize_; std::vector<int> storage_; public: Matrix(int rowCount, int colCount) : rowSize_{colCount}, storage_(rowCount * colCount) {} int& operator()(int row, int col) { return storage_[row * rowSize_ + col]; } }; int main() { Matrix mat{3, 3}; mat(1, 2) = 42; // copy with simple copy construction Matrix mat2 = mat; // or copy-assignment mat2 = mat; }
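To make the single-call std::copy from the question valid, each matrix has to be one contiguous allocation. A minimal sketch (the function name and the caller-owns-the-result convention are illustrative, not from the question):

```cpp
#include <algorithm>

// A is one contiguous block of rows * cols ints, so the range
// [A, A + rows * cols) is valid and std::copy can copy it in one call.
int *copy_flat(const int *A, int rows, int cols) {
    int *B = new int[rows * cols]();   // value-initialized, like the question's new int[n]()
    std::copy(A, A + rows * cols, B);
    return B;                          // caller owns B and must delete[] it
}
```

Element (r, c) is then A[r * cols + c], which is exactly what the Matrix class in the answer wraps up behind operator().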
72,622,310
72,623,613
Should I clean up beast::flat_buffer when I see errors on on_read?
http_client_async_ssl class session : public std::enable_shared_from_this<session> { ... beast::flat_buffer buffer_; // (Must persist between reads) http::response<http::string_body> res_; ... } void on_write(beast::error_code ec, std::size_t bytes_transferred) { if (ec) { fail(ec, "write"); return try_again(); } // Receive the HTTP response http::async_read( stream_, buffer_, res_, beast::bind_front_handler(&session::on_read, shared_from_this())); } void on_read(beast::error_code ec, std::size_t bytes_transferred) { if (ec) { fail(ec, "read"); return try_again(); } // Step 1: process response // const auto &body_data = res_.body().data(); user_parse_data(net::buffers_begin(body_data), net::buffers_end(body_data)); // Step 2: clean up buffer_ // buffer_.consume(buffer_.size()); // clean up buffer_ after finishing reading it // Step 3: continue to write ... } In the above implementation, I ONLY clean up the buffer_ when I finish parsing the data successfully. Question> Should I clean up the buffer_ when I experience an error on the on_read too? void on_read(beast::error_code ec, std::size_t bytes_transferred) { if (ec) { // clean up buffer_ buffer_.consume(buffer_.size()); // Should we do the cleanup here too? fail(ec, "read"); return try_again(); } // Step 1: process response // const auto &body_data = res_.body().data(); user_parse_data(net::buffers_begin(body_data), net::buffers_end(body_data)); // Step 2: clean up buffer_ // buffer_.consume(buffer_.size()); // Step 3: continue to write ... }
// Should we do the cleanup here too? That's asking the wrong question entirely. One obvious question that comes first is "should we cleanup the read buffer at all". And the more important question is: what do you do with the connection? The buffer belongs to the connection, as it represents stream data. The example you link always closes the connection. So the buffer is irrelevant after receiving the response - since the connection becomes irrelevant. Note that the linked example doesn't consume on the buffer either. Should You Cleanup At All? You should not cleanup after http::read! The reason is that http::read already consumes any data that was parsed as part of the response message. Even if you expect to read more messages from the same connection (e.g. HTTP pipelining), you need to start the next http::read with the same buffer since it might already contain (partial) data for the subsequent message. What About Errors? If you have an IO/parse error during HTTP transmissions, I expect in most circumstances the HTTP protocol specification will require you to shut down the connection. There is no "try_again" in HTTP/1. Once you've lost the thread on stream contents, there is no way you can recover to a "known state". Regardless, I'd always recommend shutting down failed HTTP sessions, because not doing so opens up to corrupted messages and security vulnerabilities.
72,622,349
72,622,399
Trying to copy lines from text file to array of strings (char**)
This is my code for allocating memory for the array of strings: FileReader::FileReader() { readBuffer = (char**)malloc(100 * sizeof(char*)); for (int i = 0; i < 100; i++) { readBuffer[i] = (char*)malloc(200 * sizeof(char)); } } I'm allocating 100 strings for 100 lines, then allocating 200 chars for each string. This is my code for reading the lines: char** FileReader::ReadFile(const char* filename) { int i = 0; File.open(filename); if (File.is_open()) { while (getline(File, tmpString)) { readBuffer[i] = (char*)tmpString.c_str(); i++; } return readBuffer; } } and for printing: for (int i = 0; i <= 5; i++) { cout << fileCpy[i]; } this is the output to terminal: Picture As you can see, it just repeats the last line of the file, while the file just reads: This is test line 2 line 3 line 4 line 5 Any idea what's going on? Why aren't the lines copying correctly?
Replace readBuffer[i] = (char*)tmpString.c_str(); with strcpy(readBuffer[i], tmpString.c_str()); Your version just saves a pointer to tmpString in your array. When tmpString changes, that pointer points at the new contents of tmpString (and that's just the best possible outcome). strcpy, however, actually copies the characters of the string, which is what you want. Of course, I'm sure it doesn't need saying, but you can avoid all the headache and complication like this vector<string> readBuffer; This way there are no more pointer problems, no more manual allocation or freeing of memory, and you aren't limited to 100 lines or 200 characters per line. I'm sure you have a reason for doing things the hard way, but I wonder if it's a good reason.
72,622,621
72,634,790
How do I capture(trap) a mouse in a window in c++?
I am writing a tile map editor in SFML and C++. I have been having all sorts of troubles with the mouse. I am using the built in SFML Mouse:: static functions and recently managed to get a custom cursor moving on the screen and pointing accurately to a tile by doing as follows:` Sprite cursor; bool focus = false; RenderWindow window(VideoMode(512, 288), "Tilemap editor"); window.setFramerateLimit(60); Texture cursorTexture; if(!cursorTexture.loadFromFile("Graphics/Cursor.png")) { std::cout << "Failed to load cursor texture\n"; return 0; } cursor.setTexture(cursorTexture); Mouse::setPosition(mousePos); While(window.isOpen()) { window.setMouseCursorVisible(focus); if(Mouse::getPosition().x != lastMousePos.x) { mousePos.x = mousePos.x + (Mouse::getPosition().x - lastMousePos.x); } if(Mouse::getPosition().y != lastMousePos.y) { mousePos.y = mousePos.y + (Mouse::getPosition().y - lastMousePos.y); } cursor.setPosition(mousePos.x, mousePos.y); lastMousePos = Mouse::getPosition(); window.clear(); window.draw(cursor) window.display() } The built-in Mouse functions only display relativity to the desktop or the window and as I am using this app in a small window in which my view moves, I can't use either. The solution above moves a cursor independent of the desktop and with the ability to move the cursor if and when I want to move my view. The issue is that my mouse will move off the side of the app when I try to click items in the top left corner. Is there a good cross-platform (I'm on Linux BTW) way to trap the mouse inside of the window unless I enter a keystroke (like a VM window)? Also, is there a better way to do cross-platform mouse support in general? SFML kinda sucks. (Code obviously needs to be in a main function and the namespace must be sf with SFML/Graphics.hpp included)
There is already a method for that. void setMouseCursorGrabbed (bool grabbed) // Grab or release the mouse cursor. You can also use these methods to convert between pixel (window) coordinates and world coordinates. Vector2f mapPixelToCoords (const Vector2i &point) const // Convert a point from target coordinates to world coordinates, using the current view. Vector2f mapPixelToCoords (const Vector2i &point, const View &view) const // Convert a point from target coordinates to world coordinates. Vector2i mapCoordsToPixel (const Vector2f &point) const // Convert a point from world coordinates to target coordinates, using the current view. Vector2i mapCoordsToPixel (const Vector2f &point, const View &view) const // Convert a point from world coordinates to target coordinates. sf::RenderWindow Class Reference
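To see what mapPixelToCoords computes, here is the underlying arithmetic for an unrotated view as a standalone sketch (plain structs, not SFML types; all names here are illustrative): the pixel position is scaled by view_size/window_size and offset by the view's top-left corner.

```cpp
struct Vec2 { float x, y; };

// For an unrotated view centered at view_center with extent view_size,
// a pixel maps to world space by scaling and shifting to the view's
// top-left corner (center - size / 2).
Vec2 pixel_to_world(Vec2 pixel, Vec2 window_size,
                    Vec2 view_center, Vec2 view_size) {
    return {
        view_center.x - view_size.x / 2 + pixel.x * view_size.x / window_size.x,
        view_center.y - view_size.y / 2 + pixel.y * view_size.y / window_size.y,
    };
}
```

With the default view (center at half the window size, view size equal to window size), this reduces to the identity, which is why pixel and world coordinates only diverge once you move or zoom the view.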
72,622,767
72,622,798
"unresolved external symbol" when including a single-header library
When I try to include any single-header library in my project (here I am using HTTPRequest), it keeps giving me the LNK2019 error. This is my code: #include "HTTPRequest.hpp" void main() { http::Request request{ "http://test.com/test" }; const auto response = request.send("GET"); std::cout << std::string{ response.body.begin(), response.body.end() } << '\n'; } Is this an issue with my project setup because these libraries are meant to be a single h/hpp file?
Those error messages refer to socket API functions which the HTTP library is using. You need to link your project against your platform's socket library, i.e. ws2_32.lib on Windows, etc.
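On MSVC, one common way to do that is a linker pragma directly in the source (with MinGW or CMake you would instead pass -lws2_32 on the link line or use target_link_libraries(yourapp ws2_32)):

```cpp
// MSVC-specific: ask the linker for the Winsock import library so the
// socket symbols the header-only HTTP library uses (socket, connect,
// send, recv, ...) can be resolved at link time.
#if defined(_WIN32)
#pragma comment(lib, "ws2_32.lib")
#endif
```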
72,622,921
72,624,515
Python print() corrupting memory allocated by ctypes
I'm working on some code to act as a Python wrapper for a rather large C++ project. I have created a class wrapper with the associated function wrappers which make direct calls to the DLL. Since it is a C++ project, it needs a C wrapper as well, which is implemented and working correctly. const char* MyClass::GetName() { printf("Name at %p\n", &Name); printf("Name is %s\n", Name); return Name; } My Python class d is constructed using the Open() method. There is a C++ function GetName() which simply returns the value of Name. I modified this function in the C++ source to print out the address and value of the Name variable for debugging. The get_name() function in Python is the wrapper. ??.Open.restype = POINTER(c_int) ??.GetName.argtypes = (POINTER(c_int),) ??.GetName.restype = c_char_p d = MyClass() d.get_name() print('hi') d.get_name() This outputs the following: Name at 0x80012e598 Name is device_name hi Name at 0x80012e598 Name is hi Any other amount of code I have tested so far maintains "Name is device_name" but when it comes to print the value comes back empty or as the last thing passed to print() (it is empty when the last thing passed was large). It seems like the buffer used by print() overlaps with the allocated memory for the object in C++. If I run the script with the -u flag (unbuffered outputs), Name it is empty every single time: Name at 0x800111368 Name is device_name hi Name at 0x800111368 Name is Since the C++ is printing out the address of the variable, I know it hasn't changed, which means Python is modifying it when it shouldn't be allowed to. What steps should I take to further debug/resolve this? Thank you in advance. EDIT I worked on a minimal reproducible example and discovered the cause of the issue, but do not understand why. It was a part of the init for my Python class. The argument is a string Name which needs to be converted to bytes() to be passed through ctypes. I will show one working example and one breaking example. 
What is the difference between the two, causing one to work and the other not? # Create working class class MyWorkingClass(): def __init__(self, name): self.obj = lib.MyClass_Open(name) def get_name(self): return lib.MyClass_GetName(self.obj).decode('utf-8') # This part works name = bytes('my_name', 'utf-8') working = MyWorkingClass(name) for i in range(5): print(working.get_name()) And this one gets the wrong data back: # Create breaking class class MyBreakingClass(): def __init__(self, name): name = bytes(name, 'utf-8') self.obj = lib.MyClass_Open(name) def get_name(self): return lib.MyClass_GetName(self.obj).decode('utf-8') # This part doesn't work breaking = MyBreakingClass('my_name') for i in range(5): print(breaking.get_name()) In both cases, the same exact name should be (from my understanding anyway) getting passed to MyClass_Open(), but clearly that is not the case. Why?
It appears the C++ code (not shown) is storing a pointer to name being passed. In the breaking case, the bytes object whose internal buffer that pointer references goes out of scope, freeing the buffer and creating undefined behavior. In the OP's original problem, it is likely the allocation for 'hi' ended up at the same address, but anything could happen due to UB. Here's a minimal example: test.cpp - implied implementation from description #ifdef _WIN32 # define API __declspec(dllexport) #else # define API #endif class MyClass { const char* _name; public: MyClass(const char* name) : _name(name) {} // store pointer during construction const char* GetName() const { return _name; } // access pointer later }; extern "C" { API MyClass* MyClass_Open(const char* name) { return new MyClass(name); // leaks in this example } API const char* MyClass_GetName(MyClass* p) { return p->GetName(); } } test.py - combined examples and made complete import ctypes as ct lib = ct.CDLL('./test') lib.MyClass_Open.argtypes = ct.c_char_p, lib.MyClass_Open.restype = ct.c_void_p lib.MyClass_GetName.argtypes = ct.c_void_p, lib.MyClass_GetName.restype = ct.c_char_p # Create working class class MyWorkingClass(): def __init__(self, name): self.obj = lib.MyClass_Open(name) def get_name(self): return lib.MyClass_GetName(self.obj) # This part works # bytes object is created here # "name" is the only reference but it is still in scope during get_name() below name = bytes('my_name', 'utf-8') working = MyWorkingClass(name) for i in range(5): print(working.get_name()) # Create breaking class class MyBreakingClass(): def __init__(self, name): # bytes object is created here # "name" is the only reference and goes out of scope when __init__ returns name = bytes(name, 'utf-8') self.obj = lib.MyClass_Open(name) def get_name(self): return lib.MyClass_GetName(self.obj) # This part doesn't work breaking = MyBreakingClass('my_name') for i in range(5): print(breaking.get_name()) # garbage output Output: 
b'my_name' b'my_name' b'my_name' b'my_name' b'my_name' b'\xf0' # could be anything due to UB b'\xf0' b'\xf0' b'\xf0' b'\xf0'
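If you control the C++ side, the complementary fix is to make the class copy the name instead of retaining the caller's pointer. A sketch (the "Safe"-suffixed names are made up for illustration, not the OP's actual library):

```cpp
#include <string>

// The class owns a std::string copy of the name, so the caller's buffer
// (e.g. a temporary Python bytes object) may be freed right after Open().
class MyClassSafe {
    std::string _name;  // owns its own copy of the bytes
public:
    explicit MyClassSafe(const char* name) : _name(name) {}
    const char* GetName() const { return _name.c_str(); }
};

extern "C" {
    MyClassSafe* MyClassSafe_Open(const char* name) { return new MyClassSafe(name); }
    const char* MyClassSafe_GetName(MyClassSafe* p) { return p->GetName(); }
    void MyClassSafe_Close(MyClassSafe* p) { delete p; }
}

// Mutating (or freeing) the caller's buffer after Open no longer matters.
inline bool demo_copies_name() {
    char buf[] = "my_name";
    MyClassSafe* p = MyClassSafe_Open(buf);
    buf[0] = 'X';  // simulate the Python bytes object going away
    bool ok = std::string(MyClassSafe_GetName(p)) == "my_name";
    MyClassSafe_Close(p);
    return ok;
}
```

With this change, both of the Python classes in the question would behave identically, because the lifetime of the bytes object no longer matters past the call.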
72,623,633
72,623,659
Error in map with 2 classes: "binary '<': 'const _Ty' does not define this operator or a conversion to a type acceptable to the predefined operator"
I'm having a weird error while declaring this map: std::map<LoggedUser, GameData> m_players; I've looked at many possible solutions, but couldn't find anything that works. I can't find the problem that causes this. The error: C2676 binary '<': 'const _Ty' does not define this operator or a conversion to a type acceptable to the predefined operator Game (where the map is declared): #pragma once #include "LoggedUser.h" #include "Question.h" #include "GameData_struct.h" #include <vector> #include <map> class Game { private: std::vector<Question> m_questions; std::map<LoggedUser, GameData> m_players; public: void add_question(Question question); void add_into_m_players(LoggedUser logged_user, GameData game_data); Question getQuestionForUser(LoggedUser logged_user); bool submitAnswer(LoggedUser logged_user, std::string answer); void removePlayer(LoggedUser logged_user); }; LoggedUser: #pragma once #include <string> class LoggedUser { private: std::string _m_username; int id; public: LoggedUser(); LoggedUser(std::string username, int id); std::string get_username(); int get_id(); }; GameData: #pragma once #include <iostream> #include <string> #include "Question.h" //Game data struct struct GameData { Question currentQuestion; unsigned int correctAnswerCount; unsigned int wrongAnswerCount; unsigned int averangeAnswerTime; }; Question: #pragma once #include <string> #include <vector> //Question class Question { private: std::string m_question; std::vector<std::string> m_possibleAnswers; public: std::string getQuestion(); std::string getPossibleAnswers(); std::string getCorrentAnswer(); };
std::map is a sorted container. It uses operator< by default to compare keys for sorting and matching (you can optionally specify your own comparator to override this behavior). The error message is complaining that your LoggedUser class does not implement an operator< for comparing the map's keys.
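A minimal sketch of such an ordering, assuming LoggedUser's fields from the question (username and id); std::tie gives a correct strict weak ordering with little effort:

```cpp
#include <map>
#include <string>
#include <tuple>

// Sketch of a key type with the strict-weak-ordering operator< that
// std::map requires. Field names are assumed from the question's LoggedUser.
struct LoggedUserKey {
    std::string username;
    int id;

    bool operator<(const LoggedUserKey& other) const {
        // std::tie compares id first, then username, lexicographically.
        return std::tie(id, username) < std::tie(other.id, other.username);
    }
};

inline bool key_less(const char* u1, int id1, const char* u2, int id2) {
    return LoggedUserKey{u1, id1} < LoggedUserKey{u2, id2};
}

inline int demo_map_size() {
    std::map<LoggedUserKey, int> players;
    players[LoggedUserKey{"alice", 1}] = 10;
    players[LoggedUserKey{"bob", 2}] = 20;
    players[LoggedUserKey{"alice", 1}] = 30;  // same key: overwrites, no new entry
    return static_cast<int>(players.size());
}
```

Any total order works, as long as it is consistent; comparing all fields via std::tie is the usual low-effort way to get one.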
72,623,761
72,623,825
Writing to a file using FILE* and fprintf in c++ won't work as expected
Not to bother anyone, but I have run into an issue with a class of mine: when I write to a file with FILE* and the fprintf() function, I don't get any text in the text file that I created. I have searched all over YouTube and I don't know what I'm doing wrong, because my code is the same. Here's a copy of my .c++ and .h code: main.c++: #include <iostream> #include "../include/include.h" using namespace std; int main() { write_file wf("test.txt"); wf.write_line("Hello, world!"); return 0; } include.h: #ifndef INCLUDE_H #define INCLUDE_H #include <iostream> class write_file { public: write_file(const char *file_name) { FILE* fp = fopen(file_name, "w"); } void write_line(const char *line) { fprintf(fp, "%s\n", line); } void close() { fclose(fp); } private: FILE* fp; }; #endif /* include.h */
Main issue: To fix your issue, you have to remove the local fp variable that shadows the class member. When the compiler sees FILE *fp in your method, it uses a separate variable and is not referring to the one in your class instance. Change the method definition to: write_file(const char *file_name) { fp = fopen(file_name, "w"); } Additional points I really ought to comment on: You never call close. Mishandling resources is one of the most common mistakes in C & CPP. Make sure to implement a destructor that calls close. If you do that, make sure to improve the close implementation to handle multiple calls. Consider using standard CPP classes for interacting with files, specifically ifstream and ofstream. Those handle a lot of the fuss automagically for you. Please don't use .c++ as a file extension. This is really odd. Most CPP developers use .cpp or .cc for CPP source files. I might be saying that because I'm not a gen-Z kid, but please don't search Youtube for programming tutorials. Searching text-based sources is so much more efficient. Learn how to use cplusplus or cppreference instead.
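Putting the shadowing fix and the destructor/double-close advice together, a hedged sketch of how the question's class might look with RAII (the demo helper at the end is only for illustration):

```cpp
#include <cstdio>
#include <cstring>

// RAII version of the question's write_file: no shadowing local in the
// constructor, a destructor that releases the FILE*, and an idempotent close().
class write_file {
public:
    explicit write_file(const char* file_name)
        : fp(std::fopen(file_name, "w")) {}  // assigns the member, not a local

    ~write_file() { close(); }  // always released, even on early returns

    write_file(const write_file&) = delete;  // exactly one owner per FILE*
    write_file& operator=(const write_file&) = delete;

    bool ok() const { return fp != nullptr; }

    void write_line(const char* line) {
        if (fp) std::fprintf(fp, "%s\n", line);
    }

    void close() {
        if (fp) { std::fclose(fp); fp = nullptr; }  // safe to call twice
    }

private:
    std::FILE* fp;
};

// Write a line, let the destructor flush/close, then read the file back.
inline bool demo_roundtrip(const char* path) {
    { write_file wf(path); wf.write_line("Hello, world!"); }
    std::FILE* f = std::fopen(path, "r");
    if (!f) return false;
    char buf[64] = {0};
    std::fgets(buf, sizeof buf, f);
    std::fclose(f);
    return std::strncmp(buf, "Hello, world!", 13) == 0;
}
```

Deleting copy operations prevents two objects from closing the same FILE* twice; an std::ofstream member would make most of this boilerplate unnecessary.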
72,623,896
72,638,056
Java foreign function interface (FFI) interop with C++?
As of Java 18 the incubating foreign function interface doesn't appear to have a good way to handle C++ code. I am working on a project that requires bindings to C++ and I would like to know how to avoid creating a thunk library in C. One of the C++ classes looks something like this: namespace library { typedef uint8_t byte; class CppClass { public: static constexpr const char* DefaultArgument = "default"; CppClass(const std::string& argument = DefaultArgument); virtual ~CppClass(); bool doStuff(); bool handleData(std::vector<byte>* data); private: std::unique_ptr<InternalType> internalState; }; } I would like to create a Java class that looks something like the following to mirror that (with error checking left out): public final class CppClass implements AutoCloseable { public static final String DefaultArgument = "default"; private static final MethodHandle NEW; private static final MethodHandle FREE; private static final MethodHandle DO_STUFF; private static final MethodHandle HANDLE_DATA; static{ var binder = Natives.getBinder(); NEW = binder.bind("(mangled constructor)", ValueLayout.ADDRESS, ValueLayout.ADDRESS); FREE = binder.bindVoid("(mangled destructor)", ValueLayout.ADDRESS); DO_STUFF = binder.bind("(mangled doStuff)", ValueLayout.JAVA_BYTE, ValueLayout.ADDRESS); HANDLE_DATA = binder.bind("(mangled handleData)", ValueLayout.JAVA_BYTE, ValueLayout.ADDRESS, ValueLayout.ADDRESS, ValueLayout.JAVA_LONG); } private final MemorySegment pointer; public CppClass() { this(DefaultArgument); } public CppClass(String argument) { try(var scope = MemoryScope.newConfinedScope()) { var allocator = MemoryAllocator.nativeAllocator(scope); pointer = (MemoryAddress)NEW.invokeExact( allocator.allocateUtf8String(argument) ); } } @Override public void close() { FREE.invokeExact(pointer); } public boolean doStuff() { return (byte)DO_STUFF.invokeExact(pointer) != 0; } public boolean handleData(MemorySegment segment) { return (byte)HANDLE_DATA.invokeExact(pointer,
segment.address(), segment.byteSize()) != 0; } } where Binder looks something like this: public interface Binder { MethodHandle bind(String name, FunctionDescriptor desc); MethodHandle bind(String name, MemoryLayout result, MemoryLayout... args); MethodHandle bindVoid(String name, MemoryLayout... args); } I am not sure what parts of this are correct. My biggest implementation questions are: What is the correct way to call constructors and destructors? What is the correct way to call methods? What is the correct way to handle the std types (std::string, std::vector) Do C++ compilers add the default argument values at compile time, or do they generate multiple methods?
So the general answer seems to be "just create a shim library" because the C++ ABI is far more fluid and not supported by Java. As for the answers at the end: You just do it like normal, but with void* pointers Pass in this as a void* and treat it as an opaque pointer Handled automatically in the shim, from what I gather std::string makes a copy and has an internal reference count The default arguments are handled at compile time
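For reference, a sketch of what such a C shim over the question's CppClass could look like; the CppClass body here is a made-up stand-in so the example is self-contained, and in reality it would come from the C++ library:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical stand-in for the question's library::CppClass.
namespace library {
    class CppClass {
    public:
        explicit CppClass(const std::string& argument) : arg_(argument) {}
        bool doStuff() { return !arg_.empty(); }
        bool handleData(std::vector<std::uint8_t>* data) { return data && !data->empty(); }
    private:
        std::string arg_;
    };
}

// The shim: every entry point is extern "C" (unmangled, stable name), the
// object travels as an opaque void*, and only C-layout types cross the
// boundary; the std:: types are built and consumed on the C++ side.
extern "C" {
    void* CppClass_new(const char* argument) {
        return new library::CppClass(argument);  // std::string copies the bytes
    }
    void CppClass_free(void* self) {
        delete static_cast<library::CppClass*>(self);
    }
    int CppClass_doStuff(void* self) {
        return static_cast<library::CppClass*>(self)->doStuff() ? 1 : 0;
    }
    int CppClass_handleData(void* self, const std::uint8_t* bytes, std::size_t len) {
        std::vector<std::uint8_t> v(bytes, bytes + len);  // copy into the std type
        return static_cast<library::CppClass*>(self)->handleData(&v) ? 1 : 0;
    }
}

inline bool demo_shim() {
    void* p = CppClass_new("default");
    std::uint8_t bytes[3] = {1, 2, 3};
    bool ok = CppClass_doStuff(p) == 1 && CppClass_handleData(p, bytes, 3) == 1;
    CppClass_free(p);
    return ok;
}
```

These are exactly the names a Java binder could look up without worrying about name mangling or the C++ ABI.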
72,623,961
72,623,981
What is the difference between calling foo() and ::foo() within a C++ class member function?
I am looking at someone else's C++ code (note I am not fluent in C++). Within the class, there is this member function: void ClassYaba::funcname() { ... ::foo(); ... } There is no member function within that class's namespace named foo, but aside from that, what is the difference between ::foo() and foo() (no leading colons)?
When you call foo(); C++ will search for something named foo in the following order: Is there something with this name declared within the class? Is there something with this name in a base class? Is there something with that name in the namespace in which the class was declared? (And, if not, is there something with that name in the parent namespace of that namespace, etc.?) Finally, if nothing was found, is there something with that name in the global namespace? On the other hand, writing ::foo(); will make C++ look for something purely in the global namespace. If there is nothing named foo in your class, any of its base classes, or any namespaces foo was declared inside of, then there's no difference between the two approaches.
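A small example with hypothetical names showing the two lookups side by side:

```cpp
#include <string>

// ::value() lives in the global namespace; Widget has its own value().
inline std::string value() { return "global"; }

struct Widget {
    std::string value() const { return "member"; }

    std::string unqualified() const { return value(); }  // finds the member first
    std::string qualified() const { return ::value(); }  // forced to the global one
};
```

The leading :: is most useful precisely when a member (or base-class member) would otherwise hide the global name.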
72,624,023
72,624,722
Implementing task primitives based on asio::awaitable
I'm looking for a way to implement task primitives like whenAll, whenAny, taskFromResult on top of (boost) asio's awaitable<T> coroutine type. What I've got so far is a function that creates an awaitable<T> from a completion callback. However, I'm unsure how I'm supposed to run multiple tasks in parallel on the specified io_context and await all of them or until any one of them is finished. In .NET there are primitives like Task.WhenAny, Task.WhenAll and types like TaskCompletionSource that make it easy to work with tasks. Did anyone do this for asio-based coroutines?
You can use the experimental operator overloads to combine awaitables. E.g. Live On Coliru #include <boost/asio.hpp> #include <boost/asio/awaitable.hpp> #include <boost/asio/detached.hpp> #include <boost/asio/experimental/awaitable_operators.hpp> #include <boost/asio/use_awaitable.hpp> #include <iostream> using namespace std::chrono_literals; auto now = std::chrono::steady_clock::now; static auto start = now(); using namespace boost::asio::experimental::awaitable_operators; using boost::asio::awaitable; using boost::asio::use_awaitable; using boost::system::error_code; awaitable<void> foo_and() { boost::asio::steady_timer tim1(co_await boost::asio::this_coro::executor, 1s); boost::asio::steady_timer tim2(co_await boost::asio::this_coro::executor, 2s); co_await (tim1.async_wait(use_awaitable) && tim2.async_wait(use_awaitable)); } awaitable<void> foo_or() { boost::asio::steady_timer tim1(co_await boost::asio::this_coro::executor, 1s); boost::asio::steady_timer tim2(co_await boost::asio::this_coro::executor, 2s); co_await (tim1.async_wait(use_awaitable) || tim2.async_wait(use_awaitable)); } int main() { boost::asio::io_context ioc; auto handler = [](auto caption) { return [=](std::exception_ptr e) { try { if (e) std::rethrow_exception(e); std::cout << caption << " succeeded at "; } catch (std::exception const& e) { std::cout << caption << " failed at "; } std::cout << (now() - start) / 1.0s << "s" << std::endl; }; }; co_spawn(ioc.get_executor(), foo_and(), handler("foo_and")); co_spawn(ioc.get_executor(), foo_or(), handler("foo_or")); ioc.run(); } Prints e.g. foo_or succeeded at 1.00153s foo_and succeeded at 2.00106s BONUS With some more C++17 and default completion tokens: awaitable<void> foo_and(auto... delays) { auto ex = co_await boost::asio::this_coro::executor; co_await(Timer(ex, delays).async_wait() && ...); } awaitable<void> foo_or(auto... 
delays) { auto ex = co_await boost::asio::this_coro::executor; co_await(Timer(ex, delays).async_wait() || ...); } Now you can supply variadic lists of delays: co_spawn(ioc.get_executor(), foo_and(100ms, 1500ms, 75ms), handler("foo_and")); co_spawn(ioc.get_executor(), foo_or(1s, 5min, 65ms), handler("foo_or")); See it Live On Coliru: #include <boost/asio.hpp> #include <boost/asio/awaitable.hpp> #include <boost/asio/detached.hpp> #include <boost/asio/experimental/awaitable_operators.hpp> #include <boost/asio/use_awaitable.hpp> #include <iostream> using namespace std::chrono_literals; auto now = std::chrono::steady_clock::now; static auto start = now(); using namespace boost::asio::experimental::awaitable_operators; using boost::asio::awaitable; using boost::asio::use_awaitable; using boost::system::error_code; using Timer = boost::asio::use_awaitable_t<>::as_default_on_t<boost::asio::steady_timer>; awaitable<void> foo_and(auto... delays) { auto ex = co_await boost::asio::this_coro::executor; co_await(Timer(ex, delays).async_wait() && ...); } awaitable<void> foo_or(auto... delays) { auto ex = co_await boost::asio::this_coro::executor; co_await(Timer(ex, delays).async_wait() || ...); } int main() { boost::asio::io_context ioc; auto handler = [](auto caption) { return [=](std::exception_ptr e) { try { if (e) std::rethrow_exception(e); std::cout << caption << " succeeded at "; } catch (std::exception const& e) { std::cout << caption << " failed at "; } std::cout << (now() - start) / 1ms << "ms" << std::endl; }; }; co_spawn(ioc.get_executor(), foo_and(100ms, 1500ms, 75ms), handler("foo_and")); co_spawn(ioc.get_executor(), foo_or(1s, 5min, 65ms), handler("foo_or")); ioc.run(); } Prints e.g. foo_or succeeded at 65ms foo_and succeeded at 1500ms
72,625,089
72,682,646
Best way to design a time-measuring structure in C++?
I am struggling to do the following in the best way possible: I have to measure the execution time of a functionality implemented in C++. I have access to the code, so I can extend/modify it. The structure of what I have to do would be something like: for (int k=0;k<nbatches;k++) { //Set parameters from config file parameters=readFromFile(k); s=startTime(); for(int i=0;i<niters;i++) { o=namespacefoo::foo(parameters); writeToFile(o,i,k); } e=endTime(); times[k]=(e-s)/niters; } return times; I am quite sure that I will have to use the same structure to measure other functionalities from other namespaces. I am not sure if it makes sense to transform each functionality into a derived class from a base class. Each derived class would implement the virtual read/write wrappers, and there would be a measuring function, a non-member non-friend convenience function, which would implement my previous structure. Also, the number/type of the parameters is dependent on each derived class. Maybe I would have to do the same derived-class strategy for the parameters too. Finally, a factory function would set everything up. Does this seem very cumbersome for the simple task I want to solve? I am sure this is not the first time that someone needs something like this and I do not want to reinvent the wheel. Thanks
The std::chrono library gives you everything you need. Please see here. With that, you can write a very simple wrapper for your requirement. We will define a timer class with a start and a stop function. "start" uses now to get the current time. "stop" calculates the elapsed time between start and the moment "stop" was called. Additionally, we overload the inserter operator << to allow for easy output. Please see the simple example below: #include <iostream> #include <fstream> #include <chrono> class Timer { std::chrono::time_point<std::chrono::high_resolution_clock> startTime{}; long long elapsedTime{}; public: void start() { startTime = std::chrono::high_resolution_clock::now(); } void stop() { elapsedTime = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::high_resolution_clock::now() - startTime).count(); } friend std::ostream& operator << (std::ostream& os, const Timer& t) { return os << t.elapsedTime; } }; // Example code int main() { Timer t; // Define/Instantiate timer t.start(); // Start Timer // Burn some time: for (unsigned i{}; i < 1000; ++i) std::cout << i << '\n'; t.stop(); // Stop timer // Show result std::cout << "\n\nDuration for operation was:\t " << t << " ms\n"; }
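Applied to the question's batch loop, a generic helper that averages any callable over niters runs might look like this sketch (std::chrono::steady_clock is used here because, unlike the wall clock, it cannot jump during a measurement):

```cpp
#include <chrono>

// Run fn() niters times and return the average duration per call in
// nanoseconds, mirroring the question's (e - s) / niters structure.
template <typename Fn>
long long average_ns(int niters, Fn&& fn) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < niters; ++i) fn();
    auto elapsed = std::chrono::steady_clock::now() - start;
    return std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed).count() / niters;
}

inline int demo_run_count() {
    int n = 0;
    average_ns(5, [&n] { ++n; });  // the callable really runs niters times
    return n;
}
```

Because the callable is a template parameter, each functionality to be measured can be passed as a lambda, which sidesteps most of the base-class/factory machinery the question considers.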
72,625,624
72,625,711
C++ using vector<vector> to represent matrix with continuous data buffer
#include <iostream> #include <vector> #include <algorithm> using namespace std; vector<vector<float>> func(int M) { // res = matrix size MxM vector<vector<float>> res; float* buffer = static_cast<float*>(malloc(M * M * sizeof(float))); res.reserve(M); for (int i=0; i<M; i++) { res.emplace_back(buffer + i * M, buffer + (i + 1) * M); /// res[i] = compute_the_matrix(); } return res; } I'm required to make a function that uses vector<vector<float>> to represent a matrix. However, it's inefficient because the rows might be at different locations in memory, while a good matrix should have all its elements in a contiguous block. To do this, I malloc a contiguous block of memory. Then I initialize the vectors from this block. Is this method safe, and will the vectors free memory correctly when they're destructed? Another situation I can think of is if there's an exception in res[i] = compute_the_matrix();, then we have a memory leak. Edit: I think this code performs the copy constructor instead of the move constructor, so it's not what I'm looking for. So, how can I make a vector that is contiguous in memory?
The code doesn't do what you think it does. The line res.emplace_back(buffer + i * M, buffer + (i + 1) * M); creates a new std::vector<float> to add to res. This std::vector<float> will allocate its own memory to hold a copy of the data in the range [buffer + i * M, buffer + (i + 1) * M), which also causes undefined behavior because you never initialized the data in this range. So, in the end you are not using the memory you obtained with malloc at all for the vectors. That memory is simply leaked at the end of the function. You can't specify what memory a vector<vector<float>> should use at all. There is simply no way to modify its allocation strategy. What you can do is either use a vector<float> instead to hold the matrix entries linearly indexed in a single vector or you can use a vector<vector<float, Alloc1>, Alloc2> where Alloc1 and Alloc2 are some custom allocator types for which you somehow specify the allocation behavior so that the storage layout is closer to what you want (although I doubt that the latter can be done nicely here or is worth the effort over just using the linear representation).
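A hedged sketch of the linear-indexing alternative: one contiguous std::vector<float> wrapped in a thin class with a 2-D accessor:

```cpp
#include <cstddef>
#include <vector>

// M x M matrix stored row-major in a single contiguous vector<float>;
// the vector releases the buffer automatically, no malloc needed.
class Matrix {
public:
    explicit Matrix(std::size_t m) : m_(m), data_(m * m, 0.0f) {}

    float& operator()(std::size_t i, std::size_t j) { return data_[i * m_ + j]; }
    float operator()(std::size_t i, std::size_t j) const { return data_[i * m_ + j]; }

    float* data() { return data_.data(); }  // one contiguous block
    std::size_t size() const { return m_; }

private:
    std::size_t m_;
    std::vector<float> data_;
};

inline bool demo_matrix() {
    Matrix m(3);
    m(1, 2) = 5.0f;
    // (1,2) maps to linear offset 1*3+2 in the contiguous buffer.
    return m(1, 2) == 5.0f && m.data()[1 * 3 + 2] == 5.0f;
}
```

This also answers the exception-safety concern: if a computation throws mid-fill, the vector's destructor still frees the whole buffer.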
72,625,758
72,626,014
Why is my code giving segmentation fault error?
Step By Knight problem: Given a square chessboard, the initial position of the Knight and the position of a target, find out the minimum steps a Knight will take to reach the target position. Note: The initial and the target position coordinates of the Knight have been given according to 1-based indexing. #include<bits/stdc++.h> using namespace std; class Solution { public: //Function to find out minimum steps Knight needs to reach target position. int ans = -1; void func(int x, int y, int fr, int fc, int N, int cnt){ if(x > N || y > N || x < 1 || y < 1){ return; } if(x == fr && y == fc){ if(ans > cnt){ ans = cnt; } return; } func(x + 2, y - 1, fr, fc, N, cnt + 1); func(x + 2, y + 1, fr, fc, N, cnt + 1); func(x - 1, y + 2, fr, fc, N, cnt + 1); func(x + 1, y + 2, fr, fc, N, cnt + 1); func(x - 2, y + 1, fr, fc, N, cnt + 1); func(x - 2, y - 1, fr, fc, N, cnt + 1); func(x - 1, y - 2, fr, fc, N, cnt + 1); func(x + 1, y - 2, fr, fc, N, cnt + 1); return; } int minStepToReachTarget(vector<int>&KnightPos,vector<int>&TargetPos,int N) { int cnt = 0; func(KnightPos[0], KnightPos[1], TargetPos[0], TargetPos[1], N, cnt); return ans; } }; int main(){ int tc; cin >> tc; while(tc--){ vector<int>KnightPos(2); vector<int>TargetPos(2); int N; cin >> N; cin >> KnightPos[0] >> KnightPos[1]; cin >> TargetPos[0] >> TargetPos[1]; Solution obj; int ans = obj.minStepToReachTarget(KnightPos, TargetPos, N); cout << ans <<"\n"; } return 0; } INPUT 1 6 4 5 1 1 OUTPUT Runtime Error Segmentation Fault (SIGSEGV)
Your issue is a stack overflow because each level of recursion has no knowledge of what positions have already been checked. Consider this simple example to illustrate: The knight makes a move of +2,-1, and you make a recursive call to test this new position. While checking this position, the function will test a move of -2,+1 Do you see the problem? These two cases above will oscillate forever, adding more to the stack as you recurse, until you run out of stack and the operating system terminates your process. Now, it's not enough to simply remember what the last move is and exclude that, because a knight can make multiple moves in succession before arriving back at the same point. The solution is to create a "board" containing a value for each possible co-ordinate that represents whether that has been visited or not. When you make a move to a valid square, if it is already marked as visited then you must return immediately. Otherwise, you can mark it as visited and continue checking. Because you probably want to search all possible combinations of moves, then you should also mark the square as not visited before you return from your function. 
You can declare your "board" like this: std::vector<bool> visited(N * N); And you can index like this: visited[(y-1)*N + x-1] = true; For ease, let's store this in the Solution class along with the answer, and make some adjustments: class Solution { public: int minStepToReachTarget(vector<int>& KnightPos, vector<int>& TargetPos, int N) { ans = -1; visited.resize(N * N, false); func(KnightPos[0], KnightPos[1], TargetPos[0], TargetPos[1], N, 0); return ans; } private: void func(int x, int y, int fr, int fc, int N, int cnt); int ans; vector<bool> visited; }; Now, the function itself: void func(int x, int y, int fr, int fc, int N, int cnt) { if(x > N || y > N || x < 1 || y < 1) { return; } // Target reached if(x == fr && y == fc) { ans = cnt; return; } // Prune futile searches cnt++; if (ans >= 0 && ans <= cnt) return; // Search int vidx = (y - 1) * N + x - 1; if (!visited[vidx]) { visited[vidx] = true; func(x + 2, y - 1, fr, fc, N, cnt); func(x + 2, y + 1, fr, fc, N, cnt); func(x - 1, y + 2, fr, fc, N, cnt); func(x + 1, y + 2, fr, fc, N, cnt); func(x - 2, y + 1, fr, fc, N, cnt); func(x - 2, y - 1, fr, fc, N, cnt); func(x - 1, y - 2, fr, fc, N, cnt); func(x + 1, y - 2, fr, fc, N, cnt); visited[vidx] = false; } } Notice how the visited flags are being used here. Also note a couple of extra important changes from above: // Target reached if(x == fr && y == fc) { ans = cnt; return; } // Prune futile searches cnt++; if (ans >= 0 && ans <= cnt) return; The first bit simplifies your target-reached scenario. Quite simply, if you reach the target, you know that it's the best answer so far. How? Because of the second bit... The second bit increments your step count and then checks if this can possibly produce a better answer than what we have already. If not, we stop searching. Such searching would be futile because even if you ultimately reach the target it will not be the shortest path. 
You should avoid such searches, because they can blow out your execution time by an insane amount.
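For completeness: since every knight move has the same cost, a breadth-first search finds the minimum step count directly and avoids deep recursion altogether. A sketch of that alternative (not the recursive code above):

```cpp
#include <queue>
#include <utility>
#include <vector>

// BFS: every move costs 1, so the first time the target is reached its
// depth is the minimum number of moves. Coordinates are 1-based like the
// problem statement; returns -1 if the target is unreachable.
inline int minStepToReachTargetBFS(int kx, int ky, int tx, int ty, int N) {
    const int dx[8] = {2, 2, -2, -2, 1, 1, -1, -1};
    const int dy[8] = {1, -1, 1, -1, 2, -2, 2, -2};
    std::vector<int> dist(N * N, -1);  // -1 marks "not visited yet"
    auto idx = [N](int x, int y) { return (y - 1) * N + (x - 1); };

    std::queue<std::pair<int, int>> q;
    dist[idx(kx, ky)] = 0;
    q.push({kx, ky});
    while (!q.empty()) {
        auto [x, y] = q.front();
        q.pop();
        if (x == tx && y == ty) return dist[idx(x, y)];
        for (int m = 0; m < 8; ++m) {
            int nx = x + dx[m], ny = y + dy[m];
            if (nx >= 1 && nx <= N && ny >= 1 && ny <= N && dist[idx(nx, ny)] < 0) {
                dist[idx(nx, ny)] = dist[idx(x, y)] + 1;
                q.push({nx, ny});
            }
        }
    }
    return -1;
}
```

For the sample input (knight at 4,5; target 1,1; N = 6) this yields 3 moves, and it visits each square at most once, so it runs in O(N^2).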
72,626,340
72,627,253
How to use Gtk::EntryCompletion::set_match_func on GTKMM C++?
I want to search for matches on every substring. I've been looking for a GTK completion example on the internet, but I couldn't find an example with set_match_func. The documentation says I need to specify a SlotMatch, but I don't understand how to use SlotMatch. m_completion->set_text_column(0); m_completion->set_minimum_key_length(0); m_completion->set_popup_completion(true); m_completion->set_match_func(func);
The first line in the documentation, right after the inheritance diagram typedef sigc::slot< bool(const Glib::ustring&, const TreeModel::const_iterator&)> SlotMatch; Further reading reaches the example For example, bool on_match(const Glib::ustring& key, const TreeModel::const_iterator& iter); In gtkmm m_completion->set_match_func(sigc::ptr_fun(on_match));
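The gtkmm-independent core of a typical custom matcher, a case-insensitive substring search, might look like the sketch below; inside on_match you would read the row's text out of the iterator's model column and call something like this:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// True if `key` occurs anywhere in `row_text`, ignoring ASCII case.
// This is the usual reason for a custom match func: the default only
// matches prefixes of the text column.
inline bool matches_substring(const std::string& key, const std::string& row_text) {
    auto it = std::search(
        row_text.begin(), row_text.end(), key.begin(), key.end(),
        [](unsigned char a, unsigned char b) {
            return std::tolower(a) == std::tolower(b);
        });
    return it != row_text.end();
}
```

The slot then simply returns this boolean for each candidate row, and GTK shows the rows for which it returned true.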
72,626,664
72,638,039
Why is this program timing out without any network traffic?
I am trying to create a simple C++ program that hides the differences between Linux and Windows when making sockets and connecting to servers. The Linux part of this code compiles without any warnings or errors, but it times out after resolving the host IP and does not connect to the server running (nc -lvnp 7777). Using tcpdump -i eth0 -v port 7777 to capture all the traffic to and from the machine running the program shows nothing. class Socket { public: int initsoc(void); int connectsoc(int sock, const char * host, int port); }; int Socket::connectsoc(int sock, const char * host, int port) { #ifdef _WIN32 /* windows part */ #else struct hostent *server; struct sockaddr_in server_addr; struct in_addr *address; server_addr.sin_family = AF_INET; server_addr.sin_port = port; /* resolve host */ server = gethostbyname(host); if (server == NULL) { printf("Error : %s \nFailed to resolve %s:%d", strerror(errno), host, port); return -1; } address = (struct in_addr *) (server->h_addr); printf("Resolved: [%s] ===> [%s]\n", host, inet_ntoa(*address)); server_addr.sin_addr.s_addr = inet_addr(inet_ntoa(*address)); iResult = connect(sock, (struct sockaddr*)&server_addr, sizeof(server_addr)); printf("Connect returned : %d\n",iResult); if (iResult < 0) { printf("Error: %s\nFailed to connect\n",strerror(errno)); return -1; } printf("Connected to [%s:%d]\n",host,port); return 0; #endif } I tried to open a netcat listener on a machine on the same network as the computer running the program, without any NATs, but it still times out. This is how I compiled it and what it outputs: g++ -Wall -ggdb3 -pedantic -g main.cpp -o app Resolved: [10.0.0.100] ===> [10.0.0.100] Connect returned : -1 Error: Connection timed out Failed to connect The IP of the machine running the code is 10.0.0.2; the IP of the machine running the netcat server is 10.0.0.100.
As commented by @user253751, this line: server_addr.sin_port = port; should be changed to: server_addr.sin_port = htons(port);
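The reason: sin_port must be in network (big-endian) byte order. Port 7777 is 0x1E61; stored without conversion on a little-endian host, the wire carries 0x611E = 24862, so the SYN goes to a port that the tcpdump filter (port 7777) never shows. A small sketch of what the conversion does on such a host:

```cpp
#include <cstdint>

// What htons does on a little-endian host: swap the two bytes of a 16-bit
// value so the most significant byte is transmitted first.
inline std::uint16_t swap16(std::uint16_t v) {
    return static_cast<std::uint16_t>((v << 8) | (v >> 8));
}
```

On a big-endian host htons is a no-op, which is exactly why the portable macro exists instead of an unconditional swap.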
72,627,522
72,627,643
How to add command line option to ELF binary using cmake and gcc?
I have a C++-based application and I am building the binary for it using cmake and make. Now, I want to show the version of my binary with something like a --version flag. In the end I want ex_app -v to show the binary version. In one of the header files I could see a #define APP_VERSION "1.0", and this version number is displayed when the binary is executed. Now, I want to display this version number with a -v command line option. In the existing source files there is no code for "command line options", and I am looking for ways to introduce command line arguments for the binary. The binary is in ELF format, compiled for GNU/Linux. I want to know if this is possible to accomplish without majorly modifying the source files. Does GCC provide any option to insert version info into an ELF binary file? Thanks in advance. P.S.: I understand that this is more of a discussion-type post and I am looking for some hint to get started.
You can use the CMake configure_file directive to generate a header file with the APP_VERSION macro. CMakeLists.txt: ... configure_file(${CMAKE_SOURCE_DIR}/version.h.cmake ${CMAKE_CURRENT_BINARY_DIR}/version.h) ... version.h.cmake: ... #define APP_VERSION "@PROJECT_VERSION@" ... This will take version.h.cmake (a template) from your source directory, substitute @PROJECT_VERSION@ with the VERSION value you set in project(...) in CMakeLists.txt, and dump it into a version.h file in the build directory. This is not limited to the project version; you can do this with any CMake variable. For a list of CMake-defined ones see cmake-variables(7). version.h can then be included in your codebase and the APP_VERSION macro used to display the version with --version without modifying any source files (you just need to bump the version in a single central place - CMakeLists.txt). Command line arguments are passed to the program through its main() entrypoint as arguments argc (count of given arguments) and argv (array of C strings representing each argument). The first argument is always the command itself. Very naive command line argument processing code example: #include <string_view> #include <iostream> #include "version.h" ... int main(int argc, char *argv[]) { for (int i = 1; i < argc; i++) { std::string_view a(argv[i]); if (a == "-v" || a == "--version") { std::cout << APP_VERSION << std::endl; } else { std::cout << "unknown argument: " << a << std::endl; } } return 0; } For more complicated CLI interfaces (such as some arguments taking a value, value validation, formatting, mutual exclusion etc.) I recommend using some existing library such as CLI11.
72,628,666
72,628,915
Given an `int A` Is there a strong guarantee that `A == (int) (double) A`?
I need a strong guarantee that int x = (int) std::round(y) will always give the correct results (y is finite and "humanly", e.g. -50000 to 50000). std::round(4.1) can give 4.000000000001 or 3.99999999999. In the latter case, casting to int gives 3, right? To manage this, I reinvented the wheel with this ugly function: template<std::integral S = int, std::floating_point T> S roundi(T x) { S r = (S) x; T r2 = std::fmod(x, 1); if (r2 >= 0.5) return r + 1; if (r2 <= -0.5) return r - 1; return r; } But is this necessary? Or does casting from double to int use the last mantissa bit for rounding?
Assuming int is 32 bits wide and double is 64 bits wide (and assuming IEEE 754), all values of int are exactly representable in a double. That means std::round(4.1) returns exactly 4. Nothing more nothing less. And casting that number to int is always 4 exactly.
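A few spot checks of that guarantee (assuming the usual 32-bit int and IEEE 754 double):

```cpp
#include <cmath>

// With IEEE 754 doubles (53-bit significand), every 32-bit int survives a
// round trip through double, and std::round returns an exact integral
// double, never 3.999... or 4.000...001.
inline bool roundtrips(int a) {
    return static_cast<int>(static_cast<double>(a)) == a;
}
```

So in the question's range the hand-rolled roundi is unnecessary; (int) std::round(y) is already exact. Note also that std::round rounds halfway cases away from zero, so std::round(-4.5) is -5.0.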
72,629,731
72,629,959
c++11 two critical sections can use nested lock_guard?
If I have two critical sections, I make two corresponding mutexes to protect each of them. (I think it is necessary to precisely control when to lock, because they are used at different times and in different scenarios. Q1: is that really so?) For example: bool a; //ignore atomic_bool, because the actual data structure is more complex mutex mut_a; //for thread safety of a bool b; mutex mut_b; //for thread safety of b And I need to implement this kind of logic: if (lock_guard<mutex> lck(mut_a); a) { do something... } else { if (lock_guard<mutex> lck(mut_b); b) { do something... } } As seen, lock_guard<mutex> is used nested. Q2: is it proper and thread-safe?
I think a problem here is that if (bool x = true) { // x is in scope } else { // x is STILL in scope! x = false; } So the first lock on mut_a is still held in the else block. That might be your intention, but if so, I would not consider it an optimal way to write it, for readability. Also, if it is important that !a holds inside the critical section of b, you DO need to keep the lock on a.
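A quick sketch demonstrating this scope rule, using a hypothetical stand-in "lock" that records its own lifetime (made up purely for illustration):

```cpp
// Stand-in for lock_guard that records whether it is currently "held",
// to demonstrate the scope of an if-init variable (C++17).
struct probe_lock {
    bool* held;
    explicit probe_lock(bool* h) : held(h) { *held = true; }
    ~probe_lock() { *held = false; }
};

// The init-statement variable lives for BOTH branches of the if/else.
inline bool held_in_else(bool condition) {
    bool held = false;
    if (probe_lock lck(&held); condition) {
        return held;  // true: the "lock" is held in the if branch
    } else {
        return held;  // also true: lck is STILL in scope here
    }
}
```

With a real lock_guard this means mut_a stays locked while mut_b is acquired in the else branch, which is safe here but is a lock-ordering hazard if other code ever locks mut_b before mut_a.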
72,630,190
72,630,352
My object is being called has different values even though I had assigned value for it
I am currently doing my homework, which requires me to create a dispenser machine system. The code below is only a part extracted from my actual code, but it contains the core of the problem I faced. As shown in the console, the dispenser shows 50 50 50 50, which is the default value if the constructor is called using dispenserType() instead of dispenserType(int item[], int c[]). However, if I were to move the for loop in main into staffDispenser(), the values will be remembered and shown exactly as I inputted them. So I was assuming that whenever I call dispenserType dt;, it somehow calls the dispenserType() constructor, which ultimately overwrites everything to 50 again. I identified the issue but I don't know any fixes for this; I am expected to call the for loop function in another class called customerDispenser(), not in staffDispenser(). Is there any way for me to use the dispenserType class functions in other classes without it calling the dispenserType() constructor again just to overwrite my values back to 50? #note: I cannot remove dispenserType() nor dispenserType(int item[], int c[]) since both are required in my homework. #include <iostream> using namespace std; class dispenserType { private: int numberOfItems[4] = {0,0,0,0}; int cost[4] = {0,0,0,0}; public: dispenserType() { for (int i = 0; i < 4; i++) { numberOfItems[i] = 50; cost[i] = 50; } } dispenserType(int item[], int c[]) { for (int i = 0; i<4;i++){ numberOfItems[i] = item[i]; cost[i] = c[i]; } } int getNoOfItems(int i) { int j = i; return numberOfItems[j]; } int getCost(int i) { int j = i; return cost[j]; } void makeSale(int i) { int j = i; numberOfItems[j] -= 1; } }; void staffDispenser() { char choice; int item[4]; int cost[4]; cout << endl << "Do you want to asign values manually for each items?
(Y for yes, N for no): "; cin >> choice; if (choice == 'Y' || choice == 'y') { for (int i = 0; i < 4; i++) { cout << "Enter the number of " << name[i] << ": "; cin >> item[i]; cout << "Enter the cost of " << name[i] << " (in cents): "; cin >> cost[i]; } dispenserType dt = dispenserType(item, cost); } else if (choice == 'N' || choice == 'n') { dispenserType dt = dispenserType(); cout << endl << "Each item is set to 50 in quantity and also cost 50 cents" << endl; } } int main() { dispenserType dt; int item[4]; int cost[4]; staffDispenser(); for (int i = 0; i < 4; i++) { juice[i] = dt.getNoOfItems(i); c[i] = dt.getCost(i); } for (int i =0; i<4; i++){ cout << item[i] << " "; } cout << endl; for (int i =0; i<4; i++){ cout << cost[i] << " "; } }
The variable dt in main() and the one in staffDispenser are two distinct variables. The changes made to the dt variable in staffDispenser are local to that function; they are not reflected in main. If you want to update the dt variable of main inside the staffDispenser function, you need to pass it by reference: void staffDispenser(dispenserType &dt) { } And from main call staffDispenser as follows: dispenserType dt; int item[4]; int cost[4]; staffDispenser(dt); // pass dt by reference You also need to remove the following two local declarations from the staffDispenser function, replacing them with assignments to the reference parameter: dispenserType dt = dispenserType(item, cost); becomes dt = dispenserType(item, cost); and dispenserType dt = dispenserType(); becomes dt = dispenserType();
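The difference between the two calling conventions in a minimal sketch; Counter is a made-up stand-in for dispenserType:

```cpp
#include <cassert>

struct Counter {
    int value = 50;          // stands in for the dispenser defaults
};

// Pass by reference: the caller's object is updated in place.
void configure(Counter& c) { c.value = 7; }

// Pass by value: only a local copy changes; the caller sees nothing.
void configureCopy(Counter c) { c.value = 7; }
```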
72,630,439
72,630,555
Including specific paths when using CMake
I have the following code structure: --src |--common--include--common---datatype--a_h_file.hpp | | | --src | |--main_lib | |--------include-----one---one.hpp | | | |---src--------one----one.cpp | CMakeLists.txt |---main.cpp CMakeLists.txt main.cpp uses one.hpp without problem. My CMakeLists.txt files are like this Upper level cmake_minimum_required(VERSION 3.0.0) project(MyProject VERSION 1.0.0) add_subdirectory(main_lib) add_executable(myproj main.cpp) target_link_libraries(myproj PUBLIC mainpub) and the other add_library(mainpub src/one/one.cpp) target_include_directories(mainpub PUBLIC include) With this I can use one.hpp My problem is that by design it has been decided that one.hpp should include a_h_file.hpp like this (one.hpp) #pragma once float addition(float,float); #include "common/data_type/a_h_file.hpp" //<---THIS! class whatever{ public: int number=1; }; So, my question is how do I modify the CMake files to include the path /src/common/include to the paths that are going to be considered in order to be able to use a_h_file.hpp? EDIT: I tried cmake_minimum_required(VERSION 3.0.0) project(MyProject VERSION 1.0.0) add_library(Common INTERFACE) target_include_directories(Common INTERFACE common/include) add_subdirectory(main_lib) add_executable(myproj main.cpp) target_link_libraries(myproj PUBLIC mainpub) and in the other add_library(mainpub src/one/one.cpp) #target_include_directories(mainpub PUBLIC include ../common/include) #target_include_directories(mainpub PUBLIC Common) #target_include_directories(mainpub PUBLIC include) target_include_directories(mainpub PUBLIC Common include) but it did not work :( fatal error: common/data_type/a_h_file.hpp: No such file or directory #include "common/data_type/a_h_file.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~ EDIT2: Modified the second Cmake file to add_library(mainpub src/one/one.cpp) target_link_libraries(mainpub PUBLIC Common) target_include_directories(mainpub PUBLIC include) It did not work either.
First, define an INTERFACE target for your common directory in the top-level CMakeLists.txt: add_library(Common INTERFACE) target_include_directories(Common INTERFACE common/include) Then just link against it in your targets, which will propagate the include directories: target_link_libraries(mainpub PUBLIC Common)
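Putting the pieces together, the two CMakeLists.txt files could look roughly like this; this is a sketch, with paths assumed from the layout in the question. Note also that the tree shows a datatype directory while the #include says data_type; a spelling mismatch like that would produce exactly this "No such file or directory" error even with correct CMake settings.

```cmake
# top-level CMakeLists.txt (next to common/ and main_lib/)
cmake_minimum_required(VERSION 3.0.0)
project(MyProject VERSION 1.0.0)

# Header-only interface target carrying the common include path
add_library(Common INTERFACE)
target_include_directories(Common INTERFACE ${CMAKE_CURRENT_SOURCE_DIR}/common/include)

add_subdirectory(main_lib)

add_executable(myproj main.cpp)
target_link_libraries(myproj PUBLIC mainpub)

# main_lib/CMakeLists.txt
add_library(mainpub src/one/one.cpp)
target_include_directories(mainpub PUBLIC include)
# Linking against Common propagates its include directory to mainpub
# and to anything that in turn links mainpub
target_link_libraries(mainpub PUBLIC Common)
```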
72,630,561
74,148,683
boost::asio::steady_timer get stuck at WaitForSingleObject when built as a DLL
I've just encountered a weird and devastating problem that I couldn't find any information about anywhere. asio::steady_timer timer(m_context); This asio::steady_timer works perfectly fine if I build it as an EXE, but if it's built as a DLL it gets stuck waiting in WaitForSingleObject (in the win_thread.ipp file, line 106) whenever I initialize an asio::steady_timer; please take a look at the picture below. This DLL is just an empty project; it only includes the asio.hpp file. I've found this article about a problem that might be relevant, but still found no way to debug or fix this. Am I doing something wrong, or is this a library bug? Thanks for your time!
This was solved by making these two class members or global variables and not constructing them in the entry point (DllMain) of the DLL: asio::io_context context; asio::steady_timer timer(context); The likely explanation is that asio's Windows backend spawns an internal thread and then waits for it to start; a thread created while DllMain holds the loader lock cannot start until DllMain returns, so the WaitForSingleObject call never wakes up.
72,630,872
72,722,875
Having a difficult time with DirectX 11 dynamic texture Map/Unmap
I have been trying to upload a dynamic texture with Map/Unmap but no luck so far. Here's the code I'm working with: D3D11_MAPPED_SUBRESOURCE subResource = {}; ImmediateContext->Map(dx11Texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &subResource); Memory::copy(subResource.pData, (const void*)desc.DataSet[0], texture->get_width() * texture->get_height() * GraphicsFormatUtils::get_format_size(texture->get_format())); subResource.RowPitch = texture->get_width() * GraphicsFormatUtils::get_format_size(texture->get_format()); subResource.DepthPitch = 0; ImmediateContext->Unmap(dx11Texture, 0); I created the texture in the immutable state, supplying the data up front, and that worked out well, but when I try to create it with a dynamic flag and upload the same data afterwards, my texture shows a noisy visual. This is the texture with immutable creation flags, with the data set in the texture creation phase: Immutable texture This is the texture with dynamic creation flags, with the data uploaded after the creation phase using the Map/Unmap methods: Dynamic texture Any input would be appreciated.
When using Map, the subResource.RowPitch returned by the Map function is the one you are expected to use when performing the copy (notice that you never send it back to the device context, so it is read-only). It is generally rounded up, for memory alignment purposes. When you provide initial data in an (immutable or other) texture, this copy operation is hidden from you but still happens behind the scenes; with Map, you need to handle the pitch yourself. The process of copying a dynamic texture is as follows: int myDataRowPitch = width * formatSize; // width * format size (if you don't pad) D3D11_MAPPED_SUBRESOURCE subResource = {}; ImmediateContext->Map(dx11Texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &subResource); if (myDataRowPitch == subResource.RowPitch) { //you can do a standard mem copy here } else { // here you need to copy line per line } ImmediateContext->Unmap(dx11Texture, 0);
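The branch structure above can be exercised with plain buffers; a minimal sketch of a pitch-aware copy (names are made up for illustration, this is not D3D API code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a tightly packed image into a destination whose rows are padded
// to dstRowPitch bytes, line by line when the pitches differ -- the same
// pattern used when filling a mapped dynamic texture.
void copy_pitched(uint8_t* dst, size_t dstRowPitch,
                  const uint8_t* src, size_t srcRowBytes, size_t rows) {
    if (dstRowPitch == srcRowBytes) {
        std::memcpy(dst, src, srcRowBytes * rows);   // one straight copy
    } else {
        for (size_t y = 0; y < rows; ++y)            // otherwise line per line
            std::memcpy(dst + y * dstRowPitch,
                        src + y * srcRowBytes, srcRowBytes);
    }
}
```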
72,631,196
72,724,434
Enable exception support in Emscripten
I am using Bazel (5.2.0) to build an Emscripten app. My setup looks like this: main.cpp: #include "emscripten.h" #include <iostream> int main(int argc, char **argv) { throw std::runtime_error("error!"); } BUILD.bazel: load("@rules_cc//cc:defs.bzl", "cc_binary") load("@emsdk//emscripten_toolchain:wasm_rules.bzl", "wasm_cc_binary") cc_binary( name = "index", srcs = ["main.cpp"], copts = [ "-Wno-unused-variable", "-Wno-unused-but-set-variable", "-Wno-unused-function", ], data = ["index.html"], linkopts = [ "-s USE_GLFW=3", "-s USE_WEBGPU=1", "-s WASM=1", "-s ALLOW_MEMORY_GROWTH=1", "-s NO_EXIT_RUNTIME=0", "-s ASSERTIONS=1", "-s EXCEPTION_CATCHING_ALLOWED=[..]", ], tags = ["manual"], ) wasm_cc_binary( name = "index-wasm", cc_target = ":index", ) WORKSPACE.bazel: load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") http_archive( name = "emsdk", strip_prefix = "emsdk-311acff345fd71dcfe5f350653cec466ee7e3fbc/bazel", url = "https://github.com/emscripten-core/emsdk/archive/311acff345fd71dcfe5f350653cec466ee7e3fbc.tar.gz", ) load("@emsdk//:deps.bzl", emsdk_deps = "deps") emsdk_deps() load("@emsdk//:emscripten_deps.bzl", emsdk_emscripten_deps = "emscripten_deps") emsdk_emscripten_deps(emscripten_version = "3.1.13") When I build my application I get the error: main.cpp:11:5: error: cannot use 'throw' with exceptions disabled throw std::runtime_error("error!"); ^ I already added "-s EXCEPTION_CATCHING_ALLOWED=[..]" to the link options, but this does not seem to help. Any idea how exceptions can be enabled in Emscripten using Bazel?
The error is emitted at compile time, so a link-only setting such as -s EXCEPTION_CATCHING_ALLOWED is not enough on its own; the compiler also needs exceptions enabled. Add "-fexceptions" to the copts of the cc_binary as well as to the linkopts: copts = ["-fexceptions", ...], linkopts = ["-fexceptions", ...]. Alternatively, use "-fwasm-exceptions" in both places to get the newer native WebAssembly exception handling.
72,632,162
72,632,478
std::conditional for compile time inheritance paired with std::enable_if for compile time methods
I wanted to design a template class with two arguments that at compile time inherited, based on the template arguments, one of two mutually exclusive base classes. I wanted to keep it simple for myself, so I came up with this working example. The inheritance condition I got with std::conditional based on the template arguments; the specialized methods for that conditional inheritance I set with std::enable_if. class Empty {}; template<typename T> class NonEmpty { protected: std::vector<T> mObjects; }; template< typename A, typename B = A> class Storage : public std::conditional<std::is_same<A, B>::value, Empty, NonEmpty<B>>::type { public: template<typename C = B, typename std::enable_if<std::is_same<C, A>::value>::type* = nullptr> void doStuff() { // one argument or two arguments with same type // do stuff ... }; template<typename C = B, typename std::enable_if<std::is_same<C, A>::value>::type* = nullptr> void doSomethingElse() { // one argument or two arguments with same type // do something exclusively just for this argument constellation ... }; template<typename C = B, typename std::enable_if<!std::is_same<C, A>::value>::type* = nullptr> void doStuff() { // two arguments with different types // do stuff with inherited variables of NonEmpty-Class ... }; }; int main() { Storage<int> emp; Storage<int, float> nonemp; emp.doStuff(); emp.doSomethingElse(); nonemp.doStuff(); } Is there a better way to go about it, or are there any improvements for my existing solution? (I am using GCC 8.1.0 with C++14)
In my opinion you're much better of partially specializing the template, since the entire implementation for both versions are completely independent. This way you can also not inherit any class instead of inheriting an empty class. template<typename T> class NonEmpty { protected: std::vector<T> mObjects; }; template<typename A, typename B = A> class Storage : public NonEmpty<B> { public: void doStuff() { std::cout << "doStuff() different types\n"; }; }; template<typename A> class Storage<A, A> { public: void doStuff() { std::cout << "doStuff() same types\n"; }; void doSomethingElse() { std::cout << "doSomethingElse()\n"; }; }; int main() { Storage<int> emp; Storage<int, float> nonemp; emp.doStuff(); emp.doSomethingElse(); nonemp.doStuff(); }
72,632,309
72,632,908
Is there any problem using a reference to a std::set key to erase itself?
Consider the following code: std::set<int> int_set = {1, 2, 3, 4}; for(const auto& key : int_set) { if(key == 2) { int_set.erase(key); break; } } The code runs as expected, but is it safe? It feels wrong to be using a reference to a key to erase itself from a set, as presumably once the erase has happened the reference is no longer valid. Another code snippet with the same potential problem would be std::set<int> int_set = {1, 2, 3, 4}; const auto& key = *int_set.find(2); int_set.erase(key);
This is safe in the code provided since you break out of the loop without attempting to use the reference (key) again (nor implicitly advance the underlying iterator that for-each loops are implemented in terms of, which would happen if you did not break/return/throw/exit()/crash, when you looped back to the top of the loop) after the call to erase. key itself is only needed to find the element to remove; until that element is removed, it's valid, once it's removed, it's not used again by erase (it already found the element to erase, there's no possible use for it after that point). If you tried to use key after the erase, you'd be using a dangling reference, invoking undefined behavior. Similarly, even allowing the loop to continue (implicitly advancing the underlying iterator) would be illegal; the iterator is invalid, and the implicit advancing of the iterator when you returned to the top of the loop would be equally invalid. The safe way to erase more than one element as you iterate would be to switch from for-each loops (that are convenient, but inflexible) to using iterators directly, so you can update them with the return value of erase, e.g.: for(auto it = int_set.begin(); it != int_set.end(); /* Don't increment here */) { if (predicate(*it)) { it = int_set.erase(it); // In C++11 and higher, erase returns an iterator to the // element following the erased element so we can // seamlessly continue processing } else { ++it; // Increment if we didn't erase anything } } Of course, as noted in the comments, if you only need to remove one element with a known value, the whole loop is pointless, being a slow (O(n) loop + O(log n) erase) way to spell: int_set.erase(2); // Returns 1 if 2 was in the set, 0 otherwise which is a single O(log n) operation.
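A self-contained version of that iterator-based loop, erasing even values as an example:

```cpp
#include <cassert>
#include <set>

// Erase every even element while iterating; the iterator returned by
// erase keeps the loop valid after each removal.
std::set<int> erase_evens(std::set<int> s) {
    for (auto it = s.begin(); it != s.end(); /* no increment here */) {
        if (*it % 2 == 0)
            it = s.erase(it);   // advances past the erased element
        else
            ++it;               // only increment when nothing was erased
    }
    return s;
}
```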
72,632,973
72,639,273
Make failing with undefined reference to a function I never use
I am trying to build my code. After I do cmake .. from a build directory I do make -j8 and I get [ 90%] Building CXX object common/CMakeFiles/common.dir/src/utils/path_util.cpp.o [ 95%] Linking CXX executable myproj CMakeFiles/myproj.dir/main.cpp.o: In function `cv::String::~String()': main.cpp:(.text._ZN2cv6StringD2Ev[_ZN2cv6StringD5Ev]+0x14): undefined reference to `cv::String::deallocate()' CMakeFiles/myproj.dir/main.cpp.o: In function `cv::String::operator=(cv::String const&)': main.cpp:(.text._ZN2cv6StringaSERKS0_[_ZN2cv6StringaSERKS0_]+0x28): undefined reference to `cv::String::deallocate()' collect2: error: ld returned 1 exit status CMakeFiles/myproj.dir/build.make:95: recipe for target 'myproj' failed make[2]: *** [myproj] Error 1 CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/myproj.dir/all' failed make[1]: *** [CMakeFiles/myproj.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... The curious thing is that nowhere in the code I use cv::String. I have also put #Search for dependencies set(MIN_OPENCV_VERSION "3.4.11" CACHE STRING "OpenCV version") find_package(OpenCV ${MIN_OPENCV_VERSION} REQUIRED COMPONENTS core PATHS /usr/local/opencv-${MIN_OPENCV_VERSION} NO_DEFAULT_PATH ) in several CMakeLists.txt files and cmake finds opencv What could be the problem? 
EDIT I set the VERBOSE environment variable to 1 as stated here and I got [ 90%] Building CXX object common/CMakeFiles/common.dir/src/utils/path_util.cpp.o cd /home/user/ws/src/build/common && /usr/bin/c++ -I/usr/local/include/eigen3 -isystem /usr/local/opencv-3.4.11/include -isystem /usr/local/opencv-3.4.11/include/opencv -I/home/user/ws/src/common/include -I/home/user/ws/src/common/src -isystem /usr/local -fPIC -o CMakeFiles/common.dir/src/utils/path_util.cpp.o -c /home/user/ws/src/common/src/utils/path_util.cpp [ 95%] Linking CXX executable road_info /usr/bin/cmake -E cmake_link_script CMakeFiles/myproj.dir/link.txt --verbose=1 /usr/bin/c++ -rdynamic CMakeFiles/myproj.dir/main.cpp.o -o myproj mainpub_lib/mainpub.a CMakeFiles/myproj.dir/main.cpp.o: In function `cv::String::~String()': main.cpp:(.text._ZN2cv6StringD2Ev[_ZN2cv6StringD5Ev]+0x14): undefined reference to `cv::String::deallocate()' CMakeFiles/myproj.dir/main.cpp.o: In function `cv::String::operator=(cv::String const&)': main.cpp:(.text._ZN2cv6StringaSERKS0_[_ZN2cv6StringaSERKS0_]+0x28): undefined reference to `cv::String::deallocate()' collect2: error: ld returned 1 exit status CMakeFiles/myproj.dir/build.make:95: recipe for target 'myproj' failed make[2]: *** [myproj] Error 1 make[2]: Leaving directory '/home/user/ws/src/build' CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/myproj.dir/all' failed make[1]: *** [CMakeFiles/myproj.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs....
First, thanks @fabian for the help and pointers. I finally realized that the problem was not in my main.cpp but in an hpp file that main includes. This one.hpp file included another hpp file, which was the one causing the problem (when I commented it out, the problem disappeared). So what I did was change the CMakeLists.txt of the second level (the one dealing with the problematic hpp file) and added target_link_libraries(mainpub PRIVATE ${OpenCV_LIBS}) With this the problem was solved. To get a verbose make I did export VERBOSE=1 because my make version is old. With the verbose output I could see that mainpub was the only library linked. What strikes me as strange is that mainpub was apparently being built fine even with the problem; instead, the main build was the one signaled as problematic.
72,633,758
72,634,064
Is there a way to static_assert a variable reference given in a template parameter?
struct Config { int version = 1; }; template<Config& config /* , ... */> struct Peripheral { const Config config_ = config; static_assert(config_.version > 1, "Config version must be greater than 1"); /* ... */ }; Config myConfig; int main() { myConfig.version = 5; Peripheral<myConfig> peripheral; } I want to check at compile-time if the configurations given to my template are correct. So I am trying to cast my reference to a constant instance in order to try to use it in my static_assert, but I get the error: 'invalid use of non-static data member ...' Is there a way to check the values of a non-type parameter at compile-time in this case? Or do you have other suggestions to achieve this goal?
If you want the value of myConfig to be used at compile-time, then you should mark it constexpr and give it its value directly in the initializer. Whether it is a static or automatic storage duration variable is then secondary: constexpr Config myConfig = { .version = 5 }; // alternatively before C++20 for example // constexpr Config myConfig = { 5 }; /*...*/ Peripheral<myConfig> peripheral; Then the template should take the parameter by-value, not by-reference, and you should use that template parameter directly in the static_assert: template<Config config /* , ... */> struct Peripheral { static_assert(config.version > 1, "Config version must be greater than 1"); /* ... */ }; Depending on what else is in Config or if you are not using C++20 or later, the type might not be allowed as by-value template parameter. In that case you can keep using a reference parameter (although I am not sure that this is good design) but you would need to make it const Config& instead to match the const implied by constexpr. In this case it does matter that myConfig has static storage duration (specifically it may be declared at namespace scope, as a static data member or since C++17 as static local variable). If you want to keep on using a local config_ copy to assert on, that copy should also be marked constexpr (const is not enough to make a variable usable at compile-time) and hence must also be marked static (because non-static data members cannot be declared constexpr). The value of the template parameter and static_assert cannot be (potentially) determined at run-time, so myConfig.version = 5; will never work.
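A compilable sketch of the reference variant; C++17 is assumed (the by-value variant needs C++20):

```cpp
#include <cassert>

struct Config { int version = 1; };

// Reference non-type template parameter. The referenced object must be a
// constexpr variable with static storage duration; since C++17 internal
// linkage is allowed here. The by-value form `template<Config config>`
// additionally requires C++20.
template<const Config& config>
struct Peripheral {
    static_assert(config.version > 1, "Config version must be greater than 1");
    static constexpr int version() { return config.version; }
};

constexpr Config myConfig{5};   // value fixed at compile time, not at run time
```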
72,634,961
72,656,463
Template with STL algorithms slows down function a lot
Because pre-computing some keys into a std::vector saved me some time in the subsequent std::sort (before that, the keys were recomputed every time), and I wanted to reuse the pattern in different places, I tried to template this code: void myFunction() { QList<const Object*> objects = getObjectsList(); const SomeCapturedType* someCapturedType = getCapturedType(); typedef std::pair<double, const Object*> Pair; typedef std::vector<Pair> Transformed; Transformed transformed = Transformed(objects.length()); std::transform(objects.begin(), objects.end(), transformed.begin(), [someCapturedType](const Object* obj) { return std::make_pair(someCapturedType->lenghtyComputation(obj), obj); }); std::sort(transformed.begin(), transformed.end()); std::transform(transformed.begin(), transformed.end(), objects.begin(), [](Pair pair) { return pair.second; }); } into this code: template <class T1, class T2, class Lambda> void transformThenSortList(QList<T1>& objects, Lambda&& callback) { typedef std::pair<T2, T1> Pair; typedef std::vector<Pair> Transformed; Transformed transformed = Transformed(objects.length()); std::transform(objects.begin(), objects.end(), transformed.begin(), [callback](T1 obj) { return std::make_pair(callback(obj), obj); }); std::sort(transformed.begin(), transformed.end()); std::transform(transformed.begin(), transformed.end(), objects.begin(), [](Pair pair) { return pair.second; }); } void myFunction() { QList<const Object*> objects = getObjectsList(); const SomeCapturedType* someCapturedType = getCapturedType(); transformThenSortList<const Object*, double>(objects, [someCapturedType](const Object* obj) { return someCapturedType->lenghtyComputation(obj); }); } but the time taken by my function just exploded. Do you have any idea why?
OK, so I changed my template to this for more generality: template <class T1, class T2, class Lambda> void transformThenSortList(QList<T1>& objects, Lambda&& transformLambda) { typedef std::pair<T2, T1> Pair; typedef std::vector<Pair> Transformed; Transformed transformed = Transformed(objects.length()); std::transform( objects.begin(), objects.end(), std::make_move_iterator(transformed.begin()), [&transformLambda](const T1& obj) { return std::make_pair(transformLambda(obj), obj); }); std::sort(transformed.begin(), transformed.end()); std::transform(transformed.begin(), transformed.end(), objects.begin(), [](Pair& pair) { return std::move(pair.second); }); } However, when I tried switching back to the original code I got the expected time measurements, so I have no idea why, at the time of posting, I measured more than 1 minute for code supposed to run in about 12 seconds. I immediately posted because I assumed there were issues with my template, but as pointed out in the comments there are none, since I'm using pointers. Thanks for your comments, which made me improve my code.
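For reference, a Qt-free sketch of the same pattern, with std::vector standing in for QList and the by-reference lambda capture applied:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Sort `objects` by a key computed exactly once per element.
// (std::vector replaces QList here so the sketch is self-contained.)
template <class T, class KeyFn>
void sortByComputedKey(std::vector<T>& objects, KeyFn&& key) {
    using Pair = std::pair<decltype(key(objects[0])), T>;
    std::vector<Pair> transformed;
    transformed.reserve(objects.size());
    for (const T& obj : objects)                    // precompute the keys
        transformed.emplace_back(key(obj), obj);
    std::sort(transformed.begin(), transformed.end(),
              [](const Pair& a, const Pair& b) { return a.first < b.first; });
    std::transform(transformed.begin(), transformed.end(), objects.begin(),
                   [](Pair& p) { return std::move(p.second); });
}
```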
72,635,511
72,635,729
I can copy or pass by value unique pointers, how is this possible?
What I've read from multiple sources states that: An unique_ptr cannot be copied to another unique_ptr, passed by value to a function, or used in any C++ Standard Library algorithm that requires copies to be made. However, I seem to be able to do all those things. #include <iostream> #include <memory> int test(int copy) { std::cout << "copy: " << copy << std::endl; return copy; } int main() { std::unique_ptr<int> uniquePtr1(new int(4)); std::cout << "*uniquePtr1: " << *uniquePtr1 << std::endl; int num1 = *uniquePtr1; std::cout << "num1: " << num1 << std::endl; int num2 = test(*uniquePtr1); std::cout << "num2: " << num2 << std::endl; int *ptr1 = new int; *ptr1 = *uniquePtr1; std::cout << "*ptr1: " << *ptr1 << std::endl; std::unique_ptr<int> uniquePtr2(new int); *uniquePtr2 = *uniquePtr1; std::cout << "*uniquePtr2: " << *uniquePtr2 << std::endl; delete ptr1; } Output: *uniquePtr1: 4 num1: 4 copy: 4 num2: 4 *ptr1: 4 *uniquePtr2: 4 Am I missing something? Links are also appreciated.
You indeed can't copy a unique pointer, this won't compile: std::unique_ptr<int> uniquePtr1(new int(4)); std::unique_ptr<int> uniquePtr2(uniquePtr1); You'd need to move uniquePtr1 into uniquePtr2: std::unique_ptr<int> uniquePtr2(std::move(uniquePtr1)); What your code is doing is copying the value pointed to by the pointer. A useful analogy would be a pointer being equivalent to a box and the value of that pointer being a ball in the box. A unique box can't hold the same ball as any other box but it is possible to look at the ball in the first unique box and add a ball of the same colour to another unique box. You now have two unique boxes, both containing red balls but not the same red ball. If you were now to paint one of the balls blue the other ball would still be red. Expanding this analogy to shared pointers (or bare pointers) requires some balls that can exist in multiple places at the same time. With a shared box you can have the same ball in two different shared boxes and painting one ball will also change the colour of the ball in the other box. Back to code: std::unique_ptr<int> uniquePtr1(new int(4)); std::cout << *uniquePtr1; // 4 std::unique_ptr<int> uniquePtr2(new int(*uniquePtr1)); std::cout << *uniquePtr1; // 4 std::cout << *uniquePtr2; // 4 *uniquePtr1 = 5; std::cout << *uniquePtr1; // 5 std::cout << *uniquePtr2; // still 4 Though uniquePtr1 and uniquePtr2 point to ints with the same value they are separate objects and updating one doesn't update the other. With shared pointers we can copy the pointer rather than the value: std::shared_ptr<int> sharedPtr1(new int(4)); std::cout << *sharedPtr1; // 4 std::shared_ptr<int> sharedPtr2(new int(*sharedPtr1)); // copy the value into a new pointer std::shared_ptr<int> sharedPtr3(sharedPtr1); // copy the pointer std::cout << *sharedPtr1; // 4 std::cout << *sharedPtr2; // 4 std::cout << *sharedPtr3; // 4 *sharedPtr1 = 5; std::cout << *sharedPtr1; // 5 std::cout << *sharedPtr2; // still 4 std::cout << *sharedPtr3; // 5
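The points above condensed into a runnable sketch:

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Copying the pointed-to value vs. moving the pointer itself.
int unique_ptr_demo() {
    std::unique_ptr<int> a(new int(4));
    int copyOfValue = *a;                  // copies the int, not the pointer
    std::unique_ptr<int> b(new int(*a));   // a second, independent int
    *b = 5;                                // does not affect *a
    std::unique_ptr<int> c = std::move(a); // transfers ownership of the 4
    assert(a == nullptr);                  // a no longer owns anything
    return copyOfValue + *b + *c;          // 4 + 5 + 4
}
```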
72,635,668
72,636,109
A Simple Gradient Effect
I need to code the fragment shader so that the triangle has a simple gradient effect. That is, so that its transparency decreases from left to right. I tried this but it fails: #version 120 uniform float startX = gl_FragCoord.x; void main(void) { gl_FragColor[0] = 0.0; gl_FragColor[1] = 0.0; gl_FragColor[2] = 1.0; gl_FragColor[3] = startX / gl_FragCoord.x; } The full code: #include <cstdlib> #include <iostream> using namespace std; #include <GL/glew.h> #include <SDL.h> GLuint program; GLint attribute_coord2d; bool init_resources(void) { GLint compile_ok, link_ok = GL_FALSE; GLuint vs = glCreateShader(GL_VERTEX_SHADER); const char* vs_source = R"( #version 120 attribute vec2 coord2d; void main(void) { gl_Position = vec4(coord2d, 0.0, 1.0); } )"; glShaderSource(vs, 1, &vs_source, NULL); glCompileShader(vs); glGetShaderiv(vs, GL_COMPILE_STATUS, &compile_ok); if (!compile_ok) { cerr << "Error in vertex shader" << endl; return false; } GLuint fs = glCreateShader(GL_FRAGMENT_SHADER); const char* fs_source = R"( #version 120 uniform float startX = gl_FragCoord.x; void main(void) { gl_FragColor[0] = 0.0; gl_FragColor[1] = 0.0; gl_FragColor[2] = 1.0; gl_FragColor[3] = startX / gl_FragCoord.x; } )"; glShaderSource(fs, 1, &fs_source, NULL); glCompileShader(fs); glGetShaderiv(fs, GL_COMPILE_STATUS, &compile_ok); if (!compile_ok) { cerr << "Error in fragment shader" << endl; return false; } program = glCreateProgram(); glAttachShader(program, vs); glAttachShader(program, fs); glLinkProgram(program); glGetProgramiv(program, GL_LINK_STATUS, &link_ok); if (!link_ok) { cerr << "Error in glLinkProgram" << endl; return false; } const char* attribute_name = "coord2d"; attribute_coord2d = glGetAttribLocation(program, attribute_name); if (attribute_coord2d == -1) { cerr << "Could not bind attribute " << attribute_name << endl; return false; } return true; } void render(SDL_Window* window) { glClearColor(1.0, 1.0, 1.0, 1.0); glClear(GL_COLOR_BUFFER_BIT); glUseProgram(program); 
glEnableVertexAttribArray(attribute_coord2d); GLfloat triangle_vertices[] = { 0.0, 0.8, -0.8, -0.8, 0.8, -0.8, }; glVertexAttribPointer(attribute_coord2d, 2, GL_FLOAT, GL_FALSE, 0, triangle_vertices); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(attribute_coord2d); SDL_GL_SwapWindow(window); } void free_resources() { glDeleteProgram(program); } void mainLoop(SDL_Window* window) { glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); while (true) { SDL_Event ev; while (SDL_PollEvent(&ev)) { if (ev.type == SDL_QUIT) return; } render(window); } } int main(int argc, char* argv[]) { SDL_Init(SDL_INIT_VIDEO); SDL_Window* window = SDL_CreateWindow("My First Triangle", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL); SDL_GL_CreateContext(window); GLenum glew_status = glewInit(); if (glew_status != GLEW_OK) { cerr << "Error: glewInit: " << glewGetErrorString(glew_status) << endl; return EXIT_FAILURE; } if (!init_resources()) return EXIT_FAILURE; mainLoop(window); free_resources(); return EXIT_SUCCESS; } How to do it right?
You cannot initialize a uniform with gl_FragCoord.x; a uniform's initializer must be a constant expression, and gl_FragCoord varies per fragment. Declare it without an initializer, uniform float startX; and set the uniform with glUniform1f if you need it. Also, gl_FragCoord.xy is not the vertex coordinate; it is the window coordinate in pixels. You have to divide gl_FragCoord.xy by the size of the viewport: #version 120 void main(void) { gl_FragColor = vec4(0.0, 0.0, 0.0, gl_FragCoord.x / 640.0); } Or pass coord2d to the fragment shader: #version 120 attribute vec2 coord2d; varying vec2 coord; void main(void) { coord = coord2d; gl_Position = vec4(coord2d, 0.0, 1.0); } #version 120 varying vec2 coord; void main(void) { float alpha = (coord.x + 1.0) / 2.0; gl_FragColor = vec4(0.0, 0.0, 1.0, alpha); } Or use a color attribute: #version 120 attribute vec2 coord2d; attribute vec4 attrColor; varying vec4 color; void main(void) { color = attrColor; gl_Position = vec4(coord2d, 0.0, 1.0); } #version 120 varying vec4 color; void main(void) { gl_FragColor = color; } attribute_color = glGetAttribLocation(program, "attrColor"); GLfloat triangle_colors[] = { 0.0f, 0.0f, 1.0f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f }; glEnableVertexAttribArray(attribute_color); glVertexAttribPointer(attribute_color, 4, GL_FLOAT, GL_FALSE, 0, triangle_colors);
72,635,920
72,649,934
Extra memory consumption between l += c and l = c + l?
I was solving a question in which I have to add a single character to the front of a string multiple times, so I just used string l = ""; char c = 'x'; l = c + l; but when I run it, it exceeds the memory limit. Instead, when I used string l = ""; char c = 'x'; l += c; reverse(l.begin(), l.end()); it ran successfully. I want to know why this is happening.
As others have mentioned adding a char to the front of the string will create a new string and copy every single time while adding a char to the back will grow the capacity in larger steps and only copy occasionally. But both ways use <= 2N memory. The used memory for both ways isn't too different. The issue with adding to the front is time, not space. What the test probably notices though is fragmentation of the free memory. In the first case the libc will allocate a block of memory for every size of string 1 - N and promptly free it again. But the blocks used for smaller strings can't be reused for larger strings unless 2 adjacent small strings are freed and merged into a larger block of reusable memory. For the last step you have a string of size N-1 and size N. If the malloc simply uses a single heap then best case you need 3N memory and worst case you have a free block of memory of size N-1, the string of size N-1, free block of size N-1, string of size N, free block of size N-1. So overall 3N - 5N memory. But the libc probably has an optimized malloc using different memory pools for different allocations sizes (8, 16, 32, 64, ... bytes). Then the blocks allocated for smaller strings will never be reused for larger strings and you end up with 2 blocks of size 8, 16, 32, 64, ... each. Or 2N log_2 N of unusable memory. Although at larger size (multiple of page size) malloc will mmap and munmap blocks needed for the string and show 0 overhead. But for smallish N you easily end up with >20N memory usage. For the l += c and reverse case the same problem exists, but you have far fewer allocations as the string grows in larger steps. For the simple malloc you still need maybe 3-4N memory and for the optimized malloc you probably end up with only N log_2 N memory usage (assuming the string doubles in size). Or 10N where the first method had 20N. 
In conclusion: Both methods can run with about the same memory but the inability or inefficiency of reusing freed memory will make a big difference on the allocated total memory. It's the overhead of the memory system that kills you, not the memory used by the strings. If you really want to minimize memory usage then use reserve on the string to make it allocate the final size of the string at the start. Then add all the characters and last reverse it. Then you truly only need N memory.
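The reserve-then-reverse approach described in the conclusion can be sketched in a few lines. The function name and the idea of taking the characters from a source string are illustrative, not from the question; the point is one up-front allocation, back-insertion only, and a single reverse at the end:

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// Simulates repeatedly prepending each character of src to a result string,
// but does it as append + one final reverse, with a single allocation.
std::string prepend_all(const std::string& src) {
    std::string l;
    l.reserve(src.size());              // allocate the final size once
    for (char c : src)
        l += c;                         // amortized O(1) append, no whole-string copy
    std::reverse(l.begin(), l.end());   // one reverse restores front-insertion order
    return l;
}
```

Prepending 'a', then 'b', then 'c' yields "cba", which is exactly what appending "abc" and reversing produces.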
72,635,933
72,636,053
Why can't we declare std::function with auto
I have the following code: template<typename T> concept con1 = requires(T t, std::string s){ { t[s] } -> std::same_as<std::string>; }; using function_signature = std::function<void (con1 auto functor)>; // ERROR! while the compiler has no problem with me defining the lambda directly: auto lambda_1 = [](con1 auto functor){....}; The reason why I want the former to work is the following: template<std::semiregular T> class R { T functor; R() = default; void register_functor(T functor_) { functor = std::move(functor_); } }; If I get to instantiate my class using the signature, like auto rr = R<function_signature>();, I can register my functor at a later stage, and even change the function at run time as long as I keep the signature the same. Using the lambda directly means I am stuck with whatever my lambda is at the time I instantiate the class R.
auto in a lambda parameter list doesn't represent one single automatically-inferred type like it would in a variable initialization; it means that the lambda has a templated operator() with a whole parameterized family of function signatures. You can't instantiate a template that expects a concrete type (and std::function does) with a parameterized family of types. You could create a parameterized family of typedefs, each formed by instantiating std::function: template<con1 T> using function_signature = std::function<void (T)>; But this doesn't get you any closer to being able to write R<function_signature>. For that, you'd need a template template parameter, template<template<typename> class T> class R;, and then R needs to somehow provide the type parameter T2 in T<T2> functor. In the end, it comes down to std::function being a wrapper to a pointer-to-member function (among other flavors), and a pointer-to-member function cannot point to a whole template family of member functions. This also fails: auto lambda_1 = [](con1 auto functor){....}; auto pmf = &decltype(lambda_1)::operator(); // error: cannot take the address of a member function template Put another way, type-erasure doesn't work on templates. Instantiating the template requires its type.
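The "one concrete signature per wrapper" point can be checked directly: a generic lambda can be stored in a std::function, but only after committing to a concrete signature, which instantiates one specialization of its templated operator(). This sketch drops the con1 concept from the question and uses an unconstrained generic lambda:

```cpp
#include <functional>
#include <string>

// A generic lambda has a templated operator(); wrapping it in std::function
// forces one instantiation per concrete signature.
inline auto make_doubler() {
    return [](auto x) { return x + x; };
}

// Each wrapper type-erases a *different* instantiation of the same lambda.
inline std::function<int(int)> int_doubler = make_doubler();                    // operator()<int>
inline std::function<std::string(std::string)> str_doubler = make_doubler();    // operator()<std::string>
```

There is no single std::function type that could hold the uninstantiated template itself, which is exactly why the alias in the question cannot exist.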
72,636,331
72,636,683
Using nested class as parent class template parameter?
I have a nested class definition that I wanted to pass down as the parent class (class containing the nested class, not the class that's being inherited) template parameter. Since the template does not seem to be aware of the nested class's existence, I tried to pass it down as an incomplete type, only to read later that doing so is usually a bad idea and is only rarely permitted (such as in the case of shared_ptr). I know this can be very easily solved by simply declaring the nested class externally (which is what I do), but I am asking this because I want to learn if there is any way of achieving the same effect, since I am fond of how nested classes do not pollute the namespace and how they are "associated" with the parent class definition without exposing anything. Here's an example to perhaps make it more clear what I am talking about: #include <memory> using namespace std; template <class T> class shared_class { protected: shared_ptr<T> d = make_shared<T>(); T* container() { return d.get(); } }; // A is the "parent" class. class A : public shared_class<C> { // Compiler error: C is undefined. // Similarly, A::C will complain that no C is in A. class C { // Nested class public: int i = 0; }; }; Is there another way of doing this with templates in a way that the entire definition of the nested class is contained entirely within the parent class definition?
There is no solution for your specific case with the requirements you gave, as far as I can tell. Directly using C as template argument can't work in any way, simply because at that point the compiler hasn't seen yet that C is declared as a member class of A and because there is no way to declare the nested class before that point. It doesn't work in your specific case, but with some limitations, you could add an indirection to the type of the shared_class template parameter via a traits template or directly via an alias declaration, which would allow you to delay the determination that T is supposed to be C until after the definition of A and keep C as nested class. Unfortunately that doesn't work if you want to use the type T of shared_class in one of the declarations that would be instantiated with the class template specialization, here shared_ptr<T> d and T* container(). The latter could be saved by declaring the return type auto, but the former can't. The idea would be to let shared_class use typename T::shared_class_type everywhere instead of T. The class A would be defined as class A : public shared_class<A> { public: class C { public: int i = 0; }; using shared_class_type = C; }; Alternatively you would define a template<typename T> struct shared_class_traits;, potentially with a default implementation containing using shared_class_type = typename T::shared_class_type; to be potentially specialized for A after its definition with a using shared_class_type = A::C; member, so that shared_class can use typename shared_class_traits<T>::shared_class_type everywhere. This is more flexible than "reserving" a specific member name of all classes using shared_class_type. But as I explained above it will not work as long as shared_class uses shared_class_type at all in a context that is instantiated with the class template specialization. 
So inside a member function body or default member initializer would be fine, but in a data member or a member function type is not.
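A minimal sketch of the limited working variant described above, where the nested type is only named inside a member function body. The names (make_container, shared_class_type) follow the answer's convention; the key is that member function bodies are not instantiated with the class template specialization, so the lookup of T::shared_class_type is delayed until after A is complete:

```cpp
#include <cassert>
#include <memory>

// The base template names T::shared_class_type only inside a member function
// body; that body is instantiated on first use, after A is fully defined.
template <typename T>
struct shared_class {
    auto make_container() {
        return std::make_shared<typename T::shared_class_type>();
    }
};

class A : public shared_class<A> {
public:
    class C {              // nested class stays entirely inside A
    public:
        int i = 0;
    };
    using shared_class_type = C;
};
```

A data member such as shared_ptr<typename T::shared_class_type> d; would break this, as the answer explains, because data member declarations are instantiated together with the class template specialization, at a point where A is still incomplete.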
72,636,361
72,636,392
Edit Object in Vector and return Vector
I am new to programming. I am trying to make a Banking application, where a User enters their name and gets a Username set. I am messing around with Classes for the first time. I am trying to pass a std::vector to a function to add Data into it. Do I have to return the values that I want to set into the Vector? Or can I just edit the Vector in the subfunction, since a Vector is stored on the Heap? How can I modify the Attributes of an Object in a Vector? class Account {int something{}; }; Account Frank; Frank.something =.... How can I do that in Vectors?
You can pass a non-const reference of your vector to your function, e.g. void add_value(std::vector<int>& values, int value) { values.push_back(value); } // later std::vector<int> values; add_value(values, 5); // values now contains {5} If you have a vector of objects you can first index one of them, then call a method or attribute std::vector<Account> accounts; // ... gets filled ... accounts[i].do_something(); // call method accounts[i].name; // access public member
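To modify every object already stored in a vector, iterating by reference also works and avoids index bookkeeping. The Account member here mirrors the question's sketch; note that iterating by value would only modify copies:

```cpp
#include <cassert>
#include <vector>

struct Account {
    int something = 0;
};

// Iterating with Account& modifies the stored elements in place;
// `for (Account a : accounts)` would modify throwaway copies instead.
void set_all(std::vector<Account>& accounts, int value) {
    for (Account& a : accounts)
        a.something = value;
}
```

Combined with passing the vector by non-const reference as shown above, the caller's vector is updated directly and nothing needs to be returned.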
72,637,060
72,637,347
Temporary lifetime extension mixed with copy elision object on clang
I have an issue in a project of mine that uses aggregate types to extend the lifetime of temporaries in a relatively safe manner by making aggregates that contain references uncopyable and unmovable; however, mandatory copy/move elision (C++17) doesn't care if an object is copyable or movable. This is all well and good as, in my mind, the copy/move should never really happen as there actually should only be one object. In my case this object has a reference that extends the lifetime of some temporary and, to my knowledge, the temporary should only be destroyed when the aggregate that holds the reference is destroyed. The following code is a simplified example of the problem; notice that here B is indeed copyable, but it could as well not be and the same result would follow. #include <iostream> struct K { K() { std::cout << "K::K()" << std::endl; } K(K const&) { std::cout << "K::K(K const&)" << std::endl; } K(K&&) { std::cout << "K::K(K&&)" << std::endl; } ~K() { std::cout << "K::~K()" << std::endl; } }; struct B { K const& l; ~B() { std::cout << "B::~B()" << std::endl; } }; int main() { B b = B{ K{} }; std::cout << "end of main" << std::endl; (void)b; } The code above has different behavior in different compilers. MSVC and GCC will destroy the temporary K{} only after B b, while Clang will destroy K{} at the end of the expression. My question is: Is the code presented here invoking UB? If not, who is correct, MSVC and GCC or Clang? And is this issue known? Just as a note: to make B not copyable in C++17 it suffices to declare the copy-constructor as deleted and it will still be an aggregate. In C++20 this has changed (don't ask me why) again!... and you need to include a non-copyable member in the aggregate as p1008r1 shows (great solution!).
This looks like a bug in Clang. With mandatory copy elision B b = B{ K{} }; should be fully equivalent to B b{K{}}; and lifetime extension of the K object to the lifetime of b applies there since it is aggregate initialization. No other temporary B object exists which could contain a reference which is bound to the temporary K object first, and I don't see any exception in the lifetime extension rules that could be relevant. There is an exception which applies through the mandatory copy elision in a return statement, so e.g. returning B{K{}} from a function to initialize B b will not work to extend the lifetime of the K object, but I think it is obvious that this couldn't work. I could not find any matching issue on the LLVM issue list at https://github.com/llvm/llvm-project/issues with a quick search. You might want to consider reporting it. There was a related CWG issue 1697 asking what the behavior should be prior to C++17 given optional copy elision, but that was closed with the copy elision being made mandatory. I am not sure what the intended behavior is prior to C++17. This does sound kind of dangerous though, since the lifetime may change if someone chooses to compile with -std=c++14 instead, and generally the lifetime extension rule for aggregate initialization is kind of non-obvious. In particular it does not apply to C++20 parenthesized aggregate initialization.
72,637,402
72,639,636
How do I write a hash function for an unordered_map that takes a pair as key, but return the same value if I switch the order of the pairs members?
I'm trying to create an std::unordered_map that takes a std::pair as key, and returns a size_t as value. The tricky part for me is that I want the custom hash function for my map to disregard the order of the members of the key std::pair. I.e.: std::pair<int,int> p1 = std::make_pair(3, 4); std::pair<int,int> p2 = std::make_pair(4, 3); std::unordered_map<std::pair<int,int>, int> m; m[p1] = 3; // m[p2] should now also return 3! This is not a clear cut MWE but it's a cut out of what I'm trying to do in my program: #include <vector> #include <string> #include <iostream> #include <algorithm> #include <memory> #include <unordered_map> #include <functional> class Point { public: static size_t id_counter; size_t id; Point()=default; ~Point()=default; bool operator==(const Point& rhs) { return id == rhs.id; } friend std::ostream& operator<<(std::ostream& os, Point& P); }; size_t Point::id_counter = 0; class Hasch_point_pair { public: size_t operator()(const std::pair<Point*, Point*>* p) const { // XOR hash. We don't care about collision we're FREAKS auto h1 = std::hash<size_t>()(p->first->id); auto h2 = std::hash<size_t>()(p->second->id); return h1^h2; } }; int main(int argc, char const *argv[]) { auto p1 = std::make_unique<Point>(); auto p2 = std::make_unique<Point>(); auto p3 = std::make_unique<Point>(); auto p4 = std::make_unique<Point>(); std::unordered_map<std::pair<Point*, Point*>*, size_t*, Hasch_point_pair> m; auto p = std::make_unique<std::pair<Point*, Point*>>(p1.get(),p2.get()); auto p_hmm = std::make_unique<std::pair<Point*, Point*>>(p2.get(),p1.get()); size_t value = 3; m[p.get()] = &value; std::cout << "m[p] = " << m.at(p.get()) << std::endl; std::cout << "m[p_hmm] = " << m.at(p_hmm.get()) << std::endl; } One thought I had was to compare the id's of each Point and always use the Point with the largest id member variable as the first hash, but I haven't gotten it to work. Does it make sense?
class Hasch_point_pair { public: size_t operator()(const std::pair<Point*, Point*>* p) const { if (p->first->id > p->second->id) { auto h1 = std::hash<size_t>()(p->first->id); auto h2 = std::hash<size_t>()(p->second->id); return h1^h2; } else { // Note switched order of hash1 and hash2! auto h2 = std::hash<size_t>()(p->first->id); auto h1 = std::hash<size_t>()(p->second->id); return h1^h2; } } };
Using a custom class for equality testing: class Equal_point_pair { public: bool operator()( const std::pair<Point*, Point*>* p1, const std::pair<Point*, Point*>* p2) const { // Check whether both pairs are in the same order const bool p1Asc = p1->first->id < p1->second->id; const bool p2Asc = p2->first->id < p2->second->id; // If both pairs are in the same order, compare the same members // Otherwise, compare swapped members... return p1Asc == p2Asc ? *p1->first == *p2->first && *p1->second == *p2->second : *p1->first == *p2->second && *p1->second == *p2->first; } }; Note that the above code does what I think you want to do... Also I haven't tested the code. Then your map would be declared like that: using PointMap = std::unordered_map< std::pair<Point*, Point*>*, size_t*, Hasch_point_pair, Equal_point_pair>; PointMap m; By the way, not sure why you are using (nested) pointers...
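Putting the two pieces together with plain value keys instead of pointers, a self-contained sketch: normalize each pair to (min, max) in both the hash functor and the equality functor, so {3, 4} and {4, 3} hash to the same bucket and compare equal. Plain int keys stand in here for the Point ids; the functor names are illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <utility>

using IntPair = std::pair<int, int>;

struct SymmetricHash {
    std::size_t operator()(const IntPair& p) const {
        const int lo = std::min(p.first, p.second);
        const int hi = std::max(p.first, p.second);
        // Hash the normalized (lo, hi) order so (3,4) and (4,3) collide on purpose.
        const std::size_t h1 = std::hash<int>()(lo);
        const std::size_t h2 = std::hash<int>()(hi);
        return h1 ^ (h2 << 1);
    }
};

struct SymmetricEqual {
    bool operator()(const IntPair& a, const IntPair& b) const {
        // std::minmax yields the members in ascending order for both sides.
        return std::minmax(a.first, a.second) == std::minmax(b.first, b.second);
    }
};

using SymmetricMap = std::unordered_map<IntPair, int, SymmetricHash, SymmetricEqual>;
```

The hash and equality functors must agree: any two keys the equality functor calls equal must produce the same hash, which the shared (lo, hi) normalization guarantees here.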
72,637,849
72,639,008
googletest SetUpTestSuite() does not run
From: https://google.github.io/googletest/advanced.html#sharing-resources-between-tests-in-the-same-test-suite . I am using googletest version 1.11. I am trying to utilize this feature in the following tests: Game_test.h class Game_test : public :: testing :: Test { protected: Game_test() = default; virtual ~Game_test() = default; public: void SetUpTestCase() { Field field(8, 8); Engine engine; Rules rules; GameLogic glogic(&engine, &rules, &field); } }; cpp : I expected that it would automatically run SetUpTestCase() for each TEST_F, but it does not. What am I missing? TEST_F(Game_test, apply_rule) { field.setStatus(1, 2, true); // use of undeclared identifier..... } P.S. initially I used SetUpTestSuite(), later I tried SetUpTestCase(), which is in the example
Several things: The example is SetUpTestSuite, not SetUpTestCase. SetUpTestSuite should be a static member. field should be a static member of the class if used in SetUpTestSuite. SetUpTestSuite runs once per test suite, not once per test case. If you want something to run once per test case, use SetUp, which is a non-static member function. SetUp can then manipulate non-static member variables. See this example that shows the usage of both functions: class Game_test : public testing::Test { protected: Game_test() = default; virtual ~Game_test() = default; public: static void SetUpTestSuite() { std::cout << "========Beginning of a test suit ========" << std::endl; static_field = std::string("AAAA"); } void SetUp() override { std::cout << "========Beginning of a test ========" << std::endl; object_field = std::string("AAAA"); } static std::string static_field; std::string object_field; }; std::string Game_test::static_field; TEST_F(Game_test, Test1) { EXPECT_EQ(static_field, std::string("AAAA")); EXPECT_EQ(object_field, std::string("AAAA")); // We change object_field, SetUpTestSuite cannot reset it back to "AAAA" because // it only runs once at the beginning of the test suite. static_field = std::string("BBBB"); // Although we change object_field here, // SetUp will reset it back to "AAAA" at the beginning of each test case. object_field = std::string("BBBB"); } TEST_F(Game_test, Test2) { EXPECT_EQ(static_field, std::string("BBBB")); EXPECT_EQ(object_field, std::string("AAAA")); } Live example: https://godbolt.org/z/e6Tz1xMr1
72,638,117
72,638,280
Different ways of opening and binding a UDP socket with Boost Asio c++
I'm trying to create a simple UDP broadcast class in c++ using the Boost Asio library. Specifically, in the main class I'd like to instantiate a socket to both send and receive data. But I've seen three different ways of doing so, and I wanted to ask if anyone knew the difference? These are the methods I've seen: The first after creating the socket using a io_context, opens it: socket.open(udp::v4()); I've read somewhere that it works also in receiving after sending a packet, because calling socket.send(...) automatically binds the socket to a local endpoint (i.e. host address and a random port); but at this point anyone wanting to send a packet to this specific socket, how would be able to do so if the local endpoint is kind of "generated random" (the port is not known..). The second method I've seen is first opening the socket then binding it to a local endpoint: socket.open(udp::v4()); socket.bind(local_endpoint); Finally the third method, consists of creating the socket with already a local endpoint, and use it without calling open(): udp::socket socket(io_context, local_endpoint); So what would be the difference between the three, and would they all work? What would be the best way? Thank you in advance!
The first method will create a socket without binding it to a specific port. This is fine if you don't care about someone initiating the messages with you: you send a message to a recipient, and they can reply back because they received the sender's IP and port along with the message. If you want someone to be able to message you on a specific IP and port, you can initialize your socket like so: socket_(io_service, udp::endpoint(udp::v4(), port))
72,638,289
72,653,736
ESP32 SPI - SPI.h library provided by Arduino
I got a question regarding the SPI.h driver which is available in Arduino IDE examples. it seems there is only a function for transmission and there is no function for receiving data using SPI. Here is the function used for transfer: uint8_t transfer(uint8_t data); which is defined in this class: uint8_t SPIClass::transfer(uint8_t data) { if(_inTransaction){ return spiTransferByteNL(_spi, data); } return spiTransferByte(_spi, data); } and here is the implemention of the function: uint8_t spiTransferByte(spi_t * spi, uint8_t data) { if(!spi) { return 0; } SPI_MUTEX_LOCK(); spi->dev->mosi_dlen.usr_mosi_dbitlen = 7; spi->dev->miso_dlen.usr_miso_dbitlen = 7; spi->dev->data_buf[0] = data; #if CONFIG_IDF_TARGET_ESP32C3 || CONFIG_IDF_TARGET_ESP32S3 spi->dev->cmd.update = 1; while (spi->dev->cmd.update); #endif spi->dev->cmd.usr = 1; while(spi->dev->cmd.usr); data = spi->dev->data_buf[0] & 0xFF; SPI_MUTEX_UNLOCK(); return data; } Is the value returned by this function, data, the byte sent by SPI Slave ?? I mean is the buf[0] & 0xFF value the received value from the slave side? it should be so strange if the SPI.h driver does not have a function to receive the value from the Slave side.
Short answer is yes, that could be the data sent back from the Slave. A more detailed answer: the function transfer() in SPI is bi-directional. As SPI has separate output (MOSI) and input (MISO) lines, when you clock out one byte, it also clocks in one byte from the Slave. Technically you can receive data while sending data. But more often, you need to tell the slave what kind of data you want from it. So it is common, for example, that if you send a one-byte command (to tell the slave what you want to read) and expect 3 bytes of data in return, you might need to call something like transfer(buffer, 4) with a 4-byte uint8_t buffer, sending 4 bytes (one command byte and 3 dummy bytes) in order to clock the 3 bytes of data back in. How this is actually implemented varies from device to device, so you need to consult the datasheet of the device you are working with.
72,638,308
72,651,964
How do I simulate C#'s {get; set;} in C++?
I am experimenting with lambda functions and managed to recreate a "get" functionality in C++. I can get the return value of a function without using parentheses. This is an example class, where I implement this: using namespace std; struct Vector2 { float x; float y; float length = [&]()-> float {return sqrt(x * x + y * y); }(); float angle = [&]()-> float {return atan2(y, x); }(); Vector2() : x(0), y(0) {} Vector2(float a, float b) : x(a), y(b) {} ~Vector2() {} Vector2(Vector2& other) : x(other.x), y(other.y) {} Vector2(Vector2&& other) = delete; void operator =(Vector2&& other) noexcept{ x = other.x; y = other.y; } }; int main() { Vector2 vec = Vector2(10, 17); printf("%f\n%f\n%f\n%f\n", vec.x, vec.y, vec.length, vec.angle); } However, I am currently trying to also recreate the "set" functionality that C# has. But I'm failing. I tried to add this: void angle = [&](float a)->void { float l = length; x = cos(a) * l; y = sin(a) * l; }; But am getting "Incomplete type is not allowed" error. I'm not sure if that's how it should look, even if I wasn't getting the error. Is it even possible to recreate the "set" functionality C# has in C++? I know that I can just use a method SetAngle(float a){...}, but that's not really the point.
While other solutions also seem to be possible, this one seems to be the most elegant :P using namespace std; struct Vector2 { float x; float y; float init_length = [&]()-> float {return sqrt(x * x + y * y); }(); float init_angle = [&]()-> float {return atan2(y, x); }(); __declspec(property(get = GetAngle, put = SetAngle)) float angle; __declspec(property(get = GetLength, put = SetLength)) float length; Vector2() : x(0), y(0) {} Vector2(float a, float b) : x(a), y(b) {} ~Vector2() {} Vector2(Vector2& other) : x(other.x), y(other.y) {} Vector2(Vector2&& other) = delete; void operator =(Vector2&& other) = delete; void Display() { printf("%f\n%f\n%f\n%f\n\n", x, y, length, angle); } float GetLength() { return sqrt(x * x + y * y); } float GetAngle() { return atan2(y, x); } void SetLength(float l) { float a = GetAngle(); x = cos(a) * l; y = sin(a) * l; } void SetAngle(float a) { float l = GetLength(); x = cos(a) * l; y = sin(a) * l; } }; int main() { Vector2 vec = Vector2(10, 17); vec.Display(); vec.length = 5; vec.Display(); }
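Worth noting that __declspec(property) is an MSVC extension. A portable sketch of the same effect, which is my illustration rather than part of the answer, uses a small proxy object whose operator= routes through the setter and whose conversion operator routes through the getter:

```cpp
#include <cassert>
#include <cmath>

struct Vector2 {
    float x = 0, y = 0;

    float get_length() const { return std::sqrt(x * x + y * y); }
    float get_angle() const { return std::atan2(y, x); }
    void set_length(float l) {
        const float a = get_angle();
        x = std::cos(a) * l;
        y = std::sin(a) * l;
    }

    // Proxy: reading converts via the getter, assigning calls the setter.
    struct LengthProperty {
        Vector2& v;
        operator float() const { return v.get_length(); }
        LengthProperty& operator=(float l) { v.set_length(l); return *this; }
    };
    LengthProperty length() { return LengthProperty{*this}; }
};
```

With this, vec.length() = 5.0f; writes through the setter and float l = vec.length(); reads through the getter; the extra parentheses are the price of portability compared to the __declspec version.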
72,638,320
72,639,121
How to check if characters forming string are different length
I want to create a function that takes in a string as a parameter and checks if the number of occurrences of the individual letters are different. "OBDO" should display NO, because O occurs twice, but B and D occur once. "AABBB" should display YES, because A occurs twice, and B occurs three times. My code seems to work, but my code auto checker wont accept it, out of 4 tests it only passes once. I believe this can be done way better and shorter. Can anyone advise? #include <iostream> #include <string> #include <algorithm> #include <vector> #include <set> using namespace std; struct custom_comparator { bool operator()(const std::pair<int, int>& a, const std::pair<int, int>& b) const { return less_comparator(std::minmax(a.first, a.second), std::minmax(b.first, b.second)); } std::less<std::pair<int, int>> less_comparator; }; int main() { string word; string notDupes = ""; vector<int> amount; vector<pair<char,int>> para; std::set<std::pair<char, int>, custom_comparator> unique; vector <int> items; vector <char> toIterate; string result; while(cin>>word) { if(word.length() > 100) { return 0; } for( int x=0;x<word.length();x++) { int isUpper = isupper(word[x]); if(!isUpper) { return 0; } } for(int i =0;i<word.length();i++) { int times = std::count(word.begin(), word.end(), word[i]); para.push_back({word[i], times}); } for(int pl=0;pl<para.size();pl++) { unique.insert(para[pl]); } for (auto p : unique) { toIterate.push_back(p.first); items.push_back(p.second); } auto it = std::unique(items.begin(), items.end()); bool wasUnique = (it == items.end()); if(wasUnique) { result = "YES"; } else { result = "NO"; } cout << result << endl; } return 0; }
You can reduce the whole thing (if I understood the question correctly) to just a few rather simple steps: Sort the input string, so duplicate characters end up next to each other. Produce a list (or vector) of the occurrence counts of each distinct character. Test if the list (or vector) of those occurrence counts contains duplicates. If there are duplicates, output "NO", else output "YES". Step 1 is easily done using std::sort. Step 2 is not quite as obvious, but a std::accumulate (similar to reduce and also often called fold) can help with that, if we maintain the current counting and listing state in the accumulator of the operation while iterating over the sorted string's characters. Step 3 we can do by using a std::set<size_t>, inserting the resulting list from step 2 into it and then comparing the size of the set with the size of the list. In C++, this can look like this: #include <iostream> #include <string> #include <functional> #include <algorithm> #include <numeric> #include <vector> #include <set> struct Accumulator { char current; size_t count; std::vector<size_t> occurrences; Accumulator(char c, size_t count, std::vector<size_t> occs) : current{c} , count{count} , occurrences{occs} { } }; int main (int argc, const char* argv[]) { if (argc >= 2) { std::string s = argv[1]; auto sorted_s = s; std::sort(sorted_s.begin(),sorted_s.end(),std::less<char>()); Accumulator x = std::accumulate(sorted_s.cbegin() + 1, sorted_s.cend(), Accumulator(*sorted_s.cbegin(),1,{}), [](Accumulator acc, char c) { if (c == acc.current) { return Accumulator(c, acc.count + 1, acc.occurrences); } else { auto acc1 = Accumulator(c,1,acc.occurrences); acc1.occurrences.push_back(acc.count); return acc1; } }); x.occurrences.push_back(x.count); std::set<size_t> deduped; for (auto& k : x.occurrences) { deduped.insert(k); } if (deduped.size() == x.occurrences.size()) { std::cout << "Yes" << std::endl; } else { std::cout << "No" << std::endl; } } else { std::cout << "no input." << std::endl; } return 0; } And it is easily ported to other languages, offering about the same facilities. Here, for example, the less verbose implementation in Common Lisp: (defun unique-occurrence-counts-of-chars (s) (let ((sorted-s (sort (copy-seq s) #'char<))) (let ((x (reduce #'(lambda (acc c) (if (char= (first (first acc)) c) (list (list c (+ (second (first acc)) 1)) (second acc)) (list (list c 1) (cons (second (first acc)) (second acc))))) (subseq sorted-s 1) :initial-value (list (list (aref sorted-s 0) 1) '())))) (let ((y (cons (second (first x)) (second x)))) (let ((deduped-y (remove-duplicates y))) (if (= (length y) (length deduped-y)) "yes" "no")))))) and even terser in Haskell: import qualified Data.Set as Set import Data.List unique_occurrence_counts_of_chars:: String -> String unique_occurrence_counts_of_chars s = let folder ((current, count),occs) c = if current == c then ((c, count + 1), occs) else ((c, 1), (count : occs)) in let sorted_s = sort s x = foldl folder ((head sorted_s, 1), []) (tail sorted_s) y = (snd (fst x)) : (snd x) yDeduped = Set.fromList y in if ((length . Set.toList) yDeduped == length y) then "Yes" else "No"
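The same three steps can also be written without sorting at all, as an alternative sketch to the accumulate approach above: count occurrences with a frequency map, then check that all counts are distinct by deduplicating them in a set (function name is my own):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Returns true if every distinct character occurs a different number of times.
bool counts_all_distinct(const std::string& s) {
    std::map<char, int> counts;
    for (char c : s)
        ++counts[c];                    // frequency per distinct character
    std::set<int> distinct;
    for (const auto& kv : counts)
        distinct.insert(kv.second);
    return distinct.size() == counts.size();  // a duplicate count shrinks the set
}
```

For "AABBB" the counts are {A: 2, B: 3}, all distinct, so the program would print "YES"; for "OBDO" they are {O: 2, B: 1, D: 1}, where 1 repeats, so "NO".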
72,638,416
72,638,655
Destructor of child class not being called
I'm currently working on a game as a project for my University. It's being made in C++ with SDL2. I have a vector that holds pointers of the class Enemies, which is an abstract parent class of the Plant class. In the constructor of my enemy manager, I am pushing back a pointer to a Plant object into the enemies vector. m_pEnemies.push_back(new Plant(projectileManager)); At some point, the plant comes in contact with a projectile (the detection works fine and all), and I run this piece of code in order to remove it from the vector: Enemies* temp{ m_pEnemies[i] }; m_pEnemies[i] = m_pEnemies.back(); m_pEnemies.back() = temp; delete m_pEnemies.back(); m_pEnemies.pop_back(); The enemy is destroyed during the game, and I am getting no run-time errors for illegal memory accessing, but there are memory leaks. When I placed a breakpoint, it showed that the destructor of the Plant object does not get called. It should be when I pop the back of the vector, but for some reason it doesn't.
I got the answer from Philipp's comment! I had forgotten about virtual destructors, thanks for that! And for everyone that is mentioning "smart pointers", we haven't covered them in our course so I am not allowed to use them :/.
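A minimal sketch of the fix referred to here: giving the base class a virtual destructor makes delete through an Enemies* run the derived destructor too. The static counter is just instrumentation for the example, not part of the game code:

```cpp
#include <cassert>

struct Enemies {
    virtual ~Enemies() = default;  // virtual: deleting via a base pointer is now safe
};

struct Plant : Enemies {
    static int destroyed;
    ~Plant() override { ++destroyed; }
};
int Plant::destroyed = 0;

// Without the virtual destructor above, `delete e` would destroy only the
// Enemies subobject, skip ~Plant(), and leak whatever Plant owns.
void destroy_via_base() {
    Enemies* e = new Plant;
    delete e;  // calls ~Plant(), then ~Enemies()
}
```

This is exactly the situation in the question: the vector holds Enemies*, so every delete goes through the base pointer.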
72,638,542
72,639,412
Return Type Resolver and ambiguous overload for 'operator='
I've copied code from this wiki and it works. The problem occurs when I make this code: int main() { std::set<int> random_s = getRandomN(10); std::vector<int> random_v; random_v = getRandomN(10); std::list<int> random_l = getRandomN(10); } My compiler (gcc trunk) prints out this error: error: ambiguous overload for 'operator=' (operand types are 'std::vector<int, std::allocator<int> >' and 'getRandomN') 43 | random_v = getRandomN(10); I don't understand why the C++ compiler can't simply take copy operator= but try to match operator=(initializer_list<value_type> __l) and operator=(vector&& __x) instead. Here is my solution to this problem, which I don't like, but I can't think of anything else: The functional cast: random_v = std::vector<int>(getRandomN(10)); Clearly, the type must be repeated twice. Private inheritance and forwarding required methods from the parent: template<typename T> class my_vector : private std::vector<T> { public: using std::vector<T>::end; using std::vector<T>::insert; my_vector<int>& operator=(my_vector<int> rhs) noexcept { std::swap(*this, rhs); return *this; } }; my_vector<int> random_v1; random_v1 = getRandomN(10); Clearly, I don't use std::vector<int> anymore... The whole code: godbolt
The issue is that, from the conversion operator's signature alone, the compiler can't tell if it should convert getRandomN(10) to a std::initializer_list<int> and then assign that to random_v or convert getRandomN(10) to a std::vector<int> and then assign that to random_v. Both involve exactly one user-defined conversion, and so neither is a better choice from the compiler's point of view. Of course, once you look at the body of the conversion operator it becomes clear that std::initializer_list<int> won't work, since it has no insert member function, but that's too late. The compiler makes its choice for overload resolution before looking at the body of the function. The way to make this work is to make it clear that std::initializer_list<int> isn't the right choice from the signature alone. If you have access to C++20 concepts that's pretty easy: template <typename T> concept BackInsertable = requires(T t) { t.insert(std::end(t), 0); }; class getRandomN { size_t count; public: getRandomN(int n = 1) : count(n) {} // ------ vvvvvvvvvvvvvv ------ NOTE HERE template <BackInsertable Container> operator Container () { Container c; for(size_t i = 0;i < count; ++i) c.insert(c.end(), rand()); // push_back is not supported by all standard containers. return c; } }; Without concepts you'll need to use other SFINAE tricks to make that operator invalid. Here's one possible implementation that works all the way back to C++11: template <typename T> using BackInsertable = decltype(std::declval<T&>().insert(std::end(std::declval<T&>()), 0)); class getRandomN { size_t count; public: getRandomN(int n = 1) : count(n) {} template <typename Container, BackInsertable<Container>* = nullptr> operator Container () { Container c; for(size_t i = 0;i < count; ++i) c.insert(c.end(), rand()); // push_back is not supported by all standard containers. return c; } };
72,638,625
72,638,749
C++ function to check whether value exists in tuple
I'm a total beginner to C++, and I am trying to code a program that checks user input to make sure it is a valid option. Here's my code so far: #include <iostream> #include <string> #include <tuple> int UserInputCheck() { int x; cout << "Options: 1,2,3 or q. \n \n Choose an option:"; cin >> x; tuple<int,int,int,string> valid_options{ 1, 2, 3, "q"}; do { cout << "\nInvalid input. Please try again."; cin >> x; } while (x is not in valid_options); // This is psuedo-code, I'm looking for a function that does this cout << x << "\n"; return 0; { So is there a C++ function that would check if x is in valid_options? If not, how can I write one?
static inline bool isValid(int input) { static const std::vector<int> valids = {1, 2, 3}; return std::any_of(valids.begin(), valids.end(), [&input](const auto &s) { return input == s; }); } This function can do the trick. EDIT: String version static inline bool isValid(const std::string &input) { static const std::vector<std::string> valids = {"1", "2", "3", "q"}; return std::any_of(valids.begin(), valids.end(), [&input](const auto &s) { return input.find(s) != std::string::npos; }); }
72,638,827
72,638,946
How to detect block devices on Linux?
With C++ on Linux, how does one detect block devices? Right now, I'm using this code: for (const auto &entry : std::filesystem::directory_iterator("/dev/")) { std::string name = entry.path().filename().string(); if (name.find("sd") == 0 || name.find("nvme") == 0 || name.find("hd") == 0 || name.find("vd") == 0 || name.find("xvd") == 0) { std::cout << "Found device: " << entry.path() << std::endl; } } Which works well enough in practice, but almost certainly isn't the way it's "supposed to be done". And it isn't perfect either, as it misses losetup devices because I didn't include "loop", it also misses Network Block Devices because I didn't include "nbd".
std::filesystem::directory_entry has an is_block_file() method for this exact purpose: Checks whether the pointed-to object is a block device. For example: for (const auto &entry : std::filesystem::directory_iterator("/dev/")) { if (entry.is_block_file()) { std::cout << "Found device: " << entry.path() << std::endl; } }
72,638,829
72,638,934
Interpretation of access decoration of member functions
In C++11 and later, one can decorate a member function with &, const&, or && (or other combinations). If one has several overloads, and at least one is specified like this, the others must follow the same convention. Pre-C++11 style: struct A { void f() const {} // #1 void f() {} // #2 }; C++11 and later: struct A { void f() const& {} // #3 void f() & {} // #4 // void f() && {} // if necessary }; Until today, I thought that #4 was equivalent to #2, but today I found a counterexample: struct A { void f() & {} // void f() && {} // commented for testing purposes, see below }; struct B { void f() {} }; ... A{}.f(); // compile error: argument discards qualifiers B{}.f(); // ok https://godbolt.org/z/qTv6hMs6e So, what is the deal? Is an undecorated (non-const) member function written in the old style equivalent to both its && and & versions depending on the context (at the calling point)? Is the code below the correct interpretation? struct B { void f() {... some body...} }; ...is the same as this? struct B { void f() & {... some body...} void f() && {... some (same) body...} }; ... which is the same as this: struct B { void f() & {... some body...} void f() && {return f();} };
The qualifiers have the exact same meaning as if they were the qualifiers on the hypothetical implicit object parameter which is passed the object expression of the member access expression. So, #4 can not be called on a prvalue, because a non-const lvalue reference can not bind to a prvalue, explaining why A{}.f(); doesn't work. (A{} is a prvalue) The old style without reference qualifier is the odd one. It behaves in overload resolution as if the implicit object parameter was an lvalue reference (const or not depending on that qualifier), but in contrast to normal function parameters it is allowed to bind to rvalues anyway for the purpose of overload resolution. So to replicate the old style unqualified behavior, you need to specify both the &-qualified overload and the &&-qualified overload (at least if the function is not also const-qualified). (There are likely some corner cases where the two qualified member functions are not 100% equivalent to one unqualified one though. I guess a simple example would be trying to take the address &B::f.)
72,639,072
72,663,388
Can't get correct value with QSettings and custom type
I am trying to read a small struct from my ini file using QSettings. Writing works fine, but when I try to read it back I always get the default value out of QVariant. This is the structure definition: struct CellSize { int font; int cell; }; inline QDataStream& operator<<(QDataStream& out, const CharTableView::CellSize& v) { out << v.font << v.cell; return out; } inline QDataStream& operator>>(QDataStream& in, CharTableView::CellSize& v) { int font, cell; in >> font >> cell; v.font = font; v.cell = cell; return in; } inline bool operator==(const CharTableView::CellSize& a, const CharTableView::CellSize& b) { return a.font == b.font && a.cell == b.cell; } Q_DECLARE_METATYPE(CharTableView::CellSize) I am writing with m_settings.setValue(Settings::TableCellSize, QVariant::fromValue<CharTableView::CellSize>(CharTableView::CellSizeSmall));. I assume this is working fine because the gibberish inside the ini file is consistent with the UI changes. My reading code is CharTableView::CellSize tableCellSize = m_settings.value(Settings::TableCellSize).value<CharTableView::CellSize>(); and it always gives me {0, 0}. I'm fairly new to Qt and, to be honest, I am a little confused by QVariant and all the metaprogramming stuff. Am I missing something? EDIT: I have tried to set some breakpoints. Basically, the first breakpoint that gets triggered is the one after I've read the value from QSettings, which is {0, 0} as always. Then after a while a breakpoint inside operator>> gets triggered and the values of font and cell inside the operator function are correct. What's happening?
I solved the issue! I had to add qRegisterMetaType<CharTableView::CellSize>() inside the CharTableView constructor.
72,639,533
72,641,309
Writing to file using overloaded operator << - problem with recursion
I'm currently learning C++ and I decided to try to write my own "log tool". My goal is that I can write just something like this: logger<<"log message"; I have this problem - when I wrote the operator << overloading function, the IDE compiler warned me that it is infinite recursion. Here is the code of the operator overloading: Logger &operator<<(Logger &logger, char *message) { logger << message << "Log message"; return logger; } And the function is declared in the class as a friend: friend Logger &operator<<(Logger &logger, char *message); Why is this code infinitely recursive? Maybe I just can't see some trivial mistake... Thank you for your answers.
In order to invoke your overloaded operator<< from anywhere, you need a Logger object on the left side and a char* pointer (BTW, it should be const char* instead) on the right side. Inside your overloaded operator<<, this statement: logger << message is trying to invoke an operator<< with a Logger object on the left side and a char* pointer on the right side. So, what do you think that is going to invoke? That's right, ITSELF! Over and over, endlessly, forever. That is what the compiler is warning you about. You need to replace logger in that statement with whatever std::ostream (or other object) the Logger class wants to write to. Not the Logger itself. For example, let's say your Logger class has a data member named m_file. Your overloaded operator<< could then look like this: Logger& operator<<(Logger &logger, const char *message) { logger.m_file << message << "Log message"; return logger; }
72,640,089
72,640,750
Getting huge random numbers in c++
For my final OOP project I have to create a "Streaming service" in C++. I am basically done with the whole program, I just have one problem. I give a rating to each movie, but when I print this rating, I just get some huge random numbers instead of the actual rating. Here is my movie class and the definition of the function I'm having trouble with. class movie : public video{ public: movie(); movie(int, std::string, int, std::string, int); float getRating(); void setRating(int); void showrating(); protected: int rating; }; movie::movie(){ rating = 0; } movie::movie(int _id, std::string _name, int _length, std::string _genre, int _rating) : video(_id, _name, _length, _genre){} void movie::showrating(){ std::cout << name <<" has been rated " << rating << " stars out of 5." << std::endl; } And here is how I used it in my main.cpp movie LordOfTheRings(0, "Lord of the Rings", 155, "Adventure", 5); movie StarWars(1, "Star Wars", 132, "SciFi", 5); movie Inception(2, "Inception", 143, "Action", 4); movie Interstellar(3, "Interstellar", 153, "SciFi", 5); movie Tenet(4, "Tenet", 135, "Action", 3); movie moviearr [5] = {LordOfTheRings, StarWars, Inception, Interstellar, Tenet}; for (int i = 0; i <= 4; i++){ moviearr[i].showrating(); } After running the program, the rating should appear in the output, but I'm just getting these random numbers instead. I'd really appreciate some help.
You forgot to set rating to anything in your constructor. That's why its value is 'random'. Change this movie::movie(int _id, std::string _name, int _length, std::string _genre, int _rating) : video(_id, _name, _length, _genre){} to this movie::movie(int _id, std::string _name, int _length, std::string _genre, int _rating) : video(_id, _name, _length, _genre), rating(_rating) {}
72,640,180
72,640,510
Why can't I access private members of class Box in operator<<?
Why can't I access private functions of class Box in ostream& operator<<(ostream& out, const Box& B){cout << B.l << " " << B.b << " " << B.h << endl; }? #include<bits/stdc++.h> using namespace std; class Box{ int l, b, h; public: Box(){ l=0; b=0; h=0; } Box(int length, int breadth, int height){ l=length; b=breadth; h=height; } bool operator < (Box &B){ if(l < B.l)return 1; else if(b < B.b && l==B.l)return 1; else if(h< B.h && b== B.b && l==B.l)return 1; else return 0; } }; ostream& operator <<(ostream& out, const Box& B){ cout << B.l << " " << B.b << " " << B.h ; return out; }
The problem is that you don't have any friend declaration for the overloaded operator<< and since l, b and h are private they can't be accessed from inside the overloaded operator<<. To solve this you can just provide a friend declaration for operator<< as shown below: class Box{ int l, b, h; //other code here as before //--vvvvvv----------------------------------------->friend declaration added here friend ostream& operator <<(ostream& out, const Box& B); }; //definition as before, except it should write to out (not cout) so it works with any output stream ostream& operator <<(ostream& out, const Box& B){ out << B.l << " " << B.b << " " << B.h ; return out; } Working demo
72,640,483
72,641,828
How can I declare this function to return a TFuture? UE5 C++
I have been trying to declare this function in my header file (using UE5 C++) and I get the compiler telling me this error: Unrecognized type 'TFuture' - type must be a UCLASS, USTRUCT, UENUM, or global delegate. [UnrealHeaderTool ParserError] static TFuture<UTexture2D*> ImportImageFromDiskAsync(UObject* Outer, const FString& ImagePath, TFunction<void()> CompletionCallback); What am I doing wrong here? Minimal Reproducible Example: #include "CoreMinimal.h" #include "PixelFormat.h" #include "UObject/NoExportTypes.h" #include "Async/Future.h" #include "TSImageLoader.generated.h" // Forward Declare Texture 2D class UTexture2D; DECLARE_LOG_CATEGORY_EXTERN(LogTextureSerializeImageLoading, Log, All); UCLASS(BlueprintType) class TEXTURESERIALIZEIO_API UTSImageLoader : public UObject { GENERATED_BODY() public: UFUNCTION(BlueprintCallable, meta = (HidePin = "Outer", DefaultToSelf = "Outer")) static TFuture<UTexture2D*> ImportImageFromDiskAsync(UObject* Outer, const FString& ImagePath, TFunction<void()> CompletionCallback); };
Okay, I figured it out. TFuture cannot be returned from a function that would be exposed to Blueprints. So removing the UFUNCTION() tag above it solves the issue.
72,640,793
72,641,198
not terminating scanf() or gets() after taking newline
I have to take three inputs in a single string. The code to take the input is: char msg_send[1000]; gets(msg_send); The input is like GET /api HTTP/1.1 id=1&name=phoenix&mail=bringchills@ppks.com That means there is a newline after the first line GET /api HTTP/1.1. The next line is an empty newline. The input-taking function should terminate after taking the 3rd newline. Also, I have to terminate the input after the first line GET /something HTTP/1.1 if the line doesn't have a /api word at the place of the /something word. But when I'm using gets(), it terminates the string after taking the GET /api HTTP/1.1 part of the input. When using scanf, it terminates after taking only the first GET word. Is there any way to take the input as a single string?
In cases like these, you should just use getchar in a loop: #define MAX_MSG_BUF 1000 char msg_send[MAX_MSG_BUF]; // you should wrap the below code in a function, like get3lines(buf, buf_size) unsigned index = 0; unsigned line_count = 0; const unsigned buf_size = MAX_MSG_BUF; do { int tmp = getchar(); if (tmp < 0) break; // end of file or error msg_send[index++] = tmp; if (tmp == '\n') { ++line_count; if (line_count == 1) { // check that first line contains /api if (!first_line_valid(msg_send, index)) break; } } } while (index < (buf_size-1) && line_count < 3); msg_send[index] = '\0'; // print error if line_count < 3 ? The first_line_valid function might look like this (note: needs #include <stdbool.h> for the bool type): bool starts_with(const char *str, size_t length, const char *prefix) { size_t i = 0; for(;;) { if (i == length || prefix[i] == '\0') return true; if (str[i] != prefix[i]) return false; // does catch end of str ++i; } } bool first_line_valid(const char *buf, size_t buf_size) { // note: buf might not be NUL-terminated so use buf_size if (starts_with(buf, buf_size, "GET /api ")) return true; if (starts_with(buf, buf_size, "GET /api/")) return true; return false; }
72,640,804
72,640,924
How to avoid errors in Vscode for putting header files in a separate directory than src
Ok so I am having an issue with errors in VSCode. Basically I decided to reorganize and move my header files into a separate folder, "include". My directory put simply is as follows: -build -include |-SDL2 |-SDL2_Image |-someHeaderFile1.h |-someHeaderFile2.h -src |-main.cpp |-someCppFile.cpp -Makefile My Makefile contains: SRC_DIR = src BUILD_DIR = build/debug CC = g++ SRC_FILES = $(wildcard $(SRC_DIR)/*.cpp) OBJ_NAME = play INCLUDE_PATHS = -Iinclude -I /include LIBRARY_PATHS = -Llib COMPILER_FLAGS = -std=c++11 -Wall -O0 -g LINKER_FLAGS = -lsdl2 -lsdl2_image all: $(CC) $(COMPILER_FLAGS) $(LINKER_FLAGS) $(INCLUDE_PATHS) $(LIBRARY_PATHS) $(SRC_FILES) -o $(BUILD_DIR)/$(OBJ_NAME) The program compiles and runs, however, my issue is with VSCode as it shows an error having the include as : #include "someHeaderFile1.h" vs #include "../include/someHeaderFile1.h" Any assistance would be appreciated.
You need to add that folder's path to the Include path. One way to do that is shown below; a screenshot is attached for each step to make the process clearer. Step 1 Press Ctrl + Shift + P. This will open a prompt with different options. Select the option saying Edit Configurations. Step 2 After selecting Edit Configurations, a page will open with different options. Scroll down to the option saying Include Path and paste the path to your include folder there. Below is the picture after adding the include folder's path to the Include Path option. Step 3 After adding the path to the include folder to the Include Path field, you can close this window and all the VSCode errors that you mentioned will no longer be there.
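The Edit Configurations page stores these settings in .vscode/c_cpp_properties.json, so the same fix can be made by hand. A sketch (the configuration name and other fields are illustrative; only includePath matters here):

```json
{
    "configurations": [
        {
            "name": "Linux",
            "includePath": [
                "${workspaceFolder}/**",
                "${workspaceFolder}/include"
            ]
        }
    ],
    "version": 4
}
```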
72,641,269
72,646,827
Vulkan Image export handle to Win32 or fd. How to reverse win32 handle to obtain image information?
When I export a Win32 handle from a Vulkan image so that it can be shared with another process, how can I recover the Vulkan image's information from that Win32 handle? I created a handle with Vulkan in one process and created an OpenGL texture from the handle. It is then shared with another process. So there is the problem above: the other process doesn't know what the image's information is. How can I create an image and then import it into shared memory? I don't know what the image information is, so how can I create an OpenGL texture object? Therefore, I want to obtain the image information contained in the handle. // Get the Vulkan texture and create the OpenGL equivalent using the memory allocated in Vulkan inline void createTextureGL(nvvk::ResourceAllocator& alloc, Texture2DVkGL& texGl, int format, int minFilter, int magFilter, int wraps, int wrapt) { vk::Device device = alloc.getDevice(); nvvk::MemAllocator::MemInfo info = alloc.getMemoryAllocator()->getMemoryInfo(texGl.texVk.memHandle); texGl.handle = device.getMemoryWin32HandleKHR({ info.memory, vk::ExternalMemoryHandleTypeFlagBits::eOpaqueWin32 }); auto req = device.getImageMemoryRequirements(texGl.texVk.image); glCreateMemoryObjectsEXT(1, &texGl.memoryObject); glImportMemoryWin32HandleEXT(texGl.memoryObject, req.size, GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, texGl.handle); glCreateTextures(GL_TEXTURE_2D, 1, &texGl.oglId); glTextureStorageMem2DEXT(texGl.oglId, texGl.mipLevels, format, texGl.imgSize.width, texGl.imgSize.height, texGl.memoryObject, info.offset); google-image-import project: ::VkImage image = *images_[0]; VkMemoryRequirements requirements; device_->vkGetImageMemoryRequirements(device_, image, &requirements); aligned_data_size_ = vulkan::RoundUp(requirements.size, requirements.alignment); uint32_t memory_index = vulkan::GetMemoryIndex(&device, log, requirements.memoryTypeBits, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT); #ifdef _WIN32 VkImportMemoryWin32HandleInfoKHR import_allocate_info{ VK_STRUCTURE_TYPE_IMPORT_MEMORY_WIN32_HANDLE_INFO_KHR, nullptr, VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT, handle, nullptr}; #elif __linux__ VkImportMemoryFdInfoKHR import_allocate_info{ VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR, nullptr, VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT, fd}; #endif VkMemoryAllocateInfo allocate_info{ VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO, // sType &import_allocate_info, // pNext aligned_data_size_ * num_images, // allocationSize memory_index}; VkDeviceMemory device_memory; LOG_ASSERT(==, log, VK_SUCCESS, device->vkAllocateMemory(device, &allocate_info, nullptr, &device_memory));
I don't know what the information of image is. Yes, you do. You created it in Vulkan. You know its size. You know its format. You know everything about the image. If you can pass a handle to this function to create an OpenGL texture, then you can pass the other information too. There is no API to retrieve any information about the image from the handle. The driver may not even be keeping such information around, since it's information you already have and is therefore redundant.
72,641,526
72,641,592
type trait with enum as specialisation
I would like to have a type trait that would be false for any parameter T except for the enum value Http::Get template<typename T> struct isGet : public std::false_type{}; template<> struct isGet<Http::Get> : public std::true_type {}; However, it seems that the C++ compiler does not allow me to specialise a template class with an enum when the first is a typename. If I instead do: template<Http T> struct isGet : public std::false_type{}; it works! Any reason why that is the case, and what are the workarounds to achieve what I want? I basically want the compiler to evaluate the type trait to false even when T is not of the enum type Http.
typename and class expect types. Http::Get is (presumably) not a type, but a value, like any other constant (42, 'A', false etc.). And you obviously cannot pass a value when a type is expected. The solution would be different depending on your use cases. For example: #include <type_traits> enum class Http { Post, Get, }; template <auto T> struct isGet : public std::false_type {}; template <> struct isGet<Http::Get> : public std::true_type {};
72,642,840
72,642,948
Cannot assign reference value to result of std::invoke
I have a lambda that either appends an object and returns it or it returns an already existing object. On GCC, I receive the error: cannot bind non-const lvalue reference of type 'T&' to an rvalue of type 'T' Here is an example: #include <iostream> #include <cstdlib> #include <functional> struct foo { int a; }; int main() { std::vector<foo> foos = {foo{22}}; std::size_t new_index = -1; bool append = (rand() & 1) == 1; //compiler error in next line foo& new_foo = std::invoke([&foos,&new_index,append](){ if(append) { new_index = foos.size(); return foos.emplace_back(); }else { new_index = 0; return foos[new_index]; } }); return new_foo.a; } In this example new_foo is supposed to be mutated after retrieving the object, hence const foo& new_foo is not an option.
The return type of a lambda is not a reference by default; you have to specify it with a trailing return type (-> foo&, -> decltype(auto)): [&foos, &new_index, append]()-> foo& { /*..*/ } or return a type which handles references (such as std::reference_wrapper): [&foos,&new_index,append]() { if(append) { new_index = foos.size(); return std::ref(foos.emplace_back()); } else { new_index = 0; return std::ref(foos[new_index]); } } Demo
72,643,091
72,643,313
How to get an element of type list by index
How can an element of a type list using L = type_list<T1, T2, ...> be retrieved by index, like std::tuple_element, preferrably in a non recursive way? I want to avoid using tuples as type lists for use cases, that require instantiation for passing a list like f(L{}). template<typename...> struct type_list {}; using L = typelist<int, char, float, double>; using T = typeAt<2, L>; // possible use case Not sure if an iteration using std::index_sequence and a test via std::is_same of the std::integral_constant version of the index is a good aproach.
I want to avoid using tuples as type lists for use cases, that require instantiation for passing a list like f(L{}) If you don't want to instanciate std::tuple but you're ok with it in unevaluated contexts, you may take advantage of std::tuple_element to implement your typeAt trait: template <std::size_t I, typename T> struct typeAt; template <std::size_t I, typename... Args> struct typeAt<I, type_list<Args...>> : std::tuple_element<I, std::tuple<Args...>> {}; // ^ let library authors do the work for you using L = type_list<int, char, float, double>; using T = typename typeAt<2, L>::type; static_assert(std::is_same<T, float>::value, "");
72,643,141
72,643,430
How to pass array of object pointers to function?
I am having trouble passing an array of object pointers from main() to a function from different class. I created an array of object pointers listPin main() and I want to modify the array with a function editProduct in class Manager such as adding new or edit object. Furthermore, I want to pass the whole listP array instead of listP[index]. How to achieve this or is there any better way? Sorry, I am very new to c++. #include <iostream> using namespace std; class Product { protected: string id, name; float price; public: Product() { id = ""; name = ""; price = 0; } Product(string _id, string _name, float _price) { id = _id; name = _name; price = _price; } }; class Manager { protected: string id, pass; public: Manager(string _id, string _pass) { id = _id; pass = _pass; } string getId() const { return id; } string getPass() const { return pass; } void editProduct(/*array of listP*/ ) { //i can edit array of listP here without copying } }; int main() { int numProduct = 5; int numManager = 2; Product* listP[numProduct]; Manager* listM[numManager] = { new Manager("1","alex"), new Manager("2", "Felix") }; bool exist = false; int index = 0; for (int i = 0; i < numProduct; i++) { //initialize to default value listP[i] = new Product(); } string ID, PASS; cin >> ID; cin >> PASS; for (int i = 0; i < numManager; i++) { if (listM[i]->getId() == ID && listM[i]->getPass() == PASS) { exist = true; index = i; } } if (exist == true) listM[index]->editProduct(/*array of listP */); return 0; }
Since the listP is a pointer to an array of Product, you have the following two options to pass it to the function. The editProduct can be changed to accept the pointer to an array of size N, where N is the size of the passed pointer to the array, which is known at compile time: template<std::size_t N> void editProduct(Product* (&listP)[N]) { // Now the listP can be edited, here without copying } or it must accept a pointer to an object, so that it can refer to the array void editProduct(Product** listP) { // find the array size for iterating through the elements } In both cases above, you will call the function as listM[index]->editProduct(listP); That being said, your code has a few issues. First, the array sizes numProduct and numManager must be compile-time constants, so that you don't end up creating a non-standard variable length array. Memory leak at the end of main as you have not deleted what you have newed. Also be aware Why is "using namespace std;" considered bad practice? You could have simply used std::array, or std::vector depending on where the object should be allocated in memory. By which, you would have avoided all these issues of memory leak as well as pointer syntax. For example, using std::vector, you could do simply #include <vector> // in Manager class void editProduct(std::vector<Product>& listP) { // listP.size() for size of the array. // pass by reference and edit the listP! } in main() // 5 Product objects, and initialize to default value std::vector<Product> listP(5); std::vector<Manager> listM{ {"1","alex"}, {"2", "Felix"} }; // ... other codes for (const Manager& mgr : listM) { if (mgr.getId() == ID && mgr.getPass() == PASS) { // ... code } } if (exist == true) { listM[index]->editProduct(listP); }
72,644,982
72,645,016
C++ Instantiate Template Variadic Class
I have this code: #include <iostream> template<class P> void processAll() { P p = P(); p.process(); } class P1 { public: void process() { std::cout << "process1" << std::endl; } }; int main() { processAll<P1>(); return 0; } Is there a way to inject a second class 'P2' into my function 'processAll', using template variadic ? Something like this : ... template<class... Ps> void processAll() { // for each class, instantiate the class and execute the process method } ... class P2 { public: void process() { std::cout << "process2" << std::endl; } }; ... int main() { processAll<P1, P2>(); return 0; } Can we iterate over each class ?
With fold expression (c++17), you might do: template<class... Ps> void processAll() { (Ps{}.process(), ...); }
72,645,270
72,645,987
Why does my function only work with lvalues?
I have a function that returns a lowercase string: constexpr auto string_to_lower_case(const std::string& string) { return string | std::views::transform(std::tolower) | std::views::transform([](const auto& ascii) { return static_cast<char>(ascii); }); } and I expect the function to return the same result whether I pass "SOME" or const std::string some("SOME"), but it doesn't. When I try to print out the result of string_to_lower_case("SOME"), I get an empty console (the output of string_to_lower_case(some) is correct): const std::string some("SOME"); for (const auto& ch : string_to_lower_case(some)) std::cout << ch;
Some issues: The temporary std::string that is created when you call the function with a char[] goes out of scope when the function returns and is then destroyed. The view you return can't be used to iterate over the string after that. You take the address of std::tolower which isn't allowed since it's not on the list of Designated addressable functions. You don't convert the char used with std::tolower to unsigned char first. If char has a negative value, it'll cause undefined behavior. Your second transformation seems redundant. An alternative is to return an actual std::string instead of a view: constexpr std::string string_to_lower_case(const std::string& string) { auto view = string | std::views::transform([](char ch){ return static_cast<char>(std::tolower(static_cast<unsigned char>(ch))); }); return {view.begin(), view.end()}; }
72,646,906
72,647,015
C++ Problem with overriding base class variable
I have 3 classes that derive from each other: class Shape { public: float r; }; class ThreeDimentional : public Shape { public: virtual float area() = 0; virtual float volume() = 0; }; class Sphere : public ThreeDimentional { public: float r; float area() { return 4*pi*pow(r, 2); } float volume() { return float(4)/3*pi*pow(r, 3); } }; In main, I create an instance pointer of ThreeDimentional and set its value with a Sphere. And then change its r to 2. I think it somehow changes r of the base class? because it returns the volume as 0. Isn't sphere supposed to override the r of base class? how can I change r of sphere? int main() { ThreeDimentional* s1 = new Sphere; s1->r = 2; cout << s1->volume() << endl; } Output: 0
You can't override data members. You can override only virtual member functions. If every Shape is supposed to have a radius, then Sphere shouldn't declare another r (which will just hide the one in Shape depending on the context from where it is named). If only a Sphere is supposed to have a radius, then it shouldn't be possible to set r through a ThreeDimentional pointer, which ought to be agnostic about what kind of ThreeDimentional object the pointer is pointing to. (In circumstances where a decision must still be taken based on the derived type nonetheless, dynamic_cast can be used.) Which of the two applies depends on your intended interpretation for "radius"/r, but typically only spheres have a radius in the strict sense.
72,647,305
72,657,037
Interrupting the execution of a method
I have a case where I need to call a method that runs indefinitely on specific occasions: obj.run() The program will have a callback that should start this method or stop it based on a received message. How can that be achieved? Note: obj doesn't seem to have a destructor and the function is meant to stop only when killing the process.
One way to achieve this is by using boost::thread::interrupt like in the code from this gist #include <boost/thread.hpp> #include <iostream> using namespace std; void ThreadFunction() { int counter = 0; for (;;) { cout << "thread iteration " << ++counter << " Press Enter to stop" << endl; try { // Sleep and check for interrupt. // To check for interrupt without sleep, // use boost::this_thread::interruption_point() // which also throws boost::thread_interrupted boost::this_thread::sleep(boost::posix_time::milliseconds(500)); } catch (boost::thread_interrupted&) { cout << "Thread is stopped" << endl; return; } } } int main() { // Start thread boost::thread t(&ThreadFunction); // Wait for Enter char ch; cin.get(ch); // Ask thread to stop t.interrupt(); // Join - wait when thread actually exits t.join(); cout << "main: thread ended" << endl; return 0; }