question_id int64 25 74.7M | answer_id int64 332 74.7M | title stringlengths 20 150 | question stringlengths 23 4.1k | answer stringlengths 20 4.1k |
|---|---|---|---|---|
72,949,462 | 72,995,909 | How to dynamically BLE advertise on ESP32 and reflect on flutter app | My ESP32-based custom-PCB BLE peripheral advertises a LiFePO4 battery's dynamic physical values, such as current or SoC (State of Charge).
Basically, the code is as follows:
/// Sets the manufacturer data on the given advertisement data
void Ble :: setAdvertisingManufacturerData(BLEAdvertisementData *advertisementData) {
const int soc = (int)battery.getSoc(); // integral: the byte-shifts below require an integer
log("Advertising Soc %d%%", soc);
const char bytes[] = {
(manCode>>8)&0xff, manCode&0xff,
// SoC: 2 bytes | 0~2 bytes
(soc>>8)&0xff, soc&0xff,
};
advertisementData->setManufacturerData(std::string(bytes, sizeof(bytes)));
}
/// Prepares the advertising manufacturer data
void Ble :: advertise() {
BLEAdvertisementData advertisementData;
advertisementData.setFlags(0x6);
setAdvertisingManufacturerData(&advertisementData);
pAdvertising = BLEDevice::getAdvertising();
pAdvertising->setScanResponse(true);
pAdvertising->setMinPreferred(0x06); // functions that help with iPhone connections issue
pAdvertising->setMinPreferred(0x12);
pAdvertising->setAdvertisementData(advertisementData);
pAdvertising->start();
}
void Ble :: setup() {
// == Start the advertising
advertise();
}
/// Dynamically advertises every second
void Ble :: loop() {
// == Dynamically advertise
static unsigned lastAdvertised = 0;
const unsigned now = millis();
if (!lastAdvertised || now - lastAdvertised > 1000) {
lastAdvertised = now;
BLEAdvertisementData scanResponse;
setAdvertisingManufacturerData(&scanResponse);
pAdvertising->stop();
pAdvertising->setScanResponseData(scanResponse);
pAdvertising->start();
}
}
So far so good. But from the Flutter app, the advertisement manufacturer data still shows SoC to be zero (aka the initial value) despite the evolving value I see in my ESP32 logs.
I probably made a mistake, any help welcome!
[UPDATE] Inspecting the advertisement with the nRF mobile app, I see there are two sections with type 0x09: the first one is "empty" while the second has the right data.
| I finally got it working; this is what makes my advertising dynamic.
I removed the setScanResponseData() call and replaced it with another call to setAdvertisementData(), as in advertise().
But I still don't get what setScanResponseData() is for.
void Ble :: loop() {
// == Dynamically advertise
static unsigned lastAdvertised = 0;
const unsigned now = millis();
if (!lastAdvertised || now - lastAdvertised > 5000) {
lastAdvertised = now;
log("Dynamic advertising");
BLEAdvertisementData scanResponse;
std::string md = getAdvertisingManufacturerData();
setAdvertisingManufacturerData(md, &scanResponse);
pAdvertising->stop();
//pAdvertising->setScanResponseData(scanResponse);
pAdvertising->setAdvertisementData(scanResponse);
pAdvertising->start();
}
}
|
72,949,593 | 72,950,434 | Float to Double adding many 0s at the end of the new double number during conversion | I'm facing a little problem on a personal project:
When I'm converting a float number to a double to make operations (+ - * /) easy, it adds a lot of 0s behind the original float number.
For example: float number = -4.1112 -> double number = -4.1112000000000002
I convert the float to a double with the standard function std::stod().
This issue is a big problem for me because I'm checking for overflow in my project, and it throws an exception because of this issue.
Here is the checkOverflow function that throws an exception:
{
if (type == eOperandType::Int8) {
if (value > std::numeric_limits<int8_t>::max() || value < std::numeric_limits<int8_t>::min())
throw VMException("Overflow");
} else if (type == eOperandType::Int16) {
if (value > std::numeric_limits<int16_t>::max() || value < std::numeric_limits<int16_t>::min())
throw VMException("Overflow");
} else if (type == eOperandType::Int32) {
if (value > std::numeric_limits<int32_t>::max() || value < std::numeric_limits<int32_t>::min())
throw VMException("Overflow");
} else if (type == eOperandType::Float) {
if (value > std::numeric_limits<float>::max() || value < std::numeric_limits<float>::min())
throw VMException("Overflow");
} else if (type == eOperandType::Double) {
if (value > std::numeric_limits<double>::max() || value < std::numeric_limits<double>::min())
throw VMException("Overflow");
}
}
| The problem you are having is completely different.
All your checks are wrong. Think about it: if a variable is of type, say, int32_t, its value is necessarily between the minimum and maximum possible values that can be represented by an int32_t, by definition. Let's simplify: it's like having a single-digit number, and testing that it is between 0 and 9 (if it is unsigned), or between -9 and +9 (if it is signed): how could such a test fail? Your checks should never raise an exception. But, as you say, they do. How is it even possible? And anyway, why would it happen for the long series of zeros that derive from representing -4.1112 as a floating point number, turning it into -4.1112000000000002? That isn't an overflow! This is a strong hint that your problem is elsewhere.
The key is that std::numeric_limits<T>::min doesn't do what you think. As cppreference explains, it gives you the smallest positive value:
For floating-point types with denormalization, min returns the minimum positive normalized value. Note that this behavior may be unexpected, especially when compared to the behavior of min for integral types. To find the value that has no values less than it, use numeric_limits::lowest.
And the page about lowest also provides an example, comparing the output of min, lowest and max:
std::numeric_limits<T>::min():
float: 1.17549e-38 or 0x1p-126
double: 2.22507e-308 or 0x1p-1022
std::numeric_limits<T>::lowest():
float: -3.40282e+38 or -0x1.fffffep+127
double: -1.79769e+308 or -0x1.fffffffffffffp+1023
std::numeric_limits<T>::max():
float: 3.40282e+38 or 0x1.fffffep+127
double: 1.79769e+308 or 0x1.fffffffffffffp+1023
And as you can see, min is positive. So the opposite of max is lowest.
So you are getting exceptions because your negative values are smaller than the smallest positive value. Or, in other words: because -4 is less than 0.0001. Which is correct. It's the test that is wrong!
You could fix that by using lowest... But then, what would your checks tell you? If they ever raised an exception, it would mean that the compiler and/or library that you are using are seriously broken. If that is what you are testing, ok. But honestly I think it will never happen, and you could just delete these tests, as they provide no real value.
|
72,949,669 | 72,950,296 | Is it possible to modify type definitions at runtime? | Is it possible to modify type definitions at runtime? For example if you were to define a class like this
class Test {
public:
int x;
int y;
};
could I remove the x or y field from the class at runtime? Or could I add more fields to this like adding a z field?
EDIT: This question is strictly out of curiosity.
| No. It is definitely impossible.
For example, suppose we update the field x in the structure Test: the size and layout must be known at compile time, because machine-code-level operations work on fixed data offsets.
class Test {
public:
int x;
int y;
};
int main(){
Test t;
t.x = 10;
return t.x - 1;
}
This compiles (without optimizations) to
main:
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-8], 10
mov eax, DWORD PTR [rbp-8]
sub eax, 1
pop rbp
ret
We access the field x by writing the value directly to the address [rbp-8] (rbp holds the current frame pointer). We subtract 8 instead of adding because the stack grows from higher addresses to lower ones.
But with the C++ standard library and some masochistic tendencies you can write something like this (holding the values in the variable values):
#include <any>
#include <iostream>
#include <unordered_map>
#include <string>
class Test {
public:
int x;
int y;
};
int main(){
std::unordered_map<std::string, std::any> values;
values.insert({"pi", 3.14f});
values.insert({"test", Test{1,1}});
std::cout<< std::any_cast<float>(values["pi"])<<std::endl;
std::cout<< std::any_cast<Test>(values["test"]).x<<std::endl;
}
|
72,950,362 | 72,951,084 | Inheriting constructors with initializer_list from multiple base classes deletes constructor | Apparently my compiler deletes my constructor for reasons I can't understand.
Compare this: the following works (Compiler Explorer):
using val = std::variant<std::monostate, int, bool>;
struct keyval
{
keyval(std::string, int) {
}
};
struct base_A
{
base_A(std::initializer_list<val>) {
}
};
struct base_B
{
base_B(std::initializer_list<keyval>) {
}
};
struct derived : public base_A//, public base_B
{
using base_A::base_A;
// using base_B::base_B;
};
int main() {
derived D = {1,2,3,4, true};
}
But if you uncomment those two lines, it's not working anymore (Compiler Explorer)!
Apparently my compiler decided to delete the variant constructor inherited from base_A because it would be ill-formed. But how so if it worked before?
<source>:34:31: error: use of deleted function 'derived::derived(std::initializer_list<std::variant<std::monostate, int, bool> >) [inherited from base_A]'
34 | derived D = {1,2,3,4, true};
|
| As per cppreference:
If overload resolution selects one of the inherited constructors when initializing an object of such derived class, then the Base subobject from which the constructor was inherited is initialized using the inherited constructor, and all other bases and members of Derived are initialized as if by the defaulted default constructor (default member initializers are used if provided, otherwise default initialization takes place).
In other words, the other base classes are required to be default-constructed (and each must have exactly one viable default constructor, otherwise resolution will be ambiguous).
In your case, how does the compiler know how to correctly initialize base_B since you don't provide a default constructor for it?
A simple fix is to make a default constructor for base_B.
struct base_B
{
base_B(std::initializer_list<keyval> = {}) {}
};
or
struct base_B
{
base_B(std::initializer_list<keyval>) {}
base_B() {}
};
Demo
|
72,950,584 | 72,950,754 | C++ lambda capture list by value or by reference doesn't give me different results | I have the below code:
std::vector<std::function<void()>> functors;
class Bar
{
public :
Bar(const int x, const int y):d_x(x),d_y(y){}
~Bar(){
cout << "Destructing Bar" << endl;
}
void addToQueue()
{
const auto job = [=](){
cout << "x:" << d_x << " y: " << d_y;
};
functors.push_back(job);
}
private :
int d_x,d_y;
};
void example()
{
cout << "Hello World" << endl;
{
shared_ptr<Bar> barPtr = make_shared<Bar>(5,10);
barPtr->addToQueue();
}
cout << "Out of scope. Sleeping" << endl;
usleep(1000);
functors[0]();
}
The output is as expected :
Hello World
Destructing Bar
Out of scope. Sleeping
x:5 y: 10
I am capturing by value, which is why I assume that when the Bar object gets destroyed, I can still access its member variables. If the above is right, I am expecting the below change to give me UB:
const auto job = [&](){
However, I still see the same result. Why is that? Have I understood something wrong?
EDIT: Further to the above, what I want to understand from this example is how I can have access to a class's member variables in a lambda function even after the object has been destroyed. I am trying to avoid UB and thought that capturing by value was the way to go, but I can't prove that the alternative doesn't work.
| This kind of confusion is probably one of the reasons why C++20 deprecated the implicit capture of this with [=]. You can still capture [this], in which case you have the usual lifetime issues with an unmanaged pointer. You can capture [*this] (since C++17), which will capture a copy of *this so you don't have lifetime issues.
You could also use std::enable_shared_from_this since you're using a std::shared_ptr in example, but that's a bit more complicated. Still, it would avoid both the copy and the UB from the lifetime issues.
|
72,951,653 | 72,962,469 | Using conan packages in CMake: Library 'mylibrary.a' not found in package | I am using cmake to build my project. I have a library, mylibrary, which is a dependency of my project. mylibrary is packaged with conan. I use the conan CMakeDeps and CMakeToolchain Generators when packaging mylibrary. This is the package_info function of mylibrary's conanfile:
def package_info(self):
self.cpp_info.set_property("cmake_find_mode", "config")
self.cpp_info.set_property("cmake_file_name", "Mylibrary")
self.cpp_info.components["libmylibrary"].set_property("cmake_target_name", "Mylibrary::Mylibrary")
self.cpp_info.components["libmylibrary"].libs = ["mylibrary.a"]
self.cpp_info.components["libmylibrary"].requires = ["gtest::gtest"]
My library is a static library with the file name libmylibrary.a. I can package the library without having any problems. The find_package call in my project's CMakeLists.txt file looks like this:
find_package(Mylibrary REQUIRED HINTS ${LLIB_DIR})
When I build my project, CMake does declare my library's target, which is mylibrary::mylibrary. But right when I run cmake, I get this error:
CMake Error at MyProject/cmakedeps_macros.cmake:4 (message):
Library 'mylibrary.a' not found in package. If 'mylibrary.a' is a system library,
declare it with 'cpp_info.system_libs' property
Call Stack (most recent call first):
MyProjectLibs/cmakedeps_macros.cmake:48 (conan_message)
MyProjectLibs/Mylibrary-Target-release.cmake:21 (conan_package_library_targets)
MyProjectLibs/MylibraryTargets.cmake:28 (include)
MyProjectLibs/MylibraryConfig.cmake:11 (include)
CMakeLists.txt:196 (find_package)
I am new to cmake's targets and I don't know what to do. I tried using uppercase and lowercase names when calling find_library, but it is not working. I suspect that I wrote something wrong in the package_info method.
| So, I did it. I don't understand why or how, but to solve my problem, I had to change this line:
self.cpp_info.components["libmylibrary"].libs = ["mylibrary.a"]
to this:
self.cpp_info.components["libmylibrary"].libs = ["mylibrary"]
|
72,951,952 | 72,959,254 | Boost serialization base class without default constructor | How to serialize/deserialize derived class inheriting base class without default constructor?
Please suggest Boost serialization functions for the following classes:
struct Base
{
Base(int b) : b(b) {}
const int b;
};
struct Derived : public Base
{
Derived(float d, int b) : Base(b), d(d) {}
const float d;
};
| Your use-case straddles two of the "special considerations" documented by Boost Serialization:
Non-default constructors
Pointers to objects of derived classes
Note that I'm going to assume you want dynamic polymorphism, and to get this you need at least a virtual destructor. If you don't, you will end up with Undefined Behaviour.
Combining the two for your example:
Live On Coliru
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/base_object.hpp>
#include <boost/serialization/serialization.hpp>
#include <boost/serialization/export.hpp>
#include <iostream>
struct Base {
Base(int b) : b(b) {}
virtual ~Base() = default;
const int b;
};
namespace boost::serialization {
template <typename Ar> inline void serialize(Ar&, Base&, unsigned) {}
template <typename Ar>
inline void save_construct_data(Ar& ar, Base const* p, unsigned) {
// save data required to construct instance
ar << p->b;
}
template <typename Ar>
inline void load_construct_data(Ar& ar, Base* p, unsigned) {
int attribute;
ar >> attribute;
// invoke inplace constructor to initialize instance
::new (p) Base(attribute);
}
} // namespace boost::serialization
struct Derived : public Base {
Derived(float d, int b) : Base(b), d(d) {}
const float d;
};
BOOST_CLASS_EXPORT(Base)
BOOST_CLASS_EXPORT(Derived)
namespace boost::serialization {
template <typename Ar> inline void serialize(Ar& ar, Derived& d, unsigned) {
ar & boost::serialization::base_object<Base>(d);
}
template <typename Ar>
inline void save_construct_data(Ar& ar, Derived const* p, unsigned) {
// save data required to construct instance
ar & p->b & p->d;
}
template <typename Ar>
inline void load_construct_data(Ar& ar, Derived* p, unsigned) {
int b;
float d;
ar & b & d;
// invoke inplace constructor to initialize instance
::new (p) Derived(d, b);
}
} // namespace boost::serialization
std::string save(Base* b) {
std::ostringstream oss;
{
boost::archive::text_oarchive oa(oss);
oa << b;
}
return oss.str();
}
Base* load(std::string txt) {
std::istringstream iss(std::move(txt));
boost::archive::text_iarchive ia(iss);
Base* b = nullptr;
ia >> b;
return b;
}
int main() {
for (Base* object :
{
new Base(-99),
static_cast<Base*>(new Derived(3.14, 42)),
}) //
{
std::cout << "----\n";
Base* roundtrip = load(save(object));
delete object;
std::cout << "roundtrip: b=" << roundtrip->b;
if (auto* as_derived = dynamic_cast<Derived const*>(roundtrip)) {
std::cout << ", d=" << as_derived->d;
}
std::cout << "\n";
delete roundtrip;
}
}
Prints
----
roundtrip: b=-99
----
roundtrip: b=42, d=3.14
SAFETY FIRST
Of course, don't use raw new/delete:
using unique_ptr Live On Coliru
using shared_ptr (note the dynamic_pointer_cast) Live On Coliru
|
72,951,953 | 73,184,794 | How to compute floating-point remainders with CGAL's exact number types? | I'm trying to get familiar with CGAL's exact number types and in the process, I'm trying to implement a function to compute the floating-point remainder of the division of two exact numbers (like std::fmod()). However, I'm wondering how to do any arithmetic with exact numbers outside of the trivial operator+, -, *, /. After searching the documentation for a while I found CGAL::div() and CGAL::mod(), but these don't work (return CGAL::Null_tag?) seemingly because they are defined for EuclideanRings only. Example code:
#include <iostream>
#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
using Kernel = CGAL::Exact_predicates_exact_constructions_kernel;
using Number = Kernel::FT;
int main() {
Number a(2.5);
Number b(1.5);
std::cout << CGAL::div(a, b) << "\n"; // error
}
Compile error:
/tmp/cgal-test/test.cpp: In function ‘int main()’:
/tmp/cgal-test/test.cpp:9:15: error: no match for ‘operator<<’ (operand types are ‘std::ostream’ {aka ‘std::basic_ostream<char>’} and ‘CGAL::Null_functor::result_type’ {aka ‘CGAL::Null_tag’})
9 | std::cout << CGAL::div(a, b) << "\n"; // error
| ~~~~~~~~~ ^~ ~~~~~~~~~~~~~~~
| | |
| | CGAL::Null_functor::result_type {aka CGAL::Null_tag}
| std::ostream {aka std::basic_ostream<char>}
Of course, a simple solution to compute a floating-point remainder would be to use CGAL::to_double() and compute std::fmod() on the results, but this may lose precision or overflow, so this would negate the benefits of using an exact number type in the first place. Another way is to do repeated subtraction, but this blows up the running time if a is big and b is small.
Could someone explain (or point me to relevant documentation explaining) what the intended way is to implement operations like this in an exact fashion?
| Your code compiles for me, but it prints 1.66667 while I expect you wanted 1? I do get a similar error if I define CGAL_USE_GMPXX=1 or CGAL_DO_NOT_USE_BOOST_MP=1, so the error depends on the type used internally for exact rationals. There is also a function integral_division.
The simplest way I can think of is to compute a/b and convert it to an exact integer type: CGAL::Exact_integer(CGAL::exact(a/b)) (using <CGAL/Exact_integer.h>, although there may be a cleaner way to get the integer type that corresponds to this rational type, possibly with Get_arithmetic_kernel). Maybe the conversion could be done with NT_converter or Coercion_traits... Anyway, the direction in which this rounds may depend on the number type, so you should check the sign you get and possibly correct the quotient by 1.
Number q(CGAL::Exact_integer(CGAL::exact(a/b)));
std::cout << q << ' ' << (a-b*q) << '\n';
We never needed this kind of operation on rationals, which explains why we didn't implement it in CGAL.
|
72,953,293 | 72,953,684 | C++ Polymorphic Array Syntax or Polymorphic Vector Syntax | So I have the main parent class called item and that class has 2 child classes called book and periodical. The ideas behind what I am trying to do is have a polymorphic array or a polymorphic vector that would be able to do something like this:
Now the example is in C# (but I want to do it in C++)
item[] items = new item[100];
items[0] = new book();
items[1] = new periodical();
for (int i = 0; i < items.Length; i++) {
items[i].read();
}
Like I said, the small example code is in C#, but I want to do this in C++ and I am not sure how to go about it. I wanted to use arrays, but in my research I haven't found a clear way to accomplish this. I also wondered if vectors could be used for this, but I was not sure about that either.
| Here is an example (if you have questions let me know):
#include <iostream>
#include <memory>
#include <vector>
class Item
{
public:
virtual ~Item() = default; // base classes with virtual methods must have a virtual destructor
virtual void read() = 0;
};
class Book final :
public Item
{
public:
void read() override
{
std::cout << "book read\n";
}
};
class Periodical final :
public Item
{
public:
void read() override
{
std::cout << "periodical read\n";
}
};
int main()
{
std::vector<std::unique_ptr<Item>> items;
// use emplace_back for temporaries
items.emplace_back(std::make_unique<Book>());
items.emplace_back(std::make_unique<Periodical>());
// range based for loop over unique_pointers in items
// use const& so item cannot be modified and & to avoid copy of unique_ptr (unique_ptr doesn't have a copy constructor)
for (const auto& item : items)
{
item->read();
}
return 0;
}
|
72,953,676 | 72,953,744 | How to use a variable inside a _T wrapper ? (MFC Dialog app C++) | I want to generate 17 service names in the List Control. How can i use a formatted string variable inside a _T wrapper ?
// TODO: Add extra initialization here
#define MAX_VALUE 17
int numberOfService = 0;
CString StringServiceName;
StringServiceName.Format(_T("Sense Counter %d"), numberOfService);
for (numberOfService; numberOfService < MAX_VALUE; numberOfService++) {
int nIndex = m_List.InsertItem(0, _T("")); //This variable i want to use in a _T wrapper - StringServiceName.Format(_T("Sense Counter %d"), numberOfService)
}
m_List.InsertColumn(0, _T("Názov služby"), LVCFMT_LEFT,150);
m_List.InsertColumn(1, _T("Status"), LVCFMT_LEFT, 90);
m_List.InsertColumn(2, _T(""), LVCFMT_LEFT, 90);
int nIndex = m_List.InsertItem(0, _T("Sense Counter 1"));
m_List.SetItemText(nIndex, 1, _T("Running"));
m_List.SetItemText(nIndex, 2, _T("✓"));
nIndex = m_List.InsertItem(1, _T("Sense Counter 2"));
m_List.SetItemText(nIndex, 1, _T("Stopped"));
m_List.SetItemText(nIndex, 2, _T("✓"));
| You can't - _T is just a macro to generate narrow or wide string constants, depending on whether you're compiling for Unicode or not.
I'm not over-familiar with CString, but perhaps you meant this:
for (numberOfService; numberOfService < MAX_VALUE; numberOfService++) {
StringServiceName.Format(_T("Sense Counter %d"), numberOfService);
int nIndex = m_List.InsertItem(0, (LPCTSTR) StringServiceName);
}
(The cast may not be necessary here, although Microsoft recommends it - try without.)
|
72,953,783 | 72,953,912 | How to efficiently initialize a std::variant data member in a class template | Consider the following class template, that can hold either a value of type T or an instance of some ErrorInfo class, using a std::variant data member:
template <typename T>
class ValueOrError
{
private:
std::variant<T, ErrorInfo> m_var;
};
How can I efficiently initialize the variant T alternative?
I can initialize it with a constructor like this:
template <typename T>
class ValueOrError
{
public:
explicit ValueOrError(const T& val)
: m_var{val}
{
}
…
};
But what syntax/coding technique can I use to enable move semantics optimization during initialization?
If I define a constructor taking a T&&, should I std::move or std::forward the parameter into the m_var?
template <typename T>
class ValueOrError
{
public:
// Efficient initialization with move semantics
explicit ValueOrError(T&& val)
: m_var{ /* ?? */ }
{
}
…
};
Note on interactions with ErrorInfo constructor overload
The ValueOrError template should also have a constructor overload that takes an ErrorInfo and initializes the variant member accordingly:
template <typename T>
class ValueOrError
{
public:
// Initialize with error code instead of T
explicit ValueOrError(const ErrorInfo& error)
: m_var{error}
{
}
…
};
It’s important that the generic T constructor overload interacts properly with the specific ErrorInfo overload.
ErrorInfo is a tiny class that wraps an error code (e.g. a simple integer), and can be constructed from such error code:
class ErrorInfo
{
public:
explicit ErrorInfo(int errorCode)
: m_errorCode{errorCode}
{
}
int ErrorCode() const
{
return m_errorCode;
}
// … other convenient methods
// (e.g. get an error message, etc.)
private:
int m_errorCode;
};
| A C++20 version using perfect forwarding:
#include <concepts> // std::constructible_from
template <class T>
class ValueOrError {
public:
explicit ValueOrError(const ErrorInfo& error) : m_var{error} {}
template<class... Args>
requires std::constructible_from<T, Args...>
explicit ValueOrError(Args&&... val) :
m_var(std::in_place_type<T>, std::forward<Args>(val)...)
{}
private:
std::variant<T, ErrorInfo> m_var;
};
A C++17 version, also using perfect forwarding, could look like this:
#include <type_traits> // std::is_constructible_v, std::enable_if_t
template <class T>
class ValueOrError {
public:
explicit ValueOrError(const ErrorInfo& error) : m_var{error} {}
template<class... Args,
std::enable_if_t<std::is_constructible_v<T, Args...>, int> = 0>
explicit ValueOrError(Args&&... val)
: m_var(std::in_place_type<T>, std::forward<Args>(val)...) {}
private:
std::variant<T, ErrorInfo> m_var;
};
Example usages:
class foo { // A non default constructible needing 3 constructor args
public:
foo(double X, double Y, double Z) : x(X), y(Y), z(Z) {}
private:
double x, y, z;
};
int main() {
ValueOrError<foo> voe1(1., 2., 3.); // supply all three arguments
// use the string constructor taking a `const char*`:
ValueOrError<std::string> voe2("Hello");
std::string y = "world";
// use the string constructor taking two iterators:
ValueOrError<std::string> voe3(y.begin(), y.end());
}
|
72,953,855 | 72,954,448 | Are struct scalar members zero-initialized when using value-initialization on a struct with a default non-trivial-constructor | If I have the following struct
struct test
{
char *x;
std::string y;
};
And I initialize with
test *t = new test();
That should value-initialize the object and do the following based on the standard:
if T is a (possibly cv-qualified) class type without a user-provided or deleted default constructor, then
the object is zero-initialized and the semantic constraints for default-initialization are checked, and if T
has a non-trivial default constructor, the object is default-initialized;
struct test has a non-trivial default constructor (because of the std::string member); this can be verified with:
static_assert(!std::is_trivially_constructible<test>::value, "test is not trivially constructible");
Does the standard imply that my test object should always be zero-initialized in the case of value-initialization, and then subsequently default-initialized?
And should I be able to reliably assume that after doing test *t = new test() if I immediately check t->x == nullptr, that should be true because char *x (a pointer / scalar type) should get zero-initialized during value-initialization of test.
I ask because Coverity gives a Type: Uninitialized pointer read (UNINIT) warning because it reports the following if you try do something like if (t->x) after value-intialization:
Assigning: "t" = "new test", which is allocated but not initialized.
Is Coverity misinterpreting the standard as "value-initialization if trivial constructor OR default-initialization if non-trivial constructor"? If I remove the std::string y; member so that test has a trivial-default-constructor, Coverity no longer has a warning and assumes the char *x member is zero-initialized.
For what it's worth, I'm just using g++ -O3 -std=c++17 to compile and I have not been able to create an actual scenario where zero-intialization doesn't happen for my test object.
| The warning is not correct for modern C++ (including C++17), as explained below.
should I be able to reliably assume that after doing test *t = new test(); if I immediately check t->x == nullptr, that should be true because char *x (a pointer / scalar type) should get zero-initialized during value-initialization of test.
Yes, it is guaranteed by the standard that x is zero-initialized and hence the check t->x == nullptr must evaluate to true. This can be seen from dcl.init#6 which states:
To zero-initialize an object or reference of type T means:
if T is a (possibly cv-qualified) non-union class type, its padding bits are initialized to zero bits and each non-static data member, each non-virtual base class subobject, and, if the object is not a base class subobject, each virtual base class subobject is zero-initialized;
(emphasis mine)
|
72,954,344 | 72,955,355 | Is there a way to teach gtest to print user-defined types using libfmt's formatter? | I'm wondering if there is a way to make gtest understand a user-defined type's libfmt formatter, in order to print readable error output.
I know how I can teach gtest to understand user-defined types via adding the stream insertion operator operator<< for this very user-defined type, e.g.
std::ostream& operator<<(std::ostream& stream, CpuTimes const& anything);
std::ostream& operator<<(std::ostream& stream, CpuStats const& anything);
This is well documented here.
But I heavily use libfmt, which requires a formatting function to be implemented to produce a readable and printable output of a user-defined type. This so-called fmt::formatter is actually a template specialization, e.g.
namespace fmt {
template <> struct formatter<CpuTimes> : basics::fmt::ParseContextEmpty {
format_context::iterator format(CpuTimes const& times, format_context& ctx);
};
template <> struct formatter<CpuStats> : basics::fmt::ParseContextEmpty {
format_context::iterator format(CpuStats const& stats, format_context& ctx);
};
template <> struct formatter<CpuLimits> : basics::fmt::ParseContextEmpty {
format_context::iterator format(CpuLimits const& limits, format_context& ctx);
};
} // namespace fmt
For gtest to understand this format, you have to write the same implementation for the operator<< over and over for every type you want gtest to print properly, like
std::ostream& operator<<(std::ostream& stream, CpuTimes const& anything) {
return stream << fmt::format("{}", anything);
}
std::ostream& operator<<(std::ostream& stream, CpuStats const& anything) {
return stream << fmt::format("{}", anything);
}
std::ostream& operator<<(std::ostream& stream, CpuLimits const& anything) {
return stream << fmt::format("{}", anything);
}
Is there a way to spare me writing this boilerplate code?
| To avoid problems caused by the ambiguity of PrintTo or operator<<, a namespace is needed.
Here is a demo where a namespace is used:
#include <fmt/format.h>
#include <gmock/gmock.h>
#include <gtest/gtest.h>
namespace me {
struct Foo {
int x = 0;
double y = 0;
};
bool operator==(const Foo& a, const Foo& b)
{
return a.x == b.x && a.y == b.y;
}
}
template <>
struct fmt::formatter<me::Foo> {
char presentation = 'f';
constexpr auto parse(format_parse_context& ctx) -> decltype(ctx.begin())
{
auto it = ctx.begin(), end = ctx.end();
if (it != end && (*it == 'f' || *it == 'e'))
presentation = *it++;
if (it != end && *it != '}')
throw format_error("invalid format");
return it;
}
template <typename FormatContext>
auto format(const me::Foo& p, FormatContext& ctx) -> decltype(ctx.out())
{
return presentation == 'f'
? format_to(ctx.out(), "({}, {:.1f})", p.x, p.y)
: format_to(ctx.out(), "({}, {:.1e})", p.x, p.y);
}
};
#if VERSION == 1
namespace me {
template <typename T>
void PrintTo(const T& value, ::std::ostream* os)
{
*os << fmt::format(FMT_STRING("{}"), value);
}
}
#elif VERSION == 2
namespace me {
template <typename T>
std::ostream& operator<<(std::ostream& out, const T& value)
{
::std::operator<<(out, fmt::format(FMT_STRING("{}"), value));
return out;
}
}
#endif
class MagicTest : public testing::Test { };
TEST_F(MagicTest, CheckFmtFormater)
{
EXPECT_EQ(fmt::format("{}", me::Foo {}), "(0, 0.0)");
}
TEST_F(MagicTest, FailOnPurpuse)
{
EXPECT_EQ(me::Foo {}, (me::Foo { 1, 0 }));
}
Dropping the me namespace causes all implementations to have ambiguity problems.
Note that the namespace is what makes argument-dependent lookup (ADL) find these overloads.
|
72,954,488 | 72,954,672 | gMock Visual Studio test crashes when using EXPECT_CALL | When running my test from the Test Explorer in Visual Studio 2022 [17.2.0], testhost.exe crashes when my unit test has a gMock EXPECT_CALL call in it. What am I doing wrong?
Minimal code:
#include <CppUnitTest.h>
#include <CppUnitTestAssert.h>
#include <gmock/gmock.h>
#include <gtest/gtest.h>
using namespace Microsoft::VisualStudio::CppUnitTestFramework;
using ::testing::Return;
class MockCow
{
public:
MOCK_METHOD0(Moo, int());
};
TEST_CLASS(the_unit_test)
{
TEST_METHOD(the_test_method)
{
MockCow cow;
EXPECT_CALL(cow, Moo())
.WillOnce(Return(42));
cow.Moo();
}
};
If the EXPECT_CALL call is commented out, the unit test will not crash.
I created this project by making an Empty C++ project, then installing the NuGet gMock package, latest stable version 1.11.0, and adding library directories/dependencies in the project settings. The project configuration type is Dynamic Library (.dll).
For C/C++ Additional Include Directories, I have:
$(VCInstallDir)Auxiliary\VS\UnitTest\include;
For Linker Additional Library Directories, I have:
$(VCInstallDir)Auxiliary\VS\UnitTest\lib;
For Linker Input Additional Dependencies, I have:
x64\Microsoft.VisualStudio.TestTools.CppUnitTestFramework.lib
It compiles and links successfully, so the above is just for completeness. I have also tried installing previous versions of the gMock NuGet package, but I get the same behavior.
Faulting module name: MyTest.dll_unloaded, version: 0.0.0.0, time stamp: 0x62cd8255
Exception code: 0xc0000005
Fault offset: 0x000000000011e8f7
Faulting process id: 0x8b04
Faulting application start time: 0x01d895fa0d052ac3
Faulting application path: C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\Extensions\TestPlatform\testhost.exe
There are other posts about gMock crashes that may be tangentially related, but they either don't have solutions or are missing details and were never updated.
|
You are mixing two very different test frameworks.
More importantly, you never initialize the Google Test framework.
TEST_CLASS and TEST_METHOD belong to another test framework (Microsoft's CppUnitTestFramework); they do not perform the pre- and post-test work Google Test requires, so Google Mock is not supposed to work there.
Choose one framework and use it.
#include <gmock/gmock.h>
#include <gtest/gtest.h>
using ::testing::Return;
class MockCow
{
public:
MOCK_METHOD0(Moo, int());
};
TEST(the_unit_test, the_test_method) {
MockCow cow;
EXPECT_CALL(cow, Moo())
.WillOnce(Return(42));
cow.Moo();
};
int main(int argc, char *argv[]) {
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
|
72,955,067 | 72,955,535 | Cleaner way to specify type to get from a std::variant? | I've got code that can be simplified to
std::variant<float, int> v[2] = foo();
int a = std::get<decltype(a)>(v[0]);
float b = std::get<decltype(b)>(v[1]);
Obviously this can throw if foo() returns the wrong variants, but that's not my problem here. (The real code has a catch.) My problem is that the decltype(a) violates the Don't Repeat Yourself principle.
Is there a cleaner way to initialize a and b, and still throw if the types do not match expectations? In particular, I don't want a static_cast<int>(std::get<float>(v)) if the variant contains a float while I'm trying to initialize an int.
| You could wrap your call to get in a template that implicitly converts to the target type.
template<typename... Ts>
struct variant_unwrapper {
std::variant<Ts...> & var;
template <typename T>
operator T() { return std::get<T>(var); }
};
See it on coliru
|
72,955,386 | 72,956,032 | Techniques for managing application shutdown in Win32 | We have a Win32 application written using WTL (Windows Template Library), and I'm looking for patterns for exiting the application. The issue that I'm dealing with is that some of the views in the application contain resources which may take some time (measured in 1 to 2 seconds), to destroy (i.e. waiting for a thread to exit, etc)
Win32 seems to support two main lifecycle messages, WM_CLOSE and WM_DESTROY. WM_CLOSE seems to be the message you send windows to notify them that there is a request to close the application, and it gives you the opportunity to decide whether or not to continue closing. WM_DESTROY is sent when teardown is occurring, and you should just be cleaning up.
The question is, how would you handle a case where you want to keep the message loop running, while shutting down the application.
Example:
Main window receives WM_CLOSE because the user clicks on close. Let's say you don't need to prompt the user for exiting, but you then notify all of your views that a close is occurring. Let's also say that at this point, you don't want to exit the application because you need to clean up some resources, which may take a second or two, and you want the interaction to be asynchronous.
How would you notify the main window when all of the child views have completed their shutdown processes? To be more specific, is there any standard message for notifying parents that you are done cleaning up?
|
The question is, how would you handle a case where you want to keep the message loop running, while shutting down the application.
I would have the WM_CLOSE handler display a "Please wait" message to the user, and then asynchronously initiate whatever shutdown logic is needed. Do not call DefWindowProc() or DestroyWindow() yet.
Let the message loop run normally while the shutdown logic is doing its thing in the background.
When all shutdown tasks are finished, dismiss the "Please wait" message, and destroy the application window, signalling the message loop to exit, as explained by the other answers.
|
72,955,625 | 72,955,792 | objdump -t columns meaning | [ 4](sec 3)(fl 0x00)(ty 0)(scl 3) (nx 1) 0x00000000 .bss
[ 6](sec 1)(fl 0x00)(ty 0)(scl 2) (nx 0) 0x00000000 fred
the number inside the square brackets is the number of the entry in the symbol table, the sec number is the section number, the fl value are the symbol's flag bits, the ty number is the symbol's type, the scl number is the symbol's storage class and the nx value is the number of auxilary entries associated with the symbol. The last two fields are the symbol's value and its name.
I want to know if there is a way to find out which one is a local and which one is a global symbol,
the same way you can tell from the lowercase/uppercase flags when using the nm command.
EDIT:
I know about this too:
The other common output format, usually seen with ELF based files, looks like this:
00000000 l d .bss 00000000 .bss
00000000 g .text 00000000 fred
But when I use objdump -t on my object file, this is not the output format I'm getting. (I'm using g++ -c to compile and my OS is Windows 10, if this matters.)
thanks
[SOLVED]
Dumpbin.exe. Thanks for mentioning this in a comment. This one seems very clear to understand.
| You talk about nm, thus you talk about ELF files. Continue reading the manual:
The other common output format, usually seen with ELF based files, looks like this:
00000000 l d .bss 00000000 .bss
00000000 g .text 00000000 fred
The symbol is a local (l), global (g), unique global (u), neither global nor local (a space) or both global and local (!).
Before asking a question, I suggest that you practice the command invocations and compare the command outputs with the examples in the parts of the manual that you are reading.
|
72,956,610 | 72,958,056 | What is system:80 error showing while trying to copy file content? | I have this code:
#include <filesystem>
#include <fstream>
#include <iostream>
using namespace std::filesystem;
int main() {
std::error_code ec;
bool hi = copy_file("hi.txt" , "sth.txt",ec);
std::cout << ec;
return 0;
}
When I compile and run this, it throws system:80, which according to System Error Codes (0-499) is ERROR_FILE_EXISTS.
Why is this error thrown?
| The default behavior of std::filesystem::copy_file() is to fail with an error if the destination file already exists. To avoid that, you need to call the overloaded version of the function which takes a std::filesystem::copy_options parameter so you can tell it what to do with the existing destination file, eg:
bool hi = copy_file("hi.txt", "sth.txt", copy_options::skip_existing, ec);
bool hi = copy_file("hi.txt", "sth.txt", copy_options::overwrite_existing, ec);
bool hi = copy_file("hi.txt", "sth.txt", copy_options::update_existing, ec);
|
72,956,706 | 72,957,260 | How to read this C++ function | Maybe someone here could help me to understand more about C++
While reading about Unreal Engine 4, I came across this function which is used as the following
class ClassSample1
{
public:
Babu* pBabu; //0x022C
};
void SetFuncton(Babu* param1, bool param2)
{
(*(int(__fastcall**)(Babu*, bool))(*(DWORD64*)param1 + 0x38))(param1, param2);
}
What I want to know.
What will this function produce?
What datatype will this function produce?
Thank you.
|
What I want to know.
What will this function produce?
That's the fun part, from what you've shown, nobody knows!
What datatype will this function produce?
I guess the answer is "nothing", SetFunction() returns void, but this appears to be calling some kind of class parameter setter so it will probably have side effects.
Let's break this down a bit:
(int(__fastcall**)(Babu*, bool))
This declares a pointer to a pointer to a function, where the function returns int and has two parameters, one of type pointer to Babu, and one of type bool. This function should also use the __fastcall calling convention.
*(DWORD64*)param1 + 0x38
This is a compound expression which casts param1 to a pointer to DWORD64 and then reads the DWORD64 value at that address and adds 0x38 to it. Note that in the MSVC ABI, the vtable pointer is the first element of a class, so if param1 is a pointer to an instance of Babu this expression is reading the vtable pointer of Babu, and adding 0x38 to it.
Putting these together:
*(int(__fastcall**)(Babu*, bool))(*(DWORD64*)param1 + 0x38)
This says: take whatever is stored at memory address param1 (which is probably the vtable pointer), add 0x38 to it, cast this to a pointer to a pointer to a function, read this resulting address to produce a pointer to a function of the type described above. As @HolyBlackCat mentioned in the comments, this is most likely a virtual method lookup on the class Babu.
The last little bit: (param1, param2), is just the actual call to the function with param1 and param2 as arguments. Note that in any class method call, there is an implicit this pointer which gets passed as the first argument.
From all of this it's fair to deduce that class Babu has some set of virtual methods, and there's one at offset 0x38 which takes bool as its one non-implicit parameter. What happens after this is anybody's guess. At the risk of being dismissive I would consider it somewhat miraculous if it returns with your machine intact at all.
|
72,957,006 | 72,957,990 | Mapping a type to a function of that type in c++ | Let's suppose I have a character that can have 1 out of 3 states at a time(crouching, jumping and walking). For each of the 3 states I have a function of type void() that does whatever they are assigned to. I also have an enum that stores the different states and a number for each state.
class Player {
private:
enum State {
crouching = 0,
walking= 1,
jumping = 2
} state;
}
I also have an unordered map that is used to link the different states to their funtions.
class Player {
private:
std::unordered_map<int, void(Player::*)()> stateToFunc;
void playerJump(){ /* code here */ };
void playerCrouch(){ /* code here */ };
void playerWalk(){ /* code here */ };
Player() {
// other stuff
stateToFunc[0] = playerCrouch;
stateToFunc[1] = playerWalk;
stateToFunc[2] = playerJump;
}
I made it so everytime I press a certain key, the state variable will update.
My goal is that on each update I will call only the function stateToFunc[state] instead of checking manually with a switch statement.
It gives me the following error:
Error C3867 'Player::gActivated': non-standard syntax; use '&' to create a pointer to member
If I use stateToFunc[0] = & playerCrouch;, it gives me other errors. What can I do to achieve this?
| You need to do what the compiler tells you - use the & operator to get a pointer to a member method. You will also have to specify the class the methods belong to, eg:
class Player {
private:
std::unordered_map<State, void(Player::*)()> stateToFunc;
void playerJump(){ /* code here */ };
void playerCrouch(){ /* code here */ };
void playerWalk(){ /* code here */ };
Player() {
// other stuff
stateToFunc[crouching] = &Player::playerCrouch;
stateToFunc[walking] = &Player::playerWalk;
stateToFunc[jumping] = &Player::playerJump;
}
...
}
Then, to actually call the methods, you can use the ->* operator, like this:
void Player::doSomething()
{
...
(this->*stateToFunc[state])();
...
}
Alternatively, use std::function instead, with either std::bind() or lambdas, eg:
class Player {
private:
std::unordered_map<State, std::function<void()>> stateToFunc;
void playerJump(){ /* code here */ };
void playerCrouch(){ /* code here */ };
void playerWalk(){ /* code here */ };
Player() {
// other stuff
stateToFunc[crouching] = std::bind(&Player::playerCrouch, this);
stateToFunc[walking] = std::bind(&Player::playerWalk, this);
stateToFunc[jumping] = std::bind(&Player::playerJump, this);
// or:
stateToFunc[crouching] = [this](){ playerCrouch(); };
stateToFunc[walking] = [this](){ playerWalk(); };
stateToFunc[jumping] = [this](){ playerJump(); }
}
...
}
void Player::doSomething()
{
...
stateToFunc[state]();
...
}
|
72,957,178 | 72,959,794 | C++ intel_driver.hpp C1083 Cannot open include file: 'atlstr.h':No such file or directory (compiling source file main.cpp) I can't build the release | I would like to build this kdmapper project but unfortunately I can't because with 'altstr.h' has an issue in the attachment can you see the details: Kdmapper compiling and building issues
Has anybody an idea how to resolve this issue?
Thanks forward!
| Just install the MFC package in Visual Studio. https://learn.microsoft.com/en-us/cpp/mfc/mfc-and-atl?view=msvc-170
|
72,957,382 | 72,968,754 | Cannot create org.webrtc.voiceengine.WebRtcAudioManager on Android | I have a functional implementation of native (C++) WebRTC in Windows, which I'm trying to get working on every other platform now. Currently, I'm attacking Android.
When I call webrtc::CreatePeerConnectionFactory, it cannot create the java class org.webrtc.voiceengine.WebRtcAudioManager. I get the following result (followed by a crash!):
I audio_processing_impl.cc: (line 292): Injected APM submodules:
I audio_processing_impl.cc: Echo control factory: 0
I audio_processing_impl.cc: Echo detector: 0
I audio_processing_impl.cc: Capture analyzer: 0
I audio_processing_impl.cc: Capture post processor: 0
I audio_processing_impl.cc: Render pre processor: 0
I audio_processing_impl.cc: (line 301): Denormal disabler: supported
I webrtc_voice_engine.cc: (line 312): WebRtcVoiceEngine::WebRtcVoiceEngine
I webrtc_video_engine.cc: (line 648): WebRtcVideoEngine::WebRtcVideoEngine()
I webrtc_voice_engine.cc: (line 334): WebRtcVoiceEngine::Init
I audio_device_impl.cc: (line 76): Create
I audio_device_impl.cc: (line 84): CreateForTest
I audio_device_buffer.cc: (line 65): AudioDeviceBuffer::ctor
I audio_device_impl.cc: (line 121): AudioDeviceModuleImpl
I audio_device_impl.cc: (line 125): CheckPlatform
I audio_device_impl.cc: (line 133): current platform is Android
I audio_device_impl.cc: (line 155): CreatePlatformSpecificObjects
I audio_device_impl.cc: (line 947): PlatformAudioLayer
I jvm_android.cc: (line 72): JvmThreadConnector::ctor
I jvm_android.cc: (line 77): Attaching thread to JVM
I jvm_android.cc: (line 262): JVM::environment
I jvm_android.cc: (line 184): JNIEnvironment::ctor
I audio_manager.cc: (line 71): ctor
I jvm_android.cc: (line 196): JNIEnvironment::RegisterNatives: org/webrtc/voiceengine/WebRtcAudioManager
I jvm_android.cc: (line 134): NativeRegistration::ctor
I jvm_android.cc: (line 146): NativeRegistration::NewObject
I org.webrtc.Logging: WebRtcAudioManager: ctor@[name=Thread-15, id=21131]
W System.err: java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.Object android.content.Context.getSystemService(java.lang.String)' on a null object reference
W System.err: at org.webrtc.voiceengine.WebRtcAudioManager.<init>(WebRtcAudioManager.java:176)
E rtc : #
E rtc : # Fatal error in: ../../modules/utility/source/jvm_android.cc, line 151
E rtc : # last system error: 0
E rtc : # Check failed: !jni_->ExceptionCheck()
E rtc : # Error during NewObjectV
F libc : Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 28384 (Thread-15), pid 27615 (.qgroundcontrol)
This what that failing Java line looks like:
audioManager =
(AudioManager) ContextUtils.getApplicationContext().getSystemService(Context.AUDIO_SERVICE);
I'm using WebRTC M103 (branch-heads/5060).
I've linked to libwebrtc.a, dynamically loaded libjingle_peerconnection_so.so, and bundled libwebrtc.jar. All of those seemed to be required dependencies, and advanced the ball for me one-by-one as I added them into the equation.
| Fixed! This is what I was missing:
I was calling webrtc::JVM::Initialize prior to webrtc::CreatePeerConnectionFactory (which is one of many Droid-specific requirements they don't bother to mention in any docs...). But I missed the fact that there is an overload which takes the Droid app context (i.e. static void Initialize(JavaVM* jvm, jobject context);). That is to say, I was only passing the jvm pointer when initializing.
After a mini adventure (for which I'll omit the details) to get the context reference on C side, and not have it go "stale" (when the Java GC would delete it), I passed that along to the initialization. That, in turn, allowed that (original issue) of the Java invocation of ContextUtils.getApplicationContext() to work!
|
72,957,449 | 72,957,484 | Difference between subscript [] operator and push_back method for inserting character in a string in C++ | I am stuck on this doubt and can't understand which part I have understood wrong.
I am trying to fill an empty string, and I thought of doing it using the subscript [] operator, but found that although the loop runs perfectly, the final string is still empty with size zero. However, push_back works perfectly fine. I can use push_back, but I want to understand the reason the first approach fails. Can anyone clarify?
string r;
int i;
for(i = 0; i < 5; ++i)
{
r[i] = 'a';
}
cout << r.size(); //output: 0
cout << endl;
for(i = 0; i < 5; ++i)
{
r.push_back('a');
}
cout << r.size(); //output: 5
| Just like std::vector, std::string does not perform bounds checking when you use the subscript operator. When you create an empty string, you have a container of zero length, so assigning values by index writes to memory outside the container's bounds. That is undefined behavior, and it never changes the string's size, which is why size() still reports 0. Use push_back() (or resize() the string first, or at(), which throws on an out-of-range index).
|
72,957,592 | 72,958,310 | Exception: STATUS_ACCESS_VIOLATION when trying to read value of pointer from another program | I am practicing the use of ReadProcessMemory and one task I have is to read the value of a pointer, and then read the value of the address stored in that pointer.
I can get up to the part of reading the value of the pointer, but every time I try to access the value stored in the address in that pointer, I get Exception: STATUS_ACCESS_VIOLATION, and I don't know why. I have tried to initialize it via = new int; but the error still persists.
Here is the program run:
Here is my code:
dummy.cpp
#include <iostream>
#include <Windows.h>
#include <string>
// Dummy program
int main()
{
// Variables
int varInt = 123456;
std::string varString = "DefaultString";
char arrChar[128] = "Long char array right there ->";
// Pointers
int* ptr2int = &varInt;
int** ptr2ptr = &ptr2int;
int*** ptr2ptr2 = &ptr2ptr;
// Infinite loop of process
while (true)
{
// Print current process ID
std::cout << "Process ID: " << GetCurrentProcessId() << std::endl;
// Print varInt's address and value
std::cout << "varInt " << "(0x" << &varInt << ")" << " = " << varInt << std::endl;
// Print varStrings's address and value
std::cout << "varString " << "(0x" << &varString << ")" << " = " << varString << std::endl;
// Print arrChar's address and value
std::cout << "arrChar " << "(0x" << &arrChar << ")" << " = " << arrChar << std::endl;
// Same thing as above but for the 3 pointers now
std::cout << "ptr2int " << "(0x" << &ptr2int << ")" << " = " << ptr2int << std::endl;
std::cout << "ptr2ptr " << "(0x" << &ptr2ptr << ")" << " = " << ptr2ptr << std::endl;
std::cout << "ptr2ptr2 " << "(0x" << &ptr2ptr2 << ")" << " = " << ptr2ptr2 << std::endl;
std::cout << "Press ENTER to print again." << std::endl;
getchar(); // To pause
std::cout << "-------------------------------------" << std::endl;
}
}
reader.cpp
#include <iostream>
#include <Windows.h>
// Program that reads memory from our dummy program
int main()
{
int* intRead = new int;
// Get handle
HANDLE wHandle = OpenProcess(PROCESS_ALL_ACCESS, false, 16564);
// Error checking
if (wHandle == NULL)
{
std::cout << "ERROR! OpenProcess failed: " << GetLastError() << std::endl;
return 1;
}
// Read process memory
ReadProcessMemory(wHandle, (LPCVOID)0x00F3F990, &intRead, sizeof(int), NULL);
// Output new value we read
std::cout << "New value of intRead buffer is: " << *intRead << std::endl; // Throws exception when dereferencing
// Close the open process
CloseHandle(wHandle);
return 0;
}
And the CORE file it generates:
[main] reader 1000 (0) exception: trapped!
[main] reader 1000 (0) exception: code 0xC0000005 at 0x401114
[main] reader 1000 (0) exception: ax 0xF3FA48 bx 0x246FF10 cx 0x8EE00000 dx 0x0
[main] reader 1000 (0) exception: si 0x0 di 0x401000 bp 0x246FEF4 sp 0x246FEE0
[main] reader 1000 (0) exception: exception is: STATUS_ACCESS_VIOLATION
[main] reader 1000 (0) stack: Stack trace:
[main] reader 1000 (0) stack: frame 0: sp = 0x246F2A8, pc = 0x6100A2C3
[main] reader 1000 (0) stack: frame 1: sp = 0x246F2E4, pc = 0x77DC8FB2
[main] reader 1000 (0) stack: frame 2: sp = 0x246F308, pc = 0x77DC8F84
[main] reader 1000 (0) stack: frame 3: sp = 0x246F3D0, pc = 0x77DA71E6
[main] reader 1000 (0) stack: frame 4: sp = 0x246FEF4, pc = 0x61004402
[main] reader 1000 (0) stack: frame 5: sp = 0x246FF3C, pc = 0x61004420
[main] reader 1000 (0) stack: frame 6: sp = 0x246FF48, pc = 0x4131EE
[main] reader 1000 (0) stack: frame 7: sp = 0x246FF58, pc = 0x40103A
[main] reader 1000 (0) stack: frame 8: sp = 0x246FF74, pc = 0x76736739
[main] reader 1000 (0) stack: frame 9: sp = 0x246FF84, pc = 0x77D98FEF
[main] reader 1000 (0) stack: frame 10: sp = 0x246FFDC, pc = 0x77D98FBD
[main] reader 1000 (0) stack: frame 11: sp = 0x246FFEC, pc = 0x0
[main] reader 1000 (0) stack: End of stack trace
Compiled using MinGW.
| You can't just retrieve a pointer with ReadProcessMemory() and then dereference it normally, like you would with pointers in your own process. You have to use ReadProcessMemory() for each value you want to read from the remote process.
0x00F3F990 is the address of ptr2int in the remote process. You are reading the value of ptr2int at that address. That value is the address of varInt in the remote process. To then read the value of varInt, you need to call ReadProcessMemory() again with the address that is in ptr2int, eg:
#include <iostream>
#include <Windows.h>
// Program that reads memory from our dummy program
int main()
{
int varInt;
int* ptr2int;
// Get handle
HANDLE wHandle = OpenProcess(PROCESS_VM_READ, false, 16564);
// Error checking
if (wHandle == NULL)
{
std::cout << "ERROR! OpenProcess failed: " << GetLastError() << std::endl;
return 1;
}
// Read process memory
ReadProcessMemory(wHandle, (LPCVOID)0x00F3F990, &ptr2int, sizeof(int*), NULL);
// Output new value we read
std::cout << "New value of ptr2int is: " << ptr2int << std::endl;
// Read process memory
ReadProcessMemory(wHandle, (LPCVOID)ptr2int, &varInt, sizeof(int), NULL);
// Output new value we read
std::cout << "New value of varInt is: " << varInt << std::endl;
// Close the open process
CloseHandle(wHandle);
return 0;
}
|
72,957,821 | 72,958,071 | Cascade variadic template template parameters | How can I cascade variadic types? I.e.:
template <typename... T>
using Cascade = ???; // T1<T2<T3<...>>>
Example:
using Vector2D = Cascade<std::vector, std::vector, double>;
static_assert(std::is_same_v<Vector2D, std::vector<std::vector<double>>>);
| You cannot have CascadeRight. T1 is not a typename, it is a template, and so are most of the others, but the last one is a typename. You cannot have different parameter kinds (both types and templates) in the same parameter pack. You also cannot have anything after a parameter pack.
You can have CascadeLeft like this:
template <typename K, template <typename...> class ... T>
struct CascadeLeft;
template <typename K>
struct CascadeLeft<K>
{
using type = K;
};
template <typename K,
template <typename...> class T0,
template <typename...> class... T>
struct CascadeLeft<K, T0, T...>
{
using type = typename CascadeLeft<T0<K>, T...>::type;
};
Frankly, std::vector<std::vector<double>> is much more transparent than CascadeLeft<double, std::vector, std::vector>, so I wouldn't bother.
|
72,958,194 | 72,958,602 | What is the relationship between Boost::Asio and C++20 coroutines? | I started trying to learn Boost::Asio by reading the documentation and example code. I found things difficult to understand, particularly because the model seemed similar to coroutines.
I then decided to learn about coroutines, starting with this cppcon talk. In the linked talk, the following line was given in an example of coroutine usage. The example was written in 2014, so the syntax may not match C++20 coroutines.
auto conn = await Tcp::connect.Read("127.0.0.1", 1337)
This feels similar to the stated goals of Boost::Asio. However, in the examples section of the Boost::Asio documentation, there is an example that mixes Boost::Asio and C++20 coroutines. (I do not yet understand this example.)
What is the relationship between Boost::Asio and coroutines? Do coroutines replace parts of Boost::Asio? If I am not doing networking, should I still use Boost::Asio? Where do std::async and the senders/receivers proposal fit into all this?
|
Q. What is the relationship between Boost::Asio and coroutines?
C++20 coroutines are one of the completion token mechanisms provided with any Asio compliant async API
Q. Do coroutines replace parts of Boost::Asio?
Not 1 on 1.
In practice people may feel a lot less need to write asio::spawn (stackful) coroutines, because in practice the stackfulness is rarely required and makes the implementation (very) heavy in comparison. Also, up to Boost 1.81(?) asio::spawn will still depend on Boost Coroutine (work is underway to remove that and implement the functionality directly on top of Boost Context).
Another place where C++20 coroutines seem to remove friction is when providing a dual API (sync and async). I've heard people suggest that it is possible to implement the synchronous version in terms of the asynchronous version transparently. I'm not up to speed with the specifics of this pattern (and whether it is ready for production code yet).
Q. If I am not doing networking, should I still use Boost::Asio?
Should? No. But you may. In general, with c++20 coroutines you will want to use some library like cppcoro or indeed Asio. That's because no user-level library facilities have been standardized yet.
In Asio the interesting bits are:
experimental stuff like channels (channel and concurrent_channel), parallel_group, wait_for_{all,one,any}
the general purpose facility coro which has a lot of flexibility. You could see it as the most useful 80% of cppcoro but
all in one relatively simple class template:
coro<T> -> simple generator
coro<T(U)> -> generator with input
coro<void, T> task producing a T
integrated with Asio executors
This documentation is a pretty decent introduction¹, especially when you're familiar with concepts from other libraries/languages.
Q. Where do std::async and the senders/receivers proposal fit into all this?
I'm not sure. I seem to remember Chris Kohlhoff wrote that proposal. The concept may be lurking under the channel/deferred abstractions in Asio already.
¹ Hat tip @klemensmorgenstern
|
72,958,305 | 72,958,636 | Google test error: '*' can only follow a repeatable token | Trying to create some unit tests with EXPECT_EXIT where the error message contains a '*'.
The test fails, but not with the expected error. What am I missing here?
Here a very simple example to reproduce the issue:
void test_Death() {
std::cerr << "*Error\n";
exit(EXIT_FAILURE);
}
TEST(ErrorWithStar, Star) {
EXPECT_EXIT(test_Death(), testing::ExitedWithCode(EXIT_FAILURE), "*Error\n");
}
The result is:
Message:
#1 - Failed
Syntax error at index 0 in simple regular expression "*Error
": '*' can only follow a repeatable token.
Running main() from c:\a\1\s\thirdparty\googletest\googletest\src\gtest_main.cc
#2 - Death test: test_Death()
Result: died but not with expected error.
Expected: *Error
Actual msg:
[ DEATH ] *Error
[ DEATH ]
I am using Microsoft Visual Studio Community 2019 Version 16.11.13.
I added a Google Test project to my solution, created all links etc. It is working perfectly for everything else, but not for messages containing a '*'.
What is meant by "'*' can only follow a repeatable token."?
| The character * is reserved by the regular expression grammar to indicate matching zero or more of the previous tokens or groups.
Some simple examples:
.* matches zero or more of any character
a* matches zero or more of the character a
[A-F]* matches zero or more of the characters A through to F
The error is occurring because you have a * at the beginning of the string, where there is no preceding character, group etc to be repeated. This is essentially a syntax error in the regular expression grammar, and the error message tells you so.
What you actually want is a literal *, not the one that belongs to the grammar. To achieve this, you must escape it with a \. And because strings in C++ also use that for an escape character, you must escape the backslash too. So you need two backslashes before the * (or use a raw string literal).
In a pinch, the solution should be:
EXPECT_EXIT(test_Death(), testing::ExitedWithCode(EXIT_FAILURE), "\\*Error\n");
|
72,958,341 | 73,095,301 | Why is CMake ignoring the compiler settings via command line? | I am trying to build MAGMA from Windows 10 but it's not working. I downloaded the project MAGMA from here http://icl.utk.edu/projectsfiles/magma/downloads/magma-2.6.2.tar.gz. I downloaded and installed Intel's One API compilers and MKL. I'm taking the following step as part of my command line setup:
> call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
> :: I do this extra because CMake complains that doesn't find the dpcpp and ifort compilers in the PATH.
> set "PATH=%PATH%;C:\Program Files (x86)\Intel\oneAPI\compiler\latest\windows\bin"
> set "PATH=%PATH%;C:\Program Files (x86)\Intel\oneAPI\compiler\latest\windows\bin\intel64"
in MAGMA project extracted folder I do the typical:
> mkdir build
> cd build
> cmake -DCMAKE_CXX_COMPILER=dpcpp -DCMAKE_Fortran_COMPILER=ifort ..
CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
Compatibility with CMake < 2.8.12 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19044.
-- The C compiler identification is MSVC 19.32.31332.0
-- The CXX compiler identification is MSVC 19.32.31332.0
-- The Fortran compiler identification is unknown
Intel(R) Fortran Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.6.0 Build 20220226_000000
Copyright (C) 1985-2022 Intel Corporation. All rights reserved.
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.32.31326/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.32.31326/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:9 (project):
The CMAKE_Fortran_COMPILER:
ifort
is not a full path and was not found in the PATH.
-- Configuring incomplete, errors occurred!
See also "C:/Dev/Opt/magma/2.6.2/build/CMakeFiles/CMakeOutput.log".
See also "C:/Dev/Opt/magma/2.6.2/build/CMakeFiles/CMakeError.log".
UPDATE: as shown below, the newest oneAPI C++ compiler dpcpp and ifort are both available on the command line and in %PATH% where CMake is run from:
C:\>dpcpp --version
Intel(R) oneAPI DPC++/C++ Compiler 2022.1.0 (2022.1.0.20220316)
Target: x86_64-pc-windows-msvc
Thread model: posix
InstalledDir: C:\PROGRA~2\Intel\oneAPI\compiler\latest\windows\bin-llvm
C:\>ifort --version
Intel(R) Fortran Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.6.0 Build 20220226_000000
Copyright (C) 1985-2022 Intel Corporation. All rights reserved.
ifort: command line warning #10006: ignoring unknown option '/-version'
ifort: command line error: no files specified; for help type "ifort /help"
| I'm guessing you're running a version of CMake > 3.0? CMake doesn't necessarily honor the environment paths correctly as it is doing its lookups.
I tried installing 2.8.12.2 (the version recommended by the author) and it seems to build fine.
I also tried with the latest 3.x version (3.22.x) and I see the same error as you do. Rather than try to fix up the CMake file, it might be quicker to just downgrade to CMake 2.8.12.x.
|
72,959,009 | 72,959,050 | Pointing on vector elements | I've got a vector of objects (apples) and I need a pointer to jump through every element, to print the "size" value of every object in "apples". I tried a vector::iterator and a pointer to the whole vector, but I still cannot get a correct solution
#include <iostream>
#include <vector>
class Apple{
public:
int size;
Apple(int size): size(size){}
void print(){
std::cout<<size;
}
};
std::vector<Apple*>apples = {new Apple(21), new Apple(37), new Apple(66)};
Apple* ptr = apples[0];
void nextApple(){
ptr++;
ptr->print(); //returns 0
}
int main()
{
nextApple();
return 0;
}
Big thank You for any help!
| To iterate through any T[] array using a pointer, you need to use a T* pointer, and you need to point it initially at the address of the 1st element, not the value of the element.
Your vector's element type is T = Apple*, not T = Apple, so you need to use an Apple** pointer rather than an Apple* pointer, eg:
#include <iostream>
#include <vector>
class Apple{
public:
int size;
Apple(int size) : size(size){}
void print(){
std::cout << size << std::endl;
}
};
std::vector<Apple*> apples = {new Apple(21), new Apple(37), new Apple(66)};
Apple** ptr = &apples[0];
void nextApple(){
++ptr;
(*ptr)->print();
}
int main()
{
nextApple();
return 0;
}
Using an iterator instead (or even an index) would have worked just fine, eg:
#include <iostream>
#include <vector>
class Apple{
public:
int size;
Apple(int size): size(size){}
void print(){
std::cout << size << std::endl;
}
};
std::vector<Apple*> apples = {new Apple(21), new Apple(37), new Apple(66)};
std::vector<Apple*>::iterator iter = apples.begin();
// or: size_t index = 0;
void nextApple(){
++iter;
(*iter)->print();
// or:
// ++index;
// apples[index]->print();
}
int main()
{
nextApple();
return 0;
}
That being said, using a vector of raw Apple* pointers is not a good idea. You have to delete the Apple objects when you are done using them. At the very least, you should wrap the pointers in std::unique_ptr to avoid memory leaks, eg:
#include <iostream>
#include <vector>
#include <memory>
class Apple{
public:
int size;
Apple(int size): size(size){}
void print(){
std::cout << size << std::endl;
}
};
// Note: a braced initializer list cannot be used here, because
// std::initializer_list copies its elements and std::unique_ptr is move-only.
std::vector<std::unique_ptr<Apple>> apples = []{
    std::vector<std::unique_ptr<Apple>> v;
    v.push_back(std::make_unique<Apple>(21));
    v.push_back(std::make_unique<Apple>(37));
    v.push_back(std::make_unique<Apple>(66));
    return v;
}();
auto *ptr = &apples[0];
// or: auto iter = apples.begin();
// or: size_t index = 0;
void nextApple(){
++ptr;
(*ptr)->print();
// or:
// ++iter;
// (*iter)->print();
// or:
// ++index;
// apples[index]->print();
}
int main()
{
nextApple();
return 0;
}
But really, just get rid of the dynamic allocation altogether; you don't need it in this example:
#include <iostream>
#include <vector>
class Apple{
public:
int size;
Apple(int size): size(size){}
void print(){
std::cout << size << std::endl;
}
};
std::vector<Apple> apples = {21, 37, 66};
Apple* ptr = &apples[0];
// or: auto iter = apples.begin();
// or: size_t index = 0;
void nextApple(){
++ptr;
ptr->print();
// or:
// ++iter;
// iter->print();
// or:
// ++index;
// apples[index].print();
}
int main()
{
nextApple();
return 0;
}
|
72,959,496 | 72,959,775 | GLFW undecorated window on MacOS after turning on and off again, gains black outline | So, there is a window that is created with these hints:
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
glfwWindowHint(GLFW_SRGB_CAPABLE, GLFW_TRUE);
glfwWindowHint(GLFW_DOUBLEBUFFER, GLFW_TRUE);
glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GLFW_TRUE);
glfwWindowHint(GLFW_DECORATED, GLFW_FALSE);
glfwWindowHint(GLFW_TRANSPARENT_FRAMEBUFFER, GLFW_TRUE);
glfwWindowHint(GLFW_SCALE_TO_MONITOR, GLFW_TRUE);
main_window = glfwCreateWindow(last_window_size.x, last_window_size.y, main_window_name.c_str(), NULL, NULL);
glfwShowWindow(main_window);
the window looks like this:
later I switch it to fullscreen with this function:
void set_fullscreen_mode()
{
glfwGetWindowSize(main_window, &last_window_size.x, &last_window_size.y);
glfwGetWindowPos(main_window, &last_window_position.x, &last_window_position.y);
full_screen = true;
current_monitor = get_monitor_by_cpos(get_global_mouse_position(main_window));
const GLFWvidmode* monitor_video_mode = glfwGetVideoMode(current_monitor);
glfwSetWindowMonitor(main_window, current_monitor, 0, 0, monitor_video_mode->width, monitor_video_mode->height, monitor_video_mode->refreshRate);
}
and change it back with this one:
void set_windowed_mode()
{
glfwHideWindow(main_window);
full_screen = false;
current_monitor = get_monitor_by_cpos(get_global_mouse_position(main_window));
const GLFWvidmode* monitor_video_mode = glfwGetVideoMode(current_monitor);
glfwSetWindowMonitor(main_window, NULL, last_window_position.x, last_window_position.y, last_window_size.x, last_window_size.y, monitor_video_mode->refreshRate);
glfwShowWindow(main_window);
}
And after doing all of that, a black outline appears, like this:
So, the question is why does it appear, and how to remove it ?
| I don't think this is a GLFW-side issue or an issue in your code; I think this might be an issue with macOS's Quartz Compositor. QC is responsible for drawing all your windows to the screen, just like Windows' DWM.
Or, if the window covers the full screen (without "actually" being full screen, just a large window), you could use a quick and dirty hack: make the window 2 pixels bigger in X and Y, then move it to current_x_pos - 1 and current_y_pos - 1 so the outline falls just off-screen.
|
72,959,582 | 72,968,820 | Update console output while slowing down program as little as possible | I have a single-threaded program that does some operations on a large file (~16GB) in a loop, with a variable count that increments on each iteration. I want to see what count is at by printing it to the console every so often, but without significantly slowing the program down. Would it be faster to use modulo on count and print when the remainder is 0 (if (count % 1'000'000 == 0) { std::cout << count << endl; }), to check the system time and print when more than 0.25 s has passed, or to simply print on every iteration?
the loop will run a few billion times aprox.
| count % 1'000'000 is a slow operation, even though the compiler optimizes that to multiplication by an inverse. If you use a power of 2 on the other hand this operation becomes much simpler. For example here is x % n == 0 for 1'000'000 and 1 << 20 == 1'048'576 with int.
mod_1_000_000(int):
imul edi, edi, 1757569337
add edi, 137408
ror edi, 6
cmp edi, 4294
seta al
ret
mod_1_048_576(int):
and edi, 1048575
setne al
ret
If count is uint64_t the difference gets much more pronounced.
An if (count % 1'048'576 == 0) will be cheap to compute and the branch predictor will only get about 1 miss in a million. So this would be cheap. You can probably make it even better by marking it unlikely so the code for printing console output gets put into a cold path.
Getting the system time and printing every .25 seconds sounds great. But if you are getting the system time inside the loop that will be millions of function calls. Those will be expensive, far more than count % (1 << 20).
Unfortunately you can't use alarm to interrupt the code periodically because you can't print to the console in a signal handler. But you could use multithreading, having one thread do the work and the other print updates and sleep in a loop.
Problem there is how to get the count from one thread to the other. The compiler has probably optimized that into a register so the other thread reading the memory location where count is stored won't show the actual count. You would have to make the variable atomic and that would increase the cost of using it.
Best bet would be using
if (count % (1 << 20) == 0) atomic_count = count;
to update a shared atomic variable every so often. But is all that overhead of multithreading worth it? You aren't avoiding the if in the inner loop, just reducing the amount of code executed once in a blue moon.
|
72,959,590 | 72,961,938 | How to save bounded-length strings as quickly as possible for a timing mechanism? | I have a thin wrapper around rdtsc that I use as a timer. It supports "stepping" and all timestamps are saved in a std::array (you specify when creating the timer how many steps you will make). That way, you can do something like,
void funcToTime() {
timer<2> t;
...
t.start();
...
t.step();
...
t.step();
...
t.end();
}
And the timer will write the rdtsc at the start, each of the two steps, and the end, into a file for you to look at.
What I want to do is extend this to be able to add tags to the timer,
void funcToTime() {
// second template param is the max tag length
timer<2, 10> t;
...
t.start();
...
// "hi" is a tag
t.step("hi");
...
t.step("bye");
...
t.end();
}
And now in the file, I will be able to see the tags next to the steps (rdtsc timestamps).
Of course this must be as fast as possible, or else the timer is inaccurate and useless. I've been wracking my brain for how to do this but can't think of how to do it quickly.
My only idea is to maintain a std::array<MaxTagLen, NumSteps> that I write the tags into, but each step will now incur a string copy which is super slow, even when the strings are small enough to be in the SSO buffer. I don't think there is any templating magic I can do because these tags may not be available at compile-time, even if I know their max length at compile-time.
Any ideas?
| A string literal evaluates to the address at which that literal is stored in memory. As such, there's no need to copy the string in response to each step call.
#include <array>
#include <cstddef>
#include <ostream>

template <std::size_t N>
class Timer {
struct TimeRecord {
unsigned long long timestamp;
char const *tag;
};
std::array<TimeRecord, N> data;
unsigned int current = 0;
std::ostream &output;
public:
Timer(std::ostream &os) : output(os) {}
void step(unsigned long long const &ts, char const *tag) {
data[current++] = TimeRecord{ts, tag};
}
void end() {
for (unsigned i=0; i<current; i++)
output << data[i].timestamp << "\t" << data[i].tag << "\n";
}
};
This does impose a limitation on what you pass as the tag. The pointer you pass has to remain valid as until after end() finishes execution. As long as you pass string literals, that's no problem at all--they have static storage duration. But if you define a Timer in one function, then it calls another function, and somebody passes the address of an array of char defined in that function, like this:
void foo(Timer &t) {
// note use of arrays here:
char tagS[] = "start foo";
char tagE[] = "stop foo";
t.step(tagS);
// do foo stuff
t.step(tagE);
}
Then you'd have a problem. But as long as you stick to string literals:
char const *tagS = "start foo";
char const *tagE = "stop foo";
// or:
t.step(a, "some tag");
...you have no problem at all. Also note that since you're only storing the address, this eliminates any requirement to store the maximum tag length either.
|
72,959,945 | 72,960,035 | How to write an overload function for std::array that calls a variadic function? | I have the following variadic function:
template <typename... Args>
CustomType<Args...> method(const Args&... args);
which works fine when I just do e.g.
method(1.0f, 2.0f, 3.0f);
However I also want to write an overload for std::array<T, N>:
template <typename T, std::size_t N>
auto method(const std::array<T, N>& arr);
Reading this post I figured I would use the following helper functions:
template <typename T, std::size_t N, typename VariadicFunc, std::size_t... I>
auto methodApply(const std::array<T, N>& arr, VariadicFunc func, std::index_sequence<I...>)
{
return func(arr[I]...);
}
template <typename T, std::size_t N, typename VariadicFunc, typename Indices = std::make_index_sequence<N>>
auto methodArr(const std::array<T, N>& arr, VariadicFunc func)
{
return methodApply(arr, func, Indices());
}
and then do
template <typename T, std::size_t N>
auto method(const std::array<T, N>& arr)
{
methodArr(arr, /* what to pass here? */);
}
Note: the reason why I would like to template on func is because I want these helper functions to be generic so that I may use them for other variadic functions, not just method().
Only I'm not sure what to pass as the second argument. I was thinking of a lambda that would call the original method(const Args&... args), but it's unclear to me how to do this.
EDIT:
Restricted to C++14.
| You can wrap it in a lambda and let the compiler deduce the type for you
template <typename T, std::size_t N>
auto method(const std::array<T, N>& arr)
{
return methodArr(arr, [](const auto&... args) { return method(args...); });
}
Demo
In C++17, methodApply can be replaced with std::apply
template <typename T, std::size_t N>
auto method(const std::array<T, N>& arr) {
return std::apply(
[](const auto&... args) { return method(args...); }, arr);
}
|
72,960,867 | 72,960,963 | Why is string::resize and string::substr O(1) | I am working on a coding problem in which I have to delete all occurrences of a substring T in a string S (keeping in mind that removing one occurrence of T in S may generate a new occurrence of T), and then return the resulting string S after all deletions. The size of both S and T can be up to 10^6.
For example, if I have S = "aabcbcd" and T = "abc", then removing all occurrences of abc in S results in S = "d".
The sample solution to this problem involves building a string R from S one character at a time, and whenever the end of R matches T, we delete it from R (the comparison between the end of R and T is determined by string hashing).
The solution says that
Since this deletion is at the end of R this is just a simple O(1) resize operation.
However, according to https://m.cplusplus.com/reference/string/string/resize/ the time complexity of string::resize is linear in the new string length. Ben Voigt confirms this in Why is string::resize linear in complexity?.
Also, in the solution the code involves using string::substr to double check if the end of R and T match (since hash(the end of R)==hash(T) does not guarantee the end of R equals to T):
/* If the end of R and T match truncate the end of R (and associated hash arrays). */
if (hsh == thsh && R.substr(R.size() - T.size()) == T) {
//...
}
Once again, https://m.cplusplus.com/reference/string/string/substr/ says that string::substr has linear time complexity.
Even if string::substr wasn't linear, then comparing the two strings directly would still cause the comparison to be linear in the size of T.
If this is true, wouldn't the time complexity of the solution be at least O(S.length()*T.length()), instead of O(S.length()) (according to the solution)? Any help is appreciated!
| string::resize isn't always linear. If you're expanding a string, it's linear on the number of characters copied, which is potentially the total number in the resulting string (but could be less, if the string already has enough space for the character(s) you add, so it only has to write the new characters).
Using resize to reduce the size of a string will normally take constant time. In simplified form (and Leaving out a lot of other "stuff") string can look something like this:
class string {
char *data;
size_t allocated_size;
size_t in_use_size;
public:
void resize(size_t new_size) {
if (new_size < in_use_size) {
in_use_size = new_size;
data[in_use_size] = '\0';
} else {
// code to expand string to desired size in O(n) time
}
}
// ...
};
So although it'll be linear when expanding the string, it'll typically have constant complexity when reducing the size.
As for using substr, yes, in the case where the hashes match, substr itself will be linear (it creates a new string object) and you're going to do a linear-complexity comparison. I'd guess they're pretty much just presuming hash collisions are rare enough to ignore, so for most practical purposes, this only happens when you have an actual match.
|
72,961,236 | 72,961,338 | Fixing error: *** No rule to make target '/usr/lib/x86_64-linux-gnu/libdl.so' | I have recently upgraded my OS (to PopOS! 22.04) and now a bunch of builds in my cmake workflow aren't compiling, halting at this particular error at the linking stage:
*** No rule to make target '/usr/lib/x86_64-linux-gnu/libdl.so'
This file now no longer exists. There is however a libdl.so.2.
Running apt-file search /usr/lib/x86_64-linux-gnu/libdl.so gives no output.
How can I get my builds working again?
EDIT:
The solution turned out to be due to manually built dependencies/packages that obviously weren't updated during the upgrade. I had to go and rebuild them, and then the error disappeared. Note, when rebuilding them (also with CMake) it required a full deletion of the build directory, not just running CMake again.
Jammy's GNU C Library version is 2.35. The dl library is now part of the C standard library. The release notes tell us that, starting from version 2.34,
all functionality formerly
implemented in the libraries libpthread, libdl, libutil, libanl has
been integrated into libc. New applications do not need to link with
-lpthread, -ldl, -lutil, -lanl anymore. For backwards compatibility,
empty static archives libpthread.a, libdl.a, libutil.a, libanl.a are
provided, so that the linker options keep working. Applications which
have been linked against glibc 2.33 or earlier continue to load the
corresponding shared objects (which are now empty).
This means that you have to remove the explicit libdl.so from the linker dependencies.
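If the stale path comes from your own CMakeLists (myapp is a hypothetical target name here), the portable fix is to stop naming the library by path and use CMake's predefined ${CMAKE_DL_LIBS} variable, which expands to the right linker option on platforms that need one and to nothing where dlopen lives in libc:

```cmake
# Before (breaks on glibc >= 2.34, which no longer ships libdl.so):
#   target_link_libraries(myapp PRIVATE /usr/lib/x86_64-linux-gnu/libdl.so)

# After: let CMake pick the right spelling per platform.
target_link_libraries(myapp PRIVATE ${CMAKE_DL_LIBS})
```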
|
72,962,924 | 72,963,089 | How to extract relevant info from the body of http response with Arduino? | I am currently doing a project with an Arduino MKR WiFi 1010. I send a GET request to the server and it sends me back a response containing "clientId". The only info I need is this client ID, but I am struggling to obtain it. The complete response is as follows:
HTTP/1.1 200 OK
Date: Wed, 13 Jul 2022 07:29:19 GMT
Content-Type: application/json;charset=utf-8
Content-Length: 241
Connection: keep-alive
Cache-Control: no-cache,no-store,must-revalidate
Pragma: no-cache
Expires: -1
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubDomains
[{"ext":{"ack":true},"minimumVersion":"1.0","clientId":"qezkvtxkk8i3csuyirg7bd97jtg","supportedConnectionTypes":["long-polling","smartrest-long-polling","websocket"],"data":null,"channel":"/meta/handshake","version":"1.0","successful":true}]
As you can see above, the client ID "qezkvtxkk8i3csuyirg7bd97jtg" is what I want. However, I don't know how to extract it. Can someone help me please?
What I have tried so far is following:
void loop() {
// if there are incoming bytes available
// from the server, read them and print them:
while (client.available()) {
char c = client.read();
//Serial.print(c);
if(c=='\n'){
char c = client.read();
if(c=='\r'){
char c = client.read();
if(c=='\n'){
char c = client.read();
Serial.print(c);
}
}
}
}
}
I tried to locate the body, since the body is separated from the header by a blank line, but I failed to print the whole body. Instead I just got a "[", the first byte of the body, from the above code.
My idea is to store the body and treat it as a JSON object.
| If I understand correctly, you are reading the response into a String.
The easiest way to find your client ID is to locate the substring "clientId" with Arduino's String::indexOf, which returns the index where it occurs; from there, take everything between "clientId":" and the next closing quote:
int start = msg.indexOf("\"clientId\":\"");
if (start >= 0) {
  start += 12;                        // length of the "clientId":" prefix
  int end = msg.indexOf('"', start);  // position of the closing quote
  String clientId = msg.substring(start, end);
}
https://arduinogetstarted.com/reference/arduino-string-indexof
Edit :
To retrieve the whole string from the client you should just do it with a while loop :
String msg = "";
while ( client.available() ) {
char c = client.read();
Serial.print(c);
msg += c;
}
|
72,963,090 | 72,963,126 | C++ non-generic class in template | I would like to know how to make a template with my own class:
#include <iostream>
using namespace std;
template<C cc> void A()
{
cout << cc.l << endl;
}
int main()
{
C cc;
A<cc>();
}
class C
{
public:
int l = 10;
};
But it doesn't work. So how can I use that class as a template parameter, the same way a plain int is used here:
#include <iostream>
using namespace std;
template<int i> void A()
{
cout << i << endl;
}
int main()
{
A<100>();
}
| You can do it as shown below with C++20 (and onwards):
//moved definition of C before defining function template `A`
struct C
{
int l = 10;
};
template<C cc> void A()
{
cout << cc.l << endl;
}
int main()
{
//--vvvvvvvvv--------->constexpr added here
constexpr C cc;
A<cc>();
}
Working demo
Two changes have been made:
As template arguments must be compile time constant, constexpr is used.
The definition of C is moved before the definition of function template.
|
72,963,463 | 72,969,788 | Calling copy and assignment operators from base class to create inherited class instances in C++ | I have the following classes (e.g.):
class A {
public:
A(void) : i(0) {}
A(int val) : i(val) {}
A(const A& other) : i(other.i) {}
A& operator=(const A& other) {
i = other.i;
return *this;
}
int i;
};
class B : public A {
public:
B(void) : A(), j(0) {};
B(const B& other) : A(other), j(other.j) {}
B(int i, int j) : A(i), j(j) {}
B& operator=(const B& other) {
A::operator=(other);
j = other.j;
return *this;
}
int j;
};
My question is: given the operator= overload and copy constructor in B, if I wanted to be able to create instances of B out of an already-initialized instance of A, or to assign existing instances of A to existing instances of B, would it be necessary to define another constructor and operator= on B with the following signatures?
B(const A& other);
B& operator=(const A& other);
The goal would be to be able to instantiate/assign derived class instances only with the Base class information.
PS : I am working in C++98 and unable to use any newer standard.
| Yes, you would have to define something like that.
B(const A& other);
This would allow constructing B out of A. This would also allow assigning A to B by way of implicitly converting A to B and then assigning. So that alone should suffice. But you get an extra copy.
B& operator=(const A& other);
This makes assigning A to B more efficient since you avoid the extra copy of the temporary B. This should also allow assigning things that can be implicitly converted to A like:
B b = 1;
If you don't want that, you might have to add some explicit. Did C++98 have explicit? It did, even if that feels so last millennium.
Note: In modern C++ this would be more efficient because of copy elision and because you could use move semantic and perfect forwarding references.
|
72,963,630 | 73,000,332 | Problem with receiving mails from the SENT folder | if ( IdIMAP1->SelectMailBox( "SENT" ) )
{
TIdIMAP4SearchRec sr[1];
sr[0].SearchKey = skAll;
IdIMAP1->UIDSearchMailBox( EXISTINGARRAY(sr) );
int ile = IdIMAP1->MailBox->SearchResult.Length;
}
Error:
First chance exception at $757BF192. Exception class EIdReadLnMaxLineLengthExceeded with message 'Max line length exceeded.'.
It tries to read messages from the SENT folder and the program throws an error. There is no error when receiving from another SENT subfolder.
It seems to me that the problem lies in specifying SearchKey when the value is set to skAll, but no other setting reads the email despite the lack of an error. What does this error mean and how can I fix it?
By the way, I have a question about the SearchKey settings. Is it possible to give a specific date here that would filter emails only from today, for example?
|
Error:
First chance exception at $757BF192. Exception class EIdReadLnMaxLineLengthExceeded with message 'Max line length exceeded.'.
... What does this error mean and how can I fix it?
It means TIdIMAP4 called the IOHandler.ReadLn() method and received more than 16K worth of data that had no line breaks in it. The default value of the IOHandler.MaxLineLength property is 16384, and the default value of the IOHandler.MaxLineAction is maException.
To workaround the error, you could try increasing the value of the MaxLineLength (say, to MaxInt). However, a proper fix would be to prevent such a large amount of undelimited data to be received in the first place.
The response of UIDSearchMailBox() is a single line containing a list of email sequence numbers delimited by spaces, so you could be getting the EIdReadLnMaxLineLengthExceeded error here if the search is producing a LOT of sequence numbers (say, thousands of them, which makes sense when searching for just skAll on a large mailbox).
You really should not be searching for just skAll by itself to begin with. If you want to access all emails in the mailbox, just iterate the mailbox instead. After SelectMailBox() returns success, TIdIMAP4.MailBox.TotalMsgs will contain the number of emails currently in the mailbox. You can then run a loop retrieving individual emails as needed using sequence numbers in the range of 1..TotalMsgs, inclusive.
Otherwise, filter your search criteria better to produce fewer results.
By the way, I have a question about the SearchKey settings. Is it possible to give a specific date here that would filter emails only from today, for example?
Yes, of course. Look at the TIdIMAP4SearchKey enum, it lists all of the different keys you can search on, for instance:
skOn, //Messages whose internal date is within the specified date.
skSentOn, //Messages whose [RFC-822] Date: header is within the specified date.
skSentSince, //Messages whose [RFC-822] Date: header is within or later than the specified date.
skSince, //Messages whose internal date is within or later than the specified date.
In this case, either of those should work, depending on whether you want to search the email's internal server timestamps or their Date headers, eg:
if ( IdIMAP1->SelectMailBox( "SENT" ) )
{
TIdIMAP4SearchRec sr[1];
sr[0].SearchKey = skSince;
sr[0].Date = Sysutils::Date(); // or Dateutils::Today()
IdIMAP1->UIDSearchMailBox( EXISTINGARRAY(sr) );
int ile = IdIMAP1->MailBox->SearchResult.Length;
}
UPDATE
this example refer to expresion 'later than the specified date' but I can see that it is possible to use 'within'. How to set range of data? Is it possible in SearchKey settings?
There is no 'within' search key in IMAP. If you are referring to RFC 5032: WITHIN Search Extension to the IMAP Protocol (the OLDER and YOUNGER search keys), then TIdIMAP4 does not implement this extension at this time. I have opened a ticket to add it in a future release:
#420: Update TIdIMAP4 to support RFC 5032: "WITHIN Search Extension to the IMAP Protocol"
In the meantime, you can combine multiple search keys and they will be logically AND'ed together, eg:
if ( IdIMAP1->SelectMailBox( "SENT" ) )
{
TDateTime dtNow = Sysutils::Now();
TIdIMAP4SearchRec sr[2];
sr[0].SearchKey = skSince;
sr[0].Date = Dateutils::StartOfTheDay(Dateutils::IncDay(dtNow, -6));
sr[1].SearchKey = skBefore;
sr[1].Date = dtNow;
IdIMAP1->UIDSearchMailBox( EXISTINGARRAY(sr) );
int ile = IdIMAP1->MailBox->SearchResult.Length;
}
I suggest you read RFC 3501 Section 6.4.4 for how the SEARCH command works and what the standard search keys are.
|
72,963,777 | 72,964,058 | Is casting to (void**) well-defined? | Suppose A is a struct and I have a function to allocate memory
f(size_t s, void **x)
I call f to allocate memory as follows.
struct A* p;
f(sizeof(struct A), (void**)&p);
I wonder if the cast (void**)&p here is well-defined. I know that in C it is well-defined to cast an object pointer to void* and back. However, I am not sure about the case of void**. I found the following document, which states that we should not cast to a pointer type with a stricter alignment requirement. Does void** have a stricter or looser alignment requirement?
| The conversion is not defined by the C standard, and, even if it were, code in f that assigned to it via the void ** type would not be defined by the C standard.
C 2018 6.3.2.3 7 says a pointer to an object type may be converted to a pointer to a different object type. This covers (void **) &p, since &p is a pointer to the object p, and void ** is a pointer to the object type void *. However, this paragraph only tells us the conversion may be performed. It does not full define what the result is. It says:
“If the resulting pointer is not correctly aligned for the referenced type, the behavior is undefined.” This is generally not a problem; in common C implementations, the alignment requirements of void * and struct A * will be the same, and this is easily checked.
“Otherwise, when converted back again, the result shall compare equal to the original pointer.” This is all the paragraph tells us about the result of the conversion: It is a pointer you can convert back to struct A * to get the original pointer or its equivalent. It does not tell us the pointer can be used for anything else while it is in the void ** type.
“When a pointer to an object is converted to a pointer to a character type,…” This part of the paragraph does not apply, since we are not converting to a pointer to a character type.
So, suppose the function f has some code that uses its parameter x like this:
*x = malloc(…);
Because the standard did not define what will happen if x is used as a void ** for any purpose other than converting it back to struct A *, we do not know what *x will do.
A typical expectation is that *x will access the same memory p is in, but it will access it as a void * instead of as a struct A *. A technical problem here is that the C standard does not guarantee that a void * is represented in memory in the same way that a struct A * is represented in memory. As far as the standard is concerned, void * could use eight bytes while struct A * uses four bytes, or void * could use a flat byte address while struct A * uses a segment-and-offset address scheme. However, as with alignment, in common C implementations, different types of pointers have the same representation in memory, and this can be checked.
But then we arrive at the aliasing rule. Even if void * and struct A * have the same representation in memory, C 2018 6.5 7 says:
An object shall have its stored value accessed only by an lvalue expression that has one of the following types:
— a type compatible with the effective type of the object,
…
The list continues with several other categories of types, and none of them match the struct A * type of p. That is, this paragraph in the standard tells us the object p shall have its stored value accessed (“accessed” in the C standard includes both reading and writing) only by an expression that has one of the listed types. The expression used to access p in *x = malloc(…); is *x, and its type is void *, and void * is not compatible with struct A *, and void * is also not any of the other types listed in the paragraph.
So the code *x = malloc(…); breaks that rule. Violating a “shall” rule means the behavior of the code is not defined by the C standard.
Some compilers support breaking this rule, when a switch is used to ask them to support aliasing objects through different types. Using such a switch prevents some optimizations by the compiler. In particular, given two pointers x and y that point to different types not matching the aliasing rule, the compiler may assume they point to different objects, so it can reorder accesses to *x and *y in whatever way is efficient, because a store to one cannot change the value in the other.
So, if you verify that void * and struct A * have the same representation and alignment requirement and that your compiler supports aliasing, then the behavior will be defined for the specific C implementation you check. However, it is not defined by the C standard generally.
|
72,964,318 | 73,041,001 | Wrapping a C++ library using msl-loadlib in python | I am currently writing a wrapper for a C++ library. The library is a 32-bit DLL and I'm using 64-bit Python, so I'm using msl-loadlib. I have a problem wrapping a function that has pointer parameters.
Here is the header of the function in C++
int CUSB::GetMeasurement(int Group, int StartPoint, int* NumberOfPoints, double* XData, double** YData, eFilter Filter)
and here the wrapper I wrote
from msl.loadlib import Server32
import ctypes
from client import Client
class Server(Server32):
def __init__(self, host: str, port: int):
super().__init__("USBLib.dll", "cdll", host, port)
...
def getMeasurement(self, group: int, startPoint: int, nbrOfPoints: int, filter: Client.Filter):
self.lib.GetMeasurement.restype = int
xData = (ctypes.c_double * 65536)()
yData = ((ctypes.c_double * 3) * 65536)()
self.lib.GetMeasurement(
ctypes.c_int(int(group)),
ctypes.c_int(int(startPoint)),
ctypes.pointer(ctypes.c_int(int(nbrOfPoints))),
ctypes.byref(xData),
ctypes.byref(yData),
ctypes.c_int(int(filter.value[0]))
)
return xData, yData
from enum import Enum, unique
from msl.loadlib import Client64
class Client(Client64):
@unique
class Filter(Enum):
NONE = 0,
LOWPASS = 1
...
def getMeasurement(self, group: int, startPoint: int, nbrOfPoints: int, filter: Filter):
return self.request32(
'getMeasurement',
group,
startPoint,
nbrOfPoints,
filter
)
When I call Client.getMeasurement(parameters), I get the following error
File 'C:\\Users\\DELL\\Documents\\python\\USBlib\\server.py', line 77, in getMeasurement
ctypes.c_int(int(filter.value[0]))
OSError: exception: access violation writing 0x00000000
Edit :
I tried to use ctypes argtypes.
xData = (ctypes.c_double * 65536)()
yData = ((ctypes.c_double * 3) * 65536)()
nbrOfPoints = ctypes.c_int()
self.lib.GetMeasurement.argtypes = [
ctypes.c_int,
ctypes.c_int,
ctypes.POINTER(ctypes.c_int),
ctypes.POINTER(ctypes.c_double),
ctypes.POINTER(ctypes.POINTER(ctypes.c_double)),
ctypes.c_int
]
dataSize = ctypes.c_int(int(inputDataSize - previousPoint))
self.lib.GetMeasurement(1, previousPoint, ctypes.byref(dataSize), xData, yData, Client.Filter.NONE)
I have a C++ sample code that is using the GetMeasurement function :
int PreviousPoint = 0;
double* ppYData1[3];
double* pXData1;
int UpdateSize = 65536;
for (int channel=0; channel<3; channel++)
{
ppYData1[channel] = new double[UpdateSize];
}
pXData1 = new double[UpdateSize];
int DataSize = Input1DataSize-PreviousPoint;
GetMeasurement(INPUT1, PreviousPoint, &DataSize, pXData1, ppYData1, (eFilter)(TC->SC->m_ScanConfiguration.m_nFilter));
| I'm ignoring msl-loadlib as extraneous to the problem of calling ctypes correctly.
Here's an example of calling the function shown. The YData needs to be an array of 3 double* and then each of those pointers needs to be initialized with the next dimension of the array. Note this parallels the C++ example of calling the function.
test.cpp - sample implementation to fill out the arrays.
#ifdef _WIN32
# define API __declspec(dllexport)
#else
# define API
#endif
typedef int eFilter;
extern "C" {
API int GetMeasurement(int Group, int StartPoint, int* NumberOfPoints, double* XData, double** YData, eFilter Filter) {
for(int i = 0; i < 65536; ++i) {
XData[i] = i;
YData[0][i] = i + .25;
YData[1][i] = i + .5;
YData[2][i] = i + .75;
}
return 0;
}
}
test.py - ctypes example to call the function
import ctypes as ct
pdouble = ct.POINTER(ct.c_double)
ppdouble = ct.POINTER(pdouble)
dll = ct.CDLL('./test')
dll.GetMeasurement.argtypes = ct.c_int, ct.c_int, ct.POINTER(ct.c_int), pdouble, ppdouble, ct.c_int
dll.GetMeasurement.restype = ct.c_int
xData = (ct.c_double * 65536)()
yData = (pdouble * 3)()
for channel in range(3):
yData[channel] = (ct.c_double * 65536)()
nbrOfPoints = ct.c_int()
dll.GetMeasurement(1, 0, ct.byref(nbrOfPoints), xData, yData, 0)
print(xData[0],xData[65535])
print(yData[0][0],yData[1][0],yData[2][0])
print(yData[0][65535],yData[1][65535],yData[2][65535])
Output:
0.0 65535.0
0.25 0.5 0.75
65535.25 65535.5 65535.75
|
72,964,591 | 72,965,299 | C++: Deep Copy Diamond Pointer Structure | In my simulation software, I generate objects with pybind11. So all objects are stored in std::shared_ptr with a structure that is not known at compile time. For parallelisation of my simulation I need to run the same configuration with different seeds. I want to implement the duplication of these objects in one call on the C++ side.
Following a minimal example, where
I want a2 to be a deepcopy of a, with the diamond structure.
// Type your code here, or load an example.
#include <memory>
#include <map>
#include <iostream>
class C{};
class B{
public:
B(std::shared_ptr<C> c):c(c){}
std::shared_ptr<C> c;
};
class A{
public:
A(std::shared_ptr<B> b1, std::shared_ptr<B> b2):b1(b1), b2(b2){}
std::shared_ptr<B> b1;
std::shared_ptr<B> b2;
};
auto init(){
auto c = std::make_shared<C>();
auto b1 = std::make_shared<B>(c);
auto b2 = std::make_shared<B>(c);
auto a = std::make_shared<A>(b1,b2);
return a;
}
int main(){
auto a = init();
auto a2 = a; //deepcopy of a, where b1 and b2 of the copy point to the same object C
}
The only solution I came up with is passing a map<pointer,shared_ptr>. This allows looking up whether the shared_ptr has already been deep copied. (Here I have some problems with the typing, as I need to dynamically cast back the types. This feels really ugly and bug-prone.)
| You can use std::shared_ptr<void> to type-erase all your shared pointers, using std::static_pointer_cast to go to and from your actual types.
using Seen = std::map<std::shared_ptr<void>, std::shared_ptr<void>>; // original -> copy, requires <map>
template <typename T>
std::shared_ptr<T> deep_copy(std::shared_ptr<T> source, Seen & seen) {
    if (auto it = seen.find(std::static_pointer_cast<void>(source)); it != seen.end()) {
        return std::static_pointer_cast<T>(it->second);
    }
    auto dest = make(*source, seen);
    // Record the mapping from the original to its copy, so that a second
    // visit of the same source returns the same copy (preserving the diamond).
    seen[std::static_pointer_cast<void>(source)] = std::static_pointer_cast<void>(dest);
    return dest;
}
You can then either write constructors that take an existing instance and a seen map to deep copy the members, allowing them to be private.
template <typename T>
std::shared_ptr<T> make(const T & source, Seen & seen) {
return std::make_shared<T>(source, seen);
}
class C{
public:
C(){}
C(const C &, Seen &){}
};
class B{
std::shared_ptr<C> c;
public:
B(std::shared_ptr<C> c):c(c){}
B(const B & other, Seen & seen):c(deep_copy(other.c, seen)){}
};
class A{
std::shared_ptr<B> b1;
std::shared_ptr<B> b2;
public:
A(std::shared_ptr<B> b1, std::shared_ptr<B> b2):b1(b1), b2(b2){}
A(const A & other, Seen & seen):b1(deep_copy(other.b1, seen)), b2(deep_copy(other.b2, seen)){}
};
int main(){
auto a = init();
Seen a2_seen;
auto a2 = deep_copy(a, a2_seen);
}
Or you can have overloads of make for each type, where make<T> would need to be friended by T if the members were private.
std::shared_ptr<C> make(const C &, Seen &) {
return std::make_shared<C>();
}
std::shared_ptr<B> make(const B & other, Seen & seen) {
auto c = deep_copy(other.c, seen);
return std::make_shared<B>(c);
}
std::shared_ptr<A> make(const A & other, Seen & seen) {
auto b1 = deep_copy(other.b1, seen);
auto b2 = deep_copy(other.b2, seen);
return std::make_shared<A>(b1, b2);
}
|
72,965,037 | 72,981,545 | How To Package Binary Projects Using Conan? | The Problem:
The package's consumer couldn't load the package's binary's shared libraries.
find_package(MyThirdParty REQUIRED) # MyThirdParty is installed using Conan
find_program(binary_path MyThirdParty REQUIRED)
execute_process(COMMAND ${binary_path} COMMAND_ERROR_IS_FATAL ANY)
The execute_process command will fail because the MyThirdParty's shared libraries are missing.
How could I package the third-party binary projects?
The Minimal Reproducible Example:
Third-party project:
file(WRITE Library.hh "void Func();")
file(WRITE Library.cc "void Func() {}")
add_library(Library SHARED Library.hh Library.cc)
file(WRITE Main.cc "#include \"Library.hh\"\nint main() { Func(); }")
add_executable(MyThirdParty Main.cc)
target_link_libraries(MyThirdParty PRIVATE Library)
install(TARGETS MyThirdParty Library EXPORT MyThirdPartyConfig)
install(EXPORT MyThirdPartyConfig
NAMESPACE MyThirdParty::
DESTINATION lib/cmake/MyThirdParty
)
My attempt for packaging the third-party with Conan:
from conans import ConanFile, CMake, tools
class MyThirdPartyConan(ConanFile):
name = "MyThirdParty"
version = "1.0.0"
settings = "os", "compiler", "build_type", "arch"
def source(self):
tools.download(
filename = "CMakeLists.txt",
url = "https://gist.githubusercontent.com/gccore/9007084e1b307592ae040ceb5745bf5f/raw/419a96712145e8d24d4c9982ab3b7fd31d44b9f0/CMakeLists.txt")
def build(self):
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
cmake = CMake(self)
cmake.install()
The consumer:
find_package(MyThirdParty REQUIRED)
find_program(binary_path MyThirdParty REQUIRED)
execute_process(COMMAND ${binary_path} COMMAND_ERROR_IS_FATAL ANY)
And finally the conanfile.txt:
[requires]
MyThirdParty/1.0.0@Ghasem/Test
[generators]
cmake_find_package
But the consumer's CMake with the Conan package fails because of the third-party's shared library. When the consumer's CMake tries to execute the MyThirdParty binary it will fail because it couldn't find the libLibrary.so file.
My Environment:
OS: Fedora 35
Kernel: Linux 5.18.9-100.fc35.x86_64
Compiler: GCC 11.3.1 20220421
CMake: 3.22.2
Conan: 1.47.0
| On Linux we could use patchelf and change the binary RPATH during the packaging step:
def package(self):
    cmake = CMake(self)
    cmake.install()
    self.run("patchelf --set-rpath '$ORIGIN/../lib' " +
             self.package_folder + "/bin/MyThirdParty")
And for Windows, just put the shared library files beside the binary; the Windows linker/loader doesn't try hard to be smart (unlike Linux's linker/loader).
|
72,965,227 | 72,970,337 | How can I convert a multi-page PDF file to many .jpeg images with Vips in C++? | I'm trying to use vips in C++ to read a .PDF and convert it to .jpeg files. The problem is that the code saves all the pages in a single .jpeg file. How can I save them as separate .jpeg files?
My Code
VOption *voptions = new VOption();
voptions->set("dpi",150);
voptions->set("page", 0);
voptions->set("n", -1);
VImage in = VImage().pdfload("/Users/gui/Desktop/PDF_Reader/files/TEST_DOC_READER.pdf",voptions);
in.write_to_file("/Users/MyUser/Desktop/PDF_Reader/outputs/*.jpeg");
| I found a way to solve this using crop.
VImage in = VImage().pdfload("/Users/MyUser/Desktop/PDF_Reader/files/TEST_DOC_READER.pdf", voptions);
int pages = in.get_int("n-pages");
int h = in.height() / pages; // height of a single page
for (int i = 0; i < pages; i++) {
    in.crop(0, i * h, in.width(), h).jpegsave((outdir + to_string(i) + format).c_str());
}
|
72,965,466 | 72,966,229 | Does referencing a shared pointer in a lambda preserve object lifetime? | Basically the question from the title. Suppose I have to use some asynchronous API: does a reference to a local-scope shared_ptr in a lambda preserve its lifetime? And is this a safe practice?
class A
{
public:
static void foo();
};
void A::foo()
{
std::shared_ptr<MyType> MyTypePtr = std::make_shared<MyType>();
MyTypePtr->asyncAPI([&MyTypePtr]() // this would return as soon as called
{
// doStuffWith MyTypePtr...
});
}
| This lambda captures the shared_ptr by reference. That's what "&" means, in the capture list. It only captures a reference to the shared_ptr. When the function returns the shared_ptr gets destroyed, leaving the lambda holding a bag with a reference to a destroyed object. Any further usage of this object results in undefined behavior.
A lambda capture never preserves the lifetime of anything. It either captures by value, effectively copying the captured object (with the original object's lifetime not affected in any way), or it captures by reference.
|
72,967,144 | 72,991,974 | c++ get indices of duplicating rows in 2D array | The task is the following: find the indices of duplicate rows in a 2D array. Rows are considered duplicates if the 2nd and 4th elements of one row are equal to the 2nd and 4th elements of another row. The simplest way to do it is something like this:
std::unordered_set<int> result;
for (int i = 0; i < rows_count; ++i)
{
for (int j = i + 1; j < rows_count; ++j)
{
if (arr[i][2] == arr[j][2] && arr[i][4] == arr[j][4])
{
result.insert(j); // std::unordered_set has insert, not push_back
}
}
}
But if rows_count is very large this algorithm is too slow. So my question is there any way to get needed indices using some data structures (from stl or other) with only single loop (without nested loop)?
| You could take advantage of the properties of a std::unordered_set.
A small helper class will further ease up things.
So, we can store in a class the 2nd and 4th value and use a comparision function to detect duplicates.
The std::unordered_set has, besides the data type, 2 additional template parameters.
A functor for equality and
a functor for calculating a hash function.
So we will add 2 functions to our class and make it a functor for both parameters at the same time. In the below code you will see:
std::unordered_set<Dupl, Dupl, Dupl> dupl{};
So, we use our class additionally as 2 functors.
The rest of the functionality will be done by the std::unordered_set
Please see below one of many potential solutions:
#include <vector>
#include <unordered_set>
#include <iostream>
struct Dupl {
Dupl() {}
Dupl(const size_t row, const std::vector<int>& data) : index(row), firstValue(data[2]), secondValue(data[4]){};
size_t index{};
int firstValue{};
int secondValue{};
// Hash function. It must be consistent with the equality comparison below,
// so the index is deliberately not part of the hash.
std::size_t operator()(const Dupl& d) const noexcept {
return d.firstValue + (d.secondValue << 8);
}
// Comparison
bool operator()(const Dupl& lhs, const Dupl& rhs) const {
return (lhs.firstValue == rhs.firstValue) and (lhs.secondValue == rhs.secondValue);
}
};
std::vector<std::vector<int>> data{
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, // Index 0
{2, 3, 4, 5, 6, 7, 8, 9, 10, 11}, // Index 1
{3, 4, 42, 6, 42, 8, 9, 10, 11, 12}, // Index 2 ***
{4, 5, 6, 7, 8, 9, 10, 11, 12, 13}, // Index 3
{5, 6, 42, 8, 42, 10, 11, 12, 13, 14}, // Index 4 ***
{6, 7, 8, 9, 10, 11, 12, 13, 14, 15}, // Index 5
{7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, // Index 6
{8, 9, 10, 11, 12, 13, 14, 15, 16, 17}, // Index 7
{9, 10, 42, 12, 42, 14, 15, 16, 17, 18}, // Index 8 ***
{10, 11, 12, 13, 14, 15, 16, 17, 18, 19}, // Index 9
};
int main() {
std::unordered_set<Dupl, Dupl, Dupl> dupl{};
// Find the unique rows
for (size_t i{}; i < data.size(); ++i)
dupl.insert({i, data[i]});
// Show some debug output
for (const Dupl& d : dupl) {
std::cout << "\nIndex:\t " << d.index << "\t\tData: ";
for (const int i : data[d.index]) std::cout << i << ' ';
}
}
|
72,967,220 | 72,967,387 | How to write a wrapper around a templated class that modifies the template parameters in C++? | We have a templated class A and derived classes A1 and A2:
template<typename T> class A {
};
template<typename T> class A1: public A<T>{
};
template<typename T> class A2: public A<T>{
};
I need a wrapper that accepts any class of type A*, ie any derived type of A, as a template parameter and modify its template parameter:
template<typename T, typename Atype> class WrapperA {
Atype<pair<T, int>> atypeobj;
};
Hoping to be used as follows:
WrapperA<int, A1<int>> w1;
WrapperB<int, A2<int>> w2;
The WrapperA needs to work only with derived classes of A.
| You do not need to explicitly state int as argument. The template and its argument can be dissected from a given instantiation by partial specialization (provided that all derived classes have the same number of arguments). The fact that there is a base class A is actually not that relevant when the derived classes are templates too.
#include <utility>
template <typename T> struct A { };
template <typename T> struct A1 : A<T> {};
// primary template (no definition needed)
template<typename Atype> struct WrapperA;
// specialization when Atype is instantiation of a template D with
// one type argument T
template <template <typename> typename D,typename T> struct WrapperA<D<T>> {
D<std::pair<T,int>> atypeobj;
};
Then use it
WrapperA<A1<int>> w;
|
72,968,151 | 72,969,376 | shared_ptr CUDA cudaStream_t | I am trying to make a CUDA stream instance automatically delete itself once all its usages have been removed, and I was wondering whether, when calling cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking), the object is created on the heap or not (I am assuming it is, but I am not sure).
In the end I want to do something like this:
struct CUDAStreamDeleter {
void operator()(cudaStream_t* p) const
{
cudaStreamDestroy(*p);
}
};
int main() {
int numberOfStreams = 4;
vector<shared_ptr<cudaStream_t>> streams(numberOfStreams);
for (size_t i = 0; i < numberOfStreams; ++i)
{
cudaStream_t stream;
cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking);
streams[i] = shared_ptr<cudaStream_t>(&stream, CUDAStreamDeleter());
}
}
Edit:
As per @wohlstad a better template would be:
class StreamWrapper {
public:
StreamWrapper() {
gpuErrchk(cudaStreamCreateWithFlags(&this->stream, cudaStreamNonBlocking));
}
~StreamWrapper() { gpuErrchk(cudaStreamDestroy(stream)); }
cudaStream_t& get() { return stream; }
cudaStream_t* ref() { return &this->stream; }
private:
cudaStream_t stream;
};
int main(){
int numberOfStreams = 10;
vector<shared_ptr<StreamWrapper>> streamsTemp(numberOfStreams);
for (size_t i = 0; i < numberOfStreams; ++i)
{
streamsTemp[i] = shared_ptr<StreamWrapper>(new StreamWrapper());
}
// Stream testing
for (size_t i = 0; i < numberOfStreams; ++i)
{
int * d_i;
gpuErrchk(cudaMallocAsync(&d_i, sizeof(int), streamsTemp[i]->get()));
gpuErrchk(cudaMemcpyAsync(d_i, &i, sizeof(int), cudaMemcpyHostToDevice, streamsTemp[i]->get()));
int out;
gpuErrchk(cudaMemcpyAsync(&out, d_i, sizeof(int), cudaMemcpyDeviceToHost, streamsTemp[i]->get()));
gpuErrchk(cudaFreeAsync(d_i, streamsTemp[i]->get()));
gpuErrchk(cudaStreamSynchronize(streamsTemp[i]->get()));
cout << "Out: " << to_string(out) << " In: " << to_string(i);
}
}
| As mentioned in several comment above (including mine), your first attempt involves creating std::shared_ptrs managing dangling pointers.
This is because these pointers are actually addresses of automatic variables created on the stack in the scope of the loop body (and therefore become dangling once the variables get out of scope).
However - you can use the RAII idiom to achieve what you need:
In the code below, StreamWrapper will create the stream in the ctor, and destroy it in the dtor.
Code:
#include "cuda_runtime.h"
#include <vector>
#include <memory>
#include <iostream>
#include <string>
#define gpuErrchk(X) X // use your current definition of gpuErrchk
// RAII class:
class StreamWrapper {
public:
StreamWrapper() { gpuErrchk(cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking)); }
~StreamWrapper() { gpuErrchk(cudaStreamDestroy(stream)); }
cudaStream_t& get() { return stream; }
private:
cudaStream_t stream;
};
int main() {
int numberOfStreams = 10;
std::vector<std::shared_ptr<StreamWrapper>> streamsTemp(numberOfStreams);
for (size_t i = 0; i < numberOfStreams; ++i)
{
streamsTemp[i] = std::make_shared<StreamWrapper>();
}
// Stream testing
for (size_t i = 0; i < numberOfStreams; ++i)
{
int* d_i;
gpuErrchk(cudaMallocAsync(&d_i, sizeof(int), streamsTemp[i]->get()));
gpuErrchk(cudaMemcpyAsync(d_i, &i, sizeof(int), cudaMemcpyHostToDevice, streamsTemp[i]->get()));
int out;
gpuErrchk(cudaMemcpyAsync(&out, d_i, sizeof(int), cudaMemcpyDeviceToHost, streamsTemp[i]->get()));
gpuErrchk(cudaFreeAsync(d_i, streamsTemp[i]->get()));
gpuErrchk(cudaStreamSynchronize(streamsTemp[i]->get()));
std::cout << "Out: " << std::to_string(out) << " In: " << std::to_string(i) << std::endl;
}
}
Notes:
When initializing a std::shared_ptr it is better to use std::make_shared. See here: Difference in make_shared and normal shared_ptr in C++.
Better to avoid using namespace std - see here: Why is "using namespace std;" considered bad practice?.
|
72,968,744 | 72,968,842 | Using class alias for its constructor definition | This minimal reproducible piece of code
class MyClass
{
public:
explicit MyClass();
~MyClass();
};
using MyClassAlias = MyClass;
MyClassAlias::MyClassAlias()
{
}
MyClassAlias::~MyClassAlias()
{
}
int main()
{
MyClassAlias obj;
return 0;
}
gives the error:
a.cpp:11:1: error: ISO C++ forbids declaration of ‘MyClassAlias’ with no type [-fpermissive]
11 | MyClassAlias::MyClassAlias()
| ^~~~~~~~~~~~
a.cpp:11:1: error: no declaration matches ‘int MyClass::MyClassAlias()’
a.cpp:11:1: note: no functions named ‘int MyClass::MyClassAlias()’
a.cpp:1:7: note: ‘class MyClass’ defined here
1 | class MyClass
| ^~~~~~~
Only if I replace MyClassAlias::MyClassAlias() with MyClassAlias::MyClass(), it gets cured. At the same time, as you can see, it is okay to have MyClassAlias::~MyClassAlias() (the compiler gives no error).
Is there any way to fix this: to have consistency in naming?
| The "names" (although these are not names in the technical sense of the standard) of the constructor and destructor are MyClass and ~MyClass respectively. They are based on the injected class name. You need to use these two to define them or write any declaration for them. You cannot use an alias name for these.
The same does not apply to the class name before the ::. It can be the name of an alias.
It seems that GCC accepts the alias as well for the destructor definition, but as far as I can tell that is not standard-conforming.
|
72,970,796 | 72,971,329 | C++ / WinAPI: How do I get a value from a function in the injected x64 DLL? | The x86 way of doing this is easy and straightforward: through GetExitCodeThread. Unfortunately it's limited to returning 32-bit values. As I understand it, WinAPI provides no 64-bit alternative.
So the problem is - I have no trouble calling the injected function by finding its base address through CreateToolhelp32Snapshot module loop then running CreateRemoteThread using it, it does whatever I wrote in it as it should but how exactly do I retrieve its return value without GetExitCodeThread? As an example I want to retrieve a 64bit pointer or even a struct (or a 64bit pointer to one) as a result of this function. What would be a correct way of doing it? And if it's ReadProcessMemory - then which memory address/offset should I read for a return value?
Edit: additional info:
I'm calling a function inside an injected DLL. The function executes some stuff (e.g. collection of data from the process it's injected into, which is successful) - the problem is I want to retrieve one of those variables back into the calling process (the one that calls CreateRemoteThread). GetExitCodeThread is a no go because variables are 64 bit.
code snippet for reference (hExportThread function returns uint64_t in DLL):
// LLAddr = LoadLibraryA address, lpBaseAddr = dll path related argument
HANDLE hInjectionThread = CreateRemoteThread(hProc, NULL, NULL, LLAddr, lpBaseAddr, NULL, &idThread);
WaitForSingleObject(hInjectionThread, INFINITE);
dllBaseAddr = getDLLBaseAddr(); // gets base address of the injected DLL
dllExportOffset = getDLLExportOffset(dllExportName.c_str()); // opens the DLL in the local buffer and gets the correct offset
LPTHREAD_START_ROUTINE lpNewThread = LPTHREAD_START_ROUTINE(dllBaseAddr + dllExportOffset); // gets the correct address of the function in the injected dll
HANDLE hExportThread = CreateRemoteThread(hProc, NULL, NULL, lpNewThread, NULL, NULL, 0); // executes injected function
WaitForSingleObject(hExportThread, INFINITE);
| I suggest that you use a shared memory region, have it open in both the injecting process and the injected DLL. When the injected library finishes you know that the memory should be ready.
Doing this, you aren't limited to 4 or 8 bytes, you can make the region of whatever size is needed to return the collected data.
|
72,970,939 | 72,971,155 | Why the object is not getting modified when passing the callable object with reference to async | Since we are passing the object by reference to std::async, it will call operator() on the same object, so why is its member variable not getting updated?
struct Y
{
int m_val;
Y():m_val(0){}
double operator()(double val)
{
m_val = val*val;
return m_val;
}
};
int main()
{
Y y;
auto f=std::async(std::ref(y),2);
cout <<"Returned value " << f.get() << ", Modified value " << y.m_val << endl;
return 0;
}
output-
Returned value 4, Modified value 0
As per my understanding it should have called y(2), so y.m_val should be updated to 4; however, it is printed as 0 in the output. Please clarify what I am missing.
| Before C++17, the evaluation of arguments to << was unsequenced. That means you have no guarantee that f.get() would be called before y.m_val's value is taken.
As a consequence, your program has a potential data race, and therefore undefined behavior.
Since C++17, the evaluation order is specified as left-to-right, and so the result you expect is guaranteed.
For C++11/14 you can fix this by separating the output statement into multiple statements:
std::cout << "Returned value " << f.get();
std::cout << ", Modified value " << y.m_val << endl;
or even by calling f.get() in a separate statement before the output (storing the result to a variable), which is generally a good idea anyway if your call has side effects.
|
72,971,188 | 72,971,223 | Why main loop stops at last iteration | My program should accept input such as:
3
UUUDU
DDD
UU
the output should be
3
0
2
but it stops at 0
int t;
cin >> t;
for(int i=0;i<t;i++){
string s;
vector<int> n;
int m;
cin>>s;
for(int j=0;j<s.length();j++){
if( s.at(j) =='U' ) {
m++;
}
else {
n.push_back(m);
m=0;
}
}
if(n.size()>0){
sort(n.begin(),n.end());
}
cout<<n[0]<<endl;
}
| You mustn't read n[0] when n has no elements.
To avoid the error, the part
if(n.size()>0){
sort(n.begin(),n.end());
}
cout<<n[0]<<endl;
should be
if(n.size()>0){
sort(n.begin(),n.end());
cout<<n[0]<<endl;
}
else {
cout<<0<<endl;
}
Also there are other logical errors:
The variable m is used without initialization.
The final value of m (corresponding to the final chunk of U) is not pushed.
To get the maximum value, the last (not the first) element should be read after sorting.
Fixing these errors, your program will be:
int t;
cin >> t;
for(int i=0;i<t;i++){
string s;
vector<int> n;
int m=0; // initialize m
cin>>s;
for(int j=0;j<s.length();j++){
if( s.at(j) =='U' ) {
m++;
}
else {
n.push_back(m);
m=0;
}
}
if(m>0){
n.push_back(m); // push the final value of m
}
if(n.size()>0){
sort(n.begin(),n.end());
cout<<n[n.size()-1]<<endl; // read the last element
}
else{
cout<<0<<endl;
}
}
|
72,971,696 | 72,971,740 | iterating through an array to transfer its elements to a vector with certain conditions (c++) | I am a Grade 10 student taking a Computer Science course over the summer and I am having trouble with my homework question.
The question asks to write code that will allow a user to enter 6 grades and sort the grades into two different vectors; one that stores passing grades and another that stores failing grades (>=60 means you pass). In the end, it wants you to print all the passing and failing grades in their respective place (passing grades on one line, failing grades on the other).
So far, my code accepts the user's input and successfully stores it into an integer array with 6 elements. However, after accepting the input, this error shows up:
terminate called after throwing an instance of 'std::out_of_range'
what(): vector::_M_range_check: __n (which is 0) >= this->size() (which is 0)
signal: aborted (core dumped)
Please look at the snippet below:
int userGrades[6];
vector <int> passingGrades;
vector <int> failingGrades;
for (int i = 0; i < 6; i++) {
cout << "Enter the grades of Student " << i+1 << ": ";
cin >> userGrades[i];
}
for (int x = 0; x < 6; x++) {
for (int y = 0; y < 6; y++) {
if (userGrades[x] >= 60) {
passingGrades.at(x) = (userGrades[x]);
break;
}
else {
failingGrades.at(x) =(userGrades[x]);
break;
}
}
}
int pgSize = passingGrades.size();
int fgSize = failingGrades.size();
cout << "The passing grades are: ";
for (int a = 0; a < pgSize; a++) {
cout << passingGrades[a] << ", ";
}
cout << "The failing grades are: ";
for (int b = 0; b < fgSize; b++) {
cout << failingGrades[b] << ", ";
}
| The vectors passingGrades and failingGrades have no elements, so any access to their "elements" are invalid.
You can use std::vector::push_back() to add elements to a std::vector.
Also note that the loop using y looks meaningless because the code inside the loop doesn't use y and executes break; in the first iteration.
In conclusion, the part:
for (int x = 0; x < 6; x++) {
for (int y = 0; y < 6; y++) {
if (userGrades[x] >= 60) {
passingGrades.at(x) = (userGrades[x]);
break;
}
else {
failingGrades.at(x) =(userGrades[x]);
break;
}
}
}
should be:
for (int x = 0; x < 6; x++) {
if (userGrades[x] >= 60) {
passingGrades.push_back(userGrades[x]);
}
else {
failingGrades.push_back(userGrades[x]);
}
}
|
72,972,139 | 72,976,471 | Passing complex data structures between Fortran and C++ | Background: I am tasked with a work project of creating interoperability between an existing large Fortran code base and a modern C++ GUI using Qt. I am using Qt Creator 6.0.2 based on Qt 6.2.2 (MSVC 2019, 64 bit) and VS 2019 Pro with the Intel Fortran Compiler.
I have been able to successfully pass basic data types and simple structures/UDT between Fortran and Qt, but as I try to get into more complex data structures I am running into lots of issues and confusion. I've spent many hours googling this, but everything I can find is limited to basic data type examples.
So my question ultimately is if you have a data structure in Fortran that looks like this:
module example
type top_struct
type(sub_struct), allocatable :: sStruct(:)
complex , allocatable :: complex1(:)
real , allocatable :: real1(:, :)
integer , allocatable :: ints1(:, :)
character , allocatable :: label1(:)
end type top_struct
type sub_struct
complex , allocatable :: complex2(:)
real , allocatable :: real2(:, :)
integer , allocatable :: ints2(:, :)
character , allocatable :: label2(:)
end type sub_struct
end module
how would you implement code in both C++ and Fortran that would allow you to pass this structure back and forth between them?
I found other questions that discussed using pointers to work around there not being a direct correlation between allocatable in Fortran and something like std::vector in C++, but they didn't give any examples as to how to handle this if the data is inside of a structure/UDT.
Any help would be greatly appreciated!
| Unless your structure is bind(C), no exact correspondence between C(++) and Fortran can be guaranteed. The compilers can choose to use different paddings or similar. But you cannot make a bind(C) structure with allocatable components. All that is left are hacks.
As a workaround you could make a proxy structure with type(c_ptr) pointers that point to those allocatable arrays and pass this proxy to C(++).
|
72,972,898 | 72,973,086 | How to Initialize a Mutex Inside a Struct? | I'm kind of new to multithreading, and this is a small piece of a very large homework for my operating systems class. Currently, I have a C++ struct as follows:
struct arguments {
std::string string1;
std::string string2;
pthread_mutex_t bsem;
pthread_cond_t wait = PTHREAD_COND_INITIALIZER;
pthread_mutex_init(&bsem, NULL);
int turn_index = 0; // To identify which thread's turn it is.
};
The line containing:
pthread_mutex_init(&bsem, NULL);
Is giving me errors, namely the two:
expected an identifier before &
expected an identifer before _null
What is a quick resolve to this? I've seen someone make a constructor for the struct object and initalized the mutex in the constructor, but why do we need that? Also, is there a way to do it without a constructor?
Thank you very much.
| You can't perform non-declarative statements inside of a struct declaration. What you can do is add a constructor (and destructor, in this case) that performs the extra statements you need, eg:
struct arguments {
std::string string1;
std::string string2;
pthread_mutex_t bsem;
pthread_cond_t wait = PTHREAD_COND_INITIALIZER;
int turn_index = 0; // To identify which thread's turn it is.
arguments() {
pthread_mutex_init(&bsem, NULL);
}
~arguments() {
pthread_mutex_destroy(&bsem);
}
};
That being said, if you are using C++11 or later, you should use std::mutex (and std::condition_variable) instead:
struct arguments {
std::string string1;
std::string string2;
std::mutex bsem;
std::condition_variable wait;
int turn_index = 0; // To identify which thread's turn it is.
};
And consider using std::thread instead of pthreads.
|
72,973,635 | 72,973,874 | How to properly check keys in a map c++ | I have been using maps lately and wanted to know how to check for existing keys in a map.
This is how I would add/update keys:
map<int> my_map;
my_map[key] = value;
The [] operator adds a new key if one doesn't exist.
If I were to check for a key like this,
map<int> my_map;
if(check_value == my_map[key]){....}
Would this condition return false and also add a new key to my_map?
If so, would it be cleaner to add the following check before doing anything with the [] operator (possibly add a helper function that always does this for you).
if(my_map.find(key) == my_map.end()) {
if(check_value == my_map[key]){....}
}
I realize I kinda answered my own question here but is there a cleaner way to achieve this? Or to not use the [] altogether?
Links and tutorials are appreciated.
Thank you.
| In C++20, there is std::map::contains, which returns a bool.
if ( my_map.contains(key) ) { ... }
Before C++20, there is also std::map::count, which (unlike std::multimap::count) can only ever return 0 or 1.
if ( my_map.count(key) ) { ... }
|
72,974,766 | 72,974,905 | How to find middle of a button on screen | Ok I'm coding a button. I have done the box collision and all of the other stuff.
The problem I'm having is putting text in the middle of the button. No matter what I try it doesn't work :/ .
Please help I'm bad at math.
x = 120, y = 120, w = 120, h = 50
Screen dimensions = 480, 240
Is there an equation for this? I tried everything.
The best thing I have so far is
Brain.Screen.printAt(x + (w / 2, y + (h / 2), false, "Bruh");
// printAt args int x, int y, bool opaque, const char *text
The problem with that is that it's not at the exact center; it's a little bit to the top right.
https://i.stack.imgur.com/vA2UQ.png
| You can compute the center-point of the button easily enough:
const int buttonCenterX = x+(w/2);
const int buttonCenterY = y+(h/2);
... for the next step you'll need to center the text around that point. If your GUI API doesn't provide a way to center the text for you, you can calculate the appropriate x/y position by hand, assuming you know (or have a way to calculate) the pixel-width and pixel-height of the text:
const int textHeight = [text string's height, in pixels]
const int textWidth = [text string's width, in pixels]
const int textLeft = buttonCenterX-(textWidth/2);
const int textTop = buttonCenterY-(textHeight/2);
drawTextAt(textLeft, textTop, textString); // assuming drawTextAt() draws starting at the top-left of the string
|
72,974,836 | 72,974,882 | How to convert an absolute path to a relative one? | Let's say I have a base path D:\files and an absolute path D:files\images\1.jpg.
Is there a way to convert this absolute path into a relative one with respect to the base path?
| Using std::filesystem::relative (C++17 needed)
#include <filesystem>
#include <iostream>
int main() {
std::cout << std::filesystem::relative("D:files/images/1.jpg", "D:files") << "\n";
std::cout << std::filesystem::relative("D:files\\images\\1.jpg", "D:files") << "\n";
}
Output
"images\\1.jpg"
"images\\1.jpg"
Demo
|
72,974,869 | 72,974,904 | Correct syntax for operator >> overloading to members of nested class? | I have class Address nested in class Student, and I want to feed each input line into the class Student with operator>> overloading through istream.
class Address {
public:
....
private:
int house_no;
string street;
string city;
string postcode
};
class Student {
public:
....
friend istream& operator>>(istream& iss, Student& Obj) {
iss >> Obj.roll_no >> Obj.name
>> Obj.addr.house_no >> Obj.addr.street >> Obj.addr.city >> addr.postcode; //problem line
return iss;
}
private:
int roll_no;
string name;
Address addr;
};
//Sample line of test data (fields delimiter is comma):
1101, Alice Rilley, 421, Main St., New York, 230011
2202, Bert McDonald, 152, Railroad St., Madison, 120022
...
My chained istream (problem line) did not work out with error message as:
./include/Student.h: In function ‘std::istream& operator>>(std::istream&, Student&)’:
./include/Student.h:23:60: error: ‘int Address::house_no’ is private within this context
23 | iss >> Obj.roll_no >> Obj.name >> Obj.addr.house_no >> ......
I found only one similar post in SO, but it is for "operator << " opposite to my need. And, it was quite difficult for me to digest.
What is the correct syntax for me to read in a line and feed the line into the object members (especially the nested class Address) of class Student? Thanks a lot!
| The problem is not in the operator itself, but in the visibility of the members. You are using the Address class as a member of Student, but the Address::house_no member is not accessible from it (not only for the input operator).
One solution (a simple but bad one) would be to open the members of Address:
class Address {
public:
....
//private:
public:
int house_no;
string street;
string city;
string postcode
};
But a better approach would be to define an input operator for Address:
class Address {
public:
....
private:
friend istream& operator>>(istream& iss, Address& Obj);
int house_no;
string street;
string city;
string postcode;
};
class Student {
public:
....
friend istream& operator>>(istream& iss, Student& Obj);
private:
int roll_no;
string name;
Address addr;
};
istream& operator>>(istream& iss, Address& Obj) {
iss >> Obj.house_no >> Obj.street >> Obj.city >> Obj.postcode;
return iss;
}
istream& operator>>(istream& iss, Student& Obj) {
iss >> Obj.roll_no >> Obj.name >> Obj.addr;
return iss;
}
|
72,976,808 | 72,977,444 | Multithread share 2 variable problem with nonlock | I have a question about multithread share variable problem.
the two variable is like:
{
void* a;
uint64_t b;
}
Only one thread can modify the two variables; other threads will frequently read them.
I want to change a and b at the same time, so other threads see the change together (see new value a and new value b).
Because many thread will frequently read these two variables, so I don't want to add lock, I want to ask if there is a method to combine change a and b operation, make it like a atomic operation? like use memory fence, will it work? Thank you!
| You're looking for a SeqLock.
It's ideal for this use-case, especially with infrequently-changed data. (e.g. like a time variable updated by a timer interrupt, read all over the place.)
Implementing 64 bit atomic counter with 32 bit atomics
Optimal way to pass a few variables between 2 threads pinning different CPUs
SeqLock advantages include perfect read-side scaling (readers don't need to get exclusive ownership of any cache lines, they're truly read-only not just lock-free), so any number of readers can read as often as they like with zero contention with each other. The downside is occasional retry, if a reader happens to try to read at just the wrong time. That's rare, and doesn't happen when the writer hasn't just written something.
So readers aren't quite wait-free, and in fact if the writer sleeps at just the wrong time, the readers are stuck retrying until it wakes up again! So overall the algorithm isn't even lock-free or obstruction-free. But the very common fast-path is just two extra reads from the same cache line as the data, and whatever is necessary for LoadLoad ordering in the reader. If there's been no write since the last read, the loads can all be L1d cache hits.
The only thing better is if you have efficient 16-byte atomic stores and loads, like Intel (but not AMD yet) CPUs with AVX, if your compiler / libatomic uses it for 16-byte loads of std::atomic<struct_16bytes> instead of x86-64 lock cmpxchg16b. (In practice most AMD CPUs are thought to have atomic 16-byte load/store as well, but only Intel has officially put it in their manuals that the AVX feature bit implies atomicity for aligned 128-bit load/store such as movaps, so compilers can safely start using it.)
Or AArch64 guarantees 16-byte atomicity for plain stp / ldp in ARMv8.4 I think.
But without those hardware features, and compiler+options to take advantage of them, 16-byte loads often get implemented as an atomic RMW, meaning each reader takes exclusive ownership of the cache line. That means reads contend with other reads, instead of the cache line staying in shared state, hot in the cache of every core that's reading it.
like use memory fence, will it work?
No, memory fences can't create atomicity (glue multiple operations into a larger transaction), only create ordering between operations.
Although you could say that the idea behind a SeqLock is to carefully order the write and reads (wrt. to sequence variable) in order to detect torn reads and retry when it happens. So yes, barriers are important for that.
|
72,977,135 | 72,978,419 | In MFC, how to add Buttons according to the user input | What I want to achieve is that user can input a number in the Edit Control, and according to that number the exact same number of buttons will be created (in the same dialog would be the best).
How will I be able to achieve that?
| Dynamically creating controls with MFC is a two-step process:
Construct a C++ class instance that will represent the control by invoking the c'tor (CButton::CButton)
Construct the actual control by calling CButton::Create
If you need to create n button controls, perform this sequence n times.
This solves the easy part. The more challenging issue is how to respond to button click messages. Since the message map macros are strictly a static, compile-time way to wire up events with event handlers, they are difficult to use with dynamically created controls. If you can restrict your UI to a maximum number of button controls, you could wire up an event handler using ON_COMMAND_RANGE or ON_CONTROL_RANGE for the BN_CLICKED notification code.
All of that is non-trivial with several distinct solutions. You should probably ask a separate question in case you're interested in how to tackle that problem.
While that answers the question that was asked, a far easier solution would be to statically lay out the dialog with the maximum number of allowed buttons (e.g. in your .rc script), and dynamically change the visibility of controls in response to user input (see CWnd::ShowWindow).
Doing that allows you to statically declare your message map entries. Since hidden windows (SW_HIDE) do not generate any input messages you don't have to do anything in addition to toggling visibility between SW_HIDE and SW_SHOWNOACTIVATE.
|
72,977,403 | 72,977,728 | std::tolower example from website not giving expected result | I found an example of std::tolower, here: https://en.cppreference.com/w/cpp/string/byte/islower
There's an example which, according to the website, should return false and true for this bit of code:
#include <iostream>
#include <cctype>
#include <clocale>
int main()
{
unsigned char c = '\xe5'; // letter å in ISO-8859-1
std::cout << "islower(\'\\xe5\', default C locale) returned "
<< std::boolalpha << (bool)std::islower(c) << '\n';
std::setlocale(LC_ALL, "en_GB.iso88591");
std::cout << "islower(\'\\xe5\', ISO-8859-1 locale) returned "
<< std::boolalpha << (bool)std::islower(c) << '\n';
}
But copy-pasting this bit in my own IDE gives me false and false, and so does the run this code button on the website itself.
EDIT:
So the locale is not being set properly. Using windows 10 with latest Jetbrains Rider.
This works:
assert(std::setlocale(LC_ALL, "en_US.UTF-8"));
//assert(std::setlocale(LC_ALL, "en_GB.iso88591"));
printf ("Locale is: %s\n", setlocale(LC_ALL,NULL) );
But uncommenting the other locale will throw error.
| OK, so the problem is that on Windows locale names are not the same as on Linux.
On Windows, ISO-8859-1 is covered by code page 1252, so one possible locale name is .1252:
std::setlocale(LC_ALL, ".1252");
I'm not sure, but it is possible that .Windows-1252 will do the job too.
You can also try boost.locale to unify locale names (so the code could work the same on all platforms). Since this is C++, you would then use the std::tolower overload that takes a std::locale.
|
72,977,690 | 72,978,056 | Simple template to pass a c++ member method as a callback | I have a set of classes which have many very similar methods, grouped into 2 call signatures. These calls are of the form:
bool fn( const std::string& ) and bool fn( const std::vector<std::string>& )
I need to do some common logic around each call and I'm trying to make my life easy but without much luck. Conceptually, the wrapper logic signature looks like the following:
bool wrap( config::method, key, values, flag )
I have code that compiles but it is more complicated than I want. This is the actual signature (there will be two wrappers, one for each signature type):
template <typename T, bool(T::*fn)(const std::string&)>
bool CFG_STR( T& cfg, const char* key, Nodes data, bool flag ) { /* ... cfg.*fn(x) ... */ }
and this is the caller:
configClassWithVeryLongName config;
success &= CFG_STR<
configClassWithVeryLongName,
&configClassWithVeryLongName::methodCall
>( config, "key", dataStore, configFlag );
Is there a more compact and less fragile way to write this? I can reduce it to this:
#define MAC_STR(c,m,k,d,f) CFG_STR<typeof( c ), m>( c, k, d, f )
configClassWithVeryLongName config;
success &= MAC_STR( config, &configClassWithVeryLongName::methodCall, "key", dataStore, configFlag );
I would really like to remove the class name prefix and simply pass methodCall or "methodCall", and not use CPP macros. Is there a clever way to do this cleanly? I've tried a few things like combining typeof with the # paste macro to form the method name but without any success. Modifying the 'config...' classes is not an option because they do not all inherit from a common base class.
Thanks.
| You can just pass the member pointer as function argument:
template <typename T>
bool CFG_STR( T& cfg, bool(T::*fn)(const std::string&), const char* key, Nodes data, bool flag ) { /*...*/ }
And instead of repeating the class name you can just write decltype(config):
success &= CFG_STR( config, &decltype(config)::methodCall, "key", dataStore, configFlag );
|
72,977,902 | 72,978,084 | 2d push_back doesnt save to vector values | I have a 2d vector which should save x and y coordinates. Everything works as intended except saving this values to vector. What did I do wrong?
void Game::GetShips(Board &b)
{
vector<vector<int>> shipCors;
for (int i = 0; i < BOARDSIZE; i++) {
for (int j = 0; j < BOARDSIZE; j++) {
if (b.getSpaceValue(i, j) == SHIP) {
cout << "X :"<< i<<"\n Y:"<<j<<'\n';
shipCors[i].push_back(j);
}
}
}
cout<< shipCors.size()<<'\n';
}
| You declared an empty vector
vector<vector<int>> shipCors;
So you may not use the subscript operator
shipCors[i].push_back(j);
You could write
for (int i = 0; i < BOARDSIZE; i++) {
shipCors.resize( shipCors.size() + 1 );
for (int j = 0; j < BOARDSIZE; j++) {
if (b.getSpaceValue(i, j) == SHIP) {
cout << "X :"<< i<<"\n Y:"<<j<<'\n';
shipCors[i].push_back(j);
}
}
}
Note that because you are indexing with i, you need to add a "row" to the vector even if that row stays empty after the inner for loop executes.
It will be even better to resize the vector initially before the for loops like
vector<vector<int>> shipCors;
shipCors.resize( BOARDSIZE );
for (int i = 0; i < BOARDSIZE; i++) {
for (int j = 0; j < BOARDSIZE; j++) {
if (b.getSpaceValue(i, j) == SHIP) {
cout << "X :"<< i<<"\n Y:"<<j<<'\n';
shipCors[i].push_back(j);
}
}
}
An alternative approach is to have a vector declared like
std::vector<std::pair<int, int>> shipCors;
In this case your loop will look like
for (int i = 0; i < BOARDSIZE; i++) {
for (int j = 0; j < BOARDSIZE; j++) {
if (b.getSpaceValue(i, j) == SHIP) {
cout << "X :"<< i<<"\n Y:"<<j<<'\n';
shipCors.emplace_back(i, j);
}
}
}
Or to keep the data sorted you can declare a set like
std::set<std::pair<int, int>> shipCors;
|
72,978,401 | 72,978,874 | Why is masking needed before using a pshufb shuffle as a lookup table for nibbles? | This code comes from https://github.com/WojciechMula/sse-popcount/blob/master/popcnt-avx2-lookup.cpp.
std::uint64_t popcnt_AVX2_lookup(const uint8_t* data, const size_t n) {
size_t i = 0;
const __m256i lookup = _mm256_setr_epi8(
/* 0 */ 0, /* 1 */ 1, /* 2 */ 1, /* 3 */ 2,
/* 4 */ 1, /* 5 */ 2, /* 6 */ 2, /* 7 */ 3,
/* 8 */ 1, /* 9 */ 2, /* a */ 2, /* b */ 3,
/* c */ 2, /* d */ 3, /* e */ 3, /* f */ 4,
/* 0 */ 0, /* 1 */ 1, /* 2 */ 1, /* 3 */ 2,
/* 4 */ 1, /* 5 */ 2, /* 6 */ 2, /* 7 */ 3,
/* 8 */ 1, /* 9 */ 2, /* a */ 2, /* b */ 3,
/* c */ 2, /* d */ 3, /* e */ 3, /* f */ 4
);
const __m256i low_mask = _mm256_set1_epi8(0x0f);
__m256i acc = _mm256_setzero_si256();
#define ITER { \
const __m256i vec = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(data + i)); \
const __m256i lo = _mm256_and_si256(vec, low_mask); \
\\\ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ why do we need this?
const __m256i hi = _mm256_and_si256(_mm256_srli_epi16(vec, 4), low_mask); \
const __m256i popcnt1 = _mm256_shuffle_epi8(lookup, lo); \
const __m256i popcnt2 = _mm256_shuffle_epi8(lookup, hi); \
local = _mm256_add_epi8(local, popcnt1); \
local = _mm256_add_epi8(local, popcnt2); \
i += 32; \
}
while (i + 8*32 <= n) {
__m256i local = _mm256_setzero_si256();
ITER ITER ITER ITER
ITER ITER ITER ITER
acc = _mm256_add_epi64(acc, _mm256_sad_epu8(local, _mm256_setzero_si256()));
}
...rest are unrelated to the question
The code is used to replace the builtin_popcnt function, which counts the number of 1s in a given input in binary format.
what bothers me are these two lines:
const __m256i lo = _mm256_and_si256(vec, low_mask); \
const __m256i hi = _mm256_and_si256(_mm256_srli_epi16(vec, 4), low_mask); \
according to Intel intrinsic guide https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#techs=AVX,AVX2&ig_expand=6392,305,6221,6389,6389,6221,6188,6769,6389,124,6050,6389&text=mm256_shuffle ,the _mm256_shuffle_epi8 instruction only looks at the lower 4 bits of your packed chars b:
__m256i _mm256_shuffle_epi8 (__m256i a, __m256i b)
FOR j := 0 to 15
i := j*8
IF b[i+7] == 1
dst[i+7:i] := 0
ELSE
index[3:0] := b[i+3:i]
\\\ ^^^^^^^^^^^^^^^^^^^^^^ only look at lower 4 bits
dst[i+7:i] := a[index*8+7:index*8]
FI
IF b[128+i+7] == 1
dst[128+i+7:128+i] := 0
ELSE
index[3:0] := b[128+i+3:128+i]
dst[128+i+7:128+i] := a[128+index*8+7:128+index*8]
FI
ENDFOR
dst[MAX:256] := 0
So if I'm not mistaken, you can just do
const __m256i lo = vec; \
const __m256i hi = _mm256_srli_epi16(vec, 4); \
I'm sort of new to AVX, Please tell me if there's anything wrong here.
| [v]pshufb looks at the high bit to zero that output element, unfortunately. In the pseudocode you quoted:
IF b[i+7] == 1 # if high-bit set
dst[i+7:i] := 0 # zero that output element
ELSE
... the part you were looking at # else index the source
The intrinsics guide only covers it in the pseudocode, not the text.
As usual, the asm manual entry's description is much more descriptive:
If the most significant bit (bit[7]) of each byte of the shuffle control mask is set, then constant zero is written in the result byte
It's useful for some problems, but for pshufb as a nibble-LUT it does require 2 [v]pand instructions, including for the high nibbles, because x86 doesn't have a SIMD byte shift. The narrowest is psrlw with 16-bit elements, so even then every other byte would get garbage shifted into its high bit, unless your input data is known to always have those bit-positions clear.
AVX-512VBMI (Ice Lake and newer) vpermb doesn't have this downside, but is lane-crossing so it has 3c latency instead of 1 on CPUs that support it. Luckily it is still only 1 uop on Ice Lake, unlike vpermt2w and vpermt2b even on Ice Lake (https://uops.info).
But it will probably be slower on any future CPUs that do AVX-512 by decoding into 2x 256-bit halves, like Zen 4 or some future Intel Efficiency cores. (Alder Lake E-cores have 128-bit wide EU, and already split 256-bit vectors in two halves, and supporting AVX-512 with 4 uops per instruction would start to get silly, I guess. And unfortunately Intel didn't design a way to expose the new AVX-512 functionality at only 128 and 256-bit width (like masking and better shuffles, vpternlogd, etc.))
The LUT for vpermb can still be broadcast-loaded from a 16-byte source, since it just repeats in each lane. (You can leave bits above the 4th unzeroed, as long as index 0, 16, 32, and 48 all read the same value, etc.)
|
72,978,941 | 72,979,038 | Total time in different parts of recursive function | I am new to C++ and I need to measure the total time for different parts of a recursive function. A simple example to show where I get so far is:
#include <iostream>
#include <unistd.h>
#include <chrono>
using namespace std;
using namespace std::chrono;
int recursive(int);
void foo();
void bar();
int main() {
int n = 5; // this value is known only at runtime
int result = recursive(n);
return 0;
}
int recursive(int n) {
auto start = high_resolution_clock::now();
if (n > 1) { recursive(n - 1); n = n - 1; }
auto stop = high_resolution_clock::now();
auto duration_recursive = duration_cast<microseconds>(stop - start);
cout << "time in recursive: " << duration_recursive.count() << endl;
//
// .. calls to other functions and computations parts I don't want to time
//
start = high_resolution_clock::now();
foo();
stop = high_resolution_clock::now();
auto duration_foo = duration_cast<seconds>(stop - start);
cout << "time in foo: " << duration_foo.count() << endl;
//
// .. calls to other functions and computations parts I don't want to time
//
start = high_resolution_clock::now();
bar();
stop = high_resolution_clock::now();
auto duration_bar = duration_cast<seconds>(stop - start);
cout << "time in bar: " << duration_bar.count() << endl;
return 0;
}
void foo() { // a complex function
sleep(1);
}
void bar() { // another complex function
sleep(2);
}
I want the total time for each of the functions, for instance, for foo() it is 5 seconds, while now I always get 1 second. The number of iterations is known only at runtime (n=5 here is fixed just for simplicity).
To compute the total time for each of the functions I tried making the duration variables static to accumulate the results, but it didn't work.
| You can use a container to store the times, pass it by reference, and accumulate into it. For example, a std::map<std::string,unsigned> gives you labels:
void recursive(int n, std::map<std::string,unsigned>& times) {
if (n <= 0) return;
// measure time of foo
times["foo"] += duration_foo;
// measure time of bar
times["bar"] += duration_bar;
// recurse
recursive(n-1,times);
}
Then
std::map<std::string,unsigned> times;
recursive(200,times);
for (const auto& t : times) {
std::cout << t.first << " took total : " << t.second << "\n";
}
|
72,979,702 | 72,979,793 | How to make a reference refer to another node of an std::unordered_map | I have an std::unordered_map<int, int> which stores the frequency count of each element present in a given array. I need to find the max frequency element and print the key and frequency count.
#include <iostream>
#include <unordered_map>
#include <type_traits>
int main() {
std::unordered_map<int, int> mp {
{ 1, 2 },
{ 2, 54 },
{ 3, 32 },
{ 4, 8 },
{ 5, 56 },
{ 6, 23 },
{ 7, 9 },
{ 8, 87 },
{ 9, 69 },
};
auto maxP = std::ref(*mp.begin());
for (const auto& p : mp) {
if (p.second > maxP.get().second)
maxP = std::ref(std::add_lvalue_reference<std::pair<const int, int>&>(std::remove_const<const std::pair<const int, int>>(std::remove_reference<const std::pair<const int, int>&>(p))));
}
std::cout << maxP.get().first << ", " << maxP.get().second << std::endl;
}
But I'm getting the below error
<Main.cpp>:21:191: error: no matching function for call to 'std::remove_reference<const std::pair<const int, int>&>::remove_reference(const std::pair<const int, int>&)'
21 | maxP = std::ref(std::add_lvalue_reference<std::pair<const int, int>&>(std::remove_const<const std::pair<const int, int>>(std::remove_reference<const std::pair<const int, int>&>(p))));
|
| std::remove_reference is not a callable. It's a type trait with a type member alias. Same goes for std::add_lvalue_reference. Since you already know all the types, adding those type traits adds unnecessary complexity for no obvious gain. The code is barely readable, and frankly I don't understand how you expected it to work.
Anyhow you do not need any of this. The element is just a pair of ints, so using a value rather than a reference wouldn't hurt too much. Using an iterator to the element rather than a reference would be much simpler as well. The easiest would be to use std::max_element with a custom comparator:
#include <iostream>
#include <unordered_map>
#include <algorithm>
int main() {
std::unordered_map<int, int> mp {
{ 1, 2 },
{ 2, 54 },
{ 3, 32 },
{ 4, 8 },
{ 5, 56 },
{ 6, 23 },
{ 7, 9 },
{ 8, 87 },
{ 9, 69 },
};
auto it = std::max_element(mp.begin(),mp.end(),[](auto a,auto b) { return a.second < b.second;});
std::cout << it->first << ", " << it->second << std::endl;
}
For the question in the title
How to make a reference refer to another node of an std::unordered_map
Don't use a reference. Use an iterator. An iterator already refers to an element. If you wanted to use a reference you'd use std::pair<const int,int>& (of course you cannot rebind it, but you do not need that here).
|
72,979,811 | 72,979,955 | difference between using std::move and adding 0 to the number? | I'm curious about that is there any practical difference between using std::move to convert an l-value integer to r-value, and adding a 0 to that integer? or any other neutral arithmetic operation (multiplying by 1, subtracting 0, etc).
Adding 0:
int f(int&& i){
i++;
return i;
}
int main(){
int x = 43;
f(x+0);
}
Using std::move:
#include <iostream>
int f(int&& i){
i++;
return i;
}
int main(){
int x = 43;
f(std::move(x));
}
I know we cannot perform such neutral actions on all types, but my question is specially about integral numbers not other types.
| std::move(x) and x+0 do not do the same thing.
The former gives you an rvalue (specifically xvalue) referring to x. The latter gives you a rvalue (specifically prvalue) which (after temporary materialization) refers to a temporary object with lifetime ending after the full-expression.
So f(x+0); does not cause x to be modified, while f(std::move(x)) does.
Taking a rvalue-reference to an int specifically is probably pointless. Moving and copying scalar types is exactly the same operation, so there is no benefit over just int&.
And your function both returns the result by-value and tries to modify the argument. Typically, it should do only one of those things. If it takes a reference and modifies the argument it should either have void return value or return a reference to the argument. If it ought to return the result by-value, then it doesn't need to be passed a reference and can just take a int parameter. (It would be ok to both modify the argument and return by-value if the value returned was unrelated to the new value of the argument, e.g. as in std::exchange returning the old value of the argument.)
|
72,980,035 | 72,981,895 | 'runtime_error' from c++ not captured in iOS | In my iOS project, I use a C++ module. The C++ module throws exception for some cases and the Objective C++ wrapper fails to catch it. For instance
Here is my HelloWorld.h
#include <string>
using namespace std;
class HelloWorld{
public:
string helloWorld();
};
#endif
Implementation HelloWorld.cpp
#include "HelloWorld.h"
string HelloWorld::helloWorld(){
throw (std::runtime_error("runtime_error")); // Throwing exception to test
string s("Hello from CPP");
return s;
}
Objective C++ wrapper HelloWorldIOSWrapper.h
#import <Foundation/Foundation.h>
@interface HelloWorldIOSWrapper:NSObject
- (NSString*)getHello;
@end
#endif /* HelloWorldIOSWrapper_h */
Implementation HelloWorldIOSWrapper.mm
#import "HelloWorldIOSWrapper.h"
#include "HelloWorld.h"
@implementation HelloWorldIOSWrapper
- (NSString*)getHello{
try {
HelloWorld h;
NSString *text=[NSString stringWithUTF8String: h.helloWorld().c_str()];
return text;
} catch (const std::exception & e) {
NSLog(@"Error %s", e.what());
}
return nil;
}
@end
#import "HelloWorldIOSWrapper.h" is added to the Bridging-Header
And now, when I try to invoke getHello() from controller, app crashes leaving the below message in log
libc++abi: terminating with uncaught exception of type std::runtime_error: runtime_error
dyld4 config: DYLD_LIBRARY_PATH=/usr/lib/system/introspection DYLD_INSERT_LIBRARIES=/Developer/usr/lib/libBacktraceRecording.dylib:/Developer/usr/lib/libMainThreadChecker.dylib:/Developer/Library/PrivateFrameworks/DTDDISupport.framework/libViewDebuggerSupport.dylib
terminating with uncaught exception of type std::runtime_error: runtime_error
I expect that the exception must be caught in the wrapper, but, no idea why is it not caught leading to app crash. What do I miss?
| C++ Interoperability
In 64-bit processes, Objective-C exceptions (NSException) and C++
exception are interoperable. Specifically, C++ destructors and
Objective-C @finally blocks are honored when the exception mechanism
unwinds an exception. In addition, default catch clauses—that is,
catch(...) and @catch(...)—can catch and rethrow any exception
On the other hand, an Objective-C catch clause taking a dynamically
typed exception object (@catch(id exception)) can catch any
Objective-C exception, but cannot catch any C++ exceptions. So, for
interoperability, use @catch(...) to catch every exception and @throw;
to rethrow caught exceptions. In 32-bit, @catch(...) has the same
effect as @catch(id exception).
@try {
}
@catch (...) {
}
|
72,980,706 | 72,980,855 | c++: is it better to have a global variable or create a local variable? | For example, I have a library function that validates signatures and is only called when requested.
Let's say I have a library class to verify signatures
sigverify.hpp
class SigVerify
{
bool verifySignature(std::string path);
};
sigverify.cpp
bool SigVerify::verifySignature(std::string path)
{
//verifies signature
return true;
}
Now assume that I compiled sigverify as a library and linked it to my main service code
Service.hpp
#include "sigverify.hpp"
class ServiceClass
{
public:
void makeLibCall();
//is it better to declare a variable here and use it in my cpp
SigVerify m_sigVerify;
};
Service.cpp
void ServiceClass::makeLibCall()
{
// OR declare a local variable here like this
SigVerify m_sigVerify;
bool result = m_sigVerify.verifySignature(path);
}
The library call is only made in one place in my entire code, so I think it is better to create a local variable when there is a need to make the call?
Which is better in terms of performance? Please help me :)
| Consider a third option mentioned in comments:
void ServiceClass::makeLibCall()
{
static SigVerify m_sigVerify;
bool result = m_sigVerify.verifySignature(path);
}
m_sigVerify will be initialized once, when the function is called for the first time.
However, to know what is more performant you need to measure. There is no way around that. If the class is really not more than what you posted, then creating an object and calling a function can be expected not to be more expensive than just calling the function. To be sure: measure.
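To see that the static local is constructed only on the first call, here is a minimal sketch; the construction counter and the trivial SigVerify body are stand-ins for illustration, not the question's real library:

```cpp
#include <cassert>

int construction_count = 0;

struct SigVerify {
    SigVerify() { ++construction_count; }          // count constructions
    bool verifySignature(const char*) { return true; }
};

bool makeLibCall(const char* path) {
    static SigVerify verifier;                     // initialized on first call only
    return verifier.verifySignature(path);
}
```

Calling makeLibCall repeatedly constructs the verifier exactly once, combining the "pay only if used" property of the local with the "construct only once" property of the member.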
|
72,980,766 | 72,983,330 | C++ Eigen initialise dynamic matrix with raw data | Suppose I have raw data, whose size I don't know at compile time, and that's why I need to store it in a dynamically sized matrix. I know I can initialise a static-sized matrix as follows:
std::vector<double> v {1.1, 2.2, 3.3, 4.4}; // "Raw data".
Eigen::Matrix<double, 2, 2> m(v.data());
std::cout << m << std::endl;
But is there a way of similarly initialising, or (even better) setting the data of a dynamic matrix? Something like the following (which doesn't compile)?
std::vector<double> v {1.1, 2.2, 3.3, 4.4}; // "Raw data".
Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic> m(v.data(), 2, 2);
std::cout << m << std::endl;
I know from a comment in this post that I can just use Eigen::Map, but as far as I understand, Eigen::Map doesn't own the memory, so I can't, for example, return it from a function. I know that I can set the matrix element-wise but that feels so dumb LOL.
| If you want to copy the raw data, assign the Map to a normal matrix.
std::vector<double> v {1.1, 2.2, 3.3, 4.4};
Eigen::MatrixXd m = Eigen::MatrixXd::Map(v.data(), 2, 2);
BTW: You don't need to deal with the template parameters such as Matrix<double, 2, 2> or Matrix<double, Dynamic, Dynamic>: There are type definitions for common cases. Just use Matrix2d and MatrixXd in your example. The type definitions are listed here for matrix and here for array
|
72,981,018 | 73,265,624 | How to cancel background noise while playing PCM Audio in STM32? | I am trying to play audio from PCM data on an STM32 (blackpill_f411ce). I can hear the audio but there is also a sharp noise coming with it. I am working in PlatformIO with the Arduino framework.
The PCM data is defined inside the code as an unsigned char array like below
unsigned char sample[98216] = {0x52, 0x49, 0x46, 0x46, 0xA0,......}
My code is below
#include<Arduino.h>
#include <SoundData.h>
#define OUT_PIN_STM_32 PA8
int SineValues[256];
void InitSineValues()
{
float ConversionFactor=(2.0*3.142)/256.0;
float RadAngle;
for(int MyAngle=0;MyAngle<256;MyAngle++)
{
RadAngle=MyAngle*ConversionFactor;
SineValues[MyAngle]=(sin(RadAngle)*127)+128;
}
}
void playPcmData()
{
for (size_t i = 0; i < 98216; i++)
{
int val=int(sample[i]);
analogWrite(OUT_PIN_STM_32,SineValues[val]);
delayMicroseconds(50);
}
}
void setup() {
InitSineValues();
}
void loop() {
playPcmData();
delay(5000);
}
I used the same code with dacWrite in ESP32 and it works fine.
| In setup, I added analogWriteFrequency(200000) and the problem was solved:
void setup() {
analogWriteFrequency(200000);
}
There is also no need to map the samples through the sine table:
void playPcmData()
{
for (size_t i = 0; i < 98216; i++)
{
int val=int(sample[i]);
analogWrite(OUT_PIN_STM_32,val);
delayMicroseconds(50);
}
}
|
72,981,024 | 72,981,052 | weird behavior of #undef | #include <iostream>
#define MY_CONST 10
#define MY_OTHER_CONST MY_CONST
#undef MY_CONST
int main() {
enum my_enum : int {
MY_CONST = 100
};
std::cout << MY_OTHER_CONST;
return 0;
}
I would expect 10 as an output, but this program outputs 100. Can someone explain what is going on here?
https://godbolt.org/z/77EedG11x
| #define MY_OTHER_CONST MY_CONST defines the macro MY_OTHER_CONST to have a replacement list of MY_CONST. No replacement is performed when defining a macro.
In std::cout << MY_OTHER_CONST;, MY_OTHER_CONST is replaced by its replacement list, becoming MY_CONST. At this point, there is no macro definition for MY_CONST, so no further replacement is performed. Then MY_CONST refers to the enum constant MY_CONST, which has value 100.
|
72,981,284 | 72,981,317 | Why is this code not printing the prime factors of num? | I wrote this code for obtaining the prime factors of a number taken as an input from the user.
#include<bits/stdc++.h>
using namespace std;
void prime_Factors(int);
bool isPrime(int);
int main()
{
int num;
cout << "Enter the number to find it's prime factors: ";
cin >> num;
prime_Factors(num);
}
void prime_Factors(int n1)
{
for(int i = 2; i<n1; i++)
{
if(isPrime(i))
{
int x = i;
while(n1%x==0)
{
cout << i << " ";
x *= i;
}
}
}
}
bool isPrime(int n0)
{
if(n0==1)
return false;
for(int i = 0; i*i <= n0; i++)
{
if(n0%i==0)
return false;
}
return true;
}
The prime_Factors() function call in main() function is not printing the prime factors. Pls help!!
| The ranges of the loops are wrong.
Firstly, the loop for(int i = 2; i<n1; i++) will fail to find the prime factors of prime numbers (the numbers themselves). It should be for(int i = 2; i<=n1; i++).
Secondly, the loop for(int i = 0; i*i <= n0; i++) will result in a division by zero. It should be for(int i = 2; i*i <= n0; i++).
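Putting both fixes together, a corrected version might look like this; it returns the factors instead of printing them so the result is easy to check, which is a sketch of one possible fix rather than the only way:

```cpp
#include <cassert>
#include <vector>

bool isPrime(int n) {
    if (n < 2) return false;
    for (int i = 2; i * i <= n; i++)   // start at 2, avoiding division by zero
        if (n % i == 0) return false;
    return true;
}

std::vector<int> primeFactors(int n) {
    std::vector<int> factors;
    for (int i = 2; i <= n; i++) {     // <= n, so a prime n finds itself
        if (!isPrime(i)) continue;
        long long x = i;               // long long so x *= i cannot overflow int
        while (n % x == 0) {           // one entry per power of i dividing n
            factors.push_back(i);
            x *= i;
        }
    }
    return factors;
}
```

For example, 12 yields {2, 2, 3} and a prime input such as 7 yields {7}.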
|
72,981,526 | 72,981,859 | Efficient creation of thread pool (C++) | What is the 'best' way to create a thread pool for more efficient calculation?
Suppose I have the following code to print out how many primes are in a given interval (for demonstration only, I know it's super slow):
#include <future>
#include <iostream>
#include <thread>
#include <math.h>
bool is_prime(int n) {
if (n == 2 || n == 3) {
return 1;
}
else if (n % 2 == 0 || n % 3 == 0) {
return 0;
}
for (int i = 5; i < sqrt(n) + 1; i = i + 6) {
if (n % i == 0 || n % (i+2) == 0) {
return 0;
}
}
return 1;
}
int primes_in_range(int a, int b) {
int total = 0;
for (int i = a; i <= b; i++) {
total += is_prime(i);
}
return total;
}
int main() {
int total = primes_in_range(2, 10000000);
std::cout << total << std::endl;
}
If I want to make this run faster by splitting the interval into smaller chunks for threads, how would I do so?
Currently, I'm doing something like this:
auto thread1 = std::async(std::launch::async, primes_in_range, 2, 2500000);
auto thread2 = std::async(std::launch::async, primes_in_range, 2500001, 5000000);
auto thread3 = std::async(std::launch::async, primes_in_range, 5000001, 7500000);
auto thread4 = std::async(std::launch::async, primes_in_range, 7500001, 10000000);
int total1 = thread1.get();
int total2 = thread2.get();
int total3 = thread3.get();
int total4 = thread4.get();
std::cout << total1 + total2 + total3 + total4 << std::endl;
But this doesn't seem very efficient, especially if I try to have say n threads.
What is a better way to do it? I'm fairly new to multithreading in general, so do tell me if I am doing something terribly wrong!
| Consider how you would calculate the results for the intervals sequentially: you would use loops, and you can do the same with std::async and std::future (std::async does not return a thread).
auto get_future_chunk(int from, int to){
return std::async(std::launch::async, primes_in_range, from,to);
}
int main() {
std::vector<decltype(get_future_chunk(0,0))> futures;
int from = 2;
int chunk_size = 5000000;
const int max = 10000000;
int to = from+chunk_size;
while (to <= max) {
futures.push_back(get_future_chunk(from,to));
from = to + 1;
to += chunk_size;
}
futures.push_back(get_future_chunk(from,max));
int total = 0;
for (auto& f : futures) total += f.get();
std::cout << total << "\n";
}
The only reason I wrote the function is that I was too lazy to look up the exact type of the future returned from std::async, and with the help of the function I can deduce it more easily. For testing on godbolt I used only two chunks, because when requesting lots of threads I got an error for an unavailable resource. The code fixes the chunk_size and, based on that, determines the number of futures to be spawned. The reverse is of course possible as well: fix the number of chunks and then calculate the interval bounds.
Complete example
|
72,982,010 | 72,982,455 | Makefile with multiple separate *.cpp files to output separate *.exe files in different dir | I am stuck, writing my Makefile.
Directory structure:
.\
Makefile
.\src\*.cpp(s)
.\bin
Desire: What I want to achieve with one Makefile.
Run: make
Output (Terminal):
g++ -g -Wall -c -o src/program1.o src/program1.cpp
g++ -g -Wall -c -o src/program2.o src/program2.cpp
g++ -g -Wall -c -o src/program3.o src/program3.cpp
g++ -g -Wall -c -o src/program4.o src/program4.cpp
Output (in /bin/)
program1.exe
program2.exe
program3.exe
program4.exe
EDIT:
CXX = g++
CXXFLAGS = -Wall -g3 -O0
SRC := ${wildcard src/*.cpp}
OBJS := $(SRC:.cpp=.o)
BIN := $(SRC:src/%.cpp=bin/%)
.PHONY: all
all: $(BIN)
$(BIN): $(OBJS)
$(CXX) -c $(CXXFLAGS) -o $(OBJS)
bin/%: src/%.o
$(CXX) -o $@ $^
Error:
g++: warning: linker input file unused because linking not done
| The introductory parts of the GNU make manual describe that all: $(BIN) creates a target all that depends on the targets in $(BIN). That means make will try to create each of them. Then you have $(BIN): $(OBJS), which says each of those targets depends on all the object files, so make will try to create all the object files. Then there's a recipe for that rule which runs a single command over all the object files (and since that command contains -c, no linking is done, hence the warning).
So make is doing exactly what you asked it to do.
The problem is that is apparently not what you want it to do.
In your question you write "then take the original filenames of each *.cpp and add that to the executable", which I don't fully understand, but I assumed that you want to link all the objects into a single executable, which is what your makefile does.
But then later you write "How can I output to bin directory and generate the correct executables?", but you never define what "correct executables" means, and this makes it sound like you want to turn each individual object file into its own executable; that's clearly not what your makefile does.
So before you can tell make what you want, first you have understand clearly what you want so you can write it in your makefile. And if you need us to help you write it into your makefile, you need to explain it clearly in your question so we can understand it.
Cheers!
ETA
OK so you want every source file to compile into an object file, then every object file to compile to a separate binary.
First compute the names of all the binaries you want to build:
SRCS := $(wildcard src/*.cpp)
BINS := $(SRCS:src/%.cpp=bin/%)
Now make a rule that depends on all the binaries:
all: $(BINS)
Now make a pattern rule that tells make how to build each one of those binaries:
bin/% : src/%.o
$(CXX) $(CXXFLAGS) -o $@ $^ $(LDLIBS)
Now you're actually done, because make already has a built-in rule that knows how to build a .o file into the same directory where the .c file lives, so it can figure out how to build the src/x.o files on its own.
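Putting those pieces together, a complete minimal Makefile might look like the following sketch (assuming the bin directory already exists; directory creation is left out for brevity):

```make
CXX := g++
CXXFLAGS := -g -Wall

SRCS := $(wildcard src/*.cpp)
BINS := $(SRCS:src/%.cpp=bin/%)

.PHONY: all clean
all: $(BINS)

bin/%: src/%.o
	$(CXX) $(CXXFLAGS) -o $@ $^ $(LDLIBS)

clean:
	$(RM) src/*.o $(BINS)
```

The built-in rule handles the src/%.cpp to src/%.o step, and the pattern rule above links each object into its own binary under bin/.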
|
72,982,157 | 72,982,359 | variadic template 'ambiguous call to overloaded function' seems a false error | I am trying to write a template function that initializes the given systems, runs the app, and at the end runs the shutdown function on the initialized systems.
This code should work in my eye and intellisense doesn't give any error but compiler:
1>C:\VisualStudio\DirectApp\AppMain.cpp(32,2): error C2668: 'initialize_these': ambiguous call to overloaded function
1>C:\VisualStudio\DirectApp\AppMain.cpp(28,13): message : could be 'void initialize_these<WindowManager,>(void)'
1>C:\VisualStudio\DirectApp\AppMain.cpp(18,13): message : or 'void initialize_these<WindowManager>(void)'
1>C:\VisualStudio\DirectApp\AppMain.cpp(29,1): message : while trying to match the argument list '()'
1>C:\VisualStudio\DirectApp\AppMain.cpp(29): message : see reference to function template instantiation 'void initialize_these<Log,WindowManager>(void)' being compiled
1>C:\VisualStudio\DirectApp\AppMain.cpp(29): message : see reference to function template instantiation 'void initialize_these<Heap,Log,WindowManager>(void)' being compiled
1>C:\VisualStudio\DirectApp\AppMain.cpp(29): message : see reference to function template instantiation 'void initialize_these<SystemInfo,Heap,Log,WindowManager>(void)' being compiled
1>C:\VisualStudio\DirectApp\AppMain.cpp(45): message : see reference to function template instantiation 'void initialize_these<Sdl,SystemInfo,Heap,Log,WindowManager>(void)' being compiled
The Code:
#include "pch.h"
#include "AppMain.h"
#include "AppRun.h"
#include "Dummy.h"
#include "Heap.h"
#include "Log.h"
#include "Sdl.h"
#include "SystemInfo.h"
#include "WindowManager.h"
static void initialize_these()
{
AppMain::run<AppRun>();
}
template<class Type = Dummy>
static void initialize_these()
{
if (AppMain::initialize<Type>()) { return; }
initialize_these();
AppMain::shutdown<Type>();
}
template<class First, class... Rest>
static void initialize_these()
{
if (AppMain::initialize<First>()) { return; }
initialize_these<Rest...>();
AppMain::shutdown<First>();
}
void AppMain::app_main()
{
initialize_these<
Sdl,
SystemInfo,
Heap,
Log,
WindowManager
>();
}
I hope the code is clear enough.
Oh, by the way: those AppMain::initialize<>() calls return true if initialization failed.
I checked different parts of this code. It seems to me everything is alright except the variadic overloads of the static void initialize_these() function.
Edit:
I remove
template<class Type = Dummy>
static void initialize_these()
{
if (AppMain::initialize<Type>()) { return; }
initialize_these();
AppMain::shutdown<Type>();
}
Compiler error:
1>C:\VisualStudio\DirectApp\AppMain.cpp(22,2): error C2672: 'initialize_these': no matching overloaded function found
1>C:\VisualStudio\DirectApp\AppMain.cpp(19): message : see reference to function template instantiation 'void initialize_these<WindowManager,>(void)' being compiled
1>C:\VisualStudio\DirectApp\AppMain.cpp(19): message : see reference to function template instantiation 'void initialize_these<Log,WindowManager>(void)' being compiled
1>C:\VisualStudio\DirectApp\AppMain.cpp(19): message : see reference to function template instantiation 'void initialize_these<Heap,Log,WindowManager>(void)' being compiled
1>C:\VisualStudio\DirectApp\AppMain.cpp(19): message : see reference to function template instantiation 'void initialize_these<SystemInfo,Heap,Log,WindowManager>(void)' being compiled
1>C:\VisualStudio\DirectApp\AppMain.cpp(35): message : see reference to function template instantiation 'void initialize_these<Sdl,SystemInfo,Heap,Log,WindowManager>(void)' being compiled
1>C:\VisualStudio\DirectApp\AppMain.cpp(22,2): error C2783: 'void initialize_these(void)': could not deduce template argument for 'First'
1>C:\VisualStudio\DirectApp\AppMain.cpp(18): message : see declaration of 'initialize_these'
| The Rest pack can be empty, so both overloads are valid candidates and the call is ambiguous.
You can make the variadic one accept two or more arguments:
template<class Type>
static void f(){
// do something with Type
}
template<class First, class Second, class... Rest>
static void f(){
// do something with First
f<Second,Rest...>();
}
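With C++17 you can also sidestep the ambiguity entirely using a single function template and if constexpr. This sketch uses a recording vector and stand-in initialize/shutdown/run functions in place of the question's AppMain, just to show the control flow:

```cpp
#include <cassert>
#include <vector>

std::vector<int> events;                       // records the call order

template<int Id> bool initialize() { events.push_back(Id); return false; }
template<int Id> void shutdown()   { events.push_back(-Id); }
void run_app()                     { events.push_back(0); }

template<int First, int... Rest>
void initialize_these() {
    if (initialize<First>()) return;           // stop on failed init
    if constexpr (sizeof...(Rest) == 0)
        run_app();                             // innermost level runs the app
    else
        initialize_these<Rest...>();
    shutdown<First>();                         // unwinds in reverse order
}
```

With one function there is nothing to overload, so there is no ambiguity, and shutdown still happens in reverse initialization order.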
|
72,982,826 | 74,358,932 | How to detect whether GPU is AMD or NVIDIA from inside HIP code | I'm currently writing a HIP equivalent to NVIDIA's deviceQuery sample code. I want my code to work on both AMD and NVIDIA hardware.
Now, hipDeviceProp_t isn't exactly the same as cudaDeviceProp_t, because the former has both new and missing fields in the struct compared to the latter.
Currently the code I wrote works on AMD GPUs only and segfaults when I try it on an NVIDIA GPU, which I believe is due to accessing fields that are nonexistent in cudaDeviceProp_t. It is also still missing a critical part to detect the exact GPU model within the same gfx??? GCN architecture code.
How do I figure out whether the detected GPU is AMD or NVIDIA?
Edit: for comparison, SYCL has sycl::info::device::vendor that provides this information.
| When using HIP you know at compile time whether you are compiling for AMD or Nvidia GPUs (there is no support for both AMD and Nvidia GPU code in one binary).
Thus you could try relying on the following pre-processor definitions:
#if defined(__HIP_PLATFORM_AMD__)
// AMD GPU code should take this code path
#elif defined(__HIP_PLATFORM_NVIDIA__)
// Nvidia takes this code path.
#else
#error "Unknown platform"
#endif
|
72,984,593 | 72,984,675 | GoogleMock trying to set a function argument to a specific value using EXPECT_CALL | I have the following function prototype
std::int16_t Driver::ListDevices( struct BoardInfo devInfo[], size_t len, int* pCount )
struct BoardInfo
{
int iBoardNum;
WORD wSlot;
char cSite;
};
I have created a Mock for it as follows
MOCK_METHOD3( ListDevices, std::int16_t( struct BoardInfo devInfo[], size_t len, int* pCount ) );
When I call the function using my mock object I want it to return the value 0 and to set the value pointed to by the third parameter pCount to the value 1
The following partial solution successfully compiles and returns the value 0 when the EXPECT_CALL gets invoked
EXPECT_CALL( *m_pMockObject, ListDevices( testing::_, testing::_, testing::_ ) ).Times( 1 ).WillOnce( testing::Return( 0 ) );
However if add to it in an attempt set the 3rd parameter to value 1 with the following
EXPECT_CALL( *m_pMockObject, ListDevices( testing::_, testing::_, testing::_ ) ).WillOnce( testing::SetArgPointee<2>( 1 ) ).Times( 1 ).WillOnce( testing::Return( 0 ) );
It no longer compiles giving the following errors
1>------ Build started: Project: Test_Units_TTVC_MPU_ControlPad, Configuration: UnitTest_Debug x64 ------
1> UnitTestTTVC_MPU_ControlPad.cpp
1>c:\workspace\mpu\controlpad-per-994\libs\googlemock\v1_7_0\bin\include\gmock\gmock-actions.h(446): error C2440: 'return': cannot convert from 'void' to 'short'
1> c:\workspace\mpu\controlpad-per-994\libs\googlemock\v1_7_0\bin\include\gmock\gmock-actions.h(446): note: Expressions of type void cannot be converted to other types
1> c:\workspace\mpu\controlpad-per-994\libs\googlemock\v1_7_0\bin\include\gmock\gmock-actions.h(445): note: while compiling class template member function 'short testing::PolymorphicAction<testing::internal::SetArgumentPointeeAction<2,int,false>>::MonomorphicImpl<F>::Perform(const std::tuple<A1,std::A2,A3> &)'
1> with
1> [
1> F=int16_t (BoardInfo *,std::size_t,int *),
1> A1=BoardInfo *,
1> A2=std::size_t,
1> A3=int *
1> ]
1> c:\workspace\mpu\controlpad-per-994\libs\googlemock\v1_7_0\bin\include\gmock\gmock-actions.h(433): note: see reference to class template instantiation 'testing::PolymorphicAction<testing::internal::SetArgumentPointeeAction<2,int,false>>::MonomorphicImpl<F>' being compiled
1> with
1> [
1> F=int16_t (BoardInfo *,std::size_t,int *)
1> ]
1> c:\workspace\mpu\controlpad-per-994\code\src_unit_test\test_units_ttvc_mpu_controlpad\unittestttvc_mpu_controlpad.cpp(142): note: see reference to function template instantiation 'testing::PolymorphicAction<testing::internal::SetArgumentPointeeAction<2,int,false>>::operator testing::Action<F>(void) const<F>' being compiled
1> with
1> [
1> F=int16_t (BoardInfo *,std::size_t,int *)
1> ]
1> c:\workspace\mpu\controlpad-per-994\code\src_unit_test\test_units_ttvc_mpu_controlpad\unittestttvc_mpu_controlpad.cpp(142): note: see reference to function template instantiation 'testing::PolymorphicAction<testing::internal::SetArgumentPointeeAction<2,int,false>>::operator testing::Action<F>(void) const<F>' being compiled
1> with
1> [
1> F=int16_t (BoardInfo *,std::size_t,int *)
1> ]
========== Build: 0 succeeded, 1 failed, 1 up-to-date, 0 skipped ==========
I'm not sure what I'm doing wrong; any advice is greatly appreciated.
| Your actions are not combined properly; multiple actions on one call must be wrapped in testing::DoAll:
EXPECT_CALL( *m_pMockObject, ListDevices( testing::_, testing::_, testing::_ ) )
.WillOnce(testing::DoAll(testing::SetArgPointee<2>( 1 ), testing::Return( 0 )));
|
72,985,114 | 72,986,766 | Deploying a C++ application on Linux - linking everything statically to simplify deployment? | I am building a C++ project from GitHub and want to deploy the code to a remote Linux machine. This is all new to me.
The project has a main.cpp, which includes the various headers/sources like a library.
The CMake outputs an executable (to represent main.cpp) AND a separate static library. The project also uses OpenSSL, which I have linked statically.
I presume the OpenSSL functions are included within the static library? So when I deploy, I don't need to copy-over or install any OpenSSL on the remote machine?
Is it possible to modify the CMake so the application and the library are merged in to one file?
I am trying to make deployment as simple as copying over a single file, if this is possible.
Any additional advice/references are most-welcome.
UPDATE the CMake script:
cmake_minimum_required(VERSION 3.20)
set(CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/CMake;${CMAKE_MODULE_PATH}")
project(helloworld C CXX)
set (CMAKE_CXX_STANDARD 20)
set (CMAKE_BUILD_TYPE Release)
set (BUILD_MAIN TRUE)
set (BUILD_SHARED_LIBS FALSE)
set (OPENSSL_USE_STATIC_LIBS TRUE)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
set( HELLOWORLD_HEADERS helloworld/File1.h helloworld/File2.h )
set( HELLOWORLD_SOURCES helloworld/File1.cpp helloworld/File2.cpp )
# Static library
add_library( helloworld ${HELLOWORLD_SOURCES} ${HELLOWORLD_HEADERS} )
# Rapidjson
include_directories(/tmp/rapidjson/include/)
# OpenSSL
if (NOT OPENSSL_FOUND)
find_package(OpenSSL REQUIRED)
endif()
add_definitions(${OPENSSL_DEFINITIONS})
target_include_directories(helloworld PUBLIC $<BUILD_INTERFACE:${OPENSSL_INCLUDE_DIR}>)
target_link_libraries(helloworld PRIVATE ${OPENSSL_LIBRARIES})
set( HELLOWORLD_INCLUDE_DIRS ${CMAKE_CURRENT_SOURCE_DIR})
include(GNUInstallDirs)
target_include_directories(helloworld PUBLIC
$<BUILD_INTERFACE:${HELLOWORLD_INCLUDE_DIRS}/>
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/helloworld>
)
set_target_properties(helloworld PROPERTIES PUBLIC_HEADER "${HELLOWORLD_HEADERS}")
add_library(helloworld::helloworld ALIAS helloworld)
option(HELLOWORLD_INSTALL "Install HelloWorld" TRUE)
if (HELLOWORLD_INSTALL)
install(TARGETS helloworld
EXPORT helloworld
ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/helloworld/
)
configure_file("${CMAKE_CURRENT_LIST_DIR}/helloworld-config.cmake.in" "${CMAKE_BINARY_DIR}/helloworld-config.cmake" @ONLY)
install(FILES "${CMAKE_BINARY_DIR}/helloworld-config.cmake" DESTINATION "${CMAKE_INSTALL_LIBDIR}/cmake/helloworld")
install(EXPORT helloworld
FILE helloworld-targets.cmake
NAMESPACE helloworld::
DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/helloworld
)
endif()
if (BUILD_MAIN)
add_executable(main main.cpp)
target_link_libraries(main helloworld)
endif()
| ITNOA
It would be very helpful to include the URL of your GitHub project, but I will write some general notes about it.
Generally, in CMake, to statically link your library into your executable you can write something like the following (from the official CMake example):
add_library(archive archive.cpp zip.cpp lzma.cpp)
add_executable(zipapp zipapp.cpp)
target_link_libraries(zipapp archive)
In the above example your executable just works without needing the .a library file, so you can simply copy a single file.
If you want to make everything static, make sure all dependencies are statically linked into your project; see CMake: how to produce binaries "as static as possible".
If you want to prevent library creation: in your CMake file you can find the add_library command and the add_executable command. You can remove the add_library command and add all sources to the add_executable command.
for example add_executable(a.out main.cpp lib.cpp)
|
72,985,253 | 72,985,549 | STL algorithm to get a per-vector-component min/max | I have a std::vector<vec3> points where vec3 has float x, y, z.
I want to find the min/max bounds of all the points. I.e. the min and max of all vec3::x, vec3::y, vec3::z separately in the vector of points.
I see that STL has std::minmax_element() which is almost what I want, but it assumes there is a min/max of the whole element.
I've found STL std::reduce() which can do min/max individually with a custom BinaryOp reduce function. The below code gives an example.
Unlike minmax_element(), reduce() needs two passes over the data. Is there an STL way to do it in one?
Yes, the right answer is to just write a simple loop myself, but I'm curious what STL has.
struct vec3 { float x, y, z; };
vec3 min(const vec3& a, const vec3& b)
{
return vec3{
min(a.x, b.x),
min(a.y, b.y),
min(a.z, b.z)
};
}
vec3 max(const vec3& a, const vec3& b)
{
return vec3{
max(a.x, b.x),
max(a.y, b.y),
max(a.z, b.z)
};
}
std::pair<vec3, vec3> minmax_elements(const std::vector<vec3>& points)
{
vec3 vmin = std::reduce<std::vector<vec3>::const_iterator, vec3, vec3(const vec3&, const vec3&)>(
points.cbegin(), points.cend(), points.front(), min);
vec3 vmax = std::reduce<std::vector<vec3>::const_iterator, vec3, vec3(const vec3&, const vec3&)>(
points.cbegin(), points.cend(), points.front(), max);
return {vmin, vmax};
}
Side question: I had to give std::reduce explicit template parameters. Why can't the compiler deduce BinaryOp, the function type for min/max.
| Of course you can use a lambda with std::reduce on a std::pair<vec3, vec3> collext both min and max at the same time.
std::pair<vec3, vec3> minmax_elements(const std::vector<vec3>& points)
{
assert(!points.empty());
return std::reduce(points.cbegin(), points.cend(), std::make_pair(points.front(), points.front()),
[](std::pair<vec3, vec3> const& current, vec3 const& v2)
{
return std::make_pair(min(current.first, v2), max(current.second, v2));
});
}
However, ask yourself whether the brevity is really worth it. Personally I consider the following approach easier to understand and would prefer this implementation.
std::pair<vec3, vec3> minmax_elements(const std::vector<vec3>& points)
{
assert(!points.empty());
std::pair<vec3, vec3> result(points.front(), points.front());
for (auto iter = points.begin() + 1; iter != points.end(); ++iter)
{
result.first = min(result.first, *iter);
result.second = max(result.second, *iter);
}
return result;
}
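A quick self-contained check of the one-pass version; min/max are renamed vmin/vmax here only to avoid colliding with std::min/std::max, everything else mirrors the question's definitions:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

struct vec3 { float x, y, z; };

vec3 vmin(const vec3& a, const vec3& b) {
    return {std::min(a.x, b.x), std::min(a.y, b.y), std::min(a.z, b.z)};
}
vec3 vmax(const vec3& a, const vec3& b) {
    return {std::max(a.x, b.x), std::max(a.y, b.y), std::max(a.z, b.z)};
}

std::pair<vec3, vec3> minmax_elements(const std::vector<vec3>& points) {
    std::pair<vec3, vec3> result(points.front(), points.front());
    for (auto it = points.begin() + 1; it != points.end(); ++it) {
        result.first  = vmin(result.first, *it);   // per-component minimum
        result.second = vmax(result.second, *it);  // per-component maximum
    }
    return result;
}
```

This walks the data exactly once, unlike calling a reduction separately for min and for max.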
|
72,985,711 | 72,985,758 | QT signal and slot connection not working | I am making a simple game and want to send a signal from my Game class to my MainWindow. My signal and slot share the same parameter but I can't connect them. I have tried sending very simple signals with a dummy variable but failed to connect. The code is as follows.
game.h
class Game : public QObject
{
Q_OBJECT
public:
Game();
signals:
void test(int l);
MainWindow.h
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
MainWindow(QWidget *parent = nullptr);
~MainWindow();
public slots:
void testSlot(int l);
game.cpp
void Game::someFunction(){
emit test(2);
}
MainWindow.cpp
MainWindow::MainWindow(QWidget *parent)
: QMainWindow(parent)
, ui(new Ui::MainWindow),
g{new Game()}
{
ui->setupUi(this);
g->gameLoop();
connect(g,&Game::test,this,&MainWindow::testSlot);
}
How can I get the signals and slots to connect properly? Thank you in advance.
| I think the problem may be in the fact that you have g->gameLoop(); BEFORE the connect. If your someFunction is called from the gameLoop, then the connect is performed only after the game has finished and after the execution returns from the gameLoop(). But of course it's just guessing. I wouldn't expect to see 'gameLoop' called from the Window's constructor so.. it looks odd as well. Other than that, it looks fine, so if my guess is not correct, then probably the problem lies elsewhere in the code we don't see.
|
72,986,106 | 72,986,218 | C++ / warning: control reaches end of non-void function [-Wreturn-type] | I am just a beginner and am trying to learn desktop programming with C++, and I am confused about why this doesn't work:
The code:
int math(int opt, int x, int y){
switch(opt){
case 1:
return x + y;
break;
case 2:
return x - y;
break;
case 3:
return x * y;
break;
case 4:
return x / y;
break;
default:
break;
}
}
The use:
cout << to_string(math(1,1,2)));
The error:
main.cpp: In function ‘int math(int, int, int)’:
main.cpp:38:1: warning: control reaches end of non-void function [-Wreturn-type]
38 | }
| ^
Thank you all,
as I understand:
break; after return is extra because it will never be executed
I need to return something in all cases (e.g. when opt does not match any case)
So this works now:
int math(int opt, int x, int y){
switch(opt){
case 1:
return x + y;
case 2:
return x - y;
case 3:
return x * y;
case 4:
return x / y;
default:
return 0;
}
}
| The problem is that if you pass an "invalid" opt value, the default case gets selected and your function returns nothing. So one solution would be to decide what to return if you pass an invalid opt.
You should fix your problem like this:
enum class Opt { PLUS, MINUS, TIMES, DIVIDED };
int math(Opt opt, int x, int y) {
switch(opt){
case Opt::PLUS:
return x + y;
case Opt::MINUS:
return x - y;
case Opt::TIMES:
return x * y;
case Opt::DIVIDED:
return x / y;
}
// it is important to not have default: case
// above because now compiler will issue warning,
// you add new values to Opt.
// below line should never be reached, so value doesn't matter,
// but it's needed to disable warning on some compilers.
return 0;
}
// test
int main() {
return math(Opt::PLUS, 1, 2); // exit code 3
}
Using enum class has the benefit that if you want to shoot yourself in the foot by passing a random integer as opt, you have to use an explicit cast to convert the integer into an enum class value.
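If an invalid opt should surface as an error instead of a silent 0, a common alternative (my sketch, not part of the answer above) is to return std::optional:

```cpp
#include <cassert>
#include <optional>

enum class Opt { PLUS, MINUS, TIMES, DIVIDED };

std::optional<int> math(Opt opt, int x, int y) {
    switch (opt) {
        case Opt::PLUS:    return x + y;
        case Opt::MINUS:   return x - y;
        case Opt::TIMES:   return x * y;
        case Opt::DIVIDED: return x / y;
    }
    return std::nullopt;  // only reachable for an out-of-range Opt value
}
```

The caller then has to check has_value(), which makes the "no matching case" situation explicit instead of hiding it behind a sentinel value.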
|
72,986,187 | 72,986,915 | Will a source file be recompiled multiple times if its nested header file is modified? |
In the attached image, if D.h is modified, will Visual Studio recompile A.cpp twice? Or will it be recompiled only once?
| No. Compiling a.cpp once is sufficient to produce an object file that incorporates all the latest changes from the header files (if they are relevant to the code in a.cpp).
Your build system should be considered to be buggy and broken if it has to compile a.cpp twice during a single build, because the second compilation would be just redoing the same work as the first compilation and producing the same result.
|
72,986,476 | 72,986,592 | Macro with a C++ class | I was going through this code (line 41):
https://github.com/black-sat/black/blob/master/src/lib/include/black/logic/parser.hpp
and came across something like this:
#include <iostream>
#define YES
class YES myClass{};
int main(){
cout << "Hi\n";
return 0;
}
What is the purpose of defining a macro and using it in front of a class identifier?
| The way you've written it there isn't much point. But if you look at the project's common.hpp file to see how it's used, it makes a lot of sense, and is a common pattern in C and C++:
#ifdef _MSC_VER
#define BLACK_EXPORT __declspec(dllexport)
#else
#define BLACK_EXPORT
#endif
...
class BLACK_EXPORT parser
{
public:
...
Here the author has defined compiler-dependent attributes. If the code is built using MSVC, then any class marked with the BLACK_EXPORT macro will get the dllexport attribute. For other compilers it will do nothing.
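A common extension of this pattern also handles the consumer side of the DLL. This is a hypothetical sketch (the BLACK_BUILDING_DLL guard name is invented for illustration, not taken from the project):

```cpp
// Hypothetical sketch: dllexport when building the library itself,
// dllimport when a consumer includes the header, nothing elsewhere.
#if defined(_MSC_VER)
  #if defined(BLACK_BUILDING_DLL)
    #define BLACK_EXPORT __declspec(dllexport)
  #else
    #define BLACK_EXPORT __declspec(dllimport)
  #endif
#else
  #define BLACK_EXPORT
#endif

// On non-MSVC compilers the macro expands to nothing:
class BLACK_EXPORT parser_like {};
```

The library's build system defines the guard macro when compiling the DLL itself, so the same header works for both the library and its users.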
|
72,986,697 | 72,986,930 | C++ fstream object passed as reference, but it won't make | I'm trying to do a bunch of stuff with the .txt file I'm trying to read, so I want to break it up into functions. But even when I pass the file stream in by reference, I can't get the program to compile.
#include "Executive.h"
#include "Clip.h"
#include <string>
#include <iostream>
#include <fstream>
void Executive::readFile()
{
std::fstream streamer;
streamer.open(m_file);
if(streamer.is_open ())
{
findStart(streamer);
for(int i = 0; i < 13; i++)
{
std::string temp;
streamer >> temp;
}
for(int i = 0; i < 20; i++)
{
std::string temp;
streamer >> temp;
std::cout << temp << " ";
if(i == 10) {std::cout << "\n";}
}
streamer.close();
return;
}
else
{ throw std::runtime_error("Could not read file!\n"); }
}
void findStart(const std::fstream& stream)
{
bool isStart = 0;
while(!isStart)
{
std::string temp;
stream >> temp;
if(temp == "Sc/Tk")
{ isStart = 1; }
}
}
| ITNOA
Simple answer:
To resolve your problem, you can just remove the const keyword in the declaration of the findStart function.
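Concretely, the declaration could become the following sketch; taking std::istream& instead of std::fstream& additionally lets the function work with any input stream:

```cpp
#include <istream>
#include <sstream>
#include <string>

// Non-const reference: extraction advances the stream's read position,
// so the stream object must be mutable.
void findStart(std::istream& stream) {
    std::string temp;
    while (stream >> temp) {
        if (temp == "Sc/Tk")
            return;
    }
}
```

After the call, the stream is positioned just past the "Sc/Tk" marker, so subsequent reads continue from there.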
TL;DR;
Generally, if you only want to read from a file, use ifstream instead of fstream.
The problem in your code is that stream >> temp; does not work with a const fstream, because operator>> is declared like below:
template< class CharT, class Traits, class Allocator >
std::basic_istream<CharT, Traits>&
operator>>( std::basic_istream<CharT, Traits>& is,
std::basic_string<CharT, Traits, Allocator>& str );
As you can see, operator>> has no overload taking a const reference to the stream object, so your code does not compile. If you want to know why C++ does not provide such an overload, you can look at the implementation below (from libstdc++), for example:
{
  typedef basic_istream<_CharT, _Traits> __istream_type;
  typedef basic_string<_CharT, _Traits, _Alloc> __string_type;
  typedef typename __istream_type::ios_base __ios_base;
  typedef typename __istream_type::int_type __int_type;
  typedef typename __string_type::size_type __size_type;

  __size_type __extracted = 0;
  const __size_type __n = __str.max_size();
  typename __ios_base::iostate __err = __ios_base::goodbit;
  typename __istream_type::sentry __cerb(__in, true);
  if (__cerb)
    {
      __try
        {
          __str.erase();
          const __int_type __idelim = _Traits::to_int_type(__delim);
          const __int_type __eof = _Traits::eof();
          __int_type __c = __in.rdbuf()->sgetc();

          while (__extracted < __n
                 && !_Traits::eq_int_type(__c, __eof)
                 && !_Traits::eq_int_type(__c, __idelim))
            {
              __str += _Traits::to_char_type(__c);
              ++__extracted;
              __c = __in.rdbuf()->snextc();
            }

          if (_Traits::eq_int_type(__c, __eof))
            __err |= __ios_base::eofbit;
          else if (_Traits::eq_int_type(__c, __idelim))
            {
              ++__extracted;
              __in.rdbuf()->sbumpc();
            }
          else
            __err |= __ios_base::failbit;
        }
      __catch(__cxxabiv1::__forced_unwind&)
        {
          __in._M_setstate(__ios_base::badbit);
          __throw_exception_again;
        }
      __catch(...)
        {
          // _GLIBCXX_RESOLVE_LIB_DEFECTS
          // 91. Description of operator>> and getline() for string<>
          // might cause endless loop
          __in._M_setstate(__ios_base::badbit);
        }
    }
  if (!__extracted)
    __err |= __ios_base::failbit;
  if (__err)
    __in.setstate(__err);
  return __in;
}
As you can see in the example above, implementing operator>> requires changing the state of the stream in order to track (and save) the last read position.
|
72,986,941 | 72,986,978 | Join a container of `std::string_view` | How can you concisely combine a container of std::string_views?
For instance, boost::algorithm::join is great, but it only works for std::string.
An ideal implementation would be
static std::string_view unwords(const std::vector<std::string_view>& svVec) {
std::string_view joined;
boost::algorithm::join(svVec," ");
return joined;
}
| ITNOA
A short C++20 answer:
#include <iostream>
#include <ranges>
#include <string_view>
using namespace std::literals;
const auto bits = { "https:"sv, "//"sv, "cppreference"sv, "."sv, "com"sv };
for (char const c : bits | std::views::join) std::cout << c;
std::cout << '\n';
Since C++23, if you want to insert a separator string or character between the parts, you can simply use join_with, as below (from the official cppreference example):
#include <iostream>
#include <ranges>
#include <vector>
#include <string_view>
int main() {
using namespace std::literals;
std::vector v{"This"sv, "is"sv, "a"sv, "test."sv};
auto joined = v | std::views::join_with(' ');
for (auto c : joined) std::cout << c;
std::cout << '\n';
}
Note 1: if you do not want to rely on a not-yet-stable language release, you can simply use the range-v3 library for the join_with view.
Note 2: as Nicol Bolas points out, you cannot join the parts into exactly one string_view without any copy (you can copy into a string, though). For more detail, see the SO question Why can't I construct a string_view from range iterators?
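For completeness, here is a plain C++17 sketch of the unwords function from the question, returning an owning std::string (as noted above, an owning string is unavoidable without copying):

```cpp
#include <string>
#include <string_view>
#include <vector>

// Join string_views with a separator into an owning std::string.
std::string unwords(const std::vector<std::string_view>& parts,
                    std::string_view sep = " ") {
    std::string out;
    for (std::size_t i = 0; i < parts.size(); ++i) {
        if (i != 0) out += sep;
        out += parts[i];
    }
    return out;
}
```

One could also reserve the total length up front to avoid repeated reallocations, at the cost of an extra pass over the parts.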
|
72,987,131 | 72,987,510 | Slow performance using std::distance to get std::map index | I have a std::map and I need all its key, value and index for some process.
My code works correctly. The only issue is: it's too slow.
Below is an example:
void run(const std::map <key, value>& myMap) {
std::map <key, value>::const_iterator iter;
for (iter = myMap.begin(); iter != myMap.end(); ++iter) {
const auto& myKey = iter->first;
const auto& myValue = iter->second;
const auto index = std::distance(myMap.begin(), iter);
// then some process here
}
}
I use IgProf to profile the performance.
Rank % total Self Self / Children Function
[39] 8.2 9.06 0.69 / 8.37 run(const std::map <key, value>& myMap)
5.6 ......... 6.16 / 6.17 std::_Rb_tree_increment(std::_Rb_tree_node_base*) [clone .localalias.2] [54]
1.4 ......... 1.50 / 1.50 some process here [175]
0.3 ......... 0.36 / 0.36 std::_Rb_tree_increment(std::_Rb_tree_node_base const*) [428]
0.3 ......... 0.34 / 0.83 _init [232]
Here std::_Rb_tree_increment costs too much time.
In this example code, I can manually calculate the index:
replacing const auto index = std::distance(myMap.begin(), iter); by ++index;
I got a much faster performance
Rank % total Self Self / Children Function
[148] 2.3 2.42 0.60 / 1.81 run(const std::map <key, value>& myMap)
1.7 ......... 1.77 / 1.77 some process here [165]
0.0 ......... 0.03 / 0.04 std::_Rb_tree_increment(std::_Rb_tree_node_base*) [clone .localalias.2] [1268]
0.0 ......... 0.01 / 0.37 _init [420]
But in reality, I do need std::distance or something equivalent to get the index.
So I would really appreciate it if you could help me understand the reason of its slow performance.
Thanks in advance :)
| std::map is a tree structure. It's not random access, and elements don't have indices, and the only way to advance through the tree is to follow the links, one at a time. Because of this, std::map::iterator is a BidirectionalIterator. That means it only supports increment and decrement operations. It doesn't support any sort of difference operation. To get the difference between two of them you have to repeatedly increment the start iterator until it's equal to the end iterator. Something like this:
template <typename Iterator>
size_t distance(Iterator start, Iterator end)
{
size_t dist = 0;
while (start != end) {
++dist;
++start;
}
return dist;
}
Looking at that function, you can probably see why your loop is slow. Every time through the loop, std::distance has to walk through the tree and count how far from the beginning it is. If you really need an index to go with your map, you'll need to maintain it yourself. std::map doesn't seem like the right structure in that case though, since the indices will change as new elements are added.
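A small sketch of carrying the index along with the iterator, which is O(n) overall instead of the O(n²) of calling std::distance every iteration (the helper name here is made up for illustration):

```cpp
#include <cstddef>
#include <map>

// Visit (index, key, value) triples without calling std::distance:
// the index is incremented in lockstep with the iterator.
template <typename Map, typename Fn>
void for_each_indexed(const Map& m, Fn fn) {
    std::size_t index = 0;
    for (auto it = m.begin(); it != m.end(); ++it, ++index)
        fn(index, it->first, it->second);
}
```

Note that these indices are only positions in the current iteration order; they shift whenever elements are inserted or erased.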
|
72,988,260 | 72,988,898 | Problem with using derived class where base class is expected | I am writing a code for a little system that should run on an arduino. The objective is to control several cycles which each have a certain amount of sub-cycles. Both the cycles and the subcycles are defined by their duration and ultimately, the system's operations will be performed at the subcycle level (didn't implement this yet).
I thought about creating a class that could manage those events called EventManager. A class to represent the subevents called Events and a class called Cycles which is derived from both classes because it is both an event and at the same time will manage the subevents.
My problem is that when I pass Event objects to the EventManager, all is good. However, when I pass the Cycle objects, it doesn't work as expected (see attached pictures... Cycle 2 name isn't initialized for some reason).
This is my code below, and please bear with me, I know it's a lot of files. Any help would be greatly appreciated. Thank you!
EventManager.h
#ifndef SRC_EVENTMANAGER
#define SRC_EVENTMANAGER
#include <Arduino.h>
#include "Event.h"
class EventManager
{
public:
EventManager(size_t n_events = 0, Event *events = nullptr);
void loop();
private:
size_t n_events;
Event *events;
size_t current_event;
bool cycle_ended;
bool check_event_end();
void end_current_event();
};
#endif /* SRC_EVENTMANAGER */
EventManager.cpp
#include "EventManager.h"
EventManager::EventManager(size_t n_events, Event *events)
: n_events(n_events), events(events), current_event(0), cycle_ended(false) {}
bool EventManager::check_event_end()
{
return events[current_event].ended();
}
void EventManager::end_current_event()
{
events[current_event].end();
current_event == n_events - 1 ? cycle_ended = true : current_event++;
}
void EventManager::loop()
{
if (cycle_ended)
return;
events[current_event].run();
if (check_event_end())
end_current_event();
}
Event.h
#ifndef SRC_EVENT
#define SRC_EVENT
#include <Arduino.h>
#include "utils.h"
class Event
{
public:
Event(String name, Duration duration);
bool ended();
void start();
void run();
void end();
private:
String name;
unsigned long duration;
unsigned long start_time;
unsigned long end_time;
bool started;
};
#endif /* SRC_EVENT */
Event.cpp
#include "Event.h"
Event::Event(String name, Duration duration)
: name(name), start_time(0), end_time(0), started(false)
{
this->duration = duration.toMillis();
}
bool Event::ended()
{
return millis() >= end_time;
}
void Event::start()
{
if (started)
return;
start_time = millis();
end_time = start_time + duration;
Serial.println("Event " + name + " started.");
started = true;
}
void Event::end()
{
start_time = 0;
end_time = 0;
started = false;
Serial.println("Event " + name + " ended.");
}
void Event::run()
{
start();
// Event logic here
}
Cycle.h
#ifndef SRC_CYCLE
#define SRC_CYCLE
#include <Arduino.h>
#include "Event.h"
#include "EventManager.h"
class Cycle : public Event, public EventManager
{
public:
Cycle(String name, Duration duration, size_t event_count, Event *events);
};
#endif /* SRC_CYCLE */
Cycle.cpp
#include "Cycle.h"
Cycle::Cycle(String name, Duration duration, size_t event_count, Event *events)
: Event(name, duration), EventManager(event_count, events){}
main.cpp
#include <Arduino.h>
#include "EventManager.h"
#include "Event.h"
#include "Cycle.h"
Event events[] = {
Event("event1", Duration{0, 0, 5}),
Event("event2", Duration{0, 0, 5}),
};
Cycle cycles[] = {
Cycle("First cycle", Duration{0, 0, 10}, 2, events),
Cycle("Second cycle", Duration{0, 0, 10}, 2, events),
};
EventManager event_manager(2, cycles);
// EventManager event_manager(2, events);
void setup()
{
Serial.begin(9600);
}
void loop()
{
event_manager.loop();
}
| You aren't passing an event or event pointer to your event manager; you are passing an array of events. While accessing individual objects through a pointer is polymorphic, this does not extend to raw arrays. Raw arrays are simple collections of only 1 type of object (naturally all of the same size). And all they contain is the objects - not the type or number of entries. (More favored in modern C++ are the fancier std::array and std::vector which you might want to look into, but won't solve your immediate problem here.)
You can't substitute an array of derived class for an array of base class. While objects accessed via pointer behave polymorphically, an array of a derived class is not a sub type of an array of a base class. If you pass a raw array, as you are doing, by pointer to first element and number of elements, that is fine as the size of each element is implicit in the element type. If you substitute in an array of a derived type, the compiler will be calculating the wrong address for any element after the first. You then get a hearty helping of scrambled data. I should also mention that modern C++ isn't particularly welcoming to unsanctioned type-punning games.
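A small self-contained sketch of that wrong address calculation (the types here are hypothetical stand-ins, sized so the derived class is strictly larger on typical ABIs):

```cpp
// BaseDemo holds a vptr plus one long; DerivedDemo adds another long,
// so sizeof(DerivedDemo) > sizeof(BaseDemo) on common implementations.
struct BaseDemo {
    long a = 0;
    virtual ~BaseDemo() = default;
};
struct DerivedDemo : BaseDemo {
    long b = 0;
};

// Indexing a DerivedDemo array through a BaseDemo* strides by
// sizeof(BaseDemo), so p + 1 does not land on arr[1].
bool addresses_diverge() {
    DerivedDemo arr[2];
    BaseDemo* p = arr;  // fine for the first element only
    return static_cast<void*>(p + 1) != static_cast<void*>(&arr[1]);
}
```

This only compares addresses; actually dereferencing p[1] in such a setup would be the undefined behavior that produces the scrambled data described above.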
What you can do is set things up to pass an array of base pointers to derived objects. That is entirely legitimate and keeps to the same level language features that you are currently using. This adds an extra level of indirection and an intermediate array to your design.
To try to illustrate this suggestion, some key modified snippets from your code keeping things simple and close to your code (but not updating all of it):
EventManager.h
#ifndef SRC_EVENTMANAGER
#define SRC_EVENTMANAGER
#include <Arduino.h>
#include "Event.h"
class EventManager
{
public:
EventManager(size_t n_events = 0, Event **eventPointers = nullptr); // changed
void loop();
private:
size_t n_events;
Event **eventPointers; // changed
size_t current_event;
bool cycle_ended;
bool check_event_end();
void end_current_event();
};
#endif /* SRC_EVENTMANAGER */
partial EventManager.cpp
#include "EventManager.h"
EventManager::EventManager(size_t n_events, Event **eventPointers)
: n_events(n_events), eventPointers(eventPointers), current_event(0), cycle_ended(false) {}
// changed
bool EventManager::check_event_end()
{
return eventPointers[current_event]->ended(); // changed
}
Event.h
#ifndef SRC_EVENT
#define SRC_EVENT
#include <Arduino.h>
#include "utils.h"
class Event
{
public:
Event(String name, Duration duration);
virtual bool ended(); // make functions virtual as needed
virtual void start();
virtual void run();
virtual void end();
private:
String name;
unsigned long duration;
unsigned long start_time;
unsigned long end_time;
bool started;
};
#endif /* SRC_EVENT */
main.cpp
#include <Arduino.h>
#include "EventManager.h"
#include "Event.h"
#include "Cycle.h"
Event events[] = {
Event("event1", Duration{0, 0, 5}),
Event("event2", Duration{0, 0, 5}),
};
Event *eventsPointers[] = { // added
&events[0],
&events[1],
};
Cycle cycles[] = {
Cycle("First cycle", Duration{0, 0, 10}, 2, events),
Cycle("Second cycle", Duration{0, 0, 10}, 2, events),
};
Event *cyclesPointers[] = { // added, note using base pointers
&cycles[0],
&cycles[1],
};
EventManager event_manager(2, cyclesPointers);
// EventManager event_manager(2, eventsPointers);
|
72,988,487 | 72,992,193 | Is it safe to access stack variable after `this` has been deleted | Found similar questions: Is it safe to `delete this`?
I know that it's unsafe to access member variables, because this becomes a dangling pointer after delete. But what about stack variables? Clang ASAN does report an error if a member variable is accessed, but it does not report any problem for stack variable access.
IMHO, the stack is destroyed only after the current thread of execution finishes, so it is probably safe to access stack variables even after this is deleted. Is there a better solution for this scenario? The test case follows.
#include <atomic>
#include <chrono>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <ratio>
#include <string>
#include <system_error>
#include <thread>
#include <vector>
std::mutex g_iostream_mutex;
class TestDeleteBase {
int task_num_ = 0;
std::atomic<int> counter_;
public:
virtual void Run() = 0;
void Init(int task_num) {
task_num_ = task_num;
counter_.store(0, std::memory_order_relaxed);
}
void RunParallel() {
int local = counter_;
if (counter_.fetch_add(1, std::memory_order_acq_rel) == task_num_ - 1) {
delete this;
{
std::lock_guard<std::mutex> guard(g_iostream_mutex);
std::cout << std::this_thread::get_id() << " deleted this\n";
}
return;
} else {
{
std::lock_guard<std::mutex> guard(g_iostream_mutex);
std::cout << std::this_thread::get_id() << " not delete \n";
}
std::this_thread::sleep_for(std::chrono::seconds(1));
}
{
std::lock_guard<std::mutex> guard(g_iostream_mutex);
std::cout << std::this_thread::get_id() << " Access " << local << '\n';
std::cout << std::this_thread::get_id() << " Still alive\n";
}
}
virtual ~TestDeleteBase() {}
};
class TestDelete : public TestDeleteBase {
void Run() override {}
};
int main(int argc, char* argv[]) {
TestDeleteBase* obj = new TestDelete();
obj->Init(5);
std::vector<std::thread> threads;
for (int i = 0; i < 5; ++i) {
threads.emplace_back(&TestDeleteBase::RunParallel, obj);
}
for (auto&& thread : threads) {
thread.join();
}
}
| The problem is that having counter_ equal to task_num_ - 1 does not mean all threads are finished. It just means that the fetch_add call has been executed by all threads. The thing is, the == in the expression counter_.fetch_add(1, std::memory_order_acq_rel) == task_num_ - 1 is parsed from left to right. Thus, the fetch_add must be executed before task_num_ - 1, so the threads must access the task_num_ member attribute after the fetch_add (especially because of the memory_order_acq_rel memory ordering). The point is that the attribute can be deleted, because the last thread can delete the object in the meantime (which is very unlikely, but still possible).
Besides, local = counter_ does not guarantee to get the latest value, so the printed value will likely be incorrect. You can extract the correct value by storing the result of fetch_add in local.
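That is, capture the pre-increment value returned by fetch_add rather than reading the atomic twice; a minimal sketch (the helper name is invented for illustration):

```cpp
#include <atomic>

// Returns the value the counter held before this thread's increment,
// so each caller observes a unique slot even under contention.
int claim_slot(std::atomic<int>& counter) {
    return counter.fetch_add(1, std::memory_order_acq_rel);
}
```

Reading the atomic separately and then incrementing it is a classic race: two threads can read the same value before either increments.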
Object self-destruction is generally a very bad idea because it is very bug-prone, as said in the linked question. Even when you succeed in making the code work, the code is not easy to maintain: another developer might not consider this issue when modifying the code (or even you, several months/years later). The best thing to do is to delay the delete, and the general solution to self-destruction is to define an entity responsible for the deletion of the object. In this case, the main function can be responsible for that, because it is the one doing the thread.join. All threads are guaranteed to have finished after the loop by design.
|
72,988,494 | 72,989,010 | C++ File How to auto generate number for next data to store | #include <iostream>
#include <fstream>
using namespace std;
class Customer {
private:
fstream database;
string customerRecord = "customerRecord.txt";
int movieID = 0;
public:
void write() {
database.open(customerRecord, ios::app | ios::in);
string lines;
if(database.is_open()) {
while(getline(database, lines)) {
movieID++;
}
database << movieID + 1 << endl;
} else {
cout << "Cannot open database." << endl;
}
}
};
int main() {
Customer customer;
customer.write();
}
Suppose that there is already an existing data which is 1 inside customerRecord.txt file
So in my line of code:
while(getline(database, lines)) {
movieID++;
}
database << movieID + 1 << endl;
I read the total number of lines (currently 1) and increment it by 1, giving 2, which becomes the auto-generated ID for the next record to be stored.
The problem is that whenever I try to write new data to the file, the write fails, so I suppose there is something wrong with my code below:
database << movieID + 1 << endl;
| If you read to the end of the file and stop reading because you tried to read past the end of the file, the fail bit and the eof bit are set and you cannot read or write until both are cleared.
The only way out of
while(getline(database, lines)) {
movieID++;
}
is to be unable to read any further and set fail and eof. So
while(getline(database, lines)) {
movieID++;
}
database.clear();
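The same pattern with a std::stringstream standing in for the file (function name hypothetical), showing that clear() restores the stream to a usable state:

```cpp
#include <sstream>
#include <string>

// Count existing lines, then reset the stream so it can be used again.
// Without clear(), the eof/fail bits set by the final getline would
// make every subsequent read or write a no-op.
int nextId(std::stringstream& db) {
    int count = 0;
    std::string line;
    while (std::getline(db, line))
        ++count;
    db.clear();  // drop eofbit and failbit
    return count + 1;
}
```

After clear(), the stream is good again and writes (or seeks) will succeed.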
|
72,988,701 | 72,989,508 | Using std::is_same with structural (non-type) template parameters | Consider a templated type containing a structural template parameter of any type. For the purpose of the example value_type<auto V> is defined.
We also declare a constexpr structure containing some member integral types with custom constructors that set the member values using a non-trivial expression (requiring more than memory copy).
Try feeding the structure into value_type and comparing it against another instance using std::is_same.
Example code to illustrate the case:
#include <iostream>
#include <type_traits>
template <auto V>
struct value_type
{
using type = decltype(V);
static constexpr type value = V;
};
struct hmm
{
//Both constructors set b=4 by default
constexpr hmm(int x, int y = 8) : a(x), b(y / 2) { }
constexpr hmm(float c, int z = 2) : a((int)c), b(z * 2) { }
const int a;
const int b;
friend constexpr bool operator==(const hmm& a, const hmm& b) { return false; }
friend constexpr bool operator!=(const hmm& a, const hmm& b) { return true; }
};
int main()
{
std::cout << (std::is_same_v<value_type<hmm(2)>, value_type<hmm(3.5f)>>) << ", ";
std::cout << (std::is_same_v<value_type<hmm(5)>, value_type<hmm(5.11112f)>>) << ", ";
std::cout << (std::is_same_v<value_type<hmm(5, 7)>, value_type<hmm(5.11112f)>>) << ", ";
std::cout << (std::is_same_v<value_type<hmm(5, 12)>, value_type<hmm(5.11112f, 3)>>) << std::endl;
return 0;
}
This code prints 0, 1, 0, 1 on gcc, msvc, and clang. It makes perfect sense, however, it got me wondering what are the limits of this mechanism.
How exactly is the comparison of those types performed?
Is this behavior standardized across compilers or is it just pure luck that they all seem to follow the same pattern here?
From what it looks like, their members are checked after construction, however apparently without using the comparison operators.
Is there a standard compliant way to override this comparison?
And, more generally, according to a potential is_same implementation using <T,T> specialization (source):
template<class T, class U>
struct is_same : std::false_type {};
template<class T>
struct is_same<T, T> : std::true_type {};
What are the rules for matching types T, U to be considered the same entity in the context of <T,T> specialization?
|
How exactly is the comparison of those types performed?
Is this behavior standardized across compilers or is it just pure luck that they all seem to follow the same pattern here?
As per NTTP on cppref, emphasis mine:
An identifier that names a non-type template parameter of class type T denotes a static storage duration object of type const T, called a template parameter object, whose value is that of the corresponding template argument after it has been converted to the type of the template parameter. All such template parameters in the program of the same type with the same value denote the same template parameter object. A template parameter object shall have constant destruction.
And type equivalence:
Template argument equivalence is used to determine whether two template-ids are same.
Two values are template-argument-equivalent if they are of the same type and
they are of integral or enumeration type and their values are the same
or they are of pointer type and they have the same pointer value
or they are of pointer-to-member type and they refer to the same class member or are both the null member pointer value
or they are of lvalue reference type and they refer to the same object or function
or they are of type std::nullptr_t
or they are of floating-point type and their values are identical
or they are of array type (in which case the arrays must be member objects of some class/union) and their corresponding elements are template-argument-equivalent
or they are of union type and either they both have no active member or they have the same active member and their active members are template-argument-equivalent
or they are of non-union class type and their corresponding direct subobjects and reference members are template-argument-equivalent
There's no comparison but only type equivalence involved. If they have the same value, they all refer to the same object. That's it.
In your case, hmm(5) and hmm(5.11112f) denote the same template parameter object, but hmm(2) and hmm(3.5f) do not.
Is there a standard compliant way to override this comparison?
I believe it's not allowed now.
What are the rules for matching types T, U to be considered the same entity in the context of <T,T> specialization?
They must have the exact same type, including cv-qualifiers.
|
72,988,735 | 72,990,619 | Replacing THC/THC.h module to ATen/ATen.h module | I have question about replacing <THC/THC.h> method.
Recently, I'm working on installing different loss functions compiled with cpp and cuda.
However, what I faced was a fatal error of
'THC/THC.h': No such file or directory
I found out that TH(C) methods are deprecated in recent versions of pytorch and have been replaced by the ATen API (https://discuss.pytorch.org/t/question-about-thc-thc-h/147145/8).
For sure, downgrading my pytorch version will solve the problem. However, due to my GPU compatibility issue, I have no choice but to modify the script by myself. Therefore, my question can be summarized into follows.
First, how can I replace code that depends on TH(C) methods using the ATen API? Below are the lines I have to modify; replacing those three lines looked sufficient for my case.
#include <THC/THC.h>
extern THCState *state;
cudaStream_t stream = THCState_getCurrentStream(state);
Second, will a single modification of the cpp file be enough to clear the issue I'm facing right now? (This is just a minor question; an answer to the first question will suffice.)
For reference, I attach the github link of the file I'm trying to build (https://github.com/sshaoshuai/Pointnet2.PyTorch).
| After struggling for a while, I found the answer on my own.
In the case of THCState_getCurrentStream, it can be replaced directly by at::cuda::getCurrentCUDAStream(). Therefore, the modified code block is formulated as below.
//Comment Out
//#include <THC/THC.h>
//extern THCState *state;
//cudaStream_t stream = THCState_getCurrentStream(state);
//Replace with
#include <ATen/cuda/CUDAContext.h>
#include <ATen/cuda/CUDAEvent.h>
cudaStream_t stream = at::cuda::getCurrentCUDAStream();
After replacing the whole source code, I was able to successfully build the module.
Hope this helps.
|
72,989,501 | 72,989,538 | How to read a text file into parallel arrays | I must have a function that reads card information from a text file
(cards.txt) and insert them to parallel arrays in the main program using a pointer.
I have successfully read the text file, but cannot successfully insert the info into the arrays.
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
void readCards();
int main() {
ifstream inputFile;
const int SIZE = 10;
int id[SIZE];
string beybladeName[SIZE];
string productCode[SIZE];
string type[SIZE];
string plusMode[SIZE];
string system[SIZE];
readCards();
return 0;
}
void readCards() {
ifstream inputFile;
const int SIZE = 10;
int id[SIZE];
string beybladeName[SIZE];
string productCode[SIZE];
string type[SIZE];
string plusMode[SIZE];
string system[SIZE];
int i = 0;
inputFile.open("cards.txt");
cout << "Reading all cards information..." << endl;
if (inputFile) {
while (inputFile >> id[i] >> beybladeName[i] >> productCode[i] >> type[i] >> plusMode[i] >>
system[i]) {
i++;
}
cout << "All cards information read." << endl;
}
inputFile.close();
for (int index = 0; index < SIZE; index++) {
cout << "#:" << id[index] << endl;
cout << "Beyblade Name: " << beybladeName[index] << endl;
cout << "Product Code: " << productCode[index] << endl;
cout << "Type: " << type[index] << endl;
cout << "Plus Mode: " << plusMode[index] << endl;
cout << "System: " << system[index] << endl;
cout << " " << endl;
}
}
| The main problem is that you have two sets of arrays, one in main, and one in readCards. You need one set of arrays in main and to pass those arrays (using pointers) to readCards. Like this
void readCards(int* id, string* beybladeName, string* productCode, string* type, string* plusMode, string* system);
int main()
{
ifstream inputFile;
const int SIZE = 10;
int id[SIZE];
string beybladeName[SIZE];
string productCode[SIZE];
string type[SIZE];
string plusMode [SIZE];
string system [SIZE];
readCards(id, beybladeName, productCode, type, plusMode, system);
return 0;
}
void readCards(int* id, string* beybladeName, string* productCode, string* type, string* plusMode, string* system)
{
...
}
|
72,989,685 | 72,989,867 | C++ Memory Layout: Questions about multiple inheritance, virtual destructors, and virtual function tables | I have a main.cpp file as follows.
#include <stdio.h>
class Base1
{
public:
int ibase1;
Base1() : ibase1(10) {}
virtual void f_b1_1() { printf("Base1::f_b1_1()()\n"); }
virtual void f_b1_2() { printf("Base1::f_b1_2()()\n"); }
virtual ~Base1() { printf("Base1::~Base1()\n"); }
};
class Base2
{
public:
int ibase2;
Base2() : ibase2(20) {}
virtual void f_b2_1() { printf("Base2::f_b2_1()()\n"); }
virtual void f_b2_2() { printf("Base2::f_b1_2()()\n"); }
virtual ~Base2() { printf("Base2::~Base2()\n"); }
};
class Base3
{
public:
int ibase3;
Base3() : ibase3(30) {}
virtual void f_b3_1() { printf("Base3::f_b3_1()\n"); }
virtual void f_b3_2() { printf("Base3::f_b3_2()\n"); }
virtual ~Base3() { printf("Base3::~Base3()\n"); }
};
class Derive : public Base1, public Base2, public Base3
{
public:
int iderive;
Derive() : iderive(100) {}
virtual void f_b1_1() { printf("Derive::f_b1_1()\n"); }
virtual void f_b2_1() { printf("Derive::f_b2_1()\n"); }
virtual void f_b3_1() { printf("Derive::f_b2_1()\n"); }
virtual void f_d_1() { printf("Derive::f_d_1()\n"); }
virtual ~Derive() { printf("Derive::~Derive()\n"); }
};
int main()
{
Derive d;
long **pVtab = (long **)&d;
for (int i = -2; i <= 18; ++i)
{
printf("vtab offset=%d addr=%lx\n", (i + 2) * 8, pVtab[0][i]);
}
return 0;
}
I use the g++ -fdump-lang-class -c main.cpp command to view the memory layout of the object, which generates the main.cpp.001l.class file, the following is part of that file.
And the output of main.cpp is as follows.
My questions are:
What is the meaning of the content in the red box?
What is the relationship between Derive::_ZThn16_N6Derive6f_b2_1Ev and Derive::f_b2_1?
What is the relationship between Derive::_ZThn16_N6Derive6f_b3_1Ev and Derive::f_b3_1?
Are Derive::_ZThn16_N6DeriveD1Ev, Derive::_ZThn16_N6DeriveD0Ev, Derive::_ZThn32_N6DeriveD1Ev, Derive::_ZThn32_N6DeriveD0Ev and Derive::~Derive related?
Does the above relate to "C++ trunk"?
The os is: Ubuntu 20.04
The g++ version is: 9.4.0
What materials should I read?
I look forward to your help and I would like to thank you in advance.
|
What is the meaning of the content in the red box?
Does the above relate to "C++ trunk"?
The symbols you highlighted are mangled; you can use a demangling tool, such as c++filt, to decode them:
> c++filt _ZThn16_N6Derive6f_b2_1Ev
non-virtual thunk to Derive::f_b2_1()
As for your remaining questions, you can refer to the SO question What is a 'thunk'? and its answers.
It's part of the ABI; you can refer to the Itanium C++ ABI.
|
72,989,699 | 72,990,097 | Include SDL_image in mingw build on ubuntu | I'm trying to build a windows executable for a C++ application I've made that uses SDL2, and SDL_Image. I've seemingly managed to include the SDL libraries and headers just fine, but now I'm trying to include the SDL_Image ones. The command I'm currently using is as follows:
i686-w64-mingw32-gcc -lSDL2main -lSDL2 -I ~/SDL/SDL2-devel-2.0.22-mingw/SDL2-2.0.22/i686-w64-mingw32/include -L ~/SDL/SDL2-devel-2.0.22-mingw/SDL2-2.0.22/i686-w64-mingw32/lib -o main32.exe main.cpp
But this gives me the error
main.cpp:4:10: fatal error: SDL2/SDL_image.h: No such file or directory
4 | #include <SDL2/SDL_image.h>
| ^~~~~~~~~~~~~~~~~~
compilation terminated.
I am aware that SDL_Image is a separate plugin, and I have already installed it.
What directory(s) do I need to specify to include SDL_Image in the build?
I'm on Ubuntu 22.
Edit: I have found the correct directories, and I now have the following command:
i686-w64-mingw32-gcc -lmingw32 -lSDL2main -lSDL2 -lSDL2_image \
-I /home/nick/SDL/SDL2-devel-2.0.22-mingw/SDL2-2.0.22/i686-w64-mingw32/include \
-L /home/nick/SDL/SDL2-devel-2.0.22-mingw/SDL2-2.0.22/i686-w64-mingw32/lib \
-I /home/nick/SDL/SDL2_image-devel-2.6.0-mingw/SDL2_image-2.6.0/i686-w64-mingw32/inclulde \
-L /home/nick/SDL/SDL2_image-devel-2.6.0-mingw/SDL2_image-2.6.0/i686-w64-mingw32/lib \
-o main32.exe main.cpp
However, the command still gives me the same error. I have checked, and I am sure that the header file it cannot find is in the correct place.
| SDL2_image is a plugin for SDL2 and needs to be downloaded separately. You also need to specify -I and -L for it, the same way you did for SDL2 itself.
Also, you forgot -lmingw32 (it must be -lmingw32 -lSDL2main -lSDL2, in this exact order), plus -lSDL2_image after those.
As always, a shameless plug: I've made quasi-msys2, a cross-compilation environment that mimics (and is based on) MSYS2, but works on Linux.
Here's how you'd use it:
# Download Clang, LLD. Then:
git clone https://github.com/HolyBlackCat/quasi-msys2
cd quasi-msys2
make install _gcc _SDL2 _SDL2_image
env/shell.sh
Then:
$CXX main.cpp -o main `pkg-config --cflags --libs sdl2 SDL2_image`
|
72,989,732 | 72,989,817 | How to structure base class where derived classes operate on different data types | I have a class that is supposed to fetch an object from the server.
// T types
struct RequestLicense
{
// arbitrary data
};
struct RequestTrial
{
// arbitrary data
};
// U types
struct LicenseData
{
// arbitrary data
};
struct TrialData
{
// arbitrary data
};
struct Fetcher
{
template <typename T>
std::wstring FetchBlob(T requestParameters);
template <typename U>
std::unique_ptr<U> DeserializeBlob(const std::wstring& serializedBlob);
template <typename U>
bool ValidateResponse(U* deserializedResponse);
// Many different virtual, helper functions that make use of the types T and U
};
struct LicenseFetcher : Fetcher
{
// overrides for the different helper functions mentioned in Fetcher
};
struct TrialFetcher : Fetcher
{
// overrides for the different helper functions mentioned in Fetcher
};
// specialization FetchBlob with T=RequestLicense
// specialization for DeserializeBlob and ValidateResponse with U=LicenseData
// specialization FetchBlob with T=RequestTrial
// specialization for DeserializeBlob and ValidateResponse with U=TrialData
// specializations for many of the helper functions. 2 specializations for each of the functions--one RequestLicense/LicenseData and one for RequestTrial/TrialData.
Normally I would make the functions virtual, but I can't since they are templated. As I see it, I have two options: 1) proceed with the route I am on or 2) remove the templating and leverage downcasting.
I was thinking about creating a LicenseBase that LicenseData and TrialData would inherit from. As LicenseData and TrialData have nothing in common, LicenseBase would be empty. I would then replace all references to the template type U with a pointer to LicenseBase. Since each derived class would be operating on a known type, it would be safe to downcast within each overridden function, e.g. LicenseFetcher::ValidateResponse(LicenseBase* r) { const LicenseData& data = static_cast<const LicenseData&>(*r); }. I would do something similar for the Request* types.
I know downcasting is frowned upon and it also wouldn't help when I am returning one of these types (like with DeserializeBlob) but I was wondering about future development. All this templatization seems a bit hard to follow, and downcasting would make it easier to read. Also, with the templated approach, Fetcher.h will grow quite large as it will contain a specialization for every function for each derived class. I already have a half-dozen functions that would need specialization and a half-dozen derived versions of Fetcher. As I type this out, templating seems like the right way to go, but I've been doing some templatization recently and a bit worried I'm starting to "view every problem as a nail."
Are there any options I'm missing? Or should I forge ahead with templating?
Edit: Regardless of the approach I take, I plan on having non-member functions like GetLicense and GetTrial that would look something like
LicenseData GetLicense()
{
const RequestLicense params = RequestLicense(/*arbitrary parameters*/);
std::wstring response = FetchBlob(params);
std::unique_ptr<LicenseData> data = DeserializeBlob<LicenseData>(response);
if (data && ValidateResponse<LicenseData>(data.get()))
{
return *data;
}
return {};
}
| I think I would make the whole Fetcher structure a template, with the member functions being abstract virtual functions:
// R is the request type
// D is the data type
template <typename R, typename D>
struct Fetcher
{
virtual std::wstring FetchBlob(R const& requestParameters) = 0;
virtual std::unique_ptr<D> DeserializeBlob(const std::wstring& serializedBlob) = 0;
virtual bool ValidateResponse(D const& deserializedResponse) = 0;
// ...
};
Then when inheriting from it pass the specific types:
struct LicenseFetcher : public Fetcher<RequestLicense, LicenseData>
{
std::wstring FetchBlob(RequestLicense const& requestParameters) override
{
// TODO: Implementation
}
std::unique_ptr<LicenseData> DeserializeBlob(const std::wstring& serializedBlob) override
{
// TODO: Implementation
}
bool ValidateResponse(LicenseData const& deserializedResponse) override
{
// TODO: Implementation
}
// ...
};
Would be much cleaner than having a multitude of specializations.
|
72,990,065 | 72,990,093 | Two sum but sum is in a range | How can this be solved faster than O(N^2)? A binary indexed tree gives O(N log N), but it exceeds the memory limit.
arr = [6, 2, 3, 5, 1, 6], l = 5, h = 7
Find the number of pairs (i, j) such that i < j && (arr[i] + arr[j] >= l && arr[i] + arr[j] <= h)
The O(N^2) solution is very straightforward, but it gets TLE.
What I tried: using a binary indexed tree, but I get Memory Limit Exceeded.
Does anyone have ideas?
| Sort the array in ascending order and, for each i, binary search (in (i, n]) for the first j with arr[i] + arr[j] >= l. Let it be j1. Then, binary search (again in (i, n]) for the last j with arr[i] + arr[j] <= h. Let it be j2. If j1 <= j2, add j2 - j1 + 1 to the answer.
The overall time complexity is O(n * log n) and the memory complexity is O(n).
|
72,990,156 | 72,990,369 | Question about reference return type in C++ | I write the following code:
const string& combine(string &s1,string &s2)
{
return s1+s2;
}
but when I pass two strings to this function, the result I print with std::cout is the empty string. I don't know what the reason is.
Thanks in advance.
| The behaviour of your code is undefined. This is because s1 + s2 is an anonymous temporary and you are attempting to bind that to a reference return type.
The output you observe is a manifestation of that undefined behaviour.
Changing the return type of the function to a std::string value is a fix.
Another more interesting fix perhaps is to return one of the input strings modified, so the reference propagates back to the caller. See
#include <iostream>
#include <string>
const std::string& combine(std::string &s1, const std::string &s2)
{
return s1 += s2;
}
int main() {
std::string s1 = "Hello";
std::cout << combine(s1, ", World!");
}
The introduction of const allows ", World!" to bind to s2.
In general, writing a function that returns a reference can cause unexpected issues. The C++ standard library function std::max is a well-known example: if one of the arguments is an anonymous temporary and that value is selected, you have a dangling reference! Usually reference returns are confined to class member variables (often bound to a const reference).
|
72,990,607 | 72,991,800 | "const std::stop_token&" or just "std::stop_token" as parameter for thread function? | clang-tidy was complaining: "The parameter 'stop_token' is copied for each invocation but only used as a const reference; consider making it a const reference". That made me wonder why every example I find of std::jthread/std::stop_token takes the stop_token by value, but I did not find any explanation for it.
So, why take the stop_token by value?
1) void f(std::stop_token){};
2) void f(const std::stop_token&){};
Does it really matter when you can assume that the stop_token is the one generated by std::jthread?
edit: This is asked purely out of curiosity and to not ignore a clang-tidy warning "just because".
| As per cppref,
Creates new jthread object and associates it with a thread of execution. The new thread of execution starts executing
std::invoke(std::move(f_copy), get_stop_token(), std::move(args_copy)...), ...
And the return type of std::jthread::get_stop_token is std::stop_token.
So, if your f is only used to construct a std::jthread, I believe it's perfectly fine for it to take a const std::stop_token& as its first parameter. There's no lifetime issue.
But if you want to use your f in other places, std::thread for example, there may be a problem.
void f(const std::stop_token&) {}
{
std::stop_source ssrc;
std::stop_token stk{ssrc.get_token()};
std::thread t{f, std::ref(stk)};
}
When stk goes out of scope, the thread is left holding a dangling reference.
In most cases, you should pass by value, because copying std::stop_token and std::stop_source is relatively cheap.
Also, since a std::stop_token is usually stored in a lambda and/or used on another thread, passing by value avoids lifetime issues.
There are also cases where you could pass by reference.
Copying is not as cheap as passing a raw pointer, so it is more efficient to pass by reference when you can guarantee the lifetime.
For example, std::stop_callback's constructor: since it copies the std::stop_token internally anyway, passing by reference is better there.
template<class C>
explicit stop_callback( const std::stop_token& st, C&& cb ) noexcept(/*see below*/);
|