question_id int64 25 74.7M | answer_id int64 332 74.7M | title stringlengths 20 150 | question stringlengths 23 4.1k | answer stringlengths 20 4.1k |
|---|---|---|---|---|
71,772,165 | 71,774,151 | Nothing dumped from json that converted from bson | I need your help with nlohmann's json library.
I wrote this code.
std::vector<std::uint8_t> v = nlohmann::json::to_bson(jInit);
std::vector<std::uint8_t> s;
for (auto& i : v)
{
s.push_back(v[i]);
}
std::ofstream ofs(s_basePath + "dump.txt");
ofs << nlohmann::json::from_bson(s).dump();
This simply converts the json to bson, then converts the bson back to json and dumps it as a text file.
For certain reasons, I have to use push_back() to get the json back.
The problem is that the size of the dumped text file is 0 KB; nothing is dumped.
I also tried this code, and it works:
std::vector<std::uint8_t> v = nlohmann::json::to_bson(jInit);
std::ofstream ofs(s_basePath + "dump.txt");
ofs << nlohmann::json::from_bson(v).dump();
I have no idea what the difference between vectors v and s is.
| The problem has nothing to do with json/bson. It's just that you use every uint8_t in the original bson data as an index into the very same data, which scrambles everything up.
Wrong:
for (auto& i : v)
{
s.push_back(v[i]); // `i` is not an index
}
The correct loop should look like this:
for (auto& i : v)
{
s.push_back(i); // `i` is a reference to the actual data you should put in `s`
}
If you need a copy of the vector with the bson data, you can simply copy it though:
s = v;
|
71,772,400 | 71,772,442 | Iterate over a part of an array | What's the best way to iterate over a part of an array? I want to iterate over the first 4 elements of an array, and then separately over the last 4 elements. Is the following code the best way to do it? Please help.
void function() {
int arr[8] = {a1, a2, a3, a4, b1, b2, b3, b4};
int arr_num = 8;
for (int i = 0; i < arr_num/2; i++) {
if (arr[i] == a1) {
// publish something
} else if (arr[i] == a3) {
// publish something
}
}
for (int i = 5; i < arr_num; i++) {
if (arr[i] == b1) {
// publish something
} else if (arr[i] == b3) {
// publish something
}
}
}
| You could do:
int i = 0;
for ( ; i < arr_num / 2; ++i ) { // leave the initialization section empty, "i" is already 0
//...
}
for ( ; i < arr_num; ++i ) { // "i" already has the correct value from above loop
// ...
}
This way i retains its value between loops, and it is more generic for arrays of different sizes.
|
71,772,460 | 71,772,509 | I am not able to store an element of an array into a register using x86 assembly | The following is my code in assembly:
mov esi, MemberLvl
mov edi, OfficerLst
mov al, [esi]
mov test1, al
mov ah, [edi]
mov test2, ah
In the C++ main program, I have declared a list of type long called MemberLvl and OfficerLst, and two long types - test1 and test2.
Whenever I try to run my code, it keeps saying there is an operand size conflict with mov test1, al and mov test2, ah.
My thinking is that each array is stored in esi and edi. I then store the first element into al or ah by getting their first memory address. Because each long is 8 bytes and the al or ah register is 8 bytes, I'm thinking it will be able to store this into test1 and test2 (which are both declared a long, 8 bytes), but it isn't. I am not sure why this is happening.
| al and ah are 8-bit values (1 byte). test1 and test2 are "long" according to you, which is either 32 bit (4 bytes) or 64 bit (8 bytes), depending on your compiler / system.
If you want to store the values in the respective variables, you can use movzx (if unsigned) or movsx (if signed).
Also, note that if MemberLvl is a long, then moving it to esi, then doing [esi] is likely undefined behaviour, unless MemberLvl happened to contain a valid pointer address. If MemberLvl is a long *, then it's probably fine, but then [esi] is a 32 bit or 64 bit value, and thus you shouldn't use al or ah at all.
|
71,772,599 | 71,777,358 | Template specialization and selection in variadic template class | I have a variadic template class with some template specializations. For some specializations, I would like to enable them conditionally (with std::enable_if, so I added an additional unused template parameter).
Here is a code example:
#include <tuple>
#include <iostream>
template<int v,typename, typename...>
struct A;
template<int v, typename...Ts, typename...Us>
struct A<v,void,std::tuple<Ts...>, Us...> {
void print() {
std::cout << "A: Tuple selected" << std::endl;
}
};
template<int v, typename T, typename...Us>
struct A<v,void,T, Us...> {
void print() {
std::cout << "A: one element selected" << std::endl;
}
};
template<int v,typename, typename...>
struct B;
template<int v, typename...Ts, typename...Us>
struct B<v,std::enable_if_t<v!=11>,std::tuple<Ts...>, Us...> {
void print() {
std::cout << "B: Tuple selected" << std::endl;
}
};
template<int v, typename T, typename...Us>
struct B<v,std::enable_if_t<v==12>,T, Us...> {
void print() {
std::cout << "B: one element selected" << std::endl;
}
};
template<int v, typename...>
struct C;
template<int v, typename...Ts, typename...Us>
struct C<v,std::enable_if_t<v!=11,std::tuple<Ts...>>, Us...> {
void print() {
std::cout << "C: Tuple selected" << std::endl;
}
};
template<int v, typename T, typename...Us>
struct C<v, T, Us...> {
void print() {
std::cout << "C: one element selected" << std::endl;
}
};
int main()
{
struct A<12,void,std::tuple<int>> a;
a.print();
struct B<12,void,std::tuple<int>> b;
b.print();
struct C<12,std::tuple<int>> c;
c.print();
return 0;
}
Instantiation for a works (and the tuple specialization is indeed chosen).
Instantiation for b fails due to an ambiguous template instantiation. With gcc:
$ g++ -std=gnu++17 -Wall -Wextra toto5.cpp
toto5.cpp: In function ‘int main()’:
toto5.cpp:61:38: error: ambiguous template instantiation for ‘struct B<12, void, std::tuple<int> >’
61 | struct B<12,void,std::tuple<int>> b;
| ^
toto5.cpp:25:8: note: candidates are: ‘template<int v, class ... Ts, class ... Us> struct B<v, typename std::enable_if<(v != 11), void>::type, std::tuple<_UTypes ...>, Us ...> [with int v = 12; Ts = {int}; Us = {}]’
25 | struct B<v,std::enable_if_t<v!=11>,std::tuple<Ts...>, Us...> {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
toto5.cpp:32:8: note: ‘template<int v, class T, class ... Us> struct B<v, typename std::enable_if<(v == 12), void>::type, T, Us ...> [with int v = 12; T = std::tuple<int>; Us = {}]’
32 | struct B<v,std::enable_if_t<v==12>,T, Us...> {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
toto5.cpp:61:38: error: aggregate ‘B<12, void, std::tuple<int> > b’ has incomplete type and cannot be defined
61 | struct B<12,void,std::tuple<int>> b;
| ^
For C, the error occurs when reading the template itself (no need to instantiate):
$ g++ -std=gnu++17 -Wall -Wextra toto5.cpp
toto5.cpp:42:8: error: template parameters not deducible in partial specialization:
42 | struct C<v,std::enable_if_t<v!=11,std::tuple<Ts...>>, Us...> {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
toto5.cpp:42:8: note: ‘Ts’
What can be done?
Note: I know that the conditions in std::enable_if are not useful; this is just an example.
One way would be to use the B solution but change the std::enable_if condition to disable the second template specialization. But how do you express "not a tuple of any kind"? std::is_same does not seem useful here.
clang++ gives me the same kinds of errors in all these cases.
| For specialization B, you need to ensure that the conditions passed to std::enable_if are mutually exclusive. In your version, if you supply v=12, both conditions v!=11 and v==12 yield true, meaning that both versions are enabled. That is why you get the ambiguous instantiation error. The following compiles fine (https://godbolt.org/z/csaTWMd9v):
#include <tuple>
#include <iostream>
template<int v,typename, typename...>
struct B;
template<int v, typename...Ts, typename...Us>
struct B<v,std::enable_if_t<v!=11>,std::tuple<Ts...>, Us...> {
void print() {
std::cout << "B: Tuple selected" << std::endl;
}
};
template<int v, typename T, typename...Us>
struct B<v,std::enable_if_t<v==11>,T, Us...> {
void print() {
std::cout << "B: one element selected" << std::endl;
}
};
int main()
{
struct B<12,void,std::tuple<int>> b;
b.print();
}
Update: As asked in the comments, checking whether a certain template parameter is a tuple of any kind can be done as follows (compare e.g. this post). Is this closer to what you want to achieve? (https://godbolt.org/z/dq85fM7KK)
#include <tuple>
#include <iostream>
template <typename>
struct is_tuple : std::false_type
{ };
template <typename... T>
struct is_tuple<std::tuple<T...>> : std::true_type
{ };
template<int v,typename, typename...>
struct B;
template<int v, typename...Ts, typename...Us>
struct B<v, std::enable_if_t<v!=11>, std::tuple<Ts...>, Us...> {
void print() {
std::cout << "B: Tuple selected" << std::endl;
}
};
template<int v, typename T, typename...Us>
struct B<v,std::enable_if_t<v==12 && !is_tuple<T>::value>, T, Us...> {
void print() {
std::cout << "B: one element selected" << std::endl;
}
};
int main()
{
B<12, void, std::tuple<int>>{}.print(); // Tuple selected
B<12, void, int>{}.print(); // one element selected
}
|
71,772,678 | 71,773,834 | How to save an exception and throw it later | Say I want to create an exception object, but throw it a bit later. My use case is that the exception is generated in a thread, but I need to actually throw it from my main thread (because there's nobody to catch it in the child thread)
using namespace std;
runtime_error *ex = nullptr;
thread myThread( [&ex]{
if (something_went_wrong) {
ex = new runtime_error("Something went wrong");
return;
}
} );
myThread.join();
if (ex != nullptr) {
throw *ex;
}
For what it's worth, the compiler seems happy with this, but I can see that it's a resource leak at best. Is there a right way to do this?
| std::exception_ptr is the C++ facility to store exceptions. It can usually avoid an extra allocation.
std::exception_ptr ex = nullptr;
std::thread myThread( [&ex]{
if (something_went_wrong) {
ex = std::make_exception_ptr(std::runtime_error("Something went wrong"));
return;
}
} );
myThread.join();
if (ex != nullptr) {
std::rethrow_exception(ex);
}
And since it can store any type of exception, you can store anything that could be thrown in the thread by wrapping the lambda like:
std::thread myThread( [&ex]{
try {
// do things
} catch (...) {
ex = std::current_exception();
}
} );
Alternatively, you can use a std::optional<std::runtime_error>
std::optional<std::runtime_error> ex;
std::thread myThread( [&ex]{
if (something_went_wrong) {
ex.emplace("Something went wrong");
return;
}
} );
myThread.join();
if (ex) {
throw *ex;
}
This won't leak, since you don't dynamically allocate.
|
71,773,132 | 71,773,182 | Defaulted concept-constrained function never selected for instantiation | While working with C++20, I encountered an oddity which I'm unsure is a compiler defect due to C++20's infancy, or whether this is correct behavior.
The problem is that a specific constrained function is either not selected correctly, or produces a compile-error -- depending entirely on the order of the definitions.
This occurs in specific circumstance:
A constructor / destructor / member-function is constrained by requires, and
This constructor / destructor / member-function is implicitly deleted for some instantiation of T that does not satisfy the requires clause
For example, consider this basic naive_optional<T> implementation:
template <typename T>
class naive_optional {
public:
naive_optional() : m_empty{}, m_has_value{false}{}
~naive_optional() requires(std::is_trivially_destructible_v<T>) = default;
~naive_optional() {
if (m_has_value) {
m_value.~T();
}
}
private:
struct empty {};
union {
T m_value;
empty m_empty;
};
bool m_has_value;
};
This code works fine for trivial types like naive_optional<int>, but fails to instantiate for non-trivial types such as naive_optional<std::string> due to an implicitly deleted destructor:
<source>:26:14: error: attempt to use a deleted function
auto v = naive_optional<std::string>{};
^
<source>:8:5: note: explicitly defaulted function was implicitly deleted here
~naive_optional() requires(std::is_trivially_destructible_v<T>) = default;
^
<source>:17:11: note: destructor of 'naive_optional<std::basic_string<char>>' is implicitly deleted because variant field 'm_value' has a non-trivial destructor
T m_value;
^
1 error generated.
Live Example
If the order of definitions is changed so that the constrained function is below the unconstrained one, this works fine -- but instead selects the unconstrained function every time:
...
~naive_optional() { ... }
~naive_optional() requires(std::is_trivially_destructible_v<T>) = default;
...
Live Example
My question is ultimately: is this behavior correct -- am I misusing requires? If this is correct behavior, is there any simple way to accomplish conditionally default-ing a constructor / destructor / function that is not, itself, a template? I was looking to avoid the "traditional" approach of inheriting base-class implementations.
| Your code is correct, and indeed was the motivating example for supporting this sort of thing (see P0848, of which I am one of the authors). Clang simply doesn't implement this feature yet (as you can see here).
gcc and msvc both accept your code.
Note that you don't need empty anymore, just union { T val; } is sufficient in C++20, thanks to P1331. Demo.
|
71,773,154 | 71,773,313 | Does __global__ have overhead over __device__? | This question asks the difference between __device__ and __global__.
The difference is:
__device__ functions can be called only from the device, and it is executed only in the device.
__global__ functions can be called from the host, and it is executed in the device.
I interpret the difference between __global__ and __device__ to be similar to public and private class access specifiers. The point is to prevent accidentally calling a __device__ function from the host. It sounds like I could tag all void-returning functions as __global__ without changing program behavior. Would this change program performance?
| Yes, __global__ has overhead, compared to __device__, but there are additional details to be aware of. What you're proposing probably isn't a good idea.
__global__ is the device code entrypoint, from host code. Initially, the GPU has no code running on it. When your host code decides that it wants to start some processing on the GPU, this can only be done by calling a __global__ function (that is the so-called kernel launch).
You can call a __global__ function from device code, but that is invoking something called CUDA Dynamic Parallelism which has all the attributes of a kernel launch. If you're a beginner, you almost certainly don't want to do that.
If you have code running on the GPU, and you want to call a function in the context of a CUDA thread, the way to do that is by calling a __device__ function.
It sounds like I could tag all void-returning functions as __global__ without changing program behavior. Would this change program performance?
It would change both behavior and performance.
When you call a __global__ function (whether from host code or device code) the only way to do that is via a properly configured kernel launch. Using the typical method in the CUDA runtime API, that would be:
kernel<<<blocks, threads, ...>>>(... arguments ...);
That stuff in the triple-chevron syntax makes it different from an ordinary function call, and it will behave differently. It will launch a new kernel, with its own grid (the complement of threads/blocks associated with a kernel launch).
When you call a __device__ function, it looks like an ordinary function call:
func(... arguments ...);
and behaves like one also. It operates within the context of single thread, and does not spin up any new threads/blocks/kernel to service the function call.
You might want to spend a few hours in an orderly introduction to the topic. Just a suggestion, do as you wish of course.
|
71,773,367 | 71,775,569 | GetAce() failing with - ERROR_INVALID_PARAMETER | I'm attempting to do some ACL modification with some C++ code in Visual Studio. I'm stumbling into several issues, one of which is that, when I try to read the ACEs off the existing ACL using GetAce(), the call to the function fails and it returns error 87, ERROR_INVALID_PARAMETER.
The ACL in question includes four entries, and the first two are read successfully, but the second two fail with the invalid parameter error.
Here's the snippet of code that I'm running, I'd appreciate any insight anyone has on it.
// filePath variable is passed in from command line...
PACL existingAcl;
if(GetNamedSecurityInfo(filePath, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION, NULL, NULL, &existingAcl, NULL, &securityDescriptor) != ERROR_SUCCESS) {
OutputDebugString("Could not retrieve ACL.\n");
return false;
}
const int numEntries = existingAcl->AceCount;
for(DWORD i = 0; i < numEntries; i++) {
EXPLICIT_ACCESS* thisAce;
if (!GetAce(existingAcl, i, (LPVOID*) &thisAce)) {
OutputDebugString("Failed to retrieve ACE.\n");
return false;
}
}
| I was modifying a member of one of the ACEs retrieved from the ACL directly, within the loop, which most likely corrupted the ACL's memory layout and caused the subsequent GetAce() calls to fail. Thanks to @273K for pointing me in the right direction!
|
71,773,891 | 71,774,513 | std::coroutine_handle<Promise>::done() returning unexpected value | I am trying to write a simple round-robin scheduler for coroutines. My simplified code is the following:
Generator<uint64_t> Counter(uint64_t i) {
for (int j = 0; j < 2; j++) {
co_yield i;
}
}
...
void RunSimple(uint64_t num_coroutines) {
std::vector<std::pair<uint64_t, Generator<uint64_t>>> gens;
for (uint64_t i = 0; i < num_coroutines; i++) {
// Storing coroutines into a vector is safe because we implemented the move
// constructor and move assigment operator for Generator.
gens.push_back({i, Counter(i)});
}
// This is a simple round-robin scheduler for coroutines.
while (true) {
for (auto it = gens.begin(); it != gens.end();) {
uint64_t i = it->first;
Generator<uint64_t>& gen = it->second;
if (gen) {
printf("Coroutine %lu: %lu.\n", i, gen());
it++;
} else {
// `gen` has finished, so remove its pair from the vector.
it = gens.erase(it);
}
}
// Once all of the coroutines have finished, break.
if (gens.empty()) {
break;
}
}
}
When I run RunSimple(/*num_coroutines=*/4), I get the following output:
Coroutine 0: 0.
Coroutine 1: 1.
Coroutine 2: 2.
Coroutine 3: 3.
Coroutine 0: 0.
Coroutine 1: 1.
Coroutine 2: 2.
Coroutine 3: 3.
Coroutine 1: 1.
Coroutine 3: 3.
Coroutine 3: 3.
The last three lines of the output are unexpected... coroutines 1 and 3 do not seem to be exiting when I expect them to. Upon further investigation, this is happening because std::coroutine_handle<Promise>::done() is returning false for both of these coroutines. Do you know why this method is returning false... am I doing something wrong?
Edit: Here is my Generator implementation:
template <typename T>
struct Generator {
struct promise_type;
using handle_type = std::coroutine_handle<promise_type>;
struct promise_type {
T value_;
std::exception_ptr exception_;
Generator get_return_object() {
return Generator(handle_type::from_promise(*this));
}
std::suspend_always initial_suspend() { return {}; }
std::suspend_always final_suspend() noexcept { return {}; }
void unhandled_exception() { exception_ = std::current_exception(); }
template <std::convertible_to<T> From> // C++20 concept
std::suspend_always yield_value(From&& from) {
value_ = std::forward<From>(from);
return {};
}
void return_void() {}
};
handle_type h_;
Generator(handle_type h) : h_(h) {
}
Generator(Generator&& g) : h_(std::move(g.h_)) { g.h_ = nullptr; }
~Generator() {
if (h_) {
h_.destroy();
}
}
Generator& operator=(Generator&& g) {
h_ = std::move(g.h_);
g.h_ = nullptr;
return *this;
}
explicit operator bool() {
fill();
return !h_.done();
}
T operator()() {
fill();
full_ = false;
return std::move(h_.promise().value_);
}
private:
bool full_ = false;
void fill() {
if (!full_) {
h_();
if (h_.promise().exception_)
std::rethrow_exception(h_.promise().exception_);
full_ = true;
}
}
};
| Your program contains undefined behavior.
The issue is that your fill resumes the coroutine when !full_, regardless of whether the coroutine is at its final suspend point, which you can check by calling h_.done().
Generator<uint64_t> Counter(uint64_t i) {
for (int j = 0; j < 2; j++) {
co_yield i;
}
// final suspend
}
If you resume the coroutine at the final suspend point, it will destroy itself and you can no longer do anything with the coroutine handle you have.
And you call fill from operator bool, meaning that when it is called while the coroutine is suspended at its final suspend point, it first destroys the coroutine and then tries to access it, which is UB:
fill(); // potentially destroy the coroutine
return !h_.done(); // access the destroyed coroutine
You could fix this by making fill aware of doneness:
void fill() {
if (!h_.done() && !full_) {
h_();
if (h_.promise().exception_)
std::rethrow_exception(h_.promise().exception_);
full_ = true;
}
}
Also, your move assignment operator leaks the current coroutine:
Generator& operator=(Generator&& g) {
h_ = std::move(g.h_); // previous h_ is leaked
g.h_ = nullptr;
return *this;
}
You probably want to have something like if (h_) h_.destroy(); in the beginning.
Also, as mentioned in the comments, the full_ member has to be carried over in the move constructor and assignment operators:
Generator(Generator&& g)
: h_(std::exchange(g.h_, nullptr))
, full_(std::exchange(g.full_, false)) {}
Generator& operator=(Generator&& g) {
full_ = std::exchange(g.full_, false);
...
}
|
71,774,376 | 71,774,811 | Insert into a vector using lower_bound | How do I change compareByNA to make the insert work? I assume it's wrong for the first element to insert.
program: https://onecompiler.com/cpp/3xycp2vju
bool Company::compareByNA(const Company &a, const Company &b)
{
return a.getNameAddr() < b.getNameAddr();
}
..
if ( binary_search(CompanyNameList.begin(), CompanyNameList.end(), cmp, Company::compareByNA) )
{
CompanyNameList.insert(lower_bound(CompanyNameList.begin(), CompanyIDList.end(), cmp, Company::compareByNA), cmp);
return true;
}
return false;
}
For example, this does not work for other elements that I want to insert in order:
if(CompanyNameList.size() == 0)
{
CompanyNameList.push_back(cmp);
return true;
}
| Here is your issue
lower_bound(CompanyNameList.begin(), CompanyIDList.end(), cmp, Company::compareByNA);
You are mixing up your lists.
You probably meant to use
lower_bound(CompanyNameList.begin(), CompanyNameList.end(), cmp, Company::compareByNA);
|
71,774,410 | 71,774,688 | MSVS2017, Boost C++ error with namespaces | Can anyone help me out? I've stumbled into a roadblock.
I've modified the project properties to include the Boost header path and the Boost linker path, plus the 'not using precompiled header files' option.
Somehow, Visual Studio can't see std_in/std_out as part of the boost::process namespace.
I've compile the same file on Linux, and it works fine. Same version of Boost 1.78.0.
namespace bp = ::boost::process;
bp::opstream chldInput;
bp::ipstream chldOutput;
bp::child c(cmd.c_str(), bp::std_out > chldInput, bp::std_in < chldOutput);
| It's not that it can't see bp::std_in and bp::std_out. It's because you've swapped the streams.
ipstream is an implementation of a reading pipe stream - a stream that you can use in a similar way as std::istreams, like std::cin.
opstream is an implementation of a write pipe stream - a stream that you can use in a similar way as std::ostreams, like std::cout.
However, bp::std_out needs to go to a bp::ipstream (which you can then read from) and bp::std_in needs its input from a bp::opstream (that you can write to).
Example:
bp::child c(cmd.c_str(), bp::std_out > chldOutput, bp::std_in < chldInput);
Since chldInput isn't connected to anything, you may want to use stdin instead:
bp::child c(cmd.c_str(), bp::std_out > chldOutput, bp::std_in < stdin);
You may also want to c.wait() for the command to finish.
Swapping the streams like you've done will likely generate an error like
program.cpp: In function ‘int main()’:
program.cpp: error: no match for ‘operator>’ (operand types are ‘const boost::process::detail::std_out_<1>’ and ‘boost::process::opstream’ {aka ‘boost::process::basic_opstream<char>’})
| bp::child c(cmd.c_str(), bp::std_out > chldInput, bp::std_in < chldOutput);
| ~~~~~~~~~~~ ^ ~~~~~~~~~
| | |
| | boost::process::opstream {aka boost::process::basic_opstream<char>}
| const boost::process::detail::std_out_<1>
(and a few hundred lines more).
|
71,774,559 | 71,774,898 | What does string().copy() do? | Can anyone explain what this code does?
Con(const char* n){
char_number = new char[sizeof(n) + 1];
string(n).copy(char_number, sizeof(n) + 1);
}
I don't understand the string(n) part. What exactly does this do?
| From the documentation:
std::string::string
copy constructor
string (const string& str);
Constructs a copy of str.
from c-string
string (const char* s);
Copies the null-terminated character sequence (C-string) pointed by s.
Note: .copy() is explained here.
|
71,774,876 | 71,774,994 | linked list c++ difference between void function and return this | I have a question about the differences between 2 styles of coding:
void *addfirst (int data){
head = new Node(data, head);
return;
}
and the second is:
LinkedList *addfirst (int data){
head = new Node(data, head);
return this;
}
My prof said that nowadays most people prefer the second choice, but I don't know if there are any advantages compared with the first one.
|
void *addfirst (int data){
head = new Node(data, head);
return;
}
The function is declared to return an object of type void*, but it returns nothing. The behaviour of the program is undefined.
LinkedList *addfirst (int data){
head = new Node(data, head);
return this;
}
This function does return an object. The behaviour is well defined.
|
71,775,286 | 71,775,376 | understand the different construction of a std::string_view | Live On Coliru
The string_view family of template specializations provides an
efficient way to pass a read-only, exception-safe, non-owning handle
to the character data of any string-like objects with the first
element of the sequence at position zero.
#include <iostream>
#include <string_view>
#include <unordered_map>
using namespace std;
std::unordered_map<std::string_view, int> create_map()
{
std::unordered_map<std::string_view, int> umap;
std::string s1("s1");
auto s2_bad = new std::string("s2");
umap.emplace(std::make_pair(s1, 1)); // doesn't work b/c sv points to local variable
umap.emplace(std::make_pair(*s2_bad, 2)); // Q1> why doesn't this work?
umap.emplace(std::make_pair("s3", 3)); // Q2> why does this work?
auto s4_bad = new std::string("s4_bad");
umap.emplace(std::make_pair(s4_bad->c_str(), 4)); // this works
return umap;
}
int main()
{
auto abc_map = create_map();
for(const auto&[key, value]: abc_map)
{
std::cout << "key: " << key << std::endl;
std::cout << "value: " << value << std::endl;
}
return 0;
}
Output:
key: s3
value: 3
key: $
value: 2
key: s4_bad
value: 4
key: $
value: 1
Question 1> Why doesn't s2_bad work here, even though it points to heap-allocated storage?
Question 2> Why does the insertion of the string literal "s3" work here, even though it is defined within the scope of a function?
| Question 1: std::make_pair(*s2_bad, 2) will create a std::pair<std::string, int>, so the pair has a distinct temporary std::string, unrelated to the string s2_bad points to. Then, a std::string_view will be constructed from this temporary, which will be promptly destroyed. So the moment umap.emplace(std::make_pair(*s2_bad, 2)); finishes, the string_view is invalid. If you'd like to avoid this copy (and the associated lifetime problems), you can use std::ref(*s2_bad) instead of *s2_bad.
Question 2: String literals have static lifetimes, i.e. their contents are stored in the read-only sections of binaries instead of in the stack, so holding a pointer to a string literal and returning from the function containing the string literal does not pose any dangling pointer problems.
|
71,775,512 | 71,775,557 | Wrong input in nested while loop | I don't know how to handle wrong input. I made a nested while loop, because if I only use the if statement and the input is wrong, it jumps to the beginning of the first while loop and not to "Do you wish to continue?". If I use a nested while loop, it somehow won't finish, even though the bool condition is satisfied. Please help.
Original Code on Github
while((end2 == false))
{
cout << endl;
cout << "Do you wish to continue? (Y / N)" << endl;
char go_on;
cout << "Your choice: ";
cin >> go_on;
cout << endl;
if((go_on != 'Y') && (go_on != 'y') && (go_on != 'N') && (go_on != 'n'))
{
cout << "Invalide input! Choose between Y and N" << endl;
continue;
}
if((go_on == 'Y') || (go_on == 'y'))
{
end2 == true;
end == false;
}
if((go_on == 'N') || (go_on == 'n'))
{
end2 = true;
end == true;
}
}
| It may be a lot simpler to create a function:
bool check_continue() {
    while (true) {
        std::cout << "Your choice: ";
        char go_on;
        std::cin >> go_on;
        if (go_on == 'Y' || go_on == 'y')
            return true;
        if (go_on == 'N' || go_on == 'n')
            return false;
        std::cout << "Invalid input! Choose between Y and N\n";
    }
}
|
71,775,829 | 71,775,904 | c++ read file with accents | Good day, I'm working on a small project where I need to read .txt files. The problem is that some are in English and others in Spanish, and some of the information comes with accents that I must show on the console with the accents intact.
I have no problem displaying accents on the console with setlocale(LC_CTYPE, "C");
my problem is that when reading the .txt file, the accents are not detected and strange characters are read instead.
my practice code is:
#include <iostream>
#include <locale.h>
#include<fstream>
#include<string>
using namespace std;
int main(){
setlocale (LC_CTYPE, "C");
ifstream file;
string text;
file.open("entryDisciplineESP.txt",ios::in);
if (file.fail()){
cout<<"The file could not be opened."<<endl;
exit(1);
}
while(!file.eof()){
getline(file,text);
cout<<text<<endl;
}
cout<<endl;
system("Pause");
return 0;
}
The .txt file in question contains:
Inicio
D1
Biatlón
S1
255
E1
Esprint 7,5 km (M); 100; 200
E2
Persecucion 10 km (M); 100; 200
ff
Obviously I'm having problems with 'ó', but I also have other .txt files with other accented characters, so I need a solution that works for all of them.
Researching, I read about and tried to implement wstring and wifstream, but I have not been able to do so successfully.
I'm trying to achieve this on Windows, but I also need the solution to work on Linux. At the moment I'm using Dev-C++ 5.11.
Thank you very much in advance for your time and help.
| Your error is in how you control your read-loop. See: Why !.eof() inside a loop condition is always wrong. Instead, control your read-loop with the stream-state returned by your read-function, e.g.
while (getline(file,text)) {
std::cout << text << '\n';
}
The character in question is simple extended ASCII (e.g. c3) and easily representable in std::string and with std::cout. Your full example, fixing Why is “using namespace std;” considered bad practice? would be
#include <iostream>
#include <fstream>
#include <string>
int main() {
setlocale (LC_CTYPE, "C");
std::ifstream file;
std::string text;
file.open ("entryDisciplineESP.txt");
if (file.fail()){
std::cerr << "The file could not be opened.\n";
exit(1);
}
while (getline(file,text)) {
std::cout << text << '\n';
}
std::cout.put('\n');
#ifdef _WIN32
system("Pause");
#endif
return 0;
}
Example Output
$ ./bin/accent_read
Inicio
D1
Biatlón
S1
255
E1
Esprint 7,5 km (M); 100; 200
E2
Persecucion 10 km (M); 100; 200
ff
Windows 10 Using UTF-8 Codepage
The problem you experience when attempting to run the above code in the Windows 10 console (which I presume is what DevC++ launches output in) is that the default codepage (437 - OEM United States) does not support UTF-8 characters. To change the codepage to UTF-8, use codepage 65001 (Unicode - UTF-8). See Code Page Identifiers.
To get the proper output after compiling under VS with the C++17 language standard, all that was needed was to change the codepage using chcp 65001 in the console. (you also must have an UTF-8 font, mine is set to Lucida Console)
Output In Windows Console (Command Prompt) After Setting Codepage
C:\Users\david\source\repos\accents>chcp 65001
Active code page: 65001
C:\Users\david\source\repos\accents>Debug\accents.exe
Inicio
D1
Biatlón
S1
255
E1
Esprint 7,5 km (M); 100; 200
E2
Persecucion 10 km (M); 100; 200
ff
Press any key to continue . . .
You have the additional need to set the codepage programmatically due to DevC++ automatically launching the console. You can do that using SetConsoleOutputCP (65001). For example:
...
#include <windows.h>
...
#define CP_UTF8 65001
int main () {
// setlocale (LC_CTYPE, "C"); /* not needed */
/* set console output codepage to UTF-8 */
if (!SetConsoleOutputCP(CP_UTF8)) {
std::cerr << "error: unable to set UTF-8 codepage.\n";
return 1;
}
...
See SetConsoleOutputCP function. The analogous function for setting the input codepage is SetConsoleCP(uint codepage).
Output Using SetConsoleOutputCP()
Setting the console to the default 437 codepage and then using SetConsoleOutputCP (65001) to set output codepage to UTF-8, you get the same thing, e.g.
C:\Users\david\source\repos\accents>chcp 437
Active code page: 437
C:\Users\david\source\repos\accents>Debug\accents.exe
Inicio
D1
Biatlón
S1
255
E1
Esprint 7,5 km (M); 100; 200
E2
Persecucion 10 km (M); 100; 200
ff
Press any key to continue . . .
Also, check the DevC++ project (or program) settings and check whether you can set the output codepage there. (I don't use it, so don't know if it is possible).
|
71,776,681 | 71,777,052 | Is it possible to pass popen() a string and have it return null? | While working on line/branch coverage for a unit test, I came across the following code
bool run_command(char *command)
{
FILE *handle = popen(command, "r");
if (handle == nullptr)
{
std::cerr << "" << std::endl;
return false;
}
I've tried many different commands such as
"cat /dev/null | head"
but nothing seems to cause popen to fail. I also wrote a Python script, test.py,
for i in range(10000):
print(i)
and passed "python test.py | head" to popen(). This causes a "broken pipe" error, but popen still returns a valid address.
Is it possible to pass a command string to popen function that will cause it to return null?
| I don't think so. glibc's popen ultimately calls posix_spawn, whose documentation says that it only fails if fork() fails. And that would have nothing to do with the command string.
Likewise, there are a few other ways that popen can fail before calling posix_spawn, but they aren't related to the command string; only conditions like running out of memory or file descriptors.
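That said, if you need to drive the error branch in a unit test, you can make one of those conditions happen deliberately. Here is a sketch (POSIX/Linux-specific; the helper name and the limit value are my own) that starves the process of file descriptors, so the internal pipe() fails and popen() returns a null pointer:

```cpp
#include <cstdio>
#include <sys/resource.h>

// Temporarily lower the soft fd limit so pipe() inside popen() must fail.
FILE *popen_with_no_fds()
{
    struct rlimit old_rl;
    getrlimit(RLIMIT_NOFILE, &old_rl);

    struct rlimit rl = old_rl;
    rl.rlim_cur = 3;                   // new descriptors (numbered >= 3) are refused
    setrlimit(RLIMIT_NOFILE, &rl);     // lowering the soft limit always succeeds

    FILE *f = popen("echo hi", "r");   // pipe() gets EMFILE -> popen returns NULL

    setrlimit(RLIMIT_NOFILE, &old_rl); // restore for the rest of the test run
    return f;
}
```

Because the limit is restored afterwards, the rest of the test suite is unaffected.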
|
71,777,048 | 71,777,566 | OOP in C++. How to drop the states | I am writing a simple graphics editor.
There are 3 buttons on the panel, by pressing which I draw a square, circle or line.
There are 3 button handlers that change the state and 3 mouse event handlers in the class responsible for drawing the workspace.
void Cpr111View::OnCirc()
{
state = 1;
}
void Cpr111View::OnLine()
{
state = 2;
}
void Cpr111View::OnRect()
{
state = 3;
}
To shorten the question, I will give only one handler out of 3.
void Cpr111View::OnMouseMove(UINT nFlags, CPoint point)
{
if (state==2)
{
int oldmode;
CClientDC *pDC = new CClientDC(this);
if (nFlags && MK_LBUTTON)
{
oldmode = pDC->GetROP2();
pDC->SetROP2(R2_NOT);
pDC->MoveTo(begin.x, begin.y);
pDC->LineTo(oldmouse.x, oldmouse.y);
pDC->MoveTo(begin.x, begin.y);
pDC->LineTo(point.x, point.y);
oldmouse = point;
pDC->SetROP2(oldmode);
CView::OnMouseMove(nFlags, point);
}
}
if (state == 1)
{
….
}
if (state == 3)
{
….
}
void Cpr111View::OnLButtonUp(UINT nFlags, CPoint point)
{
}
void Cpr111View::OnLButtonDown(UINT nFlags, CPoint point)
{
}
Here is a drawing system.
I want to do it without states. That is, create an abstract class Figure. With three virtual methods per render:
class Figure
{
public:
virtual void MouseMove() = 0;
virtual void ButtonUp() = 0;
virtual void ButtonDown() = 0;
};
And from him in the classes of figures to override these methods.
class Rectangle : public Figure
{
public:
void MouseMove() override;
...
};
Then, when the button is clicked, create an object of the corresponding class, then the button handler will look like this:
void Cpr111View::OnRect()
{
figure = new Rectangle();
}
And when drawing, the mouse handler will simply call the method of the corresponding class:
void Cpr111View::OnMouseMove(UINT nFlags, CPoint point)
{
figure->MouseMove();
}
In order for figure to be available in two different methods, we declare it in the class:
class Cpr111View : public CView
{
public:
Figure figure;
…
}
This is how I want to do it, but the problem is that it can't be done that way. At a minimum, you cannot declare an abstract class variable. Then what type should it be if I am going to write a pointer to different classes into it? How to implement this architecture correctly, or maybe there are better ideas?
| Using polymorphic calls this way in C++ requires reference semantics.
I advise to read about it. E.g.: Reference and Value Semantics
So in class Cpr111View, you have to keep your Figure member as a pointer or a reference.
In order to avoid having to manually manage the object, you should use a smart pointer like std::unique_ptr (or std::shared_ptr if you need to share ownership):
#include <memory> // for std::unique_ptr
class Cpr111View : public CView
{
public:
std::unique_ptr<Figure> figure;
//…
}
Of course you will need to allocate it before using it.
Instead of:
figure = new Rectangle();
use:
figure = std::make_unique<Rectangle>();
The method calls stay the same as in your code, e.g.:
figure->MouseMove();
If you are not familiar with smart pointers in C++, I recommend reading about them. E.g.: What is a smart pointer and when should I use one?.
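Putting the pieces together, a minimal self-contained sketch of the whole pattern (MFC types omitted; MouseMove returns a tag string here only so the dispatch is observable):

```cpp
#include <memory>
#include <string>

class Figure {
public:
    virtual ~Figure() = default;          // virtual destructor for polymorphic delete
    virtual std::string MouseMove() = 0;
};

class Rect : public Figure {
public:
    std::string MouseMove() override { return "rect"; }
};

class Circle : public Figure {
public:
    std::string MouseMove() override { return "circle"; }
};

class View {                              // stand-in for Cpr111View
public:
    std::unique_ptr<Figure> figure;
    void OnRect() { figure = std::make_unique<Rect>(); }
    void OnCirc() { figure = std::make_unique<Circle>(); }
    std::string OnMouseMove() { return figure ? figure->MouseMove() : ""; }
};
```

Note the null check in OnMouseMove: before any button was clicked, figure is still empty.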
|
71,777,390 | 71,784,149 | Make QGraphicsVideoItem Fill QWidget | My goal is to create a simple video player QWidget that allows for some overlayed graphics (think subtitles or similar).
I started with the naive approach, which was to use QVideoWidget with another set of QWidget on top (my overlays). Unfortunately this does not work because Qt does not allow widgets with transparent background to render on top of video. The background shows as black instead of the actual video.
Next idea is to use QGraphicsScene et al. which is supposed to allow this kind of compositing, so I create a dead simple setup like this:
// The widget we will use as viewport
auto viewport= new QWidget();
//Set an easily recognizable bg color for debugging
palette.setColor(QPalette::Window, Qt::green);
viewport->setPalette(palette);
viewport->setAutoFillBackground(true);
// Our scene
auto mScene=new QGraphicsScene(this);
// The video
auto mVideoItem = new QGraphicsVideoItem();
mVideoItem->setPos(0,0);
myVideoSource.setVideoOutput(mVideoItem); // ... not shown: setting up of the video source
mScene->addItem(mVideoItem);
// Yellow line starting at 0,0 for debugging
auto line=new QGraphicsLineItem (0,0,100,100);
line->setPos(0,0);
line->setPen(QPen(Qt::yellow, 2));
mScene->addItem(line);
// A Text string
auto text=new QGraphicsTextItem("Its Wednesday my dudes", mVideoItem);
text->setPos(10, 10);
// Our view
mView=new QGraphicsView;
mView->setScene(mScene);
mView->setViewport(viewport);
viewport->show()
Now this looks promising because I can see compositing works; the line and text render flawlessly on top of the video. However the video is positioned in a seemingly random place in the widget. (see screensdhot)
At this point I have tried every conceivable and inconceivable combination of
mVideoItem->setSize();
mVideoItem->setOffset();
mScene->setSceneRect();
mView->fitInView();
mView->ensureVisible();
mView->centerOn()
Trying to fill the viewport widget with the video item but nothing seems logical at all. Instead of centering the content, it seems to fly around the screen in logic defying ways and I have given up. I put my code in the viewport widget's resizeEvent and use the viewport widget's size() as the base.
So my question is; How can I fill viewport widget with video item on resize?
| I don't think QGraphicsVideoItem is a good fit for this task.
You can implement a QAbstractVideoSurface that receives QVideoFrames and feeds them to a QWidget, which converts them to a QImage, scales it, and draws it in paintEvent. Since you control paintEvent, you can draw anything over your video, and get the "fill viewport" feature for free.
Gist:
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
Surface surface;
Widget widget(surface);
widget.show();
QMediaPlayer player;
player.setVideoOutput(&surface);
player.setMedia(QMediaContent(QUrl("path/to/media")));
player.play();
return a.exec();
}
bool Surface::present(const QVideoFrame &frame)
{
mFrame = frame;
emit frameReceived();
return true;
}
Widget::Widget(Surface &surface, QWidget *parent)
: QWidget{parent}, mSurface(surface)
{
connect(&mSurface,SIGNAL(frameReceived()),this,SLOT(update()));
}
void Widget::paintEvent(QPaintEvent *event)
{
QVideoFrame frame = mSurface.frame();
if (frame.map(QAbstractVideoBuffer::ReadOnly)) {
QPainter painter(this);
int imageWidth = mSurface.imageSize().width();
int imageHeight = mSurface.imageSize().height();
auto image = QImage(frame.bits(),
imageWidth,
imageHeight,
mSurface.imageFormat());
double scale1 = (double) width() / imageWidth;
double scale2 = (double) height() / imageHeight;
double scale = std::min(scale1, scale2);
QTransform transform;
transform.translate(width() / 2.0, height() / 2.0);
transform.scale(scale, scale);
transform.translate(-imageWidth / 2.0, -imageHeight / 2.0);
painter.setTransform(transform);
painter.drawImage(QPoint(0,0), image);
painter.setTransform(QTransform());
painter.setFont(QFont("Arial", 20));
int fontHeight = painter.fontMetrics().height();
int ypos = height() - (height() - imageHeight * scale) / 2 - fontHeight;
QRectF textRect(QPoint(0, ypos), QSize(width(), fontHeight));
QTextOption opt(Qt::AlignCenter);
painter.setPen(Qt::blue);
painter.drawText(textRect, "Subtitles sample", opt);
frame.unmap();
}
}
Full source: draw-over-video
Based on customvideosurface example from Qt.
|
71,777,564 | 71,778,006 | Can't use std::format in c++20 | I've been trying to use the std::format function included in C++20. As far as I can tell, clang 14 is supposed to support this feature, but for some reason I am receiving the following error: no member named 'format' in namespace 'std'. According to cppreference's compiler support chart, text formatting should be supported by clang, but I'm still receiving this error. I'm at a loss for what the issue is.
|
According to this, text formatting should be supported by clang
If you look closely, there is an asterisk in that cell:
14*
Below, it says:
* - hover over the version number to see notes
And when you hover, it says:
The paper is implemented but still marked as an incomplete feature. Not yet implemented LWG-issues will cause API and ABI breakage.
What's unsaid is that incomplete features are not enabled by default. But that makes sense since they wouldn't want users to depend on an API/ABI that will break. In my opinion, as also evidenced by this question, using green for this cell is misleading.
In conclusion, it's best to use a third-party formatting library (such as {fmt}, on which std::format is modeled) until the standard implementation of text formatting is complete, stable and non-experimental in major language implementations.
Other caveats:
You must include the header that defines std::format.
Clang doesn't use C++20 by default, so you must specify it explicitly.
Clang uses libstdc++ standard library on Linux by default (for compatibility with shared libraries), so in such case you won't be using the Clang's standard library by default, and libstdc++ hasn't implemented text formatting yet.
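Independent of compiler versions, you can test at compile time whether the standard library declares text formatting as available, via the library feature-test macro. A sketch (the describe helper is my own):

```cpp
#include <string>
#include <version>       // library feature-test macros
#if defined(__cpp_lib_format)
#include <format>        // only pulled in where the library provides it
#endif

std::string describe(int answer)
{
#if defined(__cpp_lib_format)
    return std::format("answer = {}", answer);   // standard text formatting
#else
    return "answer = " + std::to_string(answer); // fallback for incomplete libraries
#endif
}
```

With an incomplete/experimental <format> (as in the libc++ case above), __cpp_lib_format stays undefined and the fallback branch is compiled instead.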
|
71,777,900 | 71,780,892 | c++ String from file to vector - more elegant way | I write a code in which I want to pass several strings from text file to string vector. Currently I do this that way:
using namespace std;
int main()
{
string list_name="LIST";
ifstream REF;
REF.open(list_name.c_str());
vector<string> titles;
for(auto i=0;;i++)
{
REF>>list_name;
if(list_name=="-1"){break;}
titles.push_back(list_name);
}
REF.close();
cout<<titles.size();
for(unsigned int i=0; i<titles.size(); i++)
{
cout<<endl<<titles[i];
}
It works fine, I get the output as expected. My concern is is there more elegant way to pass string from text file to vector directly, avoiding this fragment, when passing string from filestream to string object and assigning it to the vector with push_back as separate step:
REF>>list_name;
if(list_name=="-1"){break;}
titles.push_back(list_name);
| The other answers are maybe too complicated.
Let me first do a small review of your code. Please see my comments within the code:
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
using namespace std; // You should not open the full std namespace. Better to use full qualification
int main()
{
string list_name = "LIST";
ifstream REF; // Here you could directly use the constructor of the istream, which will open the file for you
REF.open(list_name.c_str()); // No need to use c_str()
vector<string> titles; // All variables should be initialized. Use {}
for (auto i = 0;; i++) // Endless loop. You could also write for(;;), but bad design
{
REF >> list_name;
if (list_name == "-1") { break; } // Break out of the endless loop. Bad design. Curly braces not needed
titles.push_back(list_name);
}
REF.close(); // No need to close the file. With RAII, the destructor of the istream will close the file for you
cout << titles.size();
for (unsigned int i = 0; i < titles.size(); i++) // Better to use a range based for loop
{
cout << endl << titles[i]; // endl not recommended. For cout, '\n' is better, because it does not call flush unnecessarily.
}
}
You see many points for improvement.
Let me explain some of the more important topics to you.
You should use the std::ifstream's constructor to directly open the file.
Always check the result of such an operation. The bool and ! operators for the std::ifstream are overloaded. So a simple test can be done
No need to close the file. The destructor of the std::ifstream will do that for you.
There is a standard approach on how to read a file. Please see below.
If you want to read a file until EOF (end of file) or until any other condition, you can simply use a while loop and call the extraction operator >>
For example:
while (REF >> list_name) {
titles.push_back(list_name);
}
Why does this work? The extraction operator always returns a reference to the stream it was called on. So you can imagine that after reading the string, the while condition effectively becomes while (REF), because REF was returned by (REF >> list_name). And, as mentioned already, the bool operator of the stream is overloaded and returns the state of the stream. If there were any error or EOF, then if (REF) would be false.
And now the additional condition: a comparison with "-1" can be easily added to the while statement.
while ((REF >> list_name) and (list_name != "-1")) {
titles.push_back(list_name);
}
This is a safe operation, because of boolean short-cut evaluation. If the first condition is already false, the second will not be evaluated.
With all the know-how above, the code could be refactored to:
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
int main() {
// Here our source data is stored
const std::string fileName{ "list.txt" };
// Open the file and check, if it could be opened
std::ifstream fileStream{ fileName };
if (fileStream) {
// Here we will store all titles that we read from the file
std::vector<std::string> titles{};
// Now read all data and store it in our resulting vector
std::string tempTitle{};
while ((fileStream >> tempTitle) and (tempTitle != "-1"))
titles.push_back(tempTitle);
// For debug purposes. Show all titles on screen:
for (const std::string& title : titles)
std::cout << '\n' << title;
}
else std::cerr << "\n*** Error: Could not open file '" << fileName << "'\n";
}
|
71,778,106 | 71,778,987 | Specialization of non-type template argument | I have a struct
template <auto& t>
struct Foo {
using Type = decltype(t);
};
I also have a template class:
template <typename T> class MyClass {};
I want to create a specialization for this struct for any arg of type MyClass:
template <typename T>
struct Foo <MyClass<T>& t> {
using Type = int;
};
I'd Like to be able to use this class like:
Foo<true>::Type t = false;
This code doesn't compile. How can I do this kind of specialization? Is there some other approach using std::enable_if that I can use to accomplish this?
You can see the code at https://onlinegdb.com/1Qzum1Fs2J
| Your code is close to the needed solution. The specialization simply needs slightly different syntax:
template <typename T> class MyClass {};
template < auto value >
struct Foo
{
void Check() { std::cout << "Default" << std::endl; }
};
template <typename T, MyClass<T> value>
struct Foo<value>
{
void Check() { std::cout << "Spezial" << std::endl; }
};
int main()
{
Foo<10> fi;
Foo<MyClass<int>{}> fm;
fi.Check();
fm.Check();
}
For gcc, it needs the trunk version; gcc 11 compiles but delivers a wrong result!
See it working: Works on gcc trunk, clang trunk and msvc v19.30
|
71,778,162 | 71,778,402 | How to forward declare a template function in global or namespace with default argument? | template<typename T> void foo (T t, int i = 0); // declaration
int main () { foo(1, 0); } // error!!
template<typename T> void foo (T t, int i = 0) {} // definition
Above is a minimal reproducible example for a larger problem, where many header files are involved. Attempting to forward declare with default parameter results in below compilation:
error: redeclaration of ‘template void foo(T, int)’ may not have default arguments [-fpermissive]
How to fix this?
| A default argument, like int i = 0, is seen as a definition. Repeating it is therefore an ODR-violation.
Don't know exactly why, except that the standard explicitly says so
Each of the following is termed a definable item:
[...]
(1.6) a default argument for a parameter (for a function in a given > scope)
[...]
No translation unit shall contain more than one definition of any definable item.
http://eel.is/c++draft/basic.def.odr
The solution is then to only have the default argument appear once, likely in the declaration (and not repeated in the definition).
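A fixed version of the snippet from the question, keeping the default argument only on the first declaration (the function returns i here instead of void, just so the default is observable):

```cpp
template <typename T> int foo(T t, int i = 0); // declaration carries the default

int use() { return foo(1); }                   // OK: i defaults to 0

template <typename T> int foo(T t, int i)      // definition: default NOT repeated
{
    (void)t;
    return i;
}
```

This compiles cleanly because the default for i is now defined exactly once.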
|
71,778,286 | 71,778,639 | try catch mechanism in c++ | I honestly searched and tried to implement try - catch mechanism in c++, but I failed: I don't have enough experience yet. In android there is a convenient way to catch general exceptions, whether it's a division by zero or an array out-of-bounds, like
int res;
int a=1;
int b=0;
try{res = a/b;}
catch(Exception e)
{
int stop=1;
};
Works fine, program does not crash.
Could you please tell me how to make an universal exceptions interceptor in C++, if possible.
Many thanks in advance for any advice!
| C++ has a diverse range of error handling for different problems.
Division by zero and many other errors (null pointer access, integer overflow, array-out-of-bounds) do not cause an exception that you can catch.
You could use tools such as clang's undefined behavior sanitizer to detect some of them, but that requires some extra work on your part, and comes with a performance hit.
The best way in C++ to deal with prevention of a division by zero, is to check for it:
int res;
int a=1;
int b=0;
if (b == 0)
{
int stop=1;
}
else
{
res = a/b;
}
See also the answers to this other very similar question.
|
71,778,543 | 71,778,909 | How can I sort function using priority queue in C++/C? | I want to array "functions" by priority
E.g SetNumber1 is first
SetNumber2 is second
ReadNumbers is last
////////////////
priority_queue < ??? > Q
Q.push(ReadNumbers());
Q.push(SetNumber2());
Q.push(SetNumber1());
i want to exec in order to SetNumber1() , SetNumber2(), ReadNumbers()
Thanks.
and sorry about my English skills, I'm Korean, so I'm not good at English
| std::priority_queue works with a compare function which can't work with the given functions. You need to define your own priorities as an enum and then you can put the pair of the prio and function in an std::multimap so all functions can be called according to their prio. You can put the pair also in a prio-q but then you still need a compare function that only works on the prio.
e.g.:
#include <iostream>
#include <map>
#include <functional>
enum class prio { HI, MID, LO };
int main()
{
std::multimap<prio, std::function<void()>> prio_mmap;
prio_mmap.insert(std::make_pair(prio::MID, []{ std::cout << "MID1\n"; }));
prio_mmap.insert(std::make_pair(prio::LO, []{ std::cout << "LO\n"; }));
prio_mmap.insert(std::make_pair(prio::HI, []{ std::cout << "HI\n"; }));
prio_mmap.insert(std::make_pair(prio::MID, []{ std::cout << "MID2\n"; }));
for (const auto& p: prio_mmap)
{
p.second();
}
}
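And for completeness, a sketch of the std::priority_queue alternative mentioned above, where the compare function looks only at the prio half of each pair. Note that std::priority_queue pops its greatest element first, so the comparator is written so that HI (the smallest enum value) ends up on top (the task alias, by_prio comparator, and run_all helper are my own names):

```cpp
#include <functional>
#include <queue>
#include <string>
#include <vector>

enum class prio { HI, MID, LO };

// Pair up a priority with a callable; the callable returns a tag for demonstration.
using task = std::pair<prio, std::function<std::string()>>;

// "Greater" enum value means lower priority, so HI is popped first.
struct by_prio {
    bool operator()(const task& a, const task& b) const { return a.first > b.first; }
};

// Drain the queue in priority order and record the execution order.
std::vector<std::string> run_all(std::priority_queue<task, std::vector<task>, by_prio> q)
{
    std::vector<std::string> order;
    while (!q.empty()) {
        order.push_back(q.top().second());
        q.pop();
    }
    return order;
}
```

Unlike the multimap, the queue is destructive: tasks are popped as they run.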
|
71,778,599 | 71,779,291 | Some (but not all) targets in this export set were already defined | I create a CMakeLists.txt and the content is as followed
cmake_minimum_required (VERSION 3.8)
project(CTP_dll)
add_library(CTPdll SHARED CTPdll.cpp)
add_executable(CTPTest CTPTest.cpp)
target_link_libraries(CTPTest CTPdll)
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
target_link_libraries(CTPdll ${OpenCV_LIBS})
find_package(VTK REQUIRED)
include(${VTK_USE_FILE})
target_link_libraries(CTPTest ${VTK_LIBRARIES})
And the error info is
CMake Error at D:/vcpkg/installed/x64-windows/share/hdf5/hdf5-targets.cmake:37 (message):
Some (but not all) targets in this export set were already defined.
Targets Defined: hdf5::hdf5-shared;hdf5::hdf5_hl-shared
Targets not yet defined: hdf5::hdf5_cpp-shared;hdf5::hdf5_hl_cpp-shared
If I delete the including of VTK as followed, no error will be reported. But obvious I can't include VTK in CTPTest.cpp, which is unacceptable.
cmake_minimum_required (VERSION 3.8)
project(CTP_dll)
add_library(CTPdll SHARED CTPdll.cpp)
add_executable(CTPTest CTPTest.cpp)
target_link_libraries(CTPTest CTPdll)
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
target_link_libraries(CTPdll ${OpenCV_LIBS})
I compile this with Visual Studio 2022 on windows10 platform.
There was a similar question on the web but nobody replied. So I propose this question and hope someone can help.
| Seems you are encountering vcpkg issue #15502.
There is a pull request with a fix available, which apparently was not merged yet.
|
71,779,078 | 71,779,123 | How would I write this python code in c++ | I'm doing a leetcode problem about squares intersecting. Here's how I was able to figure it out in Python, but how would I write it in C++?
def intersections(rects):
events = []
for A in rects:
events.append((A.x, 'insert', A.y, A.Y))
events.append((A.X, 'remove', A.y, A.Y))
intersections = 0
ys = SortedList()
Ys = SortedList()
for x, op, y, Y in sorted(events):
if op == 'insert':
intersections += ys.bisect_right(Y) - Ys.bisect_left(y)
ys.add(y)
Ys.add(Y)
else:
ys.remove(y)
Ys.remove(Y)
return intersections
| I haven't touched C++ in some time... So this is merely an attempt to start with.
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;
struct Rect {
int x, X, y, Y;
};
int intersections(vector<Rect> &rects) {
vector<pair<int, pair<char, pair<int, int>>>> events;
for (auto &A : rects) {
events.emplace_back(A.x, make_pair('i', make_pair(A.y, A.Y)));
events.emplace_back(A.X, make_pair('r', make_pair(A.y, A.Y)));
}
sort(events.begin(), events.end());
int intersections = 0, n = events.size();
vector<int> ys, Ys;
for (int i = 0; i < n; i++) {
int y = events[i].second.second.first, Y = events[i].second.second.second;
if (events[i].second.first == 'i') {
// Python: ys.bisect_right(Y) - Ys.bisect_left(y); subtract counts, not iterators
intersections += (upper_bound(ys.begin(), ys.end(), Y) - ys.begin())
- (lower_bound(Ys.begin(), Ys.end(), y) - Ys.begin());
// insert at the sorted position; a plain push_back would break the sorted
// order that upper_bound/lower_bound rely on (Python's SortedList keeps it sorted)
ys.insert(lower_bound(ys.begin(), ys.end(), y), y);
Ys.insert(lower_bound(Ys.begin(), Ys.end(), Y), Y);
} else {
ys.erase(lower_bound(ys.begin(), ys.end(), y));
Ys.erase(lower_bound(Ys.begin(), Ys.end(), Y));
}
}
return intersections;
}
int main()
|
71,779,537 | 71,792,151 | how to implement duplex printing via GDI Print API | I need to implement duplex printing on a printer. I set the DEVMODE structure in the printer driver with the dmDuplex parameter
int dvmSize=DocumentProperties(GetForegroundWindow(),hPrinter,(LPWSTR)(printerName.c_str()),NULL,NULL,0);
DEVMODE *dvmSettings = (DEVMODE*) GlobalAlloc(GPTR,dvmSize);
DEVMODE *dvMode = (DEVMODE*) GlobalAlloc(GPTR,dvmSize);
DocumentProperties(GetForegroundWindow(),hPrinter,(LPWSTR)(printerName.c_str()),dvmSettings,NULL,DM_OUT_BUFFER);
dvmSettings->dmDuplex=DMDUP_HORIZONTAL;
dvmSettings->dmFields=DM_DUPLEX;
DocumentProperties(GetForegroundWindow(),hPrinter,(LPWSTR)(printerName.c_str()),dvMode,dvmSettings,DM_IN_BUFFER|DM_OUT_BUFFER);
As far as I understand, there are 2 ways of implementation. Via WritePrinter or create a HDC using CreateDC passing dvMode and draw directly to the context. The second way is more convenient for me, but will it work? I'm asking because I can't test it right now and I need to know if it will work.
| As far as I can tell, it will work.
You could call the CreateDC or PrintDlgEx function to get the printer DC.
If your application calls the CreateDC function, it must supply a driver and port name. To retrieve these names, call the GetPrinter or EnumPrinters function.
If your application calls the PrintDlgEx function and specifies the PD_RETURNDC value in the Flags member of the PRINTDLGEX structure, the system returns a handle to a device context for the printer selected by the user.
For more details I suggest you could refer to the Docs:
https://learn.microsoft.com/en-us/windows/win32/printdocs/sending-data-directly-to-a-printer
https://learn.microsoft.com/zh-cn/windows/win32/printdocs/gdi-printing
https://learn.microsoft.com/en-us/windows/win32/printdocs/printer-output
https://learn.microsoft.com/zh-cn/windows/win32/dlgbox/using-common-dialog-boxes
|
71,779,939 | 71,780,861 | C++: How to use different dynamic template in map | my header code:
template <typename T>
class A
{
};
template<> class A<short>;
template<> class A<float>;
in my cpp, i want to use a map to contain different type a, like following code:
class B
{
map<int, A*> a; /* how to declare a */
public:
AddA(int key, int type)
{
if (type == 1)
{
a.insert({ key, new A<short>() });
}
else
{
a.insert({ key, new A<float>() });
}
}
template<typename T>
func(int key, T v)
{
a[key].func(v);
}
};
question: how to implement it?
edit @ 0410, here is my solution:
class ABase
{
public:
virtual ~ABase() = default;
virtual void func(void* t) = 0;
};
template <typename T> class A;
template <> class A<short> : public ABase
{
public:
void func(void* t) override
{
auto value = *static_cast<short*>(t);
// do processing
}
};
template <> class A<float> : public ABase
{
public:
void func(void* t) override
{
auto value = *static_cast<float*>(t);
// do processing
}
};
CPP: use a map of ABase* to hold all the template classes, and a virtual func as the common interface for all of them
main()
{
map<int, ABase*> objs;
objs.insert({0, new A<short>()});
objs.insert({1, new A<float>()});
short value = 0;
objs[0]->func(&value);
auto value1=0.f;
objs[1]->func(&value1);
}
| If you really need to have multiple types in a single map, you can use a map of std::variant. But as already mentioned in the comments, this might be a design problem.
But if you need it, you can proceed with the std::map< int, std::variant<>>. Later on, if you want to access the stored element, you have to call std::visit to pick the element which is stored in std::variant.
See the following example:
template < typename T >
struct A
{
};
// spezialize if needed, here only for demonstration purpose
template <> struct A<short> { void func(short parm) { std::cout << "A<short> with " << parm << std::endl; } };
template <> struct A<float> { void func(float parm) { std::cout << "A<float> with " << parm << std::endl; } };
class B
{
std::map<int, std::variant<A<short>*, A<float>*>> a;
public:
void AddA(int key, int type)
{
if (type == 1)
{
a.insert({ key, new A<short>() });
}
else
{
a.insert({ key, new A<float>() });
}
}
template<typename T>
void func(int key, T v)
{
std::visit( [&v]( auto ptr ) { ptr->func(v); }, a[key] );
}
};
int main()
{
B b;
b.AddA( 1, 1 );
b.AddA( 2, 2 );
b.func( 1, 99 );
b.func( 2, 100 );
}
|
71,779,977 | 71,782,705 | Contradicting definition of implicit this parameter in the standard | I am learning about classes in C++. I came across the following statement from the standard:
During overload resolution, non-static cv-qualified member function of class X is treated as a function that takes an implicit parameter of type lvalue reference to cv-qualified X if it has no ref-qualifiers or if it has the lvalue ref-qualifier. Otherwise (if it has rvalue ref-qualifier), it is treated as a function taking an implicit parameter of type rvalue reference to cv-qualified X.
The above statement seems to imply that for a const qualified non-static member function of a class X will have an implicit parameter of type const X&.
But then i also came across:
The type of this in a member function of class X is X* (pointer to X). If the member function is cv-qualified, the type of this is cv X* (pointer to identically cv-qualified X). Since constructors and destructors cannot be cv-qualified, the type of this in them is always X*, even when constructing or destroying a const object.
So according to the above second quote the implicit this parameter for a const qualified non-static member function of class Xhas type const X*.
My question is that why is there such a difference. I mean during overload resolution for a const qualfied nonstatic member function, why is the implicit parameter considered as a const X& and not simply a const X* which seems to be the actual type of this.
| The implicit object parameter is not the same as this. this is a pointer referring to the object on which the member function was called, while the implicit object parameter is the imagined first parameter of the member function, which is passed the object expression of the member function call (what is left of the . in the member access expression) and so should be a reference parameter.
It wouldn't make sense to use a pointer for the implicit object parameter: it would make it impossible to overload the function on the value category of the object expression. If a member function is &&-qualified, the implicit object parameter is an rvalue reference, so that when the object expression is an rvalue, overload resolution correctly prefers it over a &-qualified member function.
So the implicit object parameter is const T& in your example, but this has type const T*. There is no contradiction.
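The value-category dispatch that a reference parameter (but not a pointer) makes possible can be seen in a small sketch, with tag strings so the chosen overload is visible:

```cpp
#include <string>

struct X {
    std::string f() &  { return "lvalue object"; } // implicit object parameter: X&
    std::string f() && { return "rvalue object"; } // implicit object parameter: X&&
};

std::string call_on_lvalue() { X x; return x.f(); } // picks the &-qualified overload
std::string call_on_rvalue() { return X{}.f(); }    // picks the &&-qualified overload
```

this, in contrast, has the same pointer type inside both overloads: it carries cv-qualification, but no value category.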
|
71,780,299 | 73,680,598 | CppUMock for mocking open() function | Is it possible to mock the C open() function using CppUTest ?
#include "CppUTest/CommandLineTestRunner.h"
#include "CppUTest/TestHarness.h"
#include "CppUTestExt/MockSupport.h"
#include <fcntl.h>
extern "C"
{
#include "drivers/my_driver.h"
}
TEST_GROUP(Driver)
{
void teardown() {
mock().clear();
}
};
int open(const char *__path, int __oflag, ...)
{
return int(mock().actualCall(__func__)
.withParameter("__path", __path)
.withParameter("__oflag", __oflag)
.returnIntValue());
}
TEST(Driver, simpleTest)
{
mock().expectOneCall("open")
.withParameter("/dev/sys/my_hw", O_RDWR)
.andReturnValue(1);
mock().checkExpectations();
bool r = open_driver();
CHECK_TRUE(r);
}
int main(int ac, char** av)
{
return CommandLineTestRunner::RunAllTests(ac, av);
}
And here is my open_driver() functions :
#include "drivers/my_driver.h"
#include <string.h>
#include <fcntl.h>
bool open_driver() {
int fd_driver = open("/dev/iost/mast", O_RDWR);
char msg[255] = {0};
if(fd_driver == -1)
return false;
return true;
}
For now, I obtain the following error :
/home/.../open_driver.cpp:26: error: Failure in TEST(Driver, simpleTest)
Mock Failure: Expected call WAS NOT fulfilled.
EXPECTED calls that WERE NOT fulfilled:
open -> int /dev/sys/my_hw: <2 (0x2)> (expected 1 call, called 0 times)
EXPECTED calls that WERE fulfilled:
<none>
.
Errors (1 failures, 1 tests, 1 ran, 1 checks, 0 ignored, 0 filtered out, 0 ms)
| Use the linker option --wrap (passed through the compiler driver as -Wl,--wrap=open) to handle this scenario.
For details, refer to the link below:
How to wrap functions with the `--wrap` option correctly?
|
71,780,316 | 71,781,163 | How to align text and images like the sample image provided in Qt? |
I want to display the images and Texts in the PDF file generated in the manner shown in the image provided and I don't know how to go on about it.
At first I tried this:
for(int i=0; i<COVDATA.COV_ComponentName.size(); i++){
//------------------------------------------------------------------------------------
x = x + 550;
y = y + 150;
// ScreenShot_path = COVDATA.COV_ComponentScreenShot[i];
// img = QImage(ScreenShot_path);
// painter.drawImage(QRect(100,y,210,150),img.scaled( 800 , 800, Qt::KeepAspectRatio) );
// painter.setPen(QPen(Qt::black, 5));
// painter.setFont(OpenSans);
// painter.setFont(QFont("Open Sans",10,QFont::Normal));
// painter.drawText( QRect(0,y,PageWidth-210,150), Qt::AlignRight|Qt::AlignVCenter, COVDATA.COV_ComponentName[i]);
if(i % 2 == 0 || i==0){
painter.setPen(QPen(Qt::black, 5));
painter.setFont(OpenSans);
painter.setFont(QFont("Open Sans",10,QFont::Normal));
painter.drawText( QRect(0,y,210,150), Qt::AlignLeft|Qt::AlignVCenter, COVDATA.COV_ComponentName[i]);
ScreenShot_path = COVDATA.COV_ComponentScreenShot[i];
img = QImage(ScreenShot_path);
painter.drawImage(QRect(150,y+350,210,450),img.scaled( 800 , 800, Qt::KeepAspectRatio) );
}else{
painter.setPen(QPen(Qt::black, 5));
painter.setFont(OpenSans);
painter.setFont(QFont("Open Sans",10,QFont::Normal));
painter.drawText( QRect(0,y,PageWidth-210,150), Qt::AlignRight|Qt::AlignVCenter, COVDATA.COV_ComponentName[i]);
ScreenShot_path = COVDATA.COV_ComponentScreenShot[i];
img = QImage(ScreenShot_path);
painter.drawImage(QRect(0,y+350,210,350),img.scaled( 800 , 800, Qt::KeepAspectRatio) );
}
}
But it didn't really work out for me and I honestly don't know how to go about it. COVDATA.COV_ComponentName is a list containing the texts I want to print, while COVDATA.COV_ComponentScreenShot is a list containing the storage locations of the images, which are loaded into a QImage object so they can be drawn.
These are to be printed in my PDF file.
| Your coordinate computation is not working as intended. I've put together a short example illustrating how you can go about implementing a layout as you wish:
QStringList Text;
Text << "String 1" << "String 2" << "String 3" << "String 4" << "String 5" << "String 6" << "String 7" << "String 8";
int TextHeight = painter.fontMetrics().height() * 1.5;
int Padding = 10;
int NumOfColumns = 2;
int RectWidth = (geometry().width() - (NumOfColumns+1) * Padding) / NumOfColumns;
int NumOfRows = std::ceil(Text.size() / static_cast<float>(NumOfColumns)); // std::ceil requires #include <cmath>
int RectHeight = (geometry().height() - NumOfRows * TextHeight - (NumOfRows + 1) * Padding) / NumOfRows;
for (int i = 0; i < Text.size(); i++)
{
int x = Padding + (i % NumOfColumns) * (RectWidth + Padding);
int y = Padding + (i / NumOfColumns) * (RectHeight + TextHeight + Padding);
painter.setPen(QPen(Qt::black, 2));
painter.drawText(QRect(x, y, RectWidth, TextHeight), Qt::AlignLeft | Qt::AlignVCenter, Text[i]);
painter.drawRect(QRect(x, y + TextHeight, RectWidth, RectHeight));
}
I have replaced the COVDATA stuff and screenshots with a simple list of texts and rectangles.
With the above code, you can configure the number of columns (the number of rows will be computed from that such that all items in my list Text have place on the screen) as well as the padding between the rectangles.
But the principle should be clear:
I compute the x and y component depending on the index i
The coordinates are computed depending on a predefined width and height of the allotted spaces for the rectangles; in my case this is set to a fraction of the full space available to the widget (so it also adapts to resizing natively).
This is what the result looks like:
|
71,781,785 | 71,781,932 | Enforce class template specializations to provide one or more methods | I'm using a "traits" pattern where I have a base case expressed as a class template
template <class>
struct DoCache {
constexpr static bool value = false;
};
and I expect users to specialize for their types:
template <>
struct DoCache<MyType> {
constexpr static bool value = true;
static void write2Cache(MyType const&) { /* implementation */ }
static optional<MyType> readFromCache(string name) { /* implementation */ }
};
The typical use is to retrieve and use this as:
// Define a variable template
template <class T>
constexpr bool do_cache_v = DoCache<T>::value;
// Use the above trait in compile time branching:
if constexpr (do_cache_v<T>)
{
write2Cache(arg);
}
There's two problems I have with this code:
A user is only indirectly enforced to provide a "value" member when specializing, let alone making it the proper value (i.e. true). By indirectly I mean they'll get a bunch of compilation errors that one can only solve if they know the answer beforehand.
There's no way of "requiring" them to create the two needed methods, namely write2Cache and readFromCache, let alone having (const) correct types.
In some code-bases I've seen the considerations above being tackled by defining a generator macro like:
#define CACHABLE(Type, Writer, Reader) ...
Is there a better way to do it?
Can concepts be used to restrict the way a specialization looks?
Is there a C++17 compatible way?
an answer to any of the above is appreciated
| C++17: Curiously recurring template pattern
It seems like a suitable use case for CRTP:
template<typename T>
struct DoCache {
void write2Cache() {
static_cast<T*>(this)->write2Cache();
}
// ...
};
template<typename T>
void write2Cache(DoCache<T>& t) {
t.write2Cache();
}
struct MyType : DoCache<MyType>
{
void write2Cache() { /* ... */ }
};
int main() {
MyType mt{};
write2Cache(mt);
}
Instead of requiring clients to specialize a library type over their own types, you require them to implement their own types in terms of (static polymorphism) the contract/facade of the library type.
C++20: Concepts
With concepts you can skip polymorphism entirely:
template<typename T>
concept DoCachable = requires(T t) {
t.write2Cache();
};
template<DoCachable T>
void write2Cache(T& t) {
t.write2Cache();
}
struct MyType {
void write2Cache() { /* ... */ }
};
struct MyBadType {};
int main() {
MyType mt{};
write2Cache(mt);
MyBadType mbt{};
write2Cache(mbt); // error: ...
// because 'MyBadType' does not satisfy 'DoCachable'
// because 't.write2Cache()' would be invalid: no member named 'write2Cache' in 'MyBadType'
}
However, this again places requirements on the definition site of the client type (as opposed to specialization, which can be done after the fact).
Trait-based conditional dispatch to write2Cache()?
But how is the trait do_cache_v exposed this way?
C++17 approach
Since the CRTP-based approach offers an "is-a"-relationship via inheritance, you could simply implement a trait for "is-a DoCache<T>":
#include <type_traits>
template<typename T>
struct is_do_cacheable : std::is_base_of<DoCache<T>, T> {};
// note: is_base_of is used so the trait is also true for client types that
// derive from DoCache<T>; an exact-match specialization on DoCache<T> would
// miss those derived types
template<typename T>
constexpr bool is_do_cacheable_v{is_do_cacheable<T>::value};
// ... elsewhere
if constexpr(is_do_cacheable_v<T>) {
write2Cache(t);
}
C++20 approach
With concepts, the concept itself can be used as a trait:
if constexpr(DoCachable<T>) {
write2Cache(t);
}
|
71,781,811 | 71,822,120 | How can I build amazon kinesis webrtc sdk in C on windows - missing header files | I'm trying to build WebRTC SDK in C for Embedded Devices on windows.
I have configured using CMake with -DBUILD_DEPENDENCIES=0, and have installed various libraries manually such as pthreads, usrsctp, libssl etc.
I don't have gstreamer installed, so I do get a message about not being able to configure one of the examples, but that is expected.
I'm running cmake from a "x64 native tools command prompt for vs 2019", hence the configuration below.
So after configuration I have a visual studio solution, which as far as I can tell should be able to build the examples.
However, the code uses an include file that is referencing non-existent files in the SDK. In particular, Include.h in com/amazonaws/kinesis/video/webrtcclient/ begins with:
#include <com/amazonaws/kinesis/video/client/Include.h>
#include <com/amazonaws/kinesis/video/common/Include.h>
#include <com/amazonaws/kinesis/video/webrtcclient/NullableDefs.h>
#include <com/amazonaws/kinesis/video/webrtcclient/Stats.h>
but there is no client or common directory in com/amazonaws/kinesis/video. The com directory is in the repo directory src\include, which to me looks like the video\client dir should have been checked out if it exists, rather than built.
I also don't see any solution to build any kinesis libraries, but the examples seem to include a lot of the source files directly - so is this SDK supposed to build a library as well?
Have I missed a build step somewhere? Do I need to download/build the rest of the kinesis video stream stuff as well as the webrtc sdk?
| Yes you will need to build the other KVS libraries that the WebRTC implementation depends on.
You can find them in the .gitmodules of the project.
You can also see how they are built/configured in the CMakeLists.txt
|
71,781,890 | 71,977,246 | Console outputs gibberish code after re-redirecting stdout to CON | When I use C++ to invoke Python program output (By system command with parameters), it outputs gibberish code at the end of line. After that, I couldn't input any character (Include Backspace and Enter), it displays a hollow square.
Console screenshot:
Whole function code: (Uses file process)
freopen("WCH_SYSTEM.tmp", "w", stdout);
system(("TRANS -i \"" + str + "\" > WCH_TRANS.tmp").c_str());
freopen("CON", "w", stdout);
Sleep(500);
ifstream fin("WCH_TRANS.tmp");
fin >> info;
cout << info << endl;
DeleteFile("WCH_SYSTEM.tmp");
| After repeated inspection, I have found that freopen() can cause these encoding problems.
You can redirect the output of the command (NOT FREOPEN!!!) to a temporary file like this.
DO NOT USE FREOPEN LIKE THIS! THIS MAY CAUSE UNEXPECTED BEHAVIOR.
freopen("CON", mode, handle);
This is correct:
system(("TRANS -i \"" + str + "\" > WCH_TRANS.tmp").c_str());
Sleep(500);
ifstream fin("WCH_TRANS.tmp");
fin >> info;
cout << info << endl;
DeleteFile("WCH_TRANS.tmp");
I'm using TDM-GCC64 on Windows; maybe this issue only appears on this compiler.
|
71,782,254 | 71,782,608 | c++ completely generic event dispatcher | I try again to explain better again what I would achieve.
I would like make a thing like this (inspired to Unity's UnityEvent):
Public "variables" declared in some classes:
GameEvent<> OnEnemySpawn = GameEvent<>();
GameEvent<string> OnPlayerSpawn = GameEvent<string>();
GameEvent<string, float> OnEnemyDie = GameEvent<string, float>();
Referral where some other classes subscribe their methods:
...
enemySpawner.OnEnemySpawn.Subscribe(IncreaseEnemyAliveCountByOne);
...
playerSpawner.OnPlayerSpawn.Subscribe(NewPlayerSpawned);
...
enemy.OnEnemyDie.Subscribe(IncreasePlayerScore);
...
// Subscribed methods declaration
void IncreaseEnemyAliveCountByOne() { ... }
void NewPlayerSpawned(string playerName) { ... }
void IncreasePlayerScore(string playerName, float scoreToAdd) { ... }
And then GameEvent class would be able to notify the event happens:
...
OnEnemySpawn.Notify();
...
OnPlayerSpawn.Notify(newPlayer.PlayerName);
...
OnEnemyDie.Notify(playerKiller.PlayerName, scoreOnKill);
...
Actually, I achieved the declaration and subscription part creating this class:
template<class ... T>
class GameEvent
{
private:
std::vector<std::function<void(T...)>> _subscribers;
public:
void Subscribe(std::function<void(T...)> newSubscriber)
{
_subscribers.push_back(newSubscriber);
}
}
The thing that makes me crazy is how to implement the Notify method. How should I know how many parameters I received and which types they have?
void Notify(T...)
{
for (std::function<void(T...)> subscriber : _subscribers)
{
}
}
I hope now this is a valid question cause I'm losing my mind behind this
| What is wrong with the obvious way?
void Notify(T... args)
{
// note: no need to write the type if it's quite long
// note: & means the std::function isn't copied
for (auto const& subscriber : _subscribers)
{
subscriber(args...);
}
}
|
71,782,644 | 71,787,455 | Open GL Depth and Alpha issues | I am rendering two triangles in GL. The bottom two vertices of each have a set alpha value to make them transparent. I am using Depth testing and Alpha blending defined by the following calls
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0f, 1.0f);
glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
glEnable(GL_SAMPLE_ALPHA_TO_ONE);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.1f);
This is the result I get:
I expect to see the top of the second triangle through the transparent pixels of the first triangle however it appears to be cut off by the depth test. The background color also appears more transparent over the triangles (some blending issue?)
How can I render this as expected?
Edit:
If I disable writing to the depth buffer and draw the triangles in back-to-front order I get the following result. Notice how the background is more transparent over the triangle, and the white line at the tip of the second triangle. Some definite blending issues here.
| I did two things to solve this:
Disable writing to depth buffer and draw triangles based on distance from the camera.
Use glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA); to fix blending issues.
Below is the final result
|
71,782,754 | 71,784,214 | How to create an interface to allow for the construction of different nested derived classes in C++? | My goal is to construct a derived classes nested class from the interface. However the nested classes don't have the same constructors. The question is how can I make an interface to create two different "sub-nested" classes.
Constraints:
Cannot use Heap
Nested Classes' Methods cannot be called before it is constructed
C++ 17
ITest::INestedTest* MakeTest(ITest* test, ITest::Config config)
{
// Can't call directly because it's not on the interface i.e. test.InitializeNestedTest ...
// Only workable situation is this:
if (condition)
{
auto myTest = static_cast<Test2::Test*>(test);
int p = 2;
return myTest->InitalizeNestedTest(config, p);
// ERROR function returning abstract class not allowed
} else {
auto myTest = static_cast<Test1::Test*>(test);
return myTest->InitalizeNestedTest(config);
// ERROR function returning abstract class not allowed
}
}
This static cast didn't return what I wanted previously because I was returning a pointer to a locally defined variable, which was pointed out in the comments. How am I able to return a class from this since it's an abstract class? Do I need to cast it again or make multiple functions?
Test1::Test myTest;
auto myNestedTest = myTest.InitializeNestedTest(config);
I've thought of a few options but none of them seem right, or I'm not entirely sure how to implement them
Have an overloaded Virtual function for each type on the interface and then override them on the subclass (not sure if possible and doesn't seem like the right way to do it)
Extend the Config struct Test2 namespace so that it includes parameter p, so that they all have the same prototype and put it on the interface. (is it possible to "extend" the struct" from the interface?)
Maybe use a different type of cast, or do so in a different way?
I've included the definitions of my Interface and two subclasses for reference.
class ITest
{
//other things in ITest.hpp not relevant to question
public:
struct Config
{
int a;
bool enable;
};
class INestedTest
{
public:
virtual void Enable() const = 0;
virtual void Configure(Config const& config)
{
if(config.enable)
{
Enable();
}
}
};
};
namespace Test1
{
class Test : public ITest
{
public:
class NestedTest : public ITest::INestedTest
{
public:
NestedTest(Config const& config)
{
Configure(config);
}
void Enable() const override
{
//impl
}
}; // End NestedTest
NestedTest InitalizeNestedTest(Config const& config)
{
return NestedTest(config);
}
};
};
namespace Test2
{
class Test : public ITest
{
public:
class NestedTest : public ITest::INestedTest
{
public:
using Parameter = int;
NestedTest(ITest::Config const& config, Parameter p)
{
Configure(config);
}
void Enable() const override
{
//impl
}
}; // End NestedTest
NestedTest InitalizeNestedTest(Config const& config, NestedTest::Parameter p)
{
return NestedTest(config, p);
}
};
};
| Maybe you could make the objects static, so they have static storage duration (allocated for the whole program run, neither on the heap nor on the stack).
|
71,783,377 | 71,783,639 | What's the purpose of const swap() function? | While implementing a custom tuple (here), I found there is a weird swap() function that takes const parameters (cppreference):
template< class... Types >
constexpr void swap( const std::tuple<Types...>& lhs,
const std::tuple<Types...>& rhs ) noexcept(/* see below */);
and a const-qualified swap() member function (cppreference):
constexpr void swap( const tuple& other ) noexcept(/* see below */) const;
const means the object is read-only, but to swap two objects it has to modify them, which seems to violate the const-ness.
So, what's the purpose of the const swap() function?
| This was introduced in the "zip" proposal P2321 originally described in "A Plan for C++23 Ranges" P2214.
P2321
swap for const tuple and const pair. Once tuples of references are
made const-assignable, the default std::swap can be called for const
tuples of references. However, that triple-move swap does the wrong
thing:
int i = 1, j = 2;
const auto t1 = std::tie(i), t2 = std::tie(j);
// If std::swap(t1, t2); called the default triple-move std::swap then
// this would do
auto tmp = std::move(t1);
t1 = std::move(t2);
t2 = std::move(tmp);
// i == 2, j == 2
This paper therefore proposes adding overloads of swap for const
tuples and pairs to correctly perform element-wise swap.
P2214 explains why const assignability is needed for the implementation of zip. It stems from assignment operators not being ref qualified.
|
71,783,833 | 71,784,130 | QRegularExpression find and capture all quoted and non-quoted parts in string | I am fairly new to using regexes.
I got a string which can contain quoted and not quoted substrings.
Here are examples of how they could look:
"path/to/program.exe" -a -b -c
"path/to/program.exe" -a -b -c
path/to/program.exe "-a" "-b" "-c"
path/to/program.exe "-a" -b -c
My regex looks like this: (("[^"]*")|([^"\t ]+))+
With ("[^"]+") I attempt to find every quoted substring and capture it.
With ([^"\t ]+) I attempt to find every substring without quotes.
My code to test this behaviour looks like this:
QString toMatch = R"del( "path/to/program.exe" -a -b -c)del";
qDebug() << "String to Match against: " << toMatch << "\n";
QRegularExpression re(R"del((("[^"]+")|([^"\t ]+))+)del");
QRegularExpressionMatchIterator it = re.globalMatch(toMatch);
int i = 0;
while (it.hasNext())
{
QRegularExpressionMatch match = it.next();
qDebug() << "iteration: " << i << " captured: " << match.captured(i) << "\n";
i++;
}
Output:
String to Match against: " \"path/to/program.exe\" -a -b -c"
iteration: 0 captured: "\"path/to/program.exe\""
iteration: 1 captured: "-a"
iteration: 2 captured: ""
iteration: 3 captured: "-c"
Testing it in Regex101 shows me the result I want.
I also tested it on some other websites e.g this.
I guess I am doing something wrong, could anyone point in the right direction?
Thanks in advance.
| You assume that the groups you need to get value from will change their IDs with each new match, while, in fact, all the groups IDs are set in the pattern itself.
I suggest removing all groups and just extract the whole match value:
QString toMatch = R"del( "path/to/program.exe" -a -b -c)del";
qDebug() << "String to Match against: " << toMatch << "\n";
QRegularExpression re(R"del("[^"]+"|[^"\s]+)del");
QRegularExpressionMatchIterator it = re.globalMatch(toMatch);
while (it.hasNext())
{
QRegularExpressionMatch match = it.next();
qDebug() << " matched: " << match.captured(0) << "\n";
}
Note the "[^"]+"|[^"\s]+ pattern matches either
"[^"]+" - ", then one or more chars other than " and then a "
| - or
[^"\s]+ - one or more chars other than " and whitespace.
See the updated pattern demo.
|
71,784,137 | 71,784,178 | std::deque is contiguous memory container or not? | std::deque is contiguous memory container or not ?
The famous book Effective STL by Scott Meyers says the following:
Contiguous-memory containers (also known as array-based containers) store their elements in one or more (dynamically allocated) chunks of memory, each chunk holding more than one container element. If a new element is inserted or an existing element is erased, other elements in the same memory chunk have to be shifted up or down to make room for the new element or to fill the space formerly occupied by the erased element. This kind of movement affects both performance (see Items 5 and 14) and exception safety (as we'll soon see). The standard contiguous-memory containers are vector, string, and deque. The nonstandard rope is also a contiguous-memory container.
But you can find the opposite explanation on cppreference.com:
As opposed to std::vector, the elements of a deque are not stored contiguously: typical implementations use a sequence of individually allocated fixed-size arrays, with additional bookkeeping, which means indexed access to deque must perform two pointer dereferences, compared to vector's indexed access which performs only one.
Which one is true ?
| There is no conflict between the two quotes. Scott uses the term "contiguous container" in a sense a little wider than you might have seen it used elsewhere.
Scott writes (emphasize mine):
Contiguous-memory containers (also known as array-based containers] store their elements in one or more (dynamically allocated) chunks of memory, [...]
As the quote from cppref states, std::deque uses multiple arrays to store the elements. Within each array the elements are contiguous. Scott does not claim that all elements in a std::deque are contiguous in one chunk of memory.
|
71,784,144 | 71,784,346 | glReadPixel return fault result | I want to do the 3D-picking on OpenGl by the way of frame pick. On this way I need to use glReadPixel to get some information of the pixels on the screen currently, so I test it on the following way but get the wrong result.
First, I use the callback glfwSetCursorPosCallback(window, mouse_callback) and mouse_callback(GLFWwindow* window, double xpos, double ypos) to get the current mouse position (screen position) and the pixel's color (when I change the mouse's position), then use std::cout to print it.
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
uint32_t color;
glm::vec4 c;
GLfloat stencil;
glReadPixels(xpos, ypos, 1, 1,GL_RGBA, GL_UNSIGNED_BYTE, &color);
c.r = (color & 0xFF) / 255.0F;
c.g = ((color >> 8) & 0xFF) / 255.0F;
c.b = ((color >> 16) & 0xFF) / 255.0F;
c.a = ((color >> 24) & 0xFF) / 255.0F;
std::cout << "color[0]" << c.r<< std::endl;
std::cout << "color[1]" << c.g << std::endl;
std::cout << "color[2]" << c.b << std::endl;
std::cout << "color[3]" << c.a << std::endl;
}
But the problem is when I render the scene below and put my mouse on different parts of this scene, the result seems to be wrong. From the left-top region to the right-bottom region, the RGB should be (1.0,0.0,0.0), (0.6,0.0,0.4), (0.4,0.0,0.6), (0.2,0.0,0.8), (0.0,0.0,1.0). The pictures are below.
the return result is (0.4,0.0,0.6) but the right result should be (1.0,0.0,0.0)
the return result is (0.0,0.0,0.1) but the right result should be (0.4,0.0,0.6)
the return result is (0.8,0.0,0.2) but the right result should be (0.4,0.0,0.6)
I have tried the simple examples (draw a triangle) to test glReadPixel, and the result seems to be right. I do not know why it goes wrong with my scene. Could anyone give me some advice? Or how to do the frame picking? Thank you for your help!!
| OpenGL uses coordinates where the origin (0, 0) is the bottom-left of the window, and +Y is up. You have to convert to OpenGL's coordinate system when reading, since the cursor events use (0, 0) as the top-left of the window, and +Y is down.
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
int width, height;
glfwGetWindowSize(window, &width, &height);
int ixpos = (int)xpos;
int iypos = height - 1 - (int)ypos;
uint32_t color;
glReadPixels(ixpos, iypos, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, &color);
...
}
|
71,784,326 | 71,785,079 | Print recursive * pattern | I have trouble printing the * pattern.
They should have 2 functions: printStars() and printLines(). The first one, printStars(int n), is used to print a line of n stars; the second one, printLines(int m), is to print m pairs of lines.
I have completed printing the top half but I could not reverse the pattern for the second half. You can't add more functions: there are only the 2 functions printStars() and printLines(), and they must be recursive.
The requirement:
* * * *
* * *
* *
*
*
* *
* * *
* * * *
Here is what I have done:
* * * *
* * *
* *
*
Code:
void printStars(int n){
if (n < 1) return;
cout << "*";
printStars(n-1);
}
void printLines(int m){
if (m < 1) return;
printStars(m);
cout << "\n";
printLines(m-1);
}
int main(int argc, const char * argv[]) {
int n;
cout << "n = ";
cin >> n;
printLines(n);
}
Hint from the question: Think in the way that the whole picture is generated in the following pattern:
| Here is the small change you need:
void printLines(int m){
if (m < 1) return;
printStars(m);
cout << "\n";
printLines(m-1);
printStars(m); // just add these two lines to print line again after all internal ones are printed
cout << "\n";
}
|
71,784,364 | 71,796,163 | Qt 6.2.4 for QNX710 on Ubuntu 20.04 - Qt version is not properly installed | I tried to build Qt 6.2.4, installed via qt-unified-linux-x64-4.3.0-1-online.run on Ubuntu 20.04 LTS in a Virtual Box.
I installed Qt 6.2.4 in ~/Qt6 for Desktop gcc 64-bit and in source code.
QNX 7.1 is installed in ~/qnx710.
I sourced qnxsdp-env.sh:
$ . ~/qnx710/qnxsdp-env.sh
I added Qt6.2.4, Ninja and CMake to PATH:
$ export PATH=$PATH:~/Qt6/6.2.4/gcc_64/bin
$ export PATH=$PATH:~/Qt6/Tools/Ninja
$ export PATH=$PATH:~/Qt6/Tools/CMake/bin
Copied the qnx.cmake example from https://doc.qt.io/qt-6/building-qt-for-qnx.html#creating-a-toolchain-file-for-qnx
$ cat ~/cmake_support/toolchains/qnx.aarch64le.cmake
set(CMAKE_SYSTEM_NAME QNX)
set(arch gcc_ntoaarch64le)
set(CMAKE_C_COMPILER qcc)
set(CMAKE_C_COMPILER_TARGET ${arch})
set(CMAKE_CXX_COMPILER q++)
set(CMAKE_CXX_COMPILER_TARGET ${arch})
set(CMAKE_SYSROOT $ENV{QNX_TARGET})
Created a build directory and configured qt:
$ mkdir ~/Qt6/6.2.4/qnx_build
$ cd ~/Qt6/6.2.4/qnx_build
$ cmake -GNinja -DCMAKE_TOOLCHAIN_FILE=~/cmake_support/toolchains/qnx.aarch64le.cmake -DQT_HOST_PATH=~/Qt6/6.2.4/gcc_64 -DCMAKE_INSTALL_PREFIX=~/Qt6/6.2.4/qnx ../Src
Compile and install
$ cmake --build . --parallel && cmake --install .
This all went fine.
Then I tried to add the new Qt Version to Qt Creator, but this fails with Qt version is not properly installed, please run make install.
Details: Invalid Qt version.
And of course, once creating a kit, it fails and can't be used.
Any idea how to fix this?
| The Qt version was indeed not properly installed.
qmake expects a specific directory for target libraries
$ ~/Qt6/6.2.4/qnx/bin/qmake -v
QMake version 3.1
Using Qt version 6.2.4 in /home/werner/qnx710/target/qnx7/home/werner/Qt6/6.2.4/qnx/lib
So to fix QT Creator, I simply had to create a symbolic link:
$ cd ~/qnx710/target/qnx7
$ mkdir -p home/werner
$ ln -s ~/Qt6 .
|
71,784,465 | 71,785,634 | Run Eigen Parallel with OpenMPI | I am new to Eigen and am writing some simple code to test its performance. I am using a MacBook Pro with M1 Pro chip (I do not know whether the ARM architecture causes the problem). The code is a simple Laplace equation solver
#include <iostream>
#include "mpi.h"
#include "Eigen/Dense"
#include <chrono>
using namespace Eigen;
using namespace std;
const size_t num = 1000UL;
MatrixXd initilize(){
MatrixXd u = MatrixXd::Zero(num, num);
u(seq(1, fix<num-2>), seq(1, fix<num-2>)).setConstant(10);
return u;
}
void laplace(MatrixXd &u){
setNbThreads(8);
MatrixXd u_old = u;
u(seq(1,last-1),seq(1,last-1)) =
(( u_old(seq(0,last-2,fix<1>),seq(1,last-1,fix<1>)) + u_old(seq(2,last,fix<1>),seq(1,last-1,fix<1>)) +
u_old(seq(1,last-1,fix<1>),seq(0,last-2,fix<1>)) + u_old(seq(1,last-1,fix<1>),seq(2,last,fix<1>)) )*4.0 +
u_old(seq(0,last-2,fix<1>),seq(0,last-2,fix<1>)) + u_old(seq(0,last-2,fix<1>),seq(2,last,fix<1>)) +
u_old(seq(2,last,fix<1>),seq(0,last-2,fix<1>)) + u_old(seq(2,last,fix<1>),seq(2,last,fix<1>)) ) /20.0;
}
int main(int argc, const char * argv[]) {
initParallel();
setNbThreads(0);
cout << nbThreads() << endl;
MatrixXd u = initilize();
auto start = std::chrono::high_resolution_clock::now();
for (auto i=0UL; i<100; i++) {
laplace(u);
}
auto stop = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
// cout << u(seq(0, fix<10>), seq(0, fix<10>)) << endl;
cout << "Execution time (ms): " << duration.count() << endl;
return 0;
}
Compile with gcc and enable OpenMPI
james@MBP14 tests % g++-11 -fopenmp -O3 -I/usr/local/include -I/opt/homebrew/Cellar/open-mpi/4.1.3/include -o test4 test.cpp
Direct run the binary file
james@MBP14 tests % ./test4
8
Execution time (ms): 273
Run with mpirun and specify 8 threads
james@MBP14 tests % mpirun -np 8 test4
8
8
8
8
8
8
8
8
Execution time (ms): 348
Execution time (ms): 347
Execution time (ms): 353
Execution time (ms): 356
Execution time (ms): 350
Execution time (ms): 353
Execution time (ms): 357
Execution time (ms): 355
So obviously the matrix operation is not running in parallel, instead, every thread is running the same copy of the code.
What should be done to solve this problem? Do I have some misunderstanding about using OpenMPI?
| You are confusing OpenMPI with OpenMP.
The gcc flag -fopenmp enables OpenMP. It is one way to parallelize an application by using special #pragma omp statements in the code. The parallelization happens on a single CPU (or, to be precise, compute node, in case the compute node has multiple CPUs). This allows to employ all cores of that CPU. OpenMP cannot be used to parallelize an application over multiple compute nodes.
On the other hand, MPI (where OpenMPI is one particular implementation) can be used to parallelize a code over multiple compute nodes (i.e., roughly speaking, over multiple computers that are connected). It can also be used to parallelize some code over multiple cores on a single computer. So MPI is more general, but also much more difficult to use.
To use MPI, you need to call "special" functions and do the hard work of distributing data yourself. If you do not do this, calling an application with mpirun simply creates several identical processes (not threads!) that perform exactly the same computation. You have not parallelized your application, you just executed it 8 times.
There are no compiler flags that enable MPI. MPI is not built into any compiler. Rather, MPI is a standard and OpenMPI is one specific library that implements that standard. You should read a tutorial or book about MPI and OpenMPI (google turned up this one, for example).
Note: Usually, MPI libraries such as OpenMPI ship with executables/scripts (e.g. mpicc) that behave like compilers. But they are just thin wrappers around compilers such as gcc. These wrappers are used to automatically tell the actual compiler the include directories and libraries to link with. But again, the compilers themselves do not know anything about MPI.
|
71,784,571 | 71,784,913 | Getter returns a rvalue? | I know it's a bad attempt but why we cannot edit the private member via the getter?
class A
{
int a = 2;
public:
int GetA1() { return a; }
int& GetA2() { return a; }
int* GetA3() { return &a; }
};
int main() {
A test;
test.GetA1() = 4;
test.GetA2() = 4;
int b = test.GetA2();
int* pb = &b;
test.GetA3() = pb;
}
I think GetA1() doesn't work because it's using a rvalue, the copy by return. But then I'm a little surprised that GetA2() can work and at last really confused that GetA3() cannot work.
It seems that the value on member's address can be changed, but the address itself not. Does GetA3 returns a rvalue or an unmodifiable lvalue?
Thanks!
|
It seems that the value on member's address can be changed, but the address itself not.
Indeed. More generally, it isn't possible to change the address where any object is stored. This applies to all objects.
Does GetA3 returns a rvalue or an unmodifiable lvalue?
GetA3() is an rvalue, more specifically it is a prvalue (just like GetA1() is also a prvalue). The type of the prvalue is "pointer to int" (unlike the type of GetA1() which is just int).
|
71,784,778 | 71,793,829 | Why JNI GetByteArrayElements does not reserve pixel stride | I need to pass a YUV_420_8888 image from Android to C++ for processing. So, I take the image planes, convert them to ByteArray, then send them to C++ function.
val yBuffer: ByteBuffer = image.planes[0].buffer
val uBuffer: ByteBuffer = image.planes[1].buffer
val vBuffer: ByteBuffer = image.planes[2].buffer
val yByteArray = ByteArray(yBuffer.remaining())
val uByteArray = ByteArray(uBuffer.remaining())
val vByteArray = ByteArray(vBuffer.remaining())
yBuffer.get(yByteArray)
uBuffer.get(uByteArray)
vBuffer.get(vByteArray)
return NativeCppClass::process(yByteArray, uByteArray, vByteArray)
The y plane has pixel stride 1. The u and v planes have pixel stride 2. When I look at uByteArray and vByteArray, they are viewing the same memory block, with the v plane starting before the u plane. In particular, they look like this, for example:
vByteArray = [0, 1, 2, 3, 4, 5, 6]
uByteArray = [1, 2, 3, 4, 5, 6, 7]
Based on this, I expect we have this statement below. Let's call it (*) for easier reference:
uByteArray.begin - vByteArray.begin = 1; // begin is just a way to express the starting point of a byte array
I also have a ByteArray_JNI to convert ByteArray from Kotlin into a class called CppByteArray. They look like this:
class ByteArray_JNI {
public:
using CppType = CppByteArray;
using JniType = jbyteArray;
using Boxed = ByteArray_JNI;
static CppType toCpp(JNIEnv *jniEnv, JniType byteArray) {
return CppType{byteArray, jniEnv};
}
}
class CppByteArray {
public:
CppByteArray(jbyteArray data, JNIEnv *env) : array_(env, data) {
jboolean copied = false;
buffer_ = (uint8_t *)env->GetByteArrayElements(data, &copied);
// copied out param is false at this stage, so no copy
size_ = env->GetArrayLength(data);
}
const uint8_t* data() const {
return buffer_;
}
private:
djinni::GlobalRef<jbyteArray> array_;
uint8_t *buffer_ = nullptr;
jsize size_ = 0;
}
However, statement (*) above is not true inside C++:
class NativeCppClass {
public:
static CppByteArray process(CppByteArray &&y_array, CppByteArray &&u_array, CppByteArray &&v_array) {
auto u_begin = u_array.data();
auto v_begin = v_array.data();
// u_begin - v_begin = 462848 (not 1 as expected). My image has dimensions 1280x720, just in case it is related to the number 462848.
return something;
}
}
Why is u_begin - v_begin equal to 462848 and not 1? GetByteArrayElements does not perform a copy in this case: the output parameter copied is false after calling it.
| According to the ByteBuffer documentation: https://developer.android.com/reference/kotlin/java/nio/ByteBuffer#get_2, get copies the bytes into the destination array. In this case, yByteArray, uByteArray and vByteArray are separately allocated copies of the buffers, which is why the two pointers are not 1 byte apart.
|
71,785,247 | 71,785,515 | CUDA do deprecated functions still function in newer versions? | Does the function cudaGLRegisterBufferObject (deprecated after version 3.0) still work in newer versions (ie 6.X) ?
(I know that cudaGraphicsGLRegisterBuffer exists, however I'm doing some work on an old colleague's project and I don't know if a bug is caused by this, or something completely different.)
Thanks in advance.
| It should still work as of CUDA 11.6.2
At the moment, it is documented here, so it is still present/available and should still be usable.
Deprecated means that it may be dropped in a future CUDA version. When it is dropped, it is no longer usable (meaning you won't be able to compile code that uses that API, in that future CUDA version where the support was dropped.)
|
71,785,396 | 71,785,452 | Concepts: require a function on a type without default constructing that type | I need to require of some type A that there exists a function f(A, A::B).
I'm testing that by calling f with instances of A and A::B. Is there a less ostentatious way to test against an instance of the dependent type without requiring default constructible?
template <class Container>
concept CanPutData = requires (Container a)
{
//put(typename Container::data_type{}, a); // overconstrained
put(*reinterpret_cast<typename Container::data_type*>(0), a); // oof
};
void test(CanPutData auto container) {}
template<class Container>
void put(typename Container::data_type const& data, Container& into) {}
template<class Data>
struct data_container { using data_type = Data; };
struct not_default_constructible_data { int& v; };
int main()
{
test(data_container<not_default_constructible_data>{});
return 0;
}
| Same way you're already getting a Container without requiring that one to be default constructible: by just sticking it in the parameter list of the requires expression:
template <class Container>
concept CanPutData = requires (Container container, typename Container::data_type data)
{
put(data, container);
};
|
71,785,710 | 71,785,832 | Template class with dpointer | I am trying to use the dpointer pattern in a generic class that made use of template but I cant figure how to define it correctly.
template <class TNode, class TLink>
class Network
{
private:
template<class TNode, class TLink>
struct Impl<TNode,TLink>;
std::unique_ptr<Impl<TNode,TLink>> d_ptr; //d_pointer
};
How can I define the Impl class in the cpp file?
template<class TNode, class TLink>
struct Network<TNode,TLink>::Impl<TNode, TLink>
{
vector<TNode> nodes;
vector<TLink> links;
}
This doesn't work! It says that Impl is not a template error C3856.
| The correct way to do this would be to use different names for the template parameters of the nested class template Impl from the names of the template parameters already used for the containing class template Network, as shown below:
template <class TNode, class TLink>
class Network
{
private:
//-----------------v--------v--------------->use different names
template<class T, class P>
struct Impl;
std::unique_ptr<Impl<TNode,TLink>> d_ptr; //d_pointer
};
template<class TNode, class TLink>
//-------------v--------v------------------->use different names
template<class T, class P>
struct Network<TNode,TLink>::Impl
{
std::vector<TNode> nodes;
std::vector<TLink> links;
};
Also refer to Why can templates only be implemented in the header file?.
|
71,785,894 | 71,845,139 | GTest Automatic Mock Class | For some time my old project used gmock_gen.py to generate automatically mocked classes (this is an old project from http://code.google.com/p/cppclean/ that it seems inactive and it depends on python2 that we don't want)
My question:
Is there anything on gtest environment that does the same as gmock_gen.py and supports python3, or what is the alternative to gmock_gen.py if we don't have or don't want to use python2?
Best regards,
Nuno
| It seems that the conversion to python3 is very simple.
You only need to do two things and only one is required (step 2.):
you can use the python tool 2to3 to convert the code from python 2 code into python 3 code (optional)
change only one line to prevent an exception during execution of the script:
gmock_gtest/generator/cpp/ast.py:908
change from:
def _GetNextToken(self):
if self.token_queue:
return self.token_queue.pop()
return next(self.tokens)
to
def _GetNextToken(self):
if self.token_queue:
return self.token_queue.pop()
return next(self.tokens, None)
and that will work.
|
71,786,247 | 71,786,348 | Uncertainties about OOP and the struct Keyword | class NodeType {
public:
int info;
NodeType* link;
};
I came across this when learning about linked lists. As a beginner, my reading is that at line 4 the pointer link is an object of class NodeType, but this interpretation is definitely wrong, so can somebody please explain what this line means? I don't recall encountering this when learning the concepts of OOP.
struct NodeType
{
int info;
struct NodeType* link;
};
I take it that this structure declaration is the same as the class declared above, so my second question is: why is there a second struct keyword at line 4? Can the keyword be removed? Is this the phenomenon called nested struct?
| Yes, the two snippets are the same.
why is there a second struct keyword at line 4?
It's called an elaborated type specifier (a type with struct prepended to it, or class/union/enum; the definition class NodeType {} doesn't count as one).
It's useless here and can be removed. It's only useful when a struct is mentioned for the first time, so the compiler doesn't know it's a struct yet.
In this regard C++ is different from C, where you must prepend struct every time to refer to a struct.
[is] pointer link is an object of class NodeType?
No, an object of class NodeType would be NodeType link;, but then it wouldn't be a pointer.
You could say that link is an object of type NodeType * (a pointer to NodeType).
|
71,786,521 | 71,786,522 | Can't open dsw file in Visual Studio C++ 6.0 | When I try to "Open Workspace" for my project, Visual Studio does nothing and the Solution Explorer is empty.
Also when I try to open my project, I occasionally see this error:
| The problem was that my dsp/dsw files had LF line endings. You can check your line endings in your code editor or using this git command:
git ls-files --eol
After converting the dsw/dsp files to CRLF, I was able to open the project.
You can convert the line endings to CRLF on Unix using this command:
unix2dos YourFileName.dsw
|
71,786,975 | 71,787,018 | How can I seamlessly and discretely communicate new URI launch parameters to a currently running application in Windows? | Case: Click a URL in the browser and a video game that is currently running on user's desktop can ingest that data and do something.
I've been working on this for some time, but I don't know if I'm on the right path.
What I currently have:
A clickable URI in a webpage that can have different arguments for the client to recieve.
The URI scheme is registered in Windows. When clicking URI in the browser it will launch a c++ console 'launcher' or 'bridge' app that is already installed on the user's PC.
This 'launcher' is a middle-man that parses the URI arguments and communicates them to the main 'user' app (a video game) via IPC named pipes.
How do I:
In terms of UX, make the whole process discreet and seamless?
Specifically, I need to:
Keep launcher hidden from the user - no pop-up.
Needs to be only capable of running a single instance, even when invoked with new URI parameters. Do I just exit the current and create a new one?
User can click another URI in the webpage and the launcher will process the new arguments without displaying itself to the user.
Tech constraints:
Windows.
Preferably needs to be C++. C# could be a possibility.
Existing example:
Zoom conferencing software appears to work similarly to this.
Clicking a URL in the browser launches a 'launcher' which starts the main app to video conference.
Closing that 'launcher' will minimize it into the system tray.
Clicking on a new link while the launcher is running does not start a new launcher, but it does start a new meeting.
How does something like this normally work?
| The OS automatically creates a console for /SUBSYSTEM:CONSOLE apps. It doesn't automatically create a window for /SUBSYSTEM:WINDOWS. So use /SUBSYSTEM:WINDOWS.
Then, create the named pipe before creating the main window.
If the return code tells you a new pipe was created (use FILE_FLAG_FIRST_PIPE_INSTANCE), you're the primary instance, create main window and run normally, with an OVERLAPPED read on the named pipe to receive data from future invocations.
If instead you opened an existing named pipe, write your command line through the named pipe and exit without ever creating a window.
You don't need a separate launcher at all, and actually separating the launcher from the main application creates a race condition (a second launcher starts before the first instance managed to launch the main program / before the main program is up and running, doesn't see an existing named pipe, and thinks it is the primary copy). You are better off putting both sides of the argument-forwarding logic into the main executable.
|
71,787,017 | 71,787,173 | std::unordered_multiset exception iterating bucket | My test case is the one shown below:
std::size_t t(const int &i) { return i | 0b01010101010101010101010101010101; }
int main()
{
std::unordered_multiset<int, decltype(&t)> um(100, t);
um.insert(9872934);
um.insert(9024582);
um.insert(2589429);
um.insert(2254009);
um.insert(3254082);
um.insert(3945820);
um.insert(8347893);
auto intf = t(9872934);
for (auto cb = um.begin(intf), end = um.end(intf); cb != end; ++cb)
{
std::cout << *cb;
}
};
Debugging with Microsoft Visual Studio Community 2022 v17.1.2, an exception is thrown constructing the iterator; first I thought that the hash function (t) could be the one to blame, so I've tried this:
std::unordered_multiset<int> um; // no custom hash, just multiset of integers...
um.insert(9872934);
um.insert(9024582);
um.insert(2589429);
um.insert(2254009);
um.insert(3254082);
um.insert(3945820);
um.insert(8347893);
auto intf = t(9872934);
for (auto cb = um.begin(intf), end = um.end(intf); cb != end; ++cb)
{
std::cout << *cb;
}
But it behaves the same way, even in online compilers (check it out). What am I missing? How should I make this work?
| The argument to the begin(bucket) function is the bucket number, not the key.
You need to use bucket to get the bucket number that corresponds to the key:
auto intf = um.bucket(t(9872934)); <<<====
for (auto cb = um.begin(intf), end = um.end(intf); cb != end; ++cb)
{
std::cout << *cb;
}
|
71,787,025 | 71,787,745 | Different order of adding same type objects causes an error | Could anyone explain why adding three objects like this a+(a+a) causes problems while a+a+a and (a+a)+a does not? The Foo class has one attribute num. Adding two Foo objects returns one with sum of their num values. Here is my code.
main.cpp
#include <iostream>
#include "Foo.h"
using namespace std;
int main()
{
Foo a(4), b;
b = a + a + a; // works fine
cout << a.getNum() << " " << b.getNum() << endl; // outputs "4 12" as it should
b = a + (a + a); // this one causes an error
cout << a.getNum() << " " << b.getNum() << endl;
return 0;
}
Foo.h
#ifndef FOO_H
#define FOO_H
class Foo
{
public:
Foo();
Foo(int a);
Foo operator+(Foo& other);
int getNum();
protected:
private:
int num;
};
#endif // FOO_H
Foo.cpp
Foo::Foo()
{
num = 0;
}
Foo::Foo(int a)
{
num = a;
}
Foo Foo::operator+(Foo& other)
{
Foo tmp = (*this);
tmp.num += other.num;
return tmp;
}
int Foo::getNum()
{
return num;
}
The error message says error: no match for 'operator+' (operand types are 'Foo' and 'Foo') even though the + operator is overloaded for Foo-type operands.
| The expression (a + a) yields a prvalue of type Foo. The only reference parameter types such a value can bind to are Foo&& or Foo const&. The reason why (a + a) + a (or the equivalent without brackets) works is that non-const member functions can be invoked on prvalues.
I recommend going with the const version in this case, but you should also mark the operator const:
class Foo
{
...
Foo operator+(Foo const& other) const;
};
Foo Foo::operator+(Foo const& other) const
{
return num + other.num;
}
Probably preferable would be to implement the operator at namespace scope, which would allow you to apply the implicit conversion from int to Foo on both sides of +:
class Foo
{
...
friend Foo operator+(Foo const& s1, Foo const& s2)
{
return s1.num + s2.num;
}
...
};
Using this implementation not only
Foo c = a + 1;
works, but also
Foo d = 1 + a;
|
71,787,027 | 71,787,351 | Requirements on returned type that may have some member functions SFINAE'd away in the function's translation unit? | Refining from Why is the destructor implicitly called?
My understanding of calling convention is that functions construct their result where the caller asked them to (or in a conventional place?). With that in mind, this surprises me:
#include <memory>
struct X; // Incomplete type.
// Placement-new a null unique_ptr in-place:
void constructAt(std::unique_ptr<X>* ptr) { new (&ptr) std::unique_ptr<X>{nullptr}; }
// Return a null unique_ptr:
std::unique_ptr<X> foo() { return std::unique_ptr<X>{nullptr}; }
https://godbolt.org/z/rqb1fKq3x
Whereas constructAt compiles, happily placement-newing a null unique_ptr<X>, foo() doesn't compile because the compiler wants to instantiate unique_ptr<X>::~unique_ptr(). I understand why it can't instantiate that destructor (because as far as the language is concerned, it needs to follow the non-nullptr branch of the d'tor that then deletes the memory [https://stackoverflow.com/questions/28521950/why-does-unique-ptrtunique-ptr-need-the-definition-of-t]). Basically without a complete X, the unique_ptr's destructor is SFINAE'd away (right?). But why does a function returning a value have to know how to destruct that value? Isn't the caller the one that will have to destruct it?
Clearly my constructAt and foo functions aren't morally equivalent. Is this language pedantry, or is there some code path (exceptions?) where foo() would have to destruct that value?
| In your specific case there is no way that the destructor may be invoked. However, the standard specifies the situations in which the destructor is potentially invoked in more general terms. If a destructor is potentially invoked it requires a definition (even if there is no path that could call it) and this will therefore cause implicit instantiation which fails in your case since instantiation of the std::unique_ptr<X> destructor requires X to be complete.
In particular the destructor is potentially invoked for every result object in a return statement.
I think the reason for this choice is described in CWG issue 2176: In general there may be local variables and temporaries in the function which are destroyed after the result object of the return statement has been constructed. But if the destruction of one of these objects throws an exception, then the already constructed result object should also be destroyed. This requires the destructor to be defined.
CWG issue 2426 then made the destructor potentially invoked even if there is no actual invocation due to the above reasoning, in line with implementations. I assume this choice was made simply because it doesn't require any additional decision making on the compiler's part and was already implemented.
|
71,787,182 | 71,787,570 | Case of using OpenMP for multi-threading of a matrix factorization calculation of an existing serial code | I came across a code that uses a low-performance series for loop for calculating so-called "Crout factorization". If you need to know more I think the concept is similar to what is described here.
At first glance the code is a for loop that can simply become parallel by an omp directive:
SparseMtrx *Skyline :: factorized()
{
// Returns the receiver in U(transp).D.U Crout factorization form.
...
// this loop is very expensive and only uses single thread:
for ( int k = 2; k <= n; k++ ) {
int ack = adr.at(k);
int ack1 = adr.at(k + 1);
int acrk = k - ( ack1 - ack ) + 1;
for ( int i = acrk + 1; i < k; i++ ) {
int aci = adr.at(i);
int aci1 = adr.at(i + 1);
int acri = i - ( aci1 - aci ) + 1;
int ac;
if ( acri < acrk ) {
ac = acrk;
} else {
ac = acri;
}
int acj = k - ac + ack;
int acj1 = k - i + ack;
int acs = i - ac + aci;
double s = 0.0;
for ( int j = acj; j > acj1; j-- ) {
s += mtrx [ j ] * mtrx [ acs ];
acs--;
}
mtrx [ acj1 ] -= s; //mtrx here is the matrix values (shared)
}
double s = 0.0;
for ( int i = ack1 - 1; i > ack; i-- ) {
double g = mtrx [ i ];
int acs = adr.at(acrk);
acrk++;
mtrx [ i ] /= mtrx [ acs ];
s += mtrx [ i ] * g;
}
mtrx [ ack ] -= s;
}
...
But the problem is that each step seems to use the updated values of the matrix, mtrx, to calculate the next factorized values.
My question is: is there any way to use multi-threading to do this calculation?
The complete code is available on GitHub via this link.
| The Crout factorization is one variant of Gaussian elimination. You can characterize such algorithms by 1. their end-product 2. how they go about it.
The Crout factorization computes LU where the diagonal of U is identity. Other factorizations normalize the diagonal of L, or they compute LDU with both L,U are normalized, or they compute LU with the diagonals of L & U equal. All that is beside the point for parallelization.
Next you can characterize Gaussian Elimination algorithms by how they do the computation. These are all mathematically equivalent re-organizations of the basic triply nested loop. Since they are mathematically equivalent, you can pick one or the other for favorable computational properties.
To get one thing out of the way: Gaussian Elimination has an intrinsically sequential component in the computation of the pivots, so you can not make it fully parallel. (Ok, some mumbling about determinants, but that's not realistic.) The outer loop is sequential with a number of steps equal to the matrix size. The question is whether you can make the inner loops parallel.
The Crout algorithm can be characterized as that in step "k" of the factorization it computes row "k" of U, and column "k" of L. Since the elements in both are not recursively related, these updates can be done in a parallel loop, which gives you a loop on "k" that is sequential, and for each k value two single loops that are parallel. That leaves the 3rd loop level: that one comes from the fact that each of the independent iterations involves a sum, therefore a reduction.
This does not sound great for parallelism: you can not collapse two loops if the inner is a reduction, so you have to choose which level to parallelize. Maybe you should use a different formulation of Gaussian Elimination. For instance, the normal algorithm is based on "rank-k updates", and those are very parallel. That algorithm has a single sequential loop, with in each step a collapse(2) parallel loop.
|
71,787,222 | 71,791,371 | Error with failing to load plugin in gazebo because of undefined symbol | When I run "roslaunch" there is the error:
[Err] [Plugin.hh:178] Failed to load plugin libmodel_push.so: /Robosub_Simulation/devel/lib/libmodel_push.so: undefined symbol: _ZN9ModelPush14SetJointStatesERKN5boost10shared_ptrIKN11sensor_msgs11JointState_ISaIvEEEEE
Can you guys please assist us with how to solve this error?
This probably has to do with including the library
#include <sensor_msgs/JointState.h>
and using it in the functions for the plugin called model_push.cpp:
void ModelPush::addSubscribeForce()
{
// ros::init("talker");
ros::NodeHandle* rosnode = new ros::NodeHandle();
ros::SubscribeOptions jointStatesSo = ros::SubscribeOptions::create<sensor_msgs::JointState>("/test", 1, SetJointStates,ros::VoidPtr(), rosnode->getCallbackQueue());
ros::Subscriber subJointState = rosnode->subscribe(jointStatesSo);
ros::spin();
}
static void SetJointStates(const sensor_msgs::JointState::ConstPtr &_js)
{
static ros::Time startTime = ros::Time::now();
{
std::cout<<"AYo"<<std::endl;
}
}
Lastly because this is a linker error here is the CMakeLists.txt that is in the plugin folder:
add_library(model_push SHARED model_push_plugin.cpp model_push.cpp)
add_dependencies(model_push ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
target_link_libraries(model_push ${catkin_LIBRARIES} ${GAZEBO_LIBRARIES} ${Boost_LIBRARIES})
I have already tried using QT5_WRAP_CPP instead for adding the dependencies (from https://answers.gazebosim.org/question/25672/gui-plugin-linker-error-undefined-symbol/), however that leads to the same error. Also, for some context, this code (please see the repository https://github.com/GU-RoboSub-Machine-Learning/Robosub_Simulation/tree/ros_listener) is meant to subscribe to a node called "test" and is based off this tutorial for subscribing to joint commands: http://gazebosim.org/tutorials?tut=drcsim_ros_cmds&cat=drcsim. Can you guys please provide suggestions on how to implement sensor_msgs/JointState correctly to not have the error, as shown in the beginning of this post?
| As @Tsyvarev recommended I put "ModelPush::" before the SetJointStates function declaration. So it looks like this below:
void ModelPush::SetJointStates(const sensor_msgs::JointState::ConstPtr &_js)
{
static ros::Time startTime = ros::Time::now();
{
std::cout<<"AYo"<<std::endl;
}
}
This got rid of the error; however, another error popped up that is related to the implementation of a gazebo subscriber in the code. That separate error will probably be in another Stack Overflow post.
Side note: there was an error when using static for this; notice how I do not mark the function definition as static. However, I do declare the function in model_push.h as static, as shown here:
static void SetJointStates(const sensor_msgs::JointState::ConstPtr);
|
71,787,300 | 71,787,493 | variable declaration in while loop | using namespace std;
class Counter
{
int val;
public:
Counter() { val = 0; }
int Next() { return ++val; }
};
int main()
{
Counter counter;
while (int c = counter.Next() <= 5)
{
cout << c << endl;
}
}
When run, the program prints out 1 5 times. Thus, .Next() is returning the expected value as the while loop does terminate.
Why does the value of c remain set to the initial value?
I'm very new to C++ (this is the first C++ program I've written) and am just trying to understand how the return value of .Next() could be as expected and evaluated but not captured in the c variable.
| The <= operator has a higher precedence than the = operator.
So, with
while (int c = counter.Next() <= 5)
The compiler interprets it as:
while (int c = (counter.Next() <= 5))
which assigns the result of the logical expression to c, which will be 1 while the expression holds true.
Try this instead:
int c;
while ((c = counter.Next()) <= 5)
|
71,787,305 | 71,795,546 | Sorting a 2-d vector but getting an unexpected output | I am trying to sort a 2 D vector and I am getting the desired output for input 'n' less than 15 but above that it is not arranged in the order that I want. If all the first column values are 0 then the second column must have increasing ordered values.
#include <bits/stdc++.h>
using namespace std;
bool sortcol(const vector<long long int>& v1, const vector<long long int>& v2)
{
return v1[0] < v2[0];
}
int main()
{
int n;
cin >> n;
vector<vector<long long int>> arr(n,vector<long long int> (2));
for (int i = 0; i < n; i++)
{
arr[i][0] = 0;
arr[i][1] = i;
}
sort(arr.begin(), arr.end(),sortcol);
for(int i = 0;i<n;i++){
cout << i << " - " << arr[i][0] << " , " << arr[i][1] << endl;
}
}
Output I want to be like :-
15 0
0 - 0 , 0
1 - 0 , 1
2 - 0 , 2
3 - 0 , 3
4 - 0 , 4
5 - 0 , 5
6 - 0 , 6
7 - 0 , 7
8 - 0 , 8
9 - 0 , 9
10 - 0 , 10
11 - 0 , 11
12 - 0 , 12
13 - 0 , 13
14 - 0 , 14
But what I am getting is :-
50 0
0 - 0 , 38
1 - 0 , 26
2 - 0 , 27
3 - 0 , 28
4 - 0 , 29
5 - 0 , 30
6 - 0 , 31
7 - 0 , 32
8 - 0 , 33
9 - 0 , 34
10 - 0 , 35
11 - 0 , 36
12 - 0 , 37
13 - 0 , 25
14 - 0 , 39
15 - 0 , 40
16 - 0 , 41
17 - 0 , 42
18 - 0 , 43
19 - 0 , 44
20 - 0 , 45
21 - 0 , 46
22 - 0 , 47
23 - 0 , 48
24 - 0 , 49
25 - 0 , 13
26 - 0 , 1
27 - 0 , 2
28 - 0 , 3
29 - 0 , 4
30 - 0 , 5
31 - 0 , 6
32 - 0 , 7
33 - 0 , 8
34 - 0 , 9
35 - 0 , 10
36 - 0 , 11
37 - 0 , 12
38 - 0 , 0
39 - 0 , 14
40 - 0 , 15
41 - 0 , 16
42 - 0 , 17
43 - 0 , 18
44 - 0 , 19
45 - 0 , 20
46 - 0 , 21
47 - 0 , 22
48 - 0 , 23
49 - 0 , 24
I am running this code on VS code
| As others have also noted in the comments, your sortcol() always returns false because v1[0] and v2[0] are always 0. Since the predicate sortcol() tells the sorting algorithm which elements are considered to be "smaller"/"less" than other elements, no element is considered smaller than another one. This implies that all elements are considered to be equal: If a<b is false and b<a is false, this implies a==b is true. In other words, the STL sorting algorithms assume a strict weak ordering, compare e.g. this post and this one.
So all your elements are considered to be equal by the sorting algorithm. The order of elements considered to be equal is implementation defined for std::sort(). Quote for std::sort:
The order of equal elements is not guaranteed to be preserved.
Hence, in your case the implementation is free to change the order of all elements as it sees fit, since all elements are considered to be equal. In practice, std::sort() switches to a different algorithm once the input reaches a certain size. For libstdc++ (the STL of gcc), this happens for n>16 (see the constant _S_threshold in the source code). That is the reason why you see a jump in behavior for n>16 with std::sort().
Other implementations of the STL might use other thresholds (e.g., the Microsoft STL seems to use a value of 32).
On the other hand, std::stable_sort() guarantees that the order of equal elements remains the same. Quote for std::stable_sort:
The order of equivalent elements is guaranteed to be preserved.
Of course, preserving the order is not free, and hence std::stable_sort can be slower.
So, if your sortcol() is really the predicate you want (although, in the example, it does not really make much sense), using std::stable_sort() is the solution you are looking for.
|
71,787,514 | 71,787,515 | How to use MPE for MPI c++ project? | MPE is very useful for visualizing MPI programs, but it only provides compiler wrappers for C and Fortran: mpecc and mpef77, respectively. How can I use it if my MPI project is written in C++ and is normally compiled with mpic++, not mpicc (and so it can't be compiled with mpecc)?
How do I setup (1) the MPE library itself and (2) my C++ project?
| The answer is actually quite simple, nonetheless I struggled with it for days.
The steps shown below work with MPICH 3.3.2 on Linux (Ubuntu 18) run under WSL2 - some adjustments may be necessary in different environments.
MPE library setup for c++
You setup the MPE library normally, the same way you would for a C project - the necessary steps are:
Download and extract the latest MPE archive (I've used MPE 2-1.4.9 from here)
Navigate to extracted directory:
cd mpe2-2.4.9b
Configure library's build process - in my case the following command worked:
./configure MPI_CC=mpicc MPI_F77=mpif77 prefix=$HOME/installs/mpe2 MPI_CFLAGS=-pthread MPI_FFLAGS=-pthread
Explanation:
MPE is written in C, so we use mpicc to compile it - we do not (yet) specify how to build our project, so we do not use mpic++. If we use mpic++ as MPI_CC, the MPE library won't compile.
specifying Fortran flags isn't strictly necessary, but this way we avoid unnecessary errors in compilation output
prefix (installation path) is an arbitrary path of your choice, just remember what you have inserted here as it will be necessary in further steps
I had to provide manual linkage of the pthread library - this may/may not be necessary depending on your system
Compile the MPE library:
make
Install the compiled library:
make install
Using MPE in c++ project
Since we cannot use a predefined compiler wrapper mpecc to compile c++, we have to link the necessary libraries manually, as we would do with any other library.
Suppose we have a file main.cpp with the following content:
#include <mpi.h>
#include <mpe.h>
/* other necessary includes */
int main(int argc, char** argv) {
MPI_Init(&argc, &argv);
MPE_Init_log();
/* some MPI and MPE stuff */
MPE_Finish_log("test");
MPI_Finalize();
}
The specific command which allows to build a c++ file with MPI and MPE calls is:
mpic++ main.cpp -o main -I/$HOME/installs/mpe2/include -L/$HOME/installs/mpe2/lib -lmpe -pthread
Explanation:
We use mpic++ to link all the MPI items automatically
$HOME/installs/mpe2 is an arbitrary installation path you've specified when configuring the MPE library
-I flag tells the compiler where to look for header files (the ones we #include)
-L flag tells the compiler where to find the compiled library items (implementation of functions defined in included header files)
-l flag tells the compiler to actually link our executable with the specific library (which can be found thanks to us specifying the search location with the -L flag)
I had to link pthread manually for MPE to work, but that may depend on your system
If you use cmake for building your project, the following CMakeLists.txt should work:
cmake_minimum_required(VERSION 3.5) # not necessarily 3.5, it's just my setup
project(mpi_test) # arbitrary project name
find_package(MPI REQUIRED)
set(CMAKE_CXX_COMPILER mpic++)
include_directories($ENV{HOME}/installs/mpe2/include) # again, it's the MPE installation path
link_directories($ENV{HOME}/installs/mpe2/lib) # again, it's the MPE installation path
add_executable(main_exp src/main_exp.cpp)
target_link_libraries(main_exp mpe pthread) # again, pthread may/may not be neccessary
|
71,788,085 | 71,804,305 | CMake find_package for another library in same project | I want to make a builder project that checks out sub-modules and builds them as a group, and I would like to build them in a single pass.
builder
submod1
submod2 #depends on submod1
submod3 #depends on submod2
For testing I downloaded ZeroMQ and cppzmq as submodules and built both with the cppzmq/demo to confirm they are linkable. I chose them because cppzmq checks for libzmq as a target but the demo only links with a find_package.
cmake_minimum_required(VERSION 3.10)
project(ZMQ_builder)
option(BUILD_TESTS "No tests" OFF)
option(CPPZMQ_BUILD_TESTS "No tests" OFF)
option(BUILD_STATIC "ninja can only build static or shared" OFF)
option(BUILD_SHARED "ninja can only build static or shared" ON)
add_subdirectory(libzmq)
set_property(TARGET libzmq PROPERTY CXX_STANDARD 17)
if (NOT TARGET libzmq AND NOT TARGET libzmq-static)
message(WARNING "libzmq and libzmq-static don't exist after being created")
endif()
if(NOT TARGET ZeroMQ)
message(status "ZeroMQ needs an alias") #prints this
add_library(ZeroMQ ALIAS libzmq)
endif()
if(NOT TARGET ZeroMQ)
message(WARNING "ZeroMQ Target Still doesn't exist")
endif()
if(NOT ZeroMQ_FOUND)
message(WARNING "ZeroMQ marked as not found") #prints this warning
find_package(ZeroMQ)
#set(ZeroMQ_FOUND true)
endif()
add_subdirectory(cppzmq)
find_package(cppzmq) #dies here
add_subdirectory(cppzmq/demo)
which outputs
ZeroMQ marked as not found .../builder\CMakeLists.txt 28
CMake Warning at ...\builder\CMakeLists.txt:29 (find_package):
By not providing "FindZeroMQ.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "ZeroMQ", but
CMake did not find one.
Could not find a package configuration file provided by "ZeroMQ" with any
of the following names:
ZeroMQConfig.cmake
zeromq-config.cmake
Add the installation prefix of "ZeroMQ" to CMAKE_PREFIX_PATH or set
"ZeroMQ_DIR" to a directory containing one of the above files. If "ZeroMQ"
provides a separate development package or SDK, be sure it has been
installed. ...\builder\CMakeLists.txt 29
ZeroMQ was NOT found! .../builder/build/cppzmq/cppzmqConfig.cmake 53
So the find call ignored the ZeroMQ alias Target (add_library(ZeroMQ ALIAS libzmq)) and also missed ".../builder/build/libzmq/ZeroMQConfig.cmake" despite finding "../builder/build/cppzmq/cppzmqConfig.cmake"
I don't understand why find_package is ignoring the ZeroMQ target that's already in the build, and also why the build can find cppzmqConfig.cmake but not ZeroMQConfig.cmake, despite both existing.
| find_package() doesn't actually look at the global targets list; it keeps its own record of found packages, which only prevents other find_package() calls from re-fetching.
Instead of creating an alias target, call find_package_handle_standard_args in a Find<name>.cmake module for each package.
So for this case, a Findcppzmq.cmake:
include(FindPackageHandleStandardArgs)
find_package(ZeroMQ REQUIRED)
if(TARGET cppzmq)
find_package_handle_standard_args(cppzmq
REQUIRED_VARS cppzmq_BINARY_DIR)
endif()
and a FindZeroMQ.cmake
include(FindPackageHandleStandardArgs)
if(TARGET ZeroMQ)
find_package_handle_standard_args(ZeroMQ
REQUIRED_VARS ZeroMQ_BINARY_DIR)
endif()
Alternatively, if all the repos in the builder project are separate, you can instead use ExternalProject_Add so that projects will be generated in order at build time, effectively making a package.
cmake_minimum_required(VERSION 3.16)
project(ExternalBuilder)
include(ExternalProject)
set(CMAKE_INSTALL_PREFIX ${CMAKE_CURRENT_LIST_DIR}/install)
set(CMAKE_MODULE_PATH ${CMAKE_INSTALL_PREFIX})
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
ExternalProject_Add(
ZeroMQ
GIT_REPOSITORY https://github.com/zeromq/libzmq.git
GIT_TAG v4.3.2
CMAKE_ARGS -DCMAKE_MODULE_PATH:PATH=${CMAKE_INSTALL_PREFIX} -DCMAKE_INSTALL_PREFIX:PATH=${CMAKE_INSTALL_PREFIX} -DBUILD_TESTS=OFF -DBUILD_STATIC=OFF
)
ExternalProject_Add(
cppzmq
DEPENDS ZeroMQ
GIT_REPOSITORY https://github.com/zeromq/cppzmq.git
GIT_TAG v4.7.1
CMAKE_ARGS -DCMAKE_MODULE_PATH:PATH=${CMAKE_INSTALL_PREFIX} -DCMAKE_INSTALL_PREFIX:PATH=${CMAKE_INSTALL_PREFIX} -DCPPZMQ_BUILD_TESTS=OFF
)
ExternalProject_Add(
zmqProject
DEPENDS ZeroMQ
...
)
|
71,788,243 | 71,788,388 | How to specialize a type trait using concepts? | I am trying to use C++ concepts in order to write a type trait that will produce a different type depending on whether its template argument is a fundamental type or not:
template<typename T>
concept fundamental = std::is_fundamental_v<T>;
template<typename T>
concept non_fundamental = !std::is_fundamental_v<T>;
The following code works just as expected:
void Print(fundamental auto value)
{
std::cout << "fundamental\n";
}
void Print(non_fundamental auto value)
{
std::cout << "non fundamental\n";
}
int main()
{
Print(1); // prints "fundamental"
Print(std::string("str")); // prints "non fundamental"
}
Applying the same idea to type traits doesn't work.
template<fundamental T>
struct SomeTypeTrait
{
using type = T;
};
template<non_fundamental T>
struct SomeTypeTrait
{
using type = std::shared_ptr<T>;
};
using ExpectedToBeDouble = SomeTypeTrait<double>::type;
using ExpectedToBeSharedPtrOfString = SomeTypeTrait<std::string>::type; // fails to compile
I get a compiler error (MSVC) saying:
error C3855: 'SomeTypeTrait': template parameter 'T' is incompatible with the declaration
How can I achieve the desired behavior using concepts?
| Apparently the syntax is slightly different from what I had in mind.
Here is a working solution:
template<typename T>
struct SomeTypeTrait {};
template<fundamental T>
struct SomeTypeTrait<T> // note the extra <T>
{
using type = T;
};
template<non_fundamental T>
struct SomeTypeTrait<T> // note the extra <T>
{
using type = std::shared_ptr<T>;
};
Also, one of the specializations could become the default implementation, which makes the code a bit shorter and allows more specializations to be added later:
template<typename T>
struct SomeTypeTrait // default
{
using type = std::shared_ptr<T>;
};
template<fundamental T>
struct SomeTypeTrait<T> // specialization for fundamental types
{
using type = T;
};
|
71,788,310 | 71,789,025 | pass a list as an argument from python to C++: Segmentation fault (core dumped) when running the second time | I am coding a self-defined python module embedding with C++.
test.py
import my_module
column_names = ['ukey', 'OrderRef', 'orderSize']
print(my_module.my_func(column_names))
my_module.cpp (partial)
static PyObject * my_func(PyObject *self, PyObject *args)
{
Py_Initialize();
if(!Py_IsInitialized()) { std::cout<<"PythonInit failed!"<<std::endl; }
PyObject *_list = nullptr;
int len;
std::vector<std::string> c_colArray;
if (!PyArg_ParseTuple(args, "O", &_list)) {
PyErr_SetString(PyExc_TypeError, "parameter type error.");
return NULL;
}
len = PyList_Size(_list);
PyObject * _item = nullptr;
const char * _line; /* pointer to the line as a string */
for (int i = 0; i < len; i++) {
_item = PyList_GetItem(_list, i);
_line = PyUnicode_AsUTF8(_item);
std::string _elem = _line;
c_colArray.push_back(_elem);
}
Py_DECREF(_list);
return Py_BuildValue("sss", c_colArray[0].c_str(), c_colArray[1].c_str(), c_colArray[2].c_str());
}
output
('ukey', 'OrderRef', 'orderSize')
The code works fine when first calling my_func, but it crashes when calling it again and arise Segmentation fault (core dumped)
| The Py_DECREF of the input list in the C++ code is not needed. PyArg_ParseTuple only parses the arguments - it hands out borrowed references, like a typecast, and does not create a new Python object.
Py_DECREF(_list);
After that, column_names in Python will become []. When my_func is called again, the input list is an empty list in the 2nd call. So c_colArray contains nothing, and c_colArray[0] will lead to a segmentation fault.
The simple solution is to remove the Py_DECREF(_list);
You could easily output the column_names in your python code like this:
import my_module
column_names = ['ukey', 'OrderRef', 'orderSize']
print(my_module.my_func(column_names))
print(column_names)
print(my_module.my_func(column_names))
|
71,788,620 | 71,789,084 | Translating GlobalPlatform from C to Delphi - Access violation errors | I want to use the GlobalPlatform.dll from kaoh Karsten Ohme in Delphi. So i tried to translate the headers so i can use the GlobalPlatform.dll in Delphi.
The first one I translated was Connection.h; I uploaded it on pastebin here.
The second one I translated was Security.h; I uploaded it on pastebin here.
First I establish a context with the OPGP_establish_context function. That seems to go alright, because the result is OPGP_ERROR_STATUS_SUCCESS and the message also states "success".
But then I try to list the readers with the OPGP_list_readers function, which also returns a success - but when I try to read the returned names I get various access violations (mostly at address 00000000 and trying to read 00000000, but there are variations between my tries).
My code is assigned to a button click:
procedure TfrmFormatCard.Button1Click(Sender: TObject);
const
BUFLEN = 1024;
var
Status,
Status2 : OPGP_ERROR_STATUS;
Context : OPGP_CARD_CONTEXT;
Names : array [0..BUFLEN +1] of Char;
Len : DWord;
begin
Context.libraryName := 'gppcscconnectionplugin';
Context.libraryVersion := '211';
Status := OPGP_establish_context(Context);
if Status.errorStatus = OPGP_ERROR_STATUS_SUCCESS then
begin
Len := 1024;
Status2 := OPGP_list_readers(Context, Names, Len);
if Status2.errorStatus = OPGP_ERROR_STATUS_SUCCESS then
begin
// Messagebox(application.Handle, names, '', 0);
end;
OPGP_release_context(Context);
end;
end;
When I use the above code I get no errors, but when I uncomment the messagebox I get the access violations. I have been trying all day and modified everything, but no luck. I can't see what I'm doing wrong. Maybe someone can help me out and point me in the right direction. I understand what an access violation at address 00000000 means, but I don't know whether I translated the headers the right way, which might be causing the error.
If someone could help me by checking, or testing it themselves - that would be highly appreciated.
I am using Delphi 10.4, and I have an internal smartcard reader (in the laptop), an Omnikey smartcard reader, and another unknown brand.
ps. Yes, I am aware of the GPShell commandline utility, but I would like to avoid having to use that. I want to use smartcards for security, and the need for the commandline tool would make this a weak point - hence why I want to use the library directly.
| In the 1st record you translated, OPGP_ERROR_STATUS, the errorMessage field is declared in the C code as:
TCHAR errorMessage[ERROR_MESSAGE_LENGTH+1];
where ERROR_MESSAGE_LENGTH is defined as 256, thus this array has 257 chars max.
But your translation:
errorMessage : array [0..ERROR_MESSAGE_LENGTH + 1] of Char;
has 258 chars max. This is because an array declaration in Delphi defines the indexes of the array, inclusive, so in your case you are declaring the array as having indexes 0..257, but it should be 0..256 instead, so drop the +1:
errorMessage : array [0..ERROR_MESSAGE_LENGTH] of Char;
You are making that same mistake in your translation of the OPGP_CARD_CONTEXT record, too:
OPGP_CARD_CONTEXT = record
librarySpecific : Pointer;
libraryName : array [0..64] of Char; // <--
libraryVersion : array [0..32] of Char; // <--
libraryHandle : Pointer;
connectionFunctions : OPGP_CONNECTION_FUNCTIONS;
end;
You are declaring libraryName as having 65 chars, and libraryVersion as having 33 chars. They need to be 64 and 32 instead, respectively:
OPGP_CARD_CONTEXT = record
librarySpecific : Pointer;
libraryName : array [0..63] of Char;
libraryVersion : array [0..31] of Char;
libraryHandle : Pointer;
connectionFunctions : OPGP_CONNECTION_FUNCTIONS;
end;
Per the original C declaration:
typedef struct {
PVOID librarySpecific; //!< Library specific data.
TCHAR libraryName[64]; //!< The name of the connection library to use.
TCHAR libraryVersion[32]; //!< The version of the connection library to use.
PVOID libraryHandle; //!< The handle to the library.
OPGP_CONNECTION_FUNCTIONS connectionFunctions; //!< Connection functions of the connection library. Is automatically filled in if the connection library can be loaded correctly.
} OPGP_CARD_CONTEXT;
So, it makes sense why an AV could occur, since OPGP_list_readers() internally accesses function pointers that are stored in the Context.connectionFunctions field following the arrays, thus the pointers would be accessed at the wrong memory offsets.
Something else to watch out for is TCHAR, which will map to either char or wchar_t depending on how the DLL is actually compiled. So that may or may not translate to Char in Delphi, depending on what version you are using (which you didn't say). In general, char -> AnsiChar, wchar_t -> WideChar. The project's unicode.h file suggests non-Windows builds are compiled to use char. But the project makefile suggests the Windows build is compiled to use wchar_t instead. It is not a good idea to use (P)Char in interop code because of this. Use (P)AnsiChar or (P)WideChar as needed instead.
UPDATE
Also, try zeroing out the memory of the Context before passing it to OPGP_establish_context(). The first thing OPGP_establish_context() does internally is call OPGP_release_context() on the Context, which means the Context can't contain any garbage in it (particularly in the libraryHandle and connectionFunctions.releaseContext fields) or else it will be mishandled.
|
71,788,664 | 71,791,393 | C++/C Importing Third Party Library to CMake | Hi I was wondering if anyone here could help me identify what I'm doing wrong while trying to add a library to my CMake project:
So originally I built the library https://github.com/recp/cglm on the command line with CMake. Here's what I did:
I created a build folder on the desktop (mkdir build)
I changed directory to it (cd build)
And then I created the sln with CMake (cmake /path-to/cglm)
After that I opened Visual Studio 2019 and saw 5 projects: ALL_BUILD, cglm, INSTALL, PACKAGE, ZERO_CHECK
I built the cglm project and received this in the Build Folder
Then inside the debug folder of the build folder I saw 4 files: cglm.exp , cglm.lib , cgl-0.dll and cglm-0.pdb
Then I went to another project to add the library and created the following CMakeLists.txt
cmake_minimum_required (VERSION 3.8)
project ("MathPlease")
add_executable(MathPlease "MathPlease.cpp" "MathPlease.h")
link_directories("path-to/desktop/dev/cglm/build")
find_package("cglm")
When I try to save that I receive the following error
Severity Code Description Project File Line Suppression State
Warning CMake Warning at C:\Users\asupr\source\repos\MathPlease\CMakeLists.txt:14 (find_package):
By not providing "Findcglm.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "cglm", but
CMake did not find one.
Could not find a package configuration file provided by "cglm" with any of
the following names:
cglmConfig.cmake
cglm-config.cmake
Add the installation prefix of "cglm" to CMAKE_PREFIX_PATH or set
"cglm_DIR" to a directory containing one of the above files. If "cglm"
provides a separate development package or SDK, be sure it has been
installed. MathPlease C:\Users\asupr\source\repos\MathPlease\CMakeLists.txt 14
If anyone needs the cmakeoutput.log I can paste it here as well any help would be greatly appreciated!
| In general you have two approaches -
Install cglm library & then make use of it in your project
Build cglm as part of your project
I have used both approaches & found the latter to be far better. Especially for smaller projects for these reasons -
Better intellisense, you can jump to 3rd party code and even edit
Easy to package and ship the project artifacts
Easy to manage in CI, version upgrades fo 3rd party projects
I use the FetchContent CMake API to achieve this. (Alternatively, the same can be achieved by adding the third-party source code to your project manually.)
Now, I have not worked on cglm personally, but here is still a sample build file:
cmake_minimum_required(VERSION 3.16)
project("MathPlease")
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
include(FetchContent)
set(FETCHCONTENT_QUIET FALSE)
fetchcontent_declare(
cglm
GIT_REPOSITORY https://github.com/recp/cglm.git
GIT_TAG v0.8.5
GIT_PROGRESS TRUE
)
if(NOT cglm_POPULATED)
message("populating cglm")
fetchcontent_populate(cglm)
add_subdirectory(${cglm_SOURCE_DIR} ${cglm_BINARY_DIR}) # FetchContent_Populate sets cglm_SOURCE_DIR and cglm_BINARY_DIR
endif()
add_executable(${PROJECT_NAME} MathPlease.cpp)
target_link_libraries(${PROJECT_NAME} cglm)
P.S. FetchContent is a fairly recent CMake feature. You will need CMake > 3.11
|
71,788,764 | 71,788,925 | Confused with references on pointers in C++ | The task is to calculate the output of the following source code without a computer, which I would say is "123 234 678 678" because ref1 is a reference on the value of ptr and in the moment where the address of val2 is assigned to this pointer, then shouldn't ref1 as well refer to its value?
int main() {
int* ptr = nullptr;
int val1 = 123, val2 {234};
ptr = &val1;
int& ref1 = *ptr;
int& ref2 = val2;
std::cout << ref1 << " ";
std::cout << ref2 << " ";
*ptr = 456; //ref1 = 456
ptr = &val2; //*ptr = 234, why isn't ref1 as well equal to 234?
*ptr = 678; //*ptr = 678, why isn't ref1 as well equal to 678?
std::cout << ref1 << " ";
std::cout << ref2 << "\n";
return EXIT_SUCCESS;
//output: 123 234 456 678
}
| After this declarations
int& ref1 = *ptr;
int& ref2 = val2;
ref1 refers to val1 and ref2 refers to val2. After the declarations the references can not be changed to refer to other variables.
Pay attention to that the reference ref1 does not refer to the pointer ptr. It refers the variable val1 due to dereferencing the pointer that points to the variable val1.
So these statements
std::cout << ref1 << " ";
std::cout << ref2 << " ";
will output
123 234
In this statement
*ptr = 456;
the value of the variable val1 is changed to 456 through using the pointer ptr.
After that the value of the pointer ptr was changed to store the address of the variable val2
ptr = &val2;
and the value of the variable val2 was changed to 678 through using the pointer
*ptr = 678;
So these statements
std::cout << ref1 << " ";
std::cout << ref2 << "\n";
now will output
456 678
That is they output values of the variables to which the references refer to.
In this program the same one pointer was used to change values of two different objects due to reassignment of the pointer with addresses of the objects.
|
71,788,928 | 71,789,798 | How can I *directly* append to a QVariantList, stored in a QVariantMap? | I have a collection of QVariantMaps which contains QVariantLists in SOME of their entries.
I need to append to the lists. These lists can grow rather large, and I have to do this for many of them, many times per second, basically the whole time the program is running. So, efficiency is pretty important here..
I'm not arriving at a way to avoid making copies of these lists though! They might as well be immutable. :(
This works:
map[ key ] = QVariantList() << map[ key ].toList() << someOtherVarList;
But, that's doing a whole lot more work (under the hood) than I want...
Here's something I've tried, but ran into a wall:
I can get a non-const pointer to the variant I want to modify directly by using this macro I wrote utilizing a QMutableMapIterator:
#define getMapValuePtr( keyType, valueType, map, keyValue, ptrName ) \
valueType *ptrName( nullptr ); \
for( QMap<keyType, valueType>::iterator it = map.begin(); \
it != map.end(); ++it ) \
{ if( it.key()==keyValue ) { ptrName = &(*it); break; } }
Example:
const QString key( "mylist" );
QVariantMap map;
map[ key ] = QVariantList( {"a","b","c"} );
getMapValuePtr( QString, QVariant, map, key, valuePtr )
So now valuePtr could give me the ability to directly modify the value in the map, but the fact it's a QVariant is still in the way...
I have next tried the following without Qt Creator highlighting anything to indicate an error, but it does NOT quite compile despite that...
QVariantList *valListPtr( qvariant_cast<QVariantList *>(*valuePtr) );
valListPtr->append( QVariantList( {"x","y","z"} ) );
Ideas?
Note: Not all entries in these maps are list. I CANNOT therefore change my map type to QMap<QString,QVariantList>. I tried, and the many client classes to this code all threw fits... That would have simplified matters, had it worked, by implicitly eliminating this final sticking point.
| Got it!
I was on the right track, but had to perform a deep dive into the Qt source to figure out how to finish this off. I now have another macro to call upon, which takes me the rest of the way:
#define varPtrToListPtr( varPtr, ptrName ) \
QVariantList *ptrName( varPtr->type() == QVariant::List ? \
static_cast<QVariantList *>( static_cast<void *>( &(varPtr->d.data.c) ) ) \
: nullptr );
So, here's the example from above with the rest of the story filled in:
const QString key( "mylist" );
QVariantMap map;
map[ key ] = QVariantList( {"a","b","c"} );
getMapValuePtr( QString, QVariant, map, key, valuePtr )
if( !valuePtr ) return; // imagine this being inside a function...
varPtrToListPtr( valuePtr, valueListPtr )
if( valueListPtr ) *valueListPtr << QVariantList( {"x","y","z"} );
This achieves the same results as the straight forward one-liner I showed in the question, but now I'm avoiding all the overhead of creating and destroying temp copies of giant lists!
Admittedly, the initial search with the iterator might burn some time compared to using QMap::value( key ), but none of my maps here have many k/v pairs, so the overall gain of avoiding the list copies negates that.
Update:
I ran some tests using this method vs the original and observed it literally shave MORE than 99% of the time required off the same workload. Like doing a task in 50 ms rather than the painful 7.5 seconds the old way would take.
This was originally written on Windows, against MSVC. I discovered, however, that on other platforms/compilers this fails to build unless you make a 1 line tweak to the Qt base library:
Open the Qt header for class QVariant.
Search the file for the line containing: Private d;
Directly above that, add the line: public: // HACK!. That aligns other build contexts with the MSVC access to that class member.
Note: I fully acknowledge the "bad form", and the downsides, to this solution... BUT the payoff may be worth it for you. It was for my use case.
|
71,788,946 | 71,789,072 | How to implement `ssize_t` in c++ complier correctly? | I am currently learning C++ and I saw there was a line of codes writing on the textbook:
ssize_t num = index; // index could be -1 in the example of my textbook`
num is a variable that holds the index returned from a traversal. However, after I copy this code into my C++ compiler, it says it cannot find the definition of ssize_t. And I can hardly find any definition of ssize_t in C++. I saw that it could be used in C but not C++. Right?
| ssize_t can (and cannot) be used in C++ just as much as it can (and cannot) be used in C. There is no fundamental type by such name in either language, and there is no ssize_t type in the ISO C standard library, nor is there a std::ssize_t in the C++ standard library.
ssize_t is defined in the POSIX standard. If your program is compiled for a POSIX system, then you can use ssize_t by including the header from the POSIX library that defines it. On non-POSIX systems, such a header doesn't exist, so this is not portable. You shouldn't use ssize_t in a cross-platform program except in a POSIX-specific module that uses the POSIX API.
When you aren't using the POSIX API, you can typically substitute ssize_t with std::ptrdiff_t.
|
71,789,005 | 71,789,258 | Template class implementation has compilation issues | I am creating an object using a static member function of a class, and then calling a function on it.
int main(){
int a = 49;
auto foo = Foo::createFoo(a);
foo->study();
}
Implementation
For this example, let's have study() display a value.
Structure-wise, I have constraints: I must keep the interface below and the static method returning the object.
class I_Foo{
public:
virtual void study(void) = 0;
};
class Foo : public I_Foo{
public:
static Foo* createFoo(int x_) {
return new Foo(x_);
}
void study() override{
std::cout << "studied: "<< st << std::endl;
}
protected:
Foo(int a) : st(a) {}
int st = 0;
};
The above implementation works exactly as expected, i.e., variable a is displayed as studied: a
Now I want to change the behaviour of this
Let's say for values below 50 the above implementation should remain "unchanged", i.e., variable a should be displayed as studied: a
However as the value becomes 50 and above then the output should display studied: a+100
#include <iostream>
class I_Foo{
public:
virtual void study(void) = 0;
};
template<int>
class FooImpl;
class Foo : public I_Foo {
public:
static Foo* createFoo(int x_) {
Foo* foo = nullptr;
if (x_>=50) {
foo = new FooImpl<2>(x_);
}
else {
foo = new FooImpl<1>(x_);
}
return foo;
}
};
template<int FT = 1>
class FooImpl : public Foo{
public:
void study() override{
std::cout << "studied: "<< st << std::endl;
}
FooImpl(int a) : st(a) {}
int st = 0;
};
template<>
void FooImpl<2>::study() override{
std::cout << "studied: "<< st + 100 << std::endl;
}
int main()
{
int a = 56;
auto foo = Foo::createFoo(a);
foo->study();
}
This obviously does not compile, as Foo has no clue what FooImpl is!
Please suggest how can I make this work. The requirement is to keep the main function untouched but have the new functionality.
Please note
Yes, I know this could be solved by an if statement, but that is not the point. The real implementation is complex; I just tried to present it as a fairly understandable problem to study how these templates could work.
EDIT 1
After first editt I seeing the below compliation error
| As I wrote in the comment: Do not implement createFoo function inside the class declaration.
#include <iostream>
class I_Foo {
public:
virtual void study(void) = 0;
};
class Foo : public I_Foo {
public:
// here just DECLARE the method...
static Foo *createFoo(int x_);
};
/********/
/* SNIP */
/********/
// ...and IMPLEMENT it there
Foo *Foo::createFoo(int x_) {
if (x_>=50) {
return new FooImpl<2>(x_);
} else {
return new FooImpl<1>(x_);
}
}
int main() {
int a = 56;
auto foo = Foo::createFoo(a);
foo->study();
}
|
71,789,386 | 71,790,087 | Why my code in c++ is taking too long to execute? | When I click to compile and run my code in c++, using dev c++, the code takes a while to run in the console, even though it's something very basic. The console screen opens and goes black, with the cursor blinking, the program only starts after a few seconds. How can I solve this problem? Can someone help me, please?
#include <iostream>
using namespace std;
int main() {
int valor[5] ;
int i ;
for(i = 0 ; i < 5 ; i++) {
cout << "digite valor[" << i << "]" << endl ;
cin >> valor[i] ;
}
for( i = 0 ; i < 5 ; i++) {
cout << "valor[" << i << "]: "<< valor[i] << endl ;
}
return 0;
}
| Got a similar issue some years ago at work, with the imposed antivirus. All compiled executables, even the smaller one, took several seconds to really be launched.
We were forced to request some special rights to IT in order to be able to exclude from scanning our development folders. It solved the problem instantly.
You should be able to tell your antivirus to exclude your base development folder (let's say "C:\Users\malkmim\Projects" and all its subfolders) from scanning, and then test again whether you still have this issue.
|
71,789,519 | 71,789,695 | Unpack pointer to call object's method | If I have the following classes:
class Shape {
public:
    virtual float getArea() { return 0; } // note: the original empty body falls off the end of a non-void function
};
// A Rectangle is a Shape with a specific width and height
class Rectangle : public Shape { // derived form Shape class
private:
float width;
float height;
public:
Rectangle(float wid, float heigh) {
width = wid;
height = heigh;
}
float getArea(){
return width * height;
}
};
and in the main, I call the getArea function like this:
int main() {
Rectangle r(2, 6); // Creating Rectangle object
Shape* shape = &r; // Referencing Shape class to Rectangle object
cout << "Calling Rectangle getArea function: " << r.getArea() << endl; // Calls Rectangle.printArea()
cout << "Calling Rectangle from shape pointer: " << shape->getArea() << endl; // Calls shape's dynamic-type's
}
My question is, why can't I unpack the pointer shape, which in my understanding would give a Rectangle object and on which I can call its getArea function, like this:
cout << "Calling Rectangle from shape pointer: " << *shape.getArea() << endl
| Obviously, HolyBlackCat answered your question in his comment.
But if you are interested in "why" - check out C++ Operator Precedence.
You'd see that . (member access) has higher precedence than * (indirection/dereference). So, unless parentheses are used, *shape.getArea() is parsed as *(shape.getArea()): you end up applying the member access operator . directly to the pointer shape, which is ill-formed.
71,790,744 | 71,790,825 | Why this code snippet works? The parameter of the lambda should be a lvalue-reference, whereas `std::bind` pass a rvalue(i.e. `std::move(ptr)`) to it | Why does this code snippet work?
#include <functional>
#include <iostream>
#include <memory>
int main()
{
std::unique_ptr<int> ptr(new int(666));
std::bind([](std::unique_ptr<int> &ptr){std::cout << *ptr << std::endl;}, std::move(ptr))();
}
You see, the parameter of the lambda is an lvalue reference, whereas std::bind passes an rvalue (i.e. std::move(ptr)) to it.
Also, why does this code snippet not compile?
std::function<void()> = std::bind([](std::unique_ptr<int> &ptr){std::cout << *ptr << std::endl;}, std::move(ptr));
Just because std::function need copy all the objects, whereas std::move(ptr) is not copyable?
UPDATED:
The aforementioned code which could not compile is seen at https://localcoder.org/how-to-capture-a-unique-ptr-into-a-lambda-expression (i.e. see the 'solution 3' in the post). So the said solution is totally wrong. Am I right?
| std::bind() does not call the lambda; it returns a proxy that will call the lambda when the proxy is invoked later. So, just because you pass an rvalue reference into std::bind() does not mean the proxy will pass an rvalue reference into the lambda.
And in fact, if you think about it, the proxy can't do so anyway. It has to move your rvalue-referenced unique_ptr object into something in the proxy to save it for later use after std::bind() has exited. And that something is itself not an rvalue, and so that something can then be passed into the lambda by lvalue reference.
|
71,790,753 | 71,798,429 | Drawing two triangles using index buffer | I am following Cherno's brilliant series on OpenGL, and I have encountered a problem. I have moved on from using a vertex buffer only, to now using a vertex buffer together with an index buffer.
What I want to happen, is for my program to draw two triangles, using the given positions and indices, however when I run my program I only get a black screen. My shaders are working fine when drawing only from a vertex buffer, but introducing the index buffer makes it fail. Here is the relevant parts of code:
float positions[] {
-0.5, -0.5,
0.5, -0.5,
0.5, 0.5,
-0.5, 0.5
};
unsigned int indices[] {
0, 1, 2,
2, 3, 0
};
unsigned int VBO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, 4*2*sizeof(float), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, 0);
unsigned int IBO;
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6*sizeof(unsigned int), indices, GL_STATIC_DRAW);
ShaderProgramSource source = parseShader("res/shaders/Basic.glsl");
unsigned int shader = createShader(source.vertexSource, source.fragmentSource);
glUseProgram(shader);
/* Loop until the user closes the window */
while (!glfwWindowShouldClose(window))
{
/* Render here */
glClear(GL_COLOR_BUFFER_BIT);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
/* Swap front and back buffers */
glfwSwapBuffers(window);
/* Poll for and process events */
glfwPollEvents();
}
I am pretty sure that my code is equal to that of Cherno, but he gets a nice looking square on screen whereas I get nothing. Can you spot an error?
Here's some info on my system:
macOS 12.2.1
OpenGL Version 4.1
GLSL Version 3.3
Writing and compiling in Xcode
Static linking to GLEW and GLFW
| Unlike using Linux or Windows, a Compatibility profile OpenGL Context is not supported on a Mac. You must use a Core profile OpenGL Context. If you use a Core profile, you must create a Vertex Array Object because a core profile does not have a default Vertex Array Object.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, 0);
|
71,790,777 | 71,793,224 | Fourth button Binary Conversion Not working | I'm going through the Inventor's Kit from Sparkfun, specifically around the Digital Trumpet. To expand on the project I'm adding a fourth button and trying to turn the buttons pressed into a binary number to give myself 16 notes from 4 buttons. Here's my code:
using namespace std;
//set the pins for the button and buzzer
int firstKeyPin = 2;
int secondKeyPin = 3;
int thirdKeyPin = 4;
int fourthKeyPin = 7;
int buzzerPin = 10;
void setup() {
Serial.begin(9600); //start a serial connection with the computer
//set the button pins as inputs
pinMode(firstKeyPin, INPUT_PULLUP);
pinMode(secondKeyPin, INPUT_PULLUP);
pinMode(thirdKeyPin, INPUT_PULLUP);
pinMode(fourthKeyPin, INPUT_PULLUP);
//set the buzzer pin as an output
pinMode(buzzerPin, OUTPUT);
}
void loop() {
auto toneTot = 0b0;
if (digitalRead(firstKeyPin) == LOW) {
tone(buzzerPin, 262); //play the frequency for c
toneTot |= 1;
}
if (digitalRead(secondKeyPin) == LOW) {
tone(buzzerPin, 330); //play the frequency for e
toneTot |= 10;
}
if (digitalRead(thirdKeyPin) == LOW) { //if the third key is pressed
tone(buzzerPin, 392); //play the frequency for g
toneTot |= 100;
}
if (digitalRead(fourthKeyPin) == LOW) { //if the fourth key is pressed
tone(buzzerPin, 494);
toneTot |= 1000;
}
Serial.println("Binary collected");
Serial.println(String(toneTot));
}
In general this has worked perfectly fine, except for the behavior of the 4th button. I've tried moving buttons around, switching pins, etc, but it keeps on working so that when a 4th button is pressed instead of values like 1001, 1010, 1011, etc, it comes out like 1002 and 1004
| Here:
toneTot |= 10;
You are not setting only Bit 1, as you expected. Decimal 10 is the same as 0b00001010, so you are setting Bit 3 and Bit 1. Switch it to:
toneTot |= 0x02;
The same goes for the other bits set in toneTot: 100 and 1000 are decimal literals, not binary.
|
71,790,794 | 71,791,206 | What's the difference between an explicit call and an implicit call of the conversion function? | Consider this example:
struct A{
template<class T>
operator T(); // #1
};
struct B:A{
template<class U>
operator U&&(); // #2
};
int main(){
B b;
int a = b; // #3
b.operator int(); // #4
}
According to [class.member.lookup] p7
If N is a non-dependent conversion-function-id, conversion function templates that are members of T are considered. For each such template F, the lookup set
S(t,T) is constructed, considering a function template declaration to have the name t only if it corresponds to a declaration of F ([basic.scope.scope]). The members of the declaration set of each such lookup set, which shall not be an invalid set, are included in the result.
#1 and #2 are both included in the lookup result regardless of what the conversion-function-ids are in #3 and #4. The diagnosis for #3 is what we expect, in other words, both #1 and #2 are candidates and they are indistinguishable.
However, it seems that implementations only consider #2 as the unique candidate when processing #4. As said above, the candidate set should be the same for either #3 or #4. Do I omit some other rules that cause the difference? Or, Is it a bug in implementations?
| Implementations just haven’t caught up to the new(ly clarified) rules, which had to be largely invented in 2020 since no published standard version has ever described lookup for conversion function templates in any sensible way.
|
71,791,129 | 71,791,199 | Expression must have class type but it has type "*shape" | I am trying to create a vector as show below:
std::vector<double> dimensions_data_vec{input_shape_pointer.get_dimensions()};
In this code, input_shape_pointer is a pointer to a shape such as a rectangle. A shape has dimensions associated with it, eg. length and width. I now have to create another, separate class which takes a pointer to a shape and accesses its dimensions. To do this I am using the code snippet.
The get_dimensions() function is a part of the shape class and returns the dimensions of a shape in a vector which has a type double.
My code issues the error
Expression must have class type but it has type "*shape" (please note the asterisk here)
My question is, how do I have get_dimensions() work on the shape before the vector is initialised so that there is no mismatch, and dimensions_data_vec just takes in the vector double of shape dimensions? I think there may be an issue with initializing a vector with another vector anyway, but I want to work on one problem at a time.
| If input_shape_pointer is a pointer to a shape then use -> instead of .. For example, the expression
input_shape_pointer.get_dimensions()
should be replaced by:
input_shape_pointer->get_dimensions()
Note the use of -> instead of . in the above expression
|
71,791,358 | 71,804,945 | Is there an easy way to tell which C++ classes are abstract from a shared object library? | I'm currently writing a library that has some abstract classes. In addition to checking that the library compiles, I'd like to make sure that all pure virtual methods have been defined in classes that are intended to be concrete. I had hoped that I could get this information from nm or objdump, but so far I haven't been able to tell.
Consider the following minimal example:
struct A
{
void f() {}
};
struct B
{
virtual void f() = 0;
};
struct C : public B
{
void f() override {}
};
When I look at the nm output, I get the following. (I've excluded anything that doesn't relate to one of these classes.)
% nm -C test.so
0000000000001174 W A::f()
000000000000118c W B::B()
000000000000118c W B::B()
0000000000001180 W C::f()
00000000000011aa W C::C()
00000000000011aa W C::C()
0000000000003db0 V typeinfo for B
0000000000003d98 V typeinfo for C
0000000000002003 V typeinfo name for B
0000000000002000 V typeinfo name for C
0000000000003d80 V vtable for B
0000000000003d68 V vtable for C
It's easy to distinguish A (a class with no virtual methods) from B and C. But I want to be able to distinguish the fact that B is an abstract class whereas C is concrete.
Obviously, if I have a list of all pure virtual methods, and a map of the class hierarchy, then I could iterate over them and check whether they are defined. (Above, you can see that C::f is defined, but B::f is not.) But I was hoping that there would be an automatic way of doing this. (I was hoping that the vtable or typeinfo would show up differently above.)
Another way would be to add an extra file where I instantiate one object from every class that I expect to be concrete, and I will get a compiler error if any of them have undefined virtual methods. But this is also annoying.
Clearly, I'm not an expert on ELF or the C object file model in general, so I'd appreciate a useful introduction or reference if the answer turns out to be complicated.
|
As a programmer, I need to make sure that I have fully defined every class I expect to be instantiated.
Only you can know which classes these are.
if you have a list of such classes, you can generate a test program along the lines of:
#include <assert.h>
#include <mylib.h>
int main()
{
assert(!std::is_abstract<Class1>::value);
...
assert(!std::is_abstract<ClassN>::value);
}
compile and run it. If it doesn't assert, you are good.
You could in theory also do this at the test.so level. You'll need to locate the vtable for every expected-to-be-concrete class X, and examine the values in that table. If any value is equal to &__cxa_pure_virtual, then the class is not concrete.
This is complicated by the fact that the vtables are relocated at load time. So you'd have to find the vtables, then find relocation records which apply to them, and then search for __cxa_pure_virtual.
Overall, generating a test program is a much easier approach.
Update:
I was hoping that I could do something like nm -C test.so | grep ... | ... to select the names of pure virtual classes.
There is no way to do that.
At a glance I would be able to tell if any of them did not belong on the list.
What you could do is find all classes with virtual tables via nm -D test.so | grep ' _ZTV' | c++filt. Use that list of all classes to generate the test program, but instead of asserting simply print the class name and the result of the test.
Finally filter this list to just the classes where is_abstract<T>::value is true. Voilà: you now have the list you were seeking.
|
71,792,422 | 71,793,061 | why the relative path is different when I use Cmake build and VS 2019 build? | I'm new in Cmake. And I try to use Cmake to construct my project.
In my project, I need to load some resources in runtime. for instance:
string inFileName = "../Resources/resource.txt";
// string inFileName = "../../Resources/resource.txt";
ifstream ifs;
ifs.open(inFileName.c_str());
if (ifs) {
....
}
But when I use the command line cmake ../ and cmake --build . --config Release in project/build. my file path should be relative to ${PROJEDCT_BINARY}, i.e. inFileName = "../resources/resource.txt".
But when I use cmake ../ and open the sln file with VS2019 then right-click to build and run, my file path should be relative to the executable, i.e. inFileName = "../../resources/resource.txt".
I don't know why this happened, and I search through Internet, It seems no one else encounters this stupid question...
Below is my file structure.
|--3rdParty
|----CmakeLists.txt
|--include
|----header.h
|--source
|----source.cpp
|----CmakeLists.txt
|--resources
|----resource.txt
|--CmakeLists
and my root CmakeLists.txt
cmake_minimum_required(VERSION 3.12)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
project(OBMI VERSION 0.1.0.0 LANGUAGES C CXX CUDA)
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR})
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR})
add_subdirectory(3rdParty)
add_subdirectory(source)
source/CmakeLists.txt
add_executable(mSI)
target_sources(mSI PRIVATE
${PROJECT_SOURCE_DIR}/include/header.h
# source
source.cpp
)
target_include_directories(multiSpectrumImaging
PRIVATE
${PROJECT_SOURCE_DIR})
target_link_libraries(mSI
PRIVATE
...
)
| When using relative paths to load files, the resolution of the final filename depends on the current working directory. The relative path is appended to that working directory. That current working directory is not necessarily the same as the path of your application; it will be the path of the surrounding environment from which the application is started (or can be set specifically for a debug environment in most IDE's).
You don't specify exactly how you run your program when it is not started from the IDE; just by double-clicking the executable, maybe? You also don't tell us where the executable is built in relation to your sources.
Specifically for running from Visual Studio, you can set the working directory in the "Debugging" section of the Project Properties.
For a more flexible solution, what I typically do is to determine the path of your executable, and then appending the relative path to load resources to that.
Basically, the full executable path is stored in argv[0] (if you have a int main(int argc, char** argv) {...}, i.e., the first element of the second argument to your main function). For more information on this, see for example the answers to this other question.
|
71,792,448 | 71,792,665 | Why do C++ allocator requirements not require that construct() constructs an object of value_type? | I found this strange while I was reading this. It says value_type equals to T, but construct() constructs an object of type X, which has nothing to do with T.
However, reference page of std::vector only says
An allocator that is used to acquire/release memory and to construct/destroy the elements in that memory. The type must meet the requirements of Allocator. The behavior is undefined (until C++20)The program is ill-formed (since C++20) if Allocator::value_type is not the same as T.
so I thought that this means I can use any Allocator for std::vector<T, Allocator> if Allocator meets the requirements of Allocator and Allocator::value_type is same as T.
But if what Allocator::construct() really constructs is not Allocator::value_type, how can std::vector's implementation constructs the element of std::vector?
Edit: Well, std::vector's implementation can just use 'placement new', but then what is this Allocator::construct() all about? If it is not meant to be used in such situations like constructing elements of STL containers, in what kinds of situation is it really meant to be used?
| Note that a.construct(xp, args) is only optional. A std::vector merely needs allocate() to allocate memory and then it can use placement-new to construct objects in that memory. If the allocator has construct() then the vector can use it to create objects (also of type T) in memory previously obtained via allocate().
a.allocate:
Allocates storage suitable for an array object of type T[n] and creates the array, but does not construct array elements. May throw exceptions.
And returns a pointer to allocated memory.
I think I don't understand. X can be any type including T. Isn't that mean X can be something that is not T, so vector cannot use construct if the allocator it uses have X which is not equal to T?
No. The list is "Given..." and has the bullet:
xp, a dereferenceable pointer to some cv-unqualified object type X
That is, you, the caller, pick some X. Then the allocator has to meet the requirement that a.construct(xp, args) constructs an object of type X. What X actually is, is not specified; hence, whatever xp with type X you choose, the allocator has to meet that (optional) requirement.
As an analogon, consider someone writes some multiplication method and the requirement is:
Given some x and y of type int:
the method returns the result of x*y.
Now, "some x and y of type int" is what you choose, while "the method returns the result of x*y" is the requirement the method has to fulfill. It does not mean the method may pick some particular x and y itself, e.g. always returning 42, which happens to be correct for x==1 and y==42.
Rather, it means that for any x and any y you choose, the method must fulfill "the method returns the result of x*y" (for the sake of simplicity, I ignored overflow here).
71,793,094 | 71,793,895 | Can't convert/cast between template types ( couldn’t deduce template parameter ) | Whilst messing around with templates and static_assert, I came across the following problem
( Below is a shortened example the problem is the same )
template< size_t n >
struct W {
class X {
public:
/* breaks normal constructor
template< size_t o >
X( typename W<o>::X & y ){
static_assert( o < n , "bad");
};
*/
template< size_t m >
operator typename W<m>::X() const {
static_assert( n < m , "bad");
return W<m>::X();
};
};
};
void func( W<6>::X y ){}
int main() {
W<5>::X x;
// > error: conversion from ‘X<5>’ to non-scalar type ‘X<6>’ requested
//W<6>::X y = x;
// > error: error: no matching function for call to ‘W<5>::X::operator W<6>::X()
// > note: template argument deduction/substitution failed:
// > note: couldn’t deduce template parameter ‘m’
//x.operator W<6>::X();
// > error: no matching function for call to ‘W<6>::X::X(W<5>::X&)’
// > note: template argument deduction/substitution failed:
// > note: couldn’t deduce template parameter ‘o’
//static_cast< W<6>::X >(x);
// similarly
//( W<6>::X ) x;
// End Goal:
// > error: could not convert ‘x’ from ‘X<5>’ to ‘X<6>’
func( x );
}
Simply put i'd like to be able to cast/convert W<n>::X objects to W<m>::X objects only if n < m.
Some additional notes
I can't use a class of the form X<n> instead of W<n>::X
The constructor of W::X<n> will be private
| The underlying problem is, that in nested types the template parameter can not be deduced. You can work around the problem if you provide a helper variable ( N in my example ) and use a more general template for the converting operator. You may filter out other types as W<n>::X with SFINAE if needed.
Here we go:
template< size_t n >
struct W {
class X {
public:
// helper to get template parameter because it can't be deduced
static constexpr size_t N = n;
template < typename T, size_t x=T::N >
operator T() const
{
//static_assert( x < n , "bad");
std::cout << "Target: " << n << " Source " << x << std::endl;
return T();
}
X()=default;
};
};
void func( W<6>::X ){}
int main() {
W<5>::X x;
W<6>::X y=x;
x.operator W<6>::X();
static_cast< W<6>::X >(x);
( W<6>::X ) x;
func( x );
}
|
71,793,245 | 71,793,321 | looking for an std algorithm to replace a simple for-loop | Is there a standard algorithm in the library that does the job of the following for-loop?
#include <iostream>
#include <vector>
#include <iterator>
#include <algorithm>
int main( )
{
const char oldFillCharacter { '-' };
std::vector<char> vec( 10, oldFillCharacter ); // construct with 10 chars
// modify some of the elements
vec[1] = 'e';
vec[7] = 'x';
vec[9] = '{';
const char newFillCharacter { '#' };
for ( auto& elem : vec ) // change the fill character of the container
{
if ( elem == oldFillCharacter )
{
elem = newFillCharacter;
}
}
// output to stdout
std::copy( std::begin( vec ), std::end( vec ),
std::ostream_iterator<char>( std::cout, " " ) );
std::cout << '\n';
/* prints: # e # # # # # x # { */
}
I want to replace the above range-based for-loop with a one-liner if possible. Is there any function that does this? I looked at std::for_each but I guess it's not suitable for such a scenario.
| This loop will replace every occurrence of oldFillCharacter with newFillCharacter. If you don't want to do something more fancy std::replace looks good:
std::replace(std::begin(vec), std::end(vec), oldFillCharacter, newFillCharacter);
Or a bit simpler with std::ranges::replace:
std::ranges::replace(vec, oldFillCharacter, newFillCharacter);
|
71,793,510 | 71,798,273 | Using CLion IDE, is there a way to measure performance and calculation cost of each line or part of the program while debugging? | I need to find which parts of the code are taking more CPU time and whether I can improve those parts. I can define a timer object in the code but the compilation of the code after each modification is taking too much time and I cannot continue this way.
I do not use VB but I found some information on measuring and monitoring tools on the code performance inside VB.
Is there a way to find the cost of each line of the code while executing step by step or between breakpoints in debugging mode using CLion IDE to pinpoint the parts that are taking more CPU time to execute?
| Take a look here: https://www.jetbrains.com/help/clion/cpu-profiler.html
CLion has an integration for perf (on Linux) and DTrace (on MacOS). Windows unfortunately doesn't provide profiling support at the operating system level, so you'll have to install a third-party profiler and won't be able to use it from within CLion. Your CPU manufacturer (AMD or Intel) provides one: Intel's is called VTune, AMD's is called µProf.
|
71,793,614 | 71,794,070 | Printing value of private variables of a class in a loop using member functions | I'm building my first program in C++ and I'm stuck where I'm trying to print the value of fee multiple times using for loop after giving value once. By running the loop it is giving garbage value every time after giving right value at first time. I'm new to the class topic in C++. Please tell me how can I print the same value of private variable fee every time the loop runs.
#include<iostream>
using namespace std;
class FEE
{
private:
int fee;
public:
void setfee()
{
cout<<"Enter the monthly fee= ";
cin>>fee;
}
void showfee()
{
cout<<"Monthly fee is "<<fee<<endl;
}
};
int main()
{
FEE stu[5];
for(int i=0;i<5;i++)
{
if(i==0)
stu[i].setfee();
stu[i].showfee();
}
}
| The problem is that you're calling the setfee member function only for the first object in the array stu. And since the array stu was default initialized meaning its elements were also default initialized, the data member fee of each of those elements inside stu has an indeterminate value. So, calling showfee on the elements on which setfee was not called, is undefined behavior since for those elements fee has indeterminate value and you're printing the value inside showfee.
because I want to set the same fee amount for every student and then autoprint it in a file for every stu[] array variable.
To solve this and do what you said in the above quoted statement, you can ask the user for the fee inside the main and then pass the input as an argument to the setfee member function. For this we have to change setfee member function in such a way that it has an int parameter. Next, using a 2nd for loop we could print the fee of each of the elements inside stu using showfee member function. This is shown below:
#include<iostream>
class FEE
{
private:
int fee = 0; //use in-class initializer for built in type
public:
void setfee(int pfee)
{
fee = pfee;
}
void showfee()
{
std::cout<<"Monthly fee is "<<fee<<std::endl;
}
};
int main()
{
FEE stu[5]; //default initiailized array
int inputFee = 0;
for(int i=0;i<5;i++)
{
std::cout<<"Enter the monthly fee= ";
std::cin>>inputFee;
//call setfee passing the inputFee
stu[i].setfee(inputFee);
}
for(int i = 0; i<5; ++i)
{
stu[i].showfee();
}
}
Demo
Also, note that using a std::vector instead of built in array is also an option here.
Some of the changes that i made include:
Added a parameter to the setfee member function.
Used in-class initializer for the fee data member.
The input fee is taken inside main which is then passed to the setfee member function for each of the element inside stu.
The member function showfee is called for each of the element inside stu using the 2nd for loop.
|
71,793,687 | 71,793,806 | Use specialization from base class | I have a class that inherits from a base class. The derived class has a template method. The base class has a specialized version of this method:
#include <iostream>
class Base {
public:
static void print(int i) { std::cout << "Base::print\n"; }
};
class Derived : public Base {
public:
static void print(bool b) { std::cout << "Derived::bool_print\n"; }
template <typename T>
static void print(T t) { std::cout << "Derived::print\n"; }
void Foo() {
print(1);
print(true);
print("foo");
}
};
int main()
{
Derived d;
d.Foo();
return 0;
}
The output is:
Derived::print
Derived::bool_print
Derived::print
The desired output is:
Base::print
Derived::bool_print
Derived::print
See code at https://onlinegdb.com/BY2znq8WV
Is there any way to tell Derived::Foo to use the specialization from Base instead of using the unspecialized version define in Derived?
Edit
The above example might be oversimplified as @Erdal Küçük showed. In actuality Derived subclasses from Base using CRTP, so it is not known if Base has a print method. A fuller example can be found at https://onlinegdb.com/N2IKgp0FY
| This might help:
class Derived : public Base {
public:
using Base::print; //one of the many useful usages of the keyword 'using'
//...
};
See: Using-declaration in class definition
|
71,793,910 | 71,795,080 | Access Java class object methos from JNI | I have an Android app in which I have implemented the connection with a WebSocket in the C++ code.
Now I would like to invoke a method of an object initialized in the Java class, via C++ code with JNI.
It's possible to do it?
This is my Activity:
public class MainActivity extends AppCompatActivity{
private MyCustomObject object; //This object is initialized in the life cycle of the activty
}
What I want to do is call object.myCustomMethod() from JNI.
| I have put together the relevant parts of the code for your use case.
Pass the custom object during onCreate to JNI
//MainActivity.java
public class MainActivity extends AppCompatActivity {
// Used to load the 'native-lib' library on application startup.
static {
System.loadLibrary("native-lib");
}
private MyCustomObject object;
protected void onCreate(Bundle savedInstanceState) {
object = new MyCustomObject();
//object is passed tthrough JNI call
intJNI(object);
}
public class MyCustomObject{
public void myCustomMethod(){
}
}
/**
* A native method that is implemented by the 'native-lib' native library,
* which is packaged with this application.
*/
public native void intJNI(MyCustomObject obj);
}
On the native side you keep a reference to the object and call it at the appropriate time
//JNI
static jobject globlaRefMyCustomObject;
static JavaVM *jvm;
extern "C" JNIEXPORT void JNICALL
Java_test_com_myapplication_MainActivity_intJNI(
JNIEnv* env,
jobject callingObject,
jobject myCustomObject) {
jint rs = env->GetJavaVM(&jvm);
assert (rs == JNI_OK);
//take the global reference of the object
globlaRefMyCustomObject = env->NewGlobalRef(myCustomObject);
}
//this is done in any background thread in JNI
void callJavaCallbackFucntion(){
JNIEnv *env;
jint rs = jvm->AttachCurrentThread(&env, NULL);
assert (rs == JNI_OK);
jclass MyCustomObjectClass = env->GetObjectClass(globlaRefMyCustomObject);
jmethodID midMyCustomMethod = env->GetMethodID(MyCustomObjectClass, "myCustomMethod", "()V");
env->CallVoidMethod(globlaRefMyCustomObject,midMyCustomMethod);
/* end useful code */
jvm->DetachCurrentThread();
}
//Release the Global refence at appropriate time
void JNI_OnUnload(JavaVM *vm, void *reserved){
JNIEnv* env;
if (vm->GetEnv(reinterpret_cast<void**>(&env), JNI_VERSION_1_6) != JNI_OK) {
return; // JNI_OnUnload returns void, so just bail out
}
env->DeleteGlobalRef(globlaRefMyCustomObject);
}
|
71,795,009 | 71,795,396 | Use base class implementation when base is template type | I have a class that receives its base type as a template arg and I want my derived class to call a function, print. This function should use the derived implementation by default but if the base class has a print function it should use the base implementation.
#include <iostream>
class BaseWithPrint {
public:
static void print(int i) { std::cout << "Base::print\n"; }
};
class BaseWithoutPrint {
};
template <typename B>
class Derived : public B {
public:
static void print(bool b) { std::cout << "Derived::bool_print\n"; }
template <typename T>
static void print(T t) { std::cout << "Derived::print\n"; }
void Foo() {
print(1);
print(true);
print("foo");
}
};
int main()
{
Derived<BaseWithPrint> d1;
d1.Foo();
Derived<BaseWithoutPrint> d2;
d2.Foo();
return 0;
}
This code only ever calls the Derived version of print.
Code can be seen at
https://onlinegdb.com/N2IKgp0FY
| If you know that the base class will have some kind of print, then you can add using B::print to your derived class. If a perfect match isn't found in the derived, then it'll check the base.
Demo
To handle it for the case where there may be a base print, I think you need to resort to SFINAE. The best SFINAE approach is really going to depend on your real world situation. Here's how I solved your example problem:
template <class T, class = void>
struct if_no_print_add_an_unusable_one : T {
// only ever called if derived calls with no args and neither
// the derived class nor the parent classes had that print.
// ie. Maybe best to force a compile fail:
void print();
};
template <class T>
struct if_no_print_add_an_unusable_one <T, decltype(T().print(int()))> : T {};
//====================================================================
template <class B>
class Derived : public if_no_print_add_an_unusable_one<B> {
using Parent = if_no_print_add_an_unusable_one<B>;
using Parent::print;
public:
// ... same as before
};
Demo
|
71,795,294 | 71,795,388 | Why my custom constructor is not called when an object is passed as a parameter? | I have the following code:
struct Entity {
Entity() {
std::cout << "[Entity] constructed\n";
}
~Entity() {
std::cout << "[Entity] destructed\n";
}
void Operation(void) {
std::cout << "[Entity] operation\n";
}
};
void funcCpy(Entity ent) {
ent.Operation();
}
int main() {
Entity e1;
funcCpy(e1);
}
This is the output:
[Entity] constructed
[Entity] operation
[Entity] destructed
[Entity] destructed
I expected my function to use the custom constructor, so the output would look like this:
[Entity] constructed
[Entity] operation
[Entity] constructed
[Entity] destructed
[Entity] destructed
Why does this happens? How could I use my custom constructor instead?
Thanks :)
| https://en.cppreference.com/w/cpp/language/copy_constructor
object as params will call copy constructor of class
|
71,795,380 | 71,796,588 | count std::optional types in variadic template tuple | I've got a parameter pack saved as a tuple in some function traits struct.
How can I find out, how many of those parameters are std::optional types?
I tried to write a function to check each argument with a fold expression, but this doesn't work as I only pass a single template type which is the tuple itself.
void foo1(){}
void foo2(int,float){}
void foo3(int, std::optional<int>, float, std::optional<int>){}
void foo4(int, std::optional<int>, bool){}
template<typename R, typename... TArgs>
struct ftraits<R(TArgs...)>
{
using ret = R;
using args = std::tuple<TArgs...>;
};
template<typename T>
struct is_optional : std::false_type
{
};
template<typename T>
struct is_optional<std::optional<T>> : std::true_type
{
};
template<typename... Ts>
constexpr auto optional_count() -> std::size_t
{
// doesn't work since Ts is a single parameter with std::tuple<...>
return (0 + ... + (is_optional<Ts>::value ? 1 : 0));
}
int main() {
using t1 = typename ftraits<decltype(foo1)>::args;
std::cout << optional_count<t1>() << std::endl; // should print 0
using t2 = typename ftraits<decltype(foo2)>::args;
std::cout << optional_count<t2>() << std::endl; // should print 0
using t3 = typename ftraits<decltype(foo3)>::args;
std::cout << optional_count<t3>() << std::endl; // should print 2
using t4 = typename ftraits<decltype(foo4)>::args;
std::cout << optional_count<t4>() << std::endl; // should print 1
}
| You can use template partial specialization to get element types of the tuple and reuse the fold-expression
template<typename>
struct optional_count_impl;
template<typename... Ts>
struct optional_count_impl<std::tuple<Ts...>> {
constexpr static std::size_t count =
(0 + ... + (is_optional<Ts>::value ? 1 : 0));
};
template<typename Tuple>
constexpr auto optional_count() -> std::size_t {
return optional_count_impl<Tuple>::count;
}
Demo
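For a quick self-check of the counts, the trait and the unpacking struct can be exercised directly on tuples. This sketch inlines the definitions from the question and answer; it needs C++17 for the fold expression:

```cpp
#include <cstddef>
#include <optional>
#include <tuple>
#include <type_traits>

template<typename T> struct is_optional : std::false_type {};
template<typename T> struct is_optional<std::optional<T>> : std::true_type {};

template<typename> struct optional_count_impl;
template<typename... Ts>
struct optional_count_impl<std::tuple<Ts...>> {
    constexpr static std::size_t count =
        (0 + ... + (is_optional<Ts>::value ? 1 : 0));
};

template<typename Tuple>
constexpr std::size_t optional_count() {
    return optional_count_impl<Tuple>::count;
}

// The argument lists of foo3 and foo2 from the question, as tuples.
using args3 = std::tuple<int, std::optional<int>, float, std::optional<int>>;
using args2 = std::tuple<int, float>;

static_assert(optional_count<args3>() == 2, "foo3 has two optionals");
static_assert(optional_count<args2>() == 0, "foo2 has none");
```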
|
71,795,509 | 71,795,648 | Error writing Sec^2(x) -0.5 x in define function in C++ | Okay, I'm trying to make a root finder calculator and right now I was testing it out with trigonometric functions. Now I'm getting errors whenever sec is involved, prompting errors such as "sec has not been defined"
Here's what it looks like
Can someone explain to me what's wrong and how I can write "Sec^2(x)-0.5"?
| The C++ standard library doesn't provide a secant function, so you have to define it yourself.
double sec(double x)
{
return 1 / cos(x);
}
Also, ^2 does not mean "square" in C++, it's "bitwise XOR". You need to use * or pow:
sec(x) * sec(x) - 0.5;
pow(sec(x), 2) - 0.5;
And don't use macros, they are going to bite you. Functions are much easier to use and will always behave as you expect:
double g(double x)
{
return sec(x) * sec(x) - 0.5;
}
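Putting it together, a minimal sketch; since cos(0) == 1, g(0) should come out to 1*1 - 0.5 = 0.5:

```cpp
#include <cmath>

// Secant: not in the standard library, so define it ourselves.
double sec(double x) { return 1.0 / std::cos(x); }

// sec^2(x) - 0.5, written with multiplication (^ would be bitwise XOR).
double g(double x) { return sec(x) * sec(x) - 0.5; }
```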
|
71,795,750 | 71,795,802 | how to brace initialize vector of custom class in C++? | Having this simple code:
#include <iostream>
#include <vector>
#include <string>
class Person{
public:
Person(std::string const& name) : name(name) {}
std::string const& getName() const {
return name;
}
private:
std::string name;
};
int main(){
// std::vector<int> ar = {1,2,3};
std::vector<Person> persons = {"John", "David", "Peter"};
}
I am getting error:
could not convert ‘{"John", "David", "Peter"}’ from ‘<brace-enclosed initializer list>’ to ‘std::vector<Person>’
So why can a vector of ints be brace-initialized, but a vector of a custom class with an implicit constructor (from std::string) cannot? And how do I enable it?
| You just need more braces:
std::vector<Person> persons = {
{"John"}, {"David"}, {"Peter"}
};
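The reason the original line fails: initializing a Person element directly from "John" would need two user-defined conversions (const char* to std::string, then std::string to Person), and overload resolution allows at most one. The inner braces construct each Person directly, so only one conversion is needed. A self-contained sketch:

```cpp
#include <string>
#include <vector>

class Person {
public:
    Person(std::string const& name) : name(name) {}
    std::string const& getName() const { return name; }
private:
    std::string name;
};

// Each inner brace list constructs a Person in place from its const char*,
// so only the single const char* -> std::string conversion is required.
std::vector<Person> makePersons() {
    return { {"John"}, {"David"}, {"Peter"} };
}
```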
|
71,795,886 | 71,796,501 | C++ Detect private member of friend class with CRTP | I have a CRTP Base class (Bar) which is inherited by an unspecified class. This Derived class may or may not have a specific member (internal_foo), and this specific member may or may not have another member (test()).
In this scenario internal_foo will always be public, however test() is private, but declaring Bar as a friend.
I can detect internal_foo using traits fine, because it is public. But I cannot detect test() due to it being private, even though Bar is a friend.
The below example works due to test() being public:
template<class, class = void >
struct has_internal_foo : std::false_type {};
template<class T>
struct has_internal_foo<T,
void_t<
decltype(std::declval<T>().internal_foo)
>> : std::true_type {};
template<class, class = void>
struct internal_foo_has_test : std::false_type {};
template<class T>
struct internal_foo_has_test<T,
void_t<decltype(std::declval<T>().internal_foo.test())
>> : std::true_type {};
class InternalFoo
{
public:
void test()
{
}
};
class BadInternalFoo
{
};
template<class T>
class Bar
{
public:
template<class _T = T>
std::enable_if_t<conjunction<has_internal_foo<_T>, internal_foo_has_test<_T>>::value, void>
action()
{
static_cast<T&>(*this).internal_foo.test();
}
};
class Foo :
public Bar<Foo>
{
public:
InternalFoo internal_foo;
};
class BadFoo :
public Bar<BadFoo>
{
public:
BadInternalFoo internal_foo;
};
void test()
{
Foo foo;
BadFoo bad_foo;
foo.action(); // Compiles. As expected.
bad_foo.action(); // Does not compile. As expected.
}
However this next version does not work, due to test() being private:
template<class, class = void >
struct has_internal_foo : std::false_type {};
template<class T>
struct has_internal_foo<T,
void_t<
decltype(std::declval<T>().internal_foo)
>> : std::true_type {};
template<class, class = void>
struct internal_foo_has_test : std::false_type {};
template<class T>
struct internal_foo_has_test<T,
void_t<decltype(std::declval<T>().internal_foo.test())
>> : std::true_type {};
class InternalFoo
{
public:
template<class T>
friend class Bar;
template<class, class>
friend struct internal_foo_has_test;
private:
void test()
{
}
};
class BadInternalFoo
{
};
template<class T>
class Bar
{
public:
template<class _T = T>
std::enable_if_t<conjunction<has_internal_foo<_T>, internal_foo_has_test<_T>>::value, void>
action()
{
static_cast<T&>(*this).internal_foo.test();
}
};
class Foo :
public Bar<Foo>
{
public:
InternalFoo internal_foo;
};
class BadFoo :
public Bar<BadFoo>
{
public:
BadInternalFoo internal_foo;
};
void test()
{
Foo foo;
BadFoo bad_foo;
foo.action(); // Does not compile
bad_foo.action(); // Does not compile
}
As seen above, I have tried to friend the detection struct too, but that didn't help.
Is there a way to do what I am trying to do?
Ideally I would like this solution to be portable, and not use anything beyond C++11, 14 at the most. (I have implemented void_t & conjunction)
Edit:
The suggested question does not answer this one. That question wants to detect whether a member is public or private, and only access it if it is public, I wish for the detection to return positive on a private member of a friend class.
| Summary and the fix
Looks like a GCC 11 bug and your second attempt should in fact work.
However, I recommend rewriting action's definition in either of two ways so you don't even need the member detection idiom:
// Way 1
template<class _T = T>
decltype(std::declval<_T&>().internal_foo.test()) action() {
static_cast<T&>(*this).internal_foo.test();
}
// Way 1, with a different return type via the comma operator
template<class _T = T>
decltype(std::declval<_T&>().internal_foo.test(), std::declval<ReturnType>()) action() {
static_cast<T&>(*this).internal_foo.test();
}
// Way 2
template<class _T = T>
auto action() -> decltype(static_cast<_T&>(*this).internal_foo.test()) {
static_cast<_T&>(*this).internal_foo.test(); // Using _T for consistency
}
Note that I use _T inside the decltype so it's dependent on the template argument and can be SFINAEd. Also note that it's still possible to specify an arbitrary return type without any enable_ifs.
Details
I took the liberty of prepending #include <type_traits> and using namespace std; to both of your examples, and of using C++17, so they can be compiled.
Some findings from the comments section:
Your first code (does not) compile(s) as expected with Clang 14, gcc 11 and gcc trunk: https://godbolt.org/z/EbaYvfPE3
Your second code (does not) compile(s) as expected with Clang and gcc trunk, but gcc 11 differs: https://godbolt.org/z/bbKrP8Mb9
There is an easier reproduction example: https://godbolt.org/z/T17dG3Mx1
#include <type_traits>
template<class, class = void>
struct has_test : std::false_type {};
template<class T>
struct has_test<T, std::void_t<decltype(std::declval<T>().test())>> : std::true_type {};
class HasPrivateTest
{
public:
template<class, class>
friend struct has_test;
friend void foo();
private:
void test() {}
};
// Comment the following line to make it compile with GCC 11
static_assert(has_test<HasPrivateTest>::value, "");
void foo() {
static_assert(has_test<HasPrivateTest>::value, "");
}
static_assert(has_test<HasPrivateTest>::value, "");
The code above compiles with Clang 14 and gcc trunk, but is rejected by gcc 11 with three "static assertion failed" messages, one for each assertion. However, commenting out the first static_assert makes all three compilers accept the code.
So it seems like GCC 11 (and earlier) tries to instantiate the template and do access checks depending on the context. Hence, if the first instantiation is outside of a friend, the .test() method is not accessible, and the result is kind of cached. However, if it's inside the friend void foo(), .test() is accessible and all static_asserts succeed.
@Klaus has pointed out a recent GCC bug whose fix seems relevant: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96204
|
71,796,015 | 71,796,104 | Two versions of enable_if usage work differently | I cannot understand why do two versions of copy constructor work differently (because of enable_if).
template <typename Type>
struct Predicate : std::integral_constant<bool, true>
{
};
template <>
struct Predicate<int> : std::integral_constant<bool, false>
{
};
template <typename FooType>
struct Settings
{
Settings() {}
// Here it works fine
template <typename OtherFooType>
Settings(const Settings<OtherFooType>& other, std::enable_if_t<Predicate<OtherFooType>::value, int*> = 0) {}
// In this case enable_if does not work
//template <typename OtherFooType>
//Settings(typename std::enable_if<Predicate<OtherFooType>::value, const Settings<OtherFooType>&>::type other){}
};
int main()
{
Settings<float> f = Settings<char>();
return 0;
}
From my point of view, here enable_if is the SFINAE way to conditionally remove the copy constructor. It should work in both cases. Maybe I am missing some understanding of SFINAE?
| In the second example the template parameter OtherFooType appears only to the left of the scope resolution operator :: in the function parameter.
Everything left of :: in a type specified by qualified name is a non-deduced context, meaning that the template argument for OtherFooType will not be deduced from the function parameter/argument pair.
As a consequence there is no way to deduce OtherFooType and so the constructor is always non-viable.
SFINAE doesn't even matter since that would be relevant only when substitution happens after successful deduction.
Also, (specializations of) constructor templates are never copy constructors. These are converting constructors and a copy constructor will still be declared implicitly.
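A sketch of how the second constructor could be fixed: keep OtherFooType deducible in the function parameter and move the constraint into a defaulted template parameter (types as in the question):

```cpp
#include <type_traits>

template <typename Type>
struct Predicate : std::integral_constant<bool, true> {};
template <>
struct Predicate<int> : std::integral_constant<bool, false> {};

template <typename FooType>
struct Settings {
    Settings() {}

    // OtherFooType is deduced from the parameter; the enable_if in the
    // defaulted template argument only kicks in after deduction succeeds.
    template <typename OtherFooType,
              typename = std::enable_if_t<Predicate<OtherFooType>::value>>
    Settings(const Settings<OtherFooType>&) {}
};

static_assert(std::is_constructible<Settings<float>, Settings<char>>::value,
              "Predicate<char> is true, so the constructor participates");
static_assert(!std::is_constructible<Settings<float>, Settings<int>>::value,
              "Predicate<int> is false, so the constructor is removed");
```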
|
71,796,647 | 71,796,740 | Why does std::unique_ptr<>.release() get evaluated before member function access which is in the lhs of assignment? | This results in a segmentation fault when accessing unique_ptr->get_id() as release() is run beforehand.
Is the ordering not guaranteed here?
#include <iostream>
#include <memory>
#include <unordered_map>
class D
{
private:
int id_;
public:
D(int id) : id_{id} { std::cout << "D::D\n"; }
~D() { std::cout << "D::~D\n"; }
int get_id() { return id_; }
void bar() { std::cout << "D::bar\n"; }
};
int main() {
std::unordered_map<int, D*> obj_map;
auto uniq_ptr = std::make_unique<D>(123);
obj_map[uniq_ptr->get_id()] = uniq_ptr.release();
obj_map.at(123)->bar();
return 0;
}
| This is because of [expr.ass]/1 which states:
[...] In all cases, the assignment is sequenced after the value computation of the right and left operands, and before the value computation of the assignment expression. The right operand is sequenced before the left operand. With respect to an indeterminately-sequenced function call, the operation of a compound assignment is a single evaluation.
emphasis mine
According to the above the right hand side gets evaluated first, then the left hand side, then the assignment happens. It should be noted that this is only guaranteed since C++17. Before C++17 the order of the left and right sides evaluations was unspecified.
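If the code also has to work pre-C++17, the safe pattern is to force the order yourself by reading the id into a local before releasing — a sketch along the lines of the question's code (a plain struct stands in for D):

```cpp
#include <memory>
#include <unordered_map>

struct D {
    int id_;
    explicit D(int id) : id_{id} {}
    int get_id() const { return id_; }
};

int insertAndFetch() {
    std::unordered_map<int, D*> obj_map;
    auto uniq_ptr = std::make_unique<D>(123);

    // Read the key first, then release: well-defined in every C++ version.
    const int id = uniq_ptr->get_id();
    obj_map[id] = uniq_ptr.release();

    const int result = obj_map.at(123)->get_id();
    delete obj_map.at(123);  // the map holds a raw owning pointer now
    return result;
}
```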
|
71,797,023 | 71,797,865 | type deduction for std::function argument types with auto adds const | I have a struct with a method called call which has a const overload. The one and only argument is a std::function which either takes an int reference or a const int reference, depending on the overload.
The genericCall method does exactly the same thing but uses a template parameter instead of a std::function as type.
struct SomeStruct {
int someMember = 666;
void call(std::function<void(int&)> f) & {
f(someMember);
std::cout << "call: non const\n";
}
void call(std::function<void(const int&)> f) const& {
f(someMember);
std::cout << "call: const\n";
}
template <typename Functor>
void genericCall(Functor f) & {
f(someMember);
std::cout << "genericCall: non const\n";
}
template <typename Functor>
void genericCall(Functor f) const& {
f(someMember);
std::cout << "genericCall: const\n";
}
};
When I now create this struct and call call with a lambda taking auto & as argument, the std::function always deduces a const int &, despite the object not being const.
The genericCall on the other hand deduces the argument correctly as int & inside the lamdba.
SomeStruct some;
some.call([](auto& i) {
i++; // ?? why does auto deduce it as const int & ??
});
some.genericCall([](auto& i) {
i++; // auto deduces it correctly as int &
});
I don't have the slightest clue why auto behaves differently in those two cases or why std::function seems to prefer to make the argument const here. This causes a compile error despite the correct method being called. When I change the argument from auto & to int &, everything works fine again.
some.call([](int& i) {
i++;
});
When I do the same call with a const version of the struct everything is deduced as expected. Both call and genericCall deduce a const int & here.
const SomeStruct constSome;
constSome.call([](auto& i) {
// auto deduces correctly const int & and therefore it should
// not compile
i++;
});
constSome.genericCall([](auto& i) {
// auto deduces correctly const int & and therefore it should
// not compile
i++;
});
If someone could shine some light on this I would be very grateful!
For the more curious ones who want to dive even deeper, this problem arose in the pull request: https://github.com/eclipse-iceoryx/iceoryx/pull/1324 while implementing a functional interface for an expected implementation.
| The problem is that generic lambdas (auto param) are equivalent to a callable object whose operator() is templated. This means that the actual type of the lambda argument is not contained in the lambda, and only deduced when the lambda is invoked.
However in your case, by having specific std::function arguments, you force a conversion to a concrete type before the lambda is invoked, so there is no way to deduce the auto type from anything. There is no SFINAE in a non-template context.
With no specific argument type, both your call overloads are viable. Actually any std::function that can match an [](auto&) is valid. Now the only rule is probably that the most cv-qualified overload wins. You can try with a volatile float& and you will see it will still choose that. Once it chooses this overload, the compilation will fail when trying to invoke.
|
71,797,193 | 71,798,965 | Upgrade OpenCV 4.5 - constants not declared | After upgrading from OpenCV 3.2 to 4.5 I get a couple of compile errors. A few constants seems to have changed names
CV_ADAPTIVE_THRESH_GAUSSIAN_C
CV_FILLED
Error
g++ -O3 -std=c++17 txtbin.cpp -o txtbin `pkg-config opencv4 --cflags --libs`
In file included from txtbin.hpp:8,
from txtbin.cpp:11:
deskew.hpp: In member function ‘cv::Mat Deskew::preprocess(const cv::Mat&)’:
deskew.hpp:85:42: error: ‘CV_ADAPTIVE_THRESH_GAUSSIAN_C’ was not declared in this scope
85 | cv::adaptiveThreshold(img, thresh, 255, CV_ADAPTIVE_THRESH_GAUSSIAN_C, cv::THRESH_BINARY, 15, -2);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from txtbin.cpp:11:
txtbin.hpp: In member function ‘void Txtbin::remove_boxes(cv::Mat&)’:
txtbin.hpp:217:79: error: ‘CV_FILLED’ was not declared in this scope; did you mean ‘CLD_KILLED’?
217 | cv::drawContours(mask, contours.contours, i, cv::Scalar(255, 255, 255), CV_FILLED);
| ^~~~~~~~~
| CLD_KILLED
txtbin.hpp: In member function ‘void Txtbin::remove_noise(cv::Mat&)’:
txtbin.hpp:262:78: error: ‘CV_FILLED’ was not declared in this scope; did you mean ‘CLD_KILLED’?
262 | cv::drawContours(mask, contours.contours, i, cv::Scalar(255, 255, 255), CV_FILLED);
| ^~~~~~~~~
| CLD_KILLED
txtbin.hpp: In member function ‘void Txtbin::remove_artifacts(cv::Mat&)’:
txtbin.hpp:289:77: error: ‘CV_FILLED’ was not declared in this scope; did you mean ‘CLD_KILLED’?
289 | cv::drawContours(mask, contours.contours, i, cv::Scalar(255, 255, 255), CV_FILLED);
| ^~~~~~~~~
| CLD_KILLED
txtbin.hpp: In member function ‘Txtbin::Bbox Txtbin::detect_textblock()’:
txtbin.hpp:352:76: error: ‘CV_FILLED’ was not declared in this scope; did you mean ‘CLD_KILLED’?
352 | cv::drawContours(mask, contours.contours, i, cv::Scalar(255, 255, 255), CV_FILLED);
| ^~~~~~~~~
| CLD_KILLED
txtbin.hpp: In member function ‘void Txtbin::detect_background_invert()’:
txtbin.hpp:498:79: error: ‘CV_FILLED’ was not declared in this scope; did you mean ‘CLD_KILLED’?
498 | cv::drawContours(mask, contours.contours, c, cv::Scalar(255, 255, 255), CV_FILLED);
| ^~~~~~~~~
| CLD_KILLED
| It's a matter of versions. OpenCV started as a heap of C code. Those preprocessor #defines all started with CV_... as a scoping hack.
OpenCV v2.0 introduced the C++ API. Constants now live in the cv namespace.
The old #defines were kept in v2 and v3 so people could transition more easily. In OpenCV v4, all the old C API was axed. It is an ex-API *whack*
Practically, for everything that can't be found, try replacing like so:
CV_FILLED => cv::FILLED
CV_ADAPTIVE_THRESH_GAUSSIAN_C => cv::ADAPTIVE_THRESH_GAUSSIAN_C
CV_* => cv::*
Exception: the data type/depth codes (CV_8UC3 ...) are still valid. Bulk string replacement on anything CV_ is not recommended.
|
71,797,216 | 71,797,386 | C++ template meta-programming: find out if variadic type list contains value | I'm trying to write a function that evaluates to true if a parameter passed at run-time is contained in a list of ints set up at compile-time. I tried adapting the print example here: https://www.geeksforgeeks.org/variadic-function-templates-c/#:~:text=Variadic%20templates%20are%20class%20or,help%20to%20overcome%20this%20issue.
And have tried this:
bool Contains(int j) { return false; }
template<int i, int... is>
bool Contains(int j) { return i == j || Contains<is...>(j); }
However, it gives me a compiler error saying "'Contains': no matching overloaded function found".
I've tried fiddling with angle-brackets but can't seem to get it to work.
| Your problem is that the recursive call is
Contains<is...>(j)
this looks for template overloads of Contains.
Your base case:
bool Contains(int j) { return false; }
is not a template. So the final call, when the pack is empty, of:
Contains<>(j)
cannot find the non-template.
There are a few easy fixes.
The best version requires a version of C++ greater than c++11; 17 I think:
template<int... is>
bool Contains(int j) { return ((is == j) || ...); }
This one is short, simple and clear.
The simple pre-c++14 ones generate O(n^2) total symbol length without jumping through extensive hoops. The c++17 one is O(n) total symbol length, much nicer. The c++14 one is obtuse, but also O(n) total symbol length.
So here are some c++11 ones that are suitable for modest lengths of packs:
None of the c++11 ones support empty packs:
template<class=void>
bool Contains(int j) { return false; }
template<int i, int... is>
bool Contains(int j) { return i == j || Contains<is...>(j); }
It relies on the fact that the first overload will never be selected except on an empty pack. (It is, due to a quirk in the standard, illegal to put any check that the pack is empty).
Another way that does not support empty packs is:
template<int i>
bool Contains(int j) { return i==j; }
template<int i0, int i1, int... is>
bool Contains(int j) { return Contains<i0>(j) || Contains<i1, is...>(j); }
which is more explicit than the first one.
The technique to get the total symbol length below O(n^2) involves doing a binary tree repacking of the parameter pack of integers. It is tricky and confusing, and I'd advise against it.
Live example.
Finally, here is a hacky one in c++14 that avoids the O(n^2) symbol length problem:
template<int...is>
bool Contains(int j) {
using discard=int[];
bool result = false;
(void)discard{0,((void)(result = result || (is==j)),0)...};
return result;
}
don't ask how it works. It is a technique that c++17 rendered obsolete on purpose.
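For reference, the C++17 version also works as a constexpr function, so it can be checked at compile time:

```cpp
// C++17 fold expression over the pack; an empty pack yields false.
template<int... is>
constexpr bool Contains(int j) { return ((is == j) || ...); }

static_assert(Contains<1, 2, 3>(2), "2 is in the list");
static_assert(!Contains<1, 2, 3>(5), "5 is not");
static_assert(!Contains<>(5), "empty list contains nothing");
```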
|
71,797,257 | 73,218,764 | Android, how to securely store data from C++? | I need to securely store a piece of data (a single string) on Android from a pure C++ implementation.
AFAIK the way to do this is using SharedPreferences and KeyStore. SharedPreferences takes care of simple storage and KeyStore of encrypting the data. This is done trivially in Java/Kotlin, but all the C++ answers seem to point to a JNI implementation (that just binds/calls Java classes and methods)
The problem is: I'm not looking for Java/JNI implementations, all my functionality is implemented in pure C++ using JSI (JavaScript ⇔ C++ communication) bindings, so I cannot use JNI code without a huge headache of macros and workarounds.
There are some hints, such as this question, that it might be possible to use the low-level implementation of android. But I just cannot find any example on the web. I'm also not a C++ expert to go diving into AOSP source code.
Is this even possible? and if so, could anybody provide a simple example I could start with? Even just an example showing if the underlying frameworks are importable/includable in some custom C++ code would already be a great help.
| Short answer is: no. After much fumbling around, it is not worth it to try to achieve certain things from C++ on Android. The solution I ended up with was calling Java methods (bridged using JNI) from my C++ code.
|
71,797,491 | 71,797,842 | c++ replace memset with std::fill | Hello I'm trying to port some old code to new standards.
I have a
std::complex<double> **data_2D;
And I need to set a portion of that to zero. I used to do with:
memset( &( data_2D[dims[0]-delta][0] ), 0, delta*dims[1]*sizeof( std::complex<double> ) );
which gives this warning:
clearing an object of non-trivial type 'class std::complex<double>'; use assignment or value-initialization instead
How do I replace that memset with std::fill? I got lost in the double pointers...
Here is a minimal example:
#include <iostream>
#include <complex>
#include <string.h>
int main() {
unsigned int dims[2]={10,10};
std::complex<double> *cdata_ = new std::complex<double>[dims[0]*dims[1]];
std::complex<double> **data_2D;
data_2D= new std::complex<double> *[dims[0]];
for( unsigned int i=0; i<dims[0]; i++ ) {
data_2D[i] = cdata_ + i*dims[1];
for( unsigned int j=0; j<dims[1]; j++ ) {
data_2D[i][j] = 0.0;
}
}
unsigned int delta=5;
memmove( &( data_2D[0][0] ), &( data_2D[delta][0] ), ( dims[1]*dims[0]-delta*dims[1] )*sizeof( std::complex<double> ) );
memset( &( data_2D[dims[0]-delta][0] ), 0, delta*dims[1]*sizeof( std::complex<double> ) );
return 0;
}
| You don't need to change anything about the pointers. If you use std::fill_n you only need to be careful of the arguments being in a different order and that the number of elements, not the number of bytes, is expected:
std::fill_n( &( data_2D[dims[0]-delta][0] ), delta*dims[1], 0 );
However, &...[0] has the same effect as implicit array-to-pointer decay, so it can be written as well as
std::fill_n( data_2D[dims[0]-delta], delta*dims[1], 0 );
And I don't really see the point in having data_2D in the first place. Just index cdata_ directly:
std::fill_n( cdata_ + (dims[0]-delta)*dims[1], delta*dims[1], 0 );
Or if you want an interface to index by row and column, wrap cdata_ in a class with a member function indexing the array by 2D indices.
I would also recommend replacing new if you are already modernizing. It should be a std::vector instead.
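Following that advice, a self-contained sketch with std::vector that zeroes the last delta rows of the row-major buffer and checks the boundary (the sizes are just example values):

```cpp
#include <algorithm>
#include <complex>
#include <cstddef>
#include <vector>

// Zero the last `delta` rows of a row-major dims0 x dims1 buffer.
bool zeroTailRows() {
    const std::size_t dims0 = 10, dims1 = 10, delta = 5;
    std::vector<std::complex<double>> data(dims0 * dims1,
                                           std::complex<double>(1.0, 1.0));

    std::fill_n(data.begin() + (dims0 - delta) * dims1,
                delta * dims1, std::complex<double>(0.0, 0.0));

    // Last untouched element is still (1,1); the tail is zeroed.
    return data[(dims0 - delta) * dims1 - 1] == std::complex<double>(1.0, 1.0)
        && data[(dims0 - delta) * dims1] == std::complex<double>(0.0, 0.0)
        && data.back() == std::complex<double>(0.0, 0.0);
}
```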
|