| question_id | answer_id | title | question | answer |
|---|---|---|---|---|
73,938,115 | 73,942,084 | boost::process environment not being propagated | I am trying to run an external command which uses environment variables to authenticate.
For this I am using boost::process:
namespace bp = boost::process;
std::string exec_bp(const std::string& cmd)
{
bp::ipstream pipe;
bp::child c(cmd, bp::std_out > pipe, boost::this_process::environment());
return std::string(std::istreambuf_iterator<char>(pipe), {});
}
This, however, doesn't work. I get an exception execve failed because the command I'm trying to run cannot find the environment variables it needs.
If I just use popen to run the command and read its stdout (per this answer), it works.
std::string exec_popen(const std::string& cmd)
{
std::array<char, 128> buffer;
std::string result;
std::unique_ptr<FILE, decltype(&pclose)> pipe(popen(cmd.c_str(), "r"), pclose);
if (!pipe)
throw std::runtime_error("popen() failed!");
while (fgets(buffer.data(), buffer.size(), pipe.get()) != nullptr)
result += buffer.data();
return result;
}
As an example, here I am running the aws command line client to list some files in S3:
const std::string cmd = "aws s3 ls s3://foo/bar";
try
{
auto s = exec_bp(cmd);
std::cout << "exec_bp:\n" << s << '\n';
}
catch(const std::exception& e)
{
std::cout << "exec_bp failed; " << e.what() << '\n';
}
try
{
auto s = exec_popen(cmd);
std::cout << "exec_popen:\n" << s << '\n';
}
catch(const std::exception& e)
{
std::cout << "exec_popen failed; " << e.what() << '\n';
}
Output:
$ ./a.out | head
exec_bp failed; execve failed: Permission denied
exec_popen:
2021-07-05 17:35:08 2875777 foo1.gz
2021-07-05 17:35:09 4799065 foo2.gz
2021-07-05 17:35:10 3981241 foo3.gz
Why does passing boost::this_process::environment() to boost::process::child not propagate my process's environment?
How can I use boost::process to execute my command?
| To mimic the search-path behaviour of a shell, use boost::process::search_path.
To mimic a shell, use e.g.
bp::child c("/usr/bin/bash",
std::vector<std::string>{"-c", "aws s3 ls s3://foo/bar"},
// ...
Note this is typically bad for security. Instead, consider the more direct
bp::child c("/usr/local/bin/aws",
std::vector<std::string>{"s3", "ls", "s3://foo/bar"},
// ...
|
73,938,303 | 73,938,788 | error: type/value mismatch in template parameter list for std::variant | The following code doesn't work if class some_class is templated. So my guess is that I have to put the template specifier in front of something, but I don't really know where? I tried putting it in front of state::base and state::error types within the variant definition, but this doesn't work. Where do I put it and why?
Demo
#include <variant>
template <typename T>
class some_class
{
void do_something() {
struct state {
struct base{};
struct error{};
};
std::variant<state::base, state::error> fsm{};
}
};
int main() {}
Error:
<source>:12:47: error: type/value mismatch at argument 1 in template parameter list for 'template<class ... _Types> class std::variant'
12 | std::variant<state::base, state::error> fsm{};
| ^
<source>:12:47: note: expected a type, got 'some_class<T>::do_something()::state::base'
<source>:12:47: error: type/value mismatch at argument 1 in template parameter list for 'template<class ... _Types> class std::variant'
<source>:12:47: note: expected a type, got 'some_class<T>::do_something()::state::error'
| The compiler considers state::base and state::error to be dependent (on the template arguments), in which case they would require prefixing with typename so that they are considered types while parsing the class template definition.
However, intuitively these names/types seem like they shouldn't be dependent. Although the type is a different one for each specialization of the enclosing class template, it is guaranteed that state::base will always name and refer to a known type in the function definition for every instantiation of the enclosing template. This can't be changed e.g. via explicit or partial specialization.
This seems to be an open CWG issue (CWG 2074). The standard currently does not say that these names are dependent. They would be dependent only if the local classes contained a dependent type (see [temp.dep.type]/9.5), but they don't do that in your case. Still it seems some compilers consider these types to be dependent.
So the solution is to prefix them with typename, even though that is arguably not required (or intended) by the standard.
|
73,938,329 | 73,942,752 | Why is this loop seeming to change the value of a variable? | The following code is meant to calculate 7 terms: tcapneg, tcappos, tneg1, tneg2, tpos1, tpos2, tzcap (only the calculation of tpos1 and tpos2 is shown here), and determine the entry that satisfies the condition of being the smallest positive non-zero entry.
int hitb;
double PCLx = 2.936728;
double PCLz = -0.016691;
double PDCx = 0.102796;
double PDCz = 0.994702;
double q = 0.002344;
double s = 0.0266;
double v = 0.0744;
double a = -q * PDCx * PDCx;
double b = s * PDCx - 2 * q*PCLx*PDCx - PDCz;
double c = -1.0*(PCLz + q * pow(PCLx, 2) - s * PCLx + v);
double d = b * b - 4 * a*c;
if (d >= 0.0f) // only take solution if t real
{
tpos1 = (-b + sqrt(d)) / (2 * a);
tpos2 = (-b - sqrt(d)) / (2 * a);
}
printf("\n %f %f %f %f %f %f %f", tcapneg, tcappos, tneg1, tneg2, tpos1, tpos2, tzcap);
yielding the result:
0.000000 0.000000 -40326.381162 -0.156221 -40105.748386 0.000194 0.016780
It is seen that the expected result should be smallest = tpos2 = 0.000194.
double smallest = -1.0;
double tlist[7] = { tcapneg, tcappos, tneg1, tneg2, tpos1, tpos2, tzcap };
const int size = sizeof(tlist) / sizeof(int);
for (int i = 0; i < size; i++)
{
if (tlist[i] > EPSILON && (smallest == -1.0 || tlist[i] < smallest))
{
smallest = tlist[i];
}
}
printf("\n %f", smallest);
The output for smallest is 0.000192, so smallest != tpos2 = 0.000194. Why is there this small change in value for the selected smallest entry?
The result of smallest will be fed to the following code:
if (smallest == tneg1 || smallest == tneg2)
{
hitb = 1;
}
else if (smallest == tpos1 || smallest == tpos2)
{
hitb = 2;
}
else if (smallest == tcappos)
{
hitb = 3;
}
else if (smallest == tcapneg)
{
hitb = 4;
}
else if (smallest == tzcap)
{
hitb = 5;
}
In this case, we should satisfy the condition to write hitb = 2, however this is failing due to the inequality above.
| Your array tlist has 7 elements of type double. sizeof(double) is 8, so sizeof(tlist) is 8*7 = 56, while sizeof(int) is 4, so your size = sizeof(tlist) / sizeof(int) is 56/4 = 14. Your loop therefore goes beyond the number of elements in the array: it reads 7 more double values from the memory after the array, because the array name is used as a pointer with no bounds check.
Here is my code to verify the above analysis:
#include <iostream>
using namespace std;
int main()
{
double da[7] = {0.0, 0.0, -40326.381162, -0.156221, -40105.748386, 0.000194, 0.016780};
const int sda = sizeof(da);
const int sin = sizeof(int);
const int siz = sda/sin;
cout << "sda:" << sda << " sin:" << sin << " siz:" << siz << endl;
for( int i=0; i<siz; i++ ) {
cout << "da[" << i << "] = " << da[i] << endl;
}
return 0;
}
Here is the output
sda:56 sin:4 siz:14
da[0] = 0
da[1] = 0
da[2] = -40326.4
da[3] = -0.156221
da[4] = -40105.7
da[5] = 0.000194
da[6] = 0.01678
da[7] = 2.07324e-317
da[8] = 8.48798e-314
da[9] = 1.9098e-313
da[10] = 0
da[11] = 1.31616e-312
da[12] = 0
da[13] = 6.95332e-310
The correct code is
const int size = sizeof(tlist) / sizeof(double);
or, more robustly, sizeof(tlist) / sizeof(tlist[0]), which stays correct even if the element type changes later.
Use the following option for GCC to report runtime error in this case
g++ -fsanitize=bounds -o main.e main.cpp
|
73,938,344 | 73,993,883 | How does Qt find <Qxxx/qxxx.h>? | In Qt 6.4.0, we can use code like the following to include Qt components:
#include <QtCore/qchar.h>
#include <QtCore/qbytearray.h>
#include <QtCore/qbytearrayview.h>
#include <QtWidgets/qtwidgetsglobal.h>
But I found that the real paths of those .h files are NOT under folders like QtCore, QtWidgets, etc.; most of them are actually under a directory like:
/Users/tony/Qt/6.4.0/macos/lib/QtXXX.framework/Headers/qtxxx.h
I'm wondering: since QtCore is not the real path, but Headers is, shouldn't we write #include "Headers/qtxxx.h"? How can a path like #include <QtCore/qchar.h> work?
| It's been solved: this behaviour only shows up on macOS, because macOS organizes each library's binaries and headers into a "framework" bundle. The compiler resolves the framework name in the include path to the real path, so <QtCore/qchar.h> is mapped to QtCore.framework/Headers/qchar.h.
|
73,938,523 | 73,938,581 | C++ Generic Addition Method | I have struct like this
template <typename T>
struct container
{
T norm() const
{
T sum = 0;
for (unsigned int i = 0; i < length; i++)
{
sum += value[i];
}
return sum;
};
private:
T *value{nullptr};
unsigned int length{0};
};
I have a norm() method that adds all the values from the "value".
I need to write this method so that it can both add numbers and concatenate strings or characters.
The question is that I do not understand how to determine what type of variable is passed and how to set the type for the "sum"
I thought to determine the size of the first element and determine the type of the variable from it, but maybe there is a better method?
| To initialize a T generically inside the template function, use the empty brace-initializer:
T sum = {};
This value-initializes sum: for arithmetic types such as int, double, and float it is 0; for class types such as std::string it is the default-constructed value (an empty string), etc.
After taking another look at your norm function, the following should also give the same results (not tested):
template <typename T>
T norm() const
{
return std::accumulate(value, value + length, T{}, std::plus<T>()); // needs <numeric> (and <functional> for std::plus)
}
|
73,939,922 | 73,939,932 | How do I make pass and fail count work in c++? | I'm working on this function for one of my classes and my pass count works just fine, however, my fail count ALWAYS prints out 12. I've been reading my code top to bottom and just can't seem to find the problem.
#include <iostream>
#include <sstream>
#include <string>
using namespace std;
string passAndFailCount(string grades){
int pass_count;
int fail_count;
istringstream myStream(grades);
string grade;
while(myStream>>grade){
int i_grade=stoi(grade);
if (i_grade>=55){
++pass_count;
}
else{
++fail_count;
}
}
cout<<"Pass: "<<pass_count<<endl<<"Fail: "<<fail_count<<endl;
}
int main()
{
string grades;
getline(cin, grades);
passAndFailCount(grades);
}
| Your problem is uninitialized variables.
int pass_count = 0;
int fail_count = 0;
and you're set.
An explanation: unlike variables with static storage duration (which the standard guarantees are zero-initialized), automatic and dynamic variables (like the ones you are using) start out with whatever value happened to be in that memory when it was allocated.
Memory is never empty; there is always 'something' written there.
The memory backing your fail_count variable just 'happened to contain' 12, which is why you start with that value. It could have been anything.
By explicitly assigning an initial value, you 'initialize' the variable, putting it into a defined state (0), and your program works as expected.
|
73,939,959 | 73,942,023 | Why does std::pmr::polymorphic_allocator not propagate on container copy construction? | Why does std::pmr::polymorphic_allocator not propagate on container copy construction?
See the Notes section here
Allocators do propagate on move construction, so this behavior seems to be inconsistent.
Also, with copy elision, this behavior can be somewhat odd. Depending on whether a copy ctor is elided or not, the constructed object can have different allocators: if the copy is elided, then the object will have the allocator of the source object. But if it's not elided, it will have the default allocator.
Consider:
#include <memory_resource>
#include <vector>
#include <cstdio>
char buffer[64];
std::pmr::monotonic_buffer_resource pool(buffer, sizeof(buffer));
std::pmr::vector<char> gvec{&pool};
std::pmr::vector<char> gget() {
return gvec;
}
std::pmr::vector<char> lget() {
std::pmr::vector<char> lvec{&pool};
return lvec;
}
int main() {
printf("default resource: %p\n", std::pmr::get_default_resource());
printf("pool: %p\n", &pool);
printf("\n");
printf("global copy resource: %p\n", gget().get_allocator().resource());
printf("local copy resource: %p\n", lget().get_allocator().resource());
}
In this example, gget will return a vector with default resource (even though the copied gvec object uses pool), while lget can return a vector with the pool resource (depending on whether the compiler decides to elide the copy ctor or not). On my machine, this is printed:
default resource: 0x7f67a5be11e8
pool: 0x55c5924c00e0
global copy resource: 0x7f67a5be11e8
local copy resource: 0x55c5924c00e0
But as far as I understand, it would be perfectly valid if the last line looked like this (i.e., the address of the default resource is printed):
local copy resource: 0x7f67a5be11e8
Is this a design issue, or am I misunderstanding something?
| From looking into the various standard proposals, I can find no explanation for making polymorphic allocators not propagate the memory resource on copy construction. The very first proposal (pdf) includes this language, and every version thereafter keeps moving it forward.
However, in searching around for the purpose of the mechanism that prevents this propagation (ie: select_on_container_copy_construction), I found this statement on a now-closed defect:
I think the people using stateful allocators will alter the default behaviour of select_on_container_copy_construction so that it doesn't propagate, but will return a default-constructed one (to ensure a stateful allocator referring to a stack buffer doesn't leak to a region where the stack buffer has gone).
This tracks with something stated in a section of N3525:
Type erasure is a powerful technique, but has its own flaws, such as that the allocators can be propagated outside of the scope in which they are valid
So it seems to me that this is done to make it more difficult for users to accidentally have stack-bound memory_resource types leave the scope in which they are intended to be used.
As for elision, the example you give shows that the circumstance you're concerned about (elision changing the behavior of a copy) pretty much cannot happen:
std::pmr::vector<char> lget() {
std::pmr::vector<char> lvec{&pool};
return lvec;
}
This performs a move. That move may be elided, but if it is not, it will be a move.
Indeed, it is essentially impossible to construct a case where elision is an option and vector would be copied. There are ways to turn lvec into a proper lvalue, but all of them shut off elision.
|
73,940,236 | 73,940,398 | C++ How to include a third party library into your library without compiling it? | I am building my first C++ library (header-only) and I want to use a third-party library like Crypto++. I believe I am supposed to compile the Crypto++ library, put the compiled archive inside the ./lib directory, and pass the linker flags when compiling my library. But since I do not want to compile the third-party library for every OS, and I am building a header-only library, I would like to include the third-party library without compiling it. Is there a way I can achieve this? BTW, the Crypto++ library is not header-only.
The hierarchy of my project is as follows:
ProjectName/
./include/
./ProjectName/
...
./internal/
./cryptopp/
... source files ...
./lib/
When I try to include internal/cryptopp/sha.h without passing the linker arguments to the compiled library it throws the Undefined symbol ... errors as expected. Is there a way I can achieve this? Or is there no other way than compiling the library and putting the compiled archive inside the ./lib/ dir and passing the correct linker arguments?
I am aware that this is quite a newbie question, I have just never done this before and C++ is still quite new for me.
| You are developing a header-only, platform-independent C++ library that happens to depend on a traditional (i.e. not header-only) C++ library, but which is also portable to a wide range of target platforms.
What you need to tell your users is that your library depends on that other library (crypto++ in this case), and ask them to install this library before attempting to use your header-only library, and to also link against crypto++ when compiling executables or libraries that use your library.
Do not distribute the crypto++ source code yourself as part of your library, because your users should and will want to update the crypto++ library that they use whenever it receives an update, independent of your library. This is especially important with regard to cryptography libraries because of the possible security implications.
|
73,940,416 | 74,534,596 | VSCode GDB debugging Internal error while converting character sets | While debugging like normally (before I didn't have this kind of problem) GDB returned message :
Internal error while converting character sets: No error.
Only for viewing string or char kind of variables.
I have tried disabling the Windows beta UTF-8 engine, and tried additional commands from this StackOverflow answer.
Unfortunately nothing works.
Adding additional command for GDB logging I receive the same message.
1: (394137) ->1059^error,msg="Internal error while converting character sets: No error."
EDIT
As @rainbow.gekota requested, I added some more informations.
Current OS : Windows 10 21H2 (Compilation: 19044:2006)
VSCode ver. : 1.72.0 x64 -> 64bbfbf67ada9953918d72e1df2f4d8e537d340e
GDB ver. : 12.1 for MinGW-W64 x86_64, built by Brecht Sanders
GDB installed from MSYS2 repos.
Here's my launch.json with which I was trying to fix this error with set charset UTF-8
{
"version": "0.2.0",
"configurations": [
{
"name": "Start debugging",
"type": "cppdbg",
"request": "launch",
"program": "${fileDirname}\\main.exe",
"args": ["arg1", "arg2", "arg3"],
"stopAtEntry": false,
"cwd": "${fileDirname}",
"environment": [],
"externalConsole": true,
"MIMode": "gdb",
"miDebuggerPath": "C:\\msys64\\mingw64\\bin\\gdb.exe",
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
},
{
"description": "Fix pretty-printing for gdb",
"text": "set charset UTF-8"
}
],
"preLaunchTask": "Build program",
"logging": { "engineLogging": true }
}
]
}
I don't have any more ideas on how to reproduce this error. It was working fine until, one day, it wasn't.
| Unfortunately I couldn't find any logical answer to my question. The only thing that helped was deleting the MSYS2 installation (with gdb and gcc) and installing it once again.
Now I can't reproduce this issue, so maybe in the future I'll come back with the same error. We'll see.
|
73,940,961 | 73,941,091 | Confused about std::is_standard_layout type trait | Standard-layout class describes a standard layout class that:
has no non-static data members of type non-standard-layout class (or array of such types) or reference,
has no virtual functions and no virtual base classes,
has the same access control for all non-static data members,
has no non-standard-layout base classes,
has all non-static data members and bit-fields in the class and its base classes first declared in the same class, and
given the class as S, has no element of the set M(S) of types as a base class, where M(X) for a type X is defined as:
If X is a non-union class type with no (possibly inherited) non-static data members, the set M(X) is empty.
If X is a non-union class type whose first non-static data member has type X0 (where said member may be an anonymous union), the set M(X) consists of X0 and the elements of M(X0).
If X is a union type, the set M(X) is the union of all M(Ui) and the set containing all Ui, where each Ui is the type of the ith non-static data member of X.
If X is an array type with element type Xe, the set M(X) consists of Xe and the elements of M(Xe).
If X is a non-class, non-array type, the set M(X) is empty.
(numbered for convivence)
However, the following fails:
#include <type_traits>
struct A {
int a;
};
struct B : A {
int b;
};
static_assert(std::is_standard_layout<A>::value, "not standard layout"); // succeeds
static_assert(std::is_standard_layout<B>::value, "not standard layout"); // fails
Demo
I see that 1-4 are true, so is one of points under 5 false? I find the points under 5 a little confusing.
| The cppreference description is a little confusing.
The standard (search "standard-layout class is") says:
A standard-layout class is a class that:
has no non-static data members of type non-standard-layout class (or array of such types) or reference,
has no virtual functions (10.3) and no virtual base classes (10.1),
has the same access control (Clause 11) for all non-static data members,
has no non-standard-layout base classes,
either has no non-static data members in the most derived class and at most one base class with non-static data members, or has no base classes with non-static data members, and
has no base classes of the same type as the first non-static data member.
You can see from the part I emphasized that a class with a non-static data member and a base class that itself has a non-static data member is not standard-layout.
I suspect this point is described as follows in cppreference:
has all non-static data members and bit-fields in the class and its base classes first declared in the same class
The point is that you must be able to cast the address of a standard-layout class object to a pointer to its first member, and back. It boils down to compatibility between C++ and C.
See also: Why is C++11's POD "standard layout" definition the way it is?.
|
73,941,068 | 73,941,077 | How to define a type dependent on template parameters | I have a class that has methods that will return Eigen::Matrix types. However, when these will be 1x1 matrices, I want to just return double, as treating scalars as Eigen::Matrix<double, 1, 1> is a bit nasty. Essentially, I want something like this:
template <int rows, int cols>
class foo
{
using my_type = ((rows == 1 && cols == 1) ? double : Eigen::Matrix<double, rows, cols>);
};
However, this (unsurprisingly) doesn't work. How can I create the equivalent of the above?
| Use std::conditional_t<cond, TThen, TElse>:
template <int rows, int cols>
class foo
{
using my_type = std::conditional_t<rows == 1 && cols == 1, double, Eigen::Matrix<double, rows, cols>>;
};
If you don't yet have std::conditional_t<>, you might try std::conditional<>::type (with typename prefix).
|
73,941,082 | 73,941,126 | I don't know what cause segmentation fault(core dumped) | void hexdump(void* ptr, const int buflen)
{
unsigned char* buf = (unsigned char*)ptr;
int i, j, d, hex = 0;
short* ins;
string op;
for (i = 0; i < buflen; i += 16) {
for (j = 0; j < 16; j += 4) {
if (i + j < buflen) {
cout << buflen << endl;
cout << "inst " << (i+j) / 4 << ": ";
I am using a Linux Ubuntu server. My goal is to read a machine-code binary file, recover the assembly code, and print it out.
However, the code above is where the failure occurs.
It works up to cout << "inst " << (i+j) / 4 << ": ";, and buflen (which is 24) is printed, but after that, segmentation fault (core dumped) appears and execution stops.
Here is the rest of the code (the find and work functions are not yet implemented or used):
#include <fstream>
#include <vector>
#include <iostream>
#include <algorithm>
using namespace std;
string find(char op[7]);
void work(string inst, short* ins);
void hexdump(void* ptr, const int buflen)
{
unsigned char* buf = (unsigned char*)ptr;
int i, j, d, hex = 0;
short* ins;
string op;
for (i = 0; i < buflen; i += 16) {
for (j = 0; j < 16; j += 4) {
if (i + j < buflen) {
cout << buflen << endl;
cout << "inst " << (i+j) / 4 << ": ";
for (int a = 0; a < 32; a += 8) {
d = buf[i + j + a / 8];
for (int k = 0; k < 8; k++) {
if (d % 2 != 0) {
ins[k + a] = 1;
}
else {
ins[k + a] = 0;
}
d = d / 2;
}
}
for (int i = 31; i >= 0; i -= 4) {
hex = hex + ins[i] * 8;
hex = hex + ins[i - 1] * 4;
hex = hex + ins[i - 2] * 2;
hex = hex + ins[i - 3] * 1;
if (hex == 10)
printf("a");
else if (hex == 11)
printf("b");
else if (hex == 12)
printf("c");
else if (hex == 13)
printf("d");
else if (hex == 14)
printf("e");
else if (hex == 15)
printf("f");
else
printf("%d", hex);
hex = 0;
}
for (int i = 6; i >=0; i--) {
if (ins[i] == 1)
op.append("1");
else if (ins[i] == 0)
op.append("0");
}
cout << endl << op << endl;
//work(find(op), ins);
printf("\n");
}
}
}
}
int main(int argc, char* argv[])
{
ifstream in;
in.open(argv[1], ios::in | ios::binary);
if (in.is_open())
{
// get the starting position
streampos start = in.tellg();
// go to the end
in.seekg(0, std::ios::end);
// get the ending position
streampos end = in.tellg();
// go back to the start
in.seekg(0, std::ios::beg);
// create a vector to hold the data that
// is resized to the total size of the file
std::vector<char> contents;
contents.resize(static_cast<size_t>(end - start));
// read it in
in.read(&contents[0], contents.size());
// print it out (for clarity)
hexdump(contents.data(), contents.size());
}
in.close();
return 0;
}
string find(char op[7]) {
string inst("unknown instruction");
if(op=="")
return inst;
}
void work(string inst, short* ins);
| tl;dr: The variable ins points to random memory because the code never assigns it anything valid. Hence you have undefined behavior (crashing being the most likely outcome) when dereferencing this pointer and writing through it.
short* ins; // THIS POINTER NEVER GETS ALLOCATED OR ASSIGNED TO VALID MEMORY
string op;
for (i = 0; i < buflen; i += 16) {
for (j = 0; j < 16; j += 4) {
if (i + j < buflen) {
cout << buflen << endl;
cout << "inst " << (i+j) / 4 << ": ";
for (int a = 0; a < 32; a += 8) {
d = buf[i + j + a / 8];
for (int k = 0; k < 8; k++) {
if (d % 2 != 0) {
ins[k + a] = 1; // THIS IS UNDEFINED BEHAVIOR, IT PROBABLY CRASHES
|
73,941,558 | 73,941,962 | Parallelizing for-loop inside another loop efficiently with OpenMP | I have a problem writing the parallel instructions for a code that work like this:
// every iteration depends on the previous one
for (int iter = 0; iter < numIters; ++i)
{
#pragma omp parallel for num_threads(numThreads)
for (int p = 0; p < numParticles; ++p)
{
p_velocity_calculation(...);
}
// implicit sync barrier
#pragma omp parallel for num_threads(numThreads)
for (int p = 0; p < numParticles; ++p)
{
p_position_calculation(...);
}
}
The program is about a n-body simulation where first I need to calculate the velocities and then the positions of a set of particles, hence the separation of the two for-loops.
The code runs as expected but, from what I have read, the thread pools created by the #pragma omp directives are created and destroyed on every iteration of the outer for-loop, and I don't want to waste resources recreating them.
So my question is: how can I reuse those thread pools instead of creating/destroying the threads on every iteration?
| First of all: the thread pools are not destroyed, only suspended.
Next: Have you timed this and found that creating the threads is a limiting factor in your application? If not, don't worry.
Or to put it constructively: I have timed it, and unless you have an extremely short omp parallel for and you call it tens of thousands of times, the overhead is negligible.
But if you are really worried, put the omp parallel outside the time loop, and do an omp for around each particle loop. You will do some redundant work between the for loops, which you can either accept or wrap in an omp master if it affects global variables.
But really: I wouldn't worry.
|
73,941,872 | 73,941,989 | Copying an array of Object Pointers | What would be the correct way to copy an array of pointers pointing to a certain object into another object through the constructor?
Assuming that:
// ClassA.h
class ClassA {
ClassB** m_classBs{};
public:
ClassA(const ClassB* classBs[], size_t cnt);
};
ClassA::ClassA(const ClassB* classBs[], size_t cnt) {
m_classBs = new ClassB*[cnt];
for (size_t i = 0; i < cnt; i++) {
m_classBs[i] = &classBs[i];
// I have tried here using *m_classBs[i] = &classBs[I];
// and a lot of variations but neither seems to work for me
// at the moment. I am trying to copy the array of pointers
// from classBs[] (parameter in the constructor) to m_classBs
}
}
| You have to use the correct type:
class ClassA {
const ClassB** m_classBs;
public:
ClassA(const ClassB* classBs[], size_t cnt);
~ClassA() { delete[] m_classBs; }
};
ClassA::ClassA(const ClassB* classBs[], size_t cnt) : m_classBs(new const ClassB*[cnt]) {
for (size_t i = 0; i < cnt; i++) {
m_classBs[i] = classBs[i];
}
}
I prefer to completely avoid new whenever possible:
class ClassA {
std::vector<const ClassB*> m_classBs;
public:
ClassA(const ClassB* classBs[], size_t cnt);
};
ClassA::ClassA(const ClassB* classBs[], size_t cnt) : m_classBs(cnt) {
for (size_t i = 0; i < cnt; i++) {
m_classBs[i] = classBs[i];
}
}
|
73,941,895 | 73,942,807 | Why in my code cpp compare_exchange_strong updates and return false | The problem:
So I'm pretty new to C++, and I was trying to implement a simple comparison routine using some atomicity concepts.
The problem is that I'm not getting the desired result: even after compare_exchange_strong updates the value of the atomic variable (std::atomic), it returns false.
Below is the program code:
CPP:
Action::Action(Type type, Transfer *transfer)
: transfer(transfer),
type(type) {
Internal = 0;
InternalHigh = -1;
Offset = OffsetHigh = 0;
hEvent = NULL;
status = Action::Status::PENDING;
}
BOOL CancelTimeout(OnTimeoutCallback* rt)
{
auto expected = App::Action::Status::PENDING;
if (rt->action->status.compare_exchange_strong(expected, App::Action::Status::CANCEL))
{
CancelWaitableTimer(rt->hTimer);
return true;
}
return false;
}
HEADER:
struct Action : OVERLAPPED {
enum class Type : long {
SEND,
RECEIVE
};
enum class Status : long {
PENDING,
CANCEL,
TIMEOUT
};
atomic<Status> status;
Transfer *transfer = NULL;
Type type;
WSABUF *data = NULL;
OnTimeoutCallback *timeoutCallback;
Action(Type type, Transfer *transfer);
~Action();
}
Reviewing, the value of the variable rt->action->status is updated to the Action::Status::CANCEL enum, but the return of the compare_exchange_strong function is false.
See the problem in debug:
That said, the desired result is that the first breakpoint, referring to return true, would be triggered instead of return false, taking into account that it changed the value of the variable.
UPDATE: In the screenshot I accidentally removed the first breakpoint, but I think it is still understandable.
Attempts already made
Modify the structure to: enum class Status : long
Modify the structure to: enum class Status : size_t
Modify the positions of all structure items
Similar topics already searched
[but without success]
Link
Search term
Why does compare_exchange_strong fail with std::atomic<double>, std::atomic<float> in C++?
compare_exchange_strong fail
cpp compare_exchange_strong fails spuriously?
compare exchange fails
Don't really get the logic of std::atomic::compare_exchange_weak and compare_exchange_strong
std::atomic::compare_exchange_weak and compare_exchange_strong
Does C++14 define the behavior of bitwise operators on the padding bits of unsigned int?
Padding problem compare exchange
Among several other topics with different search words
Importante Notes
The code is multi-threaded
There is nowhere else in the code where the value of the atomic
variable is being updated to the enum
Action::Status::CANCEL
I suspect it's something to do with padding (due to some Google
searches), but as I'm new to CPP, I don't know how to modify my
framework to solve the problem
A new instance of the Action structure is generated at each request,
and I also made sure that there is no concurrency occurring on the
same pointer (Action*), because with each change, a new instance of
the Action structure is generated
WAIT!
It is worth mentioning that I am using Google Translate to post this question. If something is not right, my question is incomplete, or it is formatted in an inappropriate way, please comment so I can adjust it. Thank you in advance,
Lucas P.
Updates:
I was not able to replicate the problem using a minified version of the code, that being said, I have to post the entire solution (which in turn is already quite small, as it is a project for studies):
https://drive.google.com/file/d/13fP7OUCC6GeMgUtrPHSOnSGUEBwDGqBC/view?usp=sharing
| TL:DR: race condition between another thread modifying it vs. the debugger getting control and reading memory of the process being debugged.
Or the value had been Action::Status::CANCEL for a long time, not expected = App::Action::Status::PENDING;, in which case a single thread running alone could have this behaviour. I assume your program expects this CAS to fail only when two threads are trying to do this around the same time, like only calling this function in the first place if something was pending.
I assume there's another thread that could call CancelTimeout at the same time, otherwise you wouldn't need an atomic RMW. (If this was the only thread that modified it, you'd just check the value, and do a pure store of the new value after a manual compare, like .store(CANCEL), perhaps with std::memory_order_release or relaxed.)
This would explain your observations:
Another thread won the race to modify rt->action->status, so its CAS returned true.
CAS_strong in this thread didn't modify the variable, and returned false.
The if body in this thread didn't run, so this thread hit your breakpoint.
After the debugger eventually got control and all threads of the process were paused, the debugger asked the kernel to read memory of the process being debugged. Since our CAS failed, the other thread's update of rt->action->status must have already happened, so the debugger will see it.
(Especially after all the time it takes for the debugger to get control, the dust will have time to settle. But assuming you're using an x86 or ARMv8, stores in one thread being visible to any other thread mean they're globally visible, to all threads; those ISAs are multi-copy atomic, no IRIW reordering.)
So CAS failed precisely because some other thread already changed the value. It wasn't changed by the thread where CAS failed. Your breakpoint will trigger whenever CAS fails, regardless of the value before or after the CAS.
For CAS_strong to actually return false and update the value, your compiler or CPU would have to be buggy. Those are possible (especially a compiler bug), but are extraordinary claims that require very carefully ruling out software causes of the same observations. That should never be your first guess when you haven't yet sorted out all the details and aren't sure you understand everything that's going on.
If you think a primitive operation didn't do what the docs said it does, it's almost always actually a bug somewhere else, or missing some possible explanation for what you're seeing that doesn't require a compiler bug to explain.
It's fine to ask a Stack Overflow question about what's going on, but keep in mind when writing your title that it's extremely unlikely that your C++ compiler is actually broken.
|
73,943,280 | 73,943,399 | Expanded from macro & Expected unqualified-id Errors | I'm currently studying global - local variables in C++. From what I understand, we can use the same variable name as global and local variable in the same program (in my program I've used 'g' as the same variable name). However, when I tried to use the same variable name as definition (#define) I got an "expanded from macro 'g' " error. Why is this happening? I thought defined variables work similarly to global variables, is there such a rule that we can't use the same variable name for global and #define at the same time? (My goal was to see if #define was more dominant than global variables).
Please help this newbie out!
Code:
#include <iostream>
#define g 5
using namespace std;
int g = 10;
int main()
{
int g = 20;
cout << g;
return 0;
}
| #define does not define a variable. It defines a macro that is replaced as a sequence of tokens in the rest of the code before any of the code is actually compiled. There is no scope to them.
There is literally a translation step (called the preprocessor) which will replace the g with 5 everywhere in the rest of the code, so that the program that will be compiled is actually
/* contents from <iostream> here */
using namespace std;
int 5 = 10;
int main()
{
int 5 = 20;
cout << 5;
return 0;
}
I think you can see why that doesn't work.
Use global const variables instead of #define to define constants. Preprocessor macros have no relation to variables or scopes.
Compilers also have options to show you the code after the preprocessing phase (instead of going on to the compilation phase). For example for GCC and Clang that would be the -E command line option. You can see the full output for your program here. (Scroll down to the very bottom on the right pane with the output. All that text before your code are the expanded contents from #include<iostream>.)
|
73,943,439 | 73,943,566 | Comparing String (.substr()) to Character (.back()) in C++ | I am trying to compare whether the last letter of the word "A", and the next letter of the word "B" is the same.
I'm iterating over "B"; and comparing within an if statement:
string A = "";
string B = "XYZTTTTLMN";
for (long i = 0; i < B.length(); i++) {
if ( A.back() != B.substr(i,1) ) { ... }
}
I receive an error that says I can't compare a string to a character. But, as far as I can see, A.back() returns a character and B.substr() returns a single digit string, which should be OK?
Any ideas on what I can do?
Much appreciated, thank you!
In C++, a string of size 1 is not the same as a character.
That is why you are getting the error: a string and a character cannot be compared the way you did.
To solve this you can use below code:
for (long i = 0; i < B.length(); i++) {
if ( A.back() != B.at(i) ) { ... }
}
OR
for (long i = 0; i < B.length(); i++) {
if ( A.back() != B[i] ) { ... }
}
|
73,944,851 | 73,946,794 | How to paint image inside of the entire QScrollArea? | Steps to reproduce using Qt Designer:
1- Add a "Grid Layout"
2- Right click in the MainWindow background and
Lay out -> Lay out in Grid
3- Add a "Scroll Area"
4- Add a "Frame"
5- Right click in the "Scroll Area" and
Lay out -> Lay out in Grid
6- Add a "Label" into the "Frame" and right click it
and Lay out -> Lay out in Grid
7- Add a "border-image" in the Label StyleSheet
Result:
Looks like its padding 10px in each side.
Would like to ask how i could make the image added into the QLabel to occupy the entire QScrollArea, filling these empty areas around the image where are marked with a X.
If possible using stylesheet.
| Widget's geometry is not the problem. If your widget is inside layout, then you do not need to care about geometry at all because it is automatically calculated by the layout. You only may need to care about the geometry of the whole window, but not child widgets.
The blank space around your widget is determined by layout's contents margins. Set the contents margins for the layout - set them all to zero. See the picture. If you have multiple layers of layout, you may want to set zero margins for all of them. It depends on your usecase.
You can also set the margins in code using yourLayout->setContentsMargins(0, 0, 0, 0);.
|
73,945,008 | 73,945,320 | How can i transfer elements from queue to stack? | A queue Q containing n items and an empty stack s are given. It is required to transfer all the items from the queue to the stack so that the item at the front of queue is on the TOP of the stack, and the order of all other items are preserved.
I tried coding it but it only prints the elements of queue.
#include <iostream>
#include <queue>
#include <stack>
using namespace std;
void Stack(queue<int>& q)
{
queue<int> s;
while(!q.empty())
{
q.push(s.top());
q.pop();
}
while(!s.empty())
{
s.push(q.front());
s.pop();
}
while(!q.empty())
{
q.push(s.top());
q.pop();
}
}
void printStack(queue<int> a)
{
while(!a.empty())
{
cout<<a.front()<<" ";
a.pop();
}
}
int main()
{
queue<int> q;
q.push(1);
q.push(2);
q.push(3);
q.push(4);
cout<<"Queue: ";
printStack(q);
cout<<"Stack: ";
return 0;
}
| Your transfer function (Stack) is wrong. The exercise is to use the queue and the stack to:
Empty the queue by pushing each element on to the stack.
Empty the stack by popping each element and pushing it into the queue
Empty the queue by pushing each element on to the stack.
The result will produce a stack whose top is the same as the original queue's front.
It should look like this:
#include <iostream>
#include <queue>
#include <stack>
using namespace std;
stack<int> Stack(queue<int>& q)
{
stack<int> s; // notice : stack, not queue
while (!q.empty())
{
s.push(q.front());
q.pop();
}
while (!s.empty())
{
q.push(s.top());
s.pop();
}
while (!q.empty())
{
s.push(q.front());
q.pop();
}
return s;
}
int main()
{
queue<int> q;
for (int i=1; i<=10; ++i)
q.push(i);
// transer the queue to a stack
stack<int> s = Stack(q);
// print (and empty) the stack.
while (!s.empty())
{
std::cout << s.top() << ' ';
s.pop();
}
std::cout.put('\n');
}
Output
1 2 3 4 5 6 7 8 9 10
|
73,945,017 | 73,949,085 | Contradictory SFINAE on constructor using std::is_constructible | Why is the following code behaving as commented?
struct S
{
template<typename T, typename = std::enable_if_t<!std::is_constructible_v<S, T>>>
S(T&&){}
};
int main() {
S s1{1}; // OK
int i = 1;
S s2{i}; // OK
static_assert(std::is_constructible_v<S, int>); // ERROR (any compiler)
}
I get that for the constructor to be enabled, the assertion must be false. But S still is constructed from int in the example above! What does the standard say and what does the compiler do?
I would assume that before enabling the template constructor, S in not constructible so std::is_constructible<S, int> instantiates to false. That enables the template constructor but also condemns std::is_constructible<S, int> to always test false.
I've also tested with my own (pseudo?) version of std::is_constructible:
#include <type_traits>
template<typename, typename T, typename... ARGS>
constexpr bool isConstructibleImpl = false;
template<typename T, typename... ARGS>
constexpr bool isConstructibleImpl<
std::void_t<decltype(T(std::declval<ARGS>()...))>,
T, ARGS...> =
true;
template<typename T, typename... ARGS>
constexpr bool isConstructible_v = isConstructibleImpl<void, T, ARGS...>;
struct S
{
template<typename T, typename = std::enable_if_t<!isConstructible_v<S, T>>>
S(T&&){}
};
int main() {
S s1{1}; // OK
int i = 1;
S s2{i}; // OK
static_assert(std::is_constructible_v<S, int>); // OK
}
I suppose that it is because now std::is_constructible is not sacrificed for SFINAE in the constructor. isConstructible is sacrificed instead.
That brings me to a second question: Is this last example a good way to perform SFINAE on a constructor without corrupting std::is_constructible?
Rationale: I ended up trying that SFINAE pattern to tell the compiler not to use the template constructor if any of the other available constructors matches (especially the default ones), even imperfectly (e.g., a const & parameter should match a & argument and the template constructor should not be considered a better match).
| Your first code example is undefined behaviour, because S is not a complete type within the declaration of itself. std::is_constructible_v however requires all involved types to be complete:
See these paragraphs from cppreference.com:
T and all types in the parameter pack Args shall each be a complete
type, (possibly cv-qualified) void, or an array of unknown bound.
Otherwise, the behavior is undefined.
If an instantiation of a template above depends, directly or
indirectly, on an incomplete type, and that instantiation could yield
a different result if that type were hypothetically completed, the
behavior is undefined.
This makes sense because in order for the compiler to know if some type can be constructed it needs to know the full definition. In your first example, the code is kind of recursive: the compiler needs to find out if S can be constructed from T by checking the constructors of S which themselves depend on is_constructible etc.
|
73,945,190 | 73,945,279 | Can I inherit from std::array and overload operator []? | The question is rather straightforward. I'm trying to define a custom array class which has all the features of a normal std::array, but I want to add the ability of using operator [] with a custom type defined in my codebase.
One option is to wrap my class around the std::array, something like:
using MyType = double; // just an example
template<typename T, unsigned Size>
class MyArray
{
private:
std::array<T, Size> m_array{ 0 };
public:
T& operator [](MyType d) { return m_array[abs(round(d))]; }
void print()
{
for (int i : m_array) {
std::cout << i << std::endl;
}
}
};
int main()
{
MyType var = 0.5649;
MyArray<int, 5> vec;
vec[var] = 1;
vec.print();
return 0;
}
and the output is
0 1 0 0 0
as expected. The downside of this is that I can't access all the rest of the typical std::array interface. So I thought why not let my array inherit from std::array:
using MyType = double; // just an example
template<typename T, unsigned Size>
class MyArray : public std::array<T, Size>
{
public:
// Here I explicitely cast (*this) to std::array, so my operator [] is defined in terms
// of the std::array operator []
T& operator [](MyType var) { return std::array<T, Size>(*this)[abs(round(var))]; }
void print()
{
for (int i : (*this)) {
std::cout << i << std::endl;
}
}
};
int main()
{
MyType var = 2.123;
MyArray<int, 5> vec{ 0 };
vec[var] = 1;
vec.print();
return 0;
}
but in this case the output is
0 0 0 0 0
no matter the value or var, which means MyArray isn't working as it should. What am I doing wrong? Is there something fundamentally wrong or impossible when inheriting an std::array?
| The inheritance itself is legal, as long as you don't try to delete the derived class through a pointer to the base class, which lacks the virtual destructor.
The problem is that return std::array<T, Size>(*this)[abs(round(var))]; creates a temporary std::array and returns a reference to its element, which immediately becomes dangling, since the temporary is destroyed when the function returns.
You want return std::array<T, Size>::operator[](abs(round(var))); to call the operator[] of the base class. Alternatively, you can do static_cast<std::array<T, Size> &>(*this)[abs(round(var))];.
Also note that braced std::array initialization is wonky when you have nested braces. E.g. std::array<std::pair<int, int>, 2> x = {{1,1},{1,1}}; doesn't work, and requires an extra set of braces: std::array<std::pair<int, int>, 2> x = {{{1,1},{1,1}}};.
But if you add inheritance, this too stops working, and you need yet another pair of braces.
|
73,945,612 | 73,945,719 | c++ - How can I delete an undeclared pointer? | If the new operator was written in a function argument or constructor argument like:
Foo* f = new Foo(new Baz(0, 0));
// how to delete "new Baz(0, 0)"?
delete f;
I know it can be written to:
Baz* b = new Baz(0, 0)
Foo* f = new Foo(b);
delete b;
delete f;
But it became complicated, is there any better way?
Should I do something like this:
Class Foo {
private:
Baz* _b;
public:
Foo(Baz* b) : _b(b) {}
~Foo() {
delete _b;
}
}
// so that when I delete foo will also delete baz;
| Following my own recommendation about the rule of zero and using std::unique_ptr, I would do something like this:
class Foo
{
public:
Foo(int a, int b)
    : baz_{ std::make_unique<Baz>(a, b) }
{
}
private:
std::unique_ptr<Baz> baz_;
};
And
auto foo = std::make_unique<Foo>(0, 0);
With this the foo object own its own unique Baz object.
If you want Foo to by copyable then use std::shared_ptr and std::make_shared instead.
|
73,945,659 | 73,945,891 | Question about array new placement in C++ | I have question about new placement of array in c++.
below code is a sample code that i made.
#include <vector>
#include <iostream>
class Point
{
int x,y;
public:
Point():x(0), y(0){std::cout<<"Point() : "<<this<<std::endl;}
void print(){std::cout<<x<<":"<<y<<std::endl;}
Point(int a, int b) :x(a), y(b){std::cout<<"Point(int,int) : value & addr "<<a<<":"<<b<<" ~ "<<this<<std::endl;}
~Point(){std::cout<<"~Point() : "<<this<<" "<<x<<":"<<y<<std::endl;}
};
int main()
{
// multiple allocation
void* mem_ptr_arr = operator new(sizeof(Point)*3);
for(int i=0; i<3; i++)
new( mem_ptr_arr+sizeof(Point)*i ) Point(i,i);
Point* ref_ptr_arr = static_cast<Point*>(mem_ptr_arr);
// delete process
for(int i=0; i<3; i++)
(ref_ptr_arr+i)->~Point();
operator delete(ref_ptr_arr);
Point* new_ptr = new Point[3]{};
delete[] new_ptr;
return 0;
}
I want to duplicate the functionality of the new and delete operations.
So I broke each operation down like below:
new -> operator new + new(some_ptr) Constructor
delete -> Obj.~Destructor + delete(some_ptr)
My question is: is this a correct usage of placement new for the array (ref_ptr_arr)?
When I debug the memory, the allocations do not reuse the same heap address after the previous pointer is deleted.
| I would simplify this way:
First, just allocate the array with a low level function like malloc() or mmap(), brk() and dispose of it accordingly. It helps to keep the two worlds separated.
Second, when calling placement new you do not necessarily need to take that pointer if you already have it.
And last, it looks like you are doing void pointer arithmetic, which is not allowed in standard C++ (it only compiles as a compiler extension).
// multiple allocation
Point* points = (Point*)std::malloc(sizeof(Point)*3);
for(int i=0; i<3; i++)
new ( &points[i] ) Point(i,i);
// delete process
for(int i=0; i<3; i++)
points[i].~Point();
std::free( points );
|
73,946,196 | 73,946,387 | Visual Studio compiles with CMake in x64-Debug mode but in x64-Release mode "could not find any instance of Visual Studio" | I'm using Visual Studio to open a CMake C++ project. When I set it to x64-Debug mode at the top and compile, it works fine. However, when I change it to x64-Release, it suddenly tells me:
CMake Error at CMakeLists.txt:5 (project):
Generator
Visual Studio 16 2019
could not find any instance of Visual Studio.
I'm literally using Visual Studio, why does CMake not find it anymore when building in release mode? I know that this code base compiled perfectly fine before and I didn't make any changes to it since then. Also, I tried to repair/reinstall Visual Studio but the issue persists. Can I verify some environment variables or whatever CMake checks?
When running CMake from CMD directly to generate a Visual Studio solution (cmake -G "Visual Studio 16 2019"), I get the same error as above. Running from x64 Native Tools Command Prompt for VS 2022" or Developer Command Prompt for VS 2022 does not help either.
Desktop development with C++ and C++ CMake tools for Windows are both installed.
>cmake --version
cmake version 3.24.2
CMake suite maintained and supported by Kitware (kitware.com/cmake).
| It turns out that deleting all Visual Studio generated files/folders and setting up a new x64-Release configuration solved the problem.
|
73,946,197 | 73,947,962 | Does boost asio strand run all handlers on the same thread? | The boost asio documentation talks about executors but I can't see if that implies the same thread.
The reason I'm curious about this is that the purpose of a strand seems to be to allow the developer not to have to worry about multithreading issues. If that is the case then I see two options for a strand, assuming more than one thread in the io_service/context:
Run all handlers on the same thread for the lifetime of the strand
Use different threads but
Use some mechanism to make sure one handler runs after the other
If run on a different thread make sure you use a memory fence so the second handler sees all updates from the previous handler
(I strongly suspect it is impossible to do 2.1 without doing 2.2)
The problem with 2. is that it hits performance because it requires fences, but I don't see anywhere that explicitly says a strand always uses the same thread.
| No.
All that is guaranteed is that
all handlers are invoked on a thread that is invoking run[_one] or poll_[one] on the execution context. This is is true for handlers on any executor
strand executors add the guarantee that handlers are only invoked sequentially (non-concurrently) and in the order they were posted (see e.g. https://www.boost.org/doc/libs/1_80_0/doc/html/boost_asio/reference/io_context__strand.html#boost_asio.reference.io_context__strand.order_of_handler_invocation)
So yes, depending on situation there may be overhead. However, in some situations there may be optimizations (e.g. dispatch from the local strand, or running on a context with a concurrency hint).
|
73,946,484 | 73,947,166 | is there a null stream? optional printing | I use functions like
void doStuff(type thing, bool print = false, std::ostream& S = std::cout)
{
thing++;
if(print)
S << "new thing: " << thing << '\n';
}
so that I can use the same function and decide on call if I want it to print documentation of what is happening and, if I wish, print that on separate streams -I don't know if I can even do that with std::ostream-
I now believe that it will be better to do
void doStuff(type thing, std::ostream& S = NULL)
{
thing++;
if(S)
S << "new thing: " << thing << '\n';
}
but that doesn't work as std::ostream doesn't accept NULL
questions:
-is there some kind of constant of the stream type that stops the if condition?
-can I use a different type of stream that is more flexible to accept streams like string streams and file streams?
-is there a better way to handle flexible documentation?
Writing your own streams is not difficult; the technique can be found in good books.
I created a "NULL" stream class for you.
Please see the very simple example below.
#include <iostream>
// This is a stream which does not output anything
class NullStream : public std::ostream
{
// streambuffer doing nothing
class NullBuffer : public std::streambuf
{
public:
int overflow(int c) noexcept override { return c; }
} nullBuffer;
public:
#pragma warning(suppress: 26455)
NullStream() : std::ostream(&nullBuffer) {}
NullStream(const NullStream&) = delete;
NullStream& operator=(const NullStream&) = delete;
};
// Define a global null stream
NullStream nout;
void doStuff(int& i, std::ostream& stream = nout)
{
i++;
stream << i << '\n';
}
int main() {
int i{};
doStuff(i, std::cout);
doStuff(i, std::cout);
doStuff(i, std::cout);
doStuff(i);
doStuff(i);
doStuff(i);
doStuff(i, std::cout);
}
|
73,946,791 | 73,946,996 | Macro for nested maps in C++ | Is it possible to create simple interface to create nested std::maps in C++? If this is possbile, can I go advanced and make it with different nested maps/vectors
CreateMaps(4); returns std::map<int,std::map<int,std::map<int,std::map<int,int>>>>
CreateMaps(3); returns std::map<int,std::map<int,std::map<int,int>>>
I am not really sure if this counts as macro or no.
My final target is to create maps at init and I want to divide into categories, type_0 has X subtypes which has Y subtypes and so on..., and I want to count how many times I reach certain scenarios. Creating the map is defined after parsing a file, so I don't know the size and the number of nested maps at compile time.
| Yes you can, even without macros, but by using recursive templates. Here is what it could look like:
// Recursive definition
template<typename T, size_t N>
struct NestedMap {
using type = std::map<T, typename NestedMap<T, N - 1>::type>;
};
// Termination condition for the recursion
template<typename T>
struct NestedMap<T, 0> {
using type = T;
};
// Just a convenience
template<typename T, size_t N>
using NestedMap_t = typename NestedMap<T, N>::type;
And then you can use it like so:
NestedMap_t<int, 4> quadmap;
quadmap[1][2][3][4] = 42;
However, nesting containers is often not very efficient, and you might get better performance by flattening your data structure. If you want to have a map that is indexed by four integers, then you could also do:
std::map<std::array<int, 4>, int> quadmap;
quadmap[{1, 2, 3, 4}] = 42;
The above types are fixed at compile time. If you want something more flexible, you should make the map's key and value more dynamic. Consider:
std::map<std::vector<int>, std::any> anymap;
anymap[{1, 2, 3, 4}] = 42;
|
73,947,153 | 73,951,888 | Visual Studio 2022 behaving weirdly in C++, not showing errors in code | I'm currently learning C++ and encountered a strange behavior of VS. Errors are not showing up in code (even though IntelliSense is enabled - I checked settings) and the lines in the error list are probably wrong too.
main.cpp:
#include <iostream>
#include <string>
#include "fraction.h"
using namespace std;
int main()
{
fraction a = fraction::fraction(2, -4);
cout << a.toString();
}
fraction.h:
#pragma once
#include <string>
class fraction
{
private:
int a;
int b;
void Optimise()
{
if (b < 0)
{
a = -a;
b = -b;
}
if (a % b == 0 || b % a == 0)
{
for (int i = 2; i <= b; i++)
{
if (a % i == 0 && b % i == 0)
{
a /= i;
b /= i;
}
}
}
}
public:
fraction(int numerator, int denominator)
{
a = numerator;
if (denominator != 0)
{
b = denominator;
}
else
{
//exception
}
Optimise();
}
string toString()
{
return a + "/" + b;
}
};
Code looks normal in text editor, however it won't compile and it shows these errors
Thanks for help! :)
Edit:
The reason was not writing std:: in front of strings in fraction.h, but VS did not highlight the error, see long answer below.
| The reason the code was not compiling was not writing std::string or using namespace std at the top of fraction.h, as @molbdnilo has pointed out. What confused me was that VS highlighted the code and knew I meant to write std::, but the compiler did not.
Also, I would like to point out my mistake (which has nothing to do with compiler) in my work with strings (since I'm still learning).
Instead of:
string toString()
{
return a + "/" + b;
}
Should be:
std::string toString()
{
return std::to_string(a) + "/" + std::to_string(b);
}
|
73,948,048 | 73,948,102 | Is there a function for moving the contents of a char array a certain amount of addresses back in C++? | I have the following code for Arduino (C++). This is for a checksum consisting of 2 characters forming a base 16 value between 0 and 255. It takes int outputCheckSum and converts it to char outputCheckSumHex[3].
itoa (outputCheckSum, outputCheckSumHex, 16)
if (outputCheckSum < 16) { //Adds a 0 if CS has fewer than 2 numbers
outputCheckSumHex[1] = outputCheckSumHex[0];
outputCheckSumHex[0] = '0';
}
Since the output of itoa would be "X" instead of "0X" in the event of X having fewer than 2 characters, the last 3 lines are to move the characters one step back.
I now have plans to scale this up to a CS of 8 characters, and I was wondering whether there exists a function in C++ that can achieve that before I start writing more code. Please let me know if more information is required.
| You should be able to use memmove, it's one of the legacy C functions rather than C++ but it's available in the latter (in cstring header) and handles overlapping memory correctly, unlike memcpy.
So, for example, you could use:
char buff[5] = {'a', 'b', 'c', '.', '.'};
memmove(&(buff[2]), &(buff[0]), 3);
// Now it's {'a', 'b', 'a', 'b', 'c'} and you can replace the first two characters.
Alternatively, you could use std::copy from the algorithm header but, for something this basic, memmove should be fine.
|
73,949,214 | 73,952,297 | Libtorch C++: Efficient/correct way for saving/loading Model and Optimizer State Dict for retraining | I am looking for the correct and most efficient way of saving, loading, and retraining a model in Libtorch (C++) with both the model and optimizer state dict. I believe I have everything correctly set (however this may not be right for saving and loading optimizer state dicts, only the model state dict I am absolutely sure of), my last question is where I set the Optimizer and give it the model parameters.
Saving Model and Optimizer:
// Save model state
torch::serialize::OutputArchive output_model_archive;
myModel.to(torch::kCPU);
myModel.save(output_model_archive);
output_model_archive.save_to(model_state_dict_path);
// Save optim state
torch::serialize::OutputArchive output_optim_archive;
myOptimizer->save(output_optim_archive);
output_optim_archive.save_to(optim_state_dict_path);
Loading model and optim state for retraining.
// Load model state
torch::serialize::InputArchive input_archive;
input_archive.load_from(state_dict);
myModel.load(input_archive);
// Load optim state
torch::serialize::InputArchive input_archive;
input_archive.load_from(state_dict);
myOptimizer->load(input_archive);
When creating the optimizer object, you need to give it the model parameters:
std::shared_ptr<torch::optim::Optimizer> myOptimizer;
myOptimizer.reset(new torch::optim::Adam(myModel.parameters(), torch::optim::AdamOptions(LR)));
Should this be done before the state dicts are loaded, after, or does it matter? For example, I am doing it like:
// Setup model and optimizer object, set model params in optimizer
// Load state dictionaries...
// Train epoch #n...
myOptimizer->step();
// Save state dictionaries
| To answer my own question, the model state dict needs to be loaded and then parameters put into the optimizer object. Then load the state dict into the optimizer object.
My use case was a little more complicated as I was aggregating gradients from multiple nodes where training was happening and doing an optimizer step on a "master" node. I was trying to simplify the problem above for the question, and I assumed I did not need the previous state dict since I was aggregating gradients. That was an incorrect assumption. The flow looks like:
// Load model state dict
// Aggregate gradients
// Load Optimizer state dict / params into optim
// Step
|
73,950,786 | 73,951,445 | How to create a view over a range that always picks the first element and filters the rest? | I have a collection of values:
auto v = std::vector{43, 1, 3, 2, 4, 6, 7, 8, 19, 101};
Over this collection of values I want to apply a view that follows this criteria:
First element should always be picked.
From the next elements, pick only even numbers until ...
... finding an element equal or greater than 6.
This is the view I tried:
auto v = std::vector{43, 1, 3, 2, 4, 6, 7, 8, 19, 101};
auto r = v |
std::views::take(1) |
std::views::filter([](const int x) { return !(x & 1); }) |
std::views::take_while([](const int x) { return x < 6; });
for (const auto &x : r)
std::cout << x << ' ';
But the execution doesn't even enter the print loop because the view is empty. My guess is that all the criteria are applied at once:
Pick first element (43).
Is odd number.
View ends.
What I was expecting:
Pick first element without checking anything.
From the rest of elements, filter only even numbers (2, 4, 6, 8).
From filtered elements, pick numbers until a number equal to or greater than 6 appears (2, 4).
43 2 4 is printed.
How can I build a view over my collection of values that behaves as I was expecting?
| With range-v3, you can use views::concat to concatenate the first element of the range and the remaining filtered elements, for example:
auto v = std::vector{43, 1, 3, 2, 4, 6, 7, 8, 19, 101};
auto r = ranges::views::concat(
v | ranges::views::take(1),
v | ranges::views::drop(1)
| ranges::views::filter([](const int x) { return !(x & 1); })
| ranges::views::take_while([](const int x) { return x < 6; })
);
Demo
|
73,950,891 | 73,950,982 | C++ code example that makes the compile loop forever | Given that the C++ template system is not context-free and it's also Turing-Complete, can anyone provide me a non-trivial example of a program that makes the g++ compiler loop forever?
For more context, I imagine that if the C++ template system is Turing-complete, it can recognize all recursively enumerable languages and decide over all recursive ones. So, it made me think about the acceptance problem, and its more famous brother, the halting problem. I also imagine that g++ must decide if the input belongs in the C++ language (as it belongs in the decidability problem) in the syntactic analysis. But it also must resolve all templates, and since templates are recursively enumerable, there must be a C++ program that makes the g++ syntactic analysis run forever, since it can't decide if it belongs in the C++ grammar or not.
I would also like to know how g++ deals with such things?
| While this is true in theory for the unlimited language, compilers in practice have implementation limits for recursive behavior (e.g. how deep template instantiations can be nested or how many instructions can be evaluated in a constant expression), so that it is probably not straight-forward to find such a case, even if we somehow ignore obvious problems of bounded memory. The standard specifically permits such limits, so if you want to be pedantic I am not even sure that any given implementation has to satisfy these theoretical concepts.
And also infinitely recursive template instantiation specifically is forbidden by the language. A program with such a construct has undefined behavior and the compiler can just refuse to compile if it is detected (although of course it cannot be detected in general).
|
73,951,046 | 73,951,244 | Collect positive floats & output the average when negative integer is input | I'm trying to write a program that reads in positive float numbers from the user and then, when the user's input is a negative number, gives the average of the numbers, excluding the negative.
#include <iostream>
using namespace std;
int main() {
float av_number, total = 0, input;
do {
for (int i = 1; i >= 1; i = ++i) {
cout << "Please input a number: ";
cin >> input;
total = total += input;
av_number = total / i;
cout << av_number << endl;
break;
}
} while (input >= 0);
cout << av_number << endl;
}
When I run this, the program simply adds the inputs together on each line, and then subtracts my final negative input before closing the program.
If I were to guess, it's likely a logical conflict within my sequence of do & for loops, but I'm unable to identify the issue. I may have also misused i in some fashion, but I'm not certain precisely how.
Any help would be appreciated as I'm still learning, Cheers!
|
You don't need the for loop; you just need a counter for the number of entered values, so you can delete the for loop and use a counter variable instead.
Also, you are breaking out of the loop without checking whether input < 0, so you should write this:
if (input < 0)
break;
Also, you shouldn't calculate av_number = total / counter; until after the end of the big while loop.
It's total += input;, not total = total += input;.
Writing while (input >= 0) wouldn't make sense as long as you are breaking inside the loop when input < 0, so you can write while (true); instead.
and this is the code edited:
#include <iostream>
using namespace std;
int main() {
float av_number = 0.0, total = 0, input = 0.0;
int counter = 0;
do {
cout << "Please input a number: ";
cin >> input;
if (input < 0)
break;
counter++;
total += input;
} while (true);
av_number = total / counter;
cout << av_number << endl;
}
and this is the output:
Please input a number: 5
Please input a number: 12
Please input a number: 7
Please input a number: -2
8
P.S : read more about Why is "using namespace std;" considered bad practice?
|
73,951,141 | 73,952,940 | "Must construct a QApplication before a QWidget" error, but only on Windows builds? | I am working on a CMAKE C++ project which uses the QT Libraries. (For me, 5.15.3, for others 5.12.x)
In this project, there is a class Vtk3DViewer : public QWidget. In its constructor, it tries to create one of its member variables, which is of type QVTKOpenGLNativeWidget. This is from the VTK libraries. (Located in include\vtk-9.2\QVTKOpenGLNativeWidget.h)
For me, this new QVTKOpenGLNativeWidget() call within the constructor of my QWidget fails with the following error:
"Must construct a QApplication before a QWidget"
But that's just it, we do create a QApplication in main() well before this point. And this only happens on Windows. Linux builds appear to not have any issue.
Switching from Debug to RelWithDebugInfo moves the error - making it happen much earlier and on creating a QToolBarExt instead.
Why is this happening, and how do I fix it?
Here is an example of main():
int main(int argc, char* argv[])
{
// Set info for settings & registry
QApplication::setOrganizationName(COMPANY_NAME);
QApplication::setOrganizationDomain(COMPANY_DOMAIN);
QApplication::setApplicationName(APP_NAME);
// Set up for software-based backend for VTK
QApplication::setAttribute(Qt::AA_UseOpenGLES);
QApplication a(argc, argv);
a.setWindowIcon(QIcon(":/main-window/favicon.ico"));
// Instantiate singletons
TaskExecutionManager::getInstance(); // Instantiate the task manager
DataDispatcher::getInstance();
// Create main window with default size
MainWindow w;
w.show();
// Start application event loop
return a.exec();
}
Then the main window's constructor calls:
void MainWindow::initializeMainWindow(Ui::MainWindow* ui)
{
this->setDockOptions(AnimatedDocks | AllowNestedDocks | AllowTabbedDocks | GroupedDragging);
// Main toolbar
m_topToolBar = new QToolBarExt(this); // This causes a "Must construct a QApplication before a QWidget" error
}
| The issue was that VTK was built in RELEASE, while our project was built in DEBUG.
(I did not see this as an issue, since we would never need to step into/debug VTK's code)
It appears this cryptic/incorrect error message will appear under these circumstances. Ensuring that VTK and the project that includes it are compiled the same way fixes it.
|
73,951,626 | 73,957,272 | Understand to which constrained edge a Steiner point belongs in constrained conforming triangulation CGAL | I followed the example reported here: https://doc.cgal.org/latest/Mesh_2/index.html (at paragraph "1.3 Example: Making a Triangulation Conforming Delaunay and Then Conforming Gabriel") to create a conforming constrained Delaunay triangulation using CGAL.
Making the triangulation conforming may introduce into the triangulation some Steiner vertices that are not present among the original input vertices. Is it possible to know to which original constrained edge a Steiner vertex belongs?
That is, when performing constrained triangulation, we can insert more than one CGAL::Polygon_2 into the triangulation as a constraint (it is done, for example, at this link from the CGAL manual: https://doc.cgal.org/latest/Triangulation_2/index.html); so, in other words, I would like to know to which of the original constrained edges (or, if not possible, to which polygon) a specific Steiner vertex belongs. Is this possible, and how could I achieve it?
If you use Constrained_triangulation_plus_2 with your current triangulation as the base triangulation, you will have a notion of subconstraints that gives you access to the vertices in the middle of the original constraints. However, if you have intersections between your input constraints, the intersection vertices will also be reported as being inside a constraint.
This example shows how to iterate over the input constraints and look at the vertices on each constraint.
|
73,951,667 | 73,951,711 | How to define a C++ concept for a range to a std::pair of reference wrappers? | See the code below (also here https://www.godbolt.org/z/hvnvEv1ar). The code fails to compile if I uncomment the constraint for either rng or pair. I feel like I am missing something trivial, but I can't figure out why the constraint is not satisfied.
#include <vector>
#include <ranges>
#include <utility>
template <typename T>
struct is_reference_wrapper : std::false_type {};
template <typename T>
struct is_reference_wrapper<std::reference_wrapper<T>> : std::true_type {};
template <typename T>
inline constexpr bool is_reference_wrapper_v = is_reference_wrapper<T>::value;
template <typename T>
concept ReferenceWrapper = is_reference_wrapper_v<T>;
template <typename T>
concept ReferenceWrapperPair = requires(const T& t) {
{ t.first } -> ReferenceWrapper;
{ t.second } -> ReferenceWrapper;
};
template <typename T>
concept ReferenceWrapperPairRange =
std::ranges::range<T> && ReferenceWrapperPair<std::ranges::range_value_t<T>>;
int main()
{
std::vector<std::pair<int, int>> v{ {1,2}, {3,4}, {5,6} };
auto fn = [](std::pair<int, int>& val) {
return std::pair{std::reference_wrapper<int>{val.first}, std::reference_wrapper<int>{val.second} };
};
/* ReferenceWrapperPairRange */ auto rng = v | std::views::transform(fn);
/* ReferenceWrapperPair */ auto pair = *(rng.begin());
ReferenceWrapper auto first = pair.first;
ReferenceWrapper auto second = pair.second;
return 0;
}
| The compound requirements { expression } -> type-constraint requires that decltype((expression)) must satisfy the constraints imposed by type-constraint, since decltype((t.first)) will be treated as an ordinary lvalue expression, it will result in a const lvalue reference type.
You might want to use C++23 auto(x) to get the decay type
template <typename T>
concept ReferenceWrapperPair = requires(const T& t) {
{ auto(t.first) } -> ReferenceWrapper;
{ auto(t.second) } -> ReferenceWrapper;
};
Or change ReferenceWrapper concept to:
template <typename T>
concept ReferenceWrapper = is_reference_wrapper_v<std::remove_cvref_t<T>>;
|
73,951,944 | 73,957,219 | how to use Point_set and k_neighbor_search in cgal? | I am trying to make the orthogonal_k_neighbors _search (or any kdtree based search) in cgal with points defined in a Point_set object.
The idea would be to fetch a point and its associated vector near a given query point. However, I am having issues with the correctness of the Traits adaptation for this specific case.
I think the Traits definition is not the expected one since, although the following code builds, it raises the error shown below at runtime.
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Point_set_3.h>
#include <CGAL/Search_traits_3.h>
#include <CGAL/Search_traits_adapter.h>
#include <CGAL/property_map.h>
#include <CGAL/Orthogonal_k_neighbor_search.h>
using Kernel = CGAL::Exact_predicates_inexact_constructions_kernel;
using Point_3 = Kernel::Point_3 ;
using Vector_3 = Kernel::Vector_3 ;
using Point_set = CGAL::Point_set_3<Point_3, Vector_3>;
using Search_base = CGAL::Search_traits_3<Kernel>;
using Traits = CGAL::Search_traits_adapter<Point_set::Index,
Point_set::Property_map<Point_3>,
Search_base> ;
using K_neighbor_search = CGAL::Orthogonal_k_neighbor_search<Traits>;
using Tree = K_neighbor_search::Tree;
using Splitter = Tree::Splitter;
using Distance = K_neighbor_search::Distance;
int main()
{
Point_set pSet;
pSet.add_normal_map();
pSet.insert(Point_3(0,0,0), Vector_3(0,1,0));
pSet.insert(Point_3(0.1,0,0), Vector_3(0,1,0));
pSet.insert(Point_3(0.2,0,0), Vector_3(0,1,0));
pSet.insert(Point_3(0.3,0,0), Vector_3(0,1,0));
pSet.insert(Point_3(0.4,0,0), Vector_3(0,1,0));
pSet.insert(Point_3(0.5,0,0), Vector_3(0,1,0));
Tree tree(pSet.begin(), pSet.end());//, Splitter(), Traits(Point_set::Property_map<Point_3>()));
Point_3 query(0.1, 0.15, 0);
K_neighbor_search search(tree, query, 3);
}
Raising this error
terminate called after throwing an instance of 'CGAL::Assertion_exception'
what(): CGAL ERROR: assertion violation!
Expr: parray_ != nullptr
File: .../cgal/Surface_mesh/include/CGAL/Surface_mesh/Properties.h
Line: 614
Aborted (core dumped)
I did try to modify the Traits to get the expected behavior, though I'm not really sure what the Key and PointPropertyMap should really point to in my case.
Any recommendation is welcome.
| You should write:
Traits traits(pSet.point_map());
Tree tree(pSet.begin(), pSet.end(), Splitter(), traits);
Point_3 query(0.1, 0.15, 0);
K_neighbor_search search(tree, query, 3, 0, true, Distance(pSet.point_map()));
Basically, you have to pass the point map to the traits and to the distance. The default point map is not valid as it needs a reference to the point set.
The API is not very intuitive, sorry about that. I'll try to improve it so that it's getting less confusing.
|
73,952,129 | 73,958,825 | A number K and an Array of size N is given, check whether we can get the sum of any two elements of array equal to K | I tried to solve the problem but my code still contains some bugs. Why isn't it running?
Here is the link of the question website: https://www.hackerearth.com/practice/data-structures/hash-tables/basics-of-hash-tables/practice-problems/algorithm/pair-sums/?
#include <iostream>
#include <bits/stdc++.h>
using namespace std;
const int n = 1e7 + 10;
int hsh[n];
int main()
{
int n, k;
cin >> n >> k;
int A[n];
for (int i = 0; i < n; i++)
{
cin >> A[i];
}
for (int i = 0; i < n; i++)
{
hsh[A[i]] = k - A[i];
}
int t = 0;
for (int i = 0; i < n; i++)
{
if (hsh[A[i]] == k - hsh[hsh[A[i]]])
{
cout << "YES";
t = 1;
break;
}
}
if (t == 0)
{
cout << "NO";
}
return 0;
}
The problem is that while hsh[A[i]] is always valid, hsh[hsh[A[i]]] is not.
Consider the following input:
1 1
10000
This does the following:
A[0] = 10000;
...
hsh[10000] = 1 - 10000; // = -9999
...
if (hsh[10000] == 1 - hsh[-9999]) {...}
So your code is reading out of bounds of the array hsh[]. Make sure you check first if hsh[A[i]] >= 0.
Note that your code is more complicated than necessary; you can do a single loop over the input to check if there is a matching pair:
#include <iostream>
static constexpr int max_k = 2e6;
static bool seen[max_k + 1];
int main()
{
int n, k;
std::cin >> n >> k;
for (int i = 0; i < n; ++i)
{
int A;
std::cin >> A;
if (A <= k && seen[k - A]) {
std::cout << "YES\n";
return 0;
}
seen[A] = true;
}
std::cout << "NO\n";
}
|
73,952,462 | 73,952,816 | pass string variable as a table name in an SQL query | My program takes in a string as a parameter, and attempts to create a table in an SQL database with the passed string parameter as the name of the table.
I have experience with SQL, but I'm new when it comes to implementing it in C++.
class logger {
public:
string name; //name of application to be logged
logger(string app) {
//open data base and get handle for future queries
name = app;
sqlite3 *db;
char *zErrMsg = 0;
int rc;
char *sql;
rc = sqlite3_open("@name", &db);
if( rc ){
fprintf(stderr, "Can't open database: %s\n", sqlite3_errmsg(db));
sqlite3_close(db);
}
sql = "create table if not exists" + name + "(timestamp varchar(255), message varchar(255));"
rc = sqlite3_exec(db, sql, callback, 0, &zErrMsg);
if( rc != SQLITE_OK ){
fprintf(stderr, "SQL error: %s\n", zErrMsg);
sqlite3_free(zErrMsg);
} else {
fprintf(stdout, "Table created successfully\n");
}
sqlite3_close(db);
}
};
Error:
logger.cpp:29:52: error: cannot convert ‘std::__cxx11::basic_string<char>’ to ‘char*’ in assignment
29 | sql = "create table if not exists " + name + "(timestamp varchar(255), message varchar(255));"
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| |
| std::__cxx11::basic_string<char>
| You can't build a C string the same way you build a C++ std::string.
Instead, make sql be a std::string, build the string, then use its c_str() method to get a C string pointer from it.
string sql;
...
rc = sqlite3_exec(db, sql.c_str(), callback, 0, &zErrMsg);
|
73,952,909 | 73,953,099 | Two different Arrays are referencing each other in C | I am very new to C / C++ and I am learning the basics. I want to write a program that prints the number of vowels as vowCount in a person's name that is an input. I have two char arrays, name[20] to hold the string input from the user and vowels[] to hold vowels.
I want vowCount to increase by 1 when two chars match between the arrays. If I input the name "John", the nested for-loop prints vowels[i] as a,e,i,o,u,J,o,h,n. What is my mistake here? I don't understand why vowels[i] also prints elements from name[20]. And vowCount is always the same as the size of name[], including the null character at the end of the string.
#include <stdio.h>
#include <string.h>
using namespace std;
int main() {
char name[20];
char vowels[] = {'a','e','i','o','u'};
int vowCount = 0;
printf("Enter your name: ");
scanf("%19s",name);
for(uint16_t counter = 0; name[counter]!= '\0' ;counter++ ){
char test = name[counter];
printf("CHECKING: %c \n", test);
for(uint8_t i =0; vowels[i] != '\0'; i++){
printf("COMPARING WITH VOWEL: %c\n", vowels[i]);
if(test == vowels[i]){
vowCount++;
}
}
}
printf("\n%i", vowCount);
}
| This array
char vowels[] = {'a','e','i','o','u'};
does not contain a string: a sequence of characters terminated by the zero character '\0'.
As a result this for loop
for(uint8_t i =0; vowels[i] != '\0'; i++){
invokes undefined behavior because neither element of the array vowels is equal to '\0'.
Instead you could declare the array either like
char vowels[] = {'a','e','i','o','u', '\0' };
or you could use a string literal to initialize the array
char vowels[] = { "aeiou" };
that is the same as
char vowels[] = "aeiou";
Pay attention to that in C there are no namespaces. If you want to write a C program then remove this line
using namespace std;
Also neither declaration from the header <string.h> is used in your program.
A C program can look the following way
#include <stdio.h>
#include <string.h>
#include <ctype.h>
int main( void )
{
char name[20];
const char vowels[] = "aeiou";
size_t vowCount = 0;
printf("Enter your name: ");
scanf( "%19s", name );
for ( size_t i = 0; name[i]!= '\0' ; i++ )
{
unsigned char c = name[i];
printf("CHECKING: %c \n", c );
if ( strchr( vowels, tolower( c ) ) != NULL )
{
vowCount++;
}
}
printf( "\n%zu\n", vowCount );
}
|
73,953,329 | 73,953,377 | Non macro based unit test frameworks | Bjarne Stroustrup said in a video on CppCon that, one of his desires for the future of the language is to remove the preprocessor ecosystem from the standard.
I am looking for a unit test framework whose design is not macro based, but I am unable to find anything decent.
Does anyone knows about one?
| No decent ones currently exist, because there are still some things that are impossible to do in C++ without macros (e.g. getting a stringified version of an assert expression). Before we can remove our usage of macros, the C++ language needs to evolve. Check back in 10 years.
In the meantime, I recommend Catch2.
|
73,953,724 | 73,953,805 | c++ How do I write a number sequence separated by commas correctly? | The sequence consists of all numbers from 1 to N (inclusive), with 1<=N<=100. All the odd numbers are listed first, then all the even numbers. For example, if you input 8, the code prints this: 1,3,5,7,2,4,6,8. However, when I input an odd number it adds a comma at the end. It's my first time here, no idea how to post it correctly...
Odd numbers here
> for(int i=1;i<=n;i++){
if(i%2!=0){
cout<<i<<",";
}
}
Even numbers here
for(int i=1;i<=n;i++){
if(i%2==0){
if(i==n){
cout<<i;
}
else{
cout<<i<<",";
}
}
}
| My pattern.
The comma doesn't get postfixed. It gets prefixed on everything but the first item.
// print odd
for (int i = 1; i <= n; i += 2)
{
if (i != 1)
{
cout << ",";
}
cout << i;
}
for (int i = 2; i <= n; i += 2)
{
cout << ",";
cout << i;
}
Or simplified:
for (int i = 1; i <= n; i += 2)
{
cout << ((i == 1) ? "" : ",") << i;
}
for (int i = 2; i <= n; i += 2)
{
cout << "," << i;
}
|
73,953,768 | 73,954,845 | Addressing stack variables | As far as I understand, stack variables are stored using an absolute offset to the stack frame pointer.
But how are those variables addressed later?
Consider the following code:
#include <iostream>
int main()
{
int a = 0;
int b = 1;
int c = 2;
std::cout << b << std::endl;
}
How does the compiler know where to find b? Does it store its offset to the stack frame pointer? And if so, where is this information stored? And does that mean that int needs more than 4 bytes to be stored?
| The location (relative to the stack pointer) of stack variables is a compile-time constant.
The compiler always knows how many things it's pushed to the stack since the beginning of the function and therefore the relative position of any one of them within the stack frame. (Unless you use alloca or VLAs1.)
On x86 this is usually achieved by addressing relative to the ebp or esp registers, which are typically used to represent the "beginning" and "end" of the stack frame. The offsets themselves don't need to be stored anywhere as they are built into the instruction as part of the addressing scheme.
Note that local variables are not always stored on the stack.
The compiler is free to put them wherever it wants, so long as it behaves as if it were allocated on the stack.
In particular, small objects like integers may simply stay in a register for the full duration of their lifespans (or until the compiler is forced to spill them onto the stack), constants may be stored in read-only memory, or any other optimization that the compiler deems fit.
Footnote 1: In functions that use alloca or a VLA, the compiler will use a separate register (like RBP in x86-64) as a "frame pointer" even in an optimized build, and address locals relative to the frame pointer, not the stack pointer. The amount of named C variables is known at compile time, so they can go at the top of the stack frame where the offset from them to the frame pointer is constant. Multiple VLAs can just work as pointers to space allocated as if by alloca. (That's one typical implementation strategy).
|
73,953,783 | 73,954,224 | "Cannot form reference to void" error even with `requires(!std::is_void_v<T>)` | I'm writing a pointer class and overloading the dereference operator operator*, which returns a reference to the pointed-to object. When the pointed-to type is not void this is fine, but we cannot create a reference to void, so I'm trying to disable the operator* using a requires clause when the pointed-to type is void.
However, I'm still getting compiler errors from GCC, Clang, and MSVC for the void case even though it does not satisfy the requires clause.
Here is a minimal example and compiler explorer link (https://godbolt.org/z/xbo5v3d1E).
#include <iostream>
#include <type_traits>
template <class T>
struct MyPtr {
T* p;
T& operator*() requires(!std::is_void_v<T>)
{
return *p;
}
};
int main() {
int x = 42;
MyPtr<int> i_ptr{&x};
*i_ptr = 41;
MyPtr<void> v_ptr{&x};
std::cout << *static_cast<int*>(v_ptr.p) << '\n';
std::cout << x << '\n';
return 0;
}
And here is the error (in Clang):
<source>:7:6: error: cannot form a reference to 'void'
T& operator*()
^
<source>:20:17: note: in instantiation of template class 'MyPtr<void>' requested here
MyPtr<void> v_ptr{&x};
^
1 error generated.
ASM generation compiler returned: 1
<source>:7:6: error: cannot form a reference to 'void'
T& operator*()
^
<source>:20:17: note: in instantiation of template class 'MyPtr<void>' requested here
MyPtr<void> v_ptr{&x};
^
1 error generated.
Execution build compiler returned: 1
However, if I change the return type of operator* from T& to auto&, then it works in all 3 compilers. If I use trailing return type auto ... -> T& I also get errors in all 3 compilers.
Is this a triple compiler bug, user error, or is this intended behavior?
In addition to Nelfeal's answer, let me give an alternative solution. The problem is not in the dependence of the requires condition on T, but in the return type T&. Let's use a helper type trait:
std::add_lvalue_reference_t<T> operator*()
requires(!std::is_void_v<T>)
{
...
}
It works because std::add_lvalue_reference_t<void> = void, which makes operator*() signature valid for T = void.
|
73,953,890 | 73,953,970 | How to validate bad input from a file stream? | I want to handle input validation in my such small program.
I have a file containing integer values separated by white spaces. But sometimes there's invalid values, for example letters or any other non-digit characters. So, I want to ignore those invalid values and push only the valid ints in a vector of integers.
Here is what I've tried out so far:
ifstream in("values");
if(!in)
throw runtime_error("File \'values\' was not properly opened!");
vector<int> v;
int x = 0;
while( !(in >> x).eof() ){
if( !in.fail() )
v.push_back(x);
else{
cout << "Invalid input\n";
in.clear();
in.ignore(numeric_limits<streamsize>::max(), '\n');
}
}
for(int val : v)
cout << val << ", ";
The input file values contain this:
5 7 23 16 81 v 1474 119 21 29 5 88*
The output:
Invalid input
5, 7, 23, 16, 81,
Why v causes reading to stop? And why clearing the stream didn't put it back in a valid state so that it can read the remaining valid value?
What matters to me a lot is: If I use std::cin input stream in the very same program with the very same input values, I get it correctly and as I expected:
Input:
5
7
23
16
81
v
Invalid input
1474
119
21
29
5
88*
Invalid input
Ctrl+D (eof)
Output:
5, 7, 23, 16, 81, 1474, 119, 21, 29, 5, 88,
| Your input file has everything on 1 line, so what do you think will happen when v is encountered and then in.ignore() is called to ignore everything up to the next '\n' (line break)? That's right, it will ignore the rest of the file!
When using cin, your input values are on separate lines, so calling ignore() with '\n' will ignore only the rest of the current line, leaving subsequent lines available for reading.
To solve your issue, try using ' ' (space) instead of '\n' (line break) when calling ignore() on your file stream:
in.ignore(numeric_limits<streamsize>::max(), ' ');
Online Demo
|
73,954,022 | 73,954,084 | Import C++ library that hasn't .dll or .lib files inside | So I have two examples of these libraries for printing nice tables. First and second.
I've watched MANY videos on "How to install/include/import a library into your C++ project" and each one talks about changing Visual Studio solution properties like C/C++ -> General -> Additional Include Directories, Linker -> General -> Additional Library Directories and finally Linker -> Input -> Additional Dependencies, which should contain paths to .h/.hpp or .dll/.lib files respectively.
So as we can see we don't have any of these files in the two libraries above.
Hence the question: how do I work with such libraries in my C++ project? I just need direct instructions because I don't think I'll ever understand otherwise. I am trying to install a C++ library for the first time.
Addition: I don't want to use any package managers.
| libfort has a README.md that describes how to integrate it with your project (a step that does not actually require prior compilation, as it is not a "lib" but merely some additional source files to be added to your project directly).
bprinter requires prior compilation / installation, and comes with a CMakeLists.txt configuration to do this. You'll need CMake to do it, but it makes things really simple for you. Check its documentation.
*.dll and *.lib are binary files compiled from the source you check out from a source repository. They are not supposed to be provided from a GitHub repo. Prior to using a compiled library in the way you described (by making the necessary adjustments in your project's VS configuration), you need to compile and install the library according to that project's documentation (as with bprinter).
|
73,954,645 | 73,954,933 | Can I use a map's iterator type as its mapped type? | I have a tree whose nodes are large strings. I don't really need to navigate the tree other than to follow the path from a node back to the root, so it suffices for each node to consist of the string and a pointer to its parent. I also need to be able to quickly find strings in the tree. The nodes of the tree themselves are not ordered, so this would require some sort of index. However, the strings are big enough that I would rather not duplicate them by storing them both in my tree and in my index.
I could implement both my tree and the index with a single std::map if the key for the map was the string and the mapped value was the pointer to its parent. However, I cannot figure out a way to write either of these types. My best guess would be something like this:
using Tree = std::map<std::string, typename Tree::const_iterator>;
or maybe:
using Node = std::pair<std::string const, typename Node const*>;
using Tree = std::map<std::string, Node const*>;
But these recursive type definitions don't compile. Is there any way to create this type in C++? Or a better way to do what I am trying to do?
You can wrap the iterator in a type of your own and reference that type instead to avoid the recursive type problem.
struct const_iterator_wrapper {
using iterator_type = map<string, const_iterator_wrapper>::const_iterator;
iterator_type iter;
const_iterator_wrapper() {}
const_iterator_wrapper(iterator_type _iter) : iter(_iter) {}
};
using tree = map<string, const_iterator_wrapper>;
|
73,954,725 | 73,954,769 | Is std::string an array of two iterators? | I do not understand the behavior of the following snippet.
How could this be happening?
#include <bits/stdc++.h>
using namespace std;
int main() {
string s = "apple";
string foo = {s.begin(), s.end()};
cout << foo << endl;
}
output:
apple
| Don't confuse how an object is constructed over what it fundamentally is.
A constructor can, and will, take in all kinds of things. Quite often these arguments are converted in some way, transformed into the form that's a more natural fit for the class in question.
In this case you're constructing a string out of a range of characters, or in other words, an arbitrary substring. There are many other methods, including converting from char*, which is something you'll see all the time:
std::string example = "example";
Here you can read that as "example is initialized with the value "example"".
|
73,954,974 | 74,019,802 | Gitlab CI/CD yml variables for CMake Windows build | I am writing CI/CD pipeline for Windows Gitlab runner installed on my local machine:
variables:
RT_VERSION: "0.1"
build-win64:
tags:
- "win64"
stage: build
script:
- echo $RT_VERSION
- mkdir build
- cd build
- cmake ../ -G "Visual Studio 16 2019" -DRELEASE_VERSION=$RT_VERSION -DCMAKE_BUILD_TYPE=Release
- cmake --build . --config Release --target package
artifacts:
name: "raytracing-$RT_VERSION.zip"
paths:
- build/raytracing-$RT_VERSION-win64.7z
expire_in: 24 hours
The corresponding cmake file:
if(WIN32)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Wall /WX /EHcs")
else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wextra")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -std=c++1z -pedantic")
endif()
include_directories(. ../output)
add_executable(raytracing
main.cpp
)
set(CPACK_SOURCE_GENERATOR "7Z")
set(CPACK_GENERATOR "7Z")
string(TIMESTAMP BUILD_VERSION "%Y.%m.%d.%H" UTC)
message(STATUS "DRAFT VERSION: " ${BUILD_VERSION})
set(RELEASE_VERSION "" CACHE STRING "program version")
if(NOT RELEASE_VERSION STREQUAL "")
set(BUILD_VERSION ${RELEASE_VERSION})
message(STATUS "RV: " ${RELEASE_VERSION})
endif()
message(STATUS "VERSION: ${BUILD_VERSION}")
set(CPACK_PACKAGE_VERSION ${BUILD_VERSION})
install (TARGETS raytracing RUNTIME DESTINATION bin)
include(CPack)
The CI/CD job works fine except for the build version:
...
Building Custom Rule D:/Programming/Gitlab/builds/Y3ZUwhN5/0/ktator/raytracing/CMakeLists.txt
CPack: Create package using 7Z
CPack: Install projects
CPack: - Install project: raytracing [Release]
CPack: Create package
CPack: - package: D:/Programming/Gitlab/builds/Y3ZUwhN5/0/ktator/raytracing/build/raytracing-$RT_VERSION-win64.7z generated.
...
Uploading artifacts...
Runtime platform arch=amd64 os=windows pid=11828 revision=bbcb5aba version=15.3.0
WARNING: build/raytracing-0.1-win64.7z: no matching files. Ensure that the artifact path is relative to the working directory
ERROR: No files to upload
The variable works fine in the artifacts section but does not work in the script section. Please help me with this issue.
I also tried to use percent symbol instead of dollar. It did not help.
- cmake ../ -G "Visual Studio 16 2019" -DRELEASE_VERSION=%RT_VERSION% -DCMAKE_BUILD_TYPE=Release
| The solution was found!
$RT_VERSION should be enclosed in quotation marks.
Here is the corrected gitlab.yml:
variables:
RT_VERSION: "0.1"
build-win64:
tags:
- "win64"
stage: build
script:
- echo $RT_VERSION
- mkdir build
- cd build
- cmake ../ -G "Visual Studio 16 2019" -DRELEASE_VERSION="$RT_VERSION" -DCMAKE_BUILD_TYPE=Release
- cmake --build . --config Release --target package
artifacts:
name: "raytracing-$RT_VERSION.zip"
paths:
- build/raytracing-$RT_VERSION-win64.7z
expire_in: 24 hours
|
73,955,065 | 73,955,340 | How to read all integers into array from input file in C++ | So let's say I have a file like this:
1 23
14
abc
20
It will read the numbers 1, 23, 14, but I also want it to read the value 20 into the array.
How can I accomplish this?
#include <iostream>
#include <fstream>
using namespace std;
int main(int argc, char* argv[]){
const int ARRAY_SIZE = 1000;
int numbers[ARRAY_SIZE], count = 0;
double total = 0, average;
ifstream inputFile;
if(argc == 2){
inputFile.open(argv[1]);
}
else{
cout << "Invalid" << endl;
return 1;
}
while (count < ARRAY_SIZE && inputFile >> numbers[count]){
count++;
}
inputFile.close();
...
| Here's how I would do it. I added comments to the code to explain the intent and used a std::vector<int> instead of an array with a hardcoded size since you didn't mention any restrictions and I feel it is a better way to handle the problem. If you must use a statically sized array you should be able to adapt this.
#include <iostream>
#include <fstream>
#include <vector>
#include <sstream>
#include <string>
// Avoid using namespace std;
std::vector<int> readData(std::istream& input)
{
// In general prefer std::vector for variable sized data
std::vector<int> ret;
std::string word;
std::stringstream ss;
while (input >> word)
{
// Clear error flags and set the conversion stream to the value read.
ss.clear();
ss.str(word);
int val = 0;
// The first condition will fail if the string doesn't start with a digit
// The second will fail if there is still data in the stream after some of
// it was converted.
if (ss >> val && ss.eof())
{
ret.push_back(val);
}
}
return ret;
}
int main()
{
// In your code open the file and pass the stream to
// the function instead of std::cin
auto v = readData(std::cin);
for (auto n : v)
{
std::cout << n << " ";
}
std::cout << "\n";
}
Input: 1 23 14 abc 20 123xyz 50 nope 12345 a 100
Output: 1 23 14 20 50 12345 100
Working example: https://godbolt.org/z/ab87KjzjK
|
73,955,616 | 73,956,209 | Iterate std::tuple indices with lambda | There are a lot of approaches how to iterate trough std::tuple. And it is similar to range-based-for loop. I want to do something like this, but with indices of tuple, to get access to elements of various tuples.
For example I have tuple of different types, but all of them has same free functions / operators std::tuple<float, std::complex, vec3, vec4> and I want to do some operation between two or more such tuples.
I tried to write something like this:
template<typename Lambda, typename... Types, int... Indices>
void TupleIndexElems_Indexed(TTuple<Types...>, Lambda&& Func, TIntegerSequence<int, Indices...>)
{
Func.template operator()<Indices...>();
}
template<typename TupleType, typename Lambda>
void TupleIndexElems(Lambda&& Func)
{
TupleIndexElems_Impl(TupleType{}, Func);
}
template<typename... Types, typename Lambda>
void TupleIndexElems_Impl(TTuple<Types...>, Lambda&& Func)
{
TupleIndexElems_Indexed(TTuple<Types...>{}, Func, TMakeIntegerSequence<int, sizeof...(Types)>{});
}
Usage:
FSkyLightSettings& operator+=(FSkyLightSettings& Other)
{
auto Tup1 = AsTuple();
auto Tup2 = Other.AsTuple();
using TupType = TTuple<float*, FLinearColor*, FLinearColor*>;
auto AddFunc = [] <typename Tup, int Index> (Tup t1, Tup t2)
{
*t1.template Get<Index>() = (*t1.template Get<Index>()) + (*t2.template Get<Index>());
};
TupleIndexElems<TupType>([=]<int... Indices>
{
AddFunc.template operator()<TupType, Indices>(Tup1, Tup2); // How to fold it?
});
return *this;
}
I thought the best way to do it is using a variadic lambda template, but when I tried to call it, I was confused by the impossibility of using a fold expression.
Are there any elegant solutions to do that (for various versions of C++)?
UPD: I've also tried to use recursive lambda, but I can't due to compiler error C3536:
auto PlusVariadic = [=]<int Index, int... Indices>
{
Plus.template operator()<TupType, Index>(Tup1, Tup2); // How to fold it?
if constexpr (Index != 0)
{
PlusVariadic.operator()<Indices...>();
}
};
| One convenient way in C++20 I use to iterate tuples is to create a constexpr_for function that calls a lambda with a std::integral_constant parameter to allow indexing, as described in my Achieving 'constexpr for' with indexing post.
#include <utility>
#include <type_traits>
template<size_t Size, typename F>
constexpr void constexpr_for(F&& function) {
auto unfold = [&]<size_t... Ints>(std::index_sequence<Ints...>) {
(std::forward<F>(function)(std::integral_constant<size_t, Ints>{}), ...);
};
unfold(std::make_index_sequence<Size>());
}
example usage:
#include <tuple>
#include <iostream>
int main() {
auto Tup1 = std::make_tuple(1, 2.0, 3ull, 4u);
auto Tup2 = std::make_tuple(1ull, 2.0f, 3.0, (char)4);
constexpr auto size = std::tuple_size_v<decltype(Tup1)>;
constexpr_for<size>([&](auto i) {
std::get<i>(Tup1) += std::get<i>(Tup2);
std::cout << "tuple<" << i << "> = " << std::get<i>(Tup1) << '\n';
});
}
Output:
tuple<0> = 2
tuple<1> = 4
tuple<2> = 6
tuple<3> = 8
Try it out on godbolt.
|
73,956,613 | 73,957,506 | QMap Datatype in QML basic usage? | Assume I have a simple QMap declared and Initialized with values inside C++ and correctly exposed to QML.
C++
QMap<QString, QMap<QString, QString>> airflow_buttons_;
//simple getter
[[nodiscard]] QMap<QString, QMap<QString, QString>> airflow_buttons() const;
QML
console.log(ClimateModel.airflow_buttons);
prints the following
(qrc:/Climate/AirflowButtons.qml:24) - QVariant(QMap<QString,QMap<QString,QString>>, QMap(("left", QMap(("top_air", "NOT_ACTIVE")))))
It is giving me a QVariant, and I have no idea how to convert it to my desired map with the desired values.
I want to use it as a simple JavaScript Map.
console.log(ClimateModel.airflow_buttons["left"]["top_air"]); //Should print "NOT_ACTIVE"
console.log(ClimateModel.airflow_buttons["left"]); //Should print the Map contents
The current error message is [debug] expression for onCompleted (qrc:/Climate/AirflowButtons.qml:24) - undefined
Example of what I want to achieve in plain JavaScript
var myMap = {x: "Test", y: "Test 2"};
myMap["z"] = "Test 3";
console.log(myMap["z"]); //prints "Test 3"
console.log(JSON.stringify(myMap)); //prints the map contents
How can I achieve this trivial task?
| You should be using a QVariantMap all the way:
QVariantMap airflow_buttons = {
{"left", QVariantMap{
{"top_air", "NOT ACTIVE"}}}
};
btw, in QML you can also do this if you fancy:
console.log(ClimateModel.airflow_buttons.left.top_air)
|
73,956,788 | 73,956,966 | Parse error in valid JSON data with nlohmann's C++ library | i have the following JSON data in a .json file:
[
{
"Type":"SET",
"routine":[
{
"ID":"1",
"InternalType":"Motorcycle",
"payload":"2"
},
{
"ID":"12",
"InternalType":"Chair"
}
]
},
{
"Type":"GET",
"routine":[
{
"ID":"1",
"InternalType":"Wheel"
},
{
"ID":"4",
"InternalType":"Car",
"payload":"6"
}
]
}
]
I try to check and parse the data as follows:
#include<iostream>
#include<fstream>
#include"json.hpp"
using json = nlohmann::json;
using namespace std;
int main (){
string pathToFile = "/absolute/path/to/my/exampledata.json";
ifstream streamOfFile(pathToFile);
if(!json::accept(streamOfFile)){
cout << "JSON NOT VALID!" << endl;
} else {
cout << "Nothing to complain about!" << endl;
}
try {
json allRoutines = json::parse(streamOfFile);
} catch (json::parse_error& error){
cerr << "Parse error at byte: " << error.byte << endl;
}
return 0;
}
I compile and run it as follows:
g++ myCode.cpp -o out
./out
Nothing to complain about!
Parse error at byte: 1
As I understand the function json::accept(), it should return true if the JSON data is valid.
https://json.nlohmann.me/api/basic_json/accept/
In my case you can see that the accept() function returns true on the data stream, but the actual parsing throws a json::parse_error.
So I would like to know if anyone happens to see an error in my use of the two functions, the data or can explain to me why it behaves like that.
Kind Regards
| json::accept(streamOfFile) reads from the input stream till the end.
json::parse(streamOfFile) can't read after the input stream end.
This has nothing to do with the JSON library; it's a general feature of streams: once they have been read, they reach the end-of-file state.
You might want to reset the stream to the zero file position, then parse the file
streamOfFile.clear(); // Reset eof state
streamOfFile.seekg(0, ios::beg); // Seek to the beginning of the file
json allRoutines = json::parse(streamOfFile);
|
73,957,225 | 73,957,305 | Implement a swap function to get a move constructor in own container class | In all standard containers like std::map or std::vector there is a move constructor and a move assignment to avoid copying. I want to build my own Wector class with the same functionalities. My class declaration looks as follows:
class Wector{
public:
~Wector();
Wector();
Wector(std::size_t s);
Wector(std::size_t s, const double nr);
Wector(const std::initializer_list<double> il);
//copy constructor
Wector(const Wector & w);
//move constructor
Wector(Wector&& w);
std::size_t size() const;
double* begin(); // iterator begin
double* end(); // iterator end
const double* begin() const; // const iterator begin
const double* end() const; // const iterator
const double& operator[](std::size_t i)const;
double& operator[](std::size_t i);
Wector& operator=(const Wector& w);
//move assignment
Wector& operator=(Wector&& w);
private:
std::size_t wsize;
void swap(Wector& w);
double *element; // double *element = new double[s]
};
To implement the move assignment and move constructor I need a custom swap.
//move constructor
Wector::Wector(Wector&& w)
: Wector()
{
swap(w);
}
//move assignment
Wector& Wector::operator=(Wector&& w){
swap(w);
return *this;
}
But I have no idea how to implement the swap function without direct access to the data elements and without copying via the iterators.
void Wector::swap(Wector& v){
std::swap(wsize, v.size());
double * temp = new double[v.size()];
std::copy(w.begin(), w.end(), temp);
std::swap(element, temp);
delete [] temp; //edited
}
Does anybody know how it is implemented in the case of std::vector?
| You can just swap the pointers themselves (and, of course, the sizes). It doesn't matter which instance allocated the storage and which deletes it, it'll only belong to one instance at a time.
|
73,957,435 | 73,961,342 | Why gRPC client side cancellation only works when both client and server run on the same process? | I have an async server that streams data to a single async client. I would like to be able to cancel the streaming from the client side and have the server stop streaming.
Currently, I run both client and server on 2 separate processes on my local Windows 10 machine. But I have tried running the client on a separate machine and it behaves the same.
My server side endpoint is configured like so:
const auto server_grpc_port = "50051";
const auto server_endpoint = std::string("0.0.0.0:") + server_grpc_port;
serverBuilder.AddListeningPort(server_endpoint, grpc::InsecureServerCredentials());
My client side endpoint is configured like so:
const auto server_grpc_port = "50051";
const auto client_endpoint = std::string("localhost:") + server_grpc_port;
remoteStub = std::make_unique<MyRemoteApp::Stub>(grpc::CreateChannel(client_endpoint, grpc::InsecureChannelCredentials()));
After I start both client and server, I initiate an asynchronous server streaming.
At some point, I trigger cancellation from the client side which should cause the client to stop reading and the server to stop writing.
I follow the method described in this answer here and github issue here:
Server Side
Create a grpc::ServerContext instance
Call grpc::ServerContext::AsyncNotifyWhenDone(cancellation_tag). Once the cancellation_tag will appear on the completion queue, We may invoke grpc::ServerContext::IsCancelled() to determine if a client has cancelled the RPC.
Wait for the RPC streaming to be initiated by the client: server->RequestMyStreamingRPC(... token ...)
Do a bunch of writes each time token arrives at the CompletionQueue.
If the cancellation_tag arrives at the CompletionQueue, then we stop the streaming.
Client Side
Create a grpc::ClientContext instance
Initiate the RPC - Stub::PrepareAsync<>
Call reader->Read as many times as we wish to receive data from the server.
At some point, call grpc::ClientContext::TryCancel();
We call reader->Finish which returns a CANCELLED Status.
destroy the grpc::ClientContext instance and the reader.
However, the cancellation_tag never reaches the server. It is only when I destroy the Stub instance on the client side that I finally receive the cancellation_tag on the server's CompletionQueue. If I keep the stub alive, the server just keeps streaming data forever as if there is a client reading it.
After investigating this further, it seems the problem does not occur when both client and server run on the same process, nor when I implement a simple synchronous server. In these cases, cancellation works as expected.
So what could possibly go wrong? Could there be something wrong with how the asynchronous server is handling cancellation?
| After investigating this further, I think I found the issue. It seems to come down to some undocumented aspects of the behavior of a CompletionQueue.
I was using a single thread for the whole server program.
So the completion handlers are invoked on the same thread that calls AsyncNext, like so:
while(server_active)
{
status = _completionQueue->AsyncNext(&tag, &ok);
if (status == grpc::CompletionQueue::GOT_EVENT)
{
CallHandler(tag, ok); // does the logic, probably another write
}
}
Whenever a write would complete, it would immediately trigger another write.
It seemed that when the client triggered a cancellation, the relevant completion tag was in fact inserted into the queue, but the draining loop never reached it, since it kept adding more write completions. It is as if the queue behaves in a last-in-first-out manner.
When I modified the loop to first drain the queue, and then invoke the handlers, I immediately got the expected behavior.
while(server_active)
{
std::vector<Completion> completions; // Completion: a small struct holding {tag, ok}
while(1) // drain the queue
{
status = _completionQueue->AsyncNext(&tag, &ok);
if (status == grpc::CompletionQueue::GOT_EVENT)
{
completions.emplace_back(tag, ok);
}
else
{
break;
}
}
for (auto completion : completions)
{
CallHandler(completion.tag, completion.ok); // does the logic
}
}
|
73,958,351 | 73,958,656 | Why does using double instead of float gives a wrong result in this double integration code? | I found the following code in this page to compute a double integral. Whenever I run it with all variables being declared as float, it gives the right result for the example integral, which is 3.91905. However, if I just change all float variables to double, the program gives a completely wrong result (2.461486) for this integral.
Could you help me understand why this happens? I expected to get a better result using double precision, but that's evidently not the case here.
Below is the code pasted from the aforementioned website.
// C++ program to calculate
// double integral value
#include <bits/stdc++.h>
using namespace std;
// Change the function according to your need
float givenFunction(float x, float y)
{
return pow(pow(x, 4) + pow(y, 5), 0.5);
}
// Function to find the double integral value
float doubleIntegral(float h, float k,
float lx, float ux,
float ly, float uy)
{
int nx, ny;
// z stores the table
// ax[] stores the integral wrt y
// for all x points considered
float z[50][50], ax[50], answer;
// Calculating the number of points
// in x and y integral
nx = (ux - lx) / h + 1;
ny = (uy - ly) / k + 1;
// Calculating the values of the table
for (int i = 0; i < nx; ++i) {
for (int j = 0; j < ny; ++j) {
z[i][j] = givenFunction(lx + i * h,
ly + j * k);
}
}
// Calculating the integral value
// wrt y at each point for x
for (int i = 0; i < nx; ++i) {
ax[i] = 0;
for (int j = 0; j < ny; ++j) {
if (j == 0 || j == ny - 1)
ax[i] += z[i][j];
else if (j % 2 == 0)
ax[i] += 2 * z[i][j];
else
ax[i] += 4 * z[i][j];
}
ax[i] *= (k / 3);
}
answer = 0;
// Calculating the final integral value
// using the integral obtained in the above step
for (int i = 0; i < nx; ++i) {
if (i == 0 || i == nx - 1)
answer += ax[i];
else if (i % 2 == 0)
answer += 2 * ax[i];
else
answer += 4 * ax[i];
}
answer *= (h / 3);
return answer;
}
// Driver Code
int main()
{
// lx and ux are upper and lower limit of x integral
// ly and uy are upper and lower limit of y integral
// h is the step size for integration wrt x
// k is the step size for integration wrt y
float h, k, lx, ux, ly, uy;
lx = 2.3, ux = 2.5, ly = 3.7,
uy = 4.3, h = 0.1, k = 0.15;
printf("%f", doubleIntegral(h, k, lx, ux, ly, uy));
return 0;
}
Thanks in advance for your help!
| Due to numeric imprecisions, this line:
ny = (uy - ly) / k + 1; // 'ny' is an int.
Evaluates to 5 when the types of uy, ly and k are float. When the type is double, it yields 4.
You may use std::round((uy - ly) / k) or a different formula (I haven't checked the mathematical correctness of the whole program).
|
73,959,099 | 73,959,232 | How to initialize sockets in Linux environment | I have a code snippet that initializes a socket on Windows. How would I initialize the socket in a Linux environment?
WSADATA wsa;
if (WSAStartup(MAKEWORD(2,2), &wsa) != 0)
{
exit(0);
}
| On Linux you don't initialize a network environment like WSA. Sockets can be used out of the box.
See https://man7.org/linux/man-pages/man2/socket.2.html for documentation.
|
73,960,414 | 73,960,808 | Is there a 'requires' replacement for 'void_t'? | void_t is a nice hack to detect compilability of certain expressions, but I wonder if there is some way to do that check with requires (or requires requires) since I really do not like void_t from the readability perspective.
For example, for a certain type I want to check if some expression is fine (i.e. compilable) or not, including negations.
Ideally I would wish that this works, but it does not, probably since lambdas are not templated...
#include <unordered_set>
int main() {
auto a = []() requires requires(int x) {x<x;} {};
auto b = []() requires !requires(std::unordered_set<int> x) {x<x;} {};
}
If this use seems weird, my real motivation is to check that something does not compile, for example that my nontemplated type does not have operator< or that it is not constructible from int or...
P.S.: I know boost::hana has a way to do this, I am looking for vanilla C++20 solution.
|
my real motivation is to check that something does not compile, for example that my nontemplated type does not have operator <
This is possible with concepts, perhaps I am misunderstanding?
template<class T>
concept has_less_than = requires(const T& x, const T& y)
{
{x < y} -> std::same_as<bool>;
};
struct Has_Less
{
bool operator<(const Has_Less& other) const
{
return true;
}
};
struct Nope{};
int main()
{
static_assert(has_less_than<Has_Less>);
static_assert(!has_less_than<Nope>);
}
Live Demo
|
73,962,524 | 73,990,758 | Is it possible to use istringstream to read in more than one parameter of a stream function? | Instead of writing a new istringstream argument, can I add another parameter inside nameStream? I have what I think below, and if this method is eligible, can I tell the input stream to read in a space or endline to separate the two full names?
#include <iostream>
#include <string>
using namespace std;
string lastNameFirst (string fullname){
fullname = "Don Blaheta";
fullname2 = "Julian Dymacek";
istringstream nameStream(fullname, fullname2);
string firstName;
string lastName;
string firstName2;
string lastName2;
nameStream>>firstName>>lastName>>firstName2>>lastName2;
return 0;
}
| No, that will not work.
As you can see in the definition of std::istringstream's constructor, it will not take 2 std::strings as parameters, so you cannot do it this way.
You have to concatenate the 2 strings first and then hand the result over to the constructor.
Please see below some example for illustrating what I was explaining:
#include <iostream>
#include <string>
#include <sstream>
using namespace std::string_literals;
int main() {
// Define source strings
std::string fullname1{ "Don Blaheta"s };
std::string fullname2{ "Julian Dymacek"s };
// here we will store the result
std::string firstName1{}, lastName1{}, firstName2{}, lastName2{};
// Create stream from concatenated strings
std::istringstream nameStream(fullname1 + " "s + fullname2);
// Extract the name parts
nameStream >> firstName1 >> lastName1 >> firstName2 >> lastName2;
// Show some debug output
std::cout << firstName1 << ' ' << lastName1 << '\n' << firstName2 << ' ' << lastName2 << ' ';
}
In more advanced C++ (starting with C++17) you could use variadic template parameters and fold expressions to concatenate an arbitrary number of names, and then split the parts into a std::vector. Here we can make use of the std::vector's range constructor(5) in combination with the std::istream_iterator's constructor.
But here you need to learn more . . .
#include <iostream>
#include <string>
#include <sstream>
#include <vector>
#include <iterator>
#include <algorithm>
using namespace std::string_literals;
template<typename... Strings>
std::vector<std::string> split(Strings&& ...strings) {
std::istringstream iss(((strings + " "s) + ...));
return { std::istream_iterator<std::string>(iss),{} };
}
int main() {
// Any number of names
std::string fullname1{ "Don Blaheta"s };
std::string fullname2{ "Julian Dymacek"s };
std::string fullname3{ "John Doe"s };
// Split all the names into parts
std::vector nameParts = split(fullname1, fullname2, fullname3);
// Show debug output
for (const std::string& s : nameParts) std::cout << s << '\n';
}
|
73,963,381 | 73,963,910 | high precision Sleep for values around 5 ms | I am looking for a solution that would give me the most accurate sleep for small time values. I tried the code below to test it:
#include <thread>
#include <chrono>
#include <functional>
#include <iostream>
#include <windows.h>
using ms = std::chrono::milliseconds;
using us = std::chrono::microseconds;
using ns = std::chrono::nanoseconds;
const int total_test = 100;
const int sleep_time = 1;
template <typename F,typename T>
long long sleep_soluion_tester(T desired_sleep_time, F func)
{
long long avg = 0.0;
for (int i = 0; i < total_test; i++)
{
auto before = std::chrono::steady_clock::now();
func(desired_sleep_time);
auto after = std::chrono::steady_clock::now();
avg += std::chrono::duration_cast<T>(after - before).count();
}
return avg / total_test;
};
template <typename F>
ms sleep_soluion_tester(int desired_sleep_time, F func)
{
ms avg = ms(0);
for (int i = 0; i < total_test; i++)
{
auto before = std::chrono::steady_clock::now();
func(desired_sleep_time);
auto after = std::chrono::steady_clock::now();
avg += std::chrono::duration_cast<ms>(after - before);
}
return avg / total_test;
};
int main()
{
auto sleep = [](auto time) {
std::this_thread::sleep_for(time);
};
ms sleep_test1 = static_cast<ms>(sleep_time);
us sleep_test2 = static_cast<us>(sleep_test1);
ns sleep_test3 = static_cast<ns>(sleep_test1);
std::cout << " desired sleep time: " << sleep_time << "ms\n";
std::cout << " using Sleep(desired sleep time) "<< sleep_soluion_tester(sleep_time, Sleep).count() << "ms\n";
std::cout << " using this_thread::sleep_for(ms) " << sleep_soluion_tester(sleep_test1,sleep) << "ms\n";
std::cout << " using this_thread::sleep_for(us) " << sleep_soluion_tester(sleep_test2,sleep) << "ms\n";
std::cout << " using this_thread::sleep_for(ns) " << sleep_soluion_tester(sleep_test3,sleep) << "ms\n\n";
}
But there seems to be no difference; they all take around 15 milliseconds to complete even when the parameter passed is 1 ms. Is there a way to increase the precision of the sleep function (or some implementation that would lead to higher precision)?
I know that using thread sleep is a non-deterministic method, as the scheduler will decide when the thread will be resumed, but I wonder if there is any way to increase precision?
| Well, sleeping exactly for a very short time cannot be guaranteed on standard desktop operating systems.
Also, you must deal with 2 aspects:
Time Measuring: If your measuring method has a low time resolution you may get completely wrong results when measuring.
The time resolution of the sleeping function itself.
Then, on a non-real-time OS like Windows, macOS or Linux, it can also happen that your process/thread is not scheduled in time for an early enough wake-up. The shorter the sleeping time, the more likely this is to happen.
So, on Windows you may use the WIN32 API and adjust the timer period and set it to the lowest possible value:
timeBeginPeriod(1); // set period to 1 ms
At the end of your task or program - according to the documentation - you must end it with
timeEndPeriod(1);
Then you can use the WIN32 Sleep function:
Sleep(1); // (try to) sleep 1 ms
According to the documentation:
If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on.
For further information see here: https://learn.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep
On Linux you can try nanosleep, see https://man7.org/linux/man-pages/man2/nanosleep.2.html
For time measuring you might use the QueryPerformanceCounter function on Windows and clock_gettime on Linux respectively.
This all will not give you the guarantee to reach 1 ms resolution each time you sleep.
But the timeBeginPeriod call is worth a try, since according to the documentation the default timer resolution is 15.6 ms, which fits exactly with your measurement. Setting a higher resolution, e.g. 1 ms, may help you then. For tasks that require a "real" real-time guarantee to be functional, this will not be enough; that is what real-time OSes are built for.
|
73,964,025 | 73,964,842 | Polymorphic Smart Pointer Array as Argument | I want a static 2D array that takes an interface class pointer.
Using raw pointer Base* rawr[5][5] works fine but I want to work with smart pointers and only pass the raw pointer as an argument.
How can I make the code work without changing the args to smart pointers?
class Base {};
class Child : public Base {};
void Foo(Base* array[5][5])
{
// Stuff
}
void OtherFoo(std::unique_ptr<Base> array[5][5])
{
// Stuff
}
int main()
{
std::unique_ptr<Base> rawr[5][5];
// argument of type "std::unique_ptr<Base, std::default_delete<Base>> (*)[5]"
// is incompatible with parameter of type "Base *(*)[5]"
Foo(rawr);
// no suitable conversion function from
// "std::unique_ptr<Base, std::default_delete<Base>>" to "Base *(*)[5]" exists
Foo(rawr[5][5]);
// expression must have class type but it has type
// "std::unique_ptr<Base, std::default_delete<Base>> (*)[5]"
Foo(rawr.get());
// expression must have pointer-to-class type but it has type
// "std::unique_ptr<Base, std::default_delete<Base>> (*)[5]"
Foo(rawr->get());
// This works
OtherFoo(rawr);
}
Newbie question + probably a duplicate but after googling for a while I didn't see an answer, sorry :'(
|
How can I make the code work without changing the args to smart pointers?
You can't pass around an array of smart pointers where an array of raw pointers is expected. However, you can have 2 separate arrays - an array of smart pointers, and an array of raw pointers that point to the objects that the smart pointers are managing, eg:
class Base {
public:
virtual ~Base() = default;
};
class Child : public Base {};
void Foo(Base* array[5][5])
{
// Stuff
}
void OtherFoo(std::unique_ptr<Base> array[5][5])
{
// Stuff
}
int main()
{
std::unique_ptr<Base> smartr[5][5];
Base* rawr[5][5];
// fill smartr as needed...
for (int i = 0; i < 5; ++i) {
for(int j = 0; j < 5; ++j) {
rawr[i][j] = smartr[i][j].get();
}
}
Foo(rawr);
OtherFoo(smartr);
}
|
73,964,853 | 73,965,551 | ARM64 inline assembly BL BLR BR instructions | I am in the middle of a problem that I can't seem to figure out. It involves testing the instruction set using c++ and inline assembly for the arm64. My current problems are with BL, BLR and the BR instructions. My current code looks as follows:
#include <stdio.h>
#define LDRARRAYLENGTH (4)
__asm __volatile
(
".global myFunction \n\t"
".p2align 4 \n\t"
".type myFunction,%function \n\t"
"myFunction: \n\t"
"mov x0, #10 \n\t"
"ret x30 \n\t"
);
/*
*
*/
bool BranchingModes(void)
{
bool BranchingModesFlag = false;
//local registers
int regw0 = 0x00;
int regw1 = 0x00;
int regw2 = 0x00;
int regw3 = 0x00;
/*
* Branch with Link branches to a PC-relative offset, setting
* the register X30 to PC+4. It provides a hint that this is a
* subroutine call.
*/
//Clear variables
regw0 = 0x00;
regw1 = 0x00;
regw2 = 0x00;
regw3 = 0x00;
__asm __volatile
(
"mov x0, #0 \n\t" /* setting up initial variable a */
"bl myFunction \n\t"
"mov %[reg1], x0 \n\t"
"nop \n\t"
:[reg0] "=r"(regw0), [reg1] "=r"(regw1)
:/* This is an empty input operand list */
);
/*
* The BL instruction calls a subroutine called myFunction within the subroutine
* a variable/register gets a value of 10. When the subroutine returns the
* the variable/register that got populated in the subroutine gets copied
* to another variable/register to acknowledge that the subroutine got called
* and returned using the BL instruction
*/
if((regw0 == 10) && (regw1 == 10))
{
BranchingModesFlag = true;
}
else
{
BranchingModesFlag = false;
}
return BranchingModesFlag;
}
int main()
{
unsigned int i0 = 0x00;
unsigned int counter = 0x00;
BranchingModes();
for(i0=0x00; i0<=10000; i0++)
{
counter = counter + 1;
}
return 0;
}
The issue is that the code doesn't seem to make it to the for loop after the BranchingModes function. I know it's the asm section of the BranchingModes function which utilizes the BL instruction. I am not sure what I am doing wrong with that instruction. Am I returning correctly out of "myFunction" using "ret x30"? I have attempted to use "BR x30" with no success, since BL computes PC+4 and stores it in x30. I have a similar issue with the BLR instruction; I would appreciate any insight.
| You need to tell the compiler about all registers you clobber, but you don't. The compiler doesn't see your changes to x0 and x30, the latter of which is presumably the reason why your program never returns from BranchingModes.
Haven't tested it, but this should work:
__asm __volatile
(
"mov x0, #0 \n\t" /* setting up initial variable a */
"bl myFunction \n\t"
"mov %[reg1], x0 \n\t"
"nop \n\t"
:[reg0] "=r"(regw0), [reg1] "=r"(regw1)
:/* This is an empty input operand list */
:"x0", "x30"
);
Note that this is for calling your specific function. For arbitrary ABI-compliant functions, you'll need to list everything the ABI specifies to be caller-saved as clobber. This will usually be x0 through x18 plus x30, d8 through d15, as well as cc and memory.
|
73,965,388 | 73,974,658 | Where is my 'concepts' library for C++20 concepts? | I can successfully compile code that uses the concept keyword, but not anything that includes <concept> or <concepts>.
// #include <concepts> // fatal error: 'concepts' file not found
template<class T>
concept Adds = requires(T a, T b) { a+b; };
template<class T>
concept Subs = requires(T a, T b) { a-b; };
template<class T>
concept AddsSubs = Adds<T> && Subs<T>;
Obviously without the library, things like this won't work:
template <class X, typename Y>
requires std::constructible_from<X, Y>
// error: no member named 'constructible_from' in namespace 'std'
auto new_unique(Y && y) -> std::unique_ptr<X>;
That was slightly modified from a "how to use" page on the constructible_from concept.
I'm missing something simple, I'm sure.
I'm using Clang 15 as built from GitHub (92ab024f81e5b64e258b7c3baaf213c7c26fcf40). When I use G++, I have to replace -std=c++20 with -std=c++2a -fconcepts before it behaves the same (both errors above).
By the way, when compiling with Clang I don't use -stdlib=libc++; that causes it to lose sight of all standard headers. Things like std::unique_ptr as above are visible just fine. I believe this is because either my build of llvm was interrupted or I didn't install correctly. However, G++ wouldn't be affected by that.
I'm working on Linux but I can move to an online compiler if you prefer. I don't think it's relevant, correct me if I'm wrong.
| I compiled the following on Godbolt using Clang++ 15 and libc++. It proves that <concepts> is visible, that concepts can be applied and used, and that there are no linker errors.
#include <iostream>
#include <concepts>
#include <typeinfo>
template<class... T> using And = std::conjunction<T...>;
template<class... T> using Or = std::disjunction<T...>;
template<class S, class... T> using Same = And<std::is_same<S, T>...>;
template<class S, class... T> using In = Or<std::is_same<S, T>...>;
template<class S, class... T> using IIn = In<S, std::make_signed_t<T>...>;
template<class S, class... T> using UIn = In<S, std::make_unsigned_t<T>...>;
template<class T> concept IFixed = In<T, short, int, long, long long>{};
template<class T> concept UFixed = UIn<T, short, int, long, long long>{};
template<class T> concept Fixed = IFixed<T> || UFixed<T>;
template<class T> concept Floating = std::floating_point<T>;
template<class T> concept Arithmetic = Fixed<T> || Floating<T>;
void vprint(UFixed auto const& r) { std::cout << "(Unsigned) fixed: "
<< r << " (" << typeid(r).name() << ")\n"; }
void vprint(Fixed auto const& r) { std::cout << "(Signed) fixed: "
<< r << " (" << typeid(r).name() << ")\n"; }
void vprint(Arithmetic auto const& r) { std::cout << "Floating: "
<< r << " (" << typeid(r).name() << ")\n"; }
void vprint(auto const& r) { std::cout << "Non-arithmetic: "
<< r << " (" << typeid(r).name() << ")\n"; }
int main(int argc, const char *argv[]) {
vprint('0'); // Non-arithmetic: 0 (c)
vprint(true); // Non-arithmetic: 1 (b)
vprint(-1); // (Signed) fixed: -1 (i)
vprint(0u); // (Unsigned) fixed: 0 (j)
vprint(std::size_t{1}); // (Unsigned) fixed: 1 (m)
vprint(2-1.0e-6); // Floating: 2 (d)
vprint(""); // Non-arithmetic: (A1_c)
vprint(nullptr); // Non-arithmetic: nullptr (Dn)
vprint((void*)nullptr); // Non-arithmetic: (nil) (Pv)
flush(std::cout);
}
https://godbolt.org/z/1aG47v39z
That means it's still my installation, and not a problem with C++ or concepts. Not only that but everything works in G++ now, so it's absolutely my Clang or libc++ installation, and Clang reports the right version. All that's left is LLVM and libc++ as far as I know. If I can't figure it out, as n-1-8e9-wheres-my-share-m said, I should post a new question.
Update: I removed old versions of Clang, LLVM, etc., reinstalled the new ones, and worked on a few other things, and when I came back to it it compiled. Very frustrating. Accepting this answer seems wrong.
Update 2: I think I found it. -std=c++20 was fine, but an imported makefile was effectively forcing -std=c++17 later in the arguments, and I didn't look closely enough after I saw the ones I was looking for. All I was missing was a single question mark and it cost me days.
|
73,965,533 | 73,965,683 | std::vector of structs: How are the member of structs saved in memory | I have the following class:
struct xyzid{
uint16_t i;
uint16_t z,y,x;
};
std::vector<xyzid> example;
(...) // fill example
uint16_t* data = reinterpret_cast<uint16_t*>(example.data());
Can I now be sure that my pointer data is basically such that the first 16 bits refer to i, then z, y, x, before moving to the next element? Or, is it not guaranteed that the order of declaration in my struct is preserved within my std::vector container?
| The vector contains an allocated array of structs. The data pointer points at the 1st struct in the array. The structs in the array are stored sequentially in memory. And the fields of each struct are also stored sequentially in memory, in the order that they are declared. So yes, the array will consist of the 1st struct fields i, then z, then y, then x, then the 2nd struct fields, and so on.
However, what you CAN'T count on is that the 1st struct's z field will occupy the 2nd 16 bits of memory, or the y field will occupy the 3rd 16 bits of memory, etc. This is dependent on the struct's alignment padding. If you are expecting the memory to be filled in that way, the only way to guarantee that is to disable all alignment padding between the structs and their fields, by declaring the struct's alignment as 1 byte, such as with #pragma pack(1) or __attribute__((packed)) or another similar compiler directive.
But, you really shouldn't rely on this behavior if you can help it. If you absolutely need the memory to have an exact layout, use serialization instead.
|
73,966,165 | 73,970,258 | When WINAPI calls my code and an exception is thrown, should I catch it and return an HRESULT instead? | I have implemented IThumbnailProvider which gets compiled to a dll and then registered using regsvr32.
Within the code, I make use of STL containers such as std::vector:
std::vector<double> someRGBAccumulatorForDownsamplingToThumbnail = std::vector<double>(1234567);
Because the STL containers are largely built around RAII, there is no null-checking for failed memory allocations. Instead, the above code will throw an exception in an out-of-memory scenario. When this happens, should I catch this exception to return an HRESULT (context: implementation of GetThumbnail) instead?
try {
// ...
} catch (bad_alloc& ex) {
return E_OUTOFMEMORY;
}
Or can WINAPI safely handle me allowing the exception to "bubble up"?
I am asking because I am reading that WINAPI is C-based, and that C does not have exceptions.
| IThumbnailProvider is a COM interface. The Component Object Model is a language-agnostic protocol that describes (among others) the binary contract between clients and implementers of interfaces. It establishes a boundary (the Application Binary Interface, ABI) with clear rules1.
Since the protocol is language-agnostic, things that are allowed to cross the ABI are limited to the least common denominator. It's ultimately slightly less than what C function calls support. Any language-specific construct (such as C++ exceptions) must not cross the ABI.
When implementing a COM interface in C++ you have to make sure that C++ exceptions never cross the ABI. The bare minimum you could do is mark all interface methods as noexcept:
HRESULT MyThumbnailProvider::GetThumbnail(UINT, HBITMAP*, WTS_ALPHATYPE*) noexcept {
// ...
}
While that meets all requirements of the COM contract, it's generally not desirable to have an uncaught exception bring down the entire process in which the COM object lives.
A more elaborate solution would instead catch all exceptions and turn them into HRESULT error codes (see Error Handling in COM), similar to what the code in question does:
HRESULT MyThumbnailProvider::GetThumbnail(UINT, HBITMAP*, WTS_ALPHATYPE*) noexcept {
try {
// ...
} catch(...) {
return E_FAIL;
}
}
Again, this is perfectly valid, though any COM developer dreads seeing the 0x80004005 error code, that's semantically equivalent to "something went wrong". Hardly useful when trying to diagnose an issue.
A more helpful implementation would attempt to map certain well-known C++ exception types to standard HRESULT values (e.g. std::bad_alloc -> E_OUTOFMEMORY, or std::system_error to the result of calling HRESULT_FROM_WIN32). While one could manually implement a catch-cascade on every interface method implementation, there are libraries that do it for you already. The Windows Implementation Library (WIL) provides exception guards for this purpose, keeping the details out of your code.
The following is a possible interface method implementation using the WIL:
HRESULT MyThumbnailProvider::GetThumbnail(UINT, HBITMAP*, WTS_ALPHATYPE*) noexcept {
try {
// ...
}
CATCH_RETURN();
}
As an aside, I've kept the noexcept specifiers on the latter two implementations as a defensive measure only; they are not strictly required, but keep the interface valid in case the implementation changes in the future in a way that would allow a C++ exception to escape.
1 I'm not aware of an official document that spells out those rules. We have to assume that the compiler is the specification. Incidentally, Microsoft's C and C++ compiler do not agree, which the Direct2D team found out the hard way.
|
73,966,205 | 73,971,745 | Implementing a "virtual" method returning *this (covariant return type) | I'm writing a hierarchy of classes of C++, let's say A, B inheriting A, C inheriting A, and D inheriting B.
Now, all of these classes must have a method bar() &, whose body is:
{
A::foo();
return *this;
}
It's the exact same code, doing the exact same thing - except for the type of the return value - which returns an lvalue reference to the class' type.
Now, the signature of this method would be different for every class. But - it's essentially the same method. The thing is, the way things stand, I need to replicate the code for it many times. How can I avoid this code duplication?
I was thinking of writing some mixin with CRTP, but when I get into the details it becomes super-ugly.
Note: For the purposes of this example, bar() is only defined for lvalues so as not to get into the question of legitimacy of returning *this from an rvalue.
| As Raymond Chen commented, c++23 would have deducing this which allows code like:
struct A
{
template <typename Self>
Self& bar(this Self& self) // Here self is the static type which calls bar
// so potentially the derived type
{
self.A::foo(); // or self.foo();
return self;
}
// ...
};
struct B : A{};
struct C : A{};
struct D : B{};
But currently, CRTP might help, something like:
struct A
{
// ...
};
template <typename Derived, typename Base>
struct A_CRTP : Base
{
Derived& bar()
{
A::foo();
return static_cast<Derived&>(*this);
}
// ...
};
struct B : A_CRTP<B, A> {};
struct C : A_CRTP<C, A> {};
struct D : A_CRTP<D, B> {};
|
73,966,725 | 73,993,212 | Does reinterpret_cast<unsigned long long> of an int64_t value really break strict aliasing? | I'm attempting to write a generic version of __builtin_clz that handles all integer types, including signed ones. To ensure that conversion of signed to unsigned types doesn't change the bit representation, I decided to use reinterpret_cast.
I've got stuck on int64_t which unlike the other types doesn't seem to work with reinterpret_cast.
I would think the code below is correct but it generates a warning in GCC.
#include <cstdint>
int countLeadingZeros(const std::int64_t value)
{
static_assert(sizeof(std::int64_t) == sizeof(unsigned long long));
return __builtin_clzll(reinterpret_cast<const unsigned long long&>(value));
}
(demo)
GCC shows a warning: dereferencing type-punned pointer will break strict-aliasing rules.
Clang compiles it without a complaint.
Which compiler is right?
If it is GCC, what is the reason for the violation of strict-aliasing?
Edit: After reading the answers, I can see that the described behavior applies not only to conversion int64_t -> unsigned long long but also to long -> long long. The latter one makes the problem a little more obvious.
| If you have a signed integer type T, you can access its value through a pointer/reference to the unsigned version of T and vice-versa.
What you cannot do is access its value through a pointer/reference to the unsigned version of U, where U is not the original type. That's undefined behavior.
long and long long are not the same type, no matter what the size of those types say. int64_t may be an alias for a long, a long long, or some other type. But unless you know that int64_t is an alias for signed long long (and no, testing its size is not good enough), you cannot access its value through a reference to unsigned long long.
|
73,967,019 | 73,967,060 | My program skips any digit that ends with a 9 | When I build it and run I noticed that my program sets dig1 to 0 as soon as it hits 9. So the output looks like this: 00, 01... 08, 10.
I searched on stackoverflow and cplusplus.com for a possible solution but I couldn't find it.
Here's the code in question:
#include <iostream>
using namespace std;
int main()
{
int i;
char dig1 = '0', dig2 = '0';
cout << dig2 << dig1 << endl;
for(i = 0; i < 90; i++)
{
dig1++;
if(dig1 == '9')
{
dig1 = '0';
dig2++;
}
cout << dig2 << dig1 << endl;
}
return 0;
}
| Just replace (dig1=='9') with (dig1>'9') as in
#include <iostream>
using namespace std;
int main()
{
int i;
char dig1 = '0', dig2 = '0';
for(i = 0; i < 90; i++)
{
cout << dig2 << dig1 << endl;
dig1++;
if(dig1 > '9')
{
dig1 = '0';
dig2++;
}
}
return 0;
}
https://godbolt.org/z/TMbjbzsE1
Produces:
00
01
02
03
04
05
06
07
08
09
10
11
12
13
|
73,967,750 | 73,967,779 | How do I limit the amount of digits accepted from the user? | I currently have this and I can't seem to make my else work so the whole program doesn't limit the input successfully. (Sorry for my English I speak French).
cout << "Veuillez entrer votre nombre de 6 chiffres : ";
cin >> val;
if (val < 100000);
{
cout << "Erreur! Veuillez recommencez avec un nombre a 6 chiffres. " << endl << endl;
return main();
}
if (val > 999999);
{
cout << "Erreur! Veuillez recommencez avec un nombre a 6 chiffres. " << endl << endl;
return main();
}
else
if
{
nb1 = val / 100000 % 10;
cout << nb1;
}
| Firstly, a semicolon between the ) and { of an if statement creates an empty if statement body, followed by an unconditional block, which is not what you want.
To check whether a number has more than a certain number of digits when negative input is possible, compare against -999999 as well.
The code should be made shorter by using logical operators instead of 2 separate if statements.
Lastly, you should not recurse into main(). Use goto instead.
Corrected Code:
retry:
cout << "Veuillez entrer votre nombre de 6 chiffres : ";
cin >> val;
if (val < -999999 || val > 999999) // Note the ; has been removed
{
cout << "Erreur! Veuillez recommencez avec un nombre a 6 chiffres. " << endl << endl;
goto retry;
}
nb1 = val / 100000 % 10;
cout << nb1;
Alternatively, you could use the abs() function (use #include <cstdlib>):
if (abs(val) > 999999)
{
cout << "Erreur! Veuillez recommencez avec un nombre a 6 chiffres. " << endl << endl;
goto retry;
}
nb1 = val / 100000 % 10;
cout << nb1;
|
73,968,056 | 74,029,119 | Module loading and unloading when calling createwindow | I am trying to create a simple dropdown menu in a dialog box. Here is the bit of code that actually does it:
BOOL CALLBACK Remove(HWND hDlgc, UINT message, WPARAM wParam, LPARAM lParam)
//message handler for remove category box
{
//UNREFERENCED_PARAMETER(lParam);
HINSTANCE current = GetModuleHandle(NULL);
//GetModuleHandleExA(GET_MODULE_HANDLE_EX_FLAG_PIN, "comctl32.dll", NULL);
CreateWindow(WC_COMBOBOXW, _TEXT(""), CBS_DROPDOWN | CBS_HASSTRINGS | WS_CHILD | WS_OVERLAPPED | WS_VISIBLE, 100, 100, 200, 200, hDlgc, NULL, NULL, NULL);
This will work and it will show the combo box, but only after waiting for 2 minutes or so... very undesirable! my program will go into a not responding state before the combo box shows up. The output shows that comctl32.dll get loaded and unloaded about 1500 times before the combo box shows up. When it does, it is still unresponsive and I have to wait more until it begins to work. I tried pinning the module to stop the loading and unloading but that did not do anything. Any help appreciated. As you can see I am very new to win32 programming. I got the backend of my program to work nicely, its just this gui that is bugging me.
EDIT: here is the as short as i could get it code. Just create a blank desktop project in VS, and then replace the "about" function in the bottom with the following: (and also include commctrl.h)
INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam)
{
UNREFERENCED_PARAMETER(lParam);
HWND dd_Hand = CreateWindow(WC_COMBOBOXW, _TEXT(""), CBS_DROPDOWNLIST | CBS_HASSTRINGS | WS_CHILD | WS_OVERLAPPED | WS_VISIBLE,
20, 20, 200, 200, hDlg, NULL, NULL, NULL);
switch (message)
{
case WM_INITDIALOG:
return (INT_PTR)TRUE;
case WM_COMMAND:
if (LOWORD(wParam) == IDOK || LOWORD(wParam) == IDCANCEL)
{
EndDialog(hDlg, LOWORD(wParam));
return (INT_PTR)TRUE;
}
break;
}
return (INT_PTR)FALSE;
}
If I do this, I get the symptoms described previously.
EDIT AGAIN: I put the createwindow function for the combobox into the WM_CREATE case of WndProc, and everything works as it should, loads instantly. I am starting to doubt that this is the right way to create a combobox within a dialog box. Any suggestions for doing this another way (havent been able to find a way to do this with a splitbutton resource) are also welcome.
| The solution was simple: just put this code:
HWND dd_Hand = CreateWindow(WC_COMBOBOXW, _TEXT(""), CBS_DROPDOWNLIST | CBS_HASSTRINGS | WS_CHILD | WS_OVERLAPPED | WS_VISIBLE,
20, 20, 200, 200, hDlg, NULL, NULL, NULL);
somewhere it runs only once (for example, in the WM_INITDIALOG case rather than on every message). No more problems. Also, another even simpler way to do this would be to create a combobox resource and use the SendMessage() function.
|
73,968,547 | 74,011,263 | Slide animated text on hover | .h
class myButton : public QPushButton
{
Q_OBJECT
public:
QPropertyAnimation* anim;
struct WidgetPos { int x = 0; int y = 0; int w = 0; int h = 0; };
WidgetPos wp;
void CreateAnimation(QByteArray propertyName)
{
if (propertyName == "geometry")
{
anim = new QPropertyAnimation(this, propertyName);
this->anim->setDuration(100);
this->anim->setEasingCurve(QEasingCurve::Linear);
this->wp.x = this->x();
this->wp.y = this->y();
this->wp.w = this->width();
this->wp.h = this->height();
}
}
myButton(QWidget* parent = 0) : QPushButton(parent) {}
bool eventFilter(QObject* obj, QEvent* event)
{
if (event->type() == QEvent::Enter)
{
if (!this->wp.x)
this->CreateAnimation("geometry");
this->anim->stop();
this->anim->setStartValue(
QRect(this->x(), this->y(), this->width(), this->height()));
this->anim->setEndValue(
QRect(this->x(), this->y(), (this->wp.w + 200) - this->width(), this->height()));
this->anim->start();
}
else if (event->type() == QEvent::Leave)
{
this->anim->stop();
this->anim->setStartValue(
QRect(this->x(), this->y(), (this->wp.w + 200) - this->width(), this->height()));
this->anim->setEndValue(
QRect(this->wp.x, this->wp.x, this->wp.w, this->wp.h));
this->anim->start();
}
return QWidget::eventFilter(obj, event);
}
};
.cpp
QtWidgetsApplication::QtWidgetsApplication(QWidget * parent)
: QMainWindow(parent)
{
ui.setupUi(this);
QPushButton* btn = new myButton(this);
btn->setGeometry(100, 100, 50, 40);
btn->setStyleSheet(R"(QPushButton {
background-image: url(:/tutorial.png);
background-repeat: no-repeat; }
)");
QLabel* labl = new QLabel(btn);
labl->setObjectName("label");
labl->setGeometry(32, 0, btn->width() + 32, btn->height());
labl->setText("Hello World");
labl->setAlignment(Qt::AlignCenter);
labl->show();
btn->installEventFilter(btn);
return;
}
So far, what I did results in:
If I move the mouse over it too fast it becomes messy, and the "closing" animation isn't working.
I'm struggling with the calculation of the animation QRect and handling it when there's an animation already running.
The goal is to create a smooth animation effect similar to the one seen in this gif:
| I think the reason for the issue you are having is because when you are leaving the widget you set the start animation to the maximum width the button could take instead of starting it from the current width. I've implemented my own QPushButton subclass in the following way which seems to achieve the result you need. Instead of creating an event filter, I'll just override the enter and leave event. We'll also need to update the initial geometry every time the widget is moved or resized (outside of the animation), so I'm overriding the move and resize event as well.
// MyButton.h
class MyButton : public QPushButton
{
public:
MyButton(QWidget* parent = nullptr);
~MyButton() = default;
protected:
void enterEvent(QEvent *event) override;
void leaveEvent(QEvent* event) override;
void moveEvent(QMoveEvent *event) override;
void resizeEvent(QResizeEvent* event) override;
private:
QPropertyAnimation* m_animation;
QRect m_init_geometry;
double m_duration;
double m_extension;
};
Here is the implementation:
// MyButton.cpp
MyButton::MyButton(QWidget* parent)
: QPushButton(parent)
, m_animation(nullptr)
, m_init_geometry()
, m_duration(200)
, m_extension(100)
{
m_animation = new QPropertyAnimation(this, "geometry", this);
m_animation->setDuration(m_duration);
m_animation->setEasingCurve(QEasingCurve::Linear);
m_init_geometry = geometry();
}
void MyButton::enterEvent(QEvent *event)
{
QPushButton::enterEvent(event);
m_animation->stop();
// update the duration so that we get a uniform speed when triggering this animation midway
m_animation->setDuration(((m_init_geometry.width() + m_extension - width())/m_extension)*m_duration);
m_animation->setStartValue(geometry());
m_animation->setEndValue(QRectF(m_init_geometry.x(), m_init_geometry.y(), m_init_geometry.width() + m_extension, m_init_geometry.height()));
m_animation->start();
}
void MyButton::leaveEvent(QEvent *event)
{
QPushButton::leaveEvent(event);
m_animation->stop();
// update the duration so that we get a uniform speed when triggering this animation midway
m_animation->setDuration(((width() - m_init_geometry.width())/m_extension)*m_duration);
m_animation->setStartValue(geometry());
m_animation->setEndValue(m_init_geometry);
m_animation->start();
}
void MyButton::moveEvent(QMoveEvent *event)
{
// ignore the move event if it's due to the animation, otherwise store the new geometry
if(m_animation->state() == QPropertyAnimation::Running) return;
QPushButton::moveEvent(event);
m_init_geometry.setTopLeft(event->pos());
}
void MyButton::resizeEvent(QResizeEvent *event)
{
// ignore the move event if it's due to the animation, otherwise store the new geometry
if(m_animation->state() == QPropertyAnimation::Running) return;
QPushButton::resizeEvent(event);
m_init_geometry.setSize(event->size());
}
Notice that the start value of the closing animation is the current geometry and not the initial geometry plus the extended width. I'm updating reducing the duration of the opening animation linearly depending on how close the current width is to the full extended width; similarly for the closing animation. The rest now is very similar to your code:
MainWindow::MainWindow(QWidget *parent)
: QMainWindow(parent)
, ui(new Ui::MainWindow)
{
ui->setupUi(this);
auto* btn = new MyButton(this);
btn->setGeometry(100, 100, 60, 80);
btn->setStyleSheet(R"(QPushButton {
background-image: url(:/ubuntu.png);
background-repeat: no-repeat;
background-origin: content;
background-position: left center;}
)");
auto* labl = new QLabel("Hello World", btn);
labl->setAlignment(Qt::AlignCenter);
labl->setGeometry(btn->width(), 0, labl->width(), btn->height());
}
The result looks like this
|
73,968,622 | 73,968,833 | How to create a runtime variable-size array efficiently in C++? | Our library has a lot of chained functions that are called thousands of times when solving an engineering problem on a mesh every time step during a simulation. In these functions, we must create arrays whose sizes are only known at runtime, depending on the application. There are three choices we have tried so far, as shown below:
void compute_something( const int& n )
{
double fields1[n]; // Option 1.
auto *fields2 = new double[n]; // Option 2.
std::vector<double> fields3(n); // Option 3.
// .... a lot more operations on the field variables ....
}
From these choices, Option 1 has worked with our current compiler, but we know it's not safe because we may overflow the stack (plus, it's non standard). Option 2 and Option 3 are, on the other hand, safer, but using them as frequently as we do, is impacting the performance in our applications to the point that the code runs ~6 times slower than using Option 1.
What are other options to handle memory allocation efficiently for dynamic-sized arrays in C++? We have considered constraining the parameter n, so that we can provide the compiler with an upper bound on the array size (and optimization would follow); however, in some functions, n can be pretty much arbitrary and it's hard to come up with a precise upper bound. Is there a way to circumvent the overhead in dynamic memory allocation? Any advice would be greatly appreciated.
|
Create a cache at startup and pre-allocate with a reasonable size.
Pass the cache to your compute function or make it part of your class if compute() is a method
Resize the cache
std::vector<double> fields;
fields.reserve( reasonable_size );
...
void compute( int n, std::vector<double>& fields ) {
fields.resize(n);
// .... a lot more operations on the field variables ....
}
This has a few benefits.
First, most of the time the size of the vector will be changed but no allocation will take place due to the exponential nature of std::vector's memory management.
Second, you will be reusing the same memory so it will be likely it will stay in cache.
|
73,968,826 | 73,968,982 | C++ constructor on unnamed object idiom? | I have a C++ class that works like the Linux "devmem" utility for embedded systems. I'll simplify and give an outline of it here:
struct DevMem
{
DevMem(off_t start, off_t end)
{
// map addresses start to end into memory with mmap
}
~DevMem()
{
// release mapped memory with munmap
}
uint32_t read(off_t address)
{
// return the value at mapped address
}
void write(off_t address, uint32_t value)
{
// write value to mapped address
}
};
I use it like this:
void WriteConverter(uint32_t value)
{
off_t base = 0xa0000000;
DevMem dm(base, base+0x100); // set up mapping for region
dm.write(base+0x8, value); // output value to converter
dm.write(base+0x0, 1); // strobe hardware
while (dm.read(base+0x4)) // wait until done
;
}
And this works great. RAII ensures the mapped memory is released when I'm done with it. But some hardware is really simple and only needs a single read or write. It was bothering me that in order to access that hardware, I would have to invent some name for the instantiation of the class:
DevMem whatever(0xa0001000, 0xa0001000); // map the thing
whatever.write(0xa0001000, 42); // do the thing
With the named object and the repetition of the address three times, it's a little verbose. So I made a change to the constructor so that I could leave off the end parameter if I'm only mapping a single address:
DevMem(off_t start, off_t end = 0)
{
// map addresses start to end into memory with mmap
}
And then I overloaded the read and write routines so the address wasn't passed:
uint32_t read()
{
// return the value at the constructor's start address
}
void write(uint32_t value)
{
// write value to the constructor's start address
}
And I discovered that I could then do this:
DevMem(0xa0001000).write(42); // do the thing
And this works. I don't need to invent a name for the object, it's less verbose, the value is written (or read), and RAII cleans it up nicely. What I assume is happening is that C++ is constructing an unnamed object, dereferencing it, using it, and then destructing it.
Is this use of an unnamed object valid? I mean, it compiles okay, GCC and clang don't complain with common warnings cranked up, and it does actually work on the target hardware. I just can't find any examples of such a thing on the Interwebs. Is this a named idiom?
| Yep, completely valid. You create the object, use it, and then the destructor kicks in. Your compiler will probably generate the same assembly in your whatever example if whatever has a reasonable scope.
I don't know any names for this construct though. As well as I wouldn't call this an idiom.
|
73,968,947 | 73,968,961 | How to use struct declared in header file from source file c++? | I have a header file (abc.hpp) as bellow:
#include <iostream>
#include <stdio.h>
#include <vector>
using std::vector;
class ABC {
public:
struct BoxPoint {
float x;
float y;
} ;
vector<BoxPoint> getBoxPoint();
Then, in the source file (abc.cpp):
#include "abc.hpp"
vector<BoxPoint> ABC::getBoxPoint(){
vector<BoxPoint> boxPointsl;
BoxPoint xy = {box.x1, box.y1};
boxPointsl.push_back(xy);
return boxPointsl
}
When I compile, there is error:
error: ‘BoxPoint’ was not declared in this scope
at line vector<BoxPoint> ABC::getBoxPoint()
If I change to void ABC::getBoxPoint(), (also change in the header file and remove return boxPointsl, the error does not exist.
Can you point me to why display the error and how to resolve it?
Thank you!
| The symbol BoxPoint is only known inside the ABC scope; outside it you have to add the ABC:: qualifier, so ABC::BoxPoint:
#include "abc.hpp"
vector<ABC::BoxPoint> ABC::getBoxPoint(){
vector<BoxPoint> boxPointsl;
BoxPoint xy = {box.x1, box.y1};
boxPointsl.push_back(xy);
return boxPointsl;
}
Inside the function you can use BoxPoint again, since we are inside the ABC scope again.
|
73,969,326 | 73,969,701 | Using C++20 Ranges to Avoid Loops | I was assigned a task where I need to solve a problem given several constraints. The point is to enforce the use of STL algorithms, iterators, and new c++20 functionality including things like ranges. However I've been reading on ranges for hours and I still can't figure out how I can implement the problem given all the constraints. I've simplified the problem and removed the specific details to make it more generic.
The Problem:
Write a function that 1) takes in a vector of custom objects input 2) returns a vector of a different type that includes an element for each object in input that satisfies some conditions. The value added is based on the properties of the input objects.
I realize this may sound obscure so here's a simple example. For an input vector of Shapes where each has a name and an area:
vector<Shapes> input{ { "Square", 10 }, { "Triangle", 30 } , { "Square", 1 }, { "Circle", 30 }, { "Triangle", 15 } };
return a vector of enums
enum Color { RED, BLUE, GREEN };
such that an enum is added for each Square or Circle. The value of the enum is determined based on the area of each Shape. So, let's say, if the area is above 20, RED is added, otherwise, GREEN is added.
So in this case we'd return { GREEN, GREEN, RED }
This is all well and good and could be implemented in a myriad of ways, what makes it very difficult are the constraints.
The Constraints:
no loops or recursion
no std::for_each
no data structures other than std::vector
no memory allocation (other than a one-time allocation for the return vector)
no by-reference lambda captures or mutable lambdas
cannot modify the input vector
My professor claims that "c++20 ranges make this task particularly simple." But even after reading on ranges for hours I'm not even sure where would I begin. My current train of thought is to create a std::view and filter it based on the conditions (Squares & Circles) but then I'm not sure how I would create a new vector of a different type and add elements to it based on the properties of the elements in the view without using loops, for_each, or by-reference lambdas..
| If you simply want to transform every shape to an enum, all you need is:
auto colors = input | transform([](const Shapes& s){return s.area > 20 ? RED : GREEN;});
colors can then be looped through in a for-loop:
for (const auto& c : colors)
Or be made into a new vector:
std::vector<Color> colorEnums{colors.begin(), colors.end()};
...though C++23's ranges::to would be helpful for the latter.
If you want to only generate the color elements for some elements in your input vector (ex.: not triangles), add filter before transform, e.g.:
auto colors = input |
filter([](const Shapes& s){return s.type != "Triangle";}) |
transform([](const Shapes& s){return s.size > 20 ? RED : GREEN;});
Note that having filter after transform would work too, but be less efficient, as the code would then transform all elements, before discarding some of them. So use filter as early as possible.
|
73,969,827 | 73,971,588 | How can I solve the error LNK2019: unresolved external symbol u_erroraName_58 referenced in function " public? | I am new to DCMTK and Visual Studio. I get this error, and I don't know how to fix it. I'm using Visual Studio 2022.
This is my code:
#include <iostream>
#include "dcmtk/config/osconfig.h"
#include "dcmtk/dcmdata/dcfilefo.h"
#include "dcmtk/dcmdata/libi2d/i2d.h"
#include "dcmtk/dcmdata/libi2d/i2djpgs.h"
#include "dcmtk/dcmdata/libi2d/i2dplsc.h"
#include "dcmtk/dcmdata/dctk.h"
using namespace std;
void loadDicom() {
DcmFileFormat fileformat;
OFCondition status = fileformat.loadFile("ct000.dcm");
if (status.good())
{
OFString patientName;
if (fileformat.getDataset()->findAndGetOFString(DCM_PatientName, patientName).good())
{
cout << "Patient's Name: " << patientName << endl;
}
else
cerr << "Error: cannot access Patient's Name!" << endl;
}
else
cerr << "Error: cannot read DICOM file (" << status.text() << ")" << endl;
}
int main()
{
cout << "Hello World!\n";
//createDicomImage();
loadDicom();
}
Please help me! Thanks
| You are probably missing some essential DCMTK libraries.
You can check those by:
1/ Right-click on your project -> Properties.
2/ Under "Configuration Properties", find Linker -> Input -> Additional Dependencies.
Check if you have all the necessary libraries in the right order.
You might need to add netapi32 or wsock32 or both (don't forget to insert those in alphabetical order).
|
73,970,426 | 73,970,682 | Using iterators but with same amount of loc? | One can loop over a list by both:
#include <iostream>
#include <list>
using namespace std;
int main()
{
list<int> alist{1, 2, 3};
for (const auto& i : alist)
cout << i << endl;
list<int>::iterator i;
for (i = alist.begin(); i != alist.end(); i++)
cout << *i << endl;
return 0;
}
Mostly I don't use iterators because of the extra line of code I have to write, list<int>::iterator i;.
Is there any way of not writing it and still using an iterator? Any new tricks in newer C++ versions? Perhaps implementing my own list instead of using the one from the STL?
|
Mostly I don't use iterators because of the extra line of code I have to write, list<int>::iterator i;.
You don't need to put it on an extra line. As with every for loop, you can define the iterator type inside the parentheses, unless you need the value outside of the loop's body.
So you can also write
for (list<int>::iterator i = alist.begin(); i != alist.end(); i++)
cout << *i << endl;
or
for (auto i = alist.begin(); i != alist.end(); i++)
cout << *i << endl;
|
73,970,668 | 73,971,983 | FreeConsole() does not detach application from cmd | I launch my application in cmd. I compiled it with CMake without WIN32_EXECUTABLE, so it hangs in cmd (i.e. is launched as not detached). Now I want to close the console and try to achieve this by calling FreeConsole().
This works in case I double-click the application -- the black console flashes quickly and gets closed. But it does not work when I launch it in cmd. The cmd is still attached to the launched application and FreeConsole() does not help.
Is there any way to detach it from cmd programmatically? I have found that I can run start /b myapp, but I would like to achieve this from within the program.
UPDATE
A rough implementation of the answer of Anders for those who are interested -- see below. In this implementation I pass all the arguments, supplied to main.com, to the child process main.exe.
// main.cpp
// cl main.cpp
// link main.obj /SUBSYSTEM:WINDOWS /ENTRY:"mainCRTStartup"
//
#include <stdio.h>
#include <Windows.h>
int main(int argc, char** argv)
{
if (AttachConsole(ATTACH_PARENT_PROCESS)) {
freopen("CONOUT$", "w", stdout);
freopen("CONOUT$", "w", stderr);
freopen("CONIN$", "r", stdin);
}
for (int i = 0; i < argc; i++)
printf("--%s", argv[i]);
printf("Hello\n");
int k = 0;
scanf("%d", &k);
printf("%d\n", k);
return 0;
}
// helper.cpp
// cl helper.cpp
// link helper.obj /OUT:main.com
//
#include <windows.h>
#include <stdio.h>
#include <string.h>
void main( int argc, char **argv )
{
STARTUPINFO si;
PROCESS_INFORMATION pi;
ZeroMemory( &si, sizeof(si) );
si.cb = sizeof(si);
ZeroMemory( &pi, sizeof(pi) );
// Reconstructing the command line args for main.exe:
char cmdLine[32767] = { 0 };
strcpy(cmdLine, "main.exe ");
int shift = strlen("main.exe ");
for (int i = 1 ; i < argc; i++)
{
strcpy(cmdLine + shift, argv[i]);
const int argLength = strlen(argv[i]);
cmdLine[shift + argLength] = ' ';
shift += (argLength + 1);
}
printf("\n!!!!%s!!!!!%s\n", cmdLine, GetCommandLine());
// Start the child process.
// https://learn.microsoft.com/en-us/windows/win32/procthread/creating-processes
//
if( !CreateProcess(NULL, // No module name (use command line)
cmdLine, // Command line
NULL, // Process handle not inheritable
NULL, // Thread handle not inheritable
FALSE, // Set handle inheritance to FALSE
0, // No creation flags
NULL, // Use parent's environment block
NULL, // Use parent's starting directory
&si, // Pointer to STARTUPINFO structure
&pi ) // Pointer to PROCESS_INFORMATION structure
)
{
printf( "CreateProcess failed (%d).\n", GetLastError() );
return;
}
// Wait until child process exits.
WaitForSingleObject( pi.hProcess, INFINITE );
// Close process and thread handles.
CloseHandle( pi.hProcess );
CloseHandle( pi.hThread );
}
| Creating a "perfect" application that can be both GUI or console is not possible on Windows.
When a .exe has its subsystem set to console, two things happen:
CreateProcess will attach it to the parents console. If the parent does not have one, a new console is created. This can only be suppressed by passing flags to CreateProcess.
cmd.exe will wait for the process to finish.
The trick of setting the subsystem to Windows and using AttachConsole is sub-optimal because cmd.exe will not wait.
The best solution is to create your main .exe as a Windows application. Create another console helper application that you rename from .exe to .com. The helper application will just launch the main app in GUI mode and then quit. In console mode it has to tell the main app that it needs to attach and you need to WaitForSingleObject on it. Example application available here .
Visual Studio uses this .com trick. The trick works because .com is listed before .exe in %pathext%.
|
73,970,778 | 73,971,415 | Giving integer values to QPushButton(s) to apply C++ logic on them in Qt | I have a switch case with condition values from 0-8. Upon each value, I give it an x and y coordinate of a 3x3 array. Then the code operates logic on the 3x3 array to get the results.
I also have 9 buttons in my Qt Designer ui (arranged in a 3x3 matrix). I want each click of the nine buttons to get a corresponding integer value which will then pass through the switch case.
switch (square_number)
{
case 1:
x = 0;
y = 0;
break;
case 2:
x = 0;
y = 1;
break;
case 3:
x = 0;
y = 2;
break;
.
.
.
//and so on...
default:
break;
}
I want the square_number to get the integer value corresponding to the push-button clicked.
QSignalMapper seems obsolete or weird at the very least. And I am not that familiar with lambda expressions. Is there a simple way of doing what I want to do?
| You can try this approach and use a dynamic property:
static constexpr auto IndexPropertyName = "index";
void MainWindow::SetupPB(QPushButton* b, QPoint index)
{
connect(b, &QPushButton::clicked, this, &MainWindow::onSomeButtonPressed);
b->setProperty(IndexPropertyName, index);
}
void MainWindow::onSomeButtonPressed(bool checked)
{
auto index = sender()->property(IndexPropertyName).toPoint();
doSomethingOn(index);
}
QObject Class | Qt Core 6.4.0
bool QObject::setProperty(const char *name, const QVariant &value)
Sets the value of the object's name property to value.
If the property is defined in the class using Q_PROPERTY then true is returned on success and false otherwise. If the property is not defined using Q_PROPERTY, and therefore not listed in the meta-object, it is added as a dynamic property and false is returned.
|
73,971,095 | 73,971,357 | I can't figure out how to edit the data in my array and the one time that I did it "forgot" the data after I increased the size. (C++) | I'm somewhat new to C++. I decided to work on a small project, and I'm currently trying to make an array, put 5 objects into it, then increase its size and put more objects into it; however, I've run into an issue where I cannot figure out how to put data into the array.
This is what I've got so far:
#include <iostream>
#include "Token.h"
int main()
{
Token* ts = new Token[5]; //create the initial array
ts[0] = new Token(TT_PLUS, "+"); //add the item
int size = sizeof(ts) / sizeof(Token); //get the new size
size_t newSize = size * 2; // double it
Token* newArr = new Token[newSize]; //create new array
memcpy(newArr, ts, size * sizeof(Token)); //copy data
size = newSize; //The array resizing is code I found, so I'm not sure why this is here...
delete[] ts; //delete old array data
ts = newArr; // array is now updated?
std::cout << ts[0].type << std::endl; // trying to get the type of Token 0
}
This is my error(s):
Severity Code Description Project File Line Suppression State
Error (active) E0349 no operator "=" matches these operands ATestProgrammingLang C:\Users\usr\source\repos\ATestProgrammingLang\ATestProgrammingLang.cpp 10
Severity Code Description Project File Line Suppression State
Error C2679 binary '=': no operator found which takes a right-hand operand of type 'Token *' (or there is no acceptable conversion) ATestProgrammingLang C:\Users\usr\source\repos\ATestProgrammingLang\ATestProgrammingLang.cpp 10
Let me know if I need to give you more information.
| First of all, I think you are using C style. Use STL containers (std::vector, for example) instead of C arrays.
I don't know what the Token class is, but in my opinion you don't need to use new in this part:
ts[0] = new Token(TT_PLUS, "+"); //add the item
|
73,971,250 | 73,971,279 | Why does an overridden template class method still get compiled? | When I create a templated class with a virtual function and override the function in a derived class, the base function still tries to get compiled.
Here is the minimal reproducible example:
#include <iostream>
template<typename T>
class Base
{
public:
virtual void Method()
{
static_assert(false);
}
};
class Derived : public Base<int>
{
public:
void Method() override
{
std::cout << "Hello, World!";
}
};
int main()
{
Derived d{};
d.Method();
return 0;
}
Why does the base method still try to compile when it is overridden?
| First, it needs to be compiled because a user can instantiate Base<T> and directly call Method() on it. If you want to make it non-callable, use virtual void Method() = 0; to make the function pure virtual (abstract).
Second, it is not only compiled, it's actually accessible from Derived: you can call it, e.g. from Derived::Method(), as Base<int>::Method().
|
73,971,774 | 73,971,869 | Vector Pointer in C++ | I have a vector storing structs, and I pass it by address to a function. I wonder why I can only access the elements by writing
(*v)[0].value
instead of
v[0].value
When I use a struct array and pass it to a function by address, I can simply write
v[0].value to get the value of an element. Any explanation is welcome, and thanks in advance.
| You're comparing a pointer to the first element of an array to a pointer to the whole vector.
Let's try to make a pointer to the whole array instead of its first element:
int a[3] = {1,2,3};
int (*ap)[3] = &a;
// int v1 = ap[0]; // error
int v2 = (*ap)[0]; // ok
Which is exactly like how a pointer to a vector behaves:
std::vector<int> b = {1,2,3};
std::vector<int> *bp = &b;
// int v3 = bp[0]; // error
int v4 = (*bp)[0]; // ok
Or, the other way around, let's make a pointer to the first element of a vector:
std::vector<int> c = {1,2,3};
int *cp = c.data(); // or `&c[0]`
int v5 = cp[0]; // ok
Which is similar to a pointer to the first element of an array:
int d[3] = {1,2,3};
int *dp = d; // or `&d[0]`
int v6 = dp[0]; // ok
|
73,971,862 | 73,972,282 | How to take formatted input in C++? | Well, I know how to do this in C, for example:
#include <stdio.h>
int main()
{
int a,b,c;
scanf("%d:%d,%d",&a,&b,&c);
printf("%d %d %d",a,b,c);
return 0;
}
But how to do this in C++? Can cin be used like scanf?
| Since the input format looks like "%H:%M,%S", I suspect that a time is being read.
In that case there is a simpler way to do it:
std::tm t{};
while(std::cin >> std::get_time(&t, "%H:%M,%S")) {
std::cout << std::put_time(&t, "%H %M %S") << '\n';
}
https://godbolt.org/z/Y5o9cYc4G
|
73,972,679 | 73,972,811 | CMake use variables from file | I'd like to share some variables between my scripts and CMake. It would be nice to re-use bash syntax:
var1=value1
var2=value2
...
I was trying to use the solution from here as the problem is pretty much the same: https://stackoverflow.com/a/17167673/15035275 But the solution didn't work for me.
I created a file config.in containing:
FOO=foobar
And I added this config file to my CMakeLists file:
cmake_minimum_required(VERSION 3.16)
configure_file(
${CMAKE_CURRENT_SOURCE_DIR}/config.in
${CMAKE_CURRENT_BINARY_DIR}/config
@ONLY
)
set(CMAKE_VERBOSE_ON)
set(CMAKE_VERBOSE_MAKEFILE ON)
message(STATUS "foo is equal to: ${FOO}")
...
But the variable ${FOO} isn't visible by CMake:
/bin/cmake -DCMAKE_BUILD_TYPE=Debug -G Ninja -S /home/p -B /home/p/cmake-build-debug
-- foo is equal to:
-- The CXX compiler identification is Clang 10.0.0
...
How can I use the variables from config.in ?
| The answer you linked is not applicable to your problem - I've left a comment on that answer as well.
I don't know of a generic way apart from the regex-based solutions offered by other people in the question you linked.
I can only give you a somewhat more verbose alternative, based on this answer:
execute_process(COMMAND sh -c ". ${CMAKE_CURRENT_SOURCE_DIR}/config.in && echo -n $FOO" OUTPUT_VARIABLE FOO)
message(STATUS "FOO=${FOO}")
|
73,972,993 | 73,974,718 | How could I use a template function in the arguments of a template function? | I'm trying to realize some abstraction with functions in C++.
I want to write a template function which takes two functions as arguments:
template <class inpOutp, class decis>
bool is_part_of_triangle(inpOutp ft_take_data,
decis ft_result){
return (ft_take_data(ft_result));
}
The first one, ft_take_data, is a template too and takes one function as an argument:
template <class dec>
bool take_data(dec ft_result){
...
ft_result(cathetus_size, x_X, y_X);
...
}
The second one, ft_result, should be the argument of ft_take_data:
int result(int cath_size, int x_X, int x_Y){
...
}
And I try to run it all in main like:
int main(void){
return (is_part_of_triangle(take_data, result));
}
But I get this error from the compiler:
error: no matching function for call to 'is_part_of_triangle(<unresolved overloaded function type>, int (&)(int, int, int))'
return (is_part_of_triangle(take_data, result));
main.cpp:38:7: note: candidate: template<class inpOutp, class decis> bool is_part_of_triangle(inpOutp, decis)
bool is_part_of_triangle(inpOutp ft_take_data,
^~~~~~~~~~~~~~~~~~~
main.cpp:38:7: note: template argument deduction/substitution failed:
main.cpp:49:47: note: couldn't deduce template parameter 'inpOutp'
return (is_part_of_triangle(take_data, result));
How can I realize this scheme - calling a template function with two functions as arguments, one of which is a template function too (which calls the second one):
-> func1(func2, func3);
-> in func1 { func2(func3); }
-> in func2 { func3(...); }
| take_data is a template, not a real function whose address / function pointer can be passed.
In order to get a concrete function, the template must be instantiated.
That means you need to pass something like:
take_data<TYPE OF NON-TEMPLATE FUNCTION>
Or simply
take_data<decltype(FUNCTION)>
That means you can either
return is_part_of_triangle(&take_data<int (*)(int, int, int)>, &result);
Or
return is_part_of_triangle(&take_data<decltype(result)>, &result);
|
73,973,055 | 73,975,137 | Template class static string member not initialized correctly | I am trying to create a simple wrapper around glib2.0's GVariant.
I intended to have a templated class that would be used to derive the format string of a GVariant's type:
template <typename T>
struct Object
{
static const std::string format_string;
};
For now I have hard-coded basic integral types and string and determined the way to derive arrays and dictionaries as follows:
// Integral types
template <>
const std::string Object<uint8_t>::format_string{"y"};
...
// String
template <>
const std::string Object<std::string>::format_string{"s"};
// Array
template <typename T>
struct Object<std::vector<T>>
{
static const std::string format_string;
};
template <typename T>
const std::string Object<std::vector<T>>::format_string{
"a" + Object<T>::format_string};
// Dictionary
template <typename K, typename V>
struct Object<std::map<K, V>>
{
static const std::string format_string;
};
template <typename K, typename V>
const std::string Object<std::map<K, V>>::format_string{
"{" + Object<K>::format_string + Object<V>::format_string + "}"};
For tuples I use the following string deduction method:
template <typename T, typename... Ts>
std::string derive_string()
{
if constexpr (sizeof...(Ts)) {
return Object<T>::format_string + derive_string<Ts...>();
} else {
return Object<T>::format_string;
}
}
// Tuple
template <typename... Ts>
struct Object<std::tuple<Ts...>>
{
static const std::string format_string;
};
template <typename... Ts>
const std::string Object<std::tuple<Ts...>>::format_string{
"(" + derive_string<Ts...>() + ")"};
However, when I try to debug-print the format_string member of each Object class
using IntArray = std::vector<int>;
using IntStrMap = std::map<int, std::string>;
using Int_Double_Bool = std::tuple<int, double, bool>;
using Int_IntStrMap_Bool = std::tuple<int, IntStrMap, bool>;
using Int_IntDoubleMap = std::tuple<int, std::map<int, double>>;
std::cout << "bool type:\n " << Object<bool>::format_string
<< "\nuint8_t type:\n " << Object<uint8_t>::format_string
<< "\nint16_t type:\n " << Object<int16_t>::format_string
<< "\nuint16_t type:\n " << Object<uint16_t>::format_string
<< "\nint32_t type:\n " << Object<int32_t>::format_string
<< "\nuint32_t type:\n " << Object<uint32_t>::format_string
<< "\nint64_t type:\n " << Object<int64_t>::format_string
<< "\nuint64_t type:\n " << Object<uint64_t>::format_string
<< "\ndouble type:\n " << Object<double>::format_string
<< "\nstring type:\n " << Object<std::string>::format_string
<< "\n[int] type\n " << Object<IntArray>::format_string
<< "\n{int: str} type\n " << Object<IntStrMap>::format_string
<< "\n(int, double, bool) type\n "
<< Object<Int_Double_Bool>::format_string
<< "\n(int, {int: str}, bool) type\n "
<< Object<Int_IntStrMap_Bool>::format_string
<< "\n(int, {int: double}) type\n "
<< Object<Int_IntDoubleMap>::format_string
<< std::endl;
I get the following:
bool type:
b
uint8_t type:
y
int16_t type:
n
uint16_t type:
q
int32_t type:
i
uint32_t type:
u
int64_t type:
x
uint64_t type:
t
double type:
d
string type:
s
[int] type
ai
{int: str} type
{is}
(int, double, bool) type
(idb)
(int, {int: str}, bool) type
(i{is}b)
(int, {int: double}) type
(i)
As can be seen from the last two printouts, the tuple that includes types used somewhere else ({int: str}) is derived correctly, while the one that includes a type not used elsewhere ({int: double}) is not.
What am I doing wrong here?
| With C++17, you might simply do:
template <typename... Ts>
const std::string Object<std::tuple<Ts...>>::format_string{
"(" + (Object<Ts>::format_string + ... + "") + ")"};
which solves the issue with gcc
Demo
But problems still exist for clang (even for std::vector<int>).
I suspect the Static Initialization Order Fiasco.
Using functions instead of previous/cached results works for both compilers:
std::string make_string(std::type_identity<bool>) { return "b"; }
std::string make_string(std::type_identity<uint8_t>) { return "y"; }
std::string make_string(std::type_identity<int16_t>) { return "n"; }
std::string make_string(std::type_identity<uint16_t>) { return "q"; }
std::string make_string(std::type_identity<int32_t>) { return "i"; }
std::string make_string(std::type_identity<uint32_t>) { return "u"; }
std::string make_string(std::type_identity<int64_t>) { return "x"; }
std::string make_string(std::type_identity<uint64_t>) { return "t"; }
std::string make_string(std::type_identity<double>) { return "d"; }
std::string make_string(std::type_identity<std::string>) { return "s"; }
template <typename T>
std::string make_string(std::type_identity<std::vector<T>>)
{
return "a" + make_string(std::type_identity<T>{});
}
template <typename K, typename V>
std::string make_string(std::type_identity<std::map<K, V>>)
{
return "{" + make_string(std::type_identity<K>{})
+ make_string(std::type_identity<V>{}) + "}";
}
template <typename... Ts>
std::string make_string(std::type_identity<std::tuple<Ts...>>)
{
return "(" + (make_string(std::type_identity<Ts>{}) + ... + "") + ")";
}
template <typename T>
struct Object
{
static const std::string format_string;
};
template <typename T>
const std::string Object<T>::format_string = make_string(std::type_identity<T>{});
Demo
|
73,974,753 | 73,975,123 | undefined reference to 'std::filesystem' using Bazel build | I'm trying to pull and build Visqol following the instructions provided for Linux. I downloaded Bazel and everything seems fine. But when I try to execute with bazel build :visqol -c opt I get the following errors
bazel-out/k8-opt/bin/_objs/visqol/main.o:main.cc:function main: error: undefined reference to 'std::filesystem::__cxx11::path::_M_split_cmpts()'
bazel-out/k8-opt/bin/_objs/visqol/main.o:main.cc:function main: error: undefined reference to 'std::filesystem::status(std::filesystem::__cxx11::path const&)'
bazel-out/k8-opt/bin/_objs/visqol_lib/commandline_parser.o:commandline_parser.cc:function Visqol::VisqolCommandLineParser::FileExists(Visqol::FilePath const&): error: undefined reference to 'std::filesystem::__cxx11::path::_M_split_cmpts()'
bazel-out/k8-opt/bin/_objs/visqol_lib/commandline_parser.o:commandline_parser.cc:function Visqol::VisqolCommandLineParser::FileExists(Visqol::FilePath const&): error: undefined reference to 'std::filesystem::status(std::filesystem::__cxx11::path const&)'
bazel-out/k8-opt/bin/_objs/visqol_lib/commandline_parser.o:commandline_parser.cc:function Visqol::VisqolCommandLineParser::ReadFilesToCompare(Visqol::FilePath const&): error: undefined reference to 'std::filesystem::__cxx11::path::_M_split_cmpts()'
bazel-out/k8-opt/bin/_objs/visqol_lib/commandline_parser.o:commandline_parser.cc:function Visqol::VisqolCommandLineParser::BuildFilePairPaths(Visqol::CommandLineArgs const&): error: undefined reference to 'std::filesystem::__cxx11::path::_M_split_cmpts()'
bazel-out/k8-opt/bin/_objs/visqol_lib/commandline_parser.o:commandline_parser.cc:function Visqol::VisqolCommandLineParser::BuildFilePairPaths(Visqol::CommandLineArgs const&): error: undefined reference to 'std::filesystem::status(std::filesystem::__cxx11::path const&)'
bazel-out/k8-opt/bin/_objs/visqol_lib/commandline_parser.o:commandline_parser.cc:function Visqol::VisqolCommandLineParser::BuildFilePairPaths(Visqol::CommandLineArgs const&): error: undefined reference to 'std::filesystem::status(std::filesystem::__cxx11::path const&)'
bazel-out/k8-opt/bin/_objs/visqol_lib/commandline_parser.o:commandline_parser.cc:function Visqol::VisqolCommandLineParser::Parse(int, char**): error: undefined reference to 'std::filesystem::current_path[abi:cxx11]()'
bazel-out/k8-opt/bin/_objs/visqol_lib/commandline_parser.o:commandline_parser.cc:function Visqol::VisqolCommandLineParser::Parse(int, char**): error: undefined reference to 'std::filesystem::current_path[abi:cxx11]()'
collect2: error: ld returned 1 exit status
I've seen similar issues in C++ on the web, but most of these are builds with make commands that solve this by adding the -lstdc++fs or LDLIBS=-lstdc++fs options. However, those are unrecognized by Bazel.
I'm on Debian 10.13, my Bazel version is 5.3.1, and my g++ version seems to be 8.3.0.
| In accordance with how to use std::filesystem on gcc 8?, change the line build --linkopt=-ldl in .bazelrc into:
build --linkopt=-lstdc++fs --linkopt=-ldl
You could also open a ticket in the Visqol repo to tell them about this problem and get a proper fix in future releases of Visqol.
|
73,975,225 | 73,975,815 | Is the initializer for a const static data member considered a default member initializer? | Is the initializer for a const static data member considered a default member initializer?
The relevant wording is [class.mem.general]/10:
A brace-or-equal-initializer shall appear only in the declaration of a
data member. (For static data members, see [class.static.data]; for
non-static data members, see [class.base.init] and [dcl.init.aggr]). A
brace-or-equal-initializer for a non-static data member specifies
a default member initializer for the member [..]
So for example:
constexpr int f() { return 0; }
struct A {
static const int I = f();
};
Is the brace-or-equal-initializer f() considered a default member initializer?
| No.
Static data members aren't initialised in constructors. f() is just the initialiser for A::I.
A default member initialiser is used to initialise a non-static data member in each constructor where the mem-initializer-list doesn't otherwise initialise that member. That is, it's a default for the initialisers of that member.
[class.base.init#9]
|
73,975,409 | 73,981,827 | A simple threaded program: questions and doubts about detach() | Here is my threaded program, which is very simple:
#include <iostream>
#include <thread>
using namespace std;
void myprint(int a)
{
cout << a << endl;
}
int main()
{
thread obj(myprint, 3);
obj.detach();
cout << "end!!!" << endl;
}
What I can't understand is that the program's sub-thread occasionally prints 3 multiple times, such as:
3
3
end!!!
I have browsed through many sources but have not found a good answer to this question.
I know that std::cout is not thread-safe and understand garbled output due to races. The issue of duplicate output, however, doesn't seem to have much to do with a race, as it doesn't occur in join() contexts.
The relevant configuration of the programming environment is as follows
Linux version 5.15.0-48-generic (buildd@lcy02-amd64-080)
(gcc (Ubuntu 11.2.0-19ubuntu1) 11.2.0,
GNU ld (GNU Binutils for Ubuntu) 2.38)
GCC version:
gcc version 11.2.0
| Aha, this morning I had a stroke of genius and solved the problem. You just need to measure the CPU time that join() and detach() take to execute, and you will get the answer.
I think @Someprogrammerdude is right: the join simply takes longer to execute, which is why the duplicate output only occurs with detach().
|
73,975,615 | 73,975,665 | Difference in queue sizes giving arbitrarily large value C++ | #include <iostream>
#include <queue>
using namespace std;
int main()
{
// cout<<"Hello World";
priority_queue<int> spq; // max heap
priority_queue <int, vector<int>, greater<int>> lpq; // min heap
spq.push(1);
lpq.push(2);
lpq.push(3);
cout << spq.size() - lpq.size() << endl;
return 0;
}
This code is unexpectedly giving me the very large value 18446744073709551615.
I am not able to understand the issue here.
| You have unsigned integer wrap-around.
spq.size() is 1, lpq.size() is 2.
So when you do 1 - 2, since you're using unsigned numbers, you don't end up in the negatives, you instead wrap around to the largest unsigned number you have, 18446744073709551615.
|
73,975,788 | 73,976,575 | C++/rapidjson - unable to iterate over array of objects | I have the following json:
{
"data": {
"10": {
"id": 11,
"accountId": 11,
"productId": 2,
"variantId": 0,
"paymentMethod": "PayPal",
"amount": "9.99",
"status": "confirmed",
"created_at": "2019-12-02T21:55:30.000000Z",
"updated_at": "2019-12-02T21:56:57.000000Z"
},
"12": {
"id": 14,
"accountId": 69,
"productId": 3,
"variantId": 0,
"paymentMethod": "PayPal",
"amount": "19.99",
"status": "confirmed",
"created_at": "2019-12-05T09:09:21.000000Z",
"updated_at": "2019-12-05T09:14:09.000000Z"
}
}
}
Here is how I parse it:
// parse JSON response
rapidjson::Document parsed_response;
parsed_response.Parse(readBuffer.c_str());
const rapidjson::Value& data = parsed_response[ "data" ];
for ( rapidjson::Value::ConstMemberIterator iterator = data.MemberBegin(); iterator != data.MemberEnd(); ++iterator )
{
std::cout << iterator->name.GetString() << std::endl;
}
This gives following output:
10
12
I am unable unfortunately to get into objects 10 and 12 and get the accountId or any other member of objects 10 and 12.
How can I get inside these objects and retrieve any data there like accountId, productId, variantId etc.. ?
| iterator->name refers to the bit before the : in the json, to get at the other side you need iterator->value.
for ( auto & data : parsed_response[ "data" ].GetObject() )
{
std::cout << data.name.GetString() << ' '
<< data.value["accountId"].GetInt() << ' '
<< data.value["productId"].GetInt() << ' '
<< data.value["variantId"].GetInt()
<< std::endl;
}
|
73,976,001 | 73,977,954 | g++ undefined reference to while linking with a custom shared library | I am trying to link my C++ program with a custom shared library. The shared library is written in C, however, I don't think that it should matter. My library is called libfoo.so. Its full path is /home/xyz/customlib/libfoo.so. I've also added /home/xyz/customlib to my LD_LIBRARY_PATH so ld should be able to find it. When I run nm -D libfoo.so, I get:
...
00000000000053a1 T myfunc
000000000000505f T foo
0000000000004ca9 T bar
00000000000051c6 T baz
000000000000527f T myfunc2
...
So I think that my library is correctly compiled. I am trying to link it to my C++ file test.cpp by running
g++ -L/home/xyz/customlib -Og -g3 -ggdb -fsanitize=address test.cpp -o test -lfoo
However, I am getting the following errors:
/usr/bin/ld: in function sharedlib_test1()
/home/xyz/test.cpp:11: undefined reference to `myfunc()'
/usr/bin/ld: in function sharedlib_test2()
/home/xyz/test.cpp:33: undefined reference to `foo()'
...
What am I doing wrong? How can I resolve this? Note that I have renamed the files and the shared library as I cannot share the exact file and function names.
| As indicated by the comments in the post, it turns out that I was missing the extern "C" in the header file. So in my code, I wrapped my include statement in an extern "C" which solved the issue.
TLDR: The header include in the code should look like:
extern "C" // Import C style functions
{
#include <foo.h>
}
|
73,976,245 | 73,976,378 | What are the use-cases of "complete-class context"? What's the benefit that a "complete-class context" provides? | In current working draft, the definition of complete-class context is: (§ 11.4.1 [class.mem.general]/7):
A complete-class context of a class (template) is a
(7.1) function body ([dcl.fct.def.general]),
(7.2) default argument ([dcl.fct.default]),
(7.3) default template argument ([temp.param]),
(7.4) noexcept-specifier ([except.spec]), or
(7.5) default member initializer
within the member-specification of the class or class template.
In fact, I don't understand the intention of this wording. What are the use-cases of "complete-class context"? What's the benefit that a "complete-class context" provides? So any further explanation will be appreciated.
For instance, what does mean that a function body is a complete-class context? Is this mean, for example, ..
struct X {
void f(){
X x{};
}
};
.. that I can create an X object within the function body of X::f? If yes, does it mean anything else?
| If we did not have a complete-class context then we could not write a class like
struct foo
{
void bar() { std::cout << kitty; }
std::string kitty = "meow";
};
because kitty isn't known about until after bar is defined. You would have to write the class like
struct foo
{
void bar();
std::string kitty = "meow";
};
void foo::bar() { std::cout << kitty; }
|
73,976,312 | 73,988,791 | open62541 client fails when calling method with custom datatype input argument | I'm using open62541 to connect to an OPC/UA server and I'm trying to call methods that a certain object on that server provides. Those methods have custom types as input arguments; for example, the following method takes a structure of three booleans:
<opc:Method SymbolicName="SetStatusMethodType" ModellingRule="Mandatory">
<opc:InputArguments>
<opc:Argument Name="Status" DataType="VisionStatusDataType" ValueRank="Scalar"/>
</opc:InputArguments>
<opc:OutputArguments />
</opc:Method>
Here, VisionStatusDataType is the following structure:
<opc:DataType SymbolicName="VisionStatusDataType" BaseType="ua:Structure">
<opc:ClassName>VisionStatus</opc:ClassName>
<opc:Fields>
<opc:Field Name="Camera" DataType="ua:Boolean" ValueRank="Scalar"/>
<opc:Field Name="StrobeController" DataType="ua:Boolean" ValueRank="Scalar"/>
<opc:Field Name="Server" DataType="ua:Boolean" ValueRank="Scalar"/>
</opc:Fields>
</opc:DataType>
Now, when calling the method, I'm encoding the data into an UA_ExtensionObject, and wrap that one as an UA_Variant to provide it to UA_Client_call. The encoding looks like this:
void encode(const QVariantList& vecqVar, size_t& nIdx, const DataType& dt, std::back_insert_iterator<std::vector<UAptr<UA_ByteString>>> itOut)
{
if (dt.isSimple())
{
auto&& qVar = vecqVar.at(nIdx++);
auto&& uaVar = convertToUaVar(qVar, dt.uaType());
auto pOutBuf = create<UA_ByteString>();
auto nStatus = UA_encodeBinary(uaVar.data, dt.uaType(), pOutBuf.get());
statusCheck(nStatus);
itOut = std::move(pOutBuf);
}
else
{
for (auto&& dtMember : dt.members())
encode(vecqVar, nIdx, dtMember, itOut);
}
}
UA_Variant ToUAVariant(const QVariant& qVar, const DataType& dt)
{
if (dt.isSimple())
return convertToUaVar(qVar, dt.uaType());
else
{
std::vector<UAptr<UA_ByteString>> vecByteStr;
auto&& qVarList = qVar.toList();
size_t nIdx = 0UL;
encode(qVarList, nIdx, dt, std::back_inserter(vecByteStr));
auto pExtObj = UA_ExtensionObject_new();
pExtObj->encoding = UA_EXTENSIONOBJECT_ENCODED_BYTESTRING;
auto nSizeAll = std::accumulate(vecByteStr.cbegin(), vecByteStr.cend(), 0ULL, [](size_t nSize, const UAptr<UA_ByteString>& pByteStr) {
return nSize + pByteStr->length;
});
auto&& uaEncoded = pExtObj->content.encoded;
uaEncoded.typeId = dt.uaType()->typeId;
uaEncoded.body.length = nSizeAll;
auto pData = uaEncoded.body.data = new UA_Byte[nSizeAll];
nIdx = 0UL;
for (auto&& pByteStr : vecByteStr)
{
memcpy_s(pData + nIdx, nSizeAll - nIdx, pByteStr->data, pByteStr->length);
nIdx += pByteStr->length;
}
UA_Variant uaVar;
UA_Variant_init(&uaVar);
UA_Variant_setScalar(&uaVar, pExtObj, &UA_TYPES[UA_TYPES_EXTENSIONOBJECT]);
return uaVar;
}
}
The DataType class is a wrapper for the UA_DataType structure; the original open62541 type can be accessed via DataType::uaType().
Now, once a have the variant (containing the extension object), the method call looks like this:
auto uavarInput = ToUAVariant(qvarArg, dtInput);
UA_Variant* pvarOut;
size_t nOutSize = 0UL;
auto nStatus = UA_Client_call(m_pClient, objNode.nodeId(), m_uaNodeId, 1UL, &uavarInput, &nOutSize, &pvarOut);
The status is 2158690304, i.e. BadInvalidArgument according to UA_StatusCode_name.
Is there really something wrong with the method argument? Are we supposed to send ExtensionObjects, or what data type should the variant contain?
Is it possible that the server itself (created using the .NET OPC/UA stack) is not configured correctly?
N.B., the types here are custom types; that is, the encoding is done manually (see above) by storing the byte representation of all members next to each other in an UA_ByteString - just the opposite of what I'm doing when reading variables or output arguments, which works just fine.
| The problem is the typeId of the encoded object. In order for the server to understand the received data, it needs to know the NodeId of the encoding, not the NodeId of the type itself. That encoding can be found by following the HasEncoding reference (named "Default Binary") of the type:
auto pRequest = create<UA_BrowseRequest>();
auto pDescr = pRequest->nodesToBrowse = UA_BrowseDescription_new();
pRequest->nodesToBrowseSize = 1UL;
pDescr->nodeId = m_uaNodeId;
pDescr->resultMask = UA_BROWSERESULTMASK_ALL;
pDescr->browseDirection = UA_BROWSEDIRECTION_BOTH;
pDescr->referenceTypeId = UA_NODEID_NUMERIC(0, UA_NS0ID_HASENCODING);
auto response = UA_Client_Service_browse(m_pClient, *pRequest);
for (auto k = 0UL; k < response.resultsSize; ++k)
{
auto browseRes = response.results[k];
for (auto n = 0UL; n < browseRes.referencesSize; ++n)
{
auto browseRef = browseRes.references[n];
if (ToQString(browseRef.browseName.name).contains("Binary"))
{
m_nodeBinaryEnc = browseRef.nodeId.nodeId;
break;
}
}
}
Once you have that NodeId, you pass it to UA_ExtensionObject::content::encoded::typeId:
auto pExtObj = UA_ExtensionObject_new();
pExtObj->encoding = UA_EXTENSIONOBJECT_ENCODED_BYTESTRING;
auto nSizeAll = std::accumulate(vecByteStr.cbegin(), vecByteStr.cend(), 0ULL, [](size_t nSize, const UAptr<UA_ByteString>& pByteStr) {
return nSize + pByteStr->length;
});
auto&& uaEncoded = pExtObj->content.encoded;
uaEncoded.typeId = dt.encoding();
uaEncoded.body.length = nSizeAll;
auto pData = uaEncoded.body.data = new UA_Byte[nSizeAll];
nIdx = 0UL;
for (auto&& pByteStr : vecByteStr)
{
memcpy_s(pData + nIdx, nSizeAll - nIdx, pByteStr->data, pByteStr->length);
nIdx += pByteStr->length;
}
|
73,976,552 | 73,977,235 | Sort a c++ list and remove all duplicate strings | #include <cstdlib>
#include <fstream>
#include <iostream>
#include <iterator>
#include <list>
#include <string>
using namespace std;
istream& GetLines(istream& is, list<string>& list) {
string str;
if (list.size() > 0) {
list.clear();
}
while (getline(is, str, is.widen('\n'))) {
list.push_back(str);
}
return is;
}
void Print(const list<string>& list) {
for (auto it = list.cbegin(); it != list.cend(); it++) {
cout << *it << endl;
}
}
void SortAndUnique(list<string>& list) {}
int main() {
list<string> list;
ifstream f(
"/home/jacksparrow/Downloads/university/project/module3/Module_3/"
"T2-list/src /main.cpp");
// Read the file into list
if (!f.is_open() || !GetLines(f, list).eof()) {
cout << "Opening error: Error reading main.cpp" << endl;
return EXIT_FAILURE;
}
Print(list);
cout << "---" << endl;
// Sort and unique
SortAndUnique(list);
Print(list);
cout << "---" << endl;
// Print again
Print(list);
return 0;
}
I have the above code in a file "main.cpp", and I read this file into a list (list<string> list;). What I want to do now is sort that list into alphabetical order and remove the duplicate strings. The code above is my main.cpp file, in which I include a file (#include "list.cpp") that does 3 things:
Reads the file data into the list "getline()".
Prints the list Print()
sort that "list" into alphabetical order and, removes the duplicate strings SortAndUnique().
The proper way to solve this is, as suggested, to read the documentation for the container you are using. Under std::list Operations, you'll find this list of member functions:

merge: merges two sorted lists
splice: moves elements from another list
remove, remove_if: removes elements satisfying specific criteria
reverse: reverses the order of the elements
unique: removes consecutive duplicate elements
sort: sorts the elements
The linked member functions are overloaded:
std::list::sort:
void sort();
template< class Compare >
void sort( Compare comp );
std::list::unique:
size_type unique();
template< class BinaryPredicate >
size_type unique( BinaryPredicate p );
In your case you do not need to provide a Compare functor or a BinaryPredicate since the overloads taking no arguments already do what you want.
|
73,976,779 | 73,977,676 | Trying to use copy and swap idiom on operator= | While trying to implement MyVector I end up with:
#include <iostream>
#include <string>
using namespace std;
template <typename T>
class MyVector
{
int m_size = 0;
int m_capacity = 0;
T* m_data = nullptr;
public:
MyVector()
{
cout << "defautl ctor" << endl;
realloc(2);
}
MyVector(const MyVector& v)
: m_size(v.m_size),
m_capacity(v.m_capacity)
{
cout << "copy ctor" << endl;
m_data = new T[m_size];
*m_data = *v.m_data;
}
MyVector(MyVector&& v)
: m_size(v.m_size),
m_capacity(v.m_capacity)
{
cout << "move ctor" << endl;
m_data = move(v.m_data);
v.m_data = nullptr;
v.m_size = 0;
v.m_capacity = 0;
}
MyVector& operator= (const MyVector& v)
{
cout << "copy assignment operator" << endl;
// m_size = v.m_size;
// m_capacity = v.m_capacity;
// *m_data = *v.m_data;
MyVector<int> copy = v;
swap(*this, copy);
return *this;
}
void push_back(const T& value)
{
if (!(m_size < m_capacity))
{
// cout << value << " size is " << m_size << " capacity is " << m_capacity << endl;
realloc(m_size*2);
// return;
}
m_data[m_size++] = value;
}
T& operator[] (int index)
{
cout << "index " << index << " of size " << m_size << endl;
if (!(index < m_size))
cout << "index out of bounds" << endl;
return m_data[index];
}
int size()
{
return m_size;
}
T* begin()
{
return &m_data[0];
}
T* end()
{
return &m_data[size()];
}
private:
void realloc(int new_capacity)
{
// cout << __func__ << " new capacity " << new_capacity << endl;
T* data = new T[new_capacity];
for (int i = 0; i < m_size; i++)
data[i] = m_data[i];
delete[] m_data;
m_data = data;
m_capacity = new_capacity;
}
};
int main(int argc, char** argv)
{
cout << "starting..." << endl;
MyVector<int> a;
a.push_back(7);
MyVector<int> d;
d = a;
cout << a[0] << endl;
cout << d[0] << endl;
return 0;
}
Where the operator= is
MyVector& operator= (const MyVector& v)
{
cout << "copy assignment operator" << endl;
// m_size = v.m_size;
// m_capacity = v.m_capacity;
// *m_data = *v.m_data;
MyVector<int> copy = v; // copy and swap
swap(*this, copy);
return *this;
}
However, this led to what seems to be recursive behavior. So, is my understanding of the copy-and-swap approach wrong? Or is there something else I'm missing?
| As stated in comments, you did not implement a swap() function for MyVector, so the statement swap(*this, copy); is calling std::swap() (one of the many pitfalls of using using namespace std;), which will invoke your operator= again, hence the recursive behavior you are seeing.
Also, your copy constructor is not implemented correctly. It is not copying all of the elements from the input array into the new array. It is copying only the 1st element.
Also, you are missing a destructor to free the allocated array.
Also, since MyVector has both copy and move constructors, your two assignment operator='s can (and should) be merged into one operator that takes a MyVector by value. This lets the compiler decide whether to call the operator with copy or move semantics depending on whether the caller is passing in an lvalue or an rvalue, respectively. The operator can then just swap whatever it is given, as the input will have already been copied or moved before the operator is entered.
Try something more like this:
#include <iostream>
#include <string>
#include <algorithm>
#include <stdexcept>
template <typename T>
class MyVector
{
int m_size = 0;
int m_capacity = 0;
T* m_data = nullptr;
public:
MyVector()
{
std::cout << "default ctor" << std::endl;
realloc(2);
}
MyVector(const MyVector& v)
{
std::cout << "copy ctor" << std::endl;
realloc(v.m_capacity);
std::copy_n(v.m_data, v.m_size, m_data);
m_size = v.m_size;
}
MyVector(MyVector&& v)
{
std::cout << "move ctor" << std::endl;
v.swap(*this);
}
~MyVector()
{
std::cout << "dtor" << std::endl;
delete[] m_data;
}
MyVector& operator= (MyVector v)
{
std::cout << "assignment operator" << std::endl;
v.swap(*this);
return *this;
}
void push_back(const T& value)
{
if (m_size >= m_capacity)
{
// std::cout << value << " size is " << m_size << " capacity is " << m_capacity << std::endl;
realloc(m_size * 2);
}
m_data[m_size] = value;
++m_size;
}
T& operator[] (int index)
{
std::cout << "index " << index << " of size " << m_size << std::endl;
if (index < 0 || index >= m_size)
throw std::out_of_range("index out of bounds");
return m_data[index];
}
int size() const
{
return m_size;
}
T* begin()
{
return &m_data[0];
}
T* end()
{
return &m_data[m_size];
}
void swap(MyVector &other)
{
std::swap(m_data, other.m_data);
std::swap(m_size, other.m_size);
std::swap(m_capacity, other.m_capacity);
}
private:
void realloc(int new_capacity)
{
// std::cout << __func__ << " new capacity " << new_capacity << std::endl;
T* data = new T[new_capacity];
std::copy_n(m_data, m_size, data);
std::swap(m_data, data);
m_capacity = new_capacity;
delete[] data;
}
};
// for std::swap() to use via ADL...
void swap(MyVector &v1, MyVector &v2)
{
v1.swap(v2);
}
|
73,976,814 | 73,988,934 | converting (roughly) the current time into a 32 bit integer | I just need a way to roughly convert current time into a 32 bit integer. It doesn't need to be very accurate.. even accuracy of only a couple minutes would be ok.
The main idea is that the next time I need to check it, it should be higher than, or at least not lower than, the previous time I retrieved this value. (I don't check very often.)
I use C++ 20 (the MSVC version of it, whatever comes with VS 2022)
|
I use C++ 20 (the MSVC version of it, whatever comes with VS 2022)
Excellent. This means that you have very good tools to work with in the chrono department.
Because of your requirements, there are only a few solutions I would consider acceptable.
You should not deal in local time as it jumps around too much due to daylight saving or other political decisions. You should only deal with UTC.
You should not track time with precision finer than seconds. You could track time in minutes, but seconds will do, and be considerably more responsive.
Because of your 32 bit storage requirements, you should not use the Unix Time epoch of 1970-01-01 00:00:00 UTC. This will overflow signed 32 bit storage in 2038, which is only 16 years from now. 16 years may seem like a long time. But software has a way of lasting a long time. You want to set your time bombs for long after your grandchildren have lived a good long life.
I recommend a signed 32 bit integer vs unsigned so that subtraction larger from smaller will get you a negative amount, instead of wrapping to a large positive. However if you want to use unsigned, you can double your range, and it is a simple change in the code.
Here is a small function that meets these requirements.
#include <chrono>
// seconds since 2020
int
get_current_time()
{
using namespace std::chrono;
constexpr sys_days epoch = 2020y/1/1;
auto now = floor<seconds>(system_clock::now());
return static_cast<int>((now - epoch)/1s);
}
This will give you seconds since 2020, and will not overflow until the year 2088. It would also be easy for a future maintenance programmer to adjust the epoch 60 years from now.
The variable now holds a count of seconds since the Unix Time epoch of 1970-01-01 00:00:00 UTC. However it is held in signed 64 bit storage which won't overflow until well after the sun turns into a red giant.
The expression (now - epoch)/1s subtracts off the epoch, giving you 64 bit std::chrono::seconds since 2020-01-01 00:00:00 UTC. Dividing that by 1s gives you a signed 64 bit integral type without changing the value. (now - epoch).count() would do the same thing. Use whichever expression you like better.
Finally just static_cast that signed 64 bit integral type to whatever your desired 32 bit integral type is.
|
73,977,225 | 74,234,480 | vs code c++: unable to establish a connection to GDB | I'm trying to set up a OpenGL environment in vs code, I'm using MinGW64 with msys for compilation and package management, I wrote a tasks and launch json files for generating builds, but when I run the build that was generated I get an error stating "unable to establish connection to GDB" and my app aborts.
this is my launch.json:
{
"version": "0.2.0",
"configurations":
[
{
"name": "Lauch OpenGL App",
"type": "cppdbg",
"request": "launch",
"preLaunchTask": "Build OpenGL App",
"cwd": "${workspaceRoot}",
"program": "${workspaceRoot}\\Build\\app",
"stopAtEntry": false,
"externalConsole": true,
"MIMode": "gdb",
"miDebuggerPath": "C:\\msys64\\mingw64\\bin\\gdb.exe",
"setupCommands":
[
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
}
]
}
]
}
This is my tasks.json:
{
"tasks":
[
{
"label": "Compile source code",
"type": "shell",
"command": "C:\\msys64\\mingw64\\bin\\g++.exe",
"args":
[
"-c",
"main.cpp",
"-o",
"Build\\Temp\\main.o"
]
},
{
"label": "Link Libraries",
"type": "shell",
"command": "C:\\msys64\\mingw64\\bin\\g++.exe",
"args":
[
"-o",
"Build\\app",
"Build\\Temp\\main.o",
"-L.",
"-lglfw3",
"-lopengl32",
"-lgdi32"
]
},
{
"label": "Cleanup",
"type": "shell",
"command": "Remove-Item",
"args":
[
"Build\\Temp\\*.*"
]
},
{
"label": "Build OpenGL App",
"dependsOrder": "sequence",
"dependsOn": ["Compile source code", "Link Libraries", "Cleanup"]
}
],
"version": "2.0.0"
}
When I run my build tasks everything works until the moment the app launches then the following error is shown:
And this is printed to the console:
| No clue what the problem was, but I ended up reinstalling everything (msys, mingw, g++, gdb, etc.) and the issue was fixed.
|
73,977,688 | 73,980,929 | Unreal Engine 5.0.3 ERROR: Could not find NetFxSDK install dir | I was trying to create a new C++ Unreal Engine project, but every time I do it (Only with C++), I get a popping window that says:
Running C:/Program Files/Epic Games/UE_5.0/Engine/Binaries/DotNET/UnrealBuildTool/UnrealBuildTool.exe -projectfiles -project="C:/Users/user/Documents/Unreal Projects/MyProject/MyProject.uproject" -game -rocket -progress
Log file: C:\Users\user\AppData\Local\UnrealBuildTool\Log_GPF.txt
Some Platforms were skipped due to invalid SDK setup: IOS, Android, Linux, LinuxArm64.
See the log file for detailed information
Discovering modules, targets and source code for project...
ERROR: Could not find NetFxSDK install dir; this will prevent SwarmInterface from installing. Install a version of .NET Framework SDK at 4.6.0 or higher.
| I had this same problem earlier today.
From Visual Studio Installer you can install .NET Framework SDK 4.6.0
Main GUI for Visual Studio Install
From here click on the modify button to add features and functionally.
Modifying VS Installer Screen
Then you just need to check the box for the .NET Framework and follow the install steps.
This worked for me, hopefully it'll work for you too!
|
73,977,781 | 73,978,670 | Finding nearest "ancestor" in tree of types | I am forming a "tree" of types in the following way:
template <typename T, typename PARENT, typename ... CHILDREN>
class Node {};
class X;
class A;
class AA;
class AB;
class B;
class BA;
class BB;
class X : public Node<X, void, A, B> {};
class A : public Node<A, X, AA, AB> {};
class AA : public Node<AA, A> {};
class AB : public Node<AB, A> {};
class B : public Node<B, X, BA, BB> {};
class BA : public Node<BA, B> {};
class BB : public Node<BB, B> {};
I need to write a "traits" type such that I can get the "nearest ancestor" given CHILDREN... and PARENT from within the Node base class. For example
nearest_ancestor<X, BB>::type should be B (because B is the next accessible leaf from X "on the way to" BB)
Please let me know if what I am asking doesn't make sense.
| I think I figured it out.
Assuming the Node base class has the following:
template <typename T, typename PARENT, typename ... CHILDREN>
class Node {
public:
    using parent_type = PARENT;
};
Then nearest_ancestor can be defined as:
template <typename T, typename U, typename ENABLE = void>
struct nearest_ancestor {
using type = typename nearest_ancestor<T, typename U::parent_type>::type;
};
template <typename T, typename U>
struct nearest_ancestor<T, U, std::enable_if_t<std::is_same_v<T, typename U::parent_type>>> {
using type = U;
};
|