question_id | answer_id | title | question | answer |
|---|---|---|---|---|
72,849,474 | 72,849,830 | what is boost::geometry::correct doing in this case? | The following code generates the output I expect:
MULTILINESTRING((5 5,4 4),(2 2,1 1))
However, if I remove the call to boost::geometry::correct() it returns the incorrect result:
MULTILINESTRING((5 5,1 1))
Code below:
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/geometries/linestring.hpp>
#include <boost/geometry/multi/geometries/multi_linestring.hpp>
#include <iostream>
namespace bg = boost::geometry;
namespace bgm = boost::geometry::model;
using point = bgm::point<double, 2, bg::cs::cartesian>;
using polygon = bgm::polygon<point>;
using polyline = bgm::linestring<point>;
using polylines = bgm::multi_linestring<polyline>;
int main()
{
polygon poly = {
{ {1,5}, {5,5}, {5,1}, {1,1} },
{ {2,4}, {2,2}, {4,2}, {4,4} }
};
polyline line = { {6,6},{0,0} };
bg::correct(poly);
polylines result;
bg::intersection(poly, line, result);
std::cout << bg::wkt(result) << "\n";
return 0;
}
The geometry defined in the above looks like the following. The red line segment and blue polygon with a hole should intersect to the green line segments.
I defined the vertices of the hole in counter-clockwise order, as it is typical in computational geometry for holes to have the reverse orientation of non-holes (it's also typical for non-holes to have counter-clockwise orientation, but boost::geometry seems to default to clockwise). Flipping the orientation of the hole does not fix the problem (although I do get a different wrong result). I am not sure what else correct could be doing.
| boost::geometry::correct() is closing both the inner and outer rings of the polygon.
That is, the following returns the expected output:
namespace bg = boost::geometry;
namespace bgm = boost::geometry::model;
using point = bgm::point<double, 2, bg::cs::cartesian>;
using polygon = bgm::polygon<point>;
using polyline = bgm::linestring<point>;
using polylines = bgm::multi_linestring<polyline>;
int main()
{
polygon poly = {
{ {1,5}, {5,5}, {5,1}, {1, 1}, {1,5}},
{ {2,4}, {2,2}, {4,2}, {4,4}, {2,4}}
};
polyline line = { {6,6},{0,0} };
polylines result;
bg::intersection(poly, line, result);
std::cout << bg::wkt(result) << "\n";
return 0;
}
|
72,849,505 | 72,849,639 | Public Setters vs. Friend Class vs. Specific Constructor | I'm making a simple programming language, and have encountered the following problem:
I have a Parser class which has methods that return derived classes of the Node struct. Currently all of the Parser class methods look something like this:
DerivedNode Parser::ParseDerived()
{
DerivedNode node{};
node.Field1 = 0;
node.Field2 = 10;
return node;
}
I recently switched the Node type from being a struct to a class, because I want to implement some OOP, so I made the fields of the Node class private. I'm refactoring the Parser class and struggling to decide which of these 3 options is best:
// Option 1: Public setters
DerivedNode Parser::ParseDerived()
{
DerivedNode node{};
node.SetField1(0);
node.SetField2(10);
return node;
}
// Option 2: Making Parser a friend of all Node derived classes
DerivedNode Parser::ParseDerived()
{
DerivedNode node{};
node.m_Field1 = 0;
node.m_Field2 = 10;
return node;
}
// Option 3: Storing in variables and calling a constructor
DerivedNode Parser::ParseDerived()
{
size_t field1 = 0;
size_t field2 = 10;
return DerivedNode{ field1, field2 };
}
I would love to hear which of these methods is the best and some arguments why (sorry for my English).
| A class is supposed to hold an invariant. Unless every combination of field values is valid, the 2nd option is strongly discouraged; the 3rd is recommended. It's also the way to go for immutable structures, which help a lot with debugging and testing.
|
72,849,627 | 72,850,561 | why does the code execute list in a wrong sequence? | I am trying to concatenate two array-based lists (in C++): empty the first into the second, and if the insertion fails (maximum size reached), keep each list as it was before the insertion.
The code runs, but the problem is that it produces the list in the wrong order.
For example, the first list contains 10 20 30 40 50
and the second 100 200 300.
After concatenation the result is 100 200 300 10 30 50 20 40,
but I want it to be 100 200 300 10 20 30 40 50.
the code I wrote :
#include <iostream>
#include <cstdlib>
using namespace std;
const int maxsize=100;
template<class T>
class list{
T entry[maxsize];
int count;
public:
list(){
count=0;
}
bool empty(){
return count==0;
}
bool insert(int pos, T item){
if(pos<0 || pos>count) return 0;
if(count>=maxsize) return 0;
for(int i=count-1; i>=pos; i--)
entry[i+1]=entry[i];
entry[pos]=item;
count++;
return 1;
}
bool remove(int pos){
if(pos<0 || pos>=count) return 0;
for(int i=pos; i<count-1; i++)
entry[i]=entry[i+1];
count--;
return 1;
}
bool retrieve(int pos, int &item){
if(pos<0 || pos>=count) return 0;
item=entry[pos];
return 1;
}
bool replace(int pos, int item){
if(pos<0 || pos>=count) return 0;
entry[pos]=item;
return 1;
}
int size(){
return count;
}
};
void print(list<int>L){
int item;
for(int i=0;i<L.size();i++){
L.retrieve(i,item);
cout<<item<<" ";
}
cout<<endl;
}
void fill(list<int>&L, int n){
for(int i=1; i<n; i++)
L.insert(L.size(),rand()%100);
}
bool concat (list<int>&l1,list<int>&l2){
int item;
int c=l2.size();
while(!l1.empty()) {
for(int i=0; i<l1.size(); i++){
l1.retrieve(i,item);
if(l2.insert(l2.size(),item)==0){
for(int j=c; j>l2.size()-1; j--){
l2.retrieve(j,item);
l1.insert(l1.size(),item);
l2.remove(j);
}
return 0;
}
else {
c++;
l1.remove(i);
}
}
}
return 1;
}
int main(){
list<int>L1, L2;
L1.insert(0,10);
L1.insert(1,20);
L1.insert(2,30);
L1.insert(3,40);
L1.insert(4,50);
L2.insert(0,123);
L2.insert(1,143);
L2.insert(2,345);
L2.insert(3,545);
L2.insert(4,536);
print(L1);
print(L2);
cout<<"<<1: succeeded, 0: failed>> "<<concat(L1,L2)<<endl;
cout<<"First List: ";
print(L1);
cout<<"Second List: ";
print(L2);
}
| First some nagging:
Stop lying. This is not a list. It's a vector.
0 is not a bool, use true/false
if you are going to return a bool to say if something failed then actually check the return value
don't use out parameters
use exceptions, std::optional or std::expected for error handling with return values
retrieve and replace should be named operator[] or at and have a const and not-const flavour
int maxsize? Seriously? I can't have lists with more than 2 billion items?
maxsize could be a template parameter
your indentation is broken
Lets look at your code line by line:
bool concat (list<int>&l1,list<int>&l2){
So you want to concat l1 l2 into a single list.
int item;
Don't declare variables before you need them.
int c=l2.size();
while(!l1.empty()) {
Wait, if you want to add l2 to l1 then why are you looping till l1 is empty?
Did you implement adding l1 to l2 instead of l2 to l1? So in your concat the arguments are reversed?
for(int i=0; i<l1.size(); i++){
And now you loop a second time, this time over all elements of l1. So for some reason this loop won't make l1 empty so you have to try over and over with the while?
l1.retrieve(i,item);
Get the i-th item of l1.
if(l2.insert(l2.size(),item)==0){
And insert it at the end of l2.
If it fails:
for(int j=c; j>l2.size()-1; j--){
Starting with the old size of the list, as long as it's the last element, so actually just for the last element at position c:
l2.retrieve(j,item);
Retrieve the item one past the end of the list. So this fails and item is still the i-th element of l1.
l1.insert(l1.size(),item);
Add the i-th element of l1 to the end of l1 and ignore if it fails.
l2.remove(j);
Remove the element one past the end of l2 from the list. So this too just fails.
}
return 0;
Tell everyone we failed and now l1 and l2 are both possibly changed and maybe some item was lost.
} else {
If inserting the element in l2 succeeded:
c++;
Update the index for the end of l2 so we can't actually restore l2 to its original state if things fail later on.
l1.remove(i);
and remove the item to complete the move.
That means item 1 is now item 0. But next loop i = 1, which is item 2 in the original list. So the for loop actually only moves every second item
to l2, which is the reason for the while.
Too bad that will scramble l1 as you append it to l2.
}
}
}
return 1;
But hey, success. We didn't overflow l2 and lose an item.
}
The list has a count. Use it to check at the start whether there is enough space, instead of undoing things when they break midway; that way lies madness.
Then either use the while with a remove on every iteration, or use the for and set l1.count = 0; at the end. The latter is obviously faster (O(n) vs. O(n^2)).
|
72,849,632 | 72,849,787 | How to find occurrences of a pair in a multimap | I have been trying to write a program that finds the occurrences of a pair in a multimap. So far I am thinking of using multimap::equal_range.
For example, if my multimap is {(BO, MA), (CL, SC), (DA, TX), (FL, MI), (FL, MI), (MI, FL), (OR, FL)} and I search for all occurrences of (FL, MI) in the multimap, then my program should ideally return two iterators, pointing to the 3rd and 5th elements. I can then subtract the two iterators to find the number of occurrences of the pair. However, multimap::equal_range only checks if a key is equivalent to a single value.
Is there a way to use multimap::equal_range to indicate the range of iterators pointing to the pairs with the same key and the same value of the target pair? Or is there an existing method that I can use? Any help is appreciated!
| I suggest using a std::unordered_map<std::string, std::unordered_map<std::string, unsigned>> instead. You then get 2 fast lookups and the count without iterating.
Example:
#include <iostream>
#include <iterator>
#include <map>
#include <unordered_map>
int main(void) {
// your original looks something like this:
std::multimap<std::string, std::string> m{
{"BO", "MA"}, {"CL", "SC"}, {"DA", "TX"}, {"FL", "MI"}, {"BO", "MA"},
{"FL", "OR"}, {"FL", "MI"}, {"MI", "FL"}, {"OR", "FL"}};
// my suggested map:
std::unordered_map<std::string, std::unordered_map<std::string, unsigned>> m2;
// transform your original map to my suggested map:
for(auto&[k, v]: m) ++m2[k][v];
// lookup:
std::cout << m2["FL"]["MI"] << '\n'; // prints 2
}
|
72,849,733 | 72,849,856 | boost asio with little endian | I am integrating a library that requires a little-endian length prefix: the stream is a little-endian length followed by a custom serialized object. How do I convert a 4-byte char array into an int? The little-endian value tells me the size of the serialized object to read.
so if I receive "\x00\x00\x00H\x00" I would like to be able to get the decimal value out.
my code looks like:
char buffer_size[size_desc];
m_socket->receive(boost::asio::buffer(buffer_size, size_desc));
int converted_int = some_function(buffer_size); // <-- not sure what to do here
char buffer_obj[converted_int];
m_socket->receive(boost::asio::buffer(buffer_obj, converted_int));
| For a simple solution you could use a couple of tricks.
Reverse with a cast:
// #include <stdafx.h>
#include <cassert>
#include <iomanip>
#include <iostream>
#include <algorithm>
#include <string>
int main()
{
char buff[4] = {3,2,1,0};
std::cout << (*reinterpret_cast<int*>(&buff[0])) << "\n";
std::reverse(buff, buff+4);
std::cout << (*reinterpret_cast<int*>(&buff[0]));
return 0;
}
Boost also comes with an endianness library:
https://www.boost.org/doc/libs/1_74_0/libs/endian/doc/html/endian.html#buffers
You can use the built in types, like:
big_int32_t
little_int16_t
|
72,849,963 | 72,850,125 | Why TIFFReadRGBAImage() throws an exception when raster is smaller than image? | I'm using libtiff to read Image data into an array. I have the following code
std::vector<uint32> image;
uint32 width;
uint32 height;
TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &width);
TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &height);
uint32 npixels = width * height;
uint32* raster;
raster = (uint32*)_TIFFmalloc(npixels * sizeof(uint32));
if (TIFFReadRGBAImageOriented(tif, width, height, raster, ORIENTATION_TOPLEFT, 0) == 1)
{
std::cout << "success" << std::endl;
}
This code works. However, what I actually want is to reduce my width and height so that only a cropped part of the image is read into the raster. Thus my actual code for npixels is:
uint32 npixels = (width -100) * (height -100);
When I try to run this, I get an:
Exception Error at 0x00007FFEC7A2FC4E (tiff.dll): Access violation when trying to write at position 0x00000251B12C7000
In the libtiff documentation it says:
The raster is assumed to be an array of width times height 32-bit entries, where width must be less than or equal to the width of the image (height may be any non-zero size). If the raster dimensions are smaller than the image, the image data is cropped to the raster bounds.
based on that I thought reducing npixels does the trick... How do I cut the right and lower part of the image I want to write into my raster?
| You just changed the number of elements in the allocated buffer, but still try to read the image at its original size, so you get an access violation because the buffer overflows. To get the cropping, you should pass the reduced width and height to TIFFReadRGBAImageOriented as well:
uint32 nwidth = width - 100;
uint32 nheight = height - 100;
uint32 npixels = nwidth * nheight;
raster = (uint32*)_TIFFmalloc(npixels * sizeof(uint32));
if (TIFFReadRGBAImageOriented(tif, nwidth, nheight, raster, ORIENTATION_TOPLEFT, 0) == 1)
{
std::cout << "success" << std::endl;
}
|
72,850,153 | 72,850,246 | Why is function call treated as instantiation when I cast in template arguments? | I've got the following code:
template <bool condition>
struct enable_if { };
template <>
struct enable_if<true> { using type = bool; };
template <typename T>
class is_callable {
using Yes = char[1];
using No = char[2];
template <typename U> static Yes& filter(decltype(&U::operator()));
template <typename U> static No& filter(...);
public:
constexpr operator bool() { return sizeof(filter<T>(nullptr)) == sizeof(Yes); }
};
template <typename Lambda, typename enable_if<is_callable<Lambda>{}>::type = true>
void doSomethingWithLambda(Lambda func) {
func();
}
int main() {
doSomethingWithLambda([]() { });
}
The important part is the enable_if<is_callable<Lambda>{}>::type part.
One is forced to instantiate is_callable<Lambda> with {} because if one were to use (), C++ would mistake it for a function call.
Feel free to correct me if I'm wrong, but as far as I know, C++ assumes it is a function in the () case so that the meaning of the expression isn't determined after the time of writing, saving everyone a headache. What I mean is: assuming you had both a function version and a class version of is_callable (separated by SFINAE using enable_if or something along those lines), the type Lambda could determine the true meaning of (): either a function call or an instantiation. As far as I know, C++ wants to avoid this confusion, so it assumes a function call and fails if no such function exists.
Based on the assumptions above, the following shouldn't work:
enable_if<(bool)is_callable<Lambda>()>::type
What does it matter if I cast the result of the function call (never mind that functions couldn't even be evaluated in this context)? Why is this suddenly treated as an instantiation instead of a function call?
| No, your understanding is not correct.
Firstly, a name can't refer to both a class template and a function template. If that happens the program is ill-formed. (And defining both in the same scope is not allowed to begin with.)
Secondly, is_callable<Lambda>() as a template argument is not a function call to begin with. It is a function type: the type of a function which takes no parameters and returns an is_callable<Lambda>.
When the compiler parses a template argument, it can interpret it in two ways: Either as a type or as an expression (or as a braced-init-list), because template parameters can be type parameters or non-type parameters.
When the compiler reads is_callable<Lambda>() it notices that is_callable is a class template and then realizes that is_callable<Lambda> is therefore a type. If you have a type, let's shorten it to T, then T() can either be syntax representing the type of a function returning T and taking no arguments, or it can be an expression formed from one single functional notation explicit cast (which you imprecisely call "instantiation").
There is no way to differentiate these two cases in the context, but the compiler needs to know whether this is a type template argument or a non-type template argument. So there is a rule saying that such ambiguities are always resolved in favor of a type.
If is_callable was a function template instead, there would be no ambiguity, because then is_callable<Lambda> is not a type and therefore is_callable<Lambda>() cannot be a function type. It must be a function call instead and therefore an expression and non-type template argument.
When you write (bool)is_callable<Lambda>(), this is not valid syntax for a type, so there is no ambiguity. It is a non-type template argument and an expression. And is_callable<Lambda>() is a functional-notation explicit cast, because is_callable<Lambda> is a type. If is_callable were a function template instead of a class template, it would be a function call.
|
72,850,570 | 72,850,606 | Garbage value in an array where array length and input is defined | I am a beginner, and I am trying to learn C++.
For now, all I am trying to do is input 3 numbers, and print them back.
#include <iostream>
using namespace std;
int main(){
int n[2];
cout << "Enter three numbers" << endl;
for (int j = 0; j <= 2; j++){
cin >> n[j];
}
cout << "Debug " << n[2] << endl;
cout << endl;
for (int i = 0; i <= 2; i++){
cout << n[i] << "\t" << i << endl;
}
return 0;
}
Every time I print them, the last value of the array is modified, and I cannot figure out why! For a test input 6,7,8, the output is in the image below.
| This for
for (int j=0;j<=2;j++){
cin>>n[j];
}
expects that the array has at least three elements with indices in the range [0, 2].
However you declared an array with two elements
int n[2];
If you are going to input three elements then the array should be defined as
int n[3];
|
72,850,876 | 72,850,951 | Emplace with primitive types | Since in C++, primitive types' destructors do nothing [Do Primitive Types in C++ have destructors?], is it safe to rely on the value of int a being the same after a call to queue::emplace? Specifically,
queue<int> q;
int a = 5;
q.emplace(a);
// is a==5 here?
Perhaps the first question would also answer this, though for this example:
queue<pair<int,int>> q;
int a = 1, b = 2;
q.emplace(a,b);
// is a == 1, b == 2?
| The parameters in the shown code are all lvalues.
For an emplaced primitive type, as is the case here, an lvalue that gets passed to emplace() does not get modified. If the container holds a class type, emplace ends up invoking one of its constructors; for a "well-behaved" constructor the argument won't get modified either, whether it's an int or a discrete class instance.
The only time something that gets passed to emplace() typically gets altered is when that "something" is a movable rvalue, the corresponding constructor parameter is an rvalue reference, and the constructor moves the rvalue somewhere else, leaving the original argument in some valid but unspecified state. Alternatively, "something" could be a non-const lvalue that the constructor intentionally modifies, but that would be rather rude (not well-behaved).
This wouldn't be the case for primitive types, in any case.
|
72,851,022 | 72,851,200 | How to use BOOST_PP_SEQ_FOR_EACH for execting a function for each in the sequence? | I intend to use BOOST_PP_SEQ_FOR_EACH to run a function for all variables of a sequence:
#include <iostream>
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/seq/for_each.hpp>
#define SEQ (w)(x)(y)(z)
#define MACRO(r, data, elem) foo(#elem);
using namespace std;
void foo(string a) {
cout << a << endl;
}
int main(){
BOOST_PP_SEQ_FOR_EACH(MACRO, ,SEQ) ;
return 0 ;
}
The expected output is like:
w
x
y
z
, while the actual result is:
BOOST_PP_SEQ_HEAD((w)(x)(y)(z))
BOOST_PP_SEQ_HEAD((x)(y)(z))
BOOST_PP_SEQ_HEAD((y)(z))
BOOST_PP_SEQ_HEAD((z))
I don't know what happens to the expansion. I am thinking BOOST_PP_SEQ_FOR_EACH clause is expanded into
MACRO(r, ,w) MACRO(r, ,x) MACRO(r, ,y) MACRO(r, ,z)
and MACRO(r, ,w) is expanded into foo("w"); for instance.
| BOOST_PP_SEQ_HEAD((a)(b)(c)) is a macro to get the head of a preprocessor sequence and would expand to a. But #elem prevents that macro from being expanded.
Use BOOST_PP_STRINGIZE to expand the macro as well:
#define MACRO(r, data, elem) foo(BOOST_PP_STRINGIZE(elem));
|
72,851,116 | 72,851,293 | Is std::construct_at on const member safe? | I have a class Obj with a const member i:
class Obj {
const int i;
...
};
But I need to set i to 0 in my move constructor. (Because if i isn't 0, the destructor will delete stuff, and since I moved the object, that will result in a double free)
Is it safe to modify Obj::i in the move constructor like this?
Obj::Obj(Obj &&other) :
i(other.i)
{
std::destroy_at(&other.i);
std::construct_at(&other.i, 0);
}
From how I understand it, it is safe to do this when std::construct_at replaces other.i with a "transparently replaceable object". But I'm not completely sure what the definition means:
(8) An object o1 is transparently replaceable by an object o2 if:
(8.1) the storage that o2 occupies exactly overlays the storage that o1 occupied, and
(8.2) o1 and o2 are of the same type (ignoring the top-level cv-qualifiers), and
(8.3) o1 is not a complete const object, and
(8.4) neither o1 nor o2 is a potentially-overlapping subobject ([intro.object]), and
(8.5) either o1 and o2 are both complete objects, or o1 and o2 are direct subobjects of objects p1 and p2, respectively, and p1 is transparently replaceable by p2.
(https://eel.is/c++draft/basic#life-8)
From my understanding, at least 8.1, 8.2, and 8.3 apply, but I'm not completely sure, and I don't really understand 8.4 and 8.5.
So am I correct in thinking this should work (in C++20), or would this result in undefined behavior?
| A potentially-overlapping subobject is a base class subobject or a member marked with [[no_unique_address]]. Obj::i is neither, so 8.4 is satisfied.
If you take p1 and p2 to be the same object, other, then 8.5 generally applies (an object can transparently replace itself), except when it fails recursively (e.g., Obj is a base class or [[no_unique_address]] member of some other class, or the complete object it is part of is const and other has been const_cast or is a mutable member). In practice it will almost always apply.
But consider just not making it a const member, since you do need to modify it here. Your move constructor should also clear out other (e.g., setting any pointers to nullptr, clearing any file handles, zeroing other stuff), so there is no chance for the destructor to accidentally double delete stuff.
|
72,852,384 | 72,854,033 | Can I hide implementation details of this concept from the end user? | I have looked at several similar questions on SO. Maybe I am not grokking the solutions there. In those questions, when the return type is auto or templated, separating the declaration and definition into two different translation units causes a compilation failure. This can be solved by explicitly declaring a concrete signature for the function definition. In my case I am not sure how to do that.
My scenario is as below:
// api.h
template <typename TImpl>
concept IsAProcessor = requires(TImpl impl)
{
impl.init();
impl.process();
impl.deinit();
};
enum UseCase {
USECASE1,
USECASE2
};
template <IsAProcessor TImpl>
void Process(TImpl& impl)
{
impl.process();
}
class Engine
{
public:
IsAProcessor auto getInstance(UseCase a);
};
// End - api.h
// api.cpp
#include "api.h"
#include "third_party.h"
IsAProcessor auto Engine::getInstance(UseCase a) {
switch (a) {
case USECASE1:
return UseCase1Impl(); // Defined in third_party.h and satisfies concept requirement.
case USECASE2:
return UseCase2Impl();
}
}
// End - api.cpp
// third_party.h
class UseCase1Impl {
public:
void init(void);
void process(void);
void deinit(void);
};
// End - third_party.h
// third_party.cpp
#include "third_party.h"
void UseCase1Impl::init(void) {...};
// and so forth
// End - third_party.cpp
// User code
#include "api.h"
{
auto en = Engine();
auto usecase = en.getInstance(UseCase::USECASE1);
//^^^ cannot be used before it is defined here
Process(usecase);
}
As I mentioned in the question, it is not desirable to expose UseCase1Impl and UseCase2Impl. How do I get past the error: function 'getInstance' with deduced return type cannot be used before it is defined
| The return type of a function is a static property, it can't change based on runtime data.
If you can, lift UseCase to a template parameter, and use if constexpr to have exactly one active return for each instantiation.
template<UseCase a>
auto Engine::getInstance() {
if constexpr (a == USECASE1)
return UseCase1Impl(); // Defined in third_party.h and satisfies concept requirement.
if constexpr (a == USECASE2)
return UseCase2Impl();
}
If you can't do that, you will have to find a common type to return.
struct IProcessor
{
virtual ~IProcessor() = default;
virtual void init() = 0;
virtual void process() = 0;
virtual void deinit() = 0;
};
template <IsAProcessor T>
class ProcessorFacade : public IProcessor
{
T impl;
public:
template <typename... Args>
ProcessorFacade(Args&&... args) : impl(std::forward<Args>(args)...) {}
void init() final { impl.init(); }
void process() final { impl.process(); }
void deinit() final { impl.deinit(); }
};
std::unique_ptr<IProcessor> Engine::getInstance(UseCase a) {
switch (a) {
case USECASE1:
return std::make_unique<ProcessorFacade<UseCase1Impl>>();
case USECASE2:
return std::make_unique<ProcessorFacade<UseCase2Impl>>();
}
}
|
72,852,555 | 72,865,359 | How to validate properly ffmpeg pts/dts after demuxing/decoding? | How should I validate pts/dts after demuxing and then after decoding?
For me it is significant to have valid pts all the time for days and
possibly weeks of continuous streaming.
After demuxing I check:
dts <= pts
prev_packet_dts < next_packet_pts
I also discard packets with AV_NOPTS_VALUE and wait for packets with
proper pts, because I don't know video duration at this case.
pts of packets can be not increasing because of I-P-B frames
Is it all right?
What about decoded AVFrames?
Should 'pts' be increasing all the time?
Why could 'pts' at some point lag behind 'dts'?
Why is pict_type a field of AVFrame? Shouldn't it be on AVPacket, since AVPacket is the compressed frame, not the other way around?
| At libav support I was advised not to rely on the decoder output. It is more robust to produce pts/dts for encoding/muxing manually, and I should search the sources of the ffmpeg tools for a proper implementation. I will pursue that approach.
For now I discard only AVFrames with AV_NOPTS_VALUE, and the rest of the encoding/muxing works fine.
Validation of AVPackets after demuxing remains the same as described above.
|
72,853,130 | 72,869,205 | How I can combine log entries based on the second column? | So I have an email.log file (limited example):
2021-04-30T23:55:00.127629 886715E6D6C9D4FB status=rejected
2021-04-30T23:55:00.791921 F8F63278A6A3AD87 from=<sarah.smith@example.com>
2021-04-30T23:55:01.470432 418512384DDDD2C6 from=<robert.rodriguez@example.com>
2021-04-30T23:55:01.697902 0D8760D4ADAB456D message-id=<696b66ea-f493-4ba2-9e8a-553bd03b7d37@2PPZOR2ULU>
2021-04-30T23:55:01.736492 0D8760D4ADAB456D from=<william.smith@example.com>
2021-04-30T23:55:02.043100 0D8760D4ADAB456D to=<william.davis@example.com>
2021-04-30T23:55:02.842802 EC5AD35BADC381F9 client=10.2.38.215
2021-04-30T23:55:03.132660 2AB0E95297136E70 client=2001:db8::8f75:20e2:c47f
2021-04-30T23:55:03.550296 BAB22895DB867DFF status=sent
2021-04-30T23:55:04.392986 5BE423F6370D1D1B client=10.38.217.222
2021-04-30T23:55:04.661467 5D11914582F8C85A client=2001:db8::7bbb:6743:c8c5
2021-04-30T23:55:05.306358 E2E8D917BB751176 message-id=<da3d4d7c-643c-4a15-989a-3cd6269030f4@30Q7E75V7B>
2021-04-30T23:55:05.872830 F8F63278A6A3AD87 to=<patricia.garcia@example.com>
2021-04-30T23:55:06.272336 F8F63278A6A3AD87 status=sent
2021-04-30T23:55:06.716495 C7CC8201A67C8E52 from=<thomas.wilson@example.com>
2021-04-30T23:55:07.056882 5BE423F6370D1D1B message-id=<d5db8bc8-871e-48de-9e64-e23fca0c0134@56WZW5K1C2>
2021-04-30T23:55:07.113379 0D8760D4ADAB456D status=sent
2021-04-30T23:55:07.370491 5D11914582F8C85A message-id=<b041aedf-07ec-4e67-9a1a-e9f23a6ab434@82AZ76YJPI>
2021-04-30T23:55:07.732459 2AB0E95297136E70 message-id=<c948f7eb-144c-4688-b122-1191eea2cb29@4Q1QAJ0BLI>
2021-04-30T23:55:08.608998 C1D805D68377D513 to=<karen.smith@example.com>
2021-04-30T23:55:08.782778 418512384DDDD2C6 to=<jessica.jones@example.com>
2021-04-30T23:55:09.383173 E2E8D917BB751176 from=<barbara.rodriguez@example.com>
2021-04-30T23:55:09.676896 33DD2B20F2AB9262 status=sent
2021-04-30T23:55:10.157677 452FAD67C2867C47 client=10.38.217.222
2021-04-30T23:55:11.064902 C7CC8201A67C8E52 to=<patricia.jones@example.com>
2021-04-30T23:55:11.709673 E2E8D917BB751176 to=<richard.garcia@example.com>
2021-04-30T23:55:12.667447 EC5AD35BADC381F9 message-id=<b2fd4dac-2513-4895-ac84-3c68ecabc3ec@U83K85L7HK>
I started by reading from the file into a vector of structs, and now I need to combine events based on the session id (column 2). Events may happen in parallel and overlap. Incomplete sessions (missing any of the fields) should be ignored. How can I do this?
So far my code look like this:
#include <iostream>
#include <string>
#include <fstream>
#include <vector>
#include <algorithm>
using namespace std;
struct Session
{
string time;
string sessionid;
string other;
friend istream& operator>>(istream& input, Session& session);
friend ostream& operator<<(ostream& output, Session& session);
};
istream& operator>>(istream& input, Session& session)
{
input >> session.time;
input >> session.sessionid;
input >> session.other;
return input;
}
ostream& operator<<(ostream& output, Session& session)
{
output << session.time;
output << session.sessionid;
output << session.other;
return output;
}
int main()
{
vector<Session> session;
vector<Session> complete_session;
Session record;
ifstream read("email.log");
while (read >> record)
{
session.push_back(record);
}
read.close();
return 0;
}
| I will write a quick answer, because the comment discussion is getting too long. Assuming your operator>> works correctly, you can easily combine your log entries using std::unordered_map (faster, but elements are not sorted) or std::map (slower, but your map entries will be sorted by sessionid). For a quick example, you can modify your main like this:
int main()
{
std::unordered_map<std::string, std::vector<Session> > mySessionMap;
Session record;
std::ifstream read("email.log");
while (read >> record)
{
mySessionMap[record.sessionid].push_back(record);
}
for (const auto& [sessionId, sessions] : mySessionMap) // c++17 allows us to use structured binding here
{
std::cout << "sessionId: " << sessionId << std::endl;
for (const auto& session : sessions)
{
std::cout << session << std::endl;
}
}
return 0;
}
Note that you don't need to explicitly create an empty vector when accessing the map with a new key, because operator[] already does it for you if the key does not exist.
You might also consider making your std::istream& operator>>(std::istream& input, Session& session) a bit more robust by checking if the extracted data is in the expected format.
In std::ostream& operator<<(std::ostream& output, const Session& session) you should use const reference for the second argument to allow printing const objects.
If something is unclear, take a look at documentation, everything is nicely explained with examples.
|
72,853,651 | 72,853,992 | Can't make addition of array, no match for call to '(std::string {aka std::basic_string<char>}) (std::string&, std::string&)' | I have a function that works on array elements and I can't call it in the main function because the compiler won't 'allow' it. Can someone help me? The error on line 154: no match for call to '(std::string {aka std::basic_string}) (std::string&, std::string&)'. If anyone wants to see the full code, please head to https://github.com/infaddil/beyblade/blob/main/newassver2.cpp
float type(string s4, string s3)
{
float marks;
if(s4 == "Attack")
marks = 4;
else if (s3 == "Balance")
marks = 3;
else if (s3 == "Attack")
marks = 2;
else if (s3 == "Defense")
marks = 1;
return marks;
}
cout << "Your mark is " << type(s4[randomnumber], s3[randomnumber]) << endl;
| First line of your main():
string player1_name, player2_name, beyblade_name, product_code, type, plus_mode, system;
You defined a string variable named type, and your function is named type too, so inside main() the local variable shadows the function.
Add :: before type like this
cout << "Your mark is " << ::type(s4[randomnumber], s3[randomnumber]) << endl;
or pick another name.
|
72,856,018 | 72,857,109 | How can I parse a std::string (YYYY-MM-DD) into a COleDateTime object? | I have a COleDateTime object and I want to parse a date string in the format YYYY-MM-DD.
The string variable, for example, is:
std::string strDate = "2022-07-04";
COleDateTime allows me to use ParseDateTime to parse a string, but I see no way to tell it what format the components of the date string are. In C# I can do such with DateTime.Parse....
| Based on the suggestion by @xMRi in the comments I have decided to use:
CString strDutyMeetingDate = CString(tinyxml2::attribute_value(pDutyWeek, "Date").c_str());
int iDay{}, iMonth{}, iYear{};
if(_stscanf_s(strDutyMeetingDate, L"%d-%d-%d", &iYear, &iMonth, &iDay) == 3)
{
const auto datDutyMeetingDate = COleDateTime(iYear, iMonth, iDay, 0, 0, 0);
}
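As a portable alternative to sscanf-style parsing, std::get_time can validate the format and produce the components that COleDateTime's constructor needs. A sketch, independent of MFC (the function name parseIsoDate is my own, not part of any library):

```cpp
#include <cassert>
#include <ctime>
#include <iomanip>
#include <sstream>
#include <string>

// Parse "YYYY-MM-DD"; on success the components could then be passed on to
// COleDateTime(year, month, day, 0, 0, 0) just like in the answer above.
bool parseIsoDate(const std::string& text, int& year, int& month, int& day)
{
    std::tm tm{};
    std::istringstream in(text);
    in >> std::get_time(&tm, "%Y-%m-%d");
    if (in.fail())
        return false;
    year  = tm.tm_year + 1900;  // tm_year counts from 1900
    month = tm.tm_mon + 1;      // tm_mon is 0-based
    day   = tm.tm_mday;
    return true;
}
```

Unlike the _stscanf_s version, a non-numeric input makes the stream fail instead of silently leaving the components unset.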
|
72,856,268 | 72,856,461 | Unfamiliar C++ code with strange return types | I encountered some really strange C++ code that I have never seen before, even though I have a little bit of experience. I tried searching for it but had no luck. This is super strange; either I'm missing something or this is some wizard magic code. My question is: how can GetHelixCenter have a bool return type but still fill the helixcenterpos[] array? The GetHelixCenter function is called from another function like:
Double_t helixcenterpos[2];
GetHelixCenter(pparam,helixcenterpos);
and later on helixcenterpos is accessed:
Double_t xpos = helixcenterpos[0];
but it is untouched in between. The GetHelixCenter function looks like:
Bool_t AliV0ReaderV1::GetHelixCenter(const AliExternalTrackParam *track,Double_t center[2]){
// Get Center of the helix track parametrization
Int_t charge=track->Charge();
Double_t b=fInputEvent->GetMagneticField();
Double_t helix[6];
track->GetHelixParameters(helix,b);
Double_t xpos = helix[5];
Double_t ypos = helix[0];
Double_t radius = TMath::Abs(1./helix[4]);
Double_t phi = helix[2];
if(phi < 0){
phi = phi + 2*TMath::Pi();
}
phi -= TMath::Pi()/2.;
Double_t xpoint = radius * TMath::Cos(phi);
Double_t ypoint = radius * TMath::Sin(phi);
if(b<0){
if(charge > 0){
xpoint = - xpoint;
ypoint = - ypoint;
}
}
if(b>0){
if(charge < 0){
xpoint = - xpoint;
ypoint = - ypoint;
}
}
center[0] = xpos + xpoint;
center[1] = ypos + ypoint;
return 1;
}
| The helixcenterpos array is passed as the second argument:
Bool_t AliV0ReaderV1::GetHelixCenter(const AliExternalTrackParam *track,Double_t center[2]){
Here, center decays to a pointer to the first value of your original array. Hence
center[0] = xpos + xpoint;
center[1] = ypos + ypoint;
write to that array.
It should be pointed out, that the parameter would be better chosen as
Bool_t AliV0ReaderV1::GetHelixCenter(const AliExternalTrackParam *track, Double_t (&center)[2]){
so that you are sure that GetHelixCenter always receives the correct array.
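The difference can be tried on a small standalone example (hypothetical function names): with the decayed form any double pointer compiles, while the reference form only accepts a genuine 2-element array, so size mistakes fail at compile time.

```cpp
#include <cassert>

// `double center[2]` decays to `double*`, so the array size is not checked;
// `double (&center)[2]` binds only to an actual 2-element array.
void fill_decayed(double center[2]) { center[0] = 1.0; center[1] = 2.0; }
void fill_ref(double (&center)[2])  { center[0] = 1.0; center[1] = 2.0; }
```

Calling fill_ref with a double[3] or a raw double* would now be a compile-time error, whereas fill_decayed would accept both.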
|
72,856,569 | 72,856,606 | Does const_cast waste extra memory? | Let's see the example first.
#include <iostream>
int main()
{
const int constant = 1;
const int* const_p = &constant;
int* modifier = const_cast<int*>(const_p);
*modifier = 100;
std::cout << "constant: " << constant << ", *const_p=" << *const_p;
//Output: constant: 1, *const_p=100
return 0;
}
I don't know how this is achieved in memory. It seems that the compiler has occupied extra memory space on the stack so that we can keep track of the "original" constant whose value is 1, plus a new memory location on the stack whose value is 100. Is that so? Does const_cast indeed consume extra memory, as a beginner might not expect at first?
| This
*modifier = 100;
is undefined. You cannot change the value of a const int.
You can cast away constness but you cannot possibly modify something that is constant. A correct usage of the const cast would be for example:
int not_constant = 1; // not const !!
const int* const_p = &not_constant;
int* modifier = const_cast<int*>(const_p);
*modifier = 100; // ok because not_constant is not const
No "extra memory" is being used here.
What happens in your code is probably that the compiler sees
std::cout << "constant: " << constant << ", *const_p=" << *const_p;
And the compiler "knows" that const int constant cannot possibly change its value after initialization, hence it can replace that line with
std::cout << "constant: " << 1 << ", *const_p=" << *const_p;
|
72,857,296 | 72,857,671 | Why my code is not working for problem (Marathon) Code forces | You are given four distinct integers a, b, c, d.
Timur and three other people are running a marathon. The value a is the distance that Timur has run and b, c, d correspond to the distances the other three participants ran.
Output the number of participants in front of Timur.
Input
The first line contains a single integer t (1≤t≤104) — the number of test cases.
The description of each test case consists of four distinct integers a, b, c, d (0≤a,b,c,d≤104).
Output
For each test case, output a single integer — the number of participants in front of Timur.
#include <iostream>
using namespace std;
int main()
{
int t, p(0);
cin >> t;
while (t--) {
int a, b, c, d, x;
cin >> a >> b >> c >> d;
if (b > a) {
p++;
} else if (c > a) {
p++;
} else if (d > a) {
p++;
}
cout << p << endl;
}
}
| Try changing the else ifs to ifs and declaring the variable p inside the loop. With else if only the first true condition runs, so at most one participant is ever counted per test case; and because p was declared outside the loop, it kept accumulating across test cases instead of resetting to zero:
#include <iostream>
using namespace std;
int main() {
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) {
int p = 0;
int a, b, c, d;
cin >> a >> b >> c >> d;
if (b > a) {
p++;
}
if (c > a) {
p++;
}
if (d > a) {
p++;
}
cout << p << '\n';
}
return 0;
}
Example Usage:
4
2 3 4 1
10000 0 1 2
500 600 400 300
0 9999 10000 9998
2
0
1
3
|
72,857,582 | 72,857,620 | HelloWorld.exe (process 12192) exited with code 0 issue | I'm a beginner in learning C++ programming and just started to use IDE VS Community 2022.
I've created a new project following a tutorial, and when I run it I get this message in the console:
C:\Users\??????\source\repos\HelloWorld\x64\Debug\HelloWorld.exe (process 12192) exited with code 0.
The code is
#include <iostream>
int main()
{
std::cout << "Hello, world!";
return 0;
}
I know it's not an error, but is there some way to remove this message?
Thank you in advance!
| This is a feature of the IDE you are using. Try to run your program using the command line prompt directly and the message will not be displayed.
|
72,857,889 | 72,858,254 | different behaviour for filesystem::path(filePath).filename() between gcc7.3 and gcc9.3 | I see different outputs when running this piece of code in gcc7.3 (using C++14) and gcc9.3 (using C++17):
#include <iostream>
#if (__cplusplus >= 201703L)
#include <filesystem>
namespace fs = std::filesystem;
#else
#include <experimental/filesystem>
namespace fs = std::experimental::filesystem;
#endif
using namespace std;
std::string getBaseName(const std::string& filePath) {
return fs::path(filePath).filename().string();
}
int main()
{
std::cout<<"getBaseName(/test/absolute/dir/)="<<getBaseName("/test/absolute/dir/")<<std::endl;
std::cout<<"getBaseName(/)="<<getBaseName("/")<<std::endl;
return 0;
}
In gcc7.3 (C++14), it gave me:
getBaseName(/test/absolute/dir/)=.
getBaseName(/)=/
In gcc9.3(C++17), I got:
getBaseName(/test/absolute/dir/)=
getBaseName(/)=
I wonder if this is a bug in gcc7.3 (and therefore std::experimental) and if so, do we have any workaround without relying on any third party libraries?
| <experimental/filesystem> implements the filesystem library according to the Filesystem TS (basically an experimental extension of C++14), while <filesystem> is the filesystem library part of C++17 (and later).
The two are not identical specifications. The latter is based on the experience with the former, but as the former was never part of the standard proper, changes to the API could be made for C++17.
This is one of these changes. Specifically, the change is the resolution of NB comments on the C++17 draft. See https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0492r2.html#US74.
For workaround, there isn't one really. No C++ compiler is required to support <experimental/filesystem> at all and <filesystem> will be available only starting with C++17.
If you really need support for C++14 or earlier, maybe you should use Boost.Filesystem (<boost/filesystem.hpp>) instead for portability. However, I think the interface may differ slightly from both C++17 std::filesystem and the Filesystem TS.
Otherwise I would recommended just requiring C++17 and dropping any support for <experimental/filesystem>.
|
72,857,891 | 72,863,253 | Create Internet shortcut using C++ | I need to be able to create an Internet shortcut to a specific URL and always open it with Microsoft Edge. The only info that is out there seems to be [this page][1].
I'm not sure how to use this site, or look for an example on how to create an Internet shortcut with target path and URL.
Any ideas?
I did manage to find this code and was able to get it to work with either browser type or URL but not both. Tried escaping quotation marks but still nothing.
{
CoInitialize(NULL);
WCHAR sp[MAX_PATH] = { 0 };
WCHAR p[MAX_PATH] = { 0 };
WCHAR deskPath[MAX_PATH] = { 0 };
SHGetFolderPathW(NULL, CSIDL_DESKTOP, NULL, 0, deskPath);
swprintf_s(sp, _countof(sp), L"%s\\ShortcutTest", deskPath);
WCHAR path[MAX_PATH] = { 0 };
std::wstring path1 = L"C:/Program Files (x86)/Microsoft/Edge/Application/msedge.exe" "http://www.bing.com";
SHGetFolderPathW(NULL, CSIDL_PROGRAM_FILESX86, NULL, 0, path);
swprintf_s(p, _countof(p), path1.c_str(), path);
CreateLink(p, sp, L"", L"");
CoUninitialize();
return 0;
| MSDN provides example code for creating shortcuts with IShellLink. This is also referenced in answers on Stack Overflow, most notably this one: How to programmatically create a shortcut using Win32
For your requirement specifically, note that the IShellLink object provides a method SetArguments. You can use this to specify the URL that will be passed on the command line when running MS Edge.
So let's expand the example CreateLink function, rearrange it a little and add a parameter that lets you provide arguments:
#include <windows.h>
#include <shlobj.h>
HRESULT CreateLink(LPCWSTR lpszShortcut,
LPCWSTR lpszPath,
LPCWSTR lpszArgs,
LPCWSTR lpszDesc)
{
HRESULT hres;
IShellLinkW* psl;
hres = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER, IID_IShellLinkW, (LPVOID*)&psl);
if (SUCCEEDED(hres))
{
IPersistFile* ppf;
psl->SetPath(lpszPath);
psl->SetArguments(lpszArgs);
psl->SetDescription(lpszDesc);
// Save link
hres = psl->QueryInterface(IID_IPersistFile, (LPVOID*)&ppf);
if (SUCCEEDED(hres))
{
hres = ppf->Save(lpszShortcut, TRUE);
ppf->Release();
}
psl->Release();
}
return hres;
}
Now all you need is to invoke it correctly. Taking your example as a starting point:
#include <string>
int main()
{
HRESULT hres = CoInitialize(NULL);
if (SUCCEEDED(hres))
{
PWSTR deskPath, programsPath;
SHGetKnownFolderPath(FOLDERID_Desktop, 0, NULL, &deskPath);
SHGetKnownFolderPath(FOLDERID_ProgramFilesX86, 0, NULL, &programsPath);
std::wstring linkFile = std::wstring(deskPath) + L"\\ShortcutTest.lnk";
std::wstring linkPath = std::wstring(programsPath) + L"\\Microsoft\\Edge\\Application\\msedge.exe";
LPCWSTR linkArgs = L"https://www.stackoverflow.com";
LPCWSTR linkDesc = L"Launch Stack Overflow";
CreateLink(linkFile.c_str(), linkPath.c_str(), linkArgs, linkDesc);
CoTaskMemFree(deskPath);
CoTaskMemFree(programsPath);
}
CoUninitialize();
return 0;
}
Note that I also used the correct extension for the shortcut file, which is .lnk. This is required for Windows to recognize the file as a shortcut.
I also changed the method of acquiring standard folders, as per recommendation in MSDN documentation. You should generally avoid anything that uses MAX_PATH. For clarity I am not testing whether these calls succeed, but you should do so in a complete program.
|
72,857,947 | 72,858,095 | Combining static data structures | I am trying to come up with a good way to define data for a seven-segment display.
Let's say that the display segments are named like this:
-A-
F B
-G-
E C
-D-
So to display a 1 you need to turn on B,C - and for 2 you need A,B,G,E,D.
Furthermore, each line of the display is connected to an IO expander chip, and they are turned on by writing a 1 to the correct bit in the chip (over I2C, but that is not important).
Now I can structure my code like this:
enum Segments
{
A = 0x02,
B = 0x20,
C = 0x10,
D = 0x08,
E = 0x04,
F = 0x40,
G = 0x01
};
enum class Digits
{
D0 = A + B + C + D + E + F,
D1 = B + C,
D2 = A + B + G + E + D,
D3 = A + B + C + D + G,
...
};
Which does give a correct and useful result (D0 = 0x7E, D1 = 0x30, etc.). HOWEVER! There is a wrinkle. I actually have a dual seven-segment display, and the two displays are not wired up identically internally (this is a hardware issue that I cannot change).
So now I am looking for a way to do something like this (pseudo-code):
enum Left_Segments
{
A = 0x02,
B = 0x20,
C = 0x10,
D = 0x08,
E = 0x04,
F = 0x40,
G = 0x01
};
enum Right_Segments
{
A = 0x02,
B = 0x10,
C = 0x40,
D = 0x08,
E = 0x20,
F = 0x04,
G = 0x01
};
template<class T>
enum class Digits
{
D0 = T::A + T::B + T::C + T::D + T::E + T::F,
D1 = T::B + T::C,
D2 = T::A + T::B + T::G + T::E + T::D,
D3 = T::A + T::B + T::C + T::D + T::G,
...
};
assert(Digits<Left_Segment>::D2 == 0x2F);
assert(Digits<Right_Segment>::D2 == 0x3B);
Or some other way of doing this, I am not attached to any specific notation. My goal is to define each digit once in a wiring-agnostic way, and then be able to plug in a specific wiring to produce the bit sequence to write to my chip. I would also appreciate if it was more type- and name-safe than old C-style enums.
| You are just missing a mapping from digits to the segments that should light up. Your current mapping is from digits to hardware addresses directly. Just don't do it all at once.
Actually, for visually readable code, I'd suggest internally renaming the segments like this:
-S0-
S1 S2
-S3-
S4 S5
-S6-
In the following I will just use those indices as indices into an arary. I am not using an enum. If you prefer the enum you can replace the arrays with maps with the enum as key.
Now you can easily use arrays to define what segments should be activated for which digit:
std::map<int, std::array<int,7>> mapping {
{ 1, { 0,
0 , 1,
0,
0 , 1,
0 } },
{ 2, { ....
The segments to be activated for digit x are then elements of mapping[x] equal to 1.
The hardware addresses you place elsewhere, eg in a
std::array<int,7> addresses_display1, addresses_display2;
Then you can control either of the two displays via (not sure about details, I just produce the same sum you do):
int get_sum( int digit, const std::map<int, std::array<int,7>>& mapping, const std::array<int,7>& addresses) {
int sum = 0;
auto it = mapping.find(digit);
if (it == mapping.end()) {
return sum; // error invalid digit
}
for (int i=0;i < 7; ++i) {
sum += it->second[i] * addresses[i];
}
return sum;
}
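Putting the pieces together, a minimal self-contained version could look like this. The address values mirror the question's Left_Segments enum, reordered to the S0..S6 layout (A, F, B, G, E, C, D); treat that ordering as an assumption about the wiring:

```cpp
#include <array>
#include <cassert>
#include <map>

using Segments = std::array<int, 7>;  // S0..S6, 1 = segment lit

// Digit -> lit segments, written in the visual layout from above.
const std::map<int, Segments> mapping{
    {1, {0,
         0,  1,
         0,
         0,  1,
         0}},
    {2, {1,
         0,  1,
         1,
         1,  0,
         1}},
};

// Hardware bit for each segment on the left display:
// S0=A=0x02, S1=F=0x40, S2=B=0x20, S3=G=0x01, S4=E=0x04, S5=C=0x10, S6=D=0x08
const Segments left_addresses{0x02, 0x40, 0x20, 0x01, 0x04, 0x10, 0x08};

int get_sum(int digit, const std::map<int, Segments>& m, const Segments& addr)
{
    int sum = 0;
    auto it = m.find(digit);
    if (it == m.end())
        return sum;                       // unknown digit -> nothing lit
    for (int i = 0; i < 7; ++i)
        sum += it->second[i] * addr[i];   // add the bit of every lit segment
    return sum;
}
```

For the right display you would only swap in a second addresses array; the digit mapping stays wiring-agnostic, which was the goal.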
|
72,858,061 | 72,893,623 | Are I/O streams really thread-safe? | I wrote a program that writes random numbers to one file in the first thread, and another thread reads them from there and writes to another file those that are prime numbers. The third thread is needed to stop/start the work. I read that I/O streams are thread-safe. Since writing to a single shared resource is thread-safe, what could be the problem?
Output: always correct record in numbers.log, sometimes no record in numbers_prime.log when there are prime numbers, sometimes they are all written.
#include <iostream>
#include <fstream>
#include <thread>
#include <mutex>
#include <vector>
#include <condition_variable>
#include <future>
#include <random>
#include <chrono>
#include <string>
using namespace std::chrono_literals;
std::atomic_int ITER_NUMBERS = 30;
std::atomic_bool _var = false;
bool ret() { return _var; }
std::atomic_bool _var_log = false;
bool ret_log() { return _var_log; }
std::condition_variable cv;
std::condition_variable cv_log;
std::mutex mtx;
std::mutex mt;
std::atomic<int> count{0};
std::atomic<bool> _FL = 1;
int MIN = 100;
int MAX = 200;
bool is_empty(std::ifstream& pFile) // function that checks if the file is empty
{
return pFile.peek() == std::ifstream::traits_type::eof();
}
bool isPrime(int n) // function that checks if the number is prime
{
if (n <= 1)
return false;
for (int i = 2; i <= sqrt(n); i++)
if (n % i == 0)
return false;
return true;
}
void Log(int min, int max) { // function that generates random numbers and writes them to a file numbers.log
std::string str;
std::ofstream log;
std::random_device seed;
std::mt19937 gen{seed()};
std::uniform_int_distribution dist{min, max};
log.open("numbers.log", std::ios_base::trunc);
for (int i = 0; i < ITER_NUMBERS; ++i, ++count) {
std::unique_lock<std::mutex> ulm(mtx);
cv.wait(ulm,ret);
str = std::to_string(dist(gen)) + '\n';
log.write(str.c_str(), str.length());
log.flush();
_var_log = true;
cv_log.notify_one();
//_var_log = false;
//std::this_thread::sleep_for(std::chrono::microseconds(500000));
}
log.close();
_var_log = true;
cv_log.notify_one();
_FL = 0;
}
void printCheck() { // Checking function to start/stop printing
std::cout << "Log to file? [y/n]\n";
while (_FL) {
char input;
std::cin >> input;
std::cin.clear();
if (input == 'y') {
_var = true;
cv.notify_one();
}
if (input == 'n') {
_var = false;
}
}
}
void primeLog() { // a function that reads files from numbers.log and writes prime numbers to numbers_prime.log
std::unique_lock ul(mt);
int number = 0;
std::ifstream in("numbers.log");
std::ofstream out("numbers_prime.log", std::ios_base::trunc);
if (is_empty(in)) {
cv_log.wait(ul, ret_log);
}
int oldCount{};
for (int i = 0; i < ITER_NUMBERS; ++i) {
if (oldCount == count && count != ITER_NUMBERS) { // check if primeLog is faster than Log. If it is faster, then we wait to continue
cv_log.wait(ul, ret_log);
_var_log = false;
}
if (!in.eof()) {
in >> number;
if (isPrime(number)) {
out << number;
out << "\n";
}
oldCount = count;
}
}
}
int main() {
std::thread t1(printCheck);
std::thread t2(Log, MIN, MAX);
std::thread t3(primeLog);
t1.join();
t2.join();
t3.join();
return 0;
}
| Thanks to those who wrote about read-behind-write; now I know more. But that was not the problem. The main problem was that for a brand-new (empty) file, calling pFile.peek() in the is_empty function permanently set the stream's eofbit. Thus, until the end of the program, in.rdstate() == std::ios_base::eofbit.
Fix: reset the flag state.
if (is_empty(in)) {
cv_log.wait(ul, ret_log);
}
in.clear(); // reset state
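The eofbit trap can be reproduced without any file at all; an empty istringstream behaves just like the brand-new log file (the function name here is my own, for illustration):

```cpp
#include <cassert>
#include <sstream>

// peek() on an exhausted stream sets eofbit, and every subsequent read
// fails until clear() resets the state -- the same trap as with the
// freshly created numbers.log.
bool peek_sets_eofbit()
{
    std::istringstream in("");  // stands in for the empty file
    in.peek();                  // hits EOF -> eofbit is set
    bool was_stuck = in.eof();
    in.clear();                 // reset state, as in the fix
    return was_stuck && !in.eof();
}
```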
There was also a problem with the peculiarity of reading/writing one file from different threads, though it was not the cause of my program error, but it led to another one.
Because if, when I run the program again, primeLog() opens std::ifstream in("numbers.log") for reading before log.open("numbers.log", std::ios_base::trunc) truncates it, then in can buffer the old data before log.open erases it with the std::ios_base::trunc flag. Hence we would read the old data and write it to numbers_prime.log.
|
72,858,345 | 72,858,515 | How to call non-const method when a const method with the same signature exists? | OpenCV's Mat class contains the following two methods:
template<typename _Tp> inline
_Tp* Mat::ptr(int y)
{
CV_DbgAssert( y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0]) );
return (_Tp*)(data + step.p[0] * y);
}
template<typename _Tp> inline
const _Tp* Mat::ptr(int y) const
{
CV_DbgAssert( y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0]) );
return (const _Tp*)(data + step.p[0] * y);
}
and in my code I have the following line:
uchar* row_ptr = input_img.ptr<uchar>(0);
which genereates the compile-time error:
error: invalid conversion from ‘const unsigned char*’ to ‘uchar*’ {aka ‘unsigned char*’}
The method that is called is const, but a non-const method with the same name and argument list exists. How can I specify that I want to use the non-const version of the method? Thank you.
| Typically you don't "select" which function to call, but the compiler will call the right function for you.
Consider this example:
#include <iostream>
struct foo {
int bar() { return 1;};
int bar() const { return 2;}
};
int main(){
const foo f;
foo f2;
std::cout << f.bar();
std::cout << f2.bar();
}
The output is
21
Because you can only call the const function bar on a const foo. Calling the non-const method would potentially modify it, hence you cannot call it when the object is const.
When the object is not const then both could be called but the non-const one is called. Only in this case you can "select", for example:
const foo& const_f = f;
const_f.bar();
This now calls the const method, even though the actual object (f) isn't const, because the method is called using a reference to const.
TL;DR: The error suggests that input_img is either a const Mat or a const reference. Hence you can only call the const overload. If you want to call the other you need a non-const reference.
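If you cannot change input_img to a non-const reference but you know the underlying object is genuinely non-const, const_cast can select the non-const overload. Shown here on the toy struct rather than cv::Mat, and only safe when the original object really is non-const:

```cpp
#include <cassert>

struct foo {
    int bar()       { return 1; }
    int bar() const { return 2; }
};

// Casting away constness makes overload resolution pick the non-const
// bar(); this is undefined behaviour if `f` refers to a truly const object.
int call_nonconst(const foo& f)
{
    return const_cast<foo&>(f).bar();
}
```

Prefer fixing the reference type where possible; the cast is a last resort.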
|
72,858,434 | 72,858,713 | Qt6: "Unable to read Memory" when pointing to a QLineEdit from a QFormLayout | I want to get the text from a QLineEdit, which is in a QFormLayout, to save it to a File. The saving works fine, but I am not able to get the text form the QLineEdit and when I look at it from the Debugger it says "Unable to read Memory". I can´t figure out how to correctly point to the QLineEdit, so that I can get the text.
With this code I want to get the text:
QJsonArray Kegelbuch::saveSettings() {
QFormLayout* formLayout = (QFormLayout*)ui.einstellungenTab->layout();
QJsonArray data;
QLineEdit* settingsEdit;
for (int i = 0; i < formLayout->rowCount(); i++) {
settingsEdit = (QLineEdit*)formLayout->itemAt(i, QFormLayout::ItemRole::FieldRole);
}
return data;
}
How the window looks:
| Replace
settingsEdit = (QLineEdit*)formLayout->itemAt(i, QFormLayout::ItemRole::FieldRole);
with
settingsEdit = (QLineEdit*)(formLayout->itemAt(i, QFormLayout::ItemRole::FieldRole)->widget());
Background: itemAt() returns a QLayoutItem*, so you need to call QWidget *QLayoutItem::widget() to get the widget. Using qobject_cast<QLineEdit*>(...) instead of the C-style cast would additionally return nullptr for rows whose field widget is not a QLineEdit, making the code safer.
|
72,858,660 | 72,862,882 | How do I find the indices of elements in a vector which are also in another vector using RcppArmadillo? | I am stuck trying to find the indices of elements in a vector x whose elements are also in another vector vals using Rcpp Armadillo. Both x and vals are of type arma::uvec.
In R, this would be straightforward:
x <- c(1,1,1,4,2,4,4)
vals <- c(1,4)
which(x %in% vals)
I've scanned the Armadillo docs and find() was my obvious first try; but it didn't work, since vals is a vector. I've also tried intersect() but it returns only the first unique indices.
What would be a good/efficient way to do this using Armadillo? Do I have to iterate through the elements in vals using find()?
| A quick dirty way:
Rcpp::cppFunction("
arma::uvec ind(arma::uvec x, arma::uvec y){
arma::vec a(x.size(), arma::fill::zeros);
for (auto i:y) a = a + (x==i);
return arma::find(a) + 1;
}
", 'RcppArmadillo')
c(ind(v, vals))
[1] 1 2 3 4 6 7
|
72,858,877 | 72,858,979 | Is it safe to transfer C++ objects among shared libs with the extern "C"? | Suppose I have C++ object, like std::function. Is it safe in every way to pass such an object to another dynamically loaded shared library like this:
// lib
extern "C"
{
void call( void* f )
{
auto f_callable = (std::function<void()>*)f;
f_callable->operator()();
}
}
// executable
auto call = ( void (*) ( void* ) ) dlsym( lib_, "call" );
std::function<void()> f = []{
printf( "called" );
};
call( ( void* )&f );
What if the library and executable are compiled by different compilers (like clang and GCC)? Or by different versions of the same compiler?
| This is defined behavior, provided that both parts of the C++ code gets generated by the same exact compiler. Casting the same thing to/from a void * is defined behavior. Presuming that the second C++ code sees the same C linkage, this is also defined behavior.
Whether or not it is safe when different compilers or different versions of the compilers are involved depends on whatever ABI guarantees these compilers provide. You will need to check their respective documentation to determine that.
|
72,858,915 | 72,859,347 | Getting the Address of a function that can only be called | I already asked a Question but I think thats very special and will not get a concrete answer.
Im trying to give a simpler Explanation of what i need help with.
The Issue is that d3d12::pCommandList->CopyTextureRegion; doesnt work because CopyTextureRegion is a function that can only be called, but i need the address of it.
For example this will give me the Address of the ID3D12GraphicsCommandList: d3d12::pCommandList;
This is a small part of my code:
namespace d3d12
{
IDXGISwapChain3* pSwapChain;
ID3D12Device* pDevice;
ID3D12CommandQueue* pCommandQueue;
ID3D12Fence* pFence;
ID3D12DescriptorHeap* d3d12DescriptorHeapBackBuffers = nullptr;
ID3D12DescriptorHeap* d3d12DescriptorHeapImGuiRender = nullptr;
ID3D12DescriptorHeap* pSrvDescHeap = nullptr;;
ID3D12DescriptorHeap* pRtvDescHeap = nullptr;;
ID3D12GraphicsCommandList* pCommandList;
FrameContext* FrameContextArray;
ID3D12Resource** pID3D12ResourceArray;
D3D12_CPU_DESCRIPTOR_HANDLE* RenderTargetDescriptorArray;
HANDLE hSwapChainWaitableObject;
HANDLE hFenceEvent;
UINT NUM_FRAMES_IN_FLIGHT;
UINT NUM_BACK_BUFFERS;
UINT frame_index = 0;
UINT64 fenceLastSignaledValue = 0;
}
d3d12::pDevice->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, d3d12::FrameContextArray[0].CommandAllocator, NULL, IID_PPV_ARGS(&d3d12::pCommandList));
// issue, CopyTextureRegion can only be called. I need the address of it
auto RegionHookAddress = d3d12::pCommandList->CopyTextureRegion;
| Taking the address of the member function:
auto RegionHookAddress = &ID3D12GraphicsCommandList::CopyTextureRegion; // the interface lives in the global scope, not in your d3d12 namespace
Calling the member function:
(d3d12::pCommandList->*RegionHookAddress)(...);
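The same pointer-to-member mechanics can be tried on a plain struct, independent of D3D12 (Widget and its members are hypothetical names for illustration):

```cpp
#include <cassert>

// Pointer-to-member-function syntax: take the address with &Class::method,
// call it with .* through an object or ->* through a pointer.
struct Widget {
    int value = 7;
    int get() { return value; }
};

int call_through_member_pointer(Widget* pw)
{
    auto pm = &Widget::get;   // pointer to member function
    return (pw->*pm)();       // same syntax as the D3D12 call above
}
```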
|
72,859,109 | 72,859,458 | Is enable_if the most concise way to define a function accepting only rvalues, but of any type? | I'm referring to this:
#include <utility>
template<typename T, typename = std::enable_if_t<std::is_rvalue_reference_v<T&&>>>
auto f(T&&) {}
int main(){
int i{};
f(std::move(i)); // ok
f(int{1}); // ok
f(i); // compile time error, as expected
}
Are there any other, shorter ways to accomplish the same?
For a moment I thought something like this could work
template<typename T>
auto f(decltype(std::declval<T>())&&) {}
but the IDE told me couldn't infer template argument 'T', and I verified here, in the section Non-deduced contexts, that the expression of a decltype-specifier is indeed a non-deduced context.
I'm interested also in a c++17 solution, if any exists.
| As @HolyBlackCat commented, you can use concepts to simplify the function signature
#include <type_traits>
template<typename T>
requires (!std::is_lvalue_reference_v<T>)
auto f(T&&) {}
Or detect lvalue or rvalue by checking the validity of a lambda expression that accepts an lvalue reference
#include <utility>
template<typename T>
requires (!requires (T&& x) { [](auto&){}(std::forward<T>(x)); })
auto f(T&&) {}
|
72,859,143 | 72,862,301 | Boost graph non contiguous vertex indices | #include <boost/graph/adjacency_list.hpp>
typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS,
boost::no_property,
boost::property<boost::edge_weight_t, double>>
DiGraph;
typedef boost::graph_traits<DiGraph>::vertex_descriptor vertex_descriptor;
int main () {
std::vector<std::size_t> vertices = { 1, 5, 10};
std::vector<std::pair<std::size_t, std::size_t>> edges = {std::make_pair(1, 5),
std::make_pair(5, 10)};
std::vector<double> weights = {2., 2.};
DiGraph di_graph (edges.begin(), edges.end(), weights.begin(), vertices.size());
DiGraph::vertex_descriptor v_start = boost::vertex(1, di_graph);
std::vector<vertex_descriptor> parents(boost::num_vertices(di_graph));
boost::dijkstra_shortest_paths(di_graph, v_start,
boost::predecessor_map(boost::make_iterator_property_map(parents.begin(), boost::get(boost::vertex_index, di_graph))));
}
This allocates a vector parents of size 11, since boost uses contiguous vertex indices.
I want the non-contiguous vertices (1, 5, 10..) but don't want the unnecessary memory space for the vector parents.
How can I make a mapping from my vertex indices to the vertex indices 1, 2, 3 and pass it to boost::dijkstra_shortest_paths?
On top of that it would be even more convenient to receive the result of dijkstra in a struct parents and access the predecessor of an element with my index, e.g.
parents[10]
but without a vector of length 11 or just have an easy conversion function f I could use
parents[f(10)]
I did take a look at the documentation of boost graph and thought the IndexMap could make this possible, but I don't understand the concept and can't make it work.
| With the boost::vecS vertex container selection, the vertex index is implicit, and the call
DiGraph di_graph(
edges.begin(), edges.end(), weights.begin(), vertices.size());
is a lie: you tell it that there are 3 vertices, but then you index out of bounds (5, 10 are outside [0,1,2]).
Note also that
V v_start = boost::vertex(1, di_graph);
selects the second vertex, not vertex 1.
Internal Names
I'd probably suggest a more recent addition: internal vertex names. If we add a vertex property bundle, like simply int:
using DiGraph = boost::adjacency_list<
boost::vecS,
boost::vecS,
boost::directedS,
int,
boost::property<boost::edge_weight_t, double>>;
And then also teach BGL that we can use it as the vertex internal name:
template<> struct boost::graph::internal_vertex_name<int> {
struct type : std::identity {
using result_type = int;
};
};
Now creating the equivalent graph is simply:
DiGraph g;
add_edge(1, 5, 2., g);
add_edge(5, 10, 2., g);
That's all. You can see that it created vertices with implicit indices as the descriptors:
for (auto e : make_iterator_range(edges(g))) {
std::cout << "edge: " << e << "\n";
}
Prints:
edge: (0,2)
edge: (1,0)
To get the names, use property maps like so:
for (auto v : make_iterator_range(vertices(g))) {
std::cout << "vertex at index " << v << " named " << g[v] << "\n";
}
Printing
vertex at index 0 named 5
vertex at index 1 named 1
vertex at index 2 named 10
Using internal vertex names, you can query vertices by property bundles:
boost::optional<V> v_start = g.vertex_by_property(1);
Now, all I can suggest is using safe iterator maps:
dijkstra_shortest_paths(
g,
v_start.value(),
boost::predecessor_map(boost::make_safe_iterator_property_map(
parents.begin(), parents.size(), get(boost::vertex_index, g))));
for (size_t i = 0; i < parents.size(); ++i) {
std::cout << "Parent for '" << g[i] << "' is '" << g[parents[i]] << "'\n";
}
Live Demo
Live On Coliru
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
#include <iostream>
template<> struct boost::graph::internal_vertex_name<int> {
struct type : std::identity {
using result_type = int;
};
};
using DiGraph = boost::adjacency_list<
boost::vecS,
boost::vecS,
boost::directedS,
int,
boost::property<boost::edge_weight_t, double>>;
using V = DiGraph::vertex_descriptor;
using boost::make_iterator_range;
int main()
{
DiGraph g;
add_edge(1, 5, 2., g);
add_edge(5, 10, 2., g);
for(auto e : make_iterator_range(edges(g)))
std::cout << "edge: " << e << "\n";
for(auto v : make_iterator_range(vertices(g)))
std::cout << "vertex at index " << v << " named " << g[v] << "\n";
boost::optional<V> v_start = g.vertex_by_property(1);
std::vector<V> parents(num_vertices(g));
dijkstra_shortest_paths(
g,
v_start.value(),
boost::predecessor_map(boost::make_safe_iterator_property_map(
parents.begin(), parents.size(), get(boost::vertex_index, g))));
for (size_t i = 0; i < parents.size(); ++i) {
std::cout << "Parent for '" << g[i] << "' is '" << g[parents[i]] << "'\n";
}
}
Prints
edge: (0,2)
edge: (1,0)
vertex at index 0 named 5
vertex at index 1 named 1
vertex at index 2 named 10
Parent for '5' is '1'
Parent for '1' is '1'
Parent for '10' is '5'
|
72,859,235 | 72,860,428 | Qt connect signals and slots of different windows/ mirror a lineEdit text on two windows | I'm new to any form of programming but have to do a project with Qt for my "programming for engineers" course where we simultaneously learn the basics of c++.
I have to display a text from one lineEdit to a lineEdit in another window.
I have a userWindow that opens from the mainWindow and in this userWindow I have a lineEdit widget that displays the current selected user as a QString (from a QDir object with .dirName() ). But now I have to display the same String in a lineEdit in the mainWindow as well.
From what I've read I have to do this with "connect(...)" which I have done before with widgets inside a single .cpp file but now I need to connect a ui object and signal from one window to another and I'm struggling.
My idea /what I could find in the internet was this:
userWindow.cpp
#include "userwindow.h"
#include "ui_userwindow.h"
#include "mainwindow.h"
#include <QDir>
#include <QMessageBox>
#include <QFileDialog>
#include <QFileInfo>
QDir workingUser; //this is the current selected user. I tried defining it in userWindow.h but that wouldn't work how I needed it to but that's a different issue
userWindow::userWindow(QWidget *parent) : //konstruktor
QDialog(parent),
ui(new Ui::userWindow)
{
ui->setupUi(this);
QObject::connect(ui->outLineEdit, SIGNAL(textChanged()), mainWindow, SLOT(changeText(workingUser) //I get the error " 'mainWIndow' does not refer to a value "
}
[...]
mainWindow.h
#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QDir>
#include <QMainWindow>
QT_BEGIN_NAMESPACE
namespace Ui { class MainWindow; }
QT_END_NAMESPACE
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
MainWindow(QWidget *parent = nullptr);
~MainWindow();
public slots:
void changeText(QDir user); //this is the declaration of my custom SLOT (so the relevant bit)
private slots:
void on_userButton_clicked();
void on_settingsButton_clicked();
private:
Ui::MainWindow *ui;
};
#endif // MAINWINDOW_H
mainwindow.cpp
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "userwindow.h"
#include "settingswindow.h"
#include "click_test_target.h"
#include "random_number_generator.h"
MainWindow::MainWindow(QWidget *parent)
: QMainWindow(parent)
, ui(new Ui::MainWindow)
{
ui->setupUi(this);
[...]
}
[...]
//here I define the slot
void MainWindow::changeText(QDir user)
{
QString current = user.dirName();
ui->userLine->insert("The current working directory is: "); //"userLine" is the lineEdit I want to write the text to
ui->userLine->insert(current);
}
I know I'm doing something wrong with the object for the SLOT but can't figure out how to do it correctly.
If anyone could help me I would be very grateful.
Alternatively: perhaps there is another way to mirror the text from one lineEdit to another over multiple windows. If anybody could share a way to do this I would be equally grateful. Is there maybe a way to somehow define the variable "QDir workingUser;" in such a way as to be able to access and overwrite it in all of my .cpp files?
Thank you in advance for any help. Regards,
Alexander M.
|
"I get the error 'mainWindow' does not refer to a value"
I don't see any variable named "mainWindow" anywhere,
but you also mentioned that the MainWindow is the parent, which means you can get a reference to it at any time, like:
MainWindow *mainWindow = qobject_cast<MainWindow *>(this->parent());
Also, your signal handler (the changeText(...) slot) should take a QString as its parameter (instead of QDir); this way you control exactly how the conversion is handled, in case users type some random text into the input field.
void changeText(const QString &input);
Finally, you either need to specify the type:
QObject::connect(ui->outLineEdit, SIGNAL(textChanged(QString)), mainWindow, SLOT(changeText(QString)));
Or, use the new Qt-5 syntax:
connect(ui->outLineEdit, &QLineEdit::textChanged,
mainWindow, &MainWindow::changeText);
|
72,859,241 | 72,863,041 | Horizontal min on avx2 8 float register and shuffle paired registers alongside | After ray vs triangle intersection test in 8 wide simd, I'm left with updating t, u and v which I've done in scalar below (find lowest t and updating t,u,v if lower than previous t). Is there a way to do this in simd instead of scalar?
int update_tuv(__m256 t, __m256 u, __m256 v, float* t_out, float* u_out, float* v_out)
{
alignas(32) float ts[8];_mm256_store_ps(ts, t);
alignas(32) float us[8];_mm256_store_ps(us, u);
alignas(32) float vs[8];_mm256_store_ps(vs, v);
int min_index{0};
for (int i = 1; i < 8; ++i) {
if (ts[i] < ts[min_index]) {
min_index = i;
}
}
if (ts[min_index] >= *t_out) { return -1; }
*t_out = ts[min_index];
*u_out = us[min_index];
*v_out = vs[min_index];
return min_index;
}
I haven't found a solution that finds the horizontal min t and shuffles/permutes its paired u and v along the way, other than permuting and min-testing 8 times.
| First find horizontal minimum of the t vector. This alone is enough to reject values with your first test.
Then find index of that first minimum element, extract and store that lane from u and v vectors.
// Horizontal minimum of the vector
inline float horizontalMinimum( __m256 v )
{
__m128 i = _mm256_extractf128_ps( v, 1 );
i = _mm_min_ps( i, _mm256_castps256_ps128( v ) );
i = _mm_min_ps( i, _mm_movehl_ps( i, i ) );
i = _mm_min_ss( i, _mm_movehdup_ps( i ) );
return _mm_cvtss_f32( i );
}
int update_tuv_avx2( __m256 t, __m256 u, __m256 v, float* t_out, float* u_out, float* v_out )
{
// Find the minimum t, reject if t_out is larger than that
float current = *t_out;
float ts = horizontalMinimum( t );
if( ts >= current )
return -1;
// Should compile into vbroadcastss
__m256 tMin = _mm256_set1_ps( ts );
*t_out = ts;
// Find the minimum index
uint32_t mask = (uint32_t)_mm256_movemask_ps( _mm256_cmp_ps( t, tMin, _CMP_EQ_OQ ) );
// If you don't yet have C++20, use the _tzcnt_u32, _BitScanForward or __builtin_ctz intrinsics
int minIndex = std::countr_zero( mask );
// Prepare a permutation vector for the vpermps AVX2 instruction
// We don't care what's in the highest 7 integer lanes in that vector, only need the first lane
__m256i iv = _mm256_castsi128_si256( _mm_cvtsi32_si128( (int)minIndex ) );
// Permute u and v vector, moving that element to the first lane
u = _mm256_permutevar8x32_ps( u, iv );
v = _mm256_permutevar8x32_ps( v, iv );
// Update the outputs with the new numbers
*u_out = _mm256_cvtss_f32( u );
*v_out = _mm256_cvtss_f32( v );
return minIndex;
}
While relatively straightforward and probably faster than your current method with vector stores followed by scalar loads, the performance of the above function is only great when that if branch is well-predicted.
When that branch is unpredictable (statistically, random outcomes), a completely branchless implementation might be a better fit. It's going to be more complicated though: load the old values with _mm_load_ss, conditionally update with _mm_blendv_ps, and store back with _mm_store_ss.
|
72,859,301 | 72,860,463 | CMake with multiple sub projects building into one directory | I'm not very familiar with CMake and still find it quite confusing. I have a project that has a server and client that I want to be able to run independent of each other but that builds together into the same directory (specifically the top level project build directory kind of like how games have the server launcher and game launcher in the same directory) Currently it just creates a builds directory in each sub project, so one in client, one in server etc.
This is my current project structure
.
├── CMakeLists.txt
├── builds
│ ├── debug
│ └── release
├── client
│ ├── CMakeLists.txt
│ ├── assets
│ └── source
│ └── Main.cpp
├── documentation
├── libraries
│ ├── glfw-3.3.7
│ └── glm
├── server
│ ├── CMakeLists.txt
│ └── source
│ └── Main.cpp
└── shared
├── PlatformDetection.h
├── Utility.h
├── events
└── platform
├── linux
├── macos
└── windows
This is my root CMake file
cmake_minimum_required(VERSION 3.20)
project(Game VERSION 1.0.0)
add_subdirectory(libraries/glfw-3.3.7)
add_subdirectory(client)
add_subdirectory(server)
Client CMake file
cmake_minimum_required(VERSION 3.20)
project(Launcher LANGUAGES CXX VERSION 1.0.0)
set(CMAKE_CXX_STANDARD 23)
set(SOURCE_FILES source/Main.cpp ../shared/events/Event.h ../shared/Utility.h
source/Client.cpp source/Client.h ../shared/PlatformDetection.h ../shared/events/EventManagementSystem.cpp
../shared/events/EventManagementSystem.h)
set(GLFW_BUILD_DOCS OFF CACHE BOOL "" FORCE)
set(GLFW_BUILD_TESTS OFF CACHE BOOL "" FORCE)
set(GLFW_BUILD_EXAMPLES OFF CACHE BOOL "" FORCE)
include_directories(${CMAKE_SOURCE_DIR}/libraries/glm)
include_directories(${CMAKE_SOURCE_DIR}/libraries/glfw-3.3.7/include/GLFW)
include_directories(${CMAKE_SOURCE_DIR}/shared)
add_executable(Launcher ${SOURCE_FILES})
target_link_libraries(Launcher LINK_PUBLIC glfw)
Server CMake file
cmake_minimum_required(VERSION 3.20)
project(ServerLauncher LANGUAGES CXX VERSION 1.0.0)
set(CMAKE_CXX_STANDARD 23)
set(SOURCE_FILES source/Main.cpp ../shared/events/Event.h ../shared/Utility.h
../shared/PlatformDetection.h ../shared/events/EventManagementSystem.cpp
../shared/events/EventManagementSystem.h)
include_directories(${CMAKE_SOURCE_DIR}/libraries/glm)
include_directories(${CMAKE_SOURCE_DIR}/shared)
add_executable(ServerLauncher ${SOURCE_FILES})
How can I make the client and server build into the same directory? And can these cmake file structures be improved at all? They seem quite messy and all over the place to me though that may just be due to my unfamiliarity with CMake.
| You cannot have multiple subdirectories use the same build directory, but that doesn't seem to be what you're trying to achieve.
Assuming you don't set the variable CMAKE_RUNTIME_OUTPUT_DIRECTORY anywhere in your project, and you don't specify the RUNTIME_OUTPUT_DIRECTORY target property for any of your targets by some other means, you could simply set the variable in the toplevel CMakeLists.txt before using add_subdirectory:
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/bin)
add_subdirectory(...)
...
Note that for distributing the program you should be using install() logic:
Client CMakeLists.txt
...
install(TARGETS Launcher RUNTIME)
Server CMakeLists.txt
...
install(TARGETS ServerLauncher RUNTIME)
Note that you may need to add logic for installing dependencies.
Using those install commands allows you to use
cmake --install <build dir> --prefix <install dir>
to install the programs locally in the default directory for binaries on the system. Furthermore it's the basis for packaging your project using cpack.
|
72,860,277 | 72,860,325 | No memory is allocated while creating the class, then where variables in the class saved? | In object oriented programming, there is concept of class and objects. We define a class and then create its instance (object). Consider the below C++ example:
class Car{
public:
string model;
bool petrol;
int wheels = 4;
};
int main(){
Car A;
cout << A.wheels;
return 0;
}
Now I know no memory was allocated for the class Car until the object A was created. Now I am swamped in confusion: if no memory was allocated for Car, then at the time of creating object A, how will object A know that wheels is equal to 4? I mean, it should be saved somewhere in memory.
Please ignore mistakes as it is a question from beginner's side :)
| There are 2 types of storage at work here.
The information about Car is stored in memory. That is the code in its methods, its layout, including the literal value 4 which initializes wheels. This exists in the binary executable file, and exists in memory at all times your application is running.
But when you say "no memory allocated.." you're thinking about memory of an instance of Car, where memory is allocated each time you create a new instance.
|
72,860,507 | 72,860,684 | How do i make a void method know what type to use? | So my question is very simple, but I can't find how to do it.
I created this method in the .cpp:
void Message_Test::Integers(const uint64_t &_value){}
and in the .hpp:
void Integers(const uint64_t &_value);
I'm passing a uint64_t value { 35000 } to the method like so: Integers(value), but I would also like to use this method with a uint16_t value { 35000 }, without having to create multiple methods.
I would like to know if there is a way to change my parameter so it knows whether it's a uint64_t, uint32_t, uint16_t, etc.
| Given that you want a const reference (assuming you had a good reason for it to be a reference), you cannot allow any implicit conversion to happen. If the function is called with a uint16_t, the compiler will create a temporary uint64_t and bind the reference to that temporary. For any valid reason to use a reference to a primitive type, this would be fatal.
I don't think you have any other choice than use a template
template <typename T>
void Integers(const T &_value){ ... }
Note that the template must be in the header.
When you store the value reference somewhere for later use (That's why it's a reference, right? Because otherwise just pass by value.) you then have to figure out how to store it, e.g. with std::variant. Or template the whole class. Depends on the context.
|
72,860,572 | 72,863,702 | C++ inline function inside a static class or namespace | I have a very small function that could be a macro, but anyway, I thought inline function would do exactly the same.
But nah, when I mark a function in a namespace as inline it is not visible to any other file that includes my module.
I tried it both with a static class and a namespace. No error when the function is declared or implemented, error in each reference, exactly as if the function was static.
Is there a way to have an inline function available for other files that include the file where my function is declared?
Let's say I have a module that handles some hardware stuff, and one of the functions is something like this:
bool getState(HWFlag flag)
{
return (State & flag) > 0
}
I know, it can be written as macro, but logically the function is a part of a bigger module. I would certainly used a macro in C, but it's C++ so I thought there might be a better way. But well, I obviously don't understand how it works.
BTW, shouldn't the compiler just inline it anyway so I shouldn't even care?
BTW2: Is making the module that mainly talks with C code a class with only static method a bad idea or does it have any use? That's how I made it initially but later decided to just make it a namespace to simplify the syntax a little. But in any case, if I use the inline keyword the function becomes private.
| An inline function should be defined identically in every translation unit that uses it. So you should define your inline function exactly as you would a macro--in a header file that gets included by all of the files that need it.
|
72,862,163 | 72,872,786 | Content of directory messes with how the code runs even though there's no reference of it | So I spent 2 hours trying to narrow down the cause of my code not working and I think it might just be something weird. Here's the exact example code I have and I can't minimize it further (yes, bar does literally nothing) :
// thread example
#include <iostream> // std::cout
#include <thread> // std::thread
void bar()
{
// do stuff...
}
int main()
{
std::cout << "please run" << std::endl;
std::thread t(bar);
t.join();
std::cout << "completed.\n";
return 0;
}
Here's how I build it :
g++ -std=c++0x -o test test.cpp -pthread
When all of this is done in a blank directory, it works perfectly, but if I put test.cpp in my project directory, compile it here and run it, it crashes before even entering main. Here's the crash report but I translated it from French so it's not the exact text:
"The entry point of the procedure std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)()) cannot be found in the library of dynamic links [project directory]\test.exe"
Note that this message did not appear in the console command but in a separate window pop-up from where I can't copy-paste.
Here's roughly what the project directory is (not exact, because script files were updated, but it's the same structure and libraries): https://github.com/Leroymilo/RTX_Cpp
It has SFML and jsonCpp in it if it changes anything.
Also I am on Windows10 and I'm using VScode if it makes any difference.
I did what's advised in this post : Segmentation Fault while creating thread using #include<thread> but it does not change the result.
| The issue was the presence of a file called "libstdc++-6.dll" in my project directory. I have no memory of why it's there, because I copied all libraries from another project, but if anyone makes the same mistake, here's your solution.
Edit : I found out about why I had this in my files : it's because my build wasn't static and launching the built executable on another computer showed a pop-up asking for these. I fixed my static build issue with the answers provided in this post : Issues with libgcc_s_dw2-1.dll and libstdc++-6.dll on build .
|
72,862,644 | 72,862,684 | How to do error handling with vector of pointers? | This is probably a noob question, but I am a bit confused.
I have a block of code that looks like this:
std::vector<MyObject*> datas;
try
{
MyObject data = get_me_data();
MyObject* dat=&data;
datas.push_back(dat);
}
catch (...) {}
do_something_with_datas(datas);
The issue is that the function after the try/catch block, which should handle the std::vector, does not work properly, as I am getting nonsense from it.
I think the issue is that dat goes out of scope in try block, so state of what it points is undefined and I really shouldn't have used MyObject* dat=&data?
I am also uncertain if vector is cleared properly.
Can someone please point me at right direction? I have some algorithm that creates bunch of MyObject data objects. I wish to pack pointers to them into a vector and process them after the try/catch code.
This is simplified code, actual algorithm uses MyObject* because the code implements iterative algorithm that updates dat object before pushing them into vector. So main issue here is how to make get_me_data() function give me a pointer to data, and not the data.
Would I need to use new operator at the line when it creates the object?
I am new to C++, and I am certain there are many better ways to implement this, can someone explain how would move semantics be used here. What I understood from move constructors is that they turn rvalue into lvalue and provide a more efficient copy, i.e they'd move pointers that point to array fields and "neglect" original object, so the memory would be "moved" to a new object, instead of copied. I am not sure if that'd work with the code I wrote as the idea is to iteratively update fields of MyObject. As all classes in the code need MyObject as their input, it made little sense to me to move it around and ensuring that every class in the package is moveable. It seems much easier to have one MyObject and to pass a pointer to it across the whole package, and not be bothered too much about ownership. Am I horribly wrong with this implementation?
|
I think the issue is that dat goes out of scope in try block
Your hunch is correct.
I am also uncertain if vector is cleared properly.
Although this cannot be determined, based on the shown code this is very likely. This happens very often in code that suffers from a common problem called "Pointless Use Of Pointers".
There's nothing in the shown code that requires pointers. Maybe there might be some other reason that's not shown, but as far as the shown code is concerned there is no reason why the vector cannot simply be:
std::vector<MyObject> datas;
And the try/catch block gets reduced to:
datas.push_back(get_me_data());
It's unclear whether this still needs exception handling, but whether it does or not, everything now works correctly either way. Finally: whether or not the "vector is cleared properly" is now a completely moot point. It will be automatically cleared properly. The std::vector will make sure that its contents get correctly cleared when it is destroyed.
I have some algorithm that creates bunch of MyObject data objects.
Great, here they are, in their entirety: std::vector<MyObject>. Just say "No" to pointless use of pointers.
|
72,864,514 | 72,864,585 | Passing pointer reference from template function to non-template function | I am attempting to move around a pointer by reference (T*&) between some template functions. Under certain conditions this pointer reference may get passed to a different function that accepts a void pointer reference (void*&). When I attempt to pass the templated type into the function accepting a void*&, it gives me the error:
error: cannot bind non-const lvalue reference of type 'void*&' to an rvalue of type 'void*'
This error is pretty self explanatory on its own. However I can't readily make sense of the error in context of the code. Here is a minimal reproduction of my error I was able to make in Godbolt (x86_64 gcc 10.2):
#include <iostream>
#include <type_traits>
void NonTempFunct(void*& Ptr)
{
std::cout << "Pointer Value: " << Ptr << ".\n";
}
template<typename T, typename = std::enable_if_t< std::is_pointer_v<T> >>
void TempFunct(T& Param)
{
std::cout << "Pointer found.\n";
NonTempFunct( Param );
}
template<typename T, typename = std::enable_if_t< !std::is_pointer_v<T> >, typename = void>
void TempFunct(T& Param)
{
std::cout << "Non pointer found. No op.\n";
}
int main()
{
int Value = 50;
int* pValue = &Value;
TempFunct( pValue );
return 0;
}
The error specifically complains about the invocation of NonTempFunct(void*&). As far as I am aware, there are no rvalues in this chain. They all have names and refer back to an automatically allocated variable.
I didn't stop here though, and fiddled with the code a bit. Using std::forward (NonTempFunct( std::forward<T&>(Param) );) or std::move (NonTempFunct( std::move(Param) );) when invoking NonTempFunct didn't change the error produced.
VERY curiously, when I switched the references in both TempFunct declarations to a universal reference (&&) the program did compile, however the wrong version was selected with SFINAE, suggesting the std::is_pointer_v<T> check failed with universal references.
The one thing that did work was a reinterpret_cast in the call to NonTempFunct (without universal references).
NonTempFunct( reinterpret_cast<void*&>(Param) );
That compiles. I fear I don't understand C++ well enough to make sense of these results. My specific questions are:
Where is the rvalue from the initial error coming from?
Why does the use of a universal reference cause std::is_pointer_v to fail?
Why does a reinterpret_cast bypass these issues?
| Case 1
Here we discuss the reason for the mentioned error.
The problem is that Param is an lvalue of type int* and it can be converted to a prvalue of type void* when passed as the call argument in NonTempFunct( Param ); but the parameter of NonTempFunct is a non-const lvalue reference, which cannot be bound to an rvalue.
Essentially, the result of the conversion(int*->void*) will be a prvalue and a non-const lvalue reference cannot be bound to that rvalue.
To solve this you can either make the parameter of NonTempFunct to be a const lvalue reference or simply a void* as shown below
Method 1
//----------------------vvvvv---------->added this
void NonTempFunct(void *const& Ptr)
{
std::cout << "Pointer Value: " << Ptr << ".\n";
}
Working demo
Method 2
//----------------vvvvv---------->removed the reference
void NonTempFunct(void* Ptr)
{
std::cout << "Pointer Value: " << Ptr << ".\n";
}
Working demo
Case 2
Here we discuss the reason when we use universal reference, the program compiles without any error.
When you make the function template's parameter to be T&& and use the call TempFunct( pValue ) then T is deduced to be int*& i.e., non const lvalue reference to a non const pointer to int.
This means that std::is_pointer_v<T> will be the same as std::is_pointer_v<int*&> which will be false. Demo.
This in turn means that the first overloaded version will be SFINAE'd OUT. And since the second version is viable(as it uses !std::is_pointer_v<T> which is the same as !std::is_pointer_v<int*&> and so is true ), it will be used and we will get the output Non pointer found. No op.
|
72,864,596 | 72,865,115 | iterate over a Variadic Template function and choose pointer arguments | I have a Variadic Template function in C++ and I want to iterate over the template arguments and cherry-pick those arguments that are pointers.
PLEASE SEE THE UPDATE SECTION BELOW.
So, I have the following code below. I wrote a skeleton code that is compilable and runnable.
g++ -std=c++17 f3_stackoverflow.cpp && ./a.out
I am just curious about the part that says: LOGIC GOES HERE
#include <iostream>
#include <vector>
#include <utility>
template <typename... Args>
std::vector<std::pair<int, void*>> check(Args... args) {
std::vector<std::pair<int, void*>> ret;
// LOGIC GOES HERE
return ret;
}
void printVector(std::vector<std::pair<int, void*>> v) {
for(const auto& _v : v) {
std::cout << _v.first << " : " << _v.second << std::endl;
}
}
int main(int argc, char const *argv[])
{
int n = 100;
int a;
std::vector<int> b(n);
float c;
std::vector<float> d(n);
char e;
std::vector<char> f(n);
auto pairs = check(a, b.data(), c, d.data(), e, f.data());
printVector(pairs);
return 0;
}
So, I want to see the following output in the stdout of the program:
1 : 0x123
3 : 0x567
5 : 0x980
UPDATE:
Basically, I am looking for the indexes where the argument is a pointer (e.g., int*, float*, ...) and the address the pointer is pointing to. That is why you see the output that I provided.
Explanation of the output: The second, fourth, and sixth arguments are pointers (hence, 1, 3, 5 in zero-based indexing).
| template <int N, typename Arg, typename... Args>
void helper(std::vector<std::pair<int, void*>>& v, Arg arg, Args... args) {
if constexpr(std::is_pointer_v<Arg>) {
v.emplace_back(N, (void*)arg);
}
if constexpr(sizeof...(args) > 0) {
helper<N+1, Args...>(v, args...);
}
}
template <typename... Args>
std::vector<std::pair<int, void*>> check(Args... args) {
std::vector<std::pair<int, void*>> ret;
helper<0>(ret, args...);
return ret;
}
Demo
Maybe a simpler version using fold expression
template <typename Arg>
void helper(std::vector<std::pair<int, void*>>& v, Arg arg, int idx) {
if constexpr(std::is_pointer_v<Arg>) {
v.emplace_back(idx, (void*)arg);
}
}
template <typename... Args>
std::vector<std::pair<int, void*>> check(Args... args) {
std::vector<std::pair<int, void*>> ret;
int n = 0;
(helper(ret, args, n++), ...);
return ret;
}
Demo
|
72,864,817 | 72,865,875 | Pass array from C# to C++ | I am trying to pass an array from C# to a C++ DLL and then print it from C++.
The C# code is the following:
[DllImport("DLL1.dll", CallingConvention = CallingConvention.StdCall)]
public static extern void GMSH_PassVector([MarshalAs(UnmanagedType.SafeArray,SafeArraySubType = VarEnum.VT_I4)] int[] curveTag);
int[] curveTagArray = { 1, 2, 3, 4 };
GMSH_PassVector(curveTagArray);
The C++ header file code is the following:
extern "C" GMSH_API void GMSH_PassVector(std::vector<int> *curveTags);
The C++ cpp file code is the following to display the first element of the array:
void GMSH_PassVector(std::vector<int> *curveTags)
{
printf("Hello %d \n", curveTags[0]);
}
It seems that I'm doing something wrong; a value is displayed, but it is the wrong one.
| I don't think the marshal mechanism is able to translate an int[] to a std::vector. It's better to use basic language types. Change the C++ function to
void GMSH_PassVector(int * arr, int size)
and in C# you have to instantiate a
IntPtr to hold the address of the first element of the array.
int bufferSize = 4;
int[] buffer = new int[bufferSize];
buffer[0] = 1;
buffer[1] = 2;
buffer[2] = 3;
buffer[3] = 4;
int byteSize = Marshal.SizeOf(buffer[0]) * buffer.Length;
IntPtr bufferPtr = Marshal.AllocHGlobal(byteSize);
Marshal.Copy(buffer, 0, bufferPtr, buffer.Length); // the last argument is the element count, not bytes
GMSH_PassVector(bufferPtr, buffer.Length);
Marshal.FreeHGlobal(bufferPtr);
|
72,864,937 | 72,865,021 | How to do Google test for function which are not a part of class? | I am performing Google Test, and here I am facing one challenge.
In my project some of the functions are not included in the header file; they are defined directly in the source file. I can access the functions that are in the header file by creating an object of the class, but I am not able to access those that are only in the source file.
Please guide me on how to access these functions.
Thanks in advance!
Kiran JP
| Declare them extern in your test code.
Example. Let's say you have a source file like this:
// Function declared and defined in .cpp file
void myFunction() {
// implementation
}
Then you could go ahead and do the following in your test code:
extern void myFunction();
TEST(MyTest) {
myFunction();
}
Unless the function was explicitly declared with internal linkage. Either by declaring it static or by declaring/defining it inside an anonymous namespace in C++11 or above.
|
72,865,106 | 72,866,133 | May this kind of rewrite of placement new compile? | Background
I am cleaning up a legacy codebase by applying a coding guideline for the new statement.
There is code like auto x = new(ClassName); that I rewrite to auto x = new ClassName();. It's quite obvious that this is not a placement new and I don't need to think about it.
However, there's also code like auto x = new(ClassName)(argument); which looks much like a placement new. For now I blindly rewrote such code to auto x = new ClassName(argument); as well.
Question
Might there be a case where a real placement new like auto x = new(placement-params)(Type); is rewritten as auto x = new placement-params(Type); and it still compiles but changes the meaning of the program?
| placement-params is not a type, it is a value.
Consider this code with a placement new:
int* buf = new int;
int* a = new(buf)(int);
If we remove the parentheses around buf, the compiler can easily detect that buf is not a type.
int* a = new buf(int); // compile error
Even if we create a type named buf, by the name lookup rules, the name in the inner scope is found first:
class buf { // found second
buf(int) {}
};
int main() {
int *buf = new int; // found first
int* a = new buf(int); // compile error
return 0;
}
According to this answer, when a type and a value are in the same scope, the value is found first. For example:
class buf { // found second
buf(int) {}
};
int *buf = new int; // found first
int main() {
int *a = new buf(int); // compile error
return 0;
}
|
72,865,756 | 72,865,967 | How can I transfer matrix data from Matlab to OpenCV, C++? | I have a 57X1 double matrix in Matlab and I want to find a way to save that data and then load it to a new OpenCV Mat. For actual images I used to do imwrite in Matlab and then imread in OpenCV but, in this current situation, the result was a Mat with all of the values equal to 255.
| The simplest way is to just use csvwrite to write as a text file, and then load it in C++ by reading the numbers from the text file.
If you must have binary exact values, you can use the fopen, fwrite, fclose to write the values in binary format, and then use the equivalent functions (i.e. fread or ifstream::read) to read the binary data directly to the Mat buffer.
|
72,865,925 | 72,866,049 | On uint64 to double conversion: Why is the code simpler after a shift right by 1? | Why is AsDouble1 much more straightforward than AsDouble0?
// AsDouble0(unsigned long): # @AsDouble0(unsigned long)
// movq xmm1, rdi
// punpckldq xmm1, xmmword ptr [rip + .LCPI0_0] # xmm1 = xmm1[0],mem[0],xmm1[1],mem[1]
// subpd xmm1, xmmword ptr [rip + .LCPI0_1]
// movapd xmm0, xmm1
// unpckhpd xmm0, xmm1 # xmm0 = xmm0[1],xmm1[1]
// addsd xmm0, xmm1
// addsd xmm0, xmm0
// ret
double AsDouble0(uint64_t x) { return x * 2.0; }
// AsDouble1(unsigned long): # @AsDouble1(unsigned long)
// shr rdi
// cvtsi2sd xmm0, rdi
// addsd xmm0, xmm0
// ret
double AsDouble1(uint64_t x) { return (x >> 1) * 2.0; }
Code available at: https://godbolt.org/z/dKc6Pe6M1
| x86 has an instruction to convert between signed integers and floats (cvtsi2sd). A direct unsigned 64-bit conversion instruction only exists with AVX-512, which most compilers don't assume by default. If you shift a uint64_t right once, the sign bit is gone, so you can interpret it as a signed integer and get the same result (apart from the discarded low bit).
|
72,865,996 | 72,866,174 | Facing problems in my first time handling CMake, Third party(header only) libraries | I want to use the following library
https://github.com/gmeuli/caterpillar
Its documentation says that it's a header-only library, and that I should "directly integrate it into my source files with #include <caterpillar/caterpillar.h>." It also depends on a few other libraries, one of which I need to use directly as well.
So far I have done the following:
create cmake project to make an 'executable' (with the vscode extension)
created a 'lib' folder, inside which I did
git clone https://github.com/gmeuli/caterpillar
Then, I did include_directories(lib) in my cmake file.
But #include <caterpillar/caterpillar.h> doesn't quite work in my singular main.cpp file.
I played around with various CMake functions, and it either gave the error "No such file or directory" regarding caterpillar/caterpillar.h itself, or it gave "cannot open source file... dependent of caterpillar/caterpillar.h" depending on how I messed with the cmake file.
For reference:
cat ~/project/main.cpp
#include <caterpillar/caterpillar.hpp>
#include <lorina/lorina.hpp> //how do I include this ? it's in the lib folder of caterpillar itself, or do I need to have a copy of it in my lib folder too
int main()
{
// stuff in lorina:: namespace
// stuff in caterpillar:: namespace
return 0;
}
cat ~/project/CMakeLists.txt
include_directories(lib)
//... rest is stuff like CXX standard, etc etc
tree ~/project
main.cpp
lib/
caterpillar/
build/
cmake generated stuff
CMakeLists.txt
| Firstly, modern cmake recommends target_include_directories() instead of old include_directories() for better scope management.
Actually, <caterpillar/caterpillar.hpp> is not in the $PROJECT_SOURCE_DIR/lib directory; that's why your code does not work.
CMakeLists example:
cmake_minimum_required(VERSION 3.22)
project(myproject)
set(CMAKE_CXX_STANDARD 17)
add_executable(my_project main.cpp)
target_include_directories(my_project PRIVATE ${PROJECT_SOURCE_DIR}/lib/caterpillar/include)
# project_src_dir/lib/catepillar/include/ is the directory where you find the headers like <catepillar/catepillar.hpp>
target_include_directories(my_project PRIVATE ${PROJECT_SOURCE_DIR}/lib/caterpillar/lib/lorina)
caterpillar's documentation describes how to include its headers in the traditional style, assuming readers understand this and can decide where to put the headers themselves (which means you don't need the whole git repo, only the "include" dir).
For this specific problem, the library has provided a detailed CMakeLists.txt for users to include:
cmake_minimum_required(VERSION 3.22)
project(my_project)
set(CMAKE_CXX_STANDARD 17)
add_subdirectory(lib/caterpillar)
# this works because "project_src_dir/lib/catepillar/CMakeLists.txt" exists.
add_executable(my_project main.cpp)
target_link_libraries(my_project PRIVATE caterpillar)
# you need to tell cmake to add all catepillar settings into your project
|
72,866,491 | 72,868,563 | VS Code is incorrectly formatting numbers separated by apostrophes in C++ | I am using VS Code for developing C++ code and these are my settings:
Editor: Format On Save: on
C_Cpp: Formatting: Default
C_Cpp: Clang_format_fallback: GNU
This is my code before formatting:
#include <iostream>
using namespace std;
int
main ()
{
cout << endl;
long large_number{ 7'958'482'164 };
cout << "Large number: " << large_number << endl;
return 0;
}
This is my code after formatting:
#include <iostream>
using namespace std;
int
main ()
{
cout << endl;
long large_number
{
7 '958' 482'164 }; cout << "Large number: " << large_number << endl;
return 0;
}
Why is this happening? How can I prevent that behavior?
| Change C_Cpp: Clang_format_fallback to a value other than GNU, I changed it to Visual Studio and it now works fine.
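Alternatively, a project-local .clang-format file avoids relying on the fallback style entirely; digit-separator handling depends on the clang-format version, so the settings below are a hedge rather than a guaranteed fix:

```
# .clang-format in the project root (settings are illustrative)
BasedOnStyle: LLVM
Standard: Latest   # parse post-C++11 syntax such as 7'958'482'164
```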
|
72,866,641 | 72,898,110 | Black background appears after converting from Mat(OpenCV) to UIImage | I have an application that uses some functions of the OpenCV library to edit images.
After converting UIImage to Mat and Mat to UIImage I get a black background in the image
please tell me how to fix it
my code which i use to convert UIImage to Mat
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
return cvMat;
}
my code which i use to convert Mat to UIImage
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
CGColorSpaceRef colorSpace;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(cvMat.cols, //width
cvMat.rows, //height
8, //bits per component
8 * cvMat.elemSize(), //bits per pixel
cvMat.step[0], //bytesPerRow
colorSpace, //colorspace
kCGImageAlphaNone|kCGBitmapByteOrderDefault,// bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
example of my code
-(UIImage *)stroked:(UIImage *)image color:(int)color strock:(float)strock {
cv::Mat img = [self cvMatFromUIImage:image];
return [self UIImageFromCVMat:img];
}
| The code works as far as I can tell; I could not reproduce the issue directly. Following a hint, I changed the bitmap info flags to use premultiplied alpha:
kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault became kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, and
kCGImageAlphaNone | kCGBitmapByteOrderDefault became kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast
|
72,867,013 | 72,867,691 | Read non-specified format file data | I have a problem reading file data packed as binary resource. I have something like this
7ë?Vý˝‹ĺ”>˙†J˙l$í?źÔ=ć$ľ>˙†J˙(çî?Yý˝ć$ľ>˙†J˙'Šč?[ý˝"6?˙†J˙KÓć?[YČ="6?˙†J˙…?Ů?[ý˝fË$?˙†J˙4Ą×?ĄŰŞ=fË$?˙†J˙ĚúĹ?[ý˝n8?˙†J˙r„Ä?s˛…=n8?˙†J˙Ôž°?[ý˝.2??˙†J˙?š?$>Í<n8?˙†J˙ÜB›?[ý˝n8?˙†J˙#ţ‡?[ý˝fË$?˙†J˙}ű†?eâ;fË$?˙†J˙gq?[ý˝"6?˙†J˙Ëšo?Đuő»"6?˙†J˙OÄ]?[ý˝_Yż>˙†J˙ľ\?P‰Ľ_Yż>˙†J˙: W?[ý˝ ÓS>˙†J˙JeU?@3ŁĽ ÓS>˙†J˙OÄ]?[ý˝Í#=˙†J˙ľ\?
It should be numeric data 1 float - 4 bytes.
it should be "1.8376168"
First 4 bytes of file converted to hex is
07 37 EB 3F
I need some help with converting this to float.
I managed to do this using memcpy.
Here is the code:
(You can't see the first character before the 7; it's [BELL]. It works in VS.)
void using_memcpy(const char* s, size_t slen) {
float* f = new float[slen / sizeof(float)];
memcpy(f, s, slen / sizeof(float) * sizeof(float));
for (size_t i = 0; i < slen / sizeof(float); ++i) {
printf("%zu = %f\n", i, f[i]);
}
delete[] f;
}
std::string ch = "7ë?";
using_memcpy(ch.c_str(), ch.size());
| If your data is the binary representation of IEEE 754 floating point numbers, which it looks like it is, you can memcpy that data into a float variable.
You may need to do endianness conversion, depending on the platform you're compiling for.
Compiler explorer
// read this from a file and store it in some container.
// this is just a std::string for an easier example
const std::string raw_data = "7ë?Vý˝‹ĺ”>˙†J˙l$í?źÔ=ć$ľ>˙†J˙(çî?Yý˝ć$ľ>˙†J˙'Šč?[ý˝\"6?˙†J˙KÓć?[YČ=\"6?˙†J˙…?Ů?[ý˝fË$?˙†J˙4Ą×?ĄŰŞ=fË$?˙†J˙ĚúĹ?[ý˝n8?˙†J˙r„Ä?s˛…=n8?˙†J˙Ôž°?[ý˝.2??˙†J˙?š?$>Í<n8?˙†J˙ÜB›?[ý˝n8?˙†J˙#ţ‡?[ý˝fË$?˙†J˙}ű†?eâ;fË$?˙†J˙gq?[ý˝\"6?˙†J˙Ëšo?Đuő»\"6?˙†J˙OÄ]?[ý˝_Yż>˙†J˙ľ\?P‰Ľ_Yż>˙†J˙: W?[ý˝ ÓS>˙†J˙JeU?@3ŁĽ ÓS>˙†J˙OÄ]?[ý˝Í#=˙†J˙ľ\?";
for(size_t idx = 0;
idx <= (raw_data.size() - 4);
idx += 4)
{
static_assert(sizeof(float) == 4);
if constexpr (std::endian::native == std::endian::big)
{
// need to swap endianness
int32_t temp;
std::memcpy(&temp, raw_data.data() + idx, 4);
temp = byteswap(temp); //check the compiler explorer link for the implementation
float f;
std::memcpy(&f, &temp, 4);
std::cout << f << std::endl;
}
else
{
float f;
std::memcpy(&f, raw_data.data() + idx, 4);
std::cout << f << std::endl;
}
}
|
72,868,191 | 72,868,392 | C++: Alternative to this range's view construction with same requirements? | To remove manual logic in my code, I use that construct:
std::ranges::drop_view a { std::ranges::take_view(my_range, my_range.size() -X), Y};
with X and Y values I pass at runtime.
Even though I check the algorithms, I could not find a shorter way that has the following constraints:
don't go beyond or below the range I want, and don't do anything if the range has 0 elements -> no overflow
non owning -> no copies
ranges::subranges doesn't meet those requirements.
Thanks
| You can compose take_view and drop_view into a new adaptor
auto take_and_drop = [](auto n, auto m) {
return std::views::take(n) | std::views::drop(m);
};
auto a = my_range | take_and_drop(my_range.size() - X, Y);
|
72,868,412 | 72,868,793 | How can I set add_executable WIN32 property or not depending on the build type? | This fails with the error "Cannot find source file: WIN32. Tried extensions..."
add_executable(${PROJECT_NAME} $<$<CONFIG:Release>:WIN32> main.cpp)
I need this in order to launch the app in the console in Debug mode and be able to read information printed to the console.
And as far as I know, checking CMAKE_BUILD_TYPE for Release directly is wrong and advised against in the CMake docs.
| As you noticed you cannot use generator expressions for the WIN32 keyword in the add_executable command.
Instead, try setting the corresponding property WIN32_EXECUTABLE on the target:
set_target_properties(${PROJECT_NAME} PROPERTIES WIN32_EXECUTABLE $<CONFIG:Release>)
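In context, a minimal sketch (target name and source file are illustrative):

```
add_executable(${PROJECT_NAME} main.cpp)
# Console subsystem in Debug, GUI (WinMain) subsystem in Release; no effect on non-Windows platforms.
set_target_properties(${PROJECT_NAME} PROPERTIES
    WIN32_EXECUTABLE $<CONFIG:Release>)
```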
|
72,869,955 | 72,870,229 | derived class as a parameter of templated function which is specialized for its base class | class Base {};
class Derived : public Base {};
class SomeClass
{
template<typename T>
static void SetContent(T* pChild, OVariant content)
{
LOG_ASSERT(0, "All classes must be specialized!. Please provide implementation for this class.");
}
};
template <>
void SomeClass::SetContent(Base* value)
{
LOG_TRACE("Yay!");
}
int main() {
SomeClass foo;
Derived derived;
foo.SetContent(&derived);//want it to call SomeClass::SetContent(Base* value)
return 0;
}
When I call foo.SetContent(derived), I get the Assert. Is it not possible for the derived class to use the specialization for it's base class?
| You can convert a Derived* to a Base*, but I think you rather want to specialize for all T that have Base as base
#include <type_traits>
#include <iostream>
class Base {};
class Derived : public Base {};
template <typename T,typename = void>
struct impl {
void operator()(T*) {
std::cout <<"All classes must be specialized!. Please provide implementation for this class.\n";
}
};
template <typename T>
struct impl<T,std::enable_if_t<std::is_base_of_v<Base,T>>> {
void operator()(T*) {
std::cout << "Yay\n";
}
};
class SomeClass
{
public:
template<typename T>
static void SetContent(T* pChild)
{
impl<T>{}(pChild);
}
};
struct bar{};
int main() {
SomeClass foo;
Derived derived;
foo.SetContent(&derived);
bar b;
foo.SetContent(&b);
}
Output:
Yay
All classes must be specialized!. Please provide implementation for this class.
//want it to call SomeClass::SetContent(Base* value)
Note that if the template argument is deduced, it will be deduced as Derived not as Base and the argument is Derived*. SomeClass::SetContent<Base>(&derived); would already work as expected with your code (because Derived* can be converted to Base*).
|
72,870,279 | 72,873,583 | access variable in so file and register callback function in ctypes | I'm trying to access variable declared in cpp header file from the compiled shared object.
Below is my case
/* cpp_header.hpp */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
//variables declaration
const uint8_t variable1 = 3;
const uint16_t variable2 = 4056;
const uint16_t variable3 = 3040;
typedef struct {
void* a;
uint32_t b;
uint16_t w;
uint16_t h;
size_t p;
} structure1 ;
typedef struct {
uint32_t a;
uint32_t b;
} structure2 ;
//callback function declaration
typedef void (*one_callback) (const structure1 *);
typedef void (*output_callback) (const structure1 *);
typedef void (*inpout_callback) (const structure2 *);
//APIs using the callback function
int start_api(enum_type, output_callback, inpout_callback);
What i'm trying in ctypes
# ctype_wrapper.py
import ctypes
from ctypes import *
lib_instance = CDLL('test.so')
#accessing the variable declared in cpp header
variable1 = c_uint8.in_dll(lib_instance, 'variable1')
variable2 = c_uint16.in_dll(lib_instance, 'variable2')
variable3 = c_uint16.in_dll(lib_instance, 'variable3')
# registering the callback function
ctype_start_api = lib_instance.start_api
ctype_start_api.argtypes = [enum_type, output_callback, inpout_callback] # register the callback
ctype_start_api.restype = c_int
Error output
#error for variable access
File "ctype_wrapper.py", line 6, in <module>
variable1 = c_uint8.in_dll(lib_instance, 'variable1')
ValueError: ./test.so: undefined symbol: variable1
For callback register, i referred the ctypes document but no idea how to implement that for my scenario.
Is my variable declaration is correct in header.hpp file or i need to add anything to get export the variables in compiled so file?
| Variables and function in must be exported for ctypes to find them. extern may be sufficient on Linux to export variables, but on Windows both variables and functions need an additional __declspec(dllexport) declaration.
ctypes also expects exported variables and functions to be C linkage, so C++ variables and functions need to be wrapped in extern "C".
Here's a working example, tested on Windows, that also demonstrates callbacks:
test.hpp
#include <stdint.h>
#ifdef _WIN32
# define API __declspec(dllexport)
#else
# define API
#endif
typedef struct {
void* a;
uint32_t b;
uint16_t w;
uint16_t h;
size_t p;
} structure1;
typedef struct {
uint32_t a;
uint32_t b;
} structure2;
typedef void (*output_callback) (const structure1 *);
typedef void (*inpout_callback) (const structure2 *);
extern "C" {
API extern const uint8_t variable1;
API extern const uint16_t variable2;
API extern const uint16_t variable3;
API int start_api(output_callback, inpout_callback);
}
test.cpp
#include "test.hpp"
extern "C" {
const uint8_t variable1 = 3;
const uint16_t variable2 = 4056;
const uint16_t variable3 = 3040;
int start_api(output_callback ocb, inpout_callback iocb) {
structure1 s1 { nullptr, 1, 2, 3, 4 };
structure2 s2 { 5, 6 };
if(ocb)
ocb(&s1);
if(iocb)
iocb(&s2);
return 0;
}
}
test.py
import ctypes as ct
class Structure1(ct.Structure):
_fields_ = (('a', ct.c_void_p),
('b', ct.c_uint32),
('w', ct.c_uint16),
('h', ct.c_uint16),
('p', ct.c_size_t))
# Good habit: print representation of class so it can print itself.
def __repr__(self):
return f'Structure1(a={self.a}, b={self.b}, w={self.w}, h={self.h}, p={self.p})'
class Structure2(ct.Structure):
_fields_ = (('a', ct.c_uint32),
('b', ct.c_uint32))
def __repr__(self):
return f'Structure2(a={self.a}, b={self.b})'
OCB = ct.CFUNCTYPE(None, ct.POINTER(Structure1))
IOCB = ct.CFUNCTYPE(None, ct.POINTER(Structure2))
# decorating a function with the callback signature makes it callable from C
@OCB
def output_callback(ps1):
print(ps1.contents)
@IOCB
def inpout_callback(ps2):
print(ps2.contents)
lib_instance = ct.CDLL('./test')
start_api = lib_instance.start_api
start_api.argtypes = OCB, IOCB
start_api.restype = ct.c_int
variable1 = ct.c_uint8.in_dll(lib_instance, 'variable1')
variable2 = ct.c_uint16.in_dll(lib_instance, 'variable2')
variable3 = ct.c_uint16.in_dll(lib_instance, 'variable3')
print(variable1.value, variable2.value, variable3.value)
start_api(output_callback, inpout_callback)
Output:
3 4056 3040
Structure1(a=None, b=1, w=2, h=3, p=4)
Structure2(a=5, b=6)
|
72,870,293 | 72,871,591 | Cmake reconfiguration with sanitizers added doesn't trigger ninja to recompile | Let's assume a minimal top level CMakeLists.txt like this:
1 cmake_minimum_required(VERSION 3.22)
2 set(CMAKE_CXX_STANDARD 20)
3
4 project(stackoverflow LANGUAGES CXX C)
5
6 add_executable(prog src/main.cpp)
7
8 option(ENABLE_SANITIZER "Enables sanitizer" OFF)
9 if(ENABLE_SANITIZER)
10 target_compile_options(prog PUBLIC -fsanitize=address)
11 target_link_options(prog PUBLIC -fsanitize=address)
12 endif()
Where the option ENABLE_SANITIZER adds an address sanitizer to the build.
When I configure with the sanitizer with
cmake -S . -B ./build -G "Ninja Multi-Config" -DENABLE_SANITIZER=ON
and build with
cmake --build ./build/ --target prog
everything compiles as it should, but when I reconfigure with
cmake -S . -B ./build -G "Ninja Multi-Config"
and build it again, ninja tells me that there is nothing to do:
ninja: no work to do.
Why does this happen when I clearly removed a compile option and link option?
| When you set a variable, it is stored in the cache (CMakeCache.txt). If you don't reset it when reconfiguring, it keeps its previous value. option(... OFF) only sets the option to OFF if it is not already set, and even set(ENABLE_SANITIZER OFF) will not change a cached variable; only the form set(<var> <value> CACHE <type> <docstring> FORCE) overrides the cache, refer to the documentation. To actually disable the sanitizer, pass -DENABLE_SANITIZER=OFF when reconfiguring (or delete CMakeCache.txt).
|
72,870,386 | 72,870,456 | why the destructor is called only one time when the constructor is called 5 times? | I'm trying to learn more about C++. In this code I'm allocating an array of A's (5 in this case). As I understand it, 5 A's are allocated, so the compiler will call the constructor 5 times, but when deleting that array it calls the destructor only once. Why does it call the destructor only once when there are 5 A's? Shouldn't it call the destructor 5 times?
I have this code :
#include <iostream>
using namespace std;
class A {
public:
A() { std::cout << "IM in C'tor" << std::endl; };
~A() { std::cout << "IM in De'tor" << std::endl; }
};
int main()
{
A* a = new A[5];
delete a; // ignore the errors, the important thing is calling the C'tor and D'tor
return 0;
}
| You need to use delete[] a to delete an array of things allocated with new[]. If you do that, you'll see the correct output:
IM in C'tor
IM in C'tor
IM in C'tor
IM in C'tor
IM in C'tor
IM in De'tor
IM in De'tor
IM in De'tor
IM in De'tor
IM in De'tor
|
72,870,732 | 73,090,786 | Array, which elements links to elements of another array | I want to have an array each elements of each somehow indicates some element of another resizable array
I tried:
vector <int> a={1,2,3};
vector <int*> b={&a[0],&a[1],&a[2]};
But every editing of size of vector a, copies himself to empty place of memory, so pointers in array b links to an empty place
| I used unordered_map to store the elements. In the second array I stored keys into the map.
How to close this question?
|
72,870,778 | 72,882,610 | C++ - detect is first base class at compile time | I'd like to detect that class Base is the first base of class Deriv. That is, they have the same pointer.
The example below doesn't work. I tried a few more things, wrapping casts in functions and unions, and got nowhere.
With a union it works only if all the types are literal - default destructable etc, which my classes are not.
Is there a way to do it? ideally in C++14?
template <class Base, class Deriv, bool IsBase = std::is_base_of<Base, Deriv>::value>
struct is_first_base_of {
static constexpr bool value = reinterpret_cast<Base*>(1) == static_cast<Base*>(reinterpret_cast<Deriv*>(1)); // this doesn't work
};
template <class Base, class Deriv>
struct is_first_base_of<Base, Deriv, false> {
static constexpr bool value = false;
};
struct A1 { int a1; };
struct A2 { int a2; };
struct B : A1, A2 { int b; };
static_assert(is_first_base_of<A1, B>::value == true, "");
static_assert(is_first_base_of<A2, B>::value == false, "");
UPDATE
That's the code I use now following @user17732522's idea of static_cast-ing to void*. It works on g++ 5.5, but not on 10.3:
template <class Base, class Deriv, bool IsBase = std::is_base_of<Base, Deriv>::value>
struct is_first_base_of {
static constexpr const Deriv* p0 = nullptr;
static constexpr const Deriv* p1 = &(p0[123]); // must use non-null!
static constexpr const void* base() { return static_cast<const void*>(static_cast<const Base*>(p1)); }
static constexpr const void* deriv() { return static_cast<const void*>(p1); }
static constexpr bool value = base() == deriv();
};
template <class Base, class Deriv>
struct is_first_base_of<Base, Deriv, false> {
static constexpr bool value = false;
};
| For aggregate classes you can probably use the aggregate initialization mechanism and conversion operator templates to detect the first base's type.
Except for that aggregate case, I don't think it is generally possible to detect the first base class.
If you want to test instead whether the base has the same address, then static_cast<void*>(static_cast<Base*>(x)) == static_cast<void*>(x) should be fine and should also work in constant expression evaluation. It will fail to compile if Base is an ambiguous or inaccessible base of Deriv.
However you need to create a x of type Deriv first, limiting the approach. Something like reinterpret_cast<Deriv*>(/*number*/) to create such an object, as you are attempting in your question, has undefined behavior when passed to the static_cast<Base*>, even if number is 0. std::declval is also not possible since this is an evaluated context. Therefore Deriv must have some constructor that is known to be usable (in the constant expression).
This is not the same as finding the first base though. The standard specifies memory layout requirements only for standard-layout types, which the type in your question is not.
Even with usual ABI specifications providing for class layout in all cases, this is not the same as finding the first base. Because of empty base class optimization a second base may satisfy this test as well.
Although the latter test verifies that the address of the base class subobject is equal to that of the derived object, the standard does not allow simply reinterpret_casting between the two types if the class is not standard-layout, at least since C++17 (and maybe earlier as well). The requirement for that is stricter and called pointer-interconvertibility. This stricter property can be tested for with std::is_pointer_interconvertible_base_of.
Casting from derived-to-base, assuming the addresses are equal, is still possible by applying std::launder after reinterpret_cast. However, the reverse is not allowed by the std::launder precondition.
I am not sure why the std::launder reachability precondition is that way in this case, since it is possible to obtain a pointer to the derived class via static_cast anyway, but the condition doesn't take that into account. In the case of members rather than bases, the condition allows a compiler to assume that other class subobjects cannot be reached through a pointer to such a member, assuming pointer-interconvertibility does not apply.
|
72,870,785 | 72,871,588 | If-else statement either all or none | PS: Not a homework question
I have three strings: string1, string2, string3
Either all of them have to be empty or none of them. In the invalid scenario where some of them (not all) are empty, I have to inform which one(s) is/are empty.
Following is my if-else block which is verbose. Is there a concise and better way to write the if-else block?
if(!string1.empty() || !string2.empty() || !string3.empty()) // Check if any one of them is non-empty
{
// If any one of them is non-empty, all of them should be non-empty and I should inform which one(s) is/are empty
bool some_string_is_empty = false;
if(string1.empty())
{
some_string_is_empty = true;
cout << "string1 is empty" << endl;
}
if(string2.empty())
{
some_string_is_empty = true;
cout << "string2 is empty" << endl;
}
if(string3.empty())
{
some_string_is_empty = true;
cout << "string3 is empty" << endl;
}
if(some_string_is_empty)
{
// This is an invalid state, return
return 0;
}
}
// We are now in a valid state
{
//do something
}
| We can generically check for n booleans to be in agreement by simply adding them:
if ((Check1() + Check2() + ... + Checkn()) % n)
{
// They're not all equal
}
Which we could make into a function like so:
template <class ... bools>
bool AllOrNothing (bools ... bs)
{
return (0 + ... + bs) % sizeof...(bs);
}
In our case we can solve directly like this:
bool not_all_same = (string1.empty() + string2.empty() + string3.empty()) % 3;
https://godbolt.org/z/TK11rcxn3
|
72,870,905 | 72,871,050 | How compiler enforces C++ volatile in ARM assembly | According to cppreference, a store to one volatile-qualified variable cannot be reordered with respect to a store to another volatile-qualified variable. In other words, in the below example, when y becomes 20, it is guaranteed that x will be 10.
volatile int x, y;
...
x = 10;
y = 20;
According to Wikipedia, on ARM processors a store can be reordered after another store. So, in the below example, the second store can be executed before the first store since the destinations are disjoint, and hence they can be freely reordered.
str r1, [r3]
str r2, [r3, #4]
With this understanding, I wrote a toy program:
volatile int x, y;
int main() {
x = 10;
y = 20;
}
I expected some fencing to be present in the generated assembly to guarantee the store order of x and y. But the generated assembly for ARM was:
main:
movw r3, #:lower16:.LANCHOR0
movt r3, #:upper16:.LANCHOR0
movs r1, #10
movs r2, #20
movs r0, #0
str r1, [r3]
str r2, [r3, #4]
bx lr
x:
y:
So, how is the store order enforced here?
|
so, in the below example, second store can be executed before first store since both destinations are disjoint, and hence they can be freely reordered.
The volatile keyword limits the reordering (and elision) of instructions by the compiler, but its semantics don't say anything about visibility from other threads or processors.
When you see
str r1, [r3]
str r2, [r3, #4]
then volatile has done everything required. If the addresses of x and y are I/O mapped to a hardware device, it will have received the x store first. If an interrupt pauses operation of this thread between the two instructions, the interrupt handler will see the x store and not the y. That's all that is guaranteed.
The memory ordering model only describes the order in which effects are observable from other processors. It doesn't alter the sequence in which instructions are issued (which is the order they appear in the assembly code), but the order in which they are committed (ie, a store becomes externally visible).
It is certainly possible that a different processor could see the result of the y store before the x - but volatile is not and never has been relevant to that problem. The cross-platform solution to this is std::atomic.
There is unfortunately a load of obsolete C code available on the internet that does use volatile for synchronization - but this is always a platform-specific extension, and was never a great idea anyway. Even less fortunately the keyword was given exactly those semantics in Java (which isn't really used for writing interrupt handlers), increasing the confusion.
If you do see something using volatile like this, it's either obsolete or was incompetently translated from Java. Use std::atomic, and for anything more complex than simple atomic load/store, it's probably better (and is certainly easier) to use std::mutex.
|
72,871,031 | 72,871,592 | Does icc -xCORE-AVX2 force the non-utilisation of AVX512 instructions on Xeon Gold if -O3 is on? | As per the title,
Will programs compiled with the intel compiler under
icc -O3 -xCORE-AVX2 program.cpp
Generate AVX512 instructions on a Xeon Gold 61XX?
Our assembler analysis doesn't seem to find one, but that is no guarantee.
Thanks!
| In ICC classic, no, you can use intrinsics for any instruction without telling the compiler to enable it. (Unlike GCC or clang where you have to enable instruction sets to use their intrinsics, like the LLVM-based Intel OneAPI compiler.)
But the compiler won't emit AVX-512 instructions other than from intrinsics (or inline asm), without enabling a -march=skylake-avx512 or -march=native (aka -xHOST) or similar option that implies -mavx512f. Or a pragma or __attribute__((target("string"))) to enable AVX-512 for a single function.
This is true for all the major x86 compilers, AVX-512 is not on by default.
Use -O3 -march=native if you want to make code optimized for the machine you're running on, just like with GCC or clang.
In ICC classic, you can also let the compiler use certain instruction-sets on a per-function basis, with _allow_cpu_features(_FEATURE_AVX512F|_FEATURE_BMI); which works more like a pragma, affecting compile-time code-gen. See the docs.
Also related: The Effect of Architecture When Using SSE / AVX Intrinisics re: gcc/clang vs. MSVC vs. ICC.
|
72,871,282 | 72,871,485 | Is the std::vector copied or moved in this case? | In the following code which implements the Viterbi algorithm: (Wikipedia link)
std::pair<std::vector<index_t>, float> viterbi_get_optimal_path(const SoundGraph &g,
SequenceIter s_first,
SequenceIter s_last,
index_t curr_index) {
if (s_first == s_last) {
return {std::vector<index_t>({curr_index}), 1.0f};
}
std::vector<index_t> seq;
float prob = 0.0f;
for (const auto &[next_index, curr_sound] : g.edges(curr_index)) {
if (curr_sound.sound_ == *s_first) {
auto [res_seq, res_prob] =
viterbi_get_optimal_path(g, std::next(s_first), s_last, next_index);
if (res_prob > 0.0f && curr_sound.prob_ * res_prob > prob) {
prob = curr_sound.prob_ * res_prob;
std::swap(seq, res_seq);
seq.push_back(curr_index);
}
}
}
return {seq, prob};
}
Is the std::vector<index_t> copied or moved in this line?
auto [res_seq, res_prob] =
viterbi_get_optimal_path(g, std::next(s_first), s_last, next_index);
I'd like to believe it was moved but I'm not sure.
index_t is just std::ptrdiff_t
| Assuming C++17 or later, it will be copied once if the function returns via return {seq, prob}; and moved once if it returns via return {std::vector<index_t>({curr_index}), 1.0f};.
You can avoid the copy by explicitly moving in the return statement:
return {std::move(seq), prob};
In the most common case such a move of a local variable is implicitly done, but only if the return statement directly (and only) names the variable. That is not the case here, so you need to move manually.
You can potentially avoid the move operations as well by defining seq as a std::pair<std::vector<index_t>, float> instead and then returning return seq;. This so-called named return value optimization is not guaranteed, but the compiler is allowed to apply it. To help the compiler apply it, you might want to make sure that both branches use return seq;. Before applying this blindly, benchmark though: this is a significant enough change that other effects might dominate and end up with a worse overall result.
|
72,871,304 | 72,871,834 | Calling a common method of tuple elements | Say I have a tuple of types T1,...,TN that implement some method, apply().
How do I define a function that takes this tuple and some initial element, and returns the chained call of apply() on this element?
For example:
template <typename... Args, typename Input>
auto apply(std::tuple<Args...> const &tpl, Input x) {
// return ???
}
// simple example
struct Sqr {
static int apply(int x) { return x * x; }
};
enum class Choice {
One,
Two,
};
struct Choose {
static int apply(Choice choice) {
switch (choice) {
case Choice::One:
return 1;
case Choice::Two:
return 2;
}
}
};
void test() {
auto tpl = std::tuple(Sqr{}, Choose{});
assert(apply(tpl, Choice::One) == 1);
assert(apply(tpl, Choice::Two) == 4);
}
I tried to use fold expressions, and variations of answers from: Template tuple - calling a function on each element but couldn't get anything to compile.
The main difference is that I need each invocation's result as the input for the next one.
Concretely, I tried the following, which failed because it calls each argument with the initial value:
template <typename... Args, typename Input>
auto apply(std::tuple<Args...> const &tpl, Input x) {
return std::apply([&x](auto &&... args) {
return (..., args.apply(x));
}, tpl);
}
Clarifications and assumptions:
I want the methods to be called in a specific order - last to first - similarly to mathematical function composition.
(f * g)(x) := f(g(x))
The input and output types of each tuple argument are not constrained. The only assumption is that consecutive arguments agree on the corresponding types.
| There may be snazzier C++17 ways of doing it, but there is always good old-fashioned partially-specialized recursion. We'll make a struct that represents your recursive algorithm, and then we'll build a function wrapper around that struct to aid in type inference. First, we'll need some imports.
#include <tuple>
#include <utility>
#include <iostream> // Just for debugging later :)
Here's our structure definition.
template <typename Input, typename... Ts>
struct ApplyOp;
Not very interesting. It's an incomplete type, but we're going to provide specializations. As with any recursion, we need a base case and a recursive step. We're inducting on the tuple elements (you're right to think of this as a fold-like operation), so our base case is when the tuple is empty.
template <typename Input>
struct ApplyOp<Input> {
Input apply(Input x) {
return x;
}
};
In this case, we just return x. Computation complete.
Now our recursive step takes a variable number of arguments (at least one) and invokes .apply.
template <typename Input, typename T, typename... Ts>
struct ApplyOp<Input, T, Ts...> {
auto apply(Input x, const T& first, const Ts&... rest) {
auto tail_op = ApplyOp<Input, Ts...>();
return first.apply(tail_op.apply(x, rest...));
}
};
The tail_op is our recursive call. It instantiates the next version of ApplyOp. There are two apply calls in this code. first.apply is the apply method in the type T; this is the method you control which determines what happens at each step. The tail_op.apply is our recursive call to either another version of this apply function or to the base case, depending on what Ts... is.
Note that we haven't said anything about tuples yet. We've just taken a variadic parameter pack. We're going to convert the tuple into a parameter pack using an std::integer_sequence (More specifically, an std::index_sequence). Basically, we want to take a tuple containing N elements and convert it to a sequence of parameters of the form
std::get<0>(tup), std::get<1>(tup), ..., std::get<N-1>(tup)
So we need to get an index sequence from 0 up to N-1 inclusive (where N-1 is our std::tuple_size).
template <typename Input, typename... Ts>
auto apply(const std::tuple<Ts...>& tpl, Input x) {
using seq = std::make_index_sequence<std::tuple_size<std::tuple<Ts...>>::value>;
// ???
}
That complicated-looking type alias is building our index sequence. We take the tuple's size (std::tuple_size<std::tuple<Ts...>>::value) and pass it to std::make_index_sequence, which gives us an std::index_sequence<0, 1, 2, ..., N-1>. Now we need to get that index sequence as a parameter pack. We can do that with one extra layer of indirection to get type inference.
template <typename Input, typename... Ts, std::size_t... Is>
auto apply(const std::tuple<Ts...>& tpl, Input x, std::index_sequence<Is...>) {
auto op = ApplyOp<Input, Ts...>();
return op.apply(x, std::get<Is>(tpl)...);
}
template <typename Input, typename... Ts>
auto apply(const std::tuple<Ts...>& tpl, Input x) {
using seq = std::make_index_sequence<std::tuple_size<std::tuple<Ts...>>::value>;
return apply(tpl, x, seq());
}
The second apply is the one outside users call. They pass a tuple and an input value. Then we construct an std::index_sequence of the appropriate type and pass that to the first apply, which uses that index sequence to access each element of the tuple in turn.
Complete, runnable example
|
72,871,346 | 72,873,055 | How to simply build an external project with cmake externalproject_add | I've got a library I want to integrate into an existing cmake build. All cmake has to do is go into that directory, run "make", perform install steps as I lay out (probably just a copy to an included binary directory), and then keep doing its thing. Cmake continues to step on my toes trying to create directories and guess at pathnames.
The command in the base CMakeLists.txt is:
ExternalProject_Add(mylib BINARY_DIR ${CMAKE_SOURCE_DIR}/mylib/sdk BUILD_COMMAND make)
However, when I try to build, cmake complains about:
CMake Error at /usr/share/cmake-3.16/Modules/ExternalProject.cmake:2630 (message):
No download info given for 'mylib' and its source directory:
/home/brydon/build/myTarget/existingLib/mylib-prefix/src/mylib
is not an existing non-empty directory. Please specify one of:
* SOURCE_DIR with an existing non-empty directory
* DOWNLOAD_COMMAND
* URL
* GIT_REPOSITORY
* SVN_REPOSITORY
* HG_REPOSITORY
* CVS_REPOSITORY and CVS_MODULE
Call Stack (most recent call first):
/usr/share/cmake-3.16/Modules/ExternalProject.cmake:3236 (_ep_add_download_command)
CMakeLists.txt:83 (ExternalProject_Add)
Why is it jumping at all of these directories? I don't understand what CMake is trying to do here - all it needs to do is run make in the directory that I very clearly specified as the build dir.
I have tried using SOURCE_DIR but then I get an error that there is no CMakeLists.txt in that directory, which again is not what I want.
How can I get cmake to very simply use an existing makefile, and nothing more?
| If you aren't downloading code then SOURCE_DIR needs to be set to an existing directory containing your library.
If you aren't using cmake then you need to set CONFIGURE_COMMAND to an empty string as stated in the ExternalProject_Add documentation.
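Putting both points together, a sketch of what the call might look like (assuming, as in the question, that the makefile lives in ${CMAKE_SOURCE_DIR}/mylib/sdk; the install command is a placeholder for your copy step):

```cmake
ExternalProject_Add(mylib
  SOURCE_DIR        ${CMAKE_SOURCE_DIR}/mylib/sdk
  BUILD_IN_SOURCE   TRUE          # run the build inside SOURCE_DIR
  CONFIGURE_COMMAND ""            # no configure step, there is no CMakeLists.txt
  BUILD_COMMAND     make
  INSTALL_COMMAND   ""            # replace with your copy-to-binary-dir step
)
```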
|
72,871,723 | 72,880,612 | pybind11 very simple example: importError when importing in python | I'm trying to compile a very simple example using pybind11, but unlike all tutorials I can find, I don't want to copy the pybind11 repo into my project. I currently have
CMakeLists.txt
cmake_minimum_required(VERSION 3.22)
project(relativity)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED YES)
find_package(pybind11)
file(GLOB SOURCES "*.cpp")
pybind11_add_module(${PROJECT_NAME} ${SOURCES})
main.cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;
int add(int i, int j) {
return i + j;
}
PYBIND11_MODULE(example, m) {
m.doc() = "pybind11 example plugin"; // optional module docstring
m.def("add", &add, "A function that adds two numbers");
}
When I run cmake .. and make I get no errors and the relativity.so file is built. However if I attempt to import it in python using import relativity I get:
ImportError: dynamic module does not define module export function (PyInit_relativity)
What am I doing wrong exactly? I can't really find any detailed examples or tutorials that do it this way.
EDIT:
I tried cloning the pybind11 repo into my project and using the following CMakeLists.txt
cmake_minimum_required(VERSION 3.22)
project(relativity)
add_subdirectory(pybind11)
pybind11_add_module(${PROJECT_NAME} main.cpp)
but this gives the same error when importing in python3.
| The first argument passed to the PYBIND11_MODULE macro should be the name of the module (and therefore should match the content of the "PROJECT_NAME" variable as defined in the cmake file):
PYBIND11_MODULE(relativity, m) { // <---- "relativity" instead of "example"
m.doc() = "pybind11 example plugin"; // optional module docstring
m.def("add", &add, "A function that adds two numbers");
}
|
72,871,781 | 72,872,092 | Creating custom sizeof() that returns narrower types | The Issue
sizeof returns a size_t, so when it is passed as an argument to functions that take narrower types (e.g. unsigned char), an implicit conversion occurs. In many cases these are 3rd party library functions, so their prototypes are beyond my control. Compilers are now typically smart enough to detect whether such conversions would really cause truncation and warn you about it, but some static code analyzers will still flag such cases, leading to lots of false positives. Explicitly casting the result of sizeof typically resolves the analysis warnings, but it would hide the compiler warnings, not to mention that it makes things clunky.
My Solution
template<class T1, class T2>
struct sizeofxx {
static constexpr T2 value{ sizeof(T1) };
};
template <class T>
constexpr unsigned int sizeof32 = sizeofxx<T, unsigned int>::value;
template <class T>
constexpr unsigned short sizeof16 = sizeofxx<T, unsigned short>::value;
template <class T>
constexpr unsigned char sizeof8 = sizeofxx<T, unsigned char>::value;
Usage:
unsigned int foo = sizeof32<float>;
char bar[255];
unsigned char foo3 = sizeof8<decltype(bar)>;
It relies on aggregate initialization to guard against narrowing conversion at compile time. So if I had used bar[256], the build fails.
Limitation
But as you can see, using it on variables is rather clunky (due to the need for decltype). Is there a simpler way to do this? I know one way is to wrap it in a macro, but this would prevent IDEs like Visual Studio from helping you resolve the value when you mouse over it. Another way is to create a constexpr function:
template <class T1>
constexpr unsigned char sizeof8f(T1&) {
return sizeof(T1);
}
But this also does not allow for IDE code-time resolution, and would expand the number of symbols involved since they need to be of different names from the earlier implementation that operates on types.
Any other suggestions on resolving the root issue (static code analysis warnings) are welcomed. And no, suppressing them is not feasible.
| For your specific problem, there need not be any runtime checks even on debug builds as some have suggested, since the value is itself a constexpr. You can write a simple utility to cast a value to the smallest type that is able to hold it.
template<size_t N>
inline constexpr auto minuint = []{
if constexpr(N >= 1ull << 32)
return N;
else if constexpr(N >= 1ull << 16)
return uint32_t(N);
else if constexpr(N >= 1ull << 8)
return uint16_t(N);
else
return uint8_t(N);
}();
On the other hand, no function or template can ever accept both expressions and types.
The only possible way to imitate sizeof behaviour is to use a macro.
#define Sizeof(x) minuint<sizeof(x)>
With this, you never get false warnings on narrowing conversions: if there is a warning, you are doing something wrong.
|
72,872,200 | 72,872,232 | How to convert string to an int array using stoi() && substr() | I'm trying to convert a string to an integer and save those numbers into an array. I tried this:
#include <iostream>
#include <cstdlib>
#include <string>
using namespace std;
int main() {
int number[5];
string input;
//numbers
cout << "type sonme numbers"<<endl;
cin >> input;
for(int i = 0 ; i<= 4; i++){
number[i] = stoi(input.substr(i,i),0,10);
cout << number[i];
}
return 0;
}
When I run it, this error comes out:
terminate called after throwing an instance of 'std::invalid_argument'
what(): stoi
| Your first loop is asking for a substring beginning at index 0, with length 0, so you're passing an empty string to stoi. Even if you in fact provided valid inputs (a string of at least eight digits, so you could call .substr(4, 4) on it and get useful results), the first loop always tries to parse the empty string and dies. Don't do that.
It's unclear what the goal here is. If you meant to parse each digit independently, then what you wanted was:
number[i] = stoi(input.substr(i, 1), 0, 10);
which would parse out five sequential length one substrings.
|
72,872,373 | 73,068,894 | How to color the output stream of std::cout, but not of std::cerr and std::clog? | I am dealing with the following problem: I am on Ubuntu, and if I color the whole stream red, for example with the following command:
#include <iostream>
std::cout << "\033[31m" << "From now on the stream is red!";
what happens is that not only the std::cout object, but also std::cerr and std::clog objects will display red strings from now on.
I was wondering if is there a way to color only std::cout output and let std::cerr and std::clog outputs unchanged, in a way to be able to do:
#include <iostream>
std::cout << "\033[31m" << "From now on the std::cout stream is red!"; // red stream
std::cerr << "This stream is NOT red"; // normal stream (not colored)
std::clog << "This stream is NOT red"; // normal stream (not colored)
What I need is a "setting" (function, class etc...) able to fix this requirement at the beginning of the program and let it unchanged until the end.
How can I do this?
| After a few days of trying I found a pretty suitable solution for this problem: I simply created a functor able to apply changes directly to the std::ostream object, to be used in this way:
functor( std::cout ) << "Modified output stream";
Such an implementation is a bit long and can be found here.
|
72,873,044 | 72,938,120 | Retrive Informations about currently running sessions using Windows.Media.Control with C++/WinRT | I would like to know how to retrieve information (e.g. the application name) about all the sessions that are currently running.
GlobalSystemMediaTransportControlsSessionManager SessionManager();
IVectorView<GlobalSystemMediaTransportControlsSession> Sessions;
Sessions = SessionManager.GetSessions();
// for sessions - session.SourceAppUserModelId
I want to learn WinRT with C++, so I've been trying to do something with Windows Media Control, but looking at the documentation:
https://learn.microsoft.com/en-us/uwp/api/windows.media.control?view=winrt-22621,
https://learn.microsoft.com/en-us/uwp/api/windows.media.control.globalsystemmediatransportcontrolssessionmanager?view=winrt-22621
I have no idea what I should do. I would appreciate any links to tutorials or explanations on how to do it or learn it.
| Using the trial and error method, I have finally achieved what I was looking for, in a way.
Hopefully, it will be useful to someone other than me.
I didn't know you have to pass NULL as a parameter to the GlobalSystemMediaTransportControlsSessionManager constructor, for some reason.
Also, I had some problems converting hstring to string, but I found the function winrt::to_string().
GlobalSystemMediaTransportControlsSessionManager SessionManager(NULL);
// get SessionManager to provide access to info of playback
if (SessionManager == NULL) {
// gets SessionManager instance
IAsyncOperation<GlobalSystemMediaTransportControlsSessionManager> session_async = SessionManager.RequestAsync();
// waits 5 seconds before failure
if (session_async.wait_for(std::chrono::seconds{ 5 }) == AsyncStatus::Completed) {
SessionManager = session_async.GetResults();
}
else {
std::cout << "Couldnt request instance of Session Manager" << std::endl;
}
std::cout << "Done" << std::endl;
}
IVectorView<GlobalSystemMediaTransportControlsSession> sessions = SessionManager.GetSessions();
int i = 0;
for (auto const &session : sessions) {
std::cout << i << ' ';
// get Application name
winrt::hstring session_name = session.SourceAppUserModelId();
std::cout << " name- " << winrt::to_string(session_name);
// pause session
IAsyncOperation<bool> paused = session.TryPauseAsync();
i++;
}
|
72,873,908 | 72,889,879 | How Call C++ Variables Using CGo For Standard Libraries | I am trying to get a variable value from C++ code using cgo. For libraries ending in .h everything works fine, but for libraries like <iostream>, <map>, <string> etc., I get the following error:
fatal error: iostream: No such file or directory
4 | #include <iostream>
| ^~~~~~~~~~
Below my code:
package main
/*
#cgo LDFLAGS: -lc++
#include <iostream>
std::string plus() {
return "Hello World!\n";
}
*/
import "C"
import "fmt"
func main() {
a := Plus_go()
fmt.Println(a)
}
func Plus_go() string {
return C.plus()
}
I added the #cgo LDFLAGS: -lc++ flag because I saw this recommendation on an answer here on stackoverflow at https://stackoverflow.com/a/41615301/15024997.
I am using VS Code (not VS Studio), windows 10, Go 1.18 (lastest version).
I ran the following commands go tool cgo -debug-gcc mycode.go to trace compiler execution and output:
$ gcc -E -dM -xc -m64 - <<EOF
#line 1 "cgo-builtin-prolog"
#include <stddef.h> /* for ptrdiff_t and size_t below */
/* Define intgo when compiling with GCC. */
typedef ptrdiff_t intgo;
#define GO_CGO_GOSTRING_TYPEDEF
typedef struct { const char *p; intgo n; } _GoString_;
typedef struct { char *p; intgo n; intgo c; } _GoBytes_;
_GoString_ GoString(char *p);
_GoString_ GoStringN(char *p, int l);
_GoBytes_ GoBytes(void *p, int n);
char *CString(_GoString_);
void *CBytes(_GoBytes_);
void *_CMalloc(size_t);
__attribute__ ((unused))
static size_t _GoStringLen(_GoString_ s) { return (size_t)s.n; }
__attribute__ ((unused))
static const char *_GoStringPtr(_GoString_ s) { return s.p; }
#line 3 "C:\\Users\\Home\\OneDrive\\Desktop\\DevicesC++\\devices.go"
#include <iostream>
std::string plus() {
return "Hello World!\n";
}
#line 1 "cgo-generated-wrapper"
EOF
C:\Users\Home\OneDrive\Desktop\DevicesC++\devices.go:5:10: fatal error: iostream: No such file or directory
5 | #include <iostream>
| ^~~~~~~~~~
compilation terminated.
C:\Users\Home\OneDrive\Desktop\DevicesC++\devices.go:5:10: fatal error: iostream: No such file or directory
5 | #include <iostream>
| ^~~~~~~~~~
compilation terminated.
| CGo allows you to link your Go code against code that implements the C-style foreign function interfaces. This does not mean that you can just stick arbitrary-language code into place.
Let's start with the first problem, which is that the import "C" line in one of your Go files must contain only C code above it. That is:
/*
#include <stdlib.h>
extern char *cstyle_plus();
*/
is OK, but:
/*
#include <stdlib.h>
extern std::string *plus();
*/
is not, nor may you #include any C++ header here. To oversimplify things a bit, the comment here is in effect snipped out and fed to a C compiler. If it's not valid C, it won't compile.
If you want to include C++ code, you can, but you must put it in a separate file or files (technically speaking, a "translation unit" in C or C++ terminology). CGo will then compile that file to object code.
The next problem, however, is that the object code must conform to the C Foreign Function Interface implemented by CGo. This means your C++ code must return C types (and/or receive such types as arguments). As std::string is not a C string, you literally can't return it directly.
It's not very efficient (and there exist some attempts to work around this), but the usual method for dealing with this is to have C functions return C-style "char *" or "const char *" strings. If the string itself has non-static duration—as yours does—you must use malloc here, specifically the C malloc (std::malloc may be a non-interoperable one).
The function itself must also be callable from C code. This means we'll need to use extern "C" around it.
Hence, our plus.cpp file (or whatever you would like to call it) might read this way:
#include <stdlib.h>
#include <string.h> // for strcpy
#include <iostream>
std::string plus() {
return "Hello World!\n";
}
extern "C" {
char *cstyle_plus() {
// Ideally we'd use strdup here, but Windows calls it _strdup
char *ret = static_cast<char *>(malloc(plus().length() + 1));
if (ret != NULL) {
strcpy(ret, plus().c_str());
}
return static_cast<char *>(ret);
}
}
We can then invoke this from Go using this main.go:
package main
/*
#include <stdlib.h>
extern char *cstyle_plus();
*/
import "C"
import (
"fmt"
"unsafe"
)
func Plus_go() string {
s := C.cstyle_plus()
defer C.free(unsafe.Pointer(s))
return C.GoString(s)
}
func main() {
a := Plus_go()
fmt.Println(a)
}
Adding a trivial go.mod and building, the resulting code runs; the double newline is because the C string has a newline in it, and fmt.Println adds a newline:
$ go build
$ ./cgo_cpp
Hello World!
This code is a bit sloppy: should malloc fail, it returns NULL, and C.GoString turns that into an empty string. However, real code should try, as much as possible, to avoid this kind of silly allocation-and-free sequence: we might know the string length, or have a static string that does not require this kind of silly malloc, for instance.
|
72,874,026 | 72,876,082 | Bad Request: message text is empty when sending get request via winapi to telegram bot | I'm trying to send a message to a Telegram chat from a bot using WinAPI and C++.
Here is my code
char szData[1024];
// initialize WinInet
HINTERNET hInternet = ::InternetOpen(TEXT("WinInet Test"), INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
if (hInternet != NULL)
{
// open HTTP session
HINTERNET hConnect = ::InternetConnect(hInternet, L"api.telegram.org", INTERNET_DEFAULT_HTTPS_PORT, NULL, NULL, INTERNET_SERVICE_HTTP, NULL, 1);
if (hConnect != NULL)
{
wstring request = L"/bot<bot_id>/sendMessage";
// open request
HINTERNET hRequest = ::HttpOpenRequest(hConnect, L"GET", (LPCWSTR)request.c_str(), NULL, NULL, 0, INTERNET_FLAG_KEEP_CONNECTION | INTERNET_FLAG_SECURE, 1);
if (hRequest != NULL)
{
// send request
const wchar_t* params = L"?chat_id=<chat_id>&text=test";
BOOL isSend = ::HttpSendRequest(hRequest, NULL, 0, (LPVOID)params, wcslen(params));
if (isSend)
{
for (;;)
{
// reading data
DWORD dwByteRead;
BOOL isRead = ::InternetReadFile(hRequest, szData, sizeof(szData) - 1, &dwByteRead);
// break cycle if error or end
if (isRead == FALSE || dwByteRead == 0)
break;
// saving result
szData[dwByteRead] = 0;
}
}
// close request
::InternetCloseHandle(hRequest);
}
// close session
::InternetCloseHandle(hConnect);
}
// close WinInet
::InternetCloseHandle(hInternet);
}
wstring answer = CharPToWstring(szData);
return answer;
But I've got {"ok":false,"error_code":400,"description":"Bad Request: message text is empty"} response. <chat_id> is an id consisting of digits (12345678).
If I run this request in Postman or in a browser, then everything is OK.
I also tried to run this request using WinHttp* methods and the result is the same.
What should I change in my request parameters to make it work?
| There are a number of issues with this code:
You don't need to typecast the return value of wstring::c_str() to LPCWSTR (aka const wchar_t*), as it is already that type.
You can't send body data in a GET request. The Telegram Bot API expects body data to be sent in a POST request instead.
You are telling HttpSendRequest() to send body data from a wchar_t* UTF-16 string, but that is not the correct encoding that the server is expecting. You need to use a char* UTF-8 string instead.
You are not sending a Content-Type request header to tell the server what the format of the body data is. The API supports several different formats. In this case, since you are sending the data in application/x-www-form-urlencoded format, you need to add a Content-Type: application/x-www-form-urlencoded header to the request.
With all of that said, try this instead:
// initialize WinInet
HINTERNET hInternet = ::InternetOpenW(L"WinInet Test", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
if (hInternet == NULL) ... // error handling
// open HTTP session
HINTERNET hConnect = ::InternetConnectW(hInternet, L"api.telegram.org", INTERNET_DEFAULT_HTTPS_PORT, NULL, NULL, INTERNET_SERVICE_HTTP, NULL, 1);
if (hConnect == NULL) ... // error handling
// open request
wstring wsResource = L"/bot<bot_id>/sendMessage";
HINTERNET hRequest = ::HttpOpenRequestW(hConnect, L"POST", wsResource.c_str(), NULL, NULL, 0, INTERNET_FLAG_KEEP_CONNECTION | INTERNET_FLAG_SECURE, 1);
if (hRequest == NULL) ... // error handling
// send request
string sBody = u8"chat_id=<chat_id>&text=test";
BOOL isSend = ::HttpSendRequestW(hRequest, L"Content-Type: application/x-www-form-urlencoded", -1L, (LPVOID)sBody.c_str(), sBody.size());
if (!isSend) ... // error handling
string sReply;
char szData[1024];
DWORD dwByteRead;
while (::InternetReadFile(hRequest, szData, sizeof(szData), &dwByteRead) && dwByteRead != 0)
{
// saving result
sReply.append(szData, dwByteRead);
}
...
// use sReply as needed ...
|
72,874,439 | 72,874,485 | What's going on, when trying to print uninitialized string | I just decided to test malloc and new. Here is the code:
#include <iostream>
#include <string>
struct C
{
int a = 7;
std::string str = "super str";
};
int main()
{
C* c = (C*)malloc(sizeof(C));
std::cout << c->a << "\n";
std::cout << c->str << "\n";
free(c);
std::cout << "\nNew:\n\n";
c = new C();
std::cout << c->a << "\n";
std::cout << c->str << "\n";
}
Why does the output of this program stop at std::cout << c->a << "\n";:
-842150451
C:\Code\Temp\ConsoleApplication12\x64\Debug\ConsoleApplication12.exe (process 22636) exited with code 0.
Why does the compiler show no errors? I thought std::string isn't initialized properly in the case of malloc, so it should break something.
If I comment out printing of the string, I'm getting a full output:
-842150451
New:
7
super str
C:\Code\Temp\ConsoleApplication12\x64\Debug\ConsoleApplication12.exe (process 21652) exited with code 0.
I use MSVS2022.
| You've used malloc. One of the reasons to not do this is that it hasn't actually initialized your object. It's just allocated memory for it. As a result, when accessing member fields, you get undefined behavior.
You have also forgotten to delete the C object you created with new. But you may wish to use a std::unique_ptr in this scenario, to avoid having to explicitly delete the object at all. The smart pointer will automatically free the memory when it goes out of scope at the end of main.
auto c = std::make_unique<C>();
std::cout << c->a << std::endl;
std::cout << c->str << std::endl;
|
72,874,949 | 72,875,967 | PlaySound plays default windows error sound | I'm trying to make a simple audio player in C++ using the Win32 API library.
How this program currently works, is you select a file via file explorer, which then the file's location is saved onto a list box. When you press the "play" button, it takes the file location from the list box, and parses it to a function that uses the parameter to play the desired file.
But for some reason, instead of playing the desired audio file. It plays the default windows error sound, even though that the file is found.
I tried this with a test sound file, that is in the projects structure. I know the program can find it, but still gives me the same issue.
If somebody could help me, that would be great. I'm more than happy to update this post with more code if you guys want me to.
Window Procedure (In oshyClient.cpp)
LRESULT CALLBACK WindProc(HWND hWnd, UINT msg, WPARAM wp, LPARAM lp) {
switch (msg) {
case WM_COMMAND:
switch (wp) {
case MENU_EXIT:
PostQuitMessage(0);
break;
case MENU_ADDAUDIO:
queue.addNewAudioToQueue(hWnd);
break;
case BUTTON_PLAY:
char text[100];
SendMessage(queue.hAudioQueue, LB_GETTEXT, 0, (LPARAM)text);
audio.playAudio(text); // The audio issue is here.
break;
}
break;
case WM_CREATE:
menu.displayMenu(hWnd);
queue.displayAudioQueue(hWnd);
audio.displayAudioControls(hWnd);
break;
case WM_DESTROY:
PostQuitMessage(0);
break;
default:
return DefWindowProcW(hWnd, msg, wp, lp);
}
}
oshyClientAudioPlayer.cpp
#include "oshyClientAudioPlayer.h"
#include "oshyClient.h"
#include <iostream>
void AudioPlayer::displayAudioControls(HWND hWnd) {
CreateWindow(L"Button", L"Play", WS_VISIBLE | WS_CHILD, 5, 10, 35, 25, hWnd, (HMENU)BUTTON_PLAY, NULL, NULL);
}
void AudioPlayer::playAudio(const char* audioLocation) {
PlaySound((LPCWSTR)audioLocation, 0, SND_FILENAME);
}
oshyClientAudioQueue.cpp
#include "oshyClientAudioQueue.h"
#include "oshyClient.h"
void AudioQueue::displayAudioQueue(HWND hWnd) {
hAudioQueue = CreateWindowEx(WS_EX_CLIENTEDGE, L"listbox", L"", WS_CHILD | WS_VISIBLE | WS_VSCROLL | ES_AUTOVSCROLL | 0, 4, 92, 474, 250, hWnd, (HMENU)ID_LISTBOX, 0, 0);
}
void AudioQueue::addNewAudioToQueue(HWND hWnd) {
OPENFILENAMEA file;
char fileName[100];
ZeroMemory(&file, sizeof(OPENFILENAME));
file.lStructSize = sizeof(OPENFILENAME);
file.hwndOwner = hWnd;
file.lpstrFile = fileName;
file.lpstrFile[0] = '\0';
file.nMaxFile = 100;
file.lpstrFilter = "All Files\0*.*";
file.nFilterIndex = 1;
if (GetOpenFileNameA(&file)) {
SendMessageA(hAudioQueue, LB_ADDSTRING, 0, (LPARAM)file.lpstrFile);
}
}
| When calling PlaySound(), you are type-casting a char* to a wchar_t*. Don't do that. Use PlaySoundA() when passing in a char* string, eg:
void AudioPlayer::playAudio(const char* audioLocation) {
PlaySoundA(audioLocation, 0, SND_FILENAME);
}
However, you are creating your ListBox as a Unicode window, so you should be using wchar_t strings instead of char strings, eg:
case BUTTON_PLAY:
wchar_t text[MAX_PATH];
SendMessageW(queue.hAudioQueue, LB_GETTEXT, 0, (LPARAM)text);
audio.playAudio(text);
break;
void AudioPlayer::playAudio(const wchar_t* audioLocation) {
PlaySoundW(audioLocation, 0, SND_FILENAME);
}
void AudioQueue::addNewAudioToQueue(HWND hWnd) {
wchar_t fileName[MAX_PATH];
fileName[0] = L'\0';
OPENFILENAMEW file;
ZeroMemory(&file, sizeof(file));
file.lStructSize = sizeof(file);
file.hwndOwner = hWnd;
file.lpstrFile = fileName;
file.nMaxFile = MAX_PATH;
file.lpstrFilter = L"All Files\0*.*\0";
file.nFilterIndex = 1;
file.Flags = OFN_PATHMUSTEXIST | OFN_FILEMUSTEXIST;
if (GetOpenFileNameW(&file)) {
SendMessageW(hAudioQueue, LB_ADDSTRING, 0, (LPARAM)fileName);
}
}
|
72,874,966 | 72,875,277 | I cant send a message with a discord webhook using cURL error : "Cannot send an empty message | So I'm trying to send a message to a Discord webhook using this code:
#include <iostream>
#include <curl/curl.h>
int main(void)
{
CURL* curl;
CURLcode res;
const char* WEBHOOK = "webhookLink";
const char* content = "test";
curl_global_init(CURL_GLOBAL_ALL);
curl = curl_easy_init();
if (curl) {
curl_easy_setopt(curl, CURLOPT_URL, WEBHOOK);
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, content);
res = curl_easy_perform(curl);
if (res != CURLE_OK)
fprintf(stderr, "curl_easy_perform() failed: %s\n",
curl_easy_strerror(res));
curl_easy_cleanup(curl);
}
curl_global_cleanup();
}
I got this code from the cURL docs. Every time I run this, it outputs {"message": "Cannot send an empty message", "code": 50006} in the console.
Any ideas?
Edit: it worked with the command line
curl -i -H "Accept: application/json" -H "Content-Type:application/json" -X POST --data "{\"content\": \"Posted Via Command line\"}" $WEBHOOK_URL
| You need to add the Content-Type header to your request.
Example (I have no discord webhook so I can't test it):
#include <curl/curl.h>
#include <iostream>
int main(void) {
CURL* curl;
CURLcode res;
const char* WEBHOOK = "webhookLink";
const char* content = R"aw({"content": "Posted Via libcurl"})aw";
curl_global_init(CURL_GLOBAL_ALL);
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, WEBHOOK);
// create a curl list of header rows:
struct curl_slist* list = NULL;
// add Content-Type to the list:
list = curl_slist_append(list, "Content-Type: application/json");
// set this list as HTTP headers:
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, list);
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, content);
res = curl_easy_perform(curl);
curl_slist_free_all(list); // and finally free the list
if(res != CURLE_OK)
fprintf(stderr, "curl_easy_perform() failed: %s\n",
curl_easy_strerror(res));
curl_easy_cleanup(curl);
}
curl_global_cleanup();
}
(additional error checking should also be added)
|
72,875,177 | 72,875,849 | Error using Eigen: Perform element-wise multiplication between a vector and matrix | I am trying to perform an element-wise multiplication of a row vector with a matrix. In MATLAB this would simply be done with the "dot" operator:
deriv = 1i * k .* fk;
where k is row vector and fk is a matrix.
Now in C++ I have this code:
static const int nx = 10;
static const int ny = 10;
static const int nyk = ny/2 + 1;
static const int nxk = nx/2 + 1;
static const int ncomp = 2;
Matrix <double, 1, nx> eK;
eK.setZero();
for(int i = 0; i < nx; i++){
eK[i] = //some expression
}
fftw_complex *UOut;
UOut= (fftw_complex*) fftw_malloc((((nx)*(ny+1))*nyk)* sizeof(fftw_complex));
for (int i = 0; i < nx; i++){
for (int j = 0; j < ny+1; j++){
for (int k = 0; k < ncomp; k++){
UOut[i*(ny+1)+j][k] = //FFT of some expression
}
}
}
Eigen::Map<Eigen::MatrixXcd, Eigen::Unaligned> U(reinterpret_cast<std::complex<double>*>(UOut),(ny+1),nx);
Now, I am trying to take the product of eK, which is a 1 x 10 row vector, and the matrix U, which is 11 x 10. I tried a few things, none of which really seem to work:
U = 1i * eKX.array() * euhX.array() ; //ERROR
static assertion failed: YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES
(followed by several more lines of caret diagnostics from the EIGEN_STATIC_ASSERT macro expansion)
| Eigen doesn't do broadcasting the same way Matlab or Numpy do unless you explicitly ask for it, for example with matrix.array().rowwise() * vector.array()
The IMHO clearer form is to interpret the vector as a diagonal matrix.
Eigen::VectorXd eK = ...;
Eigen::Map<Eigen::MatrixXcd, Eigen::Unaligned> U = ...;
Eigen::MatrixXcd result = U * (eK * 1i).asDiagonal();
|
72,875,270 | 72,876,093 | Can you "hop" between "linked classes" in C++ metaprogramming? | Suppose you have something like this:
template<class D>
class HasDef {
public:
typedef D Def;
};
class A : public HasDef<class B> {};
class B : public HasDef<class C> {};
class C {};
So it is like a "metaprogramming linked list", with type links, via the included typedef Def. Now I want to make a template "Leaf" that, when applied to A, follows the links to yield C:
void f() {
Leaf<A>::type v; // has type C
}
Is it even possible at all to do this? I've tried some methods with std::compare and similar, but none are valid code: everything seems to run into issues with either that C has no Def typedef, or else that the type Leaf<> itself is incomplete when the inner recursive "call" is made so it (or its internal type type) cannot be referenced.
FWIW, the reason I want this is for making a "hierarchical state machine" where that Def represents the default state for each state in the hierarchy, and something a bit more elaborate than this seems to provide a fairly neat and clean "user interface syntax" for it.
| I don't really like f(...) in modern code, thus my version uses void_t from C++17:
#include <type_traits>
template<class D>
struct HasDef {
typedef D Def;
};
struct A : HasDef<class B> {};
struct B : HasDef<class C> {};
struct C {};
template <typename T, typename=void>
struct DefPresent : std::false_type{};
template <typename T>
struct DefPresent<T, std::void_t<typename T::Def>> : std::true_type{};
template<typename T, bool deeper = DefPresent<T>::value>
struct Leaf
{
using Type = typename Leaf<typename T::Def>::Type;
};
template<typename T>
struct Leaf<T, false >
{
typedef T Type;
};
static_assert(std::is_same<typename Leaf<C>::Type, C>::value, "C");
static_assert(std::is_same<typename Leaf<B>::Type, C>::value, "B");
static_assert(std::is_same<typename Leaf<A>::Type, C>::value, "A");
https://godbolt.org/z/5h5rfe81o
EDIT: just for completeness, 2 C++20 variants utilizing concepts. Tested on GCC 10
#include <type_traits>
#include <concepts>
template<class D>
struct HasDef {
typedef D Def;
};
struct A : HasDef<class B> {};
struct B : HasDef<class C> {};
struct C {};
template <typename T>
concept DefPresent = requires(T a)
{
typename T::Def;
};
template<typename T>
struct Leaf
{
using Type = T;
};
template<typename T>
requires DefPresent<T>
struct Leaf<T>
{
using Type = Leaf<typename T::Def>::Type;
};
static_assert(std::is_same_v<typename Leaf<C>::Type, C>, "C");
static_assert(std::is_same_v<typename Leaf<B>::Type, C>, "B");
static_assert(std::is_same_v<typename Leaf<A>::Type, C>, "A");
template<typename T>
struct Leaf2
{
template <typename M>
static M test(M&&);
template <DefPresent M>
static auto test(M&&) -> typename Leaf2<typename M::Def>::Type;
using Type = decltype(test(std::declval<T>()));
};
static_assert(std::is_same<typename Leaf2<C>::Type, C>::value, "C");
static_assert(std::is_same<typename Leaf2<B>::Type, C>::value, "B");
static_assert(std::is_same<typename Leaf2<A>::Type, C>::value, "A");
https://godbolt.org/z/vcqEaPrja
|
72,875,423 | 72,875,509 | std::for_each and unordered_map value modification with parallel execution policy | Is this usage of parallel for_each OK with unordered_map:
void test()
{
std::vector<double> vec;
constexpr auto N = 1000000;
for(auto i=0;i<N;i++) // this is just for the example purpose
vec.push_back(i*1.0);
auto my_map = std::unordered_map<double,double>();
for(const auto d: vec)
my_map.try_emplace(d,d); // I prefill the map with some elements
// Here i use par_unseq but just modify the value not the key, so just individual elements of the map
std::for_each(std::execution::par_unseq,vec.cbegin(),vec.cend(),[&](double d) { return my_map.at(d)=d+1.0;});
auto total=0.0;
for(const auto [key,value]: my_map)
total+=value;
std::cout << total << std::endl;
}
I first fill the unordered_map with empty elements and then just modify each individual element. All my tests are successful, but I don't know if it's just luck or not.
| According to cppreference:
When using parallel execution policy, it is the programmer's responsibility to avoid data races and deadlocks
So, no (direct) help from the Standard Library there.
However, as you yourself point out, this line:
my_map.at(d)=d+1.0;
is only reading the map. The only thing it's writing to is the elements (by which I mean values) in the map, and since each parallel path of execution will be writing to a different element, this should be OK.
Sidenote: your lambda doesn't need to return anything.
|
72,875,968 | 73,101,805 | How to convert a quaternion to a polar/azimuthal angle rotation | I have an arcball camera with a rotation defined by two angles (phi/theta, polar/azimuthal) that is controlled with mouse movement.
I convert these two angles (as euler angles) to a quaternion like this:
glm::quat rotation = glm::quat(glm::vec3(phi, theta, 0));
At some point I need to convert a quaternion back to two angles, but I think there is an infinite number of solutions. Is there a way to get back the two angles without any roll?
Or is there a better solution to make an arcball/orbit camera without using euler angles and keeping only the quaternion rotation of the camera?
| I found a solution:
Start with a unit vector pointing along the Z axis (this depends on your engine's handedness and up-vector): glm::vec3 v = glm::vec3(0, 0, 1);
Rotate the vector with the quaternion you want to convert: v = q*v; (glm's operator* does this for you); otherwise rotate a vector like this:
quat v_quat = quat(v.x, v.y, v.z, 0); // pure quaternion
v_quat = (q*v_quat)*q.conjugate();
v = vec3(v_quat.x, v_quat.y, v_quat.z);
The rotated vector is a unit vector pointing somewhere on a unit sphere centered at the origin. Convert the vector's position from Cartesian coordinates to spherical coordinates:
float phi = atan2(v.x, v.z);
float theta = acos(v.y/length(v));
|
72,876,099 | 72,876,859 | CUDA no operator += for volatile cuda::std::complex<float> | I have a kernel that uses cuda::std::complex<float>, and in this kernel I want to do warp reduction, following this post.
The warpReduce function:
template <typename T, unsigned int blockSize>
__device__ void warpReduce(volatile T *sdata, unsigned int tid) {
if (blockSize >= 64) sdata[tid] += sdata[tid + 32];
if (blockSize >= 32) sdata[tid] += sdata[tid + 16];
if (blockSize >= 16) sdata[tid] += sdata[tid + 8];
if (blockSize >= 8) sdata[tid] += sdata[tid + 4];
if (blockSize >= 4) sdata[tid] += sdata[tid + 2];
if (blockSize >= 2) sdata[tid] += sdata[tid + 1];
}
I'm getting the error: error : no operator "+=" matches these operands, operand types are: volatile cuda::std::complex<float> += volatile cuda::std::complex<float>.
Simply removing the volatile as mentioned in this post doesn't work.
Is there any way I can still use a complex type (thrust/cuda::std) in warp reduction?
kernel
template <unsigned int blockSize>
__global__ void reduce6(cuda::std::complex<float>*g_idata, cuda::std::complex<float>*g_odata, unsigned int n) {
extern __shared__ int sdata[];
unsigned int tid = threadIdx.x;
unsigned int i = blockIdx.x*(blockSize*2) + tid;
unsigned int gridSize = blockSize*2*gridDim.x;
sdata[tid] = 0;
while (i < n) { sdata[tid] += g_idata[i] + g_idata[i+blockSize]; i += gridSize; }
__syncthreads();
if (blockSize >= 512) { if (tid < 256) { sdata[tid] += sdata[tid + 256]; } __syncthreads(); }
if (blockSize >= 256) { if (tid < 128) { sdata[tid] += sdata[tid + 128]; } __syncthreads(); }
if (blockSize >= 128) { if (tid < 64) { sdata[tid] += sdata[tid + 64]; } __syncthreads(); }
if (tid < 32) warpReduce<cuda::std::complex<float>, blockSize>(sdata, tid);
if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}
I found a workaround by doing a reinterpret_cast to float2/double2 first. But I don't know if this has any other implications; I read about undefined behaviour. Other suggestions?
This works:
template <typename T>
struct myComplex;
template <>
struct myComplex<float>
{
typedef float2 type;
};
template <>
struct myComplex<double>
{
typedef double2 type;
};
template <typename T>
__device__ void warpReduce(volatile T *SharedData, int tid)
{
SharedData[tid].x += SharedData[tid + 32].x;
SharedData[tid].x += SharedData[tid + 16].x;
SharedData[tid].x += SharedData[tid + 8].x;
SharedData[tid].x += SharedData[tid + 4].x;
SharedData[tid].x += SharedData[tid + 2].x;
SharedData[tid].x += SharedData[tid + 1].x;
SharedData[tid].y += SharedData[tid + 32].y;
SharedData[tid].y += SharedData[tid + 16].y;
SharedData[tid].y += SharedData[tid + 8].y;
SharedData[tid].y += SharedData[tid + 4].y;
SharedData[tid].y += SharedData[tid + 2].y;
SharedData[tid].y += SharedData[tid + 1].y;
}
// and then in the kernel:
warpReduce(reinterpret_cast<typename myComplex<T>::type *>(data), tid);
| According to my testing, in CUDA 11.7, the issue revolves around the use of volatile.
According to this blog, this style of programming (implicit warp-synchronous) is deprecated.
Additionally, this part of your posted code could not possibly be correct:
extern __shared__ int sdata[];
Combining these ideas, we can do the following:
$ cat t7.cu
#include <cuda/std/complex>
#include <iostream>
// assumes blocksize is 64 or larger power of 2 up to max of 512 (or 1024 see below)
template <typename T>
__device__ void warpReduce(T *sdata, unsigned int tid) {
T v = sdata[tid];
v += sdata[tid+32];
sdata[tid] = v; __syncwarp();
v += sdata[tid+16]; __syncwarp();
sdata[tid] = v; __syncwarp();
v += sdata[tid+8]; __syncwarp();
sdata[tid] = v; __syncwarp();
v += sdata[tid+4]; __syncwarp();
sdata[tid] = v; __syncwarp();
v += sdata[tid+2]; __syncwarp();
sdata[tid] = v; __syncwarp();
v += sdata[tid+1]; __syncwarp();
sdata[tid] = v;
}
template <unsigned int blockSize, typename T>
__global__ void reduce6(T *g_idata, T *g_odata, size_t n) {
extern __shared__ T sdata[];
unsigned int tid = threadIdx.x;
size_t i = blockIdx.x*(blockSize*2) + tid;
size_t gridSize = blockSize*2*gridDim.x;
sdata[tid] = 0;
while (i < n) { sdata[tid] += g_idata[i] + g_idata[i+blockSize]; i += gridSize; }
__syncthreads();
// if (blockSize == 1024) { if (tid < 512) { sdata[tid] += sdata[tid + 512]; } __syncthreads(); } // uncomment to support blocksize of 1024
if (blockSize >= 512) { if (tid < 256) { sdata[tid] += sdata[tid + 256]; } __syncthreads(); }
if (blockSize >= 256) { if (tid < 128) { sdata[tid] += sdata[tid + 128]; } __syncthreads(); }
if (blockSize >= 128) { if (tid < 64) { sdata[tid] += sdata[tid + 64]; } __syncthreads(); }
if (tid < 32) warpReduce(sdata, tid);
if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}
using my_t = cuda::std::complex<float>;
int main(){
size_t n = 2048;
const unsigned blk = 128;
unsigned grid = n/blk/2;
my_t *i, *h_i;
my_t *o, *h_o;
h_i = new my_t[n];
h_o = new my_t[grid];
for (size_t i = 0; i < n; i++) h_i[i] = {1,1};
cudaMalloc(&i, n*sizeof(my_t));
cudaMalloc(&o, grid*sizeof(my_t));
cudaMemcpy(i, h_i, n*sizeof(my_t), cudaMemcpyHostToDevice);
reduce6<blk><<<grid,blk, blk*sizeof(my_t)>>>(i, o, n);
cudaMemcpy(h_o, o, grid*sizeof(my_t), cudaMemcpyDeviceToHost);
for (int i = 0; i < grid; i++)
std::cout << cuda::std::real(h_o[i]) << "," << cuda::std::imag(h_o[i]) << std::endl;
cudaDeviceSynchronize();
}
$ nvcc -o t7 t7.cu
$ compute-sanitizer ./t7
========= COMPUTE-SANITIZER
256,256
256,256
256,256
256,256
256,256
256,256
256,256
256,256
========= ERROR SUMMARY: 0 errors
$
|
72,876,295 | 72,876,381 | Pass List to Function Requiring std::initializer_list<std::initializer_list< type > >? | I'm using OpenNN to write a proof of concept right now, and I'm having an issue with declaring inputs for a Tensor.
From the OpenNN website we see that the neural net accepts a Tensor input
Tensor<type, 2> inputs(1,9);
inputs.setValues({{type(4),type(3),type(3),type(2),type(3),type(4),type(3),type(2),type(1)}});
neural_network.calculate_outputs(inputs);
I did figure out a workaround to convert a vector to a tensor, but it's long and a little tedious.
I then attempted to pass a vector of a vector, a brace enclosed vector, a brace enclosed array, a dynamically allocated array of the list of values.
The error:
cannot convert '<brace-enclosed initializer list>' to 'const Eigen::internal::Initializer<Eigen::Tensor<long long unsigned int, 2>, 2>::InitList&' {aka 'const std::initializer_list<std::initializer_list<long long unsigned int> >&'}
The error continues to just be a variation of (Type does not match type)
The code to reproduce the error (assuming you've gotten the OpenNN library set up):
Tensor<uint64_t, 2> createFilledTensor(int index)
{
uint64_t * inList = new uint64_t[index]();
for(int i = 0; i < index; i++)
{
inList[i] = 356534563546356;
}
Tensor<uint64_t, 2> inputs(1, index);
inputs.setValues({inList});
return inputs;
}
Also, I feel it's worth noting that right now the data doesn't matter, as I am trying to figure out HOW to get it into the tensor.
| EDIT:
Found a relevant post here
This solution is more for anyone else who comes along, in case my question can't be answered directly;
I solved this problem as follows:
namespace Eigen {
template < typename T >
decltype(auto) TensorLayoutSwap(T&& t)
{
return Eigen::TensorLayoutSwapOp<typename std::remove_reference<T>::type>(t);
}
}
Eigen::Tensor<uint64_t, 2> createDataSetFromPair(std::pair<std::vector<uint64_t>, int> data)
{
Eigen::Tensor<uint64_t, 2> dataTensor(1,data.second);
auto mapped_t = Eigen::TensorMap<Eigen::Tensor<uint64_t, 2, Eigen::RowMajor>>(&(data.first)[0], data.first.size(), 1);
return Eigen::TensorLayoutSwap(mapped_t);
}
where, in the pair, the vector is the data list and the int is the amount of data being processed. I did this for my personal use as it has special application for what I'm doing, but I believe you could use vec.size() and only need a vector as a param
|
72,876,587 | 72,890,798 | static shared_ptr not keeping value across function calls | I have an input.hpp (which I won't post for the sake of brevity) and an input.cpp file that looks like this (some things removed):
#include "details/macros.hpp"
#if defined(PLATFORM_WINDOWS)
#include "details/win32/input.inl"
#else
#error "No input implementation for this platform."
#endif
#define CHECK_INPUT_TYPE(type) \
if (types[input_type_bits::type]) \
{ \
auto res = poll_##type(); \
if (res.code() != errors::ok) \
LOG(error) << res; \
}
namespace input
{
void poll_input(flagset<input_type_bits> types)
{
CHECK_INPUT_TYPE(keyboard)
CHECK_INPUT_TYPE(mouse)
CHECK_INPUT_TYPE(touch)
CHECK_INPUT_TYPE(gesture)
CHECK_INPUT_TYPE(gamepad)
}
}
And an input.inl that looks like this (also cut down for brevity):
#ifndef WIN32_INPUT
#define WIN32_INPUT
#include <Windows.h>
namespace input
{
static bool g_init = false;
static std::shared_ptr<system::surface> g_surface = nullptr;
static error<errors> poll_gamepad()
{
if (!g_init)
{
auto ptr = create_surface();
g_surface = std::move(ptr);
g_init = true;
}
HWND hwnd = reinterpret_cast<HWND>(g_surface->native_ptr());
return MAKE_ERROR(errors::ok);
}
}
#endif
However, what is currently happening is that when I try to access g_surface's method, it works for the first call (in which g_init was false), but the second time the poll_input function is called, according to Visual Studio's debugger, g_surface is empty and accessing the method throws an exception.
What gives? g_init was set to true successfully across calls and yet g_surface wasn't? If I move g_surface to be a static variable inside poll_gamepad it does work properly, but that is something I'd like to avoid. Is there something about a static shared_ptr that I'm missing?
| I found out the answer to this problem. I mistakenly (don't code while tired, people!) had a call to the Win32 API GetKeyboardState that was using the wrong static variable as the output buffer and caused static memory corruption.
Thank you for everyone's help and I apologize for not giving much information to deal with. That was my bad!
Anyways, I'm glad it's working properly now, case closed.
|
72,876,699 | 72,876,706 | vector size changes after push_back() | I am not sure why the .size() of a vector<string> (10) below is changing from 10 to 20 after .push_back(string) on it. I would assume it should remain the same.
int main() {
vector<string> StrVec(10);
vector<int> intVec(10);
iota(intVec.begin(), intVec.end(), 1);
cout << "StrVec.length = " << StrVec.size() << endl;
for (int i : intVec)
{
StrVec.push_back(to_string(i));
}
cout << "StrVec.length = " << StrVec.size() << endl;
return 0;
}
Output:
StrVec.length = 10
StrVec.length = 20
| When you write vector<string> StrVec(10);, it initializes StrVec with 10 default-initialized string elements. Then, each push_back() pushes a new element to StrVec while iterating over intVec, thus arriving at 20 elements.
If you only wanted to pre-allocate memory (but not have any elements), you might consider using this instead:
vector<string> StrVec;
StrVec.reserve(10);
If you'd like to access elements of an already allocated vector, you might use StrVec[i], where i is the index. Note that you might not index past the end of the vector.
|
72,876,717 | 72,876,767 | Window closes only after clicking exit button multiple times? | When I try to exit the window by clicking the X in the top corner, the program doesn't end and just continues running. Only after repeatedly clicking the X button does the window manage to close. Why is this the case?
main.cpp:
#include <iostream>
#include <SDL2/SDL.h>
#include <SDL2/SDL_image.h>
#include "window.hpp"
#include "utils.hpp"
int main(int argc, char *argv[]) {
if (SDL_Init(SDL_INIT_VIDEO) > 0){
std::cout << "SDL has failed to initialize." << std::endl;
std::cout << "SDL ERROR: " << SDL_GetError() << std::endl;
}
if (!(IMG_Init(IMG_INIT_PNG))) {
std::cout << "SDL image has failed to initialize." << std::endl;
std::cout << "SDL_image ERROR: " << IMG_GetError() << std::endl;
}
RenderWindow window("game", 950, 720);
bool gameLoopRunning = true;
SDL_Event event;
const float deltaTime = 0.01f;
float accumulator = 0.0f;
float currentTime = utils::timeInSeconds();
while (gameLoopRunning) {
int startTicks = SDL_GetTicks();
float newTime = utils::timeInSeconds();
float frameTime = newTime - currentTime;
currentTime = newTime;
accumulator += frameTime;
while (accumulator >= deltaTime) {
while (SDL_PollEvent(&event)) {
switch(event.type) {
case SDL_QUIT:
gameLoopRunning = false;
}
accumulator -= deltaTime;
}
}
const float alpha = accumulator / deltaTime;
int frameTicks = SDL_GetTicks() - startTicks;
if (frameTicks < 1000/window.getRefreshRate()) {
SDL_Delay(1000/window.getRefreshRate() - frameTicks);
}
}
window.cleanUp();
SDL_Quit();
return 0;
}
window.cpp:
RenderWindow::RenderWindow(const char* p_title, int p_width, int p_height)
: window(NULL), renderer(NULL) {
window = SDL_CreateWindow(p_title, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
p_width, p_height, SDL_WINDOW_SHOWN);
if (window == NULL) {
std::cout << "Failed to create window: " << std::endl;
std::cout << "SDL Error: " << SDL_GetError() << std::endl;
}
renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
}
int RenderWindow::getRefreshRate() {
int displayIndex = SDL_GetWindowDisplayIndex(window);
SDL_DisplayMode mode;
SDL_GetDisplayMode(displayIndex, 0, &mode);
return mode.refresh_rate;
}
void RenderWindow::cleanUp() {
SDL_DestroyWindow(window);
}
void RenderWindow::clear() {
SDL_RenderClear(renderer);
}
utils.hpp:
namespace utils {
inline float timeInSeconds() {
float ticks = SDL_GetTicks();
ticks *= 0.001f;
return ticks;
}
}
| Those two loops will not exit just because you clicked exit:
while (accumulator >= deltaTime) {
while (SDL_PollEvent(&event)) {
switch(event.type) {
case SDL_QUIT:
gameLoopRunning = false;
}
accumulator -= deltaTime;
}
}
The accumulator loop is especially likely to run for a while if your game is behind in game time. You should make it explicit that you want to break out of those loops, and not just wait until later:
while (accumulator >= deltaTime && gameLoopRunning) {
while (SDL_PollEvent(&event) && gameLoopRunning) {
switch(event.type) {
case SDL_QUIT:
gameLoopRunning = false;
}
accumulator -= deltaTime;
}
}
Notice the extra && gameLoopRunning.
|
72,877,364 | 72,877,445 | Why here template Vector3<int> cannot convert to Vector3<int>? | It seems quite weird. As you can see, the error message says that a conversion from one type to that very same type fails. If I remove the explicit modifier from Vector3's copy constructor it is fine, no error. Could someone explain why? I'm confused.
template<typename T>
class Vector3 {
public:
explicit Vector3(const Vector3& v) :x(v.x), y(v.y), z(v.z) {}
Vector3() :x(0), y(0), z(0) {}
T x, y, z;
};
template<typename T>
Vector3<T> getVec3() {
return Vector3<T>(); //c2440 "return":cannot convert Vector3<int> to Vector3<int>
}
int main()
{
getVec3<int>();
}
| return Vector3<T>(); performs copy initialization, which won't consider explicit constructors: including the copy constructor. That's why you should mark the copy constructor non-explicit.
Copy-initialization is less permissive than direct-initialization: explicit constructors are not converting constructors and are not considered for copy-initialization.
BTW: Since C++17 your code would work fine because of mandatory copy elision, the copy (and move construction) will be omitted completely.
|
72,877,370 | 72,877,452 | Why can't I run my getline code without the stringstream? How do i use stringstream to make this code work? | #include<iostream>
#include<string>
using namespace std;
int main() {
string randomwords,temp;
getline(cin,randomwords);
while(getline(randomwords,temp,' ')) {
cout<<temp<<endl;
}
return 0;
}
| std::getline's first parameter is a std::basic_istream. There is no conversion between a std::basic_string and a std::basic_istream, so you cannot pass a std::string (a specialization of std::basic_string) as the first parameter to std::getline. This is a fundamental rule of C++: parameters to functions must have matching types, or there must be a conversion that can be used to convert an object of one type to the other. There are none here, so that's why it won't work.
However, std::basic_istringstream has an overloaded constructor that takes a std::basic_string as a parameter. Normally that can be used as an implicit conversion, but this particular constructor is an explicit constructor which prohibits it from being used in implicit type conversions. Therefore you'll just do the job yourself: construct an input stream from a string explicitly, and std::getline will happily use it. Mission accomplished.
|
72,877,408 | 72,878,033 | Center viewport after resize OpenGL / GLUT | I'm working on my reshape callback but I can't get the viewport centered after a resize; it stays in the top-left corner. I'm working with FreeGLUT.
This is my reshape function:
void reshape(int w, int h) {
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, w, h, 0);
glMatrixMode(GL_MODELVIEW);
}
This is the un-resized window (1024x768):
As you can see, there is a circle in the center of the screen.
This is the full resized window:
What I'm trying to do is not to change the draw resolution, but to center it in the middle of the resized viewport.
| The problem is the orthographic projection and the view space coordinates:
gluOrtho2D(0, w, h, 0);
In this projection, the upper left coordinate is (0, 0) and the lower right is (w, h), so the center is (w/2, h/2), which depends on the size of the view. Since the object's coordinate has not changed, it is no longer in the center of the scene: it still sits at the old center (old_w/2, old_h/2).
Use a projection where the center is (0, 0):
gluOrtho2D(-w/2, w/2, h/2, -h/2);
Of course, now you need to specify different coordinates for your objects in the scene. e.g. (0, 0) for the center.
If you want to keep the scene as is, you need to keep the projection. Just change the viewport (glViewport(0, 0, w, h)) but don't change the projection (remove gluOrtho2D(0, w, h, 0)).
|
72,877,432 | 72,900,483 | G++ failed to compile __attribute__ keyword | I tried compiling __attribute__ with g++, but it failed; gcc compiles it successfully.
g++ test.c -o test
Here is the function:
#include <stdio.h>
#include <stdlib.h>
struct student{
int num;
};
static __inline int student_information (struct student *) __attribute__((__unused__));
static __inline int student_information(stu) struct student *stu;
{
return 0;
}
int main(void)
{
struct student *stu = (struct student *)malloc(sizeof(struct student));
student_information(stu);
return 0;
}
Here is the failure message:
test.c:9:44: error: ‘student_information’ declared as an ‘inline’ variable
static __inline int student_information(stu) struct student *stu;
^
test.c:9:44: error: ‘int student_information’ redeclared as different kind of symbol
test.c:8:21: error: previous declaration of ‘int student_information(student*)’
static __inline int student_information (struct student *) __attribute__((__unused__));
^
test.c:9:41: error: ‘stu’ was not declared in this scope
static __inline int student_information(stu) struct student *stu;
^
test.c:10:1: error: expected unqualified-id before ‘{’ token
{
^
test.c:8:21: warning: inline function ‘int student_information(student*)’ used but never defined [enabled by default]
static __inline int student_information (struct student *) __attribute__((__unused__));
I don't know why this is wrong, but how do I compile __attribute__ with g++?
| It's because g++ is a C++ compiler and you're giving it invalid C++: the old K&R-style function definition (parameter names inside the parentheses, with their declarations between the closing parenthesis and the function body) was never part of C++, so g++ has to reject it; use an ordinary prototype-style definition instead. Also, the __attribute__ keyword is a compiler-specific extension, which in this case could be replaced by [[maybe_unused]] since C++17. Names beginning with a double underscore are reserved for use as custom extensions or implementation details by compiler and standard library implementations.
|
72,877,471 | 72,879,841 | C++ confusing closure captures [v] vs [v = v] | In the following code, it seems that the compiler sometimes prefers to call the templated constructor and fails to compile when a copy constructor should be just fine. The behavior seems to change depending on whether the value is captured as [v] or [v = v]; I thought those should be exactly the same thing. What am I missing?
I'm using gcc 11.2.0 and compiling it with "g++ file.cpp -std=c++17"
#include <functional>
#include <iostream>
#include <string>
using namespace std;
template <class T>
struct record {
explicit record(const T& v) : value(v) {}
record(const record& other) = default;
record(record&& other) = default;
template <class U>
record(U&& v) : value(forward<U>(v)) {} // Removing this constructor fixes print1
string value;
};
void call(const std::function<void()>& func) { func(); }
void print1(const record<string>& v) {
call([v]() { cout << v.value << endl; }); // This does not compile, why?
}
void print2(const record<string>& v) {
call([v = v]() { cout << v.value << endl; }); // this compiles fine
}
int main() {
record<string> v("yo");
print1(v);
return 0;
}
| I don't disagree with 康桓瑋's answer, but I found it a little hard to follow, so let me explain it with a different example. Consider the following program:
#include <functional>
#include <iostream>
#include <typeinfo>
#include <type_traits>
struct tracer {
tracer() { std::cout << "default constructed\n"; }
tracer(const tracer &) { std::cout << "copy constructed\n"; }
tracer(tracer &&) { std::cout << "move constructed\n"; }
template<typename T> tracer(T &&t) {
if constexpr (std::is_same_v<T, const tracer>)
std::cout << "template constructed (const rvalue)\n";
else if constexpr (std::is_same_v<T, tracer&>)
std::cout << "template constructed (lvalue)\n";
else
std::cout << "template constructed (other ["
<< typeid(T).name() << "])\n";
}
};
int
main()
{
using fn_t = std::function<void()>;
const tracer t;
std::cout << "==== value capture ====\n";
fn_t([t]() {});
std::cout << "==== init capture ====\n";
fn_t([t = t]() {});
}
When run, this program outputs the following:
default constructed
==== value capture ====
copy constructed
template constructed (const rvalue)
==== init capture ====
copy constructed
move constructed
So what's going on here? First, note in both cases, the compiler must materialize a temporary lambda object to pass into the constructor for fn_t. Then, the constructor of fn_t must make a copy of the lambda object to hold on to it. (Since in general the std::function may outlive the lambda that was passed in to its constructor, it cannot retain the lambda by reference only.)
In the first case (value capture), the type of the captured t is exactly the type of t, namely const tracer. So you can think of the unnamed type of the lambda object as some kind of compiler-defined struct that contains a field of type const tracer. Let's give this structure a fake name of LAMBDA_T. So the argument to the constructor to fn_t is of type LAMBDA_T&&, and an expression that accesses the field inside is consequently of type const tracer&&, which matches the template constructor's forwarding reference better than the actual copy constructor. (In overload resolution rvalues prefer binding to rvalue references over binding to const lvalue references when both are available.)
In the second case (init capture), the type of the captured t = t is equivalent to the type of tnew in a declaration like auto tnew = t, namely tracer. So now the field in our internal LAMBDA_T structure is going to be of type tracer rather than const tracer, and when an argument of type LAMBDA_T&& to fn_t's constructor must be move-copied, the compiler will choose tracer's normal move constructor for moving that field.
|
72,878,439 | 72,878,595 | Debugger is not stepping into expected function | #include<iostream>
#include<string>
using namespace std;
void reverse(string s){
if(s.length()==0){ //base case
return;
}
string ros=s.substr(1);
reverse(ros);
cout<<s[0];
}
int main(){
reverse("binod");
}
debugger_img_1
debugger_img_2
PFA,
The debugger is supposed to step into the reverse() function. But it is opening these tabs.
| The debugger is stepping into the std::string(const char*) constructor. Your code calls this implicitly before calling reverse because you pass "binod" (which effectively has type const char*) to a function expecting a std::string.
There's nothing wrong here: it's not the wrong function, just a function you didn't realise was being called. Just step out and then step in again.
Side note: Visual Studio's debugger has a 'Just My Code' feature which, when enabled, means the debugger only steps into code you wrote. It can be a useful time saver.
|
72,878,763 | 72,879,473 | C++ template specialization for enum | I want to map known type to enum value defined by myself.
enum class MyType : uint8_t {
Int8,
Uint8,
Int16,
Uint16,
Int32,
Uint32,
... // some other primitive types.
};
template <typename T>
constexpr uint8_t DeclTypeTrait();
template <>
constexpr uint8_t DeclTypeTrait<int8_t>() {
return static_cast<uint8_t>(MyType::Int8);
}
... // Specialize for each known number type.
Also, for any enum type defined by the user, I want to map it to Int32. The user must define their enum class based on int32_t.
using Enumeration = int32_t;
// Some user defined enum.
enum class CameraKind : Enumeration {
Perspective,
Orthographic
};
So I implement DeclTypeTrait like this:
template <typename T,
class = typename std::enable_if<std::is_enum<T>::value>::type>
constexpr uint8_t DeclTypeTrait() {
return static_cast<uint8_t>(MyType::Int32);
}
But I got error: "call to 'DeclTypeTrait' is ambiguous"
candidate function [with T = CameraKind]
candidate function [with T = CameraKind, $1 = void]
My question is how to accomplish this:
// If I have a variable of a known type, or any enum based on int32_t.
int8_t v1;
CameraKind camera;
std::string s;
DeclTypeTrait<decltype(v1)>() -> MyType::Int8
DeclTypeTrait<decltype(camera)>() -> MyType::Int32
DeclTypeTrait<decltype(s)>() // Report compile error is OK.
Using a class template will be much simpler for your case.
template <typename T, typename = std::void_t<>>
struct DeclTypeTraitT {
};
template <typename T>
inline constexpr uint8_t DeclTypeTrait = DeclTypeTraitT<T>::value;
template <>
struct DeclTypeTraitT<int8_t> {
static constexpr uint8_t value = static_cast<uint8_t>(MyType::Int8);
};
template <typename T>
struct DeclTypeTraitT<T, std::enable_if_t<std::is_enum_v<T>>> {
static constexpr uint8_t value = static_cast<uint8_t>(MyType::Int32);
};
Then
CameraKind camera;
static_assert(DeclTypeTrait<decltype(camera)> == static_cast<uint8_t>(MyType::Int32));
Demo
As @HolyBlackCat points out, if you want to map to the enum's underlying type instead, you can use
template <typename T>
struct DeclTypeTraitT<T, std::enable_if_t<std::is_enum_v<T>>> {
static constexpr uint8_t value = DeclTypeTrait<std::underlying_type_t<T>>;
};
|
72,879,821 | 72,899,607 | CUDA Unified Memory: Difference in behaviour on Windows and Linux | I am porting an application from Linux to Windows and discovered significant runtime differences of the same code on the same hardware between Windows and Linux.
A minimal working example:
#include <iostream>
#include <chrono>
#include <cuda.h>
constexpr unsigned int MB = 1000000;
constexpr unsigned int num_bytes = 20 * MB;
constexpr unsigned int repeats = 50;
constexpr unsigned int the_answer = 42;
constexpr unsigned int half_of_the_answer = the_answer / 2;
constexpr unsigned int array_index = 100;
__global__ void kernel(uint8_t* data){
int i = blockIdx.x * blockDim.x + threadIdx.x;
if(i<num_bytes){
data[i] = half_of_the_answer;
}
}
void doSomethingOnGPU(uint8_t* data){
cudaStream_t stream;
cudaStreamCreate(&stream);
cudaStreamAttachMemAsync(stream, data, 0, cudaMemAttachSingle);
kernel<<<num_bytes/1000, 1000, 0, stream>>>(data);
cudaStreamSynchronize(stream);
cudaStreamDestroy(stream);
cudaDeviceSynchronize();
}
void doSomethingOnCPU(uint8_t* pic_unpacked){
for(unsigned int i=0; i < num_bytes; i++){
pic_unpacked[i] = the_answer;
}
}
int main() {
uint8_t* data{};
cudaMallocManaged(&data, num_bytes, cudaMemAttachHost);
for(unsigned int i=0;i<repeats;i++){
auto start_time_cpu = std::chrono::high_resolution_clock::now();
doSomethingOnCPU(data);
auto stop_time_cpu = std::chrono::high_resolution_clock::now();
auto duration_cpu = std::chrono::duration_cast<std::chrono::milliseconds>(stop_time_cpu-start_time_cpu);
std::cout << "CPU computation took "<< duration_cpu.count() << "ms, data[" << array_index << "]="
<< static_cast<unsigned int>(data[array_index]) << std::endl;
auto start_time_gpu = std::chrono::high_resolution_clock::now();
doSomethingOnGPU(data);
auto stop_time_gpu = std::chrono::high_resolution_clock::now();
auto duration_gpu = std::chrono::duration_cast<std::chrono::milliseconds>(stop_time_gpu-start_time_gpu);
std::cout << "GPU computation took "<< duration_gpu.count() << "ms, data[" << array_index << "]="
<< static_cast<unsigned int>(data[array_index]) << std::endl << std::endl;
}
cudaFree(data);
return 0;
}
This leads to the following output on Windows:
CPU computation took 216ms, data[100]=42
GPU computation took 29ms, data[100]=21
and to the following output on Linux:
CPU computation took 20ms, data[100]=42
GPU computation took 1ms, data[100]=21
Both are built in Release mode (Linux->GCC, Win->MSVC).
It seems to me that the automatic memory transfers do not work well under Windows.
Explicit memory transfers with
cudaMallocHost(&hostMem, size);
cudaMalloc(&cudaMem, size);
cudaMemcpy(hostMem, cudaMem, size, cudaMemcpyDeviceToHost);
cudaMemcpy(cudaMem, hostMem, size, cudaMemcpyHostToDevice);
work more or less with the same speed under Linux and Windows.
Why is there this big runtime difference between Linux and Windows when working with unified memory?
| According to the documentation:
GPUs with SM architecture 6.x or higher (Pascal class or newer) provide additional Unified Memory features such as on-demand page migration and GPU memory oversubscription. [...] Applications running on Windows (whether in TCC or WDDM mode) will use the basic Unified Memory model as on pre-6.x architectures even when they are running on hardware with compute capability 6.x or higher.
Of the features explicitly mentioned here, I would think that "on-demand page migration" is very relevant for the increased performance under Linux.
|
72,879,874 | 72,879,917 | Why iterative std::max with 2 constants is faster than std::max with initializer list? | Compiler : Visual Studio 2019 , Optimization : (Favor Speed)(/O2)
In a loop (over 1 million cycles), I use std::max to find the maximum element among 10 elements.
When I use std::max iteratively, like
using namespace std;
using namespace chrono;
auto start = high_resolution_clock::now();
for(int i=0;i<1000000;i++)
out = max(arr[i],max(arr2[i],max(....);
auto end= high_resolution_clock::now();
cout << duration_cast<milliseconds>(end-start).count()<<endl;
is much faster than
using namespace std;
array<int,10> arrs;
auto start = high_resolution_clock::now();
for(int i=0;i<1000000;i++)
{
arrs = {arr[i],arr2[i],....};
out = max(arrs);
}
auto end= high_resolution_clock::now();
cout << duration_cast<milliseconds>(end-start).count()<<endl;
Why is that ?
This is actually not a specific question for the example above.
Why
template< class T >
constexpr const T& max( const T& a, const T& b );
is much faster than
template< class T >
constexpr T max( std::initializer_list<T> ilist );
?
| You are copying all of the array elements when you are constructing an initializer list, which is going to incur more overhead.
|
72,880,385 | 72,880,456 | How to allocate char poiter to pointer char ** is it possible in C++ or do I need C for this | Lets say I have char pointer to pointer now I want to allocate space for 3 pointers. I believe size of C++ char pointer is also 8 bytes. first pointer sized of 8 bytes will have strings that I will allocate later. I want to allocate memory for 3 pointers so I can access these pointers through a[0][string_num] to a[2][string_num] Then after all that I all allocate what a[0] pointer and a[1] pointer and a[2] pointing what strings
char **a;
I tried something like this, but it throws a compiler error:
a = new (char *)[3];
Error
error: array bound forbidden after parenthesized type-id
11 | a = new (char *)[3];
| ^
In C this is possible. Is it also possible in C++?
| Don't put parentheses around the type.
a = new char *[3];
As an aside, if you are writing C++, use std::string for strings, and std::vector for dynamic arrays.
|
72,880,495 | 72,880,588 | Why doesn't push_back keep working in a loop? | Completely new to C++. Programmed selection sort on 1D array of arbitrary length. Want to allow user to keep inputting integers into console to make an array of desired length, to be subsequently sorted.
I can only seem to make arrays of length 2 using a while loop for adding elements. The code and an example of the erroneous result when inputting 6, 2, 3, and 9 are shown below.
Script:
// Preprocessor directives and namespace declaration
#include <iostream>
#include <vector>
using namespace std;
// Function
void SelectionSort(int *arr, int len)
{
// Loop through index j in arr
for (int j = 0; j < len; j++) {
// Assume element j is minimum, and initialise minIndex
int min = arr[j];
int minIndex = j;
// Loop through comparisons to determine actual minimum
// (of elements after and including j)
for (int i = j; i < len; i++)
{
if (min > arr[i])
{
min = arr[i];
minIndex = i;
}
}
// Swap minimum with element j
int temp = arr[j];
arr[j] = min;
arr[minIndex] = temp;
}
// Display resulting array
for (int i = 0; i + 1 < len; i++)
{
cout << arr[i] << ", ";
}
cout << arr[len - 1] << endl;
}
// Main
int main()
{
// Explain program to user
cout << "Sort 1D array of user-inputted length/contents" << endl;
cout << "To finish array, enter -999" << endl;
// Initialise dynamic array
vector<int> vDyn (1);
vDyn[0] = 0;
cout << "Enter first element of array: ";
int firstElement = 0;
cin >> firstElement;
vDyn[0] = firstElement;
// Loop to define elements until desired length reached
bool keepGoing = true;
while (keepGoing == true)
{
cout << "Enter another element: ";
int newElement = 0;
cin >> newElement;
if (newElement != -999)
{
vDyn.push_back(newElement);
} else
{
keepGoing = false;
}
}
// Convert vector to array (dynamic to static)
int* v = &vDyn[0];
// Get array length
int len = sizeof(v) / sizeof(v[0]);
// Run SelectionSort function
SelectionSort(v, len);
return 0;
}
Terminal:
Sort 1D array of user-inputted length/contents
To finish array, enter -999
Enter first element of array: 6
Enter another element: 2
Enter another element: 3
Enter another element: 9
Enter another element: -999
2, 6
| This declaration
int len = sizeof(v) / sizeof(v[0]);
is equivalent to the declaration
int len = sizeof( int * ) / sizeof( int );
because the variable v is declared like
int* v = &vDyn[0];
The size of a pointer is usually 4 or 8 bytes, so the variable len will have the value 1 or 2 and does not depend on the number of elements stored in the vector.
Instead you should use for example
size_t len = vDyn.size();
You could declare the function like
void SelectionSort(int *arr, size_t len);
and call it like
SelectionSort( vDyn.data(), vDyn.size() );
Also as in C++ there is standard function std::swap declared in the header <utility> then instead of this code snippet
// Swap minimum with element j
int temp = arr[j];
arr[j] = min;
arr[minIndex] = temp;
you could just write
if ( j != minIndex ) std::swap( arr[j], arr[minIndex] );
And the inner for loop could look like
for ( size_t i = j + 1; i < len; i++)
^^^^^
In fact your function SelectionSort is a C function. A C++ function should be more general and use iterators. In this case it could sort arrays along with other containers.
Here is a demonstration program that shows a more general function called for an array based on a vector.
#include <iostream>
#include <vector>
#include <iterator>
#include <algorithm>
template <typename ForwardIterator>
void SelectionSort( ForwardIterator first, ForwardIterator last )
{
for ( ; first != last; ++first )
{
auto current_min = first;
for ( auto next = std::next( first ); next != last; ++next )
{
if ( *next < *current_min ) current_min = next;
}
if ( current_min != first )
{
std::iter_swap( current_min, first );
}
}
}
int main()
{
std::vector<int> v = { 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 };
for ( const auto &item : v )
{
std::cout << item << ' ';
}
std::cout << '\n';
SelectionSort( v.data(), v.data() + v.size() );
for ( const auto &item : v )
{
std::cout << item << ' ';
}
std::cout << '\n';
}
The program output is
9 8 7 6 5 4 3 2 1 0
0 1 2 3 4 5 6 7 8 9
In general you need also to write an overloaded function that accepts also a comparison function.
|
72,880,528 | 72,915,991 | Sonar cognitive complexity checking for a function | I try to understand how sonarqube calculates the coginitive complexity and I wonder if this is correct and for instance this function's complexity is indeed 16. I guess it is not 16 because limit of 15 was not exceeded. Can you help me what is exact cognitive complexity of this function?
Thank you.
bool sonarQuestion()
{
if (not (1 and 0 and 1)) // 1 + 2 (1 for if + 2 logical operators) = 3
{
return false;
}
if (1 and 1) // 1 + 1 = 2
{
if (not (1 and 2 and 3 and (1 or 0))) // 1 + 4 + 1 (1 for if + 4 logical operator + 1 for nesting) = 6
{
return false;
}
}
if (2) // 1
{
if (not (2 and 3 and 5)) // 1 + 2 + 1 = 4
{
return false;
}
}
// total is 16
return true;
}
| That's an interesting way of looking at complexity. I'm not familiar with Sonar at all, but I did find this link where they explain the principles. Looking at that document, I think your example has a score of 12:
bool sonarQuestion()
{
// 1 + 1 (1 if, 1 sequence of operators)
if (not (1 and 0 and 1))
{
return false;
}
if (1 and 1) // +1 +1
{
// +2 +2 (nested if, 2 sequences)
if (not (1 and 2 and 3 and (1 or 0)))
{
return false;
}
}
if (2) // +1
{
if (not (2 and 3 and 5)) // +2 +1
{
return false;
}
}
// total is 2 + 6 + 4 = 12
return true;
}
|
72,880,578 | 72,881,202 | Ranges-v3 transform limitations | I am trying to use ranges-v3 to split an SNMP OID into parts and return them as a std::deque<uint32_t>.
The following code works but only after I added a number of additional un-natural steps:
#include <range/v3/all.hpp>
/// split the supplied string into nodes, using '.' as a delimiter
/// @param the path to split , e.g "888.1.2.3.4"
/// @return a std::deque<uint32_t> containing the split paths
static std::deque<uint32_t> splitPath(std::string_view path) {
constexpr std::string_view delim{"."};
auto tmp = path | ranges::views::split(delim)
| ranges::to<std::vector<std::string>>()
;
return tmp | ranges::views::transform([](std::string_view v) {
return std::stoul(std::string{v}); })
| ranges::to<std::deque<uint32_t>>();
}
Initially I expected the following to simply work:
static std::deque<uint32_t> splitPath(std::string_view path) {
constexpr std::string_view delim{"."};
return path | ranges::views::split(delim)
| ranges::views::transform([](std::string_view v) {
return std::stoul(std::string{v}); })
| ranges::to<std::deque<uint32_t>>();
}
But that results in the following error:
error: no match for ‘operator|’ (operand types are
‘ranges::split_view<std::basic_string_view<char>,
std::basic_string_view<char> >’ and
‘ranges::views::view_closure<ranges::detail::
bind_back_fn_<ranges::views::transform_base_fn, ahk::snmp::
{anonymous}::splitPath(std::string_view)::<lambda(std::string_view)> > >’)
36 | return path | ranges::views::split(delim)
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| |
| ranges::split_view<std::basic_string_view<char>,
std::basic_string_view<char> >
37 | | ranges::views::transform([](std::string_view v) {
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| |
| ranges::views::view_closure<ranges::detail::bind_back_fn_
<ranges::views::transform_base_fn, ahk::snmp::
{anonymous}::splitPath(std::string_view)::<lambda(std::string_view)> > >
38 | return std::stoul(std::string{v}); })
Why is it necessary to convert the result of the first operation to a std::vector and store it in a named value (tmp) before calling ranges::views::transform? Even the following code (which removes the named value tmp) fails:
static std::deque<uint32_t> splitPath(std::string_view path) {
constexpr std::string_view delim{"."};
return path | ranges::views::split(delim)
| ranges::to<std::vector<std::string>>()
| ranges::views::transform([](std::string_view v) {
return std::stoul(std::string{v}); })
| ranges::to<std::deque<uint32_t>>();
}
The value type of the range returned by ranges::views::split isn't std::string_view; it is an implementation-detail type.
I'm not sure why you were able to | to<std::vector<std::string>> at all.
Because it uses a sentinel, you will need to convert it to a common range (prior to C++20, when std::string_view is constructible from an iterator and sentinel, or C++23 when it is constructible from a range).
std::deque<uint32_t> splitPath(std::string_view path) {
constexpr std::string_view delim{"."};
auto toul = [](auto v){
auto c = v | ranges::views::common;
return std::stoul(std::string(c.begin(), c.end()));
};
return path | ranges::views::split(delim)
| ranges::views::transform(toul)
| ranges::to<std::deque<uint32_t>>();
}
|
72,880,681 | 72,881,153 | Why we need 'namespace scope' concept? - in C++ | I learned that in namespace "name decoration(mangling)" takes place so that it can be differentiated from other same identifiers which is in different namespace.
Wiki: Name mangling
If then, Why "namespace scope" exists? I thought just 'name decoration' can solve all problem about name conflicting.
After all, in C the root cause of name conflicts is that different entities share the same identifier.
Name decoration can make names (identifiers) internally different from each other, so I think name decoration is all we need.
Then why does C++ have the 'namespace scope' concept? Just so unqualified names can be used within a namespace's scope? I want to know if there is any other reason.
Namespace scope is very useful in programming for the following reasons:
Avoids name collisions between functions/classes; e.g., suppose you have two functions of the same name but in different namespace scopes (foo::func() and bar::func()).
Can be used for grouping related functions; e.g., the sin(), cos() and sqrt() functions could live under a namespace Math.
Avoids confusion between classes; for example, suppose there are two classes named lexer, but in different namespace scopes, like json::lexer and xml::lexer. This makes it clear to the programmer which one to choose according to the language, i.e., xml or json.
An everyday example of a namespace is std, short for standard: it contains the functions and classes defined in the standard library, including STL classes like std::vector, std::map and std::string.
NOTE: There's no concept of namespace scope in C.
|
72,880,726 | 72,881,033 | multi-thread program initialization using call_once vs atomic_flag | In book C++ Concurrency in Action 2nd, 3.3.1, the author introduced a way using call_once function to avoid double-checked locking pattern when doing initialization in multi-thread program,
std::shared_ptr<some_resource> resource_ptr;
std::once_flag resource_flag;
void init_resource()
{
resource_ptr.reset(new some_resource);
}
void foo()
{
std::call_once(resource_flag,init_resource); #1
resource_ptr->do_something();
}
The reason is explained in this [answer][1]. I used to use atomic_flag to do initialization in multi-threaded programs, something like this:
std::atomic_flag init = ATOMIC_FLAG_INIT;
std::atomic<bool> initialized = false;
void Init()
{
if (init.test_and_set()) return;
DoInit();
initialized = true;
}
void Foo(){
if(!initialized) return;
DoSomething(); // use some variable intialized in DoInit()
}
Every thread will call Init() before calling Foo().
After reading the book, I wonder whether the above pattern causes a race condition and is therefore not safe to use. Is it possible that the compiler reorders the instructions so that initialized becomes true before DoInit() finishes?
[1]: Explain race condition in double checked locking
| The race condition in your code happens when thread 1 enters DoInit and thread 2 skips it and proceeds to Foo.
You handle it with if(!initialized) return in Foo, but this is not always possible: you should always expect a method to accidentally do nothing, and you can forget to add such checks to other methods.
With std::call_once, under concurrent invocation, execution continues only after the action has completed.
Regarding reordering, atomic operations use memory_order_seq_cst by default, which does not allow such reordering.
|
72,880,751 | 72,882,027 | Why does assigning a value to a string in a struct crash the program? | I have commented out the problematic string, attempted to pass the input to a string that is not a member of the struct, then passing it to the correct string, to no avail. To achieve the intended function, the string must go through this struct. Where is it going wrong?
Structure code:
#include <iostream>
#include <string>
#include <fstream>
#include <iomanip>
using namespace std;
class passingdata
{
public:
passingdata()
{
//constructor
};
~passingdata()
{
//destructor
};
int convertedResponse;
const string headers[4] = {"Labor/Materials", "Cost (per unit)", "Total Units", "Total Cost"}; //this is all to be written to a file later.
struct dynInputs
{
string name;
int perCost;
int unitTotal;
int totalCost = perCost * unitTotal;
};
void acceptInputs()
{
string name = "";
string response = "";
const string positiveResponse = "yes";
cout << "Would you like to insert a label?" << endl;
getline(cin, response);
if (response == positiveResponse)
{
populateSaveData();
}
else
{
//nothing yet
}
}
void populateSaveData()
{
if (convertedResponse == 1)
{
cout << "How many labels would you like to create?" << endl;
int labelCount;
cin >> labelCount;
cin.clear();
int labelsNeeded = labelCount;
dynInputs* dynamicInputs;
dynamicInputs = new dynInputs[labelsNeeded];
while (labelsNeeded > 0)
{
cout << "please type the name for this row" << endl;
cin.ignore();
//string tempName = "";
//getline(cin, tempName); this works!
getline(cin, dynamicInputs[labelsNeeded].name); //this breaks, goes to trash memory when done this way
system("pause");
cin.clear();
//tempName = dynamicInputs[labelsNeeded].name; breaks as well
cout << dynamicInputs[labelsNeeded].name << endl;
//cout << tempName << endl;
system("pause");
cout << "please type the cost of the unit, and the number of units" << endl;
cin >> dynamicInputs[labelsNeeded].perCost;
cin.clear();
cin >> dynamicInputs[labelsNeeded].unitTotal;
cin.clear();
labelsNeeded--;
}
cout << dynamicInputs[0].unitTotal << endl;
The dynamicInputs[labelsNeeded] access points to junk memory, yet I'm unsure why it only crashes when assigning a value to the string.
| In labelsNeeded you store the size of the array.
Then in the first iteration you use labelsNeeded to index into your array. Since C++ indexes an array starting from 0, the largest possible valid index is (the size of the array) - 1.
Eg.: For an array of size 4, your valid index range is [0, 1, 2, 3].
Now what you are doing is setting labelsNeeded to equal labelCount and then allocate an array of the size equalig labelsNeeded. And then in the first iteration you use the value of labelsNeeded as an index with this original value for accessing an element in your array. Which goes past the valid range of your array. Hence the program crashes.
I see that at the and of the iteration you decrement labelsNeeded but that is too late considering that you already tried to use the original value earlier in the code.
Your labelsNeeded > 0 condition for your while loop is also incorrect if you are using this "decrement the index at the end of the iteration" solution since it will fail to write the first (at index 0) element of your array.
Try moving the labelsNeeded-- line to the beginning of the iteration.
Note:
As to "why it only crashes assigning value to the string".
C++ (or rather the runtime) does not care whether your pointer points to a valid address or not, simply because a pointer is just a memory address. By itself it is just a number stored in memory. A pointer that references invalid memory will only crash your program (or do other weird stuff) if you dereference it - in other words, if you actually use the pointer to access that place in memory. It is not the "invalid" memory address in the pointer that crashes but the act of trying to access the memory at that address. The distinction may look subtle but is very important nonetheless. You can have any number of "null pointers" in the program as long as you don't try to dereference null.
Note 2:
Yours is an especially interesting mode of failure since it can fail in one of two places:
Since you are indexing an element that is one past the end of the array, that may as well coincide with the end of the heap that was assigned to the program. So it may crash there. But... Most likely your array is not allocated in such a place and there will still be accessible heap past the end of your array so dereferencing one element past your array may as well give you "something" and by something I mean some memory content cast to the type that you have. But of course from your point of view that is some random data.
If execution survived the previous section, you now have a struct filled with random data, and the string in your struct is filled with random garbage as well. That means its pointer to the actual string content is also random (most likely pointing somewhere outside your addressable space), and its size and other state information are garbage too. So if you reassign that string, its original content pointer will likely be accessed (e.g., during deallocation), which will result in a crash.
|
72,881,213 | 72,882,345 | How to handle invalid state after move especially for objects with validating constructor? | I made a class for a function's argument to delegate its validation and also for function overloading purposes.
Throwing from constructor guarantees that the object will either be constructed in a valid state or will not be constructed at all. Hence, there is no need to introduce any checking member functions like explicit operator bool() const.
// just for exposition
const auto &certString = open_cert();
add_certificate(cert_pem{certString.cbegin(), certString.cend()}); // this will either throw
// or add a valid certificate.
// cert_pem is a temporary
However, there are issues which I don't see a appealing solution for:
An argument-validation class might itself be made non-persistent - to be used only for validation as a temporary object. But what about classes that are allowed to be persistent, that is, living after the function invocation:
// just for exposition
const auto &certString = open_cert();
cert_pem cert{certString.cbegin(), certString.cend()}; // allowed to throw
cert_pem moved = std::move(cert); // cert invalidated
cert_pem cert_invalid = std::move(cert); // is not allowed to throw
add_certificate(cert_invalid); // we lost the whole purpose
I can see several ways to treat this without introducing state-checking functions (which would make the class stateful):
Declare the object "unusable" after move - a really simple recipe for disaster.
Declare the move constructor and move assignment operator deleted and allow only copying - but resources might be very expensive to copy, or copying may even be impossible when using the PIMPL idiom.
Use heap allocation when an object needs to be persistent - this looks like the most obvious option, but it carries an unnecessary performance penalty, especially when a class has several such members: there will be several memory allocations upon construction.
Here is a code example for 2):
/**
* Class that contains PEM certificate byte array.
* To be used as an argument. Ensures that input certificate is valid, otherwise throws on construction.
*/
class cert_pem final
{
public:
template <typename IterT>
cert_pem(IterT begin, IterT end)
: value_(begin, end)
{
Validate(value_);
}
const std::vector<uint8_t>& Value() const noexcept(false)
{
return value_;
}
cert_pem (const cert_pem &) = default;
cert_pem & operator=(const cert_pem &) = default;
cert_pem (cert_pem &&) = delete;
cert_pem & operator=(cert_pem &&) = delete;
private:
/**
* \throws std::invalid_argument
*/
static void Validate(const std::vector<uint8_t>& value) noexcept(false);
static void ValidateNotEmpty(const std::vector<uint8_t>& value) noexcept(false);
private:
std::vector<uint8_t> value_;
};
Is there another way to handle this problem without these shortcomings? Or will I have to choose one of the above?
I think that with argument-validating classes a good approach would be to not allow them to be persistent - only temporary objects allowed. But I am not sure whether that is possible in C++.
| You are trying to maintain two invariants at once, and their semantics are in conflict. The first invariant is the validity of the certificate. The second is for memory management.
For the first invariant, you decided that there can be no invalid constructed object, but for the second, you decided that the object can be either valid or unspecified†. This is only possible because the deallocation has a check somewhere.
There is no way around this: you either add a check for the first or you decouple the invariants. One way of decoupling them is to follow the design of std::lock_guard
cert c = open_cert(); // c is guaranteed to not have memory leaks and is movable
{
cert_guard cg{c}; // cg is guaranteed to be valid, but cg is non-movable
}
But wait, you might ask, how do you transfer the validity to another cert_guard?
Well, you can't.
That is the semantics you chose for the first invariant: it is valid exactly during the lifetime of the object. That is the entire point.
† Unspecified and invalid as far as the certificate is concerned.
|