| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
3,400,144
|
All,
I am familiar with the ability to fake GPS information to the emulator through the use of the `geo fix long lat altitude` command when connected to the emulator console.
What I'd like to do is have a simulation running on potentially a different computer produce lat, long, altitudes that should be sent over to the Android device to fake the emulator into thinking it has received a GPS update.
I see various solutions for [scripting a telnet session](https://stackoverflow.com/questions/709801/creating-a-script-for-a-telnet-session); it seems like the best solution is, in pseudocode:
```
while true:
    if position update generated / received:
        open subprocess and call "echo 'geo fix lon lat altitude' | nc localhost 5554"
```
This seems like a big hack, although it works on Mac (not on Windows). Is there a better way to do this? (I cannot generate the tracks ahead of time and feed them in as a route; the Android system is part of a real-time simulation, but as it's running on an emulator there are no position updates. Another system is responsible for calculating these position updates.)
edit:
An alternative, perhaps cleaner, method is to use Python's telnetlib library.
```
import telnetlib
tn = telnetlib.Telnet("localhost", 5554)
while True:
    if position update generated / received:   # pseudocode
        tn.write(b"geo fix longitude latitude altitude\r\n")
```
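If `telnetlib` is unavailable (it was removed from the standard library in Python 3.13), the same idea can be sketched with a plain socket. `updates` here is a hypothetical iterable of fixes coming from the simulation, and note that newer emulator consoles may also require an `auth` command with the token from `~/.emulator_console_auth_token` before accepting `geo fix`:

```python
import socket

def format_geo_fix(longitude, latitude, altitude):
    # The emulator console expects longitude before latitude.
    return ("geo fix %f %f %f\r\n" % (longitude, latitude, altitude)).encode("ascii")

def stream_fixes(updates, host="localhost", port=5554):
    # Keep one console connection open and push each fix as it arrives.
    with socket.create_connection((host, port)) as sock:
        for lon, lat, alt in updates:
            sock.sendall(format_geo_fix(lon, lat, alt))
```

This avoids spawning a subprocess per update and keeps a single persistent connection, which matters at real-time update rates.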
|
2010/08/03
|
[
"https://Stackoverflow.com/questions/3400144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/155392/"
] |
We have confirmed this bug. This is due to the end\_time having to be aligned with day delimiters in PST in order for the Insights table to return any data. To address this issue, we introduced two custom functions you can use to query the insights table:
1. end\_time\_date() : accepts a DATE in string form (e.g. '2010-08-01')
2. period() : accepts 'lifetime', 'day', 'week' and 'month'
For example, you can now query the insights table using:
```
SELECT metric, value FROM insights
WHERE object_id = YOUR_APP_ID AND
metric = 'application_active_users' AND
end_time = end_time_date('2010-09-01') AND
period = period('day');
```
We will document these functions soon; sorry for the inconvenience!
P.S. If you don't want to use the end\_time\_date() function, please make sure the end\_time timestamp in your query is aligned with day delimiters in PST.
Thanks!
Facebook Insights Team
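As a quick sketch of what "aligned with day delimiters in PST" means in practice (this is my own helper, not part of the Insights API): a raw `end_time` must equal the epoch timestamp of midnight, US/Pacific, for the day you want:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def pacific_midnight(date_str):
    # Epoch seconds for 00:00 US/Pacific on the given YYYY-MM-DD date.
    day = datetime.strptime(date_str, "%Y-%m-%d")
    return int(day.replace(tzinfo=ZoneInfo("America/Los_Angeles")).timestamp())
```

For example, `pacific_midnight('2010-09-01')` should give the same timestamp that `end_time_date('2010-09-01')` resolves to.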
|
The response you're seeing is empty, which doesn't necessarily mean there's no metric data available. A few ideas about what might cause this:
* Are you using a user access token? If yes, does the user own the page? Is the 'read\_insights' extended permission granted for the user / access token? How about 'offline\_access'?
* end\_time should be specified as midnight, Pacific Time.
* Valid periods are 86400, 604800, 2592000 (day, week, month)
* Does querying 'page\_fan\_adds' metric yield meaningful results for a given period?
While I haven't worked with the insights table, working with Facebook's FQL taught me not to expect error messages or error codes, but to follow the documentation (if available) and then experiment with it...
As for the date, the following Ruby snippet gives the timestamp for midnight on a given date:
```
Date.new(2010,9,14).to_time.to_i
```
---
I also found the following on the Graph API documentation page:
> **Impersonation**
>
> You can impersonate pages administrated by your users by requesting the "manage\_pages" extended permission.
>
> Once a user has granted your application the "manage\_pages" permission, the "accounts" connection will yield an additional access\_token property for every page administrated by the current user. These access\_tokens can be used to make calls on behalf of a page. The permissions granted by a user to your application will now also be applicable to their pages. ([source](http://developers.facebook.com/docs/api))

Have you tried requesting this permission and using `&metadata=1` in a Graph API query to get the access token for each account?
|
3,400,144
|
All,
I am familiar with the ability to fake GPS information to the emulator through the use of the `geo fix long lat altitude` command when connected to the emulator console.
What I'd like to do is have a simulation running on potentially a different computer produce lat, long, altitudes that should be sent over to the Android device to fake the emulator into thinking it has received a GPS update.
I see various solutions for [scripting a telnet session](https://stackoverflow.com/questions/709801/creating-a-script-for-a-telnet-session); it seems like the best solution is, in pseudocode:
```
while true:
    if position update generated / received:
        open subprocess and call "echo 'geo fix lon lat altitude' | nc localhost 5554"
```
This seems like a big hack, although it works on Mac (not on Windows). Is there a better way to do this? (I cannot generate the tracks ahead of time and feed them in as a route; the Android system is part of a real-time simulation, but as it's running on an emulator there are no position updates. Another system is responsible for calculating these position updates.)
edit:
An alternative, perhaps cleaner, method is to use Python's telnetlib library.
```
import telnetlib
tn = telnetlib.Telnet("localhost", 5554)
while True:
    if position update generated / received:   # pseudocode
        tn.write(b"geo fix longitude latitude altitude\r\n")
```
|
2010/08/03
|
[
"https://Stackoverflow.com/questions/3400144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/155392/"
] |
The response you're seeing is empty, which doesn't necessarily mean there's no metric data available. A few ideas about what might cause this:
* Are you using a user access token? If yes, does the user own the page? Is the 'read\_insights' extended permission granted for the user / access token? How about 'offline\_access'?
* end\_time should be specified as midnight, Pacific Time.
* Valid periods are 86400, 604800, 2592000 (day, week, month)
* Does querying 'page\_fan\_adds' metric yield meaningful results for a given period?
While I haven't worked with the insights table, working with Facebook's FQL taught me not to expect error messages or error codes, but to follow the documentation (if available) and then experiment with it...
As for the date, the following Ruby snippet gives the timestamp for midnight on a given date:
```
Date.new(2010,9,14).to_time.to_i
```
---
I also found the following on the Graph API documentation page:
> **Impersonation**
>
> You can impersonate pages administrated by your users by requesting the "manage\_pages" extended permission.
>
> Once a user has granted your application the "manage\_pages" permission, the "accounts" connection will yield an additional access\_token property for every page administrated by the current user. These access\_tokens can be used to make calls on behalf of a page. The permissions granted by a user to your application will now also be applicable to their pages. ([source](http://developers.facebook.com/docs/api))

Have you tried requesting this permission and using `&metadata=1` in a Graph API query to get the access token for each account?
|
If, like me, you came here after getting this from another FQL statement, then the problem is that you're not including the access\_token parameter, e.g.
<https://api.facebook.com/method/fql.query?query=SELECT+name+FROM+user+WHERE+uid+%3D+me()&access_token=>...
(You can use fb.getAccessToken())
|
3,400,144
|
All,
I am familiar with the ability to fake GPS information to the emulator through the use of the `geo fix long lat altitude` command when connected to the emulator console.
What I'd like to do is have a simulation running on potentially a different computer produce lat, long, altitudes that should be sent over to the Android device to fake the emulator into thinking it has received a GPS update.
I see various solutions for [scripting a telnet session](https://stackoverflow.com/questions/709801/creating-a-script-for-a-telnet-session); it seems like the best solution is, in pseudocode:
```
while true:
    if position update generated / received:
        open subprocess and call "echo 'geo fix lon lat altitude' | nc localhost 5554"
```
This seems like a big hack, although it works on Mac (not on Windows). Is there a better way to do this? (I cannot generate the tracks ahead of time and feed them in as a route; the Android system is part of a real-time simulation, but as it's running on an emulator there are no position updates. Another system is responsible for calculating these position updates.)
edit:
An alternative, perhaps cleaner, method is to use Python's telnetlib library.
```
import telnetlib
tn = telnetlib.Telnet("localhost", 5554)
while True:
    if position update generated / received:   # pseudocode
        tn.write(b"geo fix longitude latitude altitude\r\n")
```
|
2010/08/03
|
[
"https://Stackoverflow.com/questions/3400144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/155392/"
] |
We have confirmed this bug. This is due to the end\_time having to be aligned with day delimiters in PST in order for the Insights table to return any data. To address this issue, we introduced two custom functions you can use to query the insights table:
1. end\_time\_date() : accepts a DATE in string form (e.g. '2010-08-01')
2. period() : accepts 'lifetime', 'day', 'week' and 'month'
For example, you can now query the insights table using:
```
SELECT metric, value FROM insights
WHERE object_id = YOUR_APP_ID AND
metric = 'application_active_users' AND
end_time = end_time_date('2010-09-01') AND
period = period('day');
```
We will document these functions soon; sorry for the inconvenience!
P.S. If you don't want to use the end\_time\_date() function, please make sure the end\_time timestamp in your query is aligned with day delimiters in PST.
Thanks!
Facebook Insights Team
|
I'm not sure the date is correct. Do you really want the date as an integer?
Usually SQL takes dates in db format, so you'd format it with (these helpers require ActiveSupport/Rails):
```
Date.new(2010,9,14).to_s(:db)
(Time.now - 5.days).to_s(:db)
# or even better:
5.days.ago.to_s(:db)
```
|
3,400,144
|
All,
I am familiar with the ability to fake GPS information to the emulator through the use of the `geo fix long lat altitude` command when connected to the emulator console.
What I'd like to do is have a simulation running on potentially a different computer produce lat, long, altitudes that should be sent over to the Android device to fake the emulator into thinking it has received a GPS update.
I see various solutions for [scripting a telnet session](https://stackoverflow.com/questions/709801/creating-a-script-for-a-telnet-session); it seems like the best solution is, in pseudocode:
```
while true:
    if position update generated / received:
        open subprocess and call "echo 'geo fix lon lat altitude' | nc localhost 5554"
```
This seems like a big hack, although it works on Mac (not on Windows). Is there a better way to do this? (I cannot generate the tracks ahead of time and feed them in as a route; the Android system is part of a real-time simulation, but as it's running on an emulator there are no position updates. Another system is responsible for calculating these position updates.)
edit:
An alternative, perhaps cleaner, method is to use Python's telnetlib library.
```
import telnetlib
tn = telnetlib.Telnet("localhost", 5554)
while True:
    if position update generated / received:   # pseudocode
        tn.write(b"geo fix longitude latitude altitude\r\n")
```
|
2010/08/03
|
[
"https://Stackoverflow.com/questions/3400144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/155392/"
] |
We have confirmed this bug. This is due to the end\_time having to be aligned with day delimiters in PST in order for the Insights table to return any data. To address this issue, we introduced two custom functions you can use to query the insights table:
1. end\_time\_date() : accepts a DATE in string form (e.g. '2010-08-01')
2. period() : accepts 'lifetime', 'day', 'week' and 'month'
For example, you can now query the insights table using:
```
SELECT metric, value FROM insights
WHERE object_id = YOUR_APP_ID AND
metric = 'application_active_users' AND
end_time = end_time_date('2010-09-01') AND
period = period('day');
```
We will document these functions soon; sorry for the inconvenience!
P.S. If you don't want to use the end\_time\_date() function, please make sure the end\_time timestamp in your query is aligned with day delimiters in PST.
Thanks!
Facebook Insights Team
|
If, like me, you came here after getting this from another FQL statement, then the problem is that you're not including the access\_token parameter, e.g.
<https://api.facebook.com/method/fql.query?query=SELECT+name+FROM+user+WHERE+uid+%3D+me()&access_token=>...
(You can use fb.getAccessToken())
|
44,364,458
|
Currently I'm using Eclipse with the Nokia/RED plugin, which allows me to write Robot Framework test suites. It supports Python 3.6 and Selenium.
My project is called "Automation" and Test suites are in `.robot` files.
Test suites contain test cases, which call "Keywords".
**Test Cases**
```
Create New Vehicle
    Create new vehicle with next ${registrationno} and ${description}
    Navigate to data section
```
Those "Keywords" are imported from python library and look like:
```
@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self, registrationno, description):
    headerPage = HeaderPage(TestCaseKeywords.driver)
    sideBarPage = headerPage.selectDaten()
    basicVehicleCreation = sideBarPage.createNewVehicle()
    basicVehicleCreation.setKennzeichen(registrationno)
    basicVehicleCreation.setBeschreibung(description)
    TestCaseKeywords.carnumber = basicVehicleCreation.save()
```
The problem is that when I run test cases, the log only shows the result of the whole Python function, pass or fail. I can't see at which step it failed: was it the first or the second step of the function?
Is there any plugin or other solution that lets me see which exact Python call passed or failed? (Of course, a workaround is to wrap every call in its own keyword in the test case, but that is not what I prefer.)
|
2017/06/05
|
[
"https://Stackoverflow.com/questions/44364458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8113230/"
] |
If you need to "step into" a Python-defined keyword, you need to use a Python debugger together with RED.
This can be done with any Python debugger; if you'd like to have everything in one application, PyDev can be used with RED.
Follow the help document below; if you face any problems, leave a comment here.
[RED Debug with PyDev](http://nokia.github.io/RED/help/user_guide/launching/robot_python_debug.html)
|
If you want to know which statement in the Python-based keyword failed, you need to have it raise an appropriate error. Robot won't do this for you, however. From a reporting standpoint, a Python-based keyword is a black box. You will have to explicitly add logging messages and raise useful errors.
For example, the call to `sideBarPage.createNewVehicle()` should throw an exception such as "unable to create new vehicle". Likewise, the call to `basicVehicleCreation.setKennzeichen(registrationno)` should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
```
@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self, registrationno, description):
    headerPage = HeaderPage(TestCaseKeywords.driver)
    sideBarPage = headerPage.selectDaten()
    try:
        basicVehicleCreation = sideBarPage.createNewVehicle()
    except:
        raise Exception("unable to create new vehicle")
    try:
        basicVehicleCreation.setKennzeichen(registrationno)
    except:
        raise Exception("unable to register new vehicle")
    ...
```
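If sprinkling try/except everywhere feels repetitive, the same pattern can be factored into a small decorator (a sketch; `step` is my own hypothetical helper, not part of Robot Framework):

```python
def step(description):
    # Re-raise any failure from the wrapped call with a step-level
    # description, so the Robot log shows which sub-step failed.
    def wrap(fn):
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                raise Exception("step failed: %s (%s)" % (description, exc))
        return inner
    return wrap

@step("create new vehicle")
def create_vehicle():
    # Stand-in for sideBarPage.createNewVehicle(); always fails here
    # to demonstrate the wrapped error message.
    raise ValueError("element not found")
```

Each page-object call you decorate this way then surfaces its own description in the Robot log instead of one opaque keyword failure.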
|
54,752,681
|
I am working on a thesis regarding Jacobsthal sequences (A001045) and how they can be considered as being composed of some number of distinct sub-sequences. I have made a comment on A077947 indicating this and have included a python program. Unfortunately the program as written leaves a lot to be desired and so of course I wanted to turn to Stack to see if anyone here knows how to improve the code!
**Here is the code:**
```
a = 1
b = 1
c = 2
d = 5
e = 9
f = 18
for x in range(0, 100):
    print(a, b, c, d, e, f)
    a = a + (36*64**x)
    b = b + (72*64**x)
    c = c + (144*64**x)
    d = d + (288*64**x)
    e = e + (576*64**x)
    f = f + (1152*64**x)
```
**I explain the reasoning behind this as follows:**
>
> The sequence A077947 is generated by 6 digital root preserving sequences
> stitched together; per the Python code these sequences initiate at the
> seed values a-f. The number of iterations required to calculate a given
> A077947 a(n) is ~n/6. The code when executed returns all the values for
> A077947 up to range(x), or ~x\*6 terms of A077947. I find the repeated
> digital roots interesting as I look for periodic digital root preservation
> within sequences as a method to identify patterns within data. For
> example, digital root preserving sequences enable time series analysis of
> large datasets when estimating true-or-false status for alarms in large IT
> ecosystems that undergo maintenance (mod7 environments); such analysis is
> also related to predicting consumer demand / patterns of behavior.
> Appropriating those methods of analysis, carving A077947 into 6 digital
> root preserving sequences was meant to reduce complexity; the Python code
> reproduces A077947 across 6 "channels" with seed values a-f. This long
> paragraph boils down to statement, "The digital roots of the terms of the
> sequence repeat in the pattern (1, 1, 2, 5, 9, 9)." The bigger statement
> is that all sequences whose digital roots repeat with a pattern can be
> partitioned/separated into an equal number of distinct sequences and those
> sequences can be calculated independently. There was a bounty related to
> this sequence.
>
>
>
This code is ugly, but I cannot seem to get the correct answer without coding it this way;
I have not figured out how to write it as a function, because I cannot get the recurrence values to persist properly inside a function.
So of course if this yields good results we hope to link the discussion to the OEIS references.
**Here is a link to the sequence:**
<https://oeis.org/A077947>
|
2019/02/18
|
[
"https://Stackoverflow.com/questions/54752681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1204443/"
] |
Here's an alternative way to do it without a second for loop:
```
sequences = [ 1, 1, 2, 5, 9, 18 ]
multipliers = [ 36, 72, 144, 288, 576, 1152 ]
for x in range(100):
    print(*sequences)
    sequences = [ s + m*64**x for s,m in zip(sequences,multipliers) ]
```
[EDIT] Looking at the values I noticed that this particular sequence could also be obtained with:
N[i+1] = 2 \* N[i] + (-1,0,1 in rotation)
or
N[i+1] = 2 \* N[i] + i mod 3 - 1 *(assuming a zero based index)*
```
i N[i] [-1,0,1] N[i+1]
0 1 -1 --> 2*1 - 1 --> 1
1 1 0 --> 2*1 + 0 --> 2
2 2 1 --> 2*2 + 1 --> 5
3 5 -1 --> 2*5 - 1 --> 9
4 9 0 --> 2*9 + 0 --> 18
...
```
So a simpler loop to produce the sequence could be:
```
n = 1
for i in range(100):
    print(n)
    n = 2*n + i % 3 - 1
```
Using the reduce function from functools can make this even more concise:
```
from functools import reduce
sequence = reduce(lambda s,i: s + [s[-1]*2 + i%3 - 1],range(20),[1])
print(sequence)
>>> [1, 1, 2, 5, 9, 18, 37, 73, 146, 293, 585, 1170, 2341, 4681, 9362, 18725, 37449, 74898, 149797, 299593, 599186]
```
Using your multi-channel approach and my suggested formula this would give:
```
sequences = [ 1, 1, 2, 5, 9, 18 ]
multipliers = [ 36, 72, 144, 288, 576, 1152 ]
allSequences = reduce(lambda ss,x: ss + [[ s + m*64**x for s,m in zip(ss[-1],multipliers) ]],range(100),[sequences])
for seq in allSequences: print(*seq) # print 6 by 6
```
[EDIT2] If all your sequences are going to have a similar pattern (i.e. starting channels, multipliers and calculation formula), you could generalize the printing of such sequences in a function thus only needing one line per sequence:
```
def printSeq(calcNext,sequence,multipliers,count):
    for x in range(count):
        print(*sequence)
        sequence = [ calcNext(x,s,m) for s,m in zip(sequence,multipliers) ]

printSeq(lambda x,s,m:s*2+m*64**x,[1,1,2,5,9,18],multipliers=[36,72,144,288,576,1152],count=100)
```
[EDIT3] Improving on the printSeq function.
I believe you will not always need an array of multipliers to compute the next value in each channel. An improvement on the function would be to provide a channel index to the lambda function instead of a multiplier. This will allow you to use an array of multipliers if you need to, but will also let you use a more general calculation.
```
def printSeq(name,count,calcNext,sequence):
    p = len(sequence)
    for x in range(count):
        print(name, x, ":", "\t".join(str(s) for s in sequence))
        sequence = [ calcNext(x,s,c,p) for c,s in enumerate(sequence) ]
```
The lambda function is given 4 parameters and is expected to return the next sequence value for the specified channel:
```
s : current sequence value for the channel
x : iteration number
c : channel index (zero based)
p : number of channels
```
So, using an array inside the formula would express it like this:
```
printSeq("A077947",100,lambda x,s,c,p: s + [36,72,144,288,576,1152][c] * 64**x, [1,1,2,5,9,18])
```
But you could also use a more general formula that is based on the channel index (and number of channels):
```
printSeq("A077947",100,lambda x,s,c,p: s + 9 * 2**(p*x+c+2), [1,1,2,5,9,18])
```
or ( 6 channels based on 2\*S + i%3 - 1 ):
```
printSeq("A077947",100,lambda x,s,c,p: 64*s + 9*(c%3*2 - (c+2)%3 - 1) ,[1,1,2,5,9,18])
printSeq("A077947",100,lambda x,s,c,p: 64*s + 9*[-3,1,2][c%3],[1,1,2,5,9,18])
```
My reasoning here is that if you have a function that can compute the next value based on the current index and value in the sequence, you should be able to define a striding function that will compute the value that is N indexes farther.
Given F(i,S[i]) --> i+1,S[i+1]
```
F2(i,S[i]) --> i+2,S[i+2] = F(F(i,S[i]))
F3(i,S[i]) --> i+3,S[i+3] = F(F(F(i,S[i])))
...
F6(i,S[i]) --> i+6,S[i+6] = F(F(F(F(F(F(i,S[i]))))))
...
Fn(i,S[i]) --> i+n,S[i+n] = ...
```
This will always work and should not require an array of multipliers. Most of the time it should be possible to simplify Fn using mere algebra.
for example A001045 : F(i,S) = i+1, 2\*S + (-1)\*\*i
```
printSeq("A001045",20,lambda x,s,c,p: 64*s + 21*(-1)**(x*p+c),[0,1,1,3,5,11])
```
Note that from the 3rd value onward, the next value in that sequence can be computed without knowing the index:
A001045: F(S) = 2\*S + 1 - 2\*0\*\*((S+1)%4)
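That last index-free step can be verified in a few lines (a quick check, under the stated caveat that it only holds from the third term of A001045 onward):

```python
def next_jacobsthal(s):
    # 2*s + 1 normally, but 2*s - 1 whenever (s + 1) is a multiple of 4,
    # since 0**k is 1 only when k == 0.
    return 2 * s + 1 - 2 * (0 ** ((s + 1) % 4))

# Regenerate the tail of A001045 starting from its third term:
tail = [1]
while len(tail) < 10:
    tail.append(next_jacobsthal(tail[-1]))
print(tail)  # [1, 3, 5, 11, 21, 43, 85, 171, 341, 683]
```

These match A001045 (0, 1, 1, 3, 5, 11, 21, ...) from the third term on, confirming the index-free recurrence.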
|
This will behave identically to your code, and is arguably prettier. You'll probably see ways to make the magic constants less arbitrary.
```
factors = [ 1, 1, 2, 5, 9, 18 ]
cofactors = [ 36*(2**n) for n in range(6) ]
for x in range(10):
    print(*factors)
    for i in range(6):
        factors[i] = factors[i] + cofactors[i] * 64**x
```
To calculate just one of the subsequences, it would be enough to keep `i` fixed as you iterate.
|
54,752,681
|
I am working on a thesis regarding Jacobsthal sequences (A001045) and how they can be considered as being composed of some number of distinct sub-sequences. I have made a comment on A077947 indicating this and have included a python program. Unfortunately the program as written leaves a lot to be desired and so of course I wanted to turn to Stack to see if anyone here knows how to improve the code!
**Here is the code:**
```
a = 1
b = 1
c = 2
d = 5
e = 9
f = 18
for x in range(0, 100):
    print(a, b, c, d, e, f)
    a = a + (36*64**x)
    b = b + (72*64**x)
    c = c + (144*64**x)
    d = d + (288*64**x)
    e = e + (576*64**x)
    f = f + (1152*64**x)
```
**I explain the reasoning behind this as follows:**
>
> The sequence A077947 is generated by 6 digital root preserving sequences
> stitched together; per the Python code these sequences initiate at the
> seed values a-f. The number of iterations required to calculate a given
> A077947 a(n) is ~n/6. The code when executed returns all the values for
> A077947 up to range(x), or ~x\*6 terms of A077947. I find the repeated
> digital roots interesting as I look for periodic digital root preservation
> within sequences as a method to identify patterns within data. For
> example, digital root preserving sequences enable time series analysis of
> large datasets when estimating true-or-false status for alarms in large IT
> ecosystems that undergo maintenance (mod7 environments); such analysis is
> also related to predicting consumer demand / patterns of behavior.
> Appropriating those methods of analysis, carving A077947 into 6 digital
> root preserving sequences was meant to reduce complexity; the Python code
> reproduces A077947 across 6 "channels" with seed values a-f. This long
> paragraph boils down to statement, "The digital roots of the terms of the
> sequence repeat in the pattern (1, 1, 2, 5, 9, 9)." The bigger statement
> is that all sequences whose digital roots repeat with a pattern can be
> partitioned/separated into an equal number of distinct sequences and those
> sequences can be calculated independently. There was a bounty related to
> this sequence.
>
>
>
This code is ugly, but I cannot seem to get the correct answer without coding it this way;
I have not figured out how to write it as a function, because I cannot get the recurrence values to persist properly inside a function.
So of course if this yields good results we hope to link the discussion to the OEIS references.
**Here is a link to the sequence:**
<https://oeis.org/A077947>
|
2019/02/18
|
[
"https://Stackoverflow.com/questions/54752681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1204443/"
] |
Here's an alternative way to do it without a second for loop:
```
sequences = [ 1, 1, 2, 5, 9, 18 ]
multipliers = [ 36, 72, 144, 288, 576, 1152 ]
for x in range(100):
    print(*sequences)
    sequences = [ s + m*64**x for s,m in zip(sequences,multipliers) ]
```
[EDIT] Looking at the values I noticed that this particular sequence could also be obtained with:
N[i+1] = 2 \* N[i] + (-1,0,1 in rotation)
or
N[i+1] = 2 \* N[i] + i mod 3 - 1 *(assuming a zero based index)*
```
i N[i] [-1,0,1] N[i+1]
0 1 -1 --> 2*1 - 1 --> 1
1 1 0 --> 2*1 + 0 --> 2
2 2 1 --> 2*2 + 1 --> 5
3 5 -1 --> 2*5 - 1 --> 9
4 9 0 --> 2*9 + 0 --> 18
...
```
So a simpler loop to produce the sequence could be:
```
n = 1
for i in range(100):
    print(n)
    n = 2*n + i % 3 - 1
```
Using the reduce function from functools can make this even more concise:
```
from functools import reduce
sequence = reduce(lambda s,i: s + [s[-1]*2 + i%3 - 1],range(20),[1])
print(sequence)
>>> [1, 1, 2, 5, 9, 18, 37, 73, 146, 293, 585, 1170, 2341, 4681, 9362, 18725, 37449, 74898, 149797, 299593, 599186]
```
Using your multi-channel approach and my suggested formula this would give:
```
sequences = [ 1, 1, 2, 5, 9, 18 ]
multipliers = [ 36, 72, 144, 288, 576, 1152 ]
allSequences = reduce(lambda ss,x: ss + [[ s + m*64**x for s,m in zip(ss[-1],multipliers) ]],range(100),[sequences])
for seq in allSequences: print(*seq) # print 6 by 6
```
[EDIT2] If all your sequences are going to have a similar pattern (i.e. starting channels, multipliers and calculation formula), you could generalize the printing of such sequences in a function thus only needing one line per sequence:
```
def printSeq(calcNext,sequence,multipliers,count):
    for x in range(count):
        print(*sequence)
        sequence = [ calcNext(x,s,m) for s,m in zip(sequence,multipliers) ]

printSeq(lambda x,s,m:s*2+m*64**x,[1,1,2,5,9,18],multipliers=[36,72,144,288,576,1152],count=100)
```
[EDIT3] Improving on the printSeq function.
I believe you will not always need an array of multipliers to compute the next value in each channel. An improvement on the function would be to provide a channel index to the lambda function instead of a multiplier. This will allow you to use an array of multipliers if you need to, but will also let you use a more general calculation.
```
def printSeq(name,count,calcNext,sequence):
    p = len(sequence)
    for x in range(count):
        print(name, x, ":", "\t".join(str(s) for s in sequence))
        sequence = [ calcNext(x,s,c,p) for c,s in enumerate(sequence) ]
```
The lambda function is given 4 parameters and is expected to return the next sequence value for the specified channel:
```
s : current sequence value for the channel
x : iteration number
c : channel index (zero based)
p : number of channels
```
So, using an array inside the formula would express it like this:
```
printSeq("A077947",100,lambda x,s,c,p: s + [36,72,144,288,576,1152][c] * 64**x, [1,1,2,5,9,18])
```
But you could also use a more general formula that is based on the channel index (and number of channels):
```
printSeq("A077947",100,lambda x,s,c,p: s + 9 * 2**(p*x+c+2), [1,1,2,5,9,18])
```
or ( 6 channels based on 2\*S + i%3 - 1 ):
```
printSeq("A077947",100,lambda x,s,c,p: 64*s + 9*(c%3*2 - (c+2)%3 - 1) ,[1,1,2,5,9,18])
printSeq("A077947",100,lambda x,s,c,p: 64*s + 9*[-3,1,2][c%3],[1,1,2,5,9,18])
```
My reasoning here is that if you have a function that can compute the next value based on the current index and value in the sequence, you should be able to define a striding function that will compute the value that is N indexes farther.
Given F(i,S[i]) --> i+1,S[i+1]
```
F2(i,S[i]) --> i+2,S[i+2] = F(F(i,S[i]))
F3(i,S[i]) --> i+3,S[i+3] = F(F(F(i,S[i])))
...
F6(i,S[i]) --> i+6,S[i+6] = F(F(F(F(F(F(i,S[i]))))))
...
Fn(i,S[i]) --> i+n,S[i+n] = ...
```
This will always work and should not require an array of multipliers. Most of the time it should be possible to simplify Fn using mere algebra.
for example A001045 : F(i,S) = i+1, 2\*S + (-1)\*\*i
```
printSeq("A001045",20,lambda x,s,c,p: 64*s + 21*(-1)**(x*p+c),[0,1,1,3,5,11])
```
Note that from the 3rd value onward, the next value in that sequence can be computed without knowing the index:
A001045: F(S) = 2\*S + 1 - 2\*0\*\*((S+1)%4)
|
Here is an alternative version using generators:
```
def Jacobsthal():
    roots = [1, 1, 2, 5, 9, 18]
    x = 0
    while True:
        yield list(roots)  # yield a copy, not the internal list
        for i in range(6):
            roots[i] += 36 * 2**i * 64**x
        x += 1
```
And here is how to use it (`next(j)` is the idiomatic way to advance a generator):
```
j = Jacobsthal()
for _ in range(10):
    print(next(j))
```
|
54,752,681
|
I am working on a thesis regarding Jacobsthal sequences (A001045) and how they can be considered as being composed of some number of distinct sub-sequences. I have made a comment on A077947 indicating this and have included a python program. Unfortunately the program as written leaves a lot to be desired and so of course I wanted to turn to Stack to see if anyone here knows how to improve the code!
**Here is the code:**
```
a = 1
b = 1
c = 2
d = 5
e = 9
f = 18
for x in range(0, 100):
    print(a, b, c, d, e, f)
    a = a + (36*64**x)
    b = b + (72*64**x)
    c = c + (144*64**x)
    d = d + (288*64**x)
    e = e + (576*64**x)
    f = f + (1152*64**x)
```
**I explain the reasoning behind this as follows:**
>
> The sequence A077947 is generated by 6 digital root preserving sequences
> stitched together; per the Python code these sequences initiate at the
> seed values a-f. The number of iterations required to calculate a given
> A077947 a(n) is ~n/6. The code when executed returns all the values for
> A077947 up to range(x), or ~x\*6 terms of A077947. I find the repeated
> digital roots interesting as I look for periodic digital root preservation
> within sequences as a method to identify patterns within data. For
> example, digital root preserving sequences enable time series analysis of
> large datasets when estimating true-or-false status for alarms in large IT
> ecosystems that undergo maintenance (mod7 environments); such analysis is
> also related to predicting consumer demand / patterns of behavior.
> Appropriating those methods of analysis, carving A077947 into 6 digital
> root preserving sequences was meant to reduce complexity; the Python code
> reproduces A077947 across 6 "channels" with seed values a-f. This long
> paragraph boils down to statement, "The digital roots of the terms of the
> sequence repeat in the pattern (1, 1, 2, 5, 9, 9)." The bigger statement
> is that all sequences whose digital roots repeat with a pattern can be
> partitioned/separated into an equal number of distinct sequences and those
> sequences can be calculated independently. There was a bounty related to
> this sequence.
>
>
>
This code is ugly, but I cannot seem to get the correct answer without coding it this way;
I have not figured out how to write this as a function because I cannot get the recurrence values to persist properly inside a function.
So of course if this yields good results we hope to link the discussion to the OEIS references.
**Here is a link to the sequence:**
<https://oeis.org/A077947>
|
2019/02/18
|
[
"https://Stackoverflow.com/questions/54752681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1204443/"
] |
Here's an alternative way to do it without a second for loop:
```
sequences = [ 1, 1, 2, 5, 9, 18 ]
multipliers = [ 36, 72, 144, 288, 576, 1152 ]
for x in range(100):
print(*sequences)
sequences = [ s + m*64**x for s,m in zip(sequences,multipliers) ]
```
[EDIT] Looking at the values I noticed that this particular sequence could also be obtained with:
N[i+1] = 2 \* N[i] + (-1,0,1 in rotation)
or
N[i+1] = 2 \* N[i] + i mod 3 - 1 *(assuming a zero based index)*
```
i N[i] [-1,0,1] N[i+1]
0 1 -1 --> 2*1 - 1 --> 1
1 1 0 --> 2*1 + 0 --> 2
2 2 1 --> 2*2 + 1 --> 5
3 5 -1 --> 2*5 - 1 --> 9
4 9 0 --> 2*9 + 0 --> 18
...
```
So a simpler loop to produce the sequence could be:
```
n = 1
for i in range(100):
print(n)
n = 2*n + i % 3 - 1
```
Using the reduce function from functools can make this even more concise:
```
from functools import reduce
sequence = reduce(lambda s,i: s + [s[-1]*2 + i%3 - 1],range(20),[1])
print(sequence)
>>> [1, 1, 2, 5, 9, 18, 37, 73, 146, 293, 585, 1170, 2341, 4681, 9362, 18725, 37449, 74898, 149797, 299593, 599186]
```
Using your multi-channel approach and my suggested formula this would give:
```
sequences = [ 1, 1, 2, 5, 9, 18 ]
multipliers = [ 36, 72, 144, 288, 576, 1152 ]
allSequences = reduce(lambda ss,x: ss + [[ s + m*64**x for s,m in zip(ss[-1],multipliers) ]],range(100),[sequences])
for seq in allSequences: print(*seq) # print 6 by 6
```
[EDIT2] If all your sequences are going to have a similar pattern (i.e. starting channels, multipliers and calculation formula), you could generalize the printing of such sequences in a function thus only needing one line per sequence:
```
def printSeq(calcNext,sequence,multipliers,count):
for x in range(count):
print(*sequence)
sequence = [ calcNext(x,s,m) for s,m in zip(sequence,multipliers) ]
printSeq(lambda x,s,m:s+m*64**x,[1,1,2,5,9,18],multipliers=[36,72,144,288,576,1152],count=100)
```
[EDIT3] Improving on the printSeq function.
I believe you will not always need an array of multipliers to compute the next value in each channel. An improvement on the function would be to provide a channel index to the lambda function instead of a multiplier. This still allows you to use an array of multipliers if you need to, but also lets you use a more general calculation.
```
def printSeq(name,count,calcNext,sequence):
p = len(sequence)
for x in range(count):
print(name, x,":","\t".join(str(s) for s in sequence))
sequence = [ calcNext(x,s,c,p) for c,s in enumerate(sequence) ]
```
The lambda function is given 4 parameters and is expected to return the next sequence value for the specified channel:
```
s : current sequence value for the channel
x : iteration number
c : channel index (zero based)
p : number of channels
```
So, using an array inside the formula would express it like this:
```
printSeq("A077947",100,lambda x,s,c,p: s + [36,72,144,288,576,1152][c] * 64**x, [1,1,2,5,9,18])
```
But you could also use a more general formula that is based on the channel index (and number of channels):
```
printSeq("A077947",100,lambda x,s,c,p: s + 9 * 2**(p*x+c+2), [1,1,2,5,9,18])
```
or ( 6 channels based on 2\*S + i%3 - 1 ):
```
printSeq("A077947",100,lambda x,s,c,p: 64*s + 9*(c%3*2 - (c+2)%3 - 1) ,[1,1,2,5,9,18])
printSeq("A077947",100,lambda x,s,c,p: 64*s + 9*[-3,1,2][c%3],[1,1,2,5,9,18])
```
My reasoning here is that if you have a function that can compute the next value based on the current index and value in the sequence, you should be able to define a striding function that will compute the value that is N indexes farther.
Given F(i,S[i]) --> i+1,S[i+1]
```
F2(i,S[i]) --> i+2,S[i+2] = F(F(i,S[i]))
F3(i,S[i]) --> i+3,S[i+3] = F(F(F(i,S[i])))
...
F6(i,S[i]) --> i+6,S[i+6] = F(F(F(F(F(F(i,S[i]))))))
...
Fn(i,S[i]) --> i+n,S[i+n] = ...
```
This will always work and should not require an array of multipliers. Most of the time it should be possible to simplify Fn using mere algebra.
for example A001045 : F(i,S) = i+1, 2\*S + (-1)\*\*i
```
printSeq("A001045",20,lambda x,s,c,p: 64*s + 21*(-1)**(x*p+c),[0,1,1,3,5,11])
```
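To make the striding idea concrete, here is a small sketch (function names are mine) checking that composing the one-step map F of A001045 six times agrees with the algebraically simplified closed form:

```python
def F(i, s):
    # one step of A001045 (Jacobsthal): a(n+1) = 2*a(n) + (-1)**n
    return i + 1, 2*s + (-1)**i

def F6(i, s):
    # stride six steps by composing F with itself
    for _ in range(6):
        i, s = F(i, s)
    return i, s

def F6_closed(i, s):
    # same stride simplified with algebra:
    # 64*s + (32 - 16 + 8 - 4 + 2 - 1)*(-1)**i = 64*s + 21*(-1)**i
    return i + 6, 64*s + 21*(-1)**i

print(F6(0, 0))                     # (6, 21) -- a(6) of A001045 is 21
print(F6(0, 0) == F6_closed(0, 0))  # True
```

Since the recurrence is linear in s, the closed form holds for any starting pair (i, s), which is exactly what lets each channel be advanced independently.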
Note that from the 3rd value onward, the next value in that sequence can be computed without knowing the index:
A001045: F(S) = 2\*S + 1 - 2\*0\*\*((S+1)%4)
|
As you can see from the OEIS annotation, you only need 3 initial values and a single recurrence over the 3 previous sequence elements, a(n-1), a(n-2) and a(n-3). You can then easily generate the series.
```
# a(n) = a(n-1) + a(n-2) + 2*a(n-3)
# start values: 1, 1, 2
a1, a2, a3 = 1, 1, 2   # initial values
m3, m2, m1 = 1, 1, 2   # multipliers for a(n-1):m3, a(n-2):m2 and a(n-3):m1
print(a1, a2, a3, end=' ')
for i in range(16):
    a1, a2, a3 = a2, a3, a3*m3 + a2*m2 + a1*m1
    print(a3, end=' ')
```
This gives you 1 1 2 5 9 18 37 73 146 293 585 1170 2341 4681 9362 18725 ...
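The same pattern generalizes to any linear recurrence; a hypothetical helper (name and signature are mine) that generates the series from its multipliers and seed values:

```python
def lin_rec(coeffs, seeds, count):
    """Generate `count` terms of a linear recurrence.

    coeffs[k] multiplies the term k+1 steps back, so for
    a(n) = a(n-1) + a(n-2) + 2*a(n-3) pass coeffs=(1, 1, 2).
    """
    window = list(seeds)
    out = []
    for _ in range(count):
        out.append(window[0])
        nxt = sum(c * window[-1 - k] for k, c in enumerate(coeffs))
        window = window[1:] + [nxt]
    return out

print(lin_rec((1, 1, 2), (1, 1, 2), 10))
# [1, 1, 2, 5, 9, 18, 37, 73, 146, 293]
```

With coeffs=(1, 2) and seeds (0, 1) the same helper produces A001045 itself.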
|
12,884,512
|
I am taking my first steps in learning Python, so please excuse my questions. I want to run the code below (taken from: <http://docs.python.org/library/ssl.html>):
```
import socket, ssl, pprint
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# require a certificate from the server
ssl_sock = ssl.wrap_socket(s,
ca_certs="F:/cert",
cert_reqs=ssl.CERT_REQUIRED)
ssl_sock.connect(('www.versign.com', 443))
print repr(ssl_sock.getpeername())
print ssl_sock.cipher()
print pprint.pformat(ssl_sock.getpeercert())
# Set a simple HTTP request -- use httplib in actual code.
ssl_sock.write("""GET / HTTP/1.0\r
Host: www.verisign.com\r\n\r\n""")
# Read a chunk of data. Will not necessarily
# read all the data returned by the server.
data = ssl_sock.read()
# note that closing the SSLSocket will also close the underlying socket
ssl_sock.close()
```
I got the following errors:
>
> Traceback (most recent call last):
> File "C:\Users\e\workspace\PythonTesting\source\HelloWorld.py", line 38, in
> ssl\_sock.connect(('www.versign.com', 443))
>
>
> File "C:\Python27\lib\ssl.py", line 331, in connect
>
>
>
> ```
> self._real_connect(addr, False)
>
> ```
>
> File "C:\Python27\lib\ssl.py", line 314, in \_real\_connect
>
>
> self.ca\_certs, self.ciphers)
>
>
> ssl.SSLError: [Errno 185090050] \_ssl.c:340: error:0B084002:x509 certificate routines:X509\_load\_cert\_crl\_file:system lib
>
>
>
Python's error reporting here does not seem very helpful for finding the source of the problem, though I might be mistaken. Can anybody tell me what the problem in the code is?
|
2012/10/14
|
[
"https://Stackoverflow.com/questions/12884512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1476749/"
] |
Your code is referring to a certificate *file* on drive 'F:' (using the `ca_certs` parameter), which is not found during execution -- is there one?
See the relevant [documentation](http://docs.python.org/library/ssl.html#ssl.wrap_socket):
>
> The ca\_certs file contains a set of concatenated “certification
> authority” certificates, which are used to validate certificates
> passed from the other end of the connection.
>
>
>
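On Python 3, the stdlib now prefers `ssl.SSLContext` over `ssl.wrap_socket`; a minimal sketch of building a verifying client context (the helper name is mine):

```python
import ssl

def make_verified_context(cafile=None):
    """Client-side SSLContext that requires certificate verification.

    With cafile=None the system trust store is used; pass a PEM path
    (like the question's CA file) to validate against specific CAs.
    """
    ctx = ssl.create_default_context(cafile=cafile)
    # create_default_context already sets CERT_REQUIRED for client use
    assert ctx.verify_mode == ssl.CERT_REQUIRED
    return ctx

ctx = make_verified_context()
# ctx.wrap_socket(sock, server_hostname="www.verisign.com") would then
# verify the peer certificate during the handshake.
```

This replaces both the `ca_certs` and `cert_reqs` parameters of the deprecated `wrap_socket` call.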
|
Does the certificate referenced exist on your filesystem? I think that error is in response to invalid cert from this code:
```
ssl_sock = ssl.wrap_socket(s, ca_certs="F:/cert", cert_reqs=ssl.CERT_REQUIRED)
```
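A quick way to rule out the missing-file case before wrapping the socket (the helper name is mine):

```python
import os

def check_ca_bundle(ca_path):
    """Fail fast with a clear message when the CA bundle file is missing."""
    if not os.path.isfile(ca_path):
        raise IOError("CA bundle not found: " + ca_path)
    return ca_path

# check_ca_bundle("F:/cert") would raise immediately if the file is absent,
# instead of the opaque X509_load_cert_crl_file error from OpenSSL.
```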
|
12,884,512
|
I am taking my first steps in learning Python, so please excuse my questions. I want to run the code below (taken from: <http://docs.python.org/library/ssl.html>):
```
import socket, ssl, pprint
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# require a certificate from the server
ssl_sock = ssl.wrap_socket(s,
ca_certs="F:/cert",
cert_reqs=ssl.CERT_REQUIRED)
ssl_sock.connect(('www.versign.com', 443))
print repr(ssl_sock.getpeername())
print ssl_sock.cipher()
print pprint.pformat(ssl_sock.getpeercert())
# Set a simple HTTP request -- use httplib in actual code.
ssl_sock.write("""GET / HTTP/1.0\r
Host: www.verisign.com\r\n\r\n""")
# Read a chunk of data. Will not necessarily
# read all the data returned by the server.
data = ssl_sock.read()
# note that closing the SSLSocket will also close the underlying socket
ssl_sock.close()
```
I got the following errors:
>
> Traceback (most recent call last):
> File "C:\Users\e\workspace\PythonTesting\source\HelloWorld.py", line 38, in
> ssl\_sock.connect(('www.versign.com', 443))
>
>
> File "C:\Python27\lib\ssl.py", line 331, in connect
>
>
>
> ```
> self._real_connect(addr, False)
>
> ```
>
> File "C:\Python27\lib\ssl.py", line 314, in \_real\_connect
>
>
> self.ca\_certs, self.ciphers)
>
>
> ssl.SSLError: [Errno 185090050] \_ssl.c:340: error:0B084002:x509 certificate routines:X509\_load\_cert\_crl\_file:system lib
>
>
>
Python's error reporting here does not seem very helpful for finding the source of the problem, though I might be mistaken. Can anybody tell me what the problem in the code is?
|
2012/10/14
|
[
"https://Stackoverflow.com/questions/12884512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1476749/"
] |
This is one area where the Python standard library is known to be difficult to use. Instead you may want to use the requests library. Documentation on sending certificates is available at: <http://docs.python-requests.org/en/latest/user/advanced/#ssl-cert-verification>
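For reference, a minimal sketch of the requests approach (the URL and bundle path are illustrative, and the helper name is mine):

```python
import requests

def fetch_with_ca(url, ca_bundle):
    """GET url, validating the server certificate against ca_bundle (a PEM file)."""
    resp = requests.get(url, verify=ca_bundle, timeout=10)
    resp.raise_for_status()
    return resp.text

# fetch_with_ca("https://www.verisign.com/", "F:/cert.pem")
```

Passing `verify=True` (the default) uses the bundled CA store instead of a custom file.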
|
Does the certificate referenced exist on your filesystem? I think that error is in response to invalid cert from this code:
```
ssl_sock = ssl.wrap_socket(s, ca_certs="F:/cert", cert_reqs=ssl.CERT_REQUIRED)
```
|
68,900,182
|
If I have a string which is the same as a python data type and I would like to check if another variable is that type how would I do it? Example below.
```
dtype = 'str'
x = 'hello'
bool = type(x) == dtype
```
The above obviously returns False but I'd like to check that type('hello') is a string.
|
2021/08/23
|
[
"https://Stackoverflow.com/questions/68900182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16737078/"
] |
You can use `eval`:
```
bool = type(x) is eval(dtype)
```
but beware, `eval` will execute any python code, so if you're taking `dtype` as user input, they can execute their own code in this line.
|
If your code *actually* looks like the example you showed and `dtype` isn't coming from user input, then also keep in mind that `str` (as a value in Python) is a valid object which represents the string type. Consider
```
dtype = str
x = 'hello'
print(isinstance(x, dtype))
```
`str` is a value like any other and can be assigned to variables. No `eval` magic required.
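If the type name really does arrive as a string (say, from a config file), a small whitelist avoids `eval` entirely (the dict and helper names are mine):

```python
# a small whitelist mapping type names to the types themselves
TYPES = {"str": str, "int": int, "float": float, "bool": bool, "list": list}

def is_type(value, type_name):
    """True if value is an instance of the type named by type_name."""
    try:
        return isinstance(value, TYPES[type_name])
    except KeyError:
        raise ValueError("unknown type name: " + type_name)

print(is_type('hello', 'str'))  # True
print(is_type(3, 'str'))        # False
```

Unlike `eval`, an unexpected name raises a clear `ValueError` instead of executing arbitrary code.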
|
68,900,182
|
If I have a string which is the same as a python data type and I would like to check if another variable is that type how would I do it? Example below.
```
dtype = 'str'
x = 'hello'
bool = type(x) == dtype
```
The above obviously returns False but I'd like to check that type('hello') is a string.
|
2021/08/23
|
[
"https://Stackoverflow.com/questions/68900182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16737078/"
] |
You can use `eval`:
```
bool = type(x) is eval(dtype)
```
but beware, `eval` will execute any python code, so if you're taking `dtype` as user input, they can execute their own code in this line.
|
I think the best way to do this verification is to use **isinstance**, like:
```
isinstance(x, str) # returns True
```
From the docs:
>
> `isinstance(object, classinfo)`
>
> Return True if the object argument is an instance of the classinfo argument, or of a (direct, indirect or virtual) subclass thereof. If object is not an object of the given type, the function always returns False. If classinfo is a tuple of type objects (or recursively, other such tuples), return True if object is an instance of any of the types. If classinfo is not a type or tuple of types and such tuples, a TypeError exception is raised.
>
<https://docs.python.org/3/library/functions.html#isinstance>
|
68,900,182
|
If I have a string which is the same as a python data type and I would like to check if another variable is that type how would I do it? Example below.
```
dtype = 'str'
x = 'hello'
bool = type(x) == dtype
```
The above obviously returns False but I'd like to check that type('hello') is a string.
|
2021/08/23
|
[
"https://Stackoverflow.com/questions/68900182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16737078/"
] |
Don't write:
-------------
```
bool = type(x) == dtype
```
*because `dtype` here is a variable holding the string `'str'`, not the type itself, so the comparison can never be true.*
Also, the string type in Python is itself an object, so refer to it as `str` rather than writing `dtype = 'str'`, for example:
```
type(x) == str
```
I fixed your code; just try this:
-------------------------------------
```
x = 'hello'
if type(x) == str:
    print(True)
else:
    print(False)
```
***It's simple code, but Python also provides shortcuts for this.***
>
> Try that, and happy coding!
>
>
>
|
If your code *actually* looks like the example you showed and `dtype` isn't coming from user input, then also keep in mind that `str` (as a value in Python) is a valid object which represents the string type. Consider
```
dtype = str
x = 'hello'
print(isinstance(x, dtype))
```
`str` is a value like any other and can be assigned to variables. No `eval` magic required.
|
68,900,182
|
If I have a string which is the same as a python data type and I would like to check if another variable is that type how would I do it? Example below.
```
dtype = 'str'
x = 'hello'
bool = type(x) == dtype
```
The above obviously returns False but I'd like to check that type('hello') is a string.
|
2021/08/23
|
[
"https://Stackoverflow.com/questions/68900182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16737078/"
] |
Don't write:
-------------
```
bool = type(x) == dtype
```
*because `dtype` here is a variable holding the string `'str'`, not the type itself, so the comparison can never be true.*
Also, the string type in Python is itself an object, so refer to it as `str` rather than writing `dtype = 'str'`, for example:
```
type(x) == str
```
I fixed your code; just try this:
-------------------------------------
```
x = 'hello'
if type(x) == str:
    print(True)
else:
    print(False)
```
***It's simple code, but Python also provides shortcuts for this.***
>
> Try that, and happy coding!
>
>
>
|
I think the best way to do this verification is to use **isinstance**, like:
```
isinstance(x, str) # returns True
```
From the docs:
>
> `isinstance(object, classinfo)`
>
> Return True if the object argument is an instance of the classinfo argument, or of a (direct, indirect or virtual) subclass thereof. If object is not an object of the given type, the function always returns False. If classinfo is a tuple of type objects (or recursively, other such tuples), return True if object is an instance of any of the types. If classinfo is not a type or tuple of types and such tuples, a TypeError exception is raised.
>
<https://docs.python.org/3/library/functions.html#isinstance>
|
24,944,863
|
I would like to use the Decimal() data type in python and convert it to an integer and exponent so I can send that data to a microcontroller/plc with full precision and decimal control. <https://docs.python.org/2/library/decimal.html>
I have got it to work, but it is hackish; does anyone know a better way? If not what path would I take to write a lower level "as\_int()" function myself?
Example code:
```
from decimal import *
d=Decimal('3.14159')
t=d.as_tuple()
if t[0] == 0:
sign=1
else:
sign=-1
digits= t[1]
theExponent=t[2]
theInteger=sign * int(''.join(map(str,digits)))
theExponent
theInteger
```
For those who haven't programmed PLCs: my alternative is to use an int and declare the decimal point on both systems, or to use a floating point (which only some PLCs support, and which is lossy). So you can see why being able to do this would be awesome!
Thanks in advance!
|
2014/07/24
|
[
"https://Stackoverflow.com/questions/24944863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3862210/"
] |
```
from functools import reduce # Only in Python 3, omit this in Python 2.x
from decimal import *
d = Decimal('3.14159')
t = d.as_tuple()
theInteger = reduce(lambda rst, x: rst * 10 + x, t.digits)
theExponent = t.exponent
```
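One caveat: `as_tuple()` also returns a sign field (0 for positive, 1 for negative) that the reduce above drops. A sketch that keeps it (the function name is mine):

```python
from functools import reduce
from decimal import Decimal

def to_int_exp(d):
    """Decompose a Decimal into (integer, exponent), keeping the sign."""
    sign, digits, exponent = d.as_tuple()
    integer = reduce(lambda acc, digit: acc * 10 + digit, digits, 0)
    return (-integer if sign else integer), exponent

print(to_int_exp(Decimal('-3.14159')))  # (-314159, -5)
```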
|
```
from decimal import *
d = Decimal('3.14159')
t = d.as_tuple()
digits = t.digits
theInteger = 0
for x in range(len(digits)):
    # 10**(len(digits) - x - 1): the first digit is the most significant
    theInteger = theInteger + digits[x] * 10**(len(digits) - x - 1)
```
|
24,944,863
|
I would like to use the Decimal() data type in python and convert it to an integer and exponent so I can send that data to a microcontroller/plc with full precision and decimal control. <https://docs.python.org/2/library/decimal.html>
I have got it to work, but it is hackish; does anyone know a better way? If not what path would I take to write a lower level "as\_int()" function myself?
Example code:
```
from decimal import *
d=Decimal('3.14159')
t=d.as_tuple()
if t[0] == 0:
sign=1
else:
sign=-1
digits= t[1]
theExponent=t[2]
theInteger=sign * int(''.join(map(str,digits)))
theExponent
theInteger
```
For those who haven't programmed PLCs: my alternative is to use an int and declare the decimal point on both systems, or to use a floating point (which only some PLCs support, and which is lossy). So you can see why being able to do this would be awesome!
Thanks in advance!
|
2014/07/24
|
[
"https://Stackoverflow.com/questions/24944863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3862210/"
] |
You could do this :
[ This is 3 times faster than the other methods ]
```
d=Decimal('3.14159')
list_d = str(d).split('.')
# Converting the decimal to string and splitting it at the decimal point
# If decimal point exists => Negative exponent
# i.e 3.14159 => "3", "14159"
# exponent = -len("14159") = -5
# integer = int("3"+"14159") = 314159
if len(list_d) == 2:
# Exponent is the negative of length of no of digits after decimal point
exponent = -len(list_d[1])
integer = int(list_d[0] + list_d[1])
# If the decimal point does not exist => Positive / Zero exponent
# 3400
# exponent = len("3400") - len("34") = 2
# integer = int("34") = 34
else:
str_dec = list_d[0].rstrip('0')
exponent = len(list_d[0]) - len(str_dec)
integer = int(str_dec)
print(integer, exponent)
```
### Performance testing
```
def to_int_exp(decimal_instance):
list_d = str(decimal_instance).split('.')
if len(list_d) == 2:
# Negative exponent
exponent = -len(list_d[1])
integer = int(list_d[0] + list_d[1])
else:
str_dec = list_d[0].rstrip('0')
# Positive exponent
exponent = len(list_d[0]) - len(str_dec)
integer = int(str_dec)
return integer, exponent
def to_int_exp1(decimal_instance):
t=decimal_instance.as_tuple()
if t[0] == 0:
sign=1
else:
sign=-1
digits= t[1]
exponent = t[2]
integer = sign * int(''.join(map(str,digits)))
return integer, exponent
```
Calculating the time taken for 100,000 loops for both methods :
```
import time, random
from decimal import Decimal

ttaken = time.time()
for i in range(100000):
d = Decimal(random.uniform(-3, +3))
to_int_exp(d)
ttaken = time.time() - ttaken
print(ttaken)
```
Time taken for string parsing method : 1.56606507301
```
ttaken = time.time()
for i in range(100000):
d = Decimal(random.uniform(-3, +3))
to_int_exp1(d)
ttaken = time.time() - ttaken
print(ttaken)
```
Time taken for conversion to tuple then extract method : 4.67159295082
|
```
from functools import reduce # Only in Python 3, omit this in Python 2.x
from decimal import *
d = Decimal('3.14159')
t = d.as_tuple()
theInteger = reduce(lambda rst, x: rst * 10 + x, t.digits)
theExponent = t.exponent
```
|
24,944,863
|
I would like to use the Decimal() data type in python and convert it to an integer and exponent so I can send that data to a microcontroller/plc with full precision and decimal control. <https://docs.python.org/2/library/decimal.html>
I have got it to work, but it is hackish; does anyone know a better way? If not what path would I take to write a lower level "as\_int()" function myself?
Example code:
```
from decimal import *
d=Decimal('3.14159')
t=d.as_tuple()
if t[0] == 0:
sign=1
else:
sign=-1
digits= t[1]
theExponent=t[2]
theInteger=sign * int(''.join(map(str,digits)))
theExponent
theInteger
```
For those who haven't programmed PLCs: my alternative is to use an int and declare the decimal point on both systems, or to use a floating point (which only some PLCs support, and which is lossy). So you can see why being able to do this would be awesome!
Thanks in advance!
|
2014/07/24
|
[
"https://Stackoverflow.com/questions/24944863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3862210/"
] |
```
from functools import reduce # Only in Python 3, omit this in Python 2.x
from decimal import *
d = Decimal('3.14159')
t = d.as_tuple()
theInteger = reduce(lambda rst, x: rst * 10 + x, t.digits)
theExponent = t.exponent
```
|
Get the exponent directly from the tuple as you were:
```
exponent = d.as_tuple()[2]
```
Then multiply by the proper power of 10:
```
i = int(d * Decimal('10')**-exponent)
```
Putting it all together:
```
from decimal import Decimal
_ten = Decimal('10')
def int_exponent(d):
exponent = d.as_tuple()[2]
int_part = int(d * (_ten ** -exponent))
return int_part, exponent
```
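As a sanity check for the PLC side, the (integer, exponent) pair round-trips back to the original Decimal with `scaleb`; this sketch repeats the decomposition so it runs standalone (the `recompose` name is mine):

```python
from decimal import Decimal

_ten = Decimal('10')

def int_exponent(d):
    # the decomposition described above
    exponent = d.as_tuple()[2]
    return int(d * (_ten ** -exponent)), exponent

def recompose(integer, exponent):
    """Rebuild the original Decimal from (integer, exponent)."""
    return Decimal(integer).scaleb(exponent)

d = Decimal('3.14159')
i, e = int_exponent(d)
print(i, e)                   # 314159 -5
print(recompose(i, e) == d)   # True
```

`scaleb(exponent)` multiplies by 10**exponent, exactly undoing the decomposition for values within the context precision.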
|
24,944,863
|
I would like to use the Decimal() data type in python and convert it to an integer and exponent so I can send that data to a microcontroller/plc with full precision and decimal control. <https://docs.python.org/2/library/decimal.html>
I have got it to work, but it is hackish; does anyone know a better way? If not what path would I take to write a lower level "as\_int()" function myself?
Example code:
```
from decimal import *
d=Decimal('3.14159')
t=d.as_tuple()
if t[0] == 0:
sign=1
else:
sign=-1
digits= t[1]
theExponent=t[2]
theInteger=sign * int(''.join(map(str,digits)))
theExponent
theInteger
```
For those who haven't programmed PLCs: my alternative is to use an int and declare the decimal point on both systems, or to use a floating point (which only some PLCs support, and which is lossy). So you can see why being able to do this would be awesome!
Thanks in advance!
|
2014/07/24
|
[
"https://Stackoverflow.com/questions/24944863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3862210/"
] |
You could do this :
[ This is 3 times faster than the other methods ]
```
d=Decimal('3.14159')
list_d = str(d).split('.')
# Converting the decimal to string and splitting it at the decimal point
# If decimal point exists => Negative exponent
# i.e 3.14159 => "3", "14159"
# exponent = -len("14159") = -5
# integer = int("3"+"14159") = 314159
if len(list_d) == 2:
# Exponent is the negative of length of no of digits after decimal point
exponent = -len(list_d[1])
integer = int(list_d[0] + list_d[1])
# If the decimal point does not exist => Positive / Zero exponent
# 3400
# exponent = len("3400") - len("34") = 2
# integer = int("34") = 34
else:
str_dec = list_d[0].rstrip('0')
exponent = len(list_d[0]) - len(str_dec)
integer = int(str_dec)
print(integer, exponent)
```
### Performance testing
```
def to_int_exp(decimal_instance):
list_d = str(decimal_instance).split('.')
if len(list_d) == 2:
# Negative exponent
exponent = -len(list_d[1])
integer = int(list_d[0] + list_d[1])
else:
str_dec = list_d[0].rstrip('0')
# Positive exponent
exponent = len(list_d[0]) - len(str_dec)
integer = int(str_dec)
return integer, exponent
def to_int_exp1(decimal_instance):
t=decimal_instance.as_tuple()
if t[0] == 0:
sign=1
else:
sign=-1
digits= t[1]
exponent = t[2]
integer = sign * int(''.join(map(str,digits)))
return integer, exponent
```
Calculating the time taken for 100,000 loops for both methods :
```
import time, random
from decimal import Decimal

ttaken = time.time()
for i in range(100000):
d = Decimal(random.uniform(-3, +3))
to_int_exp(d)
ttaken = time.time() - ttaken
print(ttaken)
```
Time taken for string parsing method : 1.56606507301
```
ttaken = time.time()
for i in range(100000):
d = Decimal(random.uniform(-3, +3))
to_int_exp1(d)
ttaken = time.time() - ttaken
print(ttaken)
```
Time taken for conversion to tuple then extract method : 4.67159295082
|
```
from decimal import *
d = Decimal('3.14159')
t = d.as_tuple()
digits = t.digits
theInteger = 0
for x in range(len(digits)):
    # 10**(len(digits) - x - 1): the first digit is the most significant
    theInteger = theInteger + digits[x] * 10**(len(digits) - x - 1)
```
|
24,944,863
|
I would like to use the Decimal() data type in python and convert it to an integer and exponent so I can send that data to a microcontroller/plc with full precision and decimal control. <https://docs.python.org/2/library/decimal.html>
I have got it to work, but it is hackish; does anyone know a better way? If not what path would I take to write a lower level "as\_int()" function myself?
Example code:
```
from decimal import *
d=Decimal('3.14159')
t=d.as_tuple()
if t[0] == 0:
sign=1
else:
sign=-1
digits= t[1]
theExponent=t[2]
theInteger=sign * int(''.join(map(str,digits)))
theExponent
theInteger
```
For those who haven't programmed PLCs: my alternative is to use an int and declare the decimal point on both systems, or to use a floating point (which only some PLCs support, and which is lossy). So you can see why being able to do this would be awesome!
Thanks in advance!
|
2014/07/24
|
[
"https://Stackoverflow.com/questions/24944863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3862210/"
] |
Get the exponent directly from the tuple as you were:
```
exponent = d.as_tuple()[2]
```
Then multiply by the proper power of 10:
```
i = int(d * Decimal('10')**-exponent)
```
Putting it all together:
```
from decimal import Decimal
_ten = Decimal('10')
def int_exponent(d):
exponent = d.as_tuple()[2]
int_part = int(d * (_ten ** -exponent))
return int_part, exponent
```
|
```
from decimal import *
d = Decimal('3.14159')
t = d.as_tuple()
digits = t.digits
theInteger = 0
for x in range(len(digits)):
    # 10**(len(digits) - x - 1): the first digit is the most significant
    theInteger = theInteger + digits[x] * 10**(len(digits) - x - 1)
```
|
24,944,863
|
I would like to use the Decimal() data type in python and convert it to an integer and exponent so I can send that data to a microcontroller/plc with full precision and decimal control. <https://docs.python.org/2/library/decimal.html>
I have got it to work, but it is hackish; does anyone know a better way? If not what path would I take to write a lower level "as\_int()" function myself?
Example code:
```
from decimal import *
d=Decimal('3.14159')
t=d.as_tuple()
if t[0] == 0:
sign=1
else:
sign=-1
digits= t[1]
theExponent=t[2]
theInteger=sign * int(''.join(map(str,digits)))
theExponent
theInteger
```
For those who haven't programmed PLCs: my alternative is to use an int and declare the decimal point on both systems, or to use a floating point (which only some PLCs support, and which is lossy). So you can see why being able to do this would be awesome!
Thanks in advance!
|
2014/07/24
|
[
"https://Stackoverflow.com/questions/24944863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3862210/"
] |
You could do this :
[ This is 3 times faster than the other methods ]
```
d=Decimal('3.14159')
list_d = str(d).split('.')
# Converting the decimal to string and splitting it at the decimal point
# If decimal point exists => Negative exponent
# i.e 3.14159 => "3", "14159"
# exponent = -len("14159") = -5
# integer = int("3"+"14159") = 314159
if len(list_d) == 2:
# Exponent is the negative of length of no of digits after decimal point
exponent = -len(list_d[1])
integer = int(list_d[0] + list_d[1])
# If the decimal point does not exist => Positive / Zero exponent
# 3400
# exponent = len("3400") - len("34") = 2
# integer = int("34") = 34
else:
str_dec = list_d[0].rstrip('0')
exponent = len(list_d[0]) - len(str_dec)
integer = int(str_dec)
print(integer, exponent)
```
### Performance testing
```
def to_int_exp(decimal_instance):
list_d = str(decimal_instance).split('.')
if len(list_d) == 2:
# Negative exponent
exponent = -len(list_d[1])
integer = int(list_d[0] + list_d[1])
else:
str_dec = list_d[0].rstrip('0')
# Positive exponent
exponent = len(list_d[0]) - len(str_dec)
integer = int(str_dec)
return integer, exponent
def to_int_exp1(decimal_instance):
t=decimal_instance.as_tuple()
if t[0] == 0:
sign=1
else:
sign=-1
digits= t[1]
exponent = t[2]
integer = sign * int(''.join(map(str,digits)))
return integer, exponent
```
Calculating the time taken for 100,000 loops for both methods :
```
import time, random
from decimal import Decimal

ttaken = time.time()
for i in range(100000):
d = Decimal(random.uniform(-3, +3))
to_int_exp(d)
ttaken = time.time() - ttaken
print(ttaken)
```
Time taken for string parsing method : 1.56606507301
```
ttaken = time.time()
for i in range(100000):
d = Decimal(random.uniform(-3, +3))
to_int_exp1(d)
ttaken = time.time() - ttaken
print(ttaken)
```
Time taken for conversion to tuple then extract method : 4.67159295082
|
Get the exponent directly from the tuple as you were:
```
exponent = d.as_tuple()[2]
```
Then multiply by the proper power of 10:
```
i = int(d * Decimal('10')**-exponent)
```
Putting it all together:
```
from decimal import Decimal
_ten = Decimal('10')
def int_exponent(d):
exponent = d.as_tuple()[2]
int_part = int(d * (_ten ** -exponent))
return int_part, exponent
```
|
3,950,330
|
Is there a way to convert Python 2.x source code to Python 3.x from my own code? I guess this can be done using lib2to3, but I don't know exactly how to do it.
|
2010/10/16
|
[
"https://Stackoverflow.com/questions/3950330",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/441459/"
] |
Thanks. Here is the answer I was looking for:
```
from lib2to3.refactor import RefactoringTool, get_fixers_from_package
"""assume `files` to a be a list of all filenames you want to convert"""
r = RefactoringTool(get_fixers_from_package('lib2to3.fixes'))
r.refactor(files, write=True)
```
|
Yes, porting is what you are looking here.
Porting is a non-trivial task that requires making various decisions about your code, for instance whether or not you want to maintain backward compatibility. There is no single, universal solution to porting; the way you port depends on your specific requirements.
The best resource I have found for porting apps from Python 2 to 3 is the wiki page [PortingPythonToPy3k](http://wiki.python.org/moin/PortingPythonToPy3k). The page contains several approaches to porting as well as a lot of links to resources that are potentially helpful in porting work.
|
6,156,358
|
The example from [this post](https://stackoverflow.com/questions/6144274/string-replace-utility-conversion-from-python-to-f) has an example
```
open System.IO
let lines =
File.ReadAllLines("tclscript.do")
|> Seq.map (fun line ->
let newLine = line.Replace("{", "{{").Replace("}", "}}")
newLine )
File.WriteAllLines("tclscript.txt", lines)
```
that gives a compilation error.
```
error FS0001: This expression was expected to have type
string []
but here has type
seq<string>
```
How do I convert the `seq<string>` to `string[]` to remove this error message?
|
2011/05/27
|
[
"https://Stackoverflow.com/questions/6156358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
Building on Jaime's answer, since `ReadAllLines()` returns an array, just use `Array.map` instead of `Seq.map`
```
open System.IO
let lines =
File.ReadAllLines("tclscript.do")
|> Array.map (fun line ->
let newLine = line.Replace("{", "{{").Replace("}", "}}")
newLine )
File.WriteAllLines("tclscript.txt", lines)
```
|
You can use
```
File.WriteAllLines("tclscript.txt", Seq.toArray lines)
```
or alternatively just attach
```
|> Seq.toArray
```
after the Seq.map call.
(Also note that in .NET 4, there is an overload of WriteAllLines that does take a Seq)
|
6,156,358
|
The example from [this post](https://stackoverflow.com/questions/6144274/string-replace-utility-conversion-from-python-to-f) has an example
```
open System.IO
let lines =
File.ReadAllLines("tclscript.do")
|> Seq.map (fun line ->
let newLine = line.Replace("{", "{{").Replace("}", "}}")
newLine )
File.WriteAllLines("tclscript.txt", lines)
```
that gives an error when compiled:
```
error FS0001: This expression was expected to have type
string []
but here has type
seq<string>
```
How can I convert the seq to string[] to fix this error?
|
2011/05/27
|
[
"https://Stackoverflow.com/questions/6156358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
You can use
```
File.WriteAllLines("tclscript.txt", Seq.toArray lines)
```
or alternatively just attach
```
|> Seq.toArray
```
after the Seq.map call.
(Also note that in .NET 4, there is an overload of WriteAllLines that does take a Seq)
|
Personally, I prefer sequence expressions over higher-order functions, unless you're piping the output through a series of functions. It's usually cleaner and more readable.
```
let lines = [| for line in File.ReadAllLines("tclscript.do") -> line.Replace("{", "{{").Replace("}", "}}") |]
File.WriteAllLines("tclscript.txt", lines)
```
### With regex replacement
```
let lines =
let re = System.Text.RegularExpressions.Regex(@"#(\d+)")
[|for line in File.ReadAllLines("tclscript.do") ->
re.Replace(line.Replace("{", "{{").Replace("}", "}}"), "$1", 1)|]
File.WriteAllLines("tclscript.txt", lines)
```
|
6,156,358
|
The example from [this post](https://stackoverflow.com/questions/6144274/string-replace-utility-conversion-from-python-to-f) has an example
```
open System.IO
let lines =
File.ReadAllLines("tclscript.do")
|> Seq.map (fun line ->
let newLine = line.Replace("{", "{{").Replace("}", "}}")
newLine )
File.WriteAllLines("tclscript.txt", lines)
```
that gives an error when compiled:
```
error FS0001: This expression was expected to have type
string []
but here has type
seq<string>
```
How can I convert the seq to string[] to fix this error?
|
2011/05/27
|
[
"https://Stackoverflow.com/questions/6156358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
Building on Jaime's answer, since `ReadAllLines()` returns an array, just use `Array.map` instead of `Seq.map`
```
open System.IO
let lines =
File.ReadAllLines("tclscript.do")
|> Array.map (fun line ->
let newLine = line.Replace("{", "{{").Replace("}", "}}")
newLine )
File.WriteAllLines("tclscript.txt", lines)
```
|
Personally, I prefer sequence expressions over higher-order functions, unless you're piping the output through a series of functions. It's usually cleaner and more readable.
```
let lines = [| for line in File.ReadAllLines("tclscript.do") -> line.Replace("{", "{{").Replace("}", "}}") |]
File.WriteAllLines("tclscript.txt", lines)
```
### With regex replacement
```
let lines =
let re = System.Text.RegularExpressions.Regex(@"#(\d+)")
[|for line in File.ReadAllLines("tclscript.do") ->
re.Replace(line.Replace("{", "{{").Replace("}", "}}"), "$1", 1)|]
File.WriteAllLines("tclscript.txt", lines)
```
|
34,464,872
|
I have downloaded a mesh exporter script to learn how to write an export script in Python for Blender (2.6.3).
The script follows the standard register/unregister pattern.
```
### REGISTER ###
def menu_func(self, context):
self.layout.operator(Export_objc.bl_idname, text="Objective-C Header (.h)")
def register():
bpy.utils.register_module(__name__)
bpy.types.INFO_MT_file_export.append(menu_func)
def unregister():
bpy.utils.unregister_module(__name__)
bpy.types.INFO_MT_file_export.remove(menu_func)
###if __name__ == "__main__":
### register()
unregister()
```
The issue is that when I use Run Script to run the script from the text editor (after changing it to unregister upon run), it removes the script but leaves an unclickable leftover in the export menu which I cannot remove.
If I run the register again, it turns the inactive menu option back into a clickable exporter menu item, but it also adds another copy of the menu item.
The reason I want to keep registering and unregistering is mostly because I want to make changes and test them out...
Maybe I should run the function directly without registering but even though now I have this in my export menu:
[](https://i.stack.imgur.com/YAeX2.png)
How do I remove these items and avoid having many versions of my script in the export menu (depending on whether I made changes)? Also, should I just call the function directly instead of using register/unregister while I am fiddling with the script and trying things out?
|
2015/12/25
|
[
"https://Stackoverflow.com/questions/34464872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1097185/"
] |
Well I have found a workable way...
If you press 'F8' it will reload all plugins and remove the "dead" menu items.
That solves the multiple additions of the same addon.
So now if I want to change the addon and test it I do something like this:
1. Run script with unregister
2. Press F8
3. Run script with register
That is how I update the addon and there is the additional step of actually running it from the export/import menu.
If you have an easier way to test changes for the addon please let me know...
|
I am not 100% certain of the cause, but it relates to running an addon script that adds a menu item from within Blender's text editor. Even Blender's template scripts do the same thing.
I think the best solution is to use it like a real addon - that is save it to disk and enable/disable it in the addon preferences. You can either save it to the installed addons folder, within your [user settings folder](http://www.blender.org/manual/getting_started/installing_blender/directorylayout.html) or create a folder and set the [file path for scripts](http://www.blender.org/manual/preferences/file.html#scripts-path). You could also use the [Install from File](http://www.blender.org/manual/advanced/scripting/python/add_ons.html#installation-of-a-3rd-party-add-on) button in the addons preferences.
|
48,967,621
|
I will admit I'm stuck on a school project right now.
I have defined functions that will generate random numbers for me, as well as a random operator (+, -, or \*).
I have also defined a function that will display a problem using these random numbers.
I have created a program that will generate and display a random problem and ask the user for the solution. If the user is correct, the program prints 'Correct', and the opposite if the user is incorrect.
I have put all of this inside of a loop that will make it repeat 10 times. My issue is that I need it to generate 10 different problems instead of the same problem that it randomized the first time, 10 times.
Sorry for the weird wording.
\*I am using python but am showing the code here using the CSS viewer because I couldn't get it to display any other way.
Thank you.
```
import random
max = 10
def getOp(max): #generates a random number between 1 and 10
randNum = random.randint(0,max)
return randNum
randNum = getOp(max)
def getOperator(): #gets a random operator
opValue = random.randint(1,3)
if opValue == 1:
operator1 = '+'
elif opValue == 2:
operator1 = '-'
elif opValue == 3:
operator1 = '*'
return operator1
operand1 = getOp(max)
operand2 = getOp(max)
operator = getOperator()
def doIt(operand1, operand2, operator): #does the problem so we can verify with user
if operator == '+':
answer = operand1 + operand2
elif operator == '-':
answer = operand1 - operand2
elif operator == '*':
answer = operand1 * operand2
return answer
answer = doIt(operand1, operand2, operator)
def displayProblem(operand1, operand2, operator): #displays the problem
print(operand1, operator, operand2, '=')
###My program:
for _ in range(10): #loops the program 10 times
displayProblem(operand1, operand2, operator)
userSolution = int(input('Please enter your solution: '))
if userSolution == doIt(operand1, operand2, operator):
print('Correct')
elif userSolution != doIt(operand1, operand2, operator):
print('Incorrect')
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48967621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7856878/"
] |
Just move your code that is generating the random values into your for loop:
```
for _ in range(10): #loops the program 10 times
randNum = getOp(max)
operand1 = getOp(max)
operand2 = getOp(max)
operator = getOperator()
answer = doIt(operand1, operand2, operator)
displayProblem(operand1, operand2, operator)
userSolution = int(input('Please enter your solution: '))
if userSolution == doIt(operand1, operand2, operator):
print('Correct')
elif userSolution != doIt(operand1, operand2, operator):
print('Incorrect')
```
That way the values are regenerated each time before you ask the user for input.
|
You generate the problem and then show it 10 times in the loop:
```
generateProblem()
for _ in range(10):
showProblem()
```
Of course you will get the same problem shown 10 times. To fix this, generate the problem *inside* the loop:
```
for _ in range(10):
generateProblem()
showProblem()
```
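As a variation on the generate-inside-the-loop idea, the problem generation can be sketched with the standard `operator` module so the operator symbol maps directly to a function; the names `make_problem` and `solve` are made up for this example:

```python
# A sketch of generating a fresh random problem on every iteration,
# using operator.add/sub/mul instead of an if/elif chain.
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_problem(max_value=10):
    """Return (operand1, operand2, symbol), all chosen at random."""
    symbol = random.choice(list(OPS))
    return random.randint(0, max_value), random.randint(0, max_value), symbol

def solve(a, b, symbol):
    """Compute the correct answer by looking up the operator function."""
    return OPS[symbol](a, b)

random.seed(0)  # seeded only to make this demo reproducible
problems = [make_problem() for _ in range(10)]
print(problems)  # ten independently generated problems
```

Because `make_problem()` is called inside the comprehension (i.e. inside the loop), each of the ten problems is drawn independently rather than repeating the first one.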
|
48,967,621
|
I will admit I'm stuck on a school project right now.
I have defined functions that will generate random numbers for me, as well as a random operator (+, -, or \*).
I have also defined a function that will display a problem using these random numbers.
I have created a program that will generate and display a random problem and ask the user for the solution. If the user is correct, the program prints 'Correct', and the opposite if the user is incorrect.
I have put all of this inside of a loop that will make it repeat 10 times. My issue is that I need it to generate 10 different problems instead of the same problem that it randomized the first time, 10 times.
Sorry for the weird wording.
\*I am using python but am showing the code here using the CSS viewer because I couldn't get it to display any other way.
Thank you.
```
import random
max = 10
def getOp(max): #generates a random number between 1 and 10
randNum = random.randint(0,max)
return randNum
randNum = getOp(max)
def getOperator(): #gets a random operator
opValue = random.randint(1,3)
if opValue == 1:
operator1 = '+'
elif opValue == 2:
operator1 = '-'
elif opValue == 3:
operator1 = '*'
return operator1
operand1 = getOp(max)
operand2 = getOp(max)
operator = getOperator()
def doIt(operand1, operand2, operator): #does the problem so we can verify with user
if operator == '+':
answer = operand1 + operand2
elif operator == '-':
answer = operand1 - operand2
elif operator == '*':
answer = operand1 * operand2
return answer
answer = doIt(operand1, operand2, operator)
def displayProblem(operand1, operand2, operator): #displays the problem
print(operand1, operator, operand2, '=')
###My program:
for _ in range(10): #loops the program 10 times
displayProblem(operand1, operand2, operator)
userSolution = int(input('Please enter your solution: '))
if userSolution == doIt(operand1, operand2, operator):
print('Correct')
elif userSolution != doIt(operand1, operand2, operator):
print('Incorrect')
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48967621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7856878/"
] |
You generate the problem and then show it 10 times in the loop:
```
generateProblem()
for _ in range(10):
showProblem()
```
Of course you will get the same problem shown 10 times. To fix this, generate the problem *inside* the loop:
```
for _ in range(10):
generateProblem()
showProblem()
```
|
Corrected your code:
```
import random
max = 10
def getOp(max): #generates a random number between 0 and max
randNum = random.randint(0,max)
return randNum
def getOperator(): #gets a random operator
opValue = random.randint(1,3)
if opValue == 1:
operator1 = '+'
elif opValue == 2:
operator1 = '-'
elif opValue == 3:
operator1 = '*'
return operator1
def doIt(operand1, operand2, operator): #does the problem so we can verify with user
if operator == '+':
answer = operand1 + operand2
elif operator == '-':
answer = operand1 - operand2
elif operator == '*':
answer = operand1 * operand2
return answer
def displayProblem(operand1, operand2, operator): #displays the problem
print(operand1, operator, operand2, '=')
###My program:
for _ in range(10): #loops the program 10 times
operand1 = getOp(max)
operand2 = getOp(max)
operator = getOperator()
displayProblem(operand1, operand2, operator)
userSolution = int(input('Please enter your solution: '))
if userSolution == doIt(operand1, operand2, operator):
print('Correct')
elif userSolution != doIt(operand1, operand2, operator):
print('Incorrect')
```
|
48,967,621
|
I will admit I'm stuck on a school project right now.
I have defined functions that will generate random numbers for me, as well as a random operator (+, -, or \*).
I have also defined a function that will display a problem using these random numbers.
I have created a program that will generate and display a random problem and ask the user for the solution. If the user is correct, the program prints 'Correct', and the opposite if the user is incorrect.
I have put all of this inside of a loop that will make it repeat 10 times. My issue is that I need it to generate 10 different problems instead of the same problem that it randomized the first time, 10 times.
Sorry for the weird wording.
\*I am using python but am showing the code here using the CSS viewer because I couldn't get it to display any other way.
Thank you.
```
import random
max = 10
def getOp(max): #generates a random number between 1 and 10
randNum = random.randint(0,max)
return randNum
randNum = getOp(max)
def getOperator(): #gets a random operator
opValue = random.randint(1,3)
if opValue == 1:
operator1 = '+'
elif opValue == 2:
operator1 = '-'
elif opValue == 3:
operator1 = '*'
return operator1
operand1 = getOp(max)
operand2 = getOp(max)
operator = getOperator()
def doIt(operand1, operand2, operator): #does the problem so we can verify with user
if operator == '+':
answer = operand1 + operand2
elif operator == '-':
answer = operand1 - operand2
elif operator == '*':
answer = operand1 * operand2
return answer
answer = doIt(operand1, operand2, operator)
def displayProblem(operand1, operand2, operator): #displays the problem
print(operand1, operator, operand2, '=')
###My program:
for _ in range(10): #loops the program 10 times
displayProblem(operand1, operand2, operator)
userSolution = int(input('Please enter your solution: '))
if userSolution == doIt(operand1, operand2, operator):
print('Correct')
elif userSolution != doIt(operand1, operand2, operator):
print('Incorrect')
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48967621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7856878/"
] |
Just move your code that is generating the random values into your for loop:
```
for _ in range(10): #loops the program 10 times
randNum = getOp(max)
operand1 = getOp(max)
operand2 = getOp(max)
operator = getOperator()
answer = doIt(operand1, operand2, operator)
displayProblem(operand1, operand2, operator)
userSolution = int(input('Please enter your solution: '))
if userSolution == doIt(operand1, operand2, operator):
print('Correct')
elif userSolution != doIt(operand1, operand2, operator):
print('Incorrect')
```
That way the values are regenerated each time before you ask the user for input.
|
Corrected your code:
```
import random
max = 10
def getOp(max): #generates a random number between 0 and max
randNum = random.randint(0,max)
return randNum
def getOperator(): #gets a random operator
opValue = random.randint(1,3)
if opValue == 1:
operator1 = '+'
elif opValue == 2:
operator1 = '-'
elif opValue == 3:
operator1 = '*'
return operator1
def doIt(operand1, operand2, operator): #does the problem so we can verify with user
if operator == '+':
answer = operand1 + operand2
elif operator == '-':
answer = operand1 - operand2
elif operator == '*':
answer = operand1 * operand2
return answer
def displayProblem(operand1, operand2, operator): #displays the problem
print(operand1, operator, operand2, '=')
###My program:
for _ in range(10): #loops the program 10 times
operand1 = getOp(max)
operand2 = getOp(max)
operator = getOperator()
displayProblem(operand1, operand2, operator)
userSolution = int(input('Please enter your solution: '))
if userSolution == doIt(operand1, operand2, operator):
print('Correct')
elif userSolution != doIt(operand1, operand2, operator):
print('Incorrect')
```
|
17,370,820
|
I have come across some python code with slice notation that I am having trouble figuring out.
It looks like slice notation but uses a comma and a list:
```
list[:, [1, 2, 3]]
```
Is this syntax valid? If so what does it do?
**edit** looks like it is a 2D numpy array
|
2013/06/28
|
[
"https://Stackoverflow.com/questions/17370820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2502012/"
] |
Assuming that the object is really a `numpy` array, this is known as [advanced indexing](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing), and picks out the specified columns:
```
>>> import numpy as np
>>> a = np.arange(12).reshape(3,4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> a[:, [1,2,3]]
array([[ 1, 2, 3],
[ 5, 6, 7],
[ 9, 10, 11]])
>>> a[:, [1,3]]
array([[ 1, 3],
[ 5, 7],
[ 9, 11]])
```
Note that this won't work with the standard Python list:
```
>>> a.tolist()
[[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
>>> a.tolist()[:,[1,2,3]]
Traceback (most recent call last):
File "<ipython-input-17-7d77de02047a>", line 1, in <module>
a.tolist()[:,[1,2,3]]
TypeError: list indices must be integers, not tuple
```
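For comparison, the same column selection can be done on a plain list of lists with a nested list comprehension — a sketch, no NumPy involved:

```python
# Plain-Python equivalent of a[:, [1, 2, 3]] for a list of lists.
rows = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
cols = [1, 2, 3]

# Pick the requested columns from every row.
picked = [[row[c] for c in cols] for row in rows]
print(picked)  # [[1, 2, 3], [5, 6, 7], [9, 10, 11]]
```

This mirrors what the NumPy advanced index does, but copies element by element instead of using vectorized indexing.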
|
It generates a complex value and passes it to [`__*item__()`](http://docs.python.org/2/reference/datamodel.html#object.__getitem__):
```
>>> class Foo(object):
... def __getitem__(self, val):
... print val
...
>>> Foo()[:, [1, 2, 3]]
(slice(None, None, None), [1, 2, 3])
```
What it actually *performs* depends on the type being indexed.
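To illustrate, here is a small sketch of a class that interprets that `(slice, list)` tuple to select columns; the class name `Grid` is invented for this example:

```python
# A sketch of how a container class could interpret the (slice, list)
# key that obj[:, [1, 2, 3]] produces.
class Grid:
    def __init__(self, rows):
        self.rows = rows

    def __getitem__(self, key):
        row_key, col_key = key          # the key arrives as a tuple
        selected = self.rows[row_key]   # apply the slice to the rows
        return [[row[c] for c in col_key] for row in selected]

g = Grid([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
print(g[:, [0, 2]])  # [[0, 2], [3, 5], [6, 8]]
```

NumPy arrays do essentially this (far more generally) in their own `__getitem__`, which is why the syntax works for them but not for built-in lists.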
|
61,996,756
|
When I run `npm install` on my Ionic project with Angular, the install of node-sass/node-gyp fails with an error like this:
>
> $ npm install
>
>
>
> >
> > node-sass@4.10.0 install C:\Users\d\Documents\project\app\node\_modules\node-sass
> > node scripts/install.js
> >
> >
> >
>
>
> Downloading binary from
> <https://github.com/sass/node-sass/releases/download/v4.10.0/win32-x64-72_binding.node>
> Cannot download
> "<https://github.com/sass/node-sass/releases/download/v4.10.0/win32-x64-72_binding.node>":
>
>
> HTTP error 404 Not Found
>
>
> Hint: If github.com is not accessible in your location
> try setting a proxy via HTTP\_PROXY, e.g.
>
>
>
> ```
> export HTTP_PROXY=http://example.com:1234
>
> ```
>
> or configure npm proxy via
>
>
>
> ```
> npm config set proxy http://example.com:8080
>
> ```
>
>
> >
> > node-sass@4.10.0 postinstall C:\Users\d\Documents\project\app\node\_modules\node-sass
> > node scripts/build.js
> >
> >
> >
>
>
> Building: C:\Program Files\nodejs\node.exe
> C:\Users\d\Documents\project\app\node\_modules\node-gyp\bin\node-gyp.js
> rebuild --verbose --libsass\_ext= --libsass\_cflags= --libsass\_ldflags=
> --libsass\_library= gyp info it worked if it ends with ok gyp verb cli [ gyp verb cli 'C:\Program Files\nodejs\node.exe', gyp verb cli
>
> 'C:\Users\d\Documents\project\app\node\_modules\node-gyp\bin\node-gyp.js',
> gyp verb cli 'rebuild', gyp verb cli '--verbose', gyp verb cli
>
> '--libsass\_ext=', gyp verb cli '--libsass\_cflags=', gyp verb cli
>
> '--libsass\_ldflags=', gyp verb cli '--libsass\_library=' gyp verb cli
> ] gyp info using node-gyp@3.8.0 gyp info using node@12.13.1 | win32 |
> x64 gyp verb command rebuild [] gyp verb command clean [] gyp verb
> clean removing "build" directory gyp verb command configure [] gyp
> verb check python checking for Python executable
> "C:\Users\d.windows-build-tools\python27\python.exe" in the PATH gyp
> verb `which` failed Error: not found:
> C:\Users\d.windows-build-tools\python27\python.exe gyp verb `which`
> failed at getNotFoundError
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:13:12)
> gyp verb `which` failed at F
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:68:19)
> gyp verb `which` failed at E
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:80:29)
> gyp verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\which\which.js:89:16 gyp
> verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\isexe\index.js:42:5 gyp
> verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\isexe\windows.js:36:5
> gyp verb `which` failed at FSReqCallback.oncomplete (fs.js:158:21)
> gyp verb `which` failed
> C:\Users\d.windows-build-tools\python27\python.exe Error: not found:
> C:\Users\d.windows-build-tools\python27\python.exe gyp verb `which`
> failed at getNotFoundError
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:13:12)
> gyp verb `which` failed at F
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:68:19)
> gyp verb `which` failed at E
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:80:29)
> gyp verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\which\which.js:89:16 gyp
> verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\isexe\index.js:42:5 gyp
> verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\isexe\windows.js:36:5
> gyp verb `which` failed at FSReqCallback.oncomplete (fs.js:158:21)
> { gyp verb `which` failed stack: 'Error: not found:
> C:\Users\d\.windows-build-tools\python27\python.exe\n' + gyp verb
> `which` failed ' at getNotFoundError
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:13:12)\n'
> + gyp verb `which` failed ' at F (C:\Users\d\Documents\project\app\node\_modules\which\which.js:68:19)\n'
> + gyp verb `which` failed ' at E (C:\Users\d\Documents\project\app\node\_modules\which\which.js:80:29)\n'
> + gyp verb `which` failed ' at C:\Users\d\Documents\project\app\node\_modules\which\which.js:89:16\n'
> + gyp verb `which` failed ' at C:\Users\d\Documents\project\app\node\_modules\isexe\index.js:42:5\n'
> + gyp verb `which` failed ' at C:\Users\d\Documents\project\app\node\_modules\isexe\windows.js:36:5\n'
> + gyp verb `which` failed ' at FSReqCallback.oncomplete (fs.js:158:21)', gyp verb `which` failed code: 'ENOENT' gyp verb
> `which` failed } gyp verb could not find
> "C:\Users\d.windows-build-tools\python27\python.exe". checking python
> launcher gyp verb could not find
> "C:\Users\d.windows-build-tools\python27\python.exe". guessing
> location gyp verb ensuring that file exists: C:\Python27\python.exe
> gyp verb check python version `C:\Python27\python.exe -c "import sys;
> print "2.7.16 gyp verb check python version .%s.%s" %
> sys.version_info[:3];"` returned: %j gyp verb get node dir no --target
> version specified, falling back to host node version: 12.13.1 gyp verb
> command install [ '12.13.1' ] gyp verb install input version string
> "12.13.1" gyp verb install installing version: 12.13.1 gyp verb
> install --ensure was passed, so won't reinstall if already installed
> gyp verb install version is already installed, need to check
> "installVersion" gyp verb got "installVersion" 9 gyp verb needs
> "installVersion" 9 gyp verb install version is good gyp verb get node
> dir target node version installed: 12.13.1 gyp verb build dir
> attempting to create "build" dir:
> C:\Users\d\Documents\project\app\node\_modules\node-sass\build gyp verb
> build dir "build" dir needed to be created?
> C:\Users\d\Documents\project\app\node\_modules\node-sass\build gyp verb
> find vs2017 Found installation at: C:\Program Files (x86)\Microsoft
> Visual Studio\2019\Enterprise gyp verb find vs2017 - Found
> Microsoft.VisualStudio.Component.Windows10SDK.18362 gyp verb find
> vs2017 - Found Microsoft.VisualStudio.Component.VC.Tools.x86.x64 gyp
> verb find vs2017 - Found Microsoft.VisualStudio.VC.MSBuild.Base gyp
> verb find vs2017 - Using this installation with Windows 10 SDK gyp
> verb find vs2017 using installation: C:\Program Files (x86)\Microsoft
> Visual Studio\2019\Enterprise gyp verb build/config.gypi creating
> config file gyp verb build/config.gypi writing out config file:
> C:\Users\d\Documents\project\app\node\_modules\node-sass\build\config.gypi
> gyp verb config.gypi checking for gypi file:
> C:\Users\d\Documents\project\app\node\_modules\node-sass\config.gypi
> gyp verb common.gypi checking for gypi file:
> C:\Users\d\Documents\project\app\node\_modules\node-sass\common.gypi
> gyp verb gyp gyp format was not specified; forcing "msvs" gyp info
> spawn C:\Python27\python.exe gyp info spawn args [ gyp info spawn args
> 'C:\Users\d\Documents\project\app\node\_modules\node-gyp\gyp\gyp\_main.py',
> gyp info spawn args 'binding.gyp', gyp info spawn args '-f', gyp
> info spawn args 'msvs', gyp info spawn args '-G', gyp info spawn
> args 'msvs\_version=2015', gyp info spawn args '-I', gyp info spawn
> args
>
> 'C:\Users\d\Documents\project\app\node\_modules\node-sass\build\config.gypi',
> gyp info spawn args '-I', gyp info spawn args
>
> 'C:\Users\d\Documents\project\app\node\_modules\node-gyp\addon.gypi',
> gyp info spawn args '-I', gyp info spawn args
>
> 'C:\Users\d\.node-gyp\12.13.1\include\node\common.gypi', gyp
> info spawn args '-Dlibrary=shared\_library', gyp info spawn args
>
> '-Dvisibility=default', gyp info spawn args
>
> '-Dnode\_root\_dir=C:\Users\d\.node-gyp\12.13.1', gyp info spawn
> args
>
> '-Dnode\_gyp\_dir=C:\Users\d\Documents\project\app\node\_modules\node-gyp',
> gyp info spawn args
>
> '-Dnode\_lib\_file=C:\Users\d\.node-gyp\12.13.1\<(target\_arch)\node.lib', gyp info spawn args
>
> '-Dmodule\_root\_dir=C:\Users\d\Documents\project\app\node\_modules\node-sass',
> gyp info spawn args '-Dnode\_engine=v8', gyp info spawn args
>
> '--depth=.', gyp info spawn args '--no-parallel', gyp info spawn
> args '--generator-output', gyp info spawn args
>
> 'C:\Users\d\Documents\project\app\node\_modules\node-sass\build',
> gyp info spawn args '-Goutput\_dir=.' gyp info spawn args ] gyp verb
> command build [] gyp verb build type Release gyp verb architecture x64
> gyp verb node dev dir C:\Users\d.node-gyp\12.13.1 gyp verb found
> first Solution file build/binding.sln gyp verb using MSBuild:
> C:\Program Files (x86)\Microsoft Visual
> Studio\2019\Enterprise\MSBuild\15.0\Bin\MSBuild.exe gyp info spawn
> C:\Program Files (x86)\Microsoft Visual
> Studio\2019\Enterprise\MSBuild\15.0\Bin\MSBuild.exe gyp info spawn
> args [ gyp info spawn args 'build/binding.sln', gyp info spawn args
> '/nologo', gyp info spawn args
>
> '/p:Configuration=Release;Platform=x64' gyp info spawn args ] gyp ERR!
> UNCAUGHT EXCEPTION gyp ERR! stack Error: spawn C:\Program Files
> (x86)\Microsoft Visual
> Studio\2019\Enterprise\MSBuild\15.0\Bin\MSBuild.exe ENOENT gyp ERR!
> stack at Process.ChildProcess.\_handle.onexit
> (internal/child\_process.js:264:19) gyp ERR! stack at onErrorNT
> (internal/child\_process.js:456:16) gyp ERR! stack at
> processTicksAndRejections (internal/process/task\_queues.js:80:21) gyp
> ERR! System Windows\_NT 10.0.18362 gyp ERR! command "C:\Program
> Files\nodejs\node.exe"
> "C:\Users\d\Documents\project\app\node\_modules\node-gyp\bin\node-gyp.js"
> "rebuild" "--verbose" "--libsass\_ext=" "--libsass\_cflags="
> "--libsass\_ldflags=" "--libsass\_library=" gyp ERR! cwd
> C:\Users\d\Documents\project\app\node\_modules\node-sass gyp ERR! node
> -v v12.13.1 gyp ERR! node-gyp -v v3.8.0 gyp ERR! This is a bug in `node-gyp`. gyp ERR! Try to update node-gyp and file an Issue if it
> does not help: gyp ERR!
>
> <https://github.com/nodejs/node-gyp/issues> Build failed with error
> code: 7 npm WARN angular-ng-autocomplete@1.1.12 requires a peer of
> @angular/common@^6.0.0-rc.0 || ^6.0.0 but none is installed. You must
> install peer dependencies yourself. npm WARN
> angular-ng-autocomplete@1.1.12 requires a peer of
> @angular/core@^6.0.0-rc.0 || ^6.0.0 but none is installed. You must
> install peer dependencies yourself. npm WARN
> angular-resize-event@1.2.1 requires a peer of @angular/core@^8.2.14
> but none is installed. You must install peer dependencies yourself.
> npm WARN angular-resize-event@1.2.1 requires a peer of rxjs@~6.5.4 but
> none is installed. You must install peer dependencies yourself.
>
> npm WARN angular-resize-event@1.2.1 requires a peer of core-js@^3.6.1
> but none is installed. You must install peer dependencies yourself.
>
> npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.4
> (node\_modules\fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY:
> Unsupported platform for fsevents@1.2.4: wanted
> {"os":"darwin","arch":"any"} (current:
> {"os":"win32","arch":"x64"})win32","arch":"x64"}) npm WARN optional
> SKIPPING OPTIONAL DEPENDENCY: node-sass@4.10.0
> (node\_modules\node-sass): npm WARN optional SKIPPING OPTIONAL
> DEPENDENCY: node-sass@4.10.0 postinstall: `node scripts/build.js` npm
> WARN optional SKIPPING OPTIONAL DEPENDENCY: Exit status 1
>
>
> added 83 packages from 166 contributors, removed 618 packages, updated
> 191 packages and audited 1597 packages in 52.38s found 2966
> vulnerabilities (2197 low, 11 moderate, 756 high, 2 critical) run
> `npm audit fix` to fix them, or `npm audit` for details
>
>
>
package.json
```
{
"name": "project",
"version": "0.0.1",
"author": "Ionic Framework",
"homepage": "http://ionicframework.com/",
"scripts": {
"ng": "ng",
"start": "ng serve",
"build": "ng build",
"test": "ng test",
"lint": "ng lint",
"e2e": "ng e2e"
},
"private": true,
"dependencies": {
"@angular/animations": "7.1.4",
"@angular/cdk": "7.1.0",
"@angular/common": "7.1.4",
"@angular/core": "7.1.4",
"@angular/forms": "7.1.4",
"@angular/http": "7.1.4",
"@angular/platform-browser": "7.1.4",
"@angular/platform-browser-dynamic": "7.1.4",
"@angular/router": "7.1.4",
"@fortawesome/fontawesome-free": "5.12.0",
"@ionic-native/core": "5.1.0",
"@ionic-native/file": "5.1.0",
"@ionic-native/file-path": "5.1.0",
"@ionic-native/file-transfer": "5.1.0",
"@ionic-native/in-app-browser": "5.5.1",
"@ionic-native/native-page-transitions": "5.5.1",
"@ionic-native/splash-screen": "5.1.0",
"@ionic-native/status-bar": "5.1.0",
"@ionic/angular": "4.0.0-beta.15",
"@kolkov/angular-editor": "^0.15.1",
"@progress/kendo-angular-buttons": "^4.0.0",
"@progress/kendo-angular-charts": "3.9.0",
"@progress/kendo-angular-dateinputs": "2 - 3",
"@progress/kendo-angular-dropdowns": "2 - 3",
"@progress/kendo-angular-excel-export": "1 - 2",
"@progress/kendo-angular-grid": "^3.14.4",
"@progress/kendo-angular-inputs": "2 - 5",
"@progress/kendo-angular-intl": "^1.0.0",
"@progress/kendo-angular-l10n": "^1.1.0",
"@progress/kendo-angular-popup": "^2.0.0",
"@progress/kendo-data-query": "^1.0.0",
"@progress/kendo-drawing": "^1.0.0",
"@progress/kendo-theme-default": "latest",
"angular-gridster2": "^7.2.0",
"angular-ng-autocomplete": "1.1.12",
"angular-resize-event": "1.2.1",
"cordova-android": "8.0.0",
"cordova-ios": "5.0.1",
"cordova-plugin-device": "2.0.2",
"cordova-plugin-ionic-webview": "2.3.1",
"cordova-plugin-splashscreen": "5.0.2",
"cordova-plugin-statusbar": "2.4.2",
"cordova-plugin-whitelist": "1.3.3",
"core-js": "^2.4.1",
"file-saver": "^2.0.2",
"hammerjs": "2.0.0",
"ionic": "4.6.0",
"jspdf": "^1.5.3",
"jszip": "^3.2.2",
"lodash": "4.17.15",
"moment": "2.24.0",
"mydatepicker": "2.6.6",
"ng-select": "1.0.2",
"ng2-ace-editor": "0.3.9",
"ngx-bootstrap": "5.3.2",
"ngx-color-picker": "^5.3.8",
"ngx-dropzone": "1.2.0",
"ngx-perfect-scrollbar": "7.2.1",
"release": "6.0.1",
"rxjs": "6.3.3",
"rxjs-compat": "^6.0.0",
"stream": "0.0.2",
"tslib": "1.9.0",
"zone.js": "0.8.26"
},
"devDependencies": {
"@angular-devkit/architect": "0.11.4",
"@angular-devkit/build-angular": "0.11.4",
"@angular-devkit/core": "7.1.4",
"@angular-devkit/schematics": "7.1.4",
"@angular/cli": "7.1.4",
"@angular/compiler": "7.1.4",
"@angular/compiler-cli": "7.1.4",
"@angular/language-service": "7.1.4",
"@ionic/angular-toolkit": "1.2.0",
"@types/node": "10.12.0",
"@types/jasmine": "2.8.8",
"@types/jasminewd2": "2.0.3",
"codelyzer": "4.5.0",
"jasmine-core": "2.99.1",
"jasmine-spec-reporter": "4.2.1",
"karma": "3.1.4",
"karma-chrome-launcher": "2.2.0",
"karma-coverage-istanbul-reporter": "2.0.1",
"karma-jasmine": "1.1.2",
"karma-jasmine-html-reporter": "0.2.2",
"protractor": "5.4.0",
"ts-node": "7.0.0",
"tslint": "5.12.0",
"typescript": "3.1.6",
"@svgdotjs/svg.js": "3.0.16"
},
"description": "An Ionic project",
"cordova": {
"plugins": {
"cordova-plugin-whitelist": {},
"cordova-plugin-statusbar": {},
"cordova-plugin-device": {},
"cordova-plugin-splashscreen": {},
"cordova-plugin-ionic-webview": {
"ANDROID_SUPPORT_ANNOTATIONS_VERSION": "27.+"
},
"cordova-plugin-ionic-keyboard": {},
"com.telerik.plugins.nativepagetransitions": {},
"cordova-plugin-inappbrowser": {}
},
"platforms": [
"android",
"ios"
]
}
}
```
npm version: 6.14.4
|
2020/05/25
|
[
"https://Stackoverflow.com/questions/61996756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3766841/"
] |
Short answer: **Avoid global variables!**
In your `delete` function you set the value of the global variable `temp_node`.
Then you call the function `count`. In `count` you also use the global variable `temp_node`. You change it until it has the value NULL.
Then back in the `delete` function, you do:
```
temp_node = temp_node->next;
```
Dereferencing a NULL pointer! That is really bad and crashes your program.
So to start with: **Get rid of all global variables**
As an example, your `count` function should be:
```
int count(NODE* p)
{
    int count = 0;
    while (p != NULL) {
        count++;
        p = p->next;
    }
    return count;
}
```
and call it like: `counter = count(first_node);`
And your `delete` function could look like:
```
NODE* delete(NODE* first_node) { ... }
```
That said ...
The principle in your `delete` function is wrong. You don't need to count the number of nodes. Simply iterate until you reach the end, i.e. `next` is NULL.
Further - why do you `malloc` memory in the `delete` function? And why do you overwrite the pointer just after `malloc`? Then you have a memory leak.
```
temp_node = (NODE*)malloc(sizeof(NODE)); // WHY??
temp_node = first_node; // UPS... temp_node assigned new value.
// So malloc'ed memory is lost.
```
Now - what happens when you find the matching node:
```
if (flightno == data) {
temp_node = temp_node->next;
first_node = temp_node; // UPS.. first_node changed
printf("\nFlight log deleted.\n");
}
```
Then you change `first_node`. So all nodes **before** the current node are lost! That's not what you want. You only want to change `first_node` when the match is on the very first node in the linked list.
Then: `for (j = 0; j <= counter; j++)` --> `for (j = 0; j < counter; j++)` But as I said before... don't use this kind of loop.
Use something like:
```
while (temp_node != NULL)
{
    ...
    temp_node = temp_node->next;
}
```
BTW: Why do you print in every iteration of the loop? Move the "not found" printout outside the loop.
A `delete` function can be implemented in many ways. The below example is not the most compact implementation but it's pretty simple to understand.
```
NODE* delete(NODE* head, int value_to_match)
{
    NODE* p = head;
    if (p == NULL) return NULL;

    // Check first node
    if (p->data == value_to_match)
    {
        // Delete first node
        head = head->next; // Update head to point to next node
        free(p);           // Free (aka delete) the node
        return head;       // Return the new head
    }

    NODE* prev = p;        // prev is a pointer to the node before
    p = p->next;           // the node that p points to

    // Check remaining nodes
    while (p != NULL)
    {
        if (p->data == value_to_match)
        {
            prev->next = p->next; // Take the node that p points to out
                                  // of the list, i.e. make the node before
                                  // point to the node after
            free(p);              // Free (aka delete) the node
            return head;          // Return head (unchanged)
        }
        prev = p;                 // Move prev and p forward
        p = p->next;              // in the list
    }
    return head;                  // Return head (unchanged)
}
```
and call it like:
```
head = delete(head, SOME_VALUE);
```
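The same head/prev/p walk translates directly to Python, which may help if C pointers are unfamiliar (a sketch by analogy, not part of the original C answer):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def delete(head, value_to_match):
    # The first node is a special case: removing it changes the head.
    if head is None:
        return None
    if head.data == value_to_match:
        return head.next
    prev, p = head, head.next
    while p is not None:
        if p.data == value_to_match:
            prev.next = p.next  # unlink p: node before points to node after
            return head
        prev, p = p, p.next
    return head  # no match; head unchanged

# Build 1 -> 2 -> 3 and delete the middle node.
head = Node(1, Node(2, Node(3)))
head = delete(head, 2)
```

Python's garbage collector takes the place of `free()`, but the pointer bookkeeping is the same.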
|
You are probably making an extra loop in your delete function. You should check whether you are deleting a node that isn't part of your linked list.
|
1,664,587
|
first time poster.
I'm turning to my first question on stack overflow because I've found few resources in trying to find an answer. I'm looking to execute Selenium python tests from a C# application. I don't want to have to compile the C# Selenium tests each time; I want to take advantage of IronPython scripting for dynamic selenium testing. (Note: I have little experience with Python or the ScriptEngine APIs.)
Selenium outputs unit tests in python in the following form:
```
from selenium import selenium
import unittest

class TestBlah(unittest.TestCase):
    def setUp(self):
        self.selenium = selenium(...)
        self.selenium.start()

    def test_blah(self):
        sel = self.selenium
        sel.open("http://www.google.com/webhp")
        sel.type("q", "hello world")
        sel.click("btnG")
        sel.wait_for_page_to_load(5000)
        self.assertEqual("hello world - Google Search", sel.get_title())
        print "done"

    def tearDown(self):
        self.selenium.stop()

if __name__ == "__main__":
    unittest.main()
```
I can get this to run, no problem, from the command line using ipy.exe:
```
ipy test_google.py
```
And I can see Selenium Server fire up a firefox browser instance and run the test.
I cannot achieve the same result using the ScriptEngine API in C# with .NET 3.5, and I think the problem is centered around not being able to execute the main() function. I'm guessing the culprit is the following code:
```
if __name__ == "__main__":
    unittest.main()
```
I've tried engine.ExecuteFile(), engine.CreateScriptSourceFromString()/source.Execute(), and engine.CreateScriptSourceFromFile()/source.Execute(). I tried scope.SetVariable("__name__", "__main__"). I do get some success when I comment out the `if __name__` part of the py file and call engine.CreateScriptSourceFromString("unittest.main(module=None)") after engine.Runtime.ExecuteFile() is called on the py file. I've tried storing the results in python and accessing them via scope.GetVariable(). I've also tried writing a python function I could call from C# to execute the unit tests.
(engine is an instance of ScriptEngine, source an instance of ScriptSource, etc.)
My ignorance of Python, ScriptEngine, or the unittest module could easily be behind my troubles. Has anyone had any luck executing python unittests using the ScriptEngine, etc API in C#? Has anyone successfully executed "main" code from ScriptEngine?
Additionally, I've read that unittest has a test runner that will help in accessing the errors via a TestResult object. I believe the syntax is the following. I haven't gotten here yet, but know I'll need to harvest the results.
```
unittest.TextTestRunner(verbosity=2).run(unittest.main())
```
Thanks in advance. I figured it'd be better to have more details than less. =P
|
2009/11/03
|
[
"https://Stackoverflow.com/questions/1664587",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/201308/"
] |
Looking at the [source code](http://ironpython.codeplex.com/SourceControl/ListDownloadableCommits.aspx) to the IronPython Console (ipy.exe), it looks like it eventually boils down to calling `ScriptSource.ExecuteProgram()`. You can get a `ScriptSource` from any of the various `ScriptEngine.CreateScriptSourceFrom*` methods.
For example:
```
import clr
clr.AddReference("IronPython")
from IronPython.Hosting import Python
engine = Python.CreateEngine()
src = engine.CreateScriptSourceFromString("""
if __name__ == "__main__":
    print "this is __main__"
""")
src.ExecuteProgram()
```
Running this will print "this is `__main__`".
|
Try the following:
```
unittest.main(module=__name__)
```
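On harvesting results: instead of going through `unittest.main()`, you can load a suite and run it with `TextTestRunner`, which returns a `TestResult` object you can inspect afterwards. A minimal, Selenium-free sketch:

```python
import unittest

class TestBlah(unittest.TestCase):
    def test_blah(self):
        # Trivial stand-in for a real Selenium assertion.
        self.assertEqual("hello world".upper(), "HELLO WORLD")

# Load the tests from the class and run them explicitly.
suite = unittest.TestLoader().loadTestsFromTestCase(TestBlah)
result = unittest.TextTestRunner(verbosity=2).run(suite)

# result is a unittest.TestResult: result.failures, result.errors,
# result.testsRun and result.wasSuccessful() are all available.
```

Because `run()` returns rather than calling `sys.exit()` the way `unittest.main()` does, this pattern is friendlier to hosting environments like the DLR ScriptEngine.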
|
73,675,635
|
I have 7 Python dictionaries, each named in the format `songN`, for example `song1`, `song2`, etc. Each dictionary includes the following information about a song: `name`, `duration`, `artist`. I created a list of songs, called `playlist_full`, of the form `[song1, song2, song3, ..., song7]`.
Here is my code:
```py
song1 = {"name": "Wake Me Up", "duration": 3.5, "artist": "Wham"}
song2 = {"name": "I Want Your...", "duration": 4.3, "artist": "Wham"}
song3 = {"name": "Thriller", "duration": 3.8, "artist": "MJ"}
song4 = {"name": "Monster", "duration": 3.5, "artist": "Rhianna and Eminem"}
song5 = {"name": "Poison", "duration": 5.0, "artist": "Bel Biv Devoe"}
song6 = {"name": "Classic", "duration": 2.5, "artist": "MKTO"}
song7 = {"name": "Edge of Seventeen", "duration": 5.3, "artist": "Stevie Nicks"}
playlist_full = []
for i in range(1, 8):
    song_i = "song"+str(i)
    playlist_full.append(song_i)
```
Now I am trying to use an item in the `playlist_full` list to in turn get the name of the song in the corresponding dictionary. For example, to see the name of `song3`, I would like to run:
```py
playlist_full[2].get("name")
```
The problem is that while `playlist_full[2]` is `song3`, python recognizes it only as a *string*, and I need python to realize that that string is also the name of a dictionary. What code will allow me to use that string as the name of the corresponding dictionary?
Edit:
Based on the answer by @rob-g, the following additional lines of code produced the dictionary of songs that I wanted, as well as the method of accessing the name of `song3`:
```py
playlist_full = [eval(song) for song in playlist_full]
print(playlist_full[2]["name"])
```
|
2022/09/10
|
[
"https://Stackoverflow.com/questions/73675635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6267463/"
] |
You could use [eval()](https://www.w3schools.com/python/ref_func_eval.asp) like:
```py
eval(playlist_full[2]).get("name")
```
which would do exactly what you want, evaluate the string as python code.
It's not great practice though. It would be better/safer to store the songs themselves in a dictionary or list that can have non-eval'd references.
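To illustrate that safer alternative, keep the song dictionaries in a container keyed by name and look the string up instead of eval'ing it (a sketch using a subset of the asker's data):

```python
# Store the songs in one dict keyed by their would-be variable names.
songs = {
    "song1": {"name": "Wake Me Up", "duration": 3.5, "artist": "Wham"},
    "song2": {"name": "I Want Your...", "duration": 4.3, "artist": "Wham"},
    "song3": {"name": "Thriller", "duration": 3.8, "artist": "MJ"},
}

playlist_full = ["song1", "song2", "song3"]

# A plain dict lookup replaces eval(): no arbitrary code execution.
name = songs[playlist_full[2]]["name"]
```

This keeps the same "name string -> dictionary" mapping the question asks for, without `eval`'s security and readability downsides.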
|
You can use [`locals()`](https://docs.python.org/3/library/functions.html#locals) built-in function to do that:
```py
for i in range(1, 8):
    song_i = "song"+str(i)
    playlist_full.append(locals()[f'song{i}'])
```
|
73,675,635
|
I have 7 Python dictionaries, each named in the format `songN`, for example `song1`, `song2`, etc. Each dictionary includes the following information about a song: `name`, `duration`, `artist`. I created a list of songs, called `playlist_full`, of the form `[song1, song2, song3, ..., song7]`.
Here is my code:
```py
song1 = {"name": "Wake Me Up", "duration": 3.5, "artist": "Wham"}
song2 = {"name": "I Want Your...", "duration": 4.3, "artist": "Wham"}
song3 = {"name": "Thriller", "duration": 3.8, "artist": "MJ"}
song4 = {"name": "Monster", "duration": 3.5, "artist": "Rhianna and Eminem"}
song5 = {"name": "Poison", "duration": 5.0, "artist": "Bel Biv Devoe"}
song6 = {"name": "Classic", "duration": 2.5, "artist": "MKTO"}
song7 = {"name": "Edge of Seventeen", "duration": 5.3, "artist": "Stevie Nicks"}
playlist_full = []
for i in range(1, 8):
    song_i = "song"+str(i)
    playlist_full.append(song_i)
```
Now I am trying to use an item in the `playlist_full` list to in turn get the name of the song in the corresponding dictionary. For example, to see the name of `song3`, I would like to run:
```py
playlist_full[2].get("name")
```
The problem is that while `playlist_full[2]` is `song3`, python recognizes it only as a *string*, and I need python to realize that that string is also the name of a dictionary. What code will allow me to use that string as the name of the corresponding dictionary?
Edit:
Based on the answer by @rob-g, the following additional lines of code produced the dictionary of songs that I wanted, as well as the method of accessing the name of `song3`:
```py
playlist_full = [eval(song) for song in playlist_full]
print(playlist_full[2]["name"])
```
|
2022/09/10
|
[
"https://Stackoverflow.com/questions/73675635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6267463/"
] |
You could use [eval()](https://www.w3schools.com/python/ref_func_eval.asp) like:
```py
eval(playlist_full[2]).get("name")
```
which would do exactly what you want, evaluate the string as python code.
It's not great practice though. It would be better/safer to store the songs themselves in a dictionary or list that can have non-eval'd references.
|
```
varnames = locals()
playlist_full = []
for i in range(1, 8):
    song_i = "song"+str(i)
    playlist_full.append(varnames[song_i])
print(playlist_full[2].get("name"))
```
|
73,675,635
|
I have 7 Python dictionaries, each named in the format `songN`, for example `song1`, `song2`, etc. Each dictionary includes the following information about a song: `name`, `duration`, `artist`. I created a list of songs, called `playlist_full`, of the form `[song1, song2, song3, ..., song7]`.
Here is my code:
```py
song1 = {"name": "Wake Me Up", "duration": 3.5, "artist": "Wham"}
song2 = {"name": "I Want Your...", "duration": 4.3, "artist": "Wham"}
song3 = {"name": "Thriller", "duration": 3.8, "artist": "MJ"}
song4 = {"name": "Monster", "duration": 3.5, "artist": "Rhianna and Eminem"}
song5 = {"name": "Poison", "duration": 5.0, "artist": "Bel Biv Devoe"}
song6 = {"name": "Classic", "duration": 2.5, "artist": "MKTO"}
song7 = {"name": "Edge of Seventeen", "duration": 5.3, "artist": "Stevie Nicks"}
playlist_full = []
for i in range(1, 8):
    song_i = "song"+str(i)
    playlist_full.append(song_i)
```
Now I am trying to use an item in the `playlist_full` list to in turn get the name of the song in the corresponding dictionary. For example, to see the name of `song3`, I would like to run:
```py
playlist_full[2].get("name")
```
The problem is that while `playlist_full[2]` is `song3`, python recognizes it only as a *string*, and I need python to realize that that string is also the name of a dictionary. What code will allow me to use that string as the name of the corresponding dictionary?
Edit:
Based on the answer by @rob-g, the following additional lines of code produced the dictionary of songs that I wanted, as well as the method of accessing the name of `song3`:
```py
playlist_full = [eval(song) for song in playlist_full]
print(playlist_full[2]["name"])
```
|
2022/09/10
|
[
"https://Stackoverflow.com/questions/73675635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6267463/"
] |
You could use [eval()](https://www.w3schools.com/python/ref_func_eval.asp) like:
```py
eval(playlist_full[2]).get("name")
```
which would do exactly what you want, evaluate the string as python code.
It's not great practice though. It would be better/safer to store the songs themselves in a dictionary or list that can have non-eval'd references.
|
It's completely redundant to keep your data as both individual variables and members of a list. If a list is what you need, create it that way in the first place.
```
playlist_full = [{"name": "Wake Me Up", "duration": 3.5, "artist": "Wham"},
{"name": "I Want Your...", "duration": 4.3, "artist": "Wham"},
{"name": "Thriller", "duration": 3.8, "artist": "MJ"},
{"name": "Monster", "duration": 3.5, "artist": "Rhianna and Eminem"},
{"name": "Poison", "duration": 5.0, "artist": "Bel Biv Devoe"},
{"name": "Classic", "duration": 2.5, "artist": "MKTO"},
{"name": "Edge of Seventeen", "duration": 5.3, "artist": "Stevie Nicks"}]
```
|
73,675,635
|
I have 7 Python dictionaries, each named in the format `songN`, for example `song1`, `song2`, etc. Each dictionary includes the following information about a song: `name`, `duration`, `artist`. I created a list of songs, called `playlist_full`, of the form `[song1, song2, song3, ..., song7]`.
Here is my code:
```py
song1 = {"name": "Wake Me Up", "duration": 3.5, "artist": "Wham"}
song2 = {"name": "I Want Your...", "duration": 4.3, "artist": "Wham"}
song3 = {"name": "Thriller", "duration": 3.8, "artist": "MJ"}
song4 = {"name": "Monster", "duration": 3.5, "artist": "Rhianna and Eminem"}
song5 = {"name": "Poison", "duration": 5.0, "artist": "Bel Biv Devoe"}
song6 = {"name": "Classic", "duration": 2.5, "artist": "MKTO"}
song7 = {"name": "Edge of Seventeen", "duration": 5.3, "artist": "Stevie Nicks"}
playlist_full = []
for i in range(1, 8):
    song_i = "song"+str(i)
    playlist_full.append(song_i)
```
Now I am trying to use an item in the `playlist_full` list to in turn get the name of the song in the corresponding dictionary. For example, to see the name of `song3`, I would like to run:
```py
playlist_full[2].get("name")
```
The problem is that while `playlist_full[2]` is `song3`, python recognizes it only as a *string*, and I need python to realize that that string is also the name of a dictionary. What code will allow me to use that string as the name of the corresponding dictionary?
Edit:
Based on the answer by @rob-g, the following additional lines of code produced the dictionary of songs that I wanted, as well as the method of accessing the name of `song3`:
```py
playlist_full = [eval(song) for song in playlist_full]
print(playlist_full[2]["name"])
```
|
2022/09/10
|
[
"https://Stackoverflow.com/questions/73675635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6267463/"
] |
```
varnames = locals()
playlist_full = []
for i in range(1, 8):
    song_i = "song"+str(i)
    playlist_full.append(varnames[song_i])
print(playlist_full[2].get("name"))
```
|
You can use [`locals()`](https://docs.python.org/3/library/functions.html#locals) built-in function to do that:
```py
for i in range(1, 8):
    song_i = "song"+str(i)
    playlist_full.append(locals()[f'song{i}'])
```
|
73,675,635
|
I have 7 Python dictionaries, each named in the format `songN`, for example `song1`, `song2`, etc. Each dictionary includes the following information about a song: `name`, `duration`, `artist`. I created a list of songs, called `playlist_full`, of the form `[song1, song2, song3, ..., song7]`.
Here is my code:
```py
song1 = {"name": "Wake Me Up", "duration": 3.5, "artist": "Wham"}
song2 = {"name": "I Want Your...", "duration": 4.3, "artist": "Wham"}
song3 = {"name": "Thriller", "duration": 3.8, "artist": "MJ"}
song4 = {"name": "Monster", "duration": 3.5, "artist": "Rhianna and Eminem"}
song5 = {"name": "Poison", "duration": 5.0, "artist": "Bel Biv Devoe"}
song6 = {"name": "Classic", "duration": 2.5, "artist": "MKTO"}
song7 = {"name": "Edge of Seventeen", "duration": 5.3, "artist": "Stevie Nicks"}
playlist_full = []
for i in range(1, 8):
    song_i = "song"+str(i)
    playlist_full.append(song_i)
```
Now I am trying to use an item in the `playlist_full` list to in turn get the name of the song in the corresponding dictionary. For example, to see the name of `song3`, I would like to run:
```py
playlist_full[2].get("name")
```
The problem is that while `playlist_full[2]` is `song3`, python recognizes it only as a *string*, and I need python to realize that that string is also the name of a dictionary. What code will allow me to use that string as the name of the corresponding dictionary?
Edit:
Based on the answer by @rob-g, the following additional lines of code produced the dictionary of songs that I wanted, as well as the method of accessing the name of `song3`:
```py
playlist_full = [eval(song) for song in playlist_full]
print(playlist_full[2]["name"])
```
|
2022/09/10
|
[
"https://Stackoverflow.com/questions/73675635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6267463/"
] |
It's completely redundant to keep your data as both individual variables and members of a list. If a list is what you need, create it that way in the first place.
```
playlist_full = [{"name": "Wake Me Up", "duration": 3.5, "artist": "Wham"},
{"name": "I Want Your...", "duration": 4.3, "artist": "Wham"},
{"name": "Thriller", "duration": 3.8, "artist": "MJ"},
{"name": "Monster", "duration": 3.5, "artist": "Rhianna and Eminem"},
{"name": "Poison", "duration": 5.0, "artist": "Bel Biv Devoe"},
{"name": "Classic", "duration": 2.5, "artist": "MKTO"},
{"name": "Edge of Seventeen", "duration": 5.3, "artist": "Stevie Nicks"}]
```
|
You can use [`locals()`](https://docs.python.org/3/library/functions.html#locals) built-in function to do that:
```py
for i in range(1, 8):
    song_i = "song"+str(i)
    playlist_full.append(locals()[f'song{i}'])
```
|
67,180,248
|
How can I get the text of a button clicked and return it to python? The button is selected using a mouse-click generated by the user in the Selenium WebDriver browser.
I'm trying to do as follows:
```
x=driver.execute_script("$(document).click(function(event){var text= $(event.target).text(); return text})")
```
but when I print the contents of `x` it returns None. When I try to use an alert to display the contents of `text`, it returns the correct contents but I want to return it in Python.
What am I doing wrong?
|
2021/04/20
|
[
"https://Stackoverflow.com/questions/67180248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12296610/"
] |
Once you click on the button,
You can extract the text of the button after the click only if the element is still present in the DOM; otherwise you can't.
|
```
# Identify element
element = driver.find_element_by_id("id")
# Click element
element.click()
# Get text
print("Text is: " + element.text)
# Or
print("Text is: " + element.get_attribute("innerHTML"))
```
|
50,639,390
|
I am trying to write a music program in Python that takes some music written by the user in a text file and turns it into a midi. I'm not particularly experienced with python at this stage so I'm not sure what the reason behind this issue is. I am trying to write the source file parser for the program and part of this process is to create a list containing all the lines of the text file and breaking each line down into its own list to make them easier to work with. I'm successfully able to do that, but there is a problem.
I want the code to **ignore** lines that are only whitespace (So the user can make their file at least kind of readable without having all the lines thrown together one on top of the other), but I can't seem to figure out how to do that. I tried doing this
```
with open(path, "r") as infile:
    for row in infile:
        if len(row):
            srclines.append(row.split())
```
And this **does** work as far as creating the list of lines and separating each word goes, BUT it still appends the empty lines that are only whitespace... I confirmed this by doing this
```
for entry in srclines:
    print entry
```
Which gives, for example
```
['This', 'is']
[]
['A', 'test']
```
With the original text being
```
This is
A test
```
But strangely, if during the printing stage I do another len() check then the empty lines are actually **ignored** like I want, and it looks like this
```
['This', 'is']
['A', 'test']
```
What is the cause of this? Does this mean I can only go over the list and remove empty entries after I generate it? Or am I just doing the line import code wrong? By the way, I am using Python 3.6 to test this code.
|
2018/06/01
|
[
"https://Stackoverflow.com/questions/50639390",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6283375/"
] |
`row` contains a newline, so it's not empty. But `row.split()` doesn't find any non-whitespace characters, so it returns an empty list.
Use
```
if len(row.strip()):
```
to ignore the newline (and any other leading/trailing spaces).
Or more simply:
```
if row.strip():
```
since an empty string is falsy.
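Putting the pieces together (with `io.StringIO` standing in for the opened file so the sketch is self-contained):

```python
import io

# Stand-in for open(path, "r"); includes a whitespace-only line and a blank line.
infile = io.StringIO("This is\n   \n\nA test\n")

srclines = []
for row in infile:
    if row.strip():               # skip empty and whitespace-only lines
        srclines.append(row.split())
```

Here `"   \n".strip()` is `""`, which is falsy, so the whitespace-only line is filtered out even though `len(row)` would have been non-zero.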
|
Try creating a [list comprehension](https://www.python-course.eu/python3_list_comprehension.php):
```
with open('d.txt', "r") as infile:
    print([i.strip().split() for i in infile if i.strip()])
```
Output:
```
[['This', 'is'], ['A', 'test']]
```
|
50,639,390
|
I am trying to write a music program in Python that takes some music written by the user in a text file and turns it into a midi. I'm not particularly experienced with python at this stage so I'm not sure what the reason behind this issue is. I am trying to write the source file parser for the program and part of this process is to create a list containing all the lines of the text file and breaking each line down into its own list to make them easier to work with. I'm successfully able to do that, but there is a problem.
I want the code to **ignore** lines that are only whitespace (So the user can make their file at least kind of readable without having all the lines thrown together one on top of the other), but I can't seem to figure out how to do that. I tried doing this
```
with open(path, "r") as infile:
for row in infile:
if len(row):
srclines.append(row.split())
```
And this **does** work as far as creating the list of lines and separating each word goes, BUT it still appends the empty lines that are only whitespace... I confirmed this by doing this
```
for entry in srclines:
print entry
```
Which gives, for example
```
['This', 'is']
[]
['A', 'test']
```
With the original text being
```
This is
A test
```
But strangely, if during the printing stage I do another len() check then the empty lines are actually **ignored** like I want, and it looks like this
```
['This', 'is']
['A', 'test']
```
What is the cause of this? Does this mean I can only go over the list and remove empty entries after I generate It? Or am I just doing the line import code wrong? I am using python 3.6 to test this code with by the way
|
2018/06/01
|
[
"https://Stackoverflow.com/questions/50639390",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6283375/"
] |
`row` contains a newline, so it's not empty. But `row.split()` doesn't find any non-whitespace characters, so it returns an empty list.
Use
```
if len(row.strip()):
```
to ignore the newline (and any other leading/trailing spaces).
Or more simply:
```
if row.strip():
```
since an empty string is falsy.
|
testdoc.txt has a lot of empty lines, but they are filtered out of the output:
```
src = 'testdoc.txt'
with open(src, 'r') as f:
    for r in f:
        if r.strip():  # skip empty and whitespace-only lines
            print(r.strip())
```
and now you can obviously collect the lines into a list (all at once or line by line) instead of printing, whatever fits your further logic
|
14,633,021
|
I have an AppHarbor app that I'm using as an external service which will get requested by my other servers which use Google App Engine (python). The appharbor app is basically getting pinged a lot to process some data that I send it.
Because I'll be constantly pinging the service, and time is important, is it possible to reference my appharbor app through its IP address and not the hostname? Basically I want to eliminate having to do DNS lookups and speed up the response.
I'm using Google App Engine's urlfetch (<https://developers.google.com/appengine/docs/python/urlfetch/overview>) to do the request. Is caching the ip address something urlfetch is already doing under the covers? If not, is it possible to do so?
|
2013/01/31
|
[
"https://Stackoverflow.com/questions/14633021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/361897/"
] |
I doubt that DNS lookups will be your bottleneck; in any case, as far as I can see, DNS lookups are cached by the system (at least for the TTL).
|
Sign up for the AppEngine Sockets Trusted Tester ([here](https://docs.google.com/a/postmaster.io/spreadsheet/viewform?formkey=dF9QR3pnQ2pNa0dqalViSTZoenVkcHc6MQ#gid=0)) and use the normal python:
```
socket.gethostbyname(...)
```
|
14,633,021
|
I have an AppHarbor app that I'm using as an external service which will get requested by my other servers which use Google App Engine (python). The appharbor app is basically getting pinged a lot to process some data that I send it.
Because I'll be constantly pinging the service, and time is important, is it possible to reference my appharbor app through its IP address and not the hostname? Basically I want to eliminate having to do DNS lookups and speed up the response.
I'm using Google App Engine's urlfetch (<https://developers.google.com/appengine/docs/python/urlfetch/overview>) to do the request. Is caching the ip address something urlfetch is already doing under the covers? If not, is it possible to do so?
|
2013/01/31
|
[
"https://Stackoverflow.com/questions/14633021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/361897/"
] |
You can theoretically send requests directly to an IP address, but you would have to also [pass the host header](http://drewish.com/content/2010/03/using_curl_and_the_host_header_to_bypass_a_load_balancer) so that the AppHarbor routing layer can figure out what application gets the request.
As Shay mentions, you shouldn't do this though - DNS queries are cached and are not likely to be a bottleneck, and you're setting yourself up for breakage because the IP address might change when the domain is pointed at a new IP.
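For illustration, here is a sketch (Python 3 `urllib`; the IP `203.0.113.10` and hostname `myapp.apphb.com` are made-up placeholders) of what "request the IP, pass the real hostname in the Host header" looks like. It only builds the request object; nothing is sent:

```python
import urllib.request

# Hypothetical IP and hostname -- substitute your app's actual values.
req = urllib.request.Request("http://203.0.113.10/ping")
req.add_header("Host", "myapp.apphb.com")

# urllib would now connect to the IP directly, while the Host header
# tells the routing layer which application the request is for.
```

If you do this, remember the caveat above: a hard-coded IP silently breaks the moment the platform moves your app.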
|
Sign up for the AppEngine Sockets Trusted Tester ([here](https://docs.google.com/a/postmaster.io/spreadsheet/viewform?formkey=dF9QR3pnQ2pNa0dqalViSTZoenVkcHc6MQ#gid=0)) and use the normal python:
```
socket.gethostbyname(...)
```
|
14,633,021
|
I have an AppHarbor app that I'm using as an external service which will get requested by my other servers which use Google App Engine (python). The appharbor app is basically getting pinged a lot to process some data that I send it.
Because I'll be constantly pinging the service, and time is important, is it possible to reference my appharbor app through its IP address and not the hostname? Basically I want to eliminate having to do DNS lookups and speed up the response.
I'm using Google App Engine's urlfetch (<https://developers.google.com/appengine/docs/python/urlfetch/overview>) to do the request. Is caching the ip address something urlfetch is already doing under the covers? If not, is it possible to do so?
|
2013/01/31
|
[
"https://Stackoverflow.com/questions/14633021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/361897/"
] |
I doubt that DNS lookups will be your bottleneck; in any case, as far as I can see, DNS lookups are cached by the system (at least for the TTL).
|
You can theoretically send requests directly to an IP address, but you would have to also [pass the host header](http://drewish.com/content/2010/03/using_curl_and_the_host_header_to_bypass_a_load_balancer) so that the AppHarbor routing layer can figure out what application gets the request.
As Shay mentions, you shouldn't do this though - DNS queries are cached and are not likely to be a bottleneck, and you're setting yourself up for breakage because the IP address might change when the domain is pointed at a new IP.
|
28,690,325
|
I have a problem with this piece of code on Python 2.7. It takes approximately 60 seconds for an object with slightly more than 70000 items. How does it work? It gets an object with paths to other objects and converts them to ASCII strings. I think the reason it is so slow is the loops.
```
def createPath(self, path, NameOfFile):
    temp = []
    for j in range(path.shape[0]):
        rr = path[j][0]
        obj = NameOfFile[rr]
        string = ''.join(chr(i) for i in obj[:])
        string = string.replace("aaaa", "bbbb")
        temp.append(string)
    return np.array(temp)
```
It is not my own code, I found it on the Web, so my question is: how can I make this piece of code faster? I don't have much experience with Python, but maybe there are some useful libraries or tricks that may help. I appreciate all help; any ideas may be helpful.
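For reference, the character-by-character `join` is usually the slow part of a loop like this; when each `obj` holds integer byte values, a single `bytes(...).decode()` call does the conversion at C speed (Python 3 syntax shown; the code codes below are made-up sample data):

```python
# Stand-in for one `obj`: a list of integer character codes.
codes = [72, 101, 108, 108, 111]

slow = ''.join(chr(i) for i in codes)   # the approach in the snippet above
fast = bytes(codes).decode('ascii')     # one C-level call instead of a Python loop
```

Both produce the same string, but the `bytes` route avoids a Python-level function call per character.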
|
2015/02/24
|
[
"https://Stackoverflow.com/questions/28690325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4473386/"
] |
The ID of an element must be unique; when you use an ID selector it will return only the first element with that ID, so all the click handlers are added to the first button.
Use classes and event delegation
```
$(document).ready(function () {
    $("#image-btn").click(function () {
        var $imageElement = $("<div class='image_element' ><div class='image_holder' align='center'><input type='image' src='{{URL::asset('images/close-icon.png')}}' name='closeStory' class='closebtn' width='22px'><button type='button' class='img_btn' >Upload Image</button></div></div>");
        $("#element-holder").append($imageElement);
        $imageElement.fadeIn(1000);
    });
    $("#element-holder").on('click', '.closebtn', function () {
        $(this).closest('.image_element').remove();
    });
});
```
Demo: [Fiddle](http://jsfiddle.net/arunpjohny/mnxtL5k7/)
Further reading:
* [Event binding on dynamically created elements?](https://stackoverflow.com/questions/203198/event-binding-on-dynamically-created-elements)
|
Use `$(this)` instead of `$imageElement`:
```
$(document).ready(function(){
$("#image-btn").click(function(){
var $imageElement = $("<div class='image_element' id='image-element'><div class='image_holder' align='center'><input type='image' src='{{URL::asset('images/close-icon.png')}}' name='closeStory' class='closebtn' id='close-img-btn' width='22px'><button type='button' class='img_btn' id='img-btn'>Upload Image</button></div></div>");
$("#element-holder").append($imageElement);
$imageElement.fadeIn(1000);
$("#close-img-btn").click(function(){
            $(this).closest('.image_element').remove();
});
});
});
```
|
28,690,325
|
I have a problem: this piece of Python 2.7 code takes roughly 60 seconds to run on an object with slightly more than 70,000 items. How does it work? It takes an object holding paths to other objects and converts them to ASCII strings. I suspect the loops are why it is so slow.
```
def createPath(self, path, NameOfFile ):
temp = []
for j in range( path.shape[0] ):
rr = path[j][0]
obj = NameOfFile[rr]
string = ''.join(chr(i) for i in obj[:])
string = string.replace("aaaa","bbbb")
temp.append(string)
return ( np.array(temp) )
```
It is not my own code; I found it on the web. So my question is: how can I make this piece of code faster? I don't have much experience with Python, but maybe there are useful libraries or tricks that could help. I appreciate any help; all ideas are welcome.
|
2015/02/24
|
[
"https://Stackoverflow.com/questions/28690325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4473386/"
] |
The ID of an element must be unique; when you use an ID selector it returns only the first element with that ID, so all the click handlers get attached to the first button.
Use classes and event delegation instead:
```
$(document).ready(function () {
$("#image-btn").click(function () {
var $imageElement = $("<div class='image_element' ><div class='image_holder' align='center'><input type='image' src='{{URL::asset('images/close-icon.png')}}' name='closeStory' class='closebtn' width='22px'><button type='button' class='img_btn' >Upload Image</button></div></div>");
$("#element-holder").append($imageElement);
$imageElement.fadeIn(1000);
});
$("#element-holder").on('click', '.closebtn', function(){
$(this).closest('.image_element').remove();
})
});
```
Demo: [Fiddle](http://jsfiddle.net/arunpjohny/mnxtL5k7/)
Further reading:
* [Event binding on dynamically created elements?](https://stackoverflow.com/questions/203198/event-binding-on-dynamically-created-elements)
|
You need to remove the container element closest to the button you pressed.
For example:
```
$("#close-img-btn").click(function(){
    $(this).closest('.content-div').remove();
});
```
|
28,690,325
|
I have a problem: this piece of Python 2.7 code takes roughly 60 seconds to run on an object with slightly more than 70,000 items. How does it work? It takes an object holding paths to other objects and converts them to ASCII strings. I suspect the loops are why it is so slow.
```
def createPath(self, path, NameOfFile ):
temp = []
for j in range( path.shape[0] ):
rr = path[j][0]
obj = NameOfFile[rr]
string = ''.join(chr(i) for i in obj[:])
string = string.replace("aaaa","bbbb")
temp.append(string)
return ( np.array(temp) )
```
It is not my own code; I found it on the web. So my question is: how can I make this piece of code faster? I don't have much experience with Python, but maybe there are useful libraries or tricks that could help. I appreciate any help; all ideas are welcome.
|
2015/02/24
|
[
"https://Stackoverflow.com/questions/28690325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4473386/"
] |
You need to remove the container element closest to the button you pressed.
For example:
```
$("#close-img-btn").click(function(){
    $(this).closest('.content-div').remove();
});
```
|
Use `$(this)` instead of `$imageElement`:
```
$(document).ready(function(){
$("#image-btn").click(function(){
var $imageElement = $("<div class='image_element' id='image-element'><div class='image_holder' align='center'><input type='image' src='{{URL::asset('images/close-icon.png')}}' name='closeStory' class='closebtn' id='close-img-btn' width='22px'><button type='button' class='img_btn' id='img-btn'>Upload Image</button></div></div>");
$("#element-holder").append($imageElement);
$imageElement.fadeIn(1000);
$("#close-img-btn").click(function(){
            $(this).closest('.image_element').remove();
});
});
});
```
|
17,438,852
|
I want to pass in a string to my python script which contains escape sequences such as: `\x00` or `\t`, and spaces.
However when I pass in my string as:
```
some string\x00 more \tstring
```
Python treats my string as a raw string: when I print it from inside the script, it prints the string literally and does not treat the `\` as the start of an escape sequence.
i.e. it prints exactly the string above.
**UPDATE:(AGAIN)**
I'm using *Python 2.7.5*. To reproduce, create a script, let's call it `myscript.py`:
```
import sys
print(sys.argv[1])
```
Now save it and call it from the Windows command prompt like this:
```
c:\Python27\python.exe myscript.py "abcd \x00 abcd"
```
the result I get is:
```
> 'abcd \x00 abcd'
```
P.S. In my actual script I am using OptionParser, but both have the same effect. Maybe there is a parameter I can set on OptionParser to handle escape sequences?
|
2013/07/03
|
[
"https://Stackoverflow.com/questions/17438852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2059819/"
] |
The string you receive in `sys.argv[1]` is exactly what you typed on the command line. Its backslash sequences are left intact, not interpreted.
To interpret them, follow [this answer](https://stackoverflow.com/questions/4020539/process-escape-sequences-in-a-string-in-python): basically use `.decode('string_escape')`.
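As an aside (not from the original answer): `str.decode` no longer exists on Python 3, where a rough equivalent is the `codecs` module's `unicode_escape` codec. A minimal sketch, safe only for ASCII input:

```python
import codecs

raw = r"abcd \t \x41 abcd"                      # what sys.argv[1] would contain
decoded = codecs.decode(raw, 'unicode_escape')  # interprets \t and \x41 -> 'A'
print(decoded)
```

Note that `unicode_escape` will mangle non-ASCII characters in the input; for Python 2.7, stick with `.decode('string_escape')` as above.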
|
I don't know that you can parse entire strings without writing a custom parser, but optparse supports [sending inputs in different formats](http://docs.python.org/2/library/optparse.html#standard-option-types) (hexadecimal, binary, etc.).
```
from optparse import OptionParser
parser = OptionParser()
parser.add_option("-n", type="int", dest="num")
options, args = parser.parse_args()
print options
```
then when you run
```
C:\Users\John\Desktop\script.py -n 0b10
```
you get an output of
```
{'num': 2}
```
The reason I say you'll have to implement a custom parser to make this work is that it isn't Python that changes the input; rather, it is [something](https://stackoverflow.com/questions/5994827/how-to-read-argv-value-without-escape-the-string) [the shell does](https://stackoverflow.com/questions/9590838/python-escape-special-characters-in-sys-argv). Python might have a built-in module to handle this, but I am not aware of one if it exists.
|
17,438,852
|
I want to pass in a string to my python script which contains escape sequences such as: `\x00` or `\t`, and spaces.
However when I pass in my string as:
```
some string\x00 more \tstring
```
Python treats my string as a raw string: when I print it from inside the script, it prints the string literally and does not treat the `\` as the start of an escape sequence.
i.e. it prints exactly the string above.
**UPDATE:(AGAIN)**
I'm using *Python 2.7.5*. To reproduce, create a script, let's call it `myscript.py`:
```
import sys
print(sys.argv[1])
```
Now save it and call it from the Windows command prompt like this:
```
c:\Python27\python.exe myscript.py "abcd \x00 abcd"
```
the result I get is:
```
> 'abcd \x00 abcd'
```
P.S. In my actual script I am using OptionParser, but both have the same effect. Maybe there is a parameter I can set on OptionParser to handle escape sequences?
|
2013/07/03
|
[
"https://Stackoverflow.com/questions/17438852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2059819/"
] |
The string you receive in `sys.argv[1]` is exactly what you typed on the command line. Its backslash sequences are left intact, not interpreted.
To interpret them, follow [this answer](https://stackoverflow.com/questions/4020539/process-escape-sequences-in-a-string-in-python): basically use `.decode('string_escape')`.
|
myscript.py contains:
```
import sys
print(sys.argv[1].decode('string-escape'))
```
Result:
abcd abcd
|
17,438,852
|
I want to pass in a string to my python script which contains escape sequences such as: `\x00` or `\t`, and spaces.
However when I pass in my string as:
```
some string\x00 more \tstring
```
Python treats my string as a raw string: when I print it from inside the script, it prints the string literally and does not treat the `\` as the start of an escape sequence.
i.e. it prints exactly the string above.
**UPDATE:(AGAIN)**
I'm using *Python 2.7.5*. To reproduce, create a script, let's call it `myscript.py`:
```
import sys
print(sys.argv[1])
```
Now save it and call it from the Windows command prompt like this:
```
c:\Python27\python.exe myscript.py "abcd \x00 abcd"
```
the result I get is:
```
> 'abcd \x00 abcd'
```
P.S. In my actual script I am using OptionParser, but both have the same effect. Maybe there is a parameter I can set on OptionParser to handle escape sequences?
|
2013/07/03
|
[
"https://Stackoverflow.com/questions/17438852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2059819/"
] |
myscript.py contains:
```
import sys
print(sys.argv[1].decode('string-escape'))
```
Result:
abcd abcd
|
I don't know that you can parse entire strings without writing a custom parser, but optparse supports [sending inputs in different formats](http://docs.python.org/2/library/optparse.html#standard-option-types) (hexadecimal, binary, etc.).
```
from optparse import OptionParser
parser = OptionParser()
parser.add_option("-n", type="int", dest="num")
options, args = parser.parse_args()
print options
```
then when you run
```
C:\Users\John\Desktop\script.py -n 0b10
```
you get an output of
```
{'num': 2}
```
The reason I say you'll have to implement a custom parser to make this work is that it isn't Python that changes the input; rather, it is [something](https://stackoverflow.com/questions/5994827/how-to-read-argv-value-without-escape-the-string) [the shell does](https://stackoverflow.com/questions/9590838/python-escape-special-characters-in-sys-argv). Python might have a built-in module to handle this, but I am not aware of one if it exists.
|
61,512,822
|
Running in Jupyter-notebook
Python version 3.6
Pyspark version 2.4.5
Hadoop version 2.7.3
I essentially have the same issue described in [Unable to write spark dataframe to a parquet file format to C drive in PySpark](https://stackoverflow.com/questions/59220832/unable-to-write-spark-dataframe-to-a-parquet-file-format-to-c-drive-in-pyspark/59223439#59223439?newreg=6dfdf1ebd3c94e118056c86a8691342a)
Steps I have taken:
1. Copied the hadoop-2.7.1 binaries offered at <https://github.com/steveloughran/winutils> to a folder in the C root directory.
2. Created a HADOOP\_HOME environment variable and pointed it to the directory mentioned above (i.e. C:\hadoop-2.7.1).
Below is the command I am trying to run and the error I am getting:
```
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession.builder.getOrCreate()
df_spark_scaled.write.format('parquet').save('ExoplanetSparkDF_ETL.parquet')
```
```
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-8-3c18766c3167> in <module>
4 #os.environ['HADOOP_HOME'] = "C:\hadoop-2.7.1"
5 #sys.path.append("C:\hadoop-2.7.1\bin")
----> 6 df_spark_scaled.write.format('parquet').save('ExoplanetSparkDF_ETL.parquet')
~\anaconda3\lib\site-packages\pyspark\sql\readwriter.py in save(self, path, format, mode, partitionBy, **options)
737 self._jwrite.save()
738 else:
--> 739 self._jwrite.save(path)
740
741 @since(1.4)
~\anaconda3\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
~\anaconda3\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
~\anaconda3\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o383.save.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 9.0 failed 1 times, most recent failure: Lost task 0.0 in stage 9.0 (TID 9, localhost, executor driver): ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:151)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:236)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
... 32 more
Caused by: ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:151)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:236)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
```
|
2020/04/29
|
[
"https://Stackoverflow.com/questions/61512822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13436683/"
] |
You have to use `$sum` to add up the size of each array, like this:
```js
{
"$group": {
"_id": {
"vehicleid": "$vehicleid",
"date": "$date"
},
"count": { "$sum": { "$size": "$points" } }
}
}
```
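To illustrate what this pipeline computes, here is the same grouping in plain Python (the sample documents are made up for the example, mirroring a collection with a `points` array per document):

```python
from collections import defaultdict

# Made-up sample documents with the assumed shape of the collection.
docs = [
    {"vehicleid": 1, "date": "2020-04-29", "points": [{"x": 0}, {"x": 1}]},
    {"vehicleid": 1, "date": "2020-04-29", "points": [{"x": 2}]},
    {"vehicleid": 2, "date": "2020-04-29", "points": [{"x": 3}]},
]

counts = defaultdict(int)
for doc in docs:
    # $size -> len(points); $sum accumulates per (vehicleid, date) group
    counts[(doc["vehicleid"], doc["date"])] += len(doc["points"])
```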
|
**You can use this pipeline** (note that the `count` accumulator belongs outside `_id`):
```
$group : {
    _id : {
        "vehicleid": "$vehicleid",
        "date": "$date"
    },
    count: { $sum: 1 }
}
```
This counts documents per group; to count the elements of the `points` array, sum `{ $size: "$points" }` instead.
```
|
61,512,822
|
Running in Jupyter-notebook
Python version 3.6
Pyspark version 2.4.5
Hadoop version 2.7.3
I essentially have the same issue described in [Unable to write spark dataframe to a parquet file format to C drive in PySpark](https://stackoverflow.com/questions/59220832/unable-to-write-spark-dataframe-to-a-parquet-file-format-to-c-drive-in-pyspark/59223439#59223439?newreg=6dfdf1ebd3c94e118056c86a8691342a)
Steps I have taken:
1. Copied the hadoop-2.7.1 binaries offered at <https://github.com/steveloughran/winutils> to a folder in the C root directory.
2. Created a HADOOP\_HOME environment variable and pointed it to the directory mentioned above (i.e. C:\hadoop-2.7.1).
Below is the command I am trying to run and the error I am getting:
```
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession.builder.getOrCreate()
df_spark_scaled.write.format('parquet').save('ExoplanetSparkDF_ETL.parquet')
```
```
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-8-3c18766c3167> in <module>
4 #os.environ['HADOOP_HOME'] = "C:\hadoop-2.7.1"
5 #sys.path.append("C:\hadoop-2.7.1\bin")
----> 6 df_spark_scaled.write.format('parquet').save('ExoplanetSparkDF_ETL.parquet')
~\anaconda3\lib\site-packages\pyspark\sql\readwriter.py in save(self, path, format, mode, partitionBy, **options)
737 self._jwrite.save()
738 else:
--> 739 self._jwrite.save(path)
740
741 @since(1.4)
~\anaconda3\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
~\anaconda3\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
~\anaconda3\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o383.save.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 9.0 failed 1 times, most recent failure: Lost task 0.0 in stage 9.0 (TID 9, localhost, executor driver): ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:151)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:236)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
... 32 more
Caused by: ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:151)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:236)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
```
|
2020/04/29
|
[
"https://Stackoverflow.com/questions/61512822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13436683/"
] |
You have to use `$sum` to sum the sizes of the arrays, like this:
```js
{
"$group": {
"_id": {
"vehicleid": "$vehicleid",
"date": "$date"
},
"count": { "$sum": { "$size": "$points" } }
}
}
```
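To see what this stage computes, here is a pure-Python sketch run on a few hypothetical documents (not from the question's data): grouping by the compound key and summing each document's array length mirrors `$sum` of `$size`.

```python
# Pure-Python sketch of the $group/$size pipeline above,
# using hypothetical sample documents.
docs = [
    {"vehicleid": 1, "date": "2020-04-01", "points": [[0, 0], [1, 1]]},
    {"vehicleid": 1, "date": "2020-04-01", "points": [[2, 2]]},
    {"vehicleid": 2, "date": "2020-04-01", "points": [[3, 3], [4, 4], [5, 5]]},
]

counts = {}
for d in docs:
    key = (d["vehicleid"], d["date"])                    # the compound _id
    counts[key] = counts.get(key, 0) + len(d["points"])  # $sum of $size

print(counts)  # {(1, '2020-04-01'): 3, (2, '2020-04-01'): 3}
```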
|
You can use any of the following aggregation pipelines. You will get the size of the `points` array field. Each pipeline uses a different approach, and the output details differ, but the size information will be the same.
The code runs with PyMongo:
```
pipeline = [
{
"$unwind": "$points"
},
{
"$group": {
"_id": { "vehicleid": "$vehicleid", "date": "$date" },
"count": { "$sum": 1 }
}
}
]
pipeline = [
{
"$addFields": { "count": { "$size": "$points" } }
}
]
```
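The first (`$unwind` + `$group`) approach can also be sketched in pure Python on hypothetical documents: each array element becomes its own record, and records are then counted per key.

```python
# Pure-Python sketch of the $unwind + $group pipeline on hypothetical docs.
docs = [
    {"vehicleid": 1, "date": "2020-04-01", "points": [[0, 0], [1, 1]]},
    {"vehicleid": 1, "date": "2020-04-01", "points": [[2, 2]]},
]

# $unwind: one record per array element
unwound = [
    {"vehicleid": d["vehicleid"], "date": d["date"], "points": p}
    for d in docs
    for p in d["points"]
]

# $group with {"$sum": 1}
counts = {}
for r in unwound:
    key = (r["vehicleid"], r["date"])
    counts[key] = counts.get(key, 0) + 1

print(counts)  # {(1, '2020-04-01'): 3}
```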
|
61,512,822
|
Running in Jupyter-notebook
Python version 3.6
Pyspark version 2.4.5
Hadoop version 2.7.3
I essentially have the same issue described in [Unable to write spark dataframe to a parquet file format to C drive in PySpark](https://stackoverflow.com/questions/59220832/unable-to-write-spark-dataframe-to-a-parquet-file-format-to-c-drive-in-pyspark/59223439#59223439?newreg=6dfdf1ebd3c94e118056c86a8691342a)
Steps I have taken:
1. Copied the hadoop-2.7.1 binaries offered at <https://github.com/steveloughran/winutils> to a folder in the C: root directory.
2. Created a HADOOP\_HOME environment variable and pointed it to the directory mentioned above (i.e. C:\hadoop-2.7.1)
Below is the command I am trying to run and the error I am getting:
```
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession.builder.getOrCreate()
df_spark_scaled.write.format('parquet').save('ExoplanetSparkDF_ETL.parquet')
```
```
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-8-3c18766c3167> in <module>
4 #os.environ['HADOOP_HOME'] = "C:\hadoop-2.7.1"
5 #sys.path.append("C:\hadoop-2.7.1\bin")
----> 6 df_spark_scaled.write.format('parquet').save('ExoplanetSparkDF_ETL.parquet')
~\anaconda3\lib\site-packages\pyspark\sql\readwriter.py in save(self, path, format, mode, partitionBy, **options)
737 self._jwrite.save()
738 else:
--> 739 self._jwrite.save(path)
740
741 @since(1.4)
~\anaconda3\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
~\anaconda3\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
~\anaconda3\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o383.save.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 9.0 failed 1 times, most recent failure: Lost task 0.0 in stage 9.0 (TID 9, localhost, executor driver): ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:151)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:236)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
... 32 more
Caused by: ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:151)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:236)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
```
|
2020/04/29
|
[
"https://Stackoverflow.com/questions/61512822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13436683/"
] |
You can use any of the following aggregation pipelines. You will get the size of the `points` array field. Each pipeline uses a different approach, and the output details differ, but the size information will be the same.
The code runs with PyMongo:
```
pipeline = [
{
"$unwind": "$points"
},
{
"$group": {
"_id": { "vehicleid": "$vehicleid", "date": "$date" },
"count": { "$sum": 1 }
}
}
]
pipeline = [
{
"$addFields": { "count": { "$size": "$points" } }
}
]
```
|
**You can follow this code**
```
$group: {
    _id: {
        "vehicleid": "$vehicleid",
        "date": "$date"
    },
    count: { $sum: 1 }
}
```
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I am first making them to date time and then try to make them as months:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
#Convert the date into the proper format so that date-time operations can be performed easily
```
df_Time_Table["Date"] = pd.to_datetime(df_Time_Table["Date"])
# Cal Year
df_Time_Table['Year'] = df_Time_Table['Date'].dt.strftime('%Y')
```
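For reference, a minimal runnable version of the snippet above (the two dates are hypothetical sample values in the question's YYYY-MM-DD format):

```python
import pandas as pd

# Hypothetical sample data in the question's YYYY-MM-DD format.
df_Time_Table = pd.DataFrame({'Date': ['2014-01-01', '2014-01-03']})
df_Time_Table['Date'] = pd.to_datetime(df_Time_Table['Date'])
df_Time_Table['Year'] = df_Time_Table['Date'].dt.strftime('%Y')
print(df_Time_Table['Year'].tolist())  # ['2014', '2014']
```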
|
When you write
```
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].dt.strftime('%m/%d')
```
it can be fixed: `errors='coerce'` turns values that cannot be parsed into `NaT` instead of raising an exception.
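A minimal sketch of the coercion behavior (the two-row frame below is hypothetical, not from the question's data):

```python
import pandas as pd

# Hypothetical column with one unparseable entry to demonstrate errors='coerce'.
df = pd.DataFrame({'Date': ['2014-01-01', 'not a date']})
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')  # bad value becomes NaT
print(df['Date'].isna().tolist())  # [False, True]
```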
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I am first making them to date time and then try to make them as months:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
Your problem here is that the dtype of 'Date' remained as str/object. You can use the `parse_dates` parameter when using `read_csv`
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', parse_dates= [col],encoding='utf-8-sig', usecols= ['Date', 'ids'],)
df['Month'] = df['Date'].dt.month
```
From [the documentation for the `parse_dates` parameter](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)
>
> **parse\_dates** : *bool or list of int or names or list of lists or dict, default False*
>
>
> The behavior is as follows:
>
>
> * boolean. If True -> try parsing the index.
> * list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
> * list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
> * dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’
>
>
> If a column or index cannot be represented as an array of datetimes, say because of an unparseable value or a mixture of timezones, the column or index will be returned unaltered as an object data type. For non-standard datetime parsing, use `pd.to_datetime` after `pd.read_csv`. To parse an index or column with a mixture of timezones, specify `date_parser` to be a partially-applied `pandas.to_datetime()` with `utc=True`. See Parsing a CSV with mixed timezones for more.
>
>
> Note: A fast-path exists for iso8601-formatted dates.
>
>
>
The relevant case for this question is the "list of int or names" one.
`col` is the column index (or name) of 'Date', which is then parsed as a separate date column.
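A self-contained sketch of the same idea, using an in-memory CSV with hypothetical rows instead of the question's file:

```python
import io
import pandas as pd

# In-memory stand-in for the question's CSV (hypothetical rows).
csv_data = io.StringIO("Date,ids\n2014-01-01,a\n2014-01-03,b\n")
df = pd.read_csv(csv_data, parse_dates=['Date'], usecols=['Date', 'ids'])
df['Month'] = df['Date'].dt.month  # works: 'Date' is already datetime64
print(df['Month'].tolist())  # [1, 1]
```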
|
#Convert the date into the proper format so that date-time operations can be performed easily
```
df_Time_Table["Date"] = pd.to_datetime(df_Time_Table["Date"])
# Cal Year
df_Time_Table['Year'] = df_Time_Table['Date'].dt.strftime('%Y')
```
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I am first making them to date time and then try to make them as months:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
Your problem here is that the dtype of 'Date' remained as str/object. You can use the `parse_dates` parameter when using `read_csv`
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', parse_dates= [col],encoding='utf-8-sig', usecols= ['Date', 'ids'],)
df['Month'] = df['Date'].dt.month
```
From [the documentation for the `parse_dates` parameter](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)
>
> **parse\_dates** : *bool or list of int or names or list of lists or dict, default False*
>
>
> The behavior is as follows:
>
>
> * boolean. If True -> try parsing the index.
> * list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
> * list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
> * dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’
>
>
> If a column or index cannot be represented as an array of datetimes, say because of an unparseable value or a mixture of timezones, the column or index will be returned unaltered as an object data type. For non-standard datetime parsing, use `pd.to_datetime` after `pd.read_csv`. To parse an index or column with a mixture of timezones, specify `date_parser` to be a partially-applied `pandas.to_datetime()` with `utc=True`. See Parsing a CSV with mixed timezones for more.
>
>
> Note: A fast-path exists for iso8601-formatted dates.
>
>
>
The relevant case for this question is the "list of int or names" one.
`col` is the column index (or name) of 'Date', which is then parsed as a separate date column.
|
`train_data=pd.read_csv("train.csv",parse_dates=["date"])`
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I am first making them to date time and then try to make them as months:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
`train_data=pd.read_csv("train.csv",parse_dates=["date"])`
|
When you write
```
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].dt.strftime('%m/%d')
```
it can be fixed: `errors='coerce'` turns values that cannot be parsed into `NaT` instead of raising an exception.
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I am first making them to date time and then try to make them as months:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
Your problem here is that the dtype of 'Date' remained as str/object. You can use the `parse_dates` parameter when using `read_csv`
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', parse_dates= [col],encoding='utf-8-sig', usecols= ['Date', 'ids'],)
df['Month'] = df['Date'].dt.month
```
From [the documentation for the `parse_dates` parameter](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)
>
> **parse\_dates** : *bool or list of int or names or list of lists or dict, default False*
>
>
> The behavior is as follows:
>
>
> * boolean. If True -> try parsing the index.
> * list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
> * list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
> * dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’
>
>
> If a column or index cannot be represented as an array of datetimes, say because of an unparseable value or a mixture of timezones, the column or index will be returned unaltered as an object data type. For non-standard datetime parsing, use `pd.to_datetime` after `pd.read_csv`. To parse an index or column with a mixture of timezones, specify `date_parser` to be a partially-applied `pandas.to_datetime()` with `utc=True`. See Parsing a CSV with mixed timezones for more.
>
>
> Note: A fast-path exists for iso8601-formatted dates.
>
>
>
The relevant case for this question is the "list of int or names" one.
`col` is the column index (or name) of 'Date', which is then parsed as a separate date column.
|
When you write
```
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].dt.strftime('%m/%d')
```
it can be fixed: `errors='coerce'` turns values that cannot be parsed into `NaT` instead of raising an exception.
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I am first making them to date time and then try to make them as months:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
First you need to define the format of the date column.
```
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d %H:%M:%S')
```
For your case, the format can be set to:
```
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')
```
After that you can set/change the desired output format as follows:
```
df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')
```
|
Convert the date into the proper format so that datetime operations can be performed easily:
```
df_Time_Table["Date"] = pd.to_datetime(df_Time_Table["Date"])
# Extract the year
df_Time_Table['Year'] = df_Time_Table['Date'].dt.strftime('%Y')
```
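As a quick sanity check (assuming pandas is installed; the two dates here are hypothetical, not the asker's data), note that `dt.strftime('%Y')` returns strings while the `dt.year` accessor returns integers:

```python
import pandas as pd

# Hypothetical two-row example
dates = pd.Series(pd.to_datetime(["2014-01-01", "2015-06-15"]))

years_str = dates.dt.strftime('%Y')  # string labels: '2014', '2015'
years_int = dates.dt.year            # integer values: 2014, 2015

print(years_str.tolist())  # ['2014', '2015']
print(years_int.tolist())  # [2014, 2015]
```

Use `strftime` when you need formatted labels, and the numeric accessors when you need values to group or compute on.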
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I am first making them to date time and then try to make them as months:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
Your problem here is that `to_datetime` silently failed, so the dtype remained as `str/object`. If you set the parameter `errors='coerce'`, then any string that fails to convert is set to `NaT`.
```
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
```
So you need to find out what is wrong with those specific row values.
See the [docs](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html)
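A minimal sketch of this debugging approach (the column values are hypothetical): coerce the column, then use the resulting `NaT`s to locate the offending rows:

```python
import pandas as pd

# Hypothetical column with one unparseable entry
df = pd.DataFrame({'Date': ['2014-01-01', 'not a date', '2014-01-03']})

df['Date'] = pd.to_datetime(df['Date'], errors='coerce')

# Rows that failed to convert show up as NaT
bad_rows = df[df['Date'].isna()]
print(bad_rows.index.tolist())  # [1]

# The .dt accessor now works, because the dtype is datetime64
print(df['Date'].dt.month.dropna().tolist())  # [1.0, 1.0]
```

Inspecting `bad_rows` in the real data shows exactly which strings broke the conversion.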
|
First you need to define the format of the date column.
```
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d %H:%M:%S')
```
For your case, the format can be set to:
```
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')
```
After that you can set/change the desired output format as follows:
```
df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')
```
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I am first making them to date time and then try to make them as months:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
Your problem here is that `to_datetime` silently failed, so the dtype remained as `str/object`. If you set the parameter `errors='coerce'`, then any string that fails to convert is set to `NaT`.
```
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
```
So you need to find out what is wrong with those specific row values.
See the [docs](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html)
|
When you write
```
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].dt.strftime('%m/%d')
```
it will be fixed: unparseable entries become `NaT`, and the remaining dates are rendered as `%m/%d` strings.
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I am first making them to date time and then try to make them as months:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
First you need to define the format of the date column.
```
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d %H:%M:%S')
```
For your case, the format can be set to:
```
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')
```
After that you can set/change the desired output format as follows:
```
df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')
```
|
Let `read_csv` parse the date column directly:
`train_data = pd.read_csv("train.csv", parse_dates=["date"])`
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I am first making them to date time and then try to make them as months:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
First you need to define the format of the date column.
```
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d %H:%M:%S')
```
For your case, the format can be set to:
```
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')
```
After that you can set/change the desired output format as follows:
```
df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')
```
|
Your problem here is that the dtype of 'Date' remained as str/object. You can use the `parse_dates` parameter when using `read_csv`
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep=',', parse_dates=[col], encoding='utf-8-sig', usecols=['Date', 'ids'])
df['Month'] = df['Date'].dt.month
```
From [the documentation for the `parse_dates` parameter](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)
>
> **parse\_dates** : *bool or list of int or names or list of lists or dict, default False*
>
>
> The behavior is as follows:
>
>
> * boolean. If True -> try parsing the index.
> * list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
> * list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
> * dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’
>
>
> If a column or index cannot be represented as an array of datetimes, say because of an unparseable value or a mixture of timezones, the column or index will be returned unaltered as an object data type. For non-standard datetime parsing, use `pd.to_datetime` after `pd.read_csv`. To parse an index or column with a mixture of timezones, specify `date_parser` to be a partially-applied `pandas.to_datetime()` with `utc=True`. See Parsing a CSV with mixed timezones for more.
>
>
> Note: A fast-path exists for iso8601-formatted dates.
>
>
>
The relevant case for this question is the "list of int or names" one.
`col` is the column index (or name) of 'Date', which is then parsed as a separate date column.
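A self-contained sketch of this approach (inline CSV text stands in for the asker's file):

```python
import io
import pandas as pd

csv_text = "Date,ids\n2014-01-01,1\n2014-01-03,2\n"

# Passing the column name to parse_dates makes read_csv return datetimes directly
df = pd.read_csv(io.StringIO(csv_text), parse_dates=['Date'], usecols=['Date', 'ids'])

print(df['Date'].dtype)              # datetime64[ns]
print(df['Date'].dt.month.tolist())  # [1, 1]
```

Since the column is already datetimelike, the `.dt` accessor works without a separate `to_datetime` call.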
|
61,036,609
|
As illustrated below, I am looking for an easy way to combine two or more heat-maps into one, i.e., a heat-map with multiple colormaps.
The idea is to break each cell into multiple sub-cells. I couldn't find any python library with such a visualization function already implemented. Anybody knows something (at least) close to this?
[](https://i.stack.imgur.com/HmJM5.png)
|
2020/04/05
|
[
"https://Stackoverflow.com/questions/61036609",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3625770/"
] |
The heatmaps can be drawn column by column. White gridlines can mark the cell borders.
```py
import numpy as np
from matplotlib import pyplot as plt
a = np.random.random((5, 6))
b = np.random.random((5, 6))
vmina = a.min()
vminb = b.min()
vmaxa = a.max()
vmaxb = b.max()
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(10,3), gridspec_kw={'width_ratios':[1,1,2]})
ax1.imshow(a, cmap='Reds', interpolation='nearest', origin='lower', vmin=vmina, vmax=vmaxa)
ax1.set_xticks(np.arange(.5, a.shape[1]-1, 1), minor=True)
ax1.set_yticks(np.arange(.5, a.shape[0]-1, 1), minor=True)
ax2.imshow(b, cmap='Blues', interpolation='nearest', origin='lower', vmin=vminb, vmax=vmaxb)
ax2.set_xticks(np.arange(.5, a.shape[1]-1, 1), minor=True)
ax2.set_yticks(np.arange(.5, a.shape[0]-1, 1), minor=True)
for i in range(a.shape[1]):
ax3.imshow(a[:,i:i+1], extent=[2*i-0.5, 2*i+0.5, -0.5, a.shape[0]-0.5 ],
cmap='Reds', interpolation='nearest', origin='lower', vmin=vmina, vmax=vmaxa)
ax3.imshow(b[:,i:i+1], extent=[2*i+0.5, 2*i+1.5, -0.5, a.shape[0]-0.5 ],
cmap='Blues', interpolation='nearest', origin='lower', vmin=vminb, vmax=vmaxb)
ax3.set_xlim(-0.5, 2*a.shape[1] -0.5 )
ax3.set_xticks(np.arange(1.5, 2*a.shape[1]-1, 2), minor=True)
ax3.set_yticks(np.arange(.5, a.shape[0]-1, 1), minor=True)
for ax in (ax1, ax2, ax3):
ax.grid(color='white', which='minor', lw=2)
ax.set_xticks([])
ax.set_yticks([])
ax.tick_params(axis='both', which='both', size=0)
plt.show()
```
[](https://i.stack.imgur.com/FvxOx.png)
PS: If brevity were an important factor, all embellishments, details and comparisons could be left out:
```py
# import numpy as np
# from matplotlib import pyplot as plt
a = np.random.random((5, 6))
b = np.random.random((5, 6))
norma = plt.Normalize(vmin=a.min(), vmax=a.max())
normb = plt.Normalize(vmin=b.min(), vmax=b.max())
for i in range(a.shape[1]):
plt.imshow(a[:, i:i + 1], extent=[2*i-0.5, 2*i+0.5, -0.5, a.shape[0]-0.5], cmap='Reds', norm=norma)
plt.imshow(b[:, i:i + 1], extent=[2*i+0.5, 2*i+1.5, -0.5, a.shape[0]-0.5], cmap='Blues', norm=normb)
plt.xlim(-0.5, 2*a.shape[1]-0.5)
# plt.show()
```
|
You can restructure your arrays to have empty columns between you actual data then create a masked array to plot heatmaps with transparency. Here's one method (maybe not the best) to add empty columns:
```
arr1 = np.arange(20).reshape(4, 5)
arr2 = np.arange(20, 0, -1).reshape(4, 5)
filler = np.nan * np.zeros((4, 5))
c1 = np.vstack([arr1, filler]).T.reshape(10, 4).T
c2 = np.vstack([filler, arr2]).T.reshape(10, 4).T
c1 = np.ma.masked_array(c1, np.isnan(c1))
c2 = np.ma.masked_array(c2, np.isnan(c2))
plt.pcolormesh(c1, cmap='bone')
plt.pcolormesh(c2, cmap='jet')
```
You can also use `np.repeat` and mask every other column as @JohanC notes
```
c1 = np.ma.masked_array(np.repeat(arr1, 2, axis=1), np.tile([True, False], arr1.size))
c2 = np.ma.masked_array(np.repeat(arr2, 2, axis=1), np.tile([False, True], arr2.size))
```
[](https://i.stack.imgur.com/jTJXn.png)
|
14,459,258
|
Games from Valve use following [data format](http://media.steampowered.com/apps/440/scripts/items/items_game.9aee6b38c52d8814124b8fbfc8d13e7b1faa944f.txt)
```
"name1"
{
"name2" "value2"
"name3"
{
"name4" "value4"
}
}
```
Does this format have a name or is it just self made?
Can I parse it in python?
|
2013/01/22
|
[
"https://Stackoverflow.com/questions/14459258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1670759/"
] |
I'm not sure that it has a name, but it seems very straightforward: a node consists of a key and either a value or a set of values that are themselves either plain strings or sets of key-value pairs. It would be trivial to parse recursively, and maps cleanly to a structure of nested python dictionaries.
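To illustrate how trivial the recursive parse is, here is a minimal sketch (the function name `parse_vdf` and the tokenizer are mine; it ignores escape handling, comments, and `#include`/`#base` directives that real Valve files may contain):

```python
import re

def parse_vdf(text):
    """Parse Valve KeyValues text into nested Python dicts (minimal sketch)."""
    # Tokens are either quoted strings or a lone brace
    matches = re.finditer(r'"((?:\\.|[^"\\])*)"|([{}])', text)
    tokens = [m.group(1) if m.group(1) is not None else m.group(2) for m in matches]
    pos = 0

    def parse_block():
        nonlocal pos
        result = {}
        while pos < len(tokens) and tokens[pos] != '}':
            key = tokens[pos]
            pos += 1
            if tokens[pos] == '{':
                pos += 1                   # consume '{'
                result[key] = parse_block()
                pos += 1                   # consume '}'
            else:
                result[key] = tokens[pos]  # plain string value
                pos += 1
        return result

    return parse_block()

sample = '''
"name1"
{
    "name2" "value2"
    "name3"
    {
        "name4" "value4"
    }
}
'''
print(parse_vdf(sample))
# {'name1': {'name2': 'value2', 'name3': {'name4': 'value4'}}}
```

The recursion mirrors the grammar exactly: a block is a sequence of keys, each followed by either a quoted value or a nested block.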
|
Looks like their own format, called the Valve Data Format (KeyValues). Documentation is [here](https://developer.valvesoftware.com/wiki/KeyValues). I don't know whether a parser is available in Python, but here is a question about [parsing it in PHP](https://stackoverflow.com/questions/9301511/parsing-valve-data-format-files-in-php).
|
14,459,258
|
Games from Valve use following [data format](http://media.steampowered.com/apps/440/scripts/items/items_game.9aee6b38c52d8814124b8fbfc8d13e7b1faa944f.txt)
```
"name1"
{
"name2" "value2"
"name3"
{
"name4" "value4"
}
}
```
Does this format have a name or is it just self made?
Can I parse it in python?
|
2013/01/22
|
[
"https://Stackoverflow.com/questions/14459258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1670759/"
] |
I'm not sure that it has a name, but it seems very straightforward: a node consists of a key and either a value or a set of values that are themselves either plain strings or sets of key-value pairs. It would be trivial to parse recursively, and maps cleanly to a structure of nested python dictionaries.
|
It looks a lot like JSON without comma and colon separators. You could parse it manually, since it follows the same logic.
It consists of name-value pairs: after a name, either a '{' (opening a nested block) or another quoted string (a plain value) follows.
A composite structure of custom classes, or simply nested dictionaries, would make it easy to handle. As Matti John linked, there is documentation.
|
50,917,003
|
I'm trying to create a simple program to convert a binary number, for example `111100010` to decimal `482`. I've done the same in Python, and it works, but I can't find what I'm doing wrong in C++.
When I execute the C++ program, I get `-320505788`. What have I done wrong?
This is the Python code:
```python
def digit_count(bit_number):
found = False
count = 0
while not found:
division = bit_number / (10 ** count)
if division < 1:
found = True
else:
count += 1
return count
def bin_to_number(bit_number):
digits = digit_count(bit_number)
number = 0
for i in range(digits):
exp = 10 ** i
if exp < 10:
digit = int(bit_number % 10)
digit = digit * (2 ** i)
number += digit
else:
digit = int(bit_number / exp % 10)
digit = digit * (2 ** i)
number += digit
print(number)
return number
bin_to_convert = 111100010
bin_to_number(bin_to_convert)
# returns 482
```
This is the C++ code:
```cpp
#include <iostream>
#include <cmath>
using namespace std;
int int_length(int bin_number);
int bin_to_int(int bin_number);
int main()
{
cout << bin_to_int(111100010) << endl;
return 0;
}
int int_length(int bin_number){
bool found = false;
int digit_count = 0;
while(!found){
int division = bin_number / pow(10, digit_count);
if(division < 1){
found = true;
}
else{
digit_count++;
}
}
return digit_count;
}
int bin_to_int(int bin_number){
int number_length = int_length(bin_number);
int number = 0;
for(int i = 0; i < number_length; i++){
int e = pow(10, i);
int digit;
if(e < 10){
digit = bin_number % 10;
digit = digit * pow(2, i);
number = number + digit;
}
else{
if((e % 10) == 0){
digit = 0;
}
else{
digit = bin_number / (e % 10);
}
digit = digit * pow(2, i);
number = number + digit;
}
}
return number;
}
```
|
2018/06/18
|
[
"https://Stackoverflow.com/questions/50917003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5533085/"
] |
The problem is that you converted this fragment of Python code
```
else:
digit = int(bit_number / exp % 10)
digit = digit * (2 ** i)
number += digit
```
into this:
```
else{
if((e % 10) == 0){
digit = 0;
}
else{
digit = bin_number / (e % 10);
}
digit = digit * pow(2, i);
number = number + digit;
}
```
In other words, you are trying to apply `/` *after* applying `%`, and protect from division by zero in the process.
This is incorrect: you should apply them the other way around, like this:
```
else{
digit = (bit_number / e) % 10;
digit = digit * pow(2, i);
number = number + digit;
}
```
[Demo 1](https://ideone.com/Oq6ZVk)
Note that the entire conditional is redundant - you can remove it from your `for` loop:
```
for(int i = 0; i < number_length; i++){
int e = pow(10, i);
int digit = (bit_number / e) % 10;
digit = digit * pow(2, i);
number = number + digit;
}
```
[Demo 2](https://ideone.com/7f4wsC)
|
One problem is that `111100010` in `main` is not a [binary literal](https://en.cppreference.com/w/cpp/language/integer_literal) for 482, but the decimal value 111100010. If you are going to use a binary literal (`0b111100010`), there is no need for any of your code: just write it out, since an integer is an integer regardless of the representation.
If you are trying to process a binary string, you could do something like this instead
```
#include <iostream>
#include <algorithm>
using namespace std;
int bin_to_int(const std::string& binary_string);
int main()
{
cout << bin_to_int("111100010") << endl;
cout << 0b111100010 << endl;
return 0;
}
int bin_to_int(const std::string& bin_string){
//Strings index from the left but bits start from the right so reverse it
std::string binary = bin_string;
std::reverse(binary.begin(), binary.end());
int number_length = bin_string.size();
//cout << "bits " << number_length << "\n";
int number = 0;
for(int i = 0; i < number_length; i++){
int bit_value = 1 << i;
if(binary[i] == '1')
{
//cout << "Adding " << bit_value << "\n";
number += bit_value;
}
}
return number;
}
```
Note that to use the binary literal (`0b...`) you will need to compile with C++14 or later.
|
49,677,110
|
I am trying to decorate a function which is already decorated by `@click` and called from the command line.
Normal decoration to capitalise the input could look like this:
**standard\_decoration.py**
```
def capitalise_input(f):
def wrapper(*args):
args = (args[0].upper(),)
f(*args)
return wrapper
@capitalise_input
def print_something(name):
print(name)
if __name__ == '__main__':
print_something("Hello")
```
Then from the command line:
```
$ python standard_decoration.py
HELLO
```
The first example from the [click documentation](http://click.pocoo.org/5/) looks like this:
**hello.py**
```
import click
@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.option('--name', prompt='Your name',
help='The person to greet.')
def hello(count, name):
"""Simple program that greets NAME for a total of COUNT times."""
for x in range(count):
click.echo('Hello %s!' % name)
if __name__ == '__main__':
hello()
```
When run from the command line:
```
$ python hello.py --count=3
Your name: John
Hello John!
Hello John!
Hello John!
```
1. What is the correct way to apply a decorator which modifies the inputs to this click decorated function, eg make it upper-case just like the one above?
2. Once a function is decorated by click, would it be true to say that any positional arguments it has are transformed to keyword arguments? It seems that it matches things like `'--count'` with strings in the argument function and then the order in the decorated function no longer seems to matter.
|
2018/04/05
|
[
"https://Stackoverflow.com/questions/49677110",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4288043/"
] |
It appears that click passes keyword arguments, so this should work. Note that it needs to be the innermost decorator (listed last), so that it is called after all of the click processing is done.
```
def capitalise_input(f):
def wrapper(**kwargs):
kwargs['name'] = kwargs['name'].upper()
f(**kwargs)
return wrapper
@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.option('--name', prompt='Your name',
help='The person to greet.')
@capitalise_input
def hello(count, name):
....
```
You could also try something like this to be specific about which parameter to capitalize:
```
def capitalise_input(key):
def decorator(f):
def wrapper(**kwargs):
kwargs[key] = kwargs[key].upper()
f(**kwargs)
return wrapper
return decorator
@capitalise_input('name')
def hello(count, name):
```
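The parameterised form above can be exercised without click at all; the decorator factory closes over the key to upper-case, and the `hello` body here is just a plain-Python stand-in called with keyword arguments, as click would do:

```python
def capitalise_input(key):
    def decorator(f):
        def wrapper(**kwargs):
            kwargs[key] = kwargs[key].upper()  # upper-case only the chosen key
            return f(**kwargs)
        return wrapper
    return decorator

# Plain stand-in for the click command
@capitalise_input('name')
def hello(count, name):
    return ' '.join('Hello %s!' % name for _ in range(count))

print(hello(count=2, name='john'))  # Hello JOHN! Hello JOHN!
```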
|
Regarding click command groups, we need to take into account what the documentation says: <https://click.palletsprojects.com/en/7.x/commands/#decorating-commands>
So in the end, a simple decorator like this:
```
def sample_decorator(f):
def run(*args, **kwargs):
return f(*args, param="yea", **kwargs)
return run
```
needs to be converted to work with click:
```
from functools import update_wrapper
def sample_decorator(f):
@click.pass_context
def run(ctx, *args, **kwargs):
return ctx.invoke(f, *args, param="yea", **kwargs)
return update_wrapper(run, f)
```
(The documentation suggests using `ctx.invoke(f, ctx.obj,` but that led to a 'duplicate arguments' error.)
|
49,677,110
|
I am trying to decorate a function which is already decorated by `@click` and called from the command line.
Normal decoration to capitalise the input could look like this:
**standard\_decoration.py**
```
def capitalise_input(f):
def wrapper(*args):
args = (args[0].upper(),)
f(*args)
return wrapper
@capitalise_input
def print_something(name):
print(name)
if __name__ == '__main__':
print_something("Hello")
```
Then from the command line:
```
$ python standard_decoration.py
HELLO
```
The first example from the [click documentation](http://click.pocoo.org/5/) looks like this:
**hello.py**
```
import click
@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.option('--name', prompt='Your name',
help='The person to greet.')
def hello(count, name):
"""Simple program that greets NAME for a total of COUNT times."""
for x in range(count):
click.echo('Hello %s!' % name)
if __name__ == '__main__':
hello()
```
When run from the command line:
```
$ python hello.py --count=3
Your name: John
Hello John!
Hello John!
Hello John!
```
1. What is the correct way to apply a decorator which modifies the inputs to this click decorated function, eg make it upper-case just like the one above?
2. Once a function is decorated by click, would it be true to say that any positional arguments it has are transformed to keyword arguments? It seems that it matches things like `'--count'` with strings in the argument function and then the order in the decorated function no longer seems to matter.
|
2018/04/05
|
[
"https://Stackoverflow.com/questions/49677110",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4288043/"
] |
Harvey's answer won't work with command groups: effectively it would replace the 'hello' command with 'wrapper', which is not what we want. Instead, try something like:
```
from functools import wraps
def test_decorator(f):
@wraps(f)
def wrapper(*args, **kwargs):
kwargs['name'] = kwargs['name'].upper()
return f(*args, **kwargs)
return wrapper
```
|
Regarding click command groups, we need to take into account what the documentation says: <https://click.palletsprojects.com/en/7.x/commands/#decorating-commands>
So in the end, a simple decorator like this:
```
def sample_decorator(f):
def run(*args, **kwargs):
return f(*args, param="yea", **kwargs)
return run
```
needs to be converted to work with click:
```
from functools import update_wrapper
def sample_decorator(f):
@click.pass_context
def run(ctx, *args, **kwargs):
return ctx.invoke(f, *args, param="yea", **kwargs)
return update_wrapper(run, f)
```
(The documentation suggests using `ctx.invoke(f, ctx.obj,` but that led to a 'duplicate arguments' error.)
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output:
```
ERROR: Command errored out with exit status 1:
command: /opt/homebrew/opt/python@3.9/bin/python3.9 /opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/tmpl4sga84k
cwd: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-install-jko4b562/cryptography_7b1bbc9ece2f481a8e8e9ea03b1a0030
Complete output (55 lines):
=============================DEBUG ASSISTANCE=============================
If you are seeing a compilation error please try the following steps to
successfully install cryptography:
1) Upgrade to the latest pip and try again. This will fix errors for most
users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
2) Read https://cryptography.io/en/latest/installation.html for specific
instructions for your platform.
3) Check our frequently asked questions for more information:
https://cryptography.io/en/latest/faq.html
=============================DEBUG ASSISTANCE=============================
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 161, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 44, in <module>
setup(
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 432, in __init__
_Distribution.__init__(self, {
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 292, in __init__
self.finalize_options()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 708, in finalize_options
ep(self)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 715, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 219, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 49, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 25, in execfile
exec(code, glob, glob)
File "src/_cffi_src/build_openssl.py", line 77, in <module>
ffi = build_ffi_for_binding(
File "src/_cffi_src/utils.py", line 54, in build_ffi_for_binding
ffi = build_ffi(
File "src/_cffi_src/utils.py", line 74, in build_ffi
ffi = FFI()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/api.py", line 48, in __init__
import _cffi_backend as backend
ImportError: dlopen(/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure
Referenced from: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
Expected in: flat namespace
in /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
```
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
I'm using a MacBook Pro M1 (2020 model) and faced the same issue. The problem was likely just my cffi and pip versions, because these 4 steps fixed it for me:
1. Uninstall the old cffi: `pip uninstall cffi`
2. Upgrade pip: `python -m pip install --upgrade pip`
3. Reinstall cffi: `pip install cffi`
4. Install cryptography: `pip install cryptography`
|
A little late to the party, but the solutions above didn't work for me. Paul got me on the right track: my problem was that pyenv used the macOS libffi for its build, while cffi used the Homebrew version. (I read this somewhere, so I can't claim the insight as my own.)
My solution was to make sure my Python (3.8.13) was built by pyenv against the Homebrew libffi, by pointing the build at the correct headers, libraries, and package config:
```
export LDFLAGS="-L$(brew --prefix zlib)/lib -L$(brew --prefix bzip2)/lib -L$(brew --prefix openssl@1.1)/lib -L$(brew --prefix libffi)/lib"
export CPPFLAGS="-I$(brew --prefix zlib)/include -I$(brew --prefix bzip2)/include -I$(brew --prefix openssl@1.1)/include -I$(brew --prefix libffi)/include"
export PKG_CONFIG_PATH="$(brew --prefix openssl@1.1)/lib/pkgconfig:$(brew --prefix libffi)/lib/pkgconfig"
```
rebuilding python...
```
pyenv uninstall 3.8.13
pyenv install 3.8.13
```
killing the pip cache
```
pip cache purge
```
and, finally, reinstalling my dependencies using pipenv
```
pipenv --rm
pipenv sync --dev
```
After these steps, I was free from the dreaded
```
ImportError: dlopen(/private/var/folders/k7/z3mq67_532bdr_rcm2grml240000gn/T/pip-build-env-apk5b25z/overlay/lib/python3.8/site-packages/_cffi_backend.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '_ffi_prep_closure'
```
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output:
```
ERROR: Command errored out with exit status 1:
command: /opt/homebrew/opt/python@3.9/bin/python3.9 /opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/tmpl4sga84k
cwd: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-install-jko4b562/cryptography_7b1bbc9ece2f481a8e8e9ea03b1a0030
Complete output (55 lines):
=============================DEBUG ASSISTANCE=============================
If you are seeing a compilation error please try the following steps to
successfully install cryptography:
1) Upgrade to the latest pip and try again. This will fix errors for most
users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
2) Read https://cryptography.io/en/latest/installation.html for specific
instructions for your platform.
3) Check our frequently asked questions for more information:
https://cryptography.io/en/latest/faq.html
=============================DEBUG ASSISTANCE=============================
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 161, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 44, in <module>
setup(
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 432, in __init__
_Distribution.__init__(self, {
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 292, in __init__
self.finalize_options()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 708, in finalize_options
ep(self)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 715, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 219, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 49, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 25, in execfile
exec(code, glob, glob)
File "src/_cffi_src/build_openssl.py", line 77, in <module>
ffi = build_ffi_for_binding(
File "src/_cffi_src/utils.py", line 54, in build_ffi_for_binding
ffi = build_ffi(
File "src/_cffi_src/utils.py", line 74, in build_ffi
ffi = FFI()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/api.py", line 48, in __init__
import _cffi_backend as backend
ImportError: dlopen(/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure
Referenced from: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
Expected in: flat namespace
in /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
```
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
[This answer here worked like a charm! @paveldroo](https://stackoverflow.com/a/66422219/8524011)
As an extension to the answer above, I went ahead and saved the alias in step 3 as `alias ibrew='arch -x86_64 /usr/local/bin/brew'` at `~/.zshrc`
This means that anything I install with the `brew` command is built for the M1 (arm64) architecture, while anything I install with the `ibrew` command is built for the x86\_64 architecture.
As a consequence, I have two instances of python3 on my system: one at `/opt/homebrew/bin/python3` installed with `brew`, and the other at `/usr/local/bin/python3` installed with `ibrew`.
The two versions add some flexibility for creating project virtual environments as needed. For example, you could create a virtual environment using:
1. `/usr/local/bin/python3 -m venv venv` for the x86\_64 architecture
2. `/opt/homebrew/bin/python3 -m venv venv` for the M1 (arm64) architecture
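To verify which of the two installs a given interpreter or virtual environment belongs to, a quick check can help. This is a sketch, not part of the original answer:

```python
# Print which binary is running and what architecture it was built for;
# useful when juggling the arm64 (/opt/homebrew) and x86_64 (/usr/local)
# Homebrew Pythons.
import platform
import sys

print(sys.executable)
print(platform.machine())  # "arm64" or "x86_64"
```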
|
I uninstalled the older versions of `cffi` and `cryptography`,
```
pip uninstall cffi
pip uninstall cryptography
```
and relaxed the exact version pins in the `requirements.txt` file to minimum versions:
```
# requirements.txt
cffi>=1.15.1
cryptography>=38.0.1
```
(Your version numbers may differ.)
This resolved my issue.
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output: the same `ImportError: ... Symbol not found: _ffi_prep_closure` traceback as in the first copy of this question above.
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
I'm using a MacBook Pro M1 (2020) and faced the same issue. In my case the problem seems to have been outdated cffi and pip versions, because these four steps fixed it -
1. Uninstall the old cffi: `pip uninstall cffi`
2. Upgrade pip: `python -m pip install --upgrade pip`
3. Reinstall cffi: `pip install cffi`
4. Install cryptography: `pip install cryptography`
|
I uninstalled the older versions of `cffi` and `cryptography`,
```
pip uninstall cffi
pip uninstall cryptography
```
and relaxed the exact version pins in the `requirements.txt` file to minimum versions:
```
# requirements.txt
cffi>=1.15.1
cryptography>=38.0.1
```
(Your version numbers may differ.)
This resolved my issue.
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output: the same `ImportError: ... Symbol not found: _ffi_prep_closure` traceback as in the first copy of this question above.
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
This issue is due to a mismatch between the libffi header version and the version of libffi the dynamic linker finds. In general it appears users encountering this problem have homebrew libffi installed and have a Python built against that in some fashion.
When this happens `cffi` (a `cryptography` dependency) compiles, but fails at runtime raising this error. This should be fixable by passing the right path as a linker argument. To reinstall `cffi` you should `pip uninstall cffi` followed by
`LDFLAGS=-L$(brew --prefix libffi)/lib CFLAGS=-I$(brew --prefix libffi)/include pip install cffi --no-binary :all:`
This is an ugly workaround, but will get you past this hurdle for now.
**Update**: I've uploaded arm64 wheels for macOS so the below compilation is no longer required if your `pip` is up-to-date. However, if, for some reason you wish to compile it yourself:
`LDFLAGS="-L$(brew --prefix openssl@1.1)/lib" CFLAGS="-I$(brew --prefix openssl@1.1)/include" pip install cryptography`
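Since the prebuilt arm64 wheels are only selected by a reasonably recent pip, it can be worth confirming the installed pip version before falling back to compilation. A sketch (the 20.3 threshold reflects when pip gained the newer macOS wheel-tag support; treat it as an approximation):

```python
# Check that pip is new enough to select macOS arm64/universal2 wheels
# (support landed around pip 20.3).
from importlib.metadata import PackageNotFoundError, version

try:
    pip_version = version("pip")
    major, minor = (int(x) for x in pip_version.split(".")[:2])
    print(pip_version, "OK" if (major, minor) >= (20, 3) else "-> upgrade pip first")
except PackageNotFoundError:
    print("pip is not installed in this environment")
```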
|
I uninstalled the older versions of `cffi` and `cryptography`,
```
pip uninstall cffi
pip uninstall cryptography
```
and relaxed the exact version pins in the `requirements.txt` file to minimum versions:
```
# requirements.txt
cffi>=1.15.1
cryptography>=38.0.1
```
(Your version numbers may differ.)
This resolved my issue.
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output: the same `ImportError: ... Symbol not found: _ffi_prep_closure` traceback as in the first copy of this question above.
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
I'm using a MacBook Pro M1 (2020) and faced the same issue. In my case the problem seems to have been outdated cffi and pip versions, because these four steps fixed it -
1. Uninstall the old cffi: `pip uninstall cffi`
2. Upgrade pip: `python -m pip install --upgrade pip`
3. Reinstall cffi: `pip install cffi`
4. Install cryptography: `pip install cryptography`
|
I wasn't previously able to install cffi, until I discovered an unrelated issue. I was at this for about two days, until I found this command:
```sh
python3 -m ensurepip --upgrade
```
Magically, everything started working for me. The root cause was an issue between Python and pip coming from different sources.
Answer stolen from this question: [using pip3: module "importlib.\_bootstrap" has no attribute "SourceFileLoader"](https://stackoverflow.com/questions/44761958/using-pip3-module-importlib-bootstrap-has-no-attribute-sourcefileloader)
Edit: This may be a courtesy of the above poster, so could be unrelated. If so, thank you anonymous human!
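One way to spot the Python/pip mismatch that `ensurepip` fixes is to compare where the interpreter and its script directory live. A minimal diagnostic sketch, not from the original answer:

```python
# If pip was installed for a different Python, these paths typically point
# at different prefixes; after `python3 -m ensurepip --upgrade` they should
# agree.
import sys
import sysconfig

print("interpreter:", sys.executable)
print("prefix:     ", sys.prefix)
print("scripts dir:", sysconfig.get_path("scripts"))
```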
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output: the same `ImportError: ... Symbol not found: _ffi_prep_closure` traceback as in the first copy of this question above.
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
I'm using a MacBook Pro M1 (2020) and faced the same issue. In my case the problem seems to have been outdated cffi and pip versions, because these four steps fixed it -
1. Uninstall the old cffi: `pip uninstall cffi`
2. Upgrade pip: `python -m pip install --upgrade pip`
3. Reinstall cffi: `pip install cffi`
4. Install cryptography: `pip install cryptography`
|
[This answer here worked like a charm! @paveldroo](https://stackoverflow.com/a/66422219/8524011)
As an extension to the answer above, I went ahead and saved the alias in step 3 as `alias ibrew='arch -x86_64 /usr/local/bin/brew'` at `~/.zshrc`
This means that anything I install with the `brew` command is built for the M1 (arm64) architecture, while anything I install with the `ibrew` command is built for the x86\_64 architecture.
As a consequence, I have two instances of python3 on my system: one at `/opt/homebrew/bin/python3` installed with `brew`, and the other at `/usr/local/bin/python3` installed with `ibrew`.
The two versions add some flexibility for creating project virtual environments as needed. For example, you could create a virtual environment using:
1. `/usr/local/bin/python3 -m venv venv` for the x86\_64 architecture
2. `/opt/homebrew/bin/python3 -m venv venv` for the M1 (arm64) architecture
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output:
```
ERROR: Command errored out with exit status 1:
command: /opt/homebrew/opt/python@3.9/bin/python3.9 /opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/tmpl4sga84k
cwd: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-install-jko4b562/cryptography_7b1bbc9ece2f481a8e8e9ea03b1a0030
Complete output (55 lines):
=============================DEBUG ASSISTANCE=============================
If you are seeing a compilation error please try the following steps to
successfully install cryptography:
1) Upgrade to the latest pip and try again. This will fix errors for most
users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
2) Read https://cryptography.io/en/latest/installation.html for specific
instructions for your platform.
3) Check our frequently asked questions for more information:
https://cryptography.io/en/latest/faq.html
=============================DEBUG ASSISTANCE=============================
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 161, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 44, in <module>
setup(
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 432, in __init__
_Distribution.__init__(self, {
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 292, in __init__
self.finalize_options()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 708, in finalize_options
ep(self)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 715, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 219, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 49, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 25, in execfile
exec(code, glob, glob)
File "src/_cffi_src/build_openssl.py", line 77, in <module>
ffi = build_ffi_for_binding(
File "src/_cffi_src/utils.py", line 54, in build_ffi_for_binding
ffi = build_ffi(
File "src/_cffi_src/utils.py", line 74, in build_ffi
ffi = FFI()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/api.py", line 48, in __init__
import _cffi_backend as backend
ImportError: dlopen(/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure
Referenced from: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
Expected in: flat namespace
in /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
```
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
This issue is due to a mismatch between the libffi header version and the version of libffi the dynamic linker finds. In general it appears users encountering this problem have homebrew libffi installed and have a Python built against that in some fashion.
When this happens `cffi` (a `cryptography` dependency) compiles, but fails at runtime raising this error. This should be fixable by passing the right path as a linker argument. To reinstall `cffi` you should `pip uninstall cffi` followed by
`LDFLAGS=-L$(brew --prefix libffi)/lib CFLAGS=-I$(brew --prefix libffi)/include pip install cffi --no-binary :all:`
This is an ugly workaround, but will get you past this hurdle for now.
**Update**: I've uploaded arm64 wheels for macOS so the below compilation is no longer required if your `pip` is up-to-date. However, if, for some reason you wish to compile it yourself:
`LDFLAGS="-L$(brew --prefix openssl@1.1)/lib" CFLAGS="-I$(brew --prefix openssl@1.1)/include" pip install cryptography`
|
A little late to the party, but the solutions above didn't work for me. Paul got me on the right track, but my problem was that pyenv used the macOS libffi for its build while cffi used the Homebrew version. I read this somewhere, so I can't claim this as a unique insight.
My solution was to ensure that my Python (3.8.13) was built by pyenv against the Homebrew libffi, by pointing it at the correct headers, libraries, and package config:
```
export LDFLAGS="-L$(brew --prefix zlib)/lib -L$(brew --prefix bzip2)/lib -L$(brew --prefix openssl@1.1)/lib -L$(brew --prefix libffi)/lib"
export CPPFLAGS="-I$(brew --prefix zlib)/include -I$(brew --prefix bzip2)/include -I$(brew --prefix openssl@1.1)/include -I$(brew --prefix libffi)/include"
export PKG_CONFIG_PATH="$(brew --prefix openssl@1.1)/lib/pkgconfig:$(brew --prefix libffi)/lib/pkgconfig"
```
rebuilding python...
```
pyenv uninstall 3.8.13
pyenv install 3.8.13
```
killing the pip cache
```
pip cache purge
```
and, finally, reinstalling my dependencies using pipenv
```
pipenv --rm
pipenv sync --dev
```
After these steps, I was free from the dreaded
```
ImportError: dlopen(/private/var/folders/k7/z3mq67_532bdr_rcm2grml240000gn/T/pip-build-env-apk5b25z/overlay/lib/python3.8/site-packages/_cffi_backend.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '_ffi_prep_closure'
```
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output:
```
ERROR: Command errored out with exit status 1:
command: /opt/homebrew/opt/python@3.9/bin/python3.9 /opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/tmpl4sga84k
cwd: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-install-jko4b562/cryptography_7b1bbc9ece2f481a8e8e9ea03b1a0030
Complete output (55 lines):
=============================DEBUG ASSISTANCE=============================
If you are seeing a compilation error please try the following steps to
successfully install cryptography:
1) Upgrade to the latest pip and try again. This will fix errors for most
users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
2) Read https://cryptography.io/en/latest/installation.html for specific
instructions for your platform.
3) Check our frequently asked questions for more information:
https://cryptography.io/en/latest/faq.html
=============================DEBUG ASSISTANCE=============================
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 161, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 44, in <module>
setup(
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 432, in __init__
_Distribution.__init__(self, {
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 292, in __init__
self.finalize_options()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 708, in finalize_options
ep(self)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 715, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 219, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 49, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 25, in execfile
exec(code, glob, glob)
File "src/_cffi_src/build_openssl.py", line 77, in <module>
ffi = build_ffi_for_binding(
File "src/_cffi_src/utils.py", line 54, in build_ffi_for_binding
ffi = build_ffi(
File "src/_cffi_src/utils.py", line 74, in build_ffi
ffi = FFI()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/api.py", line 48, in __init__
import _cffi_backend as backend
ImportError: dlopen(/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure
Referenced from: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
Expected in: flat namespace
in /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
```
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
A little late to the party, but the solutions above didn't work for me. Paul got me on the right track, but my problem was that pyenv used the macOS libffi for its build while cffi used the Homebrew version. I read this somewhere, so I can't claim this as a unique insight.
My solution was to ensure that my Python (3.8.13) was built by pyenv against the Homebrew libffi, by pointing it at the correct headers, libraries, and package config:
```
export LDFLAGS="-L$(brew --prefix zlib)/lib -L$(brew --prefix bzip2)/lib -L$(brew --prefix openssl@1.1)/lib -L$(brew --prefix libffi)/lib"
export CPPFLAGS="-I$(brew --prefix zlib)/include -I$(brew --prefix bzip2)/include -I$(brew --prefix openssl@1.1)/include -I$(brew --prefix libffi)/include"
export PKG_CONFIG_PATH="$(brew --prefix openssl@1.1)/lib/pkgconfig:$(brew --prefix libffi)/lib/pkgconfig"
```
rebuilding python...
```
pyenv uninstall 3.8.13
pyenv install 3.8.13
```
killing the pip cache
```
pip cache purge
```
and, finally, reinstalling my dependencies using pipenv
```
pipenv --rm
pipenv sync --dev
```
After these steps, I was free from the dreaded
```
ImportError: dlopen(/private/var/folders/k7/z3mq67_532bdr_rcm2grml240000gn/T/pip-build-env-apk5b25z/overlay/lib/python3.8/site-packages/_cffi_backend.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '_ffi_prep_closure'
```
|
I uninstalled the older versions of `cffi` and `cryptography`,
```
pip uninstall cffi
pip uninstall cryptography
```
and changed the exact pinned versions in the `requirements.txt` file to minimum-version specifiers
```
# requirements.txt
cffi>=1.15.1
cryptography>=38.0.1
```
(the version numbers can be different).
This resolved my issue.
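To sanity-check that the installed versions satisfy specifiers like `cffi>=1.15.1`, a minimal numeric comparison can be sketched; it ignores pre-release tags and other PEP 440 subtleties, so treat it as an illustration only:

```python
def version_tuple(v):
    """Split a dotted version string into ints, e.g. '1.15.1' -> (1, 15, 1)."""
    return tuple(int(part) for part in v.split("."))

def satisfies_min(installed, minimum):
    """True if installed >= minimum under plain numeric tuple comparison."""
    return version_tuple(installed) >= version_tuple(minimum)

print(satisfies_min("38.0.1", "38.0.1"))  # True
print(satisfies_min("1.14.0", "1.15.1"))  # False
```

For real dependency checks, pip itself applies the full PEP 440 rules when resolving `>=` specifiers from `requirements.txt`.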
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output:
```
ERROR: Command errored out with exit status 1:
command: /opt/homebrew/opt/python@3.9/bin/python3.9 /opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/tmpl4sga84k
cwd: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-install-jko4b562/cryptography_7b1bbc9ece2f481a8e8e9ea03b1a0030
Complete output (55 lines):
=============================DEBUG ASSISTANCE=============================
If you are seeing a compilation error please try the following steps to
successfully install cryptography:
1) Upgrade to the latest pip and try again. This will fix errors for most
users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
2) Read https://cryptography.io/en/latest/installation.html for specific
instructions for your platform.
3) Check our frequently asked questions for more information:
https://cryptography.io/en/latest/faq.html
=============================DEBUG ASSISTANCE=============================
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 161, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 44, in <module>
setup(
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 432, in __init__
_Distribution.__init__(self, {
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 292, in __init__
self.finalize_options()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 708, in finalize_options
ep(self)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 715, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 219, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 49, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 25, in execfile
exec(code, glob, glob)
File "src/_cffi_src/build_openssl.py", line 77, in <module>
ffi = build_ffi_for_binding(
File "src/_cffi_src/utils.py", line 54, in build_ffi_for_binding
ffi = build_ffi(
File "src/_cffi_src/utils.py", line 74, in build_ffi
ffi = FFI()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/api.py", line 48, in __init__
import _cffi_backend as backend
ImportError: dlopen(/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure
Referenced from: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
Expected in: flat namespace
in /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
```
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
This issue is due to a mismatch between the libffi header version and the version of libffi the dynamic linker finds. In general it appears users encountering this problem have homebrew libffi installed and have a Python built against that in some fashion.
When this happens `cffi` (a `cryptography` dependency) compiles, but fails at runtime raising this error. This should be fixable by passing the right path as a linker argument. To reinstall `cffi` you should `pip uninstall cffi` followed by
`LDFLAGS=-L$(brew --prefix libffi)/lib CFLAGS=-I$(brew --prefix libffi)/include pip install cffi --no-binary :all:`
This is an ugly workaround, but will get you past this hurdle for now.
**Update**: I've uploaded arm64 wheels for macOS so the below compilation is no longer required if your `pip` is up-to-date. However, if, for some reason you wish to compile it yourself:
`LDFLAGS="-L$(brew --prefix openssl@1.1)/lib" CFLAGS="-I$(brew --prefix openssl@1.1)/include" pip install cryptography`
|
I wasn't able to install cffi until I discovered a seemingly unrelated issue. I was at this for about two days before I found this command:
```sh
python3 -m ensurepip --upgrade
```
Magically, everything started working for me. The root cause was Python and pip coming from different sources.
Answer stolen from this question: [using pip3: module "importlib.\_bootstrap" has no attribute "SourceFileLoader"](https://stackoverflow.com/questions/44761958/using-pip3-module-importlib-bootstrap-has-no-attribute-sourcefileloader)
Edit: This may be a courtesy of the above poster, so could be unrelated. If so, thank you anonymous human!
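One way to see the mismatch that `ensurepip` repairs is to compare where the running interpreter lives with where it expects packages to go; if `python3 -m pip --version` reports a different site-packages tree than the one printed below, Python and pip come from different installations:

```python
import sys
import sysconfig

# Interpreter actually running this script.
print(sys.executable)

# Where "this" Python expects pure-Python packages to be installed;
# `python3 -m pip --version` should report the same tree.
print(sysconfig.get_paths()["purelib"])
```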
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output:
```
ERROR: Command errored out with exit status 1:
command: /opt/homebrew/opt/python@3.9/bin/python3.9 /opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/tmpl4sga84k
cwd: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-install-jko4b562/cryptography_7b1bbc9ece2f481a8e8e9ea03b1a0030
Complete output (55 lines):
=============================DEBUG ASSISTANCE=============================
If you are seeing a compilation error please try the following steps to
successfully install cryptography:
1) Upgrade to the latest pip and try again. This will fix errors for most
users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
2) Read https://cryptography.io/en/latest/installation.html for specific
instructions for your platform.
3) Check our frequently asked questions for more information:
https://cryptography.io/en/latest/faq.html
=============================DEBUG ASSISTANCE=============================
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 161, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 44, in <module>
setup(
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 432, in __init__
_Distribution.__init__(self, {
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 292, in __init__
self.finalize_options()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 708, in finalize_options
ep(self)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 715, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 219, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 49, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 25, in execfile
exec(code, glob, glob)
File "src/_cffi_src/build_openssl.py", line 77, in <module>
ffi = build_ffi_for_binding(
File "src/_cffi_src/utils.py", line 54, in build_ffi_for_binding
ffi = build_ffi(
File "src/_cffi_src/utils.py", line 74, in build_ffi
ffi = FFI()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/api.py", line 48, in __init__
import _cffi_backend as backend
ImportError: dlopen(/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure
Referenced from: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
Expected in: flat namespace
in /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
```
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
This issue is due to a mismatch between the libffi header version and the version of libffi the dynamic linker finds. In general it appears users encountering this problem have homebrew libffi installed and have a Python built against that in some fashion.
When this happens `cffi` (a `cryptography` dependency) compiles, but fails at runtime raising this error. This should be fixable by passing the right path as a linker argument. To reinstall `cffi` you should `pip uninstall cffi` followed by
`LDFLAGS=-L$(brew --prefix libffi)/lib CFLAGS=-I$(brew --prefix libffi)/include pip install cffi --no-binary :all:`
This is an ugly workaround, but will get you past this hurdle for now.
**Update**: I've uploaded arm64 wheels for macOS so the below compilation is no longer required if your `pip` is up-to-date. However, if, for some reason you wish to compile it yourself:
`LDFLAGS="-L$(brew --prefix openssl@1.1)/lib" CFLAGS="-I$(brew --prefix openssl@1.1)/include" pip install cryptography`
|
Probably you'll have a problem with more packages, and each has its own solution for Apple Silicon; it's exhausting.
I came to a final solution: using x86\_64 Homebrew, which installs x86\_64 packages, including Python. Thus all your requirements install just as on x86\_64 Macs, and there are no more problems with compilation errors and so on.
Instructions:
1. Run iTerm2 (or default Terminal app) under Rosetta 2 (right click on the app icon -> `Get info` -> `Open using rosetta`).
2. Install homebrew as usual `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"` or you can get this link from <https://brew.sh/> for security reasons (never copy curl commands from stackoverflow without double-checking).
3. Add an alias in your `~/.zshrc` (if you're using ZSH) or `~/.bash_profile` (if you're bash user): `alias brew='arch -x86_64 /usr/local/bin/brew'`.
4. Turn off `Open using rosetta` in iTerm2 `Get info`.
Now, every time you type `brew` in a terminal app you'll run the x86\_64 Homebrew. And when you install any package from `brew`, it'll work under Rosetta 2 automatically.
|
10,442,913
|
I am working on HTML tables using python.
I want to know that how can i fetch different column values using lxml?
HTML table :
```
<table border="1">
<tr>
<td>Header_1</td>
<td>Header_2</td>
<td>Header_3</td>
<td>Header_4</td>
</tr>
<tr>
<td>row 1_cell 1</td>
<td>row 1_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 2_cell 1</td>
<td>row 2_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 3_cell 1</td>
<td>row 3_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 4_cell 1</td>
<td>row 4_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
</table>
```
and I am looking to get output as :
```
[
[
('Header_1', 'Header_2'),
('row 1_cell 1', 'row 1_cell 2'),
('row 2_cell 1', 'row 2_cell 2'),
('row 3_cell 1', 'row 3_cell 2'),
('row 4_cell 1', 'row 4_cell 2')
],
[
('Header_1', 'Header_3'),
('row 1_cell 1', 'row 1_cell 3'),
('row 2_cell 1', 'row 2_cell 3'),
('row 3_cell 1', 'row 3_cell 2'),
('row 4_cell 1', 'row 4_cell 3')
]
]
```
how can i fetch such different column and their values?
|
2012/05/04
|
[
"https://Stackoverflow.com/questions/10442913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/778942/"
] |
I do not know how you make the choice of Header1+Header2, or Header1+Header3, ... As the tables should be reasonably small, I suggest collecting all the data first, and only then extracting the wanted subsets of the table. The following code shows a possible solution:
```
import lxml.etree as ET
def parseTable(table_fragment):
header = None # init - only to create the variable (name)
rows = [] # init
# Parse the table with lxml (the standard xml.etree.ElementTree would be also fine).
tab = ET.fromstring(table_fragment)
for tr in tab:
lst = []
if header is None:
header = lst
else:
rows.append(lst)
for e in tr:
lst.append(e.text)
return header, rows
def extractColumns(header, rows, clst):
header2 = []
for i in clst:
header2.append(header[i - 1]) # one-based to zero-based
rows2 = []
for row in rows:
lst = []
rows2.append(lst)
for i in clst:
lst.append(row[i - 1]) # one-based to zero-based
return header2, rows2
def myRepr(header, rows):
out = [repr(tuple(header))] # init -- list with header
for row in rows:
out.append(repr(tuple(row))) # another row
return '[\n' + (',\n'.join(out)) + '\n]' # join to string
table_fragment = '''\
<table border="1">
<tr>
<td>Header_1</td>
<td>Header_2</td>
<td>Header_3</td>
<td>Header_4</td>
</tr>
<tr>
<td>row 1_cell 1</td>
<td>row 1_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 2_cell 1</td>
<td>row 2_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 3_cell 1</td>
<td>row 3_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 4_cell 1</td>
<td>row 4_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
</table>'''
# Parse the table
header, rows = parseTable(table_fragment)
# For debugging...
print(header)
print(rows)
# Collect the representations of the selections. The extractColumns()
# returns a tuple. The * expands it to two arguments.
lst = []
lst.append(myRepr(header, rows))
lst.append(myRepr(*extractColumns(header, rows, [1, 2])))
lst.append(myRepr(*extractColumns(header, rows, [1, 3])))
lst.append(myRepr(*extractColumns(header, rows, [1, 2, 4])))
# Write the output.
with open('output.txt', 'w') as f:
f.write('[\n')
f.write(',\n'.join(lst))
f.write('\n]')
```
The output.txt now contains:
```
[
[
('Header_1', 'Header_2', 'Header_3', 'Header_4'),
('row 1_cell 1', 'row 1_cell 2', 'row 1_cell 3', 'row 1_cell 4'),
('row 2_cell 1', 'row 2_cell 2', 'row 1_cell 3', 'row 1_cell 4'),
('row 3_cell 1', 'row 3_cell 2', 'row 1_cell 3', 'row 1_cell 4'),
('row 4_cell 1', 'row 4_cell 2', 'row 1_cell 3', 'row 1_cell 4')
],
[
('Header_1', 'Header_2'),
('row 1_cell 1', 'row 1_cell 2'),
('row 2_cell 1', 'row 2_cell 2'),
('row 3_cell 1', 'row 3_cell 2'),
('row 4_cell 1', 'row 4_cell 2')
],
[
('Header_1', 'Header_3'),
('row 1_cell 1', 'row 1_cell 3'),
('row 2_cell 1', 'row 1_cell 3'),
('row 3_cell 1', 'row 1_cell 3'),
('row 4_cell 1', 'row 1_cell 3')
],
[
('Header_1', 'Header_2', 'Header_4'),
('row 1_cell 1', 'row 1_cell 2', 'row 1_cell 4'),
('row 2_cell 1', 'row 2_cell 2', 'row 1_cell 4'),
('row 3_cell 1', 'row 3_cell 2', 'row 1_cell 4'),
('row 4_cell 1', 'row 4_cell 2', 'row 1_cell 4')
]
]
```
|
Look into LXML as an html/xml parser that you could use. Then simply make a recursive function.
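A minimal recursive sketch of that idea, shown here with the standard library's `xml.etree.ElementTree` so the example is self-contained (the lxml API is nearly identical):

```python
import xml.etree.ElementTree as ET

def collect_text(node, out):
    # Depth-first walk: collect the text of every <td> cell.
    if node.tag == 'td' and node.text:
        out.append(node.text)
    for child in node:
        collect_text(child, out)

root = ET.fromstring('<table><tr><td>a</td><td>b</td></tr></table>')
cells = []
collect_text(root, cells)
print(cells)  # ['a', 'b']
```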
|
18,732,803
|
So I'm trying to build an insult generator that will take lists, randomize the inputs, and show the randomized code at the push of a button.
Right now, the code looks like...
```
import Tkinter
import random
section1 = ["list of stuff"]
section2 = ["list of stuff"]
section3 = ["list of stuff"]
class myapp(Tkinter.Tk):
def __init__(self,parent):
Tkinter.Tk.__init__(self, parent)
self.parent = parent
self.initialize()
def initialize(self):
self.grid() # creates grid layout manager where we can place our widgets within the window
button = Tkinter.Button(self, text=u"Generate!", command=self.OnButtonClick)
button.grid(column=1, row=0)
self.labelVariable = Tkinter.StringVar()
label = Tkinter.Label(self, textvariable=self.labelVariable, anchor='w', fg='white', bg='green')
label.grid(column=0, row=1, columnspan=2, sticky='EW')
self.labelVariable.set(u"Oh hi there !")
self.grid_columnconfigure(0, weight=1)
self.resizable(True, False)
self.update()
self.geometry(self.geometry())
def generator():
a = random.randint(0, int(len(section1))-1)
b = random.randint(0, int(len(section2))-1)
c = random.randint(0, int(len(section3))-1)
myText = "You are a "+ section1[a]+" "+section2[b]+'-'+section3[c]+"! Fucker."
return myText
def OnButtonClick(self):
self.labelVariable.set(myText + "(You clicked the button !)")
self.entry.focus_set()
self.entry.selection_range(0, Tkinter.END)
if __name__=='__main__':
app = myapp(None) # Instantiates the class
app.title('Random Insult Generator') # Names the window we're creating.
app.mainloop() # Program will loop indefinitely, awaiting input
```
Right now, the error it's giving is that the `myText` isn't defined.
Any thoughts on how to fix it?
**Edit:**
The error message is...
```
Exception in Tkinter callback
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1470, in __call__
return self.func(*args)
File "...", line 41, in OnButtonClick
self.labelVariable.set(myText+"(You clicked the button !)")
NameError: global name 'myText' is not defined
```
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18732803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2658570/"
] |
```
def OnButtonClick(self):
myText = self.generator() # CALL IT!
self.labelVariable.set(myText+"(You clicked the button !)")
self.entry.focus_set()
self.entry.selection_range(0,Tkinter.END)
```
AND
```
def generator(self):....
```
|
Change the second line of the OnButtonClick method, replacing myText with a call to self.generator()
|
18,732,803
|
So I'm trying to build an insult generator that will take lists, randomize the inputs, and show the randomized code at the push of a button.
Right now, the code looks like...
```
import Tkinter
import random
section1 = ["list of stuff"]
section2 = ["list of stuff"]
section3 = ["list of stuff"]
class myapp(Tkinter.Tk):
def __init__(self,parent):
Tkinter.Tk.__init__(self, parent)
self.parent = parent
self.initialize()
def initialize(self):
self.grid() # creates grid layout manager where we can place our widgets within the window
button = Tkinter.Button(self, text=u"Generate!", command=self.OnButtonClick)
button.grid(column=1, row=0)
self.labelVariable = Tkinter.StringVar()
label = Tkinter.Label(self, textvariable=self.labelVariable, anchor='w', fg='white', bg='green')
label.grid(column=0, row=1, columnspan=2, sticky='EW')
self.labelVariable.set(u"Oh hi there !")
self.grid_columnconfigure(0, weight=1)
self.resizable(True, False)
self.update()
self.geometry(self.geometry())
def generator():
a = random.randint(0, int(len(section1))-1)
b = random.randint(0, int(len(section2))-1)
c = random.randint(0, int(len(section3))-1)
myText = "You are a "+ section1[a]+" "+section2[b]+'-'+section3[c]+"! Fucker."
return myText
def OnButtonClick(self):
self.labelVariable.set(myText + "(You clicked the button !)")
self.entry.focus_set()
self.entry.selection_range(0, Tkinter.END)
if __name__=='__main__':
app = myapp(None) # Instantiates the class
app.title('Random Insult Generator') # Names the window we're creating.
app.mainloop() # Program will loop indefinitely, awaiting input
```
Right now, the error it's giving is that the `myText` isn't defined.
Any thoughts on how to fix it?
**Edit:**
The error message is...
```
Exception in Tkinter callback
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1470, in __call__
return self.func(*args)
File "...", line 41, in OnButtonClick
self.labelVariable.set(myText+"(You clicked the button !)")
NameError: global name 'myText' is not defined
```
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18732803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2658570/"
] |
If I'm dissecting your code properly, you need to define `def generator():` outside of the class you've defined; i.e., make it a module-level function, not a method of myapp. Secondly, you are trying to use the myText variable inside your OnButtonClick method, but as your error states, it is not defined. In order to use the data your generator function returns, you need to call the function. Just treat the last line of your generator as a placeholder.
your accessor method should look like this:
```
def OnButtonClick(self):
self.labelVariable.set(generator()+"(You clicked the button !)")
#self.entry.focus_set()
#self.entry.selection_range(0,Tkinter.END)
```
(the entry widget is not needed, and you have not called .pack() on it yet ;) )
and your generator function should be outside of your class.
Alternatively, you could leave the generator function inside your class as a method, but you would need to add self as a parameter to it:
```
def generator(self):
```
and add self.generator() when you call it in your button click event method.
Cheers!
|
```
def OnButtonClick(self):
myText = self.generator() # CALL IT!
self.labelVariable.set(myText+"(You clicked the button !)")
self.entry.focus_set()
self.entry.selection_range(0,Tkinter.END)
```
AND
```
def generator(self):....
```
|
18,732,803
|
So I'm trying to build an insult generator that will take lists, randomize the inputs, and show the randomized code at the push of a button.
Right now, the code looks like...
```
import Tkinter
import random
section1 = ["list of stuff"]
section2 = ["list of stuff"]
section3 = ["list of stuff"]
class myapp(Tkinter.Tk):
def __init__(self,parent):
Tkinter.Tk.__init__(self, parent)
self.parent = parent
self.initialize()
def initialize(self):
self.grid() # creates grid layout manager where we can place our widgets within the window
button = Tkinter.Button(self, text=u"Generate!", command=self.OnButtonClick)
button.grid(column=1, row=0)
self.labelVariable = Tkinter.StringVar()
label = Tkinter.Label(self, textvariable=self.labelVariable, anchor='w', fg='white', bg='green')
label.grid(column=0, row=1, columnspan=2, sticky='EW')
self.labelVariable.set(u"Oh hi there !")
self.grid_columnconfigure(0, weight=1)
self.resizable(True, False)
self.update()
self.geometry(self.geometry())
def generator():
a = random.randint(0, int(len(section1))-1)
b = random.randint(0, int(len(section2))-1)
c = random.randint(0, int(len(section3))-1)
myText = "You are a "+ section1[a]+" "+section2[b]+'-'+section3[c]+"! Fucker."
return myText
def OnButtonClick(self):
self.labelVariable.set(myText + "(You clicked the button !)")
self.entry.focus_set()
self.entry.selection_range(0, Tkinter.END)
if __name__=='__main__':
app = myapp(None) # Instantiates the class
app.title('Random Insult Generator') # Names the window we're creating.
app.mainloop() # Program will loop indefinitely, awaiting input
```
Right now, the error it's giving is that the `myText` isn't defined.
Any thoughts on how to fix it?
**Edit:**
The error message is...
```
Exception in Tkinter callback
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1470, in __call__
return self.func(*args)
File "...", line 41, in OnButtonClick
self.labelVariable.set(myText+"(You clicked the button !)")
NameError: global name 'myText' is not defined
```
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18732803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2658570/"
] |
If I'm dissecting your code properly, you need to define `def generator():` outside of the class you've defined; i.e., make it a module-level function, not a method of myapp. Secondly, you are trying to use the myText variable inside your OnButtonClick method, but as your error states, it is not defined. In order to use the data your generator function returns, you need to call the function. Just treat the last line of your generator as a placeholder.
your accessor method should look like this:
```
def OnButtonClick(self):
self.labelVariable.set(generator()+"(You clicked the button !)")
#self.entry.focus_set()
#self.entry.selection_range(0,Tkinter.END)
```
(the entry widget is not needed, and you have not called .pack() on it yet ;) )
and your generator function should be outside of your class.
Alternatively, you could leave the generator function inside your class as a method, but you would need to add self as a parameter to it:
```
def generator(self):
```
and add self.generator() when you call it in your button click event method.
Cheers!
|
Change the second line of the OnButtonClick method, replacing myText with a call to self.generator()
|
62,403,240
|
I was doing some question in C and I was asked to provide the output of this question :
```
#include <stdio.h>
int main()
{
float a =0.7;
if(a<0.7)
{
printf("Yes");
}
else{
printf("No");
}
}
```
By just looking at the problem I thought the answer would be *NO* but after running I found that it was *YES*
I searched the web about float and found [0.30000000000000004.com](https://0.30000000000000004.com/)
Just out of curiosity I ran the same code in python :
```
x = float(0.7)
if x < 0.7 :
print("YES")
else :
print("NO")
```
Here the output is *NO*
I am confused!
Maybe I am missing something
Please help me with this problem.
Thanks in advance!
|
2020/06/16
|
[
"https://Stackoverflow.com/questions/62403240",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9715289/"
] |
```
float a = 0.7;
if(a<0.7)
```
The first line above takes the `double` `0.7` and crams it into a `float`, which almost certainly has less precision (so you may lose information).
The second line upgrades the `float a` to a `double` (because you're comparing it with a `double 0.7`, and that's one of the things C does for you) but it's too late at that point, the information is already gone.
You can see this effect with:
```
#include <stdio.h>
int main(void) {
float a = 0.7;
float b = 0.7f;
double c = 0.7;
printf("a %.50g\nb %.50g\nc %.50g\n", a, b, c);
return 0;
}
```
which generates something like:
```
a 0.699999988079071044921875
b 0.699999988079071044921875
c 0.69999999999999995559107901499373838305473327636719
```
*Clearly,* the `double c` variable has about double the precision (which is why they're often referred to as single and double precision) than both:
* the `double 0.7` crammed into the `float a` variable; and
* the `float b` variable that had the `float 0.7` stored into it.
*Neither* of them is exactly `0.7` due to the way floating point numbers work, but the `double` is closer to the desired value, hence not equal to the `float`.
It's like pouring a full four-litre bucket of water into a three-litre bucket and then back again. The litre you lost in the overflow of the smaller bucket doesn't magically re-appear :-)
If you change the type of your `a` to `double`, or use `float` literals like `0.7f`, you'll find things work more as you expect, since there's no loss of precision in that case.
---
The reason why you don't see the same effect in Python is because there's *one* underlying type for these floating point values:
```
>>> x = float(.7)
>>> type(x)
<class 'float'>
>>> type(.7)
<class 'float'>
```
From the Python docs:
>
> There are three distinct numeric types: integers, floating point numbers, and complex numbers. In addition, Booleans are a subtype of integers. Integers have unlimited precision. ***Floating point numbers are usually implemented using double in C.***
>
>
>
Hence no loss of precision in that case.
The use of `double` seems to be confirmed by (slightly reformatted):
```
>>> import sys
>>> print(sys.float_info)
sys.float_info(
max=1.7976931348623157e+308,
max_exp=1024,
max_10_exp=308,
min=2.2250738585072014e-308,
min_exp=-1021,
min_10_exp=-307,
dig=15,
mant_dig=53,
epsilon=2.220446049250313e-16,
radix=2,
rounds=1
)
```
The exponents and min/max values are identical to those found in [IEEE754 double precision](https://en.wikipedia.org/wiki/IEEE_754-1985) values.
|
In `a<0.7` the constant `0.7` is a `double`, so `a`, which is a `float`, is promoted
to `double` before the comparison.
Nothing guarantees that these two constants (as `float` and as `double`) represent the same value.
As a `float`, the mantissa of `0.7` is the 23-bit pattern `01100110011001100110011`; as a `double`, it is the 52-bit pattern `0110011001100110011001100110011001100110011001100110`.
When the value converted from `float` is promoted to `double`, its mantissa is padded with `0`s.
Comparing these two bit sequences shows that the `double` constant is greater than the promoted `float` constant (the first differing bit, a few places past the 23rd, is `1` in the `double` but `0` in the padding), which
leads to displaying `"Yes"`.
On the other hand, in python, only the `double` representation exists
for floating point numbers; thus there is no difference between what
is stored in `a` and the constant `0.7` of the comparison, which leads
to displaying `"No"`.
|
70,915,615
|
I am trying to use a parent class as a blueprint for new classes.
E.g. the `FileValidator` contains all generic attributes and methods for a generic file. Then I want to create, for example, an `ImageValidator` inheriting everything from the FileValidator but with additional, more specific attributes, methods, etc. In this example the child class is called `FileValidatorPlus`.
My understanding was that if I inherit from the parent class I can just plug in more attributes/methods without repeating anything, like just adding `min_size`. But the following code gives: `TypeError: FileValidatorPlus.__init__() got an unexpected keyword argument 'max_size'`
```
class File:
def __init__(self, file_size=100):
self.file_size = file_size
class FileValidator(object):
error_messages = {'max_size': 'some error msg'}
def __init__(self, max_size=None):
self.max_size = max_size
def __call__(self, file):
print(self.max_size > file.file_size)
class FileValidatorPlus(FileValidator):
error_messages = {'min_size': 'some other error msg'}
def __init__(self, min_size=None):
super(FileValidatorPlus, self).__init__()
self.error_messages.update(super(FileValidatorPlus, self).error_messages)
self.min_size = min_size
validator = FileValidatorPlus(max_size=10, min_size=20)
print(validator.error_messages)
print(validator.max_size)
print(validator.min_size)
```
Full Traceback:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
TypeError: FileValidatorPlus.__init__() got an unexpected keyword argument 'max_size'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/.pycharm_helpers/pydev/_pydev_bundle/pydev_code_executor.py", line 108, in add_exec
more, exception_occurred = self.do_add_exec(code_fragment)
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 90, in do_add_exec
command.run()
File "/opt/.pycharm_helpers/pydev/_pydev_bundle/pydev_console_types.py", line 35, in run
self.more = self.interpreter.runsource(text, '<input>', symbol)
File "/usr/local/lib/python3.10/code.py", line 74, in runsource
self.runcode(code)
File "/usr/local/lib/python3.10/code.py", line 94, in runcode
self.showtraceback()
File "/usr/local/lib/python3.10/code.py", line 148, in showtraceback
sys.excepthook(ei[0], ei[1], last_tb)
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 112, in info
traceback.print_exception(type, value, tb)
NameError: name 'traceback' is not defined
Traceback (most recent call last):
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 284, in process_exec_queue
interpreter.add_exec(code_fragment)
File "/opt/.pycharm_helpers/pydev/_pydev_bundle/pydev_code_executor.py", line 132, in add_exec
return more, exception_occurred
UnboundLocalError: local variable 'exception_occurred' referenced before assignment
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 511, in <module>
pydevconsole.start_server(port)
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 407, in start_server
process_exec_queue(interpreter)
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 292, in process_exec_queue
traceback.print_exception(type, value, tb, file=sys.__stderr__)
UnboundLocalError: local variable 'traceback' referenced before assignment
Process finished with exit code 1
```
|
2022/01/30
|
[
"https://Stackoverflow.com/questions/70915615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11971785/"
] |
Consider below approach
```
with example as (
select '670000000000100000000000000000000000000000000000000000000000000' as s
)
select s, (select sum(cast(num as int64)) from unnest(split(s,'')) num) result
from example
```
with output
[](https://i.stack.imgur.com/aiBCO.png)
|
Yet another [fun] option
```
create temp function sum_digits(expression string)
returns int64
language js as """
return eval(expression);
""";
with example as (
select '670000000000100000000000000000000000000000000000000000000000000' as s
)
select s, sum_digits(regexp_replace(replace(s, '0', ''), r'(\d)', r'+\1')) result
from example
```
with output
[](https://i.stack.imgur.com/BUflp.png)
What it does is -
* first it transforms the initial long string into a shorter one - `671`
* then it transforms that into an expression - `+6+7+1`
* and finally it passes the expression to the JavaScript `eval` function *(unfortunately BigQuery does not yet have an `eval` function of its own)*
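The transformation is easy to mirror in Python for intuition (a sketch, not BigQuery code):

```python
# Drop the zeros, prefix each remaining digit with '+', and evaluate
# the resulting arithmetic expression.
s = '670000000000100000000000000000000000000000000000000000000000000'
expr = ''.join('+' + d for d in s.replace('0', ''))
print(expr)        # +6+7+1
print(eval(expr))  # 14
```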
|
8,219,630
|
As a developer that has worked on more than one python project at once, I love the idea of Virtualenv. But, I'm currently trying to get Komodo IDE to play nice with VirtualEnv on a Windows box. I've downloaded virtualenvwrapper-win and got it working (btw, if you are using Virtualenv on Windows you should check it out):
<http://pypi.python.org/pypi/virtualenvwrapper-win>
however, I can't quite figure out what I need to do to get Komodo IDE to respect it all. I found the following for Mac users:
<http://blog.haydon.id.au/2010/11/taming-komodo-dragon-for-virtualenv-and.html>
But, so far, no luck. I'm pretty sure that I need to set a postactivate script to set some environment variables for Komodo to pick up.
Has anyone gotten this working before?
I'm using:
Win7, Python 2.6, Komodo IDE 6.1.3
Thanks in advance!
|
2011/11/21
|
[
"https://Stackoverflow.com/questions/8219630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/265681/"
] |
I finally ended up posting the same question on the ActiveState forum. The reply was that it doesn't officially support VirtualEnv yet, but that you can get it to work by adjusting the paths, etc. Here is the link to the question/reply.
<http://community.activestate.com/node/7499>
|
You can do this by adding the virtualenv's Python library to the project. Right-click on Project > Properties > Languages > Python > Additional Python Import Directories.
Now if someone could tell me how to add a folder like that in Mac when the virtualenv is under a hidden folder (without turning hidden folders on in Finder).
|
8,219,630
|
As a developer that has worked on more than one python project at once, I love the idea of Virtualenv. But, I'm currently trying to get Komodo IDE to play nice with VirtualEnv on a Windows box. I've downloaded virtualenvwrapper-win and got it working (btw, if you are using Virtualenv on Windows you should check it out):
<http://pypi.python.org/pypi/virtualenvwrapper-win>
however, I can't quite figure out what I need to do to get Komodo IDE to respect it all. I found the following for Mac users:
<http://blog.haydon.id.au/2010/11/taming-komodo-dragon-for-virtualenv-and.html>
But, so far, no luck. I'm pretty sure that I need to set a postactivate script to set some environment variables for Komodo to pick up.
Has anyone gotten this working before?
I'm using:
Win7, Python 2.6, Komodo IDE 6.1.3
Thanks in advance!
|
2011/11/21
|
[
"https://Stackoverflow.com/questions/8219630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/265681/"
] |
I finally ended up posting the same question on the ActiveState forum. The reply was that it doesn't officially support VirtualEnv yet, but that you can get it to work by adjusting the paths, etc. Here is the link to the question/reply.
<http://community.activestate.com/node/7499>
|
Use the context menu to setup [virtualenv](https://stackoverflow.com/questions/8219630/virtualenv-and-komodo-ide-on-windows). Right-click on Project > Properties > Languages > Python > Additional Python Import Directories.
Use an alias in .profile to add support for [rvm](http://community.activestate.com/forum/does-komodo-support-rvm-or-not).
```
alias komodo='open -a "Komodo Edit"'
```
From there, type "rvm use ree( or rbx, or 1.9.1, or whichever version you want)."
|
8,219,630
|
As a developer that has worked on more than one python project at once, I love the idea of Virtualenv. But, I'm currently trying to get Komodo IDE to play nice with VirtualEnv on a Windows box. I've downloaded virtualenvwrapper-win and got it working (btw, if you are using Virtualenv on Windows you should check it out):
<http://pypi.python.org/pypi/virtualenvwrapper-win>
however, I can't quite figure out what I need to do to get Komodo IDE to respect it all. I found the following for Mac users:
<http://blog.haydon.id.au/2010/11/taming-komodo-dragon-for-virtualenv-and.html>
But, so far, no luck. I'm pretty sure that I need to set a postactivate script to set some environment variables for Komodo to pick up.
Has anyone gotten this working before?
I'm using:
Win7, Python 2.6, Komodo IDE 6.1.3
Thanks in advance!
|
2011/11/21
|
[
"https://Stackoverflow.com/questions/8219630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/265681/"
] |
You can do this by adding the virtualenv's Python library to the project. Right-click on Project > Properties > Languages > Python > Additional Python Import Directories.
Now if someone could tell me how to add a folder like that in Mac when the virtualenv is under a hidden folder (without turning hidden folders on in Finder).
|
Use the context menu to setup [virtualenv](https://stackoverflow.com/questions/8219630/virtualenv-and-komodo-ide-on-windows). Right-click on Project > Properties > Languages > Python > Additional Python Import Directories.
Use an alias in .profile to add support for [rvm](http://community.activestate.com/forum/does-komodo-support-rvm-or-not).
```
alias komodo='open -a "Komodo Edit"'
```
From there, type "rvm use ree( or rbx, or 1.9.1, or whichever version you want)."
|
41,846,466
|
I am currently experimenting with Behavioral Driven Development. I am using behave\_django with selenium. I get the following output
```
Creating test database for alias 'default'...
Feature: Open website and print title # features/first_selenium.feature:1
Scenario: Open website # features/first_selenium.feature:2
Given I open seleniumframework website # features/steps/first_selenium.py:2 0.001s
Traceback (most recent call last):
File "/home/vagrant/newproject3/newproject3/venv/local/lib/python2.7/site-packages/behave/model.py", line 1456, in run
match.run(runner.context)
File "/home/vagrant/newproject3/newproject3/venv/local/lib/python2.7/site-packages/behave/model.py", line 1903, in run
self.func(context, *args, **kwargs)
File "features/steps/first_selenium.py", line 4, in step_impl
context.browser.get("http://www.seleniumframework.com")
File "/home/vagrant/newproject3/newproject3/venv/local/lib/python2.7/site-packages/behave/runner.py", line 214, in __getattr__
raise AttributeError(msg)
AttributeError: 'Context' object has no attribute 'browser'
Then I print the title # None
Failing scenarios:
features/first_selenium.feature:2 Open website
0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
0 steps passed, 1 failed, 1 skipped, 0 undefined
Took 0m0.001s
Destroying test database for alias 'default'...
```
Here is the code:
first\_selenium.feature
```
Feature: Open website and print title
Scenario: Open website
Given I open seleniumframework website
Then I print the title
```
first\_selenium.py
```
from behave import *
@given('I open seleniumframework website')
def step_impl(context):
context.browser.get("http://www.seleniumframework.com")
@then('I print the title')
def step_impl(context):
title = context.browser.title
assert "Selenium" in title
```
manage.py
```
#!/home/vagrant/newproject3/newproject3/venv/bin/python
import os
import sys
sys.path.append("/home/vagrant/newproject3/newproject3/site/v2/features")
import dotenv
if __name__ == "__main__":
path = os.path.realpath(os.path.dirname(__file__))
dotenv.load_dotenv(os.path.join(path, '.env'))
from configurations.management import execute_from_command_line
#from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
```
I'm not sure what this error means
|
2017/01/25
|
[
"https://Stackoverflow.com/questions/41846466",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7402682/"
] |
I know it is a late answer but maybe somebody is going to profit from it:
you need to declare the context.browser (in a before\_all/before\_scenario/before\_feature hook definition or just test method definition) before you use it, e.g.:
```
context.browser = webdriver.Chrome()
```
Please note that the hooks must be defined in a separate environment.py module
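For reference, a minimal sketch of such an `environment.py` (the file name and hook names are what behave looks for; the use of Chrome is just an assumption, any webdriver works):

```python
# environment.py -- lives next to the features/ directory.
# behave discovers this module and runs the hooks automatically.

def before_all(context):
    # Import lazily so the module loads even without selenium installed.
    from selenium import webdriver
    context.browser = webdriver.Chrome()  # assumes chromedriver on PATH

def after_all(context):
    context.browser.quit()
```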
|
In my case the browser wasn't installed. That can be the cause too. Also ensure the path to geckodriver is exposed if you are working with Firefox.
|
18,005,365
|
I need to start a python script with bash using nohup passing an arg that aids in defining a constant in a script I import. There are lots of questions about passing args but I haven't found a successful way using nohup.
a simplified version of my bash script:
```
#!/bin/bash
BUCKET=$1
echo $BUCKET
script='/home/path/to/script/script.py'
echo "starting $script with nohup"
nohup /usr/bin/python $script $BUCKET &
```
the relevant part of my config script i'm importing:
```
FLAG = sys.argv[0]
if FLAG == "b1":
AWS_ACCESS_KEY_ID = "key"
BUCKET = "bucket1"
AWS_SECRET_ACCESS_KEY = "secret"
elif FLAG == "b2":
AWS_ACCESS_KEY_ID = "key"
BUCKET = "bucket2"
AWS_SECRET_ACCESS_KEY = "secret"
else:
AWS_ACCESS_KEY_ID = "key"
BUCKET = "bucket3"
AWS_SECRET_ACCESS_KEY = "secret"
```
the script thats using it:
```
from config import BUCKET, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
#do stuff with the values.
```
Frankly, since I'm passing the args to script.py, I'm not confident that they'll be in scope for the imported config script. That said, when I take a similar approach without using nohup, it works.
|
2013/08/01
|
[
"https://Stackoverflow.com/questions/18005365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1901847/"
] |
In general, the argument vector for any program starts with the program itself, and then all of its arguments and options. Depending on the language, the program may be `sys.argv[0]`, `argv[0]`, `$0`, or something else, but it's basically always argument #0.
Each program whose job is to run another program—like `nohup`, and like the Python interpreter itself—generally drops itself and all of its own options, and gives the target program the rest of the command line.
So, [`nohup`](http://linux.die.net/man/1/nohup) takes a `COMMAND` and zero or more `ARGS`. Inside that `COMMAND`, `argv[0]` will be `COMMAND` itself (in this case, `'/usr/bin/python'`), and `argv[1]` and later will be the additional arguments (`'/home/path/to/script/script.py'` and whatever `$BUCKET` resolves to).
Next, Python takes zero or more options, a script, and zero or more args to that script, and exposes the script and its args as [`sys.argv`](http://docs.python.org/3/library/sys.html#sys.argv). So, in your script, `sys.argv[0]` will be `'/home/path/to/script/script.py'`, and `sys.argv[1]` will be whatever `$BUCKET` resolves to.
And `bash` works similarly to Python; `$1` will be the first argument to the bash wrapper script (`$0` will be the script itself), and so on. So, `sys.argv[1]` in the inner Python script will end up getting the first argument passed to the bash wrapper script.
Importing doesn't affect `sys.argv` at all. So, in both your `config` module and your top-level script, if you `import sys`, `sys.argv[1]` will hold the `$1` passed to the bash wrapper script.
(On some platforms, in some circumstances `argv[0]` may not have the complete path, or may even be empty. But that isn't relevant here. What you care about is the eventual `sys.argv[1]`, and `bash`, `nohup`, and `python` are all guaranteed to pass that through untouched.)
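Applied to the question's scripts, this means the flag arrives at index 1, not index 0, so `FLAG = sys.argv[0]` should be `FLAG = sys.argv[1]`. A quick sketch (the value `b1` is just illustrative):

```python
import sys

# Simulate the argv the inner script sees when bash runs:
#   nohup /usr/bin/python /home/path/to/script/script.py b1 &
sys.argv = ['/home/path/to/script/script.py', 'b1']

flag = sys.argv[1]   # index 1, not 0: sys.argv[0] is the script path itself
print(flag)  # b1
```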
|
```
nohup python3 -u ./train.py --dataset dataset_directory/ --model model_output_directory > output.log &
```
Here I'm executing the train.py file with python3. The -u flag disables output buffering so the logs show up as they are produced. The **dataset\_directory** and **model\_output\_directory** are passed as arguments, the greater-than symbol (**>**) redirects the logs into **output.log**, and the trailing ampersand (**&**) runs the process in the background.
To terminate this process
```
ps ax | grep train
```
then note the process\_ID
```
sudo kill -9 Process_ID
```
|
18,968,607
|
I'm trying to select timestamp columns from Cassandra 2.0 using cqlengine or cql (python), and I'm getting wrong results.
This is what i get from cqlsh ( or thrift ):
"2013-09-23 00:00:00-0700"
This is what i get from cqlengine and cql itself:
"\x00\x00\x01AG\x0b\xd5\xe0"
If you wanna reproduce the error, try this:
* open cqlsh
* create table test (name varchar primary key, dt timestamp)
* insert into test (name, dt) values ('Test', '2013-09-23 12:00') <<< Yes, I have tried other ways of inserting too....
* select \* from test ( Here it's everything fine )
* Now go on cqlengine or cql itself and select that table and you will get a broken hexadecimal.
Thanks !
|
2013/09/23
|
[
"https://Stackoverflow.com/questions/18968607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2808750/"
] |
Unfortunately, cqlengine is not currently compatible with cassandra 2.0
There were some new types introduced with Cassandra 2.0, and we haven't had a chance to make cqlengine compatible with them. I'm also aware of a problem with blob columns.
This particular issue is caused by the cql driver returning the timestamp as a raw string of bytes, as opposed to an integer.
Since cqlengine does not support Cassandra 2.0 yet, your best bet is to use Cassandra 1.2.x until we can get it updated; cqlengine doesn't support any of the new 2.0 features anyway. If you really need to use 2.0, you can work around this problem by subclassing the DateTime column like so:
```
import struct
from cqlengine.columns import DateTime  # assumed import path for cqlengine's DateTime column

class NewDateTime(DateTime):
    def to_python(self, val):
        if isinstance(val, basestring):
            # raw bytes from the driver: big-endian unsigned long long of epoch milliseconds
            val = struct.unpack('!Q', val)[0] / 1000.0
        return super(NewDateTime, self).to_python(val)
```
|
The `timestamp` datatype stores values as the number of milliseconds since the epoch, in a long. It seems that however you are printing it is interpreting it as a string. This works for me using cql-dbapi2 after creating and inserting as in the question:
```
>>> import cql
>>> con = cql.connect('localhost', keyspace='ks', cql_version='3.0.0')
>>> cursor = con.cursor()
>>> cursor.execute('select * from test;')
True
>>> cursor.fetchone()
[u'Test', 1379934000.0]
```
|
29,320,466
|
I have tried to use [emcee](http://dan.iel.fm/emcee/current/user/advanced/) library to implement Monte Carlo Markov Chain inside a class and also make multiprocessing module works but after running such a test code:
```
import numpy as np
import emcee
import scipy.optimize as op
# Choose the "true" parameters.
m_true = -0.9594
b_true = 4.294
f_true = 0.534
# Generate some synthetic data from the model.
N = 50
x = np.sort(10*np.random.rand(N))
yerr = 0.1+0.5*np.random.rand(N)
y = m_true*x+b_true
y += np.abs(f_true*y) * np.random.randn(N)
y += yerr * np.random.randn(N)
class modelfit():
def __init__(self):
self.x=x
self.y=y
self.yerr=yerr
self.m=-0.6
self.b=2.0
self.f=0.9
def get_results(self):
def func(a):
model=a[0]*self.x+a[1]
inv_sigma2 = 1.0/(self.yerr**2 + model**2*np.exp(2*a[2]))
return 0.5*(np.sum((self.y-model)**2*inv_sigma2 + np.log(inv_sigma2)))
result = op.minimize(func, [self.m, self.b, np.log(self.f)],options={'gtol': 1e-6, 'disp': True})
m_ml, b_ml, lnf_ml = result["x"]
return result["x"]
def lnprior(self,theta):
m, b, lnf = theta
if -5.0 < m < 0.5 and 0.0 < b < 10.0 and -10.0 < lnf < 1.0:
return 0.0
return -np.inf
def lnprob(self,theta):
lp = self.lnprior(theta)
likelihood=self.lnlike(theta)
if not np.isfinite(lp):
return -np.inf
return lp + likelihood
def lnlike(self,theta):
m, b, lnf = theta
model = m * self.x + b
inv_sigma2 = 1.0/(self.yerr**2 + model**2*np.exp(2*lnf))
return -0.5*(np.sum((self.y-model)**2*inv_sigma2 - np.log(inv_sigma2)))
def run_mcmc(self,nstep):
ndim, nwalkers = 3, 100
pos = [self.get_results() + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
self.sampler = emcee.EnsembleSampler(nwalkers, ndim, self.lnprob,threads=10)
self.sampler.run_mcmc(pos, nstep)
test=modelfit()
test.x=x
test.y=y
test.yerr=yerr
test.get_results()
test.run_mcmc(5000)
```
I got this error message :
```
File "MCMC_model.py", line 157, in run_mcmc
self.sampler.run_mcmc(theta0, nstep)
File "build/bdist.linux-x86_64/egg/emcee/sampler.py", line 157, in run_mcmc
File "build/bdist.linux-x86_64/egg/emcee/ensemble.py", line 198, in sample
File "build/bdist.linux-x86_64/egg/emcee/ensemble.py", line 382, in _get_lnprob
File "build/bdist.linux-x86_64/egg/emcee/interruptible_pool.py", line 94, in map
File "/vol/aibn84/data2/zahra/anaconda/lib/python2.7/multiprocessing/pool.py", line 558, in get
raise self._value
cPickle.PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
```
I reckon it has something to do with how I have used **multiprocessing** in the **class** but I could not figure out how I could keep the structure of my class the way it is and meanwhile use multiprocessing as well??!!
I will appreciate for any tips.
P.S. I have to mention the code works perfectly if I remove `threads=10` from the last function.
|
2015/03/28
|
[
"https://Stackoverflow.com/questions/29320466",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2811074/"
] |
There are a number of SO questions that discuss what's going on:
1. <https://stackoverflow.com/a/21345273/2379433>
2. <https://stackoverflow.com/a/28887474/2379433>
3. <https://stackoverflow.com/a/21345308/2379433>
4. <https://stackoverflow.com/a/29129084/2379433>
…including this one, which appears to be your own earlier post of nearly the same question:
5. <https://stackoverflow.com/a/25388586/2379433>
However, the difference here is that you are not using `multiprocessing` directly -- `emcee` is. Therefore, the `pathos.multiprocessing` solution (from the links above) is not available to you. Since `emcee` uses `cPickle`, you'll have to stick to things that `pickle` knows how to serialize. You are out of luck for class instances. Typical workarounds are to either use `copy_reg` to register the type of object you want to serialize, or to add a `__reduce__` method to tell Python how to serialize it. Several of the answers in the above links suggest similar things, but none enable you to keep the class the way you have written it.
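A common workaround with `emcee` specifically is to avoid bound methods altogether: wrap the log-probability in a small picklable callable object. The sketch below uses hypothetical names (`LnProbWrapper` and its toy computation are not part of the `emcee` API) and also shows `__reduce__`, which tells `pickle` how to rebuild an object:

```python
import pickle

class LnProbWrapper(object):
    """Picklable callable standing in for a bound lnprob method (hypothetical example)."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __call__(self, theta):
        # toy stand-in for the real log-probability computation
        return -0.5 * sum((yi - theta[0] * xi) ** 2 for xi, yi in zip(self.x, self.y))

    def __reduce__(self):
        # tell pickle how to rebuild this object: (constructor, constructor args)
        return (LnProbWrapper, (self.x, self.y))

lnprob = LnProbWrapper([1.0, 2.0], [2.0, 4.0])
restored = pickle.loads(pickle.dumps(lnprob))
```

An instance like this can be handed to the sampler in place of `self.lnprob`, since worker processes can now unpickle it.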
|
For the record, you can now create a `pathos.multiprocessing` pool, and pass it to emcee using the `pool` argument. However, be aware that the overhead of multiprocessing can actually slow things down, unless your likelihood is particularly time-consuming to compute.
|
70,747,394
|
I am trying to check if a user input as a string exists in a list called categoriesList which appends categories from a text file named categories.txt. If the user inputs a category that then exists in categoriesList my code should be able to print out "Category exists", otherwise "Category doesn't exist".
Here is the code:
```
categoriesList = []
with open("categories.txt", "r") as OpenCategories:
for category in (OpenCategories):
categoriesList.append(category)
while True:
inputCategories = input("Please enter a category:")
if inputCategories in categoriesList:
print("Category exists")
break
else:
print("Category doesn't exist")
break
```
When I run this code it always outputs Category doesn't exist even if the category I enter actually exists in categoriesList. How would I solve this problem in the code? Furthermore, I want to be able to get one input from the user for entering a category so I don't want "Please enter a category" to come up several times, I just want the code to make it come up just once.
Also, it would be much appreciated if I could know the code on how I would then do all of the above in tkinter as I need to do the above in GUI. I think you need to have labels and allow the user to enter a category in a box on the screen.
I have tried to make code which tries to check a user input exists in a list after getting the input on a tk screen as its not enough for me to just have the check happening in a python console and its not doing it properly, so here is the code:
```
import tkinter as tk
from tkinter import ttk
window=tk.Tk()
canvas1 = tk.Canvas(window, width = 400, height = 300)
canvas1.pack()
label1 = Label(window, text="Please enter a category:")
label1.pack()
entry = Entry(window, width=50)
entry.pack()
def for_button():
checkUserInput = entry.get()
button = Button(window, text="Check", command=for_button)
button.pack()
for i in categoriesList:
if button in categoriesList:
categoryExist = Label(window, text="Category exists")
categoryExist.pack()
else:
categoryNotExist= Label(window, text="Category doesn't exist")
categoryNotExist.pack()
window.mainloop()
```
It uses the list categoriesList from the code given earlier in the post. I am trying to get the user to enter a category into the text box on the tk screen and click the "Check" button afterwards, but before the user can give an input, "category doesn't exist" comes up numerous times, which is what I don't want the code to be doing.
|
2022/01/17
|
[
"https://Stackoverflow.com/questions/70747394",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17928821/"
] |
Your `STATICFILES_FINDERS` setting tells Django that it should look for static files in the following places:
* `FileSystemFinder` tells it to look in whichever locations are listed in STATICFILES\_DIRS;
* `AppDirectoriesFinder` tells it to look in the `static` folder of each registered app in INSTALLED\_APPS.
In normal circumstances, STATICFILES\_DIRS should not make a difference to Wagtail's own static files. This is because Wagtail's static files are stored within the apps that make up the Wagtail package, and will be pulled in by AppDirectoriesFinder - FileSystemFinder (and STATICFILES\_DIRS) do not come into play.
The fact that you're seeing a difference suggests to me that you've previously customised Wagtail's JS / CSS by placing static files within your project's 'static' folder, in a location such as `myproject/static/wagtailadmin/css/`, to override the built-in files. These customisations would presumably have been made against Wagtail 2.8 and will not behave correctly against Wagtail 2.15. The solution is to remove these custom files from your project.
|
Try changing:
`STATICFILES_DIRS = [os.path.join(PROJECT_DIR, 'static'),]`
to
`STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static'),]`
|
15,128,404
|
I am making a GUI in wxpython.
I want to place images next to radio buttons.
How should I do that in wxPython?
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128404",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2118322/"
] |
I suggest using wx.ToggleButton with bitmap labels if you are using 2.9, or one of the bitmap toggle button classes in wx.lib.buttons if you are still on 2.8. You can then implement the "radio button" functionality yourself by untoggling all other buttons in the group when one of them is toggled. Using the bitmap itself as the radio button will look nicer and will save space.
|
I'm not sure what you mean. Are you wanting images instead of the actual radio button itself? That is not supported. If you want an image in addition to the radio button, then just use a group of horizontal box sizers or one of the grid sizers. Add the image and then the radio button. And you're done!
|
15,128,404
|
I am making a GUI in wxpython.
I want to place images next to radio buttons.
How should I do that in wxPython?
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128404",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2118322/"
] |
I suggest using wx.ToggleButton with bitmap labels if you are using 2.9, or one of the bitmap toggle button classes in wx.lib.buttons if you are still on 2.8. You can then implement the "radio button" functionality yourself by untoggling all other buttons in the group when one of them is toggled. Using the bitmap itself as the radio button will look nicer and will save space.
|
I am satisfied with the following:
* the image icon is to the left of the radio button,
* a click on the image activates the radio button.
It seems that usability does not suffer.
```
def make_radio_with_icon(parent_window, bitmap, label):
sizer = wx.BoxSizer(orient=wx.HORIZONTAL)
sizer.Add(bitmap)
r = wx.RadioButton(parent_window, label=label)
sizer.Add(r)
def on_click(evt):
r.SetValue(1)
bitmap.Bind(wx.EVT_LEFT_DOWN, on_click)
return sizer
```
By analogy, you can implement the ordering: radio button itself, then image, then label.
|
60,976,753
|
Well, I have this DataFrame in Python:
```
folio id_incidente nombre app apm \
0 1 1 SIN DATOS SIN DATOS SIN DATOS
1 131 100085 JUAN DOMINGO GONZALEZ DELGADO
2 132 100085 FRANCISCO JAVIER VELA RAMIREZ
3 133 100087 JUAN CARLOS PEREZ MEDINA
4 134 100088 ARMANDO SALINAS SALINAS
... ... ... ... ... ...
1169697 1223258 866846 IVAN RIVERA SILVA
1169698 1223259 866847 EDUARDO PLASCENCIA MARTINEZ
1169699 1223260 866848 FRANCISCO JAVIER PLASCENCIA MARTINEZ
1169700 1223261 866849 JUAN ALBERTO MARTINEZ ARELLANO
1169701 1223262 866850 JOSE DE JESUS SERRANO GONZALEZ
foto_barandilla fecha_hora_registro
0 1.jpg 0/0/0000 00:00:00
1 131.jpg 2008-08-07 15:42:25
2 132.jpg 2008-08-07 15:50:42
3 133.jpg 2008-08-07 16:37:24
4 134.jpg 2008-08-07 17:18:12
... ... ...
1169697 20200330103123_239288573.jpg 2020-03-30 10:32:10
1169698 20200330103726_1160992585.jpg 2020-03-30 10:38:25
1169699 20200330103837_999151106.jpg 2020-03-30 10:39:44
1169700 20200330104038_29275767.jpg 2020-03-30 10:41:52
1169701 20200330104145_640780023.jpg 2020-03-30 10:45:35
```
Here `app` and `apm` are the mother's and father's surnames. I then tried this in order to get another column with the whole name:
```
names = {}
for i in range(1,df.shape[0]+1):
try:
names[i] = df["nombre"].iloc[i]+' '+df["app"].iloc[i]+' '+df["apm"].iloc[i]
except:
print(df["folio"].iloc[i], df["nombre"].iloc[i],df["app"].iloc[i],df["apm"].iloc[i])
```
but I get this:
```
400085 nan nan nan
400631 nan nan nan
401267 nan nan nan
401933 nan nan nan
401942 nan nan nan
402030 nan nan nan
403008 nan nan nan
403010 nan nan nan
403011 nan nan nan
403027 nan nan nan
403384 nan nan nan
403399 nan nan nan
403415 nan nan nan
403430 nan nan nan
404764 nan nan nan
501483 CARLOS ESPINOZA nan
504723 RICARDO JARED LOPEZ ACOSTA nan
506989 JUAN JOSE FLORES OCHOA nan
507376 JOSE DE JESUS VENEGAS nan
.....
```
I tried to use `.fillna('')` like this:
```
df["app"].fillna('')
df["apm"].fillna('')
df["nombre"].fillna('')
```
but the result is the same. I hope you can help me make the column with the whole name, like name + surname1 + surname2.
Edit: here is my minimal version; the reporte files are (each one) a part of the whole database shown above:
```
for i in range(1,31):
exec('reporte_%d = pd.read_excel("/home/workstation/Desktop/fotos/Fotos/Detenidos/Reporte Detenidos CER %d.xlsx", encoding="latin1" )'%(i,i))
reportes = [reporte_1,reporte_2,reporte_3,reporte_4,reporte_5,reporte_6,reporte_7,reporte_8,reporte_9,reporte_10,reporte_11,reporte_12,reporte_13,reporte_14,reporte_15,reporte_16,reporte_17,reporte_18,reporte_19,reporte_20,reporte_21,reporte_22,reporte_23,reporte_24,reporte_25,reporte_26,reporte_27,reporte_28,reporte_29,reporte_30]
df = pd.concat(reportes)
```
Now when I run
```
df['Full_name'] = [' '.join([y for y in x if pd.notna(y)]) for x in zip(df['nombre'], df['app'], df['apm'])]
```
I get this error: `TypeError: sequence item 1: expected str instance, int found`
|
2020/04/01
|
[
"https://Stackoverflow.com/questions/60976753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11579387/"
] |
Use [`Object.values`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/values) with [`Array.prototype.some`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/some):
```js
const obj = {
id: '123abc',
carrier_name: 'a',
group_id: 'a',
member_id: 'a',
plan_name: 'a',
}
console.log(!Object.values(obj).some(val => val === ""))
const obj2 = {
id: '123abc',
carrier_name: '',
group_id: 'a',
member_id: 'a',
plan_name: 'a',
}
console.log(!Object.values(obj2).some(val => val === ""))
```
|
You could check with `every` and `Boolean` as callback, **if you have only strings**.
```js
const check = object => Object.values(object).every(Boolean);
console.log(check({ foo: 'bar' })); // true
console.log(check({ foo: '' })); // false
console.log(check({ foo: '', bar: 'baz' })); // false
console.log(check({ foo: '', bar: '' })); // false
```
|
60,976,753
|
Well, I have this DataFrame in Python:
```
folio id_incidente nombre app apm \
0 1 1 SIN DATOS SIN DATOS SIN DATOS
1 131 100085 JUAN DOMINGO GONZALEZ DELGADO
2 132 100085 FRANCISCO JAVIER VELA RAMIREZ
3 133 100087 JUAN CARLOS PEREZ MEDINA
4 134 100088 ARMANDO SALINAS SALINAS
... ... ... ... ... ...
1169697 1223258 866846 IVAN RIVERA SILVA
1169698 1223259 866847 EDUARDO PLASCENCIA MARTINEZ
1169699 1223260 866848 FRANCISCO JAVIER PLASCENCIA MARTINEZ
1169700 1223261 866849 JUAN ALBERTO MARTINEZ ARELLANO
1169701 1223262 866850 JOSE DE JESUS SERRANO GONZALEZ
foto_barandilla fecha_hora_registro
0 1.jpg 0/0/0000 00:00:00
1 131.jpg 2008-08-07 15:42:25
2 132.jpg 2008-08-07 15:50:42
3 133.jpg 2008-08-07 16:37:24
4 134.jpg 2008-08-07 17:18:12
... ... ...
1169697 20200330103123_239288573.jpg 2020-03-30 10:32:10
1169698 20200330103726_1160992585.jpg 2020-03-30 10:38:25
1169699 20200330103837_999151106.jpg 2020-03-30 10:39:44
1169700 20200330104038_29275767.jpg 2020-03-30 10:41:52
1169701 20200330104145_640780023.jpg 2020-03-30 10:45:35
```
Here `app` and `apm` are the mother's and father's surnames. I then tried this in order to get another column with the whole name:
```
names = {}
for i in range(1,df.shape[0]+1):
try:
names[i] = df["nombre"].iloc[i]+' '+df["app"].iloc[i]+' '+df["apm"].iloc[i]
except:
print(df["folio"].iloc[i], df["nombre"].iloc[i],df["app"].iloc[i],df["apm"].iloc[i])
```
but I get this:
```
400085 nan nan nan
400631 nan nan nan
401267 nan nan nan
401933 nan nan nan
401942 nan nan nan
402030 nan nan nan
403008 nan nan nan
403010 nan nan nan
403011 nan nan nan
403027 nan nan nan
403384 nan nan nan
403399 nan nan nan
403415 nan nan nan
403430 nan nan nan
404764 nan nan nan
501483 CARLOS ESPINOZA nan
504723 RICARDO JARED LOPEZ ACOSTA nan
506989 JUAN JOSE FLORES OCHOA nan
507376 JOSE DE JESUS VENEGAS nan
.....
```
I tried to use `.fillna('')` like this:
```
df["app"].fillna('')
df["apm"].fillna('')
df["nombre"].fillna('')
```
but the result is the same. I hope you can help me make the column with the whole name, like name + surname1 + surname2.
Edit: here is my minimal version; the reporte files are (each one) a part of the whole database shown above:
```
for i in range(1,31):
exec('reporte_%d = pd.read_excel("/home/workstation/Desktop/fotos/Fotos/Detenidos/Reporte Detenidos CER %d.xlsx", encoding="latin1" )'%(i,i))
reportes = [reporte_1,reporte_2,reporte_3,reporte_4,reporte_5,reporte_6,reporte_7,reporte_8,reporte_9,reporte_10,reporte_11,reporte_12,reporte_13,reporte_14,reporte_15,reporte_16,reporte_17,reporte_18,reporte_19,reporte_20,reporte_21,reporte_22,reporte_23,reporte_24,reporte_25,reporte_26,reporte_27,reporte_28,reporte_29,reporte_30]
df = pd.concat(reportes)
```
Now when I run
```
df['Full_name'] = [' '.join([y for y in x if pd.notna(y)]) for x in zip(df['nombre'], df['app'], df['apm'])]
```
I get this error: `TypeError: sequence item 1: expected str instance, int found`
|
2020/04/01
|
[
"https://Stackoverflow.com/questions/60976753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11579387/"
] |
Use [`Object.values`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/values) with [`Array.prototype.some`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/some):
```js
const obj = {
id: '123abc',
carrier_name: 'a',
group_id: 'a',
member_id: 'a',
plan_name: 'a',
}
console.log(!Object.values(obj).some(val => val === ""))
const obj2 = {
id: '123abc',
carrier_name: '',
group_id: 'a',
member_id: 'a',
plan_name: 'a',
}
console.log(!Object.values(obj2).some(val => val === ""))
```
|
Simple loop and check
```js
const obj = {
id: '123abc',
carrier_name: 'a',
group_id: 'a',
member_id: '',
plan_name: '',
}
const checkIfEmpty = obj => {
for (const property in obj) {
if (obj[property].length === 0) {
return true
}
}
return false
}
console.log(checkIfEmpty(obj))
```
|
60,976,753
|
Well, I have this DataFrame in Python:
```
folio id_incidente nombre app apm \
0 1 1 SIN DATOS SIN DATOS SIN DATOS
1 131 100085 JUAN DOMINGO GONZALEZ DELGADO
2 132 100085 FRANCISCO JAVIER VELA RAMIREZ
3 133 100087 JUAN CARLOS PEREZ MEDINA
4 134 100088 ARMANDO SALINAS SALINAS
... ... ... ... ... ...
1169697 1223258 866846 IVAN RIVERA SILVA
1169698 1223259 866847 EDUARDO PLASCENCIA MARTINEZ
1169699 1223260 866848 FRANCISCO JAVIER PLASCENCIA MARTINEZ
1169700 1223261 866849 JUAN ALBERTO MARTINEZ ARELLANO
1169701 1223262 866850 JOSE DE JESUS SERRANO GONZALEZ
foto_barandilla fecha_hora_registro
0 1.jpg 0/0/0000 00:00:00
1 131.jpg 2008-08-07 15:42:25
2 132.jpg 2008-08-07 15:50:42
3 133.jpg 2008-08-07 16:37:24
4 134.jpg 2008-08-07 17:18:12
... ... ...
1169697 20200330103123_239288573.jpg 2020-03-30 10:32:10
1169698 20200330103726_1160992585.jpg 2020-03-30 10:38:25
1169699 20200330103837_999151106.jpg 2020-03-30 10:39:44
1169700 20200330104038_29275767.jpg 2020-03-30 10:41:52
1169701 20200330104145_640780023.jpg 2020-03-30 10:45:35
```
Here `app` and `apm` are the mother's and father's surnames. I then tried this in order to get another column with the whole name:
```
names = {}
for i in range(1,df.shape[0]+1):
try:
names[i] = df["nombre"].iloc[i]+' '+df["app"].iloc[i]+' '+df["apm"].iloc[i]
except:
print(df["folio"].iloc[i], df["nombre"].iloc[i],df["app"].iloc[i],df["apm"].iloc[i])
```
but I get this:
```
400085 nan nan nan
400631 nan nan nan
401267 nan nan nan
401933 nan nan nan
401942 nan nan nan
402030 nan nan nan
403008 nan nan nan
403010 nan nan nan
403011 nan nan nan
403027 nan nan nan
403384 nan nan nan
403399 nan nan nan
403415 nan nan nan
403430 nan nan nan
404764 nan nan nan
501483 CARLOS ESPINOZA nan
504723 RICARDO JARED LOPEZ ACOSTA nan
506989 JUAN JOSE FLORES OCHOA nan
507376 JOSE DE JESUS VENEGAS nan
.....
```
I tried to use `.fillna('')` like this:
```
df["app"].fillna('')
df["apm"].fillna('')
df["nombre"].fillna('')
```
but the result is the same. I hope you can help me make the column with the whole name, like name + surname1 + surname2.
Edit: here is my minimal version; the reporte files are (each one) a part of the whole database shown above:
```
for i in range(1,31):
exec('reporte_%d = pd.read_excel("/home/workstation/Desktop/fotos/Fotos/Detenidos/Reporte Detenidos CER %d.xlsx", encoding="latin1" )'%(i,i))
reportes = [reporte_1,reporte_2,reporte_3,reporte_4,reporte_5,reporte_6,reporte_7,reporte_8,reporte_9,reporte_10,reporte_11,reporte_12,reporte_13,reporte_14,reporte_15,reporte_16,reporte_17,reporte_18,reporte_19,reporte_20,reporte_21,reporte_22,reporte_23,reporte_24,reporte_25,reporte_26,reporte_27,reporte_28,reporte_29,reporte_30]
df = pd.concat(reportes)
```
Now when I run
```
df['Full_name'] = [' '.join([y for y in x if pd.notna(y)]) for x in zip(df['nombre'], df['app'], df['apm'])]
```
I get this error: `TypeError: sequence item 1: expected str instance, int found`
|
2020/04/01
|
[
"https://Stackoverflow.com/questions/60976753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11579387/"
] |
Simple loop and check
```js
const obj = {
id: '123abc',
carrier_name: 'a',
group_id: 'a',
member_id: '',
plan_name: '',
}
const checkIfEmpty = obj => {
for (const property in obj) {
if (obj[property].length === 0) {
return true
}
}
return false
}
console.log(checkIfEmpty(obj))
```
|
You could check with `every` and `Boolean` as callback, **if you have only strings**.
```js
const check = object => Object.values(object).every(Boolean);
console.log(check({ foo: 'bar' })); // true
console.log(check({ foo: '' })); // false
console.log(check({ foo: '', bar: 'baz' })); // false
console.log(check({ foo: '', bar: '' })); // false
```
|
25,449,779
|
I use the Google Cloud SDK under Windows 7 64-bit.
The Google Cloud SDK and Python installed successfully, but when I run gcloud, the error below occurs:
```
C:\Program Files\Google\Cloud SDK>gcloud
Traceback (most recent call last):
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\..\./lib\googlecloudsdk\gcloud\gcloud.py", line 137, in <
module>
_cli = CreateCLI()
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\..\./lib\googlecloudsdk\gcloud\gcloud.py", line 98, in Cr
eateCLI
sdk_root = config.Paths().sdk_root
AttributeError: 'Paths' object has no attribute 'sdk_root'
```
Can anyone help? Thanks.
|
2014/08/22
|
[
"https://Stackoverflow.com/questions/25449779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/485569/"
] |
I had the same problem, and this helped me solve it:
Manually remove the directory: C:\Program Files\Google\Cloud SDK
Then rerun: GoogleCloudSDKInstaller.exe
Also make sure that you have a connection to the needed download servers (I was behind a company firewall at first, and the installer didn't download all the files; it didn't complain about it either).
Then I was OK again.
Source: <https://code.google.com/p/google-cloud-sdk/issues/detail?id=62&thanks=62&ts=1407851956>
|
```
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt update
sudo apt-get install google-cloud-sdk
gcloud init
```
|
18,805,720
|
All I know how to do is type "python foo.py" at the DOS prompt; the program runs but then exits Python back to DOS. Is there a way to run foo.py from within Python? Or to stay in Python after running? I want to do this to help debug, so that I may look at variables used in foo.py.
(Thanks from a newbie)
|
2013/09/14
|
[
"https://Stackoverflow.com/questions/18805720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2779936/"
] |
You can enter the Python interpreter by just typing `python`. Then if you run:
```
execfile('foo.py')
```
This will run the program and keep the interpreter open. More details [here](http://docs.python.org/2/library/functions.html#execfile).
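Note that `execfile` only exists in Python 2. A rough equivalent for Python 3 (where the builtin was removed) can be sketched like this; `execfile3` is a hypothetical helper name, and the demo writes a throwaway script just to show the effect:

```python
import os
import tempfile

def execfile3(path, namespace=None):
    """Rough Python 3 replacement for the removed execfile() builtin."""
    if namespace is None:
        namespace = {}
    with open(path) as f:
        # compile with the filename so tracebacks point at the script
        exec(compile(f.read(), path, 'exec'), namespace)
    return namespace

# quick demo with a throwaway script
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as tmp:
    tmp.write("answer = 6 * 7\n")
ns = execfile3(tmp.name)
os.remove(tmp.name)
```

After the call, `ns` holds the script's variables (here `ns['answer']`), which you can inspect from the interpreter.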
|
To stay in Python afterwards, you could just type `python` at the command prompt, then run your code from inside Python. That way you'll be able to manipulate the objects (lists, dictionaries, etc.) as you wish.
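Alternatively, `python -i foo.py` runs the script and then drops into the interactive interpreter with the script's variables still defined. A non-interactive sketch (`foo_demo.py` is a throwaway example file; the piped `print(x)` stands in for what you would type, and `2>/dev/null` hides the `>>>` prompts):

```shell
# create a tiny throwaway script
printf 'x = 42\n' > foo_demo.py

# -i keeps the interpreter open after the script finishes;
# we pipe a command on stdin instead of typing it interactively
printf 'print(x)\n' | python3 -i foo_demo.py 2>/dev/null

rm foo_demo.py
```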
|
18,805,720
|
All I know how to do is type "python foo.py" in dos; the program runs but then exits python back to dos. Is there a way to run foo.py from within python? Or to stay in python after running? I want to do this to help debug, so that I may look at variables used in foo.py
(Thanks from a newbie)
|
2013/09/14
|
[
"https://Stackoverflow.com/questions/18805720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2779936/"
] |
You can enter the python interpreter by just typing Python. Then if you run:
```
execfile('foo.py')
```
This will run the program and keep the interpreter open. More details [here](http://docs.python.org/2/library/functions.html#execfile).
|
Add the module `q`, and use its `q.d()` method (I installed it with `easy_install q`):
<https://pypi.python.org/pypi/q>
```
import q
....
#a bunch of code in foo.py
...
q.d()
```
That will give you a console at any point in your program where you put it, from which you can interact with your script.
Consider the following foo.py:
```
import q
for x in range(5):
q.d()
```
Now examine what happens when I run it:
```
C:\Python_Examples>python qtest.py
Python console opened by q.d() in <module>
>>> print x
0
>>>
Python console opened by q.d() in <module>
>>> print x
1
>>>
Python console opened by q.d() in <module>
>>> print x
2
>>>
Python console opened by q.d() in <module>
>>> print x
3
```
(Note: to continue execution of the script, use Ctrl+Z.)
In my experience this has been very helpful, since you often want to pause and examine things mid-execution (not at the end).
|
27,621,018
|
How do I perform
```
echo xyz | ssh [host]
```
(send xyz to host)
with Python?
I have tried pexpect:
```
pexpect.spawn('echo xyz | ssh [host]')
```
but it's performing
```
echo 'xyz | ssh [host]'
```
Maybe another package would be better?
|
2014/12/23
|
[
"https://Stackoverflow.com/questions/27621018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3419895/"
] |
The pexpect documentation (<http://pexpect.sourceforge.net/pexpect.html#spawn>) gives an example of running a command with a pipe:
```
shell_cmd = 'ls -l | grep LOG > log_list.txt'
child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
child.expect(pexpect.EOF)
```
Previous incorrect attempt deleted to make sure no-one is confused by it.
|
You don't need `pexpect` to simulate a simple shell pipeline. The simplest way to emulate the pipeline is the `os.system` function:
```
os.system("echo xyz | ssh [host]")
```
A more Pythonic approach is to use [the `subprocess` module](https://docs.python.org/2/library/subprocess.html):
```
p = subprocess.Popen(["ssh", "host"],
stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write("xyz\n")
output = p.communicate()[0]
```
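On Python 3, `subprocess.run` makes the same idea shorter, and `text=True` lets you pass a `str` instead of bytes. In this local sketch, `cat` stands in for `ssh host` so the example can run without a remote machine:

```python
import subprocess

# 'cat' echoes stdin back, standing in for 'ssh host' in this local sketch
result = subprocess.run(["cat"], input="xyz\n",
                        capture_output=True, text=True)
print(result.stdout)
```

Swapping `["cat"]` for `["ssh", "host"]` gives the real pipeline, with the remote command's output in `result.stdout`.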
|