| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
21,962,250
|
I have a string that holds a binary number as a string
```
string = '0b100111'
```
I want that value to be a number, not a string type (pseudo-code)
```
bin(string) = 0b100111
```
Any Pythoners know an easy way to do this?
It is all part of this code for a Codecademy exercise (after the answer was implemented):
```
def flip_bit(number,n):
if type(number)==type('s'):
number = int(number,2)
mask=(0b1<<n-1)
print bin(mask)
print mask
desired = bin(number^mask)
return desired
flip_bit('0b111', 2)
```
|
2014/02/22
|
[
"https://Stackoverflow.com/questions/21962250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2918785/"
] |
What about calling the `int` function with base `2`?
```
>>> s = '0b100111'
>>> b = int(s, 2)
>>> print b
39
```
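A small added sketch (not part of the original answer): the same approach works in Python 3, where `print` is a function, and `bin` converts back for display:

```python
s = '0b100111'
b = int(s, 2)   # the '0b' prefix is accepted when the base is 2
print(b)        # 39
print(bin(b))   # 0b100111
```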
|
You can make it binary by putting a `b` before the quotes:
```
>>> s = b'hello'
>>> s.decode()
'hello'
```
|
21,962,250
|
I have a string that holds a binary number as a string
```
string = '0b100111'
```
I want that value to be a number, not a string type (pseudo-code)
```
bin(string) = 0b100111
```
Any Pythoners know an easy way to do this?
It is all part of this code for a Codecademy exercise (after the answer was implemented):
```
def flip_bit(number,n):
if type(number)==type('s'):
number = int(number,2)
mask=(0b1<<n-1)
print bin(mask)
print mask
desired = bin(number^mask)
return desired
flip_bit('0b111', 2)
```
|
2014/02/22
|
[
"https://Stackoverflow.com/questions/21962250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2918785/"
] |
What about calling the `int` function with base `2`?
```
>>> s = '0b100111'
>>> b = int(s, 2)
>>> print b
39
```
|
I'm afraid that having it exactly as idealised in your question is impossible. Since what you want is a series of characters, it can only be a string (though you can convert it to an integer). But it is still workable as a number with built-in functions, for example:
```
num1 = '0b0110'
num2 = '0b0101'
result = int(num1, 2) + int(num2, 2)
print(bin(result))
```
The only way you could have that syntax in your code is if that binary number became a name itself. Note that Python integers carry no base at all: literals and `int()` accept bases such as 2, 8 and 16, arithmetic works on the resulting values, and `bin()` merely produces a base-2 string representation. A plain string, on the other hand, cannot be manipulated as a number.
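To illustrate, a hedged sketch (not from the original answer): an integer has no base of its own, only its string representations do:

```python
n = int('0b100111', 2)     # parse the base-2 string
print(n)                   # 39 -- the default decimal representation
print(bin(n))              # 0b100111
print(hex(n))              # 0x27
# base 0 tells int() to infer the base from the prefix
print(int('0b100111', 0))  # 39
```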
|
9,787,741
|
What is the equivalent of the following in Python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
Since your two questions are different, here is a solution for your second problem:
```
for i in xrange(len(A)):
for j in xrange(len(A)):
if i != j:
do_stuff(A[i], A[j])
```
or **using** `itertools` (I think using the included **batteries** is very pythonic!):
```
import itertools
for a, b in itertools.permutations(A, 2):
do_stuff(a, b)
```
This applies `do_stuff` to all ordered pairs of two different elements from A. If you want to store the results, just use:
```
[do_stuff(a, b) for a, b in itertools.permutations(A, 2)]
```
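As a quick added check (an illustrative sketch, not part of the original answer), the index-based double loop and `itertools.permutations` visit exactly the same ordered pairs:

```python
import itertools

A = ['a', 'b', 'c']
# build the pairs the nested index loop would visit
index_pairs = [(A[i], A[j]) for i in range(len(A))
               for j in range(len(A)) if i != j]
# permutations(A, 2) yields the same ordered pairs
assert index_pairs == list(itertools.permutations(A, 2))
print(index_pairs)
```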
|
In the first for-loop, **enumerate()** walks through the array and makes the index and value of each element available to the second for-loop. In the second loop, **range()** makes j run from i+1 up to len(a) - 1. At this point you'd have exactly what you need, `i` & `j`, to do your operation.
```
>>> a = [1,2,3,4]
>>> array_len = len(a)
>>> for i,v in enumerate(a):
... for j in range(i+1, array_len):
... print a[i], a[j]
...
1 2
1 3
1 4
2 3
2 4
3 4
>>>
```
|
9,787,741
|
What is the equivalent of the following in Python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
Since your two questions are different, here is a solution for your second problem:
```
for i in xrange(len(A)):
for j in xrange(len(A)):
if i != j:
do_stuff(A[i], A[j])
```
or **using** `itertools` (I think using the included **batteries** is very pythonic!):
```
import itertools
for a, b in itertools.permutations(A, 2):
do_stuff(a, b)
```
This applies `do_stuff` to all ordered pairs of two different elements from A. If you want to store the results, just use:
```
[do_stuff(a, b) for a, b in itertools.permutations(A, 2)]
```
|
You can use `xrange` to generate values for i and j respectively, as shown below:
```
for i in xrange(0, n):
for j in xrange(i + 1, n):
# do stuff
```
|
9,787,741
|
What is the equivalent of the following in Python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
I interpret what you're asking as
>
> How can I iterate over all pairs of distinct elements of a container?
>
>
>
Answer:
```
>>> x = {1,2,3}
>>> import itertools
>>> for a, b in itertools.permutations(x, 2):
... print a, b
...
1 2
1 3
2 1
2 3
3 1
3 2
```
EDIT: If you don't want both `(a,b)` and `(b,a)`, just use `itertools.combinations` instead.
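For comparison, a short added sketch (not part of the original answer) of what `itertools.combinations` produces; the set is sorted first so the output order is deterministic:

```python
import itertools

x = {1, 2, 3}
# combinations yields each unordered pair exactly once
for a, b in itertools.combinations(sorted(x), 2):
    print(a, b)
# 1 2
# 1 3
# 2 3
```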
|
Still can't leave comments, but this is basically what the other two posts said. Also, get in the habit of using `xrange` instead of `range` in Python 2:
```
for i in xrange(0,n):
for j in xrange(i+1,n):
# do stuff
```
|
9,787,741
|
What is the equivalent of the following in Python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
Since your two questions are different, here is a solution for your second problem:
```
for i in xrange(len(A)):
for j in xrange(len(A)):
if i != j:
do_stuff(A[i], A[j])
```
or **using** `itertools` (I think using the included **batteries** is very pythonic!):
```
import itertools
for a, b in itertools.permutations(A, 2):
do_stuff(a, b)
```
This applies `do_stuff` to all ordered pairs of two different elements from A. If you want to store the results, just use:
```
[do_stuff(a, b) for a, b in itertools.permutations(A, 2)]
```
|
Another way to approach this is: if **n** is a sequence that provides the iterable interface, then in Python you can simplify your code by iterating over the object directly:
```
for i in n:
    # note: index() finds the first occurrence, so this
    # assumes the elements of n are unique
    for some_var in n[n.index(i) + 1:]:  # rest of the items
        # do something
        pass
```
I hope I understood your loop correctly because, as others have stated, the two snippets don't do the same thing.
|
9,787,741
|
What is the equivalent of the following in Python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
I interpret what you're asking as
>
> How can I iterate over all pairs of distinct elements of a container?
>
>
>
Answer:
```
>>> x = {1,2,3}
>>> import itertools
>>> for a, b in itertools.permutations(x, 2):
... print a, b
...
1 2
1 3
2 1
2 3
3 1
3 2
```
EDIT: If you don't want both `(a,b)` and `(b,a)`, just use `itertools.combinations` instead.
|
You could make the inner loop iterate directly over a slice. Not saying this is any better, but it is another approach.
```
for i in range(0,len(x)):
a = x[i]
for b in x[i+1:]:
print a, b
```
|
9,787,741
|
What is the equivalent of the following in Python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
Since your two questions are different, here is a solution for your second problem:
```
for i in xrange(len(A)):
for j in xrange(len(A)):
if i != j:
do_stuff(A[i], A[j])
```
or **using** `itertools` (I think using the included **batteries** is very pythonic!):
```
import itertools
for a, b in itertools.permutations(A, 2):
do_stuff(a, b)
```
This applies `do_stuff` to all ordered pairs of two different elements from A. If you want to store the results, just use:
```
[do_stuff(a, b) for a, b in itertools.permutations(A, 2)]
```
|
Still can't leave comments, but this is basically what the other two posts said. Also, get in the habit of using `xrange` instead of `range` in Python 2:
```
for i in xrange(0,n):
for j in xrange(i+1,n):
# do stuff
```
|
9,787,741
|
What is the equivalent of the following in Python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
Since your two questions are different, here is a solution for your second problem:
```
for i in xrange(len(A)):
for j in xrange(len(A)):
if i != j:
do_stuff(A[i], A[j])
```
or **using** `itertools` (I think using the included **batteries** is very pythonic!):
```
import itertools
for a, b in itertools.permutations(A, 2):
do_stuff(a, b)
```
This applies `do_stuff` to all ordered pairs of two different elements from A. If you want to store the results, just use:
```
[do_stuff(a, b) for a, b in itertools.permutations(A, 2)]
```
|
```
for i in range(0,n):
for j in range(i+1,n):
# do stuff
```
|
9,787,741
|
What is the equivalent of the following in Python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
Since your two questions are different, here is a solution for your second problem:
```
for i in xrange(len(A)):
for j in xrange(len(A)):
if i != j:
do_stuff(A[i], A[j])
```
or **using** `itertools` (I think using the included **batteries** is very pythonic!):
```
import itertools
for a, b in itertools.permutations(A, 2):
do_stuff(a, b)
```
This applies `do_stuff` to all ordered pairs of two different elements from A. If you want to store the results, just use:
```
[do_stuff(a, b) for a, b in itertools.permutations(A, 2)]
```
|
For the first one of your questions, as already mentioned in other answers:
```
for i in xrange(n):
for j in xrange(i+1, n):
# do stuff with A[i] and A[j]
```
For the second one:
```
for i, a in enumerate(A):
for b in A[i+1:]:
# do stuff with a and b
```
|
9,787,741
|
What is the equivalent of the following in Python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
Since your two questions are different, here is a solution for your second problem:
```
for i in xrange(len(A)):
for j in xrange(len(A)):
if i != j:
do_stuff(A[i], A[j])
```
or **using** `itertools` (I think using the included **batteries** is very pythonic!):
```
import itertools
for a, b in itertools.permutations(A, 2):
do_stuff(a, b)
```
This applies `do_stuff` to all ordered pairs of two different elements from A. If you want to store the results, just use:
```
[do_stuff(a, b) for a, b in itertools.permutations(A, 2)]
```
|
You could make the inner loop iterate directly over a slice. Not saying this is any better, but it is another approach.
```
for i in range(0,len(x)):
a = x[i]
for b in x[i+1:]:
print a, b
```
|
9,787,741
|
What is the equivalent of the following in Python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
I interpret what you're asking as
>
> How can I iterate over all pairs of distinct elements of a container?
>
>
>
Answer:
```
>>> x = {1,2,3}
>>> import itertools
>>> for a, b in itertools.permutations(x, 2):
... print a, b
...
1 2
1 3
2 1
2 3
3 1
3 2
```
EDIT: If you don't want both `(a,b)` and `(b,a)`, just use `itertools.combinations` instead.
|
In the first for-loop, **enumerate()** walks through the array and makes the index and value of each element available to the second for-loop. In the second loop, **range()** makes j run from i+1 up to len(a) - 1. At this point you'd have exactly what you need, `i` & `j`, to do your operation.
```
>>> a = [1,2,3,4]
>>> array_len = len(a)
>>> for i,v in enumerate(a):
... for j in range(i+1, array_len):
... print a[i], a[j]
...
1 2
1 3
1 4
2 3
2 4
3 4
>>>
```
|
9,403,415
|
I'm using the great [quantities](http://pypi.python.org/pypi/quantities) package for Python. I would like to know how I can get at just the numerical value of the quantity, without the unit.
I.e., if I have
```
E = 5.3*quantities.joule
```
I would like to get at just the 5.3. I know I can simply divide by the "undesired" unit, but I was hoping there was a better way to do this.
|
2012/02/22
|
[
"https://Stackoverflow.com/questions/9403415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633318/"
] |
`E.item()` seems to be what you want, if you want a Python float. `E.magnitude`, offered by tzaman, is a 0-dimensional NumPy array with the value, if you'd prefer that.
The documentation for `quantities` doesn't seem to have a very good API reference.
|
I believe `E.magnitude` gets you what you want.
|
9,403,415
|
I'm using the great [quantities](http://pypi.python.org/pypi/quantities) package for Python. I would like to know how I can get at just the numerical value of the quantity, without the unit.
I.e., if I have
```
E = 5.3*quantities.joule
```
I would like to get at just the 5.3. I know I can simply divide by the "undesired" unit, but I was hoping there was a better way to do this.
|
2012/02/22
|
[
"https://Stackoverflow.com/questions/9403415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633318/"
] |
`E.item()` seems to be what you want, if you want a Python float. `E.magnitude`, offered by tzaman, is a 0-dimensional NumPy array with the value, if you'd prefer that.
The documentation for `quantities` doesn't seem to have a very good API reference.
|
```
>>> import quantities
>>> E=5.3*quantities.joule
>>> E.item()
5.3
```
|
7,758,913
|
How can I implement graph colouring in Python using an adjacency matrix? Is it possible? I implemented it using lists, but that has some problems. I want to implement it using a matrix. Can anybody give me an answer or suggestions for this?
|
2011/10/13
|
[
"https://Stackoverflow.com/questions/7758913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/992874/"
] |
Is it possible? Yes, of course. But are your problems with making Graphs, or coding algorithms that deal with them?
Separating the algorithm from the data type might make it easier for you. Here are a couple suggestions:
* create (or use) an abstract data type Graph
* code the coloring algorithm against the Graph interface
* then, vary the Graph implementation between list and matrix forms
If you just want to use Graphs, and don't need to implement them yourself, a quick Google search turned up this [python graph](http://code.google.com/p/python-graph/) library.
|
Implementing this with an adjacency matrix is somewhat easier than using lists, as lists take more time and space. igraph has a quick method `neighbors` which can be used. However, with the adjacency matrix alone, we can come up with our own graph-colouring version, which may not result in using the minimum chromatic number. A quick strategy may be as follows:
Initialize: put one distinct colour on the nodes of each row (where a 1 appears).
Start: with the highest-degree node (HDN) row as a reference, compare each row (meaning each node) with the HDN and see if it is also its neighbour by detecting a 1. If yes, then change that node's colour. Proceed like this to fine-tune. An O(N^2) approach! Hope this helps.
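A minimal sketch of greedy colouring over an adjacency matrix (an illustrative assumption, not the exact strategy above, and it will not always achieve the chromatic number):

```python
def greedy_coloring(adj):
    """Give each node the smallest colour not used by an
    already-coloured neighbour; adj is an n x n 0/1 matrix."""
    n = len(adj)
    color = [None] * n
    for v in range(n):
        # colours already used by v's coloured neighbours
        taken = {color[u] for u in range(n)
                 if adj[v][u] and color[u] is not None}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# a triangle (0-1-2) plus a pendant vertex 3 attached to 2
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
print(greedy_coloring(adj))  # [0, 1, 2, 0]
```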
|
34,567,484
|
I have a list that has several days in it. Each day has several timestamps. What I want to do is to make a new list that only takes the start time and the end time for each date in the list.
I also want to delete the character between the date and the time in each one; the char is always the same type of letter.
The timestamps can vary in how many there are on each date.
Since I'm new to Python, simple-to-understand code would be preferred. I've been using a lot of regex, so please use it if there is a way with this one.
The list has been sorted with the `list.sort()` command, so it's in the correct order.
The code used to extract the information was the following:
```
import re

list1 = []
file1 = open("test.txt", "r")
for f in file1:
    list1 += re.findall(r'20\d\d-\d\d-\d\dA\d\d:\d\d', f)
listX = len(list1)
list2 = list1[0:listX - 2]
list2.sort()
```
here is a list of how it looks:
```
2015-12-28A09:30
2015-12-28A09:30
2015-12-28A09:35
2015-12-28A09:35
2015-12-28A12:00
2015-12-28A12:00
2015-12-28A12:15
2015-12-28A12:15
2015-12-28A14:30
2015-12-28A14:30
2015-12-28A15:15
2015-12-28A15:15
2015-12-28A16:45
2015-12-28A16:45
2015-12-28A17:00
2015-12-28A17:00
2015-12-28A18:15
2015-12-28A18:15
2015-12-29A08:30
2015-12-29A08:30
2015-12-29A08:35
2015-12-29A08:35
2015-12-29A10:45
2015-12-29A10:45
2015-12-29A11:00
2015-12-29A11:00
2015-12-29A13:15
2015-12-29A13:15
2015-12-29A14:00
2015-12-29A14:00
2015-12-29A15:30
2015-12-29A15:30
2015-12-29A15:45
2015-12-29A15:45
2015-12-29A17:15
2015-12-29A17:15
2015-12-30A08:30
2015-12-30A08:30
2015-12-30A08:35
2015-12-30A08:35
2015-12-30A10:45
2015-12-30A10:45
2015-12-30A11:00
2015-12-30A11:00
2015-12-30A13:00
2015-12-30A13:00
2015-12-30A13:45
2015-12-30A13:45
2015-12-30A15:15
2015-12-30A15:15
2015-12-30A15:30
2015-12-30A15:30
2015-12-30A17:15
2015-12-30A17:15
```
And this is how I want it to look like:
```
2015-12-28 09:30
2015-12-28 18:15
2015-12-29 08:30
2015-12-29 17:15
2015-12-30 08:30
2015-12-30 17:15
```
|
2016/01/02
|
[
"https://Stackoverflow.com/questions/34567484",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5738256/"
] |
First of all, you should convert all your strings into proper dates, Python can work with. That way, you have a lot more control on it, also to change the formatting later. So let’s parse your dates using [`datetime.strptime`](https://docs.python.org/3/library/datetime.html#datetime.datetime.strptime) in `list2`:
```
from datetime import datetime
dates = [datetime.strptime(item, '%Y-%m-%dA%H:%M') for item in list2]
```
This creates a new list `dates` that contains all your dates from `list2` but as parsed `datetime` object.
Now, since you want to get the first and the last date of each day, we somehow have to group your dates by the date component. There are various ways to do that. I’ll be using [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) for it, with a key function that just looks at the date component of each entry:
```
from itertools import groupby
for day, times in groupby(dates, lambda x: x.date()):
first, *mid, last = times
print(first)
print(last)
```
If we run this, we already get your output (without date formatting):
```
2015-12-28 09:30:00
2015-12-28 18:15:00
2015-12-29 08:30:00
2015-12-29 17:15:00
2015-12-30 08:30:00
2015-12-30 17:15:00
```
Of course, you can also collect that first and last date in a list first to process the dates later:
```
filteredDates = []
for day, times in groupby(dates, lambda x: x.date()):
first, *mid, last = times
filteredDates.append(first)
filteredDates.append(last)
```
And you can also output your dates with a different format using [`datetime.strftime`](https://docs.python.org/3/library/datetime.html#datetime.datetime.strftime):
```
for date in filteredDates:
print(date.strftime('%Y-%m-%d %H:%M'))
```
That would give us the following output:
```
2015-12-28 09:30
2015-12-28 18:15
2015-12-29 08:30
2015-12-29 17:15
2015-12-30 08:30
2015-12-30 17:15
```
---
If you don’t want to go the route through parsing those dates, of course you could also do this simply by working on the strings. Since they are nicely formatted (i.e. they can be easily compared), you can do that as well. It would look like this then:
```
for day, times in groupby(list2, lambda x: x[:10]):
first, *mid, last = times
print(first)
print(last)
```
Producing the following output:
```
2015-12-28A09:30
2015-12-28A18:15
2015-12-29A08:30
2015-12-29A17:15
2015-12-30A08:30
2015-12-30A17:15
```
|
Because your data is ordered, you just need to pull the first and last value from each group. You can use `str.replace` to replace the single letter with a space, then split each date string, comparing only the dates:
```
from re import sub
def grp(l):
it = iter(l)
prev = start = next(it).replace("A"," ")
for dte in it:
dte = dte.replace("A"," ")
# if we have a new date, yield that start and end
if dte.split(None, 1)[0] != prev.split(None,1)[0]:
yield start
yield prev
start = dte
prev = dte
yield start, prev
l=["2015-12-28A09:30", "2015-12-28A09:30", .....................
l[:] = grp(l)
```
This could also certainly be done as you process the file, without sorting, by using a dict to group:
```
from re import findall
from collections import defaultdict
with open("dates.txt") as f:
od = defaultdict(lambda: {"min": "null", "max": ""})
for line in f:
for dte in findall('20\d\d-\d\d-\d\dA\d\d\:\d\d', line):
dte, tme = dte.split("A")
_dte = "{} {}".format(dte, tme)
if od[dte]["min"] > _dte:
od[dte]["min"] = _dte
if od[dte]["max"] < _dte:
od[dte]["max"] = _dte
print(list(od.values()))
```
Which will give you the start and end time for each date.
```
[{'min': '2016-01-03 23:59', 'max': '2016-01-03 23:59'},
{'min': '2015-12-28 00:00', 'max': '2015-12-28 18:15'},
{'min': '2015-12-30 08:30', 'max': '2015-12-30 17:15'},
{'min': '2015-12-29 08:30', 'max': '2015-12-29 17:15'},
{'min': '2015-12-15 08:41', 'max': '2015-12-15 08:41'}]
```
The start for `2015-12-28` is also `00:00`, not `9:30`.
If your dates are actually one per line as posted, you don't need a regex either:
```
from collections import defaultdict
with open("dates.txt") as f:
od = defaultdict(lambda: {"min": "null", "max": ""})
for line in f:
dte, tme = line.rstrip().split("A")
_dte = "{} {}".format(dte, tme)
if od[dte]["min"] > _dte:
od[dte]["min"] = _dte
if od[dte]["max"] < _dte:
od[dte]["max"] = _dte
print(list(od.values()))
```
Which would give you the same output.
|
24,557,707
|
I have an Echoprint local webserver (uses tokyotyrant, python, solr) set up on a Linux virtual machine.
I can access it through the browser or curl in the virtual machine using http://localhost:8080, and on the host machine (couldn't find out how to say it better) I use the IP of the virtual machine, also with port 8080.
However, when I try to access it through my android on the same wifi I get a connection refused error.
|
2014/07/03
|
[
"https://Stackoverflow.com/questions/24557707",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2996499/"
] |
Took a while to figure it out, but here's the working code.
```
using System;
using System.Runtime.InteropServices;
using System.Text;
using System.IO;
using System.Threading;

namespace Foreground {
    class GetForegroundWindowTest {
        /// Foreground dll's
        [DllImport("user32.dll", CharSet=CharSet.Auto, ExactSpelling=true)]
        public static extern IntPtr GetForegroundWindow();

        [DllImport("user32.dll", CharSet=CharSet.Unicode, SetLastError=true)]
        public static extern int GetWindowText(IntPtr hWnd, StringBuilder lpString, int nMaxCount);

        [DllImport("kernel32.dll")]
        public static extern bool FreeConsole();

        /// Console hide dll's
        [DllImport("kernel32.dll")]
        static extern IntPtr GetConsoleWindow();

        [DllImport("user32.dll")]
        static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);

        const int SW_HIDE = 0;

        public static void Main(string[] args){
            while (true){
                IntPtr fg = GetForegroundWindow(); // use fg for some purpose
                var bufferSize = 1000;
                var sb = new StringBuilder(bufferSize);
                GetWindowText(fg, sb, bufferSize);
                using (StreamWriter sw = File.AppendText("C:\\Office Viewer\\OV_Log.txt"))
                {
                    sw.WriteLine(DateTime.Now.ToString("yyyy-MM-dd_HH:mm:ss,") + sb.ToString());
                }
                var handle = GetConsoleWindow();
                Console.WriteLine(handle);
                ShowWindow(handle, SW_HIDE);
                Thread.Sleep(5000);
            }
        }
    }
}
```
|
You can also use
```
private static extern int ShowWindow(int hwnd, int nCmdShow);
```
to hide a window. This method takes the integer handle of the window (instead of a pointer). Using **[Spy++](https://msdn.microsoft.com/en-us/library/dd460729.aspx)** (in Visual Studio tools) you can get the **Class Name** and **Window Name** of the window you want to hide. Then you can do as follows:
```
[DllImport("user32.dll")]
public static extern int FindWindow(string lpClassName, string lpWindowName);

[DllImport("user32.dll")]
private static extern int ShowWindow(int hwnd, int nCmdShow);

const int SW_HIDE = 0;

public void hideScannerDialog()
{
    // retrieve the handle of the window
    int iHandle = FindWindow("ClassName", "WindowName"); // The ClassName & WindowName I got using Spy++
    if (iHandle > 0)
    {
        // Hide the window using the API
        ShowWindow(iHandle, SW_HIDE);
    }
}
```
|
66,102,225
|
I'm using `jwilder/nginx-proxy` and `jrcs/letsencrypt-nginx-proxy-companion` images to create the ssl certificates automatically. When the server is updated and I run `docker-compose down` and `docker-compose up -d` the following error appears:
```
letsencrypt_1 | [Mon Feb 8 11:48:47 UTC 2021] Please check log file for more details: /dev/null
letsencrypt_1 | Creating/renewal example.com certificates... (example.com www.example.com)
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] Using CA: https://acme-v02.api.letsencrypt.org/directory
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] Creating domain key
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] The domain key is here: /etc/acme.sh/email@gmail.com/example.com/example.com.key
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] Multi domain='DNS:example.com,DNS:www.example.com'
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] Getting domain auth token for each domain
letsencrypt_1 | [Mon Feb 8 11:48:49 UTC 2021] Create new order error. Le_OrderFinalize not found. {
letsencrypt_1 | "type": "urn:ietf:params:acme:error:rateLimited",
letsencrypt_1 | "detail": "Error creating new order :: too many certificates already issued for exact set of domains: example.com,www.example.com: see https://letsencrypt.org/docs/rate-limits/",
letsencrypt_1 | "status": 429
```
I understand that Let's Encrypt allows only a limited number of certificates for the same set of domains per week.
Every time I do a `docker-compose down` and `docker-compose up -d`, one of those issuances is used to generate a new certificate. Now I have reached the limit and can't use the service.
1. **How to avoid certificates generating if is not necessary?**
2. **Is there a way to reset the counter for this week to keep using the site?**
My `docker-compose.yml`
```
version: "3"

services:
  db:
    image: postgres:12
    restart: unless-stopped
    env_file: ./.env
    volumes:
      - postgres_data:/var/lib/postgresql/data
  web:
    build:
      context: .
    restart: unless-stopped
    env_file: ./.env
    command: python manage.py runserver 0.0.0.0:80
    volumes:
      - static:/code/static/
      - .:/code
    #ports:
    #  - "8000:8000"
    depends_on:
      - db
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs:ro
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
  nginx:
    image: nginx:1.19
    restart: always
    expose:
      - "80"
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static:/code/static
      - ./../ecoplatonica:/usr/share/nginx/html:ro
    env_file: ./.env
    depends_on:
      - web
      - nginx-proxy
      - letsencrypt

volumes:
  .:
  postgres_data:
  static:
  certs:
  html:
  vhostd:
```
|
2021/02/08
|
[
"https://Stackoverflow.com/questions/66102225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10279746/"
] |
I had this problem and finally got it figured out.
You need to add a volume to the `nginx-proxy:` and `letsencrypt:` services' `volumes:` sections - something like this:
```
volumes:
  - /var/run/docker.sock:/tmp/docker.sock:ro
  - certs:/etc/nginx/certs:ro
  - vhostd:/etc/nginx/vhost.d
  - html:/usr/share/nginx/html
  - acme:/etc/acme.sh
```
and then at the end of the `docker-compose.yml` file, I added:
```
volumes:
  .:
  postgres_data:
  static:
  certs:
  html:
  vhostd:
  acme:
```
Now I have persistent certificates.
|
You need to mount an `acme:/etc/acme.sh` volume for the `letsencrypt` companion container, because that folder is recreated each time you do `docker-compose down`/`up`. Plus, add `acme:` to the top-level `volumes:` section.
This entry from your log file proves it:
```
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] The domain key is here: /etc/acme.sh/email@gmail.com/example.com/example.com.key
```
Also, take a look at this [doc](https://github.com/nginx-proxy/acme-companion/blob/main/docs/Container-configuration.md)
|
26,061,610
|
I am using `python version 2.7` and `pip version is 1.5.6`.
I want to install extra libraries from a URL, like a git repo, while `setup.py` is being installed.
I was putting the extras in the `install_requires` parameter in `setup.py`. This means my library requires extra libraries, and they must also be installed.
```
...
install_requires=[
    "Django",
    ....
],
...
```
But URLs like git repos are not valid strings in `install_requires` in `setup.py`. Assume that I want to install a library from GitHub. I searched about this issue and found that I can put such libraries in `dependency_links` in `setup.py`. But that still doesn't work. Here is my dependency links definition:
```
dependency_links=[
    "https://github.com/.../tarball/master/#egg=1.0.0",
    "https://github.com/.../tarball/master#egg=0.9.3",
],
```
The links are valid; I can download them from an internet browser with these URLs. These extra libraries are still not installed with my setup. I also tried the `--process-dependency-links` parameter to force pip, but the result is the same, and I get no error when pipping.
After installation, I see none of the libraries from `dependency_links` in the `pip freeze` output.
How can I make them to be downloaded with my `setup.py` installation?
Edited:
=======
Here is my complete `setup.py`
```
from setuptools import setup

try:
    long_description = open('README.md').read()
except IOError:
    long_description = ''

setup(
    name='esef-sso',
    version='1.0.0.0',
    description='',
    url='https://github.com/egemsoft/esef-sso.git',
    keywords=["django", "egemsoft", "sso", "esefsso"],
    install_requires=[
        "Django",
        "webservices",
        "requests",
        "esef-auth==1.0.0.0",
        "django-simple-sso==0.9.3"
    ],
    dependency_links=[
        "https://github.com/egemsoft/esef-auth/tarball/master/#egg=1.0.0.0",
        "https://github.com/egemsoft/django-simple-sso/tarball/master#egg=0.9.3",
    ],
    packages=[
        'esef_sso_client',
        'esef_sso_client.models',
        'esef_sso_server',
        'esef_sso_server.models',
    ],
    include_package_data=True,
    zip_safe=False,
    platforms=['any'],
)
```
Edited 2:
=========
Here is pip log;
```
Downloading/unpacking esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/
URLs to search for versions for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0):
* https://pypi.python.org/simple/esef-auth/1.0.0.0
* https://pypi.python.org/simple/esef-auth/
Getting page https://pypi.python.org/simple/esef-auth/1.0.0.0
Could not fetch URL https://pypi.python.org/simple/esef-auth/1.0.0.0: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/1.0.0.0 when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Could not find any downloads that satisfy the requirement esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Cleaning up...
Removing temporary dir /Users/ahmetdal/.virtualenvs/esef-sso-example/build...
No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Exception information:
Traceback (most recent call last):
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/req.py", line 1177, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
```
It seems it does not use the sources in `dependency_links`.
|
2014/09/26
|
[
"https://Stackoverflow.com/questions/26061610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029816/"
] |
You need to make sure you include the dependency in your `install_requires` too.
Here's an example `setup.py`
```
#!/usr/bin/env python
from setuptools import setup

setup(
    name='foo',
    version='0.0.1',
    install_requires=[
        'balog==0.0.7'
    ],
    dependency_links=[
        'https://github.com/balanced/balog/tarball/master#egg=balog-0.0.7'
    ]
)
```
Here's the issue with your example `setup.py`:
You're missing the egg name in the dependency links you set up.
You have
`https://github.com/egemsoft/esef-auth/tarball/master/#egg=1.0.0.0`
You need
`https://github.com/egemsoft/esef-auth/tarball/master/#egg=esef-auth-1.0.0.0`
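As an illustrative aside (not part of the original answer), the `#egg=` fragment setuptools looks for is just `name-version` in the URL fragment; a minimal sketch of pulling it apart:

```python
from urllib.parse import urlparse

# the corrected dependency link from above
url = "https://github.com/egemsoft/esef-auth/tarball/master/#egg=esef-auth-1.0.0.0"

egg = urlparse(url).fragment.split("=", 1)[1]  # "esef-auth-1.0.0.0"
name, _, version = egg.rpartition("-")         # split on the last "-"
print(name, version)  # esef-auth 1.0.0.0
```

This is why `#egg=1.0.0.0` alone doesn't work: without the package name in front, the fragment can't be matched against the `install_requires` entry.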
|
Pip removed support for dependency\_links a while back. The [latest version of pip that supports dependency\_links is 1.3.1](https://pip.pypa.io/en/latest/news.html), to install it
```
pip install pip==1.3.1
```
Your dependency links should work at that point. Please note that dependency\_links were always a last resort for pip, i.e. if a package with the same name exists on PyPI, it will be chosen over yours.
Note: <https://github.com/pypa/pip/pull/1955> seems to re-allow dependency\_links; pip kept them, but you might need some command-line switches to use them with a newer version of pip.
**EDIT**: As of pip 7, dep links were rethought and re-enabled; even though the deprecation notice hasn't been removed, from the discussions they seem to be here to stay. With pip>=7, here is how you can install things:
```
pip install -e . --process-dependency-links --allow-all-external
```
Or add the following to a pip.conf, e.g. `/etc/pip.conf`
```
[install]
process-dependency-links = yes
allow-all-external = yes
trusted-host =
    bitbucket.org
    github.com
```
**EDIT**
A trick I have learnt is to bump up the version number to something really high to make sure that pip doesn't prefer the non dependency link version (if that is something you want). From the example above, make the dependency link look like:
```
"https://github.com/egemsoft/django-simple-sso/tarball/master#egg=999.0.0",
```
Also make sure the version either looks like the example or is a date version; any other versioning will make pip think it's a dev version and it won't install it.
|
26,061,610
|
I am using `python version 2.7` and `pip version is 1.5.6`.
I want to install extra libraries from a URL, like a git repo, while `setup.py` is being installed.
I was putting the extras in the `install_requires` parameter in `setup.py`. This means my library requires extra libraries, and they must also be installed.
```
...
install_requires=[
    "Django",
    ....
],
...
```
But URLs like git repos are not valid strings in `install_requires` in `setup.py`. Assume that I want to install a library from GitHub. I searched about this issue and found that I can put such libraries in `dependency_links` in `setup.py`. But that still doesn't work. Here is my dependency links definition:
```
dependency_links=[
    "https://github.com/.../tarball/master/#egg=1.0.0",
    "https://github.com/.../tarball/master#egg=0.9.3",
],
```
The links are valid; I can download them from an internet browser with these URLs. These extra libraries are still not installed with my setup. I also tried the `--process-dependency-links` parameter to force pip, but the result is the same, and I get no error when pipping.
After installation, I see none of the libraries from `dependency_links` in the `pip freeze` output.
How can I make them to be downloaded with my `setup.py` installation?
Edited:
=======
Here is my complete `setup.py`
```
from setuptools import setup

try:
    long_description = open('README.md').read()
except IOError:
    long_description = ''

setup(
    name='esef-sso',
    version='1.0.0.0',
    description='',
    url='https://github.com/egemsoft/esef-sso.git',
    keywords=["django", "egemsoft", "sso", "esefsso"],
    install_requires=[
        "Django",
        "webservices",
        "requests",
        "esef-auth==1.0.0.0",
        "django-simple-sso==0.9.3"
    ],
    dependency_links=[
        "https://github.com/egemsoft/esef-auth/tarball/master/#egg=1.0.0.0",
        "https://github.com/egemsoft/django-simple-sso/tarball/master#egg=0.9.3",
    ],
    packages=[
        'esef_sso_client',
        'esef_sso_client.models',
        'esef_sso_server',
        'esef_sso_server.models',
    ],
    include_package_data=True,
    zip_safe=False,
    platforms=['any'],
)
```
Edited 2:
=========
Here is pip log;
```
Downloading/unpacking esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/
URLs to search for versions for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0):
* https://pypi.python.org/simple/esef-auth/1.0.0.0
* https://pypi.python.org/simple/esef-auth/
Getting page https://pypi.python.org/simple/esef-auth/1.0.0.0
Could not fetch URL https://pypi.python.org/simple/esef-auth/1.0.0.0: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/1.0.0.0 when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Could not find any downloads that satisfy the requirement esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Cleaning up...
Removing temporary dir /Users/ahmetdal/.virtualenvs/esef-sso-example/build...
No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Exception information:
Traceback (most recent call last):
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/req.py", line 1177, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
```
It seems it does not use the sources in `dependency_links`.
|
2014/09/26
|
[
"https://Stackoverflow.com/questions/26061610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029816/"
] |
You need to make sure you include the dependency in your `install_requires` too.
Here's an example `setup.py`
```
#!/usr/bin/env python
from setuptools import setup

setup(
    name='foo',
    version='0.0.1',
    install_requires=[
        'balog==0.0.7'
    ],
    dependency_links=[
        'https://github.com/balanced/balog/tarball/master#egg=balog-0.0.7'
    ]
)
```
Here's the issue with your example `setup.py`:
You're missing the egg name in the dependency links you set up.
You have
`https://github.com/egemsoft/esef-auth/tarball/master/#egg=1.0.0.0`
You need
`https://github.com/egemsoft/esef-auth/tarball/master/#egg=esef-auth-1.0.0.0`
|
I faced a similar situation where I wanted to use Shapely as one of my package dependencies. Shapely, however, has a caveat: if you are using Windows, you have to use the .whl file from <http://www.lfd.uci.edu/~gohlke/pythonlibs/>. Otherwise, you have to install a C compiler, which is something I don't want. I want the user to simply use `pip install mypackage` instead of installing a bunch of other stuff.
And if you have the typical setup with `dependency_links`
```
setup(
    name = 'streettraffic',
    packages = find_packages(),  # this must be the same as the name above
    version = '0.1',
    description = 'A random test lib',
    author = 'Costa Huang',
    author_email = 'Costa.Huang@outlook.com',
    install_requires=['Shapely==1.5.17'],
    dependency_links = ['http://www.lfd.uci.edu/~gohlke/pythonlibs/ru4fxw3r/Shapely-1.5.17-cp36-cp36m-win_amd64.whl']
)
```
and run `pip install .`, it is simply going to pick the Shapely on PyPI and cause trouble on the Windows installation. After hours of researching, I found this link [Force setuptools to use dependency\_links to install mysqlclient](https://stackoverflow.com/questions/36755969/force-setuptools-to-use-dependency-links-to-install-mysqlclient) and basically used `from setuptools.command.install import install as _install` to manually install Shapely.
```
from setuptools.command.install import install as _install
from setuptools import setup, find_packages
import pip

class install(_install):
    def run(self):
        _install.do_egg_install(self)
        # just go ahead and do it
        pip.main(['install', 'http://localhost:81/Shapely-1.5.17-cp36-cp36m-win_amd64.whl'])

setup(
    name = 'mypackage',
    packages = find_packages(),  # this must be the same as the name above
    version = '0.1',
    description = 'A random test lib',
    author = 'Costa Huang',
    author_email = 'test@outlook.com',
    cmdclass={'install': install}
)
```
And the script works out nicely. Hope it helps.
|
26,061,610
|
I am using `python version 2.7` and `pip version is 1.5.6`.
I want to install extra libraries from a URL, like a git repo, while `setup.py` is being installed.
I was putting the extras in the `install_requires` parameter in `setup.py`. This means my library requires extra libraries, and they must also be installed.
```
...
install_requires=[
    "Django",
    ....
],
...
```
But URLs like git repos are not valid strings in `install_requires` in `setup.py`. Assume that I want to install a library from GitHub. I searched about this issue and found that I can put such libraries in `dependency_links` in `setup.py`. But that still doesn't work. Here is my dependency links definition:
```
dependency_links=[
    "https://github.com/.../tarball/master/#egg=1.0.0",
    "https://github.com/.../tarball/master#egg=0.9.3",
],
```
The links are valid; I can download them from an internet browser with these URLs. These extra libraries are still not installed with my setup. I also tried the `--process-dependency-links` parameter to force pip, but the result is the same, and I get no error when pipping.
After installation, I see none of the libraries from `dependency_links` in the `pip freeze` output.
How can I make them to be downloaded with my `setup.py` installation?
Edited:
=======
Here is my complete `setup.py`
```
from setuptools import setup

try:
    long_description = open('README.md').read()
except IOError:
    long_description = ''

setup(
    name='esef-sso',
    version='1.0.0.0',
    description='',
    url='https://github.com/egemsoft/esef-sso.git',
    keywords=["django", "egemsoft", "sso", "esefsso"],
    install_requires=[
        "Django",
        "webservices",
        "requests",
        "esef-auth==1.0.0.0",
        "django-simple-sso==0.9.3"
    ],
    dependency_links=[
        "https://github.com/egemsoft/esef-auth/tarball/master/#egg=1.0.0.0",
        "https://github.com/egemsoft/django-simple-sso/tarball/master#egg=0.9.3",
    ],
    packages=[
        'esef_sso_client',
        'esef_sso_client.models',
        'esef_sso_server',
        'esef_sso_server.models',
    ],
    include_package_data=True,
    zip_safe=False,
    platforms=['any'],
)
```
Edited 2:
=========
Here is pip log;
```
Downloading/unpacking esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/
URLs to search for versions for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0):
* https://pypi.python.org/simple/esef-auth/1.0.0.0
* https://pypi.python.org/simple/esef-auth/
Getting page https://pypi.python.org/simple/esef-auth/1.0.0.0
Could not fetch URL https://pypi.python.org/simple/esef-auth/1.0.0.0: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/1.0.0.0 when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Could not find any downloads that satisfy the requirement esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Cleaning up...
Removing temporary dir /Users/ahmetdal/.virtualenvs/esef-sso-example/build...
No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Exception information:
Traceback (most recent call last):
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/req.py", line 1177, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
```
It seems it does not use the sources in `dependency_links`.
|
2014/09/26
|
[
"https://Stackoverflow.com/questions/26061610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029816/"
] |
Pip removed support for dependency\_links a while back. The [latest version of pip that supports dependency\_links is 1.3.1](https://pip.pypa.io/en/latest/news.html), to install it
```
pip install pip==1.3.1
```
Your dependency links should work at that point. Please note that dependency\_links were always a last resort for pip, i.e. if a package with the same name exists on PyPI, it will be chosen over yours.
Note: <https://github.com/pypa/pip/pull/1955> seems to re-allow dependency\_links; pip kept them, but you might need some command-line switches to use them with a newer version of pip.
**EDIT**: As of pip 7, dep links were rethought and re-enabled; even though the deprecation notice hasn't been removed, from the discussions they seem to be here to stay. With pip>=7, here is how you can install things:
```
pip install -e . --process-dependency-links --allow-all-external
```
Or add the following to a pip.conf, e.g. `/etc/pip.conf`
```
[install]
process-dependency-links = yes
allow-all-external = yes
trusted-host =
    bitbucket.org
    github.com
```
**EDIT**
A trick I have learnt is to bump up the version number to something really high to make sure that pip doesn't prefer the non dependency link version (if that is something you want). From the example above, make the dependency link look like:
```
"https://github.com/egemsoft/django-simple-sso/tarball/master#egg=999.0.0",
```
Also make sure the version either looks like the example or is a date version; any other versioning will make pip think it's a dev version and it won't install it.
|
I faced a similar situation where I wanted to use Shapely as one of my package dependencies. Shapely, however, has a caveat: if you are using Windows, you have to use the .whl file from <http://www.lfd.uci.edu/~gohlke/pythonlibs/>. Otherwise, you have to install a C compiler, which is something I don't want. I want the user to simply use `pip install mypackage` instead of installing a bunch of other stuff.
And if you have the typical setup with `dependency_links`
```
setup(
    name = 'streettraffic',
    packages = find_packages(),  # this must be the same as the name above
    version = '0.1',
    description = 'A random test lib',
    author = 'Costa Huang',
    author_email = 'Costa.Huang@outlook.com',
    install_requires=['Shapely==1.5.17'],
    dependency_links = ['http://www.lfd.uci.edu/~gohlke/pythonlibs/ru4fxw3r/Shapely-1.5.17-cp36-cp36m-win_amd64.whl']
)
```
and run `pip install .`, it is simply going to pick the Shapely on PyPI and cause trouble on the Windows installation. After hours of researching, I found this link [Force setuptools to use dependency\_links to install mysqlclient](https://stackoverflow.com/questions/36755969/force-setuptools-to-use-dependency-links-to-install-mysqlclient) and basically used `from setuptools.command.install import install as _install` to manually install Shapely.
```
from setuptools.command.install import install as _install
from setuptools import setup, find_packages
import pip

class install(_install):
    def run(self):
        _install.do_egg_install(self)
        # just go ahead and do it
        pip.main(['install', 'http://localhost:81/Shapely-1.5.17-cp36-cp36m-win_amd64.whl'])

setup(
    name = 'mypackage',
    packages = find_packages(),  # this must be the same as the name above
    version = '0.1',
    description = 'A random test lib',
    author = 'Costa Huang',
    author_email = 'test@outlook.com',
    cmdclass={'install': install}
)
```
And the script works out nicely. Hope it helps.
|
26,061,610
|
I am using `python version 2.7` and `pip version is 1.5.6`.
I want to install extra libraries from a URL, like a git repo, while `setup.py` is being installed.
I was putting the extras in the `install_requires` parameter in `setup.py`. This means my library requires extra libraries, and they must also be installed.
```
...
install_requires=[
    "Django",
    ....
],
...
```
But URLs like git repos are not valid strings in `install_requires` in `setup.py`. Assume that I want to install a library from GitHub. I searched about this issue and found that I can put such libraries in `dependency_links` in `setup.py`. But that still doesn't work. Here is my dependency links definition:
```
dependency_links=[
    "https://github.com/.../tarball/master/#egg=1.0.0",
    "https://github.com/.../tarball/master#egg=0.9.3",
],
```
The links are valid; I can download them from an internet browser with these URLs. These extra libraries are still not installed with my setup. I also tried the `--process-dependency-links` parameter to force pip, but the result is the same, and I get no error when pipping.
After installation, I see none of the libraries from `dependency_links` in the `pip freeze` output.
How can I make them to be downloaded with my `setup.py` installation?
Edited:
=======
Here is my complete `setup.py`
```
from setuptools import setup

try:
    long_description = open('README.md').read()
except IOError:
    long_description = ''

setup(
    name='esef-sso',
    version='1.0.0.0',
    description='',
    url='https://github.com/egemsoft/esef-sso.git',
    keywords=["django", "egemsoft", "sso", "esefsso"],
    install_requires=[
        "Django",
        "webservices",
        "requests",
        "esef-auth==1.0.0.0",
        "django-simple-sso==0.9.3"
    ],
    dependency_links=[
        "https://github.com/egemsoft/esef-auth/tarball/master/#egg=1.0.0.0",
        "https://github.com/egemsoft/django-simple-sso/tarball/master#egg=0.9.3",
    ],
    packages=[
        'esef_sso_client',
        'esef_sso_client.models',
        'esef_sso_server',
        'esef_sso_server.models',
    ],
    include_package_data=True,
    zip_safe=False,
    platforms=['any'],
)
```
Edited 2:
=========
Here is pip log;
```
Downloading/unpacking esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/
URLs to search for versions for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0):
* https://pypi.python.org/simple/esef-auth/1.0.0.0
* https://pypi.python.org/simple/esef-auth/
Getting page https://pypi.python.org/simple/esef-auth/1.0.0.0
Could not fetch URL https://pypi.python.org/simple/esef-auth/1.0.0.0: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/1.0.0.0 when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Could not find any downloads that satisfy the requirement esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Cleaning up...
Removing temporary dir /Users/ahmetdal/.virtualenvs/esef-sso-example/build...
No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Exception information:
Traceback (most recent call last):
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/req.py", line 1177, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
```
It seems it does not use the sources in `dependency_links`.
|
2014/09/26
|
[
"https://Stackoverflow.com/questions/26061610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029816/"
] |
The `--process-dependency-links` option to enable `dependency_links` was [removed in Pip 19.0](https://pip.pypa.io/en/stable/news/).
Instead, you can use a [PEP 508](https://www.python.org/dev/peps/pep-0508/) URL to specify your dependency, which is [supported since Pip 18.1](https://pip.pypa.io/en/stable/news/). Here's an example excerpt from `setup.py`:
```
install_requires=[
"numpy",
"package1 @ git+https://github.com/user1/package1",
"package2 @ git+https://github.com/user2/package2@branch1",
],
```
Note that Pip does not support installing packages with such dependencies from PyPI and in the future [you will not be able to upload them to PyPI (see news entry for Pip 18.1)](https://pip.pypa.io/en/stable/news/).
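For illustration, the "name @ URL" shape of a PEP 508 direct reference can be split into its two parts. This is a minimal sketch with naive string handling, not pip's actual parser (pip uses the `packaging` library internally, which also handles extras and environment markers):

```python
# Minimal illustration of the "name @ URL" shape of a PEP 508 direct
# reference. Naive string handling only -- real pip parses these with
# the `packaging` library, which also handles extras and markers.
def split_direct_reference(requirement):
    """Split 'name @ url' into (name, url); (name, None) if there is no URL."""
    name, sep, url = requirement.partition("@")
    if sep:
        return name.strip(), url.strip()
    return requirement.strip(), None

print(split_direct_reference("package2 @ git+https://github.com/user2/package2@branch1"))
```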
|
Pip removed support for dependency\_links a while back. The [latest version of pip that supports dependency\_links is 1.3.1](https://pip.pypa.io/en/latest/news.html); to install it:
```
pip install pip==1.3.1
```
Your dependency links should work at that point. Please note that dependency\_links were always a last resort for pip, i.e. if a package with the same name exists on PyPI, it will be chosen over yours.
Note: <https://github.com/pypa/pip/pull/1955> re-allowed dependency\_links, so pip kept them, but you might need some command-line switches with newer versions of pip.
**EDIT**: As of pip 7 ... dependency links were rethought and re-enabled, even though the deprecation notice hasn't been removed; from the discussions they seem to be here to stay. With pip>=7, here is how you can install things:
```
pip install -e . --process-dependency-links --allow-all-external
```
Or add the following to a pip.conf, e.g. `/etc/pip.conf`
```
[install]
process-dependency-links = yes
allow-all-external = yes
trusted-host =
bitbucket.org
github.com
```
**EDIT**
A trick I have learnt is to bump up the version number to something really high to make sure that pip doesn't prefer the non-dependency-link version (if that is something you want). From the example above, make the dependency link look like:
```
"https://github.com/egemsoft/django-simple-sso/tarball/master#egg=999.0.0",
```
Also make sure the version either looks like the example or is a date-based version; any other versioning scheme will make pip think it's a dev version and it won't install it.
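The reason the bump works is that pip installs the highest candidate version it can find. A sketch of that comparison, simplified to plain dotted integers (real pip follows PEP 440):

```python
# Why bumping the egg version to 999.0.0 works: pip installs the highest
# candidate version available. Simplified to dotted integers here --
# real pip compares versions per PEP 440.
def version_key(version):
    return tuple(int(part) for part in version.split("."))

candidates = ["0.9.3", "999.0.0"]  # PyPI release vs. bumped dependency link
print(max(candidates, key=version_key))
```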
|
26,061,610
|
I am using Python 2.7 and pip 1.5.6.
I want to install extra libraries from URLs, like a git repo, while `setup.py` is being installed.
I put the extras in the `install_requires` parameter in `setup.py`. This means my library requires these extra libraries, and they must also be installed.
```
...
install_requires=[
"Django",
....
],
...
```
But URLs like git repos are not valid strings in `install_requires` in `setup.py`. Assume I want to install a library from GitHub. I searched about this issue and found that I can put such libraries in `dependency_links` in `setup.py`. But that still doesn't work. Here is my `dependency_links` definition:
```
dependency_links=[
"https://github.com/.../tarball/master/#egg=1.0.0",
"https://github.com/.../tarball/master#egg=0.9.3",
],
```
The links are valid; I can download them from a web browser with these URLs. But these extra libraries are still not installed by my setup. I also tried the `--process-dependency-links` parameter to force pip, but the result is the same. I get no error when running pip.
After installation, I see none of the libraries from `dependency_links` in the `pip freeze` output.
How can I make them be downloaded during my `setup.py` installation?
Edited:
=======
Here is my complete `setup.py`
```
from setuptools import setup

try:
    long_description = open('README.md').read()
except IOError:
    long_description = ''

setup(
    name='esef-sso',
    version='1.0.0.0',
    description='',
    url='https://github.com/egemsoft/esef-sso.git',
    keywords=["django", "egemsoft", "sso", "esefsso"],
    install_requires=[
        "Django",
        "webservices",
        "requests",
        "esef-auth==1.0.0.0",
        "django-simple-sso==0.9.3"
    ],
    dependency_links=[
        "https://github.com/egemsoft/esef-auth/tarball/master/#egg=1.0.0.0",
        "https://github.com/egemsoft/django-simple-sso/tarball/master#egg=0.9.3",
    ],
    packages=[
        'esef_sso_client',
        'esef_sso_client.models',
        'esef_sso_server',
        'esef_sso_server.models',
    ],
    include_package_data=True,
    zip_safe=False,
    platforms=['any'],
)
```
Edited 2:
=========
Here is the pip log:
```
Downloading/unpacking esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/
URLs to search for versions for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0):
* https://pypi.python.org/simple/esef-auth/1.0.0.0
* https://pypi.python.org/simple/esef-auth/
Getting page https://pypi.python.org/simple/esef-auth/1.0.0.0
Could not fetch URL https://pypi.python.org/simple/esef-auth/1.0.0.0: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/1.0.0.0 when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Getting page https://pypi.python.org/simple/esef-auth/
Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Could not find any downloads that satisfy the requirement esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Cleaning up...
Removing temporary dir /Users/ahmetdal/.virtualenvs/esef-sso-example/build...
No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Exception information:
Traceback (most recent call last):
File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/req.py", line 1177, in prepare_files
url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/index.py", line 277, in find_requirement
raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
```
It seems it does not use the sources in `dependency_links`.
|
2014/09/26
|
[
"https://Stackoverflow.com/questions/26061610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029816/"
] |
The `--process-dependency-links` option to enable `dependency_links` was [removed in Pip 19.0](https://pip.pypa.io/en/stable/news/).
Instead, you can use a [PEP 508](https://www.python.org/dev/peps/pep-0508/) URL to specify your dependency, which is [supported since Pip 18.1](https://pip.pypa.io/en/stable/news/). Here's an example excerpt from `setup.py`:
```
install_requires=[
"numpy",
"package1 @ git+https://github.com/user1/package1",
"package2 @ git+https://github.com/user2/package2@branch1",
],
```
Note that Pip does not support installing packages with such dependencies from PyPI and in the future [you will not be able to upload them to PyPI (see news entry for Pip 18.1)](https://pip.pypa.io/en/stable/news/).
|
I faced a similar situation where I wanted to use Shapely as one of my package dependencies. Shapely, however, has a caveat: if you are using Windows, you have to use the .whl file from <http://www.lfd.uci.edu/~gohlke/pythonlibs/>. Otherwise, you have to install a C compiler, which is something I don't want. I want the user to simply run `pip install mypackage` instead of installing a bunch of other stuff.
And if you have the typical setup with `dependency_links`
```
setup(
    name = 'streettraffic',
    packages = find_packages(), # this must be the same as the name above
    version = '0.1',
    description = 'A random test lib',
    author = 'Costa Huang',
    author_email = 'Costa.Huang@outlook.com',
    install_requires=['Shapely==1.5.17'],
    dependency_links = ['http://www.lfd.uci.edu/~gohlke/pythonlibs/ru4fxw3r/Shapely-1.5.17-cp36-cp36m-win_amd64.whl']
)
```
and run `pip install .`, pip is simply going to pick up the Shapely release on PyPI and cause trouble for the Windows installation. After hours of research, I found this link [Force setuptools to use dependency\_links to install mysqlclient](https://stackoverflow.com/questions/36755969/force-setuptools-to-use-dependency-links-to-install-mysqlclient) and basically used `from setuptools.command.install import install as _install` to manually install Shapely.
```
from setuptools.command.install import install as _install
from setuptools import setup, find_packages
import pip
class install(_install):
    def run(self):
        _install.do_egg_install(self)
        # just go ahead and do it
        pip.main(['install', 'http://localhost:81/Shapely-1.5.17-cp36-cp36m-win_amd64.whl'])

setup(
    name = 'mypackage',
    packages = find_packages(), # this must be the same as the name above
    version = '0.1',
    description = 'A random test lib',
    author = 'Costa Huang',
    author_email = 'test@outlook.com',
    cmdclass={'install': install}
)
```
And the script works out nicely. Hope it helps.
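Worth noting: `pip.main()` was removed in pip 10, so on newer pips the same trick is usually done by shelling out to pip as a subprocess. A minimal sketch of building that command (the wheel URL is the same hypothetical local server as in the answer above):

```python
import subprocess  # used when actually running the command
import sys

# pip.main() was removed in pip 10; invoking pip as a subprocess is the
# supported way to install programmatically. This only builds the command.
def build_pip_install_cmd(target):
    return [sys.executable, "-m", "pip", "install", target]

# Inside the custom install command's run() you would call:
#     subprocess.check_call(build_pip_install_cmd(wheel_url))
print(build_pip_install_cmd("http://localhost:81/Shapely-1.5.17-cp36-cp36m-win_amd64.whl"))
```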
|
69,450,482
|
I was trying to install matplotlib but I'm getting this long error. I don't really have any idea what is wrong.
```
ERROR: Command errored out with exit status 1:
command: 'C:\Python310\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\pip-install-5dbg4g23\\matplotlib_2ff15b65402b457db67b768d26133471\\setup.py'"'"'; __file__='"'"'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\pip-install-5dbg4g23\\matplotlib_2ff15b65402b457db67b768d26133471\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\Bilguun\AppData\Local\Temp\pip-pip-egg-info-hmkaun62'
cwd: C:\Users\Bilguun\AppData\Local\Temp\pip-install-5dbg4g23\matplotlib_2ff15b65402b457db67b768d26133471\
Complete output (282 lines):
WARNING: The wheel package is not available.
ERROR: Command errored out with exit status 1:
command: 'C:\Python310\python.exe' 'C:\Python310\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\Bilguun\AppData\Local\Temp\tmp2vupi6yj'
cwd: C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051
Complete output (233 lines):
setup.py:63: RuntimeWarning: NumPy 1.21.2 may not yet support Python 3.10.
warnings.warn(
Running from numpy source directory.
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\tools\cythonize.py:69: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils.version import LooseVersion
Processing numpy/random\_bounded_integers.pxd.in
Processing numpy/random\bit_generator.pyx
Processing numpy/random\mtrand.pyx
Processing numpy/random\_bounded_integers.pyx.in
Processing numpy/random\_common.pyx
Processing numpy/random\_generator.pyx
Processing numpy/random\_mt19937.pyx
Processing numpy/random\_pcg64.pyx
Processing numpy/random\_philox.pyx
Processing numpy/random\_sfc64.pyx
Cythonizing sources
blas_opt_info:
blas_mkl_info:
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries mkl_rt not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
blis_info:
libraries blis not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
openblas_info:
libraries openblas not found in ['C:\\Python310\\lib',
'C:\\', 'C:\\Python310\\libs']
get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']'
customize GnuFCompiler
Could not locate executable g77
Could not locate executable f77
customize IntelVisualFCompiler
Could not locate executable ifort
Could not locate executable ifl
customize AbsoftFCompiler
Could not locate executable f90
customize CompaqVisualFCompiler
Could not locate executable DF
customize IntelItaniumVisualFCompiler
Could not locate executable efl
customize Gnu95FCompiler
Found executable C:\MinGW\bin\gfortran.exe
Using built-in specs.
COLLECT_GCC=C:\MinGW\bin\gfortran.exe
COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/6.3.0/lto-wrapper.exe
Target: mingw32
Configured with: ../src/gcc-6.3.0/configure --build=x86_64-pc-linux-gnu --host=mingw32 --with-gmp=/mingw --with-mpfr=/mingw --with-mpc=/mingw --with-isl=/mingw --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --with-pkgversion='MinGW.org GCC-6.3.0-1' --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --with-libiconv-prefix=/mingw --with-libintl-prefix=/mingw --enable-libstdcxx-debug --with-tune=generic --enable-libgomp --disable-libvtv --enable-nls
Thread model: win32
gcc version 6.3.0 (MinGW.org GCC-6.3.0-1)
NOT AVAILABLE
accelerate_info:
NOT AVAILABLE
atlas_3_10_blas_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
atlas_3_10_blas_info:
libraries satlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:2026: UserWarning:
Optimized (vendor) Blas libraries are not found.
Falls back to netlib Blas library which has worse performance.
A better performance should be easily gained by switching
Blas library.
if self._calc_info(blas):
blas_info:
libraries blas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:2026: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by
setting
the BLAS environment variable.
if self._calc_info(blas):
blas_src_info:
NOT AVAILABLE
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:2026: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
if self._calc_info(blas):
NOT AVAILABLE
non-existing path in 'numpy\\distutils': 'site.cfg'
lapack_opt_info:
lapack_mkl_info:
libraries mkl_rt not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
openblas_lapack_info:
libraries openblas not found in ['C:\\Python310\\lib',
'C:\\', 'C:\\Python310\\libs']
get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']'
customize GnuFCompiler
customize IntelVisualFCompiler
customize AbsoftFCompiler
customize CompaqVisualFCompiler
customize IntelItaniumVisualFCompiler
customize Gnu95FCompiler
Using built-in specs.
COLLECT_GCC=C:\MinGW\bin\gfortran.exe
COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/6.3.0/lto-wrapper.exe
Target: mingw32
Configured with: ../src/gcc-6.3.0/configure --build=x86_64-pc-linux-gnu --host=mingw32 --with-gmp=/mingw --with-mpfr=/mingw --with-mpc=/mingw --with-isl=/mingw --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --with-pkgversion='MinGW.org GCC-6.3.0-1' --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --with-libiconv-prefix=/mingw --with-libintl-prefix=/mingw --enable-libstdcxx-debug --with-tune=generic --enable-libgomp --disable-libvtv --enable-nls
Thread model: win32
gcc version 6.3.0 (MinGW.org GCC-6.3.0-1)
NOT AVAILABLE
openblas_clapack_info:
libraries openblas,lapack not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
get_default_fcompiler: matching types: '['gnu', 'intelv,
'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']'
customize GnuFCompiler
customize IntelVisualFCompiler
customize AbsoftFCompiler
customize CompaqVisualFCompiler
customize IntelItaniumVisualFCompiler
customize Gnu95FCompiler
Using built-in specs.
COLLECT_GCC=C:\MinGW\bin\gfortran.exe
COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/6.3.0/lto-wrapper.exe
Target: mingw32
Configured with: ../src/gcc-6.3.0/configure --build=x86_64-pc-linux-gnu --host=mingw32 --with-gmp=/mingw --with-mpfr=/mingw --with-mpc=/mingw --with-isl=/mingw --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --with-pkgversion='MinGW.org GCC-6.3.0-1' --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --with-libiconv-prefix=/mingw --with-libintl-prefix=/mingw --enable-libstdcxx-debug --with-tune=generic --enable-libgomp --disable-libvtv --enable-nls
Thread model: win32
gcc version 6.3.0 (MinGW.org GCC-6.3.0-1)
NOT AVAILABLE
flame_info:
libraries flame not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
libraries lapack_atlas not found in C:\Python310\lib
libraries tatlas,tatlas not found in C:\Python310\lib
libraries lapack_atlas not found in C:\
libraries tatlas,tatlas not found in C:\
libraries lapack_atlas not found in C:\Python310\libs
libraries tatlas,tatlas not found in C:\Python310\libs
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
libraries lapack_atlas not found in C:\Python310\lib
libraries satlas,satlas not found in C:\Python310\lib
libraries lapack_atlas not found in C:\
libraries satlas,satlas not found in C:\
libraries lapack_atlas not found in C:\Python310\libs
libraries satlas,satlas not found in C:\Python310\libs
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries lapack_atlas not found in C:\Python310\lib
libraries ptf77blas,ptcblas,atlas not found in C:\Python310\lib
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\Python310\libs
libraries ptf77blas,ptcblas,atlas not found in C:\Python310\libs
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
libraries lapack_atlas not found in C:\Python310\lib
libraries f77blas,cblas,atlas not found in C:\Python310\lib
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\Python310\libs
libraries f77blas,cblas,atlas not found in C:\Python310\libs
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:1858: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not
found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
lapack_src_info:
NOT AVAILABLE
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:1858: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src])
or by setting
the LAPACK_SRC environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
NOT AVAILABLE
numpy_linalg_lapack_lite:
FOUND:
language = c
define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')]
Warning: attempted relative import with no known parent package
C:\Python310\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
running bdist_wheel
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-3.10
creating build\src.win-amd64-3.10\numpy
creating build\src.win-amd64-3.10\numpy\distutils
building library "npymath" sources
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Failed building wheel for numpy
ERROR: Failed to build one or more wheels
Traceback (most recent call last):
File "C:\Python310\lib\site-packages\setuptools\installer.py", line 75, in fetch_build_egg
subprocess.check_call(cmd)
File "C:\Python310\lib\subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['C:\\Python310\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\tmpq3kp_gfg', '--quiet', 'numpy>=1.16']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Bilguun\AppData\Local\Temp\pip-install-5dbg4g23\matplotlib_2ff15b65402b457db67b768d26133471\setup.py", line 258, in <module>
setup( # Finally, pass this all along to distutils to
do the heavy lifting.
File "C:\Python310\lib\site-packages\setuptools\__init__.py", line 152, in setup
_install_setup_requires(attrs)
File "C:\Python310\lib\site-packages\setuptools\__init__.py", line 147, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "C:\Python310\lib\site-packages\setuptools\dist.py", line 806, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "C:\Python310\lib\site-packages\pkg_resources\__init__.py", line 766, in resolve
dist = best[req.key] = env.best_match(
File "C:\Python310\lib\site-packages\pkg_resources\__init__.py", line 1051, in best_match
return self.obtain(req, installer)
File "C:\Python310\lib\site-packages\pkg_resources\__init__.py", line 1063, in obtain
return installer(requirement)
File "C:\Python310\lib\site-packages\setuptools\dist.py", line 877, in fetch_build_egg
return fetch_build_egg(self, req)
File "C:\Python310\lib\site-packages\setuptools\installer.py", line 77, in fetch_build_egg
raise DistutilsError(str(e)) from e
distutils.errors.DistutilsError: Command '['C:\\Python310\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\tmpq3kp_gfg', '--quiet', 'numpy>=1.16']' returned non-zero exit status 1.
Edit setup.cfg to change the build options; suppress output with --quiet.
BUILDING MATPLOTLIB
matplotlib: yes [3.4.1]
python: yes [3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC
v.1929 64 bit (AMD64)]]
platform: yes [win32]
tests: no [skipping due to configuration]
macosx: no [Mac OS-X only]
```
|
2021/10/05
|
[
"https://Stackoverflow.com/questions/69450482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17079850/"
] |
>
> *"What causes the segmentation fault..."*
>
>
>
There are several places with the potential for a segmentation fault. One that stands out is this:
```
char filename[4];
...
sprintf(filename, "%03i.jpg", 0);
```
In this example, `filename` has enough space for 3 characters plus the `nul` terminator. It needs to be declared with at least 8 bytes to contain the result of `"%03i.jpg", 0` (which, given enough space, will populate `filename` with `000.jpg`).
If you are not working on a small embedded microprocessor, there is no reason not to create a `path` variable with more than enough space. Eg:
```
char filename[PATH_MAX];//if PATH_MAX is not defined, use 260
```
Note that writing to areas of memory that your process does not own invokes undefined behavior, which can come in the form of a segmentation fault or, worse, can seem to work without a problem. For example, if your code happens to get past the point of writing a deformed value into the `filename` variable, and that variable is then used later to open a file:
```
img[0] = fopen(filename, "w");
```
it is unknown what the result will be. Because your code does not check the result of this call, there is even more potential for problems.
***Edit*** to address size of file...
```
int SIZE = sizeof(raw);
```
does not provide the size of the file; it returns the size of a pointer, i.e. either 4 or 8 bytes depending on whether the application is built as 32- or 64-bit. Consider using something like [this approach](https://stackoverflow.com/a/8247/645128) to get the actual file size, resulting in a call such as:
```
unsigned long SIZE = fsize(argv[1]);
```
|
As ryker stated, there are several points of possible failure here.
Another is
`int SIZE = sizeof(raw);`
sets SIZE to be the size of a pointer (4/8 bytes).
|
69,450,482
|
I was trying to install matplotlib but I'm getting this long error. I don't really have any idea what is wrong.
```
ERROR: Command errored out with exit status 1:
command: 'C:\Python310\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\pip-install-5dbg4g23\\matplotlib_2ff15b65402b457db67b768d26133471\\setup.py'"'"'; __file__='"'"'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\pip-install-5dbg4g23\\matplotlib_2ff15b65402b457db67b768d26133471\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\Bilguun\AppData\Local\Temp\pip-pip-egg-info-hmkaun62'
cwd: C:\Users\Bilguun\AppData\Local\Temp\pip-install-5dbg4g23\matplotlib_2ff15b65402b457db67b768d26133471\
Complete output (282 lines):
WARNING: The wheel package is not available.
ERROR: Command errored out with exit status 1:
command: 'C:\Python310\python.exe' 'C:\Python310\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\Bilguun\AppData\Local\Temp\tmp2vupi6yj'
cwd: C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051
Complete output (233 lines):
setup.py:63: RuntimeWarning: NumPy 1.21.2 may not yet support Python 3.10.
warnings.warn(
Running from numpy source directory.
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\tools\cythonize.py:69: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils.version import LooseVersion
Processing numpy/random\_bounded_integers.pxd.in
Processing numpy/random\bit_generator.pyx
Processing numpy/random\mtrand.pyx
Processing numpy/random\_bounded_integers.pyx.in
Processing numpy/random\_common.pyx
Processing numpy/random\_generator.pyx
Processing numpy/random\_mt19937.pyx
Processing numpy/random\_pcg64.pyx
Processing numpy/random\_philox.pyx
Processing numpy/random\_sfc64.pyx
Cythonizing sources
blas_opt_info:
blas_mkl_info:
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries mkl_rt not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
blis_info:
libraries blis not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
openblas_info:
libraries openblas not found in ['C:\\Python310\\lib',
'C:\\', 'C:\\Python310\\libs']
get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']'
customize GnuFCompiler
Could not locate executable g77
Could not locate executable f77
customize IntelVisualFCompiler
Could not locate executable ifort
Could not locate executable ifl
customize AbsoftFCompiler
Could not locate executable f90
customize CompaqVisualFCompiler
Could not locate executable DF
customize IntelItaniumVisualFCompiler
Could not locate executable efl
customize Gnu95FCompiler
Found executable C:\MinGW\bin\gfortran.exe
Using built-in specs.
COLLECT_GCC=C:\MinGW\bin\gfortran.exe
COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/6.3.0/lto-wrapper.exe
Target: mingw32
Configured with: ../src/gcc-6.3.0/configure --build=x86_64-pc-linux-gnu --host=mingw32 --with-gmp=/mingw --with-mpfr=/mingw --with-mpc=/mingw --with-isl=/mingw --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --with-pkgversion='MinGW.org GCC-6.3.0-1' --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --with-libiconv-prefix=/mingw --with-libintl-prefix=/mingw --enable-libstdcxx-debug --with-tune=generic --enable-libgomp --disable-libvtv --enable-nls
Thread model: win32
gcc version 6.3.0 (MinGW.org GCC-6.3.0-1)
NOT AVAILABLE
accelerate_info:
NOT AVAILABLE
atlas_3_10_blas_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
atlas_3_10_blas_info:
libraries satlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:2026: UserWarning:
Optimized (vendor) Blas libraries are not found.
Falls back to netlib Blas library which has worse performance.
A better performance should be easily gained by switching
Blas library.
if self._calc_info(blas):
blas_info:
libraries blas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:2026: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
if self._calc_info(blas):
blas_src_info:
NOT AVAILABLE
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:2026: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
if self._calc_info(blas):
NOT AVAILABLE
non-existing path in 'numpy\\distutils': 'site.cfg'
lapack_opt_info:
lapack_mkl_info:
libraries mkl_rt not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
openblas_lapack_info:
libraries openblas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']'
customize GnuFCompiler
customize IntelVisualFCompiler
customize AbsoftFCompiler
customize CompaqVisualFCompiler
customize IntelItaniumVisualFCompiler
customize Gnu95FCompiler
Using built-in specs.
COLLECT_GCC=C:\MinGW\bin\gfortran.exe
COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/6.3.0/lto-wrapper.exe
Target: mingw32
Configured with: ../src/gcc-6.3.0/configure --build=x86_64-pc-linux-gnu --host=mingw32 --with-gmp=/mingw --with-mpfr=/mingw --with-mpc=/mingw --with-isl=/mingw --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --with-pkgversion='MinGW.org GCC-6.3.0-1' --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --with-libiconv-prefix=/mingw --with-libintl-prefix=/mingw --enable-libstdcxx-debug --with-tune=generic --enable-libgomp --disable-libvtv --enable-nls
Thread model: win32
gcc version 6.3.0 (MinGW.org GCC-6.3.0-1)
NOT AVAILABLE
openblas_clapack_info:
libraries openblas,lapack not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']'
customize GnuFCompiler
customize IntelVisualFCompiler
customize AbsoftFCompiler
customize CompaqVisualFCompiler
customize IntelItaniumVisualFCompiler
customize Gnu95FCompiler
Using built-in specs.
COLLECT_GCC=C:\MinGW\bin\gfortran.exe
COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/6.3.0/lto-wrapper.exe
Target: mingw32
Configured with: ../src/gcc-6.3.0/configure --build=x86_64-pc-linux-gnu --host=mingw32 --with-gmp=/mingw --with-mpfr=/mingw --with-mpc=/mingw --with-isl=/mingw --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --with-pkgversion='MinGW.org GCC-6.3.0-1' --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --with-libiconv-prefix=/mingw --with-libintl-prefix=/mingw --enable-libstdcxx-debug --with-tune=generic --enable-libgomp --disable-libvtv --enable-nls
Thread model: win32
gcc version 6.3.0 (MinGW.org GCC-6.3.0-1)
NOT AVAILABLE
flame_info:
libraries flame not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
libraries lapack_atlas not found in C:\Python310\lib
libraries tatlas,tatlas not found in C:\Python310\lib
libraries lapack_atlas not found in C:\
libraries tatlas,tatlas not found in C:\
libraries lapack_atlas not found in C:\Python310\libs
libraries tatlas,tatlas not found in C:\Python310\libs
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
libraries lapack_atlas not found in C:\Python310\lib
libraries satlas,satlas not found in C:\Python310\lib
libraries lapack_atlas not found in C:\
libraries satlas,satlas not found in C:\
libraries lapack_atlas not found in C:\Python310\libs
libraries satlas,satlas not found in C:\Python310\libs
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries lapack_atlas not found in C:\Python310\lib
libraries ptf77blas,ptcblas,atlas not found in C:\Python310\lib
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\Python310\libs
libraries ptf77blas,ptcblas,atlas not found in C:\Python310\libs
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
libraries lapack_atlas not found in C:\Python310\lib
libraries f77blas,cblas,atlas not found in C:\Python310\lib
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\Python310\libs
libraries f77blas,cblas,atlas not found in C:\Python310\libs
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs']
NOT AVAILABLE
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:1858: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
lapack_src_info:
NOT AVAILABLE
C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:1858: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
NOT AVAILABLE
numpy_linalg_lapack_lite:
FOUND:
language = c
define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')]
Warning: attempted relative import with no known parent package
C:\Python310\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
running bdist_wheel
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-3.10
creating build\src.win-amd64-3.10\numpy
creating build\src.win-amd64-3.10\numpy\distutils
building library "npymath" sources
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Failed building wheel for numpy
ERROR: Failed to build one or more wheels
Traceback (most recent call last):
File "C:\Python310\lib\site-packages\setuptools\installer.py", line 75, in fetch_build_egg
subprocess.check_call(cmd)
File "C:\Python310\lib\subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['C:\\Python310\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\tmpq3kp_gfg', '--quiet', 'numpy>=1.16']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Bilguun\AppData\Local\Temp\pip-install-5dbg4g23\matplotlib_2ff15b65402b457db67b768d26133471\setup.py", line 258, in <module>
setup(  # Finally, pass this all along to distutils to do the heavy lifting.
File "C:\Python310\lib\site-packages\setuptools\__init__.py", line 152, in setup
_install_setup_requires(attrs)
File "C:\Python310\lib\site-packages\setuptools\__init__.py", line 147, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "C:\Python310\lib\site-packages\setuptools\dist.py", line 806, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "C:\Python310\lib\site-packages\pkg_resources\__init__.py", line 766, in resolve
dist = best[req.key] = env.best_match(
File "C:\Python310\lib\site-packages\pkg_resources\__init__.py", line 1051, in best_match
return self.obtain(req, installer)
File "C:\Python310\lib\site-packages\pkg_resources\__init__.py", line 1063, in obtain
return installer(requirement)
File "C:\Python310\lib\site-packages\setuptools\dist.py", line 877, in fetch_build_egg
return fetch_build_egg(self, req)
File "C:\Python310\lib\site-packages\setuptools\installer.py", line 77, in fetch_build_egg
raise DistutilsError(str(e)) from e
distutils.errors.DistutilsError: Command '['C:\\Python310\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\tmpq3kp_gfg', '--quiet', 'numpy>=1.16']' returned non-zero exit status 1.
Edit setup.cfg to change the build options; suppress output with --quiet.
BUILDING MATPLOTLIB
matplotlib: yes [3.4.1]
python: yes [3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC
v.1929 64 bit (AMD64)]]
platform: yes [win32]
tests: no [skipping due to configuration]
macosx: no [Mac OS-X only]
```
|
2021/10/05
|
[
"https://Stackoverflow.com/questions/69450482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17079850/"
] |
In addition to what the other answers point out, the handling of file pointers is also broken:
```
if(buffer[0] == 0xff && buffer[1] == 0xd8 && (buffer[3] >= 0xe0 && buffer[3] <= 0xef))
{ // We found a new header, lets create a new file...
if(JPEG_num == 0)
{
sprintf(filename, "%03i.jpg", 0);
img[0] = fopen(filename, "w"); // Open img[0]
fwrite(&buffer, 1, 512, img[0]); // Write to img[0]
JPEG_num++; // JPEG_num is 1 ahead of the used index in `img` array!
}
else
{
fclose(img[0]); // This will close the same FILE* again and again...
sprintf(filename, "%03i.jpg", JPEG_num);
img[JPEG_num] = fopen(filename, "w");
fwrite(&buffer, 1, 512, img[JPEG_num]);
JPEG_num++;
}
}
else
{ // No new header, just write
if(JPEG_num != 0)
{ // Only write after we found first header
fwrite(&buffer, 1, 512, img[JPEG_num]); // OUCH! Remember: JPEG_num is 1 ahead of the index in `img` array.
JPEG_num++; // OUCH: We use same file but now JPEG_num is 2 or more ahead of index in `img` array.
}
}
}
fclose(img[JPEG_num]);
```
As a result, you are walking through your array way too fast. Either use `JPEG_num-1` and only increment after creating a new file, or just remove the whole array and use a single `FILE *outfile;` instead.
An improved version would be (Error checks to be added by OP):
```
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
FILE *raw = fopen(argv[1], "rb");
    unsigned char buffer[512]; /* bytes, not ints: the header check compares individual bytes */
FILE *outfile = NULL;
char filename[9];
    int JPEG_num = 0; /* name must match the uses below */
while (fread(buffer, 1, 512, raw) == 512)
{
if (buffer[0] == 0xff
&& buffer[1] == 0xd8
&& buffer[2] == 0xff
&& (buffer[3] >= 0xe0 && buffer[3] <= 0xef))
{ // We found a new header, let's create a new file...
if (outfile != NULL)
{
fclose(outfile);
}
sprintf(filename, "%03i.jpg", JPEG_num);
outfile = fopen(filename, "wb");
fwrite(buffer, 1, 512, outfile);
JPEG_num++;
}
else
{ // No new header, just write
if (outfile != NULL)
{ // Only write after we found first header
fwrite(buffer, 1, 512, outfile);
}
}
}
if (outfile != NULL) // Check if we found at least one JPEG header
fclose(outfile);
}
```
Here I also fixed the wrong loop over the size of a `FILE` data type instead of the file.
Also the files are opened in binary mode.
|
As ryker stated, there are several possible points of failure here.
Another is
`int SIZE = sizeof(raw);`
which sets SIZE to the size of a pointer (4 or 8 bytes), not the size of the file.
|
23,985,903
|
I was wondering if there are any BDD-style 'describe-it' unit-testing frameworks for Python that are maintained and production-ready. I have found [describe](https://pypi.python.org/pypi/describe/0.1.2), but it doesn't seem to be maintained and has no documentation. I've also found [sure](http://falcao.it/sure), which reached 1.0, but it seems to add syntactic sugar for writing assertions rather than a way to structure tests. What I'm really looking for is something similar to RSpec and Jasmine that enables me to set up test suites: the describe-it syntax allows testing multiple cases of a function, whereas a classical assertion structure tests each function once, with multiple assertions covering multiple cases, which breaks the isolation of a unit test. If there's a way to achieve something similar with assertion-style testing, I'd appreciate any advice on how to do it. Below are simple examples of both styles:
**foo.py**
```
class Foo():
def bar(self, x):
return x + 1
```
**BDD-Style/Describe-It**
**test\_foo.py**
```
describe Foo:
describe self.bar:
before_each:
f = Foo()
it 'returns 1 more than its arguments value':
expect f.bar(3) == 4
it 'raises an error if no argument is passed in':
expect f.bar() raiseError
```
**Unittest/assertion-style**
**test\_foo.py**
```
import unittest

class FooTest(unittest.TestCase):
    def test_bar(self):
        f = Foo()
        self.assertEqual(f.bar(3), 4)        # case 1
        self.assertRaises(TypeError, f.bar)  # case 2, crammed into the same test
```
|
2014/06/02
|
[
"https://Stackoverflow.com/questions/23985903",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2483075/"
] |
I've been looking for this myself and came across [mamba](https://github.com/nestorsalceda/mamba). In combination with the fluent assertion library [expects](https://github.com/jaimegildesagredo/expects) it allows you to write BDD-style unit tests in Python that look like this:
```
from mamba import describe, context, it
from expects import *
with describe("FrequentFlyer"):
with context("when the frequent flyer account is first created"):
with it("should initially have Bronze status"):
frequentFlyer = FrequentFlyer()
expect(frequentFlyer.status()).to(equal("BRONZE"))
```
Running these tests with documentation formatting gives you a Jasmine like test report:
```
> pipenv run mamba --format=documentation frequent_flyer_test.py
FrequentFlyer
when the frequent flyer account is first created
✓ it should initially have Bronze status
1 example ran in 0.0345 seconds
```
|
If you are expecting something exactly like rspec/capybara in Python, then I am afraid you are in for a disappointment. The problem is that Ruby provides much more freedom than Python does (with open classes and more extensive metaprogramming support). I have to say there is a fundamental difference between the philosophies of Python and Ruby.
Still, there are some good testing frameworks like Cucumber (<https://github.com/cucumber/cucumber/wiki/Python>) and lettuce (<http://lettuce.it/>) in case you are looking for a pure Python solution.
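For the narrower question in the post — approximating describe-it case isolation with assertion-style testing — the stdlib's `unittest` gets part of the way: one test method per behaviour, plus `subTest` (Python 3.4+) so that each case in a loop is reported and fails independently. A minimal sketch using the `Foo` class from the question:

```python
import unittest

class Foo:
    def bar(self, x):
        return x + 1

class TestFooBar(unittest.TestCase):
    def setUp(self):
        # plays the role of before_each
        self.f = Foo()

    def test_returns_one_more_than_its_argument(self):
        for value, expected in [(3, 4), (0, 1), (-1, 0)]:
            with self.subTest(value=value):  # each case fails independently
                self.assertEqual(self.f.bar(value), expected)

    def test_raises_without_an_argument(self):
        # calling bar() without x raises TypeError in Python
        with self.assertRaises(TypeError):
            self.f.bar()
```

It is not describe-it syntax, but it keeps one behaviour per test method and one case per `subTest` report.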
|
49,425,827
|
I want to import some tables from a Postgres database into Elasticsearch and also keep the tables in sync with the data in Elasticsearch. I have looked at a course on Udemy and also talked with a colleague who has a lot of experience with this issue to see what the best way to do it is. I was surprised to hear from both of them that the best way seems to be to write code in Python, Java, or some other language that handles the import and sync, which brings me to my question: is this actually the best way to handle this situation? It seems like there would be a library, plugin, or something that handles importing data into Elasticsearch and keeping it in sync with an external database. What is the best way to handle this situation?
|
2018/03/22
|
[
"https://Stackoverflow.com/questions/49425827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4415079/"
] |
It depends on your use case. A common practice is to handle this on the application layer. Basically what you do is to replicate the actions of one db to the other. So for example if you save one entry in postgres you do the same in elasticsearch.
If you do this however you'll have to have a queuing system in place. Either the queue is integrated on your application layer, e.g. if the save in elasticsearch fails then you can replay the operation. Moreover on your queuing system you'll implement a throttling mechanism in order to not overwhelm elasticsearch. Another approach would be to send events to another app (e.g. logstash etc), so the throttling and persistence will be handled by that system and not your application.
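The dual-write-with-retry idea above can be sketched in a few lines (all names are hypothetical; a real deployment would use a persistent queue such as Redis or Kafka rather than an in-memory deque):

```python
from collections import deque

class DualWriter:
    """Write to the primary store first, then replicate to the search index;
    failed replications are queued and replayed later."""

    def __init__(self, primary, search_index):
        self.primary = primary            # stand-in for the Postgres layer
        self.search_index = search_index  # stand-in for an Elasticsearch client
        self.retry_queue = deque()

    def save(self, doc_id, doc):
        self.primary[doc_id] = doc  # the source of truth is written first
        try:
            self.search_index.index(doc_id, doc)
        except Exception:
            self.retry_queue.append((doc_id, doc))  # replay later

    def replay(self):
        while self.retry_queue:
            doc_id, doc = self.retry_queue.popleft()
            try:
                self.search_index.index(doc_id, doc)
            except Exception:
                self.retry_queue.appendleft((doc_id, doc))
                break  # index still unavailable; stop and try again later
```

Throttling would hook into `replay()` (e.g. sleeping between batches) so Elasticsearch is not overwhelmed.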
Another approach would be this <https://www.elastic.co/blog/logstash-jdbc-input-plugin>. You use another system that "polls" your database and sends the changes to elasticsearch. In this case logstash is ideal since it's part of the ELK stack and it has a great integration. Check this too <https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html>
Another approach is to use the [NOTIFY](https://www.postgresql.org/docs/9.0/static/sql-notify.html) mechanism of postgres to send events to some queue that will handle saving the changes in elasticsearch.
|
As with anything in life, best is subjective.
Your colleague likes to write and maintain code to keep this in sync. There's nothing wrong with that.
I would say the best way would be to use a data pipeline. There's a plethora of choices, really overwhelming; you can explore the various solutions which support Postgres and Elasticsearch. Here are the options I'm familiar with.
Note that these are tools/platforms for your solution, not the solution itself. YOU have to configure, customize and enhance them to fit your definition of **in sync**
* [LogStash](https://www.elastic.co/products/logstash)
* [Apache NiFi](https://nifi.apache.org/)
* [Kafka Connect](https://www.confluent.io/product/connectors/)
49,425,827
|
I want to import some tables from a Postgres database into Elasticsearch and also keep the tables in sync with the data in Elasticsearch. I have looked at a course on Udemy and also talked with a colleague who has a lot of experience with this issue to see what the best way to do it is. I was surprised to hear from both of them that the best way seems to be to write code in Python, Java, or some other language that handles the import and sync, which brings me to my question: is this actually the best way to handle this situation? It seems like there would be a library, plugin, or something that handles importing data into Elasticsearch and keeping it in sync with an external database. What is the best way to handle this situation?
|
2018/03/22
|
[
"https://Stackoverflow.com/questions/49425827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4415079/"
] |
There is a more recent tool called "abc", developed by appbase.io.
Its performance is not even comparable with Logstash:
- abc is written in Go
- Logstash runs on JRuby
Anybody who's ever used Logstash knows that it takes at least 20 seconds just to start.
The same basic table import task from PostgreSQL to Elasticsearch takes ~1 min with Logstash, and 5 seconds with abc.
**Pros**:
* Performance
* Performance
* Simplicity (no conf)
**Cons**:
* Better suited to one-shot imports; the daemon mode is limited
* Fewer middlewares (Logstash filters), as you are required to write a transform.js file that manually changes events
|
As with anything in life, best is subjective.
Your colleague likes to write and maintain code to keep this in sync. There's nothing wrong with that.
I would say the best way would be to use a data pipeline. There's a plethora of choices, really overwhelming; you can explore the various solutions which support Postgres and Elasticsearch. Here are the options I'm familiar with.
Note that these are tools/platforms for your solution, not the solution itself. YOU have to configure, customize and enhance them to fit your definition of **in sync**
* [LogStash](https://www.elastic.co/products/logstash)
* [Apache NiFi](https://nifi.apache.org/)
* [Kafka Connect](https://www.confluent.io/product/connectors/)
49,425,827
|
I want to import some tables from a Postgres database into Elasticsearch and also keep the tables in sync with the data in Elasticsearch. I have looked at a course on Udemy and also talked with a colleague who has a lot of experience with this issue to see what the best way to do it is. I was surprised to hear from both of them that the best way seems to be to write code in Python, Java, or some other language that handles the import and sync, which brings me to my question: is this actually the best way to handle this situation? It seems like there would be a library, plugin, or something that handles importing data into Elasticsearch and keeping it in sync with an external database. What is the best way to handle this situation?
|
2018/03/22
|
[
"https://Stackoverflow.com/questions/49425827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4415079/"
] |
It depends on your use case. A common practice is to handle this on the application layer. Basically what you do is to replicate the actions of one db to the other. So for example if you save one entry in postgres you do the same in elasticsearch.
If you do this however you'll have to have a queuing system in place. Either the queue is integrated on your application layer, e.g. if the save in elasticsearch fails then you can replay the operation. Moreover on your queuing system you'll implement a throttling mechanism in order to not overwhelm elasticsearch. Another approach would be to send events to another app (e.g. logstash etc), so the throttling and persistence will be handled by that system and not your application.
Another approach would be this <https://www.elastic.co/blog/logstash-jdbc-input-plugin>. You use another system that "polls" your database and sends the changes to elasticsearch. In this case logstash is ideal since it's part of the ELK stack and it has a great integration. Check this too <https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html>
Another approach is to use the [NOTIFY](https://www.postgresql.org/docs/9.0/static/sql-notify.html) mechanism of postgres to send events to some queue that will handle saving the changes in elasticsearch.
|
There is a more recent tool called "abc", developed by appbase.io.
Its performance is not even comparable with Logstash:
- abc is written in Go
- Logstash runs on JRuby
Anybody who's ever used Logstash knows that it takes at least 20 seconds just to start.
The same basic table import task from PostgreSQL to Elasticsearch takes ~1 min with Logstash, and 5 seconds with abc.
**Pros**:
* Performance
* Performance
* Simplicity (no conf)
**Cons**:
* Better suited to one-shot imports; the daemon mode is limited
* Fewer middlewares (Logstash filters), as you are required to write a transform.js file that manually changes events
|
3,254,096
|
My Python application is constructed as such that some functionality is available as plugins. The plugin architecture currently is very simple: I have a plugins folder/package which contains some python modules. I load the relevant plugin as follows:
```
plugin_name = blablabla
try:
module = __import__(plugin_name, fromlist='do_something')
except ImportError:
#some error handling ...
```
and then execute:
```
try:
loans = module.do_something(id_t, pin_t)
except xxx:
# error handling
```
I compile the application to a Windows binary using py2exe. **This works fine, except for the fact that all plugins are (and have to be) included in the binary.** This is not very practical, since for each new plugin I have to recompile and release a new version of my application. It would be better if a new plugin (i.e. a Python file) could be copied to some application plugin folder, and the Python code in that file be interpreted on the fly by my application.
What is the best approach to do so?
(I've thought of reading each line of the selected plugin file and applying an [`exec` statement](http://docs.python.org/reference/simple_stmts.html#the-exec-statement) to it. But there might be better ways...)
|
2010/07/15
|
[
"https://Stackoverflow.com/questions/3254096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/50899/"
] |
[PyInstaller](http://www.pyinstaller.org) lets you import external files as well. If you run it over your application, it will not package those files within the executable. You will then have to make sure that paths are correct (that is, your application can find the modules on the disk in the correct directory), and everything should work.
|
I suggest you use the pkg\_resources entry\_points feature (from setuptools/distribute) to implement plugin discovery and instantiation: first, it's a standard way to do that; second, it does not suffer from the problem you mention, AFAIK. All you have to do to extend the application is package some plugins into an egg that declares some entry points (an egg may declare many plugins), and when you install that egg into your Python distribution, all the plugins it declares can automatically be discovered by your application. You may also package your application and the "factory" plugins into the same egg; it's quite convenient.
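A minimal sketch of the entry-points approach (the group name `myapp.plugins` and the plugin names are hypothetical): each plugin egg declares an entry point in its setup.py, and the host application discovers and loads them at runtime with `pkg_resources`:

```python
import pkg_resources

# In the plugin's setup.py (hypothetical names):
#   setup(
#       name="myapp-plugin-foo",
#       entry_points={"myapp.plugins": ["foo = foo_plugin:do_something"]},
#   )

def discover_plugins(group="myapp.plugins"):
    """Return {name: loaded object} for every entry point registered in `group`."""
    return {ep.name: ep.load() for ep in pkg_resources.iter_entry_points(group)}
```

Installing a new plugin egg is then enough; the host application needs no changes or recompilation.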
|
3,254,096
|
My Python application is constructed as such that some functionality is available as plugins. The plugin architecture currently is very simple: I have a plugins folder/package which contains some python modules. I load the relevant plugin as follows:
```
plugin_name = blablabla
try:
module = __import__(plugin_name, fromlist='do_something')
except ImportError:
#some error handling ...
```
and then execute:
```
try:
loans = module.do_something(id_t, pin_t)
except xxx:
# error handling
```
I compile the application to a Windows binary using py2exe. **This works fine, except for the fact that all plugins are (and have to be) included in the binary.** This is not very practical, since for each new plugin I have to recompile and release a new version of my application. It would be better if a new plugin (i.e. a Python file) could be copied to some application plugin folder, and the Python code in that file be interpreted on the fly by my application.
What is the best approach to do so?
(I've thought of reading each line of the selected plugin file and applying an [`exec` statement](http://docs.python.org/reference/simple_stmts.html#the-exec-statement) to it. But there might be better ways...)
|
2010/07/15
|
[
"https://Stackoverflow.com/questions/3254096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/50899/"
] |
If you don't mind that plugins will be released as .py files, you can do something like the following. Put all your plugins under a "plugin" subdirectory and create an empty "\_\_init\_\_.py" there. At runtime, it will import the package along with all the modules in that directory. Check [Dive Into Python](http://diveintopython.net/functional_programming/all_together.html) for an explanation... but here's what I finally ended up using.
```
def load_plugin(path):
    """Load plugins from a directory and return the list of modules"""
    import imp
    import os
    import re

    files = os.listdir(path)
    test = re.compile(".py$", re.IGNORECASE)
    files = filter(test.search, files)
    filenameToModuleName = lambda f: os.path.splitext(f)[0]
    moduleNames = sorted(map(filenameToModuleName, files))
    f, filename, desc = imp.find_module('plugin')
    plugin = imp.load_module('plugin', f, filename, desc)
    modules = []
    for m in moduleNames:
        # skip any files starting with '__', such as __init__.py
        if m.startswith('__'):
            continue
        try:
            f, filename, desc = imp.find_module(m, plugin.__path__)
            modules.append(imp.load_module(m, f, filename, desc))
        except ImportError:
            continue
    return modules
```
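The `imp` module used above is deprecated on modern Pythons (and removed in 3.12); the same directory scan can be written with `importlib`. A sketch, not tied to any particular plugin API:

```python
import importlib.util
import os

def load_plugins(path):
    """Import every top-level .py file in `path` and return the modules."""
    modules = []
    for fname in sorted(os.listdir(path)):
        if not fname.endswith(".py") or fname.startswith("__"):
            continue
        name = os.path.splitext(fname)[0]
        spec = importlib.util.spec_from_file_location(name, os.path.join(path, fname))
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)  # executes the plugin file
        modules.append(module)
    return modules
```

This loads plugins from an arbitrary folder on disk, which fits the frozen-binary use case: the plugin directory can live next to the executable.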
|
I suggest you use pkg\_resources entry\_points features (from setuptools/distribute) to implement plugin discovery and instantiation: first, it's a standard way to do that; second, it does not suffer the problem you mention AFAIK. All you have to do to extend the application is to package some plugins into an egg that declare some entry points (an egg may declare many plugins), and when you install that egg into your python distribution, all the plugins it declares can automatically be discovered by your application. You may also package your application and the "factory" plugins into the same egg, it's quite convenient.
|
3,254,096
|
My Python application is constructed as such that some functionality is available as plugins. The plugin architecture currently is very simple: I have a plugins folder/package which contains some python modules. I load the relevant plugin as follows:
```
plugin_name = blablabla
try:
module = __import__(plugin_name, fromlist='do_something')
except ImportError:
#some error handling ...
```
and then execute:
```
try:
loans = module.do_something(id_t, pin_t)
except xxx:
# error handling
```
I compile the application to a Windows binary using py2exe. **This works fine, except for the fact that all plugins are (and have to be) included in the binary.** This is not very practical, since for each new plugin I have to recompile and release a new version of my application. It would be better if a new plugin (i.e. a Python file) could be copied to some application plugin folder, and the Python code in that file be interpreted on the fly by my application.
What is the best approach to do so?
(I've thought of reading each line of the selected plugin file and applying an [`exec` statement](http://docs.python.org/reference/simple_stmts.html#the-exec-statement) to it. But there might be better ways...)
|
2010/07/15
|
[
"https://Stackoverflow.com/questions/3254096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/50899/"
] |
[PyInstaller](http://www.pyinstaller.org) lets you import external files as well. If you run it over your application, it will not package those files within the executable. You will then have to make sure that paths are correct (that is, your application can find the modules on the disk in the correct directory), and everything should work.
|
I'm not sure you have to put plugin files in the zip library.
This may be because you're using py2exe's defaults for packaging your script.
You could try using `compressed = False` (as documented in [py2exe ListOfOptions](http://www.py2exe.org/index.cgi/ListOfOptions)), which would eliminate the library.zip generated by py2exe and possibly allow you to access Python modules (your plugins are Python modules, I presume, from the `__import__`) in a "normal" way, instead of being forced to package them in your zip or binary.
|
3,254,096
|
My Python application is constructed as such that some functionality is available as plugins. The plugin architecture currently is very simple: I have a plugins folder/package which contains some python modules. I load the relevant plugin as follows:
```
plugin_name = blablabla
try:
module = __import__(plugin_name, fromlist='do_something')
except ImportError:
#some error handling ...
```
and then execute:
```
try:
loans = module.do_something(id_t, pin_t)
except xxx:
# error handling
```
I compile the application to a Windows binary using py2exe. **This works fine, except for the fact that all plugins are (and have to be) included in the binary.** This is not very practical, since for each new plugin I have to recompile and release a new version of my application. It would be better if a new plugin (i.e. a Python file) could be copied to some application plugin folder, and the Python code in that file be interpreted on the fly by my application.
What is the best approach to do so?
(I've thought of reading each line of the selected plugin file and applying an [`exec` statement](http://docs.python.org/reference/simple_stmts.html#the-exec-statement) to it. But there might be better ways...)
|
2010/07/15
|
[
"https://Stackoverflow.com/questions/3254096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/50899/"
] |
If you don't mind that plugins will be released as .py files, you can do something like the following. Put all your plugins under a "plugin" subdir, and create an empty "\_\_init\_\_.py". At runtime, it will import the package along with all the modules in that directory. Check [Dive Into Python](http://diveintopython.net/functional_programming/all_together.html) for an explanation... but here's what I finally ended up using.
```
def load_plugin(path):
    """Load plugins from a directory and return a list of modules."""
    import imp
    import os
    import re
    files = os.listdir(path)
    test = re.compile(".py$", re.IGNORECASE)
    files = filter(test.search, files)
    filenameToModuleName = lambda f: os.path.splitext(f)[0]
    moduleNames = sorted(map(filenameToModuleName, files))
    f, filename, desc = imp.find_module('plugin')
    plugin = imp.load_module('plugin', f, filename, desc)
    modules = []
    # print moduleNames
    for m in moduleNames:
        # skip any files starting with '__', such as __init__.py
        if m.startswith('__'):
            continue
        try:
            f, filename, desc = imp.find_module(m, plugin.__path__)
            modules.append(imp.load_module(m, f, filename, desc))
        except ImportError:
            continue
    return modules
```
|
I'm not sure you have to put plugin files in the zip library.
This may be because you're using py2exe's defaults when packaging your script.
You could try using `compressed = False` (as documented in [py2exe ListOfOptions](http://www.py2exe.org/index.cgi/ListOfOptions)), which would eliminate the library.zip generated by py2exe and possibly allow you to access Python modules (your plugins are Python modules, I presume, from the **import**) in a "normal" way, instead of being forced to package them in your zip or binary.
|
3,254,096
|
My Python application is constructed as such that some functionality is available as plugins. The plugin architecture currently is very simple: I have a plugins folder/package which contains some python modules. I load the relevant plugin as follows:
```
plugin_name = blablabla
try:
module = __import__(plugin_name, fromlist='do_something')
except ImportError:
#some error handling ...
```
and then execute:
```
try:
loans = module.do_something(id_t, pin_t)
except xxx:
# error handling
```
I compile the application to a Windows binary using py2exe. **This works fine, except for the fact that all plugins are (and have to be) included in the binary.** This is not very practical, since for each new plugin, I have to recompile and release a new version of my application. It would be better if a new plugin (i.e. a Python file) could be copied to some application plugin folder, and the Python code in that file interpreted on the fly by my application.
What is the best approach to do so?
(I've thought of reading each line of the selected plugin file and applying an [`exec` statement](http://docs.python.org/reference/simple_stmts.html#the-exec-statement) to it. But there might be better ways...)
|
2010/07/15
|
[
"https://Stackoverflow.com/questions/3254096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/50899/"
] |
If you don't mind that plugins will be released as .py files, you can do something like the following. Put all your plugins under a "plugin" subdir, and create an empty "\_\_init\_\_.py". At runtime, it will import the package along with all the modules in that directory. Check [Dive Into Python](http://diveintopython.net/functional_programming/all_together.html) for an explanation... but here's what I finally ended up using.
```
def load_plugin(path):
    """Load plugins from a directory and return a list of modules."""
    import imp
    import os
    import re
    files = os.listdir(path)
    test = re.compile(".py$", re.IGNORECASE)
    files = filter(test.search, files)
    filenameToModuleName = lambda f: os.path.splitext(f)[0]
    moduleNames = sorted(map(filenameToModuleName, files))
    f, filename, desc = imp.find_module('plugin')
    plugin = imp.load_module('plugin', f, filename, desc)
    modules = []
    # print moduleNames
    for m in moduleNames:
        # skip any files starting with '__', such as __init__.py
        if m.startswith('__'):
            continue
        try:
            f, filename, desc = imp.find_module(m, plugin.__path__)
            modules.append(imp.load_module(m, f, filename, desc))
        except ImportError:
            continue
    return modules
```
|
[PyInstaller](http://www.pyinstaller.org) lets you import external files as well. If you run it over your application, it will not package those files within the executable. You will then have to make sure that paths are correct (that is, your application can find the modules on the disk in the correct directory), and everything should work.
|
3,254,096
|
My Python application is constructed as such that some functionality is available as plugins. The plugin architecture currently is very simple: I have a plugins folder/package which contains some python modules. I load the relevant plugin as follows:
```
plugin_name = blablabla
try:
module = __import__(plugin_name, fromlist='do_something')
except ImportError:
#some error handling ...
```
and then execute:
```
try:
loans = module.do_something(id_t, pin_t)
except xxx:
# error handling
```
I compile the application to a Windows binary using py2exe. **This works fine, except for the fact that all plugins are (and have to be) included in the binary.** This is not very practical, since for each new plugin, I have to recompile and release a new version of my application. It would be better if a new plugin (i.e. a Python file) could be copied to some application plugin folder, and the Python code in that file interpreted on the fly by my application.
What is the best approach to do so?
(I've thought of reading each line of the selected plugin file and applying an [`exec` statement](http://docs.python.org/reference/simple_stmts.html#the-exec-statement) to it. But there might be better ways...)
|
2010/07/15
|
[
"https://Stackoverflow.com/questions/3254096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/50899/"
] |
[PyInstaller](http://www.pyinstaller.org) lets you import external files as well. If you run it over your application, it will not package those files within the executable. You will then have to make sure that paths are correct (that is, your application can find the modules on the disk in the correct directory), and everything should work.
|
I found out how to import external modules at runtime (on top of the compiled executable) with PyInstaller.
It turns out that originally the path of the executable was automatically added to sys.path, but for security reasons this was removed at some point.
To re-enable it, use:
```
sys.path.append(os.path.dirname(sys.executable))
```
This will enable importing .py files that sit in the same path as the executable.
You can add this line to a runtime hook, or to the main app.
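For illustration, such a runtime hook is just a tiny .py file (the file name `add_exe_dir.py` and the command line below are examples, not fixed names):

```python
# add_exe_dir.py -- example PyInstaller runtime hook
import os
import sys

# Re-add the executable's directory to the import path so the bundled
# app can import loose .py files placed next to the .exe.
sys.path.append(os.path.dirname(sys.executable))
```

You would then build with something like `pyinstaller --runtime-hook add_exe_dir.py yourscript.py`.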
|
3,254,096
|
My Python application is constructed as such that some functionality is available as plugins. The plugin architecture currently is very simple: I have a plugins folder/package which contains some python modules. I load the relevant plugin as follows:
```
plugin_name = blablabla
try:
module = __import__(plugin_name, fromlist='do_something')
except ImportError:
#some error handling ...
```
and then execute:
```
try:
loans = module.do_something(id_t, pin_t)
except xxx:
# error handling
```
I compile the application to a Windows binary using py2exe. **This works fine, except for the fact that all plugins are (and have to be) included in the binary.** This is not very practical, since for each new plugin, I have to recompile and release a new version of my application. It would be better if a new plugin (i.e. a Python file) could be copied to some application plugin folder, and the Python code in that file interpreted on the fly by my application.
What is the best approach to do so?
(I've thought of reading each line of the selected plugin file and applying an [`exec` statement](http://docs.python.org/reference/simple_stmts.html#the-exec-statement) to it. But there might be better ways...)
|
2010/07/15
|
[
"https://Stackoverflow.com/questions/3254096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/50899/"
] |
If you don't mind that plugins will be released as .py files, you can do something like the following. Put all your plugins under a "plugin" subdir, and create an empty "\_\_init\_\_.py". At runtime, it will import the package along with all the modules in that directory. Check [Dive Into Python](http://diveintopython.net/functional_programming/all_together.html) for an explanation... but here's what I finally ended up using.
```
def load_plugin(path):
    """Load plugins from a directory and return a list of modules."""
    import imp
    import os
    import re
    files = os.listdir(path)
    test = re.compile(".py$", re.IGNORECASE)
    files = filter(test.search, files)
    filenameToModuleName = lambda f: os.path.splitext(f)[0]
    moduleNames = sorted(map(filenameToModuleName, files))
    f, filename, desc = imp.find_module('plugin')
    plugin = imp.load_module('plugin', f, filename, desc)
    modules = []
    # print moduleNames
    for m in moduleNames:
        # skip any files starting with '__', such as __init__.py
        if m.startswith('__'):
            continue
        try:
            f, filename, desc = imp.find_module(m, plugin.__path__)
            modules.append(imp.load_module(m, f, filename, desc))
        except ImportError:
            continue
    return modules
```
|
I found out how to import external modules at runtime (on top of the compiled executable) with PyInstaller.
It turns out that originally the path of the executable was automatically added to sys.path, but for security reasons this was removed at some point.
To re-enable it, use:
```
sys.path.append(os.path.dirname(sys.executable))
```
This will enable importing .py files that sit in the same path as the executable.
You can add this line to a runtime hook, or to the main app.
|
23,599,970
|
I would like to ask for your help. I have started learning Python, and there is a task that I cannot figure out how to complete. So here it is.
We have a input.txt file containing the next 4 rows:
```
f(x, 3*y) * 54 = 64 / (7 * x) + f(2*x, y-6)
x + f(21*y, x - 32/y) + 4 = f(21 ,y)
86 - f(7 + x*10, y+ 232) = f(12*x-4, 2*y-61)*32 + f(2, x)
65 - 3* y = f(2*y/33 , x + 5)
```
The task is to change the "f" function and its 2 parameters into a division. There can be any number of spaces between the two parameters. For example f(2, 5) is the same as f(2 , 5) and should become (2 / 5), with exactly one space before and after the divide mark after the code runs. Also, if one of the parameters is a multiplication or a division, that parameter must go into brackets. For example: f(3, 5\*7) should become (3 / (5\*7)). And there could be any number of functions in one row. So the output should look like this:
```
(x / (3*y)) * 54 = 64 / (7 * x) + ((2*x) / (y-6))
x + ((21*y) / (x - 32/y)) + 4 = (21 / y)
86 - ((7 + x*10) / (y+ 232)) = ((12*x-4) / (2*y-61))*32 + (2 / x)
65 - 3* y = ((2*y/33) / (x + 5))
```
I would be very happy if anyone could help me.
Thank you in advance,
David
|
2014/05/12
|
[
"https://Stackoverflow.com/questions/23599970",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3626828/"
] |
The Firemonkey canvas on Windows is probably not using the GPU. If you are using XE6 you can
>
> set the global variable FMX.Types.GlobalUseGPUCanvas to true in the initialization section.
>
>
>
[Documentation](http://docwiki.embarcadero.com/Libraries/en/FMX.Types.GlobalUseGPUCanvas)
Otherwise, in XE5 stick a TViewPort3D on your Form. Stick a TLayer3D in the TViewPort3D and change its Projection property to pjScreen. Stick your TPaintBox on the TLayer3D.
Another alternative could be there is an [OpenGL canvas unit](https://github.com/FMXExpress/box2d-firemonkey/blob/master/OpenGL%20Canvas/UOpenGLCanvas.pas)
You could also parallel-process your loop, but that will only make your test faster and maybe not your real world game ([Parallel loop in Delphi](https://stackoverflow.com/questions/4390149/how-to-realize-parallel-loop-in-delphi))
|
When you draw a circle on the canvas (i.e. GPUCanvas), you are in fact drawing around 50 small triangles; this is how GPUCanvas works. It's even worse with, for example, a Rectangle with rounded corners. I also found that Canvas.BeginScene and Canvas.EndScene are very slow operations. You can try setting Form.Quality to HighPerformance to avoid antialiasing, but I didn't see it really change the speed.
|
25,279,746
|
I am writing a test application in Python and, to test a particular scenario, I need to launch my Python child process under the Windows SYSTEM account.
I can do this by creating an exe from my Python script and then using that while creating a Windows service. But this option is not good for me because, in the future, if I change anything in my Python script then I have to regenerate the exe every time.
If anybody has any better idea about how to do this then please let me know.
Bishnu
|
2014/08/13
|
[
"https://Stackoverflow.com/questions/25279746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1811739/"
] |
1. Create a service that runs permanently.
2. Arrange for the service to have an IPC communications channel.
3. From your desktop python code, send messages to the service down that IPC channel. These messages specify the action to be taken by the service.
4. The service receives the message and performs the action. That is, executes the python code that the sender requests.
This allows you to decouple the service from the python code that it executes and so allows you to avoid repeatedly re-installing a service.
If you don't want to run in a service then you can use `CreateProcessAsUser` or similar APIs.
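As a sketch of steps 2–4, the standard library's `multiprocessing.connection` module provides a ready-made IPC channel (the message format and authkey below are arbitrary choices, and both ends run in one process here only for demonstration — in practice `service_loop` would live inside the SYSTEM service):

```python
from multiprocessing.connection import Client, Listener
import threading

AUTHKEY = b"change-me"  # example shared secret -- pick your own

def service_loop(listener):
    """Service side: receive one request and perform the requested action."""
    with listener.accept() as conn:
        msg = conn.recv()                  # e.g. {"action": "ping"}
        if msg.get("action") == "ping":
            conn.send({"result": "pong"})

def send_request(address, msg):
    """Desktop side: send one request down the channel, return the reply."""
    with Client(address, authkey=AUTHKEY) as conn:
        conn.send(msg)
        return conn.recv()

if __name__ == "__main__":
    # Port 0 lets the OS pick a free port; a real service would use a fixed one.
    listener = Listener(("localhost", 0), authkey=AUTHKEY)
    t = threading.Thread(target=service_loop, args=(listener,))
    t.start()
    print(send_request(listener.address, {"action": "ping"}))
    t.join()
    listener.close()
```

The messages can carry anything picklable, so the desktop process can hand the service a module name or code path to execute.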
|
You could also use Windows Task Scheduler, it can run a script under SYSTEM account and its interface is easy (if you do not test too often :-) )
|
25,279,746
|
I am writing a test application in Python and, to test a particular scenario, I need to launch my Python child process under the Windows SYSTEM account.
I can do this by creating an exe from my Python script and then using that while creating a Windows service. But this option is not good for me because, in the future, if I change anything in my Python script then I have to regenerate the exe every time.
If anybody has any better idea about how to do this then please let me know.
Bishnu
|
2014/08/13
|
[
"https://Stackoverflow.com/questions/25279746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1811739/"
] |
1. Create a service that runs permanently.
2. Arrange for the service to have an IPC communications channel.
3. From your desktop python code, send messages to the service down that IPC channel. These messages specify the action to be taken by the service.
4. The service receives the message and performs the action. That is, executes the python code that the sender requests.
This allows you to decouple the service from the python code that it executes and so allows you to avoid repeatedly re-installing a service.
If you don't want to run in a service then you can use `CreateProcessAsUser` or similar APIs.
|
To run a file with SYSTEM account privileges, you can use `psexec`. Download it here:
[Sysinternals](https://technet.microsoft.com/en-us/sysinternals/bb896649.aspx)
Then you may use :
```
os.system
```
or
```
subprocess.call
```
And execute:
```
PSEXEC -i -s -d CMD "path\to\yourfile"
```
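Put together in Python, that might look like the sketch below (the PsExec and script paths are placeholders, and `cmd /c` is used so the shell runs the file and then exits):

```python
import subprocess

def build_psexec_cmd(psexec, target):
    """Build the PsExec invocation: -i interactive, -s run as the
    SYSTEM account, -d don't wait for the process to terminate."""
    return [psexec, "-i", "-s", "-d", "cmd", "/c", target]

cmd = build_psexec_cmd(r"C:\Tools\PsExec.exe", r"C:\path\to\yourfile.py")
print(cmd)
# On Windows, with Sysinternals downloaded, you would then run:
# subprocess.call(cmd)
```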
|
25,279,746
|
I am writing a test application in Python and, to test a particular scenario, I need to launch my Python child process under the Windows SYSTEM account.
I can do this by creating an exe from my Python script and then using that while creating a Windows service. But this option is not good for me because, in the future, if I change anything in my Python script then I have to regenerate the exe every time.
If anybody has any better idea about how to do this then please let me know.
Bishnu
|
2014/08/13
|
[
"https://Stackoverflow.com/questions/25279746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1811739/"
] |
1. Create a service that runs permanently.
2. Arrange for the service to have an IPC communications channel.
3. From your desktop python code, send messages to the service down that IPC channel. These messages specify the action to be taken by the service.
4. The service receives the message and performs the action. That is, executes the python code that the sender requests.
This allows you to decouple the service from the python code that it executes and so allows you to avoid repeatedly re-installing a service.
If you don't want to run in a service then you can use `CreateProcessAsUser` or similar APIs.
|
Just came across this one - I know, a bit late, but anyway. I encountered a similar situation and I solved it with NSSM ([Non-Sucking Service Manager](https://nssm.cc/)). Basically, this program enables you to start any executable as a service, which I did with my Python executable, giving it the Python script I was testing as a parameter.
So I could run the service and edit the script however I wanted. I just had to restart the service when I made any changes to the script.
One point for productive environments: Try not to rely on third party software like NSSM. You could also achieve this with the standard `SC` command ([see this answer](https://stackoverflow.com/a/47006989/5823275)) or PowerShell ([see this MS doc](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/new-service?view=powershell-7)).
|
49,551,704
|
I'm new to Python and Selenium and am wondering how I could take a group of text from a web page and put it into an array. Currently, what I have is a method that, instead of using an array, uses a string and displays it untidily.
```
# returns a list of names in the order it is displayed
def gather_names(self):
fullListNames = ""
hover_names = self.browser.find_elements_by_xpath("//div[contains(@class, 'recent-names')]") #xpath to the names that will need to be hovered over
for names in hover_names:
self.hover_over(names) #hover_over is a method which takes an xpath and will then hover over each of those elements
self.wait_for_element("//div[contains(@class, 'recent-names-info')]", 'names were not found') #Checking to see if it is displayed on the page; otherwise, a 'not found' command will print to console
time.sleep(3) #giving it time to find each element, otherwise it will go too fast and skip over one
listName = names.find_element_by_xpath("//div[contains(@class, 'recent-names-info')]").text #converts to text
fullListNames += listName #currently adding every element to a string
return fullListNames
```
The output of this looks like
```
name_on_page1name_on_page2name_on_page3
```
without any spaces in between the names (which I would like to change if I cannot find a way to incorporate this into an array).
When I did try making fullListNames an array, I had issues with it grabbing each character of the string and the output looking something like
```
[u'n', u'a', u'm', u'e', u'_', u'o', u'n']....
```
Preferably, I need a format of
```
[name1, name2, name3]
```
Can anyone point in the right way to handle this?
|
2018/03/29
|
[
"https://Stackoverflow.com/questions/49551704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9519161/"
] |
When you use `pyinstaller` to compile your script to an `executable` on `Windows 10` and want to use it on `Windows 7`, it won't work.
But you can compile it with `pyinstaller` on `Windows 7` and use the executable on `Windows 7, 8, and 10`.
Also take the `32-bit and 64-bit` versions of the operating system into consideration: when you use `Windows 7 32-bit` to compile your executable and want to use it on a `Windows 7 64-bit` operating system, it won't work, and vice versa.
So when you compile on a `Windows 7 32-bit` version it will work only on `32-bit versions of the operating system` and not on `64-bit versions of Windows`, and vice versa.
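Before building, you can confirm which bitness your interpreter (and therefore the executable PyInstaller produces from it) targets, using only the standard library:

```python
import platform
import struct

# Reports '64bit' or '32bit' -- the frozen exe will match the interpreter.
print(platform.architecture()[0])

# Equivalent check: pointer size in bits.
print(struct.calcsize("P") * 8)
```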
|
Maybe you could try using the cx\_Freeze package to build your app on Windows (note: if your PC is 64-bit, the app is going to be built for that architecture, and 32-bit if it is x86/32-bit).
Run cmd and type this:
```
pip install cx_Freeze
```
Then make a file called setup.py located in the same directory, and add this code to it:
```
import cx_Freeze
import sys
import os
os.environ['TCL_LIBRARY'] = "C:\\Program Files\\Python27\\tcl\\tcl8.6" # you need to locate the directory where tcl\tcl8.6 is
os.environ['TK_LIBRARY'] = "C:\\Program Files\\Python27\\tcl\\tk8.6" # you need to locate the directory where tcl\tk8.6 is
base = None
if sys.platform == 'win32':
base = "Win32GUI"
executables = [cx_Freeze.Executable("name_of_your_app.py", base=base, icon="icon_of_your_app.ico")]
cx_Freeze.setup(
    name = "Vtext",
    options = {"build_exe": {"packages":["tkinter"], "include_files":["icon_of_your_app.ico", "maybe_some_img_that_your_app_is_using.gif", "another_img.gif"]}},
    version = "1.0",
    description = "name_of_your_app",
    executables = executables
)
```
Later open a cmd and change the directory where your app is, for example:
```
C:\Users\Myname> cd C:\Users\Myname\MyTkinterApp
```
Then, type this:
```
python setup.py build
```
And if everything is OK, your app is going to be built.
Watch this video for more info: <https://www.youtube.com/watch?v=HosXxXE24hA&t=0s&index=29&list=PLQVvvaa0QuDclKx-QpC9wntnURXVJqLyk>
|
66,132,304
|
I am trying to post on a Facebook wall using Selenium in Python. I am able to log in, but after login it can't find the class name of the status box which I copied from the browser.
Here is my code:
```
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
user_name = "email"
password = "password"
msg = "hi i am new here"
driver = webdriver.Firefox()
driver.get("https://www.facebook.com")
element = driver.find_element_by_id("email")
element.send_keys(user_name)
element = driver.find_element_by_id("pass")
element.send_keys(password)
element.send_keys(Keys.RETURN)
time.sleep(5)
post_box = driver.find_element_by_class_name("a8c37x1j ni8dbmo4 stjgntxs l9j0dhe7")
post_box.click()
time.sleep(5)
post_box.send_keys(msg)
```
The snapshot of the code I copied from the browser is attached as an image [here](https://i.stack.imgur.com/LmZeX.png).
Here is the error I received:
```
Traceback (most recent call last):
File "C:/Users/rosha/Desktop/facebook bot.py", line 17, in <module>
post_box = driver.find_element_by_class_name("a8c37x1j ni8dbmo4 stjgntxs l9j0dhe7")
File "C:\ProgramData\Anaconda3\envs\facebook bot.py\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 564, in find_element_by_class_name
return self.find_element(by=By.CLASS_NAME, value=name)
File "C:\ProgramData\Anaconda3\envs\facebook bot.py\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 976, in find_element
return self.execute(Command.FIND_ELEMENT, {
File "C:\ProgramData\Anaconda3\envs\facebook bot.py\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\ProgramData\Anaconda3\envs\facebook bot.py\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .a8c37x1j ni8dbmo4 stjgntxs l9j0dhe7
```
|
2021/02/10
|
[
"https://Stackoverflow.com/questions/66132304",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11942305/"
] |
Try to find the element by XPath, for example:
**driver.find\_element(By.XPATH, '//button[text()="Some text"]')**
To find the XPath from the browser, right-click on something in the webpage and press Inspect. After that, right-click the highlighted element in the inspector; a menu will appear. Navigate to Copy, then another menu will appear; press Copy full XPath.
check this <https://selenium-python.readthedocs.io/locating-elements.html>
|
The problem is that `driver.find_element_by_class_name()` can be used for one class, and not multiple classes as you have: `a8c37x1j ni8dbmo4 stjgntxs l9j0dhe7` which are multiple classes separated by spaces.
Refer to the solution [suggested here](https://stackoverflow.com/a/44760303/12106481), it suggests using `find_elements_by_xpath` or `find_element_by_css_selector`.
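For instance, the space-separated class attribute from the question can be turned into a compound CSS selector by joining the classes with dots (the Facebook class names below are the ones from the question and change frequently):

```python
def classes_to_css(class_attr):
    """Turn a multi-class attribute like 'a b c' into the selector '.a.b.c'."""
    return "." + ".".join(class_attr.split())

selector = classes_to_css("a8c37x1j ni8dbmo4 stjgntxs l9j0dhe7")
print(selector)  # .a8c37x1j.ni8dbmo4.stjgntxs.l9j0dhe7
# post_box = driver.find_element_by_css_selector(selector)
```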
|
53,384,795
|
I have this data structure:
```
[array([[0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1]]), array([[0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0],
[1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0]]), array([[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], etc....
```
I want to flatten this into a list of lists something like:
```
[[0 1 0 1 1 1 0 5 1 0 2 1]
[1 6 1 0 0 1 1 1 2 0 2 0]
[2 0 5 0 5 2 2 0 6 3 2 2]
[1 0 1 1 1 1 0 2 0 0 0 1]]
```
How do we do this in python?
|
2018/11/20
|
[
"https://Stackoverflow.com/questions/53384795",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
I believe you're looking for `vstack`:
```
>>> np.vstack(l)
array([[0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1],
[0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0],
[1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
```
Note that this is equivalent to:
```
>>> np.concatenate(x, axis=0)
array([[0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1],
[0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0],
[1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
```
To convert to list, use `tolist`:
```
>>> np.vstack(l).tolist()
[[0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1], [0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0], [1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
# or
>>> np.concatenate(x, axis=0).tolist()
[[0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1], [0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0], [1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
```
|
Use `flatten`:
```
print([i.flatten() for i in l])
```
Or:
```
print(list(map(lambda x: x.flatten(),l)))
```
Both output:
```
[array([0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1]), array([0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0, 1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1,
0]), array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0])]
```
|
62,262,007
|
```
import base64
s = "05052020"
```
python2.7
```
base64.b64encode(s)
```
output is string `'MDUwNTIwMjA='`
python 3.7
```
base64.b64encode(b"05052020")
```
output is bytes
```
b'MDUwNTIwMjA='
```
I want to replace = with "a"
```
s = str(base64.b64encode(b"05052020"))[2:-1]
s = s.replace("=", "a")
```
I realise this is a dirty way, so how can I do it better?
EDIT:
Expected result:
Python 3 code that outputs a string with the padding replaced.
|
2020/06/08
|
[
"https://Stackoverflow.com/questions/62262007",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4450090/"
] |
If you tried both language servers and VS Code made you reload, then you have tried the options currently available to you from the Python extension. We are actively working on making it better, though, and hope to have something to say about it shortly.
But if you can't wait you can try something like <https://marketplace.visualstudio.com/items?itemName=ms-pyright.pyright> as an alternative language server.
|
It might be a problem related to Pylance. By default, Pylance only looks for modules in the root directory. Making some tweaks in the settings made sure everything I import in VSCode works as if it's imported in PyCharm.
Please see:
<https://stackoverflow.com/a/67099842/6381389>
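For reference, the tweak in question is Pylance's `python.analysis.extraPaths` setting in `settings.json`, which tells it where else to look for importable modules (the `./src` path below is just an example):

```json
{
    "python.analysis.extraPaths": ["./src"]
}
```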
|
69,675,173
|
Following is the content of `foo.py`
```py
import sys
print(sys.executable)
```
When I execute this, I can get the full path of the Python interpreter that called this script.
```sh
$ /mingw64/bin/python3.9.exe foo.py
/mingw64/bin/python3.9.exe
```
How to do this in nim (`nimscript`)?
|
2021/10/22
|
[
"https://Stackoverflow.com/questions/69675173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1805129/"
] |
The question mentions [NimScript](https://nim-lang.github.io/Nim/nims.html), which has other uses in the Nim ecosystem, but can also be used to write executable scripts instead of using, e.g., Bash or Python. You can use the [`selfExe`](https://nim-lang.github.io/Nim/nimscript.html#selfExe) proc to get the path to the Nim executable which is running a NimScript script:
```
#!/usr/bin/env -S nim --hints:off
mode = ScriptMode.Silent
echo selfExe()
```
After saving the above as `test.nims` and using `chmod +x` to make the file executable, the script can be invoked to show the path to the current Nim executable:
```none
$ ./test.nims
/home/.choosenim/toolchains/nim-1.4.8/bin/nim
```
|
Nim is compiled, so I assume you want to get the path of the application's own binary? If so, you can do that with:
```
import std/os
echo getAppFilename()
```
|
69,675,173
|
Following is the content of `foo.py`
```py
import sys
print(sys.executable)
```
When I execute this, I can get the full path of the Python interpreter that called this script.
```sh
$ /mingw64/bin/python3.9.exe foo.py
/mingw64/bin/python3.9.exe
```
How to do this in nim (`nimscript`)?
|
2021/10/22
|
[
"https://Stackoverflow.com/questions/69675173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1805129/"
] |
The question mentions [NimScript](https://nim-lang.github.io/Nim/nims.html), which has other uses in the Nim ecosystem, but can also be used to write executable scripts instead of using, e.g., Bash or Python. You can use the [`selfExe`](https://nim-lang.github.io/Nim/nimscript.html#selfExe) proc to get the path to the Nim executable which is running a NimScript script:
```
#!/usr/bin/env -S nim --hints:off
mode = ScriptMode.Silent
echo selfExe()
```
After saving the above as `test.nims` and using `chmod +x` to make the file executable, the script can be invoked to show the path to the current Nim executable:
```none
$ ./test.nims
/home/.choosenim/toolchains/nim-1.4.8/bin/nim
```
|
If you want to do that in Nim (not NimScript), you can take compiler executable path using <https://nim-lang.org/docs/os.html#getCurrentCompilerExe>
```
import os
echo getCurrentCompilerExe()
```
|
47,405,748
|
I am reading the official Python documentation word by word.
In section 3.3, Special method names: [3.3.1. Basic customization](https://docs.python.org/3/reference/datamodel.html#basic-customization)
It specifies 16 special methods under `object` basic customization, which I collect as follows:
```
In [47]: bc =['__new__', '__init__', '__del__', '__repr__', '__str__', '__bytes__',
'__format__', '__eq__', '__le__', '__lt__', '__eq__',
'__ne__', '__ge__', '__gt__', '__hash__', '__bool__']
In [48]: len(bc)
Out[48]: 16
```
The problem is that three of them are not valid attributes of `object`:
```
In [50]: for attr in bc:
...: if hasattr(object,attr):
...: pass
...: else:
...: print(attr)
...:
__del__
__bytes__
__bool__
```
Object is a base for all classes. [2. Built-in Functions — Python 3.6.3 documentation](https://docs.python.org/3/library/functions.html?highlight=object#object)
It has no recursive base classes.
Where are methods of `__del__`, `__bytes__`,`__bool__` defined for class 'object'?
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47405748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7301792/"
] |
I hope this will help you and it works fine on my project.
```
public static final int MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE = 123;
public static final int MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE = 124;
public static final int MY_PERMISSIONS_REQUEST_CAMERA = 125; // must be distinct from the other request codes
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissionread(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.READ_EXTERNAL_STORAGE)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("External storage permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissionwrite(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("External storage permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissioncamera(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.CAMERA)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("Camera permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.CAMERA}, MY_PERMISSIONS_REQUEST_CAMERA);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.CAMERA}, MY_PERMISSIONS_REQUEST_CAMERA);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
protected ProgressDialog mProgressDialog;
protected void showProgressDialog(String title,String message)
{
/* if(mProgressDialog!=null)
{
mProgressDialog.dismiss();
}*/
mProgressDialog = ProgressDialog.show(this,title,message);
}
```
|
Try adding the permission to the manifest file:
```
<uses-permission android:name="android.permission.CAMERA" />
```
and in your `checkPermissionsR()` check for this permission:
```
ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
```
|
47,405,748
|
I am reading the official Python documentation word for word.
In section 3.3, "Special method names", under [3.3.1. Basic customization](https://docs.python.org/3/reference/datamodel.html#basic-customization),
it specifies 16 special methods for `object`'s basic customization, which I collect as follows:
```
In [47]: bc =['__new__', '__init__', '__del__', '__repr__', '__str__', '__bytes__',
'__format__', '__eq__', '__le__', '__lt__', '__eq__',
'__ne__', '__ge__', '__gt__', '__hash__', '__bool__']
In [48]: len(bc)
Out[48]: 16
```
The problem is that three of them are not valid attributes of `object`:
```
In [50]: for attr in bc:
...: if hasattr(object,attr):
...: pass
...: else:
...: print(attr)
...:
__del__
__bytes__
__bool__
```
`object` is the base for all classes ([2. Built-in Functions — Python 3.6.3 documentation](https://docs.python.org/3/library/functions.html?highlight=object#object)),
and it has no base classes of its own.
So where are the methods `__del__`, `__bytes__`, and `__bool__` defined for class `object`?
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47405748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7301792/"
] |
In your case 1, you request the write-external-storage permission, but you should request the camera permission. First declare it in your manifest:
```
<uses-permission android:name="android.permission.CAMERA" />
```
and in your **checkPermissionsR()** check for this permission:
```
ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
```
|
Try adding the permission to the manifest file:
```
<uses-permission android:name="android.permission.CAMERA" />
```
and in your `checkPermissionsR()` check for this permission:
```
ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
```
|
47,405,748
|
I am reading the official Python documentation word for word.
In section 3.3, "Special method names", under [3.3.1. Basic customization](https://docs.python.org/3/reference/datamodel.html#basic-customization),
it specifies 16 special methods for `object`'s basic customization, which I collect as follows:
```
In [47]: bc =['__new__', '__init__', '__del__', '__repr__', '__str__', '__bytes__',
'__format__', '__eq__', '__le__', '__lt__', '__eq__',
'__ne__', '__ge__', '__gt__', '__hash__', '__bool__']
In [48]: len(bc)
Out[48]: 16
```
The problem is that three of them are not valid attributes of `object`:
```
In [50]: for attr in bc:
...: if hasattr(object,attr):
...: pass
...: else:
...: print(attr)
...:
__del__
__bytes__
__bool__
```
`object` is the base for all classes ([2. Built-in Functions — Python 3.6.3 documentation](https://docs.python.org/3/library/functions.html?highlight=object#object)),
and it has no base classes of its own.
So where are the methods `__del__`, `__bytes__`, and `__bool__` defined for class `object`?
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47405748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7301792/"
] |
Try requesting all the permissions at once at startup of your application,
e.g. in your `MainActivity`.
First, in your `onCreate` method, call this:
```
checkPermissions();
```
Then try calling these methods:
```
private void checkPermissions() {
if (Build.VERSION.SDK_INT >= 23) {
if (!checkAllPermission())
requestPermission();
}
}
private void requestPermission() {
ActivityCompat.requestPermissions(MainActivity.this, new String[]
{
CAMERA,
READ_EXTERNAL_STORAGE,
WRITE_EXTERNAL_STORAGE,
//check more permissions if you want
........
}, RequestPermissionCode);
}
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
switch (requestCode) {
case RequestPermissionCode:
if (grantResults.length > 0) {
boolean CameraPermission = grantResults[0] == PackageManager.PERMISSION_GRANTED;
boolean ReadExternalStatePermission = grantResults[1] == PackageManager.PERMISSION_GRANTED;
boolean ReadWriteStatePermission = grantResults[2] == PackageManager.PERMISSION_GRANTED;
//
.......
if (CameraPermission && ReadExternalStatePermission && ReadWriteStatePermission) {
Toast.makeText(MainActivity.this, "Permissions acquired", Toast.LENGTH_LONG).show();
} else {
Toast.makeText(MainActivity.this, "One or more permissions denied", Toast.LENGTH_LONG).show();
}
}
break;
default:
break;
}
}
public boolean checkAllPermission() {
int FirstPermissionResult = ContextCompat.checkSelfPermission(getApplicationContext(), CAMERA);
int SecondPermissionResult = ContextCompat.checkSelfPermission(getApplicationContext(), READ_EXTERNAL_STORAGE);
int ThirdPermissionResult = ContextCompat.checkSelfPermission(getApplicationContext(), WRITE_EXTERNAL_STORAGE);
.....
return FirstPermissionResult == PackageManager.PERMISSION_GRANTED &&
SecondPermissionResult == PackageManager.PERMISSION_GRANTED &&
ThirdPermissionResult == PackageManager.PERMISSION_GRANTED;
}
```
|
Try adding the permission to the manifest file:
```
<uses-permission android:name="android.permission.CAMERA" />
```
and in your `checkPermissionsR()` check for this permission:
```
ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
```
|
47,405,748
|
I am reading the official Python documentation word for word.
In section 3.3, "Special method names", under [3.3.1. Basic customization](https://docs.python.org/3/reference/datamodel.html#basic-customization),
it specifies 16 special methods for `object`'s basic customization, which I collect as follows:
```
In [47]: bc =['__new__', '__init__', '__del__', '__repr__', '__str__', '__bytes__',
'__format__', '__eq__', '__le__', '__lt__', '__eq__',
'__ne__', '__ge__', '__gt__', '__hash__', '__bool__']
In [48]: len(bc)
Out[48]: 16
```
The problem is that three of them are not valid attributes of `object`:
```
In [50]: for attr in bc:
...: if hasattr(object,attr):
...: pass
...: else:
...: print(attr)
...:
__del__
__bytes__
__bool__
```
`object` is the base for all classes ([2. Built-in Functions — Python 3.6.3 documentation](https://docs.python.org/3/library/functions.html?highlight=object#object)),
and it has no base classes of its own.
So where are the methods `__del__`, `__bytes__`, and `__bool__` defined for class `object`?
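A related wrinkle, sketched below (the class `C` is a hypothetical example; assuming CPython 3.x): special methods are looked up implicitly on the *type*, not the instance, which is why they behave differently from ordinary attributes:

```python
class C:
    pass

c = C()
c.__bool__ = lambda: False       # instance attribute: ignored for special methods
print(bool(c))                   # True -- the slot is looked up on type(c)

C.__bool__ = lambda self: False  # class attribute: fills the slot
print(bool(c))                   # False
```

This type-level lookup is also why the absence of `__del__`, `__bytes__`, and `__bool__` on `object` is harmless: the interpreter simply uses its built-in default when the type provides no slot.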
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47405748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7301792/"
] |
Try this one:
```
if (ActivityCompat.checkSelfPermission(MainActivity.this, Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale(MainActivity.this, Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
//Show Information about why you need the permission
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setTitle("Need Storage Permission");
builder.setMessage("This app needs storage permission.");
builder.setPositiveButton("Grant", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
ActivityCompat.requestPermissions(MainActivity.this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, EXTERNAL_STORAGE_PERMISSION_CONSTANT);
}
});
builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
builder.show();
} else if (permissionStatus.getBoolean(Manifest.permission.WRITE_EXTERNAL_STORAGE,false)) {
//Previously Permission Request was cancelled with 'Dont Ask Again',
// Redirect to Settings after showing Information about why you need the permission
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setTitle("Need Storage Permission");
builder.setMessage("This app needs storage permission.");
builder.setPositiveButton("Grant", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
sentToSettings = true;
Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
Uri uri = Uri.fromParts("package", getPackageName(), null);
intent.setData(uri);
startActivityForResult(intent, REQUEST_PERMISSION_SETTING);
Toast.makeText(getBaseContext(), "Go to Permissions to Grant Storage", Toast.LENGTH_LONG).show();
}
});
builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
builder.show();
} else {
//just request the permission
ActivityCompat.requestPermissions(MainActivity.this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, EXTERNAL_STORAGE_PERMISSION_CONSTANT);
}
SharedPreferences.Editor editor = permissionStatus.edit();
editor.putBoolean(Manifest.permission.WRITE_EXTERNAL_STORAGE,true);
editor.commit();
} else {
//You already have the permission, just go ahead.
proceedAfterPermission();
}
```
|
Try adding the permission to the manifest file:
```
<uses-permission android:name="android.permission.CAMERA" />
```
and in your `checkPermissionsR()` check for this permission:
```
ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
```
|
47,405,748
|
I am reading the official Python documentation word for word.
In section 3.3, "Special method names", under [3.3.1. Basic customization](https://docs.python.org/3/reference/datamodel.html#basic-customization),
it specifies 16 special methods for `object`'s basic customization, which I collect as follows:
```
In [47]: bc =['__new__', '__init__', '__del__', '__repr__', '__str__', '__bytes__',
'__format__', '__eq__', '__le__', '__lt__', '__eq__',
'__ne__', '__ge__', '__gt__', '__hash__', '__bool__']
In [48]: len(bc)
Out[48]: 16
```
The problem is that three of them are not valid attributes of `object`:
```
In [50]: for attr in bc:
...: if hasattr(object,attr):
...: pass
...: else:
...: print(attr)
...:
__del__
__bytes__
__bool__
```
`object` is the base for all classes ([2. Built-in Functions — Python 3.6.3 documentation](https://docs.python.org/3/library/functions.html?highlight=object#object)),
and it has no base classes of its own.
So where are the methods `__del__`, `__bytes__`, and `__bool__` defined for class `object`?
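Conversely, defining the hooks in your own class makes them appear, as this sketch shows (the `Packet` class is a hypothetical example, not from the docs):

```python
class Packet:
    def __bytes__(self):
        # fills the __bytes__ slot that plain `object` lacks
        return b'\x01\x02'

    def __del__(self):
        # invoked when the instance is reclaimed; absent on `object` itself
        pass

p = Packet()
print(bytes(p))                      # b'\x01\x02'
print(hasattr(Packet, '__bytes__'))  # True

# Without __bytes__, bytes() rejects the object outright:
try:
    bytes(object())
except TypeError:
    print('TypeError')
```

So the documentation lists the hooks that *customize* basic behaviour; `object` only defines those for which a Python-level default exists.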
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47405748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7301792/"
] |
In your case 1, you request the write-external-storage permission, but you should request the camera permission. First declare it in your manifest:
```
<uses-permission android:name="android.permission.CAMERA" />
```
and in your **checkPermissionsR()** check for this permission:
```
ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
```
|
I hope this helps; it works fine in my project.
```
public static final int MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE = 123;
public static final int MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE = 124;
public static final int MY_PERMISSIONS_REQUEST_CAMERA = 125; // distinct request code (was a duplicate of 124)
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissionread(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.READ_EXTERNAL_STORAGE)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("External storage permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissionwrite(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("External storage permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissioncamera(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.CAMERA)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("Camera permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.CAMERA}, MY_PERMISSIONS_REQUEST_CAMERA);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.CAMERA}, MY_PERMISSIONS_REQUEST_CAMERA);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
protected ProgressDialog mProgressDialog;
protected void showProgressDialog(String title,String message)
{
/* if(mProgressDialog!=null)
{
mProgressDialog.dismiss();
}*/
mProgressDialog = ProgressDialog.show(this,title,message);
}
```
|
47,405,748
|
I am reading the official Python documentation word for word.
In section 3.3, "Special method names", under [3.3.1. Basic customization](https://docs.python.org/3/reference/datamodel.html#basic-customization),
it specifies 16 special methods for `object`'s basic customization, which I collect as follows:
```
In [47]: bc =['__new__', '__init__', '__del__', '__repr__', '__str__', '__bytes__',
'__format__', '__eq__', '__le__', '__lt__', '__eq__',
'__ne__', '__ge__', '__gt__', '__hash__', '__bool__']
In [48]: len(bc)
Out[48]: 16
```
The problem is that three of them are not valid attributes of `object`:
```
In [50]: for attr in bc:
...: if hasattr(object,attr):
...: pass
...: else:
...: print(attr)
...:
__del__
__bytes__
__bool__
```
`object` is the base for all classes ([2. Built-in Functions — Python 3.6.3 documentation](https://docs.python.org/3/library/functions.html?highlight=object#object)),
and it has no base classes of its own.
So where are the methods `__del__`, `__bytes__`, and `__bool__` defined for class `object`?
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47405748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7301792/"
] |
Try requesting all the permissions at once at startup of your application,
e.g. in your `MainActivity`.
First, in your `onCreate` method, call this:
```
checkPermissions();
```
Then try calling these methods:
```
private void checkPermissions() {
if (Build.VERSION.SDK_INT >= 23) {
if (!checkAllPermission())
requestPermission();
}
}
private void requestPermission() {
ActivityCompat.requestPermissions(MainActivity.this, new String[]
{
CAMERA,
READ_EXTERNAL_STORAGE,
WRITE_EXTERNAL_STORAGE,
//check more permissions if you want
........
}, RequestPermissionCode);
}
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
switch (requestCode) {
case RequestPermissionCode:
if (grantResults.length > 0) {
boolean CameraPermission = grantResults[0] == PackageManager.PERMISSION_GRANTED;
boolean ReadExternalStatePermission = grantResults[1] == PackageManager.PERMISSION_GRANTED;
boolean ReadWriteStatePermission = grantResults[2] == PackageManager.PERMISSION_GRANTED;
//
.......
if (CameraPermission && ReadExternalStatePermission && ReadWriteStatePermission) {
Toast.makeText(MainActivity.this, "Permissions acquired", Toast.LENGTH_LONG).show();
} else {
Toast.makeText(MainActivity.this, "One or more permissions denied", Toast.LENGTH_LONG).show();
}
}
break;
default:
break;
}
}
public boolean checkAllPermission() {
int FirstPermissionResult = ContextCompat.checkSelfPermission(getApplicationContext(), CAMERA);
int SecondPermissionResult = ContextCompat.checkSelfPermission(getApplicationContext(), READ_EXTERNAL_STORAGE);
int ThirdPermissionResult = ContextCompat.checkSelfPermission(getApplicationContext(), WRITE_EXTERNAL_STORAGE);
.....
return FirstPermissionResult == PackageManager.PERMISSION_GRANTED &&
SecondPermissionResult == PackageManager.PERMISSION_GRANTED &&
ThirdPermissionResult == PackageManager.PERMISSION_GRANTED;
}
```
|
I hope this helps; it works fine in my project.
```
public static final int MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE = 123;
public static final int MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE = 124;
public static final int MY_PERMISSIONS_REQUEST_CAMERA = 125; // distinct request code (was a duplicate of 124)
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissionread(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.READ_EXTERNAL_STORAGE)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("External storage permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissionwrite(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("External storage permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissioncamera(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.CAMERA)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("Camera permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.CAMERA}, MY_PERMISSIONS_REQUEST_CAMERA);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.CAMERA}, MY_PERMISSIONS_REQUEST_CAMERA);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
protected ProgressDialog mProgressDialog;
protected void showProgressDialog(String title,String message)
{
/* if(mProgressDialog!=null)
{
mProgressDialog.dismiss();
}*/
mProgressDialog = ProgressDialog.show(this,title,message);
}
```
|
47,405,748
|
I am reading the official Python documentation word for word.
In section 3.3, "Special method names", under [3.3.1. Basic customization](https://docs.python.org/3/reference/datamodel.html#basic-customization),
it specifies 16 special methods for `object`'s basic customization, which I collect as follows:
```
In [47]: bc =['__new__', '__init__', '__del__', '__repr__', '__str__', '__bytes__',
'__format__', '__eq__', '__le__', '__lt__', '__eq__',
'__ne__', '__ge__', '__gt__', '__hash__', '__bool__']
In [48]: len(bc)
Out[48]: 16
```
The problem is that three of them are not valid attributes of `object`:
```
In [50]: for attr in bc:
...: if hasattr(object,attr):
...: pass
...: else:
...: print(attr)
...:
__del__
__bytes__
__bool__
```
`object` is the base for all classes ([2. Built-in Functions — Python 3.6.3 documentation](https://docs.python.org/3/library/functions.html?highlight=object#object)),
and it has no base classes of its own.
So where are the methods `__del__`, `__bytes__`, and `__bool__` defined for class `object`?
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47405748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7301792/"
] |
I hope this helps; it works fine in my project.
```
public static final int MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE = 123;
public static final int MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE = 124;
public static final int MY_PERMISSIONS_REQUEST_CAMERA = 125; // distinct request code (was a duplicate of 124)
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissionread(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.READ_EXTERNAL_STORAGE)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("External storage permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissionwrite(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("External storage permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static boolean checkPermissioncamera(final Context context) {
int currentAPIVersion = Build.VERSION.SDK_INT;
if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
if (ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.CAMERA)) {
AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
alertBuilder.setCancelable(true);
alertBuilder.setTitle("Permission necessary");
alertBuilder.setMessage("Camera permission is necessary");
alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.CAMERA}, MY_PERMISSIONS_REQUEST_CAMERA);
}
});
AlertDialog alert = alertBuilder.create();
alert.show();
} else {
ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.CAMERA}, MY_PERMISSIONS_REQUEST_CAMERA);
}
return false;
} else {
return true;
}
} else {
return true;
}
}
protected ProgressDialog mProgressDialog;
protected void showProgressDialog(String title,String message)
{
/* if(mProgressDialog!=null)
{
mProgressDialog.dismiss();
}*/
mProgressDialog = ProgressDialog.show(this,title,message);
}
```
|
>
> Try this one
>
>
>
```
if (ActivityCompat.checkSelfPermission(MainActivity.this, Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale(MainActivity.this, Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
//Show Information about why you need the permission
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setTitle("Need Storage Permission");
builder.setMessage("This app needs storage permission.");
builder.setPositiveButton("Grant", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
ActivityCompat.requestPermissions(MainActivity.this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, EXTERNAL_STORAGE_PERMISSION_CONSTANT);
}
});
builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
builder.show();
} else if (permissionStatus.getBoolean(Manifest.permission.WRITE_EXTERNAL_STORAGE,false)) {
//Previously Permission Request was cancelled with 'Dont Ask Again',
// Redirect to Settings after showing Information about why you need the permission
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setTitle("Need Storage Permission");
builder.setMessage("This app needs storage permission.");
builder.setPositiveButton("Grant", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
sentToSettings = true;
Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
Uri uri = Uri.fromParts("package", getPackageName(), null);
intent.setData(uri);
startActivityForResult(intent, REQUEST_PERMISSION_SETTING);
Toast.makeText(getBaseContext(), "Go to Permissions to Grant Storage", Toast.LENGTH_LONG).show();
}
});
builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
builder.show();
} else {
//just request the permission
ActivityCompat.requestPermissions(MainActivity.this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, EXTERNAL_STORAGE_PERMISSION_CONSTANT);
}
SharedPreferences.Editor editor = permissionStatus.edit();
editor.putBoolean(Manifest.permission.WRITE_EXTERNAL_STORAGE,true);
editor.commit();
} else {
//You already have the permission, just go ahead.
proceedAfterPermission();
}
```
|
47,405,748
|
I am reading python official documentation word by word.
In the 3.3. Special method names [3.3.1. Basic customization](https://docs.python.org/3/reference/datamodel.html#basic-customization)
It specifies 16 special methods under basic customization of `object`; I collected them as follows:
```
In [47]: bc =['__new__', '__init__', '__del__', '__repr__', '__str__', '__bytes__',
'__format__', '__eq__', '__le__', '__lt__', '__eq__',
'__ne__', '__ge__', '__gt__', '__hash__', '__bool__']
In [48]: len(bc)
Out[48]: 16
```
The problem is that three of them are not valid attributes of `object`:
```
In [50]: for attr in bc:
...: if hasattr(object,attr):
...: pass
...: else:
...: print(attr)
...:
__del__
__bytes__
__bool__
```
Object is a base for all classes. [2. Built-in Functions — Python 3.6.3 documentation](https://docs.python.org/3/library/functions.html?highlight=object#object)
It has no base classes of its own.
Where are the methods `__del__`, `__bytes__`, and `__bool__` defined for class `object`?
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47405748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7301792/"
] |
In your case 1, you are requesting the external storage permission, but you should be requesting the camera permission. So first declare the permission in your manifest like this:
```
<uses-permission android:name="android.permission.CAMERA" />
```
and in your **checkPermissionsR()** check for this permission:
```
ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
```
|
>
> Try this one
>
>
>
```
if (ActivityCompat.checkSelfPermission(MainActivity.this, Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale(MainActivity.this, Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
//Show Information about why you need the permission
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setTitle("Need Storage Permission");
builder.setMessage("This app needs storage permission.");
builder.setPositiveButton("Grant", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
ActivityCompat.requestPermissions(MainActivity.this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, EXTERNAL_STORAGE_PERMISSION_CONSTANT);
}
});
builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
builder.show();
} else if (permissionStatus.getBoolean(Manifest.permission.WRITE_EXTERNAL_STORAGE,false)) {
//Previously Permission Request was cancelled with 'Dont Ask Again',
// Redirect to Settings after showing Information about why you need the permission
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setTitle("Need Storage Permission");
builder.setMessage("This app needs storage permission.");
builder.setPositiveButton("Grant", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
sentToSettings = true;
Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
Uri uri = Uri.fromParts("package", getPackageName(), null);
intent.setData(uri);
startActivityForResult(intent, REQUEST_PERMISSION_SETTING);
Toast.makeText(getBaseContext(), "Go to Permissions to Grant Storage", Toast.LENGTH_LONG).show();
}
});
builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
builder.show();
} else {
//just request the permission
ActivityCompat.requestPermissions(MainActivity.this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, EXTERNAL_STORAGE_PERMISSION_CONSTANT);
}
SharedPreferences.Editor editor = permissionStatus.edit();
editor.putBoolean(Manifest.permission.WRITE_EXTERNAL_STORAGE,true);
editor.commit();
} else {
//You already have the permission, just go ahead.
proceedAfterPermission();
}
```
|
47,405,748
|
I am reading python official documentation word by word.
In the 3.3. Special method names [3.3.1. Basic customization](https://docs.python.org/3/reference/datamodel.html#basic-customization)
It specifies 16 special methods under basic customization of `object`; I collected them as follows:
```
In [47]: bc =['__new__', '__init__', '__del__', '__repr__', '__str__', '__bytes__',
'__format__', '__eq__', '__le__', '__lt__', '__eq__',
'__ne__', '__ge__', '__gt__', '__hash__', '__bool__']
In [48]: len(bc)
Out[48]: 16
```
The problem is that three of them are not valid attributes of `object`:
```
In [50]: for attr in bc:
...: if hasattr(object,attr):
...: pass
...: else:
...: print(attr)
...:
__del__
__bytes__
__bool__
```
Object is a base for all classes. [2. Built-in Functions — Python 3.6.3 documentation](https://docs.python.org/3/library/functions.html?highlight=object#object)
It has no base classes of its own.
Where are the methods `__del__`, `__bytes__`, and `__bool__` defined for class `object`?
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47405748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7301792/"
] |
Try requesting all the permissions at once at startup of your application.
Like in your `MainActivity`
First in your `onCreate` method call this:
```
checkPermissions();
```
Then try calling these methods:
```
private void checkPermissions() {
if (Build.VERSION.SDK_INT >= 23) {
if (!checkAllPermission())
requestPermission();
}
}
private void requestPermission() {
ActivityCompat.requestPermissions(MainActivity.this, new String[]
{
CAMERA,
READ_EXTERNAL_STORAGE,
WRITE_EXTERNAL_STORAGE,
//check more permissions if you want
........
}, RequestPermissionCode);
}
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
switch (requestCode) {
case RequestPermissionCode:
if (grantResults.length > 0) {
boolean CameraPermission = grantResults[0] == PackageManager.PERMISSION_GRANTED;
boolean ReadExternalStatePermission = grantResults[1] == PackageManager.PERMISSION_GRANTED;
boolean ReadWriteStatePermission = grantResults[2] == PackageManager.PERMISSION_GRANTED;
//
.......
if (CameraPermission && ReadExternalStatePermission && ReadWriteStatePermission) {
Toast.makeText(MainActivity.this, "Permissions acquired", Toast.LENGTH_LONG).show();
} else {
Toast.makeText(MainActivity.this, "One or more permissions denied", Toast.LENGTH_LONG).show();
}
}
break;
default:
break;
}
}
public boolean checkAllPermission() {
int FirstPermissionResult = ContextCompat.checkSelfPermission(getApplicationContext(), CAMERA);
int SecondPermissionResult = ContextCompat.checkSelfPermission(getApplicationContext(), READ_EXTERNAL_STORAGE);
int ThirdPermissionResult = ContextCompat.checkSelfPermission(getApplicationContext(), WRITE_EXTERNAL_STORAGE);
.....
return FirstPermissionResult == PackageManager.PERMISSION_GRANTED &&
SecondPermissionResult == PackageManager.PERMISSION_GRANTED &&
            ThirdPermissionResult == PackageManager.PERMISSION_GRANTED;
}
```
|
>
> Try this one
>
>
>
```
if (ActivityCompat.checkSelfPermission(MainActivity.this, Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.shouldShowRequestPermissionRationale(MainActivity.this, Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
//Show Information about why you need the permission
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setTitle("Need Storage Permission");
builder.setMessage("This app needs storage permission.");
builder.setPositiveButton("Grant", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
ActivityCompat.requestPermissions(MainActivity.this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, EXTERNAL_STORAGE_PERMISSION_CONSTANT);
}
});
builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
builder.show();
} else if (permissionStatus.getBoolean(Manifest.permission.WRITE_EXTERNAL_STORAGE,false)) {
//Previously Permission Request was cancelled with 'Dont Ask Again',
// Redirect to Settings after showing Information about why you need the permission
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setTitle("Need Storage Permission");
builder.setMessage("This app needs storage permission.");
builder.setPositiveButton("Grant", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
sentToSettings = true;
Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
Uri uri = Uri.fromParts("package", getPackageName(), null);
intent.setData(uri);
startActivityForResult(intent, REQUEST_PERMISSION_SETTING);
Toast.makeText(getBaseContext(), "Go to Permissions to Grant Storage", Toast.LENGTH_LONG).show();
}
});
builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
builder.show();
} else {
//just request the permission
ActivityCompat.requestPermissions(MainActivity.this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, EXTERNAL_STORAGE_PERMISSION_CONSTANT);
}
SharedPreferences.Editor editor = permissionStatus.edit();
editor.putBoolean(Manifest.permission.WRITE_EXTERNAL_STORAGE,true);
editor.commit();
} else {
//You already have the permission, just go ahead.
proceedAfterPermission();
}
```
|
59,783,094
|
I am running `py.test` 4.3.1 with `python` 3.7.6 on a Mac (Mojave) and I want to get the list of markers for the 'session', once at the beginning of the run.
In `conftest.py` I have tried using the following function:
```
@pytest.fixture(scope="session", autouse=True)
def collab_setup(request):
print([marker.name for marker in request.function.pytestmark])
```
which, however, results in an error
```
E AttributeError: function not available in session-scoped context
```
when I call a dummy test like
```
py.test -s -m "mark1 and mark2" tests/tests_dummy.py
```
It is important to have the list of markers only once for my testing session, as in the end I want to set up something for all the tests in the test suite. That is why I must not call this function more than once per test session.
Is this possible to achieve?
|
2020/01/17
|
[
"https://Stackoverflow.com/questions/59783094",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1581090/"
] |
If `locationName` receives `[1,5]` as input, then the code should look like this:
```
filterData(locationName: number[]) {
return ELEMENT_DATA.filter(object => {
return locationName.includes(object.position);
});
}
```
|
You can use [Array.prototype.filter()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter)
```
ELEMENT_DATA.filter(function (object) {
return locationName.indexOf(object.position) !== -1; // -1 means not present
});
```
or with underscore JS , using the same predicate:
```
_.filter(ELEMENT_DATA, function (object) {
return locationName.indexOf(object.position) !== -1; // -1 means not present
});
```
If you have access to ES6 collections or a polyfill for [Set](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set/has).
Here **locationName** should be of type `Set`:
```
ELEMENT_DATA.filter(object=> locationName.has(object.position))
```
|
14,490,845
|
Python 2.6 on Redhat 6.3
I have a device that saves a 32-bit floating-point value across 2 memory registers, split into most significant word and least significant word.
I need to convert this to a float.
I have been using the following code found on SO and it is similar to code I have seen elsewhere
```
#!/usr/bin/env python
import sys
from ctypes import *
first = sys.argv[1]
second = sys.argv[2]
reading_1 = str(hex(int(first)).lstrip("0x"))
reading_2 = str(hex(int(second)).lstrip("0x"))
sample = reading_1 + reading_2
def convert(s):
i = int(s, 16) # convert from hex to a Python int
cp = pointer(c_int(i)) # make this into a c integer
fp = cast(cp, POINTER(c_float)) # cast the int pointer to a float pointer
return fp.contents.value # dereference the pointer, get the float
print convert(sample)
```
an example of the register values would be ;
register-1;16282 register-2;60597
this produces the resulting float of
1.21034872532
A perfectly cromulent number, however sometimes the memory values are something like;
register-1;16282 register-2;1147
which, using this function results in a float of;
1.46726675314e-36
which is a fantastically small number and not a number that seems to be correct. This device should be producing readings around the 1.2, 1.3 range.
What I am trying to work out is if the device is throwing bogus values or whether the values I am getting are correct but the function I am using is not properly able to convert them.
Also is there a better way to do this, like with numpy or something of that nature?
I will hold my hand up and say that I have just copied this code from examples on line and I have very little understanding of how it works, however it seemed to work in the test cases that I had available to me at the time.
Thank you.
|
2013/01/23
|
[
"https://Stackoverflow.com/questions/14490845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1635823/"
] |
If you have the raw bytes (e.g. read from memory, from file, over the network, ...) you can use `struct` for this:
```
>>> import struct
>>> struct.unpack('>f', '\x3f\x9a\xec\xb5')[0]
1.2103487253189087
```
Here, `\x3f\x9a\xec\xb5` are your input registers, 16282 (hex 0x3f9a) and 60597 (hex 0xecb5) expressed as bytes in a string. The `>` is the byte order mark.
So depending how you get the register values, you may be able to use this method (e.g. by converting your input integers to byte strings). You can use `struct` for this, too; this is your second example:
```
>>> raw = struct.pack('>HH', 16282, 1147) # from two unsigned shorts
>>> struct.unpack('>f', raw)[0] # to one float
1.2032617330551147
```
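For the register use-case, the two steps can be wrapped in a small helper (the function name is illustrative):

```
import struct

def regs_to_float(msw, lsw):
    """Combine a most/least significant 16-bit register pair into a
    big-endian IEEE-754 single-precision float."""
    raw = struct.pack('>HH', msw, lsw)   # two unsigned shorts -> 4 bytes
    return struct.unpack('>f', raw)[0]   # 4 bytes -> one float

print(regs_to_float(16282, 60597))  # ~1.2103487253189087
print(regs_to_float(16282, 1147))   # ~1.2032617330551147
```

Note that `struct.pack` zero-pads each word to exactly 16 bits, which avoids the padding bug you can hit when concatenating `hex()` strings by hand.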
|
The way you've converting the two `int`s makes implicit assumptions about [endianness](http://en.wikipedia.org/wiki/Endianness) that I believe are wrong.
So, let's back up a step. You know that the first argument is the most significant word, and the second is the least significant word. So, rather than try to figure out how to combine them into a hex string in the appropriate way, let's just do this:
```
import struct
import sys
first = sys.argv[1]
second = sys.argv[2]
sample = int(first) << 16 | int(second)
```
Now we can just convert like this:
```
def convert(i):
s = struct.pack('=i', i)
return struct.unpack('=f', s)[0]
```
And if I try it on your inputs:
```
$ python floatify.py 16282 60597
1.21034872532
$ python floatify.py 16282 1147
1.20326173306
```
|
58,770,519
|
How to do this
```
c++ -> Python -> c++
^ |
| |
-----------------
```
1. C++ app is hosting python.
2. Python creates a class, which is actually a wrapping to c/c++ object
3. How to get access from hosting c++ to c/c++ pointer of this object created by python?
**Example with code:**
Imagine I have a C++ class wrapped for python (e.g. using boost.python)
```
// foo.cpp
#include <boost/python.hpp>
struct Cat {
    Cat (int fur=4): _fur{fur} { }
int get_fur() const { return _fur; }
private:
int _fur;
};
BOOST_PYTHON_MODULE(foo) {
using namespace boost::python;
class_<Cat>("Cat", init<int>())
        .def("get_fur", &Cat::get_fur, "Density of cat's fur");
}
```
Now **I'm hosting Python in C++**. A Python script creates a **Cat** => a C++ Cat instance is created underneath. How do I get a pointer to the C++ instance of Cat from the hosting C++ code (from C++ to C++)?
```
#include <Python.h>
int
main(int argc, char *argv[])
{
Py_SetProgramName(argv[0]); /* optional but recommended */
Py_Initialize();
PyRun_SimpleString("from cat import Cat \n"
"cat = Cat(10) \n");
// WHAT to do here to get pointer to C++ instance of Cat
... ??? ...
    std::cout << "Cat's fur: " << cat->get_fur() << std::endl;
Py_Finalize();
return 0;
}
```
**Real application**
The real problem is this: we have a c++ framework which has pretty complex initialization and configuration phase where performance is not critical; and then processing phase, where performance is everything. There is a python wrapping for the framework. Defining things in python is very convenient but running from python is still slower than pure c++ code. It is tempting for many reasons to do this configuration/initialization phase in python, get pointers to underneath C++ objects and then run in "pure c++ mode". It would be easy if we could write everything from scratch, but we have already pretty much defined (30 years old) c++ framework.
|
2019/11/08
|
[
"https://Stackoverflow.com/questions/58770519",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/548894/"
] |
```
#include <Python.h>
#include <boost/python.hpp>
using namespace boost::python;
int main(int argc, char *argv[])
{
Py_SetProgramName(argv[0]);
Py_Initialize();
object module = import("__main__");
object space = module.attr("__dict__");
exec("from cat import Cat \n"
"cat = Cat(10) \n",
space);
Cat& cat = extract<Cat&>(space["cat"]);
std::cout<<"Cat's fur: "<< cat.get_fur() <<"\n";
//or:
Cat& cat2 = extract<Cat&>(space.attr("cat"));
std::cout<<"Cat's fur: "<< cat2.get_fur() <<"\n";
//or:
    object result = eval("Cat(10)", space); // eval takes an expression, not an assignment
Cat& cat3 = extract<Cat&>(result);
std::cout<<"Cat's fur: "<< cat3.get_fur() <<"\n";
Py_Finalize();
return 0;
}
```
|
In case the wrapper is open source, use the wrapper's Python object struct from C++. Cast the PyObject \* to that struct, which should have a PyObject as its first member iirc, and simply access the pointer to the C++ instance.
Make sure that the instance is not deleted while you're using it by keeping the wrapper instance around in Python. When it is released, it will probably delete the wrapped C++ instance you now still are holding a pointer to.
And keep in mind that this approach is risky and fragile, as it depends on implementation details of the wrapper that might change in future versions.
Starting point: [cpython/object.h](https://github.com/python/cpython/blob/master/Include/object.h)
|
6,124,701
|
I feel like this is simple, but I just don't know enough about python to do it correctly.
I have two files:
1. File with lines listing an id number and whether that id is used. Format is 'id, isUsed'.
2. File with rules containing one rule for each id.
So what I want to do is to parse through the file with id-used pairs and then based on that information, I will find the corresponding rule in the second file and then comment or un-comment the rule based on if that rule is used.
Is there an easy way to search through the second file for the rule I am looking for instead of searching it line by line every time? Also, do I have to re-write the file every time I change it?
Here is what I have so far I don't really know what the best way to implement modifyRulesFile():
```
def editRulesFile(pairFile, ruleFile):
pairFd = open(pairFile, 'r')
    ruleFd = open(ruleFile, 'r+')
    for line in pairFd.readlines():
id,isUsed = line.split(',')
modifyRulesFile(ruleFd, id, isUsed)
def modifyRulesFile(fd, id, isUsed):
    for line in fd.readlines():
# Find line with id in it and add a comment or remove comment based on isUsed
```
|
2011/05/25
|
[
"https://Stackoverflow.com/questions/6124701",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/240522/"
] |
I suggest you read the rules file into a dictionary (id -> rule). Then, as you read the config file, write out the corresponding rule (including a comment if you need to).
some pseudocode:
```
rules = {}
for id, rule in read_rules_file():
rules[id] = rule
for id, isUsed in read_pairs_file():
if isUsed:
write_rule(id, rules[id])
else:
write_commented_rule(id, rules[id])
```
This way, you will pass through each file only once. If the rules file gets very long, you might run out of memory, but, well, that normally takes a long time to happen!
You can use generators to avoid keeping all the pairs in memory at once:
```
def read_pairs_file():
    pairFd = open(pairFile, 'r')
    for line in pairFd.readlines():
        id, isUsed = line.split(',')
        yield (id, isUsed)
    pairFd.close()
```
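A runnable sketch of the pseudocode above; the pair-file format (`id,isUsed`), the rule-file format (`id rule-text`) and the `#` comment marker are all assumptions:

```
def read_pairs_file(pair_path):
    # Each line: "id,isUsed" where isUsed is "1" or "0" (assumed format)
    with open(pair_path) as f:
        for line in f:
            id_, is_used = line.strip().split(',')
            yield id_, is_used == '1'

def read_rules_file(rule_path):
    # Each line: "id rule-text" (assumed format)
    with open(rule_path) as f:
        for line in f:
            id_, rule = line.strip().split(' ', 1)
            yield id_, rule

def rewrite_rules(pair_path, rule_path, out_path):
    # One pass over each file: rules go into a dict keyed by id,
    # then each pair decides whether its rule is written commented.
    rules = dict(read_rules_file(rule_path))
    with open(out_path, 'w') as out:
        for id_, is_used in read_pairs_file(pair_path):
            prefix = '' if is_used else '# '
            out.write('%s%s %s\n' % (prefix, id_, rules[id_]))
```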
|
I don't know why I didn't think of this before, but there is another way to do this.
First, you read which rules should be used (or not used) into memory, I stored it into a dictionary.
```
def readRulesIntoMemory(fileName):
    rules = {}
    # Open csv file with rule id, isUsed pairs
    fd = open(fileName, 'r')
    if fd:
        for line in fd.readlines():
            id, isUsed = line.split(',')
            rules[id] = isUsed
        fd.close()
    return rules
```
Then while reading the list of current rules in the other file, write your changes to a temporary file.
```
def createTemporaryRulesFile(temporaryFileName, rulesFileName, rules):
# Open current rules file for reading.
rulesFd = open(rulesFileName, 'r')
if not rulesFd:
return False
# Open temporary file for writing
tempFd = open(temporaryFileName, 'w')
if not tempFd:
return False
# Iterate through each current rule.
for line in rulesFd.readlines():
id = getIdFromLine(line)
isCommented = True # Default to commenting out rule
# If rule's id is was in csv file from earlier, save whether we comment
# the line or not.
if id in rules:
isCommented = rules[id]
if isCommented:
writeCommentedLine(tempFd, line)
else:
writeUncommentedLine(tempFd, line)
return True
```
Now we can copy the new temp file over the original if we want to.
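If you do want to copy the temp file over the original, `os.replace` does it in one atomic step (the file names here are just placeholders):

```
import os

def replace_rules_file(temporary_file_name, rules_file_name):
    # Atomically replace the original rules file with the temp file;
    # on both POSIX and Windows the destination is overwritten in one step.
    os.replace(temporary_file_name, rules_file_name)
```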
|
63,749,945
|
I'm a beginner at Python and I made this random date generator which should produce year-month-day output. Months 4, 6, 9 and 11 have 30 days, all the others 31.
But I'm having a problem where February in a leap year still generates day=30, despite the if and elif having the condition that M must be 2.
```
import random
import calendar
for i in range(500):
Y = rand.randint(0, 170)
M = rand.randint(1, 12)
if calendar.isleap(G+1850) == True and M == 2:
D = rand.randint(1, 28)
elif calendar.isleap(G+1850) == False and M == 2:
D=rand.randint(1, 29)
if M == 4 or M == 6 or M == 9 or M == 11:
D=rand.randint(1, 30)
else:
D=rand.randint(1, 31)
```
|
2020/09/05
|
[
"https://Stackoverflow.com/questions/63749945",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14224115/"
] |
Several problems mentioned in comments:
* you are using `rand` instead of `random`
* you are using `G` when presumably it should be `Y`
See a refactored code, rewriting the `if` statements to first test `M` then test `Y`.
```
import random
import calendar
for i in range(500):
Y = random.randint(0, 170)
M = random.randint(1, 12)
    if M == 2:
        if calendar.isleap(Y + 1850):
            Dmax = 29  # leap-year February has 29 days
        else:
            Dmax = 28
elif M in [4, 6, 9, 11]:
Dmax = 30
else:
Dmax = 31
D = random.randint(1, Dmax)
print(Y, M, D)
```
---
A more pythonic way would be to use `timedelta` to create a random date from the origin.
```
import random
from datetime import datetime, timedelta
def random_date(start: datetime, end: datetime):
days = (end - start).days
return start + timedelta(days=random.randint(0, days))
start = datetime(1850, 1, 1)
end = datetime(2020, 12, 31)
for i in range(500):
print(random_date(start, end))
```
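An alternative that avoids both the month table and the explicit leap-year branch: `calendar.monthrange(year, month)` returns a `(first_weekday, days_in_month)` tuple, so the day count can be looked up directly:

```
import calendar
import random

def random_day(year, month):
    # monthrange handles 28/29/30/31-day months, including leap years
    days_in_month = calendar.monthrange(year, month)[1]
    return random.randint(1, days_in_month)

print(calendar.monthrange(2020, 2)[1])  # 29 (leap year)
print(calendar.monthrange(2019, 2)[1])  # 28
```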
|
The final else-block is also executed when `M == 2` and overwrites `D`.
A simple solution is to reorder the two if-parts:
```
import random as rand
import calendar
for i in range(500):
G = rand.randint(0, 170)
M = rand.randint(1, 12)
if M == 4 or M == 6 or M == 9 or M == 11:
D=rand.randint(1, 30)
else:
D=rand.randint(1, 31)
    if calendar.isleap(G+1850) == True and M == 2:
        D = rand.randint(1, 29)  # leap-year February has 29 days
    elif calendar.isleap(G+1850) == False and M == 2:
        D = rand.randint(1, 28)
```
|
6,132,423
|
I was trying to install SCRAPY and play with it.
The tutorial says to run this:
```
scrapy startproject tutorial
```
Can you please break this down to help me understand it. I have various releases of Python on my Windows 7 machine for various conflicting projects, so when I installed Scrapy with their .exe, it installed it in c:\Python26\_32bit directory, which is okay. But I don't have any one version of Python in my path.
So I tried:
```
\python26_32bit\python.exe scrapy startproject tutorial
```
and I get the error:
```
\python26_32bit\python.exe: can't open file 'scrapy': [Errno 2] No such file or directory.
```
I do see scrapy installed here: c:\Python26\_32bit\Lib\site-packages\scrapy
I cannot find any file called scrapy.py, so what exactly is "scrapy" in Python terminology, a lib, a site-package, a program, ?? and how do I change the sample above to run?
I'm a little more used to Python in Google App Engine environment, so running on my local machine is often more challenging and foreign to me.
|
2011/05/26
|
[
"https://Stackoverflow.com/questions/6132423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/160245/"
] |
scrapy is a batch file which executes a Python script called "scrapy", so you need to add that file's directory to your PATH environment variable.
If that still does not work, make a "scrapy.py" file with the content
```
from scrapy.cmdline import execute
execute()
```
and run `\python26_32bit\python.exe scrapy.py startproject tutorial`
|
Try
```
C:\Python26_32bit\Scripts\Scrapy startproject tutorial
```
or
add `C:\Python26_32bit\Scripts` to your path
|
6,132,423
|
I was trying to install SCRAPY and play with it.
The tutorial says to run this:
```
scrapy startproject tutorial
```
Can you please break this down to help me understand it. I have various releases of Python on my Windows 7 machine for various conflicting projects, so when I installed Scrapy with their .exe, it installed it in c:\Python26\_32bit directory, which is okay. But I don't have any one version of Python in my path.
So I tried:
```
\python26_32bit\python.exe scrapy startproject tutorial
```
and I get the error:
```
\python26_32bit\python.exe: can't open file 'scrapy': [Errno 2] No such file or directory.
```
I do see scrapy installed here: c:\Python26\_32bit\Lib\site-packages\scrapy
I cannot find any file called scrapy.py, so what exactly is "scrapy" in Python terminology, a lib, a site-package, a program, ?? and how do I change the sample above to run?
I'm a little more used to Python in Google App Engine environment, so running on my local machine is often more challenging and foreign to me.
|
2011/05/26
|
[
"https://Stackoverflow.com/questions/6132423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/160245/"
] |
scrapy is a batch file which execute a python file called "scrapy", so you need to add the file "scrapy"'s path to your PATH environment.
if that is still not work, make "scrapy.py" file with content
```
from scrapy.cmdline import execute
execute()
```
and run `\python26_32bit\python.exe scrapy.py startproject tutorial`
|
I ran across this error with the following setup: Python installed on Windows. Cygwin (babun) installed. Used `pip install Scrapy` from the Windows installation (Scrapy now in C:\Python27\Lib\site-packages\scrapy). Wanted to use Scrapy from within babun. Got the same error as you. What you can do:
In your .bashrc/.zshrc/etc, add the following:
`alias scrapy='python.exe -mscrapy.cmdline'`
I can now run scrapy inside babun without any problems.
Note: I also had to run `pip install service_identity` manually.
|
6,132,423
|
I was trying to install SCRAPY and play with it.
The tutorial says to run this:
```
scrapy startproject tutorial
```
Can you please break this down to help me understand it. I have various releases of Python on my Windows 7 machine for various conflicting projects, so when I installed Scrapy with their .exe, it installed it in c:\Python26\_32bit directory, which is okay. But I don't have any one version of Python in my path.
So I tried:
```
\python26_32bit\python.exe scrapy startproject tutorial
```
and I get the error:
```
\python26_32bit\python.exe: can't open file 'scrapy': [Errno 2] No such file or directory.
```
I do see scrapy installed here: c:\Python26\_32bit\Lib\site-packages\scrapy
I cannot find any file called scrapy.py, so what exactly is "scrapy" in Python terminology, a lib, a site-package, a program, ?? and how do I change the sample above to run?
I'm a little more used to Python in Google App Engine environment, so running on my local machine is often more challenging and foreign to me.
|
2011/05/26
|
[
"https://Stackoverflow.com/questions/6132423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/160245/"
] |
Try
```
C:\Python26_32bit\Scripts\Scrapy startproject tutorial
```
or
add `C:\Python26_32bit\Scripts` to your path
|
I ran accross this error with the following setup: Python installed on Windows. Cygwin (babun) installed. Used `pip install Scrapy` from the Windows installation (Scrapy now in C:\Python27\Lib\site-packages\scrapy). Wanted to use Scrapy from within babun. Got the same error as you. What you can do:
In your .bashrc/.zshrc/etc, add the following:
`alias scrapy='python.exe -mscrapy.cmdline'`
I can now run scrapy inside babun without any problems.
Note: I also had to run `pip install service_identity` manually.
|
70,058,771
|
I'm trying to use Python to determine the continued fraction of pi by following the Stern-Brocot tree. It's simple: if my estimate of pi is too high, take a left; if my estimate is too low, take a right.
I'm using `mpmath` to get arbitrary-precision floating-point numbers, as Python doesn't support them natively, but no matter what I set the decimal precision to using `mp.dps`, the continued fraction generation seems to stop once it hits `245850922/78256779`.
In theory, it should only exit when the fraction is exactly equal to the current estimate of pi. So I tried increasing the decimal precision via `mp.dps`, but execution still halts there.
Have I reached a maximum amount of precision with `mp.dps`, or is my approach inefficient? How can I make the continued fraction generation not cease at `245850922/78256779`?
```
import mpmath as mp
mp.dps = 1000
def eval_stern_seq(seq):
a,b,c,d,m,n=0,1,1,0,1,1
for i in seq:
if i=='L':
c,d=m,n
else:
a,b=m,n
m,n=a+c,b+d
return m,n
seq = ''
while True:
stern_frac = eval_stern_seq(seq)
print(f"\n\ncurrent fraction: {stern_frac[0]}/{stern_frac[1]}")
print("current value: " + mp.nstr(mp.fdiv(stern_frac[0],stern_frac[1]),n=mp.dps))
print("pi (reference): " + mp.nstr(mp.pi,n=mp.dps))
if mp.fdiv(stern_frac[0],stern_frac[1]) > mp.pi:
seq+='L'
elif mp.fdiv(stern_frac[0],stern_frac[1]) < mp.pi:
seq+='R'
else:
break
input("\n\t(press enter to continue generation...)")
```
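One thing worth checking: with `import mpmath as mp`, the assignment `mp.dps = 1000` only creates a new attribute on the *module* object; the working precision normally lives on the context, i.e. `mp.mp.dps = 1000`, so the comparisons may still be running at the default ~15 significant digits, which is roughly where `245850922/78256779` agrees with pi. Independently of that, the Stern-Brocot walk can be done with exact rationals from the standard library, which removes precision from the picture entirely. A sketch (not the original code; the 30-digit value of pi is hardcoded for illustration):

```python
from fractions import Fraction

# Pi to 30 decimal digits as an exact Fraction (hardcoded for illustration).
PI = Fraction(314159265358979323846264338327, 10**29)

def stern_brocot_convergents(target, steps):
    """Walk the Stern-Brocot tree toward `target` using exact mediants."""
    a, b = 0, 1          # left bound  a/b < target
    c, d = 1, 0          # right bound c/d > target (1/0 plays the role of infinity)
    path = []
    for _ in range(steps):
        m, n = a + c, b + d              # mediant of the two bounds
        path.append((m, n))
        if Fraction(m, n) < target:
            a, b = m, n                  # mediant too small: tighten left bound
        else:
            c, d = m, n                  # mediant too large: tighten right bound
    return path

path = stern_brocot_convergents(PI, 40)
assert (22, 7) in path        # the classic approximation appears on the walk
assert (355, 113) in path     # and so does 355/113
```

With `Fraction` the equality test against a rational target is exact, so the walk never stalls for precision reasons; against an irrational target it simply never terminates, which is the expected behavior for pi.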
|
2021/11/21
|
[
"https://Stackoverflow.com/questions/70058771",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16155472/"
] |
It never fails: as soon as I post a question, I find the answer. For anyone else looking for something similar:
The first bracket matches all slashes `[/]`
The parenthesis capture the group, in this case the group of numbers `([0-9])`
The `[0-9]` searches the range of numbers between 0 and 9
`{8}` is the quantifier, it's there so we look for a group of 8 numbers
That said, the expression I needed was `[/]([0-9]){8}[/]`, which will select all groups of 8 numbers between slashes, like:
https://cdn.mydomain.com/wp-content/uploads/2020/05**/20125258/**image1.jpg
https://cdn.mydomain.com/wp-content/uploads/2021/10**/13440323/**image-ex2.jpg
https://cdn.mydomain.com/wp-content/uploads/2012/01**/92383422/**my-image3.jpg
For what it's worth, this site helped me a lot with writing this and testing it <https://regexr.com/>
|
Use
```php
/\d{8}/
```
See [regex proof](https://regex101.com/r/sPLT7b/1).
**EXPLANATION**
```php
--------------------------------------------------------------------------------
/ '/'
--------------------------------------------------------------------------------
\d{8} digits (0-9) (8 times)
--------------------------------------------------------------------------------
/ '/'
```
If you would like to exclude slashes:
```
(?<=/)\d{8}(?=/)
```
See [another regex proof](https://regex101.com/r/sPLT7b/2).
**EXPLANATION**
```php
--------------------------------------------------------------------------------
(?<= look behind to see if there is:
--------------------------------------------------------------------------------
/ '/'
--------------------------------------------------------------------------------
) end of look-behind
--------------------------------------------------------------------------------
\d{8} digits (0-9) (8 times)
--------------------------------------------------------------------------------
(?= look ahead to see if there is:
--------------------------------------------------------------------------------
/ '/'
--------------------------------------------------------------------------------
) end of look-ahead
```
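A quick way to sanity-check both patterns from Python (the hostnames below are made up):

```python
import re

# Made-up URLs in the same shape as the ones in the answer.
urls = [
    "https://cdn.example.com/wp-content/uploads/2020/05/20125258/image1.jpg",
    "https://cdn.example.com/wp-content/uploads/2021/10/13440323/image-ex2.jpg",
]

# Pattern with the surrounding slashes included in the match:
with_slashes = [re.search(r"/\d{8}/", u).group() for u in urls]
assert with_slashes == ["/20125258/", "/13440323/"]

# Lookarounds keep the slashes out of the matched text:
digits_only = [re.search(r"(?<=/)\d{8}(?=/)", u).group() for u in urls]
assert digits_only == ["20125258", "13440323"]
```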
|
71,857,414
|
**The error:**
```
from asyncio.windows_events import NULL
  File "/app/.heroku/python/lib/python3.10/asyncio/windows_events.py", line 6, in <module>
    raise ImportError('win32 only')
ImportError: win32 only
```
**Please, how can I fix this?**
|
2022/04/13
|
[
"https://Stackoverflow.com/questions/71857414",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18793327/"
] |
I had the same error while trying to deploy:
```
File "/tmp/build_4a1c8563/base/models.py", line 1, in <module>
    from asyncio.windows_events import NULL
File "/app/.heroku/python/lib/python3.9/asyncio/windows_events.py", line 6, in <module>
raise ImportError('win32 only')
```
I deleted the `from asyncio.windows_events import NULL` line in models.py and the issue was solved. I wasn't even using that module...
|
I had the same error while deploying: the IDE imported the `from asyncio.windows_events import NULL` line automatically while you were typing `NULL`.
Just delete this line:
```
from asyncio.windows_events import NULL
```
|
42,164,772
|
I can't manage to get summaries working with the Estimator API of TensorFlow.
The Estimator class is very useful for many reasons: I have already implemented my own classes which are really similar, but I am trying to switch to this one.
Here is the code sample:
```
import tensorflow as tf
import tensorflow.contrib.layers as layers
import tensorflow.contrib.learn as learn
import numpy as np
# To reproduce the error: docker run --rm -w /algo -v $(pwd):/algo tensorflow/tensorflow bash -c "python sample.py"
def model_fn(x, y, mode):
logits = layers.fully_connected(x, 12, scope="dense-1")
logits = layers.fully_connected(logits, 56, scope="dense-2")
logits = layers.fully_connected(logits, 4, scope="dense-3")
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y), name="xentropy")
return {"predictions":logits}, loss, tf.train.AdamOptimizer(0.001).minimize(loss)
def input_fun():
""" To be completed for a 4 classes classification problem """
feature = tf.constant(np.random.rand(100,10))
labels = tf.constant(np.random.random_integers(0,3, size=(100,)))
return feature, labels
estimator = learn.Estimator(model_fn=model_fn, )
trainingConfig = tf.contrib.learn.RunConfig(save_checkpoints_secs=60)
estimator = learn.Estimator(model_fn=model_fn, model_dir="./tmp", config=trainingConfig)
# Works
estimator.fit(input_fn=input_fun, steps=2)
# The following code does not work
# Can't initialize saver
# saver = tf.train.Saver(max_to_keep=10) # Error: No variables to save
# The following fails because I am missing a saver... :(
hooks=[
tf.train.LoggingTensorHook(["xentropy"], every_n_iter=100),
tf.train.CheckpointSaverHook("./tmp", save_steps=1000, checkpoint_basename='model.ckpt'),
tf.train.StepCounterHook(every_n_steps=100, output_dir="./tmp"),
tf.train.SummarySaverHook(save_steps=100, output_dir="./tmp"),
]
estimator.fit(input_fn=input_fun, steps=2, monitors=hooks)
```
As you can see, I can create an Estimator and use it, but I can't manage to add hooks to the fitting process.
The logging hooks works just fine but the others require both **tensors** and a **saver** which I can't provide.
The tensors are defined in the model function, thus I can't pass them to the **SummaryHook** and the **Saver** can't be initialized because there is no tensor to save...
Is there a solution to my problem? (I am guessing yes but there is a lack of documentation of this part in the tensorflow documentation)
* How can I initialize my **saver**? Or should I use other objects such as *Scaffold*?
* How can I pass **summaries** to the **SummaryHook** since they are defined in my model function?
Thanks in advance.
*PS: I have seen the DNNClassifier API but I want to use the estimator API for Convolutional Nets and others. I need to create summaries for any estimator.*
|
2017/02/10
|
[
"https://Stackoverflow.com/questions/42164772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5184894/"
] |
The intended use case is that you let the Estimator save summaries for you. There are options in [RunConfig](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/estimators/run_config.py#L182) for configuring summary writing. RunConfigs get passed when [constructing the Estimator](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/estimators/estimator.py#L347).
|
Just have `tf.summary.scalar("loss", loss)` in the `model_fn`, and run the code without `summary_hook`. The loss is recorded and shown in the tensorboard.
---
See also:
* [Tensorflow - Using tf.summary with 1.2 Estimator API](https://stackoverflow.com/questions/45086109/tensorflow-using-tf-summary-with-1-2-estimator-api)
|
24,021,831
|
I'm a Transifex user, I need to retrieve my dashboard page with the list of all the projects of my organization.
that is, the page I see when I login: <https://www.transifex.com/organization/(my_organization_name)/dashboard>
I can access Transifex API with this code:
```
import urllib.request as url
usr = 'myusername'
pwd = 'mypassword'
def GetUrl(Tx_url):
auth_handler = url.HTTPBasicAuthHandler()
auth_handler.add_password(realm='Transifex API',
uri=Tx_url,
user=usr,
passwd=pwd)
opener = url.build_opener(auth_handler)
url.install_opener(opener)
f = url.urlopen(Tx_url)
return f.read().decode("utf-8")
```
everything is ok, but there's no API call to get all the projects of my organization.
the only way is to get that page html, and parse it, but if I use this code, I get the login page.
This works OK with google.com, but I get an error with www.transifex.com or www.transifex.com/organization/(my_organization_name)/dashboard
**[Python, HTTPS GET with basic authentication](https://stackoverflow.com/questions/6999565/python-https-get-with-basic-authentication?rq=1)**
I'm new to Python; I need some code with Python 3 and only the standard library.
Thanks for any help.
|
2014/06/03
|
[
"https://Stackoverflow.com/questions/24021831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3704129/"
] |
The call to

> /projects/

returns your projects along with all the public projects that you can have access to (like what you said). You can search for the ones that you need by modifying the call to something like:

> <https://www.transifex.com/api/2/projects/?start=1&end=6>

Doing so, the number of projects returned will be restricted.

For now, maybe it would be more convenient for you, if you don't have many projects, to use this call:

> /project/project_slug

and fetch each one separately.
|
Transifex comes with an API, and you can use it to fetch all the projects you have.
I think what you need is [this](http://docs.transifex.com/developer/api/projects) GET request on projects. It returns a list of (slug, name, description, source_language_code) for all projects that you have access to, in JSON format.
Since you are familiar with Python, you could use the [requests](http://www.django-rest-framework.org/api-guide/requests) library to perform the same actions in a much easier and more readable way.
You will just need to do something like this:
```
import requests

AUTH = ('yourusername', 'yourpassword')
url = 'https://www.transifex.com/api/2/projects'
headers = {'Content-type': 'application/json'}

response = requests.get(url, headers=headers, auth=AUTH)
```
I hope I've helped.
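Since the question asks for the standard library only, a similar request can be made with `urllib.request` by sending the Authorization header preemptively; some servers redirect to a login page instead of issuing the 401 challenge that `HTTPBasicAuthHandler` waits for. A sketch (the credentials below are the RFC 7617 sample values, not real ones):

```python
import base64
import urllib.request

def basic_auth_header(user, password):
    """Build the Authorization header by hand, so it is sent on the first request."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode("ascii")
    return {"Authorization": "Basic " + creds}

def fetch(url, user, password):
    req = urllib.request.Request(url, headers=basic_auth_header(user, password))
    with urllib.request.urlopen(req) as resp:   # network call, not exercised here
        return resp.read().decode("utf-8")

# RFC 7617's sample credentials encode to this well-known value:
assert basic_auth_header("Aladdin", "open sesame") == {
    "Authorization": "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="
}
```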
|
60,202,828
|
I have been learning about the trie structure through Python. What is a little bit different about this trie compared to other tries is that we are trying to keep a counter in every node in order to do autocomplete (that is the final goal of the project). So far, I decided that having a recursive function to put each letter into a list of dictionaries would be a good idea.
Final Product (Trie):
```
Trie = {"value": "*start",
        "count": 1,
        "children": [{"value": "t",
                      "count": 1,
                      "children": [{"value": "e",
                                    "count": 1,
                                    "children": [...]}]}]}
```
I know that recursion would be very useful, as it is just adding letters to the list; however, I can't figure out how to construct the basic function or how to refer to the last part of the dictionary without writing out
```
Trie["children"]["children"]["children"]["children"]
```
a bunch of times. Can you guys please give me some ideas as to how to construct the function?
--Thanks
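A minimal recursive sketch of the idea described above (the node layout mirrors the dict shown; the function name and test words are illustrative, not from the original post):

```python
def insert(node, word):
    """Recursively insert `word` into a trie of nested dicts.

    Each node looks like {"value": ch, "count": n, "children": [child, ...]}.
    `count` tracks how many inserted words pass through the node, which is
    the statistic an autocomplete ranking needs.
    """
    node["count"] += 1
    if not word:
        return
    head, rest = word[0], word[1:]
    for child in node["children"]:
        if child["value"] == head:        # letter already present: recurse into it
            insert(child, rest)
            return
    child = {"value": head, "count": 0, "children": []}
    node["children"].append(child)        # letter missing: create the node first
    insert(child, rest)

root = {"value": "*start", "count": 0, "children": []}
for w in ("tea", "ten", "to"):
    insert(root, w)
assert root["count"] == 3                     # three words inserted
assert root["children"][0]["count"] == 3      # all three pass through "t"
```

The recursion carries the "current node" as an argument, which is what removes the need to spell out `Trie["children"][i]["children"][j]...` chains by hand.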
|
2020/02/13
|
[
"https://Stackoverflow.com/questions/60202828",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11659038/"
] |
Try this
```
mapply(function(x,y){paste(intersect(x,y),collapse=", ")},
strsplit(as.character(df$text),"\\, | "),
strsplit(as.character(df$word),"\\, | "))
[1] "red, green" "red" "blue"
```
|
```
library(tidyverse)
df %>%
mutate(newcol = stringr::str_extract_all(text,gsub(", +","|",word)))
country text word newcol
1 CA paint red green green, red, blue red, green
2 IN painting red red red
3 US painting blue red, blue blue
```
In this case, `newcol` is a list. To make it a string, we can do:
```
df%>%
mutate(newcol = text %>%
str_extract_all(gsub(", +", "|", word)) %>%
invoke(toString, .))
```
with data.table, you could do:
```
df[,id := .I][,newcol := do.call(toString,str_extract_all(text,gsub(', +',"|",word))),
by = id][, id := NULL][]
country text word newcol
1: CA paint red green green, red, blue red, green
2: IN painting red red red
3: US painting blue red, blue blue
```
|
60,202,828
|
I have been learning about the trie structure through Python. What is a little bit different about this trie compared to other tries is that we are trying to keep a counter in every node in order to do autocomplete (that is the final goal of the project). So far, I decided that having a recursive function to put each letter into a list of dictionaries would be a good idea.
Final Product (Trie):
```
Trie = {"value": "*start",
        "count": 1,
        "children": [{"value": "t",
                      "count": 1,
                      "children": [{"value": "e",
                                    "count": 1,
                                    "children": [...]}]}]}
```
I know that recursion would be very useful, as it is just adding letters to the list; however, I can't figure out how to construct the basic function or how to refer to the last part of the dictionary without writing out
```
Trie["children"]["children"]["children"]["children"]
```
a bunch of times. Can you guys please give me some ideas as to how to construct the function?
--Thanks
|
2020/02/13
|
[
"https://Stackoverflow.com/questions/60202828",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11659038/"
] |
Another base R solution using `mapply` + `grep` + `regmatches`, i.e.,
```
df <- within(df, newcol <- mapply(function(x,y) toString(grep(x,y,value = TRUE)),
gsub("\\W+","|",word),
regmatches(text,gregexpr("\\w+",text))))
```
such that
```
> df
country text word newcol
1 CA paint red green green, red, blue red, green
2 IN painting red red red
3 US painting blue red, blue blue
```
|
```
library(tidyverse)
df %>%
mutate(newcol = stringr::str_extract_all(text,gsub(", +","|",word)))
country text word newcol
1 CA paint red green green, red, blue red, green
2 IN painting red red red
3 US painting blue red, blue blue
```
In this case, `newcol` is a list. To make it a string, we can do:
```
df%>%
mutate(newcol = text %>%
str_extract_all(gsub(", +", "|", word)) %>%
invoke(toString, .))
```
with data.table, you could do:
```
df[,id := .I][,newcol := do.call(toString,str_extract_all(text,gsub(', +',"|",word))),
by = id][, id := NULL][]
country text word newcol
1: CA paint red green green, red, blue red, green
2: IN painting red red red
3: US painting blue red, blue blue
```
|
12,794,357
|
I have 2 Python scripts inside my C:\Python32:
1) Tima_guess.py, which looks like this:
```
#Author:Roshan Mehta
#Date :9th October 2012
import random,time,sys
ghost ='''
0000 0 000----
00000-----0-0
----0000---0
'''
guess_taken = 0
print('Hello! What is your name?')
name = input()
light_switch = random.randint(1,12)
print("Well, " + name + ", There are 12 switches and one of them turns on the Light.\nYou just need to guess which one is it in 4 guesses.Purely a luck,but i will help you to choose")
print("Choose a switch,they are marked with numbers from 1-12.\nEnter the switch no.")
while guess_taken < 4:
try:
guess = input()
guess = int(guess)
except:
print("invalid literal,Plese enter an integer next time.")
x = input()
sys.exit(1)
guess_taken = guess_taken + 1
guess_remain = 4 - guess_taken
time.sleep(1)
if guess < light_switch:
print("The Light's switch is on right of your current choice.You have {} more chances to turn on the light.".format(guess_remain))
if guess > light_switch:
print("The Light's switch is on left of your current choice.You have {} more chances to turn on the light.".format(guess_remain))
if guess == light_switch:
print("Good,you are quiet lucky,You have turned on the light in {} chances.".format(guess_taken))
sys.exit(1)
if guess != light_switch:
print("Naah,You don't seems to be lucky enough,The switch was {}.".format(light_switch))
for i in range(3):
time.sleep(2)
print(ghost)
print("The Devil in the room has just killed you....Ha ha ha")
input()
```
2) setup.py, which looks like this:
```
from cx_Freeze import setup, Executable
setup(
name = "Console game",
version = "0.1",
description = "Nothing!",
executables = [Executable("Tima_guess.py")])
```
When I run `python setup.py build` it creates an executable in the build directory inside C:\Python32\build, but when I run Tima_guess.exe it just shows a black screen and closes instantly; I'm not even able to see the message it is throwing.
Please help me to get a standalone executable of my Tima_guess.py game.
Regards.
Following Thomas's suggestion, I explicitly ran Tima_guess.exe from cmd. I get the following error, but I'm still not able to make out what is wrong.
```
c:\Python32\build\exe.win32-3.2>Tima_guess.exe
Traceback (most recent call last):
  File "c:\Python32\lib\site-packages\cx_Freeze\initscripts\Console3.py", line 27, in <module>
exec(code, m.__dict__)
File "Tima_guess.py", line 4, in <module>
File "c:\Python32\lib\random.py", line 39, in <module>
from warnings import warn as _warn
File "C:\Python\32-bit\3.2\lib\warnings.py", line 6, in <module>
File "C:\Python\32-bit\3.2\lib\linecache.py", line 10, in <module>
File "C:\Python\32-bit\3.2\lib\tokenize.py", line 28, in <module>
ImportError: No module named re
c:\Python32\build\exe.win32-3.2>
```
|
2012/10/09
|
[
"https://Stackoverflow.com/questions/12794357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1716525/"
] |
After building, add re.pyc to the library.zip file.
To get re.pyc, all you need to do is run re.py successfully, then open `__pycache__` folder, then you will see a file like re.cpython-32.pyc, rename it to re.pyc and voila!
|
**setup.py**
```
from cx_Freeze import setup, Executable
build_exe_options = {"includes": ["re"]}
setup(
name = "Console game",
version = "0.1",
description = "Nothing!",
options = {"build_exe": build_exe_options},
executables = [Executable("Tima_guess.py")])
```
|
55,577,991
|
I am trying to install `fiona=1.6` but I get the following error
```
conda install fiona=1.6
WARNING: The conda.compat module is deprecated and will be removed in a future release.
Collecting package metadata: done
Solving environment: -
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
  - conda-forge/noarch::flask-cors==3.0.7=py_0
  - conda-forge/osx-64::blaze==0.11.3=py36_0
  - conda-forge/noarch::flask==1.0.2=py_2
failed
PackagesNotFoundError: The following packages are not available from current channels:
- fiona=1.6 -> gdal==1.11.4
Current channels:
- https://conda.anaconda.org/conda-forge/osx-64
- https://conda.anaconda.org/conda-forge/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
```
If I try to install `gdal==1.11.4`, I get the following
```
conda install -c conda-forge gdal=1.11.4
WARNING: The conda.compat module is deprecated and will be removed in a future release.
Collecting package metadata: done
Solving environment: |
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
  - conda-forge/noarch::flask-cors==3.0.7=py_0
  - conda-forge/osx-64::blaze==0.11.3=py36_0
  - conda-forge/noarch::flask==1.0.2=py_2
failed
PackagesNotFoundError: The following packages are not available from current channels:
- gdal=1.11.4
Current channels:
- https://conda.anaconda.org/conda-forge/osx-64
- https://conda.anaconda.org/conda-forge/noarch
- https://repo.anaconda.com/pkgs/main/osx-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/free/osx-64
- https://repo.anaconda.com/pkgs/free/noarch
- https://repo.anaconda.com/pkgs/r/osx-64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
```
This is the result of `conda info`
```
conda info
active environment : base
active env location : /anaconda3
shell level : 1
user config file : /Users/massaro/.condarc
populated config files : /Users/massaro/.condarc
conda version : 4.6.11
conda-build version : 3.17.8
python version : 3.6.8.final.0
base environment : /anaconda3 (writable)
channel URLs : https://conda.anaconda.org/conda-forge/osx-64
https://conda.anaconda.org/conda-forge/noarch
package cache : /anaconda3/pkgs
/Users/massaro/.conda/pkgs
envs directories : /anaconda3/envs
/Users/massaro/.conda/envs
platform : osx-64
user-agent : conda/4.6.11 requests/2.21.0 CPython/3.6.8 Darwin/17.5.0 OSX/10.13.4
UID:GID : 502:20
netrc file : None
```
|
2019/04/08
|
[
"https://Stackoverflow.com/questions/55577991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3590067/"
] |
Python Versions
===============
The [Conda Forge channel only has gdal v1.11.4 for Python 2.7, 3.4, and 3.5](https://anaconda.org/conda-forge/gdal/files?version=1.11.4). You either need to use a newer version of Fiona (current is 1.8) or make a new env that includes one of those older Python versions.
For example,
```
conda create -n fiona_1_6 fiona=1.6 python=3.5
```
Channel `defaults` is Required
==============================
Another issue you face is that you have removed the `defaults` channel from your configuration (as per your `conda info`). It is impossible to install `fiona=1.6` with only the `conda-forge` channel. My recommendation would be to have both `conda-forge` and `defaults` in your configuration, but just set `conda-forge` to have higher priority (if that's what you want). You can do this like so,
```
conda config --append channels defaults
```
If you really don't want to include `defaults`, but just want a temporary workaround, then you can simply run the first command with the `--channel`/`-c` flag
```
conda create -n fiona_1_6 -c conda-forge -c defaults fiona=1.6 python=3.5
```
This will still give `conda-forge` precedence, but allow missing dependencies to be sourced from `defaults`.
Environment File
================
If you have more than just Fiona that you require, it may be cleaner to put together a requirements file, like so
### fiona_1_6.yaml
```
name: fiona_1_6
channels:
- conda-forge
- defaults
dependencies:
- python=3.5
- fiona=1.6
- osmnx
```
Then create the new environment with this:
```
conda env create -f fiona_1_6.yaml
```
|
Doing what the error message told me to,

> To search for alternate channels that may provide the conda package you're looking for, navigate to <https://anaconda.org>

and typing `gdal` in the search box led me to <https://anaconda.org/conda-forge/gdal>, which has this installation instruction:

> `conda install -c conda-forge gdal=1.11.4`

Try that to install the `gdal` dependency, maybe?
|
23,871,680
|
I downloaded the git repo from the official link,
```
git clone git://
```
and I ran `./configure && make && make install`, where `make install` returns this error:
```
LINK(target) /usr/local/bin/node/out/Release/node: Finished
touch /usr/local/bin/node/out/Release/obj.target/node_dtrace_header.stamp
touch /usr/local/bin/node/out/Release/obj.target/node_dtrace_provider.stamp
touch /usr/local/bin/node/out/Release/obj.target/node_dtrace_ustack.stamp
touch /usr/local/bin/node/out/Release/obj.target/node_etw.stamp
touch /usr/local/bin/node/out/Release/obj.target/node_mdb.stamp
touch /usr/local/bin/node/out/Release/obj.target/node_perfctr.stamp
touch /usr/local/bin/node/out/Release/obj.target/specialize_node_d.stamp
make[1]: Leaving directory `/usr/local/bin/node/out'
ln -fs out/Release/node node
#make install
make -C out BUILDTYPE=Release V=1
make[1]: Entering directory `/usr/local/bin/node/out'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/usr/local/bin/node/out'
ln -fs out/Release/node node
/usr/bin/python tools/install.py install '' '/usr/local'
installing /usr/local/bin/node
Traceback (most recent call last):
File "tools/install.py", line 202, in <module>
run(sys.argv[:])
File "tools/install.py", line 197, in run
if cmd == 'install': return files(install)
File "tools/install.py", line 130, in files
action(['out/Release/node'], 'bin/node')
File "tools/install.py", line 79, in install
def install(paths, dst): map(lambda path: try_copy(path, dst), paths)
File "tools/install.py", line 79, in <lambda>
def install(paths, dst): map(lambda path: try_copy(path, dst), paths)
File "tools/install.py", line 70, in try_copy
try_unlink(target_path) # prevent ETXTBSY errors
File "tools/install.py", line 33, in try_unlink
os.unlink(path)
OSError: [Errno 21] Is a directory: '/usr/local/bin/node'
make: *** [install] Error 1
```
I'm really not familiar with this; what is the issue?
I ran the commands as root. When I googled the error, I only found topics about permission problems, but that's not the case here.
|
2014/05/26
|
[
"https://Stackoverflow.com/questions/23871680",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1948292/"
] |
You can use an injection interceptor.
> For EJB 3 Session Beans and Message-Driven Beans, Spring provides a convenient interceptor that resolves Spring 2.5's @Autowired annotation in the EJB component class: org.springframework.ejb.interceptor.SpringBeanAutowiringInterceptor. This interceptor can be applied through an `@Interceptors` annotation in the EJB component class, or through an interceptor-binding XML element in the EJB deployment descriptor.
Code example:
```
@Stateless
@Interceptors(SpringBeanAutowiringInterceptor.class)
public class Foo {
@Autowired
private Boo boo;
}
```
For more info, the reference [18.3.2. EJB 3 injection interceptor](http://docs.spring.io/spring/docs/2.5.x/reference/ejb.html)
If you need to access the EJB from the spring, you can define the bean like the example below in your spring-context.xml configuration
```
<jee:local-slsb id="myComponent" jndi-name="ejb/fooBean"
business-interface="com.Foo"/>
```
You can have more info about it in the section *18.2.2. Accessing local SLSBs* of the refence above.
|
I understand your question as: you have a problem injecting a request-scoped bean into another bean using Spring.
so try this:
```
<bean id="boo" class="Boo" scope="request">
<aop:scoped-proxy/>
</bean>
<bean id="foo" class="Foo">
    <property name="boo" ref="boo" />
</bean>
```
|
13,549,699
|
I wish to mock a class with the following requirements:
* The class has public read/write properties, defined in its `__init__()` method
* The class has a public attribute which is auto-incremented on object creation
* I wish to use `autospec=True`, so the class's API will be strictly checked on calls
A simplified class sample:
```
class MyClass():
    _id = 0
def __init__(self, x=0.0, y=1.0):
self.x = x
self.y = y
self.id = MyClass._id
        MyClass._id += 1
def calc_x_times_y(self):
return self.x*self.y
def calc_x_div_y(self, raise_if_y_not_zero=True):
try:
return self.x/self.y
except ZeroDivisionError:
if raise_if_y_not_zero:
raise ZeroDivisionError
else:
return float('nan')
```
I need for the mock object to behave as the the original object, as far as properties are concerned:
* It should auto-increment the id assigned to each newly-created mock object
* It should allow access to its `x,y` properties
But the mock method calls should be intercepted by the mock, and have its call signature validated
What's the best way to go on about this?
**EDIT**
I've already tried several approaches, including subclassing the `Mock` class and using `attach_mock()` and `mock_add_spec()`, but I always ran into a dead end.
I'm using the standard [mock](http://www.voidspace.org.uk/python/mock/index.html#mock-mocking-and-testing-library) library.
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13549699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/499721/"
] |
Since no answers are coming in, I'll post what worked for me (not necessarily the best approach, but here goes):
I've created a mock factory which creates a `Mock()` object, sets its `id` property using the syntax described [here](http://www.voidspace.org.uk/python/mock/mock.html#mock.PropertyMock), and returns the object:
```
class MyClassMockFactory():
_id = 0
def get_mock_object(self, *args,**kwargs):
mock = Mock(MyClass, autospec = True)
self._attach_mock_property(mock , 'x', kwargs['x'])
self._attach_mock_property(mock , 'y', kwargs['y'])
self._attach_mock_property(mock , 'id', MyClassMockFactory._id)
MyClassMockFactory._id += 1
return mock
def _attach_mock_property(self, mock_object, name, value):
p = PropertyMock(return_value=value)
setattr(type(mock_object), name, p)
```
Now, I can patch the `MyClass()` constructor for my tests:
```
class TestMyClass(TestCase):
mock_factory = MyClassMockFactory()
@patch('MyClass',side_effect=mock_factory.get_mock_object)
    def test_my_class(self, *args):
obj0 = MyClass()
obj1 = MyClass(1.0,2.2)
obj0.calc_x_times_y()
# Assertions
obj0.calc_x_times_y.assert_called_once_with()
        self.assertEqual(obj0.id, 0)
        self.assertEqual(obj1.id, 1)
```
|
Sorry to dig up an old post, but something that would allow you to do precisely what you would like to achieve is to patch `calc_x_times_y` and `calc_x_div_y` and set `autospec=True` there, as opposed to Mocking the creation of the entire class.
Something like:
```
@patch('MyClass.calc_x_times_y')
@patch('MyClass.calc_x_div_y')
test_foo(patched_div, patched_times):
my_class = MyClass() #using real class to define attributes
# ...rest of test
```
|
58,016,261
|
So, i am trying to create a linear functions in python such has `y = x` without using `numpy.linspace()`. In my understanding numpy.linspace() gives you an array which is discontinuous. But to fo
I am trying to find the intersection of `y = x` and a function unsolvable analytically ( such has the one in the picture ) .
Here is my code I don't know how to define x. Is there a way too express y has a simple continuous function?
```
import random as rd
import numpy as np
a = int(input('choose a :'))
eps = abs(float(input('choose epsilon :')))
b = 0
c = 10
x = ??????
y1 = x
y2 = a*(1 - np.exp(x))
z = abs(y2 - y1)
while z > eps :
d = rd.uniform(b,c)
c = d
print(c)
print(y1 , y2 )
```

|
2019/09/19
|
[
"https://Stackoverflow.com/questions/58016261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12091717/"
] |
Since your functions are differentiable, you could use the [Newton-Raphson method](https://en.wikipedia.org/wiki/Newton%27s_method) implemented by `scipy.optimize`:
```
>>> scipy.optimize.newton(lambda x: 1.5*(1-math.exp(-x))-x, 10)
0.8742174657987283
```
Computing the error is very straightforward:
```
>>> def f(x): return 1.5*(1-math.exp(-x))
...
>>> x = scipy.optimize.newton(lambda x: f(x)-x, 10)
>>> error = f(x) - x
>>> x, error
(0.8742174657987283, -4.218847493575595e-15)
```
I've somewhat arbitrarily chosen x0=10 as the starting point. Some care needs to be taken here to make sure the method doesn't converge to x=0, which in your example is also a root.
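If you'd rather avoid SciPy, plain bisection also works; a minimal sketch, where the bracket [0.5, 10] is my own assumption, chosen to exclude the trivial root at x = 0:

```python
import math

def f(x):
    # y2 - y1 for a = 1.5: the intersection is where this hits zero
    return 1.5 * (1 - math.exp(-x)) - x

def bisect(func, lo, hi, eps=1e-10):
    """Root of func in [lo, hi]; func(lo) and func(hi) must differ in sign."""
    assert func(lo) * func(hi) < 0
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if func(lo) * func(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

print(bisect(f, 0.5, 10))  # about 0.8742174658, matching Newton's result
```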
|
“Not solvable analytically” means there is no closed-form solution. In other words, you can't write down a single answer on paper, like a number or equation, circle it, and say “that's my answer.” For some math problems it's impossible to do so. Instead, for these kinds of problems, we can approximate the solution by running simulations and getting values or a graph of what the solution is.
|
58,016,261
|
So, I am trying to create a linear function in Python such as `y = x` without using `numpy.linspace()`. In my understanding numpy.linspace() gives you an array which is discontinuous. But to fo
I am trying to find the intersection of `y = x` and a function unsolvable analytically (such as the one in the picture).
Here is my code. I don't know how to define x. Is there a way to express y as a simple continuous function?
```
import random as rd
import numpy as np
a = int(input('choose a :'))
eps = abs(float(input('choose epsilon :')))
b = 0
c = 10
x = ??????
y1 = x
y2 = a*(1 - np.exp(x))
z = abs(y2 - y1)
while z > eps :
d = rd.uniform(b,c)
c = d
print(c)
print(y1 , y2 )
```

|
2019/09/19
|
[
"https://Stackoverflow.com/questions/58016261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12091717/"
] |
I'm not a mathematician, so perhaps you can explain something to me here, but I don't understand what exactly you mean by "unsolvable analytically".
That's what sympy returns:
```
from sympy import *
x = symbols('x')
a = 1.5
y1 = x
y2 = a*(1 - exp(-x))
print(solve(y1-y2))
# [0.874217465798717]
```
|
“Not solvable analytically” means there is no closed-form solution. In other words, you can't write down a single answer on paper, like a number or equation, circle it, and say “that's my answer.” For some math problems it's impossible to do so. Instead, for these kinds of problems, we can approximate the solution by running simulations and getting values or a graph of what the solution is.
|
15,040,884
|
I want to list all the keys stored in the memcached server.
I googled for the same and found some Python/PHP scripts that claim to list them. I tested them, but they all failed and none gave me the full set of keys. I can see thousands of keys using the telnet command
```
stats items
```
I used a Perl script that uses telnet to list keys, but that failed too. I mean, the script does list keys, but not all of them.
Do I need to reconfigure telnet? Is there any other way?
|
2013/02/23
|
[
"https://Stackoverflow.com/questions/15040884",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1308498/"
] |
Memcached does not provide an API to exhaustively list all keys. "stats items" is as good as it gets to list the first 1M of keys. More info here: <http://www.darkcoding.net/software/memcached-list-all-keys/>
Not sure if that helps you, but Redis (which could be considered a superset of memcached) provides a more comprehensive API for key listing and searching. You might want to give it a try.
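For reference, the approach in that post boils down to running `stats items`, pulling out the slab class ids, and then issuing the undocumented `stats cachedump <slab> <limit>` command per slab. A minimal sketch of the parsing step (the sample output below is illustrative, not from a real server):

```python
import re

def slab_ids(stats_items_output):
    """Extract slab class ids from raw 'stats items' output lines.

    Each id can then be fed to 'stats cachedump <slab> <limit>'
    to list (at most a bounded number of) that slab's keys.
    """
    ids = set()
    for line in stats_items_output.splitlines():
        match = re.match(r"STAT items:(\d+):", line.strip())
        if match:
            ids.add(int(match.group(1)))
    return sorted(ids)

sample = """STAT items:1:number 3
STAT items:1:age 1234
STAT items:5:number 42
END"""
print(slab_ids(sample))  # [1, 5]
```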
|
It you use python-memcached, and would like to export all the items in memcache server, I summerized two methods to the problem in this question: [Export all keys and values from memcached with python-memcache](https://stackoverflow.com/questions/5730276/export-all-keys-and-values-from-memcached-with-python-memcache)
|
3,224,924
|
Is there anything in python that lets me dump out a random object in such a way as to see its underlying data representation?
I am coming from Perl where Data::Dumper does a reasonable job of letting me see how a data structure is laid out. Is there anything that does the same thing in python?
Thanks!
|
2010/07/11
|
[
"https://Stackoverflow.com/questions/3224924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/365530/"
] |
Well `Dumper` in Perl gives you a representation of an object that can be `eval`'d by the interpreter to give you the original object. An object's `repr` in Python tries to do that, and sometimes it's possible. A `dict`'s `repr` or a `str`'s `repr` do this, and some classes like `datetime` and `timedelta` also do this. So `repr` is as much the equivalent of `Dumper` but it's not pretty and doesn't show you the internals of an object. For that you can use `dir` and roll your own printer.
Here's my shot at a printer that would not result in `eval`-able Python code and thus should be used to generate a string of the object instead:
```
def dump(obj):
out = {}
for attr in dir(obj):
out[attr] = getattr(obj, attr)
from pprint import pformat
return pformat(out)
class myclass(object):
foo = 'foo'
def __init__(self):
self.bar = 'bar'
def __str__(self):
return dump(self)
c = myclass()
print c
```
In the above example, I've overridden the object's default `__str__` implementation. `__str__` is what gets called when you try to represent the object as a string, or format it using a string formatting function.
BTW `repr` is what gets shown when you evaluate `obj` at the interactive prompt and what `"%r"` formatting uses; it invokes the `__repr__` method on that object (plain `print obj` uses `__str__`). See [the Python documentation of `__repr__`](http://docs.python.org/reference/datamodel.html#object.__repr__) for more information on how to control the formatting of objects.
```
# this would print the object's __repr__
print "%r" % c
# this would print the object's __str__
print "%s" % c
```
---
The output from the above code was
```
{'__class__': <class '__main__.myclass'>,
'__delattr__': <method-wrapper '__delattr__' of myclass object at 0xb76deb0c>,
'__dict__': {'bar': 'bar'},
'__doc__': None,
'__format__': <built-in method __format__ of myclass object at 0xb76deb0c>,
'__getattribute__': <method-wrapper '__getattribute__' of myclass object at 0xb76deb0c>,
'__hash__': <method-wrapper '__hash__' of myclass object at 0xb76deb0c>,
'__init__': <bound method myclass.__init__ of <__main__.myclass object at 0xb76deb0c>>,
'__module__': '__main__',
'__new__': <built-in method __new__ of type object at 0x82358a0>,
'__reduce__': <built-in method __reduce__ of myclass object at 0xb76deb0c>,
'__reduce_ex__': <built-in method __reduce_ex__ of myclass object at 0xb76deb0c>,
'__repr__': <method-wrapper '__repr__' of myclass object at 0xb76deb0c>,
'__setattr__': <method-wrapper '__setattr__' of myclass object at 0xb76deb0c>,
'__sizeof__': <built-in method __sizeof__ of myclass object at 0xb76deb0c>,
'__str__': <bound method myclass.__str__ of <__main__.myclass object at 0xb76deb0c>>,
'__subclasshook__': <built-in method __subclasshook__ of type object at 0x896ad34>,
'__weakref__': None,
'bar': 'bar',
'foo': 'foo'}
```
|
After much searching about for this exactly myself, I came across this Dumper equivalent which I typically import now. <https://salmon-protocol.googlecode.com/svn-history/r24/trunk/salmon-playground/dumper.py>
|
9,595,009
|
What is the difference between [`warnings.warn()`](https://docs.python.org/library/warnings.html#warnings.warn) and [`logging.warn()`](https://docs.python.org/library/logging.html#logging.Logger.warning) in terms of what they do and how they should be used?
|
2012/03/07
|
[
"https://Stackoverflow.com/questions/9595009",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84952/"
] |
I agree with the other answer -- `logging` is for logging and `warning` is for warning -- but I'd like to add more detail.
Here is a tutorial-style HOWTO taking you through the steps in using the `logging` module.
<https://docs.python.org/3/howto/logging.html>
It directly answers your question:
>
> warnings.warn() in library code if the issue is avoidable and the
> client application should be modified to eliminate the warning
>
>
> logging.warning() if there is nothing the client application can do
> about the situation, but the event should still be noted
>
>
>
|
Besides the [canonical explanation in official documentation](https://docs.python.org/2/howto/logging.html#when-to-use-logging)
>
> warnings.warn() in library code if the issue is avoidable and the client application should be modified to eliminate the warning
>
>
> logging.warning() if there is nothing the client application can do about the situation, but the event should still be noted
>
>
>
It is also worth noting that, by default `warnings.warn("same message")` will show up only once. That is a major noticeable difference. Quoted from [official doc](https://docs.python.org/2/library/warnings.html)
>
> Repetitions of a particular warning for the same source location are typically suppressed.
>
>
>
```
>>> import warnings
>>> warnings.warn("foo")
__main__:1: UserWarning: foo
>>> warnings.warn("foo")
>>> warnings.warn("foo")
>>>
>>> import logging
>>> logging.warn("bar")
WARNING:root:bar
>>> logging.warn("bar")
WARNING:root:bar
>>> logging.warn("bar")
WARNING:root:bar
>>>
>>>
>>> warnings.warn("fur")
__main__:1: UserWarning: fur
>>> warnings.warn("fur")
>>> warnings.warn("fur")
>>>
```
|
9,595,009
|
What is the difference between [`warnings.warn()`](https://docs.python.org/library/warnings.html#warnings.warn) and [`logging.warn()`](https://docs.python.org/library/logging.html#logging.Logger.warning) in terms of what they do and how they should be used?
|
2012/03/07
|
[
"https://Stackoverflow.com/questions/9595009",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84952/"
] |
[`logging.warning`](https://docs.python.org/library/logging.html#logging.warning) just logs something at the `WARNING` level, in the same way that `logging.info` logs at the `INFO` level and `logging.error` logs at the `ERROR` level. It has no special behaviour.
[`warnings.warn`](https://docs.python.org/library/warnings.html#warnings.warn) emits a [`Warning`](https://docs.python.org/library/exceptions.html#Warning), which may be printed to `stderr`, ignored completely, or thrown like a normal `Exception` (potentially crashing your application) depending upon the precise `Warning` subclass emitted and how you've configured your *Warnings Filter*. By default, warnings will be printed to `stderr` or ignored.
Warnings emitted by `warnings.warn` are often useful to know about, but easy to miss (especially if you're running a Python program in a background process and not capturing `stderr`). For that reason, it can be helpful to have them logged.
Python provides a built-in integration between the `logging` module and the `warnings` module to let you do this; just call [`logging.captureWarnings(True)`](https://docs.python.org/library/logging.html#logging.captureWarnings) at the start of your script and all warnings emitted by the `warnings` module will automatically be logged at level `WARNING`.
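A small sketch of that integration (the list-based handler and the warning text are just for illustration):

```python
import logging
import warnings

logging.captureWarnings(True)  # route warnings.warn() into logging

captured = []

class ListHandler(logging.Handler):
    """Collect formatted warning messages instead of printing them."""
    def emit(self, record):
        captured.append(record.getMessage())

# captureWarnings() sends warnings to the logger named "py.warnings"
logging.getLogger("py.warnings").addHandler(ListHandler())

warnings.warn("disk space low")
print(any("disk space low" in message for message in captured))  # True
```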
|
Besides the [canonical explanation in official documentation](https://docs.python.org/2/howto/logging.html#when-to-use-logging)
>
> warnings.warn() in library code if the issue is avoidable and the client application should be modified to eliminate the warning
>
>
> logging.warning() if there is nothing the client application can do about the situation, but the event should still be noted
>
>
>
It is also worth noting that, by default `warnings.warn("same message")` will show up only once. That is a major noticeable difference. Quoted from [official doc](https://docs.python.org/2/library/warnings.html)
>
> Repetitions of a particular warning for the same source location are typically suppressed.
>
>
>
```
>>> import warnings
>>> warnings.warn("foo")
__main__:1: UserWarning: foo
>>> warnings.warn("foo")
>>> warnings.warn("foo")
>>>
>>> import logging
>>> logging.warn("bar")
WARNING:root:bar
>>> logging.warn("bar")
WARNING:root:bar
>>> logging.warn("bar")
WARNING:root:bar
>>>
>>>
>>> warnings.warn("fur")
__main__:1: UserWarning: fur
>>> warnings.warn("fur")
>>> warnings.warn("fur")
>>>
```
|
2,248,699
|
Is there something like twisted (python) or eventmachine (ruby) in .net land?
Do I even need this abstraction? I am listening to a single IO device that will be sending me events for three or four analog sensors attached to it. What are the risks of simply using a looped `UdpClient`? I can't miss any events, but will the ip stack handle the queuing of messages for me? Does all of this depend on how much work the thread tries to do once I receive a message?
What I'm looking for in an abstraction is to remove the complication of threading and synchronization from the problem.
|
2010/02/11
|
[
"https://Stackoverflow.com/questions/2248699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/804/"
] |
I think you are making it too complicated.
Just have one UDP socket open and set an async callback on it. For every incoming packet, put it in a queue and set the callback again. That's it.
Make sure that when queuing and dequeueing you take a lock on the queue.
It's as simple as that, and performance will be great.
R
|
I would recommend [ICE](http://www.zeroc.com); it's a communication engine that will abstract threading and communication for you (the documentation is fairly exhaustive).
|
2,248,699
|
Is there something like twisted (python) or eventmachine (ruby) in .net land?
Do I even need this abstraction? I am listening to a single IO device that will be sending me events for three or four analog sensors attached to it. What are the risks of simply using a looped `UdpClient`? I can't miss any events, but will the ip stack handle the queuing of messages for me? Does all of this depend on how much work the thread tries to do once I receive a message?
What I'm looking for in an abstraction is to remove the complication of threading and synchronization from the problem.
|
2010/02/11
|
[
"https://Stackoverflow.com/questions/2248699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/804/"
] |
I think you are making it too complicated.
Just have one UDP socket open and set an async callback on it. For every incoming packet, put it in a queue and set the callback again. That's it.
Make sure that when queuing and dequeueing you take a lock on the queue.
It's as simple as that, and performance will be great.
R
|
The problem is that with UDP you are automatically assuming the risk of lost packets. I've read the documentation on ICE (as Steve suggested), and it is *very* exhaustive. ICE appears to work with UDP; however, TCP seems to be preferred by the developers. I gather from the ICE documentation that it does not provide any intensive mechanisms to ensure reliable UDP communications.
It is actually very easy to set up an asynchronous UDP client or server. Your real work comes in checking for complete packets and buffering. The asynchronous implementations should keep you from managing threads.
|
2,248,699
|
Is there something like twisted (python) or eventmachine (ruby) in .net land?
Do I even need this abstraction? I am listening to a single IO device that will be sending me events for three or four analog sensors attached to it. What are the risks of simply using a looped `UdpClient`? I can't miss any events, but will the ip stack handle the queuing of messages for me? Does all of this depend on how much work the thread tries to do once I receive a message?
What I'm looking for in an abstraction is to remove the complication of threading and synchronization from the problem.
|
2010/02/11
|
[
"https://Stackoverflow.com/questions/2248699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/804/"
] |
I think you are making it too complicated.
Just have one UDP socket open and set an async callback on it. For every incoming packet, put it in a queue and set the callback again. That's it.
Make sure that when queuing and dequeueing you take a lock on the queue.
It's as simple as that, and performance will be great.
R
|
It sounds like you are looking for reliable multicast. You could try [RMF](http://www.mesongo.com/rmf.aspx); it will handle the reliability and deliver messages using async callbacks from the incoming message queue. IBM also offers WebSphere, which has a UDP component. EmCaster is also an option; however, development seems to have stopped back in 2008.
If you aren't going to be transmitting these packets (or events) to other machines you might just want to use something simple like memory mapped files or other forms of IPC.
|
57,087,455
|
I need to compare data in two tables. These tables are similar in schema but will have different data values. I want to export these data to csv or similar format and then check for differences.
I would like to perform this check with a python script. I have already figured out how to export the data to csv format. But my problem is that since the two tables are not in sync, the primary keys may be different for the same row. Also, the row order may be different in the two tables. A plain CSV compare will not help me in this aspect.
Example database tables in CSV format are below
id,name,designation,department
Table employee in db1
---------------------
1,Ann,Manager,Sales
2,Brian,Executive,Marketing
4,Melissa,Director,Engineering
5,George,Manager,Plant
Table employee in db2
---------------------
1,Ann,Manager,Sales
2,George,Manager,Plant
3,Brian,Executive,Marketing
Here Melissa is a missing record in the second DB. But George and Brian, even though they have different IDs, are considered the same record.
I've found that there is commercial software for this task, but what I need is a script that can be used in a process flow to identify the differences in the tables.
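For concreteness, the comparison I have in mind, ignoring the id column, could be sketched as follows (using the example data above; the column order is assumed to match the header shown):

```python
import csv
import io

def rows_ignoring_id(csv_text):
    """Parse a CSV export and return the set of rows minus the id column."""
    reader = csv.reader(io.StringIO(csv_text.strip()))
    return {tuple(row[1:]) for row in reader}

db1 = """1,Ann,Manager,Sales
2,Brian,Executive,Marketing
4,Melissa,Director,Engineering
5,George,Manager,Plant"""

db2 = """1,Ann,Manager,Sales
2,George,Manager,Plant
3,Brian,Executive,Marketing"""

missing_in_db2 = rows_ignoring_id(db1) - rows_ignoring_id(db2)
print(missing_in_db2)  # {('Melissa', 'Director', 'Engineering')}
```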
|
2019/07/18
|
[
"https://Stackoverflow.com/questions/57087455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4817150/"
] |
I finally managed to get it working by adding `contentContainerStyle={{borderRadius: 6, overflow: 'hidden'}}` to the FlatList.
|
I recreated the structure, and for me it's working fine with a border radius.
Snack link: <https://snack.expo.io/@msbot01/disrespectful-chocolate>
```
<View style={styles.container}>
<ImageBackground source={{uri: 'https://artofislamicpattern.com/wp-content/uploads/2012/10/3.jpg'}} style={{width: '100%', height: '100%',opacity:0.8, alignItems:'center', justifyContent:'center'}}>
<FlatList
data={[{key: 'a', value: 'Australia'}, {key: 'b', value:'Canada'}]}
extraData={this.state}
keyExtractor={this._keyExtractor}
renderItem={this._renderItem}
style={{backgroundColor:'white', width:'90%', borderRadius:10, margin:10, marginBottom:10, paddingTop:10, paddingBottom:10, paddingLeft:10, position:'absolute', zIndex: 1}}
/>
</ImageBackground>
</View>
```
[](https://i.stack.imgur.com/LJWW8.png)
|
57,087,455
|
I need to compare data in two tables. These tables are similar in schema but will have different data values. I want to export these data to csv or similar format and then check for differences.
I would like to perform this check with a python script. I have already figured out how to export the data to csv format. But my problem is that since the two tables are not in sync, the primary keys may be different for the same row. Also, the row order may be different in the two tables. A plain CSV compare will not help me in this aspect.
Example database tables in CSV format are below
id,name,designation,department
Table employee in db1
---------------------
1,Ann,Manager,Sales
2,Brian,Executive,Marketing
4,Melissa,Director,Engineering
5,George,Manager,Plant
Table employee in db2
---------------------
1,Ann,Manager,Sales
2,George,Manager,Plant
3,Brian,Executive,Marketing
Here Melissa is a missing record in the second DB. But George and Brian, even though they have different IDs, are considered the same record.
I've found that there is commercial software for this task, but what I need is a script that can be used in a process flow to identify the differences in the tables.
|
2019/07/18
|
[
"https://Stackoverflow.com/questions/57087455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4817150/"
] |
I finally managed to get it working by adding `contentContainerStyle={{borderRadius: 6, overflow: 'hidden'}}` to the FlatList.
|
To add styles, use it like this:
```
<ListItem
  containerStyle={{
    borderRadius: 8,
    overflow: 'hidden',
  }}
/>
```
|
30,950,941
|
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more control over the migration. For example, I only want a migration to run when necessary, but from my understanding, the container will run the migration on every deploy, assuming the command is still listed in the config file. Also, on occasion, I will be given options during a migration such as:
```
Any objects related to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
```
How do I set up the container command to respond to this with a `yes` during the deployment phase?
This is my current config file
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
```
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
|
2015/06/20
|
[
"https://Stackoverflow.com/questions/30950941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2989731/"
] |
Make sure that the same settings are used when migrating and running!
Thus I would recommend you change this kind of code in ***django.config***
```yaml
container_commands:
01_migrate:
command: "source /opt/python/run/venv/bin/activate && python manage.py migrate"
leader_only: true
```
to:
```yaml
container_commands:
01_migrate:
command: "django-admin migrate"
leader_only: true
option_settings:
aws:elasticbeanstalk:application:environment:
DJANGO_SETTINGS_MODULE: fund.productionSettings
```
as recommended [here](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html). This will help you avoid issues with **wrong settings** used.
[More](https://docs.djangoproject.com/en/2.1/ref/django-admin/) on ***manage.py*** v.s. ***django-admin.py.***
|
In reference to Oscar Chen's answer, you can set environment variables using the EB CLI with
```
eb setenv key1=value1 key2=value2 ...
```
|
30,950,941
|
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more control over the migration. For example, I only want a migration to run when necessary, but from my understanding, the container will run the migration on every deploy, assuming the command is still listed in the config file. Also, on occasion, I will be given options during a migration such as:
```
Any objects related to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
```
How do I set up the container command to respond to this with a `yes` during the deployment phase?
This is my current config file
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
```
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
|
2015/06/20
|
[
"https://Stackoverflow.com/questions/30950941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2989731/"
] |
Make sure that the same settings are used when migrating and running!
Thus I would recommend you change this kind of code in ***django.config***
```yaml
container_commands:
01_migrate:
command: "source /opt/python/run/venv/bin/activate && python manage.py migrate"
leader_only: true
```
to:
```yaml
container_commands:
01_migrate:
command: "django-admin migrate"
leader_only: true
option_settings:
aws:elasticbeanstalk:application:environment:
DJANGO_SETTINGS_MODULE: fund.productionSettings
```
as recommended [here](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html). This will help you avoid issues with **wrong settings** used.
[More](https://docs.djangoproject.com/en/2.1/ref/django-admin/) on ***manage.py*** v.s. ***django-admin.py.***
|
The `django-admin` method was not working for me, as it was not configured properly. You can also use `python manage.py migrate` in
**.ebextensions/django.config**
```
container_commands:
01_migrate:
command: "python manage.py migrate"
leader_only: true
```
|
30,950,941
|
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more control over the migration. For example, I only want a migration to run when necessary, but from my understanding, the container will run the migration on every deploy, assuming the command is still listed in the config file. Also, on occasion, I will be given options during a migration such as:
```
Any objects related to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
```
How do I set up the container command to respond to this with a `yes` during the deployment phase?
This is my current config file
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
```
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
|
2015/06/20
|
[
"https://Stackoverflow.com/questions/30950941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2989731/"
] |
Aside from the automatic migration that you can add to the deploy script (which runs every time you update the environment, and may not be desirable if you have long-running migrations or other Django management commands), you can SSH into an EB instance to run migrations manually.
Here is how to **manually run migration** (and any other **Django management commands**) while working with **Amazon Linux 2** (Python 3.7, 3.8) created by **Elastic Beanstalk**:
First, from your EB cli: `eb ssh` to connect an instance.
The virtual environment can be activated by
`source /var/app/venv/*/bin/activate`
The manage.py can be ran by
`python3 /var/app/current/manage.py`
Now the only tricky bit is to get Elastic Beanstalk's environment variables. You can access them via `/opt/elasticbeanstalk/bin/get-config`. I'm not super familiar with bash scripting, but here is a little script that I use to get and set environment variables; maybe someone can improve it to make it less hard-coded:
```
#! /bin/bash
export DJANGO_SECRET_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k DJANGO_SECRET_KEY)
...
```
More info regarding Amazon Linux 2 platform script tools: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/custom-platforms-scripts.html>
|
The `django-admin` method was not working for me, as it was not configured properly. You can also use `python manage.py migrate` in
**.ebextensions/django.config**
```
container_commands:
01_migrate:
command: "python manage.py migrate"
leader_only: true
```
|
30,950,941
|
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more control over the migration. For example, I only want a migration to run when necessary, but from my understanding, the container will run the migration on every deploy, assuming the command is still listed in the config file. Also, on occasion, I will be given options during a migration such as:
```
Any objects related to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
```
How do I set up the container command to respond to this with a `yes` during the deployment phase?
This is my current config file
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
```
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
|
2015/06/20
|
[
"https://Stackoverflow.com/questions/30950941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2989731/"
] |
I'm not sure there is a specific way to answer yes or no, but you can append `--noinput` to your container command. Use the `--noinput` option to suppress all user prompting, such as “Are you sure?” confirmation messages.
```
try
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
```
OR..
You can ssh into your elasticbean instance and run your command manually.
Then you'll have more control over the migrations.
1. Install awsebcli with `pip install awsebcli`
2. Type `eb ssh Your EnvironmentName`
3. Navigate to your eb instance app directory with:
---
* sudo -s
* source /opt/python/run/venv/bin/activate
* source /opt/python/current/env
* cd /opt/python/current/app
* then run your command.
./manage.py migrate
I hope this helps
|
The trick is that the full output of `container_commands` is in `/var/log/cfn-init-cmd.log` (Amazon Linux 2 Elastic Beanstalk released November 2020).
To view this you would run:
```
eb ssh [environment-name]
sudo tail -n 50 -f /var/log/cfn-init-cmd.log
```
This doesn't seem to be documented anywhere obvious and it's not displayed by `eb logs`; I found it by hunting around in /var/log.
The [Django example](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html) management command `django-admin.py migrate` did not work for me. Instead I had to use something like:
```
01_migrate:
command: "$PYTHONPATH/python manage.py migrate"
leader_only: true
02_collectstatic:
command: "$PYTHONPATH/python manage.py collectstatic --noinput --verbosity=0 --clear"
```
To see the values of your environment variables at deploy time, you can create a debug command like:
```
03_debug:
command: "env"
```
You can see most of these environment variable with `eb ssh; sudo cat /opt/elasticbeanstalk/deployment/env`, but there seem to be some subtle differences at deploy time, hence using `env` above to be sure.
Here you'll see that `$PYTHONPATH` is being used in a non-typical way, pointing to the virtualenv's `bin` directory, not the `site-packages` directory.
|
30,950,941
|
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more control over the migration. For example, I only want a migration to run when necessary, but from my understanding, the container will run the migration on every deploy, assuming the command is still listed in the config file. Also, on occasion, I will be given options during a migration such as:
```
Any objects related to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
```
How do I set up the container command to respond to this with a `yes` during the deployment phase?
This is my current config file
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/actiate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
```
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
|
2015/06/20
|
[
"https://Stackoverflow.com/questions/30950941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2989731/"
] |
Make sure that the same settings are used when migrating and running!
Thus I would recommend you change this kind of code in ***django.config***
```yaml
container_commands:
01_migrate:
command: "source /opt/python/run/venv/bin/activate && python manage.py migrate"
leader_only: true
```
to:
```yaml
container_commands:
01_migrate:
command: "django-admin migrate"
leader_only: true
option_settings:
aws:elasticbeanstalk:application:environment:
DJANGO_SETTINGS_MODULE: fund.productionSettings
```
as recommended [here](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html). This will help you avoid issues with **wrong settings** used.
[More](https://docs.djangoproject.com/en/2.1/ref/django-admin/) on ***manage.py*** v.s. ***django-admin.py.***
|
The trick is that the full output of `container_commands` is in `/var/log/cfn-init-cmd.log` (Amazon Linux 2 Elastic Beanstalk released November 2020).
To view this you would run:
```
eb ssh [environment-name]
sudo tail -n 50 -f /var/log/cfn-init-cmd.log
```
This doesn't seem to be documented anywhere obvious and it's not displayed by `eb logs`; I found it by hunting around in /var/log.
The [Django example](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html) management command `django-admin.py migrate` did not work for me. Instead I had to use something like:
```
01_migrate:
command: "$PYTHONPATH/python manage.py migrate"
leader_only: true
02_collectstatic:
command: "$PYTHONPATH/python manage.py collectstatic --noinput --verbosity=0 --clear"
```
To see the values of your environment variables at deploy time, you can create a debug command like:
```
03_debug:
command: "env"
```
You can see most of these environment variable with `eb ssh; sudo cat /opt/elasticbeanstalk/deployment/env`, but there seem to be some subtle differences at deploy time, hence using `env` above to be sure.
Here you'll see that `$PYTHONPATH` is being used in a non-typical way, pointing to the virtualenv's `bin` directory, not the `site-packages` directory.
|
30,950,941
|
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more controls over the migration. For example, I only want a migration to run when necessary but from my understanding, the container will run the migration on every deploy assuming the command is still listed in the config file. Also, on occassion, I will be given options during a migration such as:
```
Any objects realted to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
```
How do I set up the container command to respond to this with a `yes` during the deployment phase?
This is my current config file
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/actiate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
```
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
|
2015/06/20
|
[
"https://Stackoverflow.com/questions/30950941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2989731/"
] |
I'm not sure there is a specific way to answer yes or no, but you can append `--noinput` to your container command. Use the `--noinput` option to suppress all user prompting, such as “Are you sure?” confirmation messages. For example:
```
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
```
OR..
You can ssh into your Elastic Beanstalk instance and run your command manually.
Then you'll have more control over the migrations.
1. Install awsebcli with `pip install awsebcli`
2. Type `eb ssh Your EnvironmentName`
3. Navigate to your eb instance app directory with:
---
* sudo -s
* source /opt/python/run/venv/bin/activate
* source /opt/python/current/env
* cd /opt/python/current/app
* then run your command.
./manage.py migrate
I hope this helps
|
Make sure that the same settings are used when migrating and running!
Thus I would recommend you change this kind of code in ***django.config***
```yaml
container_commands:
01_migrate:
command: "source /opt/python/run/venv/bin/activate && python manage.py migrate"
leader_only: true
```
to:
```yaml
container_commands:
01_migrate:
command: "django-admin migrate"
leader_only: true
option_settings:
aws:elasticbeanstalk:application:environment:
DJANGO_SETTINGS_MODULE: fund.productionSettings
```
as recommended [here](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html). This will help you avoid issues with **wrong settings** used.
[More](https://docs.djangoproject.com/en/2.1/ref/django-admin/) on ***manage.py*** v.s. ***django-admin.py.***
|
30,950,941
|
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more controls over the migration. For example, I only want a migration to run when necessary but from my understanding, the container will run the migration on every deploy assuming the command is still listed in the config file. Also, on occassion, I will be given options during a migration such as:
```
Any objects realted to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
```
How do I set up the container command to respond to this with a `yes` during the deployment phase?
This is my current config file
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/actiate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
```
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
|
2015/06/20
|
[
"https://Stackoverflow.com/questions/30950941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2989731/"
] |
The `django-admin` method may not work if it is not configured properly. You can also use `python manage.py migrate` in
**.ebextensions/django.config**
```
container_commands:
01_migrate:
command: "python manage.py migrate"
leader_only: true
```
|
[This answer](https://stackoverflow.com/questions/18869414/can-stale-content-types-be-automatically-deleted-in-django) looks like it will work for you if you just want to send "yes" to a few prompts.
You might also consider the `--noinput` flag so that your config looks like:
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
```
This takes the default setting, which is "no".
It also appears that there's [an open issue/fix](https://code.djangoproject.com/ticket/24865#no1) to solve this problem a better way.
|
30,950,941
|
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more controls over the migration. For example, I only want a migration to run when necessary but from my understanding, the container will run the migration on every deploy assuming the command is still listed in the config file. Also, on occassion, I will be given options during a migration such as:
```
Any objects realted to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
```
How do I set up the container command to respond to this with a `yes` during the deployment phase?
This is my current config file
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/actiate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
```
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
|
2015/06/20
|
[
"https://Stackoverflow.com/questions/30950941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2989731/"
] |
I'm not sure there is a specific way to answer yes or no, but you can append `--noinput` to your container command. Use the `--noinput` option to suppress all user prompting, such as “Are you sure?” confirmation messages. For example:
```
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
```
OR..
You can ssh into your Elastic Beanstalk instance and run your command manually.
Then you'll have more control over the migrations.
1. Install awsebcli with `pip install awsebcli`
2. Type `eb ssh Your EnvironmentName`
3. Navigate to your eb instance app directory with:
---
* sudo -s
* source /opt/python/run/venv/bin/activate
* source /opt/python/current/env
* cd /opt/python/current/app
* then run your command.
./manage.py migrate
I hope this helps
|
The `django-admin` method may not work if it is not configured properly. You can also use `python manage.py migrate` in
**.ebextensions/django.config**
```
container_commands:
01_migrate:
command: "python manage.py migrate"
leader_only: true
```
|
30,950,941
|
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more controls over the migration. For example, I only want a migration to run when necessary but from my understanding, the container will run the migration on every deploy assuming the command is still listed in the config file. Also, on occassion, I will be given options during a migration such as:
```
Any objects realted to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
```
How do I set up the container command to respond to this with a `yes` during the deployment phase?
This is my current config file
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/actiate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
```
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
|
2015/06/20
|
[
"https://Stackoverflow.com/questions/30950941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2989731/"
] |
Aside from the automatic migration that you can add to the deploy script (which runs every time you update the environment, and may not be desirable if you have long-running migrations or other Django management commands), you can ssh into an EB instance to run migrations manually.
Here is how to **manually run migration** (and any other **Django management commands**) while working with **Amazon Linux 2** (Python 3.7, 3.8) created by **Elastic Beanstalk**:
First, from your EB cli: `eb ssh` to connect an instance.
The virtual environment can be activated by
`source /var/app/venv/*/bin/activate`
The manage.py can be run with
`python3 /var/app/current/manage.py`
Now the only tricky bit is to get Elastic Beanstalk's environment variables. You can access them by `/opt/elasticbeanstalk/bin/get-config`, I'm not super familiar with bash script, but here is a little script that I use to get and set environment variables, maybe someone can improve it to make it less hard-coded:
```
#! /bin/bash
export DJANGO_SECRET_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k DJANGO_SECRET_KEY)
...
```
More info regarding Amazon Linux 2 platform script tools: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/custom-platforms-scripts.html>
|
The trick is that the full output of `container_commands` is in `/var/log/cfn-init-cmd.log` (Amazon Linux 2 Elastic Beanstalk released November 2020).
To view this you would run:
```
eb ssh [environment-name]
sudo tail -n 50 -f /var/log/cfn-init-cmd.log
```
This doesn't seem to be documented anywhere obvious and it's not displayed by `eb logs`; I found it by hunting around in /var/log.
The [Django example](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html) management command `django-admin.py migrate` did not work for me. Instead I had to use something like:
```
01_migrate:
command: "$PYTHONPATH/python manage.py migrate"
leader_only: true
02_collectstatic:
command: "$PYTHONPATH/python manage.py collectstatic --noinput --verbosity=0 --clear"
```
To see the values of your environment variables at deploy time, you can create a debug command like:
```
03_debug:
command: "env"
```
You can see most of these environment variable with `eb ssh; sudo cat /opt/elasticbeanstalk/deployment/env`, but there seem to be some subtle differences at deploy time, hence using `env` above to be sure.
Here you'll see that `$PYTHONPATH` is being used in a non-typical way, pointing to the virtualenv's `bin` directory, not the `site-packages` directory.
|
30,950,941
|
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more controls over the migration. For example, I only want a migration to run when necessary but from my understanding, the container will run the migration on every deploy assuming the command is still listed in the config file. Also, on occassion, I will be given options during a migration such as:
```
Any objects realted to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
```
How do I set up the container command to respond to this with a `yes` during the deployment phase?
This is my current config file
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/actiate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
```
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
|
2015/06/20
|
[
"https://Stackoverflow.com/questions/30950941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2989731/"
] |
Aside from the automatic migration that you can add to the deploy script (which runs every time you update the environment, and may not be desirable if you have long-running migrations or other Django management commands), you can ssh into an EB instance to run migrations manually.
Here is how to **manually run migration** (and any other **Django management commands**) while working with **Amazon Linux 2** (Python 3.7, 3.8) created by **Elastic Beanstalk**:
First, from your EB cli: `eb ssh` to connect an instance.
The virtual environment can be activated by
`source /var/app/venv/*/bin/activate`
The manage.py can be run with
`python3 /var/app/current/manage.py`
Now the only tricky bit is to get Elastic Beanstalk's environment variables. You can access them by `/opt/elasticbeanstalk/bin/get-config`, I'm not super familiar with bash script, but here is a little script that I use to get and set environment variables, maybe someone can improve it to make it less hard-coded:
```
#! /bin/bash
export DJANGO_SECRET_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k DJANGO_SECRET_KEY)
...
```
More info regarding Amazon Linux 2 platform script tools: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/custom-platforms-scripts.html>
|
[This answer](https://stackoverflow.com/questions/18869414/can-stale-content-types-be-automatically-deleted-in-django) looks like it will work for you if you just want to send "yes" to a few prompts.
You might also consider the `--noinput` flag so that your config looks like:
```
container_commands:
01_migrate:
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
```
This takes the default setting, which is "no".
It also appears that there's [an open issue/fix](https://code.djangoproject.com/ticket/24865#no1) to solve this problem a better way.
|
57,901,995
|
i have a dockerfile which looks like this:
```
FROM python:3.7-slim-stretch
ENV PIP pip
RUN \
$PIP install --upgrade pip && \
$PIP install scikit-learn && \
$PIP install scikit-image && \
$PIP install rasterio && \
$PIP install geopandas && \
$PIP install matplotlib
COPY sentools sentools
COPY data data
COPY vegetation.py .
```
Now in my project i have two python files vegetation and forest. i have kept each of them in separate folders. How can i create separate docker images for both python files and execute the containers for them separately?
|
2019/09/12
|
[
"https://Stackoverflow.com/questions/57901995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11310755/"
] |
If the base code is the same and the container only needs to run a different Python script, I would suggest using a single Docker image, so you don't have to manage two images.
Set `vegetation.py` as the default: when the container starts without passing an ENV it will run `vegetation.py`, and if the ENV `FILE_TO_RUN` is overridden at run time, the specified file will be run.
```
FROM python:3.7-alpine3.9
ENV FILE_TO_RUN="/vegetation.py"
COPY vegetation.py /vegetation.py
CMD ["sh", "-c", "python $FILE_TO_RUN"]
```
Now, if you want to run `forest.py`, you can just pass the file path to the ENV.
```
docker run -it -e FILE_TO_RUN="/forest.py" --rm my_image
```
or
```
docker run -it -e FILE_TO_RUN="/anyfile_to_run.py" --rm my_image
```
**updated:**
You can manage with args+env in your docker image.
```
FROM python:3.7-alpine3.9
ARG APP="default_script.py"
ENV APP=$APP
COPY $APP /$APP
CMD ["sh", "-c", "python /$APP"]
```
Now build with ARGs
```
docker build --build-arg APP="vegetation.py" -t app_vegetation .
```
or
```
docker build --build-arg APP="forest.py" -t app_forest .
```
Now good to run
```
docker run --rm -it app_forest
```
Or copy both scripts into one image:
```
FROM python:3.7-alpine3.9
# assign some default script name to args
ARG APP="default_script.py"
ENV APP=$APP
COPY vegetation.py /vegetation.py
COPY forest.py /forest.py
CMD ["sh", "-c", "python /$APP"]
```
|
If you insist on creating separate images, you can always use the [ARG](https://docs.docker.com/engine/reference/builder/#arg) command.
```
FROM python:3.7-slim-stretch
ARG file_to_copy
ENV PIP pip
RUN \
$PIP install --upgrade pip && \
$PIP install scikit-learn && \
$PIP install scikit-image && \
$PIP install rasterio && \
$PIP install geopandas && \
$PIP install matplotlib
COPY sentools sentools
COPY data data
COPY $file_to_copy .
```
And then build the image like this:
```
docker build --build-arg file_to_copy=vegetation.py .
```
or like this:
```
docker build --build-arg file_to_copy=forest.py .
```
|
57,901,995
|
i have a dockerfile which looks like this:
```
FROM python:3.7-slim-stretch
ENV PIP pip
RUN \
$PIP install --upgrade pip && \
$PIP install scikit-learn && \
$PIP install scikit-image && \
$PIP install rasterio && \
$PIP install geopandas && \
$PIP install matplotlib
COPY sentools sentools
COPY data data
COPY vegetation.py .
```
Now in my project i have two python files vegetation and forest. i have kept each of them in separate folders. How can i create separate docker images for both python files and execute the containers for them separately?
|
2019/09/12
|
[
"https://Stackoverflow.com/questions/57901995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11310755/"
] |
If the base code is the same and the container only needs to run a different Python script, I would suggest using a single Docker image, so you don't have to manage two images.
Set `vegetation.py` as the default: when the container starts without passing an ENV it will run `vegetation.py`, and if the ENV `FILE_TO_RUN` is overridden at run time, the specified file will be run.
```
FROM python:3.7-alpine3.9
ENV FILE_TO_RUN="/vegetation.py"
COPY vegetation.py /vegetation.py
CMD ["sh", "-c", "python $FILE_TO_RUN"]
```
Now, if you want to run `forest.py`, you can just pass the file path to the ENV.
```
docker run -it -e FILE_TO_RUN="/forest.py" --rm my_image
```
or
```
docker run -it -e FILE_TO_RUN="/anyfile_to_run.py" --rm my_image
```
**updated:**
You can manage with args+env in your docker image.
```
FROM python:3.7-alpine3.9
ARG APP="default_script.py"
ENV APP=$APP
COPY $APP /$APP
CMD ["sh", "-c", "python /$APP"]
```
Now build with ARGs
```
docker build --build-arg APP="vegetation.py" -t app_vegetation .
```
or
```
docker build --build-arg APP="forest.py" -t app_forest .
```
Now good to run
```
docker run --rm -it app_forest
```
Or copy both scripts into one image:
```
FROM python:3.7-alpine3.9
# assign some default script name to args
ARG APP="default_script.py"
ENV APP=$APP
COPY vegetation.py /vegetation.py
COPY forest.py /forest.py
CMD ["sh", "-c", "python /$APP"]
```
|
When you start a Docker container, you can specify what command to run at the end of the `docker run` command. So you can build a single image that contains both scripts and pick which one runs when you start the container.
The scripts should be "normally" executable: they need to have the executable permission bit set, and they need to start with a line like
```py
#!/usr/bin/env python3
```
and you should be able to *locally* (outside of Docker) run
```sh
. some_virtual_environment/bin/activate
./vegetation.py
```
Once you've gotten through this, you can copy the content into a Docker image
```sh
FROM python:3.7-slim-stretch
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY sentools sentools
COPY data data
COPY vegetation.py forest.py ./
CMD ["./vegetation.py"]
```
Then you can build and run this image with either script.
```sh
docker build -t trees .
docker run --rm trees ./vegetation.py
docker run --rm trees ./forest.py
```
If you actually want this to be two separate images, you can create two separate Dockerfiles that differ only in their final `COPY` and `CMD` lines, and use the `docker build -f` option to pick which one to use.
```
$ tail -2 Dockerfile.vegetation
COPY vegetation.py ./
CMD ["./vegetation.py"]
$ docker build -t vegetation -f Dockerfile.vegetation .
$ docker run --rm vegetation
```
|
57,901,995
|
i have a dockerfile which looks like this:
```
FROM python:3.7-slim-stretch
ENV PIP pip
RUN \
$PIP install --upgrade pip && \
$PIP install scikit-learn && \
$PIP install scikit-image && \
$PIP install rasterio && \
$PIP install geopandas && \
$PIP install matplotlib
COPY sentools sentools
COPY data data
COPY vegetation.py .
```
Now in my project i have two python files vegetation and forest. i have kept each of them in separate folders. How can i create separate docker images for both python files and execute the containers for them separately?
|
2019/09/12
|
[
"https://Stackoverflow.com/questions/57901995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11310755/"
] |
If you insist on creating separate images, you can always use the [ARG](https://docs.docker.com/engine/reference/builder/#arg) command.
```
FROM python:3.7-slim-stretch
ARG file_to_copy
ENV PIP pip
RUN \
$PIP install --upgrade pip && \
$PIP install scikit-learn && \
$PIP install scikit-image && \
$PIP install rasterio && \
$PIP install geopandas && \
$PIP install matplotlib
COPY sentools sentools
COPY data data
COPY $file_to_copy .
```
And then build the image like this:
```
docker build --build-arg file_to_copy=vegetation.py .
```
or like this:
```
docker build --build-arg file_to_copy=forest.py .
```
|
When you start a Docker container, you can specify what command to run at the end of the `docker run` command. So you can build a single image that contains both scripts and pick which one runs when you start the container.
The scripts should be "normally" executable: they need to have the executable permission bit set, and they need to start with a line like
```py
#!/usr/bin/env python3
```
and you should be able to *locally* (outside of Docker) run
```sh
. some_virtual_environment/bin/activate
./vegetation.py
```
Once you've gotten through this, you can copy the content into a Docker image
```sh
FROM python:3.7-slim-stretch
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY sentools sentools
COPY data data
COPY vegetation.py forest.py ./
CMD ["./vegetation.py"]
```
Then you can build and run this image with either script.
```sh
docker build -t trees .
docker run --rm trees ./vegetation.py
docker run --rm trees ./forest.py
```
If you actually want this to be two separate images, you can create two separate Dockerfiles that differ only in their final `COPY` and `CMD` lines, and use the `docker build -f` option to pick which one to use.
```
$ tail -2 Dockerfile.vegetation
COPY vegetation.py ./
CMD ["./vegetation.py"]
$ docker build -t vegetation -f Dockerfile.vegetation .
$ docker run --rm vegetation
```
|
18,238,558
|
I am new to the Python language. My problem is that I have two Python scripts: an automation script A and a main script B. Script A internally calls script B. Script B exits with sys.exit(1) whenever an exception is caught. Now, whenever script B exits it results in the exit of script A as well. Is there any way to stop script A from exiting and continue the rest of its execution, even if script B exits.
Thanks in advance.
|
2013/08/14
|
[
"https://Stackoverflow.com/questions/18238558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2286286/"
] |
You should encapsulate the call in a try/except block. That will catch the exception and let script A continue executing.
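As a minimal sketch (assuming script B exposes an entry-point function; `script_b_main` here is a hypothetical stand-in), since `sys.exit()` raises `SystemExit`, a plain try/except around the call is enough:
```python
import sys

def script_b_main():
    # hypothetical stand-in for script B, which exits on error
    sys.exit(1)

try:
    script_b_main()
except SystemExit as e:
    # sys.exit() raises SystemExit, so script A can catch it and carry on
    print("script B exited with code", e.code)

print("script A continues")
```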
|
`sys.exit()` actually raises a `SystemExit` exception which is caught and handled by the Python interpreter. All you have to do is put the call into to "script B" into a try/except block that catches `SystemExit` before it bubbles all the way up. For example:
```
try:
script_b.do_stuff()
except SystemExit as e:
print('Script B exited with return code {0}'.format(e.code))
```
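Alternatively, if script A runs script B as a separate process, script B's `sys.exit()` cannot terminate script A at all. A self-contained sketch (it writes a throwaway stand-in for script B to a temp file, purely for illustration):
```python
import os
import subprocess
import sys
import tempfile

# Create a stand-in for script B that exits with status 1
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("import sys\nsys.exit(1)\n")
    script_b = f.name

# Because script B runs in its own interpreter, its sys.exit() only ends that process
result = subprocess.run([sys.executable, script_b])
print("script B returned", result.returncode)
os.unlink(script_b)
```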
|
63,867,203
|
I wrote some code in python to see how many times one number can be divided by a number, until it gets a value of one.
```
counter_var = 1
quotient = num1/num2
if quotient<1:
print('1 time')
else:
while quotient >= 1:
quotient = num1/num2
counter_var = counter_var + 1
print(counter_var)
```
It is not ending the process but neither is it giving any output.
|
2020/09/13
|
[
"https://Stackoverflow.com/questions/63867203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14108602/"
] |
You are not changing the value of `quotient` in the while loop; it remains constant.
Instead of **quotient = num1/num2** it should be **quotient /= num2**, if I understand your problem correctly.
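Putting that fix together, a minimal sketch (wrapped in a function, with hypothetical sample numbers):
```python
def count_divisions(num1, num2):
    # Count how many times num1 can be divided by num2 before the result drops below 1
    counter_var = 1
    quotient = num1 / num2
    while quotient >= 1:
        quotient /= num2   # divide the running quotient, not num1 again
        counter_var += 1
    return counter_var

print(count_divisions(8, 2))   # 8 -> 4 -> 2 -> 1 -> 0.5: four divisions
```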
|
Well, to start, you're missing assignments to the numbers.
In the case of num1 > num2, you will enter an endless while loop and hence never reach the `print(counter_var)` statement.
|
63,867,203
|
I wrote some code in python to see how many times one number can be divided by a number, until it gets a value of one.
```
counter_var = 1
quotient = num1/num2
if quotient<1:
print('1 time')
else:
while quotient >= 1:
quotient = num1/num2
counter_var = counter_var + 1
print(counter_var)
```
It is not ending the process but neither is it giving any output.
|
2020/09/13
|
[
"https://Stackoverflow.com/questions/63867203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14108602/"
] |
You are not changing the value of `quotient` in the while loop; it remains constant.
Instead of **quotient = num1/num2** it should be **quotient /= num2**, if I understand your problem correctly.
|
I made a slight change to your code, since you were always recomputing the quotient from the initial numbers. Instead, divide the quotient itself by num2.
```
counter_var = 1
quotient = num1/num2
if quotient<1:
print('1 time')
else:
quotient = num1/num2
while quotient >= 1:
quotient = quotient/num2
counter_var = counter_var + 1
print(counter_var)
```
|