| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
52,425,065
|
How can I use Python 3.7 in my command line?
My Python interpreter is set to Python 3.7 and my `python.pythonPath` is `/usr/local/bin/python3`, yet the Python version in my command line is 2.7. I've already installed Python 3.7 on my Mac.
```
lorenzs-mbp:Python lorenzkort$ python --version
Python 2.7.13
```
|
2018/09/20
|
[
"https://Stackoverflow.com/questions/52425065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7866979/"
] |
If you open the "Terminal" tab in your VSCode, your default `python` is called (in your case Python 2).
If you want to use Python 3.7, try the following command:
```
python3 --version
```
Or just type the full path to the Python 3 binary:
```
/usr/local/bin/python3 yourscript.py
```
In case you want to change the default Python interpreter, follow this accepted answer: [How to set Python's default version to 3.x on OS X?](https://stackoverflow.com/questions/18425379/how-to-set-pythons-default-version-to-3-x-on-os-x)
|
You should use `python3` instead of `python` for any commands you want to run; that way the Python 3 interpreter is used.
Likewise, if you are using `pip`, use `pip3`.
Hope this helps.
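As a quick sanity check, you can also ask the interpreter itself which binary and version are running (a minimal sketch; run it with both `python` and `python3` to compare):

```python
import sys

# Prints the path of the interpreter that is actually executing this script,
# plus its major.minor version, so you can see which binary each command
# (`python` vs `python3`) resolves to.
print(sys.executable)
print("%d.%d" % sys.version_info[:2])
```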
|
52,425,065
|
How can I use Python 3.7 in my command line?
My Python interpreter is set to Python 3.7 and my `python.pythonPath` is `/usr/local/bin/python3`, yet the Python version in my command line is 2.7. I've already installed Python 3.7 on my Mac.
```
lorenzs-mbp:Python lorenzkort$ python --version
Python 2.7.13
```
|
2018/09/20
|
[
"https://Stackoverflow.com/questions/52425065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7866979/"
] |
If you open the "Terminal" tab in your VSCode, your default `python` is called (in your case Python 2).
If you want to use Python 3.7, try the following command:
```
python3 --version
```
Or just type the full path to the Python 3 binary:
```
/usr/local/bin/python3 yourscript.py
```
In case you want to change the default Python interpreter, follow this accepted answer: [How to set Python's default version to 3.x on OS X?](https://stackoverflow.com/questions/18425379/how-to-set-pythons-default-version-to-3-x-on-os-x)
|
This gives you a complete guide in setting up Visual Studio code for python
[Getting Started with Python in VS Code](https://code.visualstudio.com/docs/python/python-tutorial)
|
44,628,186
|
I encounter many tasks in which I need to filter a Python (2.7) list to keep only ordered unique values. My usual approach is to use `OrderedDict` from `collections`:
```
from collections import OrderedDict
ls = [1,2,3,4,1,23,4,12,3,41]
ls = OrderedDict(zip(ls,['']*len(ls))).keys()
print ls
```
the output is:
>
> [1, 2, 3, 4, 23, 12, 41]
>
>
>
Is there any other state-of-the-art method to do this in Python?
* Note - the input and the output should be given as `list`
**edit** - a comparison of the methods can be found here:
<https://www.peterbe.com/plog/uniqifiers-benchmark>
the best solution so far is:
```
def get_unique(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]
```
|
2017/06/19
|
[
"https://Stackoverflow.com/questions/44628186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3861108/"
] |
If you need to preserve the order **and** get rid of the duplicates, you can do it like:
```
ls = [1, 2, 3, 4, 1, 23, 4, 12, 3, 41]
lookup = set() # a temporary lookup set
ls = [x for x in ls if x not in lookup and lookup.add(x) is None]
# [1, 2, 3, 4, 23, 12, 41]
```
This should be **considerably** faster than your approach.
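To see the difference side by side, here is a minimal comparison sketch of the two approaches (function names are illustrative); both return the same ordered unique values, but the set version avoids building a whole zipped dict and does an O(1) membership check per element:

```python
from collections import OrderedDict

def via_ordereddict(seq):
    # the question's approach: OrderedDict keys preserve first-occurrence order
    return list(OrderedDict(zip(seq, [''] * len(seq))).keys())

def via_set(seq):
    # set.add returns None, so the `is None` clause adds x as a side effect
    seen = set()
    return [x for x in seq if x not in seen and seen.add(x) is None]

ls = [1, 2, 3, 4, 1, 23, 4, 12, 3, 41]
print(via_ordereddict(ls))  # [1, 2, 3, 4, 23, 12, 41]
print(via_set(ls))          # [1, 2, 3, 4, 23, 12, 41]
```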
|
You could use a [set](https://docs.python.org/2/library/stdtypes.html#set) like this:
```
newls = []
seen = set()
for elem in ls:
    if elem not in seen:
        newls.append(elem)
        seen.add(elem)
```
|
44,628,186
|
I encounter many tasks in which I need to filter a Python (2.7) list to keep only ordered unique values. My usual approach is to use `OrderedDict` from `collections`:
```
from collections import OrderedDict
ls = [1,2,3,4,1,23,4,12,3,41]
ls = OrderedDict(zip(ls,['']*len(ls))).keys()
print ls
```
the output is:
>
> [1, 2, 3, 4, 23, 12, 41]
>
>
>
Is there any other state-of-the-art method to do this in Python?
* Note - the input and the output should be given as `list`
**edit** - a comparison of the methods can be found here:
<https://www.peterbe.com/plog/uniqifiers-benchmark>
the best solution so far is:
```
def get_unique(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]
```
|
2017/06/19
|
[
"https://Stackoverflow.com/questions/44628186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3861108/"
] |
If you need to preserve the order **and** get rid of the duplicates, you can do it like:
```
ls = [1, 2, 3, 4, 1, 23, 4, 12, 3, 41]
lookup = set() # a temporary lookup set
ls = [x for x in ls if x not in lookup and lookup.add(x) is None]
# [1, 2, 3, 4, 23, 12, 41]
```
This should be **considerably** faster than your approach.
|
Define a function to do so:
```
def uniques(l):
    retl = []
    for x in l:
        if x not in retl:
            retl.append(x)
    return retl

ls = [1,2,3,4,1,23,4,12,3,41]
uniques(ls)
[1, 2, 3, 4, 23, 12, 41]
```
|
44,628,186
|
I encounter many tasks in which I need to filter a Python (2.7) list to keep only ordered unique values. My usual approach is to use `OrderedDict` from `collections`:
```
from collections import OrderedDict
ls = [1,2,3,4,1,23,4,12,3,41]
ls = OrderedDict(zip(ls,['']*len(ls))).keys()
print ls
```
the output is:
>
> [1, 2, 3, 4, 23, 12, 41]
>
>
>
Is there any other state-of-the-art method to do this in Python?
* Note - the input and the output should be given as `list`
**edit** - a comparison of the methods can be found here:
<https://www.peterbe.com/plog/uniqifiers-benchmark>
the best solution so far is:
```
def get_unique(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]
```
|
2017/06/19
|
[
"https://Stackoverflow.com/questions/44628186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3861108/"
] |
If you need to preserve the order **and** get rid of the duplicates, you can do it like:
```
ls = [1, 2, 3, 4, 1, 23, 4, 12, 3, 41]
lookup = set() # a temporary lookup set
ls = [x for x in ls if x not in lookup and lookup.add(x) is None]
# [1, 2, 3, 4, 23, 12, 41]
```
This should be **considerably** faster than your approach.
|
Another solution would be to use a list comprehension, like this:
```
[x for i, x in enumerate(ls) if x not in ls[:i]]
```
Output:
```
[1, 2, 3, 4, 23, 12, 41]
```
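Note that the `ls[:i]` membership test rescans a growing slice, so the comprehension is O(n²) overall. A minimal sketch (function names are illustrative) contrasting it with the set-based O(n) version:

```python
def unique_quadratic(seq):
    # each element is checked against all earlier elements: O(n**2)
    return [x for i, x in enumerate(seq) if x not in seq[:i]]

def unique_linear(seq):
    # a set makes each membership check O(1) on average
    seen = set()
    return [x for x in seq if x not in seen and seen.add(x) is None]

ls = [1, 2, 3, 4, 1, 23, 4, 12, 3, 41]
assert unique_quadratic(ls) == unique_linear(ls) == [1, 2, 3, 4, 23, 12, 41]
```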
|
44,628,186
|
I encounter many tasks in which I need to filter a Python (2.7) list to keep only ordered unique values. My usual approach is to use `OrderedDict` from `collections`:
```
from collections import OrderedDict
ls = [1,2,3,4,1,23,4,12,3,41]
ls = OrderedDict(zip(ls,['']*len(ls))).keys()
print ls
```
the output is:
>
> [1, 2, 3, 4, 23, 12, 41]
>
>
>
Is there any other state-of-the-art method to do this in Python?
* Note - the input and the output should be given as `list`
**edit** - a comparison of the methods can be found here:
<https://www.peterbe.com/plog/uniqifiers-benchmark>
the best solution so far is:
```
def get_unique(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]
```
|
2017/06/19
|
[
"https://Stackoverflow.com/questions/44628186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3861108/"
] |
You could use a [set](https://docs.python.org/2/library/stdtypes.html#set) like this:
```
newls = []
seen = set()
for elem in ls:
    if elem not in seen:
        newls.append(elem)
        seen.add(elem)
```
|
Define a function to do so:
```
def uniques(l):
    retl = []
    for x in l:
        if x not in retl:
            retl.append(x)
    return retl

ls = [1,2,3,4,1,23,4,12,3,41]
uniques(ls)
[1, 2, 3, 4, 23, 12, 41]
```
|
44,628,186
|
I encounter many tasks in which I need to filter a Python (2.7) list to keep only ordered unique values. My usual approach is to use `OrderedDict` from `collections`:
```
from collections import OrderedDict
ls = [1,2,3,4,1,23,4,12,3,41]
ls = OrderedDict(zip(ls,['']*len(ls))).keys()
print ls
```
the output is:
>
> [1, 2, 3, 4, 23, 12, 41]
>
>
>
Is there any other state-of-the-art method to do this in Python?
* Note - the input and the output should be given as `list`
**edit** - a comparison of the methods can be found here:
<https://www.peterbe.com/plog/uniqifiers-benchmark>
the best solution so far is:
```
def get_unique(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]
```
|
2017/06/19
|
[
"https://Stackoverflow.com/questions/44628186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3861108/"
] |
You could use a [set](https://docs.python.org/2/library/stdtypes.html#set) like this:
```
newls = []
seen = set()
for elem in ls:
    if elem not in seen:
        newls.append(elem)
        seen.add(elem)
```
|
Another solution would be to use a list comprehension, like this:
```
[x for i, x in enumerate(ls) if x not in ls[:i]]
```
Output:
```
[1, 2, 3, 4, 23, 12, 41]
```
|
38,521,553
|
I have two files representing records with intervals.
file1.txt
```none
a 5 10
a 13 19
a 27 39
b 4 9
b 15 19
c 20 33
c 39 45
```
and
file2.txt
```none
something id1 a 4 9 commentx
something id2 a 14 18 commenty
something id3 a 1 4 commentz
something id5 b 3 9 commentbla
something id6 b 16 18 commentbla
something id7 b 25 29 commentblabla
something id8 c 5 59 hihi
something id9 c 40 45 hoho
something id10 c 32 43 haha
```
What I would like to do is produce a file containing only those records of file2 whose range (columns 4 and 5) does not overlap any range of file1 (columns 2 and 3), for rows where column 3 of file2 is identical to column 1 of file1.
The expected output should be written to a file
test.result
```none
something id3 a 1 4 commentz
something id7 b 25 29 commentblabla
```
I have tried to use the following python code:
```
import csv
with open ('file2') as protein, open('file1') as position, open ('test.result',"r+") as fallout:
writer = csv.writer(fallout, delimiter=' ')
for rowinprot in csv.reader(protein, delimiter=' '):
for rowinpos in csv.reader(position, delimiter=' '):
if rowinprot[2]==rowinpos[0]:
if rowinprot[4]<rowinpos[1] or rowinprot[3]>rowinpos[2]:
writer.writerow(rowinprot)
```
This did not seem to work...I had the following result:
```none
something id1 a 4 9 commentx
something id1 a 4 9 commentx
something id1 a 4 9 commentx
```
which apparently is not what I want.
What did I do wrong? It seems to be in the conditional loops. Still, I couldn't figure it out...
|
2016/07/22
|
[
"https://Stackoverflow.com/questions/38521553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6613786/"
] |
Looping inside a loop is not a good approach; try to avoid doing such things.
You could cache the content of file1 first in a dict object. Then, when looping over file2, you can use that dict to look up what you need. So I would code it like this:
```
import csv

with open("file1.csv", "r") as protein, open("file2.csv", "r") as position, open("result.csv", "w") as fallout:
    writer = csv.writer(fallout, delimiter=' ')
    protein_dict = {}
    for rowinprt in csv.reader(protein, delimiter=' '):
        key = rowinprt[0]
        sub_value = (int(rowinprt[1]), int(rowinprt[2]))
        protein_dict.setdefault(key, [])
        protein_dict[key].append(sub_value)
    for pos in csv.reader(position, delimiter=' '):
        id_key = pos[2]
        id_min = int(pos[3])
        id_max = int(pos[4])
        if id_key in protein_dict and all(id_max < _min or _max < id_min for _min, _max in protein_dict[id_key]):
            writer.writerow(pos)
```
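As a side note, the two-step `setdefault` pattern used above can be collapsed into one call, since `dict.setdefault` returns the stored list. A minimal standalone sketch (the row values are illustrative):

```python
protein_dict = {}
rows = [('a', 5, 10), ('a', 13, 19), ('b', 4, 9)]
for key, lo, hi in rows:
    # setdefault returns the existing list (or the freshly inserted empty
    # one), so we can append to it directly in a single expression
    protein_dict.setdefault(key, []).append((lo, hi))

print(protein_dict)  # {'a': [(5, 10), (13, 19)], 'b': [(4, 9)]}
```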
|
For your code to work, you will have to change three things:
1. Your loop over the position reader only runs once, because the reader advances through the file the first time the loop is run. Afterwards, it looks for the next item at the end of the file. You'll have to rewind the file each time before the loop is run. See e.g. ([Python csv.reader: How do I return to the top of the file?](https://stackoverflow.com/questions/431752/python-csv-reader-how-do-i-return-to-the-top-of-the-file))
2. The csv reader returns string objects. If you compare them via smaller/larger signs, it compares them as if they were words: first digit vs. first digit, and so on. To get the comparison right, you'll have to cast to integer first.
3. Lastly, your program would now return a line from the protein file every time it does not overlap with a line in the position file that has the same identifier. For example, the first protein line would be printed twice, for the second and third position lines. As I understand it, this is not what you want, so you'll have to check that ALL position lines are valid before printing.
Generally, this is probably not the most elegant way to do it, but to get correct results with something close to what you wrote, you might want to try something along the lines of:
```
import csv

with open('file2.txt') as protein, open('file1.txt') as position, open('test.result', "r+") as fallout:
    writer = csv.writer(fallout, delimiter=' ')
    for rowinprot in csv.reader(protein, delimiter=' '):
        position.seek(0)
        valid = True
        for rowinpos in csv.reader(position, delimiter=' '):
            if rowinprot[2] == rowinpos[0]:
                if not (int(rowinprot[4]) < int(rowinpos[1]) or int(rowinprot[3]) > int(rowinpos[2])):
                    valid = False
        if valid:
            writer.writerow(rowinprot)
```
|
38,521,553
|
I have two files representing records with intervals.
file1.txt
```none
a 5 10
a 13 19
a 27 39
b 4 9
b 15 19
c 20 33
c 39 45
```
and
file2.txt
```none
something id1 a 4 9 commentx
something id2 a 14 18 commenty
something id3 a 1 4 commentz
something id5 b 3 9 commentbla
something id6 b 16 18 commentbla
something id7 b 25 29 commentblabla
something id8 c 5 59 hihi
something id9 c 40 45 hoho
something id10 c 32 43 haha
```
What I would like to do is produce a file containing only those records of file2 whose range (columns 4 and 5) does not overlap any range of file1 (columns 2 and 3), for rows where column 3 of file2 is identical to column 1 of file1.
The expected output should be written to a file
test.result
```none
something id3 a 1 4 commentz
something id7 b 25 29 commentblabla
```
I have tried to use the following python code:
```
import csv
with open ('file2') as protein, open('file1') as position, open ('test.result',"r+") as fallout:
writer = csv.writer(fallout, delimiter=' ')
for rowinprot in csv.reader(protein, delimiter=' '):
for rowinpos in csv.reader(position, delimiter=' '):
if rowinprot[2]==rowinpos[0]:
if rowinprot[4]<rowinpos[1] or rowinprot[3]>rowinpos[2]:
writer.writerow(rowinprot)
```
This did not seem to work...I had the following result:
```none
something id1 a 4 9 commentx
something id1 a 4 9 commentx
something id1 a 4 9 commentx
```
which apparently is not what I want.
What did I do wrong? It seems to be in the conditional loops. Still, I couldn't figure it out...
|
2016/07/22
|
[
"https://Stackoverflow.com/questions/38521553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6613786/"
] |
It might be overkill, but you could create a class to represent intervals and use it to make fairly readable code:
```
import csv

class Interval(object):
    """ Representation of a closed interval.
    a & b can be numeric, a datetime.date, or any other comparable type.
    """
    def __init__(self, a, b):
        self.lowerbound, self.upperbound = (a, b) if a < b else (b, a)

    def __contains__(self, val):
        return self.lowerbound <= val <= self.upperbound

    def __repr__(self):
        return '{}({}, {})'.format(self.__class__.__name__,
                                   self.lowerbound, self.upperbound)

filename1 = 'afile1.txt'
filename2 = 'afile2.txt'
filename3 = 'test.result'

intervals = {}  # master dictionary of intervals
with open(filename1, 'rb') as f:
    reader = csv.reader(f, delimiter=' ')
    for row in reader:
        cls, a, b = row[0], int(row[1]), int(row[2])
        intervals.setdefault(cls, []).append(Interval(a, b))

with open(filename2, 'rb') as f1, open(filename3, 'wb') as f2:
    reader = csv.reader(f1, delimiter=' ')
    writer = csv.writer(f2, delimiter=' ')
    for row in reader:
        cls, a, b = row[2], int(row[3]), int(row[4])
        if cls in intervals:
            for interval in intervals[cls]:
                # check for overlap
                if ((a in interval) or (b in interval) or
                        (a < interval.lowerbound and b > interval.upperbound)):
                    break  # skip
            else:
                writer.writerow(row)  # no overlaps
```
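The `for`/`else` at the end is what does the filtering: the `else` branch runs only when the loop finishes without hitting `break`. A minimal standalone sketch of that idiom (interval values are illustrative):

```python
def keep_nonoverlapping(candidates, blocked):
    # keep only (lo, hi) pairs that overlap none of the blocked intervals
    kept = []
    for lo, hi in candidates:
        for b_lo, b_hi in blocked:
            if not (hi < b_lo or lo > b_hi):
                break  # overlap found; skip this candidate
        else:
            kept.append((lo, hi))  # loop ended without break: no overlaps
    return kept

print(keep_nonoverlapping([(1, 4), (4, 9)], [(5, 10), (13, 19)]))  # [(1, 4)]
```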
|
For your code to work, you will have to change three things:
1. Your loop over the position reader only runs once, because the reader advances through the file the first time the loop is run. Afterwards, it looks for the next item at the end of the file. You'll have to rewind the file each time before the loop is run. See e.g. ([Python csv.reader: How do I return to the top of the file?](https://stackoverflow.com/questions/431752/python-csv-reader-how-do-i-return-to-the-top-of-the-file))
2. The csv reader returns string objects. If you compare them via smaller/larger signs, it compares them as if they were words: first digit vs. first digit, and so on. To get the comparison right, you'll have to cast to integer first.
3. Lastly, your program would now return a line from the protein file every time it does not overlap with a line in the position file that has the same identifier. For example, the first protein line would be printed twice, for the second and third position lines. As I understand it, this is not what you want, so you'll have to check that ALL position lines are valid before printing.
Generally, this is probably not the most elegant way to do it, but to get correct results with something close to what you wrote, you might want to try something along the lines of:
```
import csv

with open('file2.txt') as protein, open('file1.txt') as position, open('test.result', "r+") as fallout:
    writer = csv.writer(fallout, delimiter=' ')
    for rowinprot in csv.reader(protein, delimiter=' '):
        position.seek(0)
        valid = True
        for rowinpos in csv.reader(position, delimiter=' '):
            if rowinprot[2] == rowinpos[0]:
                if not (int(rowinprot[4]) < int(rowinpos[1]) or int(rowinprot[3]) > int(rowinpos[2])):
                    valid = False
        if valid:
            writer.writerow(rowinprot)
```
|
38,521,553
|
I have two files representing records with intervals.
file1.txt
```none
a 5 10
a 13 19
a 27 39
b 4 9
b 15 19
c 20 33
c 39 45
```
and
file2.txt
```none
something id1 a 4 9 commentx
something id2 a 14 18 commenty
something id3 a 1 4 commentz
something id5 b 3 9 commentbla
something id6 b 16 18 commentbla
something id7 b 25 29 commentblabla
something id8 c 5 59 hihi
something id9 c 40 45 hoho
something id10 c 32 43 haha
```
What I would like to do is produce a file containing only those records of file2 whose range (columns 4 and 5) does not overlap any range of file1 (columns 2 and 3), for rows where column 3 of file2 is identical to column 1 of file1.
The expected output should be written to a file
test.result
```none
something id3 a 1 4 commentz
something id7 b 25 29 commentblabla
```
I have tried to use the following python code:
```
import csv
with open ('file2') as protein, open('file1') as position, open ('test.result',"r+") as fallout:
writer = csv.writer(fallout, delimiter=' ')
for rowinprot in csv.reader(protein, delimiter=' '):
for rowinpos in csv.reader(position, delimiter=' '):
if rowinprot[2]==rowinpos[0]:
if rowinprot[4]<rowinpos[1] or rowinprot[3]>rowinpos[2]:
writer.writerow(rowinprot)
```
This did not seem to work...I had the following result:
```none
something id1 a 4 9 commentx
something id1 a 4 9 commentx
something id1 a 4 9 commentx
```
which apparently is not what I want.
What did I do wrong? It seems to be in the conditional loops. Still, I couldn't figure it out...
|
2016/07/22
|
[
"https://Stackoverflow.com/questions/38521553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6613786/"
] |
Looping inside a loop is not a good approach; try to avoid doing such things.
You could cache the content of file1 first in a dict object. Then, when looping over file2, you can use that dict to look up what you need. So I would code it like this:
```
import csv

with open("file1.csv", "r") as protein, open("file2.csv", "r") as position, open("result.csv", "w") as fallout:
    writer = csv.writer(fallout, delimiter=' ')
    protein_dict = {}
    for rowinprt in csv.reader(protein, delimiter=' '):
        key = rowinprt[0]
        sub_value = (int(rowinprt[1]), int(rowinprt[2]))
        protein_dict.setdefault(key, [])
        protein_dict[key].append(sub_value)
    for pos in csv.reader(position, delimiter=' '):
        id_key = pos[2]
        id_min = int(pos[3])
        id_max = int(pos[4])
        if id_key in protein_dict and all(id_max < _min or _max < id_min for _min, _max in protein_dict[id_key]):
            writer.writerow(pos)
```
|
Here is an algorithm that works for you:
```
def is_overlapping(x, y):
    return len(range(max(x[0], y[0]), min(x[-1], y[-1])+1)) > 0

position_file = r"file1.txt"
positions = [line.strip().split() for line in open(position_file).read().split('\n')]

protein_file = r"file2.txt"
proteins = [(line.strip().split()[2:5], line) for line in open(protein_file).read().split('\n')]

fallout_file = r"result.txt"
with open(fallout_file, 'w') as fallout:
    for index, protein_info in enumerate(proteins):
        try:
            test_position = positions[index]
        except IndexError:
            # If the files do not have the same size the loop will end
            break
        protein_position, protein_line = protein_info
        # If the identifier is different, write the line and check the next one
        if protein_position[0] != test_position[0]:
            print(protein_line)
            fallout.write(protein_line)
            continue
        # Here the identifiers are identical, so we check whether they overlap
        test_range = range(int(test_position[1]), int(test_position[2]))
        protein_range = range(int(protein_position[1]), int(protein_position[2]))
        if not is_overlapping(protein_range, test_range):
            print(protein_line)
            fallout.write(protein_line + '\n')
```
Concerning the overlapping test, a good snippet was given here: <https://stackoverflow.com/a/6821298/2531279>
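That linked check boils down to a constant-time comparison, with no need to materialize a `range`. A minimal sketch of the equivalent predicate for closed `(lo, hi)` intervals:

```python
def is_overlapping(x, y):
    # closed intervals overlap iff neither one ends before the other begins
    return max(x[0], y[0]) <= min(x[1], y[1])

assert is_overlapping((5, 10), (4, 9))        # partial overlap
assert is_overlapping((5, 10), (10, 12))      # touching endpoints count
assert not is_overlapping((5, 10), (11, 15))  # disjoint
```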
|
38,521,553
|
I have two files representing records with intervals.
file1.txt
```none
a 5 10
a 13 19
a 27 39
b 4 9
b 15 19
c 20 33
c 39 45
```
and
file2.txt
```none
something id1 a 4 9 commentx
something id2 a 14 18 commenty
something id3 a 1 4 commentz
something id5 b 3 9 commentbla
something id6 b 16 18 commentbla
something id7 b 25 29 commentblabla
something id8 c 5 59 hihi
something id9 c 40 45 hoho
something id10 c 32 43 haha
```
What I would like to do is produce a file containing only those records of file2 whose range (columns 4 and 5) does not overlap any range of file1 (columns 2 and 3), for rows where column 3 of file2 is identical to column 1 of file1.
The expected output should be written to a file
test.result
```none
something id3 a 1 4 commentz
something id7 b 25 29 commentblabla
```
I have tried to use the following python code:
```
import csv
with open ('file2') as protein, open('file1') as position, open ('test.result',"r+") as fallout:
writer = csv.writer(fallout, delimiter=' ')
for rowinprot in csv.reader(protein, delimiter=' '):
for rowinpos in csv.reader(position, delimiter=' '):
if rowinprot[2]==rowinpos[0]:
if rowinprot[4]<rowinpos[1] or rowinprot[3]>rowinpos[2]:
writer.writerow(rowinprot)
```
This did not seem to work...I had the following result:
```none
something id1 a 4 9 commentx
something id1 a 4 9 commentx
something id1 a 4 9 commentx
```
which apparently is not what I want.
What did I do wrong? It seems to be in the conditional loops. Still, I couldn't figure it out...
|
2016/07/22
|
[
"https://Stackoverflow.com/questions/38521553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6613786/"
] |
It might be overkill, but you could create a class to represent intervals and use it to make fairly readable code:
```
import csv

class Interval(object):
    """ Representation of a closed interval.
    a & b can be numeric, a datetime.date, or any other comparable type.
    """
    def __init__(self, a, b):
        self.lowerbound, self.upperbound = (a, b) if a < b else (b, a)

    def __contains__(self, val):
        return self.lowerbound <= val <= self.upperbound

    def __repr__(self):
        return '{}({}, {})'.format(self.__class__.__name__,
                                   self.lowerbound, self.upperbound)

filename1 = 'afile1.txt'
filename2 = 'afile2.txt'
filename3 = 'test.result'

intervals = {}  # master dictionary of intervals
with open(filename1, 'rb') as f:
    reader = csv.reader(f, delimiter=' ')
    for row in reader:
        cls, a, b = row[0], int(row[1]), int(row[2])
        intervals.setdefault(cls, []).append(Interval(a, b))

with open(filename2, 'rb') as f1, open(filename3, 'wb') as f2:
    reader = csv.reader(f1, delimiter=' ')
    writer = csv.writer(f2, delimiter=' ')
    for row in reader:
        cls, a, b = row[2], int(row[3]), int(row[4])
        if cls in intervals:
            for interval in intervals[cls]:
                # check for overlap
                if ((a in interval) or (b in interval) or
                        (a < interval.lowerbound and b > interval.upperbound)):
                    break  # skip
            else:
                writer.writerow(row)  # no overlaps
```
|
Here is an algorithm that works for you:
```
def is_overlapping(x, y):
    return len(range(max(x[0], y[0]), min(x[-1], y[-1])+1)) > 0

position_file = r"file1.txt"
positions = [line.strip().split() for line in open(position_file).read().split('\n')]

protein_file = r"file2.txt"
proteins = [(line.strip().split()[2:5], line) for line in open(protein_file).read().split('\n')]

fallout_file = r"result.txt"
with open(fallout_file, 'w') as fallout:
    for index, protein_info in enumerate(proteins):
        try:
            test_position = positions[index]
        except IndexError:
            # If the files do not have the same size the loop will end
            break
        protein_position, protein_line = protein_info
        # If the identifier is different, write the line and check the next one
        if protein_position[0] != test_position[0]:
            print(protein_line)
            fallout.write(protein_line)
            continue
        # Here the identifiers are identical, so we check whether they overlap
        test_range = range(int(test_position[1]), int(test_position[2]))
        protein_range = range(int(protein_position[1]), int(protein_position[2]))
        if not is_overlapping(protein_range, test_range):
            print(protein_line)
            fallout.write(protein_line + '\n')
```
Concerning the overlapping test, a good snippet was given here: <https://stackoverflow.com/a/6821298/2531279>
|
38,521,553
|
I have two files representing records with intervals.
file1.txt
```none
a 5 10
a 13 19
a 27 39
b 4 9
b 15 19
c 20 33
c 39 45
```
and
file2.txt
```none
something id1 a 4 9 commentx
something id2 a 14 18 commenty
something id3 a 1 4 commentz
something id5 b 3 9 commentbla
something id6 b 16 18 commentbla
something id7 b 25 29 commentblabla
something id8 c 5 59 hihi
something id9 c 40 45 hoho
something id10 c 32 43 haha
```
What I would like to do is produce a file containing only those records of file2 whose range (columns 4 and 5) does not overlap any range of file1 (columns 2 and 3), for rows where column 3 of file2 is identical to column 1 of file1.
The expected output should be written to a file
test.result
```none
something id3 a 1 4 commentz
something id7 b 25 29 commentblabla
```
I have tried to use the following python code:
```
import csv
with open ('file2') as protein, open('file1') as position, open ('test.result',"r+") as fallout:
writer = csv.writer(fallout, delimiter=' ')
for rowinprot in csv.reader(protein, delimiter=' '):
for rowinpos in csv.reader(position, delimiter=' '):
if rowinprot[2]==rowinpos[0]:
if rowinprot[4]<rowinpos[1] or rowinprot[3]>rowinpos[2]:
writer.writerow(rowinprot)
```
This did not seem to work...I had the following result:
```none
something id1 a 4 9 commentx
something id1 a 4 9 commentx
something id1 a 4 9 commentx
```
which apparently is not what I want.
What did I do wrong? It seems to be in the conditional loops. Still, I couldn't figure it out...
|
2016/07/22
|
[
"https://Stackoverflow.com/questions/38521553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6613786/"
] |
Looping inside a loop is not a good approach; try to avoid doing such things.
You could cache the content of file1 first in a dict object. Then, when looping over file2, you can use that dict to look up what you need. So I would code it like this:
```
import csv

with open("file1.csv", "r") as protein, open("file2.csv", "r") as position, open("result.csv", "w") as fallout:
    writer = csv.writer(fallout, delimiter=' ')
    protein_dict = {}
    for rowinprt in csv.reader(protein, delimiter=' '):
        key = rowinprt[0]
        sub_value = (int(rowinprt[1]), int(rowinprt[2]))
        protein_dict.setdefault(key, [])
        protein_dict[key].append(sub_value)
    for pos in csv.reader(position, delimiter=' '):
        id_key = pos[2]
        id_min = int(pos[3])
        id_max = int(pos[4])
        if id_key in protein_dict and all(id_max < _min or _max < id_min for _min, _max in protein_dict[id_key]):
            writer.writerow(pos)
```
|
It might be overkill, but you could create a class to represent intervals and use it to make fairly readable code:
```
import csv

class Interval(object):
    """ Representation of a closed interval.
    a & b can be numeric, a datetime.date, or any other comparable type.
    """
    def __init__(self, a, b):
        self.lowerbound, self.upperbound = (a, b) if a < b else (b, a)

    def __contains__(self, val):
        return self.lowerbound <= val <= self.upperbound

    def __repr__(self):
        return '{}({}, {})'.format(self.__class__.__name__,
                                   self.lowerbound, self.upperbound)

filename1 = 'afile1.txt'
filename2 = 'afile2.txt'
filename3 = 'test.result'

intervals = {}  # master dictionary of intervals
with open(filename1, 'rb') as f:
    reader = csv.reader(f, delimiter=' ')
    for row in reader:
        cls, a, b = row[0], int(row[1]), int(row[2])
        intervals.setdefault(cls, []).append(Interval(a, b))

with open(filename2, 'rb') as f1, open(filename3, 'wb') as f2:
    reader = csv.reader(f1, delimiter=' ')
    writer = csv.writer(f2, delimiter=' ')
    for row in reader:
        cls, a, b = row[2], int(row[3]), int(row[4])
        if cls in intervals:
            for interval in intervals[cls]:
                # check for overlap
                if ((a in interval) or (b in interval) or
                        (a < interval.lowerbound and b > interval.upperbound)):
                    break  # skip
            else:
                writer.writerow(row)  # no overlaps
```
|
65,891,227
|
This is probably a pretty dumb question..
I was fooling around in Python and decided to make two versions of a prime-number app: one that just indefinitely counts up all prime numbers from the start until you fry your PC, and one that lets the user input a number and then checks whether the number is a prime or not.
So I made one that worked fine, but it was quite slow: it took 41 seconds to check whether 269996535 was a prime. I guess my algorithm is pretty bad.. but that's not my concern! I then read up on multiprocessing (multithreading) and decided to give it a go. After some minutes of tinkering I got it to work on smaller numbers, and it was fast, so I decided to compare with the big number I had previously (269996535),
and I looked at my resmon and the CPU spiked, then the music stopped, all my programs started to crash one by one, and then finally a bluescreen.
Can someone explain why this happened and why I can't get this to work? I'm not looking for a better algorithm; I'm simply trying to figure out why my PC crashed when trying to multithread.
code that works
```
import time

def prime(x):
    start = time.time()
    num = x+1
    range_num = []
    two = x/2
    five = x/5
    ten = x/10
    if not two.is_integer() or five.is_integer() or ten.is_integer():
        for i in range(1, num):
            y = x/i
            if y.is_integer():
                range_num.append(True)
            else:
                range_num.append(False)
        total = 0
        for ele in range(0, len(range_num)):
            total = total + range_num[ele]
        if num == 1:
            print(1, " is a prime number")
        elif total == 2:
            print(num-1, " is a prime number")
        else:
            print(num-1, " is not a prime number")
    else:
        print(num - 1, " is not a prime number")
    print("This took ", round((time.time() - start), 2), "Seconds to complete")

prime(269996535)
```
code that fried my pc
```
import time
import multiprocessing

global range_num
range_num = []

def part_one(x, denom):
    if not denom == 0:
        for i in range(1, x):
            y = x/denom
            if y.is_integer():
                range_num.append(True)
            else:
                range_num.append(False)

if __name__ == '__main__':
    start = time.time()
    x = int(input("Enter number: "))
    num = x+1
    range_num = []
    if num-1 == 1:
        print("1 is a prime number")
    for i in range(0, x):
        p = multiprocessing.Process(target=part_one, args=(x, i,))
        p.start()
    for process in range_num:
        process.join()
    total = 0
    for ele in range(0, len(range_num)):
        total = total + range_num[ele]
    if total == 2:
        print(num-1, " is a prime number")
    else:
        print(num - 1, " is not a prime number")
```
|
2021/01/25
|
[
"https://Stackoverflow.com/questions/65891227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15040876/"
] |
You need to fix the `scripts/start.js` file.
There is a line in this file:
```
const compiler = createCompiler(webpack, config, appName, urls, useYarn);
```
You need to change it to
```
const compiler = createCompiler({ webpack, config, appName, urls, useYarn });
```
This change will fix your current error, but you'll probably get the next one :)
|
Try updating the `node` section of your config for webpack 5 as well. What was
```
node: {
dgram: 'empty',
fs: 'empty',
net: 'empty',
tls: 'empty',
child_process: 'empty',
}
```
now has to go under `resolve.fallback`.
Source: <https://webpack.js.org/migrate/5/#clean-up-configuration>
|
65,891,227
|
This is probably a pretty dumb question..
I was fooling around in Python and decided to make two versions of a prime-number app: one that just counts up all the prime numbers indefinitely until you fry your PC, and one that lets the user input a number and then checks whether that number is prime.
So I made one that worked fine, but it was quite slow: it took 41 seconds to check whether 269996535 was prime. I guess my algorithm is pretty bad, but that's not my concern! I then read up on multiprocessing and decided to give it a go. After some minutes of tinkering I got it working on smaller numbers, and it was fast, so I decided to compare it with the big number I had used previously (269996535).
I watched my resource monitor: the CPU spiked, then the music stopped, all my programs started to crash one by one, and finally I got a bluescreen.
Can someone explain why this happened and why I can't get this to work? I'm not looking for a better algorithm; I'm simply trying to figure out why my PC crashed when I tried to use multiprocessing.
code that works
```
import time

def prime(x):
    start = time.time()
    num = x+1
    range_num = []
    two = x/2
    five = x/5
    ten = x/10
    if not two.is_integer() or five.is_integer() or ten.is_integer():
        for i in range(1, num):
            y = x/i
            if y.is_integer():
                range_num.append(True)
            else:
                range_num.append(False)
        total = 0
        for ele in range(0, len(range_num)):
            total = total + range_num[ele]
        if num == 1:
            print(1, " is a prime number")
        elif total == 2:
            print(num-1, " is a prime number")
        else:
            print(num-1, " is not a prime number")
    else:
        print(num - 1, " is not a prime number")
    print("This took ", round((time.time() - start), 2), "Seconds to complete")

prime(269996535)
```
code that fried my pc
```
import time
import multiprocessing

global range_num
range_num = []

def part_one(x, denom):
    if not denom == 0:
        for i in range(1, x):
            y = x/denom
            if y.is_integer():
                range_num.append(True)
            else:
                range_num.append(False)

if __name__ == '__main__':
    start = time.time()
    x = int(input("Enter number: "))
    num = x+1
    range_num = []
    if num-1 == 1:
        print("1 is a prime number")
    for i in range(0, x):
        p = multiprocessing.Process(target=part_one, args=(x, i,))
        p.start()
    for process in range_num:
        process.join()
    total = 0
    for ele in range(0, len(range_num)):
        total = total + range_num[ele]
    if total == 2:
        print(num-1, " is a prime number")
    else:
        print(num - 1, " is not a prime number")
```
|
2021/01/25
|
[
"https://Stackoverflow.com/questions/65891227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15040876/"
] |
You need to fix the `scripts/start.js` file.
There is a line in this file:
```
const compiler = createCompiler(webpack, config, appName, urls, useYarn);
```
You need to change it to
```
const compiler = createCompiler({ webpack, config, appName, urls, useYarn });
```
This change will fix your current error, but you'll probably get the next one :)
|
```
new webpack.LoaderOptionsPlugin({
options: {
applyWebpackOptionsDefaults: {},
```
From your code...
|
12,243,891
|
How do I achieve lines 9 and 10, i.e. the two `for` loops, in Python?
```
matrix = [
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]
]
newMatrix = []
for (i=0; i < len(matrix); i++):
    for (j=0; j < len(matrix[i]); j++):
        newMatrix[j][i] = matrix[i][j]
print newMatrix
```
PS: I know I can do `[[row[i] for row in matrix] for i in range(4)]`, but how do I do it using only plain for loops?
|
2012/09/03
|
[
"https://Stackoverflow.com/questions/12243891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/406659/"
] |
Use `range` (or `xrange`).
```
for i in range(len(matrix)):
    for j in range(len(matrix[i])):
```
FYI, assigning to `newMatrix[j][i]` will fail, because `newMatrix` is an empty `list`. You need to add a new empty `list` to `newMatrix` for every row, and then append each new value to that list in every iteration.
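Putting those two points together, a minimal runnable sketch (building each new row with `append` instead of indexing into an empty list) might look like this:

```python
matrix = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
]

newMatrix = []
for j in range(len(matrix[0])):   # one new row per column of the original
    newRow = []
    for i in range(len(matrix)):
        newRow.append(matrix[i][j])
    newMatrix.append(newRow)

print(newMatrix)
# [[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
```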
|
You can use the [`enumerate()` function](http://docs.python.org/library/functions.html#enumerate) to loop over both the matrix values and give you indices:
```
newMatrix = [[0] * len(matrix) for _ in xrange(len(matrix[0]))]
for i, row in enumerate(matrix):
    for j, value in enumerate(row):
        newMatrix[j][i] = value
```
This outputs:
```
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
```
Because you are addressing rows and columns in the new matrix directly, you need to have initialized that new matrix with empty values first. The list comprehension on the first line of my example does this for you. It creates a new matrix that is (*y*, *x*) in size, given an input matrix of size (*x*, *y*).
Alternatively, you can avoid explicit looping altogether by (ab)using the [`zip` function](http://docs.python.org/library/functions.html#zip):
```
newMatrix = zip(*matrix)
```
which takes each row of `matrix` and groups each column in those rows into new rows.
In the generic case, Python `for` constructs loop over sequences. The [`range()`](http://docs.python.org/library/functions.html#range) and [`xrange()`](http://docs.python.org/library/functions.html#xrange) functions can produce numerical sequences for you; these generate a number sequence in much the same way that a `for` loop in C or JavaScript would:
```
>>> for i in range(5):
...     print i,
0 1 2 3 4
```
but the construct is far more powerful than the C-style `for` construct. In most such loops your goal is to provide indices into some sequence; the Python construct bypasses the indices and goes straight to the values in the sequence instead.
|
12,243,891
|
How do I achieve lines 9 and 10, i.e. the two `for` loops, in Python?
```
matrix = [
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]
]
newMatrix = []
for (i=0; i < len(matrix); i++):
    for (j=0; j < len(matrix[i]); j++):
        newMatrix[j][i] = matrix[i][j]
print newMatrix
```
PS: I know I can do `[[row[i] for row in matrix] for i in range(4)]`, but how do I do it using only plain for loops?
|
2012/09/03
|
[
"https://Stackoverflow.com/questions/12243891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/406659/"
] |
Use `range` (or `xrange`).
```
for i in range(len(matrix)):
    for j in range(len(matrix[i])):
```
FYI, assigning to `newMatrix[j][i]` will fail, because `newMatrix` is an empty `list`. You need to add a new empty `list` to `newMatrix` for every row, and then append each new value to that list in every iteration.
|
Just to throw something else into the mix of answers, if you are doing *any* kind of matrix operations with Python, consider using [`numpy`](http://numpy.scipy.org/):
```
>>> import numpy
>>> matrix = numpy.matrix([
... [1, 2, 3, 4],
... [5, 6, 7, 8],
... [9, 10, 11, 12]
... ])
```
The transpose is pretty simple to do:
```
>>> matrix.transpose()
matrix([[ 1, 5, 9],
[ 2, 6, 10],
[ 3, 7, 11],
[ 4, 8, 12]])
```
|
12,243,891
|
How do I achieve lines 9 and 10, i.e. the two `for` loops, in Python?
```
matrix = [
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]
]
newMatrix = []
for (i=0; i < len(matrix); i++):
    for (j=0; j < len(matrix[i]); j++):
        newMatrix[j][i] = matrix[i][j]
print newMatrix
```
PS: I know I can do `[[row[i] for row in matrix] for i in range(4)]`, but how do I do it using only plain for loops?
|
2012/09/03
|
[
"https://Stackoverflow.com/questions/12243891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/406659/"
] |
Use `range` (or `xrange`).
```
for i in range(len(matrix)):
    for j in range(len(matrix[i])):
```
FYI, assigning to `newMatrix[j][i]` will fail, because `newMatrix` is an empty `list`. You need to add a new empty `list` to `newMatrix` for every row, and then append each new value to that list in every iteration.
|
If I understand you correctly, what you want to do is a [matrix transposition](http://en.wikipedia.org/wiki/Transpose), which can be done this way:
```
zip(*matrix)
```
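For example, using the same `matrix` as in the question (note that in Python 2 `zip` returns a list of tuples directly; the `list()` wrapper below is only needed on Python 3, where `zip` is lazy):

```python
matrix = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
]

transposed = list(zip(*matrix))   # list() is only needed on Python 3
print(transposed)
# [(1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12)]
```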
|
1,478,178
|
Usually, the best practice in Python when working with international text is to use unicode: convert any input to unicode early, and convert back to a byte encoding (UTF-8 most of the time) as late as possible.
But when I need to run a regex on unicode, I don't find the process very friendly. For example, if I need to find the 'é' character followed by one or more spaces, I have to write (note: my shell and Python files are set to UTF-8):
```
re.match('(?u)\xe9\s+', unicode)
```
So I have to write the unicode code point of 'é'. That's not really convenient, and if I need to build the regex from a variable, things start to get ugly. Example:
```
word_to_match = 'Élisa™'.decode('utf-8') # that returns a unicode object
regex = '(?u)%s\s+' % word_to_match
re.match(regex, unicode)
```
And this is a simple example. So when I have a lot of regexes to run one after another, each containing special characters, I find it easier and more natural to run them on a string encoded in UTF-8. Example:
```
re.match('Élisa\s+', string)
re.match('Geneviève\s+', string)
re.match('DrØshtit\s+', string)
```
Is there something I'm missing? What are the drawbacks of the UTF-8 approach?
UPDATE
------
OK, I found the problem. I was doing my tests in IPython, but unfortunately it seems to mess up the encoding. Example:
In the python shell
```
>>> string_utf8 = 'Test « with theses » quotes Éléments'
>>> string_utf8
'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments'
>>> print string_utf8
Test « with theses » quotes Éléments
>>>
>>> unicode_string = u'Test « with theses » quotes Éléments'
>>> unicode_string
u'Test \xab with theses \xbb quotes \xc9l\xe9ments'
>>> print unicode_string
Test « with theses » quotes Éléments
>>>
>>> unicode_decoded_from_utf8 = string_utf8.decode('utf-8')
>>> unicode_decoded_from_utf8
u'Test \xab with theses \xbb quotes \xc9l\xe9ments'
>>> print unicode_decoded_from_utf8
Test « with theses » quotes Éléments
```
In ipython
```
In [1]: string_utf8 = 'Test « with theses » quotes Éléments'
In [2]: string_utf8
Out[2]: 'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments'
In [3]: print string_utf8
Test « with theses » quotes Éléments
In [4]: unicode_string = u'Test « with theses » quotes Éléments'
In [5]: unicode_string
Out[5]: u'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments'
In [6]: print unicode_string
Test « with theses » quotes Ãléments
In [7]: unicode_decoded_from_utf8 = string_utf8.decode('utf-8')
In [8]: unicode_decoded_from_utf8
Out[8]: u'Test \xab with theses \xbb quotes \xc9l\xe9ments'
In [9]: print unicode_decoded_from_utf8
Test « with theses » quotes Éléments
```
As you can see, IPython messes with the encoding when the u'' notation is used. That was the source of my problems. The bug is mentioned here: <https://bugs.launchpad.net/ipython/+bug/339642>
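For reference, in a plain Python shell the decode-early approach works as expected. A minimal sketch (written with `\xc9`-style escapes so it is independent of the terminal encoding; the sample text is illustrative):

```python
import re

# u'' literals behave the same in Python 2 and modern Python 3
text = u'\xc9lisa   est ici'                   # u'Élisa' followed by three spaces
pattern = re.compile(u'\xc9lisa\\s+', re.UNICODE)

match = pattern.match(text)
print(match is not None)      # True
print(len(match.group(0)))    # 8: the 5 characters of 'Élisa' plus 3 spaces
```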
|
2009/09/25
|
[
"https://Stackoverflow.com/questions/1478178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
I haven't used a JDBC-ODBC bridge in quite a few years, but I would expect that the ODBC driver's fetch size would be controlling. What ODBC driver are you using and what version of the ODBC driver are you using? What is the fetch size specified in the DSN?
As a separate issue, I would seriously question the decision to use a JDBC-ODBC bridge in this day and age. A JDBC driver can use a TNS alias rather than an explicitly specified host and port, which is no harder to configure than an ODBC DSN. And getting rid of the requirement to install, configure, and maintain an ODBC driver and DSN vastly improves performance and maintainability.
|
Use JDBC. There is no advantage in using the ODBC bridge. On a `PreparedStatement` or `CallableStatement` you can call `setFetchSize()` to restrict the rowset size.
|
1,478,178
|
Usually, the best practice in Python when working with international text is to use unicode: convert any input to unicode early, and convert back to a byte encoding (UTF-8 most of the time) as late as possible.
But when I need to run a regex on unicode, I don't find the process very friendly. For example, if I need to find the 'é' character followed by one or more spaces, I have to write (note: my shell and Python files are set to UTF-8):
```
re.match('(?u)\xe9\s+', unicode)
```
So I have to write the unicode code point of 'é'. That's not really convenient, and if I need to build the regex from a variable, things start to get ugly. Example:
```
word_to_match = 'Élisa™'.decode('utf-8') # that returns a unicode object
regex = '(?u)%s\s+' % word_to_match
re.match(regex, unicode)
```
And this is a simple example. So when I have a lot of regexes to run one after another, each containing special characters, I find it easier and more natural to run them on a string encoded in UTF-8. Example:
```
re.match('Élisa\s+', string)
re.match('Geneviève\s+', string)
re.match('DrØshtit\s+', string)
```
Is there something I'm missing? What are the drawbacks of the UTF-8 approach?
UPDATE
------
OK, I found the problem. I was doing my tests in IPython, but unfortunately it seems to mess up the encoding. Example:
In the python shell
```
>>> string_utf8 = 'Test « with theses » quotes Éléments'
>>> string_utf8
'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments'
>>> print string_utf8
Test « with theses » quotes Éléments
>>>
>>> unicode_string = u'Test « with theses » quotes Éléments'
>>> unicode_string
u'Test \xab with theses \xbb quotes \xc9l\xe9ments'
>>> print unicode_string
Test « with theses » quotes Éléments
>>>
>>> unicode_decoded_from_utf8 = string_utf8.decode('utf-8')
>>> unicode_decoded_from_utf8
u'Test \xab with theses \xbb quotes \xc9l\xe9ments'
>>> print unicode_decoded_from_utf8
Test « with theses » quotes Éléments
```
In ipython
```
In [1]: string_utf8 = 'Test « with theses » quotes Éléments'
In [2]: string_utf8
Out[2]: 'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments'
In [3]: print string_utf8
Test « with theses » quotes Éléments
In [4]: unicode_string = u'Test « with theses » quotes Éléments'
In [5]: unicode_string
Out[5]: u'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments'
In [6]: print unicode_string
Test « with theses » quotes Ãléments
In [7]: unicode_decoded_from_utf8 = string_utf8.decode('utf-8')
In [8]: unicode_decoded_from_utf8
Out[8]: u'Test \xab with theses \xbb quotes \xc9l\xe9ments'
In [9]: print unicode_decoded_from_utf8
Test « with theses » quotes Éléments
```
As you can see, IPython messes with the encoding when the u'' notation is used. That was the source of my problems. The bug is mentioned here: <https://bugs.launchpad.net/ipython/+bug/339642>
|
2009/09/25
|
[
"https://Stackoverflow.com/questions/1478178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
I haven't used a JDBC-ODBC bridge in quite a few years, but I would expect that the ODBC driver's fetch size would be controlling. What ODBC driver are you using and what version of the ODBC driver are you using? What is the fetch size specified in the DSN?
As a separate issue, I would seriously question the decision to use a JDBC-ODBC bridge in this day and age. A JDBC driver can use a TNS alias rather than an explicitly specified host and port, which is no harder to configure than an ODBC DSN. And getting rid of the requirement to install, configure, and maintain an ODBC driver and DSN vastly improves performance and maintainability.
|
Are you sure the fetch size makes any difference in Sun's ODBC-JDBC driver? We tried it a few years ago with Java 5. The value is set, even validated, but never used. I suspect that's still the case.
|
59,357,940
|
The piece of code below uses the OpenCV module to identify lanes on a road. I use Python 3.6 (developing in the Atom IDE; this info is included only because Stack Overflow wasn't letting me post without it, so please ignore it).
The code runs fine with a given sample video, but when I run it on another video it throws the following error:
```
(base) D:\Self-Driving course\finding-lanes>RayanFindingLanes.py
C:\Users\Tarun\Anaconda3\lib\site-packages\numpy\lib\function_base.py:392: RuntimeWarning: Mean of empty slice.
avg = a.mean(axis)
C:\Users\Tarun\Anaconda3\lib\site-packages\numpy\core\_methods.py:85: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
Traceback (most recent call last):
File "D:\Self-Driving course\finding-lanes\RayanFindinglanes.py", line 81, in <module>
averaged_lines = average_slope_intercept(frame, lines)
File "D:\Self-Driving course\finding-lanes\RayanFindinglanes.py", line 51, in average_slope_intercept
right_line = make_points(image, right_fit_average)
File "D:\Self-Driving course\finding-lanes\RayanFindinglanes.py", line 56, in make_points
slope, intercept = line
TypeError: cannot unpack non-iterable numpy.float64 object
```
What does the error mean, and how do I solve it?
code:
```
import cv2
import numpy as np

def canny(img):
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    kernel = 5
    blur = cv2.GaussianBlur(gray, (kernel, kernel), 0)
    canny = cv2.Canny(blur, 50, 150)
    return canny

def region_of_interest(canny):
    height = canny.shape[0]
    width = canny.shape[1]
    mask = np.zeros_like(canny)
    triangle = np.array([[
        (200, height),
        (550, 250),
        (1100, height),]], np.int32)
    cv2.fillPoly(mask, triangle, 255)
    masked_image = cv2.bitwise_and(canny, mask)
    return masked_image

def display_lines(img, lines):
    line_image = np.zeros_like(img)
    if lines is not None:
        for line in lines:
            for x1, y1, x2, y2 in line:
                cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)
    return line_image

def average_slope_intercept(image, lines):
    left_fit = []
    right_fit = []
    if lines is None:
        return None
    for line in lines:
        for x1, y1, x2, y2 in line:
            fit = np.polyfit((x1, x2), (y1, y2), 1)
            slope = fit[0]
            intercept = fit[1]
            if slope < 0:  # y is reversed in image
                left_fit.append((slope, intercept))
            else:
                right_fit.append((slope, intercept))
    # add more weight to longer lines
    left_fit_average = np.average(left_fit, axis=0)
    right_fit_average = np.average(right_fit, axis=0)
    left_line = make_points(image, left_fit_average)
    right_line = make_points(image, right_fit_average)
    averaged_lines = [left_line, right_line]
    return averaged_lines

def make_points(image, line):
    slope, intercept = line
    y1 = int(image.shape[0])  # bottom of the image
    y2 = int(y1*3/5)  # slightly lower than the middle
    x1 = int((y1 - intercept)/slope)
    x2 = int((y2 - intercept)/slope)
    return [[x1, y1, x2, y2]]

cap = cv2.VideoCapture("test3.mp4")
while cap.isOpened():
    _, frame = cap.read()
    canny_image = canny(frame)
    cropped_canny = region_of_interest(canny_image)
    lines = cv2.HoughLinesP(cropped_canny, 2, np.pi/180, 100, np.array([]), minLineLength=40, maxLineGap=5)
    averaged_lines = average_slope_intercept(frame, lines)
    line_image = display_lines(frame, averaged_lines)
    combo_image = cv2.addWeighted(frame, 0.8, line_image, 1, 1)
    cv2.imshow("result", combo_image)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
|
2019/12/16
|
[
"https://Stackoverflow.com/questions/59357940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12181181/"
] |
In some of the frames all the slopes are > 0, so the `left_fit` list is empty. Because of that, you get an error when calculating the `left_fit` average. One way to solve this is to reuse the `left_fit` average from the previous frame. I solved it using that approach. Please see the code below and let me know if it solves your issue.
```
global_left_fit_average = []
global_right_fit_average = []

def average_slope_intercept(image, lines):
    left_fit = []
    right_fit = []
    global global_left_fit_average
    global global_right_fit_average
    if lines is not None:
        for line in lines:
            x1, y1, x2, y2 = line.reshape(4)
            parameters = np.polyfit((x1, x2), (y1, y2), 1)
            slope = parameters[0]
            intercept = parameters[1]
            if slope < 0:
                left_fit.append((slope, intercept))
            else:
                right_fit.append((slope, intercept))
    if len(left_fit) == 0:
        left_fit_average = global_left_fit_average
    else:
        left_fit_average = np.average(left_fit, axis=0)
        global_left_fit_average = left_fit_average
    right_fit_average = np.average(right_fit, axis=0)
    global_right_fit_average = right_fit_average
    left_line = make_corordinates(image, left_fit_average)
    right_line = make_corordinates(image, right_fit_average)
    return np.array([left_line, right_line])
```
|
`HoughLinesP` returns a list, and that list can be empty rather than `None`.
So these lines in the function `average_slope_intercept`
```
if lines is None:
    return None
```
are not much use on their own.
You also need to check `len(lines) == 0`.
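A hedged sketch of such a guard (the helper name is illustrative, not from OpenCV; `lines` stands in for whatever `HoughLinesP` returned):

```python
def has_lines(lines):
    """Return True only when the Hough transform actually found segments.

    Covers both failure modes: a None result and an empty sequence.
    """
    return lines is not None and len(lines) > 0

print(has_lines(None))                # False
print(has_lines([]))                  # False
print(has_lines([[50, 60, 70, 80]]))  # True
```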
|
55,554,816
|
In Python 2.7, I have one list
```
['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
```
and I need to transform it into a dict like
```
{
'A':['B','C'],
'B':['D','E'],
'C':['F','G'],
'D':['H','I'],
'E':['J','K'],
'F':['L','M'],
'G':['N','O'],
'H':['P','Q'],
'I':[],
'J':[],
'K':[],
'L':[],
'M':[],
'N':[],
'O':[],
'P':[],
'Q':[]
}
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55554816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11322996/"
] |
With `zip()` and `itertools.izip_longest()` you can do it like this:
### Code:
```
import itertools as it
in_data = list('ABCDEFGHIJKLMNOPQ')
out_data = {k: list(v) if v else [] for k, v in
it.izip_longest(in_data, zip(in_data[1::2], in_data[2::2]))}
import json
print(json.dumps(out_data, indent=2))
```
### Results:
```
{
"A": [
"B",
"C"
],
"C": [
"F",
"G"
],
"B": [
"D",
"E"
],
"E": [
"J",
"K"
],
"D": [
"H",
"I"
],
"G": [
"N",
"O"
],
"F": [
"L",
"M"
],
"I": [],
"H": [
"P",
"Q"
],
"K": [],
"J": [],
"M": [],
"L": [],
"O": [],
"N": [],
"Q": [],
"P": []
}
```
|
You can use `itertools`:
```
from itertools import chain, repeat
data = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q']
lists = [[first, second] for first, second in zip(data[1::2], data[2::2])]
result = {char: list(value) for char, value in zip(data, chain(lists, repeat([])))}
result
```
Output:
```
{'A': ['B', 'C'],
'B': ['D', 'E'],
'C': ['F', 'G'],
'D': ['H', 'I'],
'E': ['J', 'K'],
'F': ['L', 'M'],
'G': ['N', 'O'],
'H': ['P', 'Q'],
'I': [],
'J': [],
'K': [],
'L': [],
'M': [],
'N': [],
'O': [],
'P': [],
'Q': []}
```
|
55,554,816
|
In Python 2.7, I have one list
```
['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
```
and I need to transform it into a dict like
```
{
'A':['B','C'],
'B':['D','E'],
'C':['F','G'],
'D':['H','I'],
'E':['J','K'],
'F':['L','M'],
'G':['N','O'],
'H':['P','Q'],
'I':[],
'J':[],
'K':[],
'L':[],
'M':[],
'N':[],
'O':[],
'P':[],
'Q':[]
}
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55554816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11322996/"
] |
With `zip()` and `itertools.izip_longest()` you can do it like this:
### Code:
```
import itertools as it
in_data = list('ABCDEFGHIJKLMNOPQ')
out_data = {k: list(v) if v else [] for k, v in
it.izip_longest(in_data, zip(in_data[1::2], in_data[2::2]))}
import json
print(json.dumps(out_data, indent=2))
```
### Results:
```
{
"A": [
"B",
"C"
],
"C": [
"F",
"G"
],
"B": [
"D",
"E"
],
"E": [
"J",
"K"
],
"D": [
"H",
"I"
],
"G": [
"N",
"O"
],
"F": [
"L",
"M"
],
"I": [],
"H": [
"P",
"Q"
],
"K": [],
"J": [],
"M": [],
"L": [],
"O": [],
"N": [],
"Q": [],
"P": []
}
```
|
You can create the tree-topology dict using the parent/child index relation:
```
def generateTree(arr):
    tree = {}
    for i, v in enumerate(arr):
        tree[v] = []
        if i * 2 + 1 < len(arr):
            tree[v].append(arr[i * 2 + 1])
        if i * 2 + 2 < len(arr):
            tree[v].append(arr[i * 2 + 2])
    return tree
```
output:
`{'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O'], 'H': ['P', 'Q'], 'I': [], 'J': [], 'K': [], 'L': [], 'M': [], 'N': [], 'O': [], 'P': [], 'Q': []}`
Hope that will help you, and comment if you have further questions. : )
|
55,554,816
|
In Python 2.7, I have one list
```
['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
```
and I need to transform it into a dict like
```
{
'A':['B','C'],
'B':['D','E'],
'C':['F','G'],
'D':['H','I'],
'E':['J','K'],
'F':['L','M'],
'G':['N','O'],
'H':['P','Q'],
'I':[],
'J':[],
'K':[],
'L':[],
'M':[],
'N':[],
'O':[],
'P':[],
'Q':[]
}
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55554816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11322996/"
] |
With `zip()` and `itertools.izip_longest()` you can do it like this:
### Code:
```
import itertools as it
in_data = list('ABCDEFGHIJKLMNOPQ')
out_data = {k: list(v) if v else [] for k, v in
it.izip_longest(in_data, zip(in_data[1::2], in_data[2::2]))}
import json
print(json.dumps(out_data, indent=2))
```
### Results:
```
{
"A": [
"B",
"C"
],
"C": [
"F",
"G"
],
"B": [
"D",
"E"
],
"E": [
"J",
"K"
],
"D": [
"H",
"I"
],
"G": [
"N",
"O"
],
"F": [
"L",
"M"
],
"I": [],
"H": [
"P",
"Q"
],
"K": [],
"J": [],
"M": [],
"L": [],
"O": [],
"N": [],
"Q": [],
"P": []
}
```
|
try this:
```
LL = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
dd = {}
for i, e in enumerate(LL):
    LLL = []
    if ((i+1) + len(dd) < len(LL)): LLL = [LL[((i+1) + len(dd))], LL[((i+1) + len(dd))+1]]
    dd[e] = LLL
print dd
{'A': ['B', 'C'],
'B': ['D', 'E'],
'C': ['F', 'G'],
'D': ['H', 'I'],
'E': ['J', 'K'],
'F': ['L', 'M'],
'G': ['N', 'O'],
'H': ['P', 'Q'],
'I': [],
'J': [],
'K': [],
'L': [],
'M': [],
'N': [],
'O': [],
'P': [],
'Q': []}
```
a little more readable:
```
dd = {}
for i, e in enumerate(LL):
    LLL = []
    intv = (i+1) + len(dd)
    if (intv < len(LL)): LLL = [LL[(intv)], LL[(intv)+1]]
    dd[e] = LLL
print dd
```
|
55,554,816
|
In Python 2.7, I have one list
```
['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
```
and I need to transform it into a dict like
```
{
'A':['B','C'],
'B':['D','E'],
'C':['F','G'],
'D':['H','I'],
'E':['J','K'],
'F':['L','M'],
'G':['N','O'],
'H':['P','Q'],
'I':[],
'J':[],
'K':[],
'L':[],
'M':[],
'N':[],
'O':[],
'P':[],
'Q':[]
}
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55554816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11322996/"
] |
Here's an old-fashioned, fairly optimized method that uses the array-indexing scheme described in the following tutorial: <https://www.geeksforgeeks.org/construct-complete-binary-tree-given-array/>
The first line fills out the non-leaf nodes with the values of their children; the second line fills the leaves with empty lists. Note that the number of internal nodes is (len(values) // 2).
```
values = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
dictionary = {values[i]:[values[2*i+1], values[2*i+2]] for i in range((len(values) // 2))}
dictionary.update({values[i]:[] for i in range(len(values) // 2, len(values))})
```
|
You can use `itertools`:
```
from itertools import chain, repeat
data = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q']
lists = [[first, second] for first, second in zip(data[1::2], data[2::2])]
result = {char: list(value) for char, value in zip(data, chain(lists, repeat([])))}
result
```
Output:
```
{'A': ['B', 'C'],
'B': ['D', 'E'],
'C': ['F', 'G'],
'D': ['H', 'I'],
'E': ['J', 'K'],
'F': ['L', 'M'],
'G': ['N', 'O'],
'H': ['P', 'Q'],
'I': [],
'J': [],
'K': [],
'L': [],
'M': [],
'N': [],
'O': [],
'P': [],
'Q': []}
```
|
55,554,816
|
In Python 2.7, I have one list
```
['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
```
and I need to transform it into a dict like
```
{
'A':['B','C'],
'B':['D','E'],
'C':['F','G'],
'D':['H','I'],
'E':['J','K'],
'F':['L','M'],
'G':['N','O'],
'H':['P','Q'],
'I':[],
'J':[],
'K':[],
'L':[],
'M':[],
'N':[],
'O':[],
'P':[],
'Q':[]
}
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55554816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11322996/"
] |
```
alphabet=['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
d = {}  # empty dictionary
counter = 2
for i in range(0, len(alphabet)):
    if i == 0:  # at letter 'A' only
        lst = [alphabet[i+1], alphabet[i+2]]  # lst will be used as the value of the key in the dictionary
    elif i < (len(alphabet)-1)/2:  # at letters 'B' through 'H'
        lst = [alphabet[i+counter], alphabet[i+counter+1]]  # lst will be used as the value of the key in the dictionary
        counter += 1  # increment counter
    else:  # all letters after 'H'
        lst = []  # an empty list used as the value of the key in the dictionary
    d[alphabet[i]] = lst  # add 'lst' as the value for the letter key in the dictionary
print(d)  # print the dictionary
# {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O'], 'H': ['P', 'Q'], 'I': [], 'J': [], 'K': [], 'L': [], 'M': [], 'N': [], 'O': [], 'P': [], 'Q': []}
```
|
You can use `itertools`:
```
from itertools import chain, repeat
data = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q']
lists = [[first, second] for first, second in zip(data[1::2], data[2::2])]
result = {char: list(value) for char, value in zip(data, chain(lists, repeat([])))}
result
```
Output:
```
{'A': ['B', 'C'],
'B': ['D', 'E'],
'C': ['F', 'G'],
'D': ['H', 'I'],
'E': ['J', 'K'],
'F': ['L', 'M'],
'G': ['N', 'O'],
'H': ['P', 'Q'],
'I': [],
'J': [],
'K': [],
'L': [],
'M': [],
'N': [],
'O': [],
'P': [],
'Q': []}
```
|
55,554,816
|
In Python 2.7, I have one list
```
['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
```
and I need to transform it into a dict like
```
{
'A':['B','C'],
'B':['D','E'],
'C':['F','G'],
'D':['H','I'],
'E':['J','K'],
'F':['L','M'],
'G':['N','O'],
'H':['P','Q'],
'I':[],
'J':[],
'K':[],
'L':[],
'M':[],
'N':[],
'O':[],
'P':[],
'Q':[]
}
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55554816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11322996/"
] |
Here's an old-fashioned, fairly optimized method that uses the array-indexing scheme described in the following tutorial: <https://www.geeksforgeeks.org/construct-complete-binary-tree-given-array/>
The first line fills out the non-leaf nodes with the values of their children; the second line fills the leaves with empty lists. Note that the number of internal nodes is (len(values) // 2).
```
values = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
dictionary = {values[i]:[values[2*i+1], values[2*i+2]] for i in range((len(values) // 2))}
dictionary.update({values[i]:[] for i in range(len(values) // 2, len(values))})
```
|
You can create the tree-topology dict using the parent/child index relation:
```
def generateTree(arr):
    tree = {}
    for i, v in enumerate(arr):
        tree[v] = []
        if i * 2 + 1 < len(arr):
            tree[v].append(arr[i * 2 + 1])
        if i * 2 + 2 < len(arr):
            tree[v].append(arr[i * 2 + 2])
    return tree
```
output:
`{'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O'], 'H': ['P', 'Q'], 'I': [], 'J': [], 'K': [], 'L': [], 'M': [], 'N': [], 'O': [], 'P': [], 'Q': []}`
Hope that will help you, and comment if you have further questions. : )
|
55,554,816
|
In python2.7, I have one list
```
['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
```
and I need to transform into a dict like
```
{
'A':['B','C'],
'B':['D','E'],
'C':['F','G'],
'D':['H','I'],
'E':['J','K'],
'F':['L','M'],
'G':['N','O'],
'H':['P','Q'],
'I':[],
'J':[],
'K':[],
'L':[],
'M':[],
'N':[],
'O':[],
'P':[],
'Q':[]
}
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55554816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11322996/"
] |
```
alphabet=['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
d={} # empty dictionary
counter=2
for i in range(0,len(alphabet)):
if i==0: # at letter 'A' only
lst=[alphabet[i+1],alphabet[i+2]] # lst that will be used as value of key in dictionary
elif i<(len(alphabet)-1)/2: # at letter 'B' through 'H'
lst=[alphabet[i+counter],alphabet[i+counter+1]] # lst that will be used as value of key in dictionary
counter+=1 # increment counter
else: # all letters after 'H'
lst=[] # an empty list that will be used as value of key in dictionary
d[alphabet[i]]=lst # add 'lst' as a value for the letter key in the dictionary
print(d) # print the dictionary
# {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O'], 'H': ['P', 'Q'], 'I': [], 'J': [], 'K': [], 'L': [], 'M': [], 'N': [], 'O': [], 'P': [], 'Q': []}
```
|
you can create tree topology dict by index relation:
```
def generateTree(arr):
tree = {}
for i, v in enumerate(arr):
tree[v] = []
if i * 2 + 1 < len(arr):
tree[v].append(arr[i * 2 + 1])
if i * 2 + 2 < len(arr):
tree[v].append(arr[i * 2 + 2])
return tree
```
output:
`{'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O'], 'H': ['P', 'Q'], 'I': [], 'J': [], 'K': [], 'L': [], 'M': [], 'N': [], 'O': [], 'P': [], 'Q': []}`
Hope that will help you, and comment if you have further questions. : )
|
55,554,816
|
In python2.7, I have one list
```
['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
```
and I need to transform into a dict like
```
{
'A':['B','C'],
'B':['D','E'],
'C':['F','G'],
'D':['H','I'],
'E':['J','K'],
'F':['L','M'],
'G':['N','O'],
'H':['P','Q'],
'I':[],
'J':[],
'K':[],
'L':[],
'M':[],
'N':[],
'O':[],
'P':[],
'Q':[]
}
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55554816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11322996/"
] |
Here's an old-fashioned method that's pretty well optimized; it uses the array-indexing scheme described in the following tutorial: <https://www.geeksforgeeks.org/construct-complete-binary-tree-given-array/>
The first line fills out the non-leaves with the values of their children. The second line fills the leaves with empty lists. Note that the number of internal nodes is `len(values) // 2`.
```
values = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
dictionary = {values[i]:[values[2*i+1], values[2*i+2]] for i in range((len(values) // 2))}
# start at len(values) // 2, not len(values) // 2 + 1, so the first leaf ('I') is included
dictionary.update({values[i]:[] for i in range(len(values) // 2, len(values))})
```
|
try this:
```
LL = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
dd = {}
for i,e in enumerate(LL):
LLL = []
if ((i+1) + len(dd) < len(LL)): LLL = [LL[((i+1) + len(dd))], LL[((i+1) + len(dd))+1]]
dd[e] = LLL
print dd
{'A': ['B', 'C'],
'B': ['D', 'E'],
'C': ['F', 'G'],
'D': ['H', 'I'],
'E': ['J', 'K'],
'F': ['L', 'M'],
'G': ['N', 'O'],
'H': ['P', 'Q'],
'I': [],
'J': [],
'K': [],
'L': [],
'M': [],
'N': [],
'O': [],
'P': [],
'Q': []}
```
a little more readable:
```
dd = {}
for i,e in enumerate(LL):
LLL = []
intv = (i+1) + len(dd)
if (intv < len(LL)): LLL = [LL[(intv)], LL[(intv)+1]]
dd[e] = LLL
print dd
```
|
55,554,816
|
In python2.7, I have one list
```
['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
```
and I need to transform into a dict like
```
{
'A':['B','C'],
'B':['D','E'],
'C':['F','G'],
'D':['H','I'],
'E':['J','K'],
'F':['L','M'],
'G':['N','O'],
'H':['P','Q'],
'I':[],
'J':[],
'K':[],
'L':[],
'M':[],
'N':[],
'O':[],
'P':[],
'Q':[]
}
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55554816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11322996/"
] |
```
alphabet=['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
d={} # empty dictionary
counter=2
for i in range(0,len(alphabet)):
if i==0: # at letter 'A' only
lst=[alphabet[i+1],alphabet[i+2]] # lst that will be used as value of key in dictionary
elif i<(len(alphabet)-1)/2: # at letter 'B' through 'H'
lst=[alphabet[i+counter],alphabet[i+counter+1]] # lst that will be used as value of key in dictionary
counter+=1 # increment counter
else: # all letters after 'H'
lst=[] # an empty list that will be used as value of key in dictionary
d[alphabet[i]]=lst # add 'lst' as a value for the letter key in the dictionary
print(d) # print the dictionary
# {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O'], 'H': ['P', 'Q'], 'I': [], 'J': [], 'K': [], 'L': [], 'M': [], 'N': [], 'O': [], 'P': [], 'Q': []}
```
|
try this:
```
LL = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
dd = {}
for i,e in enumerate(LL):
LLL = []
if ((i+1) + len(dd) < len(LL)): LLL = [LL[((i+1) + len(dd))], LL[((i+1) + len(dd))+1]]
dd[e] = LLL
print dd
{'A': ['B', 'C'],
'B': ['D', 'E'],
'C': ['F', 'G'],
'D': ['H', 'I'],
'E': ['J', 'K'],
'F': ['L', 'M'],
'G': ['N', 'O'],
'H': ['P', 'Q'],
'I': [],
'J': [],
'K': [],
'L': [],
'M': [],
'N': [],
'O': [],
'P': [],
'Q': []}
```
a little more readable:
```
dd = {}
for i,e in enumerate(LL):
LLL = []
intv = (i+1) + len(dd)
if (intv < len(LL)): LLL = [LL[(intv)], LL[(intv)+1]]
dd[e] = LLL
print dd
```
|
51,961,416
|
I am trying to copy lines from text files inside zip archives by matching a partial string.
The zip archives are in a shared folder.
Is there a way to copy the matching strings from the text files and send them to one output text file?
How can I do this using Python?
Is it possible with zip\_archive?
I tried using this, with no luck:
```
zf = zipfile.ZipFile('C:/Users/Analytics Vidhya/Desktop/test.zip')
# having First.csv zipped file.
df = pd.read_csv(zf.open('First.csv'))
```
|
2018/08/22
|
[
"https://Stackoverflow.com/questions/51961416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10258075/"
] |
Unlike *@strava*'s answer, you don't actually have to extract anything: `zipfile` gives you an excellent [API](https://docs.python.org/3/library/zipfile.html) for manipulating files inside the archive. Here is a simple example that reads each file in a zip (I zipped only one `.txt` file):
```
import zipfile
zip_path = r'C:\Users\avi_na\Desktop\a.zip'
niddle = '2'
zf = zipfile.ZipFile(zip_path)
for file_name in zf.namelist():
print(file_name, zf.read(file_name))
if(niddle in str(zf.read(file_name))):
print('found substring!!')
```
output:
```
a.txt b'1\r\n2\r\n3\r\n'
found substring!!
```
Using this example you can easily elaborate: read each of the files, search its text for your string, and write the matches to an output file.
For more info, check `printdir, read, write, open, close` members of `zipfile.ZipFile`
If you just want to extract and then use `pd.read_csv`, that's also fine:
```
import zipfile
zip_path = r'...\a.zip'
unzip_dir = "unzip_dir"
zf = zipfile.ZipFile(zip_path)
for file_name in zf.namelist():
if '.csv' in file_name: # make sure file is .csv
zf.extract(file_name, unzip_dir)
df = pd.read_csv(open("{}/{}".format(unzip_dir,file_name)))
```
|
You could try extracting them first, and then treating them as normal csv files
```
zf = zipfile.ZipFile( path to zip )
zf.extract('first.csv', path to save directory )
file = open('path\first.csv')
```
|
51,961,416
|
I am trying to copy lines from text files inside zip archives by matching a partial string.
The zip archives are in a shared folder.
Is there a way to copy the matching strings from the text files and send them to one output text file?
How can I do this using Python?
Is it possible with zip\_archive?
I tried using this, with no luck:
```
zf = zipfile.ZipFile('C:/Users/Analytics Vidhya/Desktop/test.zip')
# having First.csv zipped file.
df = pd.read_csv(zf.open('First.csv'))
```
|
2018/08/22
|
[
"https://Stackoverflow.com/questions/51961416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10258075/"
] |
Unlike *@strava*'s answer, you don't actually have to extract anything: `zipfile` gives you an excellent [API](https://docs.python.org/3/library/zipfile.html) for manipulating files inside the archive. Here is a simple example that reads each file in a zip (I zipped only one `.txt` file):
```
import zipfile
zip_path = r'C:\Users\avi_na\Desktop\a.zip'
niddle = '2'
zf = zipfile.ZipFile(zip_path)
for file_name in zf.namelist():
print(file_name, zf.read(file_name))
if(niddle in str(zf.read(file_name))):
print('found substring!!')
```
output:
```
a.txt b'1\r\n2\r\n3\r\n'
found substring!!
```
Using this example you can easily elaborate: read each of the files, search its text for your string, and write the matches to an output file.
For more info, check `printdir, read, write, open, close` members of `zipfile.ZipFile`
If you just want to extract and then use `pd.read_csv`, that's also fine:
```
import zipfile
zip_path = r'...\a.zip'
unzip_dir = "unzip_dir"
zf = zipfile.ZipFile(zip_path)
for file_name in zf.namelist():
if '.csv' in file_name: # make sure file is .csv
zf.extract(file_name, unzip_dir)
df = pd.read_csv(open("{}/{}".format(unzip_dir,file_name)))
```
|
Here is a script that does what *I suppose* you need. Please note that we DO NOT need `pandas` if we simply want to match a string against a line's content (it would be different if you wanted to match on a specific field, a numerical value, etc.), hence I am not using `pandas`...
```
$ cat zip.py
from sys import argv
from zipfile import ZipFile
'''
Usage python zip.py string ziparchive1 [ziparchive2 ...]
'''
def interesting(fn):
# just an example
return fn[-4:].lower() == '.csv'
script, text = argv[0], argv[1]
# all remaining arguments are archive names; slicing always gives us a
# list, even when only a single archive is passed
zips = argv[2:]
# iterate on archives' names
for arch_name in zips:
# instance a zipfile object
archive = ZipFile(arch_name)
# iterate on the names of contained files
for file_name in archive.namelist():
# skip directories
if file_name[-1] == '/': continue
if interesting(file_name):
line_no = 0
for line in archive.open(file_name):
line_no += 1
                if text.encode() in line:  # lines read from the archive are bytes
# here I just print the matching line, if you need
# something more complex you can use the
# 'arch_name', the 'file_name' and the 'line_no'
# to pinpoint the position of the matching line
print(line)
$
```
|
69,979,362
|
I have a csv that I need to convert to XML using Python. I'm a novice python dev.
Example CSV data:
```
Amount,Code
CODE50,1246
CODE50,6290
CODE25,1077
CODE25,9790
CODE100,5319
CODE100,4988
```
Necessary output XML
```
<coupon-codes coupon-id="CODE50">
    <code>1246</code>
    <code>6290</code>
</coupon-codes>
<coupon-codes coupon-id="CODE25">
    <code>1077</code>
    <code>9790</code>
</coupon-codes>
<coupon-codes coupon-id="CODE100">
    <code>5319</code>
    <code>4988</code>
</coupon-codes>
```
My guess is I have to use pandas to pull the csv in, use `pandas.groupby` to group the `Amount` column, then push this into an element/subelement to create the xml, then print/push to a xml file. I can't get the `groupby` to work and don't know how to then push that into the element, then populate the sub element.
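For reference, the transformation can be sketched with the standard library alone; pandas is not actually required for this grouping step. In this sketch, `raw` stands in for the CSV file's contents, and the column and tag names follow the sample data above:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Stand-in for the CSV file from the question.
raw = """Amount,Code
CODE50,1246
CODE50,6290
CODE25,1077
CODE25,9790
CODE100,5319
CODE100,4988
"""

# Group the codes by coupon id, preserving first-seen order
# (plain dicts keep insertion order in Python 3.7+).
groups = {}
for row in csv.DictReader(io.StringIO(raw)):
    groups.setdefault(row["Amount"], []).append(row["Code"])

# Build one <coupon-codes> element per group, with a <code> child per value.
chunks = []
for coupon_id, codes in groups.items():
    elem = ET.Element("coupon-codes", {"coupon-id": coupon_id})
    for code in codes:
        ET.SubElement(elem, "code").text = code
    chunks.append(ET.tostring(elem, encoding="unicode"))

xml_out = "\n".join(chunks)
print(xml_out)
```

To read from a real file instead, replace `io.StringIO(raw)` with `open("coupons.csv", newline="")` and write `xml_out` to the target file.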
|
2021/11/15
|
[
"https://Stackoverflow.com/questions/69979362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9803155/"
] |
Here how you do it:
```
Expanded(
child: CustomPaint(
painter: MyCustomPainter(),
size: Size.infinite,
),
),
```
|
You can get height and width of the screen by using MediaQuery and use them
```
final width = MediaQuery.of(context).size.width;
final height = MediaQuery.of(context).size.height;
return CustomPaint(
size: Size(width,height),
painter: Sky(),
child: const Center(
child: Text(
'Once upon a time...',
style: TextStyle(
fontSize: 40.0,
fontWeight: FontWeight.w900,
color: Color(0xFFFFFFFF),
),
),
),
);
```
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
>
> $ time python python.py
>
> 100000000
>
>
> real 0m48.100s
>
> user 0m45.633s
>
> sys 0m0.043s
>
>
>
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
>
> $ time perl perl.pl
>
> 100000000
>
>
> real 0m24.757s
>
> user 0m22.341s
>
> sys 0m0.029s
>
>
>
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
Python is slower than Perl. It may be faster to develop in, but it doesn't execute faster. Here is one benchmark: <http://xodian.net/serendipity/index.php?/archives/27-Benchmark-PHP-vs.-Python-vs.-Perl-vs.-Ruby.html> -edit- a terrible benchmark, but it is at least a real benchmark with numbers and not just a guess. Too bad there's no source or test other than a loop.
|
I'm not up-to-date on everything with Python, but my first idea about this benchmark was the difference between Perl and Python numbers. In Perl, we have numbers. They aren't objects, and their precision is limited to the sizes imposed by the architecture. In Python, we have objects with arbitrary precision. For small numbers (those that fit in 32-bit), I'd expect Perl to be faster. If we go over the integer size of the architecture, the Perl script won't even work without some modification.
I see similar results for the original benchmark on my MacBook Air (32-bit) using a Perl 5.10.1 that I compiled myself and the Python 2.5.1 that came with Leopard.
However, I added arbitrary precision to the Perl program with the [bignum](http://perldoc.perl.org/bignum.html) pragma:
```
use bignum;
```
Now I wonder if the Perl version is ever going to finish. :) I'll post some results when it finishes, but it looks like it's going to be an order of magnitude difference.
Some of you may have seen my question about [What are five things you hate about your favorite language?](https://stackoverflow.com/questions/282329/what-are-five-things-you-hate-about-your-favorite-language). Perl's default number handling is one of the things that I hate. I should never have to think about it, and it shouldn't be slow. In Perl, I lose on both counts. Note, however, that if I needed numeric processing in Perl, I could use [PDL](http://pdl.perl.org).
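For contrast, Python's integers are arbitrary-precision by default, so the equivalent Python loop keeps working past the machine word size with no pragma at all. A quick illustration:

```python
# Python ints silently promote past 32/64-bit limits: no overflow,
# the objects just grow as needed.
n = 2 ** 64 + 1
print(n)            # a value no 64-bit machine integer can hold
print(n * n > n)    # still exact arithmetic, just bigger objects
```

This convenience is exactly what costs Python some speed on small numbers: every integer is a full object, not a raw machine word.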
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
>
> $ time python python.py
>
> 100000000
>
>
> real 0m48.100s
>
> user 0m45.633s
>
> sys 0m0.043s
>
>
>
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
>
> $ time perl perl.pl
>
> 100000000
>
>
> real 0m24.757s
>
> user 0m22.341s
>
> sys 0m0.029s
>
>
>
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
The nit-picking answer is that you should compare it to idiomatic Python:
* The original code takes **34** seconds on my machine.
* A `for` loop ([FlorianH's answer](https://stackoverflow.com/a/1984904/239657)) with `+=` and `xrange()` takes **21**.
* Putting the whole thing in a function reduces it to **9** seconds!
**That's much faster than Perl** (15 seconds on my machine)!
Explanation: [Python local vars are much faster than globals](http://wiki.python.org/moin/PythonSpeed/PerformanceTips#LocalVariables).
(For fairness, I also tried a function in Perl - no change)
* Getting rid of the j variable reduced it to **8** seconds:
`print sum(1 for i in xrange(100000000))`
Python has the strange property that higher-level shorter code tends to be fastest :-)
But the real answer is that your "micro-benchmark" is meaningless.
The real question of language speed is: what's the performance of an average real application? To know that, you should take into account:
* Typical mix of operations in complex code.
Your code doesn't contain any data structures, function calls, or OOP operations.
* A large enough codebase to feel cache effects — many interpreter optimizations trade memory for speed, which is not measured fairly by *any* tiny benchmark.
* [Optimization](http://www.python.org/doc/essays/list2str.html) [opportunities](http://wiki.python.org/moin/PythonSpeed/PerformanceTips): after you write your code, IF it's not fast enough,
how much faster can you easily make it?
E.g. how hard is it to offload the heavy lifting to efficient C libraries?
[PyPy's benchmarks](http://speed.pypy.org) and [Octane](https://developers.google.com/octane/benchmark) are good examples of what realistic language speed benchmarks look like.
If you want to talk number crunching, Python *IS* surprisingly popular with scientists.
They love it for the simple pseudo-math syntax and short learning curve, but also for the excellent [numpy](http://numpy.scipy.org/) library for array crunching and the ease of wrapping other existing C code.
And then there is the [Psyco JIT](http://psyco.sourceforge.net/) which would probably run your toy example well under 1 second, but I can't check it now because it only works on 32-bit x86.
*EDIT*: Nowadays, skip Psyco and use [PyPy](http://pypy.org/), which is a cross-platform, actively improving JIT.
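The local-variable speedup described above is easy to reproduce. Here is a minimal timing sketch (the loop is shortened to 1M iterations, and exact numbers vary by machine and interpreter):

```python
import time

# The question's loop, executed so that i and j live in a plain dict,
# i.e. every access is a global (module-level) name lookup.
code = compile(
    "i = 0\n"
    "j = 0\n"
    "while i < 1000000:\n"
    "    i = i + 1\n"
    "    j = j + 1\n",
    "<bench>", "exec")

t0 = time.perf_counter()
exec(code, {})
t_global = time.perf_counter() - t0

# The same loop inside a function: locals are array-indexed, not dict keys.
def local_loop():
    i = 0
    j = 0
    while i < 1000000:
        i = i + 1
        j = j + 1
    return j

t0 = time.perf_counter()
local_loop()
t_local = time.perf_counter() - t0

print("globals: %.3fs  locals: %.3fs" % (t_global, t_local))
```

On CPython the function version is typically several times faster, which is the whole point of moving hot loops into functions.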
|
Python is not particularly fast at numeric computations, and I'm sure it's slower than Perl when it comes to text processing.
Since you're an experienced Perl hand, I don't know if this applies to you but Python programs in the long run tend to be more maintainable and are quicker to develop. The speed is 'enough' for most situations and you have the flexibility to drop down into C when you *really* need a performance boost.
Update
------
Okay. I just created a large file (1GB) with random data in it (mostly ascii) and broke it into lines of equal lengths. This was supposed to simulate a log file.
I then ran simple perl and python programs that search the file line by line for an existing pattern.
With Python 2.6.2, the results were
```
real 0m18.364s
user 0m9.209s
sys 0m0.956s
```
and with Perl 5.10.0
```
real 0m17.639s
user 0m5.692s
sys 0m0.844s
```
The programs are as follows (please let me know if I'm doing something stupid)
```
import re
regexp = re.compile("p06c")
def search():
with open("/home/arif/f") as f:
for i in f:
if regexp.search(i):
print "Found : %s"%i
search()
```
and
```
sub search() {
open FOO,"/home/arif/f" or die $!;
while (<FOO>) {
print "Found : $_\n" if /p06c/o;
}
}
search();
```
The results are pretty close and tweaking it this way or other don't seem to alter the results much. I don't know if this is a *true* benchmark but I think it'd be the way I'd search log files in the two languages so I stand corrected about the relative performances.
Thanks Chris.
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
>
> $ time python python.py
>
> 100000000
>
>
> real 0m48.100s
>
> user 0m45.633s
>
> sys 0m0.043s
>
>
>
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
>
> $ time perl perl.pl
>
> 100000000
>
>
> real 0m24.757s
>
> user 0m22.341s
>
> sys 0m0.029s
>
>
>
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
All this micro-benchmarking can get a bit silly!
For example, just switching to `for` in both Python and Perl provides a hefty speed bump. The original Perl example would be twice as quick if `for` were used:
```
my $j = 0;
for my $i (1..100000000) {
++$j;
}
print $j;
```
And I can shave off a bit more with this:
```
++$j for 1..100000000;
print $j;
```
And getting even sillier we can get it down to 1 second here ;-)
```
print {STDOUT} (1..10000000)[-1];
```
/I3az/
*ref*: Perl 5.10.1 used.
|
>
> is Python really that much slower than
> Perl?
>
>
>
Look at the **Computer Language Benchmarks Game** - "Compare the performance of ≈30 programming languages using ≈12 flawed benchmarks and ≈1100 programs".
They are only tiny benchmark programs but they still do a lot more than the code snippet you have timed -
[**http://shootout.alioth.debian.org/u32/python.php**](http://shootout.alioth.debian.org/u32/python.php)
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
>
> $ time python python.py
>
> 100000000
>
>
> real 0m48.100s
>
> user 0m45.633s
>
> sys 0m0.043s
>
>
>
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
>
> $ time perl perl.pl
>
> 100000000
>
>
> real 0m24.757s
>
> user 0m22.341s
>
> sys 0m0.029s
>
>
>
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
Python runs very fast if you use idiomatic syntax for the language, roughly described as "[pythonic](http://faassen.n--tree.net/blog/view/weblog/2005/08/06/0)".
If you restructure your code like this, it will run at least twice as fast (well, it does on my machine):
```
j = 0
for i in range(100000000):
j = j + 1
print j
```
Whenever you use a `while` loop in Python, check whether you could use `for x in range()` instead.
|
Python is slower than Perl. It may be faster to develop in, but it doesn't execute faster. Here is one benchmark: <http://xodian.net/serendipity/index.php?/archives/27-Benchmark-PHP-vs.-Python-vs.-Perl-vs.-Ruby.html> -edit- a terrible benchmark, but it is at least a real benchmark with numbers and not just a guess. Too bad there's no source or test other than a loop.
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
>
> $ time python python.py
>
> 100000000
>
>
> real 0m48.100s
>
> user 0m45.633s
>
> sys 0m0.043s
>
>
>
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
>
> $ time perl perl.pl
>
> 100000000
>
>
> real 0m24.757s
>
> user 0m22.341s
>
> sys 0m0.029s
>
>
>
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
Python runs very fast if you use idiomatic syntax for the language, roughly described as "[pythonic](http://faassen.n--tree.net/blog/view/weblog/2005/08/06/0)".
If you restructure your code like this, it will run at least twice as fast (well, it does on my machine):
```
j = 0
for i in range(100000000):
j = j + 1
print j
```
Whenever you use a `while` loop in Python, check whether you could use `for x in range()` instead.
|
>
> is Python really that much slower than
> Perl?
>
>
>
Look at the **Computer Language Benchmarks Game** - "Compare the performance of ≈30 programming languages using ≈12 flawed benchmarks and ≈1100 programs".
They are only tiny benchmark programs but they still do a lot more than the code snippet you have timed -
[**http://shootout.alioth.debian.org/u32/python.php**](http://shootout.alioth.debian.org/u32/python.php)
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
>
> $ time python python.py
>
> 100000000
>
>
> real 0m48.100s
>
> user 0m45.633s
>
> sys 0m0.043s
>
>
>
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
>
> $ time perl perl.pl
>
> 100000000
>
>
> real 0m24.757s
>
> user 0m22.341s
>
> sys 0m0.029s
>
>
>
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
Python maintains global variables in a dictionary. Therefore, each time there is an assignment, the interpreter performs a **lookup on the module dictionary**, which is somewhat expensive; this is why you found your example so much slower.
In order to improve performance, you should use local allocation, e.g. by putting the loop in a function. The Python interpreter stores local variables in an array, with much faster access.
However, it should be noted that this is an **implementation detail** of CPython; I suspect IronPython, for instance, would lead to a completely different result.
Finally, for more information on this topic, I suggest you an interesting essay from GvR, about optimization in Python: [Python Patterns - An Optimization Anecdote](http://www.python.org/doc/essays/list2str.html).
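The bytecode makes this implementation detail visible: the `dis` module shows dict-based `*_GLOBAL` opcodes for module-level names versus array-based `*_FAST` opcodes inside a function (a sketch; the exact opcode set varies slightly across CPython versions):

```python
import dis

counter = 0

def bump_global():
    # 'counter' is a module-level name: compiled to
    # LOAD_GLOBAL / STORE_GLOBAL, i.e. dictionary lookups.
    global counter
    counter = counter + 1

def bump_local():
    # 'counter' is a local here: compiled to
    # LOAD_FAST / STORE_FAST, i.e. indexed array access.
    counter = 0
    counter = counter + 1
    return counter

global_ops = {ins.opname for ins in dis.get_instructions(bump_global)}
local_ops = {ins.opname for ins in dis.get_instructions(bump_local)}
print(sorted(global_ops))
print(sorted(local_ops))
```

This is specific to CPython; other implementations (IronPython, PyPy) compile name access differently, as the answer above notes.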
|
Python is slower than Perl. It may be faster to develop in, but it doesn't execute faster. Here is one benchmark: <http://xodian.net/serendipity/index.php?/archives/27-Benchmark-PHP-vs.-Python-vs.-Perl-vs.-Ruby.html> -edit- a terrible benchmark, but it is at least a real benchmark with numbers and not just a guess. Too bad there's no source or test other than a loop.
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
>
> $ time python python.py
>
> 100000000
>
>
> real 0m48.100s
>
> user 0m45.633s
>
> sys 0m0.043s
>
>
>
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
>
> $ time perl perl.pl
>
> 100000000
>
>
> real 0m24.757s
>
> user 0m22.341s
>
> sys 0m0.029s
>
>
>
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
The nit-picking answer is that you should compare it to idiomatic Python:
* The original code takes **34** seconds on my machine.
* A `for` loop ([FlorianH's answer](https://stackoverflow.com/a/1984904/239657)) with `+=` and `xrange()` takes **21**.
* Putting the whole thing in a function reduces it to **9** seconds!
**That's much faster than Perl** (15 seconds on my machine)!
Explanation: [Python local vars are much faster than globals](http://wiki.python.org/moin/PythonSpeed/PerformanceTips#LocalVariables).
(For fairness, I also tried a function in Perl - no change)
* Getting rid of the j variable reduced it to **8** seconds:
`print sum(1 for i in xrange(100000000))`
Python has the strange property that higher-level shorter code tends to be fastest :-)
But the real answer is that your "micro-benchmark" is meaningless.
The real question of language speed is: what's the performance of an average real application? To know that, you should take into account:
* Typical mix of operations in complex code.
Your code doesn't contain any data structures, function calls, or OOP operations.
* A large enough codebase to feel cache effects — many interpreter optimizations trade memory for speed, which is not measured fairly by *any* tiny benchmark.
* [Optimization](http://www.python.org/doc/essays/list2str.html) [opportunities](http://wiki.python.org/moin/PythonSpeed/PerformanceTips): after you write your code, IF it's not fast enough,
how much faster can you easily make it?
E.g. how hard is it to offload the heavy lifting to efficient C libraries?
[PyPy's benchmarks](http://speed.pypy.org) and [Octane](https://developers.google.com/octane/benchmark) are good examples of what realistic language speed benchmarks look like.
If you want to talk number crunching, Python *IS* surprisingly popular with scientists.
They love it for the simple pseudo-math syntax and short learning curve, but also for the excellent [numpy](http://numpy.scipy.org/) library for array crunching and the ease of wrapping other existing C code.
And then there is the [Psyco JIT](http://psyco.sourceforge.net/) which would probably run your toy example well under 1 second, but I can't check it now because it only works on 32-bit x86.
*EDIT*: Nowadays, skip Psyco and use [PyPy](http://pypy.org/), which is a cross-platform, actively improving JIT.
|
I'm not up-to-date on everything with Python, but my first idea about this benchmark was the difference between Perl and Python numbers. In Perl, we have numbers. They aren't objects, and their precision is limited to the sizes imposed by the architecture. In Python, we have objects with arbitrary precision. For small numbers (those that fit in 32-bit), I'd expect Perl to be faster. If we go over the integer size of the architecture, the Perl script won't even work without some modification.
I see similar results for the original benchmark on my MacBook Air (32-bit) using a Perl 5.10.1 that I compiled myself and the Python 2.5.1 that came with Leopard:
However, I added arbitrary precision to the Perl program with the [bignum](http://perldoc.perl.org/bignum.html) pragma:
```
use bignum;
```
Now I wonder if the Perl version is ever going to finish. :) I'll post some results when it finishes, but it looks like it's going to be an order of magnitude difference.
Some of you may have seen my question about [What are five things you hate about your favorite language?](https://stackoverflow.com/questions/282329/what-are-five-things-you-hate-about-your-favorite-language). Perl's default numbers are one of the things that I hate. I should never have to think about it, and it shouldn't be slow. In Perl, I lose on both counts. Note, however, that if I needed numeric processing in Perl, I could use [PDL](http://pdl.perl.org).
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
>
> $ time python python.py
>
> 100000000
>
>
> real 0m48.100s
>
> user 0m45.633s
>
> sys 0m0.043s
>
>
>
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
>
> $ time perl perl.pl
>
> 100000000
>
>
> real 0m24.757s
>
> user 0m22.341s
>
> sys 0m0.029s
>
>
>
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
The nit-picking answer is that you should compare it to idiomatic Python:
* The original code takes **34** seconds on my machine.
* A `for` loop ([FlorianH's answer](https://stackoverflow.com/a/1984904/239657)) with `+=` and `xrange()` takes **21**.
* Putting the whole thing in a function reduces it to **9** seconds!
**That's much faster than Perl** (15 seconds on my machine)!
Explanation: [Python local vars are much faster than globals](http://wiki.python.org/moin/PythonSpeed/PerformanceTips#LocalVariables).
(For fairness, I also tried a function in Perl - no change)
* Getting rid of the j variable reduced it to **8** seconds:
`print sum(1 for i in xrange(100000000))`
Python has the strange property that higher-level shorter code tends to be fastest :-)
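A minimal sketch of the function-wrapped version from the bullets above (Python 3 syntax, with a smaller N here so it runs in well under a second; the timings above used N = 100,000,000):

```python
def count_in_function(n):
    # Inside a function, i and j are local variables; CPython resolves
    # locals by array index instead of a global dict lookup, which is
    # where most of the speedup comes from.
    i = 0
    j = 0
    while i < n:
        i = i + 1
        j = j + 1
    return j

def count_with_sum(n):
    # The higher-level one-liner from above, also wrapped in a function.
    return sum(1 for _ in range(n))

print(count_in_function(1_000_000))  # → 1000000
print(count_with_sum(1_000_000))     # → 1000000
```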
But the real answer is that your "micro-benchmark" is meaningless.
The real question of language speed is: what's the performance of an average real application? To know that, you should take into account:
* Typical mix of operations in complex code.
Your code doesn't contain any data structures, function calls, or OOP operations.
* A large enough codebase to feel cache effects — many interpreter optimizations trade memory for speed, which is not measured fairly by *any* tiny benchmark.
* [Optimization](http://www.python.org/doc/essays/list2str.html) [opportunities](http://wiki.python.org/moin/PythonSpeed/PerformanceTips): after you write your code, IF it's not fast enough,
how much faster can you easily make it?
E.g., how hard is it to offload the heavy lifting to efficient C libraries?
[PyPy's benchmarks](http://speed.pypy.org) and [Octane](https://developers.google.com/octane/benchmark) are good examples of what realistic language speed benchmarks look like.
If you want to talk number crunching, Python *IS* surprisingly popular with scientists.
They love it for the simple pseudo-math syntax and short learning curve, but also for the excellent [numpy](http://numpy.scipy.org/) library for array crunching and the ease of wrapping other existing C code.
And then there is the [Psyco JIT](http://psyco.sourceforge.net/) which would probably run your toy example well under 1 second, but I can't check it now because it only works on 32-bit x86.
*EDIT*: Nowadays, skip Psyco and use [PyPy](http://pypy.org/), which is a cross-platform, actively improving JIT.
|
All this micro benchmarking can get a bit silly!
For example, just switching to `for` in both Python & Perl provides a hefty speed bump. The original Perl example would be twice as quick if `for` were used:
```
my $j = 0;
for my $i (1..100000000) {
++$j;
}
print $j;
```
And I can shave off a bit more with this:
```
++$j for 1..100000000;
print $j;
```
And getting even sillier we can get it down to 1 second here ;-)
```
print {STDOUT} (1..10000000)[-1];
```
/I3az/
*ref*: Perl 5.10.1 used.
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
>
> $ time python python.py
>
> 100000000
>
>
> real 0m48.100s
>
> user 0m45.633s
>
> sys 0m0.043s
>
>
>
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
>
> $ time perl perl.pl
>
> 100000000
>
>
> real 0m24.757s
>
> user 0m22.341s
>
> sys 0m0.029s
>
>
>
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
Python is not particularly fast at numeric computations, and I'm sure it's slower than Perl when it comes to text processing.
Since you're an experienced Perl hand, I don't know if this applies to you but Python programs in the long run tend to be more maintainable and are quicker to develop. The speed is 'enough' for most situations and you have the flexibility to drop down into C when you *really* need a performance boost.
Update
------
Okay. I just created a large file (1GB) with random data in it (mostly ascii) and broke it into lines of equal lengths. This was supposed to simulate a log file.
I then ran simple perl and python programs that search the file line by line for an existing pattern.
With Python 2.6.2, the results were
```
real 0m18.364s
user 0m9.209s
sys 0m0.956s
```
and with Perl 5.10.0
```
real 0m17.639s
user 0m5.692s
sys 0m0.844s
```
The programs are as follows (please let me know if I'm doing something stupid)
```
import re
regexp = re.compile("p06c")
def search():
with open("/home/arif/f") as f:
for i in f:
if regexp.search(i):
print "Found : %s"%i
search()
```
and
```
sub search() {
open FOO,"/home/arif/f" or die $!;
while (<FOO>) {
print "Found : $_\n" if /p06c/o;
}
}
search();
```
The results are pretty close, and tweaking it one way or the other doesn't seem to alter the results much. I don't know if this is a *true* benchmark, but I think it's the way I'd search log files in the two languages, so I stand corrected about the relative performance.
Thanks Chris.
|
I'm not up-to-date on everything with Python, but my first idea about this benchmark was the difference between Perl and Python numbers. In Perl, we have numbers. They aren't objects, and their precision is limited to the sizes imposed by the architecture. In Python, we have objects with arbitrary precision. For small numbers (those that fit in 32-bit), I'd expect Perl to be faster. If we go over the integer size of the architecture, the Perl script won't even work without some modification.
I see similar results for the original benchmark on my MacBook Air (32-bit) using a Perl 5.10.1 that I compiled myself and the Python 2.5.1 that came with Leopard:
However, I added arbitrary precision to the Perl program with the [bignum](http://perldoc.perl.org/bignum.html) pragma:
```
use bignum;
```
Now I wonder if the Perl version is ever going to finish. :) I'll post some results when it finishes, but it looks like it's going to be an order of magnitude difference.
Some of you may have seen my question about [What are five things you hate about your favorite language?](https://stackoverflow.com/questions/282329/what-are-five-things-you-hate-about-your-favorite-language). Perl's default numbers are one of the things that I hate. I should never have to think about it, and it shouldn't be slow. In Perl, I lose on both counts. Note, however, that if I needed numeric processing in Perl, I could use [PDL](http://pdl.perl.org).
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
>
> $ time python python.py
>
> 100000000
>
>
> real 0m48.100s
>
> user 0m45.633s
>
> sys 0m0.043s
>
>
>
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
>
> $ time perl perl.pl
>
> 100000000
>
>
> real 0m24.757s
>
> user 0m22.341s
>
> sys 0m0.029s
>
>
>
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
Python is slower than Perl. It may be faster to develop with, but it doesn't execute faster. Here is one benchmark: <http://xodian.net/serendipity/index.php?/archives/27-Benchmark-PHP-vs.-Python-vs.-Perl-vs.-Ruby.html> *Edit:* it's a terrible benchmark, but it is at least a real benchmark with numbers and not some guess. Too bad there's no source or test other than a loop.
|
>
> is Python really that much slower than
> Perl?
>
>
>
Look at the **Computer Language Benchmarks Game** - "Compare the performance of ≈30 programming languages using ≈12 flawed benchmarks and ≈1100 programs".
They are only tiny benchmark programs but they still do a lot more than the code snippet you have timed -
[**http://shootout.alioth.debian.org/u32/python.php**](http://shootout.alioth.debian.org/u32/python.php)
|
25,100,309
|
I have a memory constraint of 4GB of RAM. I need to keep 2.5 GB of data in RAM in order to perform further operations:
```
import numpy
a = numpy.random.rand(1000000,100)  ## This needs to be in memory
b = numpy.random.rand(1,100)
c = a - b  # this needs to be in memory in order to perform the next operation
d = numpy.linalg.norm(numpy.asarray(c, dtype=numpy.float32), axis=1)
```
While creating `c`, memory usage explodes and Python gets killed. Is there a way to speed up this process? I am performing this on an EC2 Ubuntu instance with 4GB of RAM and a single core. When I perform the same calculation on my Mac OS X machine, it gets done easily without any memory problem and takes less time. Why is this happening?
One solution that I can think of is
```
d = [numpy.sqrt(numpy.dot(i-b, i-b)) for i in a]
```
which I don't think will be good for speed.
|
2014/08/02
|
[
"https://Stackoverflow.com/questions/25100309",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3413239/"
] |
If the creation of `a` doesn't cause a memory problem, and you don't need to preserve the values in `a`, you could compute `c` by modifying `a` in place:
```
a -= b # Now use `a` instead of `c`.
```
Otherwise, the idea of working in smaller chunks or batches is a good one. With your list comprehension solution, you are, in effect, computing `d` from `a` and `b` in batch sizes of one row of `a`. You can improve the efficiency by using a larger batch size. Here's an example; it includes your code (with some cosmetic changes) and a version that computes the result (called `d2`) in batches of `batch_size` rows of `a`.
```
import numpy as np
#n = 1000000
n = 1000
a = np.random.rand(n,100) ## This needs to be in memory
b = np.random.rand(1,100)
c = a-b # this need to be in code in order to perform next operation
d = np.linalg.norm(np.asarray(c), axis=1)
batch_size = 300
# Preallocate the result.
d2 = np.empty(n)
for start in range(0, n, batch_size):
end = min(start + batch_size, n)
c2 = a[start:end] - b
d2[start:end] = np.linalg.norm(c2, axis=1)
```
|
If memory is the reason your code is slowing down and crashing, why not just use a generator instead of a list comprehension?
`d = (numpy.sqrt(numpy.dot(i-b, i-b)) for i in a)`
A generator essentially provides the steps to get the next object from an iterator. In other words, no operation is performed and no data is stored until you call the next() method on the generator's iterator. The reason I'm stressing this is that I don't want you to think that when you write `d = (numpy.sqrt(numpy.dot(i-b,i-b)) for i in a)` it does all the computations; rather, it stores the instructions.
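A generic sketch of that laziness (pure Python, no numpy), showing that a generator expression stores a recipe rather than results:

```python
import math

# No billion-element list is ever materialized; range is itself lazy.
big = range(10**9)

# Generator expression: nothing is computed at this line.
lazy = (math.sqrt(x) for x in big)

# Work happens only as values are pulled, one at a time.
first_three = [next(lazy) for _ in range(3)]
print(first_three)  # → [0.0, 1.0, 1.4142135623730951]
```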
|
72,177,222
|
Description.
------------
While trying to use pre-commit hooks, I am experiencing some difficulties when my Conda environment includes [the latest Lava-nc release by Intel in `.tar.gz` format](https://github.com/lava-nc/lava/releases/download/v0.3.0/lava-nc-0.3.0.tar.gz) as a `pip` package.
MWE
---
The following Conda `environment.yaml` file is used:
```
# This file is to automatically configure your environment. It allows you to
# run the code with a single command without having to install anything
# (extra).
# First run:: conda env create --file environment.yml
# If you change this file, run: conda env update --file environment.yml
# Instructions for this networkx-to-lava-nc repository only. First time usage
# On Ubuntu (this is needed for lava-nc):
# sudo apt upgrade
# sudo apt full-upgrade
# yes | sudo apt install gcc
# Conda configuration settings. (Specify which modules/packages are installed.)
name: networkx-to-lava
channels:
- conda-forge
- conda
dependencies:
- anaconda
- conda:
# Run python tests.
- pytest-cov
- pip
- pip:
# Run pip install on .tar.gz file in GitHub repository (For lava-nc only).
- https://github.com/lava-nc/lava/releases/download/v0.3.0/lava-nc-0.3.0.tar.gz
# Auto check static typing.
- mypy
# Auto check programming style aspects.
- pylint
```
If I include the following MWE `.pre-commit-config.yaml`:
```
repos:
# Test if the variable typing is correct
# - repo: https://github.com/python/mypy
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.950
hooks:
- id: mypy
```
Output/Error Message
--------------------
The `git commit "something"` and the `pre-commit run --all-files` return both:
```
[INFO] Installing environment for https://github.com/pre-commit/mirrors-mypy.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: command: ('/home/name/anaconda3/envs/networkx-to-lava/bin/python3.1', '-mvirtualenv', '/home/name/.cache/pre-commit/repolk_aet_q/py_env-python3.1', '-p', 'python3.1')
return code: 1
expected return code: 0
stdout:
RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.1'
stderr: (none)
Check the log at /home/name/.cache/pre-commit/pre-commit.log
```
Error Log.
----------
The output of `pre-commit.log` is:
```
### version information
pre-commit version: 2.19.0
git --version: git version 2.30.2
sys.version:
3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0]
sys.executable: /home/name/anaconda3/envs/networkx-to-lava/bin/python3.1
os.name: posix
sys.platform: linux
### error information
An unexpected error has occurred: CalledProcessError: command: ('/home/name/anaconda3/envs/networkx-to-lava/bin/python3.1', '-mvirtualenv', '/home/name/.cache/pre-commit/repolk_aet_q/py_env-python3.1', '-p', 'python3.1')
return code: 1
expected return code: 0
stdout:
RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.1'
stderr: (none)
Traceback (most recent call last):
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/error_handler.py", line 73, in error_handler
yield
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/main.py", line 361, in main
return hook_impl(
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/commands/hook_impl.py", line 238, in hook_impl
return retv | run(config, store, ns)
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/commands/run.py", line 414, in run
install_hook_envs(to_install, store)
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/repository.py", line 223, in install_hook_envs
_hook_install(hook)
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/repository.py", line 79, in _hook_install
lang.install_environment(
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/languages/python.py", line 219, in install_environment
cmd_output_b(*venv_cmd, cwd='/')
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/util.py", line 146, in cmd_output_b
raise CalledProcessError(returncode, cmd, retcode, stdout_b, stderr_b)
pre_commit.util.CalledProcessError: command: ('/home/name/anaconda3/envs/networkx-to-lava/bin/python3.1', '-mvirtualenv', '/home/name/.cache/pre-commit/repolk_aet_q/py_env-python3.1', '-p', 'python3.1')
return code: 1
expected return code: 0
stdout:
RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.1'
stderr: (none)
```
Workaround
----------
This error is avoided if I remove the `/home/name/anaconda3/envs/networkx-to-lava/bin/python3.1` file. However, that breaks automation and seems to be an improper workaround.
Question
--------
How can I ensure creating and activating the Conda environment allows direct running of the pre-commit hook without error (without having to delete the `python3.1` file)?
|
2022/05/09
|
[
"https://Stackoverflow.com/questions/72177222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7437143/"
] |
this is unfortunately a known bug in `conda` -- though I'm unfamiliar with the status on it. they're mistakenly shipping a `python3.1` binary as the default executable (it should be called `python3.10` -- they have a [2020](https://github.com/asottile/flake8-2020) problem that's probably a `sys.version[:3]` somewhere)
fortunately, you can instruct pre-commit to ignore its "default" version detection (which is detecting your `python3.1` executable) and give it a specific instruction on which python version to use
there's two approaches to that:
1. [`default_language_version`](https://pre-commit.com/#top_level-default_language_version): as a "default" which applies when `language_version` is not set
```yaml
default_language_version:
python: python3.10 # or python3
# ...
```
2. [`language_version`](https://pre-commit.com/#config-language_version) on the specific hook (to override whatever default value the hook provider has set)
```yaml
# ...
- id: black
language_version: python3.10
```
---
disclaimer: I created pre-commit
|
A ticket was opened about this in the pre-commit repository: <https://github.com/pre-commit/pre-commit/issues/1375>
One of the pre-commit contributors suggests using `language_version: python3` for the black config in `.pre-commit-config.yaml`:
```yaml
repo: https://github.com/psf/black
rev: 22.3.0
hooks:
- id: black
language_version: python3
```
<https://github.com/pre-commit/pre-commit/issues/1375#issuecomment-603906811>
|
29,224,447
|
I have a python function that takes a imported module as a parameter:
```
def printModule(module):
print("That module is named '%s'" % magic(module))
import foo.bar.baz
printModule(foo.bar.baz)
```
What I want is to be able to extract the module name (in this case `foo.bar.baz`) from a passed reference to the module. In the above example, the `magic()` function is a stand-in for the function I want.
`__name__` would normally work, but that requires executing in the context of the passed module, and I'm trying to extract this information from merely a reference to the module.
What is the proper procedure here?
All I can think of is either doing `str(module)` and then some text hacking, or trying to inject a function into the module that I can then call to have it return `__name__`, and neither of those solutions is elegant. (I tried both of these; neither actually works. Modules can apparently permute their name in the `str()` representation, and injecting a function into the module and then calling it just returns the caller's context.)
|
2015/03/24
|
[
"https://Stackoverflow.com/questions/29224447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/268006/"
] |
The `__name__` attribute seems to work:
```
def magic(m):
return m.__name__
```
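For example, a module's `__name__` carries the full dotted path, so nested packages come back intact (a quick sanity check with stdlib modules):

```python
import json
import xml.etree.ElementTree

def magic(m):
    return m.__name__

print(magic(json))                   # → json
print(magic(xml.etree.ElementTree))  # → xml.etree.ElementTree
```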
|
If you have a string with the module name, you can use pkgutil.
```
import pkgutil
pkg = pkgutil.get_loader(module_name)
print pkg.fullname
```
From the module itself,
```
import pkgutil
pkg = pkgutil.get_loader(module.__name__)
print pkg.fullname
```
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension by running the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is this normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
This is quite common, in any largish system. You can use Valgrind's [suppression system](http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles) to explicitly suppress warnings that you're not interested in.
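For example, a suppression entry matching the `PyObject_Free` reads shown in the question might look like this (a sketch of the Memcheck suppression-file syntax; CPython actually ships a ready-made `Misc/valgrind-python.supp` that covers these):

```
{
   pymalloc_free_invalid_read
   Memcheck:Addr4
   fun:PyObject_Free
}
```

Pass it to Valgrind with `--suppressions=<file>`.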
|
There is another option I found. James Henstridge has a custom build of Python which can detect that it is running under Valgrind; in that case the pymalloc allocator is disabled, with PyObject\_Malloc/PyObject\_Free passing through to the normal malloc/free, which Valgrind knows how to track.
Package available here: <https://launchpad.net/~jamesh/+archive/python>
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension by running the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is this normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
The most correct option is to tell Valgrind that it should intercept Python's allocation functions.
You should patch valgrind/coregrind/m\_replacemalloc/vg\_replace\_malloc.c, adding new interceptors for PyObject\_Malloc, PyObject\_Free, and PyObject\_Realloc, e.g.:
```
ALLOC_or_NULL(NONE, PyObject_Malloc, malloc);
```
(note that the soname for users allocation functions should be `NONE`)
|
This is quite normal. If you want to use Valgrind more effectively and catch even more memory leaks, you will need to configure Python with `--without-pymalloc`. <https://svn.python.org/projects/python/trunk/Misc/README.valgrind>
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension by running the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is this normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
You could try using the [suppression file](http://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp) that comes with the Python source.
Reading the [Python Valgrind README](http://svn.python.org/projects/python/trunk/Misc/README.valgrind) is a good idea too!
|
This is quite common, in any largish system. You can use Valgrind's [suppression system](http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles) to explicitly suppress warnings that you're not interested in.
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension by running the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is this normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
This is quite common, in any largish system. You can use Valgrind's [suppression system](http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles) to explicitly suppress warnings that you're not interested in.
|
This is quite normal. If you want to use Valgrind more effectively and catch even more memory leaks, you will need to configure Python with `--without-pymalloc`. <https://svn.python.org/projects/python/trunk/Misc/README.valgrind>
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension by running the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is it normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
The most correct option is to tell Valgrind that it should intercept Python's allocation functions.
You should patch valgrind/coregrind/m\_replacemalloc/vg\_replace\_malloc.c adding the new interceptors for PyObject\_Malloc, PyObject\_Free, PyObject\_Realloc, e.g.:
```
ALLOC_or_NULL(NONE, PyObject_Malloc, malloc);
```
(note that the soname for user allocation functions should be `NONE`)
|
There is another option I found. James Henstridge has a custom build of Python that can detect that it is running under Valgrind; in that case the pymalloc allocator is disabled, with PyObject\_Malloc/PyObject\_Free passing through to the normal malloc/free, which Valgrind knows how to track.
Package available here: <https://launchpad.net/~jamesh/+archive/python>
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension by running the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is it normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
Following the links given by Nick, I found some updates in [README.valgrind](https://github.com/python/cpython/blob/master/Misc/README.valgrind). In short, for Python 3.6+ you can set the `PYTHONMALLOC=malloc` environment variable to effectively disable the warnings. For example, on my machine:
```
export PYTHONMALLOC=malloc
valgrind python my_script.py
```
doesn't produce any error related to python.
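As a concrete sketch of the same idea (the target script and the `valgrind` flags here are illustrative assumptions, and the Valgrind call is guarded so the snippet degrades gracefully where Valgrind isn't installed):

```shell
#!/bin/sh
# Disable pymalloc at runtime (CPython 3.6+) so Valgrind tracks raw malloc/free
# instead of flagging pymalloc's arena bookkeeping as invalid reads.
export PYTHONMALLOC=malloc

if command -v valgrind >/dev/null 2>&1; then
    valgrind --leak-check=full python3 -c "print('hello')"
else
    echo "valgrind not found; PYTHONMALLOC=$PYTHONMALLOC is set for later runs"
fi
```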
|
Yes, this is typical. Large systems often leave memory un-freed, which is fine so long as it is a constant amount, and not proportional to the running history of the system. The Python interpreter falls into this category.
Perhaps you can filter the valgrind output to focus only on allocations made in your C extension?
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension by running the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is it normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
You could try using the [suppression file](http://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp) that comes with the python source
Reading the [Python Valgrind README](http://svn.python.org/projects/python/trunk/Misc/README.valgrind) is a good idea too!
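For reference, entries in that suppression file look roughly like the following (this entry is illustrative, not copied verbatim from the file — it would match the "Invalid read of size 4" in `PyObject_Free` shown in the question):

```
{
   pymalloc_free_read
   Memcheck:Addr4
   fun:PyObject_Free
}
```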
|
This is quite normal. If you want to use Valgrind more effectively and catch even more memory leaks, you will need to configure Python with `--without-pymalloc`. <https://svn.python.org/projects/python/trunk/Misc/README.valgrind>
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension by running the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is it normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
Following the links given by Nick, I found some updates in [README.valgrind](https://github.com/python/cpython/blob/master/Misc/README.valgrind). In short, for Python 3.6+ you can set the `PYTHONMALLOC=malloc` environment variable to effectively disable the warnings. For example, on my machine:
```
export PYTHONMALLOC=malloc
valgrind python my_script.py
```
doesn't produce any error related to python.
|
The most correct option is to tell Valgrind that it should intercept Python's allocation functions.
You should patch valgrind/coregrind/m\_replacemalloc/vg\_replace\_malloc.c adding the new interceptors for PyObject\_Malloc, PyObject\_Free, PyObject\_Realloc, e.g.:
```
ALLOC_or_NULL(NONE, PyObject_Malloc, malloc);
```
(note that the soname for user allocation functions should be `NONE`)
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension by running the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is it normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
Following the links given by Nick, I found some updates in [README.valgrind](https://github.com/python/cpython/blob/master/Misc/README.valgrind). In short, for Python 3.6+ you can set the `PYTHONMALLOC=malloc` environment variable to effectively disable the warnings. For example, on my machine:
```
export PYTHONMALLOC=malloc
valgrind python my_script.py
```
doesn't produce any error related to python.
|
There is another option I found. James Henstridge has a custom build of Python that can detect that it is running under Valgrind; in that case the pymalloc allocator is disabled, with PyObject\_Malloc/PyObject\_Free passing through to the normal malloc/free, which Valgrind knows how to track.
Package available here: <https://launchpad.net/~jamesh/+archive/python>
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension by running the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is it normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
This is quite common, in any largish system. You can use Valgrind's [suppression system](http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles) to explicitly suppress warnings that you're not interested in.
|
The most correct option is to tell Valgrind that it should intercept Python's allocation functions.
You should patch valgrind/coregrind/m\_replacemalloc/vg\_replace\_malloc.c adding the new interceptors for PyObject\_Malloc, PyObject\_Free, PyObject\_Realloc, e.g.:
```
ALLOC_or_NULL(NONE, PyObject_Malloc, malloc);
```
(note that the soname for user allocation functions should be `NONE`)
|
32,553,827
|
---I am on Ubuntu---
I am trying to setup Django Crispy forms and getting the following error:
```
Exception Type: TemplateDoesNotExist
Exception Value:
bootstrap3/layout/buttonholder.html
```
Settings.py template setup
==========================
```
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {...............
```
Template-loader postmortem
==========================
```
Django tried loading these templates, in this order:
Using loader django.template.loaders.filesystem.Loader:
/home/schachte/Documents/School/cse360/CSE_360_Project/CSE360_Project/ipcms/ipcms/templates/bootstrap3/layout/buttonholder.html (File does not exist)
Using loader django.template.loaders.app_directories.Loader:
/usr/local/lib/python2.7/dist-packages/Django-1.8.4-py2.7.egg/django/contrib/admin/templates/bootstrap3/layout/buttonholder.html (File does not exist)
/usr/local/lib/python2.7/dist-packages/Django-1.8.4-py2.7.egg/django/contrib/auth/templates/bootstrap3/layout/buttonholder.html (File does not exist)
/usr/local/lib/python2.7/dist-packages/crispy_forms/templates/bootstrap3/layout/buttonholder.html (File does not exist)
```
It's pretty obvious that the directory it's trying to find doesn't exist, but how can I make crispy forms find the correct Bootstrap templates?
|
2015/09/13
|
[
"https://Stackoverflow.com/questions/32553827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5054204/"
] |
`ButtonHolder` isn't part of the Bootstrap template pack. It's documented as not being so.
See [this issue for discussion](https://github.com/maraujop/django-crispy-forms/issues/478)
Best bet is to use `FormActions` instead
|
Where do you currently store your static content? The issue seems to be that Django cannot find your Bootstrap static content.
|
55,910,797
|
Hello, I've started using Python fairly recently. I'm having trouble with one segment of my code that raises a `KeyError` when I try to remove an element from my set:
```
tiles.remove(m)
KeyError: 'B9'
```
EDIT: I forgot to mention that the `m` value changes every time I call another function before the for loop. Also, in the `fc` function, if it returns `False`, the function adds the tile back into the set with `tiles.add(m)`.
Researching online, it says a `KeyError` only occurs when the element is not in the set, but right before I remove the element, I check that it is in the set, and it is.
```
m = findMRV()
if checkEmpty(board, m) == False:
backtrack(board)
for d in domain[m].copy():
if checkValid(board, m[0], m[1], d ):
if m in tiles:
print(str(m)+"HELLO3")
tiles.remove(m)
board[m] = d
if(fc(board, m[0], m[1], d) == False):
continue
```
The `checkValid` function just returns `True` or `False` and doesn't change `m`. I want `m` to be removed from the set that contains only empty tiles, but I keep receiving a `KeyError` and can't figure out what the issue could be or where else it could be coming from.
|
2019/04/29
|
[
"https://Stackoverflow.com/questions/55910797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8817267/"
] |
You have a loop
```
for d in domain[m].copy():
```
where you call `tiles.remove(m)` on every iteration. After `m` is removed in the first iteration, the set no longer contains it, so `tiles.remove(m)` raises a `KeyError` on subsequent iterations (unless something adds it back in between).
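A minimal, self-contained reproduction of this pattern (the variable names mirror the question, but the data is made up for illustration):

```python
tiles = {"A1", "B9"}
m = "B9"

# Removing unconditionally inside a loop raises KeyError on the second pass:
err = None
try:
    for _ in range(2):
        tiles.remove(m)          # second iteration: m is already gone
except KeyError as e:
    err = e

print(type(err).__name__)        # KeyError

# Guarding with membership, or using discard() (which never raises),
# avoids the problem:
tiles = {"A1", "B9"}
for _ in range(2):
    tiles.discard(m)             # no-op when m is absent
print(tiles)                     # {'A1'}
```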
|
The `tiles.remove(m)` call needs to stay inside the `if m in tiles:` check; otherwise nothing prevents removing an element that is no longer in the set.
|
47,870,297
|
I have a list with tuples in it looking like this:
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
```
I'd like to iterate over `my_list[3]` and copy the rest so I would get n lists looking like this:
```
(u'code', u'somet text', u'integer', u'1', u'text1')
(u'code', u'somet text', u'integer', u'2', u'text2')
(u'code', u'somet text', u'integer', u'3', u'text3')
(u'code', u'somet text', u'integer', u'4', u'text4')
(u'code', u'somet text', u'integer', u'5', u'text5')
```
I have tried using a for loop, but I end up with this:
```
((u'code', u'somet text', u'integer'), (u'1', u'text1'))
((u'code', u'somet text', u'integer'), (u'2', u'text2'))
((u'code', u'somet text', u'integer'), (u'3', u'text3'))
((u'code', u'somet text', u'integer'), (u'4', u'text4'))
((u'code', u'somet text', u'integer'), (u'5', u'text5'))
```
Also, the code I am using does not feel very Pythonic, and `my_list[3]` can differ in length.
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
my_modified_list = my_list[0:3]
last_part_of_my_list = my_list[3]
for i in last_part_of_my_list:
print (my_modified_list, i)
```
|
2017/12/18
|
[
"https://Stackoverflow.com/questions/47870297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6139149/"
] |
You can achieve this using simple tuple concatenation with `+`:
```
nlists = [my_list[:-1] + tpl for tpl in my_list[-1]]
[(u'code', u'somet text', u'integer', u'1', u'text1'),
(u'code', u'somet text', u'integer', u'2', u'text2'),
(u'code', u'somet text', u'integer', u'3', u'text3'),
(u'code', u'somet text', u'integer', u'4', u'text4'),
(u'code', u'somet text', u'integer', u'5', u'text5')]
```
|
Firstly, unpack your tuple:
```
a, b, c, d = my_list
```
This, of course, assumes your tuple has exactly 4 elements, or you'll get an exception.
Then iterate over `d`:
```
for d1, d2 in d:
    print('({},{},{},{},{})'.format(a, b, c, d1, d2))
```
|
47,870,297
|
I have a list with tuples in it looking like this:
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
```
I'd like to iterate over `my_list[3]` and copy the rest so I would get n lists looking like this:
```
(u'code', u'somet text', u'integer', u'1', u'text1')
(u'code', u'somet text', u'integer', u'2', u'text2')
(u'code', u'somet text', u'integer', u'3', u'text3')
(u'code', u'somet text', u'integer', u'4', u'text4')
(u'code', u'somet text', u'integer', u'5', u'text5')
```
I have tried using a for loop, but I end up with this:
```
((u'code', u'somet text', u'integer'), (u'1', u'text1'))
((u'code', u'somet text', u'integer'), (u'2', u'text2'))
((u'code', u'somet text', u'integer'), (u'3', u'text3'))
((u'code', u'somet text', u'integer'), (u'4', u'text4'))
((u'code', u'somet text', u'integer'), (u'5', u'text5'))
```
Also, the code I am using does not feel very Pythonic, and `my_list[3]` can differ in length.
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
my_modified_list = my_list[0:3]
last_part_of_my_list = my_list[3]
for i in last_part_of_my_list:
print (my_modified_list, i)
```
|
2017/12/18
|
[
"https://Stackoverflow.com/questions/47870297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6139149/"
] |
You can achieve this using simple tuple concatenation with `+`:
```
nlists = [my_list[:-1] + tpl for tpl in my_list[-1]]
[(u'code', u'somet text', u'integer', u'1', u'text1'),
(u'code', u'somet text', u'integer', u'2', u'text2'),
(u'code', u'somet text', u'integer', u'3', u'text3'),
(u'code', u'somet text', u'integer', u'4', u'text4'),
(u'code', u'somet text', u'integer', u'5', u'text5')]
```
|
You can just iterate over the nested list and join the tuples with the `+` operator:
```
fixed = my_list[:3] # this becomes (u'code', u'somet text', u'integer')
new_list = [fixed + x for x in my_list[3]] # list comprehension to build new list
```
You must be careful in your terminology. You call your variable `my_list` but the `()` around the entire content of the variable makes it a `tuple` not `list`. Your `my_list[3]` however is a `list` of `tuples`. Both these containers support the `operator+` for concatenation.
So using this solution (which is *Pythonic* and works with n-sized `my_list[3]`), you can read it as:
"`new_list` is a list of the `fixed` variable plus the current element for each element in `my_list[3]`"
With your current data you get a `list` (again, with `[]`):
```
[(u'code', u'somet text', u'integer', u'1', u'text1'),
(u'code', u'somet text', u'integer', u'2', u'text2'),
(u'code', u'somet text', u'integer', u'3', u'text3'),
(u'code', u'somet text', u'integer', u'4', u'text4'),
(u'code', u'somet text', u'integer', u'5', u'text5')]
```
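To make the concatenation concrete, here is a runnable version of that approach (using a shortened copy of the question's data):

```python
my_list = (u'code', u'somet text', u'integer',
           [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3')])

fixed = my_list[:3]                      # the part shared by every row
new_list = [fixed + x for x in my_list[3]]

for row in new_list:
    print(row)
# ('code', 'somet text', 'integer', '1', 'text1')
# ('code', 'somet text', 'integer', '2', 'text2')
# ('code', 'somet text', 'integer', '3', 'text3')
```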
|
47,870,297
|
I have a list with tuples in it looking like this:
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
```
I'd like to iterate over `my_list[3]` and copy the rest so I would get n lists looking like this:
```
(u'code', u'somet text', u'integer', u'1', u'text1')
(u'code', u'somet text', u'integer', u'2', u'text2')
(u'code', u'somet text', u'integer', u'3', u'text3')
(u'code', u'somet text', u'integer', u'4', u'text4')
(u'code', u'somet text', u'integer', u'5', u'text5')
```
I have tried using a for loop, but I end up with this:
```
((u'code', u'somet text', u'integer'), (u'1', u'text1'))
((u'code', u'somet text', u'integer'), (u'2', u'text2'))
((u'code', u'somet text', u'integer'), (u'3', u'text3'))
((u'code', u'somet text', u'integer'), (u'4', u'text4'))
((u'code', u'somet text', u'integer'), (u'5', u'text5'))
```
Also, the code I am using does not feel very Pythonic, and `my_list[3]` can differ in length.
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
my_modified_list = my_list[0:3]
last_part_of_my_list = my_list[3]
for i in last_part_of_my_list:
print (my_modified_list, i)
```
|
2017/12/18
|
[
"https://Stackoverflow.com/questions/47870297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6139149/"
] |
You can achieve this using simple tuple concatenation with `+`:
```
nlists = [my_list[:-1] + tpl for tpl in my_list[-1]]
[(u'code', u'somet text', u'integer', u'1', u'text1'),
(u'code', u'somet text', u'integer', u'2', u'text2'),
(u'code', u'somet text', u'integer', u'3', u'text3'),
(u'code', u'somet text', u'integer', u'4', u'text4'),
(u'code', u'somet text', u'integer', u'5', u'text5')]
```
|
You can try this:
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
new_list = [(my_list[0], my_list[1], my_list[2], i[0], i[-1]) for i in my_list[3:][0]]
```
|
47,870,297
|
I have a list with tuples in it looking like this:
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
```
I'd like to iterate over `my_list[3]` and copy the rest so I would get n lists looking like this:
```
(u'code', u'somet text', u'integer', u'1', u'text1')
(u'code', u'somet text', u'integer', u'2', u'text2')
(u'code', u'somet text', u'integer', u'3', u'text3')
(u'code', u'somet text', u'integer', u'4', u'text4')
(u'code', u'somet text', u'integer', u'5', u'text5')
```
I have tried using a for loop, but I end up with this:
```
((u'code', u'somet text', u'integer'), (u'1', u'text1'))
((u'code', u'somet text', u'integer'), (u'2', u'text2'))
((u'code', u'somet text', u'integer'), (u'3', u'text3'))
((u'code', u'somet text', u'integer'), (u'4', u'text4'))
((u'code', u'somet text', u'integer'), (u'5', u'text5'))
```
Also, the code I am using does not feel very Pythonic, and `my_list[3]` can differ in length.
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
my_modified_list = my_list[0:3]
last_part_of_my_list = my_list[3]
for i in last_part_of_my_list:
print (my_modified_list, i)
```
|
2017/12/18
|
[
"https://Stackoverflow.com/questions/47870297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6139149/"
] |
You can achieve this using simple tuple concatenation with `+`:
```
nlists = [my_list[:-1] + tpl for tpl in my_list[-1]]
[(u'code', u'somet text', u'integer', u'1', u'text1'),
(u'code', u'somet text', u'integer', u'2', u'text2'),
(u'code', u'somet text', u'integer', u'3', u'text3'),
(u'code', u'somet text', u'integer', u'4', u'text4'),
(u'code', u'somet text', u'integer', u'5', u'text5')]
```
|
You could also try this:
```
from itertools import repeat
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
result = [x + y for x, y in zip(repeat(my_list[:3], len(my_list[3])), my_list[3])]
print(result)
```
Which outputs:
```
[('code', 'somet text', 'integer', '1', 'text1'),
('code', 'somet text', 'integer', '2', 'text2'),
('code', 'somet text', 'integer', '3', 'text3'),
('code', 'somet text', 'integer', '4', 'text4'),
('code', 'somet text', 'integer', '5', 'text5')]
```
|
47,870,297
|
I have a list with tuples in it looking like this:
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
```
I'd like to iterate over `my_list[3]` and copy the rest so I would get n lists looking like this:
```
(u'code', u'somet text', u'integer', u'1', u'text1')
(u'code', u'somet text', u'integer', u'2', u'text2')
(u'code', u'somet text', u'integer', u'3', u'text3')
(u'code', u'somet text', u'integer', u'4', u'text4')
(u'code', u'somet text', u'integer', u'5', u'text5')
```
I have tried using a for loop, but I end up with this:
```
((u'code', u'somet text', u'integer'), (u'1', u'text1'))
((u'code', u'somet text', u'integer'), (u'2', u'text2'))
((u'code', u'somet text', u'integer'), (u'3', u'text3'))
((u'code', u'somet text', u'integer'), (u'4', u'text4'))
((u'code', u'somet text', u'integer'), (u'5', u'text5'))
```
Also, the code I am using does not feel very Pythonic, and `my_list[3]` can differ in length.
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
my_modified_list = my_list[0:3]
last_part_of_my_list = my_list[3]
for i in last_part_of_my_list:
print (my_modified_list, i)
```
|
2017/12/18
|
[
"https://Stackoverflow.com/questions/47870297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6139149/"
] |
You can achieve this using simple tuple concatenation with `+`:
```
nlists = [my_list[:-1] + tpl for tpl in my_list[-1]]
[(u'code', u'somet text', u'integer', u'1', u'text1'),
(u'code', u'somet text', u'integer', u'2', u'text2'),
(u'code', u'somet text', u'integer', u'3', u'text3'),
(u'code', u'somet text', u'integer', u'4', u'text4'),
(u'code', u'somet text', u'integer', u'5', u'text5')]
```
|
You can also do this in one line, without an explicit loop or importing any external module. Here is a lambda-based approach:
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
print((list(map(lambda v:list(map(lambda y:(y+my_list[:3]),v)),my_list[3:]))))
```
output:
```
[[('1', 'text1', 'code', 'somet text', 'integer'), ('2', 'text2', 'code', 'somet text', 'integer'), ('3', 'text3', 'code', 'somet text', 'integer'), ('4', 'text4', 'code', 'somet text', 'integer'), ('5', 'text5', 'code', 'somet text', 'integer')]]
```
|
47,648,133
|
I want to calculate the mean absolute percentage error (MAPE) of predicted and true values. I found a solution [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but it raises a syntax error on the line `mask = a <> 0`
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using Spyder 3. My predicted values are a NumPy array and the true values are a DataFrame:
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I fix this error and proceed with the MAPE calculation?
Edit :
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
In Python, the not-equal comparison operator is `!=`, not `<>` (the `<>` form was removed in Python 3).
So you need:
```
def mape_vectorized_v2(a, b):
mask = a != 0
return (np.fabs(a - b)/a)[mask].mean()
```
Another solution from [stats.stackexchange](https://stats.stackexchange.com/a/294069):
```
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
```
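As a sanity check against the numbers in the question's Edit (index labels assumed from the output shown), note that the NaNs come from pandas aligning on mismatched indexes; converting to plain NumPy arrays with `.values` makes the subtraction positional:

```python
import numpy as np
import pandas as pd

def mape_vectorized_v2(a, b):
    # mask out zeros in the true values to avoid division by zero
    mask = a != 0
    return (np.fabs(a - b) / a)[mask].mean()

# y_test keeps its original row labels; predicted has a fresh 0..4 index
y_test = pd.Series([7.2, 6.7, 6.7, 7.2, 6.4], index=[79, 148, 143, 189, 17])
predicted = np.array([7.163627, 6.584960, 6.638057, 7.785487, 6.994427])

# .values strips the index, so subtraction is positional instead of label-based
result = round(mape_vectorized_v2(y_test.values, predicted) * 100, 2)
print(result)  # → 4.11
```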
|
Here is an improved version that is mindful of Zero:
```
#Mean Absolute Percentage error
def mape(y_true, y_pred,sample_weight=None,multioutput='uniform_average'):
y_type, y_true, y_pred, multioutput = _check_reg_targets(y_true, y_pred, multioutput)
epsilon = np.finfo(np.float64).eps
mape = np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), epsilon)
output_errors = np.average(mape,weights=sample_weight, axis=0)
if isinstance(multioutput, str):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
def _check_reg_targets(y_true, y_pred, multioutput, dtype="numeric"):
if y_true.ndim == 1:
y_true = y_true.reshape((-1, 1))
if y_pred.ndim == 1:
y_pred = y_pred.reshape((-1, 1))
if y_true.shape[1] != y_pred.shape[1]:
raise ValueError("y_true and y_pred have different number of output "
"({0}!={1})".format(y_true.shape[1], y_pred.shape[1]))
n_outputs = y_true.shape[1]
allowed_multioutput_str = ('raw_values', 'uniform_average',
'variance_weighted')
if isinstance(multioutput, str):
if multioutput not in allowed_multioutput_str:
raise ValueError("Allowed 'multioutput' string values are {}. "
"You provided multioutput={!r}".format(
allowed_multioutput_str,
multioutput))
elif multioutput is not None:
multioutput = check_array(multioutput, ensure_2d=False)
if n_outputs == 1:
raise ValueError("Custom weights are useful only in "
"multi-output cases.")
elif n_outputs != len(multioutput):
raise ValueError(("There must be equally many custom weights "
"(%d) as outputs (%d).") %
(len(multioutput), n_outputs))
y_type = 'continuous' if n_outputs == 1 else 'continuous-multioutput'
return y_type, y_true, y_pred, multioutput
```
|
47,648,133
|
I want to calculate Mean Absolute percentage error (MAPE) of predicted and true values. I found a solution from [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but this gives error and shows invalid syntax in the line `mask = a <> 0`
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using spyder3. My predicted value is a type np.array and true value is dataframe
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I clear this error and proceed with the MAPE calculation?
Edit :
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
In Python, the not-equal comparison operator is `!=`, not `<>` (the `<>` form was removed in Python 3).
So you need:
```
def mape_vectorized_v2(a, b):
mask = a != 0
return (np.fabs(a - b)/a)[mask].mean()
```
Another solution from [stats.stackexchange](https://stats.stackexchange.com/a/294069):
```
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
```
|
Here is another way to calculate MAPE that handles a zero denominator and is fast.
```py
def mod_my_MAPE(y, pred):
    mask = y != 0                  # build the mask before filtering y
    y, pred = y[mask], pred[mask]  # filter both arrays with the same mask
    summ = np.sum(np.abs((y - pred) / y))
    return summ / len(y)
```
|
47,648,133
|
I want to calculate Mean Absolute percentage error (MAPE) of predicted and true values. I found a solution from [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but this gives error and shows invalid syntax in the line `mask = a <> 0`
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using spyder3. My predicted value is a type np.array and true value is dataframe
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I clear this error and proceed with the MAPE calculation?
Edit :
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
Neither of those solutions works with zero values. This works for me:
```
def percentage_error(actual, predicted):
res = np.empty(actual.shape)
for j in range(actual.shape[0]):
if actual[j] != 0:
res[j] = (actual[j] - predicted[j]) / actual[j]
else:
res[j] = predicted[j] / np.mean(actual)
return res
def mean_absolute_percentage_error(y_true, y_pred):
return np.mean(np.abs(percentage_error(np.asarray(y_true), np.asarray(y_pred)))) * 100
```
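A quick check of the zero-handling with made-up numbers: the first actual value is 0, so its error is measured against the mean of the actuals instead:

```python
import numpy as np

def percentage_error(actual, predicted):
    res = np.empty(actual.shape)
    for j in range(actual.shape[0]):
        if actual[j] != 0:
            res[j] = (actual[j] - predicted[j]) / actual[j]
        else:
            # fall back to the mean of the actuals when the true value is 0
            res[j] = predicted[j] / np.mean(actual)
    return res

def mean_absolute_percentage_error(y_true, y_pred):
    return np.mean(np.abs(percentage_error(np.asarray(y_true), np.asarray(y_pred)))) * 100

mape = mean_absolute_percentage_error([0.0, 1.0, 2.0], [0.5, 1.0, 2.0])
print(round(mape, 2))  # → 16.67
```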
I hope it helps.
|
Since `scikit-learn` version 0.24, one can use [`sklearn.metrics.mean_absolute_percentage_error`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_percentage_error.html).
[Here](https://github.com/scikit-learn/scikit-learn/blob/46f15a00d2324da4b9f12c9168ddda8dddb1b607/sklearn/metrics/_regression.py#L292) is the link for the implementation on their GitHub repo.
I just tried locally with the newest version (1.0.2).
Here is an example ([Source](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_percentage_error.html))
```
>>> from sklearn.metrics import mean_absolute_percentage_error
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_percentage_error(y_true, y_pred)
0.5515...
```
**Note**: MAPE can be problematic if one of the values of y\_true is zero; in scikit-learn that results in an arbitrarily large number.
See this example ([Source](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_percentage_error.html))
```
>>> from sklearn.metrics import mean_absolute_percentage_error
>>> y_true = [1., 0., 2.4, 7.]
>>> y_pred = [1.2, 0.1, 2.4, 8.]
>>> mean_absolute_percentage_error(y_true, y_pred)
112589990684262.48
```
|
47,648,133
|
I want to calculate Mean Absolute percentage error (MAPE) of predicted and true values. I found a solution from [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but this gives error and shows invalid syntax in the line `mask = a <> 0`
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using spyder3. My predicted value is a type np.array and true value is dataframe
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I clear this error and proceed with the MAPE calculation?
Edit :
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
Neither of those solutions works with zero values. This works for me:
```
def percentage_error(actual, predicted):
res = np.empty(actual.shape)
for j in range(actual.shape[0]):
if actual[j] != 0:
res[j] = (actual[j] - predicted[j]) / actual[j]
else:
res[j] = predicted[j] / np.mean(actual)
return res
def mean_absolute_percentage_error(y_true, y_pred):
return np.mean(np.abs(percentage_error(np.asarray(y_true), np.asarray(y_pred)))) * 100
```
I hope it helps.
|
Here is an improved version that is mindful of Zero:
```
#Mean Absolute Percentage error
def mape(y_true, y_pred,sample_weight=None,multioutput='uniform_average'):
y_type, y_true, y_pred, multioutput = _check_reg_targets(y_true, y_pred, multioutput)
epsilon = np.finfo(np.float64).eps
mape = np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), epsilon)
output_errors = np.average(mape,weights=sample_weight, axis=0)
if isinstance(multioutput, str):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
def _check_reg_targets(y_true, y_pred, multioutput, dtype="numeric"):
if y_true.ndim == 1:
y_true = y_true.reshape((-1, 1))
if y_pred.ndim == 1:
y_pred = y_pred.reshape((-1, 1))
if y_true.shape[1] != y_pred.shape[1]:
raise ValueError("y_true and y_pred have different number of output "
"({0}!={1})".format(y_true.shape[1], y_pred.shape[1]))
n_outputs = y_true.shape[1]
allowed_multioutput_str = ('raw_values', 'uniform_average',
'variance_weighted')
if isinstance(multioutput, str):
if multioutput not in allowed_multioutput_str:
raise ValueError("Allowed 'multioutput' string values are {}. "
"You provided multioutput={!r}".format(
allowed_multioutput_str,
multioutput))
elif multioutput is not None:
multioutput = check_array(multioutput, ensure_2d=False)
if n_outputs == 1:
raise ValueError("Custom weights are useful only in "
"multi-output cases.")
elif n_outputs != len(multioutput):
raise ValueError(("There must be equally many custom weights "
"(%d) as outputs (%d).") %
(len(multioutput), n_outputs))
y_type = 'continuous' if n_outputs == 1 else 'continuous-multioutput'
return y_type, y_true, y_pred, multioutput
```
|
47,648,133
|
I want to calculate Mean Absolute percentage error (MAPE) of predicted and true values. I found a solution from [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but this gives error and shows invalid syntax in the line `mask = a <> 0`
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using spyder3. My predicted value is a type np.array and true value is dataframe
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I clear this error and proceed with the MAPE calculation?
Edit :
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
Neither of those solutions works with zero values. This works for me:
```
def percentage_error(actual, predicted):
res = np.empty(actual.shape)
for j in range(actual.shape[0]):
if actual[j] != 0:
res[j] = (actual[j] - predicted[j]) / actual[j]
else:
res[j] = predicted[j] / np.mean(actual)
return res
def mean_absolute_percentage_error(y_true, y_pred):
return np.mean(np.abs(percentage_error(np.asarray(y_true), np.asarray(y_pred)))) * 100
```
I hope it helps.
|
Here is another way to calculate MAPE that handles a zero denominator and is fast.
```py
def mod_my_MAPE(y, pred):
    mask = y != 0                  # build the mask before filtering y
    y, pred = y[mask], pred[mask]  # filter both arrays with the same mask
    summ = np.sum(np.abs((y - pred) / y))
    return summ / len(y)
```
|
47,648,133
|
I want to calculate Mean Absolute percentage error (MAPE) of predicted and true values. I found a solution from [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but this gives error and shows invalid syntax in the line `mask = a <> 0`
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using spyder3. My predicted value is a type np.array and true value is dataframe
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I clear this error and proceed with the MAPE calculation?
Edit :
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
scikit-learn v0.24 added a function that calculates MAPE:
`sklearn.metrics.mean_absolute_percentage_error`
All you need is two array-like variables: `y_true` storing the actual/real values and `y_pred` storing the predicted values.
You can refer to the official documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_percentage_error.html#sklearn.metrics.mean_absolute_percentage_error).
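A minimal usage sketch with made-up numbers, assuming scikit-learn >= 0.24 is installed (note the function returns a fraction, not a percentage):

```python
from sklearn.metrics import mean_absolute_percentage_error

y_true = [3.0, 2.0, 4.0]
y_pred = [2.5, 2.0, 5.0]

# mean of |true - pred| / |true| over all samples, as a fraction
mape = mean_absolute_percentage_error(y_true, y_pred)
print(mape)  # multiply by 100 for a percentage
```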
|
Here is another way to calculate MAPE that handles a zero denominator and is fast.
```py
def mod_my_MAPE(y, pred):
    mask = y != 0                  # build the mask before filtering y
    y, pred = y[mask], pred[mask]  # filter both arrays with the same mask
    summ = np.sum(np.abs((y - pred) / y))
    return summ / len(y)
```
|
47,648,133
|
I want to calculate Mean Absolute percentage error (MAPE) of predicted and true values. I found a solution from [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but this gives error and shows invalid syntax in the line `mask = a <> 0`
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using spyder3. My predicted value is a type np.array and true value is dataframe
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I clear this error and proceed with the MAPE calculation?
Edit :
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
In Python, the not-equal comparison operator is `!=`, not `<>` (the `<>` form was removed in Python 3).
So you need:
```
def mape_vectorized_v2(a, b):
mask = a != 0
return (np.fabs(a - b)/a)[mask].mean()
```
Another solution from [stats.stackexchange](https://stats.stackexchange.com/a/294069):
```
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
```
|
Since `scikit-learn` version 0.24, one can use [`sklearn.metrics.mean_absolute_percentage_error`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_percentage_error.html).
[Here](https://github.com/scikit-learn/scikit-learn/blob/46f15a00d2324da4b9f12c9168ddda8dddb1b607/sklearn/metrics/_regression.py#L292) is the link for the implementation on their GitHub repo.
I just tried locally with the newest version (1.0.2).
Here is an example ([Source](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_percentage_error.html))
```
>>> from sklearn.metrics import mean_absolute_percentage_error
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_percentage_error(y_true, y_pred)
0.5515...
```
**Note**: MAPE can be problematic if one of the values of y\_true is zero; in scikit-learn that results in an arbitrarily large number.
See this example ([Source](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_percentage_error.html))
```
>>> from sklearn.metrics import mean_absolute_percentage_error
>>> y_true = [1., 0., 2.4, 7.]
>>> y_pred = [1.2, 0.1, 2.4, 8.]
>>> mean_absolute_percentage_error(y_true, y_pred)
112589990684262.48
```
|
47,648,133
|
I want to calculate Mean Absolute percentage error (MAPE) of predicted and true values. I found a solution from [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but this gives error and shows invalid syntax in the line `mask = a <> 0`
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using spyder3. My predicted value is a type np.array and true value is dataframe
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I clear this error and proceed with the MAPE calculation?
Edit :
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
scikit-learn v0.24 added a function that calculates MAPE:
`sklearn.metrics.mean_absolute_percentage_error`
All you need is two array-like variables: `y_true` storing the actual/real values and `y_pred` storing the predicted values.
You can refer to the official documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_percentage_error.html#sklearn.metrics.mean_absolute_percentage_error).
|
Here is an improved version that is mindful of Zero:
```
#Mean Absolute Percentage error
def mape(y_true, y_pred,sample_weight=None,multioutput='uniform_average'):
y_type, y_true, y_pred, multioutput = _check_reg_targets(y_true, y_pred, multioutput)
epsilon = np.finfo(np.float64).eps
mape = np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), epsilon)
output_errors = np.average(mape,weights=sample_weight, axis=0)
if isinstance(multioutput, str):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
def _check_reg_targets(y_true, y_pred, multioutput, dtype="numeric"):
if y_true.ndim == 1:
y_true = y_true.reshape((-1, 1))
if y_pred.ndim == 1:
y_pred = y_pred.reshape((-1, 1))
if y_true.shape[1] != y_pred.shape[1]:
raise ValueError("y_true and y_pred have different number of output "
"({0}!={1})".format(y_true.shape[1], y_pred.shape[1]))
n_outputs = y_true.shape[1]
allowed_multioutput_str = ('raw_values', 'uniform_average',
'variance_weighted')
if isinstance(multioutput, str):
if multioutput not in allowed_multioutput_str:
raise ValueError("Allowed 'multioutput' string values are {}. "
"You provided multioutput={!r}".format(
allowed_multioutput_str,
multioutput))
elif multioutput is not None:
multioutput = check_array(multioutput, ensure_2d=False)
if n_outputs == 1:
raise ValueError("Custom weights are useful only in "
"multi-output cases.")
elif n_outputs != len(multioutput):
raise ValueError(("There must be equally many custom weights "
"(%d) as outputs (%d).") %
(len(multioutput), n_outputs))
y_type = 'continuous' if n_outputs == 1 else 'continuous-multioutput'
return y_type, y_true, y_pred, multioutput
```
|
47,648,133
|
I want to calculate Mean Absolute percentage error (MAPE) of predicted and true values. I found a solution from [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but this gives error and shows invalid syntax in the line `mask = a <> 0`
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using spyder3. My predicted value is a type np.array and true value is dataframe
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I clear this error and proceed with the MAPE calculation?
Edit :
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
scikit-learn v0.24 added a function that calculates MAPE:
`sklearn.metrics.mean_absolute_percentage_error`
All you need is two array-like variables: `y_true` storing the actual/real values and `y_pred` storing the predicted values.
You can refer to the official documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_percentage_error.html#sklearn.metrics.mean_absolute_percentage_error).
|
Since the actual values can also be zero, I divide the summed absolute error by the sum of the actual values, instead of dividing each error by its own actual value:
```py
Error = np.sum(np.abs(np.subtract(data_4['y'],data_4['pred'])))
Average = np.sum(data_4['y'])
MAPE = Error/Average
```
|
47,648,133
|
I want to calculate Mean Absolute percentage error (MAPE) of predicted and true values. I found a solution from [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but this gives error and shows invalid syntax in the line `mask = a <> 0`
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using spyder3. My predicted value is a type np.array and true value is dataframe
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I clear this error and proceed with the MAPE calculation?
Edit :
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
In Python, the not-equal comparison operator is `!=`, not `<>` (the `<>` form was removed in Python 3).
So you need:
```
def mape_vectorized_v2(a, b):
mask = a != 0
return (np.fabs(a - b)/a)[mask].mean()
```
Another solution from [stats.stackexchange](https://stats.stackexchange.com/a/294069):
```
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
```
|
Neither of those solutions works with zero values. This works for me:
```
def percentage_error(actual, predicted):
res = np.empty(actual.shape)
for j in range(actual.shape[0]):
if actual[j] != 0:
res[j] = (actual[j] - predicted[j]) / actual[j]
else:
res[j] = predicted[j] / np.mean(actual)
return res
def mean_absolute_percentage_error(y_true, y_pred):
return np.mean(np.abs(percentage_error(np.asarray(y_true), np.asarray(y_pred)))) * 100
```
I hope it helps.
|
52,135,293
|
I'm following the `Quickstart` on <https://developers.google.com/drive/api/v3/quickstart/python>. I've enabled the drive API through the page, loaded the **credentials.json** and can successfully list files in my google drive. However when I wanted to download a file, I got the message
```
`The user has not granted the app ####### read access to the file`
```
Do I need to do more than putting that scope in my code or do I need to activate something else?
```
SCOPES = 'https://www.googleapis.com/auth/drive.file'
client.flow_from_clientsecrets('credentials.json', SCOPES)
```
|
2018/09/02
|
[
"https://Stackoverflow.com/questions/52135293",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1767754/"
] |
When you first go through the Quick-Start tutorial, the scope is given as:
```
SCOPES = 'https://www.googleapis.com/auth/drive.metadata.readonly'
```
So after listing files, when you decide to download, it won't work: you need to generate the token again, and changing the scope alone won't recreate it or re-prompt for the Google authorization that happens on the first run.
To force the token to be regenerated, delete your current token or store the key in a new file:
```
store = file.Storage('tokenWrite.json')
```
|
I ran into the same error. I authorized the full `drive` scope, retrieved the file, and used `MediaIoBaseDownload` to stream the data into a local file. Note: you'll need to open the local file first.
```
from __future__ import print_function
from googleapiclient.discovery import build
import io
from apiclient import http
from google.oauth2 import service_account
SCOPES = ['https://www.googleapis.com/auth/drive']
SERVICE_ACCOUNT_FILE = 'credentials.json'
FILE_ID = <file-id>
credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
service = build('drive', 'v3', credentials=credentials)
def download_file(service, file_id, local_fd):
request = service.files().get_media(fileId=file_id)
media_request = http.MediaIoBaseDownload(local_fd, request)
while True:
_, done = media_request.next_chunk()
if done:
print ('Download Complete')
return
file_io_base = open('file.csv','wb')
download_file(service=service,file_id=FILE_ID,local_fd=file_io_base)
```
Hope that helps.
|
52,135,293
|
I'm following the `Quickstart` on <https://developers.google.com/drive/api/v3/quickstart/python>. I've enabled the drive API through the page, loaded the **credentials.json** and can successfully list files in my google drive. However when I wanted to download a file, I got the message
```
`The user has not granted the app ####### read access to the file`
```
Do I need to do more than putting that scope in my code or do I need to activate something else?
```
SCOPES = 'https://www.googleapis.com/auth/drive.file'
client.flow_from_clientsecrets('credentials.json', SCOPES)
```
|
2018/09/02
|
[
"https://Stackoverflow.com/questions/52135293",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1767754/"
] |
When you first go through the Quick-Start tutorial, the scope is given as:
```
SCOPES = 'https://www.googleapis.com/auth/drive.metadata.readonly'
```
So after listing files, when you decide to download, it won't work: you need to generate the token again, and changing the scope alone won't recreate it or re-prompt for the Google authorization that happens on the first run.
To force the token to be regenerated, delete your current token or store the key in a new file:
```
store = file.Storage('tokenWrite.json')
```
|
**Delete token.pickle and re-run the program.**
Elaboration: once you run the Quickstart example, it stores a token.pickle.
Thereafter, even if you change the scope in the Google API console and add the following scope in your code:
```
SCOPES = ['https://www.googleapis.com/auth/drive']
```
it will not work until you delete the earlier scope's token.pickle.
Once deleted, rerun the program; a browser tab will appear, authorize the app, and you're done.
|
52,135,293
|
I'm following the `Quickstart` on <https://developers.google.com/drive/api/v3/quickstart/python>. I've enabled the drive API through the page, loaded the **credentials.json** and can successfully list files in my google drive. However when I wanted to download a file, I got the message
```
`The user has not granted the app ####### read access to the file`
```
Do I need to do more than putting that scope in my code or do I need to activate something else?
```
SCOPES = 'https://www.googleapis.com/auth/drive.file'
client.flow_from_clientsecrets('credentials.json', SCOPES)
```
|
2018/09/02
|
[
"https://Stackoverflow.com/questions/52135293",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1767754/"
] |
I ran into the same error. I authorized the full `drive` scope, retrieved the file, and used `MediaIoBaseDownload` to stream the data into a local file. Note: you'll need to open the local file first.
```
from __future__ import print_function
from googleapiclient.discovery import build
import io
from apiclient import http
from google.oauth2 import service_account
SCOPES = ['https://www.googleapis.com/auth/drive']
SERVICE_ACCOUNT_FILE = 'credentials.json'
FILE_ID = <file-id>
credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
service = build('drive', 'v3', credentials=credentials)
def download_file(service, file_id, local_fd):
request = service.files().get_media(fileId=file_id)
media_request = http.MediaIoBaseDownload(local_fd, request)
while True:
_, done = media_request.next_chunk()
if done:
print ('Download Complete')
return
file_io_base = open('file.csv','wb')
download_file(service=service,file_id=FILE_ID,local_fd=file_io_base)
```
Hope that helps.
|
**Delete token.pickle and re-run the program.**
Elaboration: once you run the Quickstart example, it stores a token.pickle.
Thereafter, even if you change the scope in the Google API console and add the following scope in your code:
```
SCOPES = ['https://www.googleapis.com/auth/drive']
```
it will not work until you delete the earlier scope's token.pickle.
Once deleted, rerun the program; a browser tab will appear, authorize the app, and you're done.
|
48,634,071
|
I'm very new to Python, so please bear with me. I'm having trouble visualizing an Excel/CSV file with seaborn's lmplot. This code:
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
df=pd.read_csv("C:/Users/me/Documents/Jupyter Notebooks/Seaborn/Test.csv")
sns.set_style('whitegrid')
sns.lmplot(x=df["TestX"],y=df["TestY"], data=df)
```
gives me this error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-41-00ad8b882663> in <module>()
----> 1 sns.lmplot(x=df["TestX"],y=df["TestY"], data=df)
~\Anaconda3\lib\site-packages\seaborn\regression.py in lmplot(x, y, data, hue, col, row, palette, col_wrap, size, aspect, markers, sharex, sharey, hue_order, col_order, row_order, legend, legend_out, x_estimator, x_bins, x_ci, scatter, fit_reg, ci, n_boot, units, order, logistic, lowess, robust, logx, x_partial, y_partial, truncate, x_jitter, y_jitter, scatter_kws, line_kws)
550 need_cols = [x, y, hue, col, row, units, x_partial, y_partial]
551 cols = np.unique([a for a in need_cols if a is not None]).tolist()
--> 552 data = data[cols]
553
554 # Initialize the grid
~\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2131 if isinstance(key, (Series, np.ndarray, Index, list)):
2132 # either boolean or fancy integer index
-> 2133 return self._getitem_array(key)
2134 elif isinstance(key, DataFrame):
2135 return self._getitem_frame(key)
~\Anaconda3\lib\site-packages\pandas\core\frame.py in _getitem_array(self, key)
2175 return self._take(indexer, axis=0, convert=False)
2176 else:
-> 2177 indexer = self.loc._convert_to_indexer(key, axis=1)
2178 return self._take(indexer, axis=1, convert=True)
2179
~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _convert_to_indexer(self, obj, axis, is_setter)
1267 if mask.any():
1268 raise KeyError('{mask} not in index'
-> 1269 .format(mask=objarr[mask]))
1270
1271 return _values_from_object(indexer)
KeyError: '[ 2 23 231 234 242 321] not in index'
```
I figured there is something wrong with the index, but to me (as I said, I'm new to this) the index seems to be set right:
```
df.head()
Asset Paid Organic
0 A 23 2
1 B 231 234
2 C 321 242
```
And when I use one of the imported example datasets, lmplot works just fine:
```
%matplotlib inline
import seaborn as sns
sns.set(color_codes=True)
tips = sns.load_dataset("tips")
g = sns.lmplot(x="total_bill", y="tip", data=tips, hue="smoker")
```
What am I doing wrong? sns.regplot didn't give me that error, but I wanted to use hue to make a much bigger dataset easier to understand. I tried many things to fix this and couldn't find any solution. xlsx and csv files both give me this error.
|
2018/02/06
|
[
"https://Stackoverflow.com/questions/48634071",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9319446/"
] |
```
sns.lmplot(x=df["TestX"],y=df["TestY"], data=df)
```
x, y : strings, optional
Input variables, these should be column names in data.
You can try again with:
```
sns.lmplot(x="TestX",y="TestY", data=df)
```
|
```
import sys

import pandas as pd
import numpy as np
import matplotlib
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style="darkgrid")
df = pd.read_csv('data.csv')
sns.lmplot(x="Duration", y="Maxpulse", data=df)
plt.show()
plt.savefig(sys.stdout.buffer)
sys.stdout.flush()
```
"Duration" and "Maxpluse" are column names.
Here is the output of my code:
[](https://i.stack.imgur.com/IaXxY.png)
|
57,624,731
|
I would like to assert that two dictionaries are equal, using Python's [`unittest`](https://docs.python.org/3/library/unittest.html), but ignoring the values of certain keys in the dictionary, in a convenient syntax, like this:
```py
from unittest import TestCase
class Example(TestCase):
def test_example(self):
result = foobar()
self.assertEqual(
result,
{
"name": "John Smith",
"year_of_birth": 1980,
"image_url": ignore(), # how to do this?
"unique_id": ignore(), #
},
)
```
To be clear, I want to check that all four keys exist, and I want to check the values of `"name"` and `"year_of_birth"` (but not `"image_url"` or `"unique_id"`), and I want to check that no other keys exist.
I know I could modify `result` here to remove the key-value pairs for `"image_url"` and `"unique_id"`, but I would like something more convenient that doesn't modify the original dictionary.
(This is inspired by [`Test::Deep`](https://metacpan.org/pod/Test::Deep#ignore) for Perl 5.)
|
2019/08/23
|
[
"https://Stackoverflow.com/questions/57624731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247696/"
] |
There is [`unittest.mock.ANY`](https://docs.python.org/3/library/unittest.mock.html#any) which compares equal to everything.
```
from unittest import TestCase
import unittest.mock as mock
class Example(TestCase):
def test_happy_path(self):
result = {
"name": "John Smith",
"year_of_birth": 1980,
"image_url": 42,
"unique_id": 'foo'
}
self.assertEqual(
result,
{
"name": "John Smith",
"year_of_birth": 1980,
"image_url": mock.ANY,
"unique_id": mock.ANY
}
)
def test_missing_key(self):
result = {
"name": "John Smith",
"year_of_birth": 1980,
"unique_id": 'foo'
}
self.assertNotEqual(
result,
{
"name": "John Smith",
"year_of_birth": 1980,
"image_url": mock.ANY,
"unique_id": mock.ANY
}
)
def test_extra_key(self):
result = {
"name": "John Smith",
"year_of_birth": 1980,
"image_url": 42,
"unique_id": 'foo',
"maiden_name": 'Doe'
}
self.assertNotEqual(
result,
{
"name": "John Smith",
"year_of_birth": 1980,
"image_url": mock.ANY,
"unique_id": mock.ANY
}
)
def test_wrong_value(self):
result = {
"name": "John Smith",
"year_of_birth": 1918,
"image_url": 42,
"unique_id": 'foo'
}
self.assertNotEqual(
result,
{
"name": "John Smith",
"year_of_birth": 1980,
"image_url": mock.ANY,
"unique_id": mock.ANY
}
)
```
|
You can simply ignore the selected keys in the `result` dictionary.
```
self.assertEqual({k: v for k, v in result.items()
if k not in ('image_url', 'unique_id')},
{"name": "John Smith",
"year_of_birth": 1980})
```
|
62,155,242
|
I'm trying to compile a python script with sklearn, pandas, numpy and igraph, but the PyInstaller executable doesn't run correctly because it can't find version.json in the temp folder.
```
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Usuario\\AppData\\Local\\Temp\\_MEI106882\\wcwidth\\version.json'
[17248] Failed to execute script pyScript
```
|
2020/06/02
|
[
"https://Stackoverflow.com/questions/62155242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11902728/"
] |
You'll need to include the `wcwidth` project directory in your data, since it isn't considered a package or a module; it's a data file.
In your spec file:
```
...
import wcwidth
a = Analysis(['main.py'],
pathex=[],
binaries=[],
datas=[
(os.path.dirname(wcwidth.__file__), 'wcwidth')
],
...
```
Note: above I use `os.path.dirname(wcwidth.__file__)` to dynamically get the directory of `wcwidth`, but this can just be `.venv/lib/site_packages/wcwidth` or wherever it is installed; making it dynamic was crucial for CI in my case.
Or with `--add-data`:
```
pyinstaller --add-data "/path/to/site_packages/wcwidth;wcwidth"
```
|
I hit this issue when creating a binary from a virtual environment. It seems installing ipython within the virtual environment is causing this.
Recreating a new virtual environment without ipython seems to overcome the issue.
|
58,308,911
|
1) I first map the key names to a dictionary called main\_dict with an empty list (the actual problem has many keys, which is why I am doing this)
2) I then loop over a data matrix consisting of 3 columns
3) When I append a value to a column (the key) in the dictionary, the data is appended to the wrong keys.
Where am I going wrong here?
**Minimum working example:**
Edit: The data is being read from a file row by row and does not exist in lists as in this MWE.
Edit2: Prefer a more pythonic solution than Pandas.
```
import numpy as np
key_names = ["A", "B", "C"]
main_dict = {}
val = []
main_dict = main_dict.fromkeys(key_names, val)
data = np.array([[2018, 1.1, 3.3], [2017, 2.1, 5.4], [2016, 3.1, 1.4]])
for i in data:
main_dict["A"].append(i[0])
main_dict["B"].append(i[1])
main_dict["C"].append(i[2])
print(main_dict["A"])
# Actual output: [2018.0, 1.1, 3.3, 2017.0, 2.1, 5.4, 2016.0, 3.1, 1.4]
# print(main_dict["A"])
# Expected output: [2018.0, 2017.0, 2016.0]
# print(main_dict["B"])
# Expected output: [1.1, 2.1, 3.1]
# print(main_dict["C"])
# Expected output: [3.3, 5.4, 1.4]
```
|
2019/10/09
|
[
"https://Stackoverflow.com/questions/58308911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6685541/"
] |
Without using numpy (which is a heavyweight package for what you are doing) I would do this:
```
keys = ["A", "B", "C"]
main_dict = {key: [] for key in keys}
data = [[2018, 1.1, 3.3], [2017, 2.1, 5.4], [2016, 3.1, 1.4]]
# since you are reading from a file
for datum in data:
for index, key in enumerate(keys):
main_dict[key].append(datum[index])
print(main_dict) # {'A': [2018, 2017, 2016], 'B': [1.1, 2.1, 3.1], 'C': [3.3, 5.4, 1.4]}
```
Alternatively you can use the built-in `defaultdict` which is a tad faster since you don't need to do the dictionary comprehension:
```
from collections import defaultdict
main_dict = defaultdict(list)
keys = ["A", "B", "C"]
data = [[2018, 1.1, 3.3], [2017, 2.1, 5.4], [2016, 3.1, 1.4]]
# since you are reading from a file
for datum in data:
for index, key in enumerate(keys):
main_dict[key].append(datum[index])
print(main_dict)
```
Lastly, if the keys really don't matter to you, you can generate them dynamically, starting from `A`, which allows lines to have more than three attributes:
```
from collections import defaultdict
main_dict = defaultdict(list)
data = [[2018, 1.1, 3.3], [2017, 2.1, 5.4], [2016, 3.1, 1.4]]
# since you are reading from a file
for datum in data:
for index, attr in enumerate(datum):
main_dict[chr(ord('A') + index)].append(attr)
print(main_dict) # {'A': [2018, 2017, 2016], 'B': [1.1, 2.1, 3.1], 'C': [3.3, 5.4, 1.4]}
```
|
The problem is in
```
main_dict = main_dict.fromkeys(key_names, val)
```
The same list `val` is referenced by all the keys, because `fromkeys` stores a reference to the one list object rather than a copy.
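A minimal demonstration of the aliasing (only the key names are taken from the original post):

```python
key_names = ["A", "B", "C"]

# dict.fromkeys stores the SAME list object under every key...
shared = dict.fromkeys(key_names, [])
shared["A"].append(2018)
print(shared)    # every key sees the append

# ...whereas a dict comprehension builds a fresh list per key.
separate = {k: [] for k in key_names}
separate["A"].append(2018)
print(separate)  # only "A" changed
```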
|
42,585,598
|
When I try to deploy to my Elastic Beanstalk environment I am getting this python error. Everything was working fine a few days ago.
```
$ eb deploy
ERROR: AttributeError :: 'NoneType' object has no attribute 'split'
```
I've thus far attempted to update everything to no effect by issuing the following commands:
```
sudo pip install --upgrade setuptools
```
and
```
sudo pip install --upgrade awscli
```
Here are the resulting versions I'm working with:
```
$ eb --version
EB CLI 3.10.0 (Python 2.7.1)
$ aws --version
aws-cli/1.11.56 Python/2.7.13rc1 Darwin/16.4.0 botocore/1.5.19
```
Everything looks fine under eb status
```
$ eb status
Environment details for: ***
Application name: ***
Region: us-west-2
Deployed Version: ***
Environment ID: ***
Platform: 64bit Amazon Linux 2016.09 v3.3.1 running Node.js
Tier: WebServer-Standard
CNAME: ***.us-west-2.elasticbeanstalk.com
Updated: 2017-03-02 14:48:29.099000+00:00
Status: Ready
Health: Green
```
This issue appears to affect only this Elastic Beanstalk project; I am able to deploy to another project on the same AWS account.
|
2017/03/03
|
[
"https://Stackoverflow.com/questions/42585598",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2767129/"
] |
I had the same issue. Turns out that when you enable CodeCommit the CLI looks for a remote called "codecommit-origin" and if you don't have a git remote with that *specific* name it will throw that error.
Posting this for anyone else who stumbles upon the same issue.
|
I fixed this with `eb codesource local && eb deploy`, so it forgets about CodeCommit.
|
42,585,598
|
When I try to deploy to my Elastic Beanstalk environment I am getting this python error. Everything was working fine a few days ago.
```
$ eb deploy
ERROR: AttributeError :: 'NoneType' object has no attribute 'split'
```
I've thus far attempted to update everything to no effect by issuing the following commands:
```
sudo pip install --upgrade setuptools
```
and
```
sudo pip install --upgrade awscli
```
Here are the resulting versions I'm working with:
```
$ eb --version
EB CLI 3.10.0 (Python 2.7.1)
$ aws --version
aws-cli/1.11.56 Python/2.7.13rc1 Darwin/16.4.0 botocore/1.5.19
```
Everything looks fine under eb status
```
$ eb status
Environment details for: ***
Application name: ***
Region: us-west-2
Deployed Version: ***
Environment ID: ***
Platform: 64bit Amazon Linux 2016.09 v3.3.1 running Node.js
Tier: WebServer-Standard
CNAME: ***.us-west-2.elasticbeanstalk.com
Updated: 2017-03-02 14:48:29.099000+00:00
Status: Ready
Health: Green
```
This issue appears to affect only this Elastic Beanstalk project; I am able to deploy to another project on the same AWS account.
|
2017/03/03
|
[
"https://Stackoverflow.com/questions/42585598",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2767129/"
] |
I had the same issue. Turns out that when you enable CodeCommit the CLI looks for a remote called "codecommit-origin" and if you don't have a git remote with that *specific* name it will throw that error.
Posting this for anyone else who stumbles upon the same issue.
|
To fix this error I needed to add the following lines to my `.elasticbeanstalk/config.yml`
```
branch-defaults:
  master:
    environment: prod-api
    group_suffix: null
  staging:
    environment: staging-api
    group_suffix: null
```
You'll need to change the branch names and environments to what is applicable to your case (in our case we have a master branch deploying to an eb environment named prod-api).
|
42,585,598
|
When I try to deploy to my Elastic Beanstalk environment I am getting this python error. Everything was working fine a few days ago.
```
$ eb deploy
ERROR: AttributeError :: 'NoneType' object has no attribute 'split'
```
I've thus far attempted to update everything to no effect by issuing the following commands:
```
sudo pip install --upgrade setuptools
```
and
```
sudo pip install --upgrade awscli
```
Here are the resulting versions I'm working with:
```
$ eb --version
EB CLI 3.10.0 (Python 2.7.1)
$ aws --version
aws-cli/1.11.56 Python/2.7.13rc1 Darwin/16.4.0 botocore/1.5.19
```
Everything looks fine under eb status
```
$ eb status
Environment details for: ***
Application name: ***
Region: us-west-2
Deployed Version: ***
Environment ID: ***
Platform: 64bit Amazon Linux 2016.09 v3.3.1 running Node.js
Tier: WebServer-Standard
CNAME: ***.us-west-2.elasticbeanstalk.com
Updated: 2017-03-02 14:48:29.099000+00:00
Status: Ready
Health: Green
```
This issue appears to affect only this Elastic Beanstalk project; I am able to deploy to another project on the same AWS account.
|
2017/03/03
|
[
"https://Stackoverflow.com/questions/42585598",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2767129/"
] |
I fixed this with `eb codesource local && eb deploy`, so it forgets about CodeCommit.
|
To fix this error I needed to add the following lines to my `.elasticbeanstalk/config.yml`
```
branch-defaults:
  master:
    environment: prod-api
    group_suffix: null
  staging:
    environment: staging-api
    group_suffix: null
```
You'll need to change the branch names and environments to what is applicable to your case (in our case we have a master branch deploying to an eb environment named prod-api).
|
36,051,751
|
I am trying to sort 4 integers input by the user into numerical order using only the min() and max() functions in Python. I can get the highest and lowest number easily, but cannot work out a combination to order the two middle numbers. Does anyone have an idea?
|
2016/03/17
|
[
"https://Stackoverflow.com/questions/36051751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6074977/"
] |
So I'm guessing your input is something like this?
```
string = input('Type your numbers, separated by a space')
```
Then I'd do:
```
numbers = [int(i) for i in string.strip().split(' ')]
amount_of_numbers = len(numbers)
sorted = []
for i in range(amount_of_numbers):
x = max(numbers)
numbers.remove(x)
sorted.append(x)
print(sorted)
```
This will sort them (in descending order) using max, but min can be used instead for ascending order.
**If you *didn't have to* use min and max**:
```
string = input('Type your numbers, separated by a space')
numbers = [int(i) for i in string.strip().split(' ')]
numbers.sort() #an optional reverse argument possible
print(numbers)
```
|
LITERALLY just min and max? Odd, but why not? I'm about to crash, but I think the following would work:
```
# Easy
arr[0] = max(a,b,c,d)
# Take the smallest element from each pair.
#
# You will never take the largest element from the set, but since one of the
# pairs will be (largest, second_largest) you will at some point take the
# second largest. Take the maximum value of the selected items - which
# will be the maximum of the items ignoring the largest value.
arr[1] = max(min(a,b),
             min(a,c),
             min(a,d),
             min(b,c),
             min(b,d),
             min(c,d))
# Similar logic, but reversed, to take the smallest of the largest of each
# pair - again omitting the smallest number, then taking the smallest.
arr[2] = min(max(a,b),
             max(a,c),
             max(a,d),
             max(b,c),
             max(b,d),
             max(c,d))
# Easy
arr[3] = min(a,b,c,d)
```
|
36,051,751
|
I am trying to sort 4 integers input by the user into numerical order using only the min() and max() functions in Python. I can get the highest and lowest number easily, but cannot work out a combination to order the two middle numbers. Does anyone have an idea?
|
2016/03/17
|
[
"https://Stackoverflow.com/questions/36051751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6074977/"
] |
So I'm guessing your input is something like this?
```
string = input('Type your numbers, separated by a space')
```
Then I'd do:
```
numbers = [int(i) for i in string.strip().split(' ')]
amount_of_numbers = len(numbers)
sorted = []
for i in range(amount_of_numbers):
x = max(numbers)
numbers.remove(x)
sorted.append(x)
print(sorted)
```
This will sort them (in descending order) using max, but min can be used instead for ascending order.
**If you *didn't have to* use min and max**:
```
string = input('Type your numbers, separated by a space')
numbers = [int(i) for i in string.strip().split(' ')]
numbers.sort() #an optional reverse argument possible
print(numbers)
```
|
I have worked it out.
```
min_integer = min(first_integer, second_integer, third_integer, fourth_integer)
mid_low_integer = min(max(first_integer, second_integer), max(third_integer, fourth_integer))
mid_high_integer = max(min(first_integer, second_integer), min(third_integer, fourth_integer))
max_integer = max(first_integer, second_integer, third_integer, fourth_integer)
```
|
36,051,751
|
I am trying to sort 4 integers input by the user into numerical order using only the min() and max() functions in Python. I can get the highest and lowest number easily, but cannot work out a combination to order the two middle numbers. Does anyone have an idea?
|
2016/03/17
|
[
"https://Stackoverflow.com/questions/36051751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6074977/"
] |
So I'm guessing your input is something like this?
```
string = input('Type your numbers, separated by a space')
```
Then I'd do:
```
numbers = [int(i) for i in string.strip().split(' ')]
amount_of_numbers = len(numbers)
sorted = []
for i in range(amount_of_numbers):
x = max(numbers)
numbers.remove(x)
sorted.append(x)
print(sorted)
```
This will sort them (in descending order) using max, but min can be used instead for ascending order.
**If you *didn't have to* use min and max**:
```
string = input('Type your numbers, separated by a space')
numbers = [int(i) for i in string.strip().split(' ')]
numbers.sort() #an optional reverse argument possible
print(numbers)
```
|
For Tankerbuzz's solution, with the following inputs:
```
first_integer = 9
second_integer = 19
third_integer = 1
fourth_integer = 15
```
I get 1, 15, 9, 19 as the ascending values.
The following is one of the forms that gives the symbolic forms of the ascending values (using i1-i4 instead of first\_integer, etc...):
```
Min(i1, i2, i3, i4)
Max(Min(i4, Max(Min(i1, i2), Min(i3, Max(i1, i2))), Max(i1, i2, i3)), Min(i1, i2, i3, Max(i1, i2)))
Max(Min(i1, i2), Min(i3, Max(i1, i2)), Min(i4, Max(i1, i2, i3)))
Max(i1, i2, i3, i4)
```
It was generated by a 'bubble sort' using the Min and Max functions of SymPy (a python CAS):
```
def minmaxsort(v):
"""return a sorted list of the elements in v using the
Min and Max functions.
Examples
========
>>> minmaxsort(3, 2, 1)
[1, 2, 3]
>>> minmaxsort(1, x, y)
[Min(1, x, y), Max(Min(1, x), Min(y, Max(1, x))), Max(1, x, y)]
>>> minmaxsort(1, y, x)
[Min(1, x, y), Max(Min(1, y), Min(x, Max(1, y))), Max(1, x, y)]
"""
from sympy import Min, Max
v = list(v)
v0 = Min(*v)
for j in range(len(v)):
for i in range(len(v) - j - 1):
w = v[i:i + 2]
v[i:i + 2] = [Min(*w), Max(*w)]
v[0] = v0
return v
```
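As a quick sanity check (not part of the original answer), the closed forms above can be translated to plain Python `min`/`max` and compared against `sorted` for every ordering of the sample values:

```python
from itertools import permutations

def minmax_sort4(i1, i2, i3, i4):
    # Plain-Python translation of the symbolic Min/Max forms above.
    return [
        min(i1, i2, i3, i4),
        max(min(i4, max(min(i1, i2), min(i3, max(i1, i2))), max(i1, i2, i3)),
            min(i1, i2, i3, max(i1, i2))),
        max(min(i1, i2), min(i3, max(i1, i2)), min(i4, max(i1, i2, i3))),
        max(i1, i2, i3, i4),
    ]

# Agreement with sorted() for every ordering of four distinct values.
for p in permutations([9, 19, 1, 15]):
    assert minmax_sort4(*p) == sorted(p)
```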
|
36,051,751
|
I am trying to sort 4 integers input by the user into numerical order using only the min() and max() functions in Python. I can get the highest and lowest number easily, but cannot work out a combination to order the two middle numbers. Does anyone have an idea?
|
2016/03/17
|
[
"https://Stackoverflow.com/questions/36051751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6074977/"
] |
LITERALLY just min and max? Odd, but why not? I'm about to crash, but I think the following would work:
```
# Easy
arr[0] = max(a,b,c,d)
# Take the smallest element from each pair.
#
# You will never take the largest element from the set, but since one of the
# pairs will be (largest, second_largest) you will at some point take the
# second largest. Take the maximum value of the selected items - which
# will be the maximum of the items ignoring the largest value.
arr[1] = max(min(a,b),
             min(a,c),
             min(a,d),
             min(b,c),
             min(b,d),
             min(c,d))
# Similar logic, but reversed, to take the smallest of the largest of each
# pair - again omitting the smallest number, then taking the smallest.
arr[2] = min(max(a,b),
             max(a,c),
             max(a,d),
             max(b,c),
             max(b,d),
             max(c,d))
# Easy
arr[3] = min(a,b,c,d)
```
|
I have worked it out.
```
min_integer = min(first_integer, second_integer, third_integer, fourth_integer)
mid_low_integer = min(max(first_integer, second_integer), max(third_integer, fourth_integer))
mid_high_integer = max(min(first_integer, second_integer), min(third_integer, fourth_integer))
max_integer = max(first_integer, second_integer, third_integer, fourth_integer)
```
|
36,051,751
|
I am trying to sort 4 integers input by the user into numerical order using only the min() and max() functions in Python. I can get the highest and lowest number easily, but cannot work out a combination to order the two middle numbers. Does anyone have an idea?
|
2016/03/17
|
[
"https://Stackoverflow.com/questions/36051751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6074977/"
] |
For Tankerbuzz's solution, with the following inputs:
```
first_integer = 9
second_integer = 19
third_integer = 1
fourth_integer = 15
```
I get 1, 15, 9, 19 as the ascending values.
The following is one of the forms that gives symbolic form of the ascending values (using i1-i4 instead of first\_integer, etc...):
```
Min(i1, i2, i3, i4)
Max(Min(i4, Max(Min(i1, i2), Min(i3, Max(i1, i2))), Max(i1, i2, i3)), Min(i1, i2, i3, Max(i1, i2)))
Max(Min(i1, i2), Min(i3, Max(i1, i2)), Min(i4, Max(i1, i2, i3)))
Max(i1, i2, i3, i4)
```
It was generated by a 'bubble sort' using the Min and Max functions of SymPy (a python CAS):
```
def minmaxsort(v):
"""return a sorted list of the elements in v using the
Min and Max functions.
Examples
========
>>> minmaxsort(3, 2, 1)
[1, 2, 3]
>>> minmaxsort(1, x, y)
[Min(1, x, y), Max(Min(1, x), Min(y, Max(1, x))), Max(1, x, y)]
>>> minmaxsort(1, y, x)
[Min(1, x, y), Max(Min(1, y), Min(x, Max(1, y))), Max(1, x, y)]
"""
from sympy import Min, Max
v = list(v)
v0 = Min(*v)
for j in range(len(v)):
for i in range(len(v) - j - 1):
w = v[i:i + 2]
v[i:i + 2] = [Min(*w), Max(*w)]
v[0] = v0
return v
```
|
I have worked it out.
```
min_integer = min(first_integer, second_integer, third_integer, fourth_integer)
mid_low_integer = min(max(first_integer, second_integer), max(third_integer, fourth_integer))
mid_high_integer = max(min(first_integer, second_integer), min(third_integer, fourth_integer))
max_integer = max(first_integer, second_integer, third_integer, fourth_integer)
```
|
29,989,956
|
I want to share a variable value between a function defined within a python class and an externally defined function. So in the code below, when the internalCompute() function is called, self.data is updated. How can I access this updated data value inside a function that is defined outside the class, i.e inside the report function?
**Note: I would like to avoid using global variables as much as possible.**
```
class Compute(object):
def __init__(self):
self.data = 0
def internalCompute(self):
self.data = 5
def externalCompute():
q = Compute()
q.internalCompute()
def report():
# access the updated variable self.data from Compute class
print "You entered report"
externalCompute()
report()
```
|
2015/05/01
|
[
"https://Stackoverflow.com/questions/29989956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1833722/"
] |
>
> determine the physical size of an iPhone (in inches)
>
>
>
This will never be possible, because the physical screen is made up of *pixels* (square LEDs), and there is no way to ask the size of one of those.
Thus, for example, an app that presents a ruler (inches / centimetres) would need to know *in some other way* how the pixel count relates to the physical dimensions of the screen - for example, by looking it up in a built-in table. As you rightly say, if a new device were introduced, the app would have no entry in that table and would be powerless to present a correctly-scaled ruler.
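To make the lookup-table idea concrete, here is a sketch in Python (the rest of this document's examples are Python; on iOS you would express the same table in Objective-C or Swift). The point sizes and PPI values are illustrative assumptions, not an authoritative device list:

```python
# Maps a device's logical point size to pixels-per-inch. The screen API
# cannot report physical size, so a built-in table is the "other way"
# of knowing how pixel counts relate to inches.
POINTS_TO_PPI = {
    (375, 667): 326,  # e.g. a 4.7-inch phone (illustrative values)
    (414, 736): 401,  # e.g. a 5.5-inch phone (illustrative values)
}

def physical_width_inches(point_w, point_h, scale):
    """Return the screen's physical width, or None for unknown devices."""
    ppi = POINTS_TO_PPI.get((point_w, point_h))
    if ppi is None:
        return None  # a new device has no table entry: the app is stuck
    return point_w * scale / ppi
```

The `None` branch is exactly the failure mode described above: a newly released device has no entry, and the app has no way to scale its ruler correctly.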
|
I'm using this code in my pch file:
```
#define IS_IPAD ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPad)
#define IS_IPHONE6PLUS (!IS_IPAD && [[UIScreen mainScreen] bounds].size.height >= 736)
#define IS_IPHONE6 (!IS_IPAD && !IS_IPHONE6PLUS && [[UIScreen mainScreen] bounds].size.height >= 667)
#define IS_IPHONE5 (!IS_IPAD && !IS_IPHONE6PLUS && !IS_IPHONE6 && ([[UIScreen mainScreen] bounds].size.height >= 568))
```
then in your code you can ask
```
if (IS_IPHONE6PLUS) {
// do something
}
```
output from `NSStringFromCGRect([[UIScreen mainScreen] bounds])` is `{{0, 0}, {414, 736}}`
|
49,901,249
|
I have searched everywhere for how to fix this and could not find anything, so I'm sorry if a thread on this issue already exists. Also, I'm fairly new to Linux, GDB, and Stack Overflow; this is my first post.
First, I am running Debian GNU/Linux 9 (stretch) under the Windows Subsystem for Linux, and when I start gdb I get this:
```
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
...
This GDB was configured as "x86_64-linux-gnu".
...
```
Also when I show the configuration I get this:
```
configure --host=x86_64-linux-gnu --target=x86_64-linux-gnu
--with-auto-load-dir=$debugdir:$datadir/auto-load
--with-auto-load-safe-path=$debugdir:$datadir/auto-load
--with-expat
--with-gdb-datadir=/usr/share/gdb (relocatable)
--with-jit-reader-dir=/usr/lib/gdb (relocatable)
--without-libunwind-ia64
--with-lzma
--with-python=/usr (relocatable)
--without-guile
--with-separate-debug-dir=/usr/lib/debug (relocatable)
--with-system-gdbinit=/etc/gdb/gdbinit
--with-babeltrace
```
I've created some sample C code to show the issue I have been encountering. I just want to clarify that I know I'm using heap allocations and could just as easily use a char buffer for this example; the issue is not about that.
I'll call this sample program test.c
```
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv)
{
char *str = malloc(8);
char *str2 = malloc(8);
strcpy(str, argv[1]);
strcpy(str2, argv[2]);
printf("This is the end.\n");
printf("1st: %s\n2nd: %s\n", str, str2);
}
```
I then compiled it with
```
gcc -g -o test test.c
```
And with a quick run, I know that everything works the way it should.
```
$ ./test AAAAAAAA BBBBBBBB
This is the end.
1st: AAAAAAAA
2nd: BBBBBBBB
```
When I start the program with no input args, I get a segmentation fault as expected. But when I then try to use gdb to show what exactly happens, I get the following.
```
$ gdb test
(gdb) run
Program received signal SIGSEGV, Segmentation fault.
__strcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S:296
296 ../sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S: No such file or directory.
```
What I expected was to get information about what failed, such as the destination and source addresses that were passed as parameters to strcpy, but I just get empty parentheses.
Again I'm relatively new to gdb so I don't know if this is normal or not.
Any help would be great.
**EDIT 1**
>
> Can you paste the output of `(gdb) bt` into your question? Your program is stopped in a library function that hasn't had its optional debug information installed on your system, but some other parts of the stack trace may be useful. – Mark Plotnick
>
>
>
```
(gdb) run
(gdb) bt
#0 __strcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S:296
#1 0x00000000080007d5 in main (argc=1, argv=0x7ffffffde308) at test.c:10
(gdb) run AAAAAAAA
(gdb) bt
#0 __strcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S:296
#1 0x00000000080007ef in main (argc=2, argv=0x7ffffffde2f8) at test.c:11
```
|
2018/04/18
|
[
"https://Stackoverflow.com/questions/49901249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9664285/"
] |
When you build the project it generates a .war file, as you know. You can extract it and find all the dependent jar files at {.war}\WEB-INF\lib\
|
My suggestion is: go to C:\Users\<>.m2\repository (your Maven directory) and search for \*.jar; it will list out all the jars in the window.
Copy all the jars and paste them into your directory.
|
34,295,670
|
I have a problem with a little program in Python:
what I want is to write the numbers from 0 to 10 to a file called text.txt, but the program gives me an error every time and doesn't print anything.
```
i=0
while(i<11):
outfile = open('text.txt', 'a')
outfile.write('\n'+i)
outfile.close()
i=i+1
```
I tried putting
```
outfile.write('\n'+i)
outfile.write('\n',i)
outfile.write('\n'),i
outfile.write(i)
```
but not a single one of them works. Can you tell me what I am doing wrong, please?
|
2015/12/15
|
[
"https://Stackoverflow.com/questions/34295670",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5631254/"
] |
I'm guessing that the error you're getting involves concatenating a string with an integer.
try:
```
outfile.write(str(i) + '\n')
```
|
Assuming you want a file that looks like this:
```
1
2
3
4
5
6
7
8
9
```
Did you try it this way?
```
>>> fo = open('outfile.txt', 'w')
>>> for i in range(1,10):
... fo.write(str(i)+"\n")
>>> fo.close()
```
The error you probably are getting is this :
>
> TypeError: unsupported operand type(s) for +: 'int' and 'str'
>
>
>
Which is because you are trying to concatenate a string and an integer. Convert your integer (`i`) into a string by enclosing it with `str()`.
|
34,295,670
|
I have a problem with a little program in Python:
what I want is to write the numbers from 0 to 10 to a file called text.txt, but the program gives me an error every time and doesn't print anything.
```
i=0
while(i<11):
outfile = open('text.txt', 'a')
outfile.write('\n'+i)
outfile.close()
i=i+1
```
I tried putting
```
outfile.write('\n'+i)
outfile.write('\n',i)
outfile.write('\n'),i
outfile.write(i)
```
but not a single one of them works. Can you tell me what I am doing wrong, please?
|
2015/12/15
|
[
"https://Stackoverflow.com/questions/34295670",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5631254/"
] |
You're opening/closing your file at each iteration of the while-loop. Why overdo things? You can easily do the job once the file is open.
Also, you try to write *'\n'* (which is a string) plus *i* (which is an integer). That's wrong; you need to convert your *i* into a string too.
Try this code:
```
with open('text.txt', 'w') as f:
for i in range(11):
f.write(str(i) + '\n')
```
Hope that helps.
|
Assuming you want a file that looks like this:
```
1
2
3
4
5
6
7
8
9
```
Did you try it this way?
```
>>> fo = open('outfile.txt', 'w')
>>> for i in range(1,10):
... fo.write(str(i)+"\n")
>>> fo.close()
```
The error you probably are getting is this :
>
> TypeError: unsupported operand type(s) for +: 'int' and 'str'
>
>
>
Which is because you are trying to concatenate a string and an integer. Convert your integer (`i`) into a string by enclosing it with `str()`.
|
34,295,670
|
I have a problem with a little program in Python:
what I want is to write the numbers from 0 to 10 to a file called text.txt, but the program gives me an error every time and doesn't print anything.
```
i=0
while(i<11):
outfile = open('text.txt', 'a')
outfile.write('\n'+i)
outfile.close()
i=i+1
```
I tried putting
```
outfile.write('\n'+i)
outfile.write('\n',i)
outfile.write('\n'),i
outfile.write(i)
```
but not a single one of them works. Can you tell me what I am doing wrong, please?
|
2015/12/15
|
[
"https://Stackoverflow.com/questions/34295670",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5631254/"
] |
I'm guessing that the error you're getting involves concatenating a string with an integer.
Try:
```
outfile.write(str(i) + '\n')
```
|
Try:
```
while(i<11):
outfile = open('text.txt', 'a')
outfile.write('\n'+str(i))
outfile.close()
i=i+1
```
I am getting a text.txt with 0-10 on separate lines, with a blank line at the top.
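A small sketch of why that blank line appears, assuming the same loop body: `'\n' + str(i)` puts the newline *before* each number, so the very first write starts the file with an empty line.

```python
# Build the same content in memory that the loop writes to text.txt
content = ''.join('\n' + str(i) for i in range(3))
print(repr(content)) # '\n0\n1\n2' -> note the leading newline
```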
|
34,295,670
|
I have a problem with a little program in Python:
I want to write the numbers from 0 to 10 to a file called text.txt, but the program gives me an error every time and doesn't print anything.
```
i=0
while(i<11):
outfile = open('text.txt', 'a')
outfile.write('\n'+i)
outfile.close()
i=i+1
```
I tried putting
```
outfile.write('\n'+i)
outfile.write('\n',i)
outfile.write('\n'),i
outfile.write(i)
```
but not a single one of them works. Can you tell me what I am doing wrong, please?
|
2015/12/15
|
[
"https://Stackoverflow.com/questions/34295670",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5631254/"
] |
You're opening and closing your file at each iteration of the while-loop. Why overdo things? You can do the whole job while the file is open once.
Also, you try to write *'\n'* (which is a string) plus *i* (which is an integer). That's wrong; you need to convert your *i* into a string first.
Try this code:
```
with open('text.txt', 'w') as f:
for i in range(11):
f.write(str(i) + '\n')
```
Hope that helps.
|
I'm guessing that the error you're getting involves concatenating a string with an integer.
Try:
```
outfile.write(str(i) + '\n')
```
|
34,295,670
|
I have a problem with a little program in Python:
I want to write the numbers from 0 to 10 to a file called text.txt, but the program gives me an error every time and doesn't print anything.
```
i=0
while(i<11):
outfile = open('text.txt', 'a')
outfile.write('\n'+i)
outfile.close()
i=i+1
```
I tried putting
```
outfile.write('\n'+i)
outfile.write('\n',i)
outfile.write('\n'),i
outfile.write(i)
```
but not a single one of them works. Can you tell me what I am doing wrong, please?
|
2015/12/15
|
[
"https://Stackoverflow.com/questions/34295670",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5631254/"
] |
You're opening and closing your file at each iteration of the while-loop. Why overdo things? You can do the whole job while the file is open once.
Also, you try to write *'\n'* (which is a string) plus *i* (which is an integer). That's wrong; you need to convert your *i* into a string first.
Try this code:
```
with open('text.txt', 'w') as f:
for i in range(11):
f.write(str(i) + '\n')
```
Hope that helps.
|
Try:
```
while(i<11):
outfile = open('text.txt', 'a')
outfile.write('\n'+str(i))
outfile.close()
i=i+1
```
I am getting a text.txt with 0-10 on separate lines, with a blank line at the top.
|
45,860,272
|
My goal is to run a Python script that uses Anaconda libraries (such as Pandas) on `Azure WebJob` but can't seem to figure out how to load the libraries.
I start out just by testing a simple Azure blob to blob file copy which works when run locally but hit into an error `"ImportError: No module named 'azure'"` when ran in WebJob.
example code:
```py
from azure.storage.blob import BlockBlobService
blobAccountName = <name>
blobStorageKey = <key>
containerName = <containername>
blobService = BlockBlobService(account_name=blobAccountName,
account_key=blobStorageKey)
blobService.set_container_acl(containerName)
b = blobService.get_blob_to_bytes(containerName, 'file.csv')
blobService.create_blob_from_bytes(containerName, 'file.csv', b.content)
```
I can't even get the Azure SDK libraries to run, let alone Anaconda's.
How do I run a Python script that requires external libraries such as Anaconda (and even the Azure SDK)? How do I "pip install" this stuff for a WebJob?
|
2017/08/24
|
[
"https://Stackoverflow.com/questions/45860272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6329714/"
] |
It seems you already know how to deploy Azure WebJobs, so I offer the steps below to show how to load external libraries in Python scripts.
Step 1:
Use the **virtualenv** package to create an independent Python runtime environment on your system. Please install it first with the command `pip install virtualenv` if you don't have it.
If it installed successfully, you can see it in your **python/Scripts** folder.
[](https://i.stack.imgur.com/fOtfV.png)
Step 2: Run the command to create an independent Python runtime environment.
[](https://i.stack.imgur.com/AqAR6.png)
Step 3: Then go into the created directory's **Scripts** folder and activate it (**this step is important, don't miss it**).
[](https://i.stack.imgur.com/lVtrx.png)
Please **don't close this command window**, and use `pip install <your library name>` to install the external libraries in this same command window.
[](https://i.stack.imgur.com/IHAqU.png)
Step 4: Compress Sample.py into a zip folder together with the dependency packages (from the **Libs/site-packages** folder) that you rely on.
[](https://i.stack.imgur.com/ui8Ej.png)
Step 5:
Create a WebJob in the Web App service and upload the zip file; then you can execute your WebJob and check the log.
[](https://i.stack.imgur.com/eS13d.png)
You could also refer to the SO thread :[Options for running Python scripts in Azure](https://stackoverflow.com/questions/45716757/options-for-running-python-scripts-in-azure/45725327#45725327)
In addition, if you want to use the modules in Anaconda, please download them separately. There is no need to download the entire library.
Hope it helps you.
|
You can point your Azure WebJob to your main WebApp environment (and thus its real site packages). This allows you to use the newest fastest version of Python supported by the WebApp (right now mine is 364x64), much better than 3.4 or 2.7 in x86. Another huge benefit is then you don't have to maintain an additional set of packages that are statically held in a file somewhere (this gave me a lot of trouble with dynamic libraries with crazy dependencies such as psycopg2 and pandas).
HOW: In your WebJobs files, set up a `.cmd` file that runs your `run.py`; that `.cmd` file needs only one line:
```
D:\home\python364x64\python.exe run.py
```
That's it!
Azure WebJobs looks at .cmd files first, then run.py and others.
See this link for an official MS post on this method:
<https://blogs.msdn.microsoft.com/azureossds/2016/12/09/running-python-webjob-on-azure-app-services-using-non-default-python-version/>
|
59,103,401
|
I am currently implementing a MINLP optimization problem in Python GEKKO for determining the optimal operational strategy of a trigeneration energy system. As I consider the energy demand during all periods of different representative days as input data, basically all my decision variables, intermediates, etc. are 2D arrays.
I suspect that the declaration of the 2D intermediates is my problem. Right now I use list comprehensions to declare 2D intermediates, but it seems like Python cannot use these intermediates. Furthermore, the error *This steady-state IMODE only allows scalar values.* occurs.
Whenever I use the GEKKO m.Array function like this:
`e_GT = m.Array(m.Intermediate(E_GT[z][p]/E_max_GT) for z in range(Z) for p in range(P), (Z,P))`
it says that the GEKKO object m.Intermediate cannot be called.
I would be very thankful if anyone could give me a hint.
Here is the complete code:
```
"""
Created on Fri Nov 22 10:18:33 2019
@author: julia
"""
# __Get GEKKO & numpy___
from gekko import GEKKO
import numpy as np
# ___Initialize model___
m = GEKKO()
# ___Global options_____
m.options.SOLVER = 1 # APOPT is MINLP Solver
# ______Constants_______
i = m.Const(value=0.05)
n = m.Const(value=10)
C_GT = m.Const(value=100000)
C_RB = m.Const(value=10000)
C_HB = m.Const(value=10000)
C_RS = m.Const(value=10000)
C_RE = m.Const(value=10000)
Z = 12
P = 24
E_min_GT = m.Const(value=1000)
E_max_GT = m.Const(value=50000)
F_max_GT = m.Const(value=100000)
Q_max_GT = m.Const(value=100000)
a = m.Const(value=1)
b = m.Const(value=1)
c = m.Const(value=1)
d = m.Const(value=1)
eta_RB = m.Const(value=0.01)
eta_HB = m.Const(value=0.01)
eta_RS = m.Const(value=0.01)
eta_RE = m.Const(value=0.01)
alpha = m.Const(value=0.01)
# ______Parameters______
T_z = m.Param([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
C_p_Gas = m.Param(np.ones([P]))
C_p_Elec = m.Param(np.ones([P]))
E_d = np.ones([Z,P])
H_d = np.ones([Z,P])
K_d = np.ones([Z,P])
# _______Variables______
E_purch = m.Array(m.Var, (Z,P), lb=0)
E_GT = m.Array(m.Var, (Z,P), lb=0)
F_GT = m.Array(m.Var, (Z,P), lb=0)
Q_GT = m.Array(m.Var, (Z,P), lb=0)
Q_GT_RB = m.Array(m.Var, (Z,P), lb=0)
Q_disp = m.Array(m.Var, (Z,P), lb=0)
Q_HB = m.Array(m.Var, (Z,P), lb=0)
K_RS = m.Array(m.Var, (Z,P), lb=0)
K_RE = m.Array(m.Var, (Z,P), lb=0)
delta_GT = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RB = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_HB = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RS = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RE = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
# ____Intermediates_____
R = m.Intermediate((i*(1+i)**n)/((1+i)**n-1))
e_min_GT = m.Intermediate(E_min_GT/E_max_GT)
e_GT = [m.Intermediate(E_GT[z][p]/E_max_GT) for z in range(Z) for p in range(P)]
f_GT = [m.Intermediate(F_GT[z][p]/F_max_GT) for z in range(Z) for p in range(P)]
q_GT = [m.Intermediate(Q_GT[z][p]/Q_max_GT) for z in range(Z) for p in range(P)]
Q_RB = [m.Intermediate(eta_RB*Q_GT_RB[z][p]*delta_RB[z][p]) for z in range(Z) for p in range(P)]
F_HB = [m.Intermediate(eta_HB*Q_HB[z][p]*delta_HB[z][p]) for z in range(Z) for p in range(P)]
Q_RS = [m.Intermediate(eta_RS*K_RS[z][p]*delta_RS[z][p]) for z in range(Z) for p in range(P)]
E_RE = [m.Intermediate(eta_RE*K_RE[z][p]*delta_RE[z][p]) for z in range(Z) for p in range(P)]
F_Gas = [m.Intermediate(F_GT[z][p] + eta_HB*Q_HB[z][p]*delta_HB[z][p]) for z in range(Z) for p in range(P)]
Cc = m.Intermediate(R*(C_GT + C_RB + C_HB + C_RS + C_RE))
Cr_z = m.Intermediate((sum(C_p_Gas[p]*F_Gas[z][p] + C_p_Elec[p]*E_purch[z][p]) for p in range(P)) for z in range(Z))
Cr = m.Intermediate(sum(Cr_z[z]*T_z[z]) for z in range(Z))
# ______Equations_______
m.Equation(e_min_GT[z][p]*delta_GT[z][p] <= e_GT[z][p] for z in range(Z) for p in range(P))
m.Equation(e_GT[z][p] <= 1*delta_GT[z][p] for z in range(Z) for p in range(P))
m.Equation(f_GT [z][p]== a*delta_GT[z][p] + b*e_GT[z][p] for z in range(Z) for p in range(P))
m.Equation(q_GT [z][p]== c*delta_GT[z][p] + d*e_GT[z][p] for z in range(Z) for p in range(P))
m.Equation(E_purch[z][p] + E_GT[z][p] == E_RE[z][p] + E_d[z][p] for z in range(Z) for p in range(P))
m.Equation(Q_GT[z][p] == Q_disp[z][p] + Q_GT_RB[z][p] for z in range(Z) for p in range(P))
m.Equation(Q_RB[z][p] + Q_HB[z][p] == Q_RS[z][p] + H_d[z][p] for z in range(Z) for p in range(P))
m.Equation(K_RS[z][p] + K_RE[z][p] == K_d[z][p] for z in range(Z) for p in range(P))
m.Equation(Q_disp[z][p] <= alpha*Q_GT[z][p] for z in range(Z) for p in range(P))
# ______Objective_______
m.Obj(Cc + Cr)
#_____Solve Problem_____
m.solve()
```
|
2019/11/29
|
[
"https://Stackoverflow.com/questions/59103401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12456060/"
] |
Extra square brackets are needed for 2D list definition. This gives a 2D list with 3 rows and 4 columns.
```py
[[p+10*z for p in range(3)] for z in range(4)]
# Result: [[0, 1, 2], [10, 11, 12], [20, 21, 22], [30, 31, 32]]
```
If you leave out the inner brackets, it is a 1D list of length 12.
```py
[p+10*z for p in range(3) for z in range(4)]
# Result: [0, 10, 20, 30, 1, 11, 21, 31, 2, 12, 22, 32]
```
It also works when each element of the list is a Gekko `Intermediate`.
```py
[[m.Intermediate(p+10*z) for p in range(3)] for z in range(4)]
```
|
To diagnose the problem, I added a call to open the run folder before the solve command.
```py
#_____Solve Problem_____
m.open_folder()
m.solve()
```
I opened the `gk_model0.apm` model file with a text editor to look at a text version of the model. At the bottom it shows that there are problems with the last two intermediates and 9 equations.
```
i2327=<generator object <genexpr> at 0x0E51BC30>
i2328=<generator object <genexpr> at 0x0E51BC30>
End Intermediates
Equations
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
minimize (i2326+i2328)
End Equations
End Model
```
This happens when the `.value` property of a `Param` or `Var` is a list or numpy array instead of a scalar value.
```py
# ______Parameters______
#T_z = m.Param([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
T_z = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
```
There were also some other problems such as
* `e_min_GT` referenced as an element of a list instead of a scalar value in the first equation
* missing square brackets in the Intermediate list comprehensions to make the list 2D such as `[[2 for p in range(P)] for z in range(Z)]`
* dimension mismatch from `for z in range(Z)] for p in range(P)]` instead of `for p in range(P)] for z in range(Z)]`
* for better efficiency, use `m.sum` instead of `sum`
I made a few other changes as well. A differencing program should show them.
```py
"""
Created on Fri Nov 22 10:18:33 2019
@author: julia
"""
# __Get GEKKO & numpy___
from gekko import GEKKO
import numpy as np
# ___Initialize model___
m = GEKKO()
# ___Global options_____
m.options.SOLVER = 1 # APOPT is MINLP Solver
# ______Constants_______
i = m.Const(value=0.05)
n = m.Const(value=10)
C_GT = m.Const(value=100000)
C_RB = m.Const(value=10000)
C_HB = m.Const(value=10000)
C_RS = m.Const(value=10000)
C_RE = m.Const(value=10000)
Z = 12
P = 24
E_min_GT = m.Const(value=1000)
E_max_GT = m.Const(value=50000)
F_max_GT = m.Const(value=100000)
Q_max_GT = m.Const(value=100000)
a = m.Const(value=1)
b = m.Const(value=1)
c = m.Const(value=1)
d = m.Const(value=1)
eta_RB = m.Const(value=0.01)
eta_HB = m.Const(value=0.01)
eta_RS = m.Const(value=0.01)
eta_RE = m.Const(value=0.01)
alpha = m.Const(value=0.01)
# ______Parameters______
T_z = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
C_p_Gas = np.ones(P)
C_p_Elec = np.ones(P)
E_d = np.ones([Z,P])
H_d = np.ones([Z,P])
K_d = np.ones([Z,P])
# _______Variables______
E_purch = m.Array(m.Var, (Z,P), lb=0)
E_GT = m.Array(m.Var, (Z,P), lb=0)
F_GT = m.Array(m.Var, (Z,P), lb=0)
Q_GT = m.Array(m.Var, (Z,P), lb=0)
Q_GT_RB = m.Array(m.Var, (Z,P), lb=0)
Q_disp = m.Array(m.Var, (Z,P), lb=0)
Q_HB = m.Array(m.Var, (Z,P), lb=0)
K_RS = m.Array(m.Var, (Z,P), lb=0)
K_RE = m.Array(m.Var, (Z,P), lb=0)
delta_GT = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RB = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_HB = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RS = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RE = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
# ____Intermediates_____
R = m.Intermediate((i*(1+i)**n)/((1+i)**n-1))
e_min_GT = m.Intermediate(E_min_GT/E_max_GT)
e_GT = [[m.Intermediate(E_GT[z][p]/E_max_GT) for p in range(P)] for z in range(Z)]
f_GT = [[m.Intermediate(F_GT[z][p]/F_max_GT) for p in range(P)] for z in range(Z)]
q_GT = [[m.Intermediate(Q_GT[z][p]/Q_max_GT) for p in range(P)] for z in range(Z)]
Q_RB = [[m.Intermediate(eta_RB*Q_GT_RB[z][p]*delta_RB[z][p]) for p in range(P)] for z in range(Z)]
F_HB = [[m.Intermediate(eta_HB*Q_HB[z][p]*delta_HB[z][p]) for p in range(P)] for z in range(Z)]
Q_RS = [[m.Intermediate(eta_RS*K_RS[z][p]*delta_RS[z][p]) for p in range(P)] for z in range(Z)]
E_RE = [[m.Intermediate(eta_RE*K_RE[z][p]*delta_RE[z][p]) for p in range(P)] for z in range(Z)]
F_Gas = [[m.Intermediate(F_GT[z][p] + eta_HB*Q_HB[z][p]*delta_HB[z][p]) for p in range(P)] for z in range(Z)]
Cc = m.Intermediate(R*(C_GT + C_RB + C_HB + C_RS + C_RE))
Cr_z = [m.Intermediate(m.sum([C_p_Gas[p]*F_Gas[z][p] + C_p_Elec[p]*E_purch[z][p] for p in range(P)])) for z in range(Z)]
Cr = m.Intermediate(m.sum([Cr_z[z]*T_z[z] for z in range(Z)]))
# ______Equations_______
m.Equation([e_min_GT*delta_GT[z][p] <= e_GT[z][p] for z in range(Z) for p in range(P)])
m.Equation([e_GT[z][p] <= 1*delta_GT[z][p] for z in range(Z) for p in range(P)])
m.Equation([f_GT [z][p]== a*delta_GT[z][p] + b*e_GT[z][p] for z in range(Z) for p in range(P)])
m.Equation([q_GT [z][p]== c*delta_GT[z][p] + d*e_GT[z][p] for z in range(Z) for p in range(P)])
m.Equation([E_purch[z][p] + E_GT[z][p] == E_RE[z][p] + E_d[z][p] for z in range(Z) for p in range(P)])
m.Equation([Q_GT[z][p] == Q_disp[z][p] + Q_GT_RB[z][p] for z in range(Z) for p in range(P)])
m.Equation([Q_RB[z][p] + Q_HB[z][p] == Q_RS[z][p] + H_d[z][p] for z in range(Z) for p in range(P)])
m.Equation([K_RS[z][p] + K_RE[z][p] == K_d[z][p] for z in range(Z) for p in range(P)])
m.Equation([Q_disp[z][p] <= alpha*Q_GT[z][p] for z in range(Z) for p in range(P)])
# ______Objective_______
m.Obj(Cc + Cr)
#_____Solve Problem_____
#m.open_folder()
m.solve()
```
The problem solves with `1.6 sec` of solver time.
```
--------- APM Model Size ------------
Each time step contains
Objects : 13
Constants : 20
Variables : 5209
Intermediates: 2320
Connections : 313
Equations : 5213
Residuals : 2893
Number of state variables: 5209
Number of total equations: - 2905
Number of slack variables: - 864
---------------------------------------
Degrees of freedom : 1440
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: 1.53 NLPi: 3 Dpth: 0 Lvs: 0 Obj: 2.69E+04 Gap: 0.00E+00
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 1.59730000000854 sec
Objective : 26890.6404951639
Successful solution
---------------------------------------------------
```
|
42,329,346
|
I am trying to learn CloudFormation. I'm stuck with a scenario where I need a second EC2 instance started after the first EC2 instance is provisioned and good to go.
This is what I have in the UserData of instance one:
```
"#!/bin/bash\n",
"#############################################################################################\n",
"sudo add-apt-repository ppa:fkrull/deadsnakes\n",
"sudo apt-get update\n",
"curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -\n",
"sudo apt-get install build-essential libssl-dev python2.7 python-setuptools -y\n",
"#############################################################################################\n",
"Install Easy Install",
"#############################################################################################\n",
"easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"#############################################################################################\n",
"#############################################################################################\n",
"GIT LFS Repo",
"#############################################################################################\n",
"curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash\n",
"#############################################################################################\n",
"cfn-init",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --configsets InstallAndRun ",
" --region ",
{
"Ref": "AWS::Region"
},
"\n",
"#############################################################################################\n",
"# Signal the status from cfn-init\n",
"cfn-signal -e 0 ",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --region ",
{
"Ref": "AWS::Region"
},
" ",
{
"Ref": "WaitHandleUIConfig"
},
"\n"
```
I have a WaitCondition, which I think is what's used to do this:
```
"WaitHandleUIConfig" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle",
"Properties" : {}
},
"WaitConditionUIConfig" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "UI",
"Properties" : {
"Handle" : { "Ref" : "WaitHandleUIConfig" },
"Timeout" : "500"
}
}
```
In the second instance I use DependsOn to wait for the first instance.
```
"Service": {
"Type": "AWS::EC2::Instance",
"Properties": {
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "1ba546d0-2bad-4b68-af47-6e35159290ca"
},
},
"DependsOn":"WaitConditionUIConfig"
}
```
This isn't working. I keep getting the error:
**WaitCondition timed out. Received 0 conditions when expecting 1**
Any help would be appreciated.
Thanks
|
2017/02/19
|
[
"https://Stackoverflow.com/questions/42329346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/907937/"
] |
The first thing I notice is that you have two `GET`s, instead of a `GET` and then a `POST` (as you noticed yourself).
But let's assume the HTML form is correct, since it works in development. That makes me suspect there is some Javascript that is throwing things off. One common problem when moving things from development to production is the Asset Pipeline. So I would open your browser's console, refresh the page, and see if you get any Javascript errors, especially about files failing to load.
You should also share how you are including Javascript files. Can you share that part of your layout (or whatever)? I would make sure that you use `asset_path` and friends, and not just `/assets/application.js`, because the latter will break when you go to production. Even better, use `javascript_include_tag`.
|
I am no expert, but since your code works locally, try the following to get an idea of where the problem might be:
a) Run the production environment locally and see if the problem persists there.
b) Try a test run without any JavaScript enabled, or at least disable the custom scripts on the production server.
c) Try redirecting after Devise authentication to an existing/static page.
I am sure you will get some hints or ideas.
|
42,329,346
|
I am trying to learn CloudFormation. I'm stuck with a scenario where I need a second EC2 instance started after the first EC2 instance is provisioned and good to go.
This is what I have in the UserData of instance one:
```
"#!/bin/bash\n",
"#############################################################################################\n",
"sudo add-apt-repository ppa:fkrull/deadsnakes\n",
"sudo apt-get update\n",
"curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -\n",
"sudo apt-get install build-essential libssl-dev python2.7 python-setuptools -y\n",
"#############################################################################################\n",
"Install Easy Install",
"#############################################################################################\n",
"easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"#############################################################################################\n",
"#############################################################################################\n",
"GIT LFS Repo",
"#############################################################################################\n",
"curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash\n",
"#############################################################################################\n",
"cfn-init",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --configsets InstallAndRun ",
" --region ",
{
"Ref": "AWS::Region"
},
"\n",
"#############################################################################################\n",
"# Signal the status from cfn-init\n",
"cfn-signal -e 0 ",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --region ",
{
"Ref": "AWS::Region"
},
" ",
{
"Ref": "WaitHandleUIConfig"
},
"\n"
```
I have a WaitCondition, which I think is what's used to do this:
```
"WaitHandleUIConfig" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle",
"Properties" : {}
},
"WaitConditionUIConfig" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "UI",
"Properties" : {
"Handle" : { "Ref" : "WaitHandleUIConfig" },
"Timeout" : "500"
}
}
```
In the second instance I use DependsOn to wait for the first instance.
```
"Service": {
"Type": "AWS::EC2::Instance",
"Properties": {
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "1ba546d0-2bad-4b68-af47-6e35159290ca"
},
},
"DependsOn":"WaitConditionUIConfig"
}
```
This isn't working. I keep getting the error:
**WaitCondition timed out. Received 0 conditions when expecting 1**
Any help would be appreciated.
Thanks
|
2017/02/19
|
[
"https://Stackoverflow.com/questions/42329346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/907937/"
] |
This sounds like an SSL issue to me.
Your form is definitely POSTing to the route. You can confirm this by adding `data-remote="true"` to your form HTML in the browser, then watching the console as you make the request:
```
XHR finished loading: POST "https://win-marketing.sciencesupercrew.com/en/users/login"
```
When you POST, however, you are immediately redirected. You can see this if you curl the route:
```
$ curl -i -X POST https://win-marketing.sciencesupercrew.com/en/users/login
HTTP/1.1 301 Moved Permanently
Server: nginx/1.10.2
Date: Thu, 23 Feb 2017 14:58:04 GMT
Content-Type: text/html
Content-Length: 185
Location: https://win-marketing.sciencesupercrew.com/en/users/login/
Connection: keep-alive
```
Here's where SSL comes in. [Rails returns a 301 (moved permanently) error](https://github.com/rails/rails/blob/c1f990cfb6d8bbb3b56c0a4ba23dcfe2037e2805/actionpack/lib/action_controller/metal/force_ssl.rb#L82) when a request is not SSL but it should be:
```
def force_ssl_redirect(host_or_options = nil)
unless request.ssl?
options = {
:protocol => 'https://',
:host => request.host,
:path => request.fullpath,
:status => :moved_permanently
}
```
I'm not sure how you've set up SSL on your server. Are you terminating SSL in a load balancer, for example? Whatever you're doing, it seems like SSL is the culprit. To confirm *temporarily* try:
```
# config/production.rb
config.force_ssl = false
```
and see if that solves it.
|
The first thing I notice is that you have two `GET`s, instead of a `GET` and then a `POST` (as you noticed yourself).
But let's assume the HTML form is correct, since it works in development. That makes me suspect there is some Javascript that is throwing things off. One common problem when moving things from development to production is the Asset Pipeline. So I would open your browser's console, refresh the page, and see if you get any Javascript errors, especially about files failing to load.
You should also share how you are including Javascript files. Can you share that part of your layout (or whatever)? I would make sure that you use `asset_path` and friends, and not just `/assets/application.js`, because the latter will break when you go to production. Even better, use `javascript_include_tag`.
|
42,329,346
|
I am trying to learn CloudFormation. I'm stuck with a scenario where I need a second EC2 instance started after the first EC2 instance is provisioned and good to go.
This is what I have in the UserData of instance one:
```
"#!/bin/bash\n",
"#############################################################################################\n",
"sudo add-apt-repository ppa:fkrull/deadsnakes\n",
"sudo apt-get update\n",
"curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -\n",
"sudo apt-get install build-essential libssl-dev python2.7 python-setuptools -y\n",
"#############################################################################################\n",
"Install Easy Install",
"#############################################################################################\n",
"easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"#############################################################################################\n",
"#############################################################################################\n",
"GIT LFS Repo",
"#############################################################################################\n",
"curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash\n",
"#############################################################################################\n",
"cfn-init",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --configsets InstallAndRun ",
" --region ",
{
"Ref": "AWS::Region"
},
"\n",
"#############################################################################################\n",
"# Signal the status from cfn-init\n",
"cfn-signal -e 0 ",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --region ",
{
"Ref": "AWS::Region"
},
" ",
{
"Ref": "WaitHandleUIConfig"
},
"\n"
```
I have a WaitCondition , which i think is whats used to do this
```
"WaitHandleUIConfig" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle",
"Properties" : {}
},
"WaitConditionUIConfig" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "UI",
"Properties" : {
"Handle" : { "Ref" : "WaitHandleUIConfig" },
"Timeout" : "500"
}
}
```
In the second instance I use DependsOn to wait for the first instance.
```
"Service": {
"Type": "AWS::EC2::Instance",
"Properties": {
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "1ba546d0-2bad-4b68-af47-6e35159290ca"
},
},
"DependsOn":"WaitConditionUIConfig"
}
```
This isn't working. I keep getting the error:
**WaitCondition timed out. Received 0 conditions when expecting 1**
Any help would be appreciated.
Thanks
|
2017/02/19
|
[
"https://Stackoverflow.com/questions/42329346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/907937/"
] |
Glad you figured it out! Posting my last comment as an answer below.
Thanks for the live example. So, I click on the submit button and I can see the browser sending a POST request. The server responds successfully, but with a redirect.
`POST https://win-marketing.sciencesupercrew.com/en/users/login` -> `301 Moved Permanently with Location: https://win-marketing.sciencesupercrew.com/en/users/login/`. The form data are sent correctly as `user[email]` and `user[password]`.
If you say that this request does not reach Rails, then you need to check your nginx configuration.
**UPDATED SOLUTION**
The issue was my nginx configuration. The one that works is the following:
```
server {
listen 80;
server_name win-marketing.sciencesupercrew.com www.win-marketing.sciencesupercrew.com;
return 301 https://www.win-marketing.sciencesupercrew.com$request_uri;
}
server {
listen 443 ssl;
server_name win-marketing.sciencesupercrew.com;
passenger_enabled on;
rails_env production;
root /home/demo/windhagermediahub/public;
ssl on;
ssl_certificate /home/demo/ssl/ssl-bundle.crt;
ssl_certificate_key /home/demo/ssl/ssc.key;
ssl_session_timeout 5m;
ssl_protocols SSLv2 SSLv3 TLSv1;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
```
After adding these lines it started working:
```
ssl_session_timeout 5m;
ssl_protocols SSLv2 SSLv3 TLSv1;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
```
|
The first thing I notice is that you have two `GET`s, instead of a `GET` and then a `POST` (as you noticed yourself).
But let's assume the HTML form is correct, since it works in development. That makes me suspect there is some Javascript that is throwing things off. One common problem when moving things from development to production is the Asset Pipeline. So I would open your browser's console, refresh the page, and see if you get any Javascript errors, especially about files failing to load.
You should also share how you are including Javascript files. Can you share that part of your layout (or whatever)? I would make sure that you use `asset_path` and friends, and not just `/assets/application.js`, because the latter will break when you go to production. Even better, use `javascript_include_tag`.
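For reference, a minimal layout sketch using those helpers (assuming the default `application` bundle name — adjust to your actual manifest):

```erb
<%# app/views/layouts/application.html.erb %>
<%# These helpers resolve to the fingerprinted asset paths in production, %>
<%# whereas a hard-coded /assets/application.js will 404 once assets are precompiled. %>
<%= stylesheet_link_tag 'application', media: 'all' %>
<%= javascript_include_tag 'application' %>
```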
|
42,329,346
|
I am trying to learn CloudFormation and I'm stuck with a scenario where I need a second EC2 instance started after the first EC2 instance is provisioned and good to go.
This is what I have in the UserData of instance one:
```
"#!/bin/bash\n",
"#############################################################################################\n",
"sudo add-apt-repository ppa:fkrull/deadsnakes\n",
"sudo apt-get update\n",
"curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -\n",
"sudo apt-get install build-essential libssl-dev python2.7 python-setuptools -y\n",
"#############################################################################################\n",
"# Install Easy Install\n",
"#############################################################################################\n",
"easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"#############################################################################################\n",
"#############################################################################################\n",
"# GIT LFS Repo\n",
"#############################################################################################\n",
"curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash\n",
"#############################################################################################\n",
"cfn-init",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --configsets InstallAndRun ",
" --region ",
{
"Ref": "AWS::Region"
},
"\n",
"#############################################################################################\n",
"# Signal the status from cfn-init\n",
"cfn-signal -e 0 ",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --region ",
{
"Ref": "AWS::Region"
},
" ",
{
"Ref": "WaitHandleUIConfig"
},
"\n"
```
I have a WaitCondition, which I think is what's used to do this:
```
"WaitHandleUIConfig" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle",
"Properties" : {}
},
"WaitConditionUIConfig" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "UI",
"Properties" : {
"Handle" : { "Ref" : "WaitHandleUIConfig" },
"Timeout" : "500"
}
}
```
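As a side note, a `WaitConditionHandle` resolves to a pre-signed S3 URL, so the success signal can also be sent with plain `curl` from the first instance's UserData. A hedged sketch — `$WAIT_HANDLE_URL` stands in for the value of `{"Ref": "WaitHandleUIConfig"}`, and the `Reason`/`UniqueId` values are placeholders:

```sh
# PUT a JSON status document to the wait condition handle URL.
# The empty Content-Type header is required by the pre-signed S3 URL.
curl -X PUT -H 'Content-Type:' \
  --data-binary '{"Status": "SUCCESS", "Reason": "UI configuration complete", "UniqueId": "ui-instance-1", "Data": "done"}' \
  "$WAIT_HANDLE_URL"
```

If this signal never fires (for example because `cfn-init` exits non-zero earlier in the script), the `WaitCondition` times out with exactly the "Received 0 conditions when expecting 1" error.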
In the second instance I use `DependsOn` to wait for the first instance:
```
"Service": {
"Type": "AWS::EC2::Instance",
"Properties": {
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "1ba546d0-2bad-4b68-af47-6e35159290ca"
},
},
"DependsOn":"WaitConditionUIConfig"
}
```
This isn't working. I keep getting the error:
**WaitCondition timed out. Received 0 conditions when expecting 1**
Any help would be appreciated.
Thanks
|
2017/02/19
|
[
"https://Stackoverflow.com/questions/42329346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/907937/"
] |
This sounds like an SSL issue to me.
Your form is definitely POSTing to the route. You can confirm this by adding `data-remote="true"` to your form HTML in the browser, then watching the console as you make the request:
```
XHR finished loading: POST "https://win-marketing.sciencesupercrew.com/en/users/login"
```
When you POST, however, you are immediately redirected. You can see this if you curl the route:
```
$ curl -i -X POST https://win-marketing.sciencesupercrew.com/en/users/login
HTTP/1.1 301 Moved Permanently
Server: nginx/1.10.2
Date: Thu, 23 Feb 2017 14:58:04 GMT
Content-Type: text/html
Content-Length: 185
Location: https://win-marketing.sciencesupercrew.com/en/users/login/
Connection: keep-alive
```
Here's where SSL comes in. [Rails returns a 301 (moved permanently) error](https://github.com/rails/rails/blob/c1f990cfb6d8bbb3b56c0a4ba23dcfe2037e2805/actionpack/lib/action_controller/metal/force_ssl.rb#L82) when a request is not SSL but it should be:
```
def force_ssl_redirect(host_or_options = nil)
unless request.ssl?
options = {
:protocol => 'https://',
:host => request.host,
:path => request.fullpath,
:status => :moved_permanently
}
```
I'm not sure how you've set up SSL on your server. Are you terminating SSL in a load balancer, for example? Whatever you're doing, it seems like SSL is the culprit. To confirm, *temporarily* try:
```
# config/production.rb
config.force_ssl = false
```
and see if that solves it.
|
I am no expert, but since your code works locally, try the following to get an idea of where the problem might be:
a) Run the production environment locally and see if the problem persists there.
b) Try a test run without any JavaScript enabled, or at least disable the custom scripts on the production server.
c) Try redirecting to an existing/static page after Devise authentication.
I am sure you will get some hints or ideas.
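For (a), a hedged sketch of booting the app locally in production mode (commands assume a standard Rails setup; a `SECRET_KEY_BASE` and production database config may also be needed):

```sh
# Precompile assets so the production asset pipeline is exercised
RAILS_ENV=production bundle exec rake assets:precompile
# Boot the server in production mode and retry the login form
RAILS_ENV=production bundle exec rails server
```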
|