title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
|---|---|---|---|---|---|---|---|---|---|
How can one suppress evaluation while still being able to index?
| 39,375,880
|
<p>I have two small but very complicated matrices that I want to multiply. I've done this using sympy:</p>
<pre><code>C=sympy.MatMul(A,B,hold=True)
</code></pre>
<p>This gives me a MatMul object which saves a huge amount of time, and I'm not interested in a symbolic expression anyway, rather I want to evaluate at specific points later on.</p>
<p>If this was the end of my calculation it would be fine, but I then need to use elements of C in order to define a new equation, however, I cannot index it. I get the following,</p>
<pre><code>In [286]: C[0]
Traceback (most recent call last):
File "<ipython-input-286-829e2440bf19>", line 1, in <module>
C[0]
File "C:\Anaconda3\lib\site-packages\sympy\matrices\expressions\matexpr.py", line 242, in __getitem__
raise IndexError("Single index only supported for "
IndexError: Single index only supported for non-symbolic matrix shapes.
</code></pre>
<p>Is it possible to somehow index such an object? For example, in maple I can use a semi-colon which suppresses the output, while retaining information about the structure of the resulting object so I can index it.</p>
| 2
|
2016-09-07T17:20:59Z
| 39,376,135
|
<p>You can index elements of <code>C</code>, e.g. <code>C[0,0]</code>. So this will give you the first row as a list:</p>
<pre><code>row0 = [C[0, k] for k in range(C.shape[1])]
</code></pre>
<p>Here's an example. <code>x</code> and <code>y</code> are sympy symbols.</p>
<pre><code>In [40]: A
Out[40]:
Matrix([
[2*x + 1, x + 3],
[ -2, 3]])
In [41]: B
Out[41]:
Matrix([
[-3, 3],
[ y, 2*y]])
In [42]: C = sympy.MatMul(A, B, hold=True)
In [43]: C[0,0]
Out[43]: -6*x + y*(x + 3) - 3
In [44]: [C[0,k] for k in range(C.shape[1])]
Out[44]: [-6*x + y*(x + 3) - 3, 6*x + 2*y*(x + 3) + 3]
</code></pre>
| 3
|
2016-09-07T17:37:11Z
|
[
"python",
"numpy",
"math",
"indexing",
"sympy"
] |
How can one suppress evaluation while still being able to index?
| 39,375,880
|
<p>I have two small but very complicated matrices that I want to multiply. I've done this using sympy:</p>
<pre><code>C=sympy.MatMul(A,B,hold=True)
</code></pre>
<p>This gives me a MatMul object which saves a huge amount of time, and I'm not interested in a symbolic expression anyway, rather I want to evaluate at specific points later on.</p>
<p>If this was the end of my calculation it would be fine, but I then need to use elements of C in order to define a new equation, however, I cannot index it. I get the following,</p>
<pre><code>In [286]: C[0]
Traceback (most recent call last):
File "<ipython-input-286-829e2440bf19>", line 1, in <module>
C[0]
File "C:\Anaconda3\lib\site-packages\sympy\matrices\expressions\matexpr.py", line 242, in __getitem__
raise IndexError("Single index only supported for "
IndexError: Single index only supported for non-symbolic matrix shapes.
</code></pre>
<p>Is it possible to somehow index such an object? For example, in maple I can use a semi-colon which suppresses the output, while retaining information about the structure of the resulting object so I can index it.</p>
| 2
|
2016-09-07T17:20:59Z
| 39,419,050
|
<p>To explain the error, normally, a single index indexes a matrix row-by-row:</p>
<pre><code>In [8]: M = Matrix([[1, 2], [3, 4]])
In [9]: M
Out[9]:
⎡1  2⎤
⎢    ⎥
⎣3  4⎦
In [10]: M[0]
Out[10]: 1
In [11]: M[1]
Out[11]: 2
In [12]: M[2]
Out[12]: 3
In [14]: M[3]
Out[14]: 4
</code></pre>
<p>For a symbolic matrix symbol, this is computed as the row,column index. For instance, </p>
<pre><code>In [16]: MatrixSymbol('A', 3, 4)[0]
Out[16]: A₀₀
In [17]: MatrixSymbol('A', 3, 4)[10]
Out[17]: A₂₂
</code></pre>
<p>The element <code>A[10]</code> is automatically converted to <code>A[2, 2]</code>, because A has 4 columns, so 10 is the third column of the third row (remember that everything is 0-indexed). </p>
<p>However, if your shape is symbolic, particularly the number of columns, say <code>A</code> is m x n, there is no way to know which row,column <code>A[i]</code> refers to (<code>i % n</code> is symbolic). SymPy could probably be changed to make <code>A[i]</code> return <code>A[i//n,i%n]</code> symbolically, but you generally want to reference matrix elements explicitly by row,column anyway, so if you really want that, you can just do it manually. Also, this formula has no bounds checking (if <code>i >= n*m</code> the element is out of bounds).</p>
<p>Strictly speaking, <code>A[0]</code> could probably work, since that will always be <code>A[0, 0]</code> regardless of the shape of A. However, this would be a single special-case, and SymPy has chosen to disallow it, since it's possible to just write the explicit <code>A[0, 0]</code> anyway. </p>
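The manual flat-index conversion described above is just `divmod`. Here is a minimal sketch; `flat_to_rc` is an illustrative helper, not part of SymPy's API:

```python
# Sketch of the manual flat-index -> (row, column) conversion described
# above; flat_to_rc is an illustrative name, not a SymPy function.
def flat_to_rc(i, n_cols):
    """Row-major, 0-indexed: A[i] corresponds to A[i // n_cols, i % n_cols]."""
    return divmod(i, n_cols)

# For a 3 x 4 matrix, flat index 10 lands in the third row, third column:
row, col = flat_to_rc(10, 4)  # (2, 2)
```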
| 1
|
2016-09-09T20:10:17Z
|
[
"python",
"numpy",
"math",
"indexing",
"sympy"
] |
Inserting an element before each element of a list
| 39,375,906
|
<p>I'm looking to insert a constant element before each of the existing element of a list, i.e. go from:</p>
<pre><code>['foo', 'bar', 'baz']
</code></pre>
<p>to:</p>
<pre><code>['a', 'foo', 'a', 'bar', 'a', 'baz']
</code></pre>
<p>I've tried using list comprehensions but the best thing I can achieve is an array of arrays using this statement:</p>
<pre><code>[['a', elt] for elt in stuff]
</code></pre>
<p>Which results in this:</p>
<pre><code>[['a', 'foo'], ['a', 'bar'], ['a', 'baz']]
</code></pre>
<p>So not exactly what I want. Can it be achieved using list comprehension? Just in case it matters, I'm using Python 3.5.</p>
| 4
|
2016-09-07T17:22:29Z
| 39,375,937
|
<p>Add another loop:</p>
<pre><code>[v for elt in stuff for v in ('a', elt)]
</code></pre>
<p>or use <a href="https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable"><code>itertools.chain.from_iterable()</code></a> together with <a href="https://docs.python.org/3/library/functions.html#zip"><code>zip()</code></a> and <a href="https://docs.python.org/3/library/itertools.html#itertools.repeat"><code>itertools.repeat()</code></a> if you need an iterable version rather than a full list:</p>
<pre><code>from itertools import chain, repeat

try:
    # Python 2: swap in the lazy zip (itertools.izip)
    from future_builtins import zip
except ImportError:
    # No import needed in Python 3; built-in zip is already lazy
    pass

it = chain.from_iterable(zip(repeat('a'), stuff))
</code></pre>
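A quick sanity check (Python 3) that the two approaches produce the same result; `stuff` here is the example list from the question:

```python
from itertools import chain, repeat

stuff = ['foo', 'bar', 'baz']

# Eager version: one comprehension with two loops.
full = [v for elt in stuff for v in ('a', elt)]

# Lazy version: pair each element with 'a', then flatten.
lazy = chain.from_iterable(zip(repeat('a'), stuff))

assert list(lazy) == full == ['a', 'foo', 'a', 'bar', 'a', 'baz']
```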
| 8
|
2016-09-07T17:24:13Z
|
[
"python",
"list",
"python-3.x",
"list-comprehension"
] |
Inserting an element before each element of a list
| 39,375,906
|
<p>I'm looking to insert a constant element before each of the existing element of a list, i.e. go from:</p>
<pre><code>['foo', 'bar', 'baz']
</code></pre>
<p>to:</p>
<pre><code>['a', 'foo', 'a', 'bar', 'a', 'baz']
</code></pre>
<p>I've tried using list comprehensions but the best thing I can achieve is an array of arrays using this statement:</p>
<pre><code>[['a', elt] for elt in stuff]
</code></pre>
<p>Which results in this:</p>
<pre><code>[['a', 'foo'], ['a', 'bar'], ['a', 'baz']]
</code></pre>
<p>So not exactly what I want. Can it be achieved using list comprehension? Just in case it matters, I'm using Python 3.5.</p>
| 4
|
2016-09-07T17:22:29Z
| 39,376,091
|
<p>A simple generator function works nicely here too:</p>
<pre><code>def add_between(iterable, const):
    # TODO: think of a better name ... :-)
    for item in iterable:
        yield const
        yield item

list(add_between(['foo', 'bar', 'baz'], 'a'))
</code></pre>
<p>This lets you avoid a nested list comprehension and is quite straightforward to read and understand, at the cost of being <em>slightly</em> more verbose.</p>
| 3
|
2016-09-07T17:34:01Z
|
[
"python",
"list",
"python-3.x",
"list-comprehension"
] |
Inserting an element before each element of a list
| 39,375,906
|
<p>I'm looking to insert a constant element before each of the existing element of a list, i.e. go from:</p>
<pre><code>['foo', 'bar', 'baz']
</code></pre>
<p>to:</p>
<pre><code>['a', 'foo', 'a', 'bar', 'a', 'baz']
</code></pre>
<p>I've tried using list comprehensions but the best thing I can achieve is an array of arrays using this statement:</p>
<pre><code>[['a', elt] for elt in stuff]
</code></pre>
<p>Which results in this:</p>
<pre><code>[['a', 'foo'], ['a', 'bar'], ['a', 'baz']]
</code></pre>
<p>So not exactly what I want. Can it be achieved using list comprehension? Just in case it matters, I'm using Python 3.5.</p>
| 4
|
2016-09-07T17:22:29Z
| 39,376,122
|
<p>You already have </p>
<pre><code> l = [['a', 'foo'], ['a', 'bar], ['a', 'baz']]
</code></pre>
<p>You can flatten it using </p>
<pre><code>[item for sublist in l for item in sublist]
</code></pre>
<p>This works for sublists of arbitrary length; note that it flattens exactly one level of nesting.</p>
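Putting the two steps together on the question's data:

```python
# Start from the list-of-lists the question already produced...
l = [['a', 'foo'], ['a', 'bar'], ['a', 'baz']]

# ...and flatten one level of nesting with a double comprehension.
flat = [item for sublist in l for item in sublist]
# ['a', 'foo', 'a', 'bar', 'a', 'baz']
```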
| 1
|
2016-09-07T17:36:06Z
|
[
"python",
"list",
"python-3.x",
"list-comprehension"
] |
How to pass dictionary to functions?
| 39,375,912
|
<p>I want to pass dictionary to user defined functions and I need to do some calculation based on the dictionary values. It is not working for me with functions but works fine without using functions. I am not sure, what is wrong with code. Any help please? No error message.</p>
<p>Input:</p>
<blockquote>
<p>"13-07-2016 12:55:46",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 12:57:50",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:00:43",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:01:45",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:02:57",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:04:59",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:06:51",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:07:56",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com"</p>
</blockquote>
<p>Code:</p>
<pre><code>file_name = sys.argv[1]
fo = open(file_name, "rb")
def setdict():
    dico,i={},0
    line = fo.readline()
    for line in fo:
        date, user, proxy_ip, client_ip, access_method, con, sites = line.split(",")
        sites = sites.rstrip('\n')
        dico[i]= date, user, proxy_ip, client_ip, access_method, con, sites
    return dico

def display(dico):
    for k,v in dico.items():
        print k,v
</code></pre>
| 0
|
2016-09-07T17:22:55Z
| 39,376,027
|
<p>When you write a function in Python using the <code>def</code> keyword, the function is not automatically executed. You are never calling your <code>setdict</code> or <code>display</code> functions, just defining them so they can be called later.</p>
<p>Add this line to the end of your script to actually call the functions you defined:</p>
<pre><code>display(setdict())
</code></pre>
<p>or more verbosely</p>
<pre><code>dico = setdict()
display(dico)
</code></pre>
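To see the principle in isolation (a toy sketch with illustrative names, not the question's parsing code): a `def` statement only binds a name, and nothing runs until that name is called.

```python
# Defining a function binds the name; the body does not execute yet.
def setdict_demo():
    return {0: ('row', 'data')}

def display_demo(dico):
    return [(k, v) for k, v in dico.items()]

# Without this call, neither function body above would ever run:
result = display_demo(setdict_demo())
```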
| 1
|
2016-09-07T17:29:51Z
|
[
"python",
"dictionary"
] |
How to pass dictionary to functions?
| 39,375,912
|
<p>I want to pass dictionary to user defined functions and I need to do some calculation based on the dictionary values. It is not working for me with functions but works fine without using functions. I am not sure, what is wrong with code. Any help please? No error message.</p>
<p>Input:</p>
<blockquote>
<p>"13-07-2016 12:55:46",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 12:57:50",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:00:43",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:01:45",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:02:57",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:04:59",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:06:51",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com" <br>
"13-07-2016 13:07:56",user,192.168.10.100,192.168.10.20,CONNECT,200,"www.abc.com"</p>
</blockquote>
<p>Code:</p>
<pre><code>file_name = sys.argv[1]
fo = open(file_name, "rb")
def setdict():
    dico,i={},0
    line = fo.readline()
    for line in fo:
        date, user, proxy_ip, client_ip, access_method, con, sites = line.split(",")
        sites = sites.rstrip('\n')
        dico[i]= date, user, proxy_ip, client_ip, access_method, con, sites
    return dico

def display(dico):
    for k,v in dico.items():
        print k,v
</code></pre>
| 0
|
2016-09-07T17:22:55Z
| 39,376,067
|
<p><strong>A:</strong> You should consider to call your functions at the end of the script:</p>
<pre><code>dico = setdict()
display(dico)
</code></pre>
<p>Without that, they are declared, but not used.</p>
<p><strong>B:</strong> You should also consider a better way to open your file:</p>
<pre><code>with open(file_name, "rb") as f:
    lines = f.readlines()

for line in lines:
    # Do stuff with your line
</code></pre>
<p>This is the best way to open a file in python and to read it line by line.</p>
<p><strong>C:</strong> You are using:</p>
<pre><code>line = fo.readline()
# ^ this value is never used afterwards; the first line's data is lost
for line in fo:
    # do stuff on line
</code></pre>
<p>I've added a comment to show that you lose the data from the first line.</p>
<p><strong>D:</strong> You are using a global variable (<code>fo</code> inside <code>setdict()</code>); a better way is to pass it as an argument:</p>
<pre><code>fo = open(file_name, "rb")
def setdict(fo):
    dico,i={},0
    line = fo.readline()
    ...

setdict(fo)
</code></pre>
<p>Finally, here is how you can rewrite your script :</p>
<pre><code>import sys

def setdict(filename):
    dico,i={},0
    with open(filename, 'r') as f:
        for line in f.readlines():
            date, user, proxy_ip, client_ip, access_method, con, sites = line.split(",")
            sites = sites.rstrip('\n')
            dico[i]= date, user, proxy_ip, client_ip, access_method, con, sites
    return dico

def display(dico):
    for k,v in dico.items():
        print k,v

file_name = sys.argv[1]
dico = setdict(file_name)
display(dico)
</code></pre>
| 2
|
2016-09-07T17:32:46Z
|
[
"python",
"dictionary"
] |
Modify string in bash to contain new line character?
| 39,375,977
|
<p>I am using a bash script to call google-api's upload_video.py (<a href="https://developers.google.com/youtube/v3/guides/uploading_a_video" rel="nofollow">https://developers.google.com/youtube/v3/guides/uploading_a_video</a> )</p>
<p>I have a mp4 called output.mp4 which I would like to upload.</p>
<p>The problem is I cannot get my array to work how I would like. </p>
<p>This new line character is "required" because my arguments to python script contain spaces. </p>
<p>Here is a simplified version of my bash script:</p>
<pre><code># Operator may change these
hold=100
location="Foo, Montana "
declare -a file_array=("unique_ID_0" "unique_ID_1")
upload_file=upload_file.txt
upload_movie=output.mp4
# Hit enter at end b/c \n not recognized
upload_title=$location' - '${file_array[0]}' - Hold '$hold' Sweeps
'
upload_description='The spectrum recording was made in at '$location'.
'
# Overwrite with 1st call > else append >>
echo "$upload_title" > $upload_file
echo "$upload_description" >> $upload_file
# Load each line of text file into array
IFS=$'\n'
cmd_google=$(<$upload_file)
unset IFS
nn=1
for i in "${cmd_google[@]}"
do
echo "$i"
# Delete last character: \n
#i=${i[-nn]%?}
#i=${i: : -nn}
#i=${i::${#i}-nn}
i=${i%?}
#i=${i#"\n"}
#i=${i%"\n"}
echo "$i"
done
python upload_video.py --file=$upload_movie --title="${cmd_google[0]}" --description="${cmd_google[1]}"
</code></pre>
<p>At first I attempted to remove the new line character, but it appears that the enter or \n is not working how I would like, each line is not separate. It writes the title and description as one line. </p>
<p>How do I modify my bash script to recognize a newline character?</p>
| 0
|
2016-09-07T17:26:56Z
| 39,376,935
|
<p>This is much simpler than you are making it.</p>
<pre><code># Operator may change these
hold=100
location="Foo, Montana"
declare -a file_array=("unique_ID_0" "unique_ID_1")
upload_file=upload_file.txt
upload_movie=output.mp4
upload_title="$location - ${file_array[0]} - Hold $hold Sweeps"
upload_description="The spectrum recording was made in at $location."
cat <<EOF > "$upload_file"
$upload_title
$upload_description
EOF
# ...
readarray -t cmd_google < "$upload_file"
python upload_video.py --file="$upload_movie" --title="${cmd_google[0]}" --description="${cmd_google[1]}"
</code></pre>
<p>I suspect the <code>readarray</code> command is all you are really looking for, since much of the above code is simply creating a file that I assume you are receiving already created.</p>
| 0
|
2016-09-07T18:34:27Z
|
[
"python",
"bash",
"google-api"
] |
Modify string in bash to contain new line character?
| 39,375,977
|
<p>I am using a bash script to call google-api's upload_video.py (<a href="https://developers.google.com/youtube/v3/guides/uploading_a_video" rel="nofollow">https://developers.google.com/youtube/v3/guides/uploading_a_video</a> )</p>
<p>I have a mp4 called output.mp4 which I would like to upload.</p>
<p>The problem is I cannot get my array to work how I would like. </p>
<p>This new line character is "required" because my arguments to python script contain spaces. </p>
<p>Here is a simplified version of my bash script:</p>
<pre><code># Operator may change these
hold=100
location="Foo, Montana "
declare -a file_array=("unique_ID_0" "unique_ID_1")
upload_file=upload_file.txt
upload_movie=output.mp4
# Hit enter at end b/c \n not recognized
upload_title=$location' - '${file_array[0]}' - Hold '$hold' Sweeps
'
upload_description='The spectrum recording was made in at '$location'.
'
# Overwrite with 1st call > else append >>
echo "$upload_title" > $upload_file
echo "$upload_description" >> $upload_file
# Load each line of text file into array
IFS=$'\n'
cmd_google=$(<$upload_file)
unset IFS
nn=1
for i in "${cmd_google[@]}"
do
echo "$i"
# Delete last character: \n
#i=${i[-nn]%?}
#i=${i: : -nn}
#i=${i::${#i}-nn}
i=${i%?}
#i=${i#"\n"}
#i=${i%"\n"}
echo "$i"
done
python upload_video.py --file=$upload_movie --title="${cmd_google[0]}" --description="${cmd_google[1]}"
</code></pre>
<p>At first I attempted to remove the new line character, but it appears that the enter or \n is not working how I would like, each line is not separate. It writes the title and description as one line. </p>
<p>How do I modify my bash script to recognize a newline character?</p>
| 0
|
2016-09-07T17:26:56Z
| 39,379,964
|
<p>I figured it out with help from chepner's answer. My question hid the fact that I wanted to write new line characters into the video's description.</p>
<p>Instead of adding a new line character in the bash script, it is much easier to have a text file which contains the correctly formatted script and read it in, then concatenate it with run-time specific variable.</p>
<p>In my case the correctly formatted text is called description.txt:</p>
<p><a href="http://i.stack.imgur.com/jNsux.png" rel="nofollow">Here is a snip of my description.txt which contains newline characters</a></p>
<p>Here is my final version of the script:</p>
<pre><code># Operator may change these
hold=100
location="Foo, Montana"
declare -a file_array=("unique_ID_0" "unique_ID_1")
upload_title="$location - ${file_array[0]} - Hold $hold Sweeps"
upload_description="The spectrum recording was made in at $location. "
# Read in script which contains newline
temp=$(<description.txt)
# Concatenate them
upload_description="$upload_description$temp"
upload_movie=output.mp4
python upload_video.py --file="$upload_movie" --title="$upload_title" --description="$upload_description"
</code></pre>
| 0
|
2016-09-07T22:31:29Z
|
[
"python",
"bash",
"google-api"
] |
Sorting words in a list of strings based on their relative frequencies, not regular sorting?
| 39,376,043
|
<p>Suppose I have a <code>pandas.Series</code> object:</p>
<pre><code>import pandas as pd

s = pd.Series(["hello there you would like to sort me",
               "sorted i would like to be", "the banana does not taste like the orange",
               "my friend said hello", "hello there amigo", "apple apple banana orange peach pear plum",
               "orange is my favorite color"])
</code></pre>
<p>I want to sort the words inside each row based on the frequency with which each word occurs in the <em>entire</em> <code>Series</code>.</p>
<p>I can create a dictionary of the word: frequency key-value pairs easily:</p>
<pre><code>from collections import Counter

def create_word_freq_dict(series):
    return Counter(word for row in series for word in row.lower().split())

word_counts = create_word_freq_dict(s)
</code></pre>
<p>Without procedurally going through each row in the <code>Series</code>, how can I sort the word in this object by their relative frequencies? That is to say, for example, that "hello" occurs more frequently than "friend," and so should be further to the left in the resultant "sorted" string.</p>
<p>This is what I have:</p>
<pre><code>for row in s:
    ordered_words = []
    words = row.split()
    if len(words) == 1:
        ordered_words.append(words[0])
    else:
        i = 1
        prevWord = words[0]
        prevWord_freq = word_counts[prevWord]
        while i < len(words):
            currWord = words[i]
            currWord_freq = word_counts[currWord]
            if currWord_freq > prevWord_freq:
                prevWord = currWord
                prevWord_freq = currWord_freq
                words.append(currWord)
            ...
</code></pre>
<p>It's not complete yet, but is there a better way (as opposed to recursion) of sorting in this manner?</p>
| 0
|
2016-09-07T17:30:38Z
| 39,376,064
|
<pre><code>print create_word_freq_dict(s).most_common()
</code></pre>
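For context (a small self-contained sketch with shortened example data): `Counter.most_common()` already returns the words ordered by global frequency, which is the ordering the question wants to apply within each row.

```python
from collections import Counter

s = ["hello there you would like to sort me", "my friend said hello"]
word_counts = Counter(w for row in s for w in row.lower().split())

# most_common() yields (word, count) pairs, highest count first:
pairs = word_counts.most_common()
```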
| 0
|
2016-09-07T17:32:34Z
|
[
"python",
"sorting"
] |
Sorting words in a list of strings based on their relative frequencies, not regular sorting?
| 39,376,043
|
<p>Suppose I have a <code>pandas.Series</code> object:</p>
<pre><code>import pandas as pd

s = pd.Series(["hello there you would like to sort me",
               "sorted i would like to be", "the banana does not taste like the orange",
               "my friend said hello", "hello there amigo", "apple apple banana orange peach pear plum",
               "orange is my favorite color"])
</code></pre>
<p>I want to sort the words inside each row based on the frequency with which each word occurs in the <em>entire</em> <code>Series</code>.</p>
<p>I can create a dictionary of the word: frequency key-value pairs easily:</p>
<pre><code>from collections import Counter

def create_word_freq_dict(series):
    return Counter(word for row in series for word in row.lower().split())

word_counts = create_word_freq_dict(s)
</code></pre>
<p>Without procedurally going through each row in the <code>Series</code>, how can I sort the word in this object by their relative frequencies? That is to say, for example, that "hello" occurs more frequently than "friend," and so should be further to the left in the resultant "sorted" string.</p>
<p>This is what I have:</p>
<pre><code>for row in s:
    ordered_words = []
    words = row.split()
    if len(words) == 1:
        ordered_words.append(words[0])
    else:
        i = 1
        prevWord = words[0]
        prevWord_freq = word_counts[prevWord]
        while i < len(words):
            currWord = words[i]
            currWord_freq = word_counts[currWord]
            if currWord_freq > prevWord_freq:
                prevWord = currWord
                prevWord_freq = currWord_freq
                words.append(currWord)
            ...
</code></pre>
<p>It's not complete yet, but is there a better way (as opposed to recursion) of sorting in this manner?</p>
| 0
|
2016-09-07T17:30:38Z
| 39,376,255
|
<h1>Python 2</h1>
<p>All you have to do is create custom comparator based on your counter and call sorting</p>
<pre><code>s = ["hello there you would like to sort me",
     "sorted i would like to be", "the banana does not taste like the orange",
     "my friend said hello", "hello there amigo", "apple apple banana orange peach pear plum",
     "orange is my favorite color"]

from collections import Counter

def create_word_freq_dict(series):
    return Counter(word for row in series for word in row.lower().split())

word_counts = create_word_freq_dict(s)

for row in s:
    print sorted(row.lower().split(), lambda x, y: word_counts[y] - word_counts[x])
</code></pre>
<p>So all I do here is simply call <code>sorted</code> with custom comparison operator, which ignores the word, and instead uses <code>word_counts</code> mapping to determine which one should be first.</p>
<p>and effect</p>
<pre><code>['hello', 'like', 'there', 'would', 'to', 'you', 'sort', 'me']
['like', 'would', 'to', 'sorted', 'i', 'be']
['like', 'orange', 'the', 'banana', 'the', 'does', 'not', 'taste']
['hello', 'my', 'friend', 'said']
['hello', 'there', 'amigo']
['orange', 'apple', 'apple', 'banana', 'peach', 'pear', 'plum']
['orange', 'my', 'is', 'favorite', 'color']
</code></pre>
<p>and to prove it really sorts according to frequencies:</p>
<pre><code>for row in s:
    sorted_row = sorted(row.split(), lambda x, y: word_counts[y] - word_counts[x])
    print zip(sorted_row, map(lambda x: word_counts[x], sorted_row))
</code></pre>
<p>produces</p>
<pre><code>[('hello', 3), ('like', 3), ('there', 2), ('would', 2), ('to', 2), ('you', 1), ('sort', 1), ('me', 1)]
[('like', 3), ('would', 2), ('to', 2), ('sorted', 1), ('i', 1), ('be', 1)]
[('like', 3), ('orange', 3), ('the', 2), ('banana', 2), ('the', 2), ('does', 1), ('not', 1), ('taste', 1)]
[('hello', 3), ('my', 2), ('friend', 1), ('said', 1)]
[('hello', 3), ('there', 2), ('amigo', 1)]
[('orange', 3), ('apple', 2), ('apple', 2), ('banana', 2), ('peach', 1), ('pear', 1), ('plum', 1)]
[('orange', 3), ('my', 2), ('is', 1), ('favorite', 1), ('color', 1)]
</code></pre>
<h2>Python 3</h2>
<p>In Python 3, <code>sorted</code> no longer accepts a comparison function, only a <code>key</code>, so you have to convert it with <code>functools.cmp_to_key</code>:</p>
<pre><code>s = ["hello there you would like to sort me",
     "sorted i would like to be", "the banana does not taste like the orange",
     "my friend said hello", "hello there amigo", "apple apple banana orange peach pear plum",
     "orange is my favorite color"]

from functools import cmp_to_key
from collections import Counter

def create_word_freq_dict(series):
    return Counter(word for row in series for word in row.lower().split())

word_counts = create_word_freq_dict(s)

for row in s:
    sorted_row = sorted(row.split(), key=cmp_to_key(lambda x, y: word_counts[y] - word_counts[x]))
    print(sorted_row)
</code></pre>
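Note that since the comparison here only subtracts counts, `cmp_to_key` is not strictly needed: a plain `key` with `reverse=True` gives the same ordering and works on both Python 2 and 3. A minimal sketch with shortened example data:

```python
from collections import Counter

s = ["hello there you would like to sort me",
     "my friend said hello", "hello there amigo"]
word_counts = Counter(w for row in s for w in row.lower().split())

# Sort descending by global frequency; ties keep their original order
# because Python's sort is stable.
row = "my friend said hello"
sorted_row = sorted(row.split(), key=lambda w: word_counts[w], reverse=True)
```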
| 1
|
2016-09-07T17:45:04Z
|
[
"python",
"sorting"
] |
python pandas binning incorrect column format list of lists
| 39,376,070
|
<p>I am working through an example to bin columns using pandas. I am trying to use the django graphos library to plot the distribution and in order to do this I need to convert the binned output into a list of lists. Below is a snippet of the data I start with. </p>
<pre><code> A
1 8.78
2 9.46
3 8.78
4 10.27
5 10.37
6 12.36
7 14.56
</code></pre>
<p>then I create the bins</p>
<pre><code>bins = np.linspace(sample.A.min(), sample.A.max(), 100)
</code></pre>
<p>then I group by and count</p>
<pre><code>groups = sample.groupby(pd.cut(sample.A, bins)).count()
</code></pre>
<p>Then I get this (first couple rows) </p>
<pre><code> A
A
(7.68, 7.799] 5
(7.799, 7.918] 0
(7.918, 8.0364] 2
(8.0364, 8.155] 0
</code></pre>
<p>When I try to convert this output I receive only the aggregated column rows and not the bins (using groups.values.tolist()). (First couple rows)</p>
<pre><code>[[5],
[0],
[2],
[0],
[9],
[25],
</code></pre>
<p>My desired output would look like (list of lists)</p>
<pre><code> data = [
['Bins', 'Count'],
['7.68, 7.799', 1000],
['7.799, 7.918', 1170],
]
</code></pre>
<p>per the example (list example from <a href="https://github.com/agiliq/django-graphos" rel="nofollow">https://github.com/agiliq/django-graphos</a>)</p>
<p>For some reason I cannot get the list to format correctly I think it is due to some part in my cut example.</p>
| 0
|
2016-09-07T17:32:55Z
| 39,383,389
|
<p>I was doing pretty much what you were in the comments:</p>
<pre><code>binList = groups.index.tolist()
countList = [count[0] for count in groups.values.tolist()]  # groups.values.tolist() comes as a list of lists
binList = [[binStr.replace('(', '').replace(']', ''), count] for binStr, count in zip(binList, countList)]
binList = [['Bins', 'Count']] + binList  # note the nested list, so the header is its own row
</code></pre>
<p>It's not the Mona Lisa of python, but...uh...great minds think alike?</p>
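The reshaping step can be checked without pandas at all; this sketch uses illustrative stand-in bin labels and counts in place of the `groupby` output:

```python
# Stand-in data shaped like the pd.cut index and the grouped counts.
bins = ['(7.68, 7.799]', '(7.799, 7.918]']
counts = [5, 0]

# Strip the interval punctuation and pair each bin label with its count.
rows = [[b.replace('(', '').replace(']', ''), c] for b, c in zip(bins, counts)]

# Prepend the header as its own row, giving the list-of-lists graphos expects.
data = [['Bins', 'Count']] + rows
```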
| 1
|
2016-09-08T05:47:50Z
|
[
"python",
"pandas",
"numpy"
] |
Python 2.7 Argparse Optional and Required arguments
| 39,376,095
|
<p>So I've been frantically reading tutorials on argparse everywhere but can't seem to figure out why my program is getting an error. My code currently looks like this:</p>
<pre><code>parser = argparse.ArgumentParser()
parser.add_argument("-d", "-debug", required = False, help = "optional parameter")
parser.add_argument("input_file", help = "file to be parsed")
args = parser.parse_args()
</code></pre>
<p>When I run my program with the command "python myprogram.py -d inputfile" it complains that there are too few arguments. Furthermore, when I just run it with inputfile as the parameter, it works.</p>
<p>Does anyone know why this might be happening?</p>
| 0
|
2016-09-07T17:34:20Z
| 39,376,148
|
<p>The <a href="https://docs.python.org/2.7/library/argparse.html#action" rel="nofollow">default action</a> for an argument is <code>'store'</code>. <code>store</code> actions generally expect a <em>value</em> to be associated with the flag.</p>
<p>It looks like you want this to be a boolean switch type of flag in which case you want the <code>'store_true'</code> action</p>
<pre><code>parser = argparse.ArgumentParser()
parser.add_argument("-d", "--debug", required = False, help = "optional parameter", action = "store_true")
parser.add_argument("input_file", help = "file to be parsed")
args = parser.parse_args()
</code></pre>
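One way to confirm the fixed parser behaves as intended: `parse_args` accepts an explicit argument list, so the command line from the question can be replayed in code.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-d", "--debug", action="store_true", help="optional flag")
parser.add_argument("input_file", help="file to be parsed")

# parse_args takes an explicit argv list, which makes this easy to check:
args = parser.parse_args(["-d", "inputfile"])
```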
| 2
|
2016-09-07T17:37:42Z
|
[
"python",
"python-2.7",
"command-line-arguments"
] |
Keras Convolution2D Input: Error when checking model input: expected convolution2d_input_1 to have shape
| 39,376,169
|
<p>I am working through <a href="https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html" rel="nofollow">this great tutorial</a> on creating an image classifier using Keras. Once I have trained the model, I save it to a file and then later reload it into a model in a test script shown below.</p>
<p>I get the following exception when I evaluate the model using a new, never-before-seen image:</p>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "test_classifier.py", line 48, in <module>
score = model.evaluate(x, y, batch_size=16)
File "/Library/Python/2.7/site-packages/keras/models.py", line 655, in evaluate
sample_weight=sample_weight)
File "/Library/Python/2.7/site-packages/keras/engine/training.py", line 1131, in evaluate
batch_size=batch_size)
File "/Library/Python/2.7/site-packages/keras/engine/training.py", line 959, in _standardize_user_data
exception_prefix='model input')
File "/Library/Python/2.7/site-packages/keras/engine/training.py", line 108, in standardize_input_data
str(array.shape))
Exception: Error when checking model input: expected convolution2d_input_1 to have shape (None, 3, 150, 150) but got array with shape (1, 3, 150, 198)
</code></pre>
<p>Is the problem with the model that I have trained or with how I am invoking the evaluate method?</p>
<p>Code:</p>
<pre><code>from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import numpy as np
img_width, img_height = 150, 150
train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 2000
nb_validation_samples = 800
nb_epoch = 5
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(3, img_width, img_height)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.load_weights('first_try.h5')
img = load_img('data/test2/ferrari.jpeg')
x = img_to_array(img) # this is a Numpy array with shape (3, 150, 150)
x = x.reshape( (1,) + x.shape ) # this is a Numpy array with shape (1, 3, 150, 150)
y = np.array([0])
score = model.evaluate(x, y, batch_size=16)
</code></pre>
| 0
|
2016-09-07T17:38:57Z
| 39,377,182
|
<p>The issue was two-fold:</p>
<ol>
<li><p>The test image was the wrong size. It was 150 x 198, and needed to be 150 x 150.</p></li>
<li><p>I had to change the dense layer from <code>model.add(Dense(10))</code> to <code>model.add(Dense(1))</code>.</p></li>
</ol>
<p>I don't yet understand how to get the model to give me the prediction, but at least now, the model evaluation runs.</p>
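<p>A minimal sketch (plain NumPy, not Keras) of the shape fix the first point describes: the array read from the wrongly sized 150×198 image must be brought to spatial size 150×150 before the batch dimension is added. In a real pipeline, <code>load_img(..., target_size=(150, 150))</code> would do the resizing; the crop below is only an illustration of the shapes involved.</p>

```python
import numpy as np

# Stand-in for img_to_array() on the wrongly sized test image.
x = np.zeros((3, 150, 198))

# Crop the width down to 150 (a real pipeline would resize instead).
x = x[:, :, :150]

# Add the leading batch dimension the model expects.
x = x.reshape((1,) + x.shape)
print(x.shape)  # (1, 3, 150, 150)
```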
| 0
|
2016-09-07T18:50:33Z
|
[
"python",
"numpy",
"machine-learning",
"theano",
"keras"
] |
Not able to download pyxplorer package using pip?
| 39,376,231
|
<p>I am following the URL below to profile data. Profiling requires pyxplorer inside the Python interpreter, but when I try to install the pyxplorer package it gives me an error like:</p>
<p>Collecting pyxplorer
Could not find a version that satisfies the requirement pyxplorer (from versions: ) No matching distribution found for pyxplorer</p>
<p>The command to install the package is:</p>
<p>pip install pyxplorer</p>
<p>I know the links below are only about data profiling (pyxplorer):</p>
<p>1) <a href="https://github.com/grundprinzip/pyxplorer" rel="nofollow">https://github.com/grundprinzip/pyxplorer</a>
2) <a href="http://nbviewer.jupyter.org/github/grundprinzip/pyxplorer/blob/master/pyxplorer_stuff.ipynb" rel="nofollow">http://nbviewer.jupyter.org/github/grundprinzip/pyxplorer/blob/master/pyxplorer_stuff.ipynb</a></p>
<p>The links which I have already tried are:</p>
<p>1)<a href="http://stackoverflow.com/questions/17416938/pip-cannot-install-anything">pip cannot install anything</a></p>
<p>2) <a href="http://stackoverflow.com/questions/17260730/could-not-find-any-downloads-that-satisfy-the-requirement-newrelic-plugin-agent">Could not find any downloads that satisfy the requirement newrelic-plugin-agent</a></p>
<p>Thanks in advance.</p>
| 3
|
2016-09-07T17:43:03Z
| 39,376,884
|
<p>It looks like the <code>pyxplorer</code> package on PyPI is invalid and doesn't actually contain any release data. Have a look at the releases key of the <a href="https://pypi.python.org/pypi/pyxplorer/json" rel="nofollow">JSON for <code>pyxplorer</code></a> - it's an empty array, but normal packages look more like <a href="https://pypi.python.org/pypi/numpy/json" rel="nofollow">this</a>.</p>
<p>The best solution would be to install directly from GitHub, like so:</p>
<pre><code>pip install git+https://github.com/grundprinzip/pyxplorer
</code></pre>
<p>(You may need to use <code>sudo</code> on Unix-like systems or <em>Run as Administrator</em> on Windows)</p>
<p>It might also be wise to file an issue on the <code>pyxplorer</code> bug tracker so they know about this.</p>
| 1
|
2016-09-07T18:30:53Z
|
[
"python"
] |
Django ManyToMany Field with field name values
| 39,376,243
|
<p><a href="http://i.stack.imgur.com/3KSpq.png" rel="nofollow"><img src="http://i.stack.imgur.com/3KSpq.png" alt="ManyToMany Field Coming as object"></a></p>
<h1>model.py</h1>
<pre><code>class MedtechProductCategory(models.Model):
name = models.CharField(max_length=128, null=False, blank=False)
type = models.CharField(choices=type_choices_for_tag, max_length=512)
class Meta:
db_table = 'medtech_product_category'
class ProductsInfo(models.Model):
deal_active = models.BooleanField(default=True)
category = models.ManyToManyField(MedtechProductCategory, related_name='product_info_category')
class Meta:
db_table = 'products_info'
def getTags(self):
return self.category.values_list()
</code></pre>
<h1>admin.py</h1>
<pre><code>class ProductsInfoAdmin(admin.ModelAdmin):
filter_horizontal = ('category',)
admin.site.register(ProductsInfo, ProductsInfoAdmin)
</code></pre>
<p>So I want to show the name of the category field in the filter search, and save the selected objects on save.</p>
<p>How do I customise it to show the names of the many-to-many field's entries, and to save the selected objects of the many-to-many field?</p>
| 3
|
2016-09-07T17:43:48Z
| 39,376,312
|
<p>Add a <code>__unicode__</code> method to the model whose instances are being displayed, here <code>MedtechProductCategory</code>, returning the string that you want shown.</p>
<p>For Python 3, use <code>__str__</code> instead.</p>
<pre><code># on the MedtechProductCategory model
def __str__(self):
    return self.name
</code></pre>
<p>(Returning <code>self.category.name</code> from <code>ProductsInfo</code> would not work, because <code>category</code> is a many-to-many manager, not a single object.)</p>
| 3
|
2016-09-07T17:48:41Z
|
[
"python",
"django",
"django-models",
"django-forms",
"django-admin"
] |
linear regression model prediction in scikit-learn is inconsistent
| 39,376,410
|
<p>So I built a simple linear regression model with a handful of features. When I try to predict for new input, the output is inconsistent. For example:</p>
<pre><code>In [1]: model.predict(X_new)
Out[1]: array([ 7.15993216e+08, 1.13548305e+09])
</code></pre>
<p>But if I tack it onto the original training sample, I get a very different answer:</p>
<pre><code>In [2]: model.predict(X_training[:1].append(X_new))[1:]
Out[2]: array([ 272682.59925699, 1179906.89475647])
</code></pre>
<p>This seems to be model agnostic (at least within linear regression). I also tried the same thing inside of a pipeline and got the same behavior. </p>
<p>Any thoughts?</p>
| 0
|
2016-09-07T17:54:39Z
| 39,417,345
|
<p>This seems to be an issue with the sorting order of the pandas data frame. A solution for this is to pre-sort both training and testing data sets by the same column order. Something along the lines of:</p>
<pre><code>model.fit(np.array(X_training.sort_index(1)))
model.predict(np.array(new_input.sort_index(1)))
</code></pre>
<p>This cements the column order in the training and testing arrays.</p>
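<p>A self-contained sketch of the failure mode (hypothetical data; assumes scikit-learn and pandas are installed): converting a <code>DataFrame</code> with <code>np.array</code> discards the column names, so if the new input's columns are ordered differently from the training frame, the learned coefficients get applied to the wrong features. Sorting both frames by column name first makes the order deterministic.</p>

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

train = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [10, 20, 30, 40]})
y = 2 * train['a'] + train['b']  # exact linear target

model = LinearRegression()
model.fit(np.array(train.sort_index(axis=1)), y)

new_input = pd.DataFrame({'b': [50], 'a': [5]})           # columns reversed
bad = model.predict(np.array(new_input))                  # wrong feature order
good = model.predict(np.array(new_input.sort_index(axis=1)))  # matches training
print(good)  # approximately [60.]
```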
| 0
|
2016-09-09T18:04:48Z
|
[
"python",
"scikit-learn"
] |
AttributeError: type object has no attribute "id" PYTHON
| 39,376,420
|
<p>So I was trying to make a basic Python pong game when this error came up.
It says <code>AttributeError: type object has no attribute "id"</code>, which I have no idea what it means.</p>
<pre><code>C:\Users\****\AppData\Local\Programs\Python\Python35-32\python.exe C:/Users/****/untitled/src/testing.py
Traceback (most recent call last):
File "C:/Users/****/untitled/src/testing.py", line 113, in <module>
ball.draw()
File "C:/Users/****/untitled/src/testing.py", line 36, in draw
if self.hit_paddle2(pos) == True:
File "C:/Users/****/untitled/src/testing.py", line 53, in hit_paddle2
paddle2_pos = self.canvas.coords(self.paddle2.id)
AttributeError: type object 'Paddle2' has no attribute 'id'
Process finished with exit code 1
</code></pre>
<p>This is the code I used:</p>
<pre><code>from tkinter import *
import random
import time
tk = Tk()
tk.title("Game")
tk.resizable(0, 0)
tk.wm_attributes("-topmost", 1)
canvas = Canvas(tk, width=500, height=400, bd=0, highlightthickness=0)
canvas.pack()
tk.update()
class Ball:
def __init__(self, canvas, paddle, paddle2, color):
self.canvas = canvas
self.paddle = paddle
self.paddle2 = paddle2
self.id = canvas.create_oval(10, 10, 25, 25, fill=color)
self.canvas.move(self.id, 245, 100)
starts = [-3, -2, -1, 1, 2, 3]
random.shuffle(starts)
self.x = starts[0]
self.y = -3
self.canvas_height = self.canvas.winfo_height()
self.canvas_width = self.canvas.winfo_width()
self.hit_bottom = False
def draw(self):
self.canvas.move(self.id, self.x, self.y)
pos = self.canvas.coords(self.id)
if pos[1] <= 0:
self.y = 3
if self.hit_paddle(pos) == True:
self.y = -3
if self.hit_paddle2(pos) == True:
self.y = 3
if pos[3] >= self.canvas_height:
self.hit_bottom = True
if pos[0] <= 0:
self.x = 3
if pos[2] >= self.canvas_width:
self.x = -3
def hit_paddle(self, pos):
paddle_pos = self.canvas.coords(self.paddle.id)
if pos[2] >= paddle_pos[0] and pos[0] <= paddle_pos[2]:
if pos[3] >= paddle_pos[1] and pos[3] <= paddle_pos[3]:
return True
return False
def hit_paddle2(self, pos):
paddle2_pos = self.canvas.coords(self.paddle2.id)
if pos[2] >= paddle2_pos[0] and pos[0] <= paddle2_pos[2]:
if pos[3] >= paddle2_pos[1] and pos[3] <= paddle2_pos[3]:
return True
return False
class Paddle:
def __init__(self, canvas, color):
self.canvas = canvas
self.id = canvas.create_rectangle(0, 0, 100, 10, fill=color)
self.canvas.move(self.id, 200, 300)
self.x = 0
self.canvas_width = self.canvas.winfo_width()
self.canvas.bind_all('<KeyPress-Left>', self.turn_left)
self.canvas.bind_all('<KeyPress-Right>', self.turn_right)
def turn_left(self, evt):
self.x = -2
def turn_right(self, evt):
self.x = 2
def draw(self):
self.canvas.move(self.id, self.x, 0)
pos = self.canvas.coords(self.id)
if pos[0] <= 0:
self.x = 0
elif pos[2] >= self.canvas_width:
self.x = 0
class Paddle2:
def __init__(self, canvas, color):
self.canvas = canvas
self.id = canvas.create_rectangle(0, 10, 100, 10, fill=color)
self.canvas.move(self.id, 200, 300)
self.x = 0
self.canvas_width = self.canvas.winfo_width()
self.canvas.bind_all('<KeyPress-W>', self.turn_left)
self.canvas.bind_all('<KeyPress-A>', self.turn_right)
def turn_left(self, evt):
self.x = -2
def turn_right(self, evt):
self.x = 2
def draw(self):
self.canvas.move(self.id, self.x, 0)
pos = self.canvas.coords(self.id)
if pos[0] <= 0:
self.x = 0
elif pos[2] >= self.canvas_width:
self.x = 0
paddle = Paddle(canvas, 'blue')
ball = Ball(canvas, paddle, Paddle2, 'red')
paddle2 = Paddle2(canvas, 'blue')
while 1:
if ball.hit_bottom == False:
ball.draw()
paddle.draw()
paddle2.draw()
tk.update_idletasks()
tk.update()
time.sleep(0.01)
</code></pre>
| -1
|
2016-09-07T17:55:08Z
| 39,376,459
|
<pre><code>paddle = Paddle(canvas, 'blue')
ball = Ball(canvas, paddle, Paddle2, 'red')
paddle2 = Paddle2(canvas, 'blue')
</code></pre>
<p>You should pass in the <code>paddle2</code> instance, like this:</p>
<pre><code>paddle = Paddle(canvas, 'blue')
paddle2 = Paddle2(canvas, 'blue')
ball = Ball(canvas, paddle, paddle2, 'red')
</code></pre>
| 1
|
2016-09-07T17:58:36Z
|
[
"python"
] |
AttributeError: type object has no attribute "id" PYTHON
| 39,376,420
|
<p>So I was trying to make a basic Python pong game when this error came up.
It says <code>AttributeError: type object has no attribute "id"</code>, which I have no idea what it means.</p>
<pre><code>C:\Users\****\AppData\Local\Programs\Python\Python35-32\python.exe C:/Users/****/untitled/src/testing.py
Traceback (most recent call last):
File "C:/Users/****/untitled/src/testing.py", line 113, in <module>
ball.draw()
File "C:/Users/****/untitled/src/testing.py", line 36, in draw
if self.hit_paddle2(pos) == True:
File "C:/Users/****/untitled/src/testing.py", line 53, in hit_paddle2
paddle2_pos = self.canvas.coords(self.paddle2.id)
AttributeError: type object 'Paddle2' has no attribute 'id'
Process finished with exit code 1
</code></pre>
<p>This is the code I used:</p>
<pre><code>from tkinter import *
import random
import time
tk = Tk()
tk.title("Game")
tk.resizable(0, 0)
tk.wm_attributes("-topmost", 1)
canvas = Canvas(tk, width=500, height=400, bd=0, highlightthickness=0)
canvas.pack()
tk.update()
class Ball:
def __init__(self, canvas, paddle, paddle2, color):
self.canvas = canvas
self.paddle = paddle
self.paddle2 = paddle2
self.id = canvas.create_oval(10, 10, 25, 25, fill=color)
self.canvas.move(self.id, 245, 100)
starts = [-3, -2, -1, 1, 2, 3]
random.shuffle(starts)
self.x = starts[0]
self.y = -3
self.canvas_height = self.canvas.winfo_height()
self.canvas_width = self.canvas.winfo_width()
self.hit_bottom = False
def draw(self):
self.canvas.move(self.id, self.x, self.y)
pos = self.canvas.coords(self.id)
if pos[1] <= 0:
self.y = 3
if self.hit_paddle(pos) == True:
self.y = -3
if self.hit_paddle2(pos) == True:
self.y = 3
if pos[3] >= self.canvas_height:
self.hit_bottom = True
if pos[0] <= 0:
self.x = 3
if pos[2] >= self.canvas_width:
self.x = -3
def hit_paddle(self, pos):
paddle_pos = self.canvas.coords(self.paddle.id)
if pos[2] >= paddle_pos[0] and pos[0] <= paddle_pos[2]:
if pos[3] >= paddle_pos[1] and pos[3] <= paddle_pos[3]:
return True
return False
def hit_paddle2(self, pos):
paddle2_pos = self.canvas.coords(self.paddle2.id)
if pos[2] >= paddle2_pos[0] and pos[0] <= paddle2_pos[2]:
if pos[3] >= paddle2_pos[1] and pos[3] <= paddle2_pos[3]:
return True
return False
class Paddle:
def __init__(self, canvas, color):
self.canvas = canvas
self.id = canvas.create_rectangle(0, 0, 100, 10, fill=color)
self.canvas.move(self.id, 200, 300)
self.x = 0
self.canvas_width = self.canvas.winfo_width()
self.canvas.bind_all('<KeyPress-Left>', self.turn_left)
self.canvas.bind_all('<KeyPress-Right>', self.turn_right)
def turn_left(self, evt):
self.x = -2
def turn_right(self, evt):
self.x = 2
def draw(self):
self.canvas.move(self.id, self.x, 0)
pos = self.canvas.coords(self.id)
if pos[0] <= 0:
self.x = 0
elif pos[2] >= self.canvas_width:
self.x = 0
class Paddle2:
def __init__(self, canvas, color):
self.canvas = canvas
self.id = canvas.create_rectangle(0, 10, 100, 10, fill=color)
self.canvas.move(self.id, 200, 300)
self.x = 0
self.canvas_width = self.canvas.winfo_width()
self.canvas.bind_all('<KeyPress-W>', self.turn_left)
self.canvas.bind_all('<KeyPress-A>', self.turn_right)
def turn_left(self, evt):
self.x = -2
def turn_right(self, evt):
self.x = 2
def draw(self):
self.canvas.move(self.id, self.x, 0)
pos = self.canvas.coords(self.id)
if pos[0] <= 0:
self.x = 0
elif pos[2] >= self.canvas_width:
self.x = 0
paddle = Paddle(canvas, 'blue')
ball = Ball(canvas, paddle, Paddle2, 'red')
paddle2 = Paddle2(canvas, 'blue')
while 1:
if ball.hit_bottom == False:
ball.draw()
paddle.draw()
paddle2.draw()
tk.update_idletasks()
tk.update()
time.sleep(0.01)
</code></pre>
| -1
|
2016-09-07T17:55:08Z
| 39,376,463
|
<p>You need to pass instances of <code>Paddle</code> and <code>Paddle2</code> to <code>Ball</code> constructor.</p>
<pre><code>paddle = Paddle(canvas, 'blue')
paddle2 = Paddle2(canvas, 'blue')
ball = Ball(canvas, paddle, paddle2, 'red')
</code></pre>
| 2
|
2016-09-07T17:58:53Z
|
[
"python"
] |
Python copy parts of dictionary into a new dictionary
| 39,376,563
|
<p>I have a python dictionary which looks like this:</p>
<pre><code>old_dict={"payment_amt": "20",
"chk_nr": "321749",
"clm_list": {"dtl": [{"clm_id": "1A2345", "name": "John"},
{"clm_id": "9999", "name": "Jack"}]}}
</code></pre>
<p>I need to parse the above and store it as:</p>
<pre><code>{"payment_amt": "20",
"clm_list": {"dtl": [{"clm_id": "1A2345"},
{"clm_id": "9999"}]}}
</code></pre>
<p>Is there a right pythonic way to do it?</p>
<p>Thanks</p>
| 1
|
2016-09-07T18:05:50Z
| 39,376,706
|
<p>Yes, there are many "<em>right pythonic ways to do it</em>".
Here is one such way:</p>
<pre><code>old_dict = {
"payment_amt": "20",
"chk_nr": "321749",
"clm_list": {
"dtl": [
{"clm_id": "1A2345", "name": "John"},
{"clm_id": "9999", "name": "Jack"}]}}
new_dict = {
'payment_amt': old_dict['payment_amt'],
'clm_list': {
'dtl': [{
'clm_id': dtl['clm_id']} for dtl in old_dict['clm_list']['dtl']]}}
assert new_dict == {
"payment_amt": "20",
"clm_list": {"dtl": [{"clm_id": "1A2345"}, {"clm_id": "9999"}]}}
</code></pre>
| 2
|
2016-09-07T18:16:58Z
|
[
"python",
"dictionary"
] |
Python copy parts of dictionary into a new dictionary
| 39,376,563
|
<p>I have a python dictionary which looks like this:</p>
<pre><code>old_dict={"payment_amt": "20",
"chk_nr": "321749",
"clm_list": {"dtl": [{"clm_id": "1A2345", "name": "John"},
{"clm_id": "9999", "name": "Jack"}]}}
</code></pre>
<p>I need to parse the above and store it as:</p>
<pre><code>{"payment_amt": "20",
"clm_list": {"dtl": [{"clm_id": "1A2345"},
{"clm_id": "9999"}]}}
</code></pre>
<p>Is there a right pythonic way to do it?</p>
<p>Thanks</p>
| 1
|
2016-09-07T18:05:50Z
| 39,376,899
|
<p>You could create a "template" dictionary/list/whatever and define a recursive method that traverses both the input object (some sort of nested list/dictionary thing) and the template in parallel and just keeps those elements that are in the respective place in the template. In a basic version, this could look like this (but could certainly be extended to cover more cases):</p>
<pre><code>def prune_dict(obj, template):
if template is None:
return obj
if isinstance(template, dict):
return {key: prune_dict(obj[key], template[key]) for key in template}
if isinstance(template, list):
return [prune_dict(x, template[0]) for x in obj]
</code></pre>
<p>Here, <code>template</code> is assumed to be another dictionary or list. <code>None</code> is used to denote "leafs" in the structure. For <code>list</code>, the template is assumed to hold only one element, that is used as template for all the list's elements. For <code>dict</code>, it will retain all those values that are represented in the template.</p>
<p>Applied to your use case:</p>
<pre><code>>>> tmpl = {"payment_amt": None, "clm_list": {"dtl": [{"clm_id": None}]}}
>>> prune_dict(old_dict, tmpl)
{'clm_list': {'dtl': [{'clm_id': '1A2345'}, {'clm_id': '9999'}]}, 'payment_amt': '20'}
</code></pre>
| 1
|
2016-09-07T18:32:00Z
|
[
"python",
"dictionary"
] |
Python copy parts of dictionary into a new dictionary
| 39,376,563
|
<p>I have a python dictionary which looks like this:</p>
<pre><code>old_dict={"payment_amt": "20",
"chk_nr": "321749",
"clm_list": {"dtl": [{"clm_id": "1A2345", "name": "John"},
{"clm_id": "9999", "name": "Jack"}]}}
</code></pre>
<p>I need to parse the above and store it as:</p>
<pre><code>{"payment_amt": "20",
"clm_list": {"dtl": [{"clm_id": "1A2345"},
{"clm_id": "9999"}]}}
</code></pre>
<p>Is there a right pythonic way to do it?</p>
<p>Thanks</p>
| 1
|
2016-09-07T18:05:50Z
| 39,376,926
|
<p>Personally, I would go with a straightforward copy of the exact keys you want to keep in the example you posted (see Rob's answer), if the input is always exactly like you listed. Keep it simple.</p>
<p>However, if you can't rely on the input to always have the same exact structure, you can still reduce it to only include the expected keys with a recursive function which uses dict comprehensions.</p>
<pre><code>old_dict = {"payment_amt": "20",
"chk_nr": "321749",
"clm_list": {"dtl": [{"clm_id": "1A2345", "name": "John"},
{"clm_id": "9999", "name": "Jack"}]}}
keep = ["payment_amt", "clm_list", "dtl", "clm_id"]
def recursively_prune_dict_keys(obj, keep):
if isinstance(obj, dict):
return {k: recursively_prune_dict_keys(v, keep) for k, v in obj.items() if k in keep}
elif isinstance(obj, list):
return [recursively_prune_dict_keys(item, keep) for item in obj]
else:
return obj
new_dict = recursively_prune_dict_keys(old_dict, keep)
print new_dict
</code></pre>
<p>output:</p>
<pre><code>{'clm_list': {'dtl': [{'clm_id': '1A2345'}, {'clm_id': '9999'}]}, 'payment_amt': '20'}
</code></pre>
| 2
|
2016-09-07T18:33:59Z
|
[
"python",
"dictionary"
] |
Edited - Python plot persistence windows
| 39,376,670
|
<p>Each time I launch my program, my plots are erased after each execution.</p>
<p>I would like the following situation:</p>
<ol>
<li>Launch program 1 and plot in figure 1</li>
<li>Stop the execution of program 1</li>
<li>Lauch program 2 and plot in figure 1</li>
<li>Retrieve a pdf file where the plot of program 1 and program 2 are superimposed</li>
</ol>
<p>here is my code:</p>
<pre class="lang-python prettyprint-override"><code>z = np.zeros(5)
fig,axarr = plt.subplots(nrows=2, ncols=3)
fig.subplots_adjust()
lig,col = 0,0
axarr[lig,col].plot(tt,z[0,:],'k')
#plt.ion()
#plt.draw()
plt.show(block=True)
#plt.show()
</code></pre>
<p><strong>EDIT:</strong> OK, maybe I could solve this problem with the following approach:
is it possible to get the ax of a figure that was opened by another execution of the program?</p>
| 2
|
2016-09-07T18:13:57Z
| 39,376,911
|
<p><code>plt.show(block=True)</code> should work.</p>
<p>Other hack would be using pause function as shown below:</p>
<pre><code>from matplotlib import pylab
pylab.plot(range(10), range(10))
pylab.pause(2**31-1)
</code></pre>
| 0
|
2016-09-07T18:32:51Z
|
[
"python",
"python-2.7",
"matplotlib"
] |
append columns to pandas dataframe with duplicate rows
| 39,376,755
|
<p>How can I append the columns from the <code>data</code> dataframe to the <code>q</code> dataframe, while maintaining the same order and number of rows in <code>q</code>? The challenge is that there can be duplicates in <code>data</code> and <code>q</code>.</p>
<pre><code>In [2]: data = pd.DataFrame([[3,4,333],[5,6,111],[2,9,222],[5,6,111]], columns=['a','b','id'])
In [3]: data.index = data.id
In [4]: q = pd.DataFrame([[333],[111]], columns=['id'])
In [5]: q.index = q.id
In [6]: data
Out[6]:
a b id
id
333 3 4 333
111 5 6 111
222 2 9 222
111 5 6 111
In [7]: q
Out[7]:
id
id
333 333
111 111
</code></pre>
<p>The result should look something like:</p>
<pre><code> a b id
0 3 4 333
1 5 6 111
</code></pre>
<p>Some stuff that doesn't work:</p>
<pre><code>pd.merge(q, data, how='left')
data.ix[q.iloc[:,0],['a','b','id']]
</code></pre>
<p>I would prefer to not do a <code>unique</code> on <code>data</code> as this would create another large object.</p>
| 0
|
2016-09-07T18:21:18Z
| 39,376,909
|
<p>One possible solution without drop duplicates is create new columns in both <code>DataFrames</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow"><code>cumcount</code></a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> by <code>id</code> and <code>g</code>. Last need remove column <code>g</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow"><code>drop</code></a> and reorder columns:</p>
<pre><code>data['g'] = data.groupby('id').cumcount()
q['g'] = q.groupby('id').cumcount()
print (data)
a b id g
id
333 3 4 333 0
111 5 6 111 0
222 2 9 222 0
111 5 6 111 1
print (q)
id g
id
333 333 0
111 111 0
print (pd.merge(q, data, on=['id','g'], how='left')
.drop('g', axis=1)[['a','b','id']])
a b id
0 3 4 333
1 5 6 111
</code></pre>
<p>With <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow"><code>drop_duplicates</code></a> it is easy:</p>
<pre><code>data = data.drop_duplicates('id')
print (data)
a b id
id
333 3 4 333
111 5 6 111
222 2 9 222
print (q)
id
id
333 333
111 111
print (pd.merge(q, data, how='left')[['a','b','id']])
a b id
0 3 4 333
1 5 6 111
</code></pre>
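<p>The whole approach can be run end-to-end on the question's data (assumes pandas; the output order follows <code>q</code>):</p>

```python
import pandas as pd

data = pd.DataFrame([[3, 4, 333], [5, 6, 111], [2, 9, 222], [5, 6, 111]],
                    columns=['a', 'b', 'id'])
q = pd.DataFrame([[333], [111]], columns=['id'])

# Number repeated ids on both sides so duplicates pair up one-to-one.
data['g'] = data.groupby('id').cumcount()
q['g'] = q.groupby('id').cumcount()

result = (pd.merge(q, data, on=['id', 'g'], how='left')
            .drop('g', axis=1)[['a', 'b', 'id']])
print(result)
```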
| 0
|
2016-09-07T18:32:41Z
|
[
"python",
"pandas"
] |
append columns to pandas dataframe with duplicate rows
| 39,376,755
|
<p>How can I append the columns from the <code>data</code> dataframe to the <code>q</code> dataframe, while maintaining the same order and number of rows in <code>q</code>? The challenge is that there can be duplicates in <code>data</code> and <code>q</code>.</p>
<pre><code>In [2]: data = pd.DataFrame([[3,4,333],[5,6,111],[2,9,222],[5,6,111]], columns=['a','b','id'])
In [3]: data.index = data.id
In [4]: q = pd.DataFrame([[333],[111]], columns=['id'])
In [5]: q.index = q.id
In [6]: data
Out[6]:
a b id
id
333 3 4 333
111 5 6 111
222 2 9 222
111 5 6 111
In [7]: q
Out[7]:
id
id
333 333
111 111
</code></pre>
<p>The result should look something like:</p>
<pre><code> a b id
0 3 4 333
1 5 6 111
</code></pre>
<p>Some stuff that doesn't work:</p>
<pre><code>pd.merge(q, data, how='left')
data.ix[q.iloc[:,0],['a','b','id']]
</code></pre>
<p>I would prefer to not do a <code>unique</code> on <code>data</code> as this would create another large object.</p>
| 0
|
2016-09-07T18:21:18Z
| 39,376,979
|
<p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow"><code>join</code></a> to join the columns of the two dataframes on a common index, <code>id</code>. Then, drop the duplicated values along with dropping off <code>Nans</code> if present as shown:</p>
<pre><code>data[['a', 'b']].join(q['id']) \
.drop_duplicates() \
.dropna() \
.sort_values('id', ascending=False) \
.reset_index(drop=True) \
.astype(int)
a b id
0 3 4 333
1 5 6 111
</code></pre>
| 1
|
2016-09-07T18:37:19Z
|
[
"python",
"pandas"
] |
In Python, how to deep copy the Namespace obj "args" from argparse
| 39,376,763
|
<p>I got "args" from argparse:</p>
<blockquote>
<p>args = parser.parse_args()</p>
</blockquote>
<p>I want to pass it to two different functions with slight modifications each. That's why I want to deep copy the args, modify the copy and pass them to each function. </p>
<p>However, the copy.deepcopy just doesn't work. It gives me:</p>
<blockquote>
<p>TypeError: cannot deepcopy this pattern object</p>
</blockquote>
<p>So what's the right way to do it? Thanks</p>
| 0
|
2016-09-07T18:21:40Z
| 39,378,639
|
<p>I myself just figured out a way to do it:</p>
<pre><code>from argparse import Namespace

args_copy = Namespace(**vars(args))
</code></pre>
<p>It is not a real deep copy (nested mutable values are still shared), but it is at least "deeper" than:</p>
<pre><code>args_copy = args
</code></pre>
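<p>A runnable illustration (standard library only): the copy is a new <code>Namespace</code> built from a copy of the original's attribute dict, so reassigning a top-level attribute on the copy does not touch the original. Nested mutable values, however, would still be shared between the two.</p>

```python
from argparse import Namespace

args = Namespace(epochs=10, lr=0.01)
args_copy = Namespace(**vars(args))

args_copy.lr = 0.1            # only the copy changes
print(args.lr, args_copy.lr)  # 0.01 0.1
```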
| 0
|
2016-09-07T20:40:08Z
|
[
"python",
"copy",
"argparse"
] |
Convert length 1 nested lists into strings
| 39,376,830
|
<p>I have this list, in Python, with some nested lists inside with length 1:</p>
<p>[<strong>['7746']</strong>, '12', '1929', '8827', <strong>['7']</strong>, '8837', '128']</p>
<p>I want to get rid of the lists and just keep the string inside and get:</p>
<p>[<strong>'7746'</strong>, '12', '1929', '8827', <strong>'7'</strong>, '8837', '128']</p>
<p>How do I do this?</p>
| -2
|
2016-09-07T18:26:05Z
| 39,383,553
|
<p>You can use a <a class='doc-link' href="http://stackoverflow.com/documentation/python/196/comprehensions/737/list-comprehensions#t=201609080600574325476">list comprehension</a> and a ternary conditional:</p>
<pre><code>empty = [(x[0] if isinstance(x, list) else x) for x in nested]
</code></pre>
<p>Alternatively, the obvious solution with a simple loop:</p>
<pre><code>empty = []
for x in nested:
if isinstance(x, list):
empty.append(x[0])
else:
empty.append(x)
</code></pre>
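<p>Applied to the question's list, the comprehension gives exactly the desired output:</p>

```python
nested = [['7746'], '12', '1929', '8827', ['7'], '8837', '128']
flat = [(x[0] if isinstance(x, list) else x) for x in nested]
print(flat)  # ['7746', '12', '1929', '8827', '7', '8837', '128']
```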
| 0
|
2016-09-08T06:03:30Z
|
[
"python",
"list"
] |
Python3, json causes TypeError but simplejson doesn't
| 39,376,847
|
<p>I'm running the latest Python 3 (with the Anaconda distribution) and have a problem with the standard library <code>json</code> module, which causes the traceback below:</p>
<pre><code> Traceback (most recent call last):
File "C:\Users\Think\Anaconda3\lib\site-packages\werkzeug\serving.py", line 193, in run_wsgi
execute(self.server.app)
File "C:\Users\Think\Anaconda3\lib\site-packages\werkzeug\serving.py", line 181, in execute
application_iter = app(environ, start_response)
File "C:\Users\Think\my_server.py", line 148, in __call__
return self.wsgi_app(environ, start_response)
File "C:\Users\Think\my_server.py", line 144, in wsgi_app
response = self.dispatch_request(request)
File "C:\Users\Think\my_server.py", line 80, in dispatch_request
return getattr(self, 'on_' + endpoint)(request, **values)
File "C:\Users\Think\my_server.py", line 54, in on_xapi_request
json_data = self.load_json(request.data)
File "C:\Users\Think\my_server.py", line 60, in load_json
return json.loads(data)
File "C:\Users\Think\Anaconda3\lib\json\__init__.py", line 312, in loads
s.__class__.__name__))
TypeError: the JSON object must be str, not 'bytes'
</code></pre>
<p>But, simplejson doesn't cause the error.</p>
| 0
|
2016-09-07T18:27:27Z
| 39,376,874
|
<p>Decode the bytes with <code>data.decode('utf-8')</code> before passing them to <code>json.loads</code>.</p>
<p>I think this line:</p>
<pre><code>return json.loads(data)
</code></pre>
<p>is causing the problem. Decode data before passing it to this function.</p>
| 1
|
2016-09-07T18:30:15Z
|
[
"python",
"json",
"python-3.x"
] |
How to separate thousands with commas using Facet Grid in Seaborn
| 39,376,888
|
<p>I have the following code:</p>
<pre><code> d = sns.FacetGrid(data = df,
col = 'Company',
sharex = False,
sharey = False,
col_wrap = 4)
d.map(sns.distplot, 'Volume', kde = False, rug = True, fit = stats.norm)
d.set_xlabels('volume')
d.set_xticklabels(rotation = 45)
plt.savefig(myDataRepository + 'figure_02__' + str(time_stamp) + '.png')
ax.get_xaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
</code></pre>
<p>The last two lines attempt to separate the x-axis values with commas. For example, '500,000' instead of '500000'.</p>
<p>I see an error message saying:</p>
<pre><code>"global name ax is not defined"
</code></pre>
<p>How would I amend <code>ax.</code> such that I can format the x-axis values with comma separation?</p>
<p>Thanks in advance!</p>
<p><strong>---EDIT BELOW---</strong></p>
<p>Here is updated code that I'm posting. It still fails, but this time with error message <code>@property</code> when evaluating </p>
<pre><code>d.ax.get_xaxis().set_major_formatter(tkr.FuncFormatter(lambda x, p: format(int(x), ',')))
</code></pre>
<p>Revised Code: </p>
<pre><code>fig, ax = plt.subplots()
d = sns.FacetGrid(data = df,
col = 'Company',
sharex = False,
sharey = False,
col_wrap = 4)
d.map(sns.distplot, 'Volume', kde = False, rug = True, fit = stats.norm)
d.set_xlabels('volume')
d.set_xticklabels(rotation = 45)
plt.savefig(myDataRepository + 'figure_02__' + str(time_stamp) + '.png')
d.ax.get_xaxis().set_major_formatter(tkr.FuncFormatter(lambda x, p: format(int(x), ',')))
</code></pre>
<p><code>@cphlewis</code> is suggesting that I put the ax.get_axis() inside the Facet Grid. But how???</p>
<p>Putting <code>ax.get_axis()</code> inside the FacetGrid does not seem to solve this problem (assuming I did so correctly). Is it even <em>possible</em> to use this <code>ax.get_axis()</code> function with the Seaborn FacetGrid?</p>
<p>The error message:</p>
<pre><code> 971 return self.axes[0, 0]
972 else:
--> 973 raise AttributeError
974
975 @property
AttributeError:
</code></pre>
| 0
|
2016-09-07T18:31:31Z
| 39,377,865
|
<p>Go through the <code>FacetGrid</code> object itself. Note that <code>d.ax</code> only exists when the grid contains a single facet; with several facets (as with <code>col_wrap=4</code>, which is exactly when the <code>AttributeError</code> in your edit appears), iterate over <code>d.axes.flat</code> instead:</p>
<pre><code>for ax in d.axes.flat:
    ax.get_xaxis().set_major_formatter(
        tkr.FuncFormatter(lambda x, p: format(int(x), ',')))
</code></pre>
<p>Each of those axes exposes <code>get_major_formatter</code>/<code>set_major_formatter</code>.</p>
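<p>A self-contained sketch with plain matplotlib (hypothetical axes rather than the question's grid): the same <code>FuncFormatter</code> is attached to each subplot's x-axis in a loop, which is also how you would treat the array of axes that a multi-facet seaborn <code>FacetGrid</code> exposes.</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the example
import matplotlib.pyplot as plt
import matplotlib.ticker as tkr

fig, axes = plt.subplots(1, 2)
fmt = tkr.FuncFormatter(lambda x, p: format(int(x), ','))
for ax in axes.flat:               # with a FacetGrid: d.axes.flat
    ax.set_xlim(0, 1000000)
    ax.xaxis.set_major_formatter(fmt)

fig.canvas.draw()                  # populate the tick labels
labels = [t.get_text() for t in axes[0].get_xticklabels()]
print(labels)
```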
| 0
|
2016-09-07T19:41:02Z
|
[
"python",
"matplotlib",
"seaborn",
"facet"
] |
Why is numpy's sine function so inaccurate at some points?
| 39,376,891
|
<p>I just checked <code>numpy</code>'s <code>sine</code> function. Apparently, it produces highly inaccurate results around pi. </p>
<pre><code>In [26]: import numpy as np
In [27]: np.sin(np.pi)
Out[27]: 1.2246467991473532e-16
</code></pre>
<p>The expected result is 0. Why is <code>numpy</code> so inaccurate there?</p>
<p>To some extent, I feel uncertain whether it is acceptable to regard the calculated result as inaccurate: its absolute error comes within one machine epsilon (for binary64), whereas the relative error is <code>+inf</code>, which is why I feel somewhat confused. Any idea?</p>
<p>[Edit] I fully understand that floating-point calculation can be inaccurate. But most of the floating-point libraries can manage to deliver results within a small range of error. Here, the relative error is +inf, which seems unacceptable. Just imagine that we want to calculate </p>
<pre><code>1/(1e-16 + sin(pi))
</code></pre>
<p>The results would be disastrously wrong if we use numpy's implementation. </p>
| -1
|
2016-09-07T18:31:38Z
| 39,376,999
|
<p>The value is dependent upon the algorithm used to compute it. A typical implementation will use some quickly-converging infinite series, carried out until it converges within one machine epsilon. Many modern chips (starting with the Intel 960, I think) had such functions in the instruction set.</p>
<p>To get 0 returned for this, we would need either a notably more accurate algorithm, one that ran extra-precision arithmetic to guarantee the closest-match result, or something that recognizes special cases: detect a multiple of PI and return the exact value.</p>
| -1
|
2016-09-07T18:38:51Z
|
[
"python",
"numpy",
"floating-point"
] |
Why is numpy's sine function so inaccurate at some points?
| 39,376,891
|
<p>I just checked <code>numpy</code>'s <code>sine</code> function. Apparently, it produces highly inaccurate results around pi. </p>
<pre><code>In [26]: import numpy as np
In [27]: np.sin(np.pi)
Out[27]: 1.2246467991473532e-16
</code></pre>
<p>The expected result is 0. Why is <code>numpy</code> so inaccurate there?</p>
<p>To some extent, I feel uncertain whether it is acceptable to regard the calculated result as inaccurate: its absolute error comes within one machine epsilon (for binary64), whereas the relative error is <code>+inf</code> -- which is why I feel somewhat confused. Any idea?</p>
<p>[Edit] I fully understand that floating-point calculation can be inaccurate. But most of the floating-point libraries can manage to deliver results within a small range of error. Here, the relative error is +inf, which seems unacceptable. Just imagine that we want to calculate </p>
<pre><code>1/(1e-16 + sin(pi))
</code></pre>
<p>The results would be disastrously wrong if we use numpy's implementation. </p>
| -1
|
2016-09-07T18:31:38Z
| 39,377,219
|
<p>The main problem here is that <code>np.pi</code> is not exactly π, it's a finite binary floating point number that is close to the true irrational real number π but still off by ~1e-16. <code>np.sin(np.pi)</code> is actually returning a value closer to the true infinite-precision result for <code>sin(np.pi)</code> (i.e. the ideal mathematical <code>sin()</code> function being given the approximated <code>np.pi</code> value) than 0 would be.</p>
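<p>This can be checked numerically; the stdlib <code>math</code> module exhibits the same behavior as numpy here:</p>
<pre><code>import math

# math.pi is the closest float64 to π, off from the true value by roughly 1.2e-16.
# sin() applied to that float is therefore ≈ π - math.pi, a tiny positive number, not 0.
r = math.sin(math.pi)
assert 0 < r < 2.3e-16
</code></pre>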
| 6
|
2016-09-07T18:53:20Z
|
[
"python",
"numpy",
"floating-point"
] |
Creating a NumPy array directly from __array_interface__
| 39,376,892
|
<p>Suppose I have an <code>__array_interface__</code> dictionary and I would like to create a numpy view of this data from the dictionary itself. For example:</p>
<pre><code>buff = {'shape': (3, 3), 'data': (140546686381536, False), 'typestr': '<f8'}
view = np.array(buff, copy=False)
</code></pre>
<p>However, this does not work as <code>np.array</code> searches for either the buffer or array interface as attributes. The simple workaround could be the following:</p>
<pre><code>class numpy_holder(object):
pass
holder = numpy_holder()
holder.__array_interface__ = buff
view = np.array(holder, copy=False)
</code></pre>
<p>This seems a bit roundabout. Am I missing a straightforward way to do this?</p>
| 2
|
2016-09-07T18:31:40Z
| 39,377,877
|
<p>correction - with the right 'data' value your <code>holder</code> works in <code>np.array</code>:</p>
<p><code>np.array</code> is definitely not going to work since it expects an iterable, something like a list of lists, and parses the individual values.</p>
<p>There is a low level constructor, <code>np.ndarray</code> that takes a buffer parameter. And a <code>np.frombuffer</code>.</p>
<p>But my impression is that <code>x.__array_interface__['data'][0]</code> is an integer representation of the data buffer location, but not directly a pointer to the buffer. I've only used it to verify that a view shares the same data buffer, not to construct anything from it.</p>
<p><code>np.lib.stride_tricks.as_strided</code> uses <code>__array_interface__</code> for default stride and shape data, but gets the data from an array, not the <code>__array_interface__</code> dictionary.</p>
<p>===========</p>
<p>An example of <code>ndarray</code> with a <code>.data</code> attribute:</p>
<pre><code>In [303]: res
Out[303]:
array([[ 0, 20, 50, 30],
[ 0, 50, 50, 0],
[ 0, 0, 75, 25]])
In [304]: res.__array_interface__
Out[304]:
{'data': (178919136, False),
'descr': [('', '<i4')],
'shape': (3, 4),
'strides': None,
'typestr': '<i4',
'version': 3}
In [305]: res.data
Out[305]: <memory at 0xb13ef72c>
In [306]: np.ndarray(buffer=res.data, shape=(4,3),dtype=int)
Out[306]:
array([[ 0, 20, 50],
[30, 0, 50],
[50, 0, 0],
[ 0, 75, 25]])
In [324]: np.frombuffer(res.data,dtype=int)
Out[324]: array([ 0, 20, 50, 30, 0, 50, 50, 0, 0, 0, 75, 25])
</code></pre>
<p>Both of these arrays are views.</p>
<p>OK, with your <code>holder</code> class, I can make the same thing, using this <code>res.data</code> as the data buffer. Your class creates an <code>object exposing the array interface</code>. </p>
<pre><code>In [379]: holder=numpy_holder()
In [380]: buff={'data':res.data, 'shape':(4,3), 'typestr':'<i4'}
In [381]: holder.__array_interface__ = buff
In [382]: np.array(holder, copy=False)
Out[382]:
array([[ 0, 20, 50],
[30, 0, 50],
[50, 0, 0],
[ 0, 75, 25]])
</code></pre>
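<p>For completeness, a hedged sketch of going the other way, from the raw integer address in <code>'data'</code> back to an array: wrapping the address in a ctypes array gives <code>np.frombuffer</code> an object with a real buffer interface. The original array must stay alive, since the view borrows its memory.</p>
<pre><code>import ctypes
import numpy as np

a = np.arange(12, dtype=np.int64).reshape(3, 4)
addr, read_only = a.__array_interface__['data']

# Wrap the raw address in a ctypes array of the right size; ctypes objects
# expose the buffer protocol, so np.frombuffer can build a sharing view.
buf = (ctypes.c_int64 * a.size).from_address(addr)
view = np.frombuffer(buf, dtype=np.int64).reshape(a.shape)

a[0, 0] = 99  # mutating `a` is visible through the view
</code></pre>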
| 2
|
2016-09-07T19:41:49Z
|
[
"python",
"numpy",
"pybinding"
] |
NaNs suddenly appearing for sklearn KFolds
| 39,376,967
|
<p>I'm trying to run cross validation on my data set. The data appears to be clean, but then when I try to run it, some of my data gets replaced by NaNs. I'm not sure why. Has anybody seen this before?</p>
<pre><code>y, X = np.ravel(df_test['labels']), df_test[['variation', 'length', 'tempo']]
X_train, X_test, y_train, y_test = cv.train_test_split(X,y,test_size=.30, random_state=4444)
</code></pre>
<p>This is what my X data looked like before KFolds:
<code>
variation length tempo
0 0.005144 1183.148118 135.999178
1 0.002595 720.165442 117.453835
2 0.008146 397.500952 112.347147
3 0.005367 1109.819501 172.265625
4 0.001631 509.931973 135.999178
5 0.001620 560.365714 151.999081
6 0.002513 763.377778 107.666016
7 0.009262 502.083628 99.384014
8 0.000610 500.017052 143.554688
9 0.000733 269.001723 117.453835
</code></p>
<p>My Y data looks like this:
<code>
array([ True, False, False, True, True, True, True, False, True, False], dtype=bool)
</code></p>
<p>Now when I try to do the cross val:</p>
<pre><code>kf = KFold(X_train.shape[0], n_folds=4, shuffle=True)
for train_index, val_index in kf:
cv_train_x = X_train.ix[train_index]
cv_val_x = X_train.ix[val_index]
cv_train_y = y_train[train_index]
cv_val_y = y_train[val_index]
print cv_train_x
logreg = LogisticRegression(C = .01)
logreg.fit(cv_train_x, cv_train_y)
pred = logreg.predict(cv_val_x)
print accuracy_score(cv_val_y, pred)
</code></pre>
<p>When I try to run this, I error out with the below error, so I added the print statement.<br>
<code>ValueError: Input contains NaN, infinity or a value too large for dtype('float64').</code></p>
<p>In my print statement, this is what it printed, some data became NaNs.
<code>
variation length tempo
0 NaN NaN NaN
1 NaN NaN NaN
2 0.008146 397.500952 112.347147
3 0.005367 1109.819501 172.265625
4 0.001631 509.931973 135.999178
</code></p>
<p>I'm sure I'm doing something wrong, any ideas? As always, thank you so much!</p>
| 0
|
2016-09-07T18:36:31Z
| 39,379,816
|
<p>To solve use <code>.iloc</code> instead of <code>.ix</code> to index your pandas dataframe</p>
<pre><code>for train_index, val_index in kf:
cv_train_x = X_train.iloc[train_index]
cv_val_x = X_train.iloc[val_index]
cv_train_y = y_train[train_index]
cv_val_y = y_train[val_index]
print cv_train_x
logreg = LogisticRegression(C = .01)
logreg.fit(cv_train_x, cv_train_y)
pred = logreg.predict(cv_val_x)
print accuracy_score(cv_val_y, pred)
</code></pre>
<p>Indexing with <code>ix</code> is usually equivalent to using <code>.loc</code> which is <strong>label based</strong> indexing, not <strong>index based</strong>. While <code>.loc</code> works on <code>X</code> which has a nice integer based indexing/labeling, after cv split this rule is no longer there, you get something like:</p>
<pre><code> length tempo variation
4 509.931973 135.999178 0.001631
2 397.500952 112.347147 0.008146
7 502.083628 99.384014 0.009262
6 763.377778 107.666016 0.002513
5 560.365714 151.999081 0.001620
3 1109.819501 172.265625 0.005367
9 269.001723 117.453835 0.000733
</code></pre>
<p>and now you <strong>no longer have</strong> label 0 or 1, so if you do</p>
<pre><code>X_train.loc[1]
</code></pre>
<p>you will get an Exception</p>
<pre><code>KeyError: 'the label [1] is not in the [index]'
</code></pre>
<p>However, pandas has a <strong>silent error</strong> if you request multiple labels, where <strong>at least one exists</strong>. Thus if you do</p>
<pre><code> X_train.loc[[1,4]]
</code></pre>
<p>you will get</p>
<pre><code> length tempo variation
1 NaN NaN NaN
4 509.931973 135.999178 0.001631
</code></pre>
<p>As expected - 1 returns NaNs (since it was not found) and 4 represents actual row - since it is inside X_train. In order to solve it - just switch to <code>.iloc</code> or manually rebuild an index of X_train.</p>
| 1
|
2016-09-07T22:15:41Z
|
[
"python",
"machine-learning",
"scikit-learn",
"cross-validation"
] |
Web Scraping Javascript Using Python
| 39,376,972
|
<p>I am used to using BeautifulSoup to scrape a website, however this website is different. Upon soup.prettify() I get back Javascript code, lots of stuff. I want to scrape this website for the data on the actual website (company name, telephone number etc). Is there a way of scraping these scripts such as Main.js to retrieve the data that is displayed on the website to me? </p>
<p>Clear version:</p>
<p>Code is: </p>
<pre><code><script src="/docs/Main.js" type="text/javascript" language="javascript"></script>
</code></pre>
<p>This holds the text that is on the website. I would like to scrape this text however it is populated using JS not HTML (which I used to use BeautifulSoup for).</p>
| -3
|
2016-09-07T18:36:44Z
| 39,377,235
|
<p>You're asking if you can scrape text generated at runtime by Javascript. The answer is sort-of.</p>
<p>You'd need to run some kind of <a href="https://github.com/dhamaniasad/HeadlessBrowsers" rel="nofollow">headless browser</a>, like PhantomJS, in order to let the Javascript execute and populate the page. You'd then need to feed the HTML that the headless browser generates to BeautifulSoup in order to parse it.</p>
| 1
|
2016-09-07T18:54:08Z
|
[
"python",
"python-2.7"
] |
Compute all ways to bin a series of integers into N bins, where each bin only contains contiguous numbers
| 39,376,987
|
<p>I want to find all possible ways to map a series of (contiguous) integers M = {0,1,2,...,m} to another series of integers N = {0,1,2,...,n} where m > n, <em>subject to the constraint that only contiguous integers in M map to the same integer in N.</em> </p>
<p>The following piece of python code comes close (<code>start</code> corresponds to the first element in M, <code>stop</code>-1 corresponds to the last element in M, and <code>nbins</code> corresponds to |N|):</p>
<pre><code>import itertools
def find_bins(start, stop, nbins):
if (nbins > 1):
return list(list(itertools.product([range(start, ii)], find_bins(ii, stop, nbins-1))) for ii in range(start+1, stop-nbins+2))
else:
return [range(start, stop)]
</code></pre>
<p>E.g </p>
<pre><code>In [20]: find_bins(start=0, stop=5, nbins=3)
Out[20]:
[[([0], [([1], [2, 3, 4])]),
([0], [([1, 2], [3, 4])]),
([0], [([1, 2, 3], [4])])],
[([0, 1], [([2], [3, 4])]),
([0, 1], [([2, 3], [4])])],
[([0, 1, 2], [([3], [4])])]]
</code></pre>
<p>However, as you can see the output is nested, and for the life of me, I can't find a way to properly amend the code without breaking it.</p>
<p>The desired output would look like this:</p>
<pre><code>In [20]: find_bins(start=0, stop=5, nbins=3)
Out[20]:
[[(0), (1), (2, 3, 4)],
[(0), (1, 2), (3, 4)],
[(0), (1, 2, 3), (4)],
[(0, 1), (2), (3, 4)],
[(0, 1), (2, 3), (4)],
[(0, 1, 2), (3), (4)]]
</code></pre>
| 1
|
2016-09-07T18:37:56Z
| 39,378,085
|
<p>This does what I want; I will gladly accept simpler, more elegant solutions:</p>
<pre><code>def _split(start, stop, nbins):
if (nbins > 1):
out = []
for ii in range(start+1, stop-nbins+2):
iterator = itertools.product([range(start, ii)], _split(ii, stop, nbins-1))
for item in iterator:
out.append(item)
return out
else:
return [range(start, stop)]
def _unpack(nested):
unpacked = []
if isinstance(nested, (list, tuple)):
for item in nested:
if isinstance(item, tuple):
for subitem in item:
unpacked.extend(_unpack(subitem))
elif isinstance(item, list):
unpacked.append([_unpack(subitem) for subitem in item])
elif isinstance(item, int):
unpacked.append([item])
return unpacked
else: # integer
return nested
def find_nbins(start, stop, nbins):
nested = _split(start, stop, nbins)
unpacked = [_unpack(item) for item in nested]
return unpacked
</code></pre>
| 0
|
2016-09-07T19:58:49Z
|
[
"python",
"combinatorics"
] |
Compute all ways to bin a series of integers into N bins, where each bin only contains contiguous numbers
| 39,376,987
|
<p>I want to find all possible ways to map a series of (contiguous) integers M = {0,1,2,...,m} to another series of integers N = {0,1,2,...,n} where m > n, <em>subject to the constraint that only contiguous integers in M map to the same integer in N.</em> </p>
<p>The following piece of python code comes close (<code>start</code> corresponds to the first element in M, <code>stop</code>-1 corresponds to the last element in M, and <code>nbins</code> corresponds to |N|):</p>
<pre><code>import itertools
def find_bins(start, stop, nbins):
if (nbins > 1):
return list(list(itertools.product([range(start, ii)], find_bins(ii, stop, nbins-1))) for ii in range(start+1, stop-nbins+2))
else:
return [range(start, stop)]
</code></pre>
<p>E.g </p>
<pre><code>In [20]: find_bins(start=0, stop=5, nbins=3)
Out[20]:
[[([0], [([1], [2, 3, 4])]),
([0], [([1, 2], [3, 4])]),
([0], [([1, 2, 3], [4])])],
[([0, 1], [([2], [3, 4])]),
([0, 1], [([2, 3], [4])])],
[([0, 1, 2], [([3], [4])])]]
</code></pre>
<p>However, as you can see the output is nested, and for the life of me, I can't find a way to properly amend the code without breaking it.</p>
<p>The desired output would look like this:</p>
<pre><code>In [20]: find_bins(start=0, stop=5, nbins=3)
Out[20]:
[[(0), (1), (2, 3, 4)],
[(0), (1, 2), (3, 4)],
[(0), (1, 2, 3), (4)],
[(0, 1), (2), (3, 4)],
[(0, 1), (2, 3), (4)],
[(0, 1, 2), (3), (4)]]
</code></pre>
| 1
|
2016-09-07T18:37:56Z
| 39,378,895
|
<p>I suggest a different approach: a partitioning into <code>n</code> non-empty bins is uniquely determined by the <code>n-1</code> distinct indices marking the boundaries between the bins, where the first marker is after the first element, and the final marker before the last element. <code>itertools.combinations()</code> can be used directly to generate all such index tuples, and then it's just a matter of using them as slice indices. Like so:</p>
<pre><code>def find_nbins(start, stop, nbins):
from itertools import combinations
base = range(start, stop)
nbase = len(base)
for ixs in combinations(range(1, stop - start), nbins - 1):
yield [tuple(base[lo: hi])
for lo, hi in zip((0,) + ixs, ixs + (nbase,))]
</code></pre>
<p>Then, e.g.,</p>
<pre><code>for x in find_nbins(0, 5, 3):
print(x)
</code></pre>
<p>displays:</p>
<pre><code>[(0,), (1,), (2, 3, 4)]
[(0,), (1, 2), (3, 4)]
[(0,), (1, 2, 3), (4,)]
[(0, 1), (2,), (3, 4)]
[(0, 1), (2, 3), (4,)]
[(0, 1, 2), (3,), (4,)]
</code></pre>
<h2>EDIT: Making it into 2 problems</h2>
<p>Just noting that there's a more general underlying problem here: generating the ways to break an arbitrary sequence into <code>n</code> non-empty bins. Then the specific question here is applying that to the sequence <code>range(start, stop)</code>. I believe viewing it that way makes the code easier to understand, so here it is:</p>
<pre><code>def gbins(seq, nbins):
from itertools import combinations
base = tuple(seq)
nbase = len(base)
for ixs in combinations(range(1, nbase), nbins - 1):
yield [base[lo: hi]
for lo, hi in zip((0,) + ixs, ixs + (nbase,))]
def find_nbins(start, stop, nbins):
return gbins(range(start, stop), nbins)
</code></pre>
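<p>A quick property-based sanity check of the approach above (the function is restated here so the snippet is self-contained): the number of ways to split m items into n non-empty contiguous bins is C(m-1, n-1), since each split is determined by choosing n-1 boundary positions.</p>
<pre><code>from itertools import combinations
from math import comb  # Python 3.8+

def gbins(seq, nbins):
    base = tuple(seq)
    nbase = len(base)
    for ixs in combinations(range(1, nbase), nbins - 1):
        yield [base[lo:hi] for lo, hi in zip((0,) + ixs, ixs + (nbase,))]

# m = 5 items split into n = 3 non-empty contiguous bins: C(4, 2) = 6 ways
parts = list(gbins(range(5), 3))
</code></pre>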
| 1
|
2016-09-07T20:58:50Z
|
[
"python",
"combinatorics"
] |
Why no colors in bokeh plot's legend
| 39,376,990
|
<p>In Bokeh 0.12.2, I was able to make a stacked bar chart with various hover tooltips using plotting and VBars. I also enabled a legend for the plot. However, my vbars are colored and the colors for each vbar (each stack) are not appearing in the legend. Only the names for the stack in the legend are appearing. Is this not an implemented feature yet or a bug maybe? Or maybe I'm missing something?</p>
<p><a href="http://i.stack.imgur.com/MJCzC.png" rel="nofollow">what my chart looks like</a></p>
| 0
|
2016-09-07T18:38:11Z
| 39,708,656
|
<p>This was due to a bug in the bokeh source code which is being fixed if not fixed already.</p>
| 0
|
2016-09-26T17:15:07Z
|
[
"python",
"colors",
"legend",
"bokeh"
] |
Pandas - Creating multiple columns similar to pd.get_dummies
| 39,377,164
|
<p>Let's say my data looks like this:</p>
<pre><code>df = pd.DataFrame({'color': ['red', 'blue', 'green', 'red', 'blue', 'blue'], 'line': ['sunday', 'sunday', 'monday', 'monday', 'monday', 'tuesday'],
'group': ['1', '1', '2', '1', '1', '1'], 'value': ['a', 'b', 'a', 'c', 'a', 'b']})
color group line value
0 red 1 sunday a
1 blue 1 sunday b
2 green 2 monday a
3 red 1 monday c
4 blue 1 monday a
5 blue 1 tuesday b
</code></pre>
<p>Essentially, what I want is to get a list of lines for each color. For instance, I want the color red to show each line and value associated with it in its own column. The trick is that I also want to show other lines associated with colors from the same group. The corresponding values for those would be 'not eligible'. Thus I want my output to look like this:</p>
<pre><code> color line_1 line_1_value line_2 line_2_value line_3 line_3_value
0 red sunday a monday c tuesday not eligible
1 blue sunday b monday a tuesday b
2 green monday c
</code></pre>
<p>There are some ~50,000 unique 'colors' that I need to do this for. I'm sure it's something relatively simple, but I don't possess the knowledge or skillset yet to figure it out. Any help would be appreciated! </p>
| 4
|
2016-09-07T18:49:40Z
| 39,378,883
|
<p>Drop the column you don't need and add a column to get a unique subindex per color:</p>
<pre><code>df = df.drop('group', axis=1)
df['index_by_color'] = df.groupby('color').cumcount()
color line value index_by_color
0 red sunday a 0
1 blue sunday b 0
2 green monday a 0
3 red monday c 1
4 blue monday a 1
5 blue tuesday b 2
</code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table</code></a> to get the orientation of data you want:</p>
<pre><code>df.pivot_table(index='color', columns=['index_by_color'], aggfunc=lambda x:x.iloc[0])
line value
index_by_color 0 1 2 0 1 2
color
blue sunday monday tuesday b a b
green monday None None a None None
red sunday monday None a c None
</code></pre>
<p>The point of <code>aggfunc=lambda x:x.iloc[0]</code> is to reduce each non-numeric pivoted group to a single value; taking the first element of the subframe is enough given the uniqueness of your data structure.</p>
<p>Reorder the column hierarchical index:</p>
<pre><code> res = res.sort_index(axis=1, level=1)
line value line value line value
index_by_color 0 0 1 1 2 2
color
blue sunday b monday a tuesday b
green monday a None None None None
red sunday a monday c None None
</code></pre>
<p>The remaining parts are trivial cleanup depending on the exact presentation you need, like <code>cumcount()+1</code> if you need to start numbering at 1 instead of 0, write/flatten the column names as you want <code>res.columns =['_'.join([l0, str(l1)]) for l0,l1 in res.columns]</code> or equivalent, etc.</p>
| 0
|
2016-09-07T20:57:35Z
|
[
"python",
"pandas",
"dataframe"
] |
Pandas - Creating multiple columns similar to pd.get_dummies
| 39,377,164
|
<p>Let's say my data looks like this:</p>
<pre><code>df = pd.DataFrame({'color': ['red', 'blue', 'green', 'red', 'blue', 'blue'], 'line': ['sunday', 'sunday', 'monday', 'monday', 'monday', 'tuesday'],
'group': ['1', '1', '2', '1', '1', '1'], 'value': ['a', 'b', 'a', 'c', 'a', 'b']})
color group line value
0 red 1 sunday a
1 blue 1 sunday b
2 green 2 monday a
3 red 1 monday c
4 blue 1 monday a
5 blue 1 tuesday b
</code></pre>
<p>Essentially, what I want is to get a list of lines for each color. For instance, I want the color red to show each line and value associated with it in its own column. The trick is that I also want to show other lines associated with colors from the same group. The corresponding values for those would be 'not eligible'. Thus I want my output to look like this:</p>
<pre><code> color line_1 line_1_value line_2 line_2_value line_3 line_3_value
0 red sunday a monday c tuesday not eligible
1 blue sunday b monday a tuesday b
2 green monday c
</code></pre>
<p>There are some ~50,000 unique 'colors' that I need to do this for. I'm sure it's something relatively simple, but I don't possess the knowledge or skillset yet to figure it out. Any help would be appreciated! </p>
| 4
|
2016-09-07T18:49:40Z
| 39,382,285
|
<p>Consider a merge on two pivoted dfs with column name handling:</p>
<pre><code>df['count'] = df.groupby('color').cumcount() + 1
pvt1 = df.pivot(columns='count', index='color', values='line').reset_index().fillna('')
pvt1.columns = ['color'] + ['line_'+str(c) for c in pvt1.columns[1:]]
pvt2 = df.pivot(columns='count', index='color', values='value').reset_index().fillna('')
pvt2.columns = ['color'] + ['line_'+str(c)+'_value' for c in pvt2.columns[1:]]
pvtdf = pd.merge(pvt1, pvt2, on=['color'])
pvtdf = pvtdf[[c for c in sorted(pvtdf.columns)]]
# color line_1 line_1_value line_2 line_2_value line_3 line_3_value
# 0 blue sunday b monday a tuesday b
# 1 green monday a
# 2 red sunday a monday c
</code></pre>
| 0
|
2016-09-08T03:46:02Z
|
[
"python",
"pandas",
"dataframe"
] |
Applying a custom function on a Pandas series using groupby and pd.isnull
| 39,377,195
|
<p>I have a sample dataframe which generically looks like this:</p>
<pre><code>df = pd.Dataframe({'Class': [1, 2, 3, 2, 1, 2, 3, 2],
'Sex': [1, 0, 0, 0, 1, 1, 0, 1],
'Age': [15, 24, 13, 28, 29, NaN, 34, 27]})
</code></pre>
<p>Which displays as:</p>
<pre><code> Age Class Sex
0 15.0 1 1
1 24.0 2 0
2 13.0 2 0
3 28.0 2 0
4 29.0 1 1
5 NaN 2 1
6 34.0 1 0
7 27.0 2 1
</code></pre>
<p>What I'd like to do is fill in each of the NaN values in the 'Age' series with the median value for all entries that have their 'Class' and 'Sex' grouping. </p>
<p>So for example, when I access these values like so:</p>
<pre><code>df.groupby(['Class', 'Sex'])['Age'].median()
</code></pre>
<p>and get:</p>
<pre><code> Class Sex
1 0 34.0
1 22.0
2 0 24.0
1 27.0
</code></pre>
<p>I'd like to write a function that automatically fills the extant NaN value with 27 since that is the median of the entries that have a Class value of 2 and a Sex value of 1.</p>
<p>Right now I have:</p>
<pre><code>df['Age'] = df.groupby(['Class', 'Sex'])['Age'].apply(lambda x: x.median() if pd.isnull(x) else x)
</code></pre>
<p>and am getting the following error:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>However, a very similar syntax was used in the answers for <a href="http://stackoverflow.com/questions/14714181/conditional-logic-on-pandas-dataframe">this question</a> and <a href="http://stackoverflow.com/questions/34861086/replacing-null-values-in-a-pandas-dataframe-using-applymap">this one</a>, so I'm not quite sure why mine doesn't work, particularly, the latter also uses the isnull method in its lambda function so it's not clear to me why mine doesn't work but that one does.</p>
<p>I've also tried using the fillna method like so:</p>
<pre><code>df['Age'] = df['Age'].fillna(df.groupby(['Class', 'Sex'])['Age'].median())
</code></pre>
<p>But got the following error message:</p>
<pre><code>ValueError: Buffer dtype mismatch, expected 'Python object' but got 'long long'
</code></pre>
<p>I'm open to other methods which get the same value, but prefer something that relies entirely on Pandas methods without having to use a separate for-loop and passing it into the 'Apply' method to make it as concise as possible.</p>
<p>Thank you.</p>
| 1
|
2016-09-07T18:51:42Z
| 39,377,312
|
<p>One option would be to use <code>transform</code> to replace null values with median for the <code>Age</code> column:</p>
<pre><code>df['Age'] = df.groupby(['Class', 'Sex']).Age.transform(lambda col: col.where(col.notnull(), col.median()))
df
# Age Class Sex
#0 15.0 1 1
#1 24.0 2 0
#2 13.0 3 0
#3 28.0 2 0
#4 29.0 1 1
#5 27.0 2 1
#6 34.0 3 0
#7 27.0 2 1
</code></pre>
<p>Alternatively use <code>replace</code> method instead of <code>where</code> also works:</p>
<pre><code>df['Age'] = df.groupby(['Class', 'Sex']).Age.transform(lambda col: col.replace(np.nan, col.median()))
</code></pre>
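<p>A closely related pattern (an alternative sketch, not claimed to be from the answer above) is to pass the aligned group medians straight to <code>fillna</code>; <code>transform('median')</code> returns a Series aligned with the original frame, so only the NaN slots get replaced:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'Class': [1, 2, 3, 2, 1, 2, 3, 2],
                   'Sex':   [1, 0, 0, 0, 1, 1, 0, 1],
                   'Age':   [15, 24, 13, 28, 29, np.nan, 34, 27]})

# group (Class=2, Sex=1) has ages [NaN, 27], so the NaN is filled with 27
df['Age'] = df['Age'].fillna(df.groupby(['Class', 'Sex'])['Age'].transform('median'))
</code></pre>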
| 2
|
2016-09-07T18:58:22Z
|
[
"python",
"pandas"
] |
csv file to numpy array via Python
| 39,377,229
|
<p>I have a csv file of the following format that I am trying to normalise. The numbers represent the counts for associated strings. The file contains close to 100K entries.</p>
<pre><code>159028,CASSVDGSYEQYFGPG
86832,CASSLQLYFGEG
74720,CASSQDQDTQYFGPG
71701,CASSRVGSDYTFGSG
69360,CARNVTPPKSYAVFFGKG
52458,CAAEQFFGPG
51406,CASSSGDQDTQYFGPG
50305,CASQLYFGEG
38745,CAYFGPG
32565,CASSPDWGENTLYFGAG
</code></pre>
<p>I have tried to create a dictionary using the following </p>
<pre><code>import csv
input = csv.DictReader(open("data.csv"))
for row in input:
print(row)
</code></pre>
<p>Result </p>
<pre><code>{'159028': '86832', 'CASSVDGSYEQYFGPG': 'CASSLQLYFGEG'}
{'159028': '74720', 'CASSVDGSYEQYFGPG': 'CASSQDQDTQYFGPG'}
{'159028': '71701', 'CASSVDGSYEQYFGPG': 'CASSRVGSDYTFGSG'}
{'159028': '69360', 'CASSVDGSYEQYFGPG': 'CARNVTPPKSYAVFFGKG'}
{'159028': '52458', 'CASSVDGSYEQYFGPG': 'CAAEQFFGPG'}
{'159028': '51406', 'CASSVDGSYEQYFGPG': 'CASSSGDQDTQYFGPG'}
{'159028': '50305', 'CASSVDGSYEQYFGPG': 'CASQLYFGEG'}
{'159028': '38745', 'CASSVDGSYEQYFGPG': 'CAYFGPG'}
{'159028': '32565', 'CASSVDGSYEQYFGPG': 'CASSPDWGENTLYFGAG'}
...
</code></pre>
<p>Instead of </p>
<pre><code> {'CASSVDGSYEQYFGPG': 159028}
{'CASSLQLYFGEG': '86832'}
{'CASSQDQDTQYFGPG': '74720'}
{'CASSRVGSDYTFGSG': '71701'}
{'CARNVTPPKSYAVFFGKG': '69360'}
{'CAAEQFFGPG': '52458'}
{'CASSSGDQDTQYFGPG': '51406'}
{'CASQLYFGEG': '50305'}
{'CAYFGPG': '38745'}
{'CASSPDWGENTLYFGAG': '32565'}
...
</code></pre>
<p>I also tried converting the csv file into a numpy array, but I get the following:</p>
<pre><code>>>>from numpy import genfromtxt
>>>data = genfromtxt('data.csv', delimiter=',')
>>>data
array([[ 1.59028000e+05, nan],
[ 8.68320000e+04, nan],
[ 7.47200000e+04, nan],
...,
[ 1.00000000e+00, nan],
[ 1.00000000e+00, nan],
[ 1.00000000e+00, nan]])
</code></pre>
<p>There may be other ways of normalising and otherwise processing this data via Python.</p>
| 1
|
2016-09-07T18:53:44Z
| 39,377,421
|
<p>Use Numpy loadtxt to import, then use a dict comprehension if you need it as a dict.</p>
<pre><code>import numpy as np
arr = np.loadtxt('data.csv', dtype=str, delimiter=",")
b = dict([(y, x) for (x, y) in arr])
</code></pre>
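<p>Since the stated goal is normalisation, here is one hedged sketch of turning the counts into frequencies once the array is loaded (using a small inline array in place of the file):</p>
<pre><code>import numpy as np

arr = np.array([['159028', 'CASSVDGSYEQYFGPG'],
                ['86832',  'CASSLQLYFGEG'],
                ['74720',  'CASSQDQDTQYFGPG']])

# first column holds the counts as strings; cast before dividing by the total
counts = arr[:, 0].astype(float)
freqs = dict(zip(arr[:, 1], counts / counts.sum()))
</code></pre>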
| 1
|
2016-09-07T19:07:32Z
|
[
"python",
"list",
"csv",
"numpy",
"dictionary"
] |
csv file to numpy array via Python
| 39,377,229
|
<p>I have a csv file of the following format that I am trying to normalise. The numbers represent the counts for associated strings. The file contains close to 100K entries.</p>
<pre><code>159028,CASSVDGSYEQYFGPG
86832,CASSLQLYFGEG
74720,CASSQDQDTQYFGPG
71701,CASSRVGSDYTFGSG
69360,CARNVTPPKSYAVFFGKG
52458,CAAEQFFGPG
51406,CASSSGDQDTQYFGPG
50305,CASQLYFGEG
38745,CAYFGPG
32565,CASSPDWGENTLYFGAG
</code></pre>
<p>I have tried to create a dictionary using the following </p>
<pre><code>import csv
input = csv.DictReader(open("data.csv"))
for row in input:
print(row)
</code></pre>
<p>Result </p>
<pre><code>{'159028': '86832', 'CASSVDGSYEQYFGPG': 'CASSLQLYFGEG'}
{'159028': '74720', 'CASSVDGSYEQYFGPG': 'CASSQDQDTQYFGPG'}
{'159028': '71701', 'CASSVDGSYEQYFGPG': 'CASSRVGSDYTFGSG'}
{'159028': '69360', 'CASSVDGSYEQYFGPG': 'CARNVTPPKSYAVFFGKG'}
{'159028': '52458', 'CASSVDGSYEQYFGPG': 'CAAEQFFGPG'}
{'159028': '51406', 'CASSVDGSYEQYFGPG': 'CASSSGDQDTQYFGPG'}
{'159028': '50305', 'CASSVDGSYEQYFGPG': 'CASQLYFGEG'}
{'159028': '38745', 'CASSVDGSYEQYFGPG': 'CAYFGPG'}
{'159028': '32565', 'CASSVDGSYEQYFGPG': 'CASSPDWGENTLYFGAG'}
...
</code></pre>
<p>Instead of </p>
<pre><code> {'CASSVDGSYEQYFGPG': 159028}
{'CASSLQLYFGEG': '86832'}
{'CASSQDQDTQYFGPG': '74720'}
{'CASSRVGSDYTFGSG': '71701'}
{'CARNVTPPKSYAVFFGKG': '69360'}
{'CAAEQFFGPG': '52458'}
{'CASSSGDQDTQYFGPG': '51406'}
{'CASQLYFGEG': '50305'}
{'CAYFGPG': '38745'}
{'CASSPDWGENTLYFGAG': '32565'}
...
</code></pre>
<p>I also tried converting the csv file into a numpy array, but I get the following:</p>
<pre><code>>>>from numpy import genfromtxt
>>>data = genfromtxt('data.csv', delimiter=',')
>>>data
array([[ 1.59028000e+05, nan],
[ 8.68320000e+04, nan],
[ 7.47200000e+04, nan],
...,
[ 1.00000000e+00, nan],
[ 1.00000000e+00, nan],
[ 1.00000000e+00, nan]])
</code></pre>
<p>There may be other ways of normalising and otherwise processing this data via Python.</p>
| 1
|
2016-09-07T18:53:44Z
| 39,377,540
|
<p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow"><code>genfromtxt</code></a> has many arguments, and it can take a while to learn the right incantation to read any given file.</p>
<p>Here's how you can do it with your file. The array <code>data</code> returned by <code>genfromtxt</code> is a one-dimensional <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html" rel="nofollow">structured array</a> with two fields, called <code>count</code> and <code>string</code>:</p>
<pre><code>In [11]: data = np.genfromtxt("counts_strings.csv", delimiter=',', names=['count', 'string'], dtype=None)
In [12]: data['count']
Out[12]:
array([159028, 86832, 74720, 71701, 69360, 52458, 51406, 50305,
38745, 32565])
In [13]: data['string']
Out[13]:
array([b'CASSVDGSYEQYFGPG', b'CASSLQLYFGEG', b'CASSQDQDTQYFGPG',
b'CASSRVGSDYTFGSG', b'CARNVTPPKSYAVFFGKG', b'CAAEQFFGPG',
b'CASSSGDQDTQYFGPG', b'CASQLYFGEG', b'CAYFGPG', b'CASSPDWGENTLYFGAG'],
dtype='|S18')
In [14]: data[0]
Out[14]: (159028, b'CASSVDGSYEQYFGPG')
</code></pre>
| 0
|
2016-09-07T19:16:50Z
|
[
"python",
"list",
"csv",
"numpy",
"dictionary"
] |
Storing a service account with Flask-SQLAlchemy
| 39,377,249
|
<p>I'm writing a small HipChat plugin using Flask and Flask-SQLAlchemy with local database. I want the admin to be able to setup a service account for an external service this is meant to integrate with.</p>
<p>Because the username/password of the service accounts needs to be stored so they can be used by the integration to make API calls I can't use non-reversible hashing methods for storing the password.</p>
<p>Are there recommendations for how to approach this so that the passwords or the database can be better secured?</p>
| 2
|
2016-09-07T18:54:54Z
| 39,377,369
|
<p>You can encrypt your data before storing them in your database. <a href="https://www.dlitz.net/software/pycrypto/" rel="nofollow"><code>pycrypto</code></a> is one of the libraries that you can utilize.</p>
<blockquote>
<p>It is easy to encrypt text using DES/ECB with pycrypto. The key
'01234567' is 8 bytes and the text's length needs to be a multiple of
8 bytes. We picked 'abcdefgh' in this example.</p>
</blockquote>
<pre><code>>>> from Crypto.Cipher import DES
>>> des = DES.new('01234567', DES.MODE_ECB)
>>> text = 'abcdefgh'
>>> cipher_text = des.encrypt(text)
>>> cipher_text
'\xec\xc2\x9e\xd9] a\xd0'
>>> des.decrypt(cipher_text)
'abcdefgh'
</code></pre>
<p><a href="http://www.laurentluce.com/posts/python-and-cryptography-with-pycrypto/#a_2" rel="nofollow">here</a> is a short article about <code>pycrypto</code>.</p>
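<p>The quoted note about the text length being a multiple of 8 bytes can be handled with a tiny padding helper. This is a simplistic illustration only: production code should use a proper scheme such as PKCS#7 padding, and a modern cipher/mode rather than DES/ECB.</p>
<pre><code>def pad8(data: bytes) -> bytes:
    # pad with NUL bytes so the length is a multiple of 8
    # (simplistic; real code should use PKCS#7 so padding can be stripped)
    return data + b'\x00' * (-len(data) % 8)
</code></pre>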
| 0
|
2016-09-07T19:02:38Z
|
[
"python",
"security",
"password-encryption"
] |
Kivy layout not rendering as expected
| 39,377,373
|
<p>I am trying to create an interface with the KV lang provided by the Kivy framework. I want to stack two <strong>BoxLayout</strong> widgets on top of each other. When the first layout is rendered it has a default height of 350. I need this to be reduced.</p>
<p>Below is my layout</p>
<pre><code><RootWidget>:
# this is the rule for your root widget, defining it's look and feel.
StackLayout:
height: 350.0
BoxLayout:
id: 'letterBox'
height: 150.0
ActionButton:
width: 15.0
text: '-'
ActionButton:
width: 15.0
text: 'A'
ActionButton:
width: 15.0
text: 'B'
ActionButton:
width: 15.0
text: 'C'
ActionButton:
width: 15.0
text: 'D'
ActionButton:
width: 15.0
text: 'E'
ActionButton:
width: 15.0
text: 'F'
ActionButton:
width: 15.0
text: 'G'
ActionButton:
width: 15.0
text: 'H'
ActionButton:
width: 15.0
text: 'I'
ActionButton:
width: 15.0
text: 'J'
ActionButton:
width: 15.0
text: 'K'
ActionButton:
width: 15.0
text: 'L'
ActionButton:
width: 15.0
text: 'M'
ActionButton:
width: 15.0
text: 'N'
ActionButton:
width: 15.0
text: 'O'
ActionButton:
width: 15.0
text: 'P'
ActionButton:
width: 15.0
text: 'Q'
ActionButton:
width: 15.0
text: 'R'
ActionButton:
width: 15.0
text: 'S'
ActionButton:
width: 15.0
text: 'T'
ActionButton:
width: 15.0
text: 'U'
ActionButton:
width: 15.0
text: 'V'
ActionButton:
width: 15.0
text: 'W'
ActionButton:
width: 15.0
text: 'X'
ActionButton:
width: 15.0
text: 'Y'
ActionButton:
width: 15.0
text: 'Z'
BoxLayout:
id: 'contentBox'
height: 150.0
ActionButton:
text: '4'
</code></pre>
| 0
|
2016-09-07T19:02:59Z
| 39,380,161
|
<p>You could. Try using <code>size_hint</code>, because it makes your app responsive, so the layout looks the same across screen resolutions.</p>
<pre><code> BoxLayout:
orientation: 'vertical'
BoxLayout:
id: 'letterBox'
size_hint: 1, .15
BoxLayout:
id: 'contentBox'
size_hint: 1, .85
</code></pre>
| 1
|
2016-09-07T22:53:02Z
|
[
"python",
"kivy"
] |
Nested SQL queries too slow
| 39,377,422
|
<p>I have the following code, where I execute another query within the loop over the result set from the first query.
Table1 has 35K records, while table2 has 4M.</p>
<pre><code>db = MySQLdb.connect("localhost","root","root","test" )
cursor1 = db.cursor(MySQLdb.cursors.DictCursor)
cursor2 = db.cursor(MySQLdb.cursors.DictCursor)
sql = 'select * from table1 limit 2'
cursor1.execute(sql)
results = cursor1.fetchall()
for row in results:
sql2 = 'select * from table2 where t1 = '+row['t1']
cursor2.execute(sql2)
result2 = cursor2.fetchall()
    for row2 in result2:
#do something
</code></pre>
<p>For each iteration and each query, the process seems to spend time waiting. I tried profiling with cProfile and got the following output:</p>
<pre><code>ncalls tottime percall cumtime percall filename:lineno(function)
3 21.529 7.176 21.529 7.176 connections.py:274(query)
</code></pre>
<p>How to debug this issue? I am quite new to python.</p>
| 3
|
2016-09-07T19:07:34Z
| 39,377,548
|
<p>Nesting queries within a web application is never a good idea. As you've found, it kills performance.</p>
<p>Try using a single query which joins table1 and table2. Then, programmatically keep track of when your parent data changes to handle line breaks or display changes.</p>
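<p>For illustration, here is the single-query idea sketched with the stdlib <code>sqlite3</code> module (the table and column names are assumed from the question; with MySQLdb the query is the same, only the connection setup differs):</p>

```python
import sqlite3

# In-memory stand-in for the two MySQL tables from the question.
db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE table1 (t1 TEXT, name TEXT);
    CREATE TABLE table2 (t1 TEXT, value INTEGER);
    INSERT INTO table1 VALUES ('a', 'first'), ('b', 'second');
    INSERT INTO table2 VALUES ('a', 1), ('a', 2), ('b', 3);
""")

# One round trip instead of 35K nested queries: join, then detect
# when the parent key changes while iterating the flat result set.
rows = db.execute("""
    SELECT table1.t1, table1.name, table2.value
    FROM table1 JOIN table2 ON table1.t1 = table2.t1
    ORDER BY table1.t1, table2.value
""").fetchall()

current = None
for t1, name, value in rows:
    if t1 != current:            # parent row changed
        current = t1
        print('parent:', name)
    print('  child value:', value)
```

<p>The database does the matching once, and the Python side only tracks the boundary between parent rows.</p>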
| -1
|
2016-09-07T19:17:55Z
|
[
"python",
"mysql"
] |
Nested SQL queries too slow
| 39,377,422
|
<p>I have the following code, where I execute another query within the loop over the result set from the first query.
Table1 has 35K records, while table2 has 4M.</p>
<pre><code>db = MySQLdb.connect("localhost","root","root","test" )
cursor1 = db.cursor(MySQLdb.cursors.DictCursor)
cursor2 = db.cursor(MySQLdb.cursors.DictCursor)
sql = 'select * from table1 limit 2'
cursor1.execute(sql)
results = cursor1.fetchall()
for row in results:
sql2 = 'select * from table2 where t1 = '+row['t1']
cursor2.execute(sql2)
result2 = cursor2.fetchall()
    for row2 in result2:
#do something
</code></pre>
<p>For each iteration and each query, the process seems to spend time waiting. I tried profiling with cProfile and got the following output:</p>
<pre><code>ncalls tottime percall cumtime percall filename:lineno(function)
3 21.529 7.176 21.529 7.176 connections.py:274(query)
</code></pre>
<p>How to debug this issue? I am quite new to python.</p>
| 3
|
2016-09-07T19:07:34Z
| 39,377,864
|
<p>1) Use a LEFT JOIN from table1 to get the desired results from table2.</p>
<p>2) Fetch the results and do whatever you want with them.</p>
<p>3) If you don't have indices on the join column, add them (just in case).</p>
<p>Some advice: SQL is built on set theory, whereas a cursor is designed for looping. So if you use for loops too many times, your commands will slow down dramatically. Optimization is tricky; basically, using built-in database operations will improve performance (apart from some complicated situations).</p>
<p>Try to solve problems with set-based operations (Join/Merge/...) instead of scripting-language loops, and you will accelerate your scripts a lot.</p>
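<p>A small sketch of points 1) and 3) using the stdlib <code>sqlite3</code> module for illustration (table and column names are assumed from the question; the SQL is the same in MySQL):</p>

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE table1 (t1 TEXT);
    CREATE TABLE table2 (t1 TEXT, value INTEGER);
    INSERT INTO table1 VALUES ('a'), ('b');
    INSERT INTO table2 VALUES ('a', 1);
""")

# 3) Index the join column so lookups against the 4M-row table
#    are not full table scans.
db.execute('CREATE INDEX idx_table2_t1 ON table2 (t1)')

# 1) LEFT JOIN keeps table1 rows even when table2 has no match
#    (the value column comes back as NULL / None).
rows = db.execute("""
    SELECT table1.t1, table2.value
    FROM table1 LEFT JOIN table2 ON table1.t1 = table2.t1
    ORDER BY table1.t1
""").fetchall()
print(rows)  # [('a', 1), ('b', None)]
```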
| -1
|
2016-09-07T19:40:59Z
|
[
"python",
"mysql"
] |
Quadratic formula solver in python
| 39,377,432
|
<p>I'm somewhat new to python, but I'm trying my best to learn. My code is</p>
<pre><code>import math
a = 5
b = 5
c = 5
def quad_solve(a, b, c):
q1 = b*b
q2 = 4*a*c
q3 = 2*a
q4 = q1-q2
sqr = math.sqrt(q4)
sol1p1 = b+sqr
sol1p2 = sol1p1/2
sol2p1 = b-sqr
sol2p2 = sol2p1/2
print ("(",sol1p2,",",sol2p2,")")
quad_solve(a, b, c)
</code></pre>
<p>And when I run it it gives the error </p>
<pre><code>Traceback (most recent call last):
File "python", line 19, in <module>
File "python", line 12, in quad_solve
ValueError: math domain error
</code></pre>
<p>which I don't really understand. </p>
<p>I'm trying to create a quadratic formula solver. I use the math module, and then define three variables, a, b, and c. Then, I define a function that takes in those variables (I call the function at the end). In the function, I define four quantities. <code>q1</code> is the b squared under the square root, <code>q2</code> is the 4ac also under the square root, <code>q3</code> is the denominator, and <code>q4</code> calculates the total under the square root (i.e., <code>q1</code> - <code>q2</code>). Then, I define a variable called <code>sqr</code> which is equal to the square root of <code>q4</code>. Then, I define four more variables, which calculate the solutions. <code>sol1p1</code> takes b + sqr, and <code>sol1p2</code> takes <code>sol1p1</code> and divides it by two. This gives the first solution. Then, <code>sol2p1</code> takes b - sqr, and <code>sol2p2</code> takes <code>sol2p1</code> and divides it by two. Finally, <code>sol1p2</code> and <code>sol2p2</code> are printed, in a set of parentheses with a comma between. I hope that makes sense; if any clarification is needed about the variable names, please let me know. </p>
<p>I am using the online compiler repl.it (I don't know if there's anything special to consider with that).</p>
<p>Thanks!</p>
<hr>
<p><strong>Edit:</strong></p>
<p>I updated my code, per Code Apprentice's recommendations. I started by adding an if statement:</p>
<pre><code>import math
a = 5
b = 5
c = 5
def quad_solve(a, b, c):
q1 = b*b
q2 = 4*a*c
q3 = 2*a
q4 = q1-q2
check = math.tan(q2)
if (q1 > check):
sqr = math.sqrt(q4)
sol1p1 = b+sqr
sol1p2 = sol1p1/2
sol2p1 = b-sqr
sol2p2 = sol2p1/2
print ("(",sol1p2,",",sol2p2,")")
else:
print "Imaginary number. There are no zeros."
quad_solve(a, b, c)
</code></pre>
<p>but it is continuing to return the error </p>
<pre><code>Traceback (most recent call last):
File "python", line 23, in <module>
File "python", line 14, in quad_solve
ValueError: math domain error
</code></pre>
<p>I'm not sure why. </p>
| 3
|
2016-09-07T19:08:53Z
| 39,377,472
|
<p>b^2 has to be at least 4ac for real solutions. Right now, that <code>sqrt()</code> function is getting a negative number, which <code>math.sqrt()</code> cannot handle.</p>
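<p>A minimal sketch of guarding the square root on the discriminant, falling back to <code>cmath</code> for the complex case (the function name <code>quad_roots</code> is illustrative, not from the question):</p>

```python
import math
import cmath

def quad_roots(a, b, c):
    d = b * b - 4 * a * c        # discriminant
    if d >= 0:
        sq = math.sqrt(d)        # real roots
    else:
        sq = cmath.sqrt(d)       # complex roots instead of a ValueError
    return (-b + sq) / (2 * a), (-b - sq) / (2 * a)

print(quad_roots(1, 3, 2))   # (-1.0, -2.0)
print(quad_roots(5, 5, 5))   # two complex roots, no math domain error
```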
| 1
|
2016-09-07T19:11:47Z
|
[
"python",
"math",
"compiler-errors"
] |
Quadratic formula solver in python
| 39,377,432
|
<p>I'm somewhat new to python, but I'm trying my best to learn. My code is</p>
<pre><code>import math
a = 5
b = 5
c = 5
def quad_solve(a, b, c):
q1 = b*b
q2 = 4*a*c
q3 = 2*a
q4 = q1-q2
sqr = math.sqrt(q4)
sol1p1 = b+sqr
sol1p2 = sol1p1/2
sol2p1 = b-sqr
sol2p2 = sol2p1/2
print ("(",sol1p2,",",sol2p2,")")
quad_solve(a, b, c)
</code></pre>
<p>And when I run it it gives the error </p>
<pre><code>Traceback (most recent call last):
File "python", line 19, in <module>
File "python", line 12, in quad_solve
ValueError: math domain error
</code></pre>
<p>which I don't really understand. </p>
<p>I'm trying to create a quadratic formula solver. I use the math module, and then define three variables, a, b, and c. Then, I define a function that takes in those variables (I call the function at the end). In the function, I define four quantities. <code>q1</code> is the b squared under the square root, <code>q2</code> is the 4ac also under the square root, <code>q3</code> is the denominator, and <code>q4</code> calculates the total under the square root (i.e., <code>q1</code> - <code>q2</code>). Then, I define a variable called <code>sqr</code> which is equal to the square root of <code>q4</code>. Then, I define four more variables, which calculate the solutions. <code>sol1p1</code> takes b + sqr, and <code>sol1p2</code> takes <code>sol1p1</code> and divides it by two. This gives the first solution. Then, <code>sol2p1</code> takes b - sqr, and <code>sol2p2</code> takes <code>sol2p1</code> and divides it by two. Finally, <code>sol1p2</code> and <code>sol2p2</code> are printed, in a set of parentheses with a comma between. I hope that makes sense; if any clarification is needed about the variable names, please let me know. </p>
<p>I am using the online compiler repl.it (I don't know if there's anything special to consider with that).</p>
<p>Thanks!</p>
<hr>
<p><strong>Edit:</strong></p>
<p>I updated my code, per Code Apprentice's recommendations. I started by adding an if statement:</p>
<pre><code>import math
a = 5
b = 5
c = 5
def quad_solve(a, b, c):
q1 = b*b
q2 = 4*a*c
q3 = 2*a
q4 = q1-q2
check = math.tan(q2)
if (q1 > check):
sqr = math.sqrt(q4)
sol1p1 = b+sqr
sol1p2 = sol1p1/2
sol2p1 = b-sqr
sol2p2 = sol2p1/2
print ("(",sol1p2,",",sol2p2,")")
else:
print "Imaginary number. There are no zeros."
quad_solve(a, b, c)
</code></pre>
<p>but it is continuing to return the error </p>
<pre><code>Traceback (most recent call last):
File "python", line 23, in <module>
File "python", line 14, in quad_solve
ValueError: math domain error
</code></pre>
<p>I'm not sure why. </p>
| 3
|
2016-09-07T19:08:53Z
| 39,377,516
|
<p>This is my version of the answer in the fewest lines of code:</p>
<pre><code>import cmath
#Your Variables
a = 5
b = 5
c = 5
#The Discriminant
d = (b**2) - (4*a*c)
#The Solutions
solution1 = (-b-cmath.sqrt(d))/(2*a)
solution2 = (-b+cmath.sqrt(d))/(2*a)
print (solution1)
print (solution2)
</code></pre>
| 3
|
2016-09-07T19:14:39Z
|
[
"python",
"math",
"compiler-errors"
] |
Quadratic formula solver in python
| 39,377,432
|
<p>I'm somewhat new to python, but I'm trying my best to learn. My code is</p>
<pre><code>import math
a = 5
b = 5
c = 5
def quad_solve(a, b, c):
q1 = b*b
q2 = 4*a*c
q3 = 2*a
q4 = q1-q2
sqr = math.sqrt(q4)
sol1p1 = b+sqr
sol1p2 = sol1p1/2
sol2p1 = b-sqr
sol2p2 = sol2p1/2
print ("(",sol1p2,",",sol2p2,")")
quad_solve(a, b, c)
</code></pre>
<p>And when I run it it gives the error </p>
<pre><code>Traceback (most recent call last):
File "python", line 19, in <module>
File "python", line 12, in quad_solve
ValueError: math domain error
</code></pre>
<p>which I don't really understand. </p>
<p>I'm trying to create a quadratic formula solver. I use the math module, and then define three variables, a, b, and c. Then, I define a function that takes in those variables (I call the function at the end). In the function, I define four quantities. <code>q1</code> is the b squared under the square root, <code>q2</code> is the 4ac also under the square root, <code>q3</code> is the denominator, and <code>q4</code> calculates the total under the square root (i.e., <code>q1</code> - <code>q2</code>). Then, I define a variable called <code>sqr</code> which is equal to the square root of <code>q4</code>. Then, I define four more variables, which calculate the solutions. <code>sol1p1</code> takes b + sqr, and <code>sol1p2</code> takes <code>sol1p1</code> and divides it by two. This gives the first solution. Then, <code>sol2p1</code> takes b - sqr, and <code>sol2p2</code> takes <code>sol2p1</code> and divides it by two. Finally, <code>sol1p2</code> and <code>sol2p2</code> are printed, in a set of parentheses with a comma between. I hope that makes sense; if any clarification is needed about the variable names, please let me know. </p>
<p>I am using the online compiler repl.it (I don't know if there's anything special to consider with that).</p>
<p>Thanks!</p>
<hr>
<p><strong>Edit:</strong></p>
<p>I updated my code, per Code Apprentice's recommendations. I started by adding an if statement:</p>
<pre><code>import math
a = 5
b = 5
c = 5
def quad_solve(a, b, c):
q1 = b*b
q2 = 4*a*c
q3 = 2*a
q4 = q1-q2
check = math.tan(q2)
if (q1 > check):
sqr = math.sqrt(q4)
sol1p1 = b+sqr
sol1p2 = sol1p1/2
sol2p1 = b-sqr
sol2p2 = sol2p1/2
print ("(",sol1p2,",",sol2p2,")")
else:
print "Imaginary number. There are no zeros."
quad_solve(a, b, c)
</code></pre>
<p>but it is continuing to return the error </p>
<pre><code>Traceback (most recent call last):
File "python", line 23, in <module>
File "python", line 14, in quad_solve
ValueError: math domain error
</code></pre>
<p>I'm not sure why. </p>
| 3
|
2016-09-07T19:08:53Z
| 39,377,991
|
<p>If you're just interested in getting a result (and not in learning how to do this), you can use <a href="http://sympy.org" rel="nofollow">sympy</a>:</p>
<pre><code>from sympy import var, solve
x = var("x")
print(solve(5*x**2 + 5*x + 5))
# prints [-1/2 - sqrt(3)*I/2, -1/2 + sqrt(3)*I/2]
</code></pre>
| 1
|
2016-09-07T19:50:05Z
|
[
"python",
"math",
"compiler-errors"
] |
Quadratic formula solver in python
| 39,377,432
|
<p>I'm somewhat new to python, but I'm trying my best to learn. My code is</p>
<pre><code>import math
a = 5
b = 5
c = 5
def quad_solve(a, b, c):
q1 = b*b
q2 = 4*a*c
q3 = 2*a
q4 = q1-q2
sqr = math.sqrt(q4)
sol1p1 = b+sqr
sol1p2 = sol1p1/2
sol2p1 = b-sqr
sol2p2 = sol2p1/2
print ("(",sol1p2,",",sol2p2,")")
quad_solve(a, b, c)
</code></pre>
<p>And when I run it it gives the error </p>
<pre><code>Traceback (most recent call last):
File "python", line 19, in <module>
File "python", line 12, in quad_solve
ValueError: math domain error
</code></pre>
<p>which I don't really understand. </p>
<p>I'm trying to create a quadratic formula solver. I use the math module, and then define three variables, a, b, and c. Then, I define a function that takes in those variables (I call the function at the end). In the function, I define four quantities. <code>q1</code> is the b squared under the square root, <code>q2</code> is the 4ac also under the square root, <code>q3</code> is the denominator, and <code>q4</code> calculates the total under the square root (i.e., <code>q1</code> - <code>q2</code>). Then, I define a variable called <code>sqr</code> which is equal to the square root of <code>q4</code>. Then, I define four more variables, which calculate the solutions. <code>sol1p1</code> takes b + sqr, and <code>sol1p2</code> takes <code>sol1p1</code> and divides it by two. This gives the first solution. Then, <code>sol2p1</code> takes b - sqr, and <code>sol2p2</code> takes <code>sol2p1</code> and divides it by two. Finally, <code>sol1p2</code> and <code>sol2p2</code> are printed, in a set of parentheses with a comma between. I hope that makes sense; if any clarification is needed about the variable names, please let me know. </p>
<p>I am using the online compiler repl.it (I don't know if there's anything special to consider with that).</p>
<p>Thanks!</p>
<hr>
<p><strong>Edit:</strong></p>
<p>I updated my code, per Code Apprentice's recommendations. I started by adding an if statement:</p>
<pre><code>import math
a = 5
b = 5
c = 5
def quad_solve(a, b, c):
q1 = b*b
q2 = 4*a*c
q3 = 2*a
q4 = q1-q2
check = math.tan(q2)
if (q1 > check):
sqr = math.sqrt(q4)
sol1p1 = b+sqr
sol1p2 = sol1p1/2
sol2p1 = b-sqr
sol2p2 = sol2p1/2
print ("(",sol1p2,",",sol2p2,")")
else:
print "Imaginary number. There are no zeros."
quad_solve(a, b, c)
</code></pre>
<p>but it is continuing to return the error </p>
<pre><code>Traceback (most recent call last):
File "python", line 23, in <module>
File "python", line 14, in quad_solve
ValueError: math domain error
</code></pre>
<p>I'm not sure why. </p>
| 3
|
2016-09-07T19:08:53Z
| 39,378,435
|
<p>There are two ways to do this (I figured this out thanks to the great hints given by Code-Apprentice, Pablo Iocco, and Tom Pitts).</p>
<pre><code>import cmath
import math
a = 1
b = 3
c = 2
def quad_solve_exact(a, b, c):
d = (b*b)-(4*a*c)
solution1 = (-b-cmath.sqrt(d))/(2*a)
solution2 = (-b+cmath.sqrt(d))/(2*a)
print (solution1,solution2)
quad_solve_exact(a, b, c)
def quad_solve(a, b, c):
if (b*b > 4*a*c):
print "There is a solution!"
        d = (b*b)-(4*a*c)
        solution1 = (-b-math.sqrt(d))/(2*a)
        solution2 = (-b+math.sqrt(d))/(2*a)
print (solution1,solution2)
else:
print "No solutions, imaginary number"
quad_solve(a, b, c)
</code></pre>
<p>The first way is the more exact. The problem is that the <code>sqrt</code> function in the normal <code>math</code> package can't handle negative numbers. There is a package, <code>cmath</code>, however, that <em>can</em> handle negative numbers. So, you import both packages (the normal <code>math</code> package is used in the second example) and then define a, b, and c. In your function you can combine the variables much more than I originally did, leading to shorter/clearer code. The variable <code>d</code> denotes what is under the square root. Then, for each solution, the square root of <code>d</code> is subtracted from or added to -b, and the whole thing is divided by 2a. Then, the solutions are printed.</p>
<p>The second solution is less exact, but perfectly serviceable for my purposes. The new function also takes a, b, and c. Then, an if statement is used to make sure that the number being square rooted is not negative. If it is negative, the else branch runs, which prints that there is no solution. If the variables pass the if statement, it prints that there is indeed a solution, and basically uses the same code as before, except <code>math.sqrt</code> is used instead of <code>cmath.sqrt</code>.</p>
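<p>The difference between the two <code>sqrt</code> functions can be seen directly:</p>

```python
import math
import cmath

print(cmath.sqrt(-4))      # 2j: cmath returns a complex number
try:
    math.sqrt(-4)          # math.sqrt rejects negative input
except ValueError as err:
    print(err)             # the "math domain error" from the question
```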
| 0
|
2016-09-07T20:25:38Z
|
[
"python",
"math",
"compiler-errors"
] |
args python parser, a whitespace and Spark
| 39,377,451
|
<p>I have this code in <code>foo.py</code>:</p>
<pre><code>from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument('--label', dest='label', type=str, default=None, required=True, help='label')
args = parser.parse_args()
</code></pre>
<p>and when I execute:</p>
<blockquote>
<p>spark-submit --master yarn --deploy-mode cluster foo.py --label 106466153-Gateway Arch</p>
</blockquote>
<p>I get this error at Stdout:</p>
<pre><code>usage: foo.py [-h] --label LABEL
foo.py: error: unrecognized arguments: Arch
</code></pre>
<p>Any idea(s) please?</p>
<hr>
<p>Attempts:</p>
<ol>
<li><code>--label "106466153-Gateway Arch"</code></li>
<li><code>--label 106466153-Gateway\ Arch</code></li>
<li><code>--label "106466153-Gateway\ Arch"</code></li>
<li><code>--label="106466153-Gateway Arch"</code></li>
<li><code>--label 106466153-Gateway\\\ Arch</code></li>
<li><code>--label 106466153-Gateway\\\\\\\ Arch</code></li>
</ol>
<p>All attempts produce the same error.</p>
<hr>
<p>I am using Red Hat Enterprise Linux Server release 6.4 (Santiago).</p>
| 2
|
2016-09-07T19:10:06Z
| 39,379,156
|
<p>Here is a nasty workaround:</p>
<pre><code>from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument('--label', dest='label', type=str, default=None, required=True, help='label', nargs="+")
args = parser.parse_args()
args = ' '.join(args.label)
print args
</code></pre>
<p>where I am using <code>nargs="+"</code> and then <code>join</code> the arguments. </p>
<p>I execute like this:</p>
<blockquote>
<p>spark-submit --master yarn --deploy-mode cluster foo.py --label "106466153-Gateway Arch"</p>
</blockquote>
<p>Also notice that this approach can work when no space exists, like this:</p>
<blockquote>
<p>spark-submit --master yarn --deploy-mode cluster foo.py --label "106466153-GatewayNoSpaceArch"</p>
</blockquote>
| 0
|
2016-09-07T21:19:09Z
|
[
"python",
"linux",
"apache-spark",
"io",
"redhat"
] |
Smart Sheet Row calls not getting information or working
| 39,377,478
|
<p>I'm trying to use the row attribute created_by accessing it with the Python SDK and it keeps throwing an error. I'm not sure if I'm missing something.</p>
<p>Every time I try to use it I get the error:</p>
<pre><code> File "C:\Python27\lib\site-packages\smartsheet\models\row.py", line 166, in __getattr__ raise AttributeError(key)
AttributeError: created_by
</code></pre>
<p>It also doesn't return anything when I run created_at, but at least it doesn't throw an error.</p>
<p>Anyone else have this problem, or at least can point me to what I'm doing wrong?</p>
<p>It happens every time I call it. Here is a simple example:</p>
<pre><code>smartsheet = smartsheet.Smartsheet(SMARTSHEET_ACCESS_TOKEN)
sheet = smartsheet.Sheets.get_sheet(IT_TRACKER_ID)
rows = sheet.rows
columns = sheet.columns
print rows[1].created_by
</code></pre>
| 2
|
2016-09-07T19:12:14Z
| 39,395,884
|
<p>Via the API, if you want the <strong>Get Sheet</strong> response to include the <strong>createdBy</strong> attribute for each Row, you must specify the <strong>include</strong> parameter on the request with a value that includes the string <strong>rowWriterInfo</strong>. (See the <a href="http://smartsheet-platform.github.io/api-docs/?python#row-include-flags" rel="nofollow">Row Include Flags</a> section API docs for more info.) For example:</p>
<pre><code>GET https://api.smartsheet.com/2.0/sheets/7359436428732292?include=rowWriterInfo
</code></pre>
<p>If I execute this request via Postman, I see that the response does indeed include the <strong>createdBy</strong> attribute for each Row object in the response.</p>
<p>Via the SDK, I can execute this same request as follows:</p>
<pre><code>sheet = smartsheet.Sheets.get_sheet(7359436428732292, include='rowWriterInfo')
</code></pre>
<p>So, the SDK seems to be sending the request properly. However, it doesn't look like the SDK currently supports accessing the <strong>created_by</strong> property of Row objects -- if you search the <em>Lib\site-packages\smartsheet\models\row.py</em> file for <em>created_by</em>, you'll come up empty. </p>
<p>I'd suggest that you report this issue to Smartsheet by opening an issue in the <a href="https://github.com/smartsheet-platform/smartsheet-python-sdk/issues" rel="nofollow">corresponding GitHub repo</a>. In the meantime, you might consider updating your local copy of the Python SDK to add support for accessing the <strong>created_by</strong> property of Row objects. </p>
| 0
|
2016-09-08T16:18:01Z
|
[
"python",
"smartsheet-api"
] |
How to change the font size of a QInputDialog in PyQt?
| 39,377,515
|
<p><strong>The case of a simple message box</strong></p>
<p>I have figured out how to change the font size in simple PyQt dialog windows. Take this example:</p>
<pre><code> # Create a custom font
# ---------------------
font = QFont()
font.setFamily("Arial")
font.setPointSize(10)
# Show simple message box
# ------------------------
msg = QMessageBox()
msg.setIcon(QMessageBox.Question)
msg.setText("Are you sure you want to delete this file?")
msg.setWindowTitle("Sure?")
msg.setStandardButtons(QMessageBox.Ok | QMessageBox.Cancel)
msg.setFont(font)
retval = msg.exec_()
if retval == QMessageBox.Ok:
print('OK')
elif retval == QMessageBox.Cancel:
print('CANCEL')
</code></pre>
<p>The key to changing the font size is that you actually have a 'handle' to your message box. The variable <code>msg</code> is available to tweak the message box to your needs before showing it with <code>msg.exec_()</code>.</p>
<p><strong>The case of a simple input dialog</strong></p>
<p>The problem about the input dialog is that such handle is not present. Take this example:</p>
<pre><code> # Show simple input dialog
# -------------------------
filename, ok = QInputDialog.getText(None, 'Input Dialog', 'Enter the file name:')
if(ok):
print('Name of file = ' + filename)
else:
print('Cancelled')
</code></pre>
<p>The input dialog object is created on-the-fly. I have no way to tweak it to my needs (eg. apply a different font).</p>
<p>Is there a way to get a handle to this <code>QInputDialog</code> object, before showing it?</p>
<p><strong>EDIT :</strong></p>
<p>I was adviced in the comments to try it with an HTML snippet:</p>
<pre><code> filename, ok = QInputDialog.getText(None, 'Input Dialog', '<html style="font-size:12pt;">Enter the file name:</html>')
</code></pre>
<p>The result is as follows:</p>
<p><a href="http://i.stack.imgur.com/BtzNI.png" rel="nofollow"><img src="http://i.stack.imgur.com/BtzNI.png" alt="enter image description here"></a></p>
<p>As you can see, the text input field has still the small (unchanged) font size.</p>
| 1
|
2016-09-07T19:14:30Z
| 39,387,037
|
<p>Thanks to the comments of @denvaar and @ekhumoro, I got the solution. Here it is:</p>
<pre><code> # Create a custom font
# ---------------------
font = QFont()
font.setFamily("Arial")
font.setPointSize(10)
# Create and show the input dialog
# ---------------------------------
inputDialog = QInputDialog(None)
inputDialog.setInputMode(QInputDialog.TextInput)
inputDialog.setWindowTitle('Input')
inputDialog.setLabelText('Enter the name for this new file:')
inputDialog.setFont(font)
ok = inputDialog.exec_()
filename = inputDialog.textValue()
if(ok):
print('Name of file = ' + filename)
else:
print('Cancelled')
</code></pre>
| 1
|
2016-09-08T09:16:58Z
|
[
"python",
"python-3.x",
"pyqt",
"pyqt5"
] |
Data from 2 pages as one item
| 39,377,671
|
<p>I am crawling a site with products. The currency in which a product's price is shown is set via the URL: <code>/en-GB/</code> for GBP and <code>/en-AU/</code> for AUD. My client wants both prices in one item.</p>
<p>I would like to be able to use pipelines to put it into their DB, so combining it afterwards is not viable. Is there any way with Scrapy to do this?</p>
| 2
|
2016-09-07T19:27:12Z
| 39,379,944
|
<p><a href="http://doc.scrapy.org/en/latest/topics/request-response.html#passing-additional-data-to-callback-functions" rel="nofollow">http://doc.scrapy.org/en/latest/topics/request-response.html#passing-additional-data-to-callback-functions</a></p>
<pre><code>def parse_page1(self, response):
item = MyItem()
item['price_GBP'] = response.xpath("//foo/bar").extract_first()
request = scrapy.Request("http://www.example.com/en-AU/",
callback=self.parse_page2)
request.meta['item'] = item
yield request
def parse_page2(self, response):
item = response.meta['item']
item['price_AUD'] = response.xpath("//foo/bar").extract_first()
yield item
</code></pre>
| 2
|
2016-09-07T22:29:13Z
|
[
"python",
"python-2.7",
"scrapy"
] |
Retrieving result from celery worker constantly
| 39,377,751
|
<p>I have a web app in which I am trying to use Celery to load background tasks from a database. I am currently loading the database upon request, but would like to load the tasks on an hourly interval and have them work in the background. I am using Flask and am coding in Python. I have Redis running as well.</p>
<p>So far, using Celery, I have gotten the worker to process the task and the beat to send the tasks to the worker on an interval. But I want to retrieve the results (a dataframe or query) from the worker, and if the result is not ready then it should load the previous result of the worker.</p>
<p>Any ideas on how to do this?</p>
<p><strong>Edit</strong></p>
<p>I am retrieving the results from a database using sqlalchemy and I am rendering the results in a webpage. I have my homepage which has all the various links which all lead to different graphs which I want to be loaded in the background so the user does not have to wait long loading times. </p>
| 8
|
2016-09-07T19:33:28Z
| 39,430,468
|
<p>The Celery <a href="http://docs.celeryproject.org/en/latest/userguide/tasks.html#tasks">Task</a> is being executed by a Worker, and it's Result is being stored in the <a href="http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#keeping-results">Celery Backend</a>.</p>
<p>If I understand you correctly, then I think you've got a few options:</p>
<ol>
<li><a href="http://docs.celeryproject.org/en/latest/userguide/tasks.html#ignore-results-you-don-t-want">Ignore the result</a> of the graph-loading-task, store what ever you need, as a side effect of the task, in your database. When needed, query for the most recent result in that database. If the DB is Redis, you may find <a href="http://redis.io/commands/ZADD">ZADD</a> and <a href="http://redis.io/commands/ZRANGE">ZRANGE</a> suitable. This way you'll get the new if available, or the previous if not.</li>
<li><p>You can look for the <a href="http://docs.celeryproject.org/en/latest/reference/celery.result.html#celery.result.AsyncResult">result of a task if you provide it's id</a>. You can do this when you want to find out the status, something like (where <code>celery</code> is the Celery app): <code>result = celery.AsyncResult(<the task id>)</code></p></li>
<li><p>Use <a href="http://docs.celeryproject.org/en/latest/userguide/tasks.html#avoid-launching-synchronous-subtasks">callback</a> to update farther when new result is ready.</p></li>
<li>Let a background thread <a href="http://docs.celeryproject.org/en/latest/reference/celery.result.html#celery.result.AsyncResult.wait">wait</a> for the AsyncResult, or <a href="http://docs.celeryproject.org/en/latest/reference/celery.result.html#celery.result.ResultSet.join_native">native_join</a>, which is supported with Redis, and update accordingly (not recommended)</li>
</ol>
<p>I personally used option #1 in similar cases (using MongoDB) and found it to be very maintainable and flexible. But possibly, due to the nature of your UI, option #3 will be more suitable for your needs.</p>
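<p>The "newest result if available, otherwise the previous one" lookup from option #1 can be sketched with a plain timestamp-scored store standing in for a Redis sorted set (ZADD/ZRANGE); the names here are illustrative, not a Celery or redis-py API:</p>

```python
import time

# Stand-in for a Redis sorted set: each finished task run stores its
# payload scored by timestamp; readers take the highest score available.
results = []  # list of (timestamp, payload)

def store_result(payload, ts=None):
    results.append((time.time() if ts is None else ts, payload))
    results.sort(key=lambda entry: entry[0])  # ZADD keeps members ordered by score

def latest_result():
    # ZRANGE key -1 -1: the newest result if a fresh run has finished,
    # otherwise the previous run's result is still there to fall back on.
    return results[-1][1] if results else None

store_result({'graph': 'previous'}, ts=1.0)
store_result({'graph': 'fresh'}, ts=2.0)
print(latest_result())  # {'graph': 'fresh'}
```

<p>With real Redis you would ZADD from the task body (as a side effect, ignoring the Celery result) and ZRANGE from the Flask view, so a slow hourly refresh never blocks a page render.</p>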
| 7
|
2016-09-10T20:50:47Z
|
[
"python",
"database",
"celery"
] |
How to recode line so an exact sentence must be in the list for it to match
| 39,377,790
|
<pre><code>x = ["Cookie flavored water is yummy 6", "Coding complicated 16", "Help 7"]
for i in x:
if "flavored" in x:
print ("Yes")
else:
print ("No")
</code></pre>
<p>I want the exact string "Cookie flavored water is yummy" to be in the list for it to be acceptable but I don't want the 6 part included. I'm completely befuddled on how to accomplish this. Also the objective might change from the first element to a different element.</p>
| -1
|
2016-09-07T19:36:38Z
| 39,377,889
|
<p>Well if the string is always the one you specified you could do this:</p>
<pre><code>yourString = "Cookie flavored water is yummy"
for item in x:
if yourString in item:
print 'Yes'
else:
print 'No'
</code></pre>
<p>This checks each list item for the specified string. In your example, "Cookie flavored water is yummy 6" contains the substring "Cookie flavored water is yummy", so the script will print 'Yes'.</p>
| 0
|
2016-09-07T19:42:25Z
|
[
"python",
"list"
] |
How to recode line so an exact sentence must be in the list for it to match
| 39,377,790
|
<pre><code>x = ["Cookie flavored water is yummy 6", "Coding complicated 16", "Help 7"]
for i in x:
if "flavored" in x:
print ("Yes")
else:
print ("No")
</code></pre>
<p>I want the exact string "Cookie flavored water is yummy" to be in the list for it to be acceptable but I don't want the 6 part included. I'm completely befuddled on how to accomplish this. Also the objective might change from the first element to a different element.</p>
| -1
|
2016-09-07T19:36:38Z
| 39,377,897
|
<p>You're iterating over <code>x</code> with <code>i</code>, but you check whether the string belongs to the list, not to the current element, which is always false here.</p>
<p>to check if an element of x contains <code>"Cookie flavored water is yummy"</code></p>
<pre><code>x = ["Cookie flavored water is yummy 6", "Coding complicated 16", "Help 7"]
for i in x:
print ("Yes" if "Cookie flavored water is yummy" in i else "No")
</code></pre>
<p>on the other hand, for exact string match simply use <code>in</code> without a loop, the loop being made on <code>x</code> by the <code>in</code> operator:</p>
<pre><code>print ("Yes" if "Cookie flavored water is yummy" in x else "No")
</code></pre>
<p>If you need exact string match on a great number of elements, consider putting your elements in a <code>set</code> instead because lookup time is much smaller (hashing involved). The code remains the same apart from that.</p>
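To make the exact-match suggestion concrete, here is a small sketch (the phrases list is made up for illustration) showing that whole-string membership in a set behaves like list membership, only with constant-time lookups:

```python
# Exact whole-string matching: a set hashes the query once (O(1) average),
# while a list compares against every element (O(n)).
phrases = ["Cookie flavored water is yummy", "Coding complicated", "Help"]
phrase_set = set(phrases)

assert "Cookie flavored water is yummy" in phrase_set   # whole string present
assert "Cookie" not in phrase_set                       # substrings do not match
assert ("Help" in phrases) == ("Help" in phrase_set)    # same answers as the list
```

Note that only whole strings match with a set; for the substring test in the loop above, a set does not help.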
| 0
|
2016-09-07T19:43:27Z
|
[
"python",
"list"
] |
How to recode line so an exact sentence must be in the list for it to match
| 39,377,790
|
<pre><code>x = ["Cookie flavored water is yummy 6", "Coding complicated 16", "Help 7"]
for i in x:
    if "flavored" in x:
        print ("Yes")
    else:
        print ("No")
</code></pre>
<p>I want the exact string "Cookie flavored water is yummy" to be in the list for it to be acceptable but I don't want the 6 part included. I'm completely befuddled on how to accomplish this. Also the objective might change from the first element to a different element.</p>
| -1
|
2016-09-07T19:36:38Z
| 39,377,992
|
<p>What do you mean by "acceptable?" Additionally, when you say you don't want the number at the end of the items in the list to be included, do you mean for your comparison? I agree with the other answers, and if you happen to want to remove the numbers from the list:</p>
<pre><code>x = [' '.join(i.split(' ')[:-1]) for i in x]
</code></pre>
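An alternative sketch for stripping the trailing token: `rsplit` with `maxsplit=1` cuts only at the last space, so the phrase itself stays intact even though it contains spaces:

```python
x = ["Cookie flavored water is yummy 6", "Coding complicated 16", "Help 7"]
# rsplit(" ", 1) splits once from the right, dropping only the final token
stripped = [item.rsplit(" ", 1)[0] for item in x]
# stripped == ["Cookie flavored water is yummy", "Coding complicated", "Help"]
```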
| 0
|
2016-09-07T19:50:05Z
|
[
"python",
"list"
] |
How to use multiple core with a webservice based on Python Klein
| 39,377,856
|
<p>I am writing a web service based on Klein framework</p>
<p><a href="https://klein.readthedocs.io/en/latest/index.html" rel="nofollow">https://klein.readthedocs.io/en/latest/index.html</a></p>
<p>At this stage I am stress testing my service; it can handle about 70 requests per second on an amazon t2.medium instance. But when I use top to check the server, it only uses 100% of one CPU. I think an amazon t2.medium instance should have 2 CPUs, so I wonder is there a way to change my web service code to use all of the available CPUs and hopefully handle more requests.</p>
<p>I've read python documentations and found the <code>multiprocessing</code> module but I am not sure will that be the right solution to it. Right now the main function of my web service is</p>
<pre><code>APP = Klein()

if __name__ == "__main__":
    APP.run("0.0.0.0", SERVER_PORT)
</code></pre>
<p>Is there a straightforward fix to make this service able to use multiple CPUs to process the incoming requests? Thank you for reading the question. </p>
| 2
|
2016-09-07T19:40:36Z
| 39,391,548
|
<p>It's certainly possible to use <code>multiprocessing</code> and it sure as heck is easy to spin up processes.</p>
<pre><code>from multiprocessing import Process
from klein import Klein

def runserver(interface, port, logFile):
    app = Klein()

    @app.route('/')
    def heyEarth(request):
        return 'Hey Earth!'

    app.run(interface, port, logFile)

process_list = []
for x, port in enumerate([8000, 8001, 8002, 8003]):
    logfilename = open('localhost' + str(port) + '.log', 'a')
    process_list.append(Process(target=runserver, args=('localhost', port, logfilename)))
    process_list[x].daemon = True
    process_list[x].start()

process_list.pop().join()
</code></pre>
<p>In an enterprise environment it's better and more reliable to run behind a dedicated load balancer like nginx, so the snippet above should only be used to start the web servers, with all the load balancing handled by the dedicated load balancer.</p>
<p>Keep the multiprocess code to a bare minimum or else basic things like debugging and shared system files start to become an annoyance. And that's the "normal" stuff, there are TONS of ABNORMALITIES that can arise and no one will be able to help you because you don't know what's happening yourself. Just running this snippet, I noticed a few weird things with signals and Twisted. I think it could be fixed if I ran all <code>klein</code> imports in the <code>runserver()</code>.</p>
<p>Get informed, learn from other's mistakes, heed the warnings of those who have been burned by <code>multiprocessing</code> and go make a kick ass app! Hope this helps :D</p>
<h1>References</h1>
<ul>
<li><a href="http://www.tornadoweb.org/en/stable/guide/running.html#running-behind-a-load-balancer" rel="nofollow">Tornado: Running and Deploying</a> - it's a Tornado doc, but it should still help</li>
<li><a href="http://stackoverflow.com/a/10088578/2172464">Multicore TwistedWeb</a> - This is the "Twisted" approach to multiprocesses by a core developer. Worth a read for anyone looking to run Twisted and spawn processes.</li>
</ul>
| 1
|
2016-09-08T12:55:10Z
|
[
"python",
"multithreading",
"web-services",
"twisted",
"klein-mvc"
] |
How to pass a list of objects to a thread function
| 39,377,862
|
<p>I have to pass a list of objects to a function that executes in a thread in python. I need to be able to call functions of those objects, such as <code>animal.bite()</code>. I created a generic test class:</p>
<pre><code>class test_class:
    def f(self):
        print 'hello'
</code></pre>
<p>and created a list of these objects:</p>
<pre><code>test_object = test_class()
new_object = test_class()
strlist = [test_object, new_object]
</code></pre>
<p>and have a function that creates a thread if one hasn't already been created:</p>
<pre><code>def threadScheduler(*stringlist):
    global lock  # variable defined elsewhere, makeshift resource lock
    if not lock:
        print "creating thread"
        lock = True
        thread = threading.Thread(name='heartbeat', target=threadWorkLoad, args=(stringlist,))
        thread.start()
</code></pre>
<p>This is the function <code>threadWorkLoad</code>:</p>
<pre><code>def threadWorkLoad(*stringlist):
    global lock
    for item in stringlist:
        print 'in thread', item.f()
    time.sleep(2)
    lock = False
    return
</code></pre>
<p>and this is the main loop:</p>
<pre><code>for x in range(0,10):
    print 'in main thread', strlist[0].f()
    threadScheduler(strlist)
    time.sleep(1)
</code></pre>
<p>What I would like to do is to be able to call the function <code>f()</code> on the objects in the list in <code>threadWorkLoad</code>, but currently I get the error <code>AttributeError: 'tuple' object has no attribute 'f'</code></p>
<p>If I replace those objects with strings, the code works the way I want it to. But I need to be able to do the same thing with objects. How do I do this in python?</p>
<p>EDIT:</p>
<p>Some other things I tried -</p>
<p>I changed the creation of the thread in <code>threadScheduler</code> to <code>thread = threading.Thread(name='heartbeat', target=threadWorkLoad, args=[stringlist])</code> with no luck. </p>
<p>I also tried to put in the following statement as the first line of <code>threadScheduler</code>:</p>
<p><code>print 'in scheduler', stringlist[0].f()</code></p>
<p>and I get the same error. It seems like the issue is related to passing a list objects as a function parameter.</p>
<p>I should also clarify that we are using Python 2.5.2.</p>
| 2
|
2016-09-07T19:40:49Z
| 39,378,176
|
<p>Two changes had to be made for this to work (thanks to Scott Mermelstein):</p>
<p>In the main loop, <code>threadScheduler(strlist)</code> had to be changed to <code>threadScheduler(*strlist)</code>. I verified this by adding the following line to the <code>threadScheduler()</code> function:</p>
<pre><code>print 'in scheduler', stringlist[0].f()
</code></pre>
<p>and was able to successfully call <code>f()</code> only after I added the asterisk. I'm unsure of why the argument had to be passed this way, though.</p>
<p>Additionally, I had to change </p>
<pre><code>thread = threading.Thread(name='heartbeat', target=threadWorkLoad, args=(stringlist,))
</code></pre>
<p>to</p>
<pre><code>thread = threading.Thread(name='heartbeat', target=threadWorkLoad, args=tuple(stringlist))
</code></pre>
<p>Then I was able to successfully call <code>f()</code> for each object in the list in the <code>threadWorkLoad()</code> function.</p>
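The asterisk behaviour found by trial and error above is Python's positional-argument packing and unpacking; a short sketch (class and function names are made up for illustration, written to run on both Python 2 and 3) shows why `f(*lst)` was needed:

```python
# hypothetical stand-in for the question's test_class
class Animal(object):
    def f(self):
        return "hello"

def show(*args):
    # a *args parameter always collects the positional arguments into a tuple
    return args

pets = [Animal(), Animal()]

packed = show(pets)     # one positional argument: args == ([pet1, pet2],)
unpacked = show(*pets)  # the list is unpacked:    args == (pet1, pet2)

assert len(packed) == 1 and packed[0] is pets
assert len(unpacked) == 2 and unpacked[0].f() == "hello"
```

So passing `strlist` without the asterisk wrapped the whole list in a one-element tuple, which is why indexing gave a tuple instead of an object.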
| 1
|
2016-09-07T20:05:35Z
|
[
"python",
"multithreading"
] |
Does numpy internally store size of an array?
| 39,377,866
|
<p>From specification of a numpy array at <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/c-api.types-and-structures.html#c.PyArrayObject" rel="nofollow">here</a>:</p>
<pre><code>typedef struct PyArrayObject {
    PyObject_HEAD
    char *data;
    int nd;
    npy_intp *dimensions;
    npy_intp *strides;
    PyObject *base;
    PyArray_Descr *descr;
    int flags;
    PyObject *weakreflist;
} PyArrayObject;
</code></pre>
<p>When I look at the specification of a numpy array, I don't see that it stores number of elements of the array. Is that really the case?</p>
<p>What is the advantage of not storing that?</p>
<p>Thank you.</p>
| 3
|
2016-09-07T19:41:13Z
| 39,377,943
|
<p>Look at <code>PyArray_ArrayDescr *PyArray_Descr.subarray</code>:</p>
<blockquote>
<p>If this is non- NULL, then this data-type descriptor is a C-style
contiguous array of another data-type descriptor. In other-words, each
element that this descriptor describes is actually an array of some
other base descriptor. This is most useful as the data-type descriptor
for a field in another data-type descriptor. The fields member should
be NULL if this is non- NULL (the fields member of the base descriptor
can be non- NULL however). The PyArray_ArrayDescr structure is defined
using</p>
</blockquote>
<pre><code>typedef struct {
    PyArray_Descr *base;
    PyObject *shape;   /* <-------- */
} PyArray_ArrayDescr;
</code></pre>
<p>and:</p>
<pre><code>PyObject *PyArray_ArrayDescr.shape
</code></pre>
<blockquote>
<p>The shape (always C-style contiguous) of the sub-array as a Python tuple.</p>
</blockquote>
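The subarray descriptor is visible from the Python side too; a small sketch (assuming numpy is installed) of a dtype whose elements carry their own shape:

```python
import numpy as np

# A dtype whose elements are themselves 2x3 blocks of int32
dt = np.dtype((np.int32, (2, 3)))
assert dt.subdtype == (np.dtype(np.int32), (2, 3))

# More commonly used as a field inside a structured dtype, as the docs describe
rec = np.dtype([('pos', np.float64, (3,))])
a = np.zeros(4, dtype=rec)
assert a['pos'].shape == (4, 3)  # the subarray shape is appended per element
```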
| 2
|
2016-09-07T19:46:37Z
|
[
"python",
"c",
"numpy"
] |
Does numpy internally store size of an array?
| 39,377,866
|
<p>From specification of a numpy array at <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/c-api.types-and-structures.html#c.PyArrayObject" rel="nofollow">here</a>:</p>
<pre><code>typedef struct PyArrayObject {
    PyObject_HEAD
    char *data;
    int nd;
    npy_intp *dimensions;
    npy_intp *strides;
    PyObject *base;
    PyArray_Descr *descr;
    int flags;
    PyObject *weakreflist;
} PyArrayObject;
</code></pre>
<p>When I look at the specification of a numpy array, I don't see that it stores number of elements of the array. Is that really the case?</p>
<p>What is the advantage of not storing that?</p>
<p>Thank you.</p>
| 3
|
2016-09-07T19:41:13Z
| 39,378,201
|
<p>The size (that is, the total number of elements in the array) is computed as the product of the values in the array <code>dimensions</code>. The length of that array is <code>nd</code>.</p>
<p>In the C code that implements the core of numpy, you'll find many uses of the macro <code>PyArray_SIZE(obj)</code>. Here's the definition of that macro:</p>
<pre><code>#define PyArray_SIZE(m) PyArray_MultiplyList(PyArray_DIMS(m), PyArray_NDIM(m))
</code></pre>
<p>The advantage of not storing it is, well, not storing redundant data.</p>
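The same computation is observable from the Python side; a quick sketch (assuming numpy is installed):

```python
import operator
from functools import reduce

import numpy as np

a = np.arange(24).reshape(2, 3, 4)
# a.size mirrors PyArray_SIZE: the product of the entries in `dimensions`,
# whose length is `nd` (exposed as a.ndim).
assert a.size == reduce(operator.mul, a.shape, 1) == 24
assert a.ndim == len(a.shape) == 3
```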
| 5
|
2016-09-07T20:07:44Z
|
[
"python",
"c",
"numpy"
] |
OSError when trying to pip install shapely inside docker container
| 39,377,911
|
<p>Could not find library geos_c or load any of its variants ['libgeos_c.so.1', 'libgeos_c.so']</p>
<p>Using the python:3.5.1 image I am trying to run a container that, among other things, installs shapely via requirements.txt. When the docker container tries to install shapely I get the above error.</p>
<p>RUN apt-get install libgeos-dev</p>
<p>was something I saw trying to search the issue but that returns unable to locate package libgeos-dev</p>
<p>summary:</p>
<p>expected conditions: including shapely in the requirements.txt file results in shapely being installed when the docker container is built<br>
actual conditions: an error message is received during build: <code>Could not find library geos_c or load any of its variants ['libgeos_c.so.1', 'libgeos_c.so']</code></p>
<p>Steps to reproduce:</p>
<p>use docker-compose to build with:</p>
<p>Docker-compose.yml: </p>
<pre><code>app:
build: ${APP_REPO}
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM python:3.5.1-onbuild
</code></pre>
<p>Requirements.txt:</p>
<pre><code>shapely
</code></pre>
<p>(Simplified to attempt to isolate issues.)</p>
| 2
|
2016-09-07T19:44:41Z
| 39,399,690
|
<p>I found a solution from: <a href="https://github.com/calendar42/docker-python-geos/blob/master/Dockerfile" rel="nofollow">https://github.com/calendar42/docker-python-geos/blob/master/Dockerfile</a></p>
<pre><code>ENV PYTHONUNBUFFERED 1
#### Install GEOS ####
# Inspired by: https://hub.docker.com/r/cactusbone/postgres-postgis-sfcgal/~/dockerfile/
ENV GEOS http://download.osgeo.org/geos/geos-3.5.0.tar.bz2
#TODO make PROCESSOR_COUNT dynamic
#built by docker.io, so reducing to 1. increase to match build server processor count as needed
ENV PROCESSOR_COUNT 1
WORKDIR /install-postgis
WORKDIR /install-postgis/geos
ADD $GEOS /install-postgis/geos.tar.bz2
RUN tar xf /install-postgis/geos.tar.bz2 -C /install-postgis/geos --strip-components=1
RUN ./configure && make -j $PROCESSOR_COUNT && make install
RUN ldconfig
WORKDIR /install-postgis
</code></pre>
<p>I copied this into my dockerfile before the line</p>
<pre><code>pip install -r requirements.txt
</code></pre>
<p>and the shapely install worked.</p>
<p>It stalls out doing the build occasionally but the main problem was solved.</p>
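A possible lighter-weight alternative to building GEOS from source: the original <code>unable to locate package libgeos-dev</code> failure is often just a stale package index, since Debian-based images ship without apt lists. A sketch (assuming libgeos-dev is available in the image's Debian release):

```dockerfile
# Refresh the package index and install GEOS in a single layer,
# then clean up the lists to keep the image small
RUN apt-get update \
    && apt-get install -y libgeos-dev \
    && rm -rf /var/lib/apt/lists/*
```

Placed before the `pip install` line, this lets shapely link against the distribution's GEOS instead of a source build.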
| 1
|
2016-09-08T20:30:36Z
|
[
"python",
"docker-compose",
"shapely",
"geos"
] |
Combine every two lines while reading .txt file in python
| 39,377,936
|
<p>I'm currently working with very large files in Python that look like</p>
<pre><code>junk
junk
junk
--- intermediate:
1489 pi0 111 [686] (1491,1492)
0.534 -0.050 -0.468 0.724 0.135
1499 pi0 111 [690] (1501,1502)
-1.131 0.503 12.751 12.812 0.135
--- final:
32 e- 11 [7]
9.072 20.492 499.225 499.727 0.001
33 e+ -11 [6]
-11.317 -17.699 2632.568 2632.652 0.001
12 s 3 [10] (91) >43 {+5}
2.946 0.315 94.111 94.159 0.500
14 g 21 [11] (60,61) 34>>16 {+7,-6}
-0.728 3.329 5.932 6.907 0.950
------------------------------------------------------------------------------
junk
junk
--- intermediate:
repeat
</code></pre>
<p>I want to combine every two lines after the "---final" line until the "----------------" line. For example, I'd like an output file to read</p>
<pre><code> 32 e- 11 [7] 9.072 20.492 499.225 499.727 0.001
33 e+ -11 [6] -11.317 -17.699 2632.568 2632.652 0.001
12 s 3 [10] 2.946 0.315 94.111 94.159 0.500
14 g 21 [11] -0.728 3.329 5.932 6.907 0.950
</code></pre>
<p>Notice how I'm omitting the extra entries in the lines without spaces. My current approach is</p>
<pre><code>start = False
for line in myfile:
    line = line.strip()
    fields = line.split()
    if len(fields)==0:
        continue
    if not start:
        if fields[0] == "----final:":
            start = True
        continue
</code></pre>
<p>len(fields)==0 should end the script at the "---------" line and continue until it sees another "----final" line. What I currently don't know how to do is combine the two lines together while omitting the extra information in the lines without spaces. Any suggestions? </p>
| 2
|
2016-09-07T19:46:25Z
| 39,378,360
|
<p>A quick and dirty way of merging every other line:</p>
<pre><code>for i in range(0, len(lines), 2):
    fields1 = lines[i].strip().split()
    fields2 = lines[i+1].strip().split()
    print("\t".join(fields1[:4] + fields2))
</code></pre>
<p>Note that this assumes all the lines to be merged have already been extracted into a list called <code>lines</code>, and that the number (4) of fields kept from every first line is simply hard-coded.</p>
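An equivalent way to pair the lines is to zip the even- and odd-indexed entries; a sketch using a hand-copied fragment of the sample data:

```python
lines = [
    " 32    e-       11    [7]",
    "       9.072     20.492    499.225    499.727      0.001",
    " 12    s         3   [10]     (91)  >43 {+5}",
    "       2.946      0.315     94.111     94.159      0.500",
]

# Pair line 0 with 1, 2 with 3, ...; keep only the first four fields of
# each header line, dropping the (91) / >43 / {+5} extras.
merged = ["\t".join(a.split()[:4] + b.split())
          for a, b in zip(lines[::2], lines[1::2])]
# merged[0] == "32\te-\t11\t[7]\t9.072\t20.492\t499.225\t499.727\t0.001"
```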
| 0
|
2016-09-07T20:19:40Z
|
[
"python"
] |
Combine every two lines while reading .txt file in python
| 39,377,936
|
<p>I'm currently working with very large files in Python that look like</p>
<pre><code>junk
junk
junk
--- intermediate:
1489 pi0 111 [686] (1491,1492)
0.534 -0.050 -0.468 0.724 0.135
1499 pi0 111 [690] (1501,1502)
-1.131 0.503 12.751 12.812 0.135
--- final:
32 e- 11 [7]
9.072 20.492 499.225 499.727 0.001
33 e+ -11 [6]
-11.317 -17.699 2632.568 2632.652 0.001
12 s 3 [10] (91) >43 {+5}
2.946 0.315 94.111 94.159 0.500
14 g 21 [11] (60,61) 34>>16 {+7,-6}
-0.728 3.329 5.932 6.907 0.950
------------------------------------------------------------------------------
junk
junk
--- intermediate:
repeat
</code></pre>
<p>I want to combine every two lines after the "---final" line until the "----------------" line. For example, I'd like an output file to read</p>
<pre><code> 32 e- 11 [7] 9.072 20.492 499.225 499.727 0.001
33 e+ -11 [6] -11.317 -17.699 2632.568 2632.652 0.001
12 s 3 [10] 2.946 0.315 94.111 94.159 0.500
14 g 21 [11] -0.728 3.329 5.932 6.907 0.950
</code></pre>
<p>Notice how I'm omitting the extra entries in the lines without spaces. My current approach is</p>
<pre><code>start = False
for line in myfile:
    line = line.strip()
    fields = line.split()
    if len(fields)==0:
        continue
    if not start:
        if fields[0] == "----final:":
            start = True
        continue
</code></pre>
<p>len(fields)==0 should end the script at the "---------" line and continue until it sees another "----final" line. What I currently don't know how to do is combine the two lines together while omitting the extra information in the lines without spaces. Any suggestions? </p>
| 2
|
2016-09-07T19:46:25Z
| 39,378,417
|
<p>as long as you know the exact lines that surround the section you want:</p>
<pre><code>#split the large text into lines
lines = large_text.split('\n')

#get the indexes of the beginning and end of your target section
idx_start = lines.index("--- final:")
idx_finish = lines.index("------------------------------------------------------------------------------")

#iterate through the section in steps of 2, split on spaces, remove empty strings, print them as tab delimited
for idx in range(idx_start+1, idx_finish, 2):
    out = list(filter(None, (lines[idx]+lines[idx+1]).split(" ")))
    print("\t".join(out))
</code></pre>
<p>Where <code>large_text</code> is the file imported as a giant string.</p>
<p><strong>EDIT</strong>
In order to open the file 'large_text.txt' as a string try this (note: use <code>f.read().split("\n")</code> rather than <code>f.readlines()</code>, because <code>readlines()</code> keeps the trailing newlines and <code>lines.index("--- final:")</code> would never match):</p>
<pre><code>with open('large_text.txt','r') as f:
    #split the large text into lines
    lines = f.read().split("\n")
    #get the indexes of the beginning and end of your target section
    idx_start = lines.index("--- final:")
    idx_finish = lines.index("------------------------------------------------------------------------------")
    #iterate through the section in steps of 2, split on spaces, remove empty strings, print them as tab delimited
    for idx in range(idx_start+1, idx_finish, 2):
        out = list(filter(None, (lines[idx]+lines[idx+1]).split(" ")))
        print("\t".join(out))
</code></pre>
<p><strong>Assumptions</strong></p>
<ol>
<li>You know the lines that seperate the section of interest (IE: "--- final:" )</li>
<li>Your values are space not tab delimited. If not change <code>split(" ")</code> to <code>split("\t")</code> </li>
</ol>
<p><strong>Should be the winner</strong>
Added that formatting fix to the one set of lines. Same assumptions hold true.</p>
<pre><code>with open('./large_text.txt','r') as f:
    #split the large text into lines
    lines = f.read().split("\n")
    #get the indexes of the beginning and end of your target section
    idx_start = lines.index("--- final:")
    idx_finish = lines.index("------------------------------------------------------------------------------")
    #iterate through the section in steps of 2, split on spaces, remove empty strings, print them as tab delimited
    for idx in range(idx_start+1, idx_finish, 2):
        line_spaces = list(filter(None, lines[idx].split(" ")))[0:4]
        other_line = list(filter(None, (lines[idx+1]).split(" ")))
        out = line_spaces + other_line
        print("\t".join(out))
</code></pre>
| 0
|
2016-09-07T20:24:22Z
|
[
"python"
] |
Combine every two lines while reading .txt file in python
| 39,377,936
|
<p>I'm currently working with very large files in Python that look like</p>
<pre><code>junk
junk
junk
--- intermediate:
1489 pi0 111 [686] (1491,1492)
0.534 -0.050 -0.468 0.724 0.135
1499 pi0 111 [690] (1501,1502)
-1.131 0.503 12.751 12.812 0.135
--- final:
32 e- 11 [7]
9.072 20.492 499.225 499.727 0.001
33 e+ -11 [6]
-11.317 -17.699 2632.568 2632.652 0.001
12 s 3 [10] (91) >43 {+5}
2.946 0.315 94.111 94.159 0.500
14 g 21 [11] (60,61) 34>>16 {+7,-6}
-0.728 3.329 5.932 6.907 0.950
------------------------------------------------------------------------------
junk
junk
--- intermediate:
repeat
</code></pre>
<p>I want to combine every two lines after the "---final" line until the "----------------" line. For example, I'd like an output file to read</p>
<pre><code> 32 e- 11 [7] 9.072 20.492 499.225 499.727 0.001
33 e+ -11 [6] -11.317 -17.699 2632.568 2632.652 0.001
12 s 3 [10] 2.946 0.315 94.111 94.159 0.500
14 g 21 [11] -0.728 3.329 5.932 6.907 0.950
</code></pre>
<p>Notice how I'm omitting the extra entries in the lines without spaces. My current approach is</p>
<pre><code>start = False
for line in myfile:
    line = line.strip()
    fields = line.split()
    if len(fields)==0:
        continue
    if not start:
        if fields[0] == "----final:":
            start = True
        continue
</code></pre>
<p>len(fields)==0 should end the script at the "---------" line and continue until it sees another "----final" line. What I currently don't know how to do is combine the two lines together while omitting the extra information in the lines without spaces. Any suggestions? </p>
| 2
|
2016-09-07T19:46:25Z
| 39,378,595
|
<p>You could solve your problem with the newer <a href="https://pypi.python.org/pypi/regex" rel="nofollow"><strong><code>regex</code></strong></a> module and some regular expressions:</p>
<pre><code>import regex as re

rx = re.compile(r'''(?V1)
    (?:^---\ final:[\n\r])|(?:\G(?!\A))
    ^(\ *\d+.+?)\ *$[\n\r]
    ^\ +(.+)$[\n\r]
    ''', re.MULTILINE | re.VERBOSE)

junky_string = your_string

matches = [" ".join(match.groups())
           for match in rx.finditer(junky_string)
           if match.group(1) is not None]

print(matches)
# [' 32 e- 11 [7] 9.072 20.492 499.225 499.727 0.001',
# ' 33 e+ -11 [6] -11.317 -17.699 2632.568 2632.652 0.001',
# ' 12 s 3 [10] (91) >43 {+5} 2.946 0.315 94.111 94.159 0.500',
# ' 14 g 21 [11] (60,61) 34>>16 {+7,-6} -0.728 3.329 5.932 6.907 0.950']
</code></pre>
<p>This looks for <code>--- final:</code> at the very beginning of the line or spaces, followed by digits <strong>after</strong> having matched <code>--- final:</code> (study the <a href="https://regex101.com/r/nY0uH4/1" rel="nofollow"><strong>explanation on regex101.com</strong></a> for more details).<br>
The matched items are joined with a single space afterwards.</p>
| 0
|
2016-09-07T20:37:21Z
|
[
"python"
] |
Responding to new_session_created messages in the telegram.org API
| 39,377,938
|
<p>In my telegram client I go through the seemingly typical process of creating a valid session:</p>
<ol>
<li>Generate a random session_id</li>
<li>Create an auth key</li>
<li>Call <code>initConnection</code> with <code>getNearestDc</code></li>
<li>Switch to the nearest DC, which involves a new random session_id and
auth_key</li>
<li>Attempt a <code>sendCode</code> command, which results in another switch to the
correct DC</li>
</ol>
<p>At various points in this process I receive <code>MessageContainers</code> from the server indicating status and <code>MsgAcks</code>, etc. This is expected and I am now decoding them properly. Some of these messages are of type <code>new_session_created</code> and look like this:</p>
<pre><code>{'msg': {u'new_session_created': {u'first_msg_id': 6327252208304908288L, u'unique_id': -4911750325772918873L, u'server_salt': 6799011375684265530L}}, 'seqno': 1, 'msg_id': 6327252210221112321L}
</code></pre>
<p>My current server_salt is different than that in this message. Do I need to switch to using the new salt? What about unique_id, is that my new session_id or do I just ignore these messages?</p>
<p>After sorting all this out, what part of the session do I need to save for the next time the client starts up? The session_id, auth_key, auth_key_id?</p>
| 2
|
2016-09-07T19:46:27Z
| 39,378,210
|
<p>Save and re-use this new salt you just received for your next requests in this session.</p>
<p>To do a subsequent login all you need is the <code>session_id</code>, <code>recent_salt</code> and the <code>auth_key</code>.</p>
<p><code>Auth_key_id</code> is computed from the <code>auth_key</code>, so you may or may not choose to store it.</p>
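For completeness, the derivation can be sketched as follows; per the MTProto description, auth_key_id is the lower-order 64 bits of SHA1 of the auth_key (the zero-filled key below is just a placeholder, not a real key):

```python
import hashlib

def auth_key_id(auth_key):
    # Lower-order 64 bits of SHA1(auth_key), i.e. the last 8 bytes
    # of the 20-byte SHA1 digest.
    return hashlib.sha1(auth_key).digest()[-8:]

# Placeholder: a real MTProto auth_key is 256 bytes from the DH exchange.
key_id = auth_key_id(b"\x00" * 256)
assert len(key_id) == 8
```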
| 1
|
2016-09-07T20:08:55Z
|
[
"python",
"telegram"
] |
Why does Python's set difference method take time with an empty set?
| 39,378,043
|
<p>Here is what I mean:</p>
<pre><code>> python -m timeit "set().difference(xrange(0,10))"
1000000 loops, best of 3: 0.624 usec per loop
> python -m timeit "set().difference(xrange(0,10**4))"
10000 loops, best of 3: 170 usec per loop
</code></pre>
<p>Apparently python iterates through the whole argument, even if the result is known to be the empty set beforehand. Is there any good reason for this? The code was run in python 2.7.6.</p>
<p>(Even for nonempty sets, if you find that you've removed all of the first set's elements midway through the iteration, it makes sense to stop right away.)</p>
| 12
|
2016-09-07T19:55:12Z
| 39,397,175
|
<p>IMO it's a matter of specialisation, consider:</p>
<pre><code>In [18]: r = range(10 ** 4)
In [19]: s = set(range(10 ** 4))
In [20]: %time set().difference(r)
CPU times: user 387 µs, sys: 0 ns, total: 387 µs
Wall time: 394 µs
Out[20]: set()
In [21]: %time set().difference(s)
CPU times: user 10 µs, sys: 8 µs, total: 18 µs
Wall time: 16.2 µs
Out[21]: set()
</code></pre>
<p>Apparently difference has specialised implementation for <code>set - set</code>.</p>
<p>Note that the difference <em>operator</em> (<code>-</code>) requires the right-hand argument to be a set, while the <code>difference</code> method allows any iterable.</p>
<p>Per @wim implementation is at <a href="https://github.com/python/cpython/blob/master/Objects/setobject.c#L1553-L1555" rel="nofollow">https://github.com/python/cpython/blob/master/Objects/setobject.c#L1553-L1555</a></p>
| 4
|
2016-09-08T17:42:54Z
|
[
"python",
"performance",
"set",
"operators"
] |
Why does Python's set difference method take time with an empty set?
| 39,378,043
|
<p>Here is what I mean:</p>
<pre><code>> python -m timeit "set().difference(xrange(0,10))"
1000000 loops, best of 3: 0.624 usec per loop
> python -m timeit "set().difference(xrange(0,10**4))"
10000 loops, best of 3: 170 usec per loop
</code></pre>
<p>Apparently python iterates through the whole argument, even if the result is known to be the empty set beforehand. Is there any good reason for this? The code was run in python 2.7.6.</p>
<p>(Even for nonempty sets, if you find that you've removed all of the first set's elements midway through the iteration, it makes sense to stop right away.)</p>
| 12
|
2016-09-07T19:55:12Z
| 39,431,151
|
<p>When Python core developers add new features, the first priority is correct code with thorough test coverage. That is hard enough in itself. Speedups often come later as someone has the idea and inclination. I opened a tracker issue <a href="https://bugs.python.org/issue28071" rel="nofollow">28071</a> summarizing the proposal and counter-reasons discussed here. I will try to summarize its disposition here.</p>
<p>UPDATE: An early-out for sets that start empty has been added for 3.6.0b1, due in about a day.</p>
| 2
|
2016-09-10T22:29:47Z
|
[
"python",
"performance",
"set",
"operators"
] |
Why does Python's set difference method take time with an empty set?
| 39,378,043
|
<p>Here is what I mean:</p>
<pre><code>> python -m timeit "set().difference(xrange(0,10))"
1000000 loops, best of 3: 0.624 usec per loop
> python -m timeit "set().difference(xrange(0,10**4))"
10000 loops, best of 3: 170 usec per loop
</code></pre>
<p>Apparently python iterates through the whole argument, even if the result is known to be the empty set beforehand. Is there any good reason for this? The code was run in python 2.7.6.</p>
<p>(Even for nonempty sets, if you find that you've removed all of the first set's elements midway through the iteration, it makes sense to stop right away.)</p>
| 12
|
2016-09-07T19:55:12Z
| 39,441,283
|
<blockquote>
<p>Is there any good reason for this? </p>
</blockquote>
<p>Having a special path for the empty set had not come up before.</p>
<blockquote>
<p>Even for nonempty sets, if you find that you've removed all of the first set's elements midway through the iteration, it makes sense to stop right away.</p>
</blockquote>
<p>This is a reasonable optimization request. I've made a <a href="http://bugs.python.org/file44565/set_diff_early_out.diff" rel="nofollow">patch</a> and will apply it shortly. Here are the new timings with the patch applied:</p>
<pre><code> $ py -m timeit -s "r = range(10 ** 4); s = set()" "s.difference(r)"
10000000 loops, best of 3: 0.104 usec per loop
$ py -m timeit -s "r = set(range(10 ** 4)); s = set()" "s.difference(r)"
10000000 loops, best of 3: 0.105 usec per loop
$ py -m timeit -s "r = range(10 ** 4); s = set()" "s.difference_update(r)"
10000000 loops, best of 3: 0.0659 usec per loop
$ py -m timeit -s "r = set(range(10 ** 4)); s = set()" "s.difference_update(r)"
10000000 loops, best of 3: 0.0684 usec per loop
</code></pre>
| 4
|
2016-09-11T22:32:19Z
|
[
"python",
"performance",
"set",
"operators"
] |
Django error: NoReverseMatch
| 39,378,074
|
<p>I'm using Django 1.10 and python 3.4</p>
<p>The precise error is</p>
<pre><code>NoReverseMatch at /movies/movie/Twilight/
Reverse for 'movie-details' with arguments '(8,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['movies/movie/(?P<movie_id>\\d+)|(?P<movie_name>[a-zA-Z\\ ]+)/$']
</code></pre>
<p>The error is caused by this line: <code>{% url 'moviesrating:movie-details' movie.id %}</code> in template <em>moviesrating/select_movie.html</em></p>
<p>In file <em>moviesrating/urls.py</em>, which is correctly included in the main urls file, there are those lines:</p>
<pre><code>app_name = 'moviesrating'
urlpatterns = [
url(r'^movie/(?P<movie_id>\d+)|(?P<movie_name>[a-zA-Z\ ]+)/$', view_movie, name = 'movie-details'),
]
</code></pre>
<p>which refers to function <em>view_movie</em> in <em>moviesrating/views.py</em>:</p>
<pre><code>def view_movie(request, movie_id, movie_name):
    if movie_id:
        movie = get_object_or_404(Movie, pk = movie_id)
    elif movie_name:
        try:
            movie = get_object_or_404(Movie, name = movie_name)
        except MultipleObjectsReturned: # There are two movies named 'Twilight'
            movies = get_list_or_404(Movie, name = movie_name)
            return render(request, "moviesrating/select_movie.html", {'movies': movies})
    else:
        movie = None
    return render(request, "moviesrating/movie.html", {'movie': movie})
</code></pre>
<p>The purpose of the url <em>/movies/movie/...</em> is to show a movie found by name or by id, the specific url pattern comes from this need.</p>
<p>The point is that the error shows that <strong>even if it doesn't find the reverse match it finds the right url pattern</strong> so I thought the pattern didn't match. Then I tried to change the line to:</p>
<pre><code>{% url 'moviesrating:movie-details' movie.id %}
{% url 'moviesrating:movie-details' movie_id=movie.id movie_name=None %}
{% url 'moviesrating:movie-details' movie.id None %}
{% url 'moviesrating.views.view_movie' movie.id %}
{% url 'moviesrating.views.view_movie' movie_id=movie.id %}
{% url 'moviesrating.views.view_movie' movie_id=movie.id movie_name=None %}
</code></pre>
<p>but none of them worked.</p>
<p>Viewing the link in the browser (ex. <em>http://localhost:8081/movies/movie/8</em>) works, so it's not that page that causes the problem.</p>
<p>At this point really I can't understand the problem, I've followed the passages explained in the <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial03/#s-removing-hardcoded-urls-in-templates" rel="nofollow">django docs</a> and read a lot of questions already asked but none of them solved my problem.</p>
<p>If something is unclear or more information is needed please let me know and I will edit the post. Thanks</p>
<p>Here is the full stacktrace of the error:</p>
<pre><code>Environment:
Request Method: GET
Request URL: http://localhost:8081/movies/movie/Twilight/
Django Version: 1.10.1
Python Version: 3.5.2
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'todolist.apps.TodolistConfig',
'moviesrating.apps.MoviesratingConfig']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Template error:
In template C:\Users\fra\Programmazione\Python\myserver\moviesrating\templates\moviesrating\select_movie.html, error at line 12
Reverse for 'movie-details' with arguments '(8,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['movies/movie/(?P<movie_id>\\d+)|(?P<movie_name>[a-zA-Z\\ ]+)/$'] 2 : <html>
3 : <head>
4 : <meta charset="ISO-8859-1">
5 : <title>Choice your movie</title>
6 : </head>
7 : <body>
8 : <h2>Choice of which movie named {{ movies.0.name }} would you see the details</h2>
9 :
10 : <ul>
11 : {% for movie in movies %}
12 : <li> {% url 'moviesrating:movie-details' movie.id %} </li>
13 : <li><a href="{{ movie_url }}">{{ movie.name }} del {{ movie.year }} diretto da {{ movie.director }}</a></li>
14 : {% endfor %}
15 : </ul>
16 : </body>
17 : </html>
18 :
Traceback:
File "C:\Users\fra\Programmazione\Python\myserver\moviesrating\views.py" in view_movie
31. movie = get_object_or_404(Movie, name = movie_name)
File "C:\Program Files (x86)\Python\lib\site-packages\django\shortcuts.py" in get_object_or_404
85. return queryset.get(*args, **kwargs)
File "C:\Program Files (x86)\Python\lib\site-packages\django\db\models\query.py" in get
389. (self.model._meta.object_name, num)
During handling of the above exception (get() returned more than one Movie -- it returned 2!), another exception occurred:
File "C:\Program Files (x86)\Python\lib\site-packages\django\core\handlers\exception.py" in inner
39. response = get_response(request)
File "C:\Program Files (x86)\Python\lib\site-packages\django\core\handlers\base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "C:\Program Files (x86)\Python\lib\site-packages\django\core\handlers\base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\fra\Programmazione\Python\myserver\moviesrating\views.py" in view_movie
34. return render(request, "moviesrating/select_movie.html", {'movies': movies})
File "C:\Program Files (x86)\Python\lib\site-packages\django\shortcuts.py" in render
30. content = loader.render_to_string(template_name, context, request, using=using)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\loader.py" in render_to_string
68. return template.render(context, request)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\backends\django.py" in render
66. return self.template.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render
208. return self._render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in _render
199. return self.nodelist.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render
994. bit = node.render_annotated(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\defaulttags.py" in render
209. nodelist.append(node.render_annotated(context))
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\defaulttags.py" in render
439. url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)
File "C:\Program Files (x86)\Python\lib\site-packages\django\urls\base.py" in reverse
91. return force_text(iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)))
File "C:\Program Files (x86)\Python\lib\site-packages\django\urls\resolvers.py" in _reverse_with_prefix
392. (lookup_view_s, args, kwargs, len(patterns), patterns)
Exception Type: NoReverseMatch at /movies/movie/Twilight/
Exception Value: Reverse for 'movie-details' with arguments '(8,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['movies/movie/(?P<movie_id>\\d+)|(?P<movie_name>[a-zA-Z\\ ]+)/$']
</code></pre>
| 3
|
2016-09-07T19:57:57Z
| 39,378,334
|
<p>Django's <code>reverse()</code> cannot handle disjunctive patterns (using a <code>|</code>) outside of a capturing group. It's one of those things you'd hope someone would've fixed somewhere in the past 10 or so years, but this limitation has been around <a href="https://github.com/django/django/blob/978a00e39fee25cfa99065285b0de88366710fad/django/utils/regex_helper.py#L97" rel="nofollow">since 1.0</a>.</p>
<p>A workaround is to split up the pattern into two patterns:</p>
<pre><code>urlpatterns = [
url(r'^movie/(?P<movie_id>\d+)/$', view_movie, name='movie-details'),
url(r'^movie/(?P<movie_name>[a-zA-Z\ ]+)/$', view_movie, name='movie-details'),
]
</code></pre>
<p>You'll need to add a default to both parameters:</p>
<pre><code>def view_movie(request, movie_id=None, movie_name=None):
...
</code></pre>
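<p>The split works because each URL now needs only one named group. A quick stand-alone check with Python's <code>re</code> module (a sketch using the patterns above, independent of Django) shows both kinds of path still resolve:</p>

```python
import re

# The original alternation, split into the two patterns suggested above.
id_pat = re.compile(r'^movie/(?P<movie_id>\d+)/$')
name_pat = re.compile(r'^movie/(?P<movie_name>[a-zA-Z ]+)/$')

def resolve(path):
    """Return the captured kwargs for the first pattern that matches, else None."""
    for pat in (id_pat, name_pat):
        m = pat.match(path)
        if m:
            # Each split pattern has a single named group, so no None values remain.
            return {k: v for k, v in m.groupdict().items() if v is not None}
    return None

print(resolve('movie/8/'))         # {'movie_id': '8'}
print(resolve('movie/Twilight/'))  # {'movie_name': 'Twilight'}
```

Reversing is then unambiguous: passing <code>movie_id</code> picks the first pattern, passing <code>movie_name</code> picks the second.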
| 1
|
2016-09-07T20:17:56Z
|
[
"python",
"django"
] |
Django error: NoReverseMatch
| 39,378,074
|
<p>I'm using Django 1.10 and python 3.4</p>
<p>The precise error is</p>
<pre><code>NoReverseMatch at /movies/movie/Twilight/
Reverse for 'movie-details' with arguments '(8,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['movies/movie/(?P<movie_id>\\d+)|(?P<movie_name>[a-zA-Z\\ ]+)/$']
</code></pre>
<p>The error is caused by this line: <code>{% url 'moviesrating:movie-details' movie.id %}</code> in template <em>moviesrating/select_movie.html</em></p>
<p>In file <em>moviesrating/urls.py</em>, which is correctly included in the main urls file, there are those lines:</p>
<pre><code>app_name = 'moviesrating'
urlpatterns = [
url(r'^movie/(?P<movie_id>\d+)|(?P<movie_name>[a-zA-Z\ ]+)/$', view_movie, name = 'movie-details'),
]
</code></pre>
<p>which refers to function <em>view_movie</em> in <em>moviesrating/views.py</em>:</p>
<pre><code>def view_movie(request, movie_id, movie_name):
if movie_id:
movie = get_object_or_404(Movie, pk = movie_id)
elif movie_name:
try:
movie = get_object_or_404(Movie, name = movie_name)
except MultipleObjectsReturned: # There are two movies named 'Twilight'
movies = get_list_or_404(Movie, name = movie_name)
return render(request, "moviesrating/select_movie.html", {'movies': movies})
else:
movie = None
return render(request, "moviesrating/movie.html", {'movie': movie})
</code></pre>
<p>The purpose of the url <em>/movies/movie/...</em> is to show a movie looked up either by name or by id; the combined url pattern comes from that need.</p>
<p>The point is that the error shows that <strong>even though it doesn't find a reverse match, it does find the right url pattern</strong>, so a simple pattern mismatch doesn't seem to be the problem. I then tried to change the line to:</p>
<pre><code>{% url 'moviesrating:movie-details' movie.id %}
{% url 'moviesrating:movie-details' movie_id=movie.id movie_name=None %}
{% url 'moviesrating:movie-details' movie.id None %}
{% url 'moviesrating.views.view_movie' movie.id %}
{% url 'moviesrating.views.view_movie' movie_id=movie.id %}
{% url 'moviesrating.views.view_movie' movie_id=movie.id movie_name=None %}
</code></pre>
<p>but none of them worked.</p>
<p>Viewing the link in the browser (e.g. <em>http://localhost:8081/movies/movie/8</em>) works, so it's not that page that causes the problem.</p>
<p>At this point I really can't understand the problem. I've followed the steps explained in the <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial03/#s-removing-hardcoded-urls-in-templates" rel="nofollow">django docs</a> and read a lot of the questions already asked, but none of them solved my problem.</p>
<p>If something is unclear or more information is needed please let me know and I will edit the post. Thanks</p>
<p>Here is the full stacktrace of the error:</p>
<pre><code>Environment:
Request Method: GET
Request URL: http://localhost:8081/movies/movie/Twilight/
Django Version: 1.10.1
Python Version: 3.5.2
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'todolist.apps.TodolistConfig',
'moviesrating.apps.MoviesratingConfig']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Template error:
In template C:\Users\fra\Programmazione\Python\myserver\moviesrating\templates\moviesrating\select_movie.html, error at line 12
Reverse for 'movie-details' with arguments '(8,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['movies/movie/(?P<movie_id>\\d+)|(?P<movie_name>[a-zA-Z\\ ]+)/$'] 2 : <html>
3 : <head>
4 : <meta charset="ISO-8859-1">
5 : <title>Choice your movie</title>
6 : </head>
7 : <body>
8 : <h2>Choice of which movie named {{ movies.0.name }} would you see the details</h2>
9 :
10 : <ul>
11 : {% for movie in movies %}
12 : <li> {% url 'moviesrating:movie-details' movie.id %} </li>
13 : <li><a href="{{ movie_url }}">{{ movie.name }} del {{ movie.year }} diretto da {{ movie.director }}</a></li>
14 : {% endfor %}
15 : </ul>
16 : </body>
17 : </html>
18 :
Traceback:
File "C:\Users\fra\Programmazione\Python\myserver\moviesrating\views.py" in view_movie
31. movie = get_object_or_404(Movie, name = movie_name)
File "C:\Program Files (x86)\Python\lib\site-packages\django\shortcuts.py" in get_object_or_404
85. return queryset.get(*args, **kwargs)
File "C:\Program Files (x86)\Python\lib\site-packages\django\db\models\query.py" in get
389. (self.model._meta.object_name, num)
During handling of the above exception (get() returned more than one Movie -- it returned 2!), another exception occurred:
File "C:\Program Files (x86)\Python\lib\site-packages\django\core\handlers\exception.py" in inner
39. response = get_response(request)
File "C:\Program Files (x86)\Python\lib\site-packages\django\core\handlers\base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "C:\Program Files (x86)\Python\lib\site-packages\django\core\handlers\base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\fra\Programmazione\Python\myserver\moviesrating\views.py" in view_movie
34. return render(request, "moviesrating/select_movie.html", {'movies': movies})
File "C:\Program Files (x86)\Python\lib\site-packages\django\shortcuts.py" in render
30. content = loader.render_to_string(template_name, context, request, using=using)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\loader.py" in render_to_string
68. return template.render(context, request)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\backends\django.py" in render
66. return self.template.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render
208. return self._render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in _render
199. return self.nodelist.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render
994. bit = node.render_annotated(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\defaulttags.py" in render
209. nodelist.append(node.render_annotated(context))
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\defaulttags.py" in render
439. url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)
File "C:\Program Files (x86)\Python\lib\site-packages\django\urls\base.py" in reverse
91. return force_text(iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)))
File "C:\Program Files (x86)\Python\lib\site-packages\django\urls\resolvers.py" in _reverse_with_prefix
392. (lookup_view_s, args, kwargs, len(patterns), patterns)
Exception Type: NoReverseMatch at /movies/movie/Twilight/
Exception Value: Reverse for 'movie-details' with arguments '(8,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['movies/movie/(?P<movie_id>\\d+)|(?P<movie_name>[a-zA-Z\\ ]+)/$']
</code></pre>
| 3
|
2016-09-07T19:57:57Z
| 39,378,358
|
<p>You are only sending one parameter to your view, though it expects two. If you want to stick with a single parameter that can be interpreted as either an id or a name, why not accept an alphanumeric parameter:</p>
<pre><code>urlpatterns = [
    # [A-Za-z0-9 ] rather than [A-z0-9]: the A-z range also matches
    # characters like [, \ and _, and the space keeps multi-word names working
    url(r'^movie/(?P<movie_id>[A-Za-z0-9 ]+)/$', view_movie, name='movie-details'),
]
</code></pre>
<p>and then check in your view which one it is, so you know whether to treat it as an id or a name:</p>
<pre><code>def view_movie(request, movie_id):
    try:
        int(movie_id)
        movie_name = False
    except ValueError:  # not a number, so treat the parameter as a name
        movie_name = True
    if not movie_name:
        movie = get_object_or_404(Movie, pk=movie_id)
    else:
        # look the movie up by name instead
        # etc.
</code></pre>
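<p>The try/int dispatch can also be written with <code>str.isdigit()</code>. A minimal stand-alone sketch of the same idea (the <code>classify</code> helper is hypothetical, not part of the answer's view):</p>

```python
def classify(param):
    """Decide whether a URL parameter is a primary key or a name."""
    # Digits-only -> treat as a primary key; anything else -> treat as a name.
    if param.isdigit():
        return ('id', int(param))
    return ('name', param)

print(classify('8'))         # ('id', 8)
print(classify('Twilight'))  # ('name', 'Twilight')
```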
| 0
|
2016-09-07T20:19:21Z
|
[
"python",
"django"
] |
Django error: NoReverseMatch
| 39,378,074
|
<p>I'm using Django 1.10 and python 3.4</p>
<p>The precise error is</p>
<pre><code>NoReverseMatch at /movies/movie/Twilight/
Reverse for 'movie-details' with arguments '(8,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['movies/movie/(?P<movie_id>\\d+)|(?P<movie_name>[a-zA-Z\\ ]+)/$']
</code></pre>
<p>The error is caused by this line: <code>{% url 'moviesrating:movie-details' movie.id %}</code> in template <em>moviesrating/select_movie.html</em></p>
<p>In file <em>moviesrating/urls.py</em>, which is correctly included in the main urls file, there are those lines:</p>
<pre><code>app_name = 'moviesrating'
urlpatterns = [
url(r'^movie/(?P<movie_id>\d+)|(?P<movie_name>[a-zA-Z\ ]+)/$', view_movie, name = 'movie-details'),
]
</code></pre>
<p>which refers to function <em>view_movie</em> in <em>moviesrating/views.py</em>:</p>
<pre><code>def view_movie(request, movie_id, movie_name):
if movie_id:
movie = get_object_or_404(Movie, pk = movie_id)
elif movie_name:
try:
movie = get_object_or_404(Movie, name = movie_name)
except MultipleObjectsReturned: # There are two movies named 'Twilight'
movies = get_list_or_404(Movie, name = movie_name)
return render(request, "moviesrating/select_movie.html", {'movies': movies})
else:
movie = None
return render(request, "moviesrating/movie.html", {'movie': movie})
</code></pre>
<p>The purpose of the url <em>/movies/movie/...</em> is to show a movie looked up either by name or by id; the combined url pattern comes from that need.</p>
<p>The point is that the error shows that <strong>even though it doesn't find a reverse match, it does find the right url pattern</strong>, so a simple pattern mismatch doesn't seem to be the problem. I then tried to change the line to:</p>
<pre><code>{% url 'moviesrating:movie-details' movie.id %}
{% url 'moviesrating:movie-details' movie_id=movie.id movie_name=None %}
{% url 'moviesrating:movie-details' movie.id None %}
{% url 'moviesrating.views.view_movie' movie.id %}
{% url 'moviesrating.views.view_movie' movie_id=movie.id %}
{% url 'moviesrating.views.view_movie' movie_id=movie.id movie_name=None %}
</code></pre>
<p>but none of them worked.</p>
<p>Viewing the link in the browser (e.g. <em>http://localhost:8081/movies/movie/8</em>) works, so it's not that page that causes the problem.</p>
<p>At this point I really can't understand the problem. I've followed the steps explained in the <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial03/#s-removing-hardcoded-urls-in-templates" rel="nofollow">django docs</a> and read a lot of the questions already asked, but none of them solved my problem.</p>
<p>If something is unclear or more information is needed please let me know and I will edit the post. Thanks</p>
<p>Here is the full stacktrace of the error:</p>
<pre><code>Environment:
Request Method: GET
Request URL: http://localhost:8081/movies/movie/Twilight/
Django Version: 1.10.1
Python Version: 3.5.2
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'todolist.apps.TodolistConfig',
'moviesrating.apps.MoviesratingConfig']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Template error:
In template C:\Users\fra\Programmazione\Python\myserver\moviesrating\templates\moviesrating\select_movie.html, error at line 12
Reverse for 'movie-details' with arguments '(8,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['movies/movie/(?P<movie_id>\\d+)|(?P<movie_name>[a-zA-Z\\ ]+)/$'] 2 : <html>
3 : <head>
4 : <meta charset="ISO-8859-1">
5 : <title>Choice your movie</title>
6 : </head>
7 : <body>
8 : <h2>Choice of which movie named {{ movies.0.name }} would you see the details</h2>
9 :
10 : <ul>
11 : {% for movie in movies %}
12 : <li> {% url 'moviesrating:movie-details' movie.id %} </li>
13 : <li><a href="{{ movie_url }}">{{ movie.name }} del {{ movie.year }} diretto da {{ movie.director }}</a></li>
14 : {% endfor %}
15 : </ul>
16 : </body>
17 : </html>
18 :
Traceback:
File "C:\Users\fra\Programmazione\Python\myserver\moviesrating\views.py" in view_movie
31. movie = get_object_or_404(Movie, name = movie_name)
File "C:\Program Files (x86)\Python\lib\site-packages\django\shortcuts.py" in get_object_or_404
85. return queryset.get(*args, **kwargs)
File "C:\Program Files (x86)\Python\lib\site-packages\django\db\models\query.py" in get
389. (self.model._meta.object_name, num)
During handling of the above exception (get() returned more than one Movie -- it returned 2!), another exception occurred:
File "C:\Program Files (x86)\Python\lib\site-packages\django\core\handlers\exception.py" in inner
39. response = get_response(request)
File "C:\Program Files (x86)\Python\lib\site-packages\django\core\handlers\base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "C:\Program Files (x86)\Python\lib\site-packages\django\core\handlers\base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\fra\Programmazione\Python\myserver\moviesrating\views.py" in view_movie
34. return render(request, "moviesrating/select_movie.html", {'movies': movies})
File "C:\Program Files (x86)\Python\lib\site-packages\django\shortcuts.py" in render
30. content = loader.render_to_string(template_name, context, request, using=using)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\loader.py" in render_to_string
68. return template.render(context, request)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\backends\django.py" in render
66. return self.template.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render
208. return self._render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in _render
199. return self.nodelist.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render
994. bit = node.render_annotated(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\defaulttags.py" in render
209. nodelist.append(node.render_annotated(context))
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Program Files (x86)\Python\lib\site-packages\django\template\defaulttags.py" in render
439. url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)
File "C:\Program Files (x86)\Python\lib\site-packages\django\urls\base.py" in reverse
91. return force_text(iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)))
File "C:\Program Files (x86)\Python\lib\site-packages\django\urls\resolvers.py" in _reverse_with_prefix
392. (lookup_view_s, args, kwargs, len(patterns), patterns)
Exception Type: NoReverseMatch at /movies/movie/Twilight/
Exception Value: Reverse for 'movie-details' with arguments '(8,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['movies/movie/(?P<movie_id>\\d+)|(?P<movie_name>[a-zA-Z\\ ]+)/$']
</code></pre>
| 3
|
2016-09-07T19:57:57Z
| 39,378,517
|
<p>You can wrap the two alternatives in an outer group of parentheses:</p>
<pre><code>urlpatterns = [
url(r'^movie/((?P<movie_id>\d+)|(?P<movie_name>[a-zA-Z\ ]+))/$', view_movie, name = 'movie-details'),
]
</code></pre>
<p>and add defaults to view parameters:</p>
<pre><code>def view_movie(request, movie_id=None, movie_name=None):
if movie_id:
return HttpResponse('1')
if movie_name:
return HttpResponse('2')
</code></pre>
<p>I don't know exactly why this makes the difference, but I've tried it on my project and it works.</p>
| 0
|
2016-09-07T20:31:43Z
|
[
"python",
"django"
] |
Parsing XML from websites and save the code?
| 39,378,160
|
<p>I would like to parse the xml code from a website like
<a href="http://ops.epo.org/3.1/rest-services/published-data/publication/docdb/EP1000000/biblio" rel="nofollow">http://ops.epo.org/3.1/rest-services/published-data/publication/docdb/EP1000000/biblio</a>
and save it in another xml or csv file.</p>
<p>I tried it with this:</p>
<pre><code>import urllib.request
web_data = urllib.request.urlopen("http://ops.epo.org/3.1/rest-services/published-data/publication/docdb/EP1000000/biblio")
str_data = web_data.read()
try:
f = open("file.xml", "w")
f.write(str(str_data))
print("SUCCESS")
except:
print("ERROR")
</code></pre>
<p>But in the saved XML there is a literal '\n' between every element and a b' at the beginning.</p>
<p>How can I save the XML data without all the '\n' and the b' prefix?</p>
| 1
|
2016-09-07T20:03:54Z
| 39,378,426
|
<p><code>read()</code> returns the data as <code>bytes</code>, but you can save it without converting to <code>str()</code>. Just open the file in binary mode - <code>"wb"</code> - and write the data:</p>
<pre><code>import urllib.request
web_data = urllib.request.urlopen("http://ops.epo.org/3.1/rest-services/published-data/publication/docdb/EP1000000/biblio")
data = web_data.read()
try:
f = open("file.xml", "wb")
f.write(data)
print("SUCCESS")
except:
print("ERROR")
</code></pre>
<p>BTW: to convert <code>bytes</code> to <code>string/unicode</code> you have to use <code>decode('utf-8')</code>.
If you use <code>str()</code> then Python builds the string from the bytes' repr, which adds the <code>b'</code> prefix to inform you that your <code>data</code> holds <code>bytes</code>.</p>
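<p>The difference is easy to see in a small stand-alone snippet:</p>

```python
data = "<root/>\n".encode('utf-8')

# str() on bytes gives the repr: the b'' wrapper stays, and the newline
# becomes a literal backslash-n in the resulting text.
as_repr = str(data)
# decode() gives the actual text back, newline included.
as_text = data.decode('utf-8')

print(as_repr)  # b'<root/>\n'
print(as_text)  # <root/>
```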
| 0
|
2016-09-07T20:24:37Z
|
[
"python",
"xml",
"python-3.5"
] |
Parsing XML from websites and save the code?
| 39,378,160
|
<p>I would like to parse the xml code from a website like
<a href="http://ops.epo.org/3.1/rest-services/published-data/publication/docdb/EP1000000/biblio" rel="nofollow">http://ops.epo.org/3.1/rest-services/published-data/publication/docdb/EP1000000/biblio</a>
and save it in another xml or csv file.</p>
<p>I tried it with this:</p>
<pre><code>import urllib.request
web_data = urllib.request.urlopen("http://ops.epo.org/3.1/rest-services/published-data/publication/docdb/EP1000000/biblio")
str_data = web_data.read()
try:
f = open("file.xml", "w")
f.write(str(str_data))
print("SUCCESS")
except:
print("ERROR")
</code></pre>
<p>But in the saved XML data is between every element '\n' and at the beginning ' b' '</p>
<p>How can i save the XML data without all the 'n\' and ' b' '?</p>
| 1
|
2016-09-07T20:03:54Z
| 39,378,491
|
<p>If you write the xml file in binary mode, you don't need to convert the data into a string of characters first (which is what introduced the <code>b'</code> prefix and the literal <code>\n</code>s). Also, if you process the response a line at a time, you avoid holding the whole payload in memory. The logic of your code could also be structured a little better IMO, as shown below:</p>
<pre><code>import urllib.request

web_data = urllib.request.urlopen("http://ops.epo.org/3.1/rest-services"
                                  "/published-data/publication"
                                  "/docdb/EP1000000/biblio")
with open("file.xml", "wb") as f:
    for line in web_data:  # iterate the HTTP response line by line
        try:
            f.write(line)  # write each line of bytes, not the whole payload
        except Exception as exc:
            print('ERROR')
            print(str(exc))
            break
    else:
        print('SUCCESS')
</code></pre>
| 1
|
2016-09-07T20:30:05Z
|
[
"python",
"xml",
"python-3.5"
] |
Absolute path error when building wheel
| 39,378,247
|
<p><strong>OVERVIEW</strong></p>
<p>I'm trying to learn how to build wheels on my windows dev box so hopefully I'll have a nice way to deploy django websites on linux boxes. But right now I'm stuck with a little error.</p>
<p>Here's my setup.py:</p>
<pre><code>from setuptools import setup, find_packages
setup(name='pkg',
version="1.0",
packages=find_packages(),
data_files=[('/etc/nginx/sites-available', ['foo.conf'])]
)
</code></pre>
<p>When i try to do <code>>python setup.py bdist_wheel</code> I'm getting this error:</p>
<pre><code>raise ValueError, "path '%s' cannot be absolute" % pathname
</code></pre>
<p>It seems the way I'm using data_files is not supported. </p>
<p><strong>QUESTION</strong></p>
<p>What's the right way to deploy config files using <a href="http://pythonwheels.com/" rel="nofollow">wheels</a> & <a href="https://docs.python.org/2/distutils/setupscript.html" rel="nofollow">setup.py</a>?</p>
| 1
|
2016-09-07T20:11:27Z
| 39,417,056
|
<p>Wheels should be used for bundling Python code. They're not meant for configuration management, which is where Nginx configurations would typically be handled.</p>
<p>See also: <a href="http://stackoverflow.com/a/34204582/116042">http://stackoverflow.com/a/34204582/116042</a></p>
| 1
|
2016-09-09T17:42:56Z
|
[
"python",
"django",
"python-wheel"
] |
How to convert "12:45pm - 01:00pm Today, September 7" into UTC
| 39,378,313
|
<p>I have the following time string </p>
<pre><code>12:45pm - 01:00pm Today, September 7
</code></pre>
<p>and I want to convert this string into UTC datetime in the following format</p>
<pre><code>2016-09-07T22:45:00Z
</code></pre>
<p>How can I achieve this in python?</p>
| -1
|
2016-09-07T20:16:28Z
| 39,398,238
|
<p>I developed this script to achieve it:</p>
<pre><code>import datetime

def appointment_time_string(time_str):
    a = time_str.split()[0]
    in_time = datetime.datetime.strptime(a, '%I:%M%p')
    start_time = datetime.datetime.strftime(in_time, "%H:%M:%S") + "Z"
    if time_str.split()[3] == 'Today,':
        start_date = datetime.datetime.utcnow().strftime("%Y-%m-%dT")
    elif time_str.split()[3] == 'Tomorrow,':
        today = datetime.date.today()
        start_date = (today + datetime.timedelta(days=1)).strftime("%Y-%m-%dT")
    appointment_time = start_date + start_time
    return appointment_time
</code></pre>
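<p>Note that the script above only reformats the wall-clock time and never applies an offset, so the result is not actually UTC. Below is a stdlib-only sketch that does the conversion. The -10 hour local offset is an assumption chosen to reproduce the expected output in the question, the year is assumed because the string omits it, and the <code>Today,</code>/<code>Tomorrow,</code> token is ignored since an explicit date follows:</p>

```python
from datetime import datetime, timedelta, timezone

def to_utc_string(time_str, year=2016, local_offset_hours=-10):
    """Parse '12:45pm - 01:00pm Today, September 7' and return the start
    moment as a UTC timestamp like '2016-09-07T22:45:00Z'.

    year and local_offset_hours are assumptions: the input string carries
    neither a year nor a timezone.
    """
    parts = time_str.split()  # ['12:45pm', '-', '01:00pm', 'Today,', 'September', '7']
    start = datetime.strptime(parts[0], "%I:%M%p")
    day = datetime.strptime(" ".join(parts[4:]) + " " + str(year), "%B %d %Y")
    # Attach the assumed local offset, then convert to UTC.
    local = datetime(day.year, day.month, day.day, start.hour, start.minute,
                     tzinfo=timezone(timedelta(hours=local_offset_hours)))
    return local.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(to_utc_string("12:45pm - 01:00pm Today, September 7"))
# 2016-09-07T22:45:00Z
```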
| 0
|
2016-09-08T18:52:15Z
|
[
"python"
] |
Convert columns from a data frame to a list of dicts efficiently
| 39,378,331
|
<p>I am converting the columns of a data frame to a list of dictionaries; however, due to the number of columns and the number of observations in my data frame, I run out of memory with my current approach:</p>
<pre><code>df = pd.DataFrame(np.random.randn(10, 3), columns=['a', 'b', 'c'])
df.T.to_dict().values()
</code></pre>
<p>Is there a more efficient way I can do this?</p>
| 2
|
2016-09-07T20:17:44Z
| 39,378,372
|
<p>Is this what you want? <code>to_dict('r')</code> (short for the <code>'records'</code> orient) builds one dict per row without the intermediate transpose:</p>
<pre><code>In [9]: df.to_dict('r')
Out[9]:
[{'a': 1.3720225964856179,
'b': -1.1530341240730422,
'c': -0.18791193632296455},
{'a': 1.3283240103713496, 'b': 3.6614598433626959, 'c': -0.46395170547460196},
{'a': -1.4960282310010959,
'b': 0.25156344524211743,
'c': -1.3664311385849288},
{'a': -0.11601714495988308,
'b': -0.73400546410732148,
'c': 0.9131316189984563},
{'a': 0.27404065198912386,
'b': -3.1246509560345261,
'c': 0.67227710572588184},
{'a': 1.3390654954886572, 'b': -0.80535280826120292, 'c': -1.78092490531724},
{'a': -0.13911682611874573,
'b': 1.6846890792762916,
'c': 0.22985191293512194},
{'a': -0.22058925847227495,
'b': -0.29342906413451442,
'c': -1.1181888670510167},
{'a': 3.2190577575509951, 'b': 0.59152576294942738, 'c': -1.3474566325216308},
{'a': -0.53486658456919434, 'b': 0.14390073779727405, 'c': 1.2214292373636}]
</code></pre>
<p>data:</p>
<pre><code>In [10]: df
Out[10]:
a b c
0 1.372023 -1.153034 -0.187912
1 1.328324 3.661460 -0.463952
2 -1.496028 0.251563 -1.366431
3 -0.116017 -0.734005 0.913132
4 0.274041 -3.124651 0.672277
5 1.339065 -0.805353 -1.780925
6 -0.139117 1.684689 0.229852
7 -0.220589 -0.293429 -1.118189
8 3.219058 0.591526 -1.347457
9 -0.534867 0.143901 1.221429
</code></pre>
| 2
|
2016-09-07T20:20:49Z
|
[
"python",
"pandas",
"dictionary",
"dataframe"
] |
Remove nan rows in a scipy sparse matrix
| 39,378,363
|
<p>I am given a (normalized) sparse adjacency matrix and a list of labels for the respective matrix rows. Because some nodes have been removed by another sanitization function, there are some rows containing NaNs in the matrix. I want to find these rows and remove them <em>as well as their respective labels</em>. Here is the function I wrote:</p>
<pre class="lang-py prettyprint-override"><code>def sanitize_nan_rows(adj, labels):
# convert to numpy array and keep dimension
adj = np.array(adj, ndmin=2)
for i, row in enumerate(adj):
# check if row all nans
if np.all(np.isnan(row)):
# print("Removing nan row label in %s" % i)
# remove row index from labels
del labels[i]
# remove all nan rows
adj = adj[~np.all(np.isnan(adj), axis=1)]
# return sanitized adj and labels_clean
return adj, labels
</code></pre>
<p><code>labels</code> is a simple Python list and <code>adj</code> has the type <code><class 'scipy.sparse.lil.lil_matrix'></code> (containing elements of type <code><class 'numpy.float64'></code>), which are both the result of</p>
<pre><code>adj, labels = nx.attr_sparse_matrix(infected, normalized=True)
</code></pre>
<p>On execution I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-503-8a404b58eaa9> in <module>()
----> 1 adj, labels = sanitize_nans(adj, labels)
<ipython-input-502-ead99efec677> in sanitize_nans(adj, labels)
6 for i, row in enumerate(adj):
7 # check if row all nans
----> 8 if np.all(np.isnan(row)):
9 print("Removing nan row label in %s" % i)
10 # remove row index from labels
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>So I thought that SciPy NaNs were different from numpy NaNs. After that I tried to convert the sparse matrix into a numpy array (taking the risk of flooding my RAM, because the matrix has about 40k rows and columns). When running it, the error stays the same however. It seems that the <code>np.array()</code> call just wrapped the sparse matrix and didn't convert it, as <code>type(row)</code> inside the for loop still outputs <code><class 'scipy.sparse.lil.lil_matrix'></code></p>
<p>So my question is how to resolve this issue and whether there is a better approach that gets the job done. I am fairly new to numpy and scipy (as used in networkx), so I'd appreciate an explanation. Thank you!</p>
<p>EDIT: After changing the conversion to what <a href="http://stackoverflow.com/users/901925/hpaulj">hpaulj</a> proposed, I'm getting a MemoryError:</p>
<pre><code>---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-519-8a404b58eaa9> in <module>()
----> 1 adj, labels = sanitize_nans(adj, labels)
<ipython-input-518-44201f4ff35c> in sanitize_nans(adj, labels)
1 def sanitize_nans(adj, labels):
----> 2 adj = adj.toarray()
3
4 for i, row in enumerate(adj):
5 # check if row all nans
/usr/lib/python3/dist-packages/scipy/sparse/lil.py in toarray(self, order, out)
348 def toarray(self, order=None, out=None):
349 """See the docstring for `spmatrix.toarray`."""
--> 350 d = self._process_toarray_args(order, out)
351 for i, row in enumerate(self.rows):
352 for pos, j in enumerate(row):
/usr/lib/python3/dist-packages/scipy/sparse/base.py in_process_toarray_args(self, order, out)
697 return out
698 else:
--> 699 return np.zeros(self.shape, dtype=self.dtype, order=order)
700
701
MemoryError:
</code></pre>
<p>So apparently I'll have to stick with the sparse matrix to save RAM.</p>
| 1
|
2016-09-07T20:20:04Z
| 39,379,774
|
<p>If I make a sample array:</p>
<pre><code>In [328]: A=np.array([[1,0,0,np.nan],[0,np.nan,np.nan,0],[1,0,1,0]])
In [329]: A
Out[329]:
array([[ 1., 0., 0., nan],
[ 0., nan, nan, 0.],
[ 1., 0., 1., 0.]])
In [331]: M=sparse.lil_matrix(A)
</code></pre>
<p>This lil sparse matrix is stored in 2 arrays:</p>
<pre><code>In [332]: M.data
Out[332]: array([[1.0, nan], [nan, nan], [1.0, 1.0]], dtype=object)
In [333]: M.rows
Out[333]: array([[0, 3], [1, 2], [0, 2]], dtype=object)
</code></pre>
<p>With your function, no rows will be removed, even though the middle row of the sparse matrix only contains <code>nan</code>.</p>
<pre><code>In [334]: A[~np.all(np.isnan(A), axis=1)]
Out[334]:
array([[ 1., 0., 0., nan],
[ 0., nan, nan, 0.],
[ 1., 0., 1., 0.]])
</code></pre>
<p>I could test the rows of <code>M</code> for <code>nan</code>, and identify the ones that only contain <code>nan</code> (besides 0s). But it's probably easier to collect the ones that we want to keep. </p>
<pre><code>In [346]: ll = [i for i,row in enumerate(M.data) if not np.all(np.isnan(row))]
In [347]: ll
Out[347]: [0, 2]
In [348]: M[ll,:]
Out[348]:
<2x4 sparse matrix of type '<class 'numpy.float64'>'
with 4 stored elements in LInked List format>
In [349]: _.A
Out[349]:
array([[ 1., 0., 0., nan],
[ 1., 0., 1., 0.]])
</code></pre>
<p>A row of <code>M</code> is a list, but <code>np.isnan(row)</code> will convert it to an array and do its array test.</p>
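<p>Since the question also asks to keep the <code>labels</code> list in sync, here is a minimal sketch (hypothetical labels <code>'a'</code>, <code>'b'</code>, <code>'c'</code>) that reuses the same keep-list to filter both the matrix and the labels:</p>

```python
import numpy as np
from scipy import sparse

# Same example matrix as above, plus a parallel list of row labels.
A = np.array([[1, 0, 0, np.nan],
              [0, np.nan, np.nan, 0],
              [1, 0, 1, 0]])
M = sparse.lil_matrix(A)
labels = ['a', 'b', 'c']

# Indices of rows whose stored entries are not all nan.
keep = [i for i, row in enumerate(M.data) if not np.all(np.isnan(row))]

# Filter matrix and labels with the same index list so they stay aligned.
M_clean = M[keep, :]
labels_clean = [labels[i] for i in keep]

print(labels_clean)
print(M_clean.shape)
```

<p>Filtering both through one index list avoids the index-shifting bug of deleting labels while iterating.</p>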
| 0
|
2016-09-07T22:11:18Z
|
[
"python",
"numpy",
"scipy",
"sparse-matrix",
"networkx"
] |
Maintaining Dictionary Integrity While Running it Through Multithread Process
| 39,378,382
|
<p>I sped up a process by using a multithread function, however I need to maintain a relationship between the output and input. </p>
<pre><code>import requests
import pprint
import threading
ticker = ['aapl', 'googl', 'nvda']
url_array = []
for i in ticker:
url = 'https://query2.finance.yahoo.com/v10/finance/quoteSummary/' + i + '?formatted=true&crumb=8ldhetOu7RJ&lang=en-US&region=US&modules=defaultKeyStatistics%2CfinancialData%2CcalendarEvents&corsDomain=finance.yahoo.com'
url_array.append(url)
def fetch_ev(url):
urlHandler = requests.get(url)
data = urlHandler.json()
ev_single = data['quoteSummary']['result'][0]['defaultKeyStatistics']['enterpriseValue']['raw']
ev_array.append(ev_single) # makes array of enterprise values
threads = [threading.Thread(target=fetch_ev, args=(url,)) for url in
url_array] # calls multi thread that pulls enterprise value
for thread in threads:
thread.start()
for thread in threads:
thread.join()
pprint.pprint(dict(zip(ticker, ev_array)))
</code></pre>
<p>Sample output of the code: </p>
<p>1) <code>{'aapl': '30.34B', 'googl': '484.66B', 'nvda': '602.66B'}</code></p>
<p>2) <code>{'aapl': '484.66B', 'googl': '30.34B', 'nvda': '602.66B'}</code></p>
<p>I need the value to be matched up with the correct ticker. </p>
<p>Edit: I know dictionaries do not preserve order. Sorry, perhaps I was a little (very) unclear in my question. I have an array of ticker symbols, that matches the order of my url inputs. After running <code>fetch_ev</code>, I want to combine these ticker symbols with the matching enterprise value or <code>ev_single</code>. The order that they are stored in does not matter, however the pairings (<em>k</em> <em>v</em> pairs) or which values are stored with which ticker is <em>very</em> important. </p>
<p>Edit2 (MCVE) I changed the code to a simpler version of what I had- that shows the problem better. Sorry it's a little more complicated than I would have wanted complicated.</p>
| 2
|
2016-09-07T20:21:21Z
| 39,380,627
|
<p>To make it easy to maintain the correspondence between input and output, the <code>ev_array</code> can be preallocated so it's the same size as the <code>ticker</code> array, and the <code>fetch_ev()</code> thread function can be given an extra argument specifying the index of the location in that array to store the value fetched.</p>
<p>To maintain the integrity of the <code>ev_array</code>, a <code>threading.RLock</code> was added to prevent concurrent access to the shared resource which might otherwise be written to simultaneously by more than one thread. (Since its contents are now referenced directly through the index passed to <code>fetch_ev()</code>, this may not be strictly necessary.)</p>
<p>I don't know the proper ticker-to-enterprise-value correspondence to be able to verify the results that doing this produces:</p>
<pre><code>{'aapl': 602658308096L, 'googl': 484659986432L, 'nvda': 30338199552L}
</code></pre>
<p>but at least they're now the same each time it's run.</p>
<pre><code>import requests
import pprint
import threading
def fetch_ev(index, url): # index parameter added
response = requests.get(url)
response.raise_for_status()
data = response.json()
ev_single = data['quoteSummary']['result'][0][
'defaultKeyStatistics']['enterpriseValue']['raw']
with ev_array_lock:
ev_array[index] = ev_single # store enterprise value obtained
tickers = ['aapl', 'googl', 'nvda']
ev_array = [None] * len(tickers) # preallocate to hold results
ev_array_lock = threading.RLock() # to synchronize concurrent array access
urls = ['https://query2.finance.yahoo.com/v10/finance/quoteSummary/{}'
'?formatted=true&crumb=8ldhetOu7RJ&lang=en-US&region=US'
'&modules=defaultKeyStatistics%2CfinancialData%2CcalendarEvents'
'&corsDomain=finance.yahoo.com'.format(symbol)
for symbol in tickers]
threads = [threading.Thread(target=fetch_ev, args=(i, url))
for i, url in enumerate(urls)] # activities to obtain ev's
for thread in threads:
thread.start()
for thread in threads:
thread.join()
pprint.pprint(dict(zip(tickers, ev_array)))
</code></pre>
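<p>As an alternative to managing threads and an index by hand, <code>concurrent.futures.ThreadPoolExecutor.map</code> returns results in the order of its input iterable, so the ticker/value pairings line up automatically. A runnable sketch (using a hypothetical stub in place of the real HTTP call so it works offline):</p>

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the real network request.
def fetch_ev(url):
    fake = {'url-aapl': 1, 'url-googl': 2, 'url-nvda': 3}
    return fake[url]

tickers = ['aapl', 'googl', 'nvda']
urls = ['url-' + t for t in tickers]

# executor.map yields results in input order, regardless of
# which worker thread finishes first.
with ThreadPoolExecutor(max_workers=3) as executor:
    ev_values = list(executor.map(fetch_ev, urls))

result = dict(zip(tickers, ev_values))
print(result)
```

<p>No lock or preallocated array is needed because each result is returned to the caller rather than written into shared state.</p>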
| 1
|
2016-09-07T23:55:42Z
|
[
"python",
"multithreading",
"dictionary"
] |
Iterating across multiple columns in Pandas DF and slicing dynamically
| 39,378,510
|
<p><strong>TLDR:</strong> How to iterate across all options of multiple columns in a pandas dataframe without specifying the columns or their values explicitly?</p>
<p><strong>Long Version:</strong> I have a pandas dataframe that looks like this, only it has a lot more features or drug dose combinations than are listed here. Instead of just 3 types of features, it could have something like 70...:</p>
<pre><code>> dosage_df
First Score Last Score A_dose B_dose C_dose
22 28 1 40 130
55 11 2 40 130
15 72 3 40 130
42 67 1 90 130
90 74 2 90 130
87 89 3 90 130
14 43 1 40 700
12 61 2 40 700
41 5 3 40 700
</code></pre>
<p>Along with my data frame, I also have a python dictionary with the relevant ranges for each feature. The keys are the feature names, and the different values each feature can take are the values:</p>
<pre><code>> dict_of_dose_ranges = {'A_dose': [1, 2, 3], 'B_dose': [40, 90], 'C_dose': [130,700]}
</code></pre>
<p>For my purposes, I need to generate a particular combination (say A_dose = 1, B_dose = 90, and C_dose = 700), and based on those settings take the relevant slice out of my dataframe, and do relevant calculations from that smaller subset, and save the results somewhere.</p>
<p>I need to do this for ALL possible combinations of ALL of my features (far more than the 3 which are here, and which will be variable in the future). </p>
<p>In this case, I could easily pop this into SkLearn's Parameter grid, generate the options:</p>
<pre><code>> from sklearn.grid_search import ParameterGrid
> all_options = list(ParameterGrid(dict_of_dose_ranges))
> all_options
</code></pre>
<p>and get:</p>
<pre><code>[{'A_dose': 1, 'B_dose': 40, 'C_dose': 130},
{'A_dose': 1, 'B_dose': 40, 'C_dose': 700},
{'A_dose': 1, 'B_dose': 90, 'C_dose': 130},
{'A_dose': 1, 'B_dose': 90, 'C_dose': 700},
{'A_dose': 2, 'B_dose': 40, 'C_dose': 130},
{'A_dose': 2, 'B_dose': 40, 'C_dose': 700},
{'A_dose': 2, 'B_dose': 90, 'C_dose': 130},
{'A_dose': 2, 'B_dose': 90, 'C_dose': 700},
{'A_dose': 3, 'B_dose': 40, 'C_dose': 130},
{'A_dose': 3, 'B_dose': 40, 'C_dose': 700},
{'A_dose': 3, 'B_dose': 90, 'C_dose': 130},
{'A_dose': 3, 'B_dose': 90, 'C_dose': 700}]
</code></pre>
<p><strong>This is where I run into problems:</strong></p>
<p><strong>Problem #1)</strong> I can now iterate across <code>all_options</code>, but I'm not sure how to now SELECT out of my <code>dosage_df</code> from each of the dictionary options (i.e. {'A_dose': 1, 'B_dose': 40, 'C_dose': 130}) WITHOUT doing it explicitly. </p>
<p>In the past, I could do something like:</p>
<pre><code>dosage_df[(dosage_df.A_dose == 1) & (dosage_df.B_dose == 40) & (dosage_df.C_dose == 130)]
First Score Last Score A_dose B_dose C_dose
0 22 28 140 130
</code></pre>
<p>But now I'm not sure what to put inside the brackets to slice it dynamically...</p>
<pre><code>dosage_df[?????]
</code></pre>
<p><strong>Problem #2)</strong> When I actually enter in my full dictionary of features with their respective ranges, I get an error because it deems it as having too many options... </p>
<pre><code>from sklearn.grid_search import ParameterGrid
all_options = list(ParameterGrid(dictionary_of_features_and_ranges))
all_options
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-138-7b73d5e248f5> in <module>()
1 from sklearn.grid_search import ParameterGrid
----> 2 all_options = list(ParameterGrid(dictionary_of_features_and_ranges))
3 all_options
OverflowError: long int too large to convert to int
</code></pre>
<p>I tried a number of alternate approaches including using double while loops, a <a href="http://stackoverflow.com/questions/23986892/python-recursive-iteration-exceeding-limit-for-tree-implementation">tree / recursion method from here</a>, another <a href="http://stackoverflow.com/questions/13109274/python-recursion-permutations">recursion method from here</a>, but it wasn't coming together.... Any help is much appreciated. </p>
| 1
|
2016-09-07T20:31:18Z
| 39,379,186
|
<p>What about using the underlying numpy array and some boolean logic to build an array containing only the lines you want?</p>
<pre><code>dosage_df = pd.DataFrame((np.random.rand(40000,10)*100).astype(np.int))
dict_of_dose_ranges={3:[10,11,12,13,15,20],4:[20,22,23,24]}
#combined_doses will be bool array that will select all the lines that match the wanted combinations of doses
combined_doses=np.ones(dosage_df.shape[0]).astype(np.bool)
for item in dict_of_dose_ranges.items():
#item[0] is the kind of dose
#item[1] are the values of that kind of dose
next_dose=np.zeros(dosage_df.shape[0]).astype(np.bool)
#we then iterate over the wanted values
for value in item[1]:
# we select and "logical or" all lines matching the values
next_dose|=(dosage_df[item[0]] == value)
# we "logical and" all the kinds of dose
combined_doses&=next_dose
print(dosage_df[combined_doses])
</code></pre>
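<p>The inner loop over values can be condensed with <code>Series.isin</code>, which does the "logical or" over a value list in one call; <code>np.logical_and.reduce</code> then combines the per-column masks. A small self-contained sketch (column names and data are illustrative):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A_dose': [1, 2, 3, 1],
                   'B_dose': [40, 40, 90, 90],
                   'score':  [10, 20, 30, 40]})
wanted = {'A_dose': [1, 2], 'B_dose': [40]}

# isin handles the per-column membership test; reduce ANDs the masks.
mask = np.logical_and.reduce([df[col].isin(vals) for col, vals in wanted.items()])
sub = df[mask]
print(sub)
```
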
| 0
|
2016-09-07T21:21:26Z
|
[
"python",
"pandas",
"machine-learning",
"scikit-learn",
"grid-search"
] |
Iterating across multiple columns in Pandas DF and slicing dynamically
| 39,378,510
|
<p><strong>TLDR:</strong> How to iterate across all options of multiple columns in a pandas dataframe without specifying the columns or their values explicitly?</p>
<p><strong>Long Version:</strong> I have a pandas dataframe that looks like this, only it has a lot more features or drug dose combinations than are listed here. Instead of just 3 types of features, it could have something like 70...:</p>
<pre><code>> dosage_df
First Score Last Score A_dose B_dose C_dose
22 28 1 40 130
55 11 2 40 130
15 72 3 40 130
42 67 1 90 130
90 74 2 90 130
87 89 3 90 130
14 43 1 40 700
12 61 2 40 700
41 5 3 40 700
</code></pre>
<p>Along with my data frame, I also have a python dictionary with the relevant ranges for each feature. The keys are the feature names, and the different values each feature can take are the values:</p>
<pre><code>> dict_of_dose_ranges = {'A_dose': [1, 2, 3], 'B_dose': [40, 90], 'C_dose': [130,700]}
</code></pre>
<p>For my purposes, I need to generate a particular combination (say A_dose = 1, B_dose = 90, and C_dose = 700), and based on those settings take the relevant slice out of my dataframe, and do relevant calculations from that smaller subset, and save the results somewhere.</p>
<p>I need to do this for ALL possible combinations of ALL of my features (far more than the 3 which are here, and which will be variable in the future). </p>
<p>In this case, I could easily pop this into SkLearn's Parameter grid, generate the options:</p>
<pre><code>> from sklearn.grid_search import ParameterGrid
> all_options = list(ParameterGrid(dict_of_dose_ranges))
> all_options
</code></pre>
<p>and get:</p>
<pre><code>[{'A_dose': 1, 'B_dose': 40, 'C_dose': 130},
{'A_dose': 1, 'B_dose': 40, 'C_dose': 700},
{'A_dose': 1, 'B_dose': 90, 'C_dose': 130},
{'A_dose': 1, 'B_dose': 90, 'C_dose': 700},
{'A_dose': 2, 'B_dose': 40, 'C_dose': 130},
{'A_dose': 2, 'B_dose': 40, 'C_dose': 700},
{'A_dose': 2, 'B_dose': 90, 'C_dose': 130},
{'A_dose': 2, 'B_dose': 90, 'C_dose': 700},
{'A_dose': 3, 'B_dose': 40, 'C_dose': 130},
{'A_dose': 3, 'B_dose': 40, 'C_dose': 700},
{'A_dose': 3, 'B_dose': 90, 'C_dose': 130},
{'A_dose': 3, 'B_dose': 90, 'C_dose': 700}]
</code></pre>
<p><strong>This is where I run into problems:</strong></p>
<p><strong>Problem #1)</strong> I can now iterate across <code>all_options</code>, but I'm not sure how to now SELECT out of my <code>dosage_df</code> from each of the dictionary options (i.e. {'A_dose': 1, 'B_dose': 40, 'C_dose': 130}) WITHOUT doing it explicitly. </p>
<p>In the past, I could do something like:</p>
<pre><code>dosage_df[(dosage_df.A_dose == 1) & (dosage_df.B_dose == 40) & (dosage_df.C_dose == 130)]
First Score Last Score A_dose B_dose C_dose
0 22 28 140 130
</code></pre>
<p>But now I'm not sure what to put inside the brackets to slice it dynamically...</p>
<pre><code>dosage_df[?????]
</code></pre>
<p><strong>Problem #2)</strong> When I actually enter in my full dictionary of features with their respective ranges, I get an error because it deems it as having too many options... </p>
<pre><code>from sklearn.grid_search import ParameterGrid
all_options = list(ParameterGrid(dictionary_of_features_and_ranges))
all_options
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-138-7b73d5e248f5> in <module>()
1 from sklearn.grid_search import ParameterGrid
----> 2 all_options = list(ParameterGrid(dictionary_of_features_and_ranges))
3 all_options
OverflowError: long int too large to convert to int
</code></pre>
<p>I tried a number of alternate approaches including using double while loops, a <a href="http://stackoverflow.com/questions/23986892/python-recursive-iteration-exceeding-limit-for-tree-implementation">tree / recursion method from here</a>, another <a href="http://stackoverflow.com/questions/13109274/python-recursion-permutations">recursion method from here</a>, but it wasn't coming together.... Any help is much appreciated. </p>
| 1
|
2016-09-07T20:31:18Z
| 39,380,574
|
<p>You can use <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow"><code>itertools.product</code></a> to generate all possible dosage combinations, and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow"><code>DataFrame.query</code></a> to do the selection:</p>
<pre><code>from itertools import product
for dosage_comb in product(*dict_of_dose_ranges.values()):
dosage_items = zip(dict_of_dose_ranges.keys(), dosage_comb)
query_str = ' & '.join('{} == {}'.format(*x) for x in dosage_items)
sub_df = dosage_df.query(query_str)
# Do Stuff...
</code></pre>
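<p>If some dose columns ever hold non-numeric values, building the query string by formatting can run into quoting issues; boolean masks sidestep that. A runnable sketch of the same <code>product</code> loop using masks (data is illustrative):</p>

```python
from itertools import product

import pandas as pd

dosage_df = pd.DataFrame({'A_dose': [1, 1, 2],
                          'B_dose': [40, 90, 40],
                          'score':  [5, 6, 7]})
ranges = {'A_dose': [1, 2], 'B_dose': [40, 90]}

keys = list(ranges)
seen = []
for combo in product(*(ranges[k] for k in keys)):
    # Start from an all-True mask and AND in one equality test per column.
    mask = pd.Series(True, index=dosage_df.index)
    for key, value in zip(keys, combo):
        mask &= dosage_df[key] == value
    sub = dosage_df[mask]
    if not sub.empty:
        seen.append((combo, list(sub['score'])))
print(seen)
```
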
| 2
|
2016-09-07T23:47:49Z
|
[
"python",
"pandas",
"machine-learning",
"scikit-learn",
"grid-search"
] |
Sudo pip install upgrade operation not permitted
| 39,378,523
|
<p>I am trying to upgrade Numpy on Python 2.7 on Mac El Capitan, but I keep getting errors. I have Numpy v1.8.0rc1 and I need the latest one.</p>
<p><code>sudo pip2 install --upgrade numpy</code></p>
<p><code>...</code></p>
<p><code>OSError: [Errno 1] Operation not permitted: '/tmp/pip-HUSiK5-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy-1.8.0rc1-py2.7.egg-info'</code></p>
<p>If I do
<code>which pip2</code>
I get
<code>/usr/local/bin/pip2</code></p>
<p>And <code>which python</code> gives <code>/usr/bin/python</code></p>
<p>Also, I installed Python 3.5, if that matters.</p>
| 2
|
2016-09-07T20:31:50Z
| 39,378,584
|
<p>You're likely running into System Integrity Protection, the system introduced by Apple to prevent modification of system files (see <a href="https://apple.stackexchange.com/questions/209572/how-to-use-pip-after-the-os-x-el-capitan-upgrade">this answer on Ask Different</a>). Your options are approximately:</p>
<ul>
<li>install your own version of Python (using Homebrew or another system, e.g. <code>brew install python</code>)</li>
<li><a href="https://apple.stackexchange.com/questions/208478/how-do-i-disable-system-integrity-protection-sip-aka-rootless-on-os-x-10-11">Disable System Integrity Protection</a> (not recommended unless you know what you're doing)</li>
</ul>
| 0
|
2016-09-07T20:36:24Z
|
[
"python",
"numpy",
"pip",
"upgrade"
] |
Changing data in a DataFrame column (Pandas) with a For loop
| 39,378,535
|
<p>I'm trying to take the data from "Mathscore" and convert the values into numerical values, all under "Mathscore."</p>
<p>Strong = 1
Weak = 0</p>
<p>I tried doing this via the function below using For loop but I can't get the code to run. Is the way I'm trying to assign data incorrect?</p>
<p>Thanks! </p>
<pre><code>import pandas as pd
data = {'Id_Student' : [1,2,3,4,5,6,7,8,9,10],'Mathscore' :['Strong','Weak','Weak','Strong','Strong','Weak','Strong','Strong','Weak','Strong']}
df = pd.DataFrame(data)
df
# # Strong = 1 and Weak =0
##def tran_mathscore(x): if x == 'Strong': return 1 if x == 'Weak': return 0
##
##df['Trans_MathScore'] = df['Mathscore'].apply(tran_mathscore)
##df
##df.Mathscore[0]=["Weak"]
##print(df.columns)
##
##
##print(df.Mathscore)
def tran_mathscore():
for i in df.Mathscore:
if i == "Strong":
df.Mathscore[i]= ['1']
elif i == "Weak":
df.Mathscore[i]= ['0']
tran_mathscore()
</code></pre>
| 4
|
2016-09-07T20:32:22Z
| 39,378,624
|
<p>you can <a href="http://pandas.pydata.org/pandas-docs/stable/categorical.html" rel="nofollow">categorize</a> your data:</p>
<pre><code>In [23]: df['Mathscore'] = df.Mathscore.astype('category').cat.rename_categories(['1','0'])
In [24]: df
Out[24]:
Id_Student Mathscore
0 1 1
1 2 0
2 3 0
3 4 1
4 5 1
5 6 0
6 7 1
7 8 1
8 9 0
9 10 1
In [25]: df.dtypes
Out[25]:
Id_Student int64
Mathscore category
dtype: object
</code></pre>
<p>or map it:</p>
<pre><code>In [27]: df
Out[27]:
Id_Student Mathscore
0 1 Strong
1 2 Weak
2 3 Weak
3 4 Strong
4 5 Strong
5 6 Weak
6 7 Strong
7 8 Strong
8 9 Weak
9 10 Strong
In [28]: df.Mathscore.map(d)
Out[28]:
0 1
1 0
2 0
3 1
4 1
5 0
6 1
7 1
8 0
9 1
Name: Mathscore, dtype: int64
In [29]: d
Out[29]: {'Strong': 1, 'Weak': 0}
In [30]: df['Mathscore'] = df.Mathscore.map(d)
In [31]: df
Out[31]:
Id_Student Mathscore
0 1 1
1 2 0
2 3 0
3 4 1
4 5 1
5 6 0
6 7 1
7 8 1
8 9 0
9 10 1
In [32]: df.dtypes
Out[32]:
Id_Student int64
Mathscore int64
dtype: object
</code></pre>
<p>PS i prefer the first option as <code>categorical</code> dtype uses much less memory </p>
| 3
|
2016-09-07T20:38:52Z
|
[
"python",
"pandas",
"dataframe"
] |
Changing data in a DataFrame column (Pandas) with a For loop
| 39,378,535
|
<p>I'm trying to take the data from "Mathscore" and convert the values into numerical values, all under "Mathscore."</p>
<p>Strong = 1
Weak = 0</p>
<p>I tried doing this via the function below using For loop but I can't get the code to run. Is the way I'm trying to assign data incorrect?</p>
<p>Thanks! </p>
<pre><code>import pandas as pd
data = {'Id_Student' : [1,2,3,4,5,6,7,8,9,10],'Mathscore' :['Strong','Weak','Weak','Strong','Strong','Weak','Strong','Strong','Weak','Strong']}
df = pd.DataFrame(data)
df
# # Strong = 1 and Weak =0
##def tran_mathscore(x): if x == 'Strong': return 1 if x == 'Weak': return 0
##
##df['Trans_MathScore'] = df['Mathscore'].apply(tran_mathscore)
##df
##df.Mathscore[0]=["Weak"]
##print(df.columns)
##
##
##print(df.Mathscore)
def tran_mathscore():
for i in df.Mathscore:
if i == "Strong":
df.Mathscore[i]= ['1']
elif i == "Weak":
df.Mathscore[i]= ['0']
tran_mathscore()
</code></pre>
| 4
|
2016-09-07T20:32:22Z
| 39,379,032
|
<p>You could use:</p>
<pre><code>df['Mathscore'] = df['Mathscore'].str.replace('Strong','1')
df['Mathscore'] = df['Mathscore'].str.replace('Weak','0')
</code></pre>
<p>Returns:</p>
<pre><code>In [1]: df
Out[1]:
Id_Student Mathscore
0 1 1
1 2 0
2 3 0
3 4 1
4 5 1
5 6 0
6 7 1
7 8 1
8 9 0
9 10 1
</code></pre>
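<p>Note that <code>str.replace</code> leaves the column as strings (<code>'1'</code>/<code>'0'</code>). If integer values are wanted, <code>Series.replace</code> with a dict does both substitutions in one call and maps straight to ints:</p>

```python
import pandas as pd

df = pd.DataFrame({'Mathscore': ['Strong', 'Weak', 'Strong']})

# A dict replace maps each string to an integer in a single pass.
df['Mathscore'] = df['Mathscore'].replace({'Strong': 1, 'Weak': 0})
print(df['Mathscore'].tolist())
```
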
| 1
|
2016-09-07T21:07:57Z
|
[
"python",
"pandas",
"dataframe"
] |
Python - expanding the range between dot dot (id1..id2)
| 39,378,551
|
<p>Suppose i have a list of ids like: </p>
<pre><code>ids= 1/1/n1..1/1/n5 , 1/1/x1 , 1/1/g1
</code></pre>
<p>Expected output: </p>
<pre><code>1/1/n1 , 1/1/n2 , 1/1/n3 , 1/1/n4 , 1/1/n5 ,1/1/x1, 1/1/g1
</code></pre>
<p>Meaning wherever I find 'id1..id2', I will fill the gap.</p>
<p>I have written a very basic code, but i am looking for more pythonic solution</p>
<pre><code>import re
ports_list=['1/1/n1..1/1/n8']
n = 2
port_range=[]
for port in ports_list:
if '..' in port:
groups = port.split('..') #gives ['1/1/n1', '1/1/n8']
for item in groups:
port_split = item.split('/')
port_join='/'.join(port_split[:n]), '/'.join(port_split[n:])
port_join=port_join[0]+"/"
port_split=port_split[2] # n1 n8
get_str=port_split[0][0]
num=re.findall(r'\d+', port_split) # 1 8
port_range.append(num[0])
#remove port with '..
ports_list.remove(port)
n1=port_range[0]
n2=port_range[1]
final_number_list=list(range(int(n1),int(n2)+1))
my_new_list = [ get_str + str(n) for n in final_number_list]
final_list=[ port_join + str(n) for n in my_new_list]
ports_list=ports_list+final_list
print ports_list
</code></pre>
<p>Gives Expected Output: </p>
<pre><code>['1/1/n1', '1/1/n2', '1/1/n3', '1/1/n4', '1/1/n5', '1/1/n6', '1/1/n7', '1/1/n8']
</code></pre>
<p>But how it can be solved easily , without complex logic ? </p>
| 0
|
2016-09-07T20:33:43Z
| 39,378,682
|
<p>Here's the base code; I'll let you recombine it as desired, including list comprehensions that you already have. Just split the range as you're already doing. Instead of de-constructing the entire string, just take that last digit and run the range:</p>
<pre><code>ports_list=['1/1/n1..1/1/n8']
for port in ports_list:
if ".." in port:
left, right = port.split("..")
for i in range(int(left[-1]), int(right[-1])+1):
new_port = left[:-1] + str(i)
print new_port
</code></pre>
<p>The last line is there merely to demonstrate the proper response.</p>
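<p>One caveat: taking only the last character (<code>left[-1]</code>) breaks once the suffix has more than one digit (e.g. <code>n9..n12</code>). A sketch of the same loop made multi-digit-safe with a small regex:</p>

```python
import re

ports_list = ['1/1/n9..1/1/n12']
expanded = []
for port in ports_list:
    if '..' in port:
        left, right = port.split('..')
        # Split each endpoint into a common prefix and its trailing number.
        prefix, start = re.match(r'^(.*?)(\d+)$', left).groups()
        _, end = re.match(r'^(.*?)(\d+)$', right).groups()
        expanded.extend(prefix + str(i) for i in range(int(start), int(end) + 1))
    else:
        expanded.append(port)
print(expanded)
```
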
| 0
|
2016-09-07T20:43:02Z
|
[
"python",
"list",
"python-2.7"
] |
Python - expanding the range between dot dot (id1..id2)
| 39,378,551
|
<p>Suppose i have a list of ids like: </p>
<pre><code>ids= 1/1/n1..1/1/n5 , 1/1/x1 , 1/1/g1
</code></pre>
<p>Expected output: </p>
<pre><code>1/1/n1 , 1/1/n2 , 1/1/n3 , 1/1/n4 , 1/1/n5 ,1/1/x1, 1/1/g1
</code></pre>
<p>Meaning wherever I find 'id1..id2', I will fill the gap.</p>
<p>I have written a very basic code, but i am looking for more pythonic solution</p>
<pre><code>import re
ports_list=['1/1/n1..1/1/n8']
n = 2
port_range=[]
for port in ports_list:
if '..' in port:
groups = port.split('..') #gives ['1/1/n1', '1/1/n8']
for item in groups:
port_split = item.split('/')
port_join='/'.join(port_split[:n]), '/'.join(port_split[n:])
port_join=port_join[0]+"/"
port_split=port_split[2] # n1 n8
get_str=port_split[0][0]
num=re.findall(r'\d+', port_split) # 1 8
port_range.append(num[0])
#remove port with '..
ports_list.remove(port)
n1=port_range[0]
n2=port_range[1]
final_number_list=list(range(int(n1),int(n2)+1))
my_new_list = [ get_str + str(n) for n in final_number_list]
final_list=[ port_join + str(n) for n in my_new_list]
ports_list=ports_list+final_list
print ports_list
</code></pre>
<p>Gives Expected Output: </p>
<pre><code>['1/1/n1', '1/1/n2', '1/1/n3', '1/1/n4', '1/1/n5', '1/1/n6', '1/1/n7', '1/1/n8']
</code></pre>
<p>But how it can be solved easily , without complex logic ? </p>
| 0
|
2016-09-07T20:33:43Z
| 39,378,792
|
<p>Not sure it is more readable or better than your current approach, but we can use regular expressions to extract the common part and the range borders from a string:</p>
<pre><code>import re
def expand(l):
result = []
pattern = re.compile(r"^(.*?)(\d+)$")
for item in l:
# determine if it is a "range" item or not
is_range = '..' in item
if not is_range:
result.append(item)
continue
# get the borders and the common reusable part
borders = [pattern.match(border).groups() for border in item.split('..')]
(common_part, start), (_, end) = borders
for x in range(int(start), int(end) + 1):
result.append("%s%d" % (common_part, x))
return result
print(expand(['1/1/n1..1/1/n8']))
print(expand(['1/1/n1..1/1/n5', '1/1/x1', '1/1/g1']))
</code></pre>
<p>Prints:</p>
<pre><code>['1/1/n1', '1/1/n2', '1/1/n3', '1/1/n4', '1/1/n5', '1/1/n6', '1/1/n7', '1/1/n8']
['1/1/n1', '1/1/n2', '1/1/n3', '1/1/n4', '1/1/n5', '1/1/x1', '1/1/g1']
</code></pre>
<p>where in <code>^(.*?)(\d+)$</code>:</p>
<ul>
<li><code>^</code> matches the beginning of a string (not actually needed in our case since we use <code>.match()</code> - it will start searching from the beginning of a string anyway - leaving it there just because "Explicit is better than implicit")</li>
<li><code>(.*?)</code> is a capturing group that saves any characters any number of times in a <a href="http://stackoverflow.com/questions/2301285/what-do-lazy-and-greedy-mean-in-the-context-of-regular-expressions">non-greedy fashion</a></li>
<li><code>(\d+)</code> is a capturing group that would save one or more consecutive digits</li>
<li><code>$</code> matches the end of a string</li>
</ul>
| 2
|
2016-09-07T20:50:51Z
|
[
"python",
"list",
"python-2.7"
] |
remove specific rows in dataframe with pandas
| 39,378,552
|
<p>I need some help from all of you. I'm working with data from an Excel form, so basically now I have something like this.</p>
<pre><code>csr id ac otc tm lease maint
1 456 b 0 0 0 0
1 543 a 0 1 1 0
1 435 e 0 0 0 0
2 123 w 1 1 1 1
2 123 g 0 0 0 0
3 987 j 0 0 0 0
4 258 k 1 1 1 1
4 258 m 0 0 0 0
</code></pre>
<p>So i need to delete the rows with zero in 'otc' 'tm' 'lease' 'maint' columns. i do something like this </p>
<pre><code>df = pd.read_excel(xlsx,'Sheet1')
df_zero = df[(df['OTC'] == 0) & (df['TM'] == 0) & (df['Lease'] == 0) & (df['Maint'] == 0) & (df['Support'] == 0) & (df['Other'] == 0)]
</code></pre>
<p>In this way i open the file and save in df_zero all the rows that contain zero in the specific column. Then </p>
<pre><code>df1 = df_zero.loc[:, 'CSR']
</code></pre>
<p>Basically this save in df1 the CSR number for the rows with zeros in the specific columns, like this </p>
<pre><code>csr
1
1
2
3
4
</code></pre>
<p>So i think ok i do this and problem resolved.</p>
<pre><code>for n1 in df1:
df = df[df.CSR != n1]
</code></pre>
<p>But the problem here is, as you can see in the CSR 1, we have 3 different rows, if i run that 'for', i will delete the 3 of them, i just need to remove the ones that have zeros in the specific columns ('otc' 'tm' 'lease' 'maint'). </p>
<p>I think in a 'for' for be moving in the CSR and another one to be moving in 'otc' if the value that i found is zero move to 'tm'(in the same row) check for zero, then to 'lease' and 'maint' in the same row, if any of this columns is not zero, jump to the next CSR. In this example. We will remove the CSR 1, because all of them ('otc' 'tm' 'lease' 'maint') are zero, then jump to the next CSR, again 1, but in this case we have zero in 'otc' but 1 in 'tm', so we have to jump to the next CSR is again 1 but all of the columns ('otc' 'tm' 'lease' 'maint') are in zero so we remove the row, and continue in this way until the last CSR...</p>
<p>I think that could work but i'm having some problems to implement that, or maybe any of you have a better idea. Thanks and sorry for bad english</p>
| 1
|
2016-09-07T20:33:45Z
| 39,378,702
|
<p>try this:</p>
<pre><code>In [35]: df.eval('otc == 0 and tm == 0 and lease == 0 and maint == 0')
Out[35]:
0 True
1 False
2 True
3 False
4 True
5 True
6 False
7 True
dtype: bool
In [36]: df[~df.eval('otc == 0 and tm == 0 and lease == 0 and maint == 0')]
Out[36]:
csr id ac otc tm lease maint
1 1 543 a 0 1 1 0
3 2 123 w 1 1 1 1
6 4 258 k 1 1 1 1
</code></pre>
| 1
|
2016-09-07T20:44:36Z
|
[
"python",
"excel",
"pandas",
"dataframe",
"filter"
] |
remove specific rows in dataframe with pandas
| 39,378,552
|
<p>I need some help from all of you. I'm working with data from an Excel form, so basically now I have something like this.</p>
<pre><code>csr id ac otc tm lease maint
1 456 b 0 0 0 0
1 543 a 0 1 1 0
1 435 e 0 0 0 0
2 123 w 1 1 1 1
2 123 g 0 0 0 0
3 987 j 0 0 0 0
4 258 k 1 1 1 1
4 258 m 0 0 0 0
</code></pre>
<p>So i need to delete the rows with zero in 'otc' 'tm' 'lease' 'maint' columns. i do something like this </p>
<pre><code>df = pd.read_excel(xlsx,'Sheet1')
df_zero = df[(df['OTC'] == 0) & (df['TM'] == 0) & (df['Lease'] == 0) & (df['Maint'] == 0) & (df['Support'] == 0) & (df['Other'] == 0)]
</code></pre>
<p>In this way i open the file and save in df_zero all the rows that contain zero in the specific column. Then </p>
<pre><code>df1 = df_zero.loc[:, 'CSR']
</code></pre>
<p>Basically this save in df1 the CSR number for the rows with zeros in the specific columns, like this </p>
<pre><code>csr
1
1
2
3
4
</code></pre>
<p>So i think ok i do this and problem resolved.</p>
<pre><code>for n1 in df1:
df = df[df.CSR != n1]
</code></pre>
<p>But the problem here is, as you can see in the CSR 1, we have 3 different rows, if i run that 'for', i will delete the 3 of them, i just need to remove the ones that have zeros in the specific columns ('otc' 'tm' 'lease' 'maint'). </p>
<p>I think in a 'for' for be moving in the CSR and another one to be moving in 'otc' if the value that i found is zero move to 'tm'(in the same row) check for zero, then to 'lease' and 'maint' in the same row, if any of this columns is not zero, jump to the next CSR. In this example. We will remove the CSR 1, because all of them ('otc' 'tm' 'lease' 'maint') are zero, then jump to the next CSR, again 1, but in this case we have zero in 'otc' but 1 in 'tm', so we have to jump to the next CSR is again 1 but all of the columns ('otc' 'tm' 'lease' 'maint') are in zero so we remove the row, and continue in this way until the last CSR...</p>
<p>I think that could work but i'm having some problems to implement that, or maybe any of you have a better idea. Thanks and sorry for bad english</p>
| 1
|
2016-09-07T20:33:45Z
| 39,378,716
|
<p>You can also extract the four columns that you are interested in, count how many zeros each row has, and create a logical vector for indexing:</p>
<pre><code>df[(df[['otc', 'tm', 'lease', 'maint']] == 0).sum(axis = 1) < 4]
# csr id ac otc tm lease maint
# 1 1 543 a 0 1 1 0
# 3 2 123 w 1 1 1 1
# 6 4 258 k 1 1 1 1
</code></pre>
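For intuition about what that vectorised expression computes, here is the same keep/drop rule spelled out in plain Python (the rows below are an illustrative subset of the question's data, not the full table):

```python
# Illustrative subset of the question's rows
rows = [
    {"csr": 1, "otc": 0, "tm": 0, "lease": 0, "maint": 0},
    {"csr": 1, "otc": 0, "tm": 1, "lease": 1, "maint": 0},
    {"csr": 2, "otc": 1, "tm": 1, "lease": 1, "maint": 1},
]
cols = ["otc", "tm", "lease", "maint"]

# Keep a row when fewer than all four tracked columns are zero --
# exactly the test the pandas one-liner applies to every row at once.
kept = [r for r in rows if sum(r[c] == 0 for c in cols) < 4]
```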
| 2
|
2016-09-07T20:45:31Z
|
[
"python",
"excel",
"pandas",
"dataframe",
"filter"
] |
Reference to an element in a list
| 39,378,598
|
<p>I am a little confused about how Python deals with references to an element in a list, considering these two examples:</p>
<p>First example:</p>
<pre><code>import random
a = [[1,2],[3,4],[5,6],[7,8]]
b = [0.1,0.2]
c = random.choice(a)
c[:] = b
print(a)
</code></pre>
<p>Second example:</p>
<pre><code>import random
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = 0.1
c = random.choice(a)
c = b
print(a)
</code></pre>
<p>In the first example, the content of list a is changed, while in the second example the content of list a is not changed. Why is that? </p>
| 4
|
2016-09-07T20:37:28Z
| 39,378,778
|
<p>Let's start with the second case. You write</p>
<pre><code>c = random.choice(a)
</code></pre>
<p>so the name <code>c</code> gets bound to some element of a, then</p>
<pre><code>c = b
</code></pre>
<p>so the name <code>c</code> gets bound to some other object (the one to which the name <code>b</code> is referring - the float 0.1).</p>
<hr>
<p>Now to the first case. You start with</p>
<pre><code>c = random.choice(a)
</code></pre>
<p>So the name <code>c</code> gets bound to an object in <code>a</code>, which is a list itself. Then you write</p>
<pre><code>c[:] = b
</code></pre>
<p>which means, replace all items in the list bound to by the name <code>c</code>, by some other list. In fact, this is called <a href="http://stackoverflow.com/questions/10623302/how-assignment-works-with-python-list-slice">slice assignment</a>, and is basically syntactic sugar for calling a method of the object to which <code>c</code> is bound.</p>
<hr>
<p>The difference, then, is that in the first case, it doesn't just bind a name first to one object, then to another. It binds a name to a list, then uses this name to indirectly call a method of the list.</p>
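The two behaviours can be checked directly; this is a minimal sketch mirroring the question's two examples:

```python
a = [[1, 2], [3, 4]]
b = [0.1, 0.2]

c = a[0]
c = b               # rebinds the NAME c; the list inside a is untouched
unchanged = a[0]    # still [1, 2]

d = a[1]
d[:] = b            # slice assignment mutates the list object in place
mutated = a[1]      # now [0.1, 0.2]
```

After the slice assignment, `a` itself shows the mutation, which is exactly what happened in the first example.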
| 3
|
2016-09-07T20:50:12Z
|
[
"python",
"random"
] |
Translation of a phrase both ways
| 39,378,636
|
<p>I am having trouble with Tuccin to English: I can get it to translate from English to Tuccin only. What I want is: if a word is English, translate it to Tuccin; if a word is Tuccin, translate it to English (full phrases). And finally, if any input word is not stored, I want to print that same word in its place to show there was nothing to translate it to.</p>
<pre><code>#Translator.py
Tuc={"i":["o"],"love":["wau"],"you":["uo"],"me":["ye"],"my":["yem"],
    "mine":["yeme"],"are":["sia"]}
phrase=True
reverseLookup = False
while True:
    reverseLookup = False
    translation = str(raw_input("Enter content for translation.\n").lower())
    input_list = translation.split()
    #English to Tuccin
    if phrase ==True:
        print "*English Detected!"
        for word in input_list:
            if word in Tuc:
                print ("".join(Tuc[word]))
            else:
                reverseLookup = True
    #Tuccin to english
    elif phrase == True and reverseLookup == True:
        print "*Tuccin Detected!"
        input_list = translation.split()
        for k, v in Tuc.iteritems():
            if translation in v:
                print k
            else:
                print "Word Not Stored!"
        reverseLookup = False
        print word
</code></pre>
| 0
|
2016-09-07T20:39:46Z
| 39,378,819
|
<p><a href="http://stackoverflow.com/questions/483666/python-reverse-invert-a-mapping">This</a> stack answer demonstrates how to quickly invert a simple dictionary. So, in your case, what you would do is:</p>
<pre><code>Eng = {v[0]: [k] for k, v in Tuc.items()}
</code></pre>
<p>and use this dictionary the same way you've used the Tuc dictionary.</p>
<p>It would be easier if your Tuc dictionary didn't have redundant lists as values:</p>
<pre><code>Tuc = {'me': 'ye', 'love': 'wau', 'i': 'o', 'mine': 'yeme', 'are': 'sia', 'you': 'uo', 'my': 'yem'}
Eng = {e: t for t, e in Tuc.items()}
</code></pre>
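A quick round trip with the inverted mapping; the <code>translate</code> helper below is illustrative, not from the question:

```python
Tuc = {"i": ["o"], "love": ["wau"], "you": ["uo"], "me": ["ye"],
       "my": ["yem"], "mine": ["yeme"], "are": ["sia"]}

# Invert: Tuccin word -> [English word]
Eng = {v[0]: [k] for k, v in Tuc.items()}

def translate(word):
    """Look the word up in both directions; echo it back if unknown."""
    if word in Tuc:
        return Tuc[word][0]
    if word in Eng:
        return Eng[word][0]
    return word

results = [translate(w) for w in ["i", "wau", "cat"]]
```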
| -1
|
2016-09-07T20:53:10Z
|
[
"python",
"translation"
] |
Sum of Pairs: Codewars
| 39,378,656
|
<p>I recently solved a problem, stated below, on Codewars (<strong>NOT</strong> asking for a solution, I already solved it), and though mine is not an optimized solution, I came across a very interesting problem which I could not figure out an answer to.</p>
<p><strong>I have NO intention of giving out the solution; I just want to understand what happened that resulted in this anomaly.</strong> </p>
<p>The problem ran as - Given a list of integers and a single sum value, return the first two values (parse from the left please) in order of appearance that add up to form the sum.</p>
<p>Pretty simple problem and a straightforward if not optimum solution that I came up with -</p>
<pre><code>def sum_pairs(ints, s):
    indices = []
    for i in ints:
        if (s-i) in ints[ints.index(i)+1:]:
            print(ints.index(i))
            print(ints[ints.index(i)+1:])
            print(i,s-i)
            indices.append([ints.index(i),ints[ints.index(i)+1:].index(s-i)+len(ints[:ints.index(i)])+1])
    if(len(indices) == 0):
        return None
    print(indices)
    indices.sort(key=lambda x: x[1])
    print(indices)
    return [ints[indices[0][0]], ints[indices[0][1]]]
</code></pre>
<p>I passed all the test-cases.</p>
<p>I made use of the print statements for debugging and figuring out what was going on. Sample test cases and their output which stumped me are given below.</p>
<pre><code>sum_pairs([1, 2, 3, 4, 1, 0], 2)
0
[2, 3, 4, 1, 0]
(1, 1)
1
[3, 4, 1, 0]
(2, 0)
0
[2, 3, 4, 1, 0]
(1, 1)
[[0, 4], [1, 5], [0, 4]]
[[0, 4], [0, 4], [1, 5]]
sum_pairs([10, 5, 2, 3, 7, 5], 10)
1
[2, 3, 7, 5]
(5, 5)
3
[7, 5]
(3, 7)
1
[2, 3, 7, 5]
(5, 5)
[[1, 5], [3, 4], [1, 5]]
[[3, 4], [1, 5], [1, 5]]
</code></pre>
<p>So now to the part that I could not figure out: observing the output, one can see that in test case 1 the index 0 is printed twice, ints[ints.index(i)+1:] is [2, 3, 4, 1, 0] for both 1's (at positions 0 and 4), and the entry [0,4] is appended twice.
A similar pattern is observed in test case 2.
The if condition, which says</p>
<pre><code>if (s-i) in ints[ints.index(i)+1:]
</code></pre>
<p>should not evaluate to true the second time, as ints[ints.index(i)+1:] is there to ensure that the new list starts from the item after the occurrence of i.</p>
<p>It may not seem that important, since I got the solution, but it would be great if someone could shed light on what actually happened.</p>
<p><strong>NOTE</strong>: I have had two bad experiences where people downvoted my question without giving a reason. It would be really useful if I could be given feedback for any such downvote; I would also appreciate it if anyone could check those questions and give feedback on any improvements that can be made.</p>
| 0
|
2016-09-07T20:41:08Z
| 39,378,966
|
<p><code>ints.index(i)</code> doesn't give you the index of the particular occurrence of <code>i</code> you're working with. If you want that, you want <code>enumerate</code>, not <code>index</code>.</p>
<p><code>ints.index(i)</code> means "go through <code>ints</code> and find the index of the <em>first</em> element of <code>ints</code> that equals <code>i</code>". <code>i</code> holds no information about where in <code>ints</code> it came from, so there's no way for <code>index</code> to tell that you were thinking of any particular occurrence of <code>i</code>. <code>enumerate</code> avoids this problem by keeping its own counter and incrementing it every time it produces an element, so it always knows what index it's at.</p>
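A quick demonstration with the question's own list, where 1 occurs at positions 0 and 4:

```python
ints = [1, 2, 3, 4, 1, 0]

# index() always reports the FIRST matching element, so both 1s
# appear to live at position 0
via_index = [ints.index(v) for v in ints if v == 1]

# enumerate() carries the true position of each occurrence
via_enumerate = [i for i, v in enumerate(ints) if v == 1]
```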
| 0
|
2016-09-07T21:03:27Z
|
[
"python"
] |
Searching file for the same string twice and printing both separately
| 39,378,711
|
<p>I am a beginner Python programmer with a search question. I need to locate a string of DNA in a DNA file. The issue is that I do not know where the string appears in the file, it appears twice, and I need to know both locations. My current program can only find the first string, and I am having difficulty getting it to continue the search and find the second. The two ideas I have tried both failed, but I think only because I do not understand how to use the functions. Here they are:</p>
<ol>
<li><p>use .seek() to find the first string of DNA I need and set that as the starting point for a second search. My problem is that I do not know exactly where the DNA strings are, so I tried to set my seek location to the DNA string. This won't work, however, as seek needs a byte offset, not a string. I tried to use .index() to get the location of the DNA string, setting that as a variable, but that also does not work.</p></li>
<li><p>Use .split() to split the DNA text file after the first DNA string was found then running a second search on the right half of the break. I thought this would work, but I only end up with an error reading:</p>
<p>IndexError: list index out of range</p></li>
</ol>
<p>specifically, I am writing .split('str')[1] to use the second half of the string.</p>
<p>Assuming my logic is correct in the program, would these approaches work? Does anyone have a different idea they think I should try? </p>
<p>Thank you for your help and if you need me to clarify anything I can do so.</p>
| 0
|
2016-09-07T20:45:06Z
| 39,379,049
|
<p>How large is your file? If it is not prohibitively long, you could use the naive approach:</p>
<pre><code>file = open("filename.text", "r")
genome = file.read()
file.close()
genome_length = len(genome)
pattern = "ATCT"  # or whatever your pattern is
pattern_length = len(pattern)
pattern_locations = []
# slide a window of len(pattern) over every possible start position
for i in range(genome_length - pattern_length + 1):
    if pattern == genome[i:i+pattern_length]:
        pattern_locations.append(i)
print pattern_locations
</code></pre>
<p>However, if the genome is long then it would require more sophisticated algorithms in order to search in a reasonable time (such as constructing a suffix tree) </p>
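Before reaching for a suffix tree, note that <code>str.find</code> with a moving start offset is usually fast enough for moderately long sequences, since in CPython the scan runs in C rather than slicing in a Python loop. A sketch (the sequences below are made up):

```python
def find_all(genome, pattern):
    """Return every start index of pattern in genome, overlaps included."""
    locations = []
    start = genome.find(pattern)
    while start != -1:
        locations.append(start)
        # resume the search just past the last hit, so overlaps are kept
        start = genome.find(pattern, start + 1)
    return locations

hits = find_all("ATCTGGATCT", "ATCT")
```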
| 0
|
2016-09-07T21:09:33Z
|
[
"python",
"python-2.7",
"bioinformatics"
] |
Searching file for the same string twice and printing both separately
| 39,378,711
|
<p>I am a beginner Python programmer with a search question. I need to locate a string of DNA in a DNA file. The issue is that I do not know where the string appears in the file, it appears twice, and I need to know both locations. My current program can only find the first string, and I am having difficulty getting it to continue the search and find the second. The two ideas I have tried both failed, but I think only because I do not understand how to use the functions. Here they are:</p>
<ol>
<li><p>use .seek() to find the first string of DNA I need and set that as the starting point for a second search. My problem is that I do not know exactly where the DNA strings are, so I tried to set my seek location to the DNA string. This won't work, however, as seek needs a byte offset, not a string. I tried to use .index() to get the location of the DNA string, setting that as a variable, but that also does not work.</p></li>
<li><p>Use .split() to split the DNA text file after the first DNA string was found then running a second search on the right half of the break. I thought this would work, but I only end up with an error reading:</p>
<p>IndexError: list index out of range</p></li>
</ol>
<p>specifically, I am writing .split('str')[1] to use the second half of the string.</p>
<p>Assuming my logic is correct in the program, would these approaches work? Does anyone have a different idea they think I should try? </p>
<p>Thank you for your help and if you need me to clarify anything I can do so.</p>
| 0
|
2016-09-07T20:45:06Z
| 39,396,046
|
<p>I read your problem as "I am trying to find the locations of a DNA subsequence." Does the following example represent what you are trying to achieve? Let me know if I am oversimplifying your question and I can revise.</p>
<pre><code>>>> import re
>>> dna = 'AGTCTCCCGGATTTGGATTTAA' #super short, but just for proof of concept
>>> subseq = 'ATTT' #sequence you want to find within dna
>>> for location in re.finditer(subseq, dna):
... print 'start: %d end: %d' % (location.start(), location.end())
start: 10 end: 14
start: 16 end: 20
</code></pre>
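One caveat: <code>re.finditer</code> resumes scanning after the end of each match, so overlapping occurrences are skipped. If overlaps can occur in your subsequence, a zero-width lookahead catches them all (the sequence below is illustrative, not from the question):

```python
import re

dna = 'ATATATA'

# finditer jumps past each match, so the hit at index 2 is missed
plain = [m.start() for m in re.finditer('ATA', dna)]

# a zero-width lookahead consumes nothing, so every start is reported
overlapping = [m.start() for m in re.finditer('(?=ATA)', dna)]
```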
| 1
|
2016-09-08T16:26:47Z
|
[
"python",
"python-2.7",
"bioinformatics"
] |
Sending SMS from django app
| 39,378,728
|
<p>I came to the requirement to send SMS from my Django app. It's a dashboard for multiple clients, and each client will have the ability to send programmable SMS. </p>
<p>Is this achievable with django-smsish? I have found some packages that aren't updated, and sending SMS via email is not possible.</p>
<p>All the answers I found are old, and I have tried all the suggested approaches.
Do I have to use a service like Twilio? Thanks </p>
| 1
|
2016-09-07T20:46:31Z
| 39,379,920
|
<p>Using Twilio is not mandatory, but I do recommend it. Twilio does the heavy lifting; your Django app just needs to make the proper API requests to Twilio, which has great documentation. </p>
<p>Twilio has webhooks as well, which you can 'hook' to specific Django views to process certain events. As for the 'programmable' aspect of your app, you can use django-celery, django-cron, RabbitMQ or other task-queueing software.</p>
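As a sketch of what "the proper API request" amounts to, here is how the form data for Twilio's Messages REST endpoint could be assembled with only the standard library. The SID, token and phone numbers are placeholders, and in practice you would use the official <code>twilio</code> helper library rather than sending this by hand:

```python
import base64
import urllib.parse

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder
AUTH_TOKEN = "your_auth_token"                       # placeholder

def build_sms_request(to, from_, body):
    """Return (url, form_data, auth_header) for a POST to Twilio's
    Messages endpoint, without actually sending anything."""
    url = ("https://api.twilio.com/2010-04-01/Accounts/%s/Messages.json"
           % ACCOUNT_SID)
    form_data = urllib.parse.urlencode({"To": to, "From": from_, "Body": body})
    creds = base64.b64encode(
        ("%s:%s" % (ACCOUNT_SID, AUTH_TOKEN)).encode()).decode()
    return url, form_data, "Basic " + creds

url, data, auth = build_sms_request("+15551230000", "+15559870000", "hello")
```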
| 1
|
2016-09-07T22:26:22Z
|
[
"python",
"django",
"sms"
] |
Starting another script using a button in tkinter
| 39,378,811
|
<p>I am trying to run another script within a script using a button in tkinter </p>
<p>I have tried two methods, one being:</p>
<pre><code>import os

class SeaofBTCapp(tk.Tk):
    #initilization
    def __init__(self, *args, **kwargs):
        tk.Tk.__init__(self, *args, **kwargs)
        tk.Tk.wm_title(self, "Battery Life Calculator")
        container = tk.Frame(self)
        container.pack(side="top", fill="both", expand = True)
        container.grid_rowconfigure(0, weight=1)
        container.grid_columnconfigure(0, weight=1)
        self.frames = {}
        for F in (StartPage, PageOne, PageTwo, PageThree):
            frame = F(container, self)
            self.frames[F] = frame
            frame.grid(row=0, column=0, sticky="nsew")
        self.show_frame(StartPage)

    #raise each page to the front
    def show_frame(self, cont):
        frame = self.frames[cont]
        frame.tkraise()

    def helloCallBack():
        os.system('python test2.py')

#the start page
class StartPage(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self,parent)
        label = ttk.Label(self, text="Start Page", font=LARGE_FONT)
        label.pack(pady=10,padx=10)
        button = ttk.Button(self, text="Visit Page 1",command=lambda:controller.show_frame(PageOne))
        button.pack()
        button1 = ttk.Button(self, text="Visit Page 2",command=lambda:controller.show_frame(PageTwo))
        button1.pack()
        button2 = ttk.Button(self, text="Visit Graph page",command=lambda:controller.show_frame(PageThree))
        button2.pack()
        button3 = ttk.Button(self, text="execute test",command=lambda:controller.helloCallBack)
        button3.pack()
</code></pre>
<p>No errors are given but when I hit execute nothing happens</p>
<p>The other method I tried was to</p>
<pre><code>import test2
</code></pre>
<p>but it runs the script automatically and it prints "i am a test"</p>
<p>I should note that the script I am trying to call is simply a test that just prints "i am a test".</p>
<p>any help is appreciated! </p>
<p>thanks</p>
<p>****edit****</p>
<pre><code>import tkinter as tk
from tkinter import ttk
import test2

LARGE_FONT= ("Verdana", 12)

class SeaofBTCapp(tk.Tk):
    #initilization
    def __init__(self, *args, **kwargs):
        tk.Tk.__init__(self, *args, **kwargs)
        tk.Tk.wm_title(self, "Battery Life Calculator")
        container = tk.Frame(self)
        container.pack(side="top", fill="both", expand = True)
        container.grid_rowconfigure(0, weight=1)
        container.grid_columnconfigure(0, weight=1)
        self.frames = {}
        for F in (StartPage, PageOne, PageTwo, PageThree):
            frame = F(container, self)
            self.frames[F] = frame
            frame.grid(row=0, column=0, sticky="nsew")
        self.show_frame(StartPage)

    #raise each page to the front
    def show_frame(self, cont):
        frame = self.frames[cont]
        frame.tkraise()

#the start page
class StartPage(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self,parent)
        label = ttk.Label(self, text="Start Page", font=LARGE_FONT)
        label.pack(pady=10,padx=10)
        button = ttk.Button(self, text="Visit Page 1",command=lambda:controller.show_frame(PageOne))
        button.pack()
        button1 = ttk.Button(self, text="Visit Page 2",command=lambda:controller.show_frame(PageTwo))
        button1.pack()
        button2 = ttk.Button(self, text="Visit Graph page",command=lambda:controller.show_frame(PageThree))
        button2.pack()
        button3 = ttk.Button(self, text="execute test",command=test2.main)
        button3.pack()

class PageOne(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self,parent)
        label = ttk.Label(self, text="Page One", font=LARGE_FONT)
        label.pack(pady=10,padx=10)
        button1 = ttk.Button(self, text="Back to start page",command=lambda:controller.show_frame(StartPage))
        button1.pack()
        button1 = ttk.Button(self, text="Visit Page 2",command=lambda:controller.show_frame(PageTwo))
        button1.pack()

class PageTwo(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self,parent)
        label = ttk.Label(self, text="Page One", font=LARGE_FONT)
        label.pack(pady=10,padx=10)
        button1 = ttk.Button(self, text="Back to Start Page",command=lambda:controller.show_frame(StartPage))
        button1.pack()
        button1 = ttk.Button(self, text="Back to Page One",command=lambda:controller.show_frame(PageOne))
        button1.pack()

class PageThree(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self,parent)
        label = ttk.Label(self, text="Graph Page", font=LARGE_FONT)
        label.pack(pady=10,padx=10)
        button1 = ttk.Button(self, text="Back to Start Page",command=lambda:controller.show_frame(StartPage))
        button1.pack()

app = SeaofBTCapp()
app.mainloop()
</code></pre>
<p>test2.py is as follows:</p>
<pre><code>def main():
    if __name__ == "__main__":
        print("Executing as main program")
        print("Value of __name__ is: ", __name__)

main()
</code></pre>
| -1
|
2016-09-07T20:52:35Z
| 39,378,939
|
<p>Your first method is not recommended for that. If you have another Python script, you should definitely import it.</p>
<p>About the second method:<br>
My guess is that the script <code>test2.py</code> is written without</p>
<pre><code>if __name__ == "__main__":
main()
</code></pre>
<p>And that's why it prints the message when you import it.</p>
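For reference, the usual layout puts the guard at module level, outside <code>main()</code>, so importing the module defines the function without running anything. A minimal sketch of what <code>test2.py</code> could look like:

```python
# test2.py -- hypothetical layout: importing this module defines main()
# but prints nothing; running it as a script calls main() once.
def main():
    message = "i am a test"
    print(message)
    return message

if __name__ == "__main__":
    main()
```

With this shape, the tkinter button can be wired as <code>command=test2.main</code>, and the message appears only when the button is pressed.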
| 0
|
2016-09-07T21:01:57Z
|
[
"python",
"python-3.x",
"user-interface",
"tkinter"
] |
Incomplete coordinate values for Google Vision OCR
| 39,378,862
|
<p>I have a script that is iterating through images of different forms. When parsing the Google Vision Text detection response, I use the XY coordinates in the 'boundingPoly' for each text item to specifically look for data in different parts of the form. </p>
<p>The problem I'm having is that some of the responses come back with only an X coordinate. Example:</p>
<pre><code>{u'description': u'sometext', u'boundingPoly': {u'vertices': [{u'x': 5595}, {u'x': 5717}, {u'y': 122, u'x': 5717}, {u'y': 122, u'x': 5595}
</code></pre>
<p>I've set a try/except (using python 2.7) to catch this issue, but it's always the same issue: <code>KeyError: 'y'</code>. I'm iterating through thousands of forms; so far it has happened to 10 rows out of 1000. </p>
<p>Has anyone had this issue before? Is there a fix other than attempting to re-submit the request if it reaches this error? </p>
| 2
|
2016-09-07T20:55:59Z
| 39,378,944
|
<p><a href="https://cloud.google.com/vision/reference/rest/v1/images/annotate" rel="nofollow">From the docs</a>:</p>
<blockquote>
<p>boundingPoly</p>
<p>object(BoundingPoly)</p>
<p>The bounding polygon around the face. The coordinates of the bounding box are in the original image's scale, as returned in ImageParams. The bounding box is computed to "frame" the face in accordance with human expectations. It is based on the landmarker results. <strong>Note that one or more x and/or y coordinates may not be generated in the BoundingPoly (the polygon will be unbounded) if only a partial face appears in the image to be annotated.</strong></p>
</blockquote>
<p>I believe this is implying that the <code>'y'</code> value in this case is <code>0</code>, or more generally, an edge value. In other words, it doesn't know where the bounded poly truly ends, as the text goes all the way to the edge of the image, and thus the image doesn't give enough info to know for sure that the text actually ends there. As far as the image provides, it ends at <code>'y'</code> of <code>0</code>.</p>
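Rather than catching the <code>KeyError</code>, the missing coordinate can be filled in defensively with <code>dict.get</code>; the vertex list below is the one from the question:

```python
vertices = [{'x': 5595}, {'x': 5717},
            {'y': 122, 'x': 5717}, {'y': 122, 'x': 5595}]

def vertex_xy(vertex):
    # a missing 'x' or 'y' means the polygon touches the image edge,
    # so 0 is the natural default value
    return vertex.get('x', 0), vertex.get('y', 0)

coords = [vertex_xy(v) for v in vertices]
```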
| 1
|
2016-09-07T21:02:19Z
|
[
"python",
"ocr",
"google-cloud-vision"
] |
Plot 2D array with Pandas, Matplotlib, and Numpy
| 39,378,902
|
<p>As a result of simulations, I parsed the output using Pandas <code>groupby()</code>. I am having a bit of difficulty plotting the data the way I want. Here's the Pandas output file (abridged for simplicity) that I'm trying to plot:</p>
<pre><code> Avg-del Min-del Max-del Avg-retx Min-retx Max-retx
Prob Producers
0.3 1 8.060291 0.587227 26.709371 42.931779 5.130041 136.216642
5 8.330889 0.371387 54.468836 43.166326 3.340193 275.932170
10 1.012147 0.161975 4.320447 6.336965 2.026241 19.177802
0.5 1 8.039639 0.776463 26.053635 43.160880 5.798276 133.090358
5 4.729875 0.289472 26.717824 25.732373 2.909811 135.289244
10 1.043738 0.160671 4.353993 6.461914 2.015735 19.595393
</code></pre>
<p>My y-axis is delay and my x-axis is the number of producers. I want one set of errorbars for probability <code>p=0.3</code> and another for <code>p=0.5</code>.
My Python script is the following:</p>
<pre><code>import sys
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

pd.set_option('display.expand_frame_repr', False)

outputFile = 'averages.txt'
f_out = open(outputFile, 'w')

data = pd.read_csv(sys.argv[1], delimiter=",")
result = data.groupby(["Prob", "Producers"]).mean()

print "Writing to output file: " + outputFile
result_s = str(result)
f_out.write(result_s)
f_out.close()

*** Update from James ***
for prob_index in result.index.levels[0]:
    r = result.loc[prob_index]
    labels = [col for col in r]
    lines = plt.plot(r)
    [line.set_label(str(prob_index)+" "+col) for col, line in zip(labels, lines)]

ax = plt.gca()
ax.legend()
ax.set_xticks(r.index)
ax.set_ylabel('Latency (s)')
ax.set_xlabel('Number of producer nodes')
plt.show()
</code></pre>
<p>Now I have 4 sliced arrays, one for each probability.
How do I slice them again based on delay (del) and retx, and plot errorbars based on avg, min, max?</p>
| 0
|
2016-09-07T20:59:03Z
| 39,381,333
|
<p>Ok, there is a lot going on here. First, it is plotting 6 lines. When your code calls</p>
<pre><code>plt.plot(np.transpose(np.array(result)[0:3, 0:3]), label = 'p=0.3')
plt.plot(np.transpose(np.array(result)[3:6, 0:3]), label = 'p=0.5')
</code></pre>
<p>it is calling <code>plt.plot</code> on a 3x3 array of data. <code>plt.plot</code> interprets this input not as an x and y, but rather as 3 separate series of y-values (with 3 points each). For the x values, it is imputing the values 0,1,2. In other words, for the first <code>plot</code> call it is plotting the data:</p>
<pre><code>x = [0,1,2]; y = [8.060291, 8.330889, 1.012147]
x = [0,1,2]; y = [0.587227, 0.371387, 0.161975]
x = [0,1,2]; y = [26.709371, 54.468836, 4.320447]
</code></pre>
<p>Based on your x-label, I think you want the values to be <code>x = [1,5,10]</code>. Try this to see if it gets the plot you want.</p>
<pre><code># iterate over the first dataframe index
for prob_index in result.index.levels[0]:
    r = result.loc[prob_index]
    labels = [col for col in r]
    lines = plt.plot(r)
    [line.set_label(str(prob_index)+" "+col) for col, line in zip(labels, lines)]

ax = plt.gca()
ax.legend()
ax.set_xticks(r.index)
ax.set_ylabel('Latency (s)')
ax.set_xlabel('Number of producer nodes')
</code></pre>
| 0
|
2016-09-08T01:37:08Z
|
[
"python",
"pandas",
"numpy",
"multidimensional-array",
"matplotlib"
] |
How to verify that a package installed from a list of package names
| 39,378,958
|
<p>I'm writing a program that will install certain packages from a <code>whl</code> file; however, I need a way to verify that the packages were installed:</p>
<pre><code>def verify_installs(self):
for pack in self.packages:
import pip
installed = pip.get_installed_distributions()
for name in list(installed):
if pack not in name:
print "{} failed to install.".format(pack)
</code></pre>
<p>This will throw the error:</p>
<pre><code>Traceback (most recent call last):
File "run_setup.py", line 34, in <module>
test.verify_installs()
File "run_setup.py", line 29, in verify_installs
if pack not in name:
TypeError: argument of type 'Distribution' is not iterable
</code></pre>
<p>If I attempt to run a loop over the packages and use <code>import</code> like so:</p>
<pre><code>def verify_installs(self):
for pack in self.packages:
import pack
</code></pre>
<p>I'll get the error:</p>
<pre><code>Traceback (most recent call last):
File "run_setup.py", line 29, in <module>
test.verify_installs()
File "run_setup.py", line 24, in verify_installs
import pack
ImportError: No module named pack
</code></pre>
<p>Is there a way I can loop through a list of packages and then try to import them and catch the import exception? Something like:</p>
<pre><code>def verify_packs(pack_list):
for pack in pack_list:
try:
import pack
except ImportError:
print "{} failed to install".format(pack)
</code></pre>
| -1
|
2016-09-07T21:02:54Z
| 39,379,001
|
<p>Say <code>pack_list</code> is a list of string names of modules:</p>
<pre><code>import importlib
def verify_packs(pack_list):
for pack in pack_list:
try:
importlib.import_module(pack)
except ImportError:
print("{} failed to install".format(pack))
</code></pre>
<p>Note that this is not the preferred way to check whether a module is installed and available.<br>
Take a look <a href="http://stackoverflow.com/questions/14050281/how-to-check-if-a-python-module-exists-without-importing-it">here</a>.</p>
| 0
|
2016-09-07T21:05:56Z
|
[
"python",
"python-2.7",
"loops",
"import"
] |
How to verify that a package installed from a list of package names
| 39,378,958
|
<p>I'm writing a program that will install certain packages from a <code>whl</code> file; however, I need a way to verify that the packages were installed:</p>
<pre><code>def verify_installs(self):
for pack in self.packages:
import pip
installed = pip.get_installed_distributions()
for name in list(installed):
if pack not in name:
print "{} failed to install.".format(pack)
</code></pre>
<p>This will throw the error:</p>
<pre><code>Traceback (most recent call last):
File "run_setup.py", line 34, in <module>
test.verify_installs()
File "run_setup.py", line 29, in verify_installs
if pack not in name:
TypeError: argument of type 'Distribution' is not iterable
</code></pre>
<p>If I attempt to run a loop over the packages and use <code>import</code> like so:</p>
<pre><code>def verify_installs(self):
for pack in self.packages:
import pack
</code></pre>
<p>I'll get the error:</p>
<pre><code>Traceback (most recent call last):
File "run_setup.py", line 29, in <module>
test.verify_installs()
File "run_setup.py", line 24, in verify_installs
import pack
ImportError: No module named pack
</code></pre>
<p>Is there a way I can loop through a list of packages and then try to import them and catch the import exception? Something like:</p>
<pre><code>def verify_packs(pack_list):
for pack in pack_list:
try:
import pack
except ImportError:
print "{} failed to install".format(pack)
</code></pre>
| -1
|
2016-09-07T21:02:54Z
| 39,379,233
|
<p>I figured out a way to check the installed packages:</p>
<pre><code>def verify_installs(self):
for pack in self.packages:
import pip
items = pip.get_installed_distributions()
installed_packs = sorted(["{}".format(i.key) for i in items])
if pack not in installed_packs:
print "Package {} was not installed".format(pack)
</code></pre>
<p>Example:</p>
<pre><code>test = SetUpProgram(["lxml", "test", "testing"], None, None)
test.verify_installs()
</code></pre>
<p>Output:</p>
<pre><code>Package test was not installed
Package testing was not installed
</code></pre>
<p>Now to explain it, this part <code>installed_packs = sorted(["{}".format(i.key) for i in items])</code> will create this:</p>
<pre><code>['babelfish', 'backports.shutil-get-terminal-size', 'beautifulsoup', 'chardet',
'cmd2', 'colorama', 'cycler', 'decorator', 'django', 'easyprocess', 'gooey', 'gu
essit', 'hachoir-core', 'hachoir-metadata', 'hachoir-parser', 'ipython', 'ipytho
n-genutils', 'lxml', 'matplotlib', 'mechanize', 'mypackage', 'nose', 'numpy', 'p
athlib2', 'pickleshare', 'pillow', 'pip', 'prettytable', 'progressbar', 'prompt-
toolkit', 'pygments', 'pyinstaller', 'pyparsing', 'python-dateutil', 'python-geo
ip', 'python-geoip-geolite2', 'pytz', 'pyvirtualdisplay', 'rebulk', 'requests',
'scapy', 'scrappy', 'selenium', 'setuptools', 'simplegeneric', 'six', 'tinydb',
'traitlets', 'tvdb-api', 'twisted', 'wcwidth', 'win-unicode-console', 'zope.inte
rface']
</code></pre>
<p>A list of all locally installed packages on the computer, from there:</p>
<pre><code>if pack not in installed_packs:
print "Package {} was not installed".format(pack)
</code></pre>
<p>Will run through the packages and check if any of the packages in a given list correspond to any of the actual install packages.</p>
| 0
|
2016-09-07T21:24:58Z
|
[
"python",
"python-2.7",
"loops",
"import"
] |