how to read json file with pandas?
39,040,250
<p>I have scraped a website with scrapy and stored the data in a json file.<br> Link to the json file: <a href="https://drive.google.com/file/d/0B6JCr_BzSFMHLURsTGdORmlPX0E/view?usp=sharing" rel="nofollow">https://drive.google.com/file/d/0B6JCr_BzSFMHLURsTGdORmlPX0E/view?usp=sharing</a></p> <p>But the json isn't standard json and gives errors:</p> <pre><code>&gt;&gt;&gt; import json &gt;&gt;&gt; with open("/root/code/itjuzi/itjuzi/investorinfo.json") as file: ... data = json.load(file) ... Traceback (most recent call last): File "&lt;stdin&gt;", line 2, in &lt;module&gt; File "/root/anaconda2/lib/python2.7/json/__init__.py", line 291, in load **kw) File "/root/anaconda2/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/root/anaconda2/lib/python2.7/json/decoder.py", line 367, in decode raise ValueError(errmsg("Extra data", s, end, len(s))) ValueError: Extra data: line 3 column 2 - line 3697 column 2 (char 45 - 3661517) </code></pre> <p>Then I tried this:</p> <pre><code>with open('/root/code/itjuzi/itjuzi/investorinfo.json','rb') as f: data = f.readlines() data = map(lambda x: x.decode('unicode_escape'), data) &gt;&gt;&gt; df = pd.DataFrame(data) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; NameError: name 'pd' is not defined &gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df = pd.DataFrame(data) &gt;&gt;&gt; print pd &lt;module 'pandas' from '/root/anaconda2/lib/python2.7/site-packages/pandas/__init__.pyc'&gt; &gt;&gt;&gt; print df [3697 rows x 1 columns] </code></pre> <p>Why does this only return 1 column?</p> <p>How can I standardize the json file and read it with pandas correctly?</p>
2
2016-08-19T13:25:16Z
39,040,324
<p>Try the following code (you are missing something: <code>json.load</code> takes a file object, while <code>json.loads</code> takes a string):</p> <pre><code>&gt;&gt;&gt; import json &gt;&gt;&gt; with open("/root/code/itjuzi/itjuzi/investorinfo.json") as file: ... data = json.loads(file.read()) </code></pre>
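For reference, the distinction between the two functions, plus a way to cope with a file holding several JSON documents (one per line, which is what the "Extra data" error suggests), can be sketched like this; the sample data below is hypothetical:

```python
import json
import io

# Hypothetical stand-in for the scraped file: several JSON documents
# concatenated, one per line, rather than one valid JSON array.
raw = '[{"website": ["a"]}]\n[{"website": ["b"]}]\n'

records = []
for line in io.StringIO(raw):
    line = line.strip()
    if line:
        # json.loads (note the trailing "s") parses a string;
        # json.load parses an open file object -- neither accepts the other.
        records.extend(json.loads(line))

print(records)  # [{'website': ['a']}, {'website': ['b']}]
```

Newer pandas versions can also read line-delimited JSON directly with `pd.read_json(path, lines=True)`.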
1
2016-08-19T13:28:36Z
[ "python", "json", "list", "pandas", "scrapy" ]
how to read json file with pandas?
39,040,250
<p>I have scraped a website with scrapy and stored the data in a json file.<br> Link to the json file: <a href="https://drive.google.com/file/d/0B6JCr_BzSFMHLURsTGdORmlPX0E/view?usp=sharing" rel="nofollow">https://drive.google.com/file/d/0B6JCr_BzSFMHLURsTGdORmlPX0E/view?usp=sharing</a></p> <p>But the json isn't standard json and gives errors:</p> <pre><code>&gt;&gt;&gt; import json &gt;&gt;&gt; with open("/root/code/itjuzi/itjuzi/investorinfo.json") as file: ... data = json.load(file) ... Traceback (most recent call last): File "&lt;stdin&gt;", line 2, in &lt;module&gt; File "/root/anaconda2/lib/python2.7/json/__init__.py", line 291, in load **kw) File "/root/anaconda2/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/root/anaconda2/lib/python2.7/json/decoder.py", line 367, in decode raise ValueError(errmsg("Extra data", s, end, len(s))) ValueError: Extra data: line 3 column 2 - line 3697 column 2 (char 45 - 3661517) </code></pre> <p>Then I tried this:</p> <pre><code>with open('/root/code/itjuzi/itjuzi/investorinfo.json','rb') as f: data = f.readlines() data = map(lambda x: x.decode('unicode_escape'), data) &gt;&gt;&gt; df = pd.DataFrame(data) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; NameError: name 'pd' is not defined &gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df = pd.DataFrame(data) &gt;&gt;&gt; print pd &lt;module 'pandas' from '/root/anaconda2/lib/python2.7/site-packages/pandas/__init__.pyc'&gt; &gt;&gt;&gt; print df [3697 rows x 1 columns] </code></pre> <p>Why does this only return 1 column?</p> <p>How can I standardize the json file and read it with pandas correctly?</p>
2
2016-08-19T13:25:16Z
39,040,375
<p>Try this, <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/4752/json/16714/read-json#t=201608191329212576968">from the SO documentation on JSON</a>:</p> <pre><code>import json with open('data.json') as data_file: data = json.load(data_file) </code></pre> <p>Note that <code>json.load</code> still reads the whole file into memory; it does not by itself handle JSON files too large to fit.</p> <p>EDIT: Your data is not valid JSON. Delete the following in the first 3 lines and it will validate:</p> <pre><code>[{ "website": ["\u5341\u65b9\u521b\u6295"] }] </code></pre> <p>EDIT 2 (since you need to access nested values from the JSON):</p> <p>You can now also access single values like this:</p> <pre><code>data["one"][0]["id"] # will return 'value' data["two"]["id"] # will return 'value' data["three"] # will return 'value' </code></pre>
4
2016-08-19T13:31:27Z
[ "python", "json", "list", "pandas", "scrapy" ]
Python: Division by larger numbers slower?
39,040,340
<p>Why does dividing by the larger factor pair result in slower execution?</p> <p>My solution for <a href="https://codility.com/programmers/task/min_perimeter_rectangle/" rel="nofollow">https://codility.com/programmers/task/min_perimeter_rectangle/</a></p> <pre class="lang-py prettyprint-override"><code>from math import sqrt, floor # This fails the performance tests def solution_slow(n): x = int(sqrt(n)) for i in xrange(x, n+1): if n % i == 0: return 2*(i + n / i) # This passes the performance tests def solution_fast(n): x = int(sqrt(n)) for i in xrange(x, 0, -1): if n % i == 0: return 2*(i + n / i) </code></pre>
0
2016-08-19T13:29:38Z
39,040,447
<p>That's obvious. The first function loops <em>many</em> more times.</p> <p>Note that <code>sqrt(n) != n - sqrt(n)</code>! In general <code>sqrt(n) &lt;&lt; n - sqrt(n)</code>, where <code>&lt;&lt;</code> means <em>much</em> smaller than.</p> <p>If <code>n=1000</code>, the first function loops at most <code>970</code> times while the second one at most <code>31</code>.</p>
1
2016-08-19T13:35:23Z
[ "python", "performance" ]
Python: Division by larger numbers slower?
39,040,340
<p>Why does dividing by the larger factor pair result in slower execution?</p> <p>My solution for <a href="https://codility.com/programmers/task/min_perimeter_rectangle/" rel="nofollow">https://codility.com/programmers/task/min_perimeter_rectangle/</a></p> <pre class="lang-py prettyprint-override"><code>from math import sqrt, floor # This fails the performance tests def solution_slow(n): x = int(sqrt(n)) for i in xrange(x, n+1): if n % i == 0: return 2*(i + n / i) # This passes the performance tests def solution_fast(n): x = int(sqrt(n)) for i in xrange(x, 0, -1): if n % i == 0: return 2*(i + n / i) </code></pre>
0
2016-08-19T13:29:38Z
39,040,480
<p>It's not division that slows it down; it's the number of iterations required.</p> <p>Let <code>L = xrange(0, x)</code> (order doesn't matter here) and <code>R = xrange(x, n+1)</code>. Every factor of <code>n</code> in <code>L</code> can be paired with exactly one factor of <code>n</code> in <code>R</code>. In general, <code>x</code> is much, much smaller than <code>n/2</code>, so <code>L</code> is much smaller than <code>R</code>. This means that there are far more elements of <code>R</code> that don't divide <code>n</code> than there are in <code>L</code>. In the case of a prime number, there <em>are</em> no factors, so the slow solution has to check every value of the much larger set instead of the much smaller one.</p>
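The size gap between the two searches is easy to measure directly; a small sketch counting how many loop steps each direction takes before hitting a divisor, mirroring the two solutions above:

```python
from math import sqrt

def iterations(n):
    """Count the loop steps each direction takes before finding
    the first divisor of n (ascending vs. descending search)."""
    x = int(sqrt(n))
    up = next(k + 1 for k, i in enumerate(range(x, n + 1)) if n % i == 0)
    down = next(k + 1 for k, i in enumerate(range(x, 0, -1)) if n % i == 0)
    return up, down

# For a prime, the ascending search must walk all the way up to n itself,
# while the descending search stops at 1 after only ~sqrt(n) steps.
print(iterations(101))   # (92, 10)
print(iterations(1000))  # (10, 7)
```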
3
2016-08-19T13:37:01Z
[ "python", "performance" ]
Python: Division by larger numbers slower?
39,040,340
<p>Why does dividing by the larger factor pair result in slower execution?</p> <p>My solution for <a href="https://codility.com/programmers/task/min_perimeter_rectangle/" rel="nofollow">https://codility.com/programmers/task/min_perimeter_rectangle/</a></p> <pre class="lang-py prettyprint-override"><code>from math import sqrt, floor # This fails the performance tests def solution_slow(n): x = int(sqrt(n)) for i in xrange(x, n+1): if n % i == 0: return 2*(i + n / i) # This passes the performance tests def solution_fast(n): x = int(sqrt(n)) for i in xrange(x, 0, -1): if n % i == 0: return 2*(i + n / i) </code></pre>
0
2016-08-19T13:29:38Z
39,040,595
<p>I'd say the number of iterations is the key which makes performance a little bit different between your functions, as @Bakuriu already said. Also, xrange could be slightly more expensive than using a simple loop; for instance, take a look: f3 will perform a little better than f1 &amp; f2:</p> <pre><code>import timeit from math import sqrt, floor def f1(n): x = int(sqrt(n)) for i in xrange(x, n + 1): if n % i == 0: return 2 * (i + n / i) def f2(n): x = int(sqrt(n)) for i in xrange(x, 0, -1): if n % i == 0: return 2 * (i + n / i) def f3(n): x = int(sqrt(n)) while True: if n % x == 0: return 2 * (x + n / x) x -= 1 N = 30 K = 100000 print("Measuring {0} times f1({1})={2}".format( K, N, timeit.timeit('f1(N)', setup='from __main__ import f1, N', number=K))) print("Measuring {0} times f2({1})={2}".format( K, N, timeit.timeit('f2(N)', setup='from __main__ import f2, N', number=K))) print("Measuring {0} times f3({1})={2}".format( K, N, timeit.timeit('f3(N)', setup='from __main__ import f3, N', number=K))) # Measuring 100000 times f1(30)=0.0738177938151 # Measuring 100000 times f2(30)=0.0753000788315 # Measuring 100000 times f3(30)=0.0503645315841 # [Finished in 0.3s] </code></pre> <p>Next time you get this type of question, using a profiler is highly recommended :)</p>
0
2016-08-19T13:43:44Z
[ "python", "performance" ]
Looking backward and forward in a circular loop in python
39,040,359
<p>I would like to generate a list of single digits based on user input. In a circular iterative way, the list should contain the user input, the two digits before that, and the two digits after that. The order of the digits isn't important. </p> <p>user_input = "1" output = [9, 0, 1, 2, 3]</p> <p>user_input = "9" output = [7, 8, 9, 0, 1]</p> <p>Using itertools.cycle I was able to get the next two digits, but I couldn't find an answer that can help me get the previous two digits. Is there a simple way to get those previous two digits?</p> <pre><code>from itertools import cycle numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] user_input = "139" for i in user_input: s = int(i) lst = [s] itr = cycle(numbers) if s in itr: #how can I get the two digits before s? lst.append(next(itr)) #getting the next digit lst.append(next(itr)) print(lst) </code></pre>
1
2016-08-19T13:30:56Z
39,040,496
<p>You can implement it like this:</p> <pre><code>def backward_list(n): numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] if n == 0 or n == 1: x = numbers.index(n) else: x = (numbers.index(n)-10) return [numbers[x-2],numbers[x-1],numbers[x],numbers[x+1],numbers[x+2]] </code></pre> <p><strong>Execution</strong></p> <pre><code>In [1]: for i in range(10): .....: print backward_list(i) .....: [8, 9, 0, 1, 2] [9, 0, 1, 2, 3] [0, 1, 2, 3, 4] [1, 2, 3, 4, 5] [2, 3, 4, 5, 6] [3, 4, 5, 6, 7] [4, 5, 6, 7, 8] [5, 6, 7, 8, 9] [6, 7, 8, 9, 0] [7, 8, 9, 0, 1] </code></pre>
2
2016-08-19T13:38:26Z
[ "python", "list", "iteration" ]
Looking backward and forward in a circular loop in python
39,040,359
<p>I would like to generate a list of single digits based on user input. In a circular iterative way, the list should contain the user input, the two digits before that, and the two digits after that. The order of the digits isn't important. </p> <p>user_input = "1" output = [9, 0, 1, 2, 3]</p> <p>user_input = "9" output = [7, 8, 9, 0, 1]</p> <p>Using itertools.cycle I was able to get the next two digits, but I couldn't find an answer that can help me get the previous two digits. Is there a simple way to get those previous two digits?</p> <pre><code>from itertools import cycle numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] user_input = "139" for i in user_input: s = int(i) lst = [s] itr = cycle(numbers) if s in itr: #how can I get the two digits before s? lst.append(next(itr)) #getting the next digit lst.append(next(itr)) print(lst) </code></pre>
1
2016-08-19T13:30:56Z
39,040,563
<p>Modify your statements in the <code>if</code> block to this (the modulo keeps the prepended digits in the 0-9 range when wrapping around):</p> <pre><code>if s in itr: lst.append(next(itr)) #getting the next digit lst = [(s - 1) % 10] + lst # prepend the previous digit lst.append(next(itr)) lst = [(s - 2) % 10] + lst # prepend the digit before that </code></pre> <p>Or you can also do</p> <pre><code>if s in itr: lst.append(next(itr)) #getting the next digit lst.insert(0, (s - 1) % 10) # prepend the previous digit lst.append(next(itr)) lst.insert(0, (s - 2) % 10) # prepend the digit before that </code></pre>
0
2016-08-19T13:41:57Z
[ "python", "list", "iteration" ]
Looking backward and forward in a circular loop in python
39,040,359
<p>I would like to generate a list of single digits based on user input. In a circular iterative way, the list should contain the user input, the two digits before that, and the two digits after that. The order of the digits isn't important. </p> <p>user_input = "1" output = [9, 0, 1, 2, 3]</p> <p>user_input = "9" output = [7, 8, 9, 0, 1]</p> <p>Using itertools.cycle I was able to get the next two digits, but I couldn't find an answer that can help me get the previous two digits. Is there a simple way to get those previous two digits?</p> <pre><code>from itertools import cycle numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] user_input = "139" for i in user_input: s = int(i) lst = [s] itr = cycle(numbers) if s in itr: #how can I get the two digits before s? lst.append(next(itr)) #getting the next digit lst.append(next(itr)) print(lst) </code></pre>
1
2016-08-19T13:30:56Z
39,040,589
<p>You could take a range from the input and use that range to slice a numpy array</p> <p>edit: my bad for writing code and not testing it... thanks @Stefan Pochmann for pointing that out...</p> <pre><code>import numpy as np def cycle(x): #x is user input indices = np.array(range(x-2, x+3))%10 numbers = np.array(range(10)) return numbers[indices] </code></pre>
0
2016-08-19T13:43:24Z
[ "python", "list", "iteration" ]
Looking backward and forward in a circular loop in python
39,040,359
<p>I would like to generate a list of single digits based on user input. In a circular iterative way, the list should contain the user input, the two digits before that, and the two digits after that. The order of the digits isn't important. </p> <p>user_input = "1" output = [9, 0, 1, 2, 3]</p> <p>user_input = "9" output = [7, 8, 9, 0, 1]</p> <p>Using itertools.cycle I was able to get the next two digits, but I couldn't find an answer that can help me get the previous two digits. Is there a simple way to get those previous two digits?</p> <pre><code>from itertools import cycle numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] user_input = "139" for i in user_input: s = int(i) lst = [s] itr = cycle(numbers) if s in itr: #how can I get the two digits before s? lst.append(next(itr)) #getting the next digit lst.append(next(itr)) print(lst) </code></pre>
1
2016-08-19T13:30:56Z
39,040,871
<p>Could use a list comprehension and <code>% 10</code>:</p> <pre><code>&gt;&gt;&gt; for s in range(10): print([i % 10 for i in range(s-2, s+3)]) [8, 9, 0, 1, 2] [9, 0, 1, 2, 3] [0, 1, 2, 3, 4] [1, 2, 3, 4, 5] [2, 3, 4, 5, 6] [3, 4, 5, 6, 7] [4, 5, 6, 7, 8] [5, 6, 7, 8, 9] [6, 7, 8, 9, 0] [7, 8, 9, 0, 1] </code></pre>
1
2016-08-19T13:55:47Z
[ "python", "list", "iteration" ]
Select/slice a multi-index dataframe time-series using a period leads to a bug?
39,040,445
<p>I have a multi index which first level index is a time series in the very same way as the following one:</p> <pre><code>In[168]: rng = pd.date_range('01-01-2000',periods=50,freq='M') In[169]: long_df = pd.DataFrame(np.random.randn(50,4),index = rng, columns=['bar','baz','foo','zoo']) In[170]: long_df = long_df.stack() In[171]: long_df[:10] Out[171]: 2000-01-31 bar 2.079474 baz -0.569920 foo 1.149012 zoo -0.228926 2000-02-29 bar 0.429502 baz -0.117166 foo 0.956546 zoo -1.483818 2000-03-31 bar -1.137998 baz 1.049849 </code></pre> <p>EDIT </p> <p>I can slice it using periods and it works fine:</p> <pre><code>In[172]: long_df = long_df.sort_index() In[173]: long_df.loc['2001'] Out[173]: 2001-01-31 bar -0.193987 baz 0.769297 foo 0.286880 zoo -1.431313 2001-02-28 bar -0.840502 baz 1.786758 foo 0.878356 zoo 0.433383 2001-03-31 bar 0.897548 baz 1.901540 foo 0.110606 zoo 0.571267 2001-04-30 bar -0.375377 baz 1.423742 foo -0.415006 zoo -0.141000 (...) </code></pre> <p>However, when I use the multiindex version I am working with the slicing is not being acknowledged:</p> <pre><code>In[204]: dfmi Out[204]: Last Days to expiry Date Ticker 1988-12-06 HGF89 1.46894 52 HGF90 1.17100 419 HGG89 1.42100 80 HGH89 1.37344 113 HGH90 1.17450 477 HGK89 1.28750 171 HGK90 1.15900 539 HGN89 1.24550 233 HGN90 1.15900 598 HGU89 1.21750 295 HGU90 1.15900 659 HGZ89 1.18500 386 1988-12-07 HGF89 1.51900 51 HGF90 1.18900 418 HGG89 1.46394 79 HGH89 1.41300 112 HGH90 1.19250 476 HGK89 1.31750 170 HGK90 1.17700 538 HGN89 1.27550 232 HGN90 1.17700 597 HGU89 1.24250 294 HGU90 1.17700 658 HGZ89 1.20300 385 1988-12-08 HGF89 1.58100 50 HGF90 1.18900 417 HGG89 1.50894 78 HGH89 1.43994 111 HGH90 1.19250 475 HGK89 1.32750 169 ... ... 
2016-07-05 HGK7 2.20500 325 HGM7 2.20900 358 HGN6 2.18150 22 HGN7 2.21000 387 HGQ6 2.18150 55 HGQ7 2.21450 420 HGU6 2.18350 85 HGU7 2.21550 449 HGV6 2.18700 114 HGV7 2.21850 479 HGX6 2.19100 146 HGX7 2.22000 511 HGZ6 2.19250 176 2016-07-06 HGF7 2.16700 205 HGG7 2.17100 233 HGH7 2.17100 266 HGJ7 2.17550 294 HGK7 2.17650 324 HGM7 2.18050 357 HGN6 2.15150 21 HGN7 2.18150 386 HGQ6 2.15150 54 HGQ7 2.18600 419 HGU6 2.15350 84 HGU7 2.18700 448 HGV6 2.15700 113 HGV7 2.19000 478 HGX6 2.16100 145 HGX7 2.19150 510 HGZ6 2.16300 175 [167701 rows x 2 columns] In[204]: dfmi = dfmi.sort_index() In[205]: dfmi.loc['2001'] Out[206]: Last Days to expiry Date Ticker 1988-12-06 HGF89 1.46894 52 HGF90 1.17100 419 HGG89 1.42100 80 HGH89 1.37344 113 HGH90 1.17450 477 HGK89 1.28750 171 HGK90 1.15900 539 HGN89 1.24550 233 HGN90 1.15900 598 HGU89 1.21750 295 HGU90 1.15900 659 1988-12-07 HGF89 1.51900 51 HGF90 1.18900 418 HGG89 1.46394 79 HGH89 1.41300 112 HGH90 1.19250 476 HGK89 1.31750 170 HGK90 1.17700 538 HGN89 1.27550 232 HGN90 1.17700 597 HGU89 1.24250 294 HGU90 1.17700 658 1988-12-08 HGF89 1.58100 50 HGF90 1.18900 417 HGG89 1.50894 78 HGH89 1.43994 111 HGH90 1.19250 475 HGK89 1.32750 169 HGK90 1.17700 537 HGN89 1.27750 231 ... ... 
2016-07-05 HGH7 2.19950 267 HGJ7 2.20400 295 HGK7 2.20500 325 HGM7 2.20900 358 HGN6 2.18150 22 HGN7 2.21000 387 HGQ6 2.18150 55 HGQ7 2.21450 420 HGU6 2.18350 85 HGU7 2.21550 449 HGV6 2.18700 114 HGV7 2.21850 479 HGX6 2.19100 146 HGX7 2.22000 511 2016-07-06 HGF7 2.16700 205 HGG7 2.17100 233 HGH7 2.17100 266 HGJ7 2.17550 294 HGK7 2.17650 324 HGM7 2.18050 357 HGN6 2.15150 21 HGN7 2.18150 386 HGQ6 2.15150 54 HGQ7 2.18600 419 HGU6 2.15350 84 HGU7 2.18700 448 HGV6 2.15700 113 HGV7 2.19000 478 HGX6 2.16100 145 HGX7 2.19150 510 [161017 rows x 2 columns] </code></pre> <p>I noticed that there is a difference in type between the long_df (pandas.core.series.Series) I gave as an example and the df (pandas.core.frame.DataFrame) I use</p> <p>What is the correct way to do it?</p> <p>Thanks a lot for your tips,</p>
2
2016-08-19T13:35:16Z
39,040,543
<p>You need add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow"><code>loc</code></a>, but need last version of <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#partial-string-indexing-on-datetimeindex-when-part-of-a-multiindex" rel="nofollow">pandas 0.18.1</a>:</p> <pre><code>print (long_df.loc['2001']) 2001-01-31 bar 1.684425 baz 1.215258 foo 0.158968 zoo 0.689477 2001-02-28 bar -0.123582 baz 0.312533 foo 0.609169 zoo -0.093985 2001-03-31 bar 0.372093 baz -0.281191 foo -0.400354 zoo 0.646965 2001-04-30 bar -0.287488 baz -0.928941 foo 1.365416 zoo 0.267282 2001-05-31 bar -1.021086 baz 0.317819 foo -0.393135 zoo -0.213589 2001-06-30 bar -2.594173 ... ... </code></pre> <p>EDIT:</p> <p>Another solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html" rel="nofollow"><code>get_level_values</code></a> from first level with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html" rel="nofollow"><code>get_loc</code></a> for finding integer indexes:</p> <pre><code>import pandas as pd long_df = pd.read_csv('test/testslice.csv', parse_dates=[0], index_col=[0,1]) dfmi = long_df.stack().sort_index() print (dfmi.index.get_level_values(0)) DatetimeIndex(['1988-12-06', '1988-12-06', '1988-12-06', '1988-12-06', '1988-12-06', '1988-12-06', '1988-12-06', '1988-12-06', '1988-12-06', '1988-12-06', ... 
'2016-07-06', '2016-07-06', '2016-07-06', '2016-07-06', '2016-07-06', '2016-07-06', '2016-07-06', '2016-07-06', '2016-07-06', '2016-07-06'], dtype='datetime64[ns]', name='Date', length=335402, freq=None) print (dfmi.index.get_level_values(0).get_loc('2001')) slice(121844, 133684, None) </code></pre> <pre><code>print (dfmi.iloc[dfmi.index.get_level_values(0).get_loc('2001')]) Date Ticker 2001-01-02 HGF01 Last 0.8180 Days to expiry 27.0000 HGF02 Last 0.8180 Days to expiry 392.0000 HGG01 Last 0.8165 Days to expiry 55.0000 HGG02 Last 0.8180 Days to expiry 420.0000 HGH01 Last 0.8115 Days to expiry 85.0000 HGH02 Last 0.8180 Days to expiry 448.0000 HGJ01 Last 0.8125 Days to expiry 114.0000 HGJ02 Last 0.8170 Days to expiry 479.0000 HGK01 Last 0.8135 Days to expiry 147.0000 HGK02 Last 0.8160 Days to expiry 512.0000 HGM01 Last 0.8145 Days to expiry 176.0000 HGM02 Last 0.8155 Days to expiry 540.0000 HGN01 Last 0.8155 Days to expiry 206.0000 HGN02 Last 0.8140 Days to expiry 573.0000 HGQ01 Last 0.8160 Days to expiry 239.0000 ... 
2001-12-31 HGK03 Last 0.6960 Days to expiry 513.0000 HGM02 Last 0.6680 Days to expiry 177.0000 HGM03 Last 0.6980 Days to expiry 542.0000 HGN02 Last 0.6710 Days to expiry 210.0000 HGN03 Last 0.7005 Days to expiry 575.0000 HGQ02 Last 0.6740 Days to expiry 240.0000 HGQ03 Last 0.7030 Days to expiry 604.0000 HGU02 Last 0.6770 Days to expiry 269.0000 HGU03 Last 0.7050 Days to expiry 634.0000 HGV02 Last 0.6795 Days to expiry 302.0000 HGV03 Last 0.7080 Days to expiry 667.0000 HGX02 Last 0.6820 Days to expiry 329.0000 HGX03 Last 0.7110 Days to expiry 694.0000 HGZ02 Last 0.6850 Days to expiry 361.0000 HGZ03 Last 0.7140 Days to expiry 728.0000 dtype: float64 </code></pre> <p>EDIT1 by comment:</p> <p>Unfortunately I have only slow solution with list comprehension and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> if need select by range:</p> <pre><code>print (list(range(1993, 2003))) [1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002] dfs = [dfmi.iloc[dfmi.index.get_level_values(0).get_loc(str(x))] for x in range(1993, 2003)] print (pd.concat(dfs)) 1993-01-01 00:00:00 bar 0.080676 baz 0.315925 foo -1.484132 zoo -0.977202 1993-01-01 01:00:00 bar 0.817846 baz -1.280649 foo 0.727975 zoo -0.062142 1993-01-01 02:00:00 bar 1.278623 baz 0.268865 foo -0.183612 zoo 0.194996 1993-01-01 03:00:00 bar -0.304734 baz -0.227468 foo -0.134305 zoo 0.887374 1993-01-01 04:00:00 bar -0.166669 baz -0.132718 foo -0.624932 zoo 1.959724 1993-01-01 05:00:00 bar -1.379774 baz -0.738452 foo 0.398924 zoo 0.005612 1993-01-01 06:00:00 bar -0.864205 baz -0.813321 foo 0.931858 zoo -1.005977 1993-01-01 07:00:00 bar 0.667380 baz -1.208457 ... 
2002-10-30 08:00:00 foo 0.311835 zoo 0.611802 2002-10-30 09:00:00 bar 2.615050 baz -0.291767 foo -0.508202 zoo 0.443429 2002-10-30 10:00:00 bar -1.724252 baz -0.126579 foo 1.108530 zoo -0.553025 2002-10-30 11:00:00 bar 1.208705 baz -1.561024 foo 0.722768 zoo 1.893419 2002-10-30 12:00:00 bar 0.239383 baz -0.543053 foo -0.687370 zoo 0.848929 2002-10-30 13:00:00 bar 0.897465 baz 0.631292 foo 0.068200 zoo -1.579010 2002-10-30 14:00:00 bar -0.996531 baz -1.208318 foo 0.174970 zoo -0.780913 2002-10-30 15:00:00 bar 0.237465 baz 0.380585 foo -1.646285 zoo -0.730744 dtype: float64 </code></pre>
3
2016-08-19T13:41:01Z
[ "python", "pandas", "time-series", "slice", "multi-index" ]
How to replace letters with a corresponding number
39,040,466
<p>My question is: if I have a string like "lake" and I want it to be replaced with a set of numbers like "1,2,3,4", how will I be able to do it? To be more clear, I want each letter to have a corresponding number. Hope I'm clear. Thanks in advance.</p>
-4
2016-08-19T13:36:28Z
39,040,778
<p>Your question is a little bit generic, but let me show you a little example where you transform your input into "something" and you can recover your source string from that "something":</p> <pre><code>def crypt(c, key): return ord(c)-key def decrypt(c, key): return chr(c+key) my_string = "lake" key = 97 dst = [crypt(c,key) for c in my_string] src = [decrypt(c,key) for c in dst] print dst print ''.join(src) </code></pre> <p>There is no general rule that answers your question of <code>I want each letter to have a corresponding number</code>; for the sake of simplicity, just use a relevant <a href="https://en.wikipedia.org/wiki/Bijection" rel="nofollow">bijective</a> function to transform/untransform your strings and you're done.</p> <p>Of course, I'd recommend not sticking to this simple answer of mine and instead reading about this topic in some of the zillion specialized books out there :)</p> <p>Hope it helps.</p>
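If a fixed mapping a→1, b→2, … is all that is needed, a dictionary built from `string.ascii_lowercase` keeps the transform and its inverse explicit; a sketch of one possible convention (not the only one):

```python
import string

# Map 'a'..'z' to 1..26, and build the inverse mapping for decoding.
letter_to_num = {c: i + 1 for i, c in enumerate(string.ascii_lowercase)}
num_to_letter = {v: k for k, v in letter_to_num.items()}

encoded = [letter_to_num[c] for c in "lake"]
decoded = "".join(num_to_letter[n] for n in encoded)

print(encoded)  # [12, 1, 11, 5]
print(decoded)  # lake
```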
0
2016-08-19T13:51:32Z
[ "python", "python-2.7" ]
Pandas set_Value with DatetimeIndex [Python]
39,040,544
<p>I'm trying to add the row-wise result from a function into my dataframe using <code>df.set_Value</code>.</p> <p><code>df</code> in the format :</p> <pre><code> Count DTW DateTime 2015-01-16 10 0 2015-01-17 28 0 </code></pre> <p>Using <code>df.setValue</code></p> <pre><code>dw.set_Value(idx, 'col', dtw) # idx and dtw are int values TypeError: cannot insert DatetimeIndex with incompatible label </code></pre> <p>How do I solve this error or what alternative method with comparable efficiency is there?</p>
3
2016-08-19T13:41:04Z
39,040,781
<p>I think you have <code>Series</code>, not <code>DataFrame</code>, so use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.set_value.html" rel="nofollow"><code>Series.set_value</code></a> with index converted to <code>datetime</code></p> <pre><code>dw = pd.Series([-2374], index = [pd.to_datetime('2015-01-18')]) dw.index.name = 'DateTime' print (dw) DateTime 2015-01-18 -2374 dtype: int64 print (dw.set_value(pd.to_datetime('2015-01-19'), 1)) DateTime 2015-01-18 -2374 2015-01-19 1 dtype: int64 </code></pre> <hr> <pre><code>print (dw.set_value(pd.datetime(2015, 1, 19), 1)) DateTime 2015-01-18 -2374 2015-01-19 1 dtype: int64 </code></pre> <p>More standard way is use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iloc.html" rel="nofollow"><code>iloc</code></a>:</p> <pre><code>print (dw) Count DTW DateTime 2015-01-16 10 0 2015-01-17 28 0 dw.ix[1, 'DTW'] = 10 #dw.DTW.iloc[1] = 10 print (dw) Count DTW DateTime 2015-01-16 10 0 2015-01-17 28 10 </code></pre>
3
2016-08-19T13:51:36Z
[ "python", "pandas", "dataframe" ]
How to check user input with csv file and print data from specific column?
39,040,662
<p>I have a CSV file, which contains patch names, their release date and some other info in separate columns. I am trying to write a Python script that will ask the user for a Patch name and once it gets the input, will check if the Patch is in the CSV file and print out the Release Date.</p> <p>So far, I have written the following piece of code, which I based on <a href="http://stackoverflow.com/questions/28195926/how-do-you-read-data-in-csv-file-and-print-specific-ones">this</a> answer.</p> <pre><code>import csv patch = raw_input("Please provide your Patchname: ") with open("CSV_File1.csv") as my_file1: reader = csv.DictReader(my_file1) for row in reader: for k in row: if row[k] == patch: print "According to the CSV_File1 database: "+row[k] </code></pre> <p>This way I get the Patch name printed on the screen. I don't know how to traverse the column with the Dates, so that I can print the date that corresponds to the row with the Patch name that I provided as input.</p> <p>In addition, I would like to check if that patch is the last released one. If it isn't, then print the latest one along with its release date. My problem is that the CSV file contains patch names of different software versions, so I can't just print the last of the list. For example:</p> <pre><code>PatchXXXYY,...other columns...,Release Date,... &lt;--- (this is the header row of the CSV file) Patch10000,...,date Patch10001,...,date Patch10002,...,date Patch10100,...,date Patch10101,...,date Patch10102,...,date Patch10103,...,date Patch20000,...,date ... </code></pre> <p>So, if my input is "Patch10000", then I should get its release date and the latest available Patch, which in this case would be Patch10002, and its release date. But NOT Patch20000, as that would be a different software version. A preferable output would look like this:</p> <blockquote> <p>According to the CSV_File1 database: Patch10100 was released on "date".
The latest available patch is "Patch10103", which was released on "date".</p> </blockquote> <p>That's because the "XXX" digits in the PatchXXXYY above, represent the software version, and the "YY" the patch number. I hope this is clear.</p> <p>Thanks in advance!</p>
0
2016-08-19T13:46:40Z
39,040,917
<p>You're almost there, though I'm a <em>wee</em> bit confused - your sample data doesn't have a header row. If it doesn't then you shouldn't be using a <code>DictReader</code> but if it does you can take this approach.</p> <pre><code>version = patch[:8] latest_patch = '' last_patch_data = None with open("CSV_File1.csv") as my_file1: reader = csv.DictReader(my_file1) for row in reader: # This works because of ASCII ordering. First, # we make sure the package starts with the right # version - e.g. Patch200 if row['Package'].startswith(version): # Now we grab the next two numbers, so from # Patch20042 we're grabbing '42' patch_number = row['Package'][8:10] # '02' &gt; '' is true, and '42' &gt; '02' is also True if patch_number &gt; latest_patch: # If we have a greater patch number, we # want to store that, along with the row that # had that. We could just store the patch &amp; date # but it's fine to store the whole row latest_patch = patch_number last_patch_data = row # No need to iterate over the keys, you *know* the # column containing the patch. Presumably it's # titled 'patch' #for k in row: # if row[k] == patch: if row['Package'] == patch: # assuming the date header is 'date' print("According to the CSV_File1 database: {patch!r}" " was released on {date!r}".format(patch=row['Package'], date=row['Registration'])) # `None` is a singleton, which means that we can use `is`, # rather than `==`. If we didn't even *start* with the same # version, there was certainly no patch. You may prefer a # different message, of course. if last_patch_data is None: print('No patch found') else: print('The latest available patch is {patch!r},' ' which was released on {date!r}'.format(patch=last_patch_data['Package'], date=last_patch_data['Registration'])) </code></pre>
0
2016-08-19T13:58:14Z
[ "python", "csv", "input", "output" ]
How to check user input with csv file and print data from specific column?
39,040,662
<p>I have a CSV file, which contains patch names, their release date and some other info in separate columns. I am trying to write a Python script that will ask the user for a Patch name and once it gets the input, will check if the Patch is in the CSV file and print out the Release Date.</p> <p>So far, I have written the following piece of code, which I based based on <a href="http://stackoverflow.com/questions/28195926/how-do-you-read-data-in-csv-file-and-print-specific-ones">this</a> answer.</p> <pre><code>import csv patch = raw_input("Please provide your Patchname: ") with open("CSV_File1.csv") as my_file1: reader = csv.DictReader(my_file1) for row in reader: for k in row: if row[k] == patch: print "According to the CSV_File1 database: "+row[k] </code></pre> <p>This way I get the Patch name printed on the screen. I don't know how to traverse the column with the Dates, so that I can print the date that corresponds to the row with the Patch name that I provided as input.</p> <p>In addition, I would like to check if that patch is the last released one. If it isn't, then print the latest one along with its release date. My problem is that the CSV file contains patch names of different software versions, so I can't just print the last of the list. For example:</p> <pre><code>PatchXXXYY,...other columns...,Release Date,... &lt;--- (this is the header row of the CSV file) Patch10000,...,date Patch10001,...,date Patch10002,...,date Patch10100,...,date Patch10101,...,date Patch10102,...,date Patch10103,...,date Patch20000,...,date ... </code></pre> <p>So, if my input is "Patch10000", then I should get its release date and the latest available Patch, which in this case would be Patch10002, and its release date. But NOT Patch20000, as that would be a different software version. A preferable output would like this:</p> <blockquote> <p>According to the CSV_File1 database: Patch10100 was released on "date". 
The latest available patch is "Patch10103", which was released on "date".</p> </blockquote> <p>That's because the "XXX" digits in the PatchXXXYY above, represent the software version, and the "YY" the patch number. I hope this is clear.</p> <p>Thanks in advance!</p>
0
2016-08-19T13:46:40Z
39,045,540
<p>The CSV module works fine, but I just wanted to throw Pandas in, as this can be a good use case for it. There may be better ways to handle this, but it's a fun example. This assumes that your columns are labeled Patch_Name and Release_Date, so you will need to adjust the names to match your file.</p> <pre><code>import pandas as pd

my_file1 = pd.read_csv("CSV_File1.csv", error_bad_lines=False)
patch = raw_input("Please provide your Patchname: ")

# Find the row that matches patch and store the index as idx
idx = my_file1[my_file1["Patch_Name"] == patch].index.tolist()

# Get the date value from the row by index number
date = my_file1.get_value(idx[0], "Release_Date")

print "According to the CSV_File1 database: {} {}".format(patch, date)
</code></pre> <p>There are great ways to filter and compare the data in a CSV with Pandas as well. I would give more descriptive solutions if I had more time. I highly suggest looking into the Pandas documentation.</p>
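The question also asks for the latest available patch within the same software version, which the snippet above doesn't cover. Independent of pandas, that version logic can be sketched like this (a minimal sketch assuming names really follow the PatchXXXYY convention from the question; the rows and dates below are made up for illustration):

```python
# Hypothetical rows: (patch name, release date), matching the question's layout.
rows = [
    ("Patch10000", "2016-01-01"),
    ("Patch10001", "2016-02-01"),
    ("Patch10103", "2016-03-01"),
    ("Patch20000", "2016-04-01"),
]

def latest_for(patch, rows):
    # "PatchXXXYY": characters 5-7 are the software version XXX,
    # characters 8-9 are the patch number YY.
    version = patch[5:8]
    same = [r for r in rows if r[0][5:8] == version]
    # The highest YY within the same XXX is the latest patch.
    return max(same, key=lambda r: int(r[0][8:10]))

name, date = latest_for("Patch10000", rows)
print(name, date)  # Patch10001 2016-02-01
```

The same filter translates directly to pandas, e.g. selecting the rows whose Patch_Name starts with the version prefix and taking the maximum.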
0
2016-08-19T18:23:45Z
[ "python", "csv", "input", "output" ]
Can python compare user input with variables in program?
39,040,699
<p>This is my first post here; hope it is easily readable and my question properly asked! (:</p> <p>First of all, to put you a bit into perspective, I wanted to create an efficiency calculator for the brand new game No Man's Sky. The economy in that game is pretty much the same as in real life. You can sell items at a much higher price than the components needed to create them. E.g.: You can sell a rock for 5000€ or 30 rock parts for 30€ each. If you craft the rock with the rock parts, your profit will be 5000-900, right? (:</p> <p>Here is the <a href="http://pastebin.com/Qitb7hgp" rel="nofollow">code</a>.</p> <p>What I want it to do is the following. The user enters a product, the program compares the price of the product if sold and the price of the components to craft the product, and shows you the profit of doing so.</p> <p>I have the following questions about it:</p> <ol> <li>Is there a better way to save the data to use it afterwards? (lines 1-16)</li> <li><p>Is there any way to compare all variables I will create (line 18), or do I have to create an if for every product (lines 22-24)? What I mean is something like</p> <pre><code>profit = products[input] - input_recipe
print profit
</code></pre></li> </ol> <p>Since I want to check a lot of recipes, it would be a pain in the ass to do this one by one, so I hope there's a better way to do it.</p>
1
2016-08-19T13:48:22Z
39,041,414
<p>How you save the data and access it will really depend on how you want to handle your calculator. I would say the best way to handle this would be if there were an excel file or JSON file or something of the sort that is all inclusive of all materials and items of the game (you may have to be the one to make this or someone else may already have). In the event you have to put the list together yourself, it could be a long process and very annoying, so try to find a list somewhere you can download then open the file and parse the data as needed. You could put all the data in the code itself but that doesn't allow you to write code against the data with say a different language if you so desired.</p> <p>As far as loops are concerned, I'm not sure what you mean by that? You have dictionaries for your data so there's no need to loop over every value right? Now if you are referring to taking in multiple user inputs, a loop wouldn't be a bad idea for command line:</p> <pre><code>continue_calculations = 'y'
while continue_calculations != 'n':
    # Do your logic here.
    continue_calculations = raw_input('Would you like to continue(y/n)?')
</code></pre> <p>Of course if you are making a calculator you could look into GUI development, or web development if you want to make it into a site. PyQT is a handy little module to work in and there are some good tutorials for that: <a href="https://pythonprogramming.net/basic-gui-pyqt-tutorial/" rel="nofollow">https://pythonprogramming.net/basic-gui-pyqt-tutorial/</a></p> <p>Cheers,</p>
1
2016-08-19T14:22:44Z
[ "python", "calculator" ]
Can python compare user input with variables in program?
39,040,699
<p>This is my first post here; hope it is easily readable and my question properly asked! (:</p> <p>First of all, to put you a bit into perspective, I wanted to create an efficiency calculator for the brand new game No Man's Sky. The economy in that game is pretty much the same as in real life. You can sell items at a much higher price than the components needed to create them. E.g.: You can sell a rock for 5000€ or 30 rock parts for 30€ each. If you craft the rock with the rock parts, your profit will be 5000-900, right? (:</p> <p>Here is the <a href="http://pastebin.com/Qitb7hgp" rel="nofollow">code</a>.</p> <p>What I want it to do is the following. The user enters a product, the program compares the price of the product if sold and the price of the components to craft the product, and shows you the profit of doing so.</p> <p>I have the following questions about it:</p> <ol> <li>Is there a better way to save the data to use it afterwards? (lines 1-16)</li> <li><p>Is there any way to compare all variables I will create (line 18), or do I have to create an if for every product (lines 22-24)? What I mean is something like</p> <pre><code>profit = products[input] - input_recipe
print profit
</code></pre></li> </ol> <p>Since I want to check a lot of recipes, it would be a pain in the ass to do this one by one, so I hope there's a better way to do it.</p>
1
2016-08-19T13:48:22Z
39,041,640
<p>About your first question: another way would be to use a <strong>json</strong>-style nested structure to store your data instead of three separate dictionaries, I mean something like:</p> <pre><code>data = {"elements":{"Th":20.6,"Pu":41.3},"alloys":{"Aronium":1546.9,"Herox":2877.5},"Products":{"Antimatter":5232,"Warp Cell":46750}}
</code></pre> <p>You could then look up, for example, the "Th" price by writing:</p> <pre><code>th_price = data['elements']['Th']
</code></pre> <p>As for your second question, you could create a fourth dictionary that contains the prices of all the possible recipes, which of course you have predefined - that way you don't compute them every time you need them, and they are available for fast lookup. So you would write something like:</p> <pre><code>profit = products[input] - input_recipe[input]
print profit
</code></pre> <p>where input_recipe would be your fourth dictionary with the recipe prices.</p>
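To make that nested structure persistent between runs, the standard json module can dump it to a file and load it back. A small sketch (the file name "prices.json" is arbitrary):

```python
import json

# The price data from the answer above, stored as one nested structure.
data = {"elements": {"Th": 20.6, "Pu": 41.3},
        "alloys": {"Aronium": 1546.9, "Herox": 2877.5},
        "Products": {"Antimatter": 5232, "Warp Cell": 46750}}

# Save to disk once...
with open("prices.json", "w") as f:
    json.dump(data, f)

# ...and load it back in a later run.
with open("prices.json") as f:
    loaded = json.load(f)

th_price = loaded["elements"]["Th"]
print(th_price)  # 20.6
```

This keeps the data out of the code itself, so it can be edited or regenerated without touching the program.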
0
2016-08-19T14:32:11Z
[ "python", "calculator" ]
pyQt and QTextEdit: Why are some unicode characters are shown, others not?
39,040,732
<p>I try to display unicode text in a QTextEdit in pyQt. </p> <p>It's Python 2.7 and PyQt4 on Mac OSX El Capitan.</p> <p>I read through some Q&amp;A about python, QString and unicode and came up with the following running example. </p> <p>When run, it prints two unicode strings to the terminal and also shows them in a QTextEdit in its Main Window. The first string is ok (I copied it from a Q&amp;A here on stackoverflow, actually I have no idea what it means in English...). I see all characters displayed correctly in my terminal as well as in the QTextEdit. </p> <p>However, the emoticons of the second string are missing in the QTextEdit, although they are printed correctly in the terminal. In the QTextEdit there are two blanks in between the '---'. When I copy the blanks in the QTextEdit and paste them in a terminal, I see the emoticons. So it seems that the content is there, but not the graphical representation.</p> <p>I set the font family to Monaco, as this is the font in my text terminal as well as in Eclipse, which I use for developing. Eclipse shows the emoticons correctly in its editor as well.</p> <p>So I assumed that the Monaco font family would support the emoticons.</p> <p>What did I do wrong?</p> <p>Thanks for any help</p> <p>Armin</p> <p>Running example: Sorry for the length, this was copied in bits and pieces from existing code and a pyuic generated ui-class...</p> <pre><code># -*- coding: utf-8 -*-
'''
'''
# Importing the necessary Qt classes.
import sys
import re
import sip
import time
from PyQt4 import QtCore
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4 import QtGui

try:
    _fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
    def _fromUtf8(s):
        return s

try:
    _encoding = QtGui.QApplication.UnicodeUTF8
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig)

class Ui_Dialog(object):
    def setupUi(self, Dialog):
        Dialog.setObjectName(_fromUtf8("Dialog"))
        Dialog.resize(550, 350)
        self.ExitButton = QtGui.QPushButton(Dialog)
        self.ExitButton.setGeometry(QtCore.QRect(420, 310, 100, 35))
        self.ExitButton.setObjectName(_fromUtf8("ExitButton"))
        self.logView = QtGui.QTextEdit(Dialog)
        self.logView.setGeometry(QtCore.QRect(20, 30, 500, 280))
        self.logView.setReadOnly(False)
        self.logView.setObjectName(_fromUtf8("logView"))
        self.retranslateUi(Dialog)
        QtCore.QMetaObject.connectSlotsByName(Dialog)

    def retranslateUi(self, Dialog):
        Dialog.setWindowTitle(_translate("Dialog", "Dialog", None))
        self.ExitButton.setText(_translate("Dialog", "Exit", None))

class MainWindow(QMainWindow, Ui_Dialog):
    # quit
    def finish(self):
        quit()

    def __init__(self):
        QMainWindow.__init__(self)
        # set up User Interface (widgets, layout...)
        self.setupUi(self)
        # custom slots connections
        QtCore.QObject.connect(self.ExitButton, QtCore.SIGNAL("released()"), self.finish)
        self.logView.setFontFamily("Monaco")
        print("Zażółć gęślą jaźń")
        print("Xc😍😙--")
        t = QString.fromUtf8("---Zażółć gęślą jaźń---")
        self.logView.append(t)
        t = QString.fromUtf8("---😍😙---")
        self.logView.append(t)
        print("family is " + self.logView.fontFamily())
        self.logView.append("family is " + self.logView.fontFamily())

# Main entry to program. Sets up the main app and create a new window.
def main(argv):
    # create Qt application
    app = QApplication(argv, True)
    # create main window
    wnd = MainWindow()  # classname
    wnd.show()
    # Connect signal for app finish
    app.connect(app, QtCore.SIGNAL("lastWindowClosed()"), app, QtCore.SLOT("quit()"))
    # Start the app up
    sys.exit(app.exec_())

if __name__ == "__main__":
    main(sys.argv)
</code></pre>
0
2016-08-19T13:49:33Z
39,045,000
<p>What is the output of <code>sys.maxunicode</code> for the python you are using? If it's 65535 (rather than 1114111), you are using a <em>narrow</em> build of python, which does not support characters outside the <a href="http://en.wikipedia.org/wiki/Plane_(Unicode)#Basic_Multilingual_Plane" rel="nofollow">BMP</a>.</p> <p>The unicode code-point of "😍" is 128525, and that of "😙" is 128537, both of which are beyond 65535. In a narrow build, these will be represented as surrogate pairs, which presumably Qt does not know how to render.</p> <p>Since <a href="https://www.python.org/dev/peps/pep-0261/" rel="nofollow">PEP-261</a>, it is possible to compile a wide build of python (by using the <code>--enable-unicode=ucs4</code> option) which has support for characters beyond the BMP. (For python >= 3.3, only wide builds are possible.)</p>
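A quick way to check which kind of build you have, and to confirm that the two emoji really are outside the BMP:

```python
import sys

# 0xFFFF (65535) indicates a narrow build; 0x10FFFF (1114111) a wide
# build, or any Python >= 3.3.
print(sys.maxunicode)

# Both emoji code points lie beyond the Basic Multilingual Plane.
print(ord(u"\U0001F60D"))  # 128525 (smiling face with heart-shaped eyes)
print(ord(u"\U0001F619"))  # 128537 (kissing face with smiling eyes)
```

On a narrow build, the `ord()` calls would themselves fail, because each emoji is stored as a two-character surrogate pair.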
1
2016-08-19T17:46:57Z
[ "python", "unicode", "pyqt", "emoji", "qtextedit" ]
JSON ValueError in CMS, tornado
39,040,796
<p>I'm trying to run Contest Management System (<a href="http://cms.readthedocs.io/en/v1.2/index.html" rel="nofollow">docs</a>) in Ubuntu 16.04. CMS runs quite OK, but sometimes it returns an error, like this in short:</p> <pre><code>ValueError: No JSON object could be decoded </code></pre> <p>I'm afraid of breaking things by editing the code in the packages, so please avoid such methods.</p> <p>Here's the error message in the cmsAdminWebServer:</p> <pre><code>2016/08/19 22:39:55 - ERROR [AdminWebServer,0] Cannot decode score type parameters. ValueError('No JSON object could be decoded',). 2016/08/19 22:39:55 - ERROR [AdminWebServer,0] Uncaught exception GET /user/3 (127.0.0.1) HTTPServerRequest(protocol='http', host='127.0.0.1:8889', method='GET', uri='/user/3', version='HTTP/1.1', remote_ip='127.0.0.1', headers={'Accept-Language': 'en-US,en;q=0.5', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'keep-alive', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Upgrade-Insecure-Requests': '1', 'Host': '127.0.0.1:8889', 'Cookie': 'unread_count="2|1:0|10:1471613726|12:unread_count|4:MA==|cb62dfe311f2b0bb4378ad4ba752764da4e2f88314a3aa76969aeec5ba764733"; login="2|1:0|10:1471612630|5:login|68:KFZ5aHVucm9oCnAwClZxd2VydHl1aQpwMQpGMTQ3MTYxMjYzMC44MTA1NzgKdHAyCi4=|a94f16138a54068f9eccc9e2950e7e079ff0523475d8c7f3dde9a2cc0048870b"', 'Referer': 'http://127.0.0.1:8889/userlist/2', 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0'}) Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1332, in _execute result = method(*self.path_args, **self.path_kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/AdminWebServer.py", line 1687, in get self.render("user.html", **self.r_params) File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 665, in render html = self.render_string(template_name, **kwargs) File
"/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 772, in render_string return t.generate(**namespace) File "/usr/local/lib/python2.7/dist-packages/tornado/template.py", line 278, in generate return execute() File "user_html.generated.py", line 375, in _tt_execute score_type = get_score_type(dataset=dataset) # user.html:62 (via base.html:129) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/grading/scoretypes/__init__.py", line 77, in get_score_type parameters = json.loads(parameters) File "/usr/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded 2016/08/19 22:39:55 - ERROR [AdminWebServer,0] Uncaught exception (ValueError('No JSON object could be decoded',)) while processing a request: Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1332, in _execute result = method(*self.path_args, **self.path_kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/AdminWebServer.py", line 1687, in get self.render("user.html", **self.r_params) File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 665, in render html = self.render_string(template_name, **kwargs) File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 772, in render_string return t.generate(**namespace) File "/usr/local/lib/python2.7/dist-packages/tornado/template.py", line 278, in generate return execute() File "user_html.generated.py", line 375, in _tt_execute score_type = get_score_type(dataset=dataset) # user.html:62 (via base.html:129) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/grading/scoretypes/__init__.py", line 77, in get_score_type 
parameters = json.loads(parameters) File "/usr/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded 2016/08/19 22:39:55 - ERROR [AdminWebServer,0] 500 GET /user/3 (127.0.0.1) 35.83ms 127.0.0.1 - - [2016-08-19 22:39:55] "GET /user/3 HTTP/1.1" 500 4235 0.037082 </code></pre> <p>It occurs when I try to see any user's profile</p> <p>In the cmsContestWebServer :</p> <pre><code>2016/08/19 22:45:24 - ERROR [ContestWebServer,0] Cannot decode score type parameters. ValueError('No JSON object could be decoded',). </code></pre> <p>and </p> <pre><code>2016/08/19 22:45:26 - ERROR [ContestWebServer,0] Cannot decode score type parameters. ValueError('No JSON object could be decoded',). 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] Uncaught exception GET /tasks/Hello/submissions/3 (127.0.0.1) HTTPServerRequest(protocol='http', host='127.0.0.1:8888', method='GET', uri='/tasks/Hello/submissions/3', version='HTTP/1.1', remote_ip='127.0.0.1', headers={'Accept-Language': 'en-US,en;q=0.5', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'keep-alive', 'Accept': '*/*', 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0', 'Host': '127.0.0.1:8888', 'X-Requested-With': 'XMLHttpRequest', 'Referer': 'http://127.0.0.1:8888/tasks/Hello/submissions', 'Cookie': 'unread_count="2|1:0|10:1471614324|12:unread_count|4:MA==|29d9493cc5372707a208c1eca6f791069159c152d0afaf5faeba8ed098391a04"; login="2|1:0|10:1471614324|5:login|68:KFZ5aHVucm9oCnAwClZxd2VydHl1aQpwMQpGMTQ3MTYxNDMyNC4yODA5NTgKdHAyCi4=|a2ea45df8688435985f9e1d5dacc23d4c37ae7ff7bf992ea0cb32e5e2f31f530"'}) Traceback (most recent call last): File 
"/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1332, in _execute result = method(*self.path_args, **self.path_kwargs) File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 2601, in wrapper return method(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/__init__.py", line 195, in wrapped return func(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/ContestWebServer.py", line 1283, in get score_type = get_score_type(dataset=task.active_dataset) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/grading/scoretypes/__init__.py", line 77, in get_score_type parameters = json.loads(parameters) File "/usr/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] Uncaught exception (ValueError('No JSON object could be decoded',)) while processing a request: Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1332, in _execute result = method(*self.path_args, **self.path_kwargs) File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 2601, in wrapper return method(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/__init__.py", line 195, in wrapped return func(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/ContestWebServer.py", line 1283, in get score_type = get_score_type(dataset=task.active_dataset) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/grading/scoretypes/__init__.py", line 77, in get_score_type 
parameters = json.loads(parameters) File "/usr/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] 500 GET /tasks/Hello/submissions/3 (127.0.0.1) 35.07ms 127.0.0.1 - - [2016-08-19 22:45:26] "GET /tasks/Hello/submissions/3 HTTP/1.1" 500 4775 0.035801 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] Cannot decode score type parameters. ValueError('No JSON object could be decoded',). 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] Uncaught exception GET /tasks/Hello/submissions/1 (127.0.0.1) HTTPServerRequest(protocol='http', host='127.0.0.1:8888', method='GET', uri='/tasks/Hello/submissions/1', version='HTTP/1.1', remote_ip='127.0.0.1', headers={'Accept-Language': 'en-US,en;q=0.5', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'keep-alive', 'Accept': '*/*', 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0', 'Host': '127.0.0.1:8888', 'X-Requested-With': 'XMLHttpRequest', 'Referer': 'http://127.0.0.1:8888/tasks/Hello/submissions', 'Cookie': 'unread_count="2|1:0|10:1471614324|12:unread_count|4:MA==|29d9493cc5372707a208c1eca6f791069159c152d0afaf5faeba8ed098391a04"; login="2|1:0|10:1471614324|5:login|68:KFZ5aHVucm9oCnAwClZxd2VydHl1aQpwMQpGMTQ3MTYxNDMyNC4yODA5NTgKdHAyCi4=|a2ea45df8688435985f9e1d5dacc23d4c37ae7ff7bf992ea0cb32e5e2f31f530"'}) Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1332, in _execute result = method(*self.path_args, **self.path_kwargs) File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 2601, in wrapper return method(self, *args, **kwargs) File 
"/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/__init__.py", line 195, in wrapped return func(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/ContestWebServer.py", line 1283, in get score_type = get_score_type(dataset=task.active_dataset) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/grading/scoretypes/__init__.py", line 77, in get_score_type parameters = json.loads(parameters) File "/usr/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] Uncaught exception (ValueError('No JSON object could be decoded',)) while processing a request: Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1332, in _execute result = method(*self.path_args, **self.path_kwargs) File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 2601, in wrapper return method(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/__init__.py", line 195, in wrapped return func(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/ContestWebServer.py", line 1283, in get score_type = get_score_type(dataset=task.active_dataset) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/grading/scoretypes/__init__.py", line 77, in get_score_type parameters = json.loads(parameters) File "/usr/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File 
"/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] 500 GET /tasks/Hello/submissions/1 (127.0.0.1) 56.84ms 127.0.0.1 - - [2016-08-19 22:45:26] "GET /tasks/Hello/submissions/1 HTTP/1.1" 500 4775 0.057471 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] Cannot decode score type parameters. ValueError('No JSON object could be decoded',). 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] Uncaught exception GET /tasks/Hello/submissions/2 (127.0.0.1) HTTPServerRequest(protocol='http', host='127.0.0.1:8888', method='GET', uri='/tasks/Hello/submissions/2', version='HTTP/1.1', remote_ip='127.0.0.1', headers={'Accept-Language': 'en-US,en;q=0.5', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'keep-alive', 'Accept': '*/*', 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0', 'Host': '127.0.0.1:8888', 'X-Requested-With': 'XMLHttpRequest', 'Referer': 'http://127.0.0.1:8888/tasks/Hello/submissions', 'Cookie': 'unread_count="2|1:0|10:1471614324|12:unread_count|4:MA==|29d9493cc5372707a208c1eca6f791069159c152d0afaf5faeba8ed098391a04"; login="2|1:0|10:1471614324|5:login|68:KFZ5aHVucm9oCnAwClZxd2VydHl1aQpwMQpGMTQ3MTYxNDMyNC4yODA5NTgKdHAyCi4=|a2ea45df8688435985f9e1d5dacc23d4c37ae7ff7bf992ea0cb32e5e2f31f530"'}) Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1332, in _execute result = method(*self.path_args, **self.path_kwargs) File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 2601, in wrapper return method(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/__init__.py", line 195, in wrapped return func(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/ContestWebServer.py", line 1283, in get score_type = 
get_score_type(dataset=task.active_dataset) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/grading/scoretypes/__init__.py", line 77, in get_score_type parameters = json.loads(parameters) File "/usr/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] Uncaught exception (ValueError('No JSON object could be decoded',)) while processing a request: Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1332, in _execute result = method(*self.path_args, **self.path_kwargs) File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 2601, in wrapper return method(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/__init__.py", line 195, in wrapped return func(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/server/ContestWebServer.py", line 1283, in get score_type = get_score_type(dataset=task.active_dataset) File "/usr/local/lib/python2.7/dist-packages/cms-1.2.0-py2.7.egg/cms/grading/scoretypes/__init__.py", line 77, in get_score_type parameters = json.loads(parameters) File "/usr/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded 2016/08/19 22:45:26 - ERROR [ContestWebServer,0] 500 GET /tasks/Hello/submissions/2 (127.0.0.1) 63.60ms 127.0.0.1 - - 
[2016-08-19 22:45:26] "GET /tasks/Hello/submissions/2 HTTP/1.1" 500 4775 0.064078 </code></pre> <p>When I try to see my submissions as a user, this happens and the score doesn't show up.</p> <p>Here are the versions:</p> <pre><code>$ pip list
backport-ipaddress (0.1)
backports-abc (0.4)
backports.ssl-match-hostname (3.5.0.1)
certifi (2016.8.8)
cms (1.2.0)
coverage (4.2)
Django (1.10)
djrill (2.1.0)
funcsigs (1.0.2)
gevent (1.1.2)
greenlet (0.4.10)
mechanize (0.2.5)
mock (2.0.0)
netifaces (0.10.4)
patool (1.12)
pbr (1.10.0)
pip (8.1.2)
psutil (0.6.1)
pyasn (1.5.0b7)
pyasn1 (0.1.9)
pycrypto (2.6.1)
pytz (2014.10)
PyYAML (3.10)
requests (2.11.0)
setuptools (25.2.0)
singledispatch (3.4.0.3)
six (1.10.0)
SQLAlchemy (0.6.0)
tornado (4.0)
Werkzeug (0.11.10)
wheel (0.29.0)
</code></pre> <p>I installed with this (from the docs):</p> <pre><code>sudo apt-get install build-essential fpc postgresql postgresql-client \
    gettext python2.7 python-setuptools python-tornado python-psycopg2 \
    python-sqlalchemy python-psutil python-netifaces python-crypto \
    python-tz python-six iso-codes shared-mime-info stl-manual \
    python-beautifulsoup python-mechanize python-coverage python-mock \
    cgroup-lite python-requests python-werkzeug python-gevent patool
</code></pre> <p>If you need it, here are the requirements:</p> <pre><code>setuptools&gt;=0.6
tornado&gt;=2.0
psycopg2&gt;=2.4
sqlalchemy&gt;=0.7
netifaces&gt;=0.5
pycrypto&gt;=2.3
pyyaml&gt;=3.10
pytz&gt;=2011k
psutil&gt;=0.6
BeautifulSoup&gt;=3.2
coverage&gt;=3.4
mock&gt;=1.0
mechanize&gt;=0.2
six&gt;=1.1
requests&gt;=1.1
gevent&gt;=1.0
werkzeug&gt;=0.8
pycups&gt;=1.9
PyPDF2&gt;=1.19
patool&gt;=1.7
</code></pre> <p>I already downgraded tornado from 4.2.1 to 4.0, and psutil to 0.6.1, following <a href="http://stackoverflow.com/questions/31216835/python-psutil-psutil-get-process-list-error">this</a>.</p> <p>If you need any more information, or if I'm asking the wrong thing, please let me know. Thanks!</p>
0
2016-08-19T13:52:24Z
39,192,636
<p>Sorry, it was my mistake. For whoever has the same error (predictably nobody): I had filled in my task's scoring form improperly, so the JSON parameters couldn't be decoded. Closing!</p>
0
2016-08-28T15:05:22Z
[ "python", "json", "content-management-system" ]
TypeError: 'str' object is not callable in python 2.7
39,040,923
<pre><code>while True:
    op_list = []
    for op in client.longPoll():
        op_list.append(op)
    for op in op_list:
        sender = op[0]
        receiver = op[1]
        message = op[2]
        msg = message.text
        if msg("help"):
            receiver.sendMessage('why?')
</code></pre> <p>The line</p> <pre><code>if msg("help"):
</code></pre> <p>gives me "TypeError: 'str' object is not callable". Can anyone help me?</p>
-3
2016-08-19T13:58:19Z
39,041,206
<p>The problem lies on line 12. <code>msg</code> is a string, so obviously it cannot be called. You almost certainly want an equality test if you want to compare two strings.</p> <pre><code>if msg == "help":
    receiver.sendMessage('why?')
</code></pre>
0
2016-08-19T14:12:24Z
[ "python", "string" ]
Fast counts of elements of numpy array by value thresholds in another array
39,040,924
<p>Given a <code>numpy</code> array of threshold values, what is the most efficient way to produce an array of the counts of another array meeting these values? </p> <p>Assume the threshold value array is small and sorted, and the array of values to be counted is large-ish and unsorted.</p> <p><strong>Example:</strong> for each element of <code>valueLevels</code>, count the elements of <code>values</code> greater than or equal to it:</p> <pre><code>import numpy as np

n = int(1e5)  # size of example

# example levels: the sequence 0, 1., 2.5, 5., 7.5, 10, 25, ... 50000, 75000
valueLevels = np.concatenate(
    [np.array([0.]),
     np.concatenate([[x * 10**y for x in [1., 2.5, 5., 7.5]]
                     for y in range(5)])])

np.random.seed(123)
values = np.random.uniform(low=0, high=1e5, size=n)
</code></pre> <p><strong>So far I have tried</strong> the list comprehension approach. </p> <ul> <li><code>np.array([sum(values&gt;=x) for x in valueLevels])</code> was unacceptably slow</li> <li><code>np.array([len(values[values&gt;=x]) for x in valueLevels])</code> was an improvement</li> <li>sorting <code>values</code> did speed up the comprehension (in the example, from ~7 to 0.5 ms), but the cost of the sort (~8 ms) exceeded the savings for one-time use</li> </ul> <p>The best I have right now is a comprehension of <a href="http://stackoverflow.com/a/8364723/2573061">this approach</a>:</p> <pre><code>%%timeit
np.array([np.count_nonzero(values&gt;=x) for x in valueLevels])
# 1000 loops, best of 3: 1.26 ms per loop
</code></pre> <p>which is acceptable for my purposes, but out of curiosity,</p> <p><strong>What I would like to know</strong> is </p> <ul> <li>If list comprehension is the way to go, can it be sped up? Or,</li> <li>Are other approaches faster? (I have a vague sense that this could be done by broadcasting the values array over the thresholds array, but I can't figure out how to get the dimensions right for <code>np.broadcast_arrays()</code>.)</li> </ul>
3
2016-08-19T13:58:23Z
39,041,130
<p><strong>Approach #1</strong> Using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html" rel="nofollow"><code>np.searchsorted</code></a> -</p> <pre><code>values.size - np.searchsorted(values,valueLevels,sorter=values.argsort()) </code></pre> <p><strong>Approach #2</strong> Using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> -</p> <pre><code>(values[:,None]&gt;=valueLevels).sum(0) </code></pre>
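Both approaches in this answer can be sanity-checked against a naive per-threshold count — a standalone sketch with reduced data sizes, not part of the original answer:

```python
import numpy as np

np.random.seed(123)
values = np.random.uniform(low=0, high=1e5, size=1000)
valueLevels = np.array([0., 1., 2.5, 5., 7.5, 10., 25000., 75000.])

# naive reference: one full pass over values per threshold
ref = np.array([np.count_nonzero(values >= x) for x in valueLevels])

# approach 1: searchsorted (side='left') gives the count of elements
# strictly below each level, so n minus that is the count >= level
counts1 = values.size - np.searchsorted(values, valueLevels,
                                        sorter=values.argsort())

# approach 2: broadcast values against the thresholds, sum the booleans
counts2 = (values[:, None] >= valueLevels).sum(0)

print(np.array_equal(ref, counts1), np.array_equal(ref, counts2))  # True True
```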
2
2016-08-19T14:07:50Z
[ "python", "numpy", "cumulative-frequency" ]
Fast counts of elements of numpy array by value thresholds in another array
39,040,924
<p>Given a <code>numpy</code> array of threshold values, what is the most efficient way to produce an array of the counts of another array meeting these values? </p> <p>Assume the threshold value array is small and sorted, and the array of values to be counted is large-ish and unsorted.</p> <p><strong>Example:</strong> for each element of <code>valueLevels</code>, count the elements of <code>values</code> greater than or equal to it:</p> <pre><code>import numpy as np n = int(1e5) # size of example # example levels: the sequence 0, 1., 2.5, 5., 7.5, 10, 25, ... 50000, 75000 valueLevels = np.concatenate( [np.array([0.]), np.concatenate([ [ x*10**y for x in [1., 2.5, 5., 7.5] ] for y in range(5) ] ) ] ) np.random.seed(123) values = np.random.uniform(low=0, high=1e5, size=n) </code></pre> <p><strong>So far I have tried</strong> the list comprehension approach. </p> <ul> <li><code>np.array([sum(values&gt;=x) for x in valueLevels])</code> was unacceptably slow</li> <li><code>np.array([len(values[values&gt;=x]) for x in valueLevels])</code> was an improvement</li> <li>sorting <code>values</code> did speed up the comprehension (in the example, from ~7 to 0.5 ms), but the cost of sort (~8 ms) exceeded the savings for one-time use</li> </ul> <p>The best I have right now is a comprehension of <a href="http://stackoverflow.com/a/8364723/2573061">this approach</a>:</p> <pre><code>%%timeit np.array([np.count_nonzero(values&gt;=x) for x in valueLevels]) # 1000 loops, best of 3: 1.26 ms per loop </code></pre> <p>which is acceptable for my purposes, but out of curiosity,</p> <p><strong>What I would like to know</strong> is </p> <ul> <li>If list comprehension is the way to go, can it be sped up? Or,</li> <li>Are other approaches faster? (I have a vague sense that this could be done by broadcasting the values array over the thresholds array, but I can't figure out how to get the dimensions right for <code>np.broadcast_arrays()</code>.)</li> </ul>
3
2016-08-19T13:58:23Z
39,041,202
<p>The fastest I have so far is</p> <pre><code>%timeit count_nonzero(values &gt;= atleast_2d(valueLevels).T, axis=1) # 1000 loops, best of 3: 860 µs per loop </code></pre> <p><code>sum</code> is slower:</p> <pre><code>%timeit sum(values &gt;= atleast_2d(valueLevels).T, axis=1) # 100 loops, best of 3: 2.5 ms per loop </code></pre> <p>@Divakar's version is even slower:</p> <pre><code>%timeit count_nonzero(values[:, None] &gt;= valueLevels, axis=1) # 100 loops, best of 3: 3.86 ms per loop </code></pre> <p>However, I would probably still use your list comprehension, which is not much slower and does not create a big 2D boolean array as an intermediate step:</p> <pre><code>%timeit np.array([np.count_nonzero(values&gt;=x) for x in valueLevels]) # 1000 loops, best of 3: 987 µs per loop </code></pre>
4
2016-08-19T14:12:04Z
[ "python", "numpy", "cumulative-frequency" ]
remove special escape python
39,040,979
<p>I have the string</p> <pre><code>a = 'ddd\ttt\nnn' </code></pre> <p>I want to remove the '\' from the string, so that it becomes</p> <pre><code>a = 'dddtttnnn' </code></pre> <p>How can I do that in Python, since '\t' and '\n' have special meaning in Python?</p>
1
2016-08-19T14:01:32Z
39,041,476
<p>Assuming you want to remove <code>\t</code> and <code>\n</code> type characters (with those representing <code>tab</code> and <code>newline</code> in this case and remove the meaning of <code>\</code> in the string in general) you can do:</p> <pre><code>&gt;&gt;&gt; a = 'ddd\ttt\nnn' &gt;&gt;&gt; print a ddd tt nn &gt;&gt;&gt; repr(a)[1:-1].replace('\\','') 'dddtttnnn' &gt;&gt;&gt; print repr(a)[1:-1].replace('\\','') dddtttnnn </code></pre> <p>If it is a raw string (i.e., the <code>\</code> is not interpolated to a single character), you do not need the <code>repr</code>:</p> <pre><code>&gt;&gt;&gt; a = r'ddd\ttt\nnn' &gt;&gt;&gt; a.replace('\\','') 'dddtttnnn' </code></pre>
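The same repr trick carries over to Python 3, where <code>print</code> is a function — a minimal standalone sketch:

```python
# \t and \n below are single control characters (tab, newline)
a = 'ddd\ttt\nnn'

# repr() re-escapes the control characters, [1:-1] strips the quotes,
# and the replace drops the now-literal backslashes
cleaned = repr(a)[1:-1].replace('\\', '')
print(cleaned)  # dddtttnnn

# with a raw string, the backslashes are already literal characters
b = r'ddd\ttt\nnn'
print(b.replace('\\', ''))  # dddtttnnn
```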
2
2016-08-19T14:25:04Z
[ "python", "string", "escaping" ]
Slack API: Do Something when button is clicked
39,040,993
<p>I am using Python and its <code>Slacker</code> API to post messages to a slack channel, and it's posting the messages nicely.</p> <p>Now what I want to do is create a button that says <em>More Info</em>, and when it's clicked, I want to show a list of items. But when the button is clicked, <code>slackbot</code> says <code>oh no, something went wrong, Please try that again</code></p> <p>Here is an example: <a href="https://api.slack.com/docs/messages/builder?msg=%7B%22text%22%3A%22Would%20you%20like%20to%20play%20a%20game%3F%22%2C%22attachments%22%3A%5B%7B%22text%22%3A%22Choose%20a%20game%20to%20play%22%2C%22fallback%22%3A%22You%20are%20unable%20to%20choose%20a%20game%22%2C%22callback_id%22%3A%22wopr_game%22%2C%22color%22%3A%22%233AA3E3%22%2C%22attachment_type%22%3A%22default%22%2C%22actions%22%3A%5B%7B%22name%22%3A%22chess%22%2C%22text%22%3A%22Chess%22%2C%22type%22%3A%22button%22%2C%22value%22%3A%22chess%22%7D%2C%7B%22name%22%3A%22maze%22%2C%22text%22%3A%22Falken%27s%20Maze%22%2C%22type%22%3A%22button%22%2C%22value%22%3A%22maze%22%7D%2C%7B%22name%22%3A%22war%22%2C%22text%22%3A%22Thermonuclear%20War%22%2C%22style%22%3A%22danger%22%2C%22type%22%3A%22button%22%2C%22value%22%3A%22war%22%2C%22confirm%22%3A%7B%22title%22%3A%22Are%20you%20sure%3F%22%2C%22text%22%3A%22Wouldn%27t%20you%20prefer%20a%20good%20game%20of%20chess%3F%22%2C%22ok_text%22%3A%22Yes%22%2C%22dismiss_text%22%3A%22No%22%7D%7D%5D%7D%5D%7D" rel="nofollow">link</a></p> <p>Below is my JSON and the code</p> <pre><code>msg = "&lt;!here&gt; Hello guys! " moreInfo = ['person', 'person2', 'person3'] message = [{ "title": "Lunch time has been decided", "text": "You will also be joining", "actions": [ { "name": "buttonName", "text": "More Info", "type": "button", "value": moreInfo }] }] slack.chat.post_message('#teamChannel', msg, username='my_username', attachments=message) </code></pre> <p>And this is what it looks like in Slack when I click on the <strong>More info</strong> button. 
<a href="http://i.stack.imgur.com/zppcZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/zppcZ.png" alt="enter image description here"></a></p> <p>Any help is appreciated! Thanks :)</p>
1
2016-08-19T14:02:16Z
39,392,802
<p>Do you have a button endpoint set up already? If not, you will see that error message. </p> <p>Or if you made the same mistake I did, you are using the incorrect token. This is non-obvious from the Slack docs. You can post a message with a custom integration bot token, including using attachments (i.e. interactive buttons). However, if you want to actually respond to a button press, you need to post the message with a full-fledged Slack app token (even if you don't intend on releasing your app into the wild). That means creating an endpoint for your oauth2 flow to add your app to a Slack team, and acquiring your bot token there.</p> <ol> <li>Create a Slack app if you haven't already.</li> <li>Create an oauth GET endpoint (e.g. <code>/slack/authorize</code>)</li> <li>Set your Slack app's "Redirect URI" to your auth endpoint</li> <li>Your endpoint should expect a <code>code</code> in its query parameters. Call the Slack API <code>oauth.access</code> with your app's <code>client_id</code>, <code>client_secret</code>, and that <code>code</code> to exchange for the aforementioned app token. To post your interactive message from a bot, you will need the bot token (prefixed "xoxb-").</li> </ol>
1
2016-09-08T13:52:13Z
[ "python", "slack-api", "slack" ]
Cannot seem to get read_batch_examples working alongside an Estimator
39,041,125
<p><strong>EDIT</strong>: I'm using TensorFlow version 0.10.0rc0</p> <p>I'm currently trying to get <code>tf.contrib.learn.read_batch_examples</code> working while using a TensorFlow (SKFlow/tf.contrib) Estimator, specifically the <code>LinearClassifier</code>. I create a <code>read_batch_examples</code> op feeding in a CSV file with a <code>tf.decode_csv</code> for the <code>parse_fn</code> parameter with appropriate default records. I then feed that op to my <code>input_fn</code> for fitting the Estimator, but when that's run I receive the following error:</p> <pre><code>ValueError: Tensor("centered_bias_weight:0", shape=(1,), dtype=float32_ref) must be from the same graph as Tensor("linear/linear/BiasAdd:0", shape=(?, 1), dtype=float32). </code></pre> <p>I'm confused because neither of those Tensors appears to be from the <code>read_batch_examples</code> op. The code works if I run the op beforehand and then feed the input instead as an array of values. While this workaround exists, it is unhelpful because I am working with large datasets in which I need to batch in my inputs. Currently going over <code>Estimator.fit</code> (currently equivalent to <code>Estimator.partial_fit</code>) in iterations isn't nearly as fast as being able to feed in data as it trains, so having this working is ideal. Any ideas? I'll post the non-functioning code below.</p> <pre><code>def input_fn(examples_dict): continuous_cols = {k: tf.cast(examples_dict[k], dtype=tf.float32) for k in CONTINUOUS_FEATURES} categorical_cols = { k: tf.SparseTensor( indices=[[i, 0] for i in xrange(examples_dict[k].get_shape()[0])], values=examples_dict[k], shape=[int(examples_dict[k].get_shape()[0]), 1]) for k in CATEGORICAL_FEATURES} feature_cols = dict(continuous_cols) feature_cols.update(categorical_cols) label = tf.contrib.layers.one_hot_encoding(labels=examples_dict[LABEL], num_classes=2, on_value=1, off_value=0) return feature_cols, label filenames = [...] csv_headers = [...] 
# features and label headers batch_size = 50 min_after_dequeue = int(num_examples * min_fraction_of_examples_in_queue) queue_capacity = min_after_dequeue + 3 * batch_size examples = tf.contrib.learn.read_batch_examples( filenames, batch_size=batch_size, reader=tf.TextLineReader, randomize_input=True, queue_capacity=queue_capacity, num_threads=1, read_batch_size=1, parse_fn=lambda x: tf.decode_csv(x, [tf.constant([''], dtype=tf.string) for _ in xrange(csv_headers)])) examples_dict = {} for i, header in enumerate(csv_headers): examples_dict[header] = examples[:, i] categorical_cols = [] for header in CATEGORICAL_FEATURES: categorical_cols.append(tf.contrib.layers.sparse_column_with_keys( header, keys # Keys for that particular feature, source not shown here )) continuous_cols = [] for header in CONTINUOUS_FEATURES: continuous_cols.append(tf.contrib.layers.real_valued_column(header)) feature_columns = categorical_cols + continuous_cols model = tf.contrib.learn.LinearClassifier( model_dir=model_dir, feature_columns=feature_columns, optimizer=optimizer, n_classes=num_classes) # Above code is ok up to this point model.fit(input_fn=lambda: input_fn(examples_dict), steps=200) # This line causes the error **** </code></pre> <p>Any alternatives for batching would be appreciated as well!</p>
0
2016-08-19T14:07:37Z
39,296,428
<p>I was able to figure out my mistake through the help of the great TensorFlow team! <code>read_batch_examples</code> has to be called within <code>input_fn</code>, otherwise the op has to be run beforehand as it'll be from a different graph. If someone else has this problem and I wasn't clear enough, just leave a comment.</p>
0
2016-09-02T15:53:01Z
[ "python", "python-2.7", "tensorflow" ]
SearchCursor in Arcpy Returning Tuples & Lists, Not a List
39,041,128
<p>I have a fairly simple piece of code that searches a polygon feature class and stores the data for a selection of fields in a list:</p> <pre><code>for eachSMField in smFieldList: with arcpy.da.SearchCursor(seamaskPGN, eachSMField) as cursor: for row in cursor: cfbDataList.append(row) print("### cfbDataList: ") print(cfbDataList) </code></pre> <p>The last line of code above gives the following output:</p> <pre><code>[[(4.1,)], [(4.2,)], [(4.34,)], [(4.45,)], [(4.55,)], [(4.58,)], [(4.68,)], [(4.75,)], [(4.78,)], [(4.83,)], [(4.87,)], [(4.89,)], [(4.91,)], [(4.96,)], [(5.03,)], [(5.09,)]] </code></pre> <p>While the data is accurate, I cannot figure out why the data is 1) in a tuple and 2) each tuple is in its own list, in the wider list.</p> <p>The output I'm looking for is simply the data in a list, e.g.:</p> <pre><code>[4.1, 4.2, 4.34, 4.45, ...etc] </code></pre>
-1
2016-08-19T14:07:42Z
39,045,024
<p>The output of <code>SearchCursor</code> is an iterator of tuples. You're appending each row (tuple) to your list rather than the values themselves. Change your append statement to <code>cfbDataList.append(row[0])</code> to append the value instead of the tuple.</p> <p>Another thing to check is the value of <code>eachSMField</code> you're passing into the cursor. It should be a list of fields...or, guessing at your intent, a list with one field name.</p>
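Outside of arcpy, the row-tuple behavior is easy to mimic — a minimal stand-in sketch (the cursor and its values are simulated for illustration):

```python
# stand-in for the cursor: an iterable that yields one-field row tuples,
# the same shape arcpy.da.SearchCursor produces
rows = [(4.1,), (4.2,), (4.34,), (4.45,)]

flat = []
for row in rows:
    flat.append(row[0])  # take the value out of the tuple, not the tuple

print(flat)  # [4.1, 4.2, 4.34, 4.45]
```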
0
2016-08-19T17:48:23Z
[ "python", "list", "cursor", "tuples", "arcpy" ]
How to setup pyvisa exception handler?
39,041,142
<p>I am trying to use Python 3 and pyvisa 1.8 to communicate with GPIB devices.</p> <p>But how do I distinguish between different types of exceptions?</p> <p>For example:</p> <pre><code>try: visa.ResourceManager().open_resources('COM1') except visa.VisaIOError: &lt;some code&gt; </code></pre> <p>When open fails, it generates a general VisaIOError exception, but how can I tell whether the port is busy, the port does not exist, or something else?</p> <p>Like:</p> <pre><code>try: visa.ResourceManager().open_resources('COM1') except &lt;1&gt;: # device busy except &lt;2&gt;: # device does not exist except ... </code></pre> <p>What should I write at positions &lt;1&gt; and &lt;2&gt; and so on to catch the different types of exceptions?</p> <p>Thanks</p>
2
2016-08-19T14:08:23Z
39,066,537
<p>Visa can also raise ValueError and AttributeError if you somehow give it bad data. I think it can raise IOError, though I've never seen that happen.</p> <p>But yes, it mostly raises VisaIOError. </p> <p>Some things you can do to get more information about an exception are:</p> <pre><code>_rm = visa.ResourceManager() try: _rm.open_resource('COM1') except visa.VisaIOError as e: print(e.args) print(_rm.last_status) print(_rm.visalib.last_status) </code></pre> <p>You can compare these status codes with various constants from visa.constants.StatusCode</p> <pre><code>if _rm.last_status == visa.constants.StatusCode.error_resource_busy: print("The port is busy!") </code></pre> <p>last_status and visalib.last_status sometimes give the same status code - but sometimes they don't, so you should probably check both of them.</p> <p>Note that I instantiate ResourceManager. You don't have to, but there are things you can do with an instance that you can't with the class, plus if you give it a short name it's less typing.</p>
0
2016-08-21T16:44:34Z
[ "python", "visa" ]
How to determine which python env IPython uses
39,041,147
<p>I just installed <code>anaconda</code> in order to use <code>numba</code> and I'd like to use the anaconda environment in IPython (occasionally). The issue is that I either set the anaconda install to be the default system python env and then IPython always uses the anaconda env, or I can't use anaconda's env with IPython.</p> <p>I have read the docs and IPython's help but I can't find a way to do it (I'm probably not using the correct search terms, because I'm sure this is something that can't be done).</p> <p>Specifically I'm looking for a way to start IPython like this:</p> <pre><code>ipython --use-env=/home/user/anaconda </code></pre> <p>or something like that. Maybe creating a separate IPython profile that already starts with that env option.</p>
0
2016-08-19T14:08:44Z
39,041,691
<p>With Anaconda you can install IPython into an environment. You would activate this environment and launch IPython. IPython will then use the environment it is launched from. Say I wanted a Python2 environment with Pandas, IPython and Numpy.</p> <p>This creates the environment:</p> <pre><code>conda create -n py27 python=2.7 numpy pandas ipython </code></pre> <p>This activates the environment (Linux/Mac OS X):</p> <pre><code>source activate py27 </code></pre> <p>This launches IPython using that environment:</p> <pre><code>ipython </code></pre> <p>More information on Anaconda environments can be found here, <a href="http://conda.pydata.org/docs/using/envs.html" rel="nofollow">http://conda.pydata.org/docs/using/envs.html</a>. With any Anaconda installation you can create multiple environments with different packages, package versions, and even Python 2 or Python 3 as Anaconda will figure out the version dependencies for you.</p>
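Once an environment is active, you can confirm which interpreter is actually in use — a quick check that works in IPython or plain Python:

```python
import sys

# the interpreter binary currently running, and the environment root it
# belongs to; inside an activated conda env both point into that env's
# directory (e.g. .../anaconda/envs/py27)
print(sys.executable)
print(sys.prefix)
```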
1
2016-08-19T14:34:51Z
[ "python", "ipython", "anaconda", "environment" ]
NameError: global name 'NoneType' is not defined in Spark
39,041,316
<p>I have written a UDF to replace a few specific date values in a column named "latest_travel_date" with 'NA'. However, this column also contains many null values, so I have handled this also in the UDF. (please see below)</p> <pre><code>Query: def date_cleaner(date_col): if type(date_col) == NoneType: pass else: if year(date_col) in ('1899','1900'): date_col= 'NA' else: pass return date_col date_cleaner_udf = udf(date_cleaner, DateType()) Df3= Df2.withColumn("latest_cleaned", date_cleaner_udf("latest_travel_date")) </code></pre> <p><strong>However, I am continuously getting the error: NameError: global name 'NoneType' is not defined</strong></p> <p>Can anyone please help me to resolve this?</p>
0
2016-08-19T14:17:53Z
39,041,415
<p>The problem is this line:</p> <pre><code>if type(date_col) == NoneType: </code></pre> <p>It looks like you actually want:</p> <pre><code>if date_col is None: </code></pre>
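A minimal illustration of why the original line raises NameError and why `is None` is the idiomatic fix (plain Python, no Spark required):

```python
date_col = None

# the name NoneType is not defined by default -- it has to be derived
NoneType = type(None)
print(type(date_col) == NoneType)  # True, but clumsy

# the idiomatic check
print(date_col is None)  # True
```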
2
2016-08-19T14:22:46Z
[ "python", "apache-spark", "bigdata", "pyspark", "user-defined-functions" ]
NameError: global name 'NoneType' is not defined in Spark
39,041,316
<p>I have written a UDF to replace a few specific date values in a column named "latest_travel_date" with 'NA'. However, this column also contains many null values, so I have handled this also in the UDF. (please see below)</p> <pre><code>Query: def date_cleaner(date_col): if type(date_col) == NoneType: pass else: if year(date_col) in ('1899','1900'): date_col= 'NA' else: pass return date_col date_cleaner_udf = udf(date_cleaner, DateType()) Df3= Df2.withColumn("latest_cleaned", date_cleaner_udf("latest_travel_date")) </code></pre> <p><strong>However, I am continuously getting the error: NameError: global name 'NoneType' is not defined</strong></p> <p>Can anyone please help me to resolve this?</p>
0
2016-08-19T14:17:53Z
39,041,567
<p>This issue can be solved in two ways.</p> <p>If you are trying to find the null values in your DataFrame, you should use the <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.types.NullType" rel="nofollow">NullType</a>.</p> <p>Like this:</p> <pre><code>if type(date_col) == NullType </code></pre> <p>Or you can check if <code>date_col</code> is None like this:</p> <pre><code>if date_col is None </code></pre> <p>I hope this helps.</p>
2
2016-08-19T14:28:59Z
[ "python", "apache-spark", "bigdata", "pyspark", "user-defined-functions" ]
NameError: global name 'NoneType' is not defined in Spark
39,041,316
<p>I have written a UDF to replace a few specific date values in a column named "latest_travel_date" with 'NA'. However, this column also contains many null values, so I have handled this also in the UDF. (please see below)</p> <pre><code>Query: def date_cleaner(date_col): if type(date_col) == NoneType: pass else: if year(date_col) in ('1899','1900'): date_col= 'NA' else: pass return date_col date_cleaner_udf = udf(date_cleaner, DateType()) Df3= Df2.withColumn("latest_cleaned", date_cleaner_udf("latest_travel_date")) </code></pre> <p><strong>However, I am continuously getting the error: NameError: global name 'NoneType' is not defined</strong></p> <p>Can anyone please help me to resolve this?</p>
0
2016-08-19T14:17:53Z
39,058,977
<p>As pointed out by Michael, you cannot do</p> <pre><code>if type(date_col) == NoneType: </code></pre> <p>However, changing that to <code>None</code> won't complete the task. There is another issue with</p> <pre><code>date_col= 'NA' </code></pre> <p>It is of <code>StringType</code> but you declared the return type to be <code>DateType</code>. Your <code>_jvm</code> error in the comment was complaining about this mismatch of data types. </p> <p>It seems you just want to mark <code>date_col</code> as <code>None</code> when it is <code>1899</code> or <code>1900</code>, and drop all Nulls. If so, you can do this:</p> <pre><code>def date_cleaner(date_col): if date_col: if date_col.year in (1899, 1900): return None return date_col date_cleaner_udf = udf(date_cleaner, DateType()) Df3= Df2.withColumn("latest_cleaned", date_cleaner_udf("latest_travel_date")).dropna(subset=["latest_travel_date"]) </code></pre> <p>This is because <code>DateType</code> could either take a valid datetime or Null (by default). You could do <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.dropna" rel="nofollow"><code>dropna</code></a> to "clean" your dataframe.</p>
0
2016-08-20T21:48:32Z
[ "python", "apache-spark", "bigdata", "pyspark", "user-defined-functions" ]
Fastest way to send long string from C# to Python
39,041,401
<p>I have a long string of data streaming continuously, which is generated by my C# script. I would like to use it as an input to Python Machine Learning model. </p> <p>What is the fastest &amp; easiest way to implement this transmission? </p> <p>I have found some possible options:</p> <ol> <li>Use sockets.</li> <li>Use http.</li> <li>Write data to a file from C#, and read the file from Python</li> <li><p>Data as an argument, while python is running as a process in C# or</p></li> <li><p><em>Something different?</em></p></li> </ol>
2
2016-08-19T14:22:04Z
39,041,750
<p>Provided that these processes run on the same machine, I think <a href="https://docs.python.org/3/library/mmap.html" rel="nofollow">memory mapping</a> is the way to go when handling big chunks of data between processes.</p> <p>Otherwise, TCP/UDP sockets are a good option since they avoid the overhead of an application-layer protocol such as HTTP.</p>
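On the Python (consumer) side, a memory-mapped read needs only the standard library — a sketch in which the producer role (normally the C# script) is simulated by writing a file:

```python
import mmap
import os
import tempfile

# simulate the producer (the C# side) writing the shared file
path = os.path.join(tempfile.mkdtemp(), "stream.dat")
with open(path, "wb") as f:
    f.write(b"long string of data from the producer")

# consumer side: map the file into memory instead of copying it around
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        payload = mm[:]

print(payload.decode())
```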
0
2016-08-19T14:37:39Z
[ "c#", "python", "sockets", "tcp", "data-stream" ]
Fastest way to send long string from C# to Python
39,041,401
<p>I have a long string of data streaming continuously, which is generated by my C# script. I would like to use it as an input to Python Machine Learning model. </p> <p>What is the fastest &amp; easiest way to implement this transmission? </p> <p>I have found some possible options:</p> <ol> <li>Use sockets.</li> <li>Use http.</li> <li>Write data to a file from C#, and read the file from Python</li> <li><p>Data as an argument, while python is running as a process in C# or</p></li> <li><p><em>Something different?</em></p></li> </ol>
2
2016-08-19T14:22:04Z
39,043,545
<p>I suggest option 6: use a database like SQLite.</p> <p>With this option you gain:</p> <ul> <li>Both processes run online at the same time.</li> <li>Python sets flags on these records in the database when they are processed.</li> <li>You can complete the operation later if Python goes down, and continue processing data.</li> <li>You can add results from Python back to the database.</li> <li>You have a history in the database that you may need later for post-processing.</li> </ul>
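The flag-and-resume workflow suggested here can be sketched with Python's built-in `sqlite3`; the table and column names are made up for illustration, and the producer (the C# side) is simulated:

```python
import sqlite3

# in-memory stand-in; a real setup would point both processes at a shared file
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stream ("
             "id INTEGER PRIMARY KEY, payload TEXT, processed INTEGER DEFAULT 0)")

# producer side (normally the C# script) inserts rows as data arrives
conn.execute("INSERT INTO stream (payload) VALUES (?)", ("chunk-1",))
conn.execute("INSERT INTO stream (payload) VALUES (?)", ("chunk-2",))
conn.commit()

# consumer side (the Python model) reads unprocessed rows and flags them
rows = conn.execute("SELECT id, payload FROM stream WHERE processed = 0").fetchall()
for row_id, payload in rows:
    conn.execute("UPDATE stream SET processed = 1 WHERE id = ?", (row_id,))
conn.commit()

left = conn.execute("SELECT COUNT(*) FROM stream WHERE processed = 0").fetchone()[0]
print(len(rows), left)  # 2 0
```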
0
2016-08-19T16:11:25Z
[ "c#", "python", "sockets", "tcp", "data-stream" ]
If loop continue is executing for loop again
39,041,698
<pre><code>if eqp_id: for rule in items: if rule.eqp_pricelist == True: print "id 1" print rule.id continue print "id 2 out" print rule.id #outputs: #id1 #5 #id 2 out #4 </code></pre> <p>How is it possible that <code>rule.id = 5</code> comes before <code>rule.id = 4</code>?</p> <p>This code is in product_pricelist, for the method:</p> <pre><code>def _price_rule_get_multi(self, cr, uid, pricelist, products_by_qty_by_partner, context=None): </code></pre>
0
2016-08-19T14:35:09Z
39,054,619
<p>As you noted in your comment, if <code>items</code> is acquired using: <code>items = self.pool.get('product.pricelist.item').browse(cr, uid, item_ids, context=context)</code> then <code>items</code> is a "recordset" of Odoo ORM, so you can use a <strong><code>sorted()</code></strong> function of Odoo ORM (<a href="https://www.odoo.com/documentation/8.0/reference/orm.html#openerp.models.Model.sorted" rel="nofollow">see documentation</a>):</p> <blockquote> <p><code>sorted(key=None, reverse=False)</code> Return the recordset <code>self</code> ordered by <code>key</code>.</p> <p><strong>Parameters</strong>:<br> <strong>key</strong> -- either a function of one argument that returns a comparison key for each record, or None, in which case records are ordered according to the default model's order<br> <strong>reverse</strong> -- if True, return the result in reverse order</p> </blockquote> <p>&nbsp;</p> <p>In order to apply this function to your code, change it as follows:</p> <pre><code>if eqp_id: for rule in items.sorted(key=lambda r: r.id): ## sort by id using sort()... if rule.eqp_pricelist == True: print "id 1" print rule.id continue print "id 2 out" print rule.id </code></pre> <p>&nbsp;</p> <p>EDIT:<br> I do not see your goal clearly, but also check <code>filtered()</code> if it can help you:</p> <pre><code>if eqp_id: for rule in items.sorted(key=lambda r: r.id).filtered(lambda r: r.eqp_pricelist == True): ## sort by id using sort() AND filtered using eqp_pricelist == True... print "[True?] Rule with eqp_pricelist == %s" % rule.eqp_pricelist print rule.id </code></pre> <p>OR: </p> <pre><code>if eqp_id: for rule in items.sorted(key=lambda r: r.id).filtered(lambda r: r.eqp_pricelist == False): ## sort by id using sort() AND filtered using eqp_pricelist == False... print "[False?] Rule with eqp_pricelist == %s" % rule.eqp_pricelist print rule.id </code></pre> <p>you can apply just <code>filtered()</code> without <code>sorted()</code> as well. 
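Outside the Odoo ORM, the same sort-then-filter shape can be mimicked with plain Python objects — the names here are illustrative stand-ins, not Odoo's API:

```python
class Rule:
    # stand-in record mimicking what browse() returns
    def __init__(self, rec_id, eqp_pricelist):
        self.id = rec_id
        self.eqp_pricelist = eqp_pricelist

items = [Rule(5, True), Rule(4, False), Rule(3, True)]

# same shape as recordset.sorted(key=...).filtered(...): order by id,
# then keep only the records whose flag is set
chosen = [r.id for r in sorted(items, key=lambda r: r.id) if r.eqp_pricelist]
print(chosen)  # [3, 5]
```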
</p> <p>Check out this: <code>for rule in items.filtered("eqp_pricelist"):</code></p>
2
2016-08-20T13:21:40Z
[ "python", "openerp", "odoo-9" ]
Create "buffer" matrix from dataframe with rolling window?
39,041,769
<p>Given a dataframe of just one column, how can I convert it into another dataframe "buffer" (of size 2), described below:</p> <p>df =</p> <pre><code> 0 0 1 1 2 2 3 3 4 4 4 5 5 6 5 </code></pre> <p>expected_buffer =</p> <pre><code> 0 1 0 1 2 1 2 3 2 3 4 3 4 5 </code></pre> <p>This is my attempt:</p> <pre><code>def buff(df,past): arr1=df.values arr=arr1[0:past] for i in xrange(past,df.shape[0]-past+2): arr=np.append(arr,arr1[i:past+i],axis=0) return pd.DataFrame(arr) </code></pre> <p>Which returns the following:</p> <pre><code> 0 0 1 1 2 2 3 3 4 4 4 5 5 6 5 </code></pre> <p>How to get the expected buff output ?</p> <p>EDIT: By <code>past</code> I mean the buffer size. Using MATLAB notations: I have 5 element column vector</p> <pre><code>df = [1;2;3;4;5] </code></pre> <p>If <code>past</code> is 2, I should end up getting the following output:</p> <pre><code>buff = [1 2; 2 3; 3 4; 4 5] </code></pre> <p>If <code>past</code> is 3, then expected output should be</p> <pre><code>buff = [1 2 3; 2 3 4; 3 4 5] </code></pre> <p>If <code>past</code> is 4, then expected output is </p> <pre><code>buff = [1 2 3 4; 2 3 4 5] </code></pre> <p>So for <code>n</code>-element <code>df</code> and <code>past=m</code>, I would get a matrix of size <code>(n-past+1)</code>x<code>past</code>.</p>
3
2016-08-19T14:38:28Z
39,042,046
<pre><code>def buff(df, past): a = np.concatenate([df.values[i:i-past] for i in range(past)], axis=1) return pd.DataFrame(a, columns=list(range(past))) </code></pre> <hr> <pre><code>buff(df, 2) </code></pre> <p><a href="http://i.stack.imgur.com/hRSu9.png" rel="nofollow"><img src="http://i.stack.imgur.com/hRSu9.png" alt="enter image description here"></a></p> <pre><code>buff(df, 3) </code></pre> <p><a href="http://i.stack.imgur.com/cYM0j.png" rel="nofollow"><img src="http://i.stack.imgur.com/cYM0j.png" alt="enter image description here"></a></p> <pre><code>buff(df, 4) </code></pre> <p><a href="http://i.stack.imgur.com/wT2sA.png" rel="nofollow"><img src="http://i.stack.imgur.com/wT2sA.png" alt="enter image description here"></a></p> <pre><code>buff(df, 5) </code></pre> <p><a href="http://i.stack.imgur.com/h9kAm.png" rel="nofollow"><img src="http://i.stack.imgur.com/h9kAm.png" alt="enter image description here"></a></p>
4
2016-08-19T14:52:39Z
[ "python", "pandas", "numpy" ]
Create "buffer" matrix from dataframe with rolling window?
39,041,769
<p>Given a dataframe of just one column, how can I convert it into another dataframe "buffer" (of size 2), described below:</p> <p>df =</p> <pre><code> 0 0 1 1 2 2 3 3 4 4 4 5 5 6 5 </code></pre> <p>expected_buffer =</p> <pre><code> 0 1 0 1 2 1 2 3 2 3 4 3 4 5 </code></pre> <p>This is my attempt:</p> <pre><code>def buff(df,past): arr1=df.values arr=arr1[0:past] for i in xrange(past,df.shape[0]-past+2): arr=np.append(arr,arr1[i:past+i],axis=0) return pd.DataFrame(arr) </code></pre> <p>Which returns the following:</p> <pre><code> 0 0 1 1 2 2 3 3 4 4 4 5 5 6 5 </code></pre> <p>How to get the expected buff output ?</p> <p>EDIT: By <code>past</code> I mean the buffer size. Using MATLAB notations: I have 5 element column vector</p> <pre><code>df = [1;2;3;4;5] </code></pre> <p>If <code>past</code> is 2, I should end up getting the following output:</p> <pre><code>buff = [1 2; 2 3; 3 4; 4 5] </code></pre> <p>If <code>past</code> is 3, then expected output should be</p> <pre><code>buff = [1 2 3; 2 3 4; 3 4 5] </code></pre> <p>If <code>past</code> is 4, then expected output is </p> <pre><code>buff = [1 2 3 4; 2 3 4 5] </code></pre> <p>So for <code>n</code>-element <code>df</code> and <code>past=m</code>, I would get a matrix of size <code>(n-past+1)</code>x<code>past</code>.</p>
3
2016-08-19T14:38:28Z
39,043,020
<pre><code>import pandas as pd def buff(s, n): return (pd.concat([s.shift(-i) for i in range(n)], axis=1) .dropna().astype(int)) s = pd.Series([1,2,3,4,5]) print(buff(s, 2)) # 0 0 # 0 1 2 # 1 2 3 # 2 3 4 # 3 4 5 print(buff(s, 3)) # 0 0 0 # 0 1 2 3 # 1 2 3 4 # 2 3 4 5 print(buff(s, 4)) # 0 0 0 0 # 0 1 2 3 4 # 1 2 3 4 5 </code></pre>
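A NumPy-only variant of the same idea, stacking the shifted windows directly into the (n - past + 1) × past matrix (a standalone sketch, not from the original answer):

```python
import numpy as np

def buff(values, past):
    # row i is the window values[i:i + past]; the result has shape
    # (n - past + 1, past), matching the expected buffer output
    n = len(values)
    return np.stack([values[i:i + past] for i in range(n - past + 1)])

print(buff(np.array([1, 2, 3, 4, 5]), 2))
```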
3
2016-08-19T15:41:11Z
[ "python", "pandas", "numpy" ]
Three variables as heatmap
39,041,865
<p>I want to plot my data as a heatmap which has the following structure:</p> <p><code>X = [1,1,1,1,1,1,1,1,1,1], Y = [1,2,3,4,5,6,7,8,9,10] Z = [0.2, 0.33, 0.1, 0.25, 0.0, 0.9, 0.75, 0.88, 0.44, 0.95]</code></p> <p>The x and y-axis shall be represented by X and Y, while the 'heat' is represented by the values of Z.</p> <p>E.g. at coordinate (x,y) = (1,2) the intensity shall be 0.33 How can this be achieved by using matplotlib? Looking at posts which relate to the keyword heatmap or even to those related to the term contour map, I could not transfer it to this problem yet.</p> <p>Thank you in advance for any hints Dan</p>
0
2016-08-19T14:42:41Z
39,042,065
<p>I hope your data is just an example because it will look funny (it's more a sequence of strips; the x-dimension is constant).</p> <p>I would recommend the usage of <a href="http://pandas.pydata.org/" rel="nofollow">pandas (general data-analysis)</a> and <a href="https://stanford.edu/~mwaskom/software/seaborn/" rel="nofollow">seaborn (matplotlib-extensions)</a> which makes it a bit nicer.</p> <h3>Code</h3> <pre><code>import pandas as pd import matplotlib.pyplot as plt import seaborn as sns X = [1,1,1,1,1,1,1,1,1,1] Y = [1,2,3,4,5,6,7,8,9,10] Z = [0.2, 0.33, 0.1, 0.25, 0.0, 0.9, 0.75, 0.88, 0.44, 0.95] data = pd.DataFrame({'X': X, 'Y': Y, 'Z': Z}) data_pivoted = data.pivot("X", "Y", "Z") ax = sns.heatmap(data_pivoted) plt.show() </code></pre> <h3>Output</h3> <p><a href="http://i.stack.imgur.com/3iplS.png" rel="nofollow"><img src="http://i.stack.imgur.com/3iplS.png" alt="enter image description here"></a></p>
3
2016-08-19T14:54:10Z
[ "python", "matplotlib", "heatmap" ]
Returning data from many rows with different values
39,041,900
<p>So I have been having a problem with my code.</p> <p>What I'm trying to do is: I have a list set up like</p> <pre><code>locationList = [['mt', 'moon', 'mt-moon'], ['misty', 'mis'], ['surge', 'sur'], etc...]
</code></pre> <p>These are inputs the people use, so instead of having to input the whole word, they just type an abbreviation.</p> <p>Now I have these stored in a DB, with the person's name, their points, the bet location, and betted points. Like this:</p> <pre><code>**Name** | **Points** | **BetLocation** | **BetPoints**
-------- | ---------- | --------------- | -------------
James    | 1000       | mis             | 100
Mike     | 3000       | misty           | 700
Dave     | 400        | mt              | 200
</code></pre> <p>Now what I'm trying to do is: when I choose a winning location, say misty, it will return both James and Mike but not Dave.</p> <pre><code>if any(winner in s for s in locationList):
    c = conn.cursor()
    search = winner
    for sublist in locationList:
        if sublist[0] == search:
            print(sublist)
            print(str(sublist)[1:-1])
            test1 = '(' + str(sublist)[1:-1] + ')'
            print(test1)
            c.execute("SELECT Name, CurrentBetValue FROM Users WHERE CurrentBetLocation IN('misty', 'mis')")
            data = c.fetchall()
            print(data)
            for f in data:
                kek = (f[1], f[0])
                c.execute('UPDATE Users SET Points = Points + (? * 2) WHERE Name = ?', kek)
                conn.commit()
            betReset()
            betsOpen = False
            return send_message('"' + str(winner) + '"' + ' is the winning location, points have been updated.')
</code></pre> <p>Each print returns:</p> <ul> <li>['misty', 'mis']</li> <li>'misty', 'mis'</li> <li>('misty', 'mis')</li> </ul> <p>Now this works due to me having manually put in <code>IN('misty', 'mis')</code>, and it returns what I want it to.</p> <p>However, if I change it to <code>c.execute("SELECT Name, CurrentBetValue FROM Users WHERE CurrentBetLocation IN ?", test1)</code></p> <blockquote> <p>c.execute("SELECT Name, CurrentBetValue FROM Users WHERE CurrentBetLocation IN ?", test1) sqlite3.OperationalError: near "?": syntax error</p> </blockquote> <p>I was wondering if anybody could help!</p>
0
2016-08-19T14:44:59Z
39,042,641
<p>Just taking a guess from "Its just a Local .db file" that you're using SQLite and the built-in <code>sqlite3</code> Python module.</p> <p>If so, I think you want this:</p> <pre><code>c.execute("SELECT Name, CurrentBetValue FROM Users WHERE CurrentBetLocation IN ({})".format(",".join("?"*len(sublist))),
          sublist)
</code></pre> <p><code>"?" * len(sublist)</code> gives you a sequence of <em>n</em> question marks for <em>n</em> elements in the sublist. Joining that with commas, you get a nice parameterized command: <code>"... IN (?,?)"</code>. Finally, you can pass in your <code>sublist</code> as those parameters.</p> <p>Here's a complete working example with a couple comments inline so you can see what's happening:</p> <pre><code>import sqlite3

conn = sqlite3.connect("test.db")
c = conn.cursor()
c.execute("create table test (name text, location text)")
c.execute("insert into test values (?, ?)", ("James", "mis"))
c.execute("insert into test values (?, ?)", ("Mike", "misty"))
c.execute("insert into test values (?, ?)", ("Dave", "mt"))

sublist = ["misty", "mis"]

# select name from test where location in (?,?)
command = "select name from test where location in ({})".format(",".join("?" * len(sublist)))

# select name from test where location in ('misty', 'mis')
c.execute(command, sublist)

print(c.fetchall())
# [('James',), ('Mike',)]
</code></pre> <p>If my assumptions were wrong, and you're using a database other than SQLite or a module other than <code>sqlite3</code>, please tell us what you're using. (For future questions: this is quite important information to include. Even better would be to provide a complete runnable example.)</p>
1
2016-08-19T15:21:33Z
[ "python", "sql" ]
Calculate difference between 2 dates in python dataframe
39,042,102
<p>I've looked through python advice and worked out how to calculate the difference between two dates, e.g. <a href="http://stackoverflow.com/questions/8419564/difference-between-two-dates">Difference between two dates?</a>. That works, but ... I'm working with variables in a dataframe. I'm sure I'm following the advice I've read but I'm getting:</p> <pre><code>TypeError: strptime() argument 1 must be str, not Series
</code></pre> <p>Here's the code:</p> <pre><code>df['DAYSDIFF'] = (datetime.datetime.strptime(df['SDATE'], "%d/%m/%Y")
                  - datetime.datetime.strptime(df['QDATE'], "%d/%m/%Y"))
</code></pre> <p>Thanks again for help!</p>
0
2016-08-19T14:55:26Z
39,042,249
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>pandas.to_datetime</code></a>:</p> <pre><code>df["SDATE"] = pd.to_datetime(df["SDATE"], format="%d/%m/%Y")
df["QDATE"] = pd.to_datetime(df["QDATE"], format="%d/%m/%Y")
df["DAYSDIFF"] = df["SDATE"] - df["QDATE"]
</code></pre> <p>This is necessary because <code>datetime.strptime</code> does not accept a pandas Series; it expects a string.</p>
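<p>A note beyond the original answer: the subtraction above yields a <code>Timedelta</code> column. If plain integer day counts are wanted instead, <code>.dt.days</code> can be chained on. A small sketch (the sample dates below are made up for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({"SDATE": ["05/03/2016", "20/07/2016"],
                   "QDATE": ["01/03/2016", "01/07/2016"]})
df["SDATE"] = pd.to_datetime(df["SDATE"], format="%d/%m/%Y")
df["QDATE"] = pd.to_datetime(df["QDATE"], format="%d/%m/%Y")

# .dt.days converts the Timedelta column to plain integers
df["DAYSDIFF"] = (df["SDATE"] - df["QDATE"]).dt.days
print(df["DAYSDIFF"].tolist())  # [4, 19]
```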
0
2016-08-19T15:02:48Z
[ "python", "datetime" ]
Operands could not be broadcast together error when two rows used
39,042,148
<p>Working with the following data:</p> <pre><code>import datetime, numpy as np, pandas as pd
nan = np.nan

a = pd.DataFrame(
    {'price': {datetime.time(9, 0): 1, datetime.time(10, 0): 0,
               datetime.time(11, 0): 3, datetime.time(12, 0): 4,
               datetime.time(13, 0): 7, datetime.time(14, 0): 6,
               datetime.time(15, 0): 5, datetime.time(16, 0): 4,
               datetime.time(17, 0): 0, datetime.time(18, 0): 2,
               datetime.time(19, 0): 4, datetime.time(20, 0): 7},
     'reversal': {datetime.time(9, 0): 1, datetime.time(10, 0): nan,
                  datetime.time(11, 0): nan, datetime.time(12, 0): nan,
                  datetime.time(13, 0): nan, datetime.time(14, 0): 6.0,
                  datetime.time(15, 0): nan, datetime.time(16, 0): nan,
                  datetime.time(17, 0): nan, datetime.time(18, 0): nan,
                  datetime.time(19, 0): nan, datetime.time(20, 0): nan}})

a['target_hit'] = nan
a['target_miss'] = nan
a['reversal1'] = a['reversal'] + 1
a['reversal2'] = a['reversal'] - a['reversal']
a.sort_index(1, inplace=True)
</code></pre> <p>I create a subset as follows:</p> <pre><code>hit = a.ix[:,:-2].dropna()
hit
</code></pre> <p>Which gives output showing two rows that match:</p> <pre><code>          price  reversal  reversal1  reversal2
09:00:00      1       1.0        2.0        0.0
14:00:00      6       6.0        7.0        0.0
</code></pre> <p>When I then try to use these rows to match against the following I get this error <code>ValueError: operands could not be broadcast together with shapes (2,) (12,)</code> </p> <pre><code>takeBoth = False
targetIsHit, targetIsMiss = False, False

if takeBoth:
    targetHit = a[(hit['reversal1'].values==a['price'].values) &amp; (hit['reversal1'].index.values&lt;a['price'].index.values)]
    targetMiss = a[(hit['reversal2'].values==a['price'].values) &amp; (hit['reversal2'].index.values&lt;a['price'].index.values)]
    targetIsHit, targetIsMiss = not targetHit.empty, not targetMiss.empty
else:
    targetHit = a[(hit['reversal1'].values==a['price'].values) &amp; (hit['reversal1'].index.values&lt;a['price'].index.values)]
    targetIsHit = not targetHit.empty
    if not targetIsHit:
        targetMiss = a[(hit['reversal2'].values==a['price'].values) &amp; (hit['reversal2'].index.values&lt;a['price'].index.values)]
        targetIsMiss = not targetMiss.empty

if targetIsHit: a.loc[hit.index.values, "target_hit"] = targetHit.index.values
if targetIsMiss: a.loc[hit.index.values, "target_miss"] = targetMiss.index.values
</code></pre> <p>I do not get this error if <code>hit = a.ix[:,:-2].dropna()</code> only produces one row. After reading about this I see that this is possibly due to the broadcasting rule.</p> <p>Should I be iterating over the rows in <code>hit</code> to avoid this? Any other suggestions on how to fix this please?</p>
1
2016-08-19T14:58:18Z
39,047,581
<p>Yup, it has to be a for loop... Now I changed it to choose whichever of hit or miss comes first.</p> <pre><code>import datetime, numpy as np, pandas as pd
nan = np.nan

a = pd.DataFrame(
    {'price': {datetime.time(9, 0): 1, datetime.time(10, 0): 0,
               datetime.time(11, 0): 3, datetime.time(12, 0): 4,
               datetime.time(13, 0): 7, datetime.time(14, 0): 6,
               datetime.time(15, 0): 5, datetime.time(16, 0): 4,
               datetime.time(17, 0): 0, datetime.time(18, 0): 2,
               datetime.time(19, 0): 4, datetime.time(20, 0): 7},
     'reversal': {datetime.time(9, 0): 1, datetime.time(10, 0): nan,
                  datetime.time(11, 0): nan, datetime.time(12, 0): nan,
                  datetime.time(13, 0): nan, datetime.time(14, 0): 6.0,
                  datetime.time(15, 0): nan, datetime.time(16, 0): nan,
                  datetime.time(17, 0): nan, datetime.time(18, 0): nan,
                  datetime.time(19, 0): nan, datetime.time(20, 0): nan}})

a['target_hit'] = a['target_miss'] = nan
a['reversal1'] = a['reversal'] + 1
a['reversal2'] = a['reversal'] - a['reversal']
a.sort_index(1, inplace=True)

hits = a.ix[:, :-2].dropna()

for row, hit in hits.iterrows():
    forwardRows = [row] &lt; a['price'].index.values
    targetHit = a.index.values[(hit['reversal1']==a['price'].values) &amp; forwardRows][0]
    targetMiss = a.index.values[(hit['reversal2']==a['price'].values) &amp; forwardRows][0]
    if targetHit &gt; targetMiss:
        a.loc[row, "target_miss"] = targetMiss
    else:
        a.loc[row, "target_hit"] = targetHit

print '#'*50
print a
'''
</code></pre> <blockquote> <pre><code>##################################################
##################################################
          price  reversal  reversal1  reversal2 target_hit target_miss
09:00:00      1       1.0        2.0        0.0        NaN    10:00:00
10:00:00      0       NaN        NaN        NaN        NaN         NaN
11:00:00      3       NaN        NaN        NaN        NaN         NaN
12:00:00      4       NaN        NaN        NaN        NaN         NaN
13:00:00      7       NaN        NaN        NaN        NaN         NaN
14:00:00      6       6.0        7.0        0.0        NaN    17:00:00
15:00:00      5       NaN        NaN        NaN        NaN         NaN
16:00:00      4       NaN        NaN        NaN        NaN         NaN
17:00:00      0       NaN        NaN        NaN        NaN         NaN
18:00:00      2       NaN        NaN        NaN        NaN         NaN
19:00:00      4       NaN        NaN        NaN        NaN         NaN
20:00:00      7       NaN        NaN        NaN        NaN         NaN
</code></pre> </blockquote>
<pre><code>'''
</code></pre>
1
2016-08-19T20:59:16Z
[ "python", "pandas" ]
How can I slice each element of a numpy array of strings?
39,042,214
<p>Numpy has some very useful <a href="http://docs.scipy.org/doc/numpy/reference/routines.char.html" rel="nofollow">string operations</a>, which vectorize the usual Python string operations.</p> <p>Compared to these operations and to <code>pandas.str</code>, the numpy strings module seems to be missing a very important one: the ability to slice each string in the array. For example,</p> <pre><code>a = numpy.array(['hello', 'how', 'are', 'you'])
numpy.char.sliceStr(a, slice(1, 3))
&gt;&gt;&gt; numpy.array(['el', 'ow', 're', 'ou'])
</code></pre> <p>Am I missing some obvious method in the module with this functionality? Otherwise, is there a fast vectorized way to achieve this?</p>
11
2016-08-19T15:01:00Z
39,042,910
<p>Interesting omission... I guess you can always write your own:</p> <pre><code>import numpy as np

def slicer(start=None, stop=None, step=1):
    return np.vectorize(lambda x: x[start:stop:step], otypes=[str])

a = np.array(['hello', 'how', 'are', 'you'])
print(slicer(1, 3)(a))
# =&gt; ['el' 'ow' 're' 'ou']
</code></pre> <p>EDIT: Here are some benchmarks using the text of Ulysses by James Joyce. <strike>It seems the clear winner is @hpaulj's last strategy.</strike> @Divakar gets into the race improving on @hpaulj's last strategy.</p> <pre><code>import numpy as np
import requests

ulysses = requests.get('http://www.gutenberg.org/files/4300/4300-0.txt').text
a = np.array(ulysses.split())

# Ufunc
def slicer(start=None, stop=None, step=1):
    return np.vectorize(lambda x: x[start:stop:step], otypes=[str])

%timeit slicer(1, 3)(a)
# =&gt; 1 loop, best of 3: 221 ms per loop

# Non-mutating loop
def loop1(a):
    out = np.empty(len(a), dtype=object)
    for i, word in enumerate(a):
        out[i] = word[1:3]

%timeit loop1(a)
# =&gt; 1 loop, best of 3: 262 ms per loop

# Mutating loop
def loop2(a):
    for i in range(len(a)):
        a[i] = a[i][1:3]

b = a.copy()
%timeit -n 1 -r 1 loop2(b)
# 1 loop, best of 1: 285 ms per loop

# From @hpaulj's answer
%timeit np.frompyfunc(lambda x:x[1:3],1,1)(a)
# =&gt; 10 loops, best of 3: 141 ms per loop

%timeit np.frompyfunc(lambda x:x[1:3],1,1)(a).astype('U2')
# =&gt; 1 loop, best of 3: 170 ms per loop

%timeit a.view('U1').reshape(len(a),-1)[:,1:3].astype(object).sum(axis=1)
# =&gt; 10 loops, best of 3: 60.7 ms per loop

def slicer_vectorized(a,start,end):
    b = a.view('S1').reshape(len(a),-1)[:,start:end]
    return np.fromstring(b.tostring(),dtype='S'+str(end-start))

%timeit slicer_vectorized(a,1,3)
# =&gt; The slowest run took 5.34 times longer than the fastest.
# This could mean that an intermediate result is being cached.
# 10 loops, best of 3: 16.8 ms per loop
</code></pre>
3
2016-08-19T15:35:31Z
[ "python", "arrays", "string", "numpy", "slice" ]
How can I slice each element of a numpy array of strings?
39,042,214
<p>Numpy has some very useful <a href="http://docs.scipy.org/doc/numpy/reference/routines.char.html" rel="nofollow">string operations</a>, which vectorize the usual Python string operations.</p> <p>Compared to these operations and to <code>pandas.str</code>, the numpy strings module seems to be missing a very important one: the ability to slice each string in the array. For example,</p> <pre><code>a = numpy.array(['hello', 'how', 'are', 'you'])
numpy.char.sliceStr(a, slice(1, 3))
&gt;&gt;&gt; numpy.array(['el', 'ow', 're', 'ou'])
</code></pre> <p>Am I missing some obvious method in the module with this functionality? Otherwise, is there a fast vectorized way to achieve this?</p>
11
2016-08-19T15:01:00Z
39,043,288
<p>Most, if not all the functions in <code>np.char</code> apply existing <code>str</code> methods to each element of the array. It's a little faster than direct iteration (or <code>vectorize</code>) but not drastically so.</p> <p>There isn't a string slicer; at least not by that sort of name. Closest is indexing with a slice:</p> <pre><code>In [274]: 'astring'[1:3]
Out[274]: 'st'

In [275]: 'astring'.__getitem__
Out[275]: &lt;method-wrapper '__getitem__' of str object at 0xb3866c20&gt;

In [276]: 'astring'.__getitem__(slice(1,4))
Out[276]: 'str'
</code></pre> <p>An iterative approach can be with <code>frompyfunc</code> (which is also used by <code>vectorize</code>):</p> <pre><code>In [277]: a = numpy.array(['hello', 'how', 'are', 'you'])

In [278]: np.frompyfunc(lambda x:x[1:3],1,1)(a)
Out[278]: array(['el', 'ow', 're', 'ou'], dtype=object)

In [279]: np.frompyfunc(lambda x:x[1:3],1,1)(a).astype('U2')
Out[279]: array(['el', 'ow', 're', 'ou'], dtype='&lt;U2')
</code></pre> <p>I could view it as a single character array, and slice that</p> <pre><code>In [289]: a.view('U1').reshape(4,-1)[:,1:3]
Out[289]:
array([['e', 'l'],
       ['o', 'w'],
       ['r', 'e'],
       ['o', 'u']],
      dtype='&lt;U1')
</code></pre> <p>I still need to figure out how to convert it back to 'U2'.</p> <pre><code>In [290]: a.view('U1').reshape(4,-1)[:,1:3].copy().view('U2')
Out[290]:
array([['el'],
       ['ow'],
       ['re'],
       ['ou']],
      dtype='&lt;U2')
</code></pre> <p>The initial view step shows the databuffer as Py3 characters (these would be bytes in a <code>S</code> or Py2 string case):</p> <pre><code>In [284]: a.view('U1')
Out[284]:
array(['h', 'e', 'l', 'l', 'o', 'h', 'o', 'w', '', '', 'a', 'r', 'e',
       '', '', 'y', 'o', 'u', '', ''],
      dtype='&lt;U1')
</code></pre> <p>Picking the 1:3 columns amounts to picking <code>a.view('U1')[[1,2,6,7,11,12,16,17]]</code> and then reshaping and view. Without getting into details, I'm not surprised that it requires a copy. </p>
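<p>A small addition beyond the original answer: the <code>(4, 1)</code>-shaped 'U2' result above can be flattened back to a 1-D array with <code>ravel()</code>, e.g.:</p>

```python
import numpy as np

a = np.array(['hello', 'how', 'are', 'you'])
# slice the per-character view, copy to make it contiguous,
# reinterpret pairs of characters as 'U2', then drop the extra axis
out = a.view('U1').reshape(len(a), -1)[:, 1:3].copy().view('U2').ravel()
print(out)  # ['el' 'ow' 're' 'ou']
```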
2
2016-08-19T15:55:55Z
[ "python", "arrays", "string", "numpy", "slice" ]
How can I slice each element of a numpy array of strings?
39,042,214
<p>Numpy has some very useful <a href="http://docs.scipy.org/doc/numpy/reference/routines.char.html" rel="nofollow">string operations</a>, which vectorize the usual Python string operations.</p> <p>Compared to these operations and to <code>pandas.str</code>, the numpy strings module seems to be missing a very important one: the ability to slice each string in the array. For example,</p> <pre><code>a = numpy.array(['hello', 'how', 'are', 'you'])
numpy.char.sliceStr(a, slice(1, 3))
&gt;&gt;&gt; numpy.array(['el', 'ow', 're', 'ou'])
</code></pre> <p>Am I missing some obvious method in the module with this functionality? Otherwise, is there a fast vectorized way to achieve this?</p>
11
2016-08-19T15:01:00Z
39,045,337
<p>Here's a vectorized approach -</p> <pre><code>def slicer_vectorized(a,start,end):
    b = a.view('S1').reshape(len(a),-1)[:,start:end]
    return np.fromstring(b.tostring(),dtype='S'+str(end-start))
</code></pre> <p>Sample run -</p> <pre><code>In [68]: a = np.array(['hello', 'how', 'are', 'you'])

In [69]: slicer_vectorized(a,1,3)
Out[69]: array(['el', 'ow', 're', 'ou'], dtype='|S2')

In [70]: slicer_vectorized(a,0,3)
Out[70]: array(['hel', 'how', 'are', 'you'], dtype='|S3')
</code></pre> <p>Runtime test -</p> <p>Testing out all the approaches posted by other authors that I could run at my end and also including the vectorized approach from earlier in this post.</p> <p>Here's the timings -</p> <pre><code>In [53]: # Setup input array
    ...: a = np.array(['hello', 'how', 'are', 'you'])
    ...: a = np.repeat(a,10000)

# @Alberto Garcia-Raboso's answer
In [54]: %timeit slicer(1, 3)(a)
10 loops, best of 3: 23.5 ms per loop

# @hapaulj's answer
In [55]: %timeit np.frompyfunc(lambda x:x[1:3],1,1)(a)
100 loops, best of 3: 11.6 ms per loop

# Using loop-comprehension
In [56]: %timeit np.array([i[1:3] for i in a])
100 loops, best of 3: 12.1 ms per loop

# From this post
In [57]: %timeit slicer_vectorized(a,1,3)
1000 loops, best of 3: 787 µs per loop
</code></pre>
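<p>A note beyond the original answer: on newer NumPy versions <code>fromstring</code>/<code>tostring</code> are deprecated. A sketch of the same idea using <code>frombuffer</code>/<code>tobytes</code> instead (assuming a bytes-dtype input array, as in the original):</p>

```python
import numpy as np

def slicer_vectorized(a, start, end):
    # view each fixed-width bytes string as a row of single bytes,
    # slice the character columns, then reinterpret the raw buffer
    # as fixed-width strings of the sliced length
    b = a.view('S1').reshape(len(a), -1)[:, start:end]
    return np.frombuffer(b.tobytes(), dtype='S' + str(end - start))

a = np.array([b'hello', b'how', b'are', b'you'])
print(slicer_vectorized(a, 1, 3))  # [b'el' b'ow' b're' b'ou']
```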
6
2016-08-19T18:09:01Z
[ "python", "arrays", "string", "numpy", "slice" ]
How can I slice each element of a numpy array of strings?
39,042,214
<p>Numpy has some very useful <a href="http://docs.scipy.org/doc/numpy/reference/routines.char.html" rel="nofollow">string operations</a>, which vectorize the usual Python string operations.</p> <p>Compared to these operations and to <code>pandas.str</code>, the numpy strings module seems to be missing a very important one: the ability to slice each string in the array. For example,</p> <pre><code>a = numpy.array(['hello', 'how', 'are', 'you'])
numpy.char.sliceStr(a, slice(1, 3))
&gt;&gt;&gt; numpy.array(['el', 'ow', 're', 'ou'])
</code></pre> <p>Am I missing some obvious method in the module with this functionality? Otherwise, is there a fast vectorized way to achieve this?</p>
11
2016-08-19T15:01:00Z
39,045,428
<p>To solve this, so far I've been transforming the numpy <code>array</code> to a pandas <code>Series</code> and back. It is not a pretty solution, but it works and it works relatively fast.</p> <pre><code>import numpy
import pandas

a = numpy.array(['hello', 'how', 'are', 'you'])
pandas.Series(a).str[1:3].values
# array(['el', 'ow', 're', 'ou'], dtype=object)
</code></pre>
1
2016-08-19T18:15:38Z
[ "python", "arrays", "string", "numpy", "slice" ]
Return True/False - when to use it over just return
39,042,320
<p>I have searched over the internet for why I should use <code>return False/True</code> over just <code>return</code>, but cannot find the answer.</p> <p>Why would I want to have the statement return True/False instead of just return? Can you please show me an example?</p> <pre><code>def test(var):
    if var &gt; 5:
        return True
    else:
        return False

test(8)
&gt;&gt;&gt; True

#------------------------

def test(var):
    if var &gt; 5:
        return
    else:
        return

test(8)
&gt;&gt;&gt; None
</code></pre>
-7
2016-08-19T15:06:03Z
39,042,386
<p>One major problem is that your second function will return <code>None</code> either way. Returning a boolean value is a way to have the return value of your function be meaningful/useful elsewhere.</p> <p>If it returns a value like <code>True</code> or <code>False</code>, you can in turn use the return value of your function in cases like:</p> <pre><code>if test(8):
    # do something if it returns True
else:
    # do something otherwise
</code></pre> <p>Otherwise, your function is meaningless because <code>test()</code> will return the same thing regardless of input. </p> <p>I was once told that a function should either "do something" or "return something". Your second example function, doesn't "do anything", because the <code>&gt;</code> comparison has no effect if you are not making some choice based on the results of that comparison. It also doesn't really return anything (at least not anything meaningful) because it will return <code>None</code> no matter what - in fact, even if you remove the <code>return</code> keyword, it will still just return <code>None</code>.</p>
1
2016-08-19T15:08:38Z
[ "python", "python-2.7" ]
Return True/False - when to use it over just return
39,042,320
<p>I have searched over internet why i should use return False/True over just return but can not find the answer.</p> <p>Why would i want to have statement return True/False inseatd of just return ? Can you please show me an example ?</p> <pre><code>def test(var): if var &gt; 5: return True else: return False test(8) &gt;&gt;&gt; True #------------------------ def test(var): if var &gt; 5: return else: return test(8) &gt;&gt;&gt; None </code></pre>
-7
2016-08-19T15:06:03Z
39,043,750
<p>Analogy: A function is a cloneable assistant ready to perform a task, and give you an answer. The task is defined by the <em>parameters</em> of the function (the stuff inside the parentheses). Let's rewrite the names to give them semantic meaning (i.e. names which illustrate what we expect).</p> <pre><code>def isXGreaterThanY(..... </code></pre> <p>Here, the name of the task is "is X greater than Y?". If you go up to your cloneable assistant and ask "is X greater than Y?", your assistant will not be able to accomplish what you want unless you tell them what X and Y are.</p> <pre><code>def isXGreaterThanY(x, y): ......... </code></pre> <p>Now I can start to explain where you may be going wrong. One error at this level of programming is that just because you see something that does almost what one wants on a webpage, one may be tempted to copy it syntactically and try to fiddle around with the syntax and hope it will just work. This will never work. It is not the point of programming.</p> <p>Some everyday people think that programming is about magic words (not that I am implying you think that) that solve your problem. This is not the case. Rather, programming is (classically) about being able to make automatons (these little assistants) which manipulate information for you. The rote, mechanical manipulation of information is what computers are good at. We want our tasks to be replicable though, so we give them names like "is X greater than Y?", and specify them in what is known, aptly, as <em>procedures</em> (a.k.a. functions).</p> <p>Let's consider what you wrote:</p> <pre><code>def isXGreaterThanY(x, y): if x &gt; y: return True else: return False </code></pre> <p>A procedure is all about <em>control flow</em>. Each part of the procedure is either a <em>statement</em> or <em>expression</em> (which you can, at this level, consider to be the same thing). 
A procedure usually has an <em>answer</em>: whenever control flow hits a "return ANSWER" statement, the entire procedure stops (the task is finished) and your magical assistant returns to you with the answer on a piece of paper with <code>ANSWER</code> written on it. A procedure which returns an answer is known as a 'function', and this is almost always what we want (procedures which do ugly 'side-effects' behind the scenes are usually not what we want).</p> <p>Below, I illustrate the idea that takes us from syntax (what we write down) to mechanical actions. A procedure is made up of syntactic expressions, and each expression may have subexpressions, etc.</p> <ul> <li>We have the <code>if __ then __ else __</code> statement, which consists of three subexpressions: <ul> <li>the query clause of <code>x &gt; y</code>, which consists of: <ul> <li>the <code>_ &gt; _</code> operator acting on: <ul> <li>the variable x</li> <li>the variable y</li> </ul></li> </ul></li> <li>the "then" clause of <code>return True</code>, which consists of: <ul> <li>the <code>return</code> statement, returning: <ul> <li>the literal boolean value <code>True</code></li> </ul></li> </ul></li> <li>the "else" clause of <code>return False</code>, which consists of: <ul> <li>the <code>return</code> statement, returning: <ul> <li>the literal boolean value <code>False</code></li> </ul></li> </ul></li> </ul></li> </ul> <p>This 'syntax tree' is what the computer sees. Now, the programming language associates meaning to these expressions: it knows how to navigate this tree in what is known as "control flow". In particular, in the programming language Python, we know that when we see an if-then-else statement, that first we check the test condition. In this case we look at the test condition and notice it is a naked comparison (we ask the CPU, and it gives either True or False back). If the comparison is true, we will do the "then" clause, which returns; i.e. 
hands you a slip of paper with the answer <code>True</code>. If the comparison was false, we'd do the "else" clause, and hand you a slip of paper with the answer <code>False</code>.</p> <p>In this way, whenever you ask your assistant "is X greater than Y? where X=... and Y=...", your assistant will (in effect) look at the instructions you've specified in the procedure, and interpret them with the assistant's eye being always fixed on one expression at a time (the "control flow" can be thought of as a highlighted or underlined 'active' subexpression, i.e. control flow is the path your assistant's eye takes while looking through the code). In this particular case, your procedure begins with an if-then-else clause, which it interprets as a branching point in the control flow (a fork in the road); it takes the appropriate branch, and in this case, will discover one of the two 'return' statements, and then dutifully give you a slip of paper.</p> <p>Control flow is determined by the semantics (meaning) behind special control-flow statements, such as if-then-else. Other control flow structures are interpreted differently. 
<code>for x in range(7): ...</code> will pretend x is 1 and execute <code>...</code>, pretend x is 2 and execute <code>...</code>, etc.</p> <p>A <code>while True: ...</code> will loop forever, performing the <code>...</code> over and over again.</p> <p>A <code>break</code> (break out of) means "stop the while loop" or "stop the for loop prematurely".</p> <p>A <code>continue</code> means "skip the rest of the <code>...</code> in this while/for loop, but keep on looping".</p> <p>You can implement your own control flow using the above and your own custom functions, with what is known as recursion (another topic outside this answer's scope).</p> <p>That is control flow and imperative programming in a nutshell.</p> <hr> <p>By the way, it is better form to do this:</p> <pre><code>def isXGreaterThanY(x, y):
    # this is a comment
    # you can insert a print x&gt;y here, or print(x&gt;y) depending on your version of python
    return (x &gt; y)
</code></pre> <p>The expression <code>x &gt; y</code> evaluates to True/False before it's fed into the if-then-else statement. So, you could just return the expression as the answer. However, by that point, your function is simple enough that you wouldn't write a function answer:</p> <pre><code>#print isXGreaterThanY(1,3)
print (1 &gt; 3)
</code></pre>
1
2016-08-19T16:24:18Z
[ "python", "python-2.7" ]
Calling Twisted reactor.run() Inside flask app
39,042,399
<p>I am writing a web service in Python Flask for the Jasmin SMS Gateway to create and delete users from the gateway. In case of a POST request I am calling runScenario() and after that I am starting reactor.run(), which will create the user in the gateway. This code runs perfectly for the first web service call, but on the second call it gives me this error:</p> <pre><code>raise error.ReactorNotRestartable()
ReactorNotRestartable
</code></pre> <p>This is my Flask app:</p> <pre><code>#!/usr/bin/env python
from flask import Flask, jsonify, request, Response, make_response, abort
from JasminIntegration import *

JasminWebservice = Flask(__name__)

@JasminWebservice.errorhandler(404)
def not_found(error):
    return make_response(jsonify({'error': 'Not found'}), 404)

@JasminWebservice.route('/jsms/webservice', methods=['POST'])
def create_user():
    if not request.json or not 'username' in request.json:
        abort(400)
    runScenario(request)
    reactor.run()
    return jsonify({'response':'Success'}), 201

if __name__ == '__main__':
    JasminWebservice.run(host="0.0.0.0", port=7034, debug=True)
</code></pre> <p>I am calling runScenario() which is defined in JasminIntegration.py</p> <pre><code>#!/usr/bin/env python
import sys
import pickle

from flask import abort
from twisted.internet import defer, reactor
from jasmin.managers.proxies import SMPPClientManagerPBProxy
from jasmin.routing.proxies import RouterPBProxy
from jasmin.routing.Routes import DefaultRoute
from jasmin.routing.jasminApi import SmppClientConnector, User, Group, MtMessagingCredential, SmppsCredential
from jasmin.protocols.smpp.configs import SMPPClientConfig
from twisted.web.client import getPage

@defer.inlineCallbacks
def runScenario(Request):
    try:
        proxy_router = RouterPBProxy()
        yield proxy_router.connect('127.0.0.1', 8988, 'radmin', 'rpwd')

        if Request.method == 'POST':
            smppUser = Request.json['username']
            smppPass = Request.json['password']
            smppThroughput = Request.json['tp']
            smppBindSessions = Request.json['sessions']

            if not smppUser:
                abort(400)
            if len(smppPass) == 0 or len(smppPass) &gt; 8:
                abort(400)
            if not smppThroughput.isdigit():
                abort(400)
            if not smppBindSessions.isdigit():
                abort(400)

            # Provisioning router with users
            smpp_cred = SmppsCredential()
            yield smpp_cred.setQuota('max_bindings', int(smppBindSessions))

            mt_cred = MtMessagingCredential()
            yield mt_cred.setQuota('smpps_throughput', smppThroughput)
            #yield mt_cred.setQuota('submit_sm_count', 500)

            g1 = Group('clients')
            u1 = User(uid=smppUser, group=g1, username=smppUser, password=smppPass,
                      mt_credential=mt_cred, smpps_credential=smpp_cred)
            yield proxy_router.group_add(g1)
            yield proxy_router.user_add(u1)

        if Request.method == 'DELETE':
            smppUser = Request.json['username']
            if not smppUser:
                abort(404)
            yield proxy_router.user_remove(smppUser)
    except Exception, e:
        yield "%s" % str(e)
    finally:
        print "Stopping Reactor"
        reactor.stop()
</code></pre> <p>Please help me out to solve this issue.</p>
0
2016-08-19T15:09:22Z
39,046,221
<p>Reactor is not restartable in Twisted by design, it performs initialization and finalization that isn't easily restartable.</p> <p>In the example provided you're using a development WSGI server (flask's default one, <a href="http://werkzeug.pocoo.org/docs/0.11/serving/" rel="nofollow">http://werkzeug.pocoo.org/docs/0.11/serving/</a>) which appears to be single-threaded by default. </p> <p>Your problem would go away if you avoid using threads and switch to a multi-process server instead. i.e. it would work if you just run it like this (see <strong>processes=2</strong> =&gt; each request will be handled in a new process, but no more than 2 concurrent ones):</p> <pre><code>if __name__ == '__main__':
    JasminWebservice.run(host="0.0.0.0", port=7034, debug=True, processes=2)
</code></pre> <p><strong>But</strong> I wouldn't rely on that – you'll run into similar troubles when writing unit tests and restricting your app to run in multi-process environment only is not a good approach. </p> <p>But looks like the problem stems from your app design – why would you need Flask and an additional WSGI server? You can build REST API fully in twisted ending up running only a single reactor which would handle both queries to your API and incoming requests.</p>
0
2016-08-19T19:12:02Z
[ "python", "python-2.7", "flask", "twisted" ]
Calling Twisted reactor.run() Inside flask app
39,042,399
<p>I am writing a web service in Python Flask for the Jasmin SMS Gateway to create and delete users from the gateway. In case of a POST request I am calling runScenario() and after that I am starting reactor.run(), which will create the user in the gateway. This code runs perfectly for the first web service call, but on the second call it gives me this error:</p> <pre><code>raise error.ReactorNotRestartable()
ReactorNotRestartable
</code></pre> <p>This is my Flask app:</p> <pre><code>#!/usr/bin/env python
from flask import Flask, jsonify, request, Response, make_response, abort
from JasminIntegration import *

JasminWebservice = Flask(__name__)

@JasminWebservice.errorhandler(404)
def not_found(error):
    return make_response(jsonify({'error': 'Not found'}), 404)

@JasminWebservice.route('/jsms/webservice', methods=['POST'])
def create_user():
    if not request.json or not 'username' in request.json:
        abort(400)
    runScenario(request)
    reactor.run()
    return jsonify({'response':'Success'}), 201

if __name__ == '__main__':
    JasminWebservice.run(host="0.0.0.0", port=7034, debug=True)
</code></pre> <p>I am calling runScenario() which is defined in JasminIntegration.py</p> <pre><code>#!/usr/bin/env python
import sys
import pickle

from flask import abort
from twisted.internet import defer, reactor
from jasmin.managers.proxies import SMPPClientManagerPBProxy
from jasmin.routing.proxies import RouterPBProxy
from jasmin.routing.Routes import DefaultRoute
from jasmin.routing.jasminApi import SmppClientConnector, User, Group, MtMessagingCredential, SmppsCredential
from jasmin.protocols.smpp.configs import SMPPClientConfig
from twisted.web.client import getPage

@defer.inlineCallbacks
def runScenario(Request):
    try:
        proxy_router = RouterPBProxy()
        yield proxy_router.connect('127.0.0.1', 8988, 'radmin', 'rpwd')

        if Request.method == 'POST':
            smppUser = Request.json['username']
            smppPass = Request.json['password']
            smppThroughput = Request.json['tp']
            smppBindSessions = Request.json['sessions']

            if not smppUser:
                abort(400)
            if len(smppPass) == 0 or len(smppPass) &gt; 8:
                abort(400)
            if not smppThroughput.isdigit():
                abort(400)
            if not smppBindSessions.isdigit():
                abort(400)

            # Provisioning router with users
            smpp_cred = SmppsCredential()
            yield smpp_cred.setQuota('max_bindings', int(smppBindSessions))

            mt_cred = MtMessagingCredential()
            yield mt_cred.setQuota('smpps_throughput', smppThroughput)
            #yield mt_cred.setQuota('submit_sm_count', 500)

            g1 = Group('clients')
            u1 = User(uid=smppUser, group=g1, username=smppUser, password=smppPass,
                      mt_credential=mt_cred, smpps_credential=smpp_cred)
            yield proxy_router.group_add(g1)
            yield proxy_router.user_add(u1)

        if Request.method == 'DELETE':
            smppUser = Request.json['username']
            if not smppUser:
                abort(404)
            yield proxy_router.user_remove(smppUser)
    except Exception, e:
        yield "%s" % str(e)
    finally:
        print "Stopping Reactor"
        reactor.stop()
</code></pre> <p>Please help me out to solve this issue.</p>
0
2016-08-19T15:09:22Z
39,049,476
<p>You can't stop the reactor and run it again. The error you're getting is by design. Consider using <code>klein</code>; it uses <code>werkzeug</code> like <code>flask</code> does and leverages <code>twisted</code> for networking as well. The syntax is even similar. Translating your code into <code>klein</code> would look a bit like this:</p> <pre><code>import json from klein import Klein from werkzeug.exceptions import NotFound from JasminIntegration import * JasminWebservice = Klein() @JasminWebservice.handle_errors(NotFound) def not_found(request, error): request.setResponseCode(404) return json.dumps({'error': 'Not found'}) @JasminWebservice.route('/jsms/webservice', methods=['POST']) def create_user(request): try: data = json.loads(request.content.read()) if not data or 'username' not in data: raise NotFound() except: # yeah I know this isn't best practice raise NotFound() runScenario(request) request.setResponseCode(201) return json.dumps({'response':'Success'}) if __name__ == '__main__': JasminWebservice.run("0.0.0.0", port=7034) </code></pre> <p>As a side note, you shouldn't stop the <code>reactor</code> unless you want to exit your app entirely.</p>
0
2016-08-20T00:51:29Z
[ "python", "python-2.7", "flask", "twisted" ]
Calling Twisted reactor.run() inside Flask app
39,042,399
<p>I am writing a web service in python flask for Jasmin SMS Gateway to create and delete user from the gateway. In case of POST request I am calling runScenario() and after that I am starting reactor.run() which will create user in the gateway. this code runs perfectly for the first web service call but on second call its giving me this error :</p> <pre><code>raise error.ReactorNotRestartable() ReactorNotRestartable </code></pre> <p>This is my Flask app:</p> <pre><code>#!/usr/bin/env python from flask import Flask, jsonify, request, Response, make_response, abort from JasminIntegration import * JasminWebservice = Flask(__name__) @JasminWebservice.errorhandler(404) def not_found(error): return make_response(jsonify({'error': 'Not found'}), 404) @JasminWebservice.route('/jsms/webservice', methods=['POST']) def create_user(): if not request.json or not 'username' in request.json: abort(400) runScenario(request) reactor.run() return jsonify({'response':'Success'}), 201 if __name__ == '__main__': JasminWebservice.run(host="0.0.0.0",port=7034,debug=True) </code></pre> <p>I am calling runScenario() which is defined in JasminIntegration.py</p> <pre><code>#!/usr/bin/env python import sys import pickle from flask import abort from twisted.internet import defer, reactor from jasmin.managers.proxies import SMPPClientManagerPBProxy from jasmin.routing.proxies import RouterPBProxy from jasmin.routing.Routes import DefaultRoute from jasmin.routing.jasminApi import SmppClientConnector, User, Group, MtMessagingCredential, SmppsCredential from jasmin.protocols.smpp.configs import SMPPClientConfig from twisted.web.client import getPage @defer.inlineCallbacks def runScenario(Request): try: proxy_router = RouterPBProxy() yield proxy_router.connect('127.0.0.1', 8988, 'radmin', 'rpwd') if Request.method == 'POST': smppUser = Request.json['username'] smppPass = Request.json['password'] smppThroughput = Request.json['tp'] smppBindSessions = Request.json['sessions'] if not smppUser: 
abort(400) if len(smppPass) == 0 or len(smppPass) &gt; 8: abort(400) if not smppThroughput.isdigit(): abort(400) if not smppBindSessions.isdigit(): abort(400) # Provisiong router with users smpp_cred = SmppsCredential() yield smpp_cred.setQuota('max_bindings',int(smppBindSessions)) mt_cred = MtMessagingCredential() yield mt_cred.setQuota('smpps_throughput' , smppThroughput) #yield mt_cred.setQuota('submit_sm_count' , 500) g1 = Group('clients') u1 = User(uid = smppUser, group = g1, username = smppUser, password = smppPass, mt_credential = mt_cred, smpps_credential = smpp_cred) yield proxy_router.group_add(g1) yield proxy_router.user_add(u1) if Request.method == 'DELETE': smppUser = Request.json['username'] if not smppUser: abort(404) yield proxy_router.user_remove(smppUser) except Exception, e: yield "%s" %str(e) finally: print "Stopping Reactor" reactor.stop() </code></pre> <p>Please help me out to solve this issue:</p>
0
2016-08-19T15:09:22Z
39,072,729
<p>@ffeast I also tried to do it in Twisted but encountered the same problem, as the reactor is not restartable. In Twisted I have done something like this:</p> <pre><code>#!/usr/bin/env python from pprint import pprint import json from twisted.web import server, resource from twisted.internet import reactor from JasminIntegration import * import ast class Simple(resource.Resource): isLeaf = True def render_GET(self, request): return "&lt;html&gt;Hello, world!&lt;/html&gt;" def render_POST(self, request): pprint(request.__dict__) newdata = request.content.getvalue() newdata = ast.literal_eval(newdata) ret = runScenario(newdata) #print request.content #print newdata return '' site = server.Site(Simple()) reactor.listenTCP(7034, site) reactor.run() </code></pre>
0
2016-08-22T06:38:12Z
[ "python", "python-2.7", "flask", "twisted" ]
find_one() returns one document given a combination of fields
39,042,409
<p>I am trying to return a document from my mongodb (using pymongo). I want the query to return a document given an id and a tag. </p> <pre><code>ids = ['123', '456', '234', '534'] rows = [] for i in ids: for b in ["Tag1", "Tag2", "Tag3"]: temp = pb_db.db.collection.find_one({"ID": i, "Tag": b}, {'ID': 1, 'Tag': 1, 'Name': 1, '_created_at': 1}) if temp is not None: rows.append(temp) </code></pre> <p>A document with an ID of '123' may have one record with 'Tag1' and a separate document with 'Tag3'. Any combination of 'ID' and 'Tag' is possible.</p> <p>The goal is to return one instance of each id, tag combination (hence using find_one())</p> <p>At the moment my code above is very inefficient as it queries the db for every id three times (my list of ids is much larger than this example). Is it possible to use the find_one() query to return a document for a given id with each tag only once? Thanks,</p> <p>example mongo structure:</p> <pre><code>{ "_id" : "random_mongo_id", "Tag" : "Tag1", "_created_at" : ISODate("2016-06-25T00:00:00.000Z"), "ID" : [ "123" ], }, { "_id" : "random_mongo_id", "Tag" : "Tag2", "_created_at" : ISODate("2016-07-25T00:00:00.000Z"), "ID" : [ "123" ], }, { "_id" : "random_mongo_id", "Tag" : "Tag1", "_created_at" : ISODate("2016-07-25T00:00:00.000Z"), "ID" : [ "534" ], } </code></pre> <p>so in this example i would expect to see:</p> <pre><code>ID: 123, Tag: Tag1 ID: 123, Tag: Tag2 ID: 534, Tag: Tag1 </code></pre>
0
2016-08-19T15:10:00Z
39,042,951
<p>You could do it in a single pass by using the <a href="https://docs.mongodb.com/manual/reference/operator/query/in/#op._S_in" rel="nofollow">$in operator</a> to compare the items in the "ID" array in the database with the items in the "ids" array variable, and similarly for the tags.</p> <blockquote> <p>Use the $in Operator to Match Values in an Array</p> <p>The collection inventory contains documents that include the field tags, as in the following:</p> </blockquote> <pre><code>{ _id: 1, item: "abc", qty: 10, tags: [ "school", "clothing" ], sale: false } </code></pre> <blockquote> <p>Then, the following update() operation will set the sale field value to true where the tags field holds an array with at least one element matching either "appliances" or "school".</p> </blockquote> <pre><code>db.inventory.update( { tags: { $in: ["appliances", "school"] } }, { $set: { sale:true } } ) </code></pre> <p>In user3939059's case, the query would be something like this (note that in pymongo the operator names must be quoted strings, unlike in the mongo shell):</p> <pre><code>ids = ['123', '456', '234', '534'] tags = ['Tag1', 'Tag2', 'Tag3'] pb_db.db.collection.find({"ID": {"$in": ids}, "Tag": {"$in": tags}}, {'ID': 1, 'Tag': 1, 'Name': 1, '_created_at': 1}) </code></pre>
-1
2016-08-19T15:37:46Z
[ "python", "mongodb", "mongodb-query", "pymongo" ]
find_one() returns one document given a combination of fields
39,042,409
<p>I am trying to return a document from my mongodb (using pymongo). I want the query to return a document given an id and a tag. </p> <pre><code>ids = ['123', '456', '234', '534'] rows = [] for i in ids: for b in ["Tag1", "Tag2", "Tag3"]: temp = pb_db.db.collection.find_one({"ID": i, "Tag": b}, {'ID': 1, 'Tag': 1, 'Name': 1, '_created_at': 1}) if temp is not None: rows.append(temp) </code></pre> <p>A document with an ID of '123' may have one record with 'Tag1' and a separate document with 'Tag3'. Any combination of 'ID' and 'Tag' is possible.</p> <p>The goal is to return one instance of each id, tag combination (hence using find_one())</p> <p>At the moment my code above is very inefficient as it queries the db for every id three times (my list of ids is much larger than this example). Is it possible to use the find_one() query to return a document for a given id with each tag only once? Thanks,</p> <p>example mongo structure:</p> <pre><code>{ "_id" : "random_mongo_id", "Tag" : "Tag1", "_created_at" : ISODate("2016-06-25T00:00:00.000Z"), "ID" : [ "123" ], }, { "_id" : "random_mongo_id", "Tag" : "Tag2", "_created_at" : ISODate("2016-07-25T00:00:00.000Z"), "ID" : [ "123" ], }, { "_id" : "random_mongo_id", "Tag" : "Tag1", "_created_at" : ISODate("2016-07-25T00:00:00.000Z"), "ID" : [ "534" ], } </code></pre> <p>so in this example i would expect to see:</p> <pre><code>ID: 123, Tag: Tag1 ID: 123, Tag: Tag2 ID: 534, Tag: Tag1 </code></pre>
0
2016-08-19T15:10:00Z
39,044,810
<p>You need to use <a href="https://docs.mongodb.com/manual/reference/operator/query/in/" rel="nofollow"><code>$in</code></a> and the <a href="https://docs.mongodb.com/manual/reference/operator/query/elemMatch/" rel="nofollow"><code>$elemMatch</code></a> query operator.</p> <pre><code>ids = ['123', '456', '234', '534'] tags = ["Tag1", "Tag2", "Tag3"] db.collection.find_one({ "Tag": { "$in": tags}, "ID": { "$elemMatch": { "$in": ids}} }) </code></pre>
0
2016-08-19T17:35:28Z
[ "python", "mongodb", "mongodb-query", "pymongo" ]
iterate over list of dicts to create different strings
39,042,532
<p>I have a pandas file with 3 different columns that I turn into a dictionary with to_dict, the result is a list of dictionaries:</p> <pre><code>df = [ {'HEADER1': 'col1-row1', 'HEADER2': 'col2-row1', 'HEADER3': 'col3-row1'}, {'HEADER1': 'col1-row2', 'HEADER2': 'col2-row2', 'HEADER3': 'col3-row2'} ] </code></pre> <p>Now my problem is that I need the value of 'col2-rowX' and 'col3-rowX' to build a URL and use requests and bs4 to scrape the websites.</p> <p>I need my result to be something like the following:</p> <pre><code>requests.get("'http://www.website.com/' + row1-col2 + 'another-string' + row1-col3 + 'another-string'") </code></pre> <p>And I need to do that for every dictionary in the list.</p> <p>I have tried iterating over the dictionaries using for-loops, something like:</p> <pre><code>import pandas as pd import os os.chdir('C://Users/myuser/Desktop') df = pd.DataFrame.from_csv('C://Users/myuser/Downloads/export.csv') #Remove 'Code' column df = df.drop('Code', axis=1) #Remove 'Code2' as index df = df.reset_index() #Rename columns for easier manipulation df.columns = ['CB', 'FC', 'PO'] #Convert to dictionary for easy URL iteration and creation df = df.to_dict('records') for row in df: for key in row: print(key) </code></pre>
-1
2016-08-19T15:15:53Z
39,042,737
<p>You only ever iterate twice, and <em>short-circuit</em> out of the nested <code>for</code> loop every time it is executed by having a <code>return</code> statement there. Looking up the necessary information from the dictionary will allow you to build up your url's. One possible example:</p> <pre><code>def get_urls(l_d): l=[] for d in l_d: l.append('http://www.website.com/' + d['HEADER2'] + 'another-string' + d['HEADER3'] + 'another-string') return l df = [{'HEADER1': 'col1-row1', 'HEADER2': 'col2-row1', 'HEADER3': 'col3-row1'},{'HEADER1': 'col1-row2', 'HEADER2': 'col2-row2', 'HEADER3': 'col3-row2'}] print get_urls(df) &gt;&gt;&gt; ['http://www.website.com/col2-row1another-stringcol3-row1another-string', 'http://www.website.com/col2-row2another-stringcol3-row2another-string'] </code></pre>
2
2016-08-19T15:26:24Z
[ "python", "for-loop", "dictionary" ]
Place backslash between words in string
39,042,637
<p>I Want to convert com to newcom </p> <pre><code>com = R.E.M. - Losing My Religion.mp3 newcom = R.E.M.\ -\ Losing\ My\ Religion.mp3 </code></pre> <p>I am doing this because Ubuntu terminal needs backslashes to specify spaces in paths.</p> <p>This is just a string manipulation, what do I need to do?</p>
-3
2016-08-19T15:21:10Z
39,042,726
<p>Look into the replace() method in python: <a href="http://www.tutorialspoint.com/python/string_replace.htm" rel="nofollow">http://www.tutorialspoint.com/python/string_replace.htm</a></p>
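For example, a quick sketch of `replace()` using the filename from the question:

```python
com = "R.E.M. - Losing My Religion.mp3"

# replace each space with a backslash followed by a space
newcom = com.replace(' ', '\\ ')

print(newcom)  # R.E.M.\ -\ Losing\ My\ Religion.mp3
```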
-1
2016-08-19T15:25:39Z
[ "python" ]
Place backslash between words in string
39,042,637
<p>I Want to convert com to newcom </p> <pre><code>com = R.E.M. - Losing My Religion.mp3 newcom = R.E.M.\ -\ Losing\ My\ Religion.mp3 </code></pre> <p>I am doing this because Ubuntu terminal needs backslashes to specify spaces in paths.</p> <p>This is just a string manipulation, what do I need to do?</p>
-3
2016-08-19T15:21:10Z
39,042,744
<pre><code>newcom = com.replace(' ', '\\ ') </code></pre> <p>You need to replace each space with an (escaped) backslash followed by a space - that's exactly what Python's replace method is for.</p> <p>Alternatively, an Ubuntu terminal is fine with paths in quotes, e.g.</p> <pre><code>cd "hello world" </code></pre> <p>is just as valid as your solution, which would give</p> <pre><code>cd hello\ world </code></pre> <p>And it's perhaps cleaner for the user, and more accepting of other characters that might need to be escaped.</p>
0
2016-08-19T15:26:47Z
[ "python" ]
Trying to subtract two dates to find number of days in between
39,042,774
<p>Within Zapier, I have two dates and am trying to find the number of days between them in a Code Step. I run a Formatter Step on each date to output a datetime object in YYYY-MM-DD then run the following code below:</p> <pre><code>submit = input['submit_date'] event = input['event_date'] delta = event-submit numdays = delta.days return {'numdays': numdays} </code></pre> <p>The error I get suggests the two dates that i'm importing are strings, not datetimes.</p> <p>Here is the error:</p> <p><code>Your code had an error! Traceback (most recent call last): File "/tmp/tmp7xyg3Z/usercode.py", line 10, in the_function delta = event-submit TypeError: unsupported operand type(s) for -: 'unicode' and 'unicode'</code></p> <p>Anyone know what i'm doing wrong or a better way to accomplish this task?</p>
0
2016-08-19T15:28:11Z
39,042,902
<p>The formatter steps aren't included, but in general you want time or datetime objects (<code>import time</code> or <code>import datetime</code>).</p> <p>You should probably be looking at something like <code>datetime.strptime(date_string, format)</code> from the datetime module to take your text string and convert it to a date/time object.</p> <p><a href="https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior" rel="nofollow">https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior</a></p>
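For example, a minimal sketch of that idea, assuming both inputs are already formatted as YYYY-MM-DD strings (the sample dates here are made up):

```python
from datetime import datetime

date_format = "%Y-%m-%d"

# parse the two date strings into datetime objects
submit = datetime.strptime("2016-08-19", date_format)
event = datetime.strptime("2016-09-01", date_format)

# subtracting two datetimes gives a timedelta
delta = event - submit
print(delta.days)  # 13
```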
3
2016-08-19T15:35:11Z
[ "python", "datetime", "zapier" ]
Trying to subtract two dates to find number of days in between
39,042,774
<p>Within Zapier, I have two dates and am trying to find the number of days between them in a Code Step. I run a Formatter Step on each date to output a datetime object in YYYY-MM-DD then run the following code below:</p> <pre><code>submit = input['submit_date'] event = input['event_date'] delta = event-submit numdays = delta.days return {'numdays': numdays} </code></pre> <p>The error I get suggests the two dates that i'm importing are strings, not datetimes.</p> <p>Here is the error:</p> <p><code>Your code had an error! Traceback (most recent call last): File "/tmp/tmp7xyg3Z/usercode.py", line 10, in the_function delta = event-submit TypeError: unsupported operand type(s) for -: 'unicode' and 'unicode'</code></p> <p>Anyone know what i'm doing wrong or a better way to accomplish this task?</p>
0
2016-08-19T15:28:11Z
39,043,029
<p>I have 2 methods, <code>get_days(start_date, end_date)</code> and <code>increment_a_day(str_date)</code>.</p> <p>Invoke <code>get_days()</code> and it returns a list of days (format is yyyymmdd); you can provide your start date and end date and print <code>len(get_days(start_date, end_date)) - 1</code>.</p> <p>For example:</p> <pre><code>import datetime def get_days(start_date, end_date): """returns a list of days""" all_days = [] starts = start_date all_days.append(starts) while int(starts) &lt; int(end_date): starts = increment_a_day(starts) all_days.append(starts) return all_days def increment_a_day(str_date): """increments a day for given date string in format yyyymmdd""" year = int(str_date[:4]) month = int(str_date[4:6]) day = int(str_date[-2:]) now = datetime.date(year, month, day) delta = datetime.timedelta(days=1) return (now + delta).strftime('%Y%m%d') #yyyymmdd </code></pre> <p>You only need to do the following:</p> <pre><code>days = get_days("20160601", "20160602") delta = len(days) - 1 print delta </code></pre>
0
2016-08-19T15:41:44Z
[ "python", "datetime", "zapier" ]
Trying to subtract two dates to find number of days in between
39,042,774
<p>Within Zapier, I have two dates and am trying to find the number of days between them in a Code Step. I run a Formatter Step on each date to output a datetime object in YYYY-MM-DD then run the following code below:</p> <pre><code>submit = input['submit_date'] event = input['event_date'] delta = event-submit numdays = delta.days return {'numdays': numdays} </code></pre> <p>The error I get suggests the two dates that i'm importing are strings, not datetimes.</p> <p>Here is the error:</p> <p><code>Your code had an error! Traceback (most recent call last): File "/tmp/tmp7xyg3Z/usercode.py", line 10, in the_function delta = event-submit TypeError: unsupported operand type(s) for -: 'unicode' and 'unicode'</code></p> <p>Anyone know what i'm doing wrong or a better way to accomplish this task?</p>
0
2016-08-19T15:28:11Z
39,043,104
<p>If they are strings, you can either take the input right there or pass it in; the following code will return the days:</p> <pre><code>def h(): from datetime import datetime date_format = "%m/%d/%Y" a = datetime.strptime(input('Enter first date in MM/DD/YY format '), date_format) b = datetime.strptime(input('Enter second date in MM/DD/YY format '), date_format) delta = b - a print (delta.days) return int(delta.days) </code></pre> <p>or</p> <pre><code>a='8/18/2008' b='9/26/2008' def h(c,d): from datetime import datetime date_format = "%m/%d/%Y" a = datetime.strptime(c, date_format) b = datetime.strptime(d, date_format) delta = b - a print (delta.days) return int(delta.days) </code></pre>
0
2016-08-19T15:46:30Z
[ "python", "datetime", "zapier" ]
Is there a random letter generator with a range?
39,042,775
<p>I was wondering if there is a random letter generator in Python that takes a range as parameter? For example, if I wanted a range between A and D? I know you can use this as a generator:</p> <pre><code>import random import string random.choice(string.ascii_letters) </code></pre> <p>But it doesn't allow you to supply it a range.</p>
5
2016-08-19T15:28:15Z
39,042,805
<p>ASCII characters are represented by numbers, so you can generate a random number in the range you prefer and then cast it to a char.</p>
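A small sketch of that idea, assuming the range of letters is inclusive on both ends ('A' to 'D' is just the example from the question):

```python
import random

def random_letter(start, stop):
    # pick a random code point between the two letters (inclusive)
    # and cast it back to a character
    return chr(random.randint(ord(start), ord(stop)))

print(random_letter('A', 'D'))
```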
4
2016-08-19T15:29:59Z
[ "python" ]
Is there a random letter generator with a range?
39,042,775
<p>I was wondering if there is a random letter generator in Python that takes a range as parameter? For example, if I wanted a range between A and D? I know you can use this as a generator:</p> <pre><code>import random import string random.choice(string.ascii_letters) </code></pre> <p>But it doesn't allow you to supply it a range.</p>
5
2016-08-19T15:28:15Z
39,042,831
<p>The function <code>choice</code> takes a general sequence.</p> <blockquote> <p>Return a random element from the non-empty sequence seq.</p> </blockquote> <p>In particular</p> <pre><code>random.choice(['A', 'B', 'C', 'D']) </code></pre> <p>will do what you want.</p> <p>You can easily generate the range programatically:</p> <pre><code>random.choice([chr(c) for c in xrange(ord('A'), ord('D')+1)]) </code></pre>
3
2016-08-19T15:31:44Z
[ "python" ]
Is there a random letter generator with a range?
39,042,775
<p>I was wondering if there is a random letter generator in Python that takes a range as parameter? For example, if I wanted a range between A and D? I know you can use this as a generator:</p> <pre><code>import random import string random.choice(string.ascii_letters) </code></pre> <p>But it doesn't allow you to supply it a range.</p>
5
2016-08-19T15:28:15Z
39,042,834
<p>You can slice <code>string.ascii_letters</code>:</p> <pre><code>random.choice(string.ascii_letters[0:4]) </code></pre>
6
2016-08-19T15:31:52Z
[ "python" ]
Is there a random letter generator with a range?
39,042,775
<p>I was wondering if there is a random letter generator in Python that takes a range as parameter? For example, if I wanted a range between A and D? I know you can use this as a generator:</p> <pre><code>import random import string random.choice(string.ascii_letters) </code></pre> <p>But it doesn't allow you to supply it a range.</p>
5
2016-08-19T15:28:15Z
39,042,843
<p>String slicing would be my solution to this</p> <pre><code>random.choice(string.ascii_letters[:4]) </code></pre> <p>This would randomly choose one of the first 4 letters of the alphabet. Obviously 4 could be any value.</p>
3
2016-08-19T15:32:10Z
[ "python" ]
Is there a random letter generator with a range?
39,042,775
<p>I was wondering if there is a random letter generator in Python that takes a range as parameter? For example, if I wanted a range between A and D? I know you can use this as a generator:</p> <pre><code>import random import string random.choice(string.ascii_letters) </code></pre> <p>But it doesn't allow you to supply it a range.</p>
5
2016-08-19T15:28:15Z
39,042,919
<p>You can trivially define such a thing (note the <code>+ 1</code>, since slicing excludes the end index and you want the end letter included):</p> <pre><code>def letters_in_range(start_letter, end_letter): start_index = string.ascii_letters.find(start_letter) end_index = string.ascii_letters.find(end_letter) assert start_index != -1 assert end_index != -1 assert start_letter &lt; end_letter return string.ascii_letters[start_index:end_index + 1] </code></pre> <p>With the above:</p> <pre><code> random.choice(letters_in_range('A', 'D')) </code></pre>
1
2016-08-19T15:35:57Z
[ "python" ]
Is there a random letter generator with a range?
39,042,775
<p>I was wondering if there is a random letter generator in Python that takes a range as parameter? For example, if I wanted a range between A and D? I know you can use this as a generator:</p> <pre><code>import random import string random.choice(string.ascii_letters) </code></pre> <p>But it doesn't allow you to supply it a range.</p>
5
2016-08-19T15:28:15Z
39,042,937
<pre><code>&gt;&gt;&gt; random.choice('ABCD') 'C' </code></pre> <p>Or if it's a larger range so you don't want to type them all out:</p> <pre><code>&gt;&gt;&gt; chr(random.randint(ord('I'), ord('Q'))) 'O' </code></pre>
2
2016-08-19T15:36:55Z
[ "python" ]
Is there a random letter generator with a range?
39,042,775
<p>I was wondering if there is a random letter generator in Python that takes a range as parameter? For example, if I wanted a range between A and D? I know you can use this as a generator:</p> <pre><code>import random import string random.choice(string.ascii_letters) </code></pre> <p>But it doesn't allow you to supply it a range.</p>
5
2016-08-19T15:28:15Z
39,043,070
<p>Using your original way of getting a random letter you can simply make a function as such (the loop retries while the letter is outside the range, so the condition needs <code>or</code>, not <code>and</code>):</p> <pre><code>def getLetter(start, stop): letter = random.choice(string.ascii_letters) while letter &lt; start or letter &gt; stop: letter = random.choice(string.ascii_letters) return letter </code></pre> <p>Calling it like <code>getLetter('a', 'd')</code></p>
1
2016-08-19T15:44:19Z
[ "python" ]
How to sum within a groupby over with both numeric and non-numeric data types
39,042,785
<p>Consider the following <code>df</code></p> <pre><code>df = pd.DataFrame([ ['X', 'a', 0, 1], ['X', 'b', 2, 3], ['X', 'c', 4, 5], ['Y', 'a', 6, 7], ['Y', 'b', 8, 9], ['Y', 'c', 10, 11], ], columns=['One', 'Two', 'Three', 'Four']) df </code></pre> <p><a href="http://i.stack.imgur.com/6sh2W.png" rel="nofollow"><img src="http://i.stack.imgur.com/6sh2W.png" alt="enter image description here"></a></p> <pre><code>df.dtypes One object Two object Three int64 Four int64 dtype: object </code></pre> <p>When I <code>df.sum()</code> I get what <code>sum</code> would do over the each of the columns.</p> <pre><code>df.sum() One XXXYYY Two abcabc Three 30 Four 36 dtype: object </code></pre> <p>However, I'd like to perform this within a <code>groupby</code>. I'd expect this to work</p> <pre><code>df.groupby('One').sum() </code></pre> <p><a href="http://i.stack.imgur.com/grsS1.png" rel="nofollow"><img src="http://i.stack.imgur.com/grsS1.png" alt="enter image description here"></a></p> <p>But it appears to only sum over numeric columns. What is a convenient way to perform the same summation as <code>df.sum()</code>?</p> <p>I'd expect this result</p> <pre><code>pd.concat([df.set_index('One').loc[i].sum() for i in ['X', 'Y']], axis=1, keys=['X', 'Y']).T.rename_axis('One') </code></pre> <p><a href="http://i.stack.imgur.com/RTwIR.png" rel="nofollow"><img src="http://i.stack.imgur.com/RTwIR.png" alt="enter image description here"></a></p>
4
2016-08-19T15:29:07Z
39,042,867
<p>It's possible to achieve your desired result by using <code>agg</code> with a <code>lambda</code>:</p> <pre><code>In [6]: df.groupby('One').agg(lambda x: x.sum()) Out[6]: Two Three Four One X abc 6 9 Y abc 24 27 </code></pre>
4
2016-08-19T15:33:14Z
[ "python", "pandas", "group-by" ]
pandas fillna with multiple columns
39,042,872
<p>I used <code>pandas.concat</code> to join several <code>Dataframe</code> together and would like to fill the <code>NaN</code> values in one column with the <code>value</code> of several other columns.</p> <p>To get the below table, I did: <code>z = pandas.concat([df1, df2, df3], axis=0, join='outer')</code></p> <p><strong>concatenated table</strong></p> <pre><code> C_header1 C_header2 C_header3 Column1 Column2 Column3 0 Item1 NaN NaN Values Values Values 1 Item2 NaN NaN Values Values Values 2 Item3 NaN NaN Values Values Values 3 Item4 NaN NaN Values Values Values 4 Item5 NaN NaN Values Values Values 5 NaN Item6 NaN Values Values Values 6 NaN Item7 NaN Values Values Values 7 NaN Item8 NaN Values Values Values 8 NaN NaN Item9 Values Values Values 9 NaN NaN Item10 Values Values Values </code></pre> <p>Currently, I am running the below code to put <code>C_header1, C_header2, C_header3</code> together</p> <pre><code>z['C_header1'].fillna(z['C_header2'], inplace=True) z['C_header1'].fillna(z['C_header3'], inplace=True) z.drop(['C_header2', 'C_header3'], inplace=True) </code></pre> <p><strong>To get</strong></p> <pre><code> C_header1 Column1 Column2 Column3 0 Item1 Values Values Values 1 Item2 Values Values Values 2 Item3 Values Values Values 3 Item4 Values Values Values 4 Item5 Values Values Values 5 Item6 Values Values Values 6 Item7 Values Values Values 7 Item8 Values Values Values 8 Item9 Values Values Values 9 Item10 Values Values Values </code></pre> <p>Is there a more pythonic way of doing this? Feels like i'm missing something </p>
1
2016-08-19T15:33:46Z
39,045,739
<p>If you make sure that your <code>df1</code>, <code>df2</code> and <code>df3</code> have the same columns, the concatenation is done as desired.</p> <p>In this case, you could do:</p> <pre><code>df2.columns = df1.columns df3.columns = df1.columns # at this point concat will give you the desired result pandas.concat([df1, df2, df3]) </code></pre> <p><strong>NOTE</strong> Generally speaking, you're better off using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html" rel="nofollow">pandas.DataFrame.rename</a> to rename your columns.</p>
-1
2016-08-19T18:36:12Z
[ "python", "pandas" ]
PIL in Python complains that there are no 'size' attributes to a PixelAccess, what am I doing wrong?
39,042,950
<p>I am trying to program an application that will loop through every pixel of a given image, get the rgb value for each, add it to a dictionary (along with amount of occurences) and then give me rundown of the most used rgb values.</p> <p>However, to be able to loop through images, I need to be able to fetch their size; this proved to be no easy task. </p> <p>According to the <a href="http://effbot.org/imagingbook/image.htm#tag-Image.Image.size" rel="nofollow">PIL documentation</a>, the Image object should have an attribute called 'size'. When I try to run the program, I get this error:</p> <pre><code>AttributeError: 'PixelAccess' object has no attribute 'size' </code></pre> <p>this is the code:</p> <pre><code>from PIL import Image import sys ''' TODO: - Get an image - Loop through all the pixels and get the rgb values - append rgb values to dict as key, and increment value by 1 - return a "graph" of all the colours and their occurances TODO LATER: - couple similar colours together ''' SIZE = 0 def load_image(path=sys.argv[1]): image = Image.open(path) im = image.load() SIZE = im.size return im keyValue = {} # set the image object to variable image = load_image() print SIZE </code></pre> <p>Which makes no sense at all. What am I doing wrong?</p>
2
2016-08-19T15:37:46Z
39,043,060
<p><code>image.load</code> returns a pixel access object that does not have a <code>size</code> attribute</p> <pre><code>def load_image(path=sys.argv[1]): image = Image.open(path) im = image.load() SIZE = image.size return im </code></pre> <p>is what you want</p> <p><a href="http://effbot.org/imagingbook/image.htm" rel="nofollow">documentation</a> for PIL </p>
2
2016-08-19T15:43:57Z
[ "python", "image", "python-imaging-library" ]
PIL in Python complains that there are no 'size' attributes to a PixelAccess, what am I doing wrong?
39,042,950
<p>I am trying to program an application that will loop through every pixel of a given image, get the rgb value for each, add it to a dictionary (along with amount of occurences) and then give me rundown of the most used rgb values.</p> <p>However, to be able to loop through images, I need to be able to fetch their size; this proved to be no easy task. </p> <p>According to the <a href="http://effbot.org/imagingbook/image.htm#tag-Image.Image.size" rel="nofollow">PIL documentation</a>, the Image object should have an attribute called 'size'. When I try to run the program, I get this error:</p> <pre><code>AttributeError: 'PixelAccess' object has no attribute 'size' </code></pre> <p>this is the code:</p> <pre><code>from PIL import Image import sys ''' TODO: - Get an image - Loop through all the pixels and get the rgb values - append rgb values to dict as key, and increment value by 1 - return a "graph" of all the colours and their occurances TODO LATER: - couple similar colours together ''' SIZE = 0 def load_image(path=sys.argv[1]): image = Image.open(path) im = image.load() SIZE = im.size return im keyValue = {} # set the image object to variable image = load_image() print SIZE </code></pre> <p>Which makes no sense at all. What am I doing wrong?</p>
2
2016-08-19T15:37:46Z
39,043,091
<p>Note that <a href="https://pypi.python.org/pypi/PIL" rel="nofollow">PIL (Python Imaging Library)</a> is deprecated and replaced by <a href="https://pillow.readthedocs.io" rel="nofollow">Pillow</a>.</p> <p>The problem is about the <a href="http://pillow.readthedocs.io/en/3.3.x/reference/PixelAccess.html" rel="nofollow"><strong>PixelAccess</strong></a> class, not the <strong>Image</strong> class.</p>
1
2016-08-19T15:45:43Z
[ "python", "image", "python-imaging-library" ]
regular expression to match phone number but not zipcodes
39,042,969
<p>I have written below regex</p> <pre><code>phone_regex = re.compile(r'(\+?\(?\+?\d{1,}\)?[-\s\.]?\d{1,}[-\s\.]?\d{1,}[-\s\.]?\d{1,}[-\s\.]?\d{1,}[-s\.]?)') </code></pre> <p>It matches &amp; identifies phone numbers along with country codes eg <code>+91 9561217616</code>,<code>(+91) 9561 217 616</code>,<code>+(91) 9561217616</code>,<code>+91-9833775049</code> but it also match <code>431003</code> (zipcode) can someone help out to write regex to match only phones but not zipcodes</p>
-3
2016-08-19T15:38:54Z
39,043,430
<p>You need to specify the number of matches, e.g. <code>{m, n}</code> or <code>{m}</code>, like:</p> <pre><code>import re regexp = r''' # matches phones, but not zipcodes. Use with VERBOSE regexps ^ # start of the string \s*? # whitespaces, etc \+? # + char (optional) \s*? # whitespaces, etc \(? # ( char (optional) ([0-9]{3}) # 3 numbers \)? # ) char (optional) ( # group start [\s-]? # whitespace, - char (optionals) [0-9] # 1 number ){7} # matches exactly 7 numbers \s*? # whitespaces, etc $ # end of the string ''' phones = ['(123) 456 7899', '(123)-456-7899', '+1234567899', '+123 456-7899', '12-34567899', '+123456789'] # these 2 don't match matches = [bool(re.match(regexp, num, re.VERBOSE)) for num in phones] print(matches) # gives [True, True, True, True, False, False] </code></pre> <p>Working with VERBOSE regexps gives you much easier debugging.</p>
1
2016-08-19T16:03:28Z
[ "python", "regex" ]
regular expression to match phone number but not zipcodes
39,042,969
<p>I have written below regex</p> <pre><code>phone_regex = re.compile(r'(\+?\(?\+?\d{1,}\)?[-\s\.]?\d{1,}[-\s\.]?\d{1,}[-\s\.]?\d{1,}[-\s\.]?\d{1,}[-s\.]?)') </code></pre> <p>It matches &amp; identifies phone numbers along with country codes eg <code>+91 9561217616</code>,<code>(+91) 9561 217 616</code>,<code>+(91) 9561217616</code>,<code>+91-9833775049</code> but it also match <code>431003</code> (zipcode) can someone help out to write regex to match only phones but not zipcodes</p>
-3
2016-08-19T15:38:54Z
39,043,478
<p><code>(?:\+\d\d|\(\+\d\d\)|\+\(\d\d\))(?:\s+|-)\d{4}(?:\s+|-)?\d{3}(?:\s+|-)?\d{3}$</code></p> <ul> <li><code>(?:\+\d\d|\(\+\d\d\)|\+\(\d\d\))</code> +00 or (+00) or +(00)</li> <li><code>(?:\s+|-)</code> a gap (at least one space or a single dash)</li> <li><code>\d{4}</code> 4 numbers (0000)</li> <li><code>(?:\s+|-)?</code> an optional gap (at least one space, a single dash, or nothing at all)</li> <li><code>\d{3}</code> 3 numbers (000)</li> <li><code>(?:\s+|-)?</code> an optional gap</li> <li><code>\d{3}</code> 3 numbers (000)</li> </ul> <p>As a zip code won't fulfill all these requirements, it won't pass the regex.</p>
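A quick sanity check of this pattern against the samples from the question (a sketch; note that `re.match` anchors at the start of the string, and the trailing `$` anchors the end):

```python
import re

pattern = r'(?:\+\d\d|\(\+\d\d\)|\+\(\d\d\))(?:\s+|-)\d{4}(?:\s+|-)?\d{3}(?:\s+|-)?\d{3}$'

phones = ['+91 9561217616', '(+91) 9561 217 616', '+(91) 9561217616', '+91-9833775049']
zipcode = '431003'

# every phone number should match; the zipcode should not
results = {s: bool(re.match(pattern, s)) for s in phones + [zipcode]}
print(results)
```

The zipcode is rejected because it starts with a digit rather than one of the three country-code forms.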
1
2016-08-19T16:06:43Z
[ "python", "regex" ]
Pandas: Type conversion using `df.loc` from datetime64 to int
39,042,997
<p>When trying to re-assign certain values in a column using <code>df.loc[]</code> I am getting a strange type conversion error converting datetimes to integers.</p> <p><strong>Minimal Example:</strong></p> <pre><code>import numpy as np import pandas as pd import datetime d = pd.DataFrame(zip(['12/6/2015', np.nan], [1, 2]), columns=list('ab')) print(d) d.loc[pd.notnull(d.a), 'a'] = d.a[pd.notnull(d.a)].apply(lambda x: datetime.datetime(2015,12,6)) print(d) </code></pre> <p><strong>Full Example:</strong></p> <p>Here is my dataframe (contains NaNs):</p> <pre><code>&gt;&gt;&gt; df.head() prior_ea_date quarter 0 12/31/2015 Q2 1 12/31/2015 Q3 2 12/31/2015 Q3 3 12/31/2015 Q3 4 12/31/2015 Q2 &gt;&gt;&gt; df.prior_ea_date 0 12/31/2015 1 12/31/2015 ... 341486 1/19/2016 341487 1/6/2016 Name: prior_ea_date, dtype: object </code></pre> <p>I want to run the following line of code:</p> <pre><code>df.loc[pd.notnull(df.prior_ea_date), 'prior_ea_date'] = df.prior_ea_date[pd.notnull(df.prior_ea_date)].apply(dt, usa=True) </code></pre> <p>where <code>dt</code> is a string to datetime parser, which when run normally gives:</p> <pre><code>&gt;&gt;&gt; df.prior_ea_date[pd.notnull(df.prior_ea_date)].apply(dt, usa=True).head() 0 2015-12-31 1 2015-12-31 2 2015-12-31 3 2015-12-31 4 2015-12-31 Name: prior_ea_date, dtype: datetime64[ns] </code></pre> <p>However, when I run the <code>.loc[]</code> I get the following:</p> <pre><code>&gt;&gt;&gt; df.loc[pd.notnull(df.prior_ea_date), 'prior_ea_date'] = df.prior_ea_date[pd.notnull(df.prior_ea_date)].apply(dt, usa=True) &gt;&gt;&gt; df.head() prior_ea_date quarter 0 1451520000000000000 Q2 1 1451520000000000000 Q3 2 1451520000000000000 Q3 3 1451520000000000000 Q3 4 1451520000000000000 Q2 </code></pre> <p>and it has converted my datetime objects to integers. 
</p> <ul> <li><strong>Why is this happening?</strong></li> <li><strong>How do I avoid this behavior?</strong></li> </ul> <p>I have managed to build a temporary work around, so I while any one-line hacks would be appreciated, I would like a pandas style solution.</p> <p>Thanks.</p>
3
2016-08-19T15:40:00Z
39,044,349
<p>We'll start with the second question: <strong>how to avoid this behavior?</strong></p> <p>My understanding is that you want to convert the <code>prior_ea_date</code> column to datetime objects. The Pandas-style approach is to use <code>to_datetime</code>:</p> <pre><code>df.prior_ea_date = pd.to_datetime(df.prior_ea_date, format='%m/%d/%Y') df.prior_ea_date 0 2015-12-31 1 2015-12-31 2 2015-12-31 3 2015-12-31 4 2015-12-31 5 NaT Name: prior_ea_date, dtype: datetime64[ns] </code></pre> <p>Your first question is more interesting: <strong>why is this happening?</strong></p> <p>What I think is happening is that when you use <code>df.loc[pd.notnull(df.prior_ea_date), 'prior_ea_date'] = ....</code> you are setting values on a slice of the <code>prior_ea_date</code> column instead of overwriting the whole column. In this case, Pandas performs a tacit type cast to convert the right-hand side to the type of the original <code>prior_ea_date</code> column. Notice that those long integers are epoch times for the wanted dates.</p> <p>We can see this with your minimal example:</p> <pre><code>## # Example of type casting on slice ## d = pd.DataFrame(zip(['12/6/2015', np.nan], [1, 2]), columns=list('ab')) # Column-a is dtype: object d.a 0 12/6/2015 1 NaN Name: a, dtype: object d.loc[pd.notnull(d.a), 'a'] = d.a[pd.notnull(d.a)].apply(lambda x: datetime.datetime(2015,12,6)) # Column-a is still dtype: object d.a 0 1449360000000000000 1 NaN Name: a, dtype: object ## # Example of overwriting whole column ## d = pd.DataFrame(zip(['12/6/2015', np.nan], [1, 2]), columns=list('ab')) d.a = pd.to_datetime(d.a, format='%m/%d/%Y') # Column-a dtype is now datetime d.a 0 2015-12-06 1 NaT Name: a, dtype: datetime64[ns] </code></pre> <p><strong>FURTHER DETAILS:</strong> </p> <p>In response to the OP's request for more under-the-hood details, I traced the call stack in PyCharm to learn what is going on. 
The TLDR answer is: ultimately, the unexpected behavior of casting <code>datetime</code> dtypes into integers is due to Numpy's internal behavior.</p> <pre><code>d = np.datetime64('2015-12-30T16:00:00.000000000-0800') d.astype(np.dtype(object)) #&gt;&gt;&gt; 1451520000000000000L </code></pre> <p><em>...could you elaborate on why this type casting is happening when using .loc and how to avoid it...</em></p> <p>The intuition in my original answer is correct. It is due to the fact that the datetime objects are being cast into generic <code>object</code> types. This is because setting on the <code>loc</code> slice preserves the dtype of the column having the values set. </p> <p>When setting values with <code>loc</code>, Pandas uses the <code>_LocationIndexer</code> in the <a href="https://github.com/pydata/pandas/blob/1f883121c47940cf51fd33f40e64d18908153c71/pandas/core/indexing.py" rel="nofollow"><code>indexing</code> module</a>. After a great deal of checking dimensions and conditions, the line <code>self.obj._data = self.obj._data.setitem(indexer, value)</code> actually sets the new values.</p> <p>Stepping into that line, we find the moment the datetimes are cast into integers, at <a href="https://github.com/pydata/pandas/blob/ce61b3f1c85c1541cfbe1b3bb594431b38689946/pandas/core/internals.py" rel="nofollow">line 742 of <code>pandas.core.internals.py</code></a>: </p> <pre><code>values[indexer] = value </code></pre> <p>In this statement, <code>values</code> is a Numpy <code>ndarray</code> of object dtypes. This is the data from the left-hand side of the original assignment. It contains the date strings. The <code>indexer</code> is just a tuple. And <code>value</code> is an <code>ndarray</code> of Numpy <code>datetime64</code> objects. </p> <p>This operation uses Numpy's own <code>setitem</code> methods, which fill individual "cells" with calls to <code>np.asarray(value, self.dtype)</code>. 
In your case, <code>self.dtype</code> is the type of the left-hand side: <code>object</code>, and the value parameters are the individual datetimes.</p> <pre><code>np.asarray(d, np.dtype(object)) #&gt;&gt;&gt; array(1451520000000000000L, dtype=object) </code></pre> <p><em>...and how to avoid it...</em><br> Don't use <code>loc</code>. Overwrite the whole column as in my example above.</p> <p><em>...I thought having the column with dtype=object would avoid pandas assuming the object type. And either way it seems unexpected to me why it should be converting it to an int when the original column contains strings and NaNs.</em></p> <p>Ultimately, the behavior is due to how Numpy implements casting from datetime to object. Now why does Numpy do it that way? I don't know. That is a good new question and a whole other rabbit hole.</p>
3
2016-08-19T17:02:43Z
[ "python", "datetime", "pandas", "type-conversion" ]
How do I write a python dictionary to an excel file?
39,043,010
<p>I'm trying to write a dictionary with randomly generated strings and numbers to an excel file. I've almost succeeded but with one minor problem. The structure of my dictionary is as follows:</p> <pre><code>Age: 11, Names Count: 3, Names: nizbh,xyovj,clier </code></pre> <p>This dictionary was generated from data obtained through a text file. It aggregates all the contents based on their age and if two people have the same age, it groups them into one list. I'm trying to write this data on to an excel file. I've written this piece of code so far.</p> <pre><code>import xlsxwriter lines = [] workbook = xlsxwriter.Workbook('demo.xlsx') worksheet = workbook.add_worksheet() with open ("input2.txt") as input_fd: lines = input_fd.readlines() age_and_names = {} for line in lines: name,age = line.strip().split(",") if not age in age_and_names: age_and_names[age]=[] age_and_names[age].append(name) print age_and_names for key in sorted(age_and_names): print "Age: {}, Names Count: {}, Names: {}".format(key, len(age_and_names[key]), ",".join(age_and_names[key])) row=0 col=0 for key in sorted(age_and_names):#.keys(): row += 1 worksheet.write(row, col, key) for item in age_and_names[key]: worksheet.write(row, col+1, len(age_and_names[key])) worksheet.write(row, col+1, item) row+=1 workbook.close() </code></pre> <p>But what this is actually doing is this (in the excel file):</p> <pre><code>11 nizbh xyovj clier </code></pre> <p>What should I do to make it appear like this instead?</p> <pre><code>Age Name Count Names 11 3 nizbh, xyovj, clier </code></pre>
4
2016-08-19T15:40:51Z
39,043,950
<p>The problem was indeed in the two for loops there. I meddled and played around with them until I arrived at the answer. They're working fine now. Thank you guys!</p> <p>Replace the for loops at the end with this:</p> <pre><code>for key in sorted(age_and_names):#.keys(): row+=1 worksheet.write(row, col, key) worksheet.write(row, col+1, len(age_and_names[key])) worksheet.write(row, col+2, ",".join(age_and_names[key])) </code></pre>
2
2016-08-19T16:37:25Z
[ "python", "excel", "dictionary" ]
How to solve a pep8 Error thrown during a Maven build
39,043,035
<p>I've been trying to build a work project using maven and it's failing with the following error - </p> <pre><code>"[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.4.0:exec (pep8) on project xxx_1234: Command execution failed. Cannot run program "pep8" (in directory "C:\workspace\ projects\xxx\xxxx_xxxx\xxx"): CreateProcess error=2, The system cannot find the file specified -&gt; [Help 1]". </code></pre> <p>The entire project consists of mainly java code but also contains python and shell scripts.</p> <p>From what I've gathered from researching this issue, pep8 is a plugin for checking python code for coding standards however, given this project was checked out of gerrit repository and built as needed by other colleagues without issue, my suspicions are that this error is more to do with my own local environment.</p> <p>Has anybody else come across this error during a maven build or can anybody shed some light on it?</p> <p>Thanks in advance,</p> <p>M.</p>
0
2016-08-19T15:42:01Z
39,043,261
<p>So this probably means that <code>pep8</code> isn't installed. The easiest way to fix this is to install it with <code>pip</code>.</p> <pre><code>$ pip install pep8 </code></pre> <p>Try running that in a terminal, then your Maven build should work just fine.</p>
0
2016-08-19T15:54:36Z
[ "java", "python", "maven-3" ]
ImportError: No module named _backend_gdk on Jupyter notebook with Conda
39,043,041
<p>Hello guys getting the following error on my jupyter notebook -</p> <p>Running a codeblock with <code>%matplotlib gtk</code> results in the exception</p> <pre><code>/usr/local/lib/python2.7/site-packages/matplotlib/backends/backend_gdk.py in &lt;module&gt;() 31 from matplotlib.mathtext import MathTextParser 32 from matplotlib.transforms import Affine2D ---&gt; 33 from matplotlib.backends._backend_gdk import pixbuf_get_pixels_array 34 35 backend_version = "%d.%d.%d" % gtk.pygtk_version ImportError: No module named _backend_gdk </code></pre> <p>However Installing the <code>pygtk</code> package with homebrew did succeeded. Any ideas? I'm using the anaconda distribution of python</p>
0
2016-08-19T15:42:39Z
39,620,224
<p>Take a look at this thread:</p> <p><a href="http://stackoverflow.com/questions/14346090/importerror-no-module-named-backend-gdk">ImportError: No module named _backend_gdk</a></p> <p>There, installing python-gtk2-dev is recommended.</p> <pre><code>sudo apt-get install python-gtk2-dev </code></pre>
0
2016-09-21T15:05:25Z
[ "python", "matplotlib", "jupyter" ]
Packages from private pypi don't find requirements
39,043,046
<p>I'm building a private pypi server and it's working, but the packages that I put there has some requirements from official pypi, but when I try to install my private package, the <code>install_requires</code> breaks trying to find the external dependencies in my private repository (I saw this in the log).</p> <p>When I generate the package locally and try to install like</p> <pre><code>pip install -U package.tar.gz </code></pre> <p>it works and the dependencies are found in the official pypi repository.</p> <p>What do I miss?</p> <p>My process looks like:</p> <pre><code>python setup.py sdist upload -r http://127.0.0.1:8000/sample/ pip install -i http://127.0.0.1:8000/pypi/ </code></pre> <p>And I'm getting:</p> <pre><code>Downloading/unpacking mypackage http://127.0.0.1:8000/pypi/mypackage/ uses an insecure transport scheme (http). Consider using https if 127.0.0.1:8000 has it available Downloading mypackage-1.0.tar.gz (399kB): 399kB downloaded Running setup.py (path:/tmp/pip-build-LjFfGj/mypackage/setup.py) egg_info for package mypackage Downloading/unpacking feedparser (from mypackage) http://127.0.0.1:8000/pypi/feedparser/ uses an insecure transport scheme (http). Consider using https if 127.0.0.1:8000 has it available Could not find any downloads that satisfy the requirement feedparser (from mypackage) Cleaning up... No distributions at all found for feedparser (from mypackage) Storing debug log for failure in /home/rodolpho/.pip/pip.log </code></pre> <p>And in the log I see:</p> <pre><code>Downloading/unpacking feedparser (from mypackage) Getting page http://127.0.0.1:8000/pypi/feedparser/ Could not fetch URL http://127.0.0.1:8000/pypi/feedparser/: 404 Client Error: Not Found </code></pre>
0
2016-08-19T15:42:58Z
39,044,483
<p>Add <code>--extra-index-url https://pypi.python.org/pypi</code> to your <code>pip install</code> command. See the documentation <a href="https://pip.pypa.io/en/stable/reference/pip_wheel/#cmdoption--extra-index-url" rel="nofollow">here</a>.</p>
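For example, with the private index URL from the question, the one-off command would be `pip install -i http://127.0.0.1:8000/pypi/ --extra-index-url https://pypi.python.org/pypi mypackage`. The same fallback can be made permanent in a pip configuration file, e.g. (a sketch, reusing the question's URL):

```ini
# ~/.pip/pip.conf (Linux)
[global]
index-url = http://127.0.0.1:8000/pypi/
extra-index-url = https://pypi.python.org/pypi
```

With this in place, pip checks the private index first and falls back to the official PyPI for dependencies like feedparser.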
1
2016-08-19T17:11:09Z
[ "python", "python-2.7", "pip", "setuptools" ]
Folder picker in HTML with Flask for Uploads
39,043,087
<p>I am running a Flask app where the user uploads a file and must select the root folder path of where to upload the file on a network drive. This path is an IIS available network path and is also a network drive on all user's computers.</p> <p>I know this can't be done with pure HTML due to security but wanted to know if there was a way around this with Flask. The goal is to use Python to move the upload file to the choosen folder path. </p> <p>I have tried:</p> <pre><code>&lt;form&gt;&lt;input type="file" name=dir webkitdirectory directory multiple/&gt;&lt;/form&gt; </code></pre> <p>But this only works in Chrome. With the path choosen by the user I can pass this onto Python to copy the upload file to there.</p>
0
2016-08-19T15:45:28Z
39,044,636
<p><strong>Python</strong> runs on your server, therefore it will not be possible to use it to move the files on the client side. If you think about it: even if you could somehow (magically) send Python commands to the clients to move files, do you know if they even have Python installed to interpret your commands?</p> <p><strong>Javascript</strong>, on the other hand, runs on the client side and is what would be used to achieve this. However, like you said, for security reasons modern browsers won't allow it. If they did, any website could potentially see your whole file system.</p> <p>Here is an <a href="https://blogs.msdn.microsoft.com/ie/2008/07/02/ie8-security-part-v-comprehensive-protection/" rel="nofollow">article</a> that explains a bit why. Look up the File Upload Control section of it. Hope this makes things a bit clearer.</p> <p><strong>EDIT</strong>: after seeing your comment, you could achieve that using <code>os.walk</code>. Beware, it could be slow.</p> <pre><code>import os for root, dirs, files in os.walk(rootPath): # for example "C:/Users/" for file in files: if file == wantedFile: print(os.path.join(root, file)) break </code></pre>
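To make the `os.walk` idea concrete, here is a self-contained variant that builds a throwaway directory tree and searches it (the directory layout and file name are invented for the demonstration):

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
try:
    # build a small tree: root/a/b/wanted.txt
    os.makedirs(os.path.join(root, "a", "b"))
    target = os.path.join(root, "a", "b", "wanted.txt")
    open(target, "w").close()

    found = None
    for dirpath, dirnames, filenames in os.walk(root):
        if "wanted.txt" in filenames:
            found = os.path.join(dirpath, "wanted.txt")
            break  # stop walking as soon as the file is found
finally:
    shutil.rmtree(root)  # clean up the temporary tree

print(found == target)  # True
```

Breaking out of the walk as soon as a match is found keeps the search from traversing the rest of the tree.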
1
2016-08-19T17:22:13Z
[ "python", "html", "flask" ]
Folder picker in HTML with Flask for Uploads
39,043,087
<p>I am running a Flask app where the user uploads a file and must select the root folder path of where to upload the file on a network drive. This path is an IIS available network path and is also a network drive on all user's computers.</p> <p>I know this can't be done with pure HTML due to security but wanted to know if there was a way around this with Flask. The goal is to use Python to move the upload file to the choosen folder path. </p> <p>I have tried:</p> <pre><code>&lt;form&gt;&lt;input type="file" name=dir webkitdirectory directory multiple/&gt;&lt;/form&gt; </code></pre> <p>But this only works in Chrome. With the path choosen by the user I can pass this onto Python to copy the upload file to there.</p>
0
2016-08-19T15:45:28Z
39,578,792
<p>Due to modern browser limitations I decided to use JSTree as a solution. And it is working very well. It features a tree structure browser. The structure is the result of outputting the folders as JSON. You can add a search bar as well so the user can just type in a folder name to search.<br> Please see <a href="https://www.jstree.com/" rel="nofollow">JSTree https://www.jstree.com/</a></p> <p><strong>How to implement this with Flask</strong></p> <p><strong>HTML/JS:</strong></p> <pre><code> &lt;link rel="stylesheet" type="text/css" href="https://cdnjs.cloudflare.com/ajax/libs/jstree/3.2.1/themes/default/style.min.css"&gt; &lt;div&gt; &lt;input class="search-input form-control" placeholder="Search for folder"&gt;&lt;/input&gt; &lt;/div&gt; &lt;script id="jstree1" name="jstree1"&gt; /*Search and JS Folder Tree*/ $(function () { $(".search-input").keyup(function () { var searchString = $(this).val(); console.log(searchString); $('#container').jstree('search', searchString); }); $('#container').jstree({ 'core': { "themes": { "name": "default" , "dots": true , "icons": true } , 'data': { 'url': "static/JSONData.json" , 'type': 'GET' , 'dataType': 'JSON' } } , "search": { "case_insensitive": true , "show_only_matches": true } , "plugins": ["search"] }); }); { /* --- THIS IS FOLDER SELECTOR FOR ID "folderout" --- */ $("#container").on("select_node.jstree", function (evt, data) { var number = data.node.text document.getElementById("folderout").value = number; }); </code></pre> <p><strong>In Flask/WTForms</strong> call on the id "folderout". This will return the path to WTForms when the user clicks the folder. 
</p> <pre><code>folderout = TextField('Folder:', validators=[validators.required()]) </code></pre> <p><strong>To Create the JSON JStree File using Python:</strong></p> <pre><code>import os # path : string to relative or absolute path to be queried # subdirs: tuple or list containing all names of subfolders that need to be # present in the directory def all_dirs_with_subdirs(path, subdirs): # make sure no relative paths are returned, can be omitted path = os.path.abspath(path) result = [] for root, dirs, files in os.walk(path): if all(subdir in dirs for subdir in subdirs): result.append(root) return result def get_directory_listing(path): output = {} output["text"] = path.decode('latin1') output["type"] = "directory" output["children"] = all_dirs_with_subdirs(path, ('Maps', 'Reports')) return output with open('test.json', 'w+') as f: listing = get_directory_listing(".") json.dump(listing, f) </code></pre>
0
2016-09-19T17:24:39Z
[ "python", "html", "flask" ]
Python package installation via yum
39,043,095
<p>What could be the difference of installing python packages via <code>yum</code> vs. via <code>pip</code>on <code>Centos</code> in terms of security? Is it even possible to install a python package only via <code>yum</code>?</p>
1
2016-08-19T15:46:09Z
39,043,207
<p><code>yum</code> can be used to install Python on CentOS.</p> <p><code>pip</code> is used to install Python libraries (packages). Not Python itself.</p> <p>No "security" issue. But with <code>yum</code> you could overwrite your native Python installation, which can be a problem.</p> <p>Instead of that, it is recommended to use <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtualenv</a>.</p>
0
2016-08-19T15:51:36Z
[ "python", "security", "pip", "centos7", "yum" ]
Bokeh proof-of-concept efficient dynamic plot update?
39,043,115
<p>As a demonstration piece for proof-of-concept I've created a Bokeh plot of four OHLC candles.</p> <p>I would like to extend the demo to animate the plot so that the current candlestick would be made to move and update in response to changes to its OHLC data. An important point is that only the components (glyphs?) of the final candle should be updated, NOT the entire plot.</p> <p>The intent is to demonstrate an approach that is computationally lightweight and can be scaled up to a larger chart with many candles and very frequent updates to the 'live' candle.</p> <p>Is anyone able to please outline or demonstrate in code how this could be accomplished?</p> <p>With thanks.</p> <p>Jupyter notebook Python code (Included is 39 seconds of synthetic tick data to facilitate a 4 x 10 second candlestick animation):</p> <pre><code>from ipywidgets import interact import numpy as np from bokeh.plotting import figure, output_notebook, show import datetime as dt import pandas as pd from math import pi datum = dt.datetime.now() time_delta = dt.timedelta(seconds=1) tick_data = [(datum + (time_delta*1), 20), (datum + (time_delta*2), 19.67603177022472), (datum + (time_delta*3), 20.431878609290592), (datum + (time_delta*4), 20.20576131687243), (datum + (time_delta*5), 20.715609070433032), (datum + (time_delta*6), 20.722416975732024), (datum + (time_delta*7), 20.722027468712426), (datum + (time_delta*8), 20.728022489796615), (datum + (time_delta*9), 20.70996968619282), (datum + (time_delta*10), 20.70096021947874), (datum + (time_delta*11), 20.729546647699372), (datum + (time_delta*12), 20.759081440837274), (datum + (time_delta*13), 20.823807346441097), (datum + (time_delta*14), 20.610018797947472), (datum + (time_delta*15), 20.591932124168064), (datum + (time_delta*16), 20.584175853951805), (datum + (time_delta*17), 20.563650527527987), (datum + (time_delta*18), 20.617504106758794), (datum + (time_delta*19), 20.42010872326373), (datum + (time_delta*20), 20.391860996799267), (datum + 
(time_delta*21), 20.3913190739894), (datum + (time_delta*22), 20.34308794391099), (datum + (time_delta*23), 20.2225778590662), (datum + (time_delta*24), 20.47050754458162), (datum + (time_delta*25), 20.83193618858914), (datum + (time_delta*26), 20.80978509373571), (datum + (time_delta*27), 20.80917543057461), (datum + (time_delta*28), 20.859506511541262), (datum + (time_delta*29), 20.596402987349492), (datum + (time_delta*30), 20.644024454266795), (datum + (time_delta*31), 20.58183881183424), (datum + (time_delta*32), 20.59023861538722), (datum + (time_delta*33), 20.454961133973477), (datum + (time_delta*34), 20.495334383308776), (datum + (time_delta*35), 20.483818523599044), (datum + (time_delta*36), 20.593964334705078), (datum + (time_delta*37), 20.91518908025538), (datum + (time_delta*38), 20.87942217480398), (datum + (time_delta*39), 20.772392419854697)] #Prepare to convert fractal tick data into candlesticks candle_delta = dt.timedelta(seconds=10) candle_close_time = datum + candle_delta candle_data = [] #Initialise op, hi, lo, cl = 0, 0, 0, 0 #Convert ticks to candlesticks for (dtval, val) in tick_data: if candle_close_time &lt; dtval: #store the completed candle candle_data.append((candle_close_time, op, hi, lo, cl)) #increment to the next candle candle_close_time += candle_delta #Reset op, hi, lo, cl = 0, 0, 0, 0 if dtval &lt;= candle_close_time and op==0: #set initial values op, hi, lo, cl = val, val, val, val elif dtval &lt;= candle_close_time and op!=0: #update values as appropriate hi = val if val &gt; hi else hi lo = val if val &lt; lo else lo cl = val #final tick if dtval == tick_data[-1][0]: #store the completed candle candle_data.append((candle_close_time, op, hi, lo, cl)) #print(str(candle_data)) df = pd.DataFrame(candle_data, columns=list('dohlc')) #For rectangle positioning mids = (df.o + df.c)/2 #Rectangle height spans = abs(df.c-df.o) #Detect up / down candle body inc = df.c &gt; df.o dec = df.o &gt; df.c #Candle width w = 10 * 500 TOOLS = 
"pan,wheel_zoom,box_zoom,reset,save" p = figure(x_axis_type="datetime", tools=TOOLS, plot_width=500, title = "Four Candles") p.xaxis.major_label_orientation = pi/4 p.grid.grid_line_alpha=0.3 #Wick p.segment(df.d, df.h, df.d, df.l, color="#000000") #Up body p.rect(df.d[inc], mids[inc], w, spans[inc], fill_color="#09ff00", line_color="#09ff00") #Down body p.rect(df.d[dec], mids[dec], w, spans[dec], fill_color="#ff0000", line_color="#ff0000") output_notebook() show(p) </code></pre> <p><a href="http://i.stack.imgur.com/Shgu3.png" rel="nofollow"><img src="http://i.stack.imgur.com/Shgu3.png" alt="Four Candles"></a></p>
3
2016-08-19T15:47:02Z
39,708,864
<p>I created <a href="https://github.com/Formulator/candlestickmaker" rel="nofollow">candlestickmaker</a>, which goes beyond the brief of this question to fulfil its intent: a demonstration of a Bokeh patching &amp; streaming OHLC chart.</p>
0
2016-09-26T17:25:57Z
[ "python", "animation", "plot", "bokeh", "candlestick-chart" ]
dictionary transformation
39,043,213
<p>How to map the elements of the lists in the dictionary below to their basic keys (i.e. basic keys are the ones that map to empty lists)</p> <pre><code>{ '1':[], '2':[], '3':['1','2'], '4':[], '5':[], '6':['4','5'], '7':['3','6'] } </code></pre> <p>which would result in</p> <pre><code>{ '1':[], '2':[], '3':{'1':[],'2':[]}, '4':[], '5':[], '6':{'4':[],'5':[]}, '7':{'3':{'1':[],'2':[]},'6':{'4':[],'5':[]}} } </code></pre> <p>I think it can be accomplished using a recursive function.</p>
-5
2016-08-19T15:51:59Z
39,043,337
<p>My understanding is that you want to convert values like <code>['1', '2']</code> to a <code>dict</code> like <code>{'1': [], '2': []}</code>.</p> <p>Don't know why:</p> <pre><code>'6':['4','5'], '7':['3','6'] </code></pre> <p>Gives:</p> <pre><code>'6':{'4':[],'5':[]}, '7':{'3':{'1':[],'2':[]},'6':{'4':[],'5':[]}} </code></pre> <p>But not:</p> <pre><code>'6':{'4':[],'5':[]}, '7':{'3': [], '6': []} </code></pre> <p>?</p> <p>Recursion is not necessary, but you can use a dict comprehension:</p> <pre><code>import pprint a = { '1':[], '2':[], '3':['1','2'], '4':[], '5':[], '6':['4','5'], '7':['3','6'] } b = {k: {i: [] for i in v} for k, v in a.items()} pprint.pprint(b) </code></pre> <p>You'll get:</p> <pre><code>{'1': {}, '2': {}, '3': {'1': [], '2': []}, '4': {}, '5': {}, '6': {'4': [], '5': []}, '7': {'3': [], '6': []}} </code></pre>
-1
2016-08-19T15:58:20Z
[ "python", "dictionary" ]
dictionary transformation
39,043,213
<p>How to map the elements of the lists in the dictionary below to their basic keys (i.e. basic keys are the ones that map to empty lists)</p> <pre><code>{ '1':[], '2':[], '3':['1','2'], '4':[], '5':[], '6':['4','5'], '7':['3','6'] } </code></pre> <p>which would result in</p> <pre><code>{ '1':[], '2':[], '3':{'1':[],'2':[]}, '4':[], '5':[], '6':{'4':[],'5':[]}, '7':{'3':{'1':[],'2':[]},'6':{'4':[],'5':[]}} } </code></pre> <p>I think it can be accomplished using a recursive function.</p>
-5
2016-08-19T15:51:59Z
39,043,634
<p>To get you started I would suggest something along these lines.</p> <pre><code>def recursiveChange(obj): if isinstance(obj, dict): # what happens when it's a dictionary recursiveChange(NEWOBJECT) if isinstance(obj, list): # what happens when it's a list recursiveChange(NEWOBJECT) </code></pre> <p>Each time the function is called it will check what type of object it is, then change the values as needed. With this you should be able to get an idea of the route you want to take, give it a try.</p>
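Fleshing that skeleton out for the question's data, here is one possible sketch. It assumes every key maps either to an empty list (a "basic" key) or to a list of child keys, exactly as in the question's example:

```python
def expand(data, key):
    children = data[key]
    if not children:            # basic key -> keep the empty list
        return []
    # non-basic key -> replace each child name with its own expansion
    return {child: expand(data, child) for child in children}

data = {'1': [], '2': [], '3': ['1', '2'], '4': [], '5': [],
        '6': ['4', '5'], '7': ['3', '6']}
result = {key: expand(data, key) for key in data}
print(result['7'])  # {'3': {'1': [], '2': []}, '6': {'4': [], '5': []}}
```

This reproduces the nested output from the question, including the doubly nested entry for `'7'`.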
0
2016-08-19T16:18:08Z
[ "python", "dictionary" ]
Read a file in Python and replace certain strings on the go
39,043,256
<p>I want to read a multiple files in Python in order to do some mapping between them. </p> <p>I'm pretty new at these things, so I got the code from someone else. But now I want to edit it. And I can't fully understand the python macros.</p> <p>So here's the code</p> <pre><code>def getDataFromFile(infile): ''' Opens a file, processes it by replacing all the \t\t with \t'n/a'\t and returns to the user the header of the file, and a list of genes. ''' with open(infile, 'r') as f: reader = csv.reader(f, delimiter='\t') # Open the file with csv.reader so it has a cleaner look to it. header = f.readline() # Store header on a variable list = [[x if x else 'n/a' for x in line] for line in reader] # This is done, so we can have 1 universal input. n/a is for non-existent value! # Most databases, don't insert a special character for non-existent # values, they just \t\t it! So be careful with that! # With the above approach, we end up with a list of lists # Every column, will have a value and that will be either the one provided by the file # or, the "our" special for non-existent attributes, 'NaN' header = header.split() # header should be a list of strings. return header, geneList </code></pre> <p>How can I modify this line <code>list = [[x if x else 'n/a' for x in line] for line in reader]</code> so that, not only it checks for <code>'/t/t'</code> and replacing it with <code>'n/a'</code> but also looks for other forms of 'non-existent' like <code>'NA'</code> (used in R). </p> <p>I know it's a <strong>noob</strong> question, but I started using Python 2 weeks ago. And I'm still in the learning process.</p>
0
2016-08-19T15:54:25Z
39,043,327
<p>Just add another test in your listcomp:</p> <pre><code>list = [[x if (x and x not in ["NA","whatever"]) else 'n/a' for x in line] for line in reader] </code></pre> <p>Which can be clearer like that with inverted logic and integrating empty string in checklist.</p> <pre><code>list = [['n/a' if (x in ["", "NA","whatever"]) else x for x in line] for line in reader] </code></pre>
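To illustrate, here is a minimal, self-contained run of the inverted-logic list comprehension from the answer above. The two sample rows and the `"whatever"` placeholder token are made up for the demo:

```python
import csv
import io

# Hypothetical tab-separated input standing in for the file read by csv.reader
raw = "g1\ts1\tNA\ng2\t\t0.5\n"
reader = csv.reader(io.StringIO(raw), delimiter="\t")

# Tokens treated as "non-existent" values
missing = ["", "NA", "whatever"]
cleaned = [["n/a" if x in missing else x for x in line] for line in reader]
print(cleaned)  # [['g1', 's1', 'n/a'], ['g2', 'n/a', '0.5']]
```

Both the empty field (from `\t\t`) and the R-style `NA` end up as `'n/a'`, while real values pass through untouched.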
1
2016-08-19T15:57:49Z
[ "python", "file" ]
Python: Merge two CSV files to multilevel JSON
39,043,323
<p>I am very new to Python/JSON so please bear with me on this. I could do this in R but we need to use Python so as to transform this to Python/Spark/MongoDB. Also, I am just posting a minimal subset - I have a couple more file types and so if anyone can help me with this, I can build upon that to integrate more files and file types:</p> <p>Getting back to my problem:</p> <p>I have two tsv input files that I need to merge and convert to JSON. Both the files have gene and sample columns plus some additional columns. However, the <code>gene</code> and <code>sample</code> may or may not overlap like I have shown - f2.tsv has all genes in f1.tsv but also has an additional gene <code>g3</code>. Similarly, both files have overlapping as well as non-overlapping values in <code>sample</code> column.</p> <pre><code># f1.tsv – has gene, sample and additional column other1 $ cat f1.tsv gene sample other1 g1 s1 a1 g1 s2 b1 g1 s3a c1 g2 s4 d1 # f2.tsv – has gene, sample and additional columns other21, other22 $ cat f2.tsv gene sample other21 other22 g1 s1 a21 a22 g1 s2 b21 b22 g1 s3b c21 c22 g2 s4 d21 d22 g3 s5 f21 f22 </code></pre> <p><em>The gene forms the top level, each gene has multiple samples which form the second level and the additional columns form the <code>extras</code> which is the third level.</em> The extras are divided into two because one file has <code>other1</code> and the second file has <code>other21</code> and <code>other22</code>. The other files that I will include later will have other fields like <code>other31</code> and <code>other32</code> and so on but they will still have the gene and sample columns.</p> <pre><code># expected output – JSON by combining both tsv files. 
$ cat output.json [{ "gene":"g1", "samples":[ { "sample":"s2", "extras":[ { "other1":"b1" }, { "other21":"b21", "other22":"b22" } ] }, { "sample":"s1", "extras":[ { "other1":"a1" }, { "other21":"a21", "other22":"a22" } ] }, { "sample":"s3b", "extras":[ { "other21":"c21", "other22":"c22" } ] }, { "sample":"s3a", "extras":[ { "other1":"c1" } ] } ] },{ "gene":"g2", "samples":[ { "sample":"s4", "extras":[ { "other1":"d1" }, { "other21":"d21", "other22":"d22" } ] } ] },{ "gene":"g3", "samples":[ { "sample":"s5", "extras":[ { "other21":"f21", "other22":"f22" } ] } ] }] </code></pre> <p>How do convert two csv files to a single - multi level JSON based on two common columns? </p> <p>I would really appreciate any help that I can get on this.</p> <p>Thanks!</p>
1
2016-08-19T15:57:36Z
39,044,587
<p>This looks like a problem for <code>pandas</code>! Unfortunately pandas only takes us so far and we then have to do some manipulation on our own. This is neither fast nor particularly efficient code, but it will get the job done.</p> <pre><code>import pandas as pd import json from collections import defaultdict # here we import the tsv files as pandas df f1 = pd.read_table('f1.tsv', delim_whitespace=True) f2 = pd.read_table('f2.tsv', delim_whitespace=True) # we then let pandas merge them newframe = f1.merge(f2, how='outer', on=['gene', 'sample']) # have pandas write them out to a json, and then read them back in as a # python object (a list of dicts) pythonList = json.loads(newframe.to_json(orient='records')) newDict = {} for d in pythonList: gene = d['gene'] sample = d['sample'] sampleDict = {'sample':sample, 'extras':[]} extrasdict = defaultdict(lambda:dict()) if gene not in newDict: newDict[gene] = {'gene':gene, 'samples':[]} for key, value in d.iteritems(): if 'other' not in key or value is None: continue else: id = key.split('other')[-1] if len(id) == 1: extrasdict['1'][key] = value else: extrasdict['{}'.format(id[0])][key] = value for value in extrasdict.values(): sampleDict['extras'].append(value) newDict[gene]['samples'].append(sampleDict) newList = [v for k, v in newDict.iteritems()] print json.dumps(newList) </code></pre> <p>If this looks like a solution that will work for you, I am happy to spend some time cleaning it up to make it bait more readable and efficient.</p> <p>PS: If you like R, then pandas is the way to go (it was written to give a R-like interface to data in python)</p>
2
2016-08-19T17:18:22Z
[ "python", "json", "csv" ]
Python: Merge two CSV files to multilevel JSON
39,043,323
<p>I am very new to Python/JSON so please bear with me on this. I could do this in R but we need to use Python so as to transform this to Python/Spark/MongoDB. Also, I am just posting a minimal subset - I have a couple more file types and so if anyone can help me with this, I can build upon that to integrate more files and file types:</p> <p>Getting back to my problem:</p> <p>I have two tsv input files that I need to merge and convert to JSON. Both the files have gene and sample columns plus some additional columns. However, the <code>gene</code> and <code>sample</code> may or may not overlap like I have shown - f2.tsv has all genes in f1.tsv but also has an additional gene <code>g3</code>. Similarly, both files have overlapping as well as non-overlapping values in <code>sample</code> column.</p> <pre><code># f1.tsv – has gene, sample and additional column other1 $ cat f1.tsv gene sample other1 g1 s1 a1 g1 s2 b1 g1 s3a c1 g2 s4 d1 # f2.tsv – has gene, sample and additional columns other21, other22 $ cat f2.tsv gene sample other21 other22 g1 s1 a21 a22 g1 s2 b21 b22 g1 s3b c21 c22 g2 s4 d21 d22 g3 s5 f21 f22 </code></pre> <p><em>The gene forms the top level, each gene has multiple samples which form the second level and the additional columns form the <code>extras</code> which is the third level.</em> The extras are divided into two because one file has <code>other1</code> and the second file has <code>other21</code> and <code>other22</code>. The other files that I will include later will have other fields like <code>other31</code> and <code>other32</code> and so on but they will still have the gene and sample columns.</p> <pre><code># expected output – JSON by combining both tsv files. 
$ cat output.json [{ "gene":"g1", "samples":[ { "sample":"s2", "extras":[ { "other1":"b1" }, { "other21":"b21", "other22":"b22" } ] }, { "sample":"s1", "extras":[ { "other1":"a1" }, { "other21":"a21", "other22":"a22" } ] }, { "sample":"s3b", "extras":[ { "other21":"c21", "other22":"c22" } ] }, { "sample":"s3a", "extras":[ { "other1":"c1" } ] } ] },{ "gene":"g2", "samples":[ { "sample":"s4", "extras":[ { "other1":"d1" }, { "other21":"d21", "other22":"d22" } ] } ] },{ "gene":"g3", "samples":[ { "sample":"s5", "extras":[ { "other21":"f21", "other22":"f22" } ] } ] }] </code></pre> <p>How do convert two csv files to a single - multi level JSON based on two common columns? </p> <p>I would really appreciate any help that I can get on this.</p> <p>Thanks!</p>
1
2016-08-19T15:57:36Z
39,045,101
<p>Here's another option. I tried to make it easy to manage when you start adding more files. You can run on the command line and provide arguments, one for each file you want to add in. Gene/sample names are stored in dictionaries to improve efficiency. The formatting of your desired JSON object is done in each class' format() method. Hope this helps.</p> <pre><code>import csv, json, sys class Sample(object): def __init__(self, name, extras): self.name = name self.extras = [extras] def format(self): map = {} map['sample'] = self.name map['extras'] = self.extras return map def add_extras(self, extras): #edit 8/20 #always just add the new extras to the list for extra in extras: self.extras.append(extra) class Gene(object): def __init__(self, name, samples): self.name = name self.samples = samples def format(self): map = {} map['gene'] = self.name map['samples'] = sorted([self.samples[sample_key].format() for sample_key in self.samples], key=lambda sample: sample['sample']) return map def create_or_add_samples(self, new_samples): # loop through new samples, seeing if they already exist in the gene object for sample_name in new_samples: sample = new_samples[sample_name] if sample.name in self.samples: self.samples[sample.name].add_extras(sample.extras) else: self.samples[sample.name] = sample class Genes(object): def __init__(self): self.genes = {} def format(self): return sorted([self.genes[gene_name].format() for gene_name in self.genes], key=lambda gene: gene['gene']) def create_or_add_gene(self, gene): if not gene.name in self.genes: self.genes[gene.name] = gene else: self.genes[gene.name].create_or_add_samples(gene.samples) def row_to_gene(headers, row): gene_name = "" sample_name = "" extras = {} for value in enumerate(row): if headers[value[0]] == "gene": gene_name = value[1] elif headers[value[0]] == "sample": sample_name = value[1] else: extras[headers[value[0]]] = value[1] sample_dict = {} sample_dict[sample_name] = Sample(sample_name, extras) return Gene(gene_name, sample_dict) if __name__ == '__main__': delim = "\t" genes = Genes() files = sys.argv[1:] for file in files: print("Reading " + str(file)) with open(file,'r') as f1: reader = csv.reader(f1, delimiter=delim) headers = [] for row in reader: if len(headers) == 0: headers = row else: genes.create_or_add_gene(row_to_gene(headers, row)) result = json.dumps(genes.format(), indent=4) print(result) with open('json_output.txt', 'w') as output: output.write(result) </code></pre>
2
2016-08-19T17:53:44Z
[ "python", "json", "csv" ]
Python: Merge two CSV files to multilevel JSON
39,043,323
<p>I am very new to Python/JSON so please bear with me on this. I could do this in R but we need to use Python so as to transform this to Python/Spark/MongoDB. Also, I am just posting a minimal subset - I have a couple more file types and so if anyone can help me with this, I can build upon that to integrate more files and file types:</p> <p>Getting back to my problem:</p> <p>I have two tsv input files that I need to merge and convert to JSON. Both the files have gene and sample columns plus some additional columns. However, the <code>gene</code> and <code>sample</code> may or may not overlap like I have shown - f2.tsv has all genes in f1.tsv but also has an additional gene <code>g3</code>. Similarly, both files have overlapping as well as non-overlapping values in <code>sample</code> column.</p> <pre><code># f1.tsv – has gene, sample and additional column other1 $ cat f1.tsv gene sample other1 g1 s1 a1 g1 s2 b1 g1 s3a c1 g2 s4 d1 # f2.tsv – has gene, sample and additional columns other21, other22 $ cat f2.tsv gene sample other21 other22 g1 s1 a21 a22 g1 s2 b21 b22 g1 s3b c21 c22 g2 s4 d21 d22 g3 s5 f21 f22 </code></pre> <p><em>The gene forms the top level, each gene has multiple samples which form the second level and the additional columns form the <code>extras</code> which is the third level.</em> The extras are divided into two because one file has <code>other1</code> and the second file has <code>other21</code> and <code>other22</code>. The other files that I will include later will have other fields like <code>other31</code> and <code>other32</code> and so on but they will still have the gene and sample columns.</p> <pre><code># expected output – JSON by combining both tsv files. 
$ cat output.json [{ "gene":"g1", "samples":[ { "sample":"s2", "extras":[ { "other1":"b1" }, { "other21":"b21", "other22":"b22" } ] }, { "sample":"s1", "extras":[ { "other1":"a1" }, { "other21":"a21", "other22":"a22" } ] }, { "sample":"s3b", "extras":[ { "other21":"c21", "other22":"c22" } ] }, { "sample":"s3a", "extras":[ { "other1":"c1" } ] } ] },{ "gene":"g2", "samples":[ { "sample":"s4", "extras":[ { "other1":"d1" }, { "other21":"d21", "other22":"d22" } ] } ] },{ "gene":"g3", "samples":[ { "sample":"s5", "extras":[ { "other21":"f21", "other22":"f22" } ] } ] }] </code></pre> <p>How do convert two csv files to a single - multi level JSON based on two common columns? </p> <p>I would really appreciate any help that I can get on this.</p> <p>Thanks!</p>
1
2016-08-19T15:57:36Z
39,045,203
<p>Do it in steps:</p> <ol> <li>Read the incoming <code>tsv</code> files and aggregate the information from different genes into a dictionary.</li> <li>Process said dictionary to match your desired format.</li> <li>Write the result to a JSON file.</li> </ol> <p>Here is the code:</p> <pre><code>import csv import json from collections import defaultdict input_files = ['f1.tsv', 'f2.tsv'] output_file = 'genes.json' # Step 1 gene_dict = defaultdict(lambda: defaultdict(list)) for file in input_files: with open(file, 'r') as f: reader = csv.DictReader(f, delimiter='\t') for line in reader: gene = line.pop('gene') sample = line.pop('sample') gene_dict[gene][sample].append(line) # Step 2 out = [{'gene': gene, 'samples': [{'sample': sample, 'extras': extras} for sample, extras in samples.items()]} for gene, samples in gene_dict.items()] # Step 3 with open(output_file, 'w') as f: json.dump(out, f) </code></pre>
1
2016-08-19T18:01:09Z
[ "python", "json", "csv" ]
traceback shows only one line of a multiline command
39,043,353
<p>I have added a small debugging aid to my server. It logs a stack trace obtained from <code>traceback.format_stack()</code>.</p> <p>It contains a few incomplete lines like this:</p> <pre><code>File "/home/...../base/loop.py", line 361, in run self.outputs.fd_list, (), sleep) </code></pre> <p>which is not that helpful.</p> <p>The source lines 360 and 361:</p> <pre><code>rlist, wlist, unused = select.select(self.inputs.fd_list, self.outputs.fd_list, (), sleep) </code></pre> <p>If only one line can be part of the stack trace, I would say line 360 with the function name (here <code>select.select</code>) is the right one, because the stack is created by calling functions.</p> <p>Anyway, I would prefer the whole (logical) line to be printed, or at least some context (e.g. 2 lines before). Is that possible? I mean with just an adequate amount of effort, of course.</p> <p>I tried to add a line continuation character <code>\</code>, but without success.</p> <hr> <p><strong>EPILOGUE</strong>: Based on Jean-François Fabre's answer and his code I'm going to use this function:</p> <pre><code>def print_trace(): for fname, lnum, func, line in traceback.extract_stack()[:-1]: print('File "{}", line {}, in {}'.format(fname, lnum, func)) try: with open(fname) as f: rl = f.readlines() except OSError: if line is not None: print(" " + line + " &lt;===") continue first = max(0, lnum-3) # read 2 lines before and 2 lines after for i, line in enumerate(rl[first:lnum+2]): line = line.rstrip() if i + first + 1 == lnum: print(" " + line + " &lt;===") elif line: print(" " + line) </code></pre>
2
2016-08-19T15:59:34Z
39,043,529
<p>The <code>traceback.format_exception_only</code> function formats only one line, except in the case of <strong>SyntaxError</strong>, so…</p>
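For what it's worth, that single-line behaviour is easy to check. A quick sketch (the `ValueError` here is just an arbitrary example exception):

```python
import traceback

try:
    int("not a number")
except ValueError as exc:
    lines = traceback.format_exception_only(type(exc), exc)

# An ordinary exception is summarised as a single line
print(len(lines))  # 1
print(lines[0])    # ValueError: invalid literal for int() ...
```

So this function alone cannot produce the surrounding source context the question asks for.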
0
2016-08-19T16:10:23Z
[ "python", "traceback" ]
traceback shows only one line of a multiline command
39,043,353
<p>I have added a small debugging aid to my server. It logs a stack trace obtained from <code>traceback.format_stack()</code>.</p> <p>It contains a few incomplete lines like this:</p> <pre><code>File "/home/...../base/loop.py", line 361, in run self.outputs.fd_list, (), sleep) </code></pre> <p>which is not that helpful.</p> <p>The source lines 360 and 361:</p> <pre><code>rlist, wlist, unused = select.select(self.inputs.fd_list, self.outputs.fd_list, (), sleep) </code></pre> <p>If only one line can be part of the stack trace, I would say line 360 with the function name (here <code>select.select</code>) is the right one, because the stack is created by calling functions.</p> <p>Anyway, I would prefer the whole (logical) line to be printed, or at least some context (e.g. 2 lines before). Is that possible? I mean with just an adequate amount of effort, of course.</p> <p>I tried to add a line continuation character <code>\</code>, but without success.</p> <hr> <p><strong>EPILOGUE</strong>: Based on Jean-François Fabre's answer and his code I'm going to use this function:</p> <pre><code>def print_trace(): for fname, lnum, func, line in traceback.extract_stack()[:-1]: print('File "{}", line {}, in {}'.format(fname, lnum, func)) try: with open(fname) as f: rl = f.readlines() except OSError: if line is not None: print(" " + line + " &lt;===") continue first = max(0, lnum-3) # read 2 lines before and 2 lines after for i, line in enumerate(rl[first:lnum+2]): line = line.rstrip() if i + first + 1 == lnum: print(" " + line + " &lt;===") elif line: print(" " + line) </code></pre>
2
2016-08-19T15:59:34Z
39,043,790
<p>"just with adequate effort" this can be done. But it's hack-like</p> <p>check this example:</p> <pre><code>import traceback,re,os,sys r = re.compile(r'File\s"(.*)",\sline\s(\d+)') def print_trace(): # discard the 2 deepest entries since they're a call to print_trace() lines = [str.split(x,"\n")[0] for x in traceback.format_stack()][:-2] for l in lines: m = r.search(l) if m != None: sys.stdout.write(l+"\n") file = m.group(1) line = int(m.group(2))-1 if os.path.exists(file): with open(file,"r") as f: rl = f.readlines() tblines = rl[max(line-2,0):min(line+3,len(rl))] # read 2 lines before and 2 lines after for i,tl in enumerate(tblines): tl = tl.rstrip() if i==2: sys.stdout.write(" "+tl+" &lt;====\n") elif tl: sys.stdout.write(" "+tl+"\n") def foo(): print_trace() foo() </code></pre> <p>output:</p> <pre><code> File "C:\Users\dartypc\AppData\Roaming\PyScripter\remserver.py", line 63, in &lt;module&gt; if __name__ == "__main__": main() &lt;==== File "C:\Users\dartypc\AppData\Roaming\PyScripter\remserver.py", line 60, in main t = SimpleServer(ModSlaveService, port = port, auto_register = False) t.start() &lt;==== if __name__ == "__main__": File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\utils\server.py", line 227, in start File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\utils\server.py", line 139, in accept File "C:\Users\dartypc\AppData\Roaming\PyScripter\remserver.py", line 14, in _accept_method class SimpleServer(Server): def _accept_method(self, sock): self._serve_client(sock, None) &lt;==== class ModSlaveService(SlaveService): File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\utils\server.py", line 191, in _serve_client File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\core\protocol.py", line 391, in serve_all File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\core\protocol.py", line 382, in serve File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\core\protocol.py", line 350, in _dispatch File "C:\Program 
Files\PyScripter\Lib\rpyc.zip\rpyc\core\protocol.py", line 298, in _dispatch_request File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\core\protocol.py", line 528, in _handle_call File "&lt;string&gt;", line 420, in run_nodebug File "C:\DATA\jff\data\python\stackoverflow\traceback_test.py", line 31, in &lt;module&gt; print_trace() foo() &lt;==== </code></pre> <p>EDIT: VPfB suggested the use of <code>extract_stack</code> which is a little less "hacky", no need to parse a string, just get the quadruplet with traceback info (needs to rebuild the text message, but that's better)</p> <pre><code>import traceback,os,sys def print_trace(): # discard the 2 deepest entries since they're a call to print_trace() for file,line,w1,w2 in traceback.extract_stack()[:-2]: sys.stdout.write(' File "{}", line {}, in {}\n'.format(file,line,w1)) if os.path.exists(file): line -= 1 with open(file,"r") as f: rl = f.readlines() tblines = rl[max(line-2,0):min(line+3,len(rl))] # read 2 lines before and 2 lines after for i,tl in enumerate(tblines): tl = tl.rstrip() if i==2: sys.stdout.write(" "+tl+" &lt;====\n") elif tl: sys.stdout.write(" "+tl+"\n") def foo(): print_trace() foo() </code></pre>
1
2016-08-19T16:26:42Z
[ "python", "traceback" ]
A Python package using caching
39,043,417
<p>It is possible to use the built-in <code>pkgutil</code> package and its <code>pkgutil.get_data</code> function to get data packaged with the package.</p> <p>My case is a bit different.</p> <p>I would like a platform-independent way to store data produced by my package, but not actually distribute anything at installation.</p> <p>When the data is older than, let's say, 1 day, the next conversion should refresh this cache.</p> <p>Code might help:</p> <pre><code>import json from datetime import datetime from dateutil.relativedelta import relativedelta cache_path = "XXX/here" with open(cache_path) as f: cached_data = json.load(f) def convert(value, from_type, to_type): pair = from_type + "-" + to_type now = datetime.now() too_old = (now + relativedelta(days=1)).isoformat() if pair not in cached_data or too_old &lt; cached_data[pair]['last_updated']: cached_data[pair] = get_new_value(pair) with open(cache_path, "w") as f: json.dump(cached_data, f) return value * float(cached_data[pair]['value']) </code></pre> <p>So how to choose <code>cache_path</code>?</p>
0
2016-08-19T16:03:06Z
39,043,797
<p>The built in <code>tempfile</code> module will help you here. </p> <pre><code>import tempfile with tempfile.NamedTemporaryFile(delete=False) as cache_path_fh: &lt;do stuff&gt; </code></pre> <p>Deletion/cleanup will have to be done manually with <code>delete=False</code> since you want the file to persist beyond the scope of the file handle.</p> <p>By default the files go under <code>/tmp</code> or the system's temporary dir, which can be adjusted. See <a href="https://docs.python.org/2/library/tempfile.html?highlight=tempfile#module-tempfile" rel="nofollow">docs</a>. </p>
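If the cache should survive between runs at a stable location (rather than a fresh temporary file each time), `tempfile.gettempdir()` gives a platform-independent base directory. A minimal sketch, assuming a made-up file name `conversion_cache.json`:

```python
import json
import os
import tempfile

# Hypothetical cache file name; any stable name will do
cache_path = os.path.join(tempfile.gettempdir(), "conversion_cache.json")

def save_cache(data):
    with open(cache_path, "w") as f:
        json.dump(data, f)

def load_cache():
    if not os.path.exists(cache_path):
        return {}  # first run: empty cache
    with open(cache_path) as f:
        return json.load(f)

save_cache({"USD-EUR": {"value": "0.9"}})
restored = load_cache()
os.remove(cache_path)  # clean up after the demo
```

With real data you would also store a `last_updated` timestamp per pair, as the question's code does, and refresh entries older than a day.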
0
2016-08-19T16:27:12Z
[ "python", "caching", "package" ]
Parse JSON from HTML responseText
39,043,468
<p>Python's webob module by default returns text/html responses, specifically ServerError's, and these end up embedding the error JSON payload within the body of the HTML. The responseText contains the following:</p> <pre><code>&lt;html&gt; &lt;head&gt; &lt;title&gt;503 Service Unavailable&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;503 Service Unavailable&lt;/h1&gt; { "status": "object-specific error", "payload": { "Message": "Unable to list resources", "HTTP Method": "GET", "URI": "api/myManager/1.0/Node", "Operation": "LIST", "Object": { "Name": "myManager.Node", "Interface": "Node" }, "Version": { "Major": 1, "Minor": 0 } } }&lt;br /&gt;&lt;br /&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Using Javascript on the client side, what is the best way to extract the JSON object embedded within the HTML?</p>
0
2016-08-19T16:06:17Z
39,043,765
<p>Using a RegEx to parse (not really reliable but efficient):</p> <pre><code>import re import json content = """\ &lt;html&gt; &lt;head&gt; &lt;title&gt;503 Service Unavailable&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;503 Service Unavailable&lt;/h1&gt; { "status": "object-specific error", "payload": { "Message": "Unable to list resources", "HTTP Method": "GET", "URI": "api/myManager/1.0/Node", "Operation": "LIST", "Object": { "Name": "myManager.Node", "Interface": "Node" }, "Version": { "Major": 1, "Minor": 0 } } }&lt;br /&gt;&lt;br /&gt; &lt;/body&gt; &lt;/html&gt;""" mo = re.search(r"&lt;/h1&gt;(.*?)&lt;br", content, flags=re.DOTALL) if mo: data = mo.group(1) obj = json.loads(data) print(obj) </code></pre> <p>You'll get:</p> <pre><code>{'payload': {'Operation': 'LIST', 'HTTP Method': 'GET', 'URI': 'api/myManager/1.0/Node', 'Message': 'Unable to list resources', 'Version': {'Major': 1, 'Minor': 0}, 'Object': {'Interface': 'Node', 'Name': 'myManager.Node'}}, 'status': 'object-specific error'} </code></pre> <p>Or, using <a href="http://lxml.de/" rel="nofollow">lxml</a>:</p> <pre><code>import json from lxml import etree content = """\ &lt;html&gt; ... &lt;/html&gt;""" tree = etree.XML(content) h1 = tree.xpath("/html/body/h1[1]")[0] data = h1.tail obj = json.loads(data) </code></pre> <p>Same result.</p>
-1
2016-08-19T16:24:54Z
[ "javascript", "python", "html", "json", "parsing" ]
Parse JSON from HTML responseText
39,043,468
<p>Python's webob module by default returns text/html responses, specifically ServerError's, and these end up embedding the error JSON payload within the body of the HTML. The responseText contains the following:</p> <pre><code>&lt;html&gt; &lt;head&gt; &lt;title&gt;503 Service Unavailable&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;503 Service Unavailable&lt;/h1&gt; { "status": "object-specific error", "payload": { "Message": "Unable to list resources", "HTTP Method": "GET", "URI": "api/myManager/1.0/Node", "Operation": "LIST", "Object": { "Name": "myManager.Node", "Interface": "Node" }, "Version": { "Major": 1, "Minor": 0 } } }&lt;br /&gt;&lt;br /&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Using Javascript on the client side, what is the best way to extract the JSON object embedded within the HTML?</p>
0
2016-08-19T16:06:17Z
39,229,423
<p>So I agree in general that, the better solution is to ensure the server returns only JSON, however a quick means of achieving this via Javascript on the client side as @Barmer suggested, Parse the html to the DOM, get the text childNode inside body and run JSONParse on it.</p> <pre><code>var responseStr = '&lt;html&gt;' + '&lt;head&gt;' + ' &lt;title&gt;503 Service Unavailable&lt;/title&gt;' + '&lt;/head&gt;' + '&lt;body&gt;' + '&lt;h1&gt;503 Service Unavailable&lt;/h1&gt;' + '{' + ' "status": "object-specific error",' + ' "payload": {' + ' "Message": "Unable to list resources",' + ' "HTTP Method": "GET",' + ' "URI": "api/myManager/1.0/Node",' + ' "Operation": "LIST",' + ' "Object": {' + ' "Name": "myManager.Node",' + ' "Interface": "Node"' + ' },' + ' "Version": {' + ' "Major": 1,' + ' "Minor": 0' + ' }' + ' }' + '}&lt;br /&gt;&lt;br /&gt;' + '&lt;/body&gt;' + '&lt;/html&gt;'; var parser = new DOMParser(); var doc = parser.parseFromString(responseStr, "text/html"); var items = doc.body.getElementsByTagName("*"); var json_obj; for (var i = 0, len = doc.body.childNodes.length; i &lt; len; i++) { if (doc.body.childNodes[i].nodeName == "#text") { json_obj = JSON.parse(doc.body.childNodes[i].data); break; } } // You can access json directly now e.g. console.log(json_obj.status); console.log(json_obj.payload['HTTP Method']); </code></pre>
0
2016-08-30T13:52:57Z
[ "javascript", "python", "html", "json", "parsing" ]
Python, regex dynamic count
39,043,494
<p>So far I have this code: </p> <pre><code>def find_words(m_count, m_string): m_list = re.findall(r'\w{6,}', m_string) return m_list </code></pre> <p>Is there a way to use <code>m_count</code> instead of writing the count number (6) explicitly?</p>
1
2016-08-19T16:07:48Z
39,043,542
<p>You can build a regex by concatenating your count variable and static part like this:</p> <pre><code>&gt;&gt;&gt; m_count = 6 &gt;&gt;&gt; re.findall(r'\w{' + str(m_count) + ',}', 'abcdefg 1234 124xyz') ['abcdefg', '124xyz'] </code></pre>
1
2016-08-19T16:11:09Z
[ "python", "regex" ]
Python, regex dynamic count
39,043,494
<p>So far I have this code: </p> <pre><code>def find_words(m_count, m_string): m_list = re.findall(r'\w{6,}', m_string) return m_list </code></pre> <p>Is there a way to use <code>m_count</code> instead of writing the count number (6) explicitly?</p>
1
2016-08-19T16:07:48Z
39,043,594
<p>You can use <a href="https://docs.python.org/3/library/stdtypes.html#str.format" rel="nofollow"><code>format()</code></a>, and <a href="https://docs.python.org/3/library/string.html#format-string-syntax" rel="nofollow"><em>escape</em></a> the curly braces.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; m_count = 6 &gt;&gt;&gt; print re.findall(r'\w{{{},}}'.format(m_count),'123456789 foobar hello_world') ['123456789', 'hello_world'] </code></pre> <p><strong>Full method body:</strong></p> <pre><code>def find_words(m_count, m_string): m_list = re.findall(r'\w{{{},}}'.format(m_count), m_string) return m_list </code></pre>
1
2016-08-19T16:15:09Z
[ "python", "regex" ]
Python, regex dynamic count
39,043,494
<p>So far I have this code: </p> <pre><code>def find_words(m_count, m_string): m_list = re.findall(r'\w{6,}', m_string) return m_list </code></pre> <p>Is there a way to use <code>m_count</code> instead of writing the count number (6) explicitly?</p>
1
2016-08-19T16:07:48Z
39,043,615
<p>Convert the int to a string and add it to your regex like this:</p> <pre><code>def find_words(m_count, m_string): m_list = re.findall(r'\w{'+str(m_count)+',}', m_string) return m_list </code></pre>
2
2016-08-19T16:16:36Z
[ "python", "regex" ]
Pyserial - Python creating byte array
39,043,536
<p>I have the following data</p> <p>a1 = 0x5A -- hex</p> <p>a2 = 0x01 -- hex</p> <p>a3 = 12 -- decimal</p> <p>a4 = 28 -- decimal</p> <p>a5 = sum of (a1 to a4)</p> <p>I should be able to put all this information in a byte array and send it using the ser.write command in one go.</p> <p>Currently I am manually converting a3 and a4 to hex and I am using something like this: ser.write('\x5A\x01\x...\x...\x...')</p> <p>I would like a way to pack all the variables into a single byte array and say ser.write(bytearray)</p> <p>ser --- is my serial.Serial('COM1')</p> <p>Same with ser.read - the information I get is in a byte array - how can I decode it to decimal and hexadecimal values?</p> <p>I am looking for a way to use the binascii functions both for converting to a byte array and for converting back from a byte array.</p>
0
2016-08-19T16:10:53Z
39,043,632
<p>Do you want a string of hex values? Not sure to understand.</p> <pre><code>a1 = 0x5A # hex a2 = 0x01 # hex a3 = 12 # decimal a4 = 28 # decimal a5 = sum((a1, a2, a3, a4)) int_array = [a1, a2, a3, a4, a5] print(int_array) hex_array = "".join(map(hex, int_array)) print(hex_array) </code></pre> <p>You'll get:</p> <pre><code>[90, 1, 12, 28, 131] 0x5a0x10xc0x1c0x83 </code></pre> <p>Using <code>array</code> class:</p> <pre><code>import array byte_array = array.array('B', int_array) print(byte_array) print(byte_array.tostring()) </code></pre> <p>You'll get:</p> <pre><code>array('B', [90, 1, 12, 28, 131]) b'Z\x01\x0c\x1c\x83' </code></pre>
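Since the question mentions `binascii`: a `bytearray` can also be built straight from the list of ints, and `binascii.hexlify`/`unhexlify` convert between the raw bytes and a hex view. A small Python 3 sketch (the appended checksum byte mirrors the question's `a5`):

```python
import binascii

# Values from the question
int_array = [0x5A, 0x01, 12, 28]
int_array.append(sum(int_array))   # checksum byte, like a5 (0x83)

payload = bytearray(int_array)     # suitable for ser.write(payload)
hex_view = binascii.hexlify(payload)
recovered = list(binascii.unhexlify(hex_view))
print(hex_view)  # b'5a010c1c83'
```

Reading works the same way in reverse: bytes returned by `ser.read()` can be turned into a list of decimal ints with `list(...)` or into hex with `binascii.hexlify`.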
2
2016-08-19T16:17:52Z
[ "python", "pyserial" ]
Combinations of restricted set of integers
39,043,552
<p>How can I generate the list of combinations of <code>allowed_ints</code> that sum to <code>goal</code>?</p> <p>Examples: </p> <pre><code>allowed_ints=[1,2], goal=4 combinations = [[1,1,1,1],[1,1,2],[2,1,1],[2,2],[1,2,1]] allowed_ints=[5, 6], goal=13 combinations = [] </code></pre> <p>What I've made so far doesn't work.</p> <pre><code>def combinations(allowed_ints, goal): if goal &gt; 0: for i in allowed_ints: for p in combinations(allowed_ints, goal-i): yield [i] + p else: yield [] print list(combinations([1, 2],3)) [[1, 1, 1], [1, 1, 2], [1, 2], [2, 1], [2, 2]] # not what I want </code></pre>
-1
2016-08-19T16:11:43Z
39,043,732
<p>You should include 3 conditions:</p> <ul> <li><code>goal == 0</code> – the recursion succeeded</li> <li><code>goal &lt; 0</code> – the recursion failed (the sum overshot the goal)</li> <li><code>goal &gt;= 0</code> but the list of elements is exhausted – the recursion failed</li> </ul> <p>By the way, you can do it all with recursion; there is no need for a loop, since recursion can take the role of the loop as well.</p>
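A minimal sketch of those conditions applied to the question's generator. Here the `for` loop over `allowed_ints` already covers the "list exhausted" case, so only the two `goal` checks are needed:

```python
def combinations(allowed_ints, goal):
    if goal == 0:
        yield []          # recursion succeeded: exact sum reached
        return
    if goal < 0:
        return            # recursion failed: the sum overshot the goal
    for i in allowed_ints:
        for tail in combinations(allowed_ints, goal - i):
            yield [i] + tail

result = list(combinations([1, 2], 3))
print(result)  # [[1, 1, 1], [1, 2], [2, 1]]
```

The impossible case from the question, `allowed_ints=[5, 6]` with `goal=13`, correctly yields an empty list.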
-1
2016-08-19T16:23:01Z
[ "python", "combinations" ]
Combinations of restricted set of integers
39,043,552
<p>How can I generate the list of combinations of <code>allowed_ints</code> that sum to <code>goal</code>?</p> <p>Examples: </p> <pre><code>allowed_ints=[1,2], goal=4 combinations = [[1,1,1,1],[1,1,2],[2,1,1],[2,2],[1,2,1]] allowed_ints=[5, 6], goal=13 combinations = [] </code></pre> <p>What I've made so far doesn't work.</p> <pre><code>def combinations(allowed_ints, goal): if goal &gt; 0: for i in allowed_ints: for p in combinations(allowed_ints, goal-i): yield [i] + p else: yield [] print list(combinations([1, 2],3)) [[1, 1, 1], [1, 1, 2], [1, 2], [2, 1], [2, 2]] # not what I want </code></pre>
-1
2016-08-19T16:11:43Z
39,044,129
<p>Using you function try this:</p> <pre><code>def combinations(allowed_ints, goal): if goal &gt; 0: for i in allowed_ints: for p in combinations(allowed_ints, goal-i): if sum([i] + p) == goal: yield [i] + p else: yield [] print list(combinations([1, 2],3)) </code></pre> <p>Outputs:</p> <pre><code>[[1, 1, 1], [1, 2], [2, 1]] </code></pre>
1
2016-08-19T16:47:41Z
[ "python", "combinations" ]
Combinations of restricted set of integers
39,043,552
<p>How can I generate the list of combinations of <code>allowed_ints</code> that sum to <code>goal</code>?</p> <p>Examples: </p> <pre><code>allowed_ints=[1,2], goal=4 combinations = [[1,1,1,1],[1,1,2],[2,1,1],[2,2],[1,2,1]] allowed_ints=[5, 6], goal=13 combinations = [] </code></pre> <p>What I've made so far doesn't work.</p> <pre><code>def combinations(allowed_ints, goal): if goal &gt; 0: for i in allowed_ints: for p in combinations(allowed_ints, goal-i): yield [i] + p else: yield [] print list(combinations([1, 2],3)) [[1, 1, 1], [1, 1, 2], [1, 2], [2, 1], [2, 2]] # not what I want </code></pre>
-1
2016-08-19T16:11:43Z
39,044,588
<p>I know you've already selected an answer, but I wanted to offer an alternative that's not as similar to your code. If you wanted to use recursion, I would suggest something like this:</p> <pre><code>def combinations(allowed_ints, current, goal): # base case: you're done if sum(current) == goal: print current # if overshoot, stop here elif sum(current) &gt; goal: pass else: for i in allowed_ints: new_list = current[:] new_list.append(i) combinations(allowed_ints, new_list, goal) combinations([1, 2], [], 4) </code></pre> <p>It's no better than the suggested answer, but uses a different approach.</p>
0
2016-08-19T17:18:29Z
[ "python", "combinations" ]
Tweepy twitter bot replies to new requests
39,043,601
<p>I'm looking for a way to stop my Twitter bot, <a href="https://twitter.com/1happybot" rel="nofollow">HappyBot</a>, from replying to each user who has tweeted at it, and instead to only reply to new tweets since the last time the code was run. For example, if I tweeted the bot at 13:00 it would reply at 13:00 but if I tweeted it again at 14:00 it would reply to both the 13:00 and the 14:00 tweet. The code I run currently:</p> <pre><code>twts = api.search(q="@1happybot make me happy") t = ['@1happybot make me happy', '@1happybot Make me happy!', '@1happybot make me happy.', 'Make me happy @1happybot', 'make me happy @1happybot'] for s in twts: for i in t: if i == s.text: sn = s.user.screen_name m = "@%s Don't worry, be happy!" % (sn) s = api.update_status(m, s.id) print ('yes') </code></pre> <p>Any solutions, ideas or jumping off points would be greatly appreciated. Hope everything is clear.</p>
0
2016-08-19T16:15:39Z
39,043,956
<p>Which library are you using? My best guess is that whatever it is you are using will expose the JSON response that Twitter returns. In this case you could try retrieving the creation date of the tweet. See <a href="https://dev.twitter.com/rest/reference/get/search/tweets" rel="nofollow">https://dev.twitter.com/rest/reference/get/search/tweets</a></p> <pre><code>s.created_at </code></pre> <p>That should give you the datetime string. Your code could also be a lot more Pythonic. Take a look at:</p> <pre><code>TRIGGERS = ('@1happybot make me happy', '@1happybot Make me happy!', '@1happybot make me happy.', 'Make me happy @1happybot', 'make me happy @1happybot') for tweet in tweets: if tweet.text in TRIGGERS: sn = tweet.user.screen_name status_msg = "@{} Don't worry, be happy!".format(sn) api.update_status(status_msg, tweet.id) print('yes') </code></pre> <p>Naming things is hard but we should try our best to be descriptive without being too verbose. Variable names should communicate their intent. Use conventions like "element in list" instead of looping through a list to see if an element is in it. Be consistent in your use of quotes; here the apostrophe in the message forces double quotes, but otherwise pick one style and stick to it.</p> <p>Are the values in TRIGGERS ever going to change? If not, use an immutable type like a tuple, and communicate the intent of keeping it static by using all caps. Lots of little things. Highly recommend you read <a href="http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html" rel="nofollow">http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html</a> </p>
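A rough sketch of the `created_at` idea mentioned above: keep the timestamp of the last run and skip anything older. The dictionaries here are stand-ins for real tweet objects, which would come from the API search call:

```python
from datetime import datetime

def newer_than(tweets, last_run):
    """Keep only tweets created after the last time the bot ran."""
    return [t for t in tweets if t["created_at"] > last_run]

# stand-in tweet records; real API responses carry richer objects
tweets = [
    {"text": "make me happy", "created_at": datetime(2016, 8, 19, 13, 0)},
    {"text": "make me happy", "created_at": datetime(2016, 8, 19, 14, 0)},
]
last_run = datetime(2016, 8, 19, 13, 30)
fresh = newer_than(tweets, last_run)
print(len(fresh))  # 1: only the 14:00 tweet gets a reply
```

Persist `last_run` (to a file or database) between invocations so the filter survives restarts.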
0
2016-08-19T16:37:44Z
[ "python", "twitter", "tweepy" ]
Tweepy twitter bot replies to new requests
39,043,601
<p>I'm looking for a way to stop my Twitter bot, <a href="https://twitter.com/1happybot" rel="nofollow">HappyBot</a>, from replying to each user who has tweeted at it, and instead to only reply to new tweets since the last time the code was run. For example, if I tweeted the bot at 13:00 it would reply at 13:00 but if I tweeted it again at 14:00 it would reply to both the 13:00 and the 14:00 tweet. The code I run currently:</p> <pre><code>twts = api.search(q="@1happybot make me happy") t = ['@1happybot make me happy', '@1happybot Make me happy!', '@1happybot make me happy.', 'Make me happy @1happybot', 'make me happy @1happybot'] for s in twts: for i in t: if i == s.text: sn = s.user.screen_name m = "@%s Don't worry, be happy!" % (sn) s = api.update_status(m, s.id) print ('yes') </code></pre> <p>Any solutions, ideas or jumping off points would be greatly appreciated. Hope everything is clear.</p>
0
2016-08-19T16:15:39Z
39,046,814
<p>The solution I offer here is quick &amp; dirty, but it should work. Basically, it stores the ID of the last tweet you replied to in a text file to ensure persistence when your code stops, and reloads it when your code is called. With the <code>since_id</code> parameter, the search will only return tweets newer than those you have already replied to, and thus it will not find a tweet twice.</p> <pre><code>f = open("lastTweet.txt","r") lastId = int(f.readline()) f.close() twts = api.search(q="@1happybot make me happy", since_id=lastId) t = ['@1happybot make me happy', '@1happybot Make me happy!', '@1happybot make me happy.', 'Make me happy @1happybot', 'make me happy @1happybot'] for s in twts: for i in t: if i == s.text: sn = s.user.screen_name m = "@%s Don't worry, be happy!" % (sn) api.update_status(m, s.id) print ('yes') if s.id &gt; lastId: lastId = s.id f = open("lastTweet.txt","w") f.write(str(lastId)) f.close() </code></pre>
0
2016-08-19T19:56:35Z
[ "python", "twitter", "tweepy" ]
python - different array length along interpolation axis?
39,043,621
<p>I am trying to use the Python interpolation function to get the value y for a given x but I am getting the error "raise ValueError("x and y arrays must be equal in length along along interpolation axis" even though my arrays have both equal size and shape (according to what I get when I use .shape in my code). I am quite new to programming so I don't know how to check what else could be different in my arrays. Here is my code:</p> <pre><code>s = [] def slowroll(y, t): phi, dphi, a = y h = np.sqrt(1/3. * (1/2. * dphi**2 + 1/2.*phi**2)) da = h*a ddphi = -3.*h*dphi - phi return [dphi,ddphi,da] phi_ini = 18. dphi_ini = -0.1 init_y = [phi_ini,dphi_ini,1.] h_ini =np.sqrt(1/3. * (1/2. * dphi_ini**2. + 1/2.*phi_ini**2.)) t=np.linspace(0.,20.,100.) from scipy.integrate import odeint sol = odeint(slowroll, init_y, t) phi = sol[:,0] dphi = sol[:,1] a=sol[:,2] n=np.log(a) h = np.sqrt(1/3. * (1/2. * dphi**2 + 1/2.*phi**2)) s.extend(a*h) x = np.asarray(s) y = np.asarray(t) F = interp1d(y, x, kind='cubic') print F(7.34858263) </code></pre>
0
2016-08-19T16:17:02Z
39,043,841
<p>After adding in the required imports, I've been unable to duplicate your error with version 2.7.12. What python version are you running? </p> <pre><code>import numpy as np from scipy.interpolate import interp1d s = [] def slowroll(y, t): phi, dphi, a = y h = np.sqrt(1/3. * (1/2. * dphi**2 + 1/2.*phi**2)) da = h*a ddphi = -3.*h*dphi - phi return [dphi,ddphi,da] phi_ini = 18. dphi_ini = -0.1 init_y = [phi_ini,dphi_ini,1.] h_ini =np.sqrt(1/3. * (1/2. * dphi_ini**2. + 1/2.*phi_ini**2.)) t=np.linspace(0.,20.,100.) from scipy.integrate import odeint sol = odeint(slowroll, init_y, t) phi = sol[:,0] dphi = sol[:,1] a=sol[:,2] n=np.log(a) h = np.sqrt(1/3. * (1/2. * dphi**2 + 1/2.*phi**2)) s.extend(a*h) x = np.asarray(s) y = np.asarray(t) F = interp1d(y, x, kind='cubic') print F(7.34858263) </code></pre> <p>Output: <code>2.11688518961e+20</code></p>
0
2016-08-19T16:30:24Z
[ "python", "arrays", "scipy", "interpolation", "odeint" ]
python - different array length along interpolation axis?
39,043,621
<p>I am trying to use the Python interpolation function to get the value y for a given x but I am getting the error "raise ValueError("x and y arrays must be equal in length along along interpolation axis" even though my arrays have both equal size and shape (according to what I get when I use .shape in my code). I am quite new to programming so I don't know how to check what else could be different in my arrays. Here is my code:</p> <pre><code>s = [] def slowroll(y, t): phi, dphi, a = y h = np.sqrt(1/3. * (1/2. * dphi**2 + 1/2.*phi**2)) da = h*a ddphi = -3.*h*dphi - phi return [dphi,ddphi,da] phi_ini = 18. dphi_ini = -0.1 init_y = [phi_ini,dphi_ini,1.] h_ini =np.sqrt(1/3. * (1/2. * dphi_ini**2. + 1/2.*phi_ini**2.)) t=np.linspace(0.,20.,100.) from scipy.integrate import odeint sol = odeint(slowroll, init_y, t) phi = sol[:,0] dphi = sol[:,1] a=sol[:,2] n=np.log(a) h = np.sqrt(1/3. * (1/2. * dphi**2 + 1/2.*phi**2)) s.extend(a*h) x = np.asarray(s) y = np.asarray(t) F = interp1d(y, x, kind='cubic') print F(7.34858263) </code></pre>
0
2016-08-19T16:17:02Z
39,046,554
<p>It's Friday, I'm about to leave work, and I can't figure it out</p> <p><a href="http://i.stack.imgur.com/MvwWw.png" rel="nofollow"><img src="http://i.stack.imgur.com/MvwWw.png" alt="enter image description here"></a></p> <p>mods feel free to flag/delete post, I know it's not that productive...</p>
0
2016-08-19T19:39:21Z
[ "python", "arrays", "scipy", "interpolation", "odeint" ]