title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Web Scraping Python using Google Chrome extension | 39,104,480 | <p>Hi, I am a Python newbie and I am web scraping a webpage. </p>
<p>I am using the Google Chrome Developer Extension to identify the class of the objects I want to scrape. However, my code returns an empty array of results, whereas the screenshots clearly show that those strings are in the HTML code.
<a href="http://i.stack.imgur.com/0Xf87.png" rel="nofollow">Chrome Developer</a></p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = 'http://www.momondo.de/flightsearch/?Search=true&TripType=2&SegNo=2&SO0=BOS&SD0=LON&SDP0=07-09-2016&SO1=LON&SD1=BOS&SDP1=12-09-2016&AD=1&TK=ECO&DO=false&NA=false'
html = requests.get(url)
soup = BeautifulSoup(html.text,"lxml")
x = soup.find_all("span", {"class":"value"})
print(x)
#pprint.pprint (soup.div)
</code></pre>
<p>I would very much appreciate your help!</p>
<p>Many thanks!</p>
| 0 | 2016-08-23T14:52:57Z | 39,104,831 | <p>Converted my comment to an answer...</p>
<p>Make sure the data you are expecting is actually there. Use <code>print(soup.prettify())</code> to see what was actually returned from the request. Depending on how the site works, the data you are looking for may only exist in the browser after the javascript is processed. You might also want to take a look at <a href="http://www.seleniumhq.org/" rel="nofollow">selenium</a></p>
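<p>To see why <code>find_all</code> can come back empty even though Chrome shows the elements, here is a self-contained sketch using only the standard library (the HTML string is hypothetical, standing in for what the server returns <em>before</em> any JavaScript runs):</p>

```python
from html.parser import HTMLParser

# Hypothetical raw HTML, as the server might deliver it before the
# page's JavaScript fills in the flight prices client-side.
STATIC_HTML = '<div id="searchresults"></div><script src="app.js"></script>'

class SpanValueFinder(HTMLParser):
    """Counts every <span class="value"> start tag in the markup."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "value") in attrs:
            self.count += 1

finder = SpanValueFinder()
finder.feed(STATIC_HTML)
print(finder.count)  # 0 -- the spans are simply not in the static HTML
```

<p>If the count is 0 on the raw response but the elements are visible in the browser, the data is rendered by JavaScript and a browser-driving tool like selenium is needed.</p>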
| 0 | 2016-08-23T15:09:24Z | [
"javascript",
"python"
] |
NumPy performance: uint8 vs. float and multiplication vs. division? | 39,104,562 | <p>I have just noticed that the execution time of a script of mine nearly halves by only changing a multiplication to a division.</p>
<p>To investigate this, I have written a small example:</p>
<pre><code>import numpy as np
import timeit

# uint8 array
arr1 = np.random.randint(0, high=256, size=(100, 100), dtype=np.uint8)

# float32 array
arr2 = np.random.rand(100, 100).astype(np.float32)
arr2 *= 255.0

def arrmult(a):
    """
    mult, read-write iterator
    """
    b = a.copy()
    for item in np.nditer(b, op_flags=["readwrite"]):
        item[...] = (item + 5) * 0.5

def arrmult2(a):
    """
    mult, index iterator
    """
    b = a.copy()
    for i, j in np.ndindex(b.shape):
        b[i, j] = (b[i, j] + 5) * 0.5

def arrmult3(a):
    """
    mult, vectorized
    """
    b = a.copy()
    b = (b + 5) * 0.5

def arrdiv(a):
    """
    div, read-write iterator
    """
    b = a.copy()
    for item in np.nditer(b, op_flags=["readwrite"]):
        item[...] = (item + 5) / 2

def arrdiv2(a):
    """
    div, index iterator
    """
    b = a.copy()
    for i, j in np.ndindex(b.shape):
        b[i, j] = (b[i, j] + 5) / 2

def arrdiv3(a):
    """
    div, vectorized
    """
    b = a.copy()
    b = (b + 5) / 2

def print_time(name, t):
    print("{: <10}: {: >6.4f}s".format(name, t))

timeit_iterations = 100

print("uint8 arrays")
print_time("arrmult", timeit.timeit("arrmult(arr1)", "from __main__ import arrmult, arr1", number=timeit_iterations))
print_time("arrmult2", timeit.timeit("arrmult2(arr1)", "from __main__ import arrmult2, arr1", number=timeit_iterations))
print_time("arrmult3", timeit.timeit("arrmult3(arr1)", "from __main__ import arrmult3, arr1", number=timeit_iterations))
print_time("arrdiv", timeit.timeit("arrdiv(arr1)", "from __main__ import arrdiv, arr1", number=timeit_iterations))
print_time("arrdiv2", timeit.timeit("arrdiv2(arr1)", "from __main__ import arrdiv2, arr1", number=timeit_iterations))
print_time("arrdiv3", timeit.timeit("arrdiv3(arr1)", "from __main__ import arrdiv3, arr1", number=timeit_iterations))

print("\nfloat32 arrays")
print_time("arrmult", timeit.timeit("arrmult(arr2)", "from __main__ import arrmult, arr2", number=timeit_iterations))
print_time("arrmult2", timeit.timeit("arrmult2(arr2)", "from __main__ import arrmult2, arr2", number=timeit_iterations))
print_time("arrmult3", timeit.timeit("arrmult3(arr2)", "from __main__ import arrmult3, arr2", number=timeit_iterations))
print_time("arrdiv", timeit.timeit("arrdiv(arr2)", "from __main__ import arrdiv, arr2", number=timeit_iterations))
print_time("arrdiv2", timeit.timeit("arrdiv2(arr2)", "from __main__ import arrdiv2, arr2", number=timeit_iterations))
print_time("arrdiv3", timeit.timeit("arrdiv3(arr2)", "from __main__ import arrdiv3, arr2", number=timeit_iterations))
</code></pre>
<p>This prints the following timings:</p>
<pre><code>uint8 arrays
arrmult : 2.2004s
arrmult2 : 3.0589s
arrmult3 : 0.0014s
arrdiv : 1.1540s
arrdiv2 : 2.0780s
arrdiv3 : 0.0027s

float32 arrays
arrmult : 1.2708s
arrmult2 : 2.4120s
arrmult3 : 0.0009s
arrdiv : 1.5771s
arrdiv2 : 2.3843s
arrdiv3 : 0.0009s
</code></pre>
<p>I always thought a multiplication is computationally cheaper than a division. However, for <code>uint8</code> a division seems to be nearly twice as fast. Does this somehow relate to the fact that <code>* 0.5</code> has to calculate the multiplication in a float and then cast the result back to an integer?</p>
<p>At least for floats, multiplication seems to be faster than division. Is this generally true?</p>
<p>Why is a multiplication in <code>uint8</code> more expensive than in <code>float32</code>? I thought an 8-bit unsigned integer should be much faster to calculate with than a 32-bit float?!</p>
<p>Can someone "demystify" this?</p>
<p><strong>EDIT</strong>: to have more data, I've included vectorized functions (as suggested) and added index iterators as well. The vectorized functions are much faster, thus not really comparable. However, if <code>timeit_iterations</code> is set much higher for the vectorized functions, it turns out that multiplication is faster for both <code>uint8</code> and <code>float32</code>. I guess this makes it even more confusing?!</p>
<p>Maybe multiplication is in fact always faster than division, but the main performance cost in the for-loops is not the arithmetic operation but the loop itself. That still does not explain why the loops behave differently for different operations.</p>
<p><strong>EDIT2</strong>: As @jotasi already stated, we are looking for a full explanation of <code>division</code> vs. <code>multiplication</code> and <code>int</code> (or <code>uint8</code>) vs. <code>float</code> (or <code>float32</code>). Additionally, explaining the different trends of the vectorized approaches and the iterators would be interesting, as in the vectorized case the division seems to be slower, whereas it is faster in the iterator case.</p>
| 12 | 2016-08-23T14:57:01Z | 39,106,626 | <p>It's the very first operation that typically takes longer, before things "warm up" (e.g. memory allocation, caching).</p>
<p>See the same effect using the reverse order of dividing and multiplying:</p>
<pre><code>>>> print_time("arrdiv", timeit.timeit("arrdiv(arr2)", "from __main__ import arrdiv, arr2", number=timeit_iterations))
>>> print_time("arrmult", timeit.timeit("arrmult(arr2)", "from __main__ import arrmult, arr2", number=timeit_iterations))
arrdiv: 3.2630s
arrmult: 2.5873s
</code></pre>
| -2 | 2016-08-23T16:42:20Z | [
"python",
"performance",
"python-2.7",
"numpy"
] |
NumPy performance: uint8 vs. float and multiplication vs. division? | 39,104,562 | <p>I have just noticed that the execution time of a script of mine nearly halves by only changing a multiplication to a division.</p>
<p>To investigate this, I have written a small example:</p>
<pre><code>import numpy as np
import timeit

# uint8 array
arr1 = np.random.randint(0, high=256, size=(100, 100), dtype=np.uint8)

# float32 array
arr2 = np.random.rand(100, 100).astype(np.float32)
arr2 *= 255.0

def arrmult(a):
    """
    mult, read-write iterator
    """
    b = a.copy()
    for item in np.nditer(b, op_flags=["readwrite"]):
        item[...] = (item + 5) * 0.5

def arrmult2(a):
    """
    mult, index iterator
    """
    b = a.copy()
    for i, j in np.ndindex(b.shape):
        b[i, j] = (b[i, j] + 5) * 0.5

def arrmult3(a):
    """
    mult, vectorized
    """
    b = a.copy()
    b = (b + 5) * 0.5

def arrdiv(a):
    """
    div, read-write iterator
    """
    b = a.copy()
    for item in np.nditer(b, op_flags=["readwrite"]):
        item[...] = (item + 5) / 2

def arrdiv2(a):
    """
    div, index iterator
    """
    b = a.copy()
    for i, j in np.ndindex(b.shape):
        b[i, j] = (b[i, j] + 5) / 2

def arrdiv3(a):
    """
    div, vectorized
    """
    b = a.copy()
    b = (b + 5) / 2

def print_time(name, t):
    print("{: <10}: {: >6.4f}s".format(name, t))

timeit_iterations = 100

print("uint8 arrays")
print_time("arrmult", timeit.timeit("arrmult(arr1)", "from __main__ import arrmult, arr1", number=timeit_iterations))
print_time("arrmult2", timeit.timeit("arrmult2(arr1)", "from __main__ import arrmult2, arr1", number=timeit_iterations))
print_time("arrmult3", timeit.timeit("arrmult3(arr1)", "from __main__ import arrmult3, arr1", number=timeit_iterations))
print_time("arrdiv", timeit.timeit("arrdiv(arr1)", "from __main__ import arrdiv, arr1", number=timeit_iterations))
print_time("arrdiv2", timeit.timeit("arrdiv2(arr1)", "from __main__ import arrdiv2, arr1", number=timeit_iterations))
print_time("arrdiv3", timeit.timeit("arrdiv3(arr1)", "from __main__ import arrdiv3, arr1", number=timeit_iterations))

print("\nfloat32 arrays")
print_time("arrmult", timeit.timeit("arrmult(arr2)", "from __main__ import arrmult, arr2", number=timeit_iterations))
print_time("arrmult2", timeit.timeit("arrmult2(arr2)", "from __main__ import arrmult2, arr2", number=timeit_iterations))
print_time("arrmult3", timeit.timeit("arrmult3(arr2)", "from __main__ import arrmult3, arr2", number=timeit_iterations))
print_time("arrdiv", timeit.timeit("arrdiv(arr2)", "from __main__ import arrdiv, arr2", number=timeit_iterations))
print_time("arrdiv2", timeit.timeit("arrdiv2(arr2)", "from __main__ import arrdiv2, arr2", number=timeit_iterations))
print_time("arrdiv3", timeit.timeit("arrdiv3(arr2)", "from __main__ import arrdiv3, arr2", number=timeit_iterations))
</code></pre>
<p>This prints the following timings:</p>
<pre><code>uint8 arrays
arrmult : 2.2004s
arrmult2 : 3.0589s
arrmult3 : 0.0014s
arrdiv : 1.1540s
arrdiv2 : 2.0780s
arrdiv3 : 0.0027s

float32 arrays
arrmult : 1.2708s
arrmult2 : 2.4120s
arrmult3 : 0.0009s
arrdiv : 1.5771s
arrdiv2 : 2.3843s
arrdiv3 : 0.0009s
</code></pre>
<p>I always thought a multiplication is computationally cheaper than a division. However, for <code>uint8</code> a division seems to be nearly twice as fast. Does this somehow relate to the fact that <code>* 0.5</code> has to calculate the multiplication in a float and then cast the result back to an integer?</p>
<p>At least for floats, multiplication seems to be faster than division. Is this generally true?</p>
<p>Why is a multiplication in <code>uint8</code> more expensive than in <code>float32</code>? I thought an 8-bit unsigned integer should be much faster to calculate with than a 32-bit float?!</p>
<p>Can someone "demystify" this?</p>
<p><strong>EDIT</strong>: to have more data, I've included vectorized functions (as suggested) and added index iterators as well. The vectorized functions are much faster, thus not really comparable. However, if <code>timeit_iterations</code> is set much higher for the vectorized functions, it turns out that multiplication is faster for both <code>uint8</code> and <code>float32</code>. I guess this makes it even more confusing?!</p>
<p>Maybe multiplication is in fact always faster than division, but the main performance cost in the for-loops is not the arithmetic operation but the loop itself. That still does not explain why the loops behave differently for different operations.</p>
<p><strong>EDIT2</strong>: As @jotasi already stated, we are looking for a full explanation of <code>division</code> vs. <code>multiplication</code> and <code>int</code> (or <code>uint8</code>) vs. <code>float</code> (or <code>float32</code>). Additionally, explaining the different trends of the vectorized approaches and the iterators would be interesting, as in the vectorized case the division seems to be slower, whereas it is faster in the iterator case.</p>
| 12 | 2016-08-23T14:57:01Z | 39,163,473 | <p>It's because you multiply an int by a float and store the result as an int.
Try your <code>arrmult</code> and <code>arrdiv</code> tests with different integer or float values for the multiplication/division. In particular, compare multiplying by <code>2</code> and multiplying by <code>2.</code></p>
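<p>A quick way to run the suggested experiment in pure Python (illustrative only - absolute numbers vary by machine, and NumPy's per-element behaviour adds its own overhead on top):</p>

```python
import timeit

# Multiplying an int by a float constant forces promotion to float;
# multiplying by an int constant stays in the integer domain.
by_int = timeit.timeit("x * 2", setup="x = 7", number=200000)
by_float = timeit.timeit("x * 2.0", setup="x = 7", number=200000)

print("x * 2   : %.4fs" % by_int)
print("x * 2.0 : %.4fs" % by_float)
```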
| -1 | 2016-08-26T09:53:34Z | [
"python",
"performance",
"python-2.7",
"numpy"
] |
NumPy performance: uint8 vs. float and multiplication vs. division? | 39,104,562 | <p>I have just noticed that the execution time of a script of mine nearly halves by only changing a multiplication to a division.</p>
<p>To investigate this, I have written a small example:</p>
<pre><code>import numpy as np
import timeit

# uint8 array
arr1 = np.random.randint(0, high=256, size=(100, 100), dtype=np.uint8)

# float32 array
arr2 = np.random.rand(100, 100).astype(np.float32)
arr2 *= 255.0

def arrmult(a):
    """
    mult, read-write iterator
    """
    b = a.copy()
    for item in np.nditer(b, op_flags=["readwrite"]):
        item[...] = (item + 5) * 0.5

def arrmult2(a):
    """
    mult, index iterator
    """
    b = a.copy()
    for i, j in np.ndindex(b.shape):
        b[i, j] = (b[i, j] + 5) * 0.5

def arrmult3(a):
    """
    mult, vectorized
    """
    b = a.copy()
    b = (b + 5) * 0.5

def arrdiv(a):
    """
    div, read-write iterator
    """
    b = a.copy()
    for item in np.nditer(b, op_flags=["readwrite"]):
        item[...] = (item + 5) / 2

def arrdiv2(a):
    """
    div, index iterator
    """
    b = a.copy()
    for i, j in np.ndindex(b.shape):
        b[i, j] = (b[i, j] + 5) / 2

def arrdiv3(a):
    """
    div, vectorized
    """
    b = a.copy()
    b = (b + 5) / 2

def print_time(name, t):
    print("{: <10}: {: >6.4f}s".format(name, t))

timeit_iterations = 100

print("uint8 arrays")
print_time("arrmult", timeit.timeit("arrmult(arr1)", "from __main__ import arrmult, arr1", number=timeit_iterations))
print_time("arrmult2", timeit.timeit("arrmult2(arr1)", "from __main__ import arrmult2, arr1", number=timeit_iterations))
print_time("arrmult3", timeit.timeit("arrmult3(arr1)", "from __main__ import arrmult3, arr1", number=timeit_iterations))
print_time("arrdiv", timeit.timeit("arrdiv(arr1)", "from __main__ import arrdiv, arr1", number=timeit_iterations))
print_time("arrdiv2", timeit.timeit("arrdiv2(arr1)", "from __main__ import arrdiv2, arr1", number=timeit_iterations))
print_time("arrdiv3", timeit.timeit("arrdiv3(arr1)", "from __main__ import arrdiv3, arr1", number=timeit_iterations))

print("\nfloat32 arrays")
print_time("arrmult", timeit.timeit("arrmult(arr2)", "from __main__ import arrmult, arr2", number=timeit_iterations))
print_time("arrmult2", timeit.timeit("arrmult2(arr2)", "from __main__ import arrmult2, arr2", number=timeit_iterations))
print_time("arrmult3", timeit.timeit("arrmult3(arr2)", "from __main__ import arrmult3, arr2", number=timeit_iterations))
print_time("arrdiv", timeit.timeit("arrdiv(arr2)", "from __main__ import arrdiv, arr2", number=timeit_iterations))
print_time("arrdiv2", timeit.timeit("arrdiv2(arr2)", "from __main__ import arrdiv2, arr2", number=timeit_iterations))
print_time("arrdiv3", timeit.timeit("arrdiv3(arr2)", "from __main__ import arrdiv3, arr2", number=timeit_iterations))
</code></pre>
<p>This prints the following timings:</p>
<pre><code>uint8 arrays
arrmult : 2.2004s
arrmult2 : 3.0589s
arrmult3 : 0.0014s
arrdiv : 1.1540s
arrdiv2 : 2.0780s
arrdiv3 : 0.0027s

float32 arrays
arrmult : 1.2708s
arrmult2 : 2.4120s
arrmult3 : 0.0009s
arrdiv : 1.5771s
arrdiv2 : 2.3843s
arrdiv3 : 0.0009s
</code></pre>
<p>I always thought a multiplication is computationally cheaper than a division. However, for <code>uint8</code> a division seems to be nearly twice as fast. Does this somehow relate to the fact that <code>* 0.5</code> has to calculate the multiplication in a float and then cast the result back to an integer?</p>
<p>At least for floats, multiplication seems to be faster than division. Is this generally true?</p>
<p>Why is a multiplication in <code>uint8</code> more expensive than in <code>float32</code>? I thought an 8-bit unsigned integer should be much faster to calculate with than a 32-bit float?!</p>
<p>Can someone "demystify" this?</p>
<p><strong>EDIT</strong>: to have more data, I've included vectorized functions (as suggested) and added index iterators as well. The vectorized functions are much faster, thus not really comparable. However, if <code>timeit_iterations</code> is set much higher for the vectorized functions, it turns out that multiplication is faster for both <code>uint8</code> and <code>float32</code>. I guess this makes it even more confusing?!</p>
<p>Maybe multiplication is in fact always faster than division, but the main performance cost in the for-loops is not the arithmetic operation but the loop itself. That still does not explain why the loops behave differently for different operations.</p>
<p><strong>EDIT2</strong>: As @jotasi already stated, we are looking for a full explanation of <code>division</code> vs. <code>multiplication</code> and <code>int</code> (or <code>uint8</code>) vs. <code>float</code> (or <code>float32</code>). Additionally, explaining the different trends of the vectorized approaches and the iterators would be interesting, as in the vectorized case the division seems to be slower, whereas it is faster in the iterator case.</p>
| 12 | 2016-08-23T14:57:01Z | 39,174,057 | <p>The problem is your assumption that you are measuring the time needed for a division or multiplication, which is not true. You are measuring the overhead that surrounds each division or multiplication.</p>
<p>One really has to look at the exact code to explain every effect, which can vary from version to version. This answer can only give an idea of what one has to consider.</p>
<p>The problem is that a simple <code>int</code> is not simple at all in Python: it is a real object which must be registered in the garbage collector, and it grows in size with its value - you have to pay for all of that: for example, an 8-bit integer needs 24 bytes of memory! The same goes for Python floats.</p>
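<p>The size claim is easy to check (exact numbers vary by Python version and platform - CPython 2 reports 24 bytes for a small int, while 64-bit CPython 3 typically reports 28):</p>

```python
import sys

# A "simple" int is a full Python object (refcount, type pointer, digits).
small_int = sys.getsizeof(5)
a_float = sys.getsizeof(5.0)
big_int = sys.getsizeof(2 ** 100)  # size grows with the value

print(small_int, a_float, big_int)
```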
<p>On the other hand, a numpy array consists of simple C-style integers/floats without overhead. You save a lot of memory, but pay for it whenever you access an element of the numpy array. <code>a[i]</code> means: a Python integer must be constructed and registered in the garbage collector, and only then can it be used - there is <strong>a lot</strong> of overhead.</p>
<p>Consider this code:</p>
<pre><code>li1 = [x % 256 for x in xrange(10**4)]
arr1 = np.array(li1, np.uint8)

def arrmult(a):
    for i in xrange(len(a)):
        a[i] *= 5
</code></pre>
<p><code>arrmult(li1)</code> is 25 times faster than <code>arrmult(arr1)</code> because the integers in the list are already Python ints and don't have to be created! The lion's share of the calculation time is needed for the creation of the objects - everything else can almost be neglected.</p>
<hr>
<p>Let's take a look at your code, first the multiplication:</p>
<pre><code>def arrmult2(a):
    ...
    b[i, j] = (b[i, j] + 5) * 0.5
</code></pre>
<p>In the <code>uint8</code> case, the following must happen (I neglect the +5 for simplicity):</p>
<ol>
<li>a python-int must be created</li>
<li>it must be cast to a float (python-float creation) in order to be able to do the float multiplication</li>
<li>and cast back to a python-int and/or uint8</li>
</ol>
<p>For float32, there is less work to do (the multiplication itself does not cost much):</p>
<ol>
<li>a python-float is created</li>
<li>it is cast back to float32</li>
</ol>
<p>So the float-version should be faster and it is.</p>
<hr>
<p>Now let's take a look at the division:</p>
<pre><code>def arrdiv2(a):
    ...
    b[i, j] = (b[i, j] + 5) / 2
</code></pre>
<p>The pitfall here: all operations are integer operations. So compared to the multiplication there is no need to cast to a Python float, thus we have less overhead than in the multiplication case. Division is "faster" for uint8 than multiplication in your case.</p>
<p>However, division and multiplication are equally fast/slow for float32, because almost nothing has changed in this case - we still need to create a python-float.</p>
<hr>
<p>Now the vectorized versions: they work with C-style "raw" float32s/uint8s without conversion (and its cost!) to the corresponding Python objects under the hood. To get meaningful results you should increase the number of iterations (right now the running time is too small to say anything with certainty).</p>
<ol>
<li><p>division and multiplication for float32 could have the same running time, because I would expect numpy to replace the division by 2 with a multiplication by <code>0.5</code> (but to be sure one has to look into the code).</p></li>
<li><p>multiplication for uint8 should be slower, because every uint8 integer must be cast to a float prior to the multiplication by 0.5 and then cast back to uint8 afterwards.</p></li>
<li><p>for the uint8 case, numpy cannot replace the division by 2 with a multiplication by 0.5 because it is an integer division. Integer division is slower than float multiplication on many architectures - this is the slowest vectorized operation.</p></li>
</ol>
<hr>
<p>PS: I would not dwell too much on the cost of multiplication vs. division - there are too many other things that can have a bigger impact on performance. For example, creating unnecessary temporary objects, or a numpy array so large that it does not fit into the cache - then memory access will be the bottleneck, and you will see no difference between multiplication and division at all.</p>
| 5 | 2016-08-26T20:15:24Z | [
"python",
"performance",
"python-2.7",
"numpy"
] |
NumPy performance: uint8 vs. float and multiplication vs. division? | 39,104,562 | <p>I have just noticed that the execution time of a script of mine nearly halves by only changing a multiplication to a division.</p>
<p>To investigate this, I have written a small example:</p>
<pre><code>import numpy as np
import timeit

# uint8 array
arr1 = np.random.randint(0, high=256, size=(100, 100), dtype=np.uint8)

# float32 array
arr2 = np.random.rand(100, 100).astype(np.float32)
arr2 *= 255.0

def arrmult(a):
    """
    mult, read-write iterator
    """
    b = a.copy()
    for item in np.nditer(b, op_flags=["readwrite"]):
        item[...] = (item + 5) * 0.5

def arrmult2(a):
    """
    mult, index iterator
    """
    b = a.copy()
    for i, j in np.ndindex(b.shape):
        b[i, j] = (b[i, j] + 5) * 0.5

def arrmult3(a):
    """
    mult, vectorized
    """
    b = a.copy()
    b = (b + 5) * 0.5

def arrdiv(a):
    """
    div, read-write iterator
    """
    b = a.copy()
    for item in np.nditer(b, op_flags=["readwrite"]):
        item[...] = (item + 5) / 2

def arrdiv2(a):
    """
    div, index iterator
    """
    b = a.copy()
    for i, j in np.ndindex(b.shape):
        b[i, j] = (b[i, j] + 5) / 2

def arrdiv3(a):
    """
    div, vectorized
    """
    b = a.copy()
    b = (b + 5) / 2

def print_time(name, t):
    print("{: <10}: {: >6.4f}s".format(name, t))

timeit_iterations = 100

print("uint8 arrays")
print_time("arrmult", timeit.timeit("arrmult(arr1)", "from __main__ import arrmult, arr1", number=timeit_iterations))
print_time("arrmult2", timeit.timeit("arrmult2(arr1)", "from __main__ import arrmult2, arr1", number=timeit_iterations))
print_time("arrmult3", timeit.timeit("arrmult3(arr1)", "from __main__ import arrmult3, arr1", number=timeit_iterations))
print_time("arrdiv", timeit.timeit("arrdiv(arr1)", "from __main__ import arrdiv, arr1", number=timeit_iterations))
print_time("arrdiv2", timeit.timeit("arrdiv2(arr1)", "from __main__ import arrdiv2, arr1", number=timeit_iterations))
print_time("arrdiv3", timeit.timeit("arrdiv3(arr1)", "from __main__ import arrdiv3, arr1", number=timeit_iterations))

print("\nfloat32 arrays")
print_time("arrmult", timeit.timeit("arrmult(arr2)", "from __main__ import arrmult, arr2", number=timeit_iterations))
print_time("arrmult2", timeit.timeit("arrmult2(arr2)", "from __main__ import arrmult2, arr2", number=timeit_iterations))
print_time("arrmult3", timeit.timeit("arrmult3(arr2)", "from __main__ import arrmult3, arr2", number=timeit_iterations))
print_time("arrdiv", timeit.timeit("arrdiv(arr2)", "from __main__ import arrdiv, arr2", number=timeit_iterations))
print_time("arrdiv2", timeit.timeit("arrdiv2(arr2)", "from __main__ import arrdiv2, arr2", number=timeit_iterations))
print_time("arrdiv3", timeit.timeit("arrdiv3(arr2)", "from __main__ import arrdiv3, arr2", number=timeit_iterations))
</code></pre>
<p>This prints the following timings:</p>
<pre><code>uint8 arrays
arrmult : 2.2004s
arrmult2 : 3.0589s
arrmult3 : 0.0014s
arrdiv : 1.1540s
arrdiv2 : 2.0780s
arrdiv3 : 0.0027s

float32 arrays
arrmult : 1.2708s
arrmult2 : 2.4120s
arrmult3 : 0.0009s
arrdiv : 1.5771s
arrdiv2 : 2.3843s
arrdiv3 : 0.0009s
</code></pre>
<p>I always thought a multiplication is computationally cheaper than a division. However, for <code>uint8</code> a division seems to be nearly twice as fast. Does this somehow relate to the fact that <code>* 0.5</code> has to calculate the multiplication in a float and then cast the result back to an integer?</p>
<p>At least for floats, multiplication seems to be faster than division. Is this generally true?</p>
<p>Why is a multiplication in <code>uint8</code> more expensive than in <code>float32</code>? I thought an 8-bit unsigned integer should be much faster to calculate with than a 32-bit float?!</p>
<p>Can someone "demystify" this?</p>
<p><strong>EDIT</strong>: to have more data, I've included vectorized functions (as suggested) and added index iterators as well. The vectorized functions are much faster, thus not really comparable. However, if <code>timeit_iterations</code> is set much higher for the vectorized functions, it turns out that multiplication is faster for both <code>uint8</code> and <code>float32</code>. I guess this makes it even more confusing?!</p>
<p>Maybe multiplication is in fact always faster than division, but the main performance cost in the for-loops is not the arithmetic operation but the loop itself. That still does not explain why the loops behave differently for different operations.</p>
<p><strong>EDIT2</strong>: As @jotasi already stated, we are looking for a full explanation of <code>division</code> vs. <code>multiplication</code> and <code>int</code> (or <code>uint8</code>) vs. <code>float</code> (or <code>float32</code>). Additionally, explaining the different trends of the vectorized approaches and the iterators would be interesting, as in the vectorized case the division seems to be slower, whereas it is faster in the iterator case.</p>
| 12 | 2016-08-23T14:57:01Z | 39,182,139 | <p>This answer only looks at vectorised operations, as the reason for the other operations being slow has been answered by <a href="http://stackoverflow.com/a/39174057/529630">ead</a>.</p>
<p>A lot of "optimisations" are based on old hardware. The assumptions that made those optimisations hold true on older hardware do not hold true on newer hardware. </p>
<h3>Pipelines and division</h3>
<p>Division <em>is</em> slow. A division is carried out by several hardware units that each have to perform one part of the calculation, one after another. This is what makes division slow. </p>
<p>However, in a floating-point processing unit (FPU) [common on most modern CPUs] there are dedicated units arranged in a "pipeline" for the division instruction. Once a unit is done, that unit isn't needed for the rest of the operation. If you have several division operations you can get these units with nothing to do started on the next division operation. So though each operation is slow, the FPU can actually achieve a high throughput of division operations. Pipeline-ing isn't the same as vectorisation, but the results are mostly the same -- higher throughput when you have lots of the same operations to do.</p>
<p>Think of pipeline-ing like traffic. Compare three lanes of traffic moving at 30 mph versus one lane of traffic moving at 90 mph. The slower traffic is definitely slower individually, but the three-lane-road still has the same throughput.</p>
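<p>The throughput effect can be sketched with a toy cycle-count model (the numbers are purely illustrative, not measurements of any real CPU):</p>

```python
def pipelined_cycles(n_ops, latency):
    # With a full pipeline a new operation enters every cycle, so total
    # time is the latency of the first result plus one cycle per
    # remaining operation.
    return latency + (n_ops - 1)

def serial_cycles(n_ops, latency):
    # Without pipelining, each operation waits for the previous one.
    return n_ops * latency

print(pipelined_cycles(100, 20))  # 119 cycles
print(serial_cycles(100, 20))     # 2000 cycles
```

<p>So even though a single division has a high latency, a stream of independent divisions can still come out at nearly one result per cycle.</p>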
| 2 | 2016-08-27T14:19:02Z | [
"python",
"performance",
"python-2.7",
"numpy"
] |
Django Rest Framework Recursive Nested Parent Serialization | 39,104,575 | <p>I have a model with a self-referential field called <code>parent</code>.
Model:</p>
<pre><code>class Zone(BaseModel):
    name = models.CharField(max_length=200)
    parent = models.ForeignKey('self', models.CASCADE, blank=True, null=True, related_name='children')

    def __unicode__(self):
        return self.name
</code></pre>
<p>Serializer:</p>
<pre><code>class ZoneSerializer(ModelSerializer):
    parent = PrimaryKeyRelatedField(many=False, queryset=Zone.objects.all())
    parent_disp = StringRelatedField(many=False, source="parent")

    class Meta:
        model = Zone
        fields = ('id', 'name', 'parent', 'parent_disp')
</code></pre>
<p>Now I want to serialize the parent of the zone, and its parent, and so on, until the parent is <code>None</code>.
I found recursive serialization methods for children but not for parents.
How can I do this?</p>
| 0 | 2016-08-23T14:57:47Z | 39,105,821 | <p>Try using a <a href="http://www.django-rest-framework.org/api-guide/fields/#serializermethodfield" rel="nofollow">SerializerMethodField</a> here:</p>
<pre><code>def get_parent(self, obj):
    # query what you want here
</code></pre>
<p>I'm not sure DRF has built-in methods for this, but you can run whatever query you need in this method.</p>
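<p>The recursive idea can be sketched without Django at all - a plain function that follows <code>parent</code> links until it hits <code>None</code> (the <code>Zone</code> class and <code>serialize_zone</code> here are illustrative stand-ins, not DRF API):</p>

```python
class Zone:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def serialize_zone(zone):
    # Mirrors what a recursive get_parent() would do: nest the parent's
    # serialized form until the chain ends at None.
    if zone is None:
        return None
    return {"name": zone.name, "parent": serialize_zone(zone.parent)}

city = Zone("City", parent=Zone("Region", parent=Zone("Country")))
print(serialize_zone(city))
# {'name': 'City', 'parent': {'name': 'Region', 'parent': {'name': 'Country', 'parent': None}}}
```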
| 1 | 2016-08-23T15:56:48Z | [
"python",
"django",
"recursion",
"serialization",
"django-rest-framework"
] |
Django Rest Framework Recursive Nested Parent Serialization | 39,104,575 | <p>I have a model with a self-referential field called <code>parent</code>.
Model:</p>
<pre><code>class Zone(BaseModel):
    name = models.CharField(max_length=200)
    parent = models.ForeignKey('self', models.CASCADE, blank=True, null=True, related_name='children')

    def __unicode__(self):
        return self.name
</code></pre>
<p>Serializer:</p>
<pre><code>class ZoneSerializer(ModelSerializer):
    parent = PrimaryKeyRelatedField(many=False, queryset=Zone.objects.all())
    parent_disp = StringRelatedField(many=False, source="parent")

    class Meta:
        model = Zone
        fields = ('id', 'name', 'parent', 'parent_disp')
</code></pre>
<p>Now I want to serialize the parent of the zone, and its parent, and so on, until the parent is <code>None</code>.
I found recursive serialization methods for children but not for parents.
How can I do this?</p>
| 0 | 2016-08-23T14:57:47Z | 39,122,426 | <p>OK, I got it working like this:</p>
<pre><code>class ZoneSerializer(ModelSerializer):
    parent = SerializerMethodField()

    class Meta:
        model = Zone
        fields = ('id', 'name', 'project', 'parent',)

    def get_parent(self, obj):
        if obj.parent is not None:
            return ZoneSerializer(obj.parent).data
        else:
            return None
</code></pre>
"python",
"django",
"recursion",
"serialization",
"django-rest-framework"
] |
Setting up BGP Layer Using Scapy | 39,104,621 | <p>I am trying to use Scapy to send packets that have a BGP layer.</p>
<p>I am currently stuck on a rudimentary part of this problem because I am unable to set up the BGP layer. I followed the instructions to set up the regular IP and TCP Layer.</p>
<p>Eg:</p>
<pre><code>>>a=IP(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=179,dport=50)
</code></pre>
<p>But the problem arises when I do this:</p>
<pre><code>>>a=a/BGP()
NameError: name BGP is not defined
</code></pre>
<p>I have seen the BGP implementations in the contrib file from Scapy Github (<a href="https://github.com/secdev/scapy/blob/9201f1cf1318edd5768d7e2ee968b7fba0a24c5e/scapy/contrib/bgp.py" rel="nofollow">https://github.com/secdev/scapy/blob/9201f1cf1318edd5768d7e2ee968b7fba0a24c5e/scapy/contrib/bgp.py</a>) so I think Scapy does support BGP implementations</p>
<p>I am new to networking so I was wondering if you could help me set up the BGP layer</p>
<p>Thanks for taking the time to read this!</p>
| 0 | 2016-08-23T15:00:21Z | 39,107,539 | <p>Just going to try and help here. I have zero experience with BGP type packets, but... I copied the bgp.py file from the link you provided into scapy/layers. Using ls() I found the following:</p>
<pre><code>BGPAuthenticationData : BGP Authentication Data
BGPErrorSubcodes : BGP Error Subcodes
BGPHeader : BGP header
BGPNotification : BGP Notification fields
BGPOpen : BGP Open Header
BGPOptionalParameter : BGP Optional Parameters
BGPPathAttribute : BGP Attribute fields
BGPUpdate : BGP Update fields
</code></pre>
<p>I could then use say ls(BGPUpdate) to show this:</p>
<pre><code>withdrawn_len : ShortField = (None)
withdrawn : FieldListField = ([])
tp_len : ShortField = (None)
total_path : PacketListField = ([])
nlri : FieldListField = ([])
</code></pre>
<p>and was able to create this packet:</p>
<pre><code>pkt = IP()/TCP()/BGPUpdate()
pkt.show()
###[ IP ]###
version = 4
ihl = None
tos = 0x0
len = None
id = 1
flags =
frag = 0
ttl = 64
proto = tcp
chksum = None
src = 127.0.0.1
dst = 127.0.0.1
\options \
###[ TCP ]###
sport = ftp_data
dport = http
seq = 0
ack = 0
dataofs = None
reserved = 0
flags = S
window = 8192
chksum = None
urgptr = 0
options = {}
###[ BGP Update fields ]###
withdrawn_len= None
withdrawn = []
tp_len = None
\total_path\
nlri = []
</code></pre>
<p>I'm not sure what all of the different types of BGP layers/packets are used for or where the Communities Number would be set. Possibly in BGPPathAttribute(type=x). Type 5 is "LOCAL_PREF" which may correspond to Community Values. Try this <a href="http://www.cisco.com/c/en/us/support/docs/ip/border-gateway-protocol-bgp/28784-bgp-community.html" rel="nofollow">Link.</a></p>
<pre><code>pkt = BGPPathAttribute(type=5)
pkt.show()
###[ BGP Attribute fields ]###
flags = Transitive
type = LOCAL_PREF
attr_len = None
value = ''
</code></pre>
<p>Anyway, hope that helps a little.</p>
<p>Edit:
Forgot. I also added "bgp" to the load_layers section of scapy/config.py. Line 373. Like this:</p>
<pre><code> load_layers = ["l2", "inet", "dhcp", "dns", "dot11", "gprs", "hsrp", "inet6", "ir", "isakmp", "l2tp",
"mgcp", "mobileip", "netbios", "netflow", "ntp", "ppp", "radius", "rip", "rtp",
"sebek", "skinny", "smb", "snmp", "tftp", "x509", "bluetooth", "dhcp6", "llmnr", "sctp", "vrrp",
"ipsec","bgp"]
</code></pre>
| 0 | 2016-08-23T17:37:00Z | [
"python",
"networking",
"scapy",
"bgp"
] |
label in kivy doesn't update in while loop | 39,104,655 | <p>I'm trying to make a timer that updates a label to see the current amount of time remaining. I have a button that when you press should start the 2 min timer. For some reason the label doesn't update. Is there something wrong with the way I am doing this?</p>
<p>Here is my code:</p>
<pre><code>import time
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.uix.gridlayout import GridLayout
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.uix.label import Label

Builder.load_string("""
#:import sla kivy.adapters.simplelistadapter
#:import label kivy.uix.label

<ListItemButton>:
    selected_color: 0, 0, 1, 1
    deselected_color: 0, 0, 0, 1

<MenuScreen>:
    FloatLayout:
        #cols: 2
        #rows: 2
        size: 800,480
        Label:
            id: output
            text: "0 min 0 s"
            font_size: 60
            size_hint: None, None
            size: 400, 100
            pos: 200,425
        Button:
            id: statheader
            text: "2 min"
            font_size: 40
            size_hint: None, None
            size: 600,100
            pos: 150,800
            background_color: 0,0,1,1
            on_press: root.startTimer(int(2))
""")


class MenuScreen(Screen):
    tww = 0

    def startTimer(self, what):
        self.tww = what*60
        while self.tww > 0:
            minute = self.tww/60
            print(minute)
            second = self.tww - minute*60
            print(second)
            self.ids.output.text = str(minute) + " min " + str(second) + " s"
            self.tww -= 1
            time.sleep(1)


sm = ScreenManager()
menu_screen = MenuScreen(name='menu')
sm.add_widget(menu_screen)


class TestApp(App):
    def build(self):
        return sm


if __name__ == '__main__':
    TestApp().run()
</code></pre>
<p>Is there something else that <code>self.ids.output.text</code> should be?</p>
| 0 | 2016-08-23T15:01:34Z | 39,104,845 | <p>Kivy's graphics can't update until your while loop finishes - during the loop, only the content of the loop is run (repeatedly), and Kivy's normal functions are blocked.</p>
<p>You should instead use <code>Clock.schedule_interval</code> to run computations every frame without blocking other functions, or run your while loop in a thread.</p>
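<p>For illustration, here is a sketch of how the countdown could be driven by <code>Clock.schedule_interval</code> instead. Note that <code>make_countdown</code> and the <code>set_text</code> callback are invented names for this sketch, not Kivy API:</p>

```python
def make_countdown(total_seconds, set_text):
    """Build a callback for Clock.schedule_interval.

    set_text is whatever updates the label, e.g.
    lambda t: setattr(self.ids.output, 'text', t).
    """
    state = {'remaining': total_seconds}

    def tick(dt):
        minute, second = divmod(state['remaining'], 60)
        set_text("{0} min {1} s".format(minute, second))
        state['remaining'] -= 1
        # Kivy unschedules a scheduled callback when it returns False,
        # so the countdown stops itself once it reaches zero.
        return state['remaining'] >= 0

    return tick

# In startTimer, instead of the while loop, you would do something like:
#   from kivy.clock import Clock
#   Clock.schedule_interval(
#       make_countdown(what * 60,
#                      lambda t: setattr(self.ids.output, 'text', t)),
#       1)
```

<p>Because the callback returns control to Kivy after every tick, the label can actually redraw between updates.</p>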
| 1 | 2016-08-23T15:10:03Z | [
"python",
"kivy"
] |
Geopandas Dataframe Points to Polygons | 39,104,710 | <p>I have a geopandas dataframe made up of an id and a geometry column which is populated by 2D points. I want to join the points for each unique id to create a polygon, so that my new dataframe will have polygons as its geometry. My code currently looks something like this:</p>
<pre><code>polygons = geopandas.GeoDataFrame()

for i in id:
    group = df[df['id']== i]
    polygon = {'type': 'Polygon', 'coordinates': group['geometry']}
    polygon['poly'] = polygon
    polygons = geopandas.concat([polygon,polygons])
</code></pre>
<p>It creates a polygon but when I assign the new variable <code>poly</code> it says</p>
<pre><code>ValueError: Length of values does not match length of index
</code></pre>
<p>which makes sense since it is still just a list of coordinates and not an actual polygon object. Does anyone know how to make this an actual polygon object that I can add to a column on a geopandas <code>df</code>?<br>
Thanks in advance :)</p>
| 0 | 2016-08-23T15:04:07Z | 39,154,164 | <p>I have achieved something similar with the <code>groupby</code> function. Assuming your points are actually Shapely <code>Point</code> objects, and are sorted in the right order, you can try something like this.</p>
<pre><code>import pandas as pd
import geopandas as gp
from shapely.geometry import Point, Polygon

# Initialize a test GeoDataFrame where geometry is a list of points
df = gp.GeoDataFrame([['box', Point(1, 0)],
                      ['box', Point(1, 1)],
                      ['box', Point(2, 2)],
                      ['box', Point(1, 2)],
                      ['triangle', Point(1, 1)],
                      ['triangle', Point(2, 2)],
                      ['triangle', Point(3, 1)]],
                     columns=['shape_id', 'geometry'],
                     geometry='geometry')

# Extract the coordinates from the Point object
df['geometry'] = df['geometry'].apply(lambda x: x.coords[0])

# Group by shape ID
# 1. Get all of the coordinates for that ID as a list
# 2. Convert that list to a Polygon
df = df.groupby('shape_id')['geometry'].apply(lambda x: Polygon(x.tolist())).reset_index()

# Declare the result as a new GeoDataFrame
df = gp.GeoDataFrame(df, geometry='geometry')

df.plot()
</code></pre>
<p><a href="http://i.stack.imgur.com/hGGPC.png" rel="nofollow"><img src="http://i.stack.imgur.com/hGGPC.png" alt="enter image description here"></a></p>
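<p>The grouping step itself does not depend on GeoPandas; stripped down to plain coordinate tuples it is just this (a sketch with made-up sample data):</p>

```python
rows = [('box', (1, 0)), ('box', (1, 1)), ('box', (2, 2)), ('box', (1, 2)),
        ('triangle', (1, 1)), ('triangle', (2, 2)), ('triangle', (3, 1))]

# Collect the coordinates for each shape id, preserving insertion order
grouped = {}
for shape_id, coords in rows:
    grouped.setdefault(shape_id, []).append(coords)

# Each list of coordinates is what Polygon(...) would then receive
print(grouped['box'])       # [(1, 0), (1, 1), (2, 2), (1, 2)]
print(grouped['triangle'])  # [(1, 1), (2, 2), (3, 1)]
```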
| 1 | 2016-08-25T20:23:32Z | [
"python",
"pandas",
"shapely",
"geopandas"
] |
Pandas replace with default value | 39,104,730 | <p>I have a pandas dataframe I want to replace a certain column conditionally.</p>
<p>eg:</p>
<pre><code> col
0 Mr
1 Miss
2 Mr
3 Mrs
4 Col.
</code></pre>
<p>I want to map them as</p>
<pre><code>{'Mr': 0, 'Mrs': 1, 'Miss': 2}
</code></pre>
<p>If there are other titles now available in the dict then I want them to have a default value of <code>3</code></p>
<p>The above example becomes</p>
<pre><code> col
0 0
1 2
2 0
3 1
4 3
</code></pre>
<p>Can I do this with pandas.replace() without using regex ?</p>
| 2 | 2016-08-23T15:04:57Z | 39,104,759 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>map</code></a> rather as <code>replace</code>, because faster, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a> by <code>3</code> and cast to <code>int</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>astype</code></a>:</p>
<pre><code>df['col'] = df.col.map({'Mr': 0, 'Mrs': 1, 'Miss': 2}).fillna(3).astype(int)
print (df)
col
0 0
1 2
2 0
3 1
4 3
</code></pre>
<p>Another solution with <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a> and condition with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a>:</p>
<pre><code>d = {'Mr': 0, 'Mrs': 1, 'Miss': 2}
df['col'] = np.where(df.col.isin(d.keys()), df.col.map(d), 3).astype(int)
print (df)
col
0 0
1 2
2 0
3 1
4 3
</code></pre>
<p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow"><code>replace</code></a>:</p>
<pre><code>d = {'Mr': 0, 'Mrs': 1, 'Miss': 2}
df['col'] = np.where(df.col.isin(d.keys()), df.col.replace(d), 3)
print (df)
col
0 0
1 2
2 0
3 1
4 3
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>df = pd.concat([df]*10000).reset_index(drop=True)
d = {'Mr': 0, 'Mrs': 1, 'Miss': 2}
df['col0'] = df.col.map(d).fillna(3).astype(int)
df['col1'] = np.where(df.col.isin(d.keys()), df.col.replace(d), 3)
df['col2'] = np.where(df.col.isin(d.keys()), df.col.map(d), 3).astype(int)
print (df)
In [447]: %timeit df['col0'] = df.col.map(d).fillna(3).astype(int)
100 loops, best of 3: 4.93 ms per loop
In [448]: %timeit df['col1'] = np.where(df.col.isin(d.keys()), df.col.replace(d), 3)
100 loops, best of 3: 14.3 ms per loop
In [449]: %timeit df['col2'] = np.where(df.col.isin(d.keys()), df.col.map(d), 3).astype(int)
100 loops, best of 3: 7.68 ms per loop
In [450]: %timeit df['col3'] = df.col.map(lambda L: d.get(L, 3))
10 loops, best of 3: 36.2 ms per loop
</code></pre>
| 3 | 2016-08-23T15:06:20Z | [
"python",
"pandas",
"replace",
"dataframe",
"condition"
] |
How do I join the args of a function to form a string? | 39,104,787 | <p>I am trying to obtain a string from the attributes of the <code>changegame</code> function in order to change the status of a bot i'm developing.</p>
<pre><code>async def changegame(*game_chosen: str):
    """Changes the game the bot is playing"""
    game_str = discord.Game(name=game_chosen)
    try:
        await bot.change_status(game=game_str, idle=False)
        await bot.say("```Game correctly changed to {0}```".format(game_chosen))
</code></pre>
<p>This does not result in the string being recognized but in this:</p>
<p><code>Game correctly changed to ('Test', 'string', '123')</code></p>
| 0 | 2016-08-23T15:07:18Z | 39,104,861 | <p>To solve your initial issue, try a simple join:</p>
<pre><code>' '.join(map(str, game_chosen))
</code></pre>
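<p>A quick check of that join on a mixed tuple (the sample values are made up, mirroring the tuple the question prints):</p>

```python
game_chosen = ('Test', 'string', 123)

# map(str, ...) converts every element first, so non-strings don't break join
result = ' '.join(map(str, game_chosen))
print(result)  # Test string 123
```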
<p>However, your bigger problem is:</p>
<pre><code>game_str = discord.Game(name=game_chosen)
</code></pre>
<p>Here you are passing a <em>tuple</em> to <code>discord.Game</code>, are you sure this is right? If you want to call your initial function like this: <code>changegame("League of Legends")</code>, then you need to fix your function definition:</p>
<pre><code>async def changegame(game_chosen: str)
</code></pre>
<p>I suspect this is what you are actually trying to do.</p>
| 1 | 2016-08-23T15:10:43Z | [
"python"
] |
Partial substitution / removing characters within regex matches | 39,104,792 | <p>I'm trying to remove the characters <code>\/<>~?`%</code> if they appear within three <code>&lt;</code>'s and <code>&gt;</code>'s.</p>
<p>For the string:</p>
<p><code><html><body>Multiple &lt;&lt;&lt;parameter&gt;&gt;&gt; options %to &lt;&lt;&lt;test/verify&gt;&gt;&gt; in &lt;&lt;&lt;one% g?o&gt;&gt;&gt;</body></html></code></p>
<p>(reads like <code>Multiple <<<parameter>>> options %to <<<test/verify>>> in <<<one% g?o>>></code>.)</p>
<p>The final string I want is:</p>
<p><code><html><body>Multiple &lt;&lt;&lt;parameter&gt;&gt;&gt; options %to &lt;&lt;&lt;testverify&gt;&gt;&gt; in &lt;&lt;&lt;one go&gt;&gt;&gt;</body></html></code></p>
<p>Note that the '%' in '%to' is not removed since it's not within three <code>&lt;</code>'s and <code>&gt;</code>'s.</p>
<p>I tried these regex's so far:</p>
<pre><code>>>> s = '<html><body>Multiple &lt;&lt;&lt;parameter&gt;&gt;&gt; options %to &lt;&lt;&lt;test/verify&gt;&gt;&gt; in &lt;&lt;&lt;one% g?o&gt;&gt;&gt;</body></html>'
>>>
>>> # just getting everything between <<< and >>> is easy
... re.sub(r'((?:&lt;){3})(.*?)((?:&gt;){3})', r'\1\2\3', s)
'<html><body>Multiple &lt;&lt;&lt;parameter&gt;&gt;&gt; options %to &lt;&lt;&lt;test/verify&gt;&gt;&gt; in &lt;&lt;&lt;one%? go&gt;&gt;&gt;</body></html>'
>>> re.findall(r'((?:&lt;){3})(.*?)((?:&gt;){3})', s)
[('&lt;&lt;&lt;', 'parameter', '&gt;&gt;&gt;'),
('&lt;&lt;&lt;', 'test/verify', '&gt;&gt;&gt;'),
('&lt;&lt;&lt;', 'one%? go', '&gt;&gt;&gt;')]
</code></pre>
<p>But trying to get a sequence of non-<code>\/<>~?`%</code> characters <a href="https://regex101.com/r/vM8bD5/1" rel="nofollow">doesn't work</a> since anything containing it just gets excluded:</p>
<pre><code>>>> re.findall(r'((?:&lt;){3})([^\\/<>~?`%]*?)((?:&gt;){3})', s)
[('&lt;&lt;&lt;', 'parameter', '&gt;&gt;&gt;')]
>>> re.findall(r'((?:&lt;){3})((?:[^\\/<>~?`%]*?)*?)((?:&gt;){3})', s)
[('&lt;&lt;&lt;', 'parameter', '&gt;&gt;&gt;')]
>>> re.findall(r'((?:&lt;){3})((?:[^\\/<>~?`%])*?)((?:&gt;){3})', s)
[('&lt;&lt;&lt;', 'parameter', '&gt;&gt;&gt;')]
</code></pre>
| 0 | 2016-08-23T15:07:41Z | 39,104,793 | <p>The solution I went with was using the original <code><<<.*>>></code> regex and the <a href="https://docs.python.org/2/library/re.html#re.sub" rel="nofollow"><code>repl</code> as a function</a> option for <code>re.sub</code>:</p>
<pre><code>>>> def illrepl(matchobj):
...     return ''.join([matchobj.group(1),
...                     matchobj.group(2).translate(None, r'\/<>~?`%'),
...                     matchobj.group(3)])
...
>>> re.sub(r'((?:&lt;){3})(.*?)((?:&gt;){3})', illrepl, s)
'<html><body>Multiple &lt;&lt;&lt;parameter&gt;&gt;&gt; options %to &lt;&lt;&lt;testverify&gt;&gt;&gt; in &lt;&lt;&lt;one go&gt;&gt;&gt;</body></html>'
>>> # verify that this is the final string I wanted:
... re.sub(r'((?:&lt;){3})(.*?)((?:&gt;){3})', illrepl, s) == '<html><body>Multiple &lt;&lt;&lt;parameter&gt;&gt;&gt; options %to &lt;&lt;&lt;testverify&gt;&gt;&gt; in &lt;&lt;&lt;one go&gt;&gt;&gt;</body></html>'
True
</code></pre>
<p>And since I don't need to change the <code>&lt;</code>'s and <code>&gt;</code>'s and I know the match is only for things within them, I could either use a non-capturing group for those parts of the regex or simplify the <code>illrepl</code> function a bit by just using the full match object at <code>group(0)</code> to remove illegal/invalid characters:</p>
<pre><code>>>> def illrepl(matchobj):
...     # return matchobj.group(0).translate(None, r'\/<>~?`%') # may have unicode so can't use this
...     return re.sub(r'[\/<>~?`%]*', '', matchobj.group(0))
...
>>> re.sub(r'(?:&lt;){3}(.*?)(?:&gt;){3}', illrepl, s)
'<html><body>Multiple &lt;&lt;&lt;parameter&gt;&gt;&gt; options %to &lt;&lt;&lt;testverify&gt;&gt;&gt; in &lt;&lt;&lt;one go&gt;&gt;&gt;</body></html>'
</code></pre>
<p>Not certain if there is a way I could have done this only via the regex and not needing to use the <code>illrepl</code> function to generate the replacements and having to use <code>re.sub</code> again within that.</p>
| 2 | 2016-08-23T15:07:41Z | [
"python",
"regex",
"string",
"substitution"
] |
Why make lists unhashable? | 39,104,841 | <p>A common issue on SO is <a href="https://stackoverflow.com/questions/39081807/python-2-d-list-how-to-make-a-set/39081956#39081956">removing duplicates from a list of lists</a>. Since lists are unhashable, <code>set([[1, 2], [3, 4], [1, 2]])</code> throws <code>TypeError: unhashable type: 'list'</code>. Answers to this kind of question usually involve using tuples, which are immutable and therefore hashable.</p>
<p>This answer to <a href="http://stackoverflow.com/questions/23268899/what-makes-lists-unhashable">What makes lists unhashable?</a> include the following:</p>
<blockquote>
<p>If the hash value changes after it gets stored at a particular slot in the dictionary, it will lead to an inconsistent dictionary. For example, initially the list would have gotten stored at location A, which was determined based on the hash value. If the hash value changes, and if we look for the list we might not find it at location A, or as per the new hash value, we might find some other object.</p>
</blockquote>
<p>but I don't quite understand because other types that can be used for dictionary keys can be changed without issue:</p>
<pre><code>>>> d = {}
>>> a = 1234
>>> d[a] = 'foo'
>>> a += 1
>>> d[a] = 'bar'
>>> d
{1234: 'foo', 1235: 'bar'}
</code></pre>
<p>It is obvious that if the value of <code>a</code> changes, it will hash to a different location in the dictionary. <strong>Why is the same assumption dangerous for a list?</strong> Why is the following an unsafe method for hashing a list, since it is what we all use when we need to anyway?</p>
<pre><code>>>> class my_list(list):
...     def __hash__(self):
...         return tuple(self).__hash__()
...
>>> a = my_list([1, 2])
>>> b = my_list([3, 4])
>>> c = my_list([1, 2])
>>> foo = [a, b, c]
>>> foo
[[1, 2], [3, 4], [1, 2]]
>>> set(foo)
set([[1, 2], [3, 4]])
</code></pre>
<p>It seems that this solves the <code>set()</code> problem, why is this an issue? Lists may be mutable, but they are ordered which seems like it would be all that's needed for hashing.</p>
| 3 | 2016-08-23T15:09:57Z | 39,104,966 | <p>You seem to confuse mutability with rebinding. <code>a += 1</code> assigns a <strong>new object</strong>, the <code>int</code> object with the numeric value 1235, to <code>a</code>. Under the hood, for immutable objects like <code>int</code>, <code>a += 1</code> is just the same as <code>a = a + 1</code>.</p>
<p>The original <code>1234</code> object is not mutated. The dictionary is still using an <code>int</code> object with numeric value 1234 as the key. The dictionary still holds a <em>reference</em> to that object, even though <code>a</code> now references a different object. The two references are independent.</p>
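<p>The rebinding is easy to observe with <code>id()</code> (a quick illustration):</p>

```python
a = 1234
d = {a: 'foo'}
id_before = id(a)

a += 1                     # rebinds the name a to a brand-new int object
print(id(a) == id_before)  # False: a now refers to a different object
print(d)                   # {1234: 'foo'} -- the dict still holds the old key
```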
<p>Try this instead:</p>
<pre><code>>>> class BadKey:
...     def __init__(self, value):
...         self.value = value
...     def __eq__(self, other):
...         return other == self.value
...     def __hash__(self):
...         return hash(self.value)
...     def __repr__(self):
...         return 'BadKey({!r})'.format(self.value)
...
>>> badkey = BadKey('foo')
>>> d = {badkey: 42}
>>> badkey.value = 'bar'
>>> print(d)
{BadKey('bar'): 42}
</code></pre>
<p>Note that I altered the attribute <code>value</code> on the <code>badkey</code> instance. I didn't even touch the dictionary. The dictionary reflects the change; the <em>actual key value itself</em> was mutated, the object that both the name <code>badkey</code> and the dictionary reference.</p>
<p>However, you now <strong>can't access that key anymore</strong>:</p>
<pre><code>>>> badkey in d
False
>>> BadKey('bar') in d
False
>>> for key in d:
...     print(key, key in d)
...
BadKey('bar') False
</code></pre>
<p>I have thoroughly broken my dictionary, because I can no longer reliably locate the key.</p>
<p>That's because <code>BadKey</code> violates the principles of <em>hashability</em>; that the hash value <strong>must</strong> remain stable. You can only do that if you don't change anything about the object that the hash is based on. And the hash must be based on whatever makes two instances equal.</p>
<p>For lists, the <em>contents</em> make two list objects equal. And you can change those, so you can't produce a stable hash either.</p>
| 9 | 2016-08-23T15:14:57Z | [
"python",
"list",
"hash"
] |
pandas dataframe get rows based on matched strings in cells | 39,104,860 | <p>Given the following data frame</p>
<pre class="lang-none prettyprint-override"><code>+-----+----------------+--------+---------+
| | A | B | C |
+-----+----------------+--------+---------+
| 0 | hello@me.com | 2.0 | Hello |
| 1 | you@you.com | 3.0 | World |
| 2 | us@world.com | hi | holiday |
+-----+----------------+--------+---------+
</code></pre>
<p>How can I get all the rows where <code>re.compile([Hh](i|ello))</code> would match in a cell? That is, from the above example, I would like to get the following output:</p>
<pre class="lang-none prettyprint-override"><code>+-----+----------------+--------+---------+
| | A | B | C |
+-----+----------------+--------+---------+
| 0 | hello@me.com | 2.0 | Hello |
| 2 | us@world.com | hi | holiday |
+-----+----------------+--------+---------+
</code></pre>
<p>I am not able to get a solution for this. And help would be very much appreciated.</p>
| 2 | 2016-08-23T15:10:42Z | 39,105,906 | <p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.findall.html" rel="nofollow">findall</a> function which takes regular expressions.</p>
<pre><code>msk = df.apply(lambda x: x.str.findall(r'[Hh](i|ello)')).any(axis=1)
df[msk]
+---|------------|------|---------+
| | A | B | C |
+---|------------|------|---------+
| 0 |hello@me.com| 2 | Hello |
| 2 |us@world.com| hi | holiday |
+---|------------|------|---------+
</code></pre>
<p><code>any(axis=1)</code> will check if any of the columns in a given row are true. So <code>msk</code> is a single column of True/False values indicating whether or not the regular expression was found in that row.</p>
| 2 | 2016-08-23T16:01:15Z | [
"python",
"pandas",
"dataframe"
] |
pandas dataframe get rows based on matched strings in cells | 39,104,860 | <p>Given the following data frame</p>
<pre class="lang-none prettyprint-override"><code>+-----+----------------+--------+---------+
| | A | B | C |
+-----+----------------+--------+---------+
| 0 | hello@me.com | 2.0 | Hello |
| 1 | you@you.com | 3.0 | World |
| 2 | us@world.com | hi | holiday |
+-----+----------------+--------+---------+
</code></pre>
<p>How can I get all the rows where <code>re.compile([Hh](i|ello))</code> would match in a cell? That is, from the above example, I would like to get the following output:</p>
<pre class="lang-none prettyprint-override"><code>+-----+----------------+--------+---------+
| | A | B | C |
+-----+----------------+--------+---------+
| 0 | hello@me.com | 2.0 | Hello |
| 2 | us@world.com | hi | holiday |
+-----+----------------+--------+---------+
</code></pre>
<p>I am not able to get a solution for this. And help would be very much appreciated.</p>
| 2 | 2016-08-23T15:10:42Z | 39,109,496 | <p>Using <code>stack</code> to avoid <code>apply</code></p>
<pre><code>df.loc[df.stack().str.match(r'[Hh](i|ello)').unstack().any(1)]
</code></pre>
<p><a href="http://i.stack.imgur.com/wj6D4.png" rel="nofollow"><img src="http://i.stack.imgur.com/wj6D4.png" alt="enter image description here"></a></p>
<p>Using <code>match</code> generates a future warning. The warning is consistant with what we are doing, so that's good. However, <code>findall</code> accomplishes the same thing</p>
<pre><code>df.loc[df.stack().str.findall(r'[Hh](i|ello)').unstack().any(1)]
</code></pre>
| 3 | 2016-08-23T19:45:06Z | [
"python",
"pandas",
"dataframe"
] |
unicode issue in python when writing to file | 39,104,863 | <p>I have a CSV sheet that I read like this:</p>
<pre><code>with open(csvFilePath, 'rU') as csvFile:
    reader = csv.reader(csvFile, delimiter='|')
    numberOfMovies = 0
    for row in reader:
        title = row[1:2][0]
</code></pre>
<p>as you see, i am taking the value of <code>title</code></p>
<p>Then i surf the internet for some info about that value and then i write to a file, the writing is like this:</p>
<pre><code>def writeRDFToFile(rdf, fileName):
    f = open("movies/" + fileName + '.ttl', 'a')
    try:
        #rdf = rdf.encode('UTF-8')
        f.write(rdf)  # python will convert \n to os.linesep
    except:
        print "exception happened for movie " + movieTitle
    f.close()
</code></pre>
<p>In that function, I am writing the <code>rdf</code> variable to a file.</p>
<p>As you see, there is a <strong>commented-out</strong> line.</p>
<p>If the value of the rdf variable contains unicode chars <strong>and</strong> that line is <strong>not commented out</strong>, the code doesn't write anything to the file.</p>
<p>However, if I just <strong>comment out</strong> that line, the code writes to the file.</p>
<p>Okay, you could say: comment out that line and everything will be fine. But that is not correct, because I have another <strong>Java</strong> process (a Fuseki server) that reads the file, and if the file contains unicode chars it throws an error.</p>
<p>So I need to fix the file myself; I need to encode that data to UTF-8.</p>
<p>help please</p>
| 0 | 2016-08-23T15:10:46Z | 39,105,077 | <p>The normal csv library can have difficulty writing unicode to files. I suggest you use the <a href="https://pypi.python.org/pypi/unicodecsv/0.14.1" rel="nofollow">unicodecsv</a> library instead of the csv library. It supports writing unicode to CSVs.</p>
<p>Practically speaking, just write: </p>
<pre><code>import unicodecsv as csv
</code></pre>
| 2 | 2016-08-23T15:20:47Z | [
"python",
"unicode"
] |
Comparing rows of pandas dataframe and find intersection? | 39,104,869 | <p>I have a df :</p>
<pre><code>year name_list
2009 [sam,maj,mak]
2010 [sam, mak, ali, mo, za]
2011 [mp,ki]
</code></pre>
<p>I would like to compare each row in terms of name_list and count how many new names are added/deleted each year.
Expected results:</p>
<pre><code> year name_list added_count removed_count
2009 [sam,maj,mak] 0 0
2010 [sam, mak, ali, mo, za] 3 1
2011 [mp,ki] 2 5
</code></pre>
<p>Can anybody help?</p>
| 0 | 2016-08-23T15:11:12Z | 39,109,260 | <p>First two lines are to initialize 2009 values to zero. Assumes that the years are in chronological order and the years are in the index and not a separate column. Also assumes no duplicate values for the names in column 'name_list'. </p>
<pre><code>df.loc[2009,'added_count'] = 0
df.loc[2009,'removed_count'] = 0

for i in df.index[1:]:
    df.loc[i,'added_count'] = len(list(set(df.loc[i,'name_list'])-set(df.loc[i-1,'name_list'])))
    df.loc[i,'removed_count'] = len(list(set(df.loc[i-1,'name_list'])-set(df.loc[i,'name_list'])))
</code></pre>
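<p>The underlying set arithmetic, shown without pandas on the data from the question (a sketch):</p>

```python
years = [
    (2009, ['sam', 'maj', 'mak']),
    (2010, ['sam', 'mak', 'ali', 'mo', 'za']),
    (2011, ['mp', 'ki']),
]

counts = []
prev = set(years[0][1])
for year, names in years[1:]:
    cur = set(names)
    # added = in this year but not the last; removed = the reverse
    counts.append((year, len(cur - prev), len(prev - cur)))
    prev = cur

print(counts)  # [(2010, 3, 1), (2011, 2, 5)]
```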
| 1 | 2016-08-23T19:28:25Z | [
"python",
"pandas",
"compare",
"rows"
] |
completely delete a list in python | 39,104,917 | <p>I'm using python 2, and trying to <strong>delete two lists</strong>.
Here is the code:</p>
<pre><code>test_data1 = [img for img in glob.glob("/location/of/images/*png")]
test_data0 = [img for img in glob.glob("/location/of/other_images/*png")]
test_data = test_data1 + test_data0
</code></pre>
<p>Every list of images contains millions of file-names, so I would prefer to delete the unnecessary lists after I created the <code>test_data</code> list. Just for make the code "easier" for the computer to run.</p>
<p>How can I do it?</p>
<p>I found few different ways, but no any of them refereed to memory issues. I'm not sure if <code>test_data1=[]</code> actually delete the list completely from the memory.</p>
<p>also I'm afraid that the <code>test_data = test_data1 + test_data0</code> line only combine the hashes of the lists, and when I'll delete the two lists, <code>test_data</code> also become empty.</p>
<p>So.. what is the right way?</p>
<p>Really appreciate your help!
Sorry if the English is bad, I'm not a native speaker :P</p>
<p>Thanks! </p>
| 2 | 2016-08-23T15:12:56Z | 39,104,973 | <p>You can use list concatenation to remove the need for the intermediate lists</p>
<pre><code>test_data = []
test_data += [img for img in glob.glob("/location/of/images/*png")]
test_data += [img for img in glob.glob("/location/of/other_images/*png")]
</code></pre>
<p>Also I'm not sure what the overall design of your program is, but there is a preference in Python to use iterators/generators instead of lists for just this reason. The less you have to keep in memory at once the better. See if you can redesign your program to just iterate on the fly instead of building up this large list.</p>
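<p>For example, <code>itertools.chain</code> over <code>glob.iglob</code> iterators never materialises the intermediate lists at all; a sketch (the glob calls are only shown in comments since they depend on your paths):</p>

```python
from itertools import chain

# With the question's data this would be:
#   test_data = chain(glob.iglob("/location/of/images/*png"),
#                     glob.iglob("/location/of/other_images/*png"))
# and filenames are produced one at a time, never all held in memory.

# The same idea demonstrated with plain iterables:
combined = chain(iter(['a.png', 'b.png']), iter(['c.png']))
result = list(combined)
print(result)  # ['a.png', 'b.png', 'c.png']
```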
| 4 | 2016-08-23T15:15:13Z | [
"python",
"list",
"python-2.7",
"memory"
] |
completely delete a list in python | 39,104,917 | <p>I'm using python 2, and trying to <strong>delete two lists</strong>.
Here is the code:</p>
<pre><code>test_data1 = [img for img in glob.glob("/location/of/images/*png")]
test_data0 = [img for img in glob.glob("/location/of/other_images/*png")]
test_data = test_data1 + test_data0
</code></pre>
<p>Every list of images contains millions of file-names, so I would prefer to delete the unnecessary lists after I created the <code>test_data</code> list. Just for make the code "easier" for the computer to run.</p>
<p>How can I do it?</p>
<p>I found few different ways, but no any of them refereed to memory issues. I'm not sure if <code>test_data1=[]</code> actually delete the list completely from the memory.</p>
<p>also I'm afraid that the <code>test_data = test_data1 + test_data0</code> line only combine the hashes of the lists, and when I'll delete the two lists, <code>test_data</code> also become empty.</p>
<p>So.. what is the right way?</p>
<p>Really appreciate your help!
Sorry if the English is bad, I'm not a native speaker :P</p>
<p>Thanks! </p>
| 2 | 2016-08-23T15:12:56Z | 39,105,097 | <p>You could use <a href="https://docs.python.org/2/tutorial/datastructures.html#more-on-lists" rel="nofollow"><code>extend()</code></a>. This will instantiate a list and populate it with those items, and extend will append that list to <code>test_data</code>. This way, the only place in memory that the lists exist in will be in <code>test_data</code>. As opposed to multiple instances. Whether that will have any tangible effect on performance can only be determined with testing/profiling. </p>
<pre><code>test_data = []
test_data.extend([img for img in glob.glob("/location/of/images/*png")])
test_data.extend([img for img in glob.glob("/location/of/other_images/*png")])
</code></pre>
<p>or using <a href="https://docs.python.org/3/reference/simple_stmts.html#the-del-statement" rel="nofollow"><code>del</code></a>, to clear the binding for that variable (<em>the garbage collector will delete the unused value</em>).</p>
<pre><code>l = [1,2,3,4,5]
del l # l cleared from memory.
</code></pre>
| 2 | 2016-08-23T15:21:51Z | [
"python",
"list",
"python-2.7",
"memory"
] |
completely delete a list in python | 39,104,917 | <p>I'm using python 2, and trying to <strong>delete two lists</strong>.
Here is the code:</p>
<pre><code>test_data1 = [img for img in glob.glob("/location/of/images/*png")]
test_data0 = [img for img in glob.glob("/location/of/other_images/*png")]
test_data = test_data1 + test_data0
</code></pre>
<p>Every list of images contains millions of file-names, so I would prefer to delete the unnecessary lists after I created the <code>test_data</code> list. Just for make the code "easier" for the computer to run.</p>
<p>How can I do it?</p>
<p>I found few different ways, but no any of them refereed to memory issues. I'm not sure if <code>test_data1=[]</code> actually delete the list completely from the memory.</p>
<p>also I'm afraid that the <code>test_data = test_data1 + test_data0</code> line only combine the hashes of the lists, and when I'll delete the two lists, <code>test_data</code> also become empty.</p>
<p>So.. what is the right way?</p>
<p>Really appreciate your help!
Sorry if the English is bad, I'm not a native speaker :P</p>
<p>Thanks! </p>
| 2 | 2016-08-23T15:12:56Z | 39,105,109 | <p>The option of adding new data to array as in other answers works, but if you want to keep having two arrays and adding them, consider using garbage collector.</p>
<p>Python has a garbage collector, that will delete the objects when they are no longer in use (i.e. when the object is unreachable and is not referenced any more). For example, if you have the program:</p>
<pre><code>a = [1, 2, 3, 4]
a = []
# Here data [1, 2, 3, 4] is unreachable (unreferenced)
....
</code></pre>
<p>The garbage collector may eventually delete the object [1, 2, 3, 4]. You are not guaranteed when though. It happens automatically and you do not have to do anything with it.</p>
<p>However, if you are concerned about memory resources, you can force the garbage collector to delete unreferenced objects using <code>gc.collect()</code> (do not forget to <code>import gc</code>). For example:</p>
<pre><code>import gc
a = [1, 2, 3, 4]
a = []
gc.collect()
# Here it is guaranteed that the memory previously occupied by [1, 2, 3, 4] is free
</code></pre>
<p>So your program will turn into</p>
<pre><code>import gc
test_data1 = [img for img in glob.glob("/location/of/images/*png")]
test_data0 = [img for img in glob.glob("/location/of/other_images/*png")]
test_data = test_data1 + test_data0
test_data1 = []
test_data0 = []
gc.collect()
</code></pre>
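<p>As a sanity check against the question's worry: <code>+</code> builds a new list holding its own references, so dropping the two source lists (here with <code>del</code>, which simply unbinds the names) leaves the combined list intact. A minimal sketch with placeholder file names:</p>

```python
test_data1 = ["a.png", "b.png"]
test_data0 = ["c.png"]

test_data = test_data1 + test_data0  # a NEW list with its own copies of the references
del test_data1, test_data0           # unbind the names; the strings survive via test_data

print(test_data)
```

<p>After <code>del</code>, CPython's reference counting reclaims the two emptied list objects right away; <code>gc.collect()</code> is mainly needed for reference cycles.</p>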
| 0 | 2016-08-23T15:22:39Z | [
"python",
"list",
"python-2.7",
"memory"
] |
completely delete a list in python | 39,104,917 | <p>I'm using python 2, and trying to <strong>delete two lists</strong>.
Here is the code:</p>
<pre><code>test_data1 = [img for img in glob.glob("/location/of/images/*png")]
test_data0 = [img for img in glob.glob("/location/of/other_images/*png")]
test_data = test_data1 + test_data0
</code></pre>
<p>Every list of images contains millions of file-names, so I would prefer to delete the unnecessary lists after I have created the <code>test_data</code> list, just to make the code "easier" for the computer to run.</p>
<p>How can I do it?</p>
<p>I found a few different ways, but none of them referred to memory issues. I'm not sure whether <code>test_data1=[]</code> actually deletes the list completely from memory.</p>
<p>I'm also afraid that the <code>test_data = test_data1 + test_data0</code> line only combines references to the lists, and that when I delete the two lists, <code>test_data</code> will also become empty.</p>
<p>So.. what is the right way?</p>
<p>Really appreciate your help!
Sorry if the English is bad, I'm not a native speaker :P</p>
<p>Thanks! </p>
 | 2 | 2016-08-23T15:12:56Z | 39,105,465 | <p>In fact, each list stores <strong>references to the strings</strong>, not the strings themselves.</p>
<p>I'm pretty sure the memory used by the list itself is about 1M x 4 bytes (on a 32-bit architecture) or 1M x 8 bytes (on a 64-bit architecture).</p>
<p>I suggest you to do profiling, see <a href="http://stackoverflow.com/questions/110259/which-python-memory-profiler-is-recommended">Which Python memory profiler is recommended?</a>.</p>
<p>You can use <a href="https://docs.python.org/2/library/glob.html" rel="nofollow">glob.iglob</a> to get iterators instead of lists and chain them with <a href="https://docs.python.org/2/library/itertools.html#itertools.chain" rel="nofollow">itertools.chain</a>, as below:</p>
<pre><code>import itertools
import glob
iter1 = glob.iglob("/location/of/images/*png")
iter2 = glob.iglob("/location/of/other_images/*png")
test_data = list(itertools.chain(iter1, iter2))
</code></pre>
| -1 | 2016-08-23T15:40:21Z | [
"python",
"list",
"python-2.7",
"memory"
] |
Pylint AttributeError: 'module' object has no attribute 'append' | 39,104,919 | <p>I was configuring my new laptop (macbook pro) and everything was fine until I wanted to try my pylint command.</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/bin/pylint", line 11, in <module>
load_entry_point('pylint==1.6.4', 'console_scripts', 'pylint')()
File "/usr/local/lib/python2.7/site-packages/pylint-1.6.4-py2.7.egg/pylint/__init__.py", line 13, in run_pylint
Run(sys.argv[1:])
File "/usr/local/lib/python2.7/site-packages/pylint-1.6.4-py2.7.egg/pylint/lint.py", line 1270, in __init__
'init-hook')))
File "/usr/local/lib/python2.7/site-packages/pylint-1.6.4-py2.7.egg/pylint/lint.py", line 1371, in cb_init_hook
exec(value) # pylint: disable=exec-used
File "<string>", line 1, in <module>
AttributeError: 'module' object has no attribute 'append'
</code></pre>
<p>From that I don't understand what's wrong with my pylint. I tried a lot of things, but as I'm not quite sure what I did in the end, I prefer not to list them.</p>
<p>Has anyone run into this already? Does someone have an idea how to solve it?</p>
<p>Thanks for your help </p>
| 1 | 2016-08-23T15:12:58Z | 39,115,354 | <blockquote>
<p>Hi, can you show the value of init-hook from the configuration file you are using? What happens is that you have configured, somehow, init-hook with some invalid code. You can see this in your traceback through the last exec call, which happens only when init-hook is provided. Seeing its value could lead to solving this problem. My intuition is that you probably have something like <code>init-hook="import sys; sys.append(some_path)"</code></p>
</blockquote>
<p>Thanks to <a href="http://stackoverflow.com/questions/39104919/pylint-attributeerror-module-object-has-no-attribute-append#comment65562878_39104919">PCManticore</a>, that was it: I had a <code>.pylintrc</code> in my home folder that had something weird as the <code>init-hook</code> value. I changed that and everything is working well now.</p>
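<p>To see concretely why such an <code>init-hook</code> fails: the <code>sys</code> module has no <code>append</code> attribute; the intended call is <code>sys.path.append</code>. A quick sketch (the path is a placeholder):</p>

```python
import sys

# the broken form from the bad init-hook: raises AttributeError,
# because the sys module itself has no 'append'
try:
    sys.append("/some/project/path")
except AttributeError:
    print("sys has no attribute 'append'")

# the intended form: extend the import search path
sys.path.append("/some/project/path")
```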
| 0 | 2016-08-24T05:50:41Z | [
"python",
"osx",
"pylint"
] |
Pandas, map two dataframes, count based on condition | 39,104,928 | <p>I have written some code to map the IDs of two dataframes and, if a condition matches, create a count in a specified column of the existing dataframe. I am looking for a more efficient way of calculating it. </p>
<p><strong>Sample Data</strong></p>
<pre><code>import numpy as np
import pandas as pd
d = {'ID' : pd.Series([111, 222, 111, 444, 222, 111]), 'Tag' : pd.Series([1, 2, 3, 1, 2, 1])}
df1 = (pd.DataFrame(d))
print(df1)
ID Tag
0 111 1
1 222 2
2 111 3
3 444 1
4 222 2
5 111 1
d = {'ID' : pd.Series([111, 444, 666, 444, 777])}
df2 = (pd.DataFrame(d))
print(df2)
ID
0 111
1 444
2 666
3 444
4 777
df2['tag1'] = 0
df2['tag2'] = 0
df2['tag3'] = 0
for index, row in df2.iterrows():
for i, t in df1.iterrows():
if row['ID'] == t['ID']:
if t['Tag'] == 1:
df2.loc[index]["tag1"] += 1
elif t['Tag'] == 2:
df2.loc[index]["tag2"] += 1
elif t['Tag'] == 3:
df2.loc[index]["tag3"] += 1
</code></pre>
<p><strong>Output</strong></p>
<pre><code>print(df2)
ID tag1 tag2 tag3
0 111 2 0 1
1 444 1 0 0
2 666 0 0 0
3 444 1 0 0
4 777 0 0 0
</code></pre>
<p>What is the most efficient way of doing this, rather than computing iteratively? </p>
<p>Note, df1 can contain the sample <code>ID</code> multiple times with a different value of <code>Tag</code></p>
<p>(df1 and df2 are large dataframes, with 50,000 rows in df1 and 15,000 in df2)</p>
| 1 | 2016-08-23T15:13:17Z | 39,105,187 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow"><code>crosstab</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>:</p>
<pre><code>print (pd.crosstab(df1.ID, df1.Tag))
Tag 1 2 3
ID
111 2 0 1
222 0 2 0
444 1 0 0
print (pd.merge(df2, pd.crosstab(df1.ID, df1.Tag)
.add_prefix('tag')
.reset_index(), on='ID', how='left')
.fillna(0)
.astype(int))
ID tag1 tag2 tag3
0 111 2 0 1
1 444 1 0 0
2 666 0 0 0
3 444 1 0 0
4 777 0 0 0
</code></pre>
<p>Instead <code>crosstab</code> you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a>:</p>
<pre><code>print (df1.groupby(['ID', 'Tag'])['Tag'].size().unstack())
Tag 1 2 3
ID
111 2.0 NaN 1.0
222 NaN 2.0 NaN
444 1.0 NaN NaN
print (pd.merge(df2, df1.groupby(['ID', 'Tag'])['Tag'].size().unstack()
.add_prefix('tag')
.reset_index(), on='ID', how='left')
.fillna(0)
.astype(int))
ID tag1 tag2 tag3
0 111 2 0 1
1 444 1 0 0
2 666 0 0 0
3 444 1 0 0
4 777 0 0 0
</code></pre>
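<p>If you want to sanity-check those numbers without pandas, the same <code>(ID, Tag)</code> tally can be sketched with <code>collections.Counter</code> on the sample data (a plain-Python illustration of what <code>crosstab</code> computes, not a replacement for the vectorized code):</p>

```python
from collections import Counter

# the sample rows of df1 and the IDs of df2 from the question
df1_rows = [(111, 1), (222, 2), (111, 3), (444, 1), (222, 2), (111, 1)]
df2_ids = [111, 444, 666, 444, 777]

counts = Counter(df1_rows)  # maps (ID, Tag) -> number of occurrences in df1
result = [[i, counts[(i, 1)], counts[(i, 2)], counts[(i, 3)]] for i in df2_ids]
print(result)
```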
| 1 | 2016-08-23T15:27:26Z | [
"python",
"pandas",
"dataframe"
] |
Multiply each block of a block matrix with different coefficient in NumPy | 39,104,999 | <p>Suppose I have a <code>(4*n)*(4*n)</code> block matrix, and I would like to multiply each block of <code>(n*n)</code> with a different coefficient from a corresponding <code>(4*4)</code> matrix - what is the way to do it in NumPy?</p>
<p>For example, I have:</p>
<pre><code>>>> mat
matrix([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]])
>>> x
array([[1, 2],
[3, 4]])
</code></pre>
<p>And the requested result should be like this:</p>
<pre><code>>>> result
array([[1, 1, 2, 2],
[1, 1, 2, 2],
[3, 3, 4, 4],
[3, 3, 4, 4]])
</code></pre>
| 3 | 2016-08-23T15:16:52Z | 39,105,464 | <p>One way would be to construct a (4*n)*(4*n) block matrix from <code>x</code>:</p>
<pre><code>result = np.multiply(mat, np.kron(x, np.ones((n,n))))
</code></pre>
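<p>A quick check with the arrays from the question (using a plain <code>ndarray</code> for <code>mat</code>; with <code>np.matrix</code> the values come out the same, only the result type differs):</p>

```python
import numpy as np

mat = np.ones((4, 4), dtype=int)          # the all-ones matrix from the question
x = np.array([[1, 2], [3, 4]])
n = 2

# kron tiles each entry of x into an (n, n) block of that coefficient
result = mat * np.kron(x, np.ones((n, n), dtype=int))
print(result)
```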
| 3 | 2016-08-23T15:40:20Z | [
"python",
"numpy",
"matrix",
"linear-algebra"
] |
Multiply each block of a block matrix with different coefficient in NumPy | 39,104,999 | <p>Suppose I have a <code>(4*n)*(4*n)</code> block matrix, and I would like to multiply each block of <code>(n*n)</code> with a different coefficient from a corresponding <code>(4*4)</code> matrix - what is the way to do it in NumPy?</p>
<p>For example, I have:</p>
<pre><code>>>> mat
matrix([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]])
>>> x
array([[1, 2],
[3, 4]])
</code></pre>
<p>And the requested result should be like this:</p>
<pre><code>>>> result
array([[1, 1, 2, 2],
[1, 1, 2, 2],
[3, 3, 4, 4],
[3, 3, 4, 4]])
</code></pre>
| 3 | 2016-08-23T15:16:52Z | 39,106,830 | <p>Here's a vectorized approach using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> -</p>
<pre><code>m1,n1 = x.shape
m2,n2 = mat.shape
out = (mat.reshape(m1,m2//m1,n1,n2//n1)*x[:,None,:,None]).reshape(m2,n2)
</code></pre>
<p>Sample run -</p>
<pre><code>In [41]: mat
Out[41]:
array([[8, 8, 7, 2, 3, 4],
[2, 4, 7, 5, 4, 8],
[7, 2, 4, 5, 6, 5],
[4, 3, 3, 6, 5, 3],
[7, 3, 5, 8, 7, 7],
[2, 5, 2, 4, 2, 7]])
In [42]: x
Out[42]:
array([[1, 2],
[3, 4]])
In [43]: out
Out[43]:
array([[ 8, 8, 7, 4, 6, 8],
[ 2, 4, 7, 10, 8, 16],
[ 7, 2, 4, 10, 12, 10],
[12, 9, 9, 24, 20, 12],
[21, 9, 15, 32, 28, 28],
[ 6, 15, 6, 16, 8, 28]])
</code></pre>
<p>I think <code>np.kron</code> as suggested in <a href="http://stackoverflow.com/a/39105464/3293881"><code>@Tim Fuchs</code>'s post</a> would be the safest option here without fiddling around with the size of input arrays. For performance, here's some timings on a decent sized input arrays comparing <code>kron</code> and <code>broadcasting-based</code> approaches -</p>
<pre><code>In [56]: mat = np.random.randint(2,9,(100,100))
In [57]: x = np.random.randint(2,9,(50,50))
In [58]: n = 2
In [59]: m1,n1 = x.shape # Ignoring timings from these as negligible
...: m2,n2 = mat.shape
...:
In [60]: %timeit np.multiply(mat, np.kron(x, np.ones((n,n))))
1000 loops, best of 3: 312 µs per loop
In [61]: %timeit (mat.reshape(m1,m2//m1,n1,n2//n1)*x[:,None,:,None]).reshape(m2,n2)
10000 loops, best of 3: 83.7 µs per loop
</code></pre>
| 2 | 2016-08-23T16:55:37Z | [
"python",
"numpy",
"matrix",
"linear-algebra"
] |
Selenium Python 2.7 - asserting non-ascii characters | 39,105,019 | <p>I'm having an issue when asserting two non-ascii values. One is coming from a csv file and the other one obtained from an element in the html:</p>
<pre><code><h1 class="LoginElement">登录</h1>
</code></pre>
<p>I'm using selenium to get the text</p>
<pre><code>w_msg = driver.find_element(By.CSS_SELECTOR, "h1.LoginElement").text
</code></pre>
<p>When I assert both values</p>
<pre><code>assert txt in w_msg
</code></pre>
<p>I get the following error msg:</p>
<pre><code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 0: ordinal not in range(128)
</code></pre>
<p>if I print both variables and their types:</p>
<pre><code>print txt
print type(txt)
print w_msg
print type(w_msg)
</code></pre>
<p>It returns the following:</p>
<pre><code>登入
<type 'str'>
登录
<type 'unicode'>
</code></pre>
<p>This is how I'm initializing the CSV file from my "Utility" class:</p>
<pre><code>def open_csv(base_csv, file_name):
csv_file = open(base_csv + file_name, 'rb')
reader = csv.reader(csv_file, delimiter=',')
row = list(reader)
return row
</code></pre>
<p>And here's the call from the test:</p>
<pre><code>csv = Utility.open_csv(base_csv, file_name)
</code></pre>
<p><em>NOTE</em>: I'm using OpenOffice Calc to build the csv and saving it in UTF-8</p>
<p>I've tried lots of solutions found in SO but still can't get it to work. Any help or lead in the right direction will be much appreciated.</p>
| 2 | 2016-08-23T15:18:14Z | 39,117,607 | <p>Python is trying to convert your <code>str</code> to a Unicode to carry out the comparison. Unfortunately, Python 2.x is designed to err on the side of caution and only decode your string using ASCII.</p>
<p>You need to decode <code>txt</code> to a Unicode using the appropriate encoding of the CSV file so Python doesn't have to.</p>
<p>You could do this with <code>txt.decode()</code>, but the best way to do it by having Python decode it for you as you read the file.</p>
<p>Unfortunately, the Python 2.x CSV module doesn't support Unicode, so you need to use the drop-in replacement: <a href="https://github.com/jdunck/python-unicodecsv" rel="nofollow">https://github.com/jdunck/python-unicodecsv</a></p>
<p>Use it like:</p>
<pre><code>import unicodecsv
with open("myfile.csv") as my_csv:
r = unicodecsv.reader(my_csv, encoding=YOURENCODING)
</code></pre>
<p><code>YOURENCODING</code> may be <code>utf-8</code>, <code>cp1252</code> or any codec listed here: <a href="https://docs.python.org/2/library/codecs.html#standard-encodings" rel="nofollow">https://docs.python.org/2/library/codecs.html#standard-encodings</a></p>
<p>If the CSV has come from Excel then it's likely to be a codec beginning with <code>cp</code></p>
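<p>The decode step described above, sketched on raw bytes (the literal is just an illustrative non-ASCII string): in Python 2 the stock CSV reader hands back UTF-8 byte strings, and decoding them yourself produces <code>unicode</code> objects that compare cleanly against Selenium's text:</p>

```python
# a UTF-8 encoded CSV cell, as the Python 2 csv module would return it
raw = u"caf\u00e9".encode("utf-8")

# decode it explicitly so Python never falls back to the ASCII codec
txt = raw.decode("utf-8")
print(txt)
```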
| 2 | 2016-08-24T08:00:31Z | [
"python",
"unicode",
"py.test"
] |
Regular expression for coordinates in Python | 39,105,104 | <p>I don't have much experience with regexps, so I need to ask the question here. </p>
<p>I'm uploading a database for a learning algorithm. There are five variables: iteration, label, vector of pixel intensities, vector of coordinates and bounding box coordinates. </p>
<p>I've uploaded the vector of pixels this way:</p>
<pre><code>with open(tracked_cow, 'r+') as f:
...
elif line % 5 == 2:
# regular expression
string_this_obs = [float(val) for val in re.findall(r"[\d.]+", vs)]
mean_intensities.append(string_this_obs)
</code></pre>
<p>to get a list of intensities for each observation in floats, which is exactly what I need. Coordinates for each observation are in format <code>[(v1,v2), (v3,v4)...]</code>, and I'd like to get something like <code>['(v1,v2)','(v3,v4)'...]</code>, where each input is a string. </p>
<p>EDIT: here is the raw data from the file :</p>
<pre><code>10,
3,
[0.11198258, 0.24691357, 0.17487293, 0.46013072, 0.20174292, 0.16528684, 0.20348585, 0.15991287, 0.3205519, 0.16397965, 0.54306459, 0.10878723, 0.035439365, 0.11387073, 0.8691358, 0.27843139, 0.090777054, 0.065649971, 0.1697894, 0.12941177, 0.2556282, 0.10762528, 0.26187363, 0.10312274, 0.26550472, 0.069571532, 0.23805377, 0.10036311, 0.18707335, 0.15976763, 0.1828613, 0.38010171, 0.094262883, 0.39157587, 0.35410312, 0.093827158, 0.10777052, 0.10777051, 0.10079884, 0.20130719, 0.1029775, 0.20275964, 0.57981122, 0.26056644, 0.16180103, 0.21089324, 0.18445896, 0.15323168, 0.070007272, 0.14989108, 0.22716051, 0.58344227, 0.69876546, 0.13478577, 0.17037037, 0.17893973, 0.16092958, 0.98155409, 0.2771242, 0.0824982, 0.29092228, 0.089034133, 0.11314452, 0.07392884, 0.07770516, 0.074074082, 0.27102399, 0.10442992, 0.19419028, 0.20116195, 0.16325344, 0.10617284, 0.84647787, 0.5863471, 0.088017434, 0.16891794, 0.070007272, 0.088598408, 0.13493101, 0.18997823, 0.98779958, 0.071895428, 0.17748728, 0.19680466, 0.15700799, 0.49513438, 0.068409592, 0.96920842, 0.09440814, 0.90515625, 0.2878722, 0.03267974, 0.22120552, 0.26753813, 0.070007272, 0.11372551, 0.11532317, 0.29019612, 0.21161947, 0.37400147],
[(366, 732), (269, 759), (318, 739), (326, 790), (369, 771), (384, 775), (376, 744), (312, 739), (366, 737), (304, 736), (359, 758), (333, 773), (266, 728), (362, 728), (313, 767), (343, 759), (381, 777), (268, 731), (290, 751), (287, 740), (266, 760), (334, 745), (276, 728), (313, 751), (382, 766), (265, 751), (328, 755), (310, 748), (349, 730), (374, 759), (350, 743), (360, 756), (375, 782), (354, 754), (349, 738), (345, 747), (296, 789), (355, 790), (356, 778), (363, 730), (346, 772), (278, 753), (314, 765), (383, 723), (303, 734), (374, 778), (361, 770), (299, 733), (368, 732), (286, 737), (268, 762), (287, 769), (306, 770), (323, 748), (346, 787), (294, 757), (340, 751), (283, 771), (333, 738), (266, 737), (326, 740), (382, 772), (325, 751), (267, 741), (366, 759), (266, 732), (270, 758), (307, 782), (357, 781), (325, 755), (304, 733), (320, 780), (362, 749), (283, 775), (379, 773), (374, 730), (368, 732), (265, 748), (338, 767), (317, 736), (340, 784), (316, 788), (272, 728), (360, 770), (292, 762), (359, 756), (343, 778), (306, 767), (321, 784), (340, 725), (288, 724), (366, 789), (378, 735), (339, 735), (383, 753), (305, 780), (326, 773), (366, 750), (312, 729), (339, 738)],
Bbox(x0=723.0, y0=263.0, x1=794.0, y1=389.0),
</code></pre>
| 0 | 2016-08-23T15:22:28Z | 39,105,729 | <p>Does this do what you're looking for? </p>
<pre><code>re.findall(r'(\(\d+, \d+\))', coord_string)
</code></pre>
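<p>Applied to a slice of the data from the question, this yields the pair strings directly (note the matches keep the space after the comma, since that is how the pairs appear in the file):</p>

```python
import re

coord_string = "[(366, 732), (269, 759), (318, 739)]"
# one capturing group per pair, so findall returns the '(v1, v2)' strings
coords = re.findall(r'(\(\d+, \d+\))', coord_string)
print(coords)
```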
| 1 | 2016-08-23T15:52:50Z | [
"python",
"regex",
"list"
] |
How to find min value of another column greater than current column Pandas | 39,105,282 | <p>I am sure this is an easy one, but how do I find the minimum value of a column that is greater than the value in the current column? Also, how do I find the maximum value of a column less that the value in the current column?</p>
<pre><code>from io import StringIO
import io
text = """Order starttime endtime
1 2016-03-01 14:31:10.777 2016-03-01 14:31:10.803
1 2016-03-01 14:31:10.779 2016-03-01 14:31:10.780
1 2016-03-01 14:31:10.790 2016-03-01 14:31:10.791
1 2016-03-01 14:31:10.806 2016-03-01 14:31:10.863"""
df = pd.read_csv(StringIO(text), sep='\s{2,}', engine='python', parse_dates=[1, 2])
</code></pre>
<p>So, for example: for each value in the endtime column, I want the minimum value of the starttime column that is greater than that value.</p>
<p>The value associated with the endtime 2016-03-01 14:31:10.803 (the first value)
would then be 2016-03-01 14:31:10.806 (the last value of starttime).</p>
<p>The value associated with 2016-03-01 14:31:10.780 (the second endtime) should then be 2016-03-01 14:31:10.790.</p>
<p>So basically (in pseudocode)</p>
<p>df['nexttime'] = min(df['starttime'])>df['endtime']</p>
<p>Would appreciate any help .. I'm sure this is pretty easy for someone more skilled than I am</p>
| 0 | 2016-08-23T15:31:42Z | 39,105,595 | <p>You can try something like this:</p>
<pre><code>df.endtime.apply(lambda x: min(df.starttime[df.starttime > x]) if len(df.starttime[df.starttime > x]) != 0 else np.nan)
# 0 2016-03-01 14:31:10.806
# 1 2016-03-01 14:31:10.790
# 2 2016-03-01 14:31:10.806
# 3 NaT
# Name: endtime, dtype: datetime64[ns]
</code></pre>
<p>Or slightly more efficient way:</p>
<pre><code>def findMin(x):
larger = df.starttime[df.starttime > x]
if len(larger) != 0:
return min(larger)
else:
return np.nan
df.endtime.apply(findMin)
# 0 2016-03-01 14:31:10.806
# 1 2016-03-01 14:31:10.790
# 2 2016-03-01 14:31:10.806
# 3 NaT
# Name: endtime, dtype: datetime64[ns]
</code></pre>
<p>There is probably a way to avoid the vector scan, but if the performance is not a big issue, this works.</p>
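<p>The row-wise rule ("smallest start value strictly greater than this end value") is easy to verify on plain numbers; this sketch uses illustrative integers rather than timestamps, with <code>None</code> standing in for <code>NaT</code>:</p>

```python
starts = [10, 12, 25, 40]
ends = [18, 13, 30, 55]

def next_start_after(t):
    # keep only the start values strictly greater than t, then take the smallest
    later = [s for s in starts if s > t]
    return min(later) if later else None  # None plays the role of NaT

result = [next_start_after(t) for t in ends]
print(result)
```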
| 1 | 2016-08-23T15:46:09Z | [
"python",
"pandas",
"dataframe",
"aggregate",
"min"
] |
Merging several csv files and storing the file names as a variable - Python | 39,105,342 | <p>I am trying to append several csv files into a single csv file using python while adding the file name (or, even better, a sub-string of the file name) as a new variable. All files have headers. The following script does the trick of merging the files, but does not cover the file name as variable issue:</p>
<pre><code>import glob
filenames=glob.glob("/filepath/*.csv")
outputfile=open("out.csv","a")
for line in open(str(filenames[1])):
outputfile.write(line)
for i in range(1,len(filenames)):
f = open(str(filenames[i]))
f.next()
for line in f:
outputfile.write(line)
outputfile.close()
</code></pre>
<p>I was wondering if there are any good suggestions. I have about 25k small size csv files (less than 100KB each).</p>
 | 0 | 2016-08-23T15:34:05Z | 39,105,436 | <p>Simple changes will achieve what you want; just remember that <code>line</code> still ends with a newline, so strip it before appending the new field.
For the header line:</p>
<pre><code>outputfile.write(line) -> outputfile.write(line.rstrip('\n') + ',file\n')
</code></pre>
<p>and later, for the data lines:</p>
<pre><code>outputfile.write(line.rstrip('\n') + ',' + filenames[i] + '\n')
</code></pre>
| 0 | 2016-08-23T15:38:47Z | [
"python",
"csv"
] |
Merging several csv files and storing the file names as a variable - Python | 39,105,342 | <p>I am trying to append several csv files into a single csv file using python while adding the file name (or, even better, a sub-string of the file name) as a new variable. All files have headers. The following script does the trick of merging the files, but does not cover the file name as variable issue:</p>
<pre><code>import glob
filenames=glob.glob("/filepath/*.csv")
outputfile=open("out.csv","a")
for line in open(str(filenames[1])):
outputfile.write(line)
for i in range(1,len(filenames)):
f = open(str(filenames[i]))
f.next()
for line in f:
outputfile.write(line)
outputfile.close()
</code></pre>
<p>I was wondering if there are any good suggestions. I have about 25k small size csv files (less than 100KB each).</p>
| 0 | 2016-08-23T15:34:05Z | 39,105,877 | <p>You can use Python's <code>csv</code> module to parse the CSV files for you, and to format the output. Example code (untested):</p>
<pre><code>import csv
with open(output_filename, "wb") as outfile:
writer = None
for input_filename in filenames:
with open(input_filename, "rb") as infile:
reader = csv.DictReader(infile)
if writer is None:
field_names = ["Filename"] + reader.fieldnames
writer = csv.DictWriter(outfile, field_names)
writer.writeheader()
for row in reader:
row["Filename"] = input_filename
writer.writerow(row)
</code></pre>
<p>A few notes:</p>
<ul>
<li>Always use <code>with</code> to open files. This makes sure they will get closed again when you are done with them. Your code doesn't correctly close the input files.</li>
<li>CSV files should be opened in binary mode.</li>
<li>Indices start at 0 in Python. Your code skips the first file, and includes the lines from the second file twice. If you just want to iterate over a list, you don't need to bother with indices in Python. Simply use <code>for x in my_list</code> instead.</li>
</ul>
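<p>For the "sub-string of the file name" part of the question, the standard-library path helpers can trim the directory and extension before writing the value (the path below is illustrative):</p>

```python
import os

path = "/filepath/sales_2016_08.csv"
# basename drops the directory, splitext drops the extension
stem = os.path.splitext(os.path.basename(path))[0]
print(stem)
```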
| 0 | 2016-08-23T15:59:47Z | [
"python",
"csv"
] |
Python: adding values to columns in a text file | 39,105,352 | <p>I have a text file that looks like:</p>
<pre><code>0.0 3 0.1273
4.0 3 -0.0227
8.0 3 0.1273
</code></pre>
<p>I want to change this so that it prints out column1 and column2 and replaces the values in column3 to '1' for each row. So I want the output file to look like:</p>
<pre><code>0.0 3 1
4.0 3 1
8.0 3 1
</code></pre>
<p>How can I do this in python? I have a code that just reads and writes the text file as it is- not sure how to edit it so that it changes the third column. Any help will be much appreciated!</p>
<pre><code>fname="file.txt"
results = []
for line in open(fname,'r'):
col1=line.split()[0]
col2=line.split()[1]
col3=line.split()[2]
data = col1,col2, col3
#data.insert(col3, 1) #attempt1-this didnt work
#data.replace(col3, 1) #attempt2-this didnt work
results.append(data)
print(data) #this just prints the file as it is
with open ('new_file.txt', 'w') as datafile:
for data in results:
datafile.write ('{0}\n'.format(' '.join(data)))
</code></pre>
| 0 | 2016-08-23T15:34:31Z | 39,105,902 | <p>Recall that tuple is immutable. You can't change the value.
So when you do <code>data = col1, col2, col3</code>, <code>data</code> is a tuple, and its elements cannot be reassigned. Simply build it with the replacement value directly: <code>data = col1, col2, '1'</code> (keep it a string so the later <code>' '.join(data)</code> still works). </p>
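<p>A minimal sketch of that fix on one parsed line, keeping the replacement as the string <code>'1'</code> so the <code>' '.join(data)</code> from the question still works:</p>

```python
line = "0.0 3 0.1273"
col1, col2, _ = line.split()
data = (col1, col2, "1")   # build the tuple with the third field already replaced
print(" ".join(data))
```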
| 1 | 2016-08-23T16:00:59Z | [
"python",
"text-files"
] |
Spread function calls evenly over time in Python | 39,105,380 | <p>Let's say I have a function in Python and it's pretty fast, so I can call it in a loop 10,000 times per second.</p>
<p>I'd like to call it, for example, 2000 times per second but with even intervals between calls (not just call it 2000 times and wait till the end of the second). How can I achieve this in Python? </p>
| 0 | 2016-08-23T15:36:04Z | 39,105,597 | <p>You can use the built-in <a href="https://docs.python.org/2/library/sched.html" rel="nofollow"><code>sched</code></a> module which implements a general purpose scheduler.</p>
<pre><code>import sched, time
# Initialize the scheduler
s = sched.scheduler(time.time, time.sleep)
# Define some function for the scheduler to run
def some_func():
print('ran some_func')
# Add events to the scheduler and run
delay_time = 0.01
for jj in range(20):
    s.enter(delay_time*jj, 1, some_func, ())  # the argument tuple is required in Python 2
s.run()
</code></pre>
<p>Using the <code>s.enter</code> method puts the events into the scheduler with a delay relative to when the events are entered. It is also possible to schedule the events to occur at a specific time with <code>s.enterabs</code>.</p>
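<p>An alternative sketch without <code>sched</code>: pace each call against a fixed start time so rounding errors do not accumulate between iterations (Python 3 shown, for <code>time.monotonic</code>; the rate and call count are illustrative):</p>

```python
import time

def paced_calls(func, rate_hz, n_calls):
    """Call func n_calls times, spaced 1/rate_hz seconds apart."""
    interval = 1.0 / rate_hz
    start = time.monotonic()
    for i in range(n_calls):
        func()
        # sleep until the next slot, measured from the fixed start to avoid drift
        delay = start + (i + 1) * interval - time.monotonic()
        if delay > 0:
            time.sleep(delay)

ticks = []
paced_calls(lambda: ticks.append(time.monotonic()), 2000, 10)
```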
| 0 | 2016-08-23T15:46:13Z | [
"python"
] |
Python setuptools don't give me permission to execute script | 39,105,468 | <p>I'm running the following from the package dir:</p>
<pre><code>sudo ./setup.py develop
# or
sudo python setup.py develop
</code></pre>
<p>The package contains two executables. <code>setup.py</code> places them in <code>/usr/bin/</code>, but when I'm trying to run them, it fails with:</p>
<pre><code>-bash: /usr/bin/<executable>: Permission denied
</code></pre>
<p>WTF?</p>
<hr>
<p><strong>OS</strong>: Arch Linux x86_64 running inside a Vagrant container via VirtualBox on Windows 7</p>
<p>Under Ubuntu everything worked correctly.</p>
| 0 | 2016-08-23T15:40:29Z | 39,119,762 | <p>I found <a href="http://unix.stackexchange.com/questions/166948/why-is-my-pip-installed-python-script-not-executable-for-everyone">answer</a> on <a href="https://unix.stackexchange.com">https://unix.stackexchange.com</a>.</p>
<p>This helped (<code>umask</code> is a shell builtin, so it cannot be run through <code>sudo</code>; set it in your shell first and then re-run the install):</p>
<pre><code>umask 002
sudo python setup.py develop
</code></pre>
| 0 | 2016-08-24T09:44:16Z | [
"python",
"pip",
"package-managers"
] |
Python raw input function for tax calculation | 39,105,532 | <p>I am trying to make a simple calculator for working out the tax due on a salary. Please see the code below:</p>
<p>I keep getting this error and I don't know what is wrong, please help :) thanks!</p>
<pre><code>Traceback (most recent call last):
File "python", line 13
elif salary > 11000 and salary < 43000:
^
SyntaxError: invalid syntax
</code></pre>
<p>CODE:</p>
<pre><code>salary = raw_input ("What is your salary?")
print "So your gross annual salary is %r GBP" % (salary)
print "\nNow we need to calculate what your net salary is."
def taxes(salary):
salary >= 0
while true:
if salary < 11000:
tax = 0
elif salary > 11000 and salary < 43000:
tax = (0.2 * income) - 2200
elif salary > 43000 and salary < 150000:
tax = (0.4 * (salary - 43000)) + 6400
elif salary > 150000:
tax = ((salary - 150000) * 0.45) + 6400 + 42800
return tax
</code></pre>
| 0 | 2016-08-23T15:43:13Z | 39,105,714 | <p>As the comments have already stated, your indentation is incorrect. See below:</p>
<pre><code>def taxes(salary):
    tax = 0
if salary < 11000:
tax = 0
elif salary > 11000 and salary < 43000:
        tax = (0.2 * salary) - 2200
elif salary > 43000 and salary < 150000:
tax = (0.4 * (salary - 43000)) + 6400
elif salary > 150000:
tax = ((salary - 150000) * 0.45) + 6400 + 42800
print("Value of tax is: " + str(tax))
return tax
salary = raw_input ("What is your salary?")
print "So your gross annual salary is %r GBP" % (salary)
print "\nNow we need to calculate what your net salary is."
print("\nHere is your net salary after taxes: %r" % (taxes(int(salary))))
</code></pre>
<p>With python, indentations are how you tell the interpreter which blocks of code fall where (unlike with Java for example with semicolons being the end of line delimiter). By not indenting properly on your <code>elif</code> statements, you are essentially telling the program there is an <code>elif</code> without an <code>if</code> hence your syntax problem.</p>
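<p>For comparison, the same marginal-rate arithmetic can be written as a loop over bracket boundaries. The thresholds and rates below are taken from the question's figures, so, for example, the tax at 43000 is 6400, matching the constant in the <code>elif</code> branches:</p>

```python
def uk_tax(salary):
    # (lower bound, marginal rate) pairs from the question
    brackets = [(11000, 0.20), (43000, 0.40), (150000, 0.45)]
    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        # the band ends where the next one starts; the top band is uncapped
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if salary > lower:
            tax += rate * (min(salary, upper) - lower)
    return tax

print(uk_tax(43000), uk_tax(150000))
```

<p>Writing it this way avoids hard-coding the accumulated constants (6400, 42800) by computing each band's contribution directly.</p>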
| 0 | 2016-08-23T15:52:11Z | [
"python"
] |
Python raw input function for tax calculation | 39,105,532 | <p>I am trying to make a simple calculator for working out the tax due on a salary. Please see the code below:</p>
<p>I keep getting this error and I don't know what is wrong, please help :) thanks!</p>
<pre><code>Traceback (most recent call last):
File "python", line 13
elif salary > 11000 and salary < 43000:
^
SyntaxError: invalid syntax
</code></pre>
<p>CODE:</p>
<pre><code>salary = raw_input ("What is your salary?")
print "So your gross annual salary is %r GBP" % (salary)
print "\nNow we need to calculate what your net salary is."
def taxes(salary):
salary >= 0
while true:
if salary < 11000:
tax = 0
elif salary > 11000 and salary < 43000:
tax = (0.2 * income) - 2200
elif salary > 43000 and salary < 150000:
tax = (0.4 * (salary - 43000)) + 6400
elif salary > 150000:
tax = ((salary - 150000) * 0.45) + 6400 + 42800
return tax
</code></pre>
| 0 | 2016-08-23T15:43:13Z | 39,105,770 | <p>Steps to correct your code</p>
<p>step 1: <code>salary</code> should be an <code>int</code>; to correct this, convert the raw input with <code>int()</code> as in the code below</p>
<hr>
<p>step 2: Indentation is compulsory in python, so indent your code very well</p>
<hr>
<p>step 3: Add an else statement after the conditional statements</p>
<hr>
<p>step 4: indent return statement</p>
<p>change your code to this one</p>
<pre><code>salary = int(raw_input ("What is your salary?"))
print "So your gross annual salary is %r GBP" % (salary)
print "\nNow we need to calculate what your net salary is."
def taxes(salary):
    while True:
        if salary < 11000:
            tax = 0
        elif salary > 11000 and salary < 43000:
            tax = (0.2 * salary) - 2200
        elif salary > 43000 and salary < 150000:
            tax = (0.4 * (salary - 43000)) + 6400
        elif salary > 150000:
            tax = ((salary - 150000) * 0.45) + 6400 + 42800
        else:
            tax = 0  # salary sits exactly on a bracket boundary
        return tax
</code></pre>
| 1 | 2016-08-23T15:54:40Z | [
"python"
] |
Python raw input function for tax calculation | 39,105,532 | <p>I am trying to make a simple calculator for working out the tax due on a salary. Please see the code below:</p>
<p>I keep getting this error and I don't know what is wrong, please help :) thanks!</p>
<pre><code>Traceback (most recent call last):
File "python", line 13
elif salary > 11000 and salary < 43000:
^
SyntaxError: invalid syntax
</code></pre>
<p>CODE:</p>
<pre><code>salary = raw_input ("What is your salary?")
print "So your gross annual salary is %r GBP" % (salary)
print "\nNow we need to calculate what your net salary is."
def taxes(salary):
salary >= 0
while true:
if salary < 11000:
tax = 0
elif salary > 11000 and salary < 43000:
tax = (0.2 * income) - 2200
elif salary > 43000 and salary < 150000:
tax = (0.4 * (salary - 43000)) + 6400
elif salary > 150000:
tax = ((salary - 150000) * 0.45) + 6400 + 42800
return tax
</code></pre>
 | 0 | 2016-08-23T15:43:13Z | 39,105,995 | <p>That's because there is an indentation error on the <code>elif</code> lines; you can just copy-paste this:</p>
<p>Note: <code>salary > 11000 and salary < 43000</code> is equivalent to <code>11000 < salary < 43000</code> in Python.</p>
<pre><code>salary = raw_input ("What is your salary?")
print "So your gross annual salary is %r GBP" % (salary)
print "\nNow we need to calculate what your net salary is."
def taxes(salary):
    tax = 0
    while True:
        if salary < 11000:
            tax = 0
        elif 11000 < salary < 43000:
            tax = (0.2 * salary) - 2200
        elif salary > 43000 and salary < 150000:
            tax = (0.4 * (salary - 43000)) + 6400
        elif salary > 150000:
            tax = ((salary - 150000) * 0.45) + 6400 + 42800
        return tax
</code></pre>
| 0 | 2016-08-23T16:05:13Z | [
"python"
] |
Python raw input function for tax calculation | 39,105,532 | <p>I am trying to make a simple calculator for working out the tax due on a salary. Please see the code below:</p>
<p>I keep getting this error and I don't know what is wrong, please help :) thanks!</p>
<pre><code>Traceback (most recent call last):
File "python", line 13
elif salary > 11000 and salary < 43000:
^
SyntaxError: invalid syntax
</code></pre>
<p>CODE:</p>
<pre><code>salary = raw_input ("What is your salary?")
print "So your gross annual salary is %r GBP" % (salary)
print "\nNow we need to calculate what your net salary is."
def taxes(salary):
salary >= 0
while true:
if salary < 11000:
tax = 0
elif salary > 11000 and salary < 43000:
tax = (0.2 * income) - 2200
elif salary > 43000 and salary < 150000:
tax = (0.4 * (salary - 43000)) + 6400
elif salary > 150000:
tax = ((salary - 150000) * 0.45) + 6400 + 42800
return tax
</code></pre>
| 0 | 2016-08-23T15:43:13Z | 39,107,742 | <p>It felt like there was a better way of doing this, so I came up with an alternative route:</p>
<pre><code>tax_bands = [11000, 43000, 150000]
tax_amts = [0.2, 0.4, 0.45]
salary = 43001
</code></pre>
<p>Placing the thresholds and amounts into a list means that you can change them more easily if you need to.</p>
<p>The function below creates a list of the tax calculations, <code>tax_list</code>, and then a separate list of the maximum tax liability in each band called <code>max_tax</code> (the upper band has no maximum).
It then compares the values in the lists, and overwrites the <code>tax_list</code> if the corresponding value is larger than the maximum.
Then it calculates the sum of all values in <code>tax_list</code> greater than zero and returns it.</p>
<pre><code>def taxes(salary, tax_bands, tax_amts):
tax_list = [(pct * (salary - band)) for (band, pct) in zip(tax_bands, tax_amts)]
max_tax = []
for index, sal in enumerate(tax_bands[:-1]):
max_tax.append(tax_bands[index + 1] - sal)
max_tax = [segment * tax for segment, tax in zip(max_tax, tax_amts[:-1])]
for index, value in enumerate(tax_list):
try:
if value > max_tax[index]:
tax_list[index] = max_tax[index]
except:
pass
tax_to_pay = sum([x for x in tax_list if x > 0])
return tax_to_pay
print taxes(salary, tax_bands, tax_amts)
salary = input ("What is your salary?")
print "So your gross annual salary is %r GBP" % (salary)
print "\nYour net annual salary is: {} GBP".format(salary - taxes(salary, tax_bands, tax_amts))
</code></pre>
<p>To be super safe, you could also have the first line in the function call <code>int(salary)</code> using a <code>try except</code> just to check that it's the right type and that someone hasn't entered <code>43,000</code>.</p>
| 0 | 2016-08-23T17:49:07Z | [
"python"
] |
Python opening XML remotely for reading and parsing | 39,105,545 | <p>I am attempting to open an XML file remotely for reading and parsing, but I get an error when I try to use it. When I print it, it also starts with some unrecognized characters. Can you please help point me in the right direction so that I can open the XML file remotely and parse the data?</p>
<p>XML File:</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="Data.xsl"?>
<abc>
<md>
<mi>
<datetime>20160822020003</datetime>
<period>3600</period>
<it>Item_No_1</it>
<it>Item_No_2</it>
<it>Item_No_3</it>
<it>Item_No_4</it>
<it>Item_No_5</it>
<it>Item_No_6</it>
<it>Item_No_7</it>
<ovalue>
<v>1111111111</v>
<v>2222222222</v>
<v>3333333333</v>
<v>4444444444</v>
<v>5555555555</v>
<v>6666666666</v>
<v>7777777777</v>
</ovalue>
</mi>
</md>
</abc>
</code></pre>
<p>Code:</p>
<pre><code>import xml.etree.ElementTree as ET
with open('test.xml') as f:
for line in f:
x = line
print(x, end='')
root = ET.fromstring(x)
print(root.tag)
</code></pre>
<p>Error:</p>
<pre><code>xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 1, column 1
</code></pre>
| 0 | 2016-08-23T15:43:47Z | 39,106,364 | <p>The first three bytes in your example file are a UTF-8 <a href="https://en.wikipedia.org/wiki/Byte_order_mark" rel="nofollow">Byte Order Mark</a> (BOM), indicating the file is encoded as UTF-8.</p>
<p>Once the file has been read as text, the BOM survives as the single character <code>'\ufeff'</code>, and the expat parser behind <code>xml.etree.ElementTree</code> rejects it as an invalid token. Just strip it off before parsing; the parser doesn't need it:</p>
<pre><code>root = ET.fromstring(x.lstrip('\ufeff'))
</code></pre>
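<p>As an extra demonstration (an addition beyond the original answer, using a hypothetical minimal document rather than the file from the question), the <code>utf-8-sig</code> codec strips the BOM transparently on decode; <code>open('test.xml', encoding='utf-8-sig')</code> would do the same when reading from disk:</p>

```python
import xml.etree.ElementTree as ET

# Bytes as they would come off disk: a UTF-8 BOM (b'\xef\xbb\xbf')
# followed by the markup of a small made-up document.
raw = b'\xef\xbb\xbf<abc><md><mi><it>Item_No_1</it></mi></md></abc>'

# Decoded with plain utf-8, the BOM would survive as the character
# '\ufeff' and the parser would reject it; utf-8-sig removes it.
text = raw.decode('utf-8-sig')
root = ET.fromstring(text)
print(root.tag)  # abc
```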
| 0 | 2016-08-23T16:26:24Z | [
"python",
"xml",
"parsing"
] |
Conda (Python) Virtual Environment is not Portable from Windows to Linux | 39,105,596 | <p>On my Windows 10 machine, I created a virtual environment using the following command:</p>
<pre><code>>conda env export > environment.yml
</code></pre>
<p>I tried re-creating the virtual environment using the yml file on the Windows system and it worked fine. Then I transferred environment.yml to my Linux machine (Ubuntu 16.04.1) with the same version of conda and python and ran the following in the terminal:</p>
<pre><code>$ conda env create -f environment.yml
</code></pre>
<p>I get the following error:</p>
<blockquote>
<p>Using Anaconda Cloud api site <a href="https://api.anaconda.org" rel="nofollow">https://api.anaconda.org</a><br>
Fetching package metadata .......<br>
Solving package specifications: .<br>
Error: Packages missing in current linux-64 channels:<br>
- jpeg 8d vc14_0<br>
- libpng 1.6.22 vc14_0<br>
- libtiff 4.0.6 vc14_2<br>
- mkl 11.3.3 1<br>
- numpy 1.11.1 py35_1<br>
- openssl 1.0.2h vc14_0<br>
- pyqt 4.11.4 py35_7<br>
- qt 4.8.7 vc14_9<br>
- tk 8.5.18 vc14_0<br>
- vs2015_runtime 14.0.25123 0<br>
- zlib 1.2.8 vc14_3 </p>
</blockquote>
<p>Most of these packages are available in the linux repo of conda, but with a different flavor. For instance, if I remove vc14_0 from the line that contains the jpeg package in the yml file, that would work just fine. The package vs2015_runtime is not available in linux at all. Nothing gets returned when you run:</p>
<pre><code>conda search vs2015_runtime".
</code></pre>
<p>How can I export my virtual environment in a portable way when working cross-platform, so that all the packages can be installed in Linux as well?</p>
<p>Here is the content of my <a href="https://drive.google.com/open?id=0B4wYzrKgNQV4Wkd1RVhZUkNBX1k" rel="nofollow">environment.yml</a>.</p>
| 1 | 2016-08-23T15:46:10Z | 39,106,886 | <p>It looks like you are requesting packages built with the Microsoft Visual C/C++ compiler (the <code>vc</code> part of the build string).
Those Windows-specific builds won't exist, or be ABI-compatible, on Linux. Target only the packages that are not Windows-specific and let conda resolve the matching Linux builds itself.</p>
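<p>One way to make the exported file more portable (an addition beyond the original answer): delete Windows-only entries such as <code>vs2015_runtime</code> by hand and strip the build strings from the remaining dependency specs, so conda is free to pick platform-appropriate builds. Newer conda releases can do the latter directly with <code>conda env export --no-builds</code>; a small sketch of the same transformation in Python (the helper name is made up):</p>

```python
def strip_build_string(spec):
    """Turn a conda spec 'name=version=build' into 'name=version'.

    Specs without a build string ('name=version' or just 'name')
    are returned unchanged.
    """
    parts = spec.split('=')
    return '='.join(parts[:2]) if len(parts) == 3 else spec

# e.g. the Windows-specific numpy build from the error message:
print(strip_build_string('numpy=1.11.1=py35_1'))  # numpy=1.11.1
print(strip_build_string('python=3.5'))           # python=3.5
```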
| 1 | 2016-08-23T16:58:24Z | [
"python",
"linux",
"virtualenv",
"anaconda",
"conda"
] |
Simple list creation | 39,105,667 | <p>I'm new to Python and I have a problem.</p>
<p>I have a word and I want to get back a list like this:
outlist = ["word1", "word2", "word3", etc....]
This is what I tried:</p>
<pre><code>numberlist=list(range(1,1000))
numberlist=(','.join("'{0}'".format(x) for x in numberlist))
list=["word" + number for number in numberlist]
for elem in sq:
print (elem)
</code></pre>
<p>I get a list back, but it's wrong; I can't find a way to obtain what I want.
Where is the error?
Thanks for the help </p>
| 0 | 2016-08-23T15:49:46Z | 39,105,708 | <p>Your use of <code>join</code> is wrong. The simplest solution would be:</p>
<pre><code>print ["word" + str(i) for i in range(1, 1000)]
</code></pre>
<p>This gives:</p>
<pre><code>In [4]: ["word" + str(k) for k in range(5)]
Out[4]: ['word0', 'word1', 'word2', 'word3', 'word4']
</code></pre>
<p><code>join</code> is used to return a string that represents the concatenation of multiple strings in an iterable, so in this case you could use it to return a string, joined by comma:</p>
<pre><code>In [5]: ','.join(["word" + str(k) for k in range(5)])
Out[5]: 'word0,word1,word2,word3,word4'
</code></pre>
| 4 | 2016-08-23T15:51:52Z | [
"python"
] |
Python regex finding phrases in same line | 39,105,698 | <p>I have transcripts like this:</p>
<pre class="lang-none prettyprint-override"><code>speaker1 (caller): hello.
speaker2 (agent): thank you for calling.
speaker1 (caller): I need some help with my account 3429.
speaker2 (agent): Sure let me help.
</code></pre>
<p>They are of the form 'speakerN (caller or agent)'. I need to write a regex to get lists of caller and agent conversations. So for the above example, I would output:</p>
<pre><code>['(caller): hello. ', '(agent): thank you for calling', '(caller): I need some help with my account 3429.', '(agent): Sure let me help.']
</code></pre>
<p>Here's what I have so far: </p>
<pre><code>aList = re.findall('speaker. (.*) speaker.|$', transcript)
print(aList)
</code></pre>
<p>I know there's a speakerN in the front, some text that I need to capture, and then either another speakerN at the end (indicating a new list) or the end of line. This is the logic that I tried to capture, but it's putting the entire transcript in the one list element and an empty string in the second. Any help would be appreciated.</p>
| 1 | 2016-08-23T15:51:33Z | 39,105,831 | <p>Regex only produces non-overlapping matches, so the trailing <code>speaker</code> consumed by one match can't also start the next match. You need to put it inside a lookahead:</p>
<pre><code>speaker\d+ (\([^(]*?)(?=\s+speaker\d+|$)
</code></pre>
<p>This will capture the text in group 1.</p>
<p><a href="https://regex101.com/r/kY1bQ1/1" rel="nofollow">Demo.</a></p>
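<p>For completeness, here is the pattern applied in Python to the transcript from the question (a quick runnable sketch, added for illustration):</p>

```python
import re

transcript = """speaker1 (caller): hello.
speaker2 (agent): thank you for calling.
speaker1 (caller): I need some help with my account 3429.
speaker2 (agent): Sure let me help."""

# The trailing speaker marker sits in a zero-width lookahead, so it is
# not consumed and can still start the next match.
pattern = r'speaker\d+ (\([^(]*?)(?=\s+speaker\d+|$)'
parts = re.findall(pattern, transcript)
print(parts)
# ['(caller): hello.', '(agent): thank you for calling.',
#  '(caller): I need some help with my account 3429.',
#  '(agent): Sure let me help.']
```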
| 1 | 2016-08-23T15:57:36Z | [
"python",
"regex"
] |
Python regex finding phrases in same line | 39,105,698 | <p>I have transcripts like this:</p>
<pre class="lang-none prettyprint-override"><code>speaker1 (caller): hello.
speaker2 (agent): thank you for calling.
speaker1 (caller): I need some help with my account 3429.
speaker2 (agent): Sure let me help.
</code></pre>
<p>They are of the form 'speakerN (caller or agent)'. I need to write a regex to get lists of caller and agent conversations. So for the above example, I would output:</p>
<pre><code>['(caller): hello. ', '(agent): thank you for calling', '(caller): I need some help with my account 3429.', '(agent): Sure let me help.']
</code></pre>
<p>Here's what I have so far: </p>
<pre><code>aList = re.findall('speaker. (.*) speaker.|$', transcript)
print(aList)
</code></pre>
<p>I know there's a speakerN in the front, some text that I need to capture, and then either another speakerN at the end (indicating a new list) or the end of line. This is the logic that I tried to capture, but it's putting the entire transcript in the one list element and an empty string in the second. Any help would be appreciated.</p>
| 1 | 2016-08-23T15:51:33Z | 39,106,102 | <p>use <code>aList = re.findall('speaker\d+\s(.*?)(?=\sspeaker|$)', transcript)</code></p>
<p><code>.*?</code> is lazy: it stops matching as soon as the lookahead matches at the next occurrence of <code>speaker</code>, while <code>.*</code> would keep matching any character up to the last occurrence. Hope it helps.</p>
<p>Edit: use <code>speaker\d+</code> rather than <code>speaker.</code>, since <code>.</code> matches only one character.</p>
<p>Edit: this is not good if the word 'speaker' appears inside the conversation text itself, so use:</p>
<p><code>aList = re.findall('speaker\d+\s*(.*?)(?=\sspeaker\s*\(|$)', transcript)</code></p>
| 0 | 2016-08-23T16:11:05Z | [
"python",
"regex"
] |
understanding the size and structure of multi-dimensional array | 39,105,846 | <p>When reading an open source program, a function outputs the following multi-dimensional array, referred to as <code>batch</code>; <code>print(batch)</code> generates the output shown in the screenshot. To find out the exact structure of this output, I tried <code>print(batch.shape)</code>, which generates the following error message:</p>
<pre><code>AttributeError: 'tuple' object has no attribute 'shape'
</code></pre>
<p>What are the possible ways to understand the structure/size of this type of object?</p>
<p><a href="http://i.stack.imgur.com/31mav.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/31mav.jpg" alt="enter image description here"></a></p>
| 0 | 2016-08-23T15:58:10Z | 39,106,654 | <p>A tuple, like a Python list, has a length you can query with the built-in <code>len</code>:</p>
<pre><code>len(batch) # probably gives 2
</code></pre>
<p>You can look at the properties of the elements of tuple with iteration</p>
<pre><code>for arr in batch:
print(arr.shape)
print(arr.dtype)
</code></pre>
<p>Or</p>
<pre><code> [arr.shape for arr in batch]
</code></pre>
<p>or</p>
<pre><code> batch[0].shape
batch[1].shape
</code></pre>
| 0 | 2016-08-23T16:44:14Z | [
"python",
"numpy",
"scipy"
] |
Reflecting points in a bar graph using matplotlib | 39,105,884 | <p>long time lurker for programming questions, first time poster.</p>
<p>I am writing some code where I am making a bar graph of a bunch of values, some of which are negative and some of which are positive- <a href="http://i.stack.imgur.com/VaimT.png" rel="nofollow">plot here</a></p>
<p>In short, what I want to do is take all the negative values for the green part and overlay them onto the positive side, so you can see the asymmetry in those values. I have tried a few methods to get this to work, and perhaps I'm not searching the correct things but can't seem to find a good answer on how to do this.</p>
<p>The relevant code I have so far (hopefully not leaving anything out that's important for the purposes of the plot...):</p>
<pre><code>import glob
import pyrap.images as pim
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import matplotlib.mlab as mlab
from scipy.optimize import *
less_than_expected_min = -500
more_than_expected_max = 1200
n_bins = 100
bin_edges = np.linspace(less_than_expected_min, more_than_expected_max, n_bins)
for i in range(total):
all_clipped_values = np.zeros([total,n_bins-1])
clipped_levels= np.sum(all_clipped_values,axis=0)
reflect= np.concatenate([clipped_levels[0:30], clipped_levels[30:0]])
plt.bar(bin_edges[:-1],clipped_levels,width=(more_than_expected_max -less_than_expected_min)/float(n_bins), log=True,color='green')
plt.bar(bin_edges[:-1],reflect,width=(more_than_expected_max -less_than_expected_min)/float(n_bins), log=True,color='red')
</code></pre>
<p>The issue when I try this method, however, is I get "AssertionError: incompatible sizes: argument 'height' must be length 99 or scalar." It's not quite clear to me how to solve this, or in fact if there is a simpler way to do this reflection than what I'm thinking.</p>
<p>Any feedback appreciated- thanks!</p>
| 3 | 2016-08-23T16:00:00Z | 39,106,147 | <p>As I mentioned in my comment, maybe this will clarify</p>
<pre><code>>>> x = list(range(100))
>>> x[0:30]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
>>> x[30:0]
[]
>>> x[30:0:-1]
[30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
</code></pre>
<p>You have to specify the negative step.</p>
| 1 | 2016-08-23T16:14:09Z | [
"python",
"matplotlib"
] |
Reflecting points in a bar graph using matplotlib | 39,105,884 | <p>long time lurker for programming questions, first time poster.</p>
<p>I am writing some code where I am making a bar graph of a bunch of values, some of which are negative and some of which are positive- <a href="http://i.stack.imgur.com/VaimT.png" rel="nofollow">plot here</a></p>
<p>In short, what I want to do is take all the negative values for the green part and overlay them onto the positive side, so you can see the asymmetry in those values. I have tried a few methods to get this to work, and perhaps I'm not searching the correct things but can't seem to find a good answer on how to do this.</p>
<p>The relevant code I have so far (hopefully not leaving anything out that's important for the purposes of the plot...):</p>
<pre><code>import glob
import pyrap.images as pim
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import matplotlib.mlab as mlab
from scipy.optimize import *
less_than_expected_min = -500
more_than_expected_max = 1200
n_bins = 100
bin_edges = np.linspace(less_than_expected_min, more_than_expected_max, n_bins)
for i in range(total):
all_clipped_values = np.zeros([total,n_bins-1])
clipped_levels= np.sum(all_clipped_values,axis=0)
reflect= np.concatenate([clipped_levels[0:30], clipped_levels[30:0]])
plt.bar(bin_edges[:-1],clipped_levels,width=(more_than_expected_max -less_than_expected_min)/float(n_bins), log=True,color='green')
plt.bar(bin_edges[:-1],reflect,width=(more_than_expected_max -less_than_expected_min)/float(n_bins), log=True,color='red')
</code></pre>
<p>The issue when I try this method, however, is I get "AssertionError: incompatible sizes: argument 'height' must be length 99 or scalar." It's not quite clear to me how to solve this, or in fact if there is a simpler way to do this reflection than what I'm thinking.</p>
<p>Any feedback appreciated- thanks!</p>
| 3 | 2016-08-23T16:00:00Z | 39,106,665 | <p>Each time you call <code>plt.bar</code>, if your first argument is an array of length <strong>n</strong> (the set of abscissas), then your second argument must be an array of length <strong>n</strong> (the set of ordinates).</p>
<p>In your case, your set of abscissas is by construction an array of length 99, hence you must ensure your set of ordinates has the same shape.</p>
<p>For the first call, your second argument <code>clipped_levels</code> seems to have the right length, but for the second call, the second argument is <code>reflect</code> - which is far from 99 items long.</p>
<p>Fix that and it should work, hopefully !</p>
<p><strong>EDIT :</strong></p>
<p>Something like <code>reverse = np.concatenate([clipped_levels[:n_bins/2], clipped_levels[n_bins/2-2::-1]])</code> should do the trick.</p>
<p>Also, I still think your <code>for</code> loop could be replaced by a single instruction (the initialization of <code>all_clipped_values</code>), unless there is some other code inside that was not relevant here.</p>
| 0 | 2016-08-23T16:44:54Z | [
"python",
"matplotlib"
] |
Python: BeautifulSoup: CSS rule to select elements only if have two classes and share the same first one | 39,105,944 | <p>I have these elements in the HTML I want to parse:</p>
<pre><code> <td class="line"> GARBAGE </td>
<td class="line text"> I WANT THAT </td>
<td class="line heading"> I WANT THAT </td>
<td class="line"> GARBAGE </td>
</code></pre>
<p>How can I make a CSS selector that select elements with attributes class line and class something else (could be heading, text or anything else) BUT not attribute class line only?</p>
<p>I have tried:</p>
<pre><code> td[class=line.*]
td.line.*
td[class^=line.]
</code></pre>
<p>EDIT</p>
<p>I am using Python and BeautifulSoup:</p>
<pre><code>url = 'http://www.somewebsite'
res = requests.get(url)
res.raise_for_status()
DicoSoup = bs4.BeautifulSoup(res.text, "lxml")
elems = DicoSoup.select('body div#someid tr td.line')
</code></pre>
<p>I am looking into modifying the last piece, namely td.line to something like td.line.whateverotherclass (but not td.line alone otherwise my selector would suffice already)</p>
<p>Thank you for your tips,</p>
| 3 | 2016-08-23T16:02:55Z | 39,106,039 | <p>You can chain CSS classes for a class selector.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-css lang-css prettyprint-override"><code>.line {
color: green;
}
.line.text {
color: red;
}
.line.heading {
color: blue;
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><p class="line">GARBAGE</p>
<p class="line text">I WANT THAT</p>
<p class="line heading">I WANT THAT</p>
<p class="line">GARBAGE</p></code></pre>
</div>
</div>
</p>
| 0 | 2016-08-23T16:08:07Z | [
"python",
"html",
"css-selectors",
"beautifulsoup"
] |
Python: BeautifulSoup: CSS rule to select elements only if have two classes and share the same first one | 39,105,944 | <p>I have these elements in the HTML I want to parse:</p>
<pre><code> <td class="line"> GARBAGE </td>
<td class="line text"> I WANT THAT </td>
<td class="line heading"> I WANT THAT </td>
<td class="line"> GARBAGE </td>
</code></pre>
<p>How can I make a CSS selector that select elements with attributes class line and class something else (could be heading, text or anything else) BUT not attribute class line only?</p>
<p>I have tried:</p>
<pre><code> td[class=line.*]
td.line.*
td[class^=line.]
</code></pre>
<p>EDIT</p>
<p>I am using Python and BeautifulSoup:</p>
<pre><code>url = 'http://www.somewebsite'
res = requests.get(url)
res.raise_for_status()
DicoSoup = bs4.BeautifulSoup(res.text, "lxml")
elems = DicoSoup.select('body div#someid tr td.line')
</code></pre>
<p>I am looking into modifying the last piece, namely td.line to something like td.line.whateverotherclass (but not td.line alone otherwise my selector would suffice already)</p>
<p>Thank you for your tips,</p>
| 3 | 2016-08-23T16:02:55Z | 39,106,207 | <p>If you know all the other possible class names in advance, you can list them all. It's cumbersome, especially if you have <em>many</em> possible class names, but it's the single most reliable method.</p>
<p>Otherwise, if you can ensure that elements with only the "line" class don't have any whitespace in their class attributes, you can use either</p>
<pre><code>td.line:not([class="line"])
</code></pre>
<p>or</p>
<pre><code>td.line[class*=" "]
</code></pre>
<p>If you <em>cannot</em> ensure that the class attributes won't be littered with unnecessary whitespace (because your templating engine wasn't well designed for example), and you can't predict the other class names in advance, you'll need JavaScript.</p>
| 3 | 2016-08-23T16:16:44Z | [
"python",
"html",
"css-selectors",
"beautifulsoup"
] |
Python: BeautifulSoup: CSS rule to select elements only if have two classes and share the same first one | 39,105,944 | <p>I have these elements in the HTML I want to parse:</p>
<pre><code> <td class="line"> GARBAGE </td>
<td class="line text"> I WANT THAT </td>
<td class="line heading"> I WANT THAT </td>
<td class="line"> GARBAGE </td>
</code></pre>
<p>How can I make a CSS selector that select elements with attributes class line and class something else (could be heading, text or anything else) BUT not attribute class line only?</p>
<p>I have tried:</p>
<pre><code> td[class=line.*]
td.line.*
td[class^=line.]
</code></pre>
<p>EDIT</p>
<p>I am using Python and BeautifulSoup:</p>
<pre><code>url = 'http://www.somewebsite'
res = requests.get(url)
res.raise_for_status()
DicoSoup = bs4.BeautifulSoup(res.text, "lxml")
elems = DicoSoup.select('body div#someid tr td.line')
</code></pre>
<p>I am looking into modifying the last piece, namely td.line to something like td.line.whateverotherclass (but not td.line alone otherwise my selector would suffice already)</p>
<p>Thank you for your tips,</p>
| 3 | 2016-08-23T16:02:55Z | 39,108,518 | <p>What <a href="http://stackoverflow.com/a/39106207/771848">@BoltClock suggested</a> is generally a correct way to approach the problem with CSS selectors. The only problem is that <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors" rel="nofollow"><code>BeautifulSoup</code> supports a <em>limited number of CSS selectors</em></a>. For instance, <a href="https://bugs.launchpad.net/beautifulsoup/+bug/1607476" rel="nofollow"><code>not()</code> selector is :not(.supported) at the moment</a>.</p>
<p>You can workaround it with a "starts-with" selector to check if a class starts with <code>line</code> followed by a space (it is quite fragile but works on your sample data):</p>
<pre><code>for td in soup.select("td[class^='line ']"):
print(td.get_text(strip=True))
</code></pre>
<p>Or, you can solve it using the <code>find_all()</code> and having a <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-function" rel="nofollow">searching function</a> checking the <code>class</code> attribute to have <code>line</code> and some other class:</p>
<pre><code>from bs4 import BeautifulSoup
data = """
<table>
<tr>
<td class="line"> GARBAGE </td>
<td class="line text"> I WANT THAT </td>
<td class="line heading"> I WANT THAT </td>
<td class="line"> GARBAGE </td>
</tr>
</table>"""
soup = BeautifulSoup(data, 'html.parser')
for td in soup.find_all(lambda tag: tag and tag.name == "td" and
"class" in tag.attrs and "line" in tag["class"] and
len(tag["class"]) > 1):
print(td.get_text(strip=True))
</code></pre>
<p>Prints:</p>
<pre><code>I WANT THAT
I WANT THAT
</code></pre>
| 3 | 2016-08-23T18:40:46Z | [
"python",
"html",
"css-selectors",
"beautifulsoup"
] |
Best way to parse arguments in a REST api | 39,105,997 | <p>I am building a REST API based on Flask-RESTful, and I'm unsure what the best way is to parse the arguments I am expecting. </p>
<p>The info:</p>
<p>Table Schema: </p>
<pre><code>-----------------------------------------
| ID | NAME | IP | MAIL | OS | PASSWORD |
-----------------------------------------
</code></pre>
<p>Method that does the job:</p>
<pre><code>def update_entry(data, id):
...
...
pass
</code></pre>
<p>The resource that handles the request:</p>
<pre><code>def put(self, id):
json_data = request.get_json(force=True)
update_entry(json_data, id)
pass
</code></pre>
<p>Json format:</p>
<pre><code>{'NAME': 'john', 'OS': 'windows'}
</code></pre>
<p>I have to mention that I do not know if all the above are relevant to my question. </p>
<p>Now what I would like to know is: where is the proper place to check whether the client sent the arguments I want, and whether the keys in the request are valid?
I have thought of a couple of alternatives, but I have the feeling that I'm missing a best practice here.</p>
<ol>
<li>Pass whatever the client sends to the backend, let an error happen, and catch it.</li>
<li>Create something like a JSON template and validate the client's request against it before passing it on.</li>
</ol>
<p>Of course the first option is simpler, but the second doesn't create unnecessary load on my DB, although it might become quite complex. </p>
<p>Any opinion for either of the above two or any other suggestion welcome. </p>
| 0 | 2016-08-23T16:05:35Z | 39,107,310 | <p>It's not a good idea to let your database API catch the errors: a round-trip to the DB just to discover bad input will hamper performance.</p>
<p>Best case scenario, if you can error-check the JSON at the client, do it, but error-check the JSON in Python regardless. You must always assume that the network is compromised and that you will get garbage values and malicious requests. A design principle I read somewhere (Clean Code, I think) was to be strict on output but go easy on the input.</p>
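<p>To make the second alternative from the question concrete, here is a minimal stdlib-only validation sketch (an illustrative addition, not Flask-specific; the field names come from the table schema and the helper name is made up):</p>

```python
ALLOWED_KEYS = {"NAME", "IP", "MAIL", "OS", "PASSWORD"}  # from the table schema

def validate_payload(data):
    """Return a list of problems; an empty list means the payload is usable."""
    if not isinstance(data, dict):
        return ["payload must be a JSON object"]
    errors = []
    unknown = set(data) - ALLOWED_KEYS
    if unknown:
        errors.append("unknown keys: " + ", ".join(sorted(unknown)))
    for key, value in data.items():
        if key in ALLOWED_KEYS and not isinstance(value, str):
            errors.append(key + " must be a string")
    return errors

print(validate_payload({'NAME': 'john', 'OS': 'windows'}))  # []
print(validate_payload({'NAME': 42, 'COLOR': 'red'}))
```

<p>The resource's <code>put</code> method could then reject a bad request with a 400 before any database work happens.</p>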
| 0 | 2016-08-23T17:23:28Z | [
"python",
"rest",
"flask-restful"
] |
Best way to parse arguments in a REST api | 39,105,997 | <p>I am building a REST API based on Flask-RESTful, and I'm unsure what the best way is to parse the arguments I am expecting. </p>
<p>The info:</p>
<p>Table Schema: </p>
<pre><code>-----------------------------------------
| ID | NAME | IP | MAIL | OS | PASSWORD |
-----------------------------------------
</code></pre>
<p>Method that does the job:</p>
<pre><code>def update_entry(data, id):
...
...
pass
</code></pre>
<p>The resource that handles the request:</p>
<pre><code>def put(self, id):
json_data = request.get_json(force=True)
update_entry(json_data, id)
pass
</code></pre>
<p>Json format:</p>
<pre><code>{'NAME': 'john', 'OS': 'windows'}
</code></pre>
<p>I have to mention that I do not know if all the above are relevant to my question. </p>
<p>Now what I would like to know is: where is the proper place to check whether the client sent the arguments I want, and whether the keys in the request are valid?
I have thought of a couple of alternatives, but I have the feeling that I'm missing a best practice here.</p>
<ol>
<li>Pass whatever the client sends to the backend, let an error happen, and catch it.</li>
<li>Create something like a JSON template and validate the client's request against it before passing it on.</li>
</ol>
<p>Of course the first option is simpler, but the second doesn't create unnecessary load on my DB, although it might become quite complex. </p>
<p>Any opinion for either of the above two or any other suggestion welcome. </p>
| 0 | 2016-08-23T16:05:35Z | 39,902,270 | <p>Why don't you consider using a library like <a href="http://marshmallow.readthedocs.io/en/latest/" rel="nofollow">marshmallow</a>, since the flask-restful documentation suggests it? It will solve your problem in a proper, non-custom way, as opposed to writing the validation from scratch.</p>
| 1 | 2016-10-06T17:30:33Z | [
"python",
"rest",
"flask-restful"
] |
Capture output from external command and write it to a file | 39,106,032 | <p>I am trying to create a script that calls on a linux command from my Ubuntu server and prints the output of aforementioned command to txt files. This is literally the first script I've ever written, I just started learning python recently. I want 3 files in 3 separate folders with filenames unique to date.</p>
<pre><code>def swo():
from subprocess import call
call("svn info svn://url")
def tco():
from subprocess import call
call("svn info svn://url2")
def fco():
from subprocess import call
call("url3")
import time
timestr = time.strftime("%Y%m%d")
fs = "/path/1/" + timestr
ft = "/path/2/" + timestr
fc = "/path/3/" + timestr
f1 = open(fs + '.txt', 'w')
f1.write(swo)
f1.close()
f2 = open(ft + '.txt', 'w')
f2.write(tco)
f2.close()
f3 = open(fc + '.txt' 'w')
f3.write(fco)
f3.close()
</code></pre>
<p>It is failing at the f.write() functions. I'm stuck at making the output of the linux commands the actual text in the new files.</p>
| -3 | 2016-08-23T16:07:37Z | 39,106,185 | <p>You can do this instead:</p>
<pre><code>import time
import subprocess as sp

timestr = time.strftime("%Y%m%d")
fs = "/path/1/" + timestr
ft = "/path/2/" + timestr
fc = "/path/3/" + timestr

# Pass the command as a list of arguments: a plain string would be
# treated as a single executable name on Linux unless shell=True is used.
f1 = open(fs + '.txt', 'w')
rc = sp.call(["svn", "info", "svn://url"], stdout=f1, stderr=sp.STDOUT)
f1.close()

f2 = open(ft + '.txt', 'w')
rc = sp.call(["svn", "info", "svn://url2"], stdout=f2, stderr=sp.STDOUT)
f2.close()

f3 = open(fc + '.txt', 'w')
rc = sp.call(["svn", "info", "svn://url3"], stdout=f3, stderr=sp.STDOUT)
f3.close()
</code></pre>
<p>Assuming the <code>url3</code> command you used was supposed to be <code>svn info svn://url3</code>. This allows the <code>subprocess.call</code> to save command output directly to a file.</p>
| 0 | 2016-08-23T16:15:47Z | [
"python",
"linux",
"ubuntu",
"svn",
"scripting"
] |
Capture output from external command and write it to a file | 39,106,032 | <p>I am trying to create a script that calls on a linux command from my Ubuntu server and prints the output of aforementioned command to txt files. This is literally the first script I've ever written, I just started learning python recently. I want 3 files in 3 separate folders with filenames unique to date.</p>
<pre><code>def swo():
from subprocess import call
call("svn info svn://url")
def tco():
from subprocess import call
call("svn info svn://url2")
def fco():
from subprocess import call
call("url3")
import time
timestr = time.strftime("%Y%m%d")
fs = "/path/1/" + timestr
ft = "/path/2/" + timestr
fc = "/path/3/" + timestr
f1 = open(fs + '.txt', 'w')
f1.write(swo)
f1.close()
f2 = open(ft + '.txt', 'w')
f2.write(tco)
f2.close()
f3 = open(fc + '.txt' 'w')
f3.write(fco)
f3.close()
</code></pre>
<p>It is failing at the f.write() functions. I'm stuck at making the output of the linux commands the actual text in the new files.</p>
| -3 | 2016-08-23T16:07:37Z | 39,109,668 | <p>I figured it out after all. The following works great!</p>
<pre><code>## This will get the last revision number overall in repository ##
import os
sfo = os.popen("svn info svn://url1 | grep Revision")
sfo_output = sfo.read()
tco = os.popen("svn info svn://url2 | grep Revision")
tco_output = tco.read()
fco = os.popen("svn info svn://url3 | grep Revision")
fco_output = fco.read()
## This part imports the time function, and creates a variable that will be the ##
## save path of the new file which is than output in the f1, f2 and f3 sections ##
import time
timestr = time.strftime("%Y%m%d")
fs = "/root/path/" + timestr
ft = "/root/path/" + timestr
fc = "/root/path/" + timestr
f1 = open(fs + '-code-rev.txt', 'w')
f1.write(sfo_output)
f1.close()
f2 = open(ft + '-code-rev.txt', 'w')
f2.write(tco_output)
f2.close()
f3 = open(fc + '-code-rev.txt', 'w')
f3.write(fco_output)
f3.close()
</code></pre>
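<p>As a side note (an addition beyond the original answer): <code>os.popen</code> can be replaced with <code>subprocess.check_output</code>, which also raises an exception when the command exits with a non-zero status. A sketch using a placeholder pipeline in place of the real <code>svn</code> command:</p>

```python
import subprocess

# Placeholder pipeline standing in for: svn info svn://url1 | grep Revision
cmd = "echo 'Revision: 1234' | grep Revision"
revision_line = subprocess.check_output(cmd, shell=True).decode()
print(revision_line)  # Revision: 1234
```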
| 0 | 2016-08-23T19:56:10Z | [
"python",
"linux",
"ubuntu",
"svn",
"scripting"
] |
Integer list to ranges | 39,106,236 | <p>I need to convert a list of ints to a string containing all the ranges in the list.
So for example, the output should be as follows:</p>
<pre><code>getIntRangesFromList([1,3,7,2,11,8,9,11,12,15]) -> "1-3,7-9,11-12,15"
</code></pre>
<p>So the input is not sorted and there can be duplicate values. The lists range in size from one element to 4k elements. The minimum and maximum values are 1 and 4094.</p>
<p>This is part of a performance critical piece of code. I have been trying to optimize this, but I can't find a way to get this faster. This is my current code:</p>
<pre><code>def _getIntRangesFromList(list):
if (list==[]):
return ''
list.sort()
ranges = [[list[0],list[0]]] # ranges contains the start and end values of each range found
for val in list:
r = ranges[-1]
if val==r[1]+1:
r[1] = val
elif val>r[1]+1:
ranges.append([val,val])
return ",".join(["-".join([str(y) for y in x]) if x[0]!=x[1] else str(x[0]) for x in ranges])
</code></pre>
<p>Any idea on how to get this faster?</p>
| 2 | 2016-08-23T16:18:43Z | 39,106,411 | <p>This could be a task for the <a href="https://docs.python.org/3/library/itertools.html" rel="nofollow"><code>itertools</code></a> module.</p>
<pre><code>import itertools
list_num = [1, 2, 3, 7, 8, 9, 11, 12, 15]
groups = (list(x) for _, x in
itertools.groupby(list_num, lambda x, c=itertools.count(): x - next(c)))
print(', '.join('-'.join(map(str, (item[0], item[-1])[:len(item)])) for item in groups))
</code></pre>
<p>This will give you <code>1-3, 7-9, 11-12, 15</code>.</p>
<p>To understand what's going on you might want to check the content of <code>groups</code>.</p>
<pre><code>import itertools
list_num = [1, 2, 3, 7, 8, 9, 11, 12, 15]
groups = (list(x) for _, x in
itertools.groupby(list_num, lambda x, c=itertools.count(): x - next(c)))
for element in groups:
print('element={}'.format(element))
</code></pre>
<p>This will give you the following output.</p>
<pre><code>element=[1, 2, 3]
element=[7, 8, 9]
element=[11, 12]
element=[15]
</code></pre>
<p>The basic idea is to have a counter running parallel to the numbers. <code>groupby</code> will create individual groups for numbers with the same numerical distance to the current value of the counter.</p>
<p>I don't know whether this is faster on your version of Python. You'll have to check this yourself. In my setting it's slower with this data set, but faster with a bigger number of elements.</p>
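<p>One caveat worth spelling out: the question's actual input is unsorted and contains a duplicate (<code>11</code>), while the counter trick above requires sorted, unique values, since a duplicate or out-of-order element breaks the constant-difference grouping. A small sketch of the required preprocessing step:</p>

```python
import itertools

raw = [1, 3, 7, 2, 11, 8, 9, 11, 12, 15]  # unsorted, with a duplicate 11
list_num = sorted(set(raw))                # -> [1, 2, 3, 7, 8, 9, 11, 12, 15]

# Same counter trick as above, now on clean input
groups = (list(x) for _, x in
          itertools.groupby(list_num, lambda x, c=itertools.count(): x - next(c)))
result = ','.join('-'.join(map(str, (g[0], g[-1])[:len(g)])) for g in groups)
print(result)  # -> 1-3,7-9,11-12,15
```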
| 1 | 2016-08-23T16:29:18Z | [
"python",
"algorithm",
"python-2.7"
] |
Integer list to ranges | 39,106,236 | <p>I need to convert a list of ints to a string containing all the ranges in the list.
So for example, the output should be as follows:</p>
<pre><code>getIntRangesFromList([1,3,7,2,11,8,9,11,12,15]) -> "1-3,7-9,11-12,15"
</code></pre>
<p>So the input is not sorted and there can be duplicate values. The lists range in size from one element to 4k elements. The minimum and maximum values are 1 and 4094.</p>
<p>This is part of a performance critical piece of code. I have been trying to optimize this, but I can't find a way to get this faster. This is my current code:</p>
<pre><code>def _getIntRangesFromList(list):
if (list==[]):
return ''
list.sort()
ranges = [[list[0],list[0]]] # ranges contains the start and end values of each range found
for val in list:
r = ranges[-1]
if val==r[1]+1:
r[1] = val
elif val>r[1]+1:
ranges.append([val,val])
return ",".join(["-".join([str(y) for y in x]) if x[0]!=x[1] else str(x[0]) for x in ranges])
</code></pre>
<p>Any idea on how to get this faster?</p>
| 2 | 2016-08-23T16:18:43Z | 39,107,192 | <pre><code>def _to_range(l, start, stop, idx, result):
if idx == len(l):
result.append((start, stop))
return result
if l[idx] - stop > 1:
result.append((start, stop))
return _to_range(l, l[idx], l[idx], idx + 1, result)
return _to_range(l, start, l[idx], idx + 1, result)
def get_range(l):
if not l:
return []
return _to_range(l, start = l[0], stop = l[0], idx = 0, result = [])
l = [1, 2, 3, 7, 8, 9, 11, 12, 15]
result = get_range(l)
print(result)
>>> [(1, 3), (7, 9), (11, 12), (15, 15)]
# I think it's better to fetch the data as-is and, if needed, format it
# with:
print(','.join('-'.join([str(start), str(stop)]) for start, stop in result))
>>> 1-3,7-9,11-12,15-15
</code></pre>
<p>Unless you don't care at all about the data, then u can just append str(start) + '-' + str(stop) in _to_range function so later there will be no need to type extra '-'.join method.</p>
| 0 | 2016-08-23T17:16:12Z | [
"python",
"algorithm",
"python-2.7"
] |
Integer list to ranges | 39,106,236 | <p>I need to convert a list of ints to a string containing all the ranges in the list.
So for example, the output should be as follows:</p>
<pre><code>getIntRangesFromList([1,3,7,2,11,8,9,11,12,15]) -> "1-3,7-9,11-12,15"
</code></pre>
<p>So the input is not sorted and there can be duplicate values. The lists range in size from one element to 4k elements. The minimum and maximum values are 1 and 4094.</p>
<p>This is part of a performance critical piece of code. I have been trying to optimize this, but I can't find a way to get this faster. This is my current code:</p>
<pre><code>def _getIntRangesFromList(list):
if (list==[]):
return ''
list.sort()
ranges = [[list[0],list[0]]] # ranges contains the start and end values of each range found
for val in list:
r = ranges[-1]
if val==r[1]+1:
r[1] = val
elif val>r[1]+1:
ranges.append([val,val])
return ",".join(["-".join([str(y) for y in x]) if x[0]!=x[1] else str(x[0]) for x in ranges])
</code></pre>
<p>Any idea on how to get this faster?</p>
| 2 | 2016-08-23T16:18:43Z | 39,108,612 | <p>I'll concentrate in performance that is your main issue. I'll give 2 solutions:</p>
<p>1) If the boundaries of the integers stored is between A and B, and you can create an array of booleans(even you can choose an array of bits for expand de range you can storage) with (B - A + 2) elements, e.g. A = 0 and B = 1 000 000, we can do this (i'll write it in C#, sorry XD). This run in O(A - B) and is a good solution if A - B is less than the amount of numbers:</p>
<pre><code>public string getIntRangesFromList(int[] numbers)
{
//You can change this 2 constants
const int A = 0;
const int B = 1000000;
//Create an array with all its values in false by default
//Last value always will be in false in propourse, as you can see it storage 1 value more than needed for 2nd cycle
bool[] apparitions = new bool[B - A + 2];
int minNumber = B + 1;
int maxNumber = A - 1;
int pos;
for (int i = 0; i < numbers.Length; i++)
{
pos = numbers[i] - A;
apparitions[pos] = true;
if (minNumber > pos)
{
minNumber = pos;
}
if (maxNumber < pos)
{
maxNumber = pos;
}
}
//I will keep the concatenation simple, but you could use a StringBuilder to improve performance
string result = "";
bool isInRange = false;
bool isFirstRange = true;
int firstPosOfRange = 0; //Irrelevant what is its initial value
for (int i = minNumber; i <= maxNumber + 1; i++)
{
if (!isInRange)
{
if (apparitions[i])
{
if (!isFirstRange)
{
result += ",";
}
else
{
isFirstRange = false;
}
result += (i + A);
isInRange = true;
firstPosOfRange = i;
}
}
else
{
if (!apparitions[i])
{
if (i > firstPosOfRange + 1)
{
result += "-" + (i + A - 1);
}
isInRange = false;
}
}
}
return result;
}
</code></pre>
<p>2) O(N * log N)</p>
<pre><code> public string getIntRangesFromList2(int[] numbers)
{
string result = "";
if (numbers.Length > 0)
{
                numbers = numbers.OrderBy(x => x).ToArray(); //sorting (note OrderBy is not in-place), making the algorithm complexity O(N * log N)
result += numbers[0];
int countNumbersInRange = 1;
for (int i = 1; i < numbers.Length; i++)
{
if (numbers[i] != numbers[i - 1] + 1)
{
if (countNumbersInRange > 1)
{
result += "-" + numbers[i - 1];
}
result += "," + numbers[i];
countNumbersInRange = 1;
}
else
{
countNumbersInRange++;
}
}
}
return result;
}
</code></pre>
| 0 | 2016-08-23T18:47:01Z | [
"python",
"algorithm",
"python-2.7"
] |
Integer list to ranges | 39,106,236 | <p>I need to convert a list of ints to a string containing all the ranges in the list.
So for example, the output should be as follows:</p>
<pre><code>getIntRangesFromList([1,3,7,2,11,8,9,11,12,15]) -> "1-3,7-9,11-12,15"
</code></pre>
<p>So the input is not sorted and there can be duplicate values. The lists range in size from one element to 4k elements. The minimum and maximum values are 1 and 4094.</p>
<p>This is part of a performance critical piece of code. I have been trying to optimize this, but I can't find a way to get this faster. This is my current code:</p>
<pre><code>def _getIntRangesFromList(list):
if (list==[]):
return ''
list.sort()
ranges = [[list[0],list[0]]] # ranges contains the start and end values of each range found
for val in list:
r = ranges[-1]
if val==r[1]+1:
r[1] = val
elif val>r[1]+1:
ranges.append([val,val])
return ",".join(["-".join([str(y) for y in x]) if x[0]!=x[1] else str(x[0]) for x in ranges])
</code></pre>
<p>Any idea on how to get this faster?</p>
| 2 | 2016-08-23T16:18:43Z | 39,110,382 | <p>The fastest one I could come up with, which tests about 10% faster than your solution on my machine (according to timeit):</p>
<pre><code>def _ranges(l):
if l:
l.sort()
return ''.join([(str(l[i]) + ('-' if l[i] + 1 == l[i + 1] else ','))
for i in range(0, len(l) - 1) if l[i - 1] + 2 != l[i + 1]] +
[str(l[-1])])
else: return ''
</code></pre>
<p>The above code assumes that the values in the list are unique. If they aren't, it's easy to fix but there's a subtle hack which will no longer work and the end result will be slightly slower.</p>
<p>I actually timed <code>_ranges(u[:])</code> because of the sort; u is 600 randomly selected integers from range(1000) comprising 235 subsequences; 83 are singletons and 152 contain at least two numbers. If the list is sorted, quite a lot of time is saved.</p>
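<p>Since the question's input is unsorted and contains duplicates, here is one hypothetical duplicate-tolerant variant (not the answer's code above, and without its wraparound trick) that dedupes while sorting and then emits the runs in a single pass:</p>

```python
def ranges_with_dups(values):
    """Duplicate-tolerant variant: sort + dedupe, then emit runs."""
    if not values:
        return ''
    nums = sorted(set(values))
    out, start = [], nums[0]
    for prev, cur in zip(nums, nums[1:]):
        if cur != prev + 1:          # the current run is broken here
            out.append(str(start) if start == prev else '%d-%d' % (start, prev))
            start = cur
    last = nums[-1]                  # close the final run
    out.append(str(start) if start == last else '%d-%d' % (start, last))
    return ','.join(out)

print(ranges_with_dups([1, 3, 7, 2, 11, 8, 9, 11, 12, 15]))  # -> 1-3,7-9,11-12,15
```

<p>It is a straightforward sketch rather than a tuned implementation, so it would need its own timeit run before comparing it against the one-liner above.</p>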
| 0 | 2016-08-23T20:47:04Z | [
"python",
"algorithm",
"python-2.7"
] |
Python Numpy - attach three arrays to form a matrix or 3D array | 39,106,248 | <p>This is my simple piece of code:</p>
<p>Everything is a numpy array. I welcome manipulation using lists too.</p>
<pre><code>a = [1,2,3,4,5]
b = [3,2,2,2,8]
c = ['test1', 'test2', 'test3','test4','test5']
</code></pre>
<p>expected Outcome: </p>
<pre><code>d = [ 1, 2, 3, 4, 5;
3, 2, 2, 2, 8;
'test1','test2', 'test3', 'test4','test5' ]
</code></pre>
<p>OR</p>
<pre><code> d = [ 1 3 'test1';
2 2 'test2';
3 2 'test3';
4 2 'test4';
5 8 'test5']
</code></pre>
| 1 | 2016-08-23T16:19:15Z | 39,106,280 | <p>Check out the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow">concat method.</a></p>
<pre><code>>>> a = np.array([[1, 2], [3, 4]])
>>> b = np.array([[5, 6]])
>>> np.concatenate((a, b), axis=0)
array([[1, 2],
[3, 4],
[5, 6]])
</code></pre>
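<p>To get the OP's second desired layout (one row per record), <code>np.column_stack</code> can be used instead; note that, as with concatenation, mixing ints and strings promotes everything to a string dtype. A short sketch, assuming numpy is available:</p>

```python
import numpy as np

a = [1, 2, 3, 4, 5]
b = [3, 2, 2, 2, 8]
c = ['test1', 'test2', 'test3', 'test4', 'test5']

# Stack the three lists as columns: shape (5, 3), everything promoted to strings
d = np.column_stack([a, b, c])
print(d[0])  # -> ['1' '3' 'test1']
```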
| 0 | 2016-08-23T16:21:20Z | [
"python",
"arrays",
"numpy"
] |
Python Numpy - attach three arrays to form a matrix or 3D array | 39,106,248 | <p>This is my simple piece of code:</p>
<p>Everything is a numpy array. I welcome manipulation using lists too.</p>
<pre><code>a = [1,2,3,4,5]
b = [3,2,2,2,8]
c = ['test1', 'test2', 'test3','test4','test5']
</code></pre>
<p>expected Outcome: </p>
<pre><code>d = [ 1, 2, 3, 4, 5;
3, 2, 2, 2, 8;
'test1','test2', 'test3', 'test4','test5' ]
</code></pre>
<p>OR</p>
<pre><code> d = [ 1 3 'test1';
2 2 'test2';
3 2 'test3';
4 2 'test4';
5 8 'test5']
</code></pre>
| 1 | 2016-08-23T16:19:15Z | 39,106,416 | <p>Adam's <a href="http://stackoverflow.com/q/39106248/6748857">answer</a> using <code>numpy.concatenate</code> is also correct, but in terms of specifying the exact shape you are expecting (rows stacked vertically) you'll want to look at <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow">numpy.vstack</a>:</p>
<pre><code>>>> import numpy as np
>>> np.vstack([a, b, c])
array([['1', '2', '3', '4', '5'],
['3', '2', '2', '2', '8'],
['test1', 'test2', 'test3', 'test4', 'test5']],
dtype='<U21')
</code></pre>
<p>There's a catch here either way you do it: since your separate arrays (<code>int64</code>, <code>int64</code>, <code>&lt;U5</code>) are all being put together, <em>the new array will automatically use the least restrictive type</em>, which in this case is the unicode type.</p>
<p>See also: <code>numpy.hstack</code>.</p>
| 0 | 2016-08-23T16:29:31Z | [
"python",
"arrays",
"numpy"
] |
Python Numpy - attach three arrays to form a matrix or 3D array | 39,106,248 | <p>This is my simple piece of code:</p>
<p>Everything is a numpy array. I welcome manipulation using lists too.</p>
<pre><code>a = [1,2,3,4,5]
b = [3,2,2,2,8]
c = ['test1', 'test2', 'test3','test4','test5']
</code></pre>
<p>expected Outcome: </p>
<pre><code>d = [ 1, 2, 3, 4, 5;
3, 2, 2, 2, 8;
'test1','test2', 'test3', 'test4','test5' ]
</code></pre>
<p>OR</p>
<pre><code> d = [ 1 3 'test1';
2 2 'test2';
3 2 'test3';
4 2 'test4';
5 8 'test5']
</code></pre>
| 1 | 2016-08-23T16:19:15Z | 39,106,501 | <p>Your <code>a</code>,<code>b</code>,<code>c</code> are lists; you'd have to use <code>np.array([1,2,3])</code> to get an array, and it will display as <code>[1 2 3]</code> (without the commas).</p>
<p>Simply creating a new list from those lists produces a list of lists</p>
<pre><code>In [565]: d=[a,b,c]
In [566]: d
Out[566]:
[[1, 2, 3, 4, 5],
[3, 2, 2, 2, 8],
['test1', 'test2', 'test3', 'test4', 'test5']]
</code></pre>
<p>and simply concatenating the lists produces one longer one</p>
<pre><code>In [567]: a+b+c
Out[567]: [1, 2, 3, 4, 5, 3, 2, 2, 2, 8, 'test1', 'test2', 'test3', 'test4', 'test5']
</code></pre>
<p><code>numpy</code> arrays have problems containing both numbers and strings. You have to make a 'structured array'.</p>
<p>The easiest way to combine these into one array is with a <code>fromarrays</code> utility function:</p>
<pre><code>In [561]: x=np.rec.fromarrays(([1,2,3],[3,2,2],['test1','test2','test3']))
In [562]: x['f0']
Out[562]: array([1, 2, 3])
In [563]: x['f2']
Out[563]:
array(['test1', 'test2', 'test3'],
dtype='<U5')
In [568]: x
Out[568]:
rec.array([(1, 3, 'test1'), (2, 2, 'test2'), (3, 2, 'test3')],
dtype=[('f0', '<i4'), ('f1', '<i4'), ('f2', '<U5')])
</code></pre>
<p>or with a little editing of the display:</p>
<pre><code>In [569]: print(x)
[(1, 3, 'test1')
(2, 2, 'test2')
(3, 2, 'test3')]
</code></pre>
<p>This is not a 2d array; it's 1d (here 3 elements) with 3 fields.</p>
<p>Perhaps the easiest way to format this array in a way that looks like your spec would be with a <code>csv</code> writer:</p>
<pre><code>In [570]: np.savetxt('x.txt',x,fmt='%d %d %s;')
In [571]: cat x.txt # shell command to display the file
1 3 test1;
2 2 test2;
3 2 test3;
</code></pre>
| 0 | 2016-08-23T16:34:23Z | [
"python",
"arrays",
"numpy"
] |
Cassandra model paging with REST API | 39,106,256 | <p>I have a model defined as:</p>
<pre><code># File: models.py
from uuid import uuid4
from cassandra.cqlengine.models import Model
from cassandra.cqlengine import columns
class StudentModel(Model):
__table_name__ = 'students'
id = columns.UUID(primary_key=True, default=uuid4)
name = columns.Text(index=True, required=True)
def __json__(self):
return {'id': str(self.id),
'name': self.name}
</code></pre>
<p>I wrote bottle app which serve data from this model.</p>
<pre><code># File: app.py
from bottle import run
from bottle import Bottle, request, HTTPResponse
from cassandra.cqlengine import connection
from cassandra.cqlengine.management import sync_table
from models import StudentModel
API = Bottle()
# Create Connection
connection.setup(hosts=['192.168.99.100'],
default_keyspace='test',
protocol_version=3)
# Sync database table to create table in keyspace
sync_table(StudentModel)
@API.get('/students')
def get_all_students():
all_objs = StudentModel.all()
return HTTPResponse(
body={'data': [x.__json__() for x in all_objs]},
headers={'content-type': 'application/json'},
status_code=200)
run(host='localhost',
port=8080,
app=API,
server='auto')
</code></pre>
<p>This code works fine, and I can call the API as follows:</p>
<pre><code>curl http://localhost:8080/students -i
HTTP/1.1 200 OK
Content-Length: 74
Content-Type: application/json
Date: Tue, 23 Aug 2016 15:55:23 GMT
Server: waitress
Status-Code: 200
{"data": [{"id": "7f6d18ec-bf24-4583-a06b-b9f55a4dc6e8", "name": "test"}, {"id": "7f6d18ec-bf24-4583-a06b-b9f55a4dc6e9", "name": "test1"}]}
</code></pre>
<p>Now I want to add paging, and want to create an API which has <code>limit</code> and <code>offset</code>.</p>
<p>I checked <a href="https://datastax.github.io/python-driver/query_paging.html#paging-large-queries" rel="nofollow">Paging Large Queries</a> but it has no example with <code>Model</code>.</p>
<p>Then I changed my API to:</p>
<pre><code># File: app.py
...
...
@API.get('/students')
def get_all_students():
limit = request.query.limit
offset = request.query.offset
all_objs = StudentModel.all()
if limit and offset:
all_objs = all_objs[int(offset): int(offset+limit)]
return HTTPResponse(
body={'data': [x.__json__() for x in all_objs]},
headers={'content-type': 'application/json'},
status_code=200)
...
...
</code></pre>
<p>And I call the API as:</p>
<pre><code>curl "http://localhost:8080/students?limit=1&offset=0" -i
HTTP/1.1 200 OK
Content-Length: 74
Content-Type: application/json
Date: Tue, 23 Aug 2016 16:12:00 GMT
Server: waitress
Status-Code: 200
{"data": [{"id": "7f6d18ec-bf24-4583-a06b-b9f55a4dc6e8", "name": "test"}]}
</code></pre>
<p>and </p>
<pre><code>curl "http://localhost:8080/students?limit=1&offset=1" -i
HTTP/1.1 200 OK
Content-Length: 75
Content-Type: application/json
Date: Tue, 23 Aug 2016 16:12:06 GMT
Server: waitress
Status-Code: 200
{"data": [{"id": "7f6d18ec-bf24-4583-a06b-b9f55a4dc6e9", "name": "test1"}]}
</code></pre>
<p>I got another solution using <a href="https://datastax.github.io/python-driver/api/cassandra/cluster.html#cassandra.cluster.ResponseFuture.has_more_pages" rel="nofollow">has_more_pages</a> and <a href="https://datastax.github.io/python-driver/api/cassandra/cluster.html#cassandra.cluster.ResponseFuture.start_fetching_next_page" rel="nofollow">start_fetching_next_page()</a>:</p>
<pre><code>from bottle import run
from bottle import Bottle, request, HTTPResponse
from cassandra.cqlengine import connection
from cassandra.query import SimpleStatement
from cassandra.cqlengine.management import sync_table
from models import StudentModel
API = Bottle()
# Create Connection
connection.setup(hosts=['192.168.99.100'],
default_keyspace='test',
protocol_version=3)
# Sync database table to create table in keyspace
sync_table(StudentModel)
@API.get('/students')
def get_all_students():
limit = request.query.limit
offset = request.query.offset
page = int(request.query.page or 0)
session = connection.get_session()
session.default_fetch_size = 1
objs = StudentModel.all()
result = objs._execute(objs._select_query())
data = []
count = 0
while (not page or page > count) and result.has_more_pages:
count += 1
if page and page > count:
result.fetch_next_page()
continue
data.extend(result.current_rows)
result.fetch_next_page()
all_objs = [StudentModel(**student) for student in data]
return HTTPResponse(
body={'data': [x.__json__() for x in all_objs]},
headers={'content-type': 'application/json'},
status_code=200)
run(host='localhost',
port=8080,
app=API,
debug=True,
server='auto')
</code></pre>
<p>Of the above 2 solutions, which one is correct?</p>
| 0 | 2016-08-23T16:19:32Z | 39,110,483 | <p>Currently, there is no efficient way to do pagination with CQLEngine. Using QuerySet slicing works, but be aware that previous pages will still be materialized internally in the result cache. So, this can lead to memory issues and it also affects the request performance. I've created a ticket to analyze a way to fill a single page at a time. You can watch the following ticket:</p>
<p><a href="https://datastax-oss.atlassian.net/browse/PYTHON-627" rel="nofollow">https://datastax-oss.atlassian.net/browse/PYTHON-627</a></p>
<p>If you need immediate, efficient pagination support, I suggest using the core driver instead of cqlengine.</p>
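<p>To illustrate the driver-style page iteration contract without a live cluster, here is a hedged sketch against a stub object. <code>FakeResult</code> is invented purely for illustration; it only mimics the <code>current_rows</code> / <code>has_more_pages</code> / <code>fetch_next_page()</code> surface used in the question:</p>

```python
class FakeResult:
    """Stub mimicking a paged result: the first page is 'loaded' up front."""
    def __init__(self, rows, page_size):
        self._pages = [rows[i:i + page_size] for i in range(0, len(rows), page_size)]
        self._idx = 0  # first page is already available after execute()

    @property
    def current_rows(self):
        return self._pages[self._idx]

    @property
    def has_more_pages(self):
        return self._idx + 1 < len(self._pages)

    def fetch_next_page(self):
        self._idx += 1

def iter_pages(result):
    # Consume the current page first, then fetch the next one
    # while more pages remain.
    while True:
        yield result.current_rows
        if not result.has_more_pages:
            break
        result.fetch_next_page()

pages = list(iter_pages(FakeResult(list(range(7)), page_size=3)))
print(pages)  # -> [[0, 1, 2], [3, 4, 5], [6]]
```

<p>Incidentally, tracing the question's second solution against such a stub suggests that its loop stops extending once <code>has_more_pages</code> turns false, so the final page appears to be dropped; that is worth double-checking before relying on it.</p>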
| 2 | 2016-08-23T20:52:59Z | [
"python",
"api",
"model",
"cassandra"
] |
Not all objects transferring from one list to another in beginner grid-based game | 39,106,339 | <p>This has been driving me nuts.</p>
<p>I'm developing a grid-based movement engine for a game. Character instances move around using the "move" function, which reduces their internal moves_left variable by 1 every time. </p>
<pre><code>def move(self, direction): #how characters move around
if self.collision_check(direction) == True:
print("Collision")
return
if self.moves_left == 0:
print("No more moves left")
Map.update()
return
elif direction == "UP":
self.internal_row -= 1
elif direction == "LEFT":
self.internal_column -= 1
elif direction == "RIGHT":
self.internal_column += 1
elif direction == "DOWN":
self.internal_row += 1
self.moves_left = self.moves_left -1
Map.update()
</code></pre>
<p>When this variable reaches 0, they are supposed to stop moving and be transferred from the "can move" list of characters to the "no moves" list of characters. This check is in the Map.update() function. </p>
<pre><code>for characterobject in range(0, len(Map.no_moves)-1): #This moves any characters with moves to the can move list
if len(Map.no_moves) > 0:
if Map.no_moves[characterobject].moves_left > 0:
print("character moved from no moves to moves")
Map.can_move.append(Map.no_moves[characterobject])
Map.no_moves.remove(Map.no_moves[characterobject])
for characterobject in range(0, len(Map.can_move)-1):
if len(Map.can_move) == 0:
break
    elif Map.can_move[characterobject].moves_left == 0: #This moves any characters with 0 moves from the can-move list to the no-moves list
print("character moved from moves to no moves")
Map.no_moves.append(Map.can_move[characterobject])
Map.can_move.remove(Map.can_move[characterobject])
</code></pre>
<p>The problem that I'm having is that the check is not being made. When a moving character reaches moves_left = 0, the move function prints "no moves left" and Map.update() is called, but the character object stays in the list and is not transferred to the no_moves list. </p>
<p>Here is the full code: </p>
<pre><code>import random
import pygame
import math
pygame.init()
Clock = pygame.time.Clock()
Screen = pygame.display.set_mode([650, 650])
DONE = False
MAPSIZE = 50 #how many tiles
TILEWIDTH = 10 #pixel size of tile
TILEHEIGHT = 10
TILEMARGIN = 2
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
GREEN = (0, 255, 0)
RED = (255, 0, 0)
BLUE = (0, 0, 255)
BROWN = (123, 123, 0)
MOVECOLOR = (150, 250, 150)
ITEMS = ["Sword", "Helmet", "Shield", "Coin"] #just a test
KeyLookup = {
pygame.K_LEFT: "LEFT",
pygame.K_RIGHT: "RIGHT",
pygame.K_DOWN: "DOWN",
pygame.K_UP: "UP"
}
class MapTile(object): #The main class for stationary things that inhabit the grid ... grass, trees, rocks and stuff.
def __init__(self, name, internal_column, internal_row):
self.name = name
self.internal_column = internal_column
self.internal_row = internal_row
class Item(object):
def __init__(self, name, weight):
self.name = name
self.weight = weight
class Character(object): #can_move can move around and do cool stuff
def __init__(self, name, HP, internal_column, internal_row):
self.name = name
self.HP = HP
self.internal_column = internal_column
self.internal_row = internal_row
inventory = []
moves_left = 25
def move(self, direction): #how characters move around
if self.collision_check(direction) == True:
print("Collision")
return
if self.moves_left == 0:
print("No more moves left")
Map.update()
return
elif direction == "UP":
self.internal_row -= 1
elif direction == "LEFT":
self.internal_column -= 1
elif direction == "RIGHT":
self.internal_column += 1
elif direction == "DOWN":
self.internal_row += 1
self.moves_left = self.moves_left - 1
Map.update()
def collision_check(self, direction):
if direction == "UP":
if self.internal_row == 0:
return True
if len(Map.Grid[self.internal_column][(self.internal_row)-1]) > 1:
return True
elif direction == "LEFT":
if self.internal_column == 0:
return True
if len(Map.Grid[self.internal_column-1][(self.internal_row)]) > 1:
return True
elif direction == "RIGHT":
if self.internal_column == MAPSIZE-1:
return True
if len(Map.Grid[self.internal_column+1][(self.internal_row)]) > 1:
return True
elif direction == "DOWN":
if self.internal_row == MAPSIZE-1:
return True
if len(Map.Grid[self.internal_column][(self.internal_row)+1]) > 1:
return True
return False
def location(self):
print("Coordinates:" + str(self.internal_column) + ", " + str(self.internal_row))
def check_inventory(self):
weight = 0
for item in self.inventory:
print(item)
weight = weight + item.weight
print(weight)
class Map(object): #The main class; where the action happens
global MAPSIZE
can_move = []
no_moves = []
Grid = []
for row in range(MAPSIZE): # Creating grid
Grid.append([])
for column in range(MAPSIZE):
Grid[row].append([])
for row in range(MAPSIZE): #Filling grid with grass
for column in range(MAPSIZE):
TempTile = MapTile("Grass", column, row)
Grid[column][row].append(TempTile)
for row in range(MAPSIZE): #Putting some rocks near the top
for column in range(MAPSIZE):
TempTile = MapTile("Rock", column, row)
if row == 1:
Grid[column][row].append(TempTile)
for i in range(10): #Trees in random places
random_row = random.randint(0, MAPSIZE - 1)
random_column = random.randint(0, MAPSIZE - 1)
TempTile = MapTile("Tree", random_column, random_row)
Grid[random_column][random_row].append(TempTile)
def generate_hero(self): #Generate a character and place it randomly
random_row = random.randint(0, MAPSIZE - 1)
random_column = random.randint(0, MAPSIZE - 1)
id_number = len(Map.can_move)
temp_hero = Character(str(id_number), 10, random_column, random_row)
i = random.randint(0, len(ITEMS)-1)
temp_hero.inventory.append(ITEMS[i])
self.Grid[random_column][random_row].append(temp_hero)
self.can_move.append(temp_hero)
Map.update()
def update(self): #Important function
for column in range(MAPSIZE): #These nested loops go through entire grid
for row in range(MAPSIZE): #They check if any objects internal coordinates
for i in range(len(Map.Grid[column][row])): #disagree with its place on the grid and update it accordingly
if Map.Grid[column][row][i].internal_column != column:
TempChar = Map.Grid[column][row][i]
Map.Grid[column][row].remove(Map.Grid[column][row][i])
Map.Grid[int(TempChar.internal_column)][int(TempChar.internal_row)].append(TempChar)
elif Map.Grid[column][row][i].internal_row != row:
TempChar = Map.Grid[column][row][i]
Map.Grid[column][row].remove(Map.Grid[column][row][i])
Map.Grid[int(TempChar.internal_column)][int(TempChar.internal_row)].append(TempChar)
for characterobject in range(0, len(Map.no_moves)-1): #This moves any characters with moves to the can move list
if len(Map.no_moves) > 0:
if Map.no_moves[characterobject].moves_left > 0:
print("character moved from no moves to moves")
Map.can_move.append(Map.no_moves[characterobject])
Map.no_moves.remove(Map.no_moves[characterobject])
for characterobject in range(0, len(Map.can_move)-1):
print(str(characterobject))
if len(Map.can_move) == 0:
break
                elif Map.can_move[characterobject].moves_left == 0: #This moves any characters with 0 moves from the can-move list to the no-moves list
print("character moved from moves to no moves")
Map.no_moves.append(Map.can_move[characterobject])
Map.can_move.remove(Map.can_move[characterobject])
Map = Map()
Map.generate_hero()
while not DONE: #Main pygame loop
for event in pygame.event.get(): #catching events
if event.type == pygame.QUIT:
DONE = True
elif event.type == pygame.MOUSEBUTTONDOWN:
Pos = pygame.mouse.get_pos()
column = Pos[0] // (TILEWIDTH + TILEMARGIN) #Translating the position of the mouse into rows and columns
row = Pos[1] // (TILEHEIGHT + TILEMARGIN)
print(str(row) + ", " + str(column))
for i in range(len(Map.Grid[column][row])):
print(str(Map.Grid[column][row][i].name)) #print stuff that inhabits that square
elif event.type == pygame.KEYDOWN:
if event.key == 97: # Keypress: a
print("new turn")
for characterobject in range(0, len(Map.no_moves)-1):
Map.no_moves[characterobject].moves_left = 25
Map.update()
elif event.key == 115: # Keypress: s
print("boop")
Map.generate_hero()
Map.update()
elif len(Map.can_move) > 0:
Map.can_move[0].move(KeyLookup[event.key])
else:
print("invalid")
Screen.fill(BLACK)
for row in range(MAPSIZE): # Drawing grid
for column in range(MAPSIZE):
for i in range(0, len(Map.Grid[column][row])):
Color = WHITE
if len(Map.can_move) > 0: # Creating colored area around character showing his move range
if (math.sqrt((Map.can_move[0].internal_column - column)**2 + (Map.can_move[0].internal_row - row)**2)) <= Map.can_move[0].moves_left:
Color = MOVECOLOR
if len(Map.Grid[column][row]) > 1:
Color = RED
if Map.Grid[column][row][i].name == "Tree":
Color = GREEN
if str(Map.Grid[column][row][i].__class__.__name__) == "Character":
Color = BROWN
pygame.draw.rect(Screen, Color, [(TILEMARGIN + TILEWIDTH) * column + TILEMARGIN,
(TILEMARGIN + TILEHEIGHT) * row + TILEMARGIN,
TILEWIDTH,
TILEHEIGHT])
Clock.tick(30)
pygame.display.flip()
pygame.quit()
</code></pre>
<p>Play around with it and see what I mean. You can press "s" to add a new character. Notice what happens in the shell when a character can no logner move. You're supposed to be able to press "a" to give characters in the no_moves list more moves, but that doesn't work either. </p>
<p>Thanks</p>
| 0 | 2016-08-23T16:24:46Z | 39,106,445 | <p>You shouldn't iterate over a list you're mutating, which is causing this problem. Instead:</p>
<pre><code>arr = Map.no_moves[:] # copy
for item in arr:
if item.moves_left == 0:
Map.no_moves.remove(item)
Map.can_move.append(item)
</code></pre>
<hr>
<p>Note that this concept applies to almost <em>every</em> language, so itâs good to keep this pattern in your toolbox.</p>
| 2 | 2016-08-23T16:30:48Z | [
"python",
"arrays",
"python-3.x",
"oop",
"pygame"
] |
How convert list element to list? | 39,106,363 | <p>Original cat looks like this:</p>
<pre><code>cat = ['a','a,b,c','c,d,e,f']
</code></pre>
<p>I want to convert it to:</p>
<pre><code>cat = [['a'],['a','b','c'],['c','d','e','f']]
</code></pre>
| 2 | 2016-08-23T16:26:21Z | 39,106,407 | <p>You just need to <a href="https://docs.python.org/3/library/stdtypes.html#str.split" rel="nofollow">split</a> each string:</p>
<pre><code>result = [s.split(',') for s in cat]
</code></pre>
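<p>One edge case worth noting with this approach: splitting an empty string still yields a one-element list containing an empty string, so empty entries don't simply disappear:</p>

```python
cat = ['a', 'a,b,c', '', 'c,d,e,f']  # note the empty entry

result = [s.split(',') for s in cat]
print(result)  # -> [['a'], ['a', 'b', 'c'], [''], ['c', 'd', 'e', 'f']]

# Filter out the empty pieces if that's not wanted:
cleaned = [[p for p in s.split(',') if p] for s in cat]
print(cleaned)  # -> [['a'], ['a', 'b', 'c'], [], ['c', 'd', 'e', 'f']]
```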
| 2 | 2016-08-23T16:29:15Z | [
"python",
"string",
"list",
"python-3.x"
] |
How convert list element to list? | 39,106,363 | <p>Original cat looks like this:</p>
<pre><code>cat = ['a','a,b,c','c,d,e,f']
</code></pre>
<p>I want to convert it to:</p>
<pre><code>cat = [['a'],['a','b','c'],['c','d','e','f']]
</code></pre>
| 2 | 2016-08-23T16:26:21Z | 39,106,424 | <p>Try This:</p>
<pre><code>cat = ['a','a,b,c','c,d,e,f']
newlist = [a.split(',') for a in cat]
print(newlist)
>>> [['a'], ['a', 'b', 'c'], ['c', 'd', 'e', 'f']]
</code></pre>
| 1 | 2016-08-23T16:29:49Z | [
"python",
"string",
"list",
"python-3.x"
] |
How convert list element to list? | 39,106,363 | <p>Original cat looks like this:</p>
<pre><code>cat = ['a','a,b,c','c,d,e,f']
</code></pre>
<p>I want to convert it to:</p>
<pre><code>cat = [['a'],['a','b','c'],['c','d','e','f']]
</code></pre>
| 2 | 2016-08-23T16:26:21Z | 39,106,434 | <p>You can achieve this by <code>list comprehension</code> as:</p>
<pre><code>>>> cat = ['a','a,b,c','c,d,e,f']
>>> [c.split(',') for c in cat]
[['a'], ['a', 'b', 'c'], ['c', 'd', 'e', 'f']]
</code></pre>
<p><strong>Alternatively</strong>, you may also use a <code>lambda</code> function with <code>map</code> to achieve this (in Python 3, <code>map</code> returns an iterator, so wrap it in <code>list</code> to materialize the result):</p>
<pre><code>>>> list(map(lambda x: x.split(','), cat))
[['a'], ['a', 'b', 'c'], ['c', 'd', 'e', 'f']]
</code></pre>
| 4 | 2016-08-23T16:30:28Z | [
"python",
"string",
"list",
"python-3.x"
] |
converting simple SOAP client code from php to python | 39,106,428 | <p>So here is my PHP code:</p>
<pre><code> $client = new SoapClient('https://someservice.com/Tokens.xml', array('soap_version' => SOAP_1_1));
$params['merchantId'] = 'ABC';
$params['invoiceNo'] = 1;
$result = $client->__soapCall("MakeToken", array($params));
$token = $result->MakeTokenResult->token;
</code></pre>
<p>so I've installed suds
and I've come this far:</p>
<pre><code>from suds.client import Client
def test(request):
    client = Client(location="https://someservice.com/Tokens.xml")
    return(HttpResponse('something !! '))
</code></pre>
<p>I'm not sure what the next step is in this line:</p>
<pre><code>$result = $client->__soapCall("MakeToken", array($params));
</code></pre>
<p>this is what I came up with, which is obviously wrong!</p>
<pre><code> client.service.__soapCall('MakeToken' , 'merchantId:ABC' , 'invoiceNo:1' )
</code></pre>
| 1 | 2016-08-23T16:30:05Z | 39,117,425 | <p>Assuming <code>MakeToken</code> is the name of the method provided by the service, you need to call <code>client.service.MakeToken(merchantId='ABC', invoiceNo=1)</code></p>
<p>Have a look at the documentation at <a href="https://fedorahosted.org/suds/wiki/Documentation" rel="nofollow">https://fedorahosted.org/suds/wiki/Documentation</a></p>
| 0 | 2016-08-24T07:51:17Z | [
"python",
"django",
"soap"
] |
Changing tag in a HTML when parsed with Beautifulsoup | 39,106,482 | <p>I'm trying to crawl a <a href="http://www.allocine.fr/film/fichefilm_gen_cfilm=144185.html" rel="nofollow">webpage</a> using BeautifulSoup, but I have a problem with a tag that mysteriously changes between what is displayed in my browser and what I receive in my terminal.
<a href="http://i.stack.imgur.com/Px4qZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/Px4qZ.png" alt="This tab"></a></p>
<p><a href="http://i.stack.imgur.com/yOCEb.png" rel="nofollow"><img src="http://i.stack.imgur.com/yOCEb.png" alt="Correspond to this tag"></a></p>
<p>Ok so the tab above corresponds to the HTML tag shown in my browser.
Once I parsed it with BeautifulSoup I did:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
url = "http://www.allocine.fr/film/fichefilm_gen_cfilm=144185.html"
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
trailer = soup.find(title="Bandes-annonces")
print trailer
</code></pre>
<p>Which output:</p>
<pre><code><span class="ACrL3ZACrpZGVvL3BsYXllcl9nZW5fY21lZGlhPTE5NTYxOTgyJmNmaWxtPTE0NDE4NS5odG1s item trailer icon icon-play-mini" title="Bandes-annonces">
Bandes-annonces
</span>
</code></pre>
<p>I would like to know why my "a" tag suddenly became a "span" tag? How can I avoid it?</p>
| 2 | 2016-08-23T16:33:20Z | 39,111,521 | <p>There are a few issues. Some tags are created using <em>Javascript</em>, and there are actually two tags that have a <em>title="Bandes-annonces"</em>. What you see in your output is the first occurrence, with obfuscated data which is <em>base-64</em> encoded with substring(s) embedded. You can see in one of the Js functions that has <code>AC.config = {</code> the following:</p>
<pre><code> seo: {
obfuscatedPrefix: 'ACr'
},
</code></pre>
<p>Each tag in the source you get back from requests contains the encoded data like <em>ACrL3ZACrpZGVvL3BsYXllcl9nZW5fY21lZGlhPTE5NTYxOTgyJmNmaWxtPTE0NDE4NS5odG1s</em></p>
<p>You can see if we replace any occurrences of the prefix <em>ACr</em> and <em>base-64</em> decode the remaining string:</p>
<pre><code>In [113]: s = "ACrL3ZACrpZGVvL3BsYXllcl9nZW5fY21lZGlhPTE5NTYxOTgyJmNmaWxtPTE0NDE4NS5odG1s"
In [114]: s.replace("ACr", "").decode("base-64")
Out[114]: '/video/player_gen_cmedia=19561982&cfilm=144185.html'
</code></pre>
<p>We get the href.</p>
<p>If you wanted to get the tag with the title, you could use one of the <em>css classes</em>:</p>
<pre><code>trailer = soup.find(class_="icon-play-mini", title="Bandes-annonces")
</code></pre>
<p>which if we run the code:</p>
<pre><code>In [117]: url = "http://www.allocine.fr/film/fichefilm_gen_cfilm=144185.html"
In [118]: page = requests.get(url)
In [119]: soup = BeautifulSoup(page.content, 'html.parser')
In [120]: trailer = soup.find(class_="icon-play-mini", title="Bandes-annonces")
In [121]: print trailer
<span class="ACrL3ZACrpZGVvL3BsYXllcl9nZW5fY21lZGlhPTE5NTYxOTgyJmNmaWxtPTE0NDE4NS5odG1s item trailer icon icon-play-mini" title="Bandes-annonces">
Bandes-annonces
</span>
</code></pre>
<p>Gives you the second occurrence of the tag with the title=..</p>
<p>Then to get the href:</p>
<pre><code>In [122]: trailer["class"][0].replace("ACr", "").decode("base-64")
Out[122]: '/video/player_gen_cmedia=19561982&cfilm=144185.html'
</code></pre>
<p>You can see it is not going to be very straightforward to scrape data from that site; the obfuscation is likely there for a good reason, to make scraping harder, as they most likely don't want you to be doing it.</p>
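<p>A side note: <code>str.decode("base-64")</code> only exists on Python 2. On Python 3 the same trick can be written with the <code>base64</code> module — a minimal sketch, where <code>deobfuscate</code> is a made-up helper name and <code>"ACr"</code> is the prefix taken from the site's <code>AC.config</code> shown above:</p>

```python
import base64

def deobfuscate(s, prefix="ACr"):
    # Hypothetical helper: strip the obfuscation prefix, then base-64 decode
    return base64.b64decode(s.replace(prefix, "")).decode("utf-8")

href = deobfuscate(
    "ACrL3ZACrpZGVvL3BsYXllcl9nZW5fY21lZGlhPTE5NTYxOTgyJmNmaWxtPTE0NDE4NS5odG1s")
print(href)  # /video/player_gen_cmedia=19561982&cfilm=144185.html
```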
| 2 | 2016-08-23T22:16:19Z | [
"python",
"html",
"python-2.7",
"beautifulsoup",
"web-crawler"
] |
Create a queryset to compare two models | 39,106,528 | <p>I'm new to Django. I just created my models and migrated information to my sqlite3 database using the .csv import module. These are my models:</p>
<pre><code>class Backlog(models.Model):
    sales_order = models.CharField(max_length=30)
    po_number = models.CharField(max_length=30)
    order_number = models.IntegerField(blank=True)
    line_number = models.IntegerField(blank=True)
    ship_Set = models.IntegerField(blank=True)
    product_id = models.CharField(max_length=30)
    ordered_quantity = models.IntegerField(blank=True)

class Material(models.Model):
    product_id = models.CharField(max_length=50)
    tan_id = models.CharField(max_length=50)
</code></pre>
<p>Now that I have the information inside my tables I want to do the following:</p>
<ul>
<li>Find if <code>product_id</code> from <code>Backlog</code> is in the <code>Material</code> model; once it finds it, verify the first digits of the <code>tan_id</code>. If they are <code>74</code>, classify as <code>'1'</code>; if <code>800</code>, classify as <code>'3'</code>; else set as <code>'2'</code>. (Common <code>tan_id</code> formats are <code>74-102345-03</code>, <code>800-120394-03</code>.)</li>
</ul>
<p>My two questions are:
how to do that, and whether I have to create a new column to add the information for every <code>product_id</code>.</p>
| 0 | 2016-08-23T16:35:56Z | 39,109,169 | <p>Ok well given your current models, here is a possible solution to the problem you are having:</p>
<pre><code>for backlog in Backlog.objects.all():
try:
material = Material.objects.get(product_id = backlog.product_id)
if material.tan_id[0:2] == '74':
# Classify as 1
elif material.tan_id[0:2] == '80':
# Classify as 3
else:
# Classify as 2
except Material.DoesNotExist:
print("This material is not in backlog")
continue
</code></pre>
<p>This code should loop over every instance of Backlog you have in your database and then try to find the associated Material. In the event it doesn't find a Material (in your case there is no backlog), objects.get() raises an exception that it doesn't exist, we print it's not there and continue on with the loop. If it is we classify it as you specified. Might require a slight bit of tweaking but it should give you the bones of what you want to fix this problem. Let me know if it doesn't.</p>
| 1 | 2016-08-23T19:21:49Z | [
"python",
"django",
"django-models",
"django-views"
] |
Inserting object in excel using Python | 39,106,532 | <p>I want to insert an Image (object) into an MS-Excel report which I am generating using the openpyxl utility. Is there a way to do it using some Python utility?</p>
| 0 | 2016-08-23T16:36:20Z | 39,106,569 | <p>Openpyxl allows you to write images into your Excel files! Here it is in the <a href="http://openpyxl.readthedocs.io/en/2.4/api/openpyxl.drawing.image.html" rel="nofollow">official documentation</a>.</p>
<pre><code>import openpyxl
wb = openpyxl.Workbook()
ws = wb.worksheets[0]
picture = openpyxl.drawing.Image('/path/to/picture')
picture.anchor(ws.cell('cell to put the image'))
ws.add_image(picture)
wb.save('whatever you want to save the workbook as')
</code></pre>
<p>This code of course refers to creating a new workbook and adding the image into it. To add the image to your preexisting workbook you would obviously just load that workbook using <a href="http://openpyxl.readthedocs.io/en/default/usage.html#read-an-existing-workbook" rel="nofollow">load_workbook</a>.</p>
| 3 | 2016-08-23T16:38:34Z | [
"python",
"pywin32",
"openpyxl",
"win32com",
"xlwt"
] |
Pandas: change values in column | 39,106,660 | <p>I have a dataframe</p>
<pre><code>ID RuCitySize qsex age ranges url used_at active_seconds
f78f67101c3aeb099212b8aa9a95dfd2 500-млн Женский 18-24 lada.ru 03.01.2016 20:18 66.25557348
f78f67101c3aeb099212b8aa9a95dfd2 500-млн Женский 18-24 lada.ru/cars/vesta/sedan/tth.html 03.01.2016 20:18 51.5321127
f78f67101c3aeb099212b8aa9a95dfd2 500-млн Женский 18-24 lada.ru/cars/4x4/3dv/prices.html 03.01.2016 20:20 22.08519116
f78f67101c3aeb099212b8aa9a95dfd2 500-млн Женский 18-24 lada.ru/cars/4x4/3dv/1.7_8_mkpp/lux/017/114/card.html 03.01.2016 20:20 29.44692154
740e2b36a4fa2d145436293522f5f5d5 500-млн Женский 18-24 penza-avto.lada.ru/ds/cars 05.01.2016 12:51 7.361730386
740e2b36a4fa2d145436293522f5f5d5 500-млн Женский 18-24 penza-avto.lada.ru/ds/cars/granta/liftback/prices.html 05.01.2016 12:51 66.25557348
</code></pre>
<p>And df </p>
<pre><code>qsex age ranges RuCitySize FOM_quota
Женский 18-24 100-500 3.680865193
Женский 18-24 500-млн 1.764538469
Женский 18-24 Миллионники 2.295797363
</code></pre>
<p>If the values in the columns <code>qsex</code>, <code>age ranges</code> and <code>RuCitySize</code> are equal, I need to multiply the value in <code>active_seconds</code> by <code>FOM_quota</code>.
How can I do that?</p>
| 0 | 2016-08-23T16:44:29Z | 39,108,595 | <p>you can do it this way:</p>
<pre><code>In [33]: df1['new'] = (df1['active_seconds'] *
....: pd.merge(df1, df2,
....: on=['qsex', 'age ranges', 'RuCitySize'],
....: how='left')['FOM_quota'])
In [34]: df1[['active_seconds','new']]
Out[34]:
active_seconds new
0 66.255573 116.910508
1 51.532113 90.930395
2 22.085191 38.970169
3 29.446922 51.960226
4 7.361730 12.990056
5 66.255573 116.910508
</code></pre>
| 1 | 2016-08-23T18:46:22Z | [
"python",
"pandas"
] |
Weird output when calculating fractions of a square root | 39,106,666 | <p>I'm trying to find the continued fractions of any non-square number (until it repeats).</p>
<p>For example: input: <code>23 = [4; 1,3,1,8]</code></p>
<p>My code works for many numbers (even though it's very clumsy).
It works for 23 where it outputs: </p>
<pre><code>[4, 1, 3, 1, 8, 1, 3, 1]
</code></pre>
<p>(Ignore the extra 1, 3, 1)</p>
<p>But when I input 61 it never stops... here's a line of the output:</p>
<pre><code>[7, 1, 4, 3, 1, 2, 2, 1, 3, 4, 1, 14, 1, 4, 3, 1, 2, 2, 1, 4, 5, 1, 6900]
</code></pre>
<p>After 14 it doesn't repeat like it should (4, 5 instead of 3, 4 and 6900 are out of place)</p>
<p>I'm a bit of a noob when it comes to coding, so it would help a lot if someone could tell me why it doesn't work and how I should fix it.</p>
<p>Here's my code: </p>
<pre><code>def find_fractions(n):
    d = math.sqrt(n)
    x = 0
    y = 0
    safeint = 0
    safe = True
    a = ["a", "b", "c", "d"]
    while a[1:int(len(a) / 2)] != a[int(len(a) / 2) + 1:]:
        a.append(math.floor(d))
        d = 1 / (d - math.floor(d))
        print(a)
        safeint += 1
        if safeint > 4 and safe:
            del a[0]
            del a[0]
            del a[0]
            del a[0]
            safe = False
    print(a)

find_fractions(23)
</code></pre>
<p>Edit: not 63, meant 61</p>
| 0 | 2016-08-23T16:44:56Z | 39,109,577 | <p>What you have is a precision error. These calculations are extremely precise, meaning they require many binary digits to represent. The finite floating point precision that your computer uses is sometimes not enough to do this accurately. Somewhere along the line, the behavior of how this inaccuracy is handled in your machine is breaking your calculations. I used the <a href="https://docs.python.org/3/library/decimal.html#decimal.Decimal" rel="nofollow">decimal</a> module to handle this large precision.</p>
<pre><code>import math
from decimal import Decimal
from decimal import getcontext
def find_fractions(n):
    d = Decimal(n).sqrt()
    x = 0
    y = 0
    safeint = 0
    safe = True
    a = ["a", "b", "c", "d"]
    while a[1:int(len(a) / 2)] != a[int(len(a) / 2) + 1:]:
        a.append(math.floor(d))
        d = Decimal(1 / (d - math.floor(d)))
        print(a)
        safeint += 1
        if safeint > 4 and safe:
            del a[0]
            del a[0]
            del a[0]
            del a[0]
            safe = False
    print(a)
</code></pre>
<p>This gives me the output
<code>[7, 1, 4, 3, 1, 2, 2, 1, 3, 4, 1, 14, 1, 4, 3, 1, 2, 2, 1, 3, 4, 1]</code>
for input 61. The default precision of the <code>Decimal</code> context is 28 significant digits. If necessary, you can raise it like so:
<code>getcontext().prec = x</code></p>
<p>Here's the <a href="https://en.wikipedia.org/wiki/Floating_point" rel="nofollow">Wikipedia page</a> to review floating point precision. If you'd like, I would be happy to give you some suggestions on making your code cleaner as well.</p>
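<p>For example, one possible cleaner structure along the same lines — just a sketch, with a made-up <code>continued_fraction</code> name, an arbitrary precision of 60 digits, and an explicit term count instead of the repetition check:</p>

```python
from decimal import Decimal, getcontext

def continued_fraction(n, terms):
    """Return the first `terms` continued-fraction coefficients of sqrt(n)."""
    getcontext().prec = 60              # generous precision for small n
    d = Decimal(n).sqrt()
    coeffs = []
    for _ in range(terms):
        a = int(d)                      # floor, since d is positive
        coeffs.append(a)
        frac = d - a
        if frac == 0:                   # perfect square: expansion terminates
            break
        d = 1 / frac
    return coeffs

print(continued_fraction(23, 9))        # [4, 1, 3, 1, 8, 1, 3, 1, 8]
print(continued_fraction(61, 12))       # [7, 1, 4, 3, 1, 2, 2, 1, 3, 4, 1, 14]
```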
| 0 | 2016-08-23T19:50:51Z | [
"python",
"python-3.x",
"math",
"output"
] |
Rows are still present after calling delete() in SQLAlchemy | 39,106,687 | <p>I want to have a Flask route that deletes all instances of a SQLAlchemy model, <code>VisitLog</code>. I call <code>VisitLog.query.delete()</code>, then redirect back to the page, but the old entries are still present. There was no error. Why weren't they deleted?</p>
<pre><code>@app.route('/log')
def log():
    final_list = VisitLog.query.all()
    return render_template('log.html', loging=final_list)

@app.route('/logclear')
def logclear():
    VisitLog.query.delete()
    return redirect("log.html", code=302)
</code></pre>
<pre class="lang-html prettyprint-override"><code><a href="{{ url_for('logclear') }}">Clear database</a>
</code></pre>
| 3 | 2016-08-23T16:46:01Z | 39,106,794 | <p>Just like other write operations, you must commit the session after executing a bulk delete.</p>
<pre><code>VisitLog.query.delete()
db.session.commit()
</code></pre>
| 2 | 2016-08-23T16:53:02Z | [
"python",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
] |
how to merge two dataframes if the index and length both do not match? | 39,106,717 | <p>I have two data frames, predictor_df and solution_df, like this:</p>
<pre><code>predictor_df

1000  A    B    C
1001  1    2    3
1002  4    5    6
1003  7    8    9
1004  Nan  Nan  Nan
</code></pre>
<pre><code>and a solution_df

0  D
1  10
2  11
3  12
</code></pre>
<p>The reason for the names is that the predictor_df is used to do some analysis on its columns to arrive at analysis_df. My analysis leaves the rows with Nan values in predictor_df and hence the <strong>shorter</strong> solution_df.</p>
<p>Now I want to know how to join these two dataframes to obtain my final dataframe as</p>
<pre><code> A B C D
1 2 3 10
4 5 6 11
7 8 9 12
Nan Nan Nan
</code></pre>
<p>please guide me through it. Thanks in advance.
Edit: I tried to merge the two dataframes but the result comes like this:</p>
<pre><code> A B C D
1 2 3 Nan
4 5 6 Nan
7 8 9 Nan
Nan Nan Nan
</code></pre>
<p>Edit 2: also when I do <code>pd.concat([predictor_df, solution_df], axis = 1)</code>
it becomes like this:</p>
<pre><code> A B C D
Nan Nan Nan 10
Nan Nan Nan 11
Nan Nan Nan 12
Nan Nan Nan Nan
</code></pre>
| 0 | 2016-08-23T16:48:12Z | 39,107,890 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a> with <code>drop=True</code> which resets the index to the default integer index.</p>
<pre><code>pd.concat([df_1.reset_index(drop=True), df_2.reset_index(drop=True)], axis=1)
A B C D
0 1 2 3 10.0
1 4 5 6 11.0
2 7 8 9 12.0
3 Nan Nan Nan NaN
</code></pre>
| 1 | 2016-08-23T18:00:13Z | [
"python",
"pandas"
] |
How can I "fix" floating numbers in python by intelligently choosing the correct decimal place to round to? | 39,106,733 | <p>I've seen many discussions here about rounding floating values in python or similar topics, but I have a related problem that I want a good solution for.</p>
<p>Context:</p>
<p>I use the netCDF4 python library to extract data from NetCDF files. My organization keeps a precision attribute on variables within these files.</p>
<p>Example: <code>TS:data_precision = 0.01f ;</code></p>
<p>I collect these precision attributes using the library like this:</p>
<pre><code>d = netCDF4.Dataset(path) # assume path is the file or url link
precisions = {}
for v in d.variables:
    try:
        precisions[v] = d.variables[v].__getattribute__('data_precision')
    except AttributeError:
        pass
return precisions
</code></pre>
<hr>
<p>When I retrieve these precision values from a dataset, in python they end up showing up like:</p>
<pre><code>{u'lat': 9.9999997e-05, u'lon': 9.9999997e-05, u'TS': 0.0099999998, u'SSPS': 0.0099999998}
</code></pre>
<p>But, what I really want is:</p>
<pre><code>{u'lat': 0.0001, u'lon': 0.0001, u'TS': 0.01, u'SSPS': 0.01}
</code></pre>
<hr>
<p>Essentially I need a way in python to intelligently round these values to their most appropriate decimal place. I am sure I can come up with a really ugly method to do this, but I want to know if there is already a 'nice' solution to this problem.</p>
<p>For my use case, I suppose I can take advantage of the fact that since these values are all 'data_precision' values, I can just count the zero's from the decimal place, and then round to the last 0. (I'm making the assumption that <code>0 < n < 1</code>). With these assumptions, this would be my solution:</p>
<pre><code>#!/usr/bin/python
def intelli_round(n):
    def get_decimal_place(n):
        count = 0
        while n < 1:
            n *= 10
            count += 1
        return count
    return round(n, get_decimal_place(n))

examples = [0.0099999, 0.00000999, 0.99999]
for e in examples:
    print e, intelli_round(e)
</code></pre>
<p>.</p>
<pre><code>0.0099999 0.01
9.99e-06 1e-05
0.99999 1.0
</code></pre>
<p>Does this seem appropriate? It seems to work under the constraints, but I'm curious to see alternatives.</p>
| 1 | 2016-08-23T16:48:59Z | 39,107,265 | <p>For rounding the values to 2 decimal places (note that these first approaches produce strings), you may use the code below:</p>
<pre><code>>>> x = [0.0099999, 0.00000999, 0.99999]
>>> ['%.2f' % i for i in x]
['0.01', '0.00', '1.00']
</code></pre>
<p>OR, you may use <code>format</code> as:</p>
<pre><code>>>> ["{0:.2f}".format(i) for i in x]
['0.01', '0.00', '1.00']
</code></pre>
<p>In case you do not want to use these pythonic approaches and are interested in implementing it via mathematical logic, you may do:</p>
<pre><code>>>> [int((i * 100) + 0.5) / 100.0 for i in x]
[0.01, 0.0, 1.0]
</code></pre>
| 0 | 2016-08-23T17:20:59Z | [
"python"
] |
How can I "fix" floating numbers in python by intelligently choosing the correct decimal place to round to? | 39,106,733 | <p>I've seen many discussions here about rounding floating values in python or similar topics, but I have a related problem that I want a good solution for.</p>
<p>Context:</p>
<p>I use the netCDF4 python library to extract data from NetCDF files. My organization keeps a precision attribute on variables within these files.</p>
<p>Example: <code>TS:data_precision = 0.01f ;</code></p>
<p>I collect these precision attributes using the library like this:</p>
<pre><code>d = netCDF4.Dataset(path) # assume path is the file or url link
precisions = {}
for v in d.variables:
    try:
        precisions[v] = d.variables[v].__getattribute__('data_precision')
    except AttributeError:
        pass
return precisions
</code></pre>
<hr>
<p>When I retrieve these precision values from a dataset, in python they end up showing up like:</p>
<pre><code>{u'lat': 9.9999997e-05, u'lon': 9.9999997e-05, u'TS': 0.0099999998, u'SSPS': 0.0099999998}
</code></pre>
<p>But, what I really want is:</p>
<pre><code>{u'lat': 0.0001, u'lon': 0.0001, u'TS': 0.01, u'SSPS': 0.01}
</code></pre>
<hr>
<p>Essentially I need a way in python to intelligently round these values to their most appropriate decimal place. I am sure I can come up with a really ugly method to do this, but I want to know if there is already a 'nice' solution to this problem.</p>
<p>For my use case, I suppose I can take advantage of the fact that since these values are all 'data_precision' values, I can just count the zero's from the decimal place, and then round to the last 0. (I'm making the assumption that <code>0 < n < 1</code>). With these assumptions, this would be my solution:</p>
<pre><code>#!/usr/bin/python
def intelli_round(n):
    def get_decimal_place(n):
        count = 0
        while n < 1:
            n *= 10
            count += 1
        return count
    return round(n, get_decimal_place(n))

examples = [0.0099999, 0.00000999, 0.99999]
for e in examples:
    print e, intelli_round(e)
</code></pre>
<p>.</p>
<pre><code>0.0099999 0.01
9.99e-06 1e-05
0.99999 1.0
</code></pre>
<p>Does this seem appropriate? It seems to work under the constraints, but I'm curious to see alternatives.</p>
| 1 | 2016-08-23T16:48:59Z | 39,109,875 | <p>Thanks to Benjamin for linking another post in the comments above to the solution I was looking for. A better way to word my question is that I want to operate on float values such that only 1 significant digit is retained.</p>
<p>Simple examples:</p>
<pre><code>0.00999 -> 0.01
0.09998 -> 0.1
0.00099 -> 0.001
</code></pre>
<p><a href="http://stackoverflow.com/a/3411435/3454650">This solution</a> was perfect for my needs:</p>
<pre><code>>>> from math import log10, floor
>>> def round_to_1(x):
...     return round(x, -int(floor(log10(abs(x)))))
</code></pre>
<p>It handles inputs within my specific context (<code>1.0e^p, p < 0</code>) just fine so thank you so much for the help guys!</p>
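<p>For reference, the same idea generalizes to an arbitrary number of significant digits — a small sketch (the <code>round_sig</code> name is made up, not from the linked answer):</p>

```python
from math import log10, floor

def round_sig(x, sig=1):
    # Keep `sig` significant digits; x must be nonzero
    return round(x, sig - int(floor(log10(abs(x)))) - 1)

print(round_sig(0.0099999998))   # 0.01
print(round_sig(9.9999997e-05))  # 0.0001
print(round_sig(1234, 2))        # 1200
```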
| 1 | 2016-08-23T20:09:49Z | [
"python"
] |
"AS" syntax in sqlite throwing error in python | 39,106,825 | <p>I am trying to query based on distance in an sqlite database using some math. Accuracy of the results isn't that important, so I have the following:</p>
<pre><code>SELECT * AS distance FROM items ORDER BY ((location_lat-lat)*(location_lat-lat)) + ((location_lng - lng)*(location_lng - lng)) ASC
</code></pre>
<p>I tried this:</p>
<pre><code>location_lat, location_lng = (x,y)
c.execute("SELECT * AS distance FROM items ORDER BY ((?-lat)*(?-lat)) + ((? - lng)*(? - lng)) ASC", (location_lat, location_lat, location_lng, location_lng,))
</code></pre>
<p>But this ends up giving me an error:</p>
<pre><code>sqlite3.OperationalError: near "AS": syntax error
</code></pre>
<p>Is the statement not correct, or is it complaining about the parameterized inputs <code>?</code>? What did I do wrong?</p>
| -1 | 2016-08-23T16:55:16Z | 39,106,862 | <p>Because your <code>SQL</code> syntax is wrong. Use instead:</p>
<pre><code>SELECT * FROM items ORDER BY ((location_lat-lat)*(location_lat-lat)) + ((location_lng - lng)*(location_lng - lng)) ASC
</code></pre>
<p><strong>Explanation:</strong> <code>*</code> returns all the columns within your table, while <code>AS</code> is used to alias a single expression in the result set. Your SQL engine cannot map many columns onto one alias.</p>
<p>In case you want to fetch just one column, say <code>retailers_items</code> aliased as <code>items</code>, then you may use the query:</p>
<pre><code>SELECT retailers_items AS items FROM items ORDER BY ((location_lat-lat)*(location_lat-lat)) + ((location_lng - lng)*(location_lng - lng)) ASC
</code></pre>
<p>OR, you may use aggregate functions such as <code>min</code>, <code>max</code>, <code>count</code>, etc. on columns (optionally aliased with <code>AS</code>). For example:</p>
<pre><code>SELECT count(items) FROM items ORDER BY ((location_lat-lat)*(location_lat-lat)) + ((location_lng - lng)*(location_lng - lng)) ASC
</code></pre>
| 0 | 2016-08-23T16:57:26Z | [
"python",
"python-2.7",
"sqlite",
"sqlite3",
"coordinates"
] |
Express time elapsed between element of an array of date in hour min second using python | 39,106,832 | <p>I have a list of file creation times obtained using os.path.getmtime</p>
<pre><code>time_creation_sorted
Out[45]:
array([ 1.47133334e+09, 1.47133437e+09, 1.47133494e+09,
1.47133520e+09, 1.47133577e+09, 1.47133615e+09,
1.47133617e+09, 1.47133625e+09, 1.47133647e+09])
</code></pre>
<p>I know how to convert those elements in hour minute seconds. </p>
<pre><code>datetime.fromtimestamp(time_creation_sorted[1]).strftime('%H:%M:%S')
Out[62]: '09:59:26'
</code></pre>
<p>What I would like to do is to create another table that contains the time elapsed since the first element but expressed in hour:min:sec such that it would look like: </p>
<pre><code>array(['00:00:00','00:16:36',...])
</code></pre>
<p>But I have not managed to find how to do that. Naively taking the difference between the elements of time_creation_sorted and trying to convert to hour:min:sec does not give something logical:</p>
<pre><code>datetime.fromtimestamp(time_creation_sorted[1]-time_creation_sorted[0]).strftime('%H:%M:%S')
Out[67]: '01:17:02'
</code></pre>
<p>Any idea or link on how to do that?</p>
<p>Thanks,
Grégory</p>
| 1 | 2016-08-23T16:55:49Z | 39,107,151 | <p>Check out timedelta objects; they give the difference between two dates or times:</p>
<p><a href="https://docs.python.org/2/library/datetime.html#timedelta-objects" rel="nofollow">https://docs.python.org/2/library/datetime.html#timedelta-objects</a></p>
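<p>For instance, a minimal sketch of the idea applied to the first few timestamps from the question:</p>

```python
from datetime import datetime

stamps = [1471333340.0, 1471334370.0, 1471334940.0]  # first three values above
first = datetime.fromtimestamp(stamps[0])

# Subtracting two datetimes yields a timedelta; str() renders it as H:MM:SS
elapsed = [str(datetime.fromtimestamp(s) - first) for s in stamps]
print(elapsed)  # ['0:00:00', '0:17:10', '0:26:40']
```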
| 0 | 2016-08-23T17:13:30Z | [
"python",
"datetime",
"time",
"difference"
] |
Express time elapsed between element of an array of date in hour min second using python | 39,106,832 | <p>I have a list of file creation times obtained using os.path.getmtime</p>
<pre><code>time_creation_sorted
Out[45]:
array([ 1.47133334e+09, 1.47133437e+09, 1.47133494e+09,
1.47133520e+09, 1.47133577e+09, 1.47133615e+09,
1.47133617e+09, 1.47133625e+09, 1.47133647e+09])
</code></pre>
<p>I know how to convert those elements in hour minute seconds. </p>
<pre><code>datetime.fromtimestamp(time_creation_sorted[1]).strftime('%H:%M:%S')
Out[62]: '09:59:26'
</code></pre>
<p>What I would like to do is to create another table that contains the time elapsed since the first element but expressed in hour:min:sec such that it would look like: </p>
<pre><code>array(['00:00:00','00:16:36',...])
</code></pre>
<p>But I have not managed to find how to do that. Naively taking the difference between the elements of time_creation_sorted and trying to convert to hour:min:sec does not give something logical:</p>
<pre><code>datetime.fromtimestamp(time_creation_sorted[1]-time_creation_sorted[0]).strftime('%H:%M:%S')
Out[67]: '01:17:02'
</code></pre>
<p>Any idea or link on how to do that?</p>
<p>Thanks,
Grégory</p>
| 1 | 2016-08-23T16:55:49Z | 39,107,175 | <p>You need to rearrange some parts of your code in order to get the desired output.</p>
<p>First you should convert the time stamps to <code>datetime</code> objects; their differences result in so-called <code>timedelta</code> objects. The <code>__str__()</code> representation of those <code>timedelta</code> objects is exactly what you want:</p>
<pre><code>from datetime import datetime
tstamps = [1.47133334e+09, 1.47133437e+09, 1.47133494e+09, 1.47133520e+09, 1.47133577e+09, 1.47133615e+09, 1.47133617e+09, 1.47133625e+09, 1.47133647e+09]
tstamps = [datetime.fromtimestamp(stamp) for stamp in tstamps]
tstamps_relative = [(t - tstamps[0]).__str__() for t in tstamps]
print(tstamps_relative)
</code></pre>
<p>giving:</p>
<pre><code>['0:00:00', '0:17:10', '0:26:40', '0:31:00', '0:40:30', '0:46:50', '0:47:10', '0:48:30', '0:52:10']
</code></pre>
| 2 | 2016-08-23T17:15:02Z | [
"python",
"datetime",
"time",
"difference"
] |
How can I display argparse help information for parameters in argument subgroups? | 39,106,834 | <p><strong>I'm putting together an <code>argparse</code> parser where I want to have multiple levels of sub-grouping:</strong></p>
<pre><code>Parser
|
|- Option A
|- Option B
|- Group 1
| |- Option 1.A
| |- Subgroup 1.2
| |- Mutually-Exclusive Group 1.2.1
| | |- MEG Option 1.2.1.A
| | |- MEG Option 1.2.1.B
| |- Mutually-Exclusive Group 1.2.2
| | ...
|- Group 2
| ...
</code></pre>
<p><strong>I've got it coded like the following, presently:</strong></p>
<pre><code># Core parser
prs = ap.ArgumentParser(...)
# Compression and decompression groups
gp_comp = prs.add_argument_group(title="compression options")
gp_decomp = prs.add_argument_group(title="decompression options")
# Thresholding subgroup within compression
gp_thresh = gp_comp.add_argument_group(title="thresholding options")
# Mutually exclusive subgroups for the compression operation
meg_threshmode = gp_thresh.add_mutually_exclusive_group()
#meg_threshvals = gp_thresh.add_mutually_exclusive_group() # Nothing added yet
# Argument for the filename (core parser)
prs.add_argument('path', ...)
# Argument to delete the source file; default is to keep (core)
prs.add_argument('-d', '--delete', ...)
# gzip compression level (compress)
gp_comp.add_argument('-c', '--compress', ...)
# gzip truncation level (compress)
gp_comp.add_argument('-t', '--truncate', ...)
# Absolute thresholding mode (compress -- threshold)
meg_threshmode.add_argument('-a', '--absolute', ...)
# Signed thresholding mode (compress -- threshold)
meg_threshmode.add_argument('-s', '--signed', ...)
# Data block output precision (decompress)
gp_decomp.add_argument('-p', '--precision', ...)
</code></pre>
<p><strong>When I call my script with <code>--help</code>, I get the following:</strong></p>
<pre><code>usage: h5cube.py [-h] [-d] [-c #] [-t #] [-a | -s] [-p #] path
Gaussian CUBE (de)compression via h5py
positional arguments:
path path to .(h5)cube file to be (de)compressed
optional arguments:
-h, --help show this help message and exit
-d, --delete delete the source file after (de)compression
compression options:
-c #, --compress # gzip compression level for volumetric data (0-9,
default 9)
-t #, --truncate # gzip truncation width for volumetric data (1-15,
default 5)
decompression options:
-p #, --precision # volumetric data block output precision (0-15, default
5)
</code></pre>
<p><strong>The help content for all of the 'group-level' parameters shows up just fine.</strong> However, the help for my sub-sub-group parameters <code>-a</code> and <code>-s</code> is missing. The options <em>are</em> being parsed, because it shows <code>[-a | -s]</code> in the signature, but their help isn't being displayed.</p>
<p>Relocating <code>-a</code> and <code>-s</code> from their mutually-exclusive group up to <code>gp_thresh</code> doesn't help. The only difference is (naturally) that <code>-a</code> and <code>-s</code> show up separately in the signature:</p>
<pre><code>usage: h5cube.py [-h] [-d] [-c #] [-t #] [-a] [-s] [-p #] path
</code></pre>
<p><strong>How can I make the help content display for <code>-a</code> and <code>-s</code>?</strong> I've looked through the whole of the <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow"><code>argparse</code> help</a>, but haven't found anything that looks like a 'display depth' setting or whatever. Would it work to set up sub-parsers? That seems like overkill, though....</p>
<p>This is Python 3.5.1 on Windows 7 64-bit. The code in this state is <a href="https://github.com/bskinn/h5cube/tree/93fdad1e294622529a9810cf7a8b285632dab1f2" rel="nofollow">here</a> at my GitHub repo.</p>
| 0 | 2016-08-23T16:56:01Z | 39,107,208 | <p>We've discussed this in other SO questions, but the simple answer is that <code>argument groups</code> do not nest. <code>mutually exclusive groups</code> can nest in an argument group for display purposes, but they don't nest for parsing or testing.</p>
<p>Argument groups only affect the help display. Actions added to a group are also added to the parser. The parser only looks at the Actions its own list, and ignores any grouping. And the help display does not allow for any nested indentation.</p>
<p>==================</p>
<p><code>add_argument_group</code> is a method in an abstract parent class <code>_ActionsContainer</code>, as are methods like <code>add_argument</code>. <code>_ArgumentGroup</code> and <code>ArgumentParser</code> both subclass this, so inherit this method. So it is possible to add a group to a group (no error is raised). And because of how <code>add_argument</code> works, arguments (<code>Actions</code>) are shared with the parser and all groups (they all access the same list). So parsing of the nested actions works fine.</p>
<p>The flaw is in the help formatter. It gets the list of argument groups from the parser. Those groups include the default 2 (optionals and postionals). But there's no provision in the formatter to check if the groups contain subgroups.</p>
<p>The original developer(s) didn't anticipate the interest in nesting groups. Hence this incomplete nesting was not blocked in the class hierarchy nor in the documentation. And patching has been slow.</p>
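As a minimal sketch of the point above (the flag name is illustrative), an action added to an argument group still parses like any other optional argument; the group only changes how --help is sectioned:

```python
import argparse

# Groups only change how --help is sectioned; parsing is unaffected.
parser = argparse.ArgumentParser(prog='demo')
group = parser.add_argument_group('extra options')
group.add_argument('-a', '--absolute', action='store_true')

# The grouped action is registered with the parser itself, so it parses
# like any other optional argument.
args = parser.parse_args(['-a'])
print(args.absolute)
```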
| 1 | 2016-08-23T17:17:28Z | [
"python",
"python-3.x",
"argparse"
] |
Microsoft Emotion API can i use it to create application for pc in visual studio with python? | 39,106,903 | <p>I'm trying to build an app with the Microsoft Cognitive Services Emotion API. Since Python is my preferred programming language, I just wanted to know whether this could be done in Visual Studio. I would love to learn more about it. </p>
| -2 | 2016-08-23T16:59:13Z | 39,150,315 | <p>There is python code available on GitHub that demonstrates calling the Emotion API: <a href="https://github.com/Microsoft/Cognitive-Emotion-Python" rel="nofollow">https://github.com/Microsoft/Cognitive-Emotion-Python</a>. It comes in the form of a Jupyter notebook, but it should be a good place to get started nonetheless.</p>
| 1 | 2016-08-25T16:23:52Z | [
"python",
"microsoft-cognitive"
] |
Parsing txt file into csv | 39,107,103 | <p>I need to parse a .txt file into .csv!
Let's begin... The text file looks like this:</p>
<blockquote>
<p>COMPACT3</p>
<p>GPS_START_TIME 2010 4 15 00 00 0.0000</p>
<p>0.0000 10 G16 G02 G29 G31 G30 G12 G10 G21 G05 G24</p>
<p>0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000<br>
30.0000 -1</p>
<p>-0.050 0.049 -0.007 -0.003 -0.015 -0.006 -0.036<br>
0.020 0.049 -0.002<br>
60.0000 -1</p>
<p>....</p>
</blockquote>
<p>The first three rows are the header, followed by a lot of data that needs to be processed. "-1" represents EOL (end of line)!</p>
<p>.csv must be in this format:</p>
<blockquote>
<p>time G16 G02 G29 G31 G30 G12 G10 G21 G05 G24</p>
<p>time is the number before "-1", in our example 30.000, 60.000, etc</p>
<p>G16 G02 ... are first 10 numbers in the row</p>
</blockquote>
<p>Any help how to parse that .txt file into .csv?</p>
| -4 | 2016-08-23T17:10:38Z | 39,107,366 | <p>You'll find how to read and write files here:</p>
<p>for Python 2.7: <a href="https://docs.python.org/2/tutorial/inputoutput.html" rel="nofollow">https://docs.python.org/2/tutorial/inputoutput.html</a></p>
<p>for Python 3: <a href="https://docs.python.org/3/tutorial/inputoutput.html" rel="nofollow">https://docs.python.org/3/tutorial/inputoutput.html</a></p>
<p>You can then use things like <code>readline()</code>, <code>str.split()</code>, etc.</p>
<p>But please post the code you have tried so far.</p>
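A tiny sketch of that approach, using a made-up file name and contents: write a pipe-delimited file, then read it back line by line and split each record:

```python
# File name and contents are illustrative.
with open('sample.txt', 'w') as f:
    f.write('30.0000|-1\n')
    f.write('60.0000|-1\n')

# Read the file back and split each line on the '|' delimiter.
rows = []
with open('sample.txt') as f:
    for line in f:
        rows.append(line.strip().split('|'))

print(rows)
```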
| 0 | 2016-08-23T17:26:49Z | [
"python",
"csv",
"parsing"
] |
Is a python dict's len() atomic with respect to the GIL? | 39,107,220 | <p>I'm asking about CPython, python2.7. Say I have a <code>dict</code>, and a few threads that will insert values from time to time by calling <code>add()</code>:</p>
<pre><code>d = {}
dlock = threading.Lock()
def add(key, value):
with dlock:
d[key] = value
</code></pre>
<p>Is it safe to get the size of the dict from a separate thread without grabbing the lock, relying just on the GIL?</p>
<pre><code>def count():
return len(d)
</code></pre>
<p>Assuming that I don't care about getting precisely the correct value, just any value that was correct at some point during <code>count()</code>.</p>
| 4 | 2016-08-23T17:17:54Z | 39,107,346 | <p>I wouldn't recommend it, but yeah, the GIL will protect that. There's no opportunity for the GIL to be released during the <code>len</code> call.</p>
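A small runnable sketch of the setup in the question, assuming CPython: the writer thread takes the lock, the reader calls len() without it, and every observed size is one that was valid at some instant:

```python
import threading

# Writer inserts under the lock; reader snapshots the size without it.
d = {}
dlock = threading.Lock()

def add(key, value):
    with dlock:
        d[key] = value

def count():
    return len(d)  # a single len() call is atomic under the GIL

writer = threading.Thread(target=lambda: [add(i, i) for i in range(1000)])
writer.start()
sizes = [count() for _ in range(50)]
writer.join()

# Every observed size was valid at some point during the run.
assert all(0 <= s <= 1000 for s in sizes)
print(count())
```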
| 1 | 2016-08-23T17:25:18Z | [
"python",
"python-2.7",
"cpython"
] |
Python tkinter lisbox bold | 39,107,336 | <p>I am trying to bold all of the entries for my python tkinter listbox.</p>
<p>I have a list and it enters it into a listbox:</p>
<p><code>listbox4.insert(END, Words)</code>
Does anyone know the code to bold these entries?</p>
| 1 | 2016-08-23T17:24:50Z | 39,107,472 | <pre><code>from Tkinter import *
import tkFont

root = Tk()  # the Listbox needs a parent window
sf = tkFont.Font(family='Helvetica', size=36, weight='bold')
lb = Listbox(root, bd=1, height=10, font=sf)
</code></pre>
| 1 | 2016-08-23T17:32:53Z | [
"python",
"user-interface",
"tkinter"
] |
Python tkinter lisbox bold | 39,107,336 | <p>I am trying to bold all of the entries for my python tkinter listbox.</p>
<p>I have a list and it enters it into a listbox:</p>
<p><code>listbox4.insert(END, Words)</code>
Does anyone know the code to bold these entries?</p>
| 1 | 2016-08-23T17:24:50Z | 39,107,536 | <p>you can configure the Font used by the listbox object for all of the text</p>
<p>assuming you are using python 3 and tkinter the code would be like follows (replace the import line with <code>import tkFont</code> on python 2)</p>
<pre><code>from tkinter import font
listbox4.insert(END, Words)
bolded = font.Font(weight='bold') # will use the default font
listbox4.config(font=bolded)
</code></pre>
<p>see <a href="http://www.tutorialspoint.com/python/tk_fonts.htm" rel="nofollow">here</a> for some more <code>tkFont</code> documentation in case you want to change the font-family, size, etc.</p>
| 1 | 2016-08-23T17:36:56Z | [
"python",
"user-interface",
"tkinter"
] |
Flanker: MimePart is not iterable | 39,107,435 | <p>I'm a noob and using Flanker to parse emails.
<a href="https://github.com/mailgun/flanker" rel="nofollow">https://github.com/mailgun/flanker</a></p>
<p>I'm getting a Not Iterable error that I just can't seem to figure out. I've read tons of pages about lists, but I just can't get it to work. I'm hopeful I can get some help.</p>
<p>To run the following code, you will need to install Flanker, and save this file as 'email'.
<a href="http://pastebin.com/ZS4q2kYN" rel="nofollow">http://pastebin.com/ZS4q2kYN</a></p>
<p>I'm trying to read the 'attachmenttype' and do something depending on the response. Can't get it to work though. Here is the test code:</p>
<pre><code>#!/usr/bin/python
#Open Email
from flanker import mime
with open ("email", mode="rb") as myfile:
message_string=myfile.read()
myfile.close()
#Read Email
msg = mime.from_string(message_string)
#read attachment type
attachmenttype = msg.parts[1]
print attachmenttype
#This errors for me: TypeError: argument of type 'MimePart' is not iterable
if attachmenttype:
if '(text/html)' in attachmenttype:
print "woohoo"
</code></pre>
<p>Here is the response I get:
<a href="http://i.stack.imgur.com/YuL1T.png" rel="nofollow"><img src="http://i.stack.imgur.com/YuL1T.png" alt="myerror"></a></p>
<p>Thanks in advance.</p>
| 0 | 2016-08-23T17:31:05Z | 39,107,507 | <p><code>attachmenttype</code> may be printed as a string, but it is not a string; it is a structure containing some properties. However, since you can print it, you're halfway there. Just convert it to a string using <code>str</code> and compare that.</p>
<p>Fix your code like this. I couldn't test it but I don't see how it wouldn't work:</p>
<pre><code>if attachmenttype:
if '(text/html)' in str(attachmenttype):
print("woohoo")
</code></pre>
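A generic illustration of the same idea, with a made-up stand-in class for flanker's MimePart: the object itself is not iterable, but its string rendering can be searched with `in`:

```python
# MimeLikePart is a hypothetical stand-in for flanker's MimePart.
class MimeLikePart:
    def __repr__(self):
        return '(text/html)'

part = MimeLikePart()

# 'in' on the object itself would raise TypeError, but it works on the
# string rendering produced by str().
print('(text/html)' in str(part))
```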
| 1 | 2016-08-23T17:34:58Z | [
"python",
"email",
"parsing"
] |
How to do a transpose a dataframe group by key on pandas? | 39,107,512 | <p>I have this table from my database, and I need to transpose it grouped by survey_id:</p>
<pre><code>id answer survey_id question_number questionid
216 0.0 69 3 2.0
217 3.0 69 4 3.0
218 0.0 69 5 4.0
219 0.0 69 6 5.0
221 0.0 69 8 7.0
</code></pre>
<p>Like this:</p>
<pre><code>Survey P01 P02 P03 P04 P05
69 1 1 2 2 1
</code></pre>
<p>The cell is the answer and the column is format "P{question_number}"</p>
<p>I'm using pandas 0.18.1.</p>
<p>How can I do that?</p>
| -3 | 2016-08-23T17:35:29Z | 39,108,054 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow"><code>pivot</code></a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add_prefix.html" rel="nofollow"><code>add_prefix</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p>
<pre><code>print (df.pivot(index='survey_id', columns='question_number', values='answer')
.add_prefix('P')
.reset_index())
question_number survey_id P3 P4 P5 P6 P8
0 69 0.0 3.0 0.0 0.0 0.0
</code></pre>
| 1 | 2016-08-23T18:11:12Z | [
"python",
"pandas",
"dataframe",
"analytics"
] |
python + arduino controlling DC Motor | 39,107,576 | <p>Hi, this is my Arduino code. Since I want the loop to run only once, I used the while(1) {} construct in void loop(): </p>
<pre><code>int motorPin = 3;
int motorDir = 12;
int motorBr = 9;
void setup() {
//pinMode(motorPin, OUTPUT);
pinMode(motorBr, OUTPUT);
pinMode(motorDir, OUTPUT);
if (Serial.available() > 0) {
if(Serial.read() == '1') {
digitalWrite(motorBr, LOW);
digitalWrite(motorDir, HIGH);
digitalWrite(motorPin, HIGH);
delay(500);
digitalWrite(motorBr, HIGH);
} else if(Serial.read() == '0') {
digitalWrite(motorBr, LOW);
digitalWrite(motorDir, LOW);
digitalWrite(motorPin, HIGH);
delay(500);
digitalWrite(motorBr, HIGH);
}
}
}
void loop() { while(1) {}
}
</code></pre>
<p>This is my python code</p>
<pre><code>import serial
import time
ser = serial.Serial('COM3', 9600, timeout=1)
time.sleep(2)
#I am forcing the script to write 1 to Arduino to make the motor turn
ser.write(b'1')
ser.flush()
time.sleep(2)
ser.close()
</code></pre>
<p>The communication isn't happening. Any insight should help. I am using Python 3.5 and Arduino Uno with the updated drivers. </p>
<p>Edit: </p>
<p>Hi Julien, yes the following code does its job: </p>
<pre><code>int motorPin = 3;
int motorDir = 12;
int motorBr = 9;
void setup() {
// put your setup code here, to run once:
//pinMode(motorPin, OUTPUT);
pinMode(motorBr, OUTPUT);
pinMode(motorDir, OUTPUT);
digitalWrite(motorBr, LOW);
digitalWrite(motorDir, HIGH);
digitalWrite(motorPin, HIGH);
delay(500);
digitalWrite(motorBr, HIGH);
delay(2000);
digitalWrite(motorBr, LOW);
digitalWrite(motorDir, LOW);
digitalWrite(motorPin, HIGH);
delay(500);
digitalWrite(motorBr, HIGH);
}
void loop() {
// put your main code here, to run repeatedly:
}
</code></pre>
<p>I have also made the following changes </p>
<pre><code>ser.write('1') --> ser.write(b'1')
Serial.read() == 1 --> Serial.read() == '1'
Serial.read() == 1 --> Serial.read() == 0x31
</code></pre>
<p>doesn't seem to have any effect. </p>
<p>The way I am accomplishing this is first uploading the Arduino program to memory, then running the Python script. No errors show up either.. </p>
<p>Execution of the Arduino code via a subprocess call in Python:</p>
<pre><code>import subprocess
actionLine = "upload"
projectFile = "C:/Users/Tomography/Desktop/DCM2/DCM2.ino"
portname = "COM3"
boardname = "arduino:avr:uno"
#I added the arduino.exe to path, the command automatically sources the
Command = "arduino" + " --" + actionLine +" --board " + boardname + " --port " + portname + " " + projectFile
print(Command)
result = subprocess.call(Command)
if result != 0:
print("\n Failed - result code = %s --" %(result))
else:
print("\n-- Success --")
</code></pre>
| 0 | 2016-08-23T17:39:31Z | 39,108,105 | <p>try this :</p>
<pre><code>import serial
import time
ser = serial.Serial('COM3', 9600, timeout=1) #here you may add write_timeout=1 to avoid indefinite blocking on failing flush
time.sleep(2)
ser.write('1')
ser.flush() #force the physical write
#time.sleep(2) #no need to sleep as flush was blocking
ser.close()
</code></pre>
<p>for the Arduino code, the test on Communication happens only once as it is located in setup function. The loop() is the equivalent of the while(1) in the main loop that you may know from "normal" C codes. </p>
<p><a href="https://www.arduino.cc/en/Reference/Setup" rel="nofollow">Reference manual for setup</a></p>
<p><a href="https://www.arduino.cc/en/Reference/loop" rel="nofollow">Reference manual for loop</a></p>
<p>This means your arduino code is already in the while(1) in loop() once you execute the Python and it will never be alowed to analyse the serial data.</p>
<p>The correct Arduino code would be :</p>
<pre><code>int motorPin = 3;
int motorDir = 12;
int motorBr = 9;
void setup() //this is executed only once at init
{
//pinMode(motorPin, OUTPUT);
pinMode(motorBr, OUTPUT);
pinMode(motorDir, OUTPUT);
}
void loop() //This will be executed over and over
{
if (Serial.available() > 0) {
// here '1' (the character) is important as 1 is the number
// and '1' equals 0x31 (ASCII)
if(Serial.read() == '1') {
digitalWrite(motorBr, LOW);
digitalWrite(motorDir, HIGH);
digitalWrite(motorPin, HIGH);
delay(500);
digitalWrite(motorBr, HIGH);
} else if(Serial.read() == '0') {
digitalWrite(motorBr, LOW);
digitalWrite(motorDir, LOW);
digitalWrite(motorPin, HIGH);
delay(500);
digitalWrite(motorBr, HIGH);
}
}
}
</code></pre>
| 0 | 2016-08-23T18:14:54Z | [
"python",
"arduino",
"serial-port"
] |
python + arduino controlling DC Motor | 39,107,576 | <p>Hi, this is my Arduino code. Since I want the loop to run only once, I used the while(1) {} construct in void loop(): </p>
<pre><code>int motorPin = 3;
int motorDir = 12;
int motorBr = 9;
void setup() {
//pinMode(motorPin, OUTPUT);
pinMode(motorBr, OUTPUT);
pinMode(motorDir, OUTPUT);
if (Serial.available() > 0) {
if(Serial.read() == '1') {
digitalWrite(motorBr, LOW);
digitalWrite(motorDir, HIGH);
digitalWrite(motorPin, HIGH);
delay(500);
digitalWrite(motorBr, HIGH);
} else if(Serial.read() == '0') {
digitalWrite(motorBr, LOW);
digitalWrite(motorDir, LOW);
digitalWrite(motorPin, HIGH);
delay(500);
digitalWrite(motorBr, HIGH);
}
}
}
void loop() { while(1) {}
}
</code></pre>
<p>This is my python code</p>
<pre><code>import serial
import time
ser = serial.Serial('COM3', 9600, timeout=1)
time.sleep(2)
#I am forcing the script to write 1 to Arduino to make the motor turn
ser.write(b'1')
ser.flush()
time.sleep(2)
ser.close()
</code></pre>
<p>The communication isn't happening. Any insight should help. I am using Python 3.5 and Arduino Uno with the updated drivers. </p>
<p>Edit: </p>
<p>Hi Julien, yes the following code does its job: </p>
<pre><code>int motorPin = 3;
int motorDir = 12;
int motorBr = 9;
void setup() {
// put your setup code here, to run once:
//pinMode(motorPin, OUTPUT);
pinMode(motorBr, OUTPUT);
pinMode(motorDir, OUTPUT);
digitalWrite(motorBr, LOW);
digitalWrite(motorDir, HIGH);
digitalWrite(motorPin, HIGH);
delay(500);
digitalWrite(motorBr, HIGH);
delay(2000);
digitalWrite(motorBr, LOW);
digitalWrite(motorDir, LOW);
digitalWrite(motorPin, HIGH);
delay(500);
digitalWrite(motorBr, HIGH);
}
void loop() {
// put your main code here, to run repeatedly:
}
</code></pre>
<p>I have also made the following changes </p>
<pre><code>ser.write('1') --> ser.write(b'1')
Serial.read() == 1 --> Serial.read() == '1'
Serial.read() == 1 --> Serial.read() == 0x31
</code></pre>
<p>doesn't seem to have any effect. </p>
<p>The way I am accomplishing this is first uploading the Arduino program to memory, then running the Python script. No errors show up either.. </p>
<p>Execution of the Arduino code via a subprocess call in Python:</p>
<pre><code>import subprocess
actionLine = "upload"
projectFile = "C:/Users/Tomography/Desktop/DCM2/DCM2.ino"
portname = "COM3"
boardname = "arduino:avr:uno"
#I added the arduino.exe to path, the command automatically sources the
Command = "arduino" + " --" + actionLine +" --board " + boardname + " --port " + portname + " " + projectFile
print(Command)
result = subprocess.call(Command)
if result != 0:
print("\n Failed - result code = %s --" %(result))
else:
print("\n-- Success --")
</code></pre>
| 0 | 2016-08-23T17:39:31Z | 39,108,187 | <p>Your Python code is sending the string '1', but your arduino code is looking for the number 1. Try changing the arduino code to this</p>
<pre><code>Serial.read() == 0x31
</code></pre>
<p>and</p>
<pre><code>Serial.read() == 0x30
</code></pre>
<p>Those are the ASCII codes for the '1' and '0' respectively</p>
<p>The code in your setup() function has most likely already ran by the time you send the character from your python script.</p>
<p>Place the code in the loop() function and then place some logic in the loop function so it only runs once.</p>
| 0 | 2016-08-23T18:20:15Z | [
"python",
"arduino",
"serial-port"
] |
add new columns with conditions to a dataframe using python | 39,107,610 | <p>I create a dataframe df_energy :</p>
<pre><code>df_energy=pd.read_csv('C:/Users/Demonstrator/Downloads/power.csv', delimiter=';', parse_dates=[0], infer_datetime_format = True)
</code></pre>
<p>with this structure : </p>
<pre><code> df_energy.info()
</code></pre>
<blockquote>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 43229 entries, 0 to 43228
Data columns (total 6 columns):
TIMESTAMP 43229 non-null datetime64[ns]
P_ACT_KW 40376 non-null float64
PERIODE_TARIF 43209 non-null object
P_SOUSCR 37501 non-null float64
SITE 43229 non-null object
TARIF 43229 non-null object
dtypes: datetime64[ns](1), float64(2), object(3)
memory usage: 2.0+ MB
TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR SITE TARIF
2015-07-31 23:00:00 12.0 HC NaN ST GEREON TURPE_HTA5
2015-07-31 23:10:00 466.0 HC 425.0 ST GEREON TURPE_HTA5
2015-07-31 23:20:00 18.0 HC 425.0 ST GEREON TURPE_HTA5
2015-07-31 23:30:00 17.0 HC 425.0 ST GEREON TURPE_HTA5
</code></pre>
</blockquote>
<p>As I am starting learning python, I would like to know can I add three new columns : High_energy, Medium_energy and low_energy .</p>
<p>High_energy contains the P_ACT_KW value if P_ACT_KW > 400, Medium_energy contains the P_ACT_KW value if P_ACT_KW is between 200 and 400, Low_energy contains the P_ACT_KW value if P_ACT_KW < 200.
For example : </p>
<blockquote>
<pre><code>TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR SITE TARIF High_energy Medium_energy Low_energy
2015-07-31 23:00:00 12.0 HC NaN ST GEREON TURPE_HTA5 0 0 12
2015-07-31 23:10:00 466.0 HC 425.0 ST GEREON TURPE_HTA5 466 0 0
2015-07-31 23:20:00 18.0 HC 425.0 ST GEREON TURPE_HTA5 0 0 18
2015-07-31 23:30:00 17.0 HC 425.0 ST GEREON TURPE_HTA5 0 0 17
</code></pre>
</blockquote>
<p>Thank you</p>
<p>Kind regards</p>
| 0 | 2016-08-23T17:41:17Z | 39,108,161 | <p>you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow">np.where</a> from <a href="http://www.numpy.org/" rel="nofollow">numpy</a> as:<br>
Sample df:</p>
<pre><code>Out[71]:
TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR SITE \
0 2015-07-31 23:00:00 12.0 HC NaN ST GEREON
1 2015-07-31 23:10:00 466.0 HC 425.0 ST GEREON
2 2015-07-31 23:20:00 18.0 HC 425.0 ST GEREON
3 2015-07-31 23:30:00 17.0 HC 425.0 ST GEREON
TARIF
0 TURPE_HTA5
1 TURPE_HTA5
2 TURPE_HTA5
3 TURPE_HTA5
df['high_energy']=np.where(df['P_ACT_KW']>400,df['P_ACT_KW'],0)
df['medium_energy']=np.where((df['P_ACT_KW']>200)&(df['P_ACT_KW']<400),df['P_ACT_KW'],0)
df['low_energy']=np.where(df['P_ACT_KW']<200,df['P_ACT_KW'],0)
Out[72]:
TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR SITE \
0 2015-07-31 23:00:00 12.0 HC NaN ST GEREON
1 2015-07-31 23:10:00 466.0 HC 425.0 ST GEREON
2 2015-07-31 23:20:00 18.0 HC 425.0 ST GEREON
3 2015-07-31 23:30:00 17.0 HC 425.0 ST GEREON
TARIF high_energy medium_energy low_energy
0 TURPE_HTA5 0.0 0.0 12.0
1 TURPE_HTA5 466.0 0.0 0.0
2 TURPE_HTA5 0.0 0.0 18.0
3 TURPE_HTA5 0.0 0.0 17.0
</code></pre>
| 3 | 2016-08-23T18:18:57Z | [
"python",
"pandas",
"dataframe"
] |
unicodecsv doesn't read unicode csv file | 39,107,654 | <p>This is the line I'm trying to read:</p>
<pre><code>with open('u.item', 'w') as demofile:
demofile.write(
"543|Mis\xe9rables, Les (1995)|01-Jan-1995||"
"http://us.imdb.com/M/title-exact?Mis%E9rables%2C%20Les%20%281995%29|"
"0|0|0|0|0|0|0|0|1|0|0|0|1|0|0|0|0|0|0\n"
)
</code></pre>
<p>This is the way I am reading it</p>
<pre><code>import unicodecsv as csv
def moviesToRDF(csvFilePath):
with open(csvFilePath, 'rU') as csvFile:
reader = csv.reader(csvFile, encoding='utf-8', delimiter= '|')
for row in reader:
print row
moviesToRDF("u.item")
</code></pre>
<p>This is the error I am getting:</p>
<pre><code>UnicodeDecodeError: 'utf8' codec can't decode byte 0xe9 in position 3: invalid continuation byte
</code></pre>
<p>the value that throws the error is:</p>
<pre><code>Misérables, Les
</code></pre>
<p>What wrong did I do please?</p>
<p>(i am using 2.7 python)</p>
| -2 | 2016-08-23T17:43:58Z | 39,107,801 | <p>I found the problem.</p>
<p>The file is encoded as latin-1, not utf-8.</p>
<p>This solves the problem:</p>
<pre><code>reader = csv.reader(csvFile, encoding='latin-1', delimiter= '|')
</code></pre>
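A quick check of that diagnosis: the byte 0xe9 decodes to 'é' in latin-1, but in utf-8 it begins a multi-byte sequence whose next byte is invalid, which is exactly the error seen above:

```python
raw = b'Mis\xe9rables, Les (1995)'

# latin-1 maps every byte to a character, so this always succeeds.
print(raw.decode('latin-1'))

# utf-8 treats 0xe9 as the start of a multi-byte sequence and fails.
try:
    raw.decode('utf-8')
except UnicodeDecodeError as err:
    print('utf-8 failed:', err.reason)
```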
| 1 | 2016-08-23T17:53:53Z | [
"python",
"python-2.7",
"unicode"
] |
Random Http error while using Youtube api to get comments through python | 39,107,772 | <p>I am getting random HTTP errors while trying to fetch comments from predefined videos with the YouTube API v3 in Python. Given a list of video ids, comments are downloaded for each one until Python throws an error and the process stops. If I rerun the program it might get stuck on the same or on another video, and on different comments as well. The errors range from 40* to 500, also on a random basis.
Putting the code in try/except didn't help. Is there anything else I can do besides remembering the last scraped video id and restarting the program manually?
The code:</p>
<pre><code>import httplib2
import urllib2
import os
import sys
import pandas as pd
from apiclient.discovery import build_from_document
from apiclient.discovery import build
from apiclient.errors import HttpError
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import argparser, run_flow
DEVELOPER_KEY = "---"
CLIENT_SECRETS_FILE = "client_secrets.json"
YOUTUBE_READ_WRITE_SSL_SCOPE = "https://www.googleapis.com/auth/youtube.force-ssl"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
listL = list()
listL.append("D0uEXoL04OM")
listL.append("eX8-g9wM_Sc")
listL.append("aKInxyP5l7k")
listL.append("vMp__taMQtE")
listL.append("Zd3qcqGKbYA")
listL.append("69sg2o2phVs")
listL.append("QcGhVY3ieu4")
listL.append("t4QhJOFo2S0")
listL.append("NeJPr6ko2Hk")
listL.append("15ka3dFn6LI")
listL.append("hweA36OyxRM")
listL.append("ZmCv5HJJPqQ")
listL.append("zfi5DamYZxA")
listL.append("x7O3GVAqCio")
listL.append("kAbhm5NJTz8")
listL.append("7URzyREVdao")
def comment_threads_list_by_video_id(service, part, video_id):
res = service.commentThreads().list(
part=part,
videoId=video_id,
maxResults="100",
).execute()
nextPageToken = res.get('nextPageToken')
while ('nextPageToken' in res):
nextPage = service.commentThreads().list(
part="snippet",
videoId=video_id,
maxResults="100",
pageToken=nextPageToken
).execute()
res['items'] = res['items'] + nextPage['items']
if 'nextPageToken' not in nextPage:
res.pop('nextPageToken', None)
else:
nextPageToken = nextPage['nextPageToken']
youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)
for item in listL:
try:
print item
comment_threads_list_by_video_id(youtube, 'snippet, replies', item)
except urllib2.HTTPError, err:
print "Http Error happened"
pass
except urllib2.URLError, err:
print "Some other error happened:", err.reason
pass
</code></pre>
<p>EDIT: --------------------------
Few errors</p>
<pre><code>HttpError: <HttpError 400 when requesting https://www.googleapis.com/youtube/v3/commentThreads?pageToken=ChYQpqWd6pfYzgIYyISxrpfYzgIgACgcEhQIABDIhLGul9jOAhiQgZuP9IfOAhgCIO4VKJHr35vwuKix-gE%3D&part=snippet&key=AIzaSyBzExhLoWbeHU1iKHZuaYV7IBPJNiyaDkE&alt=json&videoId=D0uEXoL04OM&maxResults=100 returned "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the requests input is invalid. Check the structure of the <code>commentThread</code> resource in the request body to ensure that it is valid.">
</code></pre>
| 0 | 2016-08-23T17:51:47Z | 39,146,984 | <p>A silly mistake was made: the API's <code>HttpError</code> should have been caught in the <code>except</code> clause:</p>
<pre><code>...
except HttpError, err:
...
</code></pre>
<p>but the <code>urllib2</code> one was used instead:</p>
<pre><code>...
except urllib2.HTTPError, err:
...
</code></pre>
<p>The simple solution was just to ignore the error and retry until success. However, it is still not clear why these random errors occur. Sleep didn't help.</p>
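The ignore-and-retry workaround can be sketched as a retry wrapper; here fetch_fn stands in for the real API call and is hypothetical, with RuntimeError standing in for apiclient's HttpError:

```python
import time

def with_retries(fetch_fn, video_id, max_attempts=5, delay=0.0):
    # Retry transient failures; re-raise once the attempts are exhausted.
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch_fn(video_id)
        except RuntimeError:  # stand-in for apiclient's HttpError
            if attempt == max_attempts:
                raise
            time.sleep(delay)

# Demo: a fake fetcher that fails twice before succeeding.
calls = {'n': 0}
def flaky(video_id):
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('transient HTTP error')
    return ['comment-1', 'comment-2']

result = with_retries(flaky, 'D0uEXoL04OM')
print(result)
```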
| 0 | 2016-08-25T13:41:40Z | [
"python",
"http-error",
"youtube-v3-api"
] |
Flower doesn't display all workers for celery | 39,107,835 | <p>I am running celery on two servers with one redis as a broker.</p>
<p>The Celery start command looks like the following:</p>
<pre><code>celery multi start 2 -A app_name
</code></pre>
<p>Flower start command:</p>
<pre><code>celery flower -A app_name --address=10.41.31.210 --port=5555
</code></pre>
<p>In flower's output there are some warnings:</p>
<pre><code>WARNING:flower.api.control:'stats' inspect method failed
WARNING:flower.api.control:'active_queues' inspect method failed
WARNING:flower.api.control:'registered' inspect method failed
WARNING:flower.api.control:'scheduled' inspect method failed
WARNING:flower.api.control:'active' inspect method failed
WARNING:flower.api.control:'reserved' inspect method failed
WARNING:flower.api.control:'revoked' inspect method failed
WARNING:flower.api.control:'conf' inspect method failed
</code></pre>
<p>And the most strange thing for me - not all workers are displayed in Flower's dashboard. Seems that after every flower restart only some workers are displayed. Due to my start scripts - there should be at least 8 workers, but I see 4 or sometimes 6.</p>
<p>Looking for any solution or advice. Thank you.</p>
<p>P.s I don't have any problems with the same services when there is only one server used for celery workers.</p>
| 0 | 2016-08-23T17:56:10Z | 39,107,887 | <p>You have to also pass <code>--basic_auth=<username>:<password></code> param while starting flower. Here <code>username</code> and <code>password</code> are of rabbitMQ user.</p>
<p>In case you do not have user, you may create it via:</p>
<pre><code>sudo rabbitmqctl add_user <username> <password>
sudo rabbitmqctl set_permissions -p / <username> ".*" ".*" ".*"
</code></pre>
<p><strong>Note:</strong> You will have to also mention this user and password in your celery configuration of the project.</p>
| 0 | 2016-08-23T18:00:09Z | [
"python",
"celery",
"flower"
] |
Efficiently finding the closest coordinate pair from a set in Python | 39,107,896 | <p><strong>The Problem</strong></p>
<p>Imagine I am stood in an airport. Given a geographic coordinate pair, how can one efficiently determine which airport I am stood in?</p>
<p><strong>Inputs</strong></p>
<ul>
<li>A coordinate pair <code>(x,y)</code> representing the location I am stood at.</li>
<li>A set of coordinate pairs <code>[(a1,b1), (a2,b2)...]</code> where each coordinate pair represents one airport.</li>
</ul>
<p><strong>Desired Output</strong></p>
<p>A coordinate pair <code>(a,b)</code> from the set of airport coordinate pairs representing the closest airport to the point <code>(x,y)</code>.</p>
<p><strong>Inefficient Solution</strong></p>
<p>Here is my inefficient attempt at solving this problem. It is clearly linear in the length of the set of airports.</p>
<pre><code>shortest_distance = None
shortest_distance_coordinates = None
point = (50.776435, -0.146834)
for airport in airports:
distance = compute_distance(point, airport)
if distance < shortest_distance or shortest_distance is None:
shortest_distance = distance
shortest_distance_coordinates = airport
</code></pre>
<p><strong>The Question</strong></p>
<p>How can this solution be improved? This might involve some way of pre-filtering the list of airports based on the coordinates of the location we are currently stood at, or sorting them in a certain order beforehand.</p>
| 2 | 2016-08-23T18:00:36Z | 39,107,994 | <p>From this <a href="http://codereview.stackexchange.com/questions/28207/finding-the-closest-point-to-a-list-of-points">SO question</a>:</p>
<pre><code>import numpy as np
def closest_node(node, nodes):
nodes = np.asarray(nodes)
deltas = nodes - node
dist_2 = np.einsum('ij,ij->i', deltas, deltas)
return np.argmin(dist_2)
</code></pre>
<p>where <code>node</code> is a tuple with two values (x, y) and <code>nodes</code> is an array of tuples with two values (<code>[(x_1, y_1), (x_2, y_2),]</code>)</p>
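A short usage sketch of the function above (the coordinates are illustrative):

```python
import numpy as np

def closest_node(node, nodes):
    nodes = np.asarray(nodes)
    deltas = nodes - node
    dist_2 = np.einsum('ij,ij->i', deltas, deltas)  # squared distances
    return np.argmin(dist_2)

airports = [(10, 10), (20, 20), (30, 30), (40, 40)]
idx = closest_node((21, 21), airports)
print(airports[idx])
```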
| 1 | 2016-08-23T18:07:32Z | [
"python",
"coordinates",
"distance",
"closest"
] |
Efficiently finding the closest coordinate pair from a set in Python | 39,107,896 | <p><strong>The Problem</strong></p>
<p>Imagine I am stood in an airport. Given a geographic coordinate pair, how can one efficiently determine which airport I am stood in?</p>
<p><strong>Inputs</strong></p>
<ul>
<li>A coordinate pair <code>(x,y)</code> representing the location I am stood at.</li>
<li>A set of coordinate pairs <code>[(a1,b1), (a2,b2)...]</code> where each coordinate pair represents one airport.</li>
</ul>
<p><strong>Desired Output</strong></p>
<p>A coordinate pair <code>(a,b)</code> from the set of airport coordinate pairs representing the closest airport to the point <code>(x,y)</code>.</p>
<p><strong>Inefficient Solution</strong></p>
<p>Here is my inefficient attempt at solving this problem. It is clearly linear in the length of the set of airports.</p>
<pre><code>shortest_distance = None
shortest_distance_coordinates = None
point = (50.776435, -0.146834)
for airport in airports:
distance = compute_distance(point, airport)
if distance < shortest_distance or shortest_distance is None:
shortest_distance = distance
shortest_distance_coordinates = airport
</code></pre>
<p><strong>The Question</strong></p>
<p>How can this solution be improved? This might involve some way of pre-filtering the list of airports based on the coordinates of the location we are currently stood at, or sorting them in a certain order beforehand.</p>
| 2 | 2016-08-23T18:00:36Z | 39,108,426 | <p>If your coordinates are unsorted, the search can only be improved slightly. Assuming each pair is <code>(latitude, longitude)</code>, you can filter on latitude first, since on Earth</p>
<blockquote>
<p>1 degree of latitude on the sphere is 111.2 km or 69 miles</p>
</blockquote>
<p>but that would not give a huge speedup.</p>
<p>If you sort the airports by latitude first then you can use a binary search for finding the first airport that <em>could</em> match (<code>airport_lat >= point_lat-tolerance</code>) and then only compare up to the last one that <em>could</em> match (<code>airport_lat <= point_lat+tolerance</code>) - but take care of 0 degrees equaling 360. While you cannot use that library directly, the sources of <a href="https://docs.python.org/3/library/bisect.html" rel="nofollow">bisect</a> are a good start for implementing a binary search.</p>
<p>While technically this way the search is still O(n), you have much fewer actual distance calculations (depending on tolerance) and few latitude comparisons. So you will have a huge speedup.</p>
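A hedged sketch of that sort-then-bisect idea, with illustrative coordinates and a plain Euclidean distance standing in for a proper great-circle computation:

```python
import bisect
import math

# Airports sorted by latitude; bisect narrows the candidate window so
# only the survivors get a real distance computation.
airports = sorted([(35.55, 139.78), (40.64, -73.78),
                   (50.03, 8.57), (51.47, -0.45)])
lats = [a[0] for a in airports]

def closest(point, tolerance=2.0):
    lo = bisect.bisect_left(lats, point[0] - tolerance)
    hi = bisect.bisect_right(lats, point[0] + tolerance)
    candidates = airports[lo:hi] or airports  # fall back to a full scan
    return min(candidates,
               key=lambda a: math.hypot(a[0] - point[0], a[1] - point[1]))

print(closest((51.5, -0.1)))
```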
| 1 | 2016-08-23T18:35:26Z | [
"python",
"coordinates",
"distance",
"closest"
] |
Efficiently finding the closest coordinate pair from a set in Python | 39,107,896 | <p><strong>The Problem</strong></p>
<p>Imagine I am stood in an airport. Given a geographic coordinate pair, how can one efficiently determine which airport I am stood in?</p>
<p><strong>Inputs</strong></p>
<ul>
<li>A coordinate pair <code>(x,y)</code> representing the location I am stood at.</li>
<li>A set of coordinate pairs <code>[(a1,b1), (a2,b2)...]</code> where each coordinate pair represents one airport.</li>
</ul>
<p><strong>Desired Output</strong></p>
<p>A coordinate pair <code>(a,b)</code> from the set of airport coordinate pairs representing the closest airport to the point <code>(x,y)</code>.</p>
<p><strong>Inefficient Solution</strong></p>
<p>Here is my inefficient attempt at solving this problem. It is clearly linear in the length of the set of airports.</p>
<pre><code>shortest_distance = None
shortest_distance_coordinates = None
point = (50.776435, -0.146834)
for airport in airports:
distance = compute_distance(point, airport)
if distance < shortest_distance or shortest_distance is None:
shortest_distance = distance
shortest_distance_coordinates = airport
</code></pre>
<p><strong>The Question</strong></p>
<p>How can this solution be improved? This might involve some way of pre-filtering the list of airports based on the coordinates of the location we are currently stood at, or sorting them in a certain order beforehand.</p>
| 2 | 2016-08-23T18:00:36Z | 39,109,296 | <pre><code>>>> from scipy import spatial
>>> airports = [(10,10),(20,20),(30,30),(40,40)]
>>> tree = spatial.KDTree(airports)
>>> tree.query([(21,21)])
(array([ 1.41421356]), array([1]))
</code></pre>
<p>Where 1.41421356 is the distance between the queried point and the nearest neighbour and 1 is the index of the neighbour.</p>
<p>See: <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.query.html#scipy.spatial.KDTree.query" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.query.html#scipy.spatial.KDTree.query</a></p>
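<p>To get the actual coordinate pair back, index the original list with the returned index — a small sketch (querying a single point, rather than a list of points, returns a scalar distance and index instead of arrays):</p>

```python
from scipy import spatial

airports = [(10, 10), (20, 20), (30, 30), (40, 40)]
tree = spatial.KDTree(airports)
distance, index = tree.query((21, 21))  # single point -> scalar results
nearest_airport = airports[index]
print(nearest_airport)  # (20, 20)
```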
| 1 | 2016-08-23T19:31:04Z | [
"python",
"coordinates",
"distance",
"closest"
] |
Stack columns above value labels in pandas pivot table | 39,107,997 | <p>Given a dataframe that looks like:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({
'Key1': ['one', 'one', 'two', 'three'] * 3,
'Key2': ['A', 'B', 'C'] * 4,
'Value1': np.random.randn(12),
'Value2': np.random.randn(12)
})
print df
</code></pre>
<pre>
Key1 Key2 Value1 Value2
0 one A 1.405817 1.307511
1 one B -0.037627 -0.215800
2 two C -0.116591 -1.195066
3 three A 2.044775 -1.207433
4 one B -1.109636 0.031521
5 one C -1.529597 1.761366
6 two A -1.349865 0.321454
7 three B 0.814374 2.285579
8 one C 0.178702 0.479210
9 one A 0.718921 0.504311
10 two B -0.375898 -0.379315
11 three C -0.822250 0.703811
</pre>
<p>I can pivot it so that I get the first key as rows and the second key as columns</p>
<pre><code>pt = df.pivot_table(
index=['Key1'],
columns=['Key2'],
values=['Value1','Value2']
)
print pt
</code></pre>
<pre>
Value1 Value2
Key2 A B C A B C
Key1
one -0.076303 -0.899175 0.631831 -1.196249 0.339583 0.583173
three 0.105773 0.460911 -0.387941 0.697660 1.091828 1.447365
two 1.391854 0.499841 -0.422887 -0.366169 -0.230001 2.417211
</pre>
<p>How can I flip it such that the values and columns are stacked by the column first and then the values, e.g.</p>
<pre>
A B C
Value1 Value2 Value1 Value2 Value1 Value2
one -0.0763 -1.19625 -0.89918 0.339583 0.631831 0.583173
three 0.105773 0.69766 0.460911 1.091828 -0.38794 1.447365
two 1.391854 -0.36617 0.499841 -0.23 -0.42289 2.417211
</pre>
<p>I've looked at <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html" rel="nofollow">MultiIndexes</a> but I can't see how that would affect the layout in this way.</p>
| 2 | 2016-08-23T18:07:41Z | 39,108,110 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.swaplevel.html" rel="nofollow"><code>MultiIndex.swaplevel</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow"><code>sort_index</code></a>:</p>
<pre><code>pt.columns = pt.columns.swaplevel(0,1)
pt = pt.sort_index(axis=1)
#pt = pt.sort_index(axis=1, level=0)
print (pt)
Key2 A B C
Value1 Value2 Value1 Value2 Value1 Value2
Key1
one 0.439076 -0.492287 -0.841044 0.435300 -0.490016 0.045178
three -0.975650 0.276097 0.617394 -0.553229 0.213254 -0.044848
two 0.291563 2.730831 -2.405110 -0.878826 -0.801219 0.908600
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.swaplevel.html" rel="nofollow"><code>DataFrame.swaplevel</code></a>:</p>
<pre><code>pt = pt.swaplevel(0,1, axis=1).sort_index(axis=1)
print (pt)
Key2 A B C
Value1 Value2 Value1 Value2 Value1 Value2
Key1
one 0.439076 -0.492287 -0.841044 0.435300 -0.490016 0.045178
three -0.975650 0.276097 0.617394 -0.553229 0.213254 -0.044848
two 0.291563 2.730831 -2.405110 -0.878826 -0.801219 0.908600
</code></pre>
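<p>Once the levels are swapped, <code>Key2</code> is the outer column level, so a single <code>Key2</code> label selects both value columns at once. A quick self-contained check (a sketch reusing the question's setup; the values are random, so only the structure is asserted):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Key1': ['one', 'one', 'two', 'three'] * 3,
    'Key2': ['A', 'B', 'C'] * 4,
    'Value1': np.random.randn(12),
    'Value2': np.random.randn(12)
})
pt = df.pivot_table(index=['Key1'], columns=['Key2'], values=['Value1', 'Value2'])
pt = pt.swaplevel(0, 1, axis=1).sort_index(axis=1)

# Key2 is now the outer level, so one label selects both value columns
print(pt['A'].columns.tolist())  # ['Value1', 'Value2']
print(pt.columns.get_level_values(0).tolist())  # ['A', 'A', 'B', 'B', 'C', 'C']
```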
| 4 | 2016-08-23T18:15:09Z | [
"python",
"pandas"
] |
Python FuncAnimation not recognizing update | 39,108,035 | <p>I'm learning how to animate in python for one of my projects and I'm basing my code off of the following example from <a href="http://eli.thegreenplace.net/2016/drawing-animated-gifs-with-matplotlib" rel="nofollow">here</a>.</p>
<p>My adaption of their code goes as follows:</p>
<pre><code>import numpy as np
import h5py, os, glob, sys, time
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
def update(i):
for j in np.arange(0,10):
for k in np.arange(0,10):
for channel in ["N","E"]:
x = some_x_value
y = some_y_value
line = plt.loglog(x,y)
ax.set_xlabel(label)
return line, ax
if __name__ == "__main__":
fig, ax = plt.subplots()
anim = FuncAnimation(fig, update, frames=np.arange(0,10), interval=200)
anim.save('Test.gif', dpi=80, writer='imagemagick')
</code></pre>
<p>And when I try to run my script I get the following error: <code>NameError: name 'update' is not defined</code>.</p>
<p>As I said before, I'm still learning how to animate and don't understand all of what's going on in the code tutorial I found. However, I'm very confused as to why update isn't recognized at all as the way I call update seems to be exactly the same as what's in the tutorial.</p>
| 0 | 2016-08-23T18:09:52Z | 39,113,212 | <pre><code>import numpy as np
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
def update(i, ln):
i = i+1
x = i
y = i ** 2
x_data = ln.get_xdata()
y_data = ln.get_ydata()
ln.set_data(np.concatenate(([x], x_data)),
np.concatenate(([y], y_data)))
return ln
if __name__ == "__main__":
fig, ax = plt.subplots()
ax.set_xlim(1, 10)
ax.set_ylim(1, 100)
line, = ax.loglog([1], [1])
anim = FuncAnimation(fig, update, frames=np.arange(0, 10), interval=200,
fargs=(line, ))
anim.save('Test.gif', dpi=80, writer='imagemagick')
</code></pre>
<p>Works as expected. This makes me think that there is some other error in your code which is getting masked.</p>
| 0 | 2016-08-24T01:49:38Z | [
"python",
"animation",
"matplotlib"
] |
Mouse clicker program without autohotkey? | 39,108,113 | <p>I am a total programming newb and am trying to find a way to automate a mouse click once every 5 seconds for 5 minutes at a specific location to automatically run a licensed program many times. My work computer does not allow installation of autohotkey, but I was able to install Python v3.5 (v2.7 will not install). My work computer uses windows 7 and can't install any programs that require admin rights.</p>
<p>I tried using the PyAutoGui module and it does not appear to be working with v3.5 python?</p>
<p>The script I want to use is something like below, but the script below is for v2.5 Python, which I cannot use on v3.5. Can someone translate this script to v3.5 Python?</p>
<pre><code>import win32api, win32con
def click(x,y):
win32api.SetCursorPos((x,y))
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,x,y,0,0)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,x,y,0,0)
click(10,10)
</code></pre>
| 0 | 2016-08-23T18:15:32Z | 39,108,648 | <p>Why not just write a Python script that would open and close the program using <a href="https://docs.python.org/3/library/subprocess.html#popen-objects" rel="nofollow">Popen</a>?</p>
<pre><code>import subprocess, time

def run_program():
    program = subprocess.Popen("path/to/file.exe")
    time.sleep(5)  # let the program run before closing it
    program.terminate()

while True:
    run_program()
</code></pre>
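<p>Since the question asks for a run of five minutes rather than forever, the relaunch loop can be bounded. A hedged sketch (the <code>launch</code> callable is a placeholder for starting the licensed program):</p>

```python
import time

def run_every(duration, interval, launch):
    """Call launch() every `interval` seconds until `duration` seconds
    have elapsed; returns how many times launch() was called."""
    deadline = time.monotonic() + duration
    count = 0
    while time.monotonic() < deadline:
        launch()
        count += 1
        time.sleep(interval)
    return count

# e.g. run_every(5 * 60, 5, run_program) for the 5-minute schedule
```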
| 0 | 2016-08-23T18:49:06Z | [
"python"
] |
I want to drop_duplicates from TextFileReader and save what remains into separate files | 39,108,147 | <p>So, I'm iterating over chunks in a huge pandas TextFileReader object, and for each chunk I do drop_duplicates, then to_csv. Unfortunately, when I tried to save everything in one file, it crashed after the file reached 4GB. I assumed I would have to create a couple of smaller CSVs that won't exceed 4GB each.</p>
<p>Now I'm testing my code on a smaller file (300 lines) and smaller chunks, but the problem is that it either puts one chunk per file, if </p>
<pre><code>if int(os.stat(last_file).st_size) < 'approx. size of a chunk':
</code></pre>
<p>Or it returns one empty file only, if </p>
<pre><code>if int(os.stat(last_file).st_size) < 'much bigger or much smaller than a chunk':
</code></pre>
<p>My code:</p>
<pre><code>tp1 = pd.read_csv(r'C:\test\test.csv', chunksize=50, iterator=True)
a = 0
f = open(path2%str(a),'ab+')
last_file = path2%str(a)
for chunk in tp1:
if int(os.stat(last_file).st_size) < 50:
chunk.drop_duplicates(inplace=False,subset='kol2')
chunk.to_csv(last_file,mode='ab+')
else:
a += 1
last_file = path2%str(a)
chunk.drop_duplicates(inplace=False,subset='kol2')
chunk.to_csv(last_file,mode='ab+')
</code></pre>
<p>I have no idea what's going on.</p>
<p>Thanks for the replies!</p>
| 2 | 2016-08-23T18:17:51Z | 39,109,013 | <p>This works for me.</p>
<h3>Initialize <code>'test.csv'</code></h3>
<pre><code>df = pd.DataFrame(np.random.choice((1, 0), (200, 2)), columns=list('ab'))
df.insert(0, 'label', np.random.choice(list('ABCDE'), 200))
df.to_csv('test.csv', index=None)
</code></pre>
<h3>Process for 1 file</h3>
<pre><code># Initialize 'output.csv' with just the header
df.iloc[:0].to_csv('output.csv', index=None)
# wrap key word args together for cleanliness
output_kwds = dict(index=None, mode='a', header=None)
chunker = pd.read_csv('test.csv', chunksize=10, iterator=True)
for chunk in chunker:
chunk.drop_duplicates(subset='label').to_csv('output.csv', **output_kwds)
</code></pre>
<h3>Validation</h3>
<pre><code>df_ = pd.read_csv('output.csv')
df_.label.describe()
count 90
unique 5
top D
freq 20
Name: label, dtype: object
</code></pre>
<h3>Process for separate files</h3>
<pre><code># wrap key word args together for cleanliness
output_kwds = dict(index=None, mode='a')
chunker = pd.read_csv('test.csv', chunksize=10, iterator=True)
for i, chunk in enumerate(chunker):
chunk.drop_duplicates(subset='label').to_csv('output%s.csv' % i, **output_kwds)
</code></pre>
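<p>One caveat: per-chunk <code>drop_duplicates</code> only removes duplicates <em>within</em> each chunk, so a key that appears in two different chunks will be written twice. If the deduplication column fits in memory, carrying a running set of seen keys across chunks fixes that — a sketch using an in-memory buffer in place of the real file:</p>

```python
import io
import numpy as np
import pandas as pd

# build a small csv in memory (same shape as the test data above)
df = pd.DataFrame({'label': np.random.choice(list('ABCDE'), 200),
                   'value': np.random.randn(200)})
buf = io.StringIO(df.to_csv(index=None))

seen = set()                      # keys already written out
parts = []
for chunk in pd.read_csv(buf, chunksize=10):
    # drop rows whose key was seen in an earlier chunk, then dedupe locally
    chunk = chunk[~chunk['label'].isin(seen)].drop_duplicates(subset='label')
    seen.update(chunk['label'])
    parts.append(chunk)           # in place of chunk.to_csv(...)

result = pd.concat(parts)
print(result['label'].is_unique)  # True
```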
| 1 | 2016-08-23T19:12:24Z | [
"python",
"pandas"
] |
Resizing Quad Frame Video from Cap | 39,108,278 | <p>Hi, I have this block of code that was suggested to me, and I am trying to modify it using the cap.set width and height properties to adjust the overall size of the quad display. However, every other run it throws this ValueError: could not broadcast input array from shape (240,320) into shape (480,640) (the numbers change depending on how I modify the size values in the set). My overall goal is to plug the quad frame into a tkinter GUI I have that is currently showing single channels just fine. My GUI displays a 640x480 channel and I want to display the quad in that same size frame. Here is the current code:</p>
<pre><code>import numpy as np
import cv2
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
red = np.zeros(frame.shape, 'uint8')
green = np.zeros(frame.shape, 'uint8')
blue = np.zeros(frame.shape, 'uint8')
cap.set(cv2.CAP_PROP_FRAME_WIDTH, int(640 *.5))
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, int(480 *.5))
while(True):
ret, frame = cap.read()
b, g, r = cv2.split(frame)
red[..., 0], red[..., 1], red[..., 2] = r, r, r
green[..., 0], green[..., 1], green[..., 2] = g, g, g
blue[..., 0], blue[..., 1], blue[..., 2] = b, b, b
final = cv2.vconcat((
cv2.hconcat((frame, red)),
cv2.hconcat((green, blue))
))
cv2.imshow('frame', final)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
| 0 | 2016-08-23T18:24:51Z | 39,126,932 | <p>Moving the cap.set and the color shapes to after the read location fixes my problem.</p>
<pre><code>import numpy as np
import cv2
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
while(True):
ret, frame = cap.read()
cap.set(cv2.CAP_PROP_FRAME_WIDTH, int(640 *.5))
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, int(480 *.5))
red = np.zeros(frame.shape, 'uint8')
green = np.zeros(frame.shape, 'uint8')
blue = np.zeros(frame.shape, 'uint8')
b, g, r = cv2.split(frame)
red[..., 0], red[..., 1], red[..., 2] = r, r, r
green[..., 0], green[..., 1], green[..., 2] = g, g, g
blue[..., 0], blue[..., 1], blue[..., 2] = b, b, b
final = cv2.vconcat((
cv2.hconcat((frame, red)),
cv2.hconcat((green, blue))
))
cv2.imshow('frame', final)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
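<p>The mosaic itself is just array stacking, so the layout (and the final size the tkinter GUI will receive) can be checked without a camera using NumPy alone — a sketch with a synthetic BGR frame, where <code>cv2.split</code>/<code>hconcat</code>/<code>vconcat</code> correspond to the slicing and stacking below:</p>

```python
import numpy as np

h, w = 240, 320                       # per-channel frame size
frame = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]

# replicate each channel into a 3-channel grayscale image
red = np.stack([r, r, r], axis=-1)
green = np.stack([g, g, g], axis=-1)
blue = np.stack([b, b, b], axis=-1)

final = np.vstack([np.hstack([frame, red]),
                   np.hstack([green, blue])])
print(final.shape)  # (480, 640, 3) -- the 2x2 mosaic doubles each dimension
```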
| 0 | 2016-08-24T15:03:00Z | [
"python",
"opencv"
] |