Dataset schema (field: type, observed range):

- title: string, length 10 to 172
- question_id: int64, 469 to 40.1M
- question_body: string, length 22 to 48.2k
- question_score: int64, -44 to 5.52k
- question_date: string, length 20 (fixed)
- answer_id: int64, 497 to 40.1M
- answer_body: string, length 18 to 33.9k
- answer_score: int64, -38 to 8.38k
- answer_date: string, length 20 (fixed)
- tags: list
Regex required or can BeautifulSoup refine output
39,378,971
<p>If I use the following function I can grab the text and link I need from a website:</p> <pre><code>def get_url_text(url): source = requests.get(url) plain_text = source.text soup = BeautifulSoup(plain_text) for item_name in soup.findAll('li', {'class': 'ptb2'}): print(item_name.string) print (item_name.a) get_url_text('https://www.residentadvisor.net/podcast.aspx') </code></pre> <p>returns:</p> <pre><code>RA.532 Marquis Hawkes &lt;a href="/podcast-episode.aspx?id=532"&gt;&lt;h1&gt;RA.532 Marquis Hawkes&lt;/h1&gt;&lt;/a&gt; RA.531 Evan Baggs &lt;a href="/podcast-episode.aspx?id=531"&gt;&lt;h1&gt;RA.531 Evan Baggs&lt;/h1&gt;&lt;/a&gt; RA.530 MCDE vs Jeremy Underground </code></pre> <p>If I only want the href link instead of the tags etc surrounding it do I need to use a regex or is there another method within BeautifulSoup?</p> <p>Desired output is:</p> <pre><code>RA.532 Marquis Hawkes https://www.residentadvisor.net/podcast-episode.aspx?id=532 </code></pre> <p>for each similar element.</p>
3
2016-09-07T21:03:38Z
39,379,018
<p>You can use <code>print(item_name.a['href'])</code> and, if needed, prepend the prefix <code>https://www.residentadvisor.net</code>, since the links on the page are relative, with no explicit scheme or netloc part (for example, <code>/podcast-episode.aspx?id=528</code>).</p>
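A minimal sketch of the prefixing step using only the standard library (Python 3; in Python 2 the same function lives in the `urlparse` module). The href value is the one shown in the question's output:

```python
from urllib.parse import urljoin

# The page's links are relative, e.g. what item_name.a['href'] would return:
base = "https://www.residentadvisor.net"
href = "/podcast-episode.aspx?id=532"

full_url = urljoin(base, href)
print(full_url)  # https://www.residentadvisor.net/podcast-episode.aspx?id=532
```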
3
2016-09-07T21:07:06Z
[ "python" ]
When running Neo4j Python Bolt Driver Example, error:"ImportError: No module named '_backend'"
39,378,987
<p>I am trying to switch over from Py2Neo to the new Neo4j <a href="https://neo4j.com/docs/developer-manual/current/drivers/" rel="nofollow">Bolt Driver</a>. After installing neo4j-driver v1.0.2, I ran the example code found on their <a href="https://github.com/neo4j/neo4j-python-driver" rel="nofollow">GitHub ReadMe page</a>:</p> <pre><code>from neo4j.v1 import GraphDatabase, basic_auth driver = GraphDatabase.driver("bolt://localhost", auth=basic_auth("neo4j", "neo4j")) session = driver.session() session.run("CREATE (a:Person {name:'Bob'})") result = session.run("MATCH (a:Person) RETURN a.name AS name") for record in result: print(record["name"]) session.close() </code></pre> <p>In response, I get the following error:</p> <pre><code>Traceback (most recent call last): File "C:/PythonApps/Neo4jBoltDriverTest/run.py", line 1, in &lt;module&gt; from neo4j.v1 import GraphDatabase, basic_auth File "C:\Users\username\AppData\Local\Programs\Python\Python35\lib\site-packages\neo4j\__init__.py", line 29, in &lt;module&gt; from neo4j.core import GraphDatabase, Direction, NotFoundException, BOTH, ANY, INCOMING, OUTGOING File "C:\Users\username\AppData\Local\Programs\Python\Python35\lib\site-packages\neo4j\core.py", line 19, in &lt;module&gt; from _backend import * ImportError: No module named '_backend' </code></pre> <p>I have never seen an error with '_backend' before and it doesn't seem to be a library I can install. Any ideas what is causing this error?</p> <p>For more context, I am using Python 3.5 and have installed neo4j-driver v1.0.2. It looks like neo4j-driver only works up to Python 3.4; is that correct?</p>
0
2016-09-07T21:04:56Z
39,379,675
<p>There is no module called <code>neo4j.core</code> in the official driver. From where did you install this library?</p>
1
2016-09-07T22:01:48Z
[ "python", "neo4j" ]
When running Neo4j Python Bolt Driver Example, error:"ImportError: No module named '_backend'"
39,378,987
<p>I am trying to switch over from Py2Neo to the new Neo4j <a href="https://neo4j.com/docs/developer-manual/current/drivers/" rel="nofollow">Bolt Driver</a>. After installing neo4j-driver v1.0.2, I ran the example code found on their <a href="https://github.com/neo4j/neo4j-python-driver" rel="nofollow">GitHub ReadMe page</a>:</p> <pre><code>from neo4j.v1 import GraphDatabase, basic_auth driver = GraphDatabase.driver("bolt://localhost", auth=basic_auth("neo4j", "neo4j")) session = driver.session() session.run("CREATE (a:Person {name:'Bob'})") result = session.run("MATCH (a:Person) RETURN a.name AS name") for record in result: print(record["name"]) session.close() </code></pre> <p>In response, I get the following error:</p> <pre><code>Traceback (most recent call last): File "C:/PythonApps/Neo4jBoltDriverTest/run.py", line 1, in &lt;module&gt; from neo4j.v1 import GraphDatabase, basic_auth File "C:\Users\username\AppData\Local\Programs\Python\Python35\lib\site-packages\neo4j\__init__.py", line 29, in &lt;module&gt; from neo4j.core import GraphDatabase, Direction, NotFoundException, BOTH, ANY, INCOMING, OUTGOING File "C:\Users\username\AppData\Local\Programs\Python\Python35\lib\site-packages\neo4j\core.py", line 19, in &lt;module&gt; from _backend import * ImportError: No module named '_backend' </code></pre> <p>I have never seen an error with '_backend' before and it doesn't seem to be a library I can install. Any ideas what is causing this error?</p> <p>For more context, I am using Python 3.5 and have installed neo4j-driver v1.0.2. It looks like neo4j-driver only works up to Python 3.4; is that correct?</p>
0
2016-09-07T21:04:56Z
39,417,917
<p>Just wanted to follow up with the answer so that it might benefit others coming across this in the future.</p> <p>With Nigel Small's help, I realized that I was not importing the correct package. I believe another Python package named neo4j, left over from previous work, was on my system, and my PyCharm IDE was resolving the import to it rather than to neo4j-driver.</p> <p>Bottom line: this problem was my fault, due to poor package management; there is nothing wrong with the library's source. The moral of the story is that virtualenv is your friend and you should use it for every new project.</p>
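A quick way to diagnose this kind of shadowing is to ask the import machinery where a name would actually come from (generic sketch; the helper name `locate` is made up here, and `"json"` is only a stand-in that exists everywhere so the sketch runs as-is):

```python
import importlib.util

def locate(module_name):
    """Return the file a module would be imported from, or None if absent."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

# Substitute "neo4j" for the module you suspect is shadowed; if the printed
# path points somewhere unexpected, a stale package is hiding the driver.
print(locate("json"))
```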
0
2016-09-09T18:43:11Z
[ "python", "neo4j" ]
Python Requests declined: "Due to the presence of characters known to be used in Cross Site Scripting attacks, access is forbidden."
39,379,045
<p>Dear fellow <code>requests</code> users,</p> <p><strong>Update:</strong></p> <p>Sorry, guys. My error came from a mistake:</p> <p>My goal was to do this:</p> <pre><code>r = requests.get('http://www.spdrs.com/product/fund.seam?ticker=SPY', stream=True, headers=hdr) </code></pre> <p>I did this: </p> <pre><code>r = requests.get('http://www.spdrs.com/product/fund.seam?ticker={}'.format(['SPY']), stream=True, headers=hdr) </code></pre> <p>Which should be:</p> <pre><code>r = requests.get('http://www.spdrs.com/product/fund.seam?ticker={}'.format('SPY'), stream=True, headers=hdr) </code></pre> <p>The extra brackets <strong><code>[]</code></strong> were causing the error, apparently. Dumb mistake. Feel free to vote me down, if you wish.</p> <p><strong>Original question:</strong></p> <p>I am trying to scrape <code>spdrs.com</code> webpage using:</p> <pre><code>hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3', 'Accept-Encoding': 'none', 'Accept-Language': 'en-US,en;q=0.8', 'Connection': 'keep-alive'} r = requests.get('http://www.spdrs.com/product/fund.seam?ticker=SPY', stream=True, headers=hdr) </code></pre> <p>But all I get is this:</p> <pre><code>Due to the presence of characters known to be used in Cross Site Scripting attacks, access is forbidden. </code></pre> <p>It's the same with <code>http</code> or <code>https</code>.</p> <p>If I remove <code>hdr</code>, I get a straight 403 decline.</p> <p>Is there any modification I can do to the <code>hdr</code> to show the website that I am a well-behaving script? 
I know, servers don't like scrapers.</p> <p>This <a href="http://stackoverflow.com/questions/5249130/due-to-the-presence-of-characters-known-to-be-used-in-cross-site-scripting-attac">thread on SO</a> shows a similar problem from webmaster's perspective.</p> <p>Thanks a lot!</p> <p>Yi</p>
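The mistake described in the update is reproducible without any network access: formatting a one-element list puts its brackets and quotes into the URL, and those are exactly the characters an XSS filter tends to reject (illustration only, using the URL from the question):

```python
url_template = 'http://www.spdrs.com/product/fund.seam?ticker={}'

bad = url_template.format(['SPY'])   # a list: str() embeds brackets and quotes
good = url_template.format('SPY')

print(bad)   # ...fund.seam?ticker=['SPY']
print(good)  # ...fund.seam?ticker=SPY
```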
1
2016-09-07T21:09:22Z
39,380,709
<p>Drop the following key/value from your header:</p> <pre><code>'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3', </code></pre> <p>It works for me after I did that.</p>
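The removal can be done in place on the dict from the question without retyping it; this is a plain dict operation, nothing requests-specific (header values abbreviated for the sketch):

```python
# The header dict from the question, abbreviated for the sketch
hdr = {
    'User-Agent': 'Mozilla/5.0',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
    'Connection': 'keep-alive',
}

hdr.pop('Accept-Charset', None)  # drop the entry the server objects to
print(sorted(hdr))               # ['Connection', 'User-Agent']
```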
0
2016-09-08T00:07:28Z
[ "python", "python-requests" ]
Adding exception to "AttributeError" python
39,379,140
<p>So, I have some tweets with some special characters and shapes. I am trying to find a word in those tweets by converting them to lower case. The function throws an "AttributeError" when it encounters those special characters, and hence I want to change my function in a way that it skips those records and processes the others.</p> <p>Can I add an exception handler for "AttributeError" in Python? I want it to act more like an "On Error Resume Next"-style error-handling statement.</p> <p>I am currently using:</p> <pre><code>def word_in_text(word, text): try: print text word = word.lower() text = text.lower() match = re.search(word, text) if match: return True else: return False except(AttributeError, Exception) as e: continue </code></pre> <p>Error after applying @galah92's recommendations:</p> <pre><code>Traceback (most recent call last): File "&lt;input&gt;", line 1, in &lt;module&gt; File "C:\Python27\lib\site-packages\pandas\core\series.py", line 2220, in apply mapped = lib.map_infer(values, f, convert=convert_dtype) File "pandas\src\inference.pyx", line 1088, in pandas.lib.map_infer (pandas\lib.c:63043) File "&lt;input&gt;", line 1, in &lt;lambda&gt; File "&lt;input&gt;", line 3, in word_in_text File "C:\Python27\lib\re.py", line 146, in search return _compile(pattern, flags).search(string) TypeError: expected string or buffer </code></pre> <p>I am new to Python and self-learning it. Any help will be really appreciated.</p>
0
2016-09-07T21:17:59Z
39,379,551
<p>You can use <code>re.IGNORECASE</code> flag when you <code>search()</code>.<br> That way you don't need to deal with <code>lower()</code> or exceptions.</p> <pre><code>def word_in_text(word, text): print text if re.search(word, text, re.IGNORECASE): return True else: return False </code></pre> <hr> <p>As an example, if I run:</p> <pre><code>from __future__ import unicode_literals # see edit notes import re text = "🎵🔥CANCION! You &amp;amp" word = "you" def word_in_text(word, text): print(text) if re.search(word, text, re.IGNORECASE): return True else: return False print(word_in_text(word, text)) </code></pre> <p>The output is:</p> <pre><code>🎵🔥CANCION! You &amp;amp True </code></pre> <hr> <p><strong>EDIT</strong></p> <p>For Python 2, you should add <code>from __future__ import unicode_literals</code> at the top of your script to make sure you encode everything to UTF-8.<br> You can read more about it <a href="http://stackoverflow.com/questions/23370025/what-is-unicode-literals-used-for">here</a>.</p>
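The TypeError in the question's edit is a separate issue: pandas hands the function a non-string (a NaN, which is a float) for rows with missing text, and that is what `re.search` chokes on. A type guard handles those rows without any exception machinery (sketch; on Python 2 one would check `basestring` instead of `str`):

```python
import re

def word_in_text(word, text):
    # Missing tweets arrive as NaN (a float); treat any non-string as "no match"
    if not isinstance(text, str):
        return False
    return bool(re.search(word, text, re.IGNORECASE))

print(word_in_text("you", "CANCION! You &amp"))  # True
print(word_in_text("you", float("nan")))         # False
```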
0
2016-09-07T21:51:05Z
[ "python", "python-2.7", "python-3.x" ]
fast way to put ones between ones in each row of a numpy 2d array
39,379,147
<p>I have a 2d array (Q) consisting of just zeros and ones. I wish to fill with 1 each position between 1's of each line Q. Here's an example:</p> <p><strong>Original matrix:</strong></p> <pre><code>[0 0 0 1 0 1] [1 0 0 0 0 0] [0 0 0 0 0 0] [1 1 0 1 0 0] [1 0 0 0 0 1] [0 1 1 0 0 1] [1 0 1 0 1 0] </code></pre> <p><strong>The resulting matrix:</strong></p> <pre><code>[0 0 0 1 1 1] [1 0 0 0 0 0] [0 0 0 0 0 0] [1 1 1 1 0 0] [1 1 1 1 1 1] [0 1 1 1 1 1] [1 1 1 1 1 0] </code></pre> <p>I implement an algorithm, it works, but for large arrays it is not efficient.</p> <pre><code>def beetween(Q): for client in range(len(Q)): idStart = findIdStart(Q, client) idEnd = findIdEnd(Q, client) if idStart != idEnd and idStart &gt; -1 and idEnd &gt; -1: for i in range(idStart, idEnd): Q[client][i] = 1 return Q def findIdStart(Q, client): if Q.ndim &gt; 1: l, c = np.array(Q).shape for product in range (0, c): if Q[client][product] == 1: return product else: idProduct = 1 Qtemp = Q[client] if Qtemp[idProduct] == 1: return idProduct return -1 def findIdEnd(Q, client): if Q.ndim &gt; 1: l, c = np.array(Q).shape Qtemp = Q[client] for product in range(0,c): idProduct = (c-1)-product if Qtemp[idProduct]==1: return idProduct else: idProduct = 1 Qtemp = Q[client] if Qtemp[idProduct] == 1: return idProduct return -1 </code></pre> <p>I'm trying to build a more optimized version, but I'm not having success:</p> <pre><code>def beetween(Q): l, c = np.shape(Q) minIndex = Q.argmax(axis=1) maxIndex = c-(np.fliplr(Q).argmax(axis=1)) Q = np.zeros(shape=(l,c)).astype(np.int) for i in range(l): Q[i, minIndex[i]:maxIndex[i]] = 1 return Q </code></pre> <p><strong>Original matrix:</strong></p> <pre><code>[0 0 0 1 0 1] [1 0 0 0 0 0] [0 0 0 0 0 0] [1 1 0 1 0 0] [1 0 0 0 0 1] [0 1 1 0 0 1] [1 0 1 0 1 0] </code></pre> <p><strong>Wrong Result</strong></p> <pre><code>[0 0 0 1 1 1] # OK [1 0 0 0 0 0] # OK [1 1 1 1 1 1] # wrong [1 1 1 1 0 0] # OK [1 1 1 1 1 1] # OK [0 1 1 1 1 1] # OK [1 1 1 1 1 0] # OK </code></pre> 
<p>Can anybody suggest another simple solution to this problem?</p> <p>Thanks.</p>
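For what it's worth, the attempted vectorized version fails only on all-zero rows: `argmax` returns 0 for them on both the row and its reverse, so the computed slice covers the whole row. Masking those rows repairs it; this is a sketch independent of the posted answers:

```python
import numpy as np

Q = np.array([[0, 0, 0, 1, 0, 1],
              [1, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [1, 0, 0, 0, 0, 1],
              [0, 1, 1, 0, 0, 1],
              [1, 0, 1, 0, 1, 0]])

c = Q.shape[1]
first = Q.argmax(axis=1)                  # first 1 per row (0 on empty rows too)
last = c - 1 - Q[:, ::-1].argmax(axis=1)  # last 1 per row
has_one = Q.any(axis=1)                   # mask out the all-zero rows
cols = np.arange(c)
out = ((cols >= first[:, None]) & (cols <= last[:, None])
       & has_one[:, None]).astype(int)
print(out)
```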
4
2016-09-07T21:18:25Z
39,379,263
<p>The function below looks at a single row and fills 1s between other 1s if they exist. It assumes that the array contains only 0s and 1s.</p> <pre><code>import numpy as np def ones_row(row): if np.sum(row) &gt;= 2: # Otherwise, not enough 1s inds = np.where(row == 1)[0] row[inds[0]:inds[-1]] = 1 return row </code></pre> <p>Now you can process your entire array with </p> <pre><code>for jj in range(len(Q)): Q[jj] = ones_row(Q[jj]) </code></pre>
1
2016-09-07T21:27:27Z
[ "python", "arrays", "numpy", "matrix" ]
fast way to put ones between ones in each row of a numpy 2d array
39,379,147
<p>I have a 2d array (Q) consisting of just zeros and ones. I wish to fill with 1 each position between 1's of each line Q. Here's an example:</p> <p><strong>Original matrix:</strong></p> <pre><code>[0 0 0 1 0 1] [1 0 0 0 0 0] [0 0 0 0 0 0] [1 1 0 1 0 0] [1 0 0 0 0 1] [0 1 1 0 0 1] [1 0 1 0 1 0] </code></pre> <p><strong>The resulting matrix:</strong></p> <pre><code>[0 0 0 1 1 1] [1 0 0 0 0 0] [0 0 0 0 0 0] [1 1 1 1 0 0] [1 1 1 1 1 1] [0 1 1 1 1 1] [1 1 1 1 1 0] </code></pre> <p>I implement an algorithm, it works, but for large arrays it is not efficient.</p> <pre><code>def beetween(Q): for client in range(len(Q)): idStart = findIdStart(Q, client) idEnd = findIdEnd(Q, client) if idStart != idEnd and idStart &gt; -1 and idEnd &gt; -1: for i in range(idStart, idEnd): Q[client][i] = 1 return Q def findIdStart(Q, client): if Q.ndim &gt; 1: l, c = np.array(Q).shape for product in range (0, c): if Q[client][product] == 1: return product else: idProduct = 1 Qtemp = Q[client] if Qtemp[idProduct] == 1: return idProduct return -1 def findIdEnd(Q, client): if Q.ndim &gt; 1: l, c = np.array(Q).shape Qtemp = Q[client] for product in range(0,c): idProduct = (c-1)-product if Qtemp[idProduct]==1: return idProduct else: idProduct = 1 Qtemp = Q[client] if Qtemp[idProduct] == 1: return idProduct return -1 </code></pre> <p>I'm trying to build a more optimized version, but I'm not having success:</p> <pre><code>def beetween(Q): l, c = np.shape(Q) minIndex = Q.argmax(axis=1) maxIndex = c-(np.fliplr(Q).argmax(axis=1)) Q = np.zeros(shape=(l,c)).astype(np.int) for i in range(l): Q[i, minIndex[i]:maxIndex[i]] = 1 return Q </code></pre> <p><strong>Original matrix:</strong></p> <pre><code>[0 0 0 1 0 1] [1 0 0 0 0 0] [0 0 0 0 0 0] [1 1 0 1 0 0] [1 0 0 0 0 1] [0 1 1 0 0 1] [1 0 1 0 1 0] </code></pre> <p><strong>Wrong Result</strong></p> <pre><code>[0 0 0 1 1 1] # OK [1 0 0 0 0 0] # OK [1 1 1 1 1 1] # wrong [1 1 1 1 0 0] # OK [1 1 1 1 1 1] # OK [0 1 1 1 1 1] # OK [1 1 1 1 1 0] # OK </code></pre> 
<p>Can anybody suggest another simple solution to this problem?</p> <p>Thanks.</p>
4
2016-09-07T21:18:25Z
39,379,373
<p>Another option with <code>np.apply_along_axis</code>:</p> <pre><code>import numpy as np def minMax(A): idx = np.where(A == 1)[0] if len(idx) &gt; 1: A[idx.min():idx.max()] = 1 return A np.apply_along_axis(minMax, 1, mat) # array([[0, 0, 0, 1, 1, 1], # [1, 0, 0, 0, 0, 0], # [0, 0, 0, 0, 0, 0], # [1, 1, 1, 1, 0, 0], # [1, 1, 1, 1, 1, 1], # [0, 1, 1, 1, 1, 1], # [1, 1, 1, 1, 1, 0]]) </code></pre>
2
2016-09-07T21:35:38Z
[ "python", "arrays", "numpy", "matrix" ]
fast way to put ones between ones in each row of a numpy 2d array
39,379,147
<p>I have a 2d array (Q) consisting of just zeros and ones. I wish to fill with 1 each position between 1's of each line Q. Here's an example:</p> <p><strong>Original matrix:</strong></p> <pre><code>[0 0 0 1 0 1] [1 0 0 0 0 0] [0 0 0 0 0 0] [1 1 0 1 0 0] [1 0 0 0 0 1] [0 1 1 0 0 1] [1 0 1 0 1 0] </code></pre> <p><strong>The resulting matrix:</strong></p> <pre><code>[0 0 0 1 1 1] [1 0 0 0 0 0] [0 0 0 0 0 0] [1 1 1 1 0 0] [1 1 1 1 1 1] [0 1 1 1 1 1] [1 1 1 1 1 0] </code></pre> <p>I implement an algorithm, it works, but for large arrays it is not efficient.</p> <pre><code>def beetween(Q): for client in range(len(Q)): idStart = findIdStart(Q, client) idEnd = findIdEnd(Q, client) if idStart != idEnd and idStart &gt; -1 and idEnd &gt; -1: for i in range(idStart, idEnd): Q[client][i] = 1 return Q def findIdStart(Q, client): if Q.ndim &gt; 1: l, c = np.array(Q).shape for product in range (0, c): if Q[client][product] == 1: return product else: idProduct = 1 Qtemp = Q[client] if Qtemp[idProduct] == 1: return idProduct return -1 def findIdEnd(Q, client): if Q.ndim &gt; 1: l, c = np.array(Q).shape Qtemp = Q[client] for product in range(0,c): idProduct = (c-1)-product if Qtemp[idProduct]==1: return idProduct else: idProduct = 1 Qtemp = Q[client] if Qtemp[idProduct] == 1: return idProduct return -1 </code></pre> <p>I'm trying to build a more optimized version, but I'm not having success:</p> <pre><code>def beetween(Q): l, c = np.shape(Q) minIndex = Q.argmax(axis=1) maxIndex = c-(np.fliplr(Q).argmax(axis=1)) Q = np.zeros(shape=(l,c)).astype(np.int) for i in range(l): Q[i, minIndex[i]:maxIndex[i]] = 1 return Q </code></pre> <p><strong>Original matrix:</strong></p> <pre><code>[0 0 0 1 0 1] [1 0 0 0 0 0] [0 0 0 0 0 0] [1 1 0 1 0 0] [1 0 0 0 0 1] [0 1 1 0 0 1] [1 0 1 0 1 0] </code></pre> <p><strong>Wrong Result</strong></p> <pre><code>[0 0 0 1 1 1] # OK [1 0 0 0 0 0] # OK [1 1 1 1 1 1] # wrong [1 1 1 1 0 0] # OK [1 1 1 1 1 1] # OK [0 1 1 1 1 1] # OK [1 1 1 1 1 0] # OK </code></pre> 
<p>Can anybody suggest another simple solution to this problem?</p> <p>Thanks.</p>
4
2016-09-07T21:18:25Z
39,379,379
<p>Here's a one-liner:</p> <pre><code>In [25]: Q Out[25]: array([[0, 0, 0, 1, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [1, 1, 0, 1, 0, 0], [1, 0, 0, 0, 0, 1], [0, 1, 1, 0, 0, 1], [1, 0, 1, 0, 1, 0]]) In [26]: np.maximum.accumulate(Q, axis=1) &amp; np.maximum.accumulate(Q[:,::-1], axis=1)[:,::-1] Out[26]: array([[0, 0, 0, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1], [0, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 0]]) </code></pre> <p>Or</p> <pre><code>In [36]: np.minimum(np.maximum.accumulate(Q, axis=1), np.maximum.accumulate(Q[:,::-1], axis=1)[:,::-1]) Out[36]: array([[0, 0, 0, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1], [0, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 0]]) </code></pre> <p>In either case, the two terms being combined are</p> <pre><code>In [37]: np.maximum.accumulate(Q, axis=1) Out[37]: array([[0, 0, 0, 1, 1, 1], [1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1], [0, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]]) </code></pre> <p>and</p> <pre><code>In [38]: np.maximum.accumulate(Q[:,::-1], axis=1)[:,::-1] Out[38]: array([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 0]]) </code></pre>
5
2016-09-07T21:35:50Z
[ "python", "arrays", "numpy", "matrix" ]
How do I use py2app with Anaconda python?
39,379,155
<p>I am using Python 3 from the Anaconda distribution, and trying to convert a simple python program into an OS X app (running on El Capitan). Following the instructions in <a href="https://pythonhosted.org/py2app/tutorial.html" rel="nofollow">the tutorial</a>, I ran </p> <pre><code>py2applet --make-setup my-script.py python setup.py py2app -A </code></pre> <p>Everything ran fine with no errors, but when I try to launch the app, I get this error message:</p> <blockquote> <p>my-script: A python runtime not could (sic) be located. You may need to install a framework build of Python, or edit the PyRuntimeLocations array in this applications Info.plist file. </p> </blockquote> <p>I understood this to mean I should add the path of Anaconda's python (which is in my bash PATH, but is not known to the launcher). However, the app's automatically generated <code>Info.plist</code> already points to the Anaconda python binary:</p> <pre><code>&lt;key&gt;PythonInfoDict&lt;/key&gt; &lt;dict&gt; &lt;key&gt;PythonExecutable&lt;/key&gt; &lt;string&gt;/Applications/Experimental/anaconda/bin/python&lt;/string&gt; ... </code></pre> <p>I don't see what there is to fix here. I have read these related questions: </p> <ul> <li><a href="http://stackoverflow.com/q/26081258/699305">How do I use py2app with a virtual environment?</a></li> <li><a href="http://stackoverflow.com/questions/10184974/py2app-is-not-copying-the-python-framework-to-the-new-app-while-using-virutalenv">py2app is not copying the python framework to the new app while using virutalenv</a></li> </ul> <p>The first question involves the same error message, and is resolved by following the advice in the second question. But as I understand it, these questions describe the opposite situation: The OP was running a python distributed with the OS, and wanted to distribute their app; the solution is to use a separately installed python. I <em>am</em> using a non-system python, and I'm not yet trying to distribute anything. 
So what is causing the trouble here, and what is the solution?</p>
0
2016-09-07T21:19:08Z
39,392,214
<p>The suggestion by @l'L'l allowed me to identify the problem: While there were no errors when I generated my app in "alias mode" (using symlinks to the environment instead of copying binaries), building the app without alias mode flushed out the error: <code>py2app</code> looks for the <code>libpython</code> DLL under the non-existent name <code>/Applications/anaconda/lib/libpython3.4.dylib</code>. </p> <p>A quick check showed that Anaconda provides this DLL under a slightly different name: <code>libpython3.4m.dylib</code>. While patching <code>dist/my-script.app/Contents/Info.plist</code> fixes the problem, the <em>right</em> solution is to edit <code>setup.py</code> so that future builds will work correctly. With the help of the <a href="https://pythonhosted.org/py2app/tweaking.html" rel="nofollow">py2app documentation</a>, I put together the following (partial contents of <code>setup.py</code> shown):</p> <pre><code>OPTIONS = {'argv_emulation': True, 'plist': { 'PyRuntimeLocations': [ '@executable_path/../Frameworks/libpython3.4m.dylib', '/Applications/anaconda/lib/libpython3.4m.dylib' ] }} setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) </code></pre> <p>The paths come from the generated <code>Info.plist</code>; I only modified the absolute path, reasoning that if I ever provide a local DLL at the relative path, it will have the default name.</p>
0
2016-09-08T13:25:14Z
[ "python", "osx", "anaconda", "py2app" ]
Creating a list of 'N' pairs of a variable
39,379,197
<p>I have a variable <code>x</code>. I need to create a list such as </p> <pre><code>lst=[0,x,x,2*x,2*x,3*x,3*x,N*x,N*x] </code></pre> <p>up to any <code>N</code></p> <p>Seems like this should be straightforward but I'm kinda stuck. Any help is appreciated.</p> <p>Respectfully I don't see how this question is a duplicate</p> <p><strong>update....</strong> </p> <p>So I did this. </p> <pre><code>import numpy as np N=4 lst=[0] x=1.2 for i in np.arange(1,N+1): seed=[1,1] for j in seed: lst.append(i*x) lst= [0, 1.2, 1.2, 2.3999999999999999, 2.3999999999999999, 3.5999999999999996, 3.5999999999999996, 4.7999999999999998, 4.7999999999999998] </code></pre> <p>It feels like a terrible hack. </p> <p>There has to be a more elegant solution.</p>
-1
2016-09-07T21:22:15Z
39,381,084
<p>You can do this with a list comprehension. It isn't as clear as the code you started with though.</p> <pre><code>&gt;&gt;&gt; N=4 &gt;&gt;&gt; x=1.25 &gt;&gt;&gt; lst = [0] + [i//2 * x for i in range(2, 2*N+2)] &gt;&gt;&gt; lst [0, 1.25, 1.25, 2.5, 2.5, 3.75, 3.75, 5.0, 5.0] </code></pre>
0
2016-09-08T01:03:14Z
[ "python" ]
Creating a list of 'N' pairs of a variable
39,379,197
<p>I have a variable <code>x</code>. I need to create a list such as </p> <pre><code>lst=[0,x,x,2*x,2*x,3*x,3*x,N*x,N*x] </code></pre> <p>up to any <code>N</code></p> <p>Seems like this should be straightforward but I'm kinda stuck. Any help is appreciated.</p> <p>Respectfully I don't see how this question is a duplicate</p> <p><strong>update....</strong> </p> <p>So I did this. </p> <pre><code>import numpy as np N=4 lst=[0] x=1.2 for i in np.arange(1,N+1): seed=[1,1] for j in seed: lst.append(i*x) lst= [0, 1.2, 1.2, 2.3999999999999999, 2.3999999999999999, 3.5999999999999996, 3.5999999999999996, 4.7999999999999998, 4.7999999999999998] </code></pre> <p>It feels like a terrible hack. </p> <p>There has to be a more elegant solution.</p>
-1
2016-09-07T21:22:15Z
39,381,714
<p>You can do it like this:</p> <pre><code>&gt;&gt;&gt; n=4 &gt;&gt;&gt; x=1.2 &gt;&gt;&gt; lst=[0,x,x] &gt;&gt;&gt; lst.extend( i*x for i in range(2,n+1) for _ in range(2) ) &gt;&gt;&gt; lst [0, 1.2, 1.2, 2.4, 2.4, 3.5999999999999996, 3.5999999999999996, 4.8, 4.8] &gt;&gt;&gt; </code></pre> <p>EDIT</p> <p>Or with a single list comprehension and nothing else:</p> <pre><code>&gt;&gt;&gt; n=4 &gt;&gt;&gt; x=1.2 &gt;&gt;&gt; lst=[ i*x for i in range(n+1) for _ in range(2 if i else 1) ] &gt;&gt;&gt; lst [0.0, 1.2, 1.2, 2.4, 2.4, 3.5999999999999996, 3.5999999999999996, 4.8, 4.8] &gt;&gt;&gt; </code></pre> <p>(note: use <code>xrange</code> if you are on Python 2)</p>
1
2016-09-08T02:30:20Z
[ "python" ]
Flask - uploadnotallowed error - when renaming a file to be saved
39,379,287
<p>I'm trying to upload an excel file in flask and give it a new name when saving, something like: <code>oldname.xlsx</code> to <code>newname.xlsx</code>.</p> <p>Here is my code so far:</p> <pre><code>from flask import Flask, render_template, send_file, request, redirect, url_for from flask_uploads import UploadSet, configure_uploads, DOCUMENTS, IMAGES from remove_characters import get_csv, edit_data, cleanup_data import re import os app = Flask(__name__) #the name 'datafiles' must match in app.config to DATAFILES docs = UploadSet('datafiles', DOCUMENTS) app.config['UPLOADED_DATAFILES_DEST'] = 'static/uploads' configure_uploads(app, docs) file_new_name = 'dataexcel' @app.route("/upload", methods = ['GET', 'POST']) def upload(): #user_file is the name value in input element if request.method == 'POST' and 'user_file' in request.files: filestorage = request.files['user_file'] path = "static/uploads/" + filestorage.filename filename = docs.save(filestorage, name = file_new_name) return redirect(url_for('results', path = path)) return render_template('upload.html') </code></pre> <p>So in the <code>save</code> function, I'm passing <code>file_new_name</code> to the name param, so it will be saved with that variable name. I got the <code>name</code> param from flask upload docs, but I get an 'uploadnotallowed' error</p> <p><a href="http://i.stack.imgur.com/u8Gtg.png" rel="nofollow"><img src="http://i.stack.imgur.com/u8Gtg.png" alt="enter image description here"></a></p> <p>I'm wondering If I'm not following the right format for the <code>save</code> function, or my configurations are not set up properly. I'm new to flask, so I'm still learning this cool web framework. Thanks in advance</p>
2
2016-09-07T21:29:25Z
39,379,631
<p>OK, I found my error. The variable <code>file_new_name = 'dataexcel'</code> needs to include the extension, in this case <code>.xlsx</code>, so it should be <code>file_new_name = 'dataexcel.xlsx'</code>.</p> <p>The <code>save</code> call should look like this: <code>filename = docs.save(filestorage, None, file_new_name)</code>. <code>None</code> is the subfolder; if you want to save into a subfolder, change it to something like <code>static/upload/dist</code>.</p>
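Since the fix hinges on remembering the extension, a tiny helper can derive it from the uploaded file's original name instead of hard-coding <code>.xlsx</code> (a generic, hypothetical helper, not part of the flask_uploads API):

```python
import os

def rename_keep_ext(original_filename, new_base):
    """Build a new name that preserves the uploaded file's extension."""
    _, ext = os.path.splitext(original_filename)  # e.g. '.xlsx'
    return new_base + ext

print(rename_keep_ext('oldname.xlsx', 'dataexcel'))  # dataexcel.xlsx
```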
0
2016-09-07T21:58:23Z
[ "python", "flask" ]
Python: exec() a code block and eval() the last line
39,379,331
<p>I have a string literal containing one or more lines of (trusted) Python code, and I would like to <code>exec()</code> the block, while capturing the results of the last line. More concretely, I would like a function <code>exec_then_eval</code> that returns the following:</p> <pre><code>code = """ x = 4 y = 5 x + y """ assert exec_then_eval(code) == 9 </code></pre> <p>A couple things I've tried:</p> <ol> <li><p>By splitting off the last line, you can exec the first block, then eval the last line; e.g.</p> <pre><code>def exec_then_eval(code): first_block = '\n'.join(code.splitlines()[:-1]) last_line = code.splitlines()[-1] globals = {} locals = {} exec(first_block, globals, locals) return eval(last_line, globals, locals) </code></pre> <p>This works, but will fail if the last statement has multiple lines.</p></li> <li><p>If the code itself is modified so the result is stored as a local variable, this variable can then be recovered; e.g.</p> <pre><code>code = """ x = 4 y = 5 z = x + y """ globals = {} locals = {} exec(code, globals, locals) assert locals['z'] == 9 </code></pre> <p>again, this works, but only if you're able to first parse the code block in a general way and modify it appropriately.</p></li> </ol> <p>Is there an easy way to write a general <code>exec_and_eval</code> function?</p>
4
2016-09-07T21:32:42Z
39,381,428
<p>Based on @kalzekdor's suggestion of using the <code>ast</code> module, I came up with this solution, which is similar in spirit to @vaultah's solution posted above:</p> <pre><code>import ast def exec_then_eval(code): block = ast.parse(code, mode='exec') # assumes last node is an expression last = ast.Expression(block.body.pop().value) _globals, _locals = {}, {} exec(compile(block, '&lt;string&gt;', mode='exec'), _globals, _locals) return eval(compile(last, '&lt;string&gt;', mode='eval'), _globals, _locals) </code></pre> <p>It also passes @vaultah's nice set of test-cases:</p> <pre><code>exec_then_eval('''x = 4 y = 5 x + y''') # 9 exec_then_eval('''x = 4 y = 5;x + y''') # 9 exec_then_eval('''x = 4 y = 5;( x + y * 2)''') # 14 </code></pre>
4
2016-09-08T01:52:59Z
[ "python" ]
Changing __getattr__ during instantiation
39,379,351
<p>I am wanting to change <code>__getattr__</code> during instantiation of the class. For example:</p> <pre><code>class AttrTest(object): def __init__(self): self.__getattr__ = self._getattr def _getattr(self, attr): print("Getting {}".format(attr)) </code></pre> <p>I would have expected this to behave like:</p> <pre><code>class AttrTest(object): def __getattr__(self, attr): print("Getting {}".format(attr)) </code></pre> <p>But it does not.</p> <p>For example, when I run:</p> <pre><code>&gt;&gt;&gt; at = AttrTest() &gt;&gt;&gt; at.test </code></pre> <p>I would expect both classes to print <code>Getting test</code>, but the top class throws an <code>AttributeError</code>.</p> <p>Can <code>__getattr__</code> not be changed in this way?</p>
4
2016-09-07T21:34:26Z
39,379,402
<p>For special methods like <code>__getattr__</code>, Python searches in the base(s) <code>__dict__</code>, not in the instance <code>__dict__</code>.</p> <p>You can read more details about this in the <a href="https://docs.python.org/3/reference/datamodel.html#special-lookup" rel="nofollow">special lookup</a> section of the data model documentation.</p> <blockquote> <p>I have two implementations of <code>__getattr__</code> that each use a different type of serialization (json, pickle), and I wanted my class to be able to select one based on a kwarg.</p> </blockquote> <p>This is not a good use-case for overriding <code>__getattr__</code>. Abandon this idea, and instead consider to use <a href="https://docs.python.org/3/library/functions.html#property" rel="nofollow"><code>@property</code></a> or <a href="https://docs.python.org/3/reference/datamodel.html#descriptors" rel="nofollow">descriptors</a> to handle your serialisation dynamically. </p>
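The rule can be demonstrated directly: the per-instance assignment from the question is never consulted for the fallback lookup, while the very same function attached to the class is (minimal sketch):

```python
class A(object):
    pass

def _getattr(self, attr):
    return "Getting {}".format(attr)

a = A()
a.__getattr__ = _getattr.__get__(a)   # per-instance hook, as in the question
try:
    a.test
    fell_through = False
except AttributeError:                # special lookup skipped the instance dict
    fell_through = True

A.__getattr__ = _getattr              # on the class: found by the special lookup
print(fell_through, a.test)           # True Getting test
```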
2
2016-09-07T21:37:34Z
[ "python" ]
Issue trying to install or update package using PIP
39,379,497
<p>For some reason I can't use PIP to install or update python packages in my system. I get this error.</p> <blockquote> <p>Exception:</p> <p>Traceback (most recent call last):</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/basecommand.py", line 209, in main status = self.run(options, args)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/commands/install.py", line 299, in run requirement_set.prepare_files(finder)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/req/req_set.py", line 360, in prepare_files ignore_dependencies=self.ignore_dependencies))</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/req/req_set.py", line 448, in _prepare_file req_to_install, finder)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/req/req_set.py", line 397, in _check_skip_installed finder.find_requirement(req_to_install, self.upgrade)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/index.py", line 440, in find_requirement all_candidates = self.find_all_candidates(req.name)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/index.py", line 398, in find_all_candidates for page in self._get_pages(url_locations, project_name):</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/index.py", line 543, in _get_pages page = self._get_page(location)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/index.py", line 646, in _get_page return HTMLPage.get_page(link, session=self.session)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/index.py", line 755, in get_page "Cache-Control": "max-age=600",</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/_vendor/requests/sessions.py", line 480, in get return self.request('GET', url, **kwargs)</p> <p>File 
"/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/download.py", line 378, in request return super(PipSession, self).request(method, url, *args, **kwargs)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/_vendor/requests/sessions.py", line 468, in request resp = self.send(prep, **send_kwargs)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/_vendor/requests/sessions.py", line 576, in send r = adapter.send(request, **kwargs)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/_vendor/cachecontrol/adapter.py", line 46, in send resp = super(CacheControlAdapter, self).send(request, **kw)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/_vendor/requests/adapters.py", line 376, in send timeout=timeout</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 559, in urlopen body=body, headers=headers)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 345, in _make_request self._validate_conn(conn)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 784, in _validate_conn conn.connect()</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/_vendor/requests/packages/urllib3/connection.py", line 252, in connect ssl_version=resolved_ssl_version)</p> <p>File "/usr/local/lib/python2.7/site-packages/pip-8.1.1-py2.7.egg/pip/_vendor/requests/packages/urllib3/contrib/pyopenssl.py", line 296, in ssl_wrap_socket cnx.set_tlsext_host_name(server_hostname)</p> <p>File "build/bdist.linux-x86_64/egg/OpenSSL/SSL.py", line 423, in explode</p> <pre><code>raise NotImplementedError(error) NotImplementedError: SNI not available </code></pre> </blockquote> <p>I try Installing manually the packages pyOpenSSL, 
ndg-httpsclient and pyasn1, but the problem persists.</p> <p>My system is Red Hat 4.1.2 (server) with Python 2.7.3.</p> <p>Thanks in advance.</p>
0
2016-09-07T21:46:21Z
39,386,871
<p>Your OpenSSL (not pyOpenSSL) is too old; you need to install an OpenSSL version that supports SNI.</p>
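As a quick sanity check (a sketch of my own, not from the original answer), you can ask the standard library which OpenSSL this Python was built against and whether SNI is available. <code>ssl.HAS_SNI</code> only exists on Python 2.7.9+/3.2+, which is why <code>getattr</code> is used defensively here:

```python
import ssl

# The version string of the OpenSSL that this Python was compiled against
print("OpenSSL:", ssl.OPENSSL_VERSION)

# HAS_SNI is True when the underlying OpenSSL supports Server Name Indication
print("SNI supported:", getattr(ssl, "HAS_SNI", False))
```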
0
2016-09-08T09:07:38Z
[ "python", "pip", "sni" ]
Why does 11 not printed from a string in this Python script?
39,379,514
<p>There is a Python question like this:</p> <pre><code>&gt;&gt;&gt; import string &gt;&gt;&gt; s = ‘mary11had a little lamb’ &gt;&gt;&gt; print s mary had a little lamb </code></pre> <p>Actually when I try it myself, the result is not that, but:</p> <pre><code>mary11had a little lamb </code></pre> <p>Is there anything I don't know about Python that can make 11 disappear from a string?</p>
-2
2016-09-07T21:48:18Z
39,379,771
<p>As Padraic has pointed out in the comments, it looks like a leading backslash is missing before the 11, a minor typo in the question.</p> <p>So it should read:</p> <pre><code>&gt;&gt;&gt; import string &gt;&gt;&gt; s = ‘mary\11had a little lamb’ &gt;&gt;&gt; print s mary	had a little lamb </code></pre> <p>Python interprets \11 as the escape sequence for a numerically defined ASCII character, which in this case is horizontal tab (\t). Without further qualification, <code>\nnn</code> is assumed to be octal (as opposed to <code>\xnn</code>, which is interpreted as hexadecimal).</p> <p>So if you were to write just <code>&gt;&gt;&gt; s</code> in the REPL, you'd expect it to evaluate to <code>'mary\thad a little lamb'</code>.</p> <p>Same outcome if you entered: <code>'mary\x09had a little lamb'</code></p>
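The escape behaviour is easy to verify directly (a small check of my own, not part of the original answer): octal 11 is decimal 9, the ASCII code for horizontal tab.

```python
s = 'mary\11had a little lamb'
print(s)  # the \11 prints as a tab character

# \11 (octal) and \x09 (hexadecimal) both name the same character as \t
print('\11' == '\t')
print('\x09' == '\t')
```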
1
2016-09-07T22:10:46Z
[ "python", "string" ]
How do I generate a random list in Python with duplicates numbers
39,379,515
<p>So I just started programming in Python a few days ago. Now I'm trying to make a program that generates a random list and then picks out the duplicate elements. The problem is, I don't have any duplicate numbers in my list.</p> <p>This is my code:</p> <pre><code>import random def generar_listas (numeros, rango): lista = [random.sample(range(numeros), rango)] print("\n", lista, sep="") return def texto_1 (): texto = "Debes de establecer unos parámetros para generar dos listas aleatorias" print(texto) return texto_1() generar_listas(int(input("\nNumero maximo: ")), int(input("Longitud: "))) </code></pre> <p>For example, if I choose 20 and 20 for random.sample, it generates a list from 0 to 20 but in a random order. I want a list with random numbers that can contain duplicates.</p>
0
2016-09-07T21:48:18Z
39,379,622
<p>What you want is fairly simple. You want to generate a random list of numbers that contains some duplicates. The way to do that is easy if you use something like numpy.</p> <ul> <li>Generate a list (range) of 0 to 10. </li> <li>Sample randomly (with replacement) from that list.</li> </ul> <p><strong>Like this:</strong></p> <pre><code>import numpy as np print np.random.choice(10, 10, replace=True) </code></pre> <p><strong>Result:</strong></p> <pre><code>[5 4 8 7 0 8 7 3 0 0] </code></pre> <p>If you want the list to be ordered, just use the builtin function "sorted(list)":</p> <pre><code>sorted([5, 4, 8, 7, 0, 8, 7, 3, 0, 0]) [0, 0, 0, 3, 4, 5, 7, 7, 8, 8] </code></pre> <p>If you don't want to use numpy you can use the following:</p> <pre><code>import random print [random.choice(range(10)) for i in range(10)] [7, 3, 7, 4, 8, 0, 4, 0, 3, 7] </code></pre>
0
2016-09-07T21:57:37Z
[ "python", "list" ]
How do I generate a random list in Python with duplicates numbers
39,379,515
<p>So I just started programming in Python a few days ago. Now I'm trying to make a program that generates a random list and then picks out the duplicate elements. The problem is, I don't have any duplicate numbers in my list.</p> <p>This is my code:</p> <pre><code>import random def generar_listas (numeros, rango): lista = [random.sample(range(numeros), rango)] print("\n", lista, sep="") return def texto_1 (): texto = "Debes de establecer unos parámetros para generar dos listas aleatorias" print(texto) return texto_1() generar_listas(int(input("\nNumero maximo: ")), int(input("Longitud: "))) </code></pre> <p>For example, if I choose 20 and 20 for random.sample, it generates a list from 0 to 20 but in a random order. I want a list with random numbers that can contain duplicates.</p>
0
2016-09-07T21:48:18Z
39,379,745
<p>random.randrange is what you want.</p> <pre><code>&gt;&gt;&gt; [random.randrange(10) for i in range(5)] [3, 2, 2, 5, 7] </code></pre>
0
2016-09-07T22:08:36Z
[ "python", "list" ]
Twilio - how to handle no input on <gather>
39,379,541
<p>I'm building a simple Twilio (programmable voice) application using Python Flask and the <a href="https://www.twilio.com/docs/libraries/python" rel="nofollow">twilio-python helper library</a>. There are several steps to the voice menu, but the first asks the caller to enter a pin number.</p> <p>I am trying to work out the best practice for dealing with no caller input from a TwiML <code>&lt;Gather&gt;</code> verb in a DRY way. I've made a function <code>process_no_input_response</code> which receives and returns a Twilio <code>resp</code> object with appropriate <code>&lt;Say&gt;</code> messages depending on whether the maximum number of allowed retries has been reached. Code example is below.</p> <p>Is there a better way to handle these scenarios? Keen for any advice or feedback on this code.</p> <pre><code>def process_no_input_response(resp, endpoint, num_retries_allowed=3): """Handle cases where the caller does not respond to a `gather` command. Determines whether to output a 'please try again' message, or redirect to the hang-up process Inputs: resp -- A Twilio resp object endpoint -- the Flask endpoint num_retries_allowed -- Number of allowed tries before redirecting to the hang-up process Returns: Twilio resp object, with appropriate ('please try again' or redirect) syntax """ # Add initial caller message resp.say("Sorry, I did not hear a response.") session['num_retries_allowed'] = num_retries_allowed # Increment number of attempts if endpoint in session: session[endpoint] += 1 else: session[endpoint] = 1 if session[endpoint] &gt;= num_retries_allowed: # Reached maximum number of retries, so redirect to a message before hanging up resp.redirect(url=url_for('bye')) else: # Allow user to try again resp.say("Please try again.") resp.redirect(url=url_for(endpoint)) return resp @app.route('/', methods=['GET', 'POST']) def step_one(): """Entry point to respond to incoming requests.""" resp = twilio.twiml.Response() with resp.gather(numDigits=6, 
action="/post_step_one_logic", method="POST") as gather: gather.say("Hello. Welcome to my amazing telephone app! Please enter your pin.") return str(process_no_input_response(resp, request.endpoint)) @app.route('/bye', methods=['GET', 'POST']) def bye(): """Hangup after a number of failed input attempts.""" resp = twilio.twiml.Response() resp.say("You have reached the maximum number of retries allowed. Please hang up and try calling again.") resp.hangup() return str(resp) </code></pre>
1
2016-09-07T21:50:15Z
39,385,809
<p>Twilio developer evangelist here.</p> <p>That seems like a reasonable way to build this response. Redirecting round to play the <a href="https://www.twilio.com/docs/api/twiml/gather" rel="nofollow"><code>&lt;Gather&gt;</code></a> again until you reach a maximum number of attempts is a great way of dealing with this situation (and one I can't think of a better way to deal with right now). Indeed, it is effectively the solution suggested in the <a href="https://www.twilio.com/docs/api/twiml/gather#hints" rel="nofollow">Advanced use section of the documentation</a>.</p>
0
2016-09-08T08:13:36Z
[ "python", "flask", "twilio", "twiml" ]
Twilio - how to handle no input on <gather>
39,379,541
<p>I'm building a simple Twilio (programmable voice) application using Python Flask and the <a href="https://www.twilio.com/docs/libraries/python" rel="nofollow">twilio-python helper library</a>. There are several steps to the voice menu, but the first asks the caller to enter a pin number.</p> <p>I am trying to work out the best practice for dealing with no caller input from a TwiML <code>&lt;Gather&gt;</code> verb in a DRY way. I've made a function <code>process_no_input_response</code> which receives and returns a Twilio <code>resp</code> object with appropriate <code>&lt;Say&gt;</code> messages depending on whether the maximum number of allowed retries has been reached. Code example is below.</p> <p>Is there a better way to handle these scenarios? Keen for any advice or feedback on this code.</p> <pre><code>def process_no_input_response(resp, endpoint, num_retries_allowed=3): """Handle cases where the caller does not respond to a `gather` command. Determines whether to output a 'please try again' message, or redirect to the hang-up process Inputs: resp -- A Twilio resp object endpoint -- the Flask endpoint num_retries_allowed -- Number of allowed tries before redirecting to the hang-up process Returns: Twilio resp object, with appropriate ('please try again' or redirect) syntax """ # Add initial caller message resp.say("Sorry, I did not hear a response.") session['num_retries_allowed'] = num_retries_allowed # Increment number of attempts if endpoint in session: session[endpoint] += 1 else: session[endpoint] = 1 if session[endpoint] &gt;= num_retries_allowed: # Reached maximum number of retries, so redirect to a message before hanging up resp.redirect(url=url_for('bye')) else: # Allow user to try again resp.say("Please try again.") resp.redirect(url=url_for(endpoint)) return resp @app.route('/', methods=['GET', 'POST']) def step_one(): """Entry point to respond to incoming requests.""" resp = twilio.twiml.Response() with resp.gather(numDigits=6, 
action="/post_step_one_logic", method="POST") as gather: gather.say("Hello. Welcome to my amazing telephone app! Please enter your pin.") return str(process_no_input_response(resp, request.endpoint)) @app.route('/bye', methods=['GET', 'POST']) def bye(): """Hangup after a number of failed input attempts.""" resp = twilio.twiml.Response() resp.say("You have reached the maximum number of retries allowed. Please hang up and try calling again.") resp.hangup() return str(resp) </code></pre>
1
2016-09-07T21:50:15Z
39,399,809
<p>I actually found a tidier way to do this, without the need for a helper function. In this approach, all the timeout logic (i.e. 'Please try again' or hang up) lives in a <code>/timeout</code> endpoint.</p> <p>This appears to be the implied suggestion for timeouts, having seen the example in the <a href="https://www.twilio.com/docs/api/twiml/gather#hints" rel="nofollow">Advanced use section of the Twilio documentation</a>.</p> <pre><code>@app.route('/', methods=['GET', 'POST']) def step_one(): """Entry point to respond to incoming requests.""" resp = twilio.twiml.Response() with resp.gather(numDigits=6, action="/post_step_one_logic", method="POST") as gather: gather.say("Hello. Welcome to my amazing telephone app! Please enter your pin.") resp.redirect(url=url_for('timeout', source=request.endpoint)) return str(resp) @app.route('/timeout', methods=['GET', 'POST']) def timeout(): """Determines whether to output a 'please try again' message, or if they should be cut off after a number of (i.e. 3) failed input attempts. Should include 'source' as part of the GET payload. """ # Get source of the timeout source = request.args.get('source') # Add initial caller message resp = twilio.twiml.Response() resp.say("Sorry, I did not hear a response.") # Increment number of attempts if source in session: session[source] += 1 else: session[source] = 1 # Logic to determine if user should be cut off, or given another chance if session[source] &gt;= 3: # Reached maximum number of retries, so redirect to a message before hanging up resp.say(""" You have reached the maximum number of retries allowed. Please hang up and try calling again. """) resp.hangup() else: # Allow user to try again resp.say("Please try again.") resp.redirect(url=url_for(source)) return str(resp) </code></pre>
1
2016-09-08T20:38:14Z
[ "python", "flask", "twilio", "twiml" ]
How to Solve Numba Lowering error?
39,379,556
<p>I have a function, which I am trying to speed up using the @jit decorator from Numba module. For me it is essential to speed this up as much as possible, because my main code calls upon this function for millions of times. Here is my function:</p> <pre><code>from numba import jit, types import Sweep #My own module, works fine @jit(types.Tuple((types.complex128[:], types.float64[:]))(types.complex128[:], types.complex128[:], types.float64[:], types.float64[:], types.float64)) def MultiModeSL(Ef, Ef2, Nf, u, tijd ): dEdt= np.zeros(nrModes, dtype=np.complex128) dNdt0= np.zeros(nrMoments, dtype=np.complex128) Efcon = np.conjugate(Ef) for j in range(nrModes): for n in range(nrMoments): dEdt += 0.5 * CMx[:,j,n,0] * dg * (1+ A*1j) * Nf[n] * Ef[j] * np.exp( 1j* (Sweep.omega[j]-Sweep.omega) *tijd) for k in range(nrModes): if n==0: dNdt0 += g* CMx[j, k, 0,:] * Efcon[j] * Ef[k] * np.exp( 1j* (Sweep.omega[k]-Sweep.omega[j]) *tijd) dNdt0 += dg*(1+A*1j) * CMx[j,k,n,:] * Nf[n] * Efcon[j] * Ef[k] * np.exp( 1j* (Sweep.omega[k]-Sweep.omega[j]) *tijd) dEdt += - 0.5*(pd-g)*Ef + fbr*Ef2 + Kinj*EAinj*(1 + np.exp(1j*(u+Vmzm)) ) dNdt = Sweep.Jn - Nf*ed - dNdt0.real return dEdt, dNdt </code></pre> <p>The function works perfectly well, without the Jit decorator. However, when I run it with the @jit, I get this error:</p> <pre><code>numba.errors.LoweringError: Failed at object (object mode frontend) Failed at object (object mode backend) dEdt.1 File "Functions.py", line 82 [1] During: lowering "$237 = call $236(Ef, Ef2, Efcon, Nf, dEdt.1, dNdt0, tijd, u)" at /home/humblebee/MEGA/GUI RC/General_Formula/Functions.py (82) </code></pre> <p>Line 82 corresponds to the For loop with j as iterator. </p> <p>Can you help me out?</p> <p><strong><em>EDIT:</em></strong> Based on Peter's suggestion and combining it with Einsum, I was able to remove the loops. This made my function <strong>3</strong> times faster. 
Here is the new code:</p> <pre><code>def MultiModeSL(Ef, Ef2, Nf, u, tijd ): dEdt= np.zeros(nrModes, dtype=np.complex128) dNdt0= np.zeros(nrMoments, dtype=np.complex128) Efcon = np.conjugate(Ef) dEdt = 0.5* np.einsum("k, jkm, mk, kj -&gt; j", dg*(1+A*1j), CMx[:, :, :, 0], (Ef[:] * Nf[:, None] ), np.exp( 1j* (OMEGA[:, None]-OMEGA) *tijd)) dEdt += - 0.5*(pd-g)*Ef + fbr*Ef2 + Kinj*EAinj*(1 + np.exp(1j*(u+Vmzm)) ) dNdt = - np.einsum("j, jkm, jk, kj ", g, CMx[:,:,:,0], (Ef*Efcon[:,None]), np.exp( 1j* (OMEGA[:, None]-OMEGA) *tijd)) dNdt += -np.einsum("j, j, jknm, kjm, kj",dg, (1+A*1j), CMx, (Nf[:]*Efcon[:,None]*Ef[:,None,None]), np.exp( 1j* (OMEGA[:, None]-OMEGA) *tijd) ) dNdt += JN - Nf*ed return dNdt </code></pre> <p>Can you suggest more techniques to speed this up?</p>
0
2016-09-07T21:51:33Z
39,387,886
<p>I can't see from your code why this isn't vectorizable. Vectorizing can speed up this kind of Python code by around 100x. Not sure how it does relative to jit.</p> <p>It looks like you could, for instance, take your dEdt out of the loop, and compute it in one step with something like:</p> <pre><code>dEdt = 0.5 * (CMx[:, :, :, 0] * dg * (1+A*1j) * Nf[:] * Ef[:, None] * np.exp( 1j* (Sweep.omega[None, :, None, None]-Sweep.omega) *tijd)).sum(axis=2).sum(axis=1) - 0.5*(pd-g)*Ef + fbr*Ef2 + Kinj*EAinj*(1 + np.exp(1j*(u+Vmzm)) ) </code></pre> <p>(Though I don't really know what the dimensionality of your Sweep.omega is.)</p>
0
2016-09-08T09:55:59Z
[ "python", "numpy", "numba" ]
How to Solve Numba Lowering error?
39,379,556
<p>I have a function, which I am trying to speed up using the @jit decorator from Numba module. For me it is essential to speed this up as much as possible, because my main code calls upon this function for millions of times. Here is my function:</p> <pre><code>from numba import jit, types import Sweep #My own module, works fine @jit(types.Tuple((types.complex128[:], types.float64[:]))(types.complex128[:], types.complex128[:], types.float64[:], types.float64[:], types.float64)) def MultiModeSL(Ef, Ef2, Nf, u, tijd ): dEdt= np.zeros(nrModes, dtype=np.complex128) dNdt0= np.zeros(nrMoments, dtype=np.complex128) Efcon = np.conjugate(Ef) for j in range(nrModes): for n in range(nrMoments): dEdt += 0.5 * CMx[:,j,n,0] * dg * (1+ A*1j) * Nf[n] * Ef[j] * np.exp( 1j* (Sweep.omega[j]-Sweep.omega) *tijd) for k in range(nrModes): if n==0: dNdt0 += g* CMx[j, k, 0,:] * Efcon[j] * Ef[k] * np.exp( 1j* (Sweep.omega[k]-Sweep.omega[j]) *tijd) dNdt0 += dg*(1+A*1j) * CMx[j,k,n,:] * Nf[n] * Efcon[j] * Ef[k] * np.exp( 1j* (Sweep.omega[k]-Sweep.omega[j]) *tijd) dEdt += - 0.5*(pd-g)*Ef + fbr*Ef2 + Kinj*EAinj*(1 + np.exp(1j*(u+Vmzm)) ) dNdt = Sweep.Jn - Nf*ed - dNdt0.real return dEdt, dNdt </code></pre> <p>The function works perfectly well, without the Jit decorator. However, when I run it with the @jit, I get this error:</p> <pre><code>numba.errors.LoweringError: Failed at object (object mode frontend) Failed at object (object mode backend) dEdt.1 File "Functions.py", line 82 [1] During: lowering "$237 = call $236(Ef, Ef2, Efcon, Nf, dEdt.1, dNdt0, tijd, u)" at /home/humblebee/MEGA/GUI RC/General_Formula/Functions.py (82) </code></pre> <p>Line 82 corresponds to the For loop with j as iterator. </p> <p>Can you help me out?</p> <p><strong><em>EDIT:</em></strong> Based on Peter's suggestion and combining it with Einsum, I was able to remove the loops. This made my function <strong>3</strong> times faster. 
Here is the new code:</p> <pre><code>def MultiModeSL(Ef, Ef2, Nf, u, tijd ): dEdt= np.zeros(nrModes, dtype=np.complex128) dNdt0= np.zeros(nrMoments, dtype=np.complex128) Efcon = np.conjugate(Ef) dEdt = 0.5* np.einsum("k, jkm, mk, kj -&gt; j", dg*(1+A*1j), CMx[:, :, :, 0], (Ef[:] * Nf[:, None] ), np.exp( 1j* (OMEGA[:, None]-OMEGA) *tijd)) dEdt += - 0.5*(pd-g)*Ef + fbr*Ef2 + Kinj*EAinj*(1 + np.exp(1j*(u+Vmzm)) ) dNdt = - np.einsum("j, jkm, jk, kj ", g, CMx[:,:,:,0], (Ef*Efcon[:,None]), np.exp( 1j* (OMEGA[:, None]-OMEGA) *tijd)) dNdt += -np.einsum("j, j, jknm, kjm, kj",dg, (1+A*1j), CMx, (Nf[:]*Efcon[:,None]*Ef[:,None,None]), np.exp( 1j* (OMEGA[:, None]-OMEGA) *tijd) ) dNdt += JN - Nf*ed return dNdt </code></pre> <p>Can you suggest more techniques to speed this up?</p>
0
2016-09-07T21:51:33Z
39,389,193
<p>There may be other issues, but one is that referencing an array in a module namespace seems to currently be unsupported (simple repro below). Try importing <code>omega</code> as a name.</p> <pre><code>In [14]: %%file Sweep.py ...: import numpy as np ...: constant_val = 0.5 ...: constant_arr = np.array([0, 1.5, 2.]) Overwriting Sweep.py In [15]: Sweep.constant_val Out[15]: 0.5 In [16]: Sweep.constant_arr Out[16]: array([ 0. , 1.5, 2. ]) In [17]: @njit ...: def f(value): ...: return value + Sweep.constant_val ...: In [18]: f(100) Out[18]: 100.5 In [19]: @njit ...: def f(value): ...: return value + Sweep.constant_arr[0] In [20]: f(100) LoweringError: Failed at nopython (nopython mode backend) 'NoneType' object has no attribute 'module' File "&lt;ipython-input-19-0a259ade6b9e&gt;", line 3 [1] During: lowering "$0.3 = getattr(value=$0.2, attr=constant_arr)" at &lt;ipython-input-19-0a259ade6b9e&gt; (3) </code></pre>
0
2016-09-08T11:00:09Z
[ "python", "numpy", "numba" ]
How can I use regular expression when reading a text file in Python?
39,379,616
<p>I would like to give you an example. I am trying to print the lines that contain the integer <code>-9999</code> from a file.</p> <pre><code>19940325 78 -28 -9999 19940326 50 17 102 19940327 100 -11 -9999 19940328 56 -33 0 19940329 61 -39 -9999 19940330 61 -56 0 19940331 139 -61 -9999 19940401 211 6 0 </code></pre> <p>Here is my code, which uses a regex to read the text file, scan for the integer <code>-9999</code>, and print only the line(s) that contain that integer.</p> <pre><code>import re file= open("USC00110072.txt", "r") for line in file.readlines(): if re.search('^-9999$', line, re.I): print line </code></pre> <p>My code runs without error but doesn't show anything in the output. Please let me know what mistake I have made.</p>
0
2016-09-07T21:57:07Z
39,379,635
<p>Regex is likely overkill for this; a simple substring check using the <code>in</code> operator seems sufficient:</p> <pre><code>with open("USC00110072.txt") as f: for line in f: if '-9999' in line: print(line) </code></pre> <p>Or, if you're concerned about matching it as a "whole word", you can do a little more to divide up the values:</p> <pre><code>with open("USC00110072.txt") as f: for line in f: if '-9999' in line.strip().split('\t'): print(line) </code></pre>
3
2016-09-07T21:58:53Z
[ "python", "python-2.7", "python-3.x" ]
How can I use regular expression when reading a text file in Python?
39,379,616
<p>I would like to give you an example. I am trying to print the lines that contain the integer <code>-9999</code> from a file.</p> <pre><code>19940325 78 -28 -9999 19940326 50 17 102 19940327 100 -11 -9999 19940328 56 -33 0 19940329 61 -39 -9999 19940330 61 -56 0 19940331 139 -61 -9999 19940401 211 6 0 </code></pre> <p>Here is my code, which uses a regex to read the text file, scan for the integer <code>-9999</code>, and print only the line(s) that contain that integer.</p> <pre><code>import re file= open("USC00110072.txt", "r") for line in file.readlines(): if re.search('^-9999$', line, re.I): print line </code></pre> <p>My code runs without error but doesn't show anything in the output. Please let me know what mistake I have made.</p>
0
2016-09-07T21:57:07Z
39,379,719
<p>You can use <code>filter</code>:</p> <pre><code>with open(fn) as f: print filter(lambda line: '-9999' in line.split()[-1], f) </code></pre> <p>This will check whether '-9999' is in the final column of the line.</p> <p>If you want to use a regex:</p> <pre><code>with open(fn) as f: for line in f: if re.search(r'-9999$', line): # remove $ if the -9999 can be anywhere in the line print line.strip() </code></pre> <p>The <code>^</code> you have will never match except for a line that only contains <code>-9999</code> and nothing else. The <code>^</code> indicates the start of the line. </p> <p>Or, just use <code>in</code> to test the presence of the string:</p> <pre><code>with open(fn) as f: for line in f: if '-9999' in line: print line.strip() </code></pre>
1
2016-09-07T22:06:06Z
[ "python", "python-2.7", "python-3.x" ]
How can I use regular expression when reading a text file in Python?
39,379,616
<p>I would like to give you an example. I am trying to print the lines that contain the integer <code>-9999</code> from a file.</p> <pre><code>19940325 78 -28 -9999 19940326 50 17 102 19940327 100 -11 -9999 19940328 56 -33 0 19940329 61 -39 -9999 19940330 61 -56 0 19940331 139 -61 -9999 19940401 211 6 0 </code></pre> <p>Here is my code, which uses a regex to read the text file, scan for the integer <code>-9999</code>, and print only the line(s) that contain that integer.</p> <pre><code>import re file= open("USC00110072.txt", "r") for line in file.readlines(): if re.search('^-9999$', line, re.I): print line </code></pre> <p>My code runs without error but doesn't show anything in the output. Please let me know what mistake I have made.</p>
0
2016-09-07T21:57:07Z
39,379,742
<p>Alternatively, since you have a <code>csv</code> file you could use the <code>csv</code> module:</p> <pre><code>import csv import io file = io.StringIO(u''' 19940325\t78\t-28\t-9999 19940326\t50\t17\t102 19940327\t100\t-11\t-9999 19940328\t56\t-33\t0 19940329\t61\t-39\t-9999 19940330\t61\t-56\t0 19940331\t139\t-61\t-9999 19940401\t211\t6\t0 '''.strip()) reader = csv.reader(file, delimiter='\t') for row in reader: if row[-1] == '-9999': # or, for regex, `re.match(r'^-9999$', row[-1])` print('\t'.join(row)) </code></pre>
1
2016-09-07T22:08:06Z
[ "python", "python-2.7", "python-3.x" ]
Why am I getting an empty list as my output using solve command?
39,379,624
<p>I am trying to solve an equation using sympy's solve command but my output is an empty list [ ]. The only reason I think this could be happening is because there is no solution but I doubt that is the reason. Anyone know why I am not getting an answer? Thanks!</p> <pre><code>from sympy import * class WaterModel: def fp_requirement(self, Ws0, Wp0, Wg0): greyW = 60.0 potW = 126.0 rainW = 17.05 self.Ws0 = Ws0 self.Wp0 = Wp0 self.Wg0 = Wg0 self.fp = var('fp') filt_greyW = self.fp*greyW dWg = self.Wg0 - greyW + (1 - self.fp)*greyW + rainW dWp = self.Wp0 - potW + filt_greyW F = self.Ws0 + dWp + dWg self.fp = solve(F,self.fp) return self.fp a = WaterModel() fp = a.fp_requirement(1500, 100, 100) print(fp) </code></pre>
3
2016-09-07T21:57:40Z
39,379,789
<p>I tried adding a few tracing statements to your function, just above the call to <strong>solve</strong>.</p> <pre><code> print "dWg\t", dWg, type(dWg) print "dWp\t", dWp, type(dWp) print "F\t", F, type(F) print "self.fp\t", self.fp self.fp = solve(F,self.fp) </code></pre> <p>Output:</p> <pre><code>dWg 117.050000000000 - 60.0000000000000*fp &lt;class 'sympy.core.add.Add'&gt; dWp -26.0000000000000 + 60.0000000000000*fp &lt;class 'sympy.core.add.Add'&gt; F 1591.05000000000 &lt;class 'sympy.core.numbers.Real'&gt; self.fp fp [] </code></pre> <p>If I'm reading this correctly, your function evaluates the expressions rather than maintaining the symbolic nature of <strong>F</strong>. Thus, when you issue the <strong>solve</strong> directive, you're trying to solve a constant for the variable <strong>fp</strong>. That's why you get no solutions.</p> <hr> <p>Ah, ha! There it is! </p> <pre><code>1500 + 117 - 26 - 60*fp + 60*fp =&gt; 1591 </code></pre> <p>With <strong>fp</strong> out of the equation, there are no solutions.</p>
3
2016-09-07T22:12:57Z
[ "python", "sympy", "solver", "is-empty" ]
Why am I getting an empty list as my output using solve command?
39,379,624
<p>I am trying to solve an equation using sympy's solve command but my output is an empty list [ ]. The only reason I think this could be happening is because there is no solution but I doubt that is the reason. Anyone know why I am not getting an answer? Thanks!</p> <pre><code>from sympy import * class WaterModel: def fp_requirement(self, Ws0, Wp0, Wg0): greyW = 60.0 potW = 126.0 rainW = 17.05 self.Ws0 = Ws0 self.Wp0 = Wp0 self.Wg0 = Wg0 self.fp = var('fp') filt_greyW = self.fp*greyW dWg = self.Wg0 - greyW + (1 - self.fp)*greyW + rainW dWp = self.Wp0 - potW + filt_greyW F = self.Ws0 + dWp + dWg self.fp = solve(F,self.fp) return self.fp a = WaterModel() fp = a.fp_requirement(1500, 100, 100) print(fp) </code></pre>
3
2016-09-07T21:57:40Z
39,379,851
<p><code>self.fp</code> cancels out of your computation. The value of <code>F</code> is the same no matter what <code>self.fp</code> is, so <code>solve</code> can't help you.</p> <p>Are you sure you have your equations right?</p>
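You can see the cancellation numerically with a plain-Python version of the expression (a sketch of my own, using the constants from the question): whatever value <code>fp</code> takes, <code>F</code> evaluates to the same constant, so there is nothing for <code>solve</code> to find.

```python
def F(fp):
    # constants and inputs from the question
    greyW, potW, rainW = 60.0, 126.0, 17.05
    Ws0, Wp0, Wg0 = 1500, 100, 100
    dWg = Wg0 - greyW + (1 - fp) * greyW + rainW  # contributes -60*fp
    dWp = Wp0 - potW + fp * greyW                 # contributes +60*fp
    return Ws0 + dWp + dWg                        # the fp terms cancel

print(F(0.0), F(0.5), F(123.0))  # all the same constant
```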
2
2016-09-07T22:18:48Z
[ "python", "sympy", "solver", "is-empty" ]
How to extract output of specific columns from groupby on pandas dataframe
39,379,634
<p><em>Need to extract output of groupBy pandas dataframe into expected output described below and write to a file:</em></p> <p><strong><em>Input file testdata.txt:</em></strong></p> <pre><code>id, distance 1,0.5 1,1.2 1,0.2 &lt;------------this row should be selected 1,1.5 2,2.5 2,0.5 &lt;------------this row should be selected 2,1.0 2,3.0 </code></pre> <p><strong><em>Expected output:</em></strong> <em>Find row with shortest distance for each id</em></p> <pre><code>1 0.2 2 0.5 </code></pre> <p><strong><em>Actual Output looks as follows: Need to process this output to get expected output described above:</em></strong></p> <pre><code>_1 1 2 1 0.2 2 5 2 0.5 </code></pre> <p><strong><em>My Python Script:</em></strong></p> <pre><code>lines = sc.textFile("file:///data/testdata.txt") #RDD to Spark DataFrame sparkDF = lines.map(lambda x: str(x)).map(lambda w: w.split(',')).toDF() #Spark DataFrame to Pandas DataFrame pdsDF = sparkDF.toPandas() #print dataframe schema print (pdsDF) StoreIdGroups = pdsDF.groupby(by=['_1']) result = StoreIdGroups.apply(lambda g: g[g['_2'] == g['_2'].min()]) print(result) </code></pre> <p>Any help is appreciated. Thanks</p>
0
2016-09-07T21:58:53Z
39,386,961
<p>This is just a simple case of using the <code>.min()</code> method of a <code>DataFrameGroupBy</code> object:</p> <pre><code>import pandas as pd data = [(1,0.5),(1,1.2),(1,0.2),(1,1.5),(2,2.5),(2,0.5),(2,1.0),(2,3.0)] df = pd.DataFrame(data) df.columns = ['id', 'distance'] results = df.groupby('id').min() </code></pre> <p><code>results</code> has the data you are looking for. If you want some nice printing, try:</p> <pre><code>for i in results.itertuples(): print(*tuple(i)) </code></pre>
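If you need the whole row with the shortest distance rather than the per-column minimum, <code>idxmin</code> is a common alternative. This sketch also casts <code>distance</code> to float, which matters in the original code because values read from a text file via <code>split(',')</code> are strings:

```python
import pandas as pd

df = pd.DataFrame(
    [(1, '0.5'), (1, '1.2'), (1, '0.2'), (1, '1.5'),
     (2, '2.5'), (2, '0.5'), (2, '1.0'), (2, '3.0')],
    columns=['id', 'distance'])
df['distance'] = df['distance'].astype(float)  # strings would compare lexically

# idxmin() gives the index label of each group's minimum row,
# and .loc recovers the full original rows
shortest = df.loc[df.groupby('id')['distance'].idxmin()]
print(shortest)
```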
0
2016-09-08T09:12:35Z
[ "python", "pandas", "apache-spark", "lambda" ]
Getting all HTML from requests.get()
39,379,646
<p>I just started with web scraping with Python and hit the wall. I am using the requests library to get the HTML code from a website. For example, the Google search result website: "<a href="https://www.google.com/?gws_rd=ssl#q=ball" rel="nofollow">https://www.google.com/?gws_rd=ssl#q=ball</a>"</p> <p>When I hit <kbd>F12</kbd> and check the HTML, it looks different than with:</p> <pre><code>site = requests.get("https://www.google.com/?gws_rd=ssl#q=ball") print(site.text) </code></pre> <p>with <code>requests.get</code>, text is much shorter and not all information is visible (it starts with <code>!doctype</code>, however). Because of that I am unable to work with this HTML.</p> <p>Can you tell me where the mistake is?</p> <hr> <p>This is actually an exercise from the book "Automate the boring stuff with Python". The task is to search for some item Google and then find few first results with HTML locators. I cannot do it because when I use <code>requests.get()</code> I cannot see any objects for links in the HTML code.</p>
-1
2016-09-07T21:59:29Z
39,379,705
<p>Some HTML elements are generated by JavaScript after the page loads.</p> <p>Use "View page source" in your browser to see the original markup. It should be similar to the Requests response text. </p>
-1
2016-09-07T22:04:28Z
[ "python", "html", "python-requests" ]
Getting all HTML from requests.get()
39,379,646
<p>I just started with web scraping with Python and hit the wall. I am using the requests library to get the HTML code from a website. For example, the Google search result website: "<a href="https://www.google.com/?gws_rd=ssl#q=ball" rel="nofollow">https://www.google.com/?gws_rd=ssl#q=ball</a>"</p> <p>When I hit <kbd>F12</kbd> and check the HTML, it looks different than with:</p> <pre><code>site = requests.get("https://www.google.com/?gws_rd=ssl#q=ball") print(site.text) </code></pre> <p>with <code>requests.get</code>, text is much shorter and not all information is visible (it starts with <code>!doctype</code>, however). Because of that I am unable to work with this HTML.</p> <p>Can you tell me where the mistake is?</p> <hr> <p>This is actually an exercise from the book "Automate the boring stuff with Python". The task is to search for some item Google and then find few first results with HTML locators. I cannot do it because when I use <code>requests.get()</code> I cannot see any objects for links in the HTML code.</p>
-1
2016-09-07T21:59:29Z
39,379,850
<p>The HTML you see using the browser's development tools is what the browser is currently working with. This includes any changes performed via Javascript. The data you are getting when using Requests is before any Javascript has operated on the page. (Note that Requests doesn't process Javascript so you will be unable to acquire a javascript processed page using just Requests.)</p> <p>If you're specifically looking to scrape Google Search, use a url like <a href="https://www.google.com/search?q=test" rel="nofollow">https://www.google.com/search?q=test</a>. This particular url is for Google's non-javascript site. Keep in mind that Google (and most other sites) doesn't appreciate scraping so you may run into other issues when doing so.</p>
1
2016-09-07T22:18:48Z
[ "python", "html", "python-requests" ]
Getting started with Coverity for python package
39,379,721
<p>I downloaded the Coverity package for Python/PHP and am trying to let it analyze my package:</p> <pre><code>./cov-build --dir cov-int --fs-capture-search /my/dir/ python mine.py </code></pre> <p>considering that '/my/dir/' contains the package's root directory and 'mine.py' implements the entry point.</p> <p>Then I get this result:</p> <pre><code>command line: No input files. [STATUS] Running filesystem capture search... [STATUS] Emitting 485 source files from filesystem capture |0----------25-----------50----------75---------100| **************************************************** [WARNING] Build command python /tmp/trunk/quex-exe.py exited with code 255. Please verify that the build completed successfully. </code></pre> <p>It is not clear to me what Coverity means by 'build'. Does it mean a sample invocation of the script? How can I get started?</p> <p>Most of the 'help' files in the 'doc/' subdirectory are empty(!)</p>
1
2016-09-07T22:06:10Z
39,394,501
<pre><code>[WARNING] Build command python /tmp/trunk/quex-exe.py exited with code 255. Please verify that the build completed successfully. </code></pre> <p>This means that the command you specified on the cov-build line (<code>python mine.py</code>) didn't exit with zero. Probably because of this:</p> <pre><code>command line: No input files. </code></pre> <p>But in any case, it looks like cov-build <em>did</em> successfully capture 485 source files, so it's possible you don't need to use a build command at all. In that case, you can specify the <code>--no-command</code> switch and omit <code>python mine.py</code>, at which point you can proceed with the rest of the workflow.</p>
2
2016-09-08T15:09:30Z
[ "python", "coverity" ]
Python is using the wrong version of Numpy module
39,379,728
<p>I'm trying to use Numpy 1.11.1 for Python 2.7. I have Mac El Capitan, so <code>sudo pip install</code> doesn't work.</p> <p>I decided to install Homebrew and do <code>brew install python</code> and that worked. If I use <code>pip show numpy</code> it shows that I have Numpy 1.11.1 now.</p> <p>But if I run <code>python -c 'import numpy; print numpy.version.version'</code> I still get <code>1.8.0rc1</code> which is the old version I was trying to upgrade!</p> <p>How do I use the correct numpy module? I would like to do this in a way that doesn't require adding in a line to the python scripts that call numpy, but if that's the only way then I'll do it.</p> <p>info:</p> <pre><code>which pip /Library/Frameworks/Python.framework/Versions/3.5/bin/pip which pip /Library/Frameworks/Python.framework/Versions/3.5/bin/pip which pip2 /usr/local/bin/pip2 which pip3 /Library/Frameworks/Python.framework/Versions/3.5/bin/pip3 which python /usr/bin/python which python2 which python2.7 /usr/bin/python2.7 which python3 /Library/Frameworks/Python.framework/Versions/3.5/bin/python3 which python3.5 /Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5 </code></pre>
0
2016-09-07T22:06:47Z
39,379,833
<p>It's better to use a <strong><a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtualenv</a></strong> to install the required libraries version. Don't pollute your system Python.</p> <p>It will solve your problem…</p> <pre><code>mkdir $HOME/virtualenv cd $HOME/virtualenv virtualenv my_app source my_app/bin/activate pip install the_lib==x.y.z </code></pre> <p>Where <em>the_lib</em> is <strong>numpy</strong> and <em>x.y.z</em> is the version <strong>1.11.1</strong>.</p>
0
2016-09-07T22:17:19Z
[ "python", "python-2.7", "numpy", "version" ]
How to run a def with multiple values only once and use values into another def?
39,379,735
<p>My problem is real simple but unfortunately i can't find a way to solve it.</p> <p>I would like to run a def A which returns multiple values from def B only once.</p> <p>I wrote this block of code :</p> <pre><code>def A(): x = 1 y = 2 z = 3 return a,b,c def B(): d = A[0] + A[1] e = A[2] - A[0] print d,e B() </code></pre> <p>If i'm using this block of code it will run four times!</p> <p>Thanks in advance.</p>
1
2016-09-07T22:07:36Z
39,379,769
<p>There are mistakes in your code: <code>A</code> is a function, so you should call it with <code>A()</code> to get the returned values. Instead you use it like <code>A[0]</code>, but a function object is not subscriptable. Assign the returned tuple to a temporary variable inside <code>B</code> so you can re-use the values; the change below should work:</p> <pre><code>In [54]: def A(): ...: x = 1 ...: y = 2 ...: z = 3 ...: return x,y,z # I fixed your typo too ...: ...: def B(): ...: a = A() ...: d = a[0] + a[1] ...: e = a[2] - a[0] ...: print(d,e) ...: ...: B() 3 2 </code></pre>
1
2016-09-07T22:10:16Z
[ "python", "return" ]
How to run a def with multiple values only once and use values into another def?
39,379,735
<p>My problem is real simple but unfortunately i can't find a way to solve it.</p> <p>I would like to run a def A which returns multiple values from def B only once.</p> <p>I wrote this block of code :</p> <pre><code>def A(): x = 1 y = 2 z = 3 return a,b,c def B(): d = A[0] + A[1] e = A[2] - A[0] print d,e B() </code></pre> <p>If i'm using this block of code it will run four times!</p> <p>Thanks in advance.</p>
1
2016-09-07T22:07:36Z
39,379,781
<p>I would do this: </p> <pre><code>def B(): a, b, c = A() d = a + b e = c - a print d,e </code></pre> <p>But make sure your function <code>A()</code> is returning the same variables it is declaring (<code>x, y, z</code> not <code>a, b, c</code>).</p>
1
2016-09-07T22:11:45Z
[ "python", "return" ]
Pyspark: How to transform json strings in a dataframe column
39,379,775
<p>The following is more or less straight python code which functionally extracts exactly as I want. The data schema for the column I'm filtering out within the dataframe is basically a json string. </p> <p>However, I had to greatly bump up the memory requirement for this and I'm only running on a single node. Using a collect is probably bad and creating all of this on a single node really isn't taking advantage of the distributed nature of Spark.</p> <p>I'd like a more Spark centric solution. Can anyone help me massage the logic below to better take advantage of Spark? Also, as a learning point: please provide an explanation for why/how the updates make it better.</p> <p></p> <pre><code>#!/usr/bin/env python # -*- coding: utf-8 -*- import json from pyspark.sql.types import SchemaStruct, SchemaField, StringType input_schema = SchemaStruct([ SchemaField('scrubbed_col_name', StringType(), nullable=True) ]) output_schema = SchemaStruct([ SchemaField('val01_field_name', StringType(), nullable=True), SchemaField('val02_field_name', StringType(), nullable=True) ]) example_input = [ '''[{"val01_field_name": "val01_a", "val02_field_name": "val02_a"}, {"val01_field_name": "val01_a", "val02_field_name": "val02_b"}, {"val01_field_name": "val01_b", "val02_field_name": "val02_c"}]''', '''[{"val01_field_name": "val01_c", "val02_field_name": "val02_a"}]''', '''[{"val01_field_name": "val01_a", "val02_field_name": "val02_d"}]''', ] desired_output = { 'val01_a': ['val_02_a', 'val_02_b', 'val_02_d'], 'val01_b': ['val_02_c'], 'val01_c': ['val_02_a'], } def capture(dataframe): # Capture column from data frame if it's not empty data = dataframe.filter('scrubbed_col_name != null')\ .select('scrubbed_col_name')\ .rdd\ .collect() # Create a mapping of val1: list(val2) mapping = {} # For every row in the rdd for row in data: # For each json_string within the row for json_string in row: # For each item within the json string for val in json.loads(json_string): # Extract the data properly 
val01 = val.get('val01_field_name') val02 = val.get('val02_field_name') if val02 not in mapping.get(val01, []): mapping.setdefault(val01, []).append(val02) return mapping </code></pre>
1
2016-09-07T22:11:19Z
39,380,110
<p>One possible solution:</p> <pre><code>(df .rdd # Convert to rdd .flatMap(lambda x: x) # Flatten rows # Parse JSON. In practice you should add proper exception handling .flatMap(lambda x: json.loads(x)) # Get values .map(lambda x: (x.get('val01_field_name'), x.get('val02_field_name'))) # Convert to final shape .groupByKey()) </code></pre> <p>Given output specification this operation is not exactly efficient (do you really require grouped values?) but still much better than <code>collect</code>.</p>
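To make the chain easier to follow, here is the same sequence of steps in plain Python (no Spark), run against the <code>example_input</code> from the question; the nested loops correspond to the two <code>flatMap</code> calls, and the dict of lists plays the role of <code>groupByKey</code>:

```python
import json
from collections import defaultdict

example_input = [
    '[{"val01_field_name": "val01_a", "val02_field_name": "val02_a"},'
    ' {"val01_field_name": "val01_a", "val02_field_name": "val02_b"},'
    ' {"val01_field_name": "val01_b", "val02_field_name": "val02_c"}]',
    '[{"val01_field_name": "val01_c", "val02_field_name": "val02_a"}]',
    '[{"val01_field_name": "val01_a", "val02_field_name": "val02_d"}]',
]

grouped = defaultdict(list)
for json_string in example_input:          # flatMap over the rows
    for item in json.loads(json_string):   # flatMap: parse each JSON array
        key = item.get('val01_field_name') # map to (key, value) pairs
        grouped[key].append(item.get('val02_field_name'))

print(dict(grouped))                       # groupByKey-style result
```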
2
2016-09-07T22:47:18Z
[ "python", "json", "apache-spark", "pyspark" ]
AWS g2.8xlarge performance and out of memory issues when using tensorflow
39,379,805
<p>I am currently training a recurrent net on Tensorflow for a text classification problem and am running into performance and out of memory issues. I am on AWS g2.8xlarge with Ubuntu 14.04, and a recent nightly build of tensorflow (which I downloaded on Aug 25).</p> <p>1) Performance issue:</p> <p>On the surface, both the CPU and GPU are highly under-utilized. I've run multiple tests on this (and have used line_profiler and memory_profiler in the process). Train durations scale linearly with number of epochs, so I tested with 1 epoch. For RNN config = 1 layer, 20 nodes, training time = 146 seconds.</p> <p>Incidentally, that number is about 20 seconds higher/slower than the same test run on a g2.2xlarge!</p> <p>Here is a snapshot of System Monitor and nvidia-smi (updated every 2 seconds) about 20 seconds into the run:</p> <p><a href="http://i.stack.imgur.com/znNZD.png" rel="nofollow">SnapshotEarlyPartOfRun</a></p> <p>As you can see, GPU utilization is at 19%. When I use nvprof, I find that the total GPU process time is about 27 seconds or so. Also, except for one vCPU, all others are very under-utilized. The numbers stay around this level, till the end of the epoch where I measure error across the entire training set, sending GPU utilization up to 45%.</p> <p>Unless I am missing something, on the surface, each device is sitting around waiting for something to happen.</p> <p>2) Out of memory issue:</p> <p>If I increase the number of nodes to 200, it gives me an Out of Memory error which happens on the GPU side. As you can see from the above snapshots, only one of the four GPUs is used. I've found that the way to get tensorflow to use the GPU has to do with how you assign the model. If you don't specify anything, tensorflow will assign it to a GPU. If you specify a GPU, only it will be used. Tensorflow does not like it when I assign it to multiple devices with a "for d in ['/gpu:0',...]". I get into an issue with re-using the embedding variable. 
I would like to use all 4 GPUs for this (without setting up distributed tensorflow). Here is the snapshot of the Out of memory error:</p> <p><a href="http://i.stack.imgur.com/tdHsr.png" rel="nofollow">OutofMemoryError</a></p> <p>Any suggestions you may have for both these problems would be greatly appreciated!</p>
0
2016-09-07T22:14:21Z
39,396,466
<p>Re (1), to improve GPU utilization did you try increasing the batch size and / or shortening the sequences you use for training?</p> <p>Re (2), to use multiple GPUs you do need to manually assign the ops to GPU devices, I believe. The right way is to place ops on specific GPUs by doing</p> <pre><code>with g.Device("/gpu:0"): ... with g.Device("/gpu:1"): ... </code></pre>
0
2016-09-08T16:53:49Z
[ "python", "tensorflow", "deep-learning", "recurrent-neural-network" ]
Unordered set or similar in Spark?
39,379,889
<p>I have data of this format:</p> <blockquote> <p>(123456, (43, 4861))</p> <p>(000456, (43, 4861))</p> </blockquote> <p>where the first term is the point id, where the second term is a pair, where its first id is a cluster-centroid and the second id is another cluster-centroid. So that says that point 123456 is assigned to clusters 43 and 4861.</p> <p>What I am trying to do is to create data of this format:</p> <blockquote> <p>(43, [123456, 000456])</p> <p>(4861, [123456, 000456])</p> </blockquote> <p>where the idea is that every centroid has a list of the points that are assigned to it. That list <em>must</em> be at max of length 150.</p> <p>Is there anything I could use in <a href="/questions/tagged/spark" class="post-tag" title="show questions tagged &#39;spark&#39;" rel="tag">spark</a> or <a href="/questions/tagged/python" class="post-tag" title="show questions tagged &#39;python&#39;" rel="tag">python</a> which would make my life easier?</p> <hr> <p>I do not care of fast access and order. I have 100m points and 16k centroids.</p> <hr> <p>Here is some artificial data that I use to play with:</p> <pre><code>data = [] from random import randint for i in xrange(0, 10): data.append((randint(0, 100000000), (randint(0, 16000), randint(0, 16000)))) data = sc.parallelize(data) </code></pre>
0
2016-09-07T22:23:55Z
39,380,113
<p>Judging from what you described (although I still don't quite get it), here is a naive approach using Python:</p> <pre><code>In [1]: from itertools import groupby In [2]: from random import randint In [3]: data = [] # create random samples as you did ...: for i in range(10): ...: data.append((randint(0, 100000000), (randint(0, 16000), randint(0, 16000)))) ...: In [4]: result = [] # create a intermediate list to transform your sample ...: for point_id, cluster in data: ...: for index, c in enumerate(cluster): # I made it up following your pattern ...: result.append((c, [point_id, str(index * 100).zfill(3) + str(point_id)[-3:]])) # sort the result by point_id as key for grouping ...: result = sorted(result, key=lambda x: x[1][0]) ...: In [5]: result[:3] Out[5]: [(4020, [5002188, '000188']), (10983, [5002188, '100188']), (10800, [24763401, '000401'])] In [6]: capped_result = [] # basically groupby sorted point_id and cap the list max at 150 ...: for _, g in groupby(result, key=lambda x: x[1][0]): ...: grouped = list(g)[:150] ...: capped_result.extend(grouped) # final result will be like ...: print(capped_result) ...: [(4020, [5002188, '000188']), (10983, [5002188, '100188']), (10800, [24763401, '000401']), (12965, [24763401, '100401']), (6369, [24924435, '000435']), (429, [24924435, '100435']), (7666, [39240078, '000078']), (2526, [39240078, '100078']), (5260, [47597265, '000265']), (7056, [47597265, '100265']), (2824, [60159219, '000219']), (5730, [60159219, '100219']), (7837, [67208338, '000338']), (12475, [67208338, '100338']), (4897, [80084812, '000812']), (13038, [80084812, '100812']), (2944, [80253323, '000323']), (1922, [80253323, '100323']), (12777, [96811112, '000112']), (5463, [96811112, '100112'])] </code></pre> <p>Of course this isn't optimised at all but will give you a head start how you can tackle this problem. I hope this helps.</p>
1
2016-09-07T22:47:37Z
[ "python", "algorithm", "apache-spark", "data-structures", "bigdata" ]
Pulling table data with mixed element types
39,379,903
<p>I am trying to pull data from a table using Python and Selenium; however, a few of the columns have a mix of gif and text. When I print the text element it returns the text along with blanks where the gif elements are within the column. However, when I print the gif elements, it returns all the gifs from the table (not just the column) without any blanks for the text fields. Any ideas how I can pull both element types from the column? Thanks.</p> <p>Table example:</p> <pre><code>&lt;td class="X"&gt; &lt;img src="/a/b/c/d.gif"&gt; &lt;td&gt; </code></pre> <p>and</p> <pre><code>&lt;td class="X"&gt; &lt;div class="default-value"&gt;Not Applicable&lt;/div&gt; &lt;/td&gt; </code></pre> <p>Code for text:</p> <pre><code>posts = driver.find_elements_by_class_name("x") for post in posts: print(post.text) </code></pre> <p>Code for gif:</p> <pre><code>for element in driver.find_elements_by_tag_name('img'): print(element.get_attribute("src")) </code></pre>
1
2016-09-07T22:24:43Z
39,379,931
<p>Find all <code>td</code> elements first, then, for every <code>td</code> decide if you want to get the text or the <code>src</code> attribute of the <code>img</code> element:</p> <pre><code>posts = driver.find_elements_by_css_selector("td.x") for post in posts: images = post.find_elements_by_tag_name("img") if images: print(images[0].get_attribute("src")) else: print(post.text) </code></pre>
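As a side note, the image-or-text dispatch itself can be checked offline by parsing the two <code>td</code> snippets from the question with BeautifulSoup (assuming bs4 is available; this is independent of Selenium and only illustrates the same branching logic):

```python
from bs4 import BeautifulSoup

html = """
<table><tr>
  <td class="X"><img src="/a/b/c/d.gif"></td>
  <td class="X"><div class="default-value">Not Applicable</div></td>
</tr></table>
"""

soup = BeautifulSoup(html, "html.parser")
cells = []
for td in soup.find_all("td", class_="X"):
    img = td.find("img")
    # prefer the image source when one is present, otherwise the cell text
    cells.append(img["src"] if img else td.get_text(strip=True))

print(cells)
```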
2
2016-09-07T22:27:53Z
[ "python", "selenium", "selenium-webdriver" ]
Reverse translation dictionary Python
39,379,907
<p>This is what I have ended up with after suggestions were made; it appears as if the Eng dictionary is identical to the Tuc one. This program will translate English words to Tuccin, but I cannot for the life of me get it to translate Tuccin to English. Please tell me how to achieve this. In the event a non-stored word is input, I have it set to just print the word itself. But I don't even manage to get the elif to trigger; it goes straight to the else condition if it's not a stored English word. </p> <pre><code>Tuc={"i":["o"],"love":["wau"],"you":["uo"],"me":["ye"],"my":["yem"],"mine":["yeme"],"are":["sia"]} Eng = {t: e for t, e in Tuc.items()} print "ENG" print Eng print "TUC" print Tuc phrase=True reverseLookup = False while phrase == True: translation = str(raw_input("Enter content for translation.\n").lower()) input_list = translation.split() for word in input_list: #English to Tuccin if word in Tuc: print ("".join(Tuc[word]))+" *English&gt;&gt;Tuccin" #Tuccin to English elif word in Eng: print ("".join(Eng[word]))+" *Tuccin&gt;&gt;English" else: print word+" *Word Not Stored" </code></pre>
0
2016-09-07T22:24:55Z
39,379,938
<p>The way you have it currently set, <code>Eng</code> and <code>Tuc</code> are identical. It seems like what you want is </p> <pre><code>&gt;&gt;&gt; Eng = {e[0]: [t] for t, e in Tuc.items()} &gt;&gt;&gt; Eng {'yem': ['my'], 'ye': ['me'], 'uo': ['you'], 'o': ['i'], 'sia': 'are'], 'yeme': ['mine'], 'wau': ['love']} </code></pre> <p>As a side note, there's not really a need to make the values in the hash lists since they all contain a single string, but that's not a big deal.</p>
0
2016-09-07T22:28:52Z
[ "python", "dictionary", "translation" ]
Reverse translation dictionary Python
39,379,907
<p>This is what I have ended up with after suggestions were made; it appears as if the Eng dictionary is identical to the Tuc one. This program will translate English words to Tuccin, but I cannot for the life of me get it to translate Tuccin to English. Please tell me how to achieve this. In the event a non-stored word is input, I have it set to just print the word itself. But I don't even manage to get the elif to trigger; it goes straight to the else condition if it's not a stored English word. </p> <pre><code>Tuc={"i":["o"],"love":["wau"],"you":["uo"],"me":["ye"],"my":["yem"],"mine":["yeme"],"are":["sia"]} Eng = {t: e for t, e in Tuc.items()} print "ENG" print Eng print "TUC" print Tuc phrase=True reverseLookup = False while phrase == True: translation = str(raw_input("Enter content for translation.\n").lower()) input_list = translation.split() for word in input_list: #English to Tuccin if word in Tuc: print ("".join(Tuc[word]))+" *English&gt;&gt;Tuccin" #Tuccin to English elif word in Eng: print ("".join(Eng[word]))+" *Tuccin&gt;&gt;English" else: print word+" *Word Not Stored" </code></pre>
0
2016-09-07T22:24:55Z
39,379,941
<p>You have a typo up there:</p> <pre><code>Eng = {e[0]: [t] for t, e in Tuc.items()} </code></pre>
0
2016-09-07T22:28:59Z
[ "python", "dictionary", "translation" ]
Reverse translation dictionary Python
39,379,907
<p>This is what I have ended up with after suggestions were made; it appears as if the Eng dictionary is identical to the Tuc one. This program will translate English words to Tuccin, but I cannot for the life of me get it to translate Tuccin to English. Please tell me how to achieve this. In the event a non-stored word is input, I have it set to just print the word itself. But I don't even manage to get the elif to trigger; it goes straight to the else condition if it's not a stored English word. </p> <pre><code>Tuc={"i":["o"],"love":["wau"],"you":["uo"],"me":["ye"],"my":["yem"],"mine":["yeme"],"are":["sia"]} Eng = {t: e for t, e in Tuc.items()} print "ENG" print Eng print "TUC" print Tuc phrase=True reverseLookup = False while phrase == True: translation = str(raw_input("Enter content for translation.\n").lower()) input_list = translation.split() for word in input_list: #English to Tuccin if word in Tuc: print ("".join(Tuc[word]))+" *English&gt;&gt;Tuccin" #Tuccin to English elif word in Eng: print ("".join(Eng[word]))+" *Tuccin&gt;&gt;English" else: print word+" *Word Not Stored" </code></pre>
0
2016-09-07T22:24:55Z
39,379,959
<p>You ought to do</p> <pre><code>eng = {v[0]: [k] for k, v in tuc.items()} </code></pre> <p>Or <code>iteritems()</code> with Python 2.</p> <p>Note, instead of:</p> <pre><code>while phrase == True </code></pre> <p>You should write:</p> <pre><code>while phrase </code></pre>
0
2016-09-07T22:31:01Z
[ "python", "dictionary", "translation" ]
Reverse translation dictionary Python
39,379,907
<p>This is what I have ended up with after suggestions were made; it appears as if the Eng dictionary is identical to the Tuc one. This program will translate English words to Tuccin, but I cannot for the life of me get it to translate Tuccin to English. Please tell me how to achieve this. In the event a non-stored word is input, I have it set to just print the word itself. But I don't even manage to get the elif to trigger; it goes straight to the else condition if it's not a stored English word. </p> <pre><code>Tuc={"i":["o"],"love":["wau"],"you":["uo"],"me":["ye"],"my":["yem"],"mine":["yeme"],"are":["sia"]} Eng = {t: e for t, e in Tuc.items()} print "ENG" print Eng print "TUC" print Tuc phrase=True reverseLookup = False while phrase == True: translation = str(raw_input("Enter content for translation.\n").lower()) input_list = translation.split() for word in input_list: #English to Tuccin if word in Tuc: print ("".join(Tuc[word]))+" *English&gt;&gt;Tuccin" #Tuccin to English elif word in Eng: print ("".join(Eng[word]))+" *Tuccin&gt;&gt;English" else: print word+" *Word Not Stored" </code></pre>
0
2016-09-07T22:24:55Z
39,379,989
<p>You can clean up your code quite a bit while fixing this error (you copied the dictionary instead of reversing it):</p> <pre><code>Tuc={"i":"o", "love":"wau", "you":"uo", "me":"ye", "my":"yem", "mine":"yeme", "are":"sia"} Eng = {e:t for t, e in Tuc.items()} print "ENG" print Eng print "TUC" print Tuc phrase=True while phrase: translation = raw_input("Enter content for translation.\n").lower() phrase = translation.split() # empty input will break the loop for word in phrase: print Tuc.get(word, Eng.get(word, 'Word Not Stored')) </code></pre>
0
2016-09-07T22:34:14Z
[ "python", "dictionary", "translation" ]
Reverse translation dictionary Python
39,379,907
<p>This is what I have ended up with after suggestions were made; it appears as if the Eng dictionary is identical to the Tuc one. This program will translate English words to Tuccin, but I cannot for the life of me get it to translate Tuccin to English. Please tell me how to achieve this. In the event a non-stored word is input, I have it set to just print the word itself. But I don't even manage to get the elif to trigger; it goes straight to the else condition if it's not a stored English word. </p> <pre><code>Tuc={"i":["o"],"love":["wau"],"you":["uo"],"me":["ye"],"my":["yem"],"mine":["yeme"],"are":["sia"]} Eng = {t: e for t, e in Tuc.items()} print "ENG" print Eng print "TUC" print Tuc phrase=True reverseLookup = False while phrase == True: translation = str(raw_input("Enter content for translation.\n").lower()) input_list = translation.split() for word in input_list: #English to Tuccin if word in Tuc: print ("".join(Tuc[word]))+" *English&gt;&gt;Tuccin" #Tuccin to English elif word in Eng: print ("".join(Eng[word]))+" *Tuccin&gt;&gt;English" else: print word+" *Word Not Stored" </code></pre>
0
2016-09-07T22:24:55Z
39,380,008
<p>This version assumes each word will have multiple translations (that's the reason you're using lists on each word):</p> <pre><code>from collections import defaultdict tuc_dictionary = { "i": ["o"], "love": ["wau"], "you": ["uo"], "me": ["ye"], "my": ["yem"], "mine": ["yeme"], "are": ["sia"] } english_dictionary = defaultdict(list) for k, v in tuc_dictionary.items(): for word in v: english_dictionary[word].append(k) english_dictionary = dict(english_dictionary) print "ENG" print english_dictionary print "TUC" print tuc_dictionary phrase = True reverseLookup = False while phrase == True: translation = str(raw_input("Enter content for translation.\n").lower()) input_list = translation.split() for word in input_list: # English to Tuccin if word in tuc_dictionary: print("".join(tuc_dictionary[word])) + " *English&gt;&gt;Tuccin" # Tuccin to English elif word in english_dictionary: print("".join(english_dictionary[word])) + " *Tuccin&gt;&gt;English" else: print word + " *Word Not Stored" </code></pre>
0
2016-09-07T22:35:23Z
[ "python", "dictionary", "translation" ]
Convert .py files to correct encoding for Python 3
39,379,969
<p>I just pulled from a git repo where the users are on Python 2. My system is running Python 3 and with no changes in the code, I am getting this error:</p> <pre><code>TabError: inconsistent use of tabs and spaces in indentation </code></pre> <p>It appears that the solution is to change the char set encoding of the <code>.py</code> files, but working in emacs, I'm not clear how to do this. I'm seeing these instructions:</p> <p><a href="https://www.emacswiki.org/emacs/ChangingEncodings" rel="nofollow">https://www.emacswiki.org/emacs/ChangingEncodings</a></p> <p>but I don't understand how to apply these for utf-8. I'd appreciate any suggestions.</p>
0
2016-09-07T22:32:11Z
39,379,998
<p>This is not a Python 2/3 encoding issue; the files in that git repo mix tabs and spaces for indentation, which Python 3 rejects outright. The easiest fix is to replace all tab characters in the affected files with spaces, using something like sed.</p>
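For example, with GNU sed (the <code>-i</code> syntax differs on BSD/macOS sed), converting each tab to four spaces looks like this; the file name is just a stand-in for the repo's .py files:

```shell
# create a small file with tab-indented lines (stand-in for the repo files)
printf 'def f():\n\tif True:\n\t\treturn 1\n' > demo_tabs.py

# replace every hard tab with four spaces, in place (GNU sed)
sed -i 's/\t/    /g' demo_tabs.py

cat demo_tabs.py
```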
0
2016-09-07T22:34:43Z
[ "python", "encoding", "emacs" ]
Convert .py files to correct encoding for Python 3
39,379,969
<p>I just pulled from a git repo where the users are on Python 2. My system is running Python 3 and with no changes in the code, I am getting this error:</p> <pre><code>TabError: inconsistent use of tabs and spaces in indentation </code></pre> <p>It appears that the solution is to change the char set encoding of the <code>.py</code> files, but working in emacs, I'm not clear how to do this. I'm seeing these instructions:</p> <p><a href="https://www.emacswiki.org/emacs/ChangingEncodings" rel="nofollow">https://www.emacswiki.org/emacs/ChangingEncodings</a></p> <p>but I don't understand how to apply these for utf-8. I'd appreciate any suggestions.</p>
0
2016-09-07T22:32:11Z
39,383,808
<p>There is a command <code>untabify</code>:</p> <p>It converts all tabs in the region to multiple spaces, preserving columns. If called interactively with a prefix ARG, it converts the entire buffer.</p> <p>That is, call it with C-u to convert all TABs in the buffer.</p> <p>As a comment points out correctly, <code>tabify</code> does the inverse and converts multiple spaces to tabs, though using spaces seems to be a common convention, not just in Python.</p>
1
2016-09-08T06:20:51Z
[ "python", "encoding", "emacs" ]
How can one replace missing values with median or mode in SFrame?
39,379,987
<p>I'm going through the GraphLab documentation and I am trying to figure out how to duplicate the pandas functionality where na values are replaced by the median, the mean, or the mode, etc. In pandas you simply do this with: df.dropna().median() or df.dropna().mean() etc.</p> <p>But the documentation on the dropna and fillna functions for SFrame doesn't mention anything similar. Is it possible at all in SFrame? </p>
1
2016-09-07T22:33:56Z
39,380,244
<p>There is one, but only the mean is available, not the median. Have a look at: <code>graphlab.toolkits.feature_engineering.NumericImputer</code> (<a href="https://turi.com/products/create/docs/generated/graphlab.toolkits.feature_engineering.NumericImputer.html" rel="nofollow">doc</a>)</p> <blockquote> <p>Impute missing values with feature means.</p> <p>Input columns to the NumericImputer must be of type int, float, dict, list, or array.array. For each column in the input, the transformed output is a column where the input is retained as is if:</p> <ul> <li>there is no missing value.</li> </ul> <p>Inputs that do not satisfy the above are set to the mean value of that feature.</p> </blockquote> <p>If the median is what you want, you could achieve it with the following (note the <code>dropna()</code>: plain <code>np.median</code> returns NaN when the column still contains missing values, and <code>fillna</code> returns a new SFrame rather than modifying in place):</p> <pre><code>data = data.fillna('feature_name', np.median(data['feature_name'].dropna())) </code></pre>
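Related NumPy gotcha worth knowing here: the plain median propagates NaN, so either drop the missing values first or use <code>np.nanmedian</code>, which ignores them:

```python
import numpy as np

vals = np.array([1.0, 2.0, np.nan, 4.0])

print(np.median(vals))     # a single missing value poisons the plain median
print(np.nanmedian(vals))  # missing values are ignored
```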
1
2016-09-07T23:01:55Z
[ "python", "pandas", "graphlab" ]
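If graphlab is not at hand, the median-fill idea in the answer above can be sketched with only the standard library, using `None` for missing values (`statistics.median` plays the role of `np.median` here):

```python
from statistics import median

def fill_missing_with_median(values):
    """Replace None entries with the median of the observed values."""
    observed = [v for v in values if v is not None]
    med = median(observed)
    return [med if v is None else v for v in values]

print(fill_missing_with_median([1, None, 3, 5, None]))  # -> [1, 3, 3, 5, 3]
```

The SFrame `fillna` call above does the same thing: compute one statistic over the non-missing entries, then substitute it for every missing entry.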
Python script for finding keith numbers not working
39,380,002
<p>I am trying to make a python program that will find keith numbers. If you don't know what Keith numbers are, here is a link explaining them: <a href="http://mathworld.wolfram.com/KeithNumber.html" rel="nofollow">Keith Numbers - Wolfram MathWorld</a></p> <p>My code is</p> <pre><code>from decimal import Decimal from time import sleep activator1 = 1 while (activator1 == 1): try: limit = int(raw_input("How many digits do you want me to stop at?")) activator1 = 0 except ValueError: print "You did not enter an integer" limitlist = [] activator2 = 1 while (activator2 &lt;= limit): limitlist.append(activator2) activator2 += 1 print limitlist add1 = 0 add = 0 count = 9 while 1: sleep (0.1) numbers = list(str(count)) for i in limitlist: if (i &gt; 0) &amp; (add &lt; count): add = sum(Decimal(i) for i in numbers) lastnumber = int(numbers[-1]) add1 = lastnumber+int(add) numbers.reverse() numbers.pop() numbers.append(add1) print add1 print add print count print numbers if (add1 == count): print"________________________________" print add1 print count elif (i &gt; 0) &amp; (add &gt; count): count += 1 break </code></pre> <p>It doesn't output any errors but it just outputs </p> <pre><code>18 9 9 [18] </code></pre> <p>Could someone please tell me why it doesn't just repeatedly find Keith numbers within the number of integers range? </p>
0
2016-09-07T22:34:53Z
39,380,183
<p>You have it in front of you:</p> <pre><code>add1 = 18 add = 9 count = 9 numbers = [18] </code></pre> <p>You're in an infinite loop with no output. You get this output only <em>once</em>. After this, <strong>i</strong> runs through the values 1, 2, and 3. Each time through the <strong>for</strong> loop, all three <strong>if</strong> conditions are <strong>False</strong>. Nothing changes, you drop out of the <strong>for</strong> loop, and go back to the top of the <strong>while</strong>. Here, you set <strong>numbers</strong> back to ['9'], and loop forever.</p> <p>I suggest that you learn two skills:</p> <ol> <li>Basic debugging: learn to single-step through a debugger, looking at variable values. Alternately, learn to trace your logic on paper and stick in meaningful <strong>print</strong> statements. (My version of this is at the bottom of this answer.)</li> <li>Incremental programming: Write a few lines of code and get them working. <em>After</em> you have them <em>working</em> (test with various input values and results printed), continue to write a few more. In this case, you wrote a large block of code, and then could not see the error in roughly 50 lines. If you code incrementally, you'll often be able to isolate the problem to your most recent 3-5 lines.</li> </ol> <hr>
3
2016-09-07T22:56:07Z
[ "python", "python-2.7", "math", "numbers" ]
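For reference, the repeated-sum idea the question's code is trying to implement can be written as one compact function. This is a sketch of the algorithm itself, not a line-by-line fix of the original loop structure:

```python
def is_keith(n):
    """A number is a Keith number if the sequence seeded by its digits
    (each new term = sum of the previous len(digits) terms) hits n."""
    digits = [int(d) for d in str(n)]
    if len(digits) < 2:
        return False  # single-digit numbers are excluded by convention
    seq = digits[:]
    while seq[-1] < n:
        seq.append(sum(seq[-len(digits):]))
    return seq[-1] == n

print([k for k in range(10, 100) if is_keith(k)])  # -> [14, 19, 28, 47, 61, 75]
```

Searching a range is then just a filter over `is_keith`, which avoids the shared mutable state (`add`, `add1`, `numbers`) that makes the original loop hard to trace.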
Python script for finding keith numbers not working
39,380,002
<p>I am trying to make a python program that will find keith numbers. If you don't know what Keith numbers are, here is a link explaining them: <a href="http://mathworld.wolfram.com/KeithNumber.html" rel="nofollow">Keith Numbers - Wolfram MathWorld</a></p> <p>My code is</p> <pre><code>from decimal import Decimal from time import sleep activator1 = 1 while (activator1 == 1): try: limit = int(raw_input("How many digits do you want me to stop at?")) activator1 = 0 except ValueError: print "You did not enter an integer" limitlist = [] activator2 = 1 while (activator2 &lt;= limit): limitlist.append(activator2) activator2 += 1 print limitlist add1 = 0 add = 0 count = 9 while 1: sleep (0.1) numbers = list(str(count)) for i in limitlist: if (i &gt; 0) &amp; (add &lt; count): add = sum(Decimal(i) for i in numbers) lastnumber = int(numbers[-1]) add1 = lastnumber+int(add) numbers.reverse() numbers.pop() numbers.append(add1) print add1 print add print count print numbers if (add1 == count): print"________________________________" print add1 print count elif (i &gt; 0) &amp; (add &gt; count): count += 1 break </code></pre> <p>It doesn't output any errors but it just outputs </p> <pre><code>18 9 9 [18] </code></pre> <p>Could someone please tell me why it doesn't just repeatedly find Keith numbers within the number of integers range? </p>
0
2016-09-07T22:34:53Z
39,380,287
<p>Prune has already given you good advice! Let's put together a little example of what he meant, though. Let's say you've got an algorithm which determines whether n is a keith number or not, and also a test loop to print some keith numbers:</p> <pre><code>def keith_number(n): c = str(n) a = list(map(int, c)) b = sum(a) while b &lt; n: a = a[1:] + [b] b = sum(a) return (b == n) &amp; (len(c) &gt; 1) N = 5 for i in range(N): a, b = 10**i, 10**(i + 1) print("[{0},{1}]".format(a, b)) print([i for i in filter(keith_number, range(a, b))]) print('-' * 80) </code></pre> <p>Such a snippet gives you this:</p> <pre><code>[1,10] [] -------------------------------------------------------------------------------- [10,100] [14, 19, 28, 47, 61, 75] -------------------------------------------------------------------------------- [100,1000] [197, 742] -------------------------------------------------------------------------------- [1000,10000] [1104, 1537, 2208, 2580, 3684, 4788, 7385, 7647, 7909] -------------------------------------------------------------------------------- [10000,100000] [31331, 34285, 34348, 55604, 62662, 86935, 93993] -------------------------------------------------------------------------------- </code></pre> <p>Wow, that's awesome... but wait, let's say you don't understand the keith_number function and you want to explore the algorithm a little bit in order to understand its guts. What about if we add some useful debug lines?</p> <pre><code>def keith_number(n): c = str(n) a = list(map(int, c)) b = sum(a) print("{0} = {1}".format("+".join(map(str, a)), b)) while b &lt; n: a = a[1:] + [b] b = sum(a) print("{0} = {1}".format("+".join(map(str, a)), b)) return (b == n) &amp; (len(c) &gt; 1) keith_number(14) print '-' * 80 keith_number(15) </code></pre> <p>That way you'll be able to trace the important steps and the algorithm will make sense in your head:</p> <pre><code>1+4 = 5 4+5 = 9 5+9 = 14 -------------------------------------------------------------------------------- 1+5 = 6 5+6 = 11 6+11 = 17 </code></pre> <p>Conclusion: I'd advise you to learn how to debug your own code instead of asking strangers about it ;-)</p>
0
2016-09-07T23:07:03Z
[ "python", "python-2.7", "math", "numbers" ]
python pip upgrade breaks
39,380,047
<p>I am using Ubuntu 16.04.1 LTS at work. I need to upgrade pip to the latest version (8.1.2).</p> <p>When I run:</p> <pre><code>sudo pip install -U pip </code></pre> <p>I get the following error. I checked the proxy settings; they look okay. </p> <pre><code>Exception: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 209, in main status = self.run(options, args) File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 328, in run wb.build(autobuilding=True) File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 748, in build self.requirement_set.prepare_files(self.finder) File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 360, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 448, in _prepare_file req_to_install, finder) File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 397, in _check_skip_installed finder.find_requirement(req_to_install, self.upgrade) File "/usr/lib/python2.7/dist-packages/pip/index.py", line 442, in find_requirement all_candidates = self.find_all_candidates(req.name) File "/usr/lib/python2.7/dist-packages/pip/index.py", line 400, in find_all_candidates for page in self._get_pages(url_locations, project_name): File "/usr/lib/python2.7/dist-packages/pip/index.py", line 545, in _get_pages page = self._get_page(location) File "/usr/lib/python2.7/dist-packages/pip/index.py", line 648, in _get_page return HTMLPage.get_page(link, session=self.session) File "/usr/lib/python2.7/dist-packages/pip/index.py", line 757, in get_page "Cache-Control": "max-age=600", File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 480, in get return self.request('GET', url, **kwargs) File "/usr/lib/python2.7/dist-packages/pip/download.py", line 378, in request return super(PipSession, self).request(method, url, *args, **kwargs) File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 468, in request resp = self.send(prep, **send_kwargs) File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 576, in send r = adapter.send(request, **kwargs) File "/usr/share/python-wheels/CacheControl-0.11.5-py2.py3-none-any.whl/cachecontrol/adapter.py", line 46, in send resp = super(CacheControlAdapter, self).send(request, **kw) File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/adapters.py", line 376, in send timeout=timeout File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 610, in urlopen _stacktrace=sys.exc_info()[2]) File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/util/retry.py", line 228, in increment total -= 1 TypeError: unsupported operand type(s) for -=: 'Retry' and 'int' </code></pre> <p>Can somebody give me a pointer?</p>
5
2016-09-07T22:40:10Z
39,394,824
<p>This is the command i use:</p> <pre><code>pip install --upgrade pip </code></pre>
0
2016-09-08T15:24:00Z
[ "python", "linux", "pip", "upgrade" ]
Writing a source code in Python
39,380,093
<p>I have source code in C, and I want to write it in Python. I have <code>.so</code> library files for my C code; is it possible to change <code>.so</code> files to <code>.py</code> lib files for execution? </p> <p>If so, how can I execute my python files? I mean, in order to run C in python, we use something like below:</p> <pre><code>gcc -shared -I/usr/include/python2.3/ -lpython2.3 -o myModule.so myModule.c </code></pre> <p>So is there something like this to execute my python files, or can I just use the import command to execute?</p>
-3
2016-09-07T22:45:28Z
39,380,120
<p>You can use Python's <a href="https://docs.python.org/3.5/library/ctypes.html" rel="nofollow">ctypes</a> module to load <code>so</code> files and use their functions.<br> Take a look at the second answer <a href="http://stackoverflow.com/questions/145270/calling-c-c-from-python?noredirect=1&amp;lq=1">here</a>.</p>
1
2016-09-07T22:48:14Z
[ "python" ]
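A minimal sketch of the ctypes route the answer above points to. Loading the asker's own `myModule.so` would work the same way via its path; here, to keep the example self-contained, the C standard library is loaded instead (passing `None` to `CDLL` works on Linux/macOS; on Windows you would load a DLL by name):

```python
import ctypes

# Expose symbols from the C runtime of the current process.
# For your own library you would write: ctypes.CDLL("./myModule.so")
libc = ctypes.CDLL(None)

# Declare the C function's signature before calling it.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-5))  # -> 5
```

The point is that the `.so` is not converted to `.py`; it stays compiled, and Python calls into it through the declared signatures.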
pandas DataFrame and pandas.groupby to calculate Salaries
39,380,148
<p>For my assignment, I need to import baseball salary data into a pandas <code>DataFrame</code>.<br> From there, one of my objectives is to get the salaries of all the teams per year.</p> <p>I was successful however in order to move onto the next task, I need a pandas <code>DataFrame</code>. <code>sumofSalaries.dtype</code> is returning <code>int64</code>.</p> <p>Questions:<br> 1. How do I convert the data in the code following into a DataFrame?<br> 2. How do I delete the indexes in <code>sumofSalaries</code>?</p> <p>Code:</p> <pre><code> import pandas as pd salariesData = pd.read_csv('Salaries.csv') #sum salaries by year and team sumOfSalaries = salariesData.groupby(by=['yearID','teamID'])['salary'].sum() del sumOfSalaries.index.names #line giving me errors #create DataFrame from grouped data df = pd.DataFrame(sumOfSalaries, columns = ['yearID', 'teamID', 'salary']) df _____________________________________________________________________________ sumofSalaries: yearID teamID 1985 ATL 14807000 BAL 11560712 BOS 10897560 CAL 14427894 CHA 9846178 ...and so on _____________________________________________________________________________ df: yearID teamID salary yearID teamID 1985 ATL NaN NaN 14807000 BAL NaN NaN 11560712 BOS NaN NaN 10897560 CAL NaN NaN 14427894 </code></pre>
2
2016-09-07T22:51:49Z
39,380,440
<p><code>del</code> has a <a href="https://docs.python.org/3/reference/simple_stmts.html#del" rel="nofollow">very specific meaning</a> in python and has no use on a dataframe like that.</p> <p>You want to use <code>reset_index</code> to get rid of the <code>MultiIndex</code> after a groupby -- if you want to get rid of the <code>MultiIndex</code>, that is.</p> <pre><code>import pandas as pd salariesData = pd.read_csv('Salaries.csv') #sum salaries by year and team sumOfSalaries = (pd.DataFrame( salariesData.groupby(by=['yearID','teamID'])['salary'].sum() .reset_index() )) </code></pre> <p>Read up on the <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow">groupby docs</a> and the <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html" rel="nofollow">multiindexing docs</a> for more information. </p>
1
2016-09-07T23:29:12Z
[ "python", "pandas", "dataframe", "ipython" ]
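The same year/team aggregation can be sketched without pandas at all, which makes clear what `groupby(['yearID','teamID'])['salary'].sum()` is computing (the rows here are made-up sample data in the question's column order):

```python
from collections import defaultdict

rows = [
    (1985, "ATL", 10), (1985, "ATL", 20), (1985, "CAL", 40),
    (1986, "CAL", 10), (1986, "BOS", 30), (1987, "BOS", 50),
]

# One running total per (year, team) key -- exactly the groupby keys.
totals = defaultdict(int)
for year, team, salary in rows:
    totals[(year, team)] += salary

for (year, team), total in sorted(totals.items()):
    print(year, team, total)
```

`reset_index()` in the answer above then corresponds to turning the `(year, team)` keys back into ordinary columns instead of an index.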
pandas DataFrame and pandas.groupby to calculate Salaries
39,380,148
<p>For my assignment, I need to import baseball salary data into a pandas <code>DataFrame</code>.<br> From there, one of my objectives is to get the salaries of all the teams per year.</p> <p>I was successful however in order to move onto the next task, I need a pandas <code>DataFrame</code>. <code>sumofSalaries.dtype</code> is returning <code>int64</code>.</p> <p>Questions:<br> 1. How do I convert the data in the code following into a DataFrame?<br> 2. How do I delete the indexes in <code>sumofSalaries</code>?</p> <p>Code:</p> <pre><code> import pandas as pd salariesData = pd.read_csv('Salaries.csv') #sum salaries by year and team sumOfSalaries = salariesData.groupby(by=['yearID','teamID'])['salary'].sum() del sumOfSalaries.index.names #line giving me errors #create DataFrame from grouped data df = pd.DataFrame(sumOfSalaries, columns = ['yearID', 'teamID', 'salary']) df _____________________________________________________________________________ sumofSalaries: yearID teamID 1985 ATL 14807000 BAL 11560712 BOS 10897560 CAL 14427894 CHA 9846178 ...and so on _____________________________________________________________________________ df: yearID teamID salary yearID teamID 1985 ATL NaN NaN 14807000 BAL NaN NaN 11560712 BOS NaN NaN 10897560 CAL NaN NaN 14427894 </code></pre>
2
2016-09-07T22:51:49Z
39,382,899
<p>I think you need only add parameter <code>as_index=False</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a>, output is <code>DataFrame</code> without <code>MultiIndex</code>:</p> <pre><code>sumOfSalaries = salariesData.groupby(by=['yearID','teamID'], as_index=False)['salary'].sum() </code></pre> <p>Sample:</p> <pre><code>import pandas as pd salariesData = pd.DataFrame({ 'yearID': {0: 1985, 1: 1985, 2: 1985, 3: 1985, 4: 1985, 5: 1986, 6: 1986, 7: 1986, 8: 1987, 9: 1987}, 'teamID': {0: 'ATL', 1: 'ATL', 2: 'ATL', 3: 'CAL', 4: 'CAL', 5: 'CAL', 6: 'CAL', 7: 'BOS', 8: 'BOS', 9: 'BOS'}, 'salary': {0: 10, 1: 20, 2: 30, 3: 40, 4: 50, 5: 10, 6: 20, 7: 30, 8: 40, 9: 50} }, columns = ['yearID','teamID','salary'] ) print (salariesData) yearID teamID salary 0 1985 ATL 10 1 1985 ATL 20 2 1985 ATL 30 3 1985 CAL 40 4 1985 CAL 50 5 1986 CAL 10 6 1986 CAL 20 7 1986 BOS 30 8 1987 BOS 40 9 1987 BOS 50 sumOfSalaries = salariesData.groupby(by=['yearID','teamID'], as_index=False)['salary'].sum() print (sumOfSalaries) yearID teamID salary 0 1985 ATL 60 1 1985 CAL 90 2 1986 BOS 30 3 1986 CAL 30 4 1987 BOS 90 </code></pre> <p>Also if need remove index names, use assign to <code>(None, None)</code>, but if use solution above, it is not necessary:</p> <pre><code>sumOfSalaries.index.names = (None, None) </code></pre> <p>Sample:</p> <pre><code>sumOfSalaries = salariesData.groupby(by=['yearID','teamID'])['salary'].sum() sumOfSalaries.index.names = (None, None) print (sumOfSalaries) 1985 ATL 60 CAL 90 1986 BOS 30 CAL 30 1987 BOS 90 Name: salary, dtype: int64 </code></pre>
0
2016-09-08T05:00:42Z
[ "python", "pandas", "dataframe", "ipython" ]
Python assertIn test statement not finding string?
39,380,171
<p>I am using <code>assertIn</code> to test that a part of the result in a JSON string is correct.</p> <pre><code>test_json = some_function_returning_a_dict() self.assertIn(expected_json, test_json, "did not match expected output") </code></pre> <p>The error is</p> <blockquote> <p>AssertionError: "'abc': '1.0012'," not found in [{'abc': '1.0012',...</p> </blockquote> <p>I used <code>Ctrl + F</code> over the inner string, and it was in the resulting string.<br> I'm using Python 3.0</p>
0
2016-09-07T22:54:19Z
39,380,218
<p><code>"'abc': '1.0012',"</code> is a string, and <code>{'abc': '1.0012', }</code> is an entry in a dictionary.</p> <p>You want to be checking for the dictionary entry in the JSON, not a string.</p>
1
2016-09-07T22:59:42Z
[ "python", "python-3.x" ]
Python assertIn test statement not finding string?
39,380,171
<p>I am using <code>assertIn</code> to test that a part of the result in a JSON string is correct.</p> <pre><code>test_json = some_function_returning_a_dict() self.assertIn(expected_json, test_json, "did not match expected output") </code></pre> <p>The error is</p> <blockquote> <p>AssertionError: "'abc': '1.0012'," not found in [{'abc': '1.0012',...</p> </blockquote> <p>I used <code>Ctrl + F</code> over the inner string, and it was in the resulting string.<br> I'm using Python 3.0</p>
0
2016-09-07T22:54:19Z
39,380,225
<p>Right. Python's <strong>in</strong> operator works on an iterable object. The clause <strong>in test_json</strong> means, "is the given item a key of the dictionary". It does <em>not</em> search the dictionary for a <em>key:value</em> pair.</p> <p>To do this, use a two-step process:</p> <pre><code>assertIn('abc', test_json) assertEquals('1.0012', test_json['abc']) </code></pre> <p>Doing this with appropriate variables and references is left as an exercise for the student. :-)</p>
3
2016-09-07T23:00:45Z
[ "python", "python-3.x" ]
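The two-step check described above, written out concretely (the dictionary contents here are made up to match the error message in the question):

```python
test_json = {"abc": "1.0012", "xyz": "2.5"}

# "in" on a dict tests membership among the *keys*, so checking for a
# string like "'abc': '1.0012'," can never succeed.
assert "'abc': '1.0012'," not in test_json

# Instead: first assert the key exists, then assert its value.
assert "abc" in test_json
assert test_json["abc"] == "1.0012"
print("checks passed")
```

Inside a `unittest.TestCase`, the last two asserts become `self.assertIn("abc", test_json)` followed by `self.assertEqual("1.0012", test_json["abc"])`, exactly as in the answer.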
Python assertIn test statement not finding string?
39,380,171
<p>I am using <code>assertIn</code> to test that a part of the result in a JSON string is correct.</p> <pre><code>test_json = some_function_returning_a_dict() self.assertIn(expected_json, test_json, "did not match expected output") </code></pre> <p>The error is</p> <blockquote> <p>AssertionError: "'abc': '1.0012'," not found in [{'abc': '1.0012',...</p> </blockquote> <p>I used <code>Ctrl + F</code> over the inner string, and it was in the resulting string.<br> I'm using Python 3.0</p>
0
2016-09-07T22:54:19Z
39,380,238
<p>It looks like you are attempting to find a string inside a dictionary, which will check to see if the string you are giving is a key of the specified dictionary. Firstly don't convert your first dictionary to a string, and secondly do something like <code>all(item in test_json.items() for item in expected_json.items())</code></p>
1
2016-09-07T23:01:42Z
[ "python", "python-3.x" ]
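The `items()` subset test suggested above can be wrapped in a small helper to make its behaviour explicit: it succeeds only when every expected key/value pair appears in the actual dictionary.

```python
def dict_contains(expected, actual):
    """True if every (key, value) pair of `expected` occurs in `actual`."""
    return all(item in actual.items() for item in expected.items())

actual = {"abc": "1.0012", "xyz": "2.5"}
print(dict_contains({"abc": "1.0012"}, actual))  # -> True
print(dict_contains({"abc": "9.9"}, actual))     # -> False
```

This compares values as well as keys, which is what the original `assertIn` on a string could not do.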
How can tkinter justify align text
39,380,182
<p>I would like to know how can we make tkinter to align text justify.</p> <p>There's only <code>CENTER</code>, <code>LEFT</code>, and <code>RIGHT</code>.</p> <p>I would like something like <code>justify = "JUSTIFY"</code> And it seems that it is not implemented.</p> <p>Thank you. </p>
0
2016-09-07T22:56:00Z
39,381,246
<p>There is nothing built-in to Tkinter that will justify text to both the left and right margins at the same time.</p>
0
2016-09-08T01:26:11Z
[ "python", "python-3.x", "tkinter", "text-alignment" ]
Wrapped (circular) 2D interpolation in python
39,380,251
<p>I have angular data on a domain that is wrapped at pi radians (i.e. 0 = pi). The data are 2D, where one dimension represents the angle. I need to interpolate this data onto another grid in a wrapped way.</p> <p>In one dimension, the np.interp function takes a period kwarg (for numpy 1.10 and later): <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html</a></p> <p>This is exactly what I need, but I need it in two dimensions. I'm currently just stepping through columns in my array and using np.interp, but this is of course slow. Anything out there that could achieve this same outcome but faster?</p>
3
2016-09-07T23:03:05Z
39,382,154
<h2>An explanation of how <code>np.interp</code> works</h2> <p>Use the <a href="https://github.com/numpy/numpy/blob/v1.11.0/numpy/lib/function_base.py#L1570-L1692" rel="nofollow">source</a>, Luke!</p> <p>The <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html" rel="nofollow">numpy doc for <code>np.interp</code></a> makes the source particularly easy to find, since it has the link right there, along with the documentation. Let's go through this, line by line.</p> <p>First, recall the parameters:</p> <pre><code>""" x : array_like The x-coordinates of the interpolated values. xp : 1-D sequence of floats The x-coordinates of the data points, must be increasing if argument `period` is not specified. Otherwise, `xp` is internally sorted after normalizing the periodic boundaries with ``xp = xp % period``. fp : 1-D sequence of floats The y-coordinates of the data points, same length as `xp`. period : None or float, optional A period for the x-coordinates. This parameter allows the proper interpolation of angular x-coordinates. Parameters `left` and `right` are ignored if `period` is specified. """ </code></pre> <p>Let's take a simple example of a triangular wave while going through this:</p> <pre><code>xp = np.array([-np.pi/2, -np.pi/4, 0, np.pi/4]) fp = np.array([0, -1, 0, 1]) x = np.array([-np.pi/8, -5*np.pi/8]) # Peskiest points possible }:) period = np.pi </code></pre> <p>Now, I start off with the <code>period != None</code> branch in the source code, after all the type-checking happens:</p> <pre><code># normalizing periodic boundaries x = x % period xp = xp % period </code></pre> <p>This just ensures that all values of <code>x</code> and <code>xp</code> supplied are between <code>0</code> and <code>period</code>. So, since the period is <code>pi</code>, but we specified <code>x</code> and <code>xp</code> to be between <code>-pi/2</code> and <code>pi/2</code>, this will adjust for that by adding <code>pi</code> to all values in the range <code>[-pi/2, 0)</code>, so that they effectively appear after <code>pi/2</code>. So our <code>xp</code> now reads <code>[pi/2, 3*pi/4, 0, pi/4]</code>.</p>
<pre><code>asort_xp = np.argsort(xp) xp = xp[asort_xp] fp = fp[asort_xp] </code></pre> <p>This is just ordering <code>xp</code> in increasing order. This is especially required after performing that modulo operation in the previous step. So, now <code>xp</code> is <code>[0, pi/4, pi/2, 3*pi/4]</code>. <code>fp</code> has also been shuffled accordingly, <code>[0, 1, 0, -1]</code>.</p> <pre><code>xp = np.concatenate((xp[-1:]-period, xp, xp[0:1]+period)) fp = np.concatenate((fp[-1:], fp, fp[0:1])) return compiled_interp(x, xp, fp, left, right) # Paraphrasing a little </code></pre> <p><code>np.interp</code> does linear interpolation. When trying to interpolate between two points <code>a</code> and <code>b</code> present in <code>xp</code>, it only uses the values of <code>f(a)</code> and <code>f(b)</code> (i.e., the values of <code>fp</code> at the corresponding indices). So what <code>np.interp</code> is doing in this last step is to take the point <code>xp[-1]</code> and put it in front of the array, and take the point <code>xp[0]</code> and put it after the array, but after subtracting and adding one period respectively. So you now have a new <code>xp</code> that looks like <code>[-pi/4, 0, pi/4, pi/2, 3*pi/4, pi]</code>. Likewise, <code>fp[0]</code> and <code>fp[-1]</code> have been concatenated around, so <code>fp</code> is now <code>[-1, 0, 1, 0, -1, 0]</code>.</p> <p>Note that after the modulo operations, <code>x</code> had been brought into the <code>[0, pi]</code> range too, so <code>x</code> is now <code>[7*pi/8, 3*pi/8]</code>. Which lets you easily see that you'll get back <code>[-0.5, 0.5]</code>.</p>
<hr> <h2>Now, coming to your 2D case:</h2> <p>Say you have a grid and some values. Let's just take all values to be between <code>[0, pi]</code> off the bat so we don't need to worry about modulos and shufflings.</p> <pre><code>xp = np.array([0, np.pi/4, np.pi/2, 3*np.pi/4]) yp = np.array([0, 1, 2, 3]) period = np.pi # Put x on the 1st dim and y on the 2nd dim; f is linear in y fp = np.array([0, 1, 0, -1])[:, np.newaxis] + yp[np.newaxis, :] # &gt;&gt;&gt; fp # array([[ 0, 1, 2, 3], # [ 1, 2, 3, 4], # [ 0, 1, 2, 3], # [-1, 0, 1, 2]]) </code></pre> <p>We now know that all you need to do is to add <code>xp[[-1]]</code> in front of the array and <code>xp[[0]]</code> at the end, adjusting for the period. Note how I've indexed using the singleton lists <code>[-1]</code> and <code>[0]</code>. This is a <a href="http://stackoverflow.com/a/18183182/525169">trick</a> to ensure that <a href="http://stackoverflow.com/q/3551242/525169">dimensions are preserved</a>.</p> <pre><code>xp = np.concatenate((xp[[-1]]-period, xp, xp[[0]]+period)) fp = np.concatenate((fp[[-1], :], fp, fp[[0], :])) </code></pre> <p>Finally, you are free to use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html#scipy.interpolate.interpn" rel="nofollow"><code>scipy.interpolate.interpn</code></a> to achieve your result. Let's get the value at <code>x = pi/8</code> for all <code>y</code>:</p> <pre><code>from scipy.interpolate import interpn interp_points = np.hstack(( (np.pi/8 * np.ones(4))[:, np.newaxis], yp[:, np.newaxis] )) result = interpn((xp, yp), fp, interp_points) # &gt;&gt;&gt; result # array([ 0.5, 1.5, 2.5, 3.5]) </code></pre> <p><code>interp_points</code> has to be specified as an Nx2 matrix of points, where the first dimension is for each point you want interpolation at, and the second dimension gives the x- and y-coordinate of that point. See <a href="http://stackoverflow.com/a/39357219/525169">this answer</a> for a detailed explanation.</p>
<p>If you want to get the value outside of the range <code>[0, period]</code>, you'll need to modulo it yourself:</p> <pre><code>x = 21 * np.pi / 8 x_equiv = x % period # Now within [0, period] interp_points = np.hstack(( (x_equiv * np.ones(4))[:, np.newaxis], yp[:, np.newaxis] )) result = interpn((xp, yp), fp, interp_points) # &gt;&gt;&gt; result # array([-0.5, 0.5, 1.5, 2.5]) </code></pre> <p>Again, if you want to generate <code>interp_points</code> for a bunch of x- and y- values, look at <a href="http://stackoverflow.com/a/39357219/525169">this answer</a>.</p>
1
2016-09-08T03:29:23Z
[ "python", "numpy", "interpolation" ]
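The period handling walked through above can be reproduced in plain Python for the 1-D case; applying it column by column is exactly what the question is doing, just made explicit. This is a didactic sketch only (the compiled `np.interp` is what you would use in practice):

```python
import math

def periodic_interp(x, xp, fp, period):
    """Linear interpolation with wrapped x-coordinates, mimicking
    numpy's np.interp(..., period=...) normalization steps."""
    # Step 1: normalize sample x's into [0, period) and sort.
    pairs = sorted((xi % period, fi) for xi, fi in zip(xp, fp))
    xs = [p[0] for p in pairs]
    fs = [p[1] for p in pairs]
    # Step 2: pad with the wrapped neighbours (xp[-1]-period in front,
    # xp[0]+period at the back) so every query point is bracketed.
    xs = [xs[-1] - period] + xs + [xs[0] + period]
    fs = [fs[-1]] + fs + [fs[0]]
    # Step 3: normalize the query point and interpolate linearly.
    xq = x % period
    for i in range(len(xs) - 1):
        if xs[i] <= xq <= xs[i + 1]:
            t = (xq - xs[i]) / (xs[i + 1] - xs[i])
            return fs[i] + t * (fs[i + 1] - fs[i])

# Triangular wave with period pi, as in the worked example above.
xp = [-math.pi / 2, -math.pi / 4, 0, math.pi / 4]
fp = [0, -1, 0, 1]
print(periodic_interp(-math.pi / 8, xp, fp, math.pi))
print(periodic_interp(-5 * math.pi / 8, xp, fp, math.pi))
```

Up to floating-point error, this recovers the `[-0.5, 0.5]` result derived in the answer.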
How to write a regex for a text including comma separated values in python?
39,380,275
<p>I'm trying to write a regex in python to get F1 to F8 fields from a line that looks like this:</p> <pre><code>LineNumber(digits): F1, F2, F3, ..., F8; </code></pre> <p><code>F1</code> to <code>F8</code> can have lowercase/uppercase letters and hyphens.</p> <p>For example:</p> <pre><code>Header Description 21: Yes, No, Yes, No, Ye-s, N-o, YES, NO; Footer </code></pre> <p>What I've tried so far is <code>matched = re.match(r'\d+: ([a-zA-Z-]*, ){7}(.*);', line)</code> which matches the lines with the above format. However, when I call <code>matched.groups()</code> to print the matched fields, I only get <code>F7,</code> and <code>F8</code> while the expected output is a list containing <code>F1,</code> to <code>F7,</code> plus <code>F8</code>.</p> <p>I have a few questions regarding this regex:</p> <ol> <li><p>I guess <code>groups()</code> method returns the fields that were grouped in the regex using <code>(...)</code>. Why don't I get F1 to F6 in the output while they are grouped using <code>(...)</code> and have matched the regex?</p></li> <li><p>What is a better regex I can write to exclude <code>,</code> from F1 to F7? (A short explanation of the suggested regex is much appreciated)</p></li> </ol>
0
2016-09-07T23:05:50Z
39,380,365
<p>When you have a construct like <code>(pattern){number}</code> then although it matches multiple instances, only the last one will be stored. In other words, you get one bucket per <code>()</code>, even if you parse it multiple times, in which case the last instance is the one kept. Note that you will get a bucket for ALL bracket pairs, even if they are not used, as in something like <code>(a(b)?c)?d</code> matching <code>d</code>.</p> <p>If you know how many items to expect, then you can do your regexp the long way:</p> <p><code>\d+: *([a-zA-Z-]+) *, *([a-zA-Z-]+) *, *([a-zA-Z-]+) *, *([a-zA-Z-]+) *, *([a-zA-Z-]+) *, *([a-zA-Z-]+) *, *([a-zA-Z-]+) *, *([a-zA-Z-]+) *;</code></p> <p>This way, since you have 8 sets of brackets, you have 8 items in your <code>matched.groups()</code> array. Also, we're not capturing the spaces and commas between the fields.</p> <p>Given that your string is a CSV, you may be better off parsing it differently and splitting on commas rather than trying to have a single regexp to match the whole line.</p>
0
2016-09-07T23:19:07Z
[ "python", "regex", "csv" ]
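The split-on-commas suggestion above can be combined with a single anchoring regex: match the line number and the field list once, then split the list. A sketch using the question's sample line:

```python
import re

line = "21: Yes, No, Yes, No, Ye-s, N-o, YES, NO;"

m = re.match(r"(\d+):\s*(.*);", line)
if m:
    line_number = m.group(1)
    # Split the captured field list instead of matching each field.
    fields = [f.strip() for f in m.group(2).split(",")]
    print(line_number, fields)
```

This gives one capture group per concept (number, fields) rather than one per field, sidestepping the repeated-group limitation entirely.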
How to write a regex for a text including comma separated values in python?
39,380,275
<p>I'm trying to write a regex in python to get F1 to F8 fields from a line that looks like this:</p> <pre><code>LineNumber(digits): F1, F2, F3, ..., F8; </code></pre> <p><code>F1</code> to <code>F8</code> can have lowercase/uppercase letters and hyphens.</p> <p>For example:</p> <pre><code>Header Description 21: Yes, No, Yes, No, Ye-s, N-o, YES, NO; Footer </code></pre> <p>What I've tried so far is <code>matched = re.match(r'\d+: ([a-zA-Z-]*, ){7}(.*);', line)</code> which matches the lines with the above format. However, when I call <code>matched.groups()</code> to print the matched fields, I only get <code>F7,</code> and <code>F8</code> while the expected output is a list containing <code>F1,</code> to <code>F7,</code> plus <code>F8</code>.</p> <p>I have a few questions regarding this regex:</p> <ol> <li><p>I guess <code>groups()</code> method returns the fields that were grouped in the regex using <code>(...)</code>. Why don't I get F1 to F6 in the output while they are grouped using <code>(...)</code> and have matched the regex?</p></li> <li><p>What is a better regex I can write to exclude <code>,</code> from F1 to F7? (A short explanation of the suggested regex is much appreciated)</p></li> </ol>
0
2016-09-07T23:05:50Z
39,380,453
<pre><code>&gt;&gt;&gt; pat = re.compile("""\s+ # one or more spaces (.*?) # the shortest anything (capture) \s* # zero or more spaces [;,] # a semicolon or a colon """,re.X) &gt;&gt;&gt; pat.findall("LineNumber(digits): F1, F2, F3, F4, F5, F6, F7, F8;") ['F1', 'F2', 'F3', 'F4', 'F5', 'F6', 'F7', 'F8'] </code></pre>
1
2016-09-07T23:32:02Z
[ "python", "regex", "csv" ]
python normal distribution
39,380,316
<p>I have a list of numbers, with the sample mean and SD for these numbers. Right now I am trying to find the numbers outside of mean+-SD, mean+-2SD and mean+-3SD. For example, for the mean+-SD part, I made the code like this:</p> <pre><code>ND1 = [np.mean(l)+np.std(l,ddof=1)] ND2 = [np.mean(l)-np.std(l,ddof=1)] m=sorted(l) print(m) ND68 = [] if ND2 &gt; m and m&lt; ND1: ND68.append(m&lt;ND2 and m&gt;ND1) print (ND68) </code></pre> <p>Here is my question: 1. Can these numbers be calculated from the list and arranged? If so, which part am I doing wrong? Or is there some package I can use to solve this?</p>
1
2016-09-07T23:12:25Z
39,380,451
<p>This might help. We will use <code>numpy</code> to grab the values you are looking for. In my example, I create a normally distributed array and then use boolean slicing to return the elements that are outside of +/- 1, 2, or 3 standard deviations. </p> <pre><code>import numpy as np # create a random normally distributed integer array my_array = np.random.normal(loc=30, scale=10, size=100).astype(int) # find the mean and standard dev my_mean = my_array.mean() my_std = my_array.std() # find numbers outside of 1, 2, and 3 standard dev # the portion inside the square brackets returns an # array of True and False values. Slicing my_array # with the boolean array return only the values that # are True out_std_1 = my_array[np.abs(my_array-my_mean) &gt; my_std] out_std_2 = my_array[np.abs(my_array-my_mean) &gt; 2*my_std] out_std_3 = my_array[np.abs(my_array-my_mean) &gt; 3*my_std] </code></pre>
2
2016-09-07T23:31:41Z
[ "python", "python-3.x" ]
python normal distribution
39,380,316
<p>I have a list of numbers, with the sample mean and SD for these numbers. Right now I am trying to find the numbers outside of mean ± SD, mean ± 2SD and mean ± 3SD. For example, for the mean ± SD part, I made the code like this:</p> <pre><code>ND1 = [np.mean(l)+np.std(l,ddof=1)] ND2 = [np.mean(l)-np.std(l,ddof=1)] m=sorted(l) print(m) ND68 = [] if ND2 &gt; m and m&lt; ND1: ND68.append(m&lt;ND2 and m&gt;ND1) print (ND68) </code></pre> <p>Here is my question: 1. Can these numbers be calculated from the list and arranged? If so, which part am I doing wrong? Or is there some package I can use to solve this?</p>
1
2016-09-07T23:12:25Z
39,380,538
<p>You are on the right track there. You know the mean and standard deviation of your list <code>l</code>, though I'm going to call it something a little less ambiguous, say, <code>samplePopulation</code>. </p> <p>Because you want to do this for several intervals of standard deviation, I recommend crafting a small function. You can call it multiple times without too much extra work. Also, I'm going to use a <a href="https://docs.python.org/3.5/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a>, which is just a <code>for</code> loop in one line. </p> <pre><code>import numpy as np def filter_by_n_std_devs(samplePopulation, numStdDevs): # you mostly got this part right, no need to put them in lists though mean = np.mean(samplePopulation) # no brackets needed here std = np.std(samplePopulation) # or here band = numStdDevs * std # this is the list comprehension filteredPop = [x for x in samplePopulation if x &lt; mean - band or x &gt; mean + band] return filteredPop # now call your function with however many std devs you want filteredPopulation = filter_by_n_std_devs(samplePopulation, 1) print(filteredPopulation) </code></pre> <p>Here's a translation of the list comprehension (based on your use of <code>append</code> it looks like you may not know what these are, otherwise feel free to ignore).</p> <pre><code># remember that you provide the variable samplePopulation # the above list comprehension filteredPop = [x for x in samplePopulation if x &lt; mean - band or x &gt; mean + band] # is equivalent to this: filteredPop = [] for num in samplePopulation: if num &lt; mean - band or num &gt; mean + band: filteredPop.append(num) </code></pre> <p>So to recap: </p> <ul> <li>You don't need to make a list object out of your mean and std calculations</li> <li>The function call lets you plug in your <code>samplePopulation</code> and any number of standard deviations you want without having to go in and manually change the value</li> <li>List
comprehensions are one line for loops, more or less, and you can even do the filtering you want right inside it! </li> </ul>
1
2016-09-07T23:42:07Z
[ "python", "python-3.x" ]
Why no output using Word2Vec?
39,380,424
<p>I have a dataframe <code>DF</code> that looks like</p> <pre><code>index posts 0 &lt;div class="content"&gt;A number of &lt;br/&gt;&lt;br/&gt;three ... &lt;/div&gt; 1 &lt;div class="content"&gt;Stack ... &lt;br/&gt;&lt;br/&gt;overflow ... &lt;/div&gt; ... </code></pre> <p>I then try to tokenize each <code>posts</code> with:</p> <pre><code>sentences=[] for post in DF["posts"]: sentences += utility.tosentences(post, tokenizer) </code></pre> <p>I then run Word2Vec using the below:</p> <pre><code>logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',\ level=logging.INFO) num_features = 100 min_word_count = 7 num_workers = 2 context = 5 downsampling = 1e-5 print "Training model..." model = word2vec.Word2Vec(sentences, workers=num_workers, \ size=num_features, min_count = min_word_count, \ window = context, sample = downsampling) model.init_sims(replace=True) Word2Vec.load() model_name = "what" model.save(model_name) print "finished" </code></pre> <p>I then tested the below</p> <pre><code>model.doesnt_match("travel no Warning health".split()) </code></pre> <p>However, it didn't produce an output at all</p> <p>I don't understand the meaning of the large output I got above. Why is this not working?</p>
0
2016-09-07T23:26:38Z
39,515,884
<p>The function <code>model.doesnt_match()</code> doesn't print anything out; it returns a value. <strong>Print</strong> the returned value to see the output. </p> <p>If you were copy-pasting from this <a href="http://rare-technologies.com/word2vec-tutorial/" rel="nofollow"><code>word2vec</code> tutorial</a>: It's showing you the output you'd see if you ran these commands in the interactive console. (Also, it assumes that you understand what you're doing.)</p>
0
2016-09-15T16:17:53Z
[ "python", "nltk", "word2vec" ]
How to interpolated irregularly distributed data on a non-linear grid in python?
39,380,436
<p>I have a dataset of around 80,000 points (x,y,z), where the points are irregularly distributed in the (x,y) \in [0,a]x[0,b] plane and at every point (x,y) the physical quantity z takes a certain value. To further evaluate the data I want to interpolate it on a grid.</p> <p>Previously I used scipy.interpolate.griddata to successfully interpolate them on a regular, quadratic, 2D grid. However, this regular grid has the disadvantage that it can't model the regions with drastic change in z appropriately, while there are too many data points in regions with only slight change in z.</p> <p>I would like to have a non-linear (preferably still quadratic, but with variable mesh size) grid, with more grid points in regions of drastic change in z and fewer data points in regions of slight change in z. </p>
-1
2016-09-07T23:28:31Z
39,380,757
<p>I think you have it backwards: your grid can be as regular as can be but each grid point should be evaluated using the same number of sample points, thus allowing for strong gradient changes in regions of high sample density, and imposing smoothness in data sparse regions. </p> <p>I use inverse-distance weighted trees for that. The implementation that I have floating around in python:</p> <pre><code>import numpy as np from scipy.spatial import cKDTree class invdisttree(object): """ Compute the score of query points based on the scores of their k-nearest neighbours, weighted by the inverse of their distances. @reference: https://en.wikipedia.org/wiki/Inverse_distance_weighting Example: -------- import numpy as np import matplotlib.pyplot as plt from invdisttree import invdisttree # create sample points with structured scores X1 = 10 * np.random.rand(1000, 2) -5 def func(x, y): return np.sin(x**2 + y**2) / (x**2 + y**2) z1 = func(X1[:,0], X1[:,1]) # 'train' tree = invdisttree(X1, z1) # 'test' spacing = np.linspace(-5., 5., 100) X2 = np.meshgrid(spacing, spacing) grid_shape = X2[0].shape X2 = np.reshape(X2, (2, -1)).T z2 = tree(X2) fig, (ax1, ax2, ax3) = plt.subplots(1,3, sharex=True, sharey=True, figsize=(10,3)) ax1.contourf(spacing, spacing, func(*np.meshgrid(spacing, spacing))) ax1.set_title('Ground truth') ax2.scatter(X1[:,0], X1[:,1], c=z1, linewidths=0) ax2.set_title('Samples') ax3.contourf(spacing, spacing, z2.reshape(grid_shape)) ax3.set_title('Reconstruction') plt.show() """ def __init__(self, X=None, z=None, leafsize=10): if not X is None: self.tree = cKDTree(X, leafsize=leafsize ) if not z is None: self.z = z def fit(self, X=None, z=None, leafsize=10): """ Arguments: ---------- X: (N, d) ndarray Coordinates of N sample points in a d-dimensional space. z: (N,) ndarray Corresponding scores. leafsize: int (default 10) Leafsize of KD-tree data structure; should be less than 20.
Returns: -------- invdisttree instance: object """ return self.__init__(X, z, leafsize) def __call__(self, X, k=6, eps=1e-6, p=2, regularize_by=1e-9): self.distances, self.idx = self.tree.query(X, k, eps=eps, p=p) self.distances += regularize_by weights = self.z[self.idx.ravel()].reshape(self.idx.shape) mw = np.sum(weights/self.distances, axis=1) / np.sum(1./self.distances, axis=1) return mw def transform(self, X, k=6, p=2, eps=1e-6, regularize_by=1e-9): """ Arguments: ---------- X: (N, d) ndarray Coordinates of N query points in a d-dimensional space. k: int (default 6) Number of nearest neighbours to use. p: int or inf Which Minkowski p-norm to use. 1 is the sum-of-absolute-values "Manhattan" distance 2 is the usual Euclidean distance infinity is the maximum-coordinate-difference distance eps: float (default 1e-6) Return approximate nearest neighbors; the k-th returned value is guaranteed to be no further than (1+eps) times the distance to the real k-th nearest neighbor. regularise_by: float (default 1e-9) Regularise distances to prevent division by zero for sample points with the same location as query points. Returns: -------- z: (N,) ndarray Corresponding scores. """ return self.__call__(X, k, eps, p, regularize_by) </code></pre>
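Stripped of the KD-tree machinery, inverse-distance weighting is just a weighted average. A toy one-dimensional sketch (made-up sample points, plain Python) shows the formula the class above implements:

```python
def idw_1d(xs, zs, xq, p=2, eps=1e-9):
    """Inverse-distance-weighted estimate of z at query position xq.

    xs, zs -- sample positions and their values
    p      -- power of the distance penalty
    eps    -- regularizer so a query on top of a sample can't divide by zero
    """
    weights = [1.0 / (abs(x - xq) ** p + eps) for x in xs]
    return sum(w * z for w, z in zip(weights, zs)) / sum(weights)

# Halfway between two samples the estimate is their average;
# close to one sample, that sample's value dominates.
mid = idw_1d([0.0, 1.0], [10.0, 20.0], 0.5)
near = idw_1d([0.0, 1.0], [10.0, 20.0], 0.01)
print(mid, near)
```

The KD-tree in the class above only exists to find the k nearest samples quickly; the estimate per query point is this same ratio.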
0
2016-09-08T00:13:50Z
[ "python", "grid", "interpolation" ]
Python read from a text file and create/return dictionary values
39,380,501
<p>I am having trouble creating a program that reads from text which maps a word and returns values. How would I write a code to read a text file and spit back a dictionary data structure. The text file is a lot longer than 3, but here is an example.</p> <pre><code>Mary 21.0 25.0 Carson 25.0 27.0 Blair 22.0 10.0 </code></pre> <p>and if I were to </p> <pre><code>&gt;&gt;&gt; dict['Mary'] [21.0, 25.0] </code></pre> <p>and then</p> <pre><code>&gt;&gt;&gt; dict['Mary'][0] 21.0 &gt;&gt;&gt; dict['Mary'][1] 25.0 </code></pre> <p>Thanks.</p>
-4
2016-09-07T23:36:47Z
39,380,557
<p>You are looking for a simple snippet like this</p> <pre><code>dic = {} with open(filename, 'r') as f: for line in f: info = line.strip().split('\t') dic[info[0]] = [float(x) for x in info[1:]] </code></pre> <p>Note the <code>strip()</code> (otherwise the last value keeps its trailing newline) and the <code>float()</code> conversion (otherwise you get strings back instead of numbers). It is just a model, adapt it to your needs.</p>
-2
2016-09-07T23:45:25Z
[ "python", "file", "dictionary" ]
Python read from a text file and create/return dictionary values
39,380,501
<p>I am having trouble creating a program that reads from text which maps a word and returns values. How would I write a code to read a text file and spit back a dictionary data structure. The text file is a lot longer than 3, but here is an example.</p> <pre><code>Mary 21.0 25.0 Carson 25.0 27.0 Blair 22.0 10.0 </code></pre> <p>and if I were to </p> <pre><code>&gt;&gt;&gt; dict['Mary'] [21.0, 25.0] </code></pre> <p>and then</p> <pre><code>&gt;&gt;&gt; dict['Mary'][0] 21.0 &gt;&gt;&gt; dict['Mary'][1] 25.0 </code></pre> <p>Thanks.</p>
-4
2016-09-07T23:36:47Z
39,380,581
<p>If your text file is tab separated with one key/value pair per line you can use this (Python 3 only):</p> <pre><code>my_dict = {} with open('c:/path/to/file.txt', 'r') as fp: for line in fp: # grabs the first value as k and everything else as v k, *v = line.strip().split('\t') # turn the string values into floats my_dict[k] = tuple(map(float,v)) </code></pre>
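A quick way to try either snippet without touching the filesystem is io.StringIO, which behaves like an open file; the in-memory sample below mirrors the question's data:

```python
import io

sample = "Mary\t21.0\t25.0\nCarson\t25.0\t27.0\nBlair\t22.0\t10.0\n"

scores = {}
with io.StringIO(sample) as fp:  # stands in for open('file.txt')
    for line in fp:
        name, *values = line.strip().split('\t')
        scores[name] = [float(v) for v in values]

print(scores['Mary'])     # [21.0, 25.0]
print(scores['Mary'][0])  # 21.0
```

This reproduces exactly the lookups the question asks for.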
-1
2016-09-07T23:49:16Z
[ "python", "file", "dictionary" ]
Pygame: how to blit an image that follows another image
39,380,527
<p>I'm trying to reproduce the game "Snake" in pygame, using the pygame.blit function instead of pygame.draw. My question is how to make an image follow another image; I mean, make the snake's body image follow its head. In the current state of my program the head moves on its own.</p>
0
2016-09-07T23:40:12Z
39,380,622
<p>Keep the positions of the snake's segments in a list (the first segment is the head).</p> <p>Each frame, insert the head's new position at the front of the list and remove the last segment.</p> <p>Then use this list to blit one segment image per position.</p>
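A minimal sketch of that idea (plain (x, y) grid positions, pygame details left out): the new head goes to the front of the list and the tail cell is dropped, so every body segment lands exactly where its predecessor was.

```python
def step(segments, direction):
    """Advance the snake one cell; segments[0] is the head."""
    dx, dy = direction
    head_x, head_y = segments[0]
    # Insert the new head in front, drop the old tail.
    return [(head_x + dx, head_y + dy)] + segments[:-1]

snake = [(5, 5), (4, 5), (3, 5)]   # head first
snake = step(snake, (1, 0))        # move one cell to the right
print(snake)  # [(6, 5), (5, 5), (4, 5)]

# The draw loop would then blit one image per position, e.g.:
# for x, y in snake:
#     screen.blit(segment_image, (x * CELL_SIZE, y * CELL_SIZE))
```

Growing the snake (after eating) is the same update minus the tail drop: just prepend the new head and keep the whole list.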
0
2016-09-07T23:55:22Z
[ "python" ]
z3Py: Cast BoolRef to one-bit BitVecRef
39,380,560
<p>Is it possible to cast <code>BoolRef</code> to a one-bit-long <code>BitVecRef</code> in z3Py? In my design, it is required that a <code>BitVecRef</code> is returned from a comparison between two other <code>BitVecRef</code>'s. This would be similar to casting a python <code>bool</code> to an <code>int</code>. Here is an example of its use:</p> <pre><code>bv1, bv2, added = z3.BitVecs('bv1 bv2 added', 4) res = z3.BitVec('res', 1) s = z3.Solver() s.add(res == (bv1 &lt; bv2)) s.add(added == added + z3.ZeroExt(3, res)) </code></pre> <p>This would be ideal, but the type of <code>(bv1 &lt; bv2)</code> is <code>Boolref</code>, and it throws a "sort mismatch" error. Is there a way to cast the result of <code>(bv1 &lt; bv2)</code> so that <code>res</code> can asserted equal to it?</p>
0
2016-09-07T23:45:55Z
39,396,652
<p>Booleans can't be cast to bit-vectors automatically. The usual approach is to wrap them in if-then-elses; e.g., in this example, instead of</p> <pre><code>s.add(res == (bv1 &lt; bv2)) </code></pre> <p>we can say</p> <pre><code>c = If(bv1 &lt; bv2, BitVecVal(1, 1), BitVecVal(0, 1)) s.add(res == c) </code></pre>
0
2016-09-08T17:06:02Z
[ "python", "casting", "z3", "z3py", "bitvector" ]
Searching Dictionary in list
39,380,677
<p>I am working on my first project with API's and I am having trouble accessing the data. I am working off an example that calls its data with this loop:</p> <pre><code>for item in data['objects']: print item['name'], item['phone'] </code></pre> <p>This works great for data stored as nested dictionaries (the outside being called objects, and the inside containing the data)</p> <p>The issue I am having is my data is formatted with dictonaries inside of lists</p> <pre><code>[ { "key":"2014cama", "website":"http://www.cvrobotics.org/frc/regional.html", "official":true, "end_date":"2014-03-09", "name":"Central Valley Regional", "short_name":"Central Valley", "facebook_eid":null, "event_district_string":null, "venue_address":"Madera South High School\n705 W. Pecan Avenue\nMadera, CA 93637\nUSA", "event_district":0, "location":"Madera, CA, USA", "event_code":"cama", "year":2014, "webcast":[], "timezone":"America/Los_Angeles", "alliances":[], "event_type_string":"Regional", "start_date":"2014-03-07", "event_type":0 },'more data...'] </code></pre> <p>so calling,</p> <pre><code>for item in data['objects']: print item['name'] </code></pre> <p>Won't work to pull the value stored in <code>name</code>.</p> <p>Any help would be much appreciated.</p> <p>Edit: The full Dataset I'm pulling (<a href="http://www.thebluealliance.com/api/v2/team/frc254/2014/events?X-TBA-App-Id=Peter_Hartnett:Scouting:v1" rel="nofollow">http://www.thebluealliance.com/api/v2/team/frc254/2014/events?X-TBA-App-Id=Peter_Hartnett:Scouting:v1</a>) </p> <p>And the code I am running:</p> <pre><code>import json,urllib2, TBA team ='frc254' year = '2014' Url = 'http://www.thebluealliance.com/api/v2/team/'+team+'/'+year+'/events?X- TBA-App-Id=Peter_Hartnett:Scouting:v1' data = TBA.GetData(Url) for item in data: print item['name'] </code></pre> <p>The TBA Class just imports the data and returns it.</p> <p>Edit2: Here is the TBA class that pulls the data, I can assure you it is identical to that found at the link 
above</p> <pre><code>import urllib2,cookielib content='none' def GetData(Url): site= Url hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3', 'Accept-Encoding': 'none', 'Accept-Language': 'en-US,en;q=0.8', 'Connection': 'keep-alive'} req = urllib2.Request(site, headers=hdr) try: page = urllib2.urlopen(req) except urllib2.HTTPError, e: print e.fp.read() content = page.read() return content </code></pre>
1
2016-09-08T00:02:32Z
39,380,694
<p>If I understood it correctly, your <code>data['objects']</code> is now an entry in a list, right?</p> <p>So just iterate the list and your logic will remain the same</p> <pre><code>for item in objects: print item['name'], item['phone'] </code></pre> <p>being </p> <pre><code>objects = [ { "key":"2014cama", "website":"http://www.cvrobotics.org/frc/regional.html", "official":true, "end_date":"2014-03-09", "name":"Central Valley Regional", "short_name":"Central Valley", "facebook_eid":null, "event_district_string":null, "venue_address":"Madera South High School\n705 W. Pecan Avenue\nMadera, CA 93637\nUSA", "event_district":0, "location":"Madera, CA, USA", "event_code":"cama", "year":2014, "webcast":[], "timezone":"America/Los_Angeles", "alliances":[], "event_type_string":"Regional", "start_date":"2014-03-07", "event_type":0 },'more data...'] </code></pre> <h3>Edit</h3> <p>I get your problem now. Your object <code>data</code> is a <code>string</code> that represents a <code>JSONArray</code> . You should <strong><em>load</em></strong> that before iterating, in order to be able to work with that as a real <code>list</code>, like so:</p> <pre><code>data = GetData(Url) loaded_array = json.loads(data) for item in loaded_array: print item['name'] </code></pre>
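To make the difference concrete, here is the same pattern on a small stand-in for the API response (the event names other than the first are made up):

```python
import json

# urllib2 hands back the body as a string; json.loads turns it into a list.
raw = ('[{"name": "Central Valley Regional", "year": 2014},'
       ' {"name": "Silicon Valley Regional", "year": 2014}]')

events = json.loads(raw)
names = [event["name"] for event in events]
print(names)  # ['Central Valley Regional', 'Silicon Valley Regional']
```

Iterating over `raw` directly would yield single characters, which is why `item['name']` fails before the `json.loads` call.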
1
2016-09-08T00:06:06Z
[ "python", "list", "dictionary" ]
What is causing "TypeError: not all arguments converted during string formatting"
39,380,711
<p>This code produces the error:</p> <pre><code># -*- coding: utf-8 -*- amount = float(input("Enter the purchase price please.")) down_payment=amount *0.10 monthly_rate = (amount - down_payment) *.05 ending_balance=amount-down_payment print("|Ø-6s|Ø-16s|Ø-9s|Ø-8s|Ø-14s|" % ("Month" , "Current Balance" , "Interest" , "Payment" , "Ending Balance")) month = 1 while True: starting_balance = ending_balance interest = starting_balance * 0.01 final_amount = (starting_balance+interest) if monthly_rate &gt; final_amount: monthly_rate = final_amount ending_balance = final_amount - monthly_rate print("|Ø-6f|Ø-16f|Ø-9f|Ø-8f|Ø-14f|" % (month , starting_balance , interest , monthly_rate , ending_balance)) month+=1 if ending_balance &lt;= 0: break </code></pre> <p>Error:</p> <pre><code>&lt;module&gt; print("|Ø-6s|Ø-16s|Ø-9s|Ø-8s|Ø-14s|" % ("Month" , "Current Balance" , "Interest" , "Payment" , "Ending Balance")) TypeError: not all arguments converted during string formatting </code></pre>
0
2016-09-08T00:07:31Z
39,380,749
<p>You have to use <code>%</code> instead of <code>Ø</code> in your formatting strings.</p> <pre><code>print("|%-6s|%-16s|%-9s|%-8s|%-14s|" % ("Month" , "Current Balance" , "Interest" , "Payment" , "Ending Balance")) </code></pre> <p>and</p> <pre><code>print("|%-6f|%-16f|%-9f|%-8f|%-14f|" % (month , starting_balance , interest , monthly_rate , ending_balance)) </code></pre> <p>And see: <a href="https://pyformat.info/" rel="nofollow">https://pyformat.info/</a></p>
0
2016-09-08T00:13:19Z
[ "python" ]
Passing data to and from C++ code using a python driver
39,380,723
<p>I am trying to pass arrays to a c++ function from a python driver. One of the arrays I am passing to the c++ function is a results array. I know the data is passing to the code correctly, but it seems to be truncated on the way back. I'm losing my precision after the decimal. Any ideas where I went wrong? Below is a simplified version of the code, and the resultant output.</p> <p>python code:</p> <pre><code>import ctypes from ctypes import * def get_calculate_windows(): dll = ctypes.CDLL('./cppFunctions.so', mode=ctypes.RTLD_GLOBAL) func = dll.calculate_windows func.argtypes = [POINTER(c_float), POINTER(c_float), c_size_t] return func def calculate_windows(data, answer, size): data_p = data.ctypes.data_as(POINTER(c_float)) answer_p = answer.ctypes.data_as(POINTER(c_float)) __calculate_windows(data_p, answer_p, size) ###### MAIN FUNCTION ##### data = np.array(myData, dtype=np.float32) ans = np.array(myAnswer, dtype=np.float32) print ans[:10] calculate_windows(data, ans, myLength) print ans[:10] </code></pre> <p>c++ code:</p> <pre><code>extern "C" { void calculate_windows(float *h_data, float *result, size_t size ) { for (int i=0; i&lt;size; i++ ) { result[i]=i/10; } } } </code></pre> <p>output:</p> <pre><code>[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] </code></pre> <p>What the output SHOULD be:</p> <pre><code>[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9] </code></pre>
2
2016-09-08T00:08:56Z
39,381,090
<blockquote> <p>I know the data is passing to the code correctly, but seems to be truncating on the way back. I'm losing my precision after the decimal. Any ideas where I went wrong?</p> </blockquote> <p>You are using integer division instead of floating-point division.</p> <p>You can fix this by using:</p> <pre><code>result[i] = i / 10.; </code></pre> <p>Instead of:</p> <pre><code>result[i] = i / 10; </code></pre>
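Python draws the same line, just with different spelling (`//` is floor division; `/` with a float operand is true division), so the driver side can sanity-check what the fixed C++ loop should produce:

```python
truncated = [i // 10 for i in range(10)]    # what C++ int / int does
true_div = [i / 10.0 for i in range(10)]    # what i / 10. does

print(truncated)  # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(true_div)   # [0.0, 0.1, 0.2, ..., 0.9]
```

With `int i` and the literal `10`, every `i / 10` for `i < 10` truncates to zero, which is exactly the all-zeros buffer the question observed.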
0
2016-09-08T01:03:49Z
[ "python", "c++", "floating-point", "ctypes" ]
Python: how to correctly convert from a string date in a particular time zone?
39,380,753
<p>I have input from a human in <code>YYYY-MM-DD HH:MM:SS</code> format that I know is supposed to be Los Angeles local time. In Python, how do I convert this to a <code>datetime.datetime</code> object that is unambiguous and correct? I am aware that the input is ambiguous during the autumn transition out of DST; I'm fine with either choice happening for dates within that hour as long as it is deterministic.</p> <p>Here is my attempt, which I'm surprised to find doesn't work:</p> <pre><code>&gt;&gt;&gt; import pytz &gt;&gt;&gt; from dateutil import parser as date_parser &gt;&gt;&gt; PACIFIC = pytz.timezone('US/Pacific') &gt;&gt;&gt; result = date_parser.parse('2016-08-01 00:00:00 FOO', tzinfos={'FOO': PACIFIC}) &gt;&gt;&gt; result.utcoffset() datetime.timedelta(-1, 57600) &gt;&gt;&gt; str(result) '2016-08-01 00:00:00-08:00' </code></pre> <p>Despite having asked for <code>US/Pacific</code> and this being a summer date, I get UTC-8 instead of UTC-7.</p>
1
2016-09-08T00:13:37Z
39,385,127
<p>I found <a href="https://artofsoftware.org/2012/04/17/dealing-with-daylight-savings-time-using-pytz/" rel="nofollow">this</a>, which may help. Chopping it up a bit, I got the -0700 that you're looking for. I have to admit I don't understand this module, so I can't point to where you can fix your code quickly (and I don't use Python on the command line much), but the below works.</p> <pre><code>import datetime import pytz def report(time): mytz = pytz.timezone('US/Pacific') dtformat = "%Y-%m-%d %H:%M" t = datetime.datetime.strptime(time, dtformat).replace(tzinfo=pytz.utc) print ("*****") print (t) print (mytz.normalize(t)) report("2011-08-01 00:00") </code></pre> <p>So, hope this helps!</p>
0
2016-09-08T07:37:06Z
[ "python", "datetime", "python-datetime", "python-dateutil" ]
Python: how to correctly convert from a string date in a particular time zone?
39,380,753
<p>I have input from a human in <code>YYYY-MM-DD HH:MM:SS</code> format that I know is supposed to be Los Angeles local time. In Python, how do I convert this to a <code>datetime.datetime</code> object that is unambiguous and correct? I am aware that the input is ambiguous during the autumn transition out of DST; I'm fine with either choice happening for dates within that hour as long as it is deterministic.</p> <p>Here is my attempt, which I'm surprised to find doesn't work:</p> <pre><code>&gt;&gt;&gt; import pytz &gt;&gt;&gt; from dateutil import parser as date_parser &gt;&gt;&gt; PACIFIC = pytz.timezone('US/Pacific') &gt;&gt;&gt; result = date_parser.parse('2016-08-01 00:00:00 FOO', tzinfos={'FOO': PACIFIC}) &gt;&gt;&gt; result.utcoffset() datetime.timedelta(-1, 57600) &gt;&gt;&gt; str(result) '2016-08-01 00:00:00-08:00' </code></pre> <p>Despite having asked for <code>US/Pacific</code> and this being a summer date, I get UTC-8 instead of UTC-7.</p>
1
2016-09-08T00:13:37Z
39,757,224
<p>You can use <a href="https://pypi.python.org/pypi/dateparser" rel="nofollow">dateparser</a> for that.</p> <pre><code>import dateparser &gt;&gt;&gt; dateparser.parse('2016-08-01 00:00:00', settings={'TIMEZONE': 'US/Pacific', 'TO_TIMEZONE': 'UTC'}) datetime.datetime(2016, 8, 1, 7, 0) </code></pre>
0
2016-09-28T20:38:32Z
[ "python", "datetime", "python-datetime", "python-dateutil" ]
How to set miterlimit when saving as svg with matplotlib
39,380,761
<p>Is there any way to set the miterlimit when saving a matplotlib plot to svg? I'd like to programmatically get around this bug: <a href="https://bugs.launchpad.net/inkscape/+bug/1533058" rel="nofollow">https://bugs.launchpad.net/inkscape/+bug/1533058</a>. Currently I have to manually change this in the ".svg" text file.</p>
2
2016-09-08T00:14:44Z
39,539,384
<p>This parameter is defined as <code>'stroke-miterlimit': '100000'</code> and is hard-set in backend_svg.py. There is no such parameter in matplotlibrc, so customizing with a style sheet is unlikely to be possible.</p> <p>I used the following code to fix this issue:</p> <pre><code>import re def fixmiterlimit(svgdata, miterlimit = 10): # miterlimit variable sets the desired miterlimit mlfound = False svgout = "" for line in svgdata: if not mlfound: # searches the stroke-miterlimit within the current line and changes its value mlstring = re.subn(r'stroke-miterlimit:([0-9]+)', "stroke-miterlimit:" + str(miterlimit), line) if mlstring[1]: # use number of changes made to the line to check whether anything was found mlfound = True svgout += mlstring[0] + '\n' else: svgout += line + '\n' else: svgout += line + '\n' return svgout </code></pre> <p>And then call it like this (with the trick from this <a href="http://stackoverflow.com/questions/5453375/matplotlib-svg-as-string-and-not-a-file?rq=1">post</a>):</p> <pre><code>import StringIO ... imgdata = StringIO.StringIO() # initiate StringIO to write figure data to # the same you would use to save your figure to svg, but instead of filename use StringIO object plt.savefig(imgdata, format='svg', dpi=90, bbox_inches='tight') imgdata.seek(0) # rewind the data svg_dta = imgdata.buf # this is svg data svgoutdata = fixmiterlimit(re.split(r'\n', svg_dta)) # pass as an array of lines svgfh = open('figure1.svg', 'w') svgfh.write(svgoutdata) svgfh.close() </code></pre> <p>The code basically changes the stroke-miterlimit parameter in SVG output before writing it to file. Worked for me.</p>
1
2016-09-16T20:10:10Z
[ "python", "svg", "matplotlib" ]
How can I kill all threads?
39,380,811
<p>In this script:</p> <pre><code>import threading, socket class send(threading.Thread): def run(self): try: while True: try: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((url,port)) s.send(b"Hello world!") print ("Request Sent!") except: s.close() except KeyboardInterrupt: # here i'd like to kill all threads if possible for x in range(800): send().start() </code></pre> <p>Is it possible to kill all threads in the KeyboardInterrupt handler? I've searched on the net and I know this has already been asked, but I'm really new to Python and didn't quite follow the methods in those other questions on Stack Overflow.</p>
1
2016-09-08T00:20:31Z
39,380,886
<p>No. Individual threads can't be terminated forcibly (it's unsafe, since it could leave locks held, leading to deadlocks, among other things).</p> <p>Two ways to do something like this would be to either:</p> <ol> <li>Have all threads launched as <code>daemon</code> threads, with the main thread waiting on an <code>Event</code>/<code>Condition</code> and exiting as soon as one of the threads sets the <code>Event</code> or notifies the <code>Condition</code>. The process terminates as soon as the (sole) non-<code>daemon</code> thread exits, ending all the <code>daemon</code> threads</li> <li>Use a shared <code>Event</code> that all the threads poll intermittently, so they cooperatively exit shortly after it is set.</li> </ol>
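A sketch of the second approach (names are illustrative, not taken from the question's code): every worker polls a shared Event and exits shortly after the main thread sets it.

```python
import threading
import time

stop_event = threading.Event()

def worker():
    while not stop_event.is_set():
        # do one small unit of work, then re-check the flag
        time.sleep(0.01)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

try:
    time.sleep(0.05)      # stand-in for the program's main loop
except KeyboardInterrupt:
    pass
finally:
    stop_event.set()      # ask every worker to finish up
    for t in threads:
        t.join()

all_stopped = all(not t.is_alive() for t in threads)
print(all_stopped)  # True
```

Because each worker checks the flag between units of work, shutdown is cooperative: no locks are abandoned and every thread gets to clean up (e.g. close its socket) before exiting.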
0
2016-09-08T00:31:17Z
[ "python", "multithreading", "python-3.x", "kill", "terminate" ]
Find index of string_to_hash_bucket in linear weight index Tensorflow
39,380,823
<p>I am working with tensor flow on a binary classification model using a the LinearClassifier class. One of the features I'm basing the classification on is a column called hat:</p> <pre><code>hat = tf.contrib.layers.sparse_column_with_hash_bucket("hat", hash_bucket_size=1000) </code></pre> <p>After I have initialized the model and done a fit with in a tf.Session():</p> <pre><code>with tf.Session() as sess: m = tf.contrib.learn.LinearClassifier(feature_columns=hat...) m.fit(...) </code></pre> <p>I would like to inspect the weights for each of the categories of hat after I have trained the model. </p> <p>The hat tags are just given by different strings. After training the model I would like to find the weight associated with each hat label. However to compare the weights with the particular hat I need to know which hash bucket the hat label has been thrown into. One of my hat labels is "tb". I can find what this is indexed to using the function:</p> <pre><code>tf.string_to_hash_bucket(tf.cast("tb",tf.string), 1000) </code></pre> <p>I can then loop over the weights returned here:</p> <pre><code>for i,n in enumerate(m.linear_weights_["linear/hat_weights"]): print i, n </code></pre> <p>which gives me:</p> <pre><code>linear/hat_weights 0 [-0.147] ... </code></pre> <p>My problem is that none of the indices with significant (abs(x)>0.0005) weights correspond with any of the hash bucket ids I get from string_to_hash_bucket on all the hat labels in the dataset. </p> <p>So finally my question:</p> <p>Am I right in thinking that string_to_hash_bucket id should correspond to the index of the corresponding array return by m.linear_weights_["linear/hat_weights"]?</p> <p>If not how can I obtain the correct id? Is there an easier way to inspect the weights of the feature column tensors,both sparse and real valued (which aren't even contained in .linear_weights_), in the linear model? </p> <p>many thanks!</p>
0
2016-09-08T00:22:02Z
39,396,534
<p>Another way to inspect the weights which is black-box (and hence guaranteed to work) is to evaluate your entire model on an example which just has a single hat feature turned on. Then you can see the weight assigned by the model to each feature.</p>
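For a purely linear scorer this idea recovers each weight exactly: score a one-hot input per feature and subtract the all-zeros score (the bias). A toy illustration with made-up hat labels and weights (plain Python, not the TensorFlow API):

```python
# Hypothetical linear model: score(x) = sum(w[f] * x[f]) + bias
w = {"tb": 0.25, "cap": -0.5, "beanie": 0.0}
bias = 0.125

def score(x):
    return sum(w[f] * x[f] for f in w) + bias

baseline = score({f: 0.0 for f in w})    # all features off -> just the bias
recovered = {
    f: score({g: 1.0 if g == f else 0.0 for g in w}) - baseline
    for f in w
}
print(recovered)  # {'tb': 0.25, 'cap': -0.5, 'beanie': 0.0}
```

The same probing works on a black-box `LinearClassifier` by feeding it examples containing a single hat label, which sidesteps the hash-bucket index bookkeeping entirely.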
0
2016-09-08T16:58:08Z
[ "python", "tensorflow" ]
How to update a HyperlinkedRelatedField in Django Rest Framework
39,380,874
<p>I'm trying to update a HyperlinkedRelatedField connected to a ManyToManyField in Django through Django Rest Framework and coming up with a successful PUT (status 200) that ignores my HyperlinkedRelatedField data. I'm obviously missing something, but what?</p> <pre><code>#models.py class Product(models.Model): name = models.CharField(_('Name'),) number = models.IntegerField(_('Number'),) slug = models.SlugField(_('Slug'),) class UserPrefs(models.Model): other_prefs = models.CharField(_('Other Prefs')) favorite_products = models.ManyToManyField( Product, blank = True) #views.py class UserPrefsViewSet(viewsets.ModelViewSet): permission_classes = ( permissions.IsAuthenticated, ) serializer_class = UserPrefsSerializer def get_queryset(self): user = self.request.user return UserPrefs.objects.filter(user=user) #serializers.py class ProductSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = Product fields = ( 'url', 'name', 'number', 'slug', ) extra_kwargs = { 'url': {'lookup_field': 'slug'} } class UserPrefsSerializer(serializers.HyperlinkedModelSerializer): favorite_products = serializers.HyperlinkedRelatedField( queryset = Product.objects.all(), many = True, read_only = False, view_name = 'product-detail', lookup_field = 'slug',) class Meta: model = UserPrefs fields = ( 'url', 'other_prefs', 'favorite_products', ) </code></pre> <p>For example, using httpie from the command line, my PUT:</p> <pre><code>http -a my_auth_details PUT http://localhost/api_v2/userprefs/1/ {favorite_products: ['http://localhost/api_v2/product/jacket/', 'http://localhost/api_v2/product/shirt/']} </code></pre> <p>...and the result:</p> <pre><code>HTTP/1.0 200 OK { "favorite_products": [], "url": "http://localhost:8000/api_v2/userprefs/1/" } </code></pre>
0
2016-09-08T00:28:58Z
39,381,917
<p>It seems like you didn't override the <strong>create()</strong> method of <strong>UserPrefsSerializer</strong>.</p> <pre><code>class UserPrefsSerializer(serializers.HyperlinkedModelSerializer): favorite_products = serializers.HyperlinkedRelatedField( queryset = Product.objects.all(), many = True, read_only = False, view_name = 'product-detail', lookup_field = 'slug',) class Meta: model = UserPrefs fields = ( 'url', 'other_prefs', 'favorite_products', ) def create(self, validated_data): favorite_products_data = validated_data.pop('favorite_products') user_prefs = UserPrefs.objects.create(**validated_data) for favorite_product_data in favorite_products_data: favorite_product = Product.objects.create(**favorite_product_data) user_prefs.favorite_products.add(favorite_product) return user_prefs def update(self, instance, validated_data): favorite_products_data = validated_data.pop('favorite_products') instance.other_prefs = validated_data.get('other_prefs', instance.other_prefs) instance.save() for favorite_product_data in favorite_products_data: favorite_product = Product(**favorite_product_data) favorite_product.save() instance.favorite_products.add(favorite_product) return instance </code></pre> <p>Please check the <a href="http://www.django-rest-framework.org/api-guide/relations/#writable-nested-serializers" rel="nofollow">Writable nested serializers</a> </p>
1
2016-09-08T02:58:20Z
[ "python", "json", "django", "django-rest-framework" ]
How to color leaves on `ete3` Tree? (Python 3)
39,380,907
<p>I just started using <code>ete3</code> and it is awesome. </p> <p><strong>How can I color the leaves of an <code>ete3</code> Tree object using a color dictionary?</strong> I made <code>"c":None</code> because I don't want the color of <code>c</code> to show up. </p> <p>I want to have better control of the tree render but I can't figure out exactly how to do it. </p> <p>I saw that there are <a href="http://etetoolkit.org/docs/latest/reference/reference_treeview.html#nodestyle" rel="nofollow"><code>NodeStyle</code> objects</a> but I think this is for the actual nodes. It looks like this <a href="http://etetoolkit.org/docs/latest/reference/reference_treeview.html#faces" rel="nofollow"><code>TextFace</code> object</a> is what I need but I don't know how to use it. <a href="http://etetoolkit.org/docs/latest/tutorial/tutorial_drawing.html" rel="nofollow">All the examples</a> are adding labels. </p> <pre><code># Build Tree tree = ete3.Tree( "((a,b),c);" ) # Leaf mapping D_leaf_color = {"a":"r", "b":"g","c":None} # Set up style for circular tree ts = ete3.TreeStyle() ts.mode = "c" # Draw Tree tree.render("tree_test.png", dpi=300, w=500, tree_style=ts) </code></pre> <p><a href="http://i.stack.imgur.com/NLgyb.png" rel="nofollow"><img src="http://i.stack.imgur.com/NLgyb.png" alt="enter image description here"></a></p> <p>I looked at this question but it was pretty confusing: <a href="http://stackoverflow.com/questions/32389343/how-to-color-tree-nodes-with-fixed-set-of-colors">How to color tree nodes with fixed set of colors?</a></p>
2
2016-09-08T00:34:50Z
39,409,340
<p>I would do it like this:</p> <pre><code>from ete3 import Tree, TextFace, TreeStyle # Build Tree tree = Tree( "((a,b),c);" ) # Leaf mapping D_leaf_color = {"a":"red", "b":"green"} for node in tree.traverse(): # Hide node circles node.img_style['size'] = 0 if node.is_leaf(): color = D_leaf_color.get(node.name, None) if color: name_face = TextFace(node.name, fgcolor=color, fsize=10) node.add_face(name_face, column=0, position='branch-right') # Set up style for circular tree ts = TreeStyle() ts.mode = "c" ts.scale = 10 # Disable the default tip names config ts.show_leaf_name = False ts.show_scale = False # Draw Tree tree.render("tree_test.png", dpi=300, w=500, tree_style=ts) </code></pre> <p><a href="http://i.stack.imgur.com/HKYAQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/HKYAQ.png" alt="enter image description here"></a></p>
2
2016-09-09T10:21:56Z
[ "python", "colors", "tree", "etetoolkit", "ete3" ]
Compiler not understanding my optional argument in Python
39,381,004
<p>Hello I'm having issues with an exercise that is asking me to write code that contains a function, 3 dictionaries, and an optional argument.</p> <p><strong>Here is my code:</strong></p> <pre><code>def make_album(artist_name, album_title, num_tracks): """Return artist and album title name.""" CD1 = {'sonic': artist_name, 'his world': album_title} CD2 = {'shadow': artist_name, 'all hail shadow': album_title} CD3 = {'silver': artist_name, 'dream of an absolution': album_title} if num_tracks: CD = artist_name + ' ' + album_title + ' ' + num_tracks else: CD = artist_name + ' ' + album_title return CD.title() disc = make_album('sonic', 'his world', '15') print(disc) disc = make_album('shadow', 'all hail shadow') print(disc) disc = make_album('silver', 'dream of an absolution') print(disc) </code></pre> <p>Whenever I try to run my code however, my compiler states that it is missing 1 required positional argument: 'num_tracks' for my second print statement.</p> <p>But this should not be an issue because of my if-else statement, unless I wrote my code incorrectly and the compiler isn't reading my if-else statement? Any feedback would be greatly appreciated, thank you for your time.</p>
1
2016-09-08T00:51:12Z
39,381,021
<p>You need to <code>def</code> the function with a <a href="https://docs.python.org/3/tutorial/controlflow.html#default-argument-values" rel="nofollow">default for the argument to be optional</a>, e.g.:</p> <pre><code># num_tracks=None means if not provided, num_tracks is set to None def make_album(artist_name, album_title, num_tracks=None): ... if num_tracks is not None: CD = artist_name + ' ' + album_title + ' ' + num_tracks else: CD = artist_name + ' ' + album_title </code></pre> <p>Arguments without a default are always required.</p>
2
2016-09-08T00:53:24Z
[ "python" ]
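Assembled into a runnable script, the fix from the answer above — giving `num_tracks` a default of `None` makes the third argument genuinely optional:

```python
def make_album(artist_name, album_title, num_tracks=None):
    """Return artist and album title, optionally with a track count."""
    if num_tracks is not None:
        cd = artist_name + ' ' + album_title + ' ' + num_tracks
    else:
        cd = artist_name + ' ' + album_title
    return cd.title()

print(make_album('sonic', 'his world', '15'))   # Sonic His World 15
print(make_album('shadow', 'all hail shadow'))  # Shadow All Hail Shadow
```

The second call no longer raises "missing 1 required positional argument", because the `if` statement only runs after the call succeeds; the default value is what makes the argument optional.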
Simple PyQt5 QML application causes segmentation fault
39,381,009
<p>In trying to get a <a href="https://www.boxcontrol.net/beginners-pyqt5-and-qml-integration-tutorial.html" rel="nofollow">very basic PyQt5 QML example</a> to run, I found that I get a segmentation fault. I verified that it only seems deal with displaying QML since <a href="http://pyqt.sourceforge.net/Docs/PyQt5/qml.html" rel="nofollow">an example without a window</a> runs fine. I tried the following minimial test:</p> <pre><code>#!/usr/bin/python3 import sys from PyQt5.QtCore import QUrl from PyQt5.QtWidgets import QGuiApplication from PyQt5.QtQml import QQmlApplicationEngine # Main Function if __name__ == '__main__': app = QGuiApplication(sys.argv) engine = QQmlApplicationEngine("simple.qml") engine.quit.connect(app.quit) sys.exit(app.exec_()) </code></pre> <p>simple.qml:</p> <pre><code>import QtQuick 2.5 import QtQuick.Controls 1.4 ApplicationWindow { width: 300 height: 200 title: "Simple" visible: true } </code></pre> <p>When I run this application, a window appears for a split second before closing like in the more detailed example, and I receive <code>Segmentation fault</code> in the console (and nothing more).</p> <p>Running from GDB shows that the <code>QSGRenderThread</code> is receiving the SIGSEGV:</p> <pre><code>(gdb) run snowman_qt.py Starting program: /usr/bin/python3 snowman_qt.py [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". [New Thread 0x7fffe912b700 (LWP 17200)] [New Thread 0x7fffe3dbb700 (LWP 17201)] [New Thread 0x7fffe1442700 (LWP 17202)] [New Thread 0x7fffdbfff700 (LWP 17203)] Thread 5 "QSGRenderThread" received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7fffdbfff700 (LWP 17203)] __strstr_sse2 (haystack_start=0x0, needle_start=0x7fffe28c9dd0 "nouveau") at ../string/strstr.c:63 63 ../string/strstr.c: No such file or directory. 
</code></pre> <p>The backtrace follows:</p> <pre><code>#0 __strstr_sse2 (haystack_start=0x0, needle_start=0x7fffe28c9dd0 "nouveau") at ../string/strstr.c:63 #1 0x00007fffe27233ea in QSGRenderContext::initialize(QOpenGLContext*) () from /usr/local/lib/python3.5/dist-packages/PyQt5/Qt/qml/QtQuick.2/../../lib/libQt5Quick.so.5 #2 0x00007fffe273e979 in ?? () from /usr/local/lib/python3.5/dist-packages/PyQt5/Qt/qml/QtQuick.2/../../lib/libQt5Quick.so.5 #3 0x00007ffff56835f9 in ?? () from /usr/local/lib/python3.5/dist-packages/PyQt5/Qt/lib/libQt5Core.so.5 #4 0x00007ffff7bc16fa in start_thread (arg=0x7fffdbfff700) at pthread_create.c:333 #5 0x00007ffff78f7b5d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109 </code></pre> <p>If I run the QML file <a href="https://github.com/thp/pyotherside/issues/63" rel="nofollow">from a C++ application</a>, there is no segmentation fault and the application works. Note that I'm using PyQT without PyOtherside, but the symptoms seem similar.</p> <p>Is there a way to get more information to continue debugging?</p> <p>I am running Python 3.5.2 on Linux Mint 18. My QT version is 5.7.0, my PyQt version is 5.7, and my SIP version is 4.18.1.</p>
0
2016-09-08T00:51:33Z
40,119,580
<p>The rabbit hole went deeper than I thought. There were a few problems.</p> <ol> <li>The version of SIP in the Linux Mint 18 (likely Ubuntu 16.04) repository was too old. I updated to 4.18.1 by <a href="http://pyqt.sourceforge.net/Docs/sip4/installation.html" rel="nofollow">installing from source</a>.</li> <li>With an updated SIP version, I was able to <a href="http://pyqt.sourceforge.net/Docs/PyQt5/installation.html" rel="nofollow">compile the latest PyQt5 from source</a>. Before compiling, however, I had to install libqt5opengl5-dev so that it would compile with the QtOpenGL module. After compiling, I installed using <code>checkinstall</code> to make rolling back easier.</li> </ol> <p>With the latest SIP and PyQt5, I no longer received the segmentation fault. However, I had issues with the OpenGL shaders not being created properly and just received a plain white window.</p> <p>The workaround for <a href="https://bugs.launchpad.net/ubuntu/+source/python-qt4/+bug/941826" rel="nofollow">this bug</a> is to include the following before loading PyQt in the Python program:</p> <pre><code>import ctypes from ctypes import util ctypes.CDLL(util.find_library('GL'), ctypes.RTLD_GLOBAL) </code></pre> <p>I believe this is unique to Ubuntu-based OSs with NVIDIA graphics cards and binary drivers.</p>
0
2016-10-18T23:18:57Z
[ "python", "qt", "qml", "pyqt5" ]
I am trying to sum a list by values in Python and I am not having any luck
39,381,017
<p>Good day,</p> <p>I am trying to sum a list by values in Python and I am not having any luck. What I would like to end up with is three summed values.</p> <p>lst = [100, -1, -2, -3, 100, -1, -2, -3, 100]</p> <p>100 -1-2-3 = 94</p> <p>100 -1-2-3 = 94</p> <p>100 = 100</p> <p>Any help would be greatly appreciated.</p> <p>Thanks</p>
2
2016-09-08T00:52:37Z
39,381,261
<p>How about this?</p> <pre><code>s = [] for i in lst: if i &gt; 0 or len(s) == 0: s.append(i) # start a new element in s when element is larger than zero else: s[-1] += i # otherwise add it to the last element of s s # [94, 94, 100] </code></pre> <p>Or if you use pandas:</p> <pre><code>import pandas as pd ser = pd.Series(lst) ser.groupby((ser &gt; 0).cumsum()).sum() #1 94 #2 94 #3 100 #dtype: int64 </code></pre>
2
2016-09-08T01:27:53Z
[ "python", "list", "sum", "group" ]
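The pure-Python approach from the answer above, as a self-contained script:

```python
lst = [100, -1, -2, -3, 100, -1, -2, -3, 100]

s = []
for i in lst:
    if i > 0 or len(s) == 0:
        s.append(i)   # a positive value starts a new group
    else:
        s[-1] += i    # a negative value folds into the current group

print(s)  # [94, 94, 100]
```

Each positive number opens a new running total and the negatives that follow it are subtracted from that total, producing the three summed values from the question.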
Selenium Python Chrome open with def options
39,381,088
<p>I want to open Google Chrome the way it opens by itself — chromedriver opens it without my cookies, my passwords, my history and all that stuff. I tried to play with the options, and searched all over the web for a solution, but didn't find one. This is what I tried:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.chrome.options import Options opt = webdriver.ChromeOptions() opt.add_arguments("--user-data-dir=C:\Users\Bar\AppData\Local\Google\Chrome\User Data") driver = webdriver.Chrome(opt) driver.get("https://www.google.com/") </code></pre> <p>but it didn't work; it says:</p> <pre><code>C:\Users\Bar\AppData\Local\Programs\Python\Python35-32\python.exe C:/Users/Bar/PycharmProjects/yad2/Webdriver.py File "C:/Users/Bar/PycharmProjects/yad2/Webdriver.py", line 7 opt.add_arguments("--user-data-dir=C:\Users\Bar\AppData\Local\Google\Chrome\User Data") ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 18-19: truncated \UXXXXXXXX escape Process finished with exit code 1 </code></pre>
1
2016-09-08T01:03:41Z
39,381,844
<blockquote> <p>AttributeError: 'Options' object has no attribute 'add_arguments'</p> </blockquote> <p>It should be <code>add_argument</code> instead of <code>add_arguments</code>. Note also the raw string (<code>r"..."</code>) for the Windows path, which avoids the <code>\U</code> unicode-escape SyntaxError. You should try as :-</p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.options import Options opt = webdriver.ChromeOptions() opt.add_argument(r"user-data-dir=C:\Users\Bar\AppData\Local\Google\Chrome\User Data") </code></pre> <blockquote> <p>AttributeError: 'Service' object has no attribute 'process'</p> </blockquote> <p>Now you need to set this <code>opt</code> into <code>chrome_options</code> and pass it into <code>ChromeDriver</code> as :-</p> <pre><code>driver = webdriver.Chrome(chrome_options=opt) driver.get("https://www.google.com/") </code></pre> <p><strong>Edited</strong> :- You need to <a href="http://chromedriver.storage.googleapis.com/index.html?path=2.23/" rel="nofollow">download the latest <code>chromedriver.exe</code></a> executable from here, extract the zip at any location on your system, and provide this path location with the executable <code>chromedriver.exe</code> as <code>executable_path="path/to/chromedriver.exe"</code>, then initialize <code>ChromeDriver</code> as :-</p> <pre><code>driver = webdriver.Chrome(executable_path="path/to/chromedriver.exe", chrome_options=opt) driver.get("https://www.google.com/") </code></pre>
1
2016-09-08T02:49:00Z
[ "python", "selenium", "selenium-webdriver" ]
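The traceback in the question (`'unicodeescape' codec can't decode bytes ... truncated \UXXXXXXXX escape`) is not a Selenium problem: in a normal string literal, `\U` in `C:\Users` is parsed as the start of a unicode escape. A raw string (or doubled backslashes) keeps the path literal — a quick illustration with no browser involved:

```python
# In a normal string literal "\U..." starts a unicode escape and raises
# a SyntaxError at parse time; a raw string keeps backslashes literal.
path = r"C:\Users\Bar\AppData\Local\Google\Chrome\User Data"
same = "C:\\Users\\Bar\\AppData\\Local\\Google\\Chrome\\User Data"

print(path)
print(path == same)  # True
```

Either spelling works when passed to `add_argument`; forward slashes are a third option on Windows for most APIs.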
Azure Webapp wheels --find-links does not work
39,381,114
<p>I have been struggling with --find-links for an entire day, and I will be very grateful if somebody could help me out here.</p> <p>I have been developing using Python 3.4 and one of the new features I added uses Azure Storage (the most recent version), which requires cryptography, which requires cffi, idna, etc... However, when I try to test it against Azure Webapp, the deployment fails, saying 'error : unable to find vcvarsall.bat'</p> <p>With some research, I figured putting --find-links wheelhouse at the top of my requirements.txt and having wheels (cffi-1.8.2-cp34-cp34m-win32.whl (md5) and cryptography-1.5-cp34-cp34m-win32.whl (md5)) located in a wheelhouse folder in the root should work. This was not helping at all, and I was running into the same problems.</p> <p>I tried --no-index and it gives "Could not find any downloads that satisfy the requirement cffi==1.8.2". Somebody says if I want to use --no-index, then I should have all wheels located in wheelhouse; otherwise, I will get that error.</p> <p>With this, I would like to use my wheels for cffi and cryptography and download the rest from PyPI. Anyone have any clue...? HELP!</p>
1
2016-09-08T01:07:50Z
39,456,034
<p>You are not the only one in that situation: <a href="https://github.com/Azure/azure-storage-python/issues/219" rel="nofollow">https://github.com/Azure/azure-storage-python/issues/219</a></p> <p>It seems for an unknown reason that the version of pip on the WebApp machine does not detect the platform tag as "win32" (it's why it does not find your wheel).</p> <p>Several solutions:</p> <ul> <li><p>Move to Py3.5: <a href="https://blogs.msdn.microsoft.com/pythonengineering/2016/08/04/upgrading-python-on-azure-app-service/" rel="nofollow">https://blogs.msdn.microsoft.com/pythonengineering/2016/08/04/upgrading-python-on-azure-app-service/</a></p></li> <li><p>Use a deploy script to easy_install your wheel: <a href="https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-configure/#troubleshooting---package-installation" rel="nofollow">https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-configure/#troubleshooting---package-installation</a></p></li> <li><p>Force the version of storage to 0.32.0 in your requirements.txt file (does not require cryptography) if you don't need the latest features. Read the release note of storage 0.33.0 to figure out if you need it: <a href="https://github.com/Azure/azure-storage-python/releases/tag/v0.33.0" rel="nofollow">https://github.com/Azure/azure-storage-python/releases/tag/v0.33.0</a></p></li> </ul>
1
2016-09-12T17:54:14Z
[ "python", "azure", "pypi", "azure-web-app-service", "python-wheel" ]
How do i programmatically logout user?[Django]
39,381,137
<p>I know how to log out a user in Django. If I want to log out a user, I would do</p> <pre><code>from django.contrib.auth import logout def logout_view(request): logout(request) </code></pre> <p>But what is the relevant way of logging out the user if I am using django-oauth-toolkit (DOT)?</p> <p>Should I follow the same approach or delete the token? Some say delete the token and some say the token should be left to expire. Please provide me the best possible resolution for logging out in DRF using DOT.</p>
1
2016-09-08T01:11:18Z
39,381,304
<p>You can check <a href="https://django-oauth-toolkit.readthedocs.io/en/latest/tutorial/tutorial_04.html" rel="nofollow">Revoking an OAuth2 Token</a></p> <blockquote> <p>You’ve granted a user an Access Token, following part 1 and now you would like to revoke that token, probably in response to a client request (to logout).</p> </blockquote> <p>And <a href="https://stackoverflow.com/questions/24666124/do-you-logout-a-user-who-login-via-oauth2-by-expiring-their-access-token">Do you logout a user who login via OAuth2 by expiring their Access Token?</a></p> <h3>EDIT</h3> <pre><code># OAuth2 provider endpoints oauth2_endpoint_views = [ url(r'^authorize/$', oauth2_views.AuthorizationView.as_view(), name="authorize"), url(r'^token/$', oauth2_views.TokenView.as_view(), name="token"), url(r'^revoke-token/$', oauth2_views.RevokeTokenView.as_view(), name="revoke-token"), ] </code></pre> <p>If you follow tutorial part 2 you will find you already have the revoke-token url, so you just need to send a request to that url.</p> <h3>EDIT2</h3> <p>Let me try to explain this clearly.</p> <p>When you use Django OAuth Toolkit and DRF, you usually will use</p> <pre><code>REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'oauth2_provider.ext.rest_framework.OAuth2Authentication', ) } </code></pre> <p>And you can get an access token by</p> <pre><code>curl -X POST -d "grant_type=password&amp;username=&lt;user_name&gt;&amp;password=&lt;password&gt;" -u"&lt;client_id&gt;:&lt;client_secret&gt;" http://localhost:8000/o/token/ </code></pre> <p>And get a response like this</p> <pre><code>{ "access_token": "&lt;your_access_token&gt;", "token_type": "Bearer", "expires_in": 36000, "refresh_token": "&lt;your_refresh_token&gt;", "scope": "read write groups" } </code></pre> <p>Now you can use your access_token to request the api you set like this</p> <pre><code>curl -H "Authorization: Bearer &lt;your_access_token&gt;" http://localhost:8000/users/1/ </code></pre> <p><strong>How to logout depends on how you define login</strong></p> <p>Websites define login from the session in cookies. When you are developing a mobile app, you will define login depending on state in your app (<a href="https://stackoverflow.com/questions/10295504/ios-how-to-authenticate-a-user-after-login-for-auto-login">user credentials present in keychain or not</a> when it comes to iOS), and that is what your code does:</p> <pre><code>from django.contrib.auth import logout def logout_view(request): logout(request) </code></pre> <p>You can see the source code here: <a href="https://github.com/django/django/blob/7549eb000430192833f05186056d1ae20b0d17ad/django/contrib/auth/__init__.py#L133" rel="nofollow">django-logout</a> and the docs <a href="https://docs.djangoproject.com/en/1.10/topics/http/sessions/#django.contrib.sessions.backends.base.SessionBase.flush" rel="nofollow">here</a></p> <blockquote> <p>flush()</p> <p>Deletes the current session data from the session and deletes the session cookie. This is used if you want to ensure that the previous session data can’t be accessed again from the user’s browser (for example, the django.contrib.auth.logout() function calls it).</p> </blockquote> <p>But remember, from <a href="https://stackoverflow.com/questions/22557044/delete-access-token-after-logout">Luke Taylor</a></p> <blockquote> <p>The lifetime of the access_token is independent of the login session of a user who grants access to a client. OAuth2 has no concept of a user login or logout, or a session, so the fact that you expect a logout to revoke a token, would seem to indicate that you're misunderstanding how OAuth2 works. You should probably clarify in your question why you want things to work this way and why you need OAuth.</p> </blockquote> <p>Finally, in your case, I think you need to revoke the token before logout:</p> <pre><code>def revoke_token(request): # just make a request here # POST /o/revoke_token/ HTTP/1.1 # Content-Type: application/x-www-form-urlencoded # token=XXXX&amp;client_id=XXXX&amp;client_secret=XXXX pass def logout_view(request): response = revoke_token(request) # if revocation succeeded, end the session logout(request) </code></pre>
2
2016-09-08T01:33:02Z
[ "python", "django", "python-3.x", "oauth", "django-rest-framework" ]
How to get rid of extra white space on subplots with shared axes?
39,381,162
<p>I'm creating a plot using python 3.5.1 and matplotlib 1.5.1 that has two subplots (side by side) with a shared Y axis. A sample output image is shown below:</p> <p><a href="http://i.stack.imgur.com/OK4ap.png" rel="nofollow"><img src="http://i.stack.imgur.com/OK4ap.png" alt="Sample Image"></a></p> <p>Notice the extra white space at the top and bottom of each set of axes. Try as I might I can't seem to get rid of it. The overall goal of the figure is to have a waterfall type plot on the left with a shared Y axes with the plot on the right.</p> <p>Here's some sample code to reproduce the image above.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import pandas as pd %matplotlib inline # create some X values periods = np.linspace(1/1440, 1, 1000) # create some Y values (will be datetimes, not necessarily evenly spaced # like they are in this example) day_ints = np.linspace(1, 100, 100) days = pd.to_timedelta(day_ints, 'D') + pd.to_datetime('2016-01-01') # create some fake data for the number of points points = np.random.random(len(day_ints)) # create some fake data for the color mesh Sxx = np.random.random((len(days), len(periods))) # Create the plots fig = plt.figure(figsize=(8, 6)) # create first plot ax1 = plt.subplot2grid((1,5), (0,0), colspan=4) im = ax1.pcolormesh(periods, days, Sxx, cmap='viridis', vmin=0, vmax=1) ax1.invert_yaxis() ax1.autoscale(enable=True, axis='Y', tight=True) # create second plot and use the same y axis as the first one ax2 = plt.subplot2grid((1,5), (0,4), sharey=ax1) ax2.scatter(points, days) ax2.autoscale(enable=True, axis='Y', tight=True) # Hide the Y axis scale on the second plot plt.setp(ax2.get_yticklabels(), visible=False) #ax1.set_adjustable('box-forced') #ax2.set_adjustable('box-forced') fig.colorbar(im, ax=ax1) </code></pre> <p>As you can see in the commented out code I've tried a number of approaches, as suggested by posts like <a href="https://github.com/matplotlib/matplotlib/issues/1789/" rel="nofollow">https://github.com/matplotlib/matplotlib/issues/1789/</a> and <a href="http://stackoverflow.com/questions/37558329/matplotlib-set-axis-tight-only-to-x-or-y-axis">Matplotlib: set axis tight only to x or y axis</a>.</p> <p>As soon as I remove the <code>sharey=ax1</code> part of the second subplot2grid call the problem goes away, but then I also don't have a common Y axis.</p>
1
2016-09-08T01:14:33Z
39,381,449
<p>Autoscale tends to add a buffer to the data so that all of the data points are easily visible and not part-way cut off by the axes. </p> <p>Change:</p> <pre><code>ax1.autoscale(enable=True, axis='Y', tight=True) </code></pre> <p>to:</p> <pre><code>ax1.set_ylim(days.min(),days.max()) </code></pre> <p>and</p> <pre><code>ax2.autoscale(enable=True, axis='Y', tight=True) </code></pre> <p>to:</p> <pre><code>ax2.set_ylim(days.min(),days.max()) </code></pre> <p>To get:</p> <p><a href="http://i.stack.imgur.com/P9i70.png" rel="nofollow"><img src="http://i.stack.imgur.com/P9i70.png" alt="enter image description here"></a></p>
1
2016-09-08T01:56:26Z
[ "python", "matplotlib" ]
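The accepted fix — replacing autoscale with explicit `set_ylim` on the shared axis — in a trimmed, headless sketch. Plain numeric y-values stand in for the timestamps, and the `Agg` backend is assumed so no display is needed (the y-axis inversion from the original is omitted to keep the limits easy to read):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, renders off-screen
import matplotlib.pyplot as plt
import numpy as np

days = np.linspace(1, 100, 100)

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8, 6))
# pcolormesh expects C with shape (len(y)-1, len(x)-1) for flat shading
ax1.pcolormesh(np.linspace(0, 1, 50), days, np.random.random((99, 49)))
ax2.scatter(np.random.random(100), days)

# Pin the shared y-axis to the data range instead of autoscaling;
# sharey propagates the limits to ax2 as well
ax1.set_ylim(days.min(), days.max())

fig.canvas.draw()
print(ax1.get_ylim(), ax2.get_ylim())
```

Setting the limits explicitly removes the margin that autoscale adds, and because the axes share y, one call fixes both subplots.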
How to print/show an expression in rational number form in python
39,381,222
<p>I've been developing a Tkinter app and at some label I need to put a formula that should look like a rational number (expression1/expression2, like a numerator and denominator with a bar between them). I did some digging and couldn't find anything related to it. Any suggestions on how this can be done?</p> <p>I even couldn't find anything on printing a fraction in a rational number format on the console. I only care about the looks and no calculation will be made with it; it's just a label.</p>
4
2016-09-08T01:22:22Z
39,381,318
<pre><code>&gt;&gt;&gt; from fractions import Fraction &gt;&gt;&gt; print '%s' % Fraction(1.5) 3/2 </code></pre> <p>See: <a href="https://pymotw.com/2/fractions/" rel="nofollow">https://pymotw.com/2/fractions/</a></p>
0
2016-09-08T01:35:30Z
[ "python", "math", "tkinter" ]
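The same idea in Python 3 syntax, self-contained — `fractions` is in the standard library and `str()` of a `Fraction` already gives the `numerator/denominator` form:

```python
from fractions import Fraction

print('%s' % Fraction(1.5))             # 3/2  (1.5 is exactly 3/2 in binary)
print(str(Fraction(355, 113)))          # 355/113
print(Fraction(3, 4) + Fraction(1, 4))  # 1
```

This gives the inline `a/b` spelling; for a stacked numerator-over-denominator look in a Tkinter label, see the other answers to this question.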
How to print/show an expression in rational number form in python
39,381,222
<p>I've been developing a Tkinter app and at some label I need to put a formula that should look like a rational number (expression1/expression2, like a numerator and denominator with a bar between them). I did some digging and couldn't find anything related to it. Any suggestions on how this can be done?</p> <p>I even couldn't find anything on printing a fraction in a rational number format on the console. I only care about the looks and no calculation will be made with it; it's just a label.</p>
4
2016-09-08T01:22:22Z
39,381,481
<pre><code>print('\n\033[4m'+'3' + '\033[0m'+'\n2') </code></pre> <p><code>\033[4m</code> enables underline</p> <p><code>\033[0m</code> resets it</p> <p>It will basically display something that looks like (with an underline under the 3):</p> <pre><code>3 2 </code></pre>
2
2016-09-08T01:59:35Z
[ "python", "math", "tkinter" ]
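The escape-code trick from the answer, wrapped in a small helper so any numerator/denominator pair can be rendered. As in the answer this is purely cosmetic, and terminals that ignore ANSI codes will just show the bare digits:

```python
def fraction_str(num, den):
    """Render num over den, using an ANSI underline as the fraction bar."""
    return '\n\033[4m%s\033[0m\n%s' % (num, den)

print(fraction_str(3, 2))
print(fraction_str('x+1', '2y'))  # works for expressions too
```

`\033[4m` turns underlining on and `\033[0m` resets it, so the underline under the numerator acts as the bar above the denominator.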
How to print/show an expression in rational number form in python
39,381,222
<p>I've been developing a Tkinter app and at some label I need to put a formula that should look like a rational number (expression1/expression2, like a numerator and denominator with a bar between them). I did some digging and couldn't find anything related to it. Any suggestions on how this can be done?</p> <p>I even couldn't find anything on printing a fraction in a rational number format on the console. I only care about the looks and no calculation will be made with it; it's just a label.</p>
4
2016-09-08T01:22:22Z
39,472,345
<p>A high-tech solution would be to use <a href="http://matplotlib.org/1.4.1/index.html" rel="nofollow">matplotlib</a> in Tkinter. It provides a scaled-down version of <code>Latex</code> which it calls <a href="http://matplotlib.org/1.4.1/users/mathtext.html#mathtext-tutorial" rel="nofollow">mathtext</a>. This route doesn't look easy but seems to be possible. See <a href="http://stackoverflow.com/q/22179244/4996248">this</a> question for details. If this way seems too involved, a similar but perhaps easier idea would be to use MathML (officially part of HTML5) for the markup. It might be easier to embed in Tkinter.</p> <p>A comparatively low-tech approach would be to use two labels (with invisible borders) for the numerator and denominator respectively, with a thin rectangle between them. It will take some tweaking, but you should be able to adjust fonts and alignment so that the result looks like a single expression.</p>
1
2016-09-13T14:19:10Z
[ "python", "math", "tkinter" ]
Compiling a dictionary by pulling data from other dictionaries
39,381,286
<p>I am doing a project in which I extract data from three different data sets and combine it to look at campaign contributions. To do this I turned the relevant data from two of the sets into dictionaries (canDict and otherDict) with ID numbers as keys and the information I need (party affiliation) as values. Then I wrote a program to pull party information based on the key (my third set included these ID numbers as well) and match them with the employer of the donating party, and the amount donated. That was a long winded explanation, but I thought it would help with understanding this chunk of code.</p> <p>My problem is that, for some reason, my third dictionary (employerDict) won't compile. By the end of this step I should have a dictionary containing employers as keys, and a list of tuples as values, but after running it, the dictionary remains blank. I've been over this line by line a dozen times and I'm pulling my hair out - I can't for the life of me think why it won't work, which is making it hard to search for answers. I've commented almost every line to try to make it easier to understand out of context. <strong><em>Can anyone spot my mistake?</em></strong></p> <p><em>Update:</em> I added a counter, n, to the outermost for loop to see if the program was iterating at all.</p> <p><em>Update 2:</em> I added another if statement in the creation of the variable <code>party</code>, in case the ID at data[0] did not exist in canDict or in otherDict. I also added some already suggested fixes from the comments.</p> <pre><code>n=0 with open(path3) as f: # path3 is a txt file for line in f: n+=1 if n % 10000 == 0: print(n) data = line.split("|") # Splitting each line into its entries (delimited by the symbol |) party = canDict.get(data[0]) # data[0] is an ID number. canDict and otherDict contain these IDs as keys with party affiliations as values if party is None: party = otherDict[data[0]] # If there is no matching ID number in canDict, search otherDict if party is None: party = 'Other' else: print('ERROR: party is None') x = (party, int(data[14])) # Creating a tuple of the party (found through the loop) and an integer amount from the file path3 employer = data[11] # Index 11 in path3 is the employer of the person if employer != '': value = employerDict.get(employer) # If the employer field is not blank, see if this employer is already a key in employerDict if value is None: employerDict[employer] = [x] # If the key does not exist, create it and add a list including the tuple x as its value else: employerDict[employer].append(x) # If it does exist, add the tuple x to the existing value else: print('ERROR: employer == ''') </code></pre>
0
2016-09-08T01:31:01Z
39,381,709
<p>Suppose we simplify and rearrange things a bit:</p> <pre><code>import sys from collections import defaultdict employerDict = defaultdict(list) ID, EMPLOYER, AMOUNT = 0, 11, 14 with open(path3) as f: # path3 is a *.txt file for n, line in enumerate(f): if n % 10000 == 0: print(n) data = line.rstrip().split('|') # Splitting each line into its entries employer = data[EMPLOYER] # the employer of the person if employer == '': # verify that you get 'employer' occasionally! print("ERROR: employer == ''", file=sys.stderr) continue id_string = data[ID] # is ID always a string or is it an int (e.g. in other dicts)? # If there is no matching ID number in canDict, search otherDict party = canDict.get(id_string, otherDict.get(id_string, 'Other')) # Create a tuple of the the party (found through the loop) and an integer amount from the file path3 x = (party, int(data[AMOUNT])) employerDict[employer].append(x) # Add the tuple x to the (automatically) existing list </code></pre> <p>Does this help?</p>
0
2016-09-08T02:29:46Z
[ "python", "dictionary" ]
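The grouping logic from the answer, run against a few in-memory records instead of the file so the result can be checked directly. The IDs, parties, employers and amounts are made up; fields 0, 11 and 14 match the positions used in the question:

```python
from collections import defaultdict

canDict = {'C1': 'PartyA'}
otherDict = {'C2': 'PartyB'}

records = [
    'C1|x|x|x|x|x|x|x|x|x|x|Acme Corp|x|x|500',
    'C2|x|x|x|x|x|x|x|x|x|x|Acme Corp|x|x|250',
    'C3|x|x|x|x|x|x|x|x|x|x|Beta LLC|x|x|100',  # ID in neither dict -> 'Other'
]

employerDict = defaultdict(list)
for line in records:
    data = line.split('|')
    # fall back from canDict to otherDict to 'Other'
    party = canDict.get(data[0], otherDict.get(data[0], 'Other'))
    employer = data[11]
    if employer:
        employerDict[employer].append((party, int(data[14])))

print(dict(employerDict))
```

With `defaultdict(list)` there is no need for the get/None/append dance from the question: appending to a missing key creates the list automatically.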
Compiling a dictionary by pulling data from other dictionaries
39,381,286
<p>I am doing a project in which I extract data from three different data sets and combine it to look at campaign contributions. To do this I turned the relevant data from two of the sets into dictionaries (canDict and otherDict) with ID numbers as keys and the information I need (party affiliation) as values. Then I wrote a program to pull party information based on the key (my third set included these ID numbers as well) and match them with the employer of the donating party, and the amount donated. That was a long winded explanation, but I thought it would help with understanding this chunk of code.</p> <p>My problem is that, for some reason, my third dictionary (employerDict) won't compile. By the end of this step I should have a dictionary containing employers as keys, and a list of tuples as values, but after running it, the dictionary remains blank. I've been over this line by line a dozen times and I'm pulling my hair out - I can't for the life of me think why it won't work, which is making it hard to search for answers. I've commented almost every line to try to make it easier to understand out of context. <strong><em>Can anyone spot my mistake?</em></strong></p> <p><em>Update:</em> I added a counter, n, to the outermost for loop to see if the program was iterating at all.</p> <p><em>Update 2:</em> I added another if statement in the creation of the variable <code>party</code>, in case the ID at data[0] did not exist in canDict or in otherDict. I also added some already suggested fixes from the comments.</p> <pre><code>n=0 with open(path3) as f: # path3 is a txt file for line in f: n+=1 if n % 10000 == 0: print(n) data = line.split("|") # Splitting each line into its entries (delimited by the symbol |) party = canDict.get(data[0]) # data[0] is an ID number. canDict and otherDict contain these IDs as keys with party affiliations as values if party is None: party = otherDict[data[0]] # If there is no matching ID number in canDict, search otherDict if party is None: party = 'Other' else: print('ERROR: party is None') x = (party, int(data[14])) # Creating a tuple of the party (found through the loop) and an integer amount from the file path3 employer = data[11] # Index 11 in path3 is the employer of the person if employer != '': value = employerDict.get(employer) # If the employer field is not blank, see if this employer is already a key in employerDict if value is None: employerDict[employer] = [x] # If the key does not exist, create it and add a list including the tuple x as its value else: employerDict[employer].append(x) # If it does exist, add the tuple x to the existing value else: print('ERROR: employer == ''') </code></pre>
0
2016-09-08T01:31:01Z
39,422,088
<p>Thanks for all the input everyone - however, it looks like its a problem with my data file, not a problem with the program. Dangit.</p>
0
2016-09-10T03:01:26Z
[ "python", "dictionary" ]
Change character based off of its position? Python 2.7
39,381,399
<p>I have a string of unknown length, containing the characters a-z A-Z 0-9. I need to change each character using its position from left to right, using a dictionary.</p> <p>Example:</p> <pre><code>string = "aaaaaaaa"

def shift_char(text):
    for i in range(len(text)):
        # Do Something for each character
    return output

print shift_char(string)
'adktivep'
</code></pre>
-2
2016-09-08T01:47:56Z
39,432,369
<p>Alright, hopefully this is understandable, otherwise feel free to ask questions.</p> <pre><code>import random
import string

# Letter pool: these are all the possible characters
# you can have in your string
letter_pool = list(string.ascii_letters + string.digits)

input_string = "HelloWorld"
string_length = len(input_string)

# Generate one random dictionary for each character in the string
dictionaries = []
for i in range(string_length):
    # Copy letter_pool to avoid overwriting it
    keys = list(letter_pool)
    values = list(letter_pool)
    # Randomise values (keep keys the same)
    random.shuffle(values)
    # This line converts the two lists into a dictionary
    scrambled_dict = dict(zip(keys, values))
    # Now each letter (key) maps to a random letter (value)
    dictionaries.append(scrambled_dict)

# Initiate a fresh string to start adding characters to
out_string = ""

# Loop through the input string, recording the current place (index) and character
for index, char in enumerate(input_string):
    # Get the dictionary for this place in the string
    dictionary = dictionaries[index]
    # Get the randomised character from the dictionary
    new_char = dictionary[char]
    # Place the randomised character at the end of the string
    out_string += new_char

# Print out the random string
print(out_string)
</code></pre> <p>Edit: If you want to only generate the random dictionaries once, and load them in every time, you can serialise the <code>dictionaries</code> array. My favourite is <a href="https://docs.python.org/3.5/library/json.html" rel="nofollow">json</a>.</p>
0
2016-09-11T02:43:29Z
[ "python", "python-2.7" ]
Change character based off of its position? Python 2.7
39,381,399
<p>I have a string of unknown length, containing the characters a-z A-Z 0-9. I need to change each character using its position from left to right, using a dictionary.</p> <p>Example:</p> <pre><code>string = "aaaaaaaa"

def shift_char(text):
    for i in range(len(text)):
        # Do Something for each character
    return output

print shift_char(string)
'adktivep'
</code></pre>
-2
2016-09-08T01:47:56Z
39,433,530
<p>With the help of <a href="http://stackoverflow.com/users/235698/mark-tolonen">Mark Tolonen</a> in <a href="http://stackoverflow.com/questions/39433191/convert-int-into-str-while-in-getitem-python-2-7">this post</a>, I was able to come up with a solution. The following example only uses 4 dictionaries, while I intend to do more:</p> <pre><code># Input String
string = "ttttttttttttttttttttttttttttttt"

# Defining dictionaries for positions 0-3
dicnums = [{"0":"n","1":"Q","2":"k","3":"W","4":"F","5":"g","6":"9","7":"e","8":"v","9":"3","a":"r","b":"T","c":"c","d":"o","e":"b","f":"y","g":"2","h":"A","i":"i","j":"p","k":"1","l":"P","m":"w","n":"x","o":"s","p":"Y","q":"h","r":"G","s":"7","t":"S","u":"6","v":"K","w":"Z","x":"M","y":"C","z":"J","A":"u","B":"f","C":"j","D":"E","E":"a","F":"H","G":"O","H":"N","I":"l","J":"U","K":"I","L":"V","M":"m","N":"5","O":"R","P":"4","Q":"z","R":"L","S":"0","T":"q","U":"D","V":"8","W":"B","X":"X","Y":"d","Z":"t"},
           {"0":"q","1":"6","2":"W","3":"4","4":"J","5":"u","6":"n","7":"T","8":"I","9":"O","a":"V","b":"3","c":"Z","d":"s","e":"R","f":"E","g":"G","h":"P","i":"5","j":"l","k":"e","l":"m","m":"F","n":"t","o":"8","p":"K","q":"L","r":"Y","s":"M","t":"D","u":"j","v":"z","w":"H","x":"g","y":"9","z":"f","A":"0","B":"p","C":"o","D":"d","E":"X","F":"S","G":"k","H":"1","I":"Q","J":"C","K":"U","L":"i","M":"r","N":"w","O":"y","P":"B","Q":"2","R":"x","S":"A","T":"c","U":"7","V":"h","W":"v","X":"N","Y":"b","Z":"a"},
           {"0":"x","1":"W","2":"q","3":"B","4":"j","5":"I","6":"E","7":"g","8":"U","9":"e","a":"8","b":"3","c":"5","d":"k","e":"9","f":"N","g":"7","h":"Q","i":"t","j":"r","k":"L","l":"Z","m":"b","n":"n","o":"Y","p":"H","q":"R","r":"6","s":"P","t":"1","u":"S","v":"M","w":"p","x":"l","y":"F","z":"2","A":"c","B":"T","C":"G","D":"h","E":"X","F":"v","G":"s","H":"O","I":"D","J":"4","K":"a","L":"A","M":"m","N":"d","O":"C","P":"f","Q":"V","R":"i","S":"o","T":"u","U":"w","V":"0","W":"K","X":"z","Y":"y","Z":"J"},
           {"0":"x","1":"W","2":"q","3":"B","4":"j","5":"I","6":"E","7":"g","8":"U","9":"e","a":"8","b":"3","c":"5","d":"k","e":"9","f":"N","g":"7","h":"Q","i":"t","j":"r","k":"L","l":"H","m":"b","n":"n","o":"Y","p":"Z","q":"R","r":"6","s":"P","t":"1","u":"S","v":"M","w":"p","x":"l","y":"F","z":"2","A":"c","B":"T","C":"G","D":"h","E":"X","F":"v","G":"s","H":"O","I":"D","J":"4","K":"a","L":"A","M":"m","N":"d","O":"C","P":"f","Q":"V","R":"i","S":"o","T":"u","U":"w","V":"0","W":"K","X":"z","Y":"y","Z":"J"}]

# Changing my string characters
string_fin = ''.join(dicnums[i%4][c] for i,c in enumerate(string))

# Print result
print string_fin
</code></pre> <p>The result is <code>"SD11SD11SD11SD11SD11SD11SD11SD1"</code> if I use all 't's. If I make the dict range broader, and am not using the same characters repeatedly (which I won't be), then the output would be much better. Once combined with the rest of my encryption concept, my program <em>might</em> actually work like I want.</p>
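For what it's worth, the hard-coded tables above could also be generated programmatically — a sketch, not the exact dictionaries from this answer; the seed value here is an arbitrary assumption, and seeding <code>random</code> keeps the mapping reproducible between runs, which matters if you ever need to reverse it:

```python
import random
import string

pool = string.ascii_lowercase + string.ascii_uppercase + string.digits
rng = random.Random(1234)          # fixed (made-up) seed -> same tables every run

dicnums = []
for _ in range(4):                 # one substitution table per position mod 4
    values = list(pool)
    rng.shuffle(values)
    dicnums.append(dict(zip(pool, values)))

encoded = ''.join(dicnums[i % 4][c] for i, c in enumerate('tttt'))
print(encoded)                     # 4 characters, one per table
```

Each table is a bijection on the pool, so a reverse table (<code>{v: k ...}</code>) decodes it.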
0
2016-09-11T06:42:02Z
[ "python", "python-2.7" ]
GraphQL + Django: resolve queries using raw PostgreSQL query
39,381,436
<p><strong>What is the best way to use GraphQL with Django when using an external database to fetch data from multiple tables (i.e., creating a Django Model to represent the data would not correspond to a single table in my database)?</strong></p> <p>My approach was to temporarily abandon using Django models since I don't think I fully understand them yet. (I'm completely new to Django as well as GraphQL.) I've set up a simple project with an app with a connected external Postgres DB. I followed all the setup from the <a href="http://graphene-python.org/docs/django/tutorial/" rel="nofollow">Graphene Django tutorial</a> and then hit a road block when I realized the model I created was an amalgam of several tables.</p> <p>I have a query that sends back the proper columns mapped to the fields in my model, but I don't know how to make this a dynamic connection such that when my API is hit, it queries my database and maps the rows to the model schema I've defined in Django.</p> <p>My approach since has been to avoid models and use the simpler method demonstrated in Steven Luscher's talk: <a href="https://youtu.be/UBGzsb2UkeY?t=4m30s" rel="nofollow">Zero to GraphQL in 30 Minutes</a>.</p> <blockquote> <p><strong>TLDR;</strong></p> <p>The goal is to be able to hit my GraphQL endpoint, use a cursor object from my django.db.connection to get a list of dictionaries that should resolve to a GraphQLList of OrderItemTypes (see below).</p> </blockquote> <p>The problem is I am getting nulls for every value when I hit the following endpoint with a query:</p> <pre><code>localhost:8000/api?query={orderItems{date,uuid,orderId}} </code></pre> <p>returns:</p> <pre><code>{ "data":{ "orderItems":[ {"date":null, "uuid":null, "orderId":null }, ... 
] } } </code></pre> <blockquote> <p>project/main/<strong>app/schema.py</strong></p> </blockquote> <pre class="lang-python prettyprint-override"><code>import graphene from django.db import connection class OrderItemType(graphene.ObjectType): date = graphene.core.types.custom_scalars.DateTime() order_id = graphene.ID() uuid = graphene.String() class QueryType(graphene.ObjectType): name = 'Query' order_items = graphene.List(OrderItemType) def resolve_order_items(root, args, info): data = get_order_items() # data prints out properly in my terminal print data # data does not resolve properly return data def get_db_dicts(sql, args=None): cursor = connection.cursor() cursor.execute(sql, args) columns = [col[0] for col in cursor.description] data = [ dict(zip(columns, row)) for row in cursor.fetchall() ] cursor.close() return data def get_order_items(): return get_db_dicts(""" SELECT j.created_dt AS date, j.order_id, j.uuid FROM job AS j LIMIT 3; """) </code></pre> <p>In my terminal, I print from QueryType's resolve method and I can see the data successfully comes back from my Postgres connection. However, the GraphQL gives me nulls so it has to be in the resolve method that some mapping is getting screwed up.</p> <pre><code>[ { 'uuid': u'7584aac3-ab39-4a56-9c78-e3bb1e02dfc1', 'order_id': 25624320, 'date': datetime.datetime(2016, 1, 30, 16, 39, 40, 573400, tzinfo=&lt;UTC&gt;) }, ... 
] </code></pre> <p>How do I properly map my data to the fields I've defined in my OrderItemType?</p> <p>Here are some more references:</p> <blockquote> <p>project/main/<strong>schema.py</strong></p> </blockquote> <pre class="lang-python prettyprint-override"><code>import graphene from project.app.schema import QueryType AppQuery class Query(AppQuery): pass schema = graphene.Schema( query=Query, name='Pathfinder Schema' ) </code></pre> <blockquote> <p><strong>file tree</strong></p> </blockquote> <pre><code>|-- project |-- manage.py |-- main |-- app |-- models.py |-- schema.py |-- schema.py |-- settings.py |-- urls.py </code></pre>
1
2016-09-08T01:53:49Z
39,396,675
<p>Here is a temporary workaround, although I'm hoping there is something cleaner to handle the snake_cased fieldnames.</p> <blockquote> <p>project/main/<strong>app/schema.py</strong></p> </blockquote> <pre class="lang-python prettyprint-override"><code>from graphene import (
    ObjectType, ID, String, Int, Float, List
)
from graphene.core.types.custom_scalars import DateTime
from django.db import connection

'''
Generic resolver to get the field_name from self's _root
'''
def rslv(self, args, info):
    return self.get(info.field_name)


class OrderItemType(ObjectType):
    date = DateTime(resolver=rslv)
    order_id = ID()
    uuid = String(resolver=rslv)
    place_id = ID()

    '''
    Special resolvers for camel_cased field_names
    '''
    def resolve_order_id(self, args, info):
        return self.get('order_id')

    def resolve_place_id(self, args, info):
        return self.get('place_id')


class QueryType(ObjectType):
    name = 'Query'

    order_items = List(OrderItemType)

    def resolve_order_items(root, args, info):
        return get_order_items()
</code></pre>
0
2016-09-08T17:08:11Z
[ "python", "django", "postgresql", "graphql", "graphene-python" ]
GraphQL + Django: resolve queries using raw PostgreSQL query
39,381,436
<p><strong>What is the best way to use GraphQL with Django when using an external database to fetch data from multiple tables (i.e., creating a Django Model to represent the data would not correspond to a single table in my database)?</strong></p> <p>My approach was to temporarily abandon using Django models since I don't think I fully understand them yet. (I'm completely new to Django as well as GraphQL.) I've set up a simple project with an app with a connected external Postgres DB. I followed all the setup from the <a href="http://graphene-python.org/docs/django/tutorial/" rel="nofollow">Graphene Django tutorial</a> and then hit a road block when I realized the model I created was an amalgam of several tables.</p> <p>I have a query that sends back the proper columns mapped to the fields in my model, but I don't know how to make this a dynamic connection such that when my API is hit, it queries my database and maps the rows to the model schema I've defined in Django.</p> <p>My approach since has been to avoid models and use the simpler method demonstrated in Steven Luscher's talk: <a href="https://youtu.be/UBGzsb2UkeY?t=4m30s" rel="nofollow">Zero to GraphQL in 30 Minutes</a>.</p> <blockquote> <p><strong>TLDR;</strong></p> <p>The goal is to be able to hit my GraphQL endpoint, use a cursor object from my django.db.connection to get a list of dictionaries that should resolve to a GraphQLList of OrderItemTypes (see below).</p> </blockquote> <p>The problem is I am getting nulls for every value when I hit the following endpoint with a query:</p> <pre><code>localhost:8000/api?query={orderItems{date,uuid,orderId}} </code></pre> <p>returns:</p> <pre><code>{ "data":{ "orderItems":[ {"date":null, "uuid":null, "orderId":null }, ... 
] } } </code></pre> <blockquote> <p>project/main/<strong>app/schema.py</strong></p> </blockquote> <pre class="lang-python prettyprint-override"><code>import graphene from django.db import connection class OrderItemType(graphene.ObjectType): date = graphene.core.types.custom_scalars.DateTime() order_id = graphene.ID() uuid = graphene.String() class QueryType(graphene.ObjectType): name = 'Query' order_items = graphene.List(OrderItemType) def resolve_order_items(root, args, info): data = get_order_items() # data prints out properly in my terminal print data # data does not resolve properly return data def get_db_dicts(sql, args=None): cursor = connection.cursor() cursor.execute(sql, args) columns = [col[0] for col in cursor.description] data = [ dict(zip(columns, row)) for row in cursor.fetchall() ] cursor.close() return data def get_order_items(): return get_db_dicts(""" SELECT j.created_dt AS date, j.order_id, j.uuid FROM job AS j LIMIT 3; """) </code></pre> <p>In my terminal, I print from QueryType's resolve method and I can see the data successfully comes back from my Postgres connection. However, the GraphQL gives me nulls so it has to be in the resolve method that some mapping is getting screwed up.</p> <pre><code>[ { 'uuid': u'7584aac3-ab39-4a56-9c78-e3bb1e02dfc1', 'order_id': 25624320, 'date': datetime.datetime(2016, 1, 30, 16, 39, 40, 573400, tzinfo=&lt;UTC&gt;) }, ... 
] </code></pre> <p>How do I properly map my data to the fields I've defined in my OrderItemType?</p> <p>Here are some more references:</p> <blockquote> <p>project/main/<strong>schema.py</strong></p> </blockquote> <pre class="lang-python prettyprint-override"><code>import graphene from project.app.schema import QueryType AppQuery class Query(AppQuery): pass schema = graphene.Schema( query=Query, name='Pathfinder Schema' ) </code></pre> <blockquote> <p><strong>file tree</strong></p> </blockquote> <pre><code>|-- project |-- manage.py |-- main |-- app |-- models.py |-- schema.py |-- schema.py |-- settings.py |-- urls.py </code></pre>
1
2016-09-08T01:53:49Z
39,403,047
<p>Default resolvers on GraphQL Python / Graphene try to do the resolution of a given field_name in a root object using <code>getattr</code>. So, for example, the default resolver for a field named <code>order_items</code> will be something like:</p> <pre><code>def resolver(root, args, context, info):
    return getattr(root, 'order_items', None)
</code></pre> <p>Knowing that, a <code>getattr</code> on a <code>dict</code> will return the default, <code>None</code> (for accessing dict items you have to use <code>__getitem__</code> / <code>dict[key]</code>).</p> <p>So, solving your problem could be as easy as changing from <code>dicts</code> to <code>namedtuples</code> for storing the rows.</p> <pre><code>import graphene
from django.db import connection
from collections import namedtuple


class OrderItemType(graphene.ObjectType):
    date = graphene.core.types.custom_scalars.DateTime()
    order_id = graphene.ID()
    uuid = graphene.String()


class QueryType(graphene.ObjectType):
    class Meta:
        type_name = 'Query'  # This will be name in graphene 1.0

    order_items = graphene.List(OrderItemType)

    def resolve_order_items(root, args, info):
        return get_order_items()


def get_db_rows(sql, args=None):
    cursor = connection.cursor()
    cursor.execute(sql, args)
    columns = [col[0] for col in cursor.description]
    RowType = namedtuple('Row', columns)
    data = [
        RowType(*row)  # Edited by John suggestion fix
        for row in cursor.fetchall()
    ]
    cursor.close()
    return data


def get_order_items():
    return get_db_rows("""
        SELECT
            j.created_dt AS date,
            j.order_id,
            j.uuid
        FROM job AS j
        LIMIT 3;
    """)
</code></pre> <p>Hope this helps!</p>
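The difference this answer relies on can be seen in isolation — <code>getattr</code> ignores dict items but finds namedtuple fields:

```python
from collections import namedtuple

row_as_dict = {'uuid': 'abc', 'order_id': 1}
print(getattr(row_as_dict, 'uuid', None))   # None: dict items are not attributes

Row = namedtuple('Row', ['uuid', 'order_id'])
row = Row(uuid='abc', order_id=1)
print(getattr(row, 'uuid', None))           # 'abc': namedtuple fields are attributes
```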
2
2016-09-09T02:44:16Z
[ "python", "django", "postgresql", "graphql", "graphene-python" ]
pandas plot with different variable for subplots and colour?
39,381,540
<p>Currently this code:</p> <pre><code>count_df = (df[['rank', 'name', 'variable', 'value']] .groupby(['rank', 'variable', 'name']) .agg('count') .unstack()) count_df .head() # value # name 1lin STH_km27_lin ST_lin S_lin # rank variable # 1.0 NEE 24 115 33 28 # Qg 23 54 14 9 # Qh 37 124 11 28 # ... count_df.plot(kind='bar') </code></pre> <p>gets me this plot:</p> <p><a href="http://i.stack.imgur.com/1bxB6.png" rel="nofollow"><img src="http://i.stack.imgur.com/1bxB6.png" alt="bar plot with too much shit on it"></a></p> <p>using <code>subplots=True</code> in the <code>.plot()</code> call gets me this:</p> <p><a href="http://i.stack.imgur.com/aaAxN.png" rel="nofollow"><img src="http://i.stack.imgur.com/aaAxN.png" alt="useless subplots"></a></p> <p>which is pretty useless, because the colours are mapped to the same variable as the subplot facetting. Is there a way to choose which column/index is used for the sub-plotting, so that I can still have colours per <code>name</code> (<code>count_df</code> column header), but sub-plots per <code>variable</code>, so that each subplot has a bar per <code>name/rank</code>, grouped by <code>rank</code>, and coloured by <code>name</code>? </p>
4
2016-09-08T02:08:41Z
39,381,765
<p>Hrm. I suspect this isn't doable in pandas by itself, but I found a way to do it in Seaborn:</p> <pre><code>import seaborn as sns

cdf = (df[['rank', 'name', 'variable', 'value']]
       .groupby(['rank', 'variable', 'name'])
       .agg('count'))

sns.factorplot(x="rank", y="value", row="variable", hue="name",
               data=cdf.reset_index(), kind='bar')
</code></pre> <p>which results in:</p> <p><a href="http://i.stack.imgur.com/PKfZu.png" rel="nofollow"><img src="http://i.stack.imgur.com/PKfZu.png" alt="barplot by rank, variable, and name"></a></p> <p>which is close enough for my purposes</p>
2
2016-09-08T02:38:10Z
[ "python", "pandas", "matplotlib", "plot" ]
how to include tests in a distributable django app?
39,381,593
<p>I distribute a small django app that I wanted to write a test for. It uses some settings and I was importing </p> <p><code>from django.conf import settings</code> </p> <p>in the app file but this leaves me with a problem because the standalone app has no django project so how would one write and run tests on it?</p>
0
2016-09-08T02:15:54Z
39,381,648
<p>In your app folder, create a folder named <code>tests</code>, and in that folder place your tests (in a single <code>.py</code> file or in multiple files). Don't forget an <code>__init__.py</code> in the folder so the test modules can be discovered.</p> <p>You will end up with a folder structure like this:</p> <pre><code>my_app/
    tests/
        __init__.py
        test_views.py
        test_other_stuff.py
</code></pre> <p>Then you will be able to run your test suite for that app using:</p> <pre><code>./manage.py test my_app
</code></pre> <p>Where <code>my_app</code> is the name of your app. As far as how to write the tests themselves, <a href="https://docs.djangoproject.com/en/1.10/topics/testing/overview/" rel="nofollow">the Django docs</a> are very helpful.</p>
0
2016-09-08T02:23:01Z
[ "python", "django" ]
how to include tests in a distributable django app?
39,381,593
<p>I distribute a small django app that I wanted to write a test for. It uses some settings and I was importing </p> <p><code>from django.conf import settings</code> </p> <p>in the app file but this leaves me with a problem because the standalone app has no django project so how would one write and run tests on it?</p>
0
2016-09-08T02:15:54Z
39,386,036
<p>My solution was to create a django project with the name of the app and then put all the distribution files such as</p> <pre><code>setup.py
MANIFEST.in
</code></pre> <p>in the project level directory and then indicate only the files I want to include with the distribution in the MANIFEST.in file.</p> <p>MANIFEST.in:</p> <pre><code>include myapp/*
</code></pre>
0
2016-09-08T08:25:34Z
[ "python", "django" ]
Python Regex Matching characters
39,381,651
<p>I am trying to learn Python regular expressions. I have a long string that contains many patterns that look like: <code>#v=xxxxxxxxxx</code> where x is the variable characters I want.</p> <p>I was thinking I could use <code>re.findall(r'...', myString)</code> where <code>...</code> is my pattern. That's the part I'm having trouble with. I somehow need to get the next 10 characters after each <code>#v=</code>.</p> <p>All help is appreciated :)</p>
1
2016-09-08T02:23:16Z
39,381,731
<p>You were close! Here's an RE that'll work:</p> <pre><code>In [1]: import re

In [2]: s = "#v=yyyyyyyyyy #v=xxxxxxxxxx #v=zzzzzzzzzz"

In [3]: re.findall(r'#v=(\w{10})', s)
Out[3]: ['yyyyyyyyyy', 'xxxxxxxxxx', 'zzzzzzzzzz']
</code></pre>
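One caveat worth knowing: <code>\w</code> also matches the underscore. If the ten characters really are limited to a-z, A-Z and 0-9 as the question states, an explicit character class is stricter (the sample string here is made up):

```python
import re

s = "#v=abc123XYZ9 and #v=0123456789"
print(re.findall(r'#v=([A-Za-z0-9]{10})', s))
# ['abc123XYZ9', '0123456789']
```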
1
2016-09-08T02:32:54Z
[ "python", "regex" ]
Python -Taking dot product of long list of arrays
39,381,727
<p>So I'm trying to take the dot product of two arrays using numpy's dot product function.</p> <pre><code>import numpy as np

MWFrPos_Hydro1 = subPos1[submaskFirst1]
x = MWFrPos_Hydro1

MWFrVel_Hydro1 = subVel1[submaskFirst1]
y = MWFrVel_Hydro1

MWFrPosMag_Hydro1 = [np.linalg.norm(i) for i in MWFrPos_Hydro1]

np.dot(x, y)
</code></pre> <p>returns</p> <pre><code>---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
&lt;ipython-input-135-9ef41eb4235d&gt; in &lt;module&gt;()
      6
      7
----&gt; 8 np.dot(x, y)

ValueError: shapes (1220,3) and (1220,3) not aligned: 3 (dim 1) != 1220 (dim 0)
</code></pre> <p>Am I using this function improperly?</p> <p>The arrays look like this</p> <pre><code>print x
[[  51.61872482  106.19775391   69.64765167]
 [  33.86419296   11.75729942   11.84990311]
 [  12.75009823   58.95491028   38.06708527]
 ...,
 [  99.00266266   96.0210495    18.79844856]
 [  27.18083954   74.35041809   78.07577515]
 [  19.29788399   82.16114044    1.20453501]]

print y
[[  40.0402298  -162.62153625 -163.00158691]
 [-359.41983032 -115.39328766   14.8419466 ]
 [  95.92044067 -359.26425171  234.57330322]
 ...,
 [ 130.17840576   -7.00977898   42.09699249]
 [  37.37852478  -52.66002655 -318.15155029]
 [ 126.1726532   121.3104248  -416.20855713]]
</code></pre> <p>Would for-looping <code>np.vdot</code> be more optimal in this circumstance?</p>
0
2016-09-08T02:31:59Z
39,381,781
<p>You can't take the dot product of two <code>n * m</code> matrices unless <code>m == n</code> -- when multiplying two matrices, A and B, B needs to have as many columns as A has rows. (So you <em>can</em> multiply an <code>n * m</code> matrix with an <code>m * n</code> matrix.) </p> <p>See <a href="http://mathinsight.org/matrix_vector_multiplication" rel="nofollow">this article on multiplying matrices</a>. </p>
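To make the shape rule concrete, here's a small dependency-free sketch — a hypothetical helper, not part of numpy — that reproduces exactly the error from the question's traceback:

```python
def dot_shape(a_shape, b_shape):
    """Result shape of np.dot for 2-D arrays: (n, m) @ (m, p) -> (n, p)."""
    (n, m), (m2, p) = a_shape, b_shape
    if m != m2:
        raise ValueError("shapes %s and %s not aligned" % (a_shape, b_shape))
    return (n, p)

print(dot_shape((1220, 3), (3, 1220)))  # (1220, 1220): works after transposing y
# dot_shape((1220, 3), (1220, 3)) raises ValueError, as in the question
```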
2
2016-09-08T02:40:15Z
[ "python", "numpy" ]
Python -Taking dot product of long list of arrays
39,381,727
<p>So I'm trying to take the dot product of two arrays using numpy's dot product function.</p> <pre><code>import numpy as np

MWFrPos_Hydro1 = subPos1[submaskFirst1]
x = MWFrPos_Hydro1

MWFrVel_Hydro1 = subVel1[submaskFirst1]
y = MWFrVel_Hydro1

MWFrPosMag_Hydro1 = [np.linalg.norm(i) for i in MWFrPos_Hydro1]

np.dot(x, y)
</code></pre> <p>returns</p> <pre><code>---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
&lt;ipython-input-135-9ef41eb4235d&gt; in &lt;module&gt;()
      6
      7
----&gt; 8 np.dot(x, y)

ValueError: shapes (1220,3) and (1220,3) not aligned: 3 (dim 1) != 1220 (dim 0)
</code></pre> <p>Am I using this function improperly?</p> <p>The arrays look like this</p> <pre><code>print x
[[  51.61872482  106.19775391   69.64765167]
 [  33.86419296   11.75729942   11.84990311]
 [  12.75009823   58.95491028   38.06708527]
 ...,
 [  99.00266266   96.0210495    18.79844856]
 [  27.18083954   74.35041809   78.07577515]
 [  19.29788399   82.16114044    1.20453501]]

print y
[[  40.0402298  -162.62153625 -163.00158691]
 [-359.41983032 -115.39328766   14.8419466 ]
 [  95.92044067 -359.26425171  234.57330322]
 ...,
 [ 130.17840576   -7.00977898   42.09699249]
 [  37.37852478  -52.66002655 -318.15155029]
 [ 126.1726532   121.3104248  -416.20855713]]
</code></pre> <p>Would for-looping <code>np.vdot</code> be more optimal in this circumstance?</p>
0
2016-09-08T02:31:59Z
39,383,113
<p>Some possible products for <code>(n,3)</code> arrays (here I'll show just a few, using one small array):</p> <pre><code>In [434]: x=np.arange(12.).reshape(4,3)

In [435]: x
Out[435]:
array([[  0.,   1.,   2.],
       [  3.,   4.,   5.],
       [  6.,   7.,   8.],
       [  9.,  10.,  11.]])
</code></pre> <p>Element by element product, summed across the columns; <code>n</code> values. This is a magnitude-like number.</p> <pre><code>In [436]: (x*x).sum(axis=1)
Out[436]: array([   5.,   50.,  149.,  302.])
</code></pre> <p>Same thing with <code>einsum</code>, which gives more control over which axes are multiplied and which are summed.</p> <pre><code>In [437]: np.einsum('ij,ij-&gt;i',x,x)
Out[437]: array([   5.,   50.,  149.,  302.])
</code></pre> <p><code>dot</code> requires the last axis of the 1st argument and the 2nd-to-last axis of the 2nd to have the same size, so I have to use <code>x.T</code> (transpose). The diagonal matches the above.</p> <pre><code>In [438]: np.dot(x,x.T)
Out[438]:
array([[   5.,   14.,   23.,   32.],
       [  14.,   50.,   86.,  122.],
       [  23.,   86.,  149.,  212.],
       [  32.,  122.,  212.,  302.]])
</code></pre> <p><code>np.einsum('ij,kj',x,x)</code> does the same thing.</p> <p>There is a new <code>matmul</code> product, but with 2d arrays like this it is just <code>dot</code>. I have to turn them into 3d arrays to get the 4 values; and even with that I have to squeeze out excess dimensions:</p> <pre><code>In [450]: x[:,None,:]@x[:,:,None]
Out[450]:
array([[[   5.]],

       [[  50.]],

       [[ 149.]],

       [[ 302.]]])

In [451]: np.squeeze(_)
Out[451]: array([   5.,   50.,  149.,  302.])
</code></pre>
2
2016-09-08T05:22:53Z
[ "python", "numpy" ]
Removing List from List of Lists with condition
39,381,769
<p>I have this list of projects, and I want to remove it one by one start from last item to item-n until it reach some total value of budget = 325.000</p> <pre><code>from collections import namedtuple Item = namedtuple('Item', 'region sector name budget target performance'.split()) sorted_KP = [Item(region='H', sector='2', name='H3', budget=7000.0, target=1.0, performance=4.0), Item(region='H', sector='2', name='H10', budget=35000.0, target=15.0, performance=1.0), Item(region='I', sector='2', name='I6', budget=50000.0, target=5.0, performance=0.40598931548848194), Item(region='E', sector='4', name='E5', budget=75000.0, target=30.0, performance=0.0663966081766), Item(region='C', sector='1', name='C1', budget=75000.0, target=50.0, performance=0.0308067750379), Item(region='C', sector='1', name='C2', budget=75000.0, target=50.0, performance=0.0308067750379), Item(region='C', sector='5', name='C4', budget=75000.0, target=50.0, performance=0.0308067750379), Item(region='I', sector='2', name='I5', budget=100000.0, target=5.0, performance=0.40598931548848194), Item(region='E', sector='4', name='E1', budget=100000.0, target=30.0, performance=0.0663966081766), Item(region='D', sector='5', name='D21', budget=60000.0, target=4.0, performance=0.2479775110248), Item(region='D', sector='5', name='D30', budget=10000.0, target=1.0, performance=0.1653183406832), Item(region='D', sector='1', name='D23', budget=30000.0, target=20.0, performance=0.023659703723372342), Item(region='C', sector='5', name='C3', budget=150000.0, target=75.0, performance=0.0308067750379), Item(region='D', sector='5', name='D20', budget=30000.0, target=5.0, performance=0.0826591703416), Item(region='H', sector='2', name='H6', budget=310576.0, target=1.0, performance=4.0), Item(region='H', sector='3', name='H5', budget=9500.0, target=1.0, performance=0.1172008400616), Item(region='E', sector='6', name='E3', budget=100000.0, target=30.0, performance=0.03747318294316411), Item(region='G', sector='3', 
name='G17', budget=75000.0, target=20.0, performance=0.04132095963602382), Item(region='C', sector='4', name='C5', budget=75000.0, target=25.0, performance=0.0308067750379), Item(region='C', sector='2', name='C6', budget=30000.0, target=5.0, performance=0.0616135500758), Item(region='C', sector='2', name='C7', budget=30000.0, target=5.0, performance=0.0616135500758), Item(region='D', sector='6', name='D22', budget=65190.0, target=30.0, performance=0.020332158889648923), Item(region='D', sector='5', name='D3', budget=100000.0, target=20.0, performance=0.0413295851708), Item(region='D', sector='5', name='D4', budget=100000.0, target=20.0, performance=0.0413295851708), Item(region='A', sector='1', name='A12', budget=25000.0, target=25.0, performance=0.00749432996938), Item(region='A', sector='1', name='A13', budget=25000.0, target=25.0, performance=0.00749432996938), Item(region='A', sector='3', name='A25', budget=4500.0, target=1.0, performance=0.02997731987752), Item(region='A', sector='5', name='A26', budget=4500.0, target=1.0, performance=0.02997731987752), Item(region='A', sector='1', name='A27', budget=4500.0, target=1.0, performance=0.02997731987752), Item(region='A', sector='1', name='A29', budget=4500.0, target=1.0, performance=0.02997731987752), Item(region='A', sector='3', name='A30', budget=4500.0, target=1.0, performance=0.02997731987752)] </code></pre> <p>But beside the total value, I have two others conditions of the item whether it should be removed or not.</p> <p>First, after the item is removed there still remain at least one item in the list of lists that represent the same <em>region</em></p> <p>Second, after the item is removed there still remain at least one item that represent the same <em>sector</em></p> <p>For example, I can remove the last item because it represent region "A" and there left 5 items that also represent region "A". 
Also it represent sector "3" and there left 3 items that represent sector "3".</p> <p>This removal and checking repeated until I reach total budget of removal at least 325.000</p> <p>I did this code, but I can not get what I need. Please help me to correct it.</p> <pre><code>from collections import Counter unpack = [] for item in sorted_KP: item_budget = item[3] sum_unpack = sum(item[3] for item in unpack) budget = 325000 remaining = [] for item in sorted_KP: if item not in unpack: remaining.append(item) region_el = [item[0] for item in remaining] counter_R_el = Counter(region_el) sector_el = [item[1] for item in remaining] counter_S_el = Counter(sector_el) if counter_R_el &gt;= 1 or counter_S_el &gt;= 1: if sum_unpack &lt;= budget: unpack.append(item) for item in unpack: print "\t", item </code></pre> <p>Here is what I got with my code, item-25 is still removed when it should not:</p> <pre><code>unpack =Item(region='A', sector='3', name='A30', budget=4500.0, target=1.0, performance=0.02997731987752) Item(region='A', sector='1', name='A29', budget=4500.0, target=1.0, performance=0.02997731987752) Item(region='A', sector='1', name='A27', budget=4500.0, target=1.0, performance=0.02997731987752) Item(region='A', sector='5', name='A26', budget=4500.0, target=1.0, performance=0.02997731987752) Item(region='A', sector='3', name='A25', budget=4500.0, target=1.0, performance=0.02997731987752) Item(region='A', sector='1', name='A13', budget=25000.0, target=25.0, performance=0.00749432996938) Item(region='A', sector='1', name='A12', budget=25000.0, target=25.0, performance=0.00749432996938) Item(region='D', sector='5', name='D4', budget=100000.0, target=20.0, performance=0.0413295851708) Item(region='D', sector='5', name='D3', budget=100000.0, target=20.0, performance=0.0413295851708) Item(region='D', sector='6', name='D22', budget=65190.0, target=30.0, performance=0.020332158889648923) </code></pre> <p>Item-25 (project name: "A12") can not be removed even though we still 
have budget left, because if it were removed, there would be no more items representing region "A", and so on.</p> <p>The solution should be:</p> <pre><code>unpack = [Item(region='A', sector='3', name='A30', budget=4500.0, target=1.0, performance=0.02997731987752), Item(region='A', sector='1', name='A29', budget=4500.0, target=1.0, performance=0.02997731987752), Item(region='A', sector='1', name='A27', budget=4500.0, target=1.0, performance=0.02997731987752), Item(region='A', sector='5', name='A26', budget=4500.0, target=1.0, performance=0.02997731987752), Item(region='A', sector='3', name='A25', budget=4500.0, target=1.0, performance=0.02997731987752), Item(region='A', sector='1', name='A13', budget=25000.0, target=25.0, performance=0.00749432996938), Item(region='D', sector='5', name='D4', budget=100000.0, target=20.0, performance=0.0413295851708), Item(region='D', sector='5', name='D3', budget=100000.0, target=20.0, performance=0.0413295851708), Item(region='D', sector='6', name='D22', budget=65190.0, target=30.0, performance=0.020332158889648923), Item(region='C', sector='2', name='C7', budget=30000.0, target=5.0, performance=0.0616135500758)] </code></pre> <p>Thank you in advance for your help.</p>
3
2016-09-08T02:39:17Z
39,382,456
<p>Updating the answer as when I actually tried to run it I found a few other problems:</p> <ul> <li>the inner <code>for item in sorted_KP</code> uses the same <code>item</code> counter as the outer loop and overwrites it - always attempting to remove the <code>A30</code> (last) item</li> <li>when switching to <code>item2</code> in the inner loop I had to also reverse the outer loop order (i.e. start removal from the last line).</li> <li>the region/sector counter comparison is incorrect, causing <code>TypeError: unorderable types: Counter() &gt;= int()</code> - need to pick the specific count inside matching the item's region or sector as needed</li> <li>incorporated my earlier answer: you need to <code>and</code> your 2 extra conditions, not <code>or</code> them</li> <li>incorporated @wwii's comment - indeed the counter comparisons need to be <code>&gt; 1</code>, not <code>&gt;= 1</code></li> </ul> <p>The actual tested code:</p> <pre><code>&gt;&gt;&gt; for item in sorted_KP[::-1]: ... item_budget = item[3] ... sum_unpack = sum(item[3] for item in unpack) ... budget = 325000 ... remaining = [] ... for item2 in sorted_KP: ... if item2 not in unpack: ... remaining.append(item2) ... region_el = [item[0] for item in remaining] ... counter_R_el = Counter(region_el) ... sector_el = [item[1] for item in remaining] ... counter_S_el = Counter(sector_el) ... if counter_R_el[item.region] &gt; 1 and counter_S_el[item.sector] &gt; 1: ... if sum_unpack &lt;= budget: ... unpack.append(item) ... &gt;&gt;&gt; &gt;&gt;&gt; for item in unpack: ... logging.error(item) ... 
ERROR:root:Item(region='A', sector='3', name='A30', budget=4500.0, target=1.0, performance=0.02997731987752) ERROR:root:Item(region='A', sector='1', name='A29', budget=4500.0, target=1.0, performance=0.02997731987752) ERROR:root:Item(region='A', sector='1', name='A27', budget=4500.0, target=1.0, performance=0.02997731987752) ERROR:root:Item(region='A', sector='5', name='A26', budget=4500.0, target=1.0, performance=0.02997731987752) ERROR:root:Item(region='A', sector='3', name='A25', budget=4500.0, target=1.0, performance=0.02997731987752) ERROR:root:Item(region='A', sector='1', name='A13', budget=25000.0, target=25.0, performance=0.00749432996938) ERROR:root:Item(region='D', sector='5', name='D4', budget=100000.0, target=20.0, performance=0.0413295851708) ERROR:root:Item(region='D', sector='5', name='D3', budget=100000.0, target=20.0, performance=0.0413295851708) ERROR:root:Item(region='D', sector='6', name='D22', budget=65190.0, target=30.0, performance=0.020332158889648923) ERROR:root:Item(region='C', sector='2', name='C7', budget=30000.0, target=5.0, performance=0.0616135500758) &gt;&gt;&gt; </code></pre>
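<p>For reference, here is a minimal, self-contained sketch of the same feasibility check, separated out into a small helper. The four <code>Item</code> rows and the <code>200.0</code> budget below are made up for illustration only — they are not the question's data — but the logic (region and sector must each keep at least one other representative, and the running removal budget is respected) is the same:</p> <pre><code>
```python
from collections import Counter, namedtuple

Item = namedtuple('Item', 'region sector name budget')

# Toy data -- regions, sectors, names and budgets are invented for illustration
items = [
    Item('A', '1', 'A1', 100.0),
    Item('A', '1', 'A2', 100.0),
    Item('A', '2', 'A3', 100.0),
    Item('B', '2', 'B1', 100.0),
]

def removable(item, remaining):
    """An item may be removed only if, afterwards, its region and its
    sector are each still represented by at least one other item."""
    regions = Counter(i.region for i in remaining)
    sectors = Counter(i.sector for i in remaining)
    return regions[item.region] > 1 and sectors[item.sector] > 1

budget = 200.0   # stop removing once this much budget has been unpacked
unpack = []
for item in reversed(items):           # start removal from the last line
    remaining = [i for i in items if i not in unpack]
    if removable(item, remaining) and sum(i.budget for i in unpack) < budget:
        unpack.append(item)

print([i.name for i in unpack])        # -> ['A3', 'A2']
```
</code></pre> <p>Indexing the <code>Counter</code> with the candidate's own region/sector (<code>regions[item.region]</code>) is the key fix: comparing a whole <code>Counter</code> against an integer is what raised the <code>TypeError</code> in the original code.</p>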
1
2016-09-08T04:09:53Z
[ "python", "list", "condition" ]