title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
How to plot odd even square wave using python | 38,896,496 | <p>I am using the Python code below to generate a square wave at one specific position. For example: if you enter 0, the signal is high<a href="http://i.stack.imgur.com/4Ivso.png" rel="nofollow">1</a> only between 0 and 1 [Odd = High]. If you enter 1, the output is high<a href="http://i.stack.imgur.com/4Ivso.png" rel="nofollow">1</a> only between 1 and 2 [Even = High]. How do I extend the code below to generate an odd or even square wave throughout the whole time span rather than at a single position? I am having trouble with the 2*n+1 formula. Could anyone help me with this?</p>
<p>Please refer to the image below.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

def SquareWave(n):
    xmin=0;
    xmax=10;
    ymin=-2;
    ymax=2;
    Nx=1000;
    offset=1;
    x=np.linspace(xmin, xmax, Nx);
    y=np.sign(x+n)*offset;
    y[(x<n)]=0;
    y[(x>n+1)]=0;
    plt.plot(x, y);
    plt.axis([xmin, xmax, ymin, ymax]);
    plt.grid()
    plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/4Ivso.png" rel="nofollow"><img src="http://i.stack.imgur.com/4Ivso.png" alt="Odd Wave"></a></p>
| 0 | 2016-08-11T12:31:54Z | 38,896,958 | <p>Don't use <code>;</code>.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

def SquareWave(n, xmin=0, xmax=10, ymin=-2, Nx=1000, ymax=2, offset=1):
    x = np.sort(np.concatenate([np.arange(xmin, xmax) - 1E-6,
                                np.arange(xmin, xmax) + 1E-6]))
    # You can use np.linspace(xmin, xmax, Nx) if you want the intermediate points
    y = np.array(x + n + offset, dtype=int) % 2
    plt.plot(x, y)
    plt.axis([xmin, xmax, ymin, ymax])
    plt.grid()
    plt.show()
</code></pre>
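<p>The heart of the fix is that <code>int(x + n + offset) % 2</code> alternates between 0 and 1 on successive unit intervals, so the wave repeats over the whole span instead of being high only once. A quick plot-free check of that formula (the helper name <code>parity</code> is made up here):</p>

```python
def parity(x, n, offset=1):
    # 1 where the wave should be high, 0 elsewhere (mirrors the formula above)
    return int(x + n + offset) % 2

# sample the midpoints of the first four unit intervals
highs_n0 = [parity(x, n=0) for x in (0.5, 1.5, 2.5, 3.5)]
highs_n1 = [parity(x, n=1) for x in (0.5, 1.5, 2.5, 3.5)]
print(highs_n0)  # [1, 0, 1, 0]: with n=0, [0,1) and [2,3) are high
print(highs_n1)  # [0, 1, 0, 1]: shifting n by one flips every interval
```

<p>So changing <code>n</code> by one swaps which family of intervals is high, which is exactly the odd/even behaviour asked for.</p>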
| 1 | 2016-08-11T12:52:48Z | [
"python",
"numpy",
"matplotlib"
] |
SQLAlchemy: is it possible to use a Query without binding it to a session? | 38,896,499 | <p>I want to execute the same SQL query from different processes using SQLAlchemy.
As I understand it, I must create a new Session in every process, so for each new session I must recreate the query:</p>
<pre><code>session.query(...).filter(...)
etc.
</code></pre>
<p>It seems logical to save the fully formed query separately from the session, and then apply this query to each session:</p>
<pre><code>new_session.query(old_saved_query)
</code></pre>
<p>Is this possible?
Or is there some other way?</p>
| 1 | 2016-08-11T12:32:06Z | 38,896,879 | <p>You can use <a href="http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.with_session" rel="nofollow"><code>with_session</code></a>:</p>
<pre><code>query = session.query(...).filter(...)
query.with_session(new_session)
</code></pre>
<p>It is also possible to create a query without a bound session:</p>
<pre><code>from sqlalchemy.orm import Query
query = Query(...).filter(...)
query.with_session(session)
</code></pre>
| 1 | 2016-08-11T12:49:41Z | [
"python",
"session",
"sqlalchemy",
"python-3.4"
] |
adding value to dict using keys list | 38,896,828 | <p>Consider the scenario in which a list contains keys of a dict</p>
<pre><code>x = {'a':{'b': 1}}
lst = ['a','c']
value = {'d': 3}
</code></pre>
<p>Using the keys present in the list <code>lst</code>, is there a way to add an entry to the dict <code>x</code>?</p>
<p><strong>Expected Result:</strong></p>
<pre><code>x = {'a': {'c': {'d': 3}, 'b': 1}}
</code></pre>
| 0 | 2016-08-11T12:47:19Z | 38,897,180 | <p>Use a loop and a temporary dictionary variable:</p>
<pre><code>tmp_dict = x
for key in lst[:-1]:
    tmp_dict = tmp_dict[key]
tmp_dict[lst[-1]] = value
print x
</code></pre>
<p>Notice that the loop runs over all keys except the last one, since we need the last key for the assignment operation.</p>
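<p>The same idea can be wrapped in a reusable helper. A sketch (the name <code>set_nested</code> is invented here) that additionally creates intermediate dicts when a key on the path is missing, which a plain <code>tmp_dict[key]</code> lookup would answer with a <code>KeyError</code>:</p>

```python
def set_nested(d, keys, value):
    # walk down all keys but the last, creating empty dicts along the way
    for key in keys[:-1]:
        d = d.setdefault(key, {})
    d[keys[-1]] = value  # the last key is used for the assignment

x = {'a': {'b': 1}}
set_nested(x, ['a', 'c'], {'d': 3})
print(x)  # {'a': {'b': 1, 'c': {'d': 3}}}

set_nested(x, ['p', 'q', 'r'], 0)  # every key on this path is missing
print(x['p'])  # {'q': {'r': 0}}
```

<p><code>dict.setdefault</code> returns the existing value when the key is present and inserts (and returns) the default otherwise, which is what makes the path creation a one-liner.</p>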
| 0 | 2016-08-11T13:02:02Z | [
"python",
"python-2.7"
] |
adding value to dict using keys list | 38,896,828 | <p>Consider the scenario in which a list contains keys of a dict</p>
<pre><code>x = {'a':{'b': 1}}
lst = ['a','c']
value = {'d': 3}
</code></pre>
<p>Using the keys present in the list <code>lst</code>, is there a way to add an entry to the dict <code>x</code>?</p>
<p><strong>Expected Result:</strong></p>
<pre><code>x = {'a': {'c': {'d': 3}, 'b': 1}}
</code></pre>
| 0 | 2016-08-11T12:47:19Z | 38,897,259 | <p>Philipp's answer is good. </p>
<p>But here is my attempt to give you the <em>exact</em> answer you expected.</p>
<pre><code>x = {'a':{'b' : 1}}
lst=['a','c']
value = {'d':3}
x[lst[0]][lst[1]] = value
print(x)
>> {'a': {'c': {'d': 3}, 'b': 1}}
</code></pre>
| 1 | 2016-08-11T13:05:30Z | [
"python",
"python-2.7"
] |
Python->Beautifulsoup->Webscraping->Looping over URL (1 to 53) and saving Results | 38,896,893 | <p><a href="http://livingwage.mit.edu/">Here is the Website I am trying to scrape http://livingwage.mit.edu/</a></p>
<p>The specific URLs are from </p>
<pre><code>http://livingwage.mit.edu/states/01
http://livingwage.mit.edu/states/02
http://livingwage.mit.edu/states/04 (For some reason they skipped 03)
...all the way to...
http://livingwage.mit.edu/states/56
</code></pre>
<p>And on each one of these URLs, I need the last row of the second table:</p>
<blockquote>
<p>Example for <a href="http://livingwage.mit.edu/states/01">http://livingwage.mit.edu/states/01</a></p>
<p>Required annual income before taxes $20,260 $42,786 $51,642
$64,767 $34,325 $42,305 $47,345 $53,206 $34,325 $47,691
$56,934 $66,997</p>
</blockquote>
<p>Desire output:</p>
<p>Alabama $20,260 $42,786 $51,642 $64,767 $34,325 $42,305 $47,345 $53,206 $34,325 $47,691 $56,934 $66,997</p>
<p>Alaska $24,070 $49,295 $60,933 $79,871 $38,561 $47,136 $52,233 $61,531 $38,561 $54,433 $66,316 $82,403 </p>
<p>...</p>
<p>...</p>
<p>Wyoming $20,867 $42,689 $52,007 $65,892 $34,988 $41,887 $46,983 $53,549 $34,988 $47,826 $57,391 $68,424 </p>
<p>After 2 hours of messing around, this is what I have so far (I am a beginner):</p>
<pre><code>import requests, bs4
res = requests.get('http://livingwage.mit.edu/states/01')
res.raise_for_status()
states = bs4.BeautifulSoup(res.text)
state_name=states.select('h1')
table = states.find_all('table')[1]
rows = table.find_all('tr', 'odd')[4:]
result=[]
result.append(state_name)
result.append(rows)
</code></pre>
<p>When I viewed state_name and rows in the Python console, it gave me the HTML elements</p>
<pre><code>[<h1>Living Wag...Alabama</h1>]
</code></pre>
<p>and</p>
<pre><code>[<tr class = "odd... </td> </tr>]
</code></pre>
<p>Problem 1: These are the things that I want in the desired output, but how can I get Python to give them to me as strings rather than HTML like above?</p>
<p>Problem 2: How do I loop through the request.get(url01 to url56)? </p>
<p>Thank you for your help.</p>
<p>And if you can offer a more efficient way of getting to the rows variable in my code, I would greatly appreciate it, because the way I get there is not very Pythonic.</p>
| 5 | 2016-08-11T12:50:07Z | 38,897,186 | <blockquote>
<p>Problem 1: These are the things that I want in the desired output, but how can I get python to give it to me in a string format rather than HTML like above?</p>
</blockquote>
<p>You can get the text simply by doing something along the lines of:</p>
<pre><code>state_name=states.find('h1').text
</code></pre>
<p>The same can be applied to each of the rows too.</p>
<blockquote>
<p>Problem 2: How do I loop through the request.get(url01 to url56)?</p>
</blockquote>
<p>The same code block can be put inside a loop from 1 to 56 like so:</p>
<pre><code>for i in range(1, 57):
    res = requests.get('http://livingwage.mit.edu/states/' + str(i).zfill(2))
    ...rest of the code...
</code></pre>
<p><code>zfill</code> will add those leading zeroes. Also, it would be better if <code>requests.get</code> is enclosed in a <code>try-except</code> block so that the loop continues gracefully even when the url is wrong.</p>
| 2 | 2016-08-11T13:02:21Z | [
"python",
"web-scraping",
"beautifulsoup"
] |
Python->Beautifulsoup->Webscraping->Looping over URL (1 to 53) and saving Results | 38,896,893 | <p><a href="http://livingwage.mit.edu/">Here is the Website I am trying to scrape http://livingwage.mit.edu/</a></p>
<p>The specific URLs are from </p>
<pre><code>http://livingwage.mit.edu/states/01
http://livingwage.mit.edu/states/02
http://livingwage.mit.edu/states/04 (For some reason they skipped 03)
...all the way to...
http://livingwage.mit.edu/states/56
</code></pre>
<p>And on each one of these URLs, I need the last row of the second table:</p>
<blockquote>
<p>Example for <a href="http://livingwage.mit.edu/states/01">http://livingwage.mit.edu/states/01</a></p>
<p>Required annual income before taxes $20,260 $42,786 $51,642
$64,767 $34,325 $42,305 $47,345 $53,206 $34,325 $47,691
$56,934 $66,997</p>
</blockquote>
<p>Desire output:</p>
<p>Alabama $20,260 $42,786 $51,642 $64,767 $34,325 $42,305 $47,345 $53,206 $34,325 $47,691 $56,934 $66,997</p>
<p>Alaska $24,070 $49,295 $60,933 $79,871 $38,561 $47,136 $52,233 $61,531 $38,561 $54,433 $66,316 $82,403 </p>
<p>...</p>
<p>...</p>
<p>Wyoming $20,867 $42,689 $52,007 $65,892 $34,988 $41,887 $46,983 $53,549 $34,988 $47,826 $57,391 $68,424 </p>
<p>After 2 hours of messing around, this is what I have so far (I am a beginner):</p>
<pre><code>import requests, bs4
res = requests.get('http://livingwage.mit.edu/states/01')
res.raise_for_status()
states = bs4.BeautifulSoup(res.text)
state_name=states.select('h1')
table = states.find_all('table')[1]
rows = table.find_all('tr', 'odd')[4:]
result=[]
result.append(state_name)
result.append(rows)
</code></pre>
<p>When I viewed state_name and rows in the Python console, it gave me the HTML elements</p>
<pre><code>[<h1>Living Wag...Alabama</h1>]
</code></pre>
<p>and</p>
<pre><code>[<tr class = "odd... </td> </tr>]
</code></pre>
<p>Problem 1: These are the things that I want in the desired output, but how can I get Python to give them to me as strings rather than HTML like above?</p>
<p>Problem 2: How do I loop through the request.get(url01 to url56)? </p>
<p>Thank you for your help.</p>
<p>And if you can offer a more efficient way of getting to the rows variable in my code, I would greatly appreciate it, because the way I get there is not very Pythonic.</p>
| 5 | 2016-08-11T12:50:07Z | 38,897,405 | <p>Just get all the states from the initial page, then you can select the second table and use the <em>css classes</em> <em>odd results</em> to get the <em>tr</em> you need; there is no need to slice, as the class names are unique:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin  # python2 -> from urlparse import urljoin

base = "http://livingwage.mit.edu"
res = requests.get(base)
res.raise_for_status()

states = []
# Get all state urls and state names from the anchor tags on the base page:
# get all the anchors inside each li that are children of the
# ul with the css class "states list".
for a in BeautifulSoup(res.text, "html.parser").select("ul.states.list-unstyled li a"):
    # The hrefs look like "/states/51/locations".
    # We want everything before /locations so we split on / from the right -> /states/51
    # and join to the base url. The anchor text also holds the state name,
    # so we store the full url and the state, i.e. "http://livingwage.mit.edu/states/01" "Alabama".
    states.append((urljoin(base, a["href"].rsplit("/", 1)[0]), a.text))

def parse(soup):
    # Get the second table; indexing in css starts at 1, so "table:nth-of-type(2)" gets the second table.
    table = soup.select_one("table:nth-of-type(2)")
    # To get the text, we just need to find all the tds and call .text on each.
    # The tr we want has the css classes "odd results"; "td + td" starts from the second td,
    # skipping the first, which is *Required annual income before taxes*.
    return [td.text.strip() for td in table.select_one("tr.odd.results").select("td + td")]

# Unpack the url and state from each tuple in our states list.
for url, state in states:
    soup = BeautifulSoup(requests.get(url).content, "html.parser")
    print(state, parse(soup))
</code></pre>
<p>If you run the code you will see output like:</p>
<pre><code>Alabama ['$21,144', '$43,213', '$53,468', '$67,788', '$34,783', '$41,847', '$46,876', '$52,531', '$34,783', '$48,108', '$58,748', '$70,014']
Alaska ['$24,070', '$49,295', '$60,933', '$79,871', '$38,561', '$47,136', '$52,233', '$61,531', '$38,561', '$54,433', '$66,316', '$82,403']
Arizona ['$21,587', '$47,153', '$59,462', '$78,112', '$36,332', '$44,913', '$50,200', '$58,615', '$36,332', '$52,483', '$65,047', '$80,739']
Arkansas ['$19,765', '$41,000', '$50,887', '$65,091', '$33,351', '$40,337', '$45,445', '$51,377', '$33,351', '$45,976', '$56,257', '$67,354']
California ['$26,249', '$55,810', '$64,262', '$81,451', '$42,433', '$52,529', '$57,986', '$68,826', '$42,433', '$61,328', '$70,088', '$84,192']
Colorado ['$23,573', '$51,936', '$61,989', '$79,343', '$38,805', '$47,627', '$52,932', '$62,313', '$38,805', '$57,283', '$67,593', '$81,978']
Connecticut ['$25,215', '$54,932', '$64,882', '$80,020', '$39,636', '$48,787', '$53,857', '$61,074', '$39,636', '$60,074', '$70,267', '$82,606']
</code></pre>
<p>You could loop over a range from 1 to 53, but extracting the anchors from the base page also gives us the state name in a single step. Using the h1 from each state page would instead give you output like <em>Living Wage Calculation for Alabama</em>, which you would then have to parse to get just the name; that would not be trivial, considering some states have names of more than one word.</p>
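<p>The href handling used above can also be verified in isolation: <code>rsplit("/", 1)[0]</code> drops the trailing <code>/locations</code> segment and <code>urljoin</code> resolves the result against the site root:</p>

```python
from urllib.parse import urljoin  # Python 2: from urlparse import urljoin

base = "http://livingwage.mit.edu"
href = "/states/51/locations"  # the shape of the hrefs on the index page

trimmed = href.rsplit("/", 1)[0]  # "/states/51"
url = urljoin(base, trimmed)
print(url)  # http://livingwage.mit.edu/states/51
```
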
| 5 | 2016-08-11T13:12:17Z | [
"python",
"web-scraping",
"beautifulsoup"
] |
Python - How to improve efficiency of complex recursive function? | 38,896,939 | <p>In <a href="https://www.youtube.com/watch?v=jcKRGpMiVTw" rel="nofollow">this video</a> by Mathologer on, amongst other things, infinite sums, there are 3 different infinite sums shown at 9:25, when the video suddenly freezes and an elephant deity pops up, challenging the viewer to find "the probable values" of the expressions. I wrote the following script to approximate the last of the three (i.e. 1 + 3.../2...) with increasing precision:</p>
<pre><code>from decimal import Decimal as D, getcontext  # for accurate results

def main(c):  # faster code when functions defined locally (I think)
    def run1(c):
        c += 1
        if c <= DEPTH:
            return D(1) + run3(c)/run2(c)
        else:
            return D(1)

    def run2(c):
        c += 1
        if c <= DEPTH:
            return D(2) + run2(c)/run1(c)
        else:
            return D(2)

    def run3(c):
        c += 1
        if c <= DEPTH:
            return D(3) + run1(c)/run3(c)
        else:
            return D(3)

    return run1(c)

getcontext().prec = 10  # too much precision isn't currently necessary

for x in range(1, 31):
    DEPTH = x
    print(x, main(0))
</code></pre>
<p>Now this is working totally fine for 1 <= <code>x</code> <= 20ish, but it starts taking an eternity for each result after that. I do realize that this is due to the exponentially increasing number of function calls being made at each <code>DEPTH</code> level. It is also clear that I won't be able to calculate the series comfortably up to an arbitrary point. However, the point at which the program slows down is too early for me to clearly identify the limit the series is converging to (it might be 1.75, but I need more <code>DEPTH</code> to be certain).</p>
<p><strong>My question is: How do I get as much out of my script as possible (performance-wise)?</strong></p>
<p><em>I have tried:</em><br>
1. finding the mathematical solution to this problem. (No matching results)<br>
2. finding ways to optimize recursive functions in general. According to multiple sources (e.g. <a href="http://stackoverflow.com/questions/13543019/why-is-recursion-in-python-so-slow">this</a>), Python doesn't optimize tail recursion by default, so I tried switching to an iterative style, but I ran out of ideas on how to accomplish this almost instantly... </p>
<p>Any help is appreciated!</p>
<p>NOTE: I know that I could go about this mathematically instead of "brute-forcing" the limit, but I want to get my program running well, now that I've started...</p>
| 1 | 2016-08-11T12:51:44Z | 38,897,580 | <p>You can store the results of the <code>run1</code>, <code>run2</code> and <code>run3</code> functions in arrays to prevent them from being recalculated every time, since in your example, <code>main(1)</code> calls <code>run1(1)</code>, which calls <code>run3(2)</code> and <code>run2(2)</code>, which in turn call <code>run1(3)</code>, <code>run2(3)</code>, <code>run1(3)</code> (again) and <code>run3(3)</code>, and so on.</p>
<p>You can see that <code>run1(3)</code> is being evaluated twice, and this only gets worse as the number increases; if we count the number of times each function is called, these are the results:</p>
<pre><code>        run1     run2     run3
 1         1        0        0
 2         0        1        1
 3         1        2        1
 4         3        2        3
 5         5        6        5
 6        11       10       11
 7        21       22       21
 8        43       42       43
 9        85       86       85
...
20   160,000 each (approx.)
...
30   160 million each (approx.)
</code></pre>
<p>This is actually a variant of Pascal's triangle, and you could <em>probably</em> figure out the results mathematically; but since here you asked for a non-mathematical optimization, just notice how the number of calls increases exponentially; it doubles at each iteration. This is even worse since each call will generate thousands of subsequent calls with higher values, which is what you want to avoid.</p>
<p>Therefore what you want to do is store the value of each call, so that the function does not need to be called a thousand times (and itself make thousands more calls) to always get the same result. This is called <a href="https://en.wikipedia.org/wiki/Memoization" rel="nofollow">memoization</a>.</p>
<p>Here is an example solution in pseudo code:</p>
<pre><code>before calling main, declare the arrays val1, val2, val3, all of size DEPTH, and fill them with -1

function run1(c)                   # same thing for run2 and run3
    c += 1
    if c <= DEPTH
        local3 = val3(c)           # read run3(c)
        if local3 is -1            # if run3(c) hasn't been computed yet
            local3 = run3(c)       # we compute it
            val3(c) = local3       # and store it into the array
        local2 = val2(c)           # same with run2(c)
        if local2 is -1
            local2 = run2(c)
            val2(c) = local2
        return D(1) + local3/local2    # we use the value we got from the array or from the computation
    else
        return D(1)
</code></pre>
<p>Here I use -1 since your functions seem to only generate positive numbers, and -1 is an easy placeholder for the empty cells. In other cases you might have to use an object as Cabu below me did. I however think this would be slower due to the cost of retrieving properties in an object versus reading an array, but I might be wrong about that. Either way, your code should be much, much faster with it is now, with a cost of O(n) instead of O(2^n).</p>
<p>This would technically allow your code to run forever at a constant speed, but the recursion will actually cause an early stack overflow. You might still be able to get to a depth of several thousands before that happens though.</p>
<p><strong>Edit:</strong> As ShadowRanger added in the comments, you can keep your original code and simply add <code>@lru_cache(maxsize=n)</code> before each of your <code>run1</code>, <code>run2</code> and <code>run3</code> functions, where n is one of the first powers of two above DEPTH (for example, 32 if depth is 25). This might require an import directive to work.</p>
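<p>A runnable sketch of that <code>lru_cache</code> suggestion (here an unbounded cache via <code>maxsize=None</code> is used, and the depth is passed as an argument so that cached entries from different <code>DEPTH</code> values cannot collide):</p>

```python
from decimal import Decimal as D, getcontext
from functools import lru_cache

getcontext().prec = 30

@lru_cache(maxsize=None)
def run1(c, depth):
    c += 1
    if c <= depth:
        return D(1) + run3(c, depth) / run2(c, depth)
    return D(1)

@lru_cache(maxsize=None)
def run2(c, depth):
    c += 1
    if c <= depth:
        return D(2) + run2(c, depth) / run1(c, depth)
    return D(2)

@lru_cache(maxsize=None)
def run3(c, depth):
    c += 1
    if c <= depth:
        return D(3) + run1(c, depth) / run3(c, depth)
    return D(3)

print(run1(0, 1))    # 1 + 3/2 = 2.5
print(run1(0, 300))  # roughly 3*300 evaluations instead of ~2^300
```

<p>Each distinct <code>(c, depth)</code> pair is now computed once, so the cost per depth level drops from exponential to linear.</p>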
| 1 | 2016-08-11T13:20:03Z | [
"python",
"performance",
"math",
"recursion",
"optimization"
] |
Python - How to improve efficiency of complex recursive function? | 38,896,939 | <p>In <a href="https://www.youtube.com/watch?v=jcKRGpMiVTw" rel="nofollow">this video</a> by Mathologer on, amongst other things, infinite sums, there are 3 different infinite sums shown at 9:25, when the video suddenly freezes and an elephant deity pops up, challenging the viewer to find "the probable values" of the expressions. I wrote the following script to approximate the last of the three (i.e. 1 + 3.../2...) with increasing precision:</p>
<pre><code>from decimal import Decimal as D, getcontext  # for accurate results

def main(c):  # faster code when functions defined locally (I think)
    def run1(c):
        c += 1
        if c <= DEPTH:
            return D(1) + run3(c)/run2(c)
        else:
            return D(1)

    def run2(c):
        c += 1
        if c <= DEPTH:
            return D(2) + run2(c)/run1(c)
        else:
            return D(2)

    def run3(c):
        c += 1
        if c <= DEPTH:
            return D(3) + run1(c)/run3(c)
        else:
            return D(3)

    return run1(c)

getcontext().prec = 10  # too much precision isn't currently necessary

for x in range(1, 31):
    DEPTH = x
    print(x, main(0))
</code></pre>
<p>Now this is working totally fine for 1 <= <code>x</code> <= 20ish, but it starts taking an eternity for each result after that. I do realize that this is due to the exponentially increasing number of function calls being made at each <code>DEPTH</code> level. It is also clear that I won't be able to calculate the series comfortably up to an arbitrary point. However, the point at which the program slows down is too early for me to clearly identify the limit the series is converging to (it might be 1.75, but I need more <code>DEPTH</code> to be certain).</p>
<p><strong>My question is: How do I get as much out of my script as possible (performance-wise)?</strong></p>
<p><em>I have tried:</em><br>
1. finding the mathematical solution to this problem. (No matching results)<br>
2. finding ways to optimize recursive functions in general. According to multiple sources (e.g. <a href="http://stackoverflow.com/questions/13543019/why-is-recursion-in-python-so-slow">this</a>), Python doesn't optimize tail recursion by default, so I tried switching to an iterative style, but I ran out of ideas on how to accomplish this almost instantly... </p>
<p>Any help is appreciated!</p>
<p>NOTE: I know that I could go about this mathematically instead of "brute-forcing" the limit, but I want to get my program running well, now that I've started...</p>
| 1 | 2016-08-11T12:51:44Z | 38,897,615 | <p>With some memoization, you could get up to the stack-overflow limit:</p>
<pre><code>from decimal import Decimal as D, getcontext  # for accurate results

def main(c):  # faster code when functions defined locally (I think)
    # Store partial results of run1, run2 and run3. These live here rather than
    # being passed as parameters of the run functions so they can easily be
    # reset for every new DEPTH.
    mrun1, mrun2, mrun3 = {}, {}, {}

    def run1(c):
        c += 1
        if c in mrun1:  # if partial result already computed, return it
            return mrun1[c]
        if c <= DEPTH:
            v = D(1) + run3(c) / run2(c)
        else:
            v = D(1)
        mrun1[c] = v  # else store it and return the value
        return v

    def run2(c):
        c += 1
        if c in mrun2:
            return mrun2[c]
        if c <= DEPTH:
            v = D(2) + run2(c) / run1(c)
        else:
            v = D(2)
        mrun2[c] = v
        return v

    def run3(c):
        c += 1
        if c in mrun3:
            return mrun3[c]
        if c <= DEPTH:
            v = D(3) + run1(c) / run3(c)
        else:
            v = D(3)
        mrun3[c] = v
        return v

    return run1(c)

getcontext().prec = 150  # too much precision isn't currently necessary

for x in range(1, 997):
    DEPTH = x
    print(x, main(0))
</code></pre>
<p>Python will stack overflow if you go over 997.</p>
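<p>For completeness: since each level only depends on the level below it, the recursion (and with it the stack limit) can be removed entirely by iterating bottom-up from the base case. A sketch (the name <code>main_iterative</code> is invented here):</p>

```python
from decimal import Decimal as D, getcontext

getcontext().prec = 30

def main_iterative(depth):
    # base case: once the counter exceeds depth, run1/run2/run3 return 1, 2, 3
    r1, r2, r3 = D(1), D(2), D(3)
    for _ in range(depth):
        # simultaneous update: run1(c) uses run3(c+1) and run2(c+1), etc.
        r1, r2, r3 = D(1) + r3 / r2, D(2) + r2 / r1, D(3) + r1 / r3
    return r1

print(main_iterative(1))      # 2.5, same as the recursive version at DEPTH = 1
print(main_iterative(10000))  # far beyond any recursion limit
```

<p>This runs in linear time and constant memory, so the depth is limited only by how long you are willing to wait.</p>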
| 0 | 2016-08-11T13:21:45Z | [
"python",
"performance",
"math",
"recursion",
"optimization"
] |
using Plotly with pycharm | 38,897,010 | <p>I am trying to use <strong>Plotly</strong> with <strong>PyCharm</strong>. When I run the code (which I took from the Plotly getting started <a href="https://plot.ly/python/getting-started/#initialization-for-offline-plotting" rel="nofollow">page</a>) in the terminal it is OK, but when I use it with PyCharm I get the error: </p>
<p>"ImportError: No module named 'plotly.graph_objs'; 'plotly' is not a packageé"</p>
<p><a href="http://i.stack.imgur.com/Fo5Pb.png" rel="nofollow">Code on pycharm</a></p>
<p>Any idea where the problem could be? The plotly module works in the terminal but not in the editor.
Could this problem be related to what is mentioned in this question? </p>
<blockquote>
<p>Blockquote <a href="http://stackoverflow.com/questions/38021683/python-dynamic-objects-not-defined-anywhere-plotly-lib">Python dynamic objects not defined anywhere? (Plotly lib)</a></p>
</blockquote>
<p>Thx </p>
| 0 | 2016-08-11T12:54:55Z | 38,897,187 | <p>In Pycharm you have to download it. </p>
<p>Go to File --> Settings --> Project --> Project Interpreter --> click the plus on the right side --> search for your module and download it!</p>
<p>That's it!</p>
| 0 | 2016-08-11T13:02:22Z | [
"python",
"pycharm",
"plotly"
] |
using Plotly with pycharm | 38,897,010 | <p>I am trying to use <strong>Plotly</strong> with <strong>PyCharm</strong>. When I run the code (which I took from the Plotly getting started <a href="https://plot.ly/python/getting-started/#initialization-for-offline-plotting" rel="nofollow">page</a>) in the terminal it is OK, but when I use it with PyCharm I get the error: </p>
<p>"ImportError: No module named 'plotly.graph_objs'; 'plotly' is not a packageé"</p>
<p><a href="http://i.stack.imgur.com/Fo5Pb.png" rel="nofollow">Code on pycharm</a></p>
<p>Any idea where the problem could be? The plotly module works in the terminal but not in the editor.
Could this problem be related to what is mentioned in this question? </p>
<blockquote>
<p>Blockquote <a href="http://stackoverflow.com/questions/38021683/python-dynamic-objects-not-defined-anywhere-plotly-lib">Python dynamic objects not defined anywhere? (Plotly lib)</a></p>
</blockquote>
<p>Thx </p>
| 0 | 2016-08-11T12:54:55Z | 39,269,308 | <p>Your script is named <code>plotly.py</code>. In the first line where you import plotly it loads your script and not the plotly package. Change the name of your script to <code>plot.py</code> or something other than plotly and it should work.</p>
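<p>The mechanism behind this is Python's module search order: the directory of the running script sits at the front of <code>sys.path</code>, so a local file named like a package shadows the installed one. A small self-contained sketch of that behaviour (the module name <code>shadow_demo</code> is invented for the demonstration):</p>

```python
import os
import sys
import tempfile

# Create a directory with a module file in it, then put that directory at the
# front of sys.path, which is where the script's own directory normally sits.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "shadow_demo.py"), "w") as f:
    f.write("where = 'local file'\n")
sys.path.insert(0, tmp)

import shadow_demo
print(shadow_demo.where)     # the local file wins over anything later on sys.path
print(shadow_demo.__file__)  # inspecting __file__ shows which file really loaded
```

<p>Printing <code>plotly.__file__</code> right after the import is a quick way to confirm whether your own <code>plotly.py</code> was picked up instead of the installed package.</p>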
| 1 | 2016-09-01T10:38:23Z | [
"python",
"pycharm",
"plotly"
] |
Escape all metacharacters in Python | 38,897,045 | <p>I need to search for patterns which may have many metacharacters. Currently I use a long regex.</p>
<pre><code>prodObjMatcher=re.compile(r"""^(?P<nodeName>[\w\/\:\[\]\<\>\@\$]+)""", re.S|re.M|re.I|re.X)
</code></pre>
<p>(my actual pattern is very long so I just pasted some relevant portion on which I need help)</p>
<p>This is especially painful when I need to write combinations of such patterns in a single re compilation.</p>
<p>Is there a pythonic way for shortening the pattern length? </p>
| 0 | 2016-08-11T12:56:39Z | 38,897,689 | <p>Look, your pattern can be reduced to</p>
<pre><code>r"""^(?P<nodeName>[]\w/:[<>@$]+).*?"""
</code></pre>
<p>Note that you do not have to ever escape any non-word character in the character classes, except for shorthand classes, <code>^</code>, <code>-</code>, <code>]</code>, and <code>\</code>. There are ways to keep even those (except for <code>\</code>) unescaped in the character class: </p>
<ul>
<li><code>]</code> at the start of the character class</li>
<li><code>-</code> at the start/end of the character class</li>
<li><code>^</code> - should only be escaped if you place it at the start of the character class as a literal symbol.</li>
</ul>
<p>Outside a character class, you must escape <code>\</code>, <code>[</code>, <code>(</code>, <code>)</code>, <code>+</code>, <code>$</code>, <code>^</code>, <code>*</code>, <code>?</code>, <code>.</code>.</p>
<p>Note that <code>/</code> is not a special regex metacharacter in Python regex patterns, and does not have to be escaped.</p>
<p>Use raw string literals when defining your regex patterns to avoid issues (like confusing word boundary <code>r'\b'</code> and a backspace <code>'\b'</code>).</p>
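<p>A quick check that the reduced pattern still captures all of those metacharacters (the sample string here is invented for the test):</p>

```python
import re

# ] is literal because it comes first in the class; /, :, [, <, >, @, $ need no escaping
pattern = re.compile(r"""^(?P<nodeName>[]\w/:[<>@$]+)""", re.S | re.M | re.I | re.X)

m = pattern.match("node/1:[a]<b>@$tail rest of line")
print(m.group("nodeName"))  # node/1:[a]<b>@$tail
```

<p>The match stops at the first space, exactly as the longer escaped character class would.</p>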
| 2 | 2016-08-11T13:25:15Z | [
"python",
"regex",
"metacharacters"
] |
Tensorflow: tensor multiplication row-by-row with more different matrices | 38,897,084 | <p>I have a matrix A which is defined as a tensor in tensorflow, of n rows and p columns. Moreover, I have, say, k matrices B1,..., Bk with p rows and q columns. My goal is to obtain a resulting matrix C of n rows and q columns where each row of C is the matrix product of the corresponding row in A with one of the B matrices. Which B to choose is determined by a given index vector I of dimension n that can take values ranging from 1 to k. In my case, the B are weight variables while I is another tensor variable given as input. </p>
<p>An example of code in numpy would look as follows:</p>
<pre><code>A = array([[1, 0, 1],
           [0, 0, 1],
           [1, 1, 0],
           [0, 1, 0]])

B1 = array([[1, 1],
            [2, 1],
            [3, 6]])

B2 = array([[1, 5],
            [3, 2],
            [0, 2]])

B = [B1, B2]
I = [1, 0, 0, 1]

n = A.shape[0]
p = A.shape[1]
q = B1.shape[1]

C = np.zeros(shape=(n, q))
for i in xrange(n):
    C[i, :] = np.dot(A[i, :], B[I[i]])
</code></pre>
<p>How can this be translated into TensorFlow?</p>
<p>In my specific case the variables are defined as:</p>
<pre><code>A = tf.placeholder("float", [None, p])
B1 = tf.Variable(tf.random_normal(p,q))
B2 = tf.Variable(tf.random_normal(p,q))
I = tf.placeholder("float",[None])
</code></pre>
| 0 | 2016-08-11T12:58:17Z | 38,903,735 | <p>This is a bit tricky and there are probably better solutions. Taking your first example, my approach computes C as follows:</p>
<pre><code>C = diag([0,1,1,0]) * A * B1 + diag([1,0,0,1]) * A * B2
</code></pre>
<p>where <code>diag([0,1,1,0])</code> is the diagonal matrix having vector <code>[0,1,1,0]</code> in its diagonal. This can be achieved through <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/math_ops.html#diag" rel="nofollow">tf.diag()</a> in TensorFlow. </p>
<p>For convenience, let me assume that k<=n (otherwise some B matrices would remain unused). The following script obtains those diagonal values from vector I and computes C as mentioned above:</p>
<pre><code>k = 2
n = 4
p = 3
q = 2

a = array([[1, 0, 1],
           [0, 0, 1],
           [1, 1, 0],
           [0, 1, 0]])
index_input = [1, 0, 0, 1]

import tensorflow as tf

# Creates a dim·dim tensor having the same vector 'vector' in every row
def square_matrix(vector, dim):
    return tf.reshape(tf.tile(vector, [dim]), [dim, dim])

A = tf.placeholder(tf.float32, [None, p])
B = tf.Variable(tf.random_normal(shape=[k, p, q]))
# For the first example (with k=2): B = tf.constant([[[1, 1], [2, 1], [3, 6]], [[1, 5], [3, 2], [0, 2]]], tf.float32)
C = tf.Variable(tf.zeros((n, q)))
I = tf.placeholder(tf.int32, [None])

# Create a n·n tensor 'indices_matrix' having indices_matrix[i]=I for 0<=i<n (each row vector is I)
indices_matrix = square_matrix(I, n)
# Create a n·n tensor 'row_matrix' having row_matrix[i]=[i,...,i] for 0<=i<n (each row vector is a vector of i's)
row_matrix = tf.transpose(square_matrix(tf.range(0, n, 1), n))
# Find diagonal values by comparing tensors indices_matrix and row_matrix
equal = tf.cast(tf.equal(indices_matrix, row_matrix), tf.float32)

# Compute C
for i in range(k):
    diag = tf.diag(tf.gather(equal, i))
    mul = tf.matmul(diag, tf.matmul(A, tf.gather(B, i)))
    C = C + mul

sess = tf.Session()
sess.run(tf.initialize_all_variables())
print(sess.run(C, feed_dict={A: a, I: index_input}))
</code></pre>
<p>As an improvement, C may be computed using a vectorized implementation instead of using a for loop.</p>
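<p>The identity this approach relies on can be checked in plain Python, with no TensorFlow needed: selecting <code>B[I[i]]</code> per row gives the same C as summing <code>diag(mask_j) · A · B_j</code> over j, where <code>mask_j[i]</code> is 1 exactly when <code>I[i] == j</code>. A small sketch using the question's example data:</p>

```python
def matmul(X, Y):
    # naive list-of-lists matrix product, enough for this check
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[1, 0, 1], [0, 0, 1], [1, 1, 0], [0, 1, 0]]
B = [[[1, 1], [2, 1], [3, 6]], [[1, 5], [3, 2], [0, 2]]]
I = [1, 0, 0, 1]
n, q = len(A), len(B[0][0])

# direct row-wise selection: C[i] = A[i] . B[I[i]]
direct = [matmul([A[i]], B[I[i]])[0] for i in range(n)]

# masked-sum formulation: C = sum_j diag(mask_j) . A . B_j
masked = [[0] * q for _ in range(n)]
for j, Bj in enumerate(B):
    ABj = matmul(A, Bj)
    for i in range(n):
        if I[i] == j:  # diag(mask_j) zeroes out every other row
            masked[i] = [m + ab for m, ab in zip(masked[i], ABj[i])]

print(direct)  # [[1, 7], [3, 6], [3, 2], [3, 2]]
assert direct == masked
```
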
| 0 | 2016-08-11T18:30:39Z | [
"python",
"machine-learning",
"tensorflow",
"deep-learning",
"matrix-multiplication"
] |
Tensorflow: tensor multiplication row-by-row with more different matrices | 38,897,084 | <p>I have a matrix A which is defined as a tensor in tensorflow, of n rows and p columns. Moreover, I have, say, k matrices B1,..., Bk with p rows and q columns. My goal is to obtain a resulting matrix C of n rows and q columns where each row of C is the matrix product of the corresponding row in A with one of the B matrices. Which B to choose is determined by a given index vector I of dimension n that can take values ranging from 1 to k. In my case, the B are weight variables while I is another tensor variable given as input. </p>
<p>An example of code in numpy would look as follows:</p>
<pre><code>A = array([[1, 0, 1],
           [0, 0, 1],
           [1, 1, 0],
           [0, 1, 0]])

B1 = array([[1, 1],
            [2, 1],
            [3, 6]])

B2 = array([[1, 5],
            [3, 2],
            [0, 2]])

B = [B1, B2]
I = [1, 0, 0, 1]

n = A.shape[0]
p = A.shape[1]
q = B1.shape[1]

C = np.zeros(shape=(n, q))
for i in xrange(n):
    C[i, :] = np.dot(A[i, :], B[I[i]])
</code></pre>
<p>How can this be translated in tensor flow?</p>
<p>In my specific case the variables are defined as:</p>
<pre><code>A = tf.placeholder("float", [None, p])
B1 = tf.Variable(tf.random_normal(p,q))
B2 = tf.Variable(tf.random_normal(p,q))
I = tf.placeholder("float",[None])
</code></pre>
| 0 | 2016-08-11T12:58:17Z | 38,904,771 | <p>Just do 2 matrix multiplications</p>
<pre><code>A1 = A[0:3:3,...] # this will get the first last index of your original but just make a new matrix
A2 = A[1:2]
</code></pre>
<p>In TensorFlow:</p>
<pre><code>A1 = tf.constant([matrix elements go here])
A2 = tf.constant([matrix elements go here])
B = ...
B1 = tf.matmul(A1,B)
B2 = tf.matmul(A2,B)
C = tf.pack([B1,B2])
</code></pre>
<p>Granted, if you need to reorganize the <code>C</code> tensor, you can also use <code>gather</code>:</p>
<pre><code>C = tf.gather(C,[0,3,2,1])
</code></pre>
| 0 | 2016-08-11T19:36:43Z | [
"python",
"machine-learning",
"tensorflow",
"deep-learning",
"matrix-multiplication"
] |
Axes Subplot y size | 38,897,091 | <p>I have two subplots that share the x-axes. The first one has data and a fit function; the second one shows the difference between the data and the fit function. In the figure both subplots have the same y axis size (in pixels). Now I want the y axis of the data and the fit to be bigger than the axis of the errors. My code is the following:</p>
<pre><code>import matplotlib.pyplot as plt
f, axarr = plt.subplots(2, sharex=True,figsize=(15, 12))
axarr[0].scatter(x, data , facecolors='none', edgecolors='crimson')
axarr[0].plot(x, fit, color='g',linewidth=1.5)
axarr[0].set_ylim([18,10])
axarr[1].plot(x,data-fit,color='k',linewidth=1.5)
axarr[1].set_ylim([-0.4,0.4])
yticks = axarr[1].yaxis.get_major_ticks()  # hide the topmost tick label of the lower plot
yticks[-1].label1.set_visible(False)
plt.subplots_adjust(hspace=0.)
</code></pre>
<p>Is there any code that sets the size of the second plot?</p>
 | 1 | 2016-08-11T12:58:36Z | 38,900,223 | <p>Take a look at <a href="http://stackoverflow.com/questions/10388462/matplotlib-different-size-subplots">this example, using gridspec</a>. I believe it is exactly what you want. Below is the example adapted for your case. <strong>Edited to also share the x-axis</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
# generate some data
x = np.arange(0, 10, 0.2)
y = np.sin(x)
# plot it
fig = plt.figure(figsize=(8, 6))
gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1])
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1], sharex=ax0) # <---- sharex=ax0 shares the x-axis of ax1 with ax0
ax0.plot(x, y)
ax1.plot(y, x)
plt.show()
</code></pre>
<p>Or even simpler by following <a href="http://stackoverflow.com/a/35881382/2768172">Hagne's answer</a> in the first link:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 10, 0.2)
y = np.sin(x)
f, (a0, a1) = plt.subplots(2,1, gridspec_kw = {'height_ratios':[1, 3]}, sharex=True) # <---- sharex=True will share the xaxis between the two axes
a0.plot(x, y)
a1.plot(y, x)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/eusgI.png" rel="nofollow"><img src="http://i.stack.imgur.com/eusgI.png" alt="enter image description here"></a></p>
| 0 | 2016-08-11T15:14:48Z | [
"python",
"matplotlib"
] |
Python reverse escape characters | 38,897,289 | <p>I want to convert the escaped characters back to the original form:</p>
<pre><code>>>> myString="\\10.10.10.10\view\a"
>>> myString
'\\10.10.10.10\x0biew\x07'
>>>desiredString=fun(myString)
>>>print desiredString
'\\10.10.10.10\view\a'
</code></pre>
<p>I researched quite a lot and could not find a solution.
I am using Python 2.6.</p>
| 1 | 2016-08-11T13:07:00Z | 38,897,400 | <pre><code>myString=r"\\10.10.10.10\view\a"
myString
'\\\\10.10.10.10\\view\\a'
print myString
\\10.10.10.10\view\a
</code></pre>
<p><code>r'...'</code> is a raw string literal: backslashes in it are not treated as escape characters.</p>
| 0 | 2016-08-11T13:12:12Z | [
"python",
"string",
"escaping",
"character",
"backslash"
] |
Python reverse escape characters | 38,897,289 | <p>I want to convert the escaped characters back to the original form:</p>
<pre><code>>>> myString="\\10.10.10.10\view\a"
>>> myString
'\\10.10.10.10\x0biew\x07'
>>>desiredString=fun(myString)
>>>print desiredString
'\\10.10.10.10\view\a'
</code></pre>
<p>I researched quite a lot and could not find a solution.
I am using Python 2.6.</p>
| 1 | 2016-08-11T13:07:00Z | 38,897,585 | <p>The <code>\v</code> and <code>\a</code> are interpreted as specials characters while printing. Don't worry, just add another backslash to avoid this.</p>
<pre><code>>>> myString="\\10.10.10.10\\view\\a"
>>> print(myString)
\10.10.10.10\view\a
</code></pre>
<p>Edit:
Or avoid the problem entirely with a raw string literal (<code>r"..."</code>), which leaves the backslashes uninterpreted (see also <a href="https://docs.python.org/2/library/repr.html" rel="nofollow">repr</a> for inspecting a string's escaped form):</p>
<pre><code>>>> myString=r"\\10.10.10.10\view\a"
>>> print myString
\\10.10.10.10\view\a
</code></pre>
| 0 | 2016-08-11T13:20:21Z | [
"python",
"string",
"escaping",
"character",
"backslash"
] |
Python reverse escape characters | 38,897,289 | <p>I want to convert the escaped characters back to the original form:</p>
<pre><code>>>> myString="\\10.10.10.10\view\a"
>>> myString
'\\10.10.10.10\x0biew\x07'
>>>desiredString=fun(myString)
>>>print desiredString
'\\10.10.10.10\view\a'
</code></pre>
<p>I researched quite a lot and could not find a solution.
I am using Python 2.6.</p>
| 1 | 2016-08-11T13:07:00Z | 38,900,247 | <p>Ideally you should use the python builtin string functions or parsing properly your input data so you'd avoid these post-transformations. In fact, these custom solutions are not encouraged at all, but here you go:</p>
<pre><code>def fun(input_str):
cache = {
'\'':"\\'",
'\"':'\\"',
'\a':'\\a',
'\b':'\\b',
'\f':'\\f',
'\n':'\\n',
'\r':'\\r',
'\t':'\\t',
'\v':'\\v'
}
return "".join([cache.get(m,m) for m in list(input_str)])
tests = [
"\\10.10.10.10\view\a",
"\'",
'\"',
'\a',
'\b',
'\f',
'\n',
'\r',
'\t',
'\v'
]
for t in tests:
print repr(t)," => ",fun(t)
</code></pre>
| 1 | 2016-08-11T15:15:47Z | [
"python",
"string",
"escaping",
"character",
"backslash"
] |
Pip Wheel and coverage: command not found error | 38,897,291 | <p>I want to use wheels on my Linux server as it seems much faster, but when I do:</p>
<pre><code>pip install wheel
pip wheel -r requirements_dev.txt
</code></pre>
<p>Which contains the following packages</p>
<pre><code>nose
django_coverage
coverage
</code></pre>
<p>I get <strong>coverage: command not found</strong>; it's as if it is not being installed.</p>
<p>Is there a fallback to a plain <code>pip install</code> if a wheel is not found, or have I not understood/set this up correctly?</p>
| 1 | 2016-08-11T13:07:04Z | 38,897,336 | <p>Can you try this?</p>
<pre><code>virtualenv venv
source venv/bin/activate
pip install -r requirements_dev.txt
</code></pre>
<p>Also getting this with using wheel:-</p>
<pre><code>pip wheel -r check.txt
Collecting nose (from -r check.txt (line 1))
Using cached nose-1.3.7-py2-none-any.whl
Saved ./nose-1.3.7-py2-none-any.whl
Collecting django_coverage (from -r check.txt (line 2))
Saved ./django_coverage-1.2.4-cp27-none-any.whl
Collecting coverage (from -r check.txt (line 3))
Using cached coverage-4.2-cp27-cp27m-macosx_10_10_x86_64.whl
Saved ./coverage-4.2-cp27-cp27m-macosx_10_10_x86_64.whl
Skipping nose, due to already being wheel.
Skipping django-coverage, due to already being wheel.
Skipping coverage, due to already being wheel.
</code></pre>
| 3 | 2016-08-11T13:09:24Z | [
"python",
"django",
"python-3.x",
"pip",
"wheel"
] |
Pip Wheel and coverage: command not found error | 38,897,291 | <p>I want to use wheels on my Linux server as it seems much faster, but when I do:</p>
<pre><code>pip install wheel
pip wheel -r requirements_dev.txt
</code></pre>
<p>Which contains the following packages</p>
<pre><code>nose
django_coverage
coverage
</code></pre>
<p>I get <strong>coverage: command not found</strong>; it's as if it is not being installed.</p>
<p>Is there a fallback to a plain <code>pip install</code> if a wheel is not found, or have I not understood/set this up correctly?</p>
| 1 | 2016-08-11T13:07:04Z | 38,897,519 | <p>Installing from wheels is what pip already does by default. <code>pip wheel</code> is for <em>creating</em> wheels from your requirements file.</p>
| 1 | 2016-08-11T13:17:16Z | [
"python",
"django",
"python-3.x",
"pip",
"wheel"
] |
Can I use an iterable which reads a file in parallel_bulk? | 38,897,373 | <p>Currently I have a function which reads a file in chunks, does some work (parsing, formatting) and then <code>yields</code> the data in the format for the <code>elasticsearch</code> bulk loader.</p>
<p>Currently I'm using <a href="http://elasticsearch-py.readthedocs.io/en/master/helpers.html" rel="nofollow"><code>streaming_bulk</code></a>, but I'm wondering is it possible to use <a href="http://elasticsearch-py.readthedocs.io/en/master/helpers.html" rel="nofollow"><code>parallel_bulk</code></a> instead?</p>
<p>Does <a href="http://elasticsearch-py.readthedocs.io/en/master/helpers.html" rel="nofollow"><code>parallel_bulk</code></a> mean that it sends data to <code>elasticsearch</code> concurrently, or does it mean that it calls the iterator concurrently?</p>
<p><strong>Basically, what exactly does <code>parallel_bulk</code> use the extra threads for?</strong></p>
 | 1 | 2016-08-11T13:11:05Z | 38,909,059 | <p>Short answer: <code>parallel_bulk</code> sends data to elasticsearch concurrently.</p>
<p>From the code snippet <a href="https://github.com/elastic/elasticsearch-py/blob/master/elasticsearch/helpers/__init__.py#L229" rel="nofollow">here</a>, we see that <code>parallel_bulk</code> concurrently sends chunks of actions to elasticsearch.
It uses Python's <a href="https://docs.python.org/2/library/multiprocessing.html#module-multiprocessing.dummy" rel="nofollow">multiprocessing.dummy</a> module.<br>
The data is chunked, and each chunk is passed on to a thread in the pool.</p>
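<p>A stripped-down sketch of that pattern (illustrative helper names, not the actual elasticsearch-py code): split the actions into chunks, then hand each chunk to a <code>multiprocessing.dummy</code> thread pool.</p>

```python
from multiprocessing.dummy import Pool  # thread-based pool with the multiprocessing API

def chunked(iterable, size):
    # Yield successive lists of at most `size` items.
    chunk = []
    for item in iterable:
        chunk.append(item)
        if len(chunk) == size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

def send_chunk(chunk):
    # Stand-in for sending one bulk request; returns the number of actions sent.
    return len(chunk)

pool = Pool(4)
results = pool.map(send_chunk, chunked(range(10), 3))
pool.close()
pool.join()
```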
| 1 | 2016-08-12T02:46:21Z | [
"python",
"elasticsearch"
] |
How to remove filters in advance search in odoo | 38,897,375 | <p>I want to remove filters which are in the advanced search in Odoo tree views. It's easy to remove the filters and group-bys that display at the top of the tree view, but in the advanced search all filters are displayed; I want to keep some of them and remove the others. Is there any way to remove advanced filters in Odoo?</p>
| 1 | 2016-08-11T13:11:08Z | 38,905,738 | <p>As far as i can see <a href="https://github.com/odoo/odoo/blob/8.0/addons/web/static/src/js/search.js#L1989" rel="nofollow">here</a> <code>models.Model</code> <code>fields_get()</code> is called to get the advanced search field list. You should either work around the javascript code or override the <code>fields_get()</code>.</p>
| 1 | 2016-08-11T20:40:44Z | [
"python",
"filter",
"openerp",
"odoo-8"
] |
Customize logging for external/third-party libs | 38,897,399 | <p>I followed the advice of the django docs, and use logging like this:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
def today(...):
logger.info('Sun is shining, the weather is sweet')
</code></pre>
<p>With my current configuration, the output looks like this:</p>
<pre><code>2016-08-11 14:54:06 mylib.foo.today: INFO Sun is shining, the weather is sweet
</code></pre>
<p>Unfortunately some libraries which I can't modify use logging like this:</p>
<pre><code>import logging
def third_party(...):
logging.info('Make you want to move your dancing feet')
</code></pre>
<p>The output unfortunately looks like this:</p>
<pre><code>2016-08-09 08:28:04 root.third_party: INFO Make you want to move your dancing feet
</code></pre>
<p>I want to see this:</p>
<pre><code>2016-08-09 08:28:04 other_lib.some_file.third_party: INFO Make you want to move your dancing feet
</code></pre>
<p>Difference: </p>
<p><em>root.third_party</em> ==> <em>other_lib.some_file.third_party</em></p>
<p>I want to see the long version (not <code>root</code>) if code uses <code>logging.info()</code> instead of <code>logger.info()</code></p>
<p><strong>Update</strong></p>
<p>This is not a duplicate of <a href="http://stackoverflow.com/questions/1598823/elegant-setup-of-python-logging-in-django">Elegant setup of Python logging in Django</a>, since the solution of it is:</p>
<p>Start of quote</p>
<p>In each module, I define a logger using</p>
<pre><code>logger = logging.getLogger(__name__)
</code></pre>
<p>End of quote.</p>
<p>No, I won't modify third-party-code which uses <code>logging.info()</code> instead of <code>logger.info()</code>.</p>
<p><strong>Follow Up Question</strong></p>
<p><a href="http://stackoverflow.com/questions/38952681/">Avoid <code>logger=logging.getLogger(__name__)</code> without loosing way to filter logs</a></p>
 | 12 | 2016-08-11T13:12:09Z | 38,898,076 | <p>That's because they're using the root logger, which is what you get by default when you just do:</p>
<pre><code>import logging
logging.info("Hi! I'm the root logger!")
</code></pre>
<p>If you want to do something different, you have two (or three) options. The best would be to use the <a href="https://docs.python.org/3/library/logging.html#logrecord-attributes" rel="nofollow">Log Record format options</a>. Alternatively, you could monkey patch the libraries that you're using, e.g.</p>
<pre><code>import logging
import mod_with_lazy_logging
mod_with_lazy_logging.logger = logging.getLogger(mod_with_lazy_logging.__name__)
</code></pre>
<p>Or you could do something gnarly with parsing the ast and rewriting their bits of logging code. But, don't do that.</p>
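<p>A minimal sketch of the record-attribute route: even when third-party code logs through the root logger, attributes such as <code>%(funcName)s</code> are taken from the call site, so a formatter can still show where the message came from.</p>

```python
import io
import logging

# Capture the formatted output so we can inspect it.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(name)s %(funcName)s: %(levelname)s %(message)s'))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

def third_party():
    # Logs via the module-level function, i.e. the root logger, like the library does.
    logging.info('Make you want to move your dancing feet')

third_party()
print(stream.getvalue().strip())
```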
| 3 | 2016-08-11T13:41:46Z | [
"python",
"django",
"logging"
] |
Customize logging for external/third-party libs | 38,897,399 | <p>I followed the advice of the django docs, and use logging like this:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
def today(...):
logger.info('Sun is shining, the weather is sweet')
</code></pre>
<p>With my current configuration, the output looks like this:</p>
<pre><code>2016-08-11 14:54:06 mylib.foo.today: INFO Sun is shining, the weather is sweet
</code></pre>
<p>Unfortunately some libraries which I can't modify use logging like this:</p>
<pre><code>import logging
def third_party(...):
logging.info('Make you want to move your dancing feet')
</code></pre>
<p>The output unfortunately looks like this:</p>
<pre><code>2016-08-09 08:28:04 root.third_party: INFO Make you want to move your dancing feet
</code></pre>
<p>I want to see this:</p>
<pre><code>2016-08-09 08:28:04 other_lib.some_file.third_party: INFO Make you want to move your dancing feet
</code></pre>
<p>Difference: </p>
<p><em>root.third_party</em> ==> <em>other_lib.some_file.third_party</em></p>
<p>I want to see the long version (not <code>root</code>) if code uses <code>logging.info()</code> instead of <code>logger.info()</code></p>
<p><strong>Update</strong></p>
<p>This is not a duplicate of <a href="http://stackoverflow.com/questions/1598823/elegant-setup-of-python-logging-in-django">Elegant setup of Python logging in Django</a>, since the solution of it is:</p>
<p>Start of quote</p>
<p>In each module, I define a logger using</p>
<pre><code>logger = logging.getLogger(__name__)
</code></pre>
<p>End of quote.</p>
<p>No, I won't modify third-party-code which uses <code>logging.info()</code> instead of <code>logger.info()</code>.</p>
<p><strong>Follow Up Question</strong></p>
<p><a href="http://stackoverflow.com/questions/38952681/">Avoid <code>logger=logging.getLogger(__name__)</code> without loosing way to filter logs</a></p>
| 12 | 2016-08-11T13:12:09Z | 38,899,045 | <p>As Wayne Werner suggested, I would use the Log Record format options. Here's an example.</p>
<p>File 1: <code>external_module</code></p>
<pre><code>import logging
def third_party():
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger()
logger.info("Hello from %s!"%__name__)
</code></pre>
<p>File 2: <code>main</code></p>
<pre><code>import external_module
import logging
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(module)s.%(funcName)s: %(levelname)s %(message)s')
logger = logging.getLogger(__name__)
def cmd():
logger.info("Hello from %s!"%__name__)
external_module.third_party()
cmd()
</code></pre>
<p>Output:</p>
<pre><code>2016-08-11 09:18:17,993 main.cmd: INFO Hello from __main__!
2016-08-11 09:18:17,993 external_module.third_party(): INFO Hello from external_module!
</code></pre>
| 3 | 2016-08-11T14:22:33Z | [
"python",
"django",
"logging"
] |
Is there a faster way of doing full row comparisons on a small pandas dataframe than using loops and iloc? | 38,897,449 | <p>I have a large number of small pandas dataframes on which I have to do full row comparisons and write the results into new dataframes which will get concatenated later.</p>
<p>For the row comparisons I'm doing a double loop over the length of the dataframe using iloc. I don't know if there is a faster way; the way I'm doing it seems really slow:</p>
<pre><code># -*- coding: utf-8 -*-
import pandas as pd
import time
def processFrames1(DF):
LL = []
for i in range(len(DF)):
for j in range(len(DF)):
if DF.iloc[i][0] != DF.iloc[j][0]:
T = {u'T1':DF.iloc[i][0]}
T[u'T2'] = DF.iloc[j][0]
T[u'T3'] = 1
if DF.iloc[i][2] > DF.iloc[j][2]:
T[u'T4'] = 1
elif DF.iloc[i][2] < DF.iloc[j][2]:
T[u'T4'] = -1
else:
T[u'T4'] = 0
if DF.iloc[i][1] < DF.iloc[j][1]:
T[u'T5'] = 1
else:
T[u'T5'] = -1
LL.append(T)
return pd.DataFrame.from_dict(LL)
D = [{'A':'XA','B':1,'C':1.4}\
,{'A':'RT','B':2,'C':10}\
,{'A':'HO','B':3,'C':34}\
,{'A':'NJ','B':4,'C':0.41}\
,{'A':'WF','B':5,'C':114}\
,{'A':'DV','B':6,'C':74}\
,{'A':'KP','B':7,'C':2.4}]
P = pd.DataFrame.from_dict(D)
time0 = time.time()
for i in range(10):
X = processFrames1(P)
print time.time()-time0
print X
</code></pre>
<p>Yielding the result:</p>
<pre><code>0.836999893188
T1 T2 T3 T4 T5
0 XA RT 1 -1 1
1 XA HO 1 -1 1
2 XA NJ 1 1 1
3 XA WF 1 -1 1
4 XA DV 1 -1 1
5 XA KP 1 -1 1
6 RT XA 1 1 -1
7 RT HO 1 -1 1
8 RT NJ 1 1 1
9 RT WF 1 -1 1
10 RT DV 1 -1 1
11 RT KP 1 1 1
12 HO XA 1 1 -1
13 HO RT 1 1 -1
14 HO NJ 1 1 1
15 HO WF 1 -1 1
16 HO DV 1 -1 1
17 HO KP 1 1 1
18 NJ XA 1 -1 -1
19 NJ RT 1 -1 -1
20 NJ HO 1 -1 -1
21 NJ WF 1 -1 1
22 NJ DV 1 -1 1
23 NJ KP 1 -1 1
24 WF XA 1 1 -1
25 WF RT 1 1 -1
26 WF HO 1 1 -1
27 WF NJ 1 1 -1
28 WF DV 1 1 1
29 WF KP 1 1 1
30 DV XA 1 1 -1
31 DV RT 1 1 -1
32 DV HO 1 1 -1
33 DV NJ 1 1 -1
34 DV WF 1 -1 -1
35 DV KP 1 1 1
36 KP XA 1 1 -1
37 KP RT 1 -1 -1
38 KP HO 1 -1 -1
39 KP NJ 1 1 -1
40 KP WF 1 -1 -1
41 KP DV 1 -1 -1
</code></pre>
<p>Working this representative dataframe just 10 times takes almost a full second, and I will have to work with over a million.</p>
<p>Is there a faster way to do those full row comparisons?</p>
<p>EDIT1:
After some modifications I could make Javier's code create the correct output:</p>
<pre><code>def compare_values1(x,y):
if x>y: return 1
elif x<y: return -1
else: return 0
def compare_values2(x,y):
if x<y: return 1
elif x>y: return -1
else: return 0
def processFrames(P):
D = P.to_dict(orient='records')
d_A2B = {d["A"]:d["B"] for d in D}
d_A2C = {d["A"]:d["C"] for d in D}
keys = list(d_A2B.keys())
LL = []
for i in range(len(keys)):
k_i = keys[i]
for j in range(len(keys)):
if i != j:
k_j = keys[j]
LL.append([k_i,k_j,1,compare_values1(\
d_A2C[k_i],d_A2C[k_j]),compare_values2(d_A2B[k_i],d_A2B[k_j])])
return pd.DataFrame(LL,columns=['T1','T2','T3','T4','T5'])
</code></pre>
<p>This function works about 60 times faster.</p>
<p>EDIT2:
Final verdict of the four possibilities:</p>
<p>=============== With the small dataframe:</p>
<p>My original function:</p>
<pre><code>%timeit processFrames1(P)
10 loops, best of 3: 85.3 ms per loop
</code></pre>
<p>jezrael's solution:</p>
<pre><code>%timeit processFrames2(P)
1 loop, best of 3: 286 ms per loop
</code></pre>
<p>Javier's modified code:</p>
<pre><code>%timeit processFrames3(P)
1000 loops, best of 3: 1.24 ms per loop
</code></pre>
<p>Divakar's method:</p>
<pre><code>%timeit processFrames4(P)
1000 loops, best of 3: 1.98 ms per loop
</code></pre>
<p>=============== For the large dataframe:</p>
<p>My original function:</p>
<pre><code>%timeit processFrames1(P)
1 loop, best of 3: 2.22 s per loop
</code></pre>
<p>jezrael's solution:</p>
<pre><code>%timeit processFrames2(P)
1 loop, best of 3: 295 ms per loop
</code></pre>
<p>Javier's modified code:</p>
<pre><code>%timeit processFrames3(P)
100 loops, best of 3: 3.13 ms per loop
</code></pre>
<p>Divakar's method:</p>
<pre><code>%timeit processFrames4(P)
100 loops, best of 3: 2.19 ms per loop
</code></pre>
<p>So it's pretty much a tie between the last two. Thanks to everyone for helping, that speedup was much needed.</p>
<p>EDIT 3:</p>
<p>Divakar has edited their code and this is the new result:</p>
<p>Small dataframe:</p>
<pre><code>%timeit processFrames(P)
1000 loops, best of 3: 492 µs per loop
</code></pre>
<p>Large dataframe:</p>
<pre><code>%timeit processFrames(P)
1000 loops, best of 3: 844 µs per loop
</code></pre>
<p>Very impressive and the absolute winner.</p>
<p>EDIT 4:</p>
<p>Divakar's method slightly modified as I am using it in my program now:</p>
<pre><code>def processFrames(P):
N = len(P)
N_range = np.arange(N)
valid_mask = (N_range[:,None] != N_range).ravel()
colB = P.B.values
colC = P.C.values
T2_arr = np.ones(N*N,dtype=int)
T4_arr = np.zeros((N,N),dtype=int)
T4_arr[colC[:,None] > colC] = 1
T4_arr[colC[:,None] < colC] = -1
T5_arr = np.zeros((N,N),dtype=int)
T5_arr[colB[:,None] > colB] = -1
T5_arr[colB[:,None] < colB] = 1
strings = P.A.values
c0,c1 = np.meshgrid(strings,strings)
arr = np.column_stack((c1.ravel(), c0.ravel(), T2_arr,T4_arr.ravel(),\
T5_arr.ravel()))[valid_mask]
return arr[:,0],arr[:,1],arr[:,2],arr[:,3],arr[:,4]
</code></pre>
<p>I'm creating a dictionary with five keys, each containing a list, which represent the five resulting columns; I extend the lists with the results, and once I'm done I make a pandas dataframe from the dictionary. That's a much faster way than concatenating to an existing dataframe.</p>
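<p>That collection pattern looks roughly like this (an illustrative sketch with made-up values, not the exact code):</p>

```python
import pandas as pd

# Collect-then-construct pattern: append to plain lists while looping,
# build the DataFrame once at the end instead of concatenating repeatedly.
cols = {'T1': [], 'T2': [], 'T3': []}
for t1, t2 in [('XA', 'RT'), ('RT', 'XA')]:
    cols['T1'].append(t1)
    cols['T2'].append(t2)
    cols['T3'].append(1)
df = pd.DataFrame(cols)
```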
<p>PS: The one thing I learned from this: Never use iloc if you can avoid it in any way.</p>
| 1 | 2016-08-11T13:14:26Z | 38,897,941 | <p>Don't use pandas. Use dictionaries and save it:</p>
<pre><code>def compare_values(x,y):
if x>y: return 1
elif x<y: return -1
else: return 0
def processFrames(P):
d_A2B = dict(zip(P["A"],P["B"]))
d_A2C = dict(zip(P["A"],P["C"]))
keys = list(d_A2B.keys())
d_ind2key = dict(zip(range(len(keys)),keys))
LL = []
for i in range(len(keys)):
k_i = keys[i]
for j in range(i+1,len(keys)):
k_j = keys[j]
c1 = compare_values(d_A2C[k_i],d_A2C[k_j])
c2 = -compare_values(d_A2B[k_i],d_A2B[k_j])
LL.append([k_i,k_j,1,c1,c2])
LL.append([k_j,k_i,1,-c1,-c2])
return pd.DataFrame(LL,columns=['T1','T2','T3','T4','T5'])
</code></pre>
| 1 | 2016-08-11T13:36:48Z | [
"python",
"performance",
"pandas",
"dataframe",
"vectorization"
] |
Is there a faster way of doing full row comparisons on a small pandas dataframe than using loops and iloc? | 38,897,449 | <p>I have a large number of small pandas dataframes on which I have to do full row comparisons and write the results into new dataframes which will get concatenated later.</p>
<p>For the row comparisons I'm doing a double loop over the length of the dataframe using iloc. I don't know if there is a faster way; the way I'm doing it seems really slow:</p>
<pre><code># -*- coding: utf-8 -*-
import pandas as pd
import time
def processFrames1(DF):
LL = []
for i in range(len(DF)):
for j in range(len(DF)):
if DF.iloc[i][0] != DF.iloc[j][0]:
T = {u'T1':DF.iloc[i][0]}
T[u'T2'] = DF.iloc[j][0]
T[u'T3'] = 1
if DF.iloc[i][2] > DF.iloc[j][2]:
T[u'T4'] = 1
elif DF.iloc[i][2] < DF.iloc[j][2]:
T[u'T4'] = -1
else:
T[u'T4'] = 0
if DF.iloc[i][1] < DF.iloc[j][1]:
T[u'T5'] = 1
else:
T[u'T5'] = -1
LL.append(T)
return pd.DataFrame.from_dict(LL)
D = [{'A':'XA','B':1,'C':1.4}\
,{'A':'RT','B':2,'C':10}\
,{'A':'HO','B':3,'C':34}\
,{'A':'NJ','B':4,'C':0.41}\
,{'A':'WF','B':5,'C':114}\
,{'A':'DV','B':6,'C':74}\
,{'A':'KP','B':7,'C':2.4}]
P = pd.DataFrame.from_dict(D)
time0 = time.time()
for i in range(10):
X = processFrames1(P)
print time.time()-time0
print X
</code></pre>
<p>Yielding the result:</p>
<pre><code>0.836999893188
T1 T2 T3 T4 T5
0 XA RT 1 -1 1
1 XA HO 1 -1 1
2 XA NJ 1 1 1
3 XA WF 1 -1 1
4 XA DV 1 -1 1
5 XA KP 1 -1 1
6 RT XA 1 1 -1
7 RT HO 1 -1 1
8 RT NJ 1 1 1
9 RT WF 1 -1 1
10 RT DV 1 -1 1
11 RT KP 1 1 1
12 HO XA 1 1 -1
13 HO RT 1 1 -1
14 HO NJ 1 1 1
15 HO WF 1 -1 1
16 HO DV 1 -1 1
17 HO KP 1 1 1
18 NJ XA 1 -1 -1
19 NJ RT 1 -1 -1
20 NJ HO 1 -1 -1
21 NJ WF 1 -1 1
22 NJ DV 1 -1 1
23 NJ KP 1 -1 1
24 WF XA 1 1 -1
25 WF RT 1 1 -1
26 WF HO 1 1 -1
27 WF NJ 1 1 -1
28 WF DV 1 1 1
29 WF KP 1 1 1
30 DV XA 1 1 -1
31 DV RT 1 1 -1
32 DV HO 1 1 -1
33 DV NJ 1 1 -1
34 DV WF 1 -1 -1
35 DV KP 1 1 1
36 KP XA 1 1 -1
37 KP RT 1 -1 -1
38 KP HO 1 -1 -1
39 KP NJ 1 1 -1
40 KP WF 1 -1 -1
41 KP DV 1 -1 -1
</code></pre>
<p>Working this representative dataframe just 10 times takes almost a full second, and I will have to work with over a million.</p>
<p>Is there a faster way to do those full row comparisons?</p>
<p>EDIT1:
After some modifications I could make Javier's code create the correct output:</p>
<pre><code>def compare_values1(x,y):
if x>y: return 1
elif x<y: return -1
else: return 0
def compare_values2(x,y):
if x<y: return 1
elif x>y: return -1
else: return 0
def processFrames(P):
D = P.to_dict(orient='records')
d_A2B = {d["A"]:d["B"] for d in D}
d_A2C = {d["A"]:d["C"] for d in D}
keys = list(d_A2B.keys())
LL = []
for i in range(len(keys)):
k_i = keys[i]
for j in range(len(keys)):
if i != j:
k_j = keys[j]
LL.append([k_i,k_j,1,compare_values1(\
d_A2C[k_i],d_A2C[k_j]),compare_values2(d_A2B[k_i],d_A2B[k_j])])
return pd.DataFrame(LL,columns=['T1','T2','T3','T4','T5'])
</code></pre>
<p>This function works about 60 times faster.</p>
<p>EDIT2:
Final verdict of the four possibilities:</p>
<p>=============== With the small dataframe:</p>
<p>My original function:</p>
<pre><code>%timeit processFrames1(P)
10 loops, best of 3: 85.3 ms per loop
</code></pre>
<p>jezrael's solution:</p>
<pre><code>%timeit processFrames2(P)
1 loop, best of 3: 286 ms per loop
</code></pre>
<p>Javier's modified code:</p>
<pre><code>%timeit processFrames3(P)
1000 loops, best of 3: 1.24 ms per loop
</code></pre>
<p>Divakar's method:</p>
<pre><code>%timeit processFrames4(P)
1000 loops, best of 3: 1.98 ms per loop
</code></pre>
<p>=============== For the large dataframe:</p>
<p>My original function:</p>
<pre><code>%timeit processFrames1(P)
1 loop, best of 3: 2.22 s per loop
</code></pre>
<p>jezrael's solution:</p>
<pre><code>%timeit processFrames2(P)
1 loop, best of 3: 295 ms per loop
</code></pre>
<p>Javier's modified code:</p>
<pre><code>%timeit processFrames3(P)
100 loops, best of 3: 3.13 ms per loop
</code></pre>
<p>Divakar's method:</p>
<pre><code>%timeit processFrames4(P)
100 loops, best of 3: 2.19 ms per loop
</code></pre>
<p>So it's pretty much a tie between the last two. Thanks to everyone for helping, that speedup was much needed.</p>
<p>EDIT 3:</p>
<p>Divakar has edited their code and this is the new result:</p>
<p>Small dataframe:</p>
<pre><code>%timeit processFrames(P)
1000 loops, best of 3: 492 µs per loop
</code></pre>
<p>Large dataframe:</p>
<pre><code>%timeit processFrames(P)
1000 loops, best of 3: 844 µs per loop
</code></pre>
<p>Very impressive and the absolute winner.</p>
<p>EDIT 4:</p>
<p>Divakar's method slightly modified as I am using it in my program now:</p>
<pre><code>def processFrames(P):
N = len(P)
N_range = np.arange(N)
valid_mask = (N_range[:,None] != N_range).ravel()
colB = P.B.values
colC = P.C.values
T2_arr = np.ones(N*N,dtype=int)
T4_arr = np.zeros((N,N),dtype=int)
T4_arr[colC[:,None] > colC] = 1
T4_arr[colC[:,None] < colC] = -1
T5_arr = np.zeros((N,N),dtype=int)
T5_arr[colB[:,None] > colB] = -1
T5_arr[colB[:,None] < colB] = 1
strings = P.A.values
c0,c1 = np.meshgrid(strings,strings)
arr = np.column_stack((c1.ravel(), c0.ravel(), T2_arr,T4_arr.ravel(),\
T5_arr.ravel()))[valid_mask]
return arr[:,0],arr[:,1],arr[:,2],arr[:,3],arr[:,4]
</code></pre>
<p>I'm creating a dictionary with five keys, each containing a list, which represent the five resulting columns; I extend the lists with the results, and once I'm done I make a pandas dataframe from the dictionary. That's a much faster way than concatenating to an existing dataframe.</p>
<p>PS: The one thing I learned from this: Never use iloc if you can avoid it in any way.</p>
| 1 | 2016-08-11T13:14:26Z | 38,898,073 | <p>You can use:</p>
<pre><code>#cross join
P['one'] = 1
df = pd.merge(P,P, on='one')
df = df.rename(columns={'A_x':'T1','A_y':'T2'})
#remove duplicates
df = df[df.T1 != df.T2]
df.reset_index(drop=True, inplace=True)
#creates new columns
df['T3'] = 1
df['T4'] = (df.C_x > df.C_y).astype(int).replace({0:-1})  # note: ties in C map to -1 here, not 0 as in the original T4 spec
df['T5'] = (df.B_x < df.B_y).astype(int).replace({0:-1})
#remove other columns by subset
df = df[['T1','T2','T3','T4','T5']]
print (df)
</code></pre>
<pre><code> T1 T2 T3 T4 T5
0 XA RT 1 -1 1
1 XA HO 1 -1 1
2 XA NJ 1 1 1
3 XA WF 1 -1 1
4 XA DV 1 -1 1
5 XA KP 1 -1 1
6 RT XA 1 1 -1
7 RT HO 1 -1 1
8 RT NJ 1 1 1
9 RT WF 1 -1 1
10 RT DV 1 -1 1
11 RT KP 1 1 1
12 HO XA 1 1 -1
13 HO RT 1 1 -1
14 HO NJ 1 1 1
15 HO WF 1 -1 1
16 HO DV 1 -1 1
17 HO KP 1 1 1
18 NJ XA 1 -1 -1
19 NJ RT 1 -1 -1
20 NJ HO 1 -1 -1
21 NJ WF 1 -1 1
22 NJ DV 1 -1 1
23 NJ KP 1 -1 1
24 WF XA 1 1 -1
25 WF RT 1 1 -1
26 WF HO 1 1 -1
27 WF NJ 1 1 -1
28 WF DV 1 1 1
29 WF KP 1 1 1
30 DV XA 1 1 -1
31 DV RT 1 1 -1
32 DV HO 1 1 -1
33 DV NJ 1 1 -1
34 DV WF 1 -1 -1
35 DV KP 1 1 1
36 KP XA 1 1 -1
37 KP RT 1 -1 -1
38 KP HO 1 -1 -1
39 KP NJ 1 1 -1
40 KP WF 1 -1 -1
41 KP DV 1 -1 -1
</code></pre>
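<p>Note that <code>numpy.sign</code> expresses the three-way comparison in one step, mapping ties to 0 (as in the question's T4 definition):</p>

```python
import numpy as np
import pandas as pd

c_x = pd.Series([1.4, 10.0, 34.0])
c_y = pd.Series([10.0, 1.4, 34.0])
# 1 where c_x > c_y, -1 where c_x < c_y, 0 on ties
t4 = np.sign(c_x - c_y).astype(int)
```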
<p><strong>TIMINGS</strong>:</p>
<pre><code>In [339]: %timeit processFrames1(P)
10 loops, best of 3: 44.2 ms per loop
In [340]: %timeit jez(P1)
10 loops, best of 3: 43.3 ms per loop
</code></pre>
<p>If use your timings:</p>
<pre><code>time0 = time.time()
for i in range(10):
X = processFrames1(P)
print (time.time()-time0)
0.4760475158691406
time0 = time.time()
for i in range(10):
X = jez(P1)
print (time.time()-time0)
0.4400441646575928
</code></pre>
<p>Code for testing:</p>
<pre><code>P1 = P.copy()
def jez(P):
P['one'] = 1
df = pd.merge(P,P, on='one')
df = df.rename(columns={'A_x':'T1','A_y':'T2'})
df = df[df.T1 != df.T2]
df.reset_index(drop=True, inplace=True)
df['T3'] = 1
df['T4'] = (df.C_x > df.C_y).astype(int).replace({0:-1})
df['T5'] = (df.B_x < df.B_y).astype(int).replace({0:-1})
df = df[['T1','T2','T3','T4','T5']]
return (df)
def processFrames1(DF):
LL = []
for i in range(len(DF)):
for j in range(len(DF)):
if DF.iloc[i][0] != DF.iloc[j][0]:
T = {u'T1':DF.iloc[i][0]}
T[u'T2'] = DF.iloc[j][0]
T[u'T3'] = 1
if DF.iloc[i][2] > DF.iloc[j][2]:
T[u'T4'] = 1
elif DF.iloc[i][2] < DF.iloc[j][2]:
T[u'T4'] = -1
else:
T[u'T4'] = 0
if DF.iloc[i][1] < DF.iloc[j][1]:
T[u'T5'] = 1
else:
T[u'T5'] = -1
LL.append(T)
return pd.DataFrame.from_dict(LL)
</code></pre>
<p>EDIT1:</p>
<p>I tried testing on a 5 times bigger DataFrame:</p>
<pre><code>D = [{'A':'XA','B':1,'C':1.4}\
,{'A':'RB','B':2,'C':10}\
,{'A':'HC','B':3,'C':34}\
,{'A':'ND','B':4,'C':0.41}\
,{'A':'WE','B':5,'C':114}\
,{'A':'DF','B':6,'C':74}\
,{'A':'KG','B':7,'C':2.4}\
,{'A':'XH','B':1,'C':1.4}\
,{'A':'RI','B':2,'C':10}\
,{'A':'HJ','B':3,'C':34}\
,{'A':'NK','B':4,'C':0.41}\
,{'A':'WL','B':5,'C':114}\
,{'A':'DM','B':6,'C':74}\
,{'A':'KN','B':7,'C':2.4}\
,{'A':'XO','B':1,'C':1.4}\
,{'A':'RP','B':2,'C':10}\
,{'A':'HQ','B':3,'C':34}\
,{'A':'NR','B':4,'C':0.41}\
,{'A':'WS','B':5,'C':114}\
,{'A':'DT','B':6,'C':74}\
,{'A':'KU','B':7,'C':2.4}\
,{'A':'XV','B':1,'C':1.4}\
,{'A':'RW','B':2,'C':10}\
,{'A':'HX','B':3,'C':34}\
,{'A':'NY','B':4,'C':0.41}\
,{'A':'WZ','B':5,'C':114}\
,{'A':'D1','B':6,'C':74}\
,{'A':'K2','B':7,'C':2.4}\
,{'A':'X3','B':1,'C':1.4}\
,{'A':'R4','B':2,'C':10}\
,{'A':'H5','B':3,'C':34}\
,{'A':'N6','B':4,'C':0.41}\
,{'A':'W7','B':5,'C':114}\
,{'A':'D8','B':6,'C':74}\
,{'A':'K9','B':7,'C':2.4} ]
P = pd.DataFrame.from_dict(D)
</code></pre>
<pre><code>P1 = P.copy()
time0 = time.time()
for i in range(10):
X = processFrames1(P)
print (time.time()-time0)
12.230222940444946
time0 = time.time()
for i in range(10):
X = jez(P1)
print (time.time()-time0)
0.4440445899963379
</code></pre>
<pre><code>In [351]: %timeit processFrames1(P)
1 loop, best of 3: 1.21 s per loop
In [352]: %timeit jez(P1)
10 loops, best of 3: 43.7 ms per loop
</code></pre>
| 1 | 2016-08-11T13:41:43Z | [
"python",
"performance",
"pandas",
"dataframe",
"vectorization"
] |
Is there a faster way of doing full row comparisons on a small pandas dataframe than using loops and iloc? | 38,897,449 | <p>I have a large number of small pandas dataframes on which I have to do full row comparisons and write the results into new dataframes which will get concatenated later.</p>
<p>For the row comparisons I'm doing a double loop over the length of the dataframe using iloc. I don't know if there is a faster way; the way I'm doing it seems really slow:</p>
<pre><code># -*- coding: utf-8 -*-
import pandas as pd
import time
def processFrames1(DF):
LL = []
for i in range(len(DF)):
for j in range(len(DF)):
if DF.iloc[i][0] != DF.iloc[j][0]:
T = {u'T1':DF.iloc[i][0]}
T[u'T2'] = DF.iloc[j][0]
T[u'T3'] = 1
if DF.iloc[i][2] > DF.iloc[j][2]:
T[u'T4'] = 1
elif DF.iloc[i][2] < DF.iloc[j][2]:
T[u'T4'] = -1
else:
T[u'T4'] = 0
if DF.iloc[i][1] < DF.iloc[j][1]:
T[u'T5'] = 1
else:
T[u'T5'] = -1
LL.append(T)
return pd.DataFrame.from_dict(LL)
D = [{'A':'XA','B':1,'C':1.4}\
,{'A':'RT','B':2,'C':10}\
,{'A':'HO','B':3,'C':34}\
,{'A':'NJ','B':4,'C':0.41}\
,{'A':'WF','B':5,'C':114}\
,{'A':'DV','B':6,'C':74}\
,{'A':'KP','B':7,'C':2.4}]
P = pd.DataFrame.from_dict(D)
time0 = time.time()
for i in range(10):
X = processFrames1(P)
print time.time()-time0
print X
</code></pre>
<p>Yielding the result:</p>
<pre><code>0.836999893188
T1 T2 T3 T4 T5
0 XA RT 1 -1 1
1 XA HO 1 -1 1
2 XA NJ 1 1 1
3 XA WF 1 -1 1
4 XA DV 1 -1 1
5 XA KP 1 -1 1
6 RT XA 1 1 -1
7 RT HO 1 -1 1
8 RT NJ 1 1 1
9 RT WF 1 -1 1
10 RT DV 1 -1 1
11 RT KP 1 1 1
12 HO XA 1 1 -1
13 HO RT 1 1 -1
14 HO NJ 1 1 1
15 HO WF 1 -1 1
16 HO DV 1 -1 1
17 HO KP 1 1 1
18 NJ XA 1 -1 -1
19 NJ RT 1 -1 -1
20 NJ HO 1 -1 -1
21 NJ WF 1 -1 1
22 NJ DV 1 -1 1
23 NJ KP 1 -1 1
24 WF XA 1 1 -1
25 WF RT 1 1 -1
26 WF HO 1 1 -1
27 WF NJ 1 1 -1
28 WF DV 1 1 1
29 WF KP 1 1 1
30 DV XA 1 1 -1
31 DV RT 1 1 -1
32 DV HO 1 1 -1
33 DV NJ 1 1 -1
34 DV WF 1 -1 -1
35 DV KP 1 1 1
36 KP XA 1 1 -1
37 KP RT 1 -1 -1
38 KP HO 1 -1 -1
39 KP NJ 1 1 -1
40 KP WF 1 -1 -1
41 KP DV 1 -1 -1
</code></pre>
<p>Working this representative dataframe just 10 times takes almost a full second, and I will have to work with over a million.</p>
<p>Is there a faster way to do those full row comparisons?</p>
<p>EDIT1:
After some modifications I could make Javier's code create the correct output:</p>
<pre><code>def compare_values1(x,y):
if x>y: return 1
elif x<y: return -1
else: return 0
def compare_values2(x,y):
if x<y: return 1
elif x>y: return -1
else: return 0
def processFrames(P):
D = P.to_dict(orient='records')
d_A2B = {d["A"]:d["B"] for d in D}
d_A2C = {d["A"]:d["C"] for d in D}
keys = list(d_A2B.keys())
LL = []
for i in range(len(keys)):
k_i = keys[i]
for j in range(len(keys)):
if i != j:
k_j = keys[j]
LL.append([k_i,k_j,1,compare_values1(\
d_A2C[k_i],d_A2C[k_j]),compare_values2(d_A2B[k_i],d_A2B[k_j])])
return pd.DataFrame(LL,columns=['T1','T2','T3','T4','T5'])
</code></pre>
<p>This function works about 60 times faster.</p>
<p>EDIT2:
Final verdict of the four possibilities:</p>
<p>=============== With the small dataframe:</p>
<p>My original function:</p>
<pre><code>%timeit processFrames1(P)
10 loops, best of 3: 85.3 ms per loop
</code></pre>
<p>jezrael's solution:</p>
<pre><code>%timeit processFrames2(P)
1 loop, best of 3: 286 ms per loop
</code></pre>
<p>Javier's modified code:</p>
<pre><code>%timeit processFrames3(P)
1000 loops, best of 3: 1.24 ms per loop
</code></pre>
<p>Divakar's method:</p>
<pre><code>%timeit processFrames4(P)
1000 loops, best of 3: 1.98 ms per loop
</code></pre>
<p>=============== For the large dataframe:</p>
<p>My original function:</p>
<pre><code>%timeit processFrames1(P)
1 loop, best of 3: 2.22 s per loop
</code></pre>
<p>jezrael's solution:</p>
<pre><code>%timeit processFrames2(P)
1 loop, best of 3: 295 ms per loop
</code></pre>
<p>Javier's modified code:</p>
<pre><code>%timeit processFrames3(P)
100 loops, best of 3: 3.13 ms per loop
</code></pre>
<p>Divakar's method:</p>
<pre><code>%timeit processFrames4(P)
100 loops, best of 3: 2.19 ms per loop
</code></pre>
<p>So it's pretty much a tie between the last two. Thanks to everyone for helping, that speedup was much needed.</p>
<p>EDIT 3:</p>
<p>Divakar has edited their code and this is the new result:</p>
<p>Small dataframe:</p>
<pre><code>%timeit processFrames(P)
1000 loops, best of 3: 492 µs per loop
</code></pre>
<p>Large dataframe:</p>
<pre><code>%timeit processFrames(P)
1000 loops, best of 3: 844 µs per loop
</code></pre>
<p>Very impressive and the absolute winner.</p>
<p>EDIT 4:</p>
<p>Divakar's method slightly modified as I am using it in my program now:</p>
<pre><code>def processFrames(P):
N = len(P)
N_range = np.arange(N)
valid_mask = (N_range[:,None] != N_range).ravel()
colB = P.B.values
colC = P.C.values
T2_arr = np.ones(N*N,dtype=int)
T4_arr = np.zeros((N,N),dtype=int)
T4_arr[colC[:,None] > colC] = 1
T4_arr[colC[:,None] < colC] = -1
T5_arr = np.zeros((N,N),dtype=int)
T5_arr[colB[:,None] > colB] = -1
T5_arr[colB[:,None] < colB] = 1
strings = P.A.values
c0,c1 = np.meshgrid(strings,strings)
arr = np.column_stack((c1.ravel(), c0.ravel(), T2_arr,T4_arr.ravel(),\
T5_arr.ravel()))[valid_mask]
return arr[:,0],arr[:,1],arr[:,2],arr[:,3],arr[:,4]
</code></pre>
<p>I'm creating a dictionary with five keys containing a list each which represent the five resulting columns, then I just extend the lists with the results, and once I'm done I'm making a pandas dataframe from the dictionary. That's a much faster way than to concatenate to an existing dataframe.</p>
<p>PS: The one thing I learned from this: Never use iloc if you can avoid it in any way.</p>
| 1 | 2016-08-11T13:14:26Z | 38,898,722 | <p>Here's an approach using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> -</p>
<pre><code>def processFrames1_broadcasting(P):
N = len(P)
N_range = np.arange(N)
valid_mask = (N_range[:,None] != N_range).ravel()
colB = P.B.values
colC = P.C.values
T2_arr = np.ones(N*N,dtype=int)
T4_arr = np.zeros((N,N),dtype=int)
T4_arr[colC[:,None] > colC] = 1
T4_arr[colC[:,None] < colC] = -1
T5_arr = np.where(colB[:,None] < colB,1,-1)
strings = P.A.values
c0,c1 = np.meshgrid(strings,strings)
arr = np.column_stack((c1.ravel(), c0.ravel(), T2_arr,T4_arr.ravel(),\
T5_arr.ravel()))[valid_mask]
df = pd.DataFrame(arr, columns=[['T1','T2','T3','T4','T5']])
return df
</code></pre>
<p>Runtime test -</p>
<p>For the sample posted in the question, the runtimes I got at my end are -</p>
<pre><code>In [337]: %timeit processFrames1(P)
10 loops, best of 3: 93.1 ms per loop
In [338]: %timeit processFrames1_jezrael(P) #@jezrael's soln
10 loops, best of 3: 74.8 ms per loop
In [339]: %timeit processFrames1_broadcasting(P)
1000 loops, best of 3: 561 µs per loop
</code></pre>
| 3 | 2016-08-11T14:08:47Z | [
"python",
"performance",
"pandas",
"dataframe",
"vectorization"
] |
While loop check one condition before the other | 38,897,684 | <p>I'm new to programming, and I want to write some code like </p>
<pre><code>while(condition_A and condition_B):
#Do something
</code></pre>
<p>But each time I run the while loop, I want to check condition A first, and if condition A works then check condition B. For example, condition A checks if condition B will get an array out of bounds error or something. And finally if both conditions are true stay in the while loop. How should I do this? I was thinking of something like</p>
<pre><code> def some_While_Loop:
if condition_A == False:
return
while (condition_B):
#Do something
if condition_A == False:
return
</code></pre>
<p>But then the while loop has to be the last thing a function does. Is there a nicer/better way?</p>
 | -1 | 2016-08-11T13:25:04Z | 38,897,758 | <p>It looks like you're using Python.
You actually had the answer yourself: <code>and</code> short-circuits, so <code>condition_B</code> is only evaluated when <code>condition_A</code> is already true:</p>
<pre><code>while(condition_A and condition_B):
#Do something
</code></pre>
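<p>A quick illustration of why the single-condition form is safe: <code>and</code> evaluates its right side only when the left side is true, so <code>condition_A</code> can guard <code>condition_B</code>. A sketch with a hypothetical bounds check:</p>

```python
# Scan until the first zero, or until the end of the list.
arr = [3, 1, 4, 1, 0, 9]
i = 0
# arr[i] is only evaluated when i < len(arr) is True,
# so the loop ends cleanly instead of raising an IndexError.
while i < len(arr) and arr[i] != 0:
    i += 1
print(i)  # 4
```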
| 1 | 2016-08-11T13:28:46Z | [
"python",
"loops",
"while-loop"
] |
Enumeration of k-means clusters | 38,897,779 | <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

sample=['he buy fish','he buy bread','the pizza is die'
,'the man buy pizza','mcdonald is there','there is a boy',
'who beat the man','burger and pizza']
tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000, min_df=0.2, stop_words='english',use_idf=True)
vect=TfidfVectorizer(min_df=1)
x=vect.fit_transform(sample)
idf=vect.idf_
dist = 1 - cosine_similarity(x)
num_clusters = 3
km = KMeans(n_clusters=num_clusters)
km.fit(x)
clusters = km.labels_.tolist()
print(clusters)
</code></pre>
<p>output:</p>
<pre><code>[2 2 0 0 1 1 0 0]
</code></pre>
<p>K-means works perfectly on the data. However, the cluster numbers are assigned randomly among 0, 1 and 2 without following any fixed sequence.</p>
| -3 | 2016-08-11T13:29:39Z | 38,906,826 | <p>k-means by <em>design</em> is a randomized algorithm.</p>
<p>It begins with <em>random</em> centers. And by running it multiple times, you can get different solutions. Some may be better than others - that is good.</p>
<p>Because it is randomized, it is not defined which cluster is cluster #0, #1, etc. - they <em>may</em> be permuted.</p>
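<p>If you need a stable numbering across runs, one option is to relabel the clusters by order of first appearance after fitting (a sketch in plain Python; scikit-learn's <code>KMeans</code> also takes a <code>random_state</code> argument if you only want reproducible runs):</p>

```python
def canonicalize(labels):
    """Relabel cluster ids in order of first appearance: 0, 1, 2, ..."""
    seen = {}
    return [seen.setdefault(label, len(seen)) for label in labels]

# Two hypothetical runs that found the same partition under permuted ids:
run_a = [2, 2, 0, 0, 1, 1, 0, 0]   # the output shown in the question
run_b = [0, 0, 1, 1, 2, 2, 1, 1]
print(canonicalize(run_a))  # [0, 0, 1, 1, 2, 2, 1, 1]
print(canonicalize(run_b))  # [0, 0, 1, 1, 2, 2, 1, 1]
```

<p>After this relabeling, the first document is always in cluster 0, the next unseen cluster becomes 1, and so on, regardless of the ids k-means happened to pick.</p>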
| 0 | 2016-08-11T22:04:57Z | [
"python",
"cluster-analysis",
"data-mining",
"k-means"
] |
Find if IP address is in range of addresses | 38,897,835 | <p>I am trying to build a service responsible for geo location of IP addresses. The open database of IP addresses is a CSV file of the following format: starting_ip, ending_ip, region</p>
<p>So I was thinking about converting IPs to integers, and trying to see if a given integer is within ranges of starting and ending... However at this point I don't quite see how this comparison can be performed in an efficient way, taking into account the size of 500K entries.</p>
<p>At first I was trying to load everything into memory using the following dict:</p>
<pre><code>{(ip_start, ip_end): 'region', ....}
</code></pre>
<p>But at this point I don't see how to find a key in this dict by IP address.</p>
| 1 | 2016-08-11T13:31:58Z | 38,897,921 | <p>Assuming the ranges are non-overlapping, you could sort them once by <code>ip_start</code> and then use binary search to find a candidate range. Once you've found a candidate range, all you have to do is check whether the IP address falls between <code>ip_start</code> and <code>ip_end</code>.</p>
<p>You could use the built-in <a href="https://docs.python.org/2/library/bisect.html" rel="nofollow"><code>bisect</code></a> module to perform the binary search.</p>
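<p>A minimal sketch of that approach, with the IPs already converted to integers and hypothetical ranges:</p>

```python
import bisect

# Non-overlapping (start, end, region) ranges, sorted once by start.
ranges = sorted([(10, 20, 'A'), (30, 40, 'B'), (50, 60, 'C')])
starts = [r[0] for r in ranges]

def lookup(ip_int):
    i = bisect.bisect_right(starts, ip_int) - 1  # index of the candidate range
    if i >= 0:
        start, end, region = ranges[i]
        if start <= ip_int <= end:
            return region
    return None  # ip falls before the first range or between ranges

print(lookup(35))  # 'B'
print(lookup(25))  # None
```

<p>Sorting is done once up front; each lookup is then a single <code>bisect_right</code> plus one bounds check.</p>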
<p>This gives <code>O(logn)</code> lookup cost.</p>
| 2 | 2016-08-11T13:35:34Z | [
"python"
] |
Find if IP address is in range of addresses | 38,897,835 | <p>I am trying to build a service responsible for geo location of IP addresses. The open database of IP addresses is a CSV file of the following format: starting_ip, ending_ip, region</p>
<p>So I was thinking about converting IPs to integers, and trying to see if a given integer is within ranges of starting and ending... However at this point I don't quite see how this comparison can be performed in an efficient way, taking into account the size of 500K entries.</p>
<p>At first I was trying to load everything into memory using the following dict:</p>
<pre><code>{(ip_start, ip_end): 'region', ....}
</code></pre>
<p>But at this point I don't see how to find a key in this dict by IP address.</p>
 | 1 | 2016-08-11T13:31:58Z | 38,899,393 | <p>I would suggest you persist the data once sorted, in whatever usable format you like, but using a <a href="http://www.grantjenks.com/docs/sortedcontainers/introduction.html" rel="nofollow">sortedcontainers</a> <em>SortedDict</em> will allow you to do the comparison in <em>log n</em> time once you have a collection sorted by the start IP:</p>
<pre><code>import csv
from sortedcontainers import sorteddict
with open("ips.csv") as f:
ips = ["192.168.43.102", "10.10.145.100", "192.168.1.1", "192.168.43.99","127.0.0.1"]
reader = csv.reader(f)
# Use start ip as the key, creating tuple or using netaddr to turn into an int
sorted_dict = sorteddict.SortedDict((tuple(map(int, sip.split("."))),(eip, rnge)) for sip, eip, rnge in reader)
for ip in ips:
# do the same for the ip you want to search for
ip = tuple(map(int, ip.split(".")))
# bisect to see where the ip would land
ind = sorted_dict.bisect(ip) - 1
start_ip = sorted_dict.iloc[ind]
end_ip = tuple(map(int, sorted_dict[sorted_dict.iloc[ind]][0].split(".")))
print(start_ip, ip, end_ip)
print(start_ip <= ip <= end_ip)
</code></pre>
<p>If we run the code on a test file:</p>
<pre><code>In [5]: !cat ips.csv
192.168.43.100,192.168.43.130,foo
192.168.27.1,192.168.27.12,foobar
192.168.1.1,192.168.1.98,bar
192.168.43.131,192.168.43.140,bar
10.10.131.10,10.10.131.15,barf
10.10.145.10,10.10.145.100,foob
In [6]: import csv
In [7]: from sortedcontainers import sorteddict
In [8]: with open("ips.csv") as f:
...: ips = ["192.168.43.102", "10.10.145.100", "192.168.1.1", "192.168.43.99","127.0.0.1"]
...: reader = csv.reader(f)
...: sorted_dict = sorteddict.SortedDict((tuple(map(int, sip.split("."))),(eip, rnge)) for sip, eip, rnge in reader)
...: for ip in ips:
...: ip = tuple(map(int, ip.split(".")))
...: ind = sorted_dict.bisect(ip) - 1
...: start_ip = sorted_dict.iloc[ind]
...: end_ip = tuple(map(int, sorted_dict[sorted_dict.iloc[ind]][0].split(".")))
...: print(start_ip,ip, end_ip)
...: print(start_ip <= ip <= end_ip)
...:
(192, 168, 43, 100) (192, 168, 43, 102) (192, 168, 43, 130)
True
(10, 10, 145, 10) (10, 10, 145, 100) (10, 10, 145, 100)
True
(192, 168, 1, 1) (192, 168, 1, 1) (192, 168, 1, 98)
True
(192, 168, 27, 1) (192, 168, 43, 99) (192, 168, 27, 12)
False
(10, 10, 145, 10) (127, 0, 0, 1) (10, 10, 145, 100)
False
</code></pre>
<p>You could also modify <a href="https://hg.python.org/cpython/file/2.7/Lib/bisect.py" rel="nofollow">bisect_right</a> to only consider the first elements and use a regular python list:</p>
<pre><code>def bisect_right(a, x, lo=0, hi=None):
if lo < 0:
raise ValueError('lo must be non-negative')
if hi is None:
hi = len(a)
while lo < hi:
mid = (lo+hi) // 2
if x < a[mid][0]:
hi = mid
else:
lo = mid + 1
return lo
with open("ips.csv") as f:
ips = ["192.168.43.102", "10.10.145.100", "192.168.1.1", "192.168.43.99", "127.0.0.1"]
reader = csv.reader(f)
sorted_data = sorted(((tuple(map(int, sip.split("."))), eip, rnge) for sip, eip, rnge in reader))
for ip in ips:
ip = tuple(map(int, ip.split(".")))
ind = bisect_right(sorted_data, ip) - 1
ip_sub = sorted_data[ind]
start_ip, end_ip, _ = sorted_data[ind]
end_ip = tuple(map(int, end_ip.split(".")))
print(start_ip, ip, end_ip)
print(start_ip <= ip <= end_ip)
</code></pre>
<p>The result will be the same; I would imagine using a SortedDict is almost certainly faster, as the bisect is done at the C level.</p>
| 0 | 2016-08-11T14:37:13Z | [
"python"
] |
Change x-axis (xlim) on holoviews histogram | 38,897,845 | <p>In matplotlib we can change the limits of the x-axis with the xlim() method. Is there an equivalent method in HoloViews?</p>
<p>I searched the <a href="http://ioam.github.io/holoviews/Tutorials/Options" rel="nofollow">HV options page</a>, but didn't find anything that seemed to do this.</p>
<p>I created the image below with the following code in a Jupyter notebook:</p>
<pre><code>import numpy as np
import holoviews as hv
hv.notebook_extension('bokeh', width=90)
values = np.array([list of floats])
frequencies, edges = np.histogram(values, 10)
hv.Histogram(frequencies, edges)
</code></pre>
<p><a href="http://i.stack.imgur.com/qYh7n.png" rel="nofollow"><img src="http://i.stack.imgur.com/qYh7n.png" alt="histogram"></a></p>
<p>How can I change the x-axis limit to [0.002, 0.016]?</p>
<p>Also, is it possible to get a plot to return its current x-axis limit?</p>
| 1 | 2016-08-11T13:32:23Z | 39,036,534 | <p>HoloViews will usually just use the bounds of the data you give it. So the easiest way to change the bounds of a histogram is to change it in the np.histogram call itself:</p>
<pre><code>frequencies, edges = np.histogram(values, 10, range=(0.002, 0.016))
hv.Histogram(frequencies, edges)
</code></pre>
<p>If you simply want to change the viewing extents, you can also set those directly:</p>
<pre><code>hv.Histogram(frequencies, edges, extents=(0.002, None, 0.016, None))
</code></pre>
<p>where extents is defined as <code>(xmin, ymin, xmax, ymax)</code>.</p>
| 1 | 2016-08-19T10:18:05Z | [
"python",
"holoviews"
] |
Ensure that a version of some package is newer than version x | 38,897,897 | <p>Is there a better way in Python than this to ensure a script runs only with a version of a module newer than x?</p>
<pre><code>import somemodule
assert int(somemodule.__version__[0]) > 1 # would enforce a version of at least 2.0
</code></pre>
<p>In Perl one would do it like:</p>
<pre><code>use somemodule 2.0
</code></pre>
<p>I would like to do this because I need a newer version than the one provided by Debian repositories and would like to ensure the user installed the lib via pip.</p>
<p>The point is, the script would run with the older package without errors but produce wrong outcome because of unfixed bugs in the old Debian module version.</p>
<p>PS: I need a solution that works for python2 (2.6/2.7) and python3.</p>
 | 1 | 2016-08-11T13:34:22Z | 38,899,910 | <p>Why do you really have to convert to int? <strong>I know this is not the Pythonic way of doing it</strong>, but it definitely works:</p>
<pre><code>>>> assert('0.0.8' > '1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError
>>> assert('0.0.8' > '0')
>>> assert('0.0.8' > '0.0.7')
>>> assert('0.0.8' > '0.0.7.5')
>>> assert('0.0.8' > '0.0.7.5.8')
>>> assert('0.0.8' > '0.0.8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError
>>> assert('0.0.8' > '0.0.8.1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError
>>>
</code></pre>
<p>What I mean is <code>assert(somemodule.__version__ > '1')</code>, which raises an AssertionError if the installed version is less than 1.</p>
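<p>One caveat worth keeping in mind: string comparison is character by character, so multi-digit version components can order the "wrong" way. If that matters, a small sketch of a numeric variant (<code>vtuple</code> is a hypothetical helper, not part of any library):</p>

```python
def vtuple(v):
    """Split a dotted version string into a tuple of ints for numeric comparison."""
    return tuple(int(part) for part in v.split('.'))

print('0.0.10' > '0.0.9')                   # False: '1' < '9' character-wise
print(vtuple('0.0.10') > vtuple('0.0.9'))   # True: 10 > 9 numerically
```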
| 0 | 2016-08-11T14:59:17Z | [
"python",
"import"
] |
How to have python audibly read text | 38,897,943 | <p>I just can't figure it out....I know how <code>python</code> can recognize my voice but I don't know how to make <code>python</code> read text audibly.</p>
<pre><code>from time import sleep
import sys
print("Tell me something...")
LOL = input()
sleep(2)
print("Thinking...")
sleep(2)
if LOL == 'Hey' or LOL == 'Hello':
??? #I want it to say Hello too!
else:
print("ERROR")
sys.exit()
</code></pre>
 | -2 | 2016-08-11T13:36:51Z | 38,898,108 | <p>Assuming you are running under Windows and have pip installed, run <code>pip install speech</code> in a CMD window (open the Start menu and type cmd). Once speech has installed, put <code>import speech</code> at the top of your Python program. Under your <code>if</code> statement, put <code>speech.say('Hello!')</code>.
Like this:</p>
<pre><code>if LOL == 'Hey' or LOL == 'Hello':
speech.say('Hello')
</code></pre>
| 0 | 2016-08-11T13:43:04Z | [
"python",
"speech-recognition"
] |
How to have python audibly read text | 38,897,943 | <p>I just can't figure it out....I know how <code>python</code> can recognize my voice but I don't know how to make <code>python</code> read text audibly.</p>
<pre><code>from time import sleep
import sys
print("Tell me something...")
LOL = input()
sleep(2)
print("Thinking...")
sleep(2)
if LOL == 'Hey' or LOL == 'Hello':
??? #I want it to say Hello too!
else:
print("ERROR")
sys.exit()
</code></pre>
| -2 | 2016-08-11T13:36:51Z | 38,898,847 | <p>Just make sure you have installed the speech library.</p>
<pre><code>pip install speech
</code></pre>
<p>Then import it at the top of the python script </p>
<pre><code>import speech
</code></pre>
<p>Then for whatever you want it to say, do: </p>
<pre><code>speech.say("Text you want it to speak here")
</code></pre>
<p>For your case:</p>
<pre><code>speech.say("Hello")
</code></pre>
<p>If that is not working for you, you can try installing pyTTSX package and looking for code snippets for it. </p>
<p><a href="https://pypi.python.org/pypi/pyttsx" rel="nofollow">https://pypi.python.org/pypi/pyttsx</a></p>
| 0 | 2016-08-11T14:14:05Z | [
"python",
"speech-recognition"
] |
Pulling from List using If/Else statement - Python | 38,898,155 | <p>I want to match the correct terms from my list. Here is my code: </p>
<pre><code>stuff = ["cat", "dog", "house", "cat", "mouse"]
for item in stuff:
if "house" in item:
print "house good"
if "cat" in item:
print "cat good"
if "dog" in item:
print "dog good"
else:
print "nothing else"
</code></pre>
<p>The results are currently this:</p>
<pre><code>cat good
nothing else
dog good
house good
nothing else
cat good
nothing else
nothing else
</code></pre>
<p>But I want the results to be this: </p>
<pre><code>cat good
dog good
house good
cat good
nothing else
</code></pre>
<p>Currently the script keeps pulling "nothing else" because of my else statement. But I don't know how to make "nothing else" come up only when a term in my list doesn't match the terms in my if statements. Does anyone know how I can do this? </p>
| -1 | 2016-08-11T13:45:21Z | 38,898,210 | <p>You should make all the conditions part of the same statement, by using <code>elif</code>. Currently the else only applies to the last condition.</p>
<pre><code>if "house" in item:
print "house good"
elif "cat" in item:
print "cat good"
elif "dog" in item:
print "dog good"
else:
print "nothing else"
</code></pre>
| 2 | 2016-08-11T13:47:26Z | [
"python",
"list"
] |
Pulling from List using If/Else statement - Python | 38,898,155 | <p>I want to match the correct terms from my list. Here is my code: </p>
<pre><code>stuff = ["cat", "dog", "house", "cat", "mouse"]
for item in stuff:
if "house" in item:
print "house good"
if "cat" in item:
print "cat good"
if "dog" in item:
print "dog good"
else:
print "nothing else"
</code></pre>
<p>The results are currently this:</p>
<pre><code>cat good
nothing else
dog good
house good
nothing else
cat good
nothing else
nothing else
</code></pre>
<p>But I want the results to be this: </p>
<pre><code>cat good
dog good
house good
cat good
nothing else
</code></pre>
<p>Currently the script keeps pulling "nothing else" because of my else statement. But I don't know how to make "nothing else" come up only when a term in my list doesn't match the terms in my if statements. Does anyone know how I can do this? </p>
| -1 | 2016-08-11T13:45:21Z | 38,898,227 | <p>You should be using <code>elif</code>, like so:</p>
<pre><code>for item in stuff:
if "house" in item:
print "house good"
elif "cat" in item:
print "cat good"
elif "dog" in item:
print "dog good"
else:
print "nothing else"
</code></pre>
<p>Otherwise the <code>else</code> only applies to the last <code>if</code>.</p>
| 3 | 2016-08-11T13:47:43Z | [
"python",
"list"
] |
Pulling from List using If/Else statement - Python | 38,898,155 | <p>I want to match the correct terms from my list. Here is my code: </p>
<pre><code>stuff = ["cat", "dog", "house", "cat", "mouse"]
for item in stuff:
if "house" in item:
print "house good"
if "cat" in item:
print "cat good"
if "dog" in item:
print "dog good"
else:
print "nothing else"
</code></pre>
<p>The results are currently this:</p>
<pre><code>cat good
nothing else
dog good
house good
nothing else
cat good
nothing else
nothing else
</code></pre>
<p>But I want the results to be this: </p>
<pre><code>cat good
dog good
house good
cat good
nothing else
</code></pre>
<p>Currently the script keeps pulling "nothing else" because of my else statement. But I don't know how to make "nothing else" come up only when a term in my list doesn't match the terms in my if statements. Does anyone know how I can do this? </p>
 | -1 | 2016-08-11T13:45:21Z | 38,898,352 | <p>You must use an <code>elif</code> condition, like:</p>
<pre><code>stuff = ["cat", "dog", "house", "cat", "mouse"]
for item in stuff:
if "house" in item:
print ("house good")
elif "cat" in item:
print ("cat good")
elif "dog" in item:
print ("dog good")
else:
print ("nothing else")
</code></pre>
| 0 | 2016-08-11T13:53:19Z | [
"python",
"list"
] |
Pulling from List using If/Else statement - Python | 38,898,155 | <p>I want to match the correct terms from my list. Here is my code: </p>
<pre><code>stuff = ["cat", "dog", "house", "cat", "mouse"]
for item in stuff:
if "house" in item:
print "house good"
if "cat" in item:
print "cat good"
if "dog" in item:
print "dog good"
else:
print "nothing else"
</code></pre>
<p>The results are currently this:</p>
<pre><code>cat good
nothing else
dog good
house good
nothing else
cat good
nothing else
nothing else
</code></pre>
<p>But I want the results to be this: </p>
<pre><code>cat good
dog good
house good
cat good
nothing else
</code></pre>
<p>Currently the script keeps pulling "nothing else" because of my else statement. But I don't know how to make "nothing else" come up only when a term in my list doesn't match the terms in my if statements. Does anyone know how I can do this? </p>
 | -1 | 2016-08-11T13:45:21Z | 38,898,608 | <p>Other answers suggest that you should be using an <code>elif</code> statement to fix your code. This is perfectly reasonable. However, I just wanted to point out that with a slight refactoring of your code you can make it simpler, more readable, and more extendable:</p>
<pre><code>stuff = ["cat", "dog", "house", "cat", "mouse"]
good_stuff = set(["house", "cat", "dog"])
for item in stuff:
if item in good_stuff:
print item + " good"
else:
print "nothing else"
</code></pre>
<p>Any time that you find yourself using <code>if</code>, <code>elif</code>, <code>elif</code>, <code>elif</code>, ... it is usually because you have <a href="http://stackoverflow.com/a/1554691/341459">designed your code badly</a>.</p>
<blockquote>
<p>Note that I am using a <code>set</code> here for optimisation purposes. If you don't know why, then I suggest you look <a href="http://stackoverflow.com/a/7717046/341459">here</a>.</p>
</blockquote>
| 0 | 2016-08-11T14:04:28Z | [
"python",
"list"
] |
Pulling from List using If/Else statement - Python | 38,898,155 | <p>I want to match the correct terms from my list. Here is my code: </p>
<pre><code>stuff = ["cat", "dog", "house", "cat", "mouse"]
for item in stuff:
if "house" in item:
print "house good"
if "cat" in item:
print "cat good"
if "dog" in item:
print "dog good"
else:
print "nothing else"
</code></pre>
<p>The results are currently this:</p>
<pre><code>cat good
nothing else
dog good
house good
nothing else
cat good
nothing else
nothing else
</code></pre>
<p>But I want the results to be this: </p>
<pre><code>cat good
dog good
house good
cat good
nothing else
</code></pre>
<p>Currently the script keeps pulling "nothing else" because of my else statement. But I don't know how to make "nothing else" come up only when a term in my list doesn't match the terms in my if statements. Does anyone know how I can do this? </p>
 | -1 | 2016-08-11T13:45:21Z | 38,899,023 | <p>Try not to use <code>in</code> here; if I were you I would try to use <code>==</code>.</p>
<p>And since each term is already a full element of the list, you won't need the <code>for</code> part at all; a plain membership test like <code>if xxx in stuff</code> is enough.</p>
<p>Hope this helps!</p>
| 0 | 2016-08-11T14:21:28Z | [
"python",
"list"
] |
Testing sub-domains in my localhost using Django | 38,898,346 | <p>I set up django-subdomains following this <a href="http://django-subdomains.readthedocs.io/en/latest/" rel="nofollow">guide</a>.</p>
<p>I set my subdomain urlconfs as follows:</p>
<pre><code>SUBDOMAIN_URLCONFS = {
None: 'mysite.urls',
'www': 'mysite.urls',
'shop': 'mysite.urls.shop',
'blog': 'mysite.urls.blog'
}
</code></pre>
<p>Everything works nicely, but I can't really test it, because when I run my app on my local host using <code>python manage.py runserver</code>, I can't really add the subdomains. If I put in blog.127.0.0.1:8000, then the browser just takes me to a Google search. Is there any way to set my server in such a way that allows for testing? Thanks!</p>
<p><strong>EDIT</strong></p>
<p>If I go to <a href="http://blog.127.0.0.1:8000" rel="nofollow">http://blog.127.0.0.1:8000</a>, then the browser says a server cannot be found. Have I made a mistake in configuring?</p>
<p>My <strong>settings.py</strong></p>
<pre><code>import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '*****'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# Sites application
'django.contrib.sites',
# My application
'myapp',
]
MIDDLEWARE_CLASSES = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
# Sub-domains Middleware
'subdomains.middleware.SubdomainURLRoutingMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'mysite.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'mysite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.9/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.9/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.9/howto/static-files/
STATIC_URL = '/static/'
# Sub-domains
SITE_ID = 1
SUBDOMAIN_URLCONFS = {
None: 'mysite.urls',
'www': 'mysite.urls',
'blog': 'mysite.urls.blog',
'shop': 'mysite.urls.shop'
}
</code></pre>
<p>I also have the following line in my console:</p>
<pre><code>No handlers could be found for logger "subdomains.middleware"
</code></pre>
| 0 | 2016-08-11T13:53:11Z | 38,899,026 | <p>The missing part of the equation here is DNS. As far as the browser is concerned, <code>foo.example.org</code> and <code>example.org</code> are two different hostnames. The name <code>foo.example.org</code> could point to a different ip address entirely from <code>example.org</code>.</p>
<p>The relationship between those two names is that they share an authoritative DNS server, i.e., whoever controls the DNS configuration for <code>example.org</code> should also be in control of the DNS configuration for <code>foo.example.org</code>, <code>bar.example.org</code> and so on.</p>
<p>The subdomain django middleware is useful when you have multiple subdomains that happen to point to the same server. When I go to <code>example.org</code> in my browser, my browser sends a <a href="https://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Request_fields" rel="nofollow">Host header</a> with the value <code>example.org</code>. If I visit <code>foo.example.org</code>, the value of the host header is <code>foo.example.org</code>. So if the same server is listening on both names, it can tell which subdomain I'm on -- and that's what allows you to use different routes for different subdomains in django.</p>
<p>If you want to test subdomains on your local host, you need your browser to resolve multiple subdomains as '127.0.0.1'. The way you do this depends on your OS. If you're using linux, you would edit <code>/etc/hosts</code>. What is your OS?</p>
| 1 | 2016-08-11T14:21:33Z | [
"python",
"django",
".htaccess",
"django-subdomains"
] |
Refactoring Tensorflow FLAGS | 38,898,433 | <p>After using many python scripts with the same Tensorflow FLAGS, I got tired of updating multiple flags for each change and so decided to refactor <code>tf.app.flags</code> into a separate class which I could reuse across scripts. </p>
<p>However, for some strange reason, whenever I use <code>self.flags</code> in a different method it fails to recognise a previously set flag. The following class for example will work fine for flag <code>project_dir2</code> but will fail for flag <code>project_dir3</code></p>
<pre><code>import tensorflow as tf

class MyClass():
    def __init__(self):
        self.flags = tf.app.flags
        self.FLAGS = self.flags.FLAGS
        # test code that works here
        self.flags.DEFINE_string("project_dir2", "aValue", "project directory")
        print("This will print correctly: " + self.FLAGS.project_dir2)
        self.my_function()

    def my_function(self):
        # test code that fails
        self.flags.DEFINE_string("project_dir3", "aValue", "project directory")
        print("This will fail: " + self.FLAGS.project_dir3)
</code></pre>
<p>I get the following exception:</p>
<p><code>AttributeError: project_dir2
Exception TypeError: TypeError("'NoneType' object is not callable",) in <function _remove at 0x7fd4c3090668> ignored
</code></p>
<p>Is there something obvious I'm doing wrong? Or is this something you simply cannot do with Tensorflow flags? Does that mean there's no way of refactoring commonly used flag settings across scripts?</p>
| 0 | 2016-08-11T13:56:40Z | 38,902,147 | <p>It seems there's an internal method called <code>_parse_flags()</code> which is called <a href="https://github.com/tensorflow/tensorflow/blob/297ca67a8a06bb7128476761cfee8d56145ee49a/tensorflow/python/platform/flags.py#L40" rel="nofollow">on first access</a>. You could call it manually after you update it</p>
<p>IE</p>
<pre><code> def my_function(self):
#test code that fails
self.flags.DEFINE_string("project_dir3", "aValue", "project directory")
self.flags.FLAGS._parse_flags()
</code></pre>
<p>Background on tf.flags - it's a partial re-implementation of Google's <a href="https://github.com/google/python-gflags" rel="nofollow">gflags</a> library, so it's missing features/documentation. There's an open contribution request to do something smarter, like plugging in the official gflags (<a href="https://github.com/tensorflow/tensorflow/issues/1258" rel="nofollow">issue 1258</a>). That would allow things like controlling verbose logging (which requires a recompile <a href="http://stackoverflow.com/questions/36331419/tensorflow-how-to-measure-how-much-gpu-memory-each-tensor-takes/36505898#36505898">right now</a>)</p>
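<p>If the goal is mainly to share one set of options across scripts (rather than patch tf.flags internals), a dependency-free alternative is a small shared module built on the standard library's argparse. A sketch under that assumption; the helper name and flags below are made up, and argparse stands in for tf.app.flags/gflags:</p>

```python
import argparse

def common_flags(parser=None):
    # Hypothetical shared helper: every script calls this once instead of
    # redefining the same options in each file.
    parser = parser or argparse.ArgumentParser()
    parser.add_argument("--project_dir", default="aValue",
                        help="project directory")
    parser.add_argument("--batch_size", type=int, default=32)
    return parser

# parse_args([]) -> use the defaults and ignore sys.argv (handy in tests)
args = common_flags().parse_args([])
print(args.project_dir, args.batch_size)   # aValue 32
```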
| 0 | 2016-08-11T16:55:39Z | [
"python",
"tensorflow"
] |
Debugging Python and C++ exposed by boost together | 38,898,459 | <p>I can debug Python code using <code>ddd -pydb prog.py</code>. All the python command line arguments can be passed too after <code>prog.py</code>. In my case, many classes have been implemented in C++ that are exposed to python using <code>boost-python</code>. I wish I could debug python code and C++ together. For example I want to set break points like this : </p>
<pre><code>break my_python.py:123
break my_cpp.cpp:456
cont
</code></pre>
<p>Of course I am trying it after compiling c++ codes with debug option but the debugger does not cross boost boundary. Is there any way?</p>
<p><strong>EDIT</strong>:
I saw <a href="http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/faq/how_do_i_debug_my_python_extensi.html" rel="nofollow">http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/faq/how_do_i_debug_my_python_extensi.html</a>.
I followed it and I can do debugging both for python and C++. But I preferably want to do visual debugging with <code>DDD</code> but I don't know how to give 'target exec python' command inside <code>DDD</code>. If not (just using <code>gdb</code> as in the link) I should be able to debug for a Python script not interactively giving python commands as in the link.</p>
| 14 | 2016-08-11T13:57:43Z | 39,012,185 | <p>I found out how to debug the C++ part while running Python (I realized it while reading about process ID detection in a Python book).<br>
First you run the Python program which includes the C++ parts. At the start of the Python program, use raw_input() to make the program wait for your input. But just before that, do <code>print os.getpid()</code> (of course you should have imported the os package). When you run the Python program, it will have printed the pid of the Python program you are running and will be waiting for your keyboard input. </p>
<p>python stop code :</p>
<pre><code>import os
def w1(str):
print (str)
wait = raw_input()
return
print os.getpid()
w1('starting main..press a key')
</code></pre>
<p>result :</p>
<pre><code>27352
starting main..press a key
</code></pre>
<p>Or, you can use import pdb, pdb.set_trace() as comment below.(thanks @AndyG) and see EDIT* to get pid using <code>ps -aux</code>.</p>
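<p>For Python 3 (where <code>raw_input</code> is gone), the same wait-for-attach helper might look like this; a sketch with a made-up function name that only blocks when run from a terminal, so automated runs don't hang:</p>

```python
import os
import sys

def announce_pid_and_wait(prompt="attach gdb to this pid, then press Enter: "):
    # Print the pid so a second shell can run e.g.  gdb _caffe.so <pid>
    pid = os.getpid()
    print(pid)
    if sys.stdin.isatty():   # only block when a human is at the keyboard
        input(prompt)
    return pid

pid = announce_pid_and_wait()
```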
<p>Now, suppose the C++ shared library is _caffe.so (which is my case. This _caffe.so library has all the C++ codes and boost python wrapper functions). 27352 is the pid. Then in another shell start gdb like </p>
<pre><code>gdb caffe-fast-rcnn/python/caffe/_caffe.so 27352
</code></pre>
<p>or if you want to use graphical debugging using like DDD, do</p>
<pre><code>ddd caffe-fast-rcnn/python/caffe/_caffe.so 27352
</code></pre>
<p>Then you'll see gdb starts and wait with prompt. The python program is interrupted by gdb and waits in stopped mode (it was waiting for your key input but now it's really in stopeed mode, and it needs gdb continue command from the second debugger to proceed with the key waiting).<br>
Now you can give break point command in gdb like </p>
<pre><code>br solver.cpp:225
</code></pre>
<p>and you can see message like </p>
<pre><code>Breakpoint 1 at 0x7f2cccf70397: file src/caffe/solver.cpp, line 226. (2 locations)
</code></pre>
<p>When you give <code>continue</code> command in the second gdb window(that was holding the program), the python code runs again. Of course you should give a key input in the first gdb window to make it proceed.<br>
Now at least you can debug the C++ code while running python program(that's what I wanted to do)! </p>
<p>I later checked if I can do python and C++ debugging at the same time and it works. You start the debugger(DDD) like <code>ddd -pydb prog1.py options..</code> and attach another DDD using method explained above. Now you can set breakpoints for python and C++ and using other debug functions in each window(I wish I had known this a couple of months earlier.. I should have helped tons.). </p>
<p><a href="http://i.stack.imgur.com/dSPHy.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/dSPHy.jpg" alt="enter image description here"></a></p>
<p>EDIT : to get the pid, you can do <code>ps -aux | grep python</code> instead. This pid is the next of ddd's pid.</p>
| 4 | 2016-08-18T07:23:10Z | [
"python",
"c++",
"debugging",
"boost",
"ddd-debugger"
] |
Is there a way of drawing non-canvas lines in Tkinter? | 38,898,484 | <p>Hi!
I've been looking all over StackExchange and some other forums for an answer to my question but I couldn't find anything relevant, so here I am, posting it.</p>
<p>What I'm trying to do is draw lines that are on the same layer as other Tkinter widgets. I'm currently coding an interface where certain widgets need to visually connect to each other (NodeBox style or Quartz Composer style). <br>
<a href="http://i.stack.imgur.com/TjKAE.jpg" rel="nofollow">This image shows the connectors (noodles) present in Quartz Composer.</a> (I'm not necessarily looking for curved lines; straight is more than enough.)</p>
<p>The problem is that it is too complicated to use the canvas widget (I will be using a lot of widgets, so embedding all of them into the canvas is not really an option, I think). I'm looking for something similar to the separator widget, but allowing for diagonal lines defined by coordinates. I'm thinking of creating a custom widget that does this, but I'm not sure where to start. Another solution may be to have a transparent canvas right over the non-canvas widgets, but that would complicate the mouse click events a lot. I don't know which option would be best.</p>
<p>Any thoughts on how I could accomplish drawing lines outside of a canvas widget? (Or on how I can create a custom widget that does this?)</p>
| 0 | 2016-08-11T13:58:33Z | 38,901,412 | <p>Your only reasonable choice for drawing lines is the canvas. Without the canvas you can simulate horizontal or vertical lines using a separator and <code>place</code>, but you can't do diagonal lines.</p>
| 0 | 2016-08-11T16:11:45Z | [
"python",
"canvas",
"tkinter",
"tkinter-canvas",
"custom-widgets"
] |
Python len not working | 38,898,586 | <p>In the code below, I am trying to use len(list) to count the number of strings in an array in each of the tags variables from the while loop. When I tried a sample list parameter at the bottom, list2, it printed 5, which works, but when I did it with my real data, it was counting the characters in the array, not the number of strings. I need help figuring out why that is; I am new to Python, so the simplest way possible, please!</p>
<pre><code>#!/usr/bin/python
import json
import csv
from pprint import pprint
with open('data.json') as data_file:
data = json.load(data_file)
#pprint(data)
# calc number of alert records in json file
x = len(data['alerts'])
count = 0
while (count < x):
tags = str(data['alerts'][count] ['tags']).replace("u\"","\"").replace("u\'","\'")
list = "[" + tags.strip('[]') + "]"
print list
print len(list)
count=count+1
list2 = ['redi', 'asd', 'rrr', 'www', 'qqq']
print len(list2)
</code></pre>
| -4 | 2016-08-11T14:03:31Z | 38,898,642 | <p>Your list construction <code>list = "[" + tags.strip('[]') + "]"</code> creates a <code>string</code>, <strong>not</strong> a <code>list</code>. So yes, <code>len</code> works, it counts the characters in your string.</p>
<p>Your tags construction looks a bit off: you have a dictionary of data (<code>data['alerts']</code>) which you then convert to a string and strip of the '<code>[]</code>'. Why don't you just get the value itself?</p>
<p>Also, <code>list</code> is a poor name for your variable: it shadows Python's built-in <code>list</code> type.</p>
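<p>To make <code>len()</code> count items again, parse the text back into a real list instead of rebuilding a bracketed string; a sketch assuming the tags value is valid JSON once the <code>u</code> prefixes are stripped:</p>

```python
import json

raw = '["redi", "asd", "rrr"]'             # a JSON-style string of tags
as_string = "[" + raw.strip('[]') + "]"    # still a str: len counts characters
as_list = json.loads(raw)                  # a real list: len counts items

print(len(as_string))   # 22 (characters)
print(len(as_list))     # 3  (strings)
```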
| 4 | 2016-08-11T14:05:47Z | [
"python",
"list",
"python-2.7"
] |
Python len not working | 38,898,586 | <p>In the code below, I am trying to use len(list) to count the number of strings in an array in each of the tags variables from the while loop. When I tried a sample list parameter at the bottom, list2, it printed 5, which works, but when I did it with my real data, it was counting the characters in the array, not the number of strings. I need help figuring out why that is; I am new to Python, so the simplest way possible, please!</p>
<pre><code>#!/usr/bin/python
import json
import csv
from pprint import pprint
with open('data.json') as data_file:
data = json.load(data_file)
#pprint(data)
# calc number of alert records in json file
x = len(data['alerts'])
count = 0
while (count < x):
tags = str(data['alerts'][count] ['tags']).replace("u\"","\"").replace("u\'","\'")
list = "[" + tags.strip('[]') + "]"
print list
print len(list)
count=count+1
list2 = ['redi', 'asd', 'rrr', 'www', 'qqq']
print len(list2)
</code></pre>
| -4 | 2016-08-11T14:03:31Z | 38,898,658 | <pre><code>list = "[" + tags.strip('[]') + "]"
print list
print len(list)
</code></pre>
<p>Ironically, <code>list</code> is a string, not a list. That's why calling <code>len</code> on it "was counting the characters in the array"</p>
| 1 | 2016-08-11T14:06:22Z | [
"python",
"list",
"python-2.7"
] |
Python len not working | 38,898,586 | <p>In the code below, I am trying to use len(list) to count the number of strings in an array in each of the tags variables from the while loop. When I tried a sample list parameter at the bottom, list2, it printed 5, which works, but when I did it with my real data, it was counting the characters in the array, not the number of strings. I need help figuring out why that is; I am new to Python, so the simplest way possible, please!</p>
<pre><code>#!/usr/bin/python
import json
import csv
from pprint import pprint
with open('data.json') as data_file:
data = json.load(data_file)
#pprint(data)
# calc number of alert records in json file
x = len(data['alerts'])
count = 0
while (count < x):
tags = str(data['alerts'][count] ['tags']).replace("u\"","\"").replace("u\'","\'")
list = "[" + tags.strip('[]') + "]"
print list
print len(list)
count=count+1
list2 = ['redi', 'asd', 'rrr', 'www', 'qqq']
print len(list2)
</code></pre>
| -4 | 2016-08-11T14:03:31Z | 38,898,795 | <p>you need to make sure that your variable is a list rather than a str,
try:</p>
<pre><code>print(type(yourList))
</code></pre>
<p>if it shows that it is a str, then try this:</p>
<pre><code>len(list(yourList))
</code></pre>
<p>hope this answers your question</p>
<p>and when you want to establish a list variable, try this:</p>
<pre><code>myList = []
for blah in blahblah:
myList.append(blah)
</code></pre>
<p>I think these definitely solved your problem, so I hope you noticed this part.</p>
| 0 | 2016-08-11T14:12:05Z | [
"python",
"list",
"python-2.7"
] |
bypass known exception of mysql in python | 38,898,601 | <p>I am trying to bypass "<strong>Cannot delete or update a parent row: a foreign key constraint fails</strong>" inside my python script.
So I am planning to drop all the tables, but this error comes up because of the inter-table foreign key relationships.</p>
<p>I need to get this automated, and I know I will run into the same error, but I know how to bypass it by calling <code>SET FOREIGN_KEY_CHECKS=0;</code> and then, once the tables are deleted, enabling the checks again with <code>SET FOREIGN_KEY_CHECKS=1;</code>.
I need to know how to automate this inside Python.</p>
<p></p>
<pre><code>import MySQLdb
import sys
if len(sys.argv) != 4:
print "please enter the Hostname to connect followed by:"
print "mysql username;"
print "mysql db to connect;"
else:
_host = sys.argv[1]
_user = sys.argv[2]
# _pass = sys.argv[3]
_db = sys.argv[3]
cham = raw_input("please enter the command to be executed:- ")
_pass = raw_input("please enter password:- ")
if cham == "drop table":
db = MySQLdb.connect(host = _host, user = _user,db = _db, passwd = _pass )
cursor = db.cursor()
cursor.execute("show tables")
for i in cursor.fetchall():
cursor.execute("drop table" + " " + (i[0]))
print cursor.fetchall()
print "all the tables has been deleted"
db.close()
else:
db = MySQLdb.connect(host = _host, user = _user,db = _db, passwd = _pass )
cursor = db.cursor()
cursor.execute(cham)
print cursor.fetchall()
db.close()
</code></pre>
<p></p>
| 0 | 2016-08-11T14:04:08Z | 38,899,086 | <p>I tried the following snip and it worked, thanks anyways.</p>
<pre><code> if cham == "drop table":
db = MySQLdb.connect(host = _host, user = _user,db = _db, passwd = _pass )
cursor = db.cursor()
cursor.execute("show tables")
for i in cursor.fetchall():
try:
cursor.execute("drop table" + " " + (i[0]))
#print cursor.fetchall()
except:
cursor.execute("SET FOREIGN_KEY_CHECKS=0")
cursor.execute("drop table" + " " + (i[0]))
cursor.execute("SET FOREIGN_KEY_CHECKS=1")
# print "all the tables has been deleted"
db.close()
</code></pre>
| 0 | 2016-08-11T14:24:27Z | [
"python",
"mysql-python"
] |
Python 3.4.3: os.system does not run my ping "command" | 38,898,704 | <p>I made this script to just write the ip or name of the host after clicking on it.... but in Python 3.4.3 the code simply does not run the ping; I can input the ip, but it does not run the ping....</p>
<p>I've tried to enter both ways. ex.: "127.0.0.1" or 127.0.0.1</p>
<p>it run in python 2.7, but want it working in a more recent version of python... </p>
<p>My Windows is 7 64 bits</p>
<p>What do you recommend / advise to me?</p>
<pre><code>import os
a = input("put the name / ip of the machine you want to ping:\n\n")
p = "ping -t "
os.system(p+a)
</code></pre>
<p>Should I just simply use Python 2.7, which runs it well???</p>
| -1 | 2016-08-11T14:08:13Z | 38,898,773 | <p><code>input</code> is different between python 2.7 and python 3.</p>
<p>In Python 2, <code>input</code> evaluates what you type; I bet you entered your value between single quotes.</p>
<p>In python 3, it returns a string (like <code>raw_input</code> did)</p>
<p>Just enter your value without quotes and it will work.</p>
| 1 | 2016-08-11T14:11:17Z | [
"python"
] |
Python 3.4.3: os.system does not run my ping "command" | 38,898,704 | <p>I made this script to just write the ip or name of the host after clicking on it.... but in Python 3.4.3 the code simply does not run the ping; I can input the ip, but it does not run the ping....</p>
<p>I've tried to enter both ways. ex.: "127.0.0.1" or 127.0.0.1</p>
<p>it run in python 2.7, but want it working in a more recent version of python... </p>
<p>My Windows is 7 64 bits</p>
<p>What do you recommend / advise to me?</p>
<pre><code>import os
a = input("put the name / ip of the machine you want to ping:\n\n")
p = "ping -t "
os.system(p+a)
</code></pre>
<p>Should I just simply use Python 2.7, which runs it well???</p>
| -1 | 2016-08-11T14:08:13Z | 38,899,401 | <h1>Popen solution:</h1>
<pre><code>from subprocess import Popen, PIPE
p = Popen(['ping', '-c 1', 'www.google.com'], stdout=PIPE)
while True:
line = p.stdout.readline()
if not line:
break
print(line)
</code></pre>
<h1>sh.ping solution:</h1>
<pre><code>import sh
for line in sh.ping('www.google.com', '-c 1', _err_to_out=True, _iter=True, _out_bufsize=100):
print(line)
</code></pre>
<h1>os.system solution:</h1>
<pre><code>import os
os.system('ping -c 1 www.google.com')
</code></pre>
<p>If <code>os.system()</code> generates any output, it will be sent to the interpreter standard output stream.</p>
<p>I use the option <code>-c</code> to send just one packet.
To construct a command string you can use this approach:</p>
<pre><code>a = input("put the name / ip of the machine you want to ping:\n\n")
cmd = 'ping -c 1 %s' % a
os.system(cmd)
</code></pre>
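<p>On Python 3.5+, <code>subprocess.run</code> with a list of arguments is another option; the list form bypasses the shell, so a pasted hostname cannot inject extra commands. A sketch, with <code>echo</code> standing in for <code>ping</code> so it runs without network access:</p>

```python
import subprocess

host = "127.0.0.1"
# A list of arguments never goes through the shell, so a host string
# containing spaces or metacharacters cannot inject extra commands.
cmd = ["echo", "ping", "-c", "1", host]   # stand-in for ["ping", "-c", "1", host]
result = subprocess.run(cmd, stdout=subprocess.PIPE, universal_newlines=True)
print(result.returncode)        # 0
print(result.stdout.strip())    # ping -c 1 127.0.0.1
```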
| 1 | 2016-08-11T14:37:34Z | [
"python"
] |
Python 3.4.3: os.system does not run my ping "command" | 38,898,704 | <p>I made this script to just write the ip or name of the host after clicking on it.... but in Python 3.4.3 the code simply does not run the ping; I can input the ip, but it does not run the ping....</p>
<p>I've tried to enter both ways. ex.: "127.0.0.1" or 127.0.0.1</p>
<p>it run in python 2.7, but want it working in a more recent version of python... </p>
<p>My Windows is 7 64 bits</p>
<p>What do you recommend / advise to me?</p>
<pre><code>import os
a = input("put the name / ip of the machine you want to ping:\n\n")
p = "ping -t "
os.system(p+a)
</code></pre>
<p>Should I just simply use Python 2.7, which runs it well???</p>
| -1 | 2016-08-11T14:08:13Z | 38,918,516 | <p>Actually, I give up on trying it.....
I will just use Python 2.7, which runs it well....</p>
<p>Thanks</p>
| 0 | 2016-08-12T12:55:35Z | [
"python"
] |
non-categorical dataframe to categorical data for seaborn plotting boxplots, swarmplots, stripplots etc | 38,898,744 | <p>I'm running into some confusion with categorical data plots, which is probably because I don't really understand the concept.</p>
<p>I have a dataframe:</p>
<pre><code> A B C
0 1.438161 -0.210454 -1.983704
1 -0.283780 -0.371773 0.017580
2 0.552564 -0.610548 0.257276
3 1.931332 0.649179 -1.349062
4 1.656010 -1.373263 1.333079
5 0.944862 -0.657849 1.526811
</code></pre>
<p>which I can easily plot as a boxplot for each column using seaborn:</p>
<pre><code>sns.boxplot(df)
</code></pre>
<p>However, swarmplots and stripplots don't work, I guess because categorical (long-form) data is needed?</p>
<pre><code> value indx
1.438161 A
-0.283780 A
...
0.552564 B
1.931332 B
...
1.656010 C
0.944862 C
</code></pre>
<p>Is there a very easy quick way to do this, that Im not aware of?</p>
<p><a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.swarmplot.html" rel="nofollow">https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.swarmplot.html</a></p>
| 2 | 2016-08-11T14:09:27Z | 38,899,176 | <p>I think you need parameter <code>data</code>:</p>
<pre><code>sns.boxplot(data=df)
</code></pre>
<p><a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.swarmplot.html" rel="nofollow">Docs</a>:</p>
<blockquote>
<p>data : DataFrame, array, or list of arrays, optional</p>
<p>Dataset for plotting. If x and y are absent, this is interpreted as wide-form. Otherwise it is expected to be long-form.</p>
</blockquote>
<p><a href="http://i.stack.imgur.com/jm9R9.png" rel="nofollow"><img src="http://i.stack.imgur.com/jm9R9.png" alt="graph"></a></p>
| 2 | 2016-08-11T14:28:03Z | [
"python",
"pandas",
"seaborn"
] |
non-categorical dataframe to categorical data for seaborn plotting boxplots, swarmplots, stripplots etc | 38,898,744 | <p>I'm running into some confusion with categorical data plots, which is probably because I don't really understand the concept.</p>
<p>I have a dataframe:</p>
<pre><code> A B C
0 1.438161 -0.210454 -1.983704
1 -0.283780 -0.371773 0.017580
2 0.552564 -0.610548 0.257276
3 1.931332 0.649179 -1.349062
4 1.656010 -1.373263 1.333079
5 0.944862 -0.657849 1.526811
</code></pre>
<p>which I can easily plot as a boxplot for each column using seaborn:</p>
<pre><code>sns.boxplot(df)
</code></pre>
<p>However, swarmplots and stripplots don't work, I guess because categorical (long-form) data is needed?</p>
<pre><code> value indx
1.438161 A
-0.283780 A
...
0.552564 B
1.931332 B
...
1.656010 C
0.944862 C
</code></pre>
<p>Is there a very easy quick way to do this, that Im not aware of?</p>
<p><a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.swarmplot.html" rel="nofollow">https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.swarmplot.html</a></p>
| 2 | 2016-08-11T14:09:27Z | 38,899,620 | <p>IIUC, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html" rel="nofollow"><code>melt</code></a> to convert one of the variables into categorical format to aid in the plotting of <code>swarmplots</code> and <code>stripplots</code>.</p>
<pre><code>In [3]: df_sns = pd.melt(df, value_vars=['A', 'B', 'C'])
In [4]: df_sns
Out[4]:
variable value
0 A 1.438161
1 A -0.283780
2 A 0.552564
3 A 1.931332
4 A 1.656010
5 A 0.944862
6 B -0.210454
7 B -0.371773
8 B -0.610548
9 B 0.649179
10 B -1.373263
11 B -0.657849
12 C -1.983704
13 C 0.017580
14 C 0.257276
15 C -1.349062
16 C 1.333079
17 C 1.526811
In [5]: sns.swarmplot(x='variable', y='value', data=df_sns)
Out[5]: <matplotlib.axes._subplots.AxesSubplot at 0x268db2a6e10>
</code></pre>
<p><a href="http://i.stack.imgur.com/gFmeJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/gFmeJ.png" alt="enter image description here"></a></p>
| 1 | 2016-08-11T14:47:16Z | [
"python",
"pandas",
"seaborn"
] |
pyNastran type error | 38,898,789 | <p>I get the following TypeError using pyNastran.
Any idea on how to solve it?
I am using python 2.6.6
Thanks</p>
<pre><code>#!/usr/bin/python
import sys
sys.path.append('/folder/six1100')
sys.path.append('/folder/pyNastran')
from pyNastran.bdf.bdf import BDF
bdf = BDF()
bdf.readBDF('file.bdf')
</code></pre>
<p>Error:</p>
<pre><code>  File "pyNastran-test-001.py", line 10, in <module>
bdf = BDF()
File "/folder/pyNastran/pyNastran/bdf/bdf.py", line 194, in __init__
self.__init_attributes()
File "/folder/pyNastran/pyNastran/bdf/bdf.py", line 525, in __init_attributes
self.coords = {0: CORD2R()}
File "/folder/pyNastran/pyNastran/bdf/cards/coordinateSystems.py", line 1227, in __init__
Cord2x.__init__(self, card, data, comment)
File "/folder/pyNastran/pyNastran/bdf/cards/coordinateSystems.py", line 783, in __init__
self.e1 = array(data[2:5], dtype='float64')
TypeError: data type not understood
</code></pre>
| 0 | 2016-08-11T14:11:49Z | 39,023,090 | <p>I happen to work with the author of pyNastran (he is sitting right next to me). He said to post in the main GitHub discussion forum for pyNastran. I am sorry to have posted this as an answer, I do not as of yet have enough rep on Stack Overflow to add comments.</p>
| 0 | 2016-08-18T16:18:29Z | [
"python",
"nastran"
] |
Drawing circles around a certain area with opencv | 38,898,828 | <p>I am working on a code which accesses my camera, turns the output into grayscale, applies a gaussian blur finds the brightest area/pixel and circles it.
Everything but the drawing-a-circle-part works fine. The command I am trying to use does nothing for me. Does anybody have an idea?
I am working with opencv, python 2.7 and a Windows Computer!</p>
<p>This is the code:</p>
<pre><code>import cv2
import numpy as np
cv2.namedWindow("spot")
cam = cv2.VideoCapture(0)
if cam.isOpened():
rval, frame = cam.read()
else:
rval = False
while rval:
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray,(21,21), 0)
(minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(gray)
cv2.imshow("spot", gray)
rval, frame = cam.read()
key = cv2.waitKey(20)
if key == 27:
break
cv2.destroyWindow("spot")
</code></pre>
<p>And this is the line I have been trying to add so far:</p>
<pre><code>cv2.circle(gray, maxLoc, 21, (255, 0, 0), 2)
</code></pre>
| 0 | 2016-08-11T14:13:24Z | 38,899,041 | <p>You are trying to draw a colour circle on a grayscale image;
instead, draw the circle on the original colour frame:</p>
<pre><code>cv2.circle(frame, maxLoc, 10, (255, 0, 0) )
cv2.imshow("spot",frame)
</code></pre>
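<p>For reference, what <code>cv2.minMaxLoc</code> computes can be sketched without OpenCV on a nested list standing in for the blurred grayscale image (note that the location comes back in (x, y) order):</p>

```python
# A tiny stand-in "image": rows of pixel intensities.
gray = [[10,  40, 10],
        [10, 250, 30],
        [ 5,  20, 15]]

max_val = max(v for row in gray for v in row)
max_loc = next((x, y) for y, row in enumerate(gray)
                      for x, v in enumerate(row) if v == max_val)

print(max_val, max_loc)   # 250 (1, 1) -- the spot you would circle
```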
| 0 | 2016-08-11T14:22:16Z | [
"python",
"opencv",
"numpy",
"camera",
"gaussianblur"
] |
best way to check download duration of a file from multiple urls in python (threading or async)? | 38,898,839 | <p>What is the best way to check the download duration of a file from, say, 50 URLs? I would like to download each file using my entire bandwidth. Should I use multithreading, coroutines, or just the plain old synchronous way? Why?</p>
<p>This is the code i use to check the download duration from a single url:</p>
<pre><code>import urllib.request
import time
start = time.time()
with urllib.request.urlopen('http://example.com/file') as response:
data = response.read()
end = time.time()
duration = end - start
</code></pre>
| 0 | 2016-08-11T14:13:54Z | 38,899,060 | <p>Multithreading and coroutines in Python are still restricted by the <a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">Global Interpreter Lock (GIL)</a> to only run one Python instruction at a time. If Python code uses multithreading or coroutines on regular Python code that just performs a parallel calculation with no delays for things like input or output, then it doesn't actually execute in parallel. Because each of your threads would be delayed by the download, they are I/O bound.</p>
<p>Because the downloads are completely I/O bound, multithreading or coroutines should both work fine. If you're concerned about overhead, I would just compare results with the two versions.</p>
<p>If you really are just throwing away the downloaded data from large files, consider using streaming and the <a href="http://docs.python-requests.org/en/master/api/?highlight=iter_content#requests.Response.iter_content" rel="nofollow"><code>iter_content</code> method</a> to avoid holding more data in memory than you need.</p>
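<p>A sketch of the threaded timing loop using <code>concurrent.futures</code>, with <code>time.sleep</code> standing in for the network wait so it runs offline; swap the sleep for a real <code>urlopen(url).read()</code> to time actual downloads:</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed_fetch(url):
    # Stand-in for a real download: the sleep simulates I/O wait, during
    # which the GIL is released so other threads can run.
    start = time.time()
    time.sleep(0.2)
    return url, time.time() - start

urls = ['http://example.com/file%d' % i for i in range(5)]

start = time.time()
with ThreadPoolExecutor(max_workers=5) as pool:
    durations = dict(pool.map(timed_fetch, urls))
total = time.time() - start

print(len(durations))   # 5 per-url durations
print(total < 5 * 0.2)  # the five waits overlap instead of adding up
```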
| 0 | 2016-08-11T14:23:11Z | [
"python",
"multithreading",
"asynchronous"
] |
'for' statements should use the format 'for x in y': while iterating over value retrieved from dictionary using django template | 38,899,059 | <p>I have a context dictionary entry <code>objectives</code> that maps objective query objects to a list of tests that belong to that objective. Example code:</p>
<pre><code>objectives = Objective.objects.filter(requirement=requirement)
context_dict["requirements"][requirement] = objectives
for objective in objectives:
tests = Test.objects.filter(objective=objective)
context_dict["objectives"][objective] = tests
</code></pre>
<p>In my django html template, I iterate over objectives and display them. I then want to iterate over the tests that belong to these objectives. When I do this:</p>
<pre><code>{% for test in {{ objectives|get_item:objective }} %}
</code></pre>
<p>I get a <code>TemplateSyntaxError: 'for' statements should use the format 'for x in y':</code></p>
<p>In the application/templatetags directory, I have:</p>
<pre><code>from django.template.defaulttags import register
...
@register.filter
def get_item(dictionary, key):
return dictionary.get(key)
</code></pre>
<p>If instead I make <code>{{ objectives|get_item:objective }}</code> a JS variable, I see that it does indeed produce a list, which I should be able to iterate over. Of course, I can't mix JS variables and the django template tags, so this is only for debugging:</p>
<pre><code>var tests = {{ objectives|get_item:objective }}
var tests = [<Test: AT399_8_1>, <Test: AT399_8_2>, <Test: AT399_8_3>, <Test: AT399_8_4>, <Test: AT399_8_5> '...(remaining elements truncated)...']
</code></pre>
<p>How do I iterate over this list in the django template tag?</p>
| 0 | 2016-08-11T14:23:06Z | 38,899,218 | <p>You cannot user <code>{{...}}</code> inside the <code>{%...%}</code></p>
<p>What you can try is changing your filter to an assignment tag and using that value in the loop</p>
<pre><code>@register.assignment_tag
def get_item(dictionary, key):
return dictionary.get(key)
</code></pre>
<p>And then in your template use it as </p>
<pre><code>{% get_item objectives objective as tests %}
{% for test in tests %}
....
{% endfor %}
</code></pre>
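<p>Under the hood, <code>get_item</code> is just a <code>dict.get</code> lookup; outside Django it behaves like this (the keys below are made up):</p>

```python
def get_item(dictionary, key):
    # the same lookup the template tag performs
    return dictionary.get(key)

objectives = {"obj1": ["AT399_8_1", "AT399_8_2"], "obj2": ["AT399_8_3"]}
print(get_item(objectives, "obj1"))     # ['AT399_8_1', 'AT399_8_2']
print(get_item(objectives, "missing"))  # None
```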
<p>Instead of all this if your models are proper with foreign keys I would do something like</p>
<pre><code>{% for objective in requirement.objective_set.all %}
{% for test in objective.test_set.all %}
....
{% endfor %}
{% endfor %}
</code></pre>
<p>In my context I would pass only the <code>requirement</code></p>
| 2 | 2016-08-11T14:30:03Z | [
"python",
"django",
"dictionary"
] |
'for' statements should use the format 'for x in y': while iterating over value retrieved from dictionary using django template | 38,899,059 | <p>I have a context dictionary entry <code>objectives</code> that maps objective query objects to a list of tests that belong to that objective. Example code:</p>
<pre><code>objectives = Objective.objects.filter(requirement=requirement)
context_dict["requirements"][requirement] = objectives
for objective in objectives:
tests = Test.objects.filter(objective=objective)
context_dict["objectives"][objective] = tests
</code></pre>
<p>In my django html template, I iterate over objectives and display them. I then want to iterate over the tests that belong to these objectives. When I do this:</p>
<pre><code>{% for test in {{ objectives|get_item:objective }} %}
</code></pre>
<p>I get a <code>TemplateSyntaxError: 'for' statements should use the format 'for x in y':</code></p>
<p>In the application/templatetags directory, I have:</p>
<pre><code>from django.template.defaulttags import register
...
@register.filter
def get_item(dictionary, key):
return dictionary.get(key)
</code></pre>
<p>If instead I make <code>{{ objectives|get_item:objective }}</code> a JS variable, I see that it does indeed produce a list, which I should be able to iterate over. Of course, I can't mix JS variables and the django template tags, so this is only for debugging:</p>
<pre><code>var tests = {{ objectives|get_item:objective }}
var tests = [<Test: AT399_8_1>, <Test: AT399_8_2>, <Test: AT399_8_3>, <Test: AT399_8_4>, <Test: AT399_8_5> '...(remaining elements truncated)...']
</code></pre>
<p>How do I iterate over this list in the django template tag?</p>
| 0 | 2016-08-11T14:23:06Z | 38,900,876 | <p>You already have an answer, but note that dropping the <code>{{ }}</code> tags and keeping everything else the same would have worked fine. </p>
<pre><code>{% for test in objectives|get_item:objective %}
</code></pre>
| 0 | 2016-08-11T15:44:22Z | [
"python",
"django",
"dictionary"
] |
How can I raise an exception through Tornado coroutines incorrectly called? | 38,899,114 | <p>I have a scenario with Tornado where I have a coroutine that is called from a non-coroutine or without yielding, yet I need to propagate the exception back.</p>
<p>Imagine the following methods:</p>
<pre><code>@gen.coroutine
def create_exception(with_yield):
if with_yield:
yield exception_coroutine()
else:
exception_coroutine()
@gen.coroutine
def exception_coroutine():
raise RuntimeError('boom')
def no_coroutine_create_exception(with_yield):
if with_yield:
yield create_exception(with_yield)
else:
create_exception(with_yield)
</code></pre>
<p>Calling:</p>
<pre><code>try:
# Throws exception
yield create_exception(True)
except Exception as e:
print(e)
</code></pre>
<p>will properly raise the exception. However, none of the following raise the exception :</p>
<pre><code>try:
# none of these throw the exception at this level
yield create_exception(False)
no_coroutine_create_exception(True)
no_coroutine_create_exception(False)
except Exception as e:
    print('This is never hit')
</code></pre>
<p>The latter are variants similar to what my problem is - I have code outside my control calling coroutines without using yield. In some cases, they are not coroutines themselves. Regardless of which scenario, it means that any exceptions they generate are swallowed until Tornado returns them as "future exception not received."</p>
<p>This is pretty contrary to Tornado's intent, their documentation basically states you need to do yield/coroutine through the entire stack in order for it to work as I'm desiring without hackery/trickery.</p>
<p>I can change the way the exception is raised (ie modify <code>exception_coroutine</code>). But I cannot change several of the intermediate methods.</p>
<p>Is there something I can do in order to <em>force</em> the exception to be raised throughout the Tornado stack, even if it is not properly yielded? Basically to properly raise the exception in all of the last three situations?</p>
<p>This is complicated because I <strong>cannot</strong> change the code that is causing this situation. I can only change <code>exception_coroutine</code> for example in the above.</p>
| 0 | 2016-08-11T14:25:35Z | 38,904,937 | <p>If you're curious why this isn't working, it's because no_coroutine_create_exception contains a yield statement. Therefore it's a generator function, and calling it does <em>not</em> execute its code, it only creates a generator object:</p>
<pre><code>>>> no_coroutine_create_exception(True)
<generator object no_coroutine_create_exception at 0x101651678>
>>> no_coroutine_create_exception(False)
<generator object no_coroutine_create_exception at 0x1016516d0>
</code></pre>
<p>Neither of the calls above executes any Python code, it only creates generators that must be iterated.</p>
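<p>The same lazy behavior can be shown without Tornado at all; a small self-contained sketch (names are mine, not from the question):</p>

```python
import types

def gen_func():
    # Not raised at call time -- the body only runs on iteration.
    raise RuntimeError('boom')
    yield 1

g = gen_func()                       # builds a generator; executes nothing
assert isinstance(g, types.GeneratorType)

try:
    next(g)                          # the body runs now; exception surfaces
except RuntimeError as e:
    assert str(e) == 'boom'
```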
<p>You'd have to make a blocking function that starts the IOLoop and runs it until your coroutine finishes:</p>
<pre><code>def exception_blocking():
return ioloop.IOLoop.current().run_sync(exception_coroutine)
exception_blocking()
</code></pre>
<p>(The IOLoop acts as a scheduler for multiple non-blocking tasks, and the <code>gen.coroutine</code> decorator is responsible for iterating the coroutine until completion.)</p>
<p>However, I think I'm likely answering your immediate question but merely enabling you to proceed down an unproductive path. You're almost certainly better off using async code or blocking code throughout instead of trying to mix them.</p>
| -1 | 2016-08-11T19:48:15Z | [
"python",
"python-2.7",
"asynchronous",
"tornado"
] |
How can I raise an exception through Tornado coroutines incorrectly called? | 38,899,114 | <p>I have a scenario with Tornado where I have a coroutine that is called from a non-coroutine or without yielding, yet I need to propagate the exception back.</p>
<p>Imagine the following methods:</p>
<pre><code>@gen.coroutine
def create_exception(with_yield):
if with_yield:
yield exception_coroutine()
else:
exception_coroutine()
@gen.coroutine
def exception_coroutine():
raise RuntimeError('boom')
def no_coroutine_create_exception(with_yield):
if with_yield:
yield create_exception(with_yield)
else:
create_exception(with_yield)
</code></pre>
<p>Calling:</p>
<pre><code>try:
# Throws exception
yield create_exception(True)
except Exception as e:
print(e)
</code></pre>
<p>will properly raise the exception. However, none of the following raise the exception :</p>
<pre><code>try:
# none of these throw the exception at this level
yield create_exception(False)
no_coroutine_create_exception(True)
no_coroutine_create_exception(False)
except Exception as e:
    print('This is never hit')
</code></pre>
<p>The latter are variants similar to what my problem is - I have code outside my control calling coroutines without using yield. In some cases, they are not coroutines themselves. Regardless of which scenario, it means that any exceptions they generate are swallowed until Tornado returns them as "future exception not received."</p>
<p>This is pretty contrary to Tornado's intent, their documentation basically states you need to do yield/coroutine through the entire stack in order for it to work as I'm desiring without hackery/trickery.</p>
<p>I can change the way the exception is raised (ie modify <code>exception_coroutine</code>). But I cannot change several of the intermediate methods.</p>
<p>Is there something I can do in order to <em>force</em> the exception to be raised throughout the Tornado stack, even if it is not properly yielded? Basically to properly raise the exception in all of the last three situations?</p>
<p>This is complicated because I <strong>cannot</strong> change the code that is causing this situation. I can only change <code>exception_coroutine</code> for example in the above.</p>
| 0 | 2016-08-11T14:25:35Z | 38,928,739 | <p>What you're asking for is impossible in Python because the decision to <code>yield</code> or not is made by the calling function after the coroutine has finished. The coroutine must return without raising an exception so it can be <code>yielded</code>, and after that it is no longer possible for it to raise an exception into the caller's context in the event that the <code>Future</code> is not <code>yielded</code>.</p>
<p>The best you can do is detect the garbage collection of a Future, but this can't do anything but log (this is how the "future exception not retrieved" message works)</p>
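<p>To illustrate the idea, here is a minimal framework-free sketch of that logging-on-collection behavior — a toy class of my own, not Tornado's actual <code>Future</code> implementation:</p>

```python
class LoggingFuture(object):
    """Toy stand-in mimicking Tornado's 'future exception not retrieved'."""
    def __init__(self):
        self._exc = None
        self._retrieved = False

    def set_exception(self, exc):
        self._exc = exc

    def exception(self):
        self._retrieved = True   # someone looked at the error
        return self._exc

    def __del__(self):
        # Collection time is the last chance to report a swallowed error;
        # by then it is too late to re-raise into any caller's context.
        if self._exc is not None and not self._retrieved:
            print('future exception not retrieved:', self._exc)
```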
| 0 | 2016-08-13T02:51:45Z | [
"python",
"python-2.7",
"asynchronous",
"tornado"
] |
IntelliJ + Python plugin: Python remote interpreter gets created without classpath, does not work | 38,899,173 | <p>When I create a new remote Python interpreter, IntelliJ doesn't find any dependencies to my code and doesn't seem to index any libraries. Most of the code is red. I think I've pinpointed that to the "classpath" being completely empty, which is unlike some other Python SDKs that I have added (local ones). Some of the times I am able to get it to populate the classpath with paths pointing to the IntelliJ Caches directory by clicking around in the interface, but I most of the times it does not work and I cannot reproduce how to make it work. How do I make sure the classpath gets populated correctly?</p>
<p>I am using IntelliJ Ultimate version 2016.2.1. with the Python plugin version 2016.2.162.43. I am developing on a Vagrant virtual machine and I'm adding a Python remote interpreter that is inside a virtual environment (venv) inside the virtual machine. When I add the remote interpreter, I use:</p>
<ul>
<li>On the SDKs tab - the + button.</li>
<li>Python SDK</li>
<li>Add Remote</li>
<li>I select the Vagrant option</li>
<li>Point it to my Vagrant project directory.</li>
<li>Point it to the python3.5 executable inside my virtualenv</li>
<li>Add the SDK</li>
</ul>
<p>Then the classpath looks like this: <a href="https://www.dropbox.com/s/3xbzopb4y9bhn0u/Screenshot%202016-08-11%2017.19.43.png?dl=0" rel="nofollow">https://www.dropbox.com/s/3xbzopb4y9bhn0u/Screenshot%202016-08-11%2017.19.43.png?dl=0</a> and IntelliJ doesn't recognize any libraries/builtins. For other SDKs, the classpath contains several entries with remote_sources, python_stubs or python-skeletons in the name and they work.</p>
| 1 | 2016-08-11T14:27:53Z | 39,558,752 | <p>As a workaround, I copied every entry from the local python interpreter classpath to the remote one and everything seems to work</p>
<p>Edit:</p>
<p>Actually, I don't know what triggered it, but some days after I wrote this, I noticed that IDEA started downloading source files from the server. I went to the interpreter settings, and the classpath entries I had manually added were gone and replaced by "system/remote_resources" entries. I think this is how it's supposed to work, but unfortunately I don't why I didn't work from the start nor how to trigger the correct behavior, it just started working on its own.</p>
| 1 | 2016-09-18T14:31:18Z | [
"python",
"intellij-idea"
] |
Python performing calculations while comparing dictionaries | 38,899,178 | <p>I am new to Python and am trying to perform counts and calculations by comparing two dictionaries but having trouble presenting it together. Is there a way to perform the below during my first iteration of printing the keys and values together?</p>
<p>I would like to:</p>
<p>1.) Check if a key in dictionary x is in dictionary y</p>
<p>2.) If key exists divide the key value by total key values from both dictionaries and print a percentage next to the value</p>
<p>3.) If the key does NOT exist divide the key by itself and print a percentage next to the value</p>
<p>I am able to accomplish the two separately but not together. Also if you have suggestions for improving my code efficiency it would be greatly appreciated.</p>
<p>Code:</p>
<pre><code>import operator, itertools
trained = {'Dog': 4, 'Cat': 3, 'Bird': 1, 'Fish': 12, 'Mouse': 19}
untrained = {'Cat': 2, 'Mouse': 7, 'Dog': 4}
trained_list = []
untrained_list = []
print('='* 40)
print('Trained \t Untrained')
print('='* 40)
for k, v in sorted(trained.items(), key=operator.itemgetter(1), reverse=True):
t = (k, v )
trained_list.append(t)
for k, v in sorted(untrained.items(), key=operator.itemgetter(1), reverse=True):
u = (k, v )
untrained_list.append(u)
for x, y in itertools.zip_longest(trained_list, untrained_list, fillvalue=" "):
print(str(x[0]).ljust(5, ' ') + '\t' + str(x[1]) + '\t', y[0] + '\t' + str(y[1]))
print('=' * 30)
for k,v in untrained.items():
if k in trained:
print('Untrained ' + str(k).ljust(5, ' ') + '\t\t' + ('{0:.2f}%').format((untrained[k]/(untrained[k] + trained[k])*100)))
print('=' * 30)
for k, v in trained.items():
if k in untrained:
print('Trained ' + str(k).ljust(5, ' ') + '\t\t' + ('{0:.2f}%').format((trained[k]/(untrained[k] + trained[k]))*100))
else:
print('Trained ' + str(k).ljust(5, ' ') + '\t\t' + ('{0:.2f}%').format((trained[k] / (trained[k])) * 100))
</code></pre>
<p>Current Output:</p>
<pre><code>========================================
Trained Untrained
========================================
Mouse 19 Mouse 7
Fish 12 Dog 4
Dog 4 Cat 2
Cat 3
Bird 1
==============================
Untrained Mouse 26.92%
Untrained Cat 40.00%
Untrained Dog 50.00%
==============================
Trained Fish 100.00%
Trained Bird 100.00%
Trained Cat 60.00%
Trained Dog 50.00%
Trained Mouse 73.08%
</code></pre>
<p>Desired Output:</p>
<pre><code>========================================
Trained Untrained
========================================
Mouse 19 (73.08%) Mouse 7 (26.92%)
Fish 12 (100.00%) Dog 4 (50.00%)
Dog 4 (50.00%) Cat 2 (40.00%)
Cat 3 (60.00%)
Bird 1 (100.00%)
</code></pre>
| 1 | 2016-08-11T14:28:08Z | 38,899,628 | <p>Here's one option:</p>
<pre><code>from collections import namedtuple
from itertools import zip_longest
trained = {'Dog': 4, 'Cat': 3, 'Bird': 1, 'Fish': 12, 'Mouse': 19}
untrained = {'Cat': 2, 'Mouse': 7, 'Dog': 4}
Score = namedtuple('Score', ('total', 'percent', 'name'))
trained_scores = []
for t in trained:
trained_scores.append(
Score(total=trained[t],
percent=(trained[t]/(trained[t]+untrained.get(t, 0)))*100,
name=t)
)
untrained_scores = []
for t in untrained:
untrained_scores.append(
Score(total=untrained[t],
percent=(untrained[t]/(untrained[t]+trained.get(t, 0)))*100,
name=t)
)
trained_scores.sort(reverse=True)
untrained_scores.sort(reverse=True)
import pprint; pprint.pprint(trained_scores)
import pprint; pprint.pprint(untrained_scores)
# I might name these something different.
row_template = '{:<30} {:<30}'
item_template = '{0.name:<10} {0.total:>3} ({0.percent:>6.2f}%)'
print('='*61)
print(row_template.format('Trained', 'Untrained'))
print('='*61)
for trained, untrained in zip_longest(trained_scores, untrained_scores):
print(row_template.format(
'' if trained is None else item_template.format(trained),
'' if untrained is None else item_template.format(untrained),
))
</code></pre>
<p>Outputs:</p>
<pre><code>=============================================================
Trained Untrained
=============================================================
Mouse 19 ( 73.08%) Mouse 7 ( 26.92%)
Fish 12 (100.00%) Dog 4 ( 50.00%)
Dog 4 ( 50.00%) Cat 2 ( 40.00%)
Cat 3 ( 60.00%)
Bird 1 (100.00%)
</code></pre>
| 1 | 2016-08-11T14:47:36Z | [
"python",
"dictionary"
] |
Merge dataframes without matching column | 38,899,341 | <p>I have the following two dataframes:</p>
<pre><code>df1:
column_01 value_01
aaa 1
bbb 2
df2:
column_02 value_02
ccc 3
ddd 4
</code></pre>
<p>I need to merge the dataframes such that the rows in <code>df1</code> are duplicated to contain a row for every row in <code>df2</code>. The output would be as follows:</p>
<pre><code>column_01 value_01 column_02 value_02
aaa 1 ccc 3
aaa 1 ddd 4
bbb 2 ccc 3
bbb 2 ddd 4
</code></pre>
<p>I have tried variations of <code>merge</code> and <code>join</code>, but can't get it working because I am not matching column values, I am purposely trying to duplicate <code>df1</code> rows for each <code>df2</code> row.</p>
| 2 | 2016-08-11T14:34:51Z | 38,899,384 | <p>You can use create new columns with some scalar value and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> by these columns:</p>
<pre><code>df1['one'] = 1
df2['one'] = 1
print (pd.merge(df1, df2, on='one').drop('one', axis=1))
column_01 value_01 column_02 value_02
0 aaa 1 ccc 3
1 aaa 1 ddd 4
2 bbb 2 ccc 3
3 bbb 2 ddd 4
</code></pre>
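<p>Equivalently, the dummy key can be added inline with <code>assign</code> so the inputs aren't mutated, and newer pandas releases (1.2+) ship a built-in cross join:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'column_01': ['aaa', 'bbb'], 'value_01': [1, 2]})
df2 = pd.DataFrame({'column_02': ['ccc', 'ddd'], 'value_02': [3, 4]})

# assign() adds the throwaway key without mutating df1/df2 in place.
out = (pd.merge(df1.assign(key=1), df2.assign(key=1), on='key')
         .drop('key', axis=1))

# On pandas >= 1.2 the dummy column is unnecessary:
# out = pd.merge(df1, df2, how='cross')
```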
| 2 | 2016-08-11T14:36:53Z | [
"python",
"pandas",
"dataframe",
"merge",
"cross-join"
] |
Python, Pyautogui, and CTRL-C | 38,899,566 | <p>I am attempting to complete a simple process of opening a web/browser based document, selecting a field within said document, and then copying it so that it goes into my operating system's clipboard. Here's the specs :</p>
<p>Windows 7
Google Chrome ( latest stable )
Python 3.5
pyautogui for keyboard/mouse control</p>
<p>Here is the field I am trying to work with ( <a href="http://screencast.com/t/jt0kTagb" rel="nofollow">http://screencast.com/t/jt0kTagb</a> ). When that little arrow is clicked it pops open to reveal a calendar to pick a date. If you click directly in the field instead it highlights the field's contents. When I manually press CTRL+C in this situation the field's contents go right into the clipboard as expected.</p>
<p>I've tried two methods of getting the field to go into my clipboard. The first was leveraging pyautogui's keyDown/up and press functions which essentially looked like :</p>
<pre><code>imageCoord = noClick("img/date.png")
x, y = pyautogui.center(imageCoord)
pyautogui.click(x, y + 20)
pyautogui.keyDown('ctrl')
pyautogui.press('c')
pyautogui.keyUp('ctrl')
</code></pre>
<p>I then attempted to just use the app menu that appears if you right click on something which looked like this:</p>
<pre><code>imageCoord = noClick("img/date.png")
x, y = pyautogui.center(imageCoord)
pyautogui.click(x, y + 20, button='right')
pyautogui.press("down", presses=2)
time.sleep(1)
pyautogui.press('enter')
</code></pre>
<p>Lastly I tried the pyautogui.hotkey() function which looked like this : </p>
<pre><code>imageCoord = noClick("img/date.png")
x, y = pyautogui.center(imageCoord)
pyautogui.click(x, y + 20, button='right')
pyautogui.hotkey('ctrl', 'c')
</code></pre>
<p>In all three events the field is indeed selected and as best as I can tell the keypresses are going through as all other presses/functions that happen prior go off without a hitch. </p>
<p>The problem that I am facing is that when I do this manually in the same fashion as both of those scripts above I am able to get the contents. When I use the scripts, the clipboard is never updated/populated with the field's contents. Is there something I am overlooking or not considering when working with Python and Window's clipboard?</p>
<p>In the end all I am trying to do is put that value into an excel sheet. Any advice would be appreciated!</p>
| 0 | 2016-08-11T14:44:43Z | 39,489,938 | <p>I have also discovered this issue on a different automation script, and have been working on troubleshooting it for several days. I'm also on Python 3.5 and Windows 7. I can rule out that it has anything to do with Google Chrome, as my particular script is actually working with SAP.</p>
<p>The documentation for pyautogui on Read the Docs (<a href="https://pyautogui.readthedocs.io/en/latest/cheatsheet.html#keyboard-functions" rel="nofollow">https://pyautogui.readthedocs.io/en/latest/cheatsheet.html#keyboard-functions</a>) gives a direct example of using Ctrl + C to copy text to the clipboard, so I can verify you're not actually doing something wrong. I believe you're just looking at a bug here. </p>
<p>I have opened an issue on the project's GitHub page:
<a href="https://github.com/asweigart/pyautogui/issues/102" rel="nofollow">https://github.com/asweigart/pyautogui/issues/102</a></p>
| 1 | 2016-09-14T12:00:58Z | [
"python"
] |
How to round floats to scaled integers? | 38,899,592 | <p>to further explain my title: I have an array of floats which I want to round, HOWEVER, I want to round the numbers to a number that isn't the closest integer. For example, let's say I want the numbers to be rounded to the nearest integer that is a multiple of 2. This is what I have:</p>
<pre><code>Temp = np.around(data,0)
</code></pre>
<p>with <code>data</code> being an array of floats. The numbers are rounded to the closest integer, but I want them to be rounded to the closest multiple of 2. My goal:</p>
<p>0.9 -> 0</p>
<p>1.1 -> 2</p>
<p>etc.</p>
<p>Thanks!</p>
| 2 | 2016-08-11T14:45:55Z | 38,899,846 | <p>Following is one way of doing it:</p>
<pre><code>import math
data = [0.9, 1.1, 10.2, 7.4]
rounded_numbers = []
for num in data:
rounded_up_num = math.ceil(num)
if rounded_up_num % 2 == 0:
rounded_num = rounded_up_num
else:
rounded_num = math.floor(num)
rounded_numbers.append(int(rounded_num))
print(rounded_numbers) # [0, 2, 10, 8]
</code></pre>
| 1 | 2016-08-11T14:56:17Z | [
"python",
"math",
"floating-point",
"rounding"
] |
How to round floats to scaled integers? | 38,899,592 | <p>to further explain my title: I have an array of floats which I want to round, HOWEVER, I want to round the numbers to a number that isn't the closest integer. For example, let's say I want the numbers to be rounded to the nearest integer that is a multiple of 2. This is what I have:</p>
<pre><code>Temp = np.around(data,0)
</code></pre>
<p>with <code>data</code> being an array of floats. The numbers are rounded to the closest integer, but I want them to be rounded to the closest multiple of 2. My goal:</p>
<p>0.9 -> 0</p>
<p>1.1 -> 2</p>
<p>etc.</p>
<p>Thanks!</p>
| 2 | 2016-08-11T14:45:55Z | 38,899,994 | <p>A multiple of two is straightforward:</p>
<pre><code>x = np.array([0.9, 1.1, 10.2, 7.4])
2*np.round(x/2) # array([ 0., 2., 10., 8.])
</code></pre>
<p>But there's not a universal approach to this. For example there's no obvoius "round to the nearest Fibonacci number". Consider the formula for multiple of <code>2</code> as, given a function <code>f(x)=2*x</code>: 1) first apply the inverse of <code>f</code> (divide in this case), 2) then <code>round</code>, 3) then apply <code>f</code> to result. For this to work, <code>f</code> must exist, have an inverse, and the result must also be an <code>int</code>; so it only works for a few functions.</p>
| 3 | 2016-08-11T15:03:38Z | [
"python",
"math",
"floating-point",
"rounding"
] |
Turtle Graphics not Working on Pycharm | 38,899,612 | <p>I have been trying to use turtle python graphics in Pycharm Community Edition, but it keeps giving me an error for import turtle. My code is so far:</p>
<pre><code> import turtle
t=turtle.Pen()
</code></pre>
<p>And it gives me an error for that.
Then I do something like:</p>
<pre><code> t.forward(50)
t.speed(10)
t.left(90)
t.forward(-20)
</code></pre>
<p>And then I end with:</p>
<pre><code> turtle.exitonclick()
</code></pre>
<p>But it gives me an error for anything relating to turtle. How am I supposed to make this work? Does Pycharm even support turtle?</p>
| -2 | 2016-08-11T14:46:58Z | 38,910,998 | <p>There's nothing wrong with your code when run from the command line with Python 3 or Python 2. You didn't name your own program file "<strong>turtle.py</strong>" by any chance did you? If so, rename it to something else and try again.</p>
<pre><code>import turtle
t = turtle.Pen()
t.speed('fast')
t.forward(50)
t.left(90)
t.backward(20)
turtle.exitonclick()
</code></pre>
| 0 | 2016-08-12T06:05:29Z | [
"python",
"pycharm",
"turtle-graphics"
] |
Continue isn't working properly | 38,899,696 | <p>I need help fixing my code.
In the Python code below, 'continue' is not working properly.</p>
<pre><code>dicemp = {'12345':''}
while(1):
choice = int(input("Please enter your choice\n"))
if (choice == 1):
empno = input("Enter employee number: ")
for i in dicemp.keys():
if i == empno:
print("employee already exists in the database")
continue
print("Hello")
</code></pre>
<p>Output:</p>
<p>Please enter your choice</p>
<p>1</p>
<p>Enter employee number: 12345</p>
<p>employee already exists in the database</p>
<p>Hello</p>
<p>So for the above code if I give same employee no. 12345 it is going into if block and printing the message"employee already exists in the database" after this it should continue from start but in this case it is also printing "hello".</p>
| -4 | 2016-08-11T14:50:16Z | 38,899,780 | <p>Your <code>continue</code> is moving the <code>for</code> loop on to its next iteration, which would have happened anyway. If you need to continue the outer loop, you can do something like this:</p>
<pre><code>while True:
choice = int(input("Please enter your choice\n"))
if choice == 1:
empno = input("Enter employee number: ")
found = False
for i in dicemp:
if i == empno:
print("employee already exists in the database")
found = True
break
if found:
continue
print("Hello")
</code></pre>
<p>Now the <code>continue</code> is outside the <code>for</code> loop, so it will continue the outer loop.</p>
<p>You could simplify this to:</p>
<pre><code>while True:
choice = int(input("Please enter your choice\n"))
if choice==1:
empno = input("Enter employee number: ")
if empno in dicemp:
print("employee already exists in the database")
continue
print("Hello")
</code></pre>
<p>and get rid of the inner loop entirely.</p>
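<p>A tiny standalone demonstration that <code>continue</code> only ever advances the innermost enclosing loop:</p>

```python
hits = []
for i in range(2):
    for j in range(3):
        if j == 1:
            continue  # skips j == 1; the outer loop is unaffected
        hits.append((i, j))

assert hits == [(0, 0), (0, 2), (1, 0), (1, 2)]
```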
| 2 | 2016-08-11T14:53:10Z | [
"python",
"while-loop",
"continue"
] |
How to display special characters in the CMD part of python | 38,899,740 | <p>I have a python program that returns a large amount of info from an API. The info contains special characters like the TM sign and the â
symbol. When the code is run in the IDLE interface it works exactly as it should, returning all information; however, when the program is run in the CMD-like interface it crashes because it cannot display the symbols. I am using PRINT to output the information. Is there any way I can make it display the characters in the CMD part?</p>
| 1 | 2016-08-11T14:51:32Z | 39,754,124 | <p>CMD's default code page cannot display those characters; you must convert them before printing.</p>
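<p>One hedged way to do that conversion (function name is mine): re-encode for the console's code page with <code>errors='replace'</code> so unsupported glyphs degrade to <code>?</code> instead of raising <code>UnicodeEncodeError</code>:</p>

```python
import sys

def console_safe(text):
    # Encode for whatever the console supports; replace what it can't.
    enc = sys.stdout.encoding or 'ascii'
    return text.encode(enc, errors='replace').decode(enc)

print(console_safe(u'Brand\u2122 \u2605'))
```

<p>Alternatively, on Windows, switching the console to UTF-8 (<code>chcp 65001</code>) or setting <code>PYTHONIOENCODING=utf-8</code> is another common route.</p>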
| -1 | 2016-09-28T17:32:04Z | [
"python",
"special-characters"
] |
Process request thread error with Flask Application? | 38,899,790 | <p>This might be a long shot, but here's the error that i'm getting:</p>
<pre><code> File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 596, in process_request_thread
self.finish_request(request, client_address)
File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 331, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 654, in __init__
self.finish()
File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 713, in finish
self.wfile.close()
File "/home/MY NAME/anaconda/lib/python2.7/socket.py", line 283, in close
self.flush()
File "/home/MY NAME/anaconda/lib/python2.7/socket.py", line 307, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
</code></pre>
<p>I have built a <code>Flask</code> application that takes addresses as input and performs some string formatting, manipulation, etc, then sends them to <code>Bing Maps</code> to geocode (through the <code>geopy</code> external module).</p>
<p>I'm using this application to clean very large data sets. The application works for inputs of usually ~1,500 addresses (inputted 1 per line). By that I mean that it will process the address and send it to <code>Bing Maps</code> to be geocoded and then returned. After around 1,500 addresses, the application becomes unresponsive. If this happens while i'm at work, my proxy tells me that there is a <code>tcp error</code>. If i'm on a non work computer it just doesn't load the page. If I restart the application then it functions perfectly fine. Because of this i'm forced to run my program with batches of about 1,000 addresses (just to be safe because i'm not sure yet of the exact number that the program crashes at).</p>
<p>Does anyone have any idea what might be causing it? </p>
<p>I was thinking something along the lines of me hitting my Bing API key limit for the day (which is 30,000), but that can't be accurate as I rarely use more than 15,000 requests per day.</p>
<p>My second thought was that maybe it's because i'm still using the standard flask server to run my application. Would switching to <code>gunicorn</code> or <code>uWSGI</code> solve this?</p>
<p>My third thought was maybe it was getting overloaded with the amount of requests. I tried to sleep the program for 15 seconds or so after the first 1,000 addresses but that didn't solve anything.</p>
<p>If anyone needs further clarification please let me know.</p>
<p>Here is my code for the backend of the Flask Application. I'm getting the input from this function:</p>
<pre><code>@app.route("/clean", methods=['POST'])
def dothing():
addresses = request.form['addresses']
return cleanAddress(addresses)
</code></pre>
<p>Here is the <code>cleanAddress</code> function: It's a bit cluttered right now, with all of the if statements to check for specific typos in the address, but I plan on moving a lot of this code into other functions in another file and just passing the address though those functions to clean it up a bit.</p>
<pre><code>def cleanAddress(addresses):
counter = 0
# nested helper function to fix addresses such as '30 w 60th'
def check_st(address):
if 'broadway' in address:
return address
has_th_st_nd_rd = re.compile(r'(?P<number>[\d]{1,4}(th|st|nd|rd)\s)(?P<following>.*)')
has_number = has_th_st_nd_rd.search(address)
if has_number is not None:
if re.match(r'(street|st|floor)', has_number.group('following')):
return address
else:
new_address = re.sub('(?P<number>[\d]{1,4}(st|nd|rd|th)\s)', r'\g<number>street ', address, 1)
return new_address
else:
return address
addresses = addresses.split('\n')
cleaned = []
success = 0
fail = 0
cleaned.append('<body bgcolor="#FACC2E"><center><img src="http://goglobal.dhl-usa.com/common/img/dhl-express-logo.png" alt="Smiley face" height="100" width="350"><br><p>')
cleaned.append('<br><h3>Note: Everything before the first comma is the Old Address. Everything after the first comma is the New Address</h13>')
cleaned.append('<p><h3>To format the output in Excel, split the columns using "," as the delimiter. </p></h3>')
cleaned.append('<p><h2><font color="red">Old Address </font> <font color="black">New Address </font></p></h2>')
for address in addresses:
dirty = address.strip()
if ',' in address:
dirty = dirty.replace(',', '')
cleaned.append('<font color="red">' + dirty + ', ' + '</font>')
address = address.lower()
address = re.sub('[^A-Za-z0-9#]+', ' ', address).lstrip()
pattern = r"\d+.* +(\d+ .*(" + "|".join(patterns) + "))"
address = re.sub(pattern, "\\1", address)
address = check_st(address)
if 'one ' in address:
address = address.replace('one', '1')
if 'two' in address:
address = address.replace('two', '2')
if 'three' in address:
address = address.replace('three', '3')
if 'four' in address:
address = address.replace('four', '4')
if 'five' in address:
address = address.replace('five', '5')
if 'eight' in address:
address = address.replace('eight', '8')
if 'nine' in address:
address = address.replace('nine', '9')
if 'fith' in address:
address = address.replace('fith', 'fifth')
if 'aveneu' in address:
address = address.replace('aveneu', 'avenue')
if 'united states of america' in address:
address = address.replace('united states of america', '')
if 'ave americas' in address:
address = address.replace('ave americas', 'avenue of the americas')
if 'americas avenue' in address:
address = address.replace('americas avenue', 'avenue of the americas')
if 'avenue of americas' in address:
address = address.replace('avenue of americas', 'avenue of the americas')
if 'avenue of america ' in address:
address = address.replace('avenue of america ', 'avenue of the americas ')
if 'ave of the americ' in address:
address = address.replace('ave of the americ', 'avenue of the americas')
if 'avenue america' in address:
address = address.replace('avenue america', 'avenue of the americas')
if 'americaz' in address:
address = address.replace('americaz', 'americas')
if 'ave of america' in address:
address = address.replace('ave of america', 'avenue of the americas')
if 'amrica' in address:
address = address.replace('amrica', 'americas')
if 'americans' in address:
address = address.replace('americans', 'americas')
if 'walk street' in address:
address = address.replace('walk street', 'wall street')
if 'northend' in address:
address = address.replace('northend', 'north end')
if 'inth' in address:
address = address.replace('inth', 'ninth')
if 'aprk' in address:
address = address.replace('aprk', 'park')
if 'eleven' in address:
address = address.replace('eleven', '11')
if ' av ' in address:
address = address.replace(' av ', ' avenue')
if 'avnue' in address:
address = address.replace('avnue', 'avenue')
if 'ofthe americas' in address:
address = address.replace('ofthe americas', 'of the americas')
if 'aj the' in address:
address = address.replace('aj the', 'of the')
if 'fifht' in address:
address = address.replace('fifht', 'fifth')
if 'w46' in address:
address = address.replace('w46', 'w 46')
if 'w42' in address:
address = address.replace('w42', 'w 42')
if '95st' in address:
address = address.replace('95st', '95th st')
if 'e61 st' in address:
address = address.replace('e61 st', 'e 61st')
if 'driver information' in address:
address = address.replace('driver information', '')
if 'e87' in address:
address = address.replace('e87', 'e 87')
if 'thrd avenus' in address:
address = address.replace('thrd avenus', 'third avenue')
if '3r ' in address:
address = address.replace('3r ', '3rd ')
if 'st ates' in address:
address = address.replace('st ates', '')
if 'east52nd' in address:
address = address.replace('east52nd', 'east 52nd')
if 'authority to leave' in address:
address = address.replace('authority to leave', '')
if 'sreet' in address:
address = address.replace('sreet', 'street')
if 'w47' in address:
address = address.replace('w47', 'w 47')
if 'signature required' in address:
address = address.replace('signature required', '')
if 'direct' in address:
address = address.replace('direct', '')
if 'streetapr' in address:
address = address.replace('streetapr', 'street')
if 'steet' in address:
address = address.replace('steet', 'street')
if 'w39' in address:
address = address.replace('w39', 'w 39')
if 'ave of new york' in address:
address = address.replace('ave of new york', 'avenue of the americas')
if 'avenue of new york' in address:
address = address.replace('avenue of new york', 'avenue of the americas')
if 'brodway' in address:
address = address.replace('brodway', 'broadway')
if 'w 31 ' in address:
        address = address.replace('w 31 ', 'w 31st ')
if 'w 34 ' in address:
address = address.replace('w 34 ', 'w 34th ')
if 'w38' in address:
address = address.replace('w38', 'w 38')
if 'broadeay' in address:
address = address.replace('broadeay', 'broadway')
if 'w37' in address:
address = address.replace('w37', 'w 37')
if '35street' in address:
address = address.replace('35street', '35th street')
if 'eighth avenue' in address:
address = address.replace('eighth avenue', '8th avenue')
if 'west 33' in address:
address = address.replace('west 33', 'west 33rd')
if '34t ' in address:
address = address.replace('34t ', '34th ')
if 'street ave' in address:
address = address.replace('street ave', 'ave')
if 'avenue of york' in address:
address = address.replace('avenue of york', 'avenue of the americas')
if 'avenue aj new york' in address:
address = address.replace('avenue aj new york', 'avenue of the americas')
if 'avenue ofthe new york' in address:
address = address.replace('avenue ofthe new york', 'avenue of the americas')
if 'e4' in address:
address = address.replace('e4', 'e 4')
if 'avenue of nueva york' in address:
address = address.replace('avenue of nueva york', 'avenue of the americas')
if 'avenue of new york' in address:
address = address.replace('avenue of new york', 'avenue of the americas')
if 'west end new york' in address:
address = address.replace('west end new york', 'west end avenue')
#print address
address = address.split(' ')
for pattern in patterns:
try:
if address[0].isdigit():
continue
else:
location = address.index(pattern) + 1
number_location = address[location]
#print address[location]
#if 'th' in address[location + 1] or 'floor' in address[location + 1] or '#' in address[location]:
# continue
except (ValueError, IndexError):
continue
if number_location.isdigit() and len(number_location) <= 4:
address = [number_location] + address[:location] + address[location+1:]
break
address = ' '.join(address)
if '#' in address:
address = address.replace('#', '')
#print (address)
i = 0
for char in address:
if char.isdigit():
address = address[i:]
break
i += 1
#print (address)
if 'plz' in address:
address = address.replace('plz', 'plaza ', 1)
if 'hstreet' in address:
address = address.replace('hstreet', 'h street')
if 'dstreet' in address:
address = address.replace('dstreet', 'd street')
if 'hst' in address:
address = address.replace('hst', 'h st')
if 'dst' in address:
address = address.replace('dst', 'd st')
if 'have' in address:
address = address.replace('have', 'h ave')
if 'dave' in address:
address = address.replace('dave', 'd ave')
if 'havenue' in address:
address = address.replace('havenue', 'h avenue')
if 'davenue' in address:
address = address.replace('davenue', 'd avenue')
#print address
regex = r'(.*)(' + '|'.join(patterns) + r')(.*)'
address = re.sub(regex, r'\1\2', address).lstrip() + " nyc"
print (address)
if 'americasas st' in address:
address = address.replace('americasas st', 'americas')
try:
clean = geolocator.geocode(address)
x = clean.address
address, city, zipcode, country = x.split(",")
address = address.lower()
if 'first' in address:
address = address.replace('first', '1st')
if 'second' in address:
address = address.replace('second', '2nd')
if 'third' in address:
address = address.replace('third', '3rd')
if 'fourth' in address:
address = address.replace('fourth', '4th')
if 'fifth' in address:
address = address.replace('fifth', '5th')
if ' sixth a' in address:
address = address.replace('ave', '')
address = address.replace('avenue', '')
address = address.replace(' sixth', ' avenue of the americas')
if ' 6th a' in address:
address = address.replace('ave', '')
address = address.replace('avenue', '')
address = address.replace(' 6th', ' avenue of the americas')
if 'seventh' in address:
address = address.replace('seventh', '7th')
if 'fashion' in address:
address = address.replace('fashion', '7th')
if 'eighth' in address:
address = address.replace('eighth', '8th')
if 'ninth' in address:
address = address.replace('ninth', '9th')
if 'tenth' in address:
address = address.replace('tenth', '10th')
if 'eleventh' in address:
address = address.replace('eleventh', '11th')
zipcode = zipcode[3:]
to_write = str(address) + ", " + str(zipcode.lstrip()) + ", " + str(clean.latitude) + ", " + str(clean.longitude)
to_find = str(address)
#print to_write
# returns 'can not be cleaned' if street address has no numbers
if any(i.isdigit() for i in str(address)):
with open('/home/MY NAME/Address_Database.txt', 'a+') as database:
if to_find not in database.read():
database.write(dirty + '|' + to_write + '\n')
if 'ncy rd' in address:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
elif 'nye rd' in address:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
elif 'nye c' in address:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
else:
cleaned.append(to_write + '<br>')
success += 1
else:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
except AttributeError:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
except ValueError:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
except GeocoderTimedOut as e:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
total = success + fail
percent = float(success) / float(total) * 100
percent = round(percent, 2)
print percent
cleaned.append('<br>Accuracy: ' + str(percent) + ' %')
cleaned.append('</p></center></body>')
return "\n".join(cleaned)
</code></pre>
<p><strong>UPDATE:</strong> I have switched to running the application using gunicorn, and this solves the issue when I'm accessing the application from my home network; however, I am still receiving the TCP error through my work proxy. I am not getting any error message in my console; the browser just displays the TCP error. I can tell that the tool is still working in the background, because I have a print statement in the loop telling me that each address is still being geocoded. Could this be a case of my work network not tolerating that the page remains loading for a long period of time, and then just displaying the proxy error page?</p>
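As an aside on the cleaning code above (unrelated to the error): the long chain of `if ... in address: address = address.replace(...)` statements can be collapsed into a table-driven loop, which is much easier to maintain. A minimal sketch, using an illustrative subset of the replacement pairs (not the full list):

```python
# Drive the typo corrections from an ordered list of (bad, good) pairs
# instead of hundreds of if-statements. Pairs are applied in order, so
# longer / more specific patterns should come first.
REPLACEMENTS = [
    ('avenue of americas', 'avenue of the americas'),
    ('americas avenue', 'avenue of the americas'),
    ('sreet', 'street'),
    ('steet', 'street'),
    ('brodway', 'broadway'),
]

def apply_replacements(address, table=REPLACEMENTS):
    # The `in` check is redundant with str.replace, but kept to mirror
    # the original code's structure.
    for bad, good in table:
        if bad in address:
            address = address.replace(bad, good)
    return address
```

The full set of corrections then lives in one data structure, and adding a new typo fix is a one-line change.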
 | 12 | 2016-08-11T14:53:47Z | 38,946,605 | <p>I had a similar problem, and setting up a proper web server solved the issue. I used uWSGI with nginx.</p>
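For reference, a minimal sketch of the setup described — the app module name, port, and timeout are illustrative, not taken from the question:

```nginx
# Run the Flask app under a real WSGI server instead of app.run(), e.g.:
#   gunicorn --workers 4 --timeout 300 --bind 127.0.0.1:8000 myapp:app
# (uwsgi works the same way) and put nginx in front of it:
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_read_timeout 300s;   # allow long-running geocode requests
    }
}
```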
| 1 | 2016-08-14T21:01:41Z | [
"python",
"flask",
"geopy"
] |
Process request thread error with Flask Application? | 38,899,790 | <p>This might be a long shot, but here's the error that I'm getting:</p>
<pre><code> File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 596, in process_request_thread
self.finish_request(request, client_address)
File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 331, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 654, in __init__
self.finish()
File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 713, in finish
self.wfile.close()
File "/home/MY NAME/anaconda/lib/python2.7/socket.py", line 283, in close
self.flush()
File "/home/MY NAME/anaconda/lib/python2.7/socket.py", line 307, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
</code></pre>
<p>I have built a <code>Flask</code> application that takes addresses as input and performs some string formatting, manipulation, etc, then sends them to <code>Bing Maps</code> to geocode (through the <code>geopy</code> external module).</p>
<p>I'm using this application to clean very large data sets. The application works for inputs of usually ~1,500 addresses (inputted 1 per line). By that I mean that it will process the address and send it to <code>Bing Maps</code> to be geocoded and then returned. After around 1,500 addresses, the application becomes unresponsive. If this happens while I'm at work, my proxy tells me that there is a <code>tcp error</code>. If I'm on a non-work computer it just doesn't load the page. If I restart the application then it functions perfectly fine. Because of this I'm forced to run my program in batches of about 1,000 addresses (just to be safe, because I'm not sure yet of the exact number at which the program crashes).</p>
<p>Does anyone have any idea what might be causing it? </p>
<p>I was thinking something along the lines of me hitting my Bing API key limit for the day (which is 30,000), but that can't be accurate as I rarely use more than 15,000 requests per day.</p>
<p>My second thought was that maybe it's because i'm still using the standard flask server to run my application. Would switching to <code>gunicorn</code> or <code>uWSGI</code> solve this?</p>
<p>My third thought was maybe it was getting overloaded with the amount of requests. I tried to sleep the program for 15 seconds or so after the first 1,000 addresses but that didn't solve anything.</p>
<p>If anyone needs further clarification please let me know.</p>
<p>Here is my code for the backend of the Flask Application. I'm getting the input from this function:</p>
<pre><code>@app.route("/clean", methods=['POST'])
def dothing():
addresses = request.form['addresses']
return cleanAddress(addresses)
</code></pre>
<p>Here is the <code>cleanAddress</code> function: It's a bit cluttered right now, with all of the if statements to check for specific typos in the address, but I plan on moving a lot of this code into other functions in another file and just passing the address though those functions to clean it up a bit.</p>
<pre><code>def cleanAddress(addresses):
counter = 0
# nested helper function to fix addresses such as '30 w 60th'
def check_st(address):
if 'broadway' in address:
return address
has_th_st_nd_rd = re.compile(r'(?P<number>[\d]{1,4}(th|st|nd|rd)\s)(?P<following>.*)')
has_number = has_th_st_nd_rd.search(address)
if has_number is not None:
if re.match(r'(street|st|floor)', has_number.group('following')):
return address
else:
new_address = re.sub('(?P<number>[\d]{1,4}(st|nd|rd|th)\s)', r'\g<number>street ', address, 1)
return new_address
else:
return address
addresses = addresses.split('\n')
cleaned = []
success = 0
fail = 0
cleaned.append('<body bgcolor="#FACC2E"><center><img src="http://goglobal.dhl-usa.com/common/img/dhl-express-logo.png" alt="Smiley face" height="100" width="350"><br><p>')
cleaned.append('<br><h3>Note: Everything before the first comma is the Old Address. Everything after the first comma is the New Address</h13>')
cleaned.append('<p><h3>To format the output in Excel, split the columns using "," as the delimiter. </p></h3>')
cleaned.append('<p><h2><font color="red">Old Address </font> <font color="black">New Address </font></p></h2>')
for address in addresses:
dirty = address.strip()
if ',' in address:
dirty = dirty.replace(',', '')
cleaned.append('<font color="red">' + dirty + ', ' + '</font>')
address = address.lower()
address = re.sub('[^A-Za-z0-9#]+', ' ', address).lstrip()
pattern = r"\d+.* +(\d+ .*(" + "|".join(patterns) + "))"
address = re.sub(pattern, "\\1", address)
address = check_st(address)
if 'one ' in address:
address = address.replace('one', '1')
if 'two' in address:
address = address.replace('two', '2')
if 'three' in address:
address = address.replace('three', '3')
if 'four' in address:
address = address.replace('four', '4')
if 'five' in address:
address = address.replace('five', '5')
if 'eight' in address:
address = address.replace('eight', '8')
if 'nine' in address:
address = address.replace('nine', '9')
if 'fith' in address:
address = address.replace('fith', 'fifth')
if 'aveneu' in address:
address = address.replace('aveneu', 'avenue')
if 'united states of america' in address:
address = address.replace('united states of america', '')
if 'ave americas' in address:
address = address.replace('ave americas', 'avenue of the americas')
if 'americas avenue' in address:
address = address.replace('americas avenue', 'avenue of the americas')
if 'avenue of americas' in address:
address = address.replace('avenue of americas', 'avenue of the americas')
if 'avenue of america ' in address:
address = address.replace('avenue of america ', 'avenue of the americas ')
if 'ave of the americ' in address:
address = address.replace('ave of the americ', 'avenue of the americas')
if 'avenue america' in address:
address = address.replace('avenue america', 'avenue of the americas')
if 'americaz' in address:
address = address.replace('americaz', 'americas')
if 'ave of america' in address:
address = address.replace('ave of america', 'avenue of the americas')
if 'amrica' in address:
address = address.replace('amrica', 'americas')
if 'americans' in address:
address = address.replace('americans', 'americas')
if 'walk street' in address:
address = address.replace('walk street', 'wall street')
if 'northend' in address:
address = address.replace('northend', 'north end')
if 'inth' in address:
address = address.replace('inth', 'ninth')
if 'aprk' in address:
address = address.replace('aprk', 'park')
if 'eleven' in address:
address = address.replace('eleven', '11')
if ' av ' in address:
        address = address.replace(' av ', ' avenue ')
if 'avnue' in address:
address = address.replace('avnue', 'avenue')
if 'ofthe americas' in address:
address = address.replace('ofthe americas', 'of the americas')
if 'aj the' in address:
address = address.replace('aj the', 'of the')
if 'fifht' in address:
address = address.replace('fifht', 'fifth')
if 'w46' in address:
address = address.replace('w46', 'w 46')
if 'w42' in address:
address = address.replace('w42', 'w 42')
if '95st' in address:
address = address.replace('95st', '95th st')
if 'e61 st' in address:
address = address.replace('e61 st', 'e 61st')
if 'driver information' in address:
address = address.replace('driver information', '')
if 'e87' in address:
address = address.replace('e87', 'e 87')
if 'thrd avenus' in address:
address = address.replace('thrd avenus', 'third avenue')
if '3r ' in address:
address = address.replace('3r ', '3rd ')
if 'st ates' in address:
address = address.replace('st ates', '')
if 'east52nd' in address:
address = address.replace('east52nd', 'east 52nd')
if 'authority to leave' in address:
address = address.replace('authority to leave', '')
if 'sreet' in address:
address = address.replace('sreet', 'street')
if 'w47' in address:
address = address.replace('w47', 'w 47')
if 'signature required' in address:
address = address.replace('signature required', '')
if 'direct' in address:
address = address.replace('direct', '')
if 'streetapr' in address:
address = address.replace('streetapr', 'street')
if 'steet' in address:
address = address.replace('steet', 'street')
if 'w39' in address:
address = address.replace('w39', 'w 39')
if 'ave of new york' in address:
address = address.replace('ave of new york', 'avenue of the americas')
if 'avenue of new york' in address:
address = address.replace('avenue of new york', 'avenue of the americas')
if 'brodway' in address:
address = address.replace('brodway', 'broadway')
if 'w 31 ' in address:
        address = address.replace('w 31 ', 'w 31st ')
if 'w 34 ' in address:
address = address.replace('w 34 ', 'w 34th ')
if 'w38' in address:
address = address.replace('w38', 'w 38')
if 'broadeay' in address:
address = address.replace('broadeay', 'broadway')
if 'w37' in address:
address = address.replace('w37', 'w 37')
if '35street' in address:
address = address.replace('35street', '35th street')
if 'eighth avenue' in address:
address = address.replace('eighth avenue', '8th avenue')
if 'west 33' in address:
address = address.replace('west 33', 'west 33rd')
if '34t ' in address:
address = address.replace('34t ', '34th ')
if 'street ave' in address:
address = address.replace('street ave', 'ave')
if 'avenue of york' in address:
address = address.replace('avenue of york', 'avenue of the americas')
if 'avenue aj new york' in address:
address = address.replace('avenue aj new york', 'avenue of the americas')
if 'avenue ofthe new york' in address:
address = address.replace('avenue ofthe new york', 'avenue of the americas')
if 'e4' in address:
address = address.replace('e4', 'e 4')
if 'avenue of nueva york' in address:
address = address.replace('avenue of nueva york', 'avenue of the americas')
if 'avenue of new york' in address:
address = address.replace('avenue of new york', 'avenue of the americas')
if 'west end new york' in address:
address = address.replace('west end new york', 'west end avenue')
#print address
address = address.split(' ')
for pattern in patterns:
try:
if address[0].isdigit():
continue
else:
location = address.index(pattern) + 1
number_location = address[location]
#print address[location]
#if 'th' in address[location + 1] or 'floor' in address[location + 1] or '#' in address[location]:
# continue
except (ValueError, IndexError):
continue
if number_location.isdigit() and len(number_location) <= 4:
address = [number_location] + address[:location] + address[location+1:]
break
address = ' '.join(address)
if '#' in address:
address = address.replace('#', '')
#print (address)
i = 0
for char in address:
if char.isdigit():
address = address[i:]
break
i += 1
#print (address)
if 'plz' in address:
address = address.replace('plz', 'plaza ', 1)
if 'hstreet' in address:
address = address.replace('hstreet', 'h street')
if 'dstreet' in address:
address = address.replace('dstreet', 'd street')
if 'hst' in address:
address = address.replace('hst', 'h st')
if 'dst' in address:
address = address.replace('dst', 'd st')
if 'have' in address:
address = address.replace('have', 'h ave')
if 'dave' in address:
address = address.replace('dave', 'd ave')
if 'havenue' in address:
address = address.replace('havenue', 'h avenue')
if 'davenue' in address:
address = address.replace('davenue', 'd avenue')
#print address
regex = r'(.*)(' + '|'.join(patterns) + r')(.*)'
address = re.sub(regex, r'\1\2', address).lstrip() + " nyc"
print (address)
if 'americasas st' in address:
address = address.replace('americasas st', 'americas')
try:
clean = geolocator.geocode(address)
x = clean.address
address, city, zipcode, country = x.split(",")
address = address.lower()
if 'first' in address:
address = address.replace('first', '1st')
if 'second' in address:
address = address.replace('second', '2nd')
if 'third' in address:
address = address.replace('third', '3rd')
if 'fourth' in address:
address = address.replace('fourth', '4th')
if 'fifth' in address:
address = address.replace('fifth', '5th')
if ' sixth a' in address:
address = address.replace('ave', '')
address = address.replace('avenue', '')
address = address.replace(' sixth', ' avenue of the americas')
if ' 6th a' in address:
address = address.replace('ave', '')
address = address.replace('avenue', '')
address = address.replace(' 6th', ' avenue of the americas')
if 'seventh' in address:
address = address.replace('seventh', '7th')
if 'fashion' in address:
address = address.replace('fashion', '7th')
if 'eighth' in address:
address = address.replace('eighth', '8th')
if 'ninth' in address:
address = address.replace('ninth', '9th')
if 'tenth' in address:
address = address.replace('tenth', '10th')
if 'eleventh' in address:
address = address.replace('eleventh', '11th')
zipcode = zipcode[3:]
to_write = str(address) + ", " + str(zipcode.lstrip()) + ", " + str(clean.latitude) + ", " + str(clean.longitude)
to_find = str(address)
#print to_write
# returns 'can not be cleaned' if street address has no numbers
if any(i.isdigit() for i in str(address)):
with open('/home/MY NAME/Address_Database.txt', 'a+') as database:
if to_find not in database.read():
database.write(dirty + '|' + to_write + '\n')
if 'ncy rd' in address:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
elif 'nye rd' in address:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
elif 'nye c' in address:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
else:
cleaned.append(to_write + '<br>')
success += 1
else:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
except AttributeError:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
except ValueError:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
except GeocoderTimedOut as e:
cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
fail += 1
total = success + fail
percent = float(success) / float(total) * 100
percent = round(percent, 2)
print percent
cleaned.append('<br>Accuracy: ' + str(percent) + ' %')
cleaned.append('</p></center></body>')
return "\n".join(cleaned)
</code></pre>
<p><strong>UPDATE:</strong> I have switched to running the application using gunicorn, and this solves the issue when I'm accessing the application from my home network; however, I am still receiving the TCP error through my work proxy. I am not getting any error message in my console; the browser just displays the TCP error. I can tell that the tool is still working in the background, because I have a print statement in the loop telling me that each address is still being geocoded. Could this be a case of my work network not tolerating that the page remains loading for a long period of time, and then just displaying the proxy error page?</p>
 | 12 | 2016-08-11T14:53:47Z | 38,977,386 | <p>Sounds like it is running out of file handles (the default limit is 1024 for regular users), which you can check by running <code>grep 'open' /proc/<pid>/limits</code> for the limit and <code>ls -1 /proc/<pid>/fd | wc -l</code> for the number of currently open file handles.</p>
<p>I think your code is not sending a correct response, which is causing the connections to remain open and eventually exhausting the available file handles (an open socket is a file on POSIX systems).</p>
<p>You can confirm what state the connections are in with <code>netstat -an | grep <webapp port></code> when you see the issue. It should show a list of 1k+ IPs and ports and their state.</p>
<p>My guess is they are in the <code>TIME_WAIT</code> state, which indicates the client is not closing the connection correctly and it is left to the kernel to garbage-collect them later.</p>
<p>Try:</p>
<pre><code>from flask import make_response
@app.route("/clean", methods=['POST'])
def dothing():
addresses = request.form['addresses']
resp = make_response(cleanAddress(addresses), 200)
return resp
</code></pre>
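If you want to check the descriptor limit from inside the process itself rather than via <code>/proc</code>, the standard-library <code>resource</code> module exposes it. A rough sketch (Unix only):

```python
import resource

def fd_limit():
    # Returns the (soft, hard) limit on open file descriptors for this
    # process -- the soft limit is the one the process actually hits.
    return resource.getrlimit(resource.RLIMIT_NOFILE)

soft, hard = fd_limit()
print('soft limit:', soft, 'hard limit:', hard)
```

Logging this at startup, together with a periodic count of entries in <code>/proc/self/fd</code>, makes it easy to see whether the app is leaking descriptors over time.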
| 3 | 2016-08-16T14:18:44Z | [
"python",
"flask",
"geopy"
] |
Browsermob HTTPS requests with Auth Headers | 38,900,002 | <p>I'm trying to use Browsermob proxy to capture information about requests.
I have it working for HTTP requests, and for HTTPS requests that don't have an authorization header.</p>
<pre><code> if(product == 'Product'):
headers = {
'Authorization': 'Bearer %s' % accessToken()
}
http_proxy = "http://localhost:" + str(self.proxy.port)
https_proxy = "https://localhost:" + str(self.proxy.port)
ftp_proxy = "ftp://localhost:" + str(self.proxy.port)
proxyDict = {
"http" : http_proxy,
"ftp" : ftp_proxy,
"https" : https_proxy
}
fullurl = baseurl
fullurl += '/'
fullurl += baseuri
logger.console("fullurl: '%s'" % fullurl)
response = requests.request(method,fullurl,proxies=proxyDict,verify=False,data=payload,headers=headers)
</code></pre>
<p>The request goes through if I remove 'proxies=proxyDict'. The request seems to time out when I go through the proxy, though, and I get the below exception:</p>
<pre><code>Exception: Error creating SSLEngine for connection to client to impersonate upstream host: null
at net.lightbody.bmp.mitm.manager.ImpersonatingMitmManager.clientSslEngineFor(ImpersonatingMitmManager.java:227) ~[browsermob-dist-2.1.0.jar:?]
at org.littleshoot.proxy.impl.ProxyToServerConnection$3.execute(ProxyToServerConnection.java:739) ~[browsermob-dist-2.1.0.jar:?]
at org.littleshoot.proxy.impl.ConnectionFlow.doProcessCurrentStep(ConnectionFlow.java:140) ~[browsermob-dist-2.1.0.jar:?]
at org.littleshoot.proxy.impl.ConnectionFlow.processCurrentStep(ConnectionFlow.java:128) ~[browsermob-dist-2.1.0.jar:?]
at org.littleshoot.proxy.impl.ConnectionFlow.advance(ConnectionFlow.java:90) ~[browsermob-dist-2.1.0.jar:?]
at org.littleshoot.proxy.impl.ConnectionFlowStep.onSuccess(ConnectionFlowStep.java:83) ~[browsermob-dist-2.1.0.jar:?]
at org.littleshoot.proxy.impl.ConnectionFlow$2.operationComplete(ConnectionFlow.java:149) ~[browsermob-dist-2.1.0.jar:?]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:683) [browsermob-dist-2.1.0.jar:?]
at io.netty.util.concurrent.DefaultPromise.notifyLateListener(DefaultPromise.java:624) [browsermob-dist-2.1.0.jar:?]
at io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:139) [browsermob-dist-2.1.0.jar:?]
at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:93) [browsermob-dist-2.1.0.jar:?]
at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:28) [browsermob-dist-2.1.0.jar:?]
at org.littleshoot.proxy.impl.ConnectionFlow.doProcessCurrentStep(ConnectionFlow.java:140) [browsermob-dist-2.1.0.jar:?]
at org.littleshoot.proxy.impl.ConnectionFlow.access$000(ConnectionFlow.java:14) [browsermob-dist-2.1.0.jar:?]
at org.littleshoot.proxy.impl.ConnectionFlow$1.run(ConnectionFlow.java:124) [browsermob-dist-2.1.0.jar:?]
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) [browsermob-dist-2.1.0.jar:?]
at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:73) [browsermob-dist-2.1.0.jar:?]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358) [browsermob-dist-2.1.0.jar:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:374) [browsermob-dist-2.1.0.jar:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112) [browsermob-dist-2.1.0.jar:?]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_91]
Caused by: java.lang.NullPointerException
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:212) ~[browsermob-dist-2.1.0.jar:?]
at com.google.common.cache.LocalCache.get(LocalCache.java:3952) ~[browsermob-dist-2.1.0.jar:?]
at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4790) ~[browsermob-dist-2.1.0.jar:?]
at net.lightbody.bmp.mitm.manager.ImpersonatingMitmManager.getHostnameImpersonatingSslContext(ImpersonatingMitmManager.java:242) ~[browsermob-dist-2.1.0.jar:?]
at net.lightbody.bmp.mitm.manager.ImpersonatingMitmManager.clientSslEngineFor(ImpersonatingMitmManager.java:223) ~[browsermob-dist-2.1.0.jar:?]
... 20 more
</code></pre>
<p>Does browsermob handle authorization like this or am I just doing it wrong? Is there another proxy solution that might work better for this? I'm looking to use the har results.</p>
| 1 | 2016-08-11T15:04:10Z | 39,539,680 | <p>From the error log, I'm guessing it's one of two problems:</p>
<ol>
<li><p>a bug in browsermob. Try upgrading to a newer version</p></li>
<li><p>a mismatch between the domain browsermob thinks it's running against, and the domain you're connecting to. try switching to your explicit ip as the hostname - ie: "127.0.0.1" instead of "localhost" </p></li>
</ol>
<p>This doesn't appear to be related to authorization headers.</p>
| 0 | 2016-09-16T20:31:41Z | [
"python",
"proxy",
"request",
"browsermob",
"browsermob-proxy"
] |
Python Writing lists to csv | 38,900,013 | <p>How can I write the results that are in the print statement to a CSV file, so that every time the loop runs the result is added to the next row? The result contains lists.</p>
<pre><code>def func_high(i):
swing_low = data_file.iloc[i].Low
date = data_file.iloc[i].Date
while i < len(data_file):
i = i + 1
try :
swing_high = data_file.iloc[i].High
if (swing_high < data_file.iloc[i + 1].High) and (data_file.iloc[i + 1].High > data_file.iloc[i + 2].High):
date_diff = data_file.iloc[i + 1].Date - date
price_diff = data_file.iloc[i + 1].High - swing_low
**print (price_diff/date_diff.days, date, swing_low , data_file.iloc[i + 1].Date, data_file.iloc[i + 1].High, date_diff)**
except IndexError:
pass
continue
</code></pre>
| 1 | 2016-08-11T15:04:48Z | 38,900,096 | <p>If the result is a pandas <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#writing-to-csv-format" rel="nofollow">Series or DataFrame</a> you should just be using the <code>to_csv</code> method.</p>
| 0 | 2016-08-11T15:08:52Z | [
"python",
"csv",
"pandas"
] |
Python Writing lists to csv | 38,900,013 | <p>How can I write the results that are in the print statement to a CSV file, so that every time the loop runs the result is added to the next row? The result contains lists.</p>
<pre><code>def func_high(i):
swing_low = data_file.iloc[i].Low
date = data_file.iloc[i].Date
while i < len(data_file):
i = i + 1
try :
swing_high = data_file.iloc[i].High
if (swing_high < data_file.iloc[i + 1].High) and (data_file.iloc[i + 1].High > data_file.iloc[i + 2].High):
date_diff = data_file.iloc[i + 1].Date - date
price_diff = data_file.iloc[i + 1].High - swing_low
**print (price_diff/date_diff.days, date, swing_low , data_file.iloc[i + 1].Date, data_file.iloc[i + 1].High, date_diff)**
except IndexError:
pass
continue
</code></pre>
 | 1 | 2016-08-11T15:04:48Z | 38,900,780 | <p>The easiest way is to use pandas.</p>
<pre><code>import pandas as pd
mylist = [1, 2, 3, 4]
df = pd.Series(mylist)
df.to_csv('filename.csv')
</code></pre>
<p>You have to turn the list into a Series/dataframe.</p>
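If the goal is to append one row per loop iteration rather than write everything at once, `to_csv` accepts a file mode. A sketch with made-up column names and values (not the question's actual data):

```python
import pandas as pd

# Stand-in results, one tuple per loop iteration.
rows = [(0.5, '2016-01-04', 101.2),
        (0.7, '2016-01-11', 103.9)]

for i, row in enumerate(rows):
    df = pd.DataFrame([row], columns=['avg_change', 'date', 'high'])
    # Write the header (and truncate the file) only on the first pass,
    # then append subsequent rows.
    df.to_csv('results.csv', mode='w' if i == 0 else 'a',
              header=(i == 0), index=False)
```

Collecting all rows in a list and calling `to_csv` once at the end is usually faster, but appending per iteration is convenient when the loop is long-running.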
| 0 | 2016-08-11T15:39:53Z | [
"python",
"csv",
"pandas"
] |
Python Writing lists to csv | 38,900,013 | <p>How can I write the results that are in the print statement to a CSV file, so that every time the loop runs the result is added to the next row? The result contains lists.</p>
<pre><code>def func_high(i):
swing_low = data_file.iloc[i].Low
date = data_file.iloc[i].Date
while i < len(data_file):
i = i + 1
try :
swing_high = data_file.iloc[i].High
if (swing_high < data_file.iloc[i + 1].High) and (data_file.iloc[i + 1].High > data_file.iloc[i + 2].High):
date_diff = data_file.iloc[i + 1].Date - date
price_diff = data_file.iloc[i + 1].High - swing_low
**print (price_diff/date_diff.days, date, swing_low , data_file.iloc[i + 1].Date, data_file.iloc[i + 1].High, date_diff)**
except IndexError:
pass
continue
</code></pre>
 | 1 | 2016-08-11T15:04:48Z | 38,903,367 | <p>You could change <code>print</code> to <code>yield</code> and use a <code>csv.writer</code> (I made a few other edits to your function so I could follow the logic more easily).</p>
<pre><code>import csv
i = 0
def func_high(i):
swing_low = data_file.iloc[i].Low
date = data_file.iloc[i].Date
while i < (len(data_file) - 3):
i += 1
swing_high = data_file.iloc[i].High
curr_date = data_file.iloc[i + 1].Date
curr_high = data_file.iloc[i + 1].High
next_high = data_file.iloc[i + 2].High
if (swing_high < curr_high) and (curr_high > next_high):
date_diff = curr_date - date
price_diff = curr_high - swing_low
avg_change = price_diff/date_diff.days
yield [avg_change, date, swing_low, curr_date, curr_high, date_diff]
with open('my_csv.csv', 'w') as out_f:
writer = csv.writer(out_f, lineterminator='\n')
writer.writerows(func_high(i))
</code></pre>
<p>If you want to go the pandas route, you could do the following using the generator above.</p>
<p><code>pd.DataFrame(list(func_high(i))).to_csv('my_csv.csv', index=False, header=False)</code></p>
| 1 | 2016-08-11T18:09:58Z | [
"python",
"csv",
"pandas"
] |
Python Writing lists to csv | 38,900,013 | <p>How can I write the results that are in the print statement to a csv file, so that every time the loop runs the result is added as a new row? The result contains lists.</p>
<pre><code>def func_high(i):
swing_low = data_file.iloc[i].Low
date = data_file.iloc[i].Date
while i < len(data_file):
i = i + 1
try :
swing_high = data_file.iloc[i].High
if (swing_high < data_file.iloc[i + 1].High) and (data_file.iloc[i + 1].High > data_file.iloc[i + 2].High):
date_diff = data_file.iloc[i + 1].Date - date
price_diff = data_file.iloc[i + 1].High - swing_low
**print (price_diff/date_diff.days, date, swing_low , data_file.iloc[i + 1].Date, data_file.iloc[i + 1].High, date_diff)**
except IndexError:
pass
continue
</code></pre>
| 1 | 2016-08-11T15:04:48Z | 38,921,934 | <p>If you're using a pandas data frame, see one of the other answers that deals with that. If not, using a CSV writer can be good too. However, for completeness, I'll point out that sometimes the simplest approach is to just write the data directly. Not knowing what goes into the data structures in your sample code, I'm providing a simple example program:</p>
<pre><code>import sys
def write_csv_row(f, row_data):
out_str = ','.join([str(e) for e in row_data])
f.write(out_str + '\n')
def main():
fname = 'test.dat'
data = [["Column 1", "Column 2"], [1,1], [2,2]]
with open(fname, 'w') as out_file:
for row in data:
write_csv_row(out_file, row)
if __name__ == '__main__':
sys.exit(main())
</code></pre>
| 0 | 2016-08-12T15:49:17Z | [
"python",
"csv",
"pandas"
] |
Two implementations of a module with a same name-import problems | 38,900,099 | <p>I use two libraries built on top of caffe: crf-rnn(<a href="https://github.com/torrvision/crfasrnn/tree/master/python-scripts" rel="nofollow">https://github.com/torrvision/crfasrnn/tree/master/python-scripts</a>) and hed(<a href="https://github.com/s9xie/hed/blob/master/examples/hed/" rel="nofollow">https://github.com/s9xie/hed/blob/master/examples/hed/</a>), the former for semantic image segmentation, the latter for contour detection. Finally, I realized how to get them to work together for object tracking, but now I face an embarrassing problem: as both are built on top of caffe, they import the same package, but each with very different content, i.e. crf-rnn uses caffe.Segmenter which hed doesn't have and hed uses caffe.TEST which crf-rnn doesn't have. </p>
<p>Python doesn't allow import of two packages with the same name. I've tried finding a workaround by putting hed in a separate Python file and importing it in the main script, and using <code>as</code> to <code>import caffe as cf</code> for one of the packages, but so far nothing has worked out. </p>
<p>Any suggestions? </p>
<p>EDIT: this is a file called <code>Aux.py</code></p>
<pre><code>def import_hed_caffe():
import sys,os
caffe_dir = '/home/alex/Downloads/hed/python'
sys.path.insert(0,caffe_dir)
hed_model = 'deploy.prototxt'
hed_pretrained = 'hed_pretrained_bsds.caffemodel'
import caffe as cf
net = cf.Net(hed_model, hed_pretrained, cf.TEST)
return net
</code></pre>
<p>This is the main script:</p>
<pre><code>caffe_root = '../caffe-crfrnn/'
sys.path.insert(0, caffe_root + 'python')
import caffe as espresso
import AuxScript
net = espresso.Segmenter(MODEL_FILE, PRETRAINED, gpu=False)
a=AuxScript.import_hed_caffe()
</code></pre>
<p>and I get </p>
<pre><code>AttributeError: 'module' object has no attribute 'TEST'
</code></pre>
<p>Needless to say, separately everything works fine, so it's just the import</p>
<p>EDIT 2: </p>
<p>./CMakeFiles</p>
<p>./CMakeFiles/pycaffe.dir</p>
<p>./CMakeFiles/pycaffe.dir/caffe</p>
<p>./caffe</p>
<p>./caffe/imagenet</p>
<p>./caffe/proto</p>
<p>./caffe/test</p>
<p>EDIT 3:</p>
<pre><code>├── caffe
│   ├── _caffe.cpp
│   ├── _caffe.so -> /home/alex/Downloads/hed/lib/_caffe.so
│   ├── classifier.py
│   ├── classifier.pyc
│   ├── detector.py
│   ├── detector.pyc
│   ├── draw.py
│   ├── imagenet
│   │   └── ilsvrc_2012_mean.npy
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── io.py
│   ├── io.pyc
│   ├── net_spec.py
│   ├── net_spec.pyc
│   ├── proto
│   │   ├── caffe_pb2.py
│   │   └── __init__.py
│   ├── pycaffe.py
│   ├── pycaffe.pyc
│   └── test
│       ├── test_layer_type_list.py
│       ├── test_net.py
│       ├── test_net_spec.py
│       ├── test_python_layer.py
│       ├── test_python_layer_with_param_str.py
│       └── test_solver.py
├── classify.py
├── CMakeFiles
│   ├── CMakeDirectoryInformation.cmake
│   ├── progress.marks
│   └── pycaffe.dir
│       ├── build.make
│       ├── caffe
│       │   └── _caffe.cpp.o
│       ├── cmake_clean.cmake
│       ├── CXX.includecache
│       ├── DependInfo.cmake
│       ├── depend.internal
│       ├── depend.make
│       ├── flags.make
│       ├── link.txt
│       └── progress.make
├── cmake_install.cmake
├── CMakeLists.txt
├── detect.py
├── draw_net.py
├── Makefile
└── requirements.txt
</code></pre>
| 1 | 2016-08-11T15:08:58Z | 38,900,215 | <p>I have seen your last edit, and I must say that changing/tampering with python <code>sys.path</code> is necessary in your context but not sufficient here: you have to rename one of the <code>caffe</code> packages.</p>
<p>Ex: if the <code>caffe</code> package is a directory called <code>caffe</code> containing a <code>__init__.py</code> file, rename <code>caffe</code> to <code>espresso</code> and in your code simply:</p>
<pre><code>import espresso
</code></pre>
<p>(if it's just a <code>caffe.py</code> file, rename to <code>espresso.py</code> although it may be more problematic if there are other modules in the same directory, well worth a try)</p>
<p>BTW: When importing a module, say, <code>xxx</code>, you can know which full filepath it is using by typing:</p>
<pre><code>print(xxx.__file__)
</code></pre>
<p>(useful when you have a doubt)</p>
| 1 | 2016-08-11T15:14:23Z | [
"python",
"import",
"package",
"caffe"
] |
Two implementations of a module with a same name-import problems | 38,900,099 | <p>I use two libraries built on top of caffe: crf-rnn(<a href="https://github.com/torrvision/crfasrnn/tree/master/python-scripts" rel="nofollow">https://github.com/torrvision/crfasrnn/tree/master/python-scripts</a>) and hed(<a href="https://github.com/s9xie/hed/blob/master/examples/hed/" rel="nofollow">https://github.com/s9xie/hed/blob/master/examples/hed/</a>), the former for semantic image segmentation, the latter for contour detection. Finally, I realized how to get them to work together for object tracking, but now I face an embarrassing problem: as both are built on top of caffe, they import the same package, but each with very different content, i.e. crf-rnn uses caffe.Segmenter which hed doesn't have and hed uses caffe.TEST which crf-rnn doesn't have. </p>
<p>Python doesn't allow import of two packages with the same name. I've tried finding a workaround by putting hed in a separate Python file and importing it in the main script, and using <code>as</code> to <code>import caffe as cf</code> for one of the packages, but so far nothing has worked out. </p>
<p>Any suggestions? </p>
<p>EDIT: this is a file called <code>Aux.py</code></p>
<pre><code>def import_hed_caffe():
import sys,os
caffe_dir = '/home/alex/Downloads/hed/python'
sys.path.insert(0,caffe_dir)
hed_model = 'deploy.prototxt'
hed_pretrained = 'hed_pretrained_bsds.caffemodel'
import caffe as cf
net = cf.Net(hed_model, hed_pretrained, cf.TEST)
return net
</code></pre>
<p>This is the main script:</p>
<pre><code>caffe_root = '../caffe-crfrnn/'
sys.path.insert(0, caffe_root + 'python')
import caffe as espresso
import AuxScript
net = espresso.Segmenter(MODEL_FILE, PRETRAINED, gpu=False)
a=AuxScript.import_hed_caffe()
</code></pre>
<p>and I get </p>
<pre><code>AttributeError: 'module' object has no attribute 'TEST'
</code></pre>
<p>Needless to say, separately everything works fine, so it's just the import</p>
<p>EDIT 2: </p>
<p>./CMakeFiles</p>
<p>./CMakeFiles/pycaffe.dir</p>
<p>./CMakeFiles/pycaffe.dir/caffe</p>
<p>./caffe</p>
<p>./caffe/imagenet</p>
<p>./caffe/proto</p>
<p>./caffe/test</p>
<p>EDIT 3:</p>
<pre><code>├── caffe
│   ├── _caffe.cpp
│   ├── _caffe.so -> /home/alex/Downloads/hed/lib/_caffe.so
│   ├── classifier.py
│   ├── classifier.pyc
│   ├── detector.py
│   ├── detector.pyc
│   ├── draw.py
│   ├── imagenet
│   │   └── ilsvrc_2012_mean.npy
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── io.py
│   ├── io.pyc
│   ├── net_spec.py
│   ├── net_spec.pyc
│   ├── proto
│   │   ├── caffe_pb2.py
│   │   └── __init__.py
│   ├── pycaffe.py
│   ├── pycaffe.pyc
│   └── test
│       ├── test_layer_type_list.py
│       ├── test_net.py
│       ├── test_net_spec.py
│       ├── test_python_layer.py
│       ├── test_python_layer_with_param_str.py
│       └── test_solver.py
├── classify.py
├── CMakeFiles
│   ├── CMakeDirectoryInformation.cmake
│   ├── progress.marks
│   └── pycaffe.dir
│       ├── build.make
│       ├── caffe
│       │   └── _caffe.cpp.o
│       ├── cmake_clean.cmake
│       ├── CXX.includecache
│       ├── DependInfo.cmake
│       ├── depend.internal
│       ├── depend.make
│       ├── flags.make
│       ├── link.txt
│       └── progress.make
├── cmake_install.cmake
├── CMakeLists.txt
├── detect.py
├── draw_net.py
├── Makefile
└── requirements.txt
</code></pre>
| 1 | 2016-08-11T15:08:58Z | 39,042,054 | <p>OK, so I found the least sophisticated solution possible: I wrote two scripts, one for crf-rnn producing the blobs that I ran on the full dataset just once and stored the output. </p>
<p>Then I wrote the second script, with hed edge detector that I use every time I detect and track objects.</p>
| 0 | 2016-08-19T14:53:27Z | [
"python",
"import",
"package",
"caffe"
] |
Scikit-learn Imputer Reducing Dimensions | 38,900,132 | <p>I have a dataframe with 332 columns. I want to impute values to be able to use scikit-learn's decision tree classifier. My problem is that the column of the resulting data from imputer function is only 330. </p>
<pre><code>from sklearn.preprocessing import Imputer
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
cols = data.columns
new = imp.fit_transform(data)
print(data.shape,new.shape)
(34132, 332) (34132, 330)
</code></pre>
| 0 | 2016-08-11T15:10:23Z | 38,904,266 | <p>According to the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Imputer.html" rel="nofollow">documentation of <code>sklearn.preprocessing.Imputer</code></a>:</p>
<blockquote>
<p>When axis=0, columns which only contained missing values at fit are discarded upon transform.</p>
</blockquote>
<p>So, this is removing all-missing-value columns.</p>
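<p>If you need to know in advance which columns will be discarded, you can check for all-NaN columns yourself before imputing. A sketch with plain NumPy — mapping the surviving indices back to your <code>data.columns</code> (e.g. <code>cols[kept_idx]</code>) is an assumption about your DataFrame setup:</p>

```python
import numpy as np

data = np.array([[1.0, np.nan, 2.0],
                 [3.0, np.nan, np.nan]])

all_nan = np.isnan(data).all(axis=0)   # True where a column is entirely missing
kept_idx = np.flatnonzero(~all_nan)    # column indices that survive imputation

print(all_nan, kept_idx)
```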
| 0 | 2016-08-11T19:03:57Z | [
"python",
"scikit-learn"
] |
Tic Tac toe game keeps getting invalid syntax | 38,900,167 | <p>I was trying to make a tic tac toe game in python and invalid syntax keeps coming up on the line with <code>first.upper()</code> after the "1". I even moved the code around and it was still after "1". How do I fix my code?</p>
<pre><code>first == input("Would u like to go first? 1 is Yes 2 is No \n")
if first.upper() == "1"
place == input("You are X where do u want to place it? \n")
</code></pre>
| -1 | 2016-08-11T15:12:21Z | 38,900,220 | <p>In Python, whenever you use an <code>if</code> statement, you must end it with a colon (<code>:</code>) before going to the next line.
So it should be:</p>
<pre><code>if first.upper() == "1":
place == input("You are X where do u want to place it? \n")
</code></pre>
| 3 | 2016-08-11T15:14:41Z | [
"python"
] |
Tic Tac toe game keeps getting invalid syntax | 38,900,167 | <p>I was trying to make a tic tac toe game in python and invalid syntax keeps coming up on the line with <code>first.upper()</code> after the "1". I even moved the code around and it was still after "1". How do I fix my code?</p>
<pre><code>first == input("Would u like to go first? 1 is Yes 2 is No \n")
if first.upper() == "1"
place == input("You are X where do u want to place it? \n")
</code></pre>
| -1 | 2016-08-11T15:12:21Z | 38,900,230 | <pre><code>first = input("Would u like to go first? 1 is Yes 2 is No \n")
if first.upper() == "1":
place = input("You are X where do u want to place it? \n")
</code></pre>
<p>Try that</p>
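<p>The subtlety worth remembering here: in Python 3, <code>input()</code> always returns a string, so comparing it to the integer <code>1</code> is always <code>False</code>. A quick check:</p>

```python
first = "1"          # what input() returns when the user types 1

print(first == 1)    # False: a str never equals an int
print(first == "1")  # True
```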
| 0 | 2016-08-11T15:15:08Z | [
"python"
] |
Python recognizing one directory as module but not another? | 38,900,185 | <p>My project looks like this:</p>
<pre><code>W:\a\lot\of\stuff\Automation
|__ __init__.py
|__ A
| |_ __init__.py
| |_ aLotOfFiles.py
|
|__ B
| |_ __init__.py
| |_ aLotOfFiles.py
|
|__ C
|_ __init__.py
|_ myFile.py
</code></pre>
<p>where I'm working on <code>myFile.py</code>. In it I am using a lot of the files in modules <code>A</code> and <code>B</code>. When working with Pycharm everything worked perfectly well just by doing</p>
<pre><code>from A.someFile import someClass
from B.otherFile import otherClass
</code></pre>
<p>But then when I finished working on my code and started running it from other places, I started getting import errors being confused I tried the following in interactive python:</p>
<pre><code>>import sys
>sys.path.append('W:\\a\\lot\\of\\stuff\\')
>import Automation
# No import errors so far
>import Automation.A
# Still working fine..
>import Automation.B
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named B #Yeah, that's the module name..
</code></pre>
<p>Now I'm stumped. How can both modules work in Pycharm, be seemingly exactly the same, yet one is imported just fine and the other isn't in the same situation?</p>
<p>Any ideas of how to solve it/what could be causing it/what to check?</p>
| 0 | 2016-08-11T15:13:14Z | 38,901,265 | <p>Found the problem.</p>
<p>Apparently there was an older version of the same workspace somewhere else higher up in the path hierarchy, and it worked when I used modules which were also there, but not when I used new ones... </p>
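<p>One way to diagnose this kind of shadowing without actually importing anything is <code>importlib.util.find_spec</code>, which reports the file a module name would resolve to on the current <code>sys.path</code>. A sketch — here <code>json</code> stands in for the package being debugged:</p>

```python
import importlib.util

# Resolve which file would be imported for this name, without importing it.
# An unexpected path here reveals an older copy earlier on sys.path.
spec = importlib.util.find_spec("json")  # "json" stands in for e.g. "Automation.B"
print(spec.origin)
```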
| 0 | 2016-08-11T16:04:55Z | [
"python",
"python-2.7",
"python-import"
] |
Flattening a list with lists and items: List comprehension with nested for loops and if else condition | 38,900,235 | <p>I want to flatten a list of lists and items.
The below code works only for a list of lists</p>
<pre><code>def flatten(li):
return [item for sublist in li for item in sublist]
</code></pre>
<p>It would work on </p>
<pre><code>[[1],[2,3],[4]]
</code></pre>
<p>but not</p>
<pre><code>[[1],[2,3],[4],5]
</code></pre>
<p>I want a function that would do it for the above list - something like</p>
<pre><code>[item if isinstance(sublist, list) else sublist for sublist in li
for item in sublist]
</code></pre>
<p>The above code gives an error.</p>
<p>Trying it on </p>
<pre><code>[[1],[2,3],[4],5]
</code></pre>
<p>gives me </p>
<pre><code>TypeError: 'int' object is not iterable
</code></pre>
<p>Can anyone give me a list comprehension which doesn't?</p>
<p>EDIT:</p>
<p>I want a LIST COMPREHENSION only.</p>
| 1 | 2016-08-11T15:15:15Z | 38,900,498 | <pre><code>from itertools import chain
def flatten(li):
if isinstance(li, list):
li = [flatten(x) for x in li]
return list(chain(*li))
else:
return [li]
def flatten2(li):
return list(chain(*[[x] if not isinstance(x, list) else flatten2(x) for x in li]))
</code></pre>
<p>it even work on:</p>
<pre><code>[[1], [[2, 3], 4], [5], 6] -> [1, 2, 3, 4, 5, 6]
</code></pre>
<p>Else:</p>
<pre><code>def flatten3(li):
return [x for inner in [[item] if not isinstance(item, list) else item for item in li] for x in inner]
</code></pre>
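<p>A quick sanity check of the comprehension above on the list from the question — note it only flattens one level of nesting:</p>

```python
def flatten3(li):
    # wrap bare items in a one-element list, then flatten one level
    return [x for inner in [[item] if not isinstance(item, list) else item
                            for item in li] for x in inner]

print(flatten3([[1], [2, 3], [4], 5]))
```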
| 0 | 2016-08-11T15:26:56Z | [
"python",
"list"
] |
Flattening a list with lists and items: List comprehension with nested for loops and if else condition | 38,900,235 | <p>I want to flatten a list of lists and items.
The below code works only for a list of lists</p>
<pre><code>def flatten(li):
return [item for sublist in li for item in sublist]
</code></pre>
<p>It would work on </p>
<pre><code>[[1],[2,3],[4]]
</code></pre>
<p>but not</p>
<pre><code>[[1],[2,3],[4],5]
</code></pre>
<p>I want a function that would do it for the above list - something like</p>
<pre><code>[item if isinstance(sublist, list) else sublist for sublist in li
for item in sublist]
</code></pre>
<p>The above code gives an error.</p>
<p>Trying it on </p>
<pre><code>[[1],[2,3],[4],5]
</code></pre>
<p>gives me </p>
<pre><code>TypeError: 'int' object is not iterable
</code></pre>
<p>Can anyone give me a list comprehension which doesn't?</p>
<p>EDIT:</p>
<p>I want a LIST COMPREHENSION only.</p>
| 1 | 2016-08-11T15:15:15Z | 38,900,676 | <pre><code>def flatten(li):
return [ x for x in li if isinstance(x,int) ] + [item for sublist in li if isinstance(sublist,list) for item in sublist]
</code></pre>
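<p>One caveat worth noting about this version: it collects the bare integers first and the sublist items second, so the original left-to-right order is not preserved:</p>

```python
def flatten(li):
    return [x for x in li if isinstance(x, int)] + \
           [item for sublist in li if isinstance(sublist, list) for item in sublist]

print(flatten([[1], [2, 3], [4], 5]))  # the bare item 5 comes out first
```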
| 1 | 2016-08-11T15:35:01Z | [
"python",
"list"
] |
Flattening a list with lists and items: List comprehension with nested for loops and if else condition | 38,900,235 | <p>I want to flatten a list of lists and items.
The below code works only for a list of lists</p>
<pre><code>def flatten(li):
return [item for sublist in li for item in sublist]
</code></pre>
<p>It would work on </p>
<pre><code>[[1],[2,3],[4]]
</code></pre>
<p>but not</p>
<pre><code>[[1],[2,3],[4],5]
</code></pre>
<p>I want a function that would do it for the above list - something like</p>
<pre><code>[item if isinstance(sublist, list) else sublist for sublist in li
for item in sublist]
</code></pre>
<p>The above code gives an error.</p>
<p>Trying it on </p>
<pre><code>[[1],[2,3],[4],5]
</code></pre>
<p>gives me </p>
<pre><code>TypeError: 'int' object is not iterable
</code></pre>
<p>Can anyone give me a list comprehension which doesn't?</p>
<p>EDIT:</p>
<p>I want a LIST COMPREHENSION only.</p>
| 1 | 2016-08-11T15:15:15Z | 38,901,454 | <p>As other users have suggested, using a recursive function to flatten the list works really well in this case. But here is a one line list comprehension (<em>albeit not the most efficient</em>).</p>
<pre><code>from collections import Iterable
l = [{},2,[41,123,'str'],32,(4324),3,1,(212,3213),[2]]
print [num for subitem in (subquery if isinstance(subquery, Iterable) else [subquery] for subquery in l) for num in subitem]
>>> [2, 41, 123, 'str', 32, 4324, 3, 1, 212, 3213, 2]
</code></pre>
| 2 | 2016-08-11T16:14:28Z | [
"python",
"list"
] |
Can't download mldata of sklearn | 38,900,246 | <pre><code>from sklearn.datasets import fetch_mldata
from sklearn.cluster import KMeans
from sklearn.utils import shuffle
X_digits, _,_, Y_digits = fetch_mldata("MNIST Original").values()
</code></pre>
<p>After running it, I get:</p>
<p><img src="http://i.stack.imgur.com/mGT5A.jpg" alt="Image description"></p>
| 0 | 2016-08-11T15:15:40Z | 38,934,955 | <p>I guess that the problem is temporary and related to mldata.org server. So try again later.</p>
| 0 | 2016-08-13T16:51:42Z | [
"python",
"machine-learning",
"scikit-learn"
] |
Can't download mldata of sklearn | 38,900,246 | <pre><code>from sklearn.datasets import fetch_mldata
from sklearn.cluster import KMeans
from sklearn.utils import shuffle
X_digits, _,_, Y_digits = fetch_mldata("MNIST Original").values()
</code></pre>
<p>After running it, I get:</p>
<p><img src="http://i.stack.imgur.com/mGT5A.jpg" alt="Image description"></p>
| 0 | 2016-08-11T15:15:40Z | 39,051,071 | <p>So it was a problem with the server.
The next morning I was able to download it...</p>
<p>Alternatively: after downloading it I found the data file is named <em>mnist-original.mat</em>, so you can go to github.com and search for that name to download the data file directly.</p>
| 0 | 2016-08-20T06:14:40Z | [
"python",
"machine-learning",
"scikit-learn"
] |
Removing the text only if it is between the two strings in a file using sed or python | 38,900,308 | <p>I've to remove text between to words in a file.</p>
<ol>
<li>Ex: if the two strings are hello, world and the string is <code>"hey!! hello beautiful world"</code> ,</li>
</ol>
<p>my expected output : <code>"hey!! "</code>.</p>
<p>I could achieve this using</p>
<pre><code>re.sub('\nhello.*?world','', i , flags=re.DOTALL)
</code></pre>
<p>or using python script.</p>
<ol start="2">
<li>But, if my string is <code>"hey!! hello, how are you? hello beautiful world, bye"</code>
my expected result is <code>"hey!! hello, how are you?, bye "</code></li>
</ol>
<p>How can i achieve this using sed or python.</p>
| -2 | 2016-08-11T15:18:35Z | 38,900,437 | <p>You can use this negative lookahead pattern in <code>python</code> (sed doesn't support look arounds):</p>
<pre><code>regex = re.compile(r'hello(?:(?!hello).)*?world', re.DOTALL)
str = re.sub(regex, '', str)
</code></pre>
<p><a href="https://regex101.com/r/rO9vV2/2" rel="nofollow">RegEx Demo</a></p>
<p><code>(?:(?!hello).)*?</code> is negative lookahead based pattern that will match 0 or more characters (non-greedy) if <code>hello</code> is not found anywhere in the match.</p>
| 0 | 2016-08-11T15:24:25Z | [
"python",
"bash",
"sed"
] |
Removing the text only if it is between the two strings in a file using sed or python | 38,900,308 | <p>I've to remove text between to words in a file.</p>
<ol>
<li>Ex: if the two strings are hello, world and the string is <code>"hey!! hello beautiful world"</code> ,</li>
</ol>
<p>my expected output : <code>"hey!! "</code>.</p>
<p>I could achieve this using</p>
<pre><code>re.sub('\nhello.*?world','', i , flags=re.DOTALL)
</code></pre>
<p>or using python script.</p>
<ol start="2">
<li>But, if my string is <code>"hey!! hello, how are you? hello beautiful world, bye"</code>
my expected result is <code>"hey!! hello, how are you?, bye "</code></li>
</ol>
<p>How can i achieve this using sed or python.</p>
| -2 | 2016-08-11T15:18:35Z | 38,906,660 | <p>sed is for simple subsitutions on individual lines, that is all. If you want to do it with a standard UNIX tool though then there's awk:</p>
<pre><code>$ cat file
"hey!! hello beautiful world"
"hey!! hello, how are you? hello beautiful world, bye"
$ awk -v RS='world' -v ORS= 'match($0,/(.*)hello/,a){$0=a[1]}1' file
"hey!! "
"hey!! hello, how are you? , bye"
</code></pre>
<p>The above uses GNU awk for multi-char RS and the 3rd arg to match().</p>
| 0 | 2016-08-11T21:49:26Z | [
"python",
"bash",
"sed"
] |
Python GetHostId.py getting sys.argv[1] IndexError: list index out of range | 38,900,341 | <p>i am working on a python script to be a multi tool for getting DNS information on servers in a enterprise env. so far the script i have is using python 3.5. i am using argparse for creating command line options, which i am trying to create an if/ elif / else statement which contains the different selections. the main error message i am getting is:</p>
<pre><code>./GetHostName.py
Traceback (most recent call last):
File "./GetHostName.py", line 34, in <module>
remoteServer = sys.argv[1]
IndexError: list index out of range
</code></pre>
<p>That is when the command is run by itself.
When it is run with a host name at the end (<code>./GetHostName.py hostName</code>)
it gives this message:</p>
<pre><code>GetHostName.py: error: unrecognized arguments: hostName
</code></pre>
<p>I didn't put the real server name in for security reasons.
When I use the argparse options, say the -f option for getting the FQDN, it gives this response...</p>
<pre><code>./GetHostName.py -f hostName
3
-f
</code></pre>
<p>From the way it appears, it is taking <code>-f</code> as the input for the server name, when it should only be an argparse option. I have tried everything I can think of to fix it: I encased the main code body in a main function, which didn't work, so I removed it; I used try/except statements, and that didn't work either. I am wondering if there is something basically wrong with my programming logic at this point...</p>
<p>this here is the code from the script:</p>
<pre><code>#!C:\Bin\Python35\python.exe
#
# import libraries
import sys, os
import argparse as ap
import socket
# Command Line interface setup
def argParse():
#Command Line arg parse
parser=ap.ArgumentParser(description='A tool to get a remote servers DNS information.')
parser.add_argument("-a", "--address", default="fqdn", help="Gets IP address from host name.")
parser.add_argument("-f", "--fqdn", default="fqdn", help="Gets the FQDN address of server.")
parser.add_argument("-d", "--addrinfo", default="fqdn", help="Gets the FQDN address of server.")
parser.add_argument("-l", "--local", default="fqdn", help="Gets info on local host.")
parser.add_argument("-Pr", "--proto", default="fqdn", help="Translate an Internet protocol name to a constant suitable for passing as the (optional) third argument to the socket() function.")
parser.add_argument("-n", "--nameinfo", default="fqdn", help="Gets name and port on remote host.")
parser.add_argument("-Sn", "--servbyname", default="fqdn", help="Translate an Internet service name and protocol name to a port number for that service.")
parser.add_argument("-Sp", "--servbyport", default="fqdn", help="Translate an Internet port number and protocol name to a service name for that service.")
parser.add_argument("-t", "--timeout", default="fqdn", help="Return the default timeout in seconds for new socket objects.")
parser.add_argument("-v", "--verbose", default="fqdn", help="Increase output verbosity")
return parser.parse_args()
#remoteServer = input().strip().split()
args=argParse()
if args.fqdn:
remoteServer = sys.argv[1]
print (len(sys.argv))
remoteServerIP = socket.getfqdn(remoteServer)
print (remoteServerIP)
elif args.address:
remoteServer = sys.argv[2]
print (len(sys.argv))
remoteServerIP = socket.gethostbyname(remoteServer)
print (remoteServerIP)
elif args.addrinfo:
remoteServer = sys.argv[3]
print (len(sys.argv))
remoteServerIP = socket.getaddrinfo(remoteServer)
print (remoteServerIP)
elif args.local:
remoteServer = sys.argv[4]
print (len(sys.argv))
remoteServerIP = socket.gethostname()
print (remoteServerIP)
elif args.proto:
remoteServer = sys.argv[5]
print (len(sys.argv))
remoteServerIP = socket.getprotobyname(remoteServer)
print (remoteServerIP)
elif args.servbyname:
remoteServer = sys.argv[6]
print (len(sys.argv))
remoteServerIP = socket.getservbyname(remoteServer)
print (remoteServerIP)
elif args.servbyport:
remoteServer = sys.argv[7]
print (len(sys.argv))
remoteServerIP = socket.getservbyport(remoteServer)
print (remoteServerIP)
elif args.timeout:
remoteServer = sys.argv[8]
print (len(sys.argv))
remoteServerIP = socket.getdefaulttimeout(remoteServer)
print (remoteServerIP)
elif args.verbose:
remoteServer = sys.argv[9]
print (len(sys.argv))
remoteServerIP = socket.gethostbyaddr(remoteServer)
print (remoteServerIP)
else:
args.nameinfo
remoteServer = sys.argv[10]
print (len(sys.argv))
remoteServerIP = socket.getnameinfo(remoteServer)
print (remoteServerIP)
</code></pre>
<p>any help would be appreciated. please note that when i run a script with just this in it, it works just fine:</p>
<pre><code>#!C:\Bin\Python35\python.exe
#
import sys, os
import argparse
import socket
# Command Line interface setup
def main():
remoteServer = sys.argv[1]
remoteServerIP = socket.gethostbyaddr(remoteServer)
print (remoteServerIP)
if __name__ == '__main__':
main()
</code></pre>
<p>Thanks in advance.
-Betzelel</p>
<p>P.S. the code may look out of format, due to having to copy and paste into this blog, and manually putting 4 spaces on to each line to get it to show up as code lol. </p>
| 0 | 2016-08-11T15:20:11Z | 38,900,602 | <p>I've had this problem long time ago. To fix this I've changed</p>
<pre><code>parser.add_argument("-a", "--address", default="fqdn", help="Gets IP address from host name.")
</code></pre>
<p>in</p>
<pre><code>parser.add_argument("-a", "--address", default="fqdn", help="Gets IP address from host name.", dest="ipfromhostname")
</code></pre>
<p>So for getting the value from <code>-a</code> you have to change</p>
<pre><code>remoteServer = sys.argv[1]
</code></pre>
<p>in</p>
<pre><code>remoteServer = args.ipfromhostname
</code></pre>
<p>where <code>ipfromhostname</code> is the <code>dest</code> value.</p>
<p>EDIT:</p>
<p>You have to do this operation for every <code>parser.add_argument</code></p>
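<p>A minimal, self-contained sketch of the <code>dest</code> pattern. Note that <code>dest</code> must be a string, and that without it argparse would already store the value under the long option name (i.e. <code>args.address</code>) — the original crash came from reading <code>sys.argv</code> positionally, not from argparse itself:</p>

```python
import argparse

parser = argparse.ArgumentParser(description="dest sketch")
parser.add_argument("-a", "--address", dest="ipfromhostname",
                    help="Gets IP address from host name.")

# The explicit list stands in for the real command line (sys.argv[1:]).
args = parser.parse_args(["-a", "someHost"])
print(args.ipfromhostname)
```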
| 0 | 2016-08-11T15:31:28Z | [
"python",
"network-programming"
] |
Convert complex NumPy array into (n, 2)-array of real and imaginary parts | 38,900,344 | <p>I have a complex-valued NumPy array that I'd like to convert into a contiguous NumPy array with real and imaginary parts separate.</p>
<p>This</p>
<pre><code>import numpy
u = numpy.array([
1.0 + 2.0j,
2.0 + 4.0j,
3.0 + 6.0j,
4.0 + 8.0j
])
u2 = numpy.ascontiguousarray(numpy.vstack((u.real, u.imag)).T)
</code></pre>
<p>does the trick, but transposing, vstacking, <em>and</em> converting to a contiguous array is probably a step or two too much.</p>
<p>Is there a native NumPy function that does this for me?</p>
| 0 | 2016-08-11T15:20:14Z | 38,901,020 | <p>You can use <code>dstack</code>:</p>
<pre><code>np.dstack((u.real, u.imag))[0]
#Out[210]:
#array([[ 1., 2.],
# [ 2., 4.],
# [ 3., 6.],
# [ 4., 8.]])
</code></pre>
| 1 | 2016-08-11T15:51:47Z | [
"python",
"arrays",
"numpy"
] |
Convert complex NumPy array into (n, 2)-array of real and imaginary parts | 38,900,344 | <p>I have a complex-valued NumPy array that I'd like to convert into a contiguous NumPy array with real and imaginary parts separate.</p>
<p>This</p>
<pre><code>import numpy
u = numpy.array([
1.0 + 2.0j,
2.0 + 4.0j,
3.0 + 6.0j,
4.0 + 8.0j
])
u2 = numpy.ascontiguousarray(numpy.vstack((u.real, u.imag)).T)
</code></pre>
<p>does the trick, but transposing, vstacking, <em>and</em> converting to a contiguous array is probably a step or two too much.</p>
<p>Is there a native NumPy function that does this for me?</p>
| 0 | 2016-08-11T15:20:14Z | 38,901,102 | <p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html" rel="nofollow"><code>column_stack</code></a> and stack the two 1-D arrays as columns to make a single 2D array.</p>
<pre><code>In [9]: np.column_stack((u.real,u.imag))
Out[9]:
array([[ 1., 2.],
[ 2., 4.],
[ 3., 6.],
[ 4., 8.]])
</code></pre>
| 3 | 2016-08-11T15:56:15Z | [
"python",
"arrays",
"numpy"
] |
Convert complex NumPy array into (n, 2)-array of real and imaginary parts | 38,900,344 | <p>I have a complex-valued NumPy array that I'd like to convert into a contiguous NumPy array with real and imaginary parts separate.</p>
<p>This</p>
<pre><code>import numpy
u = numpy.array([
1.0 + 2.0j,
2.0 + 4.0j,
3.0 + 6.0j,
4.0 + 8.0j
])
u2 = numpy.ascontiguousarray(numpy.vstack((u.real, u.imag)).T)
</code></pre>
<p>does the trick, but transposing, vstacking, <em>and</em> converting to a contiguous array is probably a step or two too much.</p>
<p>Is there a native NumPy function that does this for me?</p>
| 0 | 2016-08-11T15:20:14Z | 38,902,227 | <p>None of the alternatives are <code>native</code> or save on reshape, transposes etc.</p>
<p>For example internally <code>column_stack</code> converts its inputs to 2d 'column' arrays. Effectively it is doing</p>
<pre><code>In [1171]: np.concatenate((np.array(u.real,ndmin=2).T,np.array(u.imag,ndmin=2).T),axis=1)
Out[1171]:
array([[ 1., 2.],
[ 2., 4.],
[ 3., 6.],
[ 4., 8.]])
</code></pre>
<p><code>vstack</code> passes its inputs through <code>atleast_2d(m)</code>, making sure each is a 1 row 2d array. <code>np.dstack</code> uses <code>atleast_3d(m)</code>.</p>
<p>A new function is <code>np.stack</code></p>
<pre><code>In [1174]: np.stack((u.real,u.imag),-1)
Out[1174]:
array([[ 1., 2.],
[ 2., 4.],
[ 3., 6.],
[ 4., 8.]])
</code></pre>
<p>It uses <code>None</code> indexing to correct dimensions for concatenation; effectively:</p>
<pre><code>np.concatenate((u.real[:,None],u.imag[:,None]),axis=1)
</code></pre>
<p>All end up using <code>np.concatenate</code>; it and <code>np.array</code> are the only compiled joining functions.</p>
<p>Another trick is to use <code>view</code></p>
<pre><code>In [1179]: u.view('(2,)float')
Out[1179]:
array([[ 1., 2.],
[ 2., 4.],
[ 3., 6.],
[ 4., 8.]])
</code></pre>
<p>The complex values are saved as 2 adjacent floats. So the same databuffer can be viewed as pure floats, or with this view as a 2d array of floats. In contrast to the <code>concatenate</code> functions, there's no copying here.</p>
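To see that the view really shares the buffer with the original (a small check, not part of the original answer), note that writing through the float view mutates the complex array:

```python
import numpy as np

u = np.array([1.0 + 2.0j, 2.0 + 4.0j, 3.0 + 6.0j, 4.0 + 8.0j])
v = u.view(np.float64).reshape(-1, 2)  # same buffer, reinterpreted -- no copy

v[0, 1] = 99.0   # write the imaginary part of u[0] through the float view
print(u[0])      # (1+99j) -- the complex array changed too
```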
<p>Another test on the alternatives is to ask what happens when <code>u</code> is 2d or higher?</p>
| 1 | 2016-08-11T17:00:59Z | [
"python",
"arrays",
"numpy"
] |
Printing a non concordant position within a string | 38,900,468 | <p>I am writing a program that analyzes and parses consensus sequences. I have successfully gotten the analyzing and parsing to work and the program tells me whether two sequences are concordant or not. </p>
<p>I want to add an additional feature: if the two sequences are not concordant, I want it to tell me the position where the two sequences do not match up. </p>
<p>For example: </p>
<p>If sequence 1 is: GACTTTTTACTTTTTTG
& sequence 2 is: GACCTTTTACTTTTTTG </p>
<p>it will tell me sequence 1 is not concordant to sequence 2, but I also want it to tell me the position of the non concordance is the 4th letter. </p>
<p>How can I get the program to do this?</p>
<p>Here is the code I have so far:</p>
<pre><code>for (h1,s1),(h2,s2) in combinations(zip(header,sequence),2):
if s1[start:stop]==s2[start:stop]:
print h1, "is concordant to", h2
else:
print h1, "is not concordant to", h2
nonconcordance_position=[]
nonconcordance_position.append(idx2[n-1])
print "position of non concordance:", nonconcordance_position
</code></pre>
<p>When I run this it works; however, it does not give the correct position.</p>
| 0 | 2016-08-11T15:25:39Z | 38,900,733 | <p>In your else statement you could loop through the sequence string and then print the index when the characters don't match.</p>
<p>Something like:</p>
<pre><code>else:
    # compare the sequences (s1/s2), not the headers (h1/h2)
    for i in range(len(s1)):
        if s1[i] == s2[i]:
            continue
        else:
            print i
            break
</code></pre>
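Alternatively, <code>zip</code> and <code>enumerate</code> can collect every mismatch position in one pass (a sketch using the example sequences from the question, reporting 1-based positions as the question does):

```python
s1 = "GACTTTTTACTTTTTTG"
s2 = "GACCTTTTACTTTTTTG"

# 1-based positions where the two sequences disagree
mismatches = [i for i, (a, b) in enumerate(zip(s1, s2), 1) if a != b]
print(mismatches)  # [4]
```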
| 0 | 2016-08-11T15:37:33Z | [
"python",
"string",
"python-2.7",
"printing",
"position"
] |
How to reduce the number of samples in audio data array using python | 38,900,475 | <p>I am plotting an audio sample's amplitude at every frame present in that sample, something like this:</p>
<pre><code>sound = AudioSegment.from_mp3("test.mp3")
print(len(sound))
print(len(sound.raw_data))
data = np.fromstring(sound.raw_data, dtype=np.int16)
left, right = data[0::2], data[1::2]
plt.plot(left)
</code></pre>
<p>In the process I noticed that the length of the sound AudioSegment is different from the length of sound.raw_data; why is that the case?</p>
<p>Also, as the test.mp3 duration gets longer, the ticks on the x axis go into the millions, so my question is: how can we plot the data at a lower sample rate, or in other words, how can I reduce the number of samples in the <em>data</em> array?</p>
<p>Here is my first thought: calculate the average of every 10 or 20 samples in the audio data array and represent them as one point, so that we reduce the number of samples. However, this might cause some information loss and performance issues. </p>
<p>Does Python have an alternative way to do this?</p>
| 0 | 2016-08-11T15:25:53Z | 38,919,019 | <p>In pydub, <code>len(sound)</code> is the duration in milliseconds, where <code>len(sound.raw_data)</code> would be the number of bytes of total audio data. </p>
<p>If you are working with CD-quality sound (44.1kHz, 16 bit, stereo) you would expect each millisecond to be roughly 44 samples (44100 / 1000), and two bytes (16 bits) per sample, doubled again for left/right channels. So roughly 176 bytes per millisecond.</p>
<p>To create a plot like you see in many audio editors, the most common approach is to get the rms of the audio in chunks.</p>
<p>if you want a plot that is 400px wide, you'd do something like…</p>
<pre><code>from pydub import AudioSegment
sound = AudioSegment.from_file("...")
num_chunks = 400 #px
chunk_size = int(len(sound) / num_chunks)  # ms
loudness_over_time = []
for i in range(0, len(sound), chunk_size):
chunk = sound[i:i+chunk_size]
loudness_over_time.append(chunk.rms)
</code></pre>
<p>note, I haven't tested this code</p>
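The same chunked-RMS idea can be sketched with plain NumPy on a synthetic signal (an assumed 44.1 kHz rate and a made-up 440 Hz sine, just to show how the number of points gets reduced):

```python
import numpy as np

rate = 44100                        # assumed CD-quality sample rate
t = np.arange(rate) / float(rate)   # one second of "audio"
samples = np.sin(2 * np.pi * 440 * t)

num_chunks = 400                    # one point per pixel of a 400px-wide plot
chunk = len(samples) // num_chunks
trimmed = samples[: num_chunks * chunk].reshape(num_chunks, chunk)
loudness = np.sqrt((trimmed ** 2).mean(axis=1))  # RMS per chunk

print(loudness.shape)  # (400,) -- 44100 samples reduced to 400 points
```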
| 0 | 2016-08-12T13:20:15Z | [
"python",
"numpy",
"audio",
"matplotlib",
"pydub"
] |
How to choose the features that describe x% of all information in data while using Incremental principal components analysis (IPCA)? | 38,900,495 | <p>I'd like to use the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.IncrementalPCA.html#sklearn.decomposition.IncrementalPCA" rel="nofollow">Incremental principal components analysis</a> (IPCA) to reduce my feature space such that it contains x% of information.</p>
<p>I would use the <code>sklearn.decomposition.IncrementalPCA(n_components=None, whiten=False, copy=True, batch_size=None)</code>
I can leave the <code>n_components=None</code> so that it works on all the features that I have.</p>
<p>But later, once the whole data set is analyzed,
how do I select the components that represent x% of the information, and how do I create a <code>transform()</code> for that number of components?</p>
<p>This idea taken from this <a href="http://datascience.stackexchange.com/questions/10730/how-many-dimensions-to-reduce-to-when-doing-pca">question.</a></p>
| 0 | 2016-08-11T15:26:41Z | 38,907,564 | <p>You can get the percentage of explained variance from each of the components of your PCA using <code>explained_variance_ratio_</code>. For example in the iris dataset, the first 2 principal components account for 98% of the variance in the data:</p>
<pre><code>import numpy as np
from sklearn import decomposition
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
pca = decomposition.IncrementalPCA()
pca.fit(X)
pca.explained_variance_ratio_
#array([ 0.92461621, 0.05301557, 0.01718514, 0.00518309])
</code></pre>
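From those ratios, picking the number of components that covers x% of the variance is a cumulative sum away. A sketch on synthetic data, using NumPy only so it runs without scikit-learn (the SVD-based ratios follow the same idea as <code>explained_variance_ratio_</code>):

```python
import numpy as np

rng = np.random.RandomState(0)
# synthetic data: 5 features with very different variances
X = rng.randn(500, 5) * np.array([5.0, 2.0, 1.0, 0.5, 0.1])

Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)   # singular values, descending
ratio = s ** 2 / (s ** 2).sum()           # fraction of variance per component
cum = np.cumsum(ratio)

# smallest number of components explaining at least 95% of the variance
n = int(np.searchsorted(cum, 0.95) + 1)
print(n)
```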
| 0 | 2016-08-11T23:22:43Z | [
"python",
"machine-learning",
"scikit-learn",
"pca"
] |
How to create a new color image with python Imaging? | 38,900,511 | <p>I want to create a new image with a colored background. </p>
<p>This working: </p>
<pre><code>img = Image.new('RGB', (width,height), "red")
</code></pre>
<p>But I want to customize the color: when I change "red" to (228,150,150) it doesn't work. </p>
<p>Have you an idea to do this?</p>
| 0 | 2016-08-11T15:27:19Z | 38,900,699 | <p>This is working for me. Note that the color tuple is not between quotes.</p>
<pre><code>from PIL import Image
img = Image.new('RGB', (300, 200), (228, 150, 150))
img.show()
</code></pre>
<p>If it does not work for you, which version of Python and which version of PIL are you using?</p>
| 0 | 2016-08-11T15:36:00Z | [
"python",
"python-imaging-library"
] |
python - decorate a generator | 38,900,512 | <p>I have a function in 2 different version which read a large file (I make it simple here and read a very small excel file). </p>
<p><code>Version 1:</code> Read the whole file and return the list of rows<br>
<code>Version 2:</code> Read it line by line with help of generator</p>
<p>I want to decorate the output of these 2 functions and add something to the end of each row based on different logic; that's why I think I need a different customized decorator for each. Yet I can't figure out how to achieve this with a decorator, especially in the case where I have yield instead of return.</p>
<p>Version1:</p>
<pre><code>@dec
def readxls():
fileBook = xlrd.open_workbook('../decorator.xls')
sh = fileBook.sheet_by_name("Sheet1")
out = []
for row_index in xrange(1, sh.nrows):
out.append(sh.row_values(row_index))
return out
</code></pre>
<p>Version 2:</p>
<pre><code>@dec2
def readxls():
fileBook = xlrd.open_workbook('../decorator.xls')
sh = fileBook.sheet_by_name("Sheet1")
for row_index in xrange(1, sh.nrows):
yield sh.row_values(row_index)
</code></pre>
<p>let's say the excel file is like:</p>
<pre><code>Col1 Col2 Col3
Val11 Val12 Val13
Val21 Val22 Val23
</code></pre>
<p>I want to decorate the output to get the following result :</p>
<pre><code>Col1 Col2 Col3 0 Col1Col2
Val11 Val12 Val13 1 Val11Val12
Val21 Val22 Val23 2 Val21Val22
</code></pre>
<p>In order to get something like this as output how should be my dec1 and dec2 function?</p>
| 0 | 2016-08-11T15:27:19Z | 38,903,417 | <p>Decorators are supposed to work by taking the result of the function, manipulating it, and giving back the new result, so knowing what the result is is the key. In this case, for that example, the result is <code>[['val11', 'val12', 'val13'], ['val21', 'val22', 'val23']]</code> for version 1, and a generator over those elements for the second. With this knowledge we can proceed to make a decorator, like for example</p>
<pre><code>from functools import wraps
def decorator1(fun):
@wraps(fun)
def wrapper(*args,**kwds):
result = fun(*args,**kwds)
for i,x in enumerate(result,1):
x.extend( (i, x[0]+x[1]) )
return result
return wrapper
def decorator2(fun):
@wraps(fun)
def wrapper(*args,**kwds):
for i,x in enumerate(fun(*args,**kwds),1):
x.extend( (i, x[0]+x[1]) )
yield x
return wrapper
</code></pre>
<p>(here I use <a href="https://docs.python.org/2/library/functools.html#functools.wraps" rel="nofollow"><code>wraps</code></a> to help with maintaining some meta data of the decorate function (functionality wise is not needed) and as guide to write the example)</p>
<p>In the first decorator, as the result is the whole list, I just add the extra stuff to each element and return it, and in the second I add the extra stuff as they come along maintaining the generator structure</p>
<p>decorated with those, the result is now <code>[['val11', 'val12', 'val13', 1, 'val11val12'], ['val21', 'val22', 'val23', 2, 'val21val22']]</code></p>
<hr>
<p>As an aside, because your 2 functions do the same thing, I would rather keep only the generator and call <code>list(readxls())</code> when I need a list; I would also add 2 extra parameters to the function signature, with those strings (the file and sheet names) as default values, to make it more flexible.</p>
| 1 | 2016-08-11T18:12:46Z | [
"python",
"generator",
"decorator"
] |
Referring capture group in re.match() in python | 38,900,564 | <p>Hi this is the string I want to match</p>
<p><code>mystr = "mykey/20161010/20161010"</code></p>
<p>so far my <code>regex</code> is like this</p>
<p><code>re.match("mykey/([2-9][0-9][0-9][0-9][0-1][0-9][0-3][0-9])/[.]*", mystr)</code></p>
<p>As you can see, I am using One <code>Capture group</code>. I want to replace the <code>[.]*</code> by referring the Capture group I have already created. How should I do this?</p>
<p>PS : I am using <code>Python 2.7</code></p>
<p><strong>Update 1</strong>
Based on the answers so far, I have tried this (I have simplified the example a little bit), but it does not seem to be working...</p>
<pre><code>>>> mystr = "mykey/20/20"
>>> print re.match("mykey\/([2-9][0-9])\/[.]*", mystr)
<_sre.SRE_Match object at 0x7faf96ddf558>
>>> print re.match("mykey\/([2-9][0-9])\/.*", mystr)
<_sre.SRE_Match object at 0x7faf96ddf558>
>>> print re.match("mykey\/([2-9][0-9])\/\1", mystr)
None
</code></pre>
<p>I am getting <code>None</code> when trying to refer the <code>Capture group</code>. Am I missing something?</p>
<p><strong>Update 2: Finally Working...</strong>
Hope this helps someone looking for the answer. Adding an additional backslash (i.e. writing <code>\\1</code>) did the trick.</p>
<pre><code>>>> import re
>>> mystr = "mykey/20160610/20160610"
>>> re.match("mykey/([2-9][0-9][0-9][0-9][0-1][0-9][0-3][0-9])/\\1", mystr)
<_sre.SRE_Match object at 0x7fe352145558>
</code></pre>
| 0 | 2016-08-11T15:29:17Z | 38,900,684 | <pre><code>mykey\/([2-9][0-9][0-9][0-9][0-1][0-9][0-3][0-9])\/\1
</code></pre>
<p>Use <code>\1</code> - to capture the exact match produced by the first capturing group which is <code>[2-9][0-9][0-9][0-9][0-1][0-9][0-3][0-9]</code> in this case.</p>
<p>Shorter version of the same would be </p>
<pre><code>mykey\/([2-9]\d{3}[0-1]\d[0-3]\d)\/\1
</code></pre>
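With a raw string, the backreference needs no doubled backslash, which sidesteps the problem from Update 1. A sketch against the string from the question:

```python
import re

mystr = "mykey/20161010/20161010"
# \1 requires the second path segment to repeat the captured first one
m = re.match(r"mykey/([2-9]\d{3}[0-1]\d[0-3]\d)/\1", mystr)
print(m.group(1))  # 20161010

# a non-repeating second part does not match
print(re.match(r"mykey/([2-9]\d{3}[0-1]\d[0-3]\d)/\1",
               "mykey/20161010/20161011"))  # None
```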
| 2 | 2016-08-11T15:35:17Z | [
"python",
"regex",
"python-2.7"
] |
python-tarantool call parameters | 38,900,621 | <p>I want to call box.space.auth_user:auto_increment{"8", 7, 7, 7} from a Python script.
This is what I do in Python:</p>
<pre><code>import tarantool
connection = tarantool.connect('127.0.0.1', 3303)
connection.call('box.space.auth_user:auto_increment', ("8", 7, 7, 7))
</code></pre>
<p>And then I get the error:</p>
<pre><code>[string "-- schema.lua (internal file)..."]:921: bad argument #1 to 'insert' (table expected, got string)
</code></pre>
<p>This is how I define my <strong>auth_user</strong> schema</p>
<pre><code>box.schema.space.create('auth_user',{if_not_exists=true})
box.space.auth_user:create_index('primary', {type='TREE', if_not_exists=true, unique=true, parts={1,'NUM'}})
box.space.auth_user:create_index('login', {type='HASH', if_not_exists=true, unique=true, parts={2,'STR'}})
</code></pre>
<p>What am I doing wrong?</p>
| 1 | 2016-08-11T15:32:26Z | 39,185,623 | <p>It's not a bug, please use <code>[["8", 7, 7, 7]]</code> or <code>(("8", 7, 7, 7), )</code>.</p>
<pre><code>In [4]: connection.call('box.space.auth_user:auto_increment', [["8", 7, 7, 7]])
Out[4]:
- [1, '8', 7, 7, 7]
</code></pre>
<p>tarantool-python uses an array to pass the arguments, so this is the analog of <code>box.space.auth_user:auto_increment("8", 7, 7, 7)</code>, not of passing a Lua table <code>{}</code></p>
| 0 | 2016-08-27T20:49:39Z | [
"python",
"call",
"tarantool"
] |
Converting a list of list into a dictionary based on common values in python | 38,900,637 | <p>I have a list of lists in Python as follows:</p>
<pre><code>a = [['John', 24, 'teacher'],['Mary',23,'clerk'],['Vinny', 21, 'teacher'], ['Laura',32, 'clerk']]
</code></pre>
<p>The idea is to create a dict on the basis of their occupation as follows:</p>
<pre><code>b = {'teacher': {'John_24': 'true', 'Vinny_21': 'true'},
     'clerk' : {'Mary_23': 'true', 'Laura_32': 'true'}}
</code></pre>
<p>What is the best way of achieving this ?</p>
| 1 | 2016-08-11T15:32:51Z | 38,900,759 | <p>You can use a defaultdict:</p>
<pre><code>a = [['John', 24, 'teacher'],['Mary',23,'clerk'],['Vinny', 21, 'teacher'], ['Laura',32, 'clerk']]
from collections import defaultdict
dct = defaultdict(defaultdict)
for name, age, occ in a:
    dct[occ][name + "_" + str(age)] = "true"
</code></pre>
<p>Output:</p>
<pre><code>from pprint import pprint as pp
pp(dict(dct))
{'clerk': defaultdict(None, {'Laura_32': 'true', 'Mary_23': 'true'}),
'teacher': defaultdict(None, {'John_24': 'true', 'Vinny_21': 'true'})}
</code></pre>
<p>Although you may as well just append each name to a list:</p>
<pre><code>from collections import defaultdict
dct = defaultdict(list)
for name, age, occ in a:
    dct[occ].append(name + "_" + str(age))
</code></pre>
<p>Which woud give you:</p>
<pre><code> defaultdict(<class 'list'>, {'clerk': ['Mary_23', 'Laura_32'], 'teacher': ['John_24', 'Vinny_21']})
</code></pre>
<p>If you want to use the age later you may also want to store them separately:</p>
<pre><code>from collections import defaultdict
dct = defaultdict(list)
for name, age, occ in a:
dct[occ].append([name, age])
</code></pre>
<p>Which would give you:</p>
<pre><code>defaultdict(<class 'list'>,
{'clerk': [['Mary', 23], ['Laura', 32]],
'teacher': [['John', 24], ['Vinny', 21]]})
</code></pre>
<p>Using python3 you can use:</p>
<pre><code>for *nm_age, occ in a:
dct[occ].append(nm_age)
</code></pre>
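To get exactly the nested shape asked for in the question (<code>{'teacher': {'John_24': 'true', ...}}</code>), the inner container can simply be a plain dict — a small variation on the same <code>defaultdict</code> approach:

```python
from collections import defaultdict

a = [['John', 24, 'teacher'], ['Mary', 23, 'clerk'],
     ['Vinny', 21, 'teacher'], ['Laura', 32, 'clerk']]

b = defaultdict(dict)
for name, age, occ in a:
    b[occ]["%s_%s" % (name, age)] = 'true'

print(dict(b))
# {'teacher': {'John_24': 'true', 'Vinny_21': 'true'},
#  'clerk': {'Mary_23': 'true', 'Laura_32': 'true'}}
```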
| 2 | 2016-08-11T15:38:42Z | [
"python",
"list",
"dictionary"
] |