title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
How Do I Start Pulling Apart This Block of JSON Data? | 38,978,428 | <p>I'd like to make a program that makes offline copies of math questions from Khan Academy. I have a huge 21.6MB text file that contains data on all of their exercises, but I have no idea how to start analyzing it, much less start pulling the questions from it.</p>
<p><a href="http://pastebin.com/x8xjS1kx" rel="nofollow">Here</a> is a pastebin containing a sample of the JSON data. If you want to see all of it, you can find it <a href="https://www.khanacademy.org/api/v1/exercises" rel="nofollow">here</a>. Warning for long load time.</p>
<p>I've never used JSON before, but I wrote up a quick Python script to try to load up individual "sub-blocks" (or equivalent, correct term) of data.</p>
<pre><code>import sys
import json
exercises = open("exercises.txt", "r+b")
byte = 0
frontbracket = 0
backbracket = 0
while byte < 1000: #while byte < character we want to read up to
                   #keep at 1000 for testing purposes
    char = exercises.read(1)
    sys.stdout.write(char)
    #Here we decide what to do based on what char we have
    if str(char) == "{":
        frontbracket = byte
        while True:
            char = exercises.read(1)
            if str(char)=="}":
                backbracket=byte
                break
        exercises.seek(frontbracket)
        block = exercises.read(backbracket-frontbracket)
        print "Block is " + str(backbracket-frontbracket) + " bytes long"
        jsonblock = json.loads(block)
        sys.stdout.write(block)
        print jsonblock["translated_display_name"]
        print "\nENDBLOCK\n"
    byte = byte + 1
</code></pre>
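<p>For what it's worth, scanning byte-by-byte and matching single braces will break on nested objects and on braces inside strings; the whole file can instead be handed to <code>json.load</code> in one call. A sketch (the two-entry sample below is hypothetical; the real file is one large JSON array of much bigger exercise objects):</p>

```python
import json
import io

# Hypothetical stand-in for exercises.txt; the real file is a single
# JSON array of much larger exercise objects.
sample_file = io.StringIO(json.dumps([
    {"translated_display_name": "Addition", "author_name": "A"},
    {"translated_display_name": "Fractions", "author_name": "B"},
]))

# json.load parses the whole array at once: nested braces and braces
# inside strings are handled by the parser, not by hand.
exercises = json.load(sample_file)
for exercise in exercises:
    print(exercise["translated_display_name"])
```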
| 0 | 2016-08-16T15:06:56Z | 38,979,645 | <p>Ok, the repeated pattern appears to be this: <a href="http://pastebin.com/4nSnLEFZ" rel="nofollow">http://pastebin.com/4nSnLEFZ</a></p>
<p>To get an idea of the structure of the response, you can use <a href="http://jsonlint.com/" rel="nofollow">JSONlint</a> to copy/paste portions of your string and 'validate'. Even if the portion you copied is not valid, it will still format it into something you can actually read.</p>
<p>First, I have used the <code>requests</code> library to pull the JSON for you. It's a super-simple library when you're dealing with things like this. The API is slow to respond because it seems you're pulling everything, but it should work fine.</p>
<p>Once you get a response from the API, you can convert that directly to python objects using <code>.json()</code>. What you have is essentially a mixture of nested lists and dictionaries that you can iterate through and pull specific details. In my example below, <code>my_list2</code> has to use a <code>try/except</code> structure because it would seem that some of the entries do not have two items in the list under <code>translated_problem_types</code>. In that case, it will just put 'None' instead. You might have to use trial and error for such things.</p>
<p>Finally, since you haven't used JSON before, it's also worth noting that it can behave like a dictionary itself; you are not guaranteed the order in which you receive details. However, in this case, it seems the outermost structure is a list, so in theory it's possible that there is a consistent order but don't rely on it - we don't know how the list is constructed.</p>
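<p>The <code>try/except</code> fallback described above can be shown in isolation (the entry below is a hypothetical minimal shape, not real API data):</p>

```python
# Hypothetical entry with only ONE item under translated_problem_types,
# so asking for the second item raises IndexError.
item = {
    "author_name": "jdoe",
    "author_key": "key123",
    "translated_problem_types": [{"items": [{"sha": "abc123"}]}],
}

try:
    the_second_entry = item['translated_problem_types'][0]['items'][1]['sha']
except IndexError:
    the_second_entry = 'None'   # fall back when there is no second item

print(the_second_entry)
```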
<pre><code>import requests
api_call = requests.get('https://www.khanacademy.org/api/v1/exercises')
json_response = api_call.json()
# Assume we first want to list "author name" with "author key"
# This should loop through the repeated pattern in the pastebin
# access items as a dictionary
my_list1 = []
for item in json_response:
    my_list1.append([item['author_name'], item['author_key']])

print my_list1[0:5]

# Now let's assume we want the 'sha' of the SECOND entry in translated_problem_types
# to also be listed with author name
my_list2 = []
for item in json_response:
    try:
        the_second_entry = item['translated_problem_types'][0]['items'][1]['sha']
    except IndexError:
        the_second_entry = 'None'
    my_list2.append([item['author_name'], item['author_key'], the_second_entry])

print my_list2[0:5]
</code></pre>
| 1 | 2016-08-16T16:07:01Z | [
"python",
"json",
"khan-academy"
] |
How to get n longest entries of DataFrame? | 38,978,432 | <p>I'm trying to get the n longest entries of a dask DataFrame. I tried calling <a href="http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.nlargest" rel="nofollow">nlargest</a> on a dask DataFrame with two columns like this:</p>
<pre><code>import dask.dataframe as dd
df = dd.read_csv("opendns-random-domains.txt", header=None, names=['domain_name'])
df['domain_length'] = df.domain_name.map(len)
print(df.head())
print(df.dtypes)
top_3 = df.nlargest(3, 'domain_length')
print(top_3.head())
</code></pre>
<p>The file opendns-random-domains.txt contains just a long list of domain names. This is what the output of the above code looks like:</p>
<pre><code> domain_name domain_length
0 webmagnat.ro 12
1 nickelfreesolutions.com 23
2 scheepvaarttelefoongids.nl 26
3 tursan.net 10
4 plannersanonymous.com 21
domain_name object
domain_length float64
dtype: object
Traceback (most recent call last):
File "nlargest_test.py", line 9, in <module>
print(top_3.head())
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/dataframe/core.py", line 382, in head
result = result.compute()
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/base.py", line 86, in compute
return compute(self, **kwargs)[0]
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/base.py", line 179, in compute
results = get(dsk, keys, **kwargs)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/threaded.py", line 57, in get
**kwargs)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 484, in get_async
raise(remote_exception(res, tb))
dask.async.TypeError: Cannot use method 'nlargest' with dtype object
Traceback
---------
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 267, in execute_task
result = _execute_task(task, data)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 249, in _execute_task
return func(*args2)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/dataframe/core.py", line 2040, in <lambda>
f = lambda df: df.nlargest(n, columns)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/frame.py", line 3355, in nlargest
return self._nsorted(columns, n, 'nlargest', keep)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/frame.py", line 3318, in _nsorted
ser = getattr(self[columns[0]], method)(n, keep=keep)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/util/decorators.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/series.py", line 1898, in nlargest
return algos.select_n(self, n=n, keep=keep, method='nlargest')
File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/algorithms.py", line 559, in select_n
raise TypeError("Cannot use method %r with dtype %s" % (method, dtype))
</code></pre>
<p>I'm confused, because I'm calling <code>nlargest</code> on the column which is of type <code>float64</code> but still get this error saying it cannot be called on dtype <code>object</code>. Also this works fine in pandas. How can I get the n longest entries from a DataFrame? </p>
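<p>As a side note, independent of dask: for a plain Python list of domain names, the same "n longest entries" operation is covered by the standard library's <code>heapq.nlargest</code> with a <code>key</code> function:</p>

```python
import heapq

domains = ["webmagnat.ro", "nickelfreesolutions.com",
           "scheepvaarttelefoongids.nl", "tursan.net",
           "plannersanonymous.com"]

# key=len ranks the strings by their length, longest first
top_3 = heapq.nlargest(3, domains, key=len)
print(top_3)
```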
| 2 | 2016-08-16T15:07:01Z | 38,979,064 | <p>I tried to reproduce your problem but things worked fine. Can I recommend that you produce a <a href="http://stackoverflow.com/help/mcve">Minimal Complete Verifiable Example</a>?</p>
<h3>Pandas example</h3>
<pre><code>In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'x': ['a', 'bb', 'ccc', 'dddd']})
In [3]: df['y'] = df.x.map(len)
In [4]: df
Out[4]:
x y
0 a 1
1 bb 2
2 ccc 3
3 dddd 4
In [5]: df.nlargest(3, 'y')
Out[5]:
x y
3 dddd 4
2 ccc 3
1 bb 2
</code></pre>
<h3>Dask dataframe example</h3>
<pre><code>In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'x': ['a', 'bb', 'ccc', 'dddd']})
In [3]: import dask.dataframe as dd
In [4]: ddf = dd.from_pandas(df, npartitions=2)
In [5]: ddf['y'] = ddf.x.map(len)
In [6]: ddf.nlargest(3, 'y').compute()
Out[6]:
x y
3 dddd 4
2 ccc 3
1 bb 2
</code></pre>
<p>Alternatively, perhaps this is just working now on the git master version?</p>
| 0 | 2016-08-16T15:36:48Z | [
"python",
"dask"
] |
Creating uniform random quaternion and multiplication of two quaternions | 38,978,441 | <p>I have a Python (NumPy) function which creates a uniform random quaternion. I would like to get the two quaternion multiplications as a 2-dimensional returned array from the same or another function. The formula of quaternion multiplication in my current case is Q1*Q2 and Q2*Q1. Here, <code>Q1=(w0, x0, y0, z0)</code> and <code>Q2=(w1, x1, y1, z1)</code> are two quaternions. The expected two-quaternion multiplication output (as a 2-d returned array) should be </p>
<pre><code>return([-x1*x0 - y1*y0 - z1*z0 + w1*w0, x1*w0 + y1*z0 - z1*y0 +
w1*x0, -x1*z0 + y1*w0 + z1*x0 + w1*y0, x1*y0 - y1*x0 + z1*w0 +
w1*z0])
</code></pre>
<p>Can anyone help me, please? My code is here: </p>
<pre><code>def randQ(N):
    #Generates a uniform random quaternion
    #James J. Kuffner 2004
    #A random array 3xN
    s = random.rand(3,N)
    sigma1 = sqrt(1.0 - s[0])
    sigma2 = sqrt(s[0])
    theta1 = 2*pi*s[1]
    theta2 = 2*pi*s[2]
    w = cos(theta2)*sigma2
    x = sin(theta1)*sigma1
    y = cos(theta1)*sigma1
    z = sin(theta2)*sigma2
    return array([w, x, y, z])
</code></pre>
| 1 | 2016-08-16T15:07:28Z | 38,979,399 | <p>A 2-Dimensional Array is an array like this: foo[0][1]</p>
<p>You don't need to do that. Multiplying two quaternions yields one single quaternion. I don't see why you would need a two-dimensional array, or how you would even use one.</p>
<p>Just have a function that takes two arrays as arguments:</p>
<pre><code>def multQuat(q1, q2):
</code></pre>
<p>then return the relevant array.</p>
<pre><code>return array([-q2[1] * q1[1], ...])
</code></pre>
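<p>For reference, a complete scalar version built from the formula quoted in the question can be sanity-checked against the identity quaternion (1, 0, 0, 0), which must leave the other factor unchanged:</p>

```python
def mult_quat(q1, q2):
    # Unpack (w, x, y, z) components and apply the formula from the question.
    w0, x0, y0, z0 = q1
    w1, x1, y1, z1 = q2
    return [-x1*x0 - y1*y0 - z1*z0 + w1*w0,
             x1*w0 + y1*z0 - z1*y0 + w1*x0,
            -x1*z0 + y1*w0 + z1*x0 + w1*y0,
             x1*y0 - y1*x0 + z1*w0 + w1*z0]

# Multiplying by the identity quaternion leaves the other factor unchanged.
identity = (1.0, 0.0, 0.0, 0.0)
q = (0.5, 0.5, 0.5, 0.5)
print(mult_quat(identity, q))  # -> [0.5, 0.5, 0.5, 0.5]
```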
| 0 | 2016-08-16T15:54:27Z | [
"python",
"numpy",
"2d",
"quaternions"
] |
Creating uniform random quaternion and multiplication of two quaternions | 38,978,441 | <p>I have a Python (NumPy) function which creates a uniform random quaternion. I would like to get the two quaternion multiplications as a 2-dimensional returned array from the same or another function. The formula of quaternion multiplication in my current case is Q1*Q2 and Q2*Q1. Here, <code>Q1=(w0, x0, y0, z0)</code> and <code>Q2=(w1, x1, y1, z1)</code> are two quaternions. The expected two-quaternion multiplication output (as a 2-d returned array) should be </p>
<pre><code>return([-x1*x0 - y1*y0 - z1*z0 + w1*w0, x1*w0 + y1*z0 - z1*y0 +
w1*x0, -x1*z0 + y1*w0 + z1*x0 + w1*y0, x1*y0 - y1*x0 + z1*w0 +
w1*z0])
</code></pre>
<p>Can anyone help me, please? My code is here: </p>
<pre><code>def randQ(N):
    #Generates a uniform random quaternion
    #James J. Kuffner 2004
    #A random array 3xN
    s = random.rand(3,N)
    sigma1 = sqrt(1.0 - s[0])
    sigma2 = sqrt(s[0])
    theta1 = 2*pi*s[1]
    theta2 = 2*pi*s[2]
    w = cos(theta2)*sigma2
    x = sin(theta1)*sigma1
    y = cos(theta1)*sigma1
    z = sin(theta2)*sigma2
    return array([w, x, y, z])
</code></pre>
| 1 | 2016-08-16T15:07:28Z | 38,982,314 | <p>A simple rendition of your request would be:</p>
<pre><code>In [70]: def multQ(Q1,Q2):
    ...:     w0,x0,y0,z0 = Q1 # unpack
    ...:     w1,x1,y1,z1 = Q2
    ...:     return([-x1*x0 - y1*y0 - z1*z0 + w1*w0, x1*w0 + y1*z0 - z1*y0 +
    ...:             w1*x0, -x1*z0 + y1*w0 + z1*x0 + w1*y0, x1*y0 - y1*x0 + z1*w0 +
    ...:             w1*z0])
    ...:
In [72]: multQ(randQ(1),randQ(2))
Out[72]:
[array([-0.37695449, 0.79178506]),
array([-0.38447116, 0.22030199]),
array([ 0.44019022, 0.56496059]),
array([ 0.71855397, 0.07323243])]
</code></pre>
<p>The result is a list of 4 arrays. Just wrap it in <code>np.array()</code> to get a 2d array:</p>
<pre><code>In [73]: M=np.array(_)
In [74]: M
Out[74]:
array([[-0.37695449, 0.79178506],
[-0.38447116, 0.22030199],
[ 0.44019022, 0.56496059],
[ 0.71855397, 0.07323243]])
</code></pre>
<p>I haven't tried to understand or clean up your description - just rendering it as working code.</p>
| 0 | 2016-08-16T18:45:11Z | [
"python",
"numpy",
"2d",
"quaternions"
] |
cx_Freeze giving error when using fuzzywuzzy | 38,978,457 | <p>I have built a tkinter GUI for survey entry in python3.4 that uses a number of packages. I then need to compile it to an executable so that I can put it on a coworkers machine (we both are on windows7 platform) I've structured my setup.py to look like:</p>
<pre><code>from cx_Freeze import setup, Executable
import sys
base = None
if sys.platform == "win32":
    base = "Win32GUI"

setup(
    name='Survey Entry',
    version='3.5',
    license='MIT',
    description='GUI For entering survey data',
    executables=[Executable("Survey Entry.py", base=base)],
    options={"build_exe": {"packages": ['tkinter', 'cx_Oracle', 'datetime', 'time', 'enter_survey', 'lookup',
                                        'queryfuncs', 'login', 'gui', 'datetime', 'add_respondent', 'possible_matches']}}
)
</code></pre>
<p>This worked absolutely fine for tons of versions. But then I added a functionality that uses fuzzywuzzy to compare strings. When I include that and add <code>fuzzywuzzy</code> to the packages list in the options dictionary and compile it, I get a big error when I try to run the exe that ends with <code>ImportError: No module named 'Levenshtein'</code></p>
<p>I don't understand because in my development the module works fine. I've tried to include <code>Levenshtein</code> in the setup but it doesn't exist as a module. I don't have python-Levenshtein installed though because I can't get the .whl to install on my windows machine. </p>
<p>Has anyone ran into this? Why would fuzzywuzzy cause this error when it runs fine through python? Is there something I'm missing?</p>
<p>The complete error can be seen here: <a href="http://imgur.com/a/rSKsS" rel="nofollow">http://imgur.com/a/rSKsS</a></p>
<p>EDIT: I was able to figure it out - I need the python-Levenshtein module installed, had to keep working at it to get the .whl to install (apparently I'm not as proficient with the command prompt as I wanted to believe). After that I included <code>Levenshtein</code> and <code>fuzzywuzzy</code> in the packages list and it compiled without error.</p>
<p>I'm going to leave this up because I couldn't find this on a google search so hopefully no one else falls victim!</p>
| 0 | 2016-08-16T15:08:05Z | 38,979,113 | <p>Found the solution - <code>python-Levenshtein</code> needs to be installed. Got mine from gohlke's repository: <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#python-levenshtein" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/#python-levenshtein</a> - make sure you have the platform that matches your python installation! I run a x64 windows but I use x32 python for compatibility with my Oracle driver - so I needed to install the 32-bit module. </p>
| 0 | 2016-08-16T15:39:40Z | [
"python",
"python-3.x",
"python-3.4",
"cx-freeze",
"fuzzywuzzy"
] |
How to create pandas dataframe variable/column based on two or more other variables? | 38,978,461 | <p>I have a pandas dataframe, e.g.:</p>
<pre><code>Col1 Col2
A 1
B 2
C 3
</code></pre>
<p>I understand how to create a Col3 based on say the value of Col2:</p>
<pre><code>df['Col3'] = (df['Col2'] <= 1).astype(int)
</code></pre>
<p>But ... How about if the new column was based on two variables, as in (pseudocode here):</p>
<pre><code>if Col2=1 and Col3=1 then Col4='X'
else if Col2=1 and Col3=2 then Col4='Y'
else Col4='Z'
</code></pre>
<p>how would that be achieved? many thanks</p>
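<p>Stated in plain Python, the pseudocode above is just a chained conditional applied per row (a non-vectorised sketch of the same logic, with made-up sample rows):</p>

```python
def col4(col2, col3):
    # Direct translation of the pseudocode in the question.
    if col2 == 1 and col3 == 1:
        return 'X'
    elif col2 == 1 and col3 == 2:
        return 'Y'
    return 'Z'

rows = [(1, 1), (1, 2), (3, 4)]   # hypothetical (Col2, Col3) pairs
print([col4(c2, c3) for c2, c3 in rows])  # -> ['X', 'Y', 'Z']
```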
| 3 | 2016-08-16T15:08:32Z | 38,978,508 | <p>You can try double <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a>:</p>
<pre><code>df['Col4'] = np.where((df['Col2'] == 1) & (df['Col3'] == 1), 'X',
                       np.where((df['Col2'] == 1) & (df['Col3'] == 2), 'Y', 'Z'))
</code></pre>
<p>Sample:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Col2': {0: 1, 1: 1, 2: 3},
                   'Col1': {0: 'A', 1: 'B', 2: 'C'},
                   'Col3': {0: 1, 1: 2, 2: 4}})
print (df)
Col1 Col2 Col3
0 A 1 1
1 B 1 2
2 C 3 4
df['Col4'] = np.where( (df['Col2'] == 1) & (df['Col3'] == 1), 'X',
                        np.where((df['Col2'] == 1) & (df['Col3'] == 2), 'Y', 'Z'))
print (df)
Col1 Col2 Col3 Col4
0 A 1 1 X
1 B 1 2 Y
2 C 3 4 Z
</code></pre>
<hr>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow"><code>loc</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a> for fill <code>NaN</code> all other values:</p>
<pre><code>df.loc[ (df['Col2'] == 1) & (df['Col3'] == 1) , 'Col4'] = 'X'
df.loc[ (df['Col2'] == 1) & (df['Col3'] == 2) , 'Col4'] = 'Y'
df['Col4'] = df['Col4'].fillna('Z')
print (df)
Col1 Col2 Col3 Col4
0 A 1 1 X
1 B 1 2 Y
2 C 3 4 Z
</code></pre>
| 1 | 2016-08-16T15:11:32Z | [
"python",
"pandas",
"dataframe"
] |
How to create pandas dataframe variable/column based on two or more other variables? | 38,978,461 | <p>I have a pandas dataframe, e.g.:</p>
<pre><code>Col1 Col2
A 1
B 2
C 3
</code></pre>
<p>I understand how to create a Col3 based on say the value of Col2:</p>
<pre><code>df['Col3'] = (df['Col2'] <= 1).astype(int)
</code></pre>
<p>But ... How about if the new column was based on two variables, as in (pseudocode here):</p>
<pre><code>if Col2=1 and Col3=1 then Col4='X'
else if Col2=1 and Col3=2 then Col4='Y'
else Col4='Z'
</code></pre>
<p>how would that be achieved? many thanks</p>
| 3 | 2016-08-16T15:08:32Z | 38,978,774 | <p>You can initialize the column with your final <code>else</code> value (e.g. <code>Z</code>) and then check each condition:</p>
<pre><code>df['Col4'] = 'Z'
df.loc[(df.Col1 == 1) & (df.Col3 == 1), 'Col4'] = 'X'
df.loc[(df.Col2 == 1) & (df.Col3 == 2), 'Col4'] = 'Y'
</code></pre>
| 0 | 2016-08-16T15:24:49Z | [
"python",
"pandas",
"dataframe"
] |
Select preceding/following-sibling XPath | 38,978,641 | <p>I am using Selenium with Python, and I want to select the HTML before each hr tag.
Here's the code I have:</p>
<pre><code><div id="wikipage">
<div id="wikipage-inner">
<h1>Berkeley</h1>
<p><span><strong>Title1</strong></span></p>
<p><strong>Address: </strong>..</p>
<p><strong>Website: </strong><a href="..">..</a></p>
<p><strong>Phone: </strong>..</p>
<hr />
<p><strong><span>Title2</span></strong></p>
<p><strong>Address: </strong>..</p>
<p><strong>Website:</strong> <a href="..">..</a></p>
<p><strong>Phone:</strong> ..</p>
<p><strong>Email:</strong> <a href="mailto:..">..</a></p>
<hr />
</div>
</div>
</code></pre>
<p>I am using regex to extract title-address-website-phone-email .. into a csv file, so I need the text before each hr tag in the whole web page.
The result will be a list, something like this </p>
<pre><code>This is a text before hr: Title1 Adress: .. Website: .. Phone: ..
This is a text before hr: Title2 Adress ..
</code></pre>
<p>when writing:</p>
<pre><code>for p in parag:
    print('This is a text before hr: ', p.text)
</code></pre>
<p>I will appreciate some help guys.</p>
| -1 | 2016-08-16T15:18:36Z | 38,992,521 | <h3>If you have fixed numbers of <code><p></code> nodes you can try this xpath:</h3>
<pre><code>//hr[x]/preceding-sibling::p[position()<=y]
</code></pre>
<p>Where <strong>x</strong> is the position of your <code><hr/></code> tag and <strong>y</strong> is the number of <code><p></code> tags before the <code><hr/></code></p>
<p>So, for example, if I want to select all 5 <code><p></code> nodes before the second <code><hr/></code>, I will use this XPath:</p>
<pre><code>//hr[2]/preceding-sibling::p[position()<=5]
</code></pre>
<h3>If you don't have a fixed number of <code><p></code> tags, you have to use a more complicated XPath:</h3>
<pre><code>//hr[x]/preceding-sibling::p[position()<=count(//hr[x]/preceding-sibling::p) - count(//hr[y]/preceding-sibling::p)]
</code></pre>
<p>Where <strong>x</strong> is the position of the bottom <code><hr/></code> tag and <strong>y</strong> is the position of the top <code><hr/></code> tag. </p>
<p>So to select the same nodes as in the first example, you have to use this XPath:</p>
<pre><code>//hr[2]/preceding-sibling::p[position()<=count(//hr[2]/preceding-sibling::p) - count(//hr[1]/preceding-sibling::p)]
</code></pre>
<p>With this, I selected all <code><p></code> tags between the first <code><hr/></code> and the second <code><hr/></code>.</p>
| 1 | 2016-08-17T09:15:22Z | [
"python",
"selenium",
"xpath"
] |
How to protect an object using a lock in Python? | 38,978,652 | <p>I've come across functionality which required the following pattern:</p>
<pre><code>from threading import Lock
the_list = []
the_list_lock = Lock()
</code></pre>
<p>and to use it:</p>
<pre><code>with the_list_lock:
    the_list.append("New Element")
</code></pre>
<p>Unfortunately, this does not require me to acquire the lock, I could just access the object directly. I would like some protection against that (I'm only human.) Is there a standard way of doing this? My own approach is to create a <code>HidingLock</code> class that can be used like this:</p>
<pre><code>the_list = HidingLock([])
with the_list as l:
    l.append("New Element")
</code></pre>
<p>But it feels so basic that either it should exist in the standard library or it's a very unconventional way to use locks.</p>
| 1 | 2016-08-16T15:19:10Z | 38,978,949 | <p>My current solution (the one I talk about in the question) looks like this:</p>
<pre><code>import threading
class HidingLock(object):
    def __init__(self, obj, lock=None):
        self.lock = lock or threading.RLock()
        self._obj = obj

    def __enter__(self):
        self.lock.acquire()
        return self._obj

    def __exit__(self, exc_type, exc_value, traceback):
        self.lock.release()

    def set(self, obj):
        with self:
            self._obj = obj
| 0 | 2016-08-16T15:31:50Z | [
"python",
"multithreading",
"locks"
] |
How to protect an object using a lock in Python? | 38,978,652 | <p>I've come across functionality which required the following pattern:</p>
<pre><code>from threading import Lock
the_list = []
the_list_lock = Lock()
</code></pre>
<p>and to use it:</p>
<pre><code>with the_list_lock:
    the_list.append("New Element")
</code></pre>
<p>Unfortunately, this does not require me to acquire the lock, I could just access the object directly. I would like some protection against that (I'm only human.) Is there a standard way of doing this? My own approach is to create a <code>HidingLock</code> class that can be used like this:</p>
<pre><code>the_list = HidingLock([])
with the_list as l:
    l.append("New Element")
</code></pre>
<p>But it feels so basic that either it should exist in the standard library or it's a very unconventional way to use locks.</p>
| 1 | 2016-08-16T15:19:10Z | 38,980,569 | <p>I think the reason there's nothing in the standard library is because for it to be there it would need to make cast iron access guarantees. To provide anything less would give a <strong>false sense of security</strong> that could lead to just as many concurrency issues.</p>
<p>It's also nearly impossible to make these guarantees without making substantial performance sacrifices. As such, it is left up to the user to consider how they will manage concurrency issues. This is in line with one of Python's philosophies of "we're all consenting adults". That is, if you're writing a class, I think it's reasonable that you should know for which attributes you need to acquire a lock before access. Or, if you're really that concerned, write a wrapper/proxy class that controls all access to the underlying object.</p>
<p>With your example there are a number of ways in which the target object could accidentally escape. If the programmer isn't paying enough attention to the code they're writing/maintaining, then this <code>HiddenLock</code> could provide that false sense of security. For instance:</p>
<pre><code>with the_lock as obj:
    pass
obj.func() # erroneous

with the_lock as obj:
    return obj.func() # possibly erroneous
    # What if the return value of `func' contains a self reference?

with the_lock as obj:
    obj_copy = obj[:]
obj_copy[0] = 2 # erroneous?
</code></pre>
<p>This last one is particularly pernicious. Whether this code is thread safe depends not on the code within the with block, or even the code after the block. Instead, it is the implementation of the class of <code>obj</code> that will mean this code is thread safe or not. For instance, if <code>obj</code> is a <code>list</code> then this is safe as <code>obj[:]</code> creates a copy. However, if <code>obj</code> is a <code>numpy.ndarray</code> then <code>obj[:]</code> creates a view and so the operation is unsafe. </p>
<p>Actually, if the contents of <code>obj</code> are mutable, then this could be unsafe regardless (e.g. <code>obj_copy[0].mutate()</code>).</p>
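<p>The mutable-contents caveat does not need NumPy to show up. A shallow copy shares its inner objects, so mutating an element of the copy is visible through the original:</p>

```python
obj = [[1], [2], [3]]
obj_copy = obj[:]        # shallow copy: new outer list, same inner lists

obj_copy[0].append(99)   # mutate an element of the "copy"
print(obj[0])            # the change is visible through the original too
```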
| 3 | 2016-08-16T16:59:16Z | [
"python",
"multithreading",
"locks"
] |
How to protect an object using a lock in Python? | 38,978,652 | <p>I've come across functionality which required the following pattern:</p>
<pre><code>from threading import Lock
the_list = []
the_list_lock = Lock()
</code></pre>
<p>and to use it:</p>
<pre><code>with the_list_lock:
    the_list.append("New Element")
</code></pre>
<p>Unfortunately, this does not require me to acquire the lock, I could just access the object directly. I would like some protection against that (I'm only human.) Is there a standard way of doing this? My own approach is to create a <code>HidingLock</code> class that can be used like this:</p>
<pre><code>the_list = HidingLock([])
with the_list as l:
    l.append("New Element")
</code></pre>
<p>But it feels so basic that either it should exist in the standard library or it's a very unconventional way to use locks.</p>
| 1 | 2016-08-16T15:19:10Z | 38,983,084 | <p>What about creating a <code>shared_list</code> that has a <code>list</code> and implements the desired class methods using a <code>threading.Lock</code>:</p>
<pre><code>import threading
class SharedList(object):
    def __init__(self, iterable=None):
        if iterable is not None:
            self.list = list(iterable)
        else:
            self.list = list()
        self.lock = threading.Lock()
        self.index = None

    def append(self, x):
        with self.lock:
            self.list.append(x)

    def __iter__(self):
        shared_iterator = SharedList()
        shared_iterator.list = self.list
        shared_iterator.lock = self.lock
        shared_iterator.index = 0
        return shared_iterator

    def next(self):
        with self.lock:
            if self.index < len(self.list):
                result = self.list[self.index]
                self.index += 1
            else:
                raise StopIteration
        return result

    # Override other methods

if __name__ == '__main__':
    shared_list = SharedList()
    for x in range(1, 4):
        shared_list.append(x)
    for entry in shared_list:
        print entry
</code></pre>
<p><strong>Output</strong></p>
<pre><code>1
2
3
</code></pre>
<p>As <a href="http://stackoverflow.com/users/24587/georg-sch%c3%b6lly">Georg Shölly</a> pointed out in the comments, this would require a lot of work to implement every method. However, if all you need is a list you can append to and then iterate over, this example provides the starting point.</p>
<p>Then you can just write</p>
<pre><code>the_list = SharedList()
the_list.append("New Element")
</code></pre>
| 0 | 2016-08-16T19:35:05Z | [
"python",
"multithreading",
"locks"
] |
Comparing two lists and returning the indices of the matched items with Python | 38,978,662 | <p>I have two lists</p>
<pre><code>a = [1.1, 2.2, 5.6, 7.8,7.8, 8.6,10.2]
b = [2.2, 1.4, 1.99, 7.88, 7.8]
</code></pre>
<p>I want to compare the two lists and get the indices, with reference to list a, of the entries whose values also occur in list b. There can be multiple hits in list a.</p>
<p>The result is:</p>
<pre><code>c= [1,3,4] # with reference to a as 2.2 occur at location 1, 7.8 at location 3 and 4.
</code></pre>
<p>I found a similar question, but in that case the multiple hits were not captured, and the first accepted answer does not print the indices! There is no print in the loop.</p>
<p><a href="http://stackoverflow.com/questions/10367020/compare-two-lists-in-python-and-return-indices-of-matched-values">compare two lists in python and return indices of matched values</a></p>
<p>regards,</p>
| 0 | 2016-08-16T15:19:46Z | 38,978,754 | <p>You can create a utility dictionary mapping the items to the list of positions in the <code>a</code> list:</p>
<pre><code>>>> from collections import defaultdict
>>>
>>> a = [1.1, 2.2, 5.6, 7.8,7.8, 8.6,10.2]
>>> b = [2.2, 1.4, 1.99, 7.88, 7.8]
>>>
>>> d = defaultdict(list)
>>> for index, item in enumerate(a):
...     d[item].append(index)
...
>>> [index for item in b for index in d[item] if item in d]
[1, 3, 4]
</code></pre>
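<p>The same approach wrapped as a standalone function (Python 3 syntax here; the membership test is also moved before the lookup so the <code>defaultdict</code> is not grown as a side effect of the comprehension):</p>

```python
from collections import defaultdict

def matched_indices(a, b):
    # Map each value of `a` to every position where it occurs.
    positions = defaultdict(list)
    for index, item in enumerate(a):
        positions[item].append(index)
    # For each value of `b` present in `a`, emit all of its positions.
    return [i for item in b if item in positions for i in positions[item]]

a = [1.1, 2.2, 5.6, 7.8, 7.8, 8.6, 10.2]
b = [2.2, 1.4, 1.99, 7.88, 7.8]
print(matched_indices(a, b))  # -> [1, 3, 4]
```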
| 2 | 2016-08-16T15:23:52Z | [
"python",
"list",
"comparison"
] |
Comparing two lists and returning the indices of the matched items with Python | 38,978,662 | <p>I have two lists</p>
<pre><code>a = [1.1, 2.2, 5.6, 7.8,7.8, 8.6,10.2]
b = [2.2, 1.4, 1.99, 7.88, 7.8]
</code></pre>
<p>I want to compare the two lists and get the indices, with reference to list a, of the entries whose values also occur in list b. There can be multiple hits in list a.</p>
<p>The result is:</p>
<pre><code>c= [1,3,4] # with reference to a as 2.2 occur at location 1, 7.8 at location 3 and 4.
</code></pre>
<p>I found a similar question, but in that case the multiple hits were not captured, and the first accepted answer does not print the indices! There is no print in the loop.</p>
<p><a href="http://stackoverflow.com/questions/10367020/compare-two-lists-in-python-and-return-indices-of-matched-values">compare two lists in python and return indices of matched values</a></p>
<p>regards,</p>
| 0 | 2016-08-16T15:19:46Z | 38,978,880 | <pre><code>checker = {}
for i, item in enumerate(a):
    checker[item] = checker.get(item, []) + [i]

reduce(lambda x, y: x + y, [checker[i] for i in b if i in checker])
[1, 3, 4]
</code></pre>
| 0 | 2016-08-16T15:29:26Z | [
"python",
"list",
"comparison"
] |
Comparing two lists and returning the indices of the matched items with Python | 38,978,662 | <p>I have two lists</p>
<pre><code>a = [1.1, 2.2, 5.6, 7.8,7.8, 8.6,10.2]
b = [2.2, 1.4, 1.99, 7.88, 7.8]
</code></pre>
<p>I want to compare the two lists and get the indices, with reference to list a, of the entries whose values also occur in list b. There can be multiple hits in list a.</p>
<p>The result is:</p>
<pre><code>c= [1,3,4] # with reference to a as 2.2 occur at location 1, 7.8 at location 3 and 4.
</code></pre>
<p>I found a similar question, but in that case the multiple hits were not captured, and the first accepted answer does not print the indices! There is no print in the loop.</p>
<p><a href="http://stackoverflow.com/questions/10367020/compare-two-lists-in-python-and-return-indices-of-matched-values">compare two lists in python and return indices of matched values</a></p>
<p>regards,</p>
| 0 | 2016-08-16T15:19:46Z | 38,980,550 | <p>A variation on the other answers. My first thought was to turn <code>b</code> into a set then test for membership - sets are good for membership testing.</p>
<pre><code>>>> a = [1.1, 2.2, 5.6, 7.8,7.8, 8.6,10.2]
>>> b = [2.2, 1.4, 1.99, 7.88, 7.8]
>>>
>>> b = set(b)
>>> c = [index for index, item in enumerate(a) if item in b]
>>> print(c)
[1, 3, 4]
>>>
</code></pre>
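The same approach wrapped in a reusable function, outside the REPL (the function name is invented here):

```python
def matched_indices(a, b):
    lookup = set(b)  # set membership tests are O(1) on average
    return [index for index, item in enumerate(a) if item in lookup]

print(matched_indices([1.1, 2.2, 5.6, 7.8, 7.8, 8.6, 10.2],
                      [2.2, 1.4, 1.99, 7.88, 7.8]))  # [1, 3, 4]
```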
| 2 | 2016-08-16T16:57:59Z | [
"python",
"list",
"comparison"
] |
Python how to delete list of items with a blacklist not to delete | 38,978,687 | <p>I have a utility to delete a list of projects, but I want to know how I can add a black list filter of projects not to delete.</p>
<p>This is what I have right now, I run the script a few times changing the if "Languages" in project: line to delete different projects.</p>
<pre><code>def delete_projects():
    projects = get_projects()
    # black_list = [some list of projects that I would like to delete but don't have an exact file name (i.e. "order-*")]
    for project in projects:
        if "Languages" in project:
            delete_project(project)
</code></pre>
<p>I would like to make it so I can just get the list of projects and use the black_list to check for projects with names LIKE xyz* not to delete. How can I do something like this?</p>
<p>Thanks!</p>
<p>Update: this is just my current thought of implementation. Would it be better to implement it with a regular expression and deleting the project that does not match the regular expression? I would need help with the regular expression if that is the way to go.</p>
| 0 | 2016-08-16T15:20:59Z | 38,978,753 | <p>It's relatively simple to create a new list containing only the elements whose names are not in the blacklist:</p>
<pre><code>projects = [project for project in projects if project not in blacklist]
</code></pre>
<p>When the blacklist contains patterns, however, the condition needs to be more complex. One way to exclude projects that match any blacklist pattern would be</p>
<pre><code>projects = [p for p in projects if not any(patt.match(p) for patt in blacklist)]
</code></pre>
<p>This will drop every project that matches at least one of the patterns and keep the rest.</p>
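One concrete, runnable way to apply a pattern blacklist that protects matching names (the project names below are invented for illustration): every project that matches no pattern is selected for deletion, so blacklisted patterns survive.

```python
import re

# Hypothetical blacklist: patterns for project names that must NOT be deleted
blacklist = [re.compile(p) for p in (r"order-.*", r"Languages.*")]

projects = ["order-123", "Languages-en", "legacy-app", "demo"]
to_delete = [p for p in projects if not any(patt.match(p) for patt in blacklist)]
print(to_delete)  # ['legacy-app', 'demo']
```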
| 2 | 2016-08-16T15:23:50Z | [
"python"
] |
Python how to delete list of items with a blacklist not to delete | 38,978,687 | <p>I have a utility to delete a list of projects, but I want to know how I can add a black list filter of projects not to delete.</p>
<p>This is what I have right now, I run the script a few times changing the if "Languages" in project: line to delete different projects.</p>
<pre><code>def delete_projects():
    projects = get_projects()
    # black_list = [some list of projects that I would like to delete but don't have an exact file name (i.e. "order-*")]
    for project in projects:
        if "Languages" in project:
            delete_project(project)
</code></pre>
<p>I would like to make it so I can just get the list of projects and use the black_list to check for projects with names LIKE xyz* not to delete. How can I do something like this?</p>
<p>Thanks!</p>
<p>Update: this is just my current thought of implementation. Would it be better to implement it with a regular expression and deleting the project that does not match the regular expression? I would need help with the regular expression if that is the way to go.</p>
| 0 | 2016-08-16T15:20:59Z | 38,978,772 | <p>You could use Python's <a href="https://docs.python.org/2/library/functions.html#any" rel="nofollow"><code>any</code></a>:</p>
<pre><code>BLACKLIST = {'languages'}
def delete_projects():
    projects = get_projects()
    for project in projects:
        if any(term in project for term in BLACKLIST):
            delete_project(project)
</code></pre>
<p>As an aside, I would highly recommend first running the code with <code>delete_project(project)</code> line commented out and replaced with a print of a project string representation to make sure you're deleting the right projects before doing it for real ;)</p>
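That dry-run advice could look like this (a sketch; <code>find_deletable</code> is a name invented here, and <code>lower()</code> is added so the lowercase <code>'languages'</code> entry also matches names like <code>"Languages"</code>):

```python
BLACKLIST = {'languages'}

def find_deletable(projects):
    # Return the candidates first, so they can be printed and verified
    # before delete_project() is ever called on them
    return [p for p in projects if any(term in p.lower() for term in BLACKLIST)]

print(find_deletable(["Languages-v1", "core", "order-7"]))  # ['Languages-v1']
```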
| 0 | 2016-08-16T15:24:42Z | [
"python"
] |
Python how to delete list of items with a blacklist not to delete | 38,978,687 | <p>I have a utility to delete a list of projects, but I want to know how I can add a black list filter of projects not to delete.</p>
<p>This is what I have right now, I run the script a few times changing the if "Languages" in project: line to delete different projects.</p>
<pre><code>def delete_projects():
    projects = get_projects()
    # black_list = [some list of projects that I would like to delete but don't have an exact file name (i.e. "order-*")]
    for project in projects:
        if "Languages" in project:
            delete_project(project)
</code></pre>
<p>I would like to make it so I can just get the list of projects and use the black_list to check for projects with names LIKE xyz* not to delete. How can I do something like this?</p>
<p>Thanks!</p>
<p>Update: this is just my current thought of implementation. Would it be better to implement it with a regular expression and deleting the project that does not match the regular expression? I would need help with the regular expression if that is the way to go.</p>
| 0 | 2016-08-16T15:20:59Z | 38,978,846 | <p>You can simply use a <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a> like this:</p>
<pre><code>projects = [p for p in get_projects() if p in to_keep]
</code></pre>
<p>If you want to select the objects that are not in a list simply do:</p>
<pre><code>projects = [p for p in get_projects() if p not in to_exclude]
</code></pre>
<p>Note that this will work with exact matches. If you want to handle substrings, you can do this:</p>
<pre><code>projects = [p for p in get_projects() if not any(substr in p for substr in to_exclude)]
</code></pre>
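A quick comparison of the exact-match and substring variants on made-up names:

```python
projects = ["order-1", "order-2", "Languages", "core"]
to_exclude = ["order", "Languages"]

# Exact names only: "order-1" survives because it is not literally "order"
exact = [p for p in projects if p not in to_exclude]
# Substring matching: anything containing an excluded term is dropped
substr = [p for p in projects if not any(s in p for s in to_exclude)]

print(exact)   # ['order-1', 'order-2', 'core']
print(substr)  # ['core']
```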
| 0 | 2016-08-16T15:27:58Z | [
"python"
] |
Python how to delete list of items with a blacklist not to delete | 38,978,687 | <p>I have a utility to delete a list of projects, but I want to know how I can add a black list filter of projects not to delete.</p>
<p>This is what I have right now, I run the script a few times changing the if "Languages" in project: line to delete different projects.</p>
<pre><code>def delete_projects():
    projects = get_projects()
    # black_list = [some list of projects that I would like to delete but don't have an exact file name (i.e. "order-*")]
    for project in projects:
        if "Languages" in project:
            delete_project(project)
</code></pre>
<p>I would like to make it so I can just get the list of projects and use the black_list to check for projects with names LIKE xyz* not to delete. How can I do something like this?</p>
<p>Thanks!</p>
<p>Update: this is just my current thought of implementation. Would it be better to implement it with a regular expression and deleting the project that does not match the regular expression? I would need help with the regular expression if that is the way to go.</p>
| 0 | 2016-08-16T15:20:59Z | 38,979,009 | <pre><code>import re
black_list_of_regex = ['order-.*', 'normal_name']
print([project for project in projects for reg in black_list_of_regex if re.match(reg, project)])
</code></pre>
<p>Remember that <code>re.match</code> already anchors at the start of the string, so if you want to match the full name you only need to anchor the end with <code>$</code> (or use <code>re.fullmatch</code>, available since Python 3.4) in your regular expressions</p>
<p>Hope it helps</p>
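For instance, since <code>re.match</code> anchors at the start, only the end needs <code>$</code> for a full-name match (names invented for the demo):

```python
import re

names = ["normal_name", "normal_name_2", "order-1"]

loose = [n for n in names if re.match(r"normal_name", n)]    # prefix match
strict = [n for n in names if re.match(r"normal_name$", n)]  # whole-string match

print(loose)   # ['normal_name', 'normal_name_2']
print(strict)  # ['normal_name']
```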
| 0 | 2016-08-16T15:34:18Z | [
"python"
] |
Python how to delete list of items with a blacklist not to delete | 38,978,687 | <p>I have a utility to delete a list of projects, but I want to know how I can add a black list filter of projects not to delete.</p>
<p>This is what I have right now, I run the script a few times changing the if "Languages" in project: line to delete different projects.</p>
<pre><code>def delete_projects():
    projects = get_projects()
    # black_list = [some list of projects that I would like to delete but don't have an exact file name (i.e. "order-*")]
    for project in projects:
        if "Languages" in project:
            delete_project(project)
</code></pre>
<p>I would like to make it so I can just get the list of projects and use the black_list to check for projects with names LIKE xyz* not to delete. How can I do something like this?</p>
<p>Thanks!</p>
<p>Update: this is just my current thought of implementation. Would it be better to implement it with a regular expression and deleting the project that does not match the regular expression? I would need help with the regular expression if that is the way to go.</p>
| 0 | 2016-08-16T15:20:59Z | 38,979,126 | <p>If <code>projects</code> are strings, and you have a <code>blacklist</code> then you can do:</p>
<pre><code>set(projects) - set(blacklist)
</code></pre>
<p>You can create the black list by:</p>
<pre><code>blacklist = [project for project in projects if 'Languages' in project]
</code></pre>
<p><strong>Another</strong> option, without a blacklist</p>
<pre><code>filter(lambda project: "Languages" not in project, projects)
</code></pre>
<p><strong>EDIT</strong></p>
<p>If you need to save projects that have a certain pattern, I would go with a regexp:</p>
<pre><code>import re
pattern = '^XYZ.*'
projects = [project for project in projects if re.search(pattern, project)]
</code></pre>
<p>And if you have a blacklist of words, then you can do:</p>
<pre><code>projects = [project for project in projects if any(pat in project for pat in blacklist)]
</code></pre>
| 0 | 2016-08-16T15:40:18Z | [
"python"
] |
ValueError: invalid literal for int() with base 10: - CharField | 38,978,761 | <p>I'm trying to add an object to my database with models. But getting an error. As I understand <code>CharField</code> are strings? And the error complains that it's an invalid integer?</p>
<pre><code>from django.shortcuts import HttpResponse
from Cr.models import User
def index(request):
    a = User('Ra', 'sen')
    a.save()
    print(a.username)
    return HttpResponse('<h1>Hello World!</h1>')
</code></pre>
<p>Models file</p>
<pre><code>from django.db import models
class User(models.Model):
    username = models.CharField(max_length=250)
    password = models.CharField(max_length=100)
</code></pre>
<p>console print:</p>
<pre><code>Internal Server Error: /Crowd/
Traceback (most recent call last):
File "C:\Anaconda3\Lib\site-packages\django\core\handlers\base.py", line 149, in get_response
response = self.process_exception_by_middleware(e, request)
File "C:\Anaconda3\Lib\site-packages\django\core\handlers\base.py", line 147, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\Rasmus\workspace\Crowd\src\Cr\views.py", line 7, in index
a.save()
File "C:\Anaconda3\Lib\site-packages\django\db\models\base.py", line 708, in save
force_update=force_update, update_fields=update_fields)
File "C:\Anaconda3\Lib\site-packages\django\db\models\base.py", line 736, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "C:\Anaconda3\Lib\site-packages\django\db\models\base.py", line 801, in _save_table
forced_update)
File "C:\Anaconda3\Lib\site-packages\django\db\models\base.py", line 831, in _do_update
filtered = base_qs.filter(pk=pk_val)
File "C:\Anaconda3\Lib\site-packages\django\db\models\query.py", line 790, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "C:\Anaconda3\Lib\site-packages\django\db\models\query.py", line 808, in _filter_or_exclude
clone.query.add_q(Q(*args, **kwargs))
File "C:\Anaconda3\Lib\site-packages\django\db\models\sql\query.py", line 1243, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "C:\Anaconda3\Lib\site-packages\django\db\models\sql\query.py", line 1269, in _add_q
allow_joins=allow_joins, split_subq=split_subq,
File "C:\Anaconda3\Lib\site-packages\django\db\models\sql\query.py", line 1203, in build_filter
condition = self.build_lookup(lookups, col, value)
File "C:\Anaconda3\Lib\site-packages\django\db\models\sql\query.py", line 1099, in build_lookup
return final_lookup(lhs, rhs)
File "C:\Anaconda3\Lib\site-packages\django\db\models\lookups.py", line 19, in __init__
self.rhs = self.get_prep_lookup()
File "C:\Anaconda3\Lib\site-packages\django\db\models\lookups.py", line 57, in get_prep_lookup
return self.lhs.output_field.get_prep_lookup(self.lookup_name, self.rhs)
File "C:\Anaconda3\Lib\site-packages\django\db\models\fields\__init__.py", line 744, in get_prep_lookup
return self.get_prep_value(value)
File "C:\Anaconda3\Lib\site-packages\django\db\models\fields\__init__.py", line 976, in get_prep_value
return int(value)
ValueError: invalid literal for int() with base 10: 'Ra'
[16/Aug/2016 17:21:21] "GET /Crowd/ HTTP/1.1" 500 134020
</code></pre>
| 0 | 2016-08-16T15:24:20Z | 38,978,813 | <p><code>a = User('Ra', 'sen')</code> </p>
<p>The first positional argument is used for the <code>pk</code>. When creating a model you should use keyword arguments:</p>
<p><code>a = User(username='Ra', password='sen')</code></p>
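A plain-Python stand-in (not Django) that mimics the field order of the model above makes the failure mode visible: positional arguments fill the implicit <code>id</code> primary key first.

```python
class FakeUser(object):
    # Field order mirrors the Django model: the auto "id" pk comes first,
    # then username and password in declaration order
    def __init__(self, id=None, username=None, password=None):
        self.id, self.username, self.password = id, username, password

u = FakeUser('Ra', 'sen')   # positional: 'Ra' lands in the pk slot!
print(u.id, u.username)     # Ra sen

u = FakeUser(username='Ra', password='sen')
print(u.id, u.username)     # None Ra
```

With the positional call, saving then tries <code>int('Ra')</code> for the integer pk, which is exactly the <code>ValueError</code> in the traceback.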
| 3 | 2016-08-16T15:26:41Z | [
"python",
"django"
] |
Referencing keys in a Python dictionary for CSV reader | 38,978,769 | <p>new to python and trying to build a simple CSV reader to create new trades off an existing instrument. Ideally, I'd like to build a dictionary to simplify the parameters required to set up a new trade (instead of using row[1], [2], [3], etc, I'd like to replace with my headers that read Value Date, Trade Date, Price, Quantity, etc.)</p>
<p>I've created dictionary keys below, but am having trouble linking them to my script to create the new trade. What should I put to substitute the rows? Any advice appreciated! Thanks...</p>
<p>Code below:</p>
<pre><code>import acm
import csv
# Opening CSV file
with open('C:\Users\Yina.Huang\Desktop\export\TradeBooking.csv', 'rb') as f:
    reader = csv.DictReader(f, delimiter=',')
    next(reader, None)
    for row in reader:
        # Match column header with column number
        d = {
            row["Trade Time"],
            row["Value Day"],
            row["Acquire Day"],
            row["Instrument"],
            row["Price"],
            row["Quantity"],
            row["Counterparty"],
            row["Acquirer"],
            row["Trader"],
            row["Currency"],
            row["Portfolio"],
            row["Status"]
        }
        NewTrade = acm.FTrade()
        NewTrade.TradeTime = "8/11/2016 12:00:00 AM"
        NewTrade.ValueDay = "8/13/2016"
        NewTrade.AcquireDay = "8/13/2016"
        NewTrade.Instrument = acm.FInstrument[row["Instrument"]]
        NewTrade.Price = row[4]
        NewTrade.Quantity = row[5]
        NewTrade.Counterparty = acm.FParty[row[6]]
        NewTrade.Acquirer = acm.FParty[row[7]]
        NewTrade.Trader = acm.FUser[row[8]]
        NewTrade.Currency = acm.FCurrency[row[9]]
        NewTrade.Portfolio = acm.FPhysicalPortfolio[row[10]]
        NewTrade.Premium = (int(row[4])*int(row[5]))
        NewTrade.Status = row[11]
        print NewTrade
        NewTrade.Commit()
</code></pre>
| 1 | 2016-08-16T15:24:35Z | 38,979,695 | <p>The <code>csv</code> module already provides this functionality with the <code>csv.DictReader</code> object. </p>
<pre><code>with open('C:\Users\Yina.Huang\Desktop\export\TradeBooking.csv', 'rb') as f:
    reader = csv.DictReader(f)
    for row in reader:
        NewTrade = acm.FTrade()
        NewTrade.TradeTime = row['Trade Time']
        NewTrade.ValueDay = row['Value Day']
        NewTrade.AcquireDay = row['Acquire Day']
        NewTrade.Instrument = acm.FInstrument[row['Instrument']]
        NewTrade.Price = row['Price']
        NewTrade.Quantity = row['Quantity']
        # etc
</code></pre>
<p>From the <a href="https://docs.python.org/2/library/csv.html#csv.DictReader" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>Create an object which operates like a regular reader but maps the
information read into a dict whose keys are given by the optional
fieldnames parameter. The fieldnames parameter is a sequence whose
elements are associated with the fields of the input data in order.
These elements become the keys of the resulting dictionary. If the
fieldnames parameter is omitted, the values in the first row of the
csvfile will be used as the fieldnames. If the row read has more
fields than the fieldnames sequence, the remaining data is added as a
sequence keyed by the value of restkey. If the row read has fewer
fields than the fieldnames sequence, the remaining keys take the value
of the optional restval parameter.</p>
</blockquote>
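A self-contained demo of that behavior, using an in-memory file with the question's header names:

```python
import csv
import io

# Stand-in for the real CSV file, with a header row supplying the field names
data = io.StringIO("Trade Time,Price,Quantity\n8/11/2016,101.5,20\n")

reader = csv.DictReader(data)
for row in reader:
    premium = float(row["Price"]) * int(row["Quantity"])
    print(row["Trade Time"], premium)  # 8/11/2016 2030.0
```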
| 1 | 2016-08-16T16:09:29Z | [
"python",
"csv",
"dictionary",
"key"
] |
Exporting data to csv python | 38,978,776 | <p>When I use the following code below, my data gets exported, but it's all in one column going all the way down. Any help?</p>
<pre><code>b = open('tester.csv', 'wb')
a = csv.writer(b)
while (count < x):
    tags = str(data['alerts'][count] ['tags']).replace("u\"","\"").replace("u\'","\'")
    a.writerows(strList)
</code></pre>
| 0 | 2016-08-16T15:24:55Z | 38,979,298 | <p>Given your data, I fail to see how this could go wrong, unless your data doesn't actually look like this. </p>
<pre><code>>>> data = [['hi', 'yes', 'bye', 'def'], ['hi', 'ast', 'cnx', 'vplex'], ['ever', 'as', 'no', 'qwerty', 'redi'], ['no', 'yes', 'qwerty'], ['redi', 'google'], ['redi', 'asdf', 'asdfef', 'wer'], ['redi', 'asd', 'rrr', 'www', 'qqq'], ['erfa', 'asdf', 'fef'], ['hi', 'dsa', 'f3e']]
>>> with open('my.csv','w') as f:
... for item in data:
... my_item = item[0]+','+item[1]+'\n'
... f.write(my_item)
...
</code></pre>
<p>pg my.csv</p>
<pre><code>hi,yes
hi,ast
ever,as
no,yes
redi,google
redi,asdf
redi,asd
erfa,asdf
hi,dsa
(EOF):
</code></pre>
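For completeness, the same output through the <code>csv</code> module itself. Note that <code>writerows</code> expects a sequence of rows; handing it a single string (as in the question's <code>a.writerows(strList)</code>) makes it treat each character as a row, which is one classic way to end up with one value per line:

```python
import csv
import io

rows = [['hi', 'yes'], ['hi', 'ast'], ['ever', 'as']]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(rows)  # one list per row -> one CSV line per row
print(buf.getvalue())
```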
| 0 | 2016-08-16T15:49:30Z | [
"python",
"csv",
"export-to-csv"
] |
Python memory when plotting figures in a loop | 38,978,920 | <p>I am trying to print a sequence of images in a code loop. I will ultimately need to print around 1000 to show how my system varies with time. I have reviewed the methods outlined in <a href="http://stackoverflow.com/questions/2364945/matplotlib-runs-out-of-memory-when-plotting-in-a-loop">Matplotlib runs out of memory when plotting in a loop</a> but I still can't make the code produce more than 96 images or so.</p>
<p>The code I am using in its stripped out form is below</p>
<pre><code>import numpy as np
import matplotlib as mpl
import os
def pltHM(graphname,graphtext,xAxis,yAxis,xMn,xMx,xCnt,yMn,yMx,yCnt,TCrt):
    plt = mpl.pyplot
    fig = plt.figure(figsize=(8,7), dpi=250)
    cmap = mpl.cm.jet
    norm = mpl.colors.Normalize(vmin=-3, vmax=3)
    X = np.linspace(xMn,xMx,xCnt)
    Y = np.linspace(yMn,yMx,yCnt)
    plt.xlabel(xAxis)
    plt.ylabel(yAxis)
    plt.pcolormesh(X,Y,TCrt, cmap=cmap,norm=norm)
    plt.grid(color='w')
    plt.suptitle(graphtext, fontsize=14)
    plt.colorbar()
    plt.savefig(graphname, transparent = True)
    plt.cla()
    plt.clf()
    plt.close(fig)
    del plt
    del fig
    return
</code></pre>
<p>This is used in a simple loop as shown below</p>
<pre><code>for loop1 in range(0,10):
    for loop2 in range(0,100):
        saveName = 'Test_Images/' + str(loop1) + '_' + str(loop2) + '.png'
        plotHeatMap(saveName,'Test','X','Y',-35,35,141,-30,30,121,Z)
</code></pre>
<p>Any advice on why the above is not releasing memory and causing the traceback message</p>
<p>RuntimeError: Could not allocate memory for image</p>
<p>Many thanks for any help provided</p>
| 0 | 2016-08-16T15:30:47Z | 38,980,600 | <p>Here is one stripped example of what you can do. As pointed out by Ajean, you should NOT import plt every time as you did! It is enough once. Also, do not delete the figure and create a new one...it is better to use the same figure and just replace the data. </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def plotHeatMap(fig, line, x, y, graphname):
    line.set_data(x, y)
    fig.canvas.draw()
    fig.savefig(graphname)

fig1, ax1 = plt.subplots(1, 1)
line, = ax1.plot([],[])
ax1.set_xlim(0, 1)
ax1.set_ylim(0, 1)

for loop1 in range(0, 2):
    for loop2 in range(0, 2):
        x = np.random.random(100)
        y = np.random.random(100)
        save_name = 'fig_'+str(loop1) + '_' + str(loop2) + '.png'
        plotHeatMap(fig1, line, x, y, save_name)
</code></pre>
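If a fresh figure per image is still preferred, the other half of the fix is to close each one so pyplot's figure registry releases it (a sketch using the non-interactive Agg backend):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: no GUI windows are created
import matplotlib.pyplot as plt

for i in range(3):
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, i])
    fig.savefig("fig_%d.png" % i, dpi=72)
    plt.close(fig)  # drop the figure from pyplot's registry, freeing its memory

print(len(plt.get_fignums()))  # 0 -> nothing left open
```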
| 1 | 2016-08-16T17:00:55Z | [
"python",
"matplotlib"
] |
Code exit status 255 | 38,978,925 | <p>I have written the below code for getting the only unique element in a given integer array.</p>
<pre><code>def lonelyinteger(a):
    for x in a:
        answer = a.count(x)
        if(a.count(x) < 2)
            answer=x
    return answer
if __name__ == '__main__':
    a = input()
    b = map(int, raw_input().strip().split(" "))
    print lonelyinteger(b)
</code></pre>
<p><strong>Error</strong></p>
<blockquote>
<p>File "solution.py", line 5
if(a.count(x) < 2)
^
SyntaxError: invalid syntax </p>
<p>Exit Status
255</p>
</blockquote>
<p>Please tell me where I went wrong</p>
| -1 | 2016-08-16T15:31:01Z | 38,978,954 | <p>You are missing the <code>:</code> at the end of that line.</p>
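Once the colon is added, the search itself can also be done more directly with <code>collections.Counter</code> (a sketch, not the only valid form):

```python
from collections import Counter

def lonely_integer(values):
    # Count every element once, then return the one that occurs a single time
    counts = Counter(values)
    for value, count in counts.items():
        if count == 1:
            return value

print(lonely_integer([1, 2, 3, 2, 1]))  # 3
```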
| 2 | 2016-08-16T15:32:03Z | [
"python"
] |
Code exit status 255 | 38,978,925 | <p>I have written the below code for getting the only unique element in a given integer array.</p>
<pre><code>def lonelyinteger(a):
    for x in a:
        answer = a.count(x)
        if(a.count(x) < 2)
            answer=x
    return answer
if __name__ == '__main__':
    a = input()
    b = map(int, raw_input().strip().split(" "))
    print lonelyinteger(b)
</code></pre>
<p><strong>Error</strong></p>
<blockquote>
<p>File "solution.py", line 5
if(a.count(x) < 2)
^
SyntaxError: invalid syntax </p>
<p>Exit Status
255</p>
</blockquote>
<p>Please tell me where I went wrong</p>
| -1 | 2016-08-16T15:31:01Z | 38,979,087 | <p>Correct code below this (your code modified):</p>
<pre><code>def lonelyinteger(a):
    # added the : that was missing at the end of the if line (syntax error)
    for x in a:
        answer = a.count(x)
        if(a.count(x) < 2):
            answer=x
    return answer
if __name__ == '__main__':
    a = input()
    b = map(int, raw_input().strip().split(" "))
    print lonelyinteger(b)
</code></pre>
| -1 | 2016-08-16T15:38:05Z | [
"python"
] |
Clear chatterbot database | 38,979,022 | <p>I'm trying to use Chatterbot (<a href="http://chatterbot.readthedocs.io/" rel="nofollow">http://chatterbot.readthedocs.io/</a>) for a simple chat AI, but I have a few problems.</p>
<p>I'm trying to create my own database for it, but it seems to be cached somewhere: I can't clear its database to completely replace it with my own questions/answers; it just keeps using the old ones alongside the new ones.</p>
<pre><code>chatbot = ChatBot("botName")
chatbot.set_trainer(ChatterBotCorpusTrainer)
# Train based on the english corpus
#chatbot.train("chatterbot.corpus.english")
#chatbot.set_trainer(ListTrainer)
file = codecs.open(os.path.join(realPath, 'data', 'skynet.json'), encoding='utf-8')
jsonData = json.load(file)
for value in jsonData.values():
    for conv in value:
        tm = []
        for line in conv:
            tm.append(line)
        print (tm)
        chatbot.train (conv)
</code></pre>
<p>Thanks for any help.</p>
| 0 | 2016-08-16T15:34:50Z | 38,979,326 | <p>Oh, stupid me. The file 'database.db' was under my nose, in the same folder as my python file.</p>
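For anyone hitting the same wall, a sketch of clearing that file before re-training (the path handling is illustrative; ChatterBot's own storage-adapter settings control where the file really lives):

```python
import os
import tempfile

def reset_database(db_path):
    # Delete the cached SQLite store so old statements don't survive re-training
    if os.path.exists(db_path):
        os.remove(db_path)
        return True
    return False

# Demo against a throwaway file standing in for chatterbot's database.db
path = os.path.join(tempfile.mkdtemp(), "database.db")
open(path, "w").close()
print(reset_database(path), reset_database(path))  # True False
```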
| 2 | 2016-08-16T15:50:46Z | [
"python"
] |
Multiple simultaneous HTTP requests | 38,979,024 | <p>I'm trying to take a list of items and check for their status change based on certain processing by the API. The list will be manually populated and can vary in number to several thousand.</p>
<p>I'm trying to write a script that makes multiple simultaneous connections to the API to keep checking for the status change. For each item, once the status changes, the attempts to check must stop. Based on reading other posts on Stackoverflow (Specifically, <a href="https://stackoverflow.com/questions/2632520/what-is-the-fastest-way-to-send-100-000-http-requests-in-python">What is the fastest way to send 100,000 HTTP requests in Python?</a> ), I've come up with the following code. But the script always stops after processing the list once. What am I doing wrong?</p>
<p>One additional issue I'm facing is that the keyboard interrupt handler never fires (I'm trying Ctrl+C, but it does not kill the script).</p>
<pre><code>from urlparse import urlparse
from threading import Thread
import httplib, sys
from Queue import Queue
requestURLBase = "https://example.com/api"
apiKey = "123456"
concurrent = 200
keepTrying = 1
def doWork():
    while keepTrying == 1:
        url = q.get()
        status, body, url = checkStatus(url)
        checkResult(status, body, url)
        q.task_done()

def checkStatus(ourl):
    try:
        url = urlparse(ourl)
        conn = httplib.HTTPConnection(requestURLBase)
        conn.request("GET", url.path)
        res = conn.getresponse()
        respBody = res.read()
        conn.close()
        return res.status, respBody, ourl #Status can be 210 for error or 300 for successful API response
    except:
        print "ErrorBlock"
        print res.read()
        conn.close()
        return "error", "error", ourl

def checkResult(status, body, url):
    if "unavailable" not in body:
        print status, body, url
        keepTrying = 1
    else:
        keepTrying = 0

q = Queue(concurrent * 2)
for i in range(concurrent):
    t = Thread(target=doWork)
    t.daemon = True
    t.start()

try:
    for value in open('valuelist.txt'):
        fullUrl = requestURLBase + "?key=" + apiKey + "&value=" + value.strip() + "&years="
        print fullUrl
        q.put(fullUrl)
    q.join()
except KeyboardInterrupt:
    sys.exit(1)
</code></pre>
<p>I'm new to Python so there could be syntax errors as well... I'm definitely not familiar with multi-threading so perhaps I'm doing something else wrong as well.</p>
| 0 | 2016-08-16T15:34:58Z | 38,979,364 | <p>In the code, the list is only read once. Should be something like</p>
<pre><code>try:
    while True:
        for value in open('valuelist.txt'):
            fullUrl = requestURLBase + "?key=" + apiKey + "&value=" + value.strip() + "&years="
            print fullUrl
            q.put(fullUrl)
        q.join()
</code></pre>
<p>For the interrupt thing, remove the bare <code>except</code> line in checkStatus or make it <code>except Exception</code>. Bare excepts will catch all exceptions, including <code>SystemExit</code> which is what <code>sys.exit</code> raises and stop the python process from terminating.</p>
<p>If I may make a couple comments in general though. </p>
<ul>
<li>Threading is not a good implementation for such large concurrencies</li>
<li>Creating a new connection every time is not efficient</li>
</ul>
<p>What I would suggest is </p>
<ol>
<li>Use <a href="http://gevent.readthedocs.org/" rel="nofollow">gevent for asynchronous network I/O</a></li>
<li>Pre-allocate a queue of connections same size as concurrency number and have <code>checkStatus</code> grab a connection object when it needs to make a call. That way the connections stay alive, get reused and there is no overhead in creating and destroying them and the increased memory use that goes with it.</li>
</ol>
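A stdlib sketch of the per-item idea using <code>concurrent.futures</code> (the names and the fake status check are invented; the real <code>checkStatus</code> call would go where <code>check_status</code> is invoked):

```python
from concurrent.futures import ThreadPoolExecutor

def check_until_changed(item, check_status, max_tries=5):
    # Poll one item until its status changes, then stop trying for that item
    for attempt in range(1, max_tries + 1):
        if check_status(item):
            return item, attempt
    return item, None

def make_fake_check(threshold):
    # Stand-in for the real API call: reports a change on the `threshold`-th poll
    calls = {}
    def check(item):
        calls[item] = calls.get(item, 0) + 1
        return calls[item] >= threshold
    return check

with ThreadPoolExecutor(max_workers=4) as pool:
    fake = make_fake_check(3)
    results = dict(pool.map(lambda it: check_until_changed(it, fake), ["a", "b"]))

print(results)  # {'a': 3, 'b': 3}
```

The executor replaces the hand-rolled daemon threads and the shared <code>keepTrying</code> flag, which, as written in the question, is a local variable inside <code>checkResult</code> and never actually stops the workers.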
| 0 | 2016-08-16T15:53:00Z | [
"python",
"multithreading",
"api",
"rest",
"python-multithreading"
] |
Python ValueError: embedded null byte when reading png file from bash pipe | 38,979,075 | <pre><code>from PIL import Image
from subprocess import Popen, PIPE
scr = Image.open(Popen.communicate(Popen(['import','-w','0x02a00001','png:-'], stdout=PIPE))[0])
</code></pre>
<p>File "/usr/lib/python3/dist-packages/PIL/Image.py", line 2258, in open
fp = builtins.open(filename, "rb")
ValueError: embedded null byte</p>
| 0 | 2016-08-16T15:37:13Z | 38,981,166 | <p>Try first to load raw data into a <code>BytesIO</code> container:</p>
<pre><code>from io import BytesIO
from PIL import Image
from subprocess import Popen, PIPE
data = Popen.communicate(Popen(['import','-w','0x02a00001','png:-'], stdout=PIPE))[0]
scr = Image.open(BytesIO(data))
</code></pre>
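The underlying issue can be reproduced without PIL: passing raw bytes where a filename is expected makes <code>open()</code> treat them as a path, and any null byte in that "path" raises exactly this error.

```python
from io import BytesIO

data = b"\x89PNG\x00fake image bytes"  # raw bytes, e.g. read from a subprocess pipe

# Passing the bytes where a *filename* is expected reproduces the error:
try:
    open(data.decode("latin-1"), "rb")
except ValueError as exc:
    print(exc)  # embedded null byte

# Wrapping them in a file-like object is what Image.open() needs instead:
buf = BytesIO(data)
print(buf.read(4))  # b'\x89PNG'
```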
| 2 | 2016-08-16T17:36:18Z | [
"python",
"linux",
"bash"
] |
How to guarantee repartitioning in Spark Dataframe | 38,979,098 | <p>I'm pretty new to Apache Spark and I'm trying to repartition a dataframe by U.S. State. I then want to break each partition into its own RDD and save to a specific location:</p>
<pre><code>schema = types.StructType([
types.StructField("details", types.StructType([
types.StructField("state", types.StringType(), True)
]), True)
])
raw_rdd = spark_context.parallelize([
'{"details": {"state": "AL"}}',
'{"details": {"state": "AK"}}',
'{"details": {"state": "AZ"}}',
'{"details": {"state": "AR"}}',
'{"details": {"state": "CA"}}',
'{"details": {"state": "CO"}}',
'{"details": {"state": "CT"}}',
'{"details": {"state": "DE"}}',
'{"details": {"state": "FL"}}',
'{"details": {"state": "GA"}}'
]).map(
lambda row: json.loads(row)
)
rdd = sql_context.createDataFrame(raw_rdd).repartition(10, "details.state").rdd
for index in range(0, rdd.getNumPartitions()):
partition = rdd.mapPartitionsWithIndex(
lambda partition_index, partition: partition if partition_index == index else []
).coalesce(1)
if partition.count() > 0:
df = sql_context.createDataFrame(partition, schema=schema)
for event in df.collect():
print "Partition {0}: {1}".format(index, str(event))
else:
print "Partition {0}: No rows".format(index)
</code></pre>
<p>In order to test, I load a file from S3 with 50 rows (10 in the example), each with a different state in the <code>details.state</code> column. In order to mimic the behavior I've parallelized data in the example above, but the behavior is the same. I get the 50 partitions I asked for but some aren't being used and some carry entries for more than one state. Here's the output for the sample set of 10:</p>
<pre><code>Partition 0: Row(details=Row(state=u'AK'))
Partition 1: Row(details=Row(state=u'AL'))
Partition 1: Row(details=Row(state=u'CT'))
Partition 2: Row(details=Row(state=u'CA'))
Partition 3: No rows
Partition 4: No rows
Partition 5: Row(details=Row(state=u'AZ'))
Partition 6: Row(details=Row(state=u'CO'))
Partition 6: Row(details=Row(state=u'FL'))
Partition 6: Row(details=Row(state=u'GA'))
Partition 7: Row(details=Row(state=u'AR'))
Partition 7: Row(details=Row(state=u'DE'))
Partition 8: No rows
Partition 9: No rows
</code></pre>
<p>My question: is the repartitioning strategy just a suggestion to Spark or is there something fundamentally wrong with my code?</p>
| 1 | 2016-08-16T15:38:51Z | 38,986,150 | <p>There is nothing unexpected going on here. Spark is using hash of the partitioning key (positive) modulo number of partitions to distribute rows between partitions and with 50 partitions you'll get a significant number of duplicates:</p>
<pre><code>from pyspark.sql.functions import expr
states = sc.parallelize([
"AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DC", "DE", "FL", "GA",
"HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD",
"MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ",
"NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC",
"SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI", "WY"
])
states_df = states.map(lambda x: (x, )).toDF(["state"])
states_df.select(expr("pmod(hash(state), 50)")).distinct().count()
# 26
</code></pre>
<p>If you want to separate files on write it is better to use <code>partitionBy</code> clause for <code>DataFrameWriter</code>. It will create separate output per level and doesn't require shuffling. </p>
<p>If you really want to go with full repartitioning you can use RDD API which allows you to use custom partitioner.</p>
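The collision effect is easy to reproduce outside Spark: any hash-mod-n assignment of k keys to n buckets leaves some buckets empty and doubles up others (a stdlib sketch, not Spark's actual hash function):

```python
import hashlib

states = ["AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DC", "DE", "FL",
          "GA", "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME"]

def bucket(key, n=50):
    # Deterministic stand-in for a hash partitioner: hash(key) mod n
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return digest % n

assignments = {s: bucket(s) for s in states}
occupied = len(set(assignments.values()))
print(occupied, "of 50 partitions used for", len(states), "keys")
```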
| 2 | 2016-08-16T23:51:05Z | [
"python",
"apache-spark",
"pyspark",
"partitioning"
] |
please fix following multithreading code in python | 38,979,141 | <pre><code> import time from threading
import thread
def myfunc(i): #Each thread runs this function
    print "sleep from thread %d" % i
    time. Sleep(5)
    print "woke up from thread %d" % i
    return
for i in range(10): # Create 10 Thread objects
    t = Thread(target=myfunc, args=(i,))
    t.start() #Start Each Thread
return
</code></pre>
<p>it throws following error .</p>
<blockquote>
<p>File "threading.py", line 1<br>
import time from threading<br>
^ SyntaxError: invalid syntax</p>
</blockquote>
| -2 | 2016-08-16T15:41:17Z | 38,979,275 | <p>For the error you are pointing out: <code>import time from threading</code> is not valid Python syntax, and <code>time</code> is not part of <code>threading</code> anyway. You need two separate imports:<br>
<code>import time</code> and <code>from threading import Thread</code> (the code uses <code>Thread</code>, and <code>time. Sleep(5)</code> should be <code>time.sleep(5)</code>). </p>
| 0 | 2016-08-16T15:48:15Z | [
"python",
"multithreading",
"python-multithreading"
] |
what are the differences between import and extends in Flask? | 38,979,155 | <p>I am reading "Flask Web Development".
In Example 4-3:</p>
<pre><code>{% extends "base.html" %}
{% import "bootstrap/wtf.html" as wtf %}
</code></pre>
<p>I'd like to know:
What are the differences between extends and import? (I think they are quite similar in usage.)
In which situation should I use extends or import?</p>
| 3 | 2016-08-16T15:42:15Z | 38,979,410 | <p>There <em>is</em> a difference. <code>{% extends parent.html %}</code> allows you to render <code>parent.html</code> and also override <code>{% block %}</code>'s defined in it while <code>{% import %}</code> just allows you to access template variables.</p>
<p>So the example template extends <code>base.html</code> and imports variables from <code>bootstrap/wtf.html</code>. Think of it like Python's class inheritance and import statements.</p>
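A minimal way to see both behaviors outside Flask, assuming the <code>jinja2</code> package is available (the templates are invented for the demo):

```python
from jinja2 import Environment, DictLoader

env = Environment(loader=DictLoader({
    "base.html": "header {% block main %}default{% endblock %} footer",
    # extends: the child only fills blocks that base.html defines
    "child.html": "{% extends 'base.html' %}{% block main %}child content{% endblock %}",
    # import: macros.html is bound to a name and called explicitly
    "macros.html": "{% macro hello(name) %}Hello {{ name }}{% endmacro %}",
    "page.html": "{% import 'macros.html' as m %}{{ m.hello('world') }}",
}))

print(env.get_template("child.html").render())  # header child content footer
print(env.get_template("page.html").render())   # Hello world
```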
| 1 | 2016-08-16T15:55:11Z | [
"python",
"flask",
"jinja2"
] |
what are the differences between import and extends in Flask? | 38,979,155 | <p>I am reading "Flask web development".
In Example 4-3:</p>
<pre><code>{% extends "base.html" %}
{% import "bootstrap/wtf.html" as wtf %}
</code></pre>
<p>I'd like to know:
What are the differences between extends and import? (I think they are quite similar in usage.)
In which situations should I use extends or import?</p>
| 3 | 2016-08-16T15:42:15Z | 38,979,535 | <blockquote>
<p>By default, included templates are passed the current context and imported templates are not.
(Jinja documentation)</p>
</blockquote>
<p>By default, included templates are not cached, while imported ones are.</p>
<p>The reason is that imports are often used as modules that hold macros.</p>
<p>Best practice is to use import for templates that contain macros, while include is best when you just want some markup rendered in place.</p>
| 0 | 2016-08-16T16:01:45Z | [
"python",
"flask",
"jinja2"
] |
what are the differences between import and extends in Flask? | 38,979,155 | <p>I am reading "Flask web development".
In Example 4-3:</p>
<pre><code>{% extends "base.html" %}
{% import "bootstrap/wtf.html" as wtf %}
</code></pre>
<p>I'd like to know:
What are the differences between extends and import? (I think they are quite similar in usage.)
In which situations should I use extends or import?</p>
| 3 | 2016-08-16T15:42:15Z | 38,980,052 | <p>When you <code>extend</code> another template the template controls you (the called controls the caller) - only named blocks in the "parent" template will be rendered:</p>
<pre><code>{% extends "base.html" %}
{% block main_content %}
Only shows up if there is a block called main_content
in base.html.
{% endblock main_content%}
</code></pre>
<p>On the other hand an <code>import</code> simply binds the template to a name in your template's scope, and you control when and where to call it (the caller controls the called):</p>
<pre><code>{% import "bootstrap/wtf.html" as wtf %}
Some of your own template code with {{ wtf.calls() }} where it makes sense.
</code></pre>
| 3 | 2016-08-16T16:31:01Z | [
"python",
"flask",
"jinja2"
] |
How to remote click on links from a 3rd party website | 38,979,170 | <p>I have a problem that I am trying to conceptualize whether possible or not. Nothing too fancy (i.e. remote login or anything etc.)</p>
<p>I have Website A and Website B. </p>
<p>On website A a user selects on a few links from website B, i would like to then remotely click on behalf of the user on the link (as Website B creates a cookie with the clicked information) so when the user gets redirected to Website B, the cookie (and the links) are pre-selected and the user does not need to click on them one by one.</p>
<p>Can this be done?</p>
| 0 | 2016-08-16T15:42:53Z | 38,979,448 | <p>If you want to interact with another web service, the solution is to send a POST/GET request and parse the response.
The question is: what is your goal?</p>
| 0 | 2016-08-16T15:57:26Z | [
"javascript",
"jquery",
"python",
"cookies"
] |
Need help in Radix sort | 38,979,220 | <p>I am having a problem creating a list.
I am done with the logic part for radix sort.
Here is the code:</p>
<pre><code>import math
a = [4, 15, 7, 3, 6, 22, 45, 82]
a1 = [[] for _ in xrange(len(a))]
a2 = [[] for _ in xrange(len(a))]
a3 = [[] for _ in xrange(len(a))]
a4 = [[] for _ in xrange(len(a))]
b = [[] for _ in xrange(10)]
b2 = [[] for _ in xrange(10)]
d=len(str(max(a)))
[str(item).zfill(d) for item in a]
print a
</code></pre>
<p>this part of the code pads the numbers with leading zeros so that every number has as many digits as the largest one </p>
<p>it gives <code>a = [ 04 , 15 , 07 , 03 , 06 , 22 , 45 , 82 ]</code></p>
<pre><code>for x in xrange(0,len(a)) :
a1[x].append(a[x]%10)
print a1
print '\n'
</code></pre>
<p>this will save the last digit of each number,
as follows </p>
<pre><code>a1 = [[4], [5], [7], [3], [6], [2], [5], [2]]
</code></pre>
<p>In the next part, if the bucket number matches a number's last digit,
that number will be stored in that bucket.</p>
<pre><code>i=0
for x in xrange(0,len(a)) :
for u in range(0,len(a)) :
if a1[u]==[i] :
b[x].append(a[u])
i=i+1
for u in range(0,len(a)) :
print b[u]
</code></pre>
<p>Output will be as follows :</p>
<pre><code>[]
[]
[22, 82]
[3]
[4]
[15, 45]
[6]
[7]
</code></pre>
<p>This part picks the numbers back out of the buckets, from bucket 0 up to bucket 9</p>
<pre><code>for k in range(0,len(a)) :
l=len(b[k])
for t in range(0,l) :
a2[k]=b[k][t]
print a2[k]
</code></pre>
<p>a2 is</p>
<pre><code>22
82
3
4
15
45
6
7
</code></pre>
<p>but when i print it ,like this-</p>
<pre><code>print a2[0]
</code></pre>
<p>It gives </p>
<pre><code>[]
</code></pre>
<p>I don't want to store the empty values in that a2 list.
How can I avoid that?</p>
<p>I know I have to use a condition like "if the bucket is empty, don't put the number in, just continue the loop",
but I don't know how to write the code for this.</p>
<p>I guess i need to add </p>
<pre><code>if len(b[k][t])==0 :
continue
else :
a2[k]=b[k][t]
print a2[k]
</code></pre>
<p>But it's not working </p>
<pre><code>Traceback (most recent call last):
File "prog.py", line 39, in <module>
TypeError: object of type 'int' has no len()
</code></pre>
| -1 | 2016-08-16T15:45:26Z | 38,979,376 | <p>Each <code>b[?]</code> is an array of integers, so <code>b[k][t]</code> will be an integer, but you are trying to take its length in <code>if len(b[k][t])==0:</code>.</p>
<p>To avoid having empty elements in <code>a2</code>, don't preallocate an array of empty slots; use a dictionary, for example, or append to an initially empty list.</p>
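To make the dictionary suggestion concrete, here is a hedged sketch (function names are mine, not from the original code) of one radix pass that buckets numbers by a chosen digit and never stores empty placeholder entries:

```python
from collections import defaultdict

def radix_pass(numbers, digit):
    """One radix-sort pass: bucket by the given digit (0 = least significant)."""
    buckets = defaultdict(list)          # only digits that actually occur get a key
    for n in numbers:
        buckets[(n // 10 ** digit) % 10].append(n)
    out = []
    for d in range(10):                  # read the buckets back in digit order
        out.extend(buckets.get(d, []))
    return out

def radix_sort(numbers):
    """Repeat the pass for every digit position of the largest number."""
    result = list(numbers)
    width = len(str(max(result)))
    for digit in range(width):
        result = radix_pass(result, digit)
    return result

print(radix_sort([4, 15, 7, 3, 6, 22, 45, 82]))  # [3, 4, 6, 7, 15, 22, 45, 82]
```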
| 0 | 2016-08-16T15:53:44Z | [
"python",
"radix-sort"
] |
Dynamically created inputs then getting data from inputs with flask | 38,979,227 | <p>I wrote some HTML and Javascript to create a table that has input fields that appear. Here is a picture of the <a href="http://i.stack.imgur.com/xJDFR.png" rel="nofollow">HTML Table</a>. When the user keeps clicking the button to add more it looks like this: <a href="http://i.stack.imgur.com/gLagy.png" rel="nofollow">HTML Table More Input</a></p>
<p>Here is the HTML code for the table and the javascript to create more inputs:</p>
<pre><code><form action="/information.html" method="POST">
<table class="table table-borderless">
<thead>
<tr>
<th style="text-align:center">Ticker</th>
<th style="text-align:center">Shares</th>
</tr>
</thead>
<tbody id='room_fileds'>
<tr>
<td style="text-align:center">
<fieldset class="form-group">
<input type="text" placeholder="AAPL" name="stock1"/>
</fieldset>
</td>
<td style="text-align:center">
<fieldset class="form-group">
<input type="text" placeholder="100" name="weight1"/>
</fieldset>
</td>
</tr>
</tbody>
</table>
</form>
<br/>
<input type="button" id="more_fields" onclick="add_fields();" value="Add More" />
<input type='submit' value='Submit'/>
<script>
var count = 0
function add_fields() {
count++
var county = count.toString();
var objTo = document.getElementById('room_fileds')
var divtest = document.createElement("tr");
divtest.innerHTML = '<td style="text-align:center"><input type="text"></td><td style="text-align:center"><input type="text"></td>';
objTo.appendChild(divtest)
}
</script>
</code></pre>
<p>I am trying to use flask to get all the post input. Usually I have an input with a name such as stock1 and then I do the following with flask: </p>
<pre><code>stock1=request.form.get('stock1',type=str)
</code></pre>
<p>However, I am unsure of how to handle this type of dynamically created inputs. I am not sure if the user will enter data into 1 or 2 or even 25 input boxes. Is there a proper way to use flask to get all of this data if it is unknown how much data the user will enter? Possibly, I would like to get all of the tickers into a list and all of the shares into another list. </p>
| 1 | 2016-08-16T15:45:45Z | 38,983,699 | <p>Maybe try this repository: <a href="https://github.com/sebkouba/dynamic-flask-form" rel="nofollow">https://github.com/sebkouba/dynamic-flask-form</a>
It works fine out of the box.</p>
| 1 | 2016-08-16T20:15:38Z | [
"javascript",
"python",
"html",
"input",
"flask"
] |
Dynamically created inputs then getting data from inputs with flask | 38,979,227 | <p>I wrote some HTML and Javascript to create a table that has input fields that appear. Here is a picture of the <a href="http://i.stack.imgur.com/xJDFR.png" rel="nofollow">HTML Table</a>. When the user keeps clicking the button to add more it looks like this: <a href="http://i.stack.imgur.com/gLagy.png" rel="nofollow">HTML Table More Input</a></p>
<p>Here is the HTML code for the table and the javascript to create more inputs:</p>
<pre><code><form action="/information.html" method="POST">
<table class="table table-borderless">
<thead>
<tr>
<th style="text-align:center">Ticker</th>
<th style="text-align:center">Shares</th>
</tr>
</thead>
<tbody id='room_fileds'>
<tr>
<td style="text-align:center">
<fieldset class="form-group">
<input type="text" placeholder="AAPL" name="stock1"/>
</fieldset>
</td>
<td style="text-align:center">
<fieldset class="form-group">
<input type="text" placeholder="100" name="weight1"/>
</fieldset>
</td>
</tr>
</tbody>
</table>
</form>
<br/>
<input type="button" id="more_fields" onclick="add_fields();" value="Add More" />
<input type='submit' value='Submit'/>
<script>
var count = 0
function add_fields() {
count++
var county = count.toString();
var objTo = document.getElementById('room_fileds')
var divtest = document.createElement("tr");
divtest.innerHTML = '<td style="text-align:center"><input type="text"></td><td style="text-align:center"><input type="text"></td>';
objTo.appendChild(divtest)
}
</script>
</code></pre>
<p>I am trying to use flask to get all the post input. Usually I have an input with a name such as stock1 and then I do the following with flask: </p>
<pre><code>stock1=request.form.get('stock1',type=str)
</code></pre>
<p>However, I am unsure of how to handle this type of dynamically created inputs. I am not sure if the user will enter data into 1 or 2 or even 25 input boxes. Is there a proper way to use flask to get all of this data if it is unknown how much data the user will enter? Possibly, I would like to get all of the tickers into a list and all of the shares into another list. </p>
| 1 | 2016-08-16T15:45:45Z | 39,003,383 | <p>I noticed that if you want all dynamically created inputs to share the same name, that name must differ from the name of the first input field, the one that is not dynamically created. So I named the first input s0 and the rest of the dynamically created inputs s1. I can then call request.form.get() on s0 and request.form.getlist() on s1. Appending the s0 value to the s1 list gives me the list I wanted, containing all of the input data. However, it does not work if the dynamically created inputs have the same name as the first, non-dynamically created input. </p>
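To illustrate why several inputs sharing one name arrive as a list, here is a sketch using only the standard library; in Flask itself the equivalent call is `request.form.getlist('s1')`, and the field names `s0`/`s1` follow the answer above:

```python
from urllib.parse import parse_qs

# A POST body as the browser encodes it: one fixed field named s0 and
# three dynamically created inputs that all share the name s1.
body = "s0=AAPL&s1=GOOG&s1=MSFT&s1=TSLA"

fields = parse_qs(body)                          # every value comes back as a list
tickers = fields["s0"] + fields.get("s1", [])    # single value + repeated values
print(tickers)  # ['AAPL', 'GOOG', 'MSFT', 'TSLA']
```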
| 1 | 2016-08-17T17:58:16Z | [
"javascript",
"python",
"html",
"input",
"flask"
] |
Dynamically created inputs then getting data from inputs with flask | 38,979,227 | <p>I wrote some HTML and Javascript to create a table that has input fields that appear. Here is a picture of the <a href="http://i.stack.imgur.com/xJDFR.png" rel="nofollow">HTML Table</a>. When the user keeps clicking the button to add more it looks like this: <a href="http://i.stack.imgur.com/gLagy.png" rel="nofollow">HTML Table More Input</a></p>
<p>Here is the HTML code for the table and the javascript to create more inputs:</p>
<pre><code><form action="/information.html" method="POST">
<table class="table table-borderless">
<thead>
<tr>
<th style="text-align:center">Ticker</th>
<th style="text-align:center">Shares</th>
</tr>
</thead>
<tbody id='room_fileds'>
<tr>
<td style="text-align:center">
<fieldset class="form-group">
<input type="text" placeholder="AAPL" name="stock1"/>
</fieldset>
</td>
<td style="text-align:center">
<fieldset class="form-group">
<input type="text" placeholder="100" name="weight1"/>
</fieldset>
</td>
</tr>
</tbody>
</table>
</form>
<br/>
<input type="button" id="more_fields" onclick="add_fields();" value="Add More" />
<input type='submit' value='Submit'/>
<script>
var count = 0
function add_fields() {
count++
var county = count.toString();
var objTo = document.getElementById('room_fileds')
var divtest = document.createElement("tr");
divtest.innerHTML = '<td style="text-align:center"><input type="text"></td><td style="text-align:center"><input type="text"></td>';
objTo.appendChild(divtest)
}
</script>
</code></pre>
<p>I am trying to use flask to get all the post input. Usually I have an input with a name such as stock1 and then I do the following with flask: </p>
<pre><code>stock1=request.form.get('stock1',type=str)
</code></pre>
<p>However, I am unsure of how to handle this type of dynamically created inputs. I am not sure if the user will enter data into 1 or 2 or even 25 input boxes. Is there a proper way to use flask to get all of this data if it is unknown how much data the user will enter? Possibly, I would like to get all of the tickers into a list and all of the shares into another list. </p>
| 1 | 2016-08-16T15:45:45Z | 39,003,708 | <p>I don't know much about Flask, but with this code you can get the array of objects and insert them one by one with your server logic.</p>
<p><a href="https://jsfiddle.net/8gdun4j0/" rel="nofollow">https://jsfiddle.net/8gdun4j0/</a></p>
<p>PS: this uses jQuery.</p>
<pre><code> arrayObject=[];
$("#subButon").on("click",function(){
for(var index=0;index<count+1;index++){
arrayObject.push(
{stock:$("#stock"+index).val(),
weight:$("#weight"+index).val()}
)
}
console.log(arrayObject);
         alert("objects saved")
})
</code></pre>
| 1 | 2016-08-17T18:17:31Z | [
"javascript",
"python",
"html",
"input",
"flask"
] |
Django-registration Forbidden (403) CSRF verification failed. Request aborted | 38,979,332 | <p>After installing django-registration-redux I have an 403 CSRF Error each time I try to register. Here is my form.html:</p>
<pre><code>{% extends "base.html" %}
{% load i18n %}
{% load crispy_forms_tags %}
{% block content %}
<div class="space"></div>
<div class="space"></div>
<div class="space"></div>
<div class="space"></div>
<div class='row'>
<div class='col-sm-6 col-sm-offset-3'>
<h1>Ãnregistrare</h1>
<form method="post" action=".">
{% csrf_token %}
{{ form|crispy }}
<input class='btn btn-block btn-primary' type="submit" value="{% trans 'Join' %}" />
</form>
</div>
</div>
<hr/>
<div class='row'>
<div class='col-sm-6 col-sm-offset-3 text-align-center'>
<p>DoriÈi sÄ vÄ <a href="{% url 'auth_login' %}">LogaÈi</a>?</p>
</div>
</div>
{% endblock %}
</code></pre>
<p>1)Yes, I do have both {% csrf_token %} in the form.html and the following MIDDLEWARE_CLASSES :</p>
<pre><code>= (
'djangosecure.middleware.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
)
</code></pre>
| 0 | 2016-08-16T15:51:08Z | 39,362,984 | <p>I had the same problem after updating my Django from 1.8 to 1.10.
Downgrading to 1.8 fixes the problem (for now). </p>
| 1 | 2016-09-07T06:49:23Z | [
"python",
"django",
"django-registration",
"django-csrf"
] |
Python3 -m /path/to/file is giving me an error, whereas python -m /path/to/file is not | 38,979,362 | <p>I have been using the mod option on the command line with python (Python 2.7.X) for a while, and I am now switching to Python 3.</p>
<pre><code>python -m path/to/file
</code></pre>
<p>Now when I try to do the same procedure, but with Python 3, literally the same line:</p>
<pre><code>python3 -m path/to/file
</code></pre>
<p>I am given the following error: </p>
<pre><code>/usr/bin/python3: No module named path/to/file
</code></pre>
<p>I have been reading the documentation for both Python 2.X and Python 3.X about the -m option, and I have yet to figure out how the small changes made for 3.X have rendered the above mentioned utilization on my side broken!</p>
| 1 | 2016-08-16T15:52:52Z | 38,979,444 | <p>Python3 won't look for Python2 modules. You need to install modules specifically for Python3.</p>
<p>To give a concrete example: Under Debian, package python-numpy installs to <code>/usr/lib/python2.7/dist-packages/numpy/</code> whereas package python3-numpy installs to <code>/usr/lib/python3/dist-packages/numpy/</code>. Python2 will only search in <code>/usr/lib/python2.7/dist-packages/</code>; Python3 will only search in <code>/usr/lib/python3/dist-packages/</code>.</p>
<p>Accept this as it is; don't try to modify the search paths of your local installation, lest you will end in a terrible mess. Rather do reinstall each single module in its Python3 variant.</p>
| 0 | 2016-08-16T15:57:14Z | [
"python",
"python-3.x",
"module",
"options",
"mod"
] |
Python3 -m /path/to/file is giving me an error, whereas python -m /path/to/file is not | 38,979,362 | <p>I have been using the mod option on the command line with python (Python 2.7.X) for a while, and I am now switching to Python 3.</p>
<pre><code>python -m path/to/file
</code></pre>
<p>Now when I try to do the same procedure, but with Python 3, literally the same line:</p>
<pre><code>python3 -m path/to/file
</code></pre>
<p>I am given the following error: </p>
<pre><code>/usr/bin/python3: No module named path/to/file
</code></pre>
<p>I have been reading the documentation for both Python 2.X and Python 3.X about the -m option, and I have yet to figure out how the small changes made for 3.X have rendered the above mentioned utilization on my side broken!</p>
| 1 | 2016-08-16T15:52:52Z | 38,979,755 | <p>The usage is not:</p>
<pre><code>$ python -m path/to/file
</code></pre>
<p>but rather </p>
<pre><code>$ python -m package.subpackage.module
</code></pre>
<p>The <code>-m</code> flag adds the current directory on your path, otherwise it looks for the module to open the program with. For example, Python 2:</p>
<pre><code>$ cat > foo.py
import sys
print(sys.version)
$ python -m foo
2.7.8 (default, Jul 28 2014, 01:34:03)
[GCC 4.8.3]
$ python -m /foo
/usr/bin/python: No module named /foo
$ python -m ~/foo
/usr/bin/python: No module named /cygdrive/c/Users/user/foo
$ python -m ~/foo.py
/usr/bin/python: Import by filename is not supported.
</code></pre>
| 1 | 2016-08-16T16:13:10Z | [
"python",
"python-3.x",
"module",
"options",
"mod"
] |
Print list of lists in Python in specific order | 38,979,363 | <p>I have this list:</p>
<pre><code>vshape = [['0','1','1'],['1','0','1'],['1','1','0'],['1','0','1'],['0','1','1']]
</code></pre>
<p>I need to print out every item in specific order -
one line of vshape[0][0], vshape[1][0], vshape[2][0], vshape[3][0], and vshape[4][0];
followed by a line of vshape[0][1], vshape[1][1], and so on... </p>
<p>Output should look like ('0's creating a V-shape):</p>
<pre><code>01110
10101
11011
</code></pre>
| 0 | 2016-08-16T15:52:58Z | 38,979,433 | <p>Use <code>zip</code>:</p>
<pre><code>for r in zip(*vshape):
print(''.join(r))
# 01110
# 10101
# 11011
</code></pre>
| 2 | 2016-08-16T15:56:39Z | [
"python",
"python-2.7"
] |
Print list of lists in Python in specific order | 38,979,363 | <p>I have this list:</p>
<pre><code>vshape = [['0','1','1'],['1','0','1'],['1','1','0'],['1','0','1'],['0','1','1']]
</code></pre>
<p>I need to print out every item in specific order -
one line of vshape[0][0], vshape[1][0], vshape[2][0], vshape[3][0], and vshape[4][0];
followed by a line of vshape[0][1], vshape[1][1], and so on... </p>
<p>Output should look like ('0's creating a V-shape):</p>
<pre><code>01110
10101
11011
</code></pre>
| 0 | 2016-08-16T15:52:58Z | 38,979,655 | <p>Perhaps easier to understand as a transpose, using numpy:</p>
<pre><code>import numpy as np

for r in np.array(vshape).T:
print(''.join(r))
</code></pre>
| 0 | 2016-08-16T16:07:41Z | [
"python",
"python-2.7"
] |
Print list of lists in Python in specific order | 38,979,363 | <p>I have this list:</p>
<pre><code>vshape = [['0','1','1'],['1','0','1'],['1','1','0'],['1','0','1'],['0','1','1']]
</code></pre>
<p>I need to print out every item in specific order -
one line of vshape[0][0], vshape[1][0], vshape[2][0], vshape[3][0], and vshape[4][0];
followed by a line of vshape[0][1], vshape[1][1], and so on... </p>
<p>Output should look like ('0's creating a V-shape):</p>
<pre><code>01110
10101
11011
</code></pre>
| 0 | 2016-08-16T15:52:58Z | 38,980,007 | <pre><code>vshape = [['0','1','1'],['1','0','1'],['1','1','0'],['1','0','1'], ['0','1','1']]
for i in range(3):
for j in range(5):
print (vshape[j][i], end=' ')
print()
</code></pre>
<blockquote>
<p><em>This is one of my most used methods to print out patterns. In this, I use two nested for loops.</em></p>
</blockquote>
| 2 | 2016-08-16T16:28:05Z | [
"python",
"python-2.7"
] |
Trouble With Python Code | 38,979,400 | <p>I am taking the introduction to computer science class at Udacity and for one of the assignments I must write code that will take all the links from a webpage. Here is the code</p>
<pre><code>def get_next_target(page):
start_link = page.find('<a href=')
while True:
if start_link == -1:
x, y = None, 0
return x, y
break
start_quote = page.find('"', start_link)
end_quote = page.find('"', start_quote + 1)
url = page[start_quote + 1:end_quote]
return url, end_quote
</code></pre>
<p>When I run samples, it seems to work, but when I submit my code, I get the result that my submission did not terminate. What does this mean? What is the issue with my code?</p>
| 0 | 2016-08-16T15:54:31Z | 38,979,661 | <pre><code>def get_next_target(page, start=0):
    """ find the next link, searching from position start """
    start_link = page.find('<a href=', start)  # absolute index, not relative to a slice
    if start_link == -1:
        return None, None
    start_quote = page.find('"', start_link)
    end_quote = page.find('"', start_quote + 1)
    url = page[start_quote + 1:end_quote]
    return url, end_quote

def find_all(page):
    """ find all links on the page """
    urls = []
    current_position = 0  # we start with the full page
    while True:
        # get the url and advance current_position, so next time we search
        # only the rest of the page
        url, current_position = get_next_target(page, current_position)
        if url is None:
            return urls
        urls.append(url)
</code></pre>
<p>But I would recommand use regular expressions - something like:</p>
<pre><code>def find_all(page):
    import re
    return re.findall('<a href="([^"]+)"', page)
</code></pre>
<p><strong>Edit:</strong>
But neither solution will detect links like:</p>
<pre><code><a href="some/page">, or <a tilte="ti" href="some/page" >
</code></pre>
<p>for this you will need recreate the regular expression. It is the best option IMHO.</p>
| 0 | 2016-08-16T16:07:53Z | [
"python"
] |
Dryscrape Form & Scraping Issue | 38,979,513 | <p>I am trying to submit a form and retrieve some data
with dryscrape but when I execute the program, I get the error: </p>
<pre><code>Traceback (most recent call last):
File "easyjettest.py", line 22, in <module>
originairport_field.set(originairport)
AttributeError: 'NoneType' object has no attribute 'set'
</code></pre>
<p>I really can't figure out what is the problem. I've read the documentation and searched as much as I could online.</p>
<p>The code is the following: </p>
<pre><code>import dryscrape
import sys
if 'linux' in sys.platform:
# start xvfb in case no X is running. Make sure xvfb
# is installed, otherwise this won't work!
dryscrape.start_xvfb()
originairport = 'Cyprus (Larnaca) LCA'
destinationairport = 'London Gatwick LGW'
odate = '16/08/2016'
adate = '18/08/2016'
adults = '1'
sess = dryscrape.Session(base_url = 'http://www.easyjet.com/en/')
sess.set_attribute('auto_load_images', False)
sess.visit('/')
originairport_field = sess.at_xpath('.//*[@id="acOriginAirport"]')
originairport_field.set(originairport)
destinationairport_field = sess.at_xpath('.//* [@id="acDestinationAirport"]')
destinationairport_field.set(destinationairport)
odate_field = sess.at_xpath('.//*[@id="oDate"]')
odate_field.set(odate)
rdate_field = session.at_xpath('.//*[@id="rDate"]')
rdate_field.set(rdate)
adults_field = session.at_xpath('.//*[@id="numberOfAdults"]')
adults_field.set(adults)
originairport_field.form().submit()
# extract all links
for link in session.xpath('//a[@href]'):
print link['href']
</code></pre>
| 1 | 2016-08-16T16:00:31Z | 38,979,649 | <p>Check on which line the error takes place; probably one of the variables <code>originairport_field</code>, <code>destinationairport_field</code>, <code>odate_field</code>, <code>rdate_field</code>, <code>adults_field</code> is assigned <code>None</code>.</p>
<p>By the way, where does the <code>session</code> in the lines where you set the values of <code>rdate_field</code> and <code>adults_field</code> come from? Shouldn't that be <code>sess</code>?</p>
<p><strong>Edit:</strong></p>
<p>From your updated error info, <code>sess.at_xpath('.//*[@id="acOriginAirport"]')</code> probably isn't returning anything, i.e. it returns <code>None</code> because the XPath matches no element.</p>
| 1 | 2016-08-16T16:07:21Z | [
"python",
"web-scraping"
] |
why the result of blob detection does not change after the change of parameters? | 38,979,551 | <p>I got the program of blob detection in the following website: <a href="https://www.learnopencv.com/blob-detection-using-opencv-python-c/" rel="nofollow">https://www.learnopencv.com/blob-detection-using-opencv-python-c/</a></p>
<p>It is quite useful, but I found that nothing changes in the result after I change the parameter values.
For example, even when I set the color parameter to 255 (which is used to detect lighter blobs), the dark blobs can still be detected. Also, after I change the value of the minimum area, the smallest blobs can still be detected.</p>
<p>It seems nothing changes; the result is always like the following one:
<a href="http://i.stack.imgur.com/ELRYc.jpg" rel="nofollow">the result of blob detection</a>
Here is the code:</p>
<pre><code>import cv2
import numpy as np;
# Read image
im = cv2.imread("blob.jpg", cv2.IMREAD_GRAYSCALE)
# Set up the detector with default parameters.
detector = cv2.SimpleBlobDetector()
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10; # the graylevel of images
params.maxThreshold = 200;
params.filterByColor = True
params.blobColor = 255
# Filter by Area.
params.filterByArea = False
params.minArea = 10000
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
</code></pre>
<p>Can anyone help me ? Thanks very much!!!</p>
| 0 | 2016-08-16T16:02:40Z | 38,982,057 | <p>You change the parameters, but they are never passed to the detector. Set them first, then create the detector with them:</p>
<pre><code>import cv2
import numpy as np
# Read image
im = cv2.imread('blob.jpg', cv2.IMREAD_GRAYSCALE)
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10 # the graylevel of images
params.maxThreshold = 200
# Filter by Area.
params.filterByArea = True
params.minArea = 1500
# Create a detector with the parameters
detector = cv2.SimpleBlobDetector(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(
im, keypoints, np.array([]), (0, 0, 255),
cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow('Keypoints', im_with_keypoints)
cv2.waitKey(0)
</code></pre>
<p>Note: you have to use <code>True</code> for <code>params.filterByArea</code> if you want this parameter to be used and to not end on a segfault.</p>
| 0 | 2016-08-16T18:29:15Z | [
"python",
"opencv",
"blob"
] |
count increase by number, not digits | 38,979,600 | <p>In this Python function the file number increases by one when saving.
It counts from filename1 to filename10 correctly and then jumps to 111, 1112, 11113 and so forth instead of continuing with filename11.
Where does it go wrong?</p>
<pre><code>for f in notepad.getFiles():
if os.path.isfile(f[0]):
notepad.activateBufferID(f[1])
if notepad.getCurrentBufferID() == f[1]:
notepad.save()
else:
notepad.activateBufferID(f[1])
if notepad.getCurrentBufferID() == f[1]:
counter = 0
filename = f[0]
while os.path.isfile(NewFileDir + filename + NewFileExt):
counter += 1
filename = filename[:-1] + str(counter)
notepad.saveAs(NewFileDir + filename + NewFileExt)
</code></pre>
| 0 | 2016-08-16T16:05:16Z | 38,979,658 | <p>The offending line is here:</p>
<pre><code>filename = filename[:-1] + str(counter)
</code></pre>
<p>You trim off <strong>one</strong> character and add the counter. This works great for filenames when the counter is a single digit:</p>
<pre><code>filename8 -> filename + 9
filename9 -> filename + 10
</code></pre>
<p>but fails when the counter is more than one digit:</p>
<pre><code>filename10 -> filename1 + 11
filename111 -> filename11 + 12
</code></pre>
<p>One solution would be to use <code>len(str(counter))</code> instead of hardcoding <code>[:-1]</code>. Another would be to store the base filename separately instead of mutating it as you go.</p>
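A sketch of the "store the base filename separately" idea (function and variable names are mine): rebuilding the candidate from the untouched base on every iteration avoids the 10 -> 111 jump entirely.

```python
import os

def next_free_name(directory, base, ext):
    """Return base, base1, base2, ... -- the first name not already on disk."""
    candidate = base
    counter = 0
    while os.path.isfile(os.path.join(directory, candidate + ext)):
        counter += 1
        candidate = base + str(counter)  # always rebuilt from base, never mutated
    return candidate
```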
| 5 | 2016-08-16T16:07:48Z | [
"python",
"string"
] |
Python - write x rows of csv file to json file | 38,979,603 | <p>I have a csv file, which I need to write to json files in rows of 1000. The csv file has around 9,000 rows, so ideally I'd like to end up with 9 separate json files of consecutive data.</p>
<p>I know how to write a csv file to json - what I've been doing:</p>
<pre><code>csvfile = open("C:\\Users\Me\Desktop\data\data.csv", 'r', encoding="utf8")
reader = csv.DictReader(csvfile, delimiter = ",")
out = json.dumps( [ row for row in reader ] )
with open("C:\\Users\Me\Desktop\data\data.json", 'w') as f:
f.write(out)
</code></pre>
<p>which works great. But I need the json file to be 9 split files. Now, I'm assuming that I would either:</p>
<p>1) attempt to count <em>row</em> and stop when it reaches 1,000</p>
<p>2) write the csv file to a single json file, then open the json and attempt to split it somehow.</p>
<p>I'm pretty lost on how to accomplish this - any help appreciated!</p>
| 1 | 2016-08-16T16:05:23Z | 38,979,883 | <p>This will read the file <code>data.csv</code> once and will create separate JSON files named <code>data_1.json</code> through <code>data_9.json</code>, since there are 9000 rows. </p>
<p>Also, as long as the number of rows in <code>data.csv</code> is a multiple of 1000, it will create <code>number_of_rows/1000</code> files without having to change the code.</p>
<pre><code>import csv
import json

csvfile = open("C:\\Users\\Me\\Desktop\\data\\data.csv", 'r', encoding="utf8")
reader = csv.DictReader(csvfile, delimiter=",")

r = []
fileid = 1
for row in reader:
    r.append(row)
    if len(r) == 1000:  # write out a full chunk of 1000 rows
        fname = "C:\\Users\\Me\\Desktop\\data\\data_" + str(fileid) + ".json"
        with open(fname, 'w') as f:
            f.write(json.dumps(r))
        # resetting & updating variables
        fileid += 1
        r = []

# write any leftover rows that did not fill a full chunk
if r:
    fname = "C:\\Users\\Me\\Desktop\\data\\data_" + str(fileid) + ".json"
    with open(fname, 'w') as f:
        f.write(json.dumps(r))
</code></pre>
| -1 | 2016-08-16T16:20:55Z | [
"python",
"json",
"csv"
] |
Python - write x rows of csv file to json file | 38,979,603 | <p>I have a csv file, which I need to write to json files in rows of 1000. The csv file has around 9,000 rows, so ideally I'd like to end up with 9 separate json files of consecutive data.</p>
<p>I know how to write a csv file to json - what I've been doing:</p>
<pre><code>csvfile = open("C:\\Users\Me\Desktop\data\data.csv", 'r', encoding="utf8")
reader = csv.DictReader(csvfile, delimiter = ",")
out = json.dumps( [ row for row in reader ] )
with open("C:\\Users\Me\Desktop\data\data.json", 'w') as f:
    f.write(out)
</code></pre>
<p>which works great. But I need the json file to be 9 split files. Now, I'm assuming that I would either:</p>
<p>1) attempt to count <em>row</em> and stop when it reaches 1,000</p>
<p>2) write the csv file to a single json file, then open the json and attempt to split it somehow.</p>
<p>I'm pretty lost on how to accomplish this - any help appreciated!</p>
| 1 | 2016-08-16T16:05:23Z | 38,980,017 | <p>Read the whole CSV file into a list of rows, then write slices of length 1000 to JSON files.</p>
<pre><code>import csv
import json
input_file = 'C:\\Users\\Me\\Desktop\\data\\data.csv'
output_file_template = 'C:\\Users\\Me\\Desktop\\data\\data_{}.json'
with open(input_file, 'r', encoding='utf8') as csvfile:
    reader = csv.DictReader(csvfile, delimiter=',')
    rows = list(reader)

for i in range(len(rows) // 1000):
    out = json.dumps(rows[1000*i:1000*(i+1)])
    with open(output_file_template.format(i), 'w') as f:
        f.write(out)
</code></pre>
| 2 | 2016-08-16T16:28:23Z | [
"python",
"json",
"csv"
] |
Python - write x rows of csv file to json file | 38,979,603 | <p>I have a csv file, which I need to write to json files in rows of 1000. The csv file has around 9,000 rows, so ideally I'd like to end up with 9 separate json files of consecutive data.</p>
<p>I know how to write a csv file to json - what I've been doing:</p>
<pre><code>csvfile = open("C:\\Users\Me\Desktop\data\data.csv", 'r', encoding="utf8")
reader = csv.DictReader(csvfile, delimiter = ",")
out = json.dumps( [ row for row in reader ] )
with open("C:\\Users\Me\Desktop\data\data.json", 'w') as f:
    f.write(out)
</code></pre>
<p>which works great. But I need the json file to be 9 split files. Now, I'm assuming that I would either:</p>
<p>1) attempt to count <em>row</em> and stop when it reaches 1,000</p>
<p>2) write the csv file to a single json file, then open the json and attempt to split it somehow.</p>
<p>I'm pretty lost on how to accomplish this - any help appreciated!</p>
| 1 | 2016-08-16T16:05:23Z | 38,980,557 | <p>Instead of reading the whole CSV file, you can iterate (less memory usage).</p>
<p>For instance, here is a simple iteration of the rows:</p>
<pre><code>with open(input_file, 'r', encoding='utf8') as csvfile:
    reader = csv.DictReader(csvfile, delimiter=',')
    for row in reader:
        print(row)
</code></pre>
<p>During iteration, you can enumerate the rows and use this value to count the groups of 1000 rows:</p>
<pre><code>group_size = 1000
with open(input_file, 'r', encoding='utf8') as csvfile:
    reader = csv.DictReader(csvfile, delimiter=',')
    for index, row in enumerate(reader):
        group_idx = index // group_size
        print(group_idx, row)
</code></pre>
<p>You should have something like this:</p>
<pre><code>0 [row 0...]
0 [row 1...]
0 [row 2...]
...
0 [row 999...]
1 [row 1000...]
1 [row 1001...]
etc.
</code></pre>
<p>You can use <a href="https://docs.python.org/dev/library/itertools.html#itertools.groupby" rel="nofollow">itertools.groupby</a> to group yours rows by 1000.</p>
<p>Using Alberto Garcia-Raboso's solution, you can use:</p>
<pre><code>from __future__ import division
import csv
import json
import itertools
input_file = 'C:\\Users\\Me\\Desktop\\data\\data.csv'
output_file_template = 'C:\\Users\\Me\\Desktop\\data\\data_{}.json'
group_size = 1000
with open(input_file, 'r', encoding='utf8') as csvfile:
    reader = csv.DictReader(csvfile, delimiter=',')
    for key, group in itertools.groupby(enumerate(reader),
                                        key=lambda item: item[0] // group_size):
        grp_rows = [item[1] for item in group]
        content = json.dumps(grp_rows)
        with open(output_file_template.format(key), 'w') as jsonfile:
            jsonfile.write(content)
</code></pre>
<p>Example with some fake data:</p>
<pre><code>from __future__ import division
import itertools
rows = [[1, 2], [3, 4], [5, 6], [7, 8],
        [1, 2], [3, 4], [5, 6], [7, 8],
        [1, 2], [3, 4], [5, 6], [7, 8],
        [1, 2], [3, 4], [5, 6], [7, 8],
        [1, 2], [3, 4], [5, 6], [7, 8]]
group_size = 4

for key, group in itertools.groupby(enumerate(rows),
                                    key=lambda item: item[0] // group_size):
    g_rows = [item[1] for item in group]
    print(key, g_rows)
</code></pre>
<p>You'll get:</p>
<pre><code>0 [[1, 2], [3, 4], [5, 6], [7, 8]]
1 [[1, 2], [3, 4], [5, 6], [7, 8]]
2 [[1, 2], [3, 4], [5, 6], [7, 8]]
3 [[1, 2], [3, 4], [5, 6], [7, 8]]
4 [[1, 2], [3, 4], [5, 6], [7, 8]]
</code></pre>
| 2 | 2016-08-16T16:58:42Z | [
"python",
"json",
"csv"
] |
Python - write x rows of csv file to json file | 38,979,603 | <p>I have a csv file, which I need to write to json files in rows of 1000. The csv file has around 9,000 rows, so ideally I'd like to end up with 9 separate json files of consecutive data.</p>
<p>I know how to write a csv file to json - what I've been doing:</p>
<pre><code>csvfile = open("C:\\Users\Me\Desktop\data\data.csv", 'r', encoding="utf8")
reader = csv.DictReader(csvfile, delimiter = ",")
out = json.dumps( [ row for row in reader ] )
with open("C:\\Users\Me\Desktop\data\data.json", 'w') as f:
    f.write(out)
</code></pre>
<p>which works great. But I need the json file to be 9 split files. Now, I'm assuming that I would either:</p>
<p>1) attempt to count <em>row</em> and stop when it reaches 1,000</p>
<p>2) write the csv file to a single json file, then open the json and attempt to split it somehow.</p>
<p>I'm pretty lost on how to accomplish this - any help appreciated!</p>
| 1 | 2016-08-16T16:05:23Z | 38,985,568 | <p>There is no reason to use a DictReader; the regular <em>csv.reader</em> will do fine. You can also just use <em>itertools.islice</em> on the reader object to slice the data into groups of <code>n</code> rows and dump each collection to a new file:</p>
<pre><code>from itertools import islice, count
import csv
import json
with open("C:\\Users\Me\Desktop\data\data.csv") as f:
    reader, cnt = csv.reader(f), count(1)
    for rows in iter(lambda: list(islice(reader, 1000)), []):
        with open("C:\\Users\Me\Desktop\data\data{}.json".format(next(cnt)), "w") as out:
            json.dump(rows, out)
</code></pre>
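<p>The two-argument form of <code>iter</code> used above keeps calling the lambda until it returns the sentinel value <code>[]</code>, which is what ends the chunking loop once the reader is exhausted. A quick standalone illustration of that pattern, using toy data instead of a CSV file:</p>

```python
from itertools import islice

# chunk a stream into lists of at most 4 items;
# iter(callable, sentinel) stops as soon as the callable returns []
it = iter(range(10))
chunks = list(iter(lambda: list(islice(it, 4)), []))
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

The same mechanism applies unchanged when the iterator is a <code>csv.reader</code> instead of a range.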
| 0 | 2016-08-16T22:39:12Z | [
"python",
"json",
"csv"
] |
python tkinter import portability | 38,979,637 | <p>I'm running the same very small python script from both my work computer at home and a server at work that that computer RDPs to. The server uses the company standard python 2 and I have been using python 3 at home. Recently I decided to use the same script to do the same job on both. Everything is the same and works except that one wants:</p>
<pre><code>from Tkinter import *
</code></pre>
<p>and the other wants a single letter case change:</p>
<pre><code>from tkinter import *
</code></pre>
<p>How do I make this portable as in the same script working on different python environments? I don't want to have two scripts to remember to keep an eye on.
Is it possible?</p>
| 0 | 2016-08-16T16:06:41Z | 38,997,734 | <p>The <code>2to3</code> program will perform (some, but not always perfect) translation of your program to Python 3. The trick is to write the code in such a way that your Python 2 code <em>does</em> get successfully (and correctly) converted. </p>
<p>I'd recommend you take a look at <a href="https://github.com/mitsuhiko/python-modernize" rel="nofollow">python-modernize</a>, which should help you maintain a compatible source.</p>
| 1 | 2016-08-17T13:15:20Z | [
"python",
"tkinter"
] |
python tkinter import portability | 38,979,637 | <p>I'm running the same very small python script from both my work computer at home and a server at work that that computer RDPs to. The server uses the company standard python 2 and I have been using python 3 at home. Recently I decided to use the same script to do the same job on both. Everything is the same and works except that one wants:</p>
<pre><code>from Tkinter import *
</code></pre>
<p>and the other wants a single letter case change:</p>
<pre><code>from tkinter import *
</code></pre>
<p>How do I make this portable as in the same script working on different python environments? I don't want to have two scripts to remember to keep an eye on.
Is it possible?</p>
| 0 | 2016-08-16T16:06:41Z | 38,998,266 | <p>Check the Python version and treat according to the case:</p>
<pre><code>import sys
if sys.version_info[0] > 2:
    from tkinter import *
else:
    from Tkinter import *
</code></pre>
| 1 | 2016-08-17T13:37:57Z | [
"python",
"tkinter"
] |
python tkinter import portability | 38,979,637 | <p>I'm running the same very small python script from both my work computer at home and a server at work that that computer RDPs to. The server uses the company standard python 2 and I have been using python 3 at home. Recently I decided to use the same script to do the same job on both. Everything is the same and works except that one wants:</p>
<pre><code>from Tkinter import *
</code></pre>
<p>and the other wants a single letter case change:</p>
<pre><code>from tkinter import *
</code></pre>
<p>How do I make this portable as in the same script working on different python environments? I don't want to have two scripts to remember to keep an eye on.
Is it possible?</p>
| 0 | 2016-08-16T16:06:41Z | 38,999,134 | <p>Or try one import and fall back to the other :-)</p>
<pre><code>try:
    from tkinter import *
except ImportError:
    from Tkinter import *
</code></pre>
| 0 | 2016-08-17T14:13:16Z | [
"python",
"tkinter"
] |
python finding the 1's in a bitmask | 38,979,660 | <p>I have a number of nodes that can be grouped to respond to commands via a bitmask. For example: NodeA is in groups 1 and 5. When asked which groups it belongs to, it answers with 17 of which the binary equivalent is '0b10001'. A node in groups 2, 7 and 9 would tell me it belongs to group 322 ('0b101000010'). I need a way to display to the user which group a specified node belongs to. There are a possibility of 16 groups. My code will give me a 'string index out of range' error if the binary is not 16 characters long. I know there is a better way:</p>
<pre><code>def xref(grp):
    a = bin(grp)
    d = str(a)
    if d[-1] == '1':
        print "Group 1"
    if d[-2] == '1':
        print "Group 2"
    if d[-3] == '1':
        print "Group 3"
    # ... repeat for 16 groups
</code></pre>
| 3 | 2016-08-16T16:07:51Z | 38,979,854 | <p>Use bit operations (and loops!):</p>
<pre><code>>>> for i in range(16):
...     if grp & (1<<i):
...         print('Group', i+1)
</code></pre>
| 2 | 2016-08-16T16:18:35Z | [
"python",
"bitmask"
] |
python finding the 1's in a bitmask | 38,979,660 | <p>I have a number of nodes that can be grouped to respond to commands via a bitmask. For example: NodeA is in groups 1 and 5. When asked which groups it belongs to, it answers with 17 of which the binary equivalent is '0b10001'. A node in groups 2, 7 and 9 would tell me it belongs to group 322 ('0b101000010'). I need a way to display to the user which group a specified node belongs to. There are a possibility of 16 groups. My code will give me a 'string index out of range' error if the binary is not 16 characters long. I know there is a better way:</p>
<pre><code>def xref(grp):
    a = bin(grp)
    d = str(a)
    if d[-1] == '1':
        print "Group 1"
    if d[-2] == '1':
        print "Group 2"
    if d[-3] == '1':
        print "Group 3"
    # ... repeat for 16 groups
</code></pre>
| 3 | 2016-08-16T16:07:51Z | 38,979,917 | <p>You just need to use some basic <a href="https://wiki.python.org/moin/BitwiseOperators" rel="nofollow">bitwise operators</a>.</p>
<p>Here's an example:</p>
<pre><code>def findbits(num):
    for i in range(16):
        if num & 1 << i:
            print("Group {0}".format(i + 1))
</code></pre>
<p>And the results:</p>
<pre>
>>> findbits(0b10001)
Group 1
Group 5
>>> findbits(0b10100010)
Group 2
Group 6
Group 8
>>> findbits(0b101000010)
Group 2
Group 7
Group 9
</pre>
<p>What this does is loop through the 16 bits you want to look at.</p>
<ul>
<li><code>1 << i</code> shifts the number 1 by <code>i</code> bits, e.g. <code>1 << 4</code> would be 0b10000</li>
<li>num & whatever does a bitwise AND - each bit of the number is set to 1 if the bits of the two operands are 1.</li>
</ul>
<p>So what this does is compare your values against 0b1, 0b10, 0b100, etc.</p>
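<p>As a quick self-check of the shift-and-mask idea, using the example value 17 (<code>0b10001</code>) from the question:</p>

```python
grp = 0b10001  # a node in groups 1 and 5

# 1 << i is a mask with only bit i set: 0b1, 0b10, 0b100, ...
# grp & (1 << i) is non-zero exactly when bit i is set in grp
groups = [i + 1 for i in range(16) if grp & (1 << i)]
print(groups)  # [1, 5]
```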
| 2 | 2016-08-16T16:23:13Z | [
"python",
"bitmask"
] |
Import Error When Running File | 38,979,716 | <p><a href="http://i.stack.imgur.com/DoOj7.png" rel="nofollow">enter image description here</a>When running a Python file I keep getting the following traceback error in IDLE:</p>
<pre><code> File "D:\jmz study\project1\MINI PROJECT\sandhi-splitter-master\sandhi-splitter-master\sandhisplitter\tests\test_splitter.py", line 3, in <module>
from sandhisplitter.splitter import Splitter
ImportError: No module named 'sandhisplitter'
</code></pre>
<p>These are the files I am having.</p>
| -3 | 2016-08-16T16:10:59Z | 38,981,665 | <p>Looks like you've got an incorrect import, given that <a href="https://github.com/libindic/sandhi-splitter" rel="nofollow"><strong>this</strong></a> is the right version of the library you are using. </p>
<p>Try using <code>from sandhisplitter import Sandhisplitter</code> instead.</p>
<p>Here is an example straight from the library's ReadMe:</p>
<pre><code>>>> from sandhisplitter import Sandhisplitter
>>> s = Sandhisplitter()
>>> s.split('à´à´¦àµà´¯à´®àµà´¤àµà´¤à´¿')
(['à´à´¦àµà´¯à´', 'à´à´¤àµà´¤à´¿'], [4])
>>> s.split('വയàµà´¯à´¾à´¤àµà´¯à´¾à´¯à´¿')
(['വയàµà´¯à´¾à´¤àµ', 'à´à´¯à´¿'], [7])
>>> s.split('à´à´¨àµà´¨àµà´àµà´àµà´£àµà´àµà´µà´¯àµà´¯')
(['à´à´¨àµà´¨àµà´àµà´àµà´£àµà´àµà´µà´¯àµà´¯'], [])
>>> s.split('à´à´¨àµà´¨à´¤àµà´¤àµà´àµà´à´¾à´²à´¤àµà´¤àµ')
(['à´à´¨àµà´¨à´¤àµà´¤àµà´àµà´à´¾à´²à´¤àµà´¤àµ'], [])
>>> s.split('à´à´¨àµà´¤àµà´àµà´àµà´¯àµ')
(['à´à´¨àµà´¤àµ', 'à´à´àµà´àµà´¯àµ'], [3])
>>> s.join(['à´à´¦àµà´¯à´', 'à´à´¯à´¿'])
'à´à´¦àµà´¯à´®à´¾à´¯à´¿'
</code></pre>
| 1 | 2016-08-16T18:06:58Z | [
"python"
] |
Cast a very long string as an integer or Long Integer in PySpark | 38,979,733 | <p>I'm working with a string column which is 38 characters long and is actually numerical.</p>
<p>for e.g. id = '678868938393937838947477478778877.....' ( 38 characters long). </p>
<p>How do I cast it into a long integer ? I have tried cast function with IntegerType, LongType and DoubleType and when i try to show the column it yields Nulls. </p>
<p>The reason I want to do this is because I need to do some inner joins using this column and doing it as String is giving me Java Heap Space Errors.</p>
<p>Any suggestions on how to cast it as a Long Integer ? { This question tries to cast a string into a Long Integer }</p>
| 0 | 2016-08-16T16:12:11Z | 38,981,178 | <p>Long story short: you simply don't. A Spark <code>DataFrame</code> is a JVM object which uses the following type mapping:</p>
<ul>
<li><code>IntegerType</code> -> <code>Integer</code> with <code>MAX_VALUE</code> equal 2 ** 31 - 1</li>
<li><code>LongType</code> -> <code>Long</code> with <code>MaxValue</code> equal 2 ** 63 - 1</li>
</ul>
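<p>A quick back-of-the-envelope check in plain Python of why a 38-digit id cannot fit in either JVM type:</p>

```python
int_max = 2 ** 31 - 1    # JVM Integer.MAX_VALUE
long_max = 2 ** 63 - 1   # JVM Long.MAX_VALUE

big_id = int("6" * 38)   # an arbitrary 38-digit numeric id
assert big_id > long_max > int_max

# Long.MAX_VALUE has only 19 digits, so any id with 20 or more digits overflows it
assert len(str(long_max)) == 19
```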
<p>You could try to use <code>DecimalType</code> with the maximum allowed precision (38).</p>
<pre><code>df = sc.parallelize([("9" * 38, "9" * 39)]).toDF(["x", "y"])
df.select(col("x").cast("decimal(38, 0)")).show(1, False)
## +--------------------------------------+
## |x |
## +--------------------------------------+
## |99999999999999999999999999999999999999|
## +--------------------------------------+
</code></pre>
<p>With larger numbers you can cast to double but not without a loss of precision:</p>
<pre><code>df.select(
    col("y").cast("decimal(38, 0)"), col("y").cast("double")).show(1, False)
## +----+------+
## |y |y |
## +----+------+
## |null|1.0E39|
## +----+------+
</code></pre>
<p>That being said casting to numeric types won't help you with memory errors. </p>
| 1 | 2016-08-16T17:37:21Z | [
"python",
"python-2.7",
"apache-spark",
"pyspark"
] |
Python: Excel to Web to PDF | 38,979,791 | <p>I'm new to programming and am searching for the best way to pull PDFs of a series of water bills from a city website. I have been able to open the webpage and been able to open an account using an account numbers from an excel list, however, I am having trouble <strong>creating a loop</strong> to run through all accounts without rewriting code. I have some ideas, but I'm guessing that better suggestions exist. See below for the intro code:</p>
<pre><code>import bs4, requests, openpyxl, os
os.chdir('C:\\Users\\jsmith.ABCINC\\Desktop')
addresses = openpyxl.load_workbook ('WaterBills.xlsx')
type (addresses)
sheet = addresses.get_sheet_by_name ('Sheet1')
cell = sheet ['A1']
cell.value
from selenium import webdriver
browser = webdriver.Firefox()
browser.get('https://secure.phila.gov/WRB/WaterBill/Account/GetAccount.aspx')
elem = browser.find_element_by_css_selector('#MainContent_AcctNum')
elem.click()
elem.send_keys (cell.value)
elem = browser.find_element_by_css_selector('#MainContent_btnLookup')
elem.click()
</code></pre>
<p>Thanks for your assistance!</p>
| 0 | 2016-08-16T16:14:56Z | 38,983,419 | <p>Couldn't find a nice way to download the PDF but here's everything but:</p>
<pre><code>import openpyxl
from selenium import webdriver

workbook = openpyxl.load_workbook('WaterBills.xlsx')
sheet = workbook.get_sheet_by_name('Sheet1')
column_a = sheet.columns[0]
account_numbers = [row.value for row in column_a if row.value]

browser = webdriver.Firefox()
browser.get('https://secure.phila.gov/WRB/WaterBill/Account/GetAccount.aspx')
for account_number in account_numbers:
    search_box = browser.find_element_by_id('MainContent_AcctNum')
    search_box.click()
    search_box.send_keys(account_number)
    search_button = browser.find_element_by_id('MainContent_btnLookup')
    search_button.click()
    # TODO: download the page as a PDF
    browser.back()
browser.quit()
</code></pre>
| 1 | 2016-08-16T19:57:29Z | [
"python",
"excel",
"loops",
"pdf"
] |
How to upload multiple answers with image bytestring data? | 38,979,810 | <p>According to Consumer Surveys docs, the <code>questions[].images[].data</code> field takes a bytes datatype.</p>
<p>I'm using Python 3 for implementation, but the API is giving errors like <code>Invalid ByteString</code> or bytes type <code>is not JSON serializable.</code></p>
<p>I'm using the following code:</p>
<pre><code>import base64
import urllib.request

url = 'http://example.com/image.png'
raw_img = urllib.request.urlopen(url).read()
# is not JSON serializable due to json serializer not being able to serialize raw bytes
img_data = raw_img
# next errors: Invalid ByteString, when tried with base64 encoding as followings:
img_data = base64.b64encode(raw_img)
# Also tried decoding it to UTF.8 `.decode('utf-8')`
</code></pre>
<p><code>img_data</code> is part of the JSON payload that is being sent to the API.</p>
<p>Am I missing something? what's the correct way to handle image data upload for questions? I looked into <code>https://github.com/google/consumer-surveys/tree/master/python/src</code> but there is not example of this part.</p>
<p>Thanks</p>
| 1 | 2016-08-16T16:16:09Z | 38,982,761 | <p>You need to use web-safe/URL-safe encoding. Here's some documentation on doing this in Python: <a href="https://pymotw.com/2/base64/#url-safe-variations" rel="nofollow">https://pymotw.com/2/base64/#url-safe-variations</a></p>
<p>In your case, this would look like</p>
<pre><code>img_data = base64.urlsafe_b64encode(raw_img)
</code></pre>
<p><strong>ETA:</strong> In Python 3, the API expects the image data to be of type <code>str</code> so it can be JSON serialized, but the <code>base64.urlsafe_b64encode</code> method returns the data in the form of UTF-8 <code>bytes</code>. You can fix this by converting the bytes to Unicode:</p>
<pre><code>img_data = base64.urlsafe_b64encode(raw_img)
img_data = img_data.decode('utf-8')
</code></pre>
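<p>A quick way to see the difference between the two variants: standard base64 uses <code>+</code> and <code>/</code>, which are not URL-safe, while the URL-safe alphabet swaps in <code>-</code> and <code>_</code>. The two bytes below are chosen specifically to produce those characters:</p>

```python
import base64

raw = b'\xfb\xff'  # bytes whose standard base64 encoding contains '+' and '/'
std = base64.b64encode(raw)
safe = base64.urlsafe_b64encode(raw)
print(std, safe)  # b'+/8=' b'-_8='
```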
| 2 | 2016-08-16T19:13:09Z | [
"python",
"google-surveys"
] |
Avoid duplicate result Multithread Python | 38,979,880 | <p><br >
I'm trying to make my actual crawler Multithread.<br >
When I enable multithreading, several instances of the function are started.</p>
<p><strong>Example:</strong></p>
<p>If in my function I use <code>print range(5)</code>, I will get <code>1,1,2,2,3,3,4,4,5,5</code> if I have 2 threads.</p>
<p>How can I get the result <code>1,2,3,4,5</code> with multiple threads?</p>
<p>My actual code is a crawler, as you can see below:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "http://stackoverflow.com/questions?page=" + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': 'question-hyperlink'}):
            href = link.get('href')
            title = link.string
            print(title)
            get_single_item_data("http://stackoverflow.com/" + href)
        page += 1

def get_single_item_data(item_url):
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    res = soup.find('span', {'class': 'vote-count-post '})
    print("UpVote : " + res.string)

trade_spider(1)
</code></pre>
<p>How can I call <code>trade_spider()</code> from multiple threads without fetching duplicate links?</p>
| 1 | 2016-08-16T16:20:33Z | 38,980,193 | <p>Have the page number be an argument to the <code>trade_spider</code> function. </p>
<p>Call the function in each process with a different page number so that each thread gets a unique page.</p>
<p>For example:</p>
<pre><code>import multiprocessing
def trade_spider(page):
    url = "http://stackoverflow.com/questions?page=%s" % (page,)
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    for link in soup.findAll('a', {'class': 'question-hyperlink'}):
        href = link.get('href')
        title = link.string
        print(title)
        get_single_item_data("http://stackoverflow.com/" + href)
# Pool of 10 processes
max_pages = 100
num_pages = range(1, max_pages)
pool = multiprocessing.Pool(10)
# Run and wait for completion.
# pool.map returns results from the trade_spider
# function call but that returns nothing
# so ignoring it
pool.map(trade_spider, num_pages)
</code></pre>
| 1 | 2016-08-16T16:37:53Z | [
"python",
"multithreading"
] |
Avoid duplicate result Multithread Python | 38,979,880 | <p><br >
I'm trying to make my actual crawler Multithread.<br >
When I enable multithreading, several instances of the function are started.</p>
<p><strong>Example:</strong></p>
<p>If in my function I use <code>print range(5)</code>, I will get <code>1,1,2,2,3,3,4,4,5,5</code> if I have 2 threads.</p>
<p>How can I get the result <code>1,2,3,4,5</code> with multiple threads?</p>
<p>My actual code is a crawler, as you can see below:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "http://stackoverflow.com/questions?page=" + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': 'question-hyperlink'}):
            href = link.get('href')
            title = link.string
            print(title)
            get_single_item_data("http://stackoverflow.com/" + href)
        page += 1

def get_single_item_data(item_url):
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    res = soup.find('span', {'class': 'vote-count-post '})
    print("UpVote : " + res.string)

trade_spider(1)
</code></pre>
<p>How can I call <code>trade_spider()</code> from multiple threads without fetching duplicate links?</p>
| 1 | 2016-08-16T16:20:33Z | 38,981,186 | <p>Try this:</p>
<pre><code>from multiprocessing import Process, Value
import time
max_pages = 100
shared_page = Value('i', 1)
arg_list = (max_pages, shared_page)
process_list = list()
for x in range(2):
    spider_process = Process(target=trade_spider, args=arg_list)
    spider_process.daemon = True
    spider_process.start()
    process_list.append(spider_process)

for spider_process in process_list:
    while spider_process.is_alive():
        time.sleep(1.0)
    spider_process.join()
</code></pre>
<p>Change the parameter list of <code>trade_spider</code> to</p>
<pre><code>def trade_spider(max_pages, page):
</code></pre>
<p>and remove the</p>
<pre><code> page = 1
</code></pre>
<p>This will create two processes that will work through the page list by sharing the <code>page</code> value.</p>
| 1 | 2016-08-16T17:38:03Z | [
"python",
"multithreading"
] |
Django password and password confirmation validation | 38,979,919 | <p>I'm having a little trouble validating two fields (password and password confirmation) in the same form.</p>
<p>The thing is that after validating password with a method I've created, when I try to validate password confirmation, I no longer have access to this variable, and <code>password = self.cleaned_data['password']</code> is <code>'None'</code>.</p>
<pre><code>class NewAccountForm(forms.Form):
    password = forms.CharField(widget=forms.PasswordInput(attrs={'class': 'narrow-input', 'required': 'true'}), required=True, help_text='Password must be 8 characters minimum length (with at least 1 lower case, 1 upper case and 1 number).')
    password_confirm = forms.CharField(widget=forms.PasswordInput(attrs={'class': 'narrow-input', 'required': 'true'}), required=True)

    def __init__(self, *args, **kwargs):
        super(NewAccountForm, self).__init__(*args, **kwargs)
        self.fields['password'].label = "Password"
        self.fields['password_confirm'].label = "Password Confirmation"
</code></pre>
<p>"Password validation" <- this validation is working.</p>
<pre><code>def clean_password(self):
    validate_password_strength(self.cleaned_data['password'])
</code></pre>
<p>This second validation isn't correctly performed because password = 'None':</p>
<pre><code>def clean_password_confirm(self):
    password = self.cleaned_data['password']
    password_confirm = self.cleaned_data.get('password_confirm')
    print(password)
    print(password_confirm)
    if password and password_confirm:
        if password != password_confirm:
            raise forms.ValidationError("The two password fields must match.")
    return password_confirm
</code></pre>
<p>Is there a way to use input for field password as a variable to the second validation (clean_password_confirm) if it is already validated by the first method (clean_password)?</p>
<p>Thanks.</p>
<p><strong>EDIT:</strong> Updated version:</p>
<pre><code>def clean(self):
    cleaned_data = super(NewAccountForm, self).clean()
    password = cleaned_data.get('password')

    # check for min length
    min_length = 8
    if len(password) < min_length:
        msg = 'Password must be at least %s characters long.' % (str(min_length))
        self.add_error('password', msg)

    # check for digit
    if sum(c.isdigit() for c in password) < 1:
        msg = 'Password must contain at least 1 number.'
        self.add_error('password', msg)

    # check for uppercase letter
    if not any(c.isupper() for c in password):
        msg = 'Password must contain at least 1 uppercase letter.'
        self.add_error('password', msg)

    # check for lowercase letter
    if not any(c.islower() for c in password):
        msg = 'Password must contain at least 1 lowercase letter.'
        self.add_error('password', msg)

    password_confirm = cleaned_data.get('password_confirm')
    if password and password_confirm:
        if password != password_confirm:
            msg = "The two password fields must match."
            self.add_error('password_confirm', msg)
    return cleaned_data
</code></pre>
| 1 | 2016-08-16T16:23:33Z | 38,979,961 | <p>You can test for multiple fields in the <code>clean()</code> method.</p>
<p>Example:</p>
<pre><code>def clean(self):
    cleaned_data = super(NewAccountForm, self).clean()
    password = cleaned_data.get('password')
    password_confirm = cleaned_data.get('password_confirm')
    if password and password_confirm:
        if password != password_confirm:
            raise forms.ValidationError("The two password fields must match.")
    return cleaned_data
</code></pre>
<p>See the documentation on <a href="https://docs.djangoproject.com/en/1.10/ref/forms/validation/#validating-fields-with-clean" rel="nofollow">Cleaning and validating fields that depend on each other</a>.</p>
| 2 | 2016-08-16T16:25:22Z | [
"python",
"django"
] |
python encoding issue, searching tweets | 38,979,928 | <p>I have written the following code to crawl tweets with 'utf-8' encoding: </p>
<pre><code>kws=[]
f=codecs.open("keywords", encoding='utf-8')
kws = f.readlines()
f.close()
print kws
for kw in kws:
    timeline_endpoint = 'https://api.twitter.com/1.1/search/tweets.json?q=' + kw + '&count=100&lang=fr'
    print timeline_endpoint
    response, data = client.request(timeline_endpoint)
    tweets = json.loads(data)
    for tweet in tweets['statuses']:
        my_on_data(json.dumps(tweet.encode('utf-8')))
    time.sleep(3)
</code></pre>
<p>but I am getting the following error:</p>
<pre><code>response, data = client.request(timeline_endpoint)
File "build/bdist.linux-x86_64/egg/oauth2/__init__.py", line 676, in request
File "build/bdist.linux-x86_64/egg/oauth2/__init__.py", line 440, in to_url
File "/usr/lib/python2.7/urllib.py", line 1357, in urlencode
l.append(k + '=' + quote_plus(str(elt)))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 1: ordinal not in range(128)
</code></pre>
<p>I would appreciate any help.</p>
| 0 | 2016-08-16T16:24:03Z | 38,980,285 | <p>There may be latin-1 encoded strings in some Twitter feeds. To deal with those, you need to decode them from latin-1 to unicode and then encode to utf-8:</p>
<pre><code>my_on_data(json.dumps(tweet.decode("latin-1").encode('utf-8')))
</code></pre>
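<p>A standalone illustration of that decode/encode round trip (written in Python 3 syntax; in the question's Python 2 code the byte string would be a plain <code>str</code>):</p>

```python
raw = b'caf\xe9'              # latin-1 encoded bytes for 'café'
text = raw.decode('latin-1')  # bytes -> unicode text
utf8 = text.encode('utf-8')   # unicode text -> utf-8 bytes
print(utf8)  # b'caf\xc3\xa9'
```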
| -1 | 2016-08-16T16:43:11Z | [
"python",
"encoding",
"urllib",
"tweepy"
] |
python encoding issue, searching tweets | 38,979,928 | <p>I have written the following code to crawl tweets with 'utf-8' encoding: </p>
<pre><code>kws=[]
f=codecs.open("keywords", encoding='utf-8')
kws = f.readlines()
f.close()
print kws
for kw in kws:
    timeline_endpoint = 'https://api.twitter.com/1.1/search/tweets.json?q=' + kw + '&count=100&lang=fr'
    print timeline_endpoint
    response, data = client.request(timeline_endpoint)
    tweets = json.loads(data)
    for tweet in tweets['statuses']:
        my_on_data(json.dumps(tweet.encode('utf-8')))
    time.sleep(3)
</code></pre>
<p>but I am getting the following error:</p>
<pre><code>response, data = client.request(timeline_endpoint)
File "build/bdist.linux-x86_64/egg/oauth2/__init__.py", line 676, in request
File "build/bdist.linux-x86_64/egg/oauth2/__init__.py", line 440, in to_url
File "/usr/lib/python2.7/urllib.py", line 1357, in urlencode
l.append(k + '=' + quote_plus(str(elt)))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 1: ordinal not in range(128)
</code></pre>
<p>I would appreciate any help.</p>
| 0 | 2016-08-16T16:24:03Z | 38,980,694 | <p>Okay, here is the solution using a different search approach:</p>
<pre><code>auth = tweepy.OAuthHandler("k1", "k2")
auth.set_access_token("k3", "k4")
api = tweepy.API(auth)
for kw in kws:
    max_tweets = 10
    searched_tweets = [status for status in tweepy.Cursor(api.search, q=kw.encode('utf-8')).items(max_tweets)]
    for tweet in searched_tweets:
        my_on_data(json.dumps(tweet._json))
    time.sleep(3)
</code></pre>
| 0 | 2016-08-16T17:07:03Z | [
"python",
"encoding",
"urllib",
"tweepy"
] |
How to properly wrap a C library with Python CFFI | 38,979,947 | <p>I am trying to wrap a very simple C library containing only two .C source files: <em>dbc2dbf.c</em> and <em>blast.c</em></p>
<p>I am doing the following (from the documentation):</p>
<pre><code>import os
from cffi import FFI
blastbuilder = FFI()
ffibuilder = FFI()
with open(os.path.join(os.path.dirname(__file__), "c-src/blast.c")) as f:
    blastbuilder.set_source("blast", f.read(), libraries=["c"])
with open(os.path.join(os.path.dirname(__file__), "c-src/blast.h")) as f:
    blastbuilder.cdef(f.read())
blastbuilder.compile(verbose=True)

with open('c-src/dbc2dbf.c', 'r') as f:
    ffibuilder.set_source("_readdbc",
                          f.read(),
                          libraries=["c"])
with open(os.path.join(os.path.dirname(__file__), "c-src/blast.h")) as f:
    ffibuilder.cdef(f.read(), override=True)

if __name__ == "__main__":
    # ffibuilder.include(blastbuilder)
    ffibuilder.compile(verbose=True)
</code></pre>
<p>This is not quite working. I think I am not including <em>blast.c</em> correctly;</p>
<p>Can anyone help?</p>
| 1 | 2016-08-16T16:24:52Z | 39,018,217 | <p>Here is the solution (tested):</p>
<pre><code>import os
from cffi import FFI

ffibuilder = FFI()
PATH = os.path.dirname(__file__)

with open(os.path.join(PATH, 'c-src/dbc2dbf.c'), 'r') as f:
    ffibuilder.set_source("_readdbc",
                          f.read(),
                          libraries=["c"],
                          sources=[os.path.join(PATH, "c-src/blast.c")],
                          include_dirs=[os.path.join(PATH, "c-src/")]
                          )

ffibuilder.cdef(
    """
    static unsigned inf(void *how, unsigned char **buf);
    static int outf(void *how, unsigned char *buf, unsigned len);
    void dbc2dbf(char** input_file, char** output_file);
    """
)

with open(os.path.join(PATH, "c-src/blast.h")) as f:
    ffibuilder.cdef(f.read(), override=True)

if __name__ == "__main__":
    ffibuilder.compile(verbose=True)
</code></pre>
| 0 | 2016-08-18T12:27:18Z | [
"python",
"c",
"python-cffi"
] |
python parsing json response | 38,979,973 | <p>I'm a newb with JSON and am trying to understand how to parse a JSON response. In the example below, I'd like to know how to retrieve the value of 'issueId':'executions':'id'? in the example below it is '8195'.....</p>
<pre><code>r = requests.get(baseURL + getExecutionsForIssueId + id, auth=('user','pass'))
data = r.json()

JSON Response:
{
    "status": {
        "1": {
            "id": 1,
            "color": "#75B000",
            "description": "Test was executed and passed successfully.",
            "name": "PASS"
        },
        "2": {
            "id": 2,
            "color": "#CC3300",
            "description": "Test was executed and failed.",
            "name": "FAIL"
        },
        "3": {
            .
            .
            .
        }
    },
    "issueId": 15825,
    "executions": [
        {
            "id": 8195,
            "orderId": 7635,
            "executionStatus": "-1",
            "comment": "",
            "htmlComment": "",
            .
            .
            .
</code></pre>
| 0 | 2016-08-16T16:26:14Z | 38,980,085 | <p>Your <code>JSON</code> object is just a dictionary in Python. Access the values you need like so:</p>
<p><code>data['executions']</code> yields an array of similar dictionary objects, assuming your <code>JSON</code> response is typed as you intended. </p>
<pre><code>executions = data['executions']
order_id = executions[0]['orderId']
</code></pre>
<p>If you wish to loop over them to find the correct object with an <code>id</code> of 8195:</p>
<pre><code>executions = data['executions'] # [{'id':8195,'orderId':7635,...}, {...}, ...]
for e in executions:
    if e['id'] == 8195: # e is the dict you want
        order_id = e['orderId']
</code></pre>
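<p>Putting the two snippets together on a trimmed-down stand-in for the response (only <code>issueId</code> and one execution kept; the values are the ones shown in the question):</p>

```python
import json

# miniature version of the response shown in the question
payload = json.loads('{"issueId": 15825, "executions": [{"id": 8195, "orderId": 7635}]}')

# first execution whose id matches, or None if there is no such entry
match = next((e for e in payload["executions"] if e["id"] == 8195), None)
order_id = match["orderId"] if match else None
print(order_id)  # 7635
```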
| 0 | 2016-08-16T16:32:27Z | [
"python",
"json"
] |
Implement bias neurons neural network | 38,980,018 | <p>I implemented bias units for my neural network with gradient descent. But I'm not 100% sure if I've implemented it the right way. Would be glad if you could quickly look through my code. Only the parts with </p>
<blockquote>
<p>if bias:</p>
</blockquote>
<p>are important.</p>
<p>And my second question:
Shouldn't the derivative of the softmax function be 1-x, because x is the output of the softmax function?
I tried my net with 1-x but its performance was worse.</p>
<p>Any help is appreciated.
Thanks in advance.</p>
<pre><code>import numpy as np
import pickle
import time
import math

class FeedForwardNetwork():
    def __init__(self, input_dim, hidden_dim, output_dim, dropout=False, dropout_prop=0.5, bias=False):
        np.random.seed(1)
        self.input_layer = np.array([])
        self.hidden_layer = np.array([])
        self.output_layer = np.array([])
        self.hidden_dim = hidden_dim
        self.dropout = dropout
        self.dropout_prop = dropout_prop
        self.bias = bias
        r_input_hidden = math.sqrt(6 / (input_dim + hidden_dim))
        r_hidden_output = math.sqrt(6 / (hidden_dim + output_dim))
        #self.weights_input_hidden = np.random.uniform(low=-r_input_hidden, high=r_input_hidden, size=(input_dim, hidden_dim))
        #self.weights_hidden_output = np.random.uniform(low=-r_hidden_output, high=r_hidden_output, size=(hidden_dim, output_dim))
        self.weights_input_hidden = np.random.uniform(low=-0.01, high=0.01, size=(input_dim, hidden_dim))
        self.weights_hidden_output = np.random.uniform(low=-0.01, high=0.01, size=(hidden_dim, output_dim))
        self.validation_data = np.array([])
        self.validation_data_solution = np.array([])
        self.velocities_input_hidden = np.zeros(self.weights_input_hidden.shape)
        self.velocities_hidden_output = np.zeros(self.weights_hidden_output.shape)
        if bias:
            self.weights_bias_hidden = np.random.uniform(low=-0.01, high=0.01, size=(1, hidden_dim))
            self.weights_bias_output = np.random.uniform(low=-0.01, high=0.01, size=(1, output_dim))
            self.velocities_bias_hidden = np.zeros(self.weights_bias_hidden.shape)
            self.velocities_bias_output = np.zeros(self.weights_bias_output.shape)

    def _tanh(self, x, deriv=False):
        #The derivative is 1-np.tanh(x)**2; because x is already the output of tanh(x), 1-x*x is the correct derivative.
        if not deriv:
            return np.tanh(x)
        return 1 - x*x

    def _softmax(self, x, deriv=False):
        if not deriv:
            return np.exp(x) / np.sum(np.exp(x), axis=0)
        return 1 - np.exp(x) / np.sum(np.exp(x), axis=0)

    def set_training_data(self, training_data_input, training_data_target, validation_data_input=None, validation_data_target=None):
        """Splits the data up into training and validation data with a ratio of 0.85/0.15 if no validation data is given.
        Sets the data for training."""
        if len(training_data_input) != len(training_data_target):
            raise ValueError(
                'Number of training examples and'
                ' training targets does not match!'
            )
        if (validation_data_input is None) and (validation_data_target is None):
            len_training_data = int((len(training_data_input)/100*85//1))
            self.input_layer = training_data_input[:len_training_data]
            self.output_layer = training_data_target[:len_training_data]
            self.validation_data = training_data_input[len_training_data:]
            self.validation_data_solution = training_data_target[len_training_data:]
        else:
            self.input_layer = training_data_input
            self.output_layer = training_data_target
            self.validation_data = validation_data_input
            self.validation_data_solution = validation_data_target

    def save(self, filename):
        """Saves the weights into a pickle file."""
        with open(filename, "wb") as network_file:
            pickle.dump(self.weights_input_hidden, network_file)
            pickle.dump(self.weights_hidden_output, network_file)

    def load(self, filename):
        """Loads network weights from a pickle file."""
        with open(filename, "rb") as network_file:
            weights_input_hidden = pickle.load(network_file)
            weights_hidden_output = pickle.load(network_file)
        if (
            len(weights_input_hidden) != len(self.weights_input_hidden)
            or len(weights_hidden_output) != len(self.weights_hidden_output)
        ):
            raise ValueError(
                'File contains weights that do not'
                ' match the current network size!'
            )
        self.weights_input_hidden = weights_input_hidden
        self.weights_hidden_output = weights_hidden_output

    def measure_error(self, input_data, output_data):
        return 1/2 * np.sum((output_data - self.forward_propagate(input_data))**2)
        #return np.sum(np.nan_to_num(-output_data*np.log(self.forward_propagate(input_data))-(1-output_data)*np.log(1-self.forward_propagate(input_data))))

    def forward_propagate(self, input_data, dropout=False):
        """Propagates the input data from the input neurons up to the output neurons and returns the output layer.
        If dropout is True some of the neurons are randomly turned off."""
        input_layer = input_data
        self.hidden_layer = self._tanh(np.dot(input_layer, self.weights_input_hidden))
        if self.bias:
            self.hidden_layer += self.weights_bias_hidden
        if dropout:
            self.hidden_layer *= np.random.binomial([np.ones((len(input_data), self.hidden_dim))], 1-self.dropout_prop)[0] * (1.0/(1-self.dropout_prop))
        if self.bias:
            return self._softmax((np.dot(self.hidden_layer, self.weights_hidden_output) + self.weights_bias_output).T).T
        else:
            return self._softmax(np.dot(self.hidden_layer, self.weights_hidden_output).T).T
        #return self._softmax(output_layer.T).T

    def back_propagate(self, input_data, output_data, alpha, beta, momentum):
        """Calculates the difference between target output and actual output and adjusts the weights to fit the target output better.
        The parameter alpha is the learning rate.
        Beta is the parameter for weight decay, which penalizes large weights."""
        sample_count = len(input_data)
        output_layer = self.forward_propagate(input_data, dropout=self.dropout)
        output_layer_error = output_layer - output_data
        output_layer_delta = output_layer_error * self._softmax(output_layer, deriv=True)
        print("Error: ", np.mean(np.abs(output_layer_error)))
        #How much did each hidden neuron contribute to the output error?
        #Multiplies the delta term with the weights
        hidden_layer_error = output_layer_delta.dot(self.weights_hidden_output.T)
        #If the prediction is good, the second term will be small and the change will be small
        #Ex: target: 1 -> Slope will be 1 so the second term will be big
        hidden_layer_delta = hidden_layer_error * self._tanh(self.hidden_layer, deriv=True)
        #Both lines return a matrix. A row stands for all weights connected to one neuron.
        #E.g. [1, 2, 3] -> Weights to Neuron A
        #     [4, 5, 6] -> Weights to Neuron B
        hidden_weights_gradient = input_data.T.dot(hidden_layer_delta)/sample_count
        output_weights_gradient = self.hidden_layer.T.dot(output_layer_delta)/sample_count
        velocities_input_hidden = self.velocities_input_hidden
        velocities_hidden_output = self.velocities_hidden_output
        self.velocities_input_hidden = velocities_input_hidden * momentum - alpha * hidden_weights_gradient
        self.velocities_hidden_output = velocities_hidden_output * momentum - alpha * output_weights_gradient
        #Includes momentum term and weight decay; the weight decay parameter is beta
        #Weight decay penalizes large weights to prevent overfitting
        self.weights_input_hidden += (-velocities_input_hidden * momentum + (1 + momentum) * self.velocities_input_hidden
                                      - alpha * beta * self.weights_input_hidden / sample_count)
        self.weights_hidden_output += (-velocities_hidden_output * momentum + (1 + momentum) * self.velocities_hidden_output
                                       - alpha * beta * self.weights_hidden_output / sample_count)
        if self.bias:
            velocities_bias_hidden = self.velocities_bias_hidden
            velocities_bias_output = self.velocities_bias_output
            hidden_layer_delta = np.sum(hidden_layer_delta, axis=0)
            output_layer_delta = np.sum(output_layer_delta, axis=0)
            self.velocities_bias_hidden = velocities_bias_hidden * momentum - alpha * hidden_layer_delta
            self.velocities_bias_output = velocities_bias_output * momentum - alpha * output_layer_delta
            self.weights_bias_hidden += (-velocities_bias_hidden * momentum + (1 + momentum) * self.velocities_bias_hidden
                                         - alpha * beta * self.weights_bias_hidden / sample_count)
            self.weights_bias_output += (-velocities_bias_output * momentum + (1 + momentum) * self.velocities_bias_output
                                         - alpha * beta * self.weights_bias_output / sample_count)

    def batch_train(self, epochs, alpha, beta, momentum, patience=10):
        """Trains the network in batch mode, that means the weights are updated after showing all training examples.
        alpha is the learning rate and patience is the number of epochs that the validation error is allowed to increase before aborting.
        Beta is the parameter for weight decay, which penalizes large weights."""
        #The weight decay parameter is beta
        validation_error = self.measure_error(self.validation_data, self.validation_data_solution)
        for epoch in range(epochs):
            self.back_propagate(self.input_layer, self.output_layer, alpha, beta, momentum)
            validation_error_new = self.measure_error(self.validation_data, self.validation_data_solution)
            if validation_error_new < validation_error:
                validation_error = validation_error_new
            else:
                patience -= 1
                if patience == 0:
                    print("Abort Training. Overfitting has started! Epoch: {0}. Error: {1}".format(epoch, validation_error_new))
                    return
            print("Epoch: {0}, Validation Error: {1}".format(epoch, validation_error))
            self.save("Network_Mnist.net")

    def mini_batch_train(self, batch_size, epochs, alpha, beta, momentum, patience=10):
        """Trains the network in mini batch mode, that means the weights are updated after showing only a bunch of training examples.
        alpha is the learning rate and patience is the number of epochs that the validation error is allowed to increase before aborting."""
        validation_error = self.measure_error(self.validation_data, self.validation_data_solution)
        sample_count = len(self.input_layer)
        epoch_counter = 0
        for epoch in range(0, epochs*batch_size, batch_size):
            epoch_counter += 1
            self.back_propagate(self.input_layer[epoch%sample_count:(epoch%sample_count)+batch_size],
                                self.output_layer[epoch%sample_count:(epoch%sample_count)+batch_size], alpha, beta, momentum)
            validation_error_new = self.measure_error(self.validation_data, self.validation_data_solution)
            if validation_error_new < validation_error:
                validation_error = validation_error_new
                patience = 20
            else:
                patience -= 1
                if patience == 0:
                    print("Abort Training. Overfitting has started! Epoch: {0}. Error: {1}".format(epoch_counter, validation_error_new))
                    return
            print("Epoch: {0}, Validation Error: {1}".format(epoch_counter, validation_error))
            self.save("Network_Mnist.net")

if __name__ == "__main__":
    #If the first value in a row is a one, the first output neuron should be on, the second off
    x = np.array([ [0, 0, 1, 1, 0],
                   [0, 1, 1, 1, 1],
                   [1, 0, 1, 1, 1],
                   [1, 1, 1, 1, 0],
                   [0, 1, 1, 1, 0],
                   [1, 1, 0, 0, 0],
                   [1, 1, 0, 0, 0],
                   [1, 0, 1, 0, 0] ])
    y = np.array([ [0, 1],
                   [0, 1],
                   [1, 0],
                   [1, 0],
                   [0, 1],
                   [1, 0],
                   [1, 0],
                   [1, 0] ])
    #x = np.array([ [0, 0, 1, 1] ])
    #y = np.array([[0]]).T
    a = FeedForwardNetwork(input_dim=5, hidden_dim=200, output_dim=2, bias=False)
    a.set_training_data(x, y)
    start = time.time()
    a.batch_train(epochs=2000, alpha=0.05, beta=0.0001, momentum=0.99, patience=20)
    print(time.time()-start)
</code></pre>
| 0 | 2016-08-16T16:28:24Z | 38,985,160 | <p><strong>In relation with the derivatives...</strong></p>
<p>If you are using the <code>tanh</code> activation function, <a href="https://en.wikipedia.org/wiki/Hyperbolic_function#Derivatives" rel="nofollow">the derivative is:</a> <code>y' = 1 - y^2</code>, where <code>y</code> is the <code>tanh</code> output. The <code>tanh</code> is commonly used because it is zero-centered. </p>
<p>If you are using the logistic equation, then <a href="https://en.wikipedia.org/wiki/Logistic_function#Derivative" rel="nofollow">the derivative is:</a> <code>y' = y(1-y)</code>. The softmax has a <a href="https://en.wikipedia.org/wiki/Softmax_function#Artificial_neural_networks" rel="nofollow">similar derivative</a>. </p>
<p>The nice thing is that all these can be expressed as functions of themselves, so you need to have a look at the <code>def _softmax(self, x, deriv=False)</code> function, to define it in a similar way than <code>def _tanh(self, x, deriv=False)</code>. </p>
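<p>A quick numeric check of the derivatives expressed through the function outputs (self-contained sketch using plain <code>math</code>; note the logistic derivative is <code>y(1-y)</code>):</p>

```python
import math

def tanh_deriv(y):
    # derivative of tanh written in terms of the tanh *output* y
    return 1 - y * y

def logistic_deriv(y):
    # derivative of the logistic function written in terms of its output y
    return y * (1 - y)

sigmoid = lambda t: 1 / (1 + math.exp(-t))

x, eps = 0.7, 1e-6
# central finite differences as an independent reference
numeric_tanh = (math.tanh(x + eps) - math.tanh(x - eps)) / (2 * eps)
numeric_sigmoid = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)

print(abs(tanh_deriv(math.tanh(x)) - numeric_tanh) < 1e-6)       # True
print(abs(logistic_deriv(sigmoid(x)) - numeric_sigmoid) < 1e-6)  # True
```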
| 0 | 2016-08-16T22:00:03Z | [
"python",
"neural-network",
"backpropagation",
"gradient-descent",
"bias-neuron"
] |
Looking for the equivalent of dictcursor in flaskext.mysql | 38,980,044 | <p>I've written a Python Flask app, and initially used MySQLdb to access MySQL. Later I've switched to flaskext.mysql for the same purposes, but now when I use this module I cannot see how to get a dictionary structured cursor.</p>
<p>When using the MySQLdb module I was using the following line to open a dictionary based cursor - </p>
<pre><code>import MySQLdb as mdb
con = mdb.connect('localhost','root','root','transport')
with con:
    cur = con.cursor(mdb.cursors.DictCursor)
</code></pre>
<p>Now I'm trying to do the same with flaskext.mysql, my current code looks like this - </p>
<pre><code>from flaskext.mysql import MySQL
cur = mysql.get_db().cursor()
</code></pre>
<p>What should I feed the cursor object in order to get the same type of cursor?</p>
 | 1 | 2016-08-16T16:30:02Z | 38,980,156 | <p><a href="https://github.com/cyberdelia/flask-mysql/blob/master/flaskext/mysql.py#L58" rel="nofollow"><code>mysql.get_db()</code></a> returns your "connection" object, so you can do:</p>
<pre><code>import MySQLdb as mdb
cur = mysql.get_db().cursor(mdb.cursors.DictCursor)
</code></pre>
<p>Or, you can also set the default <code>cursorclass</code> when initializing the extension:</p>
<pre><code>mysql = MySQL(cursorclass=mdb.cursors.DictCursor)
</code></pre>
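<p>Under the hood, a dictionary cursor simply zips the column names from <code>cursor.description</code> onto each row tuple. A plain-Python sketch of that idea, with made-up column data instead of a live connection:</p>

```python
def rows_to_dicts(description, rows):
    # description mimics cursor.description: one tuple per column, name first
    cols = [d[0] for d in description]
    return [dict(zip(cols, row)) for row in rows]

description = (("id", None), ("city", None))
rows = [(1, "sfo"), (2, "yyz")]
records = rows_to_dicts(description, rows)
print(records[0])  # {'id': 1, 'city': 'sfo'}
```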
| 1 | 2016-08-16T16:36:17Z | [
"python",
"mysql",
"flask"
] |
drop labels created by agg(['sum','count']) | 38,980,138 | <p>I have a df that can be reconstructed as pd.DataFrame(dict):</p>
<pre><code>            city    d1    d2     d3     d4
date
2014-05-01   sfo  4.26  6.58   2.32  -4.87
2014-05-01   yyz  2.90  6.64  24.78 -50.55
2014-05-01   yvr  2.90  6.64  24.78 -50.55
2014-05-01   dfw  4.06  6.54   2.40  -4.06
2014-05-01   pdx  9.96  6.66  30.35  64.24
</code></pre>
<p><strong>dict:</strong> </p>
<pre><code>{'date': {0: pd.Timestamp('2014-05-01 00:00:00'),
          1: pd.Timestamp('2014-05-01 00:00:00'),
          2: pd.Timestamp('2014-05-01 00:00:00'),
          3: pd.Timestamp('2014-05-01 00:00:00'),
          4: pd.Timestamp('2014-05-01 00:00:00')},
 'city': {0: 'sfo', 1: 'yyz', 2: 'yvr', 3: 'dfw', 4: 'pdx'},
 'd1': {0: 4.2599999999999998,
        1: 2.8999999999999999,
        2: 2.8999999999999999,
        3: 4.0599999999999996,
        4: 9.9600000000000009},
 'd2': {0: 6.5800000000000001,
        1: 6.6399999999999997,
        2: 6.6399999999999997,
        3: 6.54,
        4: 6.6600000000000001},
 'd3': {0: 2.3199999999999998,
        1: 24.780000000000001,
        2: 24.780000000000001,
        3: 2.3999999999999999,
        4: 30.350000000000001},
 'd4': {0: -4.8700000000000001,
        1: -50.549999999999997,
        2: -50.549999999999997,
        3: -4.0599999999999996,
        4: 64.239999999999995}}

df.set_index(['date'], inplace=True)
</code></pre>
<p>I perform the following aggregate via <code>TimeGrouper</code>:</p>
<pre><code>grouped = df.groupby(['city', pd.TimeGrouper('M')])
monthly_agg = grouped.agg(['sum', 'count'])
</code></pre>
<p>monthly_agg looks like:</p>
<pre><code>                   d1          d2          d3           d4
                  sum count   sum count   sum count    sum count
city date
dfw  2014-05-31  4.06     1  6.54     1  2.40     1  -4.06     1
pdx  2014-05-31  9.96     1  6.66     1 30.35     1  64.24     1
sfo  2014-05-31  4.26     1  6.58     1  2.32     1  -4.87     1
yvr  2014-05-31  2.90     1  6.64     1 24.78     1 -50.55     1
yyz  2014-05-31  2.90     1  6.64     1 24.78     1 -50.55     1
</code></pre>
<p>The <code>count</code> label column is used for sanity checking but once that is done, I want to be able to drop it. </p>
<p>As well, the <code>sum</code> label under d1,d2,d3,d4 etc is no longer necessary</p>
<p>My desired output:</p>
<pre><code>                   d1    d2     d3     d4
city date
dfw  2014-05-31  4.06  6.54   2.40  -4.06
pdx  2014-05-31  9.96  6.66  30.35  64.24
sfo  2014-05-31  4.26  6.58   2.32  -4.87
yvr  2014-05-31  2.90  6.64  24.78 -50.55
yyz  2014-05-31  2.90  6.64  24.78 -50.55
</code></pre>
<p>How do I get this?</p>
| 1 | 2016-08-16T16:35:29Z | 38,980,223 | <pre><code>monthly_agg = monthly_agg.loc[:, pd.IndexSlice[:,'sum']]
monthly_agg.columns = monthly_agg.columns.droplevel(1)
monthly_agg
Out:
                   d1    d2     d3     d4
city date
dfw  2014-05-31  4.06  6.54   2.40  -4.06
pdx  2014-05-31  9.96  6.66  30.35  64.24
sfo  2014-05-31  4.26  6.58   2.32  -4.87
yvr  2014-05-31  2.90  6.64  24.78 -50.55
yyz  2014-05-31  2.90  6.64  24.78 -50.55
</code></pre>
<p>Or with xs:</p>
<pre><code>monthly_agg.xs('sum', axis=1, level=1)
Out:
                   d1    d2     d3     d4
city date
dfw  2014-05-31  4.06  6.54   2.40  -4.06
pdx  2014-05-31  9.96  6.66  30.35  64.24
sfo  2014-05-31  4.26  6.58   2.32  -4.87
yvr  2014-05-31  2.90  6.64  24.78 -50.55
yyz  2014-05-31  2.90  6.64  24.78 -50.55
</code></pre>
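<p>The same column handling on a tiny frame rebuilt from scratch, so it is easy to follow (two measure columns instead of four, one row; values borrowed from the question):</p>

```python
import pandas as pd

# rebuild the two-level column layout produced by agg(['sum', 'count'])
cols = pd.MultiIndex.from_product([["d1", "d2"], ["sum", "count"]])
df = pd.DataFrame([[4.06, 1, 6.54, 1]], index=["dfw"], columns=cols)

# keep only the 'sum' columns; xs drops the inner level for you
out = df.xs("sum", axis=1, level=1)
print(list(out.columns))  # ['d1', 'd2']
```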
| 3 | 2016-08-16T16:39:42Z | [
"python",
"pandas",
"aggregate"
] |
Set Seaborn PairGrid x-axis with 2 different value ranges | 38,980,179 | <p>[The resolution is described below.]</p>
<p>I'm trying to create a PairGrid. The X-axis has at least 2 different value ranges, although even when 'cvar' below is plotted by itself the x-axis overwrites itself. </p>
<p>My question: is there a way to tilt the x-axis labels to be vertical or have fewer x-axis labels so they don't overlap? Is there another way to solve this issue?</p>
<p>====================</p>
<pre><code>import seaborn as sns
import matplotlib.pylab as plt
import pandas as pd
import numpy as np
columns = ['avar', 'bvar', 'cvar']
index = np.arange(10)
df = pd.DataFrame(columns=columns, index = index)
myarray = np.random.random((10, 3))
for val, item in enumerate(myarray):
    df.ix[val] = item
df['cvar'] = [400,450,43567,23000,19030,35607,38900,30202,24332,22322]
fig1 = sns.PairGrid(df, y_vars=['avar'],
                    x_vars=['bvar', 'cvar'],
                    palette="GnBu_d")
fig1.map(plt.scatter, s=40, edgecolor="white")
# The fix: Add the following to rotate the x axis.
plt.xticks( rotation= -45 )
</code></pre>
<p>=====================</p>
<p>The code above produces this image
<a href="http://i.stack.imgur.com/uMzNk.png" rel="nofollow"><img src="http://i.stack.imgur.com/uMzNk.png" alt="enter image description here"></a></p>
<p>Thanks! </p>
 | 0 | 2016-08-16T16:37:26Z | 39,020,838 | <p>I finally figured it out. I added "plt.xticks( rotation= -45 )" to the original code above. More can be found on the MatPlotLib site <a href="http://matplotlib.org/examples/ticks_and_spines/ticklabels_demo_rotation.html?highlight=xticks%20rotation" rel="nofollow">here</a>. </p>
| 0 | 2016-08-18T14:28:32Z | [
"python",
"seaborn"
] |
Python exception while parsing json to avro schema: avro.schema.SchemaParseException: No "type" property | 38,980,367 | <p>I read a record from a file and convert it into a dictionary. Later I convert that dictionary to json format so that I could further try to convert it to an avro schema. </p>
<p>Here is my code snippet so far:-</p>
<pre><code>import json
from avro import schema, datafile, io

def json_to_avro():
    fo = open("avro_record.txt", "r")
    data = fo.readlines()
    final_header = []
    final_rec = []
    for header in data[0:1]:
        header = header.strip("\n")
        header = header.split(",")
        final_header = header
    for rec in data[1:]:
        rec = rec.strip("\n")
        rec = rec.split(" ")
        rec = ' '.join(rec).split()
        final_rec = rec
    final_dict = dict(zip(final_header, final_rec))
    #print final_dict
    json_dumps = json.dumps(final_dict, ensure_ascii=False)
    #print json_dumps
    SCHEMA = schema.parse(json_dumps)

json_to_avro()
</code></pre>
<p>When I print final_dict, output is:-</p>
<pre><code>{'TransportProtocol': 'udp', 'MSISDN': '+62696174735', 'ResponseCode': 'E6%B8 %B8%E%A%8%E%8%93&pfid=139ver=10.1.2.571title=Air%20fighter_pakage.apk', 'GGSN IP': '202.89.193.185', 'MSTimeZone': '+0008', 'Numbers of time period': '1', 'Mime Type': 'audio/aac', 'EndTime': '1462251588', 'OutBound': '709', 'Inbound': '35','Method': 'GET', 'RAT': 'ph', 'Referer': 'ghijk', 'TAC': '35893783', 'UserAgent': '961', 'MNC': '02', 'OutPayload': '0', 'CI': '34301', 'StartTime': '1462251588', 'DestinationIP':'ef50:5fcd:498e:c265:a37b:10ec:7984:c6a3', 'URL': 'http:///group1/M00/6F/B2/poYBAFYtlqiALni4AG51LNrVFEQ342.apk?pn=com.airfly.fightergame.en1949&caller=9game&m=kxV5msjNq6PPBXxz_cPqzg&t=1451175690&sid=1df9ab75-48c6-41a6-9b86-b0d98976378b&gid=628195&fz=7238956&pid=2&site=%E4%B9%9D%', 'SGSN IP': '202.89.204.5', 'InPayload': '100', 'Protocol': 'http', 'WebDomain': '3', 'Source IP': 'e5df:602a:5a83:eaf1:8049:23c4:0fb7:f78e', 'MCC': '515', 'LAC': '36202', 'FlushFlag': '0', 'APN': '.internet.globe.com.', 'DestinationPort': '80', 'SourcePort': '82', 'LineFormat': 'http7', 'IMSI': '515-02-040687823335'}
</code></pre>
<p>When I print json_dumps, output is:-</p>
<pre><code>{"TransportProtocol": "udp", "MSISDN": "+62696174735", "ResponseCode":"E6%B%B8%E5%AE%89%E5%8D%93&pfid=139&ver=10.1.2.571title=Air%20fighter_pakage.apk", "GGSN IP": "202.89.193.185", "MSTimeZone": "+0008", "Numbers of time period": "1", "Mime Type": "audio/aac", "EndTime": "1462251588", "OutBound": "709", "Inbound": "35", "Method": "GET", "RAT": "ph", "Referer": "ghijk", "TAC": "35893783", "UserAgent": "961", "MNC": "02", "OutPayload": "0", "CI": "34301", "StartTime": "1462251588", "DestinationIP": "ef50:5fcd:498e:c265:a37b:10ec:7984:c6a3", "URL": "http:///group1/M00/6F/B2/poYBAFYtlqiALni4AG51LNrVFEQ342.apk?pn=com.airfly.fightergame.en1949&caller=9game&m=kxV5msjNq6PPBXxz_cPqzg&t=1451175690&sid=1df9ab75-48c6-41a6-9b86-b0d98976378b&gid=628195&fz=7238956&pid=2&site=%E4%B9%9D%", "SGSN IP": "202.89.204.5", "InPayload": "100", "Protocol": "http", "WebDomain": "3", "Source IP": "e5df:602a:5a83:eaf1:8049:23c4:0fb7:f78e", "MCC": "515", "LAC": "36202", "FlushFlag": "0", "APN": ".internet.globe.com.", "DestinationPort": "80", "SourcePort": "82", "LineFormat": "http7", "IMSI": "515-02-040687823335"}
</code></pre>
<p>Which, I guess is the json format which I further want to convert it to avro schema. But </p>
<pre><code>SCHEMA = schema.parse(json_dumps)
</code></pre>
<p>throws an exception:-</p>
<pre><code>Traceback (most recent call last):
  File "convertToAvro.py", line 23, in <module>
    json_to_avro()
  File "convertToAvro.py", line 20, in json_to_avro
    SCHEMA = schema.parse(json_dumps)
  File "/usr/lib/python2.7/site-packages/avro/schema.py", line 785, in parse
    return make_avsc_object(json_data, names)
  File "/usr/lib/python2.7/site-packages/avro/schema.py", line 756, in make_avsc_object
    raise SchemaParseException('No "type" property: %s' % json_data)
avro.schema.SchemaParseException: No "type" property: {u'TransportProtocol':u'udp', u'MSISDN': u'+62696174735', u'ResponseCode': u'E6%B8%B8%E5%AE%89%E5%8D%93&pfid=139&ver=10.1.2.571&title=Air%20fighter_pakage.apk', u'GGSN IP': u'202.89.193.185', u'EndTime': u'1462251588', u'Method': u'GET',u'Mime Type': u'audio/aac', u'OutBound': u'709', u'Inbound': u'35',u'Numbers of time period': u'1', u'RAT': u'import jsonph', u'Referer':u'ghijk', u'TAC': u'35893783', u'UserAgent': u'961', u'MNC':u'02',u'OutPayload': u'0', u'CI': u'34301', u'DestinationPort': u'80',u'DestinationIP': u'ef50:5fcd:498e:c265:a37b:10ec:7984:c6a3', u'URL':u'http:///group1/M00//B/poYBAFYtlqiALni4AG51LNrVFEQ342.apk?pn=com.airfly.fightergame.en1949&caller=9game&m=kxV5msjNq6PPBXxz_cPqzg&t=1451175690&sid=1df9ab75-48c6-41a6-9b86-b0d98976378b&gid=628195&fz=7238956&pid=2&site=%E4%B9%9D%', u'SGSN IP': u'202.89.204.5', u'InPayload': u'100', u'Protocol': u'http', u'WebDomain': u'3', u'Source IP': u'e5df:602a:5a83:eaf1:8049:23c4:0fb7:f78e', u'MCC': u'515', u'MSTimeZone': u'+0008', u'FlushFlag': u'0', u'APN': u'.internet.globe.com.', u'StartTime': u'1462251588', u'SourcePort': u'82', u'LineFormat': u'http7', u'LAC': u'36202', u'IMSI': u'515-02-040687823335'}
</code></pre>
<p>Just in case, here is my input record:-</p>
<pre><code>Protocol,LineFormat,StartTime,EndTime,MSTimeZone,IMSI,MSISDN,TAC,MCC,MNC,LAC,CI,SGSNIP,GGSNIP,APN,RAT,WebDomain,SourceIP,DestinationIP,SourcePort,DestinationPort,TransportProtocol,FlushFlag,Numbers of time period,OutBound,Inbound,Method,URL,ResponseCode,UserAgent,MimeType,Referer,OutPayload,InPayload
http http7 1462251588 1462251588 +0008 515-02-040687823335 +62696174735 35893783 515 02 36202 34301 202.89.204.5 202.89.193.185 .internet.globe.com. ph 3 e5df:602a:5a83:eaf1:8049:23c4:0fb7:f78e ef50:5fcd:498e:c265:a37b:10ec:7984:c6a3 82 80 udp 0 1 709 35 GET http:///group1/M00/6F/B2/poYBAFYtlqiALni4AG51LNrVFEQ342.apk?pn=com.airfly.fightergame.en1949&caller=9game&m=kxV5msjNq6PPBXxz_cPqzg&t=1451175690&sid=1df9ab75-48c6-41a6-9b86-b0d98976378b&gid=628195&fz=7238956&pid=2&site=%E4%B9%9D% E6%B8%B8%E5%AE%89%E5%8D%93&pfid=139&ver=10.1.2.571&title=Air%20fighter_pakage.apk 961 audio/aac ghijk 0 100
</code></pre>
| 0 | 2016-08-16T16:47:15Z | 38,995,327 | <p>This happens because the parameter in <code>schema.parse()</code> function has to be avro-schema (not a record itself) like here (<a href="https://avro.apache.org/docs/1.8.0/gettingstartedpython.html" rel="nofollow">https://avro.apache.org/docs/1.8.0/gettingstartedpython.html</a>):</p>
<pre><code>schema = avro.schema.parse(open("user.avsc", "rb").read())
</code></pre>
<p>As you pass a json record, it breaks.</p>
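<p>In other words, the schema has to be written separately from the data. A minimal sketch of a valid record schema covering two of the fields above (field names taken from the question; the record name and string types are assumptions):</p>

```python
import json

# every Avro schema is a JSON document with a mandatory "type" property,
# which is exactly the property the parser was complaining about
schema_json = json.dumps({
    "type": "record",
    "name": "HttpRecord",
    "fields": [
        {"name": "Protocol", "type": "string"},
        {"name": "StartTime", "type": "string"},
    ],
})
parsed = json.loads(schema_json)
print(parsed["type"])  # record
```

<p>A string like this (not the record itself) is what <code>schema.parse()</code> expects.</p>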
| 1 | 2016-08-17T11:25:30Z | [
"python",
"json",
"dictionary",
"avro"
] |
Tkinter python append lines in separate file | 38,980,446 | <p>I have no formal programming experience so please forgive my lack of terminology and program structure. Stackoverflow has been a tremendous help. This is my first question so please be gentle.</p>
<p>I have been tasked with writing a GUI. As of now, it works well and I have over 3500 lines of code and multiple files.</p>
<p>I need the append lines to live in a separate file, so if someone could please give me some guidance, I would appreciate it. Please let me know if my question is not clear enough. Thank you. (Python 2.7.x &amp; Tkinter)</p>
<p>(this does not work for obvious reasons I just can't quite get my head around the class part)</p>
<p><em>save_test.py</em></p>
<pre><code>#!/usr/bin/python
from Tkinter import *

class Application(Frame):
    def __init__(self, master=None):
        Frame.__init__(self, master)
        self.grid()
        self.createWidgets()
        self.DoIt()

    def createWidgets(self):
        self.code = []
        # Create Frames
        self.FileFrame = Frame(self, bd=5)
        self.FileFrame.grid(row=0, column=0, padx=10, sticky=N + S + E + W)
        self.f10 = Label(self.FileFrame, text='Enter Number', width=15, font="-weight bold")
        self.f10.grid(row=0, column=0)
        self.entersomething = StringVar()
        self.entersomething.set("123")
        self.es = Entry(self.FileFrame, textvariable=self.entersomething, width=5)
        self.es.grid(row=0, column=1)
        self.Send = Button(self.FileFrame, text='Send To File', command=self.SendButton)
        self.Send.grid(row=0, column=2)

    def SendButton(self):
        self.DoIt()
        f = open('c:\Python\code.txt', 'w')
        for line in self.code:
            f.write(line + '\n')
        f.close()

    def DoIt(self):
        thickness = float(self.es.get())
        self.code = []
        #something here to make it append the lines in mycode.py

app = Application()
app.mainloop()
</code></pre>
<p><em>mycode.py</em></p>
<pre><code>self.code.append('(Code Generated)')
self.code.append('#1=%.4f (Thickness)' % thickness)
</code></pre>
| 0 | 2016-08-16T16:51:43Z | 38,980,536 | <p>You need to make one small change: Open the file with the 'a' flag, for append:</p>
<pre><code> f = open('c:\Python\code.txt', 'a')
</code></pre>
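<p>The difference between the two modes, shown on a throwaway file (a sketch using <code>tempfile</code> so it does not touch <code>c:\Python\code.txt</code>):</p>

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "code.txt")

with open(path, "w") as f:   # 'w' truncates the file on every open
    f.write("first run\n")
with open(path, "a") as f:   # 'a' keeps what is already there
    f.write("second run\n")

with open(path) as f:
    contents = f.read()
print(repr(contents))  # 'first run\nsecond run\n'
```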
| 1 | 2016-08-16T16:57:11Z | [
"python",
"tkinter"
] |
Choosing only non-zeros from a long list of numbers in text file | 38,980,459 | <p>I have a text file with a long list of numbers. I would like to choose only the non-zeros and make another text file. </p>
<p>This is a portion of the input file:</p>
<pre><code>0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 5.16677E-30
9.61708E-28 1.18779E-25 9.73432E-24 5.29352E-22 1.91009E-20 4.57336E-19
7.26588E-18 7.65971E-17 5.35806E-16 2.48699E-15 7.65973E-15 1.56539E-14
2.12278E-14 1.91010E-14 1.14046E-14 4.51832E-15 1.18780E-15 2.07196E-16
2.39824E-17 1.84193E-18 9.38698E-20 3.17431E-21 7.12271E-23 1.06050E-24
1.04773E-26 6.86848E-29 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
</code></pre>
<p>The expected out put for the portion of the input show above would be:</p>
<pre><code>5.16677E-30 9.61708E-28 1.18779E-25 9.73432E-24 5.29352E-22 1.91009E-20
4.57336E-19 7.26588E-18 7.65971E-17 5.35806E-16 2.48699E-15 7.65973E-15
1.56539E-14 2.12278E-14 1.91010E-14 1.14046E-14 4.51832E-15 1.18780E-15
2.07196E-16 2.39824E-17 1.84193E-18 9.38698E-20 3.17431E-21 7.12271E-23
1.06050E-24 1.04773E-26
</code></pre>
<p>I tried what I wrote below but it is not returning anything.</p>
<pre><code>r1 = []
file = open('aa2', 'w')
with open('aa.txt') as m:
    file.write('List')
    file.write("\n")
    for t in itertools.islice(m, 500, 6500):
        for i in t:
            if i != 0.00000E+00:
                d = i
                k = re.search(r'([- ]\d+\.\d+)+', d)
                if k:
                    r1.append(k.group())
    file.write(str(' '.join(map(str, r1))))
file.close()
</code></pre>
| -2 | 2016-08-16T16:52:29Z | 38,982,908 | <p>You're using regex again where you don't need to. You're also doing something exceedingly bizarre where you're using <code>islice</code> on the file. That's also unnecessary. You could just do this:</p>
<pre><code>import io
file = io.StringIO('''
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 5.16677E-30
9.61708E-28 1.18779E-25 9.73432E-24 5.29352E-22 1.91009E-20 4.57336E-19
7.26588E-18 7.65971E-17 5.35806E-16 2.48699E-15 7.65973E-15 1.56539E-14
2.12278E-14 1.91010E-14 1.14046E-14 4.51832E-15 1.18780E-15 2.07196E-16
2.39824E-17 1.84193E-18 9.38698E-20 3.17431E-21 7.12271E-23 1.06050E-24
1.04773E-26 6.86848E-29 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
'''.strip())
#################################
# Actual Answer to your problem #
#################################
values = []
for line in file:
values.extend(val for val in line.strip().split() if val != '0.00000E+00')
with io.StringIO() as out:
for i, val in enumerate(values):
if i and not i % 6:
out.write('\n')
out.write(val+' ')
out.seek(0)
print(out.read())
</code></pre>
| 0 | 2016-08-16T19:23:05Z | [
"python",
"list",
"numbers"
] |
How can I get virtualenv to work? | 38,980,505 | <p>I have this weird error:</p>
<pre class="lang-none prettyprint-override"><code>Path:\To\My\Command>virtualenv .
Fatal error in launcher: Unable to create process using
'"D:\Python27\python.exe" "F:\Python27\Scripts\virtualenv.exe" .'
</code></pre>
<p>Thing is, I don't have a <code>D:\Python27\python.exe</code>. My <code>D:</code> drive became my <code>F:</code> drive when I got my new computer and there's no <code>D:\Python27</code> listed in my environment variables. Is there someplace else I should be looking?</p>
<p>I'm using Windows 10.</p>
| 0 | 2016-08-16T16:55:23Z | 38,980,572 | <p>Sounds like your environmental variable path is messed up. Try setting it up again for F:\, you can find directions here: <a href="http://stackoverflow.com/questions/6318156/adding-python-path-on-windows-7">Adding Python Path on Windows 7</a></p>
| 0 | 2016-08-16T16:59:27Z | [
"python",
"windows",
"environment-variables",
"virtualenv"
] |
Moving desired row to the top of pandas Data Frame | 38,980,507 | <p>In <code>pandas</code>, how can I copy or move a row to the top of the Data Frame without creating a copy of the Data Frame?</p>
<p>For example, I managed to do almost what I want with the code below, but I have the impression that there might be a better way to accomplish this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Probe':['Test1','Test2','Test3'], 'Sequence':['AATGCGT','TGCGTAA','ATGCATG']})
df
Probe Sequence
0 Test1 AATGCGT
1 Test2 TGCGTAA
2 Test3 ATGCATG
df_shifted = df.shift(1)
df_shifted
Probe Sequence
0 NaN NaN
1 Test1 AATGCGT
2 Test2 TGCGTAA
df_shifted.ix[0] = df.ix[2]
df_shifted
Probe Sequence
0 Test3 ATGCATG
1 Test1 AATGCGT
2 Test2 TGCGTAA
</code></pre>
| 1 | 2016-08-16T16:55:25Z | 38,980,670 | <p>Okay, I think I came up with a solution. By all means, please feel free to add your own answer if you think yours is better:</p>
<pre><code>import numpy as np
df.ix[3] = np.nan
df
Probe Sequence
0 Test1 AATGCGT
1 Test2 TGCGTAA
2 Test3 ATGCATG
3 NaN NaN
df = df.shift(1)
Probe Sequence
0 NaN NaN
1 Test1 AATGCGT
2 Test2 TGCGTAA
3 Test3 ATGCATG
df.ix[0] = df.ix[2]
df
Probe Sequence
0 Test3 ATGCATG
1 Test1 AATGCGT
2 Test2 TGCGTAA
3 Test3 ATGCATG
</code></pre>
| 0 | 2016-08-16T17:05:37Z | [
"python",
"pandas"
] |
Moving desired row to the top of pandas Data Frame | 38,980,507 | <p>In <code>pandas</code>, how can I copy or move a row to the top of the Data Frame without creating a copy of the Data Frame?</p>
<p>For example, I managed to do almost what I want with the code below, but I have the impression that there might be a better way to accomplish this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Probe':['Test1','Test2','Test3'], 'Sequence':['AATGCGT','TGCGTAA','ATGCATG']})
df
Probe Sequence
0 Test1 AATGCGT
1 Test2 TGCGTAA
2 Test3 ATGCATG
df_shifted = df.shift(1)
df_shifted
Probe Sequence
0 NaN NaN
1 Test1 AATGCGT
2 Test2 TGCGTAA
df_shifted.ix[0] = df.ix[2]
df_shifted
Probe Sequence
0 Test3 ATGCATG
1 Test1 AATGCGT
2 Test2 TGCGTAA
</code></pre>
| 1 | 2016-08-16T16:55:25Z | 38,987,227 | <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>pandas.concat</code></a>:</p>
<pre><code>pd.concat([df.iloc[[n],:], df.drop(n, axis=0)], axis=0)
</code></pre>
| 2 | 2016-08-17T02:32:44Z | [
"python",
"pandas"
] |
Moving desired row to the top of pandas Data Frame | 38,980,507 | <p>In <code>pandas</code>, how can I copy or move a row to the top of the Data Frame without creating a copy of the Data Frame?</p>
<p>For example, I managed to do almost what I want with the code below, but I have the impression that there might be a better way to accomplish this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Probe':['Test1','Test2','Test3'], 'Sequence':['AATGCGT','TGCGTAA','ATGCATG']})
df
Probe Sequence
0 Test1 AATGCGT
1 Test2 TGCGTAA
2 Test3 ATGCATG
df_shifted = df.shift(1)
df_shifted
Probe Sequence
0 NaN NaN
1 Test1 AATGCGT
2 Test2 TGCGTAA
df_shifted.ix[0] = df.ix[2]
df_shifted
Probe Sequence
0 Test3 ATGCATG
1 Test1 AATGCGT
2 Test2 TGCGTAA
</code></pre>
| 1 | 2016-08-16T16:55:25Z | 38,988,241 | <p>Try this, You are not making a copy of dataframe,:</p>
<pre><code>df["new"] = range(1,len(df)+1)
Probe Sequence new
0 Test1 AATGCGT 1
1 Test2 TGCGTAA 2
2 Test3 ATGCATG 3
df.ix[2,'new'] = 0
df.sort_values("new").drop('new', axis=1)
Probe Sequence
2 Test3 ATGCATG
0 Test1 AATGCGT
1 Test2 TGCGTAA
</code></pre>
<p>Basically, since you cant insert into index at 0, create a column so you can. </p>
<p>If you want index ordered, use this:</p>
<pre><code>df.sort_values("new").reset_index(drop='True').drop('new', axis=1)
Probe Sequence
0 Test3 ATGCATG
1 Test1 AATGCGT
2 Test2 TGCGTAA
</code></pre>
| 1 | 2016-08-17T04:42:16Z | [
"python",
"pandas"
] |
Python Eve fails to run on newly created droplet | 38,980,541 | <pre><code>Type "help", "copyright", "credits" or "license" for more information.
>>> import eve
>>> from eve import Eve
>>> eve
<module 'eve' from '/usr/local/lib/python2.7/dist-packages/eve/__init__.pyc'>
>>> app = Eve()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/eve/flaskapp.py", line 139, in __init__
self.validate_domain_struct()
File "/usr/local/lib/python2.7/dist-packages/eve/flaskapp.py", line 252, in validate_domain_struct
raise ConfigException('DOMAIN dictionary missing or wrong.')
eve.exceptions.ConfigException: DOMAIN dictionary missing or wrong.
>>>
</code></pre>
<p>This happens and I cant seem to find out what the error is as this is a newly created Ubuntu image on Digital Ocean. Nothing is touched beside installing Python eve with pip. </p>
| 1 | 2016-08-16T16:57:24Z | 38,994,674 | <p>You need to have a <code>settings.py</code> file on the same directory as your application, or you need to pass the path to <code>settings.py</code> with key <code>settings</code> in you app initialization. Check the <a href="http://python-eve.org/quickstart.html#a-minimal-application" rel="nofollow">quickstart</a> guide for the minimal application.</p>
<p>The <code>settings.py</code> file should have your resources definition, which is the <code>DOMAIN</code> dictionary the error mentions.</p>
| 1 | 2016-08-17T10:55:32Z | [
"python",
"python-2.7",
"digital-ocean",
"eve"
] |
What does Random Forest do with unseen data? | 38,980,544 | <p>When I built my random forest model using scikit learn in python, I set a condition (where clause in sql query) so that the training data only contain values whose value is greater than 0.</p>
<p>I am curious to know how random forest handles test data whose value is less than 0, which the random forest model has never seen before in the training data.</p>
| 0 | 2016-08-16T16:57:42Z | 38,980,686 | <p>They will be treated in the same manner as the minimal value already encountered in the training set. RF is just a bunch of voting decision trees, and (basic) DTs can only form decisions in form of "if feature X is > then T go left, otherwise go right". Consequently, if you fit it to data which, for a given feature, has only values in [0, inf], it will either not use this feature at all or use it in a form given above (as decision of form "if X is > than T", where T has to be from (0, inf) to make any sense for the training data). Consequently if you simply take your new data and change negative values to "0", the result will be identical. </p>
| 0 | 2016-08-16T17:06:31Z | [
"python",
"machine-learning",
"scikit-learn",
"random-forest",
"scikits"
] |
Django: Dynamic "table of contents" based on HTML tags | 38,980,558 | <p>I have a model with a <code>TextField</code> that users can populate with HTML. Now I want Django to render a dynamic "Table Of Contents", so that when a <code><h></code> tag is used, django automatically adds that to a list. Bonus points if a nested list is also possible.</p>
<p>I've thought about using inclusion tags, but I'm not sure on the exact details. Any help would be much appreciated. </p>
| 0 | 2016-08-16T16:58:47Z | 39,007,699 | <p>I have never done this exactly...but here is how I would start off.</p>
<h3>Get the HTML</h3>
<p>First things first - get the user to add the HTML. Possibly install tinyMCE, use tinyMCE's HTML field. The user can copy and paste HTML or add HTML content using tinyMCE's WYSIWUG.</p>
<h3>Generate a DOM object</h3>
<p>You might be thinking - I will write some regex to find the relevant h1-6 tags. The problem is that <a href="http://htmlparsing.com/" rel="nofollow">parsing html with regex is a nightmare</a> and should not be done with <a href="http://stackoverflow.com/a/1732454/3003438">regex</a>.</p>
<p>Next you will probably need to <a href="http://stackoverflow.com/questions/126131/python-library-for-rendering-html-and-javascript">render the html</a>. There are probably <a href="http://stackoverflow.com/questions/2782097/python-is-there-a-built-in-package-to-parse-html-into-dom">quite a few ways</a> to do this. </p>
<h3>Table of Contents</h3>
<p>Then go through the rendered html and pull out all the h1-6 tags or whatever you are after.</p>
<p>You might want to edit the html and add the TOC to the top of the list - either that or save it as a separate html snippet that you can insert into the main html document when you render it.</p>
<p>If generating the TOC sounds like too much work...it probably is. I'm sure you could find a solution to automatically pull out the h1 headers and link to the content. Here is a <a href="http://projects.jga.me/toc/" rel="nofollow">jquery plugin</a> that does just that. A bit of googling to find the easiest way to generate the TOC could be in order - it seems like it would be quite a common thing.</p>
<h3>Other Thoughts</h3>
<ul>
<li>Be aware - you need to ensure you aren't parsing javascript when rendering the page due to security implications!</li>
<li>Bonus marks for modifying the html to hyperlink the TOC to the actual heading.</li>
<li>Make sure you re-generate the TOC if the user modifies HTML content</li>
</ul>
<p>Good luck.</p>
| 0 | 2016-08-17T23:23:55Z | [
"python",
"django",
"python-3.x"
] |
Numpy convert list of 3D variable size volumes to 4D array | 38,980,582 | <p>I'm working on a neural network where I am augmenting data via rotation and varying the size of each input volume. </p>
<p>Let me back up, the input to the network is a 3D volume. I generate variable size 3D volumes, and then pad each volume with zero's such that the input volume is constant. Check <a href="http://stackoverflow.com/questions/38963610/numpy-padding-4d-units-with-all-zeros/38964707#38964707">here</a> for an issue I was having with padding (now resolved).</p>
<p>I generate a variable size 3D volume, append it to a list, and then convert the list into a numpy array. At this point, padding hasn't occured so converting it into a 4D tuple makes no sense...</p>
<pre><code>input_augmented_matrix = []
label_augmented_matrix = []
for i in range(n_volumes):
if i % 50 == 0:
print ("Augmenting step #" + str(i))
slice_index = randint(0,n_input)
z_max = randint(5,n_input)
z_rand = randint(3,5)
z_min = z_max - z_rand
x_max = randint(75, n_input_x)
x_rand = randint(60, 75)
x_min = x_max - x_rand
y_max = randint(75, n_input_y)
y_rand = randint(60, 75)
y_min = y_max - y_rand
random_rotation = randint(1,4) * 90
for j in range(2):
temp_volume = np.empty((z_rand, x_rand, y_rand))
k = 0
for z in range(z_min, z_max):
l = 0
for x in range(x_min, x_max):
m = 0
for y in range(y_min, y_max):
if j == 0:
#input volume
try:
temp_volume[k][l][m] = input_matrix[z][x][y]
except:
pdb.set_trace()
else:
#ground truth volume
temp_volume[k][l][m] = label_matrix[z][x][y]
m = m + 1
l = l + 1
k = k + 1
temp_volume = np.asarray(temp_volume)
temp_volume = np.rot90(temp_volume,random_rotation)
if j == 0:
input_augmented_matrix.append(temp_volume)
else:
label_augmented_matrix.append(temp_volume)
input_augmented_matrix = np.asarray(input_augmented_matrix)
label_augmented_matrix = np.asarray(label_augmented_matrix)
</code></pre>
<p>The dimensions of <code>input_augmented_matrix</code> at this point is <code>(N,)</code></p>
<p>Then I pad with the following code...</p>
<pre><code>for i in range(n_volumes):
print("Padding volume #" + str(i))
input_augmented_matrix[i] = np.lib.pad(input_augmented_matrix[i], ((0,n_input_z - int(input_augmented_matrix[i][:,0,0].shape[0])),
(0,n_input_x - int(input_augmented_matrix[i][0,:,0].shape[0])),
(0,n_input_y - int(input_augmented_matrix[i][0,0,:].shape[0]))),
'constant', constant_values=0)
label_augmented_matrix[i] = np.lib.pad(label_augmented_matrix[i], ((0,n_input_z - int(label_augmented_matrix[i][:,0,0].shape[0])),
(0,n_input_x - int(label_augmented_matrix[i][0,:,0].shape[0])),
(0,n_input_y - int(label_augmented_matrix[i][0,0,:].shape[0]))),
'constant', constant_values=0)
</code></pre>
<p>At this point, the dimensions are still <code>(N,)</code> even though every element of the list now has the same shape. For example <code>input_augmented_matrix[0].shape == input_augmented_matrix[1].shape</code></p>
<p>Currently I just loop through and create a new array, but it takes too long and I would prefer some sort of method that automates this. I do it with the following code...</p>
<pre><code>input_4d = np.empty((n_volumes, n_input_z, n_input_x, n_input_y))
label_4d = np.empty((n_volumes, n_input_z, n_input_x, n_input_y))
for i in range(n_volumes):
print("Converting to 4D tuple #" + str(i))
for j in range(n_input_z):
for k in range(n_input_x):
for l in range(n_input_y):
input_4d[i][j][k][l] = input_augmented_matrix[i][j][k][l]
label_4d[i][j][k][l] = label_augmented_matrix[i][j][k][l]
</code></pre>
<p>Is there a cleaner and faster way to do this?</p>
| 0 | 2016-08-16T17:00:01Z | 38,981,230 | <p>As I understood this part</p>
<pre><code>k = 0
for z in range(z_min, z_max):
l = 0
for x in range(x_min, x_max):
m = 0
for y in range(y_min, y_max):
if j == 0:
#input volume
try:
temp_volume[k][l][m] = input_matrix[z][x][y]
except:
pdb.set_trace()
else:
#ground truth volume
temp_volume[k][l][m] = label_matrix[z][x][y]
m = m + 1
l = l + 1
k = k + 1
</code></pre>
<p>You just want to do this</p>
<pre><code>temp_input = input_matrix[z_min:z_max, x_min:x_max, y_min:y_max]
temp_label = label_matrix[z_min:z_max, x_min:x_max, y_min:y_max]
</code></pre>
<p>and then</p>
<pre><code>temp_input = np.rot90(temp_input, random_rotation)
temp_label = np.rot90(temp_label, random_rotation)
input_augmented_matrix.append(temp_input)
label_augmented_matrix.append(temp_label)
</code></pre>
<p>Here</p>
<pre><code>input_augmented_matrix[i] = np.lib.pad(
input_augmented_matrix[i],
((0,n_input_z - int(input_augmented_matrix[i][:,0,0].shape[0])),
(0,n_input_x - int(input_augmented_matrix[i][0,:,0].shape[0])),
(0,n_input_y - int(input_augmented_matrix[i][0,0,:].shape[0]))),
'constant', constant_values=0)
</code></pre>
<p>Better to do this, because <code>shape</code> property gives you size of array by all dimensions</p>
<pre><code>ia_shape = input_augmented_matrix[i].shape
input_augmented_matrix[i] = np.lib.pad(
input_augmented_matrix[i],
    ((0, n_input_z - ia_shape[0]),
     (0, n_input_x - ia_shape[1]),
     (0, n_input_y - ia_shape[2])),
'constant',
constant_values=0)
</code></pre>
<p>I guess now you're ready to refactor the last part of your code with magic indexing of <code>NumPy</code>.</p>
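<p>For that last part: once every volume has been padded to the same shape, the triple copy loop can collapse into a single call (a sketch with made-up shapes, not the actual data):</p>

```python
import numpy as np

# Stand-ins for the padded volumes -- all the same shape after np.lib.pad.
volumes = [np.zeros((4, 5, 6)) for _ in range(3)]

input_4d = np.stack(volumes)   # one 4D array, no nested Python loops
print(input_4d.shape)          # (3, 4, 5, 6)
```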
<p>My common suggestions:</p>
<ul>
<li>use functions for repeated parts of code to avoid such deep indentation as in your cascade of loops;</li>
<li>if you need that many nested loops, think about recursion in case you can't do without them;</li>
<li>explore abilities of <code>NumPy</code> in official <a href="http://docs.scipy.org/doc/numpy/contents.html" rel="nofollow">documentation</a>: they're really exciting ;) For example, <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html" rel="nofollow">indexing</a> is helpful for this task;</li>
<li>use <code>PyLint</code> and <code>Flake8</code> packages to inspect quality of your code.</li>
</ul>
<p>Do you want to write the neural network yourself, or do you just want to solve a pattern-recognition task? The <a href="https://www.scipy.org/" rel="nofollow">SciPy</a> library may contain what you need, and it's based on <code>NumPy</code>.</p>
| 1 | 2016-08-16T17:40:36Z | [
"python",
"arrays",
"numpy",
"3d",
"4d"
] |
Simple calculation function not working | 38,980,604 | <p>I have the following code but I cannot get it to work:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
def test_calc(date, price, performance):
test = pd.DataFrame(columns=('date'), index=('date'))
test['date'] = date
test['new_value'] = price * (1 + performance)
return(test)
print(test_calc(1, 100, 0.05))
</code></pre>
<p>The problem seems to be:
<code>TypeError: Index(...) must be called with a collection of some kind, 'date' was passed</code></p>
<p>I don't need it to be a <code>DataFrame</code> by the way. I just chose it because I used it before. Everything else failed as well, e.g. <code>test = []</code>.</p>
| 1 | 2016-08-16T17:01:24Z | 38,980,723 | <p>Use <code>list</code> at <code>pd.DataFrame(columns=[], index=[])</code></p>
<pre><code>def test_calc(date, price, performance):
test = pd.DataFrame(columns=['date'], index=['date'])
test['date'] = date
test['new_value'] = price * (1 + performance)
return(test)
</code></pre>
| 1 | 2016-08-16T17:08:40Z | [
"python",
"pandas"
] |
Export and Download a list to csv file using Python | 38,980,610 | <p>I have a list:</p>
<pre><code>lista.append(rede)
</code></pre>
<p>When I print it , is showing:</p>
<pre><code>[{'valor': Decimal('9000.00'), 'mes': 'Julho', 'nome': 'ALFANDEGA 1'}, {'valor': Decimal('12000.00'), 'mes': 'Julho', 'nome': 'AMAZONAS SHOPPING 1'}, {'valor': Decimal('600.00'), 'mes': 'Agosto', 'nome': 'ARARUAMA 1'}, {'valor': Decimal('21600.00'), 'nome': 'Rede Teste Integra\xc3\xa7\xc3\xa3o'}, {'valor': Decimal('3000.00'), 'mes': 'Agosto', 'nome': 'Mercatto Teste 1'}, {'valor': Decimal('5000.00'), 'mes': 'Agosto', 'nome': 'Mercatto Teste 2'}, {'valor': Decimal('8000.00'), 'nome': 'Rede Teste Integra\xc3\xa7\xc3\xa3o 2'}]
</code></pre>
<p>I would like to export to csv file and download it, could you help me?</p>
| -3 | 2016-08-16T17:01:40Z | 38,980,878 | <p>You can use this code to transform your data to csv : </p>
<pre><code>def Decimal(value):
#quick and dirty deal with your Decimal thing in the json
return value
data = [{'valor': Decimal('9000.00'), 'mes': 'Julho', 'nome': 'ALFANDEGA 1'}, {'valor': Decimal('12000.00'), 'mes': 'Julho', 'nome': 'AMAZONAS SHOPPING 1'}, {'valor': Decimal('600.00'), 'mes': 'Agosto', 'nome': 'ARARUAMA 1'}, {'valor': Decimal('21600.00'), 'nome': 'Rede Teste Integra\xc3\xa7\xc3\xa3o'}, {'valor': Decimal('3000.00'), 'mes': 'Agosto', 'nome': 'Mercatto Teste 1'}, {'valor': Decimal('5000.00'), 'mes': 'Agosto', 'nome': 'Mercatto Teste 2'}, {'valor': Decimal('8000.00'), 'nome': 'Rede Teste Integra\xc3\xa7\xc3\xa3o 2'}]
mes = []
nome = []
valor = []
for i in data:
mes.append(i.get('mes',""))
nome.append(i.get('nome',""))
valor.append(i.get('valor',""))
import csv
f = open("file.csv", 'wt')
try:
writer = csv.writer(f)
writer.writerow( ('mes', 'nome', 'valor') )
for i in range(0,len(mes)):
writer.writerow((mes[i], nome[i], valor[i]))
finally:
f.close()
</code></pre>
| 0 | 2016-08-16T17:18:21Z | [
"python"
] |
Pandas setting multi-index on columns | 38,980,714 | <p>If I have a simple dataframe:</p>
<pre><code>print(a)
one two three
0 A 1 a
1 A 2 b
2 B 1 c
3 B 2 d
4 C 1 e
5 C 2 f
</code></pre>
<p>I can easily create a multi-index on the rows by issuing:</p>
<pre><code>a.set_index(['one', 'two'])
three
one two
A 1 a
2 b
B 1 c
2 d
C 1 e
2 f
</code></pre>
<p>Is there a similarly easy way to create a multi-index on the columns?</p>
<p>I'd like to end up with:</p>
<pre><code> one A B C
two 1 2 1 2 1 2
0 a b c d e f
</code></pre>
<p>In this case, it would be pretty simple to create the row multi-index and then transpose it, but in other examples, I'll be wanting to create a multi-index on both the rows and columns.</p>
| 3 | 2016-08-16T17:08:06Z | 38,981,203 | <p>I think the short answer is <strong>NO</strong>. To have multi-index columns, the dataframe should have two (or more) rows that can be converted into headers (like columns for multi-index rows). If you have this kind of dataframe, creating the multi-index header is not so difficult. It can be done in a very long line of code, and you can reuse it on any other dataframe; only the row numbers of the headers need to be kept in mind and changed if they differ:</p>
<pre><code>df = pd.DataFrame({'a':['foo_0', 'bar_0', 1, 2, 3], 'b':['foo_0', 'bar_1', 11, 12, 13],
'c':['foo_1', 'bar_0', 21, 22, 23], 'd':['foo_1', 'bar_1', 31, 32, 33]})
</code></pre>
<p>The dataframe:</p>
<pre><code> a b c d
0 foo_0 foo_0 foo_1 foo_1
1 bar_0 bar_1 bar_0 bar_1
2 1 11 21 31
3 2 12 22 32
4 3 13 23 33
</code></pre>
<p>Creating multi-index object:</p>
<pre><code>arrays = [df.iloc[0].tolist(), df.iloc[1].tolist()]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df.columns = index
</code></pre>
<p>Multi-index header result:</p>
<pre><code>first foo_0 foo_1
second bar_0 bar_1 bar_0 bar_1
0 foo_0 foo_0 foo_1 foo_1
1 bar_0 bar_1 bar_0 bar_1
2 1 11 21 31
3 2 12 22 32
4 3 13 23 33
</code></pre>
<p>Finally we need to drop 0-1 rows then reset the row index:</p>
<pre><code>df = df.iloc[2:].reset_index(drop=True)
</code></pre>
<p>The "one-line" version (only thing you have to change is to specify header indexes and the dataframe itself):</p>
<pre><code>idx_first_header = 0
idx_second_header = 1
df.columns = pd.MultiIndex.from_tuples(list(zip(*[df.iloc[idx_first_header].tolist(),
df.iloc[idx_second_header].tolist()])), names=['first', 'second'])
df = df.drop([idx_first_header, idx_second_header], axis=0).reset_index(drop=True)
</code></pre>
| -1 | 2016-08-16T17:39:14Z | [
"python",
"pandas",
"dataframe"
] |
Pandas setting multi-index on columns | 38,980,714 | <p>If I have a simple dataframe:</p>
<pre><code>print(a)
one two three
0 A 1 a
1 A 2 b
2 B 1 c
3 B 2 d
4 C 1 e
5 C 2 f
</code></pre>
<p>I can easily create a multi-index on the rows by issuing:</p>
<pre><code>a.set_index(['one', 'two'])
three
one two
A 1 a
2 b
B 1 c
2 d
C 1 e
2 f
</code></pre>
<p>Is there a similarly easy way to create a multi-index on the columns?</p>
<p>I'd like to end up with:</p>
<pre><code> one A B C
two 1 2 1 2 1 2
0 a b c d e f
</code></pre>
<p>In this case, it would be pretty simple to create the row multi-index and then transpose it, but in other examples, I'll be wanting to create a multi-index on both the rows and columns.</p>
| 3 | 2016-08-16T17:08:06Z | 38,981,251 | <p>Yes! It's called transposition.</p>
<pre><code>a.set_index(['one', 'two']).T
</code></pre>
<p><a href="http://i.stack.imgur.com/S1jTq.png" rel="nofollow"><img src="http://i.stack.imgur.com/S1jTq.png" alt="enter image description here"></a></p>
<hr>
<p>Let's borrow from @ragesz's post because they used a much better example to demonstrate with.</p>
<pre><code>df = pd.DataFrame({'a':['foo_0', 'bar_0', 1, 2, 3], 'b':['foo_0', 'bar_1', 11, 12, 13],
'c':['foo_1', 'bar_0', 21, 22, 23], 'd':['foo_1', 'bar_1', 31, 32, 33]})
df.T.set_index([0, 1]).T
</code></pre>
<p><a href="http://i.stack.imgur.com/fG6zW.png" rel="nofollow"><img src="http://i.stack.imgur.com/fG6zW.png" alt="enter image description here"></a></p>
| 1 | 2016-08-16T17:41:36Z | [
"python",
"pandas",
"dataframe"
] |
Pandas setting multi-index on columns | 38,980,714 | <p>If I have a simple dataframe:</p>
<pre><code>print(a)
one two three
0 A 1 a
1 A 2 b
2 B 1 c
3 B 2 d
4 C 1 e
5 C 2 f
</code></pre>
<p>I can easily create a multi-index on the rows by issuing:</p>
<pre><code>a.set_index(['one', 'two'])
three
one two
A 1 a
2 b
B 1 c
2 d
C 1 e
2 f
</code></pre>
<p>Is there a similarly easy way to create a multi-index on the columns?</p>
<p>I'd like to end up with:</p>
<pre><code> one A B C
two 1 2 1 2 1 2
0 a b c d e f
</code></pre>
<p>In this case, it would be pretty simple to create the row multi-index and then transpose it, but in other examples, I'll be wanting to create a multi-index on both the rows and columns.</p>
| 3 | 2016-08-16T17:08:06Z | 38,982,066 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table</code></a> followed by a series of manipulations on the dataframe to get the desired form:</p>
<pre><code>df_pivot = pd.pivot_table(df, index=['one', 'two'], values='three', aggfunc=np.sum)
def rename_duplicates(old_list): # Replace duplicates in the index with a blank (space) entry
seen = {}
for x in old_list:
if x in seen:
seen[x] += 1
yield " "
else:
seen[x] = 0
yield x
col_group = df_pivot.unstack().stack().reset_index(level=-1)
col_group.index = rename_duplicates(col_group.index.tolist())
col_group.index.name = df_pivot.index.names[0]
col_group.T
one A B C
two 1 2 1 2 1 2
0 a b c d e f
</code></pre>
| 1 | 2016-08-16T18:29:40Z | [
"python",
"pandas",
"dataframe"
] |
Python - input floating point paramater list | 38,980,736 | <p>I'm getting floating point parameters from user in a script and wondering if there is a better/effecient way including an option like if user only supplies one parameter (or no at all), is there any way that i default the remaining two as 0.0? And may be some better way to store in file</p>
<pre><code>#inside a loop to keep getting values
line = raw_input("Enter three parm values: ")
x=line.split(' ')
fa=fb=fc=0.0
a=x[0]
b=x[1] #what if user only supplies only one?
c=x[2] # how can i leave this or default to 0.0?
fa=float(a)
fb=float(b)
fc=float(c)
file=open(inputFile, 'a+')
file.write(name)
file.write("\t")
file.write(a)
file.write(" ")
file.write(b)
file.write(" ")
file.write(c)
file.write("/n")
</code></pre>
| 0 | 2016-08-16T17:09:23Z | 38,980,810 | <p>Updated to address index out of range error:</p>
<pre><code># use python ternary operator to set default value
# index goes out of range
#c = x[2] if x[2] is not None else 0
# use array length instead
b = x[1] if len(x) >= 2 else 0
c = x[2] if len(x) >= 3 else 0
file=open(inputFile, 'a+')
# string concatenation gets rid of repeated function calls
file.write(name + "\t" + a + " " + b + " " + c + "\n")
</code></pre>
| 2 | 2016-08-16T17:14:49Z | [
"python",
"file",
"floating-point"
] |
Python - input floating point paramater list | 38,980,736 | <p>I'm getting floating point parameters from user in a script and wondering if there is a better/effecient way including an option like if user only supplies one parameter (or no at all), is there any way that i default the remaining two as 0.0? And may be some better way to store in file</p>
<pre><code>#inside a loop to keep getting values
line = raw_input("Enter three parm values: ")
x=line.split(' ')
fa=fb=fc=0.0
a=x[0]
b=x[1] #what if user only supplies only one?
c=x[2] # how can i leave this or default to 0.0?
fa=float(a)
fb=float(b)
fc=float(c)
file=open(inputFile, 'a+')
file.write(name)
file.write("\t")
file.write(a)
file.write(" ")
file.write(b)
file.write(" ")
file.write(c)
file.write("/n")
</code></pre>
| 0 | 2016-08-16T17:09:23Z | 38,980,827 | <p>You could check the length of the list returned by line.split(' ') to see how many you got, and go from there. Or you could check if an entry in the list is None before assigning it.</p>
<p>For writing to files, the best thing to do is to set the structure of your data and then call write only once, as that is what will bottleneck your efficiency in file I/O.</p>
| 1 | 2016-08-16T17:15:34Z | [
"python",
"file",
"floating-point"
] |
Python - input floating point paramater list | 38,980,736 | <p>I'm getting floating point parameters from user in a script and wondering if there is a better/effecient way including an option like if user only supplies one parameter (or no at all), is there any way that i default the remaining two as 0.0? And may be some better way to store in file</p>
<pre><code>#inside a loop to keep getting values
line = raw_input("Enter three parm values: ")
x=line.split(' ')
fa=fb=fc=0.0
a=x[0]
b=x[1] #what if user only supplies only one?
c=x[2] # how can i leave this or default to 0.0?
fa=float(a)
fb=float(b)
fc=float(c)
file=open(inputFile, 'a+')
file.write(name)
file.write("\t")
file.write(a)
file.write(" ")
file.write(b)
file.write(" ")
file.write(c)
file.write("/n")
</code></pre>
| 0 | 2016-08-16T17:09:23Z | 38,980,911 | <p>Why don't you use a list to hold an arbitrary number of variables?</p>
<pre><code>floats = [float(var) if var else 0.
for var in raw_input("Enter three parm values: ").split(' ')]
with open(inputFile, 'a+') as f:
    f.write(name + '\t' + ' '.join(str(v) for v in floats) + '\n')
</code></pre>
<p>If you want to pad this list with extra zeros up to three parameters, then you could do this:</p>
<pre><code>floats = [1] # For example.
if len(floats) < 3:
floats += [0] * (3 - len(floats))
>>> floats
[1, 0, 0]
</code></pre>
| 3 | 2016-08-16T17:20:25Z | [
"python",
"file",
"floating-point"
] |
Python - input floating point paramater list | 38,980,736 | <p>I'm getting floating point parameters from user in a script and wondering if there is a better/effecient way including an option like if user only supplies one parameter (or no at all), is there any way that i default the remaining two as 0.0? And may be some better way to store in file</p>
<pre><code>#inside a loop to keep getting values
line = raw_input("Enter three parm values: ")
x=line.split(' ')
fa=fb=fc=0.0
a=x[0]
b=x[1] #what if user only supplies only one?
c=x[2] # how can i leave this or default to 0.0?
fa=float(a)
fb=float(b)
fc=float(c)
file=open(inputFile, 'a+')
file.write(name)
file.write("\t")
file.write(a)
file.write(" ")
file.write(b)
file.write(" ")
file.write(c)
file.write("/n")
</code></pre>
| 0 | 2016-08-16T17:09:23Z | 38,980,970 | <pre><code>#use length of list for checking the input and continue with your operation
line = raw_input("Enter three parm values: ")
x=line.split(' ')
a=b=c=0
fa=fb=fc=0.0
listLen = len(x)
if(listLen == 3):
a, b, c = x[0], x[1], x[2]
elif(listLen == 2):
a, b = x[0], x[1]
elif(listLen == 1):
a = x[0]
</code></pre>
| 1 | 2016-08-16T17:24:44Z | [
"python",
"file",
"floating-point"
] |
Python Matplotlib FuncAnimation.save() only saves 100 frames | 38,980,794 | <p>I am trying to save an animation I've created with the FuncAnimation class in Matplotlib. My animation is more complicated, but I get the same error when I try to save the simple example given <a href="http://stackoverflow.com/questions/16732379/stop-start-pause-in-python-matplotlib-animation">here</a>.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import matplotlib.animation as animation
pause = False
def simData():
    t_max = 10.0
    dt = 0.05
    x = 0.0
    t = 0.0
    while t < t_max:
        if not pause:
            x = np.sin(np.pi*t)
            t = t + dt
        yield x, t

def onClick(event):
    global pause
    pause ^= True

def simPoints(simData):
    x, t = simData[0], simData[1]
    time_text.set_text(time_template%(t))
    line.set_data(t, x)
    return line, time_text
fig = plt.figure()
ax = fig.add_subplot(111)
line, = ax.plot([], [], 'bo', ms=10) # I'm still not clear on this structure...
ax.set_ylim(-1, 1)
ax.set_xlim(0, 10)
time_template = 'Time = %.1f s' # prints running simulation time
time_text = ax.text(0.05, 0.9, '', transform=ax.transAxes)
fig.canvas.mpl_connect('button_press_event', onClick)
ani = animation.FuncAnimation(fig, simPoints, simData, blit=False, interval=10,
                              repeat=True)
plt.show()
</code></pre>
<p>However, when I try to save this animation by adding the line</p>
<pre><code>ani.save('test.mp4')
</code></pre>
<p>at the end, only the first 100 frames are saved.</p>
<p>After the animation is saved, the function restarts and displays as expected, displaying and updating the figure 200 times (or until t reaches t_max, whatever I set that to be). But the movie that is saved only contains the first 100 frames.</p>
<p>The pause functionality makes it tricky. Without it I could just put frames = 200 into the FuncAnimation call rather than using the iterator/generator type function I currently have for the frames argument. But by just putting in frames = 200, the frame count seems to be un-pauseable. </p>
<p>How can I fix this?</p>
| 0 | 2016-08-16T17:14:00Z | 38,988,092 | <pre><code>ani = animation.FuncAnimation(fig, simPoints, simData, blit=False, interval=10,
                              repeat=True, save_count=200)
</code></pre>
<p>will solve the problem.</p>
<p>Internally, <code>save</code> only saves a fixed number of frames. If you pass in a fixed length sequence or a number, mpl can correctly guess the length. If you pass in a (possibly infinite) generator and do not pass in <code>save_count</code> it defaults to 100.</p>
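<p>The underlying issue can be shown without matplotlib: a generator has no length, so any consumer that wants a fixed number of frames must cap it explicitly. In this sketch <code>itertools.islice</code> plays the role that <code>save_count</code> plays inside <code>save</code> (an illustration, not matplotlib's actual implementation):</p>

```python
import itertools

def sim_data(dt=0.05):
    """Open-ended frame generator, analogous to simData in the question."""
    t = 0.0
    while True:
        yield t
        t += dt

frames = list(itertools.islice(sim_data(), 200))  # cap at 200 frames
print(len(frames))  # 200
```

<p>Without an explicit cap there is no way to know when the generator ends, which is why mpl falls back to a default of 100.</p>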
| 2 | 2016-08-17T04:23:22Z | [
"python",
"animation",
"matplotlib"
] |
Python External Classes | 38,980,813 | <p>It looks simple, but I could not find a solution.</p>
<p>I display the problem below with the simplest example I could come up with.</p>
<p>(My classes are quite a bit more complex ;) )</p>
<p><strong>file A.py</strong></p>
<pre><code>import os, sys
import B
from B import *
class _A():
    def __init__(self,someVars):
        self.someVars = someVars

    def run(self):
        print self.someVars

someVars = 'jdoe'
B._B(someVars)
</code></pre>
<hr>
<p><strong>file B.py doesn't work with import A</strong></p>
<pre><code>import A
from A import _A
class _B():
    def __init__(self,someVars):
        self.someVars = someVars

    def run(self):
        A._A(self.someVars)
</code></pre>
<p>with <code>import A</code> I get an error: cannot find _A</p>
<p>It only works when I do -</p>
<pre><code>from A import *
</code></pre>
<p>But then, logically, A's functions are executed 2 times.</p>
<p>Thanks to all</p>
| 0 | 2016-08-16T17:14:55Z | 38,981,256 | <p>There is no need to first <code>import X</code>, then <code>from X import Y</code>. If you need Y (even if <code>Y</code> is <code>*</code>) do just <code>from X import Y</code>. This might be the cause of 2 times execution.</p>
<p>Also why have cyclic dependencies between modules <code>A -> B, B -> A</code>? Maybe they should be in one file then?</p>
| 0 | 2016-08-16T17:42:10Z | [
"python",
"class",
"external",
"python-module"
] |
Python External Classes | 38,980,813 | <p>It looks simple, but I could not find a solution.</p>
<p>I display the problem below with the simplest example I could come up with.</p>
<p>(My classes are quite a bit more complex ;) )</p>
<p><strong>file A.py</strong></p>
<pre><code>import os, sys
import B
from B import *
class _A():
    def __init__(self,someVars):
        self.someVars = someVars

    def run(self):
        print self.someVars

someVars = 'jdoe'
B._B(someVars)
</code></pre>
<hr>
<p><strong>file B.py doesn't work with import A</strong></p>
<pre><code>import A
from A import _A
class _B():
    def __init__(self,someVars):
        self.someVars = someVars

    def run(self):
        A._A(self.someVars)
</code></pre>
<p>with <code>import A</code> I get an error: cannot find _A</p>
<p>It only works when I do -</p>
<pre><code>from A import *
</code></pre>
<p>But then, logically, A's functions are executed 2 times.</p>
<p>Thanks to all</p>
| 0 | 2016-08-16T17:14:55Z | 38,981,584 | <p>Because of cyclic dependency you are facing the import error, you can continue your work as: </p>
<p><em>File A.py:</em></p>
<pre><code>import os, sys
#The two import lines below create a cyclic dependency between files A and B,
#which is wrong and will give an import error.
#Commenting out these two lines resolves the import error.
#import B
#from B import *

class _A():
    def __init__(self,someVars):
        self.someVars = someVars

    def run(self):
        print self.someVars

someVars = 'jdoe'
#B._B(someVars)  #this call and its logic should be moved into file B
</code></pre>
<p>Also, you should either use <code>import A</code> or <code>from A import _A</code> and if you use the later you should call the class directly as: <code>_A(self.someVars)</code> not as: <code>A._A(self.someVars)</code>, this calling convention will be used for former import style(<code>import A</code>), for better understanding of external use of classes and module, you can refer following link: <a href="https://docs.python.org/3/tutorial/modules.html" rel="nofollow">https://docs.python.org/3/tutorial/modules.html</a></p>
| 0 | 2016-08-16T18:02:07Z | [
"python",
"class",
"external",
"python-module"
] |
Sharing code between python files | 38,980,899 | <p>Lets say I have the following files:</p>
<pre><code>-- classes.py
-- main.py
-- commands.py
-- output.py
</code></pre>
<p><code>main.py</code> is the master file which uses the code from <code>classes</code>, <code>commands</code>, and <code>output</code>. <code>commands</code> takes objects defined in <code>classes</code> as functional inputs and accesses methods/attributes of these objects. Both <code>commands</code> and <code>classes</code> use functions defined in <code>output</code>.</p>
<p>The question: do I need to import each of these modules in each of the files that depend on them?
i.e.: do I need to import <code>output</code> in each of <code>classes</code>, <code>commands</code>, and <code>main</code>? Or would the fact that <code>classes</code>, <code>output</code>, and <code>commands</code> are all imported into <code>main</code> mean that they don't need to be imported individually? </p>
<p>What is the best practice for handling multiple files with inter-dependencies? </p>
| 2 | 2016-08-16T17:19:49Z | 38,980,990 | <p>You have to import them to any file that will be using them, otherwise it will not work. That is because each file is a module, and each module has its own scope.</p>
| -2 | 2016-08-16T17:25:44Z | [
"python",
"python-3.x",
"python-import"
] |
Sharing code between python files | 38,980,899 | <p>Lets say I have the following files:</p>
<pre><code>-- classes.py
-- main.py
-- commands.py
-- output.py
</code></pre>
<p><code>main.py</code> is the master file which uses the code from <code>classes</code>, <code>commands</code>, and <code>output</code>. <code>commands</code> takes objects defined in <code>classes</code> as functional inputs and accesses methods/attributes of these objects. Both <code>commands</code> and <code>classes</code> use functions defined in <code>output</code>.</p>
<p>The question: do I need to import each of these modules in each of the files that depend on them?
i.e.: do I need to import <code>output</code> in each of <code>classes</code>, <code>commands</code>, and <code>main</code>? Or would the fact that <code>classes</code>, <code>output</code>, and <code>commands</code> are all imported into <code>main</code> mean that they don't need to be imported individually? </p>
<p>What is the best practice for handling multiple files with inter-dependencies? </p>
| 2 | 2016-08-16T17:19:49Z | 38,983,916 | <blockquote>
<p>The question: do I need to import each of these modules in each of the
files that depend on them? i.e.: do I need to import output in both
classes, commands, and main?</p>
</blockquote>
<p>Yes, this is the way to go.</p>
<blockquote>
<p>Or would the fact that classes, output, and commands are all imported
into main mean that they don't need to be imported individually?</p>
</blockquote>
<p>No.</p>
<p>Python files are modules. A Python module has a symbol table. Every function, class, and variable defined in the module is in this table. A module can only use things which are in this table, plus Python builtins.</p>
<p>So for example <code>classes.py</code>:</p>
<pre><code>def function(): pass
class Class(object): pass
</code></pre>
<p>Has symbol table:</p>
<pre><code>{
    'function': function,
    'Class': Class
}
</code></pre>
<p>You can only use <code>function</code> and <code>Class</code> within <code>classes.py</code> (plus mentioned builtins). You cannot access anything outside of this module implicitly, Python does not have any concept of namespace like C# and Java have. If you need anything from a different file (module), you must explicitly import it.</p>
<h1>Now what really happens when you "import"?</h1>
<p>Something quite straightforward - the imported module "becomes" part of the module itself!</p>
<p>In the next example we have <code>output.py</code>:</p>
<pre><code>def output_function(): pass
</code></pre>
<p>with symbol table:</p>
<pre><code>{
    'output_function': output_function
}
</code></pre>
<p>and <code>classes.py</code>:</p>
<pre><code>import output
from output import output_function
def function(): pass
class Class(object): pass
</code></pre>
<p>with symbol table:</p>
<pre><code>{
    'Class': Class,
    'function': function,
    'output': {
        'output_function': output_function
    },
    'output_function': output_function
}
</code></pre>
<p>where value of 'output' is actually the symbol table of 'output' (the very same object)!</p>
<p>You can even do in a different module:</p>
<pre><code>import classes
classes.output.output_function()
</code></pre>
<p><strong>But don't</strong>, be explicit about your imports.</p>
<p>It might sound a little bit weird, but this is how Python works. Note there are more things involved, for example when you import a module for the first time it's executed and so on...</p>
| 0 | 2016-08-16T20:29:31Z | [
"python",
"python-3.x",
"python-import"
] |
Analyzing data from continuous input stream (recording) in Python, multiprocessing? | 38,980,940 | <p>I am analyzing data coming from recording with my microphone in real-time. So far I have been doing this in a linear fashion:</p>
<ul>
<li>Record one second (takes 1s)</li>
<li>Analyze the data (takes for example 50 ms)</li>
<li>Record one second</li>
<li>Analyze the data</li>
</ul>
<p>And so forth. This obviously means that while I'm analyzing the data from the past second, I am losing this 50 ms of time, I won't be recording the sounds during it. </p>
<p>I thought multiprocessing would be the solution: I start a separate process that records non-stop in fixed-length chunks and each time sends the chunk through a pipe to the main process, which then analyzes the data. Unfortunately, sending a lot of data through a pipe (or in general, sending a lot of data from one process to another) is far from ideal, apparently. Is there any other way to do this? I just want my computer to record data and import it into python (all of which I'm already doing), while it's also analyzing data. </p>
<p>If I need to add any more details, let me know! </p>
<p>Thanks!</p>
| 0 | 2016-08-16T17:22:50Z | 38,981,468 | <p>Simple producer/consumer implementation. </p>
<p>While it's true moving data back and forth induces overhead and increases memory use, as long as the same data is not needed by more than one process that overhead is minimal. Try it and find out :) Can adjust memory footprint by changing the queue and pool size numbers.</p>
<p>Threading is another option to reduce memory use but at the expense of being blocked on the GIL and effectively single threaded if the processing is in python bytecode.</p>
<pre><code>import multiprocessing

# Some fixed size to avoid runaway memory use
recorded_data = multiprocessing.Queue(100)

def process(recorded_data):
    while True:
        data = recorded_data.get()
        <process data>

def record(recorded_data):
    for data in input_stream:
        recorded_data.put(data)

producer = multiprocessing.Process(target=record,
                                   args=(recorded_data,))
producer.start()

# Pool of 10 processes
num_proc = 10
consumer_pool = multiprocessing.Pool(num_proc)

results = []
for _ in xrange(num_proc):
    results.append(
        consumer_pool.apply_async(process,
                                  args=(recorded_data,)))

producer.join()

# If processing actually returns something
for result in results:
    print result

# Consumers wait for data from queue forever
# so terminate them when done
consumer_pool.terminate()
</code></pre>
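<p>The threading alternative mentioned above has the same producer/consumer shape; here is a self-contained Python 3 sketch using the stdlib <code>queue</code> module, with a trivial doubling step standing in for the real audio analysis:</p>

```python
import queue
import threading

recorded = queue.Queue(maxsize=100)   # bounded, like the mp.Queue above
results = []

def consumer():
    while True:
        chunk = recorded.get()
        if chunk is None:             # sentinel: no more data coming
            break
        results.append(chunk * 2)     # stand-in for the real analysis

worker = threading.Thread(target=consumer)
worker.start()

for chunk in range(5):                # stand-in for the recording loop
    recorded.put(chunk)
recorded.put(None)                    # tell the consumer to stop
worker.join()

print(results)  # [0, 2, 4, 6, 8]
```

<p>A sentinel value avoids the need to <code>terminate()</code> workers; the trade-off, as noted above, is that CPU-bound analysis in pure Python bytecode is serialized by the GIL.</p>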
| 1 | 2016-08-16T17:55:58Z | [
"python",
"multiprocessing",
"audio-recording"
] |
Multiple URLs in text file - BeautifulSoup Scraping | 38,981,017 | <p>I have a text file <code>urls.txt</code> in the same directory as my <code>script.py</code></p>
<p><code>urls.txt</code> has a list of multiple urls, one per line. </p>
<p>I am attempting to scrape all the urls in one shot and pull out the contents of a particular <code>div</code> </p>
<p>This <code>div</code> occurs multiple times on each URL</p>
<p>here is my script </p>
<pre><code>import requests
from bs4 import BeautifulSoup
from urllib import urlopen
with open('urls.txt') as inf:
    urls = (line.strip() for line in inf)

    for url in urls:
        site = urlopen(url)
        soup = BeautifulSoup(site, "lxml")

for item in soup.find_all("div", {"class": "vm-product-descr-container-1"}):
    print item.text
</code></pre>
<p>Instead of returning the contents from all the urls in <code>urls.txt</code> the script is only returning the contents from the last url in the list. </p>
<p>My script is not returning any errors, so I am not sure where I went wrong. </p>
<p>Thank you for any input. </p>
| -2 | 2016-08-16T17:26:58Z | 38,981,080 | <p>Seems like a small indentation error.
Look at this block:</p>
<pre><code>for url in urls:
    site = urlopen(url)
    soup = BeautifulSoup(site, "lxml")

for item in soup.find_all("div", {"class": "vm-product-descr-container-1"}):
    print item.text
</code></pre>
<p>Change it to this one:</p>
<pre><code>for url in urls:
    site = urlopen(url)
    soup = BeautifulSoup(site, "lxml")

    for item in soup.find_all("div", {"class": "vm-product-descr-container-1"}):
        print item.text
</code></pre>
<p>This way the print will execute for each iteration in the inside for loop.</p>
| 0 | 2016-08-16T17:30:46Z | [
"python",
"beautifulsoup"
] |
How do I get the dot product but without the summation | 38,981,194 | <p>consider array's <code>a</code> and <code>b</code></p>
<pre><code>a = np.array([
[-1, 1, 5],
[-2, 3, 0]
])
b = np.array([
[1, 1, 0],
[0, 2, 3],
])
</code></pre>
<p>Looking at</p>
<pre><code>d = a.T.dot(b)
d
array([[-1, -5, -6],
[ 1, 7, 9],
[ 5, 5, 0]])
</code></pre>
<p><code>d[0, 0]</code> is <code>-1</code>. and is the sum of <code>a[:, 0] * b[:, 0]</code>. I'd like a 2x2 array of vectors where the <code>[0, 0]</code> position would be <code>a[:, 0] * b[:, 0]</code>.</p>
<p>with the above <code>a</code> and <code>b</code>, I'd expect</p>
<pre><code>d = np.array([[a[:, i] * b[:, j] for j in range(a.shape[1])] for i in range(b.shape[1])])
d
array([[[-1, 0],
[-1, -4],
[ 0, -6]],
[[ 1, 0],
[ 1, 6],
[ 0, 9]],
[[ 5, 0],
[ 5, 0],
[ 0, 0]]])
</code></pre>
<p>The sum of <code>d</code> along <code>axis==2</code> should be the dot product <code>a.T.dot(b)</code></p>
<pre><code>d.sum(2)
array([[-1, -5, -6],
[ 1, 7, 9],
[ 5, 5, 0]])
</code></pre>
<h3>Question</h3>
<p>What is the most efficient way of getting <code>d</code>?</p>
| 4 | 2016-08-16T17:38:29Z | 38,981,589 | <p>Here's one way:</p>
<pre><code>In [219]: a
Out[219]:
array([[-1, 1, 5],
[-2, 3, 0]])
In [220]: b
Out[220]:
array([[1, 1, 0],
[0, 2, 3]])
In [221]: a.T[:,None,:] * b.T[None,:,:]
Out[221]:
array([[[-1, 0],
[-1, -4],
[ 0, -6]],
[[ 1, 0],
[ 1, 6],
[ 0, 9]],
[[ 5, 0],
[ 5, 0],
[ 0, 0]]])
</code></pre>
<p>Or...</p>
<pre><code>In [231]: (a[:,None,:] * b[:,:,None]).T
Out[231]:
array([[[-1, 0],
[-1, -4],
[ 0, -6]],
[[ 1, 0],
[ 1, 6],
[ 0, 9]],
[[ 5, 0],
[ 5, 0],
[ 0, 0]]])
</code></pre>
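<p>Equivalently, <code>np.einsum</code> expresses the "dot product without the summation" directly: keep the shared axis <code>k</code> instead of summing over it. This is just another spelling of the broadcasting answer above:</p>

```python
import numpy as np

a = np.array([[-1, 1, 5],
              [-2, 3, 0]])
b = np.array([[1, 1, 0],
              [0, 2, 3]])

# d[i, j, k] = a[k, i] * b[k, j]; summing over k would give a.T.dot(b)
d = np.einsum('ki,kj->ijk', a, b)

print(d.shape)                                    # (3, 3, 2)
print(np.array_equal(d.sum(axis=2), a.T.dot(b)))  # True
```

<p>Writing <code>'ki,kj->ij'</code> instead would perform the sum over <code>k</code> and reproduce the ordinary dot product.</p>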
| 6 | 2016-08-16T18:02:34Z | [
"python",
"numpy"
] |