title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Value Error: negative dimensions are not allowed when merging | 39,130,690 | <p>I am merging 2 dataframes together. They are originally <code>.csv</code> files which are only 7 megabytes each (2 columns and 290,000 rows). I am merging like this:</p>
<pre><code>merge=pd.merge(df1,df2, on=['POINTID'], how='outer')
</code></pre>
<p>and in 32-bit Anaconda I get:</p>
<p><code>ValueError: negative dimensions are not allowed</code></p>
<p>but on 64-bit Anaconda I get a memory error. </p>
<p>I have 12 gigabytes of RAM and only 30% of it is being used so it should not be a memory issue. I tried on another computer and get the same issue.</p>
| 1 | 2016-08-24T18:35:17Z | 39,131,034 | <p>On a 32-bit machine, the default NumPy integer dtype is <code>int32</code>.
On a 64-bit machine, the default NumPy integer dtype is <code>int64</code>.</p>
<p>The largest integers representable by an <code>int32</code> and <code>int64</code> are:</p>
<pre><code>In [88]: np.iinfo('int32').max
Out[88]: 2147483647
In [87]: np.iinfo('int64').max
Out[87]: 9223372036854775807
</code></pre>
<p>So the integer index created by <code>pd.merge</code> will support a maximum of <code>2147483647 = 2**31-1</code> rows on a 32-bit machine, and <code>9223372036854775807 = 2**63-1</code> rows on a 64-bit machine.</p>
<p>In theory, two 290000 row DataFrames merged with an <code>outer</code> join may have as many as <code>290000**2 = 84100000000</code> rows. Since</p>
<pre><code>In [89]: 290000**2 > np.iinfo('int32').max
Out[89]: True
</code></pre>
<p>the 32-bit machine may not be able to generate an integer index big enough to index the merged result.</p>
<p>And although the 64-bit machine can in theory generate an integer index big enough to accommodate the result, you may not have enough memory to build an 84-billion-row DataFrame.</p>
<p>Now, of course, the merged DataFrame may have fewer than 84 billion rows (the exact number depends on how many duplicate values appear in <code>df1['POINTID']</code> and <code>df2['POINTID']</code>), but the above back-of-the-envelope calculation shows that the behavior you are seeing is consistent with having a lot of duplicates.</p>
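As a sanity check before merging, you can estimate the size of the result from the key multiplicities. A sketch with tiny stand-in frames (`df1`/`df2` are placeholders for your real data):

```python
import pandas as pd

df1 = pd.DataFrame({'POINTID': [1, 1, 2], 'a': [10, 11, 12]})
df2 = pd.DataFrame({'POINTID': [1, 1, 1, 3], 'b': [20, 21, 22, 23]})

# An outer join produces count_in_df1 * count_in_df2 rows for each key
# present on both sides, plus one row per key seen on only one side.
c1 = df1['POINTID'].value_counts()
c2 = df2['POINTID'].value_counts()
matched = (c1 * c2).dropna().sum()          # keys present in both frames
only1 = c1[~c1.index.isin(c2.index)].sum()  # keys only in df1
only2 = c2[~c2.index.isin(c1.index)].sum()  # keys only in df2
estimated_rows = int(matched + only1 + only2)
print(estimated_rows)  # 8
```

If this estimate is anywhere near `290000**2`, your `POINTID` columns contain heavy duplication and the blow-up above is the explanation.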
<hr>
<p>PS. You can get negative values when adding or multiplying positive integers in NumPy arrays if there is arithmetic overflow:</p>
<pre><code>In [92]: np.int32(290000)*np.int32(290000)
Out[92]: -1799345920
</code></pre>
<p>My guess is that this is the reason for the exception:</p>
<pre><code>ValueError: negative dimensions are not allowed
</code></pre>
| 5 | 2016-08-24T18:54:41Z | [
"python",
"pandas"
] |
result inconsistency in python | 39,130,823 | <p>I have two text files whose contents are as follows (<code>file1.txt</code> and <code>file2.txt</code> respectively):</p>
<pre><code>MKKVEAIIRPFKLDEVKIALVNAGIVGMTVSEVRGFGRQKGQTERYRGSEYTVEFLQKLKVEIVVEDNQVDMVVDKIIAAARTGEIGDGKIFISPVEQVIRIRTGEKNTEAV
</code></pre>
<p>and</p>
<pre><code>AQTVPYGIPLIKADKVQAQGYKGANVKVGIIDTGIAASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDNTTGVLGVAPNVSLYAIKVLNSSGSGTYSAIVSGIEWATQNGLDVINMSLGGPSGSTALKQAVDKAYASGIVVVAAAGNSGSSGSQNTIGYPAKYDSVIAVGAVDSNKNRASFSSVGAELEVMAPGVSVYSTYPSNTYTSLNGTSMASPHVAGAAALILSKYPTLSASQVRNRLSSTATNLGDSFYYGKGLINVEAAAQ
</code></pre>
<p>I need to fetch the character based on the index of this string, which I know. Now, I need to fetch the 20 characters before the index and the 20 characters after the index, which makes a total of 41 characters (including the character at the index).
Here is my code </p>
<pre><code>with open('file1.txt', 'r') as myfile:
x = 50
data=myfile.read()
str1 = data[x:x+1+20]
temp = data[x-20:x]
print temp+str1
</code></pre>
<p>The output for <code>file1.txt</code> is <code>SEVRGFGRQKGQTERYRGSEYTVEFLQKLKVEIVVEDNQVD</code>, which is correct.</p>
<p>The problem is if I run the same code on <code>file2.txt</code> and change the index (value of x) to <code>56</code>, the output I should be getting is <code>AASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDN</code>. Instead I am getting <code>ASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDNT</code>
Why is this?</p>
| 0 | 2016-08-24T18:42:08Z | 39,130,903 | <pre><code>temp = data[x-20:x]
</code></pre>
<p>will give you</p>
<pre><code>temp = data[36:56]
</code></pre>
<p>i.e., characters 36 to 55 (inclusive) of your string.</p>
<p>The string starts at character 0, so <code>data[36]</code> is indeed your second <code>A</code>, i.e. the 37th character of your string.</p>
<p>There is no bug here, just an off-by-one counting issue.</p>
<p>You probably want to set x = 55: the middle character is the 56th of your string, so it has index 55, since indexing starts at zero in Python.</p>
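A small helper (hypothetical name `window`) makes the off-by-one explicit: pass the zero-based index of the middle character and it slices symmetrically, clamping at the start of the string. With the sequence from file2.txt, index 55 (not 56) yields the string the asker expected:

```python
def window(text, center, radius=20):
    # Slice `radius` characters on each side of the zero-based `center`
    # index (clamped at the start of the string), plus the center itself.
    start = max(center - radius, 0)
    return text[start:center + radius + 1]

# First 79 characters of the file2.txt sequence from the question:
seq = ("AQTVPYGIPLIKADKVQAQGYKGANVKVGIIDTGIAASHT"
       "DLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDNTTG")
print(window(seq, 55))  # AASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDN
```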
| 0 | 2016-08-24T18:47:12Z | [
"python",
"substring"
] |
result inconsistency in python | 39,130,823 | <p>I have two text files whose contents are as follows (<code>file1.txt</code> and <code>file2.txt</code> respectively):</p>
<pre><code>MKKVEAIIRPFKLDEVKIALVNAGIVGMTVSEVRGFGRQKGQTERYRGSEYTVEFLQKLKVEIVVEDNQVDMVVDKIIAAARTGEIGDGKIFISPVEQVIRIRTGEKNTEAV
</code></pre>
<p>and</p>
<pre><code>AQTVPYGIPLIKADKVQAQGYKGANVKVGIIDTGIAASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDNTTGVLGVAPNVSLYAIKVLNSSGSGTYSAIVSGIEWATQNGLDVINMSLGGPSGSTALKQAVDKAYASGIVVVAAAGNSGSSGSQNTIGYPAKYDSVIAVGAVDSNKNRASFSSVGAELEVMAPGVSVYSTYPSNTYTSLNGTSMASPHVAGAAALILSKYPTLSASQVRNRLSSTATNLGDSFYYGKGLINVEAAAQ
</code></pre>
<p>I need to fetch the character based on the index of this string, which I know. Now, I need to fetch the 20 characters before the index and the 20 characters after the index, which makes a total of 41 characters (including the character at the index).
Here is my code </p>
<pre><code>with open('file1.txt', 'r') as myfile:
x = 50
data=myfile.read()
str1 = data[x:x+1+20]
temp = data[x-20:x]
print temp+str1
</code></pre>
<p>The output for <code>file1.txt</code> is <code>SEVRGFGRQKGQTERYRGSEYTVEFLQKLKVEIVVEDNQVD</code>, which is correct.</p>
<p>The problem is if I run the same code on <code>file2.txt</code> and change the index (value of x) to <code>56</code>, the output I should be getting is <code>AASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDN</code>. Instead I am getting <code>ASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDNT</code>
Why is this?</p>
| 0 | 2016-08-24T18:42:08Z | 39,130,988 | <pre><code>with open('file1.txt', 'r') as myfile:
x = 50
data=myfile.read()
print data[x-20:x+21]
</code></pre>
<p>This takes a slice from position x-20 to position x+21. This will give you a slice of 41 chars.</p>
<p>With <code>file2.txt</code> and <code>x = 56</code>, this returns
<code>ASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDNT</code> - your code is correct; the index you pass is simply off by one.</p>
| 0 | 2016-08-24T18:51:55Z | [
"python",
"substring"
] |
result inconsistency in python | 39,130,823 | <p>I have two text files whose contents are as follows (<code>file1.txt</code> and <code>file2.txt</code> respectively):</p>
<pre><code>MKKVEAIIRPFKLDEVKIALVNAGIVGMTVSEVRGFGRQKGQTERYRGSEYTVEFLQKLKVEIVVEDNQVDMVVDKIIAAARTGEIGDGKIFISPVEQVIRIRTGEKNTEAV
</code></pre>
<p>and</p>
<pre><code>AQTVPYGIPLIKADKVQAQGYKGANVKVGIIDTGIAASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDNTTGVLGVAPNVSLYAIKVLNSSGSGTYSAIVSGIEWATQNGLDVINMSLGGPSGSTALKQAVDKAYASGIVVVAAAGNSGSSGSQNTIGYPAKYDSVIAVGAVDSNKNRASFSSVGAELEVMAPGVSVYSTYPSNTYTSLNGTSMASPHVAGAAALILSKYPTLSASQVRNRLSSTATNLGDSFYYGKGLINVEAAAQ
</code></pre>
<p>I need to fetch the character based on the index of this string, which I know. Now, I need to fetch the 20 characters before the index and the 20 characters after the index, which makes a total of 41 characters (including the character at the index).
Here is my code </p>
<pre><code>with open('file1.txt', 'r') as myfile:
x = 50
data=myfile.read()
str1 = data[x:x+1+20]
temp = data[x-20:x]
print temp+str1
</code></pre>
<p>The output for <code>file1.txt</code> is <code>SEVRGFGRQKGQTERYRGSEYTVEFLQKLKVEIVVEDNQVD</code>, which is correct.</p>
<p>The problem is if I run the same code on <code>file2.txt</code> and change the index (value of x) to <code>56</code>, the output I should be getting is <code>AASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDN</code>. Instead I am getting <code>ASHTDLKVVGGASFVSGESYNTDGNGHGTHVAGTVAALDNT</code>
Why is this?</p>
| 0 | 2016-08-24T18:42:08Z | 39,133,150 | <p>Okay, so if you check, index 50 in file1 (first row) is <code>Y</code>; the 20 elements to its left start from <code>SEV...</code>, i.e. from index 30. This is your result for file1, which you say is correct.</p>
<pre><code>M K K V E A I I R P F K L D E V K I A L V N A G I V G M T V S E V R G F G R Q K G Q T E R Y R G S E Y T V E F L Q K L K V E I V V E D N Q V D M V V D K I I A A
A Q T V P Y G I P L I K A D K V Q A Q G Y K G A N V K V G I I D T G I A A S H T D L K V V G G A S F V S G E S Y N T D G N G H G T H V A G T V A A L D N T T G V
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79
</code></pre>
<p>Now, for file2 (second row), you set the index to <code>56</code>, which is <code>N</code> in file2, and the 20 elements to its left start from <code>ASH...</code>, i.e. from index 36. This is the expected result; I don't see why you are saying it is wrong.</p>
<p>The third row represents the index numbers.</p>
<p>Also see image below:
<a href="http://i.stack.imgur.com/OcnlP.png" rel="nofollow"><img src="http://i.stack.imgur.com/OcnlP.png" alt="enter image description here"></a></p>
| 0 | 2016-08-24T21:15:10Z | [
"python",
"substring"
] |
Python - Printing numbers with spacing format | 39,130,914 | <p>Let's say I have an array of numbers where</p>
<pre><code>list = [(4, 3, 7, 23),(17, 4021, 4, 92)]
</code></pre>
<p>and I want to print the numbers out in such a way so that the output looks somewhat like this:</p>
<pre><code>[ 4 | 3 | 7 | 23 ]
[ 17 | 4021 | 4 | 92 ]
</code></pre>
<p>Where the numbers are as centered as possible and that there is enough space in between the "|" to allow a 4 digit number with two spaces on either side.</p>
<p>How would I do this?</p>
<p>Thank you.</p>
| 0 | 2016-08-24T18:47:52Z | 39,131,038 | <p>You can also use third-party libraries like <a href="https://pypi.python.org/pypi/PrettyTable" rel="nofollow"><code>PrettyTable</code></a> or <a href="https://pypi.python.org/pypi?name=texttable&%3Aaction=display" rel="nofollow"><code>texttable</code></a>. Example using <code>texttable</code>:</p>
<pre><code>import texttable
l = [(4, 3, 7, 23),(17, 4021, 4, 92)]
table = texttable.Texttable()
# table.set_chars(["", "|", "", ""])
table.add_rows(l)
print(table.draw())
</code></pre>
<p>Would produce:</p>
<pre><code>+----+------+---+----+
| 4 | 3 | 7 | 23 |
+====+======+===+====+
| 17 | 4021 | 4 | 92 |
+----+------+---+----+
</code></pre>
| 2 | 2016-08-24T18:55:24Z | [
"python",
"string",
"list",
"format",
"number-formatting"
] |
Python - Printing numbers with spacing format | 39,130,914 | <p>Let's say I have an array of numbers where</p>
<pre><code>list = [(4, 3, 7, 23),(17, 4021, 4, 92)]
</code></pre>
<p>and I want to print the numbers out in such a way so that the output looks somewhat like this:</p>
<pre><code>[ 4 | 3 | 7 | 23 ]
[ 17 | 4021 | 4 | 92 ]
</code></pre>
<p>Where the numbers are as centered as possible and that there is enough space in between the "|" to allow a 4 digit number with two spaces on either side.</p>
<p>How would I do this?</p>
<p>Thank you.</p>
| 0 | 2016-08-24T18:47:52Z | 39,131,158 | <p>Here: </p>
<pre><code>list = [[4, 3, 7, 23],[17, 4021, 4, 92]]
for sublist in list:
output = "["
for index, x in enumerate(sublist):
output +='{:^6}'.format(x)
if index != len(sublist)-1:
output += '|'
output +=']'
print output
</code></pre>
<p>Output: </p>
<pre><code>[ 4 | 3 | 7 | 23 ]
[ 17 | 4021 | 4 | 92 ]
</code></pre>
| 0 | 2016-08-24T19:01:52Z | [
"python",
"string",
"list",
"format",
"number-formatting"
] |
Python - Printing numbers with spacing format | 39,130,914 | <p>Let's say I have an array of numbers where</p>
<pre><code>list = [(4, 3, 7, 23),(17, 4021, 4, 92)]
</code></pre>
<p>and I want to print the numbers out in such a way so that the output looks somewhat like this:</p>
<pre><code>[ 4 | 3 | 7 | 23 ]
[ 17 | 4021 | 4 | 92 ]
</code></pre>
<p>Where the numbers are as centered as possible and that there is enough space in between the "|" to allow a 4 digit number with two spaces on either side.</p>
<p>How would I do this?</p>
<p>Thank you.</p>
| 0 | 2016-08-24T18:47:52Z | 39,131,213 | <p><a href="https://docs.python.org/2/library/string.html#string.center" rel="nofollow">str.center</a> can make things easier.</p>
<pre><code>for i in list:
print '[ ' + ' | '.join([str(j).center(4) for j in i]) + ' ]'
</code></pre>
<p>Output:</p>
<pre><code>[ 4 | 3 | 7 | 23 ]
[ 17 | 4021 | 4 | 92 ]
</code></pre>
<p>In case you need an alternative solution, you can use <a href="https://docs.python.org/2/library/string.html#format-examples" rel="nofollow">str.format</a>: </p>
<pre><code>for i in list:
print '[ ' + ' | '.join(["{:^4}".format(j) for j in i]) + ' ]'
</code></pre>
<p>Output:</p>
<pre><code>[ 4 | 3 | 7 | 23 ]
[ 17 | 4021 | 4 | 92 ]
</code></pre>
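If you'd rather compute the column width from the data instead of hard-coding it for 4-digit numbers, a sketch:

```python
rows = [(4, 3, 7, 23), (17, 4021, 4, 92)]

# Derive the cell width from the widest number, then pad by two spaces
# on each side as the question asks.
width = max(len(str(n)) for row in rows for n in row)

lines = []
for row in rows:
    lines.append('[ ' + ' | '.join(str(n).center(width + 2) for n in row) + ' ]')
print('\n'.join(lines))
```

This keeps the columns aligned even if a 5-digit number shows up later.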
| 1 | 2016-08-24T19:04:59Z | [
"python",
"string",
"list",
"format",
"number-formatting"
] |
Python TypeError: Incorrect padding when write to image | 39,130,919 | <p>I am stuck with this error. This is the base64 encoding <a href="http://pastebin.com/16KSrNuL" rel="nofollow">http://pastebin.com/16KSrNuL</a> and I try to decode it into an image using this code:</p>
<pre><code>wf = open('/Users/me/base.txt', 'w')
wf.write(data.get('base64'))
wf.close()
fp = open('/Users/me/base_result.png', 'wb')
fp.write(base64.b64decode(open('/Users/me/base.txt', 'rb').read()))
fp.close()
</code></pre>
<p>In my case, I am trying to POST JSON data.</p>
| 1 | 2016-08-24T18:48:09Z | 39,131,001 | <p>You need to remove the leading string, i.e. <em>data:image/png;base64,</em>, to get the <em>base64</em>-encoded data:</p>
<pre><code>with open("/Users/me/base.txt") as f, open("/Users/me/base_result.png","wb") as out:
out.write(f.read().split(",",1)[1].decode("base-64"))
</code></pre>
<p>When you do, you get:</p>
<p><a href="http://i.stack.imgur.com/uQjYq.png" rel="nofollow"><img src="http://i.stack.imgur.com/uQjYq.png" alt="enter image description here"></a></p>
<p>Obviously the leading substring is not base64 encoded.</p>
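For reference, a variant using the standard `base64` module that splits the data URI at its first comma. The payload here is a shortened stand-in, not the asker's real image data:

```python
import base64

data_uri = "data:image/png;base64,aGVsbG8="  # shortened example payload
# partition() splits once at the first comma: everything before it is the
# MIME header, everything after is the actual base64 payload.
header, _, payload = data_uri.partition(",")
raw = base64.b64decode(payload)
print(raw)
```

Writing `raw` to a file opened with `"wb"` then produces the decoded bytes without the `Incorrect padding` error, since the non-base64 header is no longer part of the input.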
| 2 | 2016-08-24T18:52:44Z | [
"python",
"base64"
] |
I can't call upon my variable which contains several words in it (if in variable:) | 39,130,924 | <p>I am trying to call upon a variable I created for an 'if' statement in Python.
Here is a summed up version:</p>
<pre><code>yes = ("yes", "y")
question1 = input("Am I right?")
if question1.lower() in yes:
print ("you are correct")
</code></pre>
<p>but I get an error : </p>
<pre><code>--'in <string>' requires string as left operand, not builtin_function_or_method
</code></pre>
<p>My actual code is quite odd, but here you go (it isn't for the faint of heart):</p>
<pre><code>yes = ["yes", "y"]
m = "men"
w = "women"
badkeywordslist = ["depression", "pain", "hurt", "dead", "die", "kill", "hell", "suffering", "cutting", "cut", "death"]
Question1 = input("We will start off simple, what is your name?")
if len(Question1) > 0 and Question1.isalpha():
Question2 = input("Ah! Lovely name, %s. Not surprised you get all the women, or is it men?" % Question1)
if Question2.lower() in m:
print ("So, your name is %s and you enjoy the pleasure of %s! I bet you didnt see that coming." % (Question1, Question2))
elif Question2.lower() in w:
print ("So, your name is %s and you enjoy the pleasure of %s! I bet you didnt see that coming." % (Question1, Question2))
else:
print ("Come on! You're helpless. I asked you a simple question with 2 very destinctive answers. Restart!")
else:
print ("Come on, enter your accurate information before proceeding! Restart me!")
Question3 = input("Now I know your name and what gender attracts you. One more question and I will know everything about you... Shall we continue?")
if Question3.lower() in yes:
Question4 = input("Well, it's quite simple really. What's good in your life and what's bad?")
if Question4 in badkeywordslist:
print ("Oh... So your life isn't going so great now, is it? For starters, are you safe?")
</code></pre>
| 0 | 2016-08-24T18:48:18Z | 39,131,041 | <p>In Python 2.x you must use <code>raw_input</code> instead of <code>input</code>:
<code>input()</code> actually evaluates the input as Python code, while
<code>raw_input()</code> returns the verbatim string entered by the user.</p>
<pre><code>yes = ("yes", "y")
question1 = raw_input("Am I right?")
if question1.lower() in yes:
print("you are correct")
</code></pre>
<p>Just tried your code on <a href="https://www.codechef.com/ide" rel="nofollow">https://www.codechef.com/ide</a> and it's perfectly fine.</p>
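A small compatibility shim (a common pattern, sketched here with a hypothetical name `read_input`) lets the same check run under both Python 2 and 3; the hard-coded `answer` stands in for a real prompt so the snippet runs non-interactively:

```python
try:
    read_input = raw_input  # Python 2: returns the entered text verbatim
except NameError:
    read_input = input      # Python 3: input() already behaves this way

yes = ("yes", "y")
answer = "Yes"              # stand-in for read_input("Am I right? ")
print(answer.lower() in yes)  # True
```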
| -1 | 2016-08-24T18:55:33Z | [
"python"
] |
How to define a member function of a class at the class level or at the level of instance objects? | 39,130,950 | <p>In Python 2.7, when defining a class, how can we define </p>
<ul>
<li><p>member functions at the level of class, i.e. its first argument is the class object, not an instance object of the class</p></li>
<li><p>member functions at the level of the class' instance objects, i.e. its first argument is an instance object of the class, not the class object.</p></li>
</ul>
<p>When using a given class, how can we tell if a member function is at the level of class or at the level of the class' instance objects?</p>
<p>For example, in the Python standard library, the <code>setUp()</code> from <code>TestCase</code> is called for each instance object of <code>TestCase</code>, i.e. at the level of instance objects, while
class level fixtures are implemented in <code>TestSuite</code>. When the test suite encounters a test from a new class then <code>tearDownClass()</code> from the previous class (if there is one) is called, followed by <code>setUpClass()</code> from
the new class.</p>
<p>Thanks.</p>
| 0 | 2016-08-24T18:49:46Z | 39,131,266 | <blockquote>
<p>The @classmethod form is a function decorator; see the description of
function definitions in Function definitions for details.</p>
<p>It can be called either on the class (such as C.f()) or on an instance
(such as C().f()). The instance is ignored except for its class. If a
class method is called for a derived class, the derived class object
is passed as the implied first argument.</p>
<p>Class methods are different than C++ or Java static methods. If you
want those, see staticmethod() in this section.</p>
<p>For more information on class methods, consult the documentation on
the standard type hierarchy in The standard type hierarchy.</p>
</blockquote>
<p><a href="http://DOCUMENT" rel="nofollow">https://docs.python.org/2/library/functions.html#classmethod</a></p>
<pre><code>>>> class A:
... message = "class message"
...
... @classmethod
... def classLevel(cls):
... print(cls.message)
...
... def instanceLevel(self, msg):
... self.message = msg
... print(self.message)
>>> a= A()
>>> a.instanceLevel('123')
123
>>> A.classLevel()
class message
>>> a.classLevel()
class message
>>> A.instanceLevel()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method instanceLevel() must be called with A instance as first argument (got nothing instead)
>>> A.__dict__
{'classLevel': <classmethod object at 0x4E974BB0>, '__module__': '__main__', 'instanceLevel': <function instanceLevel at 0x550C8530>, 'message': 'class message', '__doc__': None}
</code></pre>
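To see all three flavors at once, here is a minimal sketch (not from the quoted docs): an instance method receives the instance, a classmethod receives the class even when called on an instance, and a staticmethod receives neither.

```python
class C(object):
    message = "class message"

    def instance_method(self):      # implicit first argument: the instance
        return ("instance", self)

    @classmethod
    def class_method(cls):          # implicit first argument: the class
        return ("class", cls)

    @staticmethod
    def static_method():            # no implicit first argument at all
        return "static"

c = C()
print(c.instance_method()[0])    # called on an instance
print(C.class_method()[1] is C)  # called on the class -> True
print(c.class_method()[1] is C)  # instance is ignored except for its class -> True
print(C.static_method())
```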
| 2 | 2016-08-24T19:08:25Z | [
"python",
"python-2.7"
] |
ImportError: No module named '_version' when importing mechanize | 39,130,957 | <p>I installed mechanize via pip and get an error when I import the module:</p>
<pre><code>$ python
Python 3.5.2 (default, Jun 28 2016, 08:46:01)
[GCC 6.1.1 20160602] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import mechanize
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.5/site-packages/mechanize/__init__.py", line 119, in <module>
from _version import __version__
ImportError: No module named '_version'
</code></pre>
<p>The file <code>_version.py</code> is present in the site-packages directory:</p>
<pre><code>$ ls /usr/lib/python3.5/site-packages/mechanize
_auth.py __init__.py _response.py
_beautifulsoup.py _lwpcookiejar.py _rfc3986.py
_clientcookie.py _markupbase.py _sgmllib_copy.py
_debug.py _mechanize.py _sockettimeout.py
_firefox3cookiejar.py _mozillacookiejar.py _testcase.py
_form.py _msiecookiejar.py _urllib2_fork.py
_gzip.py _opener.py _urllib2.py
_headersutil.py _pullparser.py _useragent.py
_html.py __pycache__ _util.py
_http.py _request.py _version.py
</code></pre>
<p>What am I missing?</p>
| 1 | 2016-08-24T18:50:06Z | 39,131,141 | <p>If you look at <a href="https://github.com/jjlee/mechanize/blob/master/setup.py#L38" rel="nofollow"><code>setup.py</code></a> you'll see <code>mechanize</code> is a <code>Python 2.x</code> package: </p>
<pre><code>Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.4
Programming Language :: Python :: 2.5
Programming Language :: Python :: 2.6
Programming Language :: Python :: 2.7
</code></pre>
<p>Apart from that, you can see in <code>mechanize/__init__.py</code> that all imports are relative:</p>
<pre><code>from _version import __version__
</code></pre>
<p>instead of explicit:</p>
<pre><code>from ._version import __version__
</code></pre>
<p><em><a href="http://stackoverflow.com/questions/12172791/changes-in-import-statement-python3">In python 3, this results in import errors.</a></em></p>
<p>There's an <a href="https://github.com/jjlee/mechanize/issues/96" rel="nofollow">issue</a> opened for <code>Py3</code> support and it lists some alternatives you could try. That, or port it :-).</p>
| 3 | 2016-08-24T19:00:45Z | [
"python",
"python-3.x",
"import",
"mechanize"
] |
python json, replacing values with key and value pairs | 39,131,023 | <p>In the following example I am trying to replace the value of one key with the value of another key, but I have tried multiple ways and it doesn't seem to work.</p>
<p>following is my code</p>
<pre><code>d = {
"name" : "ABC",
"type" : "Service",
"clusterRef" : {
"clusterName" : "ABCSTUFF"
},
"serviceState" : "STARTED",
"healthChecks" : [ {
"name" : "STORAGE",
"summary" : "GOOD"
}, {
"name" : "CPU UTILIZATION",
"summary" : "GOOD"
} ],
"maintenanceMode" : "false"
}
########################
## Get Key Value
def get_key_values(d, key):
for k, v in d.items():
if k == "name":
k = (key + "." + v)
else:
k = (key + "." + k)
if isinstance(v, dict):
get_key_values(v, k)
elif isinstance(v, list):
for i in v:
get_key_values(i, k)
else:
print ("{0} : {1}".format(k, v))
get_key_values(d, "TTS")
</code></pre>
<p>the result come up like following</p>
<blockquote>
<pre><code>TTS.serviceState : STARTED
TTS.type : Service
TTS.ABC : ABC
TTS.clusterRef.clusterName : ABCSTUFF
TTS.healthChecks.summary : GOOD <<< remove this line and replace "Good" with the value for "TTS.healthChecks.STORAGE"
TTS.healthChecks.STORAGE : STORAGE
TTS.healthChecks.summary : GOOD <<< remove this line and replace "Good" with the value for "TTS.healthChecks.CPU UTILIZATION"
TTS.healthChecks.CPU UTILIZATION : CPU UTILIZATION
TTS.maintenanceMode : false
</code></pre>
</blockquote>
<p>but I want the result to be following</p>
<blockquote>
<pre><code>TTS.serviceState : STARTED
TTS.type : Service
TTS.ABC : ABC
TTS.clusterRef.clusterName : ABCSTUFF
TTS.healthChecks.STORAGE : GOOD <<<
TTS.healthChecks.CPU UTILIZATION : GOOD <<<
TTS.maintenanceMode : false
</code></pre>
</blockquote>
<p>Any help is much appreciated</p>
| 0 | 2016-08-24T18:53:44Z | 39,131,891 | <p>Here's a non-generic solution which works for your question:</p>
<pre><code>d = {
"name": "ABC",
"type": "Service",
"clusterRef": {
"clusterName": "ABCSTUFF"
},
"serviceState": "STARTED",
"healthChecks": [{
"name": "STORAGE",
"summary": "GOOD"
}, {
"name": "CPU UTILIZATION",
"summary": "GOOD"
}],
"maintenanceMode": "false"
}
########################
# Get Key Value
def get_key_values(d, key):
for k, v in d.items():
if k == "name":
k = (key + "." + v)
else:
k = (key + "." + k)
if isinstance(v, dict):
get_key_values(v, k)
elif isinstance(v, list):
for i in v:
tok1 = k + "." + i.get("name")
tok2 = i.get("summary")
print("{0} : {1}".format(tok1, tok2))
else:
print("{0} : {1}".format(k, v))
get_key_values(d, "TTS")
</code></pre>
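A slightly more generic variant (a sketch, with a hypothetical `flatten` helper) collapses any list item that carries both a `name` and a `summary` key, and returns the pairs instead of printing them:

```python
def flatten(d, prefix):
    # Walk the dict, building dotted keys; list items that carry both a
    # "name" and a "summary" collapse to one (<prefix>.<name>, <summary>) pair.
    pairs = []
    for k, v in d.items():
        if k == "name" and isinstance(v, str):
            key = prefix + "." + v   # mirror the original renaming of "name"
        else:
            key = prefix + "." + k
        if isinstance(v, dict):
            pairs.extend(flatten(v, key))
        elif isinstance(v, list):
            for item in v:
                if isinstance(item, dict) and "name" in item and "summary" in item:
                    pairs.append((key + "." + item["name"], item["summary"]))
                else:
                    pairs.extend(flatten(item, key))
        else:
            pairs.append((key, v))
    return pairs

d = {
    "name": "ABC",
    "clusterRef": {"clusterName": "ABCSTUFF"},
    "healthChecks": [
        {"name": "STORAGE", "summary": "GOOD"},
        {"name": "CPU UTILIZATION", "summary": "GOOD"},
    ],
    "maintenanceMode": "false",
}

result = dict(flatten(d, "TTS"))
for k, v in result.items():
    print("{0} : {1}".format(k, v))
```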
| 1 | 2016-08-24T19:47:22Z | [
"python",
"json"
] |
zooming with NavigationToolbar does not work in matplotlib with pyqt | 39,131,025 | <p>I am trying to plot data using Python 2.7, matplotlib, Qt5 (version 5.7) and PyQt5. I took an example and adapted it to my needs. The NavigationToolbar gets added to the plot window, but zooming does not work. I am also not sure what the second argument of the NavigationToolbar constructor should be.</p>
<pre class="lang-py prettyprint-override"><code>import sys
import matplotlib
matplotlib.use("Qt5Agg")
from numpy import arange, sin, pi
from matplotlib.backends.backend_qt5agg import NavigationToolbar2QT as NavigationToolbar
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure
from PyQt5.QtWidgets import QWidget, QMainWindow, QApplication, QSizePolicy, QVBoxLayout, QPushButton
from PyQt5.QtCore import *
from PyQt5.QtGui import QCursor
class MyMplCanvas(FigureCanvas):
"""Ultimately, this is a QWidget (as well as a FigureCanvasAgg, etc.)."""
def __init__(self, parent=None, width=5, height=4, dpi=100):
fig = Figure(figsize=(width, height), dpi=dpi)
self.fig = fig
self.axes = fig.add_subplot(111)
# We want the axes cleared every time plot() is called
self.axes.hold(False)
self.compute_initial_figure()
FigureCanvas.__init__(self, fig)
self.setParent(parent)
FigureCanvas.setSizePolicy(self,
QSizePolicy.Expanding,
QSizePolicy.Expanding)
FigureCanvas.updateGeometry(self)
def compute_initial_figure(self):
pass
class MyStaticMplCanvas(MyMplCanvas):
"""Simple canvas with a sine plot."""
def compute_initial_figure(self):
t = arange(0.0, 3.0, 0.01)
s = sin(2*pi*t)
self.axes.plot(t, s)
class ApplicationWindow(QWidget):
def __init__(self):
QWidget.__init__(self)
self.setAttribute(Qt.WA_DeleteOnClose)
self.main_widget = QWidget(self)
self.l = QVBoxLayout(self)
but = QPushButton("make_new", self)
but.clicked.connect(self.again)
self.l.addWidget(but)
def again(self):
sc = MyStaticMplCanvas(self, width=5, height=4, dpi=100)
self.sc = sc
win = MyMplCanvas()
win.fig = self.sc.fig
FigureCanvas.__init__(win, win.fig)
self.win = win
self.mpl_toolbar = NavigationToolbar(self.sc, self.win)
win.show()
qApp = QApplication(sys.argv)
aw = ApplicationWindow()
aw.show()
sys.exit(qApp.exec_())
</code></pre>
| 0 | 2016-08-24T18:53:50Z | 39,153,251 | <p>Here is how I fixed it, in case someone else is interested:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import matplotlib
matplotlib.use("Qt5Agg")
import numpy as np
from numpy import arange, sin, pi
from matplotlib.backends.backend_qt5agg import NavigationToolbar2QT as NavigationToolbar
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure
from PyQt5.QtWidgets import QWidget, QMainWindow, QApplication, QSizePolicy, QVBoxLayout, QPushButton
from PyQt5.QtCore import *
from PyQt5.QtGui import QCursor
class MyMplCanvas(FigureCanvas):
#Ultimately, this is a QWidget (as well as a FigureCanvasAgg, etc.).
def __init__(self, parent=None, width=5, height=4, dpi=100):
fig = Figure(figsize=(width, height), dpi=dpi)
self.fig = fig
self.axes = fig.add_subplot(111)
# We want the axes cleared every time plot() is called
self.axes.hold(False)
self.compute_initial_figure()
FigureCanvas.__init__(self, fig)
self.setParent(parent)
FigureCanvas.setSizePolicy(self,
QSizePolicy.Expanding,
QSizePolicy.Expanding)
FigureCanvas.updateGeometry(self)
def compute_initial_figure(self):
pass
class MyStaticMplCanvas(MyMplCanvas):
#Simple canvas with a sine plot.
def compute_initial_figure(self):
t = arange(0.0, 3.0, 0.01)
s = sin(2*pi*t)
self.axes.plot(t, s)
class PlotDialog(QWidget):
def __init__(self):
QWidget.__init__(self)
self.plot_layout = QVBoxLayout(self)
self.plot_canvas = MyStaticMplCanvas(self, width=5, height=4, dpi=100)
self.navi_toolbar = NavigationToolbar(self.plot_canvas, self)
self.plot_layout.addWidget(self.plot_canvas) # the matplotlib canvas
self.plot_layout.addWidget(self.navi_toolbar)
if __name__ == "__main__":
import sys
app = QApplication(sys.argv)
dialog = PlotDialog()
dialog.show()
sys.exit(app.exec_())
</code></pre>
| 0 | 2016-08-25T19:23:55Z | [
"python",
"matplotlib",
"pyqt",
"pyqt5"
] |
AttributeError: 'account.invoice.refund' object has no attribute 'journal_id' - Odoo v9 community | 39,131,039 | <p>I'm having a hard time with a Odoo v9 module I'm adapting, for debit/credit notes.</p>
<p>I'm fixing the bugs on it, so far, I have this issue right now, on this function:</p>
<pre><code>def compute_refund(self, cr, uid, ids, mode='refund', context=None):
"""@param cr: the current row, from the database cursor,
@param uid: the current userâs ID for security checks,
@param ids: the account invoice refundâs ID or list of IDs
"""
inv_obj = self.pool.get('account.invoice')
reconcile_obj = self.pool.get('account.move.reconcile')
account_m_line_obj = self.pool.get('account.move.line')
mod_obj = self.pool.get('ir.model.data')
act_obj = self.pool.get('ir.actions.act_window')
inv_tax_obj = self.pool.get('account.invoice.tax')
inv_line_obj = self.pool.get('account.invoice.line')
res_users_obj = self.pool.get('res.users')
if context is None:
context = {}
for form in self.browse(cr, uid, ids, context=context):
    created_inv = []
    date = False
    period = False
    description = False
    company = res_users_obj.browse(
        cr, uid, uid, context=context).company_id
    journal_id = form.journal_id.id
    for inv in inv_obj.browse(cr, uid, context.get('active_ids'),
                              context=context):
        if inv.state in ['draft', 'proforma2', 'cancel']:
            raise osv.except_osv(_('Error!'), _(
                'Cannot %s draft/proforma/cancel invoice.') % (mode))
        if inv.reconciled and mode in ('cancel', 'modify'):
            raise osv.except_osv(_('Error!'), _(
                'Cannot %s invoice which is already reconciled, '
                'invoice should be unreconciled first. You can only '
                'refund this invoice.') % (mode))
        if form.period.id:
            period = form.period.id
        else:
            period = inv.period_id and inv.period_id.id or False
        if not journal_id:
            journal_id = inv.journal_id.id
        if form.date:
            date = form.date
            if not form.period.id:
                cr.execute("select name from ir_model_fields \
                            where model = 'account.period' \
                            and name = 'company_id'")
                result_query = cr.fetchone()
                if result_query:
                    cr.execute("""select p.id from account_fiscalyear y
                        , account_period p
                        where y.id=p.fiscalyear_id \
                        and date(%s) between p.date_start AND
                        p.date_stop and y.company_id = %s limit 1""",
                        (date, company.id,))
                else:
                    cr.execute("""SELECT id
                        from account_period where date(%s)
                        between date_start AND date_stop \
                        limit 1 """, (date,))
                res = cr.fetchone()
                if res:
                    period = res[0]
        else:
            date = inv.date_invoice
        if form.description:
            description = form.description
        else:
            description = inv.name
        if not period:
            raise osv.except_osv(_('Insufficient Data!'),
                                 _('No period found on the invoice.'))
        refund_id = inv_obj.refund(cr, uid, [
            inv.id], date, period,
            description, journal_id,
            context=context)
        refund = inv_obj.browse(cr, uid, refund_id[0], context=context)
        # Add parent invoice
        inv_obj.write(cr, uid, [refund.id],
                      {'date_due': date,
                       'check_total': inv.check_total,
                       'parent_id': inv.id})
        inv_obj.button_compute(cr, uid, refund_id)
        created_inv.append(refund_id[0])
        if mode in ('cancel', 'modify'):
            movelines = inv.move_id.line_id
            to_reconcile_ids = {}
            for line in movelines:
                if line.account_id.id == inv.account_id.id:
                    to_reconcile_ids[line.account_id.id] = [line.id]
                if type(line.reconcile_id) != osv.orm.browse_null:
                    reconcile_obj.unlink(cr, uid, line.reconcile_id.id)
            refund.signal_workflow('invoice_open')
            #wf_service.trg_validate(uid, 'account.invoice',
            #                       refund.id, 'invoice_open', cr)
            refund = inv_obj.browse(
                cr, uid, refund_id[0], context=context)
            for tmpline in refund.move_id.line_id:
                if tmpline.account_id.id == inv.account_id.id:
                    to_reconcile_ids[
                        tmpline.account_id.id].append(tmpline.id)
            for account in to_reconcile_ids:
                account_m_line_obj.reconcile(
                    cr, uid, to_reconcile_ids[account],
                    writeoff_period_id=period,
                    writeoff_journal_id=inv.journal_id.id,
                    writeoff_acc_id=inv.account_id.id
                )
            if mode == 'modify':
                invoice = inv_obj.read(cr, uid, [inv.id],
                                       ['name', 'type', 'number',
                                        'reference', 'comment',
                                        'date_due', 'partner_id',
                                        'partner_insite',
                                        'partner_contact',
                                        'partner_ref', 'payment_term',
                                        'account_id', 'currency_id',
                                        'invoice_line', 'tax_line',
                                        'journal_id', 'period_id'],
                                       context=context)
                invoice = invoice[0]
                del invoice['id']
                invoice_lines = inv_line_obj.browse(
                    cr, uid, invoice['invoice_line'], context=context)
                invoice_lines = inv_obj._refund_cleanup_lines(
                    cr, uid, invoice_lines, context=context)
                tax_lines = inv_tax_obj.browse(
                    cr, uid, invoice['tax_line'], context=context)
                tax_lines = inv_obj._refund_cleanup_lines(
                    cr, uid, tax_lines, context=context)
                invoice.update({
                    'type': inv.type,
                    'date_invoice': date,
                    'state': 'draft',
                    'number': False,
                    'invoice_line': invoice_lines,
                    'tax_line': tax_lines,
                    'period_id': period,
                    'name': description,
                    'origin': self._get_orig(cr, uid, inv, context={}),
                })
                for field in (
                        'partner_id', 'account_id', 'currency_id',
                        'payment_term', 'journal_id'):
                    invoice[field] = invoice[
                        field] and invoice[field][0]
                inv_id = inv_obj.create(cr, uid, invoice, {})
                if inv.payment_term.id:
                    data = inv_obj.onchange_payment_term_date_invoice(
                        cr, uid, [inv_id], inv.payment_term.id, date)
                    if 'value' in data and data['value']:
                        inv_obj.write(cr, uid, [inv_id], data['value'])
                created_inv.append(inv_id)
    xml_id = (inv.type == 'out_refund') and 'action_invoice_tree1' or \
             (inv.type == 'in_refund') and 'action_invoice_tree2' or \
             (inv.type == 'out_invoice') and 'action_invoice_tree3' or \
             (inv.type == 'in_invoice') and 'action_invoice_tree4'
    result = mod_obj.get_object_reference(cr, uid, 'account', xml_id)
    id = result and result[1] or False
    result = act_obj.read(cr, uid, id, context=context)
    invoice_domain = eval(result['domain'])
    invoice_domain.append(('id', 'in', created_inv))
    result['domain'] = invoice_domain
    return result
</code></pre>
<p>Every time I try to make a refund invoice, it throws this error:</p>
<pre><code>Traceback (most recent call last):
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/http.py", line 646, in _handle_exception
    return super(JsonRequest, self)._handle_exception(exception)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/http.py", line 683, in dispatch
    result = self._call_function(**self.params)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/http.py", line 319, in _call_function
    return checked_call(self.db, *args, **kwargs)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/service/model.py", line 118, in wrapper
    return f(dbname, *args, **kwargs)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/http.py", line 312, in checked_call
    result = self.endpoint(*a, **kw)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/http.py", line 962, in __call__
    return self.method(*args, **kw)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/http.py", line 512, in response_wrap
    response = f(*args, **kw)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/addons/web/controllers/main.py", line 901, in call_button
    action = self._call_kw(model, method, args, {})
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/addons/web/controllers/main.py", line 889, in _call_kw
    return getattr(request.registry.get(model), method)(request.cr, request.uid, *args, **kwargs)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/api.py", line 250, in wrapper
    return old_api(self, *args, **kwargs)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/addons/debit_credit_note/wizard/account_invoice_refund.py", line 280, in invoice_refund
    return self.compute_refund(cr, uid, ids, data_refund, context=context)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/api.py", line 250, in wrapper
    return old_api(self, *args, **kwargs)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/addons/debit_credit_note/wizard/account_invoice_refund.py", line 130, in compute_refund
    journal_id = form.journal_id.id
AttributeError: 'account.invoice.refund' object has no attribute 'journal_id'
</code></pre>
<p>I've added the line <code>journal_obj = self.pool.get('account.journal')</code> after <code>res_users_obj = self.pool.get('res.users')</code>, but I still get the same error. I think I need to load the <code>account.journal</code> objects for this operation, but even with that line I can't do it.</p>
<p>Can anybody shed some light on this?</p>
<p>If you need more info, please let me know.</p>
<p>Thanks in advance!</p>
 | 0 | 2016-08-24T18:55:27Z | 39,134,533 | <p>Activate Developer mode, go to Settings -> Technical -> Database Structure -> Models, search for <code>account.invoice.refund</code>, and check whether this model has any field named <code>journal_id</code>. As far as I can see, this field existed on v8 but does not exist on v9.</p>
<p>You can check this on <a href="http://runbot.odoo.com" rel="nofollow">runbot.odoo.com</a> (login: admin/admin).</p>
| 2 | 2016-08-24T23:34:55Z | [
"python",
"openerp",
"odoo-9"
] |
Using command line arguments in imported module | 39,131,122 | <p>I have script <code>download.py</code> with:</p>
<pre><code>import argparse
import models
parser = argparse.ArgumentParser()
parser.add_argument("--db_path", required=True)
args = parser.parse_args()
</code></pre>
<p><code>models.py</code>:</p>
<pre><code>import peewee
database = peewee.SqliteDatabase("wee.db")
class Artist(peewee.Model):
    name = peewee.CharField()

    class Meta:
        database = database

class Album(peewee.Model):
    artist = peewee.ForeignKeyField(Artist)
    title = peewee.CharField()
    release_date = peewee.DateTimeField()
    publisher = peewee.CharField()
    media_type = peewee.CharField()

    class Meta:
        database = database
</code></pre>
<p>Instead of <code>wee.db</code> I'd like to use the database path from <code>args.db_path</code> in the <code>download.py</code> file. How can I do this?</p>
| 0 | 2016-08-24T18:59:59Z | 39,131,233 | <p>All you have to do is pass your arg to the <code>SqliteDatabase</code> constructor.</p>
<p>(If you were using Django rather than plain peewee, you could and should specify your database in <code>settings.py</code>, <a href="https://docs.djangoproject.com/en/1.10/ref/settings/#std:setting-DATABASES" rel="nofollow">as indicated in the docs</a>.)</p>
<p>EDIT</p>
<p>Here is a code example of how to do it with your existing code the way you suggested (I still think there are better ways of doing this):</p>
<p><code>models.py</code>:</p>
<pre><code>database = peewee.SqliteDatabase(None)  # deferred: the real path is supplied later

def set_database_name(name):
    database.init(name)
</code></pre>
<p><code>download.py</code>:</p>
<pre><code>models.set_database_name(args.db_path)
</code></pre>
| 0 | 2016-08-24T19:06:13Z | [
"python",
"python-3.x",
"argparse",
"peewee"
] |
Using command line arguments in imported module | 39,131,122 | <p>I have script <code>download.py</code> with:</p>
<pre><code>import argparse
import models
parser = argparse.ArgumentParser()
parser.add_argument("--db_path", required=True)
args = parser.parse_args()
</code></pre>
<p><code>models.py</code>:</p>
<pre><code>import peewee
database = peewee.SqliteDatabase("wee.db")
class Artist(peewee.Model):
    name = peewee.CharField()

    class Meta:
        database = database

class Album(peewee.Model):
    artist = peewee.ForeignKeyField(Artist)
    title = peewee.CharField()
    release_date = peewee.DateTimeField()
    publisher = peewee.CharField()
    media_type = peewee.CharField()

    class Meta:
        database = database
</code></pre>
<p>Instead of <code>wee.db</code> I'd like to use the database path from <code>args.db_path</code> in the <code>download.py</code> file. How can I do this?</p>
| 0 | 2016-08-24T18:59:59Z | 39,131,491 | <p>Here's a possible solution:</p>
<pre><code># download.py
import argparse
from models import DBManager
parser = argparse.ArgumentParser()
parser.add_argument("--db_path", required=True)
args = parser.parse_args()
DBManager(args.db_path)
</code></pre>
<hr>
<pre><code># models.py
import peewee
class Artist(peewee.Model):
    name = peewee.CharField()

class Album(peewee.Model):
    artist = peewee.ForeignKeyField(Artist)
    title = peewee.CharField()
    release_date = peewee.DateTimeField()
    publisher = peewee.CharField()
    media_type = peewee.CharField()

class DBManager(object):
    def __init__(self, db_path):
        self.database = peewee.SqliteDatabase(db_path)
        Artist._meta.database = self.database
        Album._meta.database = self.database
        self.database.connect()
        self.database.create_tables([Artist, Album])
</code></pre>
| 0 | 2016-08-24T19:23:22Z | [
"python",
"python-3.x",
"argparse",
"peewee"
] |
Combine two columns of text with NaN in pandas | 39,131,131 | <p>I want to combine two columns as below</p>
<pre><code>import numpy as np
import pandas as pd
data = pd.DataFrame({ 'a' : [np.nan, 'abc'], 'b' : ['abc', 'abc']})
data['c']=data['a']+' '+data['b']
data
     a    b        c
0  NaN  abc      NaN
1  abc  abc  abc abc
</code></pre>
<p>The problem is with NaN; I want to get</p>
<pre><code>Nan + abc = abc
</code></pre>
<p>I can do it like this:</p>
<pre><code>data = pd.DataFrame({ 'a' : [np.nan, 'abc'], 'b' : ['abc', 'abc']})
data = data.replace( np.nan, '',regex=True)
data['c']=data['a']+' '+data['b']
data
     a    b        c
0       abc      abc
1  abc  abc  abc abc
</code></pre>
<p>but it's not always convenient. Is there a way to combine them like this?</p>
<pre><code>NaN + abc = abc
</code></pre>
| 1 | 2016-08-24T19:00:16Z | 39,131,225 | <pre><code>>>> data['c']=data['a'].fillna('') + ' ' + data['b'].fillna('')
>>> data
     a    b        c
0  NaN  abc      abc
1  abc  abc  abc abc
</code></pre>
<p>However, do note that <code>data['c'][0] == ' abc'</code>. You'd have to use <code>.str.strip()</code> to strip off whitespace if needed.</p>
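<p>For comparison, the equivalent null-skipping join in plain Python (no pandas) avoids the stray leading space entirely; a quick sketch:</p>

```python
rows = [(None, 'abc'), ('abc', 'abc')]

# keep only the truthy parts of each pair, so None never contributes a separator
combined = [' '.join(v for v in pair if v) for pair in rows]
print(combined)  # ['abc', 'abc abc']
```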
| 4 | 2016-08-24T19:05:53Z | [
"python",
"pandas",
"numpy",
null
] |
Python export list of folders & full path as JSON for JSTree | 39,131,343 | <p>I am trying to create a JSON file for jstree. But I'm having trouble getting this code to output the full path of the folders and only show folders. I am new to Python and would appreciate any insight! </p>
<p>The goal is to have users select a folder and bring back the full path of that folder in JSTree. (Not in this code).</p>
<pre><code>import os
import json
def path_to_dict(path):
    d = {'text': os.path.basename(path)}
    if os.path.isdir(path):
        d['type'] = "directory"
        for root, directories, filenames in os.walk('U:\PROJECTS\MXD_to_PDF'):
            for directory in directories:
                d['path'] = os.path.join(root, directory)
        d['children'] = [path_to_dict(os.path.join(path, x))
                         for x in os.listdir(path)]
    else:
        d['type'] = "file"
        #del d["type"]
    return d

print json.dumps(path_to_dict('U:\PROJECTS\MXD_to_PDF\TEST'))

with open('U:\PROJECTS\MXD_to_PDF\TEST\JSONData.json', 'w') as f:
    json.dump(path_to_dict('U:\PROJECTS\MXD_to_PDF\TEST'), f)
</code></pre>
<p>Output:</p>
<pre><code>{
"text": "TEST"
, "type": "directory"
, "children": [{
"text": "JSONData.json"
, "type": "file"
}, {
"text": "Maps"
, "type": "directory"
, "children": [{
"text": "MAY24MODIFIED.mxd"
, "type": "file"
}, {
"text": "MAY24MODIFIED 2016-05-24 16.16.16.pdf"
, "type": "file"
}, {
"text": "testst"
, "type": "directory"
, "children": []
, "path": "U:\\PROJECTS\\MXD_to_PDF\\TEST2\\Maps\\exported"
}]
, "path": "U:\\PROJECTS\\MXD_to_PDF\\TEST2\\Maps\\exported"
}]
, "path": "U:\\PROJECTS\\MXD_to_PDF\\TEST2\\Maps\\exported"
}
</code></pre>
| 0 | 2016-08-24T19:12:46Z | 39,131,887 | <p>For me the following solution works: (you wanted only directories)</p>
<pre><code>def get_list_of_dirs(path):
    output_dictionary = {}
    list_of_dirs = [os.path.join(path, item) for item in os.listdir(path)
                    if os.path.isdir(os.path.join(path, item))]
    output_dictionary["text"] = path
    output_dictionary["type"] = "directory"
    output_dictionary["children"] = []
    for child_dir in list_of_dirs:
        output_dictionary["children"].append(get_list_of_dirs(child_dir))
    return output_dictionary

print(json.dumps(get_list_of_dirs(path)))
</code></pre>
<p>(You can insert the import, your path, and saving to file as you wish)</p>
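<p>A self-contained way to try the same idea (the function is restated here, slightly compacted) against a throwaway directory tree, so you can see the full paths land in the <code>text</code> fields:</p>

```python
import json
import os
import tempfile

def get_list_of_dirs(path):
    # same recursive walk as above; "text" holds the full path
    return {
        "text": path,
        "type": "directory",
        "children": [get_list_of_dirs(os.path.join(path, item))
                     for item in sorted(os.listdir(path))
                     if os.path.isdir(os.path.join(path, item))],
    }

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Maps", "exported"))
tree = get_list_of_dirs(root)
print(json.dumps(tree, indent=2))
```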
| 0 | 2016-08-24T19:47:04Z | [
"python",
"json",
"jstree"
] |
How to receive uploaded file with Klein like Flask in python | 39,131,368 | <p>When setting up a Flask server, we can try to receive the file user uploaded by </p>
<pre><code>imagefile = flask.request.files['imagefile']
filename_ = str(datetime.datetime.now()).replace(' ', '_') + \
werkzeug.secure_filename(imagefile.filename)
filename = os.path.join(UPLOAD_FOLDER, filename_)
imagefile.save(filename)
logging.info('Saving to %s.', filename)
image = exifutil.open_oriented_im(filename)
</code></pre>
<p>Looking at the <code>Klein</code> documentation, I've seen <code>http://klein.readthedocs.io/en/latest/examples/staticfiles.html</code>; however, that seems to be about serving files from the web service rather than receiving a file uploaded to it. If I want my <code>Klein</code> server to be able to receive an <code>abc.jpg</code> and save it to the file system, is there any documentation that can guide me towards that objective?</p>
| 8 | 2016-08-24T19:14:26Z | 39,438,647 | <p>As <code>Liam Kelly</code> commented, the snippets from <a href="http://www.cristinagreen.com/uploading-files-using-twisted-web.html" rel="nofollow">this post</a> should work. Using <code>cgi.FieldStorage</code> makes it possible to easily send file metadata without explicitly sending it. A Klein/Twisted approach would look something like this:</p>
<pre><code>from cgi import FieldStorage
from klein import Klein
from werkzeug import secure_filename

app = Klein()

@app.route('/')
def formpage(request):
    return '''
    <form action="/images" enctype="multipart/form-data" method="post">
        <p>
            Please specify a file, or a set of files:<br>
            <input type="file" name="datafile" size="40">
        </p>
        <div>
            <input type="submit" value="Send">
        </div>
    </form>
    '''

@app.route('/images', methods=['POST'])
def processImages(request):
    method = request.method.decode('utf-8').upper()
    content_type = request.getHeader('content-type')
    img = FieldStorage(
        fp = request.content,
        headers = request.getAllHeaders(),
        environ = {'REQUEST_METHOD': method, 'CONTENT_TYPE': content_type})
    name = secure_filename(img[b'datafile'].filename)
    with open(name, 'wb') as fileOutput:
        # fileOutput.write(img['datafile'].value)
        fileOutput.write(request.args[b'datafile'][0])

app.run('localhost', 8000)
</code></pre>
<p>For whatever reason, my Python 3.4 (Ubuntu 14.04) version of <code>cgi.FieldStorage</code> doesn't return the correct results. I tested this on Python 2.7.11 and it works fine. With that being said, you could also collect the filename and other metadata on the frontend and send them in an ajax call to klein. This way you won't have to do too much processing on the backend (which is usually a good thing). Alternatively, you could figure out how to use the utilities provided by werkzeug. The functions <code>werkzeug.secure_filename</code> and <code>request.files</code> (ie. <code>FileStorage</code>) aren't particularly difficult to implement or recreate.</p>
| 2 | 2016-09-11T17:13:56Z | [
"python",
"web-services",
"klein-mvc"
] |
Machine Learning: Basics DepreciationWarning | 39,131,424 | <p>I'm running a basic machine learning tutorial code snippet (which compiles properly on the computer of the person teaching), and I can't seem to find what's wrong. I understand the question has been 'answered', but I can't seem to understand the answer.</p>
<blockquote>
<p>DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
DeprecationWarning)
[0]</p>
</blockquote>
<p>Apparently I just use X.reshape(-1, 1) or X.reshape(1, -1), but I'm not sure exactly how either works in a general situation, or whether they should be applied before or after I plot or fit the data.</p>
<p>Here's my source code. Any help is much appreciated :-)</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
style.use("ggplot")
from sklearn import svm
x = [1, 5, 1.5, 8, 1, 9]
y = [2, 8, 1.8, 8, 0.6, 11]
plt.scatter(x,y)
plt.show()
X = np.array([[1,2],
[5,8],
[1.5,1.8],
[8,8],
[1,0.6],
[9,11]])
y = [0,1,0,1,0,1]
clf = svm.SVC(kernel='linear', C = 1.0)
clf.fit(X,y)
print(clf.predict([0.58,0.76]))
</code></pre>
| 0 | 2016-08-24T19:18:23Z | 39,131,611 | <p>Since your data has more than a single feature and it contains more than a single sample you are fine. This is just a warning and shouldn't interfere with the algorithm's behavior.</p>
| 0 | 2016-08-24T19:30:09Z | [
"python",
"numpy",
"matplotlib",
"scikit-learn",
"svm"
] |
Python : IndexError: list index out of shape | 39,131,559 | <p>This piece of code works well when I used my earlier set of data. However Now I have added a couple more columns to the input data set and I am getting indexing errors. I am not exactly sure on how to fix this. Any inputs ? . The following code is giving me error:</p>
<pre><code>with open("master_aggregated_monthly.csv", 'rb') as csvfile:
    data_reader = csv.reader(csvfile, delimiter = ',', quotechar = '"')
    rolling_queue = []
    for row in data_reader:
        if row[0].lower() == "salesforceid":
            continue
        # most recent rows go in first
        rolling_queue.insert(0,row)
        if len(rolling_queue) > 12:
            rolling_queue.pop()
        id = row[0]
        date = row[2]
        if id in company_dictionary:
            account = company_dictionary[id]
            # See if we are at the final event
            # get the final event from the list
            # most recent date for data collection
            if date == "6/1/2016":
                new_event = event()
                new_event.date = "7/1/2016"
                new_event.type = "Current Period"
                account.events.append(new_event)
            my_event = account.events[0]
            for entry in rolling_queue:
                # don't do anything if the entry in the rolling queue is not from the same account
                if entry[0] != id:
                    continue
                nonzero_row = False
                # loop through the entries in the rolling queue to find at most the last 12 months of data
                for i in range(3, len(entry)):
                    try:
                        my_event.attributes[i-3] += float(entry[i])
                    except:
                        # handle all of the odd string things
                        temp = entry[i].split(' ')
                        if temp[0] == '':
                            temp = float(temp[1])
                        else:
                            temp = float(temp[0])
                        my_event.attributes[i-3] += temp
                    # discover if this row is nonzero
                    if entry[i] != 0 and entry[i] != '0':
                        nonzero_row = True
                # don't include this row in the average if it's a zero row
                if nonzero_row:
                    #print "Month increment"
                    my_event.months += 1
</code></pre>
<p>This code is giving me the following error</p>
<pre><code><type 'exceptions.IndexError'>
Traceback (most recent call last):
  File "P:/Testing/The Scripts, JIC Test/Q4_12month.py", line 116, in main
    my_event.attributes[i-3] += temp
IndexError: list index out of range
</code></pre>
| -3 | 2016-08-24T19:27:19Z | 39,131,709 | <p>You're creating an index based off of the length of <code>entry</code>, but using that index in <code>my_event.attributes</code>:</p>
<pre><code>for i in range(3, len(entry)):
    ...
    except:
        ...
        my_event.attributes[i-3] += temp
        ...
    ...
</code></pre>
<p>This is probably the problem, but it's kind of tough to tell what your code is doing. Generally, you'll only want to use an index on whatever object the index was created with / whatever object the index is supposed to refer to (in this case <code>entry</code>).</p>
| 0 | 2016-08-24T19:36:02Z | [
"python",
"python-2.7",
"indexing"
] |
using a DataFrame with columns as named arguments to str.format() | 39,131,620 | <p>I have a DataFrame like:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'author':["Melville","Hemingway","Faulkner"],
'title':["Moby Dick","The Sun Also Rises","The Sound and the Fury"],
'subject':["whaling","bullfighting","a messed-up family"]
})
</code></pre>
<p>I know that I can do this:</p>
<pre><code># produces desired output
("Some guy " + df['author'] + " wrote a book called " +
df['title'] + " that uses " + df['subject'] +
" as a metaphor for the human condition.")
</code></pre>
<p>but is it possible to write this more clearly using <code>str.format()</code>, something along the lines of:</p>
<pre><code># returns KeyError:'author'
["Some guy {author} wrote a book called {title} that uses "
"{subject} as a metaphor for the human condition.".format(x)
for x in df.itertuples(index=False)]
</code></pre>
| 1 | 2016-08-24T19:30:35Z | 39,131,761 | <pre><code>>>> ["Some guy {author} wrote a book called {title} that uses "
"{subject} as a metaphor for the human condition.".format(**x._asdict())
for x in df.itertuples(index=False)]
['Some guy Melville wrote a book called Moby Dick that uses whaling as a metaphor for the human condition.', 'Some guy Hemingway wrote a book called The Sun Also Rises that uses bullfighting as a metaphor for the human condition.', 'Some guy Faulkner wrote a book called The Sound and the Fury that uses a messed-up family as a metaphor for the human condition.']
</code></pre>
<p>Note that <code>_asdict()</code> is not meant to be part of the public api, so relying on it may break in future updates to pandas.</p>
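<p>The <code>**x._asdict()</code> unpacking is not pandas-specific: <code>itertuples</code> yields plain namedtuples, so the same pattern can be sketched with nothing but the standard library:</p>

```python
from collections import namedtuple

Book = namedtuple('Book', ['author', 'title', 'subject'])
row = Book('Melville', 'Moby Dick', 'whaling')

# _asdict() turns the namedtuple into a mapping usable as keyword arguments
text = ("Some guy {author} wrote a book called {title} that uses "
        "{subject} as a metaphor for the human condition.".format(**row._asdict()))
print(text)
```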
<p>You could do this instead:</p>
<pre><code>>>> ["Some guy {} wrote a book called {} that uses "
"{} as a metaphor for the human condition.".format(*x)
for x in df.values]
</code></pre>
| 3 | 2016-08-24T19:39:05Z | [
"python",
"pandas",
"string-formatting"
] |
using a DataFrame with columns as named arguments to str.format() | 39,131,620 | <p>I have a DataFrame like:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'author':["Melville","Hemingway","Faulkner"],
'title':["Moby Dick","The Sun Also Rises","The Sound and the Fury"],
'subject':["whaling","bullfighting","a messed-up family"]
})
</code></pre>
<p>I know that I can do this:</p>
<pre><code># produces desired output
("Some guy " + df['author'] + " wrote a book called " +
df['title'] + " that uses " + df['subject'] +
" as a metaphor for the human condition.")
</code></pre>
<p>but is it possible to write this more clearly using <code>str.format()</code>, something along the lines of:</p>
<pre><code># returns KeyError:'author'
["Some guy {author} wrote a book called {title} that uses "
"{subject} as a metaphor for the human condition.".format(x)
for x in df.itertuples(index=False)]
</code></pre>
| 1 | 2016-08-24T19:30:35Z | 39,132,305 | <p>You could also use <code>DataFrame.iterrows()</code> like this:</p>
<pre><code>["The book {title} by {author} uses "
"{subject} as a metaphor for the human condition.".format(**x)
for i, x in df.iterrows()]
</code></pre>
<p>Which is nice if you want to:</p>
<ul>
<li>use named arguments, so the order of use didn't have to match the order of columns (like above)</li>
<li>not use an internal function like <code>_asdict()</code></li>
</ul>
<p><strong>Timing</strong>: the fastest appears to be M. Klugerford's second solution, even if we note the warning about caching and take the slowest run.</p>
<pre><code># example
%%timeit
("Some guy " + df['author'] + " wrote a book called " +
df['title'] + " that uses " + df['subject'] +
" as a metaphor for the human condition.")
# 1000 loops, best of 3: 883 µs per loop
%%timeit
["Some guy {author} wrote a book called {title} that uses "
"{subject} as a metaphor for the human condition.".format(**x._asdict())
for x in df.itertuples(index=False)]
#1000 loops, best of 3: 962 µs per loop
%%timeit
["Some guy {} wrote a book called {} that uses "
"{} as a metaphor for the human condition.".format(*x)
for x in df.values]
#The slowest run took 5.90 times longer than the fastest. This could mean that an intermediate result is being cached.
#10000 loops, best of 3: 18.9 µs per loop
%%timeit
from collections import OrderedDict
["The book {title} by {author} uses "
"{subject} as a metaphor for the human condition.".format(**x)
for x in [OrderedDict(row) for i, row in df.iterrows()]]
#1000 loops, best of 3: 308 µs per loop
%%timeit
["The book {title} by {author} uses "
"{subject} as a metaphor for the human condition.".format(**x)
for i, x in df.iterrows()]
#1000 loops, best of 3: 413 µs per loop
</code></pre>
<p>Why the next-to-last is faster than the last is beyond me.</p>
| 0 | 2016-08-24T20:13:45Z | [
"python",
"pandas",
"string-formatting"
] |
Python Decryption using private key | 39,131,630 | <p>I have an encrypted string. The Encryption is done using java code. I decrypt the encrypted string using following java code</p>
<pre><code>InputStream fileInputStream = getClass().getResourceAsStream(
        "/private.txt");
byte[] bytes = IOUtils.toByteArray(fileInputStream);

private String decrypt(String inputString, byte[] keyBytes) {
    String resultStr = null;
    PrivateKey privateKey = null;
    try {
        KeyFactory keyFactory = KeyFactory.getInstance("RSA");
        EncodedKeySpec privateKeySpec = new PKCS8EncodedKeySpec(keyBytes);
        privateKey = keyFactory.generatePrivate(privateKeySpec);
    } catch (Exception e) {
        System.out.println("Exception privateKey::::::::::::::::: "
                + e.getMessage());
        e.printStackTrace();
    }
    byte[] decodedBytes = null;
    try {
        Cipher c = Cipher.getInstance("RSA/ECB/NoPadding");
        c.init(Cipher.DECRYPT_MODE, privateKey);
        decodedBytes = c.doFinal(Base64.decodeBase64(inputString));
    } catch (Exception e) {
        System.out
                .println("Exception while using the cypher::::::::::::::::: "
                        + e.getMessage());
        e.printStackTrace();
    }
    if (decodedBytes != null) {
        resultStr = new String(decodedBytes);
        resultStr = resultStr.split("MNSadm")[0];
        // System.out.println("resultStr:::" + resultStr + ":::::");
        // resultStr = resultStr.replace(salt, "");
    }
    return resultStr;
}
</code></pre>
<p>Now I have to use Python to decrypt the encrypted string. I have the private key. When I use the Cryptography package with the following code:</p>
<pre><code>key = load_pem_private_key(keydata, password=None, backend=default_backend())
</code></pre>
<p>It throws <code>ValueError: Could not unserialize key data.</code></p>
<p>Can anyone help what I am missing here?</p>
| -2 | 2016-08-24T19:31:13Z | 39,150,602 | <p>I figured out the solution:</p>
<pre><code>from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5
from Crypto.Hash import SHA
from base64 import b64decode
rsa_key = RSA.importKey(open('private.txt', "rb").read())
verifier = PKCS1_v1_5.new(rsa_key)
raw_cipher_data = b64decode(<your cipher data>)
phn = rsa_key.decrypt(raw_cipher_data)
</code></pre>
<p>This is the most basic form of the code. What I learned is that first you have to get the RSA key (the private key). For me, <code>RSA.importKey</code> took care of everything. Really simple.</p>
| 0 | 2016-08-25T16:39:57Z | [
"python",
"cryptography"
] |
Re ordering output from bash commands in python script | 39,131,719 | <p>I am working on a python script that deals with wifi access points. In the script I call os.system("iwlist wlan0 s") to scan the wireless APs nearby. There are only two pieces of info I need from each one: the name and the MAC address. I can grep the output to get a list of the names or the addresses, but is there a way to sort of line them up? For instance right now i can get:</p>
<pre><code>ESSID: point 1
ESSID: point 2
etc,
</code></pre>
<p>and I can get</p>
<pre><code>Address: gh:45:df:etc
Address: ofweiofjw
</code></pre>
<p>but is it possible to get the following?</p>
<pre><code>name - address
name - address
</code></pre>
<p>I was thinking either in the bash commands themselves or by taking the output in the python script and editing it somehow.</p>
 | 0 | 2016-08-24T19:37:00Z | 39,132,358 | <pre><code>import commands  # Python 2 only; on Python 3 use subprocess instead

# keep only the ESSID/Address lines, squeeze spaces, drop the "Cell NN - " prefix
a = commands.getstatusoutput("iwlist wlp9s0 s | grep 'ESSID\|Address'|tr -s ' ' | cut -d'-' -f 2")
for item in a[1].strip().split("Address:"):
    print('-'.join(item.replace('\n','').strip().split("ESSID:")))
</code></pre>
<p>This returns:</p>
<p>AA:11:DD:44:32:D7 -"Kolagh_nomap"</p>
<p>BB:22:EE:55:EE:21 -"BaBaei"</p>
<p>CC:33:FF:66:13:2C -"ali"</p>
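<p>On Python 3 (where the <code>commands</code> module is gone) you could collect the output with <code>subprocess.check_output(['iwlist', 'wlan0', 'scan'])</code> and pair the two fields in one pass with a single regex; a sketch on canned (made-up) scan text:</p>

```python
import re

# abridged, made-up iwlist output; real output has more lines per cell,
# which the non-greedy [\s\S]*? skips over
scan = '''
Cell 01 - Address: AA:11:DD:44:32:D7
          ESSID:"Kolagh_nomap"
Cell 02 - Address: BB:22:EE:55:EE:21
          ESSID:"BaBaei"
'''

pairs = re.findall(r'Address: (\S+)[\s\S]*?ESSID:"([^"]*)"', scan)
for mac, essid in pairs:
    print(essid, '-', mac)
# Kolagh_nomap - AA:11:DD:44:32:D7
# BaBaei - BB:22:EE:55:EE:21
```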
| 0 | 2016-08-24T20:17:29Z | [
"python",
"linux",
"bash"
] |
Removing six.b from multiple files | 39,131,755 | <p>I have dozens of files in the project and I want to change all occurences of <code>six.b("...")</code> to <code>b"..."</code>. Can I do that with some sort of regex bash script?</p>
 | 0 | 2016-08-24T19:38:49Z | 39,131,881 | <p>It's possible entirely in Python, but I would first make a backup of my project tree, and then:</p>
<pre><code>import re
import os

indir = 'files'

for root, dirs, files in os.walk(indir):
    for f in files:
        fname = os.path.join(root, f)
        with open(fname) as fh:
            txt = fh.read()
        # capture just the quoted string so six.b("...") becomes b"..."
        txt = re.sub(r'six\.b\(("[^"]*")\)', r'b\1', txt)
        with open(fname, 'w') as fh:
            fh.write(txt)
        print(fname)
</code></pre>
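<p>To sanity-check such a substitution before running it over the whole tree, try it on one line first; this variant captures just the quoted string, so the parentheses are dropped and the result is a real bytes literal:</p>

```python
import re

sample = 'data = six.b("hello") + six.b("world")'
result = re.sub(r'six\.b\(("[^"]*")\)', r'b\1', sample)
print(result)  # data = b"hello" + b"world"
```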
| 1 | 2016-08-24T19:46:40Z | [
"python",
"bash"
] |
Removing six.b from multiple files | 39,131,755 | <p>I have dozens of files in the project and I want to change all occurences of <code>six.b("...")</code> to <code>b"..."</code>. Can I do that with some sort of regex bash script?</p>
| 0 | 2016-08-24T19:38:49Z | 39,133,891 | <p>A relatively simple bash solution (change *.foo to *.py or whatever filename pattern suits your situation):</p>
<pre><code>#!/bin/bash
export FILES=`find . -type f -name '*.foo' -exec egrep -l 'six\.b\("[^\"]*"\)' {} \; 2>/dev/null`
for file in $FILES
do
    cp $file $file.bak
    sed 's/six\.b(\(\"[^\"]*[^\\]\"\))/b\1/' $file.bak > $file
    echo $file
done
</code></pre>
<p>Notes:</p>
<ol>
<li><p>It will only consider/modify files that match the pattern</p></li>
<li><p>It will make a '.bak' copy of each file it modifies</p></li>
<li><p>It won't handle embedded <code>\")</code>, e.g. <code>six.b("asdf\")")</code>, but I don't know that there is a trivial solution to that problem, without knowing more about the files you're manipulating. Is the end of <code>six.b("")</code> guaranteed to be the last <code>")</code> on the line? etc.</p></li>
</ol>
| 1 | 2016-08-24T22:21:53Z | [
"python",
"bash"
] |
Getting a 403 Client Error using tinys3 python package | 39,131,766 | <p>I'm getting a 403 Client Error when making an S3 connection on one of our production servers using the tinys3 python package. Any ideas? I think the credentials are right, as this script runs on my local machine without issue. </p>
<p>I'm getting the same issue on a test script I wrote to help debug this. Pasted below:</p>
<pre><code>import tinys3 as s3
S3_ACCESS_KEY = "[redacted]"
S3_SECRET_KEY = "[redacted]"
bucket = "test-bucket"
s3_image_prefix = "http://s3.amazonaws.com/" + bucket + "/"
conn = s3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY, default_bucket=bucket)
conn.get('test_file.gif', bucket)
</code></pre>
<p>And the error:</p>
<pre><code>requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://s3.amazonaws.com/test-bucket/test_file.gif
</code></pre>
| 1 | 2016-08-24T19:39:16Z | 39,131,947 | <p>If the machine's system clock is more than 15 minutes behind, you can get this error. The time is put into the request and checked by Amazon. Check the system time with the <code>date</code> command.</p>
<p>This has been discussed by some other questions:</p>
<ul>
<li><a href="http://stackoverflow.com/questions/24433198/amazon-s3-403-accessdenied-error">Amazon S3 403 AccessDenied error</a></li>
<li><a href="http://stackoverflow.com/questions/26594944/getting-boto-exception-s3responseerror-s3responseerror-403-forbidden-when-uplo/27984752#27984752">Getting boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden when uploading file</a></li>
</ul>
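<p>One way to measure the drift programmatically is to compare your local clock against the <code>Date</code> header that comes back on any S3 response; a minimal sketch (both header values below are made up):</p>

```python
from datetime import timedelta
from email.utils import parsedate_to_datetime

def clock_skew(local_header, server_header):
    # HTTP Date headers are RFC 1123 strings; parse both and diff them
    return abs(parsedate_to_datetime(local_header) -
               parsedate_to_datetime(server_header))

skew = clock_skew("Wed, 24 Aug 2016 19:59:00 GMT",
                  "Wed, 24 Aug 2016 19:39:00 GMT")
print(skew > timedelta(minutes=15))  # True -> S3 would reject with 403
```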
| 1 | 2016-08-24T19:51:48Z | [
"python",
"amazon-s3"
] |
Python: Elegant way to combine output variables of a function run many times | 39,131,811 | <p>I have a function that returns many output arrays of varying size.</p>
<pre><code>arr1,arr2,arr3,arr4,arr5, ... = func(data)
</code></pre>
<p>I want to run this function many times over a time series of data, and combine each output variable into one array that covers the whole time series.</p>
<p>To elaborate: If the output arr1 has dimensions (x,y) when the function is called, I want to run the function 't' times and end up with an array that has dimensions (x,y,t). A list of 't' arrays with size (x,y) would also be acceptable, but not preferred.</p>
<p>Again, the output arrays do not all have the same dimensions, or even the same number of dimensions. Arr2 might have size (x2,y2), arr3 might be only a vector of length (x3). I do not know the size of all of these arrays before hand.</p>
<p>My current solution is something like this:</p>
<pre><code>arr1 = []
arr2 = []
arr3 = []
...
for t in range(t_max):
    arr1_t, arr2_t, arr3_t, ... = func(data[t])
    arr1.append(arr1_t)
    arr2.append(arr2_t)
    arr3.append(arr3_t)
    ...
</code></pre>
<p>and so on. However this is inelegant looking when repeated 27 times for each output array.</p>
<p>Is there a better way to do this?</p>
| 0 | 2016-08-24T19:42:08Z | 39,131,951 | <p>You can just make <code>arr1</code>, <code>arr2</code>, etc. a list of lists (of vectors or matrices or whatever). Then use a loop to iterate the results obtained from <code>func</code> and add them to the individual lists.</p>
<pre><code>arrN = [[] for _ in range(N)] # N being number of results from func
for t in range(t_max):
    results = func(data[t])
    for i, res in enumerate(results):
        arrN[i].append(res)
</code></pre>
<p>The elements in the different sub-lists do not have to have the same dimensions.</p>
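<p>If you do eventually want the preferred <code>(x, y, t)</code> arrays rather than lists, any sub-list whose elements all share a shape can be stacked after the loop. A sketch with NumPy (<code>func</code> here is a made-up stand-in, not the real function):</p>

```python
import numpy as np

def func(t):
    # hypothetical stand-in returning two outputs of different shapes
    return np.full((2, 3), t), np.arange(4) * t

N = 2
arrN = [[] for _ in range(N)]
for t in range(5):
    for i, res in enumerate(func(t)):
        arrN[i].append(res)

stacked = np.stack(arrN[0])                # new leading t-axis: shape (5, 2, 3)
stacked_xyt = np.moveaxis(stacked, 0, -1)  # t-axis last: shape (2, 3, 5)
print(stacked.shape, stacked_xyt.shape)    # (5, 2, 3) (2, 3, 5)
```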
| 2 | 2016-08-24T19:52:03Z | [
"python",
"arrays",
"numpy"
] |
Python: Elegant way to combine output variables of a function run many times | 39,131,811 | <p>I have a function that returns many output arrays of varying size.</p>
<pre><code>arr1,arr2,arr3,arr4,arr5, ... = func(data)
</code></pre>
<p>I want to run this function many times over a time series of data, and combine each output variable into one array that covers the whole time series.</p>
<p>To elaborate: If the output arr1 has dimensions (x,y) when the function is called, I want to run the function 't' times and end up with an array that has dimensions (x,y,t). A list of 't' arrays with size (x,y) would also be acceptable, but not preferred.</p>
<p>Again, the output arrays do not all have the same dimensions, or even the same number of dimensions. Arr2 might have size (x2,y2), arr3 might be only a vector of length (x3). I do not know the size of all of these arrays before hand.</p>
<p>My current solution is something like this:</p>
<pre><code>arr1 = []
arr2 = []
arr3 = []
...
for t in range(t_max):
    arr1_t, arr2_t, arr3_t, ... = func(data[t])
    arr1.append(arr1_t)
    arr2.append(arr2_t)
    arr3.append(arr3_t)
    ...
</code></pre>
<p>and so on. However this is inelegant looking when repeated 27 times for each output array.</p>
<p>Is there a better way to do this?</p>
| 0 | 2016-08-24T19:42:08Z | 39,132,260 | <p>Not sure if it counts as "elegant", but you can build a <code>list</code> of the result <code>tuple</code>s then use <code>zip</code> to group them into <code>tuple</code>s by return position instead of by call number, then optionally <code>map</code> to convert those <code>tuple</code>s to the final data type. For example, with <code>numpy</code> <code>array</code>:</p>
<pre><code>from future_builtins import map, zip # Only on Python 2, to minimize temporaries
import numpy as np
def func(x):
    'Dumb function to return tuple of powers of x from 1 to 27'
    return tuple(x ** i for i in range(1, 28))
# Example inputs for func
data = [np.array([[x]*10]*10, dtype=np.uint8) for x in range(10)]
# Output is generator of results for each call to func
outputs = map(func, data)
# Pass each complete result of func as a positional argument to zip via star
# unpacking to regroup, so the first return from each func call is the first
# group, then the second return the second group, etc.
positional_groups = zip(*outputs)
# Convert regrouped data (`tuple`s of 2D results) to numpy 3D result type, unpack to names
arr1,arr2,arr3,arr4,arr5, ...,arr27 = map(np.array, positional_groups)
</code></pre>
<p>If the elements returned from func at a given position might have inconsistent dimensions (e.g. one call might return 10x10 as the first return, and another 5x5), you'd avoid the final <code>map</code> step (since the <code>array</code>s wouldn't have consistent dimensions) and just replace the second-to-last step with:</p>
<pre><code>arr1,arr2,arr3,arr4,arr5, ...,arr27 = zip(*outputs)
</code></pre>
<p>making <code>arr#</code> a <code>tuple</code> of 2D <code>array</code>s, or if they need to be mutable:</p>
<pre><code>arr1,arr2,arr3,arr4,arr5, ...,arr27 = map(list, zip(*outputs))
</code></pre>
<p>to make them <code>list</code>s of 2D <code>array</code>s.</p>
| 0 | 2016-08-24T20:10:36Z | [
"python",
"arrays",
"numpy"
] |
Python: Elegant way to combine output variables of a function run many times | 39,131,811 | <p>I have a function that returns many output arrays of varying size.</p>
<pre><code>arr1,arr2,arr3,arr4,arr5, ... = func(data)
</code></pre>
<p>I want to run this function many times over a time series of data, and combine each output variable into one array that covers the whole time series.</p>
<p>To elaborate: If the output arr1 has dimensions (x,y) when the function is called, I want to run the function 't' times and end up with an array that has dimensions (x,y,t). A list of 't' arrays with size (x,y) would also be acceptable, but not preferred.</p>
<p>Again, the output arrays do not all have the same dimensions, or even the same number of dimensions. Arr2 might have size (x2,y2), arr3 might be only a vector of length (x3). I do not know the size of all of these arrays before hand.</p>
<p>My current solution is something like this:</p>
<pre><code>arr1 = []
arr2 = []
arr3 = []
...
for t in range(t_max):
    arr1_t, arr2_t, arr3_t, ... = func(data[t])
    arr1.append(arr1_t)
    arr2.append(arr2_t)
    arr3.append(arr3_t)
    ...
</code></pre>
<p>and so on. However this is inelegant looking when repeated 27 times for each output array.</p>
<p>Is there a better way to do this?</p>
| 0 | 2016-08-24T19:42:08Z | 39,132,781 | <p>This answer gives a solution using <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html" rel="nofollow">structured arrays</a>. It has the following requirement: given a function <code>f</code> that returns <code>N</code> arrays, where the sizes of the returned arrays may differ from one another -- <em>for all results of <code>f</code>, <code>len(array_i)</code> must always be the same</em>. e.g.</p>
<pre><code>arrs_a = f("a")
arrs_b = f("b")
for sub_arr_a, sub_arr_b in zip(arrs_a, arrs_b):
    assert len(sub_arr_a) == len(sub_arr_b)
</code></pre>
<p>If the above is true, then you can use structured arrays. A structured array is like a normal array, just with a complex data type. For instance, I could specify a data type that is made up of one array of ints of shape <code>5</code>, and a second array of floats of shape <code>(2, 2)</code>. eg.</p>
<pre><code># define what a record looks like
dtype = [
# tuples of (field_name, data_type)
("a", "5i4"), # array of five 4-byte ints
("b", "(2,2)f8"), # 2x2 array of 8-byte floats
]
</code></pre>
<p>Using <code>dtype</code> you can create a structured array, and set all the results on the structured array in one go.</p>
<pre><code>import numpy as np
def func(n):
    "mock implementation of func"
    return (
        np.ones(5) * n,
        np.ones((2, 2)) * n,
    )
# define what a record looks like
dtype = [
# tuples of (field_name, data_type)
("a", "5i4"), # array of five 4-byte ints
("b", "(2,2)f8"), # 2x2 array of 8-byte floats
]
size = 5
# create array
arr = np.empty(size, dtype=dtype)
# fill in values
for i in range(size):
    # func must return a tuple
    # or you must convert the returned value to a tuple
    arr[i] = func(i)
# alternate way of instantiating arr
arr = np.fromiter((func(i) for i in range(size)), dtype=dtype, count=size)
# How to use structured arrays
# access individual record
print(arr[1]) # prints ([1, 1, 1, 1, 1], [[1, 1], [1, 1]])
# access specific value -- get record at index 2 -> get b field -> get value at 0,0
assert arr[2]['b'][0,0] == 2
# access all values of a specific field
print(arr['a']) # prints all the a arrays
</code></pre>
| 0 | 2016-08-24T20:48:19Z | [
"python",
"arrays",
"numpy"
] |
How to check if a column exists in a pandas MultiIndex | 39,131,900 | <p>Let's say I have a DataFrame with a MultiIndex of columns like this:</p>
<pre><code>In [29]: df = pd.DataFrame([[0] * 8], columns = pd.MultiIndex.from_product(
[['a', 'b'], [1, 2], [2000, 2001]])
)
In [30]: df
Out[30]:
a b
1 2 1 2
2000 2001 2000 2001 2000 2001 2000 2001
0 0 0 0 0 0 0 0 0
In [46]: df.columns.levels
Out[46]: FrozenList([[u'a', u'b'], [1, 2], [2000, 2001]])
</code></pre>
<p>I need to know, for all values of level 0 and some specific value of level 1, what are all the existing unique values of level 2 (say the DataFrame goes through some process in which for some values of level 1 and level 0, level 2 is dropped). The best I've been able to come up with so far is this:</p>
<pre><code>In [54]: level_1_val = 2
In [55]: cols_series = df.columns.to_series()
In [56]: cols_series[
....: cols_series.index.get_level_values(1) == level_1_val
....: ].index.get_level_values(2).unique()
array([2000, 2001])
</code></pre>
<p>What's a better way to do this?</p>
| 2 | 2016-08-24T19:48:07Z | 39,131,965 | <p>IIUC</p>
<pre><code>df.xs(2, axis=1, level=1).groupby(axis=1, level=1).first().columns.values
array([2000, 2001])
</code></pre>
<p>Or</p>
<pre><code>df.xs(2, axis=1, level=1).columns.get_level_values(level=1).unique()
</code></pre>
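<p>An equivalent route that stays on the columns <code>MultiIndex</code> itself is plain boolean selection (a sketch rebuilt from the example frame in the question):</p>

```python
import pandas as pd

df = pd.DataFrame([[0] * 8], columns=pd.MultiIndex.from_product(
    [['a', 'b'], [1, 2], [2000, 2001]]))

cols = df.columns
mask = cols.get_level_values(1) == 2        # keep columns whose level-1 value is 2
years = cols[mask].get_level_values(2).unique()
print(list(years))  # [2000, 2001]
```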
| 5 | 2016-08-24T19:52:50Z | [
"python",
"pandas",
"multi-index"
] |
Using a for loop to populate a dictionary | 39,131,926 | <p>I'm having some trouble generating the dictionary I've named "appInfo" using the code below. When it is run like this, only the last application number entered gets saved in the appInfo dictionary. It seems like it should be pretty easy, but I haven't been able to find a fix. I'm using Python 3.5.2.</p>
<pre><code>appDict={'AA':{'appType':'app name one','fileLoc':'C:\\app1.docx'},
'BB':{'appType':'app name two','fileLoc':'C:\\app2.docx'},
'CC':{'appType':'app name three','baseDoc':'C:\\app3.docx'},
'DD':{'appType':'app name four','baseDoc':'C:\\app4.docx'},
'EE':{'appType':'app name five','baseDoc':'C:\\app5.docx'},
'FF':{'appType':'app name six','baseDoc':'C:\\app6.docx'}}
appInfo=dict()
appNumList=[]
while True:
    print('Enter an application number (XX-00-00). Press Enter to stop:')
    appNum=str(input())
    if appNum=='':
        break
    appNumList=appNumList+[appNum]
    appShow='/'.join(appNumList)
    appNumLength=len(appNumList)
    appNumSep=re.compile(r'[A-Z]+')
    mo=appNumSep.findall(appNum)

for num in appDict.keys():
    if num in mo:
        appInfo[num]=appDict[num]
print(appInfo)
</code></pre>
| 1 | 2016-08-24T19:49:31Z | 39,132,167 | <p>Your array <strong>mo</strong> gets overwritten through each iteration of the while loop. When you loop through appDict.keys() <strong>mo</strong> only contains the most recent input. I think you meant to append to <strong>mo</strong> like this:</p>
<pre><code>import re

appDict={'AA':{'appType':'app name one','fileLoc':'C:\\app1.docx'},
'BB':{'appType':'app name two','fileLoc':'C:\\app2.docx'},
'CC':{'appType':'app name three','baseDoc':'C:\\app3.docx'},
'DD':{'appType':'app name four','baseDoc':'C:\\app4.docx'},
'EE':{'appType':'app name five','baseDoc':'C:\\app5.docx'},
'FF':{'appType':'app name six','baseDoc':'C:\\app6.docx'}}
appInfo=dict()
appNumList=[]
mo=[]
while True:
    print('Enter an application number (XX-00-00). Press Enter to stop:')
    appNum=str(input())
    if appNum=='':
        break
    appNumList=appNumList+[appNum]
    appShow='/'.join(appNumList)
    appNumLength=len(appNumList)
    appNumSep=re.compile(r'[A-Z]+')
    mo.append(''.join(appNumSep.findall(appNum)))

for num in appDict.keys():
    if num in mo:
        appInfo[num]=appDict[num]
print(appInfo)
</code></pre>
| 1 | 2016-08-24T20:05:11Z | [
"python",
"for-loop",
"dictionary"
] |
Extracting a section of a string in python with limitations | 39,132,087 | <p>I have a string output that looks like this:</p>
<pre><code>Distance AAAB: ,0.13634,0.13700,0.00080,0.00080,-0.00066,.00001,
Distance AAAC: ,0.12617,0.12680,0.00080,0.00080,-0.00063,,
Distance AAAD: ,0.17045,0.16990,0.00080,0.00080,0.00055,,
Distance AAAE: ,0.09330,0.09320,0.00080,0.00080,0.00010,,
Distance AAAF: ,0.21048,0.21100,0.00080,0.00080,-0.00052,,
Distance AAAG: ,0.02518,0.02540,0.00040,0.00040,-0.00022,,
Distance AAAH: ,0.11404,0.11450,0.00120,0.00110,-0.00046,,
Distance AAAI: ,0.10811,0.10860,0.00080,0.00070,-0.00049,,
Distance AAAJ: ,0.02430,0.02400,0.00200,0.00200,0.00030,,
Distance AAAK: ,0.09449,0.09400,0.00200,0.00100,0.00049,,
Distance AAAL: ,0.07689,0.07660,0.00050,0.00050,0.00029,
</code></pre>
<p>What I want to do is extract a specific set of data out of this block, for example only Distance AAAH like so:</p>
<pre><code>Distance AAAH: ,0.11404,0.11450,0.00120,0.00110,-0.00046,,
</code></pre>
<p>The measurements will always begin with Distance AAA*: with the star being the only character that will change.</p>
<p>Complications:
This needs to be generic, because I have a lot of different data sets and so Distance AAAH might not always be followed by Distance AAAI or preceded by Distance AAAG, since the measurements for different items vary. I also can't rely on <code>len()</code>, because the last measurement can sometimes be blank (as it is with Distance AAAH) or can be filled (as with Distance AAAB). And I don't think I can use <code>.find()</code>, because I need all of the numbers following Distance AAAH.</p>
<p>I am still very new and I tried my best to find a solution similar to this problem, but have not had much luck.</p>
| 0 | 2016-08-24T19:59:42Z | 39,132,269 | <p>You could use <code>re</code> module. And making a function should be convenient.</p>
<pre><code>import re
def SearchDistance(pattern, text):
    pattern = pattern.replace(' ', r'\s')
    print(re.findall(r'{0}.+'.format(pattern), text))
SearchDistance('Distance AAAH',a)
</code></pre>
<p>Output:</p>
<pre><code>['Distance AAAH: ,0.11404,0.11450,0.00120,0.00110,-0.00046,,']
</code></pre>
| 1 | 2016-08-24T20:11:19Z | [
"python",
"python-3.x"
] |
Extracting a section of a string in python with limitations | 39,132,087 | <p>I have a string output that looks like this:</p>
<pre><code>Distance AAAB: ,0.13634,0.13700,0.00080,0.00080,-0.00066,.00001,
Distance AAAC: ,0.12617,0.12680,0.00080,0.00080,-0.00063,,
Distance AAAD: ,0.17045,0.16990,0.00080,0.00080,0.00055,,
Distance AAAE: ,0.09330,0.09320,0.00080,0.00080,0.00010,,
Distance AAAF: ,0.21048,0.21100,0.00080,0.00080,-0.00052,,
Distance AAAG: ,0.02518,0.02540,0.00040,0.00040,-0.00022,,
Distance AAAH: ,0.11404,0.11450,0.00120,0.00110,-0.00046,,
Distance AAAI: ,0.10811,0.10860,0.00080,0.00070,-0.00049,,
Distance AAAJ: ,0.02430,0.02400,0.00200,0.00200,0.00030,,
Distance AAAK: ,0.09449,0.09400,0.00200,0.00100,0.00049,,
Distance AAAL: ,0.07689,0.07660,0.00050,0.00050,0.00029,
</code></pre>
<p>What I want to do is extract a specific set of data out of this block, for example only Distance AAAH like so:</p>
<pre><code>Distance AAAH: ,0.11404,0.11450,0.00120,0.00110,-0.00046,,
</code></pre>
<p>The measurements will always begin with Distance AAA*: with the star being the only character that will change.</p>
<p>Complications:
This needs to be generic, because I have a lot of different data sets and so Distance AAAH might not always be followed by Distance AAAI or preceded by Distance AAAG, since the measurements for different items vary. I also can't rely on <code>len()</code>, because the last measurement can sometimes be blank (as it is with Distance AAAH) or can be filled (as with Distance AAAB). And I don't think I can use <code>.find()</code>, because I need all of the numbers following Distance AAAH.</p>
<p>I am still very new and I tried my best to find a solution similar to this problem, but have not had much luck.</p>
| 0 | 2016-08-24T19:59:42Z | 39,132,309 | <p>You can search your text by this script :</p>
<pre><code>#fullText = YOUR STRING
text = fullText.splitlines()
for line in text:
    if line.startswith('Distance AAAH:'):
        print(line)
</code></pre>
<p>Output:<code>Distance AAAH: ,0.11404,0.11450,0.00120,0.00110,-0.00046,,</code></p>
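<p>If you also need the numbers out of the matched line, a small follow-on sketch (the helper name is made up):</p>

```python
def parse_distance_line(line):
    # "Distance AAAH: ,0.11404,..." -> ("AAAH", [0.11404, ...])
    label, _, rest = line.partition(':')
    values = [float(tok) for tok in rest.split(',') if tok.strip()]
    return label.split()[-1], values

name, values = parse_distance_line(
    'Distance AAAH: ,0.11404,0.11450,0.00120,0.00110,-0.00046,,')
print(name, values)  # AAAH [0.11404, 0.1145, 0.0012, 0.0011, -0.00046]
```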
| 1 | 2016-08-24T20:14:07Z | [
"python",
"python-3.x"
] |
Redirect print() to file in python 2.4 | 39,132,091 | <p>I'm backporting a modern python script to 2.4 to make it compatible with stock RHEL 5.X. While most of the work has been fairly straight-forward, I can't figure out how to handle this case where I am appending to a file:</p>
<pre><code>print("Foo",file=file("/tmp/bar",'ab'))
</code></pre>
<p>This is a very common construct in the code I'm porting. I am using the print function from <strong>future</strong>, which works fine, but here it chokes on the "file=file("filename", 'ab')" part. Apparently this kind of redirection is not supported in 2.4. Likewise, I haven't found a way for the print function to support the >> operator from the old print. It would be an enormous task to re-write this script without the print function, so I'd like a solution based on the print function.</p>
<p>I've found plenty of docs showing how to use >> in the old print, or file=file() in the new print function, but nothing that actually works in 2.4.</p>
<p>What is the equivalent Python 2.4 compatible code for this?</p>
| 3 | 2016-08-24T20:00:12Z | 39,132,131 | <p>The syntax is pretty awful:</p>
<pre><code>print >> file('/tmp/bar', 'ab'), 'Foo'
</code></pre>
<p>Though of course you should rather write:</p>
<pre><code>f = open('/tmp/bar', 'ab')
try:
    print >> f, 'Foo'
finally:
    f.close()
</code></pre>
<p>to make sure that the output is actually closed and flushed (Python 2.4 doesn't have the <code>with</code> statement!).</p>
<hr>
<p>As an alternative to converting everything to <code>print</code> statement, you could also try the <a href="https://bitbucket.org/gutworth/six/src/ca4580a5a648fc75abc568907e81abc80b05d58c/six.py?at=default&fileviewer=file-view-default#six.py-738" rel="nofollow"><code>print_</code></a> function from the <a href="https://pythonhosted.org/six/" rel="nofollow">Six: Python 2 and 3 Compatibility Library</a>. I am not sure whether the whole library supports 2.4 any longer, but that one function should be OK in 2.4.</p>
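<p>If pulling in six is not an option, a minimal stand-in modeled on its <code>print_</code> could look like the following. This is a sketch (untested on a real 2.4 interpreter, but it sticks to constructs that existed then):</p>

```python
import sys

def print_(*args, **kwargs):
    # bare-bones print()-alike: supports the file=, sep= and end= keywords
    fh = kwargs.get('file', sys.stdout)
    sep = kwargs.get('sep', ' ')
    end = kwargs.get('end', '\n')
    fh.write(sep.join([str(a) for a in args]) + end)
```

<p>Used as <code>print_('Foo', file=f)</code> after opening <code>f</code> yourself, with the same try/finally close as above.</p>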
| 3 | 2016-08-24T20:03:06Z | [
"python"
] |
dataframe, set index from list | 39,132,181 | <p>Is it possible when creating a dataframe from a list, to set the index as one of the values?</p>
<pre><code>import pandas as pd
tmp = [['a', 'a1'], ['b',' b1']]
df = pd.DataFrame(tmp, columns=["First", "Second"])
First Second
0 a a1
1 b b1
</code></pre>
<p>And how I'd like it to look:</p>
<pre><code> First Second
a a a1
b b b1
</code></pre>
| 3 | 2016-08-24T20:06:10Z | 39,132,236 | <pre><code>>>> pd.DataFrame(tmp, columns=["First", "Second"]).set_index('First', drop=False)
First Second
First
a a a1
b b b1
</code></pre>
| 5 | 2016-08-24T20:09:12Z | [
"python",
"pandas"
] |
dataframe, set index from list | 39,132,181 | <p>Is it possible when creating a dataframe from a list, to set the index as one of the values?</p>
<pre><code>import pandas as pd
tmp = [['a', 'a1'], ['b',' b1']]
df = pd.DataFrame(tmp, columns=["First", "Second"])
First Second
0 a a1
1 b b1
</code></pre>
<p>And how I'd like it to look:</p>
<pre><code> First Second
a a a1
b b b1
</code></pre>
| 3 | 2016-08-24T20:06:10Z | 39,132,290 | <p>If you don't want index name:</p>
<pre><code>df = pd.DataFrame(tmp, columns=["First", "Second"], index=[i[0] for i in tmp])
</code></pre>
<p>Result:</p>
<pre><code> First Second
a a a1
b b b1
</code></pre>
| 3 | 2016-08-24T20:12:56Z | [
"python",
"pandas"
] |
django: how do I actually override admin site template | 39,132,187 | <p>I know this is asked and answered several times but I basically went over all the post on stack overflow and still couldn't get this to work. Right now I am just trying simply change the admin site title. I have the following:</p>
<pre><code>#base_site.html
{% extends "admin/base_site.html" %}
{% block title %}{{ title }} | {{ site_title|default:_('NEW TITLE') }}{% endblock %}
{% block branding %}
<h1 id="site-name"><a href="{% url 'admin:index' %}">{{ site_header|default:_('NEW TITLE') }}</a></h1>
{% endblock %}
{% block nav-global %}{% endblock %}
</code></pre>
<p>And I tried to put this in </p>
<blockquote>
<p>my_site/templates/admin/base_site.html,</p>
<p>my_site/templates/admin/my_app/base_site.html, and</p>
<p>my_site/my_app/templates/admin/base_site.html, </p>
</blockquote>
<p>but none of these work.</p>
<pre><code>settings.py:
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
'loaders': [
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
],
},
},
]
</code></pre>
<p>I also tried just directly changing django\contrib\admin\templates\admin\base_site.html but still nothing happens.</p>
<p>I am really frustrated now and definitely could use some help, thanks</p>
<p>Updates:
Actually I found out that the local template does have effect.
<a href="http://i.stack.imgur.com/PvybC.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/PvybC.jpg" alt="enter image description here"></a></p>
<p>Like here, the topmost white bar displays "#base_site.html!!@#" which is what I put in my_site/templates/admin/base_site.html as a comment by chance. So it kinda working, but I still don't understand why I can't change the site title.</p>
| 0 | 2016-08-24T20:06:32Z | 39,132,843 | <p>Use <code>my_site/my_app/templates/admin/base_site.html</code>.</p>
<blockquote>
<p>Put the app in which you define this template before
<code>'django.contrib.admin'</code> in <code>INSTALLED_APPS</code>.</p>
</blockquote>
<p><a href="http://stackoverflow.com/questions/4938491/django-admin-change-header-django-administration-text">link</a></p>
| 0 | 2016-08-24T20:53:50Z | [
"python",
"django",
"django-admin"
] |
Python: SyntaxError using bitwise function | 39,132,191 | <p>I am trying to create a basic bitwise function that filters out a certain subset of my data for me.</p>
<pre><code>>>>heads=fits.open('datafile.fits')
>>>data=heads[1].data
</code></pre>
<p>Now, I need to mask out data points that are in a certain column and which are set to bit 0.</p>
<pre><code>>>>ind=np.where(data['COLUMN_NAME'] & np.power(2,9) = 0)
</code></pre>
<p>However, this input throws the error</p>
<pre><code>File "<stdin>", line 1
SyntaxError: keyword cant be an expression
</code></pre>
<p>The error does not give the normal ^ which shows where the error is, so I'm not sure which part of my input python is having an issue with.</p>
| -1 | 2016-08-24T20:06:37Z | 39,132,219 | <p>Equality comparison is <code>==</code> (a single <code>=</code> is assignment, which is not allowed inside an expression):</p>
<pre><code>ind=np.where(data['COLUMN_NAME'] & (2**9) == 0)
</code></pre>
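<p>As a plain-Python illustration: unlike C, Python gives bitwise <code>&</code> higher precedence than <code>==</code>, so <code>value & mask == 0</code> already reads as "(value AND mask) equals zero":</p>

```python
BIT9 = 1 << 9  # same value as 2**9, i.e. 512

def bit9_clear(value):
    # & binds tighter than == in Python, so no extra parentheses are required
    return value & BIT9 == 0

print(bit9_clear(0b1000000000))  # False: bit 9 is set
print(bit9_clear(0b0111111111))  # True: bit 9 is clear
```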
| 1 | 2016-08-24T20:08:19Z | [
"python",
"bitwise-operators",
"bitwise-and"
] |
Python: SyntaxError using bitwise function | 39,132,191 | <p>I am trying to create a basic bitwise function that filters out a certain subset of my data for me.</p>
<pre><code>>>>heads=fits.open('datafile.fits')
>>>data=heads[1].data
</code></pre>
<p>Now, I need to mask out data points that are in a certain column and which are set to bit 0.</p>
<pre><code>>>>ind=np.where(data['COLUMN_NAME'] & np.power(2,9) = 0)
</code></pre>
<p>However, this input throws the error</p>
<pre><code>File "<stdin>", line 1
SyntaxError: keyword cant be an expression
</code></pre>
<p>The error does not give the normal ^ which shows where the error is, so I'm not sure which part of my input python is having an issue with.</p>
| -1 | 2016-08-24T20:06:37Z | 39,132,239 | <p>Could it be because you use '=' (assignment) instead of '==' (equality) in the call to 'where'?</p>
| 1 | 2016-08-24T20:09:19Z | [
"python",
"bitwise-operators",
"bitwise-and"
] |
The spider doesn't go to the next page | 39,132,263 | <p>Spider code:</p>
<pre><code>import scrapy
from crawler.items import Item
class DmozSpider(scrapy.Spider):
    name = 'blabla'
    allowed_domains = ['blabla']

    def start_requests(self):
        yield scrapy.Request('http://blabla.org/forum/viewforum.php?f=123', self.parse)

    def parse(self, response):
        item = Item()
        item['Title'] = response.xpath('//a[@class="title"]/text()').extract()
        yield item

        next_page = response.xpath('//a[text()="Next"]/@href')
        if next_page:
            url = response.urljoin(next_page[0].extract())
            yield scrapy.Request(url, callback=self.parse)
</code></pre>
<p>Problem: the spider stops after the first page even though <code>next_page</code> and <code>url</code> exist and are correct.</p>
<p>Here is the last debug message before stop:</p>
<pre><code>[scrapy] DEBUG: Crawled (200) <GET http://blabla.org/forum/viewforum.php?f=123&start=50> (referer: http://blabla.org/forum/viewforum.php?f=123)
[scrapy] INFO: Closing spider (finished)
</code></pre>
| 0 | 2016-08-24T20:10:52Z | 39,138,567 | <p>You need to check following this.</p>
<ol>
<li>Check if the urls that you are trying to crawl is not Robots.txt, which you can find by looking into <a href="http://blabla.org/robots.txt" rel="nofollow">http://blabla.org/robots.txt</a>. By default scrapy obeys robots.txt. <em>It is recommended that you abide to robots.txt</em></li>
<li>By default the download delay for the scrapy is 0.25, you can increase it 2 Sec or more than that and try.</li>
</ol>
| 1 | 2016-08-25T06:54:18Z | [
"python",
"python-3.x",
"scrapy",
"scrapy-spider"
] |
The spider doesn't go to the next page | 39,132,263 | <p>Spider code:</p>
<pre><code>import scrapy
from crawler.items import Item
class DmozSpider(scrapy.Spider):
    name = 'blabla'
    allowed_domains = ['blabla']

    def start_requests(self):
        yield scrapy.Request('http://blabla.org/forum/viewforum.php?f=123', self.parse)

    def parse(self, response):
        item = Item()
        item['Title'] = response.xpath('//a[@class="title"]/text()').extract()
        yield item

        next_page = response.xpath('//a[text()="Next"]/@href')
        if next_page:
            url = response.urljoin(next_page[0].extract())
            yield scrapy.Request(url, callback=self.parse)
</code></pre>
<p>Problem: the spider stops after the first page even though <code>next_page</code> and <code>url</code> exist and are correct.</p>
<p>Here is the last debug message before stop:</p>
<pre><code>[scrapy] DEBUG: Crawled (200) <GET http://blabla.org/forum/viewforum.php?f=123&start=50> (referer: http://blabla.org/forum/viewforum.php?f=123)
[scrapy] INFO: Closing spider (finished)
</code></pre>
| 0 | 2016-08-24T20:10:52Z | 39,151,077 | <p>The problem was that the response from the next page was a response for robots and did not contain any links.</p>
| 0 | 2016-08-25T17:09:55Z | [
"python",
"python-3.x",
"scrapy",
"scrapy-spider"
] |
Errors with django urls | 39,132,285 | <p>I'm getting an error 404 when i click the link on django, i have spent so much time trying to see what i'm doing wrong but no luck.</p>
<p>here is my index.html</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>{{title}}</h1>
{% for obj in object_list %}
{% url "detail" id=obj.id %}
<a href = '{{obj.get_absolute_url}}'> {{obj.title}}</a><br/>
{{obj.content}} <br/>
{{obj.timestamp}} <br/>
{{obj.updated}} <br/>
{{obj.id}} <br/>
{% endfor %}
</body>
</html>
</code></pre>
<p>This is my urls.py</p>
<pre><code>from django.conf.urls import url
from django.contrib import admin
from .views import (
post_list,
post_create,
post_detail,
post_update,
post_delete,
)
urlpatterns = [
url(r'^$', post_list),
url(r'^create/$', post_create),
url(r'^detail/(?P<id>\d+)/$', post_detail, name='detail'),
url(r'^update/$', post_update),
url(r'^delete/$', post_delete),
]
</code></pre>
<p>This is my views.py</p>
<pre><code>from django.http import HttpResponse
from django.shortcuts import render, get_object_or_404
from .models import Post
# Create your views here.
def post_create(request):
    return HttpResponse("<h1>Create</h1>")

def post_detail(request, id): # retrieve
    instance = get_object_or_404(Post, id=id)
    context = {
        "title": instance.title,
        "instance": instance,
    }
    return render(request, "post_detail.html", context)

def post_list(request): # list of posts
    queryset = Post.objects.all
    context = {
        "title": "List",
        "object_list": queryset,
    }
    return render(request, "index.html", context)

def post_update(request):
    return HttpResponse("<h1>Update</h1>")

def post_delete(request):
    return HttpResponse("<h1>Delete</h1>")
</code></pre>
<p>and this is my models.py</p>
<pre><code>from __future__ import unicode_literals
from django.db import models
# Create your models here.
class Post(models.Model):
    title = models.CharField(max_length = 120)
    content = models.TextField()
    updated = models.DateTimeField(auto_now=True, auto_now_add = False )
    timestamp = models.DateTimeField(auto_now=False, auto_now_add = True )

    def __unicode__(self):
        return self.title

    def __str__(self):
        return self.title

    def get_absolute_url(self):
        return "/posts/%s" %(self.id)
</code></pre>
| 0 | 2016-08-24T20:12:32Z | 39,132,500 | <p>Your <code>get_absolute_url</code> method returns URLs in the form <code>/posts/<id></code> but your urlconf is expecting <code>/posts/detail/<id></code>.</p>
<p>Instead of hard-coding a URL like that in the method, you should use the <code>reverse</code> functionality:</p>
<pre><code>from django.urls import reverse
...
def get_absolute_url(self):
    return reverse('detail', kwargs={'id': self.id})
</code></pre>
| 1 | 2016-08-24T20:28:13Z | [
"python",
"django",
"django-models",
"django-templates",
"django-views"
] |
Alternatives to looping in Pandas when you need to update a column based on another | 39,132,367 | <p>I have a <code>Pandas</code> <code>dataframe</code> with text dates that'd I like to convert to <code>datetime</code>. The problem is some of my text dates are bad data and thus can't be converted. In cases for which a date can't be converted, I want to update an <code>Error</code> column to a value of <code>True</code> as well as set the <code>Date</code> column to <code>None</code> so that it can later be added to a database column that is formatted as a <code>datetime</code>.</p>
<p>This is a simplified example. My <code>dataframe</code> may have 1 million rows and multiple date columns this needs to be done for, so I need a faster way of doing this. I know the typical convention is to avoid looping with <code>Pandas</code>, but I can't figure out a way around it.</p>
<pre><code>import pandas as pd
import numpy as np
import datetime
data = 1000 *[['010115', None],
['320115', None]]
df = pd.DataFrame(data=data,
columns=['Date', 'Error'])
for index, row in df.iterrows():
    try:
        datetime.datetime.strptime(row['Date'], '%d%m%y')
    except ValueError:
        row['Date'] = None
        row['Error'] = True
    except TypeError:
        pass
print df
</code></pre>
| 3 | 2016-08-24T20:18:34Z | 39,132,408 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a> with parameter <code>errors='coerce'</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isnull.html" rel="nofollow"><code>isnull</code></a>:</p>
<pre><code>data = 10 *[['010115', None],
['320115', None]]
df = pd.DataFrame(data=data,
columns=['Date', 'Error'])
print (df)
Date Error
0 010115 None
1 320115 None
2 010115 None
3 320115 None
4 010115 None
5 320115 None
6 010115 None
7 320115 None
8 010115 None
9 320115 None
10 010115 None
11 320115 None
12 010115 None
13 320115 None
14 010115 None
15 320115 None
16 010115 None
17 320115 None
18 010115 None
19 320115 None
</code></pre>
<pre><code>df['Date'] = pd.to_datetime(df['Date'], format='%d%m%y',errors='coerce')
df['Error'] = df['Date'].isnull()
print (df)
Date Error
0 2015-01-01 False
1 NaT True
2 2015-01-01 False
3 NaT True
4 2015-01-01 False
5 NaT True
6 2015-01-01 False
7 NaT True
8 2015-01-01 False
9 NaT True
10 2015-01-01 False
11 NaT True
12 2015-01-01 False
13 NaT True
14 2015-01-01 False
15 NaT True
16 2015-01-01 False
17 NaT True
18 2015-01-01 False
19 NaT True
</code></pre>
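<p>The question also mentions a million rows and multiple date columns. The same vectorized approach extends to several columns with a plain loop over the column names; a sketch below, where the column names <code>Start</code> and <code>End</code> are made up for illustration:</p>

```python
import pandas as pd

# Hypothetical frame with two text-date columns; the names are assumptions
df = pd.DataFrame({'Start': ['010115', '320115'],
                   'End':   ['020115', '990115']})

date_cols = ['Start', 'End']
for col in date_cols:
    df[col] = pd.to_datetime(df[col], format='%d%m%y', errors='coerce')

# Flag a row as an error if any of its date columns failed to parse
df['Error'] = df[date_cols].isnull().any(axis=1)
print(df)
```

<p>Only the loop over column names is Python-level; each <code>to_datetime</code> call is still vectorized over the whole column.</p>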
| 3 | 2016-08-24T20:21:31Z | [
"python",
"python-2.7",
"datetime",
"pandas",
"dataframe"
] |
Alternatives to looping in Pandas when you need to update a column based on another | 39,132,367 | <p>I have a <code>Pandas</code> <code>dataframe</code> with text dates that I'd like to convert to <code>datetime</code>. The problem is some of my text dates are bad data and thus can't be converted. In cases for which a date can't be converted, I want to update an <code>Error</code> column to a value of <code>True</code> as well as set the <code>Date</code> column to <code>None</code> so that it can later be added to a database column that is formatted as a <code>datetime</code>.</p>
<p>This is a simplified example. My <code>dataframe</code> may have 1 million rows and multiple date columns this needs to be done for, so I need a faster way of doing this. I know the typical convention is to avoid looping with <code>Pandas</code>, but I can't figure out a way around it.</p>
<pre><code>import pandas as pd
import numpy as np
import datetime
data = 1000 *[['010115', None],
              ['320115', None]]
df = pd.DataFrame(data=data,
                  columns=['Date', 'Error'])
for index, row in df.iterrows():
    try:
        datetime.datetime.strptime(row['Date'], '%d%m%y')
    except ValueError:
        row['Date'] = None
        row['Error'] = True
    except TypeError:
        pass
print df
</code></pre>
| 3 | 2016-08-24T20:18:34Z | 39,132,466 | <p>You can skip the initial creation of <code>df</code> and construct it from the specific columns you need instead.</p>
<pre><code># I push a list of first elements from data via a comprehension
dates = pd.to_datetime([d[0] for d in data], format='%d%m%y', errors='coerce')
# Construct df from scratch here
df = pd.DataFrame({'Date': dates})
df['Error'] = df.Date.isnull()
df.head()
</code></pre>
<p><a href="http://i.stack.imgur.com/UXFGt.png" rel="nofollow"><img src="http://i.stack.imgur.com/UXFGt.png" alt="enter image description here"></a></p>
<hr>
<h3>Timing</h3>
<p>This is the difference between using an already constructed <code>df</code> versus building it from scratch</p>
<p><a href="http://i.stack.imgur.com/kyv4f.png" rel="nofollow"><img src="http://i.stack.imgur.com/kyv4f.png" alt="enter image description here"></a></p>
| 3 | 2016-08-24T20:26:25Z | [
"python",
"python-2.7",
"datetime",
"pandas",
"dataframe"
] |
Python subtraction in for loop | 39,132,429 | <p>I am in need of some help doing some simple calculations in a for loop. I have two columns in the below example file. Column 1 is the date-time-hr-min-ss and column 2 is a value. I would like to print through the file and calculate the difference between the current value and the value of the previous hour. I attempted the code below but not able to subtract the previous hour value. Can I get some help/direction in correcting my code below? Thanks in advance.</p>
<p>File Contents:</p>
<pre><code>20160823220000 1208091708
20160823230000 1209559863
20160824000000 1210706089
20160824010000 1211612458
20160824020000 1212410614
20160824030000 1213059346
</code></pre>
<p>My Code:</p>
<pre><code>with open('datecount.txt') as data:
    z = 0
    for line in data:
        x = (line.strip().split())
        num = int(x[1])
        z = num
        print(z - z)
</code></pre>
<p>Desired Output:</p>
<pre><code>date-time-hr-min-ss Value Delta-from-prev-Hr
==========================================================
20160823220000 1208091708 N/A
20160823230000 1209559863 1468155
20160824000000 1210706089 1146226
20160824010000 1211612458 906369
20160824020000 1212410614 798156
20160824030000 1213059346 648732
</code></pre>
| 0 | 2016-08-24T20:23:41Z | 39,132,504 | <pre><code>with open('datecount.txt') as data:
    prev = 0
    for line in data:
        value = int(line.strip().split()[1])
        print(value - prev)
        prev = value
</code></pre>
<p>It is a good idea to name the variables so that their names make sense. In other words, leave <code>x, y, z</code> in maths.</p>
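<p>If you also want the <code>N/A</code> shown for the first row of the desired output, you can track the previous value with <code>None</code>. A self-contained sketch (the sample rows are inlined here instead of read from <code>datecount.txt</code>, so there is no file dependency):</p>

```python
# Inline stand-in for the contents of datecount.txt
lines = """20160823220000 1208091708
20160823230000 1209559863
20160824000000 1210706089
20160824010000 1211612458
20160824020000 1212410614
20160824030000 1213059346""".splitlines()

prev = None
deltas = []
for line in lines:
    stamp, value = line.split()
    value = int(value)
    # The first row has nothing to subtract from, hence the "N/A" column
    deltas.append('N/A' if prev is None else value - prev)
    print(stamp, value, deltas[-1])
    prev = value
```

<p>With a real file, replace the <code>lines</code> list with <code>open('datecount.txt')</code> in the loop.</p>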
| 3 | 2016-08-24T20:28:29Z | [
"python",
"parsing"
] |
Python subtraction in for loop | 39,132,429 | <p>I am in need of some help doing some simple calculations in a for loop. I have two columns in the below example file. Column 1 is the date-time-hr-min-ss and column 2 is a value. I would like to print through the file and calculate the difference between the current value and the value of the previous hour. I attempted the code below but not able to subtract the previous hour value. Can I get some help/direction in correcting my code below? Thanks in advance.</p>
<p>File Contents:</p>
<pre><code>20160823220000 1208091708
20160823230000 1209559863
20160824000000 1210706089
20160824010000 1211612458
20160824020000 1212410614
20160824030000 1213059346
</code></pre>
<p>My Code:</p>
<pre><code>with open('datecount.txt') as data:
    z = 0
    for line in data:
        x = (line.strip().split())
        num = int(x[1])
        z = num
        print(z - z)
</code></pre>
<p>Desired Output:</p>
<pre><code>date-time-hr-min-ss Value Delta-from-prev-Hr
==========================================================
20160823220000 1208091708 N/A
20160823230000 1209559863 1468155
20160824000000 1210706089 1146226
20160824010000 1211612458 906369
20160824020000 1212410614 798156
20160824030000 1213059346 648732
</code></pre>
| 0 | 2016-08-24T20:23:41Z | 39,132,541 | <p>Well if you can assume that each consecutive line in your file will be the hour following the previous hour, you don't need to even worry about messing with the time column. Just use this..</p>
<pre><code>with open('datecount.txt') as data:
    z = 0
    for line in data:
        x = (line.strip().split())
        num = int(x[1])
        print(num - z)
        z = num
</code></pre>
<p>Your code was pretty much doing this already; you just needed to print <code>num - z</code> before assigning <code>z = num</code>. Also, you had <code>print(z - z)</code> instead of <code>print(num - z)</code>.</p>
| 1 | 2016-08-24T20:30:53Z | [
"python",
"parsing"
] |
Python Google Places Api | 39,132,449 | <p>I am trying to capture details of Starbucks shops in Coventry like Name, Location, Address and the Google Maps URL, but I can't seem to wrap my head around the schedule (opening hours).</p>
<pre><code>print place.details
</code></pre>
<p>will give you all the details in JSON format; how do I keep just the opening hours?</p>
<pre><code>Starbucks Coffee
This is the address of the place : Gulson Road Coventry University, Coventry CV1 2JH, UK
The google map page : https://maps.google.com/?cid=5279103370560834505
The type of place : [u'cafe', u'food', u'store', u'point_of_interest', u'establishment']
{u'website': u'http://www.starbucks.co.uk/store/91356/gb/coventry-university-ecb/gulson-road-ecb-engineering-computing-building', u'utc_offset': 60, u'name': u'Starbucks Coffee', u'reference': u'CnRjAAAADvWg02OACGcjnA6lDYaiHaLuZDZkFnL3lGuw_QOw0i4fmmgcaUXXyROMIKW3eZR1tvorm-T6fAG0b815POJV7mSg4MnISitEn_SKGcs5hq5I2DY2CyAiwAFFjDsJkEWrj6NFEjnaF916KuQ-JXfzghIQdSyG6bHTdaMvn-ZZFXLR5hoUFsXDJ9gx-l0Ys5D6BG9IQV1emGU', u'price_level': 2, u'geometry': {u'location': {u'lat': Decimal('52.4055608'), u'lng': Decimal('-1.4997924')}, u'viewport': {u'northeast': {u'lat': Decimal('52.40569205'), u'lng': Decimal('-1.49939775')}, u'southwest': {u'lat': Decimal('52.40551705000001'), u'lng': Decimal('-1.49992395')}}}, u'adr_address': u'Gulson Road Coventry University, <span class="locality">Coventry</span> <span class="postal-code">CV1 2JH</span>, <span class="country-name">UK</span>', u'place_id': u'ChIJV0Xu7bdLd0gRyYvYroskQ0k', u'international_phone_number': u'+44 24 7622 5719', u'vicinity': u'Coventry, Gulson Road Coventry University, Coventry', u'reviews': [{u'rating': 4, u'aspects': [{u'rating': 2, u'type': u'overall'}], u'profile_photo_url': u'//lh5.googleusercontent.com/-Ksa5MB3V150/AAAAAAAAAAI/AAAAAAAAN1g/Oc9XyOBsRAI/photo.jpg', u'language': u'en', u'text': u"It's Starbucks in EC building, save you walking in town for a hot beverage", u'author_name': u'Dhruv Bhakta', u'author_url': u'https://plus.google.com/106741921003476599081', u'time': 1453209416}], u'formatted_phone_number': u'024 7622 5719', u'scope': u'GOOGLE', u'url': u'https://maps.google.com/?cid=5279103370560834505',---> I want just that u'opening_hours': {u'weekday_text': [u'Monday: 8:00 AM \u2013 6:30 PM', u'Tuesday: 8:00 AM \u2013 6:30 PM', u'Wednesday: 8:00 AM \u2013 6:30 PM', u'Thursday: 8:00 AM \u2013 6:30 PM', u'Friday: 8:00 AM \u2013 6:00 PM', u'Saturday: Closed', u'Sunday: Closed'], u'open_now': False, u'periods': [{u'close': {u'day': 1, u'time': u'1830'}, u'open': {u'day': 1, u'time': u'0800'}}, {u'close': 
{u'day': 2, u'time': u'1830'}, u'open': {u'day': 2, u'time': u'0800'}}, {u'close': {u'day': 3, u'time': u'1830'}, u'open': {u'day': 3, u'time': u'0800'}}, {u'close': {u'day': 4, u'time': u'1830'}, u'open': {u'day': 4, u'time': u'0800'}}, {u'close': {u'day': 5, u'time': u'1800'}, u'open': {u'day': 5, u'time': u'0800'}}]}, <---- until here u'address_components': [{u'long_name': u'Coventry', u'types': [u'locality', u'political'], u'short_name': u'Coventry'}, {u'long_name': u'Coventry', u'types': [u'postal_town'], u'short_name': u'Coventry'}, {u'long_name': u'West Midlands', u'types': [u'administrative_area_level_2', u'political'], u'short_name': u'West Midlands'}, {u'long_name': u'England', u'types': [u'administrative_area_level_1', u'political'], u'short_name': u'England'}, {u'long_name': u'United Kingdom', u'types': [u'country', u'political'], u'short_name': u'GB'}, {u'long_name': u'CV1 2JH', u'types': [u'postal_code'], u'short_name': u'CV1 2JH'}], u'formatted_address': u'Gulson Road Coventry University, Coventry CV1 2JH, UK', u'id': u'7fd44a38d5776818e51953ee7188f10d54b768cc', u'types': [u'cafe', u'food', u'store', u'point_of_interest', u'establishment'], u'icon': u'https://maps.gstatic.com/mapfiles/place_api/icons/cafe-71.png'}
</code></pre>
<p>The details output is the above but I want just the schedule.</p>
| -2 | 2016-08-24T20:24:59Z | 39,204,529 | <p>try google places text search api to find place_id</p>
<p><a href="https://maps.googleapis.com/maps/api/place/textsearch/json?query=StarbucksCoffee,GulsonRoad,CoventryUniversity,CoventryCV12JH,UK&key=YOUR_API_KEY" rel="nofollow">https://maps.googleapis.com/maps/api/place/textsearch/json?query=StarbucksCoffee,GulsonRoad,CoventryUniversity,CoventryCV12JH,UK&key=YOUR_API_KEY</a></p>
<p>This api will return you a json response.Parse 'place_id' of this place which is "ChIJV0Xu7bdLd0gRyYvYroskQ0k"</p>
<p>use this 'place_id' in google place details api to get opening hours</p>
<p><a href="https://maps.googleapis.com/maps/api/place/details/json?ChIJV0Xu7bdLd0gRyYvYroskQ0k=&key=YOUR_API_KEY" rel="nofollow">https://maps.googleapis.com/maps/api/place/details/json?ChIJV0Xu7bdLd0gRyYvYroskQ0k=&key=YOUR_API_KEY</a></p>
<p>this will return a json response that includes opening hours</p>
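<p>Once you have the details response, extracting the schedule is plain dict indexing. A sketch using a trimmed stand-in for the real response (only the keys needed here are included; the real dict has many more):</p>

```python
# Trimmed stand-in for the details dict printed in the question
details = {
    'name': 'Starbucks Coffee',
    'opening_hours': {
        'open_now': False,
        'weekday_text': ['Monday: 8:00 AM - 6:30 PM',
                         'Tuesday: 8:00 AM - 6:30 PM'],
    },
}

# .get(...) with a default avoids a KeyError for places with no hours listed
hours = details.get('opening_hours', {})
for day in hours.get('weekday_text', []):
    print(day)
```

<p>If you use the details endpoint directly, the same keys live under the top-level <code>result</code> object of the JSON.</p>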
| 1 | 2016-08-29T10:55:06Z | [
"python",
"json",
"google-maps",
"google-places-api"
] |
How to interpret `scipy.stats.kstest` and `ks_2samp` to evaluate `fit` of data to a distribution? | 39,132,469 | <p><strong>I'm trying to evaluate/test how well my data fits a particular distribution.</strong> </p>
<p>There are several questions about it and I was told to use either the <code>scipy.stats.kstest</code> or <code>scipy.stats.ks_2samp</code>. It seems straightforward, give it: (1) the data; (2) the distribution; and (3) the fit parameters. The only problem is my results don't make any sense? I want to test the "goodness" of my data and its fit to different distributions but from the output of <code>kstest</code>, I don't know if I can do this? </p>
<p><a href="http://stackoverflow.com/questions/11268632/goodness-of-fit-tests-in-scipy">Goodness of fit tests in SciPy</a></p>
<blockquote>
<p>"[SciPy] contains K-S"</p>
</blockquote>
<p><a href="http://stackoverflow.com/questions/17901112/using-scipys-stats-kstest-module-for-goodness-of-fit-testing">Using Scipy's stats.kstest module for goodness-of-fit testing</a> says </p>
<blockquote>
<p>"first value is the test statistics, and second value is the p-value. if the p-value is less than 95 (for a level of significance of 5%), this means that you cannot reject the Null-Hypothese that the two sample distributions are identical."</p>
</blockquote>
<p>This is just showing how to fit:
<a href="http://stackoverflow.com/questions/6615489/fitting-distributions-goodness-of-fit-p-value-is-it-possible-to-do-this-with">Fitting distributions, goodness of fit, p-value. Is it possible to do this with Scipy (Python)?</a></p>
<pre><code>np.random.seed(2)
# Sample from a normal distribution w/ mu: -50 and sigma=1
x = np.random.normal(loc=-50, scale=1, size=100)
x
#array([-50.41675785, -50.05626683, -52.1361961 , -48.35972919,
# -51.79343559, -50.84174737, -49.49711858, -51.24528809,
# -51.05795222, -50.90900761, -49.44854596, -47.70779199,
# ...
# -50.46200535, -49.64911151, -49.61813377, -49.43372456,
# -49.79579202, -48.59330376, -51.7379595 , -48.95917605,
# -49.61952803, -50.21713527, -48.8264685 , -52.34360319])
# Try against a Gamma Distribution
distribution = "gamma"
distr = getattr(stats, distribution)
params = distr.fit(x)
stats.kstest(x,distribution,args=params)
KstestResult(statistic=0.078494356486987549, pvalue=0.55408436218441004)
</code></pre>
<p><strong>A p_value of <code>pvalue=0.55408436218441004</code> is saying that the <code>normal</code> and <code>gamma</code> sampling are from the same distributions?</strong> </p>
<p>I thought gamma distributions have to contain positive values?<a href="https://en.wikipedia.org/wiki/Gamma_distribution" rel="nofollow">https://en.wikipedia.org/wiki/Gamma_distribution</a> </p>
<p>Now against a normal distribution:</p>
<pre><code># Try against a Normal Distribution
distribution = "norm"
distr = getattr(stats, distribution)
params = distr.fit(x)
stats.kstest(x,distribution,args=params)
KstestResult(statistic=0.070447707170256002, pvalue=0.70801104133244541)
</code></pre>
<p>According to this, if I took the lowest p_value, then <strong>I would conclude my data came from a <code>gamma</code> distribution even though they are all negative values?</strong> </p>
<pre><code>np.random.seed(0)
distr = getattr(stats, "norm")
x = distr.rvs(loc=0, scale=1, size=50)
params = distr.fit(x)
stats.kstest(x,"norm",args=params, N=1000)
KstestResult(statistic=0.058435890774587329, pvalue=0.99558592119926814)
</code></pre>
<p><strong>This means at a 5% level of significance, I can reject the null hypothesis that distributions are identical. So I conclude they are different but they clearly aren't?</strong> Am I interpreting this incorrectly? If I make it one-tailed, would that make it so the larger the value the more likely they are from the same distribution? </p>
| 0 | 2016-08-24T20:26:40Z | 39,133,792 | <p>So the null-hypothesis for the KT test is that the distributions are the same. Thus, the lower your p value the greater the statistical evidence you have to reject the null hypothesis and <em>conclude the distributions are different</em>. The test only really lets you speak of your confidence that the distributions are different, not the same, since the test is designed to find <strong>alpha</strong>, the probability of Type I error. </p>
<p>Also, I'm pretty sure the KS test is only valid if you have a fully specified distribution in mind beforehand. Here, you simply <em>fit</em> a gamma distribution on some data, so of course, it's no surprise the test yielded a high p-value (i.e. you cannot reject the null hypothesis that the distributions are the same).</p>
<p>Real quickly, here is the pdf of the Gamma you fit (in blue) against the pdf of the normal distribution you sampled from (in green):</p>
<pre><code>In [13]: paramsd = dict(zip(('shape','loc','scale'),params))
In [14]: a = paramsd['shape']
In [15]: del paramsd['shape']
In [16]: paramsd
Out[16]: {'loc': -71.588039241913037, 'scale': 0.051114096301755507}
In [17]: X = np.linspace(-55, -45, 100)
In [18]: plt.plot(X, stats.gamma.pdf(X,a,**paramsd))
Out[18]: [<matplotlib.lines.Line2D at 0x7ff820f21d68>]
</code></pre>
<p><a href="http://i.stack.imgur.com/dp4i6.png" rel="nofollow"><img src="http://i.stack.imgur.com/dp4i6.png" alt="enter image description here"></a></p>
<p>It should be obvious these aren't very different. Really, the test compares the empirical CDF (ECDF) vs the CDF of you candidate distribution (which again, you derived from fitting your data to that distribution), and the test statistic is the maximum difference. Borrowing an implementation of ECDF <a href="http://stackoverflow.com/a/37660583/5014455">from here</a>, we can see that any such maximum difference will be small, and the test will clearly not reject the null hypothesis:</p>
<pre><code>In [32]: def ecdf(x):
   .....:     xs = np.sort(x)
   .....:     ys = np.arange(1, len(xs)+1)/float(len(xs))
   .....:     return xs, ys
   .....:
In [33]: plt.plot(X, stats.gamma.cdf(X,a,**paramsd))
Out[33]: [<matplotlib.lines.Line2D at 0x7ff805223a20>]
In [34]: plt.plot(*ecdf(x))
Out[34]: [<matplotlib.lines.Line2D at 0x7ff80524c208>]
</code></pre>
<p><a href="http://i.stack.imgur.com/VcJH7.png" rel="nofollow"><img src="http://i.stack.imgur.com/VcJH7.png" alt="enter image description here"></a></p>
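<p>If it helps to see what the statistic actually is, here is a sketch computing D = max |ECDF - CDF| by hand for a fitted normal, using <code>math.erf</code> for the CDF so scipy isn't needed (the data mirrors the question's seed-2 sample):</p>

```python
import math
import numpy as np

def norm_cdf(x, loc=0.0, scale=1.0):
    # Normal CDF via the error function (no scipy dependency)
    return 0.5 * (1.0 + math.erf((x - loc) / (scale * math.sqrt(2.0))))

rng = np.random.RandomState(2)
x = rng.normal(loc=-50, scale=1, size=100)

xs = np.sort(x)
n = len(xs)
# "Fit" the normal by its MLEs: the sample mean and (ddof=0) std
cdf_vals = np.array([norm_cdf(v, loc=xs.mean(), scale=xs.std()) for v in xs])

# The ECDF steps at each sorted point, so check both sides of each step
d_plus = np.max(np.arange(1, n + 1) / n - cdf_vals)
d_minus = np.max(cdf_vals - np.arange(0, n) / n)
ks_stat = max(d_plus, d_minus)
print(ks_stat)
```

<p>Because the candidate CDF was fitted to the very same data, this maximum gap is small by construction, which is why the p-value comes out large.</p>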
| 1 | 2016-08-24T22:11:18Z | [
"python",
"numpy",
"machine-learning",
"scipy",
"statistics"
] |
Non-standard distributions variables for KS testing? | 39,132,537 | <p>Could you use the kstest in scipy.stats for the non-standard distribution functions (ie. vary the DOF for Students t, or vary gamma for Cauchy)? My end goal is to find the max p-value and corresponding parameter for my distribution fit but that isn't the issue.</p>
<p><strong>EDIT:</strong></p>
<p><strong>"</strong></p>
<p>scipy.stat's cauchy pdf is:</p>
<pre><code>cauchy.pdf(x) = 1 / (pi * (1 + x**2))
</code></pre>
<p>where it implies <code>x_0 = 0</code> for the location parameter and for gamma, <code>Y = 1</code>. I actually need it to look like this</p>
<pre><code>cauchy.pdf(x, x_0, Y) = Y**2 / [(Y * pi) * ((x - x_0)**2 + Y**2)]
</code></pre>
<p><strong>"</strong></p>
<p>Q1) Could Students t, at least, could be used in a way perhaps like</p>
<pre><code>stuff = []
for dof in xrange(0,100):
    d, p = scipy.stats.kstest(data, "t", args = (dof, ))
    stuff.append(np.hstack((d, p, dof)))
</code></pre>
<p>since it seems to have the option to vary the parameter?</p>
<p>Q2) How would you do this if you needed the full normal distribution equation (need to vary sigma) and Cauchy as written above (need to vary gamma)? <strong>EDIT: Instead of searching <code>scipy.stats</code> for non-standard distributions, is it actually possible to feed a function I write into the kstest that will find p-value's?</strong></p>
<p>Thanks kindly</p>
| 2 | 2016-08-24T20:30:34Z | 39,215,584 | <p>It seems that what you really want to do is parameter estimation.Using the KT-test in this manner is not really what it is meant for. You should use the <code>.fit</code> method for the <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html#continuous-distributions" rel="nofollow">corresponding distribution</a>.</p>
<pre><code>>>> import numpy as np, scipy.stats as stats
>>> arr = stats.norm.rvs(loc=10, scale=3, size=10) # generate 10 random samples from a normal distribution
>>> arr
array([ 11.54239861, 15.76348509, 12.65427353, 13.32551871,
10.5756376 , 7.98128118, 14.39058752, 15.08548683,
9.21976924, 13.1020294 ])
>>> stats.norm.fit(arr)
(12.364046769964004, 2.3998164726918607)
>>> stats.cauchy.fit(arr)
(12.921113834451496, 1.5012714431045815)
</code></pre>
<p>Now to quickly check the documentation:</p>
<pre><code>>>> help(cauchy.fit)
Help on method fit in module scipy.stats._distn_infrastructure:
fit(data, *args, **kwds) method of scipy.stats._continuous_distns.cauchy_gen instance
    Return MLEs for shape, location, and scale parameters from data.

    MLE stands for Maximum Likelihood Estimate. Starting estimates for
    the fit are given by input arguments; for any arguments not provided
    with starting estimates, ``self._fitstart(data)`` is called to generate
    such.

    One can hold some parameters fixed to specific values by passing in
    keyword arguments ``f0``, ``f1``, ..., ``fn`` (for shape parameters)
    and ``floc`` and ``fscale`` (for location and scale parameters,
    respectively).
    ...
    Returns
    -------
    shape, loc, scale : tuple of floats
        MLEs for any shape statistics, followed by those for location and
        scale.

    Notes
    -----
    This fit is computed by maximizing a log-likelihood function, with
    penalty applied for samples outside of range of the distribution. The
    returned answer is not guaranteed to be the globally optimal MLE, it
    may only be locally optimal, or the optimization may fail altogether.
</code></pre>
<p>So, let's say I wanted to hold one of those parameters constant, you could easily do:</p>
<pre><code>>>> stats.cauchy.fit(arr, floc=10)
(10, 2.4905786982353786)
>>> stats.norm.fit(arr, floc=10)
(10, 3.3686549590571668)
</code></pre>
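<p>On the edited part of the question — yes, <code>kstest</code> also accepts a callable CDF as its second argument, so you can feed it a function you write yourself. A minimal sketch (the toy CDF here just wraps a normal with fixed parameters for illustration; substitute your own distribution function):</p>

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
arr = rng.normal(loc=10, scale=3, size=100)

# Any callable mapping x -> CDF(x) can stand in for a named distribution
def my_cdf(x):
    return stats.norm.cdf(x, loc=10, scale=3)

d, p = stats.kstest(arr, my_cdf)
print(d, p)
```

<p>Note that the fully-specified-distribution caveat still applies: if the parameters inside your callable were themselves estimated from <code>arr</code>, the reported p-value is optimistic.</p>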
| 1 | 2016-08-29T21:35:26Z | [
"python",
"python-2.7",
"scipy",
"normal-distribution",
"kolmogorov-smirnov"
] |
Check if user input contains a word from an array - Python 3 | 39,132,563 | <p>I am writing a program that asks a set number of questions.
I am trying to have the user input an answer and have the code check the user's input for any word in a list of several words. </p>
<pre><code>keywordlist = ("pain", "suffering", "hurt")
question1 = input("how are you feeling?")
if question1.lower() in keywordlist:
    print("something here")
</code></pre>
<p>I ended up browsing stackoverflow for the answer and came across a post about splitting a string, but I didn't understand it. It was <a href="http://stackoverflow.com/questions/37174119/check-a-string-for-any-items-in-an-array/37174142#37174142">this</a> question.</p>
<p>using this link I switched my code to : </p>
<pre><code>if any(word in Question1 for word in keywordlist.split(",")):
</code></pre>
<p>but I got the error: </p>
<pre><code>AttributeError: 'tuple' object has no attribute 'split'
</code></pre>
<p>I am new to Python and need the dumbed down version of the accurate way to do this. </p>
| 1 | 2016-08-24T20:32:10Z | 39,132,606 | <p>Splitting the string should work. You can split on spaces so each individual word ends up being an element in a list. Like so</p>
<pre><code>keywordlist = ("pain", "suffering", "hurt")
question1 = input("how are you feeling?")
question_parts = question1.split(" ")
for part in question_parts:
    if part.lower() in keywordlist:
        print("something here")
</code></pre>
| 0 | 2016-08-24T20:35:13Z | [
"python"
] |
Check if user input contains a word from an array - Python 3 | 39,132,563 | <p>I am writing a program that asks a set number of questions.
I am trying to have the user input an answer and have the code check the user's input for any word in a list of several words. </p>
<pre><code>keywordlist = ("pain", "suffering", "hurt")
question1 = input("how are you feeling?")
if question1.lower() in keywordlist:
    print("something here")
</code></pre>
<p>I ended up browsing stackoverflow for the answer and came across a post about splitting a string, but I didn't understand it. It was <a href="http://stackoverflow.com/questions/37174119/check-a-string-for-any-items-in-an-array/37174142#37174142">this</a> question.</p>
<p>using this link I switched my code to : </p>
<pre><code>if any(word in Question1 for word in keywordlist.split(",")):
</code></pre>
<p>but I got the error: </p>
<pre><code>AttributeError: 'tuple' object has no attribute 'split'
</code></pre>
<p>I am new to Python and need the dumbed down version of the accurate way to do this. </p>
| 1 | 2016-08-24T20:32:10Z | 39,132,725 | <p>Assuming that you want the user to input a sentence and want to check if any word is in the keyword list:</p>
<pre><code>keywordlist = ("pain", "suffering", "hurt")
question1 = input("how are you feeling?")
input_words=question1.lower().split()
for word in input_words:
    if word in keywordlist:
        print("something here")
</code></pre>
<p>The reason <code>if any(word in Question1 for word in keywordlist.split(",")):</code> gave you that error is because you called the <code>split()</code> method on <code>keywordlist</code> which is a tuple. So the error is telling you exactly what you did wrong. You want to split the input into words, the <code>keywordlist</code> already contains words split up into a tuple.</p>
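<p>For completeness, the <code>any()</code> form from your attempt also works once nothing is split on the tuple. A sketch, with a hard-coded stand-in for <code>input()</code> so it runs on its own:</p>

```python
keywordlist = ("pain", "suffering", "hurt")
answer = "I feel a lot of pain today"  # stand-in for input("how are you feeling?")

# keywordlist is already a tuple of words, so split the *answer* instead
matched = any(word in keywordlist for word in answer.lower().split())
if matched:
    print("something here")
```

<p><code>any()</code> stops at the first matching word, so this does no extra work on long answers.</p>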
| 1 | 2016-08-24T20:43:39Z | [
"python"
] |
Iterate through all files in a directory and find and replace text - Python | 39,132,566 | <p>Baby brand new. This was Frankenstein'ed together from a few similar topics, none of which seemed to cover the necessary step of nesting a find and replace inside a file loop. </p>
<p>I am attempting to iterate through every file in a folder (not recursively, I only have one folder level) of a specific type (listed here as a '.LIC') and replace a short bit of text. The following is as close as I could come: </p>
<pre><code>import glob, os, fileinput
from glob import glob
root_dir = r"myPath"
os.chdir(root_dir)
for file in glob, glob('*.LIC'):
    filename = str(file)
    with fileinput.FileInput(filename, inplace=True, backup='.bak') as file:
        for line in file:
            print(line.replace('findText', 'replaceText'), end='')
</code></pre>
<p>As you can imagine this went swimmingly. The error code is placed below.</p>
<pre><code>OSError Traceback (most recent call last)
<ipython-input-61-e2fd0e9a5df9> in <module>()
6 filename = str(file)
7 with fileinput.FileInput(filename, inplace=True, backup='.bak') as file:
----> 8 for line in file:
9 print(line.replace('findText', 'replaceText'), end='')
10
C:\Users\Me\Anaconda3\lib\fileinput.py in __next__(self)
246 def __next__(self):
247 while True:
--> 248 line = self._readline()
249 if line:
250 self._filelineno += 1
C:\Users\Me\Anaconda3\lib\fileinput.py in _readline(self)
333 pass
334 # The next few lines may raise OSError
--> 335 os.rename(self._filename, self._backupfilename)
336 self._file = open(self._backupfilename, self._mode)
337 try:
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '<function glob at 0x00000000013D3400>' -> '<function glob at 0x00000000013D3400>.bak'
</code></pre>
<p>I think my problem is nesting a reference to 'file', but I am unsure how to resolve this.</p>
<p>Thank you for the help in advance.</p>
| 1 | 2016-08-24T20:32:17Z | 39,132,623 | <p>You should loop over the result of <code>glob</code> and not a tuple with the function object <code>glob</code>:</p>
<pre><code>for filename in glob('*.LIC'):
    with fileinput.FileInput(filename, inplace=True, backup='.bak') as file:
        for line in file:
            print(line.replace('findText', 'replaceText'), end='')
</code></pre>
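<p>A self-contained way to check the fix is to run it against a throwaway <code>.LIC</code> file; this sketch creates one in a temp directory first (the filename and contents are made up):</p>

```python
import os
import tempfile
import fileinput
from glob import glob

# Build a throwaway .LIC file so the loop has input to rewrite
workdir = tempfile.mkdtemp()
os.chdir(workdir)
with open('demo.LIC', 'w') as f:
    f.write('findText here\n')

for filename in glob('*.LIC'):
    with fileinput.FileInput(filename, inplace=True, backup='.bak') as fh:
        for line in fh:
            # With inplace=True, print() writes back into the file
            print(line.replace('findText', 'replaceText'), end='')

with open('demo.LIC') as f:
    result = f.read()
print(result)
```

<p>The original contents survive in <code>demo.LIC.bak</code>, mirroring what the <code>backup='.bak'</code> argument does in your real run.</p>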
| 2 | 2016-08-24T20:36:28Z | [
"python",
"python-3.x",
"file-io",
"glob"
] |
Webscraping multiline cells in tables using CSS Selectors and Python | 39,132,613 | <p>So I'm webscraping a page (<a href="http://canoeracing.org.uk/marathon/results/burton2016.htm" rel="nofollow">http://canoeracing.org.uk/marathon/results/burton2016.htm</a>) where there are multiline cells in tables:
<a href="http://i.stack.imgur.com/DHlLQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/DHlLQ.png" alt=""></a></p>
<p>I'm using the following code to scrape each column (the one below so happens to scrape the names):</p>
<pre><code>import lxml.html
from lxml.cssselect import CSSSelector
# get some html
import requests
r = requests.get('http://canoeracing.org.uk/marathon/results/burton2016.htm')
# build the DOM Tree
tree = lxml.html.fromstring(r.text)
# construct a CSS Selector
sel1 = CSSSelector('body > table > tr > td:nth-child(2)')
# Apply the selector to the DOM tree.
results1 = sel1(tree)
# get the text out of all the results
data1 = [result.text for result in results1]
</code></pre>
<p>Unfortunately it's only returning the first name from each cell, not both. I've tried a similar thing on the webscraping tool Kimono and I'm able to scrape both, however I want to set up a Python script, as Kimono falls down when running over multiple webpages.</p>
| 2 | 2016-08-24T20:35:50Z | 39,132,688 | <p>The problem is that some of the cells contain multiple text nodes delimited by a <code><br></code>. In cases like this, <em>find all text nodes</em> and join them:</p>
<pre><code>data1 = [", ".join(result.xpath("text()")) for result in results1]
</code></pre>
<p>For the provided rows in the screenshot, you would get:</p>
<pre><code>OSCAR HUISSOON, FREJA WEBBER
ELLIE LAWLEY, RHYS TIPPINGS
ALLISON MILES, ALEX MILES
NICOLA RUDGE, DEBORAH CRUMP
</code></pre>
<p>You could have also used <code>.text_content()</code> method, but you would lose the delimiter between the text nodes, getting things like <code>OSCAR HUISSOONFREJA WEBBER</code> in the result.</p>
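<p>A minimal runnable sketch of the same idea, using an inline stand-in for one table row so no network access is needed:</p>

```python
import lxml.html

# Tiny stand-in for a results-table row: two names split by a <br>
snippet = '<table><tr><td>1</td><td>OSCAR HUISSOON<br>FREJA WEBBER</td></tr></table>'
tree = lxml.html.fromstring(snippet)

cells = tree.xpath('//tr/td[2]')
# text() returns every direct text node of the cell, including the
# text that follows the <br>, so both names come back
data = [", ".join(cell.xpath("text()")) for cell in cells]
print(data)
```

<p>Against the real page, keep your <code>requests</code> + CSS-selector setup and just swap the <code>result.text</code> extraction for the <code>xpath("text()")</code> join.</p>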
| 2 | 2016-08-24T20:40:57Z | [
"python",
"css",
"web-scraping"
] |
Python PYserial WxPython Threading | 39,132,641 | <p>Can anyone please tell me what's wrong with this piece of code. </p>
<p>When I press button 1, everything is good. I want to press button 2 to stop the process started by button 1 and start another process. I am unable to do it - my GUI becomes unresponsive. </p>
<p>You are welcome to replace the serial communication in the doit and doit2 functions with print statements if you like. </p>
<p>Please don't comment about how I made the GUI - it is just a quick example. Please comment on why I am unable to pass the pill2kill event when I press button 2, and why my GUI becomes unresponsive. </p>
<pre><code>import threading
import time
import numpy as np
import serial
from Transmit import Write
from Receive import Read
import struct
import time
import serial.tools.list_ports
import wx
class windowClass(wx.Frame):
    def __init__(self, parent, title):
        appSize_x = 1100
        appSize_y = 800
        super(windowClass, self).__init__(parent, title = title, style = wx.MINIMIZE_BOX | wx.SYSTEM_MENU | wx.CLOSE_BOX |wx.CAPTION, size = (appSize_x, appSize_y))
        self.basicGUI()
        self.Centre()
        self.Show()

    def basicGUI(self):
        # Main Panel
        panel1 = wx.Panel(self)
        panel1.SetBackgroundColour('#D3D3D3')
        firmware_version = wx.StaticText(panel1, -1, "RANDOM1", pos = (70, 10) )
        firmware_version_text_control = wx.TextCtrl(panel1, -1, size = (70,25), pos = (105,40))
        pump_model_serial_number = wx.StaticText(panel1, -1, "RANDOM2", pos=(25, 75))
        pump_model_serial_number.SetBackgroundColour('yellow')
        model = wx.StaticText(panel1, -1, "RANDOM3", pos=(110, 100))
        self.listbox = wx.ListBox(panel1, -1, size = (300,250), pos = (20,135))
        clear_history = wx.Button(panel1, -1, 'BUTTON1', size = (225,30), pos = (40, 400))
        clear_history.SetBackgroundColour('RED')
        clear_history.Bind(wx.EVT_BUTTON, self.OnClearHistory)
        clear_history2 = wx.Button(panel1, -1, 'BUTTON2', size=(225, 30), pos=(40, 500))
        clear_history2.SetBackgroundColour('GREEN')
        clear_history2.Bind(wx.EVT_BUTTON, self.OnClearHistory2)

    def OnClearHistory(self, event):
        self.pill2kill = threading.Event()
        self.t = threading.Thread(target=self.doit, args=(self.pill2kill, "task"))
        self.t.start()
        self.t.join()

    def OnClearHistory2(self, event):
        self.pill2kill.set()
        self.t1 = threading.Thread(target=self.doit2)
        self.t1.start()
        time.sleep(5)
        self.t1.join()

    def doit(self, stop_event, arg):
        while not stop_event.wait(1):
            print ("working on %s" % arg)
            ser = serial.Serial(3, 115200)
            c = ser.write('\x5A\x03\x02\x02\x02\x09')
            print c
            d = ser.read(7)
            print d.encode('hex')
            ser.close()
        print("Stopping as you wish.")

    def doit2(self):
        #print ("working on %s" % arg)
        ser = serial.Serial(3, 115200)
        c = ser.write('\x5A\x03\x02\x08\x02\x0F') # Writing to an MCU
        print c
        d = ser.read(7)
        print d.encode('hex')
        ser.close()

def random():
    app = wx.App()
    windowClass(None, title='random')
    app.MainLoop()

random()
</code></pre>
| 1 | 2016-08-24T20:37:34Z | 39,149,329 | <p>Don't use the <code>.join</code> commands. They are blocking the main thread.<br>
See this SO question for an in depth description:<br>
<a href="http://stackoverflow.com/questions/15085348/what-is-the-use-of-join-in-python-threading">what is the use of join() in python threading</a></p>
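<p>As a GUI-free sketch of the recommended pattern (plain threading, with a hypothetical worker standing in for the serial I/O): start the thread and return from the handler immediately, and only set the stop event later, e.g. from the second button's handler.</p>

```python
import threading
import time

def worker(stop_event):
    # stands in for the serial polling loop; checks the event every cycle
    while not stop_event.wait(0.01):
        pass  # one unit of work per iteration

stop = threading.Event()
t = threading.Thread(target=worker, args=(stop,))
t.start()            # returns immediately -- no t.join() in the handler
time.sleep(0.05)     # the GUI event loop would keep running meanwhile
stop.set()           # later, from the second button's handler
t.join(timeout=1.0)  # joining is fine once the thread has been told to stop
assert not t.is_alive()
```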
| 0 | 2016-08-25T15:34:07Z | [
"python",
"multithreading",
"wxpython",
"pyserial"
] |
Python PYserial WxPython Threading | 39,132,641 | <p>Can anyone please tell me what's wrong with this piece of code?</p>
<p>When I press button 1, everything is good. I want to press button 2 to stop the process started by button 1 and start another process. I am unable to do it - my GUI becomes unresponsive.</p>
<p>You are welcome to replace the serial communication in the doit and doit2 functions with print statements if you like.</p>
<p>Please don't comment on how I made the GUI - it is just a quick example. Please comment on why I am unable to pass the pill2kill when I press button 2, and why my GUI becomes unresponsive.</p>
<pre><code>import threading
import time
import numpy as np
import serial
from Transmit import Write
from Receive import Read
import struct
import time
import serial.tools.list_ports
import wx
class windowClass(wx.Frame):
def __init__(self, parent, title):
appSize_x = 1100
appSize_y = 800
super(windowClass, self).__init__(parent, title = title, style = wx.MINIMIZE_BOX | wx.SYSTEM_MENU | wx.CLOSE_BOX |wx.CAPTION, size = (appSize_x, appSize_y))
self.basicGUI()
self.Centre()
self.Show()
def basicGUI(self):
# Main Panel
panel1 = wx.Panel(self)
panel1.SetBackgroundColour('#D3D3D3')
firmware_version = wx.StaticText(panel1, -1, "RANDOM1", pos = (70, 10) )
firmware_version_text_control = wx.TextCtrl(panel1, -1, size = (70,25), pos = (105,40))
pump_model_serial_number = wx.StaticText(panel1, -1, "RANDOM2", pos=(25, 75))
pump_model_serial_number.SetBackgroundColour('yellow')
model = wx.StaticText(panel1, -1, "RANDOM3", pos=(110, 100))
self.listbox = wx.ListBox(panel1, -1, size = (300,250), pos = (20,135))
clear_history = wx.Button(panel1, -1, 'BUTTON1', size = (225,30), pos = (40, 400))
clear_history.SetBackgroundColour('RED')
clear_history.Bind(wx.EVT_BUTTON, self.OnClearHistory)
clear_history2 = wx.Button(panel1, -1, 'BUTTON2', size=(225, 30), pos=(40, 500))
clear_history2.SetBackgroundColour('GREEN')
clear_history2.Bind(wx.EVT_BUTTON, self.OnClearHistory2)
def OnClearHistory(self, event):
self.pill2kill = threading.Event()
self.t = threading.Thread(target=self.doit, args=(self.pill2kill, "task"))
self.t.start()
self.t.join()
def OnClearHistory2(self, event):
self.pill2kill.set()
self.t1 = threading.Thread(target=self.doit2)
self.t1.start()
time.sleep(5)
self.t1.join()
def doit(self, stop_event, arg):
while not stop_event.wait(1):
print ("working on %s" % arg)
ser = serial.Serial(3, 115200)
c = ser.write('\x5A\x03\x02\x02\x02\x09')
print c
d = ser.read(7)
print d.encode('hex')
ser.close()
print("Stopping as you wish.")
def doit2(self):
#print ("working on %s" % arg)
ser = serial.Serial(3, 115200)
c = ser.write('\x5A\x03\x02\x08\x02\x0F') # Writing to an MCU
print c
d = ser.read(7)
print d.encode('hex')
ser.close()
def random():
app = wx.App()
windowClass(None, title='random')
app.MainLoop()
random()
</code></pre>
| 1 | 2016-08-24T20:37:34Z | 39,149,414 | <p><code>Thread.join</code> will block until the thread terminates. If your GUI's event handlers are blocked and unable to return to the MainLoop, then other events cannot be received and dispatched, and so the application appears to be frozen.</p>
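<p>A small timing sketch of this point, with a sleep standing in for the serial work: while <code>join()</code> is waiting, the caller (here, what would be the wx event handler) can do nothing else.</p>

```python
import threading
import time

def slow_task():
    time.sleep(0.2)  # stands in for the serial read loop

t = threading.Thread(target=slow_task)
t.start()
start = time.monotonic()
t.join()  # the calling thread is stuck here until slow_task() returns
blocked_for = time.monotonic() - start
assert blocked_for >= 0.15  # join() blocked for roughly the task's duration
```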
| 0 | 2016-08-25T15:37:37Z | [
"python",
"multithreading",
"wxpython",
"pyserial"
] |
Represent number as a bytes using 16-bit blocks | 39,132,661 | <p>I wish to convert a number like <code>683550</code> (0xA6E1E) to <code>b'\x1e\x6e\x0a\x00'</code>, where the number of bytes in the array is a multiple of 2 and where the len of the bytes object is only so long as it needs to be to represent the number.</p>
<p>This is as far as I got:</p>
<pre><code>"{0:0{1}x}".format(683550,8)
</code></pre>
<p>giving:</p>
<pre><code>'000a6e1e'
</code></pre>
| 2 | 2016-08-24T20:38:55Z | 39,132,879 | <p>Use the <code>int.to_bytes</code> method:</p>
<pre><code>num = 683550
# round the byte count up to the next multiple of 2 (16-bit blocks);
# "data" avoids shadowing the built-in name "bytes"
data = num.to_bytes((num.bit_length() + 15) // 16 * 2, "little")
</code></pre>
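<p>A quick round-trip check of the formula on the question's example value (Python 3):</p>

```python
num = 683550  # 0xA6E1E, the value from the question
data = num.to_bytes((num.bit_length() + 15) // 16 * 2, "little")
assert data == b'\x1e\x6e\x0a\x00'            # padded to a multiple of 2 bytes
assert int.from_bytes(data, "little") == num  # and it decodes back
# edge case to be aware of: 0 has bit_length() 0, so the formula yields b''
assert (0).to_bytes(((0).bit_length() + 15) // 16 * 2, "little") == b''
```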
| 2 | 2016-08-24T20:56:04Z | [
"python",
"python-3.x"
] |
Represent number as a bytes using 16-bit blocks | 39,132,661 | <p>I wish to convert a number like <code>683550</code> (0xA6E1E) to <code>b'\x1e\x6e\x0a\x00'</code>, where the number of bytes in the array is a multiple of 2 and where the len of the bytes object is only so long as it needs to be to represent the number.</p>
<p>This is as far as I got:</p>
<pre><code>"{0:0{1}x}".format(683550,8)
</code></pre>
<p>giving:</p>
<pre><code>'000a6e1e'
</code></pre>
| 2 | 2016-08-24T20:38:55Z | 39,134,342 | <p>Using python3:</p>
<pre class="lang-py prettyprint-override"><code>def encode_to_my_hex_format(num, bytes_group_len=2, byteorder='little'):
"""
@param byteorder can take the values 'little' or 'big'
"""
bytes_needed = abs(-len(bin(num)[2: ]) // 8)
if bytes_needed % bytes_group_len:
bytes_needed += bytes_group_len - bytes_needed % bytes_group_len
num_in_bytes = num.to_bytes(bytes_needed, byteorder)
encoded_num_in_bytes = b''
for index in range(0, len(num_in_bytes), bytes_group_len):
bytes_group = num_in_bytes[index: index + bytes_group_len]
if byteorder == 'little':
bytes_group = bytes_group[-1: -len(bytes_group) -1 : -1]
encoded_num_in_bytes += bytes_group
encoded_num = ''
for byte in encoded_num_in_bytes:
encoded_num += r'\x' + hex(byte)[2: ].zfill(2)
return encoded_num
print(encode_to_my_hex_format(683550))
</code></pre>
| 0 | 2016-08-24T23:09:16Z | [
"python",
"python-3.x"
] |
numpy array converted to pandas dataframe drops values | 39,132,712 | <p>I need to calculate statistics for each node of a 2D grid. I figured the easy way to do this was to take the cross join (AKA cartesian product) of two ranges. I implemented this using <code>numpy</code> as this function:</p>
<pre><code>def node_grid(x_range, y_range, x_increment, y_increment):
x_min = float(x_range[0])
x_max = float(x_range[1])
x_num = (x_max - x_min)/x_increment + 1
y_min = float(y_range[0])
y_max = float(y_range[1])
y_num = (y_max - y_min)/y_increment + 1
x = np.linspace(x_min, x_max, x_num)
y = np.linspace(y_min, y_max, y_num)
ng = list(product(x, y))
ng = np.array(ng)
return ng, x, y
</code></pre>
<p>However when I convert this to a <code>pandas</code> dataframe it drops values. For example:</p>
<pre><code>In [2]: ng = node_grid(x_range=(-60, 120), y_range=(0, 40), x_increment=0.1, y_increment=0.1)
In [3]: ng[0][(ng[0][:,0] > -31) & (ng[0][:,0] < -30) & (ng[0][:,1]==10)]
Out[3]: array([[-30.9, 10. ],
[-30.8, 10. ],
[-30.7, 10. ],
[-30.6, 10. ],
[-30.5, 10. ],
[-30.4, 10. ],
[-30.3, 10. ],
[-30.2, 10. ],
[-30.1, 10. ]])
In [4]: node_df = pd.DataFrame(ng[0])
node_df.columns = ['xx','depth']
print(node_df[(node_df.depth==10) & node_df.xx.between(-30,-31)])
Out[4]:Empty DataFrame
Columns: [xx, depth]
Index: []
</code></pre>
<p>The dataframe isn't empty:</p>
<pre><code>In [5]: print(node_df.head())
Out[5]: xx depth
0 -60.0 0.0
1 -60.0 0.1
2 -60.0 0.2
3 -60.0 0.3
4 -60.0 0.4
</code></pre>
<p>values from the numpy array are being dropped when they are being put into the pandas array. Why?</p>
| 2 | 2016-08-24T20:42:41Z | 39,133,555 | <p>I can't fully reproduce your code.</p>
<p>But I found that the problem is that the lower and upper boundaries are swapped in the <code>between</code> query; the lower bound must come first. The following works for me:</p>
<pre><code>print(node_df[(node_df.depth==10) & node_df.xx.between(-31,-30)])
</code></pre>
<p>when using:</p>
<pre><code>ng = np.array([[-30.9, 10. ],
[-30.8, 10. ],
[-30.7, 10. ],
[-30.6, 10. ],
[-30.5, 10. ],
[-30.4, 10. ],
[-30.3, 10. ],
[-30.2, 10. ],
[-30.1, 10. ]])
node_df = pd.DataFrame(ng)
</code></pre>
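<p>A standalone illustration of the ordering rule (with made-up values): <code>between(left, right)</code> treats the bounds as inclusive and ordered, so a reversed pair matches nothing.</p>

```python
import pandas as pd

s = pd.Series([-30.9, -30.5, -30.1, 10.0])
# left > right can never match, so every element tests False
assert not s.between(-30, -31).any()
# lower bound first: the three values inside [-31, -30] match
assert s.between(-31, -30).sum() == 3
```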
| 0 | 2016-08-24T21:49:13Z | [
"python",
"pandas",
"numpy"
] |
numpy array converted to pandas dataframe drops values | 39,132,712 | <p>I need to calculate statistics for each node of a 2D grid. I figured the easy way to do this was to take the cross join (AKA cartesian product) of two ranges. I implemented this using <code>numpy</code> as this function:</p>
<pre><code>def node_grid(x_range, y_range, x_increment, y_increment):
x_min = float(x_range[0])
x_max = float(x_range[1])
x_num = (x_max - x_min)/x_increment + 1
y_min = float(y_range[0])
y_max = float(y_range[1])
y_num = (y_max - y_min)/y_increment + 1
x = np.linspace(x_min, x_max, x_num)
y = np.linspace(y_min, y_max, y_num)
ng = list(product(x, y))
ng = np.array(ng)
return ng, x, y
</code></pre>
<p>However when I convert this to a <code>pandas</code> dataframe it drops values. For example:</p>
<pre><code>In [2]: ng = node_grid(x_range=(-60, 120), y_range=(0, 40), x_increment=0.1, y_increment=0.1)
In [3]: ng[0][(ng[0][:,0] > -31) & (ng[0][:,0] < -30) & (ng[0][:,1]==10)]
Out[3]: array([[-30.9, 10. ],
[-30.8, 10. ],
[-30.7, 10. ],
[-30.6, 10. ],
[-30.5, 10. ],
[-30.4, 10. ],
[-30.3, 10. ],
[-30.2, 10. ],
[-30.1, 10. ]])
In [4]: node_df = pd.DataFrame(ng[0])
node_df.columns = ['xx','depth']
print(node_df[(node_df.depth==10) & node_df.xx.between(-30,-31)])
Out[4]:Empty DataFrame
Columns: [xx, depth]
Index: []
</code></pre>
<p>The dataframe isn't empty:</p>
<pre><code>In [5]: print(node_df.head())
Out[5]: xx depth
0 -60.0 0.0
1 -60.0 0.1
2 -60.0 0.2
3 -60.0 0.3
4 -60.0 0.4
</code></pre>
<p>values from the numpy array are being dropped when they are being put into the pandas array. Why?</p>
| 2 | 2016-08-24T20:42:41Z | 39,133,567 | <p>The <code>between</code> function requires that the first argument (the lower bound) be less than or equal to the second.</p>
<pre><code>In: print(node_df[(node_df.depth==10) & node_df.xx.between(-31,-30)])
            xx  depth
116390   -31.0   10.0
116791   -30.9   10.0
117192   -30.8   10.0
117593   -30.7   10.0
117994   -30.6   10.0
118395   -30.5   10.0
118796   -30.4   10.0
119197   -30.3   10.0
119598   -30.2   10.0
119999   -30.1   10.0
120400   -30.0   10.0
</code></pre>
<p>For clarity, the <code>product()</code> function used comes from the <code>itertools</code> package, i.e., <code>from itertools import product</code>.</p>
| 1 | 2016-08-24T21:50:12Z | [
"python",
"pandas",
"numpy"
] |
Increase Query Speed in Sqlite | 39,132,715 | <p>I am very new to using Python and SQLite. I am trying to create a script that reads data from a table (rawdata) and then performs some calculations, which are then stored in a new table. I am counting the number of races that a player has won before that date at a particular track position and calculating the percentage. There are 15 track positions in total. Overall the script is very slow. Any suggestions to improve its speed? I have already used the PRAGMA parameters.</p>
<p>Below is the script.</p>
<pre><code>for item in result:
l1 = str(item[0])
l2 = item[1]
l3 = int(item[2])
winpost = []
key = l1.split("|")
dt = l2
###Denominator--------------
cursor.execute(
"SELECT rowid FROM rawdata WHERE Track = ? AND Date< ? AND Distance = ? AND Surface =? AND OfficialFinish=1",
(key[2], dt, str(key[4]), str(key[5]),))
result_den1 = cursor.fetchall()
cursor.execute(
"SELECT rowid FROM rawdata WHERE Track = ? AND RaceSN<= ? AND Date= ? AND Distance = ? AND Surface =? AND OfficialFinish=1",
(key[2], int(key[3]), dt, str(key[4]), str(key[5]),))
result_den2 = cursor.fetchall()
totalmat = len(result_den1) + len(result_den2)
if totalmat > 0:
for i in range(1, 16):
cursor.execute(
"SELECT rowid FROM rawdata WHERE Track = ? AND Date< ? AND PolPosition = ? AND Distance = ? AND Surface =? AND OfficialFinish=1",
(key[2], dt, i, str(key[4]), str(key[5]),))
result_num1 = cursor.fetchall()
cursor.execute(
"SELECT rowid FROM rawdata WHERE Track = ? AND RaceSN<= ? AND Date= ? AND PolPosition = ? AND Distance = ? AND Surface =? AND OfficialFinish=1",
(key[2], int(key[3]), dt, i, str(key[4]), str(key[5]),))
result_num2 = cursor.fetchall()
winpost.append(len(result_num1) + len(result_num2))
winpost = [float(x) / totalmat for x in winpost]
rank = rankmin(winpost)
franks = list(rank)
franks.insert(0, int(key[3]))
franks.insert(0, dt)
franks.insert(0, l1)
table1.append(franks)
franks = []
cursor.executemany("INSERT INTO posttable VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)", table1)
</code></pre>
| 0 | 2016-08-24T20:42:51Z | 39,133,431 | <p>Sending and retrieving an SQL query is "expensive" in terms of time. The easiest way to speed things up would be to use SQL functions to reduce the number of queries.</p>
<p>For example, the first two queries could be reduced to a single call using COUNT(), UNION ALL, and aliases.</p>
<pre><code>SELECT COUNT(*)
FROM
  ( SELECT rowid FROM rawdata WHERE ...
    UNION ALL
    SELECT rowid FROM rawdata WHERE ...
  ) totalmatch
</code></pre>
<p>In this case we take the two original queries (with your conditions in place of the "..."), combine them with a UNION ALL statement, give that union the alias "totalmatch", and count all the rows in it. (<code>UNION ALL</code> is used rather than plain <code>UNION</code> because <code>UNION</code> removes duplicate rows, which would throw the count off.)</p>
<p>The same can be done with the second set of queries. Instead of cycling 15 times over 2 queries (resulting in 30 calls to the SQL engine), you can replace them with a single query by also using GROUP BY.</p>
<pre><code>SELECT PolPosition, COUNT(PolPosition)
FROM
  ( SELECT PolPosition FROM rawdata WHERE ...
    UNION ALL
    SELECT PolPosition FROM rawdata WHERE ...
  ) totalmatch
GROUP BY PolPosition
</code></pre>
<p>In this case we take the exact same query as before and group it by PolPosition, using COUNT to display how many rows are in each group.</p>
<p>W3Schools is a great resource for how these functions work:
<a href="http://www.w3schools.com/sql/default.asp" rel="nofollow">http://www.w3schools.com/sql/default.asp</a></p>
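<p>A toy, self-contained demonstration of the grouped-count idea against an in-memory SQLite database (the two-column schema here is a simplification of the real rawdata table):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rawdata (Track TEXT, PolPosition INTEGER)")
conn.executemany("INSERT INTO rawdata VALUES (?, ?)",
                 [("A", 1), ("A", 1), ("A", 2), ("B", 1)])
# one grouped query replaces a per-position loop of separate queries
rows = conn.execute(
    "SELECT PolPosition, COUNT(*) FROM rawdata"
    " WHERE Track = ? GROUP BY PolPosition", ("A",)).fetchall()
assert dict(rows) == {1: 2, 2: 1}
conn.close()
```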
| 1 | 2016-08-24T21:38:38Z | [
"python",
"sqlite"
] |
Website forces scrapy redirect | 39,132,733 | <p>I keep on getting redirected from www.caribbeanjobs.com. I've programmed my spider not to obey robots.txt, disabled cookies, and tried meta=dont_redirect. What else can I do?</p>
<p>This is my spider below:</p>
<pre><code>import scrapy
from tutorial.items import CaribbeanJobsItem
class CaribbeanJobsSpider(scrapy.Spider):
name = "caribbeanjobs"
allowed_domains = ["caribbeanjobs.com/"]
start_urls = [
"http://www.caribbeanjobs.com/"
]
def start_requests(self):
for url in self.start_urls:
yield scrapy.Request(url, meta={'dont_redirect':True})
def parse(self, response):
if ".com" in response.url:
from scrapy.shell import inspect_response
inspect_response(response, self)
</code></pre>
<p>These are my settings:</p>
<pre><code>BOT_NAME = 'tutorial'
SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tutorial (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'tutorial.middlewares.MyCustomSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'tutorial.middlewares.MyCustomDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
# 'tutorial.pipelines.SomePipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
</code></pre>
| 1 | 2016-08-24T20:44:25Z | 39,132,833 | <p>Did you try setting an explicit <code>USER_AGENT</code> in your settings?</p>
<p><a href="http://doc.scrapy.org/en/latest/topics/settings.html#user-agent" rel="nofollow">http://doc.scrapy.org/en/latest/topics/settings.html#user-agent</a></p>
<p>Something like this might work as a starting point:</p>
<pre><code>USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.102 Safari/537.36"`
</code></pre>
| 2 | 2016-08-24T20:52:45Z | [
"python",
"scrapy",
"scrapy-spider"
] |
Website forces scrapy redirect | 39,132,733 | <p>I keep on getting redirected from www.caribbeanjobs.com. I've programmed my spider not to obey robots.txt, disabled cookies, and tried meta=dont_redirect. What else can I do?</p>
<p>This is my spider below:</p>
<pre><code>import scrapy
from tutorial.items import CaribbeanJobsItem
class CaribbeanJobsSpider(scrapy.Spider):
name = "caribbeanjobs"
allowed_domains = ["caribbeanjobs.com/"]
start_urls = [
"http://www.caribbeanjobs.com/"
]
def start_requests(self):
for url in self.start_urls:
yield scrapy.Request(url, meta={'dont_redirect':True})
def parse(self, response):
if ".com" in response.url:
from scrapy.shell import inspect_response
inspect_response(response, self)
</code></pre>
<p>These are my settings:</p>
<pre><code>BOT_NAME = 'tutorial'
SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tutorial (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'tutorial.middlewares.MyCustomSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'tutorial.middlewares.MyCustomDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
# 'tutorial.pipelines.SomePipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
</code></pre>
| 1 | 2016-08-24T20:44:25Z | 39,138,489 | <p>You can set the spider attribute <code>handle_httpstatus_list</code> (note the exact name; the status codes are integers, not strings). You can declare it right after <code>start_urls</code> in the spider class:</p>
<pre><code>handle_httpstatus_list = [301, 302, 303]
</code></pre>
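<p>For reference, a hedged sketch of how these pieces are usually combined in Scrapy (untested here, and only an assumption about what fits this spider; the attribute is spelled <code>handle_httpstatus_list</code> with integer status codes, and there is a per-request equivalent in <code>meta</code>):</p>

```python
import scrapy

class CaribbeanJobsSpider(scrapy.Spider):
    name = "caribbeanjobs"
    start_urls = ["http://www.caribbeanjobs.com/"]
    # class-level: let 3xx responses reach the callback instead of being filtered
    handle_httpstatus_list = [301, 302, 303]

    def start_requests(self):
        for url in self.start_urls:
            # per-request: skip RedirectMiddleware and accept the 3xx response
            yield scrapy.Request(url, meta={
                "dont_redirect": True,
                "handle_httpstatus_list": [301, 302, 303],
            })
```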
| 0 | 2016-08-25T06:49:45Z | [
"python",
"scrapy",
"scrapy-spider"
] |
Simplest Group By / Sum in Pandas | 39,132,739 | <p>What's the simplest way to sum a two column pandas dataframe but keep it as a dataframe?</p>
<p>I have a dataframe that looks like </p>
<pre><code> sum1 sum2
0 8153.035 1132
1 12730.100 1559
2 10845.360 1268
3 6694.075 900
4 3740.105 608
5 3247.225 232
6 4579.725 646
7 9225.150 1184
8 12371.885 2346
9 11670.025 1805
10 1088.000 183
11 14.460 3
12 9027.055 1282
13 18880.855 2107
</code></pre>
<p>What's the simplest and most efficient way to sum it to a dataframe that looks like:</p>
<pre><code> sum1 sum2
0 109019.83 15023
</code></pre>
| 1 | 2016-08-24T20:45:03Z | 39,132,762 | <p>I was overthinking. Can just do </p>
<pre><code>df = df.sum(axis=0)
</code></pre>
| 0 | 2016-08-24T20:47:27Z | [
"python",
"pandas",
"group-by"
] |
Simplest Group By / Sum in Pandas | 39,132,739 | <p>What's the simplest way to sum a two column pandas dataframe but keep it as a dataframe?</p>
<p>I have a dataframe that looks like </p>
<pre><code> sum1 sum2
0 8153.035 1132
1 12730.100 1559
2 10845.360 1268
3 6694.075 900
4 3740.105 608
5 3247.225 232
6 4579.725 646
7 9225.150 1184
8 12371.885 2346
9 11670.025 1805
10 1088.000 183
11 14.460 3
12 9027.055 1282
13 18880.855 2107
</code></pre>
<p>What's the simplest and most efficient way to sum it to a dataframe that looks like:</p>
<pre><code> sum1 sum2
0 109019.83 15023
</code></pre>
| 1 | 2016-08-24T20:45:03Z | 39,132,791 | <pre><code>df.sum().to_frame().T
</code></pre>
<p>Output:</p>
<pre><code> sum1 sum2
0 112267.055 15255.0
</code></pre>
<p>You could just do <code>df.sum()</code> to get the series</p>
<pre><code>sum1 112267.055
sum2 15255.000
dtype: float64
</code></pre>
| 2 | 2016-08-24T20:48:55Z | [
"python",
"pandas",
"group-by"
] |
Groupby value counts on the dataframe pandas | 39,132,742 | <p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame([
(1, 1, 'term1'),
(1, 2, 'term2'),
(1, 1, 'term1'),
(1, 1, 'term2'),
(2, 2, 'term3'),
(2, 3, 'term1'),
(2, 2, 'term1')
], columns=['id', 'group', 'term'])
</code></pre>
<p>I want to group it by <code>id</code> and <code>group</code> and calculate the number of each term for this id, group pair.</p>
<p>So in the end I am going to get something like this:</p>
<p><a href="http://i.stack.imgur.com/DIwPYm.png" rel="nofollow"><img src="http://i.stack.imgur.com/DIwPYm.png" alt="enter image description here"></a></p>
<p>I was able to achieve what I want by looping over all the rows with <code>df.iterrows()</code> and creating a new dataframe, but this is clearly inefficient. (If it helps, I know the list of all terms beforehand and there are ~10 of them).</p>
<p>It looks like I have to group by and then count values, so I tried that with <code>df.groupby(['id', 'group']).value_counts()</code> which does not work because <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.core.groupby.SeriesGroupBy.value_counts.html" rel="nofollow">value_counts</a> operates on the groupby series and not a dataframe.</p>
<p>Anyway I can achieve this without looping?</p>
| 5 | 2016-08-24T20:45:24Z | 39,132,761 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow"><code>crosstab</code></a>:</p>
<pre><code>print (pd.crosstab([df.id, df.group], df.term))
term term1 term2 term3
id group
1 1 2 1 0
2 0 1 0
2 2 1 0 1
3 1 0 0
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with aggregating <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a>, reshaping by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a>:</p>
<pre><code>df.groupby(['id', 'group', 'term'])['term'].size().unstack(fill_value=0)
term term1 term2 term3
id group
1 1 2 1 0
2 0 1 0
2 2 1 0 1
3 1 0 0
</code></pre>
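<p>As a quick self-check, the groupby/size/unstack pipeline reproduces the table above on the question's data:</p>

```python
import pandas as pd

df = pd.DataFrame([
    (1, 1, 'term1'), (1, 2, 'term2'), (1, 1, 'term1'), (1, 1, 'term2'),
    (2, 2, 'term3'), (2, 3, 'term1'), (2, 2, 'term1')
], columns=['id', 'group', 'term'])

out = df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)
assert out.loc[(1, 1), 'term1'] == 2   # (id=1, group=1) saw 'term1' twice
assert out.loc[(2, 3), 'term1'] == 1
assert out.loc[(1, 2), 'term1'] == 0   # absent combinations are filled with 0
```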
<p><strong>Timings</strong>:</p>
<pre><code>df = pd.concat([df]*10000).reset_index(drop=True)
In [48]: %timeit (df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0))
100 loops, best of 3: 12.4 ms per loop
In [49]: %timeit (df.groupby(['id', 'group', 'term'])['term'].size().unstack(fill_value=0))
100 loops, best of 3: 12.2 ms per loop
</code></pre>
| 3 | 2016-08-24T20:47:25Z | [
"python",
"pandas",
"dataframe",
"group-by",
"crosstab"
] |
Groupby value counts on the dataframe pandas | 39,132,742 | <p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame([
(1, 1, 'term1'),
(1, 2, 'term2'),
(1, 1, 'term1'),
(1, 1, 'term2'),
(2, 2, 'term3'),
(2, 3, 'term1'),
(2, 2, 'term1')
], columns=['id', 'group', 'term'])
</code></pre>
<p>I want to group it by <code>id</code> and <code>group</code> and calculate the number of each term for this id, group pair.</p>
<p>So in the end I am going to get something like this:</p>
<p><a href="http://i.stack.imgur.com/DIwPYm.png" rel="nofollow"><img src="http://i.stack.imgur.com/DIwPYm.png" alt="enter image description here"></a></p>
<p>I was able to achieve what I want by looping over all the rows with <code>df.iterrows()</code> and creating a new dataframe, but this is clearly inefficient. (If it helps, I know the list of all terms beforehand and there are ~10 of them).</p>
<p>It looks like I have to group by and then count values, so I tried that with <code>df.groupby(['id', 'group']).value_counts()</code> which does not work because <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.core.groupby.SeriesGroupBy.value_counts.html" rel="nofollow">value_counts</a> operates on the groupby series and not a dataframe.</p>
<p>Anyway I can achieve this without looping?</p>
| 5 | 2016-08-24T20:45:24Z | 39,132,838 | <p>using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow">pivot_table()</a> method:</p>
<pre><code>In [22]: df.pivot_table(index=['id','group'], columns='term', aggfunc='size', fill_value=0)
Out[22]:
term term1 term2 term3
id group
1 1 2 1 0
2 0 1 0
2 2 1 0 1
3 1 0 0
</code></pre>
<p>Timing against 700K rows DF:</p>
<pre><code>In [24]: df = pd.concat([df] * 10**5, ignore_index=True)
In [25]: df.shape
Out[25]: (700000, 3)
In [3]: %timeit df.groupby(['id', 'group', 'term'])['term'].size().unstack(fill_value=0)
1 loop, best of 3: 226 ms per loop
In [4]: %timeit df.pivot_table(index=['id','group'], columns='term', aggfunc='size', fill_value=0)
1 loop, best of 3: 236 ms per loop
In [5]: %timeit pd.crosstab([df.id, df.group], df.term)
1 loop, best of 3: 355 ms per loop
In [6]: %timeit df.groupby(['id','group','term'])['term'].size().unstack().fillna(0).astype(int)
1 loop, best of 3: 232 ms per loop
In [7]: %timeit df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)
1 loop, best of 3: 231 ms per loop
</code></pre>
<p>Timing against 7M rows DF:</p>
<pre><code>In [9]: df = pd.concat([df] * 10, ignore_index=True)
In [10]: df.shape
Out[10]: (7000000, 3)
In [11]: %timeit df.groupby(['id', 'group', 'term'])['term'].size().unstack(fill_value=0)
1 loop, best of 3: 2.27 s per loop
In [12]: %timeit df.pivot_table(index=['id','group'], columns='term', aggfunc='size', fill_value=0)
1 loop, best of 3: 2.3 s per loop
In [13]: %timeit pd.crosstab([df.id, df.group], df.term)
1 loop, best of 3: 3.37 s per loop
In [14]: %timeit df.groupby(['id','group','term'])['term'].size().unstack().fillna(0).astype(int)
1 loop, best of 3: 2.28 s per loop
In [15]: %timeit df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)
1 loop, best of 3: 1.89 s per loop
</code></pre>
| 4 | 2016-08-24T20:53:14Z | [
"python",
"pandas",
"dataframe",
"group-by",
"crosstab"
] |
Groupby value counts on the dataframe pandas | 39,132,742 | <p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame([
(1, 1, 'term1'),
(1, 2, 'term2'),
(1, 1, 'term1'),
(1, 1, 'term2'),
(2, 2, 'term3'),
(2, 3, 'term1'),
(2, 2, 'term1')
], columns=['id', 'group', 'term'])
</code></pre>
<p>I want to group it by <code>id</code> and <code>group</code> and calculate the number of each term for this id, group pair.</p>
<p>So in the end I am going to get something like this:</p>
<p><a href="http://i.stack.imgur.com/DIwPYm.png" rel="nofollow"><img src="http://i.stack.imgur.com/DIwPYm.png" alt="enter image description here"></a></p>
<p>I was able to achieve what I want by looping over all the rows with <code>df.iterrows()</code> and creating a new dataframe, but this is clearly inefficient. (If it helps, I know the list of all terms beforehand and there are ~10 of them).</p>
<p>It looks like I have to group by and then count values, so I tried that with <code>df.groupby(['id', 'group']).value_counts()</code> which does not work because <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.core.groupby.SeriesGroupBy.value_counts.html" rel="nofollow">value_counts</a> operates on the groupby series and not a dataframe.</p>
<p>Anyway I can achieve this without looping?</p>
| 5 | 2016-08-24T20:45:24Z | 39,132,900 | <p>I use <code>groupby</code> and <code>size</code></p>
<pre><code>df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)
</code></pre>
<p><a href="http://i.stack.imgur.com/uEOMQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/uEOMQ.png" alt="enter image description here"></a></p>
<hr>
<h1>Timing</h1>
<p><a href="http://i.stack.imgur.com/Qz8dM.png" rel="nofollow"><img src="http://i.stack.imgur.com/Qz8dM.png" alt="enter image description here"></a></p>
<p><strong><em>1,000,000 rows</em></strong></p>
<pre><code>df = pd.DataFrame(dict(id=np.random.choice(100, 1000000),
group=np.random.choice(20, 1000000),
term=np.random.choice(10, 1000000)))
</code></pre>
<p><a href="http://i.stack.imgur.com/FZ3GD.png" rel="nofollow"><img src="http://i.stack.imgur.com/FZ3GD.png" alt="enter image description here"></a></p>
| 4 | 2016-08-24T20:57:41Z | [
"python",
"pandas",
"dataframe",
"group-by",
"crosstab"
] |
Groupby value counts on the dataframe pandas | 39,132,742 | <p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame([
(1, 1, 'term1'),
(1, 2, 'term2'),
(1, 1, 'term1'),
(1, 1, 'term2'),
(2, 2, 'term3'),
(2, 3, 'term1'),
(2, 2, 'term1')
], columns=['id', 'group', 'term'])
</code></pre>
<p>I want to group it by <code>id</code> and <code>group</code> and calculate the number of each term for this id, group pair.</p>
<p>So in the end I am going to get something like this:</p>
<p><a href="http://i.stack.imgur.com/DIwPYm.png" rel="nofollow"><img src="http://i.stack.imgur.com/DIwPYm.png" alt="enter image description here"></a></p>
<p>I was able to achieve what I want by looping over all the rows with <code>df.iterrows()</code> and creating a new dataframe, but this is clearly inefficient. (If it helps, I know the list of all terms beforehand and there are ~10 of them).</p>
<p>It looks like I have to group by and then count values, so I tried that with <code>df.groupby(['id', 'group']).value_counts()</code> which does not work because <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.core.groupby.SeriesGroupBy.value_counts.html" rel="nofollow">value_counts</a> operates on the groupby series and not a dataframe.</p>
<p>Is there any way I can achieve this without looping?</p>
| 5 | 2016-08-24T20:45:24Z | 39,133,528 | <p>Instead of remembering lengthy solutions, how about the one that pandas has built in for you:</p>
<pre><code>df.groupby(['id', 'group', 'term']).count()
</code></pre>
| 1 | 2016-08-24T21:46:53Z | [
"python",
"pandas",
"dataframe",
"group-by",
"crosstab"
] |
Matching Unicode word boundaries in Python | 39,132,743 | <p>In order to match the Unicode word boundaries [as defined in the <a href="http://unicode.org/reports/tr29/#Word_Boundary_Rules">Annex #29</a>] in Python, I have been using the <code>regex</code> package with flags <code>regex.WORD | regex.V1</code> (<code>regex.UNICODE</code> should be default since the pattern is a Unicode string) in the following way:</p>
<pre><code>>>> s="here are some words"
>>> regex.findall(r'\w(?:\B\S)*', s, flags = regex.V1 | regex.WORD)
['here', 'are', 'some', 'words']
</code></pre>
<p>It works well in these rather simple cases. However, I was wondering what the expected behavior is in case the input string contains certain punctuation. It seems to me that <a href="http://unicode.org/reports/tr29/#WB7">WB7</a> says that, for example, the apostrophe in <code>x'z</code> does not qualify as a word boundary, which indeed seems to be the case:</p>
<pre><code>>>> regex.findall(r'\w(?:\B\S)*', "x'z", flags = regex.V1 | regex.WORD)
["x'z"]
</code></pre>
<p>However, if there is a vowel, the situation changes:</p>
<pre><code>>>> regex.findall(r'\w(?:\B\S)*', "l'avion", flags = regex.V1 | regex.WORD)
["l'", 'avion']
</code></pre>
<p>This would suggest that the regex module implements the rule <code>WB5a</code> mentioned in the standard in the <em>Notes</em> section. However, this rule also says that the behavior should be the same with <code>\u2019</code> (right single quotation mark) which I can't reproduce:</p>
<pre><code>>>> regex.findall(r'\w(?:\B\S)*', "l\u2019avion", flags = regex.V1 | regex.WORD)
['l’avion']
</code></pre>
<p>Moreover, even with a "normal" apostrophe, a ligature (or <code>y</code>) seems to behave as a "non-vowel":</p>
<pre><code>>>> regex.findall(r'\w(?:\B\S)*', "l'œil", flags = regex.V1 | regex.WORD)
["l'œil"]
>>> regex.findall(r'\w(?:\B\S)*', "J'y suis", flags = regex.V1 | regex.WORD)
["J'y", 'suis']
</code></pre>
<p>Is this the expected behavior? (all examples above were executed with regex 2.4.106 and Python 3.5.2)</p>
| 12 | 2016-08-24T20:45:30Z | 39,133,712 | <p>1- <em>RIGHT SINGLE QUOTATION MARK <code>’</code></em> seems to simply be missed in the <a href="https://bitbucket.org/mrabarnett/mrab-regex/src/f21447bf288780d8dd9b1633820480484ce8f677/regex_3/regex/_regex.c?at=default&fileviewer=file-view-default#_regex.c-1657" rel="nofollow">source file</a>:</p>
<pre><code>/* Break between apostrophe and vowels (French, Italian). */
/* WB5a */
if (pos_m1 >= 0 && char_at(state->text, pos_m1) == '\'' &&
    is_unicode_vowel(char_at(state->text, text_pos)))
    return TRUE;
</code></pre>
<p>2- Unicode vowels are determined with the <a href="https://bitbucket.org/mrabarnett/mrab-regex/src/f21447bf288780d8dd9b1633820480484ce8f677/regex_3/regex/_regex.c?at=default&fileviewer=file-view-default#_regex.c-1522" rel="nofollow"><code>is_unicode_vowel()</code></a> function, which translates to this list:</p>
<pre><code>a, à, á, â, e, è, é, ê, i, ì, í, î, o, ò, ó, ô, u, ù, ú, û
</code></pre>
<p>So a <a href="http://www.utf8-chartable.de/unicode-utf8-table.pl?start=256" rel="nofollow">LATIN SMALL LIGATURE OE</a> <code>œ</code> character is not considered a Unicode vowel:</p>
<pre><code>Py_LOCAL_INLINE(BOOL) is_unicode_vowel(Py_UCS4 ch) {
#if PY_VERSION_HEX >= 0x03030000
    switch (Py_UNICODE_TOLOWER(ch)) {
#else
    switch (Py_UNICODE_TOLOWER((Py_UNICODE)ch)) {
#endif
    case 'a': case 0xE0: case 0xE1: case 0xE2:
    case 'e': case 0xE8: case 0xE9: case 0xEA:
    case 'i': case 0xEC: case 0xED: case 0xEE:
    case 'o': case 0xF2: case 0xF3: case 0xF4:
    case 'u': case 0xF9: case 0xFA: case 0xFB:
        return TRUE;
    default:
        return FALSE;
    }
}
</code></pre>
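<p>For readers who don't speak C, the check above boils down to membership in a fixed twenty-character set, which can be mirrored in Python like this (an illustrative translation, not part of the <code>regex</code> module itself):</p>

```python
# The twenty characters the regex module (at that version) accepted as
# vowels for rule WB5a: a/e/i/o/u plus their grave-, acute- and
# circumflex-accented forms (the 0xE0-0xFB code points in the C switch).
UNICODE_VOWELS = set('aàáâeèéêiìíîoòóôuùúû')

def is_unicode_vowel(ch):
    return ch.lower() in UNICODE_VOWELS

print(is_unicode_vowel('é'))   # True
print(is_unicode_vowel('œ'))   # False -- so "l'œil" is not split
print(is_unicode_vowel('y'))   # False -- so "J'y" is not split
```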
<p><strong>This bug is now fixed in regex 2016.08.27 after a <a href="https://bitbucket.org/mrabarnett/mrab-regex/issues/219/unicode-word-boundries" rel="nofollow">bug report</a>.</strong> [<a href="https://bitbucket.org/mrabarnett/mrab-regex/src/63d0aacd9395318daeb3ff8bf8696a7651528415/regex_3/regex/_regex.c?at=default&fileviewer=file-view-default#_regex.c-1668" rel="nofollow">_regex.c:#1668</a>]</p>
| 6 | 2016-08-24T22:02:52Z | [
"python",
"regex",
"python-3.x",
"unicode"
] |
need to improve accuracy in fsolve to find multiple roots | 39,132,786 | <p>I'm using this code to find the zeros of a nonlinear function. The function should have either 1 or 3 zeros:</p>
<pre><code>import numpy as np
import matplotlib.pylab as plt
from scipy.optimize import fsolve
[a, b, c] = [5, 10, 0]
def func(x):
    return -(x+a) + b / (1 + np.exp(-(x + c)))
x = np.linspace(-10, 10, 1000)
print(fsolve(func, [-10, 0, 10]))
plt.plot(x, func(x))
plt.show()
</code></pre>
<p>In this case the code gives the 3 expected roots without any problem.
But with c = -1.5 the code misses a root, and with c = -3 it finds a non-existent root.</p>
<p>I want to calculate the roots for many different parameter combinations, so changing the seeds manually is not a practical solution.</p>
<p>I appreciate any solution, trick or advice.</p>
| 4 | 2016-08-24T20:48:45Z | 39,133,446 | <p>What you need is an automatic way to obtain good initial estimates of the roots of the function. This is in general a difficult task, however, for univariate, continuous functions, it is rather simple. The idea is to note that (a) this class of functions can be approximated to an arbitrary precision by a polynomial of appropriately large order, and (b) there are efficient algorithms for finding (all) the roots of a polynomial. Fortunately, Numpy provides functions for both performing polynomial approximation and finding polynomial roots.</p>
<p>Let's consider a specific function</p>
<pre><code>[a, b, c] = [5, 10, -1.5]
def func(x):
    return -(x+a) + b / (1 + np.exp(-(x + c)))
</code></pre>
<p>The following code uses <code>polyfit</code> and <code>poly1d</code> to approximate <code>func</code> over the range of interest (<code>-10<x<10</code>) by a polynomial function <code>f_poly</code> of order <code>10</code>.</p>
<pre><code>x_range = np.linspace(-10,10,100)
y_range = func(x_range)
pfit = np.polyfit(x_range,y_range,10)
f_poly = np.poly1d(pfit)
</code></pre>
<p>As the following plot shows, <code>f_poly</code> is indeed a good approximation of <code>func</code>. Even greater accuracy can be obtained by increasing the order. However, there is no point in pursuing extreme accuracy in the polynomial approximation, since we are looking for approximate estimates of the roots that will be later refined by <code>fsolve</code></p>
<p><a href="http://i.stack.imgur.com/3OZpW.png" rel="nofollow"><img src="http://i.stack.imgur.com/3OZpW.png" alt="enter image description here"></a></p>
<p>The roots of the polynomial approximation can be simply obtained as </p>
<pre><code>roots = np.roots(pfit)
roots
</code></pre>
<blockquote>
<p>array([-10.4551+1.4893j, -10.4551-1.4893j, 11.0027+0.j ,
8.6679+2.482j , 8.6679-2.482j , -5.7568+3.2928j,
-5.7568-3.2928j, -4.9269+0.j , 4.7486+0.j , 2.9158+0.j ])</p>
</blockquote>
<p>As expected, Numpy returns 10 complex roots. However, we are only interested in real roots within the interval <code>[-10,10]</code>. These can be extracted as follows:</p>
<pre><code>x0 = roots[np.where(np.logical_and(np.logical_and(roots.imag==0, roots.real>-10), roots.real<10))].real
x0
</code></pre>
<blockquote>
<p>array([-4.9269, 4.7486, 2.9158])</p>
</blockquote>
<p>Array <code>x0</code> can serve as the initialization for <code>fsolve</code>:</p>
<pre><code>fsolve(func, x0)
</code></pre>
<blockquote>
<p>array([-4.9848, 4.5462, 2.7192])</p>
</blockquote>
<p>Remark: The <a href="https://github.com/pychebfun/pychebfun" rel="nofollow">pychebfun</a> package provides a function that directly gives all the roots of a function within an interval. It is also based on the idea of performing polynomial approximation, however, it uses a more sophisticated (yet, more efficient) approach. It automatically chooses the best polynomial order of the approximation (no user input), with the polynomial roots being practically equal to the true ones (no need to refine them via <code>fsolve</code>).</p>
<p>This simple code gives the same roots as those by <code>fsolve</code></p>
<pre><code>import pychebfun
f_cheb = pychebfun.Chebfun.from_function(func, domain = (-10,10))
f_cheb.roots()
</code></pre>
| 2 | 2016-08-24T21:40:04Z | [
"python",
"scipy",
"mathematical-optimization"
] |
need to improve accuracy in fsolve to find multiple roots | 39,132,786 | <p>I'm using this code to find the zeros of a nonlinear function. The function should have either 1 or 3 zeros:</p>
<pre><code>import numpy as np
import matplotlib.pylab as plt
from scipy.optimize import fsolve
[a, b, c] = [5, 10, 0]
def func(x):
    return -(x+a) + b / (1 + np.exp(-(x + c)))
x = np.linspace(-10, 10, 1000)
print(fsolve(func, [-10, 0, 10]))
plt.plot(x, func(x))
plt.show()
</code></pre>
<p>In this case the code gives the 3 expected roots without any problem.
But with c = -1.5 the code misses a root, and with c = -3 it finds a non-existent root.</p>
<p>I want to calculate the roots for many different parameter combinations, so changing the seeds manually is not a practical solution.</p>
<p>I appreciate any solution, trick or advice.</p>
| 4 | 2016-08-24T20:48:45Z | 39,133,895 | <p>Between two stationary points (i.e., <code>df/dx=0</code>), you have one or zero roots. In your case it is possible to calculate the two stationary points analytically:</p>
<pre><code>[-c + log(1/(b - sqrt(b*(b - 4)) - 2)) + log(2),
-c + log(1/(b + sqrt(b*(b - 4)) - 2)) + log(2)]
</code></pre>
<p>So you have three intervals where you need to find a zero. Using Sympy saves you from doing the calculations by hand. Its <code>sy.nsolve()</code> allows you to robustly find a zero in an interval:</p>
<pre><code>import sympy as sy
a, b, c, x = sy.symbols("a, b, c, x", real=True)
# The function:
f = -(x+a) + b / (1 + sy.exp(-(x + c)))
df = f.diff(x) # calculate f' = df/dx
xxs = sy.solve(df, x) # Solving for f' = 0 gives two solutions
# numerical values:
pp = {a: 5, b: 10, c: .5} # values for a, b, c
fpp = f.subs(pp)
xxs_pp = [xpr.subs(pp).evalf() for xpr in xxs] # numerical stationary points
xxs_pp.sort() # in ascending order
# resulting intervals:
xx_low = [-1e9, xxs_pp[0], xxs_pp[1]]
xx_hig = [xxs_pp[0], xxs_pp[1], 1e9]
# calculate roots for each interval:
xx0 = []
for xl_, xh_ in zip(xx_low, xx_hig):
    try:
        x0 = sy.nsolve(fpp, (xl_, xh_), solver="bisect") # calculate zero
    except ValueError: # no solution found
        continue
    xx0.append(x0)
print("The zeros are:")
print(xx0)
sy.plot(fpp) # plot function
</code></pre>
| 2 | 2016-08-24T22:22:16Z | [
"python",
"scipy",
"mathematical-optimization"
] |
Adding numbers in a list except a specific number and the following number | 39,132,819 | <p>I am new to programming and I am trying to add up all the numbers in a given list except for the number 13 and any number following the number 13. The problem that I am having is that if 13 is at the end of the list it won't add the first number. Any help would be appreciated. The code I have is as follows:</p>
<pre><code>def sum13(nums):
    total = 0
    for i in range(len(nums)):
        if nums[i] == 13 or nums[i-1] == 13:
            total += 0
        else:
            total += nums[i]
    return total

def main():
    print sum13([1, 2, 2, 1, 13])
    print sum13([1, 2, 13, 2, 1, 13])

main()
</code></pre>
<p>The two examples should result in 6 and 4 however it results in 5 and 3 because it isn't adding the 1 at the beginning.</p>
| 3 | 2016-08-24T20:51:35Z | 39,132,865 | <p>In Python, an index of <code>-1</code> means the last item in a list. So, in your code, when <code>i</code> is <code>0</code> (the first number), it will not count it because the last item in the list is <code>13</code>.</p>
<p>You can fix this with a simple check that <code>i > 1</code> on that condition:</p>
<pre><code>if nums[i] == 13 or (i > 0 and nums[i - 1] == 13):
</code></pre>
<p>Also for what it's worth, because we all love one-liners, here's an equivalent function in a line:</p>
<pre><code>return sum(num for i, num in enumerate(nums) if num != 13 and (i == 0 or nums[i - 1] != 13))
</code></pre>
| 2 | 2016-08-24T20:55:14Z | [
"python"
] |
Adding numbers in a list except a specific number and the following number | 39,132,819 | <p>I am new to programming and I am trying to add up all the numbers in a given list except for the number 13 and any number following the number 13. The problem that I am having is that if 13 is at the end of the list it won't add the first number. Any help would be appreciated. The code I have is as follows:</p>
<pre><code>def sum13(nums):
    total = 0
    for i in range(len(nums)):
        if nums[i] == 13 or nums[i-1] == 13:
            total += 0
        else:
            total += nums[i]
    return total

def main():
    print sum13([1, 2, 2, 1, 13])
    print sum13([1, 2, 13, 2, 1, 13])

main()
</code></pre>
<p>The two examples should result in 6 and 4 however it results in 5 and 3 because it isn't adding the 1 at the beginning.</p>
| 3 | 2016-08-24T20:51:35Z | 39,133,398 | <p>My proposition:</p>
<pre><code>def sum13(numbers):
    total = 0
    skip = False
    for i, number in enumerate(numbers):
        if number == 13:
            skip = True
            continue
        if skip:
            skip = False
            continue
        total += number
    return total

def test():
    cases = [
        ([13, 1, 1], 1),
        ([13, 1, 5, 13], 5),
        ([1, 13, 5, 1], 2),
        ([1, 2, 2, 1, 13], 6),
        ([1, 2, 13, 2, 1, 13], 4),
    ]
    for case in cases:
        assert sum13(case[0]) == case[1]

test()
</code></pre>
<p>Read about <code>enumerate</code> if it's new to you:
<a href="https://docs.python.org/3.4/library/functions.html#enumerate" rel="nofollow">https://docs.python.org/3.4/library/functions.html#enumerate</a></p>
| 0 | 2016-08-24T21:36:14Z | [
"python"
] |
Why am I getting a break outside loop error for this python program but not for a similar program? | 39,132,847 | <p>I am new to coding and Stack Exchange. Please forgive any errors in how I format questions (corrections are welcome). My question is this: I am doing exercise 7-4 in "Python Crash Course". I have two programs that are very similar in terms of formatting and output. city_visits was the example given by the author and did not lead to a "break outside loop" error. However, Pizza_toppings leads to a "break outside loop" error. Can anyone please explain what difference leads to the error in one but not the other? Thanks for the help!</p>
<p>Pizza_toppings.py</p>
<pre><code>prompt = "\nWelcome to Pizza by the sea!"
prompt += "\nYou can add as many toppings as you like! Just tell us!"
prompt += "\nWhen you are finished type 'quit'. Tell us what you want: "
while True:
    topping = raw_input(prompt)

if topping == "quit":
    break
else:
    print "Adding " + topping + "."
</code></pre>
<p>city_visits</p>
<pre><code>prompt = "\nPlease enter the name of a city you have visited:"
prompt += "\n(Enter 'quit' when you are finished.)"
while True:
    city = raw_input(prompt)
    if city == 'quit':
        break
    else:
        print "I'd love to go to " + city.title() + "!"
</code></pre>
| 1 | 2016-08-24T20:53:58Z | 39,133,148 | <p>Indentation is strict in Python. This:</p>
<pre><code>if topping == "quit":
    break
</code></pre>
<p>should be this:</p>
<pre><code>    if topping == "quit":
        break
</code></pre>
<p>Very subtle, but very important in Python.</p>
| 0 | 2016-08-24T21:15:00Z | [
"python",
"break"
] |
Why am I getting a break outside loop error for this python program but not for a similar program? | 39,132,847 | <p>I am new to coding and Stack Exchange. Please forgive any errors in how I format questions (corrections are welcome). My question is this: I am doing exercise 7-4 in "Python Crash Course". I have two programs that are very similar in terms of formatting and output. city_visits was the example given by the author and did not lead to a "break outside loop" error. However, Pizza_toppings leads to a "break outside loop" error. Can anyone please explain what difference leads to the error in one but not the other? Thanks for the help!</p>
<p>Pizza_toppings.py</p>
<pre><code>prompt = "\nWelcome to Pizza by the sea!"
prompt += "\nYou can add as many toppings as you like! Just tell us!"
prompt += "\nWhen you are finished type 'quit'. Tell us what you want: "
while True:
    topping = raw_input(prompt)

if topping == "quit":
    break
else:
    print "Adding " + topping + "."
</code></pre>
<p>city_visits</p>
<pre><code>prompt = "\nPlease enter the name of a city you have visited:"
prompt += "\n(Enter 'quit' when you are finished.)"
while True:
    city = raw_input(prompt)
    if city == 'quit':
        break
    else:
        print "I'd love to go to " + city.title() + "!"
</code></pre>
| 1 | 2016-08-24T20:53:58Z | 39,133,315 | <p>In Python, the scope of loops and <code>if/else</code> blocks is determined solely by indentation, so if your indentation is messed up -- as it seems to be the case in your code -- it will produce strange results, or not compile at all if certain elements like <code>break</code> are used in an unexpected context.</p>
<p>Your code, in both examples, should be indented like this:</p>
<pre><code>while True:
    city = raw_input(prompt)
    if city == 'quit':
        break
    else:
        print "I'd love to go to " + city.title() + "!"
</code></pre>
<p>Note how <code>city</code>, <code>if</code>, and <code>else</code> are all (a) further indented than <code>while</code>, and (b) indented exactly the same amount. <em>How</em> you indent is not too important, but good practice dictates <a href="http://stackoverflow.com/q/119562/1639625">4 spaces per level of indentation (although some prefer 1 tab)</a>. Also, never mix tabs and spaces, or your indentation may <em>look</em> correct in your editor, but in reality be totally messed up.</p>
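<p>A compact way to see both behaviours (Python 3 syntax shown; the same applies to the Python 2 code above):</p>

```python
# The same statements mean different things at different indentation.
# Here the `break` sits inside the loop, so iteration stops at 3:
seen = []
for n in [1, 2, 3, 4]:
    seen.append(n)
    if n == 3:
        break
print(seen)  # [1, 2, 3]

# Dedent the `if`/`break` pair out of the loop and Python rejects the
# code at compile time, before anything runs:
source = "for n in [1, 2]:\n    pass\nif True:\n    break\n"
try:
    compile(source, "<example>", "exec")
except SyntaxError as err:
    print("SyntaxError:", err.msg)
```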
| 1 | 2016-08-24T21:29:04Z | [
"python",
"break"
] |
Python Image Library: Image is turned by 90 degrees? | 39,132,878 | <p>I have an image on my computer which has the dimensions width=1932 and height=2576. It was taken with a smartphone and uses the JPEG format.</p>
<p>If i open the image with any tool i like it is shown correctly.</p>
<p>I tried to open it with python:</p>
<pre><code>from PIL import Image
img_in = Image.open(input_image_path)
</code></pre>
<p>Unfortunately, in Python it always has the dimensions width=2576 and height=1932. If this always happened, I could fix it, but it seems to happen only for some of my images. The images are always rotated clockwise.</p>
<p>Am I using the open function wrong, or how could I fix this?</p>
<p>Thank you very much.</p>
<p>Regards</p>
<p>Kevin</p>
<p>-- Edit --</p>
<p>Solution:</p>
<p>1) Please read the accepted answer</p>
<p>2) The following code may be used to avoid this problem: <a href="http://stackoverflow.com/a/11543365/916672">http://stackoverflow.com/a/11543365/916672</a></p>
| 2 | 2016-08-24T20:55:59Z | 39,132,990 | <p>When you do a portrait picture with your smartphone and a landscape picture, both will be done with the same coordinate system, that is, relative to the orientation of the imaging sensor in your phone.</p>
<p>However, the phone's gyroscope detects the rotation of the device, so it "knows" whether it is a portrait or landscape picture. This information is saved in the JPEG's metadata.</p>
<p>Software to display images typically evaluates this information to rotate the picture accordingly. In image editing software, you should typically be asked if you want to apply the rotation to the picture (e.g. The Gimp does this).</p>
<p>However, image processing libraries typically ignore this information completely. They only access the image pixels, and they access them in the order they are stored. As you can see, there is nothing wrong with the Python Imaging Library itself; however, it would be interesting to know whether there is an interface to read this information as well, so you can deal with it accordingly.</p>
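<p>The metadata is in fact enough to do this yourself: the orientation is stored in EXIF tag number 274. The sketch below (a hypothetical helper, not part of PIL) maps the common tag values to the counter-clockwise rotation that <code>Image.rotate</code> would need to display the pixels upright; with Pillow you could feed it the dict returned by <code>img._getexif()</code> (availability of that method varies by version):</p>

```python
# EXIF tag 274 ("Orientation") records how the camera was held.
# This maps the non-mirrored tag values to the counter-clockwise
# rotation (in degrees) needed to show the stored pixels upright;
# the mirrored variants (2, 4, 5 and 7) are left out for brevity.
ROTATION_FOR_ORIENTATION = {1: 0, 3: 180, 6: 270, 8: 90}

def rotation_needed(exif):
    """exif: a tag-number -> value dict, e.g. from img._getexif()."""
    if not exif:
        return 0
    return ROTATION_FOR_ORIENTATION.get(exif.get(274, 1), 0)

# A portrait photo from a phone typically carries orientation 6:
print(rotation_needed({274: 6}))  # 270
print(rotation_needed({}))        # 0 (no EXIF data -> leave as is)
```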
| 1 | 2016-08-24T21:03:40Z | [
"python",
"image",
"python-imaging-library"
] |
k-means / x-means (or other?) clustering in pandas/python | 39,132,880 | <p>I have a dataframe that can be reconstructed from the dict below. </p>
<p>The dataframe represents <code>23 statistics (X1-X23)</code> for various cities around the world. Each city occupies a single row in the dataframe with the 23 statistics as separate columns. </p>
<p>My actual df has <code>~6 million</code> cities so its a large dataframe.</p>
<p>What I want to do is:</p>
<p><strong>Step#1</strong>: Identify clusters of cities based on the <code>23 statistics (X1-X23)</code>. </p>
<p><strong>Step#2</strong>: Given the identified clusters in <strong>Step#1</strong>, I want to construct a portfolio of cities such that:</p>
<p><strong>a)</strong> number of cities selected from any given cluster is limited (limit may be different for each cluster)</p>
<p><strong>b)</strong> avoid certain clusters altogether</p>
<p><strong>c)</strong> apply additional criteria to the portfolio selection such that the correlation of poor weather between cities in the portfolio is minimized and correlation of good weather between cities is maximized. </p>
<p>My problem set is such that the <code>K for a K-means algo</code> would be quite large but I'm not sure what that value is. </p>
<p>I've been reading the following on clustering:</p>
<p><a href="http://stackoverflow.com/questions/15376075/cluster-analysis-in-r-determine-the-optimal-number-of-clusters/15376462#15376462">Cluster analysis in R: determine the optimal number of clusters</a></p>
<p><a href="http://stackoverflow.com/questions/1793532/how-do-i-determine-k-when-using-k-means-clustering">How do I determine k when using k-means clustering?</a></p>
<p><a href="http://www.aladdin.cs.cmu.edu/papers/pdfs/y2000/xmeans.pdf" rel="nofollow">X-means: Extending K-means...</a></p>
<p>However, a lot of the literature is foreign to me and will take me months to understand. I'm not a data scientist and don't have the time to take a course on machine learning.</p>
<p>At this point I have the dataframe and am now twiddling my thumbs. </p>
<p>I'd be grateful if you can help me move forward in actually implementing <code>Steps#1 to Steps#2</code> in pandas with an example dataset. </p>
<p>The dict below can be reconstructed to a dataframe by <code>pd.DataFrame(x)</code> where x is the dict below:</p>
<p><strong>Output of df.head().to_dict('rec'):</strong></p>
<pre><code>[{'X1': 123.40000000000001,
'X2': -67.900000000000006,
'X3': 172.0,
'X4': -2507.1999999999998,
'X5': 80.0,
'X6': 1692.0999999999999,
'X7': 13.5,
'X8': 136.30000000000001,
'X9': -187.09999999999999,
'X10': 50.0,
'X11': -822.0,
'X12': 13.0,
'X13': 260.80000000000001,
'X14': 14084.0,
'X15': -944.89999999999998,
'X16': 224.59999999999999,
'X17': -23.100000000000001,
'X18': -16.199999999999999,
'X19': 1825.9000000000001,
'X20': 710.70000000000005,
'X21': -16.199999999999999,
'X22': 1825.9000000000001,
'X23': 66.0,
'city': 'SFO'},
{'X1': -359.69999999999999,
'X2': -84.299999999999997,
'X3': 86.0,
'X4': -1894.4000000000001,
'X5': 166.0,
'X6': 882.39999999999998,
'X7': -19.0,
'X8': -133.30000000000001,
'X9': -84.799999999999997,
'X10': 27.0,
'X11': -587.29999999999995,
'X12': 36.0,
'X13': 332.89999999999998,
'X14': 825.20000000000005,
'X15': -3182.5,
'X16': -210.80000000000001,
'X17': 87.400000000000006,
'X18': -443.69999999999999,
'X19': -3182.5,
'X20': 51.899999999999999,
'X21': -443.69999999999999,
'X22': -722.89999999999998,
'X23': -3182.5,
'city': 'YYZ'},
{'X1': -24.800000000000001,
'X2': -34.299999999999997,
'X3': 166.0,
'X4': -2352.6999999999998,
'X5': 87.0,
'X6': 1941.3,
'X7': 56.600000000000001,
'X8': 120.2,
'X9': -65.400000000000006,
'X10': 44.0,
'X11': -610.89999999999998,
'X12': 19.0,
'X13': 414.80000000000001,
'X14': 4891.1999999999998,
'X15': -2396.0999999999999,
'X16': 181.59999999999999,
'X17': 177.0,
'X18': -92.900000000000006,
'X19': -2396.0999999999999,
'X20': 805.60000000000002,
'X21': -92.900000000000006,
'X22': -379.69999999999999,
'X23': -2396.0999999999999,
'city': 'DFW'},
{'X1': -21.300000000000001,
'X2': -47.399999999999999,
'X3': 166.0,
'X4': -2405.5999999999999,
'X5': 85.0,
'X6': 1836.8,
'X7': 55.700000000000003,
'X8': 130.80000000000001,
'X9': -131.09999999999999,
'X10': 47.0,
'X11': -690.60000000000002,
'X12': 16.0,
'X13': 297.30000000000001,
'X14': 5163.3999999999996,
'X15': -2446.4000000000001,
'X16': 182.30000000000001,
'X17': 83.599999999999994,
'X18': -36.0,
'X19': -2446.4000000000001,
'X20': 771.29999999999995,
'X21': -36.0,
'X22': -378.30000000000001,
'X23': -2446.4000000000001,
'city': 'PDX'},
{'X1': -22.399999999999999,
'X2': -9.0,
'X3': 167.0,
'X4': -2405.5999999999999,
'X5': 86.0,
'X6': 2297.9000000000001,
'X7': 41.0,
'X8': 109.7,
'X9': 64.900000000000006,
'X10': 42.0,
'X11': -558.29999999999995,
'X12': 21.0,
'X13': 753.10000000000002,
'X14': 5979.6999999999998,
'X15': -2370.1999999999998,
'X16': 187.40000000000001,
'X17': 373.10000000000002,
'X18': -224.30000000000001,
'X19': -2370.1999999999998,
'X20': 759.5,
'X21': -224.30000000000001,
'X22': -384.39999999999998,
'X23': -2370.1999999999998,
'city': 'EWR'}]
</code></pre>
| 1 | 2016-08-24T20:56:17Z | 39,133,382 | <p>I don't know what you mean by "for further processing" but here is a super simple explanation to get you started.</p>
<p>1) get the data into a dataframe (pandas) with the variables (x1-x23) across the top (column headers) and each row representing a different city (so that your df.head() shows x1-x23 for column headers). </p>
<p>2) standardize the variables</p>
<p>3) decide whether to use PCA before using Kmeans</p>
<p>4) use k-means: <a href="http://scikit-learn.org/stable/modules/clustering.html#k-means" rel="nofollow">scikit-learn makes this part easy</a>; see also <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html" rel="nofollow">the <code>KMeans</code> API</a></p>
<p>5) try this <a href="http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html" rel="nofollow">silhouette analysis</a> for choosing the number of clusters to get a start </p>
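<p>A toy sketch of steps 2) and 4) in plain NumPy (synthetic data stands in for the real city statistics; in practice use <code>sklearn.cluster.KMeans</code> rather than this hand-rolled Lloyd iteration):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the city data: 200 "cities" x 23 statistics,
# drawn from two shifted blobs so that two clusters really exist.
X = np.vstack([rng.normal(0.0, 1.0, (100, 23)),
               rng.normal(5.0, 1.0, (100, 23))])

# Step 2: standardize every column (zero mean, unit variance).
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Step 4: a few iterations of Lloyd's algorithm, i.e. what k-means
# does under the hood.  Seed one center from each end of the data.
k = 2
centers = X[[0, -1]]
for _ in range(10):
    # assign each row to its nearest center ...
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    # ... and move each center to the mean of its assigned rows
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

print(np.bincount(labels))  # two clusters of 100 cities each
```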
<p>references that are good:<br>
<a href="http://statweb.stanford.edu/~tibs/ElemStatLearn/" rel="nofollow">Hastie and Tibshirani book</a></p>
<p><a href="https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/about" rel="nofollow">Hastie and Tibshirani free course, but use R</a></p>
<p>Udacity, Coursera, EDX courses on machine learning</p>
<p>EDIT: forgot to mention, don't use your whole dataset while you are testing out the processes. Use a much smaller portion of the data (e.g. 100K cities) so that the processing time is much less until you get everything right.</p>
| 1 | 2016-08-24T21:34:35Z | [
"python",
"pandas",
"k-means"
] |
changing shapes (pro) | 39,132,905 | <p>Hi, I am new to programming and I am trying to write code that will gather a size from the input and draw the triangle accordingly.</p>
<p>This is my code so far</p>
<pre><code>steps = int(input("Size: "))
print('/\\')
for i in range(steps - 1):
    print(" "*i+" \\")
print(steps * "__" )
</code></pre>
<p>Suppose the input was three; then my program's output looks like this.</p>
<p><a href="http://i.stack.imgur.com/SCa6f.png" rel="nofollow"><img src="http://i.stack.imgur.com/SCa6f.png" alt="![enter image description here"></a></p>
<p>whereas I want the output to look like this.</p>
<p><a href="http://i.stack.imgur.com/LBRbo.png" rel="nofollow"><img src="http://i.stack.imgur.com/LBRbo.png" alt="![enter image description here"></a></p>
| 1 | 2016-08-24T20:57:59Z | 39,133,144 | <p>Here's something I think will work. One key thing is that you aren't drawing the left side for rows after the first, nor accounting for the additional left space needed to align your triangle.</p>
<pre><code>steps = int(input('Size: '))
for i in range(steps):
    left_space = steps - i - 1
    inner_space = i
    print('{}/{}\\'.format(' ' * left_space, ' ' * inner_space * 2))
print(steps * '__')
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>Size: 2
 /\
/  \
____

Size: 3
  /\
 /  \
/    \
______

Size: 4
   /\
  /  \
 /    \
/      \
________
</code></pre>
| 1 | 2016-08-24T21:14:52Z | [
"python"
] |
changing shapes (pro) | 39,132,905 | <p>Hi, I am new to programming and I am trying to write code that will gather a size from the input and draw the triangle accordingly.</p>
<p>This is my code so far</p>
<pre><code>steps = int(input("Size: "))
print('/\\')
for i in range(steps - 1):
    print(" "*i+" \\")
print(steps * "__" )
</code></pre>
<p>Suppose the input was three; then my program's output looks like this.</p>
<p><a href="http://i.stack.imgur.com/SCa6f.png" rel="nofollow"><img src="http://i.stack.imgur.com/SCa6f.png" alt="![enter image description here"></a></p>
<p>whereas I want the output to look like this.</p>
<p><a href="http://i.stack.imgur.com/LBRbo.png" rel="nofollow"><img src="http://i.stack.imgur.com/LBRbo.png" alt="![enter image description here"></a></p>
| 1 | 2016-08-24T20:57:59Z | 39,133,205 | <p>Here is my code:</p>
<pre><code>steps=input("Size: ")
for i in range(steps):
    j=steps-i-1
    print ' '*j+"/"+' '*i+' '*i+'\\'
print '-'*(steps*2+1)
</code></pre>
<p>Which is the same thing as below:</p>
<pre><code>steps=input("Size: ")
for i in range(steps):
    j=steps-i-1
    print ' '*j+"/"+' '*(i*2)+'\\'
print '-'*(steps*2+1)
</code></pre>
| 2 | 2016-08-24T21:20:25Z | [
"python"
] |
How to implement a queue system that is shared between separate python processes? | 39,132,981 | <p>We have a Python script that is kicked off after a user enters some data in a web form. The entire script takes 10-20 minutes to complete. A user can technically kick off 10 of these within 10 seconds if they so choose. If this happens, we'll have 10 of the same Python scripts running at once, causing each other to fail due to the various things the script is processing.</p>
<p>What is the go-to way to code an overarching queueing system so that these scripts know of each other and will wait in line to execute? We are people who usually write one-off scripts but need to have this constant queueing system in place for the web form... sadly, we aren't developers and don't have that background.</p>
<p>We're also open to different ways of architecting the solution in general. We went into this hacking it together. We've never built a process/service broker/worker setup, but would that make sense here?</p>
<p>How do people normally do this stuff?</p>
| 1 | 2016-08-24T21:03:04Z | 39,133,126 | <p>Welcome to the wild world of distributing your computation!</p>
<p>Implementing your own queuing system (even in Python) can lead to a world of hurt. A very popular, enterprise-grade and open source message queuing application is RabbitMQ. They have a <a href="http://www.rabbitmq.com/tutorials/tutorial-one-python.html" rel="nofollow">great starter tutorial</a> that talks about ways you can begin configuring it and examples of its uses.</p>
<p>Additionally, there is a Python task queue library called Celery that can use RabbitMQ under the hood. It is a bit smaller in focus and capability but offers ease of use and a faster start-up time as a trade-off. One thing it does not trade off is RabbitMQ's <em>consistency</em>, which, as you delve deeper into queuing and distributed systems, you will learn is extremely important. Its getting-started docs can be found <a href="http://docs.celeryproject.org/en/latest/getting-started/index.html" rel="nofollow">here</a></p>
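<p>If all you need in the short term is to stop overlapping runs inside one long-lived process, the standard library alone can serialize them; a minimal sketch (<code>run_script</code> here is a stand-in for your real 10-20 minute script, not a real API, and a proper broker like RabbitMQ is still the robust answer across machines and restarts):</p>

```python
import queue
import threading

jobs = queue.Queue()
results = []

def run_script(form_data):
    # stand-in for the real 10-20 minute script
    results.append(form_data)

def worker():
    # A single worker drains the queue one job at a time, so even ten
    # submissions within ten seconds are executed strictly one by one.
    while True:
        form_data = jobs.get()
        if form_data is None:      # sentinel: shut the worker down
            break
        try:
            run_script(form_data)
        finally:
            jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

for i in range(10):                # ten rapid-fire form submissions
    jobs.put({'submission': i})
jobs.join()                        # block until every queued job is done
print(len(results))  # 10
```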
<p><strong>Links</strong>:</p>
<ul>
<li><a href="http://celeryproject.org" rel="nofollow">http://celeryproject.org</a></li>
<li><a href="http://www.rabbitmq.com" rel="nofollow">http://www.rabbitmq.com</a></li>
</ul>
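<p>To make the queueing idea concrete before installing anything, here is a minimal single-worker sketch using only the standard library (the names are illustrative, and this is only a sketch — it has none of the durability guarantees RabbitMQ/Celery give you):</p>

```python
import multiprocessing

def worker(q):
    # A single worker drains the queue one job at a time, so
    # overlapping submissions run sequentially instead of clashing.
    while True:
        job = q.get()
        if job is None:        # sentinel: shut the worker down
            break
        print('processing %s' % job)

def run_demo(n_jobs=3):
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    for i in range(n_jobs):    # e.g. three rapid form submissions
        q.put('job-%d' % i)
    q.put(None)
    p.join()
    return p.exitcode

if __name__ == '__main__':
    run_demo()
```

<p>Because there is exactly one worker process, jobs submitted seconds apart still run one at a time instead of stepping on each other.</p>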
| 2 | 2016-08-24T21:13:29Z | [
"python",
"web-services",
"queue"
] |
How to keep a sorted list while entering data into it in Python | 39,133,033 | <p>I have a list of lists which needs to be sorted based on the length of the lists. What I am doing now is first inserting the lists into the main list and then sorting the main list with <code>key=len</code>. These steps take a total time of <code>n + nlg(n)</code>. Is it possible to maintain a sorted list while entering the data into the main list? Can it be done using bisect (or is there any better way), and if so, will it perform better than <code>n + nlg(n)</code>?</p>
| 1 | 2016-08-24T21:06:48Z | 39,133,388 | <p>It depends on the data structure you are using:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Dynamic_array" rel="nofollow">Dynamic Array</a>
<ul>
<li>Finding the right index on a sorted array is <code>O(log n)</code> using a bisect</li>
<li>Inserting is <code>O(n)</code> as you have to shift everything</li>
<li>Total <code>O(n)</code></li>
</ul></li>
<li><a href="https://en.wikipedia.org/wiki/Linked_list" rel="nofollow">Linked List</a>
<ul>
<li>Finding the right index on a sorted linked list requires browsing the list until you get there. (<code>O(n)</code>)</li>
<li>Inserting is a simple operation, takes only <code>O(1)</code>.</li>
<li>Total <code>O(n)</code></li>
</ul></li>
<li><a href="https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree" rel="nofollow">Self-balancing BST</a>
<ul>
<li>Inserting while maintaining the order and the balance is <code>O(log n)</code> <em>amortized</em></li>
<li>There are links to implementations in <a href="https://stackoverflow.com/questions/2298165/pythons-standard-library-is-there-a-module-for-balanced-binary-tree">this question</a></li>
</ul></li>
<li><a href="https://en.wikipedia.org/wiki/Heap_(data_structure)" rel="nofollow">Heap</a>. Not exactly what you ask, but inserting into a heap is <code>O(log n)</code> or <code>Theta(1)</code> depending on the implementation you use. <a href="https://docs.python.org/3/library/heapq.html" rel="nofollow">heapq</a> in python is one implementation. You simply push items onto your heap, and when you are done, you can pop out the fully sorted result in <code>O(n log n)</code>. Meanwhile you can access the root of the tree in <code>O(1)</code>, and the k smallest, sorted, in <code>O(k log n)</code>.</li>
</ul>
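<p>To make the trade-offs above concrete, here is a sketch of the two standard-library options (<code>bisect.insort</code> keeps the list sorted on every insert; <code>heapq</code> defers the full ordering until the end):</p>

```python
import bisect
import heapq

lists = [[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]]

# Option 1: keep a list sorted by length as you insert.
# Each insort costs O(log n) to locate the slot plus O(n) to shift.
sorted_by_len = []
for item in lists:
    bisect.insort(sorted_by_len, (len(item), item))

# Option 2: push everything onto a heap (O(log n) per push),
# then pop in sorted order when you are done.
heap = []
for item in lists:
    heapq.heappush(heap, (len(item), item))
in_order = [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

<p>Storing <code>(len(item), item)</code> tuples makes both structures order by length first; if two lists can have equal lengths and are not comparable, add a tie-breaking counter to the tuple.</p>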
| 1 | 2016-08-24T21:35:19Z | [
"python",
"list",
"sorting",
"asymptotic-complexity"
] |
Set membership in Python | 39,133,053 | <p>I have a tuple x such that <code>x = (1, 2)</code> and I have a set z. Suppose I do this:</p>
<pre><code>z = set(x)
1 in z # True
2 in z # True
x in z # False
</code></pre>
<p>Why does this happen and how can I add tuples to a set and preserve their properties as tuples?</p>
| -1 | 2016-08-24T21:08:10Z | 39,133,074 | <p>Pass a tuple of tuples to your set constructor, since the constructor iterates over its argument. If you pass it like this:</p>
<pre><code>x = ((1, 2), )
z = set(x)
1 in z # False
2 in z # False
(1, 2) in z # True
</code></pre>
| 0 | 2016-08-24T21:10:10Z | [
"python",
"set"
] |
Set membership in Python | 39,133,053 | <p>I have a tuple x such that <code>x = (1, 2)</code> and I have a set z. Suppose I do this:</p>
<pre><code>z = set(x)
1 in z # True
2 in z # True
x in z # False
</code></pre>
<p>Why does this happen and how can I add tuples to a set and preserve their properties as tuples?</p>
| -1 | 2016-08-24T21:08:10Z | 39,133,077 | <p>Try doing any of these instead:</p>
<ul>
<li><code>z = {x}</code></li>
<li><code>z = set([x])</code></li>
<li><code>z = set(); z.add(x)</code></li>
<li><code>z = set(); z.update([x])</code></li>
</ul>
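<p>A quick check of the difference — the <code>set</code> constructor iterates over its argument, so <code>set(x)</code> unpacks the tuple, while <code>add</code> stores the tuple whole:</p>

```python
x = (1, 2)

unpacked = set(x)   # the constructor iterates over x
wrapped = {x}       # a set literal stores the tuple itself

z = set()
z.add(x)            # add() also stores the tuple whole
```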
| 1 | 2016-08-24T21:10:16Z | [
"python",
"set"
] |
Confusion about variables in python | 39,133,088 | <p>I am trying to implement heapsort but I am getting unexpected results. I think this is due to something I don't understand about how Python handles variables (I am talking about side effects). Here's the code:</p>
<pre><code>from math import *
def parent(i):
return floor((i+1)/2)-1
def left(i):
return 2*i+1
def right(i):
return 2*i+2
def maxheapify(A, i):
l = left(i)
r = right(i)
if l < len(A) and A[i] < A[l]:
largest = l
else:
largest = i
if r < len(A) and A[largest] < A[r]:
largest = r
if largest != i:
temp = A[i]
A[i] = A[largest]
A[largest] = temp
maxheapify(A, largest)
def buildmaxheap(A):
for i in range(int(floor(len(A)/2)), -1, -1):
maxheapify(A, i)
def heapsort(A):
n = len(A)
buildmaxheap(A)
for k in range(len(A), 0, -1):
temp = A[0]
A[0] = A[k-1]
A[k-1] = temp
C = A[0:k-1]
maxheapify(C, 0)
A = C + A[k-1:n]
print(A)
</code></pre>
<p>Now when I run </p>
<pre><code>A = [2, 4, 1, 3, 7, 5, 9]
heapsort(A)
print(A)
</code></pre>
<p>I obtain two printed lines (one from inside the heapsort showing that the sorting worked and one from the last print):</p>
<pre><code>[1, 2, 3, 4, 5, 7, 9]
[1, 7, 5, 3, 4, 2, 9]
</code></pre>
<p>Obviously, I'd like them both to be the same (which would mean that the sorting actually worked and A is sorted after calling heapsort(A))</p>
<p>So what I don't get is:</p>
<ol>
<li><p>If A is correctly sorted (at the point of the last line in heapsort(A)), why doesn't this change persist after leaving the function block?</p></li>
<li><p>If this is due to some permanence of the variable A, why isn't the end result the original value of A, but the intermediate step in heapsort, which is the result of the maxheapify call?</p></li>
</ol>
| 3 | 2016-08-24T21:10:57Z | 39,133,159 | <p>At the start of the function, the list <code>A</code> inside the function is the same as the list outside of the function, and any modifications made to one will be reflected in the other (it's a mutable object).</p>
<p>When you do an assignment to a list, you're substituting a new list object for the old list object. This breaks the connection to the outside object.</p>
<p>Instead of assigning a new list to <code>A</code>, you can assign to a <em>slice</em> of <code>A</code> and the original object will be modified in place instead.</p>
<pre><code>A[:] = C + A[k-1:n]
</code></pre>
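<p>A minimal illustration of rebinding versus slice assignment:</p>

```python
def rebind(a):
    a = sorted(a)     # binds a new list to the local name only

def mutate(a):
    a[:] = sorted(a)  # writes into the caller's list in place

nums = [3, 1, 2]
rebind(nums)          # nums is unchanged
mutate(nums)          # nums is now [1, 2, 3]
```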
| 2 | 2016-08-24T21:16:18Z | [
"python"
] |
Confusion about variables in python | 39,133,088 | <p>I am trying to implement heapsort but I am getting unexpected results. I think this is due to something I don't understand about how Python handles variables (I am talking about side effects). Here's the code:</p>
<pre><code>from math import *
def parent(i):
return floor((i+1)/2)-1
def left(i):
return 2*i+1
def right(i):
return 2*i+2
def maxheapify(A, i):
l = left(i)
r = right(i)
if l < len(A) and A[i] < A[l]:
largest = l
else:
largest = i
if r < len(A) and A[largest] < A[r]:
largest = r
if largest != i:
temp = A[i]
A[i] = A[largest]
A[largest] = temp
maxheapify(A, largest)
def buildmaxheap(A):
for i in range(int(floor(len(A)/2)), -1, -1):
maxheapify(A, i)
def heapsort(A):
n = len(A)
buildmaxheap(A)
for k in range(len(A), 0, -1):
temp = A[0]
A[0] = A[k-1]
A[k-1] = temp
C = A[0:k-1]
maxheapify(C, 0)
A = C + A[k-1:n]
print(A)
</code></pre>
<p>Now when I run </p>
<pre><code>A = [2, 4, 1, 3, 7, 5, 9]
heapsort(A)
print(A)
</code></pre>
<p>I obtain two printed lines (one from inside the heapsort showing that the sorting worked and one from the last print):</p>
<pre><code>[1, 2, 3, 4, 5, 7, 9]
[1, 7, 5, 3, 4, 2, 9]
</code></pre>
<p>Obviously, I'd like them both to be the same (which would mean that the sorting actually worked and A is sorted after calling heapsort(A))</p>
<p>So what I don't get is:</p>
<ol>
<li><p>If A is correctly sorted (at the point of the last line in heapsort(A)), why doesn't this change persist after leaving the function block?</p></li>
<li><p>If this is due to some permanence of the variable A, why isn't the end result the original value of A, but the intermediate step in heapsort, which is the result of the maxheapify call?</p></li>
</ol>
| 3 | 2016-08-24T21:10:57Z | 39,133,165 | <pre><code>A = C + A[k-1:n]
</code></pre>
<p>This is the line responsible for the behaviour you're seeing. By assigning <code>C + A[k-1:n]</code> to <code>A</code>, you rebind the local name <code>A</code> to a brand-new list, breaking the link to the list that was passed in. If you want your changes to persist within the list you passed in, you must assign to all the elements of <code>A</code> like so:</p>
<pre><code>A[:] = C + A[k-1:n]
</code></pre>
| 1 | 2016-08-24T21:16:45Z | [
"python"
] |
Confusion about variables in python | 39,133,088 | <p>I am trying to implement heapsort but I am getting unexpected results. I think this is due to something I don't understand about how Python handles variables (I am talking about side effects). Here's the code:</p>
<pre><code>from math import *
def parent(i):
return floor((i+1)/2)-1
def left(i):
return 2*i+1
def right(i):
return 2*i+2
def maxheapify(A, i):
l = left(i)
r = right(i)
if l < len(A) and A[i] < A[l]:
largest = l
else:
largest = i
if r < len(A) and A[largest] < A[r]:
largest = r
if largest != i:
temp = A[i]
A[i] = A[largest]
A[largest] = temp
maxheapify(A, largest)
def buildmaxheap(A):
for i in range(int(floor(len(A)/2)), -1, -1):
maxheapify(A, i)
def heapsort(A):
n = len(A)
buildmaxheap(A)
for k in range(len(A), 0, -1):
temp = A[0]
A[0] = A[k-1]
A[k-1] = temp
C = A[0:k-1]
maxheapify(C, 0)
A = C + A[k-1:n]
print(A)
</code></pre>
<p>Now when I run </p>
<pre><code>A = [2, 4, 1, 3, 7, 5, 9]
heapsort(A)
print(A)
</code></pre>
<p>I obtain two printed lines (one from inside the heapsort showing that the sorting worked and one from the last print):</p>
<pre><code>[1, 2, 3, 4, 5, 7, 9]
[1, 7, 5, 3, 4, 2, 9]
</code></pre>
<p>Obviously, I'd like them both to be the same (which would mean that the sorting actually worked and A is sorted after calling heapsort(A))</p>
<p>So what I don't get is:</p>
<ol>
<li><p>If A is correctly sorted (at the point of the last line in heapsort(A)), why doesn't this change persist after leaving the function block?</p></li>
<li><p>If this is due to some permanence of the variable A, why isn't the end result the original value of A, but the intermediate step in heapsort, which is the result of the maxheapify call?</p></li>
</ol>
| 3 | 2016-08-24T21:10:57Z | 39,133,472 | <p>The following implementation shows a rewrite of your code but includes an alternate solution above the last call to the <code>print</code> function. The commented-out line may replace the line directly above it, or you may choose to return <code>a</code> at the end of the <code>heap_sort</code> function and rebind the value of <code>a</code> in your <code>main</code> function instead.</p>
<pre><code>def main():
a = [2, 4, 1, 3, 7, 5, 9]
heap_sort(a)
print(a)
parent = lambda i: (i + 1 >> 1) - 1
left = lambda i: (i << 1) + 1
right = lambda i: i + 1 << 1
def max_heapify(a, i, n):
l = left(i)
r = right(i)
largest = l if l < n and a[i] < a[l] else i
if r < n and a[largest] < a[r]:
largest = r
if largest != i:
a[i], a[largest] = a[largest], a[i]
max_heapify(a, largest, n)
def build_max_heap(a, n):
for i in reversed(range(n + 2 >> 1)):
max_heapify(a, i, n)
def heap_sort(a):
n = len(a)
build_max_heap(a, n)
for k in reversed(range(n)):
a[0], a[k] = a[k], a[0]
c = a[:k]
max_heapify(c, 0, k)
a[:k] = c
# the following would change "a" in this scope only
# a = c + a[k:]
# print(a)
if __name__ == '__main__':
main()
</code></pre>
| 1 | 2016-08-24T21:42:07Z | [
"python"
] |
Apply same selection (cut) to multiple dataframes | 39,133,121 | <p>My question is about making selections in <code>pandas</code> (python.)</p>
<p>As you know, one can apply a <code>selection</code> (or 'cut') to a dataframe by doing</p>
<pre><code>df = df[df.area > 10]
</code></pre>
<p>if you wanted to (say) select all rows whose column value of <code>area</code> was greater than <code>10</code>. But suppose you have many dataframes, and you'd like to eventually apply this cut to all of them. It would be nice to do something like</p>
<pre><code>cut = dataframe.area > 10
</code></pre>
<p>and then somehow be able to do</p>
<pre><code>df = df[cut]
</code></pre>
<p>Obviously given the strategy above it won't work because <code>cut</code> refers to a specific dataframe. But is there a way to approximate this behavior?</p>
<p>That is, is it possible to define a <code>cut</code> that refers to no dataframe in particular and can be applied as <code>df = df[cut]</code>?</p>
| 1 | 2016-08-24T21:13:16Z | 39,133,196 | <p>I can get something similar</p>
<pre><code>cut = lambda df: df[df.area > 10]
cut(df)
</code></pre>
<p>Per @root</p>
<pre><code>cut = 'area > 10'
df.query(cut)
</code></pre>
<p>Per @ayhan</p>
<pre><code>cut = lambda x: x.area > 10
df[cut]
</code></pre>
<hr>
<h3>Timing</h3>
<p><strong><em>100 rows</em></strong></p>
<pre><code>df = pd.DataFrame(np.random.randint(0, 20, 100), columns=['area'])
</code></pre>
<p><a href="http://i.stack.imgur.com/F1NWU.png" rel="nofollow"><img src="http://i.stack.imgur.com/F1NWU.png" alt="enter image description here"></a></p>
<p><strong><em>1,000,000 rows</em></strong></p>
<pre><code>df = pd.DataFrame(np.random.randint(0, 20, 1000000), columns=['area'])
</code></pre>
<p><a href="http://i.stack.imgur.com/EoUZr.png" rel="nofollow"><img src="http://i.stack.imgur.com/EoUZr.png" alt="enter image description here"></a></p>
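<p>A short sketch of applying one reusable cut to several dataframes (the query-string form keeps the cut independent of any particular frame; the toy data here is illustrative):</p>

```python
import pandas as pd

cut = 'area > 10'  # defined once, references no dataframe

df1 = pd.DataFrame({'area': [5, 15, 25]})
df2 = pd.DataFrame({'area': [8, 12]})

filtered = [df.query(cut) for df in (df1, df2)]
```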
| 5 | 2016-08-24T21:19:22Z | [
"python",
"pandas",
"filter"
] |
Embedding Python in C: Passing a two dimensional array and retrieving back a list | 39,133,222 | <p>I want to pass a list of arrays (or a 2D array) such as <code>[[1,2,3],[4,5,6]]</code> from C to a Python script which computes and returns a list. What possible changes would be required to the embedding <a href="https://docs.python.org/3/extending/embedding.html#pure-embedding" rel="nofollow">code</a> in order to achieve this? The python script to be executed is as follows:</p>
<p><em>abc.py</em></p>
<pre><code>import math
def xyz(size,wss):
result=[0 for i in range(size)]
for i in range(size):
wss_mag=math.sqrt(wss[i][0]*wss[i][0]+wss[i][1]*wss[i][1]+wss[i][2]*wss[i][2])
result[i]=1/wss_mag
return result
</code></pre>
<p>Here size is the number of 1D arrays in WSS (e.g. 2 in case <code>wss=[[1,2,3],[4,5,6]]</code>). The question is different from the suggested duplicate in that it has to return a list back to C as a 1-D array.</p>
| 1 | 2016-08-24T21:21:50Z | 39,176,243 | <p>I think what you want to do is to pass in some Lists, have them converted to C arrays and then back to Lists before returning to Python.</p>
<p>The Lists are received in C as a pointer to a PyObject, so to get the data from you'll have to use <a href="https://docs.python.org/3.4/extending/extending.html" rel="nofollow">PyArg_ParseXX</a>(YY), where XX and YY depend on what type of list object you had in Python and how you want it to be interpreted in C. This is where you would specify the shape information of the input lists and turn it into whatever shape you need for processing.</p>
<p>To return the arrays back to Python you'll have to look at the Python-C API, which gives methods for creating and manipulating Python objects in C. As others have suggested, using the <a href="http://docs.scipy.org/doc/numpy/user/c-info.how-to-extend.html" rel="nofollow">numpy-C API</a> is also an option with many advantages. In this case, you can use <code>PyArray_SimpleNew</code> to create an array and populate it with your output.</p>
| 0 | 2016-08-27T00:13:20Z | [
"python",
"c",
"arrays",
"python-c-api",
"python-embedding"
] |
Share a variable between two files? | 39,133,230 | <p>What is a mechanism in Python by which I can do the following:</p>
<p>file1.py:</p>
<pre><code>def getStatus():
print status
</code></pre>
<p>file2.py:</p>
<pre><code>status = 5
getStatus() # 5
status = 1
getStatus() # 1
</code></pre>
<p>The function and the variable are in two different files and I'd like to avoid the use of a global.</p>
| -2 | 2016-08-24T21:22:15Z | 39,231,803 | <p>You can share variables without making them global by putting them in a module. Anybody who imports the module gets the <em>same</em> module object, so its contents are shared; changes made at one location show up in all the others.</p>
<p>notglobal.py:</p>
<pre><code>status = 0
</code></pre>
<p>get.py:</p>
<pre><code>import notglobal
def getStatus():
return notglobal.status
</code></pre>
<p>Testing:</p>
<pre><code>>>> import notglobal
>>> import get
>>> notglobal.status = 5
>>> get.getStatus()
5
>>> notglobal.status = 1
>>> get.getStatus()
1
</code></pre>
| 2 | 2016-08-30T15:38:29Z | [
"python"
] |
Numpy array multiplication of LDL^T factorization of symmetric matrix | 39,133,279 | <p>Suppose I have an "LDL^T" decomposition of a symmetric, positive-semidefinite matrix A (<code>numpy</code> array), and I would like to multiply all factors together to obtain A.</p>
<p>What is the most efficient way to achieve this?</p>
<p>Currently, I am doing (D is available as a vector):
<code>np.dot(np.dot(L, np.diag(D)), L.T)</code>,
which is quite obviously a bad solution.</p>
| 2 | 2016-08-24T21:26:15Z | 39,133,425 | <p><strong>Approach #1</strong></p>
<p>We could use <code>elementwise multiplication</code> and then <code>matrix-multiplication</code>. This basically replaces <code>np.dot(L, np.diag(D))</code> with a direct <code>element-wise multiplication</code> for hopefully some speedup. So, with it, the implementation would become -</p>
<pre><code>(L*D).dot(L.T)
</code></pre>
<p><strong>Approach #2</strong></p>
<p>Another approach could be with <code>np.einsum</code> to do all those things in one-go, like so -</p>
<pre><code>np.einsum('ij,j,kj->ik',L,D,L)
</code></pre>
<hr>
<p><strong>Runtime test</strong></p>
<pre><code>In [303]: L = np.random.randint(0,9,(1000,1000))
In [304]: D = np.random.randint(0,9,(1000))
In [305]: %timeit np.dot(np.dot(L, np.diag(D)), L.T)
1 loops, best of 3: 3.87 s per loop
In [306]: %timeit (L*D).dot(L.T)
1 loops, best of 3: 1.39 s per loop
In [307]: %timeit np.einsum('ij,j,kj->ik',L,D,L)
1 loops, best of 3: 1.71 s per loop
</code></pre>
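<p>A quick sanity check that the three forms agree on a small random instance:</p>

```python
import numpy as np

L = np.tril(np.random.rand(5, 5))  # lower-triangular factor
D = np.random.rand(5)              # diagonal, stored as a vector

a = np.dot(np.dot(L, np.diag(D)), L.T)
b = (L * D).dot(L.T)               # broadcasting scales column j by D[j]
c = np.einsum('ij,j,kj->ik', L, D, L)

assert np.allclose(a, b) and np.allclose(a, c)
```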
| 2 | 2016-08-24T21:38:08Z | [
"python",
"performance",
"numpy"
] |
Weights and Bias from Trained Meta Graph | 39,133,285 | <p>I have successfully exported a re-trained InceptionV3 NN as a TensorFlow meta graph. I have read this protobuf back into Python successfully, but I am struggling to see a way to export each layer's weight and bias values, which I assume are stored within the meta graph protobuf, for recreating the NN outside of TensorFlow.</p>
<p>My workflow is as such:</p>
<pre><code>Retrain final layer for new categories
Export meta graph tf.train.export_meta_graph(filename='model.meta')
Build python pb2.py using Protoc and meta_graph.proto
Load Protobuf:
import meta_graph_pb2
saved = meta_graph_pb2.CollectionDef()
with open('model.meta', 'rb') as f:
saved.ParseFromString(f.read())
</code></pre>
<p>From here I can view most aspects of the graph, like node names and such, but I think my inexperience is making it difficult to track down the correct way to access the weight and bias values for each relevant layer.</p>
| 2 | 2016-08-24T21:26:40Z | 39,134,460 | <p>The <code>MetaGraphDef</code> proto doesn't actually contain the values of the weights and biases. Instead it provides a way to associate a <code>GraphDef</code> with the weights stored in one or more checkpoint files, written by a <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/state_ops.html#Saver" rel="nofollow"><code>tf.train.Saver</code></a>. The <a href="https://www.tensorflow.org/versions/r0.10/how_tos/meta_graph/index.html" rel="nofollow"><code>MetaGraphDef</code> tutorial</a> has more details, but the approximate structure is as follows:</p>
<ol>
<li><p>In your training program, write out a checkpoint using a <code>tf.train.Saver</code>. This will also write a <code>MetaGraphDef</code> to a <code>.meta</code> file in the same directory.</p>
<pre><code>saver = tf.train.Saver(...)
# ...
saver.save(sess, "model")
</code></pre>
<p>You should find files called <code>model.meta</code> and <code>model-NNNN</code> (for some integer <code>NNNN</code>) in your checkpoint directory.</p></li>
<li><p>In another program, you can import the <code>MetaGraphDef</code> you just created, and restore from a checkpoint.</p>
<pre><code>saver = tf.train.import_meta_graph("model.meta")
saver.restore(sess, "model-NNNN") # Or whatever checkpoint filename was written.
</code></pre>
<p>If you want to get the value of each variable, you can (for example) find the variable in <code>tf.all_variables()</code> collection and pass it to <code>sess.run()</code> to get its value. For example, to print the values of all variables, you can do the following:</p>
<pre><code>for var in tf.all_variables():
print var.name, sess.run(var)
</code></pre>
<p>You could also filter <code>tf.all_variables()</code> to find the particular weights and biases that you're trying to extract from the model.</p></li>
</ol>
| 3 | 2016-08-24T23:24:04Z | [
"python",
"tensorflow",
"protocol-buffers",
"protoc"
] |
django user permissions not showing in production | 39,133,380 | <p>I have a weird problem: in production I can't see the user permissions list in the Django admin, and after opening the group edit page it shows nothing and the page language changes to another language.</p>
<p>I have some custom permissions defined in the app models.</p>
<p>What have I done?</p>
<li>synced my local database with the production database.</li>
<li>set the default encoding in supervisor (I thought maybe my app's verbose name being a unicode name was why it won't load).
I'm using Django version 1.7</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>It looks like the problem is from gunicorn or supervisord, because it works when I run the dev server directly.</p>
| 1 | 2016-08-24T21:34:09Z | 39,143,766 | <p>The problem was the OS default encoding: I had some unicode permission names in my table, they wouldn't load, and the error happened.</p>
<p>So I just added these three lines in my wsgi.py:</p>
<pre><code>import sys
reload(sys)
sys.setdefaultencoding('utf8')
</code></pre>
| 0 | 2016-08-25T11:14:22Z | [
"python",
"django",
"gunicorn",
"supervisord"
] |
Python 3.5 OpenSSL error | 39,133,428 | <p>So I was working with <a href="https://github.com/errbotio/errbot" rel="nofollow">errbot</a> and fired up a virtualenv with Python 3.5. When I run the errbot command I get this error:</p>
<pre><code>from OpenSSL import crypto
File "/Users/me/workspace/chatbotv2/chatbot_venv3/lib/python3.5/site-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import rand, crypto, SSL
File "/Users/me/workspace/chatbotv2/chatbot_venv3/lib/python3.5/site-packages/OpenSSL/rand.py", line 12, in <module>
from OpenSSL._util import (
File "/Users/me/workspace/chatbotv2/chatbot_venv3/lib/python3.5/site-packages/OpenSSL/_util.py", line 6, in <module>
from cryptography.hazmat.bindings.openssl.binding import Binding
File "/Users/me/workspace/chatbotv2/chatbot_venv3/lib/python3.5/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 250, in <module>
_verify_openssl_version(Binding.lib.SSLeay())
File "/Users/me/workspace/chatbotv2/chatbot_venv3/lib/python3.5/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 230, in _verify_openssl_version
"You are linking against OpenSSL 0.9.8, which is no longer "
</code></pre>
<p>This is an 'asked to death' topic on SO, so obviously I was able to find a solution quickly. I followed this <a href="http://stackoverflow.com/a/37757206/1664675">answer</a>. However, when I run <code>brew link --force openssl</code> I get this:</p>
<pre><code>Warning: Refusing to link: openssl
Linking keg-only openssl means you may end up linking against the insecure,
deprecated system OpenSSL while using the headers from Homebrew's openssl.
Instead, pass the full include/library paths to your compiler e.g.:
-I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib
</code></pre>
<p>For which I tried :</p>
<pre><code>export CPPFLAGS='-I/usr/local/opt/openssl/include'
export LDFLAGS='-L/usr/local/opt/openssl/lib'
</code></pre>
<p>After this I am lost and do not know what to do. When I try <code>python -c "import ssl; print (ssl.OPENSSL_VERSION)"</code>, I still get <code>OpenSSL 0.9.8zg 14 July 2015</code>. <strong>I am on OSX.</strong></p>
| 1 | 2016-08-24T21:38:32Z | 39,171,768 | <p>Upgrade your pip. pip 8.1+ will download a binary wheel that will have cryptography precompiled. If you want to compile it yourself the correct environment variables for homebrew can also be found in the docs on the <a href="https://cryptography.io/en/latest/installation/" rel="nofollow">installation</a> page.</p>
| 1 | 2016-08-26T17:29:19Z | [
"python",
"osx",
"python-3.x",
"ssl",
"errbot"
] |
Continuous Interpolation in MATLAB? | 39,133,448 | <p>I have a set of data that I would like to get an interpolating function for. MATLAB's interpolating functions seem to only return values at a finer set of discrete points. However, for my purposes, I need to be able to look up the function value for <em>any</em> input. What I'm looking for is something like SciPy's "interp1d."</p>
| 1 | 2016-08-24T21:40:11Z | 39,133,530 | <p>That appears to be what <a href="http://www.mathworks.com/help/matlab/ref/ppval.html" rel="nofollow"><code>ppval</code></a> is for. It looks like many of the <a href="http://www.mathworks.com/help/matlab/1-d-interpolation.html" rel="nofollow">1D interpolation functions</a> have a <code>pp</code> variant that plugs into this.</p>
<p><strong>Disclaimer:</strong> I haven't actually tried this.</p>
| 2 | 2016-08-24T21:47:03Z | [
"python",
"matlab",
"input",
"interpolation",
"data-analysis"
] |
Trying to write and read multiple lines on a JSON File on python | 39,133,505 | <p>I'm writing a random script to link a student name with a grade in a dictionary. It's simple; for writing to a file, I'm using the json module:</p>
<pre><code>import json

txt = open("xtext.txt", 'w')
for i in range(1000):
    finalMedia = {"name": "name", "media": media}
    json.dump(finalMedia, txt)
    txt.write("\n")
</code></pre>
<p>Resulting in a file like this: </p>
<pre><code>...
{"media": 7, "nome": "Bernardo"}
{"media": 7, "nome": "Isadora"}
{"media": 7, "nome": "Pedro"}
{"media": 9, "nome": "Agatha"}
...
</code></pre>
<p>For reading i wrote a script that also uses the JSON module: </p>
<pre><code>import json
data = json.load(open("xtext.txt"))
print data
</code></pre>
<p>But i get the error: <em>"Extra data: line 2 column 1 - line 1001 column 1 (char 32 - 31997)"</em></p>
<p>I already tried removing the <code>txt.write("\n")</code> and tried changing it to <code>txt.write(",")</code>. Is there something I have to do with the json module, or is it just the way I'm writing the file?</p>
| 0 | 2016-08-24T21:45:10Z | 39,133,531 | <pre><code>data = map(json.loads,open("xtext.txt"))
</code></pre>
<p>Each line is a valid JSON structure on its own, but taken together as a single file that's not valid JSON.</p>
<p>Although really, you should just call <code>json.dump</code> once:</p>
<pre><code>medias = [{"name":"name", "media":media} for name,media in all_media]
json.dump(medias,open("xtext.txt","wb"))
</code></pre>
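<p>Here is a round-trip sketch of the line-per-record ("JSON Lines") approach — an in-memory buffer stands in for the file so the example is self-contained:</p>

```python
import json
from io import StringIO

records = [{"nome": "Bernardo", "media": 7}, {"nome": "Agatha", "media": 9}]

# write: one json.dumps per line
buf = StringIO()
for rec in records:
    buf.write(json.dumps(rec) + "\n")

# read: one json.loads per line
buf.seek(0)
loaded = [json.loads(line) for line in buf]
```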
| 3 | 2016-08-24T21:47:07Z | [
"python",
"json"
] |
Writing a Twisted Client to send looping GET request to multiple API calls and record response | 39,133,507 | <p>I haven't done twisted programming in a while so I'm trying to get back into it for a new project. I'm attempting to set up a twisted client that can take a list of servers as an argument, and for each server it sends an API GET call and writes the return message to a file. This API GET call should be repeated every 60 seconds.</p>
<p>I've done it successfully with a single server using Twisted's agent class:</p>
<pre><code>from StringIO import StringIO
from twisted.internet import reactor
from twisted.internet.protocol import Protocol
from twisted.web.client import Agent
from twisted.web.http_headers import Headers
from twisted.internet.defer import Deferred
import datetime
from datetime import timedelta
import time
count = 1
filename = "test.csv"
class server_response(Protocol):
def __init__(self, finished):
print "init server response"
self.finished = finished
self.remaining = 1024 * 10
def dataReceived(self, bytes):
if self.remaining:
display = bytes[:self.remaining]
print 'Some data received:'
print display
with open(filename, "a") as myfile:
myfile.write(display)
self.remaining -= len(display)
def connectionLost(self, reason):
print 'Finished receiving body:', reason.getErrorMessage()
self.finished.callback(None)
def capture_response(response):
print "Capturing response"
finished = Deferred()
response.deliverBody(server_response(finished))
print "Done capturing:", finished
return finished
def responseFail(err):
print "error" + err
reactor.stop()
def cl(ignored):
print "sending req"
agent = Agent(reactor)
headers = {
'authorization': [<snipped>],
'cache-control': [<snipped>],
'postman-token': [<snipped>]
}
URL = <snipped>
print URL
a = agent.request(
'GET',
URL,
Headers(headers),
None)
a.addCallback(capture_response)
reactor.callLater(60, cl, None)
#a.addBoth(cbShutdown, count)
def cbShutdown(ignored, count):
print "reactor stop"
reactor.stop()
def parse_args():
usage = """usage: %prog [options] [hostname]:port ...
Run it like this:
python test.py hostname1:instanceName1 hostname2:instancename2 ...
"""
parser = optparse.OptionParser(usage)
_, addresses = parser.parse_args()
if not addresses:
print parser.format_help()
parser.exit()
def parse_address(addr):
if ':' not in addr:
hostName = '127.0.0.1'
instanceName = addr
else:
hostName, instanceName = addr.split(':', 1)
return hostName, instanceName
return map(parse_address, addresses)
if __name__ == '__main__':
d = Deferred()
d.addCallbacks(cl, responseFail)
reactor.callWhenRunning(d.callback, None)
reactor.run()
</code></pre>
<p>However I'm having a tough time figuring out how to have multiple agents sending calls. With this, I'm relying on the end of the write in cl() ---reactor.callLater(60, cl, None) to create the call loop. <strong>So how do I create multiple call agent protocols (server_response(Protocol)) and continue to loop through the GET for each of them once my reactor is started?</strong></p>
| 0 | 2016-08-24T21:45:19Z | 39,136,513 | <p>Look what the cat dragged in!</p>
<blockquote>
<p>So how do I create multiple call agent</p>
</blockquote>
<p>Use <a href="http://treq.readthedocs.io/en/latest/index.html" rel="nofollow"><code>treq</code></a> you dingus :D. You rarely want to get tangled up with the <code>Agent</code> class.</p>
<blockquote>
<p>This API GET call should be repeated every 60 seconds</p>
</blockquote>
<p>Use <a href="http://twistedmatrix.com/documents/current/core/howto/time.html" rel="nofollow"><code>LoopingCalls</code></a> instead of <code>callLater</code>; in this case it's easier and you'll run into fewer problems later.</p>
<pre><code>import treq
from twisted.internet import task, reactor
filename = 'test.csv'
def writeToFile(content):
with open(filename, 'ab') as f:
f.write(content)
def everyMinute(*urls):
for url in urls:
d = treq.get(url)
d.addCallback(treq.content)
d.addCallback(writeToFile)
#----- Main -----#
sites = [
'https://www.google.com',
'https://www.amazon.com',
'https://www.facebook.com']
repeating = task.LoopingCall(everyMinute, *sites)
repeating.start(60)
reactor.run()
</code></pre>
<p>It starts in the <code>everyMinute()</code> function, which runs every 60 seconds. Within that function, each endpoint is queried and once the contents of the response becomes available, the <code>treq.content</code> function takes the response and returns the contents. Finally the contents are written to a file.</p>
<p>PS</p>
<p>Are you scraping or trying to extract something from those sites? If you are, <a href="http://scrapy.org/" rel="nofollow"><code>scrapy</code></a> might be a good option for you.</p>
| 1 | 2016-08-25T04:02:18Z | [
"python",
"api",
"client",
"twisted"
] |
Identifying collections from pair wise combinations | 39,133,511 | <p>I have a list of tuples that identifies pair wise relations between items.</p>
<pre><code>[(1,2), (2,3), (3,1), (4,5), (5,4), (6,7)]
</code></pre>
<p>I want to look across the tuples and collapse them into unique collections as below (maybe as a hash map - any other efficient data structures?):</p>
<pre><code>{a: (1,2,3), b: (4,5), c: (6,7)}
</code></pre>
<p>Is there an algorithm that does this efficiently - I can only think of a brute force approach right now.</p>
<p>Looking to implement this in Python or R. My original example has about 28 million tuples.</p>
| 0 | 2016-08-24T21:45:35Z | 39,133,910 | <p>You basically want to find connected components. For that, scipy provides the function <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csgraph.connected_components.html" rel="nofollow">connected_components</a>. You just have to reinterpret your data a bit:</p>
<pre><code>from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

l = [(1,2), (2,3), (3,1), (4,5), (5,4), (6,7)]

# make a list of the unique elements
uniques = list(set([a for a, b in l] + [b for a, b in l]))
# reverse index to look up an element's index
unique2index = {el: i for i, el in enumerate(uniques)}

# prepare data for csr_matrix construction
data = [1] * len(l)                       # value 1 means "edge present"
data_i = [unique2index[a] for a, b in l]  # source nodes
data_j = [unique2index[b] for a, b in l]  # target nodes
graphMatrix = csr_matrix((data, (data_i, data_j)),
                         shape=(len(uniques), len(uniques)))

# here the actual work is done
numComponents, labels = connected_components(graphMatrix)

# map labels back to the original elements
components = [[uniques[j] for j, x in enumerate(labels) if x == i]
              for i in range(numComponents)]
print(components)  # [[1, 2, 3], [4, 5], [6, 7]] is printed
</code></pre>
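<p>For an input on the order of 28 million pairs, building the sparse matrix may be avoidable altogether: a plain union-find (disjoint-set) pass does the same grouping in roughly linear time. This is a hedged, stdlib-only sketch and not part of the original scipy answer:</p>

```python
# Stdlib-only union-find sketch (assumption: nodes are hashable labels).
def connected_groups(pairs):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        # Walk to the root with path halving to keep trees shallow.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two components

    # Collect nodes by their final root.
    groups = {}
    for node in parent:
        groups.setdefault(find(node), []).append(node)
    return list(groups.values())

print(connected_groups([(1,2), (2,3), (3,1), (4,5), (5,4), (6,7)]))
# e.g. [[1, 2, 3], [4, 5], [6, 7]]
```

<p>It streams over the pairs once, so you never materialize an N-by-N structure; whether it beats scipy in practice would need measuring on the real data.</p>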
| 2 | 2016-08-24T22:24:21Z | [
"python",
"algorithm"
] |
Return multiple HttpResponse objects that are Excel spreadsheets | 39,133,529 | <p>The desired outcome is to export a spreadsheet of all users in the database, but rather than export them all at once - break up the users into 3 groups and export each of these groups as a spread sheet.</p>
<p>I am using xlsxwriter and StringIO</p>
<p>Thus far the code is returning only the first HttpResponse object out of the 3 (aka the first chunk of users). I tried using StreamingHttpResponse but believe I misapplied it/it wasn't appropriate for this task.</p>
<p>The question(s): can I return multiple spreadsheets in one response? If not, would the best approach be to call this function multiple times, once for each user group? (<em>edited after first comment</em>)</p>
<p>Thanks Much!</p>
<p>Code below:</p>
<pre><code>def export_users(request):
    # all users
    users = User.objects.all()
    # breaks up all users into groups
    user_groups = [users[x:x+2] for x in xrange(0, len(users), 2)]
    # FUNCTIONALITY NOTES --> idea is to split number of users for exporting
    # into groups of 3 then export each group. right now it stops after the
    # first group

    # list var for storing all responses
    full_response = []
    for group in user_groups:
        response = HttpResponse(content_type='application/vnd.ms-excel')
        response['Content-Disposition'] = 'attachment; filename=UserReport.xlsx'
        # user group
        print "group! ---> %s" % (group)
        # creates data variable to write to response
        xlsx_data = WriteToExcel(group)
        response.write(xlsx_data)
        # appending each response to an array
        full_response.append(response)
        print len(full_response)
        # all response objects
        print full_response
        # returning one here
        return response

    # non-functioning attempt to return all responses
    # for response in full_response:
    #     print response
    #     return response
</code></pre>
| 1 | 2016-08-24T21:46:58Z | 39,134,274 | <p>You <strong>cannot return multiple <code>HttpResponse</code> objects</strong> with a single API call. However, as a workaround, you can:</p>
<ul>
<li><p><strong>Approach 1: <em>if the content is always dynamic and a file reference is not needed in future</em></strong>:
make multiple requests, each returning its own <code>HttpResponse</code>.</p></li>
<li><p><strong>Approach 2: <em>if the content is static and you may need the same files in future</em></strong>: make a single request, upload each file to <code>aws s3</code>, <code>google cloud storage</code>, or your own server, return the paths of the files in the response, and download those files on the client side.</p></li>
</ul>
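<p>As an illustration of Approach 1, the server has to slice users deterministically so that the request for group N always returns the same chunk. Below is a minimal, framework-free sketch of that slicing logic (the names <code>chunk</code> and <code>group_for_request</code> are hypothetical, and the Django-specific response code is left as comments):</p>

```python
# Hypothetical helpers: deterministic chunking for per-group export requests.
def chunk(seq, size):
    """Split seq into consecutive groups of at most `size` items."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def group_for_request(all_users, group_index, group_count=3):
    """Return the slice of users that one export request should serve."""
    size = -(-len(all_users) // group_count) or 1  # ceiling division
    groups = chunk(all_users, size)
    if not 0 <= group_index < len(groups):
        raise IndexError("no such group")
    return groups[group_index]

# Inside the Django view you would then build a single response per request,
# e.g. for ?group=N:
#   response = HttpResponse(content_type='application/vnd.ms-excel')
#   response['Content-Disposition'] = (
#       'attachment; filename=UserReport_%d.xlsx' % group_index)
#   response.write(WriteToExcel(group_for_request(users, group_index)))

users = list(range(7))  # stand-in for User.objects.all()
print([group_for_request(users, i) for i in range(3)])
# [[0, 1, 2], [3, 4, 5], [6]]
```

<p>Keeping the group size a function of the total count (rather than a hard-coded stride) means each of the three requests covers the whole user set exactly once.</p>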
| 1 | 2016-08-24T23:00:20Z | [
"python",
"django",
"export",
"xlsxwriter"
] |