title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Python syntax error on mad libs | 38,963,038 | <p>So I'm trying to make a simple Mad Libs program and I'm getting multiple errors and I cannot figure out why. One of them is with the "Noah's First Program" part, and the other is with the printing of the story variable. How do I fix it?</p>
<pre><code>print "Noah's First Program!"
name=raw_input("Enter a name:")
adjectiveone=raw_input("Enter an Adjective:")
adjectivetwo=raw_input("Enter an Adjective:")
adjectivethree=raw_input("Enter an Adjective:")
verbone=raw_input("Enter a Verb:")
verbtwo=raw_input("Enter a Verb:")
verbthree=raw_input("Enter a Verb:")
nounone=raw_input("Enter a Noun:")
nountwo=raw_input("Enter a Noun:")
nounthree=raw_input("Enter a Noun:")
nounfour=raw_input("Enter a Noun:")
animal=raw_input("Enter an Animal:")
food=raw_input("Enter a Food:")
fruit=raw_input("Enter a Fruit:")
number=raw_input("Enter a Number:")
superhero=raw_input("Enter a Superhero Name:")
country=raw_input("Enter a Country:")
dessert=raw_input("Enter a Dessert:")
year=raw_input("Enter a Year:")
STORY = “Man, I look really %s this morning. My name is %s, by the way, and my favorite thing to do is %s. My best friend is super %s, because he owns a(n) %s and a(n) %s! What’s your favorite animal? Mine is a %s. I like to watch them at the zoo as I eat %s while %s. Those things are all great, but my other friend is even more interesting! She has a %s, and a lifetime supply of %s! She’s really %s, and her name is %s. She enjoys %s, but only %s times per day! She usually does it with %s. My favorite superhero is %s, but hers is %s. My third friend is named %s and is foreign. His family comes from %s, and their family name is %s. To wrap things up, my favorite dessert is %s, and I’m glad to have introduced you to my friends. Maybe soon I’ll introduce you to my fourth friend %s, but that will probably be in the year %s! I love %s!"
print STORY (adjectiveone,name,verbone,adjectivetwo,nounone,nountwo,animal,food,verbtwo,nounthree,fruit,adjectivethree,name,verbthree,number,name,superhero,superhero,name,country,name,dessert,name,year,nounfour)
</code></pre>
| -2 | 2016-08-15T21:04:36Z | 38,963,249 | <p>If you are using Python 2, which <code>raw_input</code> leads me to believe, your code should be as follows:</p>
<pre><code>print "Noah's First Program!"
name=raw_input("Enter a name:")
adjectiveone=raw_input("Enter an Adjective:")
adjectivetwo=raw_input("Enter an Adjective:")
adjectivethree=raw_input("Enter an Adjective:")
verbone=raw_input("Enter a Verb:")
verbtwo=raw_input("Enter a Verb:")
verbthree=raw_input("Enter a Verb:")
nounone=raw_input("Enter a Noun:")
nountwo=raw_input("Enter a Noun:")
nounthree=raw_input("Enter a Noun:")
nounfour=raw_input("Enter a Noun:")
animal=raw_input("Enter an Animal:")
food=raw_input("Enter a Food:")
fruit=raw_input("Enter a Fruit:")
number=raw_input("Enter a Number:")
superhero=raw_input("Enter a Superhero Name:")
country=raw_input("Enter a Country:")
dessert=raw_input("Enter a Dessert:")
year=raw_input("Enter a Year:")
STORY = "Man, I look really %s this morning. My name is %s, by the way, and my favorite thing to do is %s. My best friend is super %s, because he owns a(n) %s and a(n) %s! What's your favorite animal? Mine is a %s. I like to watch them at the zoo as I eat %s while %s. Those things are all great, but my other friend is even more interesting! She has a %s, and a lifetime supply of %s! She's really %s, and her name is %s. She enjoys %s, but only %s times per day! She usually does it with %s. My favorite superhero is %s, but hers is %s. My third friend is named %s and is foreign. His family comes from %s, and their family name is %s. To wrap things up, my favorite dessert is %s, and I'm glad to have introduced you to my friends. Maybe soon I'll introduce you to my fourth friend %s, but that will probably be in the year %s! I love %s!"
print STORY % (adjectiveone,name,verbone,adjectivetwo,nounone,nountwo,animal,food,verbtwo,nounthree,fruit,adjectivethree,name,verbthree,number,name,superhero,superhero,name,country,name,dessert,name,year,nounfour)
</code></pre>
<p>To summarize: </p>
<p>Replace your apostrophes with <code>'</code>.</p>
<p>Fix your syntax for string formatting:</p>
<pre><code>print STORY(arg1, ..., argn)
</code></pre>
<p>should be:</p>
<pre><code>print STORY % (arg1, ..., argn)
</code></pre>
<p>If you are using Python 3, replace <code>raw_input</code> with <code>input</code> and <code>print ...</code> with <code>print(...)</code>. Also, according to <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">pep-8</a>, you should have a single space on either side of the <code>=</code> when assigning variables, so for example:</p>
<pre><code>name=raw_input("Enter a name:")
</code></pre>
<p>should be:</p>
<pre><code>name = raw_input("Enter a name:")
</code></pre>
<p>Though not doing it this way will not cause a syntax error.</p>
| 1 | 2016-08-15T21:20:50Z | [
"python",
"syntax"
] |
What will be faster - grep or pandas TextFileReader for a very large file? | 38,963,102 | <p>I need to search for a particular regex in a very large file that I can't load into memory or create a dataframe from. Which will be faster in that case: grep, or iterating over a TextFileReader?</p>
<p>Sadly, I don't have time to learn, configure and run a Hadoop.</p>
<p>Cheers </p>
| 1 | 2016-08-15T21:08:45Z | 38,963,174 | <p>Since grep is a compiled C program, it is certainly faster than interpreting bytecode for the file scan AND the regex processing (although Python's regex lib is native code).</p>
<p>Running with pypy could close the gap, but in the end the compiled code would win.</p>
<p>Of course, on smaller data, if the data can be stored in a dictionary, multiple search operations will be faster than calling grep the same number of times, because a grep search is <code>O(n)</code> while a dictionary lookup is <code>O(1)</code> on average.</p>
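If pandas turns out not to be required, a memory-light pure-Python scan is a reasonable baseline to benchmark against grep: the file is read line by line, so it never has to fit in memory. (A sketch; the path and pattern are placeholders.)

```python
import re

def grep_file(path, pattern):
    """Yield the lines of `path` that match `pattern`, one line at a time."""
    rx = re.compile(pattern)
    with open(path) as fh:
        for line in fh:          # streams the file; never loads it whole
            if rx.search(line):
                yield line.rstrip("\n")
```

Being a generator, it can be stopped early once enough matches are found, similar to piping grep through head.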
| 1 | 2016-08-15T21:14:37Z | [
"python",
"pandas",
"grep"
] |
Add quotes around two tuples | 38,963,199 | <p>I have two tuples and I would simply like to add quotes around both of them:</p>
<pre><code>Nord = 52.2
East = 13.3
</code></pre>
<p>What I'd like to print is this statement <code>"52.2, 13.3"</code>.</p>
<p>However if I do print I get <code>'52.2', '13.3'</code>.</p>
<p>How can I get the two values in double quotes?</p>
| 0 | 2016-08-15T21:16:02Z | 38,963,454 | <p>You may use a simple <code>format()</code>:</p>
<pre><code>Nord = 52.2
East = 13.3
print("\"{0}, {1}\"".format(Nord, East))
# or
print('"{0}, {1}"'.format(Nord, East))
</code></pre>
<p>See the <a href="http://ideone.com/CnCPtg" rel="nofollow">Python demo</a></p>
<p>Note that the <code>'"{0}, {1}"'</code> is preferred as <a href="https://www.python.org/dev/peps/pep-0008/#string-quotes" rel="nofollow">it is considered more <em>pythonic</em></a>:</p>
<blockquote>
<p>In Python, single-quoted strings and double-quoted strings are the same. This PEP does not make a recommendation for this. Pick a rule and stick to it. <strong>When a string contains single or double quote characters, however, use the other one to avoid backslashes in the string. It improves readability.</strong></p>
</blockquote>
| 2 | 2016-08-15T21:37:08Z | [
"python",
"string",
"tuples",
"parentheses"
] |
Add quotes around two tuples | 38,963,199 | <p>I have two tuples and I would simply like to add quotes around both of them:</p>
<pre><code>Nord = 52.2
East = 13.3
</code></pre>
<p>What I'd like to print is this statement <code>"52.2, 13.3"</code>.</p>
<p>However if I do print I get <code>'52.2', '13.3'</code>.</p>
<p>How can I get the two values in double quotes?</p>
| 0 | 2016-08-15T21:16:02Z | 38,968,440 | <p>For any tuple <code>tup</code> (of variable length) you can do this:</p>
<pre><code>print('"{}"'.format(', '.join(str(t) for t in tup)))
</code></pre>
| 0 | 2016-08-16T06:59:01Z | [
"python",
"string",
"tuples",
"parentheses"
] |
Python and Mac OS X 10.11 | 38,963,236 | <p>Recently I installed Mac OS X 10.11. I am involved in the development of scientific applications (mainly in Fortran and C++) and I use MacPorts to install different utilities (GCC compiler, MPI libraries, ...). Immediately after the installation of the new OS, I followed the migration instructions for MacPorts (<a href="https://trac.macports.org/wiki/Migration" rel="nofollow">https://trac.macports.org/wiki/Migration</a>), i.e. I uninstalled all my packages and reinstalled them again with the new OS.</p>
<p>Unfortunately, Python does not seem to work anymore. The first hint is that the terminal is never released, i.e. the function <code>exit()</code> or the combination <code>Ctrl+D</code> does not stop the interpreter properly, and the terminal is no longer usable.</p>
<p>The second (and bigger) problem is that <code>numpy</code> is not found:</p>
<pre><code>>>> import numpy as np
>>> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'numpy'
</code></pre>
<p>I tried to reinstall <code>numpy</code> using <code>pip</code> (<a href="http://stackoverflow.com/questions/19548957/can-i-force-pip-to-reinstall-the-current-version">Can I force pip to reinstall the current version?</a>), without success.</p>
<p>I have had this computer (a MacBook Pro) for many years and I have installed Python many times. This is the result of the auto-completion:</p>
<pre><code>python python3 python3.4m python3m
python-config python3-32 python3.4m-config python3m-config
python2.6 python3-config python3.6 pythontex
python2.6-config python3.4 python3.6-config pythonw
python2.7 python3.4-32 python3.6m pythonw2.6
python2.7-config python3.4-config python3.6m-config pythonw2.7
</code></pre>
<p>Anyone had a similar problem? Any idea how to get Python to work normally?</p>
| 0 | 2016-08-15T21:18:43Z | 38,966,663 | <p>You should have seen a note when you installed Python that, if you will be using it from the terminal, you should install pyNN-readline to fix the current issue with libedit (the terminal not being properly handled). NN=26 for Python 2.6, for example.</p>
<p>For numpy, you should be able to 'sudo port install pyNN-numpy'. After doing that, make sure that the python you want (2.6, 2.7, etc.) is selected with 'sudo port select python ...' (check the port man page for details) and that /opt/local/bin (assuming a default install) is at the head of PATH.</p>
<p>You can skip the select step if you call python2.7 explicitly, FWIW.</p>
| 0 | 2016-08-16T04:28:58Z | [
"python",
"osx",
"numpy",
"pip",
"macports"
] |
Repeating columns in DataFrame | 38,963,331 | <p>What is the right way of repeating columns in DataFrame?</p>
<p>I'm working on df:</p>
<pre><code> England Germany US
0 -3.3199 -3.31 496.68
1 1004.0 4.01 4.01
2 4.9794 4.97 1504.97
3 3.1766 2003.17 3.17
</code></pre>
<p>And I'd like to obtain:</p>
<pre><code> England England Germany Germany US US
0 -3.3199 -3.3199 -3.31 -3.31 496.68 496.68
1 1004.0 1004.0 4.01 4.01 4.01 4.01
2 4.9794 4.9794 4.97 4.97 1504.97 1504.97
3 3.1766 3.1766 2003.17 2003.17 3.17 3.17
</code></pre>
<p>I thought of getting the headers from the original DataFrame and doubling them:</p>
<pre><code>headers_double = [x for x in headers for i in range(2)]
</code></pre>
<p>Subsequently I tried to create df with new headers:</p>
<pre><code>df.columns = [x for x in headers_double]
</code></pre>
<p>Unfortunately, my way of thinking was wrong. Any suggestions how to solve this problem? </p>
| 0 | 2016-08-15T21:27:26Z | 38,963,370 | <p>If you only have a few columns and you can name them manually, just select columns from your dataframe duplicating those names.</p>
<pre><code>import io
import pandas as pd
data = io.StringIO('''\
England Germany US
0 -3.3199 -3.31 496.68
1 1004.0 4.01 4.01
2 4.9794 4.97 1504.97
3 3.1766 2003.17 3.17
''')
df = pd.read_csv(data, delim_whitespace=True)
print(df[['England', 'England', 'Germany', 'Germany', 'US', 'US']])
</code></pre>
<p>Output:</p>
<pre><code> England England Germany Germany US US
0 -3.3199 -3.3199 -3.31 -3.31 496.68 496.68
1 1004.0000 1004.0000 4.01 4.01 4.01 4.01
2 4.9794 4.9794 4.97 4.97 1504.97 1504.97
3 3.1766 3.1766 2003.17 2003.17 3.17 3.17
</code></pre>
<p>If you want to do this more generally, you can get your column names, duplicate them and then select columns. The following results in the same output as above:</p>
<pre><code>print(df[[col for col in df.columns for i in range(2)]])
</code></pre>
| 1 | 2016-08-15T21:30:42Z | [
"python",
"pandas"
] |
Repeating columns in DataFrame | 38,963,331 | <p>What is the right way of repeating columns in DataFrame?</p>
<p>I'm working on df:</p>
<pre><code> England Germany US
0 -3.3199 -3.31 496.68
1 1004.0 4.01 4.01
2 4.9794 4.97 1504.97
3 3.1766 2003.17 3.17
</code></pre>
<p>And I'd like to obtain:</p>
<pre><code> England England Germany Germany US US
0 -3.3199 -3.3199 -3.31 -3.31 496.68 496.68
1 1004.0 1004.0 4.01 4.01 4.01 4.01
2 4.9794 4.9794 4.97 4.97 1504.97 1504.97
3 3.1766 3.1766 2003.17 2003.17 3.17 3.17
</code></pre>
<p>I thought of getting the headers from the original DataFrame and doubling them:</p>
<pre><code>headers_double = [x for x in headers for i in range(2)]
</code></pre>
<p>Subsequently I tried to create df with new headers:</p>
<pre><code>df.columns = [x for x in headers_double]
</code></pre>
<p>Unfortunately, my way of thinking was wrong. Any suggestions how to solve this problem? </p>
| 0 | 2016-08-15T21:27:26Z | 38,965,732 | <p>I just came up with another solution that I want to share. Maybe it will be useful for somebody else.</p>
<pre><code>print df[np.repeat(df.columns.values,2)]
</code></pre>
| 0 | 2016-08-16T02:22:37Z | [
"python",
"pandas"
] |
How to get the data of a website's "pop-up" box with Selenium Webdriver in Python 3 | 38,963,336 | <p>I'm a newcomer to data scraping, so I apologize in advance if my question is flawed in any way.</p>
<p>I'm trying to scrape a page of an airline (<a href="https://www.voegol.com.br/en-us/Paginas/Default.aspx" rel="nofollow">link</a>) to get the data of the flight (e.g. aircraft type). I managed to enter the data for the flights (departure and arrival airports / dates), and get a screen with the flight proposals.</p>
<p>On the second screen, there is a link called "Direct flight". I also included a line of code to click on this, so a new pop-up window appears - this one with the data I want (scheduled departure/arrival times, aircraft type).</p>
<p>But when I try to download it with "html = browser.page_source" (parsing with BeautifulSoup), it apparently downloads only the content of the previous page (before clicking on "Direct flight"), while I want to pick up the information from the pop-up box (see <a href="http://i.stack.imgur.com/A31Hi.jpg" rel="nofollow">screenshots</a>).</p>
<pre><code>voosdiretos=browser.find_elements_by_class_name('plusBus')
voo=voosdiretos[0]
voo.click()
html = browser.page_source
soup = BeautifulSoup(html)
soup_string=str(soup)
print('soup_to_string')
</code></pre>
<p>I tried to find a solution for it. Those usually recommended the use of window_handle, but there is no way I can make it work here (I suspect this pop-up window is not an actual new pop-up, but some sort of javascript pop-up window).</p>
<p>Does anybody have any suggestions on how to scrape this information?</p>
<p>[EDIT]</p>
<p>Following Grasshopper's suggestion, I tried to get the elements:</p>
<pre><code>elem_=browser.find_elements_by_css_selector('.informacoesLightbox bgGrid borderIe8')
print(len(elem_))
print(type(elem_))
</code></pre>
<p>Output was 0 and (nothing was returned).</p>
<p>Any suggestion?</p>
| 1 | 2016-08-15T21:27:49Z | 38,967,220 | <p>The information is not in a new pop-up window as you have pointed out, but contained in a div which has the class <code>informacoesLightbox bgGrid borderIe8</code>. The display attribute is toggled to make this visible when you click direct flight. You can get the rest of the data from this div using css or xpath locators, because the divs inside have no ids, names, etc.</p>
<p>CSS below --</p>
<pre><code>Flight Name - "div[class='boxVoo'] > span[class='stsLeft']"
Operator Name - "div[class='boxVoo'] > span[class='stsRight']"
Time Duration - "div[class='boxVoo'] div[class='boxInfoLight'] div[class='timeDuration']"
Aircraft Type - "div[class='boxVoo'] div[class='rightboxInfoLight'] div:nth-of-type(1)"
Tag - "div[class='boxVoo'] div[class='rightboxInfoLight'] div:nth-of-type(2)"
</code></pre>
| 1 | 2016-08-16T05:30:33Z | [
"python",
"python-3.x",
"selenium",
"selenium-webdriver",
"web-crawler"
] |
How would you format the following line so that it fits under 100 chars? | 38,963,348 | <p>I need to make this line fit in under 100 chars, and make it as PEP8 compliant (except for the 80 char limit) as possible:</p>
<pre><code>date = dateparser.parse(parsed_response["creation_time"]) + datetime.timedelta(minutes=parsed_response["time"])
</code></pre>
<p>How would you do it? Adding parentheses and dividing it into two lines seems to make it look bad in my opinion.</p>
| 0 | 2016-08-15T21:29:00Z | 38,963,397 | <p>Well, the obvious approach would be to introduce variables for <code>dateparser.parse(parsed_response["creation_time"])</code> and <code>datetime.timedelta(minutes=parsed_response["time"])</code>. That would have the pleasant side effect of making it more clear just what the heck that code is doing, particularly if you took the opportunity to give the variables more descriptive names than "date".</p>
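A sketch of that refactor with stand-in values (in the real code, creation_time would come from dateparser.parse(...) and the minutes from parsed_response["time"]; the names here are only suggestions):

```python
import datetime

# Stand-in values; hypothetical, for illustration only.
creation_time = datetime.datetime(2016, 8, 15, 21, 29)
minutes_elapsed = 31

# Each step now fits comfortably under the line limit and is self-describing.
date = creation_time + datetime.timedelta(minutes=minutes_elapsed)
```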
| 2 | 2016-08-15T21:33:00Z | [
"python",
"pep8"
] |
How would you format the following line so that it fits under 100 chars? | 38,963,348 | <p>I need to make this line fit in under 100 chars, and make it as PEP8 compliant (except for the 80 char limit) as possible:</p>
<pre><code>date = dateparser.parse(parsed_response["creation_time"]) + datetime.timedelta(minutes=parsed_response["time"])
</code></pre>
<p>How would you do it? Adding parentheses and dividing it into two lines seems to make it look bad in my opinion.</p>
| 0 | 2016-08-15T21:29:00Z | 38,963,432 | <p>You could just break your line as follows:</p>
<pre><code>date = dateparser.parse(parsed_response["creation_time"]) + \
datetime.timedelta(minutes=parsed_response["time"])
</code></pre>
<p>But preparing the variables first and then just adding them would be more readable.</p>
<pre><code>creation_time = dateparser.parse(parsed_response["creation_time"])
parsed_response_time = datetime.timedelta(minutes=parsed_response["time"])
date = creation_time + parsed_response_time
</code></pre>
| 1 | 2016-08-15T21:35:45Z | [
"python",
"pep8"
] |
How would you format the following line so that it fits under 100 chars? | 38,963,348 | <p>I need to make this line fit in under 100 chars, and make it as PEP8 compliant (except for the 80 char limit) as possible:</p>
<pre><code>date = dateparser.parse(parsed_response["creation_time"]) + datetime.timedelta(minutes=parsed_response["time"])
</code></pre>
<p>How would you do it? Adding parentheses and dividing it into two lines seems to make it look bad in my opinion.</p>
| 0 | 2016-08-15T21:29:00Z | 38,963,479 | <p>My solution to that would be something like this:</p>
<pre><code>creation_time = dateparser.parse(parsed_response["creation_time"])
time_delta = datetime.timedelta(minutes=parsed_response["time"])
date = creation_time + time_delta
</code></pre>
<p>That way you get it PEP8 compliant, and you have those two variables to reuse for some other purpose without fetching and parsing again.</p>
| 0 | 2016-08-15T21:40:52Z | [
"python",
"pep8"
] |
How to read files with messy unreadable name? | 38,963,361 | <p>I have a lot of data files with unreadable name:</p>
<p><a href="http://i.stack.imgur.com/MuZ9f.png" rel="nofollow"><img src="http://i.stack.imgur.com/MuZ9f.png" alt="enter image description here"></a></p>
<p>Within Python, I can use glob.glob to find them.
But when I tried to use pandas to read the file, an error occurs.
Here is my code:</p>
<pre><code>import pandas as pd
import os
import glob
cwd=os.getcwd()
os.chdir(cwd)
for file in glob.glob("S*.xls"):
temp=pd.read_excel(file)
</code></pre>
<p>Here is the error message:</p>
<pre><code>IOError: [Errno 22] invalid mode ('rb') or filename: 'Shibor\xa8\xbay?Y2006.xls'
</code></pre>
<p>May I ask, how can I find the files with names like "ShiborÃý¾Ã2015.xls"?</p>
| 2 | 2016-08-15T21:30:09Z | 38,963,523 | <p>Use unicode file names/paths: add a "u" prefix, like this:</p>
<pre><code>for file in glob.glob(u"S*.xls"):
temp=pd.read_excel(file)
</code></pre>
| 5 | 2016-08-15T21:43:44Z | [
"python",
"pandas",
"file-io",
"path",
"directory"
] |
How to read files with messy unreadable name? | 38,963,361 | <p>I have a lot of data files with unreadable name:</p>
<p><a href="http://i.stack.imgur.com/MuZ9f.png" rel="nofollow"><img src="http://i.stack.imgur.com/MuZ9f.png" alt="enter image description here"></a></p>
<p>Within Python, I can use glob.glob to find them.
But when I tried to use pandas to read the file, an error occurs.
Here is my code:</p>
<pre><code>import pandas as pd
import os
import glob
cwd=os.getcwd()
os.chdir(cwd)
for file in glob.glob("S*.xls"):
temp=pd.read_excel(file)
</code></pre>
<p>Here is the error message:</p>
<pre><code>IOError: [Errno 22] invalid mode ('rb') or filename: 'Shibor\xa8\xbay?Y2006.xls'
</code></pre>
<p>May I ask, how can I find the files with names like "ShiborÃý¾Ã2015.xls"?</p>
| 2 | 2016-08-15T21:30:09Z | 38,963,583 | <p>You have a unicode character in the filename. You need to send a properly encoded string to pandas to open the file. See <a href="https://github.com/pydata/pandas/issues/9438" rel="nofollow">this</a> open issue for pandas. Honestly, I would just fix the file names in your windows/gui environment and try to get the process that is generating the files to give you a better name.</p>
<p>In the future, it would be helpful if you stated the version of Python and your flavor of operating system.</p>
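On Python 3 this problem largely disappears, since all strings are unicode; a small sketch (the file name is made up to resemble the question's):

```python
import glob
import os
import tempfile

workdir = tempfile.mkdtemp()
# Hypothetical file name containing non-ASCII characters, as in the question.
open(os.path.join(workdir, "ShiborÃý2015.xls"), "w").close()

# An ordinary glob pattern matches it without any special handling.
matches = glob.glob(os.path.join(workdir, "S*.xls"))
```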
| 1 | 2016-08-15T21:48:57Z | [
"python",
"pandas",
"file-io",
"path",
"directory"
] |
Represent Wikipedia in Graph form | 38,963,401 | <p>I want to represent the whole of Wikipedia in graph form, where each article is a node and two articles share an edge if one contains a link to the other.
Since this would be too many hits, I will need to make the requests locally (set up Wikipedia locally).
Can you guide me on how to achieve this (tell me about libraries or tools that will be helpful)?</p>
| 2 | 2016-08-15T21:33:32Z | 39,012,509 | <p>You can get a dump from Wikipedia <a href="https://dumps.wikimedia.org/" rel="nofollow">here</a>.
From your 'python' tag I assume you want to use Python to crawl the data and generate the graph.
I can recommend the following modules:</p>
<ul>
<li>requests - for retrieving the websites</li>
<li>Beautifulsoup - for parsing html</li>
<li>scrapy - alternative for beautifulsoup</li>
<li>pymongo - together with a MongoDB instance of course. MongoDB is a good choice because it's document-oriented</li>
<li>matplotlib - For visualization</li>
<li>graphviz - also a good choice for visualization</li>
<li>networkx - visualization and manipulation of graphs</li>
</ul>
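Whichever crawler you pick, the article-link graph itself can start out as a plain adjacency mapping before being handed to networkx or graphviz; a minimal sketch with made-up titles:

```python
# Adjacency mapping: article title -> set of titles it links to.
links = {
    "Python (programming language)": {"Guido van Rossum", "CPython"},
    "CPython": {"Python (programming language)"},
    "Guido van Rossum": {"Python (programming language)"},
}

def edges(adjacency):
    """Yield the directed (source, target) link pairs of the graph."""
    for src, targets in adjacency.items():
        for dst in targets:
            yield (src, dst)

edge_count = sum(1 for _ in edges(links))  # 4 edges in this toy graph
```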
| 2 | 2016-08-18T07:42:30Z | [
"python",
"computer-science",
"wikipedia"
] |
Cannot query the json column with sqlalchemy | 38,963,409 | <p>I am trying to query within a json column in postgresql with <code>flask-sqlalchemy</code>.
Here is my code:</p>
<pre><code>house_ = House()
results = house_.query.filter(
House.information['branch_name'].astext == 'release0'
).all()
</code></pre>
<p>I am not sure what is wrong.
I tried to use <code>.cast(Unicode)</code> instead of <code>astext</code> as well.</p>
<p>Getting error as below:</p>
<pre><code>NotImplementedError: Operator 'getitem' is not supported on this expression
</code></pre>
| 0 | 2016-08-15T21:33:52Z | 39,310,968 | <p>You should use the 'op' method in your query, like this:</p>
<pre><code>Session.query(Model).filter(Model.json_field.op('->')('KEY') == VALUE)
</code></pre>
<p>Also, you can use the <code>->></code> JSON operator to auto-cast the value to text. You can read more about the PostgreSQL JSON(B) operators :)</p>
| 0 | 2016-09-03T20:16:39Z | [
"python",
"postgresql",
"python-2.7",
"sqlalchemy",
"flask-sqlalchemy"
] |
Representing this graph in Python | 38,963,469 | <p>What is one way to represent this graph in a Python data structure?</p>
<p><a href="http://i.stack.imgur.com/97ulj.jpg" rel="nofollow">Example Graph</a></p>
<p>The graph will always be a closed envelope and each vertex will contain an x and y value. There will always be 3...n vertices. No other information is needed along the paths.</p>
| 0 | 2016-08-15T21:39:00Z | 38,963,722 | <p>If you just use <a href="http://matplotlib.org/" rel="nofollow">matplotlib</a> the code could look like this:</p>
<pre><code>import numpy as np
import matplotlib.patches as patches
import pylab
pp = np.array([
[-400, -50000],
[800, -50000],
[6000, -16000],
[6000, 30000],
[4400, 40000],
[-3000, 40000],
[-6000, 12000],
[-6000, -16000]
])
pylab.scatter(pp[:, 0], pp[:, 1])
pylab.gca().add_patch(patches.Polygon(pp, closed=True, fill=False))
pylab.grid()
pylab.show()
</code></pre>
<p>And the output would be:</p>
<p><a href="http://i.stack.imgur.com/QH3FX.png" rel="nofollow"><img src="http://i.stack.imgur.com/QH3FX.png" alt="enter image description here"></a></p>
| 0 | 2016-08-15T22:01:09Z | [
"python",
"graph"
] |
TensorFlow PoolAllocator huge number of requests | 38,963,525 | <p>Using TensorFlow r0.9/r0.10 I get the following message, which makes me worried that I've set up my neural network model in the wrong way.</p>
<pre><code>I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 6206792 get requests, put_count=6206802 evicted_count=5000 eviction_rate=0.000805568 and unsatisfied allocation rate=0.000806536
</code></pre>
<p>The network I use is similar to AlexNet/VGG-M. I create the variables and the ops in a function called once, and then I just loop over multiple epochs, calling the same optimizer, loss and prediction functions for each mini-batch iteration.</p>
<p>Another thing that makes me worried is that the network can be unstable when using a large batch size: it runs fine for a few epochs, and then it goes out of memory (trying to allocate...).</p>
<p>Is there any way to check if there is something wrong and what it is?</p>
| 0 | 2016-08-15T21:43:46Z | 38,975,789 | <p>This is an info-level log statement (the "I" prefix). It does not necessarily mean that anything is wrong: however, the pool allocator (a cache for allocations) is finding that it frequently has to fall back on the underlying allocator. This may indicate memory pressure.</p>
<p>For your instability problem: as you observe, large batches can lead to out-of-memory errors. There is some nondeterminism to operator scheduling, which is why you may not see it fail every time. Try lowering your batch size until you consistently no longer see out of memory errors.</p>
| 0 | 2016-08-16T13:05:38Z | [
"python",
"tensorflow"
] |
Infinite recursion when nesting objects | 38,963,535 | <p>First I should mention I'm rather new to Python. I'm trying to put a Container inside a Container, but when I do so, I get an infinite number of recursive Containers.</p>
<pre><code>class Container(Drawable):
content = []
# some more code
def append(self, obj):
print(obj.content)
self.content.append(obj)
print(self.content[0].content)
</code></pre>
<p>Page is similar to Container. I add some elements</p>
<pre><code>pa = Page()
ca = Container(color="red")
cb = Container(color="blue")
ca.append(cb)
pa.append(ca)
</code></pre>
<p>The append inside Container prints the following, which is already incorrect, since both of them should be the same:</p>
<pre><code>[]
[<src.test.container.Container object at 0x013550D0>]
</code></pre>
<p>I use this method to print it out</p>
<pre><code>class SomeClass():
def __init__(self, page):
self.print_content(page, 0)
def print_content(self, parent, depth):
depth += 1
for obj in parent.content:
print((str(depth).rjust(depth, ' ') + " " + str(obj)).rjust(depth, ' '))
if(depth > 5): # to stop infinite recursion in print
return
self.print_content(obj, depth)
</code></pre>
<p>I get the following, which makes no sense. I never added <em>ca</em> inside itself as content, but it happens when I append it.</p>
<pre><code>1 Container [color=red ]
2 Container [color=blue ]
3 Container [color=blue ]
4 Container [color=blue ]
5 Container [color=blue ]
6 Container [color=blue ]
</code></pre>
<p>Any idea why this is happening? If I append two Containers to Page it's fine, but as soon as I nest, it becomes infinite recursion. Also, all Containers with color blue are the same (address). I feel like it's an obvious mistake but I can't figure it out</p>
| 2 | 2016-08-15T21:44:47Z | 38,963,684 | <p>On your <code>Container</code> class, you're defining <code>content</code> as a <a href="https://docs.python.org/2/tutorial/classes.html#class-and-instance-variables" rel="nofollow"><strong>class</strong> level attribute</a>.</p>
<p>That means, that even if multiple instances of <code>Container</code> exist, they all share the same mutable <code>content</code> list. As soon as you modify that list (whether you do that with <code>self.content.append()</code> or <code>Container.content.append()</code>), you're changing that list for <em>all</em> those objects.</p>
<p>Initialize <code>content</code> as an instance attribute in <code>__init__()</code> instead:</p>
<pre><code>class Container(Drawable):
def __init__(self, *args, **kwargs):
super(Container, self).__init__(*args, **kwargs)
self.content = []
</code></pre>
<p><em>(The <code>super(Container, self).__init__(*args, **kwargs)</code> call is because I don't know what your <code>Drawable</code> class looks like. It might also need to have its intitialiser called, and so we're accepting all positional and keyword arguments here, and just pass them on, before initialising <code>self.content</code>)</em></p>
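A quick check of the fixed behaviour, with Drawable stubbed out so the snippet runs on its own:

```python
class Drawable:
    """Stub standing in for the real base class from the question."""
    def __init__(self, *args, **kwargs):
        pass

class Container(Drawable):
    def __init__(self, *args, **kwargs):
        super(Container, self).__init__(*args, **kwargs)
        self.content = []  # instance attribute: one list per Container

    def append(self, obj):
        self.content.append(obj)

ca = Container()
cb = Container()
ca.append(cb)  # only ca's list changes; cb.content stays empty
```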
| 2 | 2016-08-15T21:57:35Z | [
"python",
"python-3.x",
"recursion"
] |
Numpy padding 4D units with all zeros | 38,963,610 | <p>I have a 4D numpy array, but each element is a variable size 3D volume. Essentially it is a numpy list of 3D volumes. So the numpy array has the shape...</p>
<pre><code>(Pdb) batch_x.shape
(3,)
</code></pre>
<p>And take element <code>i</code> in that list, and it looks like this...</p>
<pre><code>(Pdb) batch_x[i].shape
(7, 70, 66)
</code></pre>
<p>I'm trying to pad each 3D volume with zeros, with the following code...</p>
<pre><code>for i in range(batch_size):
pdb.set_trace()
batch_x[i] = np.lib.pad(batch_x[i], (n_input_z - int(batch_x[i][:,0,0].shape[0]),
n_input_x - int(batch_x[i][0,:,0].shape[0]),
n_input_y - int(batch_x[i][0,0,:].shape[0])),
'constant', constant_values=(0,0,0))
batch_y[i] = np.lib.pad(batch_y[i], (n_input_z - int(batch_y[i][:,0,0].shape[0]),
n_input_x - int(batch_y[i][0,:,0].shape[0]),
n_input_y - int(batch_y[i][0,0,:].shape[0])),
'constant', constant_values=(0,0,0))
</code></pre>
<p>The error is as follows...</p>
<p><code>*** ValueError: Unable to create correctly shaped tuple from (3, 5, 9)</code></p>
<p>I'm trying to pad each 3D volume such that they all have the same shape -- <code>[10,75,75]</code>. Keep in mind, like I showed above, <code>batch_x[i].shape = (7,70,66)</code> So the error message is at least showing me that my dimensions should be correct.</p>
<p>For evidence, debugging...</p>
<pre><code>(Pdb) int(batch_x[i][:,0,0].shape[0])
7
(Pdb) n_input_z
10
(Pdb) (n_input_z - int(batch_x[i][:,0,0].shape[0]))
3
</code></pre>
| 1 | 2016-08-15T21:51:20Z | 38,964,707 | <p>So stripped of the extraneous stuff, the problem is:</p>
<pre><code>In [7]: x=np.ones((7,70,66),int)
In [8]: np.pad(x,(3,5,9),mode='constant',constant_values=(0,0,0))
...
ValueError: Unable to create correctly shaped tuple from (3, 5, 9)
</code></pre>
<p>Looks like a problem with defining the inputs to <code>pad</code>. I haven't used it much, but I recall it requires a pad size for both the start and end of each dimension.</p>
<p>From its docs:</p>
<pre><code>pad_width : {sequence, array_like, int}
    Number of values padded to the edges of each axis.
    ((before_1, after_1), ... (before_N, after_N)) unique pad widths
    for each axis.
</code></pre>
<p>So let's try a tuple of tuples:</p>
<pre><code>In [13]: np.pad(x,((0,3),(0,5),(0,9)), mode='constant', constant_values=0).shape
Out[13]: (10, 75, 75)
</code></pre>
<p>Can you take it from there?</p>
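<p><em>(A minimal sketch of applying this to the question's target shape <code>[10, 75, 75]</code>: the pad-after amounts are computed from each volume's own shape, so differently sized volumes all come out the same size. The shapes are taken from the question; everything else is illustrative.)</em></p>

```python
import numpy as np

# Target shape from the question
n_z, n_x, n_y = 10, 75, 75

vol = np.ones((7, 70, 66), int)  # one variable-sized 3D volume

# One (before, after) pair per axis; here all padding is appended at the end
pad_width = [(0, n_z - vol.shape[0]),
             (0, n_x - vol.shape[1]),
             (0, n_y - vol.shape[2])]
padded = np.pad(vol, pad_width, mode='constant', constant_values=0)
print(padded.shape)  # (10, 75, 75)
```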
| 2 | 2016-08-15T23:58:56Z | [
"python",
"arrays",
"numpy",
"pad"
] |
Failed to import module when building the sphinx documentation | 38,963,631 | <p>I'm using <code>Sphinx</code> version <code>1.4.5</code>.</p>
<p>My project structure is the following:</p>
<p><code>+ src > main.py
+ docs (generated with sphinx-quickstart)</code></p>
<p>Even after adding the path to the <code>src</code> folder in <code>docs/conf.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>sys.path.insert(0, os.path.abspath('../src'))
</code></pre>
<p>And generating the rst file for <code>src/main.py</code> (i.e. <code>docs/src.rst</code> and <code>docs/modules.rst</code>) with:</p>
<pre class="lang-sh prettyprint-override"><code>$ sphinx-apidoc -fo docs src
</code></pre>
<p>When I try to build the <code>html</code> webpages with:</p>
<pre class="lang-sh prettyprint-override"><code>$ make clean
$ make html
</code></pre>
<p>It can't find either the <code>src</code> module or <code>src/main.py</code>:</p>
<p><code>WARNING: autodoc: failed to import module u'src.main'; the following exception was raised</code></p>
| 0 | 2016-08-15T21:53:27Z | 38,963,769 | <p>Your current working directory should be the directory of your makefile, which should be <code>docs</code>.</p>
| 0 | 2016-08-15T22:06:27Z | [
"python",
"python-sphinx"
] |
Failed to import module when building the sphinx documentation | 38,963,631 | <p>I'm using <code>Sphinx</code> version <code>1.4.5</code>.</p>
<p>My project structure is the following:</p>
<p><code>+ src > main.py
+ docs (generated with sphinx-quickstart)</code></p>
<p>Even after adding the path to the <code>src</code> folder in <code>docs/conf.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>sys.path.insert(0, os.path.abspath('../src'))
</code></pre>
<p>And generating the rst file for <code>src/main.py</code> (i.e. <code>docs/src.rst</code> and <code>docs/modules.rst</code>) with:</p>
<pre class="lang-sh prettyprint-override"><code>$ sphinx-apidoc -fo docs src
</code></pre>
<p>When I try to build the <code>html</code> webpages with:</p>
<pre class="lang-sh prettyprint-override"><code>$ make clean
$ make html
</code></pre>
<p>It can't find either the <code>src</code> module or <code>src/main.py</code>:</p>
<p><code>WARNING: autodoc: failed to import module u'src.main'; the following exception was raised</code></p>
| 0 | 2016-08-15T21:53:27Z | 38,963,810 | <p>Try doing this for your path insertion instead:</p>
<pre><code>sys.path.insert(0, os.path.abspath('../'))
</code></pre>
<p>Also consider a better name for your directory than <code>src</code>.</p>
| 1 | 2016-08-15T22:10:09Z | [
"python",
"python-sphinx"
] |
Dateutil parse bug in python returns the wrong value | 38,963,653 | <p>I have looked at many possible ways to parse python times. <a href="http://stackoverflow.com/questions/1101508/how-to-parse-dates-with-0400-timezone-string-in-python">Using parse seems like the only method that should work</a>. <a href="http://stackoverflow.com/questions/10494312/parsing-time-string-in-python">While trying to use datetime.strptime causes an error because <code>%z</code> does not work with python 2.7</a>. But using <code>parser.parse</code> incorrectly recognizes the time zone.</p>
<p>I parse both <code>Fri Nov 9 09:04:02 2012 -0500</code> and <code>Fri Nov 9 09:04:02 2012 -0800</code> and get the exact same timestamp in unix time. <code>1352480642</code></p>
<ul>
<li>My version of python 2.7.10 </li>
<li>My version of dateutil 1.5</li>
</ul>
<p>Here is my code that runs the test.</p>
<pre><code>#!/usr/bin/python
import time
from dateutil import parser

def get_timestamp(time_string):
    timing = parser.parse(time_string)
    return time.mktime(timing.timetuple())

test_time1 = "Fri Nov 9 09:04:02 2012 -0500"
test_time2 = "Fri Nov 9 09:04:02 2012 -0800"
print get_timestamp(test_time1)
print get_timestamp(test_time2)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>1352480642.0
1352480642.0
</code></pre>
<p><strong>Expected output</strong></p>
<pre><code>1352469842.0
1352480642.0
</code></pre>
| 1 | 2016-08-15T21:55:15Z | 38,964,103 | <p>This has nothing to do with the parser, you'll see the same behavior just from <code>mktime()</code> alone, since <code>datetime.timetuple()</code> doesn't have any time zone offset information, and <code>mktime()</code> is the inverse of <code>localtime</code>. You can correct this by converting it to <code>localtime</code> before calling <code>timetuple()</code>:</p>
<pre><code>from time import mktime
from datetime import datetime
from dateutil import tz

dt_base = datetime(2012, 11, 9, 9, 4, 2)
dt_est = dt_base.replace(tzinfo=tz.tzoffset('EST', -5 * 3600))
dt_pst = dt_base.replace(tzinfo=tz.tzoffset('PST', -8 * 3600))

def print_mktime(dt):
    print(mktime(dt.timetuple()))

# Run in UTC
print_mktime(dt_est)                           # 1352469842.0
print_mktime(dt_pst)                           # 1352469842.0

# Convert to local time zone first
print_mktime(dt_est.astimezone(tz.tzlocal()))  # 1352469842.0
print_mktime(dt_pst.astimezone(tz.tzlocal()))  # 1352480642.0
</code></pre>
<p>Note that there is a chart on the <a href="https://docs.python.org/3/library/time.html" rel="nofollow">documentation for <code>time()</code></a> (<a href="https://docs.python.org/2/library/time.html" rel="nofollow">python 2.x docs</a>) that tells you how to convert between these representations:</p>
<pre><code>From                      | To                        | Use
--------------------------|---------------------------|-------------------
seconds since the epoch   | struct_time in UTC        | gmtime()
seconds since the epoch   | struct_time in local time | localtime()
struct_time in UTC        | seconds since the epoch   | calendar.timegm()
struct_time in local time | seconds since the epoch   | mktime()
</code></pre>
<p>My personal preference would be to convert the parsed date to UTC, in which case <code>calendar.timegm()</code> would be the appropriate function:</p>
<pre><code>from calendar import timegm

def print_timegm(dt):
    print(timegm(dt.timetuple()))

print_timegm(dt_est.astimezone(tz.tzutc()))    # 1352469842.0
print_timegm(dt_pst.astimezone(tz.tzutc()))    # 1352480642.0
</code></pre>
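<p><em>(To tie this back to the question's <code>get_timestamp()</code>: convert the aware datetime to UTC before taking <code>timetuple()</code>. The sketch below uses Python 3's <code>%z</code> support in <code>strptime</code> instead of <code>dateutil</code>, purely so it is self-contained; the same astimezone-then-<code>timegm</code> pattern applies to a <code>parser.parse()</code> result.)</em></p>

```python
from calendar import timegm
from datetime import datetime, timezone

def get_timestamp(time_string):
    # Parse the offset-bearing string (Python 3's strptime understands %z),
    # convert to UTC, then feed the UTC struct_time to timegm()
    dt = datetime.strptime(time_string, "%a %b %d %H:%M:%S %Y %z")
    return timegm(dt.astimezone(timezone.utc).timetuple())

print(get_timestamp("Fri Nov 9 09:04:02 2012 -0500"))  # 1352469842
print(get_timestamp("Fri Nov 9 09:04:02 2012 -0800"))  # 1352480642
```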
| 0 | 2016-08-15T22:42:19Z | [
"python",
"python-2.7",
"parsing",
"python-dateutil"
] |
Server-sent-events not received by all web-clients | 38,963,698 | <p>I have a Flask web-server which generates server-sent-events (sse) which should be received by all connected web-clients.</p>
<p>In "<strong>Version 1</strong>" below this works. All web-clients receive the events, and update accordingly.</p>
<p>In "<strong>Version 2</strong>" below, which is a refactoring of Version 1, this no longer works as expected:</p>
<p>Instead I get:</p>
<ul>
<li>mostly only one of the web-clients gets the event, or </li>
<li>rarely
multiple web-clients get the event, or </li>
<li>rarely none of the
web-clients get the event</li>
</ul>
<p>As far as I can make out, the server is always generating the events, and normally at least one client is receiving.</p>
<p>My initial test hosted the web-server on a Raspberry Pi 3, with the web-clients on the Pi, on Windows and OSX using a variety of browsers.</p>
<p>To eliminate any possible network issues I repeated the same test with the web-server and 3 instances of Chrome all hosted on the same OSX laptop.
This gave the same results: Version 1 "OK", Version 2 "NOT OK".</p>
<p>The client that successfully receives seemingly varies randomly from event to event: so far I can't discern a pattern.</p>
<p>Both Version 1 and Version 2 have a structure <code>change_objects</code> containing "things that should be tracked for changes".</p>
<ul>
<li><p>In Version 1 <code>change_objects</code> is a dict of dicts.</p></li>
<li><p>In Version 2 I refactored <code>change_objects</code> to be a list of instances of the class <code>Reporter</code>, or sub classes of <code>Reporter</code>.</p></li>
</ul>
<p>The changes to the "things" are triggered based on web-services received elsewhere in the code.</p>
<h2>Version 1 (OK: sse events received by all web-clients)</h2>
<pre><code>def check_walk(walk_new, walk_old):
    if walk_new != walk_old:
        print("walk change", walk_old, walk_new)
        return True, walk_new
    else:
        return False, walk_old

def walk_event(walk):
    silliness = walk['silliness']
    data = '{{"type": "walk_change", "silliness": {}}}'.format(silliness)
    return "data: {}\n\n".format(data)

change_objects = {
    "walk1": {
        "object": walks[0],
        "checker": check_walk,
        "event": walk_event,
    },
    ... more things to be tracked...
}

def event_stream(change_objects):
    copies = {}
    for key, value in change_objects.items():
        copies[key] = {"obj_old": deepcopy(value["object"])}  # ensure a true copy, not a reference!
    while True:
        gevent.sleep(0.5)
        for key, value in change_objects.items():
            obj_new = deepcopy(value["object"])  # use same version in check and yield functions
            obj_changed, copies[key]["obj_old"] = value["checker"](obj_new, copies[key]["obj_old"])
            if obj_changed:
                yield value["event"](obj_new)

@app.route('/server_events')
def sse_request():
    return Response(
        event_stream(change_objects),
        mimetype='text/event-stream')
</code></pre>
<h2>Version 2 (NOT OK: sse events NOT always received by all web-clients)</h2>
<pre><code>class Reporter:
    def __init__(self, reportee, name):
        self._setup(reportee, name)

    def _setup(self, reportee, name):
        self.old = self.truecopy(reportee)
        self.new = reportee
        self.name = "{}_change".format(name)

    def truecopy(self, orig):
        return deepcopy(orig)

    def changed(self):
        if self.new != self.old:
            self.old = self.truecopy(self.new)
            return True
        else:
            return False

    def sse_event(self):
        data = self.new.copy()
        data['type'] = self.name
        data = json.dumps(data)
        return "data: {}\n\n".format(data)

class WalkReporter(Reporter):
    # as we are only interested in changes to attribute "silliness"
    # (not other attributes) --> override superclass sse_event
    def sse_event(self):
        silliness = self.new['silliness']
        data = '{{"type": "walk_change", "silliness": {}}}'.format(silliness)
        return "data: {}\n\n".format(data)

change_objects = [
    WalkReporter(name="walk1", reportee=walks[0]),
    ... more objects to be tracked...
]

def event_stream(change_objects):
    while True:
        gevent.sleep(0.5)
        for obj in change_objects:
            if obj.changed():
                yield obj.sse_event()

@app.route('/server_events')
def sse_request():
    return Response(
        event_stream(change_objects),
        mimetype='text/event-stream')
</code></pre>
<p>Full disclosure: This question is a follow on to the question: <a href="http://stackoverflow.com/questions/38814896/refactor-a-multigenerator-python-function/38831039#38831039">Refactor a (multi)generator python function</a>
which focussed on refactoring the <code>event_stream()</code> function when tracking changes to multiple "things".
However the problem here is clearly outside the scope of the original question, hence a new one.</p>
| 0 | 2016-08-15T21:58:33Z | 38,970,460 | <p>The refactored "Version 2" code in the question suffers from a concurrency / timing problem.</p>
<p><code>sse_request()</code> is called for each of the web-clients (in the test case 3 instances). We thus have 3 instances looping in <code>event_stream()</code>.</p>
<p>These calls happen "more or less" in parallel: which actually means in random sequence.</p>
<p>However the list <code>change_objects</code> is shared, so the first web-client that spots a change will update the "old" copy in the shared <code>WalkReporter</code> instance to the latest state, and may do so before the other clients spot the change. i.e. the first successful web-client effectively hides the change from the other web-clients.</p>
<p>This is easily fixed, by giving each web-client its own copy of <code>change_objects</code>. </p>
<p>i.e. <code>change_objects</code> is moved into <code>sse_request()</code> as shown below.</p>
<pre><code>@app.route('/server_events')
def sse_request():
    change_objects = [
        WalkReporter(name="walk1", reportee=walks[0]),
        ... more objects to be tracked...
    ]
    return Response(
        event_stream(change_objects),
        mimetype='text/event-stream')
</code></pre>
<p>With this minor change, each instance of <code>sse_request()</code> can spot the changes, and thus all the web-clients receive the sse-events as expected.</p>
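<p><em>(The effect is easy to reproduce outside Flask. The sketch below strips <code>Reporter</code> down to its change-detection core and simulates two polling clients; all names are illustrative.)</em></p>

```python
from copy import deepcopy

class Reporter:
    # Minimal change-detector: remembers the last state it reported
    def __init__(self, reportee):
        self.old = deepcopy(reportee)
        self.new = reportee  # a reference, so external mutations are seen

    def changed(self):
        if self.new != self.old:
            self.old = deepcopy(self.new)
            return True
        return False

walk = {'silliness': 1}

# One shared Reporter (Version 2's bug): the first poller consumes the change
shared = Reporter(walk)
walk['silliness'] = 2
print(shared.changed(), shared.changed())  # True False -- second client misses it

# One Reporter per client (the fix): both pollers see the change
client_a, client_b = Reporter(walk), Reporter(walk)
walk['silliness'] = 3
print(client_a.changed(), client_b.changed())  # True True
```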
| 1 | 2016-08-16T08:51:20Z | [
"python",
"flask",
"server-sent-events"
] |
Python Plotting Data Obtained from Quandl API the Wrong Way? | 38,963,711 | <p>I have the following code:</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
# quandl_data.py

from __future__ import print_function

import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import requests

def construct_futures_symbols(
    symbol, start_year=2010, end_year=2016
):
    """
    Constructs a list of futures contract codes
    for a particular symbol and timeframe.
    """
    futures = []
    # March, June, September and
    # December delivery codes
    months = 'HMUZ'
    for y in range(start_year, end_year+1):
        for m in months:
            futures.append("%s%s%s" % (symbol, m, y))
    return futures

def download_contract_from_quandl(contract, dl_dir):
    """
    Download an individual futures contract from Quandl and then
    store it to disk in the 'dl_dir' directory. An auth_token is
    required, which is obtained from the Quandl upon sign-up.
    """
    # Construct the API call from the contract and auth_token
    api_call = "https://www.quandl.com/api/v3/datasets/"
    api_call += "CME/%s.csv" % contract
    # If you wish to add an auth token for more downloads, simply
    # comment the following line and replace MY_AUTH_TOKEN with
    # your auth token in the line below
    params = "?sort_order=asc"
    params = "?auth_token=myTokenHere&sort_order=asc"
    full_url = "%s%s" % (api_call, params)
    # Download the data from Quandl
    data = requests.get(full_url).text
    # Store the data to disk
    fc = open('%s/%s.csv' % (dl_dir, contract), 'w')
    fc.write(data)
    fc.close()

def download_historical_contracts(
    symbol, dl_dir, start_year=2010, end_year=2016
):
    """
    Downloads all futures contracts for a specified symbol
    between a start_year and an end_year.
    """
    contracts = construct_futures_symbols(
        symbol, start_year, end_year
    )
    for c in contracts:
        print("Downloading contract: %s" % c)
        download_contract_from_quandl(c, dl_dir)

if __name__ == "__main__":
    symbol = 'ES'
    # Make sure you've created this
    # relative directory beforehand
    dl_dir = 'quandl/futures/ES'

    # Create the start and end years
    start_year = 2010
    end_year = 2016

    # Download the contracts into the directory
    download_historical_contracts(
        symbol, dl_dir, start_year, end_year
    )

    # Open up a single contract via read_csv
    # and plot the settle price
    es = pd.io.parsers.read_csv(
        "%s/ESH2010.csv" % dl_dir, index_col="Date"
    )
    es["Settle"].plot()
    plt.show()
</code></pre>
<p>The code runs without error, however it is plotting in the wrong direction. Seems to be plotting from new to old dates. I would like to plot the oldest data first.</p>
<p><a href="http://i.stack.imgur.com/65lb2.png" rel="nofollow"><img src="http://i.stack.imgur.com/65lb2.png" alt="Graph of result"></a></p>
<p>How do I achieve this? I tried changing <code>params = "?sort_order=asc"</code> to <code>params = "?sort_order=desc"</code>, but that only changes the .csv file order, not the plot.</p>
<p>Any ideas?</p>
| 1 | 2016-08-15T21:59:59Z | 38,963,782 | <p>Okay, i've used the API docs and found the problem.
The parameter you need to use to order the data is: "order=asc|desc", and not "sort_order" as previously thought.</p>
<p>Please use this function:</p>
<pre><code>def download_contract_from_quandl(contract, dl_dir):
    """
    Download an individual futures contract from Quandl and then
    store it to disk in the 'dl_dir' directory. An auth_token is
    required, which is obtained from the Quandl upon sign-up.
    """
    # Construct the API call from the contract and auth_token
    api_call = "https://www.quandl.com/api/v3/datasets/"
    api_call += "CME/%s.csv" % contract
    # If you wish to add an auth token for more downloads, simply
    # comment the following line and replace MY_AUTH_TOKEN with
    # your auth token in the line below
    params = "?auth_token=YOUR_TOKEN"
    params += "&order=asc"
    full_url = "%s%s" % (api_call, params)
    # Download the data from Quandl
    data = requests.get(full_url).text
    # Store the data to disk
    fc = open('%s/%s.csv' % (dl_dir, contract), 'w')
    fc.write(data)
    fc.close()
</code></pre>
<p>Note:
The way you are using the API, by making a simple HTTP request, although it works, is not the ideal way to use it.
There is a Python package called <code>quandl</code>, which you can install like so:</p>
<pre><code>pip3 install quandl
</code></pre>
<p>on your system.
You would then make a single auth call (instead of passing <code>auth_token=YOUR_TOKEN</code> in each request), like so:</p>
<pre><code>quandl.ApiConfig.api_key = 'YOUR_TOKEN'
</code></pre>
<p>Then each API call will be simple and elegant, using their package instead of creating an HTTP request manually, like so:</p>
<pre><code>data = quandl.get("CME/ESH2010", order="asc")
</code></pre>
<p>I will advise using the second method of using the API, but both will work perfectly.</p>
<p>Cheers, Or.</p>
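<p><em>(An API-independent alternative, in case the download order ever changes again: sort the DataFrame by its index before plotting. The tiny CSV below is made up for illustration.)</em></p>

```python
import io
import pandas as pd

# A made-up, deliberately descending CSV standing in for the Quandl download
csv_text = ("Date,Settle\n"
            "2010-03-01,1100.0\n"
            "2010-02-01,1050.0\n"
            "2010-01-04,1000.0\n")

es = pd.read_csv(io.StringIO(csv_text), index_col="Date", parse_dates=True)
es = es.sort_index()  # oldest date first, regardless of row order on disk
print(list(es["Settle"]))  # [1000.0, 1050.0, 1100.0]
```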
| 1 | 2016-08-15T22:07:39Z | [
"python",
"plot"
] |
How to change the marker size in pandas.scatter_matrix? | 38,963,734 | <p>How to change the marker sizes in <code>pandas.scatter_matrix()</code> using python 3.5.2 and pandas 0.18.0? </p>
| 2 | 2016-08-15T22:02:17Z | 38,964,654 | <p>use the <code>s</code> parameter.</p>
<pre><code>import numpy as np
import pandas as pd
from pandas.tools.plotting import scatter_matrix

df = pd.DataFrame(np.random.rand(10, 2))
scatter_matrix(df, alpha=0.5, figsize=(8, 8), diagonal='kde', s=1000)
</code></pre>
<p><a href="http://i.stack.imgur.com/Ry1Zr.png" rel="nofollow"><img src="http://i.stack.imgur.com/Ry1Zr.png" alt="enter image description here"></a></p>
| 1 | 2016-08-15T23:50:21Z | [
"python",
"pandas"
] |
Django checkbox not being submitted when unchecked | 38,963,751 | <p>I am developing a Django application. I have a checkbox in my form. My form submits fine when the box is checked, but when the box is unchecked, the form fails to submit. The field I am using is Boolean.</p>
<p>Here is my code:</p>
<pre><code># models.py
class Ingredient(models.Model):
    user = models.ForeignKey('auth.User')
    recipe_id = models.ForeignKey(Recipe, on_delete=models.CASCADE)
    title = models.CharField(max_length=500)
    instructions = models.CharField(max_length=500)
    rules = models.TextField(max_length=500, blank=True)
    primal = models.CharField(default='False', max_length=500)

    def __str__(self):
        return self.title


# views.py
def create_ingredient(request):
    form = IngredientForm(current_user=request.user)
    if request.method == 'POST':
        form = IngredientForm(request.POST, current_user=request.user)
        if form.is_valid():
            current_user = request.user
            data = form.cleaned_data
            ingredient_data = Ingredient.objects.create(
                user=current_user, recipe_id=data['recipe_id'],
                title=data['title'], primal=data['primal'],
                instructions=data['instructions'], rules=data['rules'])
            ingredient_data.save()
            ingredient = Ingredient.objects.get(pk=ingredient_data.pk)
            return redirect('ingredient_detail', pk=ingredient.pk)
        else:
            messages.error(request, "Error")
    return render(request, 'create_ingredient.html', {'form': form})


# in my template
....
<div class="form-group">
    <div class="checkbox">
        <label><input type="checkbox" name="{{ form.primal.name }}" value="True" id="primal1">Primal</label>
    </div>
</div>
....
</code></pre>
<p>Does anyone have a solution?</p>
| 0 | 2016-08-15T22:04:10Z | 38,963,870 | <p>This is not about django but about html in general. This is your template:</p>
<pre><code><div class="form-group">
    <div class="checkbox">
        <label><input type="checkbox" name="{{ form.primal.name }}" value="True" id="primal1">Primal</label>
    </div>
</div>
</code></pre>
<p>Your checkbox, when unchecked, will not be submitted, because it will not put a <code>{{ form.primal.name }}=True</code> pair in the URL or POST body.</p>
<p>To solve your problem, you should ensure a way to add <code>{{ form.primal.name }}=False</code> to the url. The standard solution involves a fixed additional field (a hidden one) like this:</p>
<pre><code><div class="form-group">
    <div class="checkbox">
        <input type="hidden" name="{{ form.primal.name }}" value="False" />
        <label><input type="checkbox" name="{{ form.primal.name }}" value="True" id="primal1">Primal</label>
    </div>
</div>
</code></pre>
<p>This will generate a query-string part like <code>{{ form.primal.name }}=False</code> if the checkbox is unchecked, or <code>{{ form.primal.name }}=False&{{ form.primal.name }}=True</code> if it is checked. In the latter case, only the last occurrence counts, so you will get <code>"True"</code> when checked and <code>"False"</code> when unchecked.</p>
| 1 | 2016-08-15T22:16:19Z | [
"python",
"django",
"forms",
"checkbox"
] |
Compute 2^x where x is the user input | 38,963,816 | <p>I have tried to build exceptions for this problem like asked in the problem below. Unfortunately I can't make it work. I would greatly appreciate any input whatsoever. Thank you in advance.</p>
<p><strong>Compute 2^x where x is the user input. x should be greater than or equal to 5 and less than or equal to 25. If the user input is not an integer then raise an exception. Create custom exceptions and raise them if x is less than 5 or greater than 25. Then add the digits of 2^x. For example, if the user inputs 6, then find 2^6 = 64, so the sum of the digits is 6 + 4 = 10.</strong></p>
<pre><code>import sys

i = int(raw_input("Please provide a value for x (between 5 and 25): "))
try:
    x = int(i)
except ValueError as v:
    print 'You did not enter a valid integer', v
except NotAValidValue as n:
    if x < 5 or x > 25:
        print 'Your entry is not valid. Please provide a number between 5 and 25', n
    sys.exit(0)
exp = 2 ** x
print(exp)
</code></pre>
<p>Again, Thank you so much for giving this your time. </p>
| -4 | 2016-08-15T22:10:37Z | 38,963,918 | <p>Here's a working example, it's written to be executed on python 2.x:</p>
<pre><code>import sys

try:
    x = int(raw_input("Please provide a value for x (between 5 and 25): "))
    if x < 5 or x > 25:
        print('Your entry is not valid: {0}. '
              'Please provide a number between 5 and 25'.format(x))
    else:
        exp = 2 ** x
        print(exp)
except ValueError as v:
    print('You did not enter a valid integer {0}'.format(v))
</code></pre>
<p>One piece of advice, though: try to carefully read and understand all the code, and start tweaking it here and there to make it yours. You won't learn much using someone else's code as-is; next time, try to be more specific about which parts of your code you don't understand. :)</p>
<p>Have fun learning python!</p>
| 1 | 2016-08-15T22:21:29Z | [
"python",
"exception",
"exception-handling"
] |
Compute 2^x where x is the user input | 38,963,816 | <p>I have tried to build exceptions for this problem like asked in the problem below. Unfortunately I can't make it work. I would greatly appreciate any input whatsoever. Thank you in advance.</p>
<p><strong>Compute 2^x where x is the user input. x should be greater than or equal to 5 and less than or equal to 25. If the user input is not an integer then raise an exception. Create custom exceptions and raise them if x is less than 5 or greater than 25. Then add the digits of 2^x. For example, if the user inputs 6, then find 2^6 = 64, so the sum of the digits is 6 + 4 = 10.</strong></p>
<pre><code>import sys

i = int(raw_input("Please provide a value for x (between 5 and 25): "))
try:
    x = int(i)
except ValueError as v:
    print 'You did not enter a valid integer', v
except NotAValidValue as n:
    if x < 5 or x > 25:
        print 'Your entry is not valid. Please provide a number between 5 and 25', n
    sys.exit(0)
exp = 2 ** x
print(exp)
</code></pre>
<p>Again, Thank you so much for giving this your time. </p>
| -4 | 2016-08-15T22:10:37Z | 38,964,486 | <p>The way you define custom exceptions in python is as shown below. You need to define each custom exception as a subclass of the Exception class. You can then catch your own custom exceptions with a catch-except block.</p>
<pre><code>import sys

class TooSmallExc(Exception):
    def __init__(self):
        Exception.__init__(self, "The number is less than 5")

class TooLargeExc(Exception):
    def __init__(self):
        Exception.__init__(self, "The number is greater than 25")

print 'How are you?'
i = raw_input("Please provide a value for x (between 5 and 25): ")
try:
    x = int(i)
    if x < 5:
        raise TooSmallExc
    if x > 25:
        raise TooLargeExc
except ValueError:
    print 'I just caught a ValueError exception, which is a Python built-in exception'
except TooSmallExc:
    print 'I just caught a custom exception that I made for integers less than 5'
except TooLargeExc:
    print 'I just caught a custom exception that I made for integers greater than 25'
</code></pre>
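<p><em>(Both snippets stop before the assignment's final step: summing the digits of 2^x. A minimal sketch of that step:)</em></p>

```python
def digit_sum(n):
    # Sum the decimal digits of n, e.g. 64 -> 6 + 4 = 10
    return sum(int(d) for d in str(n))

print(digit_sum(2 ** 6))   # 10
print(digit_sum(2 ** 25))  # digits of 33554432 -> 29
```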
| 0 | 2016-08-15T23:28:39Z | [
"python",
"exception",
"exception-handling"
] |
checking django empty table object not working | 38,963,822 | <p>I have an HTML page, and I'd like to display the search bar only when the passed-in table object is NOT empty. But my check is not working properly. Here's the code:</p>
<pre><code><!-- We'll display the search bar only when the user has access to at least one item; otherwise, hide it. -->
{% if item_info %}
    Number of entries: {{ item_info|length }}, nothing? {{ item_info }}
    <section>
        <form method="post" action=".">
            {% csrf_token %}
            <input type="text" class="search-query span80" id="search" name="search" placeholder="Enter ItemNo to search">
            <button type="submit" class="btn">Search</button>
        </form>
    </section>
{% else %}
    No item_info.
{% endif %}
</code></pre>
<p>Here's what I see on the browser:
<a href="http://i.stack.imgur.com/DukV8.png" rel="nofollow"><img src="http://i.stack.imgur.com/DukV8.png" alt="still enters if branch"></a> </p>
<p>item_info is blank, I think it should go to else branch, however, it entered if branch, any help is greatly appreciated! </p>
<p>Edit after elethan's answer:
I've printed it out to debug, here's the screenshot:
<a href="http://i.stack.imgur.com/q9Slh.png" rel="nofollow"><img src="http://i.stack.imgur.com/q9Slh.png" alt="2nd_img"></a>
So it looks like this item_info really is empty; I didn't see any item_info object printed out.</p>
<p>Also, to help debug, here's my view code:</p>
<pre><code>def item_info(request):
    iteminfo = ItemInfo.objects.all().filter(Q(some_query))
    table = ItemInfoTable(iteminfo)
    RequestConfig(request).configure(table)
    return render(request, 'item_info.html', {'item_info': table})
</code></pre>
<p>And here's my table definition:</p>
<pre><code>import django_tables2 as tables

class ItemInfoTable(tables.Table):
    itmno = tables.Column(verbose_name="Item #")

    class Meta:
        model = ItemInfo
        empty_text = "There is no item record."
</code></pre>
<p>And here's the ItemInfo table it refers to:</p>
<pre><code>class ItemInfo(models.Model):
    itmno = models.CharField(primary_key=True, max_length=11L, db_column='ItmNo', blank=True)

    class Meta:
        db_table = 'item_info'
</code></pre>
| 2 | 2016-08-15T22:11:12Z | 38,963,940 | <p>If <code>item_info</code> is a <code>RawQuerySet</code>, try <code>{% if item_info.all %}</code> instead of <code>{% if item_info %}</code>. <code>RawQuerySet</code> does not define a <code>__bool__()</code> method, so the instances are always considered <code>True</code>. See the warnings in <a href="https://docs.djangoproject.com/en/1.10/topics/db/sql/#performing-raw-queries" rel="nofollow">this section</a> of the docs, repeated below, just in case this link dies in the future:</p>
<blockquote>
<p>While a RawQuerySet instance can be iterated over like a normal
QuerySet, RawQuerySet doesnât implement all methods you can use with
QuerySet. For example, <strong>bool</strong>() and <strong>len</strong>() are not defined in
RawQuerySet, and thus all RawQuerySet instances are considered True.
The reason these methods are not implemented in RawQuerySet is that
implementing them without internal caching would be a performance
drawback and adding such caching would be backward incompatible.</p>
</blockquote>
| 2 | 2016-08-15T22:23:50Z | [
"python",
"html",
"django"
] |
Use a here-document with pxssh (pexpect) | 38,963,838 | <p>In my python script I need to execute a command over SSH that also takes a heredoc as an argument. The command calls an interactive script that can also be called as follows:</p>
<pre><code>dbscontrol << EOI
HELP
QUIT
EOI
</code></pre>
<p>I also found <a href="http://stackoverflow.com/questions/35344870/ssh-here-document-syntax-with-python">this Q&A</a> that describes how to do it using <code>subprocess</code>, but I really like <code>pexpect.pxssh</code>'s convenience.
A code example would be greatly appreciated.</p>
| 1 | 2016-08-15T22:13:03Z | 39,381,252 | <p>I don't have pexpect handy to test my answer to your question, but I have a suggestion that should work and, if not, may at least get you closer. </p>
<p>Consider this command:</p>
<pre><code>$ ssh oak 'ftp << EOF
lpwd
quit
EOF'
Local directory: /home/jklowden
$
</code></pre>
<p>What is happening? The entire quoted string is passed as a single argument to ssh, where it is "executed" on the remote. While ssh isn't explicit about what that means, exactly, we know what <strong>execv</strong>(2) does: if <strong>execve</strong>(2) fails to execute its passed arguments, the execv function will invoke <code>/bin/sh</code> with the same arguments (in this case, our quoted string). The shell then evaluates the quoted string as separate arguments, detects the HereDoc redirection, and executes per usual. </p>
<p>Using that information, and taking a quick look at the <a href="https://pexpect.readthedocs.io/en/stable/api/pxssh.html" rel="nofollow">pexpect.pxssh</a> documentation, it looks like you want:</p>
<pre><code>s = pxssh.pxssh()
...
s.sendline('ftp << EOF\nlpwd\nquit\nEOF')
</code></pre>
<p>If that doesn't work, something is munging your data. Five minutes with <strong>strace</strong>(1) will tell you what happened to it, and you can start pointing fingers. ;-)</p>
<p>HTH. </p>
| 1 | 2016-09-08T01:26:46Z | [
"python",
"ssh",
"pexpect",
"heredoc",
"pxssh"
] |
Import Error from cryptography.hazmat.bindings._constant_time import lib | 38,963,857 | <p>So I'm trying to create an aws lambda function, to log in to an instance and do some stuff. And the script works fine outside of lambda, but when I package it using the same instructions as this <a href="https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/" rel="nofollow">https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/</a> it doesn't work. It throws this error.</p>
<pre><code>libffi-72499c49.so.6.0.4: cannot open shared object file: No such file or directory: ImportError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 12, in lambda_handler
    key = paramiko.RSAKey.from_private_key(key)
  File "/var/task/paramiko/pkey.py", line 217, in from_private_key
    key = cls(file_obj=file_obj, password=password)
  File "/var/task/paramiko/rsakey.py", line 42, in __init__
    self._from_private_key(file_obj, password)
  File "/var/task/paramiko/rsakey.py", line 168, in _from_private_key
    self._decode_key(data)
  File "/var/task/paramiko/rsakey.py", line 173, in _decode_key
    data, password=None, backend=default_backend()
  File "/var/task/cryptography/hazmat/backends/__init__.py", line 35, in default_backend
    _default_backend = MultiBackend(_available_backends())
  File "/var/task/cryptography/hazmat/backends/__init__.py", line 22, in _available_backends
    "cryptography.backends"
  File "/var/task/pkg_resources/__init__.py", line 2236, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/var/task/cryptography/hazmat/backends/openssl/__init__.py", line 7, in <module>
    from cryptography.hazmat.backends.openssl.backend import backend
  File "/var/task/cryptography/hazmat/backends/openssl/backend.py", line 15, in <module>
    from cryptography import utils, x509
  File "/var/task/cryptography/x509/__init__.py", line 7, in <module>
    from cryptography.x509.base import (
  File "/var/task/cryptography/x509/base.py", line 15, in <module>
    from cryptography.x509.extensions import Extension, ExtensionType
  File "/var/task/cryptography/x509/extensions.py", line 19, in <module>
    from cryptography.hazmat.primitives import constant_time, serialization
  File "/var/task/cryptography/hazmat/primitives/constant_time.py", line 9, in <module>
    from cryptography.hazmat.bindings._constant_time import lib
ImportError: libffi-72499c49.so.6.0.4: cannot open shared object file: No such file or directory
</code></pre>
| 1 | 2016-08-15T22:14:53Z | 38,966,482 | <p>The zip commands in that tutorial are missing a parameter. I ran into this exact problem today with pysftp, which is built on paramiko. <code>libffi-72499c49.so.6.0.4</code> is in a hidden dot directory inside <code>lib64/python2.7/site-packages/.libs_cffi_backend</code>. Depending on how you zipped up the dependencies in your virtualenv, you may have inadvertantly excluded this directory.</p>
<ol>
<li><p>First, make sure libffi-devel and openssl-devel are installed on your <a href="http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html" rel="nofollow">Amazon Linux instance</a>, otherwise the cryptography module may not be compiling correctly.</p>
<pre><code>sudo yum install libffi-devel openssl-devel
</code></pre></li>
</ol>
<p>If those packages were not installed before, delete and rebuild your virtualenv.</p>
<ol start="2">
<li><p>Make sure that when you are zipping up your site-packages you use '.' instead of '*'; otherwise you will not include files and directories that are hidden because their names begin with a period.</p>
<pre><code>cd path/to/my/helloworld-env/lib/python2.7/site-packages
zip -r9 path/to/zip/worker_function.zip .
cd path/to/my/helloworld-env/lib64/python2.7/site-packages
zip -r9 path/to/zip/worker_function.zip .
</code></pre></li>
</ol>
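<p>If you want to see why <code>*</code> loses the hidden directory, the same globbing rule is easy to reproduce with Python's standard library (the file names below are only illustrative):</p>

```python
import glob
import os
import tempfile

# Build a throwaway directory containing one visible file and one
# hidden dot-directory, mimicking site-packages/.libs_cffi_backend.
d = tempfile.mkdtemp()
os.mkdir(os.path.join(d, '.libs_cffi_backend'))
open(os.path.join(d, 'six.py'), 'w').close()

# Shell-style '*' globbing skips names starting with '.', which is
# exactly what `zip -r9 out.zip *` does.
star = sorted(os.path.basename(p) for p in glob.glob(os.path.join(d, '*')))
# Listing the directory itself ('.') sees everything, like `zip -r9 out.zip .`
everything = sorted(os.listdir(d))

print(star)        # ['six.py']
print(everything)  # ['.libs_cffi_backend', 'six.py']
```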
| 2 | 2016-08-16T04:03:12Z | [
"python",
"amazon-web-services",
"aws-lambda",
"paramiko"
] |
Partial sums and subtotals with Pandas | 38,963,882 | <p>I'm trying to achieve a table with subtotals as shown <a href="http://pandas.pydata.org/pandas-docs/stable/cookbook.html#pivot" rel="nofollow">here</a>, but<del> either that code doesn't work with the latest pandas version (0.18.1) or the example is wrong</del> for multiple columns instead of one. <a href="http://pastebin.com/B8MZwSUq" rel="nofollow">My code here</a> results in the following table</p>
<pre><code> 2014 2015 2016
project__name person__username activity__name issue__subject
Influenster employee1 Development 161.0 122.0 104.0
Fix bug 22.0 0.0 0.0
Refactor view 0.0 7.0 0.0
Quality assurance 172.0 158.0 161.0
employee2 Development 119.0 137.0 155.0
Quality assurance 193.0 186.0 205.0
employee3 Development Refactor view 0.0 0.0 1.0
Profit tools employee1 Development 177.0 136.0 216.0
Quality assurance 162.0 122.0 182.0
employee2 Development 154.0 168.0 124.0
Quality assurance 130.0 183.0 192.0
Fix bug 22.0 0.0 0.0
All 1312.0 1219.0 1340.0
</code></pre>
<p>and my desired output would be something like:</p>
<pre><code> 2014 2015 2016
project__name person__username activity__name issue__subject
Influenster employee1 Development 161.0 122.0 104.0
Fix bug 22.0 0.0 0.0
Refactor view 0.0 7.0 0.0
Total xxx xxx xxx
Quality assurance 172.0 158.0 161.0
Total xxx xxx xxx
Total xxx xxx xxx
employee2 Development 119.0 137.0 155.0
Total xxx xxx xxx
Quality assurance 193.0 186.0 205.0
Total xxx xxx xxx
Total xxx xxx xxx
employee3 Development Refactor view 0.0 0.0 1.0
Total xxx xxx xxx
Total xxx xxx xxx
Total xxx xxx xxx
Profit tools employee1 Development 177.0 136.0 216.0
Total xxx xxx xxx
Quality assurance 162.0 122.0 182.0
Total xxx xxx xxx
Total xxx xxx xxx
employee2 Development 154.0 168.0 124.0
Total xxx xxx xxx
Quality assurance 130.0 183.0 192.0
Fix bug 22.0 0.0 0.0
Total xxx xxx xxx
Total xxx xxx xxx
Total xxx xxx xxx
All 1312.0 1219.0 1340.0
</code></pre>
<p>Any help on how to achieve this is appreciated.</p>
| 2 | 2016-08-15T22:17:59Z | 38,964,596 | <h3>Recursive <code>groupby</code> and <code>apply</code></h3>
<pre><code>def append_tot(df):
if hasattr(df, 'name') and df.name is not None:
xs = df.xs(df.name)
else:
xs = df
gb = xs.groupby(level=0)
n = xs.index.nlevels
name = tuple('Total' if i == 0 else '' for i in range(n))
tot = gb.sum().sum().rename(name).to_frame().T
if n > 1:
        sm = gb.apply(append_tot)  # recurse into the next index level
else:
sm = gb.sum()
return pd.concat([sm, tot])
fields = ['project__name', 'person__username',
'activity__name', 'issue__subject']
append_tot(df.set_index(fields))
</code></pre>
<p><a href="http://i.stack.imgur.com/377Nd.png" rel="nofollow"><img src="http://i.stack.imgur.com/377Nd.png" alt="enter image description here"></a></p>
| 3 | 2016-08-15T23:42:27Z | [
"python",
"pandas"
] |
Partial sums and subtotals with Pandas | 38,963,882 | <p>I'm trying to achieve a table with subtotals as shown <a href="http://pandas.pydata.org/pandas-docs/stable/cookbook.html#pivot" rel="nofollow">here</a>, but<del> either that code doesn't work with the latest pandas version (0.18.1) or the example is wrong</del> for multiple columns instead of one. <a href="http://pastebin.com/B8MZwSUq" rel="nofollow">My code here</a> results in the following table</p>
<pre><code> 2014 2015 2016
project__name person__username activity__name issue__subject
Influenster employee1 Development 161.0 122.0 104.0
Fix bug 22.0 0.0 0.0
Refactor view 0.0 7.0 0.0
Quality assurance 172.0 158.0 161.0
employee2 Development 119.0 137.0 155.0
Quality assurance 193.0 186.0 205.0
employee3 Development Refactor view 0.0 0.0 1.0
Profit tools employee1 Development 177.0 136.0 216.0
Quality assurance 162.0 122.0 182.0
employee2 Development 154.0 168.0 124.0
Quality assurance 130.0 183.0 192.0
Fix bug 22.0 0.0 0.0
All 1312.0 1219.0 1340.0
</code></pre>
<p>and my desired output would be something like:</p>
<pre><code> 2014 2015 2016
project__name person__username activity__name issue__subject
Influenster employee1 Development 161.0 122.0 104.0
Fix bug 22.0 0.0 0.0
Refactor view 0.0 7.0 0.0
Total xxx xxx xxx
Quality assurance 172.0 158.0 161.0
Total xxx xxx xxx
Total xxx xxx xxx
employee2 Development 119.0 137.0 155.0
Total xxx xxx xxx
Quality assurance 193.0 186.0 205.0
Total xxx xxx xxx
Total xxx xxx xxx
employee3 Development Refactor view 0.0 0.0 1.0
Total xxx xxx xxx
Total xxx xxx xxx
Total xxx xxx xxx
Profit tools employee1 Development 177.0 136.0 216.0
Total xxx xxx xxx
Quality assurance 162.0 122.0 182.0
Total xxx xxx xxx
Total xxx xxx xxx
employee2 Development 154.0 168.0 124.0
Total xxx xxx xxx
Quality assurance 130.0 183.0 192.0
Fix bug 22.0 0.0 0.0
Total xxx xxx xxx
Total xxx xxx xxx
Total xxx xxx xxx
All 1312.0 1219.0 1340.0
</code></pre>
<p>Any help on how to achieve this is appreciated.</p>
| 2 | 2016-08-15T22:17:59Z | 38,965,198 | <p>Consider running three level pivot_tables with stack and concatenate them for a final groupby object. As mentioned, the docs does work if you see the use of <code>.stack()</code> on the corresponding pivot_table columns value:</p>
<pre><code># ISSUE_SUBJECT PIVOT
pt1 = pd.pivot_table(data=df, values=['2014', '2015', '2016'],
columns=['issue__subject'], aggfunc=np.sum,
index=['project__name', 'person__username', 'activity__name'],
margins=True, margins_name = 'Total')
pt1 = pt1.stack().reset_index()
# ACTIVITY_NAME PIVOT
pt2 = pd.pivot_table(data=df, values=['2014', '2015', '2016'],
columns=['activity__name'], aggfunc=np.sum,
index=['project__name', 'person__username'],
margins=True, margins_name = 'Total' )
pt2 = pt2.stack().reset_index()
# PERSON_USERNAME PIVOT
pt3 = pd.pivot_table(data=df, values=['2014', '2015', '2016'],
columns=['person__username'],
aggfunc=np.sum, index=['project__name'],
margins=True, margins_name = 'Total')
pt3 = pt3.stack().reset_index()
# CONCATENATE ALL THREE
gdf = pd.concat([pt1,
pt2[(pt2['project__name']=='Total') |
(pt2['activity__name']=='Total')],
pt3[(pt3['project__name']=='Total') |
(pt3['person__username']=='Total')]]).reset_index(drop=True)
# REPLACE NaNS IN COLUMN
gdf = gdf.apply(lambda x: np.where(pd.isnull(x), '', x), axis=1)
# FINAL GROUPBY (A COUNT USED TO RENDER GROUPBY)
gdf = gdf.groupby(['project__name', 'person__username',
'activity__name', 'issue__subject',
'2014', '2015', '2016']).agg(len)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>project__name person__username activity__name issue__subject 2014 2015 2016
Influenster Total 667.0 610.0 626.0 1
employee1 Development 161.0 122.0 104.0 1
Fix bug 22.0 0.0 0.0 1
Refactor view 0.0 7.0 0.0 1
Total 183.0 129.0 104.0 1
Quality assurance 172.0 158.0 161.0 1
Total 172.0 158.0 161.0 1
Total 355.0 287.0 265.0 1
employee2 Development 119.0 137.0 155.0 1
Total 119.0 137.0 155.0 1
Quality assurance 193.0 186.0 205.0 1
Total 193.0 186.0 205.0 1
Total 312.0 323.0 360.0 1
employee3 Development Refactor view 0.0 0.0 1.0 1
Total 0.0 0.0 1.0 1
Total 0.0 0.0 1.0 1
Profit tools Total 645.0 609.0 714.0 1
employee1 Development 177.0 136.0 216.0 1
Total 177.0 136.0 216.0 1
Quality assurance 162.0 122.0 182.0 1
Total 162.0 122.0 182.0 1
Total 339.0 258.0 398.0 1
employee2 Development 154.0 168.0 124.0 1
Total 154.0 168.0 124.0 1
Quality assurance 130.0 183.0 192.0 1
Fix bug 22.0 0.0 0.0 1
Total 152.0 183.0 192.0 1
Total 306.0 351.0 316.0 1
Total 1268.0 1212.0 1339.0 1
Fix bug 44.0 0.0 0.0 1
Refactor view 0.0 7.0 1.0 1
Total 1312.0 1219.0 1340.0 1
Development 633.0 570.0 600.0 1
Quality assurance 679.0 649.0 740.0 1
Total 1312.0 1219.0 1340.0 1
Total 1312.0 1219.0 1340.0 1
employee1 694.0 545.0 663.0 1
employee2 618.0 674.0 676.0 1
employee3 0.0 0.0 1.0 1
</code></pre>
| 2 | 2016-08-16T01:07:16Z | [
"python",
"pandas"
] |
How to get predictions out of tensorflow model after you've used tf.group on your optimizers | 38,963,951 | <p>I'm trying to write something similar to google's wide and deep learning after running into difficulties of doing multi-class classification(12 classes) with the sklearn api. I've tried to follow the advice in a couple of posts and used the tf.group(logistic_regression_optimizer, deep_model_optimizer). It seems to work but I was trying to figure out how to get predictions out of this model. I'm hoping that with the tf.group operator the model is learning to weight the logistic and deep models differently but I don't know how to get these weights out so I can get the right combination of the two model's predictions. Thanks in advance for any help. </p>
<p><a href="https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/Cs0R75AGi8A" rel="nofollow">https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/Cs0R75AGi8A</a></p>
<p><a href="https://stackoverflow.com/questions/34945554/how-to-set-layer-wise-learning-rate-in-tensorflow?newreg=36393d6817284f17a94e89454f0fa079">How to set layer-wise learning rate in Tensorflow?</a> </p>
| 0 | 2016-08-15T22:24:43Z | 38,976,009 | <p><code>tf.group()</code> creates a node that forces a list of other nodes to run using control dependencies. It's really just a handy way to package up logic that says "run this set of nodes, and I don't care about their output". In the discussion you point to, it's just a convenient way to create a single <code>train_op</code> from a pair of training operators. </p>
<p>If you're interested in the value of a Tensor (e.g., weights), you should pass it to <code>session.run()</code> explicitly, either in the same call as the training step, or in a separate session.run() invocation. You can pass a list of values to <code>session.run()</code>, for example, your <code>tf.group()</code> expression, as well as a Tensor whose value you would like to compute.</p>
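<p>As a schematic sketch (TensorFlow 1.x-style pseudocode; the names <code>wide_train_op</code>, <code>deep_train_op</code>, <code>weights</code>, and <code>predictions</code> are placeholders for nodes in your own graph, not real library symbols):</p>

```
# Placeholder names -- substitute the optimizers/variables from your graph.
train_op = tf.group(wide_train_op, deep_train_op)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Run the grouped training step and fetch tensors in the same call.
    # tf.group() itself is an Op with no output, so fetch the tensors you
    # care about (weights, predictions) alongside it.
    _, w_val, preds = sess.run(
        [train_op, weights, predictions],
        feed_dict={x: batch_x, y: batch_y})
```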
<p>Hope that helps!</p>
| 0 | 2016-08-16T13:15:10Z | [
"python",
"tensorflow",
"deep-learning"
] |
Active tasks is a negative number in Spark UI | 38,964,007 | <p>When using <a href="/questions/tagged/spark-1.6.2" class="post-tag" title="show questions tagged 'spark-1.6.2'" rel="tag">spark-1.6.2</a> and <a href="/questions/tagged/pyspark" class="post-tag" title="show questions tagged 'pyspark'" rel="tag">pyspark</a>, I saw this:</p>
<p><a href="http://i.stack.imgur.com/m3yQm.png" rel="nofollow"><img src="http://i.stack.imgur.com/m3yQm.png" alt="enter image description here"></a></p>
<p>where you see that the active tasks are a <em>negative</em> number (the difference between the total tasks and the completed tasks).</p>
<p>What is the source of this error?</p>
<hr>
<p>Note that I have <em>many</em> executors. However, one task appears to have been idle (I don't see any progress), while another identical task completed normally.</p>
<hr>
<p>Also related is this <a href="https://www.mail-archive.com/issues@spark.apache.org/msg04931.html" rel="nofollow">mail</a>. I can confirm that many tasks are being created, since I am using 1k or 2k executors.</p>
<p>The error I am getting is a bit different:</p>
<pre><code>16/08/15 20:03:38 ERROR LiveListenerBus: Dropping SparkListenerEvent because no remaining room in event queue. This likely means one of the SparkListeners is too slow and cannot keep up with the rate at which tasks are being started by the scheduler.
16/08/15 20:07:18 WARN TaskSetManager: Lost task 20652.0 in stage 4.0 (TID 116652, myfoo.com): FetchFailed(BlockManagerId(61, mybar.com, 7337), shuffleId=0, mapId=328, reduceId=20652, message=
org.apache.spark.shuffle.FetchFailedException: java.util.concurrent.TimeoutException: Timeout waiting for task.
</code></pre>
| 13 | 2016-08-15T22:31:41Z | 39,331,671 | <p>It is a Spark issue. It occurs when executors restart after failures. The JIRA issue for the same is already created. You can get more details about the same from <a href="https://issues.apache.org/jira/browse/SPARK-10141" rel="nofollow">https://issues.apache.org/jira/browse/SPARK-10141</a> link.</p>
| 5 | 2016-09-05T13:27:47Z | [
"python",
"hadoop",
"apache-spark",
"bigdata",
"distributed-computing"
] |
Active tasks is a negative number in Spark UI | 38,964,007 | <p>When using <a href="/questions/tagged/spark-1.6.2" class="post-tag" title="show questions tagged 'spark-1.6.2'" rel="tag">spark-1.6.2</a> and <a href="/questions/tagged/pyspark" class="post-tag" title="show questions tagged 'pyspark'" rel="tag">pyspark</a>, I saw this:</p>
<p><a href="http://i.stack.imgur.com/m3yQm.png" rel="nofollow"><img src="http://i.stack.imgur.com/m3yQm.png" alt="enter image description here"></a></p>
<p>where you see that the active tasks are a <em>negative</em> number (the difference between the total tasks and the completed tasks).</p>
<p>What is the source of this error?</p>
<hr>
<p>Note that I have <em>many</em> executors. However, one task appears to have been idle (I don't see any progress), while another identical task completed normally.</p>
<hr>
<p>Also related is this <a href="https://www.mail-archive.com/issues@spark.apache.org/msg04931.html" rel="nofollow">mail</a>. I can confirm that many tasks are being created, since I am using 1k or 2k executors.</p>
<p>The error I am getting is a bit different:</p>
<pre><code>16/08/15 20:03:38 ERROR LiveListenerBus: Dropping SparkListenerEvent because no remaining room in event queue. This likely means one of the SparkListeners is too slow and cannot keep up with the rate at which tasks are being started by the scheduler.
16/08/15 20:07:18 WARN TaskSetManager: Lost task 20652.0 in stage 4.0 (TID 116652, myfoo.com): FetchFailed(BlockManagerId(61, mybar.com, 7337), shuffleId=0, mapId=328, reduceId=20652, message=
org.apache.spark.shuffle.FetchFailedException: java.util.concurrent.TimeoutException: Timeout waiting for task.
</code></pre>
| 13 | 2016-08-15T22:31:41Z | 39,336,659 | <p>Answered in the Spark-dev mailing list from <a href="http://stackoverflow.com/questions/39260820/is-sparks-kmeans-unable-to-handle-bigdata">S. Owen</a>, there are several JIRA tickets that are relevant to this issue, such as:</p>
<ol>
<li><a href="https://issues.apache.org/jira/browse/YARN-2523?jql=text%20~%20%22ResourceManager%20UI%20showing%20negative%22" rel="nofollow">ResourceManager UI showing negative value</a></li>
<li><a href="https://issues.apache.org/jira/browse/YARN-1697?jql=text%20~%20%22negative%22" rel="nofollow">NodeManager reports negative running containers</a></li>
</ol>
<p>This behavior usually occurs when (many) executors restart after failure(s).</p>
<hr>
<p>This behavior can also occur when the application uses too many partitions. Use <code>coalesce()</code> to reduce them and fix this case.</p>
<p>To be exact, in <a href="http://stackoverflow.com/questions/39401690/prepare-my-bigdata-with-spark-via-python">Prepare my bigdata with Spark via Python</a>, I had >400k partitions. I used <code>data.coalesce(1024)</code>, as described in <a class='doc-link' href="http://stackoverflow.com/documentation/apache-spark/5822/partitions/21155/repartition-an-rdd#t=201609112000500779093">Repartition an RDD</a>, and I was able to bypass that Spark UI bug. You see, <a class='doc-link' href="http://stackoverflow.com/documentation/apache-spark/5822/partitions#t=201609112000500779093">Partitions</a> are a very important concept when it comes to distributed computing and Spark.</p>
<p>In my question I also use 1-2k executors, so it must be related.</p>
<p>Note: with too few partitions you might experience this <a href="http://stackoverflow.com/questions/28967111/spark-java-error-size-exceeds-integer-max-value">Spark Java Error: Size exceeds Integer.MAX_VALUE</a>.</p>
| 5 | 2016-09-05T19:32:45Z | [
"python",
"hadoop",
"apache-spark",
"bigdata",
"distributed-computing"
] |
merge values of list of lists with a list of dictionaries | 38,964,024 | <pre><code>u= [['1', '2'], ['3'], ['4', '5', '6'], ['7', '8', '9', '10']]
v=[{'id': 'a', 'adj': ['blue', 'yellow']}, {'id': 'b', 'adj': ['purple', 'red']}, {'id': 'c', 'adj': ['green', 'orange']}, {'id': 'd', 'adj': ['black', 'purple']}]
</code></pre>
<p>I want:</p>
<pre><code> result=[ {'id': 'a', 'adj': ['blue', 'yellow'], 'value': '1' },
{'id': 'a', 'adj': ['blue', 'yellow'], 'value': '2' },
{'id': 'a', 'adj': ['purple', 'red'], 'value': '3' },
...]
</code></pre>
<p>I've converted <code>u</code> to a list of dictionaries:</p>
<pre><code>m=[]
for i in u:
s={}
s['value']=i
m.append(s)
#>>m= [{'value': ['1', '2']}, {'value': ['3']}, {'value': ['4', '5', '6']}, {'value': ['7', '8', '9', '10']}]
</code></pre>
<p>Then I tried to apply the <code>zip</code> function...</p>
<pre><code>for i, j in enumerate(v):
    for s, t in enumerate(m):
        if i == s:
            # zip the 2 dictionaries together. Stuck here
</code></pre>
<p>Thanks a lot in advance! This is my 2nd week of learning programming.</p>
| 2 | 2016-08-15T22:33:49Z | 38,964,114 | <p>You need to zip, iterate over each sublist from u,<a href="https://docs.python.org/2/library/copy.html#copy.deepcopy" rel="nofollow"><em>deepcopy</em></a> each dict from v and add the new key/value pairing, finally append the new dict to a list: </p>
<pre><code>from copy import deepcopy
u= [['1', '2'], ['3'], ['4', '5', '6'], ['7', '8', '9', '10']]
v=[{'id': 'a', 'adj': ['blue', 'yellow']}, {'id': 'b', 'adj': ['purple', 'red']}, {'id': 'c', 'adj': ['green', 'orange']}, {'id': 'd', 'adj': ['black', 'purple']}]
out = []
# match up corresponding elements fromm both lists
for dct, sub in zip(v, u):
# iterate over each sublist
for val in sub:
# deepcopy the dict as it contains mutable elements (lists)
dct_copy = deepcopy(dct)
# set the new key/value pairing
dct_copy["value"] = val
# append the dict to our out list
out.append(dct_copy)
from pprint import pprint as pp
pp(out)
</code></pre>
<p>Which will give you:</p>
<pre><code>[{'adj': ['blue', 'yellow'], 'id': 'a', 'value': '1'},
{'adj': ['blue', 'yellow'], 'id': 'a', 'value': '2'},
{'adj': ['purple', 'red'], 'id': 'b', 'value': '3'},
{'adj': ['green', 'orange'], 'id': 'c', 'value': '4'},
{'adj': ['green', 'orange'], 'id': 'c', 'value': '5'},
{'adj': ['green', 'orange'], 'id': 'c', 'value': '6'},
{'adj': ['black', 'purple'], 'id': 'd', 'value': '7'},
{'adj': ['black', 'purple'], 'id': 'd', 'value': '8'},
{'adj': ['black', 'purple'], 'id': 'd', 'value': '9'},
{'adj': ['black', 'purple'], 'id': 'd', 'value': '10'}]
</code></pre>
<p>dicts have a <code>.copy</code> method, or you could call <code>dict(dct)</code>, but because you have mutable objects as values, just doing a shallow copy will not work. The example below shows you the actual difference:</p>
<pre><code>In [19]: d = {"foo":[1, 2, 4]}
In [20]: d1_copy = d.copy() # shallow copy, same as dict(d)
In [21]: from copy import deepcopy
In [22]: d2_copy = deepcopy(d) # deep copy
In [23]: d["foo"].append("bar")
In [24]: d
Out[24]: {'foo': [1, 2, 4, 'bar']}
In [25]: d1_copy
Out[25]: {'foo': [1, 2, 4, 'bar']} # copy also changed
In [26]: d2_copy
Out[26]: {'foo': [1, 2, 4]} # deepcopy is still the same
</code></pre>
<p><a href="http://stackoverflow.com/questions/17246693/what-exactly-is-the-difference-between-shallow-copy-deepcopy-and-normal-assignm">what-exactly-is-the-difference-between-shallow-copy-deepcopy-and-normal-assignment</a></p>
| 0 | 2016-08-15T22:43:22Z | [
"python",
"loops",
"indexing"
] |
merge values of list of lists with a list of dictionaries | 38,964,024 | <pre><code>u= [['1', '2'], ['3'], ['4', '5', '6'], ['7', '8', '9', '10']]
v=[{'id': 'a', 'adj': ['blue', 'yellow']}, {'id': 'b', 'adj': ['purple', 'red']}, {'id': 'c', 'adj': ['green', 'orange']}, {'id': 'd', 'adj': ['black', 'purple']}]
</code></pre>
<p>I want:</p>
<pre><code> result=[ {'id': 'a', 'adj': ['blue', 'yellow'], 'value': '1' },
{'id': 'a', 'adj': ['blue', 'yellow'], 'value': '2' },
{'id': 'a', 'adj': ['purple', 'red'], 'value': '3' },
...]
</code></pre>
<p>I've converted <code>u</code> to a list of dictionaries:</p>
<pre><code>m=[]
for i in u:
s={}
s['value']=i
m.append(s)
#>>m= [{'value': ['1', '2']}, {'value': ['3']}, {'value': ['4', '5', '6']}, {'value': ['7', '8', '9', '10']}]
</code></pre>
<p>Then I tried to apply the <code>zip</code> function...</p>
<pre><code>for i, j in enumerate(v):
    for s, t in enumerate(m):
        if i == s:
            # zip the 2 dictionaries together. Stuck here
</code></pre>
<p>Thanks a lot in advance! This is my 2nd week of learning programming.</p>
| 2 | 2016-08-15T22:33:49Z | 38,964,115 | <p>Apply <code>zip</code> on both lists, and create a new dictionary where the old ones have the values from the corresponding list added as the <em>key-value</em> entry <code>value</code>: <em>number from list</em>:</p>
<pre><code>>>> import pprint, copy
>>> result = [dict(copy.deepcopy(j), value = ind) for i, j in zip(u, v) for ind in i]
>>> pprint.pprint(result)
[{'adj': ['blue', 'yellow'], 'id': 'a', 'value': '1'},
{'adj': ['blue', 'yellow'], 'id': 'a', 'value': '2'},
{'adj': ['purple', 'red'], 'id': 'b', 'value': '3'},
{'adj': ['green', 'orange'], 'id': 'c', 'value': '4'},
{'adj': ['green', 'orange'], 'id': 'c', 'value': '5'},
{'adj': ['green', 'orange'], 'id': 'c', 'value': '6'},
{'adj': ['black', 'purple'], 'id': 'd', 'value': '7'},
{'adj': ['black', 'purple'], 'id': 'd', 'value': '8'},
{'adj': ['black', 'purple'], 'id': 'd', 'value': '9'},
{'adj': ['black', 'purple'], 'id': 'd', 'value': '10'}]
</code></pre>
| 0 | 2016-08-15T22:43:36Z | [
"python",
"loops",
"indexing"
] |
merge values of list of lists with a list of dictionaries | 38,964,024 | <pre><code>u= [['1', '2'], ['3'], ['4', '5', '6'], ['7', '8', '9', '10']]
v=[{'id': 'a', 'adj': ['blue', 'yellow']}, {'id': 'b', 'adj': ['purple', 'red']}, {'id': 'c', 'adj': ['green', 'orange']}, {'id': 'd', 'adj': ['black', 'purple']}]
</code></pre>
<p>I want:</p>
<pre><code> result=[ {'id': 'a', 'adj': ['blue', 'yellow'], 'value': '1' },
{'id': 'a', 'adj': ['blue', 'yellow'], 'value': '2' },
{'id': 'a', 'adj': ['purple', 'red'], 'value': '3' },
...]
</code></pre>
<p>I've converted <code>u</code> to a list of dictionaries:</p>
<pre><code>m=[]
for i in u:
s={}
s['value']=i
m.append(s)
#>>m= [{'value': ['1', '2']}, {'value': ['3']}, {'value': ['4', '5', '6']}, {'value': ['7', '8', '9', '10']}]
</code></pre>
<p>Then I tried to apply the <code>zip</code> function...</p>
<pre><code>for i, j in enumerate(v):
    for s, t in enumerate(m):
        if i == s:
            # zip the 2 dictionaries together. Stuck here
</code></pre>
<p>Thanks a lot in advance! This is my 2nd week of learning programming.</p>
| 2 | 2016-08-15T22:33:49Z | 38,965,430 | <p>You can use the following code to acquire the desired result.</p>
<pre><code>result = []
for index, d in enumerate(u):
for value in d:
result.append(dict(v[index], value=value))
</code></pre>
<p>It iterates over an <code>enumerate</code>-ion of <code>u</code>, and then appends a combination of the correct <code>v</code> dict and the <code>value</code> to the <code>result</code> list.</p>
<p>You can compress this into a relatively clean one-liner using a list comprehension.</p>
<pre><code>result = [dict(v[index], value=value) for index, d in enumerate(u) for value in d]
</code></pre>
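<p>One caveat worth knowing: <code>dict(v[index], value=value)</code> makes a <em>shallow</em> copy, so every result row shares the same <code>'adj'</code> list object with the original dict in <code>v</code>. That is fine for read-only use, but mutating one row's list changes them all:</p>

```python
v0 = {'id': 'a', 'adj': ['blue', 'yellow']}

row = dict(v0, value='1')   # shallow copy: 'adj' is the very same list object
row['adj'].append('green')  # mutate it through the copy...

print(row['adj'] is v0['adj'])  # True -- both names point at one list
print(v0['adj'])                # ['blue', 'yellow', 'green'] -- original changed too
```

<p>If you plan to modify the nested lists, reach for <code>copy.deepcopy</code> instead, as shown in the other answers to this question.</p>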
| 0 | 2016-08-16T01:38:15Z | [
"python",
"loops",
"indexing"
] |
Creating a chat (client) program. How can I add simultaneous conversation? | 38,964,165 | <pre><code># -*- coding: utf-8 -*-
#!/usr/bin/python3
import socket
# there is no UDP server at Google -> let's use netcat as a UDP server!
# Chat program: only one person talks at a time
# TODO: implement talking at the same time
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
"""
pacotes_recebidos = client.recvfrom(1024) returns a tuple:
(' llallalaaaa\n', ('192.168.1.4', 667))
i.e. received msg + (IP, port)
"""
try:
while 1: #while True
        client.sendto(input("Voce: ") +"\n", ("192.168.1.4", 668)) # address of the Kali Linux UDP server running netcat
msg, friend = client.recvfrom(1024)
print(str(friend) + ": " + msg)
        # if you only want the IP, use friend[0]
        # we convert with str(friend) because we got the error:
        # (TypeError(can only concatenate tuple (not str) to tuple,))
client.close()
except Exception as erro:
print("Conexao falhou ")
print("O erro foi: ", erro)
client.close()
</code></pre>
<p>In Python 3.5 (Linux), when I send "hi" this code shows the error:</p>
<pre><code>('O erro foi: ', NameError("name 'hi' is not defined",))
</code></pre>
<p>The code runs on Python 2.7.</p>
<p>I would like to make it possible for two people to talk simultaneously. How can I do that? At the moment only one person at a time can enter a message: we have to wait for one of the participants to write and press Enter before continuing. Could someone help me?
I use netcat as the server.</p>
| 2 | 2016-08-15T22:49:12Z | 38,982,897 | <p>Here is an example of a 'multithreaded' UDP python client in python3 that allows messages to be sent and received simultaneously.</p>
<p>I personally like to make a Wrapper for the <code>socket</code> class that has all the threading and functions built in, so we'll start with the wrapper.</p>
<pre><code>import socket
import threading
class socketwrapper:
def __init__(self, host, port):
self.server = (host, port)
self.client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.client.settimeout(5) # Time is in seconds
self.connected = False
def startrecvdata(self):
recvThread = threading.Thread(target=self.recvdata)
recvThread.daemon = True
recvThread.start()
self.connected = True
def recvdata(self):
while self.connected:
try:
data, friend = self.client.recvfrom(1024)
if data:
print(str(friend) + ": " + data.decode('utf-8'))
            except socket.timeout:
                print("No Message Received before timeout.")
                continue
except:
self.connected = False
self.stop()
def sendmessage(self, data):
data += "\n"
self.client.sendto(data.encode('utf-8'), self.server)
def stop(self):
self.client.shutdown(socket.SHUT_RDWR)
self.client.close()
</code></pre>
<p>So as you can see we have a couple functions in this, the two you should notice are <code>startrecvdata(self)</code> and <code>recvdata(self)</code>. We will call the <code>startrecvdata(self)</code> from our main function, which will start the <code>recvdata(self)</code> thread. That function will print any received data to the console.</p>
<p>Additionally, notice that we have <code>settimeout(5)</code> in the <code>__init__</code> function of the wrapper, which sets a 5 second timeout on the socket connection. This way, we can close the entire program cleanly, shutting down and closing the socket with the <code>stop()</code> function.</p>
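<p>You can see the timeout behavior in isolation with nothing but the standard library; a bound UDP socket that nobody sends to will raise <code>socket.timeout</code> instead of blocking forever (the 0.2-second value here is arbitrary):</p>

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(('127.0.0.1', 0))   # any free local port
s.settimeout(0.2)          # seconds

timed_out = False
try:
    s.recvfrom(1024)       # nothing is being sent to this socket
except socket.timeout:
    timed_out = True       # raised after ~0.2s instead of hanging
finally:
    s.close()

print(timed_out)  # True
```

<p>This is exactly what lets the <code>recvdata</code> loop in the wrapper wake up periodically instead of blocking forever on <code>recvfrom</code>.</p>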
<p>Now for the main loop. Since we setup all of our functions in the wrapper class, we can have a super simple and clean loop:</p>
<pre><code>def main():
server = socketwrapper('192.168.1.1', 30000)
server.startrecvdata()
while not server.connected:
continue
print("Connected to server! Type 'exit' to quit.")
while server.connected:
message = input("Voce: ")
if message == "exit":
server.connected = False
break
server.sendmessage(message)
server.stop()
</code></pre>
<p>In this loop, we create a an instance of our <code>socketwrapper</code> which initializes everything for us. Then we call <code>server.startrecvdata()</code> which, as we said above, starts the function to receive and print data from the UDP connection. The <code>while not server.connected</code> blocks the program until the thread has started.</p>
<p>Lastly, we have our <code>while server.connected</code> loop, which waits for user input in the console. We check if the user wants to exit, and if they do, we set <code>server.connected = False</code> and <code>break</code> our while loop.<br>
If the user does not want to exit, we send the message to the UDP server.</p>
<p>After the loop ends, we call <code>server.stop</code> to ensure that the socket is closed before exiting the application.</p>
| 2 | 2016-08-16T19:22:08Z | [
"python",
"multithreading",
"python-2.7",
"python-3.x"
] |
Strange behaviour of Class instance - when updating internal variables | 38,964,206 | <p>Let's say I define the following class in Python. </p>
<pre><code>class test():
def __init__(self):
self.x = 0
self.y = self.x ** 2
def check(self):
self.x = self.x + 1
print self.x
print self.y
</code></pre>
<p>Here I have two internal variables x and y. In the initialization I have set $$ y = x^2 $$. Now each time I call the method <code>check()</code> it increments the value of x by 1 : <code>self.x = self.x + 1</code>. However when I print the values x correctly increments by 1 but y remains 0 always. (Should'nt it be x^2??) What's going wrong?</p>
| -1 | 2016-08-15T22:53:40Z | 38,964,220 | <p>Your <code>self.y</code> is only assigned when you <strong>first</strong> create an instance of your class. It is not updated every time you call a function. The <code>def __init__(self):</code> function is only executed when you do<br>
<code>myVar = test()</code>, therefore the value of y is only assigned and updated once.</p>
<p>Your code would need to be:</p>
<pre><code>class test():
def __init__(self):
self.x = 0
self.y = self.x ** 2
def check(self):
self.x = self.x + 1
self.y = self.x ** 2
print self.x
print self.y
</code></pre>
| 2 | 2016-08-15T22:55:46Z | [
"python",
"class",
"object"
] |
Strange behaviour of Class instance - when updating internal variables | 38,964,206 | <p>Let's say I define the following class in Python. </p>
<pre><code>class test():
def __init__(self):
self.x = 0
self.y = self.x ** 2
def check(self):
self.x = self.x + 1
print self.x
print self.y
</code></pre>
<p>Here I have two internal variables x and y. In the initialization I have set $$ y = x^2 $$. Now each time I call the method <code>check()</code> it increments the value of x by 1 : <code>self.x = self.x + 1</code>. However when I print the values x correctly increments by 1 but y remains 0 always. (Should'nt it be x^2??) What's going wrong?</p>
| -1 | 2016-08-15T22:53:40Z | 38,964,232 | <p>Python is not like a spreadsheet where updating one cell (variable) can automatically affect the values of others.</p>
<p>Following initialisation, the code never changes the value of <code>self.y</code>. You need to add some code to do that, e.g.</p>
<pre><code> def check(self):
self.x = self.x + 1
self.y = self.x ** 2
print self.x
print self.y
</code></pre>
<hr>
<p>There is a way to implement that behaviour though: use <a href="https://docs.python.org/3/library/functions.html#property" rel="nofollow">properties</a>:</p>
<pre><code>class Test(object):
def __init__(self):
self._x = 0
self.y = 0
@property
def x(self):
return self._x
@x.setter
def x(self, value):
self._x = value
self.y = value ** 2
def check(self):
self.x = self.x + 1
print self.x, self.y
>>> t = Test()
>>> for i in range(5):
... t.check()
1 1
2 4
3 9
4 16
5 25
>>> t.x = 200
>>> t.y
40000
</code></pre>
<p>If you wanted, you could also implement <code>y</code> as a property and have it set <code>x</code> to its square root when it's updated. This would enforce the relationship that <code>x</code> is the square root of <code>y</code> and vice versa.</p>
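<p>That two-way idea can be sketched like this (assuming non-negative values; here <code>y</code> is computed on the fly instead of stored, which is another way to keep the pair in sync):</p>

```python
import math

class Test(object):
    def __init__(self):
        self._x = 0

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    @property
    def y(self):
        # derived from x on every access, so it can never go stale
        return self._x ** 2

    @y.setter
    def y(self, value):
        # setting y moves x to its (non-negative) square root
        self._x = math.sqrt(value)

t = Test()
t.x = 5
print(t.y)  # 25
t.y = 49
print(t.x)  # 7.0
```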
| 2 | 2016-08-15T22:57:27Z | [
"python",
"class",
"object"
] |
Strange behaviour of Class instance - when updating internal variables | 38,964,206 | <p>Let's say I define the following class in Python. </p>
<pre><code>class test():
def __init__(self):
self.x = 0
self.y = self.x ** 2
def check(self):
self.x = self.x + 1
print self.x
print self.y
</code></pre>
<p>Here I have two internal variables x and y. In the initialization I have set $$ y = x^2 $$. Now each time I call the method <code>check()</code> it increments the value of x by 1: <code>self.x = self.x + 1</code>. However, when I print the values, x correctly increments by 1 but y always remains 0. (Shouldn't it be x^2??) What's going wrong?</p>
| -1 | 2016-08-15T22:53:40Z | 38,964,244 | <p>The variable <code>y</code> is only set when it is instantiated and does not get updated in your code.</p>
<p>You should have separate methods to update and/or increment <code>x</code> and adjust <code>y</code> as necessary, along with your method to check the values.</p>
<pre><code>class test():
def __init__(self):
self.x = 0
self.y = 0
def update_x(self, x):
self.x = x
self.y = x ** 2
def increment_x(self):
self.x += 1
self.y = self.x ** 2
def check(self):
print self.x
print self.y
</code></pre>
| 0 | 2016-08-15T22:59:00Z | [
"python",
"class",
"object"
] |
Can't get the next line using next() in a loop (python) | 38,964,245 | <p>I'm trying to write a code that iterates through a txt file and only gets the lines I want to print them out.</p>
<p>the text file should look like so:</p>
<pre><code>mimi
passwordmimi
mimi johnson
somejob
joji
passwordjoji
jojo
somejob
john
passwordjohn
jonathan
somejob
....
</code></pre>
<p>and so on. This text file basically contains user information (for a log in). I need to print out everyone's username and their real name (ex: mimi and mimi johnson), and only those. I don't want the current user's info to print out (in this ex: joji).</p>
<p>here is my code:</p>
<pre><code>username="joji"
file=open("username.txt","r")
x=file.readlines()
x=[item.rstrip('\n') for item in x]
x=iter(x)
for line in x:
if line==username:
next(x,None)
next(x,None)
next(x,None)
else:
        print line + " username"  # username should print out. ex: mimi or john
next(x,None)
        print line + " real name"  # real name should print out. ex: mimi johnson or jonathan
</code></pre>
<p>for whatever reason when I run this program and i print out the second **** i put, it prints out the username's twice. (so ex:</p>
<pre><code>mimi username
mimi real name
mimi johnson username
mimi johnson real name
john username
john real name
jonathan username
jonathan real name
....
</code></pre>
<p>why is that? it should print out </p>
<pre><code>mimi username
mimi johnson real name
john username
jonathan realname
...
</code></pre>
<p>If someone could help me out I'd be really grateful; I don't get Python yet.
I'm also open to any other suggestions on how to do this.</p>
<p>EDIT::: i tried making a change with a suggestion this is the outcome:</p>
<p>new block of code:</p>
<pre><code>else:
print line + "username"
line =next(x,None)
print line
</code></pre>
<p>this is the new outcome:</p>
<pre><code> mimi username
passmimi real name
mimi johnson username
somejob real name
john username
passjohn real name
jonathan username
somejob real name(***im assuming this one is from john's job)
</code></pre>
<p>:/ it's not doing what it's supposed to</p>
| 0 | 2016-08-15T22:59:25Z | 38,964,593 | <p>I would recommend using regex to parse this file:</p>
<pre><code>import re
# regex expression to parse the file as you provided it
# you could access the parsed data as a dict using the
# keys "username", "password", "real_name" and "job"
ex = "\n*(?P<username>.+)\n(?P<password>.+)\n(?P<real_name>.+)\n(?P<job>.+)(?:\n|$)"
with open("usernames.txt", 'r') as users:
matches = re.finditer(ex, users.read())
for match in matches:
user = match.groupdict() # user is a dict
# print username and real name
print(user['username'], "username", user['real_name'], "real name")
</code></pre>
<hr>
<p><strong>Edit</strong>: I figured that regex was not really needed here as the format of this file is quite simple. So here is the same thing without using regex.</p>
<pre><code>def parse(usersfile):
# strip line break characters
lines = (line.rstrip('\n') for line in usersfile)
    # keys to be used in the dictionary
keys = ('username', 'password', 'real_name', 'job')
while True:
        # build the user dictionary with the keys above
user = {key: line for key, line in zip(keys, lines)}
# yield user if all the keys are in the dict
if len(user) == len(keys):
yield user
else: # stop the loop
break
with open("usernames.txt", 'r') as usersfile:
for user in parse(usersfile):
# print username and real name
print(user['username'], "username", user['real_name'], "real name")
</code></pre>
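<p>Since every record is exactly four lines, a third option (a regex-free sketch) is to pull four lines at a time from a single iterator with <code>zip</code>:</p>

```python
def records(lines, size=4):
    # zip(*[it] * size) draws `size` items at a time from the same iterator
    it = iter(line.rstrip('\n') for line in lines if line.strip())
    return zip(*([it] * size))

# stand-in for the file contents
lines = ["mimi", "passwordmimi", "mimi johnson", "somejob",
         "joji", "passwordjoji", "jojo", "somejob"]

current_user = "joji"
out = []
for username, password, real_name, job in records(lines):
    if username != current_user:
        out.append(username + " username")
        out.append(real_name + " real name")
print(out)  # ['mimi username', 'mimi johnson real name']
```

<p>On Python 2, <code>itertools.izip</code> does the same thing lazily.</p>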
| 1 | 2016-08-15T23:42:10Z | [
"python",
"loops",
"iteration",
"next"
] |
Is there a way to hide a displayed object using IPython? | 38,964,323 | <p>I am using the IPython module in a Jupyter Notebook.
I am using the display module to display buttons.</p>
<pre><code>from ipywidgets import widgets
import IPython.display as dsply
def click_reset(b):
print("reset domains button")
restoreDomains()
resetButton = widgets.Button(description="Reset Domains")
resetButton.on_click(click_reset)
dsply.display(resetButton)
</code></pre>
<p>This works fine, but I am trying to find a way to programatically hide certain buttons. Based off the execution of my other code, I want certain buttons to be removed from the UI. Is there anything like <code>hide(resetButton)</code> that I can use?</p>
| 1 | 2016-08-15T23:09:13Z | 38,966,556 | <p>You can hide every widget by setting it's property <code>visible</code> to <code>False</code> </p>
<pre><code>resetButton.visible = False
</code></pre>
| 1 | 2016-08-16T04:13:24Z | [
"python",
"python-3.x",
"ipython",
"display"
] |
How to plot points on hexbin graph in python? | 38,964,381 | <p>I have one hexbin graphm but how to add some other points(such as two points (x1,y1)) into this graph ???</p>
<p>Thanks </p>
<pre><code>x=log(PR)
y=log(CR)
xmin = x.min()
xmax = x.max()
ymin = y.min()
ymax = y.max()
x1=log(np.array([10,20]))
y1=log(np.array([10,20]))
plt.hexbin(x1, y1, bins='log',color='red')
plt.hexbin(x, y, bins='log', cmap=plt.cm.gist_ncar)
plt.axis([xmin, xmax, ymin, ymax])
plt.title("With a log color scale")
cb = plt.colorbar()
cb.set_label('log10(N)')
plt.show()
</code></pre>
| -1 | 2016-08-15T23:16:41Z | 38,977,186 | <p>There are several ways, depending on what you want to plot. Some examples:</p>
<pre><code>import matplotlib.pylab as pl
import numpy as np
x=np.random.random(1000)
y=np.random.random(1000)
pl.figure()
pl.hexbin(x, y, gridsize=5, cmap=pl.cm.PuBu_r)
pl.scatter(0.2, 0.2, s=400)
pl.text(0.5, 0.5, 'a simple text')
pl.text(0.6, 0.6, 'an aligned text', va='center', ha='center')
pl.annotate("an annotation for that scatter point", (0.2, 0.2), (0.3, 0.3),\
arrowprops=dict(facecolor='black', shrink=0.05))
</code></pre>
<p><a href="http://i.stack.imgur.com/Gq7zu.png" rel="nofollow"><img src="http://i.stack.imgur.com/Gq7zu.png" alt="enter image description here"></a></p>
| 0 | 2016-08-16T14:09:23Z | [
"python",
"matplotlib"
] |
Python - comparing lists of dictionaries using tuples - unexpected behaviour? | 38,964,394 | <p>I've been attempting to compare two lists of dictionaries, and to find the userid's of new people in list2 that aren't in list1. For example the first list:</p>
<pre><code>list1 = [{"userid": "13451", "name": "james", "age": "24", "occupation": "doctor"}, {"userid": "94324", "name": "john", "age": "33", "occupation": "pilot"}]
</code></pre>
<p>and the second list:</p>
<pre><code>list2 = [{"userid": "13451", "name": "james", "age": "24", "occupation": "doctor"}, {"userid": "94324", "name": "john", "age": "33", "occupation": "pilot"}, {"userid": "34892", "name": "daniel", "age": "64", "occupation": "chef"}]
</code></pre>
<p>the desired output:</p>
<pre><code>newpeople = ['34892']
</code></pre>
<p>This is what I've managed to put together:</p>
<pre><code>list1tuple = ((d["userid"]) for d in list1)
list2tuple = ((d["userid"]) for d in list2)
newpeople = [t for t in list2tuple if t not in list1tuple]
</code></pre>
<p>This actually seems to be pretty efficient, especially considering the lists I am using might contain over 50,000 dictionaries. However, here's the issue:</p>
<p>If it finds a userid in list2 that indeed isn't in list1, it adds it to newpeople (as desired), <em>but then also adds every other userid that comes afterwards in list2 to newpeople as well</em>.</p>
<p>So, say list2 contains 600 userids and the 500th userid in list2 isn't found anywhere in list1, the first item in newpeople will be the 500th userid (again, as desired), but then followed by the other 100 userids that came after the new one.</p>
<p>This is pretty perplexing to me - I'd greatly appreciate anyone helping me get to the bottom of why this is happening. </p>
| 1 | 2016-08-15T23:18:44Z | 38,964,424 | <p>Currently you have set <code>list1tuple</code> and <code>list2tuple</code> as:</p>
<pre><code>list1tuple = ((d["userid"]) for d in list1)
list2tuple = ((d["userid"]) for d in list2)
</code></pre>
<p>These are <em>generators</em>, not lists (or tuples), which means they can only be iterated over once, which is causing your problem.</p>
<p>You could change them to be lists:</p>
<pre><code>list1tuple = [d["userid"] for d in list1]
list2tuple = [d["userid"] for d in list2]
</code></pre>
<p>which would allow you to iterate over them as many times as you like. But a better solution would be to simply make them sets:</p>
<pre><code>list1tuple = set(d["userid"] for d in list1)
list2tuple = set(d["userid"] for d in list2)
</code></pre>
<p>And then take the set difference</p>
<pre><code>newpeople = list2tuple - list1tuple
</code></pre>
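<p>A quick demonstration of why the generator version misbehaves: an <code>in</code> test consumes the generator, so after one full pass every later membership test returns <code>False</code>, which is exactly why everything after the first new userid gets reported as new:</p>

```python
gen = (n for n in [1, 2, 3])
print(2 in gen)  # True  (consumes 1 and 2 on the way)
print(1 in gen)  # False (1 is gone; this test exhausts the rest)
print(3 in gen)  # False (the generator is now empty)

# a set can be tested any number of times, and each test is O(1)
s = {1, 2, 3}
print(1 in s, 2 in s, 3 in s)  # True True True
```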
| 3 | 2016-08-15T23:22:32Z | [
"python",
"list",
"dictionary",
"tuples"
] |
Python - comparing lists of dictionaries using tuples - unexpected behaviour? | 38,964,394 | <p>I've been attempting to compare two lists of dictionaries, and to find the userid's of new people in list2 that aren't in list1. For example the first list:</p>
<pre><code>list1 = [{"userid": "13451", "name": "james", "age": "24", "occupation": "doctor"}, {"userid": "94324", "name": "john", "age": "33", "occupation": "pilot"}]
</code></pre>
<p>and the second list:</p>
<pre><code>list2 = [{"userid": "13451", "name": "james", "age": "24", "occupation": "doctor"}, {"userid": "94324", "name": "john", "age": "33", "occupation": "pilot"}, {"userid": "34892", "name": "daniel", "age": "64", "occupation": "chef"}]
</code></pre>
<p>the desired output:</p>
<pre><code>newpeople = ['34892']
</code></pre>
<p>This is what I've managed to put together:</p>
<pre><code>list1tuple = ((d["userid"]) for d in list1)
list2tuple = ((d["userid"]) for d in list2)
newpeople = [t for t in list2tuple if t not in list1tuple]
</code></pre>
<p>This actually seems to be pretty efficient, especially considering the lists I am using might contain over 50,000 dictionaries. However, here's the issue:</p>
<p>If it finds a userid in list2 that indeed isn't in list1, it adds it to newpeople (as desired), <em>but then also adds every other userid that comes afterwards in list2 to newpeople as well</em>.</p>
<p>So, say list2 contains 600 userids and the 500th userid in list2 isn't found anywhere in list1, the first item in newpeople will be the 500th userid (again, as desired), but then followed by the other 100 userids that came after the new one.</p>
<p>This is pretty perplexing to me - I'd greatly appreciate anyone helping me get to the bottom of why this is happening. </p>
| 1 | 2016-08-15T23:18:44Z | 38,964,523 | <p>As can be seen from a python console, list1tuple and list2tuple are generators:</p>
<pre><code>>>> ((d["userid"]) for d in list1)
<generator object <genexpr> at 0x10a9936e0>
</code></pre>
<p>Although the second one can remain a generator (there is no need to expand the list), the first one should first be converted to a list, set or tuple, e.g.:</p>
<pre><code>list1set = {d['userid'] for d in list1}
list2generator = (d['userid'] for d in list2)
</code></pre>
<p>You can now check for membership in the group:</p>
<pre><code>>>> [t for t in list2generator if t not in list1set]
['34892']
</code></pre>
| 1 | 2016-08-15T23:32:49Z | [
"python",
"list",
"dictionary",
"tuples"
] |
Why does python ignore my other lines of code? | 38,964,416 | <pre><code>mark = raw_input
if mark < 50:
print "your mark is unsatisfactory!"
else:
print "your mark is satisfactory!"
</code></pre>
<p>I want to create something that gives an answer depending on what number the user inputs. However, when I run the code it only shows "your mark is satisfactory!" and I can't seem to figure out why it's not letting me input a number.</p>
| -2 | 2016-08-15T23:21:29Z | 38,964,469 | <p>You need to call <code>raw_input</code>. Right now you're just taking the value of the function, which is always True (because for Python, functions are truthy). Add some parentheses to make it <code>raw_input()</code> to actually call the function. Also you'll want to call <code>int(raw_input())</code> to get an integer from the string.</p>
| 0 | 2016-08-15T23:26:16Z | [
"python",
"python-2.7"
] |
Why does python ignore my other lines of code? | 38,964,416 | <pre><code>mark = raw_input
if mark < 50:
print "your mark is unsatisfactory!"
else:
print "your mark is satisfactory!"
</code></pre>
<p>I want to create something that gives an answer depending on what number the user inputs. However, when I run the code it only shows "your mark is satisfactory!" and I can't seem to figure out why it's not letting me input a number.</p>
| -2 | 2016-08-15T23:21:29Z | 38,964,481 | <p>Call <code>raw_input</code>, like</p>
<pre><code>mark = int(raw_input("enter something"))
if mark < 50:
print "your mark is unsatisfactory!"
else:
print "your mark is satisfactory!"
</code></pre>
<p>Note the conversion to int, since <code>raw_input</code> returns a string.</p>
<p>At the moment all you are doing is creating another variable that refers to the function <code>raw_input</code> and comparing it against 50.</p>
| 0 | 2016-08-15T23:27:50Z | [
"python",
"python-2.7"
] |
Why does python ignore my other lines of code? | 38,964,416 | <pre><code>mark = raw_input
if mark < 50:
print "your mark is unsatisfactory!"
else:
print "your mark is satisfactory!"
</code></pre>
<p>I want to create something that gives an answer depending on what number the user inputs. However, when I run the code it only shows "your mark is satisfactory!" and I can't seem to figure out why it's not letting me input a number.</p>
| -2 | 2016-08-15T23:21:29Z | 38,964,511 | <p>it is because you don't call the function.</p>
<p>You have to do:</p>
<pre><code>answer = raw_input('your question')
</code></pre>
<p>Also, the type of <code>answer</code> is <code>str</code>. Because it looks like you want to compare it with a number, you have to transform it into a number:</p>
<pre><code>answer = int(answer) # if you want an int
answer = float(answer) # if you want a float
</code></pre>
<p>Note:</p>
<p>by doing</p>
<pre><code>mark = raw_input
</code></pre>
<p><code>mark</code> is now equal to the function, which means that if you do <code>mark('...')</code>, it would do the same thing as <code>raw_input('...')</code></p>
| 0 | 2016-08-15T23:30:54Z | [
"python",
"python-2.7"
] |
AttributeError: 'tuple' object has no attribute 'regex' | 38,964,528 | <p>Strange error, and I don't understand where is the mistake. The traceback shows nothing relevant:</p>
<pre class="lang-none prettyprint-override"><code> File "/home/popovvasile/work/intiativa_new/local/lib/python2.7/site-packages/django/core/checks/urls.py", line 27, in check_resolver
warnings.extend(check_pattern_startswith_slash(pattern))
File "/home/popovvasile/work/intiativa_new/local/lib/python2.7/site-packages/django/core/checks/urls.py", line 63, in check_pattern_startswith_slash
regex_pattern = pattern.regex.pattern
</code></pre>
<p>It seems like something is wrong in the urls.py file:</p>
<pre><code>urlpatterns = [
# Examples:
url(r'^$', 'newsletter.views.home', name='home'),
url(r'^contact/$', 'newsletter.views.contact', name='contact'),
url(r'^about/$', About.as_view(), name='about'),
# url(r'^blog/', include('blog.urls')),
url(r'^noway/', include(admin.site.urls)),
url(r'^petitions/', include('newsletter.petitions_urls', namespace="petitions")),
url(r'^laws/', include('newsletter.laws_urls', namespace="laws")),
url(r'^accounts/register/$', RegistrationView.register, {'backend': 'registration.backends.default.DefaultBackend','form_class': UserRegForm}, name='registration_register'),(r'^accounts/', include('registration.urls')),
url(r'^news/', include('newsletter.news_urls', namespace="news")),
url(r'^petition-thanks/$', PetitionThanksView.as_view(), name='thanks_petitions'),
url(r'^addpetitions/$', create_new_petition, name='add_petitions'),
url(r'^comments/', include('fluent_comments.urls')),
# url(r'^comments/posted/$', 'newsletter.views.comment_posted' )
]
</code></pre>
| 0 | 2016-08-15T23:33:29Z | 38,964,560 | <p>There's a trailing tuple lurking somewhere between those lines; the <code>url(r'^accounts/register/$'...)</code> line:</p>
<pre><code>(r'^accounts/', include('registration.urls'))
</code></pre>
<p>You intend to have that as a url pattern not a tuple:</p>
<pre><code>url(r'^accounts/', include('registration.urls')),
</code></pre>
| 2 | 2016-08-15T23:38:09Z | [
"python",
"django"
] |
pytest: Adding fixture in the test file instead of conftest.py | 38,964,578 | <p>I am new to Python and I have a doubt in <strong>pytest</strong></p>
<p><strong>test_client.py</strong> </p>
<pre><code># Simple Example tests
import pytest
def test_one():
assert False == False
def test_two():
assert True == True
def cleanup():
# do some cleanup stuff
</code></pre>
<p><strong>conftest.py</strong></p>
<pre><code>import pytest
import test_client
@pytest.fixture(scope="session", autouse=True)
def do_clean_up(request):
request.addfinalizer(test_client.cleanup)
</code></pre>
<p>Is it possible to move the fixture defined in <strong>conftest.py</strong> to the <strong>test_client.py</strong> thereby eliminating having <strong>conftest.py</strong></p>
| 0 | 2016-08-15T23:40:07Z | 38,971,474 | <p>Yes. Why didn't you simply try? ;)</p>
<p>Fixtures are put in <code>conftest.py</code> files to be able to use them in multiple test files.</p>
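<p>For completeness, the single-file layout the question asks about could look like this (a sketch; pytest collects fixtures from the test module itself):</p>

```python
# test_client.py -- fixture and tests in one module, no conftest.py needed
import pytest

def cleanup():
    # do some cleanup stuff
    pass

@pytest.fixture(scope="session", autouse=True)
def do_clean_up(request):
    request.addfinalizer(cleanup)

def test_one():
    assert False == False

def test_two():
    assert True == True
```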
| 1 | 2016-08-16T09:39:42Z | [
"python",
"python-2.7",
"python-3.x",
"py.test",
"pytest-django"
] |
How to create a numpy matrix with differing column data types? | 38,964,698 | <p>Lets say I have three vectors <code>a</code>, <code>b</code>, and <code>c</code>:</p>
<pre><code>a = np.array([1,2,3])
b = np.array([1.2, 3.2, 4.5])
c = np.array([True, True, False])
</code></pre>
<p>What is the simplest way to turn this into a matrix <code>d</code> of differing data types and column labels, as such:</p>
<pre><code>d = ([[1, 1.2, True],
[2, 3.2, True],
[3, 4.5, False]],
dtype=[('aVals','i8'), ('bVals','f4'), ('cVals','bool')])
</code></pre>
<p>So that I can then save this matrix to a <code>.npy</code> file and access the data as such after opening it;</p>
<pre><code>>>> d = np.load('dFile')
>>> d['aVals']
np.array([1,2,3], dtype=[('aVals', '<i8')])
</code></pre>
<p>I have used a simple <code>column_stack</code> to create the matrix, but I am getting a headache trying to figure out how to include the datatypes and column names, since <code>column_stack</code> does not accept a <code>dtype</code> argument, and I can't see a way to add field names and data types after the <code>column_stack</code> is performed. It is worth mentioning that the vectors <code>a</code>, <code>b</code>, and <code>c</code> have no explicit datatypes declared upon their creation; they are as shown above. </p>
| 2 | 2016-08-15T23:57:13Z | 38,964,759 | <pre><code>d = np.empty(len(a), dtype=[('aVals',a.dtype), ('bVals',b.dtype), ('cVals',c.dtype)])
d['aVals'] = a
d['bVals'] = b
d['cVals'] = c
</code></pre>
<p>As a reusable function:</p>
<pre><code>def column_stack_overflow(**kwargs):
dtype = [(name, val.dtype) for name, val in kwargs.items()]
arr = np.empty(len(kwargs.values()[0]), dtype=dtype)
for name, val in kwargs.items():
arr[name] = val
return arr
</code></pre>
<p>Then:</p>
<pre><code>column_stack_overflow(aVals=a, bVals=b, cVals=c)
</code></pre>
<p>But note kwargs is a dict so unordered, so you might not get the columns in the order you pass them.</p>
| 3 | 2016-08-16T00:06:09Z | [
"python",
"arrays",
"numpy"
] |
How to create a numpy matrix with differing column data types? | 38,964,698 | <p>Lets say I have three vectors <code>a</code>, <code>b</code>, and <code>c</code>:</p>
<pre><code>a = np.array([1,2,3])
b = np.array([1.2, 3.2, 4.5])
c = np.array([True, True, False])
</code></pre>
<p>What is the simplest way to turn this into a matrix <code>d</code> of differing data types and column labels, as such:</p>
<pre><code>d = ([[1, 1.2, True],
[2, 3.2, True],
[3, 4.5, False]],
dtype=[('aVals','i8'), ('bVals','f4'), ('cVals','bool')])
</code></pre>
<p>So that I can then save this matrix to a <code>.npy</code> file and access the data as such after opening it;</p>
<pre><code>>>> d = np.load('dFile')
>>> d['aVals']
np.array([1,2,3], dtype=[('aVals', '<i8')])
</code></pre>
<p>I have used a simple <code>column_stack</code> to create the matrix, but I am getting a headache trying to figure out how to include the datatypes and column names, since <code>column_stack</code> does not accept a <code>dtype</code> argument, and I can't see a way to add field names and data types after the <code>column_stack</code> is performed. It is worth mentioning that the vectors <code>a</code>, <code>b</code>, and <code>c</code> have no explicit datatypes declared upon their creation; they are as shown above. </p>
| 2 | 2016-08-15T23:57:13Z | 38,965,197 | <p>There's a little known <code>recarray</code> function that constructs arrays like this. It was cited in a recent SO question:</p>
<p><a href="http://stackoverflow.com/questions/32474115/assigning-field-names-to-numpy-array-in-python-2-7-3">Assigning field names to numpy array in Python 2.7.3</a></p>
<p>Allowing it to deduce everything from the input arrays:</p>
<pre><code>In [19]: np.rec.fromarrays([a,b,c])
Out[19]:
rec.array([(1, 1.2, True), (2, 3.2, True), (3, 4.5, False)],
dtype=[('f0', '<i4'), ('f1', '<f8'), ('f2', '?')])
</code></pre>
<p>Specifying names</p>
<pre><code>In [26]: d=np.rec.fromarrays([a,b,c],names=['avals','bvals','cVals'])
In [27]: d
Out[27]:
rec.array([(1, 1.2, True),
(2, 3.2, True),
(3, 4.5, False)],
dtype=[('avals', '<i4'), ('bvals', '<f8'), ('cVals', '?')])
In [28]: d['cVals']
Out[28]: array([ True, True, False], dtype=bool)
</code></pre>
<p>After creating the target array of right size and dtype it does a field by field copy. This is typical of the <code>rec.recfunctions</code> (even <code>astype</code> does this).</p>
<pre><code># populate the record array (makes a copy)
for i in range(len(arrayList)):
_array[_names[i]] = arrayList[i]
</code></pre>
<p>A 2011 reference: <a href="http://stackoverflow.com/questions/8220689/how-to-make-a-structured-array-from-multiple-simple-array">How to make a Structured Array from multiple simple array</a></p>
| 3 | 2016-08-16T01:07:02Z | [
"python",
"arrays",
"numpy"
] |
Cant define variable from tkinter Entry box | 38,964,735 | <p>I am trying to learn tkinter in python 3.5, and for some reason I cannot print the text in an Entry box. Here is my code:</p>
<pre><code>from tkinter import *
text = StringVar
def func():
print(text.get())
root = Tk()
root.geometry('450x450')
root.title('App')
mylabel = Label(text='My Label').grid(row = 0, column=0, sticky='W')
mybutton = Button(text = 'Button',command = func).grid(row=0,column=1,sticky='W')
myentry = Entry(root, textvariable=text).grid(row=1,column=1)
root.mainloop()
</code></pre>
<p>However when I press the button, I get an error saying </p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3.5/tkinter/__init__.py", line 1553, in __call__
return self.func(*args)
File "tkapp.py", line 6, in func
print(text.get())
TypeError: get() missing 1 required positional argument: 'self'
</code></pre>
<p>Thanks in advance for any help!</p>
| 0 | 2016-08-16T00:02:49Z | 38,964,880 | <p>You're missing parenthesis!</p>
<pre><code>text = StringVar()
</code></pre>
| 0 | 2016-08-16T00:23:21Z | [
"python",
"class",
"tkinter",
"python-3.5"
] |
Cant define variable from tkinter Entry box | 38,964,735 | <p>I am trying to learn tkinter in python 3.5, and for some reason I cannot print the text in an Entry box. Here is my code:</p>
<pre><code>from tkinter import *
text = StringVar
def func():
print(text.get())
root = Tk()
root.geometry('450x450')
root.title('App')
mylabel = Label(text='My Label').grid(row = 0, column=0, sticky='W')
mybutton = Button(text = 'Button',command = func).grid(row=0,column=1,sticky='W')
myentry = Entry(root, textvariable=text).grid(row=1,column=1)
root.mainloop()
</code></pre>
<p>However when I press the button, I get an error saying </p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3.5/tkinter/__init__.py", line 1553, in __call__
return self.func(*args)
File "tkapp.py", line 6, in func
print(text.get())
TypeError: get() missing 1 required positional argument: 'self'
</code></pre>
<p>Thanks in advance for any help!</p>
| 0 | 2016-08-16T00:02:49Z | 38,965,183 | <p>There are 2 errors:</p>
<p>1) as @cdonts said, use <code>StringVar()</code> instead of <code>StringVar</code></p>
<p>2) <code>StringVar()</code> should be called after calling Tk(), so move <code>text = StringVar()</code> before creating <code>myentry</code>:</p>
<pre><code>text = StringVar()
myentry = Entry(....)
</code></pre>
| 0 | 2016-08-16T01:05:31Z | [
"python",
"class",
"tkinter",
"python-3.5"
] |
Can someone help me understand how these programs work? Have just started programming | 38,964,769 | <pre><code>**1**
count = 0
phrase = "hello, world"
for iteration in range(5):
index = 0
while index < len(phrase):
count += 1
index += 1
print "Iteration " + str(iteration) + "; count is: " + str(count)
**2**
count = 0
phrase = "hello, world"
for iteration in range(5):
while True:
count += len(phrase)
break
print "Iteration " + str(iteration) + "; count is: " + str(count)
**3**
count = 0
phrase = "hello, world"
for iteration in range(5):
count += len(phrase)
print "Iteration " + str(iteration) + "; count is: " + str(count)
</code></pre>
| -1 | 2016-08-16T00:07:16Z | 38,964,965 | <p>Number 1: There is a <code>count</code> variable to store a number, and the phrase <code>"hello, world"</code> stored in the <code>phrase</code> variable. The for loop repeats 5 times. Inside it, a placeholder variable <code>index</code> is defined. Repeating the while loop the length of <code>phrase</code> times, the placeholder <code>index</code> and the <code>count</code> variable are increased by one. The last line of the for loop prints out which round of the for loop it in and the <code>count</code> variable.</p>
<p>Number 2: Again, the <code>count</code> and <code>phrase</code> variables are defined. Repeating the for loop 5 times, the first line creates an infinite while loop (one that repeats forever). However, after <code>count</code> is increased by the length of <code>phrase</code>, it immediately <code>break</code>s out of the while loop (stops it) so that it does not continue forever. The last line prints out the same thing as Number 1 does. (This might be clear due to the fact that they are identical lines of code.)</p>
<p>Number 3: The <code>count</code> and <code>phrase</code> variables are defined once more. 5 times the for loop is run. Each time, <code>count</code> is increased by the length of <code>phrase</code>, and then the <code>print</code> statement is run (same as numbers 1 and 2).</p>
<p>Hope this helps!</p>
| 0 | 2016-08-16T00:35:23Z | [
"python"
] |
Warning: multiple data types in column of very large dataframe | 38,964,819 | <p>I have a fairly large pandas DataFrame read in from csv (~3 million rows & 72 columns), and I am getting warnings that some of the columns contain mixed data types:</p>
<pre><code>DtypeWarning: Columns (1,2,3,15,16,17,18,19,20,21,22,23,31,32,33,35,37,38,39,40,41,42,43,44,45,46,47,48,50,51,52,55,57,58,60,71) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
</code></pre>
<p>What's the best way to deal with this given that I can't just eyeball the csv? In particular, is there a way to get a list of all the data types that occur in a given column and what their corresponding row numbers are?</p>
| 3 | 2016-08-16T00:14:30Z | 38,964,882 | <p>consider the following <code>df</code></p>
<pre><code>df = pd.DataFrame(dict(col1=[1, '1', False, np.nan, ['hello']],
col2=[2, 3.14, 'hello', (1, 2, 3), True]))
df = pd.concat([df for _ in range(2)], ignore_index=True)
df
</code></pre>
<p><a href="http://i.stack.imgur.com/hCb2G.png" rel="nofollow"><img src="http://i.stack.imgur.com/hCb2G.png" alt="enter image description here"></a></p>
<p>You could investigate the different types and how many of them there are with</p>
<pre><code>df.col1.apply(type).value_counts()
<type 'float'> 2
<type 'int'> 2
<type 'list'> 2
<type 'bool'> 2
<type 'str'> 2
Name: col1, dtype: int64
</code></pre>
<p>you could investigate which rows of <code>col1</code> are float like this</p>
<pre><code>df[df.col1.apply(type) == float]
</code></pre>
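<p>Once the offending columns are identified, the warning's own suggestion applies: pin their type at read time. A sketch with a tiny made-up file (column names hypothetical):</p>

```python
import io
import pandas as pd

# stand-in for the real CSV: column "b" mixes numbers and text
csv_text = u"a,b\n1,2\n2,hello\n3,3.5\n"

# force the mixed column to a single type while reading
df = pd.read_csv(io.StringIO(csv_text), dtype={'b': str})
print(df['b'].tolist())  # ['2', 'hello', '3.5']
```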
<p><a href="http://i.stack.imgur.com/gV9Oe.png" rel="nofollow"><img src="http://i.stack.imgur.com/gV9Oe.png" alt="enter image description here"></a></p>
| 3 | 2016-08-16T00:23:39Z | [
"python",
"pandas"
] |
Python Dictionary replace value and save in dict | 38,964,838 | <p>My dictionary values are strings that are supposed to have <code>'|x'</code> at the end of each term. Some strings contain many terms and they are separated by a <code>space</code>. </p>
<p>I am trying to remove terms in the values that do not have <code>'|x'</code> but the dictionary is not saving the new value.</p>
<pre><code>d={'food': u'burger|x fries|x soda pie|x', 'transport': u'bus|x', 'animal': u'cat|x'}
for k,v in d.iteritems():
for t in v.split(' '):
if '|x' in v:
v=v.replace(t,'')
</code></pre>
<p>output:</p>
<pre><code>d
{'food': u'burger|x fries|x soda pie|x', 'animal': u'cat|x', 'transport': u'bus|x'}
</code></pre>
<p>output i want:</p>
<pre><code>{'food': u'burger|x fries|x pie|x', 'animal': u'cat|x', 'transport': u'bus|x'}
</code></pre>
<p>Why didnt the value get replaced?</p>
| 2 | 2016-08-16T00:17:57Z | 38,964,873 | <p>You're only creating a new string and not updating the value(s) in the dict. </p>
<p>You can instead remove those items using a dictionary comprehension:</p>
<pre><code>d = {'food': u'burger|x fries|x soda pie|x', 'transport': u'bus|x', 'animal': u'cat|x'}
d = {k: ' '.join(i for i in v.split() if i.endswith('|x')) for k, v in d.iteritems()}
print d
# {'food': u'burger|x fries|x pie|x', 'transport': u'bus|x', 'animal': u'cat|x'}
</code></pre>
<p>Note that <code>split()</code> can replace <code>split(' ')</code> in this context.</p>
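<p>To see what that note means in isolation (a standalone check, not part of the answer's code): <code>split()</code> splits on any run of whitespace and never produces empty strings, while <code>split(' ')</code> does.</p>

```python
# split(' ') yields an empty string for each extra space;
# split() collapses whitespace runs and drops empties.
s = "burger|x  fries|x\tsoda"
print(s.split(' '))  # ['burger|x', '', 'fries|x\tsoda']
print(s.split())     # ['burger|x', 'fries|x', 'soda']
```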
| 5 | 2016-08-16T00:22:54Z | [
"python",
"dictionary"
] |
Python Dictionary replace value and save in dict | 38,964,838 | <p>My dictionary values are strings that are supposed to have <code>'|x'</code> at the end of each term. Some strings contain many terms and they are separated by a <code>space</code>. </p>
<p>I am trying to remove terms in the values that do not have <code>'|x'</code> but the dictionary is not saving the new value.</p>
<pre><code>d={'food': u'burger|x fries|x soda pie|x', 'transport': u'bus|x', 'animal': u'cat|x'}
for k,v in d.iteritems():
    for t in v.split(' '):
        if '|x' in v:
            v=v.replace(t,'')
</code></pre>
<p>output:</p>
<pre><code>d
{'food': u'burger|x fries|x soda pie|x', 'animal': u'cat|x', 'transport': u'bus|x'}
</code></pre>
<p>output i want:</p>
<pre><code>{'food': u'burger|x fries|x pie|x', 'animal': u'cat|x', 'transport': u'bus|x'}
</code></pre>
<p>Why didn't the value get replaced?</p>
| 2 | 2016-08-16T00:17:57Z | 38,965,300 | <h1>Input</h1>
<pre><code>d = {'food': u'burger|x fries|x soda pie|x', 'transport': u'bus|x', 'animal': u'cat|x'}
</code></pre>
<h1>Code</h1>
<pre><code>for (i, j) in d.iteritems():
    x = j.split()
    for k in x[:]:  # iterate over a copy: removing from a list while looping over it skips items
        if not k.endswith('|x'):
            x.remove(k)
    d[i] = " ".join(x)
</code></pre>
<h1>Output</h1>
<pre><code>d = {'food': u'burger|x fries|x pie|x', 'transport': u'bus|x', 'animal': u'cat|x'}
</code></pre>
| 1 | 2016-08-16T01:22:29Z | [
"python",
"dictionary"
] |
Python Dictionary replace value and save in dict | 38,964,838 | <p>My dictionary values are strings that are supposed to have <code>'|x'</code> at the end of each term. Some strings contain many terms and they are separated by a <code>space</code>. </p>
<p>I am trying to remove terms in the values that do not have <code>'|x'</code> but the dictionary is not saving the new value.</p>
<pre><code>d={'food': u'burger|x fries|x soda pie|x', 'transport': u'bus|x', 'animal': u'cat|x'}
for k,v in d.iteritems():
    for t in v.split(' '):
        if '|x' in v:
            v=v.replace(t,'')
</code></pre>
<p>output:</p>
<pre><code>d
{'food': u'burger|x fries|x soda pie|x', 'animal': u'cat|x', 'transport': u'bus|x'}
</code></pre>
<p>output i want:</p>
<pre><code>{'food': u'burger|x fries|x pie|x', 'animal': u'cat|x', 'transport': u'bus|x'}
</code></pre>
<p>Why didn't the value get replaced?</p>
| 2 | 2016-08-16T00:17:57Z | 38,965,372 | <p>An alternative to a dictionary comprehension is to update the dictionary <em>in place</em>. This might be better for very large dictionaries because the dict comprehension produces a second dictionary:</p>
<pre><code>d = {'food': u'burger|x fries|x soda pie|x', 'transport': u'bus|x', 'animal': u'cat|x'}
for k in d:
    terms = d[k].split()
    d[k] = ' '.join(term for term in terms if term.endswith('|x'))
</code></pre>
<p>Or shoehorning it into one operation: </p>
<pre><code>for k in d:
    d[k] = ' '.join(term for term in d[k].split() if term.endswith('|x'))
</code></pre>
| 0 | 2016-08-16T01:31:41Z | [
"python",
"dictionary"
] |
If string found Print n earlier lines | 38,965,024 | <p>Quick question ; So basically i am trying to print the results as below if the key word is found</p>
<pre><code>Keyword = ['Dn']
Output = ISIS Protocol Information for ISIS(523)
---------------------------------------
SystemId: 0101.7001.1125 System Level: L2
Area-Authentication-mode: NULL
Domain-Authentication-mode: NULL
Ipv6 is not enabled
ISIS is in restart-completed status
Level-2 Application Supported: MPLS Traffic Engineering
L2 MPLS TE is not enabled
ISIS is in protocol hot standby state: Real-Time Backup
Interface: 10.170.11.125(Loop0)
Cost: L1 0 L2 0 Ipv6 Cost: L1 0 L2 0
State: IPV4 Up IPV6 Down
Type: P2P MTU: 1500
Priority: L1 64 L2 64
Timers: Csnp: L12 10 , Retransmit: L12 5 , Hello: 10 ,
Hello Multiplier: 3 , LSP-Throttle Timer: L12 50
Interface: 10.164.179.218(GE0/5/0)
Cost: L1 10 L2 10 Ipv6 Cost: L1 10 L2 10
State: IPV4 Mtu:Up/Lnk:Dn/IP:Dn IPV6 Down
Type: BROADCAST MTU: 9497
Priority: L1 64 L2 64
Timers: Csnp: L1 10 L2 10 ,Retransmit: L12 5 , Hello: L1 10 L2
Hello Multiplier: L1 3 L2 3 , LSP-Throttle Timer: L12 50
Interface: 10.164.179.237(GE0/6/0)
Cost: L1 1000 L2 1000 Ipv6 Cost: L1 10 L2 10
State: IPV4 Up IPV6 Down
Type: BROADCAST MTU: 9497
Priority: L1 64 L2 64
Timers: Csnp: L1 10 L2 10 ,Retransmit: L12 5 , Hello: L1 10 L2 10 ,
Hello Multiplier: L1 3 L2 3 , LSP-Throttle Timer: L12 50
</code></pre>
<p>so if "Dn" found in output print last 2 lines , so expected output should be something</p>
<pre><code>Interface: 10.164.179.218(GE0/5/0)
Cost: L1 10 L2 10 Ipv6 Cost: L1 10 L2 10
State: IPV4 Mtu:Up/Lnk:Dn/IP:Dn IPV6 Down
</code></pre>
<p>using Snippet as below:</p>
<pre><code>with open( host1 + ".txt","w") as f:
    else:
        if (">") in output:
            output = net_connect.send_command("screen-length 0 temporary", delay_factor=1)
            print (output)
            output = net_connect.send_command("dis isis brief", delay_factor=1)
            print (output)
            f.write(output)
    hosts = open((hostsfile) , "r")
    keys = ['Dn']
    hosts = [hosts for hosts in (hosts.strip() for hosts in open(hostsfile)) if hosts]
    for host2 in hosts:
        for line in f:
            for keywords in keys:
                if keywords in line:
                    print (line)
</code></pre>
<p>I hope this explains it. Also, kindly ignore minor issues like the file operations etc., as the main concern is to output the n-2 lines if the string is found.</p>
| 0 | 2016-08-16T00:43:45Z | 38,965,050 | <p>Use a <a href="https://docs.python.org/3/library/collections.html#collections.deque" rel="nofollow"><code>collections.deque</code></a> to store <code>n</code> lines (<code>collections.deque(maxlen=n)</code>) and print the lines in the deque when the keyword is found.</p>
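<p>A minimal sketch of that suggestion (the input lines and keyword here are illustrative stand-ins for the file contents):</p>

```python
from collections import deque

n = 3  # the matching line plus the 2 lines before it
last_lines = deque(maxlen=n)
sample = ["Interface: GE0/5/0", "Cost: L1 10", "State: Lnk:Dn", "Type: BROADCAST"]
for line in sample:
    last_lines.append(line)
    if 'Dn' in line:
        for stored in last_lines:
            print(stored)
```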
| 2 | 2016-08-16T00:47:12Z | [
"python",
"string",
"lines"
] |
If string found Print n earlier lines | 38,965,024 | <p>Quick question ; So basically i am trying to print the results as below if the key word is found</p>
<pre><code>Keyword = ['Dn']
Output = ISIS Protocol Information for ISIS(523)
---------------------------------------
SystemId: 0101.7001.1125 System Level: L2
Area-Authentication-mode: NULL
Domain-Authentication-mode: NULL
Ipv6 is not enabled
ISIS is in restart-completed status
Level-2 Application Supported: MPLS Traffic Engineering
L2 MPLS TE is not enabled
ISIS is in protocol hot standby state: Real-Time Backup
Interface: 10.170.11.125(Loop0)
Cost: L1 0 L2 0 Ipv6 Cost: L1 0 L2 0
State: IPV4 Up IPV6 Down
Type: P2P MTU: 1500
Priority: L1 64 L2 64
Timers: Csnp: L12 10 , Retransmit: L12 5 , Hello: 10 ,
Hello Multiplier: 3 , LSP-Throttle Timer: L12 50
Interface: 10.164.179.218(GE0/5/0)
Cost: L1 10 L2 10 Ipv6 Cost: L1 10 L2 10
State: IPV4 Mtu:Up/Lnk:Dn/IP:Dn IPV6 Down
Type: BROADCAST MTU: 9497
Priority: L1 64 L2 64
Timers: Csnp: L1 10 L2 10 ,Retransmit: L12 5 , Hello: L1 10 L2
Hello Multiplier: L1 3 L2 3 , LSP-Throttle Timer: L12 50
Interface: 10.164.179.237(GE0/6/0)
Cost: L1 1000 L2 1000 Ipv6 Cost: L1 10 L2 10
State: IPV4 Up IPV6 Down
Type: BROADCAST MTU: 9497
Priority: L1 64 L2 64
Timers: Csnp: L1 10 L2 10 ,Retransmit: L12 5 , Hello: L1 10 L2 10 ,
Hello Multiplier: L1 3 L2 3 , LSP-Throttle Timer: L12 50
</code></pre>
<p>so if "Dn" found in output print last 2 lines , so expected output should be something</p>
<pre><code>Interface: 10.164.179.218(GE0/5/0)
Cost: L1 10 L2 10 Ipv6 Cost: L1 10 L2 10
State: IPV4 Mtu:Up/Lnk:Dn/IP:Dn IPV6 Down
</code></pre>
<p>using Snippet as below:</p>
<pre><code>with open( host1 + ".txt","w") as f:
    else:
        if (">") in output:
            output = net_connect.send_command("screen-length 0 temporary", delay_factor=1)
            print (output)
            output = net_connect.send_command("dis isis brief", delay_factor=1)
            print (output)
            f.write(output)
    hosts = open((hostsfile) , "r")
    keys = ['Dn']
    hosts = [hosts for hosts in (hosts.strip() for hosts in open(hostsfile)) if hosts]
    for host2 in hosts:
        for line in f:
            for keywords in keys:
                if keywords in line:
                    print (line)
</code></pre>
<p>I hope this explains it. Also, kindly ignore minor issues like the file operations etc., as the main concern is to output the n-2 lines if the string is found.</p>
| 0 | 2016-08-16T00:43:45Z | 38,965,067 | <p>This is a great use-case for a <code>collections.deque</code>. Let's say you want to print the matching line and the 2 previous lines (that's what I <em>think</em> you want based on your example). We can do this by packing each line into the deque and then when we find a match, we print the entire deque:</p>
<pre><code>from collections import deque

def print_deque(dqu):
    for item in dqu:
        print(item)

lines = deque(maxlen=3)
with open('filename') as file_input:
    for line in file_input:
        lines.append(line)
        if 'Dn' in line:
            print_deque(lines)
</code></pre>
<p>This works because the <code>deque</code> will quietly drop the oldest items when you try to add something beyond its maximum length.</p>
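<p>The drop-oldest behaviour is easy to demonstrate on its own:</p>

```python
from collections import deque

d = deque(maxlen=3)
for i in range(5):
    d.append(i)
print(list(d))  # [2, 3, 4] -- 0 and 1 were silently discarded
```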
| 3 | 2016-08-16T00:49:17Z | [
"python",
"string",
"lines"
] |
Why is default timeout for python requests.get() different for different machines | 38,965,086 | <p>My program wants to get the webpage contents on a private IP (10.0.0.0/8) in an internal network.
I am using python requests.get() for that purpose.</p>
<hr>
<p>I went through question <a href="http://stackoverflow.com/questions/17782142/why-doesnt-requests-get-return-what-is-the-default-timeout-that-requests-get#">Why doesn't requests.get() return? What is the default timeout that requests.get() uses?</a> but I did not get much help</p>
<ol>
<li>On what basis is the default timeout for a python get request determined?</li>
<li>Is it dependent on the TCP stack configuration? Which file hosts that configuration?</li>
</ol>
| 0 | 2016-08-16T00:51:27Z | 38,965,425 | <ol>
<li><p>Again, the default timeout is <code>None</code>, you can dig into the library files in your OS (the path is <code>/usr/local/lib/python2.7/site-packages/requests/</code> in my Mac OSX)</p>
<ul>
<li>the two files to look into are <code>api.py</code> and <code>sessions.py</code></li>
<li>for <code>timeout</code> part, line 275 file <code>sessions.py</code></li>
</ul></li>
<li><p>You can read the document here: <a href="http://docs.python-requests.org/en/master/user/quickstart/#timeouts" rel="nofollow">http://docs.python-requests.org/en/master/user/quickstart/#timeouts</a> (the Note part)</p></li>
</ol>
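<p>To make both points concrete: with no timeout set, the wait is bounded only by the operating system's TCP settings, which is visible even with a plain standard-library socket (a sketch; the 5-second value is just an example):</p>

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(s.gettimeout())  # None -> block until the OS TCP stack gives up
s.settimeout(5.0)      # bound the wait explicitly, in seconds
print(s.gettimeout())  # 5.0
s.close()
```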
| -2 | 2016-08-16T01:37:35Z | [
"python",
"linux",
"python-requests"
] |
write a Python game that tests the arithmetic skills of the player | 38,965,120 | <p>Everyone, please help me to address this.
I have no idea how to write code for TOTAL TIME and SCORE.<a href="http://i.stack.imgur.com/0j95P.png" rel="nofollow">enter image description here</a></p>
<p>I have done most of the code.
For CALCULATE SCORE only, I have no idea how to write the code.</p>
<p>Many thanks!</p>
| -5 | 2016-08-16T00:56:31Z | 38,986,569 | <p><code>yourAnswer</code> is only set to the answer of the last question, so at most there will be 100 points. You should instead define <code>answer1</code>, <code>answer2</code>, etc. to store the user's answers and then check those specific answers against the actual answer to the respective questions. </p>
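<p>One way to sketch that bookkeeping (the function name and the 10-points-per-question rule are assumptions, since the original program isn't shown): store every reply, then compare reply by reply.</p>

```python
def calculate_score(given, correct, points_per_question=10):
    """Award points for each stored reply that matches the real answer.
    (A sketch, not the original assignment's code.)"""
    return sum(points_per_question for g, c in zip(given, correct) if g == c)

print(calculate_score([7, 42, 5], [7, 42, 6]))  # 20: two of three replies correct
```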
| 0 | 2016-08-17T00:51:57Z | [
"python"
] |
Not printing correct string | 38,965,126 | <pre><code>number = int(input('Enter Line: '))
f = open('path.txt','r')
text = f.readline(number)
print(text)
</code></pre>
<p>The file "path.txt" looks like this:</p>
<p>line1<br>
line2<br>
line3</p>
<p>yet, no matter what number you enter, it will still return line1.</p>
| -3 | 2016-08-16T00:57:21Z | 38,965,204 | <p>You're incorrectly using <code>f.readline()</code>: passing this method a parameter will not return a specific line number. This method always reads and returns the next line (in this case line #1), and your parameter actually acts as a limit on how many characters are read.</p>
<blockquote>
<p><code>f.readline()</code> reads a single line from the file; a newline character
(\n) is left at the end of the string, and is only omitted on the last
line of the file if the file doesn't end in a newline.
<a href="https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects" rel="nofollow">https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects</a></p>
</blockquote>
<p>I believe the usage you're really looking for is this:</p>
<pre><code>number = int(input('Enter Line: '))
f = open('path.txt','r')
lines = f.readlines()
print(lines[number - 1])  # list indices start at 0
</code></pre>
<p><code>f.readlines()</code> reads and returns all lines, then you can index.</p>
<p><a href="https://repl.it/Cn0q" rel="nofollow">See live example</a></p>
| 0 | 2016-08-16T01:08:12Z | [
"python",
"python-3.x"
] |
Not printing correct string | 38,965,126 | <pre><code>number = int(input('Enter Line: '))
f = open('path.txt','r')
text = f.readline(number)
print(text)
</code></pre>
<p>The file "path.txt" looks like this:</p>
<p>line1<br>
line2<br>
line3</p>
<p>yet, no matter what number you enter, it will still return line1.</p>
| -3 | 2016-08-16T00:57:21Z | 38,965,226 | <p>I'm not a Python guy, but a quick look at the docs tells me that readline does exactly that... it reads one line at a time starting at the beginning of the file.</p>
<p>Try this:</p>
<pre><code>f = open('path.txt','r')
print(f.readline())
print(f.readline())
</code></pre>
<p>This should output:</p>
<pre><code>line1
line2
</code></pre>
<p>The parameter passed to readline is the number of bytes to read, not the line number.</p>
<p>This might help: <a href="http://www.tutorialspoint.com/python/file_readline.htm" rel="nofollow">http://www.tutorialspoint.com/python/file_readline.htm</a></p>
| 0 | 2016-08-16T01:12:41Z | [
"python",
"python-3.x"
] |
Not printing correct string | 38,965,126 | <pre><code>number = int(input('Enter Line: '))
f = open('path.txt','r')
text = f.readline(number)
print(text)
</code></pre>
<p>The file "path.txt" looks like this:</p>
<p>line1<br>
line2<br>
line3</p>
<p>yet, no matter what number you enter, it will still return line1.</p>
| -3 | 2016-08-16T00:57:21Z | 38,965,252 | <p>The function <code>readline()</code> (called with no argument) reads a single line from the file; hence, your statement <code>text = f.readline(number)</code> always returns the next line at the cursor (the argument is a size hint, not a line number).</p>
<p>For example:</p>
<pre><code>number = int(input('Enter Line: '))
f = open('path.txt','r')
print(f.readline())  # prints line1
print(f.readline())  # prints line2
</code></pre>
<p>For your case, I think you can do:</p>
<pre><code>number = int(input('Enter Line: '))
f = open('path.txt','r')
lines = f.readlines()
print(lines[number - 1])  # assuming a 1-based line number not greater than len(lines)
</code></pre>
| -1 | 2016-08-16T01:15:42Z | [
"python",
"python-3.x"
] |
Not printing correct string | 38,965,126 | <pre><code>number = int(input('Enter Line: '))
f = open('path.txt','r')
text = f.readline(number)
print(text)
</code></pre>
<p>The file "path.txt" looks like this:</p>
<p>line1<br>
line2<br>
line3</p>
<p>yet, no matter what number you enter, it will still return line1.</p>
| -3 | 2016-08-16T00:57:21Z | 38,965,258 | <p><code>f.readlines()</code> will read all lines into memory; if your file is very large, it will eat all the memory. So I think you can use the following code instead.</p>
<pre><code>with open('path.txt') as f:
    for line_num, line in enumerate(f, 1):
        if line_num == number:
            print(line)
            break
</code></pre>
| 1 | 2016-08-16T01:16:29Z | [
"python",
"python-3.x"
] |
Plotting Sympy Result to Particular Solution of Differential Equation | 38,965,182 | <p>So far I have managed to find the particular solution to this equation for any given mass and drag coefficient. I have not however found a way to plot the solution or even evaluate the solution for a specific point. I really want to find a way to plot the solution.</p>
<pre><code>from sympy import *
m = float(raw_input('Mass:\n> '))
g = 9.8
k = float(raw_input('Drag Coefficient:\n> '))
f = Function('f')
f1 = g * m
t = Symbol('t')
v = Function('v')
equation = dsolve(f1 - k * v(t) - m * Derivative(v(t)), 0)
C1 = Symbol('C1')
C1_ic = solve(equation.rhs.subs({t:0}),C1)[0]
equation = equation.subs({C1:C1_ic})
</code></pre>
| 4 | 2016-08-16T01:05:29Z | 38,965,360 | <p>If I've understood correctly, you want to represent the right hand side of your solution, here's one of the multiple ways to do it:</p>
<pre><code>from sympy import *
import numpy as np
import matplotlib.pyplot as plt
m = float(raw_input('Mass:\n> '))
g = 9.8
k = float(raw_input('Drag Coefficient:\n> '))
f = Function('f')
f1 = g * m
t = Symbol('t')
v = Function('v')
equation = dsolve(f1 - k * v(t) - m * Derivative(v(t)), 0)
C1 = Symbol('C1')
C1_ic = solve(equation.rhs.subs({t: 0}), C1)[0]
equation = equation.subs({C1: C1_ic})
t1 = np.arange(0.0, 50.0, 0.1)
y1 = [equation.subs({t: tt}).rhs for tt in t1]
plt.figure(1)
plt.plot(t1, y1)
plt.show()
</code></pre>
| 0 | 2016-08-16T01:30:14Z | [
"python",
"python-2.7",
"matplotlib",
"sympy"
] |
Plotting Sympy Result to Particular Solution of Differential Equation | 38,965,182 | <p>So far I have managed to find the particular solution to this equation for any given mass and drag coefficient. I have not however found a way to plot the solution or even evaluate the solution for a specific point. I really want to find a way to plot the solution.</p>
<pre><code>from sympy import *
m = float(raw_input('Mass:\n> '))
g = 9.8
k = float(raw_input('Drag Coefficient:\n> '))
f = Function('f')
f1 = g * m
t = Symbol('t')
v = Function('v')
equation = dsolve(f1 - k * v(t) - m * Derivative(v(t)), 0)
C1 = Symbol('C1')
C1_ic = solve(equation.rhs.subs({t:0}),C1)[0]
equation = equation.subs({C1:C1_ic})
</code></pre>
| 4 | 2016-08-16T01:05:29Z | 38,965,413 | <p>Import these libraries (seaborn just makes the plots pretty).</p>
<pre><code>from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
</code></pre>
<p>Then tack this onto the end. This will plot time, t, against velocity, v(t). </p>
<pre><code># make a numpy-ready function from the sympy results
func = lambdify(t, equation.rhs,'numpy')
xvals = np.arange(0,10,.1)
yvals = func(xvals)
# make figure
fig, ax = plt.subplots(1,1,subplot_kw=dict(aspect='equal'))
ax.plot(xvals, yvals)
ax.set_xlabel('t')
ax.set_ylabel('v(t)')
plt.show()
</code></pre>
<p>I get a plot like this for a mass of 2 and a drag coefficient of 2.
<a href="http://i.stack.imgur.com/Vdvva.png" rel="nofollow"><img src="http://i.stack.imgur.com/Vdvva.png" alt="enter image description here"></a></p>
| 3 | 2016-08-16T01:35:55Z | [
"python",
"python-2.7",
"matplotlib",
"sympy"
] |
Plotting Sympy Result to Particular Solution of Differential Equation | 38,965,182 | <p>So far I have managed to find the particular solution to this equation for any given mass and drag coefficient. I have not however found a way to plot the solution or even evaluate the solution for a specific point. I really want to find a way to plot the solution.</p>
<pre><code>from sympy import *
m = float(raw_input('Mass:\n> '))
g = 9.8
k = float(raw_input('Drag Coefficient:\n> '))
f = Function('f')
f1 = g * m
t = Symbol('t')
v = Function('v')
equation = dsolve(f1 - k * v(t) - m * Derivative(v(t)), 0)
C1 = Symbol('C1')
C1_ic = solve(equation.rhs.subs({t:0}),C1)[0]
equation = equation.subs({C1:C1_ic})
</code></pre>
| 4 | 2016-08-16T01:05:29Z | 38,969,152 | <p>For completeness, you may also use Sympy's <code>plot</code>, which is probably more convenient if you want a "quick and dirty" plot.</p>
<pre><code>plot(equation.rhs,(t,0,10))
</code></pre>
<p><a href="http://i.stack.imgur.com/sIEdd.png" rel="nofollow"><img src="http://i.stack.imgur.com/sIEdd.png" alt="enter image description here"></a></p>
| 3 | 2016-08-16T07:39:50Z | [
"python",
"python-2.7",
"matplotlib",
"sympy"
] |
Importing Python modules for Azure Function | 38,965,193 | <p>How can I import modules for a Python Azure Function?</p>
<pre><code>import requests
</code></pre>
<p>Leads to:</p>
<pre><code>2016-08-16T01:02:02.317 Exception while executing function: Functions.detect_measure. Microsoft.Azure.WebJobs.Script: Traceback (most recent call last):
  File "D:\home\site\wwwroot\detect_measure\run.py", line 1, in <module>
    import requests
ImportError: No module named requests
</code></pre>
<p>Related, where are the modules available documented? </p>
| 1 | 2016-08-16T01:06:26Z | 38,985,704 | <p>Python support is currently experimental for Azure Functions, so documentation isn't very good.</p>
<p>You need to bring your own modules. None are available by default on Azure Functions. You can do this by uploading them via the portal UX or Kudu (which is handy for lots of files).</p>
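<p>As one illustration of the bring-your-own approach (a sketch; the folder name and layout are assumptions, not an Azure convention), the uploaded package just has to be reachable on <code>sys.path</code> before the import runs:</p>

```python
import os
import sys

# Assumed layout (illustrative only): the package's source has been
# uploaded next to run.py in a "site-packages" folder.
function_dir = os.getcwd()  # for a real function, the folder containing run.py
vendor_dir = os.path.join(function_dir, "site-packages")
if vendor_dir not in sys.path:
    sys.path.insert(0, vendor_dir)

# import requests  # would now resolve from the uploaded copy
```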
<p>You can leave comments on which packages you'd like, how you'd like to manage your packages here on the tracking issue for "real" Python support - <a href="https://github.com/Azure/azure-webjobs-sdk-script/issues/335" rel="nofollow">https://github.com/Azure/azure-webjobs-sdk-script/issues/335</a></p>
| 2 | 2016-08-16T22:56:48Z | [
"python",
"azure",
"azure-functions"
] |
Change docstring when inheriting but leave method the same | 38,965,232 | <p>I'm building an HTTP API and I factored out a lot of code into a superclass that handles requests to a collection of objects. In my subclass, I specify what database models the operation should work on and the superclass takes care of the rest.</p>
<p>This means that I don't need to re-implement the get, post, etc. methods from the superclass, however, I want to change their docstrings in the subclass so that I can have some documentation more specific to the actual model the endpoint is operating on.</p>
<p>What is the cleanest way to inherit the parent class's functionality but change the docstrings?</p>
<p>Example:</p>
<pre><code>class CollectionApi(Resource):
    """Operate on a collection of something.
    """

    class Meta(object):
        model = None
        schema = None

    def get(self):
        """Return a list of collections.
        """
        # snip

    def post(self):
        """Create a new item in this collection.
        """
        # snip


class ActivityListApi(CollectionApi):
    """Operations on the collection of Activities.
    """

    class Meta(object):
        model = models.Activity
        schema = schemas.ActivitySchema
</code></pre>
<p>Specifically, I need <code>ActivityListApi</code> to have <code>get</code> and <code>post</code> run like in <code>CollectionApi</code>, but I want different docstrings (for automatic documentation's sake).</p>
<p>I can do this:</p>
<pre><code>def get(self):
    """More detailed docs
    """
    return super(ActivityListApi, self).get()
</code></pre>
<p>But this seems messy.</p>
| 3 | 2016-08-16T01:12:56Z | 39,009,559 | <pre><code>class CollectionApi(Resource):
    """Operate on a collection of something.
    """

    def _get(self):
        """actual work... lotsa techy doc here!

        the get methods only serve to have something to hang
        their user docstrings onto
        """
        pass

    def get(self):
        """user-intended doc for CollectionApi"""
        return self._get()


class ActivityListApi(CollectionApi):

    def get(self):
        """user-intended doc for ActivityListApi"""
        return self._get()
</code></pre>
| 1 | 2016-08-18T03:33:13Z | [
"python",
"inheritance",
"python-sphinx"
] |
Is there a way to submit a form from an online app without using selenium or browser client? | 38,965,268 | <p>I want to build myself an online app that manages inventory for myself and submit the item to a local classified site. This local classified site does not have an API, only some old looking HTML forms which have multiple steps.</p>
<p>Without using selenium / webdriver, or spinning up a virtual client through firefox / chrome, is there a way to remotely submit forms on a webpage? It would have to support some sort of session since the submission process is multiple steps.</p>
<p>I have done it using webdriver and python, and this seems to be the most popular answer to similar questions online. </p>
| 0 | 2016-08-16T01:18:08Z | 38,965,328 | <p>As long as you know the post variables that the form is looking for, you can use <a href="https://docs.python.org/2/library/urllib2.html" rel="nofollow">urllib2</a> in Python 2.7, <a href="https://docs.python.org/3.5/library/urllib.request.html?highlight=urllib2" rel="nofollow">urllib.request</a> in Python 3.5 or <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a> to submit each page and get the results, assuming that they truly are just submitting POSTs.</p>
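<p>As a sketch of what that looks like with just the standard library (the URL and field names below are invented placeholders, since the classified site's form isn't shown), each step is ultimately a POST with the right fields:</p>

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical form fields for one step of the submission.
fields = {"title": "Used bike", "price": "50", "step": "2"}
req = Request(
    "http://example.com/post-ad",  # placeholder URL
    data=urlencode(fields).encode("ascii"),
)
print(req.get_method())  # "POST" is implied by the presence of a body
```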
| 0 | 2016-08-16T01:25:45Z | [
"javascript",
"python",
"forms",
"selenium",
"web-scraping"
] |
Is there a way to submit a form from an online app without using selenium or browser client? | 38,965,268 | <p>I want to build myself an online app that manages inventory for myself and submit the item to a local classified site. This local classified site does not have an API, only some old looking HTML forms which have multiple steps.</p>
<p>Without using selenium / webdriver, or spinning up a virtual client through firefox / chrome, is there a way to remotely submit forms on a webpage? It would have to support some sort of session since the submission process is multiple steps.</p>
<p>I have done it using webdriver and python, and this seems to be the most popular answer to similar questions online. </p>
| 0 | 2016-08-16T01:18:08Z | 38,965,484 | <p>In general, you should be able to use any HTTP client / library for this task (as, behind the scenes, everything boils down to just making the correct HTTP calls to some server anyways).</p>
<p>How hard it's going to be greatly depends on how (badly) the application you're interfacing with is designed.</p>
<p>In the simplest scenario, you'll want to:</p>
<ul>
<li>Login, and keep track of the session cookies</li>
<li>Send your data via POST to the server</li>
</ul>
<p>The example here uses Python <code>requests</code>, which is pretty much the best option nowadays.</p>
<p>Let's get started.</p>
<p>First, you'll need to inspect your login page form. Usually a look at the page HTML will give you enough information on how to build the request.</p>
<p>An example could be:</p>
<pre><code><form action="/login" method="POST">
    <input type="text" name="username">
    <input type="password" name="password">
    ...
</form>
</code></pre>
<p>To keep track of the cookies, we're going to use a <code>Session</code> object:</p>
<pre><code>import requests
session = requests.Session()
</code></pre>
<p>Next, we're going to submit the credentials via POST (assuming your app is at <code>http://example.com</code>):</p>
<pre><code>response = session.post(
    'http://example.com/login',
    data={'username': 'your_user', 'password': 'your_password'})
</code></pre>
<p>At this point, you can check <code>response.ok</code> to make sure everything went fine. If you inspect <code>session.cookies</code> you should see your session cookie being set.</p>
<p>From now on, all requests made to your app using that session will be authenticated, and so equivalent to the ones you run from the browser.</p>
<p>To submit form data, simply start inspecting how the application works (get submit URIs and form field names by inspecting forms, as we did for the login page, and submit the data via POST using the same session).</p>
<p>In case the page HTML is complicated, it might also be helpful to watch the HTTP requests being made by using your browser's developer tools, and replicate them via code.</p>
| 1 | 2016-08-16T01:45:45Z | [
"javascript",
"python",
"forms",
"selenium",
"web-scraping"
] |
How can I install a package using easy_install from a package on the local host? | 38,965,310 | <p>I am trying to install Python packages using easy_install from a local directory.</p>
<p>The reason I am doing this is because of network/IT issues.</p>
<p>I have a workstation (Ubuntu) that can access easy_install's repositories on the Internet. I can install things without any problems.</p>
<p>We have a lab network that is closed off to the Internet. I have an Ubuntu VM on this lab network. I cannot use easy_install (or pip) to install anything because it is blocked off from the repositories. I need to install some Python packages so I need to work around this limitation.</p>
<p>The way I got around this limitation for pip was to do a "pip download" of a package, then SCP the package file to the VM in the lab network and do a "pip install" of the package file.</p>
<p>I am trying to do this with easy_install. I was able to download the easy_install package by issuing this command</p>
<pre><code> > easy_install -q --editable --build-directory . <package name>
</code></pre>
<p>For example, suppose I wanted to install pip using easy_install. I have the pip directory after downloading the source code thru easy_install. I can tar the pip directory and SCP it over to the VM. Is there a way to tell easy_install on the VM to install using the files from the pip directory rather than try to install via the external repository?</p>
<p>I have searched for a similar question to this using the easy_install tag but I don't see anything so I thought I'd ask.</p>
| 0 | 2016-08-16T01:23:26Z | 38,965,588 | <p><a href="http://doc.devpi.net/latest/quickstart-pypimirror.html" rel="nofollow">devpi-server</a> is a caching PyPI proxy. If you use it to install packages on one host, it will forward requests to <a href="https://pypi.python.org/pypi" rel="nofollow">https://pypi.python.org/pypi</a>, and save everything it downloads. Then you copy over a tarball of <code>~/.devpi</code> to another host, launch the server, and have <code>devpi-server</code> serve up the cached files.</p>
<p>Host 1 (online):</p>
<pre><code>$ easy_install --user devpi-server
$ devpi-server --start
$ easy_install --user -i http://localhost:3141/root/pypi/ Django
</code></pre>
<p>Copy <code>~/.devpi</code> from Host 1 to Host 2</p>
<p>Somehow you'll also need to copy <code>devpi-server</code> over to the offline box too. It has quite a few dependencies. Maybe to bootstrap you could create a basic VM, run <code>easy_install --user devpi-server</code>, then tar up <code>~/.local</code> and copy it over?</p>
<p>Host 2 (no internet):</p>
<pre><code>$ devpi-server --start
$ easy_install --user -i http://localhost:3141/root/pypi/ Django
# Success!
</code></pre>
| 0 | 2016-08-16T02:02:14Z | [
"python",
"easy-install"
] |
Instantiating a C# Object in Robot Framework | 38,965,341 | <p>I'm looking to use Robot Framework for testing .NET applications and I'm struggling to understand how Robot Framework can instantiate C# objects, to be used in testing.</p>
<p>The C# application I'm playing with is very simple:</p>
<pre><code>SystemUnderTest solution
|_ DataAccess project (uses Entity Framework to connect to database)
|  |_ SchoolContext class
|
|_ Models project
|  |_ Student class
|
|_ SchoolGrades project (class library)
   |_ SchoolRoll class
      |_ AddStudent(Student) method
</code></pre>
<p>I'm wanting to execute the AddStudent method from Robot Framework, passing in a Student object which should be saved to the database.</p>
<p>I've written a test library in Python that uses <a href="https://github.com/pythonnet/pythonnet" rel="nofollow">Python for .NET (pythonnet)</a> to call the .NET application:</p>
<pre><code>import clr
import sys

class SchoolGradesLibrary (object):
    def __init__(self, application_path, connection_string):
        self._application_path = application_path
        sys.path.append(application_path)

        # Need application directory on sys path before we can add references to the DLLs.
        clr.AddReference("SchoolGrades")
        clr.AddReference("DataAccess")
        clr.AddReference("Models")

        from SchoolGrades import SchoolRoll
        from DataAccess import SchoolContext
        from Models import Student

        context = SchoolContext(connection_string)
        self._schoolRoll = SchoolRoll(context)

    def add_student(self, student):
        self._schoolRoll.AddStudent(student)
</code></pre>
<p>Calling this from Python works:</p>
<pre><code>from SchoolGradesLibrary import SchoolGradesLibrary
import clr
application_path = r"C:\...\SchoolGrades\bin\Debug"
connection_string = r"Data Source=...;Initial Catalog=...;Integrated Security=True"
schoolLib = SchoolGradesLibrary(application_path, connection_string)
# Have to wait to add reference until after initializing SchoolGradesLibrary,
# as that adds the application directory to sys path.
clr.AddReference("Models")
from Models import Student
student = Student()
student.StudentName = "Python Student"
schoolLib.add_student(student)
</code></pre>
<p>I'm a bit lost as to how to do the same thing from Robot Framework. This is what I've got so far:</p>
<pre><code>*** Variables ***
${APPLICATION_PATH} =     C:\...\SchoolGrades\bin\Debug
${CONNECTION_STRING} =    Data Source=...;Initial Catalog=...;Integrated Security=True

*** Settings ***
Library    SchoolGradesLibrary    ${APPLICATION_PATH}    ${CONNECTION_STRING}

*** Test Cases ***
Add Student To Database
    ${student} =    Student
    ${student.StudentName} =    RF Student
    Add Student    ${student}
</code></pre>
<p>When I run this it fails with error message: <code>No keyword with name 'Student' found.</code></p>
<p>How can I create a Student object in Robot Framework, to pass to the Add Student keyword? Is there anything else obviously wrong with the test?</p>
<p>The C# application is written with .NET 4.5.1, the Python version is 3.5 and the Robot Framework version is 3.0.</p>
| 1 | 2016-08-16T01:28:09Z | 38,974,579 | <p>You probably can't directly instantiate <code>Student</code> in robot without a helper utility, since <code>Student</code> is a class rather than a keyword.</p>
<p>The simplest solution is to create a keyword in SchoolGradesLibrary that creates the student:</p>
<pre><code>...
import clr
clr.AddReference("Models")
from Models import Student
...

class SchoolGradesLibrary(object):
    ...
    def create_student(self, name=None):
        student = Student()
        if name is not None:
            student.StudentName = name
        return student
    ...
</code></pre>
<p>You could then use that in your test case like a normal keyword. </p>
<pre><code>${student}= create student Inigo Montoya
</code></pre>
| 2 | 2016-08-16T12:08:35Z | [
"c#",
"python",
".net",
"robotframework",
"python.net"
] |
Python Tkinter: version conflict for package "Tcl": have 8.4, need 8.5 | 38,965,457 | <p><br>Using python2.7 Tkinter for executing Tcl.</p>
<p>The Tcl code has <code>package require Tcl 8.5</code>, while the tclsh loads Tcl 8.4 by default.
<br>Causes: version conflict for package "Tcl": have 8.4, need 8.5</p>
<p>I have <code>libtcl8.5.so</code> installed at a custom location.
I tried adding that location to LD_LIBRARY_PATH, TCL_LIBRARY and TCLLIBPATH. Nothing worked; it's as if tclsh completely ignores these environment variables.</p>
| 0 | 2016-08-16T01:42:26Z | 38,965,465 | <p>What worked eventually:<br>
<code>
tcl = Tkinter.tcl()
tcl.eval('package forget Tcl')
tcl.eval('package provide Tcl 8.5')
tcl.eval('package require Tcl')
8.5</code></p>
<p>Success!</p>
| 0 | 2016-08-16T01:43:40Z | [
"python",
"tcl",
"version"
] |
Error 400 when sending a message using python | 38,965,493 | <p>I am trying to send an email using Gmail API. I have successfully authenticated and have a client_secret.json file on my machine.
I have been able to get a list of labels using the quickstart example on the Gmail API website</p>
<p>I have reset my scope successfully to </p>
<pre><code>SCOPES = 'https://mail.google.com'
</code></pre>
<p>allowing full access to my gmail account.</p>
<p>I have a python script, compiled from <a href="https://developers.google.com/gmail/api/quickstart/python#step_3_set_up_the_sample" rel="nofollow">here</a> and <a href="https://developers.google.com/gmail/api/guides/sending#sending_messages" rel="nofollow">here</a>. See below. When executing the script I get the following error message:</p>
<blockquote>
<p>An error occurred: https://www.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "'raw' RFC822 payload message string or uploading message via /upload/* URL required"></p>
</blockquote>
<p>Any thoughts on what I am doing wrong and how to fix it?</p>
<pre><code>from __future__ import print_function
import argparse
import time
from time import strftime, localtime
import os
import base64
import httplib2
from httplib2 import Http
from apiclient import errors
from apiclient import discovery
import oauth2client
from oauth2client import file, client, tools

SCOPES = 'https://mail.google.com'
CLIENT_SECRET_FILE = 'client_secret.json'

store = file.Storage('storage.json')
credentials = store.get()
if not credentials or credentials.invalid:
    flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
    flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
    credentials = tools.run_flow(flow, store, flags)

def get_credentials():
    """Gets valid user credentials from storage.

    If nothing has been stored, or if the stored credentials are invalid,
    the OAuth2 flow is completed to obtain the new credentials.

    Returns:
        Credentials, the obtained credential.
    """
    home_dir = os.path.expanduser('~')
    credential_dir = os.path.join(home_dir, '.credentials')
    if not os.path.exists(credential_dir):
        os.makedirs(credential_dir)
    credential_path = os.path.join(credential_dir,
                                   'gmail-python-quickstart.json')

    store = oauth2client.file.Storage(credential_path)
    credentials = store.get()
    if not credentials or credentials.invalid:
        flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
        flow.user_agent = APPLICATION_NAME
        if flags:
            credentials = tools.run_flow(flow, store, flags)
        else:  # Needed only for compatibility with Python 2.6
            credentials = tools.run(flow, store)
        print('Storing credentials to ' + credential_path)
    return credentials

def send_message(service, user_id, message):
    """Send an email message.

    Args:
        service: Authorized Gmail API service instance.
        user_id: User's email address. The special value "me"
            can be used to indicate the authenticated user.
        message: Message to be sent.

    Returns:
        Sent Message.
    """
    try:
        message = (service.users().messages().send(userId=user_id, body=message)
                   .execute())
        print('Message Id: %s' % message['id'])
        return message
    except errors.HttpError, error:
        print('An error occurred: %s' % error)

def main():
    credentials = get_credentials()
    http = credentials.authorize(httplib2.Http())
    service = discovery.build('gmail', 'v1', http=http)
    send_message(service, 'me', 'test message')

main()
</code></pre>
| 2 | 2016-08-16T01:47:46Z | 38,971,497 | <p>A message has to be created like it is outlined in the <a href="https://developers.google.com/gmail/api/guides/sending#creating_messages" rel="nofollow">Sending Email</a> guide:</p>
<pre><code>def create_message(sender, to, subject, message_text):
    message = MIMEText(message_text)
    message['to'] = to
    message['from'] = sender
    message['subject'] = subject
    return {'raw': base64.urlsafe_b64encode(message.as_string())}

def main():
    credentials = get_credentials()
    http = credentials.authorize(httplib2.Http())
    service = discovery.build('gmail', 'v1', http=http)
    message = create_message(
        'sender@gmail.com', 'receiver@gmail.com', 'Subject', 'Message text'
    )
    send_message(service, 'me', message)
</code></pre>
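As an aside, on Python 3 the same payload construction needs bytes rather than str for the base64 step, so `as_string()` becomes `as_bytes()` plus a decode. A standard-library-only sketch of that variant (addresses here are placeholders):

```python
import base64
from email.mime.text import MIMEText

def create_message(sender, to, subject, message_text):
    # Build an RFC 2822 message and base64url-encode it; the Gmail API
    # expects this string in the 'raw' field of the request body.
    message = MIMEText(message_text)
    message['to'] = to
    message['from'] = sender
    message['subject'] = subject
    raw = base64.urlsafe_b64encode(message.as_bytes()).decode('ascii')
    return {'raw': raw}

body = create_message('sender@example.com', 'receiver@example.com',
                      'Subject', 'Message text')
# Round-tripping the payload recovers the original RFC 2822 message.
decoded = base64.urlsafe_b64decode(body['raw']).decode('ascii')
```

The `send_message` call itself is unchanged; only the raw-payload encoding differs between Python 2 and 3.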
| 3 | 2016-08-16T09:40:37Z | [
"python",
"raspberry-pi",
"gmail-api"
] |
user created log files | 38,965,633 | <p>I am trying to create log files that are saved in my Documents folder upon login to the system. This is my code... something is going wrong within the <code>printTime()</code> function. The issue I am having is getting the .txt file that is created upon execution saved in the Documents folder specified in the code below.</p>
<pre><code>from time import strftime import os.path

def main():
    getTime()

def getTime():
    time = strftime("%Y-%m-%d %I:%M:%S")
    printTime(time)

def printTime(time):
    savePath = "C:\Users\Nicholas\Documents"
    files = open("LogInLog.txt", "a")
    temp = os.path.join(savePath, files)
    files.write("A LogIn occured.")
    files.write(time)
    print files.read
    files.close

main()
</code></pre>
| -3 | 2016-08-16T02:08:16Z | 38,965,888 | <p>Here's a working version:</p>
<pre><code>from time import strftime
import os.path

def main():
    getTime()

def getTime():
    time = strftime("%Y-%m-%d %I:%M:%S")
    printTime(time)

def printTime(time):
    savePath = "C:\Users\Nicholas\Documents"
    logFile = "LogInLog.txt"
    files = open(os.path.join(savePath, logFile), "a+")
    openPosition = files.tell()
    files.write("A LogIn occured.")
    files.write(time)
    files.seek(openPosition)
    print(files.read())
    files.close()

if __name__ == '__main__':
    main()
</code></pre>
<p>There were a few problems with the code snippet posted in the question:</p>
<ol>
<li><p>Two import statements were concatenated together. Each should be on a separate line.</p></li>
<li><p>The <code>os.path.join</code> function doesn't work on an open filehandle.</p></li>
<li><p>The <code>read()</code> and <code>close()</code> methods were missing parens.</p></li>
<li><p>If the intent is to read what is written in append mode, it's necessary to get the current file position via <code>tell()</code> and <code>seek()</code> to that position <em>after</em> writing to the file.</p></li>
<li><p>While it's legal to call <code>main()</code> without any conditional check, it's usually best to make sure the module is being called as a script as opposed to being imported.</p></li>
</ol>
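Point 4 is the easiest to get wrong; the following stdlib-only sketch (throwaway temp directory, made-up log lines) isolates the append-mode tell()/seek() behaviour on its own:

```python
import os
import tempfile

# Work in a throwaway directory so the sketch is self-contained.
path = os.path.join(tempfile.mkdtemp(), 'LogInLog.txt')

with open(path, 'a+') as f:
    f.write('A LogIn occured. 2016-08-16 01:00:00\n')

with open(path, 'a+') as f:
    pos = f.tell()      # append mode opens at EOF: where this session's writes begin
    f.write('A LogIn occured. 2016-08-16 02:00:00\n')
    f.seek(pos)         # jump back to read only what this session wrote
    new_text = f.read()
    f.seek(0)           # or rewind fully for the whole log
    full_text = f.read()
```

Without the `seek()` after writing, the `read()` returns an empty string because the file position is already at the end of the file.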
| 0 | 2016-08-16T02:44:05Z | [
"python",
"logfiles"
] |
Pandas: take whichever column is not NaN | 38,965,667 | <p>I am working with a fairly messy data set that came as individual csv files with slightly different column names. It would be too onerous to rename columns in the csv files, partly because I am still discovering all the variations, so I am looking to determine, for a set of columns in a given row, which field is not NaN and to carry that forward to a new column. Is there a way to do that?</p>
<p>Case in point. Let's say I have a data frame that looks like this:</p>
<pre><code>Index A B
1 15 NaN
2 NaN 11
3 NaN 99
4 NaN NaN
5 12 14
</code></pre>
<p>Let's say my desired output from this is to create a new column C such that my data frame will look like the following:</p>
<pre><code>Index A B C
1 15 NaN 15
2 NaN 11 11
3 NaN 99 99
4 NaN NaN NaN
5 12 14 12 (so giving priority to A over B)
</code></pre>
<p>How can I accomplish this?</p>
| 4 | 2016-08-16T02:13:01Z | 38,965,696 | <p>If you just have 2 columns, the cleanest way would be to use <code>where</code> (the syntax is <code>where([condition], [value if condition is true], [value if condition is false])</code> (for some reason it took me a while to wrap my head around this).</p>
<pre><code>In [2]: df.A.where(df.A.notnull(),df.B)
Out[2]:
0 15.0
1 11.0
2 99.0
3 NaN
4 12.0
Name: A, dtype: float64
</code></pre>
<p>If you have more than two columns, it might be simpler to use <code>max</code> or <code>min</code>; this will ignore the null values, however you'll lose the "column prececence" you want:</p>
<pre><code>In [3]: df.max(axis=1)
Out[3]:
0 15.0
1 11.0
2 99.0
3 NaN
4 14.0
dtype: float64
</code></pre>
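Another option that keeps left-to-right column precedence with any number of columns is to chain combine_first; a sketch using the question's two-column data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [15, np.nan, np.nan, np.nan, 12],
                   'B': [np.nan, 11, 99, np.nan, 14]})

# combine_first fills A's NaNs from B, so A wins where both are present;
# with more columns just keep chaining:
# df.A.combine_first(df.B).combine_first(df.C)
df['C'] = df['A'].combine_first(df['B'])
```

This gives [15, 11, 99, NaN, 12] — the same A-over-B precedence the question asks for, without needing a condition expression.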
| 2 | 2016-08-16T02:17:05Z | [
"python",
"pandas"
] |
Pandas: take whichever column is not NaN | 38,965,667 | <p>I am working with a fairly messy data set that came as individual csv files with slightly different column names. It would be too onerous to rename columns in the csv files, partly because I am still discovering all the variations, so I am looking to determine, for a set of columns in a given row, which field is not NaN and to carry that forward to a new column. Is there a way to do that?</p>
<p>Case in point. Let's say I have a data frame that looks like this:</p>
<pre><code>Index A B
1 15 NaN
2 NaN 11
3 NaN 99
4 NaN NaN
5 12 14
</code></pre>
<p>Let's say my desired output from this is to create a new column C such that my data frame will look like the following:</p>
<pre><code>Index A B C
1 15 NaN 15
2 NaN 11 11
3 NaN 99 99
4 NaN NaN NaN
5 12 14 12 (so giving priority to A over B)
</code></pre>
<p>How can I accomplish this?</p>
| 4 | 2016-08-16T02:13:01Z | 38,965,747 | <p>For a dataframe with an arbitrary number of columns, you can back fill the rows (<code>.bfill(axis=1)</code>) and take the first column (<code>.iloc[:, 0]</code>):</p>
<pre><code>df = pd.DataFrame({
'A': [15, None, None, None, 12],
'B': [None, 11, 99, None, 14],
'C': [10, None, 10, 10, 10]})
df['D'] = df.bfill(axis=1).iloc[:, 0]
>>> df
A B C D
0 15 NaN 10 15
1 NaN 11 NaN 11
2 NaN 99 10 99
3 NaN NaN 10 10
4 12 14 10 12
</code></pre>
| 4 | 2016-08-16T02:25:07Z | [
"python",
"pandas"
] |
Pandas: take whichever column is not NaN | 38,965,667 | <p>I am working with a fairly messy data set that came as individual csv files with slightly different column names. It would be too onerous to rename columns in the csv files, partly because I am still discovering all the variations, so I am looking to determine, for a set of columns in a given row, which field is not NaN and to carry that forward to a new column. Is there a way to do that?</p>
<p>Case in point. Let's say I have a data frame that looks like this:</p>
<pre><code>Index A B
1 15 NaN
2 NaN 11
3 NaN 99
4 NaN NaN
5 12 14
</code></pre>
<p>Let's say my desired output from this is to create a new column C such that my data frame will look like the following:</p>
<pre><code>Index A B C
1 15 NaN 15
2 NaN 11 11
3 NaN 99 99
4 NaN NaN NaN
5 12 14 12 (so giving priority to A over B)
</code></pre>
<p>How can I accomplish this?</p>
| 4 | 2016-08-16T02:13:01Z | 38,966,495 | <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html" rel="nofollow"><code>pandas.DataFrame.update</code></a>:</p>
<pre><code>df['updated'] = np.nan
for col in df.columns:
    df['updated'].update(df[col])
</code></pre>
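For completeness, here is a runnable version with the question's data filled in. Note that update() overwrites with the non-NaN values of its argument, so the last column iterated wins — iterate in reverse column order to give column A precedence. Building a standalone Series first (rather than updating a column selected from the frame) avoids chained-assignment surprises:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [15, np.nan, np.nan, np.nan, 12],
                   'B': [np.nan, 11, 99, np.nan, 14]})

updated = pd.Series(np.nan, index=df.index)
# Later update() calls overwrite earlier ones wherever the argument is
# non-NaN, so walk the columns in reverse to make column A win.
for col in reversed(df.columns.tolist()):
    updated.update(df[col])
df['updated'] = updated
```

With the reversed loop the result is [15, 11, 99, NaN, 12], i.e. A takes precedence over B.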
| 1 | 2016-08-16T04:05:13Z | [
"python",
"pandas"
] |
Pandas: take whichever column is not NaN | 38,965,667 | <p>I am working with a fairly messy data set that came as individual csv files with slightly different column names. It would be too onerous to rename columns in the csv files, partly because I am still discovering all the variations, so I am looking to determine, for a set of columns in a given row, which field is not NaN and to carry that forward to a new column. Is there a way to do that?</p>
<p>Case in point. Let's say I have a data frame that looks like this:</p>
<pre><code>Index A B
1 15 NaN
2 NaN 11
3 NaN 99
4 NaN NaN
5 12 14
</code></pre>
<p>Let's say my desired output from this is to create a new column C such that my data frame will look like the following:</p>
<pre><code>Index A B C
1 15 NaN 15
2 NaN 11 11
3 NaN 99 99
4 NaN NaN NaN
5 12 14 12 (so giving priority to A over B)
</code></pre>
<p>How can I accomplish this?</p>
| 4 | 2016-08-16T02:13:01Z | 38,966,777 | <p>Or you could use 'df.apply' to give priority to column A.</p>
<pre><code>def func1(row):
    A = row['A']
    B = row['B']
    # A == float('nan') is always False (NaN never equals NaN),
    # so test for missing values with pd.isnull instead
    if pd.isnull(A):
        if pd.isnull(B):
            y = float('nan')
        else:
            y = B
    else:
        y = A
    return y

df['C'] = df.apply(func1, axis=1)
</code></pre>
| 0 | 2016-08-16T04:41:47Z | [
"python",
"pandas"
] |
Pandas: take whichever column is not NaN | 38,965,667 | <p>I am working with a fairly messy data set that came as individual csv files with slightly different column names. It would be too onerous to rename columns in the csv files, partly because I am still discovering all the variations, so I am looking to determine, for a set of columns in a given row, which field is not NaN and to carry that forward to a new column. Is there a way to do that?</p>
<p>Case in point. Let's say I have a data frame that looks like this:</p>
<pre><code>Index A B
1 15 NaN
2 NaN 11
3 NaN 99
4 NaN NaN
5 12 14
</code></pre>
<p>Let's say my desired output from this is to create a new column C such that my data frame will look like the following:</p>
<pre><code>Index A B C
1 15 NaN 15
2 NaN 11 11
3 NaN 99 99
4 NaN NaN NaN
5 12 14 12 (so giving priority to A over B)
</code></pre>
<p>How can I accomplish this?</p>
| 4 | 2016-08-16T02:13:01Z | 38,966,998 | <p>Try this: (This methods allows for flexiblity of giving preference to columns without relying on order of columns.)</p>
<p>Using @Alexanders setup.</p>
<pre><code>df["D"] = df["B"]
df["D"] = df['D'].fillna(df['A'].fillna(df['B'].fillna(df['C'])))
A B C D
0 15.0 NaN 10.0 15.0
1 NaN 11.0 NaN 11.0
2 NaN 99.0 10.0 99.0
3 NaN NaN 10.0 10.0
4 12.0 14.0 10.0 14.0
</code></pre>
| 1 | 2016-08-16T05:07:04Z | [
"python",
"pandas"
] |
Find closest point in Pandas DataFrames | 38,965,720 | <p>I am quite new to Python. I have the following table in Postgres. These are polygon values, with four coordinates per polygon sharing the same <code>Id</code> and a <code>ZONE</code> name. I have stored this data in a Python dataframe called <code>df1</code>.</p>
<pre><code>Id Order Lat Lon Zone
00001 1 50.6373473 3.075029928 A
00001 2 50.63740441 3.075068636 A
00001 3 50.63744285 3.074951754 A
00001 4 50.63737839 3.074913884 A
00002 1 50.6376054 3.0750528 B
00002 2 50.6375896 3.0751209 B
00002 3 50.6374239 3.0750246 B
00002 4 50.6374404 3.0749554 B
</code></pre>
<p>I have Json data with <code>Lon</code> and <code>Lat</code> values and I have stored them is python dataframe called <code>df2</code>.</p>
<pre><code>Lat Lon
50.6375524099 3.07507914474
50.6375714407 3.07508201591
</code></pre>
<p>My task is to compare <code>df2</code> <code>Lat</code> and <code>Lon</code> values with four coordinates of each zone in <code>df1</code> to extract the zone name and add it to <code>df2</code>.</p>
<p>For instance <code>(50.637552409 3.07507914474)</code> belongs to <code>Zone B</code>.</p>
<pre><code>#This is ID with Zone
df1 = pd.read_sql_query("""SELECT * from "zmap" """,con=engine)
#This is with lat,lon values
df2 = pd.read_sql_query("""SELECT * from "E1" """,con=engine)
df2['latlon'] = zip(df2.lat, df2.lon)
zones = [
["A", [[50.637347297, 3.075029928], [50.637404408, 3.075068636], [50.637442847, 3.074951754],[50.637378390, 3.074913884]]]]
for i in range(0, len(zones)):  # for each zone's points
    X = mplPath.Path(np.array(zones[i][1]))
    # find if points are in the zone
    Y = X.contains_points(df2.latlon.values.tolist())
    # Label points that are in the current zone (use .loc for boolean row selection)
    df2.loc[Y, 'zone'] = zones[i][0]
</code></pre>
<p>Currently I have done it manually for Zone 'A'. I need to generate the "Zones" for the coordinates in df2.</p>
| 0 | 2016-08-16T02:20:47Z | 39,318,808 | <p>This sounds like a good use case for <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="nofollow">scipy cdist</a>, also discussed <a href="http://codereview.stackexchange.com/questions/28207/finding-the-closest-point-to-a-list-of-points">here</a>.</p>
<pre><code>import pandas as pd
from scipy.spatial.distance import cdist
data1 = {'Lat': pd.Series([50.6373473,50.63740441,50.63744285,50.63737839,50.6376054,50.6375896,50.6374239,50.6374404]),
'Lon': pd.Series([3.075029928,3.075068636,3.074951754,3.074913884,3.0750528,3.0751209,3.0750246,3.0749554]),
'Zone': pd.Series(['A','A','A','A','B','B','B','B'])}
data2 = {'Lat': pd.Series([50.6375524099,50.6375714407]),
'Lon': pd.Series([3.07507914474,3.07508201591])}
def closest_point(point, points):
    """ Find closest point from a list of points. """
    return points[cdist([point], points).argmin()]

def match_value(df, col1, x, col2):
    """ Match value x from col1 row to value in col2. """
    return df[df[col1] == x][col2].values[0]
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
df1['point'] = [(x, y) for x,y in zip(df1['Lat'], df1['Lon'])]
df2['point'] = [(x, y) for x,y in zip(df2['Lat'], df2['Lon'])]
df2['closest'] = [closest_point(x, list(df1['point'])) for x in df2['point']]
df2['zone'] = [match_value(df1, 'point', x, 'Zone') for x in df2['closest']]
print(df2)
# Lat Lon point closest zone
# 0 50.637552 3.075079 (50.6375524099, 3.07507914474) (50.6375896, 3.0751209) B
# 1 50.637571 3.075082 (50.6375714407, 3.07508201591) (50.6375896, 3.0751209) B
</code></pre>
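If scipy isn't available, the nearest-point lookup itself needs nothing beyond the standard library — a brute-force sketch with a few of the question's coordinates:

```python
def closest_point(point, points):
    # Brute-force nearest neighbour; fine for small lists like the zone corners here.
    return min(points, key=lambda p: (p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2)

pts = [(50.6373473, 3.075029928), (50.6375896, 3.0751209), (50.6374239, 3.0750246)]
nearest = closest_point((50.6375524099, 3.07507914474), pts)
# nearest is (50.6375896, 3.0751209), consistent with the zone-B match above
```

For thousands of points, cdist (or a KD-tree via scipy.spatial.cKDTree) will be much faster than this O(n) scan per query.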
| 0 | 2016-09-04T15:57:20Z | [
"python",
"postgresql",
"pandas"
] |
Using Python (Flask) and PHP on apache2 | 38,965,933 | <p>I am running a page based on Flask and Apache2/mod_wsgi and I also want to deploy a page using PHP (an IDE called Codiad). Flask serves my main page (enjay.work) and I want Codiad to run at enjay.work/codiad.</p>
<p>Unfortunately I have the most basic understanding of Apache config and don't know at all how to find what I need.</p>
<p>Here is what I have so far:</p>
<pre><code><virtualhost *:80>
ServerName enjay.work
DocumentRoot /home/nathan/www/enjay
WSGIDaemonProcess enjay user=nathan group=www-data threads=5 home=/home/nathan/www
WSGIScriptAlias / /home/nathan/www/enjay/enjay.wsgi
<Directory /Codiad>
Options indexes FollowSymlinks MultiViews
AllowOverride All
Require all granted
Allow from all
</Directory>
<Directory />
WSGIProcessGroup enjay
WSGIApplicationGroup %{GLOBAL}
WSGIScriptReloading On
Require all granted
</Directory>
</virtualhost>
</code></pre>
<p>Now when I navigate to enjay.work/Codiad I get a 404 page. For the PHP project I am adding to my site I followed these <a href="https://github.com/Codiad/Codiad/wiki/Quick-install-on-Ubuntu" rel="nofollow" title="Codiad Ubuntu quick install">directions</a> (loosely).</p>
<p>I can get just the Python to work, or just the PHP to work; when I combine the two config files I get the Python page, but the path that should return the PHP gives me a 404.</p>
| 0 | 2016-08-16T02:53:01Z | 38,966,155 | <p>I think it should be like this.</p>
<pre><code><Directory /home/nathan/www/enjay>
Options indexes FollowSymlinks MultiViews
AllowOverride All
Require all granted
Allow from all
</Directory>
</code></pre>
| -1 | 2016-08-16T03:22:15Z | [
"php",
"python",
"apache",
"ubuntu",
"flask"
] |
Using Python (Flask) and PHP on apache2 | 38,965,933 | <p>I am running a page based on Flask and Apache2/mod_wsgi and I also want to deploy a page using PHP (an IDE called Codiad). Flask serves my main page (enjay.work) and I want Codiad to run at enjay.work/codiad.</p>
<p>Unfortunately I have the most basic understanding of Apache config and don't know at all how to find what I need.</p>
<p>Here is what I have so far:</p>
<pre><code><virtualhost *:80>
ServerName enjay.work
DocumentRoot /home/nathan/www/enjay
WSGIDaemonProcess enjay user=nathan group=www-data threads=5 home=/home/nathan/www
WSGIScriptAlias / /home/nathan/www/enjay/enjay.wsgi
<Directory /Codiad>
Options indexes FollowSymlinks MultiViews
AllowOverride All
Require all granted
Allow from all
</Directory>
<Directory />
WSGIProcessGroup enjay
WSGIApplicationGroup %{GLOBAL}
WSGIScriptReloading On
Require all granted
</Directory>
</virtualhost>
</code></pre>
<p>Now when I navigate to enjay.work/Codiad I get a 404 page. For the PHP project I am adding to my site I followed these <a href="https://github.com/Codiad/Codiad/wiki/Quick-install-on-Ubuntu" rel="nofollow" title="Codiad Ubuntu quick install">directions</a> (loosely).</p>
<p>I can get just the Python to work, or just the PHP to work; when I combine the two config files I get the Python page, but the path that should return the PHP gives me a 404.</p>
| 0 | 2016-08-16T02:53:01Z | 38,966,636 | <p>Change the configuration for your PHP page to point directly at the project (in this case /home/nathan/www/enjay/Codiad) then above that configuration add an alias for the web address you want</p>
<pre><code>Alias /Codiad "/home/nathan/www/enjay/Codiad"
<Directory /home/nathan/www/enjay/Codiad>
**Existing Config**
</Directory>
</code></pre>
| 0 | 2016-08-16T04:25:45Z | [
"php",
"python",
"apache",
"ubuntu",
"flask"
] |
Create a SSH session connect to device through a terminal server | 38,965,949 | <p>I want to connect to a network device. But under our policy, I have to ssh successfully to a terminal server first, then from this one, ssh to the network device. In Python, I use Paramiko:</p>
<pre><code>import paramiko
ssh = paramiko.SSHClient()
print("OTP : ")
otp = raw_input()
ssh.port=9922
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.load_system_host_keys()
ssh.connect('10.0.0.1',9922,username='khangtt',password=str('12345')+str(otp))
stdin, stdout, stderr = ssh.exec_command('whoami')
stdin.close()
for line in stdout.read().splitlines():
    print(line)
</code></pre>
<p>I connect to the server successfully, and I can see my username in the output. But I don't know how to SSH on to the device. I used this, but nothing happens and there is no prompt to enter the user/password:</p>
<pre><code>stdin, stdout, stderr = ssh.exec_command('telnet 10.80.1.120')
stdin.close()
for line in stdout.read().splitlines():
    print(line)
</code></pre>
| 0 | 2016-08-16T02:54:30Z | 38,966,015 | <p>Since you are using paramiko, use the documentation here: <a href="https://pynet.twb-tech.com/blog/python/paramiko-ssh-part1.html" rel="nofollow">https://pynet.twb-tech.com/blog/python/paramiko-ssh-part1.html</a>. Its very explicit. </p>
<p>Or you can also try the codes here to connect:</p>
<pre><code>ssh.connect('10.0.0.1', username='khangtt', password=str('12345')+str(otp), look_for_keys=False, allow_agent=False)
</code></pre>
<p>Or </p>
<pre><code>ssh.connect('10.0.0.1', username='khangtt', password=str('12345')+str(otp))
</code></pre>
<p>in case you don't need to look for keys or allow an agent. (Note: pass username and password as keyword arguments, since the second positional parameter of <code>connect</code> is the port.)</p>
| 0 | 2016-08-16T03:02:42Z | [
"python",
"ssh",
"telnet"
] |
How to calculate average with Pandas Dataframe using Cells from different rows | 38,965,951 | <p>I have a pandas DataFrame that currently looks like this </p>
<p>Here is the <code>df.head()</code> and <code>df.tail()</code></p>
<pre><code> Name Count Year Gender
0 John 9655 1880 M
1 William 9532 1880 M
2 James 5927 1880 M
3 Charles 5348 1880 M
4 George 5126 1880 M
Name Count Year Gender
743745 Zykeem 5 2014 M
743746 Zymeer 5 2014 M
743747 Zymiere 5 2014 M
743748 Zyran 5 2014 M
743749 Zyrin 5 2014 M
</code></pre>
<p>It is the number of babies given that name during that year. I want to calculate the percent change from the previous year. Is there a pythonic way of using pandas to do that simply, or do I need to make a complicated loop?</p>
 | 1 | 2016-08-16T02:54:40Z | 38,966,351 | <p>The first step is to group by year. Then you can iterate over the years, merge each year's DataFrame with the previous year's, compute your statistics and collect the results in another DataFrame:</p>
<pre><code>perc_chng = []
keys = []
groups = dict(list(df.groupby('Year')))
for year in sorted(groups)[1:]:
    try:
        prev = groups[year - 1]
    except KeyError:
        continue  # no data for the previous year
    merged = pd.merge(groups[year], prev, how='outer', on='Name').set_index('Name')
    merged['perc'] = (merged['Count_x'] - merged['Count_y']) / merged['Count_y'] * 100
    perc_chng.append(merged['perc'])
    keys.append('{p}-{c}'.format(p=year - 1, c=year))
res = pd.concat(perc_chng, keys=keys)
</code></pre>
| 1 | 2016-08-16T03:47:22Z | [
"python",
"pandas"
] |
How to calculate average with Pandas Dataframe using Cells from different rows | 38,965,951 | <p>I have a pandas DataFrame that currently looks like this </p>
<p>Here is the <code>df.head()</code> and <code>df.tail()</code></p>
<pre><code> Name Count Year Gender
0 John 9655 1880 M
1 William 9532 1880 M
2 James 5927 1880 M
3 Charles 5348 1880 M
4 George 5126 1880 M
Name Count Year Gender
743745 Zykeem 5 2014 M
743746 Zymeer 5 2014 M
743747 Zymiere 5 2014 M
743748 Zyran 5 2014 M
743749 Zyrin 5 2014 M
</code></pre>
<p>It is the number of babies given that name during that year. I want to calculate the percent change from the previous year. Is there a pythonic way of using pandas to do that simply, or do I need to make a complicated loop?</p>
| 1 | 2016-08-16T02:54:40Z | 38,966,393 | <p>Pivot your data so it looks like this:</p>
<pre><code> John Simon Barry
1880 30 0 0
1930 20 10 5
1960 18 9 8
</code></pre>
<p>Then it's a simple diff, something like this:</p>
<pre><code># use .values so the row shift isn't undone by index alignment
df.iloc[1:] = 100.0 * (df.values[1:] - df.values[:-1]) / df.values[:-1]
df.iloc[0] = np.nan
</code></pre>
| 0 | 2016-08-16T03:51:56Z | [
"python",
"pandas"
] |
How to calculate average with Pandas Dataframe using Cells from different rows | 38,965,951 | <p>I have a pandas DataFrame that currently looks like this </p>
<p>Here is the <code>df.head()</code> and <code>df.tail()</code></p>
<pre><code> Name Count Year Gender
0 John 9655 1880 M
1 William 9532 1880 M
2 James 5927 1880 M
3 Charles 5348 1880 M
4 George 5126 1880 M
Name Count Year Gender
743745 Zykeem 5 2014 M
743746 Zymeer 5 2014 M
743747 Zymiere 5 2014 M
743748 Zyran 5 2014 M
743749 Zyrin 5 2014 M
</code></pre>
<p>It is the number of babies given that name during that year. I want to calculate the percent change from the previous year. Is there a pythonic way of using pandas to do that simply, or do I need to make a complicated loop?</p>
| 1 | 2016-08-16T02:54:40Z | 38,970,129 | <p>assuming that you have the following DF:</p>
<pre><code>In [13]: df
Out[13]:
Name Count Year Gender
0 John 9655 1880 M
1 William 9532 1880 M
2 James 5927 1880 M
3 John 9000 1881 M
4 William 8000 1881 M
5 James 5000 1881 M
</code></pre>
<p>you can use <code>groupby()</code> followed by <code>pct_change()</code>:</p>
<pre><code>In [14]: df['pct_change'] = df.sort_values('Year').groupby('Name').Count.pct_change() * 100
In [15]: df
Out[15]:
Name Count Year Gender pct_change
0 John 9655 1880 M NaN
1 William 9532 1880 M NaN
2 James 5927 1880 M NaN
3 John 9000 1881 M -6.784050
4 William 8000 1881 M -16.072178
5 James 5000 1881 M -15.640290
</code></pre>
| 0 | 2016-08-16T08:34:25Z | [
"python",
"pandas"
] |
Segmentation fault when Connect to MySQL in python 3.5.1 on AIX 6 | 38,965,970 | <p>I tried to do the following tasks:
1. Compile the Python 3.5.1 source with GCC 4.2.0 on AIX 6.0;
2. Use Python 3.5.1 for my work, including connecting to and using a MySQL database.
I can compile the Python 3.5.1 source successfully, and everything works well except connecting to and using the database:</p>
<pre><code>
$/usr/local/bin/python3.5
Python 3.5.1 (default, Aug 12 2016, 15:48:31)
[GCC 4.2.0] on aix6
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> dir(sys.path)
['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']
</code></pre>
<p>I then tried to install PyMySQL-0.7.6, which was working well for me on Linux and Windows, and it installed successfully. Unfortunately, when I tried to use it to connect to the MySQL database, it gave me a 'Segmentation fault(coredump)' error which aborted Python automatically:</p>
<pre><code>
>>> import pymysql
>>> connection = pymysql.connect(host='150.17.31.113',user='sywu',password='sywu',db='sydb',charset='utf8mb4',cursorclass=pymysql.cursors.DictCursor)
Segmentation fault(coredump)
$
</code></pre>
<p>It is like this again and again. I read the core file, but it contains unreadable binary content, and I can't figure out what the problem is. Since I can't do it with pymysql, I tried installing mysql-connector-python 2.1.3. It installed successfully, but I got an 'Illegal instruction(coredump)' error which aborted Python automatically:</p>
<pre><code>
Type "help", "copyright", "credits" or "license" for more information.
>>> import mysql.connector
>>> cnx=mysql.connector.connect(user='sywu',password='sywu',host='150.17.31.113',database='sydb')
Illegal instruction(coredump)
$
</code></pre>
<p>Has anyone done this successfully on AIX? Any help is appreciated.</p>
| 0 | 2016-08-16T02:57:13Z | 38,966,175 | <p>I haven't used AIX but from the code snippet, I could figure that from <a href="https://github.com/PyMySQL/PyMySQL/blob/master/example.py" rel="nofollow">https://github.com/PyMySQL/PyMySQL/blob/master/example.py</a>, the parameter is <code>passwd</code> not <code>password</code>. </p>
<pre><code>import pymysql
conn = pymysql.connect(host='localhost', port=3306, user='root', passwd='', db='mysql')
</code></pre>
<p>Also, look at the example in the same library: <a href="https://github.com/PyMySQL/PyMySQL" rel="nofollow">https://github.com/PyMySQL/PyMySQL</a>. Maybe it can help you out.</p>
| 0 | 2016-08-16T03:24:46Z | [
"python",
"mysql",
"segmentation-fault"
] |
How to restrict Anaconda from upgrading the module being installed if it's a higher-level dependency | 38,966,075 | <p>I'm trying to use the Continuum IO Anaconda packaging system to package python-2.7.10 with other dependent modules for our environment. I want to automate the package distribution so that it is simply a single installation of Python with the modules we require.</p>
<p>The issue I'm having is that when I specify the modules under the build parameter in meta.yaml, it upgrades the version of Python being installed, despite the fact that it is python-2.7.10. This causes an error in the build process.</p>
<p>Is there a way to pin the version of python being installed so that if there is a dependency it will hard fail, or use an earlier version of the package?</p>
<p>Here is my meta.yaml; I've tried not pinning the versions of the modules as well.</p>
<pre><code>package:
  name: python
  version: 2.7.10

source:
  fn: Python-2.7.10.tgz
  url: https://www.python.org/ftp/python/2.7.10/Python-2.7.10.tgz
  md5: d7547558fd673bd9d38e2108c6b42521

build:
  no_link: bin/python

requirements:
  build:
    - bzip2 [unix]
    - zlib [unix]
    - sqlite [unix]
    - readline [unix]
    - tk [unix]
    - openssl [unix]
    - system [linux]
    - ipython 5.0.0
    - numpy 1.11.1
    - cython 0.24.1
    - scipy 0.18.0
    - pandas 0.18.1
    - patsy 0.4.1
    - statsmodels 0.6.1
    - matplotlib 1.5.2
    - ggplot 0.9.4
    - scikit-learn 0.17.1
    - distribute 0.6.45
    - backports.ssl-match-hostname 3.5.0.1
    - certifi 14.05.14
    - nose_parameterized 0.5.0
    - pyparsing 2.1.4
    - python-dateutil 2.5.3
    - pytz 2016.6.1
    - pyzmq 15.3.0
    - simplejson 3.3.3
    - six 1.10.0
    - sympy 1.0
    - tornado 4.4.1
    - virtualenv 13.0.1
    - wsgiref 0.1.2
    - python-swiftclient 2.7.0
    #- python-ceilometerclient  # issue
    #- python-heatclient  # issue
    #- python-keystoneclient 1.6.0
    #- python-novaclient 2.26.0
    #- python-troveclient  # issue
    - python-cinderclient 1.1.2
    - python-glanceclient 0.17.2
    - python-neutronclient 2.4.0
    - networkx 1.11
    - pysal 1.11.1
    - pyyaml 3.11
    - shapely 1.5.13
    - beautifulsoup4 4.4.1
    - nltk 3.2.1
    - requests 2.10.0
    - seaborn 0.5.0
    - h5py 2.6.0
    - xlrd 1.0.0
    - markupsafe 0.23
    - crypto 1.1.0
    - jinja2 2.8
    - openpyxl 2.3.2
    - jaro_winkler 1.0.2
    - bokeh 0.12.1
    - numexpr 2.6.1
    - pytables 3.2.3.1
    - pycurl 7.43.0
    - mgrs 1.1.0
    - psutil 4.3.0
    - biopython 1.67
    - enaml 0.9.8
    - mdp 3.5
    - bitarray 0.8.1
    - clusterpy 0.9.9
    - pyside 1.2.1
    - pyqt 4.11.4
    - parsedatetime 1.4
    - pymysql 0.6.7
    - pyodbc 3.0.10
    - tabulate 0.7.2
  run:
    - zlib [unix]
    - sqlite [unix]
    - readline [unix]
    - tk [unix]
    - openssl [unix]
    - system [linux]

test:
  commands:
    - python -V [unix]
    - 2to3 -h [unix]
    - python-config --help [unix]

about:
  home: http://www.python.org/
  summary: general purpose programming language
  license: PSF
</code></pre>
<p>The output with the error:</p>
<pre><code>$ conda build .
Removing old build environment
BUILD START: python-2.7.10-0
(actual version deferred until further download or env creation)
Using Anaconda Cloud api site https://api.anaconda.org
The following packages will be downloaded:
package | build
---------------------------|-----------------
geos-3.5.0 | 0 16.9 MB defaults
libgcc-4.8.5 | 1 922 KB r
pixman-0.32.6 | 0 2.4 MB defaults
unixodbc-2.3.4 | 0 688 KB defaults
yaml-0.1.6 | 0 246 KB defaults
curl-7.49.0 | 1 543 KB defaults
glib-2.43.0 | 2 7.4 MB r
hdf5-1.8.17 | 1 1.9 MB defaults
atom-0.3.10 | py27_0 676 KB defaults
backports_abc-0.4 | py27_0 5 KB defaults
beautifulsoup4-4.4.1 | py27_0 116 KB defaults
bitarray-0.8.1 | py27_0 89 KB defaults
et_xmlfile-1.0.1 | py27_0 15 KB defaults
future-0.15.2 | py27_0 616 KB defaults
jaro_winkler-1.0.2 | py27_0 24 KB auto
jdcal-1.2 | py27_1 9 KB defaults
kiwisolver-0.1.3 | py27_0 571 KB defaults
markupsafe-0.23 | py27_2 31 KB defaults
mgrs-1.1.0 | py27_0 48 KB auto
mpmath-0.19 | py27_1 873 KB defaults
nltk-3.2.1 | py27_0 1.7 MB defaults
parsedatetime-1.2 | py27_0 39 KB auto
ply-3.8 | py27_0 71 KB defaults
psutil-4.3.0 | py27_0 224 KB defaults
pycurl-7.43.0 | py27_0 128 KB defaults
pymysql-0.7.6 | py27_0 116 KB defaults
pyodbc-3.0.10 | py27_0 146 KB defaults
pyyaml-3.11 | py27_4 297 KB defaults
pyzmq-15.4.0 | py27_0 705 KB defaults
requests-2.10.0 | py27_0 611 KB defaults
shapely-1.5.16 | py27_0 494 KB defaults
tabulate-0.7.2 | py27_0 18 KB auto
wsgiref-0.1.2 | py27_0 943 B auto
xlrd-1.0.0 | py27_0 181 KB defaults
biopython-1.67 | np111py27_0 2.2 MB defaults
clusterpy-0.9.9 | py27_1 101 KB conda-forge
cmd2-0.6.7 | py27_0 33 KB auto
h5py-2.6.0 | np111py27_2 2.4 MB defaults
jinja2-2.8 | py27_1 264 KB defaults
jsonschema-2.5.1 | py27_0 55 KB defaults
mdp-3.5 | py27_0 477 KB defaults
networkx-1.11 | py27_0 1.1 MB defaults
numexpr-2.6.1 | np111py27_0 347 KB defaults
openpyxl-2.3.2 | py27_0 248 KB defaults
rsa-3.4.2 | py27_0 50 KB conda-forge
singledispatch-3.4.0.3 | py27_1 17 KB r
ssl_match_hostname-3.4.0.2 | py27_1 6 KB defaults
cliff-1.10.1 | py27_0 36 KB gus
crypto-1.1.0 | py27_0 3 KB auto
pysal-1.11.1 | py27_0 11.2 MB defaults
pytables-3.2.3.1 | np111py27_0 3.4 MB defaults
tornado-4.4.1 | py27_0 552 KB defaults
bokeh-0.12.1 | py27_0 3.2 MB defaults
harfbuzz-0.9.35 | 6 1.1 MB r
ipython-5.1.0 | py27_0 936 KB defaults
pyopenssl-16.0.0 | py27_0 66 KB defaults
pango-1.36.8 | 3 796 KB r
qt-4.8.7 | 4 32.7 MB defaults
python-neutronclient-2.4.0 | py27_0 222 KB gus
shiboken-1.2.1 | py27_0 883 KB defaults
enaml-0.9.8 | py27_1 944 KB defaults
pyside-1.2.1 | py27_1 5.7 MB defaults
seaborn-0.7.1 | py27_0 272 KB defaults
------------------------------------------------------------
Total: 107.8 MB
The following NEW packages will be INSTALLED:
atom: 0.3.10-py27_0 defaults
babel: 2.3.3-py27_0 defaults
backports: 1.0-py27_0 defaults
backports.ssl-match-hostname: 3.5.0.1-py27_0 getpantheon
backports_abc: 0.4-py27_0 defaults
beautifulsoup4: 4.4.1-py27_0 defaults
biopython: 1.67-np111py27_0 defaults
bitarray: 0.8.1-py27_0 defaults
bokeh: 0.12.1-py27_0 defaults
brewer2mpl: 1.4.1-py27_1 conda-forge
bzip2: 1.0.6-3 defaults
cairo: 1.12.18-6 defaults
certifi: 2016.2.28-py27_0 defaults
cffi: 1.6.0-py27_0 defaults
cliff: 1.10.1-py27_0 gus
clusterpy: 0.9.9-py27_1 conda-forge
cmd2: 0.6.7-py27_0 auto
crypto: 1.1.0-py27_0 auto
cryptography: 1.4-py27_0 defaults
curl: 7.49.0-1 defaults
cycler: 0.10.0-py27_0 defaults
cython: 0.24.1-py27_0 defaults
decorator: 4.0.10-py27_0 defaults
distribute: 0.6.45-py27_1 defaults
enaml: 0.9.8-py27_1 defaults
enum34: 1.1.6-py27_0 defaults
et_xmlfile: 1.0.1-py27_0 defaults
fontconfig: 2.11.1-6 defaults
freetype: 2.5.5-1 defaults
functools32: 3.2.3.2-py27_0 defaults
future: 0.15.2-py27_0 defaults
futures: 3.0.5-py27_0 defaults
geos: 3.5.0-0 defaults
get_terminal_size: 1.0.0-py27_0 defaults
ggplot: 0.11.1-py27_1 conda-forge
glib: 2.43.0-2 r
h5py: 2.6.0-np111py27_2 defaults
harfbuzz: 0.9.35-6 r
hdf5: 1.8.17-1 defaults
idna: 2.1-py27_0 defaults
ipaddress: 1.0.16-py27_0 defaults
ipython: 5.1.0-py27_0 defaults
ipython_genutils: 0.1.0-py27_0 defaults
iso8601: 0.1.11-py27_0 defaults
jaro_winkler: 1.0.2-py27_0 auto
jdcal: 1.2-py27_1 defaults
jinja2: 2.8-py27_1 defaults
jsonpatch: 1.3-py27_0 auto
jsonpointer: 1.2-py27_0 auto
jsonschema: 2.5.1-py27_0 defaults
kiwisolver: 0.1.3-py27_0 defaults
libffi: 3.2.1-0 defaults
libgcc: 4.8.5-1 r
libgfortran: 3.0.0-1 defaults
libpng: 1.6.22-0 defaults
libsodium: 1.0.10-0 defaults
libxml2: 2.9.2-0 defaults
markupsafe: 0.23-py27_2 defaults
matplotlib: 1.5.1-np111py27_0 defaults
mdp: 3.5-py27_0 defaults
mgrs: 1.1.0-py27_0 auto
mkl: 11.3.3-0 defaults
mpmath: 0.19-py27_1 defaults
msgpack-python: 0.4.7-py27_0 defaults
netaddr: 0.7.18-py27_0 conda-forge
netifaces: 0.10.4-py27_0 conda-forge
networkx: 1.11-py27_0 defaults
nltk: 3.2.1-py27_0 defaults
nose_parameterized: 0.5.0-py27_0 conda-forge
numexpr: 2.6.1-np111py27_0 defaults
numpy: 1.11.1-py27_0 defaults
openpyxl: 2.3.2-py27_0 defaults
openssl: 1.0.2h-1 defaults
oslo.config: 1.9.3-py27_0 gus
oslo.i18n: 1.5.0-py27_0 gus
oslo.serialization: 1.4.0-py27_0 gus
oslo.utils: 1.4.0-py27_0 gus
pandas: 0.18.1-np111py27_0 defaults
pango: 1.36.8-3 r
parsedatetime: 1.2-py27_0 auto
path.py: 8.2.1-py27_0 defaults
pathlib2: 2.1.0-py27_0 defaults
patsy: 0.4.1-py27_0 defaults
pbr: 0.11.0-py27_0 defaults
pexpect: 4.0.1-py27_0 defaults
pickleshare: 0.7.3-py27_0 defaults
pip: 8.1.2-py27_0 defaults
pixman: 0.32.6-0 defaults
ply: 3.8-py27_0 defaults
prettytable: 0.7.2-py27_0 conda-forge
prompt_toolkit: 1.0.3-py27_0 defaults
psutil: 4.3.0-py27_0 defaults
ptyprocess: 0.5.1-py27_0 defaults
pyasn1: 0.1.9-py27_0 defaults
pycairo: 1.10.0-py27_0 defaults
pycparser: 2.14-py27_1 defaults
pycurl: 7.43.0-py27_0 defaults
pygments: 2.1.3-py27_0 defaults
pymysql: 0.7.6-py27_0 defaults
pyodbc: 3.0.10-py27_0 defaults
pyopenssl: 16.0.0-py27_0 defaults
pyparsing: 2.1.4-py27_0 defaults
pyqt: 4.11.4-py27_4 defaults
pysal: 1.11.1-py27_0 defaults
pyside: 1.2.1-py27_1 defaults
pytables: 3.2.3.1-np111py27_0 defaults
python: 2.7.12-1 defaults
python-cinderclient: 1.1.2-py27_0 gus
python-dateutil: 2.5.3-py27_0 defaults
python-glanceclient: 0.17.2-py27_0 gus
python-keystoneclient: 1.3.2-py27_0 gus
python-neutronclient: 2.4.0-py27_0 gus
python-swiftclient: 2.7.0-py27_0 chenghlee
pytz: 2016.6.1-py27_0 defaults
pyyaml: 3.11-py27_4 defaults
pyzmq: 15.4.0-py27_0 defaults
qt: 4.8.7-4 defaults
readline: 6.2-2 defaults
requests: 2.10.0-py27_0 defaults
rsa: 3.4.2-py27_0 conda-forge
scikit-learn: 0.17.1-np111py27_2 defaults
scipy: 0.18.0-np111py27_0 defaults
seaborn: 0.7.1-py27_0 defaults
setuptools: 25.1.6-py27_0 defaults
shapely: 1.5.16-py27_0 defaults
shiboken: 1.2.1-py27_0 defaults
simplegeneric: 0.8.1-py27_1 defaults
simplejson: 3.8.2-py27_0 defaults
singledispatch: 3.4.0.3-py27_1 r
sip: 4.18-py27_0 defaults
six: 1.10.0-py27_0 defaults
sqlite: 3.13.0-0 defaults
ssl_match_hostname: 3.4.0.2-py27_1 defaults
statsmodels: 0.6.1-np111py27_1 defaults
stevedore: 1.3.0-py27_0 gus
sympy: 1.0-py27_0 defaults
system: 5.8-2 defaults
tabulate: 0.7.2-py27_0 auto
tk: 8.5.18-0 defaults
tornado: 4.4.1-py27_0 defaults
traitlets: 4.2.2-py27_0 defaults
unixodbc: 2.3.4-0 defaults
virtualenv: 13.0.1-py27_0 defaults
warlock: 1.3.0-py27_0 conda-forge
wcwidth: 0.1.7-py27_0 defaults
wheel: 0.29.0-py27_0 defaults
wsgiref: 0.1.2-py27_0 auto
xlrd: 1.0.0-py27_0 defaults
yaml: 0.1.6-0 defaults
zeromq: 4.1.4-0 defaults
zlib: 1.2.8-3 defaults
Source cache directory is: /opt/app/anaconda2/conda-bld/src_cache
Found source in cache: Python-2.7.10.tgz
Extracting download
BUILD START: python-2.7.10-0
python is installed as a build dependency. Removing.
An unexpected error has occurred, please consider sending the
following traceback to the conda GitHub issue tracker at:
https://github.com/conda/conda-build/issues
Include the output of the command 'conda info' in your report.
Traceback (most recent call last):
File "/opt/app/anaconda2/bin/conda-build", line 5, in <module>
sys.exit(main())
File "/opt/app/anaconda2/lib/python2.7/site-packages/conda_build/main_build.py", line 152, in main
args_func(args, p)
File "/opt/app/anaconda2/lib/python2.7/site-packages/conda_build/main_build.py", line 415, in args_func
args.func(args, p)
File "/opt/app/anaconda2/lib/python2.7/site-packages/conda_build/main_build.py", line 358, in execute
debug=args.debug)
File "/opt/app/anaconda2/lib/python2.7/site-packages/conda_build/build.py", line 561, in build
assert not plan.nothing_to_do(actions), actions
AssertionError: defaultdict(<type 'list'>, {'op_order': ('RM_FETCHED', 'FETCH', 'RM_EXTRACTED', 'EXTRACT', 'UNLINK', 'LINK', 'SYMLINK_CONDA'), 'PREFIX': '/opt/app/anaconda2/envs/_build_placehold_placehold_placehold_placehold_placehold'})
</code></pre>
| 2 | 2016-08-16T03:11:39Z | 39,031,222 | <p>Unless there's a specific reason you need to compile python yourself, I think what you're actually going after is <code>conda bundle</code> (<a href="http://conda.pydata.org/docs/commands/conda-bundle.html" rel="nofollow">http://conda.pydata.org/docs/commands/conda-bundle.html</a>). Unfortunately we've removed it in conda 4.2 which will be coming out soon, intending to move it to conda-build. Since that hasn't happened yet, and if it ends up actually being useful to people, we can add it back.</p>
<hr>
<p>You could also try this using conda-build...</p>
<p>Remove the whole <code>source</code> block in your <code>meta.yaml</code> file. Also remove all of the build requirements that are not also run requirements. Then, in your <code>build.sh</code> file:</p>
<pre><code>conda install --yes --quiet \
python=2.7.10 \
ipython=5.0.0 \
numpy=1.11.1 \
cython=0.24.1 \
scipy=0.18.0 \
pandas=0.18.1 \
patsy=0.4.1 \
statsmodels=0.6.1 \
matplotlib=1.5.2 \
ggplot=0.9.4 \
scikit-learn=0.17.1 \
distribute=0.6.45 \
backports.ssl-match-hostname=3.5.0.1 \
certifi=14.05.14 \
nose_parameterized=0.5.0 \
pyparsing=2.1.4 \
python-dateutil=2.5.3 \
pytz=2016.6.1 \
pyzmq=15.3.0 \
simplejson=3.3.3 \
six=1.10.0 \
sympy=1.0 \
tornado=4.4.1 \
virtualenv=13.0.1 \
wsgiref=0.1.2 \
python-swiftclient=2.7.0 \
python-cinderclient=1.1.2 \
python-glanceclient=0.17.2 \
python-neutronclient=2.4.0 \
networkx=1.11 \
pysal=1.11.1 \
pyyaml=3.11 \
shapely=1.5.13 \
beautifulsoup4=4.4.1 \
nltk=3.2.1 \
requests=2.10.0 \
seaborn=0.5.0 \
h5py=2.6.0 \
xlrd=1.0.0 \
markupsafe=0.23 \
crypto=1.1.0 \
jinja2=2.8 \
openpyxl=2.3.2 \
jaro_winkler=1.0.2 \
bokeh=0.12.1 \
numexpr=2.6.1 \
pytables=3.2.3.1 \
pycurl=7.43.0 \
mgrs=1.1.0 \
psutil=4.3.0 \
biopython=1.67 \
enaml=0.9.8 \
mdp=3.5 \
bitarray=0.8.1 \
clusterpy=0.9.9 \
pyside=1.2.1 \
pyqt=4.11.4 \
parsedatetime=1.4 \
pymysql=0.6.7 \
pyodbc=3.0.10 \
tabulate=0.7.2
</code></pre>
<p>The big difference: by listing all of those packages as build requirements, you're actually ensuring that they <em>won't</em> be in your final conda package. Think of build requirements more like a compiler, or something that's necessary when you're building the package, but not when you're actually running it.</p>
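<p>As a concrete illustration of that split (the package names here are hypothetical), a minimal recipe might look like the sketch below — only the <code>run</code> entries end up recorded as dependencies of the built package:</p>

```yaml
package:
  name: mybundle        # hypothetical metapackage name
  version: "1.0"

requirements:
  build:
    - python 2.7.10     # available while building; NOT a dependency of the result
  run:
    - python 2.7.10     # recorded as a runtime dependency of the built package
    - numpy 1.11.1
```

<p>This is why moving the long package list out of <code>build:</code> (into <code>build.sh</code> or <code>run:</code>) changes what the final package depends on.</p>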
| 0 | 2016-08-19T04:39:15Z | [
"python",
"linux",
"python-2.7",
"anaconda",
"continuum"
] |
Batching for a non-image data set with Tensorflow | 38,966,084 | <p>I am a beginner in TensorFlow. I have a data set with 43 inputs and one output, and I want to create mini-batches of the data for deep learning.</p>
<p>Here are my inputs:</p>
<pre><code>x = tf.placeholder(tf.float32, shape=[None, 43])
y_ = tf.placeholder(tf.float32, shape=[None])
</code></pre>
<p>I am feeding them from a MATLAB file, like this:</p>
<pre><code>train_mat = train_mat["binary_train"].value
feed_dict={x:Train[0:100,0:43] , y_:Train[0:100,43]}
</code></pre>
<p>I want random batches instead of taking records 0:100. I saw</p>
<pre><code>tf.train.batch
</code></pre>
<p>but I could not figure out how it works. Could you please guide me on how to do that?</p>
<p>Thanks,
Afshin</p>
| 0 | 2016-08-16T03:12:40Z | 38,966,355 | <p>The <code>tf.train.batch</code> and other similar methods are based on queues, which are best suited to loading huge numbers of samples asynchronously and in parallel. The document <a href="https://www.tensorflow.org/versions/r0.10/how_tos/threading_and_queues/index.html#threading-and-queues" rel="nofollow">here</a> describes the basics of using queues in TensorFlow. There is also another document describing <a href="https://www.tensorflow.org/versions/r0.10/how_tos/reading_data/index.html" rel="nofollow">how to read data from files</a>.</p>
<p>If you are going to use queues, the <code>placeholder</code> and <code>feed_dict</code> are unnecessary.</p>
<p>For your specific case, a potential solution may look like this:</p>
<pre><code>import tensorflow as tf
from tensorflow.python.training import queue_runner

# capacity and min_after_dequeue could be set according to your case;
# each element is one row of 44 floats (43 inputs + 1 label)
q = tf.RandomShuffleQueue(1000, 500, tf.float32, shapes=[[44]])
enq = q.enqueue_many(train_mat)
queue_runner.add_queue_runner(queue_runner.QueueRunner(q, [enq]))
deq = q.dequeue()      # one randomly chosen row, shape [44]
inputs = deq[0:43]
label = deq[43]
x, y_ = tf.train.batch([inputs, label], 100)
# then you can use x and y_ directly in inference and train process.
</code></pre>
<p>The code above is based on some assumptions, because the information provided in the question is not sufficient. However, I hope it can inspire you in some way.</p>
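<p>If you would rather keep the <code>placeholder</code>/<code>feed_dict</code> style from the question, random mini-batches can also be drawn outside TensorFlow. A minimal sketch (the helper name <code>minibatch</code> and the 43-inputs-plus-label row layout are assumptions taken from the question):</p>

```python
import random

def minibatch(data, batch_size, rng=None):
    """Draw a random mini-batch from `data`.

    Each row of `data` is assumed to hold 43 input features followed
    by a single label, matching the layout in the question.
    """
    rng = rng or random.Random()
    rows = rng.sample(data, batch_size)   # sample rows without replacement
    inputs = [row[:43] for row in rows]   # feature columns 0..42
    labels = [row[43] for row in rows]    # label column 43
    return inputs, labels
```

<p>Each training step would then feed one batch, e.g. <code>sess.run(train_step, feed_dict={x: inputs, y_: labels})</code>.</p>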
| 0 | 2016-08-16T03:47:33Z | [
"python",
"random",
"tensorflow",
"batching"
] |
Is there a Pythonic way of calling many members of a single object? | 38,966,411 | <p>I'm trying to rewrite this Pythonically:</p>
<pre><code>if not x.v1():
    if not x.v2():
        if not x.v3():
            return 'validated'
return 'invalid'
</code></pre>
<p>Note in particular that if <code>x.v1()</code> does not pass, <code>x.v2()</code> and <code>x.v3()</code> are not even run.</p>
<p>This is the best idea I have so far:</p>
<pre><code>import operator

for method in ['v1', 'v2', 'v3']:
    if operator.methodcaller(method)(x):
        return 'invalid'
return 'validated'
</code></pre>
<p>but I feel like there's definitely a more Pythonic way.</p>
<p><strong>Clarification:</strong> The number of functions is large and possibly even changeable at runtime, so though <code>x.v1() or x.v2() or x.v3()</code> certainly looks nicer, it's not possible and the array is indeed necessary. (That said, if there's a better way than writing it with all these strings, tell me about it!) Sorry about the unclearness.</p>
| 0 | 2016-08-16T03:54:00Z | 38,966,441 | <p>Just invert the logic.</p>
<pre><code>if x.v1() or x.v2() or x.v3():
    return 'invalid'
return 'validated'
</code></pre>
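<p>This also preserves the behaviour the question asks for: <code>or</code> short-circuits, so once one check returns a truthy value the remaining methods are never called. A small self-contained demonstration (the <code>Checks</code> class is a hypothetical stand-in for <code>x</code>):</p>

```python
class Checks:
    """Hypothetical stand-in for x that records which checks ran."""
    def __init__(self):
        self.called = []

    def v1(self):
        self.called.append('v1')
        return True      # truthy -> invalid; later checks must not run

    def v2(self):
        self.called.append('v2')
        return False

    def v3(self):
        self.called.append('v3')
        return False

x = Checks()
result = 'invalid' if x.v1() or x.v2() or x.v3() else 'validated'
# result == 'invalid', and only v1 was ever evaluated
```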
| 5 | 2016-08-16T03:58:32Z | [
"python",
"list"
] |
Is there a Pythonic way of calling many members of a single object? | 38,966,411 | <p>I'm trying to rewrite this Pythonically:</p>
<pre><code>if not x.v1():
    if not x.v2():
        if not x.v3():
            return 'validated'
return 'invalid'
</code></pre>
<p>Note in particular that if <code>x.v1()</code> does not pass, <code>x.v2()</code> and <code>x.v3()</code> are not even run.</p>
<p>This is the best idea I have so far:</p>
<pre><code>import operator

for method in ['v1', 'v2', 'v3']:
    if operator.methodcaller(method)(x):
        return 'invalid'
return 'validated'
</code></pre>
<p>but I feel like there's definitely a more Pythonic way.</p>
<p><strong>Clarification:</strong> The number of functions is large and possibly even changeable at runtime, so though <code>x.v1() or x.v2() or x.v3()</code> certainly looks nicer, it's not possible and the array is indeed necessary. (That said, if there's a better way than writing it with all these strings, tell me about it!) Sorry about the unclearness.</p>
| 0 | 2016-08-16T03:54:00Z | 38,966,520 | <blockquote>
<p>Note in particular that <strong>if x.v1() does not pass</strong>, x.v2() and x.v3() are
not even run.</p>
</blockquote>
<p>I think you mean "if <strong>not x.v1()</strong> does not pass, x.v2() and x.v3() are not even run." Which is the same as "If x.v1() passes, x.v2() and x.v3() are not even run" ... and 'invalid' is returned. Since Python short-circuits expression evaluation, the following is a Pythonic way of rewriting the code in the OP with the stated intent of short-circuit evaluation:</p>
<pre><code>if x.v1() or x.v2() or x.v3():
    return 'invalid'
else:
    return 'validated'
</code></pre>
| 0 | 2016-08-16T04:08:45Z | [
"python",
"list"
] |
Is there a Pythonic way of calling many members of a single object? | 38,966,411 | <p>I'm trying to rewrite this Pythonically:</p>
<pre><code>if not x.v1():
    if not x.v2():
        if not x.v3():
            return 'validated'
return 'invalid'
</code></pre>
<p>Note in particular that if <code>x.v1()</code> does not pass, <code>x.v2()</code> and <code>x.v3()</code> are not even run.</p>
<p>This is the best idea I have so far:</p>
<pre><code>import operator

for method in ['v1', 'v2', 'v3']:
    if operator.methodcaller(method)(x):
        return 'invalid'
return 'validated'
</code></pre>
<p>but I feel like there's definitely a more Pythonic way.</p>
<p><strong>Clarification:</strong> The number of functions is large and possibly even changeable at runtime, so though <code>x.v1() or x.v2() or x.v3()</code> certainly looks nicer, it's not possible and the array is indeed necessary. (That said, if there's a better way than writing it with all these strings, tell me about it!) Sorry about the unclearness.</p>
| 0 | 2016-08-16T03:54:00Z | 38,966,638 | <p>If you are thinking about supporting an arbitrary number of functions, it's worth rolling your own versions of <code>any</code> and <code>all</code> for a list of methods:</p>
<pre><code>def any_fn(*funcs):
    return any(f() for f in funcs)

def all_fn(*funcs):
    return all(f() for f in funcs)
</code></pre>
<p>Then you can say:</p>
<pre><code>if any_fn(x.v1, x.v2, x.v3):
    return 'invalid'
else:
    return 'valid'
</code></pre>
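<p>Since the clarification says the set of checks can change at runtime, the method list does not have to be hard-coded: bound methods can be looked up by name with <code>getattr</code>. A sketch (the <code>X</code> class and its method names are illustrative):</p>

```python
def any_fn(*funcs):
    return any(f() for f in funcs)   # short-circuits on the first truthy result

class X:
    """Illustrative object whose validation methods are discovered at runtime."""
    def v1(self):
        return False

    def v2(self):
        return True   # this check flags the object as invalid

    def v3(self):
        return False

x = X()
method_names = ['v1', 'v2', 'v3']    # may be built or extended at runtime
checks = [getattr(x, name) for name in method_names]
result = 'invalid' if any_fn(*checks) else 'validated'
```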
| 1 | 2016-08-16T04:25:47Z | [
"python",
"list"
] |