title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Modifying python colormaps to single value beyond a specific point | 39,024,331 | <p>How do I change a colormap color scheme to show the same color beyond a point?</p>
<p>E.g. here's my colormap:</p>
<pre><code>import palettable
cmap = palettable.colorbrewer.sequential.YlGn_9.mpl_colormap
</code></pre>
<p>If I use this colormap to plot a range from 0 to 100, how can I modify the color map such that beyond 50, it changes to the color red?</p>
| 2 | 2016-08-18T17:35:54Z | 39,025,686 | <p><code>cmap.set_over("red")</code></p>
<p>And you may want to use one of the <a href="http://matplotlib.org/api/colors_api.html" rel="nofollow">norm functions</a> to set your specific bounds. If using imshow, you can also set the parameter vmax=50 to make that your top value.</p>
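<p>A minimal sketch of that combination (using matplotlib's built-in <code>YlGn</code> map as a stand-in for the palettable one, so the map name here is illustrative):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs without a display
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, Normalize

# Copy the base colors so we do not mutate a shared, registered colormap
cmap = ListedColormap(plt.cm.YlGn(np.linspace(0, 1, 256)))
cmap.set_over("red")  # color used for values above the norm's vmax

data = np.linspace(0, 100, 100).reshape(10, 10)
norm = Normalize(vmin=0, vmax=50)  # everything above 50 is "over"
im = plt.imshow(data, cmap=cmap, norm=norm)
plt.colorbar(im, extend="max")  # show the over-color arrow on the bar
```

<p>(With <code>imshow</code> alone, passing <code>vmax=50</code> has the same effect as the explicit <code>Normalize</code>.)</p>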
| 2 | 2016-08-18T19:01:00Z | [
"python",
"matplotlib",
"seaborn"
] |
Modifying python colormaps to single value beyond a specific point | 39,024,331 | <p>How do I change a colormap color scheme to show the same color beyond a point?</p>
<p>E.g. here's my colormap:</p>
<pre><code>import palettable
cmap = palettable.colorbrewer.sequential.YlGn_9.mpl_colormap
</code></pre>
<p>If I use this colormap to plot a range from 0 to 100, how can I modify the color map such that beyond 50, it changes to the color red?</p>
| 2 | 2016-08-18T17:35:54Z | 39,124,326 | <p>You can access the colors with:</p>
<pre><code>cmap_dict = cmap._segmentdata
</code></pre>
<p>which yields a dictionary. By indexing it with:</p>
<pre><code>red = cmap_dict["red"]
green = cmap_dict["green"]
blue = cmap_dict["blue"]
alpha = cmap_dict["alpha"]
</code></pre>
<p>Now you can add a color from the list like this: </p>
<pre><code>red.append(red[1])
</code></pre>
<p>recombine them into a dictionary with the 4 keys like:</p>
<pre><code>cmap_dict_new["red"] = red
</code></pre>
<p>and create a new colormap with:</p>
<pre><code>new_cmap = palettable.palette.ListedColormap(cmap_dict_new)
</code></pre>
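<p>A note of caution: matplotlib's <code>ListedColormap</code> expects a list of colors rather than a segment-data dictionary, so a runnable variant of this idea (a sketch using matplotlib's built-in <code>YlGn</code> directly instead of palettable; the midpoint and the red color are just illustrative choices) looks more like:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Sample the base map into an Nx4 RGBA array, then overwrite the top half
colors = plt.cm.YlGn(np.linspace(0, 1, 256))
colors[128:] = (1.0, 0.0, 0.0, 1.0)  # everything past the midpoint becomes red
new_cmap = ListedColormap(colors)
```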
| 1 | 2016-08-24T13:12:06Z | [
"python",
"matplotlib",
"seaborn"
] |
Modifying python colormaps to single value beyond a specific point | 39,024,331 | <p>How do I change a colormap color scheme to show the same color beyond a point?</p>
<p>E.g. here's my colormap:</p>
<pre><code>import palettable
cmap = palettable.colorbrewer.sequential.YlGn_9.mpl_colormap
</code></pre>
<p>If I use this colormap to plot a range from 0 to 100, how can I modify the color map such that beyond 50, it changes to the color red?</p>
| 2 | 2016-08-18T17:35:54Z | 39,154,679 | <p>You could create the colormap for the given range (0–100) by stacking two different colormaps on top of each other as shown:</p>
<p><strong>Illustration:</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import palettable
import matplotlib.colors as mcolors
# Set random seed
np.random.seed(42)
# Create random values of shape 10x10
data = np.random.rand(10,10) * 100
# Given colormap which takes values from 0–50
colors1 = palettable.colorbrewer.sequential.YlGn_9.mpl_colormap(np.linspace(0, 1, 256))
# Red colormap which takes values from 50–100
colors2 = plt.cm.Reds(np.linspace(0, 1, 256))
# stacking the 2 arrays row-wise
colors = np.vstack((colors1, colors2))
# generating a smoothly-varying LinearSegmentedColormap
cmap = mcolors.LinearSegmentedColormap.from_list('colormap', colors)
plt.pcolor(data, cmap=cmap)
plt.colorbar()
# setting the lower and upper limits of the colorbar
plt.clim(0, 100)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/568SD.png" rel="nofollow"><img src="http://i.stack.imgur.com/568SD.png" alt="Image1"></a></p>
<p>In case you want the upper portion to be of the same color and not spread over the length of the colormap, you could make the following modification:</p>
<pre><code>colors2 = plt.cm.Reds(np.linspace(1, 1, 256))
</code></pre>
<p><a href="http://i.stack.imgur.com/edQh2.png" rel="nofollow"><img src="http://i.stack.imgur.com/edQh2.png" alt="Image2"></a></p>
| 2 | 2016-08-25T21:02:17Z | [
"python",
"matplotlib",
"seaborn"
] |
Modifying python colormaps to single value beyond a specific point | 39,024,331 | <p>How do I change a colormap color scheme to show the same color beyond a point?</p>
<p>E.g. here's my colormap:</p>
<pre><code>import palettable
cmap = palettable.colorbrewer.sequential.YlGn_9.mpl_colormap
</code></pre>
<p>If I use this colormap to plot a range from 0 to 100, how can I modify the color map such that beyond 50, it changes to the color red?</p>
| 2 | 2016-08-18T17:35:54Z | 39,184,812 | <p>You can create a new colormap from an existing colormap using:</p>
<p><code>newcmap = cmap.from_list('newcmap',list(map(cmap,range(50))), N=50)</code></p>
<p>This new map uses the last value from the colormap for colors over 50. To make the last color red, we can just append red to the end of the list that defines the colormap.</p>
<p><code>newcmap = cmap.from_list('newcmap',list(map(cmap,range(50)))+[(1,0,0,1)], N=51)</code></p>
<pre><code>import palettable
from matplotlib import pyplot as plt
cmap = palettable.colorbrewer.sequential.YlGn_9.mpl_colormap
newcmap = cmap.from_list('newcmap',list(map(cmap,range(50))), N=50)
for x in range(80):
plt.bar(x,1, width=1, edgecolor='none',facecolor=newcmap(x))
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/DJ2aW.png" rel="nofollow"><img src="http://i.stack.imgur.com/DJ2aW.png" alt="Plot uses last color over 50"></a></p>
<pre><code>newcmap = cmap.from_list('newcmap',list(map(cmap,range(50)))+[(1,0,0,1)], N=51)
for x in range(80):
plt.bar(x,1, width=1, edgecolor='none',facecolor=newcmap(x))
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/Tbp0Y.png" rel="nofollow"><img src="http://i.stack.imgur.com/Tbp0Y.png" alt="Plot uses red for values over 50"></a></p>
| 1 | 2016-08-27T19:15:14Z | [
"python",
"matplotlib",
"seaborn"
] |
Modifying python colormaps to single value beyond a specific point | 39,024,331 | <p>How do I change a colormap color scheme to show the same color beyond a point?</p>
<p>E.g. here's my colormap:</p>
<pre><code>import palettable
cmap = palettable.colorbrewer.sequential.YlGn_9.mpl_colormap
</code></pre>
<p>If I use this colormap to plot a range from 0 to 100, how can I modify the color map such that beyond 50, it changes to the color red?</p>
| 2 | 2016-08-18T17:35:54Z | 39,185,270 | <p>I don't think you should change the colormap, but rather the <em>object</em> using the colormap. I asked a similar question not so long ago: <a href="http://stackoverflow.com/questions/38918306/change-color-for-first-level-of-contourf">change color for first level of contourf</a>, and I took the answer from here: <a href="http://stackoverflow.com/questions/11386054/python-matplotlib-change-default-color-for-values-exceeding-colorbar-range">Python matplotlib change default color for values exceeding colorbar range</a></p>
<p>If you use contours in your plot for example, you should do something like this:</p>
<pre><code>cs = pyplot.contourf(x,y,z, cmap=your_cmap)
cs.cmap.set_over('r') # Change color to red
cs.set_clim(0, 50) # Set the limit beyond which everything is red
cb = pyplot.colorbar(cs) # Plot the colorbar (if needed)
</code></pre>
| 1 | 2016-08-27T20:05:29Z | [
"python",
"matplotlib",
"seaborn"
] |
Python mechanize navigation using __doPostBack functions | 39,024,386 | <p>How can I use mechanize to navigate through a table on a web-page if the table uses __doPostBack functions? </p>
<p>My code is:</p>
<pre><code>import mechanize
br = mechanize.Browser()
br.set_handle_robots(False)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
br.open("http://www.gfsc.gg/The-Commission/Pages/Regulated-Entities.aspx?auto_click=1")
page_num = 2
for link in br.links():
if link.text == str(page_num):
br.open(link) #I suspect this is not correct
break
for link in br.links():
print link.text, link.url
</code></pre>
<p>A search of all the controls in the table (e.g. drop-down menus) does not show the page buttons but a search for all the links in the table does. The page button does not contain a URL so it is not a typical link. I get TypeError: expected string or buffer.</p>
<p>I get the impression that this is something which can be done using mechanize. </p>
<p>Thanks for reading.</p>
| 0 | 2016-08-18T17:39:52Z | 39,297,042 | <p>Mechanize can be used to navigate a table which uses __doPostBack. I used BeautifulSoup to parse the HTML for the required parameters and followed useful <a href="http://stackoverflow.com/questions/39254333/python-re-escape-coincidental-parentheses-in-regex-pattern/" title="guidance with the regex">guidance with the regex</a>. My code is written below.</p>
<pre><code>import mechanize
import re # write a regex to get the parameters expected by __doPostBack
from bs4 import BeautifulSoup
from time import sleep
br = mechanize.Browser()
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
response = br.open("http://www.gfsc.gg/The-Commission/Pages/Regulated-Entities.aspx?auto_click=1")
# satisfy the __doPostBack function to navigate to different pages
for pg in range(2,5):
br.select_form(nr=0) # the only form on the page
br.set_all_readonly(False) # to set the __doPostBack parameters
# BeautifulSoup for parsing
soup = BeautifulSoup(response, 'lxml')
table = soup.find('table', {'class': 'RegulatedEntities'})
records = table.find_all('tr', {'style': ["background-color:#E4E3E3;border-style:None;", "border-style:None;"]})
for rec in records[:1]:
print 'Company name:', rec.a.string
# disable 'Search' and 'Clear filters'
for control in br.form.controls[:]:
if control.type in ['submit', 'image', 'checkbox']:
control.disabled = True
# get parameters for the __doPostBack function
for link in soup("a"):
        if link.string == str(pg):
next = re.search("""<a href="javascript:__doPostBack\('(.*?)','(.*?)'\)">""", str(link))
br["__EVENTTARGET"] = next.group(1)
br["__EVENTARGUMENT"] = next.group(2)
sleep(1)
response = br.submit()
</code></pre>
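<p>The key step is the regex that pulls the two <code>__doPostBack</code> arguments out of the pager link. A self-contained illustration (the markup below is hypothetical, modelled on typical ASP.NET pager links rather than copied from the site):</p>

```python
import re

# Hypothetical ASP.NET pager link, similar in shape to the real ones
html = """<a href="javascript:__doPostBack('ctl00$grid','Page$2')">2</a>"""

match = re.search(r"__doPostBack\('(.*?)','(.*?)'\)", html)
event_target = match.group(1)    # goes into the __EVENTTARGET form field
event_argument = match.group(2)  # goes into the __EVENTARGUMENT form field
print(event_target, event_argument)
```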
| 0 | 2016-09-02T16:30:01Z | [
"python",
"python-2.7",
"mechanize",
"dopostback"
] |
How do I log into Google through python requests? | 39,024,406 | <p>I'm making an API using Python requests, and HTTP GET is working fine, but I'm having a little bit of trouble with HTTP POST. So, as with a lot of websites, you can read information, but in order to make a post request (such as following a user, or writing a post), you need to have an authenticated session. THIS website, uses google to log in. Normally, I would just pass the username:password into the POST request formdata to log in, but this google thing is pretty wonky (and quite frankly I'm not that experienced). Does anyone have a reference or an example to help me out? ;/</p>
| -1 | 2016-08-18T17:41:04Z | 39,027,207 | <p>I do not know about python requests, but to send an email it's as easy as this:</p>
<pre><code>import yagmail
yagmail.SMTP(emailh).send(email, subject, body)
#emailh = your email (just username no @gmail.com)
#email = send to (full email including domain ***@gmail.com or ***@outlook.com)
#subject = subject of the message
#body = body of the message
</code></pre>
<p>Even better </p>
<pre><code>emailh = raw_input('Your email: ')
email = raw_input('Send to: ')
subject = raw_input('Subject: ')
body = raw_input('Body: ')
yagmail.SMTP(emailh).send(email, subject, body)
print('Email Sent.')
</code></pre>
<p>If this is what you are talking about anyway.</p>
<p>This page might be useful <a href="http://stackoverflow.com/questions/6754709/logging-in-to-google-using-python">link</a></p>
| -1 | 2016-08-18T20:46:28Z | [
"python",
"http",
"post",
"get",
"python-requests"
] |
modprobe: FATAL: Module nvidia-uvm not found in directory /lib/modules/ | 39,024,542 | <p>I had an issue recently after successfully installing and testing Tensorflow compiled with GPU support.</p>
<p>After rebooting the machine, I got the following error Message when I tried to run a Tensorflow program:</p>
<pre><code>...
('Extracting', 'MNIST_data/t10k-labels-idx1-ubyte.gz')
modprobe: FATAL: Module nvidia-uvm not found in directory /lib/modules/4.4.0-34-generic
E tensorflow/stream_executor/cuda/cuda_driver.cc:491] failed call to cuInit: CUDA_ERROR_UNKNOWN
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:140] kernel driver does not appear to be running on this host (caffe-desktop): /proc/driver/nvidia/version does not exist
I tensorflow/core/common_runtime/gpu/gpu_init.cc:92] No GPU devices available on machine.
(0, 114710.45)
(1, 95368.891)
...
(98, 56776.922)
(99, 57289.672)
</code></pre>
<p><a href="http://i.stack.imgur.com/PgQo3.png" rel="nofollow">Screencapture of error</a></p>
<p>Code: <a href="https://github.com/llSourcell/autoencoder_demo" rel="nofollow">https://github.com/llSourcell/autoencoder_demo</a></p>
<p>Question: Why would restarting a Ubuntu 16.04 machine break Tensorflow?</p>
| 0 | 2016-08-18T17:48:21Z | 39,024,591 | <p>I actually solved my own problem and wanted to share the solution which worked for me.</p>
<p>The magic Google search was:
"modprobe: FATAL: Module nvidia-uvm not found in directory /lib/modules/"</p>
<p>Which led me to the following answer on askubuntu:
<a href="http://askubuntu.com/a/496146">http://askubuntu.com/a/496146</a></p>
<p>That answer's author, <em>Sneetsher</em>, did a really good job of explaining, so if the link doesn't 404 I would start there.</p>
<p><strong>Cliff Notes</strong></p>
<p>Diagnosis: I suspected that Ubuntu may have installed a kernel update when I rebooted.</p>
<p>Solution: Reinstalling the NVIDIA driver fixed the error.</p>
<p>Problem: NVIDIA drivers cannot be installed with X server running</p>
<p><strong>Two different ways to fix the NVIDIA Driver</strong></p>
<p><strong>1) Keyboard and Monitor:</strong></p>
<p><em>Paraphrasing the askubuntu answer:</em></p>
<blockquote>
<p>1) Switch to text-only console (Ctrl+Alt+F1 or any to F6).</p>
<p>2) Build driver modules for the current kernel (which was just installed) <code>sudo ./<DRIVER>.run -K</code></p>
</blockquote>
<p>credit "Sneetsher" : <a href="http://askubuntu.com/a/496146">http://askubuntu.com/a/496146</a></p>
<p><em>I don't have a keyboard or monitor attached to this PC so here's the "headless" approach I actually used:</em></p>
<p><strong>2) Over SSH:</strong> </p>
<p>Following this guide to reboot to console:</p>
<p><a href="http://ubuntuhandbook.org/index.php/2014/01/boot-into-text-console-ubuntu-linux-14-04/" rel="nofollow">http://ubuntuhandbook.org/index.php/2014/01/boot-into-text-console-ubuntu-linux-14-04/</a></p>
<pre><code>$ sudo cp -n /etc/default/grub /etc/default/grub.orig
$ sudo nano /etc/default/grub
$ sudo update-grub
</code></pre>
<p>edit the grub file according to above link(3 changes):</p>
<blockquote>
<ol>
<li>Comment the line GRUB_CMDLINE_LINUX_DEFAULT="quiet splash", by adding # at the beginning, which will disable the Ubuntu purple screen.</li>
<li>Change GRUB_CMDLINE_LINUX="" to GRUB_CMDLINE_LINUX="text", this makes Ubuntu boot directly into Text Mode.</li>
<li>Uncomment this line #GRUB_TERMINAL=console, by removing the # at the beginning, this makes Grub Menu into real black & white Text Mode (without background image)</li>
</ol>
</blockquote>
<pre><code>$ sudo shutdown -r now
$ sudo service lightdm stop
$ sudo ./<DRIVER>.run
</code></pre>
<p>follow the NVIDIA driver installer</p>
<pre><code>$ sudo mv /etc/default/grub /etc/default/grub.textonly
$ sudo mv /etc/default/grub.orig /etc/default/grub
$ sudo update-grub
$ sudo shutdown -r now
</code></pre>
<p><strong>Results</strong> <em>(What things look like now the GPU was successfully detected)</em></p>
<pre><code>...
('Extracting', 'MNIST_data/t10k-labels-idx1-ubyte.gz')
I tensorflow/core/common_runtime/gpu/gpu_init.cc:118] Found device 0 with properties:
name: GeForce GTX 970
major: 5 minor: 2 memoryClockRate (GHz) 1.342
pciBusID 0000:01:00.0
Total memory: 3.94GiB
Free memory: 3.88GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:138] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:148] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:868] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 970, pci bus id: 0000:01:00.0)
(0, 113040.92)
(1, 94895.867)
...
</code></pre>
<p><a href="http://i.stack.imgur.com/bl3IT.png" rel="nofollow">Screencapture of the same</a></p>
| 1 | 2016-08-18T17:50:43Z | [
"python",
"tensorflow",
"gpu",
"nvidia"
] |
Error 401 and API V1.1 Python | 39,024,600 | <p>Good morning,
I am trying to download the people who are tweeting certain words in an area with this python code:</p>
<pre><code>import sys
import tweepy
consumer_key="LMhbj3fywfKPNgjaPhOwQuFTY"
consumer_secret=" LqMw9x9MTkYxc5oXKpfzvfbgF9vx3bleQHroih8wsMrIUX13nd"
access_key="3128235413-OVL6wctnsx1SWMYAGa5vVZwDH5ul539w1kaQTyx"
access_secret="fONdTRrD65ENIGK5m9ntpH48ixvyP2hfcJRqxJmdO78wC"
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)
class CustomStreamListener(tweepy.StreamListener):
def on_status(self, status):
if 'busco casa' in status.text.lower():
print (status.text)
def on_error(self, status_code):
print (sys.stderr, 'Encountered error with status code:', status_code)
return True # Don't kill the stream
def on_timeout(self):
print (sys.stderr, 'Timeout...')
return True # Don't kill the stream
sapi = tweepy.streaming.Stream(auth, CustomStreamListener())
sapi.filter(locations=[-78.37,-0.20,-78.48,-0.18])
</code></pre>
<p>I am getting this error:
Encountered error with status code: 401</p>
<p>I read in this link <a href="https://dev.twitter.com/overview/api/response-codes" rel="nofollow">https://dev.twitter.com/overview/api/response-codes</a> that the error is caused by:
Authentication credentials were missing or incorrect.
Also returned in other circumstances, for example, all calls to API v1 endpoints now return 401 (use API v1.1 instead).</p>
<p>The authentication is there with updated keys. How should I use API v1.1?</p>
<p>Thanks,
Anita</p>
| 0 | 2016-08-18T17:51:14Z | 39,038,396 | <p>If it says that your credentials are incorrect, you might want to check your credentials: you need to remove the whitespace in your consumer secret for your code to work.</p>
<p>Also, I just tested your credentials (without whitespaces) and they are working. I can do whatever I want on behalf of your application. I suggest you very quickly go to <a href="https://apps.twitter.com" rel="nofollow">https://apps.twitter.com</a> and generate new ones. Never share your credentials. Especially online where everyone can see them.</p>
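<p>A small defensive check along these lines (my own suggestion, not part of tweepy) catches the stray-whitespace problem before any request is made:</p>

```python
def clean_credential(value):
    """Strip stray whitespace from an API key; Twitter keys never contain spaces."""
    cleaned = value.strip()
    if any(ch.isspace() for ch in cleaned):
        raise ValueError("credential contains embedded whitespace")
    return cleaned

# e.g. the consumer secret in the question had a leading space
secret = clean_credential(" not-a-real-secret ")
```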
| 0 | 2016-08-19T11:50:16Z | [
"python",
"api",
"authentication",
"tweepy"
] |
Long tail distribution of random numbers in Python | 39,024,785 | <p>I need to make a randomizing function in Python returning values using a long-tail distribution. Unfortunately, my math skills are nowhere near my programming skills so I'm stuck.</p>
<p>This is the kind of distribution I'm looking for:
<img src="http://www.danvk.org/wp/wp-content/uploads/2007/02/scores.png" alt="">.</p>
<p>Returned value must be between 0 and 1, and it must be possible to assign a peak value (where the graph peaks on the Y axis), which would be a number between 0 and 1.</p>
<p>Example usage:</p>
<pre><code>def random_long_tail(peak):
#magic
value = random_long_tail(0.2)
print(value) #outputs i.e. 0.345811242
</code></pre>
<p>I will be incredibly grateful for any help in solving this issue. Thank you!</p>
| 0 | 2016-08-18T18:04:41Z | 39,024,861 | <p>There are quite a few distributions with single peak value and some tail, <a href="https://en.wikipedia.org/wiki/Log-normal_distribution" rel="nofollow">log-normal</a>, <a href="https://en.wikipedia.org/wiki/Gamma_distribution" rel="nofollow">Gamma</a>, <a href="https://en.wikipedia.org/wiki/Chi-squared_distribution" rel="nofollow">Chi<sup>2</sup></a> to name a few.</p>
<p>Typically, one can pick out the <code>numpy</code> random module and see what's available and how the distributions fit into your problem. Link: <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/routines.random.html" rel="nofollow">http://docs.scipy.org/doc/numpy-1.10.1/reference/routines.random.html</a></p>
| 1 | 2016-08-18T18:10:05Z | [
"python",
"random",
"distribution"
] |
Long tail distribution of random numbers in Python | 39,024,785 | <p>I need to make a randomizing function in Python returning values using a long-tail distribution. Unfortunately, my math skills are nowhere near my programming skills so I'm stuck.</p>
<p>This is the kind of distribution I'm looking for:
<img src="http://www.danvk.org/wp/wp-content/uploads/2007/02/scores.png" alt="">.</p>
<p>Returned value must be between 0 and 1, and it must be possible to assign a peak value (where the graph peaks on the Y axis), which would be a number between 0 and 1.</p>
<p>Example usage:</p>
<pre><code>def random_long_tail(peak):
#magic
value = random_long_tail(0.2)
print(value) #outputs i.e. 0.345811242
</code></pre>
<p>I will be incredibly grateful for any help in solving this issue. Thank you!</p>
| 0 | 2016-08-18T18:04:41Z | 39,024,927 | <p>You might look into Numpy's <a href="http://docs.scipy.org/doc/numpy/reference/routines.random.html" rel="nofollow">random sampling</a> options. See which ones are included in the list of <a href="https://en.wikipedia.org/wiki/Heavy-tailed_distribution#Common_heavy-tailed_distributions" rel="nofollow">common heavy-tailed distributions</a>.</p>
<p>The <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.lognormal.html#numpy.random.lognormal" rel="nofollow">log-normal</a> is a nice example. Numpy allows you to specify the mean and standard deviation. You will have to do a bit of algebra to "assign a peak value". This peak is called the "mode".</p>
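<p>To make the algebra concrete: for a log-normal distribution the mode is <code>exp(mu - sigma**2)</code>, so you can solve for <code>mu</code> from the desired peak. Here is one possible sketch (the fixed <code>sigma</code> and the clamp into [0, 1] are my own assumptions, not something numpy prescribes):</p>

```python
import numpy as np

_rng = np.random.default_rng(42)

def random_long_tail(peak, sigma=0.5):
    """Draw one log-normal value whose distribution has its mode at `peak`.

    The clamp to 1.0 is a crude way to respect the 0..1 requirement and
    slightly distorts the far tail; an assumption, not a numpy feature.
    """
    mu = np.log(peak) + sigma ** 2  # mode of a lognormal = exp(mu - sigma**2)
    return min(_rng.lognormal(mean=mu, sigma=sigma), 1.0)

value = random_long_tail(0.2)
```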
| 0 | 2016-08-18T18:15:15Z | [
"python",
"random",
"distribution"
] |
Python Numpy nanmax() returning nan when there are non nan values in the array | 39,024,791 | <p>I tried to use Numpy's nanmax function to get the max of all non-nan values in a matrix's column; for some columns it works, for others it returns nan as the maximum. However, there are non-nan values in every column, and just to be sure I tried the same thing in R with max(x, na.rm = T) and everything is fine there.</p>
<p>Anyone has any ideas of why this occurs? The only thing I can think of is that I converted the numpy matrix from a pandas frame but I really have no clue...</p>
<pre><code>np.nanmax(datamatrix, axis=0)
matrix([[1, 101, 193, 1, 163.0, 10.6, nan, 4.7, 142.0, 0.47, 595.0,
170.0, 5.73, 24.0, 27.0, 23.0, 361.0, 33.0, 94.0, 9.2, 16.8, nan,
nan, 91.0, nan, nan, nan, nan, 0.0, 105.0, nan, nan, nan, nan,nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan]], dtype=object)
</code></pre>
| 0 | 2016-08-18T18:05:04Z | 39,025,455 | <p>Your array is an <code>object</code> array, meaning the elements in the array are arbitrary python objects. Pandas uses object arrays, so it is likely that when you converted your Pandas DataFrame to a numpy array, the result was an object array. <code>nanmax()</code> doesn't handle object arrays correctly.</p>
<p>Here are a couple examples, one using a <code>numpy.matrix</code> and one a <code>numpy.ndarray</code>. With a <code>matrix</code>, you get no warning at all the something went wrong:</p>
<pre><code>In [1]: import numpy as np
In [2]: m = np.matrix([[2.0, np.nan, np.nan]], dtype=object)
In [3]: np.nanmax(m)
Out[3]: nan
</code></pre>
<p>With an array, you get a cryptic warning, but <code>nan</code> is still returned:</p>
<pre><code>In [4]: a = np.array([[2.0, np.nan, np.nan]], dtype=object)
In [5]: np.nanmax(a)
/Users/warren/miniconda3scipy/lib/python3.5/site-packages/numpy/lib/nanfunctions.py:326: RuntimeWarning: All-NaN slice encountered
warnings.warn("All-NaN slice encountered", RuntimeWarning)
Out[5]: nan
</code></pre>
<p>You can determine if your array is an object array in a few ways. When you display the array in an interactive python or ipython shell, you'll see <code>dtype=object</code>. Or you can check <code>a.dtype</code>; if <code>a</code> is an object array, you'll see either <code>dtype('O')</code> or <code>object</code> (depending on whether you end up seeing the <code>str()</code> or <code>repr()</code> of the dtype).</p>
<p>Assuming all the values in the array are, in fact, floating point values, a way to work around this is to first convert from the object array to an array of floating point values:</p>
<pre><code>In [6]: b = a.astype(np.float64)
In [7]: b
Out[7]: array([[ 2., nan, nan]])
In [8]: np.nanmax(b)
Out[8]: 2.0
In [9]: n = m.astype(np.float64)
In [10]: np.nanmax(n)
Out[10]: 2.0
</code></pre>
| 1 | 2016-08-18T18:47:04Z | [
"python",
"numpy"
] |
Unloading audio samples in kivy | 39,024,813 | <p>Audio unload does not seem to actually release memory, at least as far as top on linux is concerned. Repeatedly loading and unloading causes memory usage to creep upward. Did I miss something obvious?</p>
<pre><code>from __future__ import print_function
import kivy
kivy.require('1.9.0') # replace with your current kivy version !
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.core.audio import SoundLoader
class SoundTest(BoxLayout):
def __init__(self, **kwargs):
super(SoundTest, self).__init__(**kwargs)
button = Button(text='Play sound')
self.add_widget(button)
button.bind(on_release=self.PlaySound)
def PlaySound(self, widget):
snd = SoundLoader.load('test.ogg')
snd.bind(on_stop=self.UnloadSoundWhenDone)
snd.play()
print ("play", snd)
def UnloadSoundWhenDone(self, snd):
print ("unload", snd)
snd.unload()
class MyApp(App):
def build(self):
return SoundTest()
if __name__ == '__main__':
MyApp().run()
</code></pre>
<p>Update: this seems to be using the kivy.core.audio.audio_gstplayer.SoundGstplayer backend.</p>
| 0 | 2016-08-18T18:06:30Z | 39,025,461 | <pre><code>del an_obj
</code></pre>
<p>Try to delete the actual object, maybe? I am not exactly sure what you are doing.</p>
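<p>In CPython, memory is reclaimed when the last reference disappears, so <code>del</code> only helps if nothing else still points at the sound. A quick way to see this (a generic CPython sketch, not kivy-specific) is:</p>

```python
import gc
import sys

snd = object()                      # stand-in for a loaded Sound instance
refs_before = sys.getrefcount(snd)  # includes one temporary ref for the call itself
del snd                             # drop our reference; CPython frees it immediately
gc.collect()                        # only needed when reference cycles are involved
```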
| -1 | 2016-08-18T18:47:24Z | [
"python",
"audio",
"kivy"
] |
Unloading audio samples in kivy | 39,024,813 | <p>Audio unload does not seem to actually release memory, at least as far as top on linux is concerned. Repeatedly loading and unloading causes memory usage to creep upward. Did I miss something obvious?</p>
<pre><code>from __future__ import print_function
import kivy
kivy.require('1.9.0') # replace with your current kivy version !
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.core.audio import SoundLoader
class SoundTest(BoxLayout):
def __init__(self, **kwargs):
super(SoundTest, self).__init__(**kwargs)
button = Button(text='Play sound')
self.add_widget(button)
button.bind(on_release=self.PlaySound)
def PlaySound(self, widget):
snd = SoundLoader.load('test.ogg')
snd.bind(on_stop=self.UnloadSoundWhenDone)
snd.play()
print ("play", snd)
def UnloadSoundWhenDone(self, snd):
print ("unload", snd)
snd.unload()
class MyApp(App):
def build(self):
return SoundTest()
if __name__ == '__main__':
MyApp().run()
</code></pre>
<p>Update: this seems to be using the kivy.core.audio.audio_gstplayer.SoundGstplayer backend.</p>
| 0 | 2016-08-18T18:06:30Z | 39,041,429 | <p>Actually, the memory does not appear to creep upward indefinitely, so while SoundGstplayer seems to allocate much more memory for itself than the SDL2 backend does, this is probably as intended.</p>
| 0 | 2016-08-19T14:23:13Z | [
"python",
"audio",
"kivy"
] |
How to Create a port scanner TCP SYN using the method (TCP SYN )? | 39,024,816 | <pre><code>#####################################
# Portscan TCP #
# #
#####################################
# -*- coding: utf-8 -*-
#!/usr/bin/python3
import socket
ip = input("Digite o IP ou endereco: ")
ports = []
count = 0
while count < 10:
ports.append(int(input("Digite a porta: ")))
count += 1
for port in ports:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.settimeout(0.05)
code = client.connect_ex((ip, port)) #conecta e traz a msg de erro
#Like connect(address), but return an error indicator instead of raising an exception for errors
if code == 0: #0 = Success
print (str(port) + " -> Porta aberta")
else:
print (str(port) + " -> Porta fechada")
print ("Scan Finalizado")
</code></pre>
<p>The Python script above is a TCP connect scan. How can I change it into a TCP SYN scan? How can I create a port scanner that uses the TCP SYN method?</p>
| -2 | 2016-08-18T18:06:42Z | 39,091,110 | <p>First, you will have to generate your own SYN packets using RAW sockets. You can find an example <a href="http://www.binarytides.com/raw-socket-programming-in-python-linux/" rel="nofollow">here</a></p>
<p>Second, you will need to listen for SYN-ACKs from the scanned host in order to determine which ports actually try to start the TCP Handshake (SYN,SYN-ACK,ACK). You should be able to detect and parse the TCP header from the applications that respond. From that header you can determine the origin port and thus figure out a listening application was there.</p>
<p>Also, if you implement this, you will have basically made a SYN DDoS utility, because you will be creating a ton of half-open TCP connections.</p>
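<p>For reference, the 20-byte TCP header with only the SYN flag set can be laid out with the standard <code>struct</code> module. This is just the header-packing step (field values like the window size are illustrative, and the checksum is left at zero; a real packet also needs the IP pseudo-header checksum and root privileges to send):</p>

```python
import struct

def build_syn_header(src_port, dst_port, seq=0):
    """Pack a minimal 20-byte TCP header with only the SYN flag set."""
    offset_and_flags = (5 << 12) | 0x02  # data offset = 5 words, SYN bit set
    return struct.pack(
        "!HHLLHHHH",
        src_port,          # source port
        dst_port,          # destination port
        seq,               # sequence number
        0,                 # acknowledgement number (unused for a bare SYN)
        offset_and_flags,  # data offset + reserved bits + flags
        64240,             # advertised window (illustrative value)
        0,                 # checksum, to be filled in over the pseudo-header
        0,                 # urgent pointer
    )

header = build_syn_header(54321, 80)
```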
| 3 | 2016-08-23T01:46:50Z | [
"python",
"python-3.x"
] |
How to Create a port scanner TCP SYN using the method (TCP SYN )? | 39,024,816 | <pre><code>#####################################
# Portscan TCP #
# #
#####################################
# -*- coding: utf-8 -*-
#!/usr/bin/python3
import socket
ip = input("Digite o IP ou endereco: ")
ports = []
count = 0
while count < 10:
ports.append(int(input("Digite a porta: ")))
count += 1
for port in ports:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.settimeout(0.05)
code = client.connect_ex((ip, port)) #conecta e traz a msg de erro
#Like connect(address), but return an error indicator instead of raising an exception for errors
if code == 0: #0 = Success
print (str(port) + " -> Porta aberta")
else:
print (str(port) + " -> Porta fechada")
print ("Scan Finalizado")
</code></pre>
<p>The Python script above is a TCP connect scan. How can I change it into a TCP SYN scan? How can I create a port scanner that uses the TCP SYN method?</p>
| -2 | 2016-08-18T18:06:42Z | 39,094,232 | <p>As @Upsampled mentioned, you might use raw sockets (<a href="https://en.wikipedia.org/" rel="nofollow">https://en.wikipedia.org/</a>) as you only need a subset of the TCP protocol (send <strong>SYN</strong> and receive <strong>RST-ACK</strong> or <strong>SYN-ACK</strong>).</p>
<p>As coding something like <a href="http://www.binarytides.com/raw-socket-programming-in-python-linux/" rel="nofollow">http://www.binarytides.com/raw-socket-programming-in-python-linux/</a> could be a good exercise, I would also suggest considering <a href="https://github.com/secdev/scapy" rel="nofollow">https://github.com/secdev/scapy</a></p>
<blockquote>
<p>Scapy is a powerful Python-based interactive packet manipulation
program and library.</p>
</blockquote>
<p>Here's the code sample that already implements a simple port scanner
<a href="http://pastebin.com/YCR3vp9B" rel="nofollow">http://pastebin.com/YCR3vp9B</a> and a detailed article on what it does:
<a href="http://null-byte.wonderhowto.com/how-to/build-stealth-port-scanner-with-scapy-and-python-0164779/" rel="nofollow">http://null-byte.wonderhowto.com/how-to/build-stealth-port-scanner-with-scapy-and-python-0164779/</a></p>
<p>The code is a little bit ugly but it works; I've checked it from my local Ubuntu PC against my VPS.
Here's the most important code snippet (slightly adjusted to conform to PEP8):</p>
<pre><code># (assumes `from scapy.all import IP, TCP, sr1, RandShort` and
# SYNACK = 0x12, the SYN+ACK flag combination, from the full script)
# Generate a random source port number
srcport = RandShort()
# Send SYN and receive RST-ACK or SYN-ACK
SYNACKpkt = sr1(IP(dst=target) /
TCP(sport=srcport, dport=port, flags="S"))
# Extract flags of received packet
pktflags = SYNACKpkt.getlayer(TCP).flags
if pktflags == SYNACK:
# port is open
pass
else:
# port is not open
# ...
pass
</code></pre>
| 3 | 2016-08-23T06:53:47Z | [
"python",
"python-3.x"
] |
Ignore string data that does not match a certain format when calculating "min" with Pandas | 39,024,852 | <p>I have a DataFrame column 'datetime' with the values in this format:</p>
<p><code>'2016-08-01 13:43:35'</code></p>
<p>I would like to find the min and max values. The problem is that some of the rows are missing time values so they look like this:</p>
<p><code>'2016-07-29 '</code></p>
<p>How can I exclude the rows with missing data when calculating the min and max?</p>
<p>Here is how I'm finding the min value:</p>
<pre><code>min_ = df['datetime'].min()
</code></pre>
<p>The minimum value that I'm trying to find is the earliest date/time combination where both are included. So for example, from my data:</p>
<p>'7/29/2016 11:02:38' would be the desired value.</p>
| 0 | 2016-08-18T18:09:41Z | 39,024,985 | <p>You can convert values that have a specific format to datetime, and the remaining ones will be NaT. If you take the minimum on the resulting series, it will ignore NaTs.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'datetime': ['2016-08-01 13:43:35', '2016-06-01 13:43:35',
                                '2013-08-01 13:43:35', '2016-07-29 ']})
df
Out:
datetime
0 2016-08-01 13:43:35
1 2016-06-01 13:43:35
2 2013-08-01 13:43:35
3 2016-07-29
pd.to_datetime(df['datetime'], format='%Y-%m-%d %H:%M:%S', errors='coerce')
Out:
0 2016-08-01 13:43:35
1 2016-06-01 13:43:35
2 2013-08-01 13:43:35
3 NaT
Name: datetime, dtype: datetime64[ns]
pd.to_datetime(df['datetime'], format='%Y-%m-%d %H:%M:%S', errors='coerce').min()
Out: Timestamp('2013-08-01 13:43:35')
</code></pre>
| 1 | 2016-08-18T18:19:12Z | [
"python",
"pandas"
] |
Ignore string data that does not match a certain format when calculating "min" with Pandas | 39,024,852 | <p>I have a DataFrame column 'datetime' with the values in this format:</p>
<p><code>'2016-08-01 13:43:35'</code></p>
<p>I would like to find the min and max values. The problem is that some of the rows are missing time values so they look like this:</p>
<p><code>'2016-07-29 '</code></p>
<p>How can I exclude the rows with missing data when calculating the min and max?</p>
<p>Here is how I'm finding the min value:</p>
<pre><code>min_ = df['datetime'].min()
</code></pre>
<p>The minimum value that I'm trying to find is the earliest date/time combination where both are included. So for example, from my data:</p>
<p>'7/29/2016 11:02:38' would be the desired value.</p>
| 0 | 2016-08-18T18:09:41Z | 39,025,146 | <p>Since your date strings have a decreasing order of magnitude (i.e. year --> month --> ...), there is actually no need to convert to datetime objects.</p>
<p>Also, since your date strings should all be fixed-width, all you really need to do is drop the rows with missing values and then compare the date strings directly.</p>
<pre><code>df = pd.DataFrame({'datetime': ['2016-08-01 13:43:35', '2016-06-01 13:43:35', '2013-08-01 13:43:35', '2016-07-29 ']})
min_dt = df.datetime[df.datetime.str.len() == 19].min()
print min_dt
# 2013-08-01 13:43:35
max_dt = df.datetime[df.datetime.str.len() == 19].max()
print max_dt
# 2016-08-01 13:43:35
</code></pre>
<p>[EDIT] Since the topic of running time came up in the comments, I did some %timeit testing and found that keeping the date strings (instead of using <code>to_datetime</code>) is about 20x faster. But both methods are acceptably fast for 1M rows.</p>
<pre><code>print data[0:4] # Data list of 1M date strings.
# >>> ['01/01/2015 00:00:00', '01/01/2015 00:05:00', '01/01/2015 00:10:00', '01/01/2015 00:15:00']
print len(data)
# >>> 1047870
df = pd.DataFrame({'datetime': data})
df2 = pd.DataFrame({'datetime': data})
%timeit -n5 d=pd.to_datetime(df['datetime'], format='%m/%d/%Y %H:%M:%S', errors='coerce').min()
# >>> 5 loops, best of 3: 5 s per loop
%timeit -n5 df2[df2['datetime'].str.len() == 19].min()
# >>> 5 loops, best of 3: 232 ms per loop
</code></pre>
| 0 | 2016-08-18T18:28:24Z | [
"python",
"pandas"
] |
SQLalchemy trying to connect to postgresql using oop | 39,024,897 | <p>I'm trying to teach myself a bit of OOP and as a test i'm trying to create a class that will connect to an existing postgresql database i've created.</p>
<p>I can connect to the database fine using sqlalchemy if I use this code </p>
<pre><code>engine = create_engine('postgresql://user@localhost/dbname')
conn = engine.connect()
result = conn.execute(sql)
for row in result:
print(row)
</code></pre>
<p>However, as I mentioned i'm new to OOP so trying to figure out how to replicate this in a class format. The following code gives the error <code>AttributeError: 'NoneType' object has no attribute 'execute'</code>. I imagine there are many errors and best practices i'm missing out on so some kind of guidance would be much appreciated.</p>
<pre><code>from sqlalchemy import create_engine
class dbConnect(object):
db_connection = None
db_cur = None
def __init__(self):
self.db_connection = create_engine('postgresql://user@localhost/dbname')
self.cur = self.db_connection.connect()
def query(self, query):
test = self.db_cur.execute(query)
return test
sql = """
SELECT
id
FROM
t
WHERE
id = 14070
"""
x = dbConnect()
result = x.query(sql)
for row in result:
print(row)
</code></pre>
| 0 | 2016-08-18T18:12:57Z | 39,025,629 | <p>I always use Pajlada's <a href="https://github.com/pajlada/pajbot/blob/master/pajbot/managers/db.py" rel="nofollow">Database Manager Singleton Class</a>. It's pretty easy to get going. All you need to do is initialize the class with your URL.</p>
<pre><code>DBManager.init('psqlurlhere')
</code></pre>
<p>and then whenever you want to create a session there are multiple ways to do it:</p>
<p>Create a session that ends automatically when the block finishes:</p>
<pre><code>with DBManager.create_session_scope() as session:
session.add(Model)
session.commit()
</code></pre>
<p>Create a session that you will reuse throughout your program:</p>
<pre><code>session = DBManager.create_session(expire_on_commit=False)
session.add(Model)
session.commit()
</code></pre>
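<p>If you do not want to pull in the whole DBManager class, the core of it is the well-known "session scope" pattern, which can be sketched in a few lines. The <code>FakeSession</code> below is only a stand-in so the life cycle is visible without a database; with SQLAlchemy you would pass a real <code>sessionmaker</code> as the factory:</p>

```python
from contextlib import contextmanager

@contextmanager
def session_scope(session_factory):
    """Provide a transactional scope around a series of operations."""
    session = session_factory()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

# Stand-in session so the life cycle is visible without a database:
class FakeSession:
    def __init__(self):
        self.events = []
    def commit(self):
        self.events.append('commit')
    def rollback(self):
        self.events.append('rollback')
    def close(self):
        self.events.append('close')

s = FakeSession()
with session_scope(lambda: s) as session:
    pass                      # session.add(...) / queries would go here
print(s.events)               # ['commit', 'close']
```

<p>On an exception inside the block the same manager rolls back and still closes the session, which is exactly what the DBManager's <code>create_session_scope</code> gives you.</p>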
| 0 | 2016-08-18T18:58:07Z | [
"python",
"postgresql",
"oop",
"sqlalchemy"
] |
How to write out to compressed file with pyscopg2.copy_to or copy_expert? | 39,025,204 | <p>All of the examples I've seen have used <code>psql</code> with something like <code>COPY COMMAND | gzip > 'filename'</code>. I'd prefer to use a solution with <code>psycopg2</code> if possible, and I was thinking that it might be nice to write it out to a string buffer type object which processes the data and then writes out a compressed <code>gzip</code> file.</p>
<p>How can I do this?</p>
| -1 | 2016-08-18T18:31:22Z | 39,025,336 | <p>The documentation for psycopg says that <a href="http://initd.org/psycopg/docs/cursor.html#cursor.copy_to" rel="nofollow"><code>copy_to</code></a> accepts any file-like object. Thus you could simply use the <a href="https://docs.python.org/3/library/gzip.html" rel="nofollow"><code>gzip.open</code></a> to open a writable gzip file-like object:</p>
<pre><code>import gzip
with gzip.open('table-data.gz', 'wb') as gzip_file:
cursor.copy_to(gzip_file, 'my_table')
</code></pre>
<p>Alternatively, if you prefer to write text in a certain encoding, on Python 3.3+ you can use mode <code>'wt'</code> and add <code>encoding='UTF-8'</code> or similar to the <code>gzip.open</code> call.</p>
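<p>As a quick stdlib-only sanity check (no database involved), you can verify that <code>gzip.open</code> really does behave as a writable file-like object; here <code>csv.writer</code> stands in for the rows that <code>copy_to</code> would stream:</p>

```python
import csv
import gzip
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'table-data.gz')

# Write: gzip.open returns a writable file-like object.
with gzip.open(path, 'wt', encoding='utf-8', newline='') as gz:
    writer = csv.writer(gz)
    writer.writerow(['id', 'name'])
    writer.writerow([1, 'alpha'])

# Read it back to confirm the compressed contents.
with gzip.open(path, 'rt', encoding='utf-8') as gz:
    content = gz.read()
print(content.splitlines())   # ['id,name', '1,alpha']
```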
| 1 | 2016-08-18T18:39:40Z | [
"python",
"postgresql",
"psycopg2"
] |
Trigger event when Bokeh DataTable Selection | 39,025,264 | <p>Is it possible to trigger a callback event when I select a row (or rows) of a Bokeh <code>DataTable</code>?</p>
<pre><code>def update(rows):
...
dt = DataTable(...)
dt.on_select(update)
</code></pre>
<p>I see that there is an <code>.on_change</code> method that can trigger on a particular property, however I can not find a property that corresponds to the selected rows.</p>
| 0 | 2016-08-18T18:35:05Z | 39,025,584 | <p>I believe that selecting a row of a data table is the same as making a selection a data source. So, if you attach a callback to the datasource powering the table then the callback should work.</p>
<pre><code>source = ColumnDataSource(mpg)
columns = [....]
data_table = DataTable(source=source, columns=columns)
source.on_change('selected', callback)
</code></pre>
| 1 | 2016-08-18T18:54:51Z | [
"python",
"bokeh"
] |
Access partial results of a Celery task | 39,025,270 | <p>I'm not a Python expert, however I'm trying to develop some long-running Celery-based tasks which I'm able to access their partial results instead of waiting for the tasks to finish. </p>
<p>As you can see in the code below, given a multiplier, an initial and final range, the <em>worker</em> creates a list of size <em>final_range - initial_range + 1</em>.</p>
<pre><code>from celery import Celery
app = Celery('trackers', backend='amqp', broker='amqp://')
@app.task
def worker(value, initial_range, final_range):
    if initial_range < final_range:
        list_values = []
        for index in range(initial_range, final_range + 1):
            list_values.append(value * index)
        return list_values
    else:
        return None
</code></pre>
<p>So, instead of waiting for all <em>four workers</em> to finish, I would like to access the to-be-returned values (<em>list_values</em>) before they are actually returned.</p>
<pre><code>from trackers import worker
res_1 = worker.delay(3, 10, 10000000)
res_2 = worker.delay(5, 1, 20000000)
res_3 = worker.delay(7, 20, 50000000)
res_4 = worker.delay(9, 55, 99999999)
</code></pre>
<p>First of all, is it possible?
If so, what sort of changes do I have to perform to make it work?</p>
| 0 | 2016-08-18T18:35:31Z | 39,031,088 | <p>You absolutely need to use external storage such as SQL or Redis/Memcached, because in the common case different tasks can be executed on different servers.</p>
<p>So in your example you should store list_values in some DB and update it during the loop.</p>
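<p>To make the idea concrete, here is a minimal, self-contained sketch of the pattern. A plain dict stands in for Redis/Memcached/SQL, and the helper names are illustrative, not part of Celery's API: the task writes partial snapshots to the external store under its task id, and any caller can read them before the task returns.</p>

```python
# A plain dict stands in for Redis/Memcached/SQL here; `store` and the
# snapshot logic are illustrative, not part of Celery's API.
store = {}

def worker(task_id, value, initial_range, final_range, snapshot_every=3):
    if initial_range >= final_range:
        return None
    list_values = []
    for index in range(initial_range, final_range + 1):
        list_values.append(value * index)
        if len(list_values) % snapshot_every == 0:
            store[task_id] = list(list_values)   # partial result visible now
    store[task_id] = list(list_values)           # final snapshot
    return list_values

result = worker('task-1', 3, 1, 6)
print(store['task-1'])   # [3, 6, 9, 12, 15, 18]
```

<p>With a real backend you would replace the dict with e.g. a Redis client and key the snapshots by <code>self.request.id</code> inside a bound Celery task.</p>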
| 0 | 2016-08-19T04:23:48Z | [
"python",
"celery",
"jobs"
] |
Get length of CSV to show progress | 39,025,315 | <p>I am working with a large number of CSV files, each of which contains a large number of rows. My goal is to take the data line by line and write it to a database using Python. However, because there is a large amount of data, I would like to keep track of how much data has been written. For this I have counted the number of files being queued and keep on adding one every time a file is complete.</p>
<p>I would like to do something similar for the CSV files and show what row I am on, and how many rows there are in total (for example: <code>Currently on row 1 of X</code>). I can easily get the current row by starting at one and then doing something like: <code>currentRow += 1</code>, however I am unsure how to get the total without going through the time-consuming process of reading every line. </p>
<p>Additionally because my CSV files are all stored in zip archives I am currently reading them using the ZipFile module like this:</p>
<pre><code>#The Zip archive and the csv files share the same name
with zipArchive.open(fileName[:-4] + '.csv', 'r') as csvFile:
lines = (line.decode('ascii') for line in csvFile)
currentRow = 1
for row in csv.reader(lines):
print(row)
currentRow += 1
</code></pre>
<p>Any ideas on how I can quickly get a total row count of a CSV file?</p>
| 0 | 2016-08-18T18:38:32Z | 39,025,933 | <p>You can't count the lines in a file without opening it and counting the lines.</p>
<p>If your files are so large that counting lines with <code>row_count = sum(1 for row in file_handle)</code> is not practical, and reading the whole file into memory is a non-starter, a different approach may be needed.</p>
<p>It is quite easy to get the length of a file in bytes (<a href="http://stackoverflow.com/questions/2104080/how-to-check-file-size-in-python">How to check file size in python?</a>). If you then count the length (in bytes) of each line as you read it, you can then report "Currently on byte 13927 of 4972397 (2.8%)"</p>
<p>For files stored in zip, <code>Zipfile.getinfo(name).file_size</code> is the size of the uncompressed file.</p>
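<p>Putting the byte-counting idea together, a small sketch (the helper name is my own) that reports percentage progress while iterating; <code>total_bytes</code> would come from <code>os.path.getsize</code> or <code>ZipInfo.file_size</code>:</p>

```python
import io

def iter_with_progress(fileobj, total_bytes):
    bytes_read = 0
    for line in fileobj:
        bytes_read += len(line)
        yield line, 100.0 * bytes_read / total_bytes

data = b"a,b\n1,2\n3,4\n"   # stand-in for a CSV file inside the zip
pcts = []
for line, pct in iter_with_progress(io.BytesIO(data), len(data)):
    pcts.append(pct)
print(["%.1f%%" % p for p in pcts])   # ['33.3%', '66.7%', '100.0%']
```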
| 1 | 2016-08-18T19:17:10Z | [
"python",
"csv"
] |
Get length of CSV to show progress | 39,025,315 | <p>I am working with a large number of CSV files, each of which contains a large number of rows. My goal is to take the data line by line and write it to a database using Python. However, because there is a large amount of data, I would like to keep track of how much data has been written. For this I have counted the number of files being queued and keep on adding one every time a file is complete.</p>
<p>I would like to do something similar for the CSV files and show what row I am on, and how many rows there are in total (for example: <code>Currently on row 1 of X</code>). I can easily get the current row by starting at one and then doing something like: <code>currentRow += 1</code>, however I am unsure how to get the total without going through the time-consuming process of reading every line. </p>
<p>Additionally because my CSV files are all stored in zip archives I am currently reading them using the ZipFile module like this:</p>
<pre><code>#The Zip archive and the csv files share the same name
with zipArchive.open(fileName[:-4] + '.csv', 'r') as csvFile:
lines = (line.decode('ascii') for line in csvFile)
currentRow = 1
for row in csv.reader(lines):
print(row)
currentRow += 1
</code></pre>
<p>Any ideas on how I can quickly get a total row count of a CSV file?</p>
| 0 | 2016-08-18T18:38:32Z | 39,025,972 | <p>If you just want to show some progress, you could try using <a href="https://pypi.python.org/pypi/tqdm" rel="nofollow">tqdm</a>. </p>
<pre><code>from tqdm import tqdm
with zipArchive.open(fileName[:-4] + '.csv', 'r') as csvFile:
lines = [line.decode('ascii') for line in csvFile]
currentRow = 1
for row in tqdm(csv.reader(lines), total=len(lines)):
print(row)
currentRow += 1
</code></pre>
<p>This should give you a sleek progress bar with virtually no effort on your part.</p>
| 3 | 2016-08-18T19:19:19Z | [
"python",
"csv"
] |
Python+kivy+SQLite: How to set label initial value and how to update label text? | 39,025,320 | <p>everyone,</p>
<p>I want to use <code>kivy+Python</code> to display items from a <code>db</code> file. </p>
<p>To this purpose I have asked a question before: <a href="http://stackoverflow.com/questions/38939416/pythonkivysqlite-how-to-use-them-together">Python+kivy+SQLite: How to use them together</a> </p>
<p>The App in the link contains one screen. It works very well. </p>
<p>Today I have changed the App to <strong>two</strong> screens. The <code>first screen</code> has no special requirement but to lead the App to the <code>second screen</code>. In the <code>second screen</code> there is a <code>label</code> and a <code>button</code>. By clicking the <code>button</code> I want to have the <code>label text</code> changed. The <code>label text</code> refers to the <code>car type</code> from the <code>db</code> file that I have in the link above. </p>
<p>For this two screens design, I have two questions: </p>
<p><strong>Question 1:</strong> How to update the <code>label text</code> when the <code>button</code> is clicked? </p>
<p>I tried with two methods:</p>
<p><strong>Method A:</strong> <code>self.ids.label.text = str(text)</code><br>
It shows me the error message: <code>AttributeError: 'super' object has no attribute '__getattr__'</code> I have googled a lot but still cannot understand. </p>
<p><strong>Method B:</strong> <code>self.ids["label"].text = str(text)</code>
It shows me the error message: <code>KeyError: 'label'</code> I am confused because I have the <code>label</code> defined. </p>
<p><strong>Question 2:</strong> How to set the <code>label</code>'s original text to one of the car types, so that every time the second screen is shown, a car type is already shown. </p>
<p>Here is the code: (For the <code>db file</code> please refer to the link above.)</p>
<pre><code># -*- coding: utf-8 -*-
from kivy.app import App
from kivy.base import runTouchApp
from kivy.lang import Builder
from kivy.properties import ListProperty
from kivy.uix.screenmanager import ScreenManager, Screen, FadeTransition
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.gridlayout import GridLayout
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.label import Label
from kivy.uix.widget import Widget
from kivy.graphics import Rectangle
from kivy.properties import NumericProperty, StringProperty, BooleanProperty, ListProperty
from kivy.base import runTouchApp
from kivy.clock import mainthread
import sqlite3
import random
class MyScreenManager(ScreenManager):
    def __init__(self, **kwargs):
super().__init__(**kwargs)
@mainthread # execute within next frame
def delayed():
self.load_random_car()
delayed()
def load_random_car(self):
conn = sqlite3.connect("C:\\test.db")
cur = conn.cursor()
cur.execute("SELECT * FROM Cars ORDER BY RANDOM() LIMIT 1;")
currentAll = cur.fetchone()
conn.close()
currentAll = list(currentAll) # Change it from tuple to list
print currentAll
current = currentAll[1]
print current
self.ids.label.text = str(current) # Method A
# self.ids["label"].text = str(current) # Method B
class FirstScreen(Screen):
pass
class SecondScreen(Screen):
pass
root_widget = Builder.load_string('''
#:import FadeTransition kivy.uix.screenmanager.FadeTransition
MyScreenManager:
transition: FadeTransition()
FirstScreen:
SecondScreen:
<FirstScreen>:
name: 'first'
BoxLayout:
orientation: 'vertical'
Label:
text: "First Screen"
font_size: 30
Button:
text: 'Go to 2nd Screen'
font_size: 30
on_release: app.root.current = 'second'
<SecondScreen>:
name: 'second'
BoxLayout:
orientation: 'vertical'
Label:
id: label
            text: 'click to get a new car' # I want its text changed every time the button is clicked. Its original text should be a random car type from the db file.
font_size: 30
Button:
text: 'Click to get a random car'
font_size: 30
on_release: app.root.load_random_car()
''')
class ScreenManager(App):
def build(self):
return root_widget
if __name__ == '__main__':
ScreenManager().run()
</code></pre>
<p>I have read a lot on the internet, but I cannot understand it all. :-( </p>
<p>Please help me to correct the code. Thank you so much!</p>
| 0 | 2016-08-18T18:38:44Z | 39,025,438 | <pre><code>from kivy.app import App
from kivy.lang import Builder
from kivy.properties import StringProperty
from kivy.uix.widget import Widget
Builder.load_string("""
<ScreenX>
BoxLayout:
orientation: 'vertical'
Label:
text: root.label_text
Button:
text: 'Click Me'
on_release: root.on_clicked_button()
""")
class ScreenX(Widget):
label_text = StringProperty("Default Value")
def on_clicked_button(self):
self.label_text = "New Text!"
class MyApp(App):
def build(self):
return ScreenX()
MyApp().run()
</code></pre>
<p>is typically how you would do this ...</p>
| 1 | 2016-08-18T18:45:39Z | [
"android",
"python",
"kivy",
"kivy-language"
] |
How can I release memory after using csv.writer in python? | 39,025,431 | <p>I've got 1.6GB available to use in a python process. I'm writing a large csv file which data is coming from a database. The problem is: After the file is written, the memory (>1.5GB) is not released immediately which causes an error in the next bit of code (allocating memory fails because the OS cannot find enough memory to allocate).</p>
<p>Does any function exist which would help me release that memory?
Or, do you have a better way to do it?</p>
<p>This is the script I'm using to write the file; it writes in chunks to deal with the memory issue:</p>
<pre><code>size_to_read = 20000
sqlData = rs_cursor.fetchmany(size_to_read)
c = csv.writer(open(fname_location, "wb"))
c.writerow(headers)
print("- Generating file %s ..." % out_fname)
while sqlData:
for row in sqlData:
c.writerow(row)
sqlData = rs_cursor.fetchmany(size_to_read)
</code></pre>
| 0 | 2016-08-18T18:45:15Z | 39,025,587 | <p>I am thinking the issue is that you never closed the file. Give this a shot.</p>
<pre><code>size_to_read = 20000
sqlData = rs_cursor.fetchmany(size_to_read)
with open(fname_location, "wb") as f:
c = csv.writer(f)
c.writerow(headers)
print("- Generating file %s ..." % out_fname)
while sqlData:
with open(fname_location, "a") as f: # "a" means to append
c = csv.writer(f)
for row in sqlData:
c.writerow(row)
sqlData = rs_cursor.fetchmany(size_to_read)
</code></pre>
<p>By using <code>with</code> you close the file automatically, which releases the memory, and you avoid having to explicitly call <code>f.close()</code>.</p>
<p>Also I believe you can avoid a loop by like so...</p>
<pre><code>while sqlData:
    with open(fname_location, "ab") as f:  # append so earlier chunks are kept
c = csv.writer(f)
c.writerows(sqlData) # .writerows
sqlData = rs_cursor.fetchmany(size_to_read)
</code></pre>
<p>Hard to replicate since I don't have the data :(</p>
<p><strong>EDIT</strong></p>
<p>I know this is not really an answer but check out the package <code>memory_profiler</code> to do a line by line assessment to see where you're using a lot of mem. <a href="https://pypi.python.org/pypi/memory_profiler" rel="nofollow">https://pypi.python.org/pypi/memory_profiler</a></p>
<p><strong>EDIT 2</strong></p>
<p>Here is an example of using a generator to keep your memory usage low.</p>
<pre><code>def results_iter(cursor, n=10000):
while True:
results = cursor.fetchmany(n)
if not results:
break
for result in results:
yield result
with open('file.csv', 'w') as f:
    c = csv.writer(f)
    for result in results_iter(rs_cursor, size_to_read):
        c.writerow(result)
</code></pre>
<p>via <a href="http://code.activestate.com/recipes/137270-use-generators-for-fetching-large-db-record-sets/" rel="nofollow">http://code.activestate.com/recipes/137270-use-generators-for-fetching-large-db-record-sets/</a></p>
<p>If any of this works let us know!</p>
| 1 | 2016-08-18T18:54:53Z | [
"python",
"csv"
] |
No module named owslib.wmts | 39,025,502 | <p>I installed owslib through command line by:</p>
<pre><code>pip install owslib
</code></pre>
<p>and it worked. But for some reason I am getting an error saying: No module named owslib.wmts . What do you think would be causing this? I tried installing Anaconda, I've rebooted a few times, and when I run this script:</p>
<pre><code>import pip
installed_packages = pip.get_installed_distributions()
installed_packages_list = sorted(["%s==%s" % (i.key, i.version)
for i in installed_packages])
print(installed_packages_list)
</code></pre>
<p>it shows me the installed modules like this: </p>
<pre><code>['nose==1.3.7', 'numpy==1.11.1', 'overpy==0.3.1', 'owslib==0.11.2',
'pip==8.1.1', 'pyproj==1.9.5.1', 'python-dateutil==2.5.3',
'pytz==2016.6.1', 'requests==2.11.1', 'setuptools==20.10.1', 'six==1.10.0']
</code></pre>
<p>any ideas?</p>
| 0 | 2016-08-18T18:49:10Z | 39,027,073 | <p>You can try
<code>pip install owslib.wmts</code>.
Otherwise there might be a problem within the module. Look at this website; it has the owslib <a href="https://geopython.github.io/OWSLib/" rel="nofollow">documentation</a>.</p>
| 0 | 2016-08-18T20:36:10Z | [
"python"
] |
No module named owslib.wmts | 39,025,502 | <p>I installed owslib through command line by:</p>
<pre><code>pip install owslib
</code></pre>
<p>and it worked. But for some reason I am getting an error saying: No module named owslib.wmts . What do you think would be causing this? I tried installing Anaconda, I've rebooted a few times, and when I run this script:</p>
<pre><code>import pip
installed_packages = pip.get_installed_distributions()
installed_packages_list = sorted(["%s==%s" % (i.key, i.version)
for i in installed_packages])
print(installed_packages_list)
</code></pre>
<p>it shows me the installed modules like this: </p>
<pre><code>['nose==1.3.7', 'numpy==1.11.1', 'overpy==0.3.1', 'owslib==0.11.2',
'pip==8.1.1', 'pyproj==1.9.5.1', 'python-dateutil==2.5.3',
'pytz==2016.6.1', 'requests==2.11.1', 'setuptools==20.10.1', 'six==1.10.0']
</code></pre>
<p>any ideas?</p>
| 0 | 2016-08-18T18:49:10Z | 39,038,952 | <p>So I am 99% sure this is a problem on my work machine. I have found a workaround by using the Anaconda terminal instead of windows cmd line.</p>
| 0 | 2016-08-19T12:22:02Z | [
"python"
] |
how to call a method and make it run in background in python 3.4? | 39,025,511 | <p>I have implemented the Google Cloud Messaging server in python and I want that method to be Asynchronous. I do not expect any return values from that method. Is there a simple way to do this?
I have tried using <code>async</code> from the <code>asyncio</code> package:</p>
<pre><code>...
loop = asyncio.get_event_loop()
if(module_status=="Fail"):
loop.run_until_complete(sendNotification(module_name, module_status))
...
</code></pre>
<p>and here is my method <code>sendNotification()</code>:</p>
<pre><code>async def sendNotification(module_name, module_status):
gcm = GCM("API_Key")
data ={"message":module_status, "moduleName":module_name}
reg_ids = ["device_tokens"]
response = gcm.json_request(registration_ids=reg_ids, data=data)
print("GCM notification sent!")
</code></pre>
| 1 | 2016-08-18T18:49:40Z | 39,035,031 | <p>You could use a <a href="https://docs.python.org/dev/library/concurrent.futures.html#threadpoolexecutor" rel="nofollow">ThreadPoolExecutor</a>:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor()
...
future = executor.submit(send_notification, module_name, module_status)
</code></pre>
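<p>A self-contained usage sketch, where <code>send_notification</code> is just a stand-in for the GCM call from the question:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def send_notification(module_name, module_status):
    # the blocking GCM request would go here
    return "sent %s (%s)" % (module_name, module_status)

executor = ThreadPoolExecutor(max_workers=2)
future = executor.submit(send_notification, "payments", "Fail")
print(future.result())   # sent payments (Fail)
executor.shutdown()
```

<p>If you do not care about the return value you can simply never call <code>future.result()</code> and let the pool run the task in the background.</p>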
| 1 | 2016-08-19T09:01:15Z | [
"python",
"python-3.x",
"google-cloud-messaging",
"python-asyncio"
] |
how to call a method and make it run in background in python 3.4? | 39,025,511 | <p>I have implemented the Google Cloud Messaging server in python and I want that method to be Asynchronous. I do not expect any return values from that method. Is there a simple way to do this?
I have tried using <code>async</code> from the <code>asyncio</code> package:</p>
<pre><code>...
loop = asyncio.get_event_loop()
if(module_status=="Fail"):
loop.run_until_complete(sendNotification(module_name, module_status))
...
</code></pre>
<p>and here is my method <code>sendNotification()</code>:</p>
<pre><code>async def sendNotification(module_name, module_status):
gcm = GCM("API_Key")
data ={"message":module_status, "moduleName":module_name}
reg_ids = ["device_tokens"]
response = gcm.json_request(registration_ids=reg_ids, data=data)
print("GCM notification sent!")
</code></pre>
| 1 | 2016-08-18T18:49:40Z | 39,037,495 | <p>Since GCM is not async library compatible need to use an external event loop. </p>
<p>There are a few, simplest one IMO is probably <a href="http://www.gevent.org/" rel="nofollow">gevent</a>.</p>
<p>Note that gevent monkey patching may introduce dead locks if the underlying libraries used rely on blocking behaviour to operate.</p>
<pre><code>import gevent
from gevent.greenlet import Greenlet
from gevent import monkey
monkey.patch_all()
def sendNotification(module_name, module_status):
gcm = GCM("API_Key")
data ={"message":module_status, "moduleName":module_name}
reg_ids = ["device_tokens"]
response = gcm.json_request(registration_ids=reg_ids, data=data)
print("GCM notification sent!")
greenlet = Greenlet.spawn(sendNotification,
                          module_name, module_status)
# Yield control to gevent's event loop without blocking
# to allow background tasks to run
gevent.sleep(0)
#
# Other code, other greenlets etc here
#
# Call get to get return value if needed
greenlet.get()
</code></pre>
| 0 | 2016-08-19T11:06:26Z | [
"python",
"python-3.x",
"google-cloud-messaging",
"python-asyncio"
] |
how to call a method and make it run in background in python 3.4? | 39,025,511 | <p>I have implemented the Google Cloud Messaging server in python and I want that method to be Asynchronous. I do not expect any return values from that method. Is there a simple way to do this?
I have tried using <code>async</code> from the <code>asyncio</code> package:</p>
<pre><code>...
loop = asyncio.get_event_loop()
if(module_status=="Fail"):
loop.run_until_complete(sendNotification(module_name, module_status))
...
</code></pre>
<p>and here is my method <code>sendNotification()</code>:</p>
<pre><code>async def sendNotification(module_name, module_status):
gcm = GCM("API_Key")
data ={"message":module_status, "moduleName":module_name}
reg_ids = ["device_tokens"]
response = gcm.json_request(registration_ids=reg_ids, data=data)
print("GCM notification sent!")
</code></pre>
| 1 | 2016-08-18T18:49:40Z | 39,294,376 | <p>You can use asyncio's api: <code>loop.run_in_executor(None, callable)</code></p>
<p>This will run the code using an executor (by default a <a href="https://docs.python.org/dev/library/concurrent.futures.html#threadpoolexecutor" rel="nofollow">ThreadPoolExecutor</a>)</p>
<p>See the <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.run_in_executor" rel="nofollow">documentation</a></p>
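<p>A hedged, self-contained sketch on modern Python (3.7+, using <code>asyncio.run</code>; on 3.4 you would drive it with <code>loop.run_until_complete</code> instead), with <code>send_notification</code> standing in for the blocking <code>gcm.json_request</code> call:</p>

```python
import asyncio

def send_notification(module_name, module_status):
    # the blocking gcm.json_request call would go here
    return "GCM notification sent for %s (%s)" % (module_name, module_status)

async def check_module(module_name, module_status):
    loop = asyncio.get_running_loop()
    if module_status == "Fail":
        # runs in the default ThreadPoolExecutor without blocking the loop
        return await loop.run_in_executor(None, send_notification,
                                          module_name, module_status)

result = asyncio.run(check_module("payments", "Fail"))
print(result)   # GCM notification sent for payments (Fail)
```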
| 0 | 2016-09-02T14:06:39Z | [
"python",
"python-3.x",
"google-cloud-messaging",
"python-asyncio"
] |
Python Regex: dealing with 4 patterns using re.search() and if statement once | 39,025,551 | <p>For example, below are the 4 patterns existing in a list of strings and results to be returned:</p>
<p>Pattern 1: <code>'A: 45'</code> --> 45 (45 - 0)</p>
<p>Pattern 2: <code>'B: 34'</code> --> -34 (0 - 34)</p>
<p>Pattern 3: <code>'A: 45, B: 34'</code> --> 11 (45 - 34)</p>
<p>Pattern 4: <code>'B: 34, A: 45'</code> --> 11 (45 - 34)</p>
<p>Is it possible that using <code>re.search()</code> only once and one <code>if</code> statement to achieve this? If not, are there alternative ways? Many thanks!</p>
<p>The method I came up with is</p>
<pre><code>match = (re.search(r'(A: (\d+))?(B: (\d+))?', str))
if match:
print(float(match.group(2)) - float(match.group(4)))
</code></pre>
<p>which only deals with the first 3 conditions and will throws an error for the the first conditions as either <code>match.group(2)</code> or <code>match.group(4)</code> is <code>NaN</code>.</p>
| 0 | 2016-08-18T18:52:13Z | 39,026,122 | <p>Here you go (with a combination of a regex and tuple unpacking):</p>
<pre><code>import re
strings = ['A: 45', 'B: 34', 'A: 45, B: 34', 'B: 34, A: 45']
rx = re.compile(r'([AB]): (\d+)')
for string in strings:
groups = [m.groups() for m in rx.finditer(string)]
sum = 0
for group in groups:
(letter, value) = group
if letter == 'A':
sum += float(value)
elif letter == 'B':
sum -= float(value)
print(sum)
</code></pre>
<p><hr>
You can have it somewhat shorter with a list comprehension and the ternary operator (look it up :)), like so:</p>
<pre><code>import re
strings = ['A: 45', 'B: 34', 'A: 45, B: 34', 'B: 34, A: 45']
rx = re.compile(r'([AB]): (\d+)')
def calc(groups):
sum = 0
for group in groups:
(letter, value) = group
c = float(value) if letter == 'A' else -float(value)
sum += c
return sum
results = [(string, calc(groups)) \
for string in strings \
for groups in [rx.findall(string)]]
print(results)
# [('A: 45', 45.0), ('B: 34', -34.0), ('A: 45, B: 34', 11.0), ('B: 34, A: 45', 11.0)]
</code></pre>
<p><hr>
See <a href="http://ideone.com/shXnZf" rel="nofollow"><strong>a demo on ideone.com</strong></a>.</p>
| 0 | 2016-08-18T19:29:40Z | [
"python",
"regex"
] |
Swapping in a sorted TreeView | 39,025,554 | <p>Let's say I have following code (the important stuff of a bigger class):</p>
<pre><code> self.store = Gtk.ListStore(int)
for i in range(10):
            self.store.append([i])
sw = Gtk.ScrolledWindow()
sw.set_shadow_type(Gtk.ShadowType.IN)
sw.set_policy(Gtk.PolicyType.AUTOMATIC, Gtk.PolicyType.AUTOMATIC)
self.treeView = Gtk.TreeView(self.store)
self.create_columns(self.treeView)
sw.add(self.treeView)
def create_columns(self, treeView):
rendererText = Gtk.CellRendererText()
column = Gtk.TreeViewColumn("Test", rendererText, text=1)
column.set_spacing(50)
column.set_fixed_width(180)
column.set_sort_column_id(1)
column.set_resizable(True)
treeView.append_column(column)
def move_item_up(self):
selection = self.treeView.get_selection()
selections, model = selection.get_selected_rows()
for row in selections:
if selection.iter_is_selected(row.iter) and row.previous:
self.store.swap(row.iter, row.previous.iter)
break
</code></pre>
<p>After opening the window I am greeted with a column that is populated with values 1-10. If I click on a column, the column is sorted according to the values. If I click once it goes from 1-10; if I click twice it shows 10-1. This is perfect and exactly what I want. </p>
<p>So I sort my array and then I want to execute the function <code>move_item_up</code>. Before sorting this function works as expected, but after sorting I get an error message, such as <code>gtk_list_store_swap: assertion 'iter_is_valid (a, store)' failed</code></p>
<p>Is there some way, to make the array "unsorted" again?</p>
| 0 | 2016-08-18T18:52:28Z | 39,039,330 | <p>Something like this:</p>
<pre><code>store.set_sort_column_id(Gtk.TREE_SORTABLE_UNSORTED_SORT_COLUMN_ID,
                         Gtk.SortType.ASCENDING)
</code></pre>
| 1 | 2016-08-19T12:41:45Z | [
"python",
"sorting",
"treeview",
"gtk",
"pygtk"
] |
Why is python list storing each character in BeautifulSoup tags? | 39,025,597 | <pre><code>from bs4 import BeautifulSoup
from urllib.request import urlopen
def getLinks(pathUrl):
a=[]
html = urlopen(xiny+pathUrl)
soup = BeautifulSoup(html,"html.parser")
nameList = soup.findAll("td",{"class":"name"})
for name in nameList:
aLink = name.find("a").attrs["href"]
print(aLink)
a += aLink
return a
xiny = 'https://learnxinyminutes.com'
links = getLinks("/")
print(links)
</code></pre>
<p>I'm trying to get the relative pathname links using bs4 and store them in a list, but when I do this, it concatenates each individual character of the pathnames instead of just the pathnames. </p>
| 1 | 2016-08-18T18:55:56Z | 39,025,889 | <p>Use <code>a.append(aLink)</code> to add items to your list. Tested, works for me. I would avoid using <code>+=</code> for concatenating a list, it's a bit hacky :)</p>
| 1 | 2016-08-18T19:14:19Z | [
"python",
"beautifulsoup"
] |
Why is python list storing each character in BeautifulSoup tags? | 39,025,597 | <pre><code>from bs4 import BeautifulSoup
from urllib.request import urlopen
def getLinks(pathUrl):
a=[]
html = urlopen(xiny+pathUrl)
soup = BeautifulSoup(html,"html.parser")
nameList = soup.findAll("td",{"class":"name"})
for name in nameList:
aLink = name.find("a").attrs["href"]
print(aLink)
a += aLink
return a
xiny = 'https://learnxinyminutes.com'
links = getLinks("/")
print(links)
</code></pre>
<p>I'm trying to get the relative pathname links using bs4 and store them in a list, but when I do this, it concatenates each individual character of the pathnames instead of just the pathnames. </p>
| 1 | 2016-08-18T18:55:56Z | 39,026,034 | <p><code>+=</code> is the same as <code>list.extend</code> which takes an <em>iterable</em> and extends the list with the contents of the iterable, what you want is to <em>append:</em></p>
<pre><code>In [45]: l = []
In [46]: s = "foobar"
In [47]: l.append(s)
In [48]: l
Out[48]: ['foobar']
In [49]: l += s
In [50]: l
Out[50]: ['foobar', 'f', 'o', 'o', 'b', 'a', 'r']
In [51]: l.extend(s)
In [52]: l
Out[52]: ['foobar', 'f', 'o', 'o', 'b', 'a', 'r', 'f', 'o', 'o', 'b', 'a', 'r']
</code></pre>
<p>Using <em>+=</em> or <em>.extend</em> is logically equivalent to doing:</p>
<pre><code>In [53]: l = []
In [54]: for ele in s:
....: l.append(ele)
....:
In [55]: l
Out[55]: ['f', 'o', 'o', 'b', 'a', 'r']
</code></pre>
<p>So that is why you see each char as an individual element in the list.</p>
| 3 | 2016-08-18T19:23:11Z | [
"python",
"beautifulsoup"
] |
Python - kill hung process started using subprocess.Popen | 39,025,635 | <p>I've started a subprocess using:</p>
<pre><code>proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
output = proc.communicate()[0]
</code></pre>
<p>Sometimes the command <code>cmd</code> hangs so my Python script also hangs at this point. </p>
<p>I'd like to let this run for a time (10 seconds?) and if I don't get a response, then simply kill the process and continue on with my script.</p>
<p><strong>How can I do this?</strong></p>
| 2 | 2016-08-18T18:58:19Z | 39,025,698 | <p>From <a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow">subprocess documentation</a> <code>proc.terminate()</code> is what you 're looking for</p>
| 0 | 2016-08-18T19:01:26Z | [
"python",
"subprocess",
"popen",
"kill",
"hang"
] |
Python - kill hung process started using subprocess.Popen | 39,025,635 | <p>I've started a subprocess using:</p>
<pre><code>proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
output = proc.communicate()[0]
</code></pre>
<p>Sometimes the command <code>cmd</code> hangs so my Python script also hangs at this point. </p>
<p>I'd like to let this run for a time (10 seconds?) and if I don't get a response, then simply kill the process and continue on with my script.</p>
<p><strong>How can I do this?</strong></p>
| 2 | 2016-08-18T18:58:19Z | 39,026,168 | <p>If you're using python 3, <code>Popen.communicate</code> has a timeout kwarg:</p>
<pre><code>proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
output = proc.communicate(timeout=10)[0]
</code></pre>
| 2 | 2016-08-18T19:32:29Z | [
"python",
"subprocess",
"popen",
"kill",
"hang"
] |
Tkinter changing image live after a given time | 39,025,637 | <p>I currently have a listbox with the paths of dozens of images, when an element in the listbox is selected the image will be displayed in the middle of the gui.</p>
<p>My third image has 2 different looks to it so i wrote: </p>
<pre><code> #Third image
elif (self.index==3):
start = clock()
self.label.configure(image=self.photo2[1])#first image
if (start>2):
self.label.configure(image=self.photo2[2])#second image
</code></pre>
<p>For now when the user clicks the third element it will first display the first image, then if reclicked after two seconds it will display the second image, so that works.</p>
<p>However, what I want is for my third image to automatically change after two seconds without reclicking. Is there a way to update an image live in tkinter or any ideas on what approach I could take?</p>
| 0 | 2016-08-18T18:58:24Z | 39,025,874 | <p>This is a common pattern. You should use <code>Tk.after</code> to run a function that changes the image then schedules the next change.</p>
<pre><code>def change_image(label, imagelist, nextindex):
label.configure(image=imagelist[nextindex])
root.after(2000, lambda: change_image(label, imagelist, (nextindex+1) % len(imagelist)))
</code></pre>
<p>Then call it once and let it do its thing forever.</p>
<pre><code>root = Tk()
setup_your_stuff()
change_image(root.label, root.photo2, 0)
</code></pre>
| 2 | 2016-08-18T19:12:49Z | [
"python",
"tkinter"
] |
How to combine two DStreams(pyspark)? | 39,025,640 | <p>I have a kafka stream coming in with some input topic.
This is the code I wrote for accepting the Kafka stream. </p>
<pre><code>conf = SparkConf().setAppName(appname)
sc = SparkContext(conf=conf)
ssc = StreamingContext(sc)
kvs = KafkaUtils.createDirectStream(ssc, topics,\
{"metadata.broker.list": brokers})
</code></pre>
<p>Then I create two DStreams of the keys and values of the original stream. </p>
<pre><code>keys = kvs.map(lambda x: x[0].split(" "))
values = kvs.map(lambda x: x[1].split(" "))
</code></pre>
<p>Then I perform some computation in the values DStream.
For Example,</p>
<pre><code>val = values.flatMap(lambda x: x*2)
</code></pre>
<p>Now, I need to combine the keys and the val DStream and return the result in the form of Kafka stream. </p>
<p>How to combine val to the corressponding key?</p>
| 0 | 2016-08-18T18:58:39Z | 39,026,779 | <p>You can just use the <code>join</code> operator on the 2 DStreams to merge them.
When you do map, you are essentially creating another stream. So, join will help you merge them together.</p>
<p>Eg:</p>
<pre><code>Joined_Stream = keys.join(values).(any operation like map, flatmap...)
</code></pre>
| 0 | 2016-08-18T20:16:35Z | [
"python",
"apache-kafka",
"pyspark",
"python-kafka"
] |
Python: Find mean of points within radius of an element in 2D array | 39,025,644 | <p>I am looking for an efficient way to find the mean of values within a certain radius of an element in a 2D NumPy array, excluding the center point and values < 0.</p>
<p>My current method is to create a disc shaped mask (using the method <a href="http://stackoverflow.com/questions/8647024/how-to-apply-a-disc-shaped-mask-to-a-numpy-array">here</a>) and find the mean of points within this mask. This is taking too long however...over 10 minutes to calculate ~18000 points within my 300x300 array.</p>
<p>The array I want to find means within is here titled "arr"</p>
<pre><code>def radMask(index,radius,array,insert):
a,b = index
nx,ny = array.shape
y,x = np.ogrid[-a:nx-a,-b:ny-b]
mask = x*x + y*y <= radius*radius
array[mask] = insert
return array
arr_mask = np.zeros_like(arr).astype(int)
arr_mask = radMask(center, radius, arr_mask, 1)
arr_mask[arr < 0] = 0 #Exclude points with no echo
arr_mask[ind] = 0 #Exclude center point
arr_mean = 0
if np.any(dbz_bg):
arr_mean = sp.mean(arr[arr_mask])
</code></pre>
<p>Is there any more efficient way to do this? I've looked into some of the image processing filters/tools but can't quite wrap my head around it.</p>
| 1 | 2016-08-18T18:58:47Z | 39,027,626 | <p>is this helpful? This takes only a couple of seconds on my laptop for ~ 18000 points:</p>
<pre><code>import numpy as np
#generate a random 300x300 matrix for testing
inputMat = np.random.random((300,300))
radius=50
def radMask(index,radius,array):
a,b = index
nx,ny = array.shape
y,x = np.ogrid[-a:nx-a,-b:ny-b]
mask = x*x + y*y <= radius*radius
return mask
#meanAll is going to store ~18000 points
meanAll=np.zeros((130,130))
for x in range(130):
for y in range(130):
centerMask=(x,y)
mask=radMask(centerMask,radius,inputMat)
#un-mask center and values below 0
mask[centerMask]=False
mask[inputMat<0]=False
#get the mean
meanAll[x,y]=np.mean(inputMat[mask])
</code></pre>
| 1 | 2016-08-18T21:13:40Z | [
"python",
"arrays",
"numpy",
"mean"
] |
Trouble creating pandas dataframe from lists | 39,025,674 | <p>I am having some trouble creating a pandas df from lists I generate while scraping data from the web. Here I am using beautifulsoup to pull a few pieces of information about local farms from localharvest.org (farm name, city, and description). I am able to scrape the data effectively, creating a list of objects on each pass. The trouble I'm having is outputting these lists into a tabular df.</p>
<p>My complete code is as follows:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas
url = "http://www.localharvest.org/search.jsp?jmp&lat=44.80798&lon=-69.22736&scale=8&ty=6"
r = requests.get(url)
soup = BeautifulSoup(r.content)
data = soup.find_all("div", {'class': 'membercell'})
fname = []
fcity = []
fdesc = []
for item in data:
name = item.contents[1].text
fname.append(name)
city = item.contents[3].text
fcity.append(city)
desc = item.find_all("div", {'class': 'short-desc'})[0].text
fdesc.append(desc)
df = pandas.DataFrame({'fname': fname, 'fcity': fcity, 'fdesc': fdesc})
print (df)
df.to_csv('farmdata.csv')
</code></pre>
<p>Interestingly, the <code>print(df)</code> function shows that all three lists have been passed to the dataframe. But the resultant .CSV output contains only a single column of values (fcity) with the fname and fdesc column labels present. Interestingly, if I do something crazy like try to force tab-delimited output with <code>df.to_csv('farmdata.csv', sep='\t')</code>, I get a single column with jumbled output, but it appears to at least be passing the other elements of the dataframe. </p>
<p>Thanks in advance for any input.</p>
| 0 | 2016-08-18T19:00:22Z | 39,026,408 | <p>It works for me:</p>
<pre><code># Taking a few slices of each substring of a given string after stripping off whitespaces
df['fname'] = df['fname'].str.strip().str.slice(start=0, stop=20)
df['fdesc'] = df['fdesc'].str.strip().str.slice(start=0, stop=20)
df.to_csv('farmdata.csv')
df
fcity fdesc fname
0 South Portland, ME Gromaine Farm is pro Gromaine Farm
1 Newport, ME We are a diversified Parker Family Farm
2 Unity, ME The Buckle Farm is a The Buckle Farm
3 Kenduskeag, ME Visit wiseacresfarm. Wise Acres Farm
4 Winterport, ME Winter Cove Farm is Winter Cove Farm
5 Albion, ME MISTY BROOK FARM off Misty Brook Farm
6 Dover-Foxcroft, ME We want you to becom Ripley Farm
7 Madison, ME Hide and Go Peep Far Hide and Go Peep Far
8 Etna, ME Fail Better Farm is Fail Better Farm
9 Pittsfield, ME We are a family farm Snakeroot Organic Fa
</code></pre>
<p>Maybe you had a lot of empty spaces which were misinterpreted around the default delimiter (<em>,</em>), and the <code>fcity</code> column was picked up because it contained a (<em>,</em>) in it, which led to the ordering getting affected.</p>
| 1 | 2016-08-18T19:48:05Z | [
"python",
"python-3.x",
"csv",
"pandas",
"dataframe"
] |
Trouble creating pandas dataframe from lists | 39,025,674 | <p>I am having some trouble creating a pandas df from lists I generate while scraping data from the web. Here I am using beautifulsoup to pull a few pieces of information about local farms from localharvest.org (farm name, city, and description). I am able to scrape the data effectively, creating a list of objects on each pass. The trouble I'm having is outputting these lists into a tabular df.</p>
<p>My complete code is as follows:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas
url = "http://www.localharvest.org/search.jsp?jmp&lat=44.80798&lon=-69.22736&scale=8&ty=6"
r = requests.get(url)
soup = BeautifulSoup(r.content)
data = soup.find_all("div", {'class': 'membercell'})
fname = []
fcity = []
fdesc = []
for item in data:
name = item.contents[1].text
fname.append(name)
city = item.contents[3].text
fcity.append(city)
desc = item.find_all("div", {'class': 'short-desc'})[0].text
fdesc.append(desc)
df = pandas.DataFrame({'fname': fname, 'fcity': fcity, 'fdesc': fdesc})
print (df)
df.to_csv('farmdata.csv')
</code></pre>
<p>Interestingly, the <code>print(df)</code> function shows that all three lists have been passed to the dataframe. But the resultant .CSV output contains only a single column of values (fcity) with the fname and fdesc column labels present. Interestingly, if I do something crazy like try to force tab-delimited output with <code>df.to_csv('farmdata.csv', sep='\t')</code>, I get a single column with jumbled output, but it appears to at least be passing the other elements of the dataframe. </p>
<p>Thanks in advance for any input.</p>
| 0 | 2016-08-18T19:00:22Z | 39,026,439 | <p>Consider, instead of using lists of the information for each farm entity that you scrape, to use a list of dictionaries, or a dict of dicts. eg:</p>
<pre><code>[{name:farm1, city: San Jose... etc},
{name: farm2, city: Oakland...etc}]
</code></pre>
<p>Now you can call <code>Pandas.DataFrame.from_dict()</code> on the above defined list of dicts. </p>
<p>Pandas method: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_dict.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_dict.html</a></p>
<p>An answer that might describe this solution in more detail: <a href="http://stackoverflow.com/questions/18837262/convert-python-dict-into-a-dataframe/18837389#18837389">Convert Python dict into a dataframe</a></p>
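<p>A minimal sketch of that approach (the field values here are made-up; note the plain <code>DataFrame</code> constructor also accepts a list of dicts directly):</p>

```python
import pandas as pd

# Hypothetical scraped records, one dict per farm:
records = [
    {"fname": "farm1", "fcity": "San Jose", "fdesc": "first farm"},
    {"fname": "farm2", "fcity": "Oakland", "fdesc": "second farm"},
]

# The DataFrame constructor accepts a list of dicts directly:
df = pd.DataFrame(records)
df.to_csv("farmdata.csv", index=False)
```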
| 0 | 2016-08-18T19:50:01Z | [
"python",
"python-3.x",
"csv",
"pandas",
"dataframe"
] |
Trouble creating pandas dataframe from lists | 39,025,674 | <p>I am having some trouble creating a pandas df from lists I generate while scraping data from the web. Here I am using beautifulsoup to pull a few pieces of information about local farms from localharvest.org (farm name, city, and description). I am able to scrape the data effectively, creating a list of objects on each pass. The trouble I'm having is outputting these lists into a tabular df.</p>
<p>My complete code is as follows:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas
url = "http://www.localharvest.org/search.jsp?jmp&lat=44.80798&lon=-69.22736&scale=8&ty=6"
r = requests.get(url)
soup = BeautifulSoup(r.content)
data = soup.find_all("div", {'class': 'membercell'})
fname = []
fcity = []
fdesc = []
for item in data:
name = item.contents[1].text
fname.append(name)
city = item.contents[3].text
fcity.append(city)
desc = item.find_all("div", {'class': 'short-desc'})[0].text
fdesc.append(desc)
df = pandas.DataFrame({'fname': fname, 'fcity': fcity, 'fdesc': fdesc})
print (df)
df.to_csv('farmdata.csv')
</code></pre>
<p>Interestingly, the <code>print(df)</code> function shows that all three lists have been passed to the dataframe. But the resultant .CSV output contains only a single column of values (fcity) with the fname and fdesc column labels present. Interestingly, if I do something crazy like try to force tab-delimited output with <code>df.to_csv('farmdata.csv', sep='\t')</code>, I get a single column with jumbled output, but it appears to at least be passing the other elements of the dataframe. </p>
<p>Thanks in advance for any input.</p>
| 0 | 2016-08-18T19:00:22Z | 39,026,460 | <p>Try stripping out the newline and space characters:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas
url = "http://www.localharvest.org/search.jsp?jmp&lat=44.80798&lon=-69.22736&scale=8&ty=6"
r = requests.get(url)
soup = BeautifulSoup(r.content)
data = soup.find_all("div", {'class': 'membercell'})
fname = []
fcity = []
fdesc = []
for item in data:
name = item.contents[1].text.split()
fname.append(' '.join(name))
city = item.contents[3].text.split()
fcity.append(' '.join(city))
desc = item.find_all("div", {'class': 'short-desc'})[0].text.split()
fdesc.append(' '.join(desc))
df = pandas.DataFrame({'fname': fname, 'fcity': fcity, 'fdesc': fdesc})
print (df)
df.to_csv('farmdata.csv')
</code></pre>
| 0 | 2016-08-18T19:51:29Z | [
"python",
"python-3.x",
"csv",
"pandas",
"dataframe"
] |
Does data attributes override method attributes in a class in Python? | 39,025,680 | <p>According to the <a href="https://docs.python.org/3/tutorial/classes.html#random-remarks" rel="nofollow">official documentation</a>, "data attributes override method attributes with the same name". However, I found that to be incorrect.</p>
<pre><code>class C:
x = 111
def x(self):
print('I am x')
c = C()
print(c.x)
</code></pre>
<p>The print statement in the code above shows c.x being a method, not the data attribute assigned to 111. Thus, this code indicates that data attributes do not necessarily override method attributes with the same name and the documentation is wrong. Can anyone confirm my finding?</p>
<p>P.S. I tried the code in both Python 3.5 and Python 2.7 and obtained the same result. </p>
| 1 | 2016-08-18T19:00:44Z | 39,025,770 | <p>Attributes override methods and vise-versa. A rule of thumb is, the latter overrides the former. So if you do</p>
<pre><code>class C:
x = 111
def x(self):
print('I am x')
x = 112
c = C()
print(c.x)
</code></pre>
<p>You will get </p>
<pre><code>112
</code></pre>
| 3 | 2016-08-18T19:06:12Z | [
"python"
] |
Does data attributes override method attributes in a class in Python? | 39,025,680 | <p>According to the <a href="https://docs.python.org/3/tutorial/classes.html#random-remarks" rel="nofollow">official documentation</a>, "data attributes override method attributes with the same name". However, I found that to be incorrect.</p>
<pre><code>class C:
x = 111
def x(self):
print('I am x')
c = C()
print(c.x)
</code></pre>
<p>The print statement in the code above shows c.x being a method, not the data attribute assigned to 111. Thus, this code indicates that data attributes do not necessarily override method attributes with the same name and the documentation is wrong. Can anyone confirm my finding?</p>
<p>P.S. I tried the code in both Python 3.5 and Python 2.7 and obtained the same result. </p>
| 1 | 2016-08-18T19:00:44Z | 39,025,814 | <p>You were missing the second definition of x, that is why it seemed like the official documentation was wrong.</p>
| -1 | 2016-08-18T19:09:25Z | [
"python"
] |
Does data attributes override method attributes in a class in Python? | 39,025,680 | <p>According to the <a href="https://docs.python.org/3/tutorial/classes.html#random-remarks" rel="nofollow">official documentation</a>, "data attributes override method attributes with the same name". However, I found that to be incorrect.</p>
<pre><code>class C:
x = 111
def x(self):
print('I am x')
c = C()
print(c.x)
</code></pre>
<p>The print statement in the code above shows c.x being a method, not the data attribute assigned to 111. Thus, this code indicates that data attributes do not necessarily override method attributes with the same name and the documentation is wrong. Can anyone confirm my finding?</p>
<p>P.S. I tried the code in both Python 3.5 and Python 2.7 and obtained the same result. </p>
| 1 | 2016-08-18T19:00:44Z | 39,026,385 | <p>I guess the tutorial is unfortunately (because ambiguously) phrased and by</p>
<blockquote>
<p>[d]ata attributes override method attributes with the same name</p>
</blockquote>
<p>it actually means "Data attributes override previously assigned/defined method attributes of the same name and vice versa: method attributes override previously assigned/defined data attributes of the same name."</p>
<p>"Duh", you might think "data attributes also override previously assigned <em>data</em> attributes of the same name, so what's the big deal? Why is this even mentioned?" Assigning and re-assigning (called "overriding" in the cited tutorial) to variables (whether called "attributes" of something or not) is after all one of the prototypical features of an imperative programming language.</p>
<p>Well, let me introduce you to</p>
<h1>Namespaces</h1>
<p>Python classes are namespaces. So what the tutorial might try to tell us here is that data attributes <strong>and</strong> method attributes <em>within</em> a class share a namespace.</p>
<p>This isn't the case for attributes of different classes, though. If a class inherits from another class, it has access to its parent's names. If a name is reused for method definition or data assignment within the inheriting class, the parent class keeps the original values. In the child class they are merely temporarily shadowed. If you remove the name from the child class, it, too, will again provide access to the parent's attribute of the same name:</p>
<pre><code>class A:
x = 111
class B1(A):
x = 123 # Shadows A.x
assert B1.x == 123
del B1.x # But after removing B1's own x attribute ...
assert B1.x == 111 # ... B1.x is just an alias to A.x !
# Shadowing can happen at any time:
class B2(A):
pass
assert B2.x == A.x == 111
B2.x = 5 # shadowing attributes can also be added after the class definition
assert B2.x == 5
assert A.x == 111
del B2.x
assert B2.x == A.x == 111
</code></pre>
<p>Contrast this with re-definition a.k.a. re-assignment (or "overriding" as the tutorial calls it):</p>
<pre><code>class C:
x = 555
def x(self):
print('I am x')
C().x() # outputs "I am x"
del C.x
print(C.x) # AttributeError: 'C' object has no attribute 'x'
</code></pre>
<p>Method <code>C.x()</code> didn't temporarily shadow data attribute <code>C.x</code>. It replaced it, so when we delete the method, <code>x</code> is missing completely within <code>C</code>, rather than the data attribute re-appearing under that name.</p>
<h1>More Namespaces</h1>
<p>Instantiation adds another namespace and thus another chance for shadowing:</p>
<pre><code>a = A()
assert a.x == 111 # instance namespace includes class namespace
a.x = 1000
assert a.x == 1000
assert A.x == 111 # class attribute unchanged
del a.x
assert a.x == 111 # sees A.x again
</code></pre>
<p>In fact, all (nested) namespaces in Python work that way: packages, modules, classes, functions and methods, instance objects, inner classes, nested functions ...</p>
<p>When reading a variable, the namespace hierarchy is walked bottom-up until the name is found. (<em>Reading</em> here means finding the value (object, function/method or built-in) to which the variable's name is bound. If the value is mutable, this can also be used to change the value.)</p>
<p>On the other hand, when setting (defining or redefining) a variable, a name of the current namespace is used: Rebound to the new value if the name already exists in that very namespace (rather than only being included there from another namespace) or a newly created name if it didn't exist before.</p>
| 3 | 2016-08-18T19:46:41Z | [
"python"
] |
Django app urls not working properly | 39,025,697 | <p>I have the following base urls file:</p>
<pre><code>urlpatterns = [
url(r'^admin/', include(admin.site.urls)),
url(r'^agenda/', include('planner.urls', namespace='planner', app_name='planner'))
]
</code></pre>
<p>And my planner app contains the following urls:</p>
<pre><code>urlpatterns = patterns('',
url(r'^', SkirmList.as_view(), name='agenda'),
url(r'^skirm/(?P<pk>\d+)/$', SkirmDetailView.as_view(), name='skirmdetailview'),
)
</code></pre>
<p>The problem I am having is when going to: <a href="http://localhost:8000/agenda/skirm/41/" rel="nofollow">http://localhost:8000/agenda/skirm/41/</a>
It keeps loading the SkirmList view instead of the SkirmDetailView.</p>
<p>It's probably obvious to most, but I am a beginner with Django. Any help is appreciated, thanks.</p>
| 0 | 2016-08-18T19:01:26Z | 39,025,781 | <p>The regex <code>r'^'</code> matches any string. It just says the string needs to have a start. Every string has a start, so...</p>
<p>You need to include an end anchor as well:</p>
<pre><code>url(r'^$', ...)
</code></pre>
<p>This regex looks for the start of the string, followed immediately by the end, i.e. an empty string. It won't match the <code>/agenda/skirm/41/</code> url. </p>
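<p>The difference is easy to check with the <code>re</code> module directly (Django matches patterns against the URL without its leading slash, so the tested strings omit it):</p>

```python
import re

# r'^' matches at the start of *any* string, so every URL hits SkirmList:
assert re.search(r'^', 'skirm/41/') is not None

# r'^$' matches only the empty string, i.e. the app's root URL:
assert re.search(r'^$', '') is not None
assert re.search(r'^$', 'skirm/41/') is None
```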
| 3 | 2016-08-18T19:06:48Z | [
"python",
"django",
"url",
"django-urls"
] |
How to convert ArrayType to DenseVector in PySpark DataFrame? | 39,025,707 | <p>I'm getting the following error trying to build a ML <code>Pipeline</code>:</p>
<pre><code>pyspark.sql.utils.IllegalArgumentException: 'requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7 but was actually ArrayType(DoubleType,true).'
</code></pre>
<p>My <code>features</code> column contains an array of floating point values. It sounds like I need to convert those to some type of vector (it's not sparse, so a DenseVector?). Is there a way to do this directly on the DataFrame or do I need to convert to an RDD?</p>
| 0 | 2016-08-18T19:02:22Z | 39,026,265 | <p>You can use UDF:</p>
<pre><code>udf(lambda vs: Vectors.dense(vs), VectorUDT())
</code></pre>
<p>In Spark < 2.0 import:</p>
<pre><code>from pyspark.mllib.linalg import Vectors, VectorUDT
</code></pre>
<p>In Spark 2.0+ import:</p>
<pre><code>from pyspark.ml.linalg import Vectors, VectorUDT
</code></pre>
<p>Please note that these classes are not compatible despite identical implementation.</p>
<p>It is also possible to extract individual features and assemble with <code>VectorAssembler</code>. Assuming input column is called <code>features</code>:</p>
<pre><code>from pyspark.ml.feature import VectorAssembler
n = ... # Size of features
assembler = VectorAssembler(
inputCols=["features[{0}]".format(i) for i in range(n)],
outputCol="features_vector")
assembler.transform(df.select(
"*", *(df["features"].getItem(i) for i in range(n))
))
</code></pre>
| 0 | 2016-08-18T19:38:47Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-mllib",
"apache-spark-ml"
] |
How to split digits and text | 39,025,782 | <p>I have a dataset like this</p>
<pre><code>data = pd.DataFrame({ 'a' : [5, 5, '2 bad']})
</code></pre>
<p>I want to convert this to </p>
<pre><code>{ 'a.digits' : [5, 5, 2], 'a.text' : [nan, nan, 'bad']}
</code></pre>
<p>I can get 'a.digits' as bellow</p>
<pre><code>data['a.digits'] = data['a'].replace('[^0-9]', '', regex = True)
5 2
2 1
Name: a, dtype: int64
</code></pre>
<p>When i do</p>
<pre><code>data['a'] = data['a'].replace('[^\D]', '', regex = True)
</code></pre>
<p>or</p>
<pre><code>data['a'] = data['a'].replace('[^a-zA-Z]', '', regex = True)
</code></pre>
<p>i get</p>
<pre><code>5 2
bad 1
Name: a, dtype: int64
</code></pre>
<p>What's wrong? How to remove digits?</p>
| 0 | 2016-08-18T19:06:49Z | 39,026,211 | <p>Something like this would suffice?</p>
<pre><code>In [8]: import numpy as np
In [9]: import re
In [10]: data['a.digits'] = data['a'].apply(lambda x: int(re.sub(r'[\D]', '', str(x))))
In [12]: data['a.text'] = data['a'].apply(lambda x: re.sub(r'[\d]', '', str(x)))
In [13]: data.replace('', np.nan, regex=True)
Out[13]:
a a.digits a.text
0 5 5 NaN
1 5 5 NaN
2 2 bad 2 bad
</code></pre>
| 2 | 2016-08-18T19:35:18Z | [
"python",
"replace"
] |
How to split digits and text | 39,025,782 | <p>I have a dataset like this</p>
<pre><code>data = pd.DataFrame({ 'a' : [5, 5, '2 bad']})
</code></pre>
<p>I want to convert this to </p>
<pre><code>{ 'a.digits' : [5, 5, 2], 'a.text' : [nan, nan, 'bad']}
</code></pre>
<p>I can get 'a.digits' as bellow</p>
<pre><code>data['a.digits'] = data['a'].replace('[^0-9]', '', regex = True)
5 2
2 1
Name: a, dtype: int64
</code></pre>
<p>When i do</p>
<pre><code>data['a'] = data['a'].replace('[^\D]', '', regex = True)
</code></pre>
<p>or</p>
<pre><code>data['a'] = data['a'].replace('[^a-zA-Z]', '', regex = True)
</code></pre>
<p>i get</p>
<pre><code>5 2
bad 1
Name: a, dtype: int64
</code></pre>
<p>What's wrong? How to remove digits?</p>
| 0 | 2016-08-18T19:06:49Z | 39,026,225 | <p>Assuming there is a space between 2 and the word bad, you can do this:</p>
<p><code>data['Text'] = data['a'].str.split(' ').str[1]</code></p>
| 0 | 2016-08-18T19:36:04Z | [
"python",
"replace"
] |
extracting values from json file using ijson | 39,025,988 | <p>I have a large JSON file which looks like this :</p>
<pre><code>{"details":{
"1000":[
["10","Thursday","1","19.89"],
["12","Monday","3","20.90"],
...
]
"1001":[
["30","Sunday","11","80.22"],
["88","Wednesday","22","8.29"],
...
]
}
}
</code></pre>
<p>Now I'm extracting the lists present in keys like "1000", "1001" from the "<strong>details</strong>" value using <strong>ijson</strong> (iterative JSON) with the code given below:</p>
<pre><code>import ijson as ijson
filename='Clean_Details.json'
with open(filename,'r') as f:
objects=ijson.items(f,'details.1001.item')
for row in objects:
print(row)
print("Done")
</code></pre>
<p>But the problem is that <strong>the for loop is not terminating</strong> in the above code. After printing the final list in 1001 it keeps running.</p>
<p>I'm guessing that the generator (<strong>objects</strong>) in the above code is not encountering <strong>StopIteration</strong>, but I don't know why.</p>
<p>Could Anyone help?
A little help would be appreciated.</p>
| 0 | 2016-08-18T19:20:42Z | 39,036,837 | <p>Ok as it turns out because of <em>large size</em> of <strong>JSON</strong> file which is > <strong>800MB</strong> with about more than a million records the parsing takes time to complete so it </p>
<p>The loop terminates eventually but takes some time to complete. On a pc with normal specs it definitely takes some time.</p>
<p>Also using :</p>
<pre><code>import ijson as ijson
</code></pre>
<p>is way slower on very large files, because most of the parsing takes place in Python backend code. So, in order to improve the speed,</p>
<p>It is way better to use</p>
<pre><code>import ijson.backends.yajl2_cffi as ijson
</code></pre>
<p>because it has a backend in the C language using <em>cffi</em>, which does improve the running time of the above code.</p>
| 0 | 2016-08-19T10:33:58Z | [
"python",
"json",
"yajl",
"ijson"
] |
Django: login failed error when trying to connect to Azure database | 39,026,068 | <p>I tried to connect django to Azure database using django-pyodbc-azure and made sure that my settings in setting.py are correct. But still got this problem. I heard that this could be caused by some authentication problem but not sure how to solve this.</p>
<blockquote>
<p>django.db.utils.Error: ('28000', "[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'USERNAME'. (18456) (SQLDriverConnect)")</p>
</blockquote>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'sql_server.pyodbc',
'NAME': 'SERVER_NAME',
'USER': 'USERNAME',
'PASSWORD': 'PASSWORD',
'HOST': 'SERVER_NAME.database.windows.net',
'PORT': '',
}
</code></pre>
<p>}</p>
| 0 | 2016-08-18T19:26:10Z | 39,032,497 | <p>@HansongLi, The connection string of Azure SQL Database is in this format <code>Server=<ServerName>,<ServerPort>;Database=<DatabaseName>;UiD=<UserName>;Password={your_password_here};Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;</code>.</p>
<p>Correspondingly, the database configuration is defined as below, see the article <a href="https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-ptvs-django-sql/#configure-the-project" rel="nofollow">here</a>.</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'sql_server.pyodbc',
'NAME': '<DatabaseName>',
'USER': '<UserName>',
'PASSWORD': '{your_password_here}',
'HOST': '<ServerName>',
'PORT': '<ServerPort>',
'OPTIONS': {
'driver': 'SQL Server Native Client 11.0',
'MARS_Connection': 'True',
}
}
}
</code></pre>
<p>Meanwhile, the <code><UserName></code> should be in the format <code><name>@<sql-azure-name></code> when the host <code><ServerName></code> is like this <code>tcp:<sql-azure-name>.database.windows.net</code>.</p>
<p>You can find the connection string on Azure portal.</p>
<p><a href="http://i.stack.imgur.com/nb5q6.png" rel="nofollow"><img src="http://i.stack.imgur.com/nb5q6.png" alt="enter image description here"></a></p>
<p>Or at the tab <code>DASHBOARD</code> of Azure classic portal.</p>
<p><a href="http://i.stack.imgur.com/KHzb0.png" rel="nofollow"><img src="http://i.stack.imgur.com/KHzb0.png" alt="enter image description here"></a></p>
| 0 | 2016-08-19T06:33:23Z | [
"python",
"django",
"azure"
] |
Adding element to list of dictionaries | 39,026,090 | <p>Normally, I wouldnt post this, but it's been driving me crazy for the past 10 hours...</p>
<p>I have 2 list of dictionaries. But they have either none, 1 or 2 things in common. If while iterating the elements on the second list I match a key-value pair from the first list, then I have to add these elements to the first list, at that specific spot</p>
<p>so the first list is like this:</p>
<pre><code>list1 = [{'key11':'value11', 'key12':'value12', ...}, {'key11':'value111', 'key121':'value121', ...}]
</code></pre>
<p>and list2 is like the above mentioned list:</p>
<pre><code>list2 = [{'2key11':'value11', 'key12':'value12', '2key13': 'value'...}, {...}]
</code></pre>
<p>Notice that <code>key12</code> is the same on both lists. So the end product I want is this:</p>
<pre><code>>list1 = list1 (some operation) list2
>list1
>[{'key11':'value11', 'key12':'value12', '2key11':'value11', ...}, {'key11':'value111', 'key121':'value121', ...}]
</code></pre>
<p>Notice that in the desired output, I've added all of the second list's dictionary elements to the dictionary that corresponded to key12 in list1 (the first dictionary).</p>
<p>So far, I've been doing it manually and the results are not good.
My code is this:</p>
<pre><code>for i in range(len(list)):
# Now we need to map the panther data as well.
for pitem in plist:
# match the id's to the mapped symbols
if list[i]['key_id1'] == pitem['key_id1']:
if list[i]['key_id2'] == 'n/a':
list[i]['key_id2'] = pitem['key_id2']
list[i]['somekey1'] = panther_item['somekey1']
list[i]['somekey2'] = panther_item['somekey2']
list[i]['somekey3'] = panther_item['somekey3'] # <- WHY IS THIS GIVING ME A KEY ERROR AND NOT THE ONE ABOVE IT? Both didnt exist in the dictionary stored in list.
list[i]['somekey4'] = panther_item['somekey4']
list[i]['somekey5'] = panther_item['somekey5']
elif list[i]['key_id2'] == pitem['key_id2']:
if list[i]['key_id1'] == 'n/a':
list[i]['key_id1'] = pitem['key_id1']
list[i]['somekey1'] = panther_item['somekey1']
list[i]['somekey2'] = panther_item['somekey2']
list[i]['somekey3'] = panther_item['somekey3']
list[i]['somekey4'] = panther_item['somekey4']
list[i]['somekey5'] = panther_item['somekey5']
</code></pre>
<p>But i'm getting a <code>keyError</code> on 'somekey3'. Why 'somekey3' and not 'somekey2'? Both werent there. I put them every time in this iteration. And when I print the 2 lists before the edit they are correct. What could possibly be going wrong here?</p>
 | 1 | 2016-08-18T19:27:43Z | 39,026,355 | <p>If I've got the question right, you want to find the union of the two corresponding dictionaries in the 2 lists? The code below should accomplish that. Remember that in case the values of corresponding keys differ, y (the list2 dictionaries) will take precedence over x.</p>
<pre><code>list3 = []
for x,y in zip(list1, list2):
z = x.copy()
for key, value in y.iteritems():
if value != 'n/a':
z[key] = value
list3.append(z)
</code></pre>
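<p>For example, applied to inputs shaped like the question's (the sample values below are made up for illustration; <code>items()</code> replaces <code>iteritems()</code> on Python 3):</p>

```python
list1 = [{'key11': 'value11', 'key12': 'value12'}]
list2 = [{'2key11': 'value21', 'key12': 'value12', '2key13': 'n/a'}]

list3 = []
for x, y in zip(list1, list2):
    z = x.copy()
    for key, value in y.items():  # use iteritems() on Python 2
        if value != 'n/a':        # skip placeholder values
            z[key] = value
    list3.append(z)

print(list3)
```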
| 0 | 2016-08-18T19:44:41Z | [
"python",
"list",
"dictionary"
] |
Adding element to list of dictionaries | 39,026,090 | <p>Normally, I wouldnt post this, but it's been driving me crazy for the past 10 hours...</p>
<p>I have 2 list of dictionaries. But they have either none, 1 or 2 things in common. If while iterating the elements on the second list I match a key-value pair from the first list, then I have to add these elements to the first list, at that specific spot</p>
<p>so the first list is like this:</p>
<pre><code>list1 = [{'key11':'value11', 'key12':'value12', ...}, {'key11':'value111', 'key121':'value121', ...}]
</code></pre>
<p>and list2 is like the above mentioned list:</p>
<pre><code>list2 = [{'2key11':'value11', 'key12':'value12', '2key13': 'value'...}, {...}]
</code></pre>
<p>Notice that <code>key12</code> is the same on both lists. So the end product I want is this:</p>
<pre><code>>list1 = list1 (some operation) list2
>list1
>[{'key11':'value11', 'key12':'value12', '2key11':'value11', ...}, {'key11':'value111', 'key121':'value121', ...}]
</code></pre>
<p>Notice that in the desired output, I've added all of the second list's dictionary elements to the dictionary that corresponded to key12 in list1 (the first dictionary).</p>
<p>So far, I've been doing it manually and the results are not good.
My code is this:</p>
<pre><code>for i in range(len(list)):
# Now we need to map the panther data as well.
for pitem in plist:
# match the id's to the mapped symbols
if list[i]['key_id1'] == pitem['key_id1']:
if list[i]['key_id2'] == 'n/a':
list[i]['key_id2'] = pitem['key_id2']
list[i]['somekey1'] = panther_item['somekey1']
list[i]['somekey2'] = panther_item['somekey2']
list[i]['somekey3'] = panther_item['somekey3'] # <- WHY IS THIS GIVING ME A KEY ERROR AND NOT THE ONE ABOVE IT? Both didnt exist in the dictionary stored in list.
list[i]['somekey4'] = panther_item['somekey4']
list[i]['somekey5'] = panther_item['somekey5']
elif list[i]['key_id2'] == pitem['key_id2']:
if list[i]['key_id1'] == 'n/a':
list[i]['key_id1'] = pitem['key_id1']
list[i]['somekey1'] = panther_item['somekey1']
list[i]['somekey2'] = panther_item['somekey2']
list[i]['somekey3'] = panther_item['somekey3']
list[i]['somekey4'] = panther_item['somekey4']
list[i]['somekey5'] = panther_item['somekey5']
</code></pre>
<p>But i'm getting a <code>keyError</code> on 'somekey3'. Why 'somekey3' and not 'somekey2'? Both werent there. I put them every time in this iteration. And when I print the 2 lists before the edit they are correct. What could possibly be going wrong here?</p>
 | 1 | 2016-08-18T19:27:43Z | 39,026,895 | <p>Maybe I got it right: you want to add all key/value pairs from the dictionaries in list2 to the first dictionary in list1 that has 'key12' in it? The following code should do it.</p>
<pre><code>first_dict = next(d for d in list1 if 'key12' in d)
for d in list2:
first_dict.update(d)
</code></pre>
<p>If you want to add them to the first dictionary from list1 that has common key(s) with the dict from list2 at the same position, then:</p>
<pre><code>first_dict = next(a for a, b in zip(list1, list2) if set(a.keys()) & set(b.keys()))
for d in list2:
first_dict.update(d)
</code></pre>
| -1 | 2016-08-18T20:24:31Z | [
"python",
"list",
"dictionary"
] |
How can I remove text within multi layer of parentheses python | 39,026,120 | <p>I have a python string that I need to remove parentheses. The standard way is to use <code>text = re.sub(r'\([^)]*\)', '', text)</code>, so the content within the parentheses will be removed.</p>
<p>However, I just found a string that looks like <code>(Data with in (Boo) And good luck)</code>. With the regex I use, it will still have the <code>And good luck)</code> part left. I know I can scan through the entire string and keep a counter of the number of <code>(</code> and <code>)</code>, and when the numbers are balanced, index the locations of <code>(</code> and <code>)</code> and remove the content in the middle, but is there a better/cleaner way of doing that? It doesn't need to be regex; whatever works is great, thanks.</p>
<p>Someone asked for expected result so here's what I am expecting:</p>
<p><code>Hi this is a test ( a b ( c d) e) sentence</code></p>
<p>Post replace I want it to be <code>Hi this is a test sentence</code>, instead of <code>Hi this is a test e) sentence</code></p>
| 0 | 2016-08-18T19:29:35Z | 39,026,523 | <p>First I split the line into tokens that do not contain the parenthesis, for later on joining them into a new line:</p>
<pre><code>line = "(Data with in (Boo) And good luck)"
new_line = "".join(re.split(r'(?:[()])',line))
print ( new_line )
# 'Data with in Boo And good luck'
</code></pre>
| 1 | 2016-08-18T19:55:41Z | [
"python",
"regex",
"text"
] |
How can I remove text within multi layer of parentheses python | 39,026,120 | <p>I have a python string that I need to remove parentheses. The standard way is to use <code>text = re.sub(r'\([^)]*\)', '', text)</code>, so the content within the parentheses will be removed.</p>
<p>However, I just found a string that looks like <code>(Data with in (Boo) And good luck)</code>. With the regex I use, it will still have the <code>And good luck)</code> part left. I know I can scan through the entire string and keep a counter of the number of <code>(</code> and <code>)</code>, and when the numbers are balanced, index the locations of <code>(</code> and <code>)</code> and remove the content in the middle, but is there a better/cleaner way of doing that? It doesn't need to be regex; whatever works is great, thanks.</p>
<p>Someone asked for expected result so here's what I am expecting:</p>
<p><code>Hi this is a test ( a b ( c d) e) sentence</code></p>
<p>Post replace I want it to be <code>Hi this is a test sentence</code>, instead of <code>Hi this is a test e) sentence</code></p>
| 0 | 2016-08-18T19:29:35Z | 39,026,664 | <p>With the re module (replace the innermost parenthesis until there's no more replacement to do):</p>
<pre><code>import re
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
nb_rep = 1
while (nb_rep):
(s, nb_rep) = re.subn(r'\([^()]*\)', '', s)
print(s)
</code></pre>
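<p>One caveat worth noting (a hypothetical cleanup, not part of the original snippet): removing a parenthesised group that had spaces on both sides leaves a doubled space behind, which can be collapsed afterwards:</p>

```python
import re

s = 'Hi this is a test ( a b ( c d) e) sentence'
nb_rep = 1
while nb_rep:
    # repeatedly strip the innermost parenthesised groups
    (s, nb_rep) = re.subn(r'\([^()]*\)', '', s)
s = ' '.join(s.split())  # collapse the doubled space left by the removal
print(s)
```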
<p>With the <a href="https://pypi.python.org/pypi/regex" rel="nofollow">regex module</a> that allows recursion:</p>
<pre><code>import regex
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
print(regex.sub(r'\([^()]*+(?:(?R)[^()]*)*+\)', '', s))
</code></pre>
<p>Where <code>(?R)</code> refers to the whole pattern itself.</p>
| 2 | 2016-08-18T20:06:56Z | [
"python",
"regex",
"text"
] |
How can I remove text within multi layer of parentheses python | 39,026,120 | <p>I have a python string that I need to remove parentheses. The standard way is to use <code>text = re.sub(r'\([^)]*\)', '', text)</code>, so the content within the parentheses will be removed.</p>
<p>However, I just found a string that looks like <code>(Data with in (Boo) And good luck)</code>. With the regex I use, it will still have the <code>And good luck)</code> part left. I know I can scan through the entire string and keep a counter of the number of <code>(</code> and <code>)</code>, and when the numbers are balanced, index the locations of <code>(</code> and <code>)</code> and remove the content in the middle, but is there a better/cleaner way of doing that? It doesn't need to be regex; whatever works is great, thanks.</p>
<p>Someone asked for expected result so here's what I am expecting:</p>
<p><code>Hi this is a test ( a b ( c d) e) sentence</code></p>
<p>Post replace I want it to be <code>Hi this is a test sentence</code>, instead of <code>Hi this is a test e) sentence</code></p>
| 0 | 2016-08-18T19:29:35Z | 39,027,283 | <p>No regex...</p>
<pre><code>>>> a = 'Hi this is a test ( a b ( c d) e) sentence'
>>> o = ['(' == t or t == ')' for t in a]
>>> o
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, True, False, False,
False, False, False, True, False, False, False, False, True, False, False,
True, False, False, False, False, False, False, False, False, False]
>>> start,end=0,0
>>> for n,i in enumerate(o):
... if i and not start:
... start = n
... if i and start:
... end = n
...
>>>
>>> start
18
>>> end
32
>>> a1 = ' '.join(''.join(i for n,i in enumerate(a) if (n<start or n>end)).split())
>>> a1
'Hi this is a test sentence'
>>>
</code></pre>
| 1 | 2016-08-18T20:51:51Z | [
"python",
"regex",
"text"
] |
How can I remove text within multi layer of parentheses python | 39,026,120 | <p>I have a python string that I need to remove parentheses. The standard way is to use <code>text = re.sub(r'\([^)]*\)', '', text)</code>, so the content within the parentheses will be removed.</p>
<p>However, I just found a string that looks like <code>(Data with in (Boo) And good luck)</code>. With the regex I use, it will still have the <code>And good luck)</code> part left. I know I can scan through the entire string and keep a counter of the number of <code>(</code> and <code>)</code>, and when the numbers are balanced, index the locations of <code>(</code> and <code>)</code> and remove the content in the middle, but is there a better/cleaner way of doing that? It doesn't need to be regex; whatever works is great, thanks.</p>
<p>Someone asked for expected result so here's what I am expecting:</p>
<p><code>Hi this is a test ( a b ( c d) e) sentence</code></p>
<p>Post replace I want it to be <code>Hi this is a test sentence</code>, instead of <code>Hi this is a test e) sentence</code></p>
 | 0 | 2016-08-18T19:29:35Z | 39,032,992 | <p>Assuming (1) there are always matching parentheses and (2) we only remove the parentheses and everything in between them (i.e. surrounding spaces around the parentheses are untouched), the following should work.</p>
<p>It's basically a state machine that maintains the current depth of nested parentheses. We keep the character if it's (1) not a parenthesis and (2) the current depth is 0.</p>
<p><em>No regexes. No recursion. A single pass through the input string without any intermediate lists.</em></p>
<pre><code>tests = [
"Hi this is a test ( a b ( c d) e) sentence",
"(Data with in (Boo) And good luck)",
]
delta = {
'(': 1,
')': -1,
}
def remove_paren_groups(input):
depth = 0
for c in input:
d = delta.get(c, 0)
depth += d
if d != 0 or depth > 0:
continue
yield c
for input in tests:
print ' IN: %s' % repr(input)
print 'OUT: %s' % repr(''.join(remove_paren_groups(input)))
</code></pre>
<p>Output:</p>
<pre><code> IN: 'Hi this is a test ( a b ( c d) e) sentence'
OUT: 'Hi this is a test sentence'
IN: '(Data with in (Boo) And good luck)'
OUT: ''
</code></pre>
| 1 | 2016-08-19T07:05:20Z | [
"python",
"regex",
"text"
] |
pandas find count between dates for list of month ends | 39,026,131 | <p>I have a list of customers with a "start date" and and "end date". For any given time period, my goal is to find how many customers I have active. A customer is active if their start date is before x and their end date is after x. I've written a brute force version of this:</p>
<pre><code>from datetime import datetime
import pandas as pd
#dates of interest
dates = ['2016-01-31','2016-02-29','2016-03-31','2016-04-30','2016-05-31']
dates = [datetime.strptime(x, '%Y-%m-%d') for x in dates]
#sample records
df = pd.DataFrame( [['A','2016-01-01','2016-04-23'],['B','2016-02-05','2016-04-30'],['C','2016-02-02','2016-05-25']],columns = ['customerId','startDate','endDate'])
df['startDate'] = pd.to_datetime(df['startDate'])
df['endDate'] = pd.to_datetime(df['endDate'])
output = []
#is there a better way to do this?
for currDate in dates:
record_count = len(df[(df['startDate']<= currDate) & (df['endDate']>= currDate)])
output.append([currDate,record_count])
output = pd.DataFrame(output, columns = ['date','active count'])
</code></pre>
<p>Is there a better way to find how many customers are active between each date of interest? Right now I'm just iterating through all the dates, but that doesn't feel very "pythonic" to me.</p>
<p>Any thoughts or assistance would be appreciated!</p>
| 0 | 2016-08-18T19:30:06Z | 39,026,474 | <p>One way would be:</p>
<pre><code>In [142]: tf = pd.DataFrame({'dates': dates})
In [143]: tf['active_count'] = tf['dates'].apply(lambda x: df[(df['startDate']<= x) & (df['endDate']>= x)].shape[0])
In [144]: tf
Out[144]:
dates active_count
0 2016-01-31 1
1 2016-02-29 3
2 2016-03-31 3
3 2016-04-30 2
4 2016-05-31 0
</code></pre>
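<p>As an aside (a stdlib-only sketch, not part of the answer above): since ISO-formatted date strings compare in chronological order, you can sort the start and end dates once and answer each query with two binary searches via <code>bisect</code>, avoiding a full re-scan per date. The dates below mirror the question's sample:</p>

```python
from bisect import bisect_left, bisect_right

starts = sorted(['2016-01-01', '2016-02-05', '2016-02-02'])
ends = sorted(['2016-04-23', '2016-04-30', '2016-05-25'])

def active_count(day):
    # customers who started on or before `day`,
    # minus those whose end date is strictly before `day`
    return bisect_right(starts, day) - bisect_left(ends, day)

print(active_count('2016-02-29'))  # 3
```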
| 1 | 2016-08-18T19:52:19Z | [
"python",
"pandas"
] |
Numpy multiplying different shapes | 39,026,173 | <p>I have two arrays that are like this</p>
<pre><code>x = [a,b]
y = [p,q,r]
</code></pre>
<p>I need to multiply this together to a product <code>c</code> which should be like this,</p>
<pre><code>c = [a*p, a*q, a*r, b*p, b*q, b*r]
</code></pre>
<p>However <code>x*y</code> gives the following error,</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (2,) (3,)
</code></pre>
<p>I can do something like this,</p>
<pre><code>for i in range(len(x)):
for t in range(len(y)):
        c.append(x[i] * y[t])
</code></pre>
<p>But really the length of my <code>x</code> and <code>y</code> is quite large so what's the most efficient way to make such a multiplication without the looping.</p>
| 0 | 2016-08-18T19:32:53Z | 39,026,226 | <p>You can use <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> for pairwise elementwise multiplication between <code>x</code> and <code>y</code> and then flatten with <code>.ravel()</code>, like so -</p>
<pre><code>(x[:,None]*y).ravel()
</code></pre>
<p>Or use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.outer.html" rel="nofollow"><code>outer product</code></a> and then flatten -</p>
<pre><code>np.outer(x,y).ravel()
</code></pre>
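<p>For reference, the same pairwise product written as a plain (non-vectorised) list comprehension; both NumPy one-liners above compute exactly this, just faster for large inputs:</p>

```python
x = [2, 3]
y = [5, 7, 11]

# every element of x multiplied by every element of y, row-major order
c = [a * b for a in x for b in y]
print(c)  # [10, 14, 22, 15, 21, 33]
```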
| 4 | 2016-08-18T19:36:05Z | [
"python",
"numpy"
] |
Numpy multiplying different shapes | 39,026,173 | <p>I have two arrays that are like this</p>
<pre><code>x = [a,b]
y = [p,q,r]
</code></pre>
<p>I need to multiply this together to a product <code>c</code> which should be like this,</p>
<pre><code>c = [a*p, a*q, a*r, b*p, b*q, b*r]
</code></pre>
<p>However <code>x*y</code> gives the following error,</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (2,) (3,)
</code></pre>
<p>I can do something like this,</p>
<pre><code>for i in range(len(x)):
for t in range(len(y)):
        c.append(x[i] * y[t])
</code></pre>
<p>But really the length of my <code>x</code> and <code>y</code> is quite large so what's the most efficient way to make such a multiplication without the looping.</p>
| 0 | 2016-08-18T19:32:53Z | 39,026,864 | <p>Use Numpy dot...</p>
<pre><code>>>> import numpy as np
>>> a=np.arange(1,3)# [1,2]
>>> b=np.arange(1,4)# [1,2,3]
>>> np.dot(a[:,None],b[None])
array([[1, 2, 3],
[2, 4, 6]])
>>> np.dot(a[:,None],b[None]).ravel()
array([1, 2, 3, 2, 4, 6])
>>>
</code></pre>
| 0 | 2016-08-18T20:23:04Z | [
"python",
"numpy"
] |
Get the first and last times for each day with Python datetime | 39,026,297 | <p>I have a list of timestamps in this format: '2016-08-01 13:02:57' or "%Y-%m-%d-%H-%M-%S-%f"</p>
<p>I would like to get the first and last time for each day. So if there were two days, 8/1 and 7/29, the function would return 4 values. For example:</p>
<pre><code>8/1
first: '2016-08-01 13:02:57'
last: '2016-08-01 13:08:44'
7/29
first: '2016-07-29 14:34:02'
last: '2016-07-29 14:37:35'
</code></pre>
<p>The first time is the one that occurs first in that day, the last time is the one that occurs last in that day.</p>
 | 2 | 2016-08-18T19:40:46Z | 39,026,419 | <p>To sensibly group your data I would probably use a dictionary in the following fashion, first splitting each string into its date half and time half.</p>
<pre><code>d = dict()
for item in L:
    date, time = item.split()
    if date not in d:
        d[date] = [time]
    else:
        d[date].append(time)
</code></pre>
<p>Then you have a dict mapping each date to a list of times. From there it is trivial to use min(list) and max(list) to get the earliest and latest times.</p>
| -1 | 2016-08-18T19:48:40Z | [
"python",
"datetime"
] |
Get the first and last times for each day with Python datetime | 39,026,297 | <p>I have a list of timestamps in this format: '2016-08-01 13:02:57' or "%Y-%m-%d-%H-%M-%S-%f"</p>
<p>I would like to get the first and last time for each day. So if there were two days, 8/1 and 7/29, the function would return 4 values. For example:</p>
<pre><code>8/1
first: '2016-08-01 13:02:57'
last: '2016-08-01 13:08:44'
7/29
first: '2016-07-29 14:34:02'
last: '2016-07-29 14:37:35'
</code></pre>
<p>The first time is the one that occurs first in that day, the last time is the one that occurs last in that day.</p>
| 2 | 2016-08-18T19:40:46Z | 39,026,515 | <p>Group by year-month-day then get the min and max:</p>
<pre><code>from collections import defaultdict
d = defaultdict(list)
dates = ['2016-08-01 13:02:54',............]
for dte in dates:
key, _ = dte.split()
d[key].append(dte)
for k,v in d.items():
print(min(v), max(v))
</code></pre>
<p>Because of the date formats you don't need to convert to datetimes, lexicographical comparison will work fine. You could make a function that does the min and max in one loop but it may not be as fast as the builtins.</p>
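<p>A quick check that plain string <code>min</code>/<code>max</code> agrees with chronological order for this zero-padded format:</p>

```python
stamps = ['2016-08-01 13:08:44', '2016-08-01 13:02:57']

print(min(stamps))  # earliest timestamp
print(max(stamps))  # latest timestamp
```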
| 1 | 2016-08-18T19:54:56Z | [
"python",
"datetime"
] |
Get the first and last times for each day with Python datetime | 39,026,297 | <p>I have a list of timestamps in this format: '2016-08-01 13:02:57' or "%Y-%m-%d-%H-%M-%S-%f"</p>
<p>I would like to get the first and last time for each day. So if there were two days, 8/1 and 7/29, the function would return 4 values. For example:</p>
<pre><code>8/1
first: '2016-08-01 13:02:57'
last: '2016-08-01 13:08:44'
7/29
first: '2016-07-29 14:34:02'
last: '2016-07-29 14:37:35'
</code></pre>
<p>The first time is the one that occurs first in that day, the last time is the one that occurs last in that day.</p>
 | 2 | 2016-08-18T19:40:46Z | 39,026,552 | <p>A lexical compare with your datetime format gives the min and max dates directly. So you simply have to group all datetimes with the same date into one list each:</p>
<pre><code>from collections import defaultdict
dates = ['2016-08-01 13:02:57', '2016-08-01 13:08:44', ...]
dates_and_times = defaultdict(list)
for date in dates:
d, t = date.split()
dates_and_times[d].append(t)
for date, times in dates_and_times.items():
print(date, min(times))
print(date, max(times))
</code></pre>
| 1 | 2016-08-18T19:58:03Z | [
"python",
"datetime"
] |
How to merge rows in the array Python3, csv | 39,026,322 | <p>sample data:</p>
<pre>
id, Name, mail, data1, data2, data3
1, Name1, mail@com, abc, 14, de
1, Name1, mail@com, fgh, 25, kl
1, Name1, mail@com, mno, 38, pq
2, Name2, mail@com, abc, 14, d
</pre>
<p>I wrote a script that uses the first field as a unique string to clear the duplicates. However, since the data in the fields data1-3 are not repeated, it is necessary to produce this result:</p>
<blockquote>
<p>1, Name1, mail@com, "abc, 14, de, fgh, 25, kl, mno, 38, pq"</p>
</blockquote>
<p>How can I merge rows in the array?
My code does not work:</p>
<pre><code>import sys
import csv
in_fln = sys.argv[1]
# You can replace here and choose any delimiter:
csv.register_dialect('dlm', delimiter=',')
csv.register_dialect('dmt', delimiter=';')
# if this .csv file do:
if (in_fln[-3:]) == "csv":
out_fln = 'out' + in_fln
inputf = open(in_fln, 'r')
seen = []
outfile = []
nout = {}
#rowun = []
try:
reader = csv.reader(inputf, dialect='dlm')
# select by ContactID
for row in reader:
if row[0] not in seen:
#IT'S work byt temp comment
#rowun = '"' + (row[-4]) + ', ' + (row[-3]) + ', ' + (row[-2]) + '"'
#outfile.append(row[:-5]+[rowun])
outfile.append(row[:-4])
rowun = (row[0])
nout[rowun] = (row[-4:-1])
seen.append(row[0])
print (type(row))
else:
#rowun = '"' + (row[-4]) + ', ' + (row[-3]) + ', ' + (row[-2]) + '"'
#nout.insert(-1,(row[-4:-1]))
print (type(row))
rowun = (row[0])
rowun2 = {rowun:(row[-4:-1])}
nout.update(rowun2)
finally:
#print (nout)
#print (outfile[:-1])
#csv.writer(open(('nout' + in_fln), 'w', newline='')).writerows(nout)
csv.writer(open(out_fln, 'w', newline=''), dialect='dlm').writerows(outfile)
inputf.close()
print ("All done")
</code></pre>
| 0 | 2016-08-18T19:42:14Z | 39,028,000 | <p>This should do the trick.</p>
<pre><code>from collections import defaultdict
import pandas as pd
# recreate your example
df = pd.DataFrame([[1, 'Name1', 'mail@com', 'abc', 14, 'de'],
[1, 'Name1', 'mail@com', 'fgh', 25, 'kl'],
[1, 'Name1', 'mail@com', 'mno', 38, 'pq'],
[2, 'Name2', 'mail@com', 'abc', 14, 'd']
], columns=['id', 'Name', 'mail', 'data1', 'data2','data3'])
res = defaultdict(list)
for ind, row in df.iterrows():
key = (row['id'], row['Name'], row['mail'])
value = (row['data1'], row['data2'], row['data3'])
res[key].append(value)
for key, value in res.items():
print(key, value)
# gives
# (2, 'Name2', 'mail@com') [('abc', 14, 'd')]
# (1, 'Name1', 'mail@com') [('abc', 14, 'de'), ('fgh', 25, 'kl'), ('mno', 38, 'pq')]
</code></pre>
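<p>To then get the quoted single-field layout asked for in the question, the grouped tuples can be flattened and joined (a sketch; the sample dict below is a hand-written stand-in for <code>res</code> above):</p>

```python
res = {(1, 'Name1', 'mail@com'): [('abc', 14, 'de'), ('fgh', 25, 'kl')]}

lines = []
for (cid, name, mail), values in res.items():
    # flatten the list of tuples into one comma-separated string
    flat = ', '.join(str(v) for tup in values for v in tup)
    lines.append('{}, {}, {}, "{}"'.format(cid, name, mail, flat))

print(lines[0])
```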
| 0 | 2016-08-18T21:47:13Z | [
"python",
"arrays",
"csv"
] |
How to merge rows in the array Python3, csv | 39,026,322 | <p>sample data:</p>
<pre>
id, Name, mail, data1, data2, data3
1, Name1, mail@com, abc, 14, de
1, Name1, mail@com, fgh, 25, kl
1, Name1, mail@com, mno, 38, pq
2, Name2, mail@com, abc, 14, d
</pre>
<p>I wrote a script that uses the first field as a unique string to clear the duplicates. However, since the data in the fields data1-3 are not repeated, it is necessary to produce this result:</p>
<blockquote>
<p>1, Name1, mail@com, "abc, 14, de, fgh, 25, kl, mno, 38, pq"</p>
</blockquote>
<p>How can I merge rows in the array?
My code does not work:</p>
<pre><code>import sys
import csv
in_fln = sys.argv[1]
# You can replace here and choose any delimiter:
csv.register_dialect('dlm', delimiter=',')
csv.register_dialect('dmt', delimiter=';')
# if this .csv file do:
if (in_fln[-3:]) == "csv":
out_fln = 'out' + in_fln
inputf = open(in_fln, 'r')
seen = []
outfile = []
nout = {}
#rowun = []
try:
reader = csv.reader(inputf, dialect='dlm')
# select by ContactID
for row in reader:
if row[0] not in seen:
#IT'S work byt temp comment
#rowun = '"' + (row[-4]) + ', ' + (row[-3]) + ', ' + (row[-2]) + '"'
#outfile.append(row[:-5]+[rowun])
outfile.append(row[:-4])
rowun = (row[0])
nout[rowun] = (row[-4:-1])
seen.append(row[0])
print (type(row))
else:
#rowun = '"' + (row[-4]) + ', ' + (row[-3]) + ', ' + (row[-2]) + '"'
#nout.insert(-1,(row[-4:-1]))
print (type(row))
rowun = (row[0])
rowun2 = {rowun:(row[-4:-1])}
nout.update(rowun2)
finally:
#print (nout)
#print (outfile[:-1])
#csv.writer(open(('nout' + in_fln), 'w', newline='')).writerows(nout)
csv.writer(open(out_fln, 'w', newline=''), dialect='dlm').writerows(outfile)
inputf.close()
print ("All done")
</code></pre>
 | 0 | 2016-08-18T19:42:14Z | 39,028,833 | <p>My own version is very close to the better one:</p>
<p>Now it all works!</p>
<pre><code>#!/usr/bin/env python3
import csv, re
import os, sys
in_fln = sys.argv[1]
# You can replace here and choose any delimiter:
#csv.register_dialect('dlm', delimiter=',')
dm = ','
seen = []
# if this .csv file do:
if (in_fln[-3:]) == "csv":
out_fln = 'out' + in_fln
#create the full structure: output_rows
infile = csv.reader(open(in_fln, 'r'), delimiter=dm, quotechar='"')
output_rows = []
for row in infile:
a = 0
if row[0] not in seen:
seen.append(row[0])
output_rows.append(row[:-4])
#rowun = '"' + row[-4] + ', ' + row[-3] + ', ' + row[-2] + '"'
rowun = row[-4] + ', ' + row[-3] + ', ' + row[-2]
output_rows.append([rowun])
else:
#output_rows.append([row[-4], row[-3], row[-2]])
#rowun = '"' + row[-4] + ', ' + row[-3] + ', ' + row[-2] + '"'
rowun = row[-4] + ', ' + row[-3] + ', ' + row[-2]
#output_rows.insert(-1,[rowun])
#rowun = str(rowun)
#print (rowun)
output_rows[-1].append(rowun)
#Finally save it to a file
csv.writer(open(out_fln, 'w', newline=''), delimiter=dm, quotechar='"').writerows(output_rows)
chng = [
['","',','], # chng "," on ,
['\n"',',"'], # Del new str
]
input_file = open(out_fln).read()
output_file = open(out_fln,'w')
for string in chng:
input_file = re.sub(str(string[0]),str(string[1]),input_file)
output_file.write(input_file)
output_file.close()
print ("All done")
</code></pre>
| 0 | 2016-08-18T23:09:53Z | [
"python",
"arrays",
"csv"
] |
Setting variables in a function (subfunction) from a file with a variable filename | 39,026,413 | <p>I have a function that has a subfunction and I would like to initialize variables inside of the subfunction. Simply,</p>
<pre><code>file = "foo"
def outerFunction():
def innerFunction():
var1 = 0
var2 = 1
var3 = 2
print file, var1, var2, var3
file = "bar"
innerFunction()
outerFunction()
</code></pre>
<p>Output: <code>bar 0 1 2</code></p>
<p>However, I have quite a large number of variables in multiple different files and I would like to simply import them into the subfunction when the subfunction is called. Assume I have a file <code>bar.py</code> with the following contents:</p>
<pre><code>var1=0
var2=1
var3=2
</code></pre>
<p>Then I change my code to be </p>
<pre><code>file = "foo"
def outerFunction():
def innerFunction():
from file import *
print file, var1, var2, var3
file = "bar"
innerFunction()
outerFunction()
</code></pre>
<p>This is going to result in an error because Python 2.7 doesn't like it when you use <code>import</code> within a subfunction. So, instead, we can use the <code>__import__</code> function directly:</p>
<pre><code>file = "foo"
def outerFunction():
def innerFunction():
__import__(file)
print file, var1, var2, var3
file = "bar"
innerFunction()
outerFunction()
</code></pre>
<p>The import method here works, but the variables don't ever actually make it to the subfunction's variable list, resulting in an error when I go to print them out. Now, I know that I can change the code to</p>
<pre><code>file = "foo"
def outerFunction():
def innerFunction():
f = __import__(file)
print file, f.var1, f.var2, f.var3
file = "bar"
innerFunction()
outerFunction()
</code></pre>
<p>and it will work peachy. That is not the solution I am looking for, though. I want to be able to import those functions without changing the rest of my code to accommodate. Are there any better solutions to this problem?</p>
| 1 | 2016-08-18T19:48:18Z | 39,027,992 | <p>From a maintainability perspective, importing the module and then referencing the variables of the module is the best option. It's easy to see where the variables have come from. </p>
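<p>A minimal illustration of that recommended pattern (using the stdlib <code>math</code> module as a stand-in for a module whose name is held in a variable):</p>

```python
from importlib import import_module

module_name = "math"              # module name held in a variable
mod = import_module(module_name)  # same idea as f = __import__(file)
print(mod.pi)                     # variables referenced explicitly via the module
```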
<p><code>from module import *</code> is frowned upon because it makes it hard to trace where variables have come from, especially if multiple modules contain variables with the same name. <code>from module import *</code> is impossible within a function because of how local variables work in Python. Local variables are stored on the stack, and if Python doesn't know how many variables there will be then it can't allocate space on the stack properly.</p>
<p>However, that said, you can do what you're asking. But you shouldn't. The idea behind this is to leave the variables referenced as globals, and update your globals accordingly when the function starts, e.g.:</p>
<pre><code>from importlib import import_module
file_ = "foo"
def outer():
def inner():
m = import_module(file_) # import module whose name is the contents of file_
globals().update(m.__dict__) # add module's globals to our own
        print(file_, var1, var2, var3)
file_ = "bar"
inner()
outer()
</code></pre>
<p>This is a terrible idea as each inner function will share the same set of globals and may possible overwrite variables with the same name. This is will cause problems if <code>inner</code> functions are run concurrently, or if they overwrite variables needed by the module that defined <code>outer</code>. It is possible to overcome this by modifying each <code>inner</code> function to use its own unique set of globals, but this is a horrible hack, and mainly included to show how this is possible.</p>
<pre><code>from importlib import import_module
from types import FunctionType
file_ = "foo"
def outer():
def inner():
m = import_module(file_)
globals().update(m.__dict__)
        print(file_, var1, var2, var3)
# redefine inner with new globals
inner = FunctionType(inner.__code__,
dict(import_module=import_module, __builtins__=__builtins__), # new globals
inner.__name__, inner.__defaults__, inner.__closure__
)
file_ = "bar"
inner()
outer()
</code></pre>
| 0 | 2016-08-18T21:46:38Z | [
"python",
"python-2.7",
"import",
"python-import"
] |
Python Eclipse Code Analysis ignore doesn't seem to work | 39,026,418 | <p>I use Eclipse with PyDev. I have some tweaks that need to be made to Pep8. I have monkeyed around with this and thought I made some progress, but I really don't know how this works. </p>
<p>Current problem is that I would like to get PyDev warnings to stop coming up when I don't match pep8 indents. I use 2 spaces instead of 4. The weird thing is that I don't get them in every case. It seems like I successfully turned them off in some places, but not others. I would like to turn off E121. I have a pylint file. </p>
<ol>
<li>I have tried --ignore in the PYDev->Editor->CodeAnalysis settings. </li>
<li>I have tried turning off/on PyDev->Editor->CodeStyle->CodeFormatter.</li>
<li>I have tried right clicking the container folder and doing PyDev->RemoveErrorMarkers, but they come back when the analysis runs again. </li>
</ol>
<p>What else can I try? How can I narrow this down? Would someone please give mye some insight into how this works?</p>
<p>Any help is appreciated. Thanks. </p>
| 1 | 2016-08-18T19:48:39Z | 39,032,174 | <p>I have configured pep8 using a <strong>setup.cfg</strong> file in my projects. It resides in parallel to the src folder and contains this:</p>
<pre><code>[pep8]
max-line-length=100
# pep8 1.6.2 wants all includes at the top:
ignore=E402
</code></pre>
<p>This way, I get the intended results even if I use pep8 on the command line or from any other editor. You would have to change the ignore line to match the error code you want to suppress.</p>
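<p>For the situation in the question (2-space indents, suppressing E121), the section might look like the sketch below. The extra codes E111/E114 ("indentation is not a multiple of four") are an assumption about what pep8 reports for 2-space indents, so adjust the list to whatever codes actually show up in your markers:</p>

```
[pep8]
max-line-length=100
# 2-space indents trigger E111/E114; E121 covers continuation-line indentation
ignore=E111,E114,E121
```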
<p>See the output from pep8 --help:</p>
<blockquote>
  <p>Configuration: The project options are read from the [pep8] section of the tox.ini file or the setup.cfg file located in any parent folder of the path(s) being processed. Allowed options are: exclude, filename, select, ignore, max-line-length, hang-closing, count, format, quiet, show-pep8, show-source, statistics, verbose.</p>
</blockquote>
| 0 | 2016-08-19T06:10:30Z | [
"python",
"eclipse",
"pydev"
] |
Creating combination of value list with existing key - Pyspark | 39,026,480 | <p>So my rdd consists of data looking like: </p>
<pre><code>(k, [v1,v2,v3...])
</code></pre>
<p>I want to create a combination of all sets of two for the value part.</p>
<p>So the end map should look like:</p>
<pre><code>(k1, (v1,v2))
(k1, (v1,v3))
(k1, (v2,v3))
</code></pre>
<p>I know to get the value part, I would use something like </p>
<pre><code>rdd.cartesian(rdd).filter(case (a,b) => a < b)
</code></pre>
<p>However, that requires the entire rdd to be passed (right?), not just the value part. I am unsure how to arrive at my desired end; I suspect it's a groupby.</p>
<p>Also, ultimately, I want to get to the k,v looking like </p>
<pre><code>((k1,v1,v2),1)
</code></pre>
<p>I know how to get from what I am looking for to that, but maybe it's easier to go straight there?</p>
<p>Thanks.</p>
| 1 | 2016-08-18T19:52:45Z | 39,026,852 | <p>Use <code>itertools</code> to create the combinations. Here is a demo:</p>
<pre><code>import itertools
k, v1, v2, v3 = 'k1 v1 v2 v3'.split()
a = (k, [v1,v2,v3])
b = itertools.combinations(a[1], 2)
data = [(k, pair) for pair in b]
</code></pre>
<p><code>data</code> will be:</p>
<pre><code>[('k1', ('v1', 'v2')), ('k1', ('v1', 'v3')), ('k1', ('v2', 'v3'))]
</code></pre>
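<p>To reach the <code>((k1,v1,v2),1)</code> shape mentioned in the question, the same pairs can be flattened and tagged with a count in one step. A plain-Python sketch of the per-record transformation (in Spark this would be the body of a <code>flatMap</code>):</p>

```python
import itertools

row = ('k1', ['v1', 'v2', 'v3'])
k, values = row
# one ((key, a, b), 1) record per unordered pair of values
counted = [((k, a, b), 1) for a, b in itertools.combinations(values, 2)]
print(counted)
# [(('k1', 'v1', 'v2'), 1), (('k1', 'v1', 'v3'), 1), (('k1', 'v2', 'v3'), 1)]
```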
| 1 | 2016-08-18T20:22:13Z | [
"python",
"apache-spark",
"mapreduce",
"pyspark"
] |
Creating combination of value list with existing key - Pyspark | 39,026,480 | <p>So my rdd consists of data looking like: </p>
<pre><code>(k, [v1,v2,v3...])
</code></pre>
<p>I want to create a combination of all sets of two for the value part.</p>
<p>So the end map should look like:</p>
<pre><code>(k1, (v1,v2))
(k1, (v1,v3))
(k1, (v2,v3))
</code></pre>
<p>I know to get the value part, I would use something like </p>
<pre><code>rdd.cartesian(rdd).filter(case (a,b) => a < b)
</code></pre>
<p>However, that requires the entire rdd to be passed (right?), not just the value part. I am unsure how to arrive at my desired end; I suspect it's a groupby.</p>
<p>Also, ultimately, I want to get to the k,v looking like </p>
<pre><code>((k1,v1,v2),1)
</code></pre>
<p>I know how to get from what I am looking for to that, but maybe it's easier to go straight there?</p>
<p>Thanks.</p>
 | 1 | 2016-08-18T19:52:45Z | 39,028,398 | <p>I think Israel's answer is incomplete, so I'll go a step further. </p>
<pre><code>import itertools
a = sc.parallelize([
(1, [1,2,3,4]),
(2, [3,4,5,6]),
(3, [-1,2,3,4])
])
def combinations(row):
l = row[1]
k = row[0]
return [(k, v) for v in itertools.combinations(l, 2)]
a.map(combinations).flatMap(lambda x: x).take(3)
# [(1, (1, 2)), (1, (1, 3)), (1, (1, 4))]
</code></pre>
| 1 | 2016-08-18T22:22:32Z | [
"python",
"apache-spark",
"mapreduce",
"pyspark"
] |
What does this strange format string "{[[[]}" do? | 39,026,630 | <p>I come across the below in some code from an ex employee. </p>
<p>the code is not called, from anywhere, but my question is can it actually do something useful as it is?</p>
<pre><code>def xshow(x):
print("{[[[[]}".format(x))
</code></pre>
 | 5 | 2016-08-18T20:04:20Z | 39,026,662 | <p>That is a format string with an empty argument name and an element index: the part between the first <code>[</code> and the closing <code>]</code>, which here is the key <code>[[[</code>. Those indices don't have to be integers. It will print the value stored under that key.</p>
<p>Calling:</p>
<pre><code>xshow({'[[[': 1})
</code></pre>
<p>will print <code>1</code></p>
| 4 | 2016-08-18T20:06:49Z | [
"python",
"string-formatting"
] |
What does this strange format string "{[[[]}" do? | 39,026,630 | <p>I come across the below in some code from an ex employee. </p>
<p>the code is not called, from anywhere, but my question is can it actually do something useful as it is?</p>
<pre><code>def xshow(x):
print("{[[[[]}".format(x))
</code></pre>
| 5 | 2016-08-18T20:04:20Z | 39,058,456 | <p>One can use the interactive interpreter to investigate something like this experimentally.</p>
<pre><code>>>> xshow(None)
Traceback (most recent call last):
File "<pyshell#12>", line 1, in <module>
xshow(None)
File "<pyshell#11>", line 1, in xshow
def xshow(x): print("{[[[[]}".format(x))
TypeError: 'NoneType' object is not subscriptable
# So let us try something subscriptable.
>>> xshow([])
Traceback (most recent call last):
File "<pyshell#13>", line 1, in <module>
xshow([])
File "<pyshell#11>", line 1, in xshow
def xshow(x): print("{[[[[]}".format(x))
TypeError: list indices must be integers or slices, not str
# That did not work, try something else.
>>> xshow({})
Traceback (most recent call last):
File "<pyshell#14>", line 1, in <module>
xshow({})
File "<pyshell#11>", line 1, in xshow
def xshow(x): print("{[[[[]}".format(x))
KeyError: '[[['
# Aha! Try a dict with key '[[['.
>>> xshow({'[[[':1})
1
</code></pre>
<p>Now maybe go read the doc.</p>
| 0 | 2016-08-20T20:30:24Z | [
"python",
"string-formatting"
] |
How do you add a custom column to a query in sqlalchemy if you are using the classical mapping? | 39,026,819 | <p>Some caveats:</p>
<ol>
<li>Our models are generated from yaml file, and we don't have a way to add a computed property via the yaml file at this time (we will eventually)</li>
<li>Given the above, we don't have models to add hybrid properties to.</li>
</ol>
<p>What is the proper way to add a computed column to a query, such that it is included in the returned object, and not as a separate entry in a returned tuple?</p>
<pre><code>db_query = session.query(Listing)
db_query = db_query.add_column("1 as listing_test")
print(self.db_query)
## this looks good.
# SELECT listing.id AS listing_id, 1 as listing_test
# FROM listing
# WHERE listing.id = :id_1
self.db_query.all()
## this looks bad.
# [(<flask_sqlalchemy.Lease object at 0x7fca868ff1d0>, 1)]
self.db_query.column_descriptions
[{'aliased': False,
 'entity': <class 'flask_sqlalchemy.Lease'>,
 'expr': <class 'flask_sqlalchemy.Lease'>,
 'name': 'Lease',
 'type': <class 'flask_sqlalchemy.Lease'>},
{'aliased': False,
 'entity': None,
 'expr': '1 as listing_test',
 'name': '1 as listing_test',
 'type': NullType()}]
</code></pre>
<p>Ideally, instead of the tuple that is currently returned, the listing_test would just be part of the returned listing object.</p>
| 0 | 2016-08-18T20:19:31Z | 39,028,095 | <p>@univerio got me very close!</p>
<p>I had to set it as a column property, and then all was well.</p>
<pre><code>from sqlalchemy import func
from sqlalchemy.orm import column_property
class Listing(db.Model):
    custom_function = column_property(func.custom_function(id))
</code></pre>
| 0 | 2016-08-18T21:54:27Z | [
"python",
"sqlalchemy",
"flask-sqlalchemy"
] |
Tkinter window is blank when running | 39,026,823 | <p>When I run my tkinter code for measuring the temperature with Adafruit. When I run my code tkinter opens a window but nothing appears on the window. I have used tkinter a bit before and I have had what is supposed to appear appear but just not in this particular code.</p>
<pre><code>#!/usr/bin/python
# -*- coding: latin-1 -*-
import Adafruit_DHT as dht
import time
from Tkinter import *
root = Tk()
k= StringVar()
num = 1
thelabel = Label(root, textvariable=k)
thelabel.pack
def READ():
h,t = dht.read_retry(dht.DHT22, 4)
newtext = "Temp=%s*C Humidity=%s" %(t,h)
k.set(str(newtext))
print newtext #I added this line to make sure that newtext actually had the values I wanted
def read30seconds():
READ()
root.after(30000, read30seconds)
read30seconds()
root.mainloop()
</code></pre>
<p>And to clarify, the print line in READ does run every 30 seconds as intended.</p>
| 4 | 2016-08-18T20:19:40Z | 39,027,044 | <p>it is because you are not packing it in the window but you are printing it in the python shell.</p>
<p>you should replace that <code>print newtext</code> with:</p>
<pre><code>w = Label(root, text=newtext)
w.pack()
</code></pre>
<p>a working code should look like this:</p>
<pre><code>#!/usr/bin/python
# -*- coding: latin-1 -*-
import Adafruit_DHT as dht
import time
from Tkinter import *
root = Tk()
k= StringVar()
num = 1
thelabel = Label(root, textvariable=k)
thelabel.pack
def READ():
h,t = dht.read_retry(dht.DHT22, 4)
newtext = "Temp=%s*C Humidity=%s" %(t,h)
k.set(str(newtext))
w = Label(root, text=newtext)
w.pack()
def read30seconds():
READ()
root.after(30000, read30seconds)
read30seconds()
root.mainloop()
</code></pre>
<p>Note that this is very basic code, graphically speaking.
To learn more about this topic visit this <a href="http://effbot.org/tkinterbook/label.htm" rel="nofollow">tkinter label tutorial</a>,
and to learn more about tkinter itself visit this <a href="http://effbot.org/tkinterbook/" rel="nofollow">introduction to tkinter</a>.</p>
<p>If you want the label to be overwritten every time it is refreshed, you should use the <code>destroy()</code> method to delete and then replace the Label like this:</p>
<pre><code>#!/usr/bin/python
# -*- coding: latin-1 -*-
import Adafruit_DHT as dht
import time
from Tkinter import *
root = Tk()
k= StringVar()
num = 1
thelabel = Label(root, textvariable=k)
thelabel.pack
def READ():
    global w
    h,t = dht.read_retry(dht.DHT22, 4)
    newtext = "Temp=%s*C Humidity=%s" %(t,h)
    k.set(str(newtext))
    w = Label(root, text=newtext) # recreate the label with the fresh reading
    w.pack()
def read30seconds():
    try: w.destroy() # remove the previous reading's label, if any
    except NameError: pass
    READ()
    root.after(30000, read30seconds)
read30seconds()
root.mainloop()
</code></pre>
| 4 | 2016-08-18T20:34:46Z | [
"python",
"tkinter",
"raspberry-pi",
"adafruit"
] |
How to use Graphene GraphQL framework with Django REST Framework authentication | 39,026,831 | <p>I got some REST API endpoints in Django and I'd like to use <a href="http://www.django-rest-framework.org/api-guide/authentication/" rel="nofollow">the same authentication</a> for Graphene. The <a href="http://graphene-python.org/docs/django/tutorial/" rel="nofollow">documentation</a> does not provide any guidance.</p>
| 2 | 2016-08-18T20:19:56Z | 39,026,832 | <p>For example, if you are using <code>authentication_classes = (TokenAuthentication,)</code> in your API views, you should add an endpoint to a GraphQLView decorated in this way:</p>
<p><strong>urls.py:</strong></p>
<pre><code># ...
from rest_framework.authentication import TokenAuthentication
from rest_framework.permissions import IsAuthenticated
from rest_framework.decorators import authentication_classes, permission_classes, api_view
def graphql_token_view():
view = GraphQLView.as_view(schema=schema)
view = permission_classes((IsAuthenticated,))(view)
view = authentication_classes((TokenAuthentication,))(view)
view = api_view(['POST'])(view)
return view
urlpatterns = [
# ...
url(r'^graphql_token', graphql_token_view()),
url(r'^graphql', csrf_exempt(GraphQLView.as_view(schema=schema))),
url(r'^graphiql', include('django_graphiql.urls')),
# ...
</code></pre>
<p>Note that we added a new <code>^graphql_token</code> endpoint and kept the original <code>^graphql</code> which is used by the GraphiQL tool.</p>
<p>Then, you should set the <code>Authorization</code> header in your GraphQL client and point to the <code>graphql_token</code> endpoint.</p>
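<p>On the client side this just means sending the DRF token with every request. A sketch using only the standard library; the URL and token value are placeholders:</p>

```python
import json
import urllib.request

# hypothetical endpoint and token -- substitute your own
req = urllib.request.Request(
    "http://localhost:8000/graphql_token",
    data=json.dumps({"query": "{ viewer { id } }"}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Token <your-token>",
    },
)
# urllib.request.urlopen(req)  # would POST the query with the auth header
print(req.get_header("Authorization"))
```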
| 1 | 2016-08-18T20:19:56Z | [
"python",
"django",
"rest",
"authentication",
"graphql"
] |
How to configure Django to access remote MySQL db django.contrib.sites.RequestSite module missing | 39,026,950 | <p>I'm trying to set up a django app that connects to a remote MySQL db. I currently have Django==1.10 and MySQL-python==1.2.5 installed in my venv. In settings.py I have added the following to the DATABASES variable:</p>
<pre><code>'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'db_name',
'USER': 'db_user',
'PASSWORD': 'db_password',
'HOST': 'db_host',
'PORT': 'db_port',
}
</code></pre>
<p>I get the error </p>
<pre><code>from django.contrib.sites.models import RequestSite
</code></pre>
<p>when I run python manage.py migrate</p>
<p>I am a complete beginner when it comes to django. Is there some step I am missing?</p>
<p>edit: I have also installed mysql-connector-c via brew install
edit2: realized I just need to connect to a db by importing MySQLdb into a file. sorry for the misunderstanding. </p>
 | 0 | 2016-08-18T20:28:18Z | 39,027,272 | <p>This is the documentation you should look at for connecting to databases and how Django does it. From what you gave us, as long as your parameters are correct you should be connecting to the database; a good confirmation of this would be the inspectdb tool.</p>
<p><a href="https://docs.djangoproject.com/en/1.10/ref/databases/" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/databases/</a></p>
<p>This shows you how to import your models from the sql database:</p>
<p><a href="https://docs.djangoproject.com/en/1.10/howto/legacy-databases/" rel="nofollow">https://docs.djangoproject.com/en/1.10/howto/legacy-databases/</a></p>
<p>To answer your question in the comments </p>
<p>"
Auto-generate the models</p>
<p>Django comes with a utility called inspectdb that can create models by introspecting an existing database. You can view the output by running this command:</p>
<pre><code>python manage.py inspectdb
</code></pre>
<p>Save this as a file by using standard Unix output redirection:</p>
<pre><code>python manage.py inspectdb > models.py
</code></pre>
<p>"</p>
| 0 | 2016-08-18T20:51:07Z | [
"python",
"mysql",
"django"
] |
How to configure Django to access remote MySQL db django.contrib.sites.RequestSite module missing | 39,026,950 | <p>I'm trying to set up a django app that connects to a remote MySQL db. I currently have Django==1.10 and MySQL-python==1.2.5 installed in my venv. In settings.py I have added the following to the DATABASES variable:</p>
<pre><code>'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'db_name',
'USER': 'db_user',
'PASSWORD': 'db_password',
'HOST': 'db_host',
'PORT': 'db_port',
}
</code></pre>
<p>I get the error </p>
<pre><code>from django.contrib.sites.models import RequestSite
</code></pre>
<p>when I run python manage.py migrate</p>
<p>I am a complete beginner when it comes to django. Is there some step I am missing?</p>
<p>edit: I have also installed mysql-connector-c via brew install
edit2: realized I just need to connect to a db by importing MySQLdb into a file. sorry for the misunderstanding. </p>
| 0 | 2016-08-18T20:28:18Z | 39,027,295 | <p>The error you're seeing has nothing to do with your database settings (assuming your real code has the actual database name, username, and password) or connection. You are not importing the RequestSite from the correct spot.</p>
<p>Change (wherever you have this set) from:</p>
<pre><code>from django.contrib.sites.models import RequestSite
</code></pre>
<p>to:</p>
<pre><code>from django.contrib.sites.requests import RequestSite
</code></pre>
| 1 | 2016-08-18T20:52:25Z | [
"python",
"mysql",
"django"
] |
How can I scale a pyplot colorbar so that contrast is seen in my scatter points? | 39,026,969 | <p>I'm making a scatter plot with <code>x</code> and <code>y</code> values, and a <code>z</code> array that I'm using to define point color as so:</p>
<p><code>plt.scatter(x, y, c = z)</code></p>
<p>However, <code>max(z) ~ 110</code>, but <code>mean(z) ~ 20</code>. So since most <code>z</code> values are around 20 or below, but the max is around 110, most of the points look dark blue and it's hard to tell a difference.</p>
<p>I tried to use a log colorbar, but since my <code>z</code> range is almost exactly from 10-100, it barely helps:</p>
<p><a href="http://i.stack.imgur.com/QOKfK.png" rel="nofollow"><img src="http://i.stack.imgur.com/QOKfK.png" alt="color coded"></a></p>
<p>Is there a way to use a normal color scale up to a certain value? Like have the color bar go from Blue - Red from <code>10 - 40</code>, and then have everything above that just be red?</p>
| 2 | 2016-08-18T20:29:31Z | 39,027,202 | <p>Try this:</p>
<pre><code>plt.scatter(x, y, c = z, vmin=10, vmax=40)
</code></pre>
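<p>With <code>vmin</code>/<code>vmax</code> set, everything above 40 is mapped to the top of the colormap. The <code>Normalize</code> object behind these parameters makes that easy to verify (a small sketch, assuming matplotlib is installed):</p>

```python
from matplotlib.colors import Normalize

norm = Normalize(vmin=10, vmax=40, clip=True)
print(norm(5), norm(25), norm(110))
# 0.0 0.5 1.0 -- every z at or above vmax maps to the same top colour
```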
| 1 | 2016-08-18T20:46:03Z | [
"python",
"matplotlib",
"colorbar"
] |
How can I scale a pyplot colorbar so that contrast is seen in my scatter points? | 39,026,969 | <p>I'm making a scatter plot with <code>x</code> and <code>y</code> values, and a <code>z</code> array that I'm using to define point color as so:</p>
<p><code>plt.scatter(x, y, c = z)</code></p>
<p>However, <code>max(z) ~ 110</code>, but <code>mean(z) ~ 20</code>. So since most <code>z</code> values are around 20 or below, but the max is around 110, most of the points look dark blue and it's hard to tell a difference.</p>
<p>I tried to use a log colorbar, but since my <code>z</code> range is almost exactly from 10-100, it barely helps:</p>
<p><a href="http://i.stack.imgur.com/QOKfK.png" rel="nofollow"><img src="http://i.stack.imgur.com/QOKfK.png" alt="color coded"></a></p>
<p>Is there a way to use a normal color scale up to a certain value? Like have the color bar go from Blue - Red from <code>10 - 40</code>, and then have everything above that just be red?</p>
| 2 | 2016-08-18T20:29:31Z | 39,027,348 | <p>You can use the <code>extend</code> argument on the matplotlib colorbar as</p>
<pre><code>plt.scatter(x, y, c = z, vmin=10, vmax=40)
plt.colorbar(extend="max")
</code></pre>
<p>For detailed examples, check <a href="http://matplotlib.org/examples/pylab_examples/contourf_demo.html" rel="nofollow">this link</a> out.</p>
| 1 | 2016-08-18T20:54:58Z | [
"python",
"matplotlib",
"colorbar"
] |
how to define a set of variables so the program responds if any of the items in a list is called, python3 | 39,027,038 | <p>I created a list so that if someone inputs any item from the list, it will print the corresponding output. </p>
<pre><code>lista = ["example", "example2"]
listb = ["exampleb", "example2b"]
choice = input()
if choice == lista[]:
print("outputa")
elif choice == listb[]:
print("outputb")
</code></pre>
<p>if the user types either example or example2 it will print outputa but if the user types exampleb or example2b it will print outputb.
Thanks</p>
| 1 | 2016-08-18T20:34:17Z | 39,027,085 | <p>Use:</p>
<pre><code>if choice in lista:
print("a")
elif choice in listb:
print("b")
</code></pre>
| 2 | 2016-08-18T20:36:55Z | [
"python",
"list",
"if-statement",
"input",
"set"
] |
how to define a set of variables so the program responds if any of the items in a list is called, python3 | 39,027,038 | <p>I created a list so that if someone inputs any item from the list, it will print the corresponding output. </p>
<pre><code>lista = ["example", "example2"]
listb = ["exampleb", "example2b"]
choice = input()
if choice == lista[]:
print("outputa")
elif choice == listb[]:
print("outputb")
</code></pre>
<p>if the user types either example or example2 it will print outputa but if the user types exampleb or example2b it will print outputb.
Thanks</p>
| 1 | 2016-08-18T20:34:17Z | 39,027,088 | <pre><code>lista = ["example", "example2"]
listb = ["exampleb", "example2b"]
choice = input()
if choice in lista:
print("outputa")
elif choice in listb:
print("outputb")
</code></pre>
| 4 | 2016-08-18T20:37:15Z | [
"python",
"list",
"if-statement",
"input",
"set"
] |
how to define a set of variables so the program responds if any of the items in a list is called, python3 | 39,027,038 | <p>I created a list so that if someone inputs any item from the list, it will print the corresponding output. </p>
<pre><code>lista = ["example", "example2"]
listb = ["exampleb", "example2b"]
choice = input()
if choice == lista[]:
print("outputa")
elif choice == listb[]:
print("outputb")
</code></pre>
<p>if the user types either example or example2 it will print outputa but if the user types exampleb or example2b it will print outputb.
Thanks</p>
| 1 | 2016-08-18T20:34:17Z | 39,027,149 | <p>Have you tried using the <code>in</code> operator <a href="https://docs.python.org/3/reference/datamodel.html#object.__contains__" rel="nofollow">in</a> Python? Such that you'd write:</p>
<pre><code>if choice in list_a:
print("Output A")
elif choice in list_b:
print("Output B")
</code></pre>
| 1 | 2016-08-18T20:42:25Z | [
"python",
"list",
"if-statement",
"input",
"set"
] |
pandas - Replace one letter in a string with its capital | 39,027,072 | <p>I have a column with some names that need capitalizing in the middle of the word, like Mcgill to McGill, Mcneill to McNeill, O'donnell to O'Donnell, etc.</p>
<p>I know some text editors can do this by prepending the captured group with <code>\U</code> but this doesn't work in pandas.</p>
<p>Here's what I tried. Is this even possible?</p>
<pre><code>import pandas as pd
names = pd.Series(["Mcgill", "Mcneill", "O'donnell", "Mctavish"])
names.replace(r'\bMc([a-z])', r'Mc\U$1', inplace=True)
</code></pre>
| 3 | 2016-08-18T20:36:09Z | 39,027,310 | <p>You can use <code>apply()</code> in combination with <code>re.sub()</code>:</p>
<pre><code>import pandas as pd, re
names = pd.Series(["Mcgill", "Mcneill", "O'donnell", "Mctavish"])
def capitalize(name):
rx = re.compile(r'(?:(?<=Mc)|(?<=O\'))([a-z])')
def repl(m):
char = m.group(1)
return char.upper()
return rx.sub(repl,name)
names = names.apply(capitalize)
# 0 McGill
# 1 McNeill
# 2 O'Donnell
# 3 McTavish
</code></pre>
<p>Is this what you were after?</p>
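<p>For reference, the <code>\U</code> trick from text editors corresponds to a replacement function in Python's <code>re</code> module, so the same substitution also works on a plain string without pandas. A small sketch:</p>

```python
import re

def fix(name):
    # upper-case the single letter that follows "Mc" or "O'"
    return re.sub(r"(?:(?<=Mc)|(?<=O'))([a-z])",
                  lambda m: m.group(1).upper(), name)

print([fix(n) for n in ["Mcgill", "Mcneill", "O'donnell", "Mctavish"]])
# ['McGill', 'McNeill', "O'Donnell", 'McTavish']
```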
| 3 | 2016-08-18T20:53:20Z | [
"python",
"regex",
"pandas"
] |
How can I optimize this python code - NO MEMORY? | 39,027,080 | <p>I wrote this python code:</p>
<pre><code>from itertools import product
combo_pack = product("qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890!&+*-_.#@", repeat = 8)
myfile = open("lista_combinazioni.txt","w")
for combo in combo_pack:
combo = "".join(combo)
combo = "%s\n" % (combo)
myfile.write(combo)
myfile.close()
</code></pre>
<p>How can I optimize it? After running for a long time, it crashes because it runs out of memory.</p>
| -2 | 2016-08-18T20:36:40Z | 39,027,141 | <p>What's probably happening is your file buffer is filling up and it isn't flushing to the file, thus using a lot of memory.</p>
<p>Try using this instead:</p>
<pre><code>myfile = open("lista_combinazioni.txt","w",1)
</code></pre>
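<p>A scaled-down sketch of the same idea with a tiny alphabet. (Note that <code>itertools.product</code> itself is lazy, so memory use stays flat regardless of alphabet size; the full 71-character, repeat-8 product is roughly 6 * 10^14 lines, so the practical limit is disk space and time.)</p>

```python
import itertools

# buffering=1 requests line buffering in text mode: each "\n" flushes to disk
with open("lista_demo.txt", "w", 1) as myfile:
    for combo in itertools.product("ab", repeat=2):
        myfile.write("".join(combo) + "\n")

print(open("lista_demo.txt").read())
# aa
# ab
# ba
# bb
```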
| -3 | 2016-08-18T20:41:42Z | [
"python",
"optimization"
] |
Problems installing Python Pymqi package | 39,027,100 | <p>This is my first post on this forum. If I am not complying with protocols, please just let me know.</p>
<pre><code>C:\>python --version
Python 2.7.11
</code></pre>
<p>OS: Windows version 7</p>
<p>WMQ: 8.2</p>
<p>I am trying to install Python <strong>pymqi</strong> package. After couple hours of trying and searching the web for solutions I decided to post this question hoping to get some help. The following is the command I issue and the errors I am getting.</p>
<p><strong>C:>pip install pymqi</strong></p>
<pre><code>Collecting pymqi
Using cached pymqi-1.5.4.tar.gz
Requirement already satisfied (use --upgrade to upgrade): testfixtures in c:\python27\lib\site-packages (from pymqi)
Installing collected packages: pymqi
Running setup.py install for pymqi ... error
Complete output from command c:\python27\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\reyesv~1\\appdata\\local\\temp\\1\\pip
-build-4qqnkt\\pymqi\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --rec
ord c:\users\reyesv~1\appdata\local\temp\1\pip-u2jdz5-record\install-record.txt --single-version-externally-managed --compile:
Building PyMQI client 64bits
running install
running build
running build_py
creating build
creating build\lib.win-amd64-2.7
creating build\lib.win-amd64-2.7\pymqi
copying pymqi\__init__.py -> build\lib.win-amd64-2.7\pymqi
copying pymqi\CMQC.py -> build\lib.win-amd64-2.7\pymqi
copying pymqi\CMQCFC.py -> build\lib.win-amd64-2.7\pymqi
copying pymqi\CMQXC.py -> build\lib.win-amd64-2.7\pymqi
copying pymqi\CMQZC.py -> build\lib.win-amd64-2.7\pymqi
running build_ext
building 'pymqi.pymqe' extension
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
creating build\temp.win-amd64-2.7\Release\pymqi
C:\Users\reyesviloria362048\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DND
EBUG -DPYQMI_SERVERBUILD=0 "-Ic:\Program Files (x86)\IBM\WebSphere MQ\tools\c\include" -Ic:\python27\include -Ic:\python27\PC /Tcpymqi/pymqe.c /Fobuil
d\temp.win-amd64-2.7\Release\pymqi/pymqe.obj
pymqe.c
pymqi/pymqe.c(240) : error C2275: 'MQCSP' : illegal use of this type as an expression
C:\IBM\WebSphere MQ\tools\c\include\cmqc.h(4072) : see declaration of 'MQCSP'
pymqi/pymqe.c(240) : error C2146: syntax error : missing ';' before identifier 'csp'
pymqi/pymqe.c(240) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(240) : error C2059: syntax error : '{'
pymqi/pymqe.c(247) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(247) : error C2224: left of '.AuthenticationType' must have struct/union type
pymqi/pymqe.c(248) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(248) : error C2224: left of '.CSPUserIdPtr' must have struct/union type
pymqi/pymqe.c(249) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(249) : error C2224: left of '.CSPUserIdLength' must have struct/union type
pymqi/pymqe.c(250) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(250) : error C2224: left of '.CSPPasswordPtr' must have struct/union type
pymqi/pymqe.c(251) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(251) : error C2224: left of '.CSPPasswordLength' must have struct/union type
pymqi/pymqe.c(256) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(256) : warning C4133: '=' : incompatible types - from 'int *' to 'PMQCSP'
error: command 'C:\\Users\\reyesviloria362048\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\amd64\\cl.exe' fa
iled with exit status 2
----------------------------------------
Command "c:\python27\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\reyesv~1\\appdata\\local\\temp\\1\\pip-build-4qqnkt\\pymqi\\se
tup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\reyesv~1\ap
pdata\local\temp\1\pip-u2jdz5-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\reyesv~1\a
ppdata\local\temp\1\pip-build-4qqnkt\pymqi\
</code></pre>
| -1 | 2016-08-18T20:38:23Z | 39,027,176 | <p>You need to install <a href="https://www.microsoft.com/en-us/download/details.aspx?id=44266" rel="nofollow">Microsoft Visual C++ compiler for Python 2.7</a></p>
| 1 | 2016-08-18T20:44:40Z | [
"python",
"windows"
] |
Appending miniature lists to a larger list | 39,027,187 | <p>I'm creating a python application to play the game Perudo (or Liar's dice).</p>
<p>I am trying to create a function that calculates all the possible moves that the player (or AI) is allowed to make and returns a list of them so that it can reject illegal ones.</p>
<p>Turns are stored as a list of 2 numbers e.g. [10,6] representing ten sixes.
If the starting variable </p>
<pre><code>currentbid
</code></pre>
<p>is </p>
<pre><code>[19,3]
</code></pre>
<p>(nineteen threes) and there are 20 dice in play, then the only possible moves are 19 fours, 19 fives, 19 sixes, 20 twos, 20 threes, 20 fours, 20 fives and 20 sixes. Calls of ones are not allowed.
The program should output the above as:</p>
<pre><code>[[19,4],[19,5],[19,6],[20,2],[20,3],[20,4],[20,5],[20,6]]
</code></pre>
<p>but instead outputs it as:</p>
<pre><code>[[20,6],[20,6],[20,6],[20,6],[20,6],[20,6],[20,6],[20,6]]
</code></pre>
<p>What am I doing wrong?</p>
<pre><code>def calcpossiblemoves(self, currentbid, totalnoofdice):
self.possiblemoves = []#Create a list of possible moves that will be added too
self.bid = currentbid
while self.bid[0] <= totalnoofdice:
while self.bid[1] < 6:
self.bid[1] += 1
self.possiblemoves.append(self.bid) # <- I think the problem is something to do with this line
print(self.possiblemoves) #For tracking the process
# Increase 1st number, reset 2nd number to 1
self.bid[0] += 1
self.bid[1] = 1 # Which will get increased to 2
#print("Reached 6")
return self.possiblemoves
</code></pre>
 | 3 | 2016-08-18T20:45:07Z | 39,027,358 | <p>You're appending the same object to your list each time, which is why every entry ends up showing the last value.</p>
<p>replace</p>
<pre><code>self.possiblemoves.append(self.bid) # <- I think the problem is something to do with this line
</code></pre>
<p>by</p>
<pre><code>self.possiblemoves.append(list(self.bid)) # make a copy of the list to make it independent from next modifications on self.bid
</code></pre>
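<p>A tiny demonstration of the aliasing problem, independent of the game code:</p>

```python
bid = [19, 3]
moves = []
moves.append(bid)        # stores a reference, not a snapshot
bid[1] = 6
print(moves)             # [[19, 6]] : the stored entry changed too

moves = []
moves.append(list(bid))  # stores an independent copy
bid[1] = 2
print(moves)             # [[19, 6]] : the copy keeps its old value
```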
| 0 | 2016-08-18T20:55:40Z | [
"python",
"list"
] |
Appending miniature lists to a larger list | 39,027,187 | <p>I'm creating a python application to play the game Perudo (or Liar's dice).</p>
<p>I am trying to create a function that calculates all the possible moves that the player (or AI) is allowed to make and returns a list of them so that it can reject illegal ones.</p>
<p>Turns are stored as a list of 2 numbers e.g. [10,6] representing ten sixes.
If the starting variable </p>
<pre><code>currentbid
</code></pre>
<p>is </p>
<pre><code>[19,3]
</code></pre>
<p>(nineteen threes) and there are 20 dice in play, then the only possible moves are 19 fours, 19 fives, 19 sixes, 20 twos, 20 threes, 20 fours, 20 fives and 20 sixes. Calls of ones are not allowed.
The program should output the above as:</p>
<pre><code>[[19,4],[19,5],[19,6],[20,2],[20,3],[20,4],[20,5],[20,6]]
</code></pre>
<p>but instead outputs it as:</p>
<pre><code>[[20,6],[20,6],[20,6],[20,6],[20,6],[20,6],[20,6],[20,6]]
</code></pre>
<p>What am I doing wrong?</p>
<pre><code>def calcpossiblemoves(self, currentbid, totalnoofdice):
    self.possiblemoves = [] #Create a list of possible moves that will be added to
    self.bid = currentbid
    while self.bid[0] <= totalnoofdice:
        while self.bid[1] < 6:
            self.bid[1] += 1
            self.possiblemoves.append(self.bid) # <- I think the problem is something to do with this line
            print(self.possiblemoves) #For tracking the process
        # Increase 1st number, reset 2nd number to 1
        self.bid[0] += 1
        self.bid[1] = 1 # Which will get increased to 2
        #print("Reached 6")
    return self.possiblemoves
</code></pre>
| 3 | 2016-08-18T20:45:07Z | 39,027,426 | <p>You're better off thinking of this as a base-6 number, where for 20 dice you effectively have a range of encoded values from 0 to 126 inclusive.</p>
<pre><code>current_bid = [19, 3]
offset = current_bid[0] * 6 + current_bid[1]
# 117 - and divmod(117, 6) == (19, 3)
</code></pre>
<p>So, to then get the valid bids that are left, you do the maths over the range of 117 - 126 and get the number of dice and remainder that's valid.</p>
<pre><code>valid_bids_with1s = ([n // 6, n % 6 + 1] for n in range(offset, 126))
valid_bids = [[a, b] for a, b in valid_bids_with1s if b != 1]
</code></pre>
<p>Which gives you:</p>
<pre><code>[[19, 4],
[19, 5],
[19, 6],
[20, 2],
[20, 3],
[20, 4],
[20, 5],
[20, 6]]
</code></pre>
| 0 | 2016-08-18T20:59:58Z | [
"python",
"list"
] |
Appending miniature lists to a larger list | 39,027,187 | <p>I'm creating a python application to play the game Perudo (or Liar's dice).</p>
<p>I am trying to create a function that calculates all the possible moves that the player (or AI) is allowed to make and returns a list of them so that it can reject illegal ones.</p>
<p>Turns are stored as a list of 2 numbers e.g. [10,6] representing ten sixes.
If the starting variable </p>
<pre><code>currentbid
</code></pre>
<p>is </p>
<pre><code>[19,3]
</code></pre>
<p>(nineteen threes) and there are 20 dice in play, then the only possible moves are 19 fours, 19 fives, 19 sixes, 20 twos, 20 threes, 20 fours, 20 fives and 20 sixes. Calls of ones are not allowed.
The program should output the above as:</p>
<pre><code>[[19,4],[19,5],[19,6],[20,2],[20,3],[20,4],[20,5],[20,6]]
</code></pre>
<p>but instead outputs it as:</p>
<pre><code>[[20,6],[20,6],[20,6],[20,6],[20,6],[20,6],[20,6],[20,6]]
</code></pre>
<p>What am I doing wrong?</p>
<pre><code>def calcpossiblemoves(self, currentbid, totalnoofdice):
    self.possiblemoves = [] #Create a list of possible moves that will be added to
    self.bid = currentbid
    while self.bid[0] <= totalnoofdice:
        while self.bid[1] < 6:
            self.bid[1] += 1
            self.possiblemoves.append(self.bid) # <- I think the problem is something to do with this line
            print(self.possiblemoves) #For tracking the process
        # Increase 1st number, reset 2nd number to 1
        self.bid[0] += 1
        self.bid[1] = 1 # Which will get increased to 2
        #print("Reached 6")
    return self.possiblemoves
</code></pre>
| 3 | 2016-08-18T20:45:07Z | 39,027,452 | <p>The problem is that you are using <code>list</code>s, which are mutable objects, and you're creating references, rather than copies. You could fix it by copying the <code>self.bid</code> list each time, but the most "pythonic" solution is not to use <code>list</code>, but to use <code>tuple</code>s, instead:</p>
<pre><code>def calcpossiblemoves(self, currentbid, totalnoofdice):
    possiblemoves = []
    numberof, value = currentbid
    while numberof <= totalnoofdice:
        while value < 6:
            value += 1
            possiblemoves.append((numberof, value))
        # Increase 1st number, reset 2nd number to 1
        numberof += 1
        value = 1 # Which will get increased to 2
        #print("Reached 6")
    return possiblemoves
</code></pre>
<p>Note that this doesn't update <code>self.bid</code> (but you can easily add that in), and you get a <code>list</code> of immutable <code>tuple</code>s.</p>
<p><strong>EDIT:</strong> Because <code>tuple</code>s are immutable, I've used <em>tuple unpacking</em> to create two variables. This is to prevent the following problem:</p>
<pre><code>>>> bid = (1, 2)
>>> bid[0] += 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
</code></pre>
<p>I could write <code>bid = (bid[0]+1, bid[1])</code>, but using two variables is, imho, easier to understand.</p>
<hr>
<p>Generally, it's a good rule of thumb to make sure that all members of <code>list</code>s mean the same thing, whereas values meaning different things go in <code>tuple</code>s or <code>dict</code>s, or custom containers such as <code>NamedTuple</code> or custom classes.</p>
<p>In this example, the outer <code>list</code> contains a number of bids, but each of the inner <code>list</code>s has a <em>number of dice</em> and a <em>value</em>, which indicates a <code>list</code> is perhaps the wrong type to use.</p>
| 1 | 2016-08-18T21:01:42Z | [
"python",
"list"
] |
Appending miniature lists to a larger list | 39,027,187 | <p>I'm creating a python application to play the game Perudo (or Liar's dice).</p>
<p>I am trying to create a function that calculates all the possible moves that the player (or AI) is allowed to make and returns a list of them so that it can reject illegal ones.</p>
<p>Turns are stored as a list of 2 numbers e.g. [10,6] representing ten sixes.
If the starting variable </p>
<pre><code>currentbid
</code></pre>
<p>is </p>
<pre><code>[19,3]
</code></pre>
<p>(nineteen threes) and there are 20 dice in play, then the only possible moves are 19 fours, 19 fives, 19 sixes, 20 twos, 20 threes, 20 fours, 20 fives and 20 sixes. Calls of ones are not allowed.
The program should output the above as:</p>
<pre><code>[[19,4],[19,5],[19,6],[20,2],[20,3],[20,4],[20,5],[20,6]]
</code></pre>
<p>but instead outputs it as:</p>
<pre><code>[[20,6],[20,6],[20,6],[20,6],[20,6],[20,6],[20,6],[20,6]]
</code></pre>
<p>What am I doing wrong?</p>
<pre><code>def calcpossiblemoves(self, currentbid, totalnoofdice):
    self.possiblemoves = [] #Create a list of possible moves that will be added to
    self.bid = currentbid
    while self.bid[0] <= totalnoofdice:
        while self.bid[1] < 6:
            self.bid[1] += 1
            self.possiblemoves.append(self.bid) # <- I think the problem is something to do with this line
            print(self.possiblemoves) #For tracking the process
        # Increase 1st number, reset 2nd number to 1
        self.bid[0] += 1
        self.bid[1] = 1 # Which will get increased to 2
        #print("Reached 6")
    return self.possiblemoves
</code></pre>
| 3 | 2016-08-18T20:45:07Z | 39,027,480 | <p>The problem is the reference to self.bid: the list holds the same object over and over, so when you update it in your loop you are updating every entry at once. I've rewritten what you are trying to do to work the way you want in case you want to keep this same format, but as others have said, you can clean it up some from a coding standpoint.</p>
<pre><code>def calcpossiblemoves(currentbid, totalnoofdice):
    possiblemoves = [] #Create a list of possible moves that will be added to
    bid = currentbid
    available_bid = None
    while bid[0] <= totalnoofdice:
        while bid[1] < 6:
            bid[1] += 1
            available_bid = [bid[0], bid[1]] # a new list object each iteration
            possiblemoves.append(available_bid)
            print(possiblemoves) #For tracking the process
        # Increase 1st number, reset 2nd number to 1
        bid[0] += 1
        bid[1] = 1 # Which will get increased to 2
        #print("Reached 6")
    return possiblemoves
calcpossiblemoves([19,3],20)
</code></pre>
| 0 | 2016-08-18T21:03:42Z | [
"python",
"list"
] |
Django Nose - Need to capture output while running tests | 39,027,258 | <p>I am running django-nose tests in a management command using <code>call_command</code>. I need to capture output and do something with it, depending on whether it failed or passed.
My current code in management command :</p>
<pre><code>content = StringIO()
try:
    call_command('test', '--nologcapture', '-s', stdout=content)
# since it's calling system exit
except BaseException:
    pass
content.seek(0)
print content.read(), '<-- content'
# Check if test is passed and do something or else something else.
</code></pre>
<p>In my case, the content is always an empty string. </p>
<p>I tried a lot of nose plugins, but cannot fetch output. </p>
<p>Thanks.</p>
| 1 | 2016-08-18T20:50:17Z | 39,027,300 | <p>The likely cause of your problem is that inside the management code, they are using print statements which are going directly to <code>sys.stdout</code>, rather than using the <code>self.stdout</code> attribute of the management command instance. </p>
<p>Check the code inside the management command. The prints should look something like:</p>
<pre><code>self.stdout.write(...)
# or, perhaps ...
print(..., file=self.stdout)
</code></pre>
<p>If this is not the case, and you have control over the management command's code, then fix it up to use <code>self.stdout</code>. Then you should see the output captured in your <code>StringIO</code> instance with your current code. </p>
<p>If you don't have control over the code in the management command, then you will have to go about this another way, by redirecting <code>stdout</code> (using a context manager) before calling the command. </p>
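<p>For illustration, a minimal redirection context manager might look like the sketch below (generic Python, not Django-specific; on Python 3.4+ <code>contextlib.redirect_stdout</code> does the same job):</p>

```python
import sys
from contextlib import contextmanager
from io import StringIO

@contextmanager
def captured_stdout():
    """Temporarily swap sys.stdout for an in-memory buffer."""
    buf = StringIO()
    old_stdout = sys.stdout
    sys.stdout = buf
    try:
        yield buf
    finally:
        sys.stdout = old_stdout  # always restore the real stdout

with captured_stdout() as buf:
    print("output from the command")   # captured, not shown on the terminal

captured = buf.getvalue()  # 'output from the command\n'
```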
<p>Note: A simpler way to retrieve output from a <code>StringIO</code> instance is using <code>content.getvalue()</code>, instead of seeking and reading.</p>
| 0 | 2016-08-18T20:52:45Z | [
"python",
"django",
"django-nose"
] |
SQLite: how to search multiple columns without returning duplicate rows? | 39,027,277 | <p>trying to solve this with my limited knowledge of SQL and I'm stuck. :)</p>
<p>This is being done in <strong>SQLite 3</strong> and <strong>Python 3</strong>.</p>
<p>I've 'JOIN'ed 2 tables (<strong>names</strong> and <strong>data</strong>). </p>
<p>I'm trying to search 2 columns in the result for user supplied text (case insensitive). If a match is found in either column, then return the row. </p>
<p>The problem I have is that if the text is found in the name column, I get multiple rows returned (I guess because there are multiple rows with the same name).</p>
<p>The required data is the '<strong>name_id</strong>'</p>
<p>Here's a mockup of the join (there are other columns I haven't included here):</p>
<pre><code> -------------------------------------------------------------------
| name_id | name | data_id | data |
-------------------------------------------------------------------
| 100 | John Smith | 200 | grey hair |
| 100 | John Smith | 201 | hairy teeth |
| 101 | Jerry Jones | 202 | white teeth |
| 103 | Barry Johnson | 256 | brown hair |
-------------------------------------------------------------------
</code></pre>
<p>So, if I search for "teeth", I get:</p>
<pre><code> | 100 | John Smith | 201 | hairy teeth |
| 101 | Jerry Jones | 202 | white teeth |
</code></pre>
<p>and, if I search for "hair", I get:</p>
<pre><code> | 100 | John Smith | 200 | grey hair |
| 100 | John Smith | 201 | hairy teeth |
| 103 | Barry Johnson | 256 | brown hair |
</code></pre>
<p>or, if I search for "john", I get:</p>
<pre><code> | 100 | John Smith | 200 | grey hair |
| 100 | John Smith | 201 | hairy teeth |
| 103 | Barry Johnson | 256 | brown hair |
</code></pre>
<p>But what I actually want is only one row returned based on each '<strong>name_id</strong>' so:</p>
<pre><code> | 100 | John Smith | 200 | grey hair |
| 103 | Barry Johnson | 256 | brown hair |
</code></pre>
<p>The remaining SQLite command looks like this:</p>
<pre><code>WHERE data LIKE "%john%" OR name LIKE "%john%" COLLATE NOCASE
</code></pre>
<p>I've tried using 'DISTINCT' but it appears to remove duplicate rows based upon the specified column. These missing rows contain data that is then not searched.</p>
<p>Thanks.</p>
| 0 | 2016-08-18T20:51:38Z | 39,027,724 | <p>Alright so I'm assuming you simply want the first record for each unique value in the <code>name_id</code> column. If that is the case then your query should look like this.</p>
<pre><code>SELECT
    *
FROM
    names n
    JOIN data d ON d.name_id = n.name_id
WHERE
    LOWER(n.name) LIKE '%john%' -- force lower so only 1 WHERE condition is needed
GROUP BY
    n.name_id
</code></pre>
<p><code>GROUP BY</code> should 'collapse' all the data on whatever column you select. So lets say you wanted to count the amount of results for each <code>name_id</code> you could change your <code>SELECT</code> statement as follows.</p>
<pre><code>SELECT
    n.name_id,
    COUNT(n.name_id) AS result_count
FROM
    -- rest of query
</code></pre>
<p>If you don't use an aggregate function like <code>SUM, COUNT, etc.</code> then SQLite simply keeps a single (arbitrarily chosen) row per group and drops the rest.</p>
<p>If this isn't what you are looking for LMK. Hope that helps.</p>
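<p>As a quick, self-contained sanity check of the <code>GROUP BY</code> approach, here is an illustrative sketch built from the question's mockup (the <code>WHERE</code> here also covers the <code>data</code> column as the question asks, and note that SQLite's <code>LIKE</code> is already case-insensitive for ASCII text):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE names (name_id INTEGER, name TEXT);
    CREATE TABLE data  (data_id INTEGER, name_id INTEGER, data TEXT);
    INSERT INTO names VALUES (100, 'John Smith'), (101, 'Jerry Jones'), (103, 'Barry Johnson');
    INSERT INTO data  VALUES (200, 100, 'grey hair'), (201, 100, 'hairy teeth'),
                             (202, 101, 'white teeth'), (256, 103, 'brown hair');
""")
rows = conn.execute("""
    SELECT n.name_id, n.name, d.data_id, d.data
    FROM names n
    JOIN data d ON d.name_id = n.name_id
    WHERE n.name LIKE '%john%' OR d.data LIKE '%john%'
    GROUP BY n.name_id  -- collapse to one row per name_id
""").fetchall()

print(sorted(r[0] for r in rows))  # [100, 103]
```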
| 1 | 2016-08-18T21:23:31Z | [
"python",
"sqlite",
"python-3.x",
"search",
"sqlite3"
] |
SQLite: how to search multiple columns without returning duplicate rows? | 39,027,277 | <p>trying to solve this with my limited knowledge of SQL and I'm stuck. :)</p>
<p>This is being done in <strong>SQLite 3</strong> and <strong>Python 3</strong>.</p>
<p>I've 'JOIN'ed 2 tables (<strong>names</strong> and <strong>data</strong>). </p>
<p>I'm trying to search 2 columns in the result for user supplied text (case insensitive). If a match is found in either column, then return the row. </p>
<p>The problem I have is that if the text is found in the name column, I get multiple rows returned (I guess because there are multiple rows with the same name).</p>
<p>The required data is the '<strong>name_id</strong>'</p>
<p>Here's a mockup of the join (there are other columns I haven't included here):</p>
<pre><code> -------------------------------------------------------------------
| name_id | name | data_id | data |
-------------------------------------------------------------------
| 100 | John Smith | 200 | grey hair |
| 100 | John Smith | 201 | hairy teeth |
| 101 | Jerry Jones | 202 | white teeth |
| 103 | Barry Johnson | 256 | brown hair |
-------------------------------------------------------------------
</code></pre>
<p>So, if I search for "teeth", I get:</p>
<pre><code> | 100 | John Smith | 201 | hairy teeth |
| 101 | Jerry Jones | 202 | white teeth |
</code></pre>
<p>and, if I search for "hair", I get:</p>
<pre><code> | 100 | John Smith | 200 | grey hair |
| 100 | John Smith | 201 | hairy teeth |
| 103 | Barry Johnson | 256 | brown hair |
</code></pre>
<p>or, if I search for "john", I get:</p>
<pre><code> | 100 | John Smith | 200 | grey hair |
| 100 | John Smith | 201 | hairy teeth |
| 103 | Barry Johnson | 256 | brown hair |
</code></pre>
<p>But what I actually want is only one row returned based on each '<strong>name_id</strong>' so:</p>
<pre><code> | 100 | John Smith | 200 | grey hair |
| 103 | Barry Johnson | 256 | brown hair |
</code></pre>
<p>The remaining SQLite command looks like this:</p>
<pre><code>WHERE data LIKE "%john%" OR name LIKE "%john%" COLLATE NOCASE
</code></pre>
<p>I've tried using 'DISTINCT' but it appears to remove duplicate rows based upon the specified column. These missing rows contain data that is then not searched.</p>
<p>Thanks.</p>
| 0 | 2016-08-18T20:51:38Z | 39,028,096 | <p>With the same assumptions as the previous answer (i.e. you're looking for the first record for the unique name), I'd suggest the following:</p>
<pre><code>select names.name_id
     , <names_fields>
     , group_concat(data,';')
from names
join (select name_id, data from data_table) as data
    on names.name_id = data.name_id
where name like '%john%' or data like '%john%'
group by <name_fields>
<p>More on group_concat <a href="https://www.sqlite.org/lang_aggfunc.html" rel="nofollow">here</a>.</p>
| 0 | 2016-08-18T21:54:35Z | [
"python",
"sqlite",
"python-3.x",
"search",
"sqlite3"
] |
Iterating through rows to see which value comes first | 39,027,281 | <p>When working with Pandas I am able to check prior and following rows to see if certain conditions are met using .shift(). </p>
<p>What if I need to see if a condition is met which could be 100 rows or more after the row I am comparing with? I know looping through a dataframe is not the most efficient but can someone please help with the following example.</p>
<p>When the df['reversal'] column has a signal - as per the example below showing 6.0 at 14:00 - I want to know which one of the following is seen first after the row at 14:00 :</p>
<ul>
<li><code>df.price</code> with value of <code>df.reversal</code> + 1 (we can see this was hit at 20:00 as 6 + 1 = 7) </li>
<li><code>df.price</code> with a value of <code>df.reversal</code> - <code>df.reversal</code> (this was not hit in this example as price did not hit 6 - 6 = 0)</li>
</ul>
<p><a href="http://i.stack.imgur.com/qLCAR.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/qLCAR.jpg" alt="revimage"></a></p>
<p>This is the desired output with cells in Blue showing what I would like to see. The new columns should show the time that target is hit (as per this example) or target missed (if price hits 0 in this example).</p>
<p><a href="http://i.stack.imgur.com/zZQUx.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/zZQUx.jpg" alt="des out"></a></p>
<p>Please see <code>df.to_dict()</code> below to reproduce:</p>
<pre><code>{'move_start': {datetime.time(9, 0): nan,
datetime.time(10, 0): nan,
datetime.time(11, 0): nan,
datetime.time(12, 0): nan,
datetime.time(13, 0): nan,
datetime.time(14, 0): datetime.time(9, 0),
datetime.time(15, 0): nan,
datetime.time(16, 0): nan,
datetime.time(17, 0): nan,
datetime.time(18, 0): nan,
datetime.time(19, 0): nan,
datetime.time(20, 0): nan},
'price': {datetime.time(9, 0): 1,
datetime.time(10, 0): 2,
datetime.time(11, 0): 3,
datetime.time(12, 0): 4,
datetime.time(13, 0): 5,
datetime.time(14, 0): 6,
datetime.time(15, 0): 5,
datetime.time(16, 0): 4,
datetime.time(17, 0): 3,
datetime.time(18, 0): 2,
datetime.time(19, 0): 4,
datetime.time(20, 0): 7},
'reversal': {datetime.time(9, 0): nan,
datetime.time(10, 0): nan,
datetime.time(11, 0): nan,
datetime.time(12, 0): nan,
datetime.time(13, 0): nan,
datetime.time(14, 0): 6.0,
datetime.time(15, 0): nan,
datetime.time(16, 0): nan,
datetime.time(17, 0): nan,
datetime.time(18, 0): nan,
datetime.time(19, 0): nan,
datetime.time(20, 0): nan}}
</code></pre>
| 4 | 2016-08-18T20:51:49Z | 39,027,699 | <p>Make it nicer and cleaner, though... </p>
<pre><code>import datetime,numpy as np,pandas as pd
nan = np.nan
a = pd.DataFrame({'move_start': {datetime.time(9, 0): nan, datetime.time(10, 0): nan, datetime.time(11, 0): nan, datetime.time(12, 0): nan, datetime.time(13, 0): nan, datetime.time(14, 0): datetime.time(9, 0), datetime.time(15, 0): nan, datetime.time(16, 0): nan, datetime.time(17, 0): nan, datetime.time(18, 0): nan, datetime.time(19, 0): nan, datetime.time(20, 0): nan}, 'price': {datetime.time(9, 0): 1, datetime.time(10, 0): 0, datetime.time(11, 0): 3, datetime.time(12, 0): 4, datetime.time(13, 0): 7, datetime.time(14, 0): 6, datetime.time(15, 0): 5, datetime.time(16, 0): 4, datetime.time(17, 0): 0, datetime.time(18, 0): 2, datetime.time(19, 0): 4, datetime.time(20, 0): 7}, 'reversal': {datetime.time(9, 0): nan, datetime.time(10, 0): nan, datetime.time(11, 0): nan, datetime.time(12, 0): nan, datetime.time(13, 0): nan,
datetime.time(14, 0): 6.0, datetime.time(15, 0): nan, datetime.time(16, 0): nan, datetime.time(17, 0): nan, datetime.time(18, 0): nan, datetime.time(19, 0): nan, datetime.time(20, 0): nan}})
a['target_hit']=nan;
a['target_miss']=nan;
a['reversal1']=a['reversal']+1;
a['reversal2']=a['reversal']-a['reversal'];
a.sort_index(1,inplace=True);
hit = a.ix[:,:-2].dropna()
takeBoth = False
targetIsHit,targetIsMiss = False,False
if takeBoth:
    targetHit = a[(hit['reversal1'].values==a['price'].values) & (hit['reversal1'].index.values<a['price'].index.values)];
    targetMiss = a[(hit['reversal2'].values==a['price'].values) & (hit['reversal2'].index.values<a['price'].index.values)];
    targetIsHit,targetIsMiss = not targetHit.empty, not targetMiss.empty
else:
    targetHit = a[(hit['reversal1'].values==a['price'].values) & (hit['reversal1'].index.values<a['price'].index.values)];
    targetIsHit = not targetHit.empty
    if not targetIsHit:
        targetMiss = a[(hit['reversal2'].values==a['price'].values) & (hit['reversal2'].index.values<a['price'].index.values)];
        targetIsMiss = not targetMiss.empty
if targetIsHit:a.loc[hit.index.values,"target_hit"] = targetHit.index.values;
if targetIsMiss:a.loc[hit.index.values,"target_miss"] = targetMiss.index.values;
print '#'*50
print a
'''
##################################################
move_start price reversal reversal1 reversal2 target_hit \
09:00:00 NaN 1 NaN NaN NaN NaN
10:00:00 NaN 0 NaN NaN NaN NaN
11:00:00 NaN 3 NaN NaN NaN NaN
12:00:00 NaN 4 NaN NaN NaN NaN
13:00:00 NaN 7 NaN NaN NaN NaN
14:00:00 09:00:00 6 6.0 7.0 0.0 20:00:00
15:00:00 NaN 5 NaN NaN NaN NaN
16:00:00 NaN 4 NaN NaN NaN NaN
17:00:00 NaN 0 NaN NaN NaN NaN
18:00:00 NaN 2 NaN NaN NaN NaN
19:00:00 NaN 4 NaN NaN NaN NaN
20:00:00 NaN 7 NaN NaN NaN NaN
target_miss
09:00:00 NaN
10:00:00 NaN
11:00:00 NaN
12:00:00 NaN
13:00:00 NaN
14:00:00 NaN
15:00:00 NaN
16:00:00 NaN
17:00:00 NaN
18:00:00 NaN
19:00:00 NaN
20:00:00 NaN
'''
</code></pre>
<p>PS: As a typical pythonist I'm lazy and like to work on plain input/output containing the most typical problems, so you should make a nasty df from the beginning, not only write about it XD... About reading text... People tend to write, write and write. Usually it's a simple problem, but heavily over-written, so I don't understand an iota xD. Then it's like "WTF did he want here?!".</p>
| 1 | 2016-08-18T21:21:02Z | [
"python",
"pandas"
] |
Python script error with numbers 01 | 39,027,299 | <p>I have the following script and there seems to be an error:</p>
<pre><code>import random, time
from random import randint
while True:
    part_1 = randint(1,31)
    part_2 = randint(1,12)
    part_3 = random.choice([81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,00,01,02,03,04])
    if part_3 == 01 or part_3 == 1:
        part_3 = 0 + part_3
    full = str(part_1) + " " + str(part_2) + " " + str(part_3)
    print full
    time.sleep(1)
</code></pre>
<p>In part_3 I want to make it so it's 01, but it just prints off as 1. I want the same behavior with with 2, 3, etc. How would I do this?</p>
| 0 | 2016-08-18T20:52:35Z | 39,027,417 | <p>Change your declaration of <code>full</code> to: </p>
<pre><code>full = "%02d %02d %02d" % (part_1, part_2, part_3)
</code></pre>
<p>You can read more about format strings <a href="https://docs.python.org/2/library/string.html#format-specification-mini-language" rel="nofollow">here</a>.</p>
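<p>For instance, with some illustrative values the zero-padding behaves like this:</p>

```python
part_1, part_2, part_3 = 5, 12, 1
full = "%02d %02d %02d" % (part_1, part_2, part_3)
print(full)  # 05 12 01
```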
| 2 | 2016-08-18T20:59:34Z | [
"python"
] |
Python script error with numbers 01 | 39,027,299 | <p>I have the following script and there seems to be an error:</p>
<pre><code>import random, time
from random import randint
while True:
    part_1 = randint(1,31)
    part_2 = randint(1,12)
    part_3 = random.choice([81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,00,01,02,03,04])
    if part_3 == 01 or part_3 == 1:
        part_3 = 0 + part_3
    full = str(part_1) + " " + str(part_2) + " " + str(part_3)
    print full
    time.sleep(1)
</code></pre>
<p>In part_3 I want to make it so it's 01, but it just prints off as 1. I want the same behavior with with 2, 3, etc. How would I do this?</p>
| 0 | 2016-08-18T20:52:35Z | 39,027,679 | <p>Alternative answer using <a href="https://pyformat.info/" rel="nofollow"><code>str.format</code></a></p>
<pre><code>part_1 = randint(1,31)
part_2 = randint(1,12)
part_3 = random.choice([81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,00,01,02,03,04])
full = ('{:02d} '*3).format(part_1, part_2, part_3).rstrip()
# '16 09 97'
</code></pre>
| 0 | 2016-08-18T21:18:52Z | [
"python"
] |
ValueError: The input contains nan values - from lmfit model despite the input not containing NaNs | 39,027,346 | <p>I'm trying to build a model using lmfit <a href="https://lmfit.github.io/lmfit-py/model.html#the-model-class" rel="nofollow">(link to docs)</a> and I can't seems to find out why I keep getting a <code>ValueError: The input contains nan values</code> when I try to fit the model.</p>
<pre><code>from lmfit import minimize, Minimizer, Parameters, Parameter, report_fit, Model
import numpy as np
def cde(t, Qi, at, vw, R, rhob_cb, al, d, r):
    # t (time), is the independent variable
    return Qi / (8 * np.pi * ((at * vw)/R) * t * rhob_cb * (np.sqrt(np.pi * ((al * vw)/R * t)))) * \
        np.exp(- (R * (d - (t * vw)/ R)**2) / (4 * (al * vw) * t) - (R * r**2)/ (4 * (at * vw) * t))
model_cde = Model(cde)
# Allowed to vary
model_cde.set_param_hint('vw', value =10**-4, min=0.000001)
model_cde.set_param_hint('d', value = -0.038, min = 0.0001)
model_cde.set_param_hint('r', value = 5.637e-10)
model_cde.set_param_hint('at', value =0.1)
model_cde.set_param_hint('al', value =0.15)
# Fixed
model_cde.set_param_hint('Qi', value = 1000, vary = False)
model_cde.set_param_hint('R', value =1.7, vary = False)
model_cde.set_param_hint('rhob_cb', value =3000, vary = False)
# test data
data = [ 1.37, 1.51, 1.65, 1.79, 1.91, 2.02, 2.12, 2.2 ,
2.27, 2.32, 2.36, 2.38, 2.4 , 2.41, 2.42, 2.41, 2.4 ,
2.39, 2.37, 2.35, 2.33, 2.31, 2.29, 2.26, 2.23, 2.2 ,
2.17, 2.14, 2.11, 2.08, 2.06, 2.02, 1.99, 1.97, 1.94,
1.91, 1.88, 1.85, 1.83, 1.8 , 1.78, 1.75, 1.72, 1.7 ,
1.68, 1.65, 1.63, 1.61, 1.58]
time = list(range(5,250,5))
model_cde.fit(data, t= time)
</code></pre>
<p>Produces the following error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-16-785fcc6a994b> in <module>()
----> 1 model_cde.fit(data, t= time)
/home/bprodz/.virtualenvs/phd_dev/lib/python3.5/site-packages/lmfit/model.py in fit(self, data, params, weights, method, iter_cb, scale_covar, verbose, fit_kws, **kwargs)
539 scale_covar=scale_covar, fcn_kws=kwargs,
540 **fit_kws)
--> 541 output.fit(data=data, weights=weights)
542 output.components = self.components
543 return output
/home/bprodz/.virtualenvs/phd_dev/lib/python3.5/site-packages/lmfit/model.py in fit(self, data, params, weights, method, **kwargs)
745 self.init_fit = self.model.eval(params=self.params, **self.userkws)
746
--> 747 _ret = self.minimize(method=self.method)
748
749 for attr in dir(_ret):
/home/bprodz/.virtualenvs/phd_dev/lib/python3.5/site-packages/lmfit/minimizer.py in minimize(self, method, params, **kws)
1240 val.lower().startswith(user_method)):
1241 kwargs['method'] = val
-> 1242 return function(**kwargs)
1243
1244
/home/bprodz/.virtualenvs/phd_dev/lib/python3.5/site-packages/lmfit/minimizer.py in leastsq(self, params, **kws)
1070 np.seterr(all='ignore')
1071
-> 1072 lsout = scipy_leastsq(self.__residual, vars, **lskws)
1073 _best, _cov, infodict, errmsg, ier = lsout
1074 result.aborted = self._abort
/home/bprodz/.virtualenvs/phd_dev/lib/python3.5/site-packages/scipy/optimize/minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag)
385 maxfev = 200*(n + 1)
386 retval = _minpack._lmdif(func, x0, args, full_output, ftol, xtol,
--> 387 gtol, maxfev, epsfcn, factor, diag)
388 else:
389 if col_deriv:
/home/bprodz/.virtualenvs/phd_dev/lib/python3.5/site-packages/lmfit/minimizer.py in __residual(self, fvars, apply_bounds_transformation)
369
370 out = self.userfcn(params, *self.userargs, **self.userkws)
--> 371 out = _nan_policy(out, nan_policy=self.nan_policy)
372
373 if callable(self.iter_cb):
/home/bprodz/.virtualenvs/phd_dev/lib/python3.5/site-packages/lmfit/minimizer.py in _nan_policy(a, nan_policy, handle_inf)
1430
1431 if contains_nan:
-> 1432 raise ValueError("The input contains nan values")
1433 return a
1434
ValueError: The input contains nan values
</code></pre>
<p>However the results of the following checks for NaNs confirms that there were no NaN values in my data:</p>
<pre><code>print(np.any(np.isnan(data)), np.any(np.isnan(time)))
False False
</code></pre>
<p>So far I've tried converting 1 and/or both of <code>data</code> and <code>time</code> from lists to <code>numpy ndarrays</code>, removing the 0th time step (in case there was a dividing by 0 error), explicitly specifying the <code>t</code> as being independent and allowing all variables to vary. However these all throw the same error.</p>
<p>Does anyone have ideas what is causing this error to be thrown? Thanks.</p>
| 1 | 2016-08-18T20:54:49Z | 39,043,951 | <p>I tried to fit my model using <code>scipy.optimize.curve_fit</code> and got the following error:</p>
<pre><code>/home/bprodz/.virtualenvs/phd_dev/lib/python3.4/site-packages/ipykernel/__main__.py:3: RuntimeWarning: invalid value encountered in sqrt
app.launch_new_instance()
</code></pre>
<p>Which suggests the problem is with my model generating some negative numbers for <code>np.sqrt()</code>. The default behaviour for <code>np.sqrt()</code> when given a negative number is to output <code>nan</code> <a href="http://stackoverflow.com/questions/22949917/im-getting-an-error-string149-runtimewarning-invalid-value-encountered-in">as per this question.</a> NB the np.sqrt can be set to raise an error if given a negative number be setting the following: <code>np.seterr(all='raise')</code> <a href="http://stackoverflow.com/questions/22949917/im-getting-an-error-string149-runtimewarning-invalid-value-encountered-in#comment35039419_22950312">source</a></p>
<p>TIP I also asked for help in the lmfit google group and received the following <a href="https://groups.google.com/d/msg/lmfit-py/HrpzGa8q318/BWOte-VfAwAJ" rel="nofollow">helpful advice</a>:</p>
<ul>
<li>Consider breaking long formulas into smaller pieces to make troubleshooting easier</li>
<li>Use <code>Model.eval()</code> to test what certain parameters will produce when run through your model function</li>
<li><code>np.ndarray</code> is generally superior to python lists in these (numerical) situations</li>
</ul>
| 1 | 2016-08-19T16:37:33Z | [
"python",
"lmfit"
] |
Why there is a difference between 32 bit and 64 bit numpy/pandas | 39,027,484 | <p>I'm using numpy/pandas on a 64-bit fedora box, in production they pushed to a 32-bit Centos box and hit an error with <code>json.dumps</code>. It was throwing <code>repr(0) is not Serializable</code>. </p>
<p>I tried testing on 64-bit Centos and it runs absolutely fine. But on 32-bit (Centos 6.8 to be precise) it throws an error. I was wondering if anyone has hit this issue before.</p>
<p>Below is 64-bit Fedora, </p>
<pre><code>Python 2.6.6 (r266:84292, Jun 30 2016, 09:54:10)
[GCC 5.3.1 20160406 (Red Hat 5.3.1-6)] on linux4
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> a = pd.DataFrame([{'a':1}])
>>>
>>> a
a
0 1
>>> a.to_dict()
{'a': {0: 1}}
>>> import json
>>> json.dumps(a.to_dict())
'{"a": {"0": 1}}'
</code></pre>
<p>Below is 32-bit Centos</p>
<pre><code>import json
import pandas as pd
a = pd.DataFrame( [ {'a': 1} ] )
json.dumps(a.to_dict())
Traceback (most recent call last):
File "sample.py", line 5, in <module>
json.dumps(a.to_dict())
File "/usr/lib/python2.6/json/__init__.py", line 230, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python2.6/json/encoder.py", line 367, in encode
chunks = list(self.iterencode(o))
File "/usr/lib/python2.6/json/encoder.py", line 309, in _iterencode
for chunk in self._iterencode_dict(o, markers):
File "/usr/lib/python2.6/json/encoder.py", line 275, in _iterencode_dict
for chunk in self._iterencode(value, markers):
File "/usr/lib/python2.6/json/encoder.py", line 309, in _iterencode
for chunk in self._iterencode_dict(o, markers):
File "/usr/lib/python2.6/json/encoder.py", line 268, in _iterencode_dict
raise TypeError("key {0!r} is not a string".format(key))
TypeError: key 0 is not a string
</code></pre>
<p>What is the usual workaround for this issue? I cannot use a custom encoder for json, as the library I'm using to push this data expects a dictionary and internally uses the <code>json</code> module to serialize it and push it over the wire. </p>
<p><strong>Update:</strong> Python version 2.6.6 on both and pandas is 0.16.1 on both</p>
| 0 | 2016-08-18T21:04:03Z | 39,027,831 | <p>I believe this happens because the index is a <code>numpy.intNN</code> of a different size than the Python <code>int</code>, and these are not automatically converted from one to the other.</p>
<p>Like, on my 64-bit Python 2.7 and Numpy:</p>
<pre><code>>>> isinstance(numpy.int64(5), int)
True
>>> isinstance(numpy.int32(5), int)
False
</code></pre>
<p>Then:</p>
<pre><code>>>> json.dumps({numpy.int64(5): '5'})
'{"5": "5"}'
>>> json.dumps({numpy.int32(5): '5'})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
TypeError: keys must be a string
</code></pre>
<hr>
<p>You could try to change the index to <code>numpy.int32</code>, <code>numpy.int64</code> or <code>int</code>:</p>
<pre><code>>>> df = pd.DataFrame( [ {'a': 1}, {'a': 2} ] )
>>> df.index = df.index.astype(numpy.int32) # perhaps your index was of these?
>>> json.dumps(df.to_dict())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
TypeError: keys must be a string
</code></pre>
<p>So you can fix it by changing the index type to <code>numpy.int64</code> or just plain Python <code>int</code>: </p>
<pre><code>>>> df.index = df.index.astype(numpy.int64)
>>> json.dumps(df.to_dict())
'{"a": {"0": 1, "1": 2}}'
>>> df.index = df.index.astype(int)
>>> json.dumps(df.to_dict())
'{"a": {"0": 1, "1": 2}}'
</code></pre>
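<p>If changing the index dtype is not an option, another way out is to coerce the dictionary keys to strings yourself before handing the dict to the library (a sketch of my own, not part of the fix above; on 32-bit, values that are numpy scalars may need similar treatment):</p>

```python
import json

# Recursively coerce every dict key to str so json.dumps never sees a
# non-string key such as numpy.int32.
def stringify_keys(obj):
    if isinstance(obj, dict):
        return {str(k): stringify_keys(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [stringify_keys(v) for v in obj]
    return obj

# json.dumps(stringify_keys(df.to_dict())) should then behave the same
# on both platforms.
```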
| 3 | 2016-08-18T21:32:00Z | [
"python",
"json",
"pandas",
"numpy",
"32bit-64bit"
] |
Distributed tensorflow replicated training example: grpc_tensorflow_server - No such file or directory | 39,027,488 | <p>I am trying to make a <code>distributed tensorflow</code> implementation by following the instructions in this blog: <a href="http://leotam.github.io/general/2016/03/13/DistributedTF.html" rel="nofollow">Distributed TensorFlow by Leo K. Tam</a>. My aim is to perform <code>replicated training</code> as mentioned in this <a href="http://stackoverflow.com/a/37733117/5082406">post</a></p>
<p>I have completed the steps till <code>installing tensorflow</code> and successfully running the following command and getting results:</p>
<pre><code>sudo bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu
</code></pre>
<p>Now the next thing, which I want to implement is to launch the <code>gRPC server</code> on one of the nodes by the following command :</p>
<pre><code>bazel-bin/tensorflow/core/distributed_runtime/rpc/grpc_tensorflow_server --cluster_spec='worker|192.168.555.254:2500;192.168.555.255:2501' --job_name=worker --task_id=0 &
</code></pre>
<p>Though, when I run it, I get the following error: <code>rpc/grpc_tensorflow_server: No such file or directory</code></p>
<pre><code>-bash: bazel-bin/tensorflow/core/distributed_runtime/rpc/grpc_tensorflow_server: No such file or directory
</code></pre>
<p>The contents of my <code>rpc</code> folder are:</p>
<pre><code> libgrpc_channel.pic.a libgrpc_remote_master.pic.lo libgrpc_session.pic.lo libgrpc_worker_service_impl.pic.a _objs/
libgrpc_master_service_impl.pic.a libgrpc_remote_worker.pic.a libgrpc_tensor_coding.pic.a libgrpc_worker_service.pic.a
libgrpc_master_service.pic.lo libgrpc_server_lib.pic.lo libgrpc_worker_cache.pic.a librpc_rendezvous_mgr.pic.a
</code></pre>
<p>I am clearly missing out on a step in between, which is not mentioned in the blog. My objective is to be able to run the command mentioned above (to launch the <code>gRPC server</code>) so that I can start a worker process on one of the nodes. </p>
| 1 | 2016-08-18T21:04:13Z | 39,027,701 | <p>The <code>grpc_tensorflow_server</code> binary was a temporary measure used in the pre-release version of Distributed TensorFlow, and it is no longer built by default or included in the binary distributions. Its replacement is the <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/train.html#Server" rel="nofollow"><code>tf.train.Server</code></a> Python class, which is more programmable and easier to use.</p>
<p>You can write simple Python scripts using <code>tf.train.Server</code> to reproduce the behavior of <code>grpc_tensorflow_server</code>:</p>
<pre><code># ps.py. Run this on 192.168.0.1. (IP addresses changed to be valid.)
import tensorflow as tf
server = tf.train.Server({"ps": ["192.168.0.1:2222"]},
{"worker": ["192.168.0.2:2222", "192.168.0.3:2222"]},
job_name="ps", task_index=0)
server.join()
# worker_0.py. Run this on 192.168.0.2.
import tensorflow as tf
server = tf.train.Server({"ps": ["192.168.0.1:2222"]},
{"worker": ["192.168.0.2:2222", "192.168.0.3:2222"]},
job_name="worker", task_index=0)
server.join()
# worker_1.py. Run this on 192.168.0.3. (IP addresses changed to be valid.)
import tensorflow as tf
server = tf.train.Server({"ps": ["192.168.0.1:2222"]},
{"worker": ["192.168.0.2:2222", "192.168.0.3:2222"]},
job_name="worker", task_index=1)
server.join()
</code></pre>
<p>Clearly this example could be cleaned up and made reusable with command-line flags etc., but TensorFlow doesn't prescribe a particular form for these. The main things to note are that (i) there is one <code>tf.train.Server</code> instance per TensorFlow task, (ii) all <code>Server</code> instances must be constructed with the same "cluster definition" (the dictionary mapping job names to lists of addresses), and (iii) each task is identified by a unique pair of <code>job_name</code> and <code>task_index</code>.</p>
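<p>If you want to keep the old <code>--cluster_spec</code> flag syntax from the question, a small helper could translate it into the cluster-definition dictionary that <code>tf.train.Server</code> expects (a sketch based on the <code>'job|host:port;host:port'</code> format shown above; it is not an official utility):</p>

```python
# Sketch: convert the old grpc_tensorflow_server --cluster_spec string,
# e.g. "ps|10.0.0.1:2222,worker|10.0.0.2:2222;10.0.0.3:2222",
# into the cluster-definition dict used by tf.train.Server.
def parse_cluster_spec(spec):
    cluster = {}
    for job in spec.split(","):
        job_name, hosts = job.split("|")
        cluster[job_name] = hosts.split(";")
    return cluster
```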
<p>Once you run the three scripts on the respective machines, you can create another script to connect to them:</p>
<pre><code>import tensorflow as tf
sess = tf.Session("grpc://192.168.0.2:2222")
# ...
</code></pre>
| 1 | 2016-08-18T21:21:07Z | [
"python",
"tensorflow",
"gpu",
"distributed-computing",
"grpc"
] |
Convert data from an excel file into a python dictionary | 39,027,559 | <p>I'm trying to convert data from an excel file to a python dictionary. My excel file has two columns and many rows. </p>
<pre><code>Name Age
Steve 11
Mike 10
John 11
</code></pre>
<p>How do I go about adding this into a dictionary with Age as the key and name as the value? Also, if many names have the same age, they should all be in an array. For example:</p>
<pre><code>{'11':['Steve','John'],'10':['Mike']}
</code></pre>
<p>What I've written so far:</p>
<pre><code>import xlsxwriter
import openpyxl
wb = openpyxl.load_workbook('demo.xlsx')
sheet = wb.get_sheet_by_name('Sheet1')
#print sheet.cell(row=2, column=2).value
age_and_names = {}
for i in range(1,11):
    age = sheet.cell(row=i, column=2).value
    name = sheet.cell(row=i, column=1).value
    #Problem seems to be in this general area
    if not age in age_and_names:
        age_and_names[age]=[]
        age_and_names[age].append(name)
print age_and_names
</code></pre>
<p>What should I have done for the desired output? I'm very new to python. All help will be appreciated. Thank You. </p>
| 0 | 2016-08-18T21:09:24Z | 39,027,584 | <p>It is just a simple indentation error that makes your code incorrect:</p>
<pre><code>#Problem seems to be in this general area
if not age in age_and_names:
    age_and_names[age]=[]
    age_and_names[age].append(name)
</code></pre>
<p>should be</p>
<pre><code>#Problem seems to be in this general area
if not age in age_and_names:
    age_and_names[age]=[]
age_and_names[age].append(name)
</code></pre>
<p>With the original indentation, <code>append</code> only runs when the age key is brand new, so names for ages you have already seen never make it into <code>age_and_names[age]</code>.</p>
<p>You should consider using <code>collections.defaultdict</code> instead to avoid testing if key exists:</p>
<p>Declare like this</p>
<pre><code>from collections import defaultdict
age_and_names = defaultdict(list)
</code></pre>
<p>Use like this:</p>
<pre><code>age_and_names[12].append("Mike")
</code></pre>
<p>If the dict has no key <code>12</code>, it will invoke the <code>list</code> factory and create an empty list for you. There is no need to test whether the key exists first.</p>
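<p>Putting it together with your loop (the hard-coded rows below stand in for the <code>sheet.cell(...)</code> reads, purely for illustration):</p>

```python
from collections import defaultdict

# Rows stand in for the openpyxl sheet reads from the question.
rows = [("Steve", 11), ("Mike", 10), ("John", 11)]

age_and_names = defaultdict(list)
for name, age in rows:
    age_and_names[age].append(name)  # no membership test needed
```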
| 1 | 2016-08-18T21:11:03Z | [
"python",
"excel",
"dictionary"
] |
Convert data from an excel file into a python dictionary | 39,027,559 | <p>I'm trying to convert data from an excel file to a python dictionary. My excel file has two columns and many rows. </p>
<pre><code>Name Age
Steve 11
Mike 10
John 11
</code></pre>
<p>How do I go about adding this into a dictionary with Age as the key and name as the value? Also, if many names have the same age, they should all be in an array. For example:</p>
<pre><code>{'11':['Steve','John'],'10':['Mike']}
</code></pre>
<p>What I've written so far:</p>
<pre><code>import xlsxwriter
import openpyxl
wb = openpyxl.load_workbook('demo.xlsx')
sheet = wb.get_sheet_by_name('Sheet1')
#print sheet.cell(row=2, column=2).value
age_and_names = {}
for i in range(1,11):
    age = sheet.cell(row=i, column=2).value
    name = sheet.cell(row=i, column=1).value
    #Problem seems to be in this general area
    if not age in age_and_names:
        age_and_names[age]=[]
        age_and_names[age].append(name)
print age_and_names
</code></pre>
<p>What should I have done for the desired output? I'm very new to python. All help will be appreciated. Thank You. </p>
| 0 | 2016-08-18T21:09:24Z | 39,027,624 | <p>For this case, use <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>collections.defaultdict</code></a> instead of a plain dictionary (<code>{}</code>); <code>collections.defaultdict</code> takes a factory function that is used to construct values for new keys. Use <code>list</code> to construct an empty list for each key:</p>
<pre><code>import collections
age_and_names = collections.defaultdict(list)
...
age_and_names[age].append(name)
</code></pre>
<p>No <code>if</code>s are needed.</p>
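<p>A complete run with your sample data would look like this (the inlined pairs replace the <code>sheet.cell(...)</code> reads; keys are converted with <code>str</code> only to match the exact output you asked for):</p>

```python
import collections

# Pairs stand in for the openpyxl sheet reads.
data = [("Steve", 11), ("Mike", 10), ("John", 11)]

age_and_names = collections.defaultdict(list)
for name, age in data:
    age_and_names[str(age)].append(name)  # str(age) matches the desired keys

# dict(age_and_names) -> {'11': ['Steve', 'John'], '10': ['Mike']}
```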
| 1 | 2016-08-18T21:13:35Z | [
"python",
"excel",
"dictionary"
] |