title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Pip installing to wrong virtual environment | 38,933,360 | <p>I have two Django projects each with a virtual environment called <code>myvenv</code>. I am trying to use pip to install a package called psycopg2. It is installing to the wrong virtual environment even though the <code>which pip</code> command confirms that pip is being run within the correct virtual environment.</p>
<p>I rebooted my mac so as to start without any virtual environments running. My terminal log looks like this when I try to use pip:</p>
<pre><code>BillMacBookPro:~ billnoble$ cd ~/documents/yhistory-server
BillMacBookPro:yhistory-server billnoble$ source ~/documents/yhistory-server/myvenv/bin/activate
(myvenv) BillMacBookPro:yhistory-server billnoble$ which pip
/Users/billnoble/Documents/YHistory-Server/myvenv/bin/pip
(myvenv) BillMacBookPro:yhistory-server billnoble$ brew install postgresql
Warning: postgresql-9.4.4 already installed
Warning: You are using OS X 10.11.
We do not provide support for this pre-release version.
You may encounter build failures or other breakage.
(myvenv) BillMacBookPro:yhistory-server billnoble$ pip install psycopg2
Requirement already satisfied (use --upgrade to upgrade): psycopg2 in /Users/billnoble/Documents/VeeUServer/myvenv/lib/python3.4/site-packages
You are using pip version 8.1.1, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
</code></pre>
<p>I am successfully starting my virtual environment in the YHistory-Server directory and <code>which pip</code> confirms that pip is running from this virtual environment. However, when I run pip it reports that the package is installed in another virtual environment on my computer. Why is it using the wrong path? How can I make it install in the correct virtual environment?</p>
| 0 | 2016-08-13T13:45:28Z | 38,933,486 | <p>I think when you created your second virtualenv you did so with the first one already activated, possibly chaining them together. Try recreating your virtual environments, and when you create the second one <strong>make sure</strong> you deactivate the first one beforehand.</p>
| 1 | 2016-08-13T14:00:51Z | [
"python",
"django",
"osx",
"virtualenv"
] |
Python operation data List in loop | 38,933,384 | <p>I want to solve my programming problem, the problem goes like so:</p>
<pre><code>input: 3 #if i input as first input example 3, output is [1, 2, 3]
[1, 2, 3]
input: 2 #2nd input example 2 output is [1, 2] + [1, 2, 3] = [2, 4, 3]
[2, 4, 3]
input: 6 #3rd input [2, 4, 3] + [1, 2, 3, 4, 5, 6] = [3, 6, 6, 4, 5, 6]
[3, 6, 6, 4, 5, 6]
</code></pre>
<p>My code:</p>
<pre><code>while True:
a = input('Input : ')
n = range (1,a+1,1)
print n
</code></pre>
<p>Outputs:</p>
<pre><code> Input : 3
[1, 2, 3]
Input : 2
[1, 2]
Input : 6
[1, 2, 3, 4, 5, 6]
</code></pre>
<p>How can I solve this problem?</p>
| 2 | 2016-08-13T13:48:36Z | 38,933,490 | <p>Building on your existing code, I would use <a href="https://docs.python.org/2/library/itertools.html#itertools.izip_longest" rel="nofollow"><code>itertools.izip_longest</code></a> (Python 2; for Python 3 use <code>itertools.zip_longest</code>):</p>
<pre><code>>>> import itertools
>>> nxt = []
>>> while True:
a = input('Input : ')
n = range(1, a+1, 1) # could change to range(1, a+1)
nxt = map(sum, itertools.izip_longest(n, nxt, fillvalue=0))
print nxt
</code></pre>
<p>Which yields:</p>
<pre><code>Input : 3
[1, 2, 3]
Input : 2
[2, 4, 3]
Input : 6
[3, 6, 6, 4, 5, 6]
</code></pre>
| 3 | 2016-08-13T14:01:42Z | [
"python",
"list",
"loops",
"sum",
"condition"
] |
Python operation data List in loop | 38,933,384 | <p>I want to solve my programming problem, the problem goes like so:</p>
<pre><code>input: 3 #if i input as first input example 3, output is [1, 2, 3]
[1, 2, 3]
input: 2 #2nd input example 2 output is [1, 2] + [1, 2, 3] = [2, 4, 3]
[2, 4, 3]
input: 6 #3rd input [2, 4, 3] + [1, 2, 3, 4, 5, 6] = [3, 6, 6, 4, 5, 6]
[3, 6, 6, 4, 5, 6]
</code></pre>
<p>My code:</p>
<pre><code>while True:
a = input('Input : ')
n = range (1,a+1,1)
print n
</code></pre>
<p>Outputs:</p>
<pre><code> Input : 3
[1, 2, 3]
Input : 2
[1, 2]
Input : 6
[1, 2, 3, 4, 5, 6]
</code></pre>
<p>How can I solve this problem?</p>
| 2 | 2016-08-13T13:48:36Z | 38,933,577 | <p>You can use <code>map</code> with <code>None</code> as the function (Python 2 only):</p>
<pre><code>result = []
while True:
a = input('Input : ')
n = range(1, a+1)
result = [(x or 0) + (y or 0) for x,y in map(None, n, result)]
print result
</code></pre>
<p>and result would be:</p>
<pre><code>Input : 3
[1, 2, 3]
Input : 2
[2, 4, 3]
Input : 6
[3, 6, 6, 4, 5, 6]
</code></pre>
| 1 | 2016-08-13T14:12:10Z | [
"python",
"list",
"loops",
"sum",
"condition"
] |
Python operation data List in loop | 38,933,384 | <p>I want to solve my programming problem, the problem goes like so:</p>
<pre><code>input: 3 #if i input as first input example 3, output is [1, 2, 3]
[1, 2, 3]
input: 2 #2nd input example 2 output is [1, 2] + [1, 2, 3] = [2, 4, 3]
[2, 4, 3]
input: 6 #3rd input [2, 4, 3] + [1, 2, 3, 4, 5, 6] = [3, 6, 6, 4, 5, 6]
[3, 6, 6, 4, 5, 6]
</code></pre>
<p>My code:</p>
<pre><code>while True:
a = input('Input : ')
n = range (1,a+1,1)
print n
</code></pre>
<p>Outputs:</p>
<pre><code> Input : 3
[1, 2, 3]
Input : 2
[1, 2]
Input : 6
[1, 2, 3, 4, 5, 6]
</code></pre>
<p>How can I solve this problem?</p>
| 2 | 2016-08-13T13:48:36Z | 38,933,849 | <blockquote>
<p>but i can not use external code</p>
</blockquote>
<p>Then you can write some code that does exactly what <code>izip_longest</code> does, with zero padding when the new entry is longer than the previous result.</p>
<p>The sums are performed in a <em>list comprehension</em>, where the values and indices from the input list are obtained by applying <code>enumerate</code> to the entry in the comprehension. Values from the accumulated list are indexed and added to the new values at the same index:</p>
<pre><code>tot = []
while True:
a = input('Input : ')
n = range (1,a+1,1)
x, y = len(tot), len(n)
if y > x:
tot[x:y] = [0]*(y-x) # pad the tot list with zeros
tot[:y] = [tot[i]+v for i, v in enumerate(n)]
print tot
</code></pre>
<p>Output:</p>
<pre><code>Input : 3
[1, 2, 3]
Input : 2
[2, 4, 3]
Input : 6
[3, 6, 6, 4, 5, 6]
Input : 1
[4, 6, 6, 4, 5, 6]
Input : 0
[4, 6, 6, 4, 5, 6]
</code></pre>
| 0 | 2016-08-13T14:44:22Z | [
"python",
"list",
"loops",
"sum",
"condition"
] |
How do I count the number of times a function is called? | 38,933,393 | <p>Here is my code. What I am trying to do is apply a function I wrote to a numpy array column-wise. To track the progress of the program, I want to do something similar to what I can do in a for loop with <code>if i % 100 == 0: print i</code></p>
<pre><code>from sklearn.mixture import GMM
gmm = GMM(n_components=2)
def getFunc(x):
print 1
return gmm.fit_predict(np.expand_dims(x,axis=1))
newX = np.apply_along_axis(getFunc, 0, inputX)
</code></pre>
| 1 | 2016-08-13T13:49:30Z | 38,933,443 | <p>There's no such facility; in fact, there can't be, because <code>apply_along_axis</code> is a monolithic call (and that's a good thing, for performance reasons) that mostly happens in C/CPython. There's no facility to communicate the number of already "processed" items back to Python; considering the Global Interpreter Lock, I don't even see how that could happen.</p>
<p>So, no, this won't work, unless your <code>getFunc</code> updates a <code>global</code> counter, which might or might not be a good idea, considering that it's actually pretty close to being a lambda (you'll probably get a speed improvement if you use <code>lambda x: gmm.fit_predict(np.expand_dims(x, axis=1))</code> instead of <code>getFunc</code>, or just use <code>np.vectorize</code>!).</p>
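<p>For completeness, here is a minimal sketch of the <code>global</code>-counter workaround mentioned above (illustrative only; a small dummy array and a plain <code>sum</code> stand in for <code>inputX</code> and <code>gmm.fit_predict</code>):</p>

```python
import numpy as np

calls = {'n': 0}  # mutable container, so the inner function can update it

def get_func(x):
    calls['n'] += 1
    if calls['n'] % 2 == 0:  # report progress every 2 calls
        print(calls['n'])
    return x.sum()  # stand-in for gmm.fit_predict(...)

A = np.arange(12).reshape(3, 4)
result = np.apply_along_axis(get_func, 0, A)  # one call per column
```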
| 1 | 2016-08-13T13:55:00Z | [
"python",
"numpy"
] |
How do I count the number of times a function is called? | 38,933,393 | <p>Here is my code. What I am trying to do is apply a function I wrote to a numpy array column-wise. To track the progress of the program, I want to do something similar to what I can do in a for loop with <code>if i % 100 == 0: print i</code></p>
<pre><code>from sklearn.mixture import GMM
gmm = GMM(n_components=2)
def getFunc(x):
print 1
return gmm.fit_predict(np.expand_dims(x,axis=1))
newX = np.apply_along_axis(getFunc, 0, inputX)
</code></pre>
| 1 | 2016-08-13T13:49:30Z | 38,933,493 | <p>You could try writing a decorator for your function.</p>
<pre><code>from functools import wraps
import numpy as np
from sklearn.mixture import GMM
gmm = GMM(n_components=2)

def log_every(n):
    def decorator(fn):
        counter = [0]  # mutable container so the nested function can update it
        @wraps(fn)
        def wrapper(*args, **kwargs):
            counter[0] += 1
            if counter[0] % n == 0:
                print(counter[0])  # report progress
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@log_every(100)
def getFunc(x):
    return gmm.fit_predict(np.expand_dims(x, axis=1))

newX = np.apply_along_axis(getFunc, 0, inputX)
</code></pre>
<p>But, as @MarcusMüller points out, performance would probably suffer.</p>
| 2 | 2016-08-13T14:02:08Z | [
"python",
"numpy"
] |
How do I count the number of times a function is called? | 38,933,393 | <p>Here is my code. What I am trying to do is apply a function I wrote to a numpy array column-wise. To track the progress of the program, I want to do something similar to what I can do in a for loop with <code>if i % 100 == 0: print i</code></p>
<pre><code>from sklearn.mixture import GMM
gmm = GMM(n_components=2)
def getFunc(x):
print 1
return gmm.fit_predict(np.expand_dims(x,axis=1))
newX = np.apply_along_axis(getFunc, 0, inputX)
</code></pre>
| 1 | 2016-08-13T13:49:30Z | 38,934,312 | <p>A <code>global</code> counter should work, so would a mutable counter</p>
<pre><code>In [1472]: A=np.arange(12).reshape(3,4)
In [1473]: def foo(x):
...: counter[0] +=1
...: print(counter)
...: return sum(x)
...:
In [1474]: counter=[0]
In [1475]: np.apply_along_axis(foo,0,A)
[1]
[2]
[3]
[4]
Out[1475]: array([12, 15, 18, 21])
</code></pre>
<p>Note that in this case <code>apply_along_axis</code> is </p>
<pre><code>In [1476]: [sum(col) for col in A.T]
Out[1476]: [12, 15, 18, 21]
In [1477]: A.sum(axis=0)
Out[1477]: array([12, 15, 18, 21])
</code></pre>
<p>Why are you using <code>apply_along_axis</code>? What's the dimension of <code>inputX</code>? </p>
<p>In a 2d case, <code>apply_along_axis</code> is basically:</p>
<pre><code>[sum(A[(slice(None),i)]) for i in range(A.shape[1])]
</code></pre>
<p>If <code>A</code> is higher dimensioned, it takes care of iterating on <code>(slice(None),i,j...)</code> for all <code>i,j...</code>. But there are other ways of generating those indices. It provides convenience, not speed or functionality.</p>
| 1 | 2016-08-13T15:36:34Z | [
"python",
"numpy"
] |
How to make your character move without tapping repeatedly on buttons? | 38,933,399 | <p>I'm working on a racing game. I have been using this code, but my player only moves one step per key press instead of moving continuously while the key is held. How do I solve this problem?</p>
<pre><code>change = 7
dist = 7
change_r = 0
change_l = 0
dist_u = 0
dist_d = 0
pygame.display.update()
for event in pygame.event.get():
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_RIGHT:
change_r = True
elif event.key == pygame.K_LEFT:
change_l = True
elif event.key == pygame.K_UP:
dist_u = True
elif event.key == pygame.K_DOWN:
dist_d = True
if event.type == pygame.KEYUP:
if event.key == pygame.K_RIGHT or event.key == pygame.K_LEFT or event.key == pygame.K_UP or event.key == pygame.K_DOWN:
change_r = False
change_l = False
dist_u = False
dist_d = False
clock.tick(60)
if change_r == True:
x += change
if change_l == True:
x -= change
if dist_u == True:
y -= dist
if dist_d == True:
y += dist
</code></pre>
| 1 | 2016-08-13T13:49:52Z | 38,933,461 | <p>You should have a main game loop that runs regardless of your event handling, and place your <code>clock.tick()</code> call and your movement code inside it. Namely:</p>
<pre><code>#some constants here
while True:
pygame.display.update()
clock.tick(60)
    # update your game state
if change_r == True:
x += change
if change_l == True:
x -= change
if dist_u == True:
y -= dist
if dist_d == True:
y += dist
for event in pygame.event.get():
# react to player input by changing the game state
</code></pre>
| 0 | 2016-08-13T13:58:00Z | [
"python",
"pygame",
"keydown"
] |
How to make your character move without tapping repeatedly on buttons? | 38,933,399 | <p>I'm working on a racing game. I have been using this code, but my player only moves one step per key press instead of moving continuously while the key is held. How do I solve this problem?</p>
<pre><code>change = 7
dist = 7
change_r = 0
change_l = 0
dist_u = 0
dist_d = 0
pygame.display.update()
for event in pygame.event.get():
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_RIGHT:
change_r = True
elif event.key == pygame.K_LEFT:
change_l = True
elif event.key == pygame.K_UP:
dist_u = True
elif event.key == pygame.K_DOWN:
dist_d = True
if event.type == pygame.KEYUP:
if event.key == pygame.K_RIGHT or event.key == pygame.K_LEFT or event.key == pygame.K_UP or event.key == pygame.K_DOWN:
change_r = False
change_l = False
dist_u = False
dist_d = False
clock.tick(60)
if change_r == True:
x += change
if change_l == True:
x -= change
if dist_u == True:
y -= dist
if dist_d == True:
y += dist
</code></pre>
| 1 | 2016-08-13T13:49:52Z | 38,935,542 | <p>There are different ways you could go about making movable shapes/sprites using Pygame. The code you posted in your question seems overly complex. I'll give you two ways that are simpler and easier to use: using a sprite class, and using a function.</p>
<p>It may not seem like I'm answering your question, but if I just told you what to fix in your code you posted, I'd just be knowingly steering you down a road that will lead to many problems, and much complexity in the future.</p>
<h2><strong>Method 1: Using a sprite class</strong></h2>
<p>To make a your race car sprite, you could use a sprite class like the one below.</p>
<pre><code>class Player(pygame.sprite.Sprite):
def __init__(self):
pygame.sprite.Sprite.__init__(self)
self.image = pygame.Surface((50, 50))
self.image.fill((255, 0, 0))
self.rect = self.image.get_rect()
self.rect.x = WIDTH / 2
self.rect.y = HEIGHT / 2
self.vx = 0
self.vy = 0
def update(self):
self.vx = 0
self.vy = 0
key = pygame.key.get_pressed()
if key[pygame.K_LEFT]:
self.vx = -5
elif key[pygame.K_RIGHT]:
self.vx = 5
if key[pygame.K_UP]:
self.vy = -5
elif key[pygame.K_DOWN]:
self.vy = 5
self.rect.x += self.vx
self.rect.y += self.vy
</code></pre>
<p>Since your class is inheriting from Pygame's sprite class, <strong>You must name your image</strong> <code>self.image</code> <strong>and you must name your rectangle for the image</strong> <code>self.rect</code>. As you can also see, the class has two main methods. One for creating the sprite(<code>__init__</code>) and one for updating the sprite(<code>update</code>)</p>
<p>To use your class, make a Pygame sprite group to hold all your sprites, and then add your player object to the group:</p>
<pre><code>sprites = pygame.sprite.Group()
player = Player()
sprites.add(player)
</code></pre>
<p>And to actually render your sprites to the screen, call <code>sprites.update()</code> and <code>sprites.draw()</code> in your game loop, where you update the screen:</p>
<pre><code>sprites.update()
window_name.fill((200, 200, 200))
sprites.draw(window_name)
pygame.display.flip()
</code></pre>
<p>The reason i highly recommended using sprite classes, is that it will make your code look much cleaner, and be much easier to maintain. You could even move each sprite class to their own separate file.</p>
<p>Before diving fully into the method above however, you should read up on <a href="http://www.pygame.org/docs/ref/rect.html" rel="nofollow">pygame.Rect</a> objects and <a href="http://www.pygame.org/docs/ref/sprite.html" rel="nofollow">pygame.sprite</a> objects, as you'll be using them. </p>
<h2><strong>Method 2: Using A function</strong></h2>
<p>If you prefer not to get into sprite classes, you can create your game entities using a function similar to the one below.</p>
<pre><code>def create_car(surface, x, y, w, h, color):
rect = pygame.Rect(x, y, w, h)
pygame.draw.rect(surface, color, rect)
return rect
</code></pre>
<p>If you would still like to use a sprites, but don't want to make a class just modify the function above slightly:</p>
<pre><code>def create_car(surface, x, y, path_to_img):
    img = pygame.image.load(path_to_img)
    rect = img.get_rect()
    rect.topleft = (x, y)
    surface.blit(img, (x, y))
    return rect
</code></pre>
<p>Here is an example of how i would use the functions above to make a movable rectangle/sprite:</p>
<pre><code>import pygame
WIDTH = 640
HEIGHT = 480
display = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Moving Player Test")
clock = pygame.time.Clock()
FPS = 60
def create_car(surface, x, y, w, h, color):
rect = pygame.Rect(x, y, w, h)
pygame.draw.rect(surface, color, rect)
return rect
running = True
vx = 0
vy = 0
player_x = WIDTH / 2 # middle of screen width
player_y = HEIGHT / 2 # middle of screen height
player_speed = 5
while running:
clock.tick(FPS)
for e in pygame.event.get():
if e.type == pygame.QUIT:
running = False
pygame.quit()
quit()
if e.type == pygame.KEYDOWN:
if e.key == pygame.K_LEFT:
vx = -player_speed
elif e.key == pygame.K_RIGHT:
vx = player_speed
if e.key == pygame.K_UP:
vy = -player_speed
elif e.key == pygame.K_DOWN:
vy = player_speed
if e.type == pygame.KEYUP:
if e.key == pygame.K_LEFT or e.key == pygame.K_RIGHT or\
e.key == pygame.K_UP or e.key == pygame.K_DOWN:
vx = 0
vy = 0
player_x += vx
player_y += vy
display.fill((200, 200, 200))
####make the player#####
player = create_car(display, player_x, player_y, 10, 10, (255, 0, 0))
pygame.display.flip()
</code></pre>
<p>As you may see above, your code for moving your race car can be simplified. </p>
<p>I should note, I'm assuming a few things with each method outlined above. </p>
<ol>
<li><p>That you're either using a circle, square, or some type of pygame shape object. </p></li>
<li><p>Or that you're using a sprite.</p></li>
</ol>
<p>If you're not currently using any of the methods outlined above, I suggest that you do. Doing so will make your code much easier to maintain once you begin to build larger and more complex games.</p>
| 1 | 2016-08-13T18:03:34Z | [
"python",
"pygame",
"keydown"
] |
python setup.py - how to show message after install | 38,933,402 | <p>I am developing a PyQt5 app and making it available via <code>pip install</code> now that pip in python3 can install pyqt5 as dependence. I made an entry point to launch my package, and told setup.py that it's a gui_scripts. </p>
<p>What I would like to do now, is after the person typing <code>pip install package</code>, and the installation is finished, display a message to the person telling that you can now type <code>package</code> in the terminal to load the application. What's the correct way of doing that? Or should I not do this?</p>
| 0 | 2016-08-13T13:50:25Z | 38,933,723 | <p>The <strong>setup.py</strong> is almost a regular Python script.
Just use the <code>print()</code> function at the end of your <strong>setup.py</strong> file.
In this example the file structure is somedir/setup.py, somedir/test/ and test/<code>__init__.py</code></p>
<h3>Simple solution</h3>
<pre><code>from setuptools import setup
print("Started!")
setup(name='testing',
version='0.1',
description='The simplest setup in the world',
classifiers=[
'Development Status :: 3 - Alpha',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.0',
],
keywords='setup',
author='someone',
author_email='someone@example.com',
license='MIT',
packages=['test'],
entry_points={
},
zip_safe=False)
print("Finished!")
</code></pre>
<blockquote>
<p>Started! <br>running install <br>running bdist_egg <br>running egg_info<br> writing testing.egg-info/PKG-INFO <br>
...<br> ...<br> ...<br>
Processing dependencies for testing==0.1 <br>Finished processing
dependencies for testing==0.1 <br>Finished!<br></p>
</blockquote>
<h3>Using <code>setuptools.command.install</code> solution</h3>
<p>Also, you can subclass <code>setuptools.command.install</code>. Check the difference when you change the order of <code>install.run(self)</code> and <code>os.system("cat testing.egg-info/PKG-INFO")</code> in a clean setup.</p>
<pre><code>from setuptools import setup
from setuptools.command.install import install
import os
class PostInstallCommand(install):
"""Post-installation for installation mode."""
def run(self):
install.run(self)
os.system("cat testing.egg-info/PKG-INFO")
setup(name='testing',
version='0.1',
description='The simplest setup in the world',
classifiers=[
'Development Status :: 3 - Alpha',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.0',
],
keywords='setup',
author='someone',
author_email='someone@example.com',
license='MIT',
packages=['test'],
entry_points={
},
cmdclass={
'install': PostInstallCommand,
},
zip_safe=False)
</code></pre>
| 1 | 2016-08-13T14:30:18Z | [
"python",
"python-3.x",
"setup.py"
] |
Curl to requests; max retries exceeded | 38,933,668 | <p>I've seen this:
<a href="http://stackoverflow.com/questions/31041994/python-requests-equivalent-to-curl-h">python requests equivalent to curl -H</a></p>
<p>but when I try and make my own python request I get the "Max retries exceeded with url:" error.</p>
<p>I am trying to convert this command</p>
<pre><code>curl.exe -H "x-api-key: aORMdWt3AX90YewgsRfYM7Y77eUQws8M75Mb8TIF" https://cqh77pglf1.executeÂapi.usÂwestÂ2.amazonaws.com/prod/data/location/Â71.1043443253471,Â42.3150676015829/time/2009Â12Â25
</code></pre>
<p>into </p>
<pre><code>import requests
headers = {"x-api-key": "aORMdWt3AX90YewgsRfYM7Y77eUQws8M75Mb8TIF"}
r = requests.post('https://cqh77pglf1.execute-api.us-west-2.amazonaws.com/prod/data/location/-71.1043443253471,-42.3150676015829/time/2009-12-25', headers=headers)
</code></pre>
<p>(I've tried get as well)</p>
<p>Any help? Thanks in advance!</p>
| 0 | 2016-08-13T14:24:44Z | 38,935,076 | <p>Try <code>requests.get</code> instead of <code>requests.post</code>, that curl command is probably calling GET and not POST.</p>
| 0 | 2016-08-13T17:03:16Z | [
"python",
"curl"
] |
Removing brackets in a list of tuples | 38,933,747 | <p>I have a list of tuples as below. </p>
<pre><code>>>>> date_lst
[(2013, 8, 13, 17),
(2013, 8, 5, 17),
(2013, 6, 26, 17),
(2013, 8, 7, 17),
(2013, 8, 7, 18),
(2013, 8, 8, 16),
(2013, 8, 8, 18),
(2013, 8, 7, 17),
(2013, 8, 7, 17)]
</code></pre>
<p>I want to print this list as below, with the tuple brackets removed. What are some possible ways to do so?</p>
<pre><code>2013, 8, 13, 17
2013, 8, 5, 17
2013, 6, 26, 17
2013, 8, 7, 17
2013, 8, 7, 18
2013, 8, 8, 16
2013, 8, 8, 18
2013, 8, 7, 17
2013, 8, 7, 17
</code></pre>
| -3 | 2016-08-13T14:32:52Z | 38,933,777 | <p>The brackets aren't in the data, so there is nothing to remove. The brackets and parenthesis only show up when you try to print the data. You simply need to extract the data from the list:</p>
<pre><code>for data in date_lst:
    print("%s, %s, %s, %s" % data)
</code></pre>
| 0 | 2016-08-13T14:36:25Z | [
"python"
] |
Removing brackets in a list of tuples | 38,933,747 | <p>I have a list of tuples as below. </p>
<pre><code>>>>> date_lst
[(2013, 8, 13, 17),
(2013, 8, 5, 17),
(2013, 6, 26, 17),
(2013, 8, 7, 17),
(2013, 8, 7, 18),
(2013, 8, 8, 16),
(2013, 8, 8, 18),
(2013, 8, 7, 17),
(2013, 8, 7, 17)]
</code></pre>
<p>I want to print this list as below, with the tuple brackets removed. What are some possible ways to do so?</p>
<pre><code>2013, 8, 13, 17
2013, 8, 5, 17
2013, 6, 26, 17
2013, 8, 7, 17
2013, 8, 7, 18
2013, 8, 8, 16
2013, 8, 8, 18
2013, 8, 7, 17
2013, 8, 7, 17
</code></pre>
| -3 | 2016-08-13T14:32:52Z | 38,933,806 | <p>Cast every item in tuple to string, then join them.</p>
<pre><code>[', '.join(map(str, x)) for x in date_lst]
</code></pre>
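<p>As a quick sketch with the first two tuples from the question:</p>

```python
date_lst = [(2013, 8, 13, 17), (2013, 8, 5, 17)]

# join each tuple's items into one comma-separated string
lines = [', '.join(map(str, x)) for x in date_lst]
for line in lines:
    print(line)
# prints:
# 2013, 8, 13, 17
# 2013, 8, 5, 17
```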
| 0 | 2016-08-13T14:38:39Z | [
"python"
] |
cant get wsgi and python to work in wamp | 38,933,789 | <p>wamp version : <strong>2.5 , 64 bit</strong> <br>
python version : <strong>3.5.2 , 64 bit</strong><br>
windows version: <strong> 10 ,64 bit</strong><br>
apache version: <strong> 2.4.9</strong><br>
mod_wsgi: <strong>4.4.23 </strong>
<br><br>
I am trying to make Python and Flask work in WAMP. I read online that I need mod_wsgi, which I downloaded <strong><a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#mod_wsgi" rel="nofollow">from here</a></strong> and put in the Apache modules directory: <strong>C:\wamp\bin\apache\apache2.4.9\modules</strong><br>
I edited my httpd.conf file and added the following: <br></p>
<pre><code>LoadModule wsgi_module modules/mod_wsgi.so
WSGIPythonHome C:/Python35/
</code></pre>
<p>wamp was yellow in the beginning, but I figured it out: all you have to do is stop the cgi module. The instructions online then say to make a virtual host, so I did:<br></p>
<pre><code>WSGIScriptAlias /flaskapp C:/wamp/www/flaskapp/flaskapp.wsgi
<VirtualHost *>
    ServerName localhost
    <Directory C:/wamp/www/flaskapp>
        Require all granted
    </Directory>
</VirtualHost>
</code></pre>
<p>it also says to make a <strong>.wsgi</strong> so i made a file i called <strong>flaskapp.wsgi</strong> it contains : <br></p>
<pre><code>import sys
#Expand Python classes path with your app's path
sys.path.insert(0, 'C:/wamp/www/flaskapp')
from __init__ import app
#Put logging code (and imports) here ...
#Initialize WSGI app object
application = app
</code></pre>
<p>after all this it should work normally but whenever I go to localhost/flaskapp/<code>__init__</code>.py I just get the same text in <code>__init__</code>.py which is :<br></p>
<pre><code>from flask import Flask, request
app = Flask(__name__)
@app.route('/hello')
def hello_world():
return "Hello World"
if __name__ == '__main__':
app.run()
</code></pre>
<p>btw I have Flask installed in my global env, and Python is set up for all users and is on PATH.<br> I have tried everything, so obviously something is wrong because it's not working. Please help if you know what it is.</p>
| 0 | 2016-08-13T14:37:25Z | 38,936,929 | <p>So basically I disabled cgi, because when I added wsgi WAMP was yellow, and when I disabled cgi WAMP became green, so I thought cgi was the problem. When I was about to give up, I decided to return everything to how it was. I turned cgi back on, WAMP went yellow, and then</p>
<h1>GREEN</h1>
<p><code>I DONT KNOW WHY</code>, but after that everything worked fine.<br> THANKS TO ANYONE WHO TRIED TO HELP ME :)</p>
| 0 | 2016-08-13T20:55:06Z | [
"python",
"apache",
"flask",
"wamp",
"wsgi"
] |
How to find the minimum number of tiles needed to cover holes in a roof | 38,933,872 | <p>I'm just learning about dynamic programming, and I've stumbled upon a problem which I am not sure how to formulate in Python:</p>
<blockquote>
<p>Given a binary array <code>H</code> of length 55, a <code>1</code> indicates a hole in the roof, <code>0</code> indicates no hole.
The tiles you can use have length 1, 13 or 55, and the cost to deploy each is 3, 13 and 50, respectively.
For a given array of holes <code>H</code> return the minimum cost such that all the holes are covered.</p>
</blockquote>
<p>From what I learned, the first step is to find the base cases, and to reason by induction.
So, here are some base cases I could easily find:</p>
<ul>
<li>a tile of size 13 is more convenient than 5 tiles of size 1 (cost: 13 vs 15 or more)</li>
<li>a tile of size 55 is more convenient than 4 tiles of size 13 (cost: 50 vs 52 or more)</li>
</ul>
<p>Initially I thought the first point means that if there are 5 or more holes in 13 contiguous spaces I should always choose the 13-tile. However, I think it depends on the holes that follow.</p>
<p>The second point is even more problematic if you throw in 1-tiles in the problem. Consider, e.g., 4 single holes at locations <code>[0, 15, 29, 44]</code> you're better off with 4 1-tiles (1 x 55-tile costs 50, 4 x 13-tiles = 52).</p>
<p>So it looks like I have to evaluate how spaced out the holes are, for all the possible combinations of slices in the array.</p>
<p>How can I formulate the above into (even pseudo-) code?</p>
| 1 | 2016-08-13T14:47:17Z | 38,934,300 | <p>Let's say cost[i] is the best cost to cover the first i elements of the roof.
Obviously cost[0] = 0 (we don't need any money to cover zero positions).</p>
<p>Let's describe our state as (position, cost).</p>
<p>From state (i,cost[i]) we can get to 4 different potential states:</p>
<ul>
<li>(i + 1, cost[i] + 3) (when we use tile of length 1 and cost is 3)</li>
<li>(i + 13, cost[i] + 13) (tile length = 13, cost is also 13)</li>
<li>(i + 55, cost[i] + 50) (tile length = 55, cost is 50)</li>
<li>(i + 1, cost[i]) (we ignore current position and don't use any tile here)</li>
</ul>
<p>Once we change state using one of the above rules we should consider:</p>
<ul>
<li>position should be <= total Length (55)</li>
<li>if we get to position <strong>i</strong> with same or bigger cost we don't want to proceed (basically dynamic programming plays role here, if we get to the sub-problem with same or worse result we don't want to proceed).</li>
<li>we can't skip tile (our 4th state transformation) if this tile has hole.</li>
</ul>
<p>Once we run all these state transformations, the answer will be at cost[total length (55)].</p>
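<p>The transitions above can be sketched directly in Python (an illustrative implementation, assuming the roof is given as a binary list <code>holes</code>):</p>

```python
def min_cost(holes, tiles=((1, 3), (13, 13), (55, 50))):
    # cost[i] = cheapest way to handle the first i positions of the roof
    INF = float('inf')
    n = len(holes)
    cost = [0] + [INF] * n
    for i in range(n):
        if cost[i] == INF:
            continue
        # transitions 1-3: place a tile of each length starting at position i
        for length, price in tiles:
            j = min(i + length, n)  # position must not exceed the total length
            cost[j] = min(cost[j], cost[i] + price)
        # transition 4: skip position i, allowed only if it has no hole
        if holes[i] == 0:
            cost[i + 1] = min(cost[i + 1], cost[i])
    return cost[n]
```

<p>This reproduces the trade-offs from the question: four isolated holes at positions 0, 15, 29 and 44 cost 12 (four 1-tiles), while five holes packed into positions 0 to 4 cost 13 (one 13-tile).</p>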
| 3 | 2016-08-13T15:34:50Z | [
"python",
"algorithm",
"dynamic-programming"
] |
Python regex for removing scraping results according to substrings? | 38,933,897 | <p>I have written a scraper in Python. I have a group of strings which I want to search for on the page, and from the results I want to remove those which contain words from another group of strings I have.</p>
<p>Here is the code - </p>
<pre><code>def find_jobs(self, company, soup):
allowed = re.compile(r"Developer|Engineer|Designer|Admin|Manager|Writer|Executive|Lead|Analyst|Editor|"
r"Associate|Architect|Recruiter|Specialist|Scientist|Support|Expert|SSE|Head|"
r"Producer|Evangelist|Ninja", re.IGNORECASE)
not_allowed = re.compile(r"^responsibilities$|^description$|^requirements$|^experience$|^empowering$|^engineering$|^"
r"find$|^skills$|^recruiterbox$|^google$|^communicating$|^associated$|^internship$|^you$|^"
r"proficient$|^leadsquared$|^referral$|^should$|^must$|^become$|^global$|^degree$|^good$|^"
r"capabilities$|^leadership$|^services$|^expertise$|^architecture$|^hire$|^follow$|^jobs$|^"
r"procedures$|^conduct$|^perk$|^missed$|^generation$|^search$|^tools$|^worldwide$|^contact$|^"
r"question$|^intern$|^classes$|^trust$|^ability$|^businesses$|^join$|^industry$|^response$|^"
r"using$|^work$|^based$|^grow$|^provide$|^understand$|^header$|^headline$|^masthead$|^office$", re.IGNORECASE)
profile_list = set()
k = soup.body.findAll(text=allowed)
for i in k:
if len(i) < 60 and not_allowed.search(i) is None:
profile_list.add(i.strip().upper())
self.update_jobs(company, profile_list)
</code></pre>
<p>So I am facing a problem here. With the anchor tags in <code>not_allowed</code>, strings such as <code>//HEADLINE-BG</code> and <code>ABILITY TO LEAD & MENTOR A TEAM</code> are getting through, although I have the strings <code>headline</code> and <code>ability</code> in <code>not_allowed</code>. These are removed if I remove the anchor tags, but then a string such as <code>SCALABILITY ENGINEER</code> does not get saved, due to the string <code>ability</code> in <code>not_allowed</code>. Being a newbie with regex, I am not sure how I can get this to work. Earlier I was using this:</p>
<pre><code>def find_jobs(self, company, soup):
allowed = re.compile(r"Developer|Designer|Engineer|Admin|Manager|Writer|Executive|Lead|Analyst|Editor|"
r"Associate|Architect|Recruiter|Specialist|Scientist|Support|Expert|SSE|Head"
r"Producer|Evangelist|Ninja", re.IGNORECASE)
not_allowed = ['responsibilities', 'description', 'requirements', 'experience', 'empowering', 'engineering',
'find', 'skills', 'recruiterbox', 'google', 'communicating', 'associated', 'internship',
'proficient', 'leadsquared', 'referral', 'should', 'must', 'become', 'global', 'degree', 'good',
'capabilities', 'leadership', 'services', 'expertise', 'architecture', 'hire', 'follow',
'procedures', 'conduct', 'perk', 'missed', 'generation', 'search', 'tools', 'worldwide', 'contact',
'question', 'intern', 'classes', 'trust', 'ability', 'businesses', 'join', 'industry', 'response', 'you', 'using', 'work', 'based', 'grow', 'provide']
profile_list = set()
k = soup.body.findAll(text=allowed)
for i in k:
if len(i) < 60 and not any(x in i.lower() for x in not_allowed):
profile_list.add(i.strip().upper())
self.update_jobs(company, profile_list)
</code></pre>
<p>But this also omitted a string if it contained any substring present in <code>not_allowed</code>. Can anyone please help with this?</p>
 | 0 | 2016-08-13T14:50:25Z | 38,933,937 | <p>It looks like you are writing your <code>not_allowed</code> regex incorrectly. Your <code>not_allowed</code> regex is actually looking for those words to be the only thing on the line.</p>
<p><code>re.compile(r'^something_i_dont_like$')</code> is only going to match <code>something_i_dont_like</code> when it is the only thing on the line.</p>
<p>If you want to omit something, you need to use a negative lookahead:</p>
<p><code>re.compile(r'^((?!something_i_dont_like).)*$')</code></p>
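<p>As a quick, hedged sanity check of that pattern (the sample strings here are illustrative, not taken from the scraper):</p>

```python
import re

# Matches only strings that do NOT contain "ability" anywhere
pattern = re.compile(r'^((?!ability).)*$', re.IGNORECASE)

print(bool(pattern.search("SOFTWARE ENGINEER")))     # True: no "ability" inside
print(bool(pattern.search("SCALABILITY ENGINEER")))  # False: contains "ability"
```

<p>Be aware that this rejects any string containing the substring, so it would also drop <code>SCALABILITY ENGINEER</code>; combine it with word boundaries if substring hits should survive.</p>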
| 1 | 2016-08-13T14:55:26Z | [
"python",
"regex",
"string",
"web-scraping"
] |
Python regex for removing scraping results according to substrings? | 38,933,897 | <p>I have a written a scraper in python. I have a group of strings which i want to search on the page and from the result of that, i want to remove those results which contains words from another group of strings i have.</p>
<p>Here is the code - </p>
<pre><code>def find_jobs(self, company, soup):
allowed = re.compile(r"Developer|Engineer|Designer|Admin|Manager|Writer|Executive|Lead|Analyst|Editor|"
r"Associate|Architect|Recruiter|Specialist|Scientist|Support|Expert|SSE|Head|"
r"Producer|Evangelist|Ninja", re.IGNORECASE)
not_allowed = re.compile(r"^responsibilities$|^description$|^requirements$|^experience$|^empowering$|^engineering$|^"
r"find$|^skills$|^recruiterbox$|^google$|^communicating$|^associated$|^internship$|^you$|^"
r"proficient$|^leadsquared$|^referral$|^should$|^must$|^become$|^global$|^degree$|^good$|^"
r"capabilities$|^leadership$|^services$|^expertise$|^architecture$|^hire$|^follow$|^jobs$|^"
r"procedures$|^conduct$|^perk$|^missed$|^generation$|^search$|^tools$|^worldwide$|^contact$|^"
r"question$|^intern$|^classes$|^trust$|^ability$|^businesses$|^join$|^industry$|^response$|^"
r"using$|^work$|^based$|^grow$|^provide$|^understand$|^header$|^headline$|^masthead$|^office$", re.IGNORECASE)
profile_list = set()
k = soup.body.findAll(text=allowed)
for i in k:
if len(i) < 60 and not_allowed.search(i) is None:
profile_list.add(i.strip().upper())
self.update_jobs(company, profile_list)
</code></pre>
<p>So I am facing a problem here. With the anchors (<code>^</code> and <code>$</code>) in <code>not_allowed</code>, strings such as <code>//HEADLINE-BG</code> and <code>ABILITY TO LEAD & MENTOR A TEAM</code> are getting through, although I have the strings <code>headline</code> and <code>ability</code> in <code>not_allowed</code>. These are removed if I remove the anchors, but then a string such as <code>SCALABILITY ENGINEER</code> does not get saved because of the string <code>ability</code> in <code>not_allowed</code>. Being a newbie at regex, I am not sure how I can get this to work. Earlier I was using this - </p>
<pre><code>def find_jobs(self, company, soup):
allowed = re.compile(r"Developer|Designer|Engineer|Admin|Manager|Writer|Executive|Lead|Analyst|Editor|"
r"Associate|Architect|Recruiter|Specialist|Scientist|Support|Expert|SSE|Head"
r"Producer|Evangelist|Ninja", re.IGNORECASE)
not_allowed = ['responsibilities', 'description', 'requirements', 'experience', 'empowering', 'engineering',
'find', 'skills', 'recruiterbox', 'google', 'communicating', 'associated', 'internship',
'proficient', 'leadsquared', 'referral', 'should', 'must', 'become', 'global', 'degree', 'good',
'capabilities', 'leadership', 'services', 'expertise', 'architecture', 'hire', 'follow',
'procedures', 'conduct', 'perk', 'missed', 'generation', 'search', 'tools', 'worldwide', 'contact',
'question', 'intern', 'classes', 'trust', 'ability', 'businesses', 'join', 'industry', 'response', 'you', 'using', 'work', 'based', 'grow', 'provide']
profile_list = set()
k = soup.body.findAll(text=allowed)
for i in k:
if len(i) < 60 and not any(x in i.lower() for x in not_allowed):
profile_list.add(i.strip().upper())
self.update_jobs(company, profile_list)
</code></pre>
<p>But this also omitted a string if it contained any substring present in <code>not_allowed</code>. Can anyone please help with this?</p>
| 0 | 2016-08-13T14:50:25Z | 38,933,960 | <p>The regex</p>
<pre><code>^ability$
</code></pre>
<p>means "the line consists only of the word <code>ability</code>". If you want substring matches, just change it to</p>
<pre><code>ability
</code></pre>
<p>If you want to omit the word "ability", but not "disability", then use something like</p>
<pre><code>\bability\b
</code></pre>
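<p>To illustrate the difference, a small hedged sketch (the sample strings are from the question above):</p>

```python
import re

word = re.compile(r'\bability\b', re.IGNORECASE)

print(bool(word.search("ABILITY TO LEAD & MENTOR A TEAM")))  # True: standalone word
print(bool(word.search("SCALABILITY ENGINEER")))             # False: only a substring
```
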
| 0 | 2016-08-13T14:57:32Z | [
"python",
"regex",
"string",
"web-scraping"
] |
Py2Exe error: error: [Errno 2] No such file or directory: run-py3.5-win32.exe' | 38,934,129 | <p>Similar questions to this have been asked before; however, I have tried quite a lot and nothing seems to be working. My exact error is:</p>
<pre><code>error: [Errno 2] No such file or directory: 'C:\\Users\\Byron\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\py2exe\\run-py3.5-win32.exe'
</code></pre>
<p>What I have tried:</p>
<ul>
<li>Reinstalling Py2Exe (a few times via Pip & Installer)</li>
<li>Disabling Anti-Virus</li>
</ul>
<p>Anything else:</p>
<ul>
<li>I initially installed using Pip after reinstalling python to a different directory</li>
<li>I reinstalled using a MS Windows Installer</li>
<li>I'm using Python 3.5</li>
<li>I have checked where Py2Exe is checking for this file and it is not present or visible. </li>
<li>I have included a screenshot of my Py2Exe installation.</li>
</ul>
<p><strong>Py2Exe Installation Image</strong></p>
<p><a href="http://i.stack.imgur.com/hTCAE.png" rel="nofollow"><img src="http://i.stack.imgur.com/hTCAE.png" alt="Py2Exe install Image"></a></p>
 | 0 | 2016-08-13T15:15:58Z | 39,207,226 | <p>Py2exe does not support Python 3.5, so the file doesn't exist yet.</p>
| 0 | 2016-08-29T13:13:22Z | [
"python",
"windows",
"exe",
"py2exe"
] |
How to use typeshed with mypy? | 38,934,177 | <p>I cloned <a href="https://github.com/python/typeshed/tree/master/stdlib/3.5" rel="nofollow">typeshed</a>, but I can't figure out how to tell mypy to use the type hints it contains; I see no option in <code>mypy --help</code>. The mypy repo does contain a reference to the typeshed repo, but pip installing it doesn't download it.</p>
| 0 | 2016-08-13T15:21:19Z | 38,934,841 | <p>Mypy comes bundled with typeshed by default, so you shouldn't need to do anything -- simply doing <code>pip install mypy-lang</code> will install it correctly. </p>
<p>Note that typeshed is not a Python module, so it isn't possible to import it or otherwise access it from a Python program unless you literally look at the location in the filesystem the stubs are stored.</p>
| 1 | 2016-08-13T16:40:58Z | [
"python",
"type-hinting",
"mypy"
] |
I keep getting 'AttributeError: 'int' object has no attribute 'isdecimal'' not sure how to correct | 38,934,197 | <p>I have been working through the book Automate the Boring Stuff with Python and have come across an example that does not seem to run.</p>
<p>Here is my code:</p>
<pre><code>while True:
print 'Enter your age:'
age = input()
if age.isdecimal():
break
print 'Please enter a number for your age.'
while True:
print 'Select a new password (letters and numbers only):'
password = input()
if password.isalnum():
break
print 'Passwords can only have letters and numbers.'
</code></pre>
<p>Thanks for any help given!</p>
 | -2 | 2016-08-13T15:23:38Z | 39,321,945 | <p>Your indentation is off. Also, the book uses Python 3.5, where <code>print</code> is a function called with parentheses around its arguments. (That also explains the error itself: in Python 2, <code>input()</code> evaluates what you type, so an age comes back as an <code>int</code>, which has no <code>isdecimal</code> method; in Python 3, <code>input()</code> always returns a string.)</p>
<pre><code>while True:
print('Enter your age:')
age = input()
if age.isdecimal():
break
print('Please enter a number for your age.')
while True:
print('Select a new password (letters and numbers only):')
password = input()
if password.isalnum():
break
print('Passwords can only have letters and numbers.')
</code></pre>
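<p>The key detail is that in Python 3 <code>input()</code> always returns a string, so the <code>str</code> methods are available; a quick illustrative check:</p>

```python
age = "42"  # in Python 3, input() always returns a str, never an int
print(age.isdecimal())    # True
print("abc".isdecimal())  # False
```
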
| 1 | 2016-09-04T22:12:49Z | [
"python"
] |
Obtain expression inside one parenthesis inside double parenthesis in python | 38,934,220 | <p>Suppose there is the following code</p>
<pre><code>s= 'hello {world {123} }'
print s[s.find("{")+1:s.find("}")]
</code></pre>
<p>it gives me </p>
<pre><code>"world {123"
</code></pre>
<p>What should I do to obtain </p>
<pre><code>world {123}
</code></pre>
<p>?</p>
 | -1 | 2016-08-13T15:26:28Z | 38,934,295 | <p>When slicing, the <code>end</code> argument is exclusive, so you will need to add 1 to the result of <code>find</code>:</p>
<pre><code>print s[s.find("{") + 1: s.find("}") + 1]
</code></pre>
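<p>A hedged Python 3 sketch of the same idea (<code>find</code> returns the index of the first match, which here happens to be the inner closing brace):</p>

```python
s = 'hello {world {123} }'
start = s.find("{") + 1  # index just past the first opening brace
end = s.find("}") + 1    # include the first closing brace in the slice
print(s[start:end])      # world {123}
```
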
| 0 | 2016-08-13T15:33:58Z | [
"python",
"string",
"parentheses"
] |
Obtain expression inside one parenthesis inside double parenthesis in python | 38,934,220 | <p>Suppose there is the following code</p>
<pre><code>s= 'hello {world {123} }'
print s[s.find("{")+1:s.find("}")]
</code></pre>
<p>it gives me </p>
<pre><code>"world {123"
</code></pre>
<p>What should I do to obtain </p>
<pre><code>world {123}
</code></pre>
<p>?</p>
| -1 | 2016-08-13T15:26:28Z | 38,934,330 | <p>Well, I've never used this find method before, but I messed around a little bit and I've found if you use:</p>
<pre><code>print s[s.find("{")+1:s.find("}")+1]
</code></pre>
<p>it will return the results you're looking for.
Thanks for helping me learn something new.</p>
| 0 | 2016-08-13T15:38:30Z | [
"python",
"string",
"parentheses"
] |
Why is this parameter invalid in sklearn's Pipeline? | 38,934,236 | <p>I am getting the following error for the below code, but cannot figure out why my parameter is invalid. <code>SelectFromModel</code> is a valid input in a Pipeline as it has a fit and transform function. </p>
<pre><code>ValueError: Invalid parameter sfm_threshold for estimator Pipeline.
Check the list of available parameters with
`estimator.get_params().keys()`
</code></pre>
<p></p>
<pre><code>from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import Pipeline
poly = PolynomialFeatures()
std = StandardScaler()
ls = LassoCV(cv=10)
sfm = SelectFromModel(estimator=ls)
lr = LinearRegression()
pipe_lr = Pipeline([('poly', poly),
('std', std),
('sfm', sfm),
('lr', lr)])
param_range_degree = [2, 3]
param_range_threshold = [0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5]
param_grid_lr = [{'poly__degree': param_range_degree,
'sfm__threshold': param_range_threshold}]
</code></pre>
<p>When I run <code>pipe_lr.get_params().keys()</code> I get the following output, which does in fact include <code>sfm__threshold</code>, which I copy and pasted exactly as is.</p>
<pre><code> ['std__with_mean',
'sfm__estimator__precompute',
'lr__n_jobs',
'sfm__prefit',
'poly',
'sfm__threshold',
'sfm__estimator__cv',
'sfm__estimator__max_iter',
'sfm__estimator__positive',
'sfm__estimator__n_alphas',
'std__with_std',
'sfm__estimator__random_state',
'std__copy',
'lr__normalize',
'sfm__estimator__copy_X',
'lr',
'sfm__estimator__n_jobs',
'poly__interaction_only',
'sfm__estimator__fit_intercept',
'sfm__estimator__tol',
'sfm__estimator',
'sfm__estimator__verbose',
'sfm',
'sfm__estimator__normalize',
'std',
'sfm__estimator__selection',
'poly__degree',
'lr__copy_X',
'sfm__estimator__alphas',
'lr__fit_intercept',
'steps',
'poly__include_bias',
'sfm__estimator__eps']
</code></pre>
 | 0 | 2016-08-13T15:27:32Z | 38,942,257 | <p>This is a simple typographical error: somewhere you pass <code>sfm_threshold</code> when you should pass <code>sfm__threshold</code> (notice the <strong>double</strong> underscore). At least this is what the error at the very beginning shows. </p>
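<p>For reference, <code>Pipeline</code> routes grid-search keys by splitting each name on the first double underscore into a step name and a parameter name; a rough pure-Python sketch of that convention (the real logic lives inside scikit-learn):</p>

```python
good = "sfm__threshold".partition("__")
bad = "sfm_threshold".partition("__")

print(good)  # ('sfm', '__', 'threshold'): step 'sfm', parameter 'threshold'
print(bad)   # ('sfm_threshold', '', ''): no separator found, so no step name
```
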
| 1 | 2016-08-14T12:27:46Z | [
"python",
"machine-learning",
"scikit-learn"
] |
Set default date with Django SelectDateWidget | 38,934,402 | <p>Firstly, this question has already been asked <a href="http://stackoverflow.com/questions/8709116/how-to-set-defult-date-in-selectdatewidget-in-django">here</a>, however the answer does not work, and is also for Django 1.3. Having done extensive research on similar questions on SO, the Django Docs, and general Googling, I still can't find a working answer. Not really a major detail, but it's annoying both when trying to use the form I'm creating, and because I can't solve it.</p>
<p>When using the SelectDateWidget in a ModelForm, I want to be able to set the default on the widget to today's date.</p>
<p><strong>forms.py</strong></p>
<pre><code>from django import forms
from .models import Article
from django.utils import timezone
class ArticleForm(forms.ModelForm):
publish=forms.DateField(widget=forms.SelectDateWidget(), initial = timezone.now)
class Meta:
model = Article
fields = [
'title',
...
'publish',
]
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>from django.conf import settings
from django.core.urlresolvers import reverse
from django.db import models
from django.utils import timezone
from Blog.models import Category
class Article(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL)
title = models.CharField(max_length = 120, unique=True)
...
publish = models.DateField(default=timezone.now)
</code></pre>
<p>I'm assuming I have some syntax error somewhere, but have had no joy in finding it.</p>
| 0 | 2016-08-13T15:48:10Z | 38,935,327 | <p>Looks like there's a subtle difference when setting the the default time for <code>DateTimeField</code> and <code>DateField</code>.</p>
<blockquote>
<p>DateField.auto_now_add Automatically set the field to now when the
object is first created. Useful for creation of timestamps. Note that
the current date is always used; it's not just a default value that
you can override. So even if you set a value for this field when
creating the object, it will be ignored. If you want to be able to
modify this field, set the following instead of auto_now_add=True:</p>
<p><strong>For DateField: default=date.today - from datetime.date.today()</strong>
For DateTimeField: default=timezone.now - from django.utils.timezone.now()</p>
</blockquote>
<p>So, you'll want to use <code>date.today</code> rather than <code>timezone.now</code>.</p>
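<p>One subtlety worth spelling out: you pass the <em>callable</em> as the default, not its result, so the date is evaluated when each object is created rather than once at import time. A hedged plain-Python sketch:</p>

```python
from datetime import date

good_default = date.today    # the callable: re-evaluated each time a row is created
bad_default = date.today()   # a fixed date, frozen at import time

print(callable(good_default))  # True
print(callable(bad_default))   # False
```
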
| 0 | 2016-08-13T17:34:11Z | [
"python",
"django",
"django-models",
"django-forms"
] |
Python code between django "autoescape off" marks can not be executed | 38,934,414 | <p>I am trying to build a blog using Django, and I stored HTML directly in my models. I used the <code>{% autoescape off %}</code> and <code>{% endautoescape %}</code> tags in my HTML templates, which tell Django not to autoescape the HTML into an escaped string. Between the tags, I used code like <code>{{ article.content }}</code> to load the HTML stored in the models.</p>
<p>The problem is that <code><img src="{% static 'img/dot.png' %}"></code> (HTML stored in the models) is displayed as <code>{%%20static%20'img/dot.png'%20%}</code>. The template tags won't be executed.</p>
<p>I feel helpless. Does anyone have a good idea?</p>
<p>Here is a example of one template:</p>
<pre><code>{% extends "base.html" %}
{% block content %}
{% load static %}
<div style='margin:0 auto;width:0px;height:0px;overflow:hidden;'>
<img src="{% static 'img/wechat.png' %}">
</div>
<header>
<div id="logo">
<span class="heading_desktop">You are browsing&nbsp;&nbsp;</span><a href="/index" class="logotype">Paradox</a><span class="heading_tablet">&nbsp;&nbsp;, a personal site</span><span>&nbsp;of&nbsp;</span><a href="/me">Jiawei Lu</a><span class="heading_tablet">&nbsp;since 2015</span><span class="phone_h">.&nbsp;Happy<script>document.write(" " + whatDay());</script>.</span>
</div>
<nav>
<ul>
<li><a href="/index" class="selected">Articles</a></li>
<li><a href="/portfolio">Portfolio</a></li>
<li><a href="/me">Jiawei Lu</a></li>
</ul>
</nav>
<hr class="red">
</header>
<div id="breadcrumb" class="article">
<a href="/index">Articles</a> &rarr; {{ article.title }}
</div>
<div id="wrapper">
<article>
<h1 class="article">{{ article.title }}</h1>
<div id="post_info">
<p>Published on {{ article.timestamp }}<!--<span>&nbsp;|&nbsp;</span>--></p>
<!-- Category inside article <p><a href="#">Category 1</a><a href="#">Category 2</a><span>&nbsp;|&nbsp;</span></p> -->
<!-- Share inside article <p>Share: Email/Linkedin</p> -->
</div>
<div id="post_content" class="article">
{% autoescape off %}
{{ article.content }}
{% endautoescape %}
</div>
</article>
</div>
</div>
{% endblock %}
</code></pre>
<p>article.content: HTML Codes, like </p>
<pre><code><h1>Test</h1>
<img src="{% static 'img/dot.png' %}">
</code></pre>
| 0 | 2016-08-13T15:49:28Z | 38,934,633 | <p>This doesn't really have anything to do with autoescaping. When Django outputs a variable, it doesn't do anything to interpret whatever is in that variable; so putting template tags into a model field won't work.</p>
<p>You would need to do something to render that data yourself manually; perhaps a custom template tag, or a model method, that uses the <a href="https://docs.djangoproject.com/en/1.10/ref/templates/api/#loading-a-template" rel="nofollow">Template API</a> to instantiate a template object and call its <code>render</code> method.</p>
| 0 | 2016-08-13T16:14:43Z | [
"python",
"html",
"django"
] |
How to subtract values in DataFrames omitting some columns | 38,934,415 | <p>I created two DataFrames and I want to subtract their values, omitting the first two columns of the first DataFrame.</p>
<pre><code>df = pd.DataFrame({'sys':[23,24,27,30],'dis': [0.8, 0.8, 1.0,1.0], 'Country':['US', 'England', 'US', 'Germany'], 'Price':[500, 1000, 1500, 2000]})
print df
index = {'sys':[23,24,27,30]}
df2 = pd.DataFrame({ 'sys':[23,24,27,30],'dis': [0.8, 0.8, 1.0,1.0],'Price2':[300, 600, 4000, 1000], 'Price3': [2000, 1000, 600, 2000]})
df = df.set_index(['sys','dis', 'Country']).unstack().fillna(0)
df = df.reset_index()
df.columns.names =[None, None]
df.columns = df.columns.droplevel(0)
infile = pd.read_csv('benchmark_data.csv')
infile_slice = infile[(infile.System_num==26)]['Benchmark']
infile_slice = infile_slice.reset_index()
infile_slice = infile_slice.drop(infile_slice.index[4])
del infile_slice['index']
print infile_slice
dfNew = df.sub(infile_slice['Benchmark'].values, axis=0)
</code></pre>
<p>In this case I can only subtract values from all of the columns at once. How can I skip the first two columns of df?</p>
<p>I've tried: <code>dfNew = df.iloc[3:].sub(infile_slice['Benchmark'].values,axis=0)</code>, but it does not work.</p>
<p>DataFrames look like:</p>
<p>df:</p>
<pre><code> England Germany US
0 23 0.8 0.0 0.0 500.0
1 24 0.8 1000.0 0.0 0.0
2 27 1.0 0.0 0.0 1500.0
3 30 1.0 0.0 2000.0 0.0
</code></pre>
<p>infile_slice:</p>
<pre><code>Benchmark
0 3.3199
1 -4.0135
2 -4.9794
3 -3.1766
</code></pre>
| 0 | 2016-08-13T15:49:41Z | 38,934,525 | <p>Maybe, this is what you are looking for?</p>
<pre><code>>>> df
England Germany US
0 23 0.8 0.0 0.0 500.0
1 24 0.8 1000.0 0.0 0.0
2 27 1.0 0.0 0.0 1500.0
3 30 1.0 0.0 2000.0 0.0
>>> infile_slice
Benchmark
0 3.3199
1 -4.0135
2 -4.9794
3 -3.1766
>>> df.iloc[:, 4:] = df.iloc[:, 4:].sub(infile_slice['Benchmark'].values,axis=0)
>>> df
England Germany US
0 23 0.8 0.0 0.0 496.6801
1 24 0.8 1000.0 0.0 4.0135
2 27 1.0 0.0 0.0 1504.9794
3 30 1.0 0.0 2000.0 3.1766
>>>
</code></pre>
| 1 | 2016-08-13T16:01:08Z | [
"python",
"pandas"
] |
How to subtract values in DataFrames omitting some columns | 38,934,415 | <p>I created two DataFrames and I want to subtract their values, omitting the first two columns of the first DataFrame.</p>
<pre><code>df = pd.DataFrame({'sys':[23,24,27,30],'dis': [0.8, 0.8, 1.0,1.0], 'Country':['US', 'England', 'US', 'Germany'], 'Price':[500, 1000, 1500, 2000]})
print df
index = {'sys':[23,24,27,30]}
df2 = pd.DataFrame({ 'sys':[23,24,27,30],'dis': [0.8, 0.8, 1.0,1.0],'Price2':[300, 600, 4000, 1000], 'Price3': [2000, 1000, 600, 2000]})
df = df.set_index(['sys','dis', 'Country']).unstack().fillna(0)
df = df.reset_index()
df.columns.names =[None, None]
df.columns = df.columns.droplevel(0)
infile = pd.read_csv('benchmark_data.csv')
infile_slice = infile[(infile.System_num==26)]['Benchmark']
infile_slice = infile_slice.reset_index()
infile_slice = infile_slice.drop(infile_slice.index[4])
del infile_slice['index']
print infile_slice
dfNew = df.sub(infile_slice['Benchmark'].values, axis=0)
</code></pre>
<p>In this case I can only subtract values from all of the columns at once. How can I skip the first two columns of df?</p>
<p>I've tried: <code>dfNew = df.iloc[3:].sub(infile_slice['Benchmark'].values,axis=0)</code>, but it does not work.</p>
<p>DataFrames look like:</p>
<p>df:</p>
<pre><code> England Germany US
0 23 0.8 0.0 0.0 500.0
1 24 0.8 1000.0 0.0 0.0
2 27 1.0 0.0 0.0 1500.0
3 30 1.0 0.0 2000.0 0.0
</code></pre>
<p>infile_slice:</p>
<pre><code>Benchmark
0 3.3199
1 -4.0135
2 -4.9794
3 -3.1766
</code></pre>
| 0 | 2016-08-13T15:49:41Z | 38,934,612 | <p>You could use <code>iloc</code> as follows:</p>
<pre><code>df_0_2 = df.iloc[:,0:2] # First 2 columns
df_2_end = df.iloc[:,2:].sub(infile_slice['Benchmark'].values, axis=0)
pd.concat([df_0_2, df_2_end], axis=1)
England Germany US
0 23 0.8 -3.3199 -3.3199 496.6801
1 24 0.8 1004.0135 4.0135 4.0135
2 27 1.0 4.9794 4.9794 1504.9794
3 30 1.0 3.1766 2003.1766 3.1766
</code></pre>
| 0 | 2016-08-13T16:11:22Z | [
"python",
"pandas"
] |
Maximum recursion depth exceeded when using scipy.io.savemat() | 38,934,433 | <p>I'm trying to save a matrix using the scipy.io.savemat() function. However, I'm getting the following error:</p>
<p><code>RuntimeError: maximum recursion depth exceeded while calling a Python object</code></p>
<p>Here's the complete error I'm getting:</p>
<pre><code>scipy.io.savemat('path/to/file.mat',dict__)
File "C:\Python27\lib\site-packages\scipy\io\matlab\mio.py", line 207, in savemat
MW.put_variables(mdict)
File "C:\Python27\lib\site-packages\scipy\io\matlab\mio5.py", line 876, in put_variables
self._matrix_writer.write_top(var, asbytes(name), is_global)
File "C:\Python27\lib\site-packages\scipy\io\matlab\mio5.py", line 626, in write_top
self.write(arr)
File "C:\Python27\lib\site-packages\scipy\io\matlab\mio5.py", line 655, in write
self.write_cells(narr)
File "C:\Python27\lib\site-packages\scipy\io\matlab\mio5.py", line 759, in write_cells
self.write(el)
File "C:\Python27\lib\site-packages\scipy\io\matlab\mio5.py", line 655, in write
self.write_cells(narr)
File "C:\Python27\lib\site-packages\scipy\io\matlab\mio5.py", line 759, in write_cells
self.write(el)
.
.
.
File "C:\Python27\lib\site-packages\scipy\io\matlab\mio5.py", line 759, in write_cells
self.write(el)
File "C:\Python27\lib\site-packages\scipy\io\matlab\mio5.py", line 655, in write
self.write_cells(narr)
File "C:\Python27\lib\site-packages\scipy\io\matlab\mio5.py", line 758, in write_cells
for el in A:
File "C:\Python27\lib\site-packages\numpy\matrixlib\defmatrix.py", line 316, in __getitem__
out = N.ndarray.__getitem__(self, index)
File "C:\Python27\lib\site-packages\numpy\matrixlib\defmatrix.py", line 292, in __array_finalize__
if (isinstance(obj, matrix) and obj._getitem): return
RuntimeError: maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>I tried to increase the recursion limit using <code>sys.setrecursionlimit(10000)</code> but it didn't change anything. I also tried the <code>resource.setrlimit()</code> fix but it didn't help either.</p>
<p>Does anybody have an idea what the problem is here, or why this is happening? Is there any way to fix this error?
Thanks in advance for your help! </p>
<p>P.S: I'm getting this error on Windows and Linux! </p>
<p>Elie</p>
 | 0 | 2016-08-13T15:51:15Z | 38,936,386 | <p>Okay, so just for the record: the cause of the problem was two variables in <code>dict__</code> that were being converted to <code>object</code> (normally they're supposed to be <code>float64</code>). When I tried to save <code>dict__</code> to file.mat using <code>scipy.io.savemat()</code>, the error above occurred whenever a variable's dtype was <code>object</code>. I still don't know though:</p>
<ol>
<li>Why <code>savemat()</code> was raising this specific error when the variable that's being saved is an <code>object</code></li>
<li>Why python is converting, by itself, a <code>float64</code> array to an
<code>object</code></li>
</ol>
<p>Thanks everyone for your help, especially @unutbu for the debugging tip!</p>
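<p>For anyone hitting the same thing, a hedged numpy-only sketch of detecting such an array and coercing it back before saving (this assumes the values really are numeric):</p>

```python
import numpy as np

arr = np.array([1.5, 2.5, 3.5], dtype=object)  # object dtype trips up savemat
if arr.dtype == object:
    arr = arr.astype(np.float64)  # coerce back to a numeric dtype
print(arr.dtype)  # float64
```
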
| 1 | 2016-08-13T19:46:08Z | [
"python",
"python-2.7",
"recursion",
"scipy",
"runtime-error"
] |
Regex with HTML tags and escaped characters | 38,934,445 | <p>I have this text:</p>
<p></p>
<pre><code><h5 class="subblocksubhead subsubsectionhead first"><b>Messaggi inseriti</b></h5>
<dl class="blockrow stats">
<dt><b>Messaggi inseriti</b></dt>
<dd> 81</dd>
</dl>
<dl class="blockrow stats">
<dt>Media dei messaggi giornalieri</dt>
<dd> 0.02</dd>
</dl>
</code></pre>
<p>and I'm trying to extract the <code>" 81"</code> using this code:</p>
<pre><code>regex_message_sent_num=r'Messaggi inseriti<.+>\n\t\t<.+?>(\s.+)<.+?>'
pattern_message_sent_num=re.compile(regex_message_sent_num)
results_message_sent_num=re.findall(pattern_message_sent_num,html_text)
</code></pre>
<p>I always get an empty list as output, whereas when I test the code <a href="https://www.regex101.com/#python" rel="nofollow">here</a> I get the right extraction.</p>
<p>Any idea what I'm doing wrong? The HTML comes from a webpage from which I'm trying to extract some visible data as an exercise. I tested the regex on the HTML text saved from the Chrome browser.</p>
| 0 | 2016-08-13T15:52:36Z | 38,934,515 | <p>Use an HTML Parser instead, like <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow"><code>BeautifulSoup</code></a>.</p>
<p>Adapting the <a class='doc-link' href="http://stackoverflow.com/documentation/beautifulsoup/1940/locating-elements/6339/locate-a-text-after-an-element-in-beautifulsoup#t=201608131558400197181">related example in the documentation</a> and using the text search and the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-next-siblings-and-find-next-sibling" rel="nofollow"><code>find_next_sibling()</code> method</a>:</p>
<pre><code>from bs4 import BeautifulSoup
data = """
<div>
<dl class="blockrow stats">
<dt><b>Messaggi inseriti</b></dt>
<dd> 81</dd>
</dl>
<dl class="blockrow stats">
<dt>Media dei messaggi giornalieri</dt>
<dd> 0.02</dd>
</dl>
</div>"""
soup = BeautifulSoup(data, "html.parser")
label = soup.find("dt", text="Messaggi inseriti")
print(label.find_next_sibling("dd").get_text(strip=True))
</code></pre>
<p>Prints <code>81</code>.</p>
| 0 | 2016-08-13T15:59:59Z | [
"python",
"html",
"regex"
] |
A function that won't stop/pause my script | 38,934,457 | <p>I have a <code>for</code> loop; inside the loop I check something with <code>if</code> and, when the conditions are met, I launch a function. The thing is, this function includes a waiting period (<code>time.sleep</code>) and I noticed the loop is paused until that function is finished.</p>
<p>Is there any way to have my loop keep running while the function is executed?
I'm thinking of maybe launching another script, but I'm not even sure that wouldn't also pause the loop...</p>
<p>The goal is to be able to launch the function again before the first one is finished.</p>
<p>The code kind of looks like this:</p>
<pre><code>def myfunction():
time.sleep(30)
print("waited 30 seconds...")
do something
for something
if condition:
myfunction()
print("the loop is resumed...")
</code></pre>
| -1 | 2016-08-13T15:54:24Z | 38,934,933 | <p>The problem you describe can be fairly trivially solved with simple threading like so:</p>
<pre><code>import time
import threading
def myfunction():
time.sleep(30)
print("waited 30 seconds.")
# do whatever else
num = 0
mythreads = []
while True: # Just loop forever for demo purposes
num += 1
if num % 5 == 0: # Execute the function once every 5 times
newthread = threading.Thread(target=myfunction)
mythreads.append(newthread)
newthread.start()
print("The loop is resumed.")
</code></pre>
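<p>If the script ever needs to exit cleanly rather than loop forever, a hedged addition is to <code>join</code> the collected threads once you leave the loop (the sleep is shortened here just to keep the sketch fast):</p>

```python
import threading
import time

def myfunction():
    time.sleep(0.01)  # shortened from 30 seconds for the sketch

threads = [threading.Thread(target=myfunction) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # block until every worker has finished

print("all workers done")
```
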
| 0 | 2016-08-13T16:49:46Z | [
"python",
"subprocess"
] |
Tkinter labels change window size even after resizable | 38,934,494 | <p>I want to know if there is a way to make labels NOT change the window size. I've tried master.resizable(width=False, height=False) but it doesn't work. Same with adding buttons, the window size still changes. Also, I want it to work with <code>.grid</code> and not only <code>.pack</code>. Here's an example:</p>
<pre><code>import tkinter as tk
from tkinter import *
master = tk.Tk()
lel = tk.Button()
lel["text"] = "ooo"
lel.pack(side = "top")
master.configure(width=500, height=500)
master.resizable(width=False, height=False)
mainloop()
</code></pre>
| 0 | 2016-08-13T15:58:23Z | 38,934,582 | <p>Maybe you want this?</p>
<pre><code>master.geometry('{}x{}'.format(500, 500))
</code></pre>
<p><a href="http://i.stack.imgur.com/TCJz0.png" rel="nofollow"><img src="http://i.stack.imgur.com/TCJz0.png" alt="enter image description here"></a></p>
| 3 | 2016-08-13T16:08:09Z | [
"python",
"tkinter"
] |
Passing JSON while running a Python script from the terminal | 38,934,578 | <p>I want to pass JSON as a parameter while running a Python script from the terminal. Can I pass a JSON object from the terminal? At the moment I can only pass strings, but I need JSON instead.</p>
<p>I tried passing a string as follows, which gave the expected result.</p>
<pre><code> $ python test.py 'Param1'
</code></pre>
<p>But if I want JSON, it gives an error. I tried the following to pass JSON:</p>
<pre><code> $ python test.py { 'a':1, 'b':2 }
</code></pre>
| 1 | 2016-08-13T16:07:40Z | 38,934,647 | <p>Two ways of doing this:</p>
<pre><code>$ cat a.py
import json
import sys
print json.loads(sys.stdin.read().strip())
$ python a.py <<< '{ "a":1, "b":2 }'
{u'a': 1, u'b': 2}
$ echo '{ "a":1, "b":2 }' | python a.py
{u'a': 1, u'b': 2}
$ cat c.py
import json
import sys
print json.loads(sys.argv[1])
$ python c.py '{ "a":1, "b":2 }'
{u'a': 1, u'b': 2}
</code></pre>
<p>Follow up (maintaining order):</p>
<pre><code>$ cat d.py
import json
import sys
from collections import OrderedDict
print json.loads(sys.argv[1], object_pairs_hook=OrderedDict)
$ python d.py '{ "a":1, "b":2, "c":3, "d":4 }'
OrderedDict([(u'a', 1), (u'b', 2), (u'c', 3), (u'd', 4)])
</code></pre>
| 1 | 2016-08-13T16:16:23Z | [
"python",
"json",
"shell"
] |
How to add a column of random numbers based on two conditions? | 38,934,684 | <p>I have a data frame in python containing the following information:</p>
<pre><code>Day Type
Weekday 1
Weekday 2
Weekday 3
Weekday 1
Weekend 2
Weekend 1
</code></pre>
<p>I want to add a new column by generating a Weibull random number, but each pair of "Day" and "Type" has a unique Weibull distribution.</p>
<p>For example, I have tried the following code but it did not work:</p>
<pre><code>df['Duration'][ (df['Day'] == "Weekend") & (df['Type'] == 1) ] = int(random.weibullvariate(5.6/math.gamma(1+1/6),6))
df['Duration'] = df['Day','Type'].map(lambda x,y: int(random.weibullvariate(5.6/math.gamma(1+1/10),10)) if x == "Weekday" and y == 1 if x == "Weekend" and y == 1 int(random.weibullvariate(5.6/math.gamma(1+1/6),6)))
</code></pre>
| 0 | 2016-08-13T16:20:32Z | 38,934,803 | <p>Define a function that generates the random number you want and apply it to the rows.</p>
<pre><code>import io
import random
import math
import pandas as pd
data = io.StringIO('''\
Day Type
Weekday 1
Weekday 2
Weekday 3
Weekday 1
Weekend 2
Weekend 1
''')
df = pd.read_csv(data, delim_whitespace=True)
def duration(row):
if row['Day'] == 'Weekend' and row['Type'] == 1:
return int(random.weibullvariate(5.6/math.gamma(1+1/6),6))
if row['Day'] == 'Weekday' and row['Type'] == 1:
return int(random.weibullvariate(5.6/math.gamma(1+1/10),10))
df['Duration'] = df.apply(duration, axis=1)
</code></pre>
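<p>For a large frame, the per-row <code>apply</code> can be replaced by filling each group with numpy's vectorized generator. This is only a sketch: the <code>(shape, scale)</code> pairs in the dict below come from the question for two combinations and are otherwise illustrative assumptions.</p>

```python
import math
import numpy as np
import pandas as pd

df = pd.DataFrame({'Day': ['Weekday', 'Weekend', 'Weekday', 'Weekend', 'Weekday', 'Weekend'],
                   'Type': [1, 1, 2, 2, 3, 1]})

# one (shape, scale) pair per (Day, Type) combination; extend as needed
params = {('Weekday', 1): (10, 5.6 / math.gamma(1 + 1 / 10)),
          ('Weekend', 1): (6, 5.6 / math.gamma(1 + 1 / 6))}

df['Duration'] = np.nan
for (day, typ), (shape, scale) in params.items():
    mask = (df['Day'] == day) & (df['Type'] == typ)
    # np.random.weibull draws unit-scale samples; multiply by the scale factor
    df.loc[mask, 'Duration'] = (scale * np.random.weibull(shape, mask.sum())).astype(int)
```

<p>Rows whose (Day, Type) pair has no entry in <code>params</code> stay NaN, which makes missing combinations easy to spot.</p>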
| 1 | 2016-08-13T16:36:38Z | [
"python",
"pandas"
] |
How to compare and then concatenate information from two different rows using python pandas data frames | 38,934,732 | <p>I have written this code:</p>
<pre><code>import pandas as pd
import numpy as np
input_table = {'W' : pd.Series([1.1,2.1,3.1,4.1,5.1,6.1], index = ['1','2','3','4','5','6']),
'X' : pd.Series([7.,8.,9.,10.,11.,12.], index = ['1','2','3','4','5','6']),
'Y' : pd.Series(['A','B','C','D','E','E'], index = ['1','2','3','4','5','6']),
'Z' : pd.Series(['First',' ','Last','First',' ','Last'], ['1','2','3','4','5','6'])}
output_table = pd.DataFrame(input_table)
output_table['Previous_Y'] = output_table['Y']
output_table.Previous_Y = output_table.Previous_Y.shift(1)
def Calc_flowpath(x):
if x['Z'] == 'First':
return x['Y']
else:
return x['Previous_Y'] + x['Y']
output_table['Flowpath'] = output_table.apply(Calc_flowpath, axis=1)
print output_table
</code></pre>
<p>And my output is (as expected):</p>
<pre><code> W X Y Z Previous_Y Flowpath
1 1.1 7.0 A First NaN A
2 2.1 8.0 B A AB
3 3.1 9.0 C Last B BC
4 4.1 10.0 D First C D
5 5.1 11.0 E D DE
6 6.1 12.0 E Last E EE
</code></pre>
<p>However, what I'm trying to do with the Flowpath function is:</p>
<blockquote>
<p>If Column Z is "First", Flowpath = Column Y</p>
<p>If Column Z is anything else, Flowpath = Previous Flowpath value + Column Y</p>
<p>Unless Column Y repeats the same value, in which case skip that row.</p>
</blockquote>
<p>The output I am targeting is:</p>
<pre><code> W X Y Z Previous_Y Flowpath
1 1.1 7.0 A First NaN A
2 2.1 8.0 B A AB
3 3.1 9.0 C Last B ABC
4 4.1 10.0 D First C D
5 5.1 11.0 E D DE
6 6.1 12.0 E Last E DE
</code></pre>
<p>To give context, these lines are steps in a manufacturing process, and I'm trying to describe the path materials take through a job shop. My data is a large number of customer orders and every step they took in the manufacturing process. Y is the manufacturing step, and column Z indicates the first and last step for each order. I'm using Knime to do the analysis but I can't find a node that will do this, so I'm trying to write a python script myself even though I'm quite the programming novice (as you can probably see). In my previous job, I would have done this in Alteryx using the Multi-Row node but I no longer have access to that software. I've spent a lot of time reading the Pandas documentation and I feel the solution is some combination of DataFrame.loc, DataFrame.shift, or DataFrame.cumsum, but I can't figure it out.</p>
<p>Any help would be greatly appreciated.</p>
| 2 | 2016-08-13T16:26:52Z | 38,935,015 | <p>Iterate over the rows of your DataFrame and set the value of the <code>Flowpath</code> column following the logic you outline in the OP.</p>
<pre><code>import pandas as pd
output_table = pd.DataFrame({'W' :[1.1, 2.1, 3.1, 4.1, 5.1, 6.1],
'X': [7., 8., 9., 10., 11., 12.],
'Y': ['A', 'B', 'C', 'D', 'E', 'E'],
'Z': ['First', ' ', 'Last', 'First', ' ', 'Last']},
index=range(1, 7))
output_table['Flowpath'] = ''
for idx in output_table.index:
this_Z = output_table.loc[idx, 'Z']
this_Y = output_table.loc[idx, 'Y']
last_Y = output_table.loc[idx-1, 'Y'] if idx > 1 else ''
last_Flowpath = output_table.loc[idx-1, 'Flowpath'] if idx > 1 else ''
if this_Z == 'First':
output_table.loc[idx, 'Flowpath'] = this_Y
elif this_Y != last_Y:
output_table.loc[idx, 'Flowpath'] = last_Flowpath + this_Y
else:
output_table.loc[idx, 'Flowpath'] = last_Flowpath
</code></pre>
| 1 | 2016-08-13T16:56:53Z | [
"python",
"pandas"
] |
How to compare and then concatenate information from two different rows using python pandas data frames | 38,934,732 | <p>I have written this code:</p>
<pre><code>import pandas as pd
import numpy as np
input_table = {'W' : pd.Series([1.1,2.1,3.1,4.1,5.1,6.1], index = ['1','2','3','4','5','6']),
'X' : pd.Series([7.,8.,9.,10.,11.,12.], index = ['1','2','3','4','5','6']),
'Y' : pd.Series(['A','B','C','D','E','E'], index = ['1','2','3','4','5','6']),
'Z' : pd.Series(['First',' ','Last','First',' ','Last'], ['1','2','3','4','5','6'])}
output_table = pd.DataFrame(input_table)
output_table['Previous_Y'] = output_table['Y']
output_table.Previous_Y = output_table.Previous_Y.shift(1)
def Calc_flowpath(x):
if x['Z'] == 'First':
return x['Y']
else:
return x['Previous_Y'] + x['Y']
output_table['Flowpath'] = output_table.apply(Calc_flowpath, axis=1)
print output_table
</code></pre>
<p>And my output is (as expected):</p>
<pre><code> W X Y Z Previous_Y Flowpath
1 1.1 7.0 A First NaN A
2 2.1 8.0 B A AB
3 3.1 9.0 C Last B BC
4 4.1 10.0 D First C D
5 5.1 11.0 E D DE
6 6.1 12.0 E Last E EE
</code></pre>
<p>However, what I'm trying to do with the Flowpath function is:</p>
<blockquote>
<p>If Column Z is "First", Flowpath = Column Y</p>
<p>If Column Z is anything else, Flowpath = Previous Flowpath value + Column Y</p>
<p>Unless Column Y repeats the same value, in which case skip that row.</p>
</blockquote>
<p>The output I am targeting is:</p>
<pre><code> W X Y Z Previous_Y Flowpath
1 1.1 7.0 A First NaN A
2 2.1 8.0 B A AB
3 3.1 9.0 C Last B ABC
4 4.1 10.0 D First C D
5 5.1 11.0 E D DE
6 6.1 12.0 E Last E DE
</code></pre>
<p>To give context, these lines are steps in a manufacturing process, and I'm trying to describe the path materials take through a job shop. My data is a large number of customer orders and every step they took in the manufacturing process. Y is the manufacturing step, and column Z indicates the first and last step for each order. I'm using Knime to do the analysis but I can't find a node that will do this, so I'm trying to write a python script myself even though I'm quite the programming novice (as you can probably see). In my previous job, I would have done this in Alteryx using the Multi-Row node but I no longer have access to that software. I've spent a lot of time reading the Pandas documentation and I feel the solution is some combination of DataFrame.loc, DataFrame.shift, or DataFrame.cumsum, but I can't figure it out.</p>
<p>Any help would be greatly appreciated.</p>
| 2 | 2016-08-13T16:26:52Z | 38,935,088 | <p>So bad things will happen if <code>Z['1']!='First'</code> but for your case this works. I understand you want something more Pandas-ish so I'm sorry that this answer is pretty plain python...</p>
<pre><code>import pandas as pd
import numpy as np
input_table = {'W' : pd.Series([1.1,2.1,3.1,4.1,5.1,6.1], index = ['1','2','3','4','5','6']),
'X' : pd.Series([7.,8.,9.,10.,11.,12.], index = ['1','2','3','4','5','6']),
'Y' : pd.Series(['A','B','C','D','E','E'], index = ['1','2','3','4','5','6']),
'Z' : pd.Series(['First',' ','Last','First',' ','Last'], index =['1','2','3','4','5','6'])}
ret = pd.Series([None,None,None,None,None,None], index = ['1','2','3','4','5','6'])
for k in [str(n) for n in range(1,7)]:
if(input_table['Z'][k]=='First'):
op = input_table['Y'][k]
else:
if(input_table['Y'][k]==input_table['Y'][str(int(k)-1)]):
op = ret[str(int(k)-1)]
else:
op = ret[str(int(k)-1)]+input_table['Y'][k]
ret[k]=op
input_table['Flowpath'] = ret
output_table = pd.DataFrame(input_table)
print output_table
</code></pre>
<p>Prints:: </p>
<pre><code> Flowpath W X Y Z
1 A 1.1 7 A First
2 AB 2.1 8 B
3 ABC 3.1 9 C Last
4 D 4.1 10 D First
5 DE 5.1 11 E
6 DE 6.1 12 E Last
</code></pre>
| 0 | 2016-08-13T17:04:54Z | [
"python",
"pandas"
] |
How to compare and then concatenate information from two different rows using python pandas data frames | 38,934,732 | <p>I have written this code:</p>
<pre><code>import pandas as pd
import numpy as np
input_table = {'W' : pd.Series([1.1,2.1,3.1,4.1,5.1,6.1], index = ['1','2','3','4','5','6']),
'X' : pd.Series([7.,8.,9.,10.,11.,12.], index = ['1','2','3','4','5','6']),
'Y' : pd.Series(['A','B','C','D','E','E'], index = ['1','2','3','4','5','6']),
'Z' : pd.Series(['First',' ','Last','First',' ','Last'], ['1','2','3','4','5','6'])}
output_table = pd.DataFrame(input_table)
output_table['Previous_Y'] = output_table['Y']
output_table.Previous_Y = output_table.Previous_Y.shift(1)
def Calc_flowpath(x):
if x['Z'] == 'First':
return x['Y']
else:
return x['Previous_Y'] + x['Y']
output_table['Flowpath'] = output_table.apply(Calc_flowpath, axis=1)
print output_table
</code></pre>
<p>And my output is (as expected):</p>
<pre><code> W X Y Z Previous_Y Flowpath
1 1.1 7.0 A First NaN A
2 2.1 8.0 B A AB
3 3.1 9.0 C Last B BC
4 4.1 10.0 D First C D
5 5.1 11.0 E D DE
6 6.1 12.0 E Last E EE
</code></pre>
<p>However, what I'm trying to do with the Flowpath function is:</p>
<blockquote>
<p>If Column Z is "First", Flowpath = Column Y</p>
<p>If Column Z is anything else, Flowpath = Previous Flowpath value + Column Y</p>
<p>Unless Column Y repeats the same value, in which case skip that row.</p>
</blockquote>
<p>The output I am targeting is:</p>
<pre><code> W X Y Z Previous_Y Flowpath
1 1.1 7.0 A First NaN A
2 2.1 8.0 B A AB
3 3.1 9.0 C Last B ABC
4 4.1 10.0 D First C D
5 5.1 11.0 E D DE
6 6.1 12.0 E Last E DE
</code></pre>
<p>To give context, these lines are steps in a manufacturing process, and I'm trying to describe the path materials take through a job shop. My data is a large number of customer orders and every step they took in the manufacturing process. Y is the manufacturing step, and column Z indicates the first and last step for each order. I'm using Knime to do the analysis but I can't find a node that will do this, so I'm trying to write a python script myself even though I'm quite the programming novice (as you can probably see). In my previous job, I would have done this in Alteryx using the Multi-Row node but I no longer have access to that software. I've spent a lot of time reading the Pandas documentation and I feel the solution is some combination of DataFrame.loc, DataFrame.shift, or DataFrame.cumsum, but I can't figure it out.</p>
<p>Any help would be greatly appreciated.</p>
 | 2 | 2016-08-13T16:26:52Z | 38,935,108 | <p>You can calculate a group variable with <code>cumsum</code> on the condition vector where <code>Z</code> is <code>First</code> (this handles the first and second conditions), blank out any value that repeats the previous one, and then take a cumulative sum over the Y column, which gives the expected output:</p>
<pre><code>import pandas as pd
# calculate the group variable
grp = (output_table.Z == "First").cumsum()
# condition vector: True where the current Y differs from the previous row in its group
dup = output_table.Y.groupby(grp).apply(lambda g: g.shift() != g)
# blank out the repeated values in Y, group the column by the group variable
# calculated above and then do a cumulative sum
output_table['flowPath'] = output_table.Y.where(dup, "").groupby(grp).cumsum()
output_table
# W X Y Z flowPath
# 1 1.1 7 A First A
# 2 2.1 8 B AB
# 3 3.1 9 C Last ABC
# 4 4.1 10 D First D
# 5 5.1 11 E DE
# 6 6.1 12 E Last DE
</code></pre>
<p><em>Update</em>: The above code works under <code>0.15.2</code> but not <code>0.18.1</code>; a little tweaking of the last line as follows can save it:</p>
<pre><code>output_table['flowPath'] = output_table.Y.where(dup, "").groupby(grp).apply(pd.Series.cumsum)
</code></pre>
| 1 | 2016-08-13T17:07:33Z | [
"python",
"pandas"
] |
How to compare and then concatenate information from two different rows using python pandas data frames | 38,934,732 | <p>I have written this code:</p>
<pre><code>import pandas as pd
import numpy as np
input_table = {'W' : pd.Series([1.1,2.1,3.1,4.1,5.1,6.1], index = ['1','2','3','4','5','6']),
'X' : pd.Series([7.,8.,9.,10.,11.,12.], index = ['1','2','3','4','5','6']),
'Y' : pd.Series(['A','B','C','D','E','E'], index = ['1','2','3','4','5','6']),
'Z' : pd.Series(['First',' ','Last','First',' ','Last'], ['1','2','3','4','5','6'])}
output_table = pd.DataFrame(input_table)
output_table['Previous_Y'] = output_table['Y']
output_table.Previous_Y = output_table.Previous_Y.shift(1)
def Calc_flowpath(x):
if x['Z'] == 'First':
return x['Y']
else:
return x['Previous_Y'] + x['Y']
output_table['Flowpath'] = output_table.apply(Calc_flowpath, axis=1)
print output_table
</code></pre>
<p>And my output is (as expected):</p>
<pre><code> W X Y Z Previous_Y Flowpath
1 1.1 7.0 A First NaN A
2 2.1 8.0 B A AB
3 3.1 9.0 C Last B BC
4 4.1 10.0 D First C D
5 5.1 11.0 E D DE
6 6.1 12.0 E Last E EE
</code></pre>
<p>However, what I'm trying to do with the Flowpath function is:</p>
<blockquote>
<p>If Column Z is "First", Flowpath = Column Y</p>
<p>If Column Z is anything else, Flowpath = Previous Flowpath value + Column Y</p>
<p>Unless Column Y repeats the same value, in which case skip that row.</p>
</blockquote>
<p>The output I am targeting is:</p>
<pre><code> W X Y Z Previous_Y Flowpath
1 1.1 7.0 A First NaN A
2 2.1 8.0 B A AB
3 3.1 9.0 C Last B ABC
4 4.1 10.0 D First C D
5 5.1 11.0 E D DE
6 6.1 12.0 E Last E DE
</code></pre>
<p>To give context, these lines are steps in a manufacturing process, and I'm trying to describe the path materials take through a job shop. My data is a large number of customer orders and every step they took in the manufacturing process. Y is the manufacturing step, and column Z indicates the first and last step for each order. I'm using Knime to do the analysis but I can't find a node that will do this, so I'm trying to write a python script myself even though I'm quite the programming novice (as you can probably see). In my previous job, I would have done this in Alteryx using the Multi-Row node but I no longer have access to that software. I've spent a lot of time reading the Pandas documentation and I feel the solution is some combination of DataFrame.loc, DataFrame.shift, or DataFrame.cumsum, but I can't figure it out.</p>
<p>Any help would be greatly appreciated.</p>
| 2 | 2016-08-13T16:26:52Z | 38,935,720 | <pre><code>for index, row in output_table.iterrows():
prev_index = str(int(index) - 1)
if row['Z'] == 'First':
output_table.set_value(index, 'Flowpath', row['Y'])
elif output_table['Y'][prev_index] == row['Y']:
output_table.set_value(index, 'Flowpath', output_table['Flowpath'][prev_index])
else:
output_table.set_value(index, 'Flowpath', output_table['Flowpath'][prev_index] + row['Y'])
print output_table
W X Y Z Previous_Y Flowpath
1 1.1 7.0 A First NaN A
2 2.1 8.0 B A AB
3 3.1 9.0 C Last B ABC
4 4.1 10.0 D First C D
5 5.1 11.0 E D DE
6 6.1 12.0 E Last E DE
</code></pre>
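<p>Note that <code>set_value</code> was deprecated in later pandas releases; the same loop can be sketched with the <code>.at</code> scalar accessor instead (small illustrative frame below, not your real data):</p>

```python
import pandas as pd

df = pd.DataFrame({'Y': ['A', 'B', 'C', 'D', 'E', 'E'],
                   'Z': ['First', ' ', 'Last', 'First', ' ', 'Last']},
                  index=range(1, 7))
df['Flowpath'] = ''

for idx in df.index:
    if df.at[idx, 'Z'] == 'First':
        df.at[idx, 'Flowpath'] = df.at[idx, 'Y']
    elif df.at[idx, 'Y'] == df.at[idx - 1, 'Y']:
        # same step as the previous row: carry the path forward unchanged
        df.at[idx, 'Flowpath'] = df.at[idx - 1, 'Flowpath']
    else:
        df.at[idx, 'Flowpath'] = df.at[idx - 1, 'Flowpath'] + df.at[idx, 'Y']
```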
| 1 | 2016-08-13T18:25:47Z | [
"python",
"pandas"
] |
Need warning message if count of each country code is less than 5 | 38,935,005 | <p>I am trying to get a warning or print message if count or frequency of a particular country code is less than 5.</p>
<pre><code>QuoteID
1500759-BE
1500759-BE
1500759-BE
1500759-BE
1605101-FR
1605101-FR
1605101-FR
1605119-FR
1605119-FR
1605119-FR
1605119-FR
1605119-FR
1600896-NL
1600896-NL
1600896-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
</code></pre>
<p>Tried the below code </p>
<pre><code>chars=('BE','FR','NL')
check_string=OutputData['QuoteID']
for char in chars:
count = check_string.count(char)
if count < 5:
        print ('count is less than 5')
</code></pre>
<p>expected result is - <em>"warning 'category BE' has less than 5 records"</em></p>
<p><code>OutputData</code> - Data set name <br>
<code>QuoteID</code> - variable name</p>
<p>values like <code>1500759-BE</code> is value in variable and frequency or count of 'BE', 'FR' and 'NL' has to be counted and warning message required if count is less than 5.</p>
<p>Many thanks in advance</p>
 | 0 | 2016-08-13T16:56:13Z | 38,935,144 | <p>You can use a <code>Counter</code> from Python's <code>collections</code> module to count the occurrences of the elements in a list. In addition, you can extract the country codes from your sample data by splitting it into lines and stripping off the last two characters of each line (which are the country code).</p>
<p>All in all I would suggest something like this:</p>
<pre><code>from collections import Counter
data = """1500759-BE
1500759-BE
1500759-BE
1500759-BE
1605101-FR
1605101-FR
1605101-FR
1605119-FR
1605119-FR
1605119-FR
1605119-FR
1605119-FR
1600896-NL
1600896-NL
1600896-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
"""
codes = [l[-2:] for l in data.splitlines()]
c = Counter(codes)
for k,v in c.items():
if v < 5:
print('less then 5 items for {}'.format(k))
</code></pre>
<p>As you tagged your question with <code>python-2.7</code> you need to keep in mind to convert the Python3 code I provided to the Python2-equivalent. That said, you need to use <code>print output</code> instead of <code>print(output)</code> and <code>.items()</code> would become <code>.iteritems()</code>.</p>
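<p>Since the question is also tagged pandas, the same tally can be sketched directly on a Series; the variable and column names here are assumptions based on the question:</p>

```python
import pandas as pd

quotes = pd.Series(['1500759-BE', '1605101-FR', '1605101-FR', '1600896-NL'],
                   name='QuoteID')
# take the last two characters of every id and count each code
counts = quotes.str[-2:].value_counts()
for code, n in counts.items():
    if n < 5:
        print("warning: category %s has less than 5 records" % code)
```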
| 0 | 2016-08-13T17:11:09Z | [
"python",
"loops",
"pandas",
"dataframe"
] |
Need warning message if count of each country code is less than 5 | 38,935,005 | <p>I am trying to get a warning or print message if count or frequency of a particular country code is less than 5.</p>
<pre><code>QuoteID
1500759-BE
1500759-BE
1500759-BE
1500759-BE
1605101-FR
1605101-FR
1605101-FR
1605119-FR
1605119-FR
1605119-FR
1605119-FR
1605119-FR
1600896-NL
1600896-NL
1600896-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
</code></pre>
<p>Tried the below code </p>
<pre><code>chars=('BE','FR','NL')
check_string=OutputData['QuoteID']
for char in chars:
count = check_string.count(char)
if count < 5:
        print ('count is less than 5')
</code></pre>
<p>expected result is - <em>"warning 'category BE' has less than 5 records"</em></p>
<p><code>OutputData</code> - Data set name <br>
<code>QuoteID</code> - variable name</p>
<p>values like <code>1500759-BE</code> is value in variable and frequency or count of 'BE', 'FR' and 'NL' has to be counted and warning message required if count is less than 5.</p>
<p>Many thanks in advance</p>
 | 0 | 2016-08-13T16:56:13Z | 38,935,254 | <p>What is the type of QuoteID? If it is a string, then the following works fine:</p>
<pre><code>alist = "1500759-BE1500759-BE1500759-BE1500759-BE1605101-FR1605101-FR1605101-FR1605119-FR1605119-FR1605119-FR1605119-FR1605119-FR1600896-NL1600896-NL1600896-NL1600898-NL1600898-NL1600898-NL1600898-NL1600898-NL1600898-NL"
chars=('BE','FR','NL')
for char in chars:
count = alist.count(char)
if count < 5:
print ('count is less than 5' )
print char
print "\n"
</code></pre>
<p>It works fine for me.</p>
| 0 | 2016-08-13T17:25:32Z | [
"python",
"loops",
"pandas",
"dataframe"
] |
Need warning message if count of each country code is less than 5 | 38,935,005 | <p>I am trying to get a warning or print message if count or frequency of a particular country code is less than 5.</p>
<pre><code>QuoteID
1500759-BE
1500759-BE
1500759-BE
1500759-BE
1605101-FR
1605101-FR
1605101-FR
1605119-FR
1605119-FR
1605119-FR
1605119-FR
1605119-FR
1600896-NL
1600896-NL
1600896-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
1600898-NL
</code></pre>
<p>Tried the below code </p>
<pre><code>chars=('BE','FR','NL')
check_string=OutputData['QuoteID']
for char in chars:
count = check_string.count(char)
if count < 5:
        print ('count is less than 5')
</code></pre>
<p>expected result is - <em>"warning 'category BE' has less than 5 records"</em></p>
<p><code>OutputData</code> - Data set name <br>
<code>QuoteID</code> - variable name</p>
<p>values like <code>1500759-BE</code> is value in variable and frequency or count of 'BE', 'FR' and 'NL' has to be counted and warning message required if count is less than 5.</p>
<p>Many thanks in advance</p>
| 0 | 2016-08-13T16:56:13Z | 38,935,341 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow"><code>str.extract</code></a> to extract the country codes from each <code>QuoteID</code> string as follows:</p>
<pre><code>In [16]: df['CountryCode'] = df['QuoteID'].str.extract('(?P<letter>BE|FR|NL)', expand=True)
In [17]: df
Out[17]:
QuoteID CountryCode
0 1500759-BE BE
1 1500759-BE BE
2 1500759-BE BE
3 1500759-BE BE
4 1605101-FR FR
5 1605101-FR FR
6 1605101-FR FR
7 1605119-FR FR
8 1605119-FR FR
9 1605119-FR FR
10 1605119-FR FR
11 1605119-FR FR
12 1600896-NL NL
13 1600896-NL NL
14 1600896-NL NL
15 1600898-NL NL
16 1600898-NL NL
17 1600898-NL NL
18 1600898-NL NL
19 1600898-NL NL
20 1600898-NL NL
</code></pre>
<p>By using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow"><code>value_counts</code></a> to compute the counts of unique values, you could then convert the <code>series</code> object to a dictionary by calling <code>to_dict()</code> followed by a <code>list-comprehension</code> to get your desired result.</p>
<pre><code>In [18]: ["count of %s is %d" % (key, value) if value > 5 else \
"WARN!: count of category %s is less than 5" % (key) \
for key, value in df['CountryCode'].value_counts().to_dict().items()]
Out[18]:
['WARN!: count of category BE is less than 5',
'count of NL is 9',
'count of FR is 8']
</code></pre>
| 0 | 2016-08-13T17:36:00Z | [
"python",
"loops",
"pandas",
"dataframe"
] |
Mocking ReviewBoard third party library using python and mock | 38,935,061 | <p>I use ReviewBoard API library and today I moved the code to separate class and wanted to cover the logic with some tests. I understand mocks and testing but I am clearly not much experienced with the python and it's libraries. Here's the chunk of the real code:</p>
<pre><code><!-- language: python -->
from rbtools.api.client import RBClient
class ReviewBoardWrapper():
def __init__(self, url, username, password):
self.url = url
self.username = username
self.password = password
pass
def Connect(self):
self.client = RBClient(self.url, username=self.username, password=self.password)
self.root = self.client.get_root()
pass
</code></pre>
<p>And I want to assert the initialization as well as the get_root() methods are called. Here's how I try to accomplish that:</p>
<pre><code>import unittest
import mock
from module_base import ReviewBoardWrapper as rb
class RbTestCase(unittest.TestCase):
@mock.patch('module_base.RBClient')
@mock.patch('module_base.RBClient.get_root')
def test_client_connect(self, mock_client, mock_method):
rb_client = rb('', '', '')
rb_client.Connect()
self.assertTrue(mock_method.called)
self.assertTrue(mock_client.called)
</code></pre>
<p>And here's the error I stuck on:</p>
<pre><code>$ python -m unittest module_base_tests
F.
======================================================================
FAIL: test_client_connect (module_base_tests.RbTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/mock/mock.py", line 1305, in patched
return func(*args, **keywargs)
File "module_base_tests.py", line 21, in test_client_connect
self.assertTrue(mock_client.called)
AssertionError: False is not true
----------------------------------------------------------------------
Ran 2 tests in 0.002s
FAILED (failures=1)
</code></pre>
<p>What do I do wrong? Do I correctly mock the "local copy" of imported libraries? Does the issue lie completely in a different area?</p>
<p>I have also tried to do this:</p>
<pre><code>@mock.patch('module_base.RBClient.__init__')
</code></pre>
<p>And / or this:</p>
<pre><code>self.assertTrue(mock_client.__init__.called)
</code></pre>
| 1 | 2016-08-13T17:02:12Z | 39,061,758 | <p>In the example from your post, the order of the mocking is reversed:</p>
<pre><code>test_client_connect(self, mock_client, mock_method)
</code></pre>
<p>The client is actually being mocked as the second argument and the method call is being mocked as the first argument. </p>
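<p>The rule is that the decorator closest to the function supplies the first mock argument. A self-contained sketch of just that ordering, with patch targets chosen purely for illustration:</p>

```python
import os
from unittest import mock

@mock.patch('os.path.exists')
@mock.patch('os.getcwd')
def demo(mock_getcwd, mock_exists):
    # the bottom-most @patch maps to the first argument
    assert os.getcwd is mock_getcwd
    assert os.path.exists is mock_exists
    return mock_getcwd, mock_exists

first, second = demo()
```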
<p>However, to properly mock the client, you want to mock the return value of the client call. An example of mocking the return value and making an assertion on the return value would like the following:</p>
<pre><code>class RbTestCase(unittest.TestCase):
@mock.patch('module_base.RBClient')
def test_client_connect(self, mock_client):
client = mock.MagicMock()
mock_client.return_value = client
rb_client = rb('', '', '')
rb_client.Connect()
self.assertTrue(client.get_root.called)
self.assertTrue(mock_client.called)
</code></pre>
| 1 | 2016-08-21T07:13:04Z | [
"python",
"mocking",
"python-unittest",
"review-board"
] |
Import at the top of the file with gi | 38,935,095 | <p>For libnotify I use the following code</p>
<pre><code>import gi
gi.require_version('Notify', '0.7')
from gi.repository import Notify
</code></pre>
<p>Then <code>flake8</code> complains that: </p>
<pre><code>E402 module level import not at top of file
</code></pre>
<p>On the other hand, one has to specify the version when using gi: <a href="http://mednis.info/use-girequire_versiongtk-30-before-import.html" rel="nofollow">http://mednis.info/use-girequire_versiongtk-30-before-import.html</a></p>
<p>What should I do? Ignore the linter's messages or remove the <code>gi.require_version</code> line?</p>
 | 0 | 2016-08-13T17:06:08Z | 38,935,141 | <p>Put <code># noqa: E402</code> at the end of each offending import line to ignore just this error on those lines.</p>
<p>Sometimes, imports that aren't at the top of the file are necessary. For example, to avoid a circular import, to defer the overhead of initializing a module until a certain function is called, or for configuration as is the case here. Imports can have significant side-effects, so in edge cases like this it is perfectly acceptable to have an import further down from the top.</p>
| 2 | 2016-08-13T17:11:02Z | [
"python",
"python-import",
"pep8"
] |
How do I stack rows in a Pandas data frame to get one âlong rowâ, preserving column types? | 38,935,112 | <p>This is a follow up to <a href="http://stackoverflow.com/questions/38900981/how-do-i-stack-rows-in-a-pandas-data-frame-to-get-one-long-row">How do I stack rows in a Pandas data frame to get one "long row"?</a></p>
<p>The answers there work but dropping the index here loses column types (they all become <em>object</em>):</p>
<pre><code>df.stack().reset_index(drop=True).T
</code></pre>
<p>I need to preserve column types, and preferably rename columns with prefixes indicating the original row, for example: </p>
<p><code>row_0_column_A, row_0_column_B, ... , row_5_column_A, row_5_column_B ...</code></p>
<p>Example:</p>
<pre><code>df = pd.DataFrame( [ {'stringy': 'A', 'numerical': 2 }, { 'stringy': 'B', 'numerical': 3 } ] )
numerical stringy
0 2 A
1 3 B
</code></pre>
<p>Desired output:</p>
<pre><code> row_0_numerical row_0_stringy row_1_numerical row_1_stringy
0 2 A 3 B
</code></pre>
<p>How to?</p>
| -1 | 2016-08-13T17:07:55Z | 38,935,411 | <p>You can <code>pivot</code> your table:</p>
<pre><code># create a unique id for all rows and pivot the table
df['id'] = 0
df1 = df.reset_index().pivot(index = 'id', columns = 'index')
# collapse multi index columns to single index
df1.columns = ['_'.join(['row'] + [str(c) for c in col][::-1]) for col in df1.columns.values]
df1
# row_0_numerical row_1_numerical row_0_stringy row_1_stringy
# id
# 0 2 3 A B
</code></pre>
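<p>Run on the two-row frame from the question, the same steps give a single wide row under the expected collapsed names; <code>wide.dtypes</code> can then confirm that the column types survived the pivot:</p>

```python
import pandas as pd

df = pd.DataFrame([{'stringy': 'A', 'numerical': 2},
                   {'stringy': 'B', 'numerical': 3}])
df['id'] = 0
wide = df.reset_index().pivot(index='id', columns='index')
# collapse the (value_column, row_number) pairs into row_N_column labels
wide.columns = ['_'.join(['row'] + [str(c) for c in col][::-1]) for col in wide.columns.values]
# inspect wide.dtypes to check the column types after the pivot
```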
| 1 | 2016-08-13T17:46:14Z | [
"python",
"pandas",
"dataframe"
] |
Center-align text with Python-pptx | 38,935,137 | <p><strong>Question in short:</strong> Is it possible to align text to the center in Python-pptx?</p>
<p>Since I'm using Python-pptx, I have been able to automate quite a lot of things and I really enjoy using it! However, I've run into a problem. I'm trying to center my text horizontally on a slide. If you don't understand me:</p>
<p><a href="http://i.stack.imgur.com/R3JB0.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/R3JB0.jpg" alt="See how the first two paragraphs differ from the other two?"></a></p>
<p>My text is now aligned to the left, similar to the text in the first two paragraphs. However, I'd like them to be aligned to the center, like those last two paragraphs. This is a snippet of my code:</p>
<pre><code>left = Cm(3)
top = Cm(2.5)
width = Cm(15)
height = Cm(1)
txBox = slide.shapes.add_textbox(left, top, width, height)
tf = txBox.text_frame
p = tf.add_paragraph()
run = p.add_run()
run.text = "Just an example"
font = run.font
font.size = Pt(30)
</code></pre>
<p>I looked in the documentation, but couldn't find anything useful. I read something about "MSO_VERTICAL_ANCHOR" and "PP_PARAGRAPH_ALIGNMENT", but I just can't get it working.</p>
<p>Thank you in advance!</p>
| 0 | 2016-08-13T17:10:21Z | 38,935,342 | <pre><code>from pptx.enum.text import PP_ALIGN
shape.text_frame.paragraphs[0].alignment = PP_ALIGN.CENTER
</code></pre>
<p>This is taken directly from the <a href="https://python-pptx.readthedocs.io/en/latest/" rel="nofollow">Python pptx Docs</a>. Does this not work for you? You said in your question that you've heard of <code>PP_PARAGRAPH_ALIGNMENT</code> but can't get it working. What problems are arising?</p>
<p>You can view more information regarding Python pptx alignment <a href="http://python-pptx.readthedocs.io/en/latest/api/enum/PpParagraphAlignment.html" rel="nofollow">here</a>.</p>
<p>Scanny, who commented below me added a wonderful point that will solve your problem:</p>
<p>Paragraph alignment (also known as justification) is a property of a paragraph and must be applied individually to each paragraph. In the code you included in your question, if you add a line <code>p.alignment = PP_ALIGN.CENTER</code> you should get what you're after.</p>
| 2 | 2016-08-13T17:36:11Z | [
"python",
"powerpoint",
"python-pptx"
] |
Convert elements of a list into binary | 38,935,169 | <p>Suppose I have a list:</p>
<pre><code>lst = [0, 1, 0, 0]
</code></pre>
<p>How can I make python interpret this list as a binary number 0100 so that <code>2*(0100)</code> gives me <code>01000</code>?</p>
<p>The only way that I can think of is to first make a function that converts the "binary" elements to the corresponding integer (base 10) and then use the bin() function.</p>
<p>Is there a better way?</p>
| 3 | 2016-08-13T17:15:16Z | 38,935,289 | <p>You can try:</p>
<pre><code>l = [0, 0, 1, 0]
num = int(''.join(str(x) for x in l), base=2)
</code></pre>
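<p>To then get the zero-padded binary string the question asks for (<code>2*(0100)</code> giving <code>01000</code>), <code>format</code> can be layered on top, as in this small sketch:</p>

```python
lst = [0, 1, 0, 0]
num = int(''.join(str(x) for x in lst), 2)   # 0b0100 == 4
doubled = 2 * num                            # 8
# pad to one digit more than the input so the leading zero survives
out = format(doubled, '0%db' % (len(lst) + 1))
print(out)  # prints 01000
```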
| -1 | 2016-08-13T17:28:54Z | [
"python",
"python-3.x",
"binary"
] |
Convert elements of a list into binary | 38,935,169 | <p>Suppose I have a list:</p>
<pre><code>lst = [0, 1, 0, 0]
</code></pre>
<p>How can I make python interpret this list as a binary number 0100 so that <code>2*(0100)</code> gives me <code>01000</code>?</p>
<p>The only way that I can think of is to first make a function that converts the "binary" elements to the corresponding integer (base 10) and then use the bin() function.</p>
<p>Is there a better way?</p>
| 3 | 2016-08-13T17:15:16Z | 38,935,406 | <p>You can use <a href="https://wiki.python.org/moin/BitwiseOperators" rel="nofollow">bitwise operators</a> like this:</p>
<pre><code>>>> lst = [0, 1, 0, 0]
>>> bin(int(''.join(map(str, lst)), 2) << 1)
'0b1000'
</code></pre>
| 2 | 2016-08-13T17:45:07Z | [
"python",
"python-3.x",
"binary"
] |
Convert elements of a list into binary | 38,935,169 | <p>Suppose I have a list:</p>
<pre><code>lst = [0, 1, 0, 0]
</code></pre>
<p>How can I make python interpret this list as a binary number 0100 so that <code>2*(0100)</code> gives me <code>01000</code>?</p>
<p>The only way that I can think of is to first make a function that converts the "binary" elements to the corresponding integer (base 10) and then use the bin() function.</p>
<p>Is there a better way?</p>
| 3 | 2016-08-13T17:15:16Z | 38,936,539 | <p>This is not a fancy one-liner, but simple and fast.</p>
<pre><code>lst = [0,1,1,0]
num = 0
for b in lst:
num = 2 * num + b
print(num) # 6
</code></pre>
| -1 | 2016-08-13T20:05:55Z | [
"python",
"python-3.x",
"binary"
] |
Error while importing caffe | 38,935,189 | <p>I am getting this error while importing caffe in ipython.</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-1-1cca3aa1f8c5> in <module>()
----> 1 import caffe
/home/harshita/deep-learning/caffe/python/caffe/__init__.py in <module>()
----> 1 from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver
2 from ._caffe import set_mode_cpu, set_mode_gpu, set_device, Layer, get_solver, layer_type_list, set_random_seed
3 from ._caffe import __version__
4 from .proto.caffe_pb2 import TRAIN, TEST
5 from .classifier import Classifier
/home/harshita/deep-learning/caffe/python/caffe/pycaffe.py in <module>()
11 import numpy as np
12
---> 13 from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
14 RMSPropSolver, AdaDeltaSolver, AdamSolver
15 import caffe.io
ImportError: /home/harshita/deep-learning/caffe/python/caffe/../../build/lib/libcaffe.so.1.0.0-rc3: undefined symbol: _ZN2cv8imencodeERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_11_InputArrayERSt6vectorIhSa....
</code></pre>
<p>I have no idea if this error is due to an installation problem. Help me out.</p>
| 0 | 2016-08-13T17:16:59Z | 38,938,285 | <p>Have you built pycaffe? Run
<code>make pycaffe</code>
in your caffe home folder.</p>
<p>By the way, your PATH is fine, otherwise you would receive an error like: no module named caffe.</p>
| 0 | 2016-08-14T00:58:23Z | [
"python",
"caffe",
"pycaffe"
] |
Adding triple containing blank node to stardog with rdflib | 38,935,244 | <p>I'm using the rdflib python library to manipulate a stardog database. How do I add a blank node? I'm trying <code>g.add((BNode(),FOAF.knows,Literal('amy')))</code>, but I get an exception "SPARQLStore does not support Bnodes!". What is the alternative? </p>
| 0 | 2016-08-13T17:24:31Z | 38,942,940 | <p>This is a restriction of RDFLib's SPARQLStore implementation, which you seem to be using. See the <a href="https://rdflib.readthedocs.io/en/2.4.0/apidocs/rdflib.plugins.stores.html?highlight=sparqlstore#rdflib.plugins.stores.sparqlstore.SPARQLStore" rel="nofollow">docs</a>. </p>
<p>You may be able to accomplish this by using <a href="https://rdflib.github.io/sparqlwrapper/" rel="nofollow">SPARQLWrapper</a> to insert the triples directly without using the RDFLib interface. </p>
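<p>For illustration, a minimal sketch of the kind of update this would send (the Stardog endpoint URL and database name in the comment are hypothetical; <code>_:b0</code> asks the server to mint the blank node itself):</p>

```python
def build_insert(name):
    # SPARQL update string; _:b0 creates a fresh blank node on the server side
    return (
        'PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n'
        'INSERT DATA { _:b0 foaf:knows "%s" . }' % name
    )

update = build_insert("amy")
# With SPARQLWrapper this could then be posted to a (hypothetical) Stardog
# update endpoint, e.g.:
#   sparql = SPARQLWrapper("http://localhost:5820/mydb/update")
#   sparql.setMethod(POST)
#   sparql.setQuery(update)
#   sparql.query()
print(update)
```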
| 2 | 2016-08-14T13:57:32Z | [
"python",
"sparql",
"semantic-web",
"rdflib",
"stardog"
] |
Program for ARP scanning | 38,935,304 | <pre><code>#!/usr/bin/python3
#Fazer arping da conexao
import sys
from datetime import datetime
from scapy.all import *
try:
interface = input ("\n[*] Set interface: ")
ips = input("[*] Set IP RANGE or Network: ")
except KeyboardInterrupt:
print("\n user aborted")
sys.exit()
print("Scanning...")
start_time = datetime.now()
conf.verb = 0
ans,unans = srp(Ether(dst = "ff:ff:ff:ff:ff:ff")/ARP(pdst = ips), timeout = 2, iface = interface ,inter= 0.1)
print("\n\tMAC\t\tIP\n")
for snd,rcv in ans:
print(rcv.sprintf("%Ether.src% - %ARP.psrc%"))
stop_time = datetime.now()
total_time = stop_time - start_time
print("\n[*] Scan Completed")
print("[*] Scan Duration: %s" %(total_time))
</code></pre>
<p>I found this code on the internet and I am trying to understand it.
I didn't understand:</p>
<pre><code> ans,unans = srp(Ether(dst = "ff:ff:ff:ff:ff:ff")/ARP(pdst = ips), timeout = 2, iface = interface ,inter= 0.1)
</code></pre>
<p>Why is there a tuple <code>ans,unans</code>?
What is <code>inter= 0.1</code>?</p>
<pre><code>for snd,rcv in ans:
print(rcv.sprintf("%Ether.src% - %ARP.psrc%"))
</code></pre>
<p>I didn't understand <code>rcv.sprintf</code>. What is this? Why rcv.sprintf instead of print?
What is <code>conf.verb = 0</code>?</p>
<p>Could someone explain it?</p>
| 1 | 2016-08-13T17:30:30Z | 39,048,478 | <p>About the code:</p>
<pre><code> ans,unans = srp(Ether(dst = "ff:ff:ff:ff:ff:ff")/ARP(pdst = ips), timeout = 2, iface = interface ,inter= 0.1)
</code></pre>
<p>What this code does is fairly simple. You use the srp function to send packets and receive answers for them - in this case the packets are constructed from two protocols: Ethernet and ARP. To understand what those protocols do and are used for you need at least a basic networking background. But in short, this code asks each IP specified by the pdst parameter to report its MAC address. </p>
<p>The answer of the srp function is stored in the <code>ans,unans</code> tuple.</p>
<p>Also, with the inter parameter you specify a time interval to wait between two packets.</p>
<p>As for rcv.sprintf, I didn't fully understand it either. You could possibly write something very simple like:</p>
<pre><code>print(rcv[ARP].psrc)
print(rcv[Ether].src)
</code></pre>
<p>As for the <strong>conf.verb=0</strong> variable, it sets scapy's verbosity to 0 so you don't get too much output in the terminal when you run the program.</p>
| 3 | 2016-08-19T22:26:25Z | [
"python",
"python-3.4"
] |
Program for ARP scanning | 38,935,304 | <pre><code>#!/usr/bin/python3
#Fazer arping da conexao
import sys
from datetime import datetime
from scapy.all import *
try:
interface = input ("\n[*] Set interface: ")
ips = input("[*] Set IP RANGE or Network: ")
except KeyboardInterrupt:
print("\n user aborted")
sys.exit()
print("Scanning...")
start_time = datetime.now()
conf.verb = 0
ans,unans = srp(Ether(dst = "ff:ff:ff:ff:ff:ff")/ARP(pdst = ips), timeout = 2, iface = interface ,inter= 0.1)
print("\n\tMAC\t\tIP\n")
for snd,rcv in ans:
print(rcv.sprintf("%Ether.src% - %ARP.psrc%"))
stop_time = datetime.now()
total_time = stop_time - start_time
print("\n[*] Scan Completed")
print("[*] Scan Duration: %s" %(total_time))
</code></pre>
<p>I found this code on the internet and I am trying to understand it.
I didn't understand:</p>
<pre><code> ans,unans = srp(Ether(dst = "ff:ff:ff:ff:ff:ff")/ARP(pdst = ips), timeout = 2, iface = interface ,inter= 0.1)
</code></pre>
<p>Why is there a tuple <code>ans,unans</code>?
What is <code>inter= 0.1</code>?</p>
<pre><code>for snd,rcv in ans:
print(rcv.sprintf("%Ether.src% - %ARP.psrc%"))
</code></pre>
<p>I didn't understand <code>rcv.sprintf</code>. What is this? Why rcv.sprintf instead of print?
What is <code>conf.verb = 0</code>?</p>
<p>Could someone explain it?</p>
| 1 | 2016-08-13T17:30:30Z | 39,156,820 | <p><strong>1. Why is there a tuple ans,unans ?</strong></p>
<p>A: <code>srp</code> returns <code>answered</code> and <code>unanswered</code> packets, so it is a tuple. See the <a href="https://github.com/secdev/scapy/blob/master/scapy/sendrecv.py" rel="nofollow">srp function</a>.</p>
<p><strong>2. What is inter= 0.1 ?</strong></p>
<p>A: <code>inter</code> is the interval at which unanswered packets are resent, or at which to retry when no more packets are answered. <code>srp</code> calls <code>sndrcv</code>; for the <code>inter</code> argument see the <a href="https://github.com/secdev/scapy/blob/master/scapy/sendrecv.py" rel="nofollow">sndrcv function</a>.</p>
<p><strong>3. I didn't understand rcv.sprintf. What is this? Why rcv.sprintf instead of print?</strong></p>
<p>A: <code>sprintf</code> outputs a str, where the format is a string that can include directives. A directive begins and
ends with % and has the following format: <code>%[fmt[r],][cls[:nb].]field%</code>. The details are in the <a href="https://github.com/secdev/scapy/blob/master/scapy/packet.py" rel="nofollow">sprintf function</a>.</p>
<p><strong>4. What is conf.verb = 0 ?</strong></p>
<p>A: <code>conf.verb</code> is the level of verbosity, from 0 (almost mute) to 3 (verbose). See <a href="https://github.com/secdev/scapy/blob/master/scapy/config.py" rel="nofollow">verb of the Conf class</a>.</p>
| 1 | 2016-08-26T00:48:50Z | [
"python",
"python-3.4"
] |
How to chain multiple arguments in python? add(1)(2)(3) = 6 | 38,935,313 | <p>I'm trying to create a function that chains results from multiple arguments.</p>
<pre><code>def hi(string):
    print(string)
    return hi
</code></pre>
<p>Calling <code>hi("Hello")("World")</code> works and becomes Hello \n World as expected.</p>
<p>The problem is when I want to append the result as a single string, but
<code>return string + hi</code> produces an error since hi is a function.</p>
<p>I've tried using <code>__str__ and __repr__</code> to change how hi behaves when it has no input. But this only creates a different problem elsewhere.</p>
<p><code>hi("Hello")("World") = "Hello"("World")</code> -> Naturally produces an error.</p>
<p>I understand why the program cannot solve it, but I cannot find a solution to it. :/</p>
| -1 | 2016-08-13T17:31:21Z | 38,935,360 | <p>You are getting into some deep, Haskell-style, type-theoretical issues by having <code>hi</code> return a reference to itself. Instead, just accept multiple arguments and concatenate them in the function.</p>
<pre><code>def hi(*args):
return "\n".join(args)
</code></pre>
<p>Some example usages:</p>
<pre><code>print(hi("Hello", "World"))
print("Hello\n" + hi("World"))
</code></pre>
| 2 | 2016-08-13T17:38:28Z | [
"python"
] |
How to chain multiple arguments in python? add(1)(2)(3) = 6 | 38,935,313 | <p>I'm trying to create a function that chains results from multiple arguments.</p>
<pre><code>def hi(string):
    print(string)
    return hi
</code></pre>
<p>Calling <code>hi("Hello")("World")</code> works and becomes Hello \n World as expected.</p>
<p>The problem is when I want to append the result as a single string, but
<code>return string + hi</code> produces an error since hi is a function.</p>
<p>I've tried using <code>__str__ and __repr__</code> to change how hi behaves when it has no input. But this only creates a different problem elsewhere.</p>
<p><code>hi("Hello")("World") = "Hello"("World")</code> -> Naturally produces an error.</p>
<p>I understand why the program cannot solve it, but I cannot find a solution to it. :/</p>
| -1 | 2016-08-13T17:31:21Z | 38,935,526 | <p>You're running into difficulty here because the result of each call to the function must be a function (so you can chain another function call), while at the same time <em>also</em> being a legitimate string (in case you <em>don't</em> chain another function call and just use the return value as-is). </p>
<p>Fortunately Python has you covered: any type can be made to act like a function by defining a <code>__call__</code> method on it. Built-in types like <code>str</code> don't have such a method, but you can define a subclass of <code>str</code> that does.</p>
<pre><code>class hi(str):
def __call__(self, string):
return hi(self + '\n' + string)
</code></pre>
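<p>A quick check of the chaining behaviour (repeating the class so the snippet stands alone):</p>

```python
class hi(str):
    def __call__(self, string):
        return hi(self + '\n' + string)

greeting = hi("Hello")("World")
print(greeting)                   # Hello
                                  # World
print(isinstance(greeting, str))  # True - still usable as a string
```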
<p>This isn't very pretty and is sorta fragile (i.e. you will end up with regular <code>str</code> objects when you do almost any operation with your special string, unless you override all methods of <code>str</code> to return <code>hi</code> instances instead) and so isn't considered very Pythonic. </p>
<p>(I guess in this particular case it wouldn't much matter if you end up with regular <code>str</code> instances when you start using the result, because at that point you're done chaining function calls anyway. However, this is often an issue in the general case where you're adding functionality to a built-in type via subclassing.)</p>
<p>To a first approximation, the question in your title can be answered similarly:</p>
<pre><code>class add(int): # could also subclass float
def __call__(self, value):
return add(self + value)
</code></pre>
<p>To really do <code>add()</code> right, though, you need to use a subclass of any type the result of an numeric addition could be, not just <code>int</code> or <code>float</code>. Rather than trying to catalog these and manually write the necessary subclasses, we can dynamically create them based on the result type. Here's a quick-and-dirty version:</p>
<pre><code>class AddMixIn(object):
def __call__(self, value):
return add(self + value)
def add(value, _classes={}):
t = type(value)
if t not in _classes:
_classes[t] = type("add_" + t.__name__, (t, AddMixIn), {})
return _classes[t](value)
</code></pre>
<p>(Happily, this implementation works fine for strings, since they can be concatenated using <code>+</code>).</p>
<p>Once you've started down this path, you'll probably want to do this for other operations too. It's a drag copying and pasting basically the same code for every operation, so let's write a function that writes the functions for you! Just specify a function that actually does the work, i.e., takes two values and does something to them, and it gives you back a function that does all the class munging for you. You can specify the operation with a lambda (anonymous function) or a predefined function, such as one from the <code>operator</code> module.</p>
<pre><code>def chainable(operation):
    class CallMixIn(object):
        def __call__(self, value):
            return do(operation(self, value))
    def do(value, _classes={}):
        t = type(value)
        if t not in _classes:
            _classes[t] = type(t.__name__, (t, CallMixIn), {})
        return _classes[t](value)
    return do

add = chainable(lambda a, b: a + b)
# or...
import operator
add = chainable(operator.add)
</code></pre>
<p>In the end it's <em>still</em> not very pretty and is <em>still</em> sorta fragile and <em>still</em> wouldn't be considered very Pythonic.</p>
<p>If you're willing to use an additional (empty) call to signal the end of the chain, things get a lot simpler, because you just need to return functions until you're called with no argument:</p>
<pre><code>def add(x):
return lambda y=None: x if y is None else add(x+y)
</code></pre>
<p>You call it like this:</p>
<pre><code>add(3)(4)(5)() # 12
</code></pre>
| 3 | 2016-08-13T18:01:30Z | [
"python"
] |
simple multithreading example with tornado.web.RequestHandler | 38,935,322 | <p>I have got a <code>mysterious_library</code>, providing a synchronous function <code>query_resource_for_a_long_time</code>.</p>
<p>Then I have the code below that is supposed to fetch the resource asynchronously:</p>
<pre><code>import tornado.ioloop
import tornado.web
import threading
from mysterious_library import query_resource_for_a_long_time, ResourceNotFoundException
def resource_fetcher(set_status, finish):
try:
resource = query_resource_for_a_long_time()
except ResourceNotFoundException:
tornado.ioloop.IOLoop.instance().add_callback(set_status, 404)
tornado.ioloop.IOLoop.instance().add_callback(finish, 'not found')
else:
tornado.ioloop.IOLoop.instance().add_callback(set_status, 200)
tornado.ioloop.IOLoop.instance().add_callback(finish, str(resource))
class Handler(tornado.web.RequestHandler):
@tornado.web.asynchronous
def get(self):
threading.Thread(
target=resource_fetcher,
args=[self.set_status, self.finish]
).start()
tornado.web.Application([
(r'.*', Handler),
]).listen(8765)
tornado.ioloop.IOLoop.instance().start()
</code></pre>
<p>However, it seems that the process is blocked until <code>query_resource_for_a_long_time</code> returns, even though the function runs in a separate thread.</p>
<p>I'm new to tornado and I'm wondering whether it is possible to deal with these requests concurrently.</p>
| 0 | 2016-08-13T17:33:18Z | 38,935,985 | <p>Yes, follow the instructions to use ThreadPoolExecutor:</p>
<p><a href="http://www.tornadoweb.org/en/stable/guide/coroutines.html#calling-blocking-functions" rel="nofollow">http://www.tornadoweb.org/en/stable/guide/coroutines.html#calling-blocking-functions</a></p>
<p>Be aware, when you're testing this, that you can only run a couple queries at once from your browser:</p>
<p><a href="http://www.tornadoweb.org/en/stable/faq.html#my-code-is-asynchronous-but-it-s-not-running-in-parallel-in-two-browser-tabs" rel="nofollow">http://www.tornadoweb.org/en/stable/faq.html#my-code-is-asynchronous-but-it-s-not-running-in-parallel-in-two-browser-tabs</a></p>
<p>... so try wget or curl if you want to prove to yourself that you can run the mysterious long-running function in many threads at once from Tornado.</p>
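<p>The underlying pattern — hand the blocking call to a thread pool and collect the result as a future, instead of calling <code>finish</code> from a raw thread — can be sketched with the standard library alone (the 0.2 s sleep below is a stand-in for the mysterious blocking call):</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor

def query_resource_for_a_long_time(delay):
    time.sleep(delay)      # stand-in for the blocking library call
    return 'resource'

start = time.time()
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(query_resource_for_a_long_time, 0.2)
               for _ in range(4)]
    results = [f.result() for f in futures]
elapsed = time.time() - start

print(results)   # four results, in roughly 0.2 s rather than 0.8 s
```

In Tornado you would wrap this with <code>@run_on_executor</code> (or submit to the executor and <code>yield</code> the future in a coroutine) so the IOLoop stays free while the pool does the blocking work.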
| 1 | 2016-08-13T18:58:16Z | [
"python",
"asynchronous",
"tornado"
] |
Dynamic Filtering in Django with default Queryset | 38,935,422 | <p>I've read the excellent explanation in this question
<a href="http://stackoverflow.com/questions/25662374/dynamically-filter-listview-cbv-in-django-1-7">Dynamically filter ListView CBV in Django 1.7</a>. </p>
<p>But I want to get extra help on the queryset.</p>
<pre><code># urls.py
urlpatterns = patterns('',
url(r'^(?P<exp>[ASG])$', \
ScholarshipDirectoryView.as_view(),\
name='scholarship_directory'),)
# views.py
class ScholarshipDirectoryView(ListView):
queryset= Scholarship.objects.all()
model = Scholarship
template_name = 'scholarship-directory.html'
def get_queryset(self):
qs = super(ScholarshipDirectoryView, self).get_queryset()
return qs.filter(experience_level__exact=self.kwargs['exp'])
</code></pre>
<p>What is the DRY way to fall back to the standard queryset in case the "exp" parameter is missing from the URL?</p>
<p>I want to take this approach because I don't think an extra view or extra urlpattern for the complete queryset and the custom/filtered queryset makes sense.</p>
| 0 | 2016-08-13T17:47:37Z | 38,935,505 | <p>Just wrap it in an <code>if</code>.</p>
<pre><code>qs = super(ScholarshipDirectoryView, self).get_queryset()
exp = self.kwargs.get('exp')
if exp:
    qs = qs.filter(experience_level__exact=exp)
# return the new or the old queryset.
return qs
</code></pre>
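<p>For the fallback branch to ever run, the urlconf must make <code>exp</code> optional. One possible variant of the question's pattern (an assumption, not the original urls.py) uses an optional named group, which can be sanity-checked with plain <code>re</code>:</p>

```python
import re

# hypothetical variant of the question's urlpattern with `exp` made optional
pattern = re.compile(r'^(?P<exp>[ASG])?$')

print(pattern.match('A').group('exp'))  # 'A'
print(pattern.match('').group('exp'))   # None -> view falls back to full queryset
```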
| 0 | 2016-08-13T17:58:01Z | [
"python",
"django",
"listview"
] |
TensorFlow REST Frontend but not TensorFlow Serving | 38,935,428 | <p>I want to deploy a simple TensorFlow model and run it in a REST service like Flask.
So far I have not found a good example on GitHub or here.</p>
<p>I am not ready to use TF Serving as suggested in other posts; it is a perfect solution for Google, but it is overkill for my tasks with gRPC, bazel, C++ coding, protobuf...</p>
| 11 | 2016-08-13T17:48:09Z | 39,102,115 | <p>This <a href="https://github.com/sugyan/tensorflow-mnist/blob/master/main.py" rel="nofollow">github project</a> shows a working example of restoring a model checkpoint and using Flask.</p>
<pre><code>@app.route('/api/mnist', methods=['POST'])
def mnist():
input = ((255 - np.array(request.json, dtype=np.uint8)) / 255.0).reshape(1, 784)
output1 = simple(input)
output2 = convolutional(input)
return jsonify(results=[output1, output2])
</code></pre>
<p>The online <a href="https://tensorflow-mnist.herokuapp.com/" rel="nofollow">demo</a> seems pretty quick.</p>
| 1 | 2016-08-23T13:06:53Z | [
"python",
"flask",
"tensorflow",
"tensorflow-serving"
] |
TensorFlow REST Frontend but not TensorFlow Serving | 38,935,428 | <p>I want to deploy a simple TensorFlow model and run it in a REST service like Flask.
So far I have not found a good example on GitHub or here.</p>
<p>I am not ready to use TF Serving as suggested in other posts; it is a perfect solution for Google, but it is overkill for my tasks with gRPC, bazel, C++ coding, protobuf...</p>
| 11 | 2016-08-13T17:48:09Z | 39,104,964 | <p>There are different ways to do this. Using pure tensorflow is not very flexible, but it is relatively straightforward. The downside of this approach is that you have to rebuild the graph and initialize variables in the code where you restore the model. There is a way shown in <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/learn/python/learn" rel="nofollow">tensorflow skflow/contrib learn</a> which is more elegant; however, this doesn't seem to be functional at the moment and the documentation is out of date. </p>
<p>I put a short example together on github <a href="https://github.com/benman1/tensorflow_flask" rel="nofollow">here</a> that shows how you would send named GET or POST parameters to a flask REST-deployed tensorflow model. </p>
<p>The main code is then in a function that takes a dictionary based on the POST/GET data: </p>
<pre><code>@app.route('/model', methods=['GET', 'POST'])
@parse_postget
def apply_model(d):
tf.reset_default_graph()
with tf.Session() as session:
n = 1
x = tf.placeholder(tf.float32, [n], name='x')
y = tf.placeholder(tf.float32, [n], name='y')
m = tf.Variable([1.0], name='m')
b = tf.Variable([1.0], name='b')
y = tf.add(tf.mul(m, x), b) # fit y_i = m * x_i + b
y_act = tf.placeholder(tf.float32, [n], name='y_')
error = tf.sqrt((y - y_act) * (y - y_act))
train_step = tf.train.AdamOptimizer(0.05).minimize(error)
feed_dict = {x: np.array([float(d['x_in'])]), y_act: np.array([float(d['y_star'])])}
saver = tf.train.Saver()
saver.restore(session, 'linear.chk')
y_i, _, _ = session.run([y, m, b], feed_dict)
return jsonify(output=float(y_i))
</code></pre>
| 4 | 2016-08-23T15:14:47Z | [
"python",
"flask",
"tensorflow",
"tensorflow-serving"
] |
TensorFlow REST Frontend but not TensorFlow Serving | 38,935,428 | <p>I want to deploy a simple TensorFlow model and run it in a REST service like Flask.
So far I have not found a good example on GitHub or here.</p>
<p>I am not ready to use TF Serving as suggested in other posts; it is a perfect solution for Google, but it is overkill for my tasks with gRPC, bazel, C++ coding, protobuf...</p>
| 11 | 2016-08-13T17:48:09Z | 39,222,603 | <p>I don't like to put much code for data/model processing in the Flask RESTful file; I usually keep the TF model class and so on separately. I.e., it could be something like this:</p>
<pre><code># model init, loading data
cifar10_recognizer = Cifar10_Recognizer()
cifar10_recognizer.load('data/c10_model.ckpt')
@app.route('/tf/api/v1/SomePath', methods=['GET', 'POST'])
def upload():
X = []
if request.method == 'POST':
if 'photo' in request.files:
# place for uploading process workaround, obtaining input for tf
X = generate_X_c10(f)
if len(X) != 0:
# designing desired result here
answer = np.squeeze(cifar10_recognizer.predict(X))
top3 = (-answer).argsort()[:3]
res = ([cifar10_labels[i] for i in top3], [answer[i] for i in top3])
# you can simply print this to console
# return 'Prediction answer: {}'.format(res)
# or generate some html with result
return fk.render_template('demos/c10_show_result.html',
name=file,
result=res)
if request.method == 'GET':
# in html i have simple form to upload img file
return fk.render_template('demos/c10_classifier.html')
</code></pre>
<p>cifar10_recognizer.predict(X) is a simple function that runs the prediction operation in a tf session:</p>
<pre><code> def predict(self, image):
logits = self.sess.run(self.model, feed_dict={self.input: image})
return logits
</code></pre>
<p>P.S. Saving/restoring a model from file is an extremely long process; try to avoid this while serving POST/GET requests.</p>
| 1 | 2016-08-30T08:37:30Z | [
"python",
"flask",
"tensorflow",
"tensorflow-serving"
] |
Finding Excel cell reference using Python | 38,935,460 | <p><a href="http://i.stack.imgur.com/fwVen.png" rel="nofollow">Here is the Excel file in question:</a></p>
<p>Context: I am writing a program which can pull values from a PDF and put them in the appropriate cell in an Excel file.</p>
<p>Question: I want to write a function which takes a column value (e.g. 2014) and a row value (e.g. 'COGS') as arguments and returns the cell reference where those two intersect (e.g. 'C3' for 2014 COGS).</p>
<pre><code>def find_correct_cell(year=2014, item='COGS'):
#do something similar to what the =match function in Excel does
return cell_reference #returns 'C3'
</code></pre>
<p><a href="http://i.stack.imgur.com/8N9RE.png" rel="nofollow">I have already tried using openpyxl like this to change the values of some random empty cells where I can store these values:</a></p>
<pre><code> col_num = '=match(2014, A1:E1)'
row_num = '=match("COGS", A1:A5)'
</code></pre>
<p>But I want to grab those values without having to arbitrarily write to those random empty cells. Plus, even with this method, when I read those cells (F5 and F6) it reads the formulae in those cells and not the face value of 3.</p>
<p>Any help is appreciated, thanks.</p>
| 0 | 2016-08-13T17:53:01Z | 38,947,108 | <p>There are a surprising number of details you need to get right to manipulate Excel files this way with openpyxl. First, it's worth knowing that the xlsx file contains two representations of each cell - the formula, and the current value of the formula. openpyxl can return either, and if you want values you should specify <code>data_only=True</code> when you open the file. Also, openpyxl is not able to calculate a new value when you change the formula for a cell - only Excel itself can do that. So inserting a MATCH() worksheet function won't solve your problem.</p>
<p>The code below does what you want, mostly in Python. It uses the "A1" reference style, and does some calculations to turn column numbers into column letters. This won't hold up well if you go past column Z. In that case, you may want to switch to numbered references to rows and columns. There's some more info on that <a href="http://openpyxl.readthedocs.io/en/default/tutorial.html" rel="nofollow">here</a> and <a href="http://stackoverflow.com/questions/12902621/getting-the-row-and-column-numbers-from-coordinate-value-in-openpyxl">here</a>. But hopefully this will get you on your way.</p>
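<p>If you do need to go past column Z, a small helper (a generic sketch, not part of the answer's code) converts a 1-based column number to a letter pair:</p>

```python
def col_letter(n):
    """Convert a 1-based column number to an Excel letter (1 -> 'A', 27 -> 'AA')."""
    letters = ''
    while n > 0:
        # bijective base-26: shift to 0-based before each divmod
        n, rem = divmod(n - 1, 26)
        letters = chr(ord('A') + rem) + letters
    return letters

print(col_letter(3), col_letter(26), col_letter(27))  # C Z AA
```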
<p>Note: This code assumes you are reading a workbook called 'test.xlsx', and that 'COGS' is in a list of items in 'Sheet1!A2:A5' and 2014 is in a list of years in 'Sheet1!B1:E1'.</p>
<pre><code>import openpyxl
def get_xlsx_region(xlsx_file, sheet, region):
""" Return a rectangular region from the specified file.
The data are returned as a list of rows, where each row contains a list
of cell values"""
# 'data_only=True' tells openpyxl to return values instead of formulas
# 'read_only=True' makes openpyxl much faster (fast enough that it
# doesn't hurt to open the file once for each region).
wb = openpyxl.load_workbook(xlsx_file, data_only=True, read_only=True)
reg = wb[sheet][region]
return [[cell.value for cell in row] for row in reg]
# cache the lists of years and items
# get the first (only) row of the 'B1:F1' region
years = get_xlsx_region('test.xlsx', 'Sheet1', 'B1:E1')[0]
# get the first (only) column of the 'A2:A6' region
items = [r[0] for r in get_xlsx_region('test.xlsx', 'Sheet1', 'A2:A5')]
def find_correct_cell(year, item):
# find the indexes for 'COGS' and 2014
year_col = chr(ord('B') + years.index(year)) # only works in A:Z range
item_row = 2 + items.index(item)
cell_reference = year_col + str(item_row)
return cell_reference
print find_correct_cell(year=2014, item='COGS')
# C3
</code></pre>
| 0 | 2016-08-14T22:18:06Z | [
"python",
"excel",
"match",
"cell"
] |
Finding Excel cell reference using Python | 38,935,460 | <p><a href="http://i.stack.imgur.com/fwVen.png" rel="nofollow">Here is the Excel file in question:</a></p>
<p>Context: I am writing a program which can pull values from a PDF and put them in the appropriate cell in an Excel file.</p>
<p>Question: I want to write a function which takes a column value (e.g. 2014) and a row value (e.g. 'COGS') as arguments and returns the cell reference where those two intersect (e.g. 'C3' for 2014 COGS).</p>
<pre><code>def find_correct_cell(year=2014, item='COGS'):
#do something similar to what the =match function in Excel does
return cell_reference #returns 'C3'
</code></pre>
<p><a href="http://i.stack.imgur.com/8N9RE.png" rel="nofollow">I have already tried using openpyxl like this to change the values of some random empty cells where I can store these values:</a></p>
<pre><code> col_num = '=match(2014, A1:E1)'
row_num = '=match("COGS", A1:A5)'
</code></pre>
<p>But I want to grab those values without having to arbitrarily write to those random empty cells. Plus, even with this method, when I read those cells (F5 and F6) it reads the formulae in those cells and not the face value of 3.</p>
<p>Any help is appreciated, thanks.</p>
| 0 | 2016-08-13T17:53:01Z | 38,948,460 | <p>Consider a translated VBA solution as the <a href="https://msdn.microsoft.com/en-us/library/office/ff835873.aspx" rel="nofollow">Match</a> function can adequately handle your needs. Python can access the Excel VBA Object Library using a COM interface with the <code>win32com</code> module. Please note this solution assumes you are using Excel for PC. Below includes the counterpart VBA function.</p>
<p><strong>VBA</strong> Function <em>(native interface)</em></p>
<p>If the below function is placed in an Excel standard module, it can be called in a spreadsheet cell as <code>=FindCell(..., ###)</code>.</p>
<pre class="lang-vb prettyprint-override"><code>' MATCHES ROW AND COL INPUT FOR CELL ADDRESS OUTPUT
Function FindCell(item As String, year As Integer) As String
FindCell = Cells(Application.Match(item, Range("A1:A5"), 0), _
Application.Match(year, Range("A1:E1"), 0)).Address
End Function
debug.Print FindCell("COGS", 2014)
' $C$3
</code></pre>
<p><strong>Python</strong> Script <em>(foreign interface, requiring all objects to be declared)</em></p>
<p>Try/Except/Finally is used to properly close the Excel process regardless of script success or failure.</p>
<pre><code>import win32com.client
# MATCHES ROW AND COL INPUT FOR CELL ADDRESS OUTPUT
def FindCell(item, year):
return(xlWks.Cells(xlApp.WorksheetFunction.Match(item, xlWks.Range("A1:A5"), 0),
xlApp.WorksheetFunction.Match(year, xlWks.Range("A1:E1"), 0)).Address)
try:
xlApp = win32com.client.Dispatch("Excel.Application")
xlWbk = xlApp.Workbooks.Open('C:/Path/To/Workbook.xlsx')
xlWks = xlWbk.Worksheets("SHEETNAME")
print(FindCell("COGS", 2014))
# $C$3
except Exception as e:
print(e)
finally:
xlWbk.Close(False)
xlApp.Quit
xlWks = None
xlWbk = None
xlApp = None
</code></pre>
| 0 | 2016-08-15T02:16:38Z | [
"python",
"excel",
"match",
"cell"
] |
dplyr summarize equivalent in pandas | 38,935,541 | <p>I used to work with R and really love the dplyr package, with which you can easily group by and summarize. </p>
<p>However, in pandas, I don't see an equivalent of summarize and here is how I achieve it in Python: </p>
<pre><code>import pandas as pd
data = pd.DataFrame(
{'col1':[1,1,1,1,1,2,2,2,2,2],
'col2':[1,2,3,4,5,6,7,8,9,0],
'col3':[-1,-2,-3,-4,-5,-6,-7,-8,-9,0]
}
)
result = []
for k,v in data.groupby('col1'):
result.append([k, max(v['col2']), min(v['col3'])])
print pd.DataFrame(result, columns=['col1', 'col2_agg', 'col3_agg'])
</code></pre>
<p>It is not only very verbose, but also might not be the most optimized and efficient. (I used to rewrite a <code>for loop groupby</code> implementation into <code>groupby.agg</code> and the performance enhancement was huge). </p>
<p>In R the code will be </p>
<pre><code>data %>% group_by(col1) %>% summarize(col2_agg=max(col2), col3_agg=min(col3))
</code></pre>
<p>Is there an efficient equivalent in Python, or is a for loop what I have to work with? </p>
<hr>
<p>Also, @ayhan really gave a solution to my question; this is a follow-up question that I will list here instead of in a comment: </p>
<p>what is the equivalent of <code>groupby().summarize(newcolumn=max(col2 * col3))</code></p>
| 8 | 2016-08-13T18:03:32Z | 38,935,669 | <p>The equivalent in pandas would be something like this:</p>
<pre><code>data.groupby(['col1']).agg({'col2': max, 'col3': min})
Out:
col2 col3
col1
1 5 -5
2 9 -9
</code></pre>
<p>If you don't want col1 as an index, you can add <code>.reset_index()</code> at the end.</p>
<p>In general, you can pass multiple functions to agg:</p>
<pre><code>data.groupby(['col1']).agg({'col2': [max, min], 'col3': [min, 'count']})
Out:
col2 col3
max min min count
col1
1 5 1 -5 5
2 9 0 -9 5
</code></pre>
<p>Note that max and min are Python functions, so you can use them directly. But for other functions you need to use aliases ('count', 'mean', etc.). You can also use np.mean, np.var, ...; these numpy functions will be translated to pandas functions.</p>
<p>If you want to label each returning output you can use a dictionary of dictionaries:</p>
<pre><code>data.groupby(['col1']).agg({'col2': {'col2_max': max, 'col2_min': min},
'col3': {'col3_min': min, 'col3_count': 'count'}})
Out:
col2 col3
col2_max col2_min col3_min col3_count
col1
1 5 1 -5 5
2 9 0 -9 5
</code></pre>
<p>If the operation involves multiple columns (maximum of col2 * col3), you can assign a new column and use groupby agg:</p>
<pre><code>data.assign(col2_col3 = lambda x: x.col2 * x.col3).groupby('col1')['col2_col3'].agg(max)
Out:
col1
1 -1
2 0
Name: col2_col3, dtype: int64
</code></pre>
<p>Or, use groupby.apply</p>
<pre><code>data.groupby('col1').apply(lambda x: (x.col2 * x.col3).max())
Out:
col1
1 -1
2 0
dtype: int64
</code></pre>
<p>This last one can also be done with agg, but for all columns it will return the same result:</p>
<pre><code>data.groupby('col1').agg(lambda x: (x.col2 * x.col3).max())
Out:
col2 col3
col1
1 -1 -1 # these column names can be misleading here. For both col2 and col3,
2 0 0 # the returning value is max(col2*col3)
</code></pre>
| 5 | 2016-08-13T18:18:21Z | [
"python",
"pandas"
] |
Confusion with data after scipy.fftconvolve | 38,935,606 | <p>This question has probably been asked before but I cannot find my specific answer in any one place. So here it is: I am trying to find the time difference (lag) between two audio signals (Gaussian white noise) that have been captured by two USB mics. With no problem I have recorded the audio from two different streams, saved them and read the data back to an array. Next, I have experimented with a few different methods for finding the delay: taking the FFT first then cross correlating, <code>scipy.fftconvolve</code>, zero-padding, and it seems countless other methods/strategies. As a test to confirm my calculated delay I tried to use that time delay to calculate the speed of sound. With a mic spacing of .6 meters I should get a time delay of 1.728 ms (v = dis / time). I convolve the mics' recordings with the source signal, but I am unsure what to do with the peak values I get from <code>fftconvolve</code>. From these values I should be able to compute my time difference? Any help you can give would be excellent. </p>
<p>Reading data:</p>
<pre><code>samprat_1, data_1 = scipy.io.wavfile.read(sin_tone_1)
samprat_2, data_2 = scipy.io.wavfile.read(sin_tone_2)
samrat_tone_played, data_tone_played = scipy.io.wavfile.read(sin_tone_played)
</code></pre>
<p>Convolving:</p>
<pre><code>data_1 = _1_
data_2 = _2_
_tp_ = data_tone_played
convol_1_wsig = signal.fftconvolve(_1_, _tp_[::-1], "full")
convol_2_wsig = signal.fftconvolve(_2_, _tp_[::-1], "full")
</code></pre>
<p>Also here is the plot of the FFTconvolve result:</p>
<p><img src="http://i.stack.imgur.com/jemJE.png" alt="FFTconvolve"></p>
| 0 | 2016-08-13T18:11:19Z | 38,938,439 | <p>In situations like this, it's a good idea to make sure your algorithm works on perfect data before trying with real data. It turns out that even if you understand the algorithm, Scipy is stupid in not giving an easy way to obtain the lags corresponding to correlations.</p>
<p>So here's my <a href="https://gist.github.com/fasiha/d089d5b5b71601b5e8d563cc693f64db" rel="nofollow">little script</a>: I generate two signal vectors of unequal lengths, but both containing a specific signal (a linear chirp) with <em>known</em> delays (0.1 and 0.2 seconds respectively). Then I use both <code>scipy.signal.correlate</code> and <code>scipy.signal.fftconvolve</code> to make sure I can recover the correct delay between the two signals. This boils down to generating the vector of lags (i.e., the time vector that corresponds to the indexes of the correlation) correctly, and knowing how to interpret them.</p>
<pre><code>import numpy as np
import scipy.signal as signal
import numpy.fft as fft
try:
import pylab
except ImportError:
print("Can't import pylab, do you have matplotlib installed?")
## Generator for the ideal underlying signal
# Parameters for a linear chirp that lasts 0.5 seconds, over which the frequency
# is swept from -10 Hz to 5 Hz.
signalLength = 0.5 # seconds
fStart = -10.0 # Hz
fEnd = +5.0 # Hz
fRate = (fEnd - fStart) / signalLength
# `x` is a function that evaluates this underlying signal for any given time
# vector `t`.
x = lambda t: (np.sin(2 * np.pi * (fStart + fRate * 0.5 * t) * t) *
np.logical_and(t >= 0, t<signalLength))
## Generate some test data
fs = 100.0 # Sample rate, Hz
# First signal: lasts 2 seconds, sampled at `fs` Hz
Tend = 2.0 # seconds
t1 = np.arange(0, Tend, 1/fs)
y1 = x(t1 - 0.1)
# Plot?
try:
pylab.figure()
pylab.subplot(211)
pylab.plot(t1, y1)
except NameError:
pass
# Second signal: lasts 1 second, also sampled at `fs` Hz
t2 = np.arange(0, Tend/2, 1/fs)
y2 = x(t2 - 0.2)
try:
pylab.plot(t2, y2)
pylab.legend(['y1', 'y2'])
except NameError:
pass
## Correlate
# Evaluate the correlation
z = signal.correlate(y1, y2)
# And this is crucial: the vector of lags for which the above `z` corresponds
# to.
tz = (np.arange(z.size) - (y2.size - 1)) / fs
# Here's how to evaluate the relative delay between y1 and y2
print('y1 is y2 delayed by {} seconds'.format(tz[np.argmax(np.abs(z))]))
try:
pylab.subplot(212)
pylab.plot(tz, z)
except NameError:
pass
## Correlate with flipped y1-vs-y2 to make sure it still works
z = signal.correlate(y2, y1)
# Note that now, we subtract `y1.size` because `y1` is second argument to
# `correlate`
tz = (np.arange(z.size) - (y1.size - 1)) / fs
print('y2 is y1 delayed by {} seconds'.format(tz[np.argmax(np.abs(z))]))
try:
pylab.subplot(212)
pylab.plot(tz, z)
pylab.legend(['correlate(y1,y2)', 'correlate(y2,y1)'])
except NameError:
pass
</code></pre>
<p>This generates (see bottom for figure):</p>
<pre><code>y1 is y2 delayed by -0.1 seconds
y2 is y1 delayed by 0.1 seconds
</code></pre>
<p>Hooray! Ok! So we know that if you <code>correlate(y1, y2)</code>, to generate the vector of lags <code>tz</code>, you subtract <code>y2</code>'s length (minus one). And we understand that in this case, the maximum magnitude-correlation happens at a lag that means "how much to delay <code>y2</code> to get <code>y1</code>".</p>
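<p>A tiny sanity check of that lag bookkeeping on made-up integer samples (using <code>numpy.correlate</code>, which agrees with <code>scipy.signal.correlate</code> for 1-D full-mode inputs):</p>

```python
import numpy as np

# y1 is y2 delayed by exactly 2 samples
y2 = np.array([1.0, 2.0, 3.0, 0.0, 0.0])
y1 = np.array([0.0, 0.0, 1.0, 2.0, 3.0])

z = np.correlate(y1, y2, mode='full')
# Same lag construction as above: subtract (y2.size - 1)
lags = np.arange(z.size) - (y2.size - 1)
print(lags[np.argmax(np.abs(z))])  # 2, i.e. "y1 is y2 delayed by 2 samples"
```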
<p>Let's wrap this up into some functions for general purpose reuse. The function won't know about sampling rate, so it'll just return integer-lags, but we can divide by the sample rate outside.</p>
<pre><code>## Wrap this in a function
def correlateWithLags(y1, y2, *args, **kwargs):
"Returns `scipy.signal.correlate` output as wel as vector of lags"
z = signal.correlate(y1, y2, *args, **kwargs)
lags = np.arange(z.size) - (y2.size - 1)
return (z, lags)
# Make sure all works as above
(z, nz) = correlateWithLags(y1, y2)
tz = nz / fs
print('y1 is y2 delayed by {} seconds: `correlateWithLags`'.format(tz[np.argmax(np.abs(z))]))
</code></pre>
<p>This generates:</p>
<pre><code>y1 is y2 delayed by -0.1 seconds: `correlateWithLags`
</code></pre>
<p>Still working!</p>
<p>Now for funsies, let's replace <code>scipy.signal.correlate</code> with <code>scipy.signal.fftconvolve</code>. This can greatly reduce runtime with long signal lengths. Correlation is the same as convolution except you have to time-reverse one of the signals, but after taking care of that little detail, the results should be exactly the same, and the mechanics of generating the correct delay are the same as above.</p>
<pre><code>## Use `fftconvolve` instead of `correlate` for fast-convolution
def fftCorrelateWithLags(y1, y2, *args, **kwargs):
"Equivalent to correlateWithLags but uses `scipy.signal.fftconvolve`"
# NOTA BENE: reverse `y2`! And if complex, need `np.conj(y2)` too!
z = signal.fftconvolve(y1, y2[::-1], *args, **kwargs)
lags = np.arange(z.size) - (y2.size - 1)
return (z, lags)
# Make sure it still works
(z, nz) = fftCorrelateWithLags(y1, y2)
tz = nz / fs
print('y1 is y2 delayed by {} seconds: `fftCorrelateWithLags`'.format(tz[np.argmax(np.abs(z))]))
</code></pre>
<p>Still works:</p>
<pre><code>y1 is y2 delayed by -0.1 seconds: `fftCorrelateWithLags`
</code></pre>
<p>Scipy's <code>correlate</code> really should give you a function that returns the vector of lags corresponding to the correlation. For this reason, I think it's ok to have posted this on StackOverflow instead of <a href="http://dsp.stackexchange.com">http://dsp.stackexchange.com</a> because even if you understood the algorithm, you'd still have code-bullshittery to deal with.</p>
<p>Postscript. Here's the <a href="https://gist.github.com/fasiha/d089d5b5b71601b5e8d563cc693f64db" rel="nofollow">full code in a Gist</a>. Here's the image generated, showing the two signals (top) and the two ways to do the correlation (<code>correlate(y1, y2)</code> vs <code>correlate(y2, y1)</code>, bottom).</p>
<p><a href="http://i.stack.imgur.com/Z986h.png" rel="nofollow"><img src="http://i.stack.imgur.com/Z986h.png" alt="Data and correlation"></a></p>
| 0 | 2016-08-14T01:35:02Z | [
"python",
"scipy",
"signal-processing",
"fft"
] |
How to generate this sequence using python | 38,935,690 | <p>For example, if q = 2, then I have to generate all sequences from [1,1] to [2,2];
if q = 3, then generate the sequences from [1,1,1] to [3,3,3]; for q = 4, generate the sequences from [1,1,1,1] to [4,4,4,4], etc.</p>
<p>Example of the sequence for q = 3:</p>
<pre><code>(1, 1, 1)
(1, 1, 2)
(1, 1, 3)
(1, 2, 1)
(1, 2, 2)
(1, 2, 3)
(1, 3, 1)
(1, 3, 2)
(1, 3, 3)
(2, 1, 1)
(2, 1, 2)
(2, 1, 3)
(2, 2, 1)
(2, 2, 2)
(2, 2, 3)
(2, 3, 1)
(2, 3, 2)
(2, 3, 3)
(3, 1, 1)
(3, 1, 2)
(3, 1, 3)
(3, 2, 1)
(3, 2, 2)
(3, 2, 3)
(3, 3, 1)
(3, 3, 2)
(3, 3, 3)
</code></pre>
<p>I have tried this "<a href="http://stackoverflow.com/questions/31552101/python-generating-all-nondecreasing-sequences">Python generating all nondecreasing sequences</a>" but am not getting the required output.</p>
<p>Currently I am using this code:</p>
<pre><code>import itertools
def generate(q):
k = range(1, q+1) * q
ola = set(i for i in itertools.permutations(k, q))
for i in sorted(ola):
print i
generate(3)
</code></pre>
<p>I need another, better way to generate this sequence.</p>
| 0 | 2016-08-13T18:21:38Z | 38,935,749 | <p>Use itertools.product with the repeat parameter:</p>
<pre><code>q = 2
list(itertools.product(range(1, q + 1), repeat=q))
Out: [(1, 1), (1, 2), (2, 1), (2, 2)]
q = 3
list(itertools.product(range(1, q + 1), repeat=q))
Out:
[(1, 1, 1),
(1, 1, 2),
(1, 1, 3),
(1, 2, 1),
(1, 2, 2),
...
</code></pre>
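<p>A quick self-contained sanity check (values chosen arbitrarily): each of the <code>q</code> positions independently takes one of <code>q</code> values, so you get exactly <code>q**q</code> tuples, running from <code>(1, ..., 1)</code> to <code>(q, ..., q)</code>:</p>

```python
import itertools

q = 3
seqs = list(itertools.product(range(1, q + 1), repeat=q))

print(len(seqs))  # 27 == q ** q
print(seqs[0])    # (1, 1, 1)
print(seqs[-1])   # (3, 3, 3)
```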
| 6 | 2016-08-13T18:28:29Z | [
"python",
"itertools"
] |
How to generate this sequence using python | 38,935,690 | <p>For example, if q = 2, then I have to generate all sequences from [1,1] to [2,2];
if q = 3, then generate the sequences from [1,1,1] to [3,3,3]; for q = 4, generate the sequences from [1,1,1,1] to [4,4,4,4], etc.</p>
<p>Example of the sequence for q = 3:</p>
<pre><code>(1, 1, 1)
(1, 1, 2)
(1, 1, 3)
(1, 2, 1)
(1, 2, 2)
(1, 2, 3)
(1, 3, 1)
(1, 3, 2)
(1, 3, 3)
(2, 1, 1)
(2, 1, 2)
(2, 1, 3)
(2, 2, 1)
(2, 2, 2)
(2, 2, 3)
(2, 3, 1)
(2, 3, 2)
(2, 3, 3)
(3, 1, 1)
(3, 1, 2)
(3, 1, 3)
(3, 2, 1)
(3, 2, 2)
(3, 2, 3)
(3, 3, 1)
(3, 3, 2)
(3, 3, 3)
</code></pre>
<p>I have tried this "<a href="http://stackoverflow.com/questions/31552101/python-generating-all-nondecreasing-sequences">Python generating all nondecreasing sequences</a>" but am not getting the required output.</p>
<p>Currently I am using this code:</p>
<pre><code>import itertools
def generate(q):
k = range(1, q+1) * q
ola = set(i for i in itertools.permutations(k, q))
for i in sorted(ola):
print i
generate(3)
</code></pre>
<p>I need another, better way to generate this sequence.</p>
| 0 | 2016-08-13T18:21:38Z | 38,935,818 | <p>I think you want <code>itertools.product()</code>, which does all possible combinations of the iterable elements. <code>itertools.permutations()</code> does not repeat elements, and <code>itertools.combinations()</code> or <code>itertools.combinations_with_replacement()</code> only goes in sorted order (e.g. the first element of the input iterable won't be the last element of the result).</p>
<pre><code>from itertools import product
def generate(q):
assert q > 0 # not defined for <= 0
return list(product(range(1,q+1), repeat=q))
generate(3) # [(1,1,1), (1,1,2), ..., (3,3,2), (3,3,3)]
</code></pre>
<p>See: <a href="https://docs.python.org/3/library/itertools.html" rel="nofollow">https://docs.python.org/3/library/itertools.html</a></p>
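<p>For intuition, <code>product(..., repeat=q)</code> yields the same tuples, in the same order, as <code>q</code> nested <code>for</code> loops; a small Python 3 sketch with a made-up <code>q</code>:</p>

```python
from itertools import product

q = 2
via_product = list(product(range(1, q + 1), repeat=q))
via_loops = [(i, j) for i in range(1, q + 1) for j in range(1, q + 1)]

print(via_product == via_loops)  # True
print(via_product)               # [(1, 1), (1, 2), (2, 1), (2, 2)]
```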
| 3 | 2016-08-13T18:36:36Z | [
"python",
"itertools"
] |
'int' object is not iterable in class __init__ definition | 38,935,713 | <pre><code>class MenuItem():
def __init__(self, width, height, (posX, posY)=(0,0)):
#self.label = self.render(self.text, 1, self.font_color)
self.width = width
self.height = height
self.posX = posX
self.posY = posY
self.position = posX, posY
def set_position(self, x, y):
self.position = (x, y)
self.posX = x
self.posY = y
def is_mouse_selection(self, (posx, posy)):
if (posx >= self.posX and posx <= self.posX + self.width) and (posy >= self.posY and posy <= self.posY + self.height):
return True
return False
def draw(self):
raise MissingFunction("Child of MenuItem class has no draw function")
class RadioButton(MenuItem):
def __init__(width, (posX, posY) = (0, 0)):
super(RadioButton, self).__init__(width, width, (posX, posY))
self.active = False
def draw(screen):
pygame.draw.rect(screen. GREY, pygame.Rect(posX, posY, self.width, self.height))
if self.active:
pygame.draw.circle(screen, BLACK, (posX + (self.width / 2), posY + (self.height / 2)), self.width)
else:
pygame.draw.circle(screen, BLACK, (posX + (self.width / 2), posY + (self.height / 2)), self.width, 1)
class RadioFrame(MenuItem):
def __init__(numButtons, buttonWidth, (posX, posY)=(0, 0)):
super(RadioFrame, self).__init__(numButtons * buttonWidth, width)
self.set_position(posX, posY)
self.numButtons = numButtons
self.buttons = []
self.activeButton = None
curX = self.posX
for i in enumerate(self.numButtons):
self.buttons.append(RadioButton(buttonWidth, (curX, self.posY)))
curX += buttonWidth
</code></pre>
<p>I'm trying to create a child of my MenuItem class, RadioFrame, which contains a list of RadioButton objects. I already have other successfully running children that are initialized in a similar way.</p>
<p>I'm getting the error that an int object isn't iterable; I know what that means, but I'm struggling to see where I'm making that error. It's being triggered here:
<code>line 44, in __init__
def __init__(numButtons, buttonWidth, (posX, posY)=(0, 0)):</code></p>
<p>For some more context, here is where I am creating new instances of the class:</p>
<pre><code>numRadios = 2 # 2 Frames
index = 0

    # First Num Players
## PosX = middle of screen width - middle of (numButtons * buttonWidth) [aka RadioFrame width]
## PosY = middle of screen height - middle of (buttonWidth) [aka RadioFrame height]
playerRadio = RadioFrame(3, 50)
posx = (self.scr_width / 2) - ((50 * 3) / 2)
posy = (self.scr_height / 2) - (50 / 2) + ((index * 2) + index * 50) # Copied from text button position, index starts at zero
playerRadio.set_position(posx, posy)
self.items.append(playerRadio)
index += 1
</code></pre>
<p>Full Trace:</p>
<blockquote>
<p>Traceback (most recent call last):</p>
<pre><code> File "./main.py", line 61, in <module>
2: PreGameMenu(screen, clock, preGameFuncs.keys(), preGameFuncs),
File "/home/rhartman/Documents/pitch/menus.py", line 113, in __init__
playerRadio = RadioFrame(3, 50)
File "/home/rhartman/Documents/pitch/menus.py", line 44, in __init__
def __init__(numButtons, buttonWidth, (posX, posY)=(0, 0)):
</code></pre>
<p>TypeError: 'int' object is not iterable</p>
</blockquote>
| -2 | 2016-08-13T18:25:16Z | 38,961,426 | <p>You're trying to iterate an integer. </p>
<pre><code>class RadioFrame(MenuItem):
def __init__(numButtons, buttonWidth, (posX, posY)=(0, 0)):
super(RadioFrame, self).__init__(numButtons * buttonWidth, width)
self.set_position(posX, posY)
self.numButtons = numButtons
self.buttons = []
self.activeButton = None
curX = self.posX
for i in enumerate(self.numButtons):
self.buttons.append(RadioButton(buttonWidth, (curX, self.posY)))
curX += buttonWidth
playerRadio = RadioFrame(3, 50)
</code></pre>
<p>According to your code, <code>self.numButtons</code> is the integer 3, and you're trying to iterate through it in the line <code>for i in enumerate(self.numButtons):</code>. It should probably be <code>for i in xrange(self.numButtons):</code>.</p>
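<p>A minimal, self-contained illustration of why that line fails (shown in Python 3, where <code>range</code> plays the role of Python 2's <code>xrange</code>):</p>

```python
numButtons = 3

# enumerate() needs an iterable; handing it a bare int raises immediately
try:
    enumerate(numButtons)
except TypeError as exc:
    print(exc)  # 'int' object is not iterable

# range() produces the iteration the loop almost certainly wanted
print(list(range(numButtons)))  # [0, 1, 2]
```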
| 0 | 2016-08-15T19:08:58Z | [
"python",
"class",
"pygame"
] |
Get windows 10 build version (release ID) | 38,935,715 | <p>I want to get Windows build <em>version</em>. I have searched everywhere for this, but to no avail.</p>
<p>No, I don't want to know if it's 7, 8, 10, or whatever. I don't want the Windows build number. I want to know the Windows build <strong>version</strong>. I am not sure what the official name of this would be, but here is an image of what I'm asking for:</p>
<p><a href="http://i.stack.imgur.com/XyEGY.png" rel="nofollow"><img src="http://i.stack.imgur.com/XyEGY.png" alt="enter image description here"></a></p>
<p>I tried using the <code>sys</code>, <code>os</code> and <code>platform</code> modules, but I can't seem to find anything built-in that can do this.</p>
| 3 | 2016-08-13T18:25:28Z | 38,936,122 | <p>I don't know of any libraries that will give you this value directly, but you can parse the command window output when you open a new command window via <code>os.popen()</code>.</p>
<pre><code>print(os.popen('cmd').read())
</code></pre>
<p>The boot screen for the command window has the version/build data on the first line. I'm running version 6.1, build 7601, according to the following output from <code>os.popen()</code>:</p>
<pre><code>Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\...>
</code></pre>
<p>And when I run <code>winver</code>, I see that I'm running Windows 7, Version 6.1, Build 7601: SP1:</p>
<p><img src="http://i.stack.imgur.com/SVWbW.png" alt="Winver output screencap"></p>
<p>Which ties to the interpretation of the first line in the output from <code>os.popen()</code>.</p>
| 0 | 2016-08-13T19:14:23Z | [
"python",
"windows-10"
] |
Get windows 10 build version (release ID) | 38,935,715 | <p>I want to get Windows build <em>version</em>. I have searched everywhere for this, but to no avail.</p>
<p>No, I don't want to know if it's 7, 8, 10, or whatever. I don't want the Windows build number. I want to know the Windows build <strong>version</strong>. I am not sure what the official name of this would be, but here is an image of what I'm asking for:</p>
<p><a href="http://i.stack.imgur.com/XyEGY.png" rel="nofollow"><img src="http://i.stack.imgur.com/XyEGY.png" alt="enter image description here"></a></p>
<p>I tried using the <code>sys</code>, <code>os</code> and <code>platform</code> modules, but I can't seem to find anything built-in that can do this.</p>
| 3 | 2016-08-13T18:25:28Z | 38,936,407 | <p>You can use <a href="https://docs.python.org/3.5/library/ctypes.html?highlight=ctypes#module-ctypes" rel="nofollow">ctypes</a> and <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms724451(v=vs.85).aspx" rel="nofollow">GetVersionEx</a> from <code>Kernel32.dll</code> to find the build number.</p>
<pre><code>import ctypes
def getWindowsBuild():
class OSVersionInfo(ctypes.Structure):
_fields_ = [
("dwOSVersionInfoSize" , ctypes.c_int),
("dwMajorVersion" , ctypes.c_int),
("dwMinorVersion" , ctypes.c_int),
("dwBuildNumber" , ctypes.c_int),
("dwPlatformId" , ctypes.c_int),
("szCSDVersion" , ctypes.c_char*128)];
GetVersionEx = getattr( ctypes.windll.kernel32 , "GetVersionExA")
version = OSVersionInfo()
version.dwOSVersionInfoSize = ctypes.sizeof(OSVersionInfo)
GetVersionEx( ctypes.byref(version) )
return version.dwBuildNumber
</code></pre>
| 2 | 2016-08-13T19:49:43Z | [
"python",
"windows-10"
] |
Get windows 10 build version (release ID) | 38,935,715 | <p>I want to get Windows build <em>version</em>. I have searched everywhere for this, but to no avail.</p>
<p>No, I don't want to know if it's 7, 8, 10, or whatever. I don't want the Windows build number. I want to know the Windows build <strong>version</strong>. I am not sure what the official name of this would be, but here is an image of what I'm asking for:</p>
<p><a href="http://i.stack.imgur.com/XyEGY.png" rel="nofollow"><img src="http://i.stack.imgur.com/XyEGY.png" alt="enter image description here"></a></p>
<p>I tried using the <code>sys</code>, <code>os</code> and <code>platform</code> modules, but I can't seem to find anything built-in that can do this.</p>
| 3 | 2016-08-13T18:25:28Z | 38,936,997 | <p>It seems you are looking for the <code>ReleaseID</code> which is different from the build number.</p>
<p>You can find it by querying the value of <code>ReleaseID</code> in the <code>HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion</code> registry key.</p>
<p>You can query the value using the <a href="https://technet.microsoft.com/en-us/library/cc742028(v=ws.11).aspx" rel="nofollow">REG command</a> or the <a href="https://docs.python.org/3/library/winreg.html" rel="nofollow"><code>winreg</code></a> module:</p>
<hr>
<pre><code>import os
def getReleaseId():
key = r"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion"
val = r"ReleaseID"
output = os.popen( 'REG QUERY "{0}" /V "{1}"'.format( key , val) ).read()
return int( output.strip().split(' ')[-1] )
</code></pre>
<hr>
<pre><code>import winreg
def getReleaseId():
key = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
val = r"ReleaseID"
key = winreg.OpenKey( winreg.HKEY_LOCAL_MACHINE , key)
releaseId = int(winreg.QueryValueEx(key,val)[0])
winreg.CloseKey(key)
return releaseId
</code></pre>
| 1 | 2016-08-13T21:03:34Z | [
"python",
"windows-10"
] |
pandas extract list from dataframe | 38,935,735 | <p>Suppose I have a dataframe like below: </p>
<pre><code> FDT_DATE FFLT_LATITUDE FFLT_LONGITUDE FINT_STAT FSTR_ID
51307 1417390467000 31.2899 121.4845 0 112609
51308 1417390428000 31.2910 121.4859 0 112609
51309 1417390608000 31.2944 121.4857 1 112609
51310 1417390548000 31.2940 121.4850 1 112609
51313 1417390668000 31.2954 121.4886 1 112609
51314 1417390717000 31.2965 121.4937 1 112609
53593 1417390758000 31.2946 121.4940 0 112609
63586 1417390798000 31.2932 121.4960 1 112609
63587 1417390818000 31.2940 121.4966 1 112609
63588 1417390827000 31.2946 121.4974 1 112609
63589 1417390907000 31.2952 121.4986 0 112609
</code></pre>
<p>I want to extract the location records into a polyline list, that is, extract the locations of the records that share the same <code>FSTR_ID</code> and have <code>FINT_STAT</code> equal to 1:</p>
<pre><code> FSTR_ID FDT_DATE POLYLINE
0 112609 1417390608000 [[31.2944,121.4857],[31.2940,121.4850],[31.2954,121.4886],[31.2965,121.4937]]
1 112609 1417390798000 [[31.2932,121.4960],[31.2940,121.4966],[31.2946, 121.4974]]
</code></pre>
<p>How can I do that?</p>
<p>The original dataset can be generated by this code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"FDT_DATE":{"0":1417390467000,"1":1417390428000,"2":1417390608000,"3":1417390548000,"4":1417390668000,"5":1417390717000,"6":1417390758000,"7":1417390798000,"8":1417390818000,"9":1417390827000,"10":1417390907000},"FFLT_LATITUDE":{"0":31.2899,"1":31.291,"2":31.2944,"3":31.294,"4":31.2954,"5":31.2965,"6":31.2946,"7":31.2932,"8":31.294,"9":31.2946,"10":31.2952},"FFLT_LONGITUDE":{"0":121.4845,"1":121.4859,"2":121.4857,"3":121.485,"4":121.4886,"5":121.4937,"6":121.494,"7":121.496,"8":121.4966,"9":121.4974,"10":121.4986},"FINT_STAT":{"0":0,"1":0,"2":1,"3":1,"4":1,"5":1,"6":0,"7":1,"8":1,"9":1,"10":0},"FSTR_ID":{"0":112609,"1":112609,"2":112609,"3":112609,"4":112609,"5":112609,"6":112609,"7":112609,"8":112609,"9":112609,"10":112609}})
df = df.sort(['FDT_DATE'])
</code></pre>
| 1 | 2016-08-13T18:26:55Z | 38,936,581 | <p><a href="http://stackoverflow.com/questions/26483254/python-pandas-insert-list-into-a-cell">You can insert</a> a <code>list</code> into a <code>pandas.DataFrame()</code> cell only with the <code>.set_value()</code> method, and the column type should be <code>object</code>.</p>
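<p>A minimal, self-contained sketch of that idea (note: <code>.set_value()</code> belongs to the pandas of this answer's era and was removed in later releases; the <code>.at</code> accessor shown here is the closest modern equivalent, and the coordinate values are made up):</p>

```python
import pandas as pd

df = pd.DataFrame({'FSTR_ID': [112609, 112609], 'POLYLINE': [None, None]})
df['POLYLINE'] = df['POLYLINE'].astype(object)  # cells must be object-typed to hold lists

df.at[0, 'POLYLINE'] = [[31.2944, 121.4857], [31.2954, 121.4886]]
print(df.at[0, 'POLYLINE'])  # [[31.2944, 121.4857], [31.2954, 121.4886]]
```

<p>The fuller walk-through below applies the same trick to the question's actual frame.</p>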
<pre><code>df = pd.DataFrame({"FDT_DATE":[1417390467000, 1417390428000, 1417390608000, 1417390548000,
1417390668000, 1417390717000, 1417390758000, 1417390798000, 1417390818000,
1417390827000, 1417390907000], "FFLT_LATITUDE":[31.2899, 31.291, 31.2944, 31.294,
31.2954, 31.2965, 31.2946, 31.2932, 31.294, 31.2946, 31.2952],
"FFLT_LONGITUDE":[121.4845, 121.4859, 121.4857, 121.485, 121.4886, 121.4937,
121.494, 121.496, 121.4966, 121.4974, 121.4986],
"FINT_STAT":[0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0],
"FSTR_ID":[112609, 112609, 112609, 112609, 112609, 112609, 112609, 112609,
112609, 112609, 112609]})
df = df.sort(['FDT_DATE']).reset_index(drop=True).reset_index()
def func(x):
global a
global b
if (x['index'] - x['FINT_STAT']) != x['index']:
return a
else:
b += 1
a = b
# Create 't1' column for filter "1" groups in 'FINT_STAT' column
a = 0
b = 0
df['t1'] = df[['index', 'FINT_STAT']].apply(lambda x: func(x), axis=1)
# Initialize result dataframe
df_res = df.drop_duplicates(subset=['t1'])[['FSTR_ID', 'FDT_DATE', 't1']].copy()\
.reset_index(drop=True)
df_res = df_res.dropna().reset_index(drop=True)
# First create 'POLYLINE' column then convert it into 'object'
df_res['POLYLINE'] = np.nan
df_res['POLYLINE'] = df_res['POLYLINE'].astype(object)
# Inserting list into dataframe is available with 'pd.DataFrame.set_value()
for i in df['t1'].dropna().unique():
df_res.set_value(df_res.loc[df_res['t1'] == i, 't1'].index.tolist()[0], 'POLYLINE',
df.loc[df['t1'] == i, ['FFLT_LATITUDE', 'FFLT_LONGITUDE']].values.tolist())
df_res = df_res.drop(['t1'], axis=1)
</code></pre>
<p>The result is (your posted result is NOT sorted by 'FDT_DATE'):</p>
<pre><code> FSTR_ID FDT_DATE POLYLINE
0 112609 1417390548000 [[31.294, 121.485], [31.2944, 121.4857], [31.2954, 121.4886], [31.2965, 121.4937]]
1 112609 1417390798000 [[31.2932, 121.496], [31.294, 121.4966], [31.2946, 121.4974]]
</code></pre>
| 1 | 2016-08-13T20:11:18Z | [
"python",
"pandas",
"dataframe",
"preprocessor",
"data-cleaning"
] |
pandas extract list from dataframe | 38,935,735 | <p>Suppose I have a dataframe like below: </p>
<pre><code> FDT_DATE FFLT_LATITUDE FFLT_LONGITUDE FINT_STAT FSTR_ID
51307 1417390467000 31.2899 121.4845 0 112609
51308 1417390428000 31.2910 121.4859 0 112609
51309 1417390608000 31.2944 121.4857 1 112609
51310 1417390548000 31.2940 121.4850 1 112609
51313 1417390668000 31.2954 121.4886 1 112609
51314 1417390717000 31.2965 121.4937 1 112609
53593 1417390758000 31.2946 121.4940 0 112609
63586 1417390798000 31.2932 121.4960 1 112609
63587 1417390818000 31.2940 121.4966 1 112609
63588 1417390827000 31.2946 121.4974 1 112609
63589 1417390907000 31.2952 121.4986 0 112609
</code></pre>
<p>I want to extract the location records into a polyline list, that is, extract the locations of the records that share the same <code>FSTR_ID</code> and have <code>FINT_STAT</code> equal to 1:</p>
<pre><code> FSTR_ID FDT_DATE POLYLINE
0 112609 1417390608000 [[31.2944,121.4857],[31.2940,121.4850],[31.2954,121.4886],[31.2965,121.4937]]
1 112609 1417390798000 [[31.2932,121.4960],[31.2940,121.4966],[31.2946, 121.4974]]
</code></pre>
<p>How can I do that?</p>
<p>The original dataset can be generated by this code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"FDT_DATE":{"0":1417390467000,"1":1417390428000,"2":1417390608000,"3":1417390548000,"4":1417390668000,"5":1417390717000,"6":1417390758000,"7":1417390798000,"8":1417390818000,"9":1417390827000,"10":1417390907000},"FFLT_LATITUDE":{"0":31.2899,"1":31.291,"2":31.2944,"3":31.294,"4":31.2954,"5":31.2965,"6":31.2946,"7":31.2932,"8":31.294,"9":31.2946,"10":31.2952},"FFLT_LONGITUDE":{"0":121.4845,"1":121.4859,"2":121.4857,"3":121.485,"4":121.4886,"5":121.4937,"6":121.494,"7":121.496,"8":121.4966,"9":121.4974,"10":121.4986},"FINT_STAT":{"0":0,"1":0,"2":1,"3":1,"4":1,"5":1,"6":0,"7":1,"8":1,"9":1,"10":0},"FSTR_ID":{"0":112609,"1":112609,"2":112609,"3":112609,"4":112609,"5":112609,"6":112609,"7":112609,"8":112609,"9":112609,"10":112609}})
df = df.sort(['FDT_DATE'])
</code></pre>
| 1 | 2016-08-13T18:26:55Z | 38,937,043 | <pre><code>import pandas as pd
import numpy as np
# Initializing the data
df = pd.DataFrame({'FDT_DATE': {0: 1417390467000, 1: 1417390428000, 2: 1417390608000, 3: 1417390548000,
4: 1417390668000, 5: 1417390717000, 6: 1417390758000, 7: 1417390798000,
8: 1417390818000, 9: 1417390827000, 10: 1417390907000},
'FFLT_LATITUDE': {0: 31.2899, 1: 31.291, 2: 31.2944, 3: 31.294, 4: 31.2954,
5: 31.2965, 6: 31.2946, 7: 31.2932, 8: 31.294, 9: 31.2946,
10: 31.2952},
'FFLT_LONGITUDE': {0: 121.4845, 1: 121.4859, 2: 121.4857, 3: 121.485, 4: 121.4886,
5: 121.4937, 6: 121.494, 7: 121.496, 8: 121.4966, 9: 121.4974,
10: 121.4986},
'FINT_STAT': {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 1, 6: 0, 7: 1, 8: 1, 9: 1,
10: 0},
'FSTR_ID': {0: 112609, 1: 112609, 2: 112609, 3: 112609, 4: 112609, 5: 112609,
6: 112609, 7: 112609, 8: 112609, 9: 112609, 10: 112609}})
# Transforming sequences of records with FINT_STAT == 1 to unique GROUP_ID values
df['GROUP_ID'] = df['FINT_STAT'].apply(np.logical_not).cumsum()
# Marking groups with FINT_STAT == 0 for removing
df['GROUP_ID'] *= df['FINT_STAT']
# Removing marked groups
df['GROUP_ID'] = df['GROUP_ID'].replace(0, np.NaN)
# Grouping by columns GROUP_ID and FSTR_ID
gb = df.groupby(['GROUP_ID', 'FSTR_ID'])
result = pd.DataFrame()
# Appending columns with values of minimal FDT_DATE for every group
result['MIN_FDT_DATE'] = gb['FDT_DATE'].min()
# Aggregating results by applying the lambda
# which return list of pairs of FFLT_LATITUDE and FFLT_LONGITUDE
result['COORDINATES'] = gb.apply(lambda group: [(row['FFLT_LATITUDE'], row['FFLT_LONGITUDE'])
for _, row in group.iterrows()])
# Widening line and max column width for printing
pd.set_option('display.line_width', 300)
pd.set_option('display.max_colwidth', 200)
# Looking at result
print (result)
</code></pre>
<p>Output:</p>
<pre><code> MIN_FDT_DATE COORDINATES
GROUP_ID FSTR_ID
2.0 112609 1417390548000 [(31.2944, 121.4857), (31.294, 121.485), (31.2954, 121.4886), (31.2965, 121.4937)]
3.0 112609 1417390798000 [(31.2932, 121.496), (31.294, 121.4966), (31.2946, 121.4974)]
</code></pre>
| 2 | 2016-08-13T21:11:17Z | [
"python",
"pandas",
"dataframe",
"preprocessor",
"data-cleaning"
] |
Using pyCrypto to hash results in a TypeError | 38,935,777 | <p>I'm trying to use pycrypto for python 3.5.1 on win10<br>
using this simple piece of code to hash:</p>
<pre><code>from Crypto.Hash import SHA256
SHA256.new('abc').hexdigest()
</code></pre>
<p>resulting this error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "E:/Python/C.py", line 2, in <module>
SHA256.new('abc').hexdigest()
File "E:\Python\lib\site-packages\Crypto\Hash\SHA256.py", line 88, in new
return SHA256Hash().new(data)
File "E:\Python\lib\site-packages\Crypto\Hash\SHA256.py", line 75, in new
return SHA256Hash(data)
File "E:\Python\lib\site-packages\Crypto\Hash\SHA256.py", line 72, in __init__
HashAlgo.__init__(self, hashFactory, data)
File "E:\Python\lib\site-packages\Crypto\Hash\hashalgo.py", line 51, in __init__
self.update(data)
File "E:\Python\lib\site-packages\Crypto\Hash\hashalgo.py", line 69, in update
return self._hash.update(data)
TypeError: Unicode-objects must be encoded before hashing
</code></pre>
<p>Anyone know what the problem is?</p>
| 0 | 2016-08-13T18:31:05Z | 38,936,070 | <p>Use the <code>.encode()</code> function on your 'abc' string before running the hashing function. </p>
<p>For example, if you wish to use <em>Unicode</em> encoding:</p>
<pre><code>'abc'.encode('utf-8')
</code></pre>
| 0 | 2016-08-13T19:07:18Z | [
"python",
"hash",
"encoding",
"pycrypto"
] |
Using pyCrypto to hash results in a TypeError | 38,935,777 | <p>I'm trying to use pycrypto for python 3.5.1 on win10<br>
using this simple piece of code to hash:</p>
<pre><code>from Crypto.Hash import SHA256
SHA256.new('abc').hexdigest()
</code></pre>
<p>resulting this error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "E:/Python/C.py", line 2, in <module>
SHA256.new('abc').hexdigest()
File "E:\Python\lib\site-packages\Crypto\Hash\SHA256.py", line 88, in new
return SHA256Hash().new(data)
File "E:\Python\lib\site-packages\Crypto\Hash\SHA256.py", line 75, in new
return SHA256Hash(data)
File "E:\Python\lib\site-packages\Crypto\Hash\SHA256.py", line 72, in __init__
HashAlgo.__init__(self, hashFactory, data)
File "E:\Python\lib\site-packages\Crypto\Hash\hashalgo.py", line 51, in __init__
self.update(data)
File "E:\Python\lib\site-packages\Crypto\Hash\hashalgo.py", line 69, in update
return self._hash.update(data)
TypeError: Unicode-objects must be encoded before hashing
</code></pre>
<p>Anyone know what the problem is?</p>
| 0 | 2016-08-13T18:31:05Z | 38,936,506 | <blockquote>
<p>TypeError: Unicode-objects must be encoded before hashing</p>
</blockquote>
<p>means you should do something like this:</p>
<pre><code>from Crypto.Hash import SHA256
print(SHA256.new('abc'.encode('utf-8')).hexdigest())
</code></pre>
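<p>The same rule applies to the standard library's <code>hashlib</code> in Python 3: the hash functions take bytes, so the string must be encoded first. A quick sketch (the value below is the well-known SHA-256 test vector for "abc"):</p>

```python
import hashlib

# Encoding first turns the str into bytes, which the hash function accepts.
digest = hashlib.sha256('abc'.encode('utf-8')).hexdigest()
print(digest)
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```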
| 0 | 2016-08-13T20:02:16Z | [
"python",
"hash",
"encoding",
"pycrypto"
] |
Using Python 3 when Python 2.7 is default, pip install error | 38,935,801 | <p>I originally had both python 2.7 and python 3 installed with python 2.7 as my default in PATH. I needed to run a script using python 3 but could not set it as my default python in PATH for some reason. After just uninstalling python 2.7 I opened 3 and ran the command python get-pip.py install which gave me this error</p>
<pre><code>C:\Python30>python get-pip.py install
Traceback (most recent call last):
File "c:\users\jacob\appdata\local\temp\tmplqxvtx\pip.zip\pip\compat\__init__.py", line 16, in <module>
ImportError: cannot import name OrderedDict
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "get-pip.py", line 19177, in <module>
main()
File "get-pip.py", line 194, in main
bootstrap(tmpdir=tmpdir)
File "get-pip.py", line 82, in bootstrap
import pip
File "c:\users\jacob\appdata\local\temp\tmplqxvtx\pip.zip\pip\__init__.py", line 14, in <module>
File "c:\users\jacob\appdata\local\temp\tmplqxvtx\pip.zip\pip\utils\__init__.py", line 22, in <module>
File "c:\users\jacob\appdata\local\temp\tmplqxvtx\pip.zip\pip\compat\__init__.py", line 18, in <module>
File "c:\users\jacob\appdata\local\temp\tmplqxvtx\pip.zip\pip\compat\ordereddict.py", line 25, in <module>
ImportError: No module named UserDict
</code></pre>
<p>I already tried <code>python -m pip install</code> (module name) but it returns <code>No module named pip</code></p>
| 2 | 2016-08-13T18:34:48Z | 39,140,216 | <p>I assume you are using Windows, so here is an answer on how to install pip on Windows:</p>
<p><a href="http://stackoverflow.com/questions/4750806/how-do-i-install-pip-on-windows">how-do-i-install-pip-on-windows</a></p>
<p>On Linux you could just type:</p>
<pre><code>sudo apt install python-pip
</code></pre>
| 0 | 2016-08-25T08:25:37Z | [
"python",
"python-3.x",
"pip"
] |
I am trying to run a program that will tell you how many times a letter is repeated in a string based on raw_input | 38,935,845 | <pre><code>dict = {}
raw_input('Please enter a string :')
letter = raw_input()
for letter in raw_input:
if letter not in dict.keys():
dict[letter] = 1
else:
dict[letter] += 1
print dict
</code></pre>
<p>My error: </p>
<pre><code>line 9, in <module>
TypeError: 'builtin_function_or_method' object is not iterable
</code></pre>
| 1 | 2016-08-13T18:39:28Z | 38,944,620 | <p>You are receiving this error because you are attempting to iterate through <code>raw_input</code>: <code>for letter in raw_input:</code>. </p>
<p>However, in Python, only objects that implement the iteration protocol (for example, via an <code>__iter__()</code> method) are iterable, and <code>raw_input</code> doesn't (it's a builtin function, not a sequence). You can look up the type of an object using <code>type()</code> and a list of an object's methods using <code>dir()</code>:</p>
<pre><code>>>> print type(raw_input)
<class 'builtin_function_or_method'>
>>> print dir(raw_input)
['__call__', '__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__lt__', '__module__', '__name__', '__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__self__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__text_signature__']
</code></pre>
<p>You want to iterate through the <em>result</em> of <code>raw_input()</code> instead. Here are some ways that your code should be improved:</p>
<ul>
<li>Either assign the result of <code>raw_input()</code> to a variable (for example: <code>letters = raw_input('Please enter a string :')</code>) or iterate through <code>raw_input('Please enter a string :')</code> directly.</li>
<li>Use a variable name other than <code>dict</code> for your dictionary. <code>dict</code> already exists as a builtin itself.</li>
<li>Use proper indentation in your <code>for</code> and conditional <code>if</code> blocks.</li>
</ul>
<p>End result:</p>
<pre><code>dic = {}
letters = raw_input('Please enter a string :')
for letter in letters:
if letter not in dic.keys():
dic[letter] = 1
else:
dic[letter] += 1
print dic
# output:
# Please enter a string :success
# {'e': 1, 's': 3, 'c': 2, 'u': 1}
</code></pre>
| 1 | 2016-08-14T17:15:01Z | [
"python",
"python-2.7",
"raw-input"
] |
I am trying to run a program that will tell you how many times a letter is repeated in a string based on raw_input | 38,935,845 | <pre><code>dict = {}
raw_input('Please enter a string :')
letter = raw_input()
for letter in raw_input:
if letter not in dict.keys():
dict[letter] = 1
else:
dict[letter] += 1
print dict
</code></pre>
<p>My error: </p>
<pre><code>line 9, in <module>
TypeError: 'builtin_function_or_method' object is not iterable
</code></pre>
| 1 | 2016-08-13T18:39:28Z | 38,945,736 | <p>It worked for me when I changed <code>for letter in raw_input:</code> to <code>for letter in raw_input():</code>. With the input "question", the output was "{'e': 1, 'i': 1, 'o': 1, 'n': 1, 'q': 1, 's': 1, 'u': 1, 't': 1}". You can also simplify the body of the <code>for</code> loop to <code>dict[letter] = dict.get(letter,0) + 1</code> if you like.</p>
| 1 | 2016-08-14T19:16:07Z | [
"python",
"python-2.7",
"raw-input"
] |
I am trying to run a program that will tell you how many times a letter is repeated in a string based on raw_input | 38,935,845 | <pre><code>dict = {}
raw_input('Please enter a string :')
letter = raw_input()
for letter in raw_input:
if letter not in dict.keys():
dict[letter] = 1
else:
dict[letter] += 1
print dict
</code></pre>
<p>My error: </p>
<pre><code>line 9, in <module>
TypeError: 'builtin_function_or_method' object is not iterable
</code></pre>
| 1 | 2016-08-13T18:39:28Z | 38,946,684 | <p>By far the easiest way to do this is to use the <code>Counter</code> class in the built-in <code>collections</code> module:</p>
<pre><code>from collections import Counter
print Counter(raw_input('Please enter a string: '))
</code></pre>
<p>A <code>Counter</code> can be accessed just like a dictionary:</p>
<pre><code>>>> a = Counter('spam spam spam')
>>> print a
Counter({'a': 3, 'p': 3, 's': 3, 'm': 3, ' ': 2})
>>> print a['s']
3
</code></pre>
| 0 | 2016-08-14T21:13:34Z | [
"python",
"python-2.7",
"raw-input"
] |
rmvirtualenv <name> does not remove a env | 38,935,850 | <p>I am a beginner in Python.</p>
<p>I am trying to remove a virtualenv which I have created before by using this command under C:>virtualenvs</p>
<pre><code>rmvirtualenv test
</code></pre>
<p>and then I got the following</p>
<p>virtualenv "test" does not exist.</p>
<p>I still can see the directory listed under virtualenvs. I am on windows platform and have already installed virtualwrapper-win.</p>
<p>Please suggest how can I get rid of that env using the above command.</p>
| 0 | 2016-08-13T18:40:11Z | 38,936,179 | <p>I have sorted out this issue.</p>
<p>Considering you have already installed virtualenvwrapper-win (if on Windows)</p>
<p>The steps are -- </p>
<pre><code>1. Go to the directory 'virtualenvs' (where all the envs are listed)
2. Make an env by using this command -- mkvirtualenv test (where test is the env name in my case)
3. Use this command -- workon test
4. Leave the env with this command -- deactivate
5. To remove test use this command -- rmvirtualenv test
</code></pre>
<p>That's it.</p>
| 0 | 2016-08-13T19:19:58Z | [
"python",
"virtualenv",
"virtualenvwrapper"
] |
Convert python pandas rows to columns | 38,935,862 | <pre><code> Decade difference (kg) Version
0 1510 - 1500 -0.346051 v1.0h
1 1510 - 1500 -3.553251 A2011
2 1520 - 1510 -0.356409 v1.0h
3 1520 - 1510 -2.797978 A2011
4 1530 - 1520 -0.358922 v1.0h
</code></pre>
<p>I want to transform the pandas dataframe such that the 2 unique entries in the <code>Version</code> column are transferred to become columns. How do I do that?</p>
<p>The resulting dataframe should not have a multiindex</p>
| 0 | 2016-08-13T18:41:10Z | 38,935,934 | <pre><code>In [28]: df.pivot(index='Decade', columns='Version', values='difference (kg)')
Out[28]:
Version A2011 v1.0h
Decade
1510 - 1500 -3.553251 -0.346051
1520 - 1510 -2.797978 -0.356409
1530 - 1520 NaN -0.358922
</code></pre>
<p>or</p>
<pre><code>In [31]: df.pivot(index='difference (kg)', columns='Version', values='Decade')
Out[31]:
Version A2011 v1.0h
difference (kg)
-3.553251 1510 - 1500 None
-2.797978 1520 - 1510 None
-0.358922 None 1530 - 1520
-0.356409 None 1520 - 1510
-0.346051 None 1510 - 1500
</code></pre>
<p>both satisfy your requirements. </p>
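<p>Since the question asks for a result without a MultiIndex, note that after pivoting you can flatten the frame back into ordinary columns with <code>reset_index()</code>. A sketch (assuming a recent pandas, with the column names from the question):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Decade": ["1510 - 1500", "1510 - 1500", "1520 - 1510", "1520 - 1510"],
    "difference (kg)": [-0.346051, -3.553251, -0.356409, -2.797978],
    "Version": ["v1.0h", "A2011", "v1.0h", "A2011"],
})

wide = (df.pivot(index="Decade", columns="Version", values="difference (kg)")
          .reset_index()                # turn the Decade index back into a column
          .rename_axis(columns=None))   # drop the leftover "Version" axis label
print(wide.columns.tolist())
```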
| 1 | 2016-08-13T18:52:35Z | [
"python",
"pandas"
] |
Running python code from ruby with variables | 38,935,919 | <p>I am running python scripts from ruby, but I have to pass parameters to the python script that come from ruby. The parameters that should be passed are some variables I have in that particular instance. Here is my code that illustrates the scenario.</p>
<pre><code> #test.rb
result = `python ml.py '{"a":1, "b":23, "c":2334,"d":2244}'`
</code></pre>
<p>The code above works perfectly. But the params are generated from some manipulation, and I need to pass them to the python script. Sending the params is the issue for me. Here is the code I implemented.</p>
<pre><code> #testvar.rb
params = manipulate_data # This gives result in json format
#result `python ml.py 'params'` # Not working
</code></pre>
<p>I tried other approaches as well, but they are not working for me.</p>
| 0 | 2016-08-13T18:50:56Z | 38,936,005 | <p>Try this:</p>
<pre><code>$ cat d.py
import json
import sys
from collections import OrderedDict
print json.loads(sys.argv[1], object_pairs_hook=OrderedDict)
$ cat a.rb
require 'json'
params = {"a":1, "b":23, "c":2334,"d":2244}.to_json
result = `python d.py '#{params}'`
puts result
$ ruby a.rb
OrderedDict([(u'a', 1), (u'b', 23), (u'c', 2334), (u'd', 2244)])
</code></pre>
| 0 | 2016-08-13T18:59:45Z | [
"python",
"json",
"ruby",
"python-3.x"
] |
Why does this passlib hashing script create a new result every time I run the script? | 38,935,992 | <p>I'm using the following <a href="http://pythonhosted.org/passlib/#welcome" rel="nofollow">script from the passlib docs</a> to hash a password:</p>
<pre><code># import the hash algorithm
from passlib.hash import sha256_crypt
# generate new salt, and hash a password
hash = sha256_crypt.encrypt("toomanysecrets")
print hash # <== WHY IS THIS ALWAYS A DIFFERENT STRING?
# verifying the password
print sha256_crypt.verify("toomanysecrets", hash) # Outputs "True"
print sha256_crypt.verify("joshua", hash) # Outputs "False"
</code></pre>
<p>It seems odd that <code>sha256_crypt.verify</code> would be able to verify multiple different hashes as "toomanysecrets" - why isn't there just one hash for this password?</p>
| 0 | 2016-08-13T18:58:58Z | 38,936,197 | <p>Hash result depends on the input and <a href="https://en.wikipedia.org/wiki/Salt_%28cryptography%29" rel="nofollow">salt</a>. Where the salt - it is randomly generated value which is included in the output string together with result of hashing. That is why for each call sha256_crypt.encrypt output string looks random, but password verification ability is preserved.</p>
| 2 | 2016-08-13T19:21:59Z | [
"python",
"hash",
"passwords",
"salt",
"passlib"
] |
Open multiple files using path = open() feature | 38,936,000 | <p>I have been using path = open(Testfilename) on a few of my beginning python scripts while I was learning about arrays. Now that I am working on dictionaries, I want to see if I can open up multiple files using this feature. </p>
<pre><code>fav_cars = {}
for c in cars: #I have a few separate CSV files with different cars
path = open()
</code></pre>
<p>I essentially want to loop around the different cars that user puts in. As mentioned above, I know how to open a single file, but I am trying to learn to open up multiple files. I tried doing:</p>
<pre><code>path = open('F-150.csv', 'Silverado.csv', 'Mustang.csv', 'Tesla.csv', 'r'),
</code></pre>
<p>but that did not work as I got an error code <code>TypeError: an integer is required</code> </p>
<p>UPDATE:</p>
<p>The car files are made of one column with the header called "colors". There are 6 rows for six colors: red, blue, yellow, black, white, green</p>
<pre><code>cars = ['F-150','Silverado','Mustang','Tesla','Juke','Corolla']
fav_cars = {}
for c in cars:
path = open () # wanting to open multiple files here
car_colors = {} #colors for each car in cars
for temp_dict in path:
if not temp_dict.startswith("#"): #to get rid of the header "colors in the file
if values in user_input: #value is the car color
car_colors.update({values})
fav_cars.update({c:car_colors})
</code></pre>
<p>I am only using the cars that the user inputs (<code>user_input</code>), which I collected using raw_input. Hopefully this helps.</p>
| 0 | 2016-08-13T18:59:25Z | 38,936,073 | <p>You should do it like this:</p>
<pre><code>fav_cars = {}
for c in cars:
with open(c, 'r') as f:
        fav_cars[c] = f.readlines()
</code></pre>
<p>The builtin <code>open</code> function expects only one file name as an argument; the second argument is the mode in which the file is opened.</p>
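<p>For the dictionary-of-collections from the question's update, a sketch along these lines may help. The file names and the "colors" header are assumptions based on the question, and the sample CSV files are written on the fly so the snippet is self-contained:</p>

```python
import os
import tempfile

cars = ['F-150', 'Silverado']
user_input = ['red', 'blue']  # colors the user asked for

# Create throwaway CSV files so the example runs anywhere.
tmpdir = tempfile.mkdtemp()
for c in cars:
    with open(os.path.join(tmpdir, c + '.csv'), 'w') as f:
        f.write('colors\nred\nblue\nyellow\n')

fav_cars = {}
for c in cars:
    # One open() call per file -- open() takes a single file name.
    with open(os.path.join(tmpdir, c + '.csv')) as f:
        colors = [line.strip() for line in f if not line.startswith('colors')]
    fav_cars[c] = [color for color in colors if color in user_input]

print(fav_cars)
# {'F-150': ['red', 'blue'], 'Silverado': ['red', 'blue']}
```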
| 0 | 2016-08-13T19:07:45Z | [
"python",
"python-2.7",
"loops",
"dictionary",
"path"
] |
Keras - How are batches and epochs used in fit_generator()? | 38,936,016 | <p>I have a video of 8000 frames, and I'd like to train a Keras model on batches of 200 frames each. I have a frame generator that loops through the video frame-by-frame and accumulates the (3 x 480 x 640) frames into a numpy matrix <code>X</code> of shape <code>(200, 3, 480, 640)</code> -- (batch size, rgb, frame height, frame width) -- and yields <code>X</code> and <code>Y</code> every 200th frame:</p>
<pre><code>import cv2
...
def _frameGenerator(videoPath, dataPath, batchSize):
"""
Yield X and Y data when the batch is filled.
"""
camera = cv2.VideoCapture(videoPath)
width = camera.get(3)
height = camera.get(4)
frameCount = int(camera.get(7)) # Number of frames in the video file.
truthData = _prepData(dataPath, frameCount)
X = np.zeros((batchSize, 3, height, width))
Y = np.zeros((batchSize, 1))
batch = 0
for frameIdx, truth in enumerate(truthData):
ret, frame = camera.read()
if ret is False: continue
batchIndex = frameIdx%batchSize
X[batchIndex] = frame
Y[batchIndex] = truth
if batchIndex == 0 and frameIdx != 0:
batch += 1
print "now yielding batch", batch
yield X, Y
</code></pre>
<p>Here's how I run <a href="https://keras.io/models/model/" rel="nofollow"><code>fit_generator()</code></a>:</p>
<pre><code> batchSize = 200
print "Starting training..."
model.fit_generator(
_frameGenerator(videoPath, dataPath, batchSize),
samples_per_epoch=8000,
nb_epoch=10,
verbose=args.verbosity
)
</code></pre>
<p>My understanding is an epoch finishes when <code>samples_per_epoch</code> samples have been seen by the model, and <code>samples_per_epoch</code> = batch size * number of batches = 200 * 40. So after training for an epoch on frames 0-7999, the next epoch will start training again from frame 0. Is this correct?</p>
<p>With this setup <strong>I expect 40 batches (of 200 frames each) to be passed from the generator to <code>fit_generator</code>, per epoch; this would be 8000 total frames per epoch</strong> -- i.e., <code>samples_per_epoch=8000</code>. Then for subsequent epochs, <code>fit_generator</code> would reinitialize the generator such that we begin training again from the start of the video. Yet this is not the case. <strong>After the first epoch is complete (after the model logs batches 0-24), the generator picks up where it left off. Shouldn't the new epoch start again from the beginning of the training dataset?</strong></p>
<p>If there is something incorrect in my understanding of <code>fit_generator</code> please explain. I've gone through the documentation, this <a href="https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py" rel="nofollow">example</a>, and these <a href="https://github.com/fchollet/keras/issues/1627" rel="nofollow">related</a> <a href="https://github.com/fchollet/keras/issues/107" rel="nofollow">issues</a>. I'm using Keras v1.0.7 with the TensorFlow backend. This issue is also posted in the <a href="https://github.com/fchollet/keras/issues/3461" rel="nofollow">Keras repo</a>.</p>
| 3 | 2016-08-13T19:01:09Z | 38,938,237 | <blockquote>
<p>After the first epoch is complete (after the model logs batches 0-24), the generator picks up where it left off</p>
</blockquote>
<p>This is an accurate description of what happens. If you want to reset or rewind the generator, you'll have to do this internally. Note that keras's behavior is quite useful in many situations. For example, you can end an epoch after seeing 1/2 the data then do an epoch on the other half, which would be impossible if the generator status was reset (which can be useful for monitoring the validation more closely).</p>
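<p>In practice this means a generator handed to <code>fit_generator</code> is usually written as an endless loop that rewinds its own data source whenever a pass completes — Keras just keeps pulling batches from it. A minimal sketch of the pattern (plain lists stand in for the cv2 video capture; in the question's code the camera would be re-opened at the top of the outer loop):</p>

```python
def batch_generator(samples, batch_size):
    # The outer loop never terminates: after one full pass over the
    # data, we simply start again from the beginning.
    while True:
        for start in range(0, len(samples), batch_size):
            yield samples[start:start + batch_size]

gen = batch_generator(list(range(8)), batch_size=4)
print(next(gen))  # [0, 1, 2, 3]
print(next(gen))  # [4, 5, 6, 7]
print(next(gen))  # [0, 1, 2, 3]  <- rewound for the next epoch
```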
| 1 | 2016-08-14T00:48:19Z | [
"python",
"tensorflow",
"generator",
"keras"
] |
Count lines in file, float numbers in chosen lines, add them and compute average | 38,936,065 | <blockquote>
<p>Write a program that prompts for a file name, then opens that file and
reads through the file, looking for lines of the form:</p>
<p><code>X-DSPAM-Confidence: 0.8475</code></p>
<p>Count these lines and extract the floating point values from each of
the lines and compute the average of those values and produce an
output as shown below. Do not use the <code>sum()</code> function or a variable
named <code>sum</code> in your solution.</p>
<p>You can download the sample data at
<a href="http://www.pythonlearn.com/code/mbox-short.txt" rel="nofollow">http://www.pythonlearn.com/code/mbox-short.txt</a><br>
When you are testing below enter <code>mbox-short.txt</code> as the file name.</p>
</blockquote>
<p>That's my code, which is not working. If you know how to fix it, please explain (as simple as possible):</p>
<pre><code># Use the file name mbox-short.txt as the file name
fname = raw_input("Enter file name: ")
fh = open(fname)
count = 0
total = 0
for line in fh:
if not float(line.startswith("X-DSPAM-Confidence:")) : continue
count = count + 1
float(total) = float(total) + float(line)
float(average = total/count)
print "Average spam confidence: ", average
</code></pre>
<p>Correct answer should be: <code>Average spam confidence: 0.750718518519</code></p>
| -1 | 2016-08-13T19:06:58Z | 38,936,176 | <p>You're not doing anything in your code to actually extract the float value from the line. Simply casting the <code>line</code> as a float and trying to add it to total wont work, because <code>line</code> is referring to the entire line from the file. An easy way to extract the float value would be to split the line by the <code>:</code> and then take everything after it. That can be done like this: <code>floatnum = line.split(':')[1]</code> The <code>[1]</code> means that it will take everything after the delimiter that we used to split the line, which in this case was <code>:</code>.</p>
<p>You have some other errors in your code, so if you just want to take how I extracted the float and apply that to the method you're using and adjust your code accordingly then that would be a good idea. </p>
<p>Here's a working example for you though that will get done what you need:</p>
<pre><code>fname = raw_input('Enter file name: ')
file = open(fname)
counter = 0
total = 0.0
for line in file:
if 'X-DSPAM-Confidence:' in line: # checks to see if line pertains to you
counter += 1 # if so, increment counter
floatnum = line.split(':')[1] # splits line at ':' and takes everything after it
total += float(floatnum) # ... and adjust total
average = total/counter # gets average
print 'Average spam confidence: ' + str(average)
</code></pre>
<p>I used this file as input, which contains:</p>
<pre><code>hello world
X-DSPAM-Confidence: 0.8475
hello world
X-DSPAM-Confidence: 0.8400
hello world
X-DSPAM-Confidence: 0.9475
hello world
</code></pre>
<p>Result:
<code>Average spam confidence: 0.878333333333</code></p>
| 1 | 2016-08-13T19:19:43Z | [
"python"
] |
Spark Exception: Python in worker has different version 3.4 than that in driver 3.5 | 38,936,150 | <p>I am using Amazon EC2, and I have my master and development servers as one. And I have another instance for a single worker.</p>
<p>I am new to this, but I have managed to make Spark work in standalone mode. Now I am trying cluster mode. The master and worker are active (I can see the web UI for them and they are functioning).</p>
<p>I have Spark 2.0, and I have installed the latest Anaconda 4.1.1, which comes with Python 3.5.2. On both the worker and the master, if I go to pyspark and check <code>sys.version_info</code>, I get 3.5.2. I have also set all the environment variables correctly (as seen in other posts on Stack Overflow and Google), e.g., <code>PYSPARK_PYTHON</code>.</p>
<p>There is no 3.4 version of python anywhere anyways. So I am wondering how I can fix this. </p>
<p>I get the error by running this command:</p>
<pre><code>rdd = sc.parallelize([1,2,3])
rdd.count()
</code></pre>
<p>error happens for the count() method:</p>
<pre><code>16/08/13 18:44:31 ERROR Executor: Exception in task 1.0 in stage 2.0 (TID 17)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/spark/python/lib/pyspark.zip/pyspark/worker.py", line 123, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.4 than that in driver 3.5, PySpark cannot run with different minor versions
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/08/13 18:44:31 ERROR Executor: Exception in task 1.1 in stage 2.0 (TID 18)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/spark/python/lib/pyspark.zip/pyspark/worker.py", line 123, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.4 than that in driver 3.5, PySpark cannot run with different minor versions
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
</code></pre>
| 0 | 2016-08-13T19:16:54Z | 38,936,300 | <p>Since you already use Anaconda you can simply create an environment with the desired Python version:</p>
<pre><code>conda create --name foo python=3.4
source activate foo
python --version
## Python 3.4.5 :: Continuum Analytics, Inc
</code></pre>
<p>and use it as <code>PYSPARK_DRIVER_PYTHON</code>:</p>
<pre><code>export PYSPARK_DRIVER_PYTHON=/path/to/anaconda/envs/foo/bin/python
</code></pre>
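<p>Since the mismatch reported by Spark is between the workers and the driver, it can also help to point both interpreter variables at the same environment. The paths below are placeholders — adjust them to wherever your Anaconda envs live:</p>

```shell
# Driver and executors should resolve to the same Python interpreter.
export PYSPARK_PYTHON=/path/to/anaconda/envs/foo/bin/python
export PYSPARK_DRIVER_PYTHON=/path/to/anaconda/envs/foo/bin/python
```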
| 2 | 2016-08-13T19:33:54Z | [
"python",
"apache-spark",
"version",
"cluster-computing"
] |
Multiple Models on a Single View/Template in Django | 38,936,153 | <p>Trying to incorporate 2 models into my "season" view/template on my django site. Currently, I get the following as a ValueError "The view webapp.views.season didn't return an HttpResponse object. It returned None instead." Not sure what I am doing wrong, but hoping someone can take a look. </p>
<p>views.py</p>
<pre><code>from django.shortcuts import render, get_object_or_404, redirect
from django.views.generic import ListView
from .models import Player, Season
def home(request):
seasons = Season.objects.order_by('sid')
return render(request, 'webapp/home.html', {'seasons': seasons})
def player(request, pk):
player = get_object_or_404(Player, pk=pk)
return render(request, 'webapp/player.html', {'player': player})
def season(ListView, pk):
model = Season
template_name = 'webapp/season.html'
def get_context_data(self, **kwargs):
context = super(season, self).get_context_data(**kwargs)
context['players'] = Player.objects.all()
return context
def seasons(request):
seasons = Season.objects.order_by('sid')
return render(request, 'webapp/seasons.html', {'seasons': seasons})
</code></pre>
<p>urls.py</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.home, name='home'),
url(r'^player/(?P<pk>\d+)/$', views.player, name='player'),
url(r'^season/(?P<pk>\d+)/$', views.season, name='season'),
url(r'^seasons/$', views.seasons, name='seasons'),
]
</code></pre>
<p>It should be noted that I originally had a TypeError that said "season() got an unexpected keyword argument 'pk'" before I added the pk to the season argument. Any help is greatly appreciated! Thanks!</p>
| 0 | 2016-08-13T19:17:12Z | 38,937,144 | <p>The views:<code>home</code>, <code>player</code> and <code>seasons</code> are <a href="https://docs.djangoproject.com/en/1.10/topics/http/views/" rel="nofollow">Function Based Views</a>, this is the old Django Views style. On the other hand, <code>ListView</code> is a <a href="https://docs.djangoproject.com/en/1.10/topics/class-based-views/" rel="nofollow">Class Based View</a>, a newer way to write views in Django. You are mixing both kind of views and that's a bad idea. No idea what your <code>season</code> view should do, but try something like:</p>
<pre><code>def season(request, pk):
season = get_object_or_404(Season, pk=pk)
return render(
request,
'webapp/season.html',
{'season': season, 'players': Player.objects.all()}
)
</code></pre>
| 1 | 2016-08-13T21:26:20Z | [
"python",
"django"
] |
Multiple Models on a Single View/Template in Django | 38,936,153 | <p>Trying to incorporate 2 models into my "season" view/template on my django site. Currently, I get the following as a ValueError "The view webapp.views.season didn't return an HttpResponse object. It returned None instead." Not sure what I am doing wrong, but hoping someone can take a look. </p>
<p>views.py</p>
<pre><code>from django.shortcuts import render, get_object_or_404, redirect
from django.views.generic import ListView
from .models import Player, Season
def home(request):
seasons = Season.objects.order_by('sid')
return render(request, 'webapp/home.html', {'seasons': seasons})
def player(request, pk):
player = get_object_or_404(Player, pk=pk)
return render(request, 'webapp/player.html', {'player': player})
def season(ListView, pk):
model = Season
template_name = 'webapp/season.html'
def get_context_data(self, **kwargs):
context = super(season, self).get_context_data(**kwargs)
context['players'] = Player.objects.all()
return context
def seasons(request):
seasons = Season.objects.order_by('sid')
return render(request, 'webapp/seasons.html', {'seasons': seasons})
</code></pre>
<p>urls.py</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.home, name='home'),
url(r'^player/(?P<pk>\d+)/$', views.player, name='player'),
url(r'^season/(?P<pk>\d+)/$', views.season, name='season'),
url(r'^seasons/$', views.seasons, name='seasons'),
]
</code></pre>
<p>It should be noted that I originally had a TypeError that said "season() got an unexpected keyword argument 'pk'" before I added the pk to the season argument. Any help is greatly appreciated! Thanks!</p>
| 0 | 2016-08-13T19:17:12Z | 38,937,791 | <p>In the url you need to call the class based view like, views.season.as_view()</p>
| 0 | 2016-08-13T23:19:43Z | [
"python",
"django"
] |
Why is Numpy's LinAlg Norm function named the way it is? | 38,936,171 | <p>I have been following some online tutorials that introduce Linear Algebra through Python and have come across the section that talks about a vector's magnitude and normalization. The linear algebra concept seems to state that:</p>
<p><strong>Magnitude of Vector</strong> = Square all of the elements of a vector, add them together, and take the square root.</p>
<p><strong>Normalization of Vector</strong> = Divide the vector by the magnitude (or multiply by 1/magnitude).</p>
<p>Great! I attempted to Google for how to find the magnitude of a vector through Numpy's library functions and found that to find the magnitude I am using the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html" rel="nofollow">numpy.linalg.norm()</a> function. This seemed odd to me at first, but after digging deeper it seems that by default the function finds the Frobenius norm which is essentially finding the magnitude like I did above. </p>
<p>However, I am still unsure why the "normalization" function does not have the option of giving me the normalization of the vector and instead opts to have itself be a "magnitude" function by default. Is there a specific reason why there was not a standalone magnitude function made and instead it resides inside of the norm function? I assume there must be a good reason, but it just seems confusing to me :).</p>
| 0 | 2016-08-13T19:18:54Z | 38,936,420 | <p>As you already can see in the docs of <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html" rel="nofollow">numpy.linalg.norm()</a>, there is a parameter called ord, which is the order of the norm, by default such parameter is None and if you read the docs that means it will be calculating the Frobenius norm, also known as Euclidean distance.</p>
<p>The Frobenius norm is just a <strong>particular case</strong> of the more general concept of <a href="http://mathworld.wolfram.com/VectorNorm.html" rel="nofollow">vector norm</a>, therefore the numpy's function name norm makes sense by definition.</p>
| 2 | 2016-08-13T19:51:00Z | [
"python",
"numpy",
"linear-algebra"
] |
etree & xpath return entire html instead of text | 38,936,185 | <p>I've been working on this for quite a while and tried all kinds of namespace solutions. However, my current script is not printing the desired strings but the entire html dump. Does anyone know how to get around that issue?</p>
<pre class="lang-py prettyprint-override"><code>from lxml.html import parse
from lxml import etree
import requests
r = requests.get('https://berlin.kauperts.de/Strassen/Aachener-Strasse-10713-Berlin.html')
tree = etree.parse(r.text)
NSMAP = {'mw':'http://www.w3.org/1999/xhtml/'}
Name2 = tree.xpath('//{http://www.w3.org/1999/xhtml}html/body/div[7]/div/div/div/table/tbody/tr/td[2]/a')
Name3 = tree.find("//html/body/div[7]/div/div/div/table/tbody/tr/td[2]/a")
print(Name2, Name3)
</code></pre>
| 1 | 2016-08-13T19:21:00Z | 38,936,269 | <p>Namespaces are inherited. If a document is XHTML, then all the nodes in the document are in the XHTML namespace by default.</p>
<p>That means you must use that namespace in every step of your XPath expression. Using it on the first step (<code>html</code>) is not enough.</p>
<p><code>nsmap</code> can help you with keeping the code manageable, but you have to use it, too.</p>
<pre><code>import requests
from lxml import etree

r = requests.get('https://berlin.kauperts.de/Strassen/Aachener-Strasse-10713-Berlin.html')
# fromstring() parses the downloaded bytes; parse() expects a file name or file object
tree = etree.fromstring(r.content)

# note: the XHTML namespace has no trailing slash
nsmap = {'x': 'http://www.w3.org/1999/xhtml'}

path = '//x:body/x:div[7]/x:div/x:div/x:div/x:table/x:tbody/x:tr/x:td[2]/x:a'
name = tree.findall(path, nsmap)
</code></pre>
<p>The above is unwieldy and brittle. Try to create a simpler expression.</p>
<p>Rule: Never use XPath that was automatically generated. Manually create the "least specific" expression (i.e. the one least dependent on irrelevant document structure, like <code>div</code> nesting levels or positions) that still matches exactly what you need. Maybe along the lines of this:</p>
<pre><code>name = tree.findall('//x:table[@class="foo"]//x:td[2]/x:a', nsmap)
</code></pre>
</code></pre>
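<p>As an aside: if the server delivers ordinary HTML rather than well-formed XHTML, parsing with <code>lxml.html</code> sidesteps namespaces entirely, because the HTML parser does not put elements into the XHTML namespace. A minimal sketch, using an inline snippet standing in for the fetched page (the real code would pass <code>requests.get(url).content</code> to <code>html.fromstring</code>):</p>

```python
from lxml import html

# stand-in for the downloaded page
snippet = b"""<html><body><table>
  <tr><td>Ortsteil</td><td><a href="/ortsteil">Wilmersdorf</a></td></tr>
</table></body></html>"""

doc = html.fromstring(snippet)
# plain XPath works: no x: prefixes, no nsmap needed
names = [a.text for a in doc.xpath('//table//td[2]/a')]
print(names)  # ['Wilmersdorf']
```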
| 1 | 2016-08-13T19:30:17Z | [
"python",
"parsing",
"xpath",
"lxml"
] |
Pandas Dataframe to SQLite - AttributeError: 'DataFrame' object has no attribute 'encode' | 38,936,186 | <p>I have a Pandas data frame that I want to export to a SQLite db. The db has a datetime index and I suspect that might be an issue. </p>
<p>When I run this part of the code:</p>
<pre><code>con = sqlite3.connect("pat_rec.db")
dfMid.to_sql(dfMid,con=con, flavor='sqlite', if_exists='replace')
</code></pre>
<p>I get this error:</p>
<pre><code>AttributeError: 'DataFrame' object has no attribute 'encode'
</code></pre>
| 1 | 2016-08-13T19:21:04Z | 38,936,235 | <p>The first argument of <code>to_sql</code> is the name of the SQL table. You're passing the data frame object again. If you want to use <code>dfMid</code> as the table name, you have to put it in quotes.</p>
<p>Thus, your code would become:</p>
<pre><code>dfMid.to_sql('dfMid',con=con, flavor='sqlite', if_exists='replace')
</code></pre>
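<p>A minimal end-to-end sketch with a datetime index, against an in-memory database (the <code>flavor</code> argument is omitted: it defaulted to <code>'sqlite'</code> and was later removed from pandas):</p>

```python
import sqlite3
import pandas as pd

df = pd.DataFrame({'value': [1, 2, 3]},
                  index=pd.date_range('2016-01-01', periods=3, name='date'))

con = sqlite3.connect(':memory:')
df.to_sql('readings', con=con, if_exists='replace')  # table name as a string

count = con.execute('SELECT COUNT(*) FROM readings').fetchone()[0]
print(count)  # 3
```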
| 0 | 2016-08-13T19:26:29Z | [
"python",
"dataframe",
"sqlite3"
] |
ImportError: No module named fcntl under windows | 38,936,259 | <p>I am trying to test a simple script to get WiFi information. I installed the python-wifi module to use it, but once I run the script I get this error message:</p>
<pre><code>File "C:\Users\PC\Anaconda2\lib\site-packages\pythonwifi\iwlibs.py", line 28, in
    import fcntl
ImportError: No module named fcntl
</code></pre>
<p>Any ideas?
Thanks</p>
| 0 | 2016-08-13T19:29:18Z | 38,936,724 | <p>If you read the description from <a href="https://pypi.python.org/pypi/python-wifi" rel="nofollow">python-wifi on PyPI</a>, you'll see the listed Operating System is POSIX/Linux. That's why you're having problems in the first place: the code isn't portable and targets Linux only.</p>
<p>Therefore, either you make that library work on Windows by following the duplicate posts linked in the comments (which could take time), or you find something more suitable for Windows, like <a href="https://github.com/6e726d/PyWiWi" rel="nofollow">this one</a>.</p>
| 0 | 2016-08-13T20:30:10Z | [
"python",
"windows-10"
] |
Returning False to Break out of a loop | 38,936,287 | <p>I am trying to break out of this loop. I have returned False and added a break if the character in the word is not on the hand, which is a dictionary with integer values per letter, based on Scrabble letter scores. This is part of a larger game but this particular function checks to see if the word I've entered is valid.</p>
<p>Now the issue I'm having is if the word entered is not in the word list the input is rejected and the player is asked to enter another word. HOWEVER...if I enter a single letter for instance, the hand is still updated to have that letter removed, and is therefore not available when I retype a word using all three letters.</p>
<p>For example:</p>
<p>if my hand is r u t</p>
<p>and I enter u, my word will be invalid, but the hand now contains only r t. Therefore rut will no longer be available.</p>
<p>I have a pretty good grasp of the loops and return statements after the last 24 hours straight of coding, except I can't figure out how to structure this loop to avoid that issue.</p>
<p>Here is the code:</p>
<pre><code>def is_valid_word(word, hand, word_list):
"""
Returns True if word is in the word_list and is entirely
composed of letters in the hand. Otherwise, returns False.
Does not mutate hand or word_list.
word: string
hand: dictionary (string -> int)
word_list: list of lowercase strings
"""
remaining = hand
for c in word:
if c not in remaining.keys(): """Loop 1"""
return False
break
if c in remaining.keys(): """Loop 2"""
remaining [c] -= 1
if remaining [c] < 0: """Loop 3"""
return False
if word in word_list:
return True
else:
return False
</code></pre>
<p>However I structure the loops eventually a condition will fail. If I indent Loop 2 and 3 then they'll fail if the letter is not in the hand, and vice versa. Obviously a simple fix would be something like break(n loops) but that doesn't exist in Python. Any suggestions to make this happen? I've been working on it a long time as I'm new to programming.</p>
<p>Thanks in advance!</p>
| 0 | 2016-08-13T19:32:06Z | 38,936,436 | <p>There are several problems in your code; I have refactored it to make it clearer (and fixed it, which was the point :))</p>
<p><em>minor issues</em>:</p>
<ul>
<li>not optimal: first check if the word is in word_list (you'd better use a <code>set</code> rather than a list, which would be much faster), then test for available letters</li>
<li>several <code>return</code> statements, followed by <code>break</code>. Confusing.</li>
</ul>
<p><em>major issue</em>:</p>
<p>The main problem is that you assign <code>hand</code> to <code>remaining</code> and then change it, mutating your <code>hand</code> on every call to the function, which explains the behaviour you're experiencing.</p>
<p>You have to copy the dictionary, and then it will work. Python passes arguments by object reference, so if you pass a <code>list</code> or a <code>dict</code> and modify it inside the function, the caller's object stays modified, unless you make a copy of it.</p>
<p>There are no real problems with the break and return statements: my guess is that the function only seems broken when you call it several times in a row, because the first call has already destroyed your input data.</p>
<pre><code>def is_valid_word(word, hand, word_list):
"""
Returns True if word is in the word_list and is entirely
composed of letters in the hand. Otherwise, returns False.
Does not mutate hand or word_list.
word: string
hand: dictionary (string -> int)
word_list: list of lowercase strings
"""
# make a copy or hand is destroyed by your test
remaining = hand.copy()
rval = False
if word in word_list:
rval = True
# don't check if word is not in word_list (optim)
for c in word:
if c not in remaining: # no need to specify keys()
rval = False
break
remaining [c] -= 1
if remaining [c] < 0:
rval = False
break
return rval
</code></pre>
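<p>A quick demonstration that the hand survives a failed guess (a sketch with a made-up three-letter hand; the function body is a compacted version of the fixed code above):</p>

```python
def is_valid_word(word, hand, word_list):
    # work on a copy so the caller's hand is never mutated
    remaining = hand.copy()
    rval = word in word_list
    for c in word:
        if c not in remaining:
            rval = False
            break
        remaining[c] -= 1
        if remaining[c] < 0:
            rval = False
            break
    return rval

hand = {'r': 1, 'u': 1, 't': 1}
words = ['rut']

print(is_valid_word('u', hand, words))    # False: 'u' is not in the word list
print(is_valid_word('rut', hand, words))  # True: the hand was left intact
print(hand)                               # {'r': 1, 'u': 1, 't': 1}
```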
| 0 | 2016-08-13T19:52:59Z | [
"python",
"loops",
"return"
] |
Python: easy way to modify array in parallel | 38,936,296 | <p>The question might sound easy, but being new to parallelization in Python I am definitely struggling. I dealt with parallelization in OpenMP for C++ and that was a hell of a lot easier.
What I need to do is modify entries of a matrix in parallel. That's it. Thing is, I can't do it using the simple joblib library:</p>
<pre><code>from joblib import Parallel, delayed
my_array = [[ 1 ,2 ,3],[4,5,6]]
def foo(array,x):
for i in [0,1,2]:
array[x][i]=2
return 0
def main(array):
inputs = [0,1]
if __name__ == '__main__':
Parallel(n_jobs=2, verbose = 0)(delayed(foo)(array,i) for i in inputs)
main(my_array)
</code></pre>
<p>This code would work when the number of jobs is 1 (so the array after the call of main is all 2s), but it wouldn't when it actually becomes true multiprocessing (the array stays the same after the call of main).
The main problem I think is that I can't return anything useful with the Parallel function. I also tried to have the array in shared memory, but I couldn't find any information on how to do it with joblib.</p>
<p>Any suggestions?</p>
| 3 | 2016-08-13T19:33:47Z | 38,937,003 | <p>Generally there are two ways of sharing data:</p>
<ul>
<li>Shared Memory</li>
<li>Multiple processes (in your case)</li>
</ul>
<p>If you used the <code>threading</code> backend in Joblib, you wouldn't have any problem with your current code, but threads in Python (CPython) don't execute in parallel because of the GIL (Global Interpreter Lock), so that isn't true parallelism.</p>
<p>Using multiple processes, on the other hand, spawns a new interpreter in a new process. Each process still has its own GIL, but you can spawn several processes for a particular task, which means you have side-stepped the GIL and can execute more than one operation at the same time. The problem with this is that processes don't share the same memory: if they're working on a particular piece of data, they basically copy it and work on their own copy, not on some global copy that everybody writes to, as threads do (that is exactly why the GIL was put there: to prevent different threads from unexpectedly changing the state of a variable). It's possible to share memory (not exactly sharing; you have to pass the data around between the processes).</p>
<p>Using shared memory is in the Joblib docs, quoting the docs</p>
<blockquote>
<p>By default the workers of the pool are real Python processes forked
using the multiprocessing module of the Python standard library when
n_jobs != 1. The arguments passed as input to the Parallel call are
serialized and reallocated in the memory of each worker process.</p>
</blockquote>
<p>what this means is that by default if you don't specify that you want the memory to be shared among the number of processes you have, each process will take a copy of the list and perform the operation on it (when in reality you want them to work on the same list you passed).</p>
<p>With your previous code and some print statements added, you'll understand what's going on:</p>
<pre><code>from joblib import Parallel, delayed
my_array = [[ 1 ,2 ,3],[4,5,6]]
def foo(array,x):
for i in [0,1,2]:
array[x][i]=2
print(array, id(array), 'arrays in workers')
return 0
def main(array):
print(id(array), 'Original array')
inputs = [0,1]
if __name__ == '__main__':
Parallel(n_jobs=2, verbose = 0)(delayed(foo)(array,i) for i in inputs)
print(my_array, id(array), 'Original array')
main(my_array)
</code></pre>
<p>Running the code, we get:</p>
<pre><code>140464165602120 Original array
[[2, 2, 3], [4, 5, 6]] 140464163002888 arrays in workers
[[2, 2, 3], [4, 5, 6]] 140464163002888 arrays in workers
[[2, 2, 2], [4, 5, 6]] 140464163002888 arrays in workers
[[1, 2, 3], [2, 5, 6]] 140464163003208 arrays in workers
[[1, 2, 3], [2, 2, 6]] 140464163003208 arrays in workers
[[1, 2, 3], [2, 2, 2]] 140464163003208 arrays in workers
[[1, 2, 3], [4, 5, 6]] 140464165602120 Original array
</code></pre>
<p>From the output you can see that the operations are indeed carried out but on different lists (with different id's), so at the end of the day there is no way to merge those two lists together to get your desired result.</p>
<p>but when you specify that you want memory to be shared between the various processes, you'll see that the output is quite different, as the processes are all working on the same list</p>
<pre><code>from joblib import Parallel, delayed
from joblib.pool import has_shareable_memory
my_array = [[ 1 ,2 ,3],[4,5,6]]
def foo(array,x):
for i in [0,1,2]:
array[x][i]=2
print(array, id(array), 'arrays in workers')
return 0
def main(array):
print(id(array), 'Original array')
inputs = [0,1]
if __name__ == '__main__':
Parallel(n_jobs=2, verbose = 0)(delayed(has_shareable_memory)(foo(array,i)) for i in inputs)
print(my_array, id(array), 'Original array')
main(my_array)
</code></pre>
<p>Output</p>
<pre><code>140615324148552 Original array
[[2, 2, 3], [4, 5, 6]] 140615324148552 arrays in workers
[[2, 2, 3], [4, 5, 6]] 140615324148552 arrays in workers
[[2, 2, 2], [4, 5, 6]] 140615324148552 arrays in workers
[[2, 2, 2], [2, 5, 6]] 140615324148552 arrays in workers
[[2, 2, 2], [2, 2, 6]] 140615324148552 arrays in workers
[[2, 2, 2], [2, 2, 2]] 140615324148552 arrays in workers
[[2, 2, 2], [2, 2, 2]] 140615324148552 Original array
</code></pre>
<p>I'd suggest you experiment and play with the multiprocessing module proper, to really understand what's going on, before you start using Joblib</p>
<p><a href="https://docs.python.org/3.5/library/multiprocessing.html" rel="nofollow">https://docs.python.org/3.5/library/multiprocessing.html</a></p>
<p>A simple way to do this, using the multiprocessing module from the standard library would be</p>
<pre><code>from multiprocessing import Pool
def foo(x):
return [2 for i in x]
my_array = [[1 ,2 ,3],[4,5,6]]
if __name__ == '__main__':
with Pool(2) as p:
print(p.map(foo, my_array))
</code></pre>
<p>But note that this doesn't modify <code>my_array</code>; it returns a new list</p>
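<p>If the goal really is in-place modification visible across processes, the standard library's <code>multiprocessing.Array</code> is one option. A sketch, assuming the 2x3 matrix is stored as a flat, row-major shared buffer:</p>

```python
from multiprocessing import Process, Array

def fill_row(shared, row, ncols):
    # write 2 into every column of the given row, in place
    for i in range(ncols):
        shared[row * ncols + i] = 2

if __name__ == '__main__':
    nrows, ncols = 2, 3
    shared = Array('i', [1, 2, 3, 4, 5, 6])  # shared, flat, row-major
    procs = [Process(target=fill_row, args=(shared, r, ncols)) for r in range(nrows)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(list(shared))
```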
| 0 | 2016-08-13T21:04:18Z | [
"python",
"arrays",
"parallel-processing"
] |
Python: easy way to modify array in parallel | 38,936,296 | <p>The question might sound easy, but being new to parallelization in Python I am definitely struggling. I dealt with parallelization in OpenMP for C++ and that was a hell of a lot easier.
What I need to do is modify entries of a matrix in parallel. That's it. Thing is, I can't do it using the simple joblib library:</p>
<pre><code>from joblib import Parallel, delayed
my_array = [[ 1 ,2 ,3],[4,5,6]]
def foo(array,x):
for i in [0,1,2]:
array[x][i]=2
return 0
def main(array):
inputs = [0,1]
if __name__ == '__main__':
Parallel(n_jobs=2, verbose = 0)(delayed(foo)(array,i) for i in inputs)
main(my_array)
</code></pre>
<p>This code would work when the number of jobs is 1 (so the array after the call of main is all 2s), but it wouldn't when it actually becomes true multiprocessing (the array stays the same after the call of main).
The main problem I think is that I can't return anything useful with the Parallel function. I also tried to have the array in shared memory, but I couldn't find any information on how to do it with joblib.</p>
<p>Any suggestions?</p>
| 3 | 2016-08-13T19:33:47Z | 38,948,653 | <p>Using only the standard library, you could use shared memory, in this case an <code>Array</code> to store and modify your array:</p>
<pre><code>from multiprocessing import Pool, Array, Lock
lock = Lock()
my_array = [Array('i', [1, 2, 3], lock=lock),
Array('i', [4, 5, 6], lock=lock),]
</code></pre>
<p>Let me suggest some modifications to your procedure: make a list or schedule of all the changes you need to make to your matrix (to make it clear, I am going to use a <code>namedtuple</code>), and a function to apply these changes.</p>
<pre><code>from collections import namedtuple

Change = namedtuple('Change', 'row idx value')

scheduled_changes = [Change(0, 0, 2),
                     Change(0, 1, 2),
                     Change(1, 0, 2),
                     Change(1, 1, 2)]
# or build the scheduled changes list in any other way, e.g. with
# for loops or list comprehensions...

def modify(change, matrix=my_array):
    matrix[change.row][change.idx] = change.value
</code></pre>
<p>Now you can use a <code>Pool</code> to map the modify function to the changes:</p>
<pre><code>pool = Pool(4)
pool.map(modify, scheduled_changes)
for row in my_array:
for col in row:
print(col, end=' ')
print()
# 2 2 3
# 2 2 6
</code></pre>
| 1 | 2016-08-15T02:45:57Z | [
"python",
"arrays",
"parallel-processing"
] |
How to use mask indexing on numpy arrays of classes? | 38,936,349 | <p>When working with numpy <code>array</code> of custom classes like:</p>
<pre><code>class TestClass:
active = False
</code></pre>
<p>How to use the inline masking (boolean index arrays) like described here: <a href="http://docs.scipy.org/doc/numpy/user/basics.indexing.html#boolean-or-mask-index-arrays" rel="nofollow">http://docs.scipy.org/doc/numpy/user/basics.indexing.html#boolean-or-mask-index-arrays</a></p>
<p>Direct attempt fails:</p>
<pre><code>items = np.array([TestClass() for _ in range(10)])
items[items.active]
AttributeError: 'numpy.ndarray' object has no attribute 'active'
</code></pre>
<p>Any suggestions?</p>
| 0 | 2016-08-13T19:40:19Z | 38,936,480 | <p>So your array is <code>dtype=object</code> (print it) and each element points to an instance of your class:</p>
<pre><code>items = np.array([TestClass() for _ in range(10)])
</code></pre>
<p>Now try:</p>
<pre><code>items.active
</code></pre>
<p><code>items</code> is an array; <code>active</code> is an attribute of your class, not an attribute of the array of your objects. Your definition does not add any functionality to the class <code>ndarray</code>. The error isn't in the masking; it's in trying to get the instance attribute.</p>
<p>Many operations on arrays like this have to be done iteratively. This kind of array is similar to a plain Python list.</p>
<pre><code>[obj.active for obj in items]
</code></pre>
<p>or to turn it back into an array</p>
<pre><code>np.array([obj...])
</code></pre>
<p><code>items[[True,False,True,...]]</code> should work, but that's because the mask is a boolean list or array already. </p>
<p>====================</p>
<p>Lets modify your class so it shows something interesting. Note I am assigning <code>active</code> to instances, not, as you did, to the class:</p>
<pre><code>In [1671]: class TestClass:
...: def __init__(self,val):
...: self.active = bool(val%2)
In [1672]: items = np.array([TestClass(i) for i in range(10)])
In [1674]: items
Out[1674]:
array([<__main__.TestClass object at 0xb106758c>,
<__main__.TestClass object at 0xb117764c>,
...
<__main__.TestClass object at 0xb269850c>], dtype=object)
# print of the array isn't interesting. The class needs a `__str__` method.
</code></pre>
<p>This simple iterative access to the attribute:</p>
<pre><code>In [1675]: [i.active for i in items]
Out[1675]: [False, True, False, True, False, True, False, True, False, True]
</code></pre>
<p><code>np.frompyfunc</code> provides a more powerful way of accessing each element of an array. <code>operator.attrgetter('active')(i)</code> is a functional way of doing <code>i.active</code>.</p>
<pre><code>In [1676]: f=np.frompyfunc(operator.attrgetter('active'),1,1)
In [1677]: f(items)
Out[1677]: array([False, True, False, True, False, True, False, True, False, True], dtype=object)
</code></pre>
<p>but the main advantage of this function appears when I change the shape of the array:</p>
<pre><code>In [1678]: f(items.reshape(2,5))
Out[1678]:
array([[False, True, False, True, False],
[True, False, True, False, True]], dtype=object)
</code></pre>
<p>Note this array is dtype object. That's what <code>frompyfunc</code> does. To get an array of booleans we need to change type:</p>
<pre><code>In [1679]: f(items.reshape(2,5)).astype(bool)
Out[1679]:
array([[False, True, False, True, False],
[ True, False, True, False, True]], dtype=bool)
</code></pre>
<p><code>np.vectorize</code> uses <code>frompyfunc</code>, and makes the dtype a little more user friendly. But in timings it's a bit slower.</p>
<p>===============</p>
<p>Expanding on Jon's comment</p>
<pre><code>In [1702]: class TestClass:
...: def __init__(self,val):
...: self.active = bool(val%2)
...: def __bool__(self):
...: return self.active
...: def __str__(self):
...: return 'TestClass(%s)'%self.active
...: def __repr__(self):
...: return str(self)
In [1707]: items = np.array([TestClass(i) for i in range(5)])
</code></pre>
<p><code>items</code> now display in an informative manner; and convert to strings:</p>
<pre><code>In [1708]: items
Out[1708]:
array([TestClass(False), TestClass(True), TestClass(False),
TestClass(True), TestClass(False)], dtype=object)
In [1709]: items.astype('S20')
Out[1709]:
array([b'TestClass(False)', b'TestClass(True)', b'TestClass(False)',
b'TestClass(True)', b'TestClass(False)'],
dtype='|S20')
</code></pre>
<p>and convert to <code>bool</code>:</p>
<pre><code>In [1710]: items.astype(bool)
Out[1710]: array([False, True, False, True, False], dtype=bool)
</code></pre>
<p>In effect <code>astype</code> is applying the conversion method to each element of the array. We could also define <code>__int__</code>, <code>__add__</code>, and so on. This shows that it is easier to add functionality to the custom class than to the array class itself. I wouldn't expect to get the same speed as with native types.</p>
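<p>Tying this back to the original masking question: once you have a boolean array (by whatever route), the boolean index array from the docs works as expected. A sketch reusing the class with a per-instance <code>active</code>:</p>

```python
import numpy as np

class TestClass:
    def __init__(self, val):
        self.active = bool(val % 2)

items = np.array([TestClass(i) for i in range(10)])

mask = np.array([obj.active for obj in items])  # plain boolean array
active_items = items[mask]                      # boolean index array, as in the docs

print(len(active_items))  # 5
```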
| 2 | 2016-08-13T19:59:02Z | [
"python",
"numpy"
] |
how to install sqlite in a django project | 38,936,355 | <p>I am learning Django. It seems when you create a new project, an SQLite3 database comes with it. In my case, when I look in my project, no database can be found.</p>
<pre><code> Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 13/08/2016 21:26 .idea
d----- 13/08/2016 21:03 AJJL_project
-a---- 13/08/2016 20:34 810 manage.py
</code></pre>
<p>I think maybe I should install SQLite myself. So, how do I do that, and how do I set it up in a Django project?</p>
| -1 | 2016-08-13T19:41:15Z | 38,936,448 | <p>To create the database and the tables in it, you must run the following command:</p>
<pre><code>python manage.py migrate
</code></pre>
<p>Read more about migrations <a href="https://docs.djangoproject.com/en/1.10/ref/django-admin/#django-admin-migrate" rel="nofollow">here</a> and <a class='doc-link' href="http://stackoverflow.com/documentation/django/1200/migrations#t=201608131953278895078">here</a></p>
| 1 | 2016-08-13T19:54:19Z | [
"python",
"django",
"sqlite",
"sqlite3"
] |
How to get path of all elements in lxml with attribute | 38,936,531 | <p>I have the following code:</p>
<pre><code>tree = etree.ElementTree(new_xml)
for e in new_xml.iter():
print tree.getpath(e), e.text
</code></pre>
<p>This will give me something like the following:</p>
<pre><code>/Item/Purchases
/Item/Purchases/Purchase[1]
/Item/Purchases/Purchase[1]/URL http://tvgo.xfinity.com/watch/x/6091165185315991112/movies
/Item/Purchases/Purchase[1]/Rating R
/Item/Purchases/Purchase[2]
/Item/Purchases/Purchase[2]/URL http://tvgo.xfinity.com/watch/x/6091165185315991112/movies
/Item/Purchases/Purchase[2]/Rating R
</code></pre>
<p>However, I need to get the path not of the list element but of the attribute. Here is what the xml looks like:</p>
<pre><code><Item>
<Purchases>
<Purchase Country="US">
<URL>http://tvgo.xfinity.com/watch/x/6091165US</URL>
<Rating>R</Rating>
</Purchase>
<Purchase Country="CA">
<URL>http://tvgo.xfinity.com/watch/x/6091165CA</URL>
<Rating>R</Rating>
</Purchase>
</Item>
</code></pre>
<p>How would I get the following path instead?</p>
<pre><code>/Item/Purchases
/Item/Purchases/Purchase[@Country="US"]
/Item/Purchases/Purchase[@Country="US"]/URL http://tvgo.xfinity.com/watch/x/6091165185315991112/movies
/Item/Purchases/Purchase[@Country="US"]/Rating R
/Item/Purchases/Purchase[@Country="CA"]
/Item/Purchases/Purchase[@Country="CA"]/URL http://tvgo.xfinity.com/watch/x/6091165185315991112/movies
/Item/Purchases/Purchase[@Country="CA"]/Rating R
</code></pre>
| 0 | 2016-08-13T20:05:15Z | 38,937,236 | <p>Not pretty, but it does the job.</p>
<pre><code>import re

replacements = {}
for e in tree.iter():
path = tree.getpath(e)
if re.search('/Purchase\[\d+\]$', path):
new_predicate = '[@Country="' + e.attrib['Country'] + '"]'
new_path = re.sub('\[\d+\]$', new_predicate, path)
replacements[path] = new_path
for key, replacement in replacements.iteritems():
path = path.replace(key, replacement)
print path, e.text.strip()
</code></pre>
<p>prints this for me:</p>
<pre><code>/Item
/Item/Purchases
/Item/Purchases/Purchase[@Country="US"]
/Item/Purchases/Purchase[@Country="US"]/URL http://tvgo.xfinity.com/watch/x/6091165US
/Item/Purchases/Purchase[@Country="US"]/Rating R
/Item/Purchases/Purchase[@Country="CA"]
/Item/Purchases/Purchase[@Country="CA"]/URL http://tvgo.xfinity.com/watch/x/6091165CA
/Item/Purchases/Purchase[@Country="CA"]/Rating R
</code></pre>
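<p>An alternative sketch that builds the attribute-predicate path directly with lxml's <code>getparent()</code>, instead of patching the <code>getpath()</code> output with regexes (assumes every <code>Purchase</code> element carries a <code>Country</code> attribute):</p>

```python
from lxml import etree

xml = b"""<Item><Purchases>
  <Purchase Country="US"><URL>http://example/us</URL><Rating>R</Rating></Purchase>
  <Purchase Country="CA"><URL>http://example/ca</URL><Rating>R</Rating></Purchase>
</Purchases></Item>"""

tree = etree.ElementTree(etree.fromstring(xml))

def attr_path(elem):
    # walk up to the root, adding a Country predicate on Purchase elements
    parts = []
    while elem is not None:
        tag = elem.tag
        if tag == 'Purchase' and 'Country' in elem.attrib:
            tag += '[@Country="%s"]' % elem.attrib['Country']
        parts.append(tag)
        elem = elem.getparent()
    return '/' + '/'.join(reversed(parts))

paths = [attr_path(e) for e in tree.iter()]
print(paths[2])  # /Item/Purchases/Purchase[@Country="US"]
print(paths[3])  # /Item/Purchases/Purchase[@Country="US"]/URL
```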
| 1 | 2016-08-13T21:39:31Z | [
"python",
"xml",
"lxml"
] |
How to find good documentation for Python modules | 38,936,584 | <p>I can't seem to find a good explanation of how to use Python modules. Take for example the <code>urllib</code> module. It has commands such as
<code>req = urllib.request.Request()</code>.</p>
<p>How would I find out what specific commands, like this one, are in certain Python modules?</p>
<p>For all the examples I've seen of people using specific Python modules, they just know what to type, and how to use them. </p>
<p>Any suggestions will be much appreciated.</p>
| 0 | 2016-08-13T20:12:07Z | 38,937,103 | <p>My flow chart looks something like this:</p>
<ul>
<li>Reading the published documentation (or use <code>help(moduleName)</code> which gives you the same information without an internet connection in a harder to read format). This can be overly verbose if you're only looking for one tidbit of information, in which case I move on to...</li>
<li>Finding tutorials or similar stack overflow posts using specific keywords in your favorite search engine. This is generally the approach you will use 99% of the time.</li>
<li>Just recursively poking around with <code>dir()</code> and <code>__doc__</code> if you think the answer for what you're looking for is going to be relatively obvious (usually if the module has relatively simple functions such as <code>math</code> that are obvious by the name)</li>
<li>Looking at the source of the module if you <em>really</em> want to see how things works.</li>
</ul>
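<p>For the <code>dir()</code> / <code>__doc__</code> route, a quick illustration of poking around a module from the REPL:</p>

```python
import math

# list the public names a module exposes
public = [name for name in dir(math) if not name.startswith('_')]
print('sqrt' in public)   # True

# read a specific function's docstring without leaving the interpreter
print(math.sqrt.__doc__)
```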
| 1 | 2016-08-13T21:20:26Z | [
"python",
"python-3.x",
"python-module"
] |
VSCode - invoke nosetests for a specific function or class | 38,936,628 | <p>I've setup a task in <code>tasks.json</code> to run <code>nosetests</code> from within VSCode, but, I'd like to invoke specific tests, eg a function or class of tests. Can anyone see a way to do this? In the tasks variable substitution I don't see function or class as variables.</p>
<pre><code>{
"version": "0.1.0",
"command": "/Users/alex/.virtualenvs/ddc/bin/nosetests",
"isShellCommand": true,
"args": ["-v"],
"showOutput": "always",
"tasks": [
{
"taskName": "${file}",
"isTestCommand": true,
"args": []
}
]
}
</code></pre>
| 0 | 2016-08-13T20:17:07Z | 38,973,685 | <p>@Alex, I'm the author of the extension; the solution is as follows:<br>
(the plan is to make this a seamless experience in the future) </p>
<ol>
<li>Create a file named xyz.py in the root directory </li>
<li>Add the following code </li>
<li>Add a break point in you test method (the test method you wish to debug) </li>
<li>Start debugging </li>
</ol>
<pre><code>import nose
nose.run()
</code></pre>
| 1 | 2016-08-16T11:26:28Z | [
"python",
"vscode",
"nose"
] |
Scipy analytically integrate piecewise function | 38,936,703 | <p>Is there a method in scipy for analytic integration of a piecewise function? For example I have:</p>
<pre><code>xrange_one, xrange_two = np.arange(0,4), np.arange(3,7)
part_one = lambda x: x + 3
part_two = lambda x: -2*x + 2
</code></pre>
<p>I'd like to integrate the first moment of this piecewise function:</p>
<pre><code>func_one = lambda x: x * (x + 3)
func_two = lambda x: x * (-2*x + 2)
</code></pre>
<p>Is there a way with scipy integrate.quad or some other analytical integration function that do something like this:</p>
<pre><code>total = integrate.quad(func_one, 0, 3, func_two, 3, 6)
</code></pre>
<p>I don't want to just integrate the two pieces separately.</p>
| 1 | 2016-08-13T20:27:19Z | 38,936,842 | <p>Scipy will not perform analytical integration for you, since it is made for solving numerical problems. Sympy, on the other hand, can handle simple symbolic problems exactly:</p>
<pre><code>>>> import sympy as sym
>>> x = sym.symbols('x')
>>> f = sym.Piecewise((x*(x+3),x<3), (x*(-2*x+2),True))
>>> sym.integrate(f,(x,0,6))
-153/2
</code></pre>
<p>Compare</p>
<pre><code>>>> import scipy.integrate as integrate
>>> integrate.quad(lambda x:x*(x+3),0,3)[0] + integrate.quad(lambda x:x*(-2*x+2),3,6)[0]
-76.5
>>> -153/2.
-76.5
</code></pre>
<p>You could also define your original piecewise function first, then multiply it with the symbolic <code>x</code>, then integrate this new function analytically.</p>
<p>Another alternative, perhaps closer to the spirit of your question, might be to define the piecewise function numerically, and using scipy after all. This will still save you some work, but won't be strictly analytical:</p>
<pre><code>>>> f = lambda x: x*(x+3) if x<3 else x*(-2*x+2)
>>> integrate.quad(f,0,6)[0]
-76.5
</code></pre>
<p>The most complete setup with this approach:</p>
<pre><code>>>> f = lambda x: x+3 if x<3 else -2*x+2
>>> xf = lambda x: x*f(x)
>>> first_mom = integrate.quad(xf,0,6)[0]
>>> print(first_mom)
-76.5
</code></pre>
<p>First we define the piecewise lambda for <code>f</code>, then the integrand of the first moment, multiplying it with <code>x</code>. Then we do the integration.</p>
<hr>
<p>Note that it is frowned upon by many to bind lambdas to variables. If you want to do this properly, you should probably define a named function for your piecewise function, and only use a lambda inside the integration (if otherwise you wouldn't use that integrand):</p>
<pre><code>import scipy.integrate as integrate
def f(x):
return x+3 if x<3 else -2*x+2
first_mom = integrate.quad(lambda x: x*f(x),0,6)[0]
</code></pre>
| 3 | 2016-08-13T20:43:03Z | [
"python",
"python-2.7",
"scipy",
"integration"
] |
Scipy analytically integrate piecewise function | 38,936,703 | <p>Is there a method in scipy for analytic integration of a piecewise function? For example I have:</p>
<pre><code>xrange_one, xrange_two = np.arange(0,4), np.arange(3,7)
part_one = lambda x: x + 3
part_two = lambda x: -2*x + 2
</code></pre>
<p>I'd like to integrate the first moment of this piecewise function:</p>
<pre><code>func_one = lambda x: x * (x + 3)
func_two = lambda x: x * (-2*x + 2)
</code></pre>
<p>Is there a way with scipy integrate.quad or some other analytical integration function that do something like this:</p>
<pre><code>total = integrate.quad(func_one, 0, 3, func_two, 3, 6)
</code></pre>
<p>I don't want to just integrate the two pieces separately.</p>
| 1 | 2016-08-13T20:27:19Z | 38,936,993 | <p>After playing with the <code>numpy</code> <code>poly</code> functions, I came up with:</p>
<pre><code>integrate.quad(lambda x:np.piecewise(x, [x < 3, x >= 3],
[lambda x: np.polyval([1,3,0],x),
lambda x: np.polyval([-2,2,0],x)]),
0,6)
</code></pre>
<p>evaluates to:</p>
<pre><code>(-76.5, 1.3489209749195652e-12)
</code></pre>
<p>There is a <code>polyint</code> to do a polynomial integration</p>
<pre><code>In [1523]: np.polyint([1,3,0])
Out[1523]: array([ 0.33333333, 1.5 , 0. , 0. ])
In [1524]: np.polyint([-2,2,0])
Out[1524]: array([-0.66666667, 1. , 0. , 0. ])
</code></pre>
<p>That is </p>
<pre><code>x*(x+3) => x**2 + 3*x => np.poly1d([1,3,0]) => 1/3 x**3 + 3/2 x**2
</code></pre>
<p>So the <code>analytical</code> solution is the appropriate end point differences for these two <code>polyint</code> objects:</p>
<pre><code>In [1619]: np.diff(np.polyval(np.polyint([1,3,0]),[0,3])) +
np.diff(np.polyval(np.polyint([-2,2,0]),[3,6]))
Out[1619]: array([-76.5])
In [1621]: [np.polyval(np.polyint([1,3,0]),[0,3]),
np.polyval(np.polyint([-2,2,0]),[3,6])]
Out[1621]: [array([ 0. , 22.5]), array([ -9., -108.])]
</code></pre>
| 1 | 2016-08-13T21:02:53Z | [
"python",
"python-2.7",
"scipy",
"integration"
] |