| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
How to sort by likes or views Flask? | 39,001,584 | <p>I have a problem sorting <strong>Authors</strong> by <code>Author.likes</code> or <code>Author.views</code> .</p>
<p>Let me first show you my code snippets:</p>
<p><strong>views.py</strong></p>
<pre><code>from sqlalchemy import func

@app.route('/authors/')
def authors():
    page = request.args.get('page', 1, type=int)
    pagination = Author.query.order_by(func.count(Author.likes)).paginate(page, per_page=app.config['AUTHORS_PER_PAGE'], error_out=False)
</code></pre>
<p>Here I am using the <code>func.count()</code> function to sort the authors by <code>likes</code>.</p>
<p>The problem is that I am getting just the most-liked author. In fact I have 12 of them in the database, each with their own likes, so I need to sort them all by likes from the highest to the lowest.</p>
<p>I have tried different ways but to no avail. Any help is appreciated!</p>
| -1 | 2016-08-17T16:13:00Z | 39,001,673 | <p>Remove <code>func.count</code> and just order by <code>Author.likes</code>, it's already a count. <code>func.count</code> is for counting groups of things. Currently, you're grouping everything into one group and so you only get one item back. </p>
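<p>Since the question asks for highest-to-lowest, the ORM call would presumably be <code>Author.query.order_by(Author.likes.desc())</code>. Under the hood that just emits <code>ORDER BY likes DESC</code>; a stdlib <code>sqlite3</code> sketch of the equivalent query (the table name and sample data here are made up for illustration):</p>

```python
import sqlite3

# In-memory stand-in for the Author table; names and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (name TEXT, likes INTEGER)")
conn.executemany("INSERT INTO authors VALUES (?, ?)",
                 [("a", 3), ("b", 10), ("c", 7)])

# Equivalent of Author.query.order_by(Author.likes.desc())
rows = conn.execute(
    "SELECT name, likes FROM authors ORDER BY likes DESC").fetchall()
print(rows)  # [('b', 10), ('c', 7), ('a', 3)]
```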
| 1 | 2016-08-17T16:17:38Z | [
"python",
"sqlalchemy"
] |
Django ImageField file max upload limit | 39,001,618 | <p>I have an avatar field in my UserProfile model in my models.py file:</p>
<pre><code>class UserProfile(models.Model):
    profileuser = models.OneToOneField(User)
    handle = models.CharField(max_length=128, blank=True)
    avatar = models.ImageField(upload_to='profile_images', blank=True)
</code></pre>
<p>In my forms.py file:</p>
<pre><code>class UserProfileForm(forms.ModelForm):
    class Meta:
        model = UserProfile
        exclude = ('profileuser',)
</code></pre>
<p>In my views.py file, I am taking the file data like below:</p>
<pre><code>if 'avatar' in request.FILES:
    profile.avatar = request.FILES['avatar']
</code></pre>
<p>I want to limit the maximum uploaded image file size to 1 MB. I have gone through many answers on Stack Overflow but am still unable to do this.
How can I make this happen?</p>
| 0 | 2016-08-17T16:14:46Z | 39,001,934 | <p>This should work:</p>
<pre><code>class UserProfileForm(forms.ModelForm):
    class Meta:
        model = UserProfile
        exclude = ('profileuser',)

    def clean(self, *args, **kwargs):
        cleaned_data = super(UserProfileForm, self).clean(*args, **kwargs)
        avatar = cleaned_data.get('avatar')
        if avatar and avatar.size > 1048576:  # 1 MB
            raise forms.ValidationError(u'File size is too large')
        return cleaned_data
</code></pre>
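<p>To keep the size check reusable, it can live in a small helper. The sketch below is framework-free: a plain <code>ValueError</code> stands in for Django's <code>forms.ValidationError</code>, and the fake upload object is only for illustration.</p>

```python
from collections import namedtuple

MAX_AVATAR_BYTES = 1 * 1024 * 1024  # 1 MB, per the question

def validate_avatar_size(uploaded):
    """Raise if the file-like object exceeds the limit."""
    if uploaded.size > MAX_AVATAR_BYTES:
        raise ValueError("File size is too large (max 1 MB)")

# Illustration with a stand-in for an UploadedFile:
FakeUpload = namedtuple("FakeUpload", "size")
validate_avatar_size(FakeUpload(size=500_000))  # under the limit, no error
try:
    validate_avatar_size(FakeUpload(size=2_000_000))
    too_large_rejected = False
except ValueError:
    too_large_rejected = True
print(too_large_rejected)  # True
```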
| 0 | 2016-08-17T16:32:26Z | [
"python",
"django"
] |
UnicodeDecodeError on python3 | 39,001,641 | <p>I'm currently trying to use some simple regex on a very big .txt file (a couple of million lines of text). The simplest code that causes the problem:</p>
<pre><code>file = open("exampleFileName", "r")
for line in file:
    pass
</code></pre>
<p>The error message:</p>
<pre><code>Traceback (most recent call last):
  File "example.py", line 34, in <module>
    example()
  File "example.py", line 16, in example
    for line in file:
  File "/usr/lib/python3.4/codecs.py", line 319, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 7332: invalid continuation byte
</code></pre>
<p>How can I fix this? Is UTF-8 the wrong encoding? And if it is, how do I know which one is right?</p>
<p>Thanks and best regards!</p>
| 2 | 2016-08-17T16:15:57Z | 39,001,821 | <p>It looks like it is invalid UTF-8 and you should try to read with <code>latin-1</code> encoding. Try</p>
<pre><code>file = open('exampleFileName', 'r', encoding='latin-1')
</code></pre>
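<p>A quick way to see why <code>latin-1</code> always succeeds: it maps every byte 0&ndash;255 to a character, so it can never raise a decode error (though it may produce the wrong characters if the file is really in some other encoding). A small sketch with a made-up byte string containing the offending byte:</p>

```python
data = b"caf\xed"  # 0xed is the byte from the traceback

try:
    data.decode("utf-8")
    utf8_ok = True
except UnicodeDecodeError:
    utf8_ok = False

print(utf8_ok)                 # False
print(data.decode("latin-1"))  # 'cafí' (never fails: every byte maps to a char)
```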
| 0 | 2016-08-17T16:25:33Z | [
"python",
"regex",
"utf-8",
"decoding"
] |
UnicodeDecodeError on python3 | 39,001,641 | <p>I'm currently trying to use some simple regex on a very big .txt file (a couple of million lines of text). The simplest code that causes the problem:</p>
<pre><code>file = open("exampleFileName", "r")
for line in file:
    pass
</code></pre>
<p>The error message:</p>
<pre><code>Traceback (most recent call last):
  File "example.py", line 34, in <module>
    example()
  File "example.py", line 16, in example
    for line in file:
  File "/usr/lib/python3.4/codecs.py", line 319, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 7332: invalid continuation byte
</code></pre>
<p>How can I fix this? Is UTF-8 the wrong encoding? And if it is, how do I know which one is right?</p>
<p>Thanks and best regards!</p>
| 2 | 2016-08-17T16:15:57Z | 39,002,048 | <p>It is not possible to identify the encoding on the fly. So either use a method like the one I wrote in a comment, or use constructions like the following (as proposed by another answer). Note that the decode error is raised while the file is being <em>read</em>, not by <code>open()</code> itself, so each attempt has to actually read the file. Still, this is a wild shot:</p>
<pre class="lang-py prettyprint-override"><code>try:
    with open("exampleFileName", "r") as file:
        data = file.read()
except UnicodeDecodeError:
    try:
        with open("exampleFileName", "r", encoding="latin2") as file:
            data = file.read()
    except UnicodeDecodeError:
        pass  # ... try the next encoding
</code></pre>
<p>And so on, until you test all the encodings from <a href="https://docs.python.org/3.5/library/codecs.html#standard-encodings" rel="nofollow">Standard Python Encodings</a>.</p>
<p>So I think there's no need to bother with this nested hell, just do <code>file -bi [filename]</code> once, copy the encoding and forget about this.</p>
<p><strong>UPD.</strong> Actually, I've found <a href="http://stackoverflow.com/a/17249680/5164080">another stackoverflow answer</a> which you can use if you're on <strong>Windows</strong>.</p>
| 0 | 2016-08-17T16:40:09Z | [
"python",
"regex",
"utf-8",
"decoding"
] |
Calculating surface normal in Python using Newell's Method | 39,001,642 | <p>I'm trying to implement Newell's Method to calculate the surface normal vector in Python, based on the following pseudocode from <a href="https://www.opengl.org/wiki/Calculating_a_Surface_Normal" rel="nofollow">here</a>.</p>
<pre><code>Begin Function CalculateSurfaceNormal (Input Polygon) Returns Vector
   Set Vertex Normal to (0, 0, 0)

   Begin Cycle for Index in [0, Polygon.vertexNumber)
      Set Vertex Current to Polygon.verts[Index]
      Set Vertex Next to Polygon.verts[(Index plus 1) mod Polygon.vertexNumber]

      Set Normal.x to Sum of Normal.x and (multiply (Current.y minus Next.y) by (Current.z plus Next.z))
      Set Normal.y to Sum of Normal.y and (multiply (Current.z minus Next.z) by (Current.x plus Next.x))
      Set Normal.z to Sum of Normal.z and (multiply (Current.x minus Next.x) by (Current.y plus Next.y))
   End Cycle

   Returning Normalize(Normal)
End Function
</code></pre>
<p>Here's my code:</p>
<pre><code>import collections

Point3D = collections.namedtuple('Point3D', 'x y z')

def surface_normal(poly):
    n = [0.0, 0.0, 0.0]
    for i, v_curr in enumerate(poly):
        v_next = poly[(i+1) % len(poly)]
        n[0] += (v_curr.y - v_next.y) * (v_curr.z - v_next.z)
        n[1] += (v_curr.z - v_next.z) * (v_curr.x - v_next.x)
        n[2] += (v_curr.x - v_next.x) * (v_curr.y - v_next.y)
    normalised = [i/sum(n) for i in n]
    return normalised

def test_surface_normal():
    poly = [Point3D(0.0, 0.0, 0.0),
            Point3D(0.0, 1.0, 0.0),
            Point3D(1.0, 1.0, 0.0),
            Point3D(1.0, 0.0, 0.0)]

    assert surface_normal(poly) == [0.0, 0.0, 1.0]
</code></pre>
<p>This fails at the normalisation step since the <code>n</code> at that point is <code>[0.0, 0.0, 0.0]</code>. If I'm understanding correctly, it should be <code>[0.0, 0.0, 1.0]</code> (<a href="http://www.wolframalpha.com/input/?i=normal+vector+of+plane+%5B%5B0,0,0%5D,%5B1,0,0%5D,%5B1,1,0%5D,%5B0,1,0%5D%5D" rel="nofollow">confirmed</a> by Wolfram Alpha).</p>
<p>What am I doing wrong here? And is there a better way of calculating surface normals in python? My polygons will always be planar so Newell's Method isn't absolutely necessary if there's another way.</p>
| 2 | 2016-08-17T16:15:57Z | 39,001,875 | <p>If you want an alternate approach to Newell's Method you can use the Cross-Product of 2 non-parallel vectors. That should work for just about any planar shape you'd provide it. I know the theory says it is for convex polygons, but examples we've looked at on Wolfram Alpha return an appropriate surface normal for even concave polygons (i.e. bowtie polygon).</p>
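<p>A minimal, dependency-free sketch of that idea: pick two edges that share a vertex and take their cross product (the vertices are assumed planar; for a robust choice of edges on near-degenerate polygons you would still want Newell's method):</p>

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normal_from_cross(p0, p1, p2):
    """Unnormalized surface normal from three consecutive vertices."""
    u = tuple(b - a for a, b in zip(p0, p1))  # edge p0 -> p1
    v = tuple(b - a for a, b in zip(p0, p2))  # edge p0 -> p2
    return cross(u, v)

# Counter-clockwise unit square in the z = 0 plane:
print(normal_from_cross((0, 0, 0), (1, 0, 0), (1, 1, 0)))  # (0, 0, 1)
```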
| 1 | 2016-08-17T16:28:49Z | [
"python",
"3d",
"geometry"
] |
Calculating surface normal in Python using Newell's Method | 39,001,642 | <p>I'm trying to implement Newell's Method to calculate the surface normal vector in Python, based on the following pseudocode from <a href="https://www.opengl.org/wiki/Calculating_a_Surface_Normal" rel="nofollow">here</a>.</p>
<pre><code>Begin Function CalculateSurfaceNormal (Input Polygon) Returns Vector
   Set Vertex Normal to (0, 0, 0)

   Begin Cycle for Index in [0, Polygon.vertexNumber)
      Set Vertex Current to Polygon.verts[Index]
      Set Vertex Next to Polygon.verts[(Index plus 1) mod Polygon.vertexNumber]

      Set Normal.x to Sum of Normal.x and (multiply (Current.y minus Next.y) by (Current.z plus Next.z))
      Set Normal.y to Sum of Normal.y and (multiply (Current.z minus Next.z) by (Current.x plus Next.x))
      Set Normal.z to Sum of Normal.z and (multiply (Current.x minus Next.x) by (Current.y plus Next.y))
   End Cycle

   Returning Normalize(Normal)
End Function
</code></pre>
<p>Here's my code:</p>
<pre><code>import collections

Point3D = collections.namedtuple('Point3D', 'x y z')

def surface_normal(poly):
    n = [0.0, 0.0, 0.0]
    for i, v_curr in enumerate(poly):
        v_next = poly[(i+1) % len(poly)]
        n[0] += (v_curr.y - v_next.y) * (v_curr.z - v_next.z)
        n[1] += (v_curr.z - v_next.z) * (v_curr.x - v_next.x)
        n[2] += (v_curr.x - v_next.x) * (v_curr.y - v_next.y)
    normalised = [i/sum(n) for i in n]
    return normalised

def test_surface_normal():
    poly = [Point3D(0.0, 0.0, 0.0),
            Point3D(0.0, 1.0, 0.0),
            Point3D(1.0, 1.0, 0.0),
            Point3D(1.0, 0.0, 0.0)]

    assert surface_normal(poly) == [0.0, 0.0, 1.0]
</code></pre>
<p>This fails at the normalisation step since the <code>n</code> at that point is <code>[0.0, 0.0, 0.0]</code>. If I'm understanding correctly, it should be <code>[0.0, 0.0, 1.0]</code> (<a href="http://www.wolframalpha.com/input/?i=normal+vector+of+plane+%5B%5B0,0,0%5D,%5B1,0,0%5D,%5B1,1,0%5D,%5B0,1,0%5D%5D" rel="nofollow">confirmed</a> by Wolfram Alpha).</p>
<p>What am I doing wrong here? And is there a better way of calculating surface normals in python? My polygons will always be planar so Newell's Method isn't absolutely necessary if there's another way.</p>
| 2 | 2016-08-17T16:15:57Z | 39,021,428 | <p>Ok, the problem was actually a stupid one.</p>
<p>The lines like:</p>
<pre><code>n[0] += (v_curr.y - v_next.y) * (v_curr.z - v_next.z)
</code></pre>
<p>should be:</p>
<pre><code>n[0] += (v_curr.y - v_next.y) * (v_curr.z + v_next.z)
</code></pre>
<p>The values in the second set of brackets should be added, not subtracted.</p>
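<p>Putting the sign fix together, and also normalizing by the vector's magnitude (dividing by <code>sum(n)</code> only happens to give the right answer for axis-aligned cases), a corrected sketch:</p>

```python
import math

def surface_normal(poly):
    """Newell's method; poly is a sequence of (x, y, z) points."""
    n = [0.0, 0.0, 0.0]
    for i, (cx, cy, cz) in enumerate(poly):
        nx, ny, nz = poly[(i + 1) % len(poly)]
        n[0] += (cy - ny) * (cz + nz)
        n[1] += (cz - nz) * (cx + nx)
        n[2] += (cx - nx) * (cy + ny)
    norm = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return [c / norm for c in n]

# Counter-clockwise unit square in the z = 0 plane:
print(surface_normal([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                      (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]))  # [0.0, 0.0, 1.0]
```

<p>Note that with the question's clockwise vertex order the result is <code>[0.0, 0.0, -1.0]</code>; the winding direction determines which side the normal points to.</p>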
| 0 | 2016-08-18T14:55:56Z | [
"python",
"3d",
"geometry"
] |
cython error by python import | 39,001,828 | <p>I'm making a C++ class to manage the communication with a device (SPI). The idea is to use this C++ class on an Arduino and a Raspberry Pi. This way I need to write this class only once. </p>
<p>I will use the class on the Arduino (this is no problem).
I also want to use it on my Raspberry Pi together with python. (Here is the problem)</p>
<p>I already made a test class (Rectangle) to use it on both. This one was successful :)</p>
<p>Now I stumble across a problem with my real class. </p>
<p>My idea was to make a <code>namespace</code> with 2 classes inside (one <code>public</code>, one <code>private</code>). The <code>public</code> one to manage the communication of the device. The <code>private</code> one to manage the SPI bus needed for the device. </p>
<p>Now I want to make the SPI class with the bcm2835 class from mikem, this is where I get an error. (<a href="http://www.airspayce.com/mikem/bcm2835/" rel="nofollow">http://www.airspayce.com/mikem/bcm2835/</a>).</p>
<p><strong><em>The next files I made (I know, it's still public):</em></strong>
<em>Device.h</em></p>
<pre><code>namespace device {
    class Spi {
    public:
        int speed, modus;

        Spi(int speed, int modus);
        ~Spi();

        void openSpi();
        void closeSpi();
        void writeSpi(int dataToWrite);
        int readSpi();
    };
}
</code></pre>
<p><em>Device.cpp</em></p>
<pre><code>#include "Device.h"
#include "bcm2835.h"
#include <iostream>

using namespace device;

//###############################################################################################
//***********************************************************************************************
//  Constructors:
//      Default constructor: speed = 1MHz, modus = 0
//      Specific constructor: speed = var(int), modus = var(int)
//***********************************************************************************************
// Specific
Spi::Spi(int speed, int modus) {
    this->speed = speed;
    this->modus = modus;
}

// Destructor
Spi::~Spi() {
}
//###############################################################################################

void Spi::openSpi() {
    if (!bcm2835_init())
    {
        std::cout << "bcm2835_init failed.";
        //return 1;
    }
    std::cout << "SPI is open.";
}

void Spi::closeSpi()
{
    std::cout << "SPI is closed.";
}

void Spi::writeSpi(int dataToWrite) {
    std::cout << "SPI write: " << dataToWrite;
}

int Spi::readSpi() {
    return 0;
}
</code></pre>
<p><em>dev.pyx</em></p>
<pre><code>cdef extern from "Device.h" namespace "device":
    cdef cppclass Spi:
        Spi(int, int) except +
        int speed, modus
        void openSpi()
        void closeSpi()
        void writeSpi(int)
        int readSpi()

cdef class PySpi:
    cdef Spi *thisptr
    def __cinit__(self, int speed, int modus):
        self.thisptr = new Spi(speed, modus)
    def __dealloc__(self):
        del self.thisptr
    def openSpi(self):
        self.thisptr.openSpi()
    def closeSpi(self):
        self.thisptr.closeSpi()
    def writeSpi(self, data):
        self.thisptr.writeSpi(data)
    def readSpi(self):
        return self.thisptr.readSpi()
</code></pre>
<p><em>setup.py</em></p>
<pre><code>from distutils.core import setup, Extension
from Cython.Build import cythonize

setup(ext_modules = cythonize(Extension(
    "dev",
    sources=["dev.pyx", "Device.cpp"],
    language="c++",
)))
</code></pre>
<p>I get no errors while building, but when I do <code>import dev</code> inside Python, I get the error:</p>
<blockquote>
<p>undefined symbol: bcm2835_init</p>
</blockquote>
<p>Does anyone know what I am doing wrong? </p>
| 0 | 2016-08-17T16:26:07Z | 39,002,696 | <p><code>bcm2835_init</code> is presumably in "bcm2835.cpp" so you need to add that to <code>sources</code> in "setup.py"</p>
<pre><code>sources=["dev.pyx","Device.cpp","bcm2835.cpp"]
</code></pre>
<p>Alternatively, if you have a bcm2835 library already compiled you might want to link to it by adding to "setup.py"</p>
<pre><code>libraries = ["bcm2835"]
</code></pre>
<p>You're getting the error because you call the function (in "Device.cpp") but never provide a definition of it.</p>
| 0 | 2016-08-17T17:16:13Z | [
"python",
"c++",
"cython"
] |
Convert Tecplot ascii to Python numpy | 39,001,856 | <p>I want to convert a <em>Tecplot</em> file into an array but I don't know how to do it.
Here is an extract of the file:</p>
<pre><code>TITLE = "Test"
VARIABLES = "x" "y"
ZONE I=18, F=BLOCK
0.1294538E-01 0.1299554E-01 0.1303974E-01 0.1311453E-01 0.1313446E-01 0.1319080E-01
0.1322709E-01 0.1323904E-01 0.1331753E-01 0.1335821E-01 0.1340850E-01 0.1347061E-01
0.1350522E-01 0.1358302E-01 0.1359585E-01 0.1363086E-01 0.1368307E-01 0.1370017E-01
0.1377368E-01 0.1381353E-01 0.1386420E-01 0.1391916E-01 0.1395847E-01 0.1400548E-01
0.1405659E-01 0.1410006E-01 0.1417611E-01 0.1419149E-01 0.1420015E-01 0.1428019E-01
0.1434745E-01 0.1436735E-01 0.1439856E-01 0.1445430E-01 0.1448778E-01 0.1454278E-01
</code></pre>
<p>I want to retrieve <code>x</code> and <code>y</code> as array. So <code>x</code> should contain:</p>
<pre><code>0.1294538E-01 0.1299554E-01 0.1303974E-01 0.1311453E-01 0.1313446E-01 0.1319080E-01
0.1322709E-01 0.1323904E-01 0.1331753E-01 0.1335821E-01 0.1340850E-01 0.1347061E-01
0.1350522E-01 0.1358302E-01 0.1359585E-01 0.1363086E-01 0.1368307E-01 0.1370017E-01
</code></pre>
<p>And <code>y</code> should contain:</p>
<pre><code>0.1377368E-01 0.1381353E-01 0.1386420E-01 0.1391916E-01 0.1395847E-01 0.1400548E-01
0.1405659E-01 0.1410006E-01 0.1417611E-01 0.1419149E-01 0.1420015E-01 0.1428019E-01
0.1434745E-01 0.1436735E-01 0.1439856E-01 0.1445430E-01 0.1448778E-01 0.1454278E-01
</code></pre>
<p>I have seen <code>np.loadtxt('./file.dat', skiprows=3)</code> but I can't find the right options to read all the numbers and split them into groups of 18.</p>
<p>Also, I started something like this with no luck:</p>
<pre><code>with open(file, 'r') as a:
    for line in a.readlines():
        A = re.match(r'TITLE = (.*$)', line, re.M | re.I)
        B = re.match(r'VARIABLES = (.*$)', line, re.M | re.I)
        C = re.match(r'ZONE (.*$)', line, re.M | re.I)

        if A or B or C:
            continue
        else:
            D = re.match(r'(.*$)', line, re.M | re.I)
            value = "{:.16}".format(D.group(1))
            y.append(float(value))
            j = j + 1
            if j == 18:
                j = 0
</code></pre>
<p>Thank you for your help!</p>
| 0 | 2016-08-17T16:28:00Z | 39,004,033 | <p>Solved it with the last option:</p>
<pre><code>arrays = []
with open(file, 'r') as a:
    for line in a.readlines():
        A = re.match(r'TITLE = (.*$)', line, re.M | re.I)
        B = re.match(r'VARIABLES = (.*$)', line, re.M | re.I)
        C = re.match(r'ZONE (.*$)', line, re.M | re.I)
        if A or B or C:
            continue
        else:
            arrays.append([float(s) for s in line.split()])

arrays = np.concatenate(arrays)
len_var = len(arrays)
x = arrays[0:len_var//2]   # note: 0:len_var/2-1 would drop the last value of the first half
y = arrays[len_var//2:len_var]
</code></pre>
<p>This answer was of great help for the creation of the array: <a href="http://stackoverflow.com/a/4289557/6522112">http://stackoverflow.com/a/4289557/6522112</a> and also this one for traversing the array: <a href="http://stackoverflow.com/a/952952/6522112">http://stackoverflow.com/a/952952/6522112</a>. But in the end using <code>np.concatenate</code> seems better.</p>
<p>For the record, I created this function in order to read any file:</p>
<pre><code>def tecplot_reader(file, nb_var):
    """Tecplot reader."""
    arrays = []
    with open(file, 'r') as a:
        for idx, line in enumerate(a.readlines()):
            if idx < 3:
                continue
            else:
                arrays.append([float(s) for s in line.split()])

    arrays = np.concatenate(arrays)
    output = np.split(arrays, nb_var)

    return output
</code></pre>
<p>So just do: <code>x, y, z = tecplot_reader('./file', 3)</code></p>
| 0 | 2016-08-17T18:39:10Z | [
"python",
"numpy",
"io",
"converter"
] |
Selenium webdriver functions are not showing in autosuggestion list pycharm | 39,002,097 | <p>I have installed Python, pip, Selenium and PyCharm. Everything works, but I see that the autosuggestion box doesn't show the WebDriver functions. Is there a reason for this?</p>
<p>Selenium is installed for the project interpreter in PyCharm.</p>
<p><a href="http://i.stack.imgur.com/LHkMQ.png" rel="nofollow">This is how autosuggest list looks like</a></p>
<p><a href="http://i.stack.imgur.com/ywbtC.jpg" rel="nofollow">And this is how i expect it to look like</a></p>
| 1 | 2016-08-17T16:42:37Z | 39,044,220 | <p>When installing Selenium, many folders are installed in the pythonxxx folder. Keep the folder Lib > site-packages > selenium and delete the other Selenium folder in the Python directory.</p>
| 0 | 2016-08-19T16:54:26Z | [
"python",
"python-3.x",
"selenium",
"selenium-webdriver",
"pycharm"
] |
Binning time series with pandas | 39,002,122 | <p>I have a time series in the form of a <code>DataFrame</code> that I can <code>groupby</code> to a series</p>
<pre><code>pan.groupby(pan.Time).mean()
</code></pre>
<p>which has just two columns <code>Time</code> and <code>Value</code>: </p>
<pre><code>Time Value
2015-04-24 06:38:49 0.023844
2015-04-24 06:39:19 0.019075
2015-04-24 06:43:49 0.023844
2015-04-24 06:44:18 0.019075
2015-04-24 06:44:48 0.023844
2015-04-24 06:45:18 0.019075
2015-04-24 06:47:48 0.023844
2015-04-24 06:48:18 0.019075
2015-04-24 06:50:48 0.023844
2015-04-24 06:51:18 0.019075
2015-04-24 06:51:48 0.023844
2015-04-24 06:52:18 0.019075
2015-04-24 06:52:48 0.023844
2015-04-24 06:53:48 0.019075
2015-04-24 06:55:18 0.023844
2015-04-24 07:00:47 0.019075
2015-04-24 07:01:17 0.023844
2015-04-24 07:01:47 0.019075
</code></pre>
<p>What I'm trying to do is figure out how I can bin those values into a sampling rate of e.g. 30 seconds and average the bins that have more than one observation.</p>
<p>In a last step I'd need to interpolate those values but I'm sure that there's something out there I can use. </p>
<p>However, I just can't figure out how to do the binning and averaging of those values. <code>Time</code> is a <code>datetime.datetime</code> object, not a <code>str</code>.</p>
<p>I've tried different things but nothing worked; exceptions were flying around.</p>
<p>Somebody out there who got this?</p>
| 1 | 2016-08-17T16:44:00Z | 39,003,074 | <p>IIUC, you could use <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Grouper.html" rel="nofollow"><code>TimeGrouper</code></a> along with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> on the index level to calculate the averages for the <code>Value</code> column as shown:</p>
<pre><code>df.set_index('Time', inplace=True)
# Taking mean values for a frequency of 2 minutes
df_group = df.groupby(pd.TimeGrouper(level='Time', freq='2T'))['Value'].agg('mean')
df_group.dropna(inplace=True)
df_group = df_group.to_frame().reset_index()
print(df_group)
Time Value
0 2015-04-24 06:38:00 0.021459
1 2015-04-24 06:42:00 0.023844
2 2015-04-24 06:44:00 0.020665
3 2015-04-24 06:46:00 0.023844
4 2015-04-24 06:48:00 0.019075
5 2015-04-24 06:50:00 0.022254
6 2015-04-24 06:52:00 0.020665
7 2015-04-24 06:54:00 0.023844
8 2015-04-24 07:00:00 0.020665
</code></pre>
<p>You could also use <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#resampling" rel="nofollow"><code>resample</code></a> as pointed out by @Paul H which is rather concise for this situation.</p>
<pre><code>print(df.set_index('Time').resample('2T').mean().dropna().reset_index())
Time Value
0 2015-04-24 06:38:00 0.021459
1 2015-04-24 06:42:00 0.023844
2 2015-04-24 06:44:00 0.020665
3 2015-04-24 06:46:00 0.023844
4 2015-04-24 06:48:00 0.019075
5 2015-04-24 06:50:00 0.022254
6 2015-04-24 06:52:00 0.020665
7 2015-04-24 06:54:00 0.023844
8 2015-04-24 07:00:00 0.020665
</code></pre>
| 1 | 2016-08-17T17:39:23Z | [
"python",
"pandas"
] |
Matplotlib - Legend of a specific contour at different instants | 39,002,125 | <p>I'm trying to plot a figure with a specific contour line (level = 320) at different times, which is why a loop is used.
I would like to plot a legend with labels specifying the time during the loop, as here:</p>
<p><a href="http://i.stack.imgur.com/cfSMK.png" rel="nofollow"><img src="http://i.stack.imgur.com/cfSMK.png" alt="enter image description here"></a></p>
<p>A part of my code is shown :</p>
<pre><code>cmap = plt.cm.hot

instant = 0
for instant in range(0, Sigma_stockMAX.shape[2]):
    name = 'test'

    VM_MAX_eq = 1./np.sqrt(2)*np.sqrt((Sigma_stockMAX[:,2,instant]-Sigma_stockMAX[:,3,instant])**2 + (Sigma_stockMAX[:,3,instant])**2 + (-Sigma_stockMAX[:,2,instant])**2 + 6*Sigma_stockMAX[:,5,instant]**2)
    VM_MAX_eq_interpolate = interpolate(VM_MAX_eq, vtx, wts).reshape(X.shape[0], X.shape[0])

    plt.legend(loc='upper center', shadow=True)
    contour = plt.contour(XX_field[20:480,20:480], YY_field[20:480,20:480],ndimage.filters.gaussian_filter(VM_MAX_eq_interpolate[20:480,20:480], 5),colors=(cmap(instant/ np.float(Sigma_stockMAX.shape[2])),),levels = [320],linestyles=('-',),linewidths=(2,))

    plt.savefig(name + '_0' + test[instant][81:110] + ".png", dpi=250)
</code></pre>
<p>I tried to add this part in the loop but it doesn't work:</p>
<pre><code>labels = ['line1', 'line2', 'line3', 'line4',
          'line5', 'line6', 'line6', 'line6', 'line6', 'line6']

for i in range(len(labels)):
    contour.collections[instant].set_label(labels[instant])
</code></pre>
| 0 | 2016-08-17T16:44:08Z | 39,064,664 | <p>I used: <a href="http://matplotlib.org/examples/pylab_examples/contour_demo.html" rel="nofollow">http://matplotlib.org/examples/pylab_examples/contour_demo.html</a></p>
<pre><code>cmap = plt.cm.hot

lines = []
labels = []
i = 0

instant = 0
for instant in range(0, Sigma_stockMAX.shape[2]):
    name = 'test'

    VM_MAX_eq = 1./np.sqrt(2)*np.sqrt((Sigma_stockMAX[:,2,instant]-Sigma_stockMAX[:,3,instant])**2 + (Sigma_stockMAX[:,3,instant])**2 + (-Sigma_stockMAX[:,2,instant])**2 + 6*Sigma_stockMAX[:,5,instant]**2)
    VM_MAX_eq_interpolate = interpolate(VM_MAX_eq, vtx, wts).reshape(X.shape[0], X.shape[0])

    contour = plt.contour(XX_field[20:480,20:480], YY_field[20:480,20:480],ndimage.filters.gaussian_filter(VM_MAX_eq_interpolate[20:480,20:480], 5),colors=(cmap(instant/ np.float(Sigma_stockMAX.shape[2])),),levels = [320],linestyles=('-',),linewidths=(2,))

    lines.extend(contour.collections)
    labels.extend(['line' + str(i + j) for j in range(len(contour.collections))])
    i += len(contour.collections)

    plt.legend(lines, labels, loc='upper center', shadow=True)
    plt.savefig(name + '_0' + test[instant][81:110] + ".png", dpi=250)
</code></pre>
| 1 | 2016-08-21T13:21:52Z | [
"python",
"numpy",
"matplotlib",
"contour"
] |
rasterio transform and affine | 39,002,173 | <p>I'm trying to do some basic image filtering. I've included a snippet verbatim from the rasterio cookbook (I removed <code>.astype()</code> from the median filter output). The issue is that my input and output rasters should have the same extent but don't. The transform and affine are different for the input and output. Is this the expected behavior? Do I need to do something to the affine and transform to get the output to match the input?</p>
<p>Python 2.7.11 |Anaconda 4.0.0 (64-bit)| (default, Feb 16 2016, 09:58:36) [MSC v.1500 64 bit (AMD64)] on win32</p>
<p>rasterio==0.36.0</p>
<pre><code>import rasterio
from scipy.signal import medfilt

path = "map.tif"
output = "map2.tif"

with rasterio.open(path) as src:
    array = src.read()
    profile = src.profile

# apply a 5x5 median filter to each band
filtered = medfilt(array, (1, 5, 5))

# Write to tif, using the same profile as the source
with rasterio.open(output, 'w', **profile) as dst:
    dst.write(filtered)

print profile
print dst.profile
>>> {'count': 1, 'crs': CRS({'init': u'epsg:3857'}), 'interleave': 'band', 'dtype': 'float64', 'affine': Affine(100.0, 0.0, -13250000.0, 0.0, 100.0, 3980000.0), 'driver': u'GTiff', 'transform': (-13250000.0, 100.0, 0.0, 3980000.0, 0.0, 100.0), 'height': 1700, 'width': 1700, 'tiled': False, 'nodata': None}
>>> {'count': 1, 'crs': CRS({'init': u'epsg:3857'}), u'interleave': 'band', 'dtype': 'float64', 'affine': Affine(-13250000.0, 100.0, 0.0, 3980000.0, 0.0, 100.0), 'driver': u'GTiff', 'transform': (0.0, -13250000.0, 100.0, 100.0, 3980000.0, 0.0), 'height': 1700, 'width': 1700, u'tiled': False, 'nodata': None}
</code></pre>
| 2 | 2016-08-17T16:46:18Z | 39,021,707 | <p>The rasterio docs include a <a href="https://mapbox.github.io/rasterio/topics/migrating-to-v1.html#affine-affine-vs-gdal-style-geotransforms" rel="nofollow">history affine/transform usage</a> that you may find useful. I used to have a couple lines like the following to handle this:</p>
<pre><code>out_profile = src.profile.copy()
out_affine = out_profile.pop("affine")
out_profile["transform"] = out_affine

# then, write the output raster
with rasterio.open(output, 'w', **out_profile) as dst:
    dst.write(filtered)
</code></pre>
<p>I think that is what is necessary here.</p>
| 0 | 2016-08-18T15:08:50Z | [
"python",
"rasterio"
] |
Scrapy: TypeError: start_requests() takes exactly 2 arguments (1 given) | 39,002,285 | <p>I'm starting work on a new scrapy project. So far I have:</p>
<pre><code>class ContactSpider(Spider):
    name = "contact"
    allowed_domains = ["http://www.domain.com/"]
    start_urls = [
        "http://web.domain.com/DECORATION"
    ]

    def start_requests(self, response):
        l = response.selector.xpath('//*[@id="ListingResults"]/text()').extract()
        print(l)
</code></pre>
<p>I'm getting:</p>
<pre><code>Unhandled error in Deferred:
2016-08-17 12:37:16 [twisted] CRITICAL: Unhandled error in Deferred:

Traceback (most recent call last):
  File "Hlib\site-packages\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "C:\lib\site-packages\scrapy\crawler.py", line 163, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "C:\lib\site-packages\scrapy\crawler.py", line 167, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "C:\lib\site-packages\twisted\internet\defer.py", line 1274, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "C:\lib\site-packages\twisted\internet\defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "C:\lib\site-packages\scrapy\crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "C:\lib\site-packages\scrapy\crawler.py", line 73, in crawl
    start_requests = iter(self.spider.start_requests())
exceptions.TypeError: start_requests() takes exactly 2 arguments (1 given)

2016-08-17 12:37:16 [twisted] CRITICAL:
Traceback (most recent call last):
  File "C:\lib\site-packages\twisted\internet\defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "C:\lib\site-packages\scrapy\crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "C:\lib\site-packages\scrapy\crawler.py", line 73, in crawl
    start_requests = iter(self.spider.start_requests())
TypeError: start_requests() takes exactly 2 arguments (1 given)
Unhandled error in Deferred:
2016-08-17 12:37:16 [twisted] CRITICAL: Unhandled error in Deferred:
</code></pre>
<p>What am I doing wrong?</p>
| 0 | 2016-08-17T16:51:55Z | 39,005,071 | <p><code>start_requests</code> is a method from <code>scrapy.Spider</code>; it takes no arguments other than <code>self</code>. It is used to create the starting <code>Requests</code>, so it should <strong>yield</strong> some <code>Request</code> objects (or return a list of <code>Requests</code>).</p>
<pre><code>def start_requests(self,response):
</code></pre>
<p>should be:</p>
<pre><code>def start_requests(self):
</code></pre>
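<p>The failure itself is plain Python rather than anything Scrapy-specific: the framework calls <code>start_requests()</code> with no extra argument, so the extra <code>response</code> parameter goes unfilled. A dependency-free sketch of the same mechanism (the class here is a stand-in, not real Scrapy code):</p>

```python
class BrokenSpider:
    def start_requests(self, response):  # extra parameter the caller never supplies
        yield response

# This is essentially what scrapy's crawl() does internally:
try:
    iter(BrokenSpider().start_requests())
    raised = False
except TypeError as exc:
    raised = True
    print(exc)  # e.g. "start_requests() missing 1 required positional argument: 'response'"
```

<p>Response handling belongs in a callback such as <code>parse(self, response)</code>; <code>start_requests</code> only yields the initial <code>Request</code> objects.</p>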
| 3 | 2016-08-17T19:45:50Z | [
"python",
"scrapy"
] |
Pandas GroupBy apply all | 39,002,340 | <p>I've got an involved situation. Let's say I have the following example dataframe of loans:</p>
<pre><code>test_df = pd.DataFrame({'name': ['Jack','Jill','John','Jack','Jill'],
                        'date': ['2016-08-08','2016-08-08','2016-08-07','2016-08-08','2016-08-08'],
                        'amount': [1000.0,1500.0,2000.0,2000.0,3000.0],
                        'return_amount': [5000.0,2000.0,3000.0,0.0,0.0],
                        'return_date': ['2017-08-08','2017-08-08','2017-08-07','','2017-08-08']})

test_df.head()

   amount        date  name  return_amount return_date
0  1000.0  2016-08-08  Jack         5000.0  2017-08-08
1  1500.0  2016-08-08  Jill         2000.0  2017-08-08
2  2000.0  2016-08-07  John         3000.0  2017-08-07
3  2000.0  2016-08-08  Jack            0.0
4  3000.0  2016-08-08  Jill            0.0  2017-08-08
</code></pre>
<p>There are a few operations I need to perform after grouping this dataframe by name (grouping loans by person):</p>
<p>1) <code>return_amount</code> needs to be allocated <em>proportionally</em> by the sum of <code>amount</code>.</p>
<p>2) If <code>return date</code> is missing for <strong>ANY</strong> loan for a given person, then all return_dates should be converted to empty strings ''.</p>
<p>I already have a function that I use to allocate the proportional return amount:</p>
<pre><code>def allocate_return_amount(group):
    loan_amount = group['amount']
    return_amount = group['return_amount']

    sum_amount = loan_amount.sum()
    sum_return_amount = return_amount.sum()

    group['allocated_return_amount'] = (loan_amount/sum_amount) * sum_return_amount
    return group
</code></pre>
<p>And I use <code>grouped_test_df = grouped_test_df.apply(allocate_return_amount)</code> to apply it.</p>
<p>What I am struggling with is the second operation I need to perform, checking if any of the loans to a person are missing a <code>return_date</code>, and if so, changing all <code>return_dates</code> for that person to ''.</p>
<p>I've found GroupBy.all in the <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.core.groupby.DataFrameGroupBy.all.html" rel="nofollow">pandas documentation</a>, but I haven't figured out how to use it yet, anyone with experience with this?</p>
<p>Since this example might be a bit hard to follow, here's my ideal output for this example:</p>
<pre><code>ideal_test_df.head()
amount date name return_amount return_date
0 1000.0 2016-08-08 Jack 0.0 ''
1 1500.0 2016-08-08 Jill 666.66 2017-08-08
2 2000.0 2016-08-07 John 3000.0 2017-08-07
3 2500.0 2016-08-08 Jack 0.0 ''
4 2500.0 2016-08-08 Jill 1333.33 2017-08-08
</code></pre>
<p>Hopefully this makes sense, and thank you in advance to any pandas expert who takes the time to help me out!</p>
| 4 | 2016-08-17T16:55:13Z | 39,002,970 | <p>You can do it by iterating through the groups, testing the condition using <code>any</code>, then setting back to the original dataframe using <code>loc</code>:</p>
<pre><code>test_df = pd.DataFrame({'name': ['Jack','Jill','John','Jack','Jill'],
'date': ['2016-08-08','2016-08-08','2016-08-07','2016-08-08','2016-08-08'],
'amount': [1000.0,1500.0,2000.0,2000.0,3000.0],
'return_amount': [5000.0,2000.0,3000.0,0.0,0.0],
'return_date': ['2017-08-08','2017-08-08','2017-08-07','','2017-08-08']})
grouped = test_df.groupby('name')
for name, group in grouped:
if any(group['return_date'] == ''):
test_df.loc[group.index,'return_date'] = ''
</code></pre>
<p>And if you want to reset <code>return_amount</code> also, and don't mind the additional overhead, just add this line right after:</p>
<pre><code>test_df.loc[group.index, 'return_amount'] = 0
</code></pre>
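<p>For reference, a more vectorized sketch of the same blanking step using <code>groupby().transform</code> (same toy data as above, trimmed to the relevant columns):</p>

```python
import pandas as pd

test_df = pd.DataFrame({'name': ['Jack', 'Jill', 'John', 'Jack', 'Jill'],
                        'return_date': ['2017-08-08', '2017-08-08', '2017-08-07', '', '2017-08-08']})

# True for every row whose name-group contains at least one empty return_date
missing = test_df.groupby('name')['return_date'].transform(lambda s: (s == '').any())
test_df.loc[missing, 'return_date'] = ''

print(test_df['return_date'].tolist())  # ['', '2017-08-08', '2017-08-07', '', '2017-08-08']
```

<p>This avoids the explicit Python loop over groups, which matters once the dataframe gets large.</p>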
| 2 | 2016-08-17T17:32:38Z | [
"python",
"pandas",
"dataframe"
] |
TensorFlow Choose GPU to use from multiple GPUs | 39,002,352 | <p>I'm new to TensorFlow and have installed CUDA-7.5 and cudnn-v4 as per the instructions on the TensorFlow website. After adjusting the TensorFlow configuration file and trying to run the following example from the website:</p>
<pre><code>python -m tensorflow.models.image.mnist.convolutional
</code></pre>
<p>I'm pretty sure TensorFlow is using one of the GPUs instead of the other, however, I'd like it to use the faster one. I was wondering if this example code just defaults to using the first GPU it finds. If so, how can I choose which GPU to use in my TensorFlow code in python?</p>
<p>The messages I get when running the example code are:</p>
<pre><code>ldt-tesla:~$ python -m tensorflow.models.image.mnist.convolutional
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: Tesla K20c
major: 3 minor: 5 memoryClockRate (GHz) 0.7055
pciBusID 0000:03:00.0
Total memory: 4.63GiB
Free memory: 4.57GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x2f27390
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 1 with properties:
name: Quadro K2200
major: 5 minor: 0 memoryClockRate (GHz) 1.124
pciBusID 0000:02:00.0
Total memory: 3.95GiB
Free memory: 3.62GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 0 to device ordinal 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 1 to device ordinal 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 1: N Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K20c, pci bus id: 0000:03:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:793] Ignoring gpu device (device: 1, name: Quadro K2200, pci bus id: 0000:02:00.0) with Cuda multiprocessor count: 5. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
Initialized!
</code></pre>
| 0 | 2016-08-17T16:56:04Z | 39,002,657 | <p>You can set the <code>CUDA_VISIBLE_DEVICES</code> environment variable to expose only the ones that you want, quoting this example on <a href="http://www.acceleware.com/blog/cudavisibledevices-masking-gpus" rel="nofollow">masking gpus</a>:</p>
<pre><code>CUDA_VISIBLE_DEVICES=1 Only device 1 will be seen
CUDA_VISIBLE_DEVICES=0,1 Devices 0 and 1 will be visible
CUDA_VISIBLE_DEVICES="0,1" Same as above, quotation marks are optional
CUDA_VISIBLE_DEVICES=0,2,3 Devices 0, 2, 3 will be visible; device 1 is masked
</code></pre>
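<p>If you'd rather not set this on the command line, a common alternative (a sketch) is to set the variable from the script itself — it just has to happen <em>before</em> TensorFlow is imported and initializes CUDA:</p>

```python
import os

# Expose only device 0 (the Tesla K20c in the log above).
# Must run before `import tensorflow` for it to take effect.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

# import tensorflow as tf  # TensorFlow will now only see the listed device(s)
print(os.environ['CUDA_VISIBLE_DEVICES'])  # 0
```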
| 3 | 2016-08-17T17:14:12Z | [
"python",
"cuda",
"tensorflow",
"gpu"
] |
How to see if a string contains all substrings from a list? [Python] | 39,002,391 | <p>Here's the scenario:</p>
<p>I have a long list of time-stamped file names with characters before and after the time-stamp.</p>
<p>Something like this: <code>prefix_20160817_suffix</code></p>
<p>What I want is a list (which will ultimately be a subset of the original list) that contains file names with specific prefixes, suffixes, and parts of the timestamp. These specific strings are already given in a list. Note: <strong>this "contains" list might vary in size.</strong></p>
<p>For example: <code>['prefix1', '2016', 'suffix']</code> or <code>['201608', 'suffix']</code></p>
<p>How can I easily get a list of file names that contain <strong>every</strong> element in the "contains" array?</p>
<p>Here's some pseudo code to demonstrate what I want:</p>
<pre><code>for each fileName in the master list:
if the fileName contains EVERY element in the "contains" array:
add fileName to filtered list of filenames
</code></pre>
| 0 | 2016-08-17T16:58:32Z | 39,002,476 | <p>I'd compile the list into a <a href="https://docs.python.org/2/library/fnmatch.html" rel="nofollow"><code>fnmatch</code> pattern</a>:</p>
<pre><code>import fnmatch
pattern = '*'.join(contains)
filetered_filenames = fnmatch.filter(master_list, pattern)
</code></pre>
<p>This basically concatenates all strings in <code>contains</code> into a glob pattern with <code>*</code> wildcards in between. This assumes the order of <code>contains</code> is significant. Given that you are looking for prefixes, suffixes and (parts of) dates in between, that's not that much of a stretch.</p>
<p>It is important to note that if you run this on an OS that has a case-insensitive filesystem, that <code>fnmatch</code> matching is also case-insensitive. This is usually exactly what you'd want in that case.</p>
| 4 | 2016-08-17T17:03:46Z | [
"python",
"python-3.x"
] |
How to see if a string contains all substrings from a list? [Python] | 39,002,391 | <p>Here's the scenario:</p>
<p>I have a long list of time-stamped file names with characters before and after the time-stamp.</p>
<p>Something like this: <code>prefix_20160817_suffix</code></p>
<p>What I want is a list (which will ultimately be a subset of the original list) that contains file names with specific prefixes, suffixes, and parts of the timestamp. These specific strings are already given in a list. Note: <strong>this "contains" list might vary in size.</strong></p>
<p>For example: <code>['prefix1', '2016', 'suffix']</code> or <code>['201608', 'suffix']</code></p>
<p>How can I easily get a list of file names that contain <strong>every</strong> element in the "contains" array?</p>
<p>Here's some pseudo code to demonstrate what I want:</p>
<pre><code>for each fileName in the master list:
if the fileName contains EVERY element in the "contains" array:
add fileName to filtered list of filenames
</code></pre>
| 0 | 2016-08-17T16:58:32Z | 39,002,523 | <p>You're looking for something like that (using <a class='doc-link' href="http://stackoverflow.com/documentation/python/196/comprehensions/737/list-comprehensions#t=201608171708486260643">list comprehension</a> and <a class='doc-link' href="http://stackoverflow.com/documentation/python/209/list/8125/any-and-all#t=201608171709575223328"><code>all()</code></a>):</p>
<pre><code>>>> files = ["prefix_20160817_suffix", "some_other_file_with_suffix"]
>>> contains = ['prefix', '2016', 'suffix']
>>> [ f for f in files if all(c in f for c in contains) ]
['prefix_20160817_suffix']
</code></pre>
| 1 | 2016-08-17T17:06:24Z | [
"python",
"python-3.x"
] |
How to see if a string contains all substrings from a list? [Python] | 39,002,391 | <p>Here's the scenario:</p>
<p>I have a long list of time-stamped file names with characters before and after the time-stamp.</p>
<p>Something like this: <code>prefix_20160817_suffix</code></p>
<p>What I want is a list (which will ultimately be a subset of the original list) that contains file names with specific prefixes, suffixes, and parts of the timestamp. These specific strings are already given in a list. Note: <strong>this "contains" list might vary in size.</strong></p>
<p>For example: <code>['prefix1', '2016', 'suffix']</code> or <code>['201608', 'suffix']</code></p>
<p>How can I easily get a list of file names that contain <strong>every</strong> element in the "contains" array?</p>
<p>Here's some pseudo code to demonstrate what I want:</p>
<pre><code>for each fileName in the master list:
if the fileName contains EVERY element in the "contains" array:
add fileName to filtered list of filenames
</code></pre>
| 0 | 2016-08-17T16:58:32Z | 39,002,543 | <p>Given:</p>
<pre><code>>>> cond1=['prefix1', '2016', 'suffix']
>>> cond2=['201608', 'suffix']
>>> fn="prefix_20160817_suffix"
</code></pre>
<p>You can test the existence of each substring in the list of conditions with <code>in</code> and (in the interim example) a list comprehension:</p>
<pre><code>>>> [e in fn for e in cond1]
[False, True, True]
>>> [e in fn for e in cond2]
[True, True]
</code></pre>
<p>That can then be used in a single <code>all</code> statement to test all the substrings:</p>
<pre><code>>>> all(e in fn for e in cond1)
False
>>> all(e in fn for e in cond2)
True
</code></pre>
<p>Then you can combine with <code>filter</code> (or use a list comprehension or a loop) to filter the list:</p>
<pre><code>>>> fns=["prefix_20160817_suffix", "prefix1_20160817_suffix"]
>>> filter(lambda fn: all(e in fn for e in cond1), fns)
['prefix1_20160817_suffix']
>>> filter(lambda fn: all(e in fn for e in cond2), fns)
['prefix_20160817_suffix', 'prefix1_20160817_suffix']
</code></pre>
| 0 | 2016-08-17T17:07:23Z | [
"python",
"python-3.x"
] |
How to see if a string contains all substrings from a list? [Python] | 39,002,391 | <p>Here's the scenario:</p>
<p>I have a long list of time-stamped file names with characters before and after the time-stamp.</p>
<p>Something like this: <code>prefix_20160817_suffix</code></p>
<p>What I want is a list (which will ultimately be a subset of the original list) that contains file names with specific prefixes, suffixes, and parts of the timestamp. These specific strings are already given in a list. Note: <strong>this "contains" list might vary in size.</strong></p>
<p>For example: <code>['prefix1', '2016', 'suffix']</code> or <code>['201608', 'suffix']</code></p>
<p>How can I easily get a list of file names that contain <strong>every</strong> element in the "contains" array?</p>
<p>Here's some pseudo code to demonstrate what I want:</p>
<pre><code>for each fileName in the master list:
if the fileName contains EVERY element in the "contains" array:
add fileName to filtered list of filenames
</code></pre>
| 0 | 2016-08-17T16:58:32Z | 39,002,577 | <p>This should work for you.</p>
<pre><code>filtered_list = []
for file_name in master_list:
    for element in contains_array:
        if element not in file_name:
            break
    else:
        # the else branch runs only when the inner loop did not break,
        # i.e. every element was found in file_name
        filtered_list.append(file_name)
</code></pre>
| 0 | 2016-08-17T17:09:46Z | [
"python",
"python-3.x"
] |
How to see if a string contains all substrings from a list? [Python] | 39,002,391 | <p>Here's the scenario:</p>
<p>I have a long list of time-stamped file names with characters before and after the time-stamp.</p>
<p>Something like this: <code>prefix_20160817_suffix</code></p>
<p>What I want is a list (which will ultimately be a subset of the original list) that contains file names with specific prefixes, suffixes, and parts of the timestamp. These specific strings are already given in a list. Note: <strong>this "contains" list might vary in size.</strong></p>
<p>For example: <code>['prefix1', '2016', 'suffix']</code> or <code>['201608', 'suffix']</code></p>
<p>How can I easily get a list of file names that contain <strong>every</strong> element in the "contains" array?</p>
<p>Here's some pseudo code to demonstrate what I want:</p>
<pre><code>for each fileName in the master list:
if the fileName contains EVERY element in the "contains" array:
add fileName to filtered list of filenames
</code></pre>
| 0 | 2016-08-17T16:58:32Z | 39,002,636 | <p>Your pseudocode was not far from a usable implementation as you see:</p>
<pre><code>masterList=["prefix_20160817_suffix"]
containsArray=['prefix1', '2016', 'suffix']
filteredListOfFilenames=[]
for fileName in masterList:
if all((element in fileName) for element in containsArray):
filteredListOfFilenames.append(fileName)
</code></pre>
<p>I would suggest to have a deeper look into the really good <a href="https://docs.python.org/3/tutorial/index.html" rel="nofollow">official tutorial</a> - it contains many useful things.</p>
| 0 | 2016-08-17T17:13:06Z | [
"python",
"python-3.x"
] |
how to extract date/time parameters from a list of strings? | 39,002,454 | <p>i have a pandas dataframe having a column as </p>
<pre><code> from pandas import DataFrame
df = pf.DataFrame({ 'column_name' : [u'Monday,30 December,2013', u'Delivered', u'19:23', u'1']})
</code></pre>
<p>Now I want to extract everything from it and store it in three columns:</p>
<pre><code>date status time
[30/December/2013] ['Delivered'] [19:23]
</code></pre>
<p>i have so far used this :</p>
<pre><code>import dateutil.parser as dparser
dparser.parse([u'Monday,30 December,2013', u'Delivered', u'19:23', u'1'])
</code></pre>
<p>but this throws an error . can anyone please guide me to a solution ?</p>
| 0 | 2016-08-17T17:02:26Z | 39,002,718 | <p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow"><strong><code>apply()</code></strong></a> a function to a column, see the whole example:</p>
<pre><code>from pandas import DataFrame
df = DataFrame({'date': ['Monday,30 December,2013'], 'delivery': ['Delivered'], 'time': ['19:23'], 'status':['1']})
# delete the status column
del df['status']
def splitter(val):
parts = val.split(',')
return parts[1]
df['date'] = df['date'].apply(splitter)
</code></pre>
<p>This yields a dataframe with <code>date</code>, <code>delivery</code> and the <code>time</code>.</p>
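<p>An equivalent vectorized form (a sketch), avoiding the explicit helper function, is the <code>.str</code> accessor:</p>

```python
import pandas as pd

df = pd.DataFrame({'date': ['Monday,30 December,2013']})

# split each value on ',' and keep the middle part, element-wise
df['date'] = df['date'].str.split(',').str[1]
print(df['date'].tolist())  # ['30 December']
```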
| 0 | 2016-08-17T17:17:41Z | [
"python",
"date",
"pandas"
] |
Installing OpenCV Python Bindings on Mac with VirtualEnv: Can't find interpreter | 39,002,549 | <p>I've been following <a href="http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/" rel="nofollow">this</a> guide on installing OpenCV on MacOS. I'm stuck on step 8, where you cmake everything.</p>
<p>From the <code>~/opencv/build</code> directory, I run</p>
<pre><code>$ cmake
-D CMAKE_BUILD_TYPE=RELEASE
-D CMAKE_INSTALL_PREFIX=/usr/local
-D PYTHON2_PACKAGES_PATH=~/.virtualenvs/cv/lib/python2.7/site-packages
-D PYTHON2_LIBRARY=/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/bin
-D PYTHON2_INCLUDE_DIR=/usr/local/Frameworks/Python.framework/Headers -D INSTALL_C_EXAMPLES=OFF
-D INSTALL_PYTHON_EXAMPLES=ON
-D BUILD_EXAMPLES=ON
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules ..
</code></pre>
<p>If I run the cmake command from outside the venv, it fails to find the interpreter/numpy version in the virtualenv:</p>
<pre><code>-- Python 2:
-- Interpreter: /usr/local/bin/python2.7 (ver 2.7.12)
-- Libraries: /usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/bin (ver 2.7.12)
-- numpy: /Library/Python/2.7/site-packages/numpy/core/include (ver 1.11.0)
-- packages path: /Users/peter/.virtualenvs/cv/lib/python2.7/site-packages
</code></pre>
<p>If I run from inside the venv:</p>
<pre><code>-- Python 2:
-- Interpreter: /Users/peter/.virtualenvs/cv/bin/python2.7 (ver 2.7.6)
</code></pre>
<p>It finds the correct interpreter but won't find the numpy path.</p>
<p>Can I have my cake and eat it too?</p>
| 0 | 2016-08-17T17:07:43Z | 39,024,863 | <p>Well, eventually I went with the following approach: Don't bother getting cmake to use the virtualenv interpreter. You can just compile opencv and then copy the cv2.so file into your virtualenv. In my case this involved proceeding with the <a href="http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/" rel="nofollow">tutorial</a>, then using the command </p>
<pre><code>cp /usr/local/lib/python2.7/site-packages/cv2.so ~/projects/spiking-experiments/venv/lib/python2.7/site-packages/
</code></pre>
<p>To copy the file. Now I can import cv2 from inside the virtualenv.</p>
| 0 | 2016-08-18T18:10:08Z | [
"python",
"opencv",
"numpy",
"cmake"
] |
Using python for loop to average data | 39,002,558 | <p>I have a data set that contains 20 years of monthly averages in a numpy array with dimensions (1x240). I'm trying to write a function to spit out the yearly averages. I've managed to do it (I believe), using a for loop, but when I stick the exact same code into a function, it only gives me the first of what should be 20 values.</p>
<pre><code>def yearlymean_gm(gm_data):
data= np.load(gm_data)
for i in range (0,20):
average= data[i*12:i*12+12].sum()/12
print average
return average
</code></pre>
<p>gm_data is the name of the file. </p>
<p>When I simply manually enter</p>
<pre><code>data= np.load(gm_data)
for i in range (0,20):
average= data[i*12:i*12+12].sum()/12
print average
return average
</code></pre>
<p>it successfully reads out the 20 values. I'm pretty sure I just don't quite understand how for loops work in the context of functions. Any explanation (and a fix, if possible), would be awesome. </p>
<p>Secondly, I would love to have these values fed into an numpy array. I tried </p>
<pre><code>def yearlymean_gm(gm_data):
data= np.load(gm_data)
    average = np.zeros(20)
for i in range (0,20):
average[i]= data[i*12:i*12+12].sum()/12
print average
return average
</code></pre>
<p>but this gives me a long, wacky, list. Help on this would be cool too. Thanks!</p>
| 0 | 2016-08-17T17:08:21Z | 39,002,700 | <p>here's what you need...</p>
<pre><code>def yearlymean_gm(gm_data):
data= np.load(gm_data)
    average = np.zeros(20)  # note: np.zeros — "np.zeroes" does not exist
for i in range (0,20):
average[i]= data[i*12:i*12+12].sum()/12
print average
return average #don't return until the loop has completed
</code></pre>
| 1 | 2016-08-17T17:16:33Z | [
"python",
"arrays",
"numpy",
"for-loop"
] |
Using python for loop to average data | 39,002,558 | <p>I have a data set that contains 20 years of monthly averages in a numpy array with dimensions (1x240). I'm trying to write a function to spit out the yearly averages. I've managed to do it (I believe), using a for loop, but when I stick the exact same code into a function, it only gives me the first of what should be 20 values.</p>
<pre><code>def yearlymean_gm(gm_data):
data= np.load(gm_data)
for i in range (0,20):
average= data[i*12:i*12+12].sum()/12
print average
return average
</code></pre>
<p>gm_data is the name of the file. </p>
<p>When I simply manually enter</p>
<pre><code>data= np.load(gm_data)
for i in range (0,20):
average= data[i*12:i*12+12].sum()/12
print average
return average
</code></pre>
<p>it successfully reads out the 20 values. I'm pretty sure I just don't quite understand how for loops work in the context of functions. Any explanation (and a fix, if possible), would be awesome. </p>
<p>Secondly, I would love to have these values fed into an numpy array. I tried </p>
<pre><code>def yearlymean_gm(gm_data):
data= np.load(gm_data)
    average = np.zeros(20)
for i in range (0,20):
average[i]= data[i*12:i*12+12].sum()/12
print average
return average
</code></pre>
<p>but this gives me a long, wacky, list. Help on this would be cool too. Thanks!</p>
| 0 | 2016-08-17T17:08:21Z | 39,003,162 | <p>Why not avoid the "for" loop altogether?</p>
<pre><code>def yearlymean_gm(gm_data):
    data = np.load(gm_data)
    yearly = data.reshape((20, 12)).mean(axis=1)  # one row per year, averaged over its 12 months
    print yearly
    return yearly
</code></pre>
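<p>A quick sanity check with synthetic data — note the shape must be (years, months) = (20, 12) when the 240 values are stored chronologically:</p>

```python
import numpy as np

data = np.arange(240, dtype=float)            # stand-in for 20 years of monthly values
yearly = data.reshape((20, 12)).mean(axis=1)  # row i holds months of year i

print(yearly[:3])  # averages of months 0-11, 12-23, 24-35 -> 5.5, 17.5, 29.5
```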
| 4 | 2016-08-17T17:44:58Z | [
"python",
"arrays",
"numpy",
"for-loop"
] |
Creating bigrams from a string using regex | 39,002,563 | <p>I have a string like:</p>
<pre><code>"[u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT', u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT']"
</code></pre>
<p>Which is taken from an Excel file. This looks like an array, but because it is extracted from a file it is just a string.</p>
<p>What I need to do is:</p>
<p>a) Remove the <code>[ ]</code></p>
<p>b) Split the string by <code>,</code> and thus actually create a new list</p>
<p>c) Take the first string only i.e. <code>u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT'</code></p>
<p>d) Create bigrams of the resulting string as an actual string split by spaces (not as bigrams):</p>
<pre><code>LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to *extend*to~prepc_according_to+expectancy~-nsubj expectancy~-nsubj+is~parataxis is~parataxis+NUMBER~nsubj NUMBER~nsubj+NUMBER_SLOT
</code></pre>
<p>Current code snippets I have been playing around with.</p>
<pre><code>text = "[u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT', u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT']"
text = re.sub('^\[(.*)\]',"\1",text)
text = [text.split(",")[0]]
bigrams = [b for l in text for b in zip(l.split("+")[:-1], l.split("+")[1:])]
bigrams = [("+").join(bigram).encode('utf-8') for bigram in bigrams]
bigrams = (' ').join(map(str, bigrams))
bigrams = ('').join(bigrams)
</code></pre>
<p>My regex though seems to return nothing.</p>
| 0 | 2016-08-17T17:08:36Z | 39,002,851 | <p>I have solved this. The regex needs to go through twice to replace first the brackets, then get the first string, then remove quotes:</p>
<pre><code> text = "[u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT', u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT']"
text = re.sub(r'\[u|\]',"",text)
text = text.split(",")[0]
text = re.sub(r'\'',"",text)
text = text.split("+")
    bigrams = [text[i:i+2] for i in xrange(len(text)-1)]  # len-1 so the final pair is included
bigrams = [("+").join(bigram).encode('utf-8') for bigram in bigrams]
bigrams = (' ').join(map(str, bigrams))
</code></pre>
| 0 | 2016-08-17T17:25:21Z | [
"python",
"regex",
"string",
"list",
"n-gram"
] |
Creating bigrams from a string using regex | 39,002,563 | <p>I have a string like:</p>
<pre><code>"[u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT', u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT']"
</code></pre>
<p>Which is taken from an Excel file. This looks like an array, but because it is extracted from a file it is just a string.</p>
<p>What I need to do is:</p>
<p>a) Remove the <code>[ ]</code></p>
<p>b) Split the string by <code>,</code> and thus actually create a new list</p>
<p>c) Take the first string only i.e. <code>u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT'</code></p>
<p>d) Create bigrams of the resulting string as an actual string split by spaces (not as bigrams):</p>
<pre><code>LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to *extend*to~prepc_according_to+expectancy~-nsubj expectancy~-nsubj+is~parataxis is~parataxis+NUMBER~nsubj NUMBER~nsubj+NUMBER_SLOT
</code></pre>
<p>Current code snippets I have been playing around with.</p>
<pre><code>text = "[u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT', u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT']"
text = re.sub('^\[(.*)\]',"\1",text)
text = [text.split(",")[0]]
bigrams = [b for l in text for b in zip(l.split("+")[:-1], l.split("+")[1:])]
bigrams = [("+").join(bigram).encode('utf-8') for bigram in bigrams]
bigrams = (' ').join(map(str, bigrams))
bigrams = ('').join(bigrams)
</code></pre>
<p>My regex though seems to return nothing.</p>
| 0 | 2016-08-17T17:08:36Z | 39,003,797 | <p>Your string looks like a Python list of unicode strings, right?</p>
<p>You can evaluate it to get list of unicode string. A good way to do that is to use <code>ast.literal_eval</code> function from the <a href="https://docs.python.org/2/library/ast.html#ast-helpers" rel="nofollow"><strong>ast</strong></a> module.</p>
<p>Simply write:</p>
<pre><code>text = "[u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT'," \
" u'LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT']"
import ast
lines = ast.literal_eval(text)
</code></pre>
<p>The result is the list of unicode strings:</p>
<pre><code>for line in lines:
print(line)
</code></pre>
<p>You'll get:</p>
<pre><code>LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT
LOCATION_SLOT~-prep_in+*extend*to~prepc_according_to+expectancy~-nsubj+is~parataxis+NUMBER~nsubj+NUMBER_SLOT
</code></pre>
<p>To compute the <em>bigrams</em>:</p>
<pre><code>bigrams = [b for l in lines for b in zip(l.split("+")[:-1], l.split("+")[1:])]
bigrams = ["+".join(bigram).encode('utf-8') for bigram in bigrams]
bigrams = ' '.join(map(str, bigrams))
bigrams = ''.join(bigrams)
</code></pre>
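<p>Putting the same steps together on a shortened, hypothetical string of the same shape:</p>

```python
import ast

text = "[u'A~x+B~y+C~z', u'A~x+B~y+C~z']"  # stand-in for the long dependency strings
lines = ast.literal_eval(text)              # safely evaluate to a list of strings

parts = lines[0].split('+')
# adjacent pairs, rejoined with '+' and space-separated
bigrams = ' '.join('+'.join(pair) for pair in zip(parts[:-1], parts[1:]))
print(bigrams)  # A~x+B~y B~y+C~z
```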
| 1 | 2016-08-17T18:23:20Z | [
"python",
"regex",
"string",
"list",
"n-gram"
] |
Django - QuerySet - Get count for all determined child objects inside all parent objects | 39,002,617 | <p>I've got this model:</p>
<pre><code>class Order(models.Model):
#bunch of fields#
class Laptop(models.Model):
#bunch of fields#
class Camera(models.Model):
#bunch of fields#
class MusicPlayer(models.Model):
#bunch of fields#
</code></pre>
<p>The last three have a foreign key associated to the Order class. <em>I need to retrieve via a QuerySet the summed count for each child object for all orders, grouped by each one of them.</em></p>
<p>The result should look something like this:
<strong>Laptop:5
Camera:10
MusicPlayer:1</strong></p>
<p>I've tried using different django queries but all I get to is retrieving the count for each order instead the count for all orders.</p>
<p>I know this could be easily done by just querying each one separately but I was requested to do all in one query.</p>
| 0 | 2016-08-17T17:12:12Z | 39,002,832 | <p>Add related_name to your models:</p>
<pre><code>class Laptop(models.Model):
order = models.ForeignKey(Order, related_name='laptops')
class Camera(models.Model):
order = models.ForeignKey(Order, related_name='cameras')
class MusicPlayer(models.Model):
order = models.ForeignKey(Order, related_name='music_players')
</code></pre>
<p>And then you can retrieve number of related models using <a href="https://docs.djangoproject.com/ja/1.10/topics/db/aggregation/#generating-aggregates-for-each-item-in-a-queryset" rel="nofollow">annotate</a> and <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#id7" rel="nofollow">Count</a>:</p>
<pre><code>from django.db.models import Count

# distinct=True is needed when combining several Counts over different
# relations in one annotate(), otherwise the SQL joins inflate each count
orders = Order.objects.annotate(
    laptops_count=Count('laptops', distinct=True),
    cameras_count=Count('cameras', distinct=True),
    music_players_count=Count('music_players', distinct=True))

for order in orders:  # the counts live on each annotated instance, not on the queryset
    print(order.laptops_count, order.cameras_count, order.music_players_count)
</code></pre>
| 1 | 2016-08-17T17:23:44Z | [
"python",
"django",
"django-queryset"
] |
Python program to map a series of chemical reactions with unique nodes only appearing once | 39,002,631 | <p>I have written a program in python to compose a map of a chemistry an example of which is the below text in the format: reactants > products.</p>
<pre><code>H2 > H2
H2 > H2
H2 > H2*
H2 > H2*
H2 > H + H
H2 > H2^
H2 > H* + H
H2 > H + H
H2^ > H2^
H2^ > H^ + H
H2^ > H + H
H3^ > H3^
H3^ > H^ + H2
H3^ > H + H2
H > H
H > H*
H > H^
H^ > H^
H^ > H
H + H2^ > H2 + H^
H2 + H2^ > H + H3^
CF4 > CF4
CF4 > F- + CF3
</code></pre>
<p>I want my program to create nodes on the map for every species in the chemistry and draw pathways between reactants and products with one species appearing only once on the map and the map having lines to represent a reaction from each reactant in the reaction to each product in the reaction.</p>
<p>I have written the below code however this code is simply taking each reaction and drawing it on the map and it is not connecting them and I am unsure how best to proceed linking up the common species in the reactions.</p>
<pre><code>import os
import sys
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import copy
try:
data = sys.argv[1]
except IndexError:
print('Insert filename')
sys.exit(1)
def parse_line(line):
new=line.split('>')
reactants=new[0].split('+')
products=new[1].split('+')
return reactants,products
all_edges=[]
edge_labels={}
all_reactants=[]
all_products=[]
with open(data) as fi:
for line in fi.readlines():
line = line.strip()
if not line or line[0] in '#%!':
# skip blank and comment lines
continue
reactants,products = parse_line(line)
for i in np.arange(len(reactants)):
other_reactants=copy.copy(reactants)
other_reactants.remove(reactants[i])
other_reactants=', '.join(other_reactants)
for j in np.arange(len(products)):
edge=(reactants[i],products[j])
all_edges.append(edge)
edge_labels[edge]=other_reactants
gr=nx.DiGraph()
gr.add_edges_from(all_edges)
pos=nx.random_layout(gr)
nx.draw_networkx_nodes(gr,pos,node_size=2000,node_shape='o',node_color='0.75',alpha=10)
nx.draw_networkx_edges(gr,pos, width=0.05,edge_color='b')
nx.draw_networkx_labels(gr, pos,font_size=12, font_color='k', font_weight='normal', alpha=1.0, ax=None)
nx.draw_networkx_edge_labels(gr,pos,edge_labels=edge_labels, label_pos=0.01, ax=None, rotate=False)
plt.show()
</code></pre>
<p>Could anyone advise me on the best way to proceed with this, I need a function that will identify the common species in reactants and products in all reactions and create a single node for each unique species and create one line from each reactant to each product for each reaction.</p>
<p>Any help would be hugely appreciated</p>
<p>Thank you very much</p>
| 1 | 2016-08-17T17:12:59Z | 39,006,157 | <p>One of the problems you seem to be having is that you're not stripping the whitespace from around the reactants/products you parse out of the "qstrings" of reactions. This means that e.g. <code>' H2'</code> and <code>'H2 '</code> are treated as different species and so get distinct nodes in your graph.</p>
<p>You probably want to handle this with something like:</p>
<pre><code>reactants = [s.strip() for s in new[0].split('+')]
products = [s.strip() for s in new[1].split('+')]
</code></pre>
<p>in your <code>parse_line</code> function.</p>
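<p>A minimal check of the stripped parsing on one reaction line:</p>

```python
line = 'H + H2^ > H2 + H^'
lhs, rhs = line.split('>')

# strip whitespace so 'H2^' and ' H2^ ' map to the same node
reactants = [s.strip() for s in lhs.split('+')]
products = [s.strip() for s in rhs.split('+')]
print(reactants, products)  # ['H', 'H2^'] ['H2', 'H^']
```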
| 1 | 2016-08-17T20:57:48Z | [
"python",
"networkx"
] |
Getting 'dict_keys' object does not support indexing despite casting to list | 39,002,665 | <p>I am using Python 3 and despite of casting to list, I cannot seem to run my program.</p>
<p>This is the function calling:</p>
<pre><code>path = euleriancycle(edges)
</code></pre>
<p>And this is where I have used the keys method:</p>
<pre><code>def euleriancycle(e):
currentnode = list[e.keys()[0]]
path = [currentnode]
</code></pre>
<p>I tried to run it without type-casting to list and got this error. After rummaging about this site and similar queries, I followed the solutions suggested and type-cast to list but to no avail. I got the same error. </p>
<p>This is the error track:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-56-356b905111a9> in <module>()
45 edges[int(edge[0])] = [int(edge[1])]
46
---> 47 path = euleriancycle(edges)
48 print(path)
<ipython-input-56-356b905111a9> in euleriancycle(e)
1 def euleriancycle(e):
----> 2 currentnode = list[e.keys()[0]]
3 path = [currentnode]
4
5 while true:
TypeError: 'dict_keys' object does not support indexing
</code></pre>
| 0 | 2016-08-17T17:14:27Z | 39,002,721 | <p>The <code>dict_keys</code> objects, like sets, cannot be indexed.</p>
<p>Instead of this:</p>
<pre><code>list[e.keys()[0]]
</code></pre>
<p>The next closest thing would be this:</p>
<pre><code>list(e)[0]
</code></pre>
<p>Python makes no guarantee on what key from the dict will be returned, so you might want to put an ordering on it yourself. </p>
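<p>A quick runnable illustration of the difference (note that since Python 3.7, plain dicts preserve insertion order, so <code>list(e)[0]</code> is the first-inserted key):</p>

```python
e = {10: [20], 30: [40]}  # hypothetical adjacency dict like the question's `edges`

# dict_keys can be iterated but not indexed
try:
    e.keys()[0]
except TypeError:
    pass  # 'dict_keys' object does not support indexing

# materialise the keys as a list first, then index
first_key = list(e)[0]
print(first_key)  # 10
```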
| 4 | 2016-08-17T17:17:45Z | [
"python",
"python-3.x",
"dictionary",
"jupyter-notebook"
] |
Getting 'dict_keys' object does not support indexing despite casting to list | 39,002,665 | <p>I am using Python 3 and despite of casting to list, I cannot seem to run my program.</p>
<p>This is the function calling:</p>
<pre><code>path = euleriancycle(edges)
</code></pre>
<p>And this is where I have used the keys method:</p>
<pre><code>def euleriancycle(e):
currentnode = list[e.keys()[0]]
path = [currentnode]
</code></pre>
<p>I tried to run it without type-casting to list and got this error. After rummaging about this site and similar queries, I followed the solutions suggested and type-cast to list but to no avail. I got the same error. </p>
<p>This is the error track:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-56-356b905111a9> in <module>()
45 edges[int(edge[0])] = [int(edge[1])]
46
---> 47 path = euleriancycle(edges)
48 print(path)
<ipython-input-56-356b905111a9> in euleriancycle(e)
1 def euleriancycle(e):
----> 2 currentnode = list[e.keys()[0]]
3 path = [currentnode]
4
5 while true:
TypeError: 'dict_keys' object does not support indexing
</code></pre>
| 0 | 2016-08-17T17:14:27Z | 39,002,770 | <p>You are trying to index the <code>dict_keys</code> object, <em>then</em> convert that element to a list (syntax error of <code>list[...]</code> vs <code>list(...)</code> aside). You need to convert the entire object to a list first, <em>then</em> index it.</p>
<pre><code>currentnode = list[e.keys()[0]] # Wrong
currentnode = list(e.keys()[0]) # Less wrong, but still wrong
currentnode = list(e.keys())[0] # Right
</code></pre>
<p><code>list</code> takes any iterable, and the iterator returned by a dictionary is just an iterator over its keys, so you don't need to call <code>keys</code> explicitly.</p>
<pre><code>currentnode = list(e)[0]
</code></pre>
| 2 | 2016-08-17T17:20:16Z | [
"python",
"python-3.x",
"dictionary",
"jupyter-notebook"
] |
Error in python function pymongo | 39,002,751 | <p>This is a function that returns an error at the line with <code>post['date']</code>.</p>
<p>This is the error:</p>
<pre><code>in get_posts
post['date'] = post['date'].strftime("%A, %B %d %Y at %I:%M%p")
KeyError: 'date'
</code></pre>
<p>What does it mean?</p>
<p>This is the function:</p>
<pre><code>def get_posts(self, num_posts):
    cursor = self.posts.find({},{}).limit(num_posts) # Using an empty iterable for a placeholder so blog compiles before you make your changes
# XXX HW 3.2 Work here to get the posts
l = []
for post in cursor:
print post
post['date'] = post['date'].strftime("%A, %B %d %Y at %I:%M%p")
if 'tags' not in post:
post['tags'] = [] # fill it in if its not there already
if 'comments' not in post:
post['comments'] = []
l.append({'title':post['title'], 'body':post['body'], 'post_date':post['date'],
'permalink':post['permalink'],
'tags':post['tags'],
'author':post['author'],
'comments':post['comments']})
return l
</code></pre>
| 1 | 2016-08-17T17:19:11Z | 39,003,146 | <p><code>KeyError: 'date'</code> means that <code>post</code> has no key <code>'date'</code>; at least one document returned by the query is missing a <code>date</code> field.</p>
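<p>If some documents legitimately lack the field, one option (an addition, not from the original answer) is <code>dict.get</code>, which returns a default instead of raising, mirroring the existing <code>'tags' not in post</code> checks in the loop:</p>

```python
post = {'title': 'hello'}  # hypothetical document with no 'date' field

date = post.get('date')          # None instead of a KeyError
label = post.get('date', 'n/a')  # or a supplied default
print(date, label)  # None n/a
```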
| 0 | 2016-08-17T17:44:12Z | [
"python",
"mongodb",
"mongodb-query",
"pymongo",
"keyerror"
] |
Sending message with Router-Dealer Proxy using c# and python | 39,002,823 | <p>I am able to have c# (client) and python (server) talk to each other by using a simple request-reply. However, I want my web application built on c# asp.net to be stable and need more clients and servers, so I tried connecting c# and python using the Router-Dealer Proxy with python.</p>
<p>I tried running the proxy python script first, then running c# (client), then python (server). However, when I run the python (server), it gives me an "Address in use" error message.</p>
<p>Am I running them in a wrong order OR is there something wrong with the proxy python script (shown below)?</p>
<p>5602 = c# client</p>
<p>5603 = python server</p>
<pre><code>def main():
context = zmq.Context()
# Socket facing clients
frontend = context.socket(zmq.ROUTER)
frontend.bind("tcp://*:5602")
# Socket facing services
backend = context.socket(zmq.DEALER)
backend.bind("tcp://*:5603")
zmq.proxy(frontend, backend)
# We never get here...
frontend.close()
backend.close()
context.term()
if __name__ == "__main__":
main()
</code></pre>
| 0 | 2016-08-17T17:23:13Z | 39,005,000 | <p>I'm assuming your servers use bind, so the proxy should <code>connect</code> to them rather than also using <code>bind</code>.</p>
<p>Note: in zeromq the order of application startup doesn't matter so you can tell your proxy to <code>connect</code> to a server that doesn't yet exist, when the server is started the connection will be made.</p>
| 0 | 2016-08-17T19:41:47Z | [
"c#",
"python",
"proxy",
"zeromq",
"netmq"
] |
Python curses not accept curses.KEY_UP as input | 39,002,864 | <p>I need your help in understanding the mistake in my new menu based python program.</p>
<p>I have created a menu based python using the curses library. The program works fine with main menu options by accepting all the keys including special keys as input when getting user input via <code>getch()</code> but with a sub-menu list it fails to accept the special key as input.</p>
<pre><code> # Increment or Decrement
elif c == curses.KEY_DOWN: # down arrow
if self.option < len(self.submenu):
self.option += 1
else: self.option = 0
elif c == curses.KEY_UP: # up arrow
if self.option > 0:
self.option -= 1
else: self.option = len(self.submenu)
</code></pre>
<p>So above the is condition I am using to select the option but no input is sent to curses when I am using the UP/DOWN arrow keys </p>
<p>I am sharing my full code below</p>
<pre><code>import curses, os, traceback, json
file1 = json.loads(open('/etc/ansible/org/json/tag_definition.json').read())
submenu = list(file1.keys())
cfg_dict = {'InstanceName': 'NONE',
'Environment': 'NONE',
'InstanceType':'t2.micro',
'SystemOwner':'1',
'LifeCycle':'NONE',
'DeptName':'NONE',
'Org':'NONE'}
class CursedMenu(object):
'''A class which abstracts the horrors of building a curses-based menu system'''
def __init__(self):
'''Initialization'''
self.screen = curses.initscr()
curses.noecho()
curses.cbreak()
curses.start_color()
self.screen.keypad(1)
# Highlighted and Normal line definitions
curses.init_pair(1, curses.COLOR_BLACK, curses.COLOR_WHITE)
self.highlighted = curses.color_pair(1)
self.normal = curses.A_NORMAL
def show(self, options, title="Title", subtitle="Subtitle"):
'''Draws a menu with the given parameters'''
self.set_options(options)
self.title = title.upper()
self.subtitle = subtitle.upper()
self.selected = 0
self.draw_menu()
def set_options(self, options):
'''Validates that the last option is "Exit"'''
if options[-1] is not 'Exit':
options.append('Exit')
self.options = options
def set_submenu(self, submenu):
'''Validates that the last option is "Exit"'''
if submenu[-1] is not 'Exit':
submenu.append('Exit')
self.submenu = submenu
def draw_dict(self):
self.screen.addstr(8, 35, " "*43, curses.A_BOLD)
self.screen.addstr(10, 35," "*43, curses.A_BOLD)
self.screen.addstr(12, 35," "*43, curses.A_BOLD)
self.screen.addstr(14, 35," "*43, curses.A_BOLD)
self.screen.addstr(16, 35," "*43, curses.A_BOLD)
self.screen.addstr(18, 35," "*43, curses.A_BOLD)
self.screen.addstr(20, 35," "*43, curses.A_BOLD)
self.screen.addstr(8, 35, cfg_dict['InstanceName'], curses.A_STANDOUT)
self.screen.addstr(10, 35,cfg_dict['Environment'], curses.A_STANDOUT)
self.screen.addstr(12, 35,cfg_dict['InstanceType'], curses.A_STANDOUT)
self.screen.addstr(14, 35,cfg_dict['SystemOwner'], curses.A_STANDOUT)
self.screen.addstr(16, 35,cfg_dict['LifeCycle'], curses.A_STANDOUT)
self.screen.addstr(18, 35,cfg_dict['DeptName'], curses.A_STANDOUT)
self.screen.addstr(20, 35,cfg_dict['Org'], curses.A_STANDOUT)
self.screen.refresh()
def draw_menu(self):
'''Actually draws the menu and handles branching'''
request = ""
try:
while request is not "Exit":
self.draw()
request = self.get_user_input()
self.handle_request(request)
self.__exit__()
# Also calls __exit__, but adds traceback after
except Exception as exception:
self.__exit__()
traceback.print_exc()
def draw(self):
'''Draw the menu and lines'''
#self.screen.border(0)
self.screen.subwin(40, 80, 0, 0).box()
self.screen.addstr(2,30, self.title, curses.A_STANDOUT|curses.A_BOLD) # Title for this menu
self.screen.hline(3, 1, curses.ACS_HLINE, 78)
self.screen.addstr(4,2, self.subtitle, curses.A_BOLD) #Subtitle for this menu
# Display all the menu items, showing the 'pos' item highlighted
left = 2
for index in range(len(self.options)):
menu_name = len(self.options[index])
textstyle = self.normal
if index == self.selected:
textstyle = self.highlighted
self.screen.addstr(5, left, "%d.%s" % (index+1, self.options[index]), textstyle)
left = left + menu_name + 3
self.screen.addstr(8, 4, " Instance Name:", curses.A_BOLD)
self.screen.addstr(10, 4," Environment:", curses.A_BOLD)
self.screen.addstr(12, 4," Instance Type:", curses.A_BOLD)
self.screen.addstr(14, 4," SystemOwner:", curses.A_BOLD)
self.screen.addstr(16, 4," LifeCycle:", curses.A_BOLD)
self.screen.addstr(18, 4," Department Name:", curses.A_BOLD)
self.screen.addstr(20, 4," Org:", curses.A_BOLD)
self.draw_dict()
self.screen.refresh()
def get_user_input(self):
'''Gets the user's input and acts appropriately'''
user_in = self.screen.getch() # Gets user input
'''Enter and Exit Keys are special cases'''
if user_in == 10:
return self.options[self.selected]
if user_in == 27:
return self.options[-1]
if user_in == (curses.KEY_END, ord('!')):
return self.options[-1]
# This is a number; check to see if we can set it
if user_in >= ord('1') and user_in <= ord(str(min(9,len(self.options)+1))):
self.selected = user_in - ord('0') - 1 # convert keypress back to a number, then subtract 1 to get index
return
# Increment or Decrement
if user_in == curses.KEY_LEFT: # left arrow
self.selected -=1
if user_in == curses.KEY_RIGHT: # right arrow
self.selected +=1
self.selected = self.selected % len(self.options)
return
def handle_request(self, request):
'''This is where you do things with the request'''
if request is "Org":
self.org_func()
if request is None: return
def org_func(self):
'''Actually draws the submenu and handles branching'''
c = None
self.option = 0
self.set_submenu(submenu)
height = len(self.submenu)
while c != 10:
self.s = curses.newwin(height+2, 20, 6, 14)
self.s.box()
for index in range(len(self.submenu)):
textstyle = self.normal
if index == self.option:
textstyle = self.highlighted
self.s.addstr(index+1,1, "%d-%s" % (index+1, self.submenu[index]), textstyle)
self.s.refresh()
c = self.s.getch() # Gets user input
if c == ord('k'):
d = self.submenu[self.option]
cfg_dict['Org'] = file1[d]['Org']
# This is a number; check to see if we can set it
if c >= ord('1') and c <= ord(str(len(self.submenu)+1)):
self.option = c - ord('0') - 1 # convert keypress back to a number, then subtract 1 to get index
# Increment or Decrement
elif c == curses.KEY_DOWN: # down arrow
if self.option < len(self.submenu):
self.option += 1
else: self.option = 0
elif c == curses.KEY_UP: # up arrow
if self.option > 0:
self.option -= 1
else: self.option = len(self.submenu)
return self.option
def __exit__(self):
curses.endwin()
os.system('clear')
'''demo'''
cm = CursedMenu()
cm.show(['Instance-ID','Org','Tag'], title='Instance Launch', subtitle='Options')
</code></pre>
<p>It works perfectly fine when I use alphabetic and numeric keys as input, but not with the special keys. I have used the code below and it was working:</p>
<pre><code> elif c == ord('k'): # down arrow
if self.option < len(self.submenu):
self.option += 1
else: self.option = 0
elif c == ord('i'): # up arrow
if self.option > 0:
self.option -= 1
else: self.option = len(self.submenu)
</code></pre>
<p>Please help me understand why the code is not accepting the special keys.</p>
<p>Below is the input json I have used</p>
<pre><code>{
"Dept1": {
"Instance Name": "instance1",
"Environment": "environment1",
"SystemOwner": "Owner1",
"LifeCycle": "Lcy1",
"DeptName": "Check1",
"Org": "CSO"
}
}
</code></pre>
| 0 | 2016-08-17T17:26:03Z | 39,059,220 | <p>When you did </p>
<pre><code>self.s = curses.newwin(height+2, 20, 6, 14)
</code></pre>
<p>you omitted setting properties for the new window, e.g.,</p>
<pre><code>self.s.keypad(1)
</code></pre>
| 2 | 2016-08-20T22:25:22Z | [
"python",
"menuitem",
"curses"
] |
 Too many open files in Windows when writing multiple HDF5 files | 39,002,942 | <p>My question is <strong>how to definitively close HDF5 files after writing them</strong>?</p>
<p>I am trying to save data to HDF5 files - there are around 200 folders and each folder contains some data for each day for this year.</p>
<p>When I retrieve and save data using pandas HDFStore with the following code in the IPython console, the function stops automatically after a while (no error message).</p>
<pre><code>import pandas as pd
data = ... # in format as pd.DataFrame
# Method 1
data.to_hdf('D:/file_001/2016-01-01.h5', 'type_1')
# Method 2
with pd.HDFStore('D:/file_001/2016-01-01.h5', 'a') as hf:
hf['type_1'] = data
</code></pre>
<p>When I tried the same script to download data again, it says:</p>
<blockquote>
<p>[Errno 24] Too many open files: ...</p>
</blockquote>
<p>There are some posts suggesting using <strong>ulimit -n 1200</strong> for example in Linux to overcome the problem, but unfortunately I'm using Windows.</p>
<p>Besides, I think I already close the files explicitly using a <em>with</em> block, especially in <em>Method 2</em>. How come IPython still counts these files as open?</p>
<p>My loop is sth like below:</p>
<pre><code>univ = pd.read_excel(univ_file, univ_tab)
for dt in pd.DatetimeIndex(start=start_date, end=end_date, freq='B'):
for t in univ:
data = download_data(t, dt)
with pd.HDFStore(data_file, 'a') as hf:
# Use pd.DataFrame([np.nan]) instead of pd.DataFrame() to save space
hf[typ] = EMPTY_DF if data.shape[0] == 0 else data
</code></pre>
<p>Thanks in advance!</p>
| 1 | 2016-08-17T17:31:07Z | 39,005,921 | <p>You can check / list all open files belonging to the Python process on Windows using the <code>psutil</code> module.</p>
<p>Demo:</p>
<pre><code>In [52]: [proc.open_files() for proc in psutil.process_iter() if proc.pid == os.getpid()]
Out[52]:
[[popenfile(path='C:\\Windows\\System32\\en-US\\KernelBase.dll.mui', fd=-1),
popenfile(path='C:\\Users\\Max\\.ipython\\profile_default\\history.sqlite-journal', fd=-1),
popenfile(path='C:\\Users\\Max\\.ipython\\profile_default\\history.sqlite', fd=-1)]]
</code></pre>
<p>a file handler will be closed as soon as we are done with the following block:</p>
<pre><code>In [53]: with pd.HDFStore('d:/temp/1.h5', 'a') as hf:
....: hf['df2'] = df
....:
</code></pre>
<p>prove:</p>
<pre><code>In [54]: [proc.open_files() for proc in psutil.process_iter() if proc.pid == os.getpid()]
Out[54]:
[[popenfile(path='C:\\Windows\\System32\\en-US\\KernelBase.dll.mui', fd=-1),
popenfile(path='C:\\Users\\Max\\.ipython\\profile_default\\history.sqlite', fd=-1)]]
</code></pre>
<p>check whether <code>psutil</code> works properly at all (pay attention at the <code>D:\\temp\\aaa</code>):</p>
<pre><code>In [55]: fd = open('d:/temp/aaa', 'w')
In [56]: [proc.open_files() for proc in psutil.process_iter() if proc.pid == os.getpid()]
Out[56]:
[[popenfile(path='C:\\Windows\\System32\\en-US\\KernelBase.dll.mui', fd=-1),
popenfile(path='D:\\temp\\aaa', fd=-1),
popenfile(path='C:\\Users\\Max\\.ipython\\profile_default\\history.sqlite', fd=-1)]]
In [57]: fd.close()
In [58]: [proc.open_files() for proc in psutil.process_iter() if proc.pid == os.getpid()]
Out[58]:
[[popenfile(path='C:\\Windows\\System32\\en-US\\KernelBase.dll.mui', fd=-1),
popenfile(path='C:\\Users\\Max\\.ipython\\profile_default\\history.sqlite', fd=-1)]]
</code></pre>
<p>So using this technique you can debug your code and find the place where the number of open files goes crazy in your case</p>
| 1 | 2016-08-17T20:41:26Z | [
"python",
"pandas",
"hdf5",
"hdfstore"
] |
How to swap the rows of two tables in python (astropy.table)? | 39,003,083 | <p>I am trying to swap the rows of two python tables, e.g</p>
<pre><code> In [15]: print t2
x y z
---- ---- ----
20.0 30.0 40.0
50.0 60.0 70.0
80.0 90.0 99.0
In [16]: print t1
a b c
--- --- ---
1.0 2.0 3.0
3.0 4.0 5.0
6.0 7.0 8.0
</code></pre>
<p>I want it to be </p>
<pre><code> In [15]: print t2
x y z
---- ---- ----
20.0 30.0 40.0
3.0 4.0 5.0
80.0 90.0 99.0
In [16]: print t1
a b c
--- --- ---
1.0 2.0 3.0
50.0 60.0 70.0
6.0 7.0 8.0
</code></pre>
<p>I tried using the examples given here <a href="http://stackoverflow.com/questions/14836228/is-there-a-standardized-method-to-swap-two-variables-in-python">Is there a standardized method to swap two variables in Python?</a>, but none of them are working probably because of the row type: </p>
<pre><code> In [19]: type(t1[1])
Out[19]: astropy.table.row.Row
</code></pre>
<p>e.g.</p>
<pre><code> In [20]: t1[1],t2[1] = t2[1], t1[1]
In [21]: print t1
a b c
---- ---- ----
1.0 2.0 3.0
50.0 60.0 70.0
6.0 7.0 8.0
In [22]: print t2
x y z
---- ---- ----
20.0 30.0 40.0
50.0 60.0 70.0
80.0 90.0 99.0
</code></pre>
<p>i.e. t2 does not change. Is there a fix to it?
I also tried using the first solution given in <a href="http://stackoverflow.com/questions/13167300/python-simple-swap-function">Python Simple Swap Function</a> where I changed the row to a list and swapped them but it gives the assertion error. Can someone help please?</p>
| -1 | 2016-08-17T17:40:02Z | 39,003,408 | <p>Looks like your code has changed <code>t1[1]</code> and then the new value has overwritten <code>t2[1]</code>. You need to make an explicit copy of t1, change the copy, and then alter t2 based on the original t1. If you have to do this in multiple steps, it might just be safer to make a copy of each and then overwrite the copied data from the original each time:</p>
<pre><code>import copy
t1_copy = copy.deepcopy(t1)
t2_copy = copy.deepcopy(t2)
t1[1], t2[1] = t2_copy[1], t1_copy[1]
</code></pre>
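<p>The same failure mode can be reproduced with plain nested lists when the two assignments happen in place, one after the other (a sketch of the mechanics only, not astropy-specific):</p>

```python
import copy

t1 = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]
t2 = [[20.0, 30.0, 40.0], [50.0, 60.0, 70.0], [80.0, 90.0, 99.0]]

# naive sequential in-place swap: the second step reads the already-modified row
t1[1][:] = t2[1]
t2[1][:] = t1[1]  # t1[1] already holds t2's old values, so t2 appears unchanged
print(t2[1])      # [50.0, 60.0, 70.0] -- no swap happened

# correct: snapshot one side before overwriting it
t1 = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]
t2 = [[20.0, 30.0, 40.0], [50.0, 60.0, 70.0], [80.0, 90.0, 99.0]]
row = copy.deepcopy(t1[1])
t1[1][:] = t2[1]
t2[1][:] = row
print(t1[1], t2[1])  # [50.0, 60.0, 70.0] [3.0, 4.0, 5.0]
```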
| 0 | 2016-08-17T18:00:01Z | [
"python",
"swap",
"astropy"
] |
Better way of filtering in Django and seeing if the object exists | 39,003,093 | <p>A lot of times I find myself filtering for an object and returning <code>None</code> if it can't be found. However, the method I do this seems really inefficient (in terms of lines of code)</p>
<p>When I filter for an object I normally do something like this:</p>
<pre><code>person = Person.objects.filter(id=id)
if person:
person = Person.objects.get(id=id)
else:
person = None
</code></pre>
<p>Is there a better way of doing this?</p>
<p><strong>Edit</strong>
I have made edits to clarify confusion on my end.
The filter query should always return 1 object, if it exists.</p>
| -2 | 2016-08-17T17:40:41Z | 39,003,465 | <p>Your <code>if/else</code> is unusual in that you assign <code>person</code> twice. I can't understand why. You have two options I think.</p>
<p>First, you could reduce <code>if/else</code> to just <code>if</code> like this:</p>
<pre><code>person = Person.objects.filter(name=name)
if not person:
person = None
</code></pre>
<p>Or with a <a href="http://stackoverflow.com/questions/4978738/is-there-a-python-equivalent-of-the-c-sharp-null-coalescing-operator">coalescing operator</a> to make it very terse:</p>
<pre><code>person = Person.objects.filter(name=name) or None
</code></pre>
<p>Which will return <code>person = None</code> if <code>Person.objects.filter(name=name)</code> is falsy.</p>
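<p>The <code>or None</code> idiom relies only on truthiness, so its behaviour can be demonstrated with any empty container (an empty queryset is falsy in the same way). As a sketch:</p>

```python
def coalesce(result):
    # return the result if it is truthy, otherwise None
    return result or None

print(coalesce([]))           # None -- an empty container is falsy
print(coalesce([{'id': 1}]))  # [{'id': 1}]
```

<p>Django querysets also provide <code>.first()</code>, which returns <code>None</code> when there is no match, so <code>Person.objects.filter(name=name).first()</code> expresses the same intent in one call.</p>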
| 1 | 2016-08-17T18:03:27Z | [
"python",
"django"
] |
Better way of filtering in Django and seeing if the object exists | 39,003,093 | <p>A lot of times I find myself filtering for an object and returning <code>None</code> if it can't be found. However, the method I do this seems really inefficient (in terms of lines of code)</p>
<p>When I filter for an object I normally do something like this:</p>
<pre><code>person = Person.objects.filter(id=id)
if person:
person = Person.objects.get(id=id)
else:
person = None
</code></pre>
<p>Is there a better way of doing this?</p>
<p><strong>Edit</strong>
I have made edits to clarify confusion on my end.
The filter query should always return 1 object, if it exists.</p>
| -2 | 2016-08-17T17:40:41Z | 39,003,528 | <p><code>filter()</code> returns a queryset (which may be empty), so if you want to replace an empty result with <code>None</code>:</p>
<pre><code>persons = Person.objects.filter(name=name)
if not persons:
    person = None
else:
    # single person
    person = persons[0]  # but there could be more than one
<p>If you want single Person</p>
<pre><code>try:
person = Person.objects.get(name=name)
except Person.MultipleObjectsReturned:
# do something if there is more Persons with that name
person = Person.objects.first() # for example return first person with that name
except Person.DoesNotExist:
person = None # set person None
</code></pre>
| 1 | 2016-08-17T18:07:21Z | [
"python",
"django"
] |
Better way of filtering in Django and seeing if the object exists | 39,003,093 | <p>A lot of times I find myself filtering for an object and returning <code>None</code> if it can't be found. However, the method I do this seems really inefficient (in terms of lines of code)</p>
<p>When I filter for an object I normally do something like this:</p>
<pre><code>person = Person.objects.filter(id=id)
if person:
person = Person.objects.get(id=id)
else:
person = None
</code></pre>
<p>Is there a better way of doing this?</p>
<p><strong>Edit</strong>
I have made edits to clarify confusion on my end.
The filter query should always return 1 object, if it exists.</p>
| -2 | 2016-08-17T17:40:41Z | 39,003,754 | <p>Just use <code>.get()</code> if you want to get one person or return None.</p>
<pre><code>try:
person = Person.objects.get(name=name)
except (Person.DoesNotExist, Person.MultipleObjectsReturned) as e:
person = None
</code></pre>
| 1 | 2016-08-17T18:20:15Z | [
"python",
"django"
] |
Better way of filtering in Django and seeing if the object exists | 39,003,093 | <p>A lot of times I find myself filtering for an object and returning <code>None</code> if it can't be found. However, the method I do this seems really inefficient (in terms of lines of code)</p>
<p>When I filter for an object I normally do something like this:</p>
<pre><code>person = Person.objects.filter(id=id)
if person:
person = Person.objects.get(id=id)
else:
person = None
</code></pre>
<p>Is there a better way of doing this?</p>
<p><strong>Edit</strong>
I have made edits to clarify confusion on my end.
The filter query should always return 1 object, if it exists.</p>
| -2 | 2016-08-17T17:40:41Z | 39,004,279 | <p>You can use <code>exists()</code></p>
<p>From the <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#django.db.models.query.QuerySet.exists" rel="nofollow" title="docs">docs</a>:</p>
<blockquote>
<p>If you only want to determine if at least one result exists (and donât need the actual objects), itâs more efficient to use exists().</p>
</blockquote>
<pre><code>entry = Entry.objects.get(pk=123)
if some_queryset.filter(pk=entry.pk).exists():
print("Entry contained in queryset")
else:
return None
</code></pre>
<p>You can shorten this a little if lines of code is a concern. However:</p>
<blockquote>
<p>Additionally, if a some_queryset has not yet been evaluated, but you
know that it will be at some point, then using some_queryset.exists()
will do more overall work (one query for the existence check plus an
extra one to later retrieve the results) than simply using
bool(some_queryset), which retrieves the results and then checks if
any were returned.</p>
</blockquote>
| 1 | 2016-08-17T18:54:08Z | [
"python",
"django"
] |
Better way of filtering in Django and seeing if the object exists | 39,003,093 | <p>A lot of times I find myself filtering for an object and returning <code>None</code> if it can't be found. However, the method I do this seems really inefficient (in terms of lines of code)</p>
<p>When I filter for an object I normally do something like this:</p>
<pre><code>person = Person.objects.filter(id=id)
if person:
person = Person.objects.get(id=id)
else:
person = None
</code></pre>
<p>Is there a better way of doing this?</p>
<p><strong>Edit</strong>
I have made edits to clarify confusion on my end.
The filter query should always return 1 object, if it exists.</p>
| -2 | 2016-08-17T17:40:41Z | 39,047,988 | <p>You cannot do it like this:</p>
<pre><code>person = Person.objects.get(name=name)
</code></pre>
<p>will raise an exception when no match (or more than one) is found.</p>
<p>What you can do is:</p>
<pre><code>try:
person = Person.objects.get(name=name)
except Person.MultipleObjectsReturned:
person = Person.objects.first()
except Person.DoesNotExist:
person = None
</code></pre>
<p>But Best thing here is to use:</p>
<pre><code>some_queryset.filter(pk=entry.pk).exists()
</code></pre>
| 1 | 2016-08-19T21:34:59Z | [
"python",
"django"
] |
Python: Access camera WITHOUT OpenCV | 39,003,106 | <p>Comrades,</p>
<p>I'd like to capture images from the laptop camera in Python. Currently all signs point to OpenCV. Problem is OpenCV is a nightmare to install, and it's a nightmare that reoccurs every time you reinstall your code on a new system. </p>
<p>Is there a more lightweight way to capture camera data in Python? I'm looking for something to the effect of:</p>
<pre><code>$ pip install something
$ python
>>> import something
>>> im = something.get_camera_image()
</code></pre>
<p>I'm on Mac, but solutions on any platform are welcome.</p>
| 0 | 2016-08-17T17:41:23Z | 39,003,827 | <p>I've done this before using <a href="http://www.pygame.org/download.shtml" rel="nofollow">pygame</a>. But I can't seem to find the script that I used... It seems there is a tutorial <a href="http://www.pygame.org/docs/tut/camera/CameraIntro.html" rel="nofollow">here</a> which uses the native camera module and is not dependent on OpenCV.</p>
<p>Try this:</p>
<pre><code>import pygame
import pygame.camera
from pygame.locals import *
pygame.init()
pygame.camera.init()
cam = pygame.camera.Camera("/path/to/camera",(640,480))
cam.start()
image = cam.get_image()
</code></pre>
<p>If you don't know the path to the camera you can also get a list of all available which should include you laptop webcam:</p>
<pre><code>camlist = pygame.camera.list_cameras()
if camlist:
cam = pygame.camera.Camera(camlist[0],(640,480))
</code></pre>
| 1 | 2016-08-17T18:24:41Z | [
"python",
"osx",
"camera"
] |
A more efficient solution for solving this matrix puzzle? | 39,003,147 | <p>Find the 3x3 matrix, M, whose rows, columns and diagonals add up to 15. Condition: you must use every number from 1-9.</p>
<p>I'm not very clever, so I just tried this brute force method:</p>
<pre><code>def solve_999():
for a in range(1, 10):
for b in range(1, 10):
for c in range(1, 10):
for d in range(1, 10):
for e in range(1, 10):
for f in range(1, 10):
for g in range(1, 10):
for h in range(1, 10):
for i in range(1, 10):
if (a+b+c == d+e+f == g+h+i == a+d+g == b+e+h == c+f+i == a+e+i == c+e+g == 15):
if check_unique([a, b, c, d, e, f, g, h, i]):
print(a, b, c)
print(d, e, f)
print(g, h, i)
return
def check_unique(L):
d = {1:0, 2:0, 3:0, 4:0, 5:0, 6:0, 7:0, 8:0, 9:0}
for letter in L:
d[letter] += 1
if d[letter] > 1:
return False
return True
</code></pre>
<p>It works, but it isn't very efficient. Can anyone help me find a more efficient solution?</p>
| 1 | 2016-08-17T17:44:15Z | 39,003,495 | <p>The biggest speedup you will get will be from generating your squares so that they are already unique. The easiest way to do this is <code>itertools.permutations</code>. This will reduce you to checking 9! == 362880 boards rather than 387420489 boards (about 1/1000th the work) and you don't have to check to make sure they are unique.</p>
<pre><code>from itertools import permutations
t1=time()
for board in permutations(list(range(1,10)),9):
if sum(board[0:3]) == 15:
if sum(board[3:6])== 15:
if board[0]+board[3]+board[6]==15:
if board[1]+board[4]+board[7]==15:
if board[0]+board[4]+board[8]==15:
if board[2]+board[4]+board[6]==15:
print(board[0:3])
print(board[3:6])
print(board[6:9])
break
</code></pre>
<p>One major problem with this solution is that we are still checking a ton more cases than we need to. Something to notice is that for each (a,b,d,e) there exists a maximum of 1 value for each of c, f, g, h, and i. This means that we only have to check through 9P4 == 3024 possibilities. The downside of this is that we lose the guarantee that all the values are unique. But even if we add these checks, we still see another 10x speedup over our simpler code.</p>
<pre><code>def solve_999():
for a,b,d,e in permutations(range(1,10),4):
c = 15 - (a + b)
f = 15 - (d + e)
g = 15 - (a + d)
h = 15 - (b + e)
i = 15 - (a + e)
if 15 == g+h+i == c+f+i == c+e+g:
if len(set([a,b,c,d,e,f,g,h,i]))==9:
print(a, b, c)
print(d, e, f)
print(g, h, i)
print()
break
</code></pre>
<p>To time the code:</p>
<pre><code>from timeit import timeit
from itertools import permutations
def solve_999():
for a,b,d,e in permutations(range(1,10),4):
c = 15 - (a + b)
f = 15 - (d + e)
g = 15 - (a + d)
h = 15 - (b + e)
i = 15 - (a + e)
if 15 == g+h+i == c+f+i == c+e+g:
if len(set([a,b,c,d,e,f,g,h,i]))==9:
return
print(timeit(solve_999, number=10000))
</code></pre>
<p>yields a time of 2.9 seconds/10000 tries == .00029 sec/try</p>
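<p>Putting the two ideas together (permutations for distinctness, deriving the dependent cells) gives a self-contained script; folding the range check into the final comparison also rejects out-of-range values, which <code>len(set(...)) == 9</code> alone would not:</p>

```python
from itertools import permutations

def solve_magic_square(total=15):
    """Return a 3x3 magic square using each of 1-9 once, or None."""
    for a, b, d, e in permutations(range(1, 10), 4):
        c = total - a - b  # completes row 1
        f = total - d - e  # completes row 2
        g = total - a - d  # completes column 1
        h = total - b - e  # completes column 2
        i = total - a - e  # completes the main diagonal
        if (g + h + i == c + f + i == c + e + g == total
                and sorted((a, b, c, d, e, f, g, h, i)) == list(range(1, 10))):
            return [[a, b, c], [d, e, f], [g, h, i]]
    return None

print(solve_magic_square())  # [[2, 7, 6], [9, 5, 1], [4, 3, 8]]
```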
| 1 | 2016-08-17T18:05:06Z | [
"python",
"matrix",
"puzzle"
] |
A more efficient solution for solving this matrix puzzle? | 39,003,147 | <p>Find the 3x3 matrix, M, whose rows, columns and diagonals add up to 15. Condition: you must use every number from 1-9.</p>
<p>I'm not very clever, so I just tried this brute force method:</p>
<pre><code>def solve_999():
for a in range(1, 10):
for b in range(1, 10):
for c in range(1, 10):
for d in range(1, 10):
for e in range(1, 10):
for f in range(1, 10):
for g in range(1, 10):
for h in range(1, 10):
for i in range(1, 10):
if (a+b+c == d+e+f == g+h+i == a+d+g == b+e+h == c+f+i == a+e+i == c+e+g == 15):
if check_unique([a, b, c, d, e, f, g, h, i]):
print(a, b, c)
print(d, e, f)
print(g, h, i)
return
def check_unique(L):
d = {1:0, 2:0, 3:0, 4:0, 5:0, 6:0, 7:0, 8:0, 9:0}
for letter in L:
d[letter] += 1
if d[letter] > 1:
return False
return True
</code></pre>
<p>It works, but it isn't very efficient. Can anyone help me find a more efficient solution?</p>
| 1 | 2016-08-17T17:44:15Z | 39,003,505 | <p>You can use the itertools.permutations library to iterate over all possibilities checking for your required sums.</p>
<p>While your code works for a fixed size of input array (in this case 3x3 = 9), I have generalized it to check for any given sum over any provided range, through the use of some Python-assisted slicing. Note that the length of the input needs to be a perfect square for this to work.</p>
<pre><code>from itertools import permutations
from math import sqrt
def check_magic_square(rangeallowed, sumMS):
    sidelength = int(sqrt(len(rangeallowed)))
    for i in permutations(rangeallowed):
        # initialize all conditions to be satisfied
        MSConditionsSatisfied = True
        # Check for forward diagonal sum
        if (MSConditionsSatisfied and sum(i[::sidelength+1]) != sumMS): MSConditionsSatisfied = False
        # Check for reverse diagonal sum
        if (MSConditionsSatisfied and sum(i[sidelength-1:-1:sidelength-1]) != sumMS): MSConditionsSatisfied = False
        for j in range(sidelength):
            # Check for each column
            if (MSConditionsSatisfied and sum(i[j::sidelength]) != sumMS): MSConditionsSatisfied = False
            # Check for each row
            if (MSConditionsSatisfied and sum(i[j*sidelength:(j+1)*sidelength]) != sumMS): MSConditionsSatisfied = False
        # if all conditions are satisfied, return the permutation reshaped to a square matrix
        if MSConditionsSatisfied: return [[i[k*sidelength + j] for j in range(sidelength)] for k in range(sidelength)]
    return False

if __name__ == "__main__":
    print(check_magic_square(range(1,10), 15))
</code></pre>
<p>returns</p>
<blockquote>
<p>[[2, 7, 6], [9, 5, 1], [4, 3, 8]]</p>
</blockquote>
<p>Time taken on my i5 machine 0.1 seconds</p>
| 0 | 2016-08-17T18:05:46Z | [
"python",
"matrix",
"puzzle"
] |
A more efficient solution for solving this matrix puzzle? | 39,003,147 | <p>Find the 3x3 matrix, M, whose rows, columns and diagonals add up to 15. Condition: you must use every number from 1-9.</p>
<p>I'm not very clever, so I just tried this brute force method:</p>
<pre><code>def solve_999():
    for a in range(1, 10):
        for b in range(1, 10):
            for c in range(1, 10):
                for d in range(1, 10):
                    for e in range(1, 10):
                        for f in range(1, 10):
                            for g in range(1, 10):
                                for h in range(1, 10):
                                    for i in range(1, 10):
                                        if (a+b+c == d+e+f == g+h+i == a+d+g == b+e+h == c+f+i == a+e+i == c+e+g == 15):
                                            if check_unique([a, b, c, d, e, f, g, h, i]):
                                                print(a, b, c)
                                                print(d, e, f)
                                                print(g, h, i)
                                                return

def check_unique(L):
    d = {1:0, 2:0, 3:0, 4:0, 5:0, 6:0, 7:0, 8:0, 9:0}
    for letter in L:
        d[letter] += 1
        if d[letter] > 1:
            return False
    return True
</code></pre>
<p>It works, but it isn't very efficient. Can anyone help me find a more efficient solution?</p>
| 1 | 2016-08-17T17:44:15Z | 39,003,794 | <p>Bit shorter than the others:</p>
<pre><code>>>> from itertools import permutations
>>> for a, b, c, d, e, f, g, h, i in permutations(range(1, 10)):
        if 15 == a+b+c == d+e+f == a+d+g == b+e+h == a+e+i == c+e+g:
            print(a, b, c)
            print(d, e, f)
            print(g, h, i)
            break
2 7 6
9 5 1
4 3 8
</code></pre>
<p>The search takes about 0.009 seconds on my PC. Measured as follows, it takes about 9 seconds for doing it 1000 times:</p>
<pre><code>from itertools import permutations
from timeit import timeit
def solve_999():
    for a, b, c, d, e, f, g, h, i in permutations(range(1, 10)):
        if 15 == a+b+c == d+e+f == a+d+g == b+e+h == a+e+i == c+e+g:
            return
print(timeit(solve_999, number=1000))
</code></pre>
| 1 | 2016-08-17T18:23:12Z | [
"python",
"matrix",
"puzzle"
] |
A more efficient solution for solving this matrix puzzle? | 39,003,147 | <p>Find the 3x3 matrix, M, whose rows, columns and diagonals add up to 15. Condition: you must use every number from 1-9.</p>
<p>I'm not very clever, so I just tried this brute force method:</p>
<pre><code>def solve_999():
    for a in range(1, 10):
        for b in range(1, 10):
            for c in range(1, 10):
                for d in range(1, 10):
                    for e in range(1, 10):
                        for f in range(1, 10):
                            for g in range(1, 10):
                                for h in range(1, 10):
                                    for i in range(1, 10):
                                        if (a+b+c == d+e+f == g+h+i == a+d+g == b+e+h == c+f+i == a+e+i == c+e+g == 15):
                                            if check_unique([a, b, c, d, e, f, g, h, i]):
                                                print(a, b, c)
                                                print(d, e, f)
                                                print(g, h, i)
                                                return

def check_unique(L):
    d = {1:0, 2:0, 3:0, 4:0, 5:0, 6:0, 7:0, 8:0, 9:0}
    for letter in L:
        d[letter] += 1
        if d[letter] > 1:
            return False
    return True
</code></pre>
<p>It works, but it isn't very efficient. Can anyone help me find a more efficient solution?</p>
| 1 | 2016-08-17T17:44:15Z | 39,004,079 | <p>It's easy. Just think about how many ways there are to build 15 as a sum of three numbers.</p>
<pre><code>1+5+9, 1+6+8, 2+4+9, 2+5+8, 2+6+7, 3+4+8, 3+5+7, 4+5+6
</code></pre>
<p>Each sum appears in your square because there are only 8 ways.
5 must be in the center because it appears 4 times.
2,4,6,8 must be in the corners because they appear 3 times.</p>
<p>Go on and you find the solution just with thinking.</p>
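<p>As a quick sanity check of this counting argument, here is a small sketch (not part of the original answer) that enumerates the three-number combinations of 1-9 summing to 15 and tallies how often each digit occurs:</p>

```python
from itertools import combinations

# All ways to pick three distinct digits from 1-9 that sum to 15.
triples = [t for t in combinations(range(1, 10), 3) if sum(t) == 15]
print(len(triples))  # 8, matching the list above

# Count how many of the 8 sums each digit takes part in.
counts = {d: sum(d in t for t in triples) for d in range(1, 10)}
print(counts[5])                                    # 4 -> 5 goes in the center
print(sorted(d for d in counts if counts[d] == 3))  # [2, 4, 6, 8] -> the corners
```

<p>The remaining digits 1, 3, 7, 9 each appear in exactly two sums, so they fill the edge cells.</p>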
| 3 | 2016-08-17T18:41:45Z | [
"python",
"matrix",
"puzzle"
] |
Excel to PostgreSQL using Python | 39,003,159 | <p>I'm new to Python and I'm trying to export excel directly into postgreSQL without CSV files.</p>
<p>I don't know if it's possible.</p>
<p>I keep running into the error <code>'column "daily_date" is of type date but expression is of type numeric'</code>.</p>
<pre><code>import psycopg2
import xlrd
book = xlrd.open_workbook("pytest.xlsx")
sheet = book.sheet_by_name("Source")
database = psycopg2.connect (database = "RunP", user="postgres", password="", host="localhost", port="5432")
cursor = database.cursor()
query = """INSERT INTO orders (Daily_Date, Days, First, Second, Leader) VALUES (%s, %s, %s, %s, %s)"""
for r in range(1, sheet.nrows):
    Daily_Date = sheet.cell(r,0).value
    Days = sheet.cell(r,1).value
    First = sheet.cell(r,2).value
    Second = sheet.cell(r,3).value
    Leader = sheet.cell(r,4).value
    values = (Daily_Date, Days, First, Second, Leader)
    cursor.execute(query, values)
cursor.close()
database.commit()
database.close()
print ""
print ""
columns = str(sheet.ncols)
rows = str(sheet.nrows)
print "I just imported Excel into postgreSQL"
</code></pre>
| -1 | 2016-08-17T17:44:42Z | 39,005,370 | <p>Excel stores datetimes as numbers. You can use <code>xlrd.xldate.xldate_as_datetime</code> to convert:</p>
<pre><code> Daily_Date = xlrd.xldate.xldate_as_datetime(sheet.cell(r,0).value,book.datemode)
</code></pre>
| 0 | 2016-08-17T20:05:49Z | [
"python",
"excel",
"postgresql"
] |
Pygame reset an Image after scaling it | 39,003,192 | <p>I tried to enlarge an image when moving the mouse over it, and restore it to its original size when moving away from it.
For example:</p>
<pre><code>if ImageRect.collidepoint(mouseX,mouseY):
    Image = pygame.transform.scale(Image, (100,100))
else:
    Image = pygame.transform.scale(Image, (64,64)) #actual size of Image
</code></pre>
<p>But after moving the mouse over it about 10 times, the image gets weird.
How can I fix this?</p>
<p><a href="http://i.stack.imgur.com/xpCHZ.png" rel="nofollow">This is how it looks</a></p>
| 0 | 2016-08-17T17:46:18Z | 39,003,476 | <blockquote>
<p>Some of the transforms are considered destructive. These means every
time they are performed they lose pixel data. Common examples of this
are resizing and rotating. For this reason, it is better to
retransform the original surface than to keep transforming an image
multiple times. (For example, suppose you are animating a bouncing
spring which expands and contracts. If you applied the size changes
incrementally to the previous images, you would lose detail. Instead,
always begin with the original image and scale to the desired size.)</p>
</blockquote>
<p>Taken directly from <a href="http://www.pygame.org/docs/ref/transform.html" rel="nofollow">pygame.transform documentation</a></p>
<p><strong>Work Around</strong> </p>
<p>Don't change the original image each time. Instead, every time they collide just show a new image on top of the old image (since it is a larger image the old one won't be shown). Alternatively you can just hide the old image. </p>
<p>Example:</p>
<pre><code>if ImageRect.collidepoint(mouseX,mouseY):
    Image2 = pygame.transform.scale(Image, (100,100))
</code></pre>
| 0 | 2016-08-17T18:04:13Z | [
"python",
"image",
"pygame",
"mouse"
] |
how to check django_tables2 object empty/null? | 39,003,216 | <p>The problem I'm trying to solve:<br>
Display a search bar only when the table <code>my_table</code> is not empty, else, hide it.</p>
<p>Here's my views.py:</p>
<pre><code>def my_table(request):
    model1 = Model1.objects.all().filter(Q(some_query))
    table = MyTable(model1)
    RequestConfig(request).configure(table)
    if not table:
        print "table is empty"
        table = None
    else:
        print "table is not empty."
    return render(request, 'my_table.html', {'my_table':table,})
</code></pre>
<p>Here's my <code>my_table.html</code>:</p>
<pre><code>{% load render_table from django_tables2 %}
{% if my_table %}
Number of entries: {{ my_table|length }}.
<section>
<form method="post" action=".">
{% csrf_token %}
<input type="text" class="search-query span80" id="search" name="search" placeholder="Type to search">
<button type="submit" class="btn">Search</button>
</form>
</section>
{% else %}
<!-- There's no my_table. -->
{% endif%}
{% render_table my_table %}
</code></pre>
<p>However, this table object in <code>my_table(request)</code> views.py contains really nothing, and it doesn't render anything in the html, but it's just not empty, and displays the search bar.<br>
Please advise. Where am I going wrong?</p>
<p>In response to comments, here's MyTable code:</p>
<pre><code>class MyTable(tables.Table):
    entry1 = tables.Column(verbose_name="Entry 1")
    entry2 = tables.Column(verbose_name="Entry 2")

    class Meta:
        model = MyModel
        empty_text = "There is no entry record."
        order_by = "entry1"
        orderable = True
        sequence = ('entry1', 'entry2')
        fields = ('entry1', 'entry2')
        attrs = {"class": "paleblue", "style": "width:380px"}
</code></pre>
<p>And here's the code for MyModel:</p>
<pre><code>class MyModel(models.Model):
    entry1 = models.CharField(primary_key=True, max_length=11L, db_column='Entry1', blank=True) # Field name made lowercase.
    entry2 = models.CharField(max_length=11L, db_column='Entry2', blank=True) # Field name made lowercase.

    class Meta:
        db_table = 'my_table'
</code></pre>
| 2 | 2016-08-17T17:47:39Z | 39,007,177 | <p>In you views.py you can use:</p>
<pre><code>if len(table.data) > 0:
</code></pre>
<p>and in your template you can use:</p>
<pre><code>{% if table.data|length > 0 %}
do whatever you want here...
{% endif %}
</code></pre>
| 1 | 2016-08-17T22:24:06Z | [
"python",
"django"
] |
Python parsing multi column file | 39,003,251 | <p>I have a file that I need to parse, constructing a single row from multiple rows, using Python.</p>
<pre><code>NAME ID
TITLE DEP
USER1 0023
S1 SALES
USER2 0022
A2 ACCOUNT
</code></pre>
<p>As you can see here the file header are NAME, ID, TITLE, DEP</p>
<p>I want to print output like below so i can read easy in csv file and do other stuff easily.</p>
<pre><code>NAME, ID, TITLE, DEP
USER1,0023,S1,SALES
USER2,0022,A2,ACCOUNT
</code></pre>
<p>Below is the code I started with, but I am not able to get to where I want.
I tried different options to split and replace, but they did not work.</p>
<pre><code>import csv
file =open('test_file_parse.csv','r')
out_file=open('test_out.csv','w')
lines = file.readlines()
file.close()
for line in lines:
    line = line.strip()
    print(line)
</code></pre>
<p>Any help is appreciated</p>
| -1 | 2016-08-17T17:49:48Z | 39,003,358 | <p>Having all the words in a file called <code>file.txt</code> here is the code:</p>
<pre><code>
# read all the words
with open('file.txt') as f:
    words = f.read().split()
# convert to groups of 4-s
groups4 = [words[i:i+4] for i in range(0, len(words), 4)]
# convert to lines with commas using join()
lines = [', '.join(lst) for lst in groups4]
# and here is the result
for line in lines:
    print(line)
</code></pre>
<p>Output: </p>
<pre><code>NAME, ID, TITLE, DEP
USER1, 0023, S1, SALES
USER2, 0022, A2, ACCOUNT
</code></pre>
| 0 | 2016-08-17T17:56:55Z | [
"python",
"parsing",
"multiline"
] |
Python parsing multi column file | 39,003,251 | <p>I have a file that I need to parse, constructing a single row from multiple rows, using Python.</p>
<pre><code>NAME ID
TITLE DEP
USER1 0023
S1 SALES
USER2 0022
A2 ACCOUNT
</code></pre>
<p>As you can see here the file header are NAME, ID, TITLE, DEP</p>
<p>I want to print output like below so i can read easy in csv file and do other stuff easily.</p>
<pre><code>NAME, ID, TITLE, DEP
USER1,0023,S1,SALES
USER2,0022,A2,ACCOUNT
</code></pre>
<p>Below is the code I started with, but I am not able to get to where I want.
I tried different options to split and replace, but they did not work.</p>
<pre><code>import csv
file =open('test_file_parse.csv','r')
out_file=open('test_out.csv','w')
lines = file.readlines()
file.close()
for line in lines:
    line = line.strip()
    print(line)
</code></pre>
<p>Any help is appreciated</p>
| -1 | 2016-08-17T17:49:48Z | 39,003,761 | <pre><code>infile = open('test_file_parse.csv','r')
def custom_func(x):
    return next(x).strip().split()

while infile:
    try:
        print ','.join(reduce(lambda x, y: x + y, (custom_func(infile) for z in range(4))))
    except TypeError:
        break
infile.close()
</code></pre>
| 0 | 2016-08-17T18:20:52Z | [
"python",
"parsing",
"multiline"
] |
Python parsing multi column file | 39,003,251 | <p>I have a file that I need to parse, constructing a single row from multiple rows, using Python.</p>
<pre><code>NAME ID
TITLE DEP
USER1 0023
S1 SALES
USER2 0022
A2 ACCOUNT
</code></pre>
<p>As you can see here the file header are NAME, ID, TITLE, DEP</p>
<p>I want to print output like below so i can read easy in csv file and do other stuff easily.</p>
<pre><code>NAME, ID, TITLE, DEP
USER1,0023,S1,SALES
USER2,0022,A2,ACCOUNT
</code></pre>
<p>Below is the code I started with, but I am not able to get to where I want.
I tried different options to split and replace, but they did not work.</p>
<pre><code>import csv
file =open('test_file_parse.csv','r')
out_file=open('test_out.csv','w')
lines = file.readlines()
file.close()
for line in lines:
    line = line.strip()
    print(line)
</code></pre>
<p>Any help is appreciated</p>
| -1 | 2016-08-17T17:49:48Z | 39,004,006 | <p>If you know everything will be pairs of two lines and you know you can ignore empty lines, you could do something like:</p>
<pre><code>infile = open('test_file_parse.csv', 'r')
# A generator that yields the non-empty lines, without newlines.
lines = (l.strip() for l in infile if l.strip())
# An iterator to iterate over the yielded lines.
line_iter = iter(lines)
# A generator to yield space-separated combined lines.
new_lines = (' '.join(l_pair) for l_pair in zip(line_iter, line_iter))
# Lastly, a generator to yield proper csv for the lines.
csv_lines = (','.join(l.split()) for l in new_lines)
for line in csv_lines:
    print line
</code></pre>
| 0 | 2016-08-17T18:36:59Z | [
"python",
"parsing",
"multiline"
] |
How can I detect whether I'm running in a Celery worker? | 39,003,282 | <p>Is there a way to determine, programatically, that the current module being imported/run is done so in the context of a celery worker?</p>
<p>We've settled on setting an environment variable before running the Celery worker, and checking this environment variable in the code, but I wonder if there's a better way?</p>
| 0 | 2016-08-17T17:52:06Z | 39,019,837 | <p>Adding a environment variable is a good way to check if the module is being run by celery worker. In the task submitter process we may set the environment variable, to mark that it is not running in the context of a celery worker.</p>
<p>But the better way may be to use some celery signals which may help to know if the module is running in worker or task submitter. For example, <a href="http://docs.celeryproject.org/en/latest/userguide/signals.html#worker-process-init" rel="nofollow">worker-process-init</a> signal is sent to each child task executor process (in preforked mode) and the handler can be used to set some global variable indicating it is a worker process. </p>
| 0 | 2016-08-18T13:44:00Z | [
"python",
"celery"
] |
How can I detect whether I'm running in a Celery worker? | 39,003,282 | <p>Is there a way to determine, programatically, that the current module being imported/run is done so in the context of a celery worker?</p>
<p>We've settled on setting an environment variable before running the Celery worker, and checking this environment variable in the code, but I wonder if there's a better way?</p>
| 0 | 2016-08-17T17:52:06Z | 39,023,702 | <p>Depending on what your use-case scenario is exactly, you may be able to detect it by checking whether the request id is set:</p>
<pre><code>@app.task(bind=True)
def foo(self):
    print self.request.id
</code></pre>
<p>If you invoke the above as <code>foo.delay()</code> then the task will be sent to a worker and <code>self.request.id</code> will be set to a unique number. If you invoke it as <code>foo()</code>, then it will be executed in your current process and <code>self.request.id</code> will be <code>None</code>.</p>
| 0 | 2016-08-18T16:57:21Z | [
"python",
"celery"
] |
How can I detect whether I'm running in a Celery worker? | 39,003,282 | <p>Is there a way to determine, programatically, that the current module being imported/run is done so in the context of a celery worker?</p>
<p>We've settled on setting an environment variable before running the Celery worker, and checking this environment variable in the code, but I wonder if there's a better way?</p>
| 0 | 2016-08-17T17:52:06Z | 39,093,390 | <p>It is a good practice to start workers with names, so that it becomes easier to manage(stop/kill/restart) them. You can use <code>-n</code> to name a worker.</p>
<pre><code>celery worker -l info -A test -n foo
</code></pre>
<p>Now, in your script you can use <code>app.control.inspect</code> to see if that worker is running.</p>
<pre><code>In [22]: import test
In [23]: i = test.app.control.inspect(['foo'])
In [24]: i.app.control.ping()
Out[24]: [{'celery@foo': {'ok': 'pong'}}]
</code></pre>
<p>You can read more <a href="http://docs.celeryproject.org/en/latest/userguide/workers.html#inspecting-workers" rel="nofollow">about this in celery worker docs</a></p>
| 0 | 2016-08-23T05:58:25Z | [
"python",
"celery"
] |
wxpython dialog return values with option to destroy | 39,003,360 | <p>I have a wxpython dialog (wrapped in a function call which returns user selection) that Im using to prompt user for "ok", "cancel" type questions.</p>
<p>I have another thread running that allows the user to click a button and perform an emergency stop (also wxPython). When the emergency stop button is clicked, it performs some stuff and then displays its own user dialog.</p>
<p>Now, assume that a popup message has prompted the user with a yes/no question and at the very same time, the user sees something on fire and decides to click the emergency stop.</p>
<p>I need to be able to destroy the current dialog question and eventually display the dialog thats part of the estop thread.</p>
<p>Ive created code that starts the dialog in one thread and another that issues a pub.sendMessage(...) to kill the dialog thread if its displayed on the screen at the time of pressing the estop button.</p>
<p>This works just fine.</p>
<p>Problem is, i cant get the result from the user dialog (yes,no,cancel etc) because the thread never joins back to return what the user picked. </p>
<p>If i add a .join() to my code, i get the user response, but the dialog isnt destroyed if a pub.sendMessage(...) is sent.</p>
<p>heres pseudocode:</p>
<pre><code>1)start up thread to monitor user emergency stop button push
2)start up test which may contain popup window (custom wxdialog) prompting user for answer
3)return user selection from dialog if selected
OR if user has clicked estop button: destroy current dialog and display new dialog related to estop
</code></pre>
<p>simple enough, but with the implementation of wrapper functions (this is code for others to call and needs to provide simple function calls), its obviously a little weird. but reimplementing it might not fly right now</p>
<p>heres some code (both are part of larger class so thats why the self references are there)</p>
<pre><code> #the receiver of the pub message first:
def DestroyUI(self):
    print 'into destroy'
    #self.Hide()
    #self.Close()
    self.Destroy()

#these 2 functions are part of a different class
def closeDialogs(self):
    print 'into closeDialogs'

    def destroyUI():
        if self.okToKill:
            print 'closing wx dialogs'
            pub.sendMessage("destroyUI")

    ts = threading.Thread(target = destroyUI)
    ts.start()
    ts.join()

def MB(self,blah,blah):
    def runMB(*args):
        self.okToKill = True
        self.response = self.PromptUser(*args)
        self.okToKill = False

    tr = threading.Thread(target = runMB, args=(blah,blah))
    tr.start()
    #if i add the join back in, i always return the user values
    #but the dialog is never destroyed (if needed) before user
    #clicks.
    #without the join, the destroy pub send works great, but the
    #user values are never returned
    #tr.join()
    return self.response
</code></pre>
<p>I have tried using queues, multiprocess pool, but the problem with those is the q.get() and async_result().get() both block until user clicks and therefore wont allow the destroy to work as needed.</p>
<p>What id love to figure out is how to get the user values IF the user clicked buttons, but also be able to destroy the current dialog and display the new emergency stop dialog instead (with custom buttons).</p>
<p>I wish wxpython just had a closeAll() :)</p>
<p>Also, i have tried closing the windows based on title name, and while that works, it hoses up my wx instance for the next dialog.</p>
<p>Any ideas?
thanks</p>
| 1 | 2016-08-17T17:56:59Z | 39,003,717 | <p>Since I don't really understand what you are asking for, I will instead address "I wish wxpython just had a closeAll() :)"</p>
<pre><code>def wx_closeAll():
    for win in wx.GetTopLevelWindows():
        win.Destroy()

def closeAllButPrimary():
    for win in wx.GetTopLevelWindows():
        if win != wx.GetTopLevelWindow():
            win.Destroy()
</code></pre>
<p><strong>note</strong> this will close all frames and dialogs whose parent is None ... if you set a parent for the frame or dialog that dialog will be closed when its parent window is closed</p>
| 2 | 2016-08-17T18:17:49Z | [
"python",
"multithreading",
"return",
"wxpython"
] |
wxpython dialog return values with option to destroy | 39,003,360 | <p>I have a wxpython dialog (wrapped in a function call which returns user selection) that Im using to prompt user for "ok", "cancel" type questions.</p>
<p>I have another thread running that allows the user to click a button and perform an emergency stop (also wxPython). When the emergency stop button is clicked, it performs some stuff and then displays its own user dialog.</p>
<p>Now, assume that a popup message has prompted the user with a yes/no question and at the very same time, the user sees something on fire and decides to click the emergency stop.</p>
<p>I need to be able to destroy the current dialog question and eventually display the dialog thats part of the estop thread.</p>
<p>Ive created code that starts the dialog in one thread and another that issues a pub.sendMessage(...) to kill the dialog thread if its displayed on the screen at the time of pressing the estop button.</p>
<p>This works just fine.</p>
<p>Problem is, i cant get the result from the user dialog (yes,no,cancel etc) because the thread never joins back to return what the user picked. </p>
<p>If i add a .join() to my code, i get the user response, but the dialog isnt destroyed if a pub.sendMessage(...) is sent.</p>
<p>heres pseudocode:</p>
<pre><code>1)start up thread to monitor user emergency stop button push
2)start up test which may contain popup window (custom wxdialog) prompting user for answer
3)return user selection from dialog if selected
OR if user has clicked estop button: destroy current dialog and display new dialog related to estop
</code></pre>
<p>simple enough, but with the implementation of wrapper functions (this is code for others to call and needs to provide simple function calls), its obviously a little weird. but reimplementing it might not fly right now</p>
<p>heres some code (both are part of larger class so thats why the self references are there)</p>
<pre><code> #the receiver of the pub message first:
def DestroyUI(self):
    print 'into destroy'
    #self.Hide()
    #self.Close()
    self.Destroy()

#these 2 functions are part of a different class
def closeDialogs(self):
    print 'into closeDialogs'

    def destroyUI():
        if self.okToKill:
            print 'closing wx dialogs'
            pub.sendMessage("destroyUI")

    ts = threading.Thread(target = destroyUI)
    ts.start()
    ts.join()

def MB(self,blah,blah):
    def runMB(*args):
        self.okToKill = True
        self.response = self.PromptUser(*args)
        self.okToKill = False

    tr = threading.Thread(target = runMB, args=(blah,blah))
    tr.start()
    #if i add the join back in, i always return the user values
    #but the dialog is never destroyed (if needed) before user
    #clicks.
    #without the join, the destroy pub send works great, but the
    #user values are never returned
    #tr.join()
    return self.response
</code></pre>
<p>I have tried using queues, multiprocess pool, but the problem with those is the q.get() and async_result().get() both block until user clicks and therefore wont allow the destroy to work as needed.</p>
<p>What id love to figure out is how to get the user values IF the user clicked buttons, but also be able to destroy the current dialog and display the new emergency stop dialog instead (with custom buttons).</p>
<p>I wish wxpython just had a closeAll() :)</p>
<p>Also, i have tried closing the windows based on title name, and while that works, it hoses up my wx instance for the next dialog.</p>
<p>Any ideas?
thanks</p>
| 1 | 2016-08-17T17:56:59Z | 39,003,808 | <p>Doing GUI from threads is not a good idea, and nearly always gives unexpected results.</p>
<p>In order to show the message box from a thread, you need to make the thread ask the main thread to show the message box. </p>
<p>You do this using the function <a href="https://wiki.wxpython.org/CallAfter" rel="nofollow">wx.CallAfter</a></p>
| 2 | 2016-08-17T18:23:43Z | [
"python",
"multithreading",
"return",
"wxpython"
] |
Brute force with python selenium? | 39,003,378 | <p>I am trying to brute force with <code>python selenium</code>, but it seems to take much more time than brute force with python requests, although both are different. I was wondering: is there any way the brute forcing can be done faster?</p>
| -1 | 2016-08-17T17:58:02Z | 39,003,568 | <p>I once wrote a Python script that used Splinter to drive Firefox via Selenium for a video that demonstrated brute-forcing a Wordpress login in the same fashion (for educating site owners about how easy it was to brute-force the Wordpress login by default). I used this method precisely because it was slower (and it was possible to see it happening).</p>
<p>There's no getting away from the fast that it will be much faster using <code>requests</code>, because it still needs to load all the CSS, images and Javascript. Using a headless browser like PhantomJS might help since it won't have to render it on screen, but if you don't need to see it anyway you might as well use <code>requests</code>.</p>
| 1 | 2016-08-17T18:08:56Z | [
"python",
"selenium",
"python-requests",
"python-3.4"
] |
Programmatically getting list of child processes of a given PID | 39,003,433 | <p>I'd like to get a list of all the immediate children of a given PID. I'm OK with using <code>/proc</code> but <code>/proc/<PID>/task/<PID>/children</code> is NOT precise and may return inaccurate results (<a href="https://lwn.net/Articles/475688/" rel="nofollow">see section 3.7 here</a>). I'd like a more reliable method of doing this. </p>
<p>I'd prefer not using a wrapper around a shell command.</p>
| 1 | 2016-08-17T18:01:34Z | 39,003,721 | <p>Why not use psutils?</p>
<p>Here is an example where I kill all the children.</p>
<pre><code>import os
import signal

import psutil

def infanticide(pid):
    try:
        parent = psutil.Process(pid)
    except psutil.NoSuchProcess:
        return

    children = parent.children(recursive=True)
    for p in children:
        os.kill(p.pid, signal.SIGKILL)
</code></pre>
| 0 | 2016-08-17T18:18:01Z | [
"python",
"node.js",
"linux",
"ubuntu",
"process"
] |
Transform 3D polygon to 2D, perform clipping, and transform back to 3D | 39,003,450 | <p>My problem is that I have two walls, represented as 2D planes in 3D space, (<code>wallA</code> and <code>wallB</code>). These walls are overlapping. I need to convert that into three wall sections, one for the <code>wallA.intersect(wallB)</code>, one for <code>wallA.diff(wallB)</code>, and one for <code>wallB.diff(wallA)</code>.</p>
<p>What I think I need to to do is rotate them both into 2D space, without changing their overlaps, perform the clipping to identify the diffs and intersect, then rotate the new walls back into the original plane.</p>
<p>The walls are not necessarily vertical, otherwise the problem might be simpler.</p>
<p>The clipping part of my problem is easily solved in 2D, using <code>pyclipper</code>. What I'm having trouble with is the algorithm for recoverably rotating the walls into 2D.</p>
<p>From what I can understand, something similar to but not exactly the same as the steps in <a href="http://stackoverflow.com/questions/6023166/rotating-a-3d-polygon-into-xy-plane-while-maintaining-orientation">this question</a>. I've looked at <a href="https://pypi.python.org/pypi/transforms3d" rel="nofollow"><code>transforms3D</code></a> which looks really useful, but can't quite understand which, or what combination, of the functions I need to use to reproduce that algorithm.</p>
<p>Here's an example of what I'm trying to achieve, using a really simple example of a pair of 2 x 2 vertical surfaces that have an overlapping 1 x 1 square in one corner.</p>
<pre><code>import pyclipper as pc
wallA= [(0,0,2), (2,0,2), (2,0,0), (0,0,0)]
wallB = [(1,0,3), (3,0,3), (3,0,1), (1,0,1)]
expected_overlaps = [[(1,0,2), (2,0,2), (2,0,1), (1,0,1)]]
wallA_2d = transform_to_2D(wallA, <whatever else is needed>)
wallB_2d = transform_to_2D(wallB, <whatever else is needed>)
scaledA = pc.scale_to_clipper(wallA_2d)
scaledB = pc.scale_to_clipper(wallB_2d)
clipper = pc.Pyclipper()
clipper.AddPath(scaledA, poly_type=pc.PT_SUBJECT, closed=True)
clipper.AddPath(scaledB, poly_type=pc.PT_CLIP, closed=True)
# just showing the intersection - differences are handled similarly
intersections = clipper.Execute(
pc.CT_INTERSECTION, pc.PFT_NONZERO, pc.PFT_NONZERO)
intersections = [pc.scale_from_clipper(i) for i in intersections]
overlaps = [transform_to_3D(i, <whatever else is needed>) for i in intersections]
assert overlaps == expected_overlaps
</code></pre>
<p>What I'm looking for is an explanation of the steps required to write <code>transform_to_2d</code> and <code>transform_to_3d</code>.</p>
| 0 | 2016-08-17T18:02:48Z | 39,008,641 | <p>Rather than rotating, you can simply project. The key is to map the 3d space onto a 2d plane in a way that you can then reverse. (Any distortion resulting from the projection will be undone when you map back.) To do this, you should first find the plane that contains both of your walls. Here is some example code:</p>
<pre><code>wallA = [(0,0,2), (2,0,2), (2,0,0), (0,0,0)]
wallB = [(1,0,3), (3,0,3), (3,0,1), (1,0,1)]
v = (0, 1, 0) # the normal vector
a = 0 # a number so that v[0] * x + v[1] * y + v[2] * z = a is the equation of the plane containing your walls
# To calculate the normal vector in general,
# you would take the cross product of any two
# vectors in the plane of your walls, e.g.
# (wallA[1] - wallA[0]) X (wallA[2] - wallA[0]).
# You can then solve for a.
proj_axis = max(range(3), key=lambda i: abs(v[i]))
# this just needs to be any number such that v[proj_axis] != 0
def project(x):
    # Project onto either the xy, yz, or xz plane. (We choose the one that avoids degenerate configurations, which is the purpose of proj_axis.)
    # In this example, we would be projecting onto the xz plane.
    return tuple(c for i, c in enumerate(x) if i != proj_axis)

def project_inv(x):
    # Returns the vector w in the walls' plane such that project(w) equals x.
    w = list(x)
    w[proj_axis:proj_axis] = [0.0]
    c = a
    for i in range(3):
        c -= w[i] * v[i]
    c /= v[proj_axis]
    w[proj_axis] = c
    return tuple(w)
projA = [project(x) for x in wallA]
projB = [project(x) for x in wallB]
proj_intersection = intersection(projA, projB) # use your 2d algorithm here
intersection = [project_inv(x) for x in proj_intersection] # this is your intersection in 3d; you can do similar things for the other pieces
</code></pre>
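<p>As a quick sanity check (a sketch, not part of the original answer): for any point already lying in the walls' plane, <code>project_inv(project(w))</code> should return <code>w</code> unchanged:</p>

```python
# Round-trip sanity check for the projection helpers above (plane y = 0).
v = (0, 1, 0)   # normal vector of the walls' plane
a = 0           # plane offset: v . w = a
proj_axis = max(range(3), key=lambda i: abs(v[i]))  # 1 here

def project(x):
    # Drop the coordinate along proj_axis.
    return tuple(c for i, c in enumerate(x) if i != proj_axis)

def project_inv(x):
    # Re-insert the dropped coordinate, solving the plane equation for it.
    w = list(x)
    w.insert(proj_axis, 0.0)
    w[proj_axis] = (a - sum(w[i] * v[i] for i in range(3))) / v[proj_axis]
    return tuple(w)

wallA = [(0, 0, 2), (2, 0, 2), (2, 0, 0), (0, 0, 0)]
assert all(project_inv(project(p)) == p for p in wallA)
```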
| 2 | 2016-08-18T01:33:28Z | [
"python",
"3d",
"geometry",
"transforms3d"
] |
How to fix appengine ImportError: No module named protobuf? | 39,003,464 | <p>I have the following folders structure:</p>
<pre><code>myappdir
- libs
- somelib
script1.py
script2.py
- google
- protobuf
__init__.py
message.py
...
__init__.py
...
app.yaml
appengine_config.py
...
</code></pre>
<p>And the following files content - </p>
<p><strong>appengine_config.py</strong>:</p>
<pre><code>import sys
sys.path.append('libs')
</code></pre>
<p><strong>script1.py</strong>:</p>
<pre><code>from somelib.script2 import Something
</code></pre>
<p><strong>script2.py</strong>:</p>
<pre><code>from google.protobuf import message
</code></pre>
<p>In result I get:</p>
<pre><code> File "myappdir/libs/somelib/script1.py", line 34, in <module>
from somelib.script2 import Something
File "myappdir/libs/somelib/script2.py", line 38, in <module>
from google.protobuf import message
ImportError: No module named protobuf
</code></pre>
<p>What is wrong with my setup?</p>
| 1 | 2016-08-17T18:03:24Z | 39,003,903 | <p>Change the lines in your <strong>appengine_config.py</strong> file, from:</p>
<pre><code>import sys
sys.path.append('libs')
</code></pre>
<p>to:</p>
<pre><code>import sys
import os.path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'libs'))
</code></pre>
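<p>For illustration (a sketch with a made-up project path), the point of <code>os.path.join(os.path.dirname(__file__), 'libs')</code> plus <code>insert(0, ...)</code> is that the resulting path is absolute and searched first:</p>

```python
import os.path
import sys

# Hypothetical app dir standing in for os.path.dirname(__file__):
app_dir = "/srv/myappdir"
libs = os.path.join(app_dir, "libs")
sys.path.insert(0, libs)   # front of sys.path: bundled packages win
assert sys.path[0] == libs
assert os.path.isabs(libs)
```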
| 0 | 2016-08-17T18:29:55Z | [
"python",
"google-app-engine",
"protocol-buffers",
"python-import"
] |
How to easy switch db in Django 1.9.x across multiple servers | 39,003,588 | <p>I have one application in Django 1.9.x, and I want to use multidb.</p>
<p>Is there any configuration in the settings where I can enter multiple databases, so that when the app runs on a specific server it uses the correct database? For example:</p>
<p>When I'm programming on localhost it should use the default database; when I deploy to the test server it should automatically switch to testserverdb; and on the production server it should use productiondb. I tried the <a href="https://docs.djangoproject.com/pt-br/1.10/topics/db/multi-db/" rel="nofollow">multi-db documentation</a>, but it's not what I want because that approach is for working with legacy databases, which is not my case.</p>
<p>How do I do it?</p>
| 0 | 2016-08-17T18:10:14Z | 39,004,529 | <p>It sounds like you want to have environment specific databases, not necessarily a single app that connects to many databases. You can easily accomplish this with a custom settings module for each of these environments. </p>
<p>You might have a structure like the following : </p>
<pre><code>myproject/
- settings/
- __init__.py
- common.py
</code></pre>
<p>You'll want to put all your common settings under <code>common.py</code>. This will serve as the basis for all your other environment settings. From here, there are a few setups that you can use to do what you want, but I'm going to suggest that you use <code>common.py</code> as a base settings module that can be overridden locally. </p>
<p>To do this, you can set your <code>DJANGO_SETTINGS_MODULE</code> to be <code>myproject.settings</code>, and in your <code>__init__.py</code>, </p>
<pre><code>from .common import *
try:
    from .local import *
except ImportError:
    pass
</code></pre>
<p>Then on each environment (production/development/etc), you'll want to include a file named <code>local.py</code> in <code>myproject/settings</code>. Any settings you put in that <code>local.py</code> file will override your <code>common.py</code> when your settings module gets loaded up.</p>
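<p>To make this concrete, a hypothetical <code>local.py</code> for the test server might contain nothing but the overrides (the database name and credentials here are made up for illustration):</p>

```python
# myproject/settings/local.py -- test-server overrides (illustrative values)
DEBUG = True

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "testserverdb",
        "USER": "testuser",
        "PASSWORD": "secret",
        "HOST": "localhost",
        "PORT": "5432",
    }
}
```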
| 1 | 2016-08-17T19:12:47Z | [
"python",
"mysql",
"django",
"database",
"postgresql"
] |
How to easy switch db in Django 1.9.x across multiple servers | 39,003,588 | <p>I have one application in Django 1.9.x, and I want to use multidb.</p>
<p>Is there any configuration in the settings where I can enter multiple databases, so that when the app runs on a specific server it uses the correct database? For example:</p>
<p>When I'm programming on localhost it should use the default database; when I deploy to the test server it should automatically switch to testserverdb; and on the production server it should use productiondb. I tried the <a href="https://docs.djangoproject.com/pt-br/1.10/topics/db/multi-db/" rel="nofollow">multi-db documentation</a>, but it's not what I want because that approach is for working with legacy databases, which is not my case.</p>
<p>How do I do it?</p>
| 0 | 2016-08-17T18:10:14Z | 39,005,365 | <p>In your settings file:</p>
<pre><code>try:
    from [app_name].local_settings import *
except ImportError:
    pass
</code></pre>
<p>Settings you define in this local_settings file will override those in the main settings file. So now you can have a different local_settings file for your localhost, development or production environment. You can specify a separate db in each of these files.</p>
| 4 | 2016-08-17T20:05:38Z | [
"python",
"mysql",
"django",
"database",
"postgresql"
] |
Beginner's Factorial Code | 39,003,652 | <p>I'm starting to learn Python and I have a small bit of code that takes the factorial of a user input. I'm trying to understand the logic behind what's going on so I can better understand the process. Why is it when I change the location of one of my variables the output changes? (I'm using python 2)</p>
<pre><code>user_input = input("enter a positive number")
for i in range(user_input):
    product = 1 #the output changes when I move it here instead of above the for loop
    product = product * (i + 1)
    print(product)
</code></pre>
| -2 | 2016-08-17T18:14:04Z | 39,003,695 | <p>If it is in the loop, that means that each loop iteration, product will be reset to 1. So you end up with the same result as if you ran just the final iteration of the loop. In other words the products don't accumulate.</p>
| 0 | 2016-08-17T18:16:58Z | [
"python"
] |
Beginner's Factorial Code | 39,003,652 | <p>I'm starting to learn Python and I have a small bit of code that takes the factorial of a user input. I'm trying to understand the logic behind what's going on so I can better understand the process. Why is it when I change the location of one of my variables the output changes? (I'm using python 2)</p>
<pre><code>user_input = input("enter a positive number")
for i in range(user_input):
    product = 1 #the output changes when I move it here instead of above the for loop
    product = product * (i + 1)
    print(product)
</code></pre>
| -2 | 2016-08-17T18:14:04Z | 39,003,702 | <p>That's because every time the for loop runs, you're setting:</p>
<p>product = 1</p>
| 0 | 2016-08-17T18:17:25Z | [
"python"
] |
Beginner's Factorial Code | 39,003,652 | <p>I'm starting to learn Python and I have a small bit of code that takes the factorial of a user input. I'm trying to understand the logic behind what's going on so I can better understand the process. Why is it when I change the location of one of my variables the output changes? (I'm using python 2)</p>
<pre><code>user_input = input("enter a positive number")
for i in range(user_input):
    product = 1 #the output changes when I move it here instead of above the for loop
    product = product * (i + 1)
    print(product)
</code></pre>
| -2 | 2016-08-17T18:14:04Z | 39,003,724 | <p>product is reset to 1 in every iteration of the loop.</p>
| 0 | 2016-08-17T18:18:06Z | [
"python"
] |
Beginner's Factorial Code | 39,003,652 | <p>I'm starting to learn Python and I have a small bit of code that takes the factorial of a user input. I'm trying to understand the logic behind what's going on so I can better understand the process. Why is it when I change the location of one of my variables the output changes? (I'm using python 2)</p>
<pre><code>user_input = input("enter a positive number")
for i in range(user_input):
    product = 1 #the output changes when I move it here instead of above the for loop
    product = product * (i + 1)
    print(product)
</code></pre>
| -2 | 2016-08-17T18:14:04Z | 39,003,743 | <p>By putting</p>
<pre><code>product = 1
</code></pre>
<p>inside the loop, you are re-initializing the total value each iteration of the loop.</p>
<p>If the user were to enter 3,
It would show <code>1, 2, 3</code> because every iteration you are just creating a variable product with the value of 1, and multiplying it by the (iterator +1) which is just (1 * (iterator +1)).</p>
<p>If you put </p>
<pre><code>product = 1
</code></pre>
<p>outside the loop, the total value would only be initialized to 1 at the start, and you would be able to correctly sum the value of the factorial.</p>
<p>If the user entered 3 as the input again, it would show <code>1, 2, 6</code> because it will no longer multiply (1* (iterator + 1)) but (previous sum * (iterator + 1))</p>
| 1 | 2016-08-17T18:19:07Z | [
"python"
] |
Beginner's Factorial Code | 39,003,652 | <p>I'm starting to learn Python and I have a small bit of code that takes the factorial of a user input. I'm trying to understand the logic behind what's going on so I can better understand the process. Why is it when I change the location of one of my variables the output changes? (I'm using python 2)</p>
<pre><code>user_input = input("enter a positive number")
for i in range(user_input):
    product = 1 #the output changes when I move it here instead of above the for loop
    product = product * (i + 1)
    print(product)
</code></pre>
| -2 | 2016-08-17T18:14:04Z | 39,003,769 | <p>The loop works like this:</p>
<pre><code>user_input = input("enter a positive number")
for i in range(user_input):
product = 1 #Set product to 1
product = product * (i + 1) #Increase product
print(product) #Print the product
</code></pre>
<p>Every loop the value of <code>product</code> will reset back to <code>1</code> before doing the calculation.</p>
<blockquote>
<p>Loop 1 (i = 0) <br>
product = 1 <br>
product = 1 * (0 + 1) = 1 <br>
Loop 2 (i = 1) <br>
product = 1 <br>
product = 1 * (1 + 1) = 2 <br></p>
</blockquote>
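<p>Tracing the buggy version in actual code (a sketch; note that <code>range()</code> starts <code>i</code> at 0):</p>

```python
# With input 3, the reset-inside-the-loop version prints 1, 2, 3 --
# each value is just i + 1, not a running factorial.
user_input = 3
outputs = []
for i in range(user_input):
    product = 1                  # reset every iteration: the bug
    product = product * (i + 1)
    outputs.append(product)
assert outputs == [1, 2, 3]      # not the running factorials [1, 2, 6]
```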
| 1 | 2016-08-17T18:21:45Z | [
"python"
] |
Beginner's Factorial Code | 39,003,652 | <p>I'm starting to learn Python and I have a small bit of code that takes the factorial of a user input. I'm trying to understand the logic behind what's going on so I can better understand the process. Why is it when I change the location of one of my variables the output changes? (I'm using python 2)</p>
<pre><code>user_input = input("enter a positive number")
for i in range(user_input):
    product = 1 #the output changes when I move it here instead of above the for loop
    product = product * (i + 1)
    print(product)
</code></pre>
 | -2 | 2016-08-17T18:14:04Z | 39,003,782 | <p>This isn't going to answer your question directly ... but I find factorial is easiest to think about recursively: </p>
<pre><code>def factorial(n):
    # base case: if n is 0 or 1, then 0! == 1! == 1
    if n in (1, 0): return 1
    # recursive case: otherwise n! == n * (n-1)!
    return n * factorial(n - 1)
</code></pre>
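<p>For example, calling it on a few values (repeating the definition so the snippet is self-contained):</p>

```python
def factorial(n):
    # base case: 0! == 1! == 1
    if n in (1, 0):
        return 1
    # recursive case: n! == n * (n-1)!
    return n * factorial(n - 1)

assert factorial(1) == 1
assert factorial(5) == 120
```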
| 1 | 2016-08-17T18:22:35Z | [
"python"
] |
Beginner's Factorial Code | 39,003,652 | <p>I'm starting to learn Python and I have a small bit of code that takes the factorial of a user input. I'm trying to understand the logic behind what's going on so I can better understand the process. Why is it when I change the location of one of my variables the output changes? (I'm using python 2)</p>
<pre><code>user_input = input("enter a positive number")
for i in range(user_input):
    product = 1 #the output changes when I move it here instead of above the for loop
    product = product * (i + 1)
    print(product)
</code></pre>
| -2 | 2016-08-17T18:14:04Z | 39,003,787 | <p>I think this is what you want to do. </p>
<pre><code>user_input = int(input("enter a positive number"))
product = 1
for i in range(user_input):
    product = product * (i + 1)
    print(product)
</code></pre>
<p>But when you put <strong>product = 1</strong> inside the loop, every iteration starts with product = 1, which throws away the previous product. It works like this:</p>
<pre><code>user_input = int(input("enter a positive number"))
for i in range(user_input):
    product = i + 1
    print(product)
</code></pre>
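<p>Running both versions side by side for an input of 3 makes the difference visible (a sketch, with the input hard-coded and the values collected instead of printed):</p>

```python
n = 3

# Reset inside the loop: each value is just i + 1.
buggy = []
for i in range(n):
    product = 1
    product = product * (i + 1)
    buggy.append(product)

# Reset once, before the loop: values accumulate into factorials.
fixed = []
product = 1
for i in range(n):
    product = product * (i + 1)
    fixed.append(product)

assert buggy == [1, 2, 3]
assert fixed == [1, 2, 6]
```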
| 0 | 2016-08-17T18:22:42Z | [
"python"
] |
AWS - OS Error permission denied Lambda Script | 39,003,668 | <p>I'm trying to execute a Lambda Script in Python with an imported library, however I'm getting permission errors.
I am also getting some alerts about the database, but the database queries are called after the subprocess, so I don't think they are related. Could someone explain why I get this error?</p>
<p>Alert information</p>
<pre><code>Alarm:Database-WriteCapacityUnitsLimit-BasicAlarm
State changed to INSUFFICIENT_DATA at 2016/08/16. Reason: Unchecked: Initial alarm creation
</code></pre>
<p>Lambda Error</p>
<pre><code>[Errno 13] Permission denied: OSError Traceback (most recent call last):File "/var/task/lambda_function.py", line 36, in lambda_handler
xml_output = subprocess.check_output(["./mediainfo", "--full", "--output=XML", signed_url])
File "/usr/lib64/python2.7/subprocess.py", line 566, in check_output process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib64/python2.7/subprocess.py", line 710, in __init__ errread, errwrite) File "/usr/lib64/python2.7/subprocess.py", line 1335, in _execute_child raise child_exception
OSError: [Errno 13] Permission denied
</code></pre>
<p>Lambda code</p>
<pre><code>import logging
import subprocess
import boto3
SIGNED_URL_EXPIRATION = 300 # The number of seconds that the Signed URL is valid
DYNAMODB_TABLE_NAME = "TechnicalMetadata"
DYNAMO = boto3.resource("dynamodb")
TABLE = DYNAMO.Table(DYNAMODB_TABLE_NAME)
logger = logging.getLogger('boto3')
logger.setLevel(logging.INFO)
def lambda_handler(event, context):
"""
:param event:
:param context:
"""
# Loop through records provided by S3 Event trigger
for s3_record in event['Records']:
logger.info("Working on new s3_record...")
# Extract the Key and Bucket names for the asset uploaded to S3
key = s3_record['s3']['object']['key']
bucket = s3_record['s3']['bucket']['name']
logger.info("Bucket: {} \t Key: {}".format(bucket, key))
# Generate a signed URL for the uploaded asset
signed_url = get_signed_url(SIGNED_URL_EXPIRATION, bucket, key)
logger.info("Signed URL: {}".format(signed_url))
# Launch MediaInfo
# Pass the signed URL of the uploaded asset to MediaInfo as an input
# MediaInfo will extract the technical metadata from the asset
# The extracted metadata will be outputted in XML format and
# stored in the variable xml_output
xml_output = subprocess.check_output(["./mediainfo", "--full", "--output=XML", signed_url])
logger.info("Output: {}".format(xml_output))
save_record(key, xml_output)
def save_record(key, xml_output):
"""
Save record to DynamoDB
:param key: S3 Key Name
:param xml_output: Technical Metadata in XML Format
:return:
"""
logger.info("Saving record to DynamoDB...")
TABLE.put_item(
Item={
'keyName': key,
'technicalMetadata': xml_output
}
)
logger.info("Saved record to DynamoDB")
def get_signed_url(expires_in, bucket, obj):
"""
Generate a signed URL
:param expires_in: URL Expiration time in seconds
:param bucket:
:param obj: S3 Key name
:return: Signed URL
"""
s3_cli = boto3.client("s3")
presigned_url = s3_cli.generate_presigned_url('get_object', Params={'Bucket': bucket, 'Key': obj},
ExpiresIn=expires_in)
return presigned_url
</code></pre>
| 0 | 2016-08-17T18:15:12Z | 39,023,870 | <p>I'm fairly certain that this is a restriction imposed by the lambda execution environment, but it can be worked around by executing the script through the shell.<br>
Try providing shell=True to your subprocess call:</p>
<pre><code>xml_output = subprocess.check_output(["./mediainfo", "--full", "--output=XML", signed_url], shell=True)
</code></pre>
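<p>One caveat worth hedging on: with <code>shell=True</code>, the command is conventionally passed as a single string rather than a list (on POSIX, a list combined with <code>shell=True</code> effectively runs only the first element). A minimal sketch with a universally available command:</p>

```python
import subprocess

# shell=False (the default): the command is a list of argv items.
out_list = subprocess.check_output(["echo", "hi"])

# shell=True: the command is one string handed to /bin/sh -c.
out_shell = subprocess.check_output("echo hi", shell=True)

assert out_list.strip() == b"hi"
assert out_shell.strip() == b"hi"
```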
| 1 | 2016-08-18T17:07:58Z | [
"python",
"amazon-web-services",
"aws-lambda"
] |
cp: cannot stat â/mnt/ask78b30/TEST_FILEâ: Remote I/O error | 39,003,801 | <p>I am copying a file to nfs mounted dir. When I run the following command manually, the file is successfully copied</p>
<p>sudo cp TEST_FILE /mnt/ask78b30</p>
<p>However, when I use the same command in a python script, I get the following error,</p>
<p>running cmd = sudo cp TEST_FILE /mnt/ask78b30</p>
<p>cp: cannot stat â/mnt/ask78b30/TEST_FILEâ: Remote I/O error</p>
<p>Below is the code:</p>
<pre><code> cmd = "sudo cp "+file_name_arg+" "+ mount_pt_arg
print "cmd = ", cmd
os.system(cmd)
</code></pre>
<hr>
<p>Note: Earlier the command used to fail since the â special char was added. I changed the output tty to UTF-8 and that got resolved:
cp: cannot stat â/mnt/askdab3c/TEST_FILEâ: Remote I/O error</p>
 | 1 | 2016-08-17T18:23:32Z | 39,003,862 | <p>I would recommend using Python's shutil to copy files instead of calling cp:</p>
<pre><code>from shutil import copyfile
copyfile(src, dst)
</code></pre>
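<p>A self-contained usage sketch (the file names are made up, created via <code>tempfile</code>):</p>

```python
import os
import shutil
import tempfile

# Create a throwaway source file with known contents.
fd, src = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"payload")

dst = src + ".copy"
shutil.copyfile(src, dst)          # copies contents (not metadata)

with open(dst, "rb") as f:
    assert f.read() == b"payload"

os.remove(src)
os.remove(dst)
```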
| 0 | 2016-08-17T18:27:10Z | [
"python",
"linux",
"shell",
"mount",
"cp"
] |
How to find a word inside a list and search backwards from that word? | 39,003,837 | <p>How can I search for a specific word inside a list? Once I have searched for that specific word, how can I search backwards from there? In my example, I need to search for the city, and then search backwards to find the street type (ex: Rd, St, Ave, etc.)</p>
<p>I first allow a user to input an address, like <em>123 Fakeville St SW San Francisco CA 90215</em>:</p>
<pre><code>searchWord = 'San Francisco'
searchWord = searchWord.upper()
address = raw_input("Type an address: ").upper()
</code></pre>
<p>Once the address is entered, I split it using <code>address = address.split()</code>, which results in:</p>
<p><code>['123', 'Fakeville', 'St', 'SW', 'San Francisco', 'CA', '90215']</code></p>
<p>I then search for city in the list:</p>
<pre><code>for items in address:
    if searchWord in items:
        print searchWord
</code></pre>
<p>But I'm not sure how to count backwards to find the street type (ex: St).</p>
| 1 | 2016-08-17T18:25:14Z | 39,004,104 | <pre><code>for items in address:
    if searchWord in items:
        for each in reversed(address[0:address.index(searchWord)]):
            if each == 'St':
                print each
</code></pre>
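<p>A self-contained version of the same idea on the question's example data (a sketch, collecting matches instead of printing):</p>

```python
address = ['123', 'Fakeville', 'St', 'SW', 'San Francisco', 'CA', '90215']
searchWord = 'San Francisco'

# Walk backwards from just before the city, collecting street-type hits.
found = []
for each in reversed(address[:address.index(searchWord)]):
    if each == 'St':
        found.append(each)

assert found == ['St']
```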
<p>Once you find the city, reverse traverse the list using reversed</p>
| 0 | 2016-08-17T18:43:31Z | [
"python",
"list"
] |
How to find a word inside a list and search backwards from that word? | 39,003,837 | <p>How can I search for a specific word inside a list? Once I have searched for that specific word, how can I search backwards from there? In my example, I need to search for the city, and then search backwards to find the street type (ex: Rd, St, Ave, etc.)</p>
<p>I first allow a user to input an address, like <em>123 Fakeville St SW San Francisco CA 90215</em>:</p>
<pre><code>searchWord = 'San Francisco'
searchWord = searchWord.upper()
address = raw_input("Type an address: ").upper()
</code></pre>
<p>Once the address is entered, I split it using <code>address = address.split()</code>, which results in:</p>
<p><code>['123', 'Fakeville', 'St', 'SW', 'San Francisco', 'CA', '90215']</code></p>
<p>I then search for city in the list:</p>
<pre><code>for items in address:
    if searchWord in items:
        print searchWord
</code></pre>
<p>But I'm not sure how to count backwards to find the street type (ex: St).</p>
| 1 | 2016-08-17T18:25:14Z | 39,004,149 | <p>You can use <code>list.index</code> method to search the index of an item in a list.</p>
<p>There is no <code>list.rindex</code> method to search backward.
You need to use:</p>
<pre><code>rev_idx = len(my_list) - my_list[::-1].index(item) - 1
</code></pre>
<p>I don't really understand what your aim is, but I can explain how to search backward for the "St" string in the <em>address</em> list of strings:</p>
<pre><code>address = ['123', 'Fakeville', 'St', 'SW', 'San Francisco', 'CA', '90215']
town_idx = address.index('San Francisco')
print(town_idx)
# You'll get: 4
before = address[:town_idx]
st_index = len(before) - before[::-1].index("St") - 1
print(st_index)
# You'll get: 2
</code></pre>
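<p>The same computation, checked with assertions instead of prints (sketch):</p>

```python
address = ['123', 'Fakeville', 'St', 'SW', 'San Francisco', 'CA', '90215']
town_idx = address.index('San Francisco')
before = address[:town_idx]
# Reverse-search: last occurrence of 'St' before the town.
st_index = len(before) - before[::-1].index('St') - 1
assert (town_idx, st_index) == (4, 2)
assert address[st_index] == 'St'
```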
| 0 | 2016-08-17T18:46:17Z | [
"python",
"list"
] |
passing arguments when scheduling a function with asyncio/aiocron | 39,003,853 | <p>I am using the <code>aiocron</code> library to schedule function calls for certain times of day. This is a minimal example of what I want to do. In this example, I want to send a "good morning" message every day at 8:00</p>
<pre><code>class GoodMorningSayer:
@asyncio.coroutine
    def send_message(self, text):
#insert fancy networking stuff here
@aiocron.crontab("0 8 * * *")
@asyncio.coroutine
def say_good_morning():
yield from self.send_message("good morning!")
# I don't have self here, so this will produce an error
</code></pre>
<p>Unfortunately, I have no way of getting <code>self</code> in my <code>say_good_morning</code> method, as <code>aiocron</code> obviously does not know or care what class the function it calls is in. This makes aiocron rather useless in my case, which is a shame. Is there any way I can get <code>self</code> in a function called by <code>aiocron</code>?</p>
| 1 | 2016-08-17T18:26:12Z | 39,005,726 | <p>I solved this myself for now by using a closure:</p>
<pre><code>class GoodMorningSayer:
def __init__(self):
@aiocron.crontab("0 8 * * *")
@asyncio.coroutine
        def on_morning():
yield from self.send_message("good morning!")
@asyncio.coroutine
    def send_message(self, text):
#insert fancy networking stuff here
</code></pre>
<p>In the actual program I wrapped this in some boilerplate that eventually calls self.say_good_morning to keep the code cleaner</p>
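<p>The pattern generalizes beyond aiocron; here is a minimal sketch (no aiocron involved, the <code>register</code> decorator is a stand-in) of how a closure defined in <code>__init__</code> captures <code>self</code>:</p>

```python
# Generic sketch: a callback defined inside __init__ closes over `self`,
# so a registration decorator that knows nothing about classes still works.
class Greeter:
    def __init__(self, register):
        @register
        def on_event():
            return self.message()   # `self` is captured from __init__'s scope

    def message(self):
        return "good morning!"

callbacks = []
Greeter(lambda f: callbacks.append(f) or f)
assert callbacks[0]() == "good morning!"
```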
| 0 | 2016-08-17T20:29:15Z | [
"python",
"python-3.x",
"python-asyncio"
] |
Django encoding problems with CSV | 39,003,878 | <p>File export works well, but I have a problem with encoding data.
Where did I make mistakes?</p>
<p>My code is </p>
<pre><code>for user in users:
    result = user[0].encode('utf-8')
    for x in filter(lambda q: q is not None, user):
        result += ', '
        if type(x) in (str, unicode):
            result += x.encode('utf-8')
        else:
            result += str(x)
    print type(result), result
    writer.writerow(result)
return response
</code></pre>
| -4 | 2016-08-17T18:28:36Z | 39,005,251 | <p>The .encode method gets applied to a Unicode string to make a byte-string; if your CSV data is NOT in utf-8, and instead encoded let's say in latin-1, then you need a "transcoding". Something like this:</p>
<pre><code>line.decode('latin-1').encode('utf-8')
</code></pre>
<p>If you know your CSV encoding, then replace latin-1 with whatever your input data encoding is.</p>
<p>Also, if you are not sure what the CSV file's encoding is, then you may want to consider using <a href="http://pypi.python.org/pypi/chardet" rel="nofollow">chardet</a>; you can learn how to use it at <a href="http://chardet.readthedocs.io/en/latest/usage.html" rel="nofollow">readthedocs</a>.</p>
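<p>A small round-trip sketch of the transcoding idea (Python 3 syntax):</p>

```python
s = "café"
latin1_bytes = s.encode("latin-1")

# The transcoding step from above: latin-1 bytes -> text -> utf-8 bytes.
utf8_bytes = latin1_bytes.decode("latin-1").encode("utf-8")

assert utf8_bytes.decode("utf-8") == s
assert latin1_bytes != utf8_bytes   # 'é' is 1 byte in latin-1, 2 in utf-8
```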
| 0 | 2016-08-17T19:58:28Z | [
"python",
"django",
"csv",
"encoding",
"decoding"
] |
Django registration | change behaviour | 39,003,883 | <p>After succesfully registering, the user is redirected to the template 'registration_done.html'.</p>
<p>Is there any way of changing this behaviour to redirecting the user to the registration page and displaying a message?</p>
<p>I tried these code below also tried different ways to change it but have different types of errors in different cases.</p>
<p><strong>urls.py</strong></p>
<pre><code> url(r'^register/$',
views.register,
{
'success_url': '/accounts/register/?success=true'
},
name='register'),
</code></pre>
<p><strong>view.py</strong></p>
<pre><code>def register(request):
if request.method == 'POST':
user_form = UserRegistrationForm(request.POST)
if user_form.is_valid():
# Create a new user object but avoid saving it yet
new_user = user_form.save(commit=False)
# Set the chosen password
new_user.set_password(user_form.cleaned_data['password'])
# Save the User object
new_user.save()
success = request.GET.get('success', None)
return render(request, {'new_user': new_user, 'success': success})
else:
user_form = UserRegistrationForm()
return render(request, 'account/register.html', {'user_form': user_form})
</code></pre>
<p><strong>registration.html</strong>:</p>
<pre><code>{% if success %}
<p>{% trans 'Successfull registration!' %}</p>
{% endif %}
</code></pre>
<p>What did I do wrong?!</p>
<p><strong>Error</strong>:</p>
<pre><code>Traceback (most recent call last):
File "C:\Python34\lib\site-packages\django\core\handlers\base.py", line 149, in get_response
response = self.process_exception_by_middleware(e, request)
File "C:\Python34\lib\site-packages\django\core\handlers\base.py", line 147, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
TypeError: register() got an unexpected keyword argument 'success_url'
[18/Aug/2016 14:17:55] "GET /en/account/register/ HTTP/1.1" 500 59886
</code></pre>
| 1 | 2016-08-17T18:28:48Z | 39,013,555 | <p>You are trying to pass the kwarg <code>success_url</code> to your function <code>register</code></p>
<p>Your function <code>register(request)</code> only accepts one argument: 'request'.</p>
<p>So you can accept a second argument, say success_url, like so</p>
<pre><code>def register(request, success_url):
...
</code></pre>
<p>But there is no point doing that if success_url is a constant. In that case, just define it in your register function</p>
<pre><code>def register(request):
success_url = 'foo'
</code></pre>
<p>Another point here is that you want to <a href="https://docs.djangoproject.com/en/dev/ref/urlresolvers/#reverse" rel="nofollow">reverse that url</a>, rather than hard code it.</p>
<p>Also I'm not sure why you would want to use this:</p>
<pre><code>success = request.GET.get('success', None)
</code></pre>
<p>Any user could submit their own success variable in the GET request. Are you expecting this from a form? It looks like a security vulnerability if the user can just say that they were successful in request.GET.</p>
<p>Right now you aren't actually rendering a template on success anyway because you are missing a template name/path from your first <code>render</code> call.</p>
<p>So either add a template or redirect them to another page.</p>
| 1 | 2016-08-18T08:40:11Z | [
"python",
"django",
"python-3.x",
"django-views",
"django-1.9"
] |
Can't find TF_MIN_GPU_MULTIPROCESSOR_COUNT | 39,003,909 | <p>I get a message that says my GPU Device is ignored because its multiprocessor count is lower than the minimum set. However, it gives me the environment variable <em>TF_MIN_GPU_MULTIPROCESSOR_COUNT</em> but it doesn't seem to exist because I keep getting command not found. When I look at the environment variables using <em>set</em> or <em>printenv</em> and grep for the variable name, it doesn't exist. Does anyone know where I can find it or how I can change its set value?</p>
| 0 | 2016-08-17T18:30:17Z | 39,004,702 | <p>Do something like this before running your main script
<code>export TF_MIN_GPU_MULTIPROCESSOR_COUNT=4</code> </p>
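<p>Alternatively (a sketch), the variable can be set from inside Python, as long as it happens <em>before</em> TensorFlow is imported:</p>

```python
import os

# Must run before `import tensorflow` for TensorFlow to see it.
os.environ["TF_MIN_GPU_MULTIPROCESSOR_COUNT"] = "4"
assert os.environ["TF_MIN_GPU_MULTIPROCESSOR_COUNT"] == "4"
```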
<p>Note though that the default is set for a reason -- if you enable a slower GPU by changing that variable, your program may run slower than it would without any GPU available, because TensorFlow will try to run everything on that GPU</p>
| 0 | 2016-08-17T19:23:29Z | [
"python",
"tensorflow",
"gpu"
] |
Joining elements in list in python | 39,003,982 | <p>I have the data below, I want to find the unique values in 'a' and sum the data for the corresponding indices from 'b', 'c'. Any ideas on the best way to do this? I'm not sure where to start.</p>
<pre><code>a = ['x', 'y', 'z', 'z', 'x', 'w']
b = [ 1, 4, 5, 7, 9, 5]
c = [ 3, 6, 7, 8, 9, 7]
</code></pre>
<p>After processing,</p>
<pre><code>a = ['x', 'y', 'z', 'w']
b = [ 10, 4, 12, 5 ]
c = [ 12, 6, 15, 7 ]
</code></pre>
| 2 | 2016-08-17T18:35:05Z | 39,004,173 | <p>Use <code>collections.defaultdict</code>:</p>
<pre><code>from collections import defaultdict
a = ['x', 'y', 'z', 'z', 'x', 'w']
b = [ 1, 4, 5, 7, 9, 5]
c = [ 3, 6, 7, 8, 9, 7]
b_unique = defaultdict(int)
c_unique = defaultdict(int)
for k, bv, cv in zip(a,b,c):
b_unique[k] += bv
c_unique[k] += cv
</code></pre>
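<p>Completing the sketch with first-seen key order preserved, so the three lists from the question can be rebuilt (expected values taken from the question):</p>

```python
from collections import defaultdict

a = ['x', 'y', 'z', 'z', 'x', 'w']
b = [1, 4, 5, 7, 9, 5]
c = [3, 6, 7, 8, 9, 7]

b_unique, c_unique = defaultdict(int), defaultdict(int)
order = []                      # first-seen order of the keys
for k, bv, cv in zip(a, b, c):
    if k not in b_unique:       # membership test does not create entries
        order.append(k)
    b_unique[k] += bv
    c_unique[k] += cv

assert order == ['x', 'y', 'z', 'w']
assert [b_unique[k] for k in order] == [10, 4, 12, 5]
assert [c_unique[k] for k in order] == [12, 6, 15, 7]
```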
| 2 | 2016-08-17T18:48:08Z | [
"python",
"list"
] |
Joining elements in list in python | 39,003,982 | <p>I have the data below, I want to find the unique values in 'a' and sum the data for the corresponding indices from 'b', 'c'. Any ideas on the best way to do this? I'm not sure where to start.</p>
<pre><code>a = ['x', 'y', 'z', 'z', 'x', 'w']
b = [ 1, 4, 5, 7, 9, 5]
c = [ 3, 6, 7, 8, 9, 7]
</code></pre>
<p>After processing,</p>
<pre><code>a = ['x', 'y', 'z', 'w']
b = [ 10, 4, 12, 5 ]
c = [ 12, 6, 15, 7 ]
</code></pre>
| 2 | 2016-08-17T18:35:05Z | 39,004,178 | <p>Could do something like this using <a href="https://docs.python.org/3/library/collections.html#ordereddict-objects" rel="nofollow">OrderedDict</a>, since you need to maintain the same order:</p>
<pre><code>from collections import OrderedDict
a = ['x', 'y', 'z', 'z', 'x', 'w']
b = [ 1, 4, 5, 7, 9, 5]
c = [ 3, 6, 7, 8, 9, 7]
b_data = OrderedDict()
c_data = OrderedDict()
for letter, b_value, c_value in zip(a, b, c):
if letter in b_data:
b_data[letter] += b_value
c_data[letter] += c_value
else:
b_data[letter] = b_value
c_data[letter] = c_value
a = b_data.keys()
b = b_data.values()
c = c_data.values()
print(a)
print(b)
print(c)
</code></pre>
<p>Output:</p>
<pre><code>['x', 'y', 'z', 'w']
[10, 4, 12, 5]
[12, 6, 15, 7]
</code></pre>
| 4 | 2016-08-17T18:48:27Z | [
"python",
"list"
] |
Joining elements in list in python | 39,003,982 | <p>I have the data below, I want to find the unique values in 'a' and sum the data for the corresponding indices from 'b', 'c'. Any ideas on the best way to do this? I'm not sure where to start.</p>
<pre><code>a = ['x', 'y', 'z', 'z', 'x', 'w']
b = [ 1, 4, 5, 7, 9, 5]
c = [ 3, 6, 7, 8, 9, 7]
</code></pre>
<p>After processing,</p>
<pre><code>a = ['x', 'y', 'z', 'w']
b = [ 10, 4, 12, 5 ]
c = [ 12, 6, 15, 7 ]
</code></pre>
| 2 | 2016-08-17T18:35:05Z | 39,004,282 | <p>Using Pandas:</p>
<pre><code>import pandas as pd
# To keep original order of values.
a_ordered = [val for idx, val in enumerate(a) if val not in a[:idx]]
# >>> a_ordered
# OUT: ['x', 'y', 'z', 'w']
df = pd.DataFrame({'a': a, 'b': b, 'c': c}).groupby('a').sum().T[a_ordered]
a = df.columns.tolist()
b, c = df.values.tolist()
>>> a
['x', 'y', 'z', 'w']
>>> b
[10, 4, 12, 5]
>>> c
[12, 6, 15, 7]
</code></pre>
| 2 | 2016-08-17T18:54:33Z | [
"python",
"list"
] |
Joining elements in list in python | 39,003,982 | <p>I have the data below, I want to find the unique values in 'a' and sum the data for the corresponding indices from 'b', 'c'. Any ideas on the best way to do this? I'm not sure where to start.</p>
<pre><code>a = ['x', 'y', 'z', 'z', 'x', 'w']
b = [ 1, 4, 5, 7, 9, 5]
c = [ 3, 6, 7, 8, 9, 7]
</code></pre>
<p>After processing,</p>
<pre><code>a = ['x', 'y', 'z', 'w']
b = [ 10, 4, 12, 5 ]
c = [ 12, 6, 15, 7 ]
</code></pre>
| 2 | 2016-08-17T18:35:05Z | 39,004,516 | <p>Just felt like playing a little code-golf:</p>
<pre><code>>>> from collections import OrderedDict
>>> od = OrderedDict()
>>> for t in zip(a,zip(b,c)):
... od[t[0]] = [i + x for i,x in zip(od.get(t[0],[0,0]), t[1])]
...
>>> od
OrderedDict([('x', [10, 12]), ('y', [4, 6]), ('z', [12, 15]), ('w', [5, 7])])
>>> a = list(od.keys())
>>> b,c = map(list,zip(*od.values()))
>>> a
['x', 'y', 'z', 'w']
>>> b
[10, 4, 12, 5]
>>> c
[12, 6, 15, 7]
>>>
</code></pre>
| 0 | 2016-08-17T19:11:55Z | [
"python",
"list"
] |
Joining elements in list in python | 39,003,982 | <p>I have the data below, I want to find the unique values in 'a' and sum the data for the corresponding indices from 'b', 'c'. Any ideas on the best way to do this? I'm not sure where to start.</p>
<pre><code>a = ['x', 'y', 'z', 'z', 'x', 'w']
b = [ 1, 4, 5, 7, 9, 5]
c = [ 3, 6, 7, 8, 9, 7]
</code></pre>
<p>After processing,</p>
<pre><code>a = ['x', 'y', 'z', 'w']
b = [ 10, 4, 12, 5 ]
c = [ 12, 6, 15, 7 ]
</code></pre>
| 2 | 2016-08-17T18:35:05Z | 39,004,728 | <p>here is an elementary approach</p>
<pre><code>d1 = defaultdict(int)
d2 = defaultdict(int)
for x,y,z in zip(a,b,c):
    d1[x] += y
    d2[x] += z
an=list()
bn=list()
cn=list()
for k in sorted(d1.keys(), key=lambda x:a.index(x)):
an.append(k)
bn.append(d1[k])
cn.append(d2[k])
print([an, bn, cn])
# [['x', 'y', 'z', 'w'], [10, 4, 12, 5], [12, 6, 15, 7]]
</code></pre>
| 0 | 2016-08-17T19:24:42Z | [
"python",
"list"
] |
Data entry in sqlite database | 39,004,080 | <p>I am a fresh user and I am not good at coding yet. I created a database with 3 columns: <code>(date, station, pcp)</code>.
I have a <code>41_years_data</code> set, and using a <code>for</code> loop I would like to insert this data set into the database. I could not do it correctly. When I run the code, it gives </p>
<pre><code>"c.execute("INSERT INTO pcp VALUES(kdm,station,klm)")
sqlite3.OperationalError: no such column: kdm" error.
</code></pre>
<p>I tried a few things, but nothing worked.</p>
<p>My code is below</p>
<pre><code>import sqlite3
from datetime import date, datetime, timedelta
conn=sqlite3.connect("pcp.db")
c=conn.cursor()
pcp=open("p.txt")
station="1"
def create_table():
c.execute("CREATE TABLE IF NOT EXISTS pcp (date DATE,stations TEXT,pcp DOUBLE)")
def perdelta(start, end, delta):
curr = start
while curr < end:
yield curr
curr += delta
def data_entry():
for i in pcp:
k=print(i)
def data_entry2():
for a in perdelta(date(1979, 1, 1), date(2015, 1, 1), timedelta(days=1)):
b=print(a)
def dyn():
kdm=data_entry2()
klm=data_entry()
c.execute("INSERT INTO pcp(date,stations,pcp) VALUES(?,?,?)",( kdm,station,klm ))
conn.commit()
create_table()
dyn()
</code></pre>
| 1 | 2016-08-17T18:41:48Z | 39,004,137 | <p>Your <code>INSERT</code> statement </p>
<pre><code>c.execute("INSERT INTO pcp VALUES(kdm,station,klm)")
</code></pre>
<p>contains the names of your variables that hold the data values.</p>
<p>You need to construct the <code>INSERT</code> statement from the values inside those variables; note that non-string values such as the date must be converted with <code>str()</code>, and text values need quotes in the SQL. Like so. (Maybe there is a more efficient way, but I am not too familiar with python.)</p>
<pre><code>insert_sql = "INSERT INTO pcp(date,stations,pcp) VALUES('" + str(kdm) + "','" + station + "'," + str(klm) + ")"
c.execute(insert_sql)
</code></pre>
<p>Also, try to use query parameters instead of string concatenation; as <strong>@colonel Thirty</strong> has stated, otherwise you might expose yourself to SQL injection. Think <strong>Bobby Tables</strong>.</p>
<p><a href="http://i.stack.imgur.com/hQwzd.png" rel="nofollow"><img src="http://i.stack.imgur.com/hQwzd.png" alt="enter image description here"></a></p>
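With query parameters, that might look like the following (a minimal sketch against an in-memory database, so the table and values here are stand-ins for the real ones):

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")  # in-memory DB just for illustration
c = conn.cursor()
c.execute("CREATE TABLE IF NOT EXISTS pcp (date DATE, stations TEXT, pcp DOUBLE)")

kdm, station, klm = date(1979, 1, 1), "1", 2.5
# the ? placeholders let sqlite3 do the quoting/escaping for you
c.execute("INSERT INTO pcp(date, stations, pcp) VALUES (?, ?, ?)",
          (str(kdm), station, klm))
conn.commit()

c.execute("SELECT date, stations, pcp FROM pcp")
print(c.fetchone())  # ('1979-01-01', '1', 2.5)
```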
| 0 | 2016-08-17T18:45:39Z | [
"python",
"sqlite3"
] |
Data entry in sqlite database | 39,004,080 | <p>I am a fresh user and I am not good at coding yet. I created a database with 3 columns: <code>(date, station, pcp)</code>.
I have a <code>41_years_data</code> set, and using a <code>for</code> loop I would like to insert this data set into the database. I could not do it correctly. When I run the code, it gives </p>
<pre><code>"c.execute("INSERT INTO pcp VALUES(kdm,station,klm)")
sqlite3.OperationalError: no such column: kdm" error.
</code></pre>
<p>I tried a few things, but nothing worked.</p>
<p>My code is below</p>
<pre><code>import sqlite3
from datetime import date, datetime, timedelta
conn=sqlite3.connect("pcp.db")
c=conn.cursor()
pcp=open("p.txt")
station="1"
def create_table():
c.execute("CREATE TABLE IF NOT EXISTS pcp (date DATE,stations TEXT,pcp DOUBLE)")
def perdelta(start, end, delta):
curr = start
while curr < end:
yield curr
curr += delta
def data_entry():
for i in pcp:
k=print(i)
def data_entry2():
for a in perdelta(date(1979, 1, 1), date(2015, 1, 1), timedelta(days=1)):
b=print(a)
def dyn():
kdm=data_entry2()
klm=data_entry()
c.execute("INSERT INTO pcp(date,stations,pcp) VALUES(?,?,?)",( kdm,station,klm ))
conn.commit()
create_table()
dyn()
</code></pre>
| 1 | 2016-08-17T18:41:48Z | 39,024,054 | <p>I solved the problem, and it inserted the date, station and pcp values into the pcp table's columns, but I had to use range(1000) (1000 is just a trial number) for it. Is there any suggestion to make my code better?</p>
<pre><code>import sqlite3
from sqlite3 import *
from datetime import date, datetime, timedelta
conn=sqlite3.connect("pcp.db")
c=conn.cursor()
pcp=open("p.txt")
prr=open("date.txt","w")
station="1"
date_pcp=[]
pcp_values=[]
def create_table():
c.execute("CREATE TABLE IF NOT EXISTS pcp (date DATE,stations TEXT,pcp DOUBLE)")
def perdelta(start, end, delta):
curr = start
while curr < end:
yield curr
curr += delta
def data_entry():
for i in pcp:
pcp_values.append(i)
def data_entry2():
for a in perdelta(date(1979, 1, 1), date(2014, 7, 31), timedelta(days=1)):
date_pcp.append(a)
def dyn():
data_entry2()
data_entry()
for l in range(1000):
kdm=date_pcp[l]
klm=pcp_values[l]
c.execute("""INSERT INTO pcp(date,stations,pcp) VALUES(?,?,?)""",( kdm,station,klm ))
conn.commit()
create_table()
dyn()
</code></pre>
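One possible improvement (a sketch only; the sample values below stand in for the lines of <code>p.txt</code>): instead of a hard-coded <code>range(1000)</code>, zip the two lists together so the loop bound is always the length of the shorter list, and let <code>executemany</code> do the inserting:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")  # in-memory stand-in for pcp.db
c = conn.cursor()
c.execute("CREATE TABLE IF NOT EXISTS pcp (date DATE, stations TEXT, pcp DOUBLE)")

station = "1"
date_pcp = [date(1979, 1, 1) + timedelta(days=i) for i in range(3)]
pcp_values = ["1.2", "0.0", "3.4"]  # stand-ins for lines read from p.txt

# zip() stops at the shorter list, so no manual bound is needed
rows = [(str(d), station, float(v)) for d, v in zip(date_pcp, pcp_values)]
c.executemany("INSERT INTO pcp(date, stations, pcp) VALUES (?, ?, ?)", rows)
conn.commit()
```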
| 0 | 2016-08-18T17:20:11Z | [
"python",
"sqlite3"
] |
In python multi-producer & multi-consumer threading, may queue.join() be unreliable? | 39,004,095 | <p>A python multi-producer & multi-consumer threading pseudocode:</p>
<pre><code>def threadProducer():
while upstreams_not_done:
data = do_some_work()
queue_of_data.put(data)
def threadConsumer():
while True:
data = queue_of_data.get()
do_other_work()
queue_of_data.task_done()
queue_of_data = queue.Queue()
list_of_producers = create_and_start_producers()
list_of_consumers = create_and_start_consumers()
queue_of_data.join()
# is now all work done?
</code></pre>
<p>In which <code>queue_of_data.task_done()</code> is called for each item in queue.</p>
<p>When <strong>producers work slower than consumers</strong>, is there a possibility that <code>queue_of_data.join()</code> <strong>unblocks at some moment when no producer has generated data yet, but all consumers have finished their tasks</strong> via <code>task_done()</code>?</p>
<p>And if <code>Queue.join()</code> is not reliable like this, how can I check whether all the work is done?</p>
| 1 | 2016-08-17T18:43:01Z | 39,004,610 | <p>The usual way is to put a sentinel value (like <code>None</code>) on the queue, one for each consumer thread, when producers are done. Then consumers are written to exit the thread when it pulls <code>None</code> from the queue.</p>
<p>So, e.g., in the main program:</p>
<pre><code>for t in list_of_producers:
t.join()
# Now we know all producers are done.
for t in list_of_consumers:
queue_of_data.put(None) # tell a consumer we're done
for t in list_of_consumers:
t.join()
</code></pre>
<p>and consumers look like:</p>
<pre><code>def threadConsumer():
while True:
data = queue_of_data.get()
if data is None:
break
do_other_work()
</code></pre>
<p>Note: if producers can overwhelm consumers, create the queue with a maximum size. Then <code>queue.put()</code> will block when the queue reaches that size, until a consumer removes something from the queue.</p>
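Putting the pieces together, a minimal runnable sketch of the whole pattern (with toy work in place of <code>do_some_work()</code>/<code>do_other_work()</code>) could look like:

```python
import queue
import threading

q = queue.Queue(maxsize=100)  # bounded, so producers block instead of racing ahead
results = []
lock = threading.Lock()

def producer(items):
    for item in items:
        q.put(item)           # blocks while the queue is full

def consumer():
    while True:
        data = q.get()
        if data is None:      # sentinel: no more work, exit the thread
            break
        with lock:
            results.append(data * 2)  # toy stand-in for do_other_work()

producers = [threading.Thread(target=producer, args=(range(i * 10, i * 10 + 10),))
             for i in range(3)]
consumers = [threading.Thread(target=consumer) for _ in range(4)]
for t in producers + consumers:
    t.start()
for t in producers:
    t.join()                  # all producers finished -> safe to send sentinels
for _ in consumers:
    q.put(None)               # one sentinel per consumer thread
for t in consumers:
    t.join()

print(len(results))  # 30
```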
| 1 | 2016-08-17T19:18:13Z | [
"python",
"multithreading",
"queue"
] |
Python-Using character exclusion and quantifiers in a lookbehind to ignore characters if they exist | 39,004,117 | <p>I want to be able to grab values with a regex regardless of whether or not something exists in a lookbehind.
E.g. the two strings below:</p>
<pre><code> string_1 = "this('isastring', 'secondstring')"
 string_2 = "this(\\'issomeotherstring\\', \\'ADiffSecondString\\')"
</code></pre>
<p>What I want is to be able to grab what is inside the quotes in the second string, regardless of whether or not the values have the double backslashes. I tried using a lookbehind with a character exclusion and quantifier, but got an error that a lookbehind must be fixed-width. Sorry, very new to regex.</p>
| 0 | 2016-08-17T18:44:35Z | 39,004,297 | <p>If by "grab", you mean find all strings, you can do as follow:</p>
<pre><code>string_1 = "this('isastring', 'secondstring')"
string_2 = "this(\\'issomeotherstring\\', \\'ADiffSecondString\\')"
import re
findall_str = re.compile(r"\\?'(.*?)\\?'").findall
print(findall_str(string_1))
print(findall_str(string_2))
</code></pre>
<p>You'll get:</p>
<pre><code>['isastring', 'secondstring']
['issomeotherstring', 'ADiffSecondString']
</code></pre>
| 0 | 2016-08-17T18:55:31Z | [
"python",
"regex"
] |
Most efficient way to turn dictionary into symmetric/distance matrix (Python | Pandas) | 39,004,152 | <p>I'm doing pairwise distance for something w/ a weird distance metric. I have a dictionary like <code>{(key_A, key_B):distance_value}</code> and I want to make a symmetric <code>pd.DataFrame</code> like a distance matrix. </p>
<p>What is the most efficient way to do this? I found one way but it doesn't seem like the best way to do this... <strong>Is there anything in <code>NumPy</code> or <code>Pandas</code> that does this type of operation?</strong> or just a quicker way? My way is <code>1.46 ms per loop</code> </p>
<pre><code>np.random.seed(0)
D_pair_value = dict()
for pair in itertools.combinations(list("ABCD"),2):
D_pair_value[pair] = np.random.randint(0,5)
D_pair_value
# {('A', 'B'): 4,
# ('A', 'C'): 0,
# ('A', 'D'): 3,
# ('B', 'C'): 3,
# ('B', 'D'): 3,
# ('C', 'D'): 1}
D_nested_dict = defaultdict(dict)
for (p,q), value in D_pair_value.items():
D_nested_dict[p][q] = value
D_nested_dict[q][p] = value
# Fill diagonal with zeros
DF = pd.DataFrame(D_nested_dict)
np.fill_diagonal(DF.values, 0)
DF
</code></pre>
<p><a href="http://i.stack.imgur.com/ZWrTr.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZWrTr.png" alt="enter image description here"></a></p>
| 2 | 2016-08-17T18:46:29Z | 39,004,744 | <p>You can use <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.squareform.html#scipy.spatial.distance.squareform"><code>scipy.spatial.distance.squareform</code></a>, which converts a vector of distance computations, i.e. <code>[d(A,B), d(A,C), ..., d(C,D)]</code>, into the distance matrix you're looking for.</p>
<p><strong>Method 1: Distances Stored in a List</strong></p>
<p>If you're computing your distances in order, like in your example code and in my example distance vector, I'd avoid using a dictionary and just store the results in a list, and do something like:</p>
<pre><code>from scipy.spatial.distance import squareform
df = pd.DataFrame(squareform(dist_list), index=list('ABCD'), columns=list('ABCD'))
</code></pre>
<p><strong>Method 2: Distances Stored in a Dictionary</strong></p>
<p>If you're computing things out of order and a dictionary is required, you just need to get a distance vector that's properly sorted:</p>
<pre><code>from scipy.spatial.distance import squareform
dist_list = [dist[1] for dist in sorted(D_pair_value.items())]
df = pd.DataFrame(squareform(dist_list), index=list('ABCD'), columns=list('ABCD'))
</code></pre>
<p><strong>Method 3: Distances Stored in a Sorted Dictionary</strong></p>
<p>If a dictionary is required, note that there's a package called <a href="https://pypi.python.org/pypi/sortedcontainers"><code>sortedcontainers</code></a> which has a <a href="http://www.grantjenks.com/docs/sortedcontainers/sorteddict.html"><code>SortedDict</code></a> that essentially would solve the sorting issue for you. To use it, all you'd need to change is initializing <code>D_pair_value</code> as a <code>SortedDict()</code> instead of a <code>dict</code>. Using your example setup:</p>
<pre><code>from scipy.spatial.distance import squareform
from sortedcontainers import SortedDict
np.random.seed(0)
D_pair_value = SortedDict()
for pair in itertools.combinations(list("ABCD"),2):
D_pair_value[pair] = np.random.randint(0,5)
df = pd.DataFrame(squareform(D_pair_value.values()), index=list('ABCD'), columns=list('ABCD'))
</code></pre>
<p><strong>The Resulting Output for Any Method Above:</strong></p>
<pre><code> A B C D
A 0.0 4.0 0.0 3.0
B 4.0 0.0 3.0 3.0
C 0.0 3.0 0.0 1.0
D 3.0 3.0 1.0 0.0
</code></pre>
| 7 | 2016-08-17T19:25:55Z | [
"python",
"pandas",
"numpy",
"matrix",
"distance"
] |
Most efficient way to turn dictionary into symmetric/distance matrix (Python | Pandas) | 39,004,152 | <p>I'm doing pairwise distance for something w/ a weird distance metric. I have a dictionary like <code>{(key_A, key_B):distance_value}</code> and I want to make a symmetric <code>pd.DataFrame</code> like a distance matrix. </p>
<p>What is the most efficient way to do this? I found one way but it doesn't seem like the best way to do this... <strong>Is there anything in <code>NumPy</code> or <code>Pandas</code> that does this type of operation?</strong> or just a quicker way? My way is <code>1.46 ms per loop</code> </p>
<pre><code>np.random.seed(0)
D_pair_value = dict()
for pair in itertools.combinations(list("ABCD"),2):
D_pair_value[pair] = np.random.randint(0,5)
D_pair_value
# {('A', 'B'): 4,
# ('A', 'C'): 0,
# ('A', 'D'): 3,
# ('B', 'C'): 3,
# ('B', 'D'): 3,
# ('C', 'D'): 1}
D_nested_dict = defaultdict(dict)
for (p,q), value in D_pair_value.items():
D_nested_dict[p][q] = value
D_nested_dict[q][p] = value
# Fill diagonal with zeros
DF = pd.DataFrame(D_nested_dict)
np.fill_diagonal(DF.values, 0)
DF
</code></pre>
<p><a href="http://i.stack.imgur.com/ZWrTr.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZWrTr.png" alt="enter image description here"></a></p>
| 2 | 2016-08-17T18:46:29Z | 39,005,028 | <p>Given a dictionary of keys (of single characters) and distances, here's a NumPy-based approach:</p>
<pre><code>def dict2frame(D_pair_value):
# Extract keys and values
k = np.array(D_pair_value.keys())
v = np.array(D_pair_value.values())
# Get row, col indices from keys
idx = (np.fromstring(k.tobytes(),dtype=np.uint8)-65).reshape(-1,2)
# Setup output array and using row,col indices set values from v
N = idx.max()+1
out = np.zeros((N,N),dtype=v.dtype)
out[idx[:,0],idx[:,1]] = v
out[idx[:,1],idx[:,0]] = v
header = list("".join([chr(item) for item in np.arange(N)+65]))
return pd.DataFrame(out,index=header, columns=header)
</code></pre>
<p>Sample run -</p>
<pre><code>In [166]: D_pair_value
Out[166]:
{('A', 'B'): 4,
('A', 'C'): 0,
('A', 'D'): 3,
('B', 'C'): 3,
('B', 'D'): 3,
('C', 'D'): 1}
In [167]: dict2frame(D_pair_value)
Out[167]:
A B C D
A 0 4 0 3
B 4 0 3 3
C 0 3 0 1
D 3 3 1 0
</code></pre>
| 1 | 2016-08-17T19:43:38Z | [
"python",
"pandas",
"numpy",
"matrix",
"distance"
] |
Command failed: ./distribute.sh -l | 39,004,228 | <p>I am trying to follow the instructions on <a href="https://kivy.org/docs/guide/packaging-android-vm.html" rel="nofollow">https://kivy.org/docs/guide/packaging-android-vm.html</a> to create kivy android applications. I installed Kivy Buildozer VM and was following instructions on the Readme file. Created the buildozer.spec file using <code>buildozer init</code> command but <code>buildozer android debug</code>is failing with following output</p>
<pre><code># Check configuration tokens
# Ensure build layout
# Check configuration tokens
# Preparing build
# Check requirements for android
# Run 'dpkg --version'
# Cwd None
Debian `dpkg' package management program version 1.17.5 (amd64).
This is free software; see the GNU General Public License version 2 or
later for copying conditions. There is NO warranty.
# Search for Git (git)
# -> found at /usr/bin/git
# Search for Cython (cython)
# -> found at /usr/local/bin/cython
# Search for Java compiler (javac)
# -> found at /usr/lib/jvm/java-7-openjdk-amd64/bin/javac
# Search for Java keytool (keytool)
# -> found at /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/keytool
# Install platform
# Apache ANT found at /home/kivy/.buildozer/android/platform/apache-ant-1.9.4
# Android SDK found at /home/kivy/.buildozer/android/platform/android-sdk-21
# Android NDK found at /home/kivy/.buildozer/android/platform/android-ndk-r9c
# Check application requirements
# Run './distribute.sh -l'
# Cwd /media/sf_virtual_box/organizer/2nd_vid_tute/tut1_vid4/.buildozer/android/platform/python-for-android
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
PYTHON-FOR-ANDROID ERROR! SEE BELOW FOR SOLUTION:
You are trying to run an old version of python-for-android via
distribute.sh. However, python-for-android has been rewritten and no
longer supports the distribute.sh interface.
If you are using buildozer, you should:
- upgrade buildozer to the latest version (at least 0.30)
- delete the .buildozer folder in your app directory (the same directory that has your buildozer.spec)
- run buildozer again as normal
If you are not using buildozer, see
https://github.com/kivy/python-for-android/blob/master/README.md for
instructions on using the new python-for-android
toolchain. Alternatively, you can get the old toolchain from the
'old_toolchain' branch at
https://github.com/kivy/python-for-android/tree/old_toolchain .
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# Command failed: ./distribute.sh -l
#
# Buildozer failed to execute the last command
# The error might be hidden in the log above this error
# Please read the full log, and search for it before
# raising an issue with buildozer itself.
# In case of a bug report, please add a full log with log_level = 2
</code></pre>
<p>Then I tried to update using <code>buildozer android update</code> But fails giving the following output</p>
<pre><code># Check configuration tokens
# Ensure build layout
# Check configuration tokens
# Preparing build
# Check requirements for android
# Run 'dpkg --version'
# Cwd None
Debian `dpkg' package management program version 1.17.5 (amd64).
This is free software; see the GNU General Public License version 2 or
later for copying conditions. There is NO warranty.
# Search for Git (git)
# -> found at /usr/bin/git
# Search for Cython (cython)
# -> found at /usr/local/bin/cython
# Search for Java compiler (javac)
# -> found at /usr/lib/jvm/java-7-openjdk-amd64/bin/javac
# Search for Java keytool (keytool)
# -> found at /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/keytool
# Install platform
# Run 'git clean -dxf'
# Cwd /media/sf_virtual_box/organizer/2nd_vid_tute/tut1_vid4/.buildozer/android/platform/python-for-android
# Run 'git pull origin master'
# Cwd /media/sf_virtual_box/organizer/2nd_vid_tute/tut1_vid4/.buildozer/android/platform/python-for-android
From https://github.com/kivy/python-for-android
* branch master -> FETCH_HEAD
Already up-to-date.
# Apache ANT found at /home/kivy/.buildozer/android/platform/apache-ant-1.9.4
# Android SDK found at /home/kivy/.buildozer/android/platform/android-sdk-21
# Android NDK found at /home/kivy/.buildozer/android/platform/android-ndk-r9c
# Check application requirements
# Run './distribute.sh -l'
# Cwd /media/sf_virtual_box/organizer/2nd_vid_tute/tut1_vid4/.buildozer/android/platform/python-for-android
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
PYTHON-FOR-ANDROID ERROR! SEE BELOW FOR SOLUTION:
You are trying to run an old version of python-for-android via
distribute.sh. However, python-for-android has been rewritten and no
longer supports the distribute.sh interface.
If you are using buildozer, you should:
- upgrade buildozer to the latest version (at least 0.30)
- delete the .buildozer folder in your app directory (the same directory that has your buildozer.spec)
- run buildozer again as normal
If you are not using buildozer, see
https://github.com/kivy/python-for-android/blob/master/README.md for
instructions on using the new python-for-android
toolchain. Alternatively, you can get the old toolchain from the
'old_toolchain' branch at
https://github.com/kivy/python-for-android/tree/old_toolchain .
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# Command failed: ./distribute.sh -l
#
# Buildozer failed to execute the last command
# The error might be hidden in the log above this error
# Please read the full log, and search for it before
# raising an issue with buildozer itself.
# In case of a bug report, please add a full log with log_level = 2
</code></pre>
<p>How can I fix this?</p>
| 0 | 2016-08-17T18:51:17Z | 39,004,764 | <p>As you can see here in the <a href="https://github.com/kivy/python-for-android/blob/master/distribute.sh" rel="nofollow">python-for-android</a> repository, the distribute.sh script is not supposed to be used anymore. <a href="http://buildozer.readthedocs.io/en/latest/quickstart.html" rel="nofollow">Here</a> is a guide on how to use <code>buildozer</code> directly.</p>
| 0 | 2016-08-17T19:27:12Z | [
"android",
"python",
"kivy"
] |
Command failed: ./distribute.sh -l | 39,004,228 | <p>I am trying to follow the instructions on <a href="https://kivy.org/docs/guide/packaging-android-vm.html" rel="nofollow">https://kivy.org/docs/guide/packaging-android-vm.html</a> to create kivy android applications. I installed Kivy Buildozer VM and was following instructions on the Readme file. Created the buildozer.spec file using <code>buildozer init</code> command but <code>buildozer android debug</code>is failing with following output</p>
<pre><code># Check configuration tokens
# Ensure build layout
# Check configuration tokens
# Preparing build
# Check requirements for android
# Run 'dpkg --version'
# Cwd None
Debian `dpkg' package management program version 1.17.5 (amd64).
This is free software; see the GNU General Public License version 2 or
later for copying conditions. There is NO warranty.
# Search for Git (git)
# -> found at /usr/bin/git
# Search for Cython (cython)
# -> found at /usr/local/bin/cython
# Search for Java compiler (javac)
# -> found at /usr/lib/jvm/java-7-openjdk-amd64/bin/javac
# Search for Java keytool (keytool)
# -> found at /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/keytool
# Install platform
# Apache ANT found at /home/kivy/.buildozer/android/platform/apache-ant-1.9.4
# Android SDK found at /home/kivy/.buildozer/android/platform/android-sdk-21
# Android NDK found at /home/kivy/.buildozer/android/platform/android-ndk-r9c
# Check application requirements
# Run './distribute.sh -l'
# Cwd /media/sf_virtual_box/organizer/2nd_vid_tute/tut1_vid4/.buildozer/android/platform/python-for-android
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
PYTHON-FOR-ANDROID ERROR! SEE BELOW FOR SOLUTION:
You are trying to run an old version of python-for-android via
distribute.sh. However, python-for-android has been rewritten and no
longer supports the distribute.sh interface.
If you are using buildozer, you should:
- upgrade buildozer to the latest version (at least 0.30)
- delete the .buildozer folder in your app directory (the same directory that has your buildozer.spec)
- run buildozer again as normal
If you are not using buildozer, see
https://github.com/kivy/python-for-android/blob/master/README.md for
instructions on using the new python-for-android
toolchain. Alternatively, you can get the old toolchain from the
'old_toolchain' branch at
https://github.com/kivy/python-for-android/tree/old_toolchain .
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# Command failed: ./distribute.sh -l
#
# Buildozer failed to execute the last command
# The error might be hidden in the log above this error
# Please read the full log, and search for it before
# raising an issue with buildozer itself.
# In case of a bug report, please add a full log with log_level = 2
</code></pre>
<p>Then I tried to update using <code>buildozer android update</code> But fails giving the following output</p>
<pre><code># Check configuration tokens
# Ensure build layout
# Check configuration tokens
# Preparing build
# Check requirements for android
# Run 'dpkg --version'
# Cwd None
Debian `dpkg' package management program version 1.17.5 (amd64).
This is free software; see the GNU General Public License version 2 or
later for copying conditions. There is NO warranty.
# Search for Git (git)
# -> found at /usr/bin/git
# Search for Cython (cython)
# -> found at /usr/local/bin/cython
# Search for Java compiler (javac)
# -> found at /usr/lib/jvm/java-7-openjdk-amd64/bin/javac
# Search for Java keytool (keytool)
# -> found at /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/keytool
# Install platform
# Run 'git clean -dxf'
# Cwd /media/sf_virtual_box/organizer/2nd_vid_tute/tut1_vid4/.buildozer/android/platform/python-for-android
# Run 'git pull origin master'
# Cwd /media/sf_virtual_box/organizer/2nd_vid_tute/tut1_vid4/.buildozer/android/platform/python-for-android
From https://github.com/kivy/python-for-android
* branch master -> FETCH_HEAD
Already up-to-date.
# Apache ANT found at /home/kivy/.buildozer/android/platform/apache-ant-1.9.4
# Android SDK found at /home/kivy/.buildozer/android/platform/android-sdk-21
# Android NDK found at /home/kivy/.buildozer/android/platform/android-ndk-r9c
# Check application requirements
# Run './distribute.sh -l'
# Cwd /media/sf_virtual_box/organizer/2nd_vid_tute/tut1_vid4/.buildozer/android/platform/python-for-android
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
PYTHON-FOR-ANDROID ERROR! SEE BELOW FOR SOLUTION:
You are trying to run an old version of python-for-android via
distribute.sh. However, python-for-android has been rewritten and no
longer supports the distribute.sh interface.
If you are using buildozer, you should:
- upgrade buildozer to the latest version (at least 0.30)
- delete the .buildozer folder in your app directory (the same directory that has your buildozer.spec)
- run buildozer again as normal
If you are not using buildozer, see
https://github.com/kivy/python-for-android/blob/master/README.md for
instructions on using the new python-for-android
toolchain. Alternatively, you can get the old toolchain from the
'old_toolchain' branch at
https://github.com/kivy/python-for-android/tree/old_toolchain .
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# Command failed: ./distribute.sh -l
#
# Buildozer failed to execute the last command
# The error might be hidden in the log above this error
# Please read the full log, and search for it before
# raising an issue with buildozer itself.
# In case of a bug report, please add a full log with log_level = 2
</code></pre>
<p>How can I fix this?</p>
| 0 | 2016-08-17T18:51:17Z | 39,436,346 | <p>If you get this error in the Kivy Android Build Image you should update buildozer like this:</p>
<p>1. Type</p>
<pre><code>sudo pip install -U buildozer
</code></pre>
<p>2. Delete the <code>.buildozer</code> folder in your app folder</p>
<p>(The sudo password for this image is <code>kivy123</code>)</p>
<ol start="3">
<li><p>Type</p>
<pre><code> buildozer -v android debug
</code></pre>
<p>to see what is happening during compilation</p></li>
</ol>
| 0 | 2016-09-11T12:58:48Z | [
"android",
"python",
"kivy"
] |
List of dictionaries to csv file without square brackets | 39,004,283 | <pre><code>with open('processedData/file.tsv', 'wb') as f:
dict_writer = csv.DictWriter(f, keys, delimiter = '\t')
dict_writer.writeheader()
dict_writer.writerows(hgnc_list)
</code></pre>
<p>By executing the above code, I'm successfully writing my list of dictionaries to a file. </p>
<p><code>The catch is</code>: some of the values in the dictionaries are lists, which I <strong>would</strong> like to be printed without the square brackets or the <code>''</code>, using <code>|</code> as a delimiter instead of the <code>comma</code>.</p>
<p>so the output csv file would be like this: </p>
<pre><code>column1 column2 column3 ... columnN
value11 value121|value122 value13 ... value1N1|value1N2|value1N3
</code></pre>
<p>Instead of: </p>
<pre><code>column1 column2 column3 ... columnN
value11 ['value121','value122'] value13 ... ['value1N1','value1N2','value1N3']
</code></pre>
<p>Can this be done without reprocessing the string?</p>
| 0 | 2016-08-17T18:54:36Z | 39,004,366 | <p>The <code>csv</code> module (and CSV files in general) don't handle things like list entries in any special way. If you want to put a list into a CSV entry, you need to explicitly convert your values to strings, store the string values, and then parse them back into lists yourself when you read the data.</p>
<p>For instance, if you have <code>some_list</code>, a list of strings, you could replace that in your dictionary with <code>'|'.join(some_list)</code> to get a pipe-separated string, and then store that value. If you read that data back in, such an entry will be read as a single string value, which you'll have to split into a list yourself.</p>
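Concretely, a sketch of that pre-processing pass (the field names and <code>io.StringIO</code> here are stand-ins for your real columns and file) might be:

```python
import csv
import io

rows = [{"column1": "value11",
         "column2": ["value121", "value122"],
         "column3": "value13"}]

def flatten(row):
    # join list values with '|'; leave plain strings untouched
    return {k: "|".join(v) if isinstance(v, list) else v for k, v in row.items()}

out = io.StringIO()  # stands in for the real output file
writer = csv.DictWriter(out, fieldnames=["column1", "column2", "column3"],
                        delimiter="\t")
writer.writeheader()
writer.writerows(flatten(r) for r in rows)
print(out.getvalue())
```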
| 1 | 2016-08-17T19:00:32Z | [
"python",
"list",
"csv",
"dictionary"
] |
Tensorflow loss not changing and also computed gradients and applied batch norm but still loss is not changing? | 39,004,303 | <p>My Tensorflow loss is not changing. This is my code.</p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
import math
import os
import nltk
import random
import tflearn
batch_size = 100
start = 0
end = batch_size
learning_rate = 0.01
num_classes = 8
path1 = "/home/indy/Downloads/aclImdb/train/pos"
path2 = "/home/indy/Downloads/aclImdb/train/neg"
path3 = "/home/indy/Downloads/aclImdb/test/pos"
path4 = "/home/indy/Downloads/aclImdb/test/neg"
time_steps = 300
embedding = 50
step = 1
def get_embedding():
gfile_path = os.path.join("/home/indy/Downloads/glove.6B", "glove.6B.50d.txt")
f = open(gfile_path,'r')
embeddings = {}
for line in f:
sp_value = line.split()
word = sp_value[0]
embedding = [float(value) for value in sp_value[1:]]
assert len(embedding) == 50
embeddings[word] = embedding
return embeddings
ebd = get_embedding()
def get_y(file_path):
y_value = file_path.split('_')
y_value = y_value[1].split('.')
if y_value[0] == '1':
return 0
elif y_value[0] == '2':
return 1
elif y_value[0] == '3':
return 2
elif y_value[0] == '4':
return 3
elif y_value[0] == '7':
return 4
elif y_value[0] == '8':
return 5
elif y_value[0] == '9':
return 6
elif y_value[0] == '10':
return 7
def get_x(file_path):
x_value = open(file_path,'r')
for line in x_value:
x_value = line.replace("<br /><br />","")
x_value = x_value.lower()
x_value = nltk.word_tokenize(x_value.decode('utf-8'))
padding = 300 - len(x_value)
if padding > 0:
p_value = ['pad' for i in range(padding)]
x_value = np.concatenate((x_value,p_value))
if padding < 0:
x_value = x_value[:300]
for i in x_value:
if ebd.get(i) == None:
        ebd[i] = [float(np.random.normal(0.0,1.0)) for j in range(50)]
    x_value = [ebd[value] for value in x_value]
    assert len(x_value) == 300
    return x_value

def get_total_files(path1,path2,path3,path4):
    directory1 = os.listdir(path1)
    file_path1 = [os.path.join(path1,file) for file in directory1]
    directory2 = os.listdir(path2)
    file_path2 = [os.path.join(path2,file) for file in directory2]
    directory3 = os.listdir(path3)
    file_path3 = [os.path.join(path3,file) for file in directory3]
    directory4 = os.listdir(path4)
    file_path4 = [os.path.join(path4,file) for file in directory4]
    total_files_train = np.concatenate((file_path1,file_path2))
    total_files_test = np.concatenate((file_path3,file_path4))
    random.shuffle(total_files_train)
    random.shuffle(total_files_test)
    x1 = [get_x(file) for file in total_files_train]
    y1 = [get_y(file) for file in total_files_train]
    x2 = [get_x(file) for file in total_files_test]
    y2 = [get_y(file) for file in total_files_test]
    return x1 , y1 , x2 , y2

total_files_train_x, total_files_train_y, total_files_test_x, total_files_test_y = get_total_files(path1,path2,path3,path4)

train_set_x = total_files_train_x[:10000]
validate_set_x = total_files_train_x[10000:15000]
test_set_x = total_files_test_x[0:5000]
train_set_y = total_files_train_y[:10000]
validate_set_y = total_files_train_y[10000:15000]
test_set_y = total_files_test_y[0:5000]

X = tf.placeholder(tf.float32, [None,time_steps,embedding])
Y = tf.placeholder(tf.int32, [None])

def build_nlp_model(x, _units,num_classes,num_of_filters):
    x = tf.expand_dims(x,3)
    with tf.variable_scope("one"):
        filter_shape = [1, embedding, 1, num_of_filters]
        conv_weights = tf.get_variable("conv_weights" , filter_shape, tf.float32, tf.truncated_normal_initializer(mean=0.0, stddev=1.0))
        conv_biases = tf.Variable(tf.constant(0.1, shape=[num_of_filters]))
        conv = tf.nn.conv2d(x, conv_weights, strides=[1,1,1,1], padding = "VALID")
        normalize = conv + conv_biases
        tf_normalize = tflearn.layers.normalization.batch_normalization(normalize)
        relu = tf.nn.elu(tf_normalize)
        pooling = tf.reduce_max(relu, reduction_indices = 3, keep_dims = True)
        outputs_fed_lstm = pooling
    with tf.variable_scope("two"):
        filter_shape = [1, 1, 1, 1000]
        conv_weights = tf.get_variable("conv_weights" , filter_shape, tf.float32, tf.truncated_normal_initializer(mean=0.0, stddev=1.0))
        conv_biases = tf.Variable(tf.constant(0.1, shape=[1000]))
        conv = tf.nn.conv2d(outputs_fed_lstm, conv_weights, strides=[1,1,1,1], padding = "VALID")
        normalize = conv + conv_biases
        tf_normalize = tflearn.layers.normalization.batch_normalization(normalize)
        relu = tf.nn.elu(tf_normalize)
        pooling = tf.reduce_max(relu, reduction_indices = 3, keep_dims = True)
        outputs_fed_lstm = pooling
    with tf.variable_scope("three"):
        filter_shape = [1, 1, 1, 1000]
        conv_weights = tf.get_variable("conv_weights" , filter_shape, tf.float32, tf.truncated_normal_initializer(mean=0.0, stddev=1.0))
        conv_biases = tf.Variable(tf.constant(0.1, shape=[1000]))
        conv = tf.nn.conv2d(outputs_fed_lstm, conv_weights, strides=[1,1,1,1], padding = "VALID")
        normalize = conv + conv_biases
        tf_normalize = tflearn.layers.normalization.batch_normalization(normalize)
        relu = tf.nn.elu(tf_normalize)
        pooling = tf.reduce_max(relu, reduction_indices = 3, keep_dims = True)
        outputs_fed_lstm = pooling
    with tf.variable_scope("four"):
        filter_shape = [1, 1, 1, num_of_filters]
        conv_weights = tf.get_variable("conv_weights" , filter_shape, tf.float32, tf.truncated_normal_initializer(mean=0.0, stddev=1.0))
        conv_biases = tf.Variable(tf.constant(0.1, shape=[num_of_filters]))
        conv = tf.nn.conv2d(outputs_fed_lstm, conv_weights, strides=[1,1,1,1], padding = "VALID")
        normalize = conv + conv_biases
        tf_normalize = tflearn.layers.normalization.batch_normalization(normalize)
        relu = tf.nn.elu(tf_normalize)
        pooling = tf.reduce_max(relu, reduction_indices = 3, keep_dims = True)
        outputs_fed_lstm = pooling
    with tf.variable_scope("five"):
        filter_shape = [1, 1, 1, num_of_filters]
        conv_weights = tf.get_variable("conv_weights" , filter_shape, tf.float32, tf.truncated_normal_initializer(mean=0.0, stddev=1.0))
        conv_biases = tf.Variable(tf.constant(0.1, shape=[num_of_filters]))
        conv = tf.nn.conv2d(outputs_fed_lstm, conv_weights, strides=[1,1,1,1], padding = "VALID")
        normalize = conv + conv_biases
        tf_normalize = tflearn.layers.normalization.batch_normalization(normalize)
        relu = tf.nn.elu(tf_normalize)
        pooling = tf.reduce_max(relu, reduction_indices = 3, keep_dims = True)
        outputs_fed_lstm = pooling
    x = tf.squeeze(outputs_fed_lstm, [2])
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, 1])
    x = tf.split(0, time_steps, x)
    lstm = tf.nn.rnn_cell.LSTMCell(num_units = _units)
    # multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * lstm_layers, state_is_tuple = True)
    outputs , state = tf.nn.rnn(lstm,x, dtype = tf.float32)
    weights = tf.Variable(tf.random_normal([_units,num_classes]))
    biases = tf.Variable(tf.random_normal([num_classes]))
    logits = tf.matmul(outputs[-1], weights) + biases
    return logits

logits = build_nlp_model(X,500,num_classes,1000)
c_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits,Y)
loss = tf.reduce_mean(c_loss)

global_step = tf.Variable(0, name="global_step", trainable=False)
# decayed_learning_rate = tf.train.exponential_decay(learning_rate,0,10000,0.9)
optimizer= tf.train.AdamOptimizer(learning_rate)
minimize_loss = optimizer.minimize(loss, global_step=global_step)

with tf.variable_scope("four", reuse = True):
    weights = tf.get_variable("conv_weights")
    grads_and_vars = optimizer.compute_gradients(loss,[weights])

correct_predict = tf.nn.in_top_k(logits, Y, 1)
accuracy = tf.reduce_mean(tf.cast(correct_predict, tf.float32))

init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    for i in range(10):
        for j in range(100):
            x = train_set_x[start:end]
            y = train_set_y[start:end]
            start = end
            end += batch_size
            if start >= 10000:
                start = 0
                end = batch_size
            sess.run(minimize_loss,feed_dict={X : x, Y : y})
            step += 1
        gr_print = sess.run([grad for grad, _ in grads_and_vars], feed_dict={X : x, Y : y})
        print (gr_print)
        print ("One Epoch Finished")
        cost = sess.run(loss,feed_dict = {X: x,Y: y})
        accu = sess.run(accuracy,feed_dict = {X: x, Y: y})
        print ("Loss after one Epoch(Training) = " + "{:.6f}".format(cost) + ", Training Accuracy= " + "{:.5f}".format(accu))
        q = validate_set_x[:100]
        w = validate_set_y[:100]
        cost = sess.run(loss,feed_dict = {X: q,Y: w})
        accu = sess.run(accuracy,feed_dict = {X: q, Y: w})
</code></pre>
<p>My loss remains the same after many epochs, so I think I'm having a vanishing gradient problem; applying batch normalization made no difference in the results. I also tried overfitting the model, but I'm getting the same results. I'm using <code>optimizer.compute_gradients</code> for computing gradients. Below are the gradients of the loss with respect to different conv layers and what they look like, first for the first conv layer and then for the 4th conv layer.</p>
<p>Code for gradients with respect to first conv layer:</p>
<pre><code>with tf.variable_scope("one", reuse = True):
    weights = tf.get_variable("conv_weights")
    grads_and_vars = optimizer.compute_gradients(loss,[weights])

gr_print = sess.run([grad for grad, _ in grads_and_vars], feed_dict={X : x, Y : y})
print (gr_print)
</code></pre>
<p>And this is what I get after one iteration:</p>
<pre><code>[array([[[[ 2.38197345e-06, -1.04135906e-04, 2.60035231e-05, ...,
-1.01550373e-04, 0.00000000e+00, 1.01060732e-06]],
[[ -1.98007251e-06, 8.13827137e-05, -8.14055747e-05, ...,
-6.40711369e-05, 0.00000000e+00, 1.05516607e-04]],
[[ 4.51127789e-06, 2.21654373e-05, -4.99439229e-05, ...,
9.87191743e-05, 0.00000000e+00, 1.70595697e-04]],
...,
[[ -4.70160239e-06, -8.67914496e-05, 2.50699850e-05, ...,
1.18909593e-04, 0.00000000e+00, 2.43308150e-05]],
[[ -1.18101923e-06, -7.71943451e-05, -3.41630148e-05, ...,
-3.28040805e-05, 0.00000000e+00, -6.01144784e-05]],
[[ -1.98778321e-06, -3.23160748e-05, -5.44797731e-05, ...,
2.23019324e-05, 0.00000000e+00, -3.29296927e-05]]]], dtype=float32)]
</code></pre>
<p>Code for gradients with respect to 4th conv layer:</p>
<pre><code>with tf.variable_scope("four", reuse = True):
    weights = tf.get_variable("conv_weights")
    grads_and_vars = optimizer.compute_gradients(loss,[weights])

gr_print = sess.run([grad for grad, _ in grads_and_vars], feed_dict={X : x, Y : y})
print (gr_print)
</code></pre>
<p>And this is what I get after one iteration:</p>
<pre><code>[array([[[[ 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , -6.21198082, 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ]]]], dtype=float32)]
</code></pre>
<p>After the first layer, the gradients with respect to the 2nd, 3rd, 4th, and 5th conv layers all look like the above. But there is one thing common to all the gradients for the conv layers after the first one: each has exactly one number in the entire gradient array that is not zero, as shown above in the output. I also applied batch norm and I'm still getting the above results.</p>
<p>I'm totally confused; I don't know where the problem is.</p>
<p>And I have one more question: if I want to access variables like pooling or outputs_fed_lstm, how can I access them?</p>
<pre><code>with tf.variable_scope("one", reuse = True):
    weights = tf.get_variable("conv_weights")
    grads_and_vars = optimizer.compute_gradients(loss,[weights])
</code></pre>
<p>I know I can access variables like conv_weights
as shown above. </p>
<pre><code>with tf.variable_scope("one"):
    filter_shape = [1, embedding, 1, num_of_filters]
    conv_weights = tf.get_variable("conv_weights" , filter_shape, tf.float32, tf.truncated_normal_initializer(mean=0.0, stddev=1.0))
    conv_biases = tf.Variable(tf.constant(0.1, shape=[num_of_filters]))
    conv = tf.nn.conv2d(x, conv_weights, strides=[1,1,1,1], padding = "VALID")
    normalize = conv + conv_biases
    tf_normalize = tflearn.layers.normalization.batch_normalization(normalize)
    relu = tf.nn.elu(tf_normalize)
    pooling = tf.reduce_max(relu, reduction_indices = 3, keep_dims = True)
    outputs_fed_lstm = pooling
</code></pre>
<p>But how can I access variables like pooling or outputs_fed_lstm, which are also in scope "one"?</p>
 | 1 | 2016-08-17T18:56:00Z | 39,006,053 | <p>You can get all variables in the current graph by using <code>tf.all_variables()</code>. This will give a list of all variables as variable objects, and you can find what you are looking for by using the <code>variable.name</code> attribute (it is an attribute, not a method) to identify a variable. You should also go and name all of the variables you are interested in to facilitate this. For example, to name the pooling op:</p>
<pre><code>pooling = tf.reduce_max(relu, reduction_indices = 3, keep_dims = True, name='pooling')
</code></pre>
<p>As far as your code goes, my first guess is that your learning rate is too high, and this is leading to instability right off the bat because of dead neurons. Try lowering the learning rate and see if that helps. </p>
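<p>To see why a too-high learning rate can produce permanently zero gradients, here is a stand-alone toy (not the question's network, which uses ELU; the dying effect is easiest to show with a plain ReLU unit): one oversized step pushes the pre-activation negative, after which the gradient is exactly zero and the weight never moves again.</p>

```python
# Toy single-unit example: y = relu(w * x), squared loss against target t.
# A huge learning rate drives w negative; relu then blocks the gradient.
def sgd_step(w, x, t, lr):
    pre = w * x
    y = pre if pre > 0 else 0.0      # relu forward
    dy_dw = x if pre > 0 else 0.0    # relu gradient is 0 once pre <= 0
    grad = 2.0 * (y - t) * dy_dw
    return w - lr * grad

w = 1.0
w_dead = sgd_step(w, x=1.0, t=-5.0, lr=10.0)       # one huge step: w = -119
w_next = sgd_step(w_dead, x=1.0, t=-5.0, lr=10.0)  # unit is "dead": no change
print(w_dead, w_next)
```

The values (1.0, -5.0, 10.0) are made up for the demonstration; the point is only that after the first step the weight stops updating.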
| 0 | 2016-08-17T20:50:54Z | [
"python",
"numpy",
"machine-learning",
"neural-network",
"tensorflow"
] |
Chaining filters for a Series | 39,004,336 | <p>It's convenient to <a href="http://stackoverflow.com/a/28159296/336527">chain filters on a DataFrame using query</a>:</p>
<pre><code># quoting from the SO answer above
df = pd.DataFrame( np.random.randn(30,3), columns = ['a','b','c'])
df_filtered = df.query('a>0').query('0<b<2')
</code></pre>
<p>What if I need to do the same to a <code>Series</code>:</p>
<pre><code>df = pd.DataFrame({'a': [0, 0, 1, 1, 2, 2], 'b': [1, 2, 3, 4, 5, 6]})
df.groupby('a').b.sum().query('? > 3').query('? % 3 == 1')
</code></pre>
<p><code>Series.query</code> doesn't exist (for a good reason, most of the query syntax is to allow access to multiple columns).</p>
 | 2 | 2016-08-17T18:58:10Z | 39,004,359 | <p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_frame.html" rel="nofollow">to_frame()</a> method:</p>
<pre><code>In [10]: df.groupby('a').b.sum().to_frame('v').query('v > 3').query('v % 3 == 1')
Out[10]:
v
a
1 7
</code></pre>
<p>if you need result as series:</p>
<pre><code>In [12]: df.groupby('a').b.sum().to_frame('v').query('v > 3').query('v % 3 == 1').v
Out[12]:
a
1 7
Name: v, dtype: int64
</code></pre>
<blockquote>
<p>does to_frame() involve copying of the series?</p>
</blockquote>
<p>It involves a call of the DataFrame constructor:</p>
<p><a href="https://github.com/pydata/pandas/blob/master/pandas/core/series.py#L1140" rel="nofollow">https://github.com/pydata/pandas/blob/master/pandas/core/series.py#L1140</a>:</p>
<pre><code>df = self._constructor_expanddim({name: self})
</code></pre>
<p><a href="https://github.com/pydata/pandas/blob/master/pandas/core/series.py#L265" rel="nofollow">https://github.com/pydata/pandas/blob/master/pandas/core/series.py#L265</a>:</p>
<pre><code>def _constructor_expanddim(self):
from pandas.core.frame import DataFrame
return DataFrame
</code></pre>
<p>Performance impact (testing against 600K rows DF):</p>
<pre><code>In [66]: %timeit df.groupby('a').b.sum()
10 loops, best of 3: 46.2 ms per loop
In [67]: %timeit df.groupby('a').b.sum().to_frame('v')
10 loops, best of 3: 49.7 ms per loop
In [68]: 49.7 / 46.2
Out[68]: 1.0757575757575757
</code></pre>
<p>Performance impact (testing against 6M rows DF):</p>
<pre><code>In [69]: df = pd.concat([df] * 10, ignore_index=True)
In [70]: df.shape
Out[70]: (6000000, 2)
In [71]: %timeit df.groupby('a').b.sum()
1 loop, best of 3: 474 ms per loop
In [72]: %timeit df.groupby('a').b.sum().to_frame('v')
1 loop, best of 3: 464 ms per loop
</code></pre>
<p>Performance impact (testing against 60M rows DF):</p>
<pre><code>In [73]: df = pd.concat([df] * 10, ignore_index=True)
In [74]: df.shape
Out[74]: (60000000, 2)
In [75]: %timeit df.groupby('a').b.sum()
1 loop, best of 3: 4.28 s per loop
In [76]: %timeit df.groupby('a').b.sum().to_frame('v')
1 loop, best of 3: 4.3 s per loop
In [77]: 4.3 / 4.28
Out[77]: 1.0046728971962615
</code></pre>
<p><strong>Conclusion</strong>: the performance impact doesn't seem to be that big...</p>
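<p>As an aside (not from the answer above): a <code>Series</code> accepts callables in <code>.loc</code>, so the filters can also be chained without ever creating a DataFrame. A minimal sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [0, 0, 1, 1, 2, 2], 'b': [1, 2, 3, 4, 5, 6]})
s = df.groupby('a').b.sum()        # sums: a=0 -> 3, a=1 -> 7, a=2 -> 11
# Chain filters on the Series directly via callable indexing.
out = s.loc[lambda x: x > 3].loc[lambda x: x % 3 == 1]
print(out.to_dict())               # only a=1 survives both filters
```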
| 3 | 2016-08-17T18:59:59Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
] |
Mock entire python class | 39,004,540 | <p>I'm trying to make a simple test in Python, but I'm not able to figure out how to accomplish the mocking process.</p>
<p>This is the class and def code:</p>
<pre><code>class FileRemoveOp(...):
    @apply_defaults
    def __init__(
            self,
            source_conn_keys,
            source_conn_id='conn_default',
            *args, **kwargs):
        super(v4FileRemoveOperator, self).__init__(*args, **kwargs)
        self.source_conn_keys = source_conn_keys
        self.source_conn_id = source_conn_id

    def execute(self, context):
        source_conn = Connection(conn_id)
        try:
            for source_conn_key in self.source_keys:
                if not source_conn.check_for_key(source_conn_key):
                    logging.info("The source key does not exist")
                source_conn.remove_file(source_conn_key,'')
        finally:
            logging.info("Remove operation successful.")
</code></pre>
<p>And this is my test for the execute function:</p>
<pre><code>@mock.patch('main.Connection')
def test_remove_execute(self,MockConn):
    mock_coon = MockConn.return_value
    mock_coon.value = #I'm not sure what to put here#
    remove_operator = FileRemoveOp(...)
    remove_operator.execute(self)
</code></pre>
<p>Since the <strong>execute</strong> method tries to make a connection, I need to mock that; I don't want to make a real connection, just return something mocked. How can I do that? I'm used to doing testing in Java, but I've never done it in Python.</p>
 | 3 | 2016-08-17T19:13:33Z | 39,004,889 | <p>First, it is very important to understand that you always need to mock where the thing you are trying to mock out is used, as stated in the <code>unittest.mock</code> documentation.</p>
<blockquote>
<p>The basic principle is that you patch where an object is looked up,
which is not necessarily the same place as where it is defined.</p>
</blockquote>
<p>Next what you would need to do is to return a <code>MagicMock</code> instance as <code>return_value</code> of the patched object. So to summarize this you would need to use the following sequence.</p>
<ul>
<li>Patch Object</li>
<li>prepare <code>MagicMock</code> to be used</li>
<li>return the <code>MagicMock</code> we've just created as <code>return_value</code></li>
</ul>
<p>Here is a quick example of a project.</p>
<p><strong>connection.py (Class we would like to Mock)</strong></p>
<pre><code>class Connection(object):
    def execute(self):
        return "Connection to server made"
</code></pre>
<p><strong>file.py (Where the Class is used)</strong></p>
<pre><code>from project.connection import Connection


class FileRemoveOp(object):
    def __init__(self, foo):
        self.foo = foo

    def execute(self):
        conn = Connection()
        result = conn.execute()
        return result
</code></pre>
<p><strong>tests/test_file.py</strong></p>
<pre><code>import unittest
from unittest.mock import patch, MagicMock

from project.file import FileRemoveOp


class TestFileRemoveOp(unittest.TestCase):
    def setUp(self):
        self.fileremoveop = FileRemoveOp('foobar')

    @patch('project.file.Connection')
    def test_execute(self, connection_mock):
        # Create a new MagicMock instance which will be the
        # `return_value` of our patched object
        connection_instance = MagicMock()
        connection_instance.execute.return_value = "testing"

        # Return the above created `connection_instance`
        connection_mock.return_value = connection_instance

        result = self.fileremoveop.execute()
        expected = "testing"
        self.assertEqual(result, expected)

    def test_not_mocked(self):
        # No mocking involved: this executes the real `Connection.execute` method
        result = self.fileremoveop.execute()
        expected = "Connection to server made"
        self.assertEqual(result, expected)
</code></pre>
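<p>The "patch where it is looked up" principle can also be demonstrated in a single self-contained script (the module names <code>demo_lib</code> and <code>demo_consumer</code> are invented for this demo): the consumer module imports <code>Connection</code> into its own namespace, so that is the name to patch.</p>

```python
import sys
import types
from unittest.mock import patch

# A stand-in "library" module that defines the real Connection.
lib = types.ModuleType("demo_lib")

class Connection(object):
    def execute(self):
        return "real"

lib.Connection = Connection
sys.modules["demo_lib"] = lib

# A stand-in "consumer" module that imports Connection into its own namespace.
consumer = types.ModuleType("demo_consumer")
exec("from demo_lib import Connection\n"
     "def run():\n"
     "    return Connection().execute()\n", consumer.__dict__)
sys.modules["demo_consumer"] = consumer

# Patch the name where it is *used* (demo_consumer), not where it is defined.
with patch("demo_consumer.Connection") as MockConn:
    MockConn.return_value.execute.return_value = "mocked"
    patched_result = consumer.run()

unpatched_result = consumer.run()   # patch is undone on exiting the with-block
print(patched_result, unpatched_result)
```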
| 4 | 2016-08-17T19:34:26Z | [
"python",
"unit-testing",
"magicmock"
] |
How to display percentage label in histogram plot in Python | 39,004,606 | <p>I managed to create a plot that shows the number of records per class label for each age in my Pandas dataframe. But I would also like to see a percentage label for the "non functional" class in each age group.</p>
<p>The Python code for the graph is</p>
<pre><code>train['age_wpt'] = train.date_recorded.str.split('-').str.get(0).apply(int) - train.construction_year
figure = plt.figure(figsize=(15,8))
plt.hist([
train[(train.status_group=='functional') & (train.age_wpt < 60.0) & (train.age_wpt >= 0.0)]['age_wpt'],
train[(train.status_group=='non functional') & (train.age_wpt < 60.0) & (train.age_wpt >= 0.0)]['age_wpt'],
train[(train.status_group=='functional needs repair') & (train.age_wpt < 60.0) & (train.age_wpt >= 0.0)]['age_wpt']
],
stacked=True, color = ['b','r','y'],
bins = 30,label = ['functional','non functional', 'functional needs repair'])
plt.xlabel('Age')
plt.ylabel('Number of records')
plt.legend()
</code></pre>
<p>This results in the following graph
<a href="http://i.stack.imgur.com/LYaQ4.png" rel="nofollow"><img src="http://i.stack.imgur.com/LYaQ4.png" alt="enter image description here"></a></p>
| 0 | 2016-08-17T19:18:02Z | 39,004,677 | <blockquote>
<p>normed : boolean, optional
If <code>True</code>, the first element of the return tuple will
be the counts normalized to form a probability density, i.e.,
<code>n/(len(x)*dbin)</code>, i.e., the integral of the histogram will sum
to 1. If <em>stacked</em> is also <em>True</em>, the sum of the histograms is
normalized to 1.
Default is <code>False</code></p>
</blockquote>
<pre><code>plt.hist([
train[(train.status_group=='functional') & (train.age_wpt < 60.0) & (train.age_wpt >= 0.0)]['age_wpt'],
train[(train.status_group=='non functional') & (train.age_wpt < 60.0) & (train.age_wpt >= 0.0)]['age_wpt'],
train[(train.status_group=='functional needs repair') & (train.age_wpt < 60.0) & (train.age_wpt >= 0.0)]['age_wpt']
],
stacked=False, color = ['b','r','y'], normed=True,
bins = 30,label = ['functional','non functional', 'functional needs repair'])
</code></pre>
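<p>If the goal is a per-bin percentage label for the "non functional" class (rather than a normalized density), that percentage has to be computed from the per-bin counts; <code>plt.hist</code> will not produce it on its own. A minimal sketch of the computation with <code>np.histogram</code> (the small arrays below are made-up stand-ins for the question's filtered age series); each value could then be drawn with <code>plt.text</code> at the bin centers:</p>

```python
import numpy as np

ages_nf = np.array([1, 2, 2, 5, 7, 7, 7])            # 'non functional' ages (made up)
ages_all = np.array([1, 1, 2, 2, 3, 5, 5, 7, 7, 7])  # all ages (made up)
bins = [0, 4, 8]

nf_counts, edges = np.histogram(ages_nf, bins=bins)
all_counts, _ = np.histogram(ages_all, bins=bins)
pct = 100.0 * nf_counts / all_counts                 # % non functional per age bin
centers = 0.5 * (edges[:-1] + edges[1:])             # x positions for plt.text
print(pct.tolist())
```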
| 0 | 2016-08-17T19:22:04Z | [
"python",
"matplotlib"
] |
Google Cloud Storage API write files with special characters vs regular python files | 39,004,673 | <p>I am using Google App Engine to write a new file to a Google Cloud Storage bucket for eventual serving in the browser. Normally on my local computer this writes a nice text file which I can open and see the test character as expected:</p>
<pre><code>with open('new_file.txt', 'w') as f:
    f.write(u'é'.encode('utf-8'))
</code></pre>
<p>When I open <code>new_file.txt</code> in Notepad it's properly displayed as <code>é</code>.</p>
<p>But when I try the analogous process on Google Cloud Storage:</p>
<pre><code>with gcs.open('/mybucket/newfile.txt', 'w', content_type='text/html') as f:
    f.write(u'é'.encode('utf-8'))
</code></pre>
<p>My files are served in the browser with special characters all messed up, in this case it outputs <code>é</code>.</p>
<p><a href="http://i.stack.imgur.com/ZL206.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZL206.png" alt="enter image description here"></a></p>
| 1 | 2016-08-17T19:21:36Z | 39,005,315 | <p>The default charset for HTTP 1.1 is ISO-8859-1.</p>
<p>If you want the browser to interpret your text as UTF-8, you should set the content-type header to include the charset, like this:</p>
<pre><code>with gcs.open('/mybucket/newfile.txt', 'w', content_type='text/html; charset=utf-8') as f:
</code></pre>
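<p>The garbled output in the question is exactly what you get when UTF-8 bytes are decoded as Latin-1 (the HTTP/1.1 default when no charset is declared), which can be checked directly:</p>

```python
# 'é' (U+00E9) encoded as UTF-8 is two bytes; decoded as Latin-1 they become 'Ã©'.
utf8_bytes = u'\u00e9'.encode('utf-8')
mojibake = utf8_bytes.decode('latin-1')
print(repr(utf8_bytes), mojibake)
```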
| 1 | 2016-08-17T20:02:20Z | [
"python",
"google-app-engine",
"character-encoding",
"google-cloud-storage",
"google-cloud-platform"
] |
pd.read_sql unicode types causing problems | 39,004,686 | <p>I'm formatting some data I'm pulling from a database using sqlalchemy, pyodbc, and pandas read_sql, which I get back as a dataframe <code>df</code>.</p>
<p>I want to apply a formatting of the data in each 'cell' of the dataframe, row by row and excluding the first two columns using this: </p>
<pre><code>df.iloc[6, 2:] = (df.iloc[6, 2:]*100).map('{:,.2f}%'.format)
</code></pre>
<p>I apply a similar formatting for several other rows in the dataframe. This used to work great when I was reading my data from a <code>csv</code> file, but now reading from the database causes a ValueError on that line that reads:</p>
<pre><code>ValueError: Unknown format code 'f' for object of type 'unicode'
</code></pre>
<p>I tried some other casting attempts such as <code>df.iloc[6, 2:] = (float(df.iloc[6, 2:].encode())*100).map('{:,.2f}%'.format)</code>, but this causes some additional errors.</p>
<p>I'm pretty sure the error is being caused by the unicode type of the results. How should I format my dataframe or modify my read_sql to not have unicode strings? I'm using python 2.7 by the way.</p>
<p>The <code>dtype</code> for each column is <code>object</code>.</p>
| 0 | 2016-08-17T19:22:33Z | 39,007,144 | <p>You're trying to do string formatting for a <code>float</code>, but you're actually passing it a string.</p>
<p>To illustrate the source of your error, consider the following:</p>
<pre><code>'{:,.2f}%'.format(u'1')
</code></pre>
<p>which raises the same error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-41-fb59302ab6b7> in <module>()
----> 1 '{:,.2f}%'.format(u'1')
ValueError: Unknown format code 'f' for object of type 'unicode'
</code></pre>
<p>To solve this, cast your string (dtype = <code>object</code>) columns to float, e.g.</p>
<pre><code># get columns to cast to float
vals = df.select_dtypes(['object']).astype(float)
cols = vals.columns
# and replace them
df[cols] = vals
</code></pre>
<p>Alternatively, you could put some logic in your mapper, e.g.</p>
<pre><code>def safe_float_formatter(value):
    try:
        return '{:,.2f}%'.format(value)
    except ValueError:
        return value

df.applymap(safe_float_formatter)
</code></pre>
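<p>The core of the fix can be verified in isolation (plain Python, no database needed): convert the string to <code>float</code> first, then hand the result to the float format spec.</p>

```python
# The value below stands in for what an object-dtype column from read_sql holds.
val = u'0.1234'
formatted = '{:,.2f}%'.format(float(val) * 100)
print(formatted)
```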
| 1 | 2016-08-17T22:21:10Z | [
"python",
"python-2.7",
"pandas"
] |
Custom sorting in Python | 39,004,687 | <p>I'm new to Python, thus the question,</p>
<p>I have the following list of list items,</p>
<pre><code>[[0, 1], [2,3], [1,2], [4, 5], [3, 5]]
</code></pre>
<p>I want to sort this list in increasing order comparing the second item of each list first and then the first item</p>
<p>This is my code,</p>
<pre><code>def sorting(a, b):
    if a[1] > b[1]:
        return 1
    elif a[1] == b[1]:
        if a[0] > b[0]:
            return 1
        else:
            return -1
    else:
        return 1
</code></pre>
<p>However, can someone help me rewrite this using the sorted function with lambda and comprehensions?</p>
 | 1 | 2016-08-17T19:22:34Z | 39,004,792 | <p>You should use Python's built-in <a href="https://docs.python.org/2/library/functions.html#sorted" rel="nofollow"><code>sorted()</code></a> method; it already has support for lambda comparisons.</p>
<pre><code>l = [[0, 1], [2,3], [1,2], [4, 5], [3, 5]]
l = sorted(l, cmp=sorting)
</code></pre>
<p>Your comparison function is wrong if you want to sort in the way that you are talking about. There is no return zero case and the bottom <code>return 1</code> should be <code>return -1</code>. Also indentation error.</p>
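<p>For reference, the comparator above collapses into a single <code>sorted</code> call with a tuple key (second item, then first item), which produces exactly the ordering the question asks for:</p>

```python
data = [[0, 1], [2, 3], [1, 2], [4, 5], [3, 5]]
# Sort by the second element first, then by the first element.
result = sorted(data, key=lambda pair: (pair[1], pair[0]))
print(result)
```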
| 1 | 2016-08-17T19:28:34Z | [
"python",
"lambda"
] |
Custom sorting in Python | 39,004,687 | <p>I'm new to Python, thus the question,</p>
<p>I have the following list of list items,</p>
<pre><code>[[0, 1], [2,3], [1,2], [4, 5], [3, 5]]
</code></pre>
<p>I want to sort this list in increasing order comparing the second item of each list first and then the first item</p>
<p>This is my code,</p>
<pre><code>def sorting(a, b):
    if a[1] > b[1]:
        return 1
    elif a[1] == b[1]:
        if a[0] > b[0]:
            return 1
        else:
            return -1
    else:
        return 1
</code></pre>
<p>However, can someone help me rewrite this using the sorted function with lambda and comprehensions?</p>
| 1 | 2016-08-17T19:22:34Z | 39,004,861 | <p>You can reverse the order when the sort looks at them. Just don't alter the original list items.</p>
<pre><code>sorted(l, key=lambda x: (x[1], x[0]))
</code></pre>
| 2 | 2016-08-17T19:32:17Z | [
"python",
"lambda"
] |
simple matrix type rotation (without numpy or pandas) | 39,004,701 | <p>This must be something that is really simple, but I could not fix it.</p>
<p>I want to do a matrix type transpose with native python list of list (i.e., without using <code>numpy</code> or <code>pandas</code>). Code is show following. I am having a hard time trying to figure out where it is wrong.</p>
<pre><code>raw_matrix_list = [[1, 0, 1, 0, 1, 0],
[1, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 1, 1]]
def rotate_matrix_list(raw_matrix_list):
    rows = len(raw_matrix_list)
    cols = len(raw_matrix_list[0])
    new_matrix_list = [[0] * rows] * cols
    for ii in xrange(cols):
        for jj in xrange(rows):
            # print str(ii) + ', ' + str(jj) + ', ' + str(rows)
            new_matrix_list[ii][jj] = raw_matrix_list[rows-jj - 1][ii]
    return(new_matrix_list)

rotate_matrix_list(raw_matrix_list)
</code></pre>
<p>The result I get is</p>
<pre><code>[[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0]]
</code></pre>
<p>What I want to get is:</p>
<pre><code>[[1, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0], [1, 0, 1, 1], [1, 1, 0, 0]]
</code></pre>
<p>===</p>
<pre><code>$ python --version
</code></pre>
<p>Python 2.7.12 :: Anaconda 2.3.0 (x86_64)</p>
<p>===</p>
<h1>update 2</h1>
<p>Now I have the answer for how to do it in Python with the <code>zip</code> function. But I still fail to see why my code did not work.</p>
| 0 | 2016-08-17T19:23:24Z | 39,004,809 | <p>Well, doing a transpose with vanilla lists in Python is pretty easy: use <code>zip</code> and the splat operator:</p>
<pre><code>>>> raw_matrix_list = [[1, 0, 1, 0, 1, 0],
... [1, 0, 0, 0, 1, 0],
... [0, 0, 0, 0, 0, 1],
... [1, 0, 0, 0, 1, 1]]
>>> transpose = list(zip(*raw_matrix_list))
>>> transpose
[(1, 1, 0, 1), (0, 0, 0, 0), (1, 0, 0, 0), (0, 0, 0, 0), (1, 1, 0, 1), (0, 0, 1, 1)]
>>> from pprint import pprint
>>> pprint(transpose)
[(1, 1, 0, 1),
(0, 0, 0, 0),
(1, 0, 0, 0),
(0, 0, 0, 0),
(1, 1, 0, 1),
(0, 0, 1, 1)]
</code></pre>
<p>For Python 2, you only need <code>zip(*raw_matrix_list)</code> rather than <code>list(zip(*raw_matrix_list))</code></p>
<p>If a list of tuples won't do:</p>
<pre><code>>>> transpose = [list(t) for t in zip(*raw_matrix_list)]
>>> pprint(transpose)
[[1, 1, 0, 1],
[0, 0, 0, 0],
[1, 0, 0, 0],
[0, 0, 0, 0],
[1, 1, 0, 1],
[0, 0, 1, 1]]
</code></pre>
<p>The problem with your solution is that you use:</p>
<pre><code>new_matrix_list = [[0] * rows] * cols
</code></pre>
<p>This makes every list the <em>same object</em>.
See this example for the problem:</p>
<pre><code>>>> x = [[0]*3] * 4
>>> x
[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
>>> x[0][0] = 1
>>> x
[[1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0]]
</code></pre>
<p>Use something like:</p>
<pre><code>new_matrix_list = [[0 for _ in range(rows)] for _ in range(cols)]
</code></pre>
<p>And you should be well on your way.</p>
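<p>For completeness (not part of the answer above, just the asker's function with only the aliasing line changed), both the fixed loop and a one-liner reproduce the rotation the question asked for:</p>

```python
raw_matrix_list = [[1, 0, 1, 0, 1, 0],
                   [1, 0, 0, 0, 1, 0],
                   [0, 0, 0, 0, 0, 1],
                   [1, 0, 0, 0, 1, 1]]

def rotate_matrix_list(raw):
    rows, cols = len(raw), len(raw[0])
    new = [[0] * rows for _ in range(cols)]   # independent rows, not aliases
    for ii in range(cols):
        for jj in range(rows):
            new[ii][jj] = raw[rows - jj - 1][ii]
    return new

rotated = rotate_matrix_list(raw_matrix_list)
# Equivalent one-liner: reverse the rows, then transpose.
one_liner = [list(t) for t in zip(*raw_matrix_list[::-1])]
print(rotated)
```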
| 2 | 2016-08-17T19:29:23Z | [
"python"
] |