title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
python - Implementing Sobel operators with python without opencv | 39,035,510 | <p>Given a greyscale 8 bit image (2D array with values from 0 - 255 for pixel intensity), I want to implement the Sobel operators (masks) on an image. The Sobel function below basically loops over the neighbourhood of a given pixel and applies the following weights to the pixels:
<a href="http://i.stack.imgur.com/1N67K.png" rel="nofollow"><img src="http://i.stack.imgur.com/1N67K.png" alt="enter image description here"></a> </p>
<p><a href="http://i.stack.imgur.com/Ut0Aq.png" rel="nofollow"><img src="http://i.stack.imgur.com/Ut0Aq.png" alt="enter image description here"></a></p>
<p>It then applies the given formula:</p>
<p><a href="http://i.stack.imgur.com/aBBUL.png" rel="nofollow"><img src="http://i.stack.imgur.com/aBBUL.png" alt="enter image description here"></a></p>
<p>I'm trying to implement the formulas from this link:
<a href="http://homepages.inf.ed.ac.uk/rbf/HIPR2/sobel.htm" rel="nofollow">http://homepages.inf.ed.ac.uk/rbf/HIPR2/sobel.htm</a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import Image
def Sobel(arr, rstart, cstart, masksize, divisor):
    sum = 0;
    x = 0
    y = 0
    for i in range(rstart, rstart+masksize, 1):
        x = 0
        for j in range(cstart, cstart+masksize, 1):
            if x == 0 and y == 0:
                p1 = arr[i][j]
            if x == 0 and y == 1:
                p2 = arr[i][j]
            if x == 0 and y == 2:
                p3 = arr[i][j]
            if x == 1 and y == 0:
                p4 = arr[i][j]
            if x == 1 and y == 1:
                p5 = arr[i][j]
            if x == 1 and y == 2:
                p6 = arr[i][j]
            if x == 2 and y == 0:
                p7 = arr[i][j]
            if x == 2 and y == 1:
                p8 = arr[i][j]
            if x == 2 and y == 2:
                p9 = arr[i][j]
            x += 1
        y += 1
    return np.abs((p1 + 2*p2 + p3) - (p7 + 2*p8 + p9)) + np.abs((p3 + 2*p6 + p9) - (p1 + 2*p4 + p7))

def padwithzeros(vector, pad_width, iaxis, kwargs):
    vector[:pad_width[0]] = 0
    vector[-pad_width[1]:] = 0
    return vector
im = Image.open('charlie.jpg')
im.show()
img = np.asarray(im)
img.flags.writeable = True
p = 1
k = 2
m = img.shape[0]
n = img.shape[1]
masksize = 3
img = np.lib.pad(img, p, padwithzeros)  # this function pads the image with zeros to handle pixels on the border
x = 0
y = 0
for row in img:
    y = 0
    for col in row:
        if not (x < p or y < p or y > (n-k) or x > (m-k)):
            img[x][y] = Sobel(img, x-p, y-p, masksize, masksize*masksize)
        y = y + 1
    x = x + 1
img2 = Image.fromarray(img)
img2.show()
</code></pre>
<p>Given this greyscale 8 bit image</p>
<p><a href="http://i.stack.imgur.com/8zINU.gif" rel="nofollow"><img src="http://i.stack.imgur.com/8zINU.gif" alt="enter image description here"></a></p>
<p>I get this when applying the function:</p>
<p><a href="http://i.stack.imgur.com/MPM6y.png" rel="nofollow"><img src="http://i.stack.imgur.com/MPM6y.png" alt="enter image description here"></a></p>
<p>but should get this:</p>
<p><a href="http://i.stack.imgur.com/ECAIK.gif" rel="nofollow"><img src="http://i.stack.imgur.com/ECAIK.gif" alt="enter image description here"></a></p>
<p>I have implemented other Gaussian filters with Python; I'm not sure where I'm going wrong here.</p>
| 2 | 2016-08-19T09:24:08Z | 39,038,392 | <p>Sticking close to what your code is doing, one elegant solution is to use the <a href="http://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.filters.generic_filter.html" rel="nofollow"><code>scipy.ndimage.filters.generic_filter()</code></a> with the formula provided above.</p>
<pre><code>import numpy as np
from scipy.ndimage.filters import generic_filter
from scipy.ndimage import imread
# Load sample data
with np.DataSource().open("http://i.stack.imgur.com/8zINU.gif", "rb") as f:
    img = imread(f, mode="I")

# Apply the Sobel operator. With the 3x3 window flattened row-major into P,
# the formula above reads |Gx| + |Gy| with p1..p9 = P[0]..P[8].
def sobel_filter(P):
    return (np.abs((P[0] + 2 * P[1] + P[2]) - (P[6] + 2 * P[7] + P[8])) +
            np.abs((P[2] + 2 * P[5] + P[8]) - (P[0] + 2 * P[3] + P[6])))

G = generic_filter(img, sobel_filter, (3, 3))
</code></pre>
<p>Running this on the sample image takes about 400 ms. For comparison, <code>convolve2d</code> takes about 6.5 ms.</p>
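<p>For reference, the same |Gx| + |Gy| formula can also be vectorized with plain NumPy slicing, which avoids the per-pixel Python callback entirely. This is a sketch (not part of the original answer) assuming a 2D integer image and zero padding at the border:</p>

```python
import numpy as np

def sobel_numpy(img):
    # Zero-pad so border pixels have a full 3x3 neighbourhood.
    p = np.pad(img.astype(np.int64), 1, mode="constant")
    # Gx = (p1 + 2*p2 + p3) - (p7 + 2*p8 + p9): row above minus row below.
    gx = (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:]) \
       - (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:])
    # Gy = (p3 + 2*p6 + p9) - (p1 + 2*p4 + p7): right column minus left column.
    gy = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]) \
       - (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2])
    return np.abs(gx) + np.abs(gy)

img = np.arange(9).reshape(3, 3)
G = sobel_numpy(img)
```

<p>Since this does a handful of whole-array operations instead of one Python call per pixel, its speed should be much closer to the <code>convolve2d</code> figure than to the <code>generic_filter</code> one.</p>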
| 1 | 2016-08-19T11:50:05Z | [
"python",
"image",
"vision",
"sobel"
] |
Satellite position computation using Runge-Kutta 4 | 39,035,696 | <p>My issue is related to the Runge-Kutta 4 (RK4) method and the correct iteration steps required for the state vector of an orbiting satellite.
The below code (in Python) describes the motion based on the description as per this link (<a href="http://www.navipedia.net/index.php/GLONASS_Satellite_Coordinates_Computation" rel="nofollow">http://www.navipedia.net/index.php/GLONASS_Satellite_Coordinates_Computation</a>):</p>
<pre><code>if total_step_number != 0:
    for i in range(1, total_step_number+1):
        #Calculate k1
        k1[0] = (-cs.GM_GLONASS * XYZ[0] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ[0] * (1 - (5*(XYZ[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[0] + (cs.OMEGAE_DOT**2 * XYZ[0]) + (2 * cs.OMEGAE_DOT * XYZDot[1])
        k1[1] = (-cs.GM_GLONASS * XYZ[1] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ[1] * (1 - (5*(XYZ[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[1] + (cs.OMEGAE_DOT**2 * XYZ[1]) - (2 * cs.OMEGAE_DOT * XYZDot[0])
        k1[2] = (-cs.GM_GLONASS * XYZ[2] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ[2] * (3 - (5*(XYZ[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[2]
        #Intermediate step to bridge k1 to k2
        XYZ2[0] = XYZ[0] + (XYZDot[0] * h / 2) + (k1[0] * h**2 / 8)
        XYZDot2[0] = XYZDot[0] + (k1[0] * h / 2)
        XYZ2[1] = XYZ[1] + (XYZDot[1] * h / 2) + (k1[1] * h**2 / 8)
        XYZDot2[1] = XYZDot[1] + (k1[1] * h / 2)
        XYZ2[2] = XYZ[2] + (XYZDot[2] * h / 2) + (k1[2] * h**2 / 8)
        XYZDot2[2] = XYZDot[2] + (k1[2] * h / 2)
        radius = np.sqrt((XYZ2[0]**2)+(XYZ2[1]**2)+(XYZ2[2]**2))
        ....
</code></pre>
<p>There is more code; however, I want to limit what I show for now, since it's the intermediate steps I'm most interested in resolving. Basically, for those familiar with state vectors and RK4, you can see that the position and velocity are updated in the intermediate step, but not the acceleration. My question concerns the calculation required to also update the acceleration. It would begin:</p>
<pre><code>XYZDDot[0] = ...
XYZDDot[1] = ...
XYZDDot[2] = ...
</code></pre>
<p>...but what exactly comes after is not very clear. Any advice welcome.</p>
<p>Below is the full code: </p>
<pre><code>for j in h_step_values:
    h = j
    if h > 0:
        one_way_iteration_steps = one_way_iteration_steps - 1
    elif h < 0:
        one_way_iteration_steps = one_way_iteration_steps + 1
    XYZ = initial_XYZ
    #if total_step_number != 0:
    for i in range(0, one_way_iteration_steps):
        #Calculate k1
        k1[0] = (-cs.GM_GLONASS * XYZ[0] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ[0] * (1 - (5*(XYZ[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[0] + (cs.OMEGAE_DOT**2 * XYZ[0]) + (2 * cs.OMEGAE_DOT * XYZDot[1])
        k1[1] = (-cs.GM_GLONASS * XYZ[1] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ[1] * (1 - (5*(XYZ[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[1] + (cs.OMEGAE_DOT**2 * XYZ[1]) - (2 * cs.OMEGAE_DOT * XYZDot[0])
        k1[2] = (-cs.GM_GLONASS * XYZ[2] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ[2] * (3 - (5*(XYZ[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[2]
        #Intermediate step to bridge k1 to k2
        XYZ2[0] = XYZ[0] + (XYZDot[0] * h / 2) + (k1[0] * h**2 / 8)
        XYZDot2[0] = XYZDot[0] + (k1[0] * h / 2)
        XYZDDot2[0] = XYZDDot[0] + (k1[0] * h / 2)
        XYZ2[1] = XYZ[1] + (XYZDot[1] * h / 2) + (k1[1] * h**2 / 8)
        XYZDot2[1] = XYZDot[1] + (k1[1] * h / 2)
        XYZ2[2] = XYZ[2] + (XYZDot[2] * h / 2) + (k1[2] * h**2 / 8)
        XYZDot2[2] = XYZDot[2] + (k1[2] * h / 2)
        radius = np.sqrt((XYZ2[0]**2)+(XYZ2[1]**2)+(XYZ2[2]**2))
        #Calculate k2
        k2[0] = (-cs.GM_GLONASS * XYZ2[0] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ2[0] * (1 - (5*(XYZ2[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[0] + (cs.OMEGAE_DOT**2 * XYZ2[0]) + (2 * cs.OMEGAE_DOT * XYZDot2[1])
        k2[1] = (-cs.GM_GLONASS * XYZ2[1] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ2[1] * (1 - (5*(XYZ2[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[1] + (cs.OMEGAE_DOT**2 * XYZ2[1]) - (2 * cs.OMEGAE_DOT * XYZDot2[0])
        k2[2] = (-cs.GM_GLONASS * XYZ2[2] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ2[2] * (3 - (5*(XYZ2[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[2]
        #Intermediate step to bridge k2 to k3
        XYZ2[0] = XYZ[0] + (XYZDot[0] * h / 2) + (k2[0] * h**2 / 8)
        XYZDot2[0] = XYZDot[0] + (k2[0] * h / 2)
        XYZ2[1] = XYZ[1] + (XYZDot[1] * h / 2) + (k2[1] * h**2 / 8)
        XYZDot2[1] = XYZDot[1] + (k2[1] * h / 2)
        XYZ2[2] = XYZ[2] + (XYZDot[2] * h / 2) + (k2[2] * h**2 / 8)
        XYZDot2[2] = XYZDot[2] + (k2[2] * h / 2)
        radius = np.sqrt((XYZ2[0]**2)+(XYZ2[1]**2)+(XYZ2[2]**2))
        #Calculate k3
        k3[0] = (-cs.GM_GLONASS * XYZ2[0] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ2[0] * (1 - (5*(XYZ2[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[0] + (cs.OMEGAE_DOT**2 * XYZ2[0]) + (2 * cs.OMEGAE_DOT * XYZDot2[1])
        k3[1] = (-cs.GM_GLONASS * XYZ2[1] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ2[1] * (1 - (5*(XYZ2[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[1] + (cs.OMEGAE_DOT**2 * XYZ2[1]) - (2 * cs.OMEGAE_DOT * XYZDot2[0])
        k3[2] = (-cs.GM_GLONASS * XYZ2[2] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ2[2] * (3 - (5*(XYZ2[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[2]
        #Intermediate step to bridge k3 to k4
        XYZ2[0] = XYZ[0] + (XYZDot[0] * h) + (k3[0] * h**2 / 2)
        XYZDot2[0] = XYZDot[0] + (k3[0] * h)
        XYZ2[1] = XYZ[1] + (XYZDot[1] * h) + (k3[1] * h**2 / 2)
        XYZDot2[1] = XYZDot[1] + (k3[1] * h)
        XYZ2[2] = XYZ[2] + (XYZDot[2] * h) + (k3[2] * h**2 / 2)
        XYZDot2[2] = XYZDot[2] + (k3[2] * h)
        radius = np.sqrt((XYZ2[0]**2)+(XYZ2[1]**2)+(XYZ2[2]**2))
        #Calculate k4
        k4[0] = (-cs.GM_GLONASS * XYZ2[0] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ2[0] * (1 - (5*(XYZ2[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[0] + (cs.OMEGAE_DOT**2 * XYZ2[0]) + (2 * cs.OMEGAE_DOT * XYZDot2[1])
        k4[1] = (-cs.GM_GLONASS * XYZ2[1] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ2[1] * (1 - (5*(XYZ2[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[1] + (cs.OMEGAE_DOT**2 * XYZ2[1]) - (2 * cs.OMEGAE_DOT * XYZDot2[0])
        k4[2] = (-cs.GM_GLONASS * XYZ2[2] / radius**3) \
                + ((3/2) * cs.C_20 * cs.GM_GLONASS * cs.SEMI_MAJOR_AXIS_GLONASS**2 * XYZ2[2] * (3 - (5*(XYZ2[2]**2) / (radius**2))) / radius**5) \
                + XYZDDot[2]
        for p in range(3):
            XYZ[p] = XYZ[p] + XYZDot[p] * h + h**2 * ((k1[p] + 2*k2[p] + 2*k3[p] + k4[p]) / 12)
            XYZDot[p] = XYZDot[p] + (h * (k1[p] + 2*k2[p] + 2*k3[p] + k4[p]) / 6)
        radius = np.sqrt((XYZ[0])**2 + (XYZ[1])**2 + (XYZ[2])**2)
</code></pre>
| 3 | 2016-08-19T09:32:58Z | 39,066,824 | <p>The equation you are solving is of the type</p>
<pre><code>ddot x = a(x)
</code></pre>
<p>where <code>a(x)</code> is the acceleration which is computed in your <code>k1</code> computation. Indeed, the first order system would be</p>
<pre><code>dot v = a(x)
dot x = v
</code></pre>
<p>The RK4 implementation thus starts with</p>
<pre><code>k1 = a(x)
l1 = v
k2 = a(x+l1*h/2) = a(x+v*h/2)
l2 = v+k1*h/2
</code></pre>
<p>etc. The use of the <code>l1,l2,...</code> seems implicit in the code, inserting these linear combinations directly where they occur.</p>
<hr>
<p>In short, you are not missing the acceleration computation, it is the main part of the code fragment.</p>
<hr>
<p><strong>Update: (8/22)</strong> To come closer to the intention of the intermediate bridge steps, the abstract code should read (with <code>(* .. *)</code> denoting comments or unnecessary computations):</p>
<pre><code>k1 = a(x) (* l1 = v *)
x2 = x + v*h/2 (* v2 = v + k1*h/2 *)
k2 = a(x2) (* l2 = v2 *)
x3 (* = x + l2*h/2 *)
= x + v*h/2 + k1*h^2/4 (* v3 = v + k2*h/2 *)
k3 = a(x3) (* l3 = v3 *)
x4 (* = x + l3*h *)
= x + v*h + k2*h^2/2 (* v4 = v + k3*h *)
k4 = a(x4) (* l4 = v4 *)
delta_v = ( k1+2*(k2+k3)+k4 ) * h/6
delta_x (* = ( l1+2*(l2+l3)+l4 ) * h/6 *)
= v*h + (k1+k2+k3) * h^2/6
</code></pre>
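<p>A minimal, runnable sketch of exactly this scheme (my own notation, not the poster's variables), tested on the toy problem x'' = -x, whose exact solution for x(0)=1, v(0)=0 is cos(t). Note that the poster's accelerations also depend on velocity through the Coriolis terms, which this position-only sketch deliberately ignores:</p>

```python
import math

def rk4_step(a, x, v, h):
    """One RK4 step for the second-order ODE x'' = a(x), using the k/l pairs above."""
    k1 = a(x);              l1 = v
    k2 = a(x + l1 * h / 2); l2 = v + k1 * h / 2
    k3 = a(x + l2 * h / 2); l3 = v + k2 * h / 2
    k4 = a(x + l3 * h);     l4 = v + k3 * h
    x_new = x + (l1 + 2 * (l2 + l3) + l4) * h / 6   # = x + v*h + (k1+k2+k3)*h^2/6
    v_new = v + (k1 + 2 * (k2 + k3) + k4) * h / 6
    return x_new, v_new

accel = lambda pos: -pos        # x'' = -x
x, v, h = 1.0, 0.0, 0.01
for _ in range(100):            # integrate to t = 1.0
    x, v = rk4_step(accel, x, v, h)
print(x)                        # close to cos(1) ~ 0.5403
```

<p>Expanding the <code>l</code> combinations shows they reduce to the <code>delta_x</code> formula above: <code>l1+2*(l2+l3)+l4 = 6*v + h*(k1+k2+k3)</code>.</p>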
| 1 | 2016-08-21T17:15:53Z | [
"python",
"runge-kutta",
"orbital-mechanics",
"satellite-navigation"
] |
python - adding quotes for an each word and symbol on a text file | 39,035,876 | <p>I want to put quotation marks around each word and symbol in a text file.</p>
<p>For instance;</p>
<blockquote>
<p>Türkiye ya da resmî adıyla Türkiye Cumhuriyeti, topraklarının büyük
bölümü Anadolu'ya, küçük bir bölümü ise Balkanlar'ın uzantısı olan
Trakya'ya yayılmış bir ülke.</p>
</blockquote>
<p>becomes -></p>
<blockquote>
<p>"Türkiye" "ya" "da" "resmî" "adıyla" "Türkiye" "Cumhuriyeti" ","
"topraklarının" "büyük" "bölümü" "Anadolu'ya" "," "küçük" "bir"
"bölümü" "ise" "Balkanlar'ın" "uzantısı" "olan" "Trakya'ya" "yayılmıÅ"
"bir" "ülke" "."</p>
</blockquote>
<p>To this end, I have written the following code:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re, codecs, io
with io.open("turkish.txt", "r", encoding="utf-8") as myfile:
    text = myfile.read()
replacer = re.compile("([\w'-]+|[.,!?;()%])", re.UNICODE)
output_text = replacer.sub(r'"\1"', text).replace('""','" "')
text_file = open("Output.txt", "w")
text_file.write(output_text.encode('utf8'))
text_file.close()
</code></pre>
<p>On the example above, the replacement is fine.</p>
<p>But for such an example, </p>
<blockquote>
<p>İmparatorluk zirvesini 15 ve 17'nin arasında, özelikle I. Süleyman
döneminde 10.000'lerde yaşadı.</p>
</blockquote>
<p>the replacement occurs as follows:</p>
<blockquote>
<p>"İmparatorluk" "zirvesini" "15" "ve" "17'" "nin" "arasında",
"özelikle" "I" "." "Süleyman" "döneminde" "10" "." "000'" "lerde"
"yaÅadı" "."</p>
</blockquote>
<p>As you see, <code>10.000</code> is a number, <code>17'nin</code> belongs together, and <code>I.</code> refers to a Roman-numeral ranking, so I want them to be kept as the single tokens</p>
<p><code>10.000</code>, <code>17'nin</code>, and <code>I.</code>.</p>
<p>How should I modify my regex or code to achieve that?</p>
<p>Thanks,</p>
 | 1 | 2016-08-19T09:44:11Z | 39,036,017 | <p>Adding <code>[IVXLCDM]+\.|[\d\.]+(?:'\w+)?</code> to the beginning of the regex pattern matches "10.000", "10.000'lerde", and "I." as intended.</p>
<pre><code>replacer = re.compile(r"\b([IVXLCDM]+\.|[\d\.]+(?:'\w+)?|[\w'-]+|[.,!?;()%])", re.UNICODE)
</code></pre>
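<p>A quick check of the extended pattern on the second example sentence (a sketch, using a shortened version of the text):</p>

```python
import re

replacer = re.compile(r"\b([IVXLCDM]+\.|[\d\.]+(?:'\w+)?|[\w'-]+|[.,!?;()%])",
                      re.UNICODE)
text = u"zirvesini 15 ve 17'nin arasında, özelikle I. Süleyman döneminde 10.000'lerde"
out = replacer.sub(r'"\1"', text).replace('""', '" "')
print(out)
```

<p>The digit branch now consumes <code>10.000'lerde</code> and <code>17'nin</code> whole, and the Roman-numeral branch keeps <code>I.</code> together, while the comma is still split off as its own token.</p>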
| 1 | 2016-08-19T09:52:12Z | [
"python",
"regex"
] |
python - adding quotes for an each word and symbol on a text file | 39,035,876 | <p>I want to put quotation marks around each word and symbol in a text file.</p>
<p>For instance;</p>
<blockquote>
<p>Türkiye ya da resmî adıyla Türkiye Cumhuriyeti, topraklarının büyük
bölümü Anadolu'ya, küçük bir bölümü ise Balkanlar'ın uzantısı olan
Trakya'ya yayılmış bir ülke.</p>
</blockquote>
<p>becomes -></p>
<blockquote>
<p>"Türkiye" "ya" "da" "resmî" "adıyla" "Türkiye" "Cumhuriyeti" ","
"topraklarının" "büyük" "bölümü" "Anadolu'ya" "," "küçük" "bir"
"bölümü" "ise" "Balkanlar'ın" "uzantısı" "olan" "Trakya'ya" "yayılmıÅ"
"bir" "ülke" "."</p>
</blockquote>
<p>To this end, I have written the following code:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re, codecs, io
with io.open("turkish.txt", "r", encoding="utf-8") as myfile:
    text = myfile.read()
replacer = re.compile("([\w'-]+|[.,!?;()%])", re.UNICODE)
output_text = replacer.sub(r'"\1"', text).replace('""','" "')
text_file = open("Output.txt", "w")
text_file.write(output_text.encode('utf8'))
text_file.close()
</code></pre>
<p>On the example above, the replacement is fine.</p>
<p>But for such an example, </p>
<blockquote>
<p>İmparatorluk zirvesini 15 ve 17'nin arasında, özelikle I. Süleyman
döneminde 10.000'lerde yaşadı.</p>
</blockquote>
<p>the replacement occurs as follows:</p>
<blockquote>
<p>"İmparatorluk" "zirvesini" "15" "ve" "17'" "nin" "arasında",
"özelikle" "I" "." "Süleyman" "döneminde" "10" "." "000'" "lerde"
"yaÅadı" "."</p>
</blockquote>
<p>As you see, <code>10.000</code> is a number, <code>17'nin</code> belongs together, and <code>I.</code> refers to a Roman-numeral ranking, so I want them to be kept as the single tokens</p>
<p><code>10.000</code>, <code>17'nin</code>, and <code>I.</code>.</p>
<p>How should I modify my regex or code to achieve that?</p>
<p>Thanks,</p>
| 1 | 2016-08-19T09:44:11Z | 39,036,161 | <pre><code>with open("turkish.txt", "r") as myfile:
    text = myfile.read()

output_text = text.split(" ")

with open("Output.txt", "w") as outfile:
    for word in output_text:
        outfile.write(' "' + word + '" ')
</code></pre>
<p>This may be a simpler solution.</p>
| 1 | 2016-08-19T09:59:46Z | [
"python",
"regex"
] |
How to decode string representative of utf-8 with python? | 39,035,899 | <p>I have a <strong>unicode</strong> string like this:</p>
<pre><code>\xE5\xB1\xB1\xE4\xB8\x9C \xE6\x97\xA5\xE7\x85\xA7
</code></pre>
<p>And I know it is the string representation of <code>bytes</code> encoded with <code>utf-8</code>.</p>
<p>Note that the string <code>\xE5\xB1\xB1\xE4\xB8\x9C \xE6\x97\xA5\xE7\x85\xA7</code> itself is <code><type 'unicode'></code></p>
<p>How do I decode it to the real string <code>山东 日照</code>?</p>
| 1 | 2016-08-19T09:45:45Z | 39,035,938 | <p>If you printed the <code>repr()</code> output of your <code>unicode</code> string then you appear to have a <a href="https://en.wikipedia.org/wiki/Mojibake" rel="nofollow"><em>Mojibake</em></a>, bytes data decoded using the wrong encoding.</p>
<p>First encode back to bytes, then decode using the right codec. This may be as simple as encoding as Latin-1:</p>
<pre><code>unicode_string.encode('latin1').decode('utf8')
</code></pre>
<p>This depends on how the incorrect decoding was applied however. If a Windows codepage (like CP1252) was used, you can end up with Unicode data that is not actually encodable back to CP1252 if UTF-8 bytes outside the CP1252 range were force-decoded anyway.</p>
<p>The best way to repair such mistakes is using the <a href="http://ftfy.readthedocs.io/en/latest/" rel="nofollow"><code>ftfy</code> library</a>, which knows how to deal with forced-decoded Mojibake texts for a variety of codecs.</p>
<p>For your small sample, Latin-1 <em>appears</em> to work just fine:</p>
<pre><code>>>> unicode_string = u'\xE5\xB1\xB1\xE4\xB8\x9C \xE6\x97\xA5\xE7\x85\xA7'
>>> print unicode_string.encode('latin1').decode('utf8')
山东 日照
>>> import ftfy
>>> print ftfy.fix_text(unicode_string)
山东 日照
</code></pre>
<p>If you have the <em>literal</em> character <code>\</code>, <code>x</code>, followed by two digits, you have another layer of encoding where the bytes where replaced by 4 characters each. You'd have to 'decode' those to actual bytes first, by asking Python to interpret the escapes with the <code>string_escape</code> codec:</p>
<pre><code>>>> unicode_string = ur'\xE5\xB1\xB1\xE4\xB8\x9C \xE6\x97\xA5\xE7\x85\xA7'
>>> unicode_string
u'\\xE5\\xB1\\xB1\\xE4\\xB8\\x9C \\xE6\\x97\\xA5\\xE7\\x85\\xA7'
>>> print unicode_string.decode('string_escape').decode('utf8')
山东 日照
</code></pre>
<p><code>'string_escape'</code> is a Python 2 only codec that produces a bytestring, so it is safe to decode that as UTF-8 afterwards.</p>
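<p>For readers on Python 3, where <code>str</code> is always Unicode and the <code>string_escape</code> codec no longer exists, the same Latin-1 round-trip still applies; literal <code>\xNN</code> escapes can be undone with the <code>unicode_escape</code> codec instead. A sketch:</p>

```python
# Mojibake repair: the str holds Latin-1 characters that are really UTF-8 bytes.
s = "\xE5\xB1\xB1\xE4\xB8\x9C \xE6\x97\xA5\xE7\x85\xA7"
fixed = s.encode("latin1").decode("utf8")
print(fixed)  # 山东 日照

# If the input instead contains the literal text '\xE5\xB1...', undo the escapes first.
literal = r"\xE5\xB1\xB1\xE4\xB8\x9C \xE6\x97\xA5\xE7\x85\xA7"
raw = literal.encode("ascii").decode("unicode_escape")  # '\xNN' escapes -> Latin-1 chars
print(raw.encode("latin1").decode("utf8"))  # 山东 日照
```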
| 5 | 2016-08-19T09:48:04Z | [
"python",
"utf-8",
"decode",
"encode"
] |
How to read input to python in bash | 39,035,904 | <p>I have code from <a href="http://stackoverflow.com/questions/16852655/convert-a-tsv-file-to-xls-xlsx-using-python">here</a> and I would like to call this Python script from bash like:</p>
<blockquote>
<p>python conversion.py sample.tsv sample.xlsx</p>
</blockquote>
<p>python script:</p>
<pre><code>import csv
from xlsxwriter.workbook import Workbook
# Add some command-line logic to read the file names.
tsv_file = 'sample.tsv'
xlsx_file = 'sample.xlsx'
# Create an XlsxWriter workbook object and add a worksheet.
workbook = Workbook(xlsx_file)
worksheet = workbook.add_worksheet()
# Create a TSV file reader.
tsv_reader = csv.reader(open(tsv_file, 'rb'), delimiter='\t')
# Read the row data from the TSV file and write it to the XLSX file.
for row, data in enumerate(tsv_reader):
worksheet.write_row(row, 0, data)
# Close the XLSX file.
workbook.close()
</code></pre>
<p>How can I modify this Python script? Sorry if this is a stupid question, but I do not normally work with Python.</p>
| 1 | 2016-08-19T09:45:58Z | 39,036,020 | <p>You can use <a href="https://docs.python.org/2/library/sys.html" rel="nofollow">sys.argv</a>.</p>
<pre><code>import sys
...
tsv_file = sys.argv[1]
xlsx_file = sys.argv[2]
...
</code></pre>
<p>You can then run it like:</p>
<pre><code>main.py sample.tsv sample.xlsx
</code></pre>
<p>You can also check the <code>argparse</code> package. It provides fancier functionality and is basically a wrapper around <code>sys.argv</code>.</p>
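<p>A hedged sketch of the <code>argparse</code> version (the argument names here are my own choice, not from the original script):</p>

```python
import argparse

parser = argparse.ArgumentParser(description="Convert a TSV file to an XLSX file.")
parser.add_argument("tsv_file", help="input .tsv path")
parser.add_argument("xlsx_file", help="output .xlsx path")

# In a real script, call parser.parse_args() with no arguments so it reads
# sys.argv; a list is passed here only for demonstration.
args = parser.parse_args(["sample.tsv", "sample.xlsx"])
print(args.tsv_file, args.xlsx_file)
```

<p>Besides named attributes, <code>argparse</code> gives you <code>--help</code> output and argument-count errors for free.</p>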
| 2 | 2016-08-19T09:52:16Z | [
"python",
"bash"
] |
Changing values in a list with a click | 39,035,956 | <p>I'm working on a 'card game'. The idea is that the user can reveal up to two cards at the same time; when they click on a third card, the previous two disappear. Later I will modify the code, so that if the cards the user clicked on were the same, they'd stay exposed.</p>
<p>The problem is that right now only one card gets exposed. When I click on another card, the first one turns back face down.</p>
<p>Here's the function, why does it not work?</p>
<pre><code>def mouseclick(pos):
    global exposed, click_count, turn
    # add game state logic here
    for card in range(len(deck)):
        if pos[0] > card * w and pos[0] < card * w + w:
            exposed[card] = True
            click_count += 1
            if click_count > 2:
                click_count = 1
                turn += 1
            print click_count
        else:
            exposed[card] = False
            if exposed[card] == True:
                pass
    print exposed
</code></pre>
<p>BTW, the code won't work unless you run it in CodeSkulptor:
<a href="http://www.codeskulptor.org/#user41_BGhWSBJ6ln_0.py" rel="nofollow">http://www.codeskulptor.org/#user41_BGhWSBJ6ln_0.py</a></p>
<pre><code># implementation of card game - Memory
import simplegui
import random
deck = range(0, 8)* 2
exposed = [False] * len(deck)
print exposed
print deck
w = 50
h = 100
WIDTH = w * 16 + 2
HEIGHT = 102
click_count = 0
turn = 0
# helper function to initialize globals
def new_game():
    global exposed, click_count
    random.shuffle(deck)
    exposed = [False] * len(deck)
    click_count = 0
    print deck
    print exposed

# event handlers
def mouseclick(pos):
    global exposed, click_count, turn
    # add game state logic here
    for card in range(len(deck)):
        if pos[0] > card * w and pos[0] < card * w + w:
            exposed[card] = True
            click_count += 1
            if click_count > 2:
                click_count = 1
                turn += 1
            print click_count
        else:
            exposed[card] = False
            if exposed[card] == True:
                pass
    print exposed

# cards are 50x100 pixels in size
def draw(canvas):
    line = 1
    x = 1
    y = 1
    for i in range(len(deck)):
        if exposed[i] == True:
            canvas.draw_text(str(deck[i]), [(0.3* w) + w * i, (y + h) * 0.66], 40, "Black")
        else:
            canvas.draw_polygon([[x, y], [x + w, y], [x + w, y + h], [x, y + h]], line, "White", '#55aa55')
        x += w
# create frame and button and labels
frame = simplegui.create_frame("Memory", WIDTH, HEIGHT)
frame.add_button("Reset", new_game)
label = frame.add_label("Turns = " + str(turn))
frame.set_canvas_background("White")
# register event handlers
frame.set_mouseclick_handler(mouseclick)
frame.set_draw_handler(draw)
# get things rolling
new_game()
frame.start()
</code></pre>
 | 0 | 2016-08-19T09:48:46Z | 39,036,759 | <p>I think your logic is wrong. If you want to expose two cards and have them disappear when a third one is clicked, you cannot get by with only the booleans True and False.
Keep track of the first two clicked cards in a list or in two variables; when a third card is clicked, clear those two and record the third.</p>
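<p>A minimal sketch of that idea (a hypothetical helper, not CodeSkulptor-specific): remember the indices of the face-up cards, and clear them when a third card is clicked:</p>

```python
clicked = []  # indices of the currently face-up cards (at most two)

def on_card_click(card, exposed):
    """Expose `card`; if two cards are already face up, turn them face down first."""
    global clicked
    if len(clicked) == 2:
        for c in clicked:          # third click: hide the previous pair
            exposed[c] = False
        clicked = []
    exposed[card] = True
    clicked.append(card)

exposed = [False] * 4
for card in (0, 1, 2):
    on_card_click(card, exposed)
print(exposed)  # [False, False, True, False]
```

<p>Later, the match check ("cards stay exposed if equal") would go in the same branch, before clearing <code>clicked</code>.</p>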
| 0 | 2016-08-19T10:30:15Z | [
"python",
"list",
"counter"
] |
Changing values in a list with a click | 39,035,956 | <p>I'm working on a 'card game'. The idea is that the user can reveal up to two cards at the same time; when they click on a third card, the previous two disappear. Later I will modify the code, so that if the cards the user clicked on were the same, they'd stay exposed.</p>
<p>The problem is that right now only one card gets exposed. When I click on another card, the first one turns back face down.</p>
<p>Here's the function, why does it not work?</p>
<pre><code>def mouseclick(pos):
    global exposed, click_count, turn
    # add game state logic here
    for card in range(len(deck)):
        if pos[0] > card * w and pos[0] < card * w + w:
            exposed[card] = True
            click_count += 1
            if click_count > 2:
                click_count = 1
                turn += 1
            print click_count
        else:
            exposed[card] = False
            if exposed[card] == True:
                pass
    print exposed
</code></pre>
<p>BTW, the code won't work unless you run it in CodeSkulptor:
<a href="http://www.codeskulptor.org/#user41_BGhWSBJ6ln_0.py" rel="nofollow">http://www.codeskulptor.org/#user41_BGhWSBJ6ln_0.py</a></p>
<pre><code># implementation of card game - Memory
import simplegui
import random
deck = range(0, 8)* 2
exposed = [False] * len(deck)
print exposed
print deck
w = 50
h = 100
WIDTH = w * 16 + 2
HEIGHT = 102
click_count = 0
turn = 0
# helper function to initialize globals
def new_game():
    global exposed, click_count
    random.shuffle(deck)
    exposed = [False] * len(deck)
    click_count = 0
    print deck
    print exposed

# event handlers
def mouseclick(pos):
    global exposed, click_count, turn
    # add game state logic here
    for card in range(len(deck)):
        if pos[0] > card * w and pos[0] < card * w + w:
            exposed[card] = True
            click_count += 1
            if click_count > 2:
                click_count = 1
                turn += 1
            print click_count
        else:
            exposed[card] = False
            if exposed[card] == True:
                pass
    print exposed

# cards are 50x100 pixels in size
def draw(canvas):
    line = 1
    x = 1
    y = 1
    for i in range(len(deck)):
        if exposed[i] == True:
            canvas.draw_text(str(deck[i]), [(0.3* w) + w * i, (y + h) * 0.66], 40, "Black")
        else:
            canvas.draw_polygon([[x, y], [x + w, y], [x + w, y + h], [x, y + h]], line, "White", '#55aa55')
        x += w
# create frame and button and labels
frame = simplegui.create_frame("Memory", WIDTH, HEIGHT)
frame.add_button("Reset", new_game)
label = frame.add_label("Turns = " + str(turn))
frame.set_canvas_background("White")
# register event handlers
frame.set_mouseclick_handler(mouseclick)
frame.set_draw_handler(draw)
# get things rolling
new_game()
frame.start()
</code></pre>
| 0 | 2016-08-19T09:48:46Z | 39,042,126 | <p>OK, here is what I found so far based on your code.
First of all, yes, it is possible. I created a second list which is a copy of the <code>exposed</code> list; I update it right below where you update <code>exposed[card]</code>.
What I realized is that it keeps a second value <code>True</code>, even when <code>exposed</code> keeps just one. <a href="http://www.codeskulptor.org/#user41_BGhWSBJ6ln_2.py" rel="nofollow">1st attempt to remove the bug</a></p>
<p>I have included some print statements with the changes made. I know it's not much, nor an exact solution, but I don't have any more time to work on it right now. Keep the question active and I will get back to it as soon as possible.</p>
| 0 | 2016-08-19T14:56:29Z | [
"python",
"list",
"counter"
] |
Compress/Zip numpy arrays in Memory | 39,035,983 | <p>My memory is too small for my data, so I tried packing it in memory.</p>
<p>The following code does work, but I have to remember the type of the data, which is kind of akward (lots of different data types). </p>
<p>Any better suggestions? Smaller running time would also be appreciated</p>
<pre><code>import numpy as np
import zlib
A=np.arange(10000)
dtype=A.dtype
B=zlib.compress(A,1)
C=np.fromstring(zlib.decompress(B),dtype)
np.testing.assert_allclose(A,C)
</code></pre>
| 1 | 2016-08-19T09:50:25Z | 39,036,831 | <p>You could try using numpy's builtin array compressor <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.savez_compressed.html#numpy-savez-compressed" rel="nofollow"><code>np.savez_compressed()</code></a>. This will save you the hassle of keeping track of the data types, but would probably give similar performance to your method. Here's an example:</p>
<pre><code>import io
import numpy as np
A = np.arange(10000)
compressed_array = io.BytesIO() # np.savez_compressed() requires a file-like object to write to
np.savez_compressed(compressed_array, A)
# load it back
compressed_array.seek(0) # seek back to the beginning of the file-like object
decompressed_array = np.load(compressed_array)['arr_0']
>>> print(len(compressed_array.getvalue())) # compressed array size
15364
>>> assert A.dtype == decompressed_array.dtype
>>> assert all(A == decompressed_array)
</code></pre>
<p>Note that any size reduction depends on the distribution of your data. Random data is inherently incompressible, so you might not see much benefit by attempting to compress it.</p>
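<p>If you want to keep <code>zlib</code> (for example for its adjustable compression level) without tracking dtypes by hand, another option is to serialize with <code>np.save()</code> first — the <code>.npy</code> header records dtype and shape — and compress those bytes. A sketch:</p>

```python
import io
import zlib

import numpy as np

def compress_array(a, level=1):
    buf = io.BytesIO()
    np.save(buf, a)                      # .npy header stores dtype and shape
    return zlib.compress(buf.getvalue(), level)

def decompress_array(blob):
    # np.load reads the header back, so no dtype bookkeeping is needed
    return np.load(io.BytesIO(zlib.decompress(blob)))

A = np.arange(10000)
C = decompress_array(compress_array(A))
```

<p>This is essentially what <code>np.savez_compressed()</code> does internally, but it exposes the <code>zlib</code> compression level directly.</p>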
| 2 | 2016-08-19T10:33:30Z | [
"python",
"numpy",
"memory",
"zip"
] |
Python numpy data pointer addresses change without modification | 39,036,089 | <p><strong>EDIT</strong></p>
<p>After some more fiddling around, I've so far isolated the following states:</p>
<ol>
<li>A <strong>1D array</strong> gives <em>two</em> different addresses when entering the variable directly, and only <em>one</em> when using <code>print()</code></li>
<li>A <strong>2D array</strong> (or <strong>matrix</strong>) gives <em>three</em> different addresses when entering the variable directly, and <em>two</em> when using <code>print()</code></li>
<li>A <strong>3D array</strong> gives <em>two</em> different addresses when entering the variable directly, and only <em>one</em> when using <code>print()</code> (apparently the same as with the <strong>1D array</strong>)</li>
</ol>
<p>Like so:</p>
<pre><code>>>> a = numpy.array([1,2,3], dtype="int32")
>>> a.data
<memory at 0x7f02e85e4048>
>>> a.data
<memory at 0x7f02e85e4110>
>>> a.data
<memory at 0x7f02e85e4048>
>>> a.data
<memory at 0x7f02e85e4110>
>>> a.data
<memory at 0x7f02e85e4048>
>>> print(a.data)
<memory at 0x7f02e85e4110>
>>> print(a.data)
<memory at 0x7f02e85e4110>
>>> print(a.data)
<memory at 0x7f02e85e4110>
>>> print(a.data)
<memory at 0x7f02e85e4110>
>>> print(a.data)
<memory at 0x7f02e85e4110>
>>> d = numpy.array([[1,2,3]], dtype="int32")
>>> d.data
<memory at 0x7f02e863ae48>
>>> d.data
<memory at 0x7f02e863a9e8>
>>> d.data
<memory at 0x7f02e863aac8>
>>> d.data
<memory at 0x7f02e863ae48>
>>> d.data
<memory at 0x7f02e863a9e8>
>>> d.data
<memory at 0x7f02e863aac8>
>>> print(d.data)
<memory at 0x7f02e863ae48>
>>> print(d.data)
<memory at 0x7f02e863a9e8>
>>> print(d.data)
<memory at 0x7f02e863ae48>
>>> print(d.data)
<memory at 0x7f02e863a9e8>
>>> print(d.data)
<memory at 0x7f02e863ae48>
>>> b = numpy.matrix([[1,2,3],[4,5,6]], dtype="int32")
>>> b.data
<memory at 0x7f02e863a9e8>
>>> b.data
<memory at 0x7f02e863ae48>
>>> b.data
<memory at 0x7f02e863aac8>
>>> b.data
<memory at 0x7f02e863a9e8>
>>> b.data
<memory at 0x7f02e863ae48>
>>> print(b.data)
<memory at 0x7f02e863aac8>
>>> print(b.data)
<memory at 0x7f02e863a9e8>
>>> print(b.data)
<memory at 0x7f02e863aac8>
>>> print(b.data)
<memory at 0x7f02e863a9e8>
>>> print(b.data)
<memory at 0x7f02e863aac8>
>>> c = numpy.matrix([[1,2,3],[4,5,6],[7,8,9]], dtype="int32")
>>> c.data
<memory at 0x7f02e863aac8>
>>> c.data
<memory at 0x7f02e863a9e8>
>>> c.data
<memory at 0x7f02e863ae48>
>>> c.data
<memory at 0x7f02e863aac8>
>>> c.data
<memory at 0x7f02e863ae48>
>>> c.data
<memory at 0x7f02e863a9e8>
>>> c.data
<memory at 0x7f02e863aac8>
>>> print(c.data)
<memory at 0x7f02e863ae48>
>>> print(c.data)
<memory at 0x7f02e863a9e8>
>>> print(c.data)
<memory at 0x7f02e863ae48>
>>> print(c.data)
<memory at 0x7f02e863a9e8>
>>> print(c.data)
<memory at 0x7f02e863ae48>
>>> e = numpy.array([[[0,1],[2,3]],[[4,5],[6,7]]], dtype="int32")
>>> e.data
<memory at 0x7f8ca0fe1048>
>>> e.data
<memory at 0x7f8ca0fe1140>
>>> e.data
<memory at 0x7f8ca0fe1048>
>>> e.data
<memory at 0x7f8ca0fe1140>
>>> e.data
<memory at 0x7f8ca0fe1048>
>>> print(e.data)
<memory at 0x7f8ca0fe1048>
>>> print(e.data)
<memory at 0x7f8ca0fe1048>
>>> print(e.data)
<memory at 0x7f8ca0fe1048>
</code></pre>
<hr>
<p><strong>ORIGINAL POST</strong></p>
<p>I was under the impression that merely entering a variable alone in the Python console will echo a string simply describing its value (and type). It formats in a different manner than print(), but I assumed the values they both returned would be the same.</p>
<p>When I try to output the address of the data pointer object of a numpy object, just entering the variable gives me a different value every other time, while print() gives the same value.</p>
<p>That suggests that the differences between the two operations aren't just in how the output is formatted, but also in where they get their information from. But what exactly do these additional differences consist of?</p>
<pre><code>>>> a = numpy.array([0,1,2])
>>> a
array([0, 1, 2])
>>> print(a)
[0 1 2]
>>> print(a.data)
<memory at 0x7ff25120c110>
>>> print(a.data)
<memory at 0x7ff25120c110>
>>> print(a.data)
<memory at 0x7ff25120c110>
>>> a.data
<memory at 0x7ff25120c110>
>>> a.data
<memory at 0x7ff253099818>
>>> a.data
<memory at 0x7ff25120c110>
>>> a.data
<memory at 0x7ff253099818>
>>> a.data
<memory at 0x7ff25120c110>
</code></pre>
| 3 | 2016-08-19T09:56:09Z | 39,037,228 | <p>From the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.data.html" rel="nofollow">docs</a> </p>
<blockquote>
<p>ndarray.data</p>
<p>Python buffer object pointing to the start of the array's data.</p>
</blockquote>
<p>Which should just be a <code>memoryview</code> of the data.</p>
<p><em>Edit, trying to be clearer</em>:</p>
<p>In my case, a <em>1-D</em> array gives new values every time - it doesn't cycle between two values only:</p>
<pre><code>In [196]: a = numpy.array([0, 1, 2])
In [197]: a.data
Out[197]: <read-write buffer for 0x7f7de5934f80, size 24, offset 0 at 0x7f7de594d4b0>
In [198]: a.data
Out[198]: <read-write buffer for 0x7f7de5934f80, size 24, offset 0 at 0x7f7de594df70>
In [199]: a.data
Out[199]: <read-write buffer for 0x7f7de5934f80, size 24, offset 0 at 0x7f7de594d570>
In [200]: a.data
Out[200]: <read-write buffer for 0x7f7de5934f80, size 24, offset 0 at 0x7f7de594d870>
</code></pre>
<p>I think the behaviour is not peculiar to numpy only. See what happens with a <code>buffer</code>:</p>
<pre><code>In [222]: a = ('123' * 999)
In [223]: buffer(a)
Out[223]: <read-only buffer for 0x7f7de003cbd0, size -1, offset 0 at 0x7f7de5955170>
In [224]: buffer(a)
Out[224]: <read-only buffer for 0x7f7de003cbd0, size -1, offset 0 at 0x7f7de594ddb0>
In [225]: buffer(a)
Out[225]: <read-only buffer for 0x7f7de003cbd0, size -1, offset 0 at 0x7f7de597a5b0>
In [226]: buffer(a)
Out[226]: <read-only buffer for 0x7f7de003cbd0, size -1, offset 0 at 0x7f7de594de70>
</code></pre>
<p>In the case of <code>Buffer</code>, the doc says (emphasis mine):</p>
<blockquote>
<p>buffer(object[, offset[, size]])</p>
<p>The object argument must be an object that supports the buffer call interface (such as strings, arrays, and buffers). <em>A new buffer object will be created which references the object argument</em>.</p>
</blockquote>
<p>So I guess we should expect the memory address to change. However, back to the original question, it seems that some caching happens, and I concur with you both that it must be down to some sort of optimisation. Unfortunately, why and in which cases the caching happens, I cannot find out in the Python code base.</p>
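<p>A minimal sketch of this on Python 3 (where <code>ndarray.data</code> is a <code>memoryview</code>): each attribute access constructs a brand-new view object over the same underlying buffer:</p>

```python
import numpy as np

a = np.array([0, 1, 2])
m1 = a.data   # each access constructs a new memoryview object...
m2 = a.data
assert m1 is not m2
# ...but both views wrap the very same underlying buffer:
assert bytes(m1) == bytes(m2)
```

<p>Because nothing keeps the discarded views alive, CPython frees each one immediately, and the next view is frequently allocated at a just-freed address; that is why the printed addresses appear to cycle.</p>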
| 1 | 2016-08-19T10:53:09Z | [
"python",
"pointers",
"numpy"
] |
Python numpy data pointer addresses change without modification | 39,036,089 | <p><strong>EDIT</strong></p>
<p>After some more fiddling around, I've so far isolated the following states:</p>
<ol>
<li>A <strong>1D array</strong> gives <em>two</em> different addresses when entering the variable directly, and only <em>one</em> when using <code>print()</code></li>
<li>A <strong>2D array</strong> (or <strong>matrix</strong>) gives <em>three</em> different addresses when entering the variable directly, and <em>two</em> when using <code>print()</code></li>
<li>A <strong>3D array</strong> gives <em>two</em> different addresses when entering the variable directly, and only <em>one</em> when using <code>print()</code> (apparently the same as with the <strong>1D array</strong>)</li>
</ol>
<p>Like so:</p>
<pre><code>>>> a = numpy.array([1,2,3], dtype="int32")
>>> a.data
<memory at 0x7f02e85e4048>
>>> a.data
<memory at 0x7f02e85e4110>
>>> a.data
<memory at 0x7f02e85e4048>
>>> a.data
<memory at 0x7f02e85e4110>
>>> a.data
<memory at 0x7f02e85e4048>
>>> print(a.data)
<memory at 0x7f02e85e4110>
>>> print(a.data)
<memory at 0x7f02e85e4110>
>>> print(a.data)
<memory at 0x7f02e85e4110>
>>> print(a.data)
<memory at 0x7f02e85e4110>
>>> print(a.data)
<memory at 0x7f02e85e4110>
>>> d = numpy.array([[1,2,3]], dtype="int32")
>>> d.data
<memory at 0x7f02e863ae48>
>>> d.data
<memory at 0x7f02e863a9e8>
>>> d.data
<memory at 0x7f02e863aac8>
>>> d.data
<memory at 0x7f02e863ae48>
>>> d.data
<memory at 0x7f02e863a9e8>
>>> d.data
<memory at 0x7f02e863aac8>
>>> print(d.data)
<memory at 0x7f02e863ae48>
>>> print(d.data)
<memory at 0x7f02e863a9e8>
>>> print(d.data)
<memory at 0x7f02e863ae48>
>>> print(d.data)
<memory at 0x7f02e863a9e8>
>>> print(d.data)
<memory at 0x7f02e863ae48>
>>> b = numpy.matrix([[1,2,3],[4,5,6]], dtype="int32")
>>> b.data
<memory at 0x7f02e863a9e8>
>>> b.data
<memory at 0x7f02e863ae48>
>>> b.data
<memory at 0x7f02e863aac8>
>>> b.data
<memory at 0x7f02e863a9e8>
>>> b.data
<memory at 0x7f02e863ae48>
>>> print(b.data)
<memory at 0x7f02e863aac8>
>>> print(b.data)
<memory at 0x7f02e863a9e8>
>>> print(b.data)
<memory at 0x7f02e863aac8>
>>> print(b.data)
<memory at 0x7f02e863a9e8>
>>> print(b.data)
<memory at 0x7f02e863aac8>
>>> c = numpy.matrix([[1,2,3],[4,5,6],[7,8,9]], dtype="int32")
>>> c.data
<memory at 0x7f02e863aac8>
>>> c.data
<memory at 0x7f02e863a9e8>
>>> c.data
<memory at 0x7f02e863ae48>
>>> c.data
<memory at 0x7f02e863aac8>
>>> c.data
<memory at 0x7f02e863ae48>
>>> c.data
<memory at 0x7f02e863a9e8>
>>> c.data
<memory at 0x7f02e863aac8>
>>> print(c.data)
<memory at 0x7f02e863ae48>
>>> print(c.data)
<memory at 0x7f02e863a9e8>
>>> print(c.data)
<memory at 0x7f02e863ae48>
>>> print(c.data)
<memory at 0x7f02e863a9e8>
>>> print(c.data)
<memory at 0x7f02e863ae48>
>>> e = numpy.array([[[0,1],[2,3]],[[4,5],[6,7]]], dtype="int32")
>>> e.data
<memory at 0x7f8ca0fe1048>
>>> e.data
<memory at 0x7f8ca0fe1140>
>>> e.data
<memory at 0x7f8ca0fe1048>
>>> e.data
<memory at 0x7f8ca0fe1140>
>>> e.data
<memory at 0x7f8ca0fe1048>
>>> print(e.data)
<memory at 0x7f8ca0fe1048>
>>> print(e.data)
<memory at 0x7f8ca0fe1048>
>>> print(e.data)
<memory at 0x7f8ca0fe1048>
</code></pre>
<hr>
<p><strong>ORIGINAL POST</strong></p>
<p>I was under the impression that merely entering a variable alone in the Python console will echo a string simply describing its value (and type). It formats in a different manner than print(), but I assumed the values they both returned would be the same.</p>
<p>When I try to output the address of the data pointer object of a numpy object, just entering the variable gives me a different value every other time, while print() gives the same value.</p>
<p>That suggests that the differences between the two operations aren't just in how the output is formatted, but also in where they get their information from. But what exactly do these additional differences consist of?</p>
<pre><code>>>> a = numpy.array([0,1,2])
>>> a
array([0, 1, 2])
>>> print(a)
[0 1 2]
>>> print(a.data)
<memory at 0x7ff25120c110>
>>> print(a.data)
<memory at 0x7ff25120c110>
>>> print(a.data)
<memory at 0x7ff25120c110>
>>> a.data
<memory at 0x7ff25120c110>
>>> a.data
<memory at 0x7ff253099818>
>>> a.data
<memory at 0x7ff25120c110>
>>> a.data
<memory at 0x7ff253099818>
>>> a.data
<memory at 0x7ff25120c110>
</code></pre>
| 3 | 2016-08-19T09:56:09Z | 39,037,372 | <p>The <code>memoryview</code> returned by <code>a.data</code> seems to alternate between two (or more) views. If you store a given instance of <code>a.data</code>, you get consistent output:</p>
<pre><code>>>> a.data
<memory at 0x7fb88ea1f828>
>>> a.data
<memory at 0x7fb88e98c4a8>
>>> t = a.data
>>> a.data
<memory at 0x7fb88e98ce48>
>>> a.data
<memory at 0x7fb88e98c3c8>
>>> a.data
<memory at 0x7fb88e98c4a8>
>>> a.data
<memory at 0x7fb88e98ce48>
>>> a.data
<memory at 0x7fb88e98c3c8>
>>> a.data
<memory at 0x7fb88e98c4a8>
>>> t
<memory at 0x7fb88ea1f828>
>>> t
<memory at 0x7fb88ea1f828>
>>> t
<memory at 0x7fb88ea1f828>
</code></pre>
<p>Note that there are 3 addresses rotating in the above example; I'm pretty sure this is all an implementation detail. I would guess that some caching is involved, implying that a new view is not actually generated each time you access <code>a.data</code>.</p>
<p>You can also make certain that you are looking at separate view objects:</p>
<pre><code>>>> id(a.data)
140430643088968
>>> id(a.data)
140430643086280
>>> id(a.data)
140430643088968
>>> id(a.data)
140430643086280
</code></pre>
<p>So most of the confusion probably comes from the fact that the attribute notation of <code>a.data</code> would suggest that it's a fixed object we're talking about, while this is not the case.</p>
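<p>One way to check this explanation (a sketch, assuming CPython's reference counting): keep every view alive at the same time, and the address reuse disappears:</p>

```python
import numpy as np

a = np.array([0, 1, 2])
# Hold all five views simultaneously so none can be garbage-collected:
held = [a.data for _ in range(5)]
# While they are all alive, every view must occupy a distinct address:
assert len({id(m) for m in held}) == 5
```

<p>The two or three rotating addresses seen in the console are therefore just freed slots being handed straight back out by the allocator.</p>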
| 1 | 2016-08-19T10:59:25Z | [
"python",
"pointers",
"numpy"
] |
Scrapy : XPath error: Invalid expression in //media:content | 39,036,100 | <p>I want to extract content from a News Site RSS Feed with Item like below </p>
<pre><code><item>
<title>BPS: Kartu Bansos Bantu Turunkan Angka Gini Ratio</title>
<media:content url="/image.jpg" expression="full" type="image/jpeg"/> </item>
</code></pre>
<p>but an error is raised when parsing tags like <code>media:content</code> with an XPath expression like <strong>item.xpath('//media:content')</strong></p>
<pre><code>Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/parsel/selector.py", line 183, in xpath
six.reraise(ValueError, ValueError(msg), sys.exc_info()[2])
File "/usr/local/lib/python2.7/site-packages/parsel/selector.py", line 179, in xpath
smart_strings=self._lxml_smart_strings)
File "src/lxml/lxml.etree.pyx", line 1587, in lxml.etree._Element.xpath (src/lxml/lxml.etree.c:57923)
File "src/lxml/xpath.pxi", line 307, in lxml.etree.XPathElementEvaluator.__call__ (src/lxml/lxml.etree.c:167084)
File "src/lxml/xpath.pxi", line 227, in lxml.etree._XPathEvaluatorBase._handle_result (src/lxml/lxml.etree.c:166043)
ValueError: XPath error: Undefined namespace prefix in //media:content
</code></pre>
<p>Does anybody know what I should do? Thanks :) </p>
| 0 | 2016-08-19T09:56:32Z | 39,036,265 | <p>You need to tell xpath which namespace the <code>media</code> prefix is mapped to by calling the <a href="https://parsel.readthedocs.io/en/latest/usage.html#parsel.selector.Selector.register_namespace" rel="nofollow"><code>register_namespace(prefix, namespace)</code></a> on the selector first, e.g:</p>
<pre><code>selector.register_namespace('media', 'http://the.namespace.of/media')
</code></pre>
<p>or if you only want to use the local name, you can use:</p>
<pre><code> item.xpath("//*[local-name()='content']")
</code></pre>
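<p>The same idea can be sketched with the standard library's <code>ElementTree</code>; note the namespace URI below is the Media RSS namespace, but what matters is using whatever URI the feed itself declares for the <code>media</code> prefix:</p>

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment modelled on the question's feed:
xml = '''<item xmlns:media="http://search.yahoo.com/mrss/">
  <title>BPS: Kartu Bansos Bantu Turunkan Angka Gini Ratio</title>
  <media:content url="/image.jpg" expression="full" type="image/jpeg"/>
</item>'''

item = ET.fromstring(xml)
# map the prefix to the namespace URI declared in the document
ns = {'media': 'http://search.yahoo.com/mrss/'}
content = item.find('media:content', ns)
print(content.get('url'))  # /image.jpg
```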
| 2 | 2016-08-19T10:04:57Z | [
"python",
"xpath",
"scrapy",
"web-crawler"
] |
python list of strings parsing | 39,036,115 | <p>I have a list of strings representing dates:</p>
<pre><code>>>> dates
['14.9.2016',
'13.9.2016',
'12.9.2016',
'11.9.2016',
'10.9.2016',
'9.9.2016',
'8.9.2016',
'7.9.2016',
'6.9.2016',
'5.9.2016']
</code></pre>
<p>I need to zero-pad days & months and I cannot use standard calendar methods due to "artificial dates" I need to work with ("29.2.2015" for example)</p>
<p>The following seems to work:</p>
<pre><code>>>> parsed_dates = []
>>> for l in [d.split('.') for d in dates]:
>>> parsed_dates.append('.'.join([i.zfill(2) for i in l]))
>>> parsed_dates
['14.09.2016',
'13.09.2016',
'12.09.2016',
'11.09.2016',
'10.09.2016',
'09.09.2016',
'08.09.2016',
'07.09.2016',
'06.09.2016',
'05.09.2016']
</code></pre>
<p>is it possible to achieve the same result using a single list comprehension? or some other, more elegant way?
I have tried following, but cannot find a way to concatenate single list items to form date strings again...</p>
<pre><code>>>> [i.zfill(2) for l in [d.split('.') for d in dates] for i in l]
['14',
'09',
'2016',
'13',
'09',
'2016',
'12',
'09',
.
.
.
</code></pre>
| 2 | 2016-08-19T09:57:40Z | 39,036,170 | <p>Sure, just inline the expression you pass to the <code>parsed_dates.append()</code> call, with <code>l</code> substituted with <code>d.split('.')</code> from your <code>for</code> loop:</p>
<pre><code>['.'.join([i.zfill(2) for i in d.split('.')]) for d in dates]
</code></pre>
<p>Demo:</p>
<pre><code>>>> ['.'.join([i.zfill(2) for i in d.split('.')]) for d in dates]
['14.09.2016', '13.09.2016', '12.09.2016', '11.09.2016', '10.09.2016', '09.09.2016', '08.09.2016', '07.09.2016', '06.09.2016', '05.09.2016']
</code></pre>
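<p>As an alternative sketch, a single regular expression can zero-pad every standalone digit in one pass; it performs no calendar validation at all, which is exactly what the "artificial dates" requirement needs:</p>

```python
import re

dates = ['14.9.2016', '9.9.2016', '29.2.2015']
# \b(\d)\b matches a digit run of length exactly one; prepend a zero to it
padded = [re.sub(r'\b(\d)\b', r'0\1', d) for d in dates]
print(padded)  # ['14.09.2016', '09.09.2016', '29.02.2015']
```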
| 7 | 2016-08-19T10:00:12Z | [
"python",
"list-comprehension"
] |
How to make a Selenium script faster? | 39,036,137 | <p>I use python Selenium and Scrapy for crawling a website.</p>
<p>but my script is so slow, </p>
<pre><code>Crawled 1 pages (at 1 pages/min)
</code></pre>
<p>I use CSS selectors instead of XPath to optimise the time.
I also changed the middlewares</p>
<pre><code>'tutorial.middlewares.MyCustomDownloaderMiddleware': 543,
</code></pre>
<p>Is Selenium too slow, or should I change something in the settings? </p>
<p>my Code:</p>
<pre><code>def start_requests(self):
yield Request(self.start_urls, callback=self.parse)
def parse(self, response):
display = Display(visible=0, size=(800, 600))
display.start()
driver = webdriver.Firefox()
driver.get("http://www.example.com")
inputElement = driver.find_element_by_name("OneLineCustomerAddress")
inputElement.send_keys("75018")
inputElement.submit()
catNums = driver.find_elements_by_css_selector("html body div#page div#main.content div#sContener div#menuV div#mvNav nav div.mvNav.bcU div.mvNavLk form.jsExpSCCategories ul.mvSrcLk li")
#INIT
driver.find_element_by_css_selector(".mvSrcLk>li:nth-child(1)>label.mvNavSel.mvNavLvl1").click()
for catNumber in xrange(1,len(catNums)+1):
print "\n IN catnumber \n"
driver.find_element_by_css_selector("ul#catMenu.mvSrcLk> li:nth-child(%s)> label.mvNavLvl1" % catNumber).click()
time.sleep(5)
self.parse_articles(driver)
pages = driver.find_elements_by_xpath('//*[@class="pg"]/ul/li[last()]/a')
if(pages):
page = driver.find_element_by_xpath('//*[@class="pg"]/ul/li[last()]/a')
checkText = (page.text).strip()
if(len(checkText) > 0):
pageNums = int(page.text)
pageNums = pageNums - 1
for pageNumbers in range (pageNums):
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "waitingOverlay")))
driver.find_element_by_css_selector('.jsNxtPage.pgNext').click()
self.parse_articles(driver)
time.sleep(5)
def parse_articles(self,driver) :
test = driver.find_elements_by_css_selector('html body div#page div#main.content div#sContener div#sContent div#lpContent.jsTab ul#lpBloc li div.prdtBloc p.prdtBDesc strong.prdtBCat')
def between(self, value, a, b):
pos_a = value.find(a)
if pos_a == -1: return ""
pos_b = value.rfind(b)
if pos_b == -1: return ""
adjusted_pos_a = pos_a + len(a)
if adjusted_pos_a >= pos_b: return ""
return value[adjusted_pos_a:pos_b]
</code></pre>
| 0 | 2016-08-19T09:58:22Z | 39,042,044 | <p>So your code has a few flaws here. </p>
<ol>
<li>You use selenium to parse the page contents when scrapy Selectors are faster and more efficient. </li>
<li>You start a webdriver for every response. </li>
</ol>
<p>This can be resolved very elegantly by using scrapy's <code>Downloader middlewares</code>!
You want to create a custom downloader middleware that downloads requests using selenium rather than the scrapy downloader.</p>
<p>For example I use this:</p>
<pre><code># middlewares.py
class SeleniumDownloader(object):
def create_driver(self):
"""only start the driver if middleware is ever called"""
if not getattr(self, 'driver', None):
self.driver = webdriver.Chrome()
def process_request(self, request, spider):
# this is called for every request, but we don't want to render
# every request in selenium, so use meta key for those we do want.
if not request.meta.get('selenium', False):
return request
self.create_driver()
self.driver.get(request.url)
return HtmlResponse(request.url, body=self.driver.page_source, encoding='utf-8')
</code></pre>
<p>Activate your middleware:</p>
<pre><code># settings.py
DOWNLOADER_MIDDLEWARES = {
'myproject.middleware.SeleniumDownloader': 13,
}
</code></pre>
<p>Then in your spider you can specify which urls to download via selenium driver by adding a meta argument.</p>
<pre><code># you can start with selenium
def start_requests(self):
for url in self.start_urls:
yield scrapy.Request(url, meta={'selenium': True})
def parse(self, response):
# this response is rendered by selenium!
# also can use no selenium for another response if you wish
url = response.xpath("//a/@href")
yield scrapy.Request(url)
</code></pre>
<p>The advantage of this approach is that your driver is started only once and used only to download the page source; the rest is left to proper asynchronous scrapy tools.<br>
The disadvantage is that you cannot click buttons and such, since you are not exposed to the driver. Most of the time you can reverse engineer what the buttons do via the network inspector, and you should never need to do any clicking with the driver itself.</p>
| 0 | 2016-08-19T14:52:34Z | [
"python",
"selenium",
"web-scraping",
"scrapy",
"middleware"
] |
Storage Format for AND-OR-tree in Python 2.7 | 39,036,186 | <p>I am doing some work with decision trees at the moment, where I am using AND-OR trees as representation. I am looking for a suitable storage format for these kind of trees.</p>
<p>The node which starts with "t" is an OR node, the node starts with a "c" is an (ordered!) AND node. The leaves always start with "p".</p>
<p>Originally, every node contains of two parts: a node name and a node description.</p>
<p>The pictures show two different representations of the same decision tree. Basically I need both representations, or rather an easy and fast way to convert them into each other.
<a href="http://i.stack.imgur.com/xvPE1.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/xvPE1.jpg" alt="enter image description here"></a>
<a href="http://i.stack.imgur.com/Ip0Ex.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Ip0Ex.jpg" alt="enter image description here"></a></p>
<p><strong>My thoughts so far</strong></p>
<p><em>Datatype:</em></p>
<p>Dict: It would be necessary to use the ordered dict. In addition, it would be easier to store the name and the description.</p>
<p>Lists would be good because the order is defined, however I don't know how to save the node name and description in a good way.</p>
<p><em>Node link:</em></p>
<p>Basically, it would be possible to use both kind of datatypes. However I don't know how to link the nodes to each other. Should I use keyword (such as "AND" and "OR") or should I nest the nodes?</p>
<p>Would be very thankful for any input.</p>
| 4 | 2016-08-19T10:01:00Z | 39,039,111 | <p>If you use lists, you can link them together by nesting.
Something like:</p>
<pre><code>['t1', ['c1', 'p1', ['c2', 'p2', 'p3']], ['c3', ['c4', 'p4', 'p5'], ['t2', ['c5', 'p6', 'p7'], 'p8']]]
</code></pre>
<p>The basic structure would be:</p>
<pre><code>BASE := [node_name, left, right]
</code></pre>
<p>where:</p>
<ul>
<li><code>node_name</code> is (in this example) a string and</li>
<li><code>left</code> and <code>right</code> are
either a <code>BASE</code> (which implicitly means they are either a <code>c</code> or <code>t</code>
type of node) or a <code>node name</code> (which implicitly means they are leaf
nodes)</li>
</ul>
<p>Nothing stops you from using <code>OrderedDict</code>s but the creation gets a bit more cumbersome IMO:</p>
<pre><code>In [26]: from collections import OrderedDict as OD
In [27]: tree = OD((('name', 't1'), ('left', 'c1'), ('right', OD((('name', '...'),)) )))
In [28]: tree
Out[28]:
OrderedDict([('name', 't1'),
('left', 'c1'),
('right', OrderedDict([('name', '...')]))])
</code></pre>
<p>Finally, you can just create your own data structure:</p>
<pre><code>class Node:
type = ''
left = None
right = None
</code></pre>
| 0 | 2016-08-19T12:30:21Z | [
"python",
"python-2.7",
"tree",
"decision-tree"
] |
join values in sqlalchemy | 39,036,221 | <p>I would like to run following postgreSQL query in SQLalchemy:</p>
<pre><code>select c.*
from comments c
join (
values
(1,1),
(3,2),
(2,3),
(4,4)
) as x (id, ordering) on c.id = x.id
order by x.ordering
</code></pre>
<p>Is it possible to join something like list of lists or list of tuples and use them to provide ordering in SQLalchemy?</p>
| 0 | 2016-08-19T10:02:34Z | 39,038,772 | <pre><code>from sqlalchemy import *
from yourdbmodule import dbsession
VALUES = ((1, 1), (3, 2), (2, 3), (4, 4))
temp_table = Table(
'temp_table', MetaData(),
Column('id', INT, primary_key=True),
Column('ordering', INT),
prefixes=['TEMPORARY']
)
temp_table.create(bind=dbsession.bind, checkfirst=True)
dbsession.execute(temp_table.insert().values(VALUES))
# Now you can query it
dbsession.query(Comments)\
.join(temp_table, Comments.id == temp_table.c.id)\
.order_by(temp_table.c.ordering)\
.all()
</code></pre>
| 0 | 2016-08-19T12:10:48Z | [
"python",
"postgresql",
"sqlalchemy"
] |
join values in sqlalchemy | 39,036,221 | <p>I would like to run following postgreSQL query in SQLalchemy:</p>
<pre><code>select c.*
from comments c
join (
values
(1,1),
(3,2),
(2,3),
(4,4)
) as x (id, ordering) on c.id = x.id
order by x.ordering
</code></pre>
<p>Is it possible to join something like list of lists or list of tuples and use them to provide ordering in SQLalchemy?</p>
| 0 | 2016-08-19T10:02:34Z | 39,045,472 | <p>See the <a href="https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/PGValues" rel="nofollow">PGValues recipe</a> for how to make SQLAlchemy compile the <code>VALUES</code> clause:</p>
<pre><code>from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql import column
from sqlalchemy.sql.expression import FromClause
class values(FromClause):
named_with_column = True
def __init__(self, columns, *args, **kw):
self._column_args = columns
self.list = args
self.alias_name = self.name = kw.pop('alias_name', None)
def _populate_column_collection(self):
for c in self._column_args:
c._make_proxy(self)
@compiles(values)
def compile_values(element, compiler, asfrom=False, **kw):
columns = element.columns
v = "VALUES %s" % ", ".join(
"(%s)" % ", ".join(
compiler.render_literal_value(elem, column.type)
for elem, column in zip(tup, columns))
for tup in element.list
)
if asfrom:
if element.alias_name:
v = "(%s) AS %s (%s)" % (v, element.alias_name, (", ".join(c.name for c in element.columns)))
else:
v = "(%s)" % v
return v
>>> x = values([column("id", Integer), column("ordering", Integer)], (1, 1), (3, 2), (2, 3), (4, 4), alias_name="x")
>>> q = session.query(Comment).join(x, Comment.id == x.c.id).order_by(x.c.ordering)
>>> print(q)
SELECT comments.id AS comments_id
FROM comments JOIN (VALUES (1, 1), (3, 2), (2, 3), (4, 4)) AS x (id, ordering) ON comments.id = x.id ORDER BY x.ordering
</code></pre>
| 0 | 2016-08-19T18:19:24Z | [
"python",
"postgresql",
"sqlalchemy"
] |
What does it mean to sort an array/matrix by the argmax as the key? | 39,036,343 | <p>I am struggling to understand the mechanism behind a function around the topic of sorting in numpy.</p>
<pre><code> import numpy as np
arr = [[8, 5, 9],
[3, 9.5, 5], [5.5, 4, 3.5], [6, 2, 1],
[6,1,2],[3,2,1],[8,5,3]]
res = sorted(arr, key=np.argmax)
</code></pre>
<p>This gives me the following result:</p>
<pre><code> print(res)
[[5.5, 4, 3.5], [6, 2, 1], [6, 1, 2],
[3, 2, 1], [8, 5, 3], [3, 9.5, 5], [8, 5, 9]]
</code></pre>
<p>I am an R user and not very familiar with Python. I might have some clue about the role of the 'key' argument, but for this example specifically I ask for your help.
In a simple case, if the <code>key</code> argument is defined as a function which returns the first element, then <code>sorted</code> sorts the array based on its first element, but I cannot see how this works with <code>argmax</code>.
Thanks,</p>
| 4 | 2016-08-19T10:08:40Z | 39,036,469 | <p>The argmax function returns the indice of the biggest element. It is used as a key in the sort function.</p>
<p>If you print this:</p>
<pre><code>print([np.argmax(x) for x in arr])
</code></pre>
<p>you get:</p>
<pre><code>[2, 1, 0, 0, 0, 0, 0]
</code></pre>
<p>which explains the sorting. Last elements appear first in your result, first element appears last because it has the highest criteria, and second element appears just before.</p>
<p>Of course this is a "weak" sorting since the criterion often returns the same value, and thus the result depends on the order of the initial list (edit: this is called a stable sort, see Bakuriu's interesting comment)</p>
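<p>To see the stability in action with the data from the question:</p>

```python
import numpy as np

arr = [[8, 5, 9], [3, 9.5, 5], [5.5, 4, 3.5], [6, 2, 1]]
# keys are [2, 1, 0, 0]; the last two rows tie on the key, so Python's
# stable sort keeps them in their original relative order
res = sorted(arr, key=np.argmax)
print(res)  # [[5.5, 4, 3.5], [6, 2, 1], [3, 9.5, 5], [8, 5, 9]]
```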
| 2 | 2016-08-19T10:15:15Z | [
"python",
"numpy"
] |
What does it mean to sort an array/matrix by the argmax as the key? | 39,036,343 | <p>I am struggling to understand the mechanism behind a function around the topic of sorting in numpy.</p>
<pre><code> import numpy as np
arr = [[8, 5, 9],
[3, 9.5, 5], [5.5, 4, 3.5], [6, 2, 1],
[6,1,2],[3,2,1],[8,5,3]]
res = sorted(arr, key=np.argmax)
</code></pre>
<p>This gives me the following result:</p>
<pre><code> print(res)
[[5.5, 4, 3.5], [6, 2, 1], [6, 1, 2],
[3, 2, 1], [8, 5, 3], [3, 9.5, 5], [8, 5, 9]]
</code></pre>
<p>I am an R user and not very familiar with Python. I might have some clue about the role of the 'key' argument, but for this example specifically I ask for your help.
In a simple case, if the <code>key</code> argument is defined as a function which returns the first element, then <code>sorted</code> sorts the array based on its first element, but I cannot see how this works with <code>argmax</code>.
Thanks,</p>
| 4 | 2016-08-19T10:08:40Z | 39,036,526 | <p>np.argmax gives you the argument of the maximum value. In your example, it is acting on each individual list of 3 items, for example</p>
<pre><code>>>> np.argmax([8,5,3])
0
>>> np.argmax([1,2,3])
2
</code></pre>
| 0 | 2016-08-19T10:17:38Z | [
"python",
"numpy"
] |
Using reduce to shorten a for-loop | 39,036,375 | <p>From an array <code>Ns</code>, I'd like to derive an array <code>multipliers</code> as follows:</p>
<pre><code>Ns = [3, 3, 6, 3]
multipliers = [0]*len(Ns)
multipliers[0] = 1
for n in range(1,len(Ns)):
multipliers[n] = multipliers[n-1] * Ns[n-1]
</code></pre>
<p>The resulting array <code>multipliers</code> is <code>[1, 3, 9, 54]</code>. My gut feeling is that it should be possible to make this code more succinct using <code>reduce</code> or another built-in function, but I don't yet see how. Any ideas?</p>
| 1 | 2016-08-19T10:10:17Z | 39,036,609 | <p>You could use <a href="https://docs.python.org/3/library/itertools.html#itertools.accumulate" rel="nofollow"><code>itertools.accumulate</code></a> with a custom accumulation function (Python 3 only, if you want to use Python 2 you could install <a href="https://github.com/kachayev/fn.py" rel="nofollow"><code>fn.py</code></a> library (or similar) or backport the function using the implementation provided in the docs) :</p>
<pre><code>In [10]: from itertools import accumulate
In [11]: import operator
In [12]: list(accumulate([3, 3, 6, 3], func=operator.mul))
Out[12]: [3, 9, 54, 162]
</code></pre>
<p>And then just fix the first and the last elements:</p>
<pre><code>In [13]: l = list(accumulate([3, 3, 6, 3], func=operator.mul))
In [14]: [1] + l[:-1]
Out[14]: [1, 3, 9, 54]
</code></pre>
| 5 | 2016-08-19T10:22:33Z | [
"python"
] |
Using reduce to shorten a for-loop | 39,036,375 | <p>From an array <code>Ns</code>, I'd like to derive an array <code>multipliers</code> as follows:</p>
<pre><code>Ns = [3, 3, 6, 3]
multipliers = [0]*len(Ns)
multipliers[0] = 1
for n in range(1,len(Ns)):
multipliers[n] = multipliers[n-1] * Ns[n-1]
</code></pre>
<p>The resulting array <code>multipliers</code> is <code>[1, 3, 9, 54]</code>. My gut feeling is that it should be possible to make this code more succinct using <code>reduce</code> or another built-in function, but I don't yet see how. Any ideas?</p>
| 1 | 2016-08-19T10:10:17Z | 39,036,625 | <p><code>reduce</code> give you just the final result of the reducing process, so if you want all the intermediate values you can use list comprehension as followed:</p>
<pre><code>>>> [reduce(lambda x,y:x*y, Ns[:i], 1) for i in range(len(Ns))]
[1, 3, 9, 54]
</code></pre>
<p>but that isn't efficient since it reducing again and again for each sublist.</p>
| 2 | 2016-08-19T10:23:09Z | [
"python"
] |
Using reduce to shorten a for-loop | 39,036,375 | <p>From an array <code>Ns</code>, I'd like to derive an array <code>multipliers</code> as follows:</p>
<pre><code>Ns = [3, 3, 6, 3]
multipliers = [0]*len(Ns)
multipliers[0] = 1
for n in range(1,len(Ns)):
multipliers[n] = multipliers[n-1] * Ns[n-1]
</code></pre>
<p>The resulting array <code>multipliers</code> is <code>[1, 3, 9, 54]</code>. My gut feeling is that it should be possible to make this code more succinct using <code>reduce</code> or another built-in function, but I don't yet see how. Any ideas?</p>
| 1 | 2016-08-19T10:10:17Z | 39,036,711 | <p>You can simulate the behavior of <code>accumulate</code> (see @soon's answer) in Python 2 with <code>reduce</code>. You have to manage the list by yourself. </p>
<pre><code>from functools import reduce
Ns = [3, 3, 6, 3]
multipliers = reduce(lambda l, x: l + [l[-1] * x], Ns[:-1], [1])
</code></pre>
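<p>For the list from the question this reproduces the expected result:</p>

```python
from functools import reduce

Ns = [3, 3, 6, 3]
# seed with [1], then append the running product of all earlier elements
multipliers = reduce(lambda l, x: l + [l[-1] * x], Ns[:-1], [1])
print(multipliers)  # [1, 3, 9, 54]
```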
| 3 | 2016-08-19T10:28:02Z | [
"python"
] |
Using reduce to shorten a for-loop | 39,036,375 | <p>From an array <code>Ns</code>, I'd like to derive an array <code>multipliers</code> as follows:</p>
<pre><code>Ns = [3, 3, 6, 3]
multipliers = [0]*len(Ns)
multipliers[0] = 1
for n in range(1,len(Ns)):
multipliers[n] = multipliers[n-1] * Ns[n-1]
</code></pre>
<p>The resulting array <code>multipliers</code> is <code>[1, 3, 9, 54]</code>. My gut feeling is that it should be possible to make this code more succinct using <code>reduce</code> or another built-in function, but I don't yet see how. Any ideas?</p>
| 1 | 2016-08-19T10:10:17Z | 39,037,117 | <p>As my comment and other answers mention, this is easy in Python 3, using <a href="https://docs.python.org/3/library/itertools.html#itertools.accumulate" rel="nofollow"><code>itertools.accumulate</code></a>. However, from your previous questions it appears that you're using Python 2.</p>
<p>In Python, it's almost always better to iterate directly over a list rather than using indices. Your code could be rewritten like this. (I've changed your list name to <code>ns</code> to comply with the <a href="https://www.python.org/dev/peps/pep-0008" rel="nofollow">PEP 8 style guide</a>).</p>
<pre><code>ns = [3, 3, 6, 3]
multipliers = []
last = 1
for u in ns:
multipliers.append(last)
last *= u
print multipliers
</code></pre>
<p><strong>output</strong></p>
<pre><code>[1, 3, 9, 54]
</code></pre>
<p>Note that this code does an extra multiplication at the end, the result of which doesn't get appended to <code>multipliers</code>. Here's an alternative that's a little more compact. Instead of using the <code>last</code> variable it looks up the last element in <code>multipliers</code>, which is slightly less efficient, and while it doesn't do that extra multiplication it does need to create a new list when it slices <code>ns</code>.</p>
<pre><code>ns = [3, 3, 6, 3]
multipliers = [1]
for u in ns[:-1]:
multipliers.append(u * multipliers[-1])
print multipliers
</code></pre>
| 2 | 2016-08-19T10:47:19Z | [
"python"
] |
Embed a plot into GUI | 39,036,462 | <p>I found a website scikit-rf</p>
<p><a href="http://scikit-rf.readthedocs.io/en/latest/tutorials/plotting.html" rel="nofollow">http://scikit-rf.readthedocs.io/en/latest/tutorials/plotting.html</a></p>
<p>It can create smith plot by following code</p>
<pre><code>ntwk = rf.Network("my data path")
ntwk.plot_s_smith()
</code></pre>
<p>Then I want to embed it to my python gui.</p>
<pre><code>import sys
from PyQt4 import QtGui,QtCore
import matplotlib
import matplotlib.figure
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QT as NavigationToolbar
import skrf as rf
import numpy as np
class PrettyWidget(QtGui.QMainWindow):
def __init__(self):
super(PrettyWidget, self).__init__()
self.setGeometry(100,100,1200,700)
self.center()
self.setWindowTitle('NRW')
self.initUI()
def initUI(self):
grid = QtGui.QGridLayout()
self.setLayout(grid)
self.figure = matplotlib.figure.Figure()
self.canvas = FigureCanvas(self.figure)
self.toolbar = NavigationToolbar(self.canvas, self)
grid.addWidget(self.canvas, 3,0,2,12)
def plot1(self):
self.figure.clf()
self.figure.tight_layout()
self.canvas.draw()
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
app.aboutToQuit.connect(app.deleteLater)
GUI = PrettyWidget()
GUI.show()
app.exec_()
</code></pre>
<p>But I have no idea how to put or modify code below</p>
<pre><code>ntwk = rf.Network("my data path")
ntwk.plot_s_smith()
</code></pre>
<p>Would anyone tell me if you have any ideas? Thanks!</p>
| 1 | 2016-08-19T10:14:51Z | 39,036,725 | <p><a href="http://scikit-rf.readthedocs.io/en/latest/reference/generated/skrf.network.Network.plot_s_smith.html" rel="nofollow">Network.plot_s_smith</a> has an <code>ax</code> keyword to plot on existing <em>Axes</em> object. So you can modify your plot function like this:</p>
<pre><code>def plot1(self):
self.figure.clf()
ntwk = rf.Network("my data path")
ntwk.plot_s_smith(ax=self.figure.gca())
self.figure.tight_layout()
self.canvas.draw()
</code></pre>
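<p>The <code>ax=</code> keyword is the general Matplotlib recipe for drawing into a canvas you already own. As a minimal sketch of the same pattern that runs without Qt or scikit-rf installed (the headless Agg canvas stands in for <code>FigureCanvasQTAgg</code>, and the plain <code>plot</code> call stands in for <code>plot_s_smith</code>):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.figure
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas

figure = matplotlib.figure.Figure()
canvas = FigureCanvas(figure)

figure.clf()
ax = figure.gca()           # the Axes you would pass as ax=...
ax.plot([0, 1], [1, 0])     # stand-in for ntwk.plot_s_smith(ax=ax)
figure.tight_layout()
canvas.draw()
print(len(figure.axes))     # 1
```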
| 1 | 2016-08-19T10:28:40Z | [
"python",
"pyqt4"
] |
Is there a way to return a dataframe from Python to R ? | 39,036,561 | <p>I'm using the system() function to trigger a Python script from R,
The python script writes a data frame into a .csv file which then needs to be imported to R. I was wondering if I could directly return the data frame from Python to R.</p>
<p><strong>R</strong> </p>
<pre><code>command = "python"
path2script='"/Desktop/testing_connection.py"'
allArgs = path2script
output = system2(command, args = allArgs, stdout=TRUE)
</code></pre>
<p><strong>Python</strong></p>
<pre><code>df_temp = pd.read_csv('/Desktop/items.csv')
print(df)
</code></pre>
<p>I want to return a dataframe from Python to R
Currently 'output' is being created as a character vector.</p>
| 1 | 2016-08-19T10:20:01Z | 39,036,971 | <p>The <a href="https://blog.rstudio.org/2016/03/29/feather/" rel="nofollow">Feather</a> format may accelerate your code.</p>
<p>I am not aware of a way to have one dataframe in R that you can access directly from Python. The reason I doubt it is possible is that they are two different pieces of software, so that kind of in-memory data interoperability is really difficult to implement (if possible at all).</p>
| 0 | 2016-08-19T10:40:25Z | [
"python",
"dataframe",
"return"
] |
Is there a way to return a dataframe from Python to R ? | 39,036,561 | <p>I'm using the system() function to trigger a Python script from R,
The python script writes a data frame into a .csv file which then needs to be imported to R. I was wondering if I could directly return the data frame from Python to R.</p>
<p><strong>R</strong> </p>
<pre><code>command = "python"
path2script='"/Desktop/testing_connection.py"'
allArgs = path2script
output = system2(command, args = allArgs, stdout=TRUE)
</code></pre>
<p><strong>Python</strong></p>
<pre><code>df_temp = pd.read_csv('/Desktop/items.csv')
print(df)
</code></pre>
<p>I want to return a dataframe from Python to R
Currently 'output' is being created as a character vector.</p>
| 1 | 2016-08-19T10:20:01Z | 39,039,817 | <p>Install <a href="http://rpy2.bitbucket.org/" rel="nofollow">rpy</a>. <code>rpy</code> is a simple, easy-to-use interface to R from Python. It enables one to enjoy the elegance of Python programming while having access to the rich graphical and statistical capabilities of R.
The <code>feather</code> format as suggested by a commentator may also be of help if file I/O is mostly what you want it for.</p>
| 1 | 2016-08-19T13:05:06Z | [
"python",
"dataframe",
"return"
] |
Django+python: Use Order by to Django RawQuery objects? | 39,036,577 | <p>I have almost 100 million product names present in DB.I am displaying 100 products in UI each time & after scrolling showing next 100 & so on. For this I have used Django RawQuery as my database(mysql) doesn't support <strong>distinct functionality</strong>.</p>
<p>Here 'fetch' is callback function used in otherfile:</p>
<pre><code> def fetch(query_string, *query_args):
conn = connections['databaseName']
with conn.cursor() as cursor:
cursor.execute(query_string, query_args)
record = dictfetchall(cursor)
return record
</code></pre>
<p>Here is the main call in views.py
So sample raw query code snippet:</p>
<pre><code>record= fetch("select productname from abc")
</code></pre>
<p>Here if I am going to apply sorting criteria to the records </p>
<pre><code>record= fetch("select productname from abc orderby name ASC")
</code></pre>
<p>Same doing for descending as well. As a result it takes so much time to display the sorted products.</p>
<p>What I want is like I will query 1 time & will store in a python object then will start applying ascending or descending. </p>
<p>So that for the first time when it loads, it will take some time but then applying sorting criteria it won't go to database to apply sorting for each time sort is hitted.</p>
<p>Overall, I want to increase performance when sorting the records.</p>
| 0 | 2016-08-19T10:20:48Z | 39,036,909 | <p>I think what you are looking for is <a href="https://docs.djangoproject.com/en/1.10/topics/pagination/" rel="nofollow">pagination</a>. This is an essential technique when you want to display data in batches (pages). </p>
<pre><code>from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
def listing(request):
query_string = 'your query'
query_args = []
conn = connections['databaseName']
with conn.cursor() as cursor:
cursor.execute(query_string, *query_args)
all_records = dictfetchall(cursor)
paginator = Paginator(all_records, 100) # Show 100 records per page
page = request.GET.get('page')
try:
records = paginator.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
records = paginator.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), deliver last page of results.
records = paginator.page(paginator.num_pages)
return records
</code></pre>
<p>Whenever you make a request you should include the page you want to display (e.g. 1,2,3...) in the url parameters. </p>
<blockquote>
<p>Example GET <a href="http://localhost/products/?page=1" rel="nofollow">http://localhost/products/?page=1</a> </p>
</blockquote>
<p>In terms of logic, your javascript should display the first page and have a counter that hold the next page you should request, after the user scrolls make an AJAX request to get the second page and increase the page counter, etc...</p>
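<p>The clamping behavior that <code>Paginator</code> provides (and that the <code>try</code>/<code>except</code> above relies on) can be sketched in plain Python; <code>page_slice</code> is a hypothetical helper, not part of Django:</p>

```python
def page_slice(records, page, per_page=100):
    """Return the 1-based page's slice, clamping bad pages into range."""
    num_pages = max(1, -(-len(records) // per_page))  # ceiling division
    try:
        page = int(page)
    except (TypeError, ValueError):
        page = 1                            # non-integer -> first page
    page = min(max(page, 1), num_pages)     # out of range -> clamp
    start = (page - 1) * per_page
    return records[start:start + per_page]

records = list(range(250))
print(len(page_slice(records, 1)))     # 100
print(len(page_slice(records, 3)))     # 50 (last, partial page)
print(page_slice(records, "oops")[0])  # 0   (falls back to page 1)
print(page_slice(records, 9999)[0])    # 200 (clamped to the last page)
```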
<p>EDIT: As for the sorting matter you can also use javascript to sort your data</p>
| 1 | 2016-08-19T10:37:36Z | [
"python",
"django",
"sorting",
"django-orm"
] |
Django+python: Use Order by to Django RawQuery objects? | 39,036,577 | <p>I have almost 100 million product names present in DB.I am displaying 100 products in UI each time & after scrolling showing next 100 & so on. For this I have used Django RawQuery as my database(mysql) doesn't support <strong>distinct functionality</strong>.</p>
<p>Here 'fetch' is callback function used in otherfile:</p>
<pre><code> def fetch(query_string, *query_args):
conn = connections['databaseName']
with conn.cursor() as cursor:
cursor.execute(query_string, query_args)
record = dictfetchall(cursor)
return record
</code></pre>
<p>Here is the main call in views.py
So sample raw query code snippet:</p>
<pre><code>record= fetch("select productname from abc")
</code></pre>
<p>Here if I am going to apply sorting criteria to the records </p>
<pre><code>record= fetch("select productname from abc orderby name ASC")
</code></pre>
<p>Same doing for descending as well. As a result it takes so much time to display the sorted products.</p>
<p>What I want is like I will query 1 time & will store in a python object then will start applying ascending or descending. </p>
<p>So that for the first time when it loads, it will take some time but then applying sorting criteria it won't go to database to apply sorting for each time sort is hitted.</p>
<p>Overall, I want to increase performance when sorting the records.</p>
| 0 | 2016-08-19T10:20:48Z | 39,038,432 | <p>Below is my try & I got the desired answer.</p>
<p>I fetch the data from database & is stored in the form of array of dictionary in a list.</p>
<p>Assuming the data stored format in a list:</p>
<pre><code>l = [{'productName'='soap',id=1},{'productName'='Laptop',id=2}]
</code></pre>
<p>The code snippet below sorts the list depending on the chosen key:</p>
<pre><code>from operator import itemgetter
</code></pre>
<p><strong>for Ascending</strong></p>
<pre><code>res = sorted(l, key=itemgetter('productName'))
</code></pre>
<p><strong>For Descending</strong></p>
<pre><code>res = sorted(l, key=itemgetter('productName'), reverse=True)
</code></pre>
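<p>Putting both directions together as a runnable sketch (note that the default string comparison is case-sensitive, so <code>'Laptop'</code> sorts before <code>'soap'</code>):</p>

```python
from operator import itemgetter

l = [{'productName': 'soap', 'id': 1}, {'productName': 'Laptop', 'id': 2}]

asc = sorted(l, key=itemgetter('productName'))
desc = sorted(l, key=itemgetter('productName'), reverse=True)

print([d['productName'] for d in asc])   # ['Laptop', 'soap']
print([d['productName'] for d in desc])  # ['soap', 'Laptop']
```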
| 0 | 2016-08-19T11:51:35Z | [
"python",
"django",
"sorting",
"django-orm"
] |
Is there any function available to modify the content occuring after "?" in django? | 39,036,631 | <p>I am creating a middleware at my server end to modify the API calls in the following manner</p>
<pre>
http://example.com/xyz?<b>jdvnjvjnfjAA536sddjjfjfjdjbdfjdh656dnjd5ndjvb</b>
</pre>
<p>should be converted by the middleware to:</p>
<pre>
http://example.com/xyz?<b>user=Tango&testID=123</b>
</pre>
<p>where the earlier URL contains the encrypted string of the <code>user=Tango&testID=123</code>.</p>
<p>Hence simply put, the middleware decrypts the value and replaces it with the real string because i do not want to do decryption for over 50 API call view functions.</p>
<p>Is there any function available to modify the content occuring after "?" in django ?</p>
| 0 | 2016-08-19T10:23:26Z | 39,045,524 | <p>You can't modify the query-string from the server side. You basically have 2 options:</p>
<ol>
<li>issue a redirect response to tell the browser to request a different URL. See the Django docs on <a href="https://docs.djangoproject.com/en/1.10/topics/http/shortcuts/#redirect" rel="nofollow">redirect</a>.</li>
<li>embed the new URL in your response along with javascript to
rewrite the location using the HTML5 History API, <a href="http://stackoverflow.com/questions/3338642/updating-address-bar-with-new-url-without-hash-or-reloading-the-page">see this SO question</a>.</li>
</ol>
| 0 | 2016-08-19T18:22:41Z | [
"python",
"django",
"middleware"
] |
.Net dll function returns just first char of string in Python | 39,036,658 | <p>I want to call <strong>.NET</strong> dll in my python project. I want to use a function which returns string and gets string parameter. But somehow I cant get the string I can just get the first char of the string and sometimes it raises <strong>following error:</strong></p>
<pre><code>'charmap' codec can't encode characters in position 1-15: character maps to <undefined>
</code></pre>
<p><strong>EDIT</strong>: Solved the error by putting some piece of code. </p>
<pre><code>encode(sys.stdout.encoding, errors='replace')
</code></pre>
<p>But this time result is not exactly what I want. Result is : <strong>b'h\xe7\x8c\x80'</strong> with the input <strong>"hello"</strong> </p>
<p>Here is my code:</p>
<pre><code>import ctypes
import time
hllDll = ctypes.WinDLL ("C:\\Users\\yazilimhpaio2\\Downloads\\mydll.dll")
hllApiProto = ctypes.WINFUNCTYPE (
ctypes.c_wchar_p, # Return type.
ctypes.c_wchar_p # Parameter 1 ...
)
hllApiParams = (1,"p1",0),
hllApi = hllApiProto (("ReturnMyString", hllDll), hllApiParams)
var = "hello"
p1 = ctypes.c_wchar_p (var)
x = hllApi (p1)
print(x.encode(sys.stdout.encoding, errors='replace'))
time.sleep(3)
</code></pre>
<p><strong>ReturnMyString</strong> function gets string parameter and returns that parameter. But when I run this code it just prints first letter of my parameter.</p>
<p>I found out that <a href="https://docs.python.org/3.1/library/ctypes.html#fundamental-data-types" rel="nofollow">c_wchar_p is used for string</a> in python.
So I couldn't understand what is the problem with my code.</p>
<p>Any help would be appreciated..</p>
<p><strong>EDIT:</strong></p>
<p><strong>exported dll function:</strong></p>
<pre><code>[ComVisible(true)]
[DllExport("ReturnMyString", CallingConvention = System.Runtime.InteropServices.CallingConvention.StdCall)]
public static string ReturnMyString(string value)
{
return value;
}
</code></pre>
<p><strong>Prototype:</strong></p>
<pre><code>public static string ReturnMyString(string)
</code></pre>
| 0 | 2016-08-19T10:24:54Z | 39,125,557 | <p>If you are using Unmanaged Exports, its marshaling example says ".Net will marshal these strings as single-byte Ansi" by default. If so, use <code>c_char_p</code> and pass byte strings from Python.</p>
| 1 | 2016-08-24T14:04:51Z | [
"python",
".net",
"string",
"dll",
"ctypes"
] |
IPython parallel DirectView.apply does not run in parallel | 39,036,693 | <p>For some reason I cannot get IPython parallel function <code>DirectView.apply()</code> to really parallelize a function call. <code>DirectView.map()</code> works as expected. Am I doing something wrong here? A working example script</p>
<pre><code>import time
from datetime import datetime
from ipyparallel import Client, require
@require(time)
def wait(seconds=1):
time.sleep(seconds)
if __name__=='__main__':
client = Client()
print('engine ids: {}'.format(client.ids))
dview = client.direct_view((0, 1, 2, 3))
dview.block = False
print('view targets: {}'.format(dview.targets))
print('dview.apply...')
t0 = datetime.now()
results = [dview.apply(wait) for i in range(4)]
while len(results) > 0:
results.pop(0).get()
print('time: {}'.format(datetime.now() - t0))
print('dview.map... ')
t0 = datetime.now()
results = dview.map(wait, [1]*4)
print('time: {}'.format(datetime.now() - t0))
</code></pre>
<p>prints</p>
<pre><code>engine ids: [0, 1, 2, 3]
view targets: [0, 1, 2, 3]
dview.apply...
time: 0:00:04.021680
dview.map...
time: 0:00:01.013941
</code></pre>
<p>showing that <code>apply</code> clearly does not perform as I expected.</p>
<p>My system is Ubuntu 14.04, Python 3.4.3, IPython 4.2.0.</p>
| 0 | 2016-08-19T10:26:52Z | 39,038,524 | <p>Seems that I misinterpreted the documentation. <a href="https://ipython.org/ipython-doc/3/api/generated/IPython.parallel.client.view.html#IPython.parallel.client.view.View.apply" rel="nofollow" title="Here">Here</a> it says that <em><code>apply(f, *args, **kwargs)</code> calls <code>f(*args, **kwargs)</code> on remote engines, returning the result.</em></p>
<p>If I understand correctly, this means that the function is applied on all of the engines, instead of running it once on just one of them.</p>
| 0 | 2016-08-19T11:56:42Z | [
"python",
"parallel-processing",
"ipython",
"ipython-parallel"
] |
Python 3 get value by index from json | 39,036,801 | <p>I need to get a value from json file <strong><em>by index</em></strong> (not by key)</p>
<p>Here is my json file</p>
<pre><code>{
"db": {
"vendor": {
"product": {
"fruits": {
"price": {
"dollars": [
10,
2
],
"euros": [
11,
1.9
],
"pesos": [
16,
15
]
}
},
"vegatables": {
"price": {
"dollars": {
"0": 8,
"1": 2
},
"euros": {
"0": 10,
"1": 1.5
},
"pesos": {
"0": 15,
"1": 12
}
}
}
}
}
}
}
</code></pre>
<p>My goal - get values in dollars, euros and pesos for all "products" (in this json file it is fruits and vegatables only)</p>
<p>My code:</p>
<pre><code>import json
path_to_file = "C:\\Users\\admin\\Documents\\kolos\\data\\vegs.json";
with open(path_to_file) as data_file:
data = json.load(data_file)
lenght_of_products = (len(data["db"]["vendor"]["product"]))
for x in range(lenght_of_back):
print(data["db"]["vendor"]["product"][x]["price"]["dollars"])
</code></pre>
<p>And I got</p>
<pre><code>Traceback (most recent call last):
File "C:\\Users\\admin\\Documents\\kolos\\data\\vegs.json", line 12, in <module>
print(data["db"]["vendor"]["product"][x]["price"]["dollars"])
KeyError: 0
</code></pre>
<p><strong>The problem is in X variable</strong>. If I try without it, for example code below, it works.</p>
<pre><code>print(data["db"]["vendor"]["product"]["fruits"]["price"]["dollars"][0])
</code></pre>
<p>P.S</p>
<p>In my another program I use the similar code and it works great</p>
<pre><code> ...
for x in range(lenght_of_leagues):
print(data["leagues"][x]["id"])
...
</code></pre>
| 0 | 2016-08-19T10:32:13Z | 39,037,530 | <p>I think the input string is wrong.</p>
<pre><code>{
"d": {
"items": {
"vegatables": {
"spinach": 120,12,23,3445
}
}
}
}
</code></pre>
<p>The JSON format doesn't allow the following input; it must be changed to</p>
<pre><code>{
"d": {
"items": {
"vegatables": {
"spinach": [120,12,23,3445]
}
}
}
}
</code></pre>
<p>Then the following command will work fine</p>
<pre><code>print(data["d"]["items"]["vegatables"]["spinach"][0])
</code></pre>
| 0 | 2016-08-19T11:07:51Z | [
"python",
"json",
"python-3.x"
] |
Python 3 get value by index from json | 39,036,801 | <p>I need to get a value from json file <strong><em>by index</em></strong> (not by key)</p>
<p>Here is my json file</p>
<pre><code>{
"db": {
"vendor": {
"product": {
"fruits": {
"price": {
"dollars": [
10,
2
],
"euros": [
11,
1.9
],
"pesos": [
16,
15
]
}
},
"vegatables": {
"price": {
"dollars": {
"0": 8,
"1": 2
},
"euros": {
"0": 10,
"1": 1.5
},
"pesos": {
"0": 15,
"1": 12
}
}
}
}
}
}
}
</code></pre>
<p>My goal - get values in dollars, euros and pesos for all "products" (in this json file it is fruits and vegatables only)</p>
<p>My code:</p>
<pre><code>import json
path_to_file = "C:\\Users\\admin\\Documents\\kolos\\data\\vegs.json";
with open(path_to_file) as data_file:
data = json.load(data_file)
lenght_of_products = (len(data["db"]["vendor"]["product"]))
for x in range(lenght_of_back):
print(data["db"]["vendor"]["product"][x]["price"]["dollars"])
</code></pre>
<p>And I got</p>
<pre><code>Traceback (most recent call last):
File "C:\\Users\\admin\\Documents\\kolos\\data\\vegs.json", line 12, in <module>
print(data["db"]["vendor"]["product"][x]["price"]["dollars"])
KeyError: 0
</code></pre>
<p><strong>The problem is in X variable</strong>. If I try without it, for example code below, it works.</p>
<pre><code>print(data["db"]["vendor"]["product"]["fruits"]["price"]["dollars"][0])
</code></pre>
<p>P.S</p>
<p>In my another program I use the similar code and it works great</p>
<pre><code> ...
for x in range(lenght_of_leagues):
print(data["leagues"][x]["id"])
...
</code></pre>
| 0 | 2016-08-19T10:32:13Z | 39,039,595 | <p>Here's a version that should work (depending on what you expect it to do):</p>
<ul>
<li><p>I load the JSON with <code>object_pairs_hook=OrderedDict</code> to preserve the original key ordering (wrapping the already-built <code>dict</code> in <code>OrderedDict()</code> would be too late, since a plain <code>dict</code> has already lost the file order)</p></li>
<li><p>I added <code>.values()</code> to the dictionary which is being indexed into. This method returns the nested dictionaries, which can then be turned into a list and indexed into</p></li>
</ul>
<p>Here's the code:</p>
<pre><code>import json
from collections import OrderedDict

path_to_file = "test.json"
with open(path_to_file) as data_file:
    data = json.load(data_file, object_pairs_hook=OrderedDict)

length = len(data["db"]["vendor"]["product"])
for x in range(length):
    print(list(data["db"]["vendor"]["product"].values())[x]["price"]["dollars"])
</code></pre>
<p>Output is:</p>
<pre><code>[10, 2]
OrderedDict([('0', 8), ('1', 2)])
</code></pre>
<p>The reason the output is different is that your <code>fruits</code> and <code>vegetables</code> are stored differently (one is a list and the other is a dictionary). You'll have to make them the same if you want to be able to iterate over them in a similar fashion.</p>
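<p>A self-contained illustration of the order-preserving load, using a trimmed-down stand-in for the question's JSON (the explicit <code>list()</code> call makes the positional indexing work on both Python 2 and 3):</p>

```python
import json
from collections import OrderedDict

text = '{"fruits": {"price": [10, 2]}, "vegatables": {"price": [8, 2]}}'
data = json.loads(text, object_pairs_hook=OrderedDict)

products = list(data.values())   # positional access into the mapping
print(products[0]["price"])      # [10, 2]
print(products[1]["price"])      # [8, 2]
```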
| 1 | 2016-08-19T12:54:31Z | [
"python",
"json",
"python-3.x"
] |
Graphviz: give color to lines with rainbow effect | 39,036,808 | <p>I have dataframe and I build graph using <code>graphviz</code></p>
<pre><code>for id_key, group in df.groupby('ID'):
f = Digraph('finite_state_machine', filename='fsm.gv', encoding='utf-8')
f.body.extend(['rankdir=LR', 'size="5,5"'])
f.attr('node', shape='box')
for i in range(len(group)-1):
f.edge(str(group['category'].iloc[i]), str(group['category'].iloc[i+1]),
label=str(group['search_term'].iloc[i+1]))
f.render(filename=str(id_key))
</code></pre>
<p>and get this result
<a href="http://i.stack.imgur.com/2s2WN.png" rel="nofollow"><img src="http://i.stack.imgur.com/2s2WN.png" alt="image"></a>. How can I change lines color: first arrow - red, second - orange, third - yellow, etc?</p>
| 0 | 2016-08-19T10:32:27Z | 39,039,139 | <p>You can use one of the <a href="http://graphviz.org/doc/info/colors.html#brewer" rel="nofollow">brewer color schemes</a>. For example:</p>
<pre><code>g = graphviz.Digraph(format='png')
g.body.extend(["rankdir=LR"])
for i in range(9):
g.edge(str(i),str(i+1),color="/spectral9/"+str(i+1))
g.render(filename="example")
</code></pre>
<p>produce:</p>
<p><a href="http://i.stack.imgur.com/WE3gK.png" rel="nofollow"><img src="http://i.stack.imgur.com/WE3gK.png" alt="example"></a></p>
<p>If you wish to generate the colors yourself you can use the <a href="https://en.wikipedia.org/wiki/HSL_and_HSV" rel="nofollow">hsv format</a> with constants <em>saturation</em> & <em>value</em> and increasing <em>hue</em>:</p>
<pre><code>n = 20
g = graphviz.Digraph(format='png')
g.body.extend(["layout=circo"])
for i in range(n):
g.edge(str(i),str(i+1),color="{h:} 1 1".format(h=i/n))
g.edge(str(n),str(0),color="1 1 1")
g.render(filename="example")
</code></pre>
<p>produce:</p>
<p><a href="http://i.stack.imgur.com/rQcPy.png" rel="nofollow"><img src="http://i.stack.imgur.com/rQcPy.png" alt="example2"></a></p>
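<p>If you prefer explicit RGB values over Graphviz's HSV string form, the same hue sweep can be done with the standard library's <code>colorsys</code>; <code>rainbow</code> is a hypothetical helper that returns <code>#rrggbb</code> strings usable as edge colors:</p>

```python
import colorsys

# Sweep the hue at full saturation/value to get n rainbow colors
# as hex strings, e.g. for color="#rrggbb" edge attributes.
def rainbow(n):
    colors = []
    for i in range(n):
        r, g, b = colorsys.hsv_to_rgb(i / float(n), 1.0, 1.0)
        colors.append("#{0:02x}{1:02x}{2:02x}".format(
            int(round(r * 255)), int(round(g * 255)), int(round(b * 255))))
    return colors

cols = rainbow(6)
print(cols[0])  # #ff0000 (hue 0 is red)
print(cols[2])  # #00ff00 (hue 1/3 is green)
```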
| 2 | 2016-08-19T12:31:59Z | [
"python",
"pandas",
"graphviz"
] |
How to embed a bokeh plot in web2py | 39,036,835 | <p>I have wasted quite some time and am unable to embed a bokeh plot in my web2py app.</p>
<p>My current code:</p>
<pre><code>def plot():
from bokeh.plotting import figure
from bokeh.resources import CDN
from bokeh.embed import file_html
plot = figure()
plot.circle([1,2], [3,4])
html = file_html(plot, CDN, "my plot")
return (html)
</code></pre>
<p>But nothing happens. I would be grateful for any type of example; it doesn't have to be anything special. Just a simple graph.</p>
<p>Kind regards</p>
| 0 | 2016-08-19T10:33:44Z | 39,083,164 | <p>In your code, <code>html</code> is a string (of HTML markup). When a web2py action returns a string, that string is returned directly to the browser. If you are attempting to load that HTML as a complete web page, it will not work, as the Bokeh <code>file_html</code> function simply produces a <code><script></code> tag with Javascript code. It only works if you embed it within a complete HTML page and load the Bokeh Javascript and CSS files in the page. For further details, please see the relevant <a href="http://bokeh.pydata.org/en/latest/docs/user_guide/embed.html" rel="nofollow">Bokeh documentation</a>.</p>
<p>To make this work in web2py, you can use <code>response.files</code> to include the necessary Bokeh Javascript and CSS files, and you can embed the Bokeh-generated script tag in a view.</p>
<pre><code>def plot():
from bokeh.plotting import figure
from bokeh.resources import CDN
from bokeh.embed import file_html
response.files.extend(list_of_Bokeh_JS_and_CSS_static_file_URLs)
plot = figure()
plot.circle([1,2], [3,4])
html = file_html(plot, CDN, "my plot")
return dict(bokeh_script=html)
</code></pre>
<p>It is up to you to specify the list of Bokeh JS and CSS files and ensure they are available (you can copy them to your web2py app's static folder and serve them from there or use the Bokeh CDN as shown in their docs).</p>
<p>Then in the view for the <code>plot</code> action (e.g., /views/default/plot.html):</p>
<pre><code>{{extend 'layout.html'}}
{{=XML(bokeh_script)}}
</code></pre>
<p>Note, when inserting a string of HTML markup directly into a web2py view, you must wrap it in <code>XML()</code> to prevent web2py from escaping the HTML.</p>
<p>Finally, assuming you have Python and Bokeh installed on your system, be sure to run web2py from source rather than using the Windows or OSX binaries, as the latter include their own Python interpreters and therefore cannot import libraries installed on your system.</p>
| 0 | 2016-08-22T15:14:59Z | [
"python",
"web2py",
"bokeh"
] |
Is it possible to make all modules import a module implicitly? | 39,036,874 | <p>I love the type hinting from Python 3, but I'm really tired of writing <code>from typing import *</code> in all the modules that I write.</p>
<p>Is it possible to wrap my app in a module, or anything, and implicitly import the module in all of my app modules?</p>
| 1 | 2016-08-19T10:35:54Z | 39,037,270 | <p>You could hijack the <code>builtins</code> module and put what you need there. That would make the code harder to maintain, as it would be harder to figure out where these globals are coming from, or if they are accidentally clobbered. To be clear, it's possible, but I recommend <em>not</em> doing this.</p>
<p>The main module would need to do something like this at the top. If it's not the first thing to happen in the program then other modules won't work properly. Import order shouldn't make a difference, so if someone messes with this and it break the program then it will be hard to figure out why.</p>
<pre><code>import typing # I assume you meant typing, not types
import builtins
vars(builtins).update({k: getattr(typing, k) for k in typing.__all__})
# Any module can now use the typing names without importing anything
T = TypeVar('T')  # TypeVar is available because it was injected into builtins
def f(x: T) -> T:
    return x
</code></pre>
| 0 | 2016-08-19T10:54:58Z | [
"python",
"python-3.x"
] |
Running Popen under WSGI with Python 2.7.11, ModWSGI 4.4.6 and Apache 2.4.20 returns empty | 39,036,880 | <p>I have a WSGI application that does this:</p>
<pre><code>p = Popen(["which", "java"], stdout=PIPE).stdout.readlines()
</code></pre>
<p>However it returns empty. My WSGI configuration is:</p>
<pre><code><VirtualHost *:80>
ServerAdmin my@myserver.com
WSGIPassAuthorization On
# Deploy as a daemon (avoids conflicts between other python apps).
WSGIDaemonProcess formshare python-path=/opt/formshare display-name=formshare processes=2 threads=15
WSGIScriptAlias /formshare /opt/formshare/src/formshare/extras/wsgi/formshare.wsgi process-group=formshare application-group=%{GLOBAL}
<Location "/formshare">
WSGIProcessGroup formshare
</Location>
#Output any errors and messages to these files
ErrorLog /var/log/httpd/formshare.error.log
CustomLog /var/log/httpd/formshare.custom.log combined
</VirtualHost>
</code></pre>
<p>Any idea why? I have tried different WSGI versions and WSGIRestrictStdout Off but nothing seems to work.</p>
| 0 | 2016-08-19T10:36:08Z | 39,037,457 | <p>The problem here was that I was starting Apache with su (I use Slackware), and under su java is not available. If I log in as root and start Apache, then everything works fine.</p>
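<p>When tracking down this kind of problem, it helps to inspect the environment the worker process actually sees instead of assuming it matches your shell; a small stdlib sketch (<code>shutil.which</code> needs Python 3.3+):</p>

```python
import os
import shutil

# Daemons started via su or init scripts often inherit a minimal
# environment, so print the PATH in effect and resolve java through it.
path = os.environ.get("PATH", "")
print("PATH is set:", bool(path))

java = shutil.which("java")   # None when java is not on this PATH
print("java on PATH:", java is not None)
```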
| 0 | 2016-08-19T11:04:41Z | [
"python",
"subprocess",
"mod-wsgi"
] |
python list as index for a nested list | 39,036,925 | <p>I have the following two lists:</p>
<pre><code>l1 = [2, 3, 2]
l2 = [0, [0, 1], [0, 1, 0, [0, 1, [0, 0]]]]
</code></pre>
<p>How can I use the first list as a tree index, in order to append an item to the second list, like doing</p>
<pre><code>l2[2][3][2].append(0)
</code></pre>
| 1 | 2016-08-19T10:38:25Z | 39,037,067 | <p>You could use <a href="https://docs.python.org/3/library/functools.html#functools.reduce" rel="nofollow"><code>reduce</code></a> function:</p>
<pre><code>In [1]: l1 = [2, 3, 2]
In [2]: l2 = [0, [0, 1], [0, 1, 0, [0, 1, [0, 0]]]]
In [3]: from functools import reduce
In [4]: reduce(lambda l, i: l[i], l1, l2)
Out[4]: [0, 0]
In [5]: l2[2][3][2]
Out[5]: [0, 0]
</code></pre>
| 1 | 2016-08-19T10:44:56Z | [
"python",
"list",
"tree",
"nested",
"append"
] |
python list as index for a nested list | 39,036,925 | <p>I have the following two lists:</p>
<pre><code>l1 = [2, 3, 2]
l2 = [0, [0, 1], [0, 1, 0, [0, 1, [0, 0]]]]
</code></pre>
<p>How can I use the first list as a tree index, in order to append an item to the second list, like doing</p>
<pre><code>l2[2][3][2].append(0)
</code></pre>
| 1 | 2016-08-19T10:38:25Z | 39,037,086 | <p>There is no standard way to do that, but this will work:</p>
<pre><code>from functools import reduce
from operator import getitem
def tree_index(tree, index):
return reduce(getitem, index, tree)
tree_index(l2, l1).append(0)
</code></pre>
<p>As a bonus, the <code>tree_index</code> function also works for dictionaries and any other mappings. For example:</p>
<pre><code>>>> adjs = {'apple': ['red', 'green'], 'swallow': ['african', 'european'] }
>>> tree_index(adjs, ['apples', 0])
'red'
</code></pre>
<p>On the other hand, <code>tree_index</code> will not work for assignments. This will not work:</p>
<pre><code>tree_index(l2, [1,1]) = 33 # throws SyntaxError
</code></pre>
<p>In order to assign to tree indices you need another function or partial indexing:</p>
<pre><code>tree_index(l2, [1])[1] = 33
</code></pre>
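<p>Assignment can be wrapped in a small helper along the same lines (<code>tree_set</code> is a hypothetical name, reusing the <code>reduce</code>/<code>getitem</code> idea; Python 3 only because of the starred unpacking):</p>

```python
from functools import reduce
from operator import getitem

def tree_set(tree, index, value):
    """Set tree[i0][i1]...[iN] = value."""
    *head, last = index
    reduce(getitem, head, tree)[last] = value

l2 = [0, [0, 1], [0, 1, 0, [0, 1, [0, 0]]]]
tree_set(l2, [1, 1], 33)
print(l2[1])  # [0, 33]
```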
| 3 | 2016-08-19T10:45:26Z | [
"python",
"list",
"tree",
"nested",
"append"
] |
python list as index for a nested list | 39,036,925 | <p>I have the following two lists:</p>
<pre><code>l1 = [2, 3, 2]
l2 = [0, [0, 1], [0, 1, 0, [0, 1, [0, 0]]]]
</code></pre>
<p>How can I use the first list as a tree index, in order to append an item to the second list, like doing</p>
<pre><code>l2[2][3][2].append(0)
</code></pre>
| 1 | 2016-08-19T10:38:25Z | 39,037,269 | <p>This should work</p>
<pre><code>def tree_append(tree, index_list, val=None):
    for index in index_list:
        tree = tree[index]
    tree.append(val)

l1 = [2, 3, 2]
l2 = [0, [0, 1], [0, 1, 0, [0, 1, [0, 0]]]]
tree_append(l2, l1, val=0)
>>> l2
[0, [0, 1], [0, 1, 0, [0, 1, [0, 0, 0]]]]
</code></pre>
| 1 | 2016-08-19T10:54:58Z | [
"python",
"list",
"tree",
"nested",
"append"
] |
python list as index for a nested list | 39,036,925 | <p>I have the following two lists:</p>
<pre><code>l1 = [2, 3, 2]
l2 = [0, [0, 1], [0, 1, 0, [0, 1, [0, 0]]]]
</code></pre>
<p>How can I use the first list as a tree index, in order to append an item to the second list, like doing</p>
<pre><code>l2[2][3][2].append(0)
</code></pre>
| 1 | 2016-08-19T10:38:25Z | 39,037,430 | <p>Do this only if you know what you're doing:</p>
<pre><code>l1 = [2, 3, 2]
l2 = [0, [0, 1], [0, 1, 0, [0, 1, [0, 0]]]]
str1=repr(l1)
str1=str1.replace(', ','][') # str1=[2][3][2]
eval('l2' + str1).append(0)  # same as l2[2][3][2].append(0); eval on repr(l2) would only index a copy
</code></pre>
| 0 | 2016-08-19T11:02:59Z | [
"python",
"list",
"tree",
"nested",
"append"
] |
python list as index for a nested list | 39,036,925 | <p>I have the following two lists:</p>
<pre><code>l1 = [2, 3, 2]
l2 = [0, [0, 1], [0, 1, 0, [0, 1, [0, 0]]]]
</code></pre>
<p>How can I use the first list as a tree index, in order to append an item to the second list, like doing</p>
<pre><code>l2[2][3][2].append(0)
</code></pre>
| 1 | 2016-08-19T10:38:25Z | 39,042,336 | <p>For some strange reason, I immediately thought of a recursive routine, but essentially it does does what Kostas is doing, I just find it easier to follow :</p>
<pre><code>def tree_appender(tree, location, thing):
sub_tree_index = location.pop(0)
if len(location) > 0:
tree_appender(tree[sub_tree_index], location, thing)
else:
tree[sub_tree_index].append(thing)
return
>>> l1 = [2, 3, 2]
>>> l2 = [0, [0, 1], [0, 1, 0, [0, 1, [0, 0]]]]
>>> python_file.tree_appender(l2, l1, "dave")
>>> l2
[0, [0, 1], [0, 1, 0, [0, 1, [0, 0, 'dave']]]]
</code></pre>
<p>It will fail of course if l1 was [0] for example as l2[0] is an int not a list. You could test for that and turn whatever is about to have something appended to it into a list if that was likely to be desired.</p>
| 0 | 2016-08-19T15:06:35Z | [
"python",
"list",
"tree",
"nested",
"append"
] |
How do I add user data to url in Django? | 39,037,021 | <p>I've got a function based view in Django called <code>userView</code></p>
<pre><code>@login_required(login_url='/login/')
def userView(request):
user = None
if request.user.is_authenticated():
user = request.user
user_id = user.pk
return render(request, "user_view.html", {'user': user})
</code></pre>
<p>and here's my URL for it</p>
<pre><code>urlpatterns = [
url(r'^user/', userView, name='user'),
]
</code></pre>
<p>After my user has logged in, I'd like him to see his pk number in the URL. I.e., if the user's PK is <code>3</code> and the user directs their browser to <code>www.myapp.com/user</code>, the URL in the address bar should change to <code>www.myapp.com/user/3/</code>. How do I make that happen? I'm aware I need to edit the RegEx for the URL into</p>
<pre><code>url(r'^user/(?P<user_id>[0-9]+)/$', userView, name='user')
</code></pre>
<p>but how do I pass the PK number to the URL?</p>
| 0 | 2016-08-19T10:43:03Z | 39,037,675 | <p>I am not a Django expert but my guess is you want to redirect the user to his 'home page' after he is logged in. That can be done using the <a href="https://docs.djangoproject.com/en/1.10/topics/http/shortcuts/#redirect" rel="nofollow"><code>redirect()</code></a> method </p>
<pre><code>@login_required(login_url='/login/')
def userLoginView(request):
if request.user.is_authenticated():
return redirect("/user/{0}".format(request.user.pk), user=request.user)
</code></pre>
<p>And then define a second view that will render the home page</p>
<pre><code>def userView(request, user_id=None, user=None):
return render(request, "user_view.html", {'user': user})
</code></pre>
<p>Also your url patterns should be as follows</p>
<pre><code>urlpatterns = [
url(r'^user/', userLoginView, name='userlogin'),
url(r'^user/(?P<user_id>[0-9]+)/$', userView, name='user')
]
</code></pre>
| 2 | 2016-08-19T11:14:56Z | [
"python",
"django"
] |
In Python: imports shenanigans (import subfolder.module in folder, use in subfolder/module2.py) | 39,037,069 | <h1>Â Ok, this is not very clear at all. Allow me to rephrase.</h1>
<p><strong>Quick note</strong>: <em>"Solved" while writing this question. Will accept <strong>answer</strong> with inputs <strong>regarding best practices</strong>.</em></p>
<p><strong>Original quick note</strong>: <em>This is probably a duplicate. I apparently couldn't phrase the question well enough to fit this situation. My apologies if it is.</em></p>
<p>First of all, to get you situated, here is my project's hierarchy as well as some relevant code:</p>
<pre><code>Project/
├── main.py
└── lib
    ├── __init__.py
    ├── A.py
    ├── B.py
    └── C.py
</code></pre>
<p><code>main.py</code>:</p>
<pre><code>## Imports
from lib.A import ClassA as clA
from lib.B import ClassB as clB
from lib.C import ClassC as clC
#
# ... Some other code ...
#
a = clA()
b = clB()
a.add_ClB_Inst_Ref(b)
</code></pre>
<p><code>A.py</code>:</p>
<pre><code>## Imports
if __name__ == "__main__":
from B import ClassB as clB
from C import ClassC as clC
class ClassA:
dict_Of_Weighted_ClB_Refs = {}
#
# ... Some other code (attributes, init, ...) ...
#
def add_ClB_Inst_Ref(self, ClB_Instance):
if isinstance(ClB_Instance, clB):
key = clB.ID
if key in self.dict_Of_Weighted_ClB_Refs:
self.dict_Of_Weighted_ClB_Refs[key] +=1
else:
self.dict_Of_Weighted_ClB_Refs[key] = 1
</code></pre>
<h2>Issue:</h2>
<p>The program crashes in the <code>add_ClB_Inst_Ref</code> method when checking that the method's parameter is an instance of the ClassB object with the following error:</p>
<pre><code>NameError: name 'clB' is not defined
</code></pre>
<p><em>"Well"</em>, you might say, <em>"of course it crashes, the <code>A.py</code> file has never heard of that fancy <code>clB</code>!"</em>.
And yes, this may be the problem, but when I try to import it inside this file, replacing the imports section with this:</p>
<pre><code>## Imports
from B import ClassB as clB
from C import ClassC as clC
</code></pre>
<p>I get the following error: <code>ImportError: No module named 'B'</code></p>
<h1>Question:</h1>
<p>How can I fix this? Your input on best practices would be most welcomed.</p>
<h2>Suspicions:</h2>
<p>All this leads me to these hypothesis:</p>
<ul>
<li>Modules imported in a file at the root of the project aren't globally available for the duration of the program's execution.</li>
<li><code>import</code> searches a match from where the initial script has been executed. (<strong>Note</strong>: This lead me to double check that I had tried it this way <code>from lib.B import ClassB as clB</code> just like in the main file; turns out I hadn't and this works... I get the logic, but it seems rather counter intuitive. See answer below for details.)</li>
<li>My project's architecture is flawed.</li>
</ul>
| 0 | 2016-08-19T10:44:57Z | 39,037,070 | <p>As noted in the question, here is what is needed to fix the issue of the unavailable module B in module A.</p>
<p>Simply replace the imports section in <code>A.py</code> with the following:</p>
<pre><code>## Imports
if __name__ == "__main__":
from B import ClassB as clB
from C import ClassC as clC
else:
from lib.B import ClassB as clB
from lib.C import ClassC as clC
</code></pre>
<p>This doesn't seem like the best possible solution to me, as it means that whenever the module is imported from somewhere other than the <code>lib</code> folder, it assumes this "somewhere" is the project's root folder, which could lead to some serious conflicts.</p>
<h3>Please write your own answer if you are able to provide a more general approach and/or best practices on this issue.</h3>
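For what it's worth, one commonly seen alternative to branching on `__name__` is to attempt one import and fall back on `ImportError`. The sketch below stubs a fake `lib` package into `sys.modules` purely so the snippet runs standalone; in the real project the modules would come from the package on disk:

```python
import sys
import types

# --- stub a fake "lib" package so this standalone sketch is runnable ---
lib = types.ModuleType("lib")
lib.__path__ = []  # mark it as a package
lib_b = types.ModuleType("lib.B")

class ClassB(object):
    pass

lib_b.ClassB = ClassB
sys.modules["lib"] = lib
sys.modules["lib.B"] = lib_b
# ----------------------------------------------------------------------

try:
    from B import ClassB as clB       # works when run directly inside lib/
except ImportError:
    from lib.B import ClassB as clB   # works when imported from the project root

print(clB is ClassB)  # True
```

In modern code the usual fix is an explicit relative import (`from .B import ClassB`) inside the package, which avoids the dual-path problem entirely.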
| 0 | 2016-08-19T10:44:57Z | [
"python",
"python-import"
] |
Django migration can't find GDALRaster | 39,037,351 | <p>I took over a project with Django, Django REST framework and Leaflet to store drawn path in a database. Installing Django in an <code>virtualenv</code> and trying to migrate it raises:</p>
<blockquote>
<p>File "D:\SHK\ElektroClean\venv\lib\site-packages\django\contrib\gis\db\backends\postgis\operations.py", line 7, in
from django.contrib.gis.gdal import GDALRaster
ImportError: cannot import name 'GDALRaster'</p>
</blockquote>
<p>In the directory <code>D:\SHK\ElektroClean\py27\Lib\site-packages\django\contrib\gis\gdal</code> there is a folder called raster. Is renaming this folder to GDALRaster the fix?</p>
<p>Anyone suggestions to fix this?</p>
| 0 | 2016-08-19T10:58:32Z | 39,276,546 | <p>solved did mistakes on installing GDAL -.-</p>
| 0 | 2016-09-01T16:25:19Z | [
"python",
"django",
"gdal"
] |
ansible "the directory /usr/lib/ is not empty, refusing to convert it" | 39,037,353 | <p>I'm trying to create symlinks with ansible:</p>
<p><code>file: src=/usr/lib/x86_64-linux-gnu/{{ item.src }} dest=/usr/lib/ state=link force=yes
with_items:
- { src: 'libz.so'}
- { src: 'libfreetype.so'}
- { src: 'libjpeg.so'}</code></p>
<p>and I'm getting this:</p>
<p><code>failed: [192.168.2.2] (item={u'dest': u'/usr/lib/', u'src': u'libz.so'}) => {"failed": true, "gid": 0, "group": "root", "item": {"dest": "/usr/lib/", "src": "libz.so"}, "mode": "0755", "msg": "the directory /usr/lib/ is not empty, refusing to convert it", "owner": "root", "path": "/usr/lib/", "size": 4096, "state": "directory", "uid": 0}</code></p>
| 0 | 2016-08-19T10:58:35Z | 39,037,684 | <p>You are trying to link the whole <code>/usr/lib</code> directory to the file. </p>
<p>Append <code>{{ item.src }}</code> to your dest</p>
<pre><code>file: src=/usr/lib/x86_64-linux-gnu/{{ item.src }} dest=/usr/lib/{{ item.src }} state=link force=yes
with_items:
- { src: 'libz.so'}
- { src: 'libfreetype.so'}
- { src: 'libjpeg.so'}
</code></pre>
| 5 | 2016-08-19T11:15:13Z | [
"python",
"debian",
"ansible"
] |
What is wrong with my Neo4j query for Py2neo? | 39,037,427 | <p>This is my query for Py2neo for Neo4j database</p>
<pre><code>MATCH (u:User),(p:Prize),(ca:Category) CREATE (ch:Challenge {chid:'dassdshhhhasdasda',challenge_title:'Exm 2015', total_question_per_user:200,challenge_status:1,timestamp:'1471516538.4643',date:'2016-08-18'}), (p)-[:BELONG {rank:3}]->(ch),(ca)-[:BELONG {percentage_question:20}]->(ch) WHERE u.username = 'xyz@gmail.com' AND p.pid = 'e766d8cd-26d1-4848-ac97-15c233caa4d4' AND ca.catname = 'nature'
</code></pre>
<p>But when I run it manually in the Neo4j command line, it shows this error:</p>
<pre><code>Invalid input 'H': expected 'i/I' (line 1, column 287 (offset: 286))
"MATCH (u:User),(p:Prize),(ca:Category) CREATE (ch:Challenge {chid:'dassdshhhhasdasda',challenge_title:'Exm 2015', total_question_per_user:200,challenge_status:1,timestamp:'1471516538.4643',date:'2016-08-18'}), (p)-[:BELONG {rank:3}]->(ch),(ca)-[:BELONG {percentage_question:20}]->(ch) WHERE u.username = 'xyz@gmail.com' AND p.pid = 'e766d8cd-26d1-4848-ac97-15c233caa4d4' AND ca.catname = 'nature'"
</code></pre>
<p>I want to use <code>WHERE</code> clauses. Without <code>WHERE</code>, when I run the query like this, it works:</p>
<pre><code>MATCH (u:User {username:'xyz@gmail.com'}),(p:Prize{pid:'e766d8cd-26d1-4848-ac97-15c233caa4d4'}),(ca:Category {catname:'nature'}) CREATE (ch:Challenge {chid:'dassdsdjgjasdasdasda',challenge_title:'Exm 2015', total_question_per_user:200,challenge_status:1,timestamp:'1471516538.4643',date:'2016-08-18'}), (p)-[:BELONG {rank:3}]->(ch),(ca)-[:BELONG {percentage_question:20}]->(ch)
</code></pre>
| 0 | 2016-08-19T11:02:52Z | 39,040,041 | <p>You are using the <code>WHERE</code> clause in the wrong spot. The where clause needs to be used in conjunction with the <code>MATCH</code> statement and not the <code>CREATE</code>.</p>
<p>Something like this...</p>
<pre><code>MATCH (u:User),(p:Prize),(ca:Category)
WHERE u.username = 'xyz@gmail.com'
AND p.pid = 'e766d8cd-26d1-4848-ac97-15c233caa4d4'
AND ca.catname = 'nature'
CREATE (ch:Challenge {chid:'dassdshhhhasdasda',
challenge_title:'Exm 2015',
total_question_per_user:200,
challenge_status:1,
timestamp:'1471516538.4643',
date:'2016-08-18'})
,(p)-[:BELONG {rank:3}]->(ch)
,(ca)-[:BELONG {percentage_question:20}]->(ch)
</code></pre>
| 1 | 2016-08-19T13:15:26Z | [
"python",
"neo4j",
"py2neo"
] |
Gzip compression and decompression without any encoding | 39,037,445 | <p>I want to decompress a string in java which was gzip compressed in python.</p>
<p>Normally, I use base64 encoding on compressed string in python and then decode that compressed string before performing decompression in java. This works fine while using base64 encoding.</p>
<p>But is there a way to decompress a string in java which was gzip compressed in python without using base64 encoding.</p>
<p>Actually, I want to HTTP POST the compressed binary data to a server where the binary data gets decompressed. Here compression and the HTTP POST are done in Python and the server side is Java.</p>
<p>I tried this code without base64 encoding in Python, read the result in Java using a BufferedReader, and then converted the read compressed string into a byte[] using getBytes(), which is passed to GZIPInputStream for decompression. But this throws an exception: </p>
<pre><code>java.io.IOException: Not in GZIP format at
java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:154)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:75)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:85)
at GZipFile.gunzipIt(GZipFile.java:58)
at GZipFile.main(GZipFile.java:42)
</code></pre>
<p>Please give me a solution to perform compression and decompression without any encoding. Is there a way to send binary data in http post in python?</p>
<p><strong>This is the compression code in python:</strong></p>
<pre><code>import StringIO
import gzip
import base64
import os
m='hello'+'\r\n'+'world'
out = StringIO.StringIO()
with gzip.GzipFile(fileobj=out, mode="wb") as f:
f.write(m)
f=open('comp_dump','wb')
f.write(base64.b64encode(out.getvalue()))
f.close()
</code></pre>
<p><strong>This is the decompression code in java:</strong></p>
<pre><code>//$Id$
import java.io.*;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import javax.xml.bind.DatatypeConverter;
import java.util.Arrays;
public class GZipFile
{
public static String readCompressedData()throws Exception
{
String compressedStr ="";
String nextLine;
BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream("comp_dump")));
try
{
while((nextLine=reader.readLine())!=null)
{
compressedStr += nextLine;
}
}
finally
{
reader.close();
}
return compressedStr;
}
public static void main( String[] args ) throws Exception
{
GZipFile gZip = new GZipFile();
byte[] contentInBytes = DatatypeConverter.parseBase64Binary(readCompressedData());
String decomp = gZip.gunzipIt(contentInBytes);
System.out.println(decomp);
}
/**
* GunZip it
*/
public static String gunzipIt(final byte[] compressed){
byte[] buffer = new byte[1024];
StringBuilder decomp = new StringBuilder() ;
try{
GZIPInputStream gzis = new GZIPInputStream(new ByteArrayInputStream(compressed));
int len;
while ((len = gzis.read(buffer)) > 0) {
decomp.append(new String(buffer, 0, len));
}
gzis.close();
}catch(IOException ex){
ex.printStackTrace();
}
return decomp.toString();
}
}
</code></pre>
| 2 | 2016-08-19T11:03:57Z | 39,038,117 | <blockquote>
<p>Not every byte[] can be converted to a string, and the conversion back
could give other bytes.</p>
</blockquote>
<p>Please define the encoding explicitly when compressing and do the same when decompressing. Otherwise your <code>OS</code>, <code>JVM</code>, etc. will do it for you, and will probably mess it up. </p>
<p><strong>For example:</strong> on my Linux machine:</p>
<p><em>Python</em></p>
<pre><code>import sys
print sys.getdefaultencoding()
>> ascii
</code></pre>
<p><em>Java</em></p>
<pre><code>System.out.println(Charset.defaultCharset());
>> UTF-8
</code></pre>
<p>Related answer: <a href="http://stackoverflow.com/a/14467099/3014866">http://stackoverflow.com/a/14467099/3014866</a></p>
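A minimal Python 3 sketch of the explicit-encoding roundtrip (the Java side would mirror it by gunzipping the raw bytes and decoding with the same charset):

```python
import gzip
import io

message = "hello\r\nworld"

# Compress: encode the text explicitly before gzipping.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(message.encode("utf-8"))
compressed = buf.getvalue()

# Every gzip stream starts with the magic bytes 0x1f 0x8b --
# this is what Java's GZIPInputStream.readHeader() checks for.
print(compressed[:2] == b"\x1f\x8b")  # True

# Decompress: decode explicitly with the same charset.
restored = gzip.decompress(compressed).decode("utf-8")
print(restored == message)  # True
```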
| 0 | 2016-08-19T11:37:31Z | [
"java",
"python",
"base64",
"gzip",
"decompression"
] |
Bar plot width inconsistency | 39,037,446 | <p>Please don't be too harsh.
Could someone explain to me why the colors of the bars in these two plots are so different?</p>
<pre><code>import numpy as np
import datetime as dt
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams.update({'figure.autolayout': True})
plt.style.use('ggplot')
def plot3():
bal=np.cumsum(ret)
ind = np.arange(len(ret))
fig, ax = plt.subplots()
barlist=ax.bar(ind,ret,label="Return")
ax.plot(ind,bal,color='b',label="Balance")
for i in ind:
if ret[i]>=0:
barlist[i].set_color('g')
else:
barlist[i].set_color('r')
ax.legend(loc='best',frameon=False)
plt.show()
def plot3b():
bal=np.cumsum(ret)
ind = np.arange(len(ret))
fig, ax = plt.subplots()
colors=['g' if r>=0 else 'r' for r in ret]
ax.bar(ind,ret,color=colors,label="Return")
ax.plot(ind,bal,color='b',label="balance")
ax.legend(loc='best',frameon=False)
plt.show()
</code></pre>
<p>On my laptop, given </p>
<pre><code>n=100
ret=np.random.randn(n)
ret=np.insert(ret,0,0)
</code></pre>
<p>the plots are respectively</p>
<p><img src="http://i.stack.imgur.com/2mlxC.png" alt="plot3"></p>
<p>and</p>
<p><img src="http://i.stack.imgur.com/gqdzN.png" alt="plot3b"></p>
| 0 | 2016-08-19T11:04:04Z | 39,038,770 | <p>Bars have both <code>facecolor</code> and <code>edgecolor</code>, see the docs <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.bar" rel="nofollow">here</a>.</p>
<p>It seems that <code>set_color()</code> sets <em>both</em> edge and face colour of the bar, so in your first chart the bars are wider that in your second, where the edge colour has not been set</p>
<p>If you change:<br>
<code>ax.bar(ind,ret,color=colors,label="Return")</code></p>
<p>To:<br>
<code>ax.bar(ind,ret,color = colors, edgecolor = colors, label="Return")</code></p>
<p>...then both plots are the same: </p>
<p><strong>plot3()</strong>
<a href="http://i.stack.imgur.com/kMrIY.png" rel="nofollow"><img src="http://i.stack.imgur.com/kMrIY.png" alt="enter image description here"></a></p>
<p><strong>plot3b()</strong>
<a href="http://i.stack.imgur.com/cGFDd.png" rel="nofollow"><img src="http://i.stack.imgur.com/cGFDd.png" alt="enter image description here"></a></p>
<p>Please excuse the different spellings of color/colour in this post. Im in the UK, and it just feels wrong to spell colour without the 'u', so I've spelled it "correctly"* when not referencing a function argument.</p>
<p>*That's humour, BTW. So please don't start an American / British English war.</p>
| 0 | 2016-08-19T12:10:44Z | [
"python",
"matplotlib"
] |
Alternative of ST_DWithin of PostGIS in python shapely | 39,037,571 | <p>I have two LineStrings, line1 and line2:</p>
<pre><code>line1 = "LINESTRING(72.863221 18.782499,72.863736 18.770147,72.882275 18.756169,72.881417 18.750805,72.878842 18.736987,72.874379 18.709512,72.860989 18.679593,72.864422 18.653897)"
line2 = "LINESTRING(72.883133 18.780793,72.882103 18.760314,72.862534 18.716422,72.860474 18.683577)"
</code></pre>
<p>I'm trying to perform the following PostGIS query in Shapely. So far I haven't been able to find an alternative to the ST_DWithin command. </p>
<pre><code>road2 = "ST_GeographyFromText('SRID=4326;%s')"%line1
road4 = "ST_GeographyFromText('SRID=4326;%s')"%line2
cur.execute("SELECT ST_AsText(road1) from %s as road1,%s as road2
where ST_DWithin(road1,road2,500)"%(road2,road4))
res = cur.fetchall()
print res
</code></pre>
<p>Does anyone know what the alternative to ST_DWithin is in Shapely?</p>
| 1 | 2016-08-19T11:09:49Z | 39,408,449 | <p>As far as I know, shapely only supports operations in planar coordinates (no geography types). However, for LineStrings which are not too large so that the curvature of the globe can be neglected, one could partially "circumvent" this by:</p>
<ul>
<li><p>working in some planar projection (for example directly in the lat/lon or lon/lat coordinates) </p></li>
<li><p>following the second note in the documentation of <a href="http://postgis.net/docs/ST_DWithin.html" rel="nofollow"><code>ST_DWithin</code></a> and the first note in the documentation of <a href="http://postgis.net/docs/ST_Expand.html" rel="nofollow"><code>ST_Expand</code></a>, i.e.:</p>
<ol>
<li>checking if the bounding box of the second LineString intersects with the expanded bounding box of the first LineString</li>
<li>if yes, checking if the minimum distance is indeed below the prescribed threshold </li>
</ol></li>
</ul>
<p>For example:</p>
<pre><code>from shapely.wkt import dumps, loads
from shapely.geometry.geo import box
spec = [
"LINESTRING(72.863221 18.782499,72.863736 18.770147,72.882275 18.756169,72.881417 18.750805,72.878842 18.736987,72.874379 18.709512,72.860989 18.679593,72.864422 18.653897)",
"LINESTRING(72.883133 18.780793,72.882103 18.760314,72.862534 18.716422,72.860474 18.683577)"
]
lines = list(map(loads, spec))
eta = 0.005
b1 = box(*lines[0].bounds).buffer(eta)
b2 = box(*lines[1].bounds)
flag = b2.intersects(b1) and (lines[0].distance(lines[1]) < eta)
print(eta, flag)
</code></pre>
<p>Alternatively, if you would like to check if the entire second LineString is within prescribed threshold from the first LineString, you could also use the <code>buffer</code> method as:</p>
<pre><code>lines[0].buffer(eta).contains(lines[1])
</code></pre>
<p>The threshold supplied here to the <code>buffer</code> method is expressed in the same coordinate system in which the LineStrings are defined. Within the lon/lat system, this would represent the "central angle" - the issue then consists in the fact that the <a href="https://en.wikipedia.org/wiki/Great-circle_distance" rel="nofollow">great circle distance</a> corresponding to a fixed <code>eta</code> not only depends on the particular values of latitude and longitude but also on the direction of the displacement. However, if the LineStrings are not too large and the required precision is not too high, it probably won't <a href="http://gis.stackexchange.com/questions/2951/algorithm-for-offsetting-a-latitude-longitude-by-some-amount-of-meters">matter that much</a>.</p>
| 0 | 2016-09-09T09:37:31Z | [
"python",
"postgresql",
"postgis",
"shapely"
] |
How to prevent nose from checking imported modules like matplotlib? | 39,037,603 | <p>I am using nose to test my Python code. Each time I execute nosetests,
nose also tests all imported modules like matplotlib, pandas, etc.</p>
<p>This may take a while.</p>
<p>So how to skip imported modules explicitly?</p>
<p>Thanks in advance.</p>
<p>Example:</p>
<p>Code that should be tested:</p>
<pre><code>import math
import matplotlib
import pandas
class myClass:
def __init__(self):
self.__a = 4
def geta(self):
return self.__a
</code></pre>
<p>And here comes the unittest-Code-Example:</p>
<pre><code>import unittest
import myClass
class test_myClass( unittest.TestCase ):
def setUp(self):
self.aClass = myClass.myClass()
def test_my_a(self):
self.assertEqual(4, self.aClass.geta() )
</code></pre>
| 0 | 2016-08-19T11:11:04Z | 39,038,267 | <p>The option you need is:</p>
<p><code>--cover-package=your_python_package_name</code></p>
<p>When you run your nose tests, you can supply one or more of these parameters to restrict coverage to the packages you list:</p>
<p><code>nosetests --cover-package=one_package --cover-package=other_package</code></p>
| 1 | 2016-08-19T11:44:27Z | [
"python",
"nose"
] |
Upload raw JSON data on Google Cloud Storage using Python code | 39,037,616 | <p>I am trying to upload raw JSON data on Google Cloud Platform. But I am getting this error: </p>
<pre><code>TypeError: must be string or buffer, not dict
</code></pre>
<p>Code:</p>
<pre><code>def upload_data(destination_path):
credentials = GoogleCredentials.get_application_default()
service = discovery.build('storage', 'v1', credentials=credentials)
content = {'name': 'test'}
media = http.MediaIoBaseUpload(StringIO(content), mimetype='plain/text')
req = service.objects().insert(
bucket=settings.GOOGLE_CLOUD_STORAGE_BUCKET,
body={"cacheControl": "public,max-age=31536000"},
media_body=media,
predefinedAcl='publicRead',
name=destination_path,
)
resp = req.execute()
return resp
</code></pre>
<p>The code worked after changing <code>StringIO(content)</code> to <code>StringIO(json.dumps(content))</code>.</p>
| 0 | 2016-08-19T11:11:31Z | 39,037,759 | <p>in your example, <code>content</code> is a dict. Perhaps you want to use json?</p>
<pre><code>content = json.dumps({'name': 'test'})
</code></pre>
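A quick standalone check of that fix (written for Python 3, where `io.StringIO` replaces the Python 2 `StringIO` module):

```python
import json
from io import StringIO

content = json.dumps({'name': 'test'})  # a str, not a dict
media = StringIO(content)               # StringIO accepts str; a dict raises TypeError

print(isinstance(content, str))  # True
print(media.getvalue())          # {"name": "test"}
```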
| 3 | 2016-08-19T11:19:31Z | [
"python",
"json",
"google-app-engine",
"cloud",
"google-cloud-storage"
] |
Python - Basic vs extended slicing | 39,037,663 | <p>When experimenting with slicing I noticed a strange behavior in Python 2.7:</p>
<pre><code>class A:
def __getitem__(self, i):
print repr(i)
a=A()
a[:] #Prints slice(0, 9223372036854775807, None)
a[::] #prints slice(None, None, None)
a[:,:] #prints (slice(None, None, None), slice(None, None, None))
</code></pre>
<p>When using a single colon in the brackets, the slice object has 0 as start and a huge integer as end. However, when I use more than a single colon, start and stop are None if not specified.</p>
<p>Is this behaviour guaranteed or implementation specific?</p>
<p>The <a href="https://docs.python.org/2/reference/datamodel.html#types">Documentation</a> says that the second and third case are extended slicing, while the first case is not. However, I couldn't find any clear explanation of the difference between basic and extended slicing.</p>
<p>Are there any other "special cases" which I should be aware of when I override <code>__getitem__</code> and want to accept extended slicing?</p>
| 8 | 2016-08-19T11:14:23Z | 39,037,810 | <p>For Python 2 <code>[:]</code> still calls <a href="https://docs.python.org/2/reference/datamodel.html#object.__getslice__" rel="nofollow"><code>__getslice__(self, i, j)</code></a> (deprecated) and this is documented to return a slice <code>slice(0, sys.maxsize, None)</code> when called with default parameters:</p>
<blockquote>
<p>Note that missing <code>i</code> or <code>j</code> in the slice expression are replaced by <strong>zero</strong> or <strong><code>sys.maxsize</code></strong>, ...</p>
</blockquote>
<p>(emphasis mine).
New style classes don't implement <code>__getslice__()</code> by default, so</p>
<blockquote>
<p>If no <code>__getslice__()</code> is found, a slice object is created instead, and passed to <code>__getitem__()</code> instead.</p>
</blockquote>
<p>Python 3 doesn't support <code>__getslice__()</code>, anymore, instead it <a href="https://docs.python.org/3/reference/datamodel.html#object.__length_hint__" rel="nofollow">constructs a <code>slice()</code></a> object for all of the above slice expressions. And <code>slice()</code> has <code>None</code> as default:</p>
<blockquote>
<p>Note: Slicing is done exclusively with the following three methods. A call like </p>
<p><code>a[1:2] = b</code></p>
<p>is translated to</p>
<p><code>a[slice(1, 2, None)] = b</code></p>
<p>and so forth. Missing slice items are always filled in with <code>None</code>.</p>
</blockquote>
| 9 | 2016-08-19T11:22:02Z | [
"python",
"python-2.7"
] |
RegEx Ignore a commented line. | 39,037,782 | <p>I am trying to parse the following text</p>
<pre><code># ---------------------------------------------------------------------------- #
# Packages
# ---------------------------------------------------------------------------- #
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_1_test_1_qip.vhd"]
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_2_test_1_qip.vhd"]
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_3_test_1_qip.vhd"]
# Register Tool set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_INVALID_test_1_qip.vhd"]
# ---------------------------------------------------------------------------- #
# Sub Modules
# ---------------------------------------------------------------------------- #
set_global_assignment -name QIP_FILE [file join $::quartus(qip_path) "module_test_2.qip"]
set_global_assignment -name QIP_FILE [file join $::quartus(qip_path) "module_test_3.qip"]
# ---------------------------------------------------------------------------- #
# Module Files
# ---------------------------------------------------------------------------- #
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_4_test_1_qip.vhd"]
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_5_test_1_qip.vhd"]
</code></pre>
<p>Using the regex:</p>
<pre><code>(?<=_FILE).*"(.+)"
</code></pre>
<p>This also works fine and gives me all the file names in text above, however it also gives me the file names of lines which have been commented out. </p>
<p>I've tried to make a RegEx which would ignore this however I cannot get it to work. </p>
<p>This is what I've tried </p>
<pre><code>(?<!#)(?:(?<=_FILE).+"(.+)")
</code></pre>
<p>Please take a look at the <a href="https://regex101.com/r/yT0eF0/4" rel="nofollow" title="RegEx 101">RegEx 101</a></p>
<p>Regards
Ephreal</p>
| 1 | 2016-08-19T11:20:28Z | 39,037,895 | <p>To ignore commented lines, you have to start matching at the start of the line and match anything but <code>#</code>:</p>
<pre><code>^[^#\n]*(?:(?<=_FILE).+"(.+)")
</code></pre>
<p>Or just</p>
<pre><code>^[^#\n]*_FILE.+"(.+)"
</code></pre>
<p>Both patterns need the multiline flag <code>m</code>.</p>
| 1 | 2016-08-19T11:26:25Z | [
"python",
"regex",
"python-3.x"
] |
RegEx Ignore a commented line. | 39,037,782 | <p>I am trying to parse the following text</p>
<pre><code># ---------------------------------------------------------------------------- #
# Packages
# ---------------------------------------------------------------------------- #
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_1_test_1_qip.vhd"]
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_2_test_1_qip.vhd"]
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_3_test_1_qip.vhd"]
# Register Tool set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_INVALID_test_1_qip.vhd"]
# ---------------------------------------------------------------------------- #
# Sub Modules
# ---------------------------------------------------------------------------- #
set_global_assignment -name QIP_FILE [file join $::quartus(qip_path) "module_test_2.qip"]
set_global_assignment -name QIP_FILE [file join $::quartus(qip_path) "module_test_3.qip"]
# ---------------------------------------------------------------------------- #
# Module Files
# ---------------------------------------------------------------------------- #
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_4_test_1_qip.vhd"]
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_5_test_1_qip.vhd"]
</code></pre>
<p>Using the regex:</p>
<pre><code>(?<=_FILE).*"(.+)"
</code></pre>
<p>This also works fine and gives me all the file names in text above, however it also gives me the file names of lines which have been commented out. </p>
<p>I've tried to make a RegEx which would ignore this however I cannot get it to work. </p>
<p>This is what I've tried </p>
<pre><code>(?<!#)(?:(?<=_FILE).+"(.+)")
</code></pre>
<p>Please take a look at the <a href="https://regex101.com/r/yT0eF0/4" rel="nofollow" title="RegEx 101">RegEx 101</a></p>
<p>Regards
Ephreal</p>
| 1 | 2016-08-19T11:20:28Z | 39,038,208 | <p>It seems that the lines you target always have the same format, so you can avoid the regex with a field-based approach:</p>
<pre><code>def notcomm(fh):
for line in fh:
line = line.lstrip()
if line.startswith('#') or line == "":
continue
yield(line)
with open('yourfile.txt', 'r') as fh:
for line in notcomm(fh):
parts = line.split()
if parts[2].endswith('_FILE'):
print(parts[6][1:-2])
</code></pre>
| 0 | 2016-08-19T11:41:32Z | [
"python",
"regex",
"python-3.x"
] |
RegEx Ignore a commented line. | 39,037,782 | <p>I am trying to parse the following text</p>
<pre><code># ---------------------------------------------------------------------------- #
# Packages
# ---------------------------------------------------------------------------- #
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_1_test_1_qip.vhd"]
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_2_test_1_qip.vhd"]
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_3_test_1_qip.vhd"]
# Register Tool set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_INVALID_test_1_qip.vhd"]
# ---------------------------------------------------------------------------- #
# Sub Modules
# ---------------------------------------------------------------------------- #
set_global_assignment -name QIP_FILE [file join $::quartus(qip_path) "module_test_2.qip"]
set_global_assignment -name QIP_FILE [file join $::quartus(qip_path) "module_test_3.qip"]
# ---------------------------------------------------------------------------- #
# Module Files
# ---------------------------------------------------------------------------- #
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_4_test_1_qip.vhd"]
set_global_assignment -name VHDL_FILE [file join $::quartus(qip_path) "file_5_test_1_qip.vhd"]
</code></pre>
<p>Using the regex:</p>
<pre><code>(?<=_FILE).*"(.+)"
</code></pre>
<p>This also works fine and gives me all the file names in text above, however it also gives me the file names of lines which have been commented out. </p>
<p>I've tried to make a RegEx which would ignore this however I cannot get it to work. </p>
<p>This is what I've tried </p>
<pre><code>(?<!#)(?:(?<=_FILE).+"(.+)")
</code></pre>
<p>Please take a look at the <a href="https://regex101.com/r/yT0eF0/4" rel="nofollow" title="RegEx 101">RegEx 101</a></p>
<p>Regards
Ephreal</p>
| 1 | 2016-08-19T11:20:28Z | 39,039,697 | <p>If you want to stick with regular expressions, just add <a href="https://regex101.com/r/rA7mK9/1" rel="nofollow"><strong>another lookahead</strong></a>:</p>
<pre><code>(?=.*_FILE)(?!^#)[^"]*"([^"]*)"
</code></pre>
<p><hr>
In <code>Python</code>, this is rather simple:</p>
<pre><code>import re
rx = re.compile(r'(?=.*_FILE)(?!^#)[^"]*"([^"]*)"', re.MULTILINE)
files = rx.findall(your_string_here)
print(files)
# ['file_1_test_1_qip.vhd', 'file_2_test_1_qip.vhd', 'file_3_test_1_qip.vhd', 'file_INVALID_test_1_qip.vhd', 'module_test_2.qip', 'module_test_3.qip', 'file_4_test_1_qip.vhd', 'file_5_test_1_qip.vhd']
</code></pre>
<p><hr>
See <a href="http://ideone.com/nuLvJr" rel="nofollow"><strong>a demo on ideone.com</strong></a>.</p>
| 2 | 2016-08-19T12:58:58Z | [
"python",
"regex",
"python-3.x"
] |
Tuple compare in python | 39,037,838 | <p>I have simple code looking like:</p>
<pre><code>>>> a = ('1', '2', '3', '4', '5')
>>> b = ('2', '6')
>>>
>>> def comp(list1, list2):
... for val in list1:
... if val in list2:
... return True
... return False
...
>>> print comp(a, b)
True
</code></pre>
<p>Please help me understand why I receive "True"? And how I can find complete matching between two tuples?</p>
<p>Thank you.</p>
| -2 | 2016-08-19T11:23:39Z | 39,038,410 | <p>The <code>return</code> statement exits a function, optionally passing back an expression to the caller. </p>
<p>so if you write :</p>
<pre><code>a = ('1', '2', '3', '4', '5')
b = ('6', '2')
def comp(list1, list2):
for val in list1:
if val in list2:
return True
return False
print comp(a, b)
</code></pre>
<p>The answer would still be <code>True</code>, because <code>return True</code> exits the function as soon as the first shared element (<code>'2'</code>) is found; for a membership test the order inside the tuples does not matter.
So the solution:</p>
<pre><code>a = ('1', '2', '3', '4', '5')
b = ('2', '3')
def comp(list1, list2):
for val in list1:
if val not in list2:
return False
return True
print comp(b, a) # This will written True
print comp(a, b) # This will written False
</code></pre>
| 1 | 2016-08-19T11:50:46Z | [
"python",
"compare",
"tuples"
] |
Tuple compare in python | 39,037,838 | <p>I have simple code looking like:</p>
<pre><code>>>> a = ('1', '2', '3', '4', '5')
>>> b = ('2', '6')
>>>
>>> def comp(list1, list2):
... for val in list1:
... if val in list2:
... return True
... return False
...
>>> print comp(a, b)
True
</code></pre>
<p>Please help me understand why I receive "True"? And how I can find complete matching between two tuples?</p>
<p>Thank you.</p>
| -2 | 2016-08-19T11:23:39Z | 39,038,417 | <p>You should change your code a little bit. Maybe this will be of some help:</p>
<pre><code>def comp(list1, list2):
for val in list1:
if val not in list2:
return False
return True
</code></pre>
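<p>For a complete-match check, the same idea can be written more compactly with <code>all()</code>, or with sets if the order of elements and duplicates do not matter (a sketch):</p>

```python
a = ('1', '2', '3', '4', '5')
b = ('2', '3')

# True only if every element of b also appears in a
print(all(val in a for val in b))  # True

# True only if both tuples contain exactly the same elements,
# ignoring order and duplicates
print(set(a) == set(b))            # False
```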
| 0 | 2016-08-19T11:50:55Z | [
"python",
"compare",
"tuples"
] |
Python and SQL - inserting and reading values | 39,037,893 | <p>I have a problem with my code. I'm making a windowed application. Right now, I need to insert data to table in database and then show it to me (with SELECT). I have 2 fields to input the Name and Password for user (ID is calculated based on number of entries in the table) and a button to trigger the input. Then I have a field where I would like to have something like confirmation to be displayed after insert, and a button to trigger "SELECT" from the table. The application does start, but as far as I can tell, it doesn't add that to the table (and doesn't display anything as a confirmation). Here is my code:</p>
<pre><code>from tkinter import *
import sqlite3 as lite
import sys
from Crypto.Cipher import AES
con = None
con = lite.connect('test.db')
cur = con.cursor()
class Application(Frame):
def __init__(self, master):
super(Application, self).__init__(master)
self.grid()
self.create_widgets()
def create_widgets(self):
self.inst_lbl = Label(self, text = "Dodaj uzytkownika")
self.inst_lbl.grid(row = 0, column = 0, columnspan = 2, sticky = W)
self.pwa = Label(self, text = "Login")
self.pwa.grid(row = 1, column = 0, sticky = W)
self.pwb = Label(self, text = "Haslo")
self.pwb.grid(row = 2, column = 0, sticky = W)
self.a = Entry(self)
self.a.grid(row = 1, column = 1, sticky = W)
self.b = Entry(self)
self.b.grid(row = 2, column = 1, sticky = W)
self.submit_bttn = Button(self, text = "Dodaj uzytkownika", command = self.dodaj)
self.submit_bttn.grid(row = 3, column = 0, sticky = W)
self.secret_txt = Text(self, width = 35, height = 5, wrap = WORD)
self.secret_txt.grid(row = 4, column = 0, columnspan = 2, sticky = W)
self.view_users = Button(self, text = "Wyswietl uzytkownikow", command = self.wyswietl)
self.view_users.grid(row = 10, column = 0, sticky = W)
self.users_window = Text(self, width = 35, height = 5, wrap = WORD)
self.users_window.grid(row = 11, column = 0, columnspan = 2, sticky = W)
def dodaj(self):
a = self.a.get()
b = self.b.get()
index = cur.execute("SELECT COUNT(Id) FROM users")
cur.execute("INSERT INTO users (Id, Nazwa, Haslo) VALUES(?,?,?)", (int(index),str(a),str(b)))
#obj = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456')
#ciphertext = obj.encrypt(test)
self.secret_txt.delete(0.0, END)
self.secret_txt.insert(0.0, (index, a, b))
def wyswietl(self):
viewall = cur.execute("SELECT * FROM users")
self.users_window.delete(0.0, END)
self.users_window.insert(0.0, viewall)
root = Tk()
root.title("ShopQIP")
root.geometry("1024x700")
#root.mainloop()
app = Application(root)
root.mainloop()
</code></pre>
<p>And when I try to view the contents of the table, I get the error in that field:<br>
<em>sqlite3.Cursor object at 0x02A6E220</em></p>
<p>Also, the code runs, but I get this error when running it:<br>
*Exception in Tkinter callback<br>
Traceback (most recent call last):<br>
File "C:\Users\kjubus\AppData\Local\Programs\Python\Python35-32\lib\tkinter__init__.py", line 1549, in <strong>call</strong>
return self.func(<em>args)<br>
File "C:\Users\kjubus\Dysk Google!SzkoÅa\MGR\Projekt inzynierski\inzynierka v0.1.py", line 52, in dodaj<br>
cur.execute("INSERT INTO users (Id, Nazwa,Haslo) VALUES(?,?,?)", (int(index),str(a),str(b)))<br>
TypeError: int() argument must be a string, a bytes-like object or a number, not 'sqlite3.Cursor'</em><br>
and just to clarify - in this part "line 1549" call before and after <strong>call</strong> there are 2 underscores.</p>
| 1 | 2016-08-19T11:26:19Z | 39,038,070 | <p><code>__call__</code> is the special method Python invokes when an object is called like a function; seeing it in the traceback is just Tkinter dispatching your callback, so you can ignore that part.</p>
<p>A cursor's <code>execute</code> method does not return the query results; in <code>sqlite3</code> it returns the cursor object itself, and the results are retrieved by a separate call to one of its <code>fetch*</code> methods. So you probably need something like</p>
<pre><code> cur.execute("SELECT COUNT(Id) FROM users")
index = cur.fetchone()[0] # First (and only) element in the single row
cur.execute("INSERT INTO users (Id, Nazwa, Haslo) VALUES(?,?,?)", (int(index),str(a),str(b)))
</code></pre>
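<p>The same applies to the second error in the question: inserting the cursor returned by <code>SELECT *</code> straight into the text widget is what prints <em>sqlite3.Cursor object at 0x02A6E220</em>. Fetch the rows first. A minimal sketch with an in-memory database (the table layout is taken from the question):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE users (Id INTEGER, Nazwa TEXT, Haslo TEXT)")
cur.execute("INSERT INTO users VALUES (?, ?, ?)", (0, "jan", "tajne"))

cur.execute("SELECT * FROM users")
rows = cur.fetchall()  # a list of tuples, one per row
print(rows)            # [(0, 'jan', 'tajne')]
```

<p>In <code>wyswietl</code>, these rows (not the cursor object) are what should be formatted and inserted into <code>self.users_window</code>.</p>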
| 0 | 2016-08-19T11:35:08Z | [
"python",
"sql",
"sqlite3",
"window"
] |
Convert list of strings into integers with Regular Expression | 39,037,919 | <p>I am currently trying to understand Regular Expressions. I have a hard time understanding how I should solve this:</p>
<pre><code>import re
fhand = open("assign11.txt")
lst = list()
for line in fhand:
line = line.rstrip()
x = re.findall("([\d]+)", line)
if not len(x) > 0: continue
numlist = int(x[:])
d = list.append(numlist)
print(lst)
</code></pre>
<p>I want the program to match every substring that is an integer (0-9) with the help of re.findall(), then convert the substrings it finds into integers, append them to a list, and sum them up. </p>
<p>But I can't figure out a way to convert all of the substrings in each line to an integer; it only works if I write int(x[0]), but not if I write int(x[:]). How do I change every substring (when there is more than one number in the substring) into an integer?</p>
| -2 | 2016-08-19T11:27:29Z | 39,037,976 | <p>Use a list comprehension to convert each item in the list to an integer in a new list:</p>
<pre><code>list_of_strings = ['1', '11', '123', '-12']
list_of_ints = [int(x) for x in list_of_strings]
>>> list_of_ints
[1, 11, 123, -12]
</code></pre>
<p>That's the basic idea. Applying that to sum all of the integers in the file:</p>
<pre><code>import re
with open("assign11.txt") as fhand:
numbers = []
for line in fhand:
numbers.extend(re.findall("([\d]+)", line))
print(numbers)
print(sum(int(x) for x in numbers))
</code></pre>
<p>This code adds the <em>items</em> found by <code>findall()</code> to the <code>numbers</code> list using <code>list.extend()</code>. <code>list.append()</code> would add the <em>list</em> returned by <code>re,findall()</code> which is not what you want. Finally the number strings are converted to ints and the sum calculated.</p>
| 0 | 2016-08-19T11:30:46Z | [
"python",
"regex"
] |
Convert list of strings into integers with Regular Expression | 39,037,919 | <p>I am currently trying to understand Regular Expressions. I have a hard time understanding how I should solve this:</p>
<pre><code>import re
fhand = open("assign11.txt")
lst = list()
for line in fhand:
line = line.rstrip()
x = re.findall("([\d]+)", line)
if not len(x) > 0: continue
numlist = int(x[:])
d = list.append(numlist)
print(lst)
</code></pre>
<p>I want the program to match every substring that is an integer (0-9) with the help of re.findall(), then convert the substrings it finds into integers, append them to a list, and sum them up. </p>
<p>But I can't figure out a way to convert all of the substrings in each line to an integer; it only works if I write int(x[0]), but not if I write int(x[:]). How do I change every substring (when there is more than one number in the substring) into an integer?</p>
| -2 | 2016-08-19T11:27:29Z | 39,038,043 | <p>Converting a list of strings to a list of integers is done with the <code>map</code> function. <code>map</code> takes two arguments: a function and an iterable; the function is applied to each element of the iterable. </p>
<pre><code>map(int, ['1','2'])
> [1,2]
</code></pre>
<p>An example of your full process:</p>
<pre><code>import re
string = "Hello this is number 1,2 and 3. We sum to 6. "
print sum(map(int, re.findall(r'\d+', string)))
>> 12
</code></pre>
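<p>Note that the snippet above uses the Python 2 <code>print</code> statement. In Python 3, <code>map</code> returns a lazy iterator rather than a list, so wrap it in <code>list()</code> if you need the list itself (a sketch):</p>

```python
import re

string = "Hello this is number 1,2 and 3. We sum to 6. "
numbers = list(map(int, re.findall(r'\d+', string)))
print(numbers)       # [1, 2, 3, 6]
print(sum(numbers))  # 12
```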
| 1 | 2016-08-19T11:34:12Z | [
"python",
"regex"
] |
HBase and Spark - NullPointerException while trying to write to table | 39,038,139 | <p>I'm using HDP with Ambari on 3 nodes cluster. I tried to save data to Hbase table from Spark using example code (<a href="https://github.com/apache/spark/blob/v1.5.1/examples/src/main/python/hbase_outputformat.py" rel="nofollow">link</a>) but my job failed.</p>
<p>Command I used: </p>
<pre><code>spark-submit --master yarn --deploy-mode client \
--jars spark-examples-1.6.0.2.4.0.0-169-hadoop2.7.1.2.4.0.0-169.jar,\
hbase-common.jar, hbase-client.jar, hbase-server.jar, hbase-protocol.jar\
example.py 172.16.7.33 test row2 cols q1 value1
</code></pre>
<p>Error:<br>
<a href="http://pastebin.com/wm2c5wt4" rel="nofollow">link</a></p>
<p>I can run Hbase shell and add rows to the table manually. I think something is wrong with the Hbase configuration (RegionServers?). After installation, Ambari couldn't connect to the RegionServers. I restarted Hbase and now it's ok, no alerts. </p>
<p>Can someone help me with it? Thanks.</p>
<p>HBase ver. 1.1.2, Spark ver. 1.6 (both installed via Ambari)</p>
| 0 | 2016-08-19T11:39:11Z | 39,040,884 | <p>Is the HBase config on your classpath or are you hitting zookeeper by luck since you are on a zookeeper node (localhost)?</p>
<p>We have some examples of Spark Streaming with HBase on our example code site if you are interested.</p>
<p><a href="https://github.com/splicemachine/splice-community-sample-code" rel="nofollow">https://github.com/splicemachine/splice-community-sample-code</a></p>
<p>Let me know, since you cannot get meta I suspect your configuration is not on the path or hbase does not have regions assigned...</p>
| 1 | 2016-08-19T13:56:52Z | [
"python",
"apache-spark",
"hbase",
"pyspark"
] |
How to set the size of the "pure" figure (without axis ticks or title) in matplotlib | 39,038,228 | <p>I want to draw a figure with width:height=1:1, and set the size as</p>
<pre><code>plt.figure(figsize=(10,10))
</code></pre>
<p>However, this sets the size of the complete figure, including the title and the x- and y-ticks. I want the size of the "pure" plot area, without any title or x-/y-ticks, to be 1:1. How can I do this? Thanks!</p>
| 1 | 2016-08-19T11:42:19Z | 39,058,705 | <p>Using this setting should work:</p>
<pre><code>plt.gca().set_aspect('equal')
</code></pre>
<p>For example:</p>
<p><a href="http://i.stack.imgur.com/RHMsd.png" rel="nofollow"><img src="http://i.stack.imgur.com/RHMsd.png" alt="enter image description here"></a></p>
<p>It might not look square but it is!</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from scipy import stats as scistats
fig = plt.figure()
A_results = np.random.poisson(50,100)
B_results = np.random.binomial(100, 0.5, (100))
slope, intercept, r_value, p_value, std_err = scistats.linregress(A_results, B_results)
plt.scatter(A_results, B_results, marker='o', color='deepskyblue', alpha=0.5, edgecolors='k', s=100, zorder=3)
plt.plot([10, 1e2], [10, 1e2], 'k-', lw=0.5, zorder=1)
plt.plot([10, 1e2], [10*slope + intercept, 1e2*slope + intercept], 'b-', lw=1.0,
label='$R^2$ = {}'.format(round(r_value**2,3)), zorder=2)
plt.xlim(10, 100)
plt.ylim(10, 100)
plt.ylabel("variate for comparison B\n(random binomial)", labelpad=15, fontweight='bold')
plt.xlabel("variate for comparison A\n(random poisson)", labelpad=15, fontweight='bold')
plt.gca().xaxis.set_tick_params(which='minor', direction='out', width=2, length=2)
plt.gca().yaxis.set_tick_params(which='minor', direction='out', width=2, length=2)
plt.gca().set_xscale('log')
plt.gca().set_yscale('log')
plt.grid(b=True, which='major', color='gray', linestyle='-', alpha=0.85, zorder=2, lw=0.5)
plt.grid(b=True, which='minor', color='gray', linestyle='-', alpha=0.65, lw=0.5)
plt.legend(loc='best', prop={'size':14})
plt.gca().set_aspect('equal')
plt.show()
</code></pre>
| 0 | 2016-08-20T21:05:40Z | [
"python",
"matplotlib"
] |
Function chaining in Python | 39,038,358 | <p>On codewars.com I encountered the following task:</p>
<blockquote>
<p>Create a function <code>add</code> that adds numbers together when called in succession. So <code>add(1)</code> should return <code>1</code>, <code>add(1)(2)</code> should return <code>1+2</code>, ...</p>
</blockquote>
<p>While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function <code>f(x)</code> that can be called as <code>f(x)(y)(z)...</code>. Thus far, I'm not even sure how to interpret this notation. </p>
<p>As a mathematician, I'd suspect that <code>f(x)(y)</code> is a function that assigns to every <code>x</code> a function <code>g_{x}</code> and then returns <code>g_{x}(y)</code> and likewise for <code>f(x)(y)(z)</code>. </p>
<p>Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising.</p>
<p>How do you call this concept and where can I read more about it?</p>
| 64 | 2016-08-19T11:48:30Z | 39,038,455 | <p>I don't know whether this is <em>function</em> chaining as much as it's <em>callable</em> chaining, but, since functions <em>are</em> callables I guess there's no harm done. Either way, there's two ways I can think of doing this:</p>
<h3>Sub-classing <code>int</code> and defining <code>__call__</code>:</h3>
<p>The first way would be with a custom <code>int</code> subclass that defines <a href="https://docs.python.org/3/reference/datamodel.html#object.__call__" rel="nofollow"><code>__call__</code></a> which returns a new instance of itself with the updated value:</p>
<pre><code>class CustomInt(int):
def __call__(self, v):
return CustomInt(self + v)
</code></pre>
<p>Function <code>add</code> can now be defined to return a <code>CustomInt</code> instance, which, as a callable that returns an updated value of itself, can be called in succession:</p>
<pre><code>>>> def add(v):
... return CustomInt(v)
>>> add(1)
1
>>> add(1)(2)
3
>>> add(1)(2)(3)(44) # and so on..
50
</code></pre>
<p>In addition, as an <code>int</code> subclass, the returned value retains the <code>__repr__</code> and <code>__str__</code> behavior of <code>int</code>s. <em>For more complex operations though, you should define other dunders appropriately</em>. </p>
<p>As @Caridorc noted in a comment, <code>add</code> could also be simply written as:</p>
<pre><code>add = CustomInt
</code></pre>
<p>Renaming the class to <code>add</code> instead of <code>CustomInt</code> also works similarly.</p>
<hr>
<h3>Define a closure, requires extra call to yield value:</h3>
<p>The only other way I can think of involves a nested function that requires an extra empty argument call in order to return the result. I'm <strong>not</strong> using <code>nonlocal</code> and opt for attaching attributes to the function objects to make it portable between Pythons:</p>
<pre><code>def add(v):
def _inner_adder(val=None):
"""
if val is None we return _inner_adder.v
else we increment and return ourselves
"""
if val is None:
return _inner_adder.v
_inner_adder.v += val
return _inner_adder
_inner_adder.v = v # save value
return _inner_adder
</code></pre>
<p>This continuously returns itself (<code>_inner_adder</code>) which, if a <code>val</code> is supplied, increments it (<code>_inner_adder.v += val</code>) and if not, returns the value as it is. Like I mentioned, it requires an extra <code>()</code> call in order to return the incremented value:</p>
<pre><code>>>> add(1)(2)()
3
>>> add(1)(2)(3)() # and so on..
6
</code></pre>
| 75 | 2016-08-19T11:53:03Z | [
"python",
"function",
"python-3.x"
] |
Function chaining in Python | 39,038,358 | <p>On codewars.com I encountered the following task:</p>
<blockquote>
<p>Create a function <code>add</code> that adds numbers together when called in succession. So <code>add(1)</code> should return <code>1</code>, <code>add(1)(2)</code> should return <code>1+2</code>, ...</p>
</blockquote>
<p>While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function <code>f(x)</code> that can be called as <code>f(x)(y)(z)...</code>. Thus far, I'm not even sure how to interpret this notation. </p>
<p>As a mathematician, I'd suspect that <code>f(x)(y)</code> is a function that assigns to every <code>x</code> a function <code>g_{x}</code> and then returns <code>g_{x}(y)</code> and likewise for <code>f(x)(y)(z)</code>. </p>
<p>Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising.</p>
<p>How do you call this concept and where can I read more about it?</p>
| 64 | 2016-08-19T11:48:30Z | 39,038,540 | <p>If you want to define a function to be called multiple times, first you need to return a callable object each time (for example a function) otherwise you have to create your own object by defining a <code>__call__</code> attribute, in order for it to be callable.</p>
<p>The next point is that you need to preserve all the arguments, which in this case means you might want to use <a href="https://docs.python.org/3/library/asyncio-task.html">Coroutines</a> or a recursive function. But note that <strong>Coroutines are much more optimized/flexible than recursive functions</strong>, specially for such tasks.</p>
<p>Here is a sample function using Coroutines, that preserves the latest state of itself. Note that it can't be called multiple times since the return value is an <code>integer</code> which is not callable, but you might think about turning this into your expected object ;-).</p>
<pre><code>def add():
current = yield
while True:
value = yield current
current = value + current
it = add()
next(it)
print(it.send(10))
print(it.send(2))
print(it.send(4))
10
12
16
</code></pre>
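<p>As noted above, the integer yielded by the coroutine is not itself callable. One way to get the <code>add(1)(2)()</code> call style from the question on top of this coroutine is a small closure wrapper (a sketch, not part of the original answer; the trailing empty call returns the running total, following the convention used in other answers):</p>

```python
def add():
    current = yield
    while True:
        value = yield current
        current = value + current

def chain(v):
    it = add()
    next(it)      # prime the coroutine
    it.send(v)
    def _call(val=None):
        if val is None:        # empty call: report the running total
            return it.send(0)  # adding 0 leaves the total unchanged
        it.send(val)
        return _call
    return _call

print(chain(1)(2)(3)())  # 6
print(chain(10)())       # 10
```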
| 13 | 2016-08-19T11:57:30Z | [
"python",
"function",
"python-3.x"
] |
Function chaining in Python | 39,038,358 | <p>On codewars.com I encountered the following task:</p>
<blockquote>
<p>Create a function <code>add</code> that adds numbers together when called in succession. So <code>add(1)</code> should return <code>1</code>, <code>add(1)(2)</code> should return <code>1+2</code>, ...</p>
</blockquote>
<p>While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function <code>f(x)</code> that can be called as <code>f(x)(y)(z)...</code>. Thus far, I'm not even sure how to interpret this notation. </p>
<p>As a mathematician, I'd suspect that <code>f(x)(y)</code> is a function that assigns to every <code>x</code> a function <code>g_{x}</code> and then returns <code>g_{x}(y)</code> and likewise for <code>f(x)(y)(z)</code>. </p>
<p>Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising.</p>
<p>How do you call this concept and where can I read more about it?</p>
| 64 | 2016-08-19T11:48:30Z | 39,039,586 | <p>You can hate me, but here is a one-liner :)</p>
<pre><code>add = lambda v: type("", (int,), {"__call__": lambda self, v: self.__class__(self + v)})(v)
</code></pre>
<p>Edit: OK, how does this work? The code is identical to the answer of @Jim, but everything happens on a single line.</p>
<ol>
<li><code>type</code> can be used to construct new types: <code>type(name, bases, dict) -> a new type</code>. For <code>name</code> we provide an empty string, as a name is not really needed in this case. For <code>bases</code> (a tuple) we provide <code>(int,)</code>, which is identical to inheriting from <code>int</code>. <code>dict</code> holds the class attributes, where we attach the <code>__call__</code> lambda.</li>
<li><code>self.__class__(self + v)</code> is identical to <code>return CustomInt(self + v)</code></li>
<li>The new type is constructed and returned within the outer lambda.</li>
</ol>
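<p>A quick check that the one-liner behaves like the class-based versions (the result is still an <code>int</code> subclass, so it prints like a number):</p>

```python
add = lambda v: type("", (int,), {"__call__": lambda self, v: self.__class__(self + v)})(v)

print(add(1))        # 1
print(add(1)(2))     # 3
print(add(1)(2)(3))  # 6
print(isinstance(add(1)(2), int))  # True
```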
| 16 | 2016-08-19T12:54:13Z | [
"python",
"function",
"python-3.x"
] |
Function chaining in Python | 39,038,358 | <p>On codewars.com I encountered the following task:</p>
<blockquote>
<p>Create a function <code>add</code> that adds numbers together when called in succession. So <code>add(1)</code> should return <code>1</code>, <code>add(1)(2)</code> should return <code>1+2</code>, ...</p>
</blockquote>
<p>While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function <code>f(x)</code> that can be called as <code>f(x)(y)(z)...</code>. Thus far, I'm not even sure how to interpret this notation. </p>
<p>As a mathematician, I'd suspect that <code>f(x)(y)</code> is a function that assigns to every <code>x</code> a function <code>g_{x}</code> and then returns <code>g_{x}(y)</code> and likewise for <code>f(x)(y)(z)</code>. </p>
<p>Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising.</p>
<p>How do you call this concept and where can I read more about it?</p>
| 64 | 2016-08-19T11:48:30Z | 39,139,256 | <p>The pythonic way to do this would be to use dynamic arguments:</p>
<pre><code>def add(*args):
return sum(args)
</code></pre>
<p>This is not the answer you're looking for, and you may know this, but I thought I would give it anyway, in case someone is wondering about doing this not out of curiosity but for work; they should probably have the "right thing to do" answer.</p>
| 3 | 2016-08-25T07:33:30Z | [
"python",
"function",
"python-3.x"
] |
Regular Expression to capture strings between given string anchors | 39,038,371 | <p>I have a log file I'm trying to parse with regex (python). Each line contains tags "BEGIN" and "END" somewhere on the line. In addition, a line may contain one or more "VALUE" tags somewhere between the "BEGIN" and "END" tags. If there is no "VALUE" tag on a given line, I would like to capture the string between "BEGIN" and "END" tags. However, if there is a "VALUE" tag on a given line, I would like to capture all the strings between tags "BEGIN"..."VALUE", "VALUE"..."VALUE", and "VALUE"..."END" tag pairs. Note that a capture string may also be empty.</p>
<p>Given input:</p>
<pre><code>Line1: words we can ignoreBEGINvalue1VALUEvalue with spaceVALUEvalue_with_@VALUElastvalueENDwords we can ignore
Line2: BEGINvalue1VALUEVALUEVALUElastvalueENDwords we can ignore
Line3: words we can ignoreBEGINlastvalueEND
</code></pre>
<p>Regex should return:</p>
<pre><code>Line1: (1)value1 (2)value with space (3)value_with_@ (4)lastvalue
Line2: (1)value1 (2) (3) (4)lastvalue
Line3: (1)lastvalue
</code></pre>
<p>The below regex fails if there's more than one "VALUE" tag on a line, in which case it seems to capture only the strings between "BEGIN"..."VALUE" and "VALUE"..."END", but fails to capture the "VALUE"..."VALUE" matches:</p>
<pre><code>BEGIN(.*?)(?:VALUE(.*?))*END
</code></pre>
| 2 | 2016-08-19T11:49:15Z | 39,038,896 | <p>An easy approach without full use of regex: catch everything between BEGIN and END, then split the result on VALUE.</p>
<pre><code>>>> test_cases = ['words we can ignoreBEGINvalue1VALUEvalue with spaceVALUEvalue_with_@VALUElastvalueENDwords we can ignore', 'BEGINvalue1VALUEVALUEVALUElastvalueENDwords we can ignore', 'words we can ignoreBEGINlastvalueEND']
>>> patt = re.compile(r'BEGIN(.*)END')
>>> for test in test_cases:
... matched = patt.search(test)
... if matched is not None:
... print matched.group(1).split('VALUE')
['value1', 'value with space', 'value_with_@', 'lastvalue']
['value1', '', '', 'lastvalue']
['lastvalue']
</code></pre>
| 1 | 2016-08-19T12:18:00Z | [
"python",
"regex"
] |
Replace whole word Python (only variable) | 39,038,468 | <p>I am trying to write code which replaces only whole words in a string variable. First of all I found a <code>re.sub</code> solution, but there was nothing mentioned about using a variable inside it.
Now my solution looks like this (it does not replace whole words): </p>
<pre><code>for index in range(len(lista)):
    query = query.replace(lista[index], var)
</code></pre>
<p>As on example above, I want to replace value in <code>lista[index]</code> with value from <code>VAR -> variable</code>.</p>
<p>EDIT:</p>
<p>Example query:</p>
<pre><code>Select Information,AdditionalInformation,Price from Table
</code></pre>
<p>example lista vales:</p>
<pre><code>Information, Price
</code></pre>
<p>example var:</p>
<pre><code>var = "hide"
</code></pre>
<p>Finally query should looks:</p>
<pre><code>Select hide,AdditionalInformation,hide from Table
</code></pre>
| 0 | 2016-08-19T11:53:48Z | 39,038,717 | <blockquote>
<p>I want to replace value in lista[index] with value from VAR ->
variable.</p>
</blockquote>
<p>From this, I <em>think</em> you're looking for something along these lines:</p>
<pre><code>for index in (range(len(lista))):
lista[index] = var
</code></pre>
<p>This changes all the items in the list to <code>var</code>.</p>
<p>If you don't want all items in the list to be replaced, you would need to make a decision to replace the correct item, and so maybe (if you already know the position of the item in the list that needs replacing):</p>
<pre><code>lista[index] = var
</code></pre>
| 0 | 2016-08-19T12:07:27Z | [
"python",
"replace"
] |
Replace whole word Python (only variable) | 39,038,468 | <p>I am trying to write code which replaces only whole words in a string variable. First of all I found a <code>re.sub</code> solution, but there was nothing mentioned about using a variable inside it.
Now my solution looks like this (it does not replace whole words): </p>
<pre><code>for index in range(len(lista)):
    query = query.replace(lista[index], var)
</code></pre>
<p>As on example above, I want to replace value in <code>lista[index]</code> with value from <code>VAR -> variable</code>.</p>
<p>EDIT:</p>
<p>Example query:</p>
<pre><code>Select Information,AdditionalInformation,Price from Table
</code></pre>
<p>example lista vales:</p>
<pre><code>Information, Price
</code></pre>
<p>example var:</p>
<pre><code>var = "hide"
</code></pre>
<p>Finally query should looks:</p>
<pre><code>Select hide,AdditionalInformation,hide from Table
</code></pre>
| 0 | 2016-08-19T11:53:48Z | 39,038,895 | <p>I think this should work:</p>
<pre><code>for word in alist:
query = re.sub(r'\b'+word+r'\b',var,query)
</code></pre>
| 1 | 2016-08-19T12:17:58Z | [
"python",
"replace"
] |
Replace whole word Python (only variable) | 39,038,468 | <p>I am trying to write code which replaces only whole words in a string variable. First of all I found a <code>re.sub</code> solution, but there was nothing mentioned about using a variable inside it.
Now my solution looks like this (it does not replace whole words): </p>
<pre><code>for index in range(len(lista)):
    query = query.replace(lista[index], var)
</code></pre>
<p>As on example above, I want to replace value in <code>lista[index]</code> with value from <code>VAR -> variable</code>.</p>
<p>EDIT:</p>
<p>Example query:</p>
<pre><code>Select Information,AdditionalInformation,Price from Table
</code></pre>
<p>example lista vales:</p>
<pre><code>Information, Price
</code></pre>
<p>example var:</p>
<pre><code>var = "hide"
</code></pre>
<p>Finally query should looks:</p>
<pre><code>Select hide,AdditionalInformation,hide from Table
</code></pre>
| 0 | 2016-08-19T11:53:48Z | 39,038,937 | <p>You can compile a regex that matches any of the words in <code>lista</code>. </p>
<pre><code>import re
query = "Select Information,AdditionalInformation,Price from Table"
lista = ["Information", "Price"]
var = "hide"
pat = re.compile(r'\b' + '|'.join(lista) + r'\b')
query = pat.sub(var, query)
print(query)
</code></pre>
<p><strong>output</strong></p>
<pre><code>Select hide,AdditionalInformation,hide from Table
</code></pre>
<p>The <code>\b</code> is used to match word boundaries; this prevents "AdditionalInformation" from being modified. See <a href="https://docs.python.org/3/library/re.html#regular-expression-syntax" rel="nofollow">Regular Expression Syntax</a> in the docs. We need to write it as a raw string, <code>r'\b'</code>, otherwise it gets interpreted as the ANSI escape code for backspace.</p>
| 2 | 2016-08-19T12:20:52Z | [
"python",
"replace"
] |
Replace whole word Python (only variable) | 39,038,468 | <p>I am trying to write code which replaces only whole words in a string variable. First of all I found a <code>re.sub</code> solution, but there was nothing mentioned about using a variable inside it.
Now my solution looks like this (it does not replace whole words): </p>
<pre><code>for index in range(len(lista)):
    query = query.replace(lista[index], var)
</code></pre>
<p>As on example above, I want to replace value in <code>lista[index]</code> with value from <code>VAR -> variable</code>.</p>
<p>EDIT:</p>
<p>Example query:</p>
<pre><code>Select Information,AdditionalInformation,Price from Table
</code></pre>
<p>example lista vales:</p>
<pre><code>Information, Price
</code></pre>
<p>example var:</p>
<pre><code>var = "hide"
</code></pre>
<p>Finally query should looks:</p>
<pre><code>Select hide,AdditionalInformation,hide from Table
</code></pre>
| 0 | 2016-08-19T11:53:48Z | 39,039,116 | <p>As kaasias wrote, just change the the elements of the list with this code:</p>
<pre><code>for index in (range(len(lista))):
lista[index] = var
</code></pre>
<p>and make sure you create your query with the new elements of the list.</p>
<pre><code>query = "SELECT " + lista[0] + ", AdditionalInformation, " + lista[1] + " FROM " + table
</code></pre>
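<p>Putting the two pieces together (a hedged sketch; it assumes <code>AdditionalInformation</code> is a literal column name and hard-codes the table name):</p>

```python
lista = ["Information", "Price"]
var = "hide"
table = "Table"

# overwrite every element of the list with the replacement value
for index in range(len(lista)):
    lista[index] = var

# rebuild the query from the (now replaced) column names
query = "SELECT " + lista[0] + ", AdditionalInformation, " + lista[1] + " FROM " + table
print(query)  # → SELECT hide, AdditionalInformation, hide FROM Table
```

<p>Note that this mutates <code>lista</code> in place, so the original column names are lost afterwards.</p>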
| 0 | 2016-08-19T12:30:57Z | [
"python",
"replace"
] |
How to run code in another AppDomain? I need to sandbox my IronPython code | 39,038,499 | <p>I have some IronPython code; I'm passing variables to it and want to receive a result from Python. I managed to get the result from IronPython when I create an engine in the current domain:</p>
<pre><code>ScriptEngine pyEngine = IronPython.Hosting.Python.CreateEngine();
</code></pre>
<p>then I just take the result like this:</p>
<pre><code>var result = this.pyScope.GetVariable("ObiektMS").code_1_1_1("II");
</code></pre>
<p>However, I need it sandboxed, so I'm creating an AppDomain with restricted permissions and running the engine in it:</p>
<pre><code>ScriptEngine pyEngine = IronPython.Hosting.Python.CreateEngine(newDomain);
</code></pre>
<p>But this time, when I try to get the result in the same way, I get</p>
<pre><code>An unhandled exception of type 'System.Runtime.Serialization.SerializationException' occurred in PythonTimeTests.exe
</code></pre>
<p>Is it because my "result" variable is in a different domain and the result from IronPython has to be serialized?</p>
<p>Is it possible to create a variable in the new domain?
I want my program to do this:</p>
<p>1. Read a dictionary from a file.</p>
<p>2. Create a new domain with restrictions and pass the dictionary to it.</p>
<p>3. Do work in IronPython by calling Python functions from C# (all in the restricted domain).</p>
<p>4. Return the results to the unrestricted domain.</p>
| 0 | 2016-08-19T11:55:37Z | 39,039,471 | <p>Yes - cross-domain variables are serialized between domains unless they inherit from MarshalByRefObject, as explained here: <a href="http://stackoverflow.com/questions/2206961/sharing-data-between-appdomains">Sharing data between AppDomains</a></p>
<p>Your second question is "can I ... do some C# stuff" - yes; you can do anything you want in an AppDomain. You just need to follow the rules on AppDomain data transfer in order to pass stuff between AppDomains.</p>
| 1 | 2016-08-19T12:48:57Z | [
"c#",
"python",
"ironpython",
"sandbox"
] |
Receive messages from Telegram Bot to Google App Engine via Cloud Endpoint | 39,038,516 | <p>I have a python Google App Engine application that receives incoming messages from a Telegram Bot via webhook. I'm using Cloud Endpoints to receive the request, so I use Google Protocol RPC to manage the request and the response. </p>
<p>The incoming JSON update from Telegram that contains the message has a field called <strong><code>from</code></strong>. The problem is that when I write the RPC class to handle the message, I can't use the name <strong><code>from</code></strong> for the variable because it is a reserved keyword:</p>
<pre><code>class TelegramMessage(messages.Message):
message_id = messages.IntegerField(1, required = True)
from = messages.MessageField(User, 2)
</code></pre>
<p>I can't change the name of the variable because otherwise the <strong><code>from</code></strong> field from the incoming JSON gets lost, and I receive this warning in the console: <code>No variant found for unrecognized field: from</code>.</p>
<p>How can I solve it? </p>
| 1 | 2016-08-19T11:56:23Z | 39,106,635 | <p>I would suggest using a Python library like <a href="https://pypi.python.org/pypi/python-telegram-bot" rel="nofollow">python-telegram-bot</a>. The author of the library <a href="https://github.com/python-telegram-bot/python-telegram-bot/blob/master/telegram/message.py#L33" rel="nofollow">renamed</a> the Python-incompatible <code>from</code> attribute to <code>from_user</code>. So just do:</p>
<pre><code>user = bot.getUpdates()[-1].from_user
</code></pre>
| 1 | 2016-08-23T16:43:13Z | [
"python",
"google-app-engine",
"telegram-bot"
] |
Using a single replacement operation replace all leading tabs with spaces | 39,038,569 | <p>In my text I want to replace all leading tabs with two spaces but leave the non-leading tabs alone.</p>
<p>For example:</p>
<pre><code>a
\tb
\t\tc
\td\te
f\t\tg
</code></pre>
<p>(<code>"a\n\tb\n\t\tc\n\td\te\nf\t\tg"</code>)</p>
<p>should turn into:</p>
<pre><code>a
b
c
d\te
f\t\tg
</code></pre>
<p>(<code>"a\n b\n c\n d\te\nf\t\tg"</code>)</p>
<p>For my case I could do that with multiple replacement operations, repeating as many times as the maximum nesting level or until nothing changes.</p>
<p><strong>But wouldn't it also be possible to do in a single run?</strong></p>
<p>I tried but didn't manage to come up with something, the best I came up yet was with lookarounds:</p>
<pre><code>re.sub(r'(^|(?<=\t))\t', ' ', a, flags=re.MULTILINE)
</code></pre>
<p>Which "only" makes one wrong replacement (second tab between <code>f</code> and <code>g</code>).</p>
<p>Now it might be that it's simply impossible to do in regex in a single run because the already replaced parts can't be matched again (or rather the replacement does not happen right away) and you can't sort-of "count" in regex, in this case I would love to see some more detailed explanations on why (as long as this won't shift too much into [cs.se] territory).</p>
<p>I am working in Python currently but this could apply to pretty much any similar regex implementation.</p>
| 4 | 2016-08-19T11:58:44Z | 39,038,654 | <p>You may match the tabs at the start of the lines, and use a lambda inside <code>re.sub</code> to replace them with two spaces multiplied by the length of the match:</p>
<pre><code>import re
s = "a\n\tb\n\t\tc\n\td\te\nf\t\tg"
print(re.sub(r"^\t+", lambda m: "  " * len(m.group()), s, flags=re.M))
</code></pre>
<p>See the <a href="https://ideone.com/AmOECe">Python demo</a></p>
| 8 | 2016-08-19T12:03:33Z | [
"python",
"regex"
] |
Using a single replacement operation replace all leading tabs with spaces | 39,038,569 | <p>In my text I want to replace all leading tabs with two spaces but leave the non-leading tabs alone.</p>
<p>For example:</p>
<pre><code>a
\tb
\t\tc
\td\te
f\t\tg
</code></pre>
<p>(<code>"a\n\tb\n\t\tc\n\td\te\nf\t\tg"</code>)</p>
<p>should turn into:</p>
<pre><code>a
b
c
d\te
f\t\tg
</code></pre>
<p>(<code>"a\n b\n c\n d\te\nf\t\tg"</code>)</p>
<p>For my case I could do that with multiple replacement operations, repeating as many times as the maximum nesting level or until nothing changes.</p>
<p><strong>But wouldn't it also be possible to do in a single run?</strong></p>
<p>I tried but didn't manage to come up with something, the best I came up yet was with lookarounds:</p>
<pre><code>re.sub(r'(^|(?<=\t))\t', ' ', a, flags=re.MULTILINE)
</code></pre>
<p>Which "only" makes one wrong replacement (second tab between <code>f</code> and <code>g</code>).</p>
<p>Now it might be that it's simply impossible to do in regex in a single run because the already replaced parts can't be matched again (or rather the replacement does not happen right away) and you can't sort-of "count" in regex, in this case I would love to see some more detailed explanations on why (as long as this won't shift too much into [cs.se] territory).</p>
<p>I am working in Python currently but this could apply to pretty much any similar regex implementation.</p>
| 4 | 2016-08-19T11:58:44Z | 39,039,515 | <p>It is also possible to do this without regex using <code>replace()</code> in a one-liner:</p>
<pre><code>>>> s = "a\n\tb\n\t\tc\n\td\te\nf\t\tg"
>>> "\n".join(x.replace("\t"," ",len(x)-len(x.lstrip("\t"))) for x in s.split("\n"))
'a\n b\n c\n d\te\nf\t\tg'
</code></pre>
| 1 | 2016-08-19T12:50:57Z | [
"python",
"regex"
] |
Using a single replacement operation replace all leading tabs with spaces | 39,038,569 | <p>In my text I want to replace all leading tabs with two spaces but leave the non-leading tabs alone.</p>
<p>For example:</p>
<pre><code>a
\tb
\t\tc
\td\te
f\t\tg
</code></pre>
<p>(<code>"a\n\tb\n\t\tc\n\td\te\nf\t\tg"</code>)</p>
<p>should turn into:</p>
<pre><code>a
b
c
d\te
f\t\tg
</code></pre>
<p>(<code>"a\n b\n c\n d\te\nf\t\tg"</code>)</p>
<p>For my case I could do that with multiple replacement operations, repeating as many times as the many maximum nesting level or until nothing changes.</p>
<p><strong>But wouldn't it also be possible to do in a single run?</strong></p>
<p>I tried but didn't manage to come up with something, the best I came up yet was with lookarounds:</p>
<pre><code>re.sub(r'(^|(?<=\t))\t', ' ', a, flags=re.MULTILINE)
</code></pre>
<p>Which "only" makes one wrong replacement (second tab between <code>f</code> and <code>g</code>).</p>
<p>Now it might be that it's simply impossible to do in regex in a single run because the already replaced parts can't be matched again (or rather the replacement does not happen right away) and you can't sort-of "count" in regex, in this case I would love to see some more detailed explanations on why (as long as this won't shift too much into [cs.se] territory).</p>
<p>I am working in Python currently but this could apply to pretty much any similar regex implementation.</p>
| 4 | 2016-08-19T11:58:44Z | 39,592,690 | <p>This here is kinda crazy, but it works. Note that the inner replacement must be a raw string, <code>r"\1"</code>: in a plain string Python turns <code>\1</code> into the <code>\x01</code> character instead of a backreference, which would break the tab count:</p>
<pre><code>"\n".join([ re.sub(r"^(\t+)", " "*(2*len(re.sub(r"^(\t+).*", r"\1", x))), x) for x in "a\n\tb\n\t\tc\n\td\te\nf\t\tg".splitlines() ])
</code></pre>
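<p>A more readable equivalent (a sketch, not from the original answer) counts the leading tabs with <code>lstrip</code> instead of a nested <code>re.sub</code>:</p>

```python
s = "a\n\tb\n\t\tc\n\td\te\nf\t\tg"

def detab(line):
    # number of leading tabs = how many characters lstrip removes
    n = len(line) - len(line.lstrip("\t"))
    return "  " * n + line[n:]

result = "\n".join(detab(line) for line in s.splitlines())
print(repr(result))  # → 'a\n  b\n    c\n  d\te\nf\t\tg'
```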
| 1 | 2016-09-20T11:18:40Z | [
"python",
"regex"
] |
Deep learning: how can I save the computed model for prediction and how to load it later | 39,038,594 | <p>I am studying the main concepts of deep learning using the Theano library. I am trying to run this code that appears in the tutorial. This piece of code runs for hours. How should I save the computed model for later use, and how exactly should I load it back? </p>
<pre><code>import cPickle
import gzip
import os
import sys
import time
import numpy
import theano
import theano.tensor as T
from theano.tensor.signal import downsample
from theano.tensor.nnet import conv
from logistic_sgd import LogisticRegression, load_data
from mlp import HiddenLayer
class LeNetConvPoolLayer(object):
"""Pool Layer of a convolutional network """
def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
"""
Allocate a LeNetConvPoolLayer with shared variable internal parameters.
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape
:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
filter height,filter width)
:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
image height, image width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows,#cols)
"""
assert image_shape[1] == filter_shape[1]
self.input = input
# there are "num input feature maps * filter height * filter width"
# inputs to each hidden unit
fan_in = numpy.prod(filter_shape[1:])
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" /
# pooling size
fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
numpy.prod(poolsize))
# initialize weights with random weights
W_bound = numpy.sqrt(6. / (fan_in + fan_out))
self.W = theano.shared(numpy.asarray(
rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
dtype=theano.config.floatX),
borrow=True)
# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True)
# convolve input feature maps with filters
conv_out = conv.conv2d(input=input, filters=self.W,
filter_shape=filter_shape, image_shape=image_shape)
# downsample each feature map individually, using maxpooling
pooled_out = downsample.max_pool_2d(input=conv_out,
ds=poolsize, ignore_border=True)
# add the bias term. Since the bias is a vector (1D array), we first
# reshape it to a tensor of shape (1,n_filters,1,1). Each bias will
# thus be broadcasted across mini-batches and feature map
# width & height
self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))
# store parameters of this layer
self.params = [self.W, self.b]
def evaluate_lenet5(learning_rate=0.1, n_epochs=200,
dataset='mnist.pkl.gz',
nkerns=[20, 50], batch_size=500):
""" Demonstrates lenet on MNIST dataset
:type learning_rate: float
:param learning_rate: learning rate used (factor for the stochastic
gradient)
:type n_epochs: int
:param n_epochs: maximal number of epochs to run the optimizer
:type dataset: string
:param dataset: path to the dataset used for training /testing (MNIST here)
:type nkerns: list of ints
:param nkerns: number of kernels on each layer
"""
rng = numpy.random.RandomState(23455)
datasets = load_data(dataset)
train_set_x, train_set_y = datasets[0]
valid_set_x, valid_set_y = datasets[1]
test_set_x, test_set_y = datasets[2]
# compute number of minibatches for training, validation and testing
n_train_batches = train_set_x.get_value(borrow=True).shape[0]
n_valid_batches = valid_set_x.get_value(borrow=True).shape[0]
n_test_batches = test_set_x.get_value(borrow=True).shape[0]
n_train_batches /= batch_size
n_valid_batches /= batch_size
n_test_batches /= batch_size
# allocate symbolic variables for the data
index = T.lscalar() # index to a [mini]batch
x = T.matrix('x') # the data is presented as rasterized images
y = T.ivector('y') # the labels are presented as 1D vector of
# [int] labels
ishape = (28, 28) # this is the size of MNIST images
######################
# BUILD ACTUAL MODEL #
######################
print '... building the model'
# Reshape matrix of rasterized images of shape (batch_size,28*28)
# to a 4D tensor, compatible with our LeNetConvPoolLayer
layer0_input = x.reshape((batch_size, 1, 28, 28))
# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1,28-5+1)=(24,24)
# maxpooling reduces this further to (24/2,24/2) = (12,12)
# 4D output tensor is thus of shape (batch_size,nkerns[0],12,12)
layer0 = LeNetConvPoolLayer(rng, input=layer0_input,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5), poolsize=(2, 2))
# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1,12-5+1)=(8,8)
# maxpooling reduces this further to (8/2,8/2) = (4,4)
# 4D output tensor is thus of shape (nkerns[0],nkerns[1],4,4)
layer1 = LeNetConvPoolLayer(rng, input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5), poolsize=(2, 2))
# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size,num_pixels) (i.e matrix of rasterized images).
# This will generate a matrix of shape (20,32*4*4) = (20,512)
layer2_input = layer1.output.flatten(2)
# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(rng, input=layer2_input, n_in=nkerns[1] * 4 * 4,
n_out=500, activation=T.tanh)
# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)
# the cost we minimize during training is the NLL of the model
cost = layer3.negative_log_likelihood(y)
# create a function to compute the mistakes that are made by the model
test_model = theano.function([index], layer3.errors(y),
givens={
x: test_set_x[index * batch_size: (index + 1) * batch_size],
y: test_set_y[index * batch_size: (index + 1) * batch_size]})
validate_model = theano.function([index], layer3.errors(y),
givens={
x: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size]})
# create a list of all model parameters to be fit by gradient descent
params = layer3.params + layer2.params + layer1.params + layer0.params
# create a list of gradients for all model parameters
grads = T.grad(cost, params)
# train_model is a function that updates the model parameters by
# SGD Since this model has many parameters, it would be tedious to
# manually create an update rule for each model parameter. We thus
# create the updates list by automatically looping over all
# (params[i],grads[i]) pairs.
updates = []
for param_i, grad_i in zip(params, grads):
updates.append((param_i, param_i - learning_rate * grad_i))
train_model = theano.function([index], cost, updates=updates,
givens={
x: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]})
###############
# TRAIN MODEL #
###############
print '... training'
# early-stopping parameters
patience = 10000 # look as this many examples regardless
patience_increase = 2 # wait this much longer when a new best is
# found
improvement_threshold = 0.995 # a relative improvement of this much is
# considered significant
validation_frequency = min(n_train_batches, patience / 2)
# go through this many
# minibatche before checking the network
# on the validation set; in this case we
# check every epoch
best_params = None
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
start_time = time.clock()
epoch = 0
done_looping = False
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in xrange(n_train_batches):
iter = (epoch - 1) * n_train_batches + minibatch_index
if iter % 100 == 0:
print 'training @ iter = ', iter
cost_ij = train_model(minibatch_index)
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = [validate_model(i) for i
in xrange(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' % \
(epoch, minibatch_index + 1, n_train_batches, \
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
#improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * \
improvement_threshold:
patience = max(patience, iter * patience_increase)
# save best validation score and iteration number
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = [test_model(i) for i in xrange(n_test_batches)]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of best '
'model %f %%') %
(epoch, minibatch_index + 1, n_train_batches,
test_score * 100.))
if patience <= iter:
done_looping = True
break
end_time = time.clock()
print('Optimization complete.')
print('Best validation score of %f %% obtained at iteration %i,'\
'with test performance %f %%' %
(best_validation_loss * 100., best_iter + 1, test_score * 100.))
print >> sys.stderr, ('The code for file ' +
os.path.split(__file__)[1] +
' ran for %.2fm' % ((end_time - start_time) / 60.))
if __name__ == '__main__':
evaluate_lenet5()
def experiment(state, channel):
evaluate_lenet5(state.learning_rate, dataset=state.dataset)
</code></pre>
| 0 | 2016-08-19T11:59:43Z | 39,046,014 | <p>Generally, you want to find everywhere that a shared variable is created (<code>theano.shared</code>), and pickle the values. If you've got a shared variable <code>a</code>, you can use <code>a.get_value</code> to get the value of the variable, then pickle this (or use <code>numpy.save</code> or <code>numpy.savez</code>). When you want to load the network, just load those shared variable values, and assign them to the shared variable again using <code>a.set_value</code>.</p>
<p>In your case, an object-oriented way to do things would be to write a <code>save</code> and <code>load</code> method for your <code>LeNetConvPoolLayer</code>. For example, the <code>save</code> method could do </p>
<pre><code>def save(self, filename):
np.savez(filename, W=self.W.get_value(), b=self.b.get_value())
</code></pre>
<p>Then you can use these <code>save</code> and <code>load</code> functions to save and load each layer as you wish.</p>
<p>Trying to pickle the whole thing is another option, but certain Theano objects won't work properly when they're pickled and loaded (I'm not quite sure which, though, and it might depend, for example, on whether your shared variable is stored on the CPU or GPU internally). So it's best to pickle the values separately as I've described above, especially if you want to store them over a long time or share them between machines.</p>
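<p>A matching <code>load</code> could mirror the <code>save</code> above. Here is a sketch of the round trip; plain numpy arrays stand in for the values you would pull out of the shared variables with <code>get_value</code> (the <code>layer0</code> names are illustrative):</p>

```python
import numpy as np

# values as they would come from layer0.W.get_value() / layer0.b.get_value()
W_val = np.arange(12, dtype='float32').reshape(3, 4)
b_val = np.zeros(3, dtype='float32')

np.savez('layer0_params.npz', W=W_val, b=b_val)

# later (or on another machine): load the arrays back, then push them into
# the shared variables, e.g. layer0.W.set_value(params['W'])
params = np.load('layer0_params.npz')
assert np.array_equal(params['W'], W_val)
assert np.array_equal(params['b'], b_val)
```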
| 0 | 2016-08-19T18:56:15Z | [
"python",
"deep-learning",
"theano"
] |
Deep learning: how can I save the computed model for prediction and how to load it later | 39,038,594 | <p>I am studying the main concepts of deep learning using the Theano library. I am trying to run this code that appears in the tutorial. This piece of code runs for hours. How should I save the computed model for later use, and how exactly should I load it back? </p>
<pre><code>import cPickle
import gzip
import os
import sys
import time
import numpy
import theano
import theano.tensor as T
from theano.tensor.signal import downsample
from theano.tensor.nnet import conv
from logistic_sgd import LogisticRegression, load_data
from mlp import HiddenLayer
class LeNetConvPoolLayer(object):
"""Pool Layer of a convolutional network """
def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
"""
Allocate a LeNetConvPoolLayer with shared variable internal parameters.
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape
:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
filter height,filter width)
:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
image height, image width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows,#cols)
"""
assert image_shape[1] == filter_shape[1]
self.input = input
# there are "num input feature maps * filter height * filter width"
# inputs to each hidden unit
fan_in = numpy.prod(filter_shape[1:])
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" /
# pooling size
fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
numpy.prod(poolsize))
# initialize weights with random weights
W_bound = numpy.sqrt(6. / (fan_in + fan_out))
self.W = theano.shared(numpy.asarray(
rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
dtype=theano.config.floatX),
borrow=True)
# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True)
# convolve input feature maps with filters
conv_out = conv.conv2d(input=input, filters=self.W,
filter_shape=filter_shape, image_shape=image_shape)
# downsample each feature map individually, using maxpooling
pooled_out = downsample.max_pool_2d(input=conv_out,
ds=poolsize, ignore_border=True)
# add the bias term. Since the bias is a vector (1D array), we first
# reshape it to a tensor of shape (1,n_filters,1,1). Each bias will
# thus be broadcasted across mini-batches and feature map
# width & height
self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))
# store parameters of this layer
self.params = [self.W, self.b]
def evaluate_lenet5(learning_rate=0.1, n_epochs=200,
dataset='mnist.pkl.gz',
nkerns=[20, 50], batch_size=500):
""" Demonstrates lenet on MNIST dataset
:type learning_rate: float
:param learning_rate: learning rate used (factor for the stochastic
gradient)
:type n_epochs: int
:param n_epochs: maximal number of epochs to run the optimizer
:type dataset: string
:param dataset: path to the dataset used for training /testing (MNIST here)
:type nkerns: list of ints
:param nkerns: number of kernels on each layer
"""
rng = numpy.random.RandomState(23455)
datasets = load_data(dataset)
train_set_x, train_set_y = datasets[0]
valid_set_x, valid_set_y = datasets[1]
test_set_x, test_set_y = datasets[2]
# compute number of minibatches for training, validation and testing
n_train_batches = train_set_x.get_value(borrow=True).shape[0]
n_valid_batches = valid_set_x.get_value(borrow=True).shape[0]
n_test_batches = test_set_x.get_value(borrow=True).shape[0]
n_train_batches /= batch_size
n_valid_batches /= batch_size
n_test_batches /= batch_size
# allocate symbolic variables for the data
index = T.lscalar() # index to a [mini]batch
x = T.matrix('x') # the data is presented as rasterized images
y = T.ivector('y') # the labels are presented as 1D vector of
# [int] labels
ishape = (28, 28) # this is the size of MNIST images
######################
# BUILD ACTUAL MODEL #
######################
print '... building the model'
# Reshape matrix of rasterized images of shape (batch_size,28*28)
# to a 4D tensor, compatible with our LeNetConvPoolLayer
layer0_input = x.reshape((batch_size, 1, 28, 28))
# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1,28-5+1)=(24,24)
# maxpooling reduces this further to (24/2,24/2) = (12,12)
# 4D output tensor is thus of shape (batch_size,nkerns[0],12,12)
layer0 = LeNetConvPoolLayer(rng, input=layer0_input,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5), poolsize=(2, 2))
# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1,12-5+1)=(8,8)
# maxpooling reduces this further to (8/2,8/2) = (4,4)
# 4D output tensor is thus of shape (nkerns[0],nkerns[1],4,4)
layer1 = LeNetConvPoolLayer(rng, input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5), poolsize=(2, 2))
# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size,num_pixels) (i.e matrix of rasterized images).
# This will generate a matrix of shape (20,32*4*4) = (20,512)
layer2_input = layer1.output.flatten(2)
# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(rng, input=layer2_input, n_in=nkerns[1] * 4 * 4,
n_out=500, activation=T.tanh)
# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)
# the cost we minimize during training is the NLL of the model
cost = layer3.negative_log_likelihood(y)
# create a function to compute the mistakes that are made by the model
test_model = theano.function([index], layer3.errors(y),
givens={
x: test_set_x[index * batch_size: (index + 1) * batch_size],
y: test_set_y[index * batch_size: (index + 1) * batch_size]})
validate_model = theano.function([index], layer3.errors(y),
givens={
x: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size]})
# create a list of all model parameters to be fit by gradient descent
params = layer3.params + layer2.params + layer1.params + layer0.params
# create a list of gradients for all model parameters
grads = T.grad(cost, params)
# train_model is a function that updates the model parameters by
# SGD Since this model has many parameters, it would be tedious to
# manually create an update rule for each model parameter. We thus
# create the updates list by automatically looping over all
# (params[i],grads[i]) pairs.
updates = []
for param_i, grad_i in zip(params, grads):
updates.append((param_i, param_i - learning_rate * grad_i))
train_model = theano.function([index], cost, updates=updates,
givens={
x: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]})
###############
# TRAIN MODEL #
###############
print '... training'
# early-stopping parameters
patience = 10000 # look as this many examples regardless
patience_increase = 2 # wait this much longer when a new best is
# found
improvement_threshold = 0.995 # a relative improvement of this much is
# considered significant
validation_frequency = min(n_train_batches, patience / 2)
# go through this many
# minibatche before checking the network
# on the validation set; in this case we
# check every epoch
best_params = None
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
start_time = time.clock()
epoch = 0
done_looping = False
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in xrange(n_train_batches):
iter = (epoch - 1) * n_train_batches + minibatch_index
if iter % 100 == 0:
print 'training @ iter = ', iter
cost_ij = train_model(minibatch_index)
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = [validate_model(i) for i
in xrange(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' % \
(epoch, minibatch_index + 1, n_train_batches, \
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
#improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * \
improvement_threshold:
patience = max(patience, iter * patience_increase)
# save best validation score and iteration number
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = [test_model(i) for i in xrange(n_test_batches)]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of best '
'model %f %%') %
(epoch, minibatch_index + 1, n_train_batches,
test_score * 100.))
if patience <= iter:
done_looping = True
break
end_time = time.clock()
print('Optimization complete.')
print('Best validation score of %f %% obtained at iteration %i,'\
'with test performance %f %%' %
(best_validation_loss * 100., best_iter + 1, test_score * 100.))
print >> sys.stderr, ('The code for file ' +
os.path.split(__file__)[1] +
' ran for %.2fm' % ((end_time - start_time) / 60.))
if __name__ == '__main__':
evaluate_lenet5()
def experiment(state, channel):
evaluate_lenet5(state.learning_rate, dataset=state.dataset)
</code></pre>
| 0 | 2016-08-19T11:59:43Z | 39,106,645 | <pre><code># in evaluate_lenet5 block, save your model after training finish
open('layer0_model.pkl', 'wb') as f0:
pickle.dump(layer0,f0)
open('layer1_model.pkl', 'wb') as f1:
pickle.dump(layer1,f1)
open('layer2_model.pkl', 'wb') as f2:
pickle.dump(layer0,f2)
open('layer3_model.pkl', 'wb') as f3:
pickle.dump(layer0,f3)
# load the saved model
layer0 = pickle.load(open('layer0_model.pkl'))
layer1 = pickle.load(open('layer1_model.pkl'))
layer2 = pickle.load(open('layer2_model.pkl'))
layer3 = pickle.load(open('layer3_model.pkl'))
</code></pre>
| 0 | 2016-08-23T16:43:39Z | [
"python",
"deep-learning",
"theano"
] |
Python socket server for multiple connection handling | 39,038,719 | <p>Can someone suggest a good example of a socket server in Python that can handle multiple connections with threading? (A live connection, like server-client ping-pong, where each connection is handled in its own thread.)</p>
| -1 | 2016-08-19T12:07:39Z | 39,040,235 | <p>Using the <code>SocketServer</code> module you can create a server that handles multiple connections. Using <a href="https://docs.python.org/3/library/socketserver.html#asynchronous-mixins" rel="nofollow">Asynchronous mixins</a> you can start new threads for each connection. There is a very good example in the Python documentation above.</p>
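<p>A minimal sketch of such a threaded server, using the Python 3 spelling <code>socketserver</code> (the echo handler and addresses here are illustrative):</p>

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # each accepted connection is served here, in its own thread
        data = self.request.recv(1024)
        self.request.sendall(data)

# ThreadingTCPServer mixes ThreadingMixIn into TCPServer:
# it spawns one thread per incoming connection
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

host, port = server.server_address
with socket.create_connection((host, port)) as client:
    client.sendall(b"ping")
    reply = client.recv(1024)

print(reply)  # → b'ping'
server.shutdown()
server.server_close()
```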
| 0 | 2016-08-19T13:24:41Z | [
"python",
"socketserver",
"python-sockets"
] |
Data not displaying correctly in my template for django | 39,038,761 | <p>I need help figuring out how to set up my HTML table; it is currently giving me issues. If anyone can help me figure out how to fix it and make it look like the second picture, that would be awesome. </p>
<p>*Side note I am using Django </p>
<p>So first off I have three models that I will be using in this view/template. They are called Sheet, Dimension, Inspeciton_vals, my Dimension model has a forgien key called sheet_id which links to sheet, my Inspeciton_vals model has a foreign key that links to Dimension. </p>
<p>Here is my views.py </p>
<pre><code>@login_required
def shipping(request, id):
sheet_data = Sheet.objects.get(pk=id)
work_order = sheet_data.work_order
customer_data = Customer.objects.get(id=sheet_data.customer_id)
customer_name = customer_data.customer_name
title_head = 'Shipping-%s' % sheet_data.work_order
complete_data = Sheet.objects.raw("""select s.id, s.work_order, d.target, i.reading, d.description, i.serial_number from app_sheet s left join app_dimension d on s.id = d.sheet_id
left join app_inspection_vals i on d.id = i.dimension_id""")
for c_d in complete_data:
dim_description = Dimension.objects.filter(sheet_id=c_d.id).values_list('description', flat=True).distinct()
dim_id = Dimension.objects.filter(sheet_id=c_d.id)[:1]
for d_i in dim_id:
dim_data = Inspection_vals.objects.filter(dimension_id=d_i.id)
sample_size = dim_data
return render(request, 'app/shipping.html',
{
'work_order': work_order,
'sample_size': sample_size,
'customer_name': customer_name,
'title': title_head,
'complete_data': complete_data,
'dim_description': dim_description,
})
</code></pre>
<p>Here are my models: </p>
<pre><code>class Sheet(models.Model):
objects = SheetManager()
create_date = models.DateField()
updated_date = models.DateField()
customer_name = models.CharField(max_length=255)
part_number = models.CharField(max_length=255)
part_revision = models.CharField(max_length=255)
work_order = models.CharField(max_length=255)
purchase_order = models.CharField(max_length=255)
sample_size = models.IntegerField()
sample_scheme = models.CharField(max_length=255)
overide_scheme = models.IntegerField()
template = models.IntegerField()
sample_schem_percent = models.IntegerField()
critical_dimensions = models.IntegerField()
closed = models.IntegerField()
serial_index = models.CharField(max_length=255)
drawing_number = models.CharField(max_length=255)
drawing_revision = models.CharField(max_length=255)
heat_number = models.CharField(max_length=255)
note = models.CharField(max_length=255)
valc = models.CharField(max_length=255)
class Dimension(models.Model):
description = models.CharField(max_length=255)
style = models.CharField(max_length=255)
created_at = models.DateField()
updated_at = models.DateField()
target = models.IntegerField()
upper_limit = models.IntegerField()
lower_limit = models.IntegerField()
inspection_tool = models.CharField(max_length=255)
critical = models.IntegerField()
units = models.CharField(max_length=255)
metric = models.CharField(max_length=255)
target_strings = models.CharField(max_length=255)
ref_dim_id = models.IntegerField()
nested_number = models.IntegerField()
met_upper = models.IntegerField()
met_lower = models.IntegerField()
valc = models.CharField(max_length=255)
sheet = models.ForeignKey(Sheet, on_delete=models.CASCADE, default=DEFAULT_FOREIGN_KEY)
class Inspection_vals(models.Model):
created_at = models.DateField()
updated_at = models.DateField()
reading = models.IntegerField(null=True)
reading2 = models.IntegerField(null=True)
reading3 = models.IntegerField(null=True)
reading4 = models.IntegerField(null=True)
state = models.CharField(max_length=255)
state2 = models.CharField(max_length=255)
state3 = models.CharField(max_length=255)
state4 = models.CharField(max_length=255)
approved_by = models.CharField(max_length=255)
approved_at = models.DateField(null=True, blank=True)
dimension = models.ForeignKey(Dimension, on_delete=models.CASCADE, default=DEFAULT_FOREIGN_KEY)
serial_number = models.IntegerField(default=1)
</code></pre>
<p>Finally, here is my template. What I want to do is have my header be the serial numbers. This will be based on sample_size on my Sheet model, so let's say I have 24 as a sample size: the serial numbers are shown across the header. Next, I will have my dimension description on the left side; with the sample_size being 24, I will have 2 dimensions linked to my Sheet model (this will change every time as well). Finally, I want to put the readings in the rest of the table for each Inspection_vals and Dimension, so if I have 2 dimensions with a sample_size of 24, I should have 48 Inspection_vals. With that, I want to use the correct reading for the corresponding dimension and serial number. Here is what I have so far: </p>
<pre><code><div class="container">
<div class="row">
<div>
<table >
<thead>
<tr>
<th>Serial Number</th>
{% for ss in sample_size %}
<th>{{ ss.serial_number }}</th>
{% endfor %}
</tr>
<tr>
{% for r_c in complete_data %}
<th> {{ r_c.reading }} </th>
{% endfor %}
</tr>
</thead>
<tbody>
{% for desc in dim_description.all %}
<tr>
<th> {{ desc }}</th>
</tr>
{% endfor %}
</tbody>
</table>
</div>
</div>
</div>
</code></pre>
<p>Here is what it looks like now
<a href="http://i.stack.imgur.com/gWvhj.gif" rel="nofollow"><img src="http://i.stack.imgur.com/gWvhj.gif" alt="Here is what It looks like now"></a></p>
<p>Here is what I would like it to look like </p>
<p><a href="http://i.stack.imgur.com/48IQE.gif" rel="nofollow"><img src="http://i.stack.imgur.com/48IQE.gif" alt="Here is what I would like it to look like "></a></p>
<p>Bonus here is what my data looks like </p>
<p><a href="http://i.stack.imgur.com/hsu1O.png" rel="nofollow"><img src="http://i.stack.imgur.com/hsu1O.png" alt="My Data"></a></p>
<p>Fix after the answer's suggestions; it is still not displaying how I would like it to:</p>
<pre><code><div class="container">
<div class="row">
<div>
<table >
<thead>
<tr>
<th>Serial Number</th>
{% for ss in sample_size %}
<th>{{ ss.serial_number }}</th>
{% endfor %}
</tr>
</thead>
<tbody>
{% for desc in dim_description.all %}
<tr>
<td> {{ desc }}</td>
</tr>
{% for r_c in complete_data %}
<td> {{ r_c.reading }} </td>
{% endfor %}
</tr>
{% endfor %}
</tbody>
</table>
</div>
</div>
</div>
</code></pre>
<p>Picture of what it looks like now </p>
<p><a href="http://i.stack.imgur.com/YnsRn.gif" rel="nofollow"><img src="http://i.stack.imgur.com/YnsRn.gif" alt="New pic"></a></p>
<p>Updated code with @Michael Platt suggestion</p>
<pre><code><div class="container">
<div class="row">
<div>
<table >
<thead>
<tr>
<th>Serial Number</th>
{% for ss in sample_size %}
<th>{{ ss.serial_number }}</th>
{% endfor %}
</tr>
</thead>
<tbody>
{% for desc in dim_description.all %}
<tr>
<td> {{ desc }}</td>
{% for r_c in complete_data %}
<td> {{ r_c.reading }} </td>
{% endfor %}
{% endfor %}
</tr>
</tbody>
</table>
</div>
</div>
</div>
</code></pre>
<p>@Michael Platt helped fix the HTML issue. Now I want to be able to split the readings in half, so the first 24 will go on the inner OD row and the next 24 will go on the outer OD row. </p>
<p><a href="http://i.stack.imgur.com/CIY3J.gif" rel="nofollow"><img src="http://i.stack.imgur.com/CIY3J.gif" alt="Answer pic"></a></p>
| 1 | 2016-08-19T12:09:57Z | 39,042,178 | <p>OK, so knowing that, I think this is your problem here:</p>
<pre><code><tbody>
{% for desc in dim_description.all %}
<tr>
<td> {{ desc }}</td>
{% for r_c in complete_data %}
<td> {{ r_c.reading }} </td>
{% endfor %}
{% endfor %}
</tr>
</tbody>
</code></pre>
<p>You had an extra <code>&lt;/tr&gt;</code> just before your second <code>{% endfor %}</code> in the <code><tbody></code> tags. I've changed it above, so I think it will give the design you want. If it doesn't, let me know, but it's a bit hard to test on my end just because I don't have the app up and running :-)</p>
<p>Cheers,</p>
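<p>For the follow-up about splitting the 48 readings across the inner OD and outer OD rows, it may be easiest to chunk the readings in the Python view before rendering, then loop over the chunks in the template. A hedged sketch (the helper name is made up, not part of the question's code):</p>

```python
def chunk_readings(readings, sample_size):
    """Split a flat list of readings into rows of sample_size items,
    one row per dimension (e.g. inner OD first, then outer OD)."""
    return [readings[i:i + sample_size]
            for i in range(0, len(readings), sample_size)]

readings = list(range(48))       # 48 readings: 2 dimensions x 24 samples
rows = chunk_readings(readings, 24)
print(len(rows), len(rows[0]))   # 2 24
```

<p>The template would then iterate over <code>rows</code> with an outer loop for the <code>&lt;tr&gt;</code> and an inner loop for the <code>&lt;td&gt;</code> cells.</p>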
| 1 | 2016-08-19T14:59:37Z | [
"python",
"html",
"django"
] |
How to access one2many fields values on Kanban view odoo 0.8? | 39,038,862 | <p>I need to loop over the records of an o2m field in kanban to show what I need from the other model.</p>
<p>All I need is to do this in the kanban view: </p>
<pre><code><t t-foreach="o2m_field" t-as="record">
<t t-esc="record.name"/>
</t>
</code></pre>
<p>Is it possible to do that?</p>
| -2 | 2016-08-19T12:15:58Z | 39,049,502 | <p>Yes you can. </p>
<p>This question is a duplicate of <a href="http://stackoverflow.com/questions/30805475/is-it-possible-to-show-an-one2many-field-in-a-kanban-view-in-odoo">Is it possible to show an One2many field in a kanban view in Odoo?</a> but here is a link to a module from Serpent Consulting which will be able to do what you are looking for. </p>
<p><a href="https://apps.openerp.com/apps/modules/8.0/web_one2many_kanban/" rel="nofollow">https://apps.openerp.com/apps/modules/8.0/web_one2many_kanban/</a></p>
<p>Here is a little more info. </p>
<pre><code><kanban>
<field name="one2manyFieldname"/>
<templates>
<t t-name="kanban-box">
<div class="oe_kanban_content">
<p>
<t t-foreach="record.one2manyFieldname.raw_value" t-as='o'>
<t t-esc="o.name"/><br/>
</t>
</p>
</div>
</t>
</templates>
</kanban>
</code></pre>
<p>The important part is before the template tag you have to pass through your one2many field so it is available within your template. Then you must access the record's "raw_value" and give it an alias. Like this.</p>
<pre><code> <t t-foreach="record.one2manyFieldname.raw_value" t-as='o'>
</code></pre>
<p>Then you can access the properties of the record.</p>
<p>Within the scope of the t-foreach tag you can access properties of the record, like this.</p>
<pre><code><t t-foreach="record.one2manyFieldname.raw_value" t-as='o'>
ID: <t t-esc="o.id"/><br/>
Name: <t t-esc="o.name"/><br/>
Write Date: <t t-esc="o.write_date"/><br/>
Write UID: <t t-esc="o.write_uid"/><br/>
Some Property: <t t-esc="o.some_property"/><br/>
<br/>
</t>
</code></pre>
<p>You should be able to access the properties of each record you have aliased (in this case as 'o'). Do not take the above too literally. The layout and styling of your html and css are up to you. As well as the properties of your record you choose to display.</p>
<p>Many2one values are provided as a tuple. Access the many2one properties like this.</p>
<pre><code>Many2one ID: <t t-esc="o.partner_id[0]"/>
Many2one Name: <t t-esc="o.partner_id[1]"/>
</code></pre>
| 1 | 2016-08-20T00:57:17Z | [
"python",
"xml",
"odoo-8",
"kanban"
] |
How to change font size of the annotation in seaborn jointplot? | 39,039,000 | <p>I am making a jointplot with a regression line using -</p>
<pre><code>sns.jointplot(xdata,ydata,kind='reg', order=2,ylim=[-1,1],xlim=[-1,1],annot_kws={"loc": 4})
</code></pre>
<p>I can change the location using <code>annot_kws</code>, but cannot change the font size. How do I change the font size? </p>
| 1 | 2016-08-19T12:24:50Z | 39,039,124 | <p>I was able to change the font size by changing default legend fontsize for <code>matplotlib</code> before making the plot.</p>
<pre><code>matplotlib.rc("legend", fontsize=20)
sns.jointplot(xdata,ydata,kind='reg', order=2,ylim=[-1,1],xlim=[-1,1],annot_kws={"loc": 4})
</code></pre>
| 0 | 2016-08-19T12:31:20Z | [
"python",
"annotations",
"font-size",
"seaborn"
] |
How to split sentence into words which fit given space? | 39,039,037 | <p>I have a dictionary where the key is a number and the value is a string. I would like to print those strings in lines. BUT each line has a length of, let's say, 15, so there can be 15 chars per line. If adding the current word to a line makes it longer than 15, the current word is moved to the next line. I think it can be done with the join function, but I am not quite sure how it should look. The only result I got so far is words or chars printed on every line.</p>
<pre><code>dictionary = { 1 : "hello i am Alice. i have a cat",
2 : "his name is Bob"
}
for elem in dictionary:
words = dictionary[elem].split(" ")
dictionary[elem] = "\n".join(....)
</code></pre>
<p>The result I am aiming at looks like this:</p>
<pre><code> hello i am
Alice. i
have a cat
his name
is Bob.
</code></pre>
| -1 | 2016-08-19T12:27:15Z | 39,039,255 | <p>Use the <a href="https://docs.python.org/2/library/textwrap.html#textwrap.fill" rel="nofollow"><code>textwrap.fill()</code> function</a>; this takes care of splitting your string, figuring out how many words fit per line, and rejoining the lines with newlines:</p>
<pre><code>import textwrap
for elem, string in dictionary.items():
dictionary[elem] = textwrap.fill(string)
</code></pre>
<p>Adjust the width as needed with the <a href="https://docs.python.org/2/library/textwrap.html#textwrap.TextWrapper.width" rel="nofollow"><code>width</code> keyword argument</a> (defaults to <code>70</code>). See the module documentation for what other options there are.</p>
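<p>With the question's data and the asked-for width of 15, that looks like:</p>

```python
import textwrap

dictionary = {1: "hello i am Alice. i have a cat",
              2: "his name is Bob"}

# fill() wraps greedily: it packs as many whole words as fit into each
# 15-character line and joins the lines with newlines.
for key in dictionary:
    dictionary[key] = textwrap.fill(dictionary[key], width=15)

print(dictionary[1])
# hello i am
# Alice. i have a
# cat
```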
| 3 | 2016-08-19T12:37:43Z | [
"python"
] |
I can't seem to open two windows with python, tkinter | 39,039,123 | <p>I am getting an error trying to open a window in python<br />
I am using <code>tkinter</code> so the code looks a bit like this</p>
<pre><code>from tkinter import *
Window = Tk()
Window2 = Tk()
Window.create_rectangle(0, 0, 100, 100) # border
Window2.create_rectangle(0, 0, 100, 100)
</code></pre>
| -4 | 2016-08-19T12:31:19Z | 39,039,381 | <p>You should use: import tkinter</p>
| -4 | 2016-08-19T12:44:20Z | [
"python",
"tkinter"
] |
I can't seem to open two windows with python, tkinter | 39,039,123 | <p>I am getting an error trying to open a window in python<br />
I am using <code>tkinter</code> so the code looks a bit like this</p>
<pre><code>from tkinter import *
Window = Tk()
Window2 = Tk()
Window.create_rectangle(0, 0, 100, 100) # border
Window2.create_rectangle(0, 0, 100, 100)
</code></pre>
| -4 | 2016-08-19T12:31:19Z | 39,039,650 | <p>You have some basic typos / syntax errors in your code. But anyway...</p>
<p>A Tkinter window doesn't have a <code>create_rectangle</code> method. However, the Canvas widget <em>does</em> have that method; you can use it like this.</p>
<pre><code>import tkinter as tk
window = tk.Tk()
canvas = tk.Canvas(window, width=100, height=100)
canvas.pack()
canvas.create_rectangle(1, 1, 99, 99, outline="blue", fill="white")
tk.mainloop()
</code></pre>
| 2 | 2016-08-19T12:56:57Z | [
"python",
"tkinter"
] |
I can't seem to open two windows with python, tkinter | 39,039,123 | <p>I am getting an error trying to open a window in python<br />
I am using <code>tkinter</code> so the code looks a bit like this</p>
<pre><code>from tkinter import *
Window = Tk()
Window2 = Tk()
Window.create_rectangle(0, 0, 100, 100) # border
Window2.create_rectangle(0, 0, 100, 100)
</code></pre>
| -4 | 2016-08-19T12:31:19Z | 39,044,139 | <p>Although the question already has an accepted answer, it doesn't actually answer the question of creating another window.</p>
<p>You should always avoid having multiple instances of <code>Tk()</code>; if you need another window, the <code>Toplevel</code> widget is what you should look towards.</p>
<pre><code>import tkinter as tk
root = tk.Tk()
tk.Label(root, text = "This is the main window").pack()
sub_window = tk.Toplevel(root)
tk.Label(sub_window, text = "This is the other window").pack()
root.mainloop()
</code></pre>
| 0 | 2016-08-19T16:48:00Z | [
"python",
"tkinter"
] |
How to pass context in odoo9? | 39,039,222 | <p>I want to pass the partner_id that is selected in the sale order to the pricelist via the context. How do I do that?</p>
<pre><code>@api.multi
@api.onchange('product_id')
def product_id_change(self):
if not self.product_id:
return {'domain': {'product_uom': []}}
vals = {}
domain = {'product_uom': [('category_id', '=', self.product_id.uom_id.category_id.id)]}
if not self.product_uom or (self.product_id.uom_id.category_id.id != self.product_uom.category_id.id):
vals['product_uom'] = self.product_id.uom_id
product = self.product_id.with_context(
lang=self.order_id.partner_id.lang,
partner=self.order_id.partner_id.id,
quantity=self.product_uom_qty,
date=self.order_id.date_order,
pricelist=self.order_id.pricelist_id.id,
uom=self.product_uom.id
)
name = product.name_get()[0][1]
if product.description_sale:
name += '\n' + product.description_sale
vals['name'] = name
self._compute_tax_id()
if self.order_id.pricelist_id and self.order_id.partner_id:
vals['price_unit'] = self.env['account.tax']._fix_tax_included_price(product.price, product.taxes_id, self.tax_id)
self.update(vals)
return {'domain': domain}
@api.onchange('product_uom', 'product_uom_qty')
def product_uom_change(self):
if not self.product_uom:
self.price_unit = 0.0
return
if self.order_id.pricelist_id and self.order_id.partner_id:
product = self.product_id.with_context(
lang=self.order_id.partner_id.lang,
partner=self.order_id.partner_id.id,
quantity=self.product_uom_qty,
date_order=self.order_id.date_order,
pricelist=self.order_id.pricelist_id.id,
uom=self.product_uom.id,
fiscal_position=self.env.context.get('fiscal_position')
)
self.price_unit = self.env['account.tax']._fix_tax_included_price(product.price, product.taxes_id, self.tax_id)
</code></pre>
<p>Here: </p>
<pre><code>vals['price_unit'] = self.env['account.tax']._fix_tax_included_price(product.price, product.taxes_id, self.tax_id)
</code></pre>
<p>I want to pass the partner_id in the context so I can check whether a specific Boolean is True or False and do the computation accordingly.</p>
<p>When I pass the context, it says this accepts 4 arguments and I have given 5.
Now where do I change this so it takes 5?</p>
| 1 | 2016-08-19T12:36:16Z | 39,039,347 | <p>Since the new API, the context is encapsulated in an environment object, which can usually be accessed as <code>self.env</code>. To manipulate the context, just use the method <code>with_context</code>. A simple example:</p>
<pre class="lang-py prettyprint-override"><code>vals['price_unit'] = self.env['account.tax']\
.with_context(partner_id=self.order_id.partner_id.id)\
._fix_tax_included_price(
product.price, product.taxes_id, self.tax_id)
</code></pre>
<p>For further information, look into the <a href="https://www.odoo.com/documentation/9.0/reference/orm.html#environment" rel="nofollow">Odoo documentation</a>.</p>
| 1 | 2016-08-19T12:42:45Z | [
"python",
"openerp",
"odoo-9"
] |
Get path in svg using Selenium (Python) | 39,039,232 | <p>I have the following code which finds all g's in an SVG, but how can I get the path elements inside those g's and their path values?</p>
<p>I am testing with this website: <a href="http://www.totalcorner.com/match/live_stats/57565755" rel="nofollow">http://www.totalcorner.com/match/live_stats/57565755</a></p>
<p>related code:</p>
<pre><code>nodes = self.driver.find_elements_by_xpath("//div[@id='all_container']/*[@id='highcharts-0']/*[name()='svg']/*[name()='g']")
</code></pre>
<p>I have already tried this:</p>
<pre><code>nodes = self.driver.find_elements_by_xpath("//div[@id='all_container']/*[@id='highcharts-0']/*[name()='svg']/*[name()='g']/*[name()='path']")
</code></pre>
<p>yet what I get is something like this:</p>
<pre><code>[<selenium.webdriver.remote.webelement.WebElement (session="fb86fb35-d2fa-974a-af32-a15db1b7459d", element="{c1dad34f-764d-0249-9302-215dd9ae9cd8}")>, <selenium.webdriver.remote.webelement.WebElement (session="fb86fb35-d2fa-974a-af32-a15db1b7459d", element="{a53816f4-9952-ab49-87ac-5d79538a855d}")>, ...]
</code></pre>
<p>How could I use this to find the path value? Thanks a lot!</p>
<p>My updated solution:</p>
<p>Thanks for everyone's effort. After Robert Longson's updated answer, I think the following is the better solution:</p>
<pre><code>nodes = driver.find_elements_by_xpath("//div[@id='all_container']/*[@id='highcharts-0']/*[name()='svg']/*[name()='g']/*[name()='path']")
for node in nodes:
print(node.get_attribute("d"))
</code></pre>
<p>Since I cannot differentiate paths when using driver.find_elements_by_tag_name, I think the answer above is better.</p>
| 3 | 2016-08-19T12:36:49Z | 39,039,578 | <p>You are getting a list, so try:</p>
<pre><code>for node in nodes:
print(node.text)
</code></pre>
<p>If you are looking for the value of an attribute, then use the following (<code>href</code> here as an example):</p>
<pre><code>print(node.get_attribute('href'))
</code></pre>
| 2 | 2016-08-19T12:53:48Z | [
"python",
"selenium",
"svg"
] |
Get path in svg using Selenium (Python) | 39,039,232 | <p>I have the following code which finds all g's in an SVG, but how can I get the path elements inside those g's and their path values?</p>
<p>I am testing with this website: <a href="http://www.totalcorner.com/match/live_stats/57565755" rel="nofollow">http://www.totalcorner.com/match/live_stats/57565755</a></p>
<p>related code:</p>
<pre><code>nodes = self.driver.find_elements_by_xpath("//div[@id='all_container']/*[@id='highcharts-0']/*[name()='svg']/*[name()='g']")
</code></pre>
<p>I have already tried this:</p>
<pre><code>nodes = self.driver.find_elements_by_xpath("//div[@id='all_container']/*[@id='highcharts-0']/*[name()='svg']/*[name()='g']/*[name()='path']")
</code></pre>
<p>yet what I get is something like this:</p>
<pre><code>[<selenium.webdriver.remote.webelement.WebElement (session="fb86fb35-d2fa-974a-af32-a15db1b7459d", element="{c1dad34f-764d-0249-9302-215dd9ae9cd8}")>, <selenium.webdriver.remote.webelement.WebElement (session="fb86fb35-d2fa-974a-af32-a15db1b7459d", element="{a53816f4-9952-ab49-87ac-5d79538a855d}")>, ...]
</code></pre>
<p>How could I use this to find the path value? Thanks a lot!</p>
<p>My updated solution:</p>
<p>Thanks for everyone's effort. After Robert Longson's updated answer, I think the following is the better solution:</p>
<pre><code>nodes = driver.find_elements_by_xpath("//div[@id='all_container']/*[@id='highcharts-0']/*[name()='svg']/*[name()='g']/*[name()='path']")
for node in nodes:
print(node.get_attribute("d"))
</code></pre>
<p>Since I cannot differentiate paths when using driver.find_elements_by_tag_name, I think the answer above is better.</p>
| 3 | 2016-08-19T12:36:49Z | 39,039,897 | <p><a href="https://seleniumhq.github.io/selenium/docs/api/py/webdriver_remote/selenium.webdriver.remote.webelement.html" rel="nofollow">find_elements_by_tag_name</a> can be used to find path children.</p>
<p>Once you have those, get_attribute("d") will get you the path value.</p>
<p>E.g.</p>
<pre><code>for node in nodes:
paths = node.find_elements_by_tag_name("path")
for path in paths:
value = path.get_attribute("d")
</code></pre>
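<p>If the SVG markup itself is available (e.g. from the page source), the <code>d</code> attributes can also be pulled out without a browser using the standard library. A sketch with a made-up inline SVG, not part of the Selenium answer above:</p>

```python
import xml.etree.ElementTree as ET

svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <g><path d="M 0 0 L 10 10"/><path d="M 5 5 L 0 10"/></g>
</svg>"""

root = ET.fromstring(svg)
ns = {"svg": "http://www.w3.org/2000/svg"}
# .//svg:g/svg:path mirrors the g-then-path XPath used with Selenium above
d_values = [p.get("d") for p in root.findall(".//svg:g/svg:path", ns)]
print(d_values)  # ['M 0 0 L 10 10', 'M 5 5 L 0 10']
```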
| 1 | 2016-08-19T13:08:44Z | [
"python",
"selenium",
"svg"
] |
Appending a pair of numpy arrays in a specific format to a 3rd array | 39,039,429 | <p>I'm extracting data points in pairs from a data set. A pair consists of 2 numpy arrays, each shaped <code>(3, 30, 30)</code>. Let's call them <code>X1</code> and <code>Y1</code>. The next pair will then be <code>X2</code> and <code>Y2</code>, followed by <code>X3</code> and <code>Y3</code>, etc. I don't know how many pairs there will be in total, so I have to use something like <code>np.append</code>. </p>
<p>So I want something like:</p>
<pre><code>>>X1, Y1 = extract_X_and_Y_from_data(data)
>>pair1 = np.array([X1, Y1])
>>pair1.shape
(2, 3, 30, 30)
>>list_of_pairs.some_append_function(pair1)
>>list_of_pairs.shape
(1, 2, 3, 30, 30)
>>X2, Y2 = extract_X_and_Y_from_data(data)
>>pair2 = np.array([X2, Y2])
>>list_of_pairs.some_append_function(pair2)
>>list_of_pairs.shape
(2, 2, 3, 30, 30)
</code></pre>
<p>...</p>
<pre><code>>>X50, Y50 = extract_X_and_Y_from_data(data)
>>pair50 = np.array([X50, Y50])
>>list_of_pairs.some_append_function(pair50)
>>list_of_pairs.shape
(50, 2, 3, 30, 30)
</code></pre>
<p>All in all, I need the final list_of_pairs to be a numpy array of shape <code>(no_of_pairs, 2, 3, 30, 30)</code>. <code>np.append</code> keeps giving me <code>(no_of_pairs, 2)</code>, and I'm not too sure why.</p>
<p>Note: <code>np.concatenate</code>, <code>vstack</code> or <code>hstack</code> are tricky to implement because they can't seem to execute the first instance, i.e. appending the first pair to an initially empty <code>list_of_pairs</code>.</p>
<p>Thanks!</p>
| 0 | 2016-08-19T12:46:41Z | 39,043,696 | <p>With list append:</p>
<pre><code>list_of_pairs = []  # a real Python list
for data in database:
    X1, Y1 = extract_X_and_Y_from_data(data)
    pair1 = np.array([X1, Y1])
    list_of_pairs.append(pair1)
array5d = np.array(list_of_pairs)
>> array5d.shape
(50, 2, 3, 30, 30)
</code></pre>
<p><code>appending</code> to a list is relatively fast since it just adds a pointer to the list. Your <code>pair</code> array remains in memory.</p>
<p><code>np.array(alist)</code> builds a new array, joining the components on a new dimension (same as in <code>np.array([[1,2,3],[4,5,6]])</code>)</p>
<p>There is a new function <code>np.stack</code> that gives you more control over which dimension is new. All the <code>stack</code> functions end up calling <code>np.concatenate</code>. That includes the misnamed (and oft misused) <code>np.append</code>. <code>concatenate</code> requires matching dimensions (in the joining direction). The various <code>stack</code>s just adjust the overall number of dimensions.</p>
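<p>A runnable miniature of the whole pattern, with zero/one arrays standing in for the extracted data:</p>

```python
import numpy as np

pairs = []  # plain Python list; cheap to append to
for _ in range(50):
    X = np.zeros((3, 30, 30))        # stand-ins for the extracted arrays
    Y = np.ones((3, 30, 30))
    pairs.append(np.array([X, Y]))   # each pair has shape (2, 3, 30, 30)

stacked = np.array(pairs)   # one conversion at the end
print(stacked.shape)        # (50, 2, 3, 30, 30)

# np.stack joins along a new leading axis and gives the same result here
assert np.stack(pairs).shape == (50, 2, 3, 30, 30)
```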
| 1 | 2016-08-19T16:21:29Z | [
"python",
"arrays",
"numpy"
] |
Is there a good way to visualize large number of subplots (> 500)? | 39,039,435 | <p>I am still working on my New York Subway data. I cleaned and wrangled the data in such a fashion that I now have 'Average Entries' and 'Average Exits' per Station per hour (ranging from 0 to 23) separated for weekend and weekday (category variable with two possible values: weekend/weekday).</p>
<p>What I was trying to do is to create a plot with each station being a row, each row having two columns (first for weekday, second for weekend). I would like to plot 'Average Entries' and 'Average Exits' per hour to gain some information about the stations. There are two things of interest here: firstly, the sheer numbers, to indicate how busy a station is; secondly, the ratio between entries and exits for a given hour, to indicate if the station is a living area (loads of entries in the morning, loads of exits in the evening) or more of a working area (loads of exits in the morning, entries peaking around 4, 6 and 8 pm or so). The only problem: there are roughly 550 stations.</p>
<p>I tried plotting it with seaborn's FacetGrid, which can't handle more than a few stations (10 or so) without running into memory issues.</p>
<p>So I was wondering if anybody had a good idea to accomplish what I am trying to do.</p>
<p>Please find attached a notebook (second to last cell shows my attempt of visualizing the data, i.e. the plotting for 4 stations). That clearly wouldn't work for 500+ stations, so maybe 5 stations in a row after all?</p>
<p>The very last cell contains the data for Station R001 as requested in a comment..</p>
<p><a href="https://github.com/FBosler/Udacity/blob/master/Example.ipynb" rel="nofollow">https://github.com/FBosler/Udacity/blob/master/Example.ipynb</a></p>
<p>Any input much appreciated!
Fabian</p>
| 1 | 2016-08-19T12:47:08Z | 39,040,175 | <p>You're going to have problems displaying them all on a screen no matter what you do, unless you have a whole wall of monitors. However, to get around the memory constraint, you could rasterize them and save them to image files (I would suggest .png for its compressibility with images of few distinct colors).</p>
<p>What you want for that is <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.savefig" rel="nofollow"><code>pyplot.savefig()</code></a></p>
<p><a href="http://stackoverflow.com/a/9890599/3220135">Here's</a> an answer to another question on how to do that, with some tips and tricks </p>
| 1 | 2016-08-19T13:22:03Z | [
"python",
"pandas",
"matplotlib",
"visualization",
"subplot"
] |
Is there a good way to visualize large number of subplots (> 500)? | 39,039,435 | <p>I am still working on my New York Subway data. I cleaned and wrangled the data in such a fashion that I now have 'Average Entries' and 'Average Exits' per Station per hour (ranging from 0 to 23) separated for weekend and weekday (category variable with two possible values: weekend/weekday).</p>
<p>What I was trying to do is to create a plot with each station being a row, each row having two columns (first for weekday, second for weekend). I would like to plot 'Average Entries' and 'Average Exits' per hour to gain some information about the stations. There are two things of interest here: firstly, the sheer numbers, to indicate how busy a station is; secondly, the ratio between entries and exits for a given hour, to indicate if the station is a living area (loads of entries in the morning, loads of exits in the evening) or more of a working area (loads of exits in the morning, entries peaking around 4, 6 and 8 pm or so). The only problem: there are roughly 550 stations.</p>
<p>I tried plotting it with seaborn's FacetGrid, which can't handle more than a few stations (10 or so) without running into memory issues.</p>
<p>So I was wondering if anybody had a good idea to accomplish what I am trying to do.</p>
<p>Please find attached a notebook (second to last cell shows my attempt of visualizing the data, i.e. the plotting for 4 stations). That clearly wouldn't work for 500+ stations, so maybe 5 stations in a row after all?</p>
<p>The very last cell contains the data for Station R001 as requested in a comment..</p>
<p><a href="https://github.com/FBosler/Udacity/blob/master/Example.ipynb" rel="nofollow">https://github.com/FBosler/Udacity/blob/master/Example.ipynb</a></p>
<p>Any input much appreciated!
Fabian</p>
| 1 | 2016-08-19T12:47:08Z | 39,040,395 | <p>Rather than making 550+ subplots, see if you can make two big numpy arrays and then use 2 <code>imshow</code> subplots, one for weekdays and one for weekends.</p>
<p>For the y-values, first find the min (0) and max (10,000?) of your average values, scale these to fit each fake row of, for example, 10px, then offset each row in your data by 10px * the row number.</p>
<p>Since you want line plots for each of your 24 data points, you'll have to do linear interpolation between your data points in increments of, again for example, 10px, so that the final numpy arrays will be 240 x 5500 x 2.</p>
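<p>The scale-and-offset idea above can be sketched with <code>np.interp</code> (all sizes are illustrative, and the fake station data is made up):</p>

```python
import numpy as np

hours = np.arange(24)                       # one data point per hour
values = np.linspace(0.0, 10000.0, 24)      # fake hourly averages for one station

px_per_hour = 10
fine_x = np.linspace(0, 23, 24 * px_per_hour)
fine_y = np.interp(fine_x, hours, values)   # linear interpolation between points
print(fine_y.shape)                         # (240,)

row_height = 10                             # each station gets a 10px band
row_number = 3                              # this station's row index
scaled = fine_y / values.max() * row_height
offset = scaled + row_number * row_height   # shift the trace into its own band
```

<p>Repeating this per station fills one tall array per weekday/weekend panel.</p>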
| 2 | 2016-08-19T13:32:15Z | [
"python",
"pandas",
"matplotlib",
"visualization",
"subplot"
] |
Is there a good way to visualize large number of subplots (> 500)? | 39,039,435 | <p>I am still working on my New York Subway data. I cleaned and wrangled the data in such a fashion that I now have 'Average Entries' and 'Average Exits' per Station per hour (ranging from 0 to 23) separated for weekend and weekday (category variable with two possible values: weekend/weekday).</p>
<p>What I was trying to do is to create a plot with each station being a row, each row having two columns (first for weekday, second for weekend). I would like to plot 'Average Entries' and 'Average Exits' per hour to gain some information about the stations. There are two things of interest here: firstly, the sheer numbers, to indicate how busy a station is; secondly, the ratio between entries and exits for a given hour, to indicate if the station is a living area (loads of entries in the morning, loads of exits in the evening) or more of a working area (loads of exits in the morning, entries peaking around 4, 6 and 8 pm or so). The only problem: there are roughly 550 stations.</p>
<p>I tried plotting it with seaborn's FacetGrid, which can't handle more than a few stations (10 or so) without running into memory issues.</p>
<p>So I was wondering if anybody had a good idea to accomplish what I am trying to do.</p>
<p>Please find attached a notebook (second to last cell shows my attempt of visualizing the data, i.e. the plotting for 4 stations). That clearly wouldn't work for 500+ stations, so maybe 5 stations in a row after all?</p>
<p>The very last cell contains the data for Station R001 as requested in a comment..</p>
<p><a href="https://github.com/FBosler/Udacity/blob/master/Example.ipynb" rel="nofollow">https://github.com/FBosler/Udacity/blob/master/Example.ipynb</a></p>
<p>Any input much appreciated!
Fabian</p>
| 1 | 2016-08-19T12:47:08Z | 39,040,705 | <p>A possible way you could do it is to use the ratio of entries to exits per station. Each day/hour could form a column of an image and each row would be a station. As an example:</p>
<pre class="lang-python prettyprint-override"><code>from matplotlib import pyplot as plt
import random
import numpy as np
all_stations = []
for i in range(550):
entries = [float(random.randint(0, 50)) for i in range(7*24)] # Data point for each hour over a week
exits = [float(random.randint(0, 50)) for i in range(7*24)]
    weekend_entries = entries[:2*24]  # a weekend is 2 days x 24 hourly points
    weekend_exits = exits[:2*24]
    day_entries = entries[2*24:]
    day_exits = exits[2*24:]
weekend_ratio = [np.array(en) / np.array(ex) for en, ex in zip(weekend_entries, weekend_exits)]
day_ratio = [np.array(en) / np.array(ex) for en, ex in zip(day_entries, day_exits)]
whole_week = weekend_ratio + day_ratio
all_stations.append(whole_week)
plt.figure()
plt.imshow(all_stations, aspect='auto', interpolation="nearest")
plt.xlabel("Hours")
plt.ylabel("Station number")
plt.title("Entry/exit ratio per station")
plt.colorbar(label="Entry/exit ratio")
# Add some vertical lines to indicate days
for j in range(1, 7):
plt.plot([j*24]*2, [0, 550], color="black")
plt.xlim(0, 7*24)
plt.ylim(0, 550)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/kl3Yq.png" rel="nofollow"><img src="http://i.stack.imgur.com/kl3Yq.png" alt="enter image description here"></a></p>
<p>If you would like to show the actual numbers involved and not the ratio, I would consider splitting the data into two: one image each for the entries and the exits data sets. The intensity of each pixel could then be used to convey the numbers rather than the ratio.</p>
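<p>That two-image idea could be sketched like this, again with random placeholder data standing in for the real counts (a hedged sketch, not taken from the original post):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
from matplotlib import pyplot as plt

n_stations, n_hours = 550, 7 * 24
# Random placeholder counts standing in for the real entry/exit data
entries = np.random.randint(0, 51, size=(n_stations, n_hours))
exits = np.random.randint(0, 51, size=(n_stations, n_hours))

fig, axes = plt.subplots(1, 2, figsize=(12, 6), sharey=True)
for ax, data, title in zip(axes, (entries, exits), ("Entries", "Exits")):
    im = ax.imshow(data, aspect="auto", interpolation="nearest")
    ax.set_title(title)
    ax.set_xlabel("Hours")
    fig.colorbar(im, ax=ax, label="Count")
axes[0].set_ylabel("Station number")
fig.savefig("station_counts.png")
plt.close(fig)
```

Brighter pixels then directly indicate busier hours, and comparing the two panels side by side shows where entries and exits diverge.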
| 1 | 2016-08-19T13:48:35Z | [
"python",
"pandas",
"matplotlib",
"visualization",
"subplot"
] |
How to merge two images using the second as a transparent binary mask in Python? | 39,039,474 | <p>For example, I have an image of a cat and a second image with a mask of the cat. I want to add a transparent red mask to the cat's pixels in the first image.
Both images are NumPy arrays. How can I do this?</p>
| 0 | 2016-08-19T12:49:09Z | 39,040,178 | <p>IIUC, you could do something like this:</p>
<pre><code>img[:,:,2] = np.where(mask,255,img[:,:,2])
</code></pre>
<p>Results -</p>
<p><a href="http://i.stack.imgur.com/Dn3of.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Dn3of.jpg" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/MbSnR.png" rel="nofollow"><img src="http://i.stack.imgur.com/MbSnR.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/mu5TG.png" rel="nofollow"><img src="http://i.stack.imgur.com/mu5TG.png" alt="enter image description here"></a></p>
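<p>If a truly transparent (alpha-blended) red overlay is wanted rather than a solid one, here is a possible sketch with synthetic stand-in data (the image, mask, and alpha value are all made up for the demo):</p>

```python
import numpy as np

# Synthetic stand-ins for the real data: a grey BGR image and a boolean cat mask
img = np.full((100, 100, 3), 128, dtype=np.uint8)   # BGR ordering (OpenCV convention)
mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 30:70] = True

alpha = 0.5                     # 0 = invisible overlay, 1 = solid red
red = np.zeros_like(img)
red[:, :, 2] = 255              # channel 2 is red in BGR ordering

blended = (alpha * red + (1 - alpha) * img).astype(np.uint8)
out = img.copy()
out[mask] = blended[mask]       # replace only the masked pixels
```

Unmasked pixels keep their original values, while masked pixels get a 50/50 mix of red and the underlying image.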
| 1 | 2016-08-19T13:22:16Z | [
"python",
"opencv",
"numpy",
"image-processing"
] |
How to deal with iteration/looping in Python behave or BDD Scenarios in general? | 39,039,706 | <p>I am trying to use Python's <a href="https://pythonhosted.org/behave/index.html" rel="nofollow">behave library</a> for writing some BDD/Gherkin-style tests on a bunch of delimited text files.</p>
<p>A typical scenario would look like this:</p>
<pre><code>Scenario: Check delivery files for illegal values
Given a file system path to the delivery folder
When I open each file and read its values
Then all values in the "foo" column are smaller than 1
And all values in the "bar" column are larger than 2
</code></pre>
<p>Since there are lots of files and each file contains lots of rows, it is not possible to hardcode all of them into Scenario Outlines. Moreover, I would like to avoid reading whole files into memory at once, and rather use generators to iterate over the rows one by one.</p>
<p>I tried the following. However, this is very inefficient on large datasets with many conditions, since every line is read again for each <code>then</code> step. Is there any way to pass a single row through multiple <code>then</code> steps and start again at the first <code>then</code> step for the next row?</p>
<p>Or is BDD/Gherkin not suited for this kind of testing? What could be an alternative?</p>
<pre><code>import csv
import os

@given('a file system path to the delivery folder')
def step(context):
    context.path = '/path/to/delivery/files'

@when('I open each file and read its values')
def step(context):
    file_list = os.listdir(context.path)

    def row_generator():
        for name in file_list:
            # DictReader so that rows can be accessed by column name
            with open(os.path.join(context.path, name), 'rb') as csvfile:
                reader = csv.DictReader(csvfile)
                for row in reader:
                    yield row

    # Each call to context.row_generator() creates a fresh generator,
    # so every 'then' step re-reads all files from the start
    context.row_generator = row_generator

@then('all values in the "foo" column are smaller than 1')
def step(context):
    for row in context.row_generator():
        assert float(row['foo']) < 1

@then('all values in the "bar" column are larger than 2')
def step(context):
    for row in context.row_generator():
        assert float(row['bar']) > 2
</code></pre>
| 1 | 2016-08-19T12:59:25Z | 39,299,346 | <p>Dirk, you could consider writing a script to generate the scenario outline (given the CSV file). That way you can rely on standard behave/Gherkin features. Keep it as simple as possible, as it introduces another place where things can go wrong. It does, however, simplify your step file, so you can focus on the actual functional test rather than the mechanics of reading the CSV file. Depending on how large your datasets are, you might also run into memory limitations, as I presume the table is kept in memory only. Not a great solution, but 'an alternative'.</p>
| 0 | 2016-09-02T19:15:26Z | [
"python",
"loops",
"iteration",
"gherkin",
"python-behave"
] |
How to pre-fill a ModelForm field in Django | 39,039,805 | <p>I've been searching for a while for information on how to do this. Here is my situation: I have a <code>ContactForm(forms.ModelForm)</code> that excludes the ID field, which I want to be auto-generated based on the last entry in the Contact table.</p>
<p>But I'm not finding any examples of how to do this. What I'm looking for is a way to enter this information without it being visible to the user, while automatically pre-filling the right ID. I can't use the pk, because it has a foreign key to another table.</p>
| 1 | 2016-08-19T13:04:18Z | 39,039,891 | <p>If you define a field of a model as an <a href="https://docs.djangoproject.com/en/1.10/topics/db/models/#automatic-primary-key-fields" rel="nofollow"><code>AutoField</code></a>, it will do exactly this.</p>
<p>EDIT:</p>
<p>If you want to define a custom auto-incrementing ID, you can do it like this:</p>
<pre><code>from django.db import models

class MyModel(models.Model):
    _id_counter = 0
    custom_id = models.IntegerField()

    @classmethod
    def create(cls):
        my_model = cls(custom_id=MyModel._id_counter)
        MyModel._id_counter += 1
        return my_model

model1 = MyModel.create()
</code></pre>
<p>Note that a class-level counter like this resets on every process restart and is not safe under concurrent requests, so for most cases the implicit <code>AutoField</code> primary key is the better choice.</p>
"python",
"django",
"modelform"
] |
python install module manually raspberry pi | 39,040,083 | <p>I have to install a module insteon.py manually.</p>
<p>So I copied the module into /usr/lib/python2.7/dist-packages and /usr/local/lib/python2.7/dist-packages.</p>
<p>When I try to import the module insteon.py, it gives me this error:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 15, in <module>
from insteon import Insteon
File "/usr/lib/python2.7/dist-packages/insteon.py", line 25, in <module>
config = configuration.loadConfig()
AttributeError: Configuration instance has no attribute 'loadConfig'
</code></pre>
| 0 | 2016-08-19T13:18:05Z | 39,043,245 | <p>If you would like, you can download the script here:</p>
<p><a href="https://github.com/bynds/makevoicedemo" rel="nofollow">https://github.com/bynds/makevoicedemo</a></p>
| 0 | 2016-08-19T15:53:50Z | [
"python",
"module",
"raspberry-pi"
] |
Unable to export DataFrame values into SQL table | 39,040,146 | <p>I am trying to write Python code that performs a regression on data read from a SQL table and then exports some parameters of the cleaned data back to a new SQL table. My code:</p>
<pre><code>import pandas as pd
import numpy as np
import pandas.io.sql as psql
import pypyodbc
from pandas.stats.api import ols
import statsmodels.api as sm
#from statsmodels.formula.api import ols
conn = pypyodbc.connect("DRIVER={SQL Server};SERVER=Server Add;DATABASE=Database;UID=UID;PWD=PWD")
Data1 = pd.read_sql('SELECT net_rate, cohort FROM an.dbo.SL_Stop', conn)
print Data1
dummies = pd.get_dummies(Data1['cohort'],prefix ='Cohort') #Creating Dummies
Data_With_Dummies = Data1[['net_rate']].join(dummies) #Merging Dummies
</code></pre>
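<p>As a side note, the dummy-creation step above behaves like this on a toy frame (the values are made up for the demo):</p>

```python
import pandas as pd

# Tiny stand-in for the real query result
toy = pd.DataFrame({"net_rate": [1.0, 2.0, 3.0], "cohort": [1, 2, 2]})

dummies = pd.get_dummies(toy["cohort"], prefix="Cohort")  # one column per cohort value
merged = toy[["net_rate"]].join(dummies)                  # same join as in the question
print(merged)
```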
<p>.......... # Data cleaning process ..........</p>
<p>Then I perform regression on the cleaned data:</p>
<pre><code>mod = sm.OLS(endog, exog)
results = mod.fit()
print results.summary()
print "\n"
print ('Paramters:', results.params )
Data_Params=pd.DataFrame(results.params, columns =['values'])
Data_Params = Data_Params.T
Data_Params = Data_Params.rename(columns={'const':'const_Coef',
'Cohort_2' : 'Cohort_2_Coef',
'Cohort_3':'Cohort_3_Coef'})
Data_Pvalues = pd.DataFrame(results.pvalues, columns = ['values'])
Data_Pvalues= Data_Pvalues.T
Data_Pvalues = Data_Pvalues.rename(columns={'const':'const_Pvalue',
'Cohort_2' : 'Cohort_2_Pvalue',
'Cohort_3':'Cohort_3_Pvalue'})
Data_Concatenate_Coeff_Pvalues = pd.concat([Data_Params,Data_Pvalues],axis = 1)
pd.DataFrame(Data_Concatenate_Coeff_Pvalues,index = ["Coeefi","Pvalue"])
const_Coef = Data_Params['const_Coef']
Cohort_2_Coef = Data_Params['Cohort_2_Coef']
Cohort_3_Coef = Data_Params['Cohort_3_Coef']
const_Pvalue = Data_Pvalues['const_Pvalue']
Cohort_2_Pvalue = Data_Pvalues['Cohort_2_Pvalue']
Cohort_3_Pvalue = Data_Pvalues['Cohort_3_Pvalue']
SQL_INSERT_QUERY = """
INSERT INTO _nrr_cohorts (
[report_month],
[beta_cohort_1],
[p_value_cohort_1],
[beta_cohort_2],
[p_value_cohort_2],
[beta_cohort_3],
[p_value_cohort_3],
[updated_datetime]
)
VALUES (
1,Data_Params['const_Coef'],Data_Params['Cohort_2_Coef'],Data_Params['Cohort_3_Coef'],
Data_Pvalues['const_Pvalue'],Data_Pvalues['Cohort_2_Pvalue'],Data_Pvalues['Cohort_3_Pvalue',3
)
"""
db = conn.cursor()
db.execute(SQL_INSERT_QUERY).commit()
</code></pre>
<p>I want to export the regression parameters back into a new SQL table, but the INSERT INTO code only takes hard-coded values. Is there a way to pass the DataFrame, or these parameters, into the new SQL table?</p> | 0 | 2016-08-19T13:20:17Z | 39,042,782 | <pre><code>import datetime

const_Coef = Data_Params.iat[0, 0]
Cohort_2_Coef = Data_Params.iat[0, 1]
Cohort_3_Coef = Data_Params.iat[0, 2]
const_Pvalue = Data_Pvalues.iat[0, 0]
Cohort_2_Pvalue = Data_Pvalues.iat[0, 1]
Cohort_3_Pvalue = Data_Pvalues.iat[0, 2]
Current_Date_Time = datetime.datetime.now()  # Extracting the current time

db = conn.cursor()
db.execute("INSERT INTO _nrr_cohorts(beta_cohort_1, beta_cohort_2, beta_cohort_3, "
           "p_value_cohort_1, p_value_cohort_2, p_value_cohort_3, updated_datetime, report_month) "
           "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
           (const_Coef, Cohort_2_Coef, Cohort_3_Coef, const_Pvalue, Cohort_2_Pvalue,
            Cohort_3_Pvalue, Current_Date_Time, Report_Month)).commit()
</code></pre>
<p>Extracting the values from each cell of the DataFrame with <code>iat</code> helped, along with using <code>?</code> placeholders to pass the variables.</p>
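<p>An alternative worth noting (not in the original answer): pandas' <code>DataFrame.to_sql</code> can write a whole frame to a table in one call. A minimal sketch against an in-memory SQLite database (the column names are invented for the demo; for SQL Server you would pass a SQLAlchemy engine rather than a raw connection, since <code>to_sql</code> only supports <code>sqlite3</code> DBAPI connections directly):</p>

```python
import sqlite3
import pandas as pd

# Hypothetical one-row frame of regression outputs
params = pd.DataFrame([{
    "beta_cohort_1": 0.12,
    "beta_cohort_2": -0.05,
    "beta_cohort_3": 0.33,
}])

conn = sqlite3.connect(":memory:")  # stand-in for the real database connection
params.to_sql("_nrr_cohorts", conn, index=False, if_exists="append")

out = pd.read_sql("SELECT * FROM _nrr_cohorts", conn)
print(out)
```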
| 0 | 2016-08-19T15:28:41Z | [
"python",
"sql",
"sql-server",
"pandas"
] |
How to implement the functionality of the lockfile command | 39,040,174 | <p>I'm trying to implement a filesystem-based lock for a piece of software that runs on a cluster. The underlying shared filesystem is implemented using <a href="https://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device" rel="nofollow">DRBD</a>, so synchronicity can be considered guaranteed. My current implementation looks like the following:</p>
<pre><code># check the lock file
if os.path.isfile(lockfile):
    if time.time() - os.path.getmtime(lockfile) > 3600:
        logInfo('lock is older than 3600s, removing it and going on collecting result.')
        os.remove(lockfile)
    else:
        logInfo('Previous action is still on-going, please wait until it\'s finished or timeout.')
        sys.exit(1)
# create the lock file
open(lockfile, 'w').close()
</code></pre>
<p>Clearly, situations may occur in which multiple instances of the script, running on different machines in the cluster, simultaneously consider the system unlocked, create the lock file, and perform operations that would require mutual exclusion.</p>
<p>So to sum it up, I need a locking facility that is filesystem-based, in which the lock check and creation together form an atomic operation.</p>
<p>The same functionality can be achieved in a shell script using the <a href="http://linux.die.net/man/1/lockfile" rel="nofollow">lockfile</a> command.</p>
| 2 | 2016-08-19T13:21:59Z | 39,097,188 | <p>A solution can be achieved by using os.open, which wraps the <code>open</code> system call:</p>
<pre><code>import os, errno

def lockfile(filename):
    try:
        os.close(os.open(filename, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
    except OSError as e:
        if e.errno == errno.EEXIST:  # System is already locked as the file already exists.
            return False
        else:  # Something unexpected went wrong, so reraise the exception.
            raise
    return True  # System has successfully been locked by the function.
</code></pre>
<p>Note the second argument of <code>os.open()</code>. According to <a href="http://stackoverflow.com/a/10979569/4049863">this answer</a>, when the flag O_EXCL is used in conjunction with O_CREAT and the pathname already exists, open() will fail, or, to be more exact, will raise an OSError with errno EEXIST, which in our case means that the system is already locked. However, when the path points to a nonexistent file, it is immediately created, leaving no window for other users of the filesystem to concurrently do the same steps.
And according to <a href="http://stackoverflow.com/a/15072339/4049863">this one</a>, the described technique can be considered widely platform-independent.</p>
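<p>A usage sketch of the function above, acquiring and releasing the lock around a critical section (the lock path here is a throwaway placeholder; the real one would live on the shared DRBD mount):</p>

```python
import errno
import os
import tempfile

def lockfile(filename):
    """Atomically create filename; True if we got the lock, False if held."""
    try:
        os.close(os.open(filename, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False  # somebody else holds the lock
        raise
    return True

# Throwaway lock path for the demo
lock_path = os.path.join(tempfile.mkdtemp(), "demo.lock")

first = lockfile(lock_path)    # acquires the lock
second = lockfile(lock_path)   # same path while held: refused
try:
    pass  # ... critical section ...
finally:
    os.remove(lock_path)       # always release, even on error
third = lockfile(lock_path)    # free again after release
os.remove(lock_path)
```

Wrapping the critical section in try/finally matters here: without it, a crash would leave a stale lock file behind until the 3600-second timeout in the question's code removes it.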
| 0 | 2016-08-23T09:18:45Z | [
"python",
"cluster-computing"
] |