title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
How to use merge in pandas | 39,286,344 | <p>Sorry guys, I know it is a very basic question, I'm just a beginner</p>
<pre><code>In [55]: df1
Out[55]:
x y
a 1 3
b 2 4
c 3 5
d 4 6
e 5 7
In [56]: df2
Out[56]:
y z
b 1 9
c 3 8
d 5 7
e 7 6
f 9 5
</code></pre>
<p>pd.merge(df1, df2) gives:</p>
<pre><code>In [57]: pd.merge(df1, df2)
Out[57]:
x y z
0 1 3 8
1 3 5 7
2 5 7 6
</code></pre>
<p>I'm confused by the use of merge: what do '0', '1', '2' mean? For example, when the index is 0, why is x 1, y 3 and z 8?</p>
| 0 | 2016-09-02T07:11:07Z | 39,286,469 | <p>You get that due to defaults for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>pd.merge</code></a>:</p>
<blockquote>
<pre><code>merge(left, right, how='inner', on=None, left_on=None, right_on=None,
left_index=False, right_index=False, sort=False, suffixes=('_x',
'_y'), copy=True, indicator=False)
on : label or list
Field names to join on. Must be found in both DataFrames. If on is
None and not merging on indexes, then it merges on the intersection of
the columns by default.
</code></pre>
</blockquote>
<p>You haven't passed anything to <code>on</code>, so it merges on the intersection of the columns by default. Your <code>df1</code> and <code>df2</code> have different indices, so if you want to keep the left or right index you should specify that:</p>
<pre><code>In [43]: pd.merge(df1, df2)
Out[43]:
x y z
0 1 3 8
1 3 5 7
2 5 7 6
In [44]: pd.merge(df1, df2, on='y', left_index=True)
Out[44]:
x y z
c 1 3 8
d 3 5 7
e 5 7 6
In [45]: pd.merge(df1, df2, on='y', right_index=True)
Out[45]:
x y z
a 1 3 8
c 3 5 7
e 5 7 6
</code></pre>
| 3 | 2016-09-02T07:18:05Z | [
"python",
"pandas"
] |
How to use merge in pandas | 39,286,344 | <p>Sorry guys, I know it is a very basic question, I'm just a beginner</p>
<pre><code>In [55]: df1
Out[55]:
x y
a 1 3
b 2 4
c 3 5
d 4 6
e 5 7
In [56]: df2
Out[56]:
y z
b 1 9
c 3 8
d 5 7
e 7 6
f 9 5
</code></pre>
<p>pd.merge(df1, df2) gives:</p>
<pre><code>In [57]: pd.merge(df1, df2)
Out[57]:
x y z
0 1 3 8
1 3 5 7
2 5 7 6
</code></pre>
<p>I'm confused by the use of merge: what do '0', '1', '2' mean? For example, when the index is 0, why is x 1, y 3 and z 8?</p>
 | 0 | 2016-09-02T07:11:07Z | 39,290,277 | <p>What pd.merge does is join two dataframes, much as a JOIN statement combines two relations in a relational database.</p>
<p>When you merge df1 and df2 with pd.merge(df1, df2), you haven't specified any other argument, so the 'how' argument takes its default value 'inner' and an intersection of df1 and df2 is computed. The only column name common to both df1 and df2 is 'y', so merge looks for values of 'y' present in both frames and builds a new dataframe with columns 'x', 'y', 'z': 'y' holds the common values 3, 5, 7; 'x' holds the values paired with 3, 5, 7 in df1; and 'z' holds the values paired with 3, 5, 7 in df2. The index of the new dataframe is reset to 0, 1, 2 (the default) because you haven't specified an indexing pattern in your pd.merge call via left_index or right_index (both False by default).</p>
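<p>That default inner merge can be reproduced in a few lines (a minimal sketch of the frames from the question):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'x': [1, 2, 3, 4, 5], 'y': [3, 4, 5, 6, 7]},
                   index=list('abcde'))
df2 = pd.DataFrame({'y': [1, 3, 5, 7, 9], 'z': [9, 8, 7, 6, 5]},
                   index=list('bcdef'))

# with no arguments, merge joins on the shared column 'y' (how='inner')
# and builds a fresh 0, 1, 2, ... index for the result
merged = pd.merge(df1, df2)
```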
| 1 | 2016-09-02T10:33:19Z | [
"python",
"pandas"
] |
How can you just get the index of an already selected row in pandas? | 39,286,508 | <pre><code>In [60]: print(row.index)
Int64Index([15], dtype='int64')
</code></pre>
<p>I already know that the row number is 15, in this example. But I don't need to know its type or anything else. The variable which stores it would do something based on what that number is.</p>
<p>There was something else about pandas I was wondering about. Suppose that this only returns one row: </p>
<pre><code>row = df[df['username'] == "unique name"]
</code></pre>
<p>Is it any less proper to use methods like loc, iloc, etc.? They still work and everything, but I was curious if it would still be done this way in larger projects. Are there preferred methods if it's just one row, as opposed to a list of rows?</p>
| 1 | 2016-09-02T07:20:09Z | 39,286,732 | <p>If you write</p>
<pre><code>row.index[0]
</code></pre>
<p>then you will get, in your case, the integer 15.</p>
<hr>
<p>Note that if you check</p>
<pre><code>dir(pd.Int64Index)
</code></pre>
<p>You can see that it includes the list-like method <code>__getitem__</code>.</p>
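<p>A short self-contained illustration (a made-up frame; any index type supporting <code>__getitem__</code> behaves the same):</p>

```python
import pandas as pd

df = pd.DataFrame({'username': ['alice', 'unique name', 'bob']},
                  index=[10, 15, 20])

# selecting by a unique value still returns a DataFrame ...
row = df[df['username'] == 'unique name']

# ... so row.index is an index object; [0] pulls out the plain number
row_number = row.index[0]
```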
| 1 | 2016-09-02T07:32:15Z | [
"python",
"pandas"
] |
Different marker symbols | 39,286,585 | <p>I am trying to do multiple plots, but I am not getting the specified marker symbols for the mass flow rate, total pressure and static pressure, as shown in the attached image.</p>
<hr>
<hr>
<p><a href="http://i.stack.imgur.com/uCMA5.png" rel="nofollow">Plot</a></p>
 | -1 | 2016-09-02T07:24:14Z | 39,287,145 | <p>I expect you are using matplotlib; you can use something like this:</p>
<pre><code># Don't forget the imports
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0., 5., 0.2)  # sample x values; use your own data here
# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
</code></pre>
<p>In the code, <strong>'r--'</strong>, <strong>'bs'</strong> and <strong>'g^'</strong> are format strings selecting different colors, line styles and marker symbols.</p>
<p>More examples are in the matplotlib documentation here: <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot" rel="nofollow">matplotlib.pyplot.plot</a></p>
<p>For better help you should provide more information: post your code or update the tags on your question. It is very easy to make plots with <strong>matplotlib</strong>.</p>
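<p>If the goal is one marker symbol per measured series (mass flow rate, total pressure, static pressure), the <code>marker</code> keyword does the same job with explicit names; a sketch with made-up data:</p>

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; drop this line for on-screen plots
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0.0, 2.0, 20)  # hypothetical x axis
fig, ax = plt.subplots()
ax.plot(t, t,    marker='o', label='mass flow rate')
ax.plot(t, t**2, marker='s', label='total pressure')
ax.plot(t, t**3, marker='^', label='static pressure')
ax.legend()
```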
| 0 | 2016-09-02T07:53:56Z | [
"python"
] |
print current thread in python 3 | 39,286,610 | <p>I have this script:</p>
<pre><code>import threading, socket
for x in range(800):
send().start()
class send(threading.Thread):
def run(self):
while True:
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("www.google.it", 80))
s.send ("test")
print ("Request sent!")
except:
pass
</code></pre>
<p>And at the place of "Request sent!" I would like to print something like: "Request sent! %s" % (the current number of the thread sending the request)</p>
<p>What's the fastest way to do it?</p>
<p><strong>--SOLVED--</strong></p>
<pre><code>import threading, socket
for x in range(800):
send(x+1).start()
class send(threading.Thread):
def __init__(self, counter):
threading.Thread.__init__(self)
self.counter = counter
def run(self):
while True:
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("www.google.it", 80))
s.send ("test")
print ("Request sent! @", self.counter)
except:
pass
</code></pre>
 | 0 | 2016-09-02T07:25:36Z | 39,286,773 | <p>You could pass your counting number (<code>x</code>, in this case) as a variable to your send class. Keep in mind, though, that <code>x</code> starts at 0, not 1.</p>
<pre><code>for x in range(800):
    send(x+1).start()

class send(threading.Thread):
    def __init__(self, count):
        threading.Thread.__init__(self)  # the base class must be initialised
        self.count = count
    def run(self):
        while True:
            try:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.connect(("www.google.it", 80))
                s.send ("test")
                print ("Request sent!", self.count)
            except:
                pass
</code></pre>
<p>Or, as Rob commented above in the other question, <a href="http://docs.python.org/library/threading.html#threading.current_thread" rel="nofollow"><code>threading.current_thread()</code></a> looks satisfactory.</p>
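<p>A minimal sketch of the <code>current_thread()</code> route (no sockets, just the naming):</p>

```python
import threading

results = []

def worker():
    # current_thread() returns the Thread object running this function
    results.append(threading.current_thread().name)

threads = [threading.Thread(target=worker, name='request-%d' % i) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```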
| 1 | 2016-09-02T07:33:36Z | [
"python",
"multithreading",
"sockets",
"python-3.x"
] |
print current thread in python 3 | 39,286,610 | <p>I have this script:</p>
<pre><code>import threading, socket
for x in range(800):
send().start()
class send(threading.Thread):
def run(self):
while True:
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("www.google.it", 80))
s.send ("test")
print ("Request sent!")
except:
pass
</code></pre>
<p>And at the place of "Request sent!" I would like to print something like: "Request sent! %s" % (the current number of the thread sending the request)</p>
<p>What's the fastest way to do it?</p>
<p><strong>--SOLVED--</strong></p>
<pre><code>import threading, socket
for x in range(800):
send(x+1).start()
class send(threading.Thread):
def __init__(self, counter):
threading.Thread.__init__(self)
self.counter = counter
def run(self):
while True:
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("www.google.it", 80))
s.send ("test")
print ("Request sent! @", self.counter)
except:
pass
</code></pre>
| 0 | 2016-09-02T07:25:36Z | 39,286,792 | <p>The easiest way to do this is to use <code>setName</code> and <code>getName</code> to give names to your threads.</p>
<pre><code>import threading, socket
for x in range(800):
new_thread = send()
new_thread.setName("thread number %d" % x)
new_thread.start()
class send(threading.Thread):
def run(self):
while True:
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("www.google.it", 80))
s.send ("test")
print ("Request sent by %s!" % self.getName())
except:
pass
</code></pre>
<p>You can also add any other attributes to <code>send</code> that you need to keep track of your threads.</p>
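<p>Stripped of the socket work, the naming approach looks like this (the name can also be set directly via the <code>name</code> attribute):</p>

```python
import threading

captured = []

class Send(threading.Thread):
    def run(self):
        # getName() reads the name assigned before start()
        captured.append("Request sent by %s!" % self.getName())

threads = []
for x in range(3):
    th = Send()
    th.setName("thread number %d" % x)
    threads.append(th)
    th.start()
for th in threads:
    th.join()
```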
| 1 | 2016-09-02T07:34:44Z | [
"python",
"multithreading",
"sockets",
"python-3.x"
] |
GeoDjango distance query with srid 4326 returns 'SpatiaLite does not support distance queries on geometry fields with a geodetic coordinate system.' | 39,286,655 | <p>I'm trying to fetch nearby kitchens within a 4 km radius of a given lat/long.
My spatial backend is spatialite and settings are,</p>
<pre>
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.gis',
'rest_framework',
'oauth2_provider',
'kitchen',
)
DATABASES = {
'default': {
'ENGINE': 'django.contrib.gis.db.backends.spatialite',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
</pre>
<p>Here is my model</p>
<pre>
from django.contrib.gis.db import models
from django.contrib.gis.geos import Point
class Kitchen(models.Model):
id = models.CharField(max_length=100,primary_key=True)
#id = models.AutoField(primary_key=True)
name = models.CharField(max_length=100,blank=False)
address = models.CharField(max_length=1000, blank=True, default='')
contact_no = models.CharField(max_length=100,blank=True, default='')
location = models.PointField(srid=4326, geography=True, blank=True, null=True)
objects = models.GeoManager()
</pre>
<p>My query from Django shell is,</p>
<pre>
from kitchen.models import Kitchen
from django.contrib.gis import measure
from django.contrib.gis import geos
current_point = geos.fromstr('POINT(%s %s)' % (76.7698996, 17.338993), srid=4326)
Kitchen.objects.filter(location__distance_lte=(current_point, measure.D(km=4)))
</pre>
<p>which returns the <strong>Value Error</strong> below:</p>
<pre>
SpatiaLite does not support distance queries on geometry fields with a geodetic coordinate system. Distance objects; use a numeric value of your distance in degrees instead.
</pre>
<p>Setting a different projected srid in the model (e.g. 3857, 24381, etc.) returns incorrect results.
Some help here would be greatly appreciated. </p>
| 1 | 2016-09-02T07:27:31Z | 39,332,764 | <p>Most likely your problem is the SRID. It seems like spatialite does not support this type of distance query on fields with unprojected coordinate systems.</p>
<p>So you are on the right track, but setting the srid to a different value will have no effect as long as you have <code>geography=True</code> enabled in your model definition. The geography type forces the srid to be 4326, as described in the <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/gis/model-api/#django.contrib.gis.db.models.GeometryField.geography" rel="nofollow">django geography docs</a>.</p>
<p>So try setting <code>geography=False</code> and the srid to one of the projected coordinate systems that you were trying out.</p>
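<p>If you do want to keep the geodetic SRID 4326 on SpatiaLite, the error message points at the other escape hatch: pass the distance as a plain number of degrees. A rough sketch of the conversion (assumption: roughly 111.32 km per degree of latitude; degrees of longitude shrink with latitude, so this is only approximate):</p>

```python
KM_PER_DEGREE = 111.32  # approximate length of one degree of latitude, in km

def km_to_degrees(km):
    # crude conversion for use as a numeric distance value in degrees
    return km / KM_PER_DEGREE

radius_deg = km_to_degrees(4)
# e.g. Kitchen.objects.filter(location__distance_lte=(current_point, radius_deg))
```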
| 0 | 2016-09-05T14:28:56Z | [
"python",
"django",
"sqlite",
"geodjango",
"spatialite"
] |
How to avoid pickling errors when sharing objects between threads? | 39,286,665 | <p>I have a program in which I need to store a global variable into a file. I am doing this using the <code>pickle</code> module. </p>
<p>I have another <code>thread</code>(Daemon=<code>False</code>, from <code>threading</code> module) which sometimes changes the value of the global variable. The value is also modified in global scope(the main program). </p>
<p>I am dumping the value of the variable into a <code>.pkl</code> file every 5 seconds (using another <code>thread</code> from <code>threading</code> module).</p>
<p>But I found the following error when <code>dump</code> method was executed:</p>
<pre><code>TypeError: can't pickle _thread.lock objects
</code></pre>
<p>Why is this happening? And what can I do to fix it?</p>
<p>Note: I have found some similar answers with <code>multiprocessing</code> module. But I need an answer for <code>threading</code> module.</p>
<p>Code:</p>
<pre><code>def save_state():
while True:
global variable
lastSession = open('lastSession.pkl', 'wb')
# error occurs on this line
pickle.dump(variable, lastSession)
lastSession.close()
time.sleep(5)
state_thread = threading.Thread(target = save_state)
state_thread.setDaemon(False)
state_thread.start()
# variable is changed outside this function and also in another thread(not state_thread).
</code></pre>
 | 0 | 2016-09-02T07:28:08Z | 39,306,303 | <p>As others have mentioned, you cannot pickle "volatile" entities (threads, connections, synchronization primitives etc.) because they don't make sense as persistent data.</p>
<p>It looks like what you're trying to do is save session variables so the session can be continued later. For that task, there's nothing you can do to save objects that, by their very nature, cannot be saved.</p>
<p>The simplest solution is just to ignore them. Replacing them with "stubs" that would yield an error anytime you touch them makes little sense, as a missing variable yields an error anyway (it won't if it was masking a global variable, but that is a questionable practice in itself).</p>
<p>Alternatively, you can set up these objects anew when restoring a session. But such a logic is by necessity task-specific.</p>
<p>Finally, IPython, for example, <a href="http://stackoverflow.com/questions/12504951/save-session-in-ipython-like-in-matlab">already has session saving/restoration logic</a>, so you probably don't need to reinvent the wheel at all.</p>
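<p>The "ignore them" approach has direct support in the pickle protocol: define <code>__getstate__</code>/<code>__setstate__</code> so volatile members are dropped on save and rebuilt on load. A sketch with a hypothetical session object:</p>

```python
import pickle
import threading

class Session:
    def __init__(self):
        self.data = {'user': 'alice'}     # picklable state
        self.lock = threading.Lock()      # unpicklable, volatile

    def __getstate__(self):
        state = self.__dict__.copy()
        del state['lock']                 # drop the lock before pickling
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.lock = threading.Lock()      # set up anew when restoring

restored = pickle.loads(pickle.dumps(Session()))
```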
| 0 | 2016-09-03T11:17:47Z | [
"python",
"multithreading",
"pickle",
"locks"
] |
X.509 certificate serial number to hex conversion | 39,286,805 | <p>I'm trying to get the serial number for an X.509 certificate using Pythons OpenSSL library.</p>
<p>If I load my cert like this:</p>
<pre><code> x509 = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_ASN1, cert)
</code></pre>
<p>Then print the serial number like this:</p>
<pre><code>print x509.get_serial_number()
</code></pre>
<p>It looks like this:</p>
<pre><code>5.283978953499081e+37
</code></pre>
<p>If I convert it to hex like this:</p>
<pre><code>'{0:x}'.format(int(5.283978953499081e+37))
</code></pre>
<p>It returns this:</p>
<pre><code>27c092c344a6c2000000000000000000
</code></pre>
<p>However, using OpenSSL from the command line to print the certificates serial number returns this.</p>
<pre><code>27:c0:92:c3:44:a6:c2:35:29:8f:d9:a2:fb:16:f9:b7
</code></pre>
<p>Why is half of the serial number being converted to zeros? </p>
| 2 | 2016-09-02T07:35:56Z | 39,292,733 | <pre><code>'%x' % cert.get_serial_number()
</code></pre>
<p><code>'%x' %</code> formats the number as hex, like <code>'{0:x}'.format(cert.get_serial_number())</code>. The important part is to format the integer returned by <code>get_serial_number()</code> directly: the <code>5.283978953499081e+37</code> you printed is a float, and a float only carries about 15 to 16 significant digits, so converting it back to an int fills the low-order bytes with zeros.</p>
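<p>A quick self-contained demonstration that the float round trip, not the hex formatting, is what zeroes the low bytes (serial value taken from the question's OpenSSL output):</p>

```python
serial = 0x27C092C344A6C235298FD9A2FB16F9B7  # 128-bit serial from the question

# round-tripping through float destroys the low-order bytes
lossy = '{0:x}'.format(int(float(serial)))

# formatting the int directly keeps every digit
exact = '%x' % serial
```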
| 0 | 2016-09-02T12:43:16Z | [
"python",
"pyopenssl"
] |
got frequency, need plot sinus wave in python | 39,286,859 | <p>I am just fighting with modulation of a sinus wave.
I have got a frequency (from measured data, changing in time) and now I need to plot a sinus wave with the corresponding frequency.</p>
<p><img src="http://i.stack.imgur.com/MMfqh.png" alt="real data and sinus"></p>
<p>The blue line is just the plotted points of the real data, and the green one is what I have done till now, but it does not correspond to the real data at all.</p>
<p>The code to plot the sine wave is below:</p>
<pre><code>def plotmodulsin():
n = 530
f1, f2 = 16, 50 # frequency
t = linspace(6.94,8.2,530)
dt = t[1] - t[0] # needed for integration
print t[1]
print t[0]
f_inst = logspace(log10(f1), log10(f2), n)
phi = 2 * pi * cumsum(f_inst) * dt # integrate to get phase
pylab.plot(t, 5*sin(phi))
</code></pre>
<p>Amplitude vector:</p>
<blockquote>
<p>[2.64, -2.64, 6.14, -6.14, 9.56, -9.56, 12.57, -12.57, 15.55, -15.55, 18.04, -18.04, 21.17, -21.17, 23.34, -23.34, 25.86, -25.86, 28.03, -28.03, 30.49, -30.49, 33.28, -33.28, 35.36, -35.36, 36.47, -36.47, 38.86, -38.86, 41.49, -41.49, 42.91, -42.91, 44.41, -44.41, 45.98, -45.98, 47.63, -47.63, 47.63, -47.63, 51.23, -51.23, 51.23, -51.23, 53.18, -53.18, 55.24, -55.24, 55.24, -55.24, 55.24, -55.24, 57.43, -57.43, 57.43, -57.43, 59.75, -59.75, 59.75, -59.75, 59.75, -59.75, 59.75, -59.75, 62.22, -62.22, 59.75, -59.75, 62.22, -62.22, 59.75, -59.75, 62.22, -62.22, 62.22, -62.22, 59.75, -59.75, 62.22, -62.22, 62.22, -62.22, 59.75, -59.75, 62.22, -62.22, 62.22, -62.22, 62.22, -62.22, 59.75, -59.75, 62.22, -62.22, 59.75, -59.75, 62.22, -62.22, 59.75, -59.75, 59.75]</p>
</blockquote>
<p>Time vector for real data:</p>
<blockquote>
<p>[6.954, 6.985, 7.016, 7.041, 7.066, 7.088, 7.11, 7.13, 7.149, 7.167, 7.186, 7.202, 7.219, 7.235, 7.251, 7.266, 7.282, 7.296, 7.311, 7.325, 7.339, 7.352, 7.366, 7.379, 7.392, 7.404, 7.417, 7.43, 7.442, 7.454, 7.466, 7.478, 7.49, 7.501, 7.513, 7.524, 7.536, 7.547, 7.558, 7.569, 7.58, 7.591, 7.602, 7.613, 7.624, 7.634, 7.645, 7.655, 7.666, 7.676, 7.686, 7.697, 7.707, 7.717, 7.728, 7.738, 7.748, 7.758, 7.768, 7.778, 7.788, 7.798, 7.808, 7.818, 7.828, 7.838, 7.848, 7.858, 7.868, 7.877, 7.887, 7.897, 7.907, 7.917, 7.927, 7.937, 7.946, 7.956, 7.966, 7.976, 7.986, 7.996, 8.006, 8.016, 8.026, 8.035, 8.045, 8.055, 8.065, 8.075, 8.084, 8.094, 8.104, 8.114, 8.124, 8.134, 8.144, 8.154, 8.164, 8.174, 8.184, 8.194, 8.20]</p>
</blockquote>
<p>So I need to generate a sine with constant amplitude and the following frequencies:</p>
<blockquote>
<p>[10.5, 16.03, 20.0, 22.94, 25.51, 27.47, 29.76, 31.25, 32.89, 34.25, 35.71, 37.31, 38.46, 39.06, 40.32, 41.67, 42.37, 43.1, 43.86, 44.64, 44.64, 46.3, 46.3, 47.17, 48.08, 48.08, 48.08, 49.02, 49.02, 50.0, 50.0, 50.0, 50.0]</p>
</blockquote>
 | 0 | 2016-09-02T07:38:23Z | 39,295,119 | <p>You can try to match your function with something sine- or actually cosine-like, by extracting estimates for the frequency and the amplitude from your data. If I understood you correctly, your data are maximums and minimums and you want to have a trigonometric function that resembles that. If your data is saved in two arrays <code>time</code> and <code>value</code>, amplitude estimates are simply given by <code>np.abs(value)</code>. Frequencies are given as the inverse of two times the time difference between a maximum and a minimum. <code>freq = 0.5/(time[1:]-time[:-1])</code> gives you frequency estimates for the mid points of each time interval. The corresponding times are thus given as <code>freqTimes = (time[1:]+time[:-1])/2.</code>.</p>
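<p>The half-period estimate can be sanity-checked on a synthetic example: for a pure 10 Hz sine, consecutive extrema are 0.05 s apart, so the formula recovers 10 Hz exactly:</p>

```python
import numpy as np

# times of three consecutive extrema of a 10 Hz sine
time = np.array([0.025, 0.075, 0.125])

# adjacent maxima/minima are half a period apart
freq = 0.5 / (time[1:] - time[:-1])
```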
<p>To get a smoother curve, you can now interpolate those amplitude and frequency values to get estimates for the values in between. A very simple way to do this is by use of <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html" rel="nofollow"><code>np.interp</code></a>, which will do a simple linear interpolation. You will have to specify at which points in time to interpolate. We will construct an array for that and then interpolate by:</p>
<pre><code>n = 10000
timesToInterpolate = np.linspace(time[0], time[-1], n, endpoint=True)
freqInterpolated = np.interp(timesToInterpolate, freqTimes, freq)
amplInterpolated = np.interp(timesToInterpolate, time, np.abs(value))
</code></pre>
<p>Now you can do the integration, that you already had in your example by doing:</p>
<pre><code>phi = (2*np.pi*np.cumsum(freqInterpolated)
*(timesToInterpolate[1]-timesToInterpolate[0]))
</code></pre>
<p>And now you can plot. So putting it all together gives you:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
time = np.array([6.954, 6.985, 7.016, 7.041, 7.066, 7.088, 7.11, 7.13]) #...
value = np.array([2.64, -2.64, 6.14, -6.14, 9.56, -9.56, 12.57, -12.57]) #...
freq = 0.5/(time[1:]-time[:-1])
freqTimes = (time[1:]+time[:-1])/2.
n = 10000
timesToInterpolate = np.linspace(time[0], time[-1], n, endpoint=True)
freqInterpolated = np.interp(timesToInterpolate, freqTimes, freq)
amplInterpolated = np.interp(timesToInterpolate, time, np.abs(value))
phi = (2*np.pi*np.cumsum(freqInterpolated)
*(timesToInterpolate[1]-timesToInterpolate[0]))
plt.plot(time, value)
plt.plot(timesToInterpolate, amplInterpolated*np.cos(phi)) #or np.sin(phi+np.pi/2)
plt.show()
</code></pre>
<p>The result looks like this (if you include the full arrays):</p>
<p><a href="http://i.stack.imgur.com/ClfTO.png" rel="nofollow"><img src="http://i.stack.imgur.com/ClfTO.png" alt="enter image description here"></a></p>
| 0 | 2016-09-02T14:43:02Z | [
"python",
"sin",
"modulation"
] |
How to set only specific values for a parameter in docopt python? | 39,286,878 | <p>I am trying to use docopt in a Python script. I actually need to allow only specific values for a parameter. My usage is as below:</p>
<pre><code>"""
Usage:
test.py --list=(all|available)
Options:
list Choice to list devices (all / available)
"""
</code></pre>
<p>I've tried to run it as <code>python test.py --list=all</code>, but it doesn't accept the value and just displays the docopt usage string.</p>
<p>I want the value of the list parameter to be either 'all' or 'available'. Is there any way this can be achieved?</p>
| 0 | 2016-09-02T07:39:06Z | 39,288,948 | <p>Here's an example to achieve what you want:</p>
<p><strong>test.py:</strong></p>
<pre><code>"""
Usage:
test.py list (all|available)
Options:
-h --help Show this screen.
--version Show version.
list Choice to list devices (all / available)
"""
from docopt import docopt
def list_devices(all_devices=True):
if all_devices:
print("Listing all devices...")
else:
print("Listing available devices...")
if __name__ == '__main__':
arguments = docopt(__doc__, version='test 1.0')
if arguments["list"]:
list_devices(arguments["all"])
</code></pre>
<p>With this script you could run statements like:</p>
<pre><code>python test.py list all
</code></pre>
<p>or:</p>
<pre><code>python test.py list available
</code></pre>
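<p>If you would rather keep the <code>--list=...</code> option form, note that the standard library's <code>argparse</code> expresses the same constraint with <code>choices</code> (not docopt, but shown here as a runnable point of comparison):</p>

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--list', choices=['all', 'available'], required=True)

args = parser.parse_args(['--list', 'all'])

# any other value is rejected with a usage error (SystemExit)
try:
    parser.parse_args(['--list', 'everything'])
    rejected = False
except SystemExit:
    rejected = True
```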
| 1 | 2016-09-02T09:26:48Z | [
"python",
"docopt"
] |
Model design for restaurant, meal and meal category | 39,286,917 | <p>I am planning to develop a food delivery app for local restaurants, and I am trying to find the best design. I have also designed JSON to model the API. However, I am confused about the menu part: should Meal hold a foreign key to Restaurant, or Restaurant hold a foreign key to Meal?</p>
<p>Simple concept of my app is </p>
<p>A restaurant prepares various meals to serve various kinds of customers. A meal is associated with a category, e.g. a meal can be veg or non-veg. A customer (User) might order drinks too.</p>
<p>Is my model design and api design apt for such kind of scenario?</p>
<pre><code>class Restaurant(models.Model):
name = models.CharField()
slug = models.SlugField()
owner = models.ForeignKey(User)
location = models.CharField()
city = models.CharField()
features = models.ManyToManyField(FeatureChoice) # dinner, launch, nightlife,
timing = models.ManyToManyField(TimingChoice) # sunday, monday, tuesday,
is_delivery = models.BooleanField(default=True)
# meal = models.ForeignKey(Meal) main confusion is here
class Meal(models.Model):
restaurant = models.ForeignKey(User)
name = models.CharField()
price = models.FloatField()
quantity = models.PositiveIntegerField()
image = models.ImageField()
rating = models.IntegerField()
class MealCategory(models.Model)
meal = models.ForeignKey(Meal)
name = models.CharField()
slug = models.SlugField()
</code></pre>
<p><strong>json design for REST API</strong></p>
<pre><code>[
{
'name':'Kathmandu Fast Food',
'owner':'Sanskar Shrestha',
'location':'Koteshwor',
'city':'Kathmandu',
'features':[
{
'features':'Breakfast'
},
{
'features':'Launch'
},
{
'features':'NightLife'
},
],
'timings':[
{
'timing':'MOnday'
},
{
'timing':'Sunday'
},
],
'is_delivery':'true',
'menu':[
{
'name':'Chicken BBQ',
'price':990,
'quantity':10,
'image':'localhost:8000/media/.../',
'category':{
'name':'Non-veg'
}
},
{
'name':'Veg Chowmin',
'price':160,
'quantity':20,
'image':'localhost:8000/media/',
'category':
{
'name':'Veg'
}
}
]
}
]
</code></pre>
<p>Please do share your expertise thought.</p>
 | 0 | 2016-09-02T07:41:48Z | 39,287,050 | <p>Meal should have MealCategory as a ForeignKey. I think that Meal is a model which should be independent of Restaurant. Think about adding another model</p>
<pre><code>class RestaurantMeal(models.Model):
restaurant = models.ForeignKey(Restaurant)
meal = models.ForeignKey(Meal)
</code></pre>
<p>to store data about meals in particular restaurants.</p>
| 0 | 2016-09-02T07:48:40Z | [
"python",
"json",
"django",
"api",
"django-models"
] |
Model design for restaurant, meal and meal category | 39,286,917 | <p>I am planning to develop a food delivery app for local restaurants, and I am trying to find the best design. I have also designed JSON to model the API. However, I am confused about the menu part: should Meal hold a foreign key to Restaurant, or Restaurant hold a foreign key to Meal?</p>
<p>Simple concept of my app is </p>
<p>A restaurant prepares various meals to serve various kinds of customers. A meal is associated with a category, e.g. a meal can be veg or non-veg. A customer (User) might order drinks too.</p>
<p>Is my model design and api design apt for such kind of scenario?</p>
<pre><code>class Restaurant(models.Model):
name = models.CharField()
slug = models.SlugField()
owner = models.ForeignKey(User)
location = models.CharField()
city = models.CharField()
features = models.ManyToManyField(FeatureChoice) # dinner, launch, nightlife,
timing = models.ManyToManyField(TimingChoice) # sunday, monday, tuesday,
is_delivery = models.BooleanField(default=True)
# meal = models.ForeignKey(Meal) main confusion is here
class Meal(models.Model):
restaurant = models.ForeignKey(User)
name = models.CharField()
price = models.FloatField()
quantity = models.PositiveIntegerField()
image = models.ImageField()
rating = models.IntegerField()
class MealCategory(models.Model)
meal = models.ForeignKey(Meal)
name = models.CharField()
slug = models.SlugField()
</code></pre>
<p><strong>json design for REST API</strong></p>
<pre><code>[
{
'name':'Kathmandu Fast Food',
'owner':'Sanskar Shrestha',
'location':'Koteshwor',
'city':'Kathmandu',
'features':[
{
'features':'Breakfast'
},
{
'features':'Launch'
},
{
'features':'NightLife'
},
],
'timings':[
{
'timing':'MOnday'
},
{
'timing':'Sunday'
},
],
'is_delivery':'true',
'menu':[
{
'name':'Chicken BBQ',
'price':990,
'quantity':10,
'image':'localhost:8000/media/.../',
'category':{
'name':'Non-veg'
}
},
{
'name':'Veg Chowmin',
'price':160,
'quantity':20,
'image':'localhost:8000/media/',
'category':
{
'name':'Veg'
}
}
]
}
]
</code></pre>
<p>Please do share your expertise thought.</p>
| 0 | 2016-09-02T07:41:48Z | 39,287,606 | <ol>
<li>Why do you have <code>restaurant = models.ForeignKey(User)</code> in the <code>Meal</code> model, when you should have <code>restaurant = models.ForeignKey('Restaurant')</code>?</li>
<li><code>MealCategory</code> should be independent, not belonging to <code>Meal</code>, on the contrary, <code>Meal</code> should belong to <code>MealCategory</code>.</li>
<li><code>price</code> field should be <code>Decimal</code></li>
<li><p>Why is <code>rating</code> an Integer? Have you ever seen a rating that was a whole number? It's an integer only if one person rated it.</p>
<pre><code>class Meal(models.Model):
restaurant = models.ForeignKey('Restaurant')
name = models.CharField()
price = models.DecimalField()
quantity = models.PositiveIntegerField()
image = models.ImageField()
rating = models.FloatField()
meal_category = models.ForeignKey('MealCategory')
class MealCategory(models.Model):
name = models.CharField()
slug = models.SlugField()
</code></pre></li>
</ol>
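<p>Point 3 deserves a one-line justification: binary floats cannot represent most decimal prices exactly, which is why <code>DecimalField</code> is the usual choice for money:</p>

```python
from decimal import Decimal

float_total = 0.1 + 0.1 + 0.1      # binary rounding error creeps in
decimal_total = Decimal('0.1') * 3  # exact decimal arithmetic
```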
| 0 | 2016-09-02T08:21:16Z | [
"python",
"json",
"django",
"api",
"django-models"
] |
How to display a row based on specific columns and values | 39,287,534 | <p>Let's say I have this dataframe:</p>
<pre><code>Type | Killed
Dog 5
Cat 7
Dog 9
Dog 10
Dog 6
Cat 2
Cow 1
</code></pre>
<p>I would like to display the row where the number of dogs killed is more than 7.</p>
<p>My desired outcome would be:</p>
<pre><code>Type | Killed
Dog 9
Dog 10
</code></pre>
<p>Thank you!</p>
 | -6 | 2016-09-02T08:17:25Z | 39,287,552 | <p>You can select the rows by combining two boolean conditions on the columns:</p>
<pre><code>df.loc[(df['Killed'] > 7) & (df['Type'] == 'Dog')]
</code></pre>
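<p>A runnable version with the question's data, assuming <code>Type</code> is an ordinary column (parentheses around each comparison are required because <code>&</code> binds tighter than <code>></code>):</p>

```python
import pandas as pd

df = pd.DataFrame({'Type': ['Dog', 'Cat', 'Dog', 'Dog', 'Dog', 'Cat', 'Cow'],
                   'Killed': [5, 7, 9, 10, 6, 2, 1]})

result = df.loc[(df['Type'] == 'Dog') & (df['Killed'] > 7)]
```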
| 0 | 2016-09-02T08:18:13Z | [
"python",
"pandas"
] |
How to display a row based on specific columns and values | 39,287,534 | <p>Let's say I have this dataframe:</p>
<pre><code>Type | Killed
Dog 5
Cat 7
Dog 9
Dog 10
Dog 6
Cat 2
Cow 1
</code></pre>
<p>I would like to display the row where the number of dogs killed is more than 7.</p>
<p>My desired outcome would be:</p>
<pre><code>Type | Killed
Dog 9
Dog 10
</code></pre>
<p>Thank you!</p>
| -6 | 2016-09-02T08:17:25Z | 39,287,762 | <p>You can select data using:</p>
<pre><code>data[(data.Type == 'Dog') & (data.Killed > 7)]
</code></pre>
<p><strong>Example:</strong></p>
<pre><code>import pandas as pd
Type = ['Dog', 'Cat', 'Dog', 'Dog', 'Dog', 'Cat', 'Cow']
Killed = [5, 7, 9, 10, 6, 2, 1]
data = pd.DataFrame(list(zip(Type, Killed)), columns=['Type', 'Killed'])
# parentheses around each condition are required: & binds tighter than >
data.loc[(data.Type == 'Dog') & (data['Killed'] > 7)]
<p><a href="http://i.stack.imgur.com/8GniI.png" rel="nofollow"><img src="http://i.stack.imgur.com/8GniI.png" alt="enter image description here"></a></p>
| 0 | 2016-09-02T08:29:37Z | [
"python",
"pandas"
] |
copy file with any extension | 39,287,542 | <p>I have only been attempting to code for a few days so please forgive me for any obvious mistakes that I make. I have searched around for a couple of days now and have not managed to find a solution to my problem, hence why I have decided to post here.</p>
<p>My issue: I have managed to create a piece of script which will search in a dedicated folder and copy a file, that the user has named themselves, via input(), into multiple other folders. My downfall is that I can only get it to copy files with a specific extension, e.g. '.docx'. What I would like the code to be able to do is pick up the named file regardless of its extension.</p>
<p>Here is the code:</p>
<pre><code>import shutil, os, time
while True:
FileName = input('\n Please enter the file name: ') + '.docx'
try:
shutil.copy('C:\\users\\guest\\desktop\\Folder\\' + FileName, 'C:\\users\\guest\\desktop\\employees\\Folder1')
shutil.copy('C:\\users\\guest\\desktop\\Folder\\' + FileName, 'C:\\users\\guest\\desktop\\employees\\Folder2')
shutil.copy('C:\\users\\guest\\desktop\\Folder\\' + FileName, 'C:\\users\\guest\\desktop\\employees\\Folder3')
except FileNotFoundError:
print("No such file exists. Please try again.")
else:
break
print("File transfer complete.")
time.sleep(2)
quit()
</code></pre>
<p>Thanks in advance for any help!</p>
 | 1 | 2016-09-02T08:17:44Z | 39,288,005 | <p>This is how I would start going about it. There are things you have to be careful of, like multiple files with the same name but different extensions, or file names containing extra dots; both are taken into account by the script below. Take a look.</p>
<pre><code>import shutil
import os
import time
my_dir = r'C:\users\guest\desktop\Folder'
while True:
all_files = os.listdir(my_dir)
all_files_and_extentions = [x.split('.') for x in all_files]
    print(all_files_and_extentions) # -> [['mftf5_fats_LocBuckling_B-Basis_v1', 'txt'], ['mftf5_fats_static_LSET_C_v1', 'docx']]
FileName = input('\n Please enter the file name: ')
print(FileName) # -> mftf5_fats_LocBuckling_B-Basis_v1
    matching_files = ['.'.join(x) for x in all_files_and_extentions if x[0] == FileName]
    print(matching_files) # -> ['mftf5_fats_LocBuckling_B-Basis_v1.txt']
    for files in matching_files:
try:
shutil.copy(my_dir + '\\' + files, 'C:\\users\\guest\\desktop\\employees\\Folder1')
shutil.copy(my_dir + '\\' + files, 'C:\\users\\guest\\desktop\\employees\\Folder2')
shutil.copy(my_dir + '\\' + files, 'C:\\users\\guest\\desktop\\employees\\Folder3')
except FileNotFoundError:
print("No such file exists. Please try again.")
else:
break
print("File transfer complete.")
time.sleep(2)
quit()
</code></pre>
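<p>A hedged variation on the matching step: <code>os.path.splitext</code> splits off only the final extension, which sidesteps the manual dot-joining for names that themselves contain dots (file names below are invented):</p>

```python
import os

# Invented listing; only the last dot counts as the extension separator.
files = ['mftf5_fats_v1.txt', 'mftf5_fats_v1.bak', 'archive.tar.gz', 'readme']
name = 'mftf5_fats_v1'

# Keep entries whose name (extension removed) matches, or that match exactly.
matches = [f for f in files if f == name or os.path.splitext(f)[0] == name]
print(matches)
```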
| 0 | 2016-09-02T08:41:56Z | [
"python",
"python-3.x"
] |
copy file with any extension | 39,287,542 | <p>I have only been attempting to code for a few days so please forgive me for any obvious mistakes that I make. I have searched around for a couple of days now and have not managed to find a solution to my problem, hence why I have decided to post here.</p>
<p>My issue: I have managed to create a piece of script which will search in a dedicated folder and copy a file, that the user has named themselves via input(), into multiple other folders. My downfall is that I can only get it to copy files with a specific extension, e.g. '.docx'. What I would like the code to be able to do is pick up the named file regardless of its extension. </p>
<p>Here is the code:</p>
<pre><code>import shutil, os, time
while True:
FileName = input('\n Please enter the file name: ') + '.docx'
try:
shutil.copy('C:\\users\\guest\\desktop\\Folder\\' + FileName, 'C:\\users\\guest\\desktop\\employees\\Folder1')
shutil.copy('C:\\users\\guest\\desktop\\Folder\\' + FileName, 'C:\\users\\guest\\desktop\\employees\\Folder2')
shutil.copy('C:\\users\\guest\\desktop\\Folder\\' + FileName, 'C:\\users\\guest\\desktop\\employees\\Folder3')
except FileNotFoundError:
print("No such file exists. Please try again.")
else:
break
print("File transfer complete.")
time.sleep(2)
quit()
</code></pre>
<p>Thanks in advance for any help!</p>
| 1 | 2016-09-02T08:17:44Z | 39,288,326 | <p>You need to make sure <code>FileName</code> either <em>equals</em> the file's name <em>or startswith</em> the file's name + a dot. Else you cannot exclude errors with possible files without extensions, or files with the same <em>partial</em> name.</p>
<p>Furthermore, I'd suggest using a conditional rather than <code>try</code>/<code>except</code>; only do the action if the file actually exists, else retry the input:</p>
<pre class="lang-py prettyprint-override"><code>import shutil, os, time
while True:
dr = 'C:\\users\\guest\\desktop\\Folder\\'
FileName = input('\n Please enter the file name: ')
# see if the file exists (listing files matching the conditions)
match = [f for f in os.listdir(dr) if any([f == FileName, f.startswith(FileName+".")])]
# if the list is not empty, use the first match, copy it and break
if match:
match = match[0]
shutil.copy(os.path.join(dr, match), os.path.join('C:\\users\\guest\\desktop\\employees\\Folder1', match))
# etc...
break
else:
print("No such file exists. Please try again.")
print("File transfer complete.")
time.sleep(2)
quit()
</code></pre>
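<p>The whole flow can also be sketched with <code>glob</code>; this self-contained version uses temporary folders instead of the desktop paths, so the paths and file names here are illustrative only:</p>

```python
import glob
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    # Seed the source folder with two files.
    open(os.path.join(src, 'report.docx'), 'w').close()
    open(os.path.join(src, 'notes.txt'), 'w').close()

    name = 'report'
    # Match the bare name, or the name followed by a dot and any extension.
    paths = glob.glob(os.path.join(src, name)) + glob.glob(os.path.join(src, name + '.*'))
    for path in paths:
        shutil.copy(path, dst)

    copied = sorted(os.listdir(dst))
print(copied)
```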
| 0 | 2016-09-02T08:57:45Z | [
"python",
"python-3.x"
] |
Python Django REST call returning object number instead of object name | 39,287,577 | <p>I'm new to Python, and I guess I'm serializing it incorrectly.</p>
<p><strong>This is the REST call result:</strong>
<a href="http://i.stack.imgur.com/SXko7.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/SXko7.jpg" alt="enter image description here"></a></p>
<blockquote>
<p><strong>authors/models.py:</strong></p>
</blockquote>
<pre><code> from django.db import models
from django.contrib.auth.models import User
# Create your models here.
class Author(models.Model):
name = models.CharField(max_length=100)
def __str__(self):
return self.name
class Book(models.Model):
auto_increment_id = models.AutoField(primary_key=True)
name = models.CharField('Book name', max_length=100)
author = models.ForeignKey(Author, blank=False, null=False, related_name='book_author')
contents = models.TextField('Contents', blank=False, null=False)
def __str__(self):
return self.name
</code></pre>
<blockquote>
<p><strong>authors/views.py</strong></p>
</blockquote>
<pre><code>from authors.models import Book, Author
from authors.serializers import BookSerializer, AuthorSerializer
from django.shortcuts import render
from rest_framework import generics
class ListCreateBooks(generics.ListCreateAPIView):
queryset = Book.objects.all()
serializer_class = BookSerializer
class ListCreateAuthor(generics.ListCreateAPIView):
queryset = Author.objects.all()
serializer_class = AuthorSerializer
</code></pre>
<blockquote>
<p><strong>authors/serializers.py</strong></p>
</blockquote>
<pre><code>from authors.models import Book, Author
from rest_framework import serializers
class BookSerializer(serializers.ModelSerializer):
class Meta:
model = Book
fields = ('name', 'author')
class AuthorSerializer(serializers.ModelSerializer):
class Meta:
model = Author
book_author = 'name'
</code></pre>
<p>I'm new to Django, but I tried many things, in my views.py I've added another class called <code>AuthorSerializer</code> importing from a corresponding class I created in serializers, but then I realized that I have no clue how to add my <code>ListCreateAuthor</code> to:</p>
<pre><code> url(r'^api-auth/', ListCreateBooks.as_view(), name='list_books')
</code></pre>
<p>I've added another parameter with <code>ListCreateAuthor.as_view()</code> that gave me an immediate error (and which also didn't make much sense) Am I going the wrong way here and how can I solve this?</p>
<blockquote>
<p><strong>EDIT:</strong> @Abdulafaja gave a partial answer, which did solve the read, but now after checking a POST insert - it gives an error for create or update.</p>
</blockquote>
<p>In <a href="http://www.django-rest-framework.org/api-guide/serializers/#saving-instances" rel="nofollow">Django rest_framework's documentation</a> (link provided by @Abdulafaja) it gives only one example for a nested serializer, and it's a one-to-many relationship (Album->tracks), but mine is one-to-one (book->author), so I have no idea how to serialize the nested feature. This API needs to give the frontend all CRUD features.</p>
<p>Thanks.</p>
<blockquote>
<p><strong>EDITED serializers.py by Abdullah:</strong></p>
</blockquote>
<pre><code>from authors.models import Book, Author
from rest_framework import serializers
class AuthorSerializer(serializers.ModelSerializer):
class Meta:
model = Author
book_author = 'name'
class BookSerializer(serializers.ModelSerializer):
author = AuthorSerializer(many=False)
class Meta:
model = Book
fields = ('name', 'author')
def create(self, validated_data):
author, _ = Author.objects.get_or_create(name=validated_data.get('author').get('name'))
return Book.objects.create(name=validated_data.get('name'), author=author)
def update(self, instance, validated_data):
author = validated_data.get('author')
if author:
            instance.author, _ = Author.objects.get_or_create(name=author.get('name'))
instance.name = validated_data.get('name', instance.name)
instance.save()
return instance
</code></pre>
| 1 | 2016-09-02T08:19:29Z | 39,287,653 | <p>For the Book model, the author is just an id (a foreign key) in the database, which is why it is returned as an integer.
Try adding an Author field to the Book serializer:</p>
<pre><code>class BookSerializer(serializers.ModelSerializer):
author = AuthorSerializer(read_only=True)
class Meta:
model = Book
fields = ('name', 'author')
</code></pre>
<p>This is a NestedSerializer, which is read-only by default. On its DRF doc site (<a href="http://www.django-rest-framework.org/api-guide/relations/#writable-nested-serializers" rel="nofollow">link</a>) it is mentioned that you </p>
<blockquote>
<p>need to create create() and/or update() methods in order to explicitly specify how the child relationships should be saved.</p>
</blockquote>
<p>So your BookSerializer needs to look like this</p>
<pre><code>class BookSerializer(serializers.ModelSerializer):
author = AuthorSerializer(read_only=True)
class Meta:
model = Book
fields = ('name', 'author')
def create(self, validated_data):
# Create new object
def update(self, instance, validated_data):
# Update existing instance
</code></pre>
<p>Both <code>create</code> and <code>update</code> methods need to save the object to the database and return it. They are called when you save your BookSerializer class instance.
The difference between them is that <code>create</code> is called when you create a new instance of the Book object </p>
<pre><code>serializer = BookSerializer(data=data)
</code></pre>
<p>and <code>update</code> is called if you passed an existing instance of Book object when instantiating the serializer class </p>
<pre><code>serializer = BookSerializer(book, data=data)
</code></pre>
<p>More information you can find <a href="http://www.django-rest-framework.org/api-guide/serializers/#saving-instances" rel="nofollow">here</a></p>
<p><strong>EDIT:</strong>
If you want to create an instance with NestedSerializer it should not be read only field.</p>
<pre><code>class BookSerializer(serializers.ModelSerializer):
author = AuthorSerializer(many=False)
class Meta:
model = Book
fields = ('name', 'author')
def create(self, validated_data):
author, _ = Author.objects.get_or_create(name=validated_data.get('author').get('name'))
return Book.objects.create(name=validated_data.get('name'), author=author)
def update(self, instance, validated_data):
author = validated_data.get('author')
if author:
            instance.author, _ = Author.objects.get_or_create(name=author.get('name'))
instance.name = validated_data.get('name', instance.name)
instance.save()
return instance
</code></pre>
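<p>For reference, a sketch of the body shape such a nested serializer exchanges; the title and name below are invented, the point is only the nesting:</p>

```python
# What a POST body for the nested BookSerializer could look like:
post_payload = {'name': 'Solaris', 'author': {'name': 'Stanislaw Lem'}}

# The read representation embeds the author object rather than a bare id:
read_representation = {'name': 'Solaris', 'author': {'name': 'Stanislaw Lem'}}
print(read_representation)
```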
| 1 | 2016-09-02T08:23:51Z | [
"python",
"django",
"django-rest-framework"
] |
Python Django REST call returning object number instead of object name | 39,287,577 | <p>I'm new to Python, and I guess I'm serializing it incorrectly.</p>
<p><strong>This is the REST call result:</strong>
<a href="http://i.stack.imgur.com/SXko7.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/SXko7.jpg" alt="enter image description here"></a></p>
<blockquote>
<p><strong>authors/models.py:</strong></p>
</blockquote>
<pre><code> from django.db import models
from django.contrib.auth.models import User
# Create your models here.
class Author(models.Model):
name = models.CharField(max_length=100)
def __str__(self):
return self.name
class Book(models.Model):
auto_increment_id = models.AutoField(primary_key=True)
name = models.CharField('Book name', max_length=100)
author = models.ForeignKey(Author, blank=False, null=False, related_name='book_author')
contents = models.TextField('Contents', blank=False, null=False)
def __str__(self):
return self.name
</code></pre>
<blockquote>
<p><strong>authors/views.py</strong></p>
</blockquote>
<pre><code>from authors.models import Book, Author
from authors.serializers import BookSerializer, AuthorSerializer
from django.shortcuts import render
from rest_framework import generics
class ListCreateBooks(generics.ListCreateAPIView):
queryset = Book.objects.all()
serializer_class = BookSerializer
class ListCreateAuthor(generics.ListCreateAPIView):
queryset = Author.objects.all()
serializer_class = AuthorSerializer
</code></pre>
<blockquote>
<p><strong>authors/serializers.py</strong></p>
</blockquote>
<pre><code>from authors.models import Book, Author
from rest_framework import serializers
class BookSerializer(serializers.ModelSerializer):
class Meta:
model = Book
fields = ('name', 'author')
class AuthorSerializer(serializers.ModelSerializer):
class Meta:
model = Author
book_author = 'name'
</code></pre>
<p>I'm new to Django, but I tried many things, in my views.py I've added another class called <code>AuthorSerializer</code> importing from a corresponding class I created in serializers, but then I realized that I have no clue how to add my <code>ListCreateAuthor</code> to:</p>
<pre><code> url(r'^api-auth/', ListCreateBooks.as_view(), name='list_books')
</code></pre>
<p>I've added another parameter with <code>ListCreateAuthor.as_view()</code> that gave me an immediate error (and which also didn't make much sense) Am I going the wrong way here and how can I solve this?</p>
<blockquote>
<p><strong>EDIT:</strong> @Abdulafaja gave a partial answer, which did solve the read, but now after checking a POST insert - it gives an error for create or update.</p>
</blockquote>
<p>In <a href="http://www.django-rest-framework.org/api-guide/serializers/#saving-instances" rel="nofollow">Django rest_framework's documentation</a> (link provided by @Abdulafaja) it gives only one example for a nested serializer, and it's a one-to-many relationship (Album->tracks), but mine is one-to-one (book->author), so I have no idea how to serialize the nested feature. This API needs to give the frontend all CRUD features.</p>
<p>Thanks.</p>
<blockquote>
<p><strong>EDITED serializers.py by Abdullah:</strong></p>
</blockquote>
<pre><code>from authors.models import Book, Author
from rest_framework import serializers
class AuthorSerializer(serializers.ModelSerializer):
class Meta:
model = Author
book_author = 'name'
class BookSerializer(serializers.ModelSerializer):
author = AuthorSerializer(many=False)
class Meta:
model = Book
fields = ('name', 'author')
def create(self, validated_data):
author, _ = Author.objects.get_or_create(name=validated_data.get('author').get('name'))
return Book.objects.create(name=validated_data.get('name'), author=author)
def update(self, instance, validated_data):
author = validated_data.get('author')
if author:
            instance.author, _ = Author.objects.get_or_create(name=author.get('name'))
instance.name = validated_data.get('name', instance.name)
instance.save()
return instance
</code></pre>
| 1 | 2016-09-02T08:19:29Z | 39,291,636 | <p>From my understanding, you wish to get the details of Authors in the API response, not just their id?</p>
<p>The way to do that is to set the depth in your model serializer.</p>
<p><pre><code>
class BookSerializer(serializers.ModelSerializer):
    class Meta:
        model = Book
        fields = ('name', 'author')
        depth = 1
</code></pre></p>
<p>The depth option should be set to an integer value that indicates the depth of relationships that should be traversed before reverting to a flat representation. i.e. 1, 2, 3...</p>
<p>source: <a href="http://www.django-rest-framework.org/api-guide/serializers/" rel="nofollow">http://www.django-rest-framework.org/api-guide/serializers/</a></p>
| 0 | 2016-09-02T11:45:21Z | [
"python",
"django",
"django-rest-framework"
] |
Unicode-escaped file processing error | 39,287,654 | <p>Not quite. The post linked above contains the solution to a problem that showed up in the comments of this post while trying to solve it, and seems unrelated to the original problem.</p>
<p>I have a raw text file containing only the following line, and no newline:</p>
<pre><code>Q853 \u0410\u043D\u0434\u0440\u0435\u0439 \u0410\u0440\u0441\u0435\u043D\u044C\u0435\u0432\u0438\u0447 \u0422\u0430\u0440\u043A\u043E\u0432\u0441\u043A\u0438\u0439
</code></pre>
<p>The characters are escaped as shown above, meaning that the <code>\u05E9</code> is really a backslash, followed by 5 alphanumeric characters (and not a Unicode character). I am trying to decode the file using the following code:</p>
<pre><code>import codecs
with codecs.open("wikidata-terms20.nt", 'r', encoding='unicode_escape') as input:
with open("wikidata-terms3.nt", "w") as output:
for line in input:
output.write(line)
</code></pre>
<p>Using <code>print</code> is not possible here; see the comments.</p>
<p>Running it gives me the following error:</p>
<pre><code>Traceback (most recent call last):
File "terms2.py", line 5, in <module>
for line in input:
File "C:\Program Files\Python35\lib\codecs.py", line 711, in __next__
return next(self.reader)
File "C:\Program Files\Python35\lib\codecs.py", line 642, in __next__
line = self.readline()
File "C:\Program Files\Python35\lib\codecs.py", line 555, in readline
data = self.read(readsize, firstline=True)
File "C:\Program Files\Python35\lib\codecs.py", line 501, in read
newchars, decodedbytes = self.decode(data, self.errors)
UnicodeDecodeError: 'unicodeescape' codec can't decode bytes in position 67-71: truncated \uXXXX escape
</code></pre>
<p>What is going on?</p>
<p>I am running Python 3.5.1 on Windows 8.1, and the code seems to work for most other Unicode characters (this line is the first one to cause the crash).</p>
<p>See edit history for the original question.</p>
| 5 | 2016-09-02T08:23:53Z | 39,289,924 | <p>It seems that the data read by the decoder is truncated at (after) character #72 (0-based character #71). That is somehow related to <a href="https://bugs.python.org/issue10344" rel="nofollow">this bug</a>.</p>
<p>The following code produces the same error as in your example:</p>
<pre><code>open("wikidata-terms20.nt", 'r').readline()
open("wikidata-terms20.nt", 'r').readline(72)
</code></pre>
<p>Increasing the readline size above the actual size of the input or setting it to -1 eliminates the error:</p>
<pre><code>open("wikidata-terms20.nt", 'r').readline(1000)
open("wikidata-terms20.nt", 'r').readline(-1)
</code></pre>
<p>Evidently, <code>for line in input:</code> obtains the line to be decoded with <code>readline()</code>, effectively truncating the data-to-be-decoded to 72 characters.</p>
<p>So here are a couple of workarounds:</p>
<p><strong>Workaround 1:</strong></p>
<pre><code>import codecs
with open("wikidata-terms20.nt", 'r') as input:
with open("wikidata-terms3.nt", "w") as output:
for line in input:
output.write(codecs.decode(line, 'unicode_escape'))
</code></pre>
<p><strong>Workaround 2:</strong></p>
<pre><code>import codecs
with codecs.open("wikidata-terms20.nt", 'r', encoding='unicode_escape') as input:
with open("wikidata-terms3.nt", "w") as output:
for line in input.readlines():
output.write(line)
</code></pre>
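<p>The decode step used in Workaround 1 can be exercised on a fragment of the sample line by itself:</p>

```python
import codecs

raw = r'Q853 \u0410\u043D\u0434\u0440\u0435\u0439'  # raw escapes, not Cyrillic yet
decoded = codecs.decode(raw, 'unicode_escape')
print(decoded)  # Q853 Андрей
```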
| 2 | 2016-09-02T10:14:17Z | [
"python",
"python-3.x",
"unicode",
"python-unicode"
] |
Handling ConnectionResetError | 39,287,669 | <p>I have a question on handling ConnectionResetError in Python 3. This usually happens when I use the urllib.request.Request function. I would like to know if it is OK to redo the request if we come across such an error. For example:</p>
<pre><code>def get_html(url):
try:
request = Request(url)
response = urlopen(request)
html = response.read()
    except ConnectionResetError as e:
get_html(url)
</code></pre>
| 0 | 2016-09-02T08:24:42Z | 39,288,941 | <p>It really depends on the server, but you could do something like:</p>
<pre><code>def get_html(url, retry_count=0):
try:
request = Request(url)
response = urlopen(request)
html = response.read()
    except ConnectionResetError as e:
if retry_count == MAX_RETRIES:
raise e
time.sleep(for_some_time)
get_html(url, retry_count + 1)
</code></pre>
<p>Also see <a href="http://stackoverflow.com/questions/20568216/python-handling-socket-error-errno-104-connection-reset-by-peer">Python handling socket.error: [Errno 104] Connection reset by peer</a></p>
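<p>The same retry idea can also be written iteratively; the helper name below is invented, and it is exercised against a fake function that fails twice before succeeding:</p>

```python
import time

def with_retries(fn, max_retries=3, delay=0.0):
    """Call fn(), retrying on ConnectionResetError up to max_retries extra times."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ConnectionResetError:
            if attempt == max_retries:
                raise
            time.sleep(delay)

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionResetError
    return '<html></html>'

html = with_retries(flaky)
print(html)
```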
| 0 | 2016-09-02T09:26:26Z | [
"python",
"urllib"
] |
How to specify property type when using Cloud Datastore API | 39,287,748 | <p>I have this python code snippet for adding a new entity from Google Compute Engine. But this code results in userId being created with an undefined type. How can I specify the type of the property when it is getting created from compute engine?</p>
<pre><code>kg = datastore.Entity(key)
kg.update({
    'userId': userId,
})
client.put(kg)
</code></pre>
| 0 | 2016-09-02T08:28:45Z | 39,297,807 | <p><code>gcloud-python</code> will infer the type of the property from the type of the value you place in the property map.</p>
<p><a href="https://github.com/GoogleCloudPlatform/gcloud-python/blob/master/gcloud/datastore/helpers.py#L303" rel="nofollow">https://github.com/GoogleCloudPlatform/gcloud-python/blob/master/gcloud/datastore/helpers.py#L303</a></p>
| 2 | 2016-09-02T17:23:59Z | [
"python",
"google-cloud-datastore"
] |
Python - Counting blank lines in a text file | 39,287,853 | <p>Let's say I have a file with the following content (every even line is blank):<br></p>
<blockquote>
<p>Line 1 <br><br> Line 2 <br><br> Line 3 <br><br> ...</p>
</blockquote>
<p>I tried to read the file in 2 ways:<br></p>
<pre><code>count = 0
for line in open("myfile.txt"):
if line == '': #or if len(line) == 0
count += 1
</code></pre>
<p>and<br></p>
<pre><code>count = 0
file = open('myfile.txt')
lines = file.readlines()
for line in lines:
if line == '': #or if len(line) == 0
count += 1
</code></pre>
<p>But <code>count</code> always remains 0. How can I count the number of blank lines?</p>
| 1 | 2016-09-02T08:33:58Z | 39,288,023 | <p>When you use the <code>readlines()</code> function, it doesn't automatically remove the EOL characters for you. So you either compare against the end of line, something like:</p>
<pre><code>if line == os.linesep:
count += 1
</code></pre>
<p>(you have to import <code>os</code> module of course), or you strip the line (as suggested by @khelwood's comment on your question) and compare against <code>''</code> as you are doing.</p>
<p>Notice that using <code>os.linesep</code> might not necessarily work as you would expect if you are running your program on a certain OS, e.g. MacOS, but the file you are checking is from a different OS, e.g. Linux, as the line ending will be different. So to check for all cases you have to do something like:</p>
<pre><code>if line == '\n' or line == '\r' or line == '\r\n':
count += 1
</code></pre>
<p>Hope this helps.</p>
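<p>That three-way comparison can be condensed into a membership test; a small sketch:</p>

```python
def is_blank(line):
    # Unix, old-Mac and Windows line endings all count as blank.
    return line in ('\n', '\r', '\r\n')

results = [is_blank(s) for s in ('\n', '\r\n', 'Line 1\n')]
print(results)
```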
| 2 | 2016-09-02T08:42:44Z | [
"python",
"file",
"lines"
] |
Python - Counting blank lines in a text file | 39,287,853 | <p>Let's say I have a file with the following content (every even line is blank):<br></p>
<blockquote>
<p>Line 1 <br><br> Line 2 <br><br> Line 3 <br><br> ...</p>
</blockquote>
<p>I tried to read the file in 2 ways:<br></p>
<pre><code>count = 0
for line in open("myfile.txt"):
if line == '': #or if len(line) == 0
count += 1
</code></pre>
<p>and<br></p>
<pre><code>count = 0
file = open('myfile.txt')
lines = file.readlines()
for line in lines:
if line == '': #or if len(line) == 0
count += 1
</code></pre>
<p>But <code>count</code> always remains 0. How can I count the number of blank lines?</p>
| 1 | 2016-09-02T08:33:58Z | 39,288,034 | <p>In a more simple and pythonic way:</p>
<pre><code>with open(filename) as fd:
count = sum(1 for line in fd if len(line.strip()) == 0)
</code></pre>
<p>This keeps the complexity linear in time and constant in memory.
And, most of all, it gets rid of <code>count</code> as a manually incremented variable.</p>
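<p>A self-contained run of the same idea against an in-memory file (sample text invented, with mixed line endings):</p>

```python
import io

text = 'Line 1\n\nLine 2\r\n\r\nLine 3\n'
with io.StringIO(text) as fd:
    count = sum(1 for line in fd if len(line.strip()) == 0)
print(count)
```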
| 2 | 2016-09-02T08:43:21Z | [
"python",
"file",
"lines"
] |
Python - Counting blank lines in a text file | 39,287,853 | <p>Let's say I have a file with the following content (every even line is blank):<br></p>
<blockquote>
<p>Line 1 <br><br> Line 2 <br><br> Line 3 <br><br> ...</p>
</blockquote>
<p>I tried to read the file in 2 ways:<br></p>
<pre><code>count = 0
for line in open("myfile.txt"):
if line == '': #or if len(line) == 0
count += 1
</code></pre>
<p>and<br></p>
<pre><code>count = 0
file = open('myfile.txt')
lines = file.readlines()
for line in lines:
if line == '': #or if len(line) == 0
count += 1
</code></pre>
<p>But <code>count</code> always remains 0. How can I count the number of blank lines?</p>
| 1 | 2016-09-02T08:33:58Z | 39,288,038 | <p>Every line ends with a newline character <code>'\n'</code>. Note that it is only one character.</p>
<p>An easy workaround is to check whether the line equals <code>'\n'</code>, or whether its length is <strong>1</strong>, not 0.</p>
| 1 | 2016-09-02T08:43:45Z | [
"python",
"file",
"lines"
] |
Python - Counting blank lines in a text file | 39,287,853 | <p>Let's say I have a file with the following content (every even line is blank):<br></p>
<blockquote>
<p>Line 1 <br><br> Line 2 <br><br> Line 3 <br><br> ...</p>
</blockquote>
<p>I tried to read the file in 2 ways:<br></p>
<pre><code>count = 0
for line in open("myfile.txt"):
if line == '': #or if len(line) == 0
count += 1
</code></pre>
<p>and<br></p>
<pre><code>count = 0
file = open('myfile.txt')
lines = file.readlines()
for line in lines:
if line == '': #or if len(line) == 0
count += 1
</code></pre>
<p>But <code>count</code> always remains 0. How can I count the number of blank lines?</p>
| 1 | 2016-09-02T08:33:58Z | 39,288,348 | <p>You can use <code>count</code> from itertools, which returns an iterator. Furthermore, I used just <code>strip</code> instead of checking the length.</p>
<pre><code>from itertools import count
counter = count()
with open('myfile.txt', 'r') as f:
for line in f.readlines():
if not line.strip():
            next(counter)
print(next(counter))
</code></pre>
| 1 | 2016-09-02T08:58:46Z | [
"python",
"file",
"lines"
] |
Do not show certain fields in admin page | 39,287,991 | <p>In Django, you get the model editor for free in the <code>admin/</code> pages. This all works fine, but I have a few fields in my models that are generated and should never be touched by anybody through a form.</p>
<p>How can I exclude them from these <code>admin/.../change/</code> forms?</p>
<p>I added exclude to the <code>ModelAdmin</code>:</p>
<pre><code>class exampleAdmin(admin.ModelAdmin):
exclude = ('field',)
class example(models.Model):
field = models.CharField(max_length = 100)
</code></pre>
| 1 | 2016-09-02T08:40:58Z | 39,288,158 | <blockquote>
<p>I have a few fields in my models that are
generated and should never be touched by anybody through a form.</p>
</blockquote>
<p>You may also use <a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#editable" rel="nofollow"><code>editable=False</code></a> on your model's field</p>
<blockquote>
<p><strong><a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#editable" rel="nofollow">Field.editable</a></strong></p>
<p>If False, the field <strong>will not be displayed in the admin or any other
ModelForm</strong>. They are also skipped during model validation. Default is
True.</p>
</blockquote>
<pre><code>class Example(models.Model):
field = models.CharField(max_length=100, editable=False)
</code></pre>
| 2 | 2016-09-02T08:50:20Z | [
"python",
"django"
] |
Do not show certain fields in admin page | 39,287,991 | <p>In Django, you get the model editor for free in the <code>admin/</code> pages. This all works fine, but I have a few fields in my models that are generated and should never be touched by anybody through a form.</p>
<p>How can I exclude them from these <code>admin/.../change/</code> forms?</p>
<p>I added exclude to the <code>ModelAdmin</code>:</p>
<pre><code>class exampleAdmin(admin.ModelAdmin):
exclude = ('field',)
class example(models.Model):
field = models.CharField(max_length = 100)
</code></pre>
| 1 | 2016-09-02T08:40:58Z | 39,288,182 | <p>You have to <code>register</code> your <code>exampleAdmin</code> to take effect. In your <code>admin.py</code> add <code>admin.site.register(example, exampleAdmin)</code></p>
| 3 | 2016-09-02T08:51:22Z | [
"python",
"django"
] |
Convert date/time columns in Pandas dataframe | 39,288,013 | <p>I have data sets containing the date (Julian day, column 1), hour (HHMM, column 2) and seconds (column 3) in individual columns:</p>
<pre><code>1 253 2300 0 2.9 114.4 18.42 21.17
1 253 2300 10 3.27 111.2 18.48 21.12
1 253 2300 20 3.22 111.3 18.49 21.09
1 253 2300 30 3.84 106.4 18.52 21
1 253 2300 40 3.75 104.4 18.53 20.85
</code></pre>
<p>I'm reading the text file using <code>Pandas</code> as:</p>
<pre><code>columns = ['station','julian_day','hours','seconds','U','Ud','T','RH']
df = pd.read_table(file_name, header=None, names=columns, delim_whitespace=True)
</code></pre>
<p>Now I want to convert the date to something more convenient like <code>YYYY-MM-DD HH:MM:SS</code> (<em>The year isn't provided in the data set, but is fixed at 2001</em>). </p>
<p>I tried combining the three columns into one using <code>parse_dates</code>:</p>
<pre><code>df = pd.read_table(file_name, header=None, names=columns, delim_whitespace=True,
parse_dates={'datetime' : ['julian_day','hours','seconds']})
</code></pre>
<p>which converts the three columns into one string:</p>
<pre><code>In [38]: df['datetime'][0]
Out[38]: '253 2300 0'
</code></pre>
<p>I next tried to convert them using <code>date_parser</code>; following <a href="http://stackoverflow.com/a/20500819/3581217">this post</a> using something like:</p>
<pre><code>date_parser = lambda x: datetime.datetime.strptime(x, '%j %H%M %s')
</code></pre>
<p>The <code>date_parser</code> itself works, but I can't get this to combine with <code>read_table</code>, and I'm pretty much stuck at this point. Is there an easy way to achieve the conversion?</p>
<p>The full minimal (not-so) working example:</p>
<pre><code>import pandas as pd
import datetime
from io import StringIO
data_file = StringIO("""\
1 253 2300 0 2.9 114.4 18.42 21.17
1 253 2300 10 3.27 111.2 18.48 21.12
1 253 2300 20 3.22 111.3 18.49 21.09
1 253 2300 30 3.84 106.4 18.52 21
1 253 2300 40 3.75 104.4 18.53 20.85
""")
date_parser = lambda x: datetime.datetime.strptime(x, '%j %H%M %S')
columns = ['station','julian_day','hours','seconds','U','Ud','T','RH']
df = pd.read_table(data_file, header=None, names=columns, delim_whitespace=True,\
parse_dates={'datetime' : ['julian_day','hours','seconds']})
</code></pre>
| 5 | 2016-09-02T08:42:18Z | 39,288,643 | <p>Would something along these lines work? : </p>
<pre><code>def merge_date(df, year='Year', month='Month', day='Day', hours='Hours', seconds='Seconds'):
"""
* Function: merge_date
* Usage: merge_date(DataFrame, col_year, col_month, col_day) . . .
* -------------------------------
* This function returns Datetime in the format YYYY-MM-DD from
* input of a dataframe with columns holding 'Year', 'Month', 'Day'
"""
df['DateTime'] = df[[year, month, day, hours, seconds]].apply(lambda s : datetime.datetime(*s),axis = 1)
return df
</code></pre>
<p>Use <code>datetime.datetime</code> with argument unpacking for each dataframe column.</p>
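<p>One pitfall worth noting with plain unpacking: <code>datetime.datetime</code>'s positional order is <code>(year, month, day, hour, minute, second)</code>, so a row laid out as year/month/day/hour/seconds has to route the last value into the sixth slot:</p>

```python
import datetime

row = [2001, 9, 10, 23, 40]  # year, month, day, hour, seconds
# Pass 0 for the minute so the seconds value lands in the seconds slot.
dt = datetime.datetime(row[0], row[1], row[2], row[3], 0, row[4])
print(dt)
```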
| 1 | 2016-09-02T09:12:32Z | [
"python",
"pandas"
] |
Convert date/time columns in Pandas dataframe | 39,288,013 | <p>I have data sets containing the date (Julian day, column 1), hour (HHMM, column 2) and seconds (column 3) in individual columns:</p>
<pre><code>1 253 2300 0 2.9 114.4 18.42 21.17
1 253 2300 10 3.27 111.2 18.48 21.12
1 253 2300 20 3.22 111.3 18.49 21.09
1 253 2300 30 3.84 106.4 18.52 21
1 253 2300 40 3.75 104.4 18.53 20.85
</code></pre>
<p>I'm reading the text file using <code>Pandas</code> as:</p>
<pre><code>columns = ['station','julian_day','hours','seconds','U','Ud','T','RH']
df = pd.read_table(file_name, header=None, names=columns, delim_whitespace=True)
</code></pre>
<p>Now I want to convert the date to something more convenient like <code>YYYY-MM-DD HH:MM:SS</code> (<em>The year isn't provided in the data set, but is fixed at 2001</em>). </p>
<p>I tried combining the three columns into one using <code>parse_dates</code>:</p>
<pre><code>df = pd.read_table(file_name, header=None, names=columns, delim_whitespace=True,
parse_dates={'datetime' : ['julian_day','hours','seconds']})
</code></pre>
<p>which converts the three columns into one string:</p>
<pre><code>In [38]: df['datetime'][0]
Out[38]: '253 2300 0'
</code></pre>
<p>I next tried to convert them using <code>date_parser</code>; following <a href="http://stackoverflow.com/a/20500819/3581217">this post</a> using something like:</p>
<pre><code>date_parser = lambda x: datetime.datetime.strptime(x, '%j %H%M %s')
</code></pre>
<p>The <code>date_parser</code> itself works, but I can't get this to combine with <code>read_table</code>, and I'm pretty much stuck at this point. Is there an easy way to achieve the conversion?</p>
<p>The full minimal (not-so) working example:</p>
<pre><code>import pandas as pd
import datetime
from io import StringIO
data_file = StringIO("""\
1 253 2300 0 2.9 114.4 18.42 21.17
1 253 2300 10 3.27 111.2 18.48 21.12
1 253 2300 20 3.22 111.3 18.49 21.09
1 253 2300 30 3.84 106.4 18.52 21
1 253 2300 40 3.75 104.4 18.53 20.85
""")
date_parser = lambda x: datetime.datetime.strptime(x, '%j %H%M %S')
columns = ['station','julian_day','hours','seconds','U','Ud','T','RH']
df = pd.read_table(data_file, header=None, names=columns, delim_whitespace=True,\
parse_dates={'datetime' : ['julian_day','hours','seconds']})
</code></pre>
| 5 | 2016-09-02T08:42:18Z | 39,288,736 | <p>Not sure if I am missing something but this seems to work:</p>
<pre><code>import pandas as pd
import datetime
from io import StringIO
data_file = StringIO("""\
1 253 2300 0 2.9 114.4 18.42 21.17
1 253 2300 10 3.27 111.2 18.48 21.12
1 253 2300 20 3.22 111.3 18.49 21.09
1 253 2300 30 3.84 106.4 18.52 21
1 253 2300 40 3.75 104.4 18.53 20.85
""")
date_parser = lambda x: datetime.datetime.strptime(("2001 " + x), '%Y %j %H%M %S')
columns = ['station','julian_day','hours','seconds','U','Ud','T','RH']
df = pd.read_table(data_file, header=None, names=columns, delim_whitespace=True,\
date_parser = date_parser,parse_dates={'datetime' : ['julian_day','hours','seconds']})
</code></pre>
<p>I just added the <strong>date_parser</strong> parameter in <code>read_table</code> and hard-coded <strong>2001</strong> in the parsing function. </p>
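<p>If wiring up <code>date_parser</code> into <code>read_table</code> feels awkward, an equivalent approach is to combine and parse the three columns after reading. This is only a sketch (same sample data, year hard-coded to 2001 as above), using <code>pd.to_datetime</code> with an explicit format:</p>

```python
import pandas as pd
from io import StringIO

data_file = StringIO("""\
1 253 2300 0 2.9 114.4 18.42 21.17
1 253 2300 10 3.27 111.2 18.48 21.12
""")
columns = ['station', 'julian_day', 'hours', 'seconds', 'U', 'Ud', 'T', 'RH']
df = pd.read_csv(data_file, header=None, names=columns, sep=r'\s+')

# Build one "YYYY jjj HHMM S" string per row and parse everything
# in a single vectorized call.
combined = ('2001 ' + df['julian_day'].astype(str) + ' '
            + df['hours'].astype(str).str.zfill(4) + ' '
            + df['seconds'].astype(str))
df['datetime'] = pd.to_datetime(combined, format='%Y %j %H%M %S')
print(df['datetime'][0])  # 2001-09-10 23:00:00
```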
| 2 | 2016-09-02T09:16:43Z | [
"python",
"pandas"
] |
modify list elements on a condition | 39,288,021 | <p>I am new to Python but have C/C++ background.
I am trying to modify a 2D list (list of lists) elements that satisfy a condition.
All I could come up with is the following solution:</p>
<pre><code>tsdata = [[random.random() for x in range(5)] for y in range(5)]
def check(tsdata):
for i in range(len(tsdata)):
for j in range(len(tsdata[i])):
if (tsdata[i][j] < min_value) or (tsdata[i][j]> max_value):
tsdata[i][j] = 0
check(tsdata)
</code></pre>
<p>I am sure there is a better solution which is actually suitable for Python but can't think of anything else that actually works. </p>
<p>EDIT: Actually, the list tsdata is a function argument and I am trying to modify it so some of your answers do not work. I have edited the code so this is clear.</p>
| 0 | 2016-09-02T08:42:40Z | 39,288,170 | <p>Welcome to the world of simplicity and compact statements.</p>
<pre><code>tsdata_mod = [[0 if (x < min_value or x > max_value) else x for x in y] for y in tsdata]
</code></pre>
<p>or in function mode:</p>
<pre><code>def check(my_list):
    my_list_mod = [[0 if (x < min_value or x > max_value) else x for x in y] for y in my_list]
return my_list_mod
tsdata_mod = check(tsdata)
</code></pre>
<p>The above is in my opinion a very "pythonic" way to go about this task.</p>
<hr>
<p>Also, when looping through containers such as lists, you don't do:</p>
<pre><code>for i in range(len(tsdata)):
for j in range(len(tsdata[i])):
# do something with "tsdata[i]","tsdata[i][j]"
</code></pre>
<p>But rather:</p>
<pre><code>for sublist in tsdata:
for entry in sublist:
# do something with "sublist", "entry"
</code></pre>
<p>And if you really need the indexes (which in your case you don't) you use the <code>enumerate()</code> function like so:</p>
<pre><code>for i_sub, sublist in enumerate(tsdata):
for i_entry, entry in enumerate(sublist):
# do something with "i_sub", "sublist", "i_entry", "entry"
</code></pre>
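<p>Tying this back to the EDIT in the question (the list is a function argument and must be modified in place), <code>enumerate()</code> gives you exactly the indices you need for assignment. A minimal runnable sketch with hypothetical bounds:</p>

```python
min_value, max_value = 0.25, 0.75   # hypothetical bounds, just for illustration

def check(tsdata):
    """Zero out entries outside [min_value, max_value], modifying tsdata in place."""
    for i, row in enumerate(tsdata):
        for j, value in enumerate(row):
            if value < min_value or value > max_value:
                tsdata[i][j] = 0

tsdata = [[0.1, 0.5], [0.9, 0.3]]
check(tsdata)
print(tsdata)  # [[0, 0.5], [0, 0.3]]
```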
| 0 | 2016-09-02T08:50:51Z | [
"python",
"list"
] |
modify list elements on a condition | 39,288,021 | <p>I am new to Python but have C/C++ background.
I am trying to modify a 2D list (list of lists) elements that satisfy a condition.
All I could come up with is the following solution:</p>
<pre><code>tsdata = [[random.random() for x in range(5)] for y in range(5)]
def check(tsdata):
for i in range(len(tsdata)):
for j in range(len(tsdata[i])):
if (tsdata[i][j] < min_value) or (tsdata[i][j]> max_value):
tsdata[i][j] = 0
check(tsdata)
</code></pre>
<p>I am sure there is a better solution which is actually suitable for Python but can't think of anything else that actually works. </p>
<p>EDIT: Actually, the list tsdata is a function argument and I am trying to modify it so some of your answers do not work. I have edited the code so this is clear.</p>
| 0 | 2016-09-02T08:42:40Z | 39,288,192 | <p>Your solution is not bad, but the following could be more simpler to understand. (depends of how your code should evolve later)</p>
<pre><code>def new_value(value):
"""Encapsulation of the core treatment"""
return 0 if (value < min_value) or (value > max_value) else value
new_tsdata = [[new_value(value) for value in row] for row in tsdata]
</code></pre>
<p>In Python, you should prefer creating new objects over modifying existing ones.</p>
| 0 | 2016-09-02T08:51:43Z | [
"python",
"list"
] |
modify list elements on a condition | 39,288,021 | <p>I am new to Python but have C/C++ background.
I am trying to modify a 2D list (list of lists) elements that satisfy a condition.
All I could come up with is the following solution:</p>
<pre><code>tsdata = [[random.random() for x in range(5)] for y in range(5)]
def check(tsdata):
for i in range(len(tsdata)):
for j in range(len(tsdata[i])):
if (tsdata[i][j] < min_value) or (tsdata[i][j]> max_value):
tsdata[i][j] = 0
check(tsdata)
</code></pre>
<p>I am sure there is a better solution which is actually suitable for Python but can't think of anything else that actually works. </p>
<p>EDIT: Actually, the list tsdata is a function argument and I am trying to modify it so some of your answers do not work. I have edited the code so this is clear.</p>
| 0 | 2016-09-02T08:42:40Z | 39,288,279 | <p>If you want to handle big arrays, you may want to use <code>numpy</code>. Besides being much more efficient, I think the code is easier to read:</p>
<pre><code>from numpy import random
tsdata = random.random((5, 5))
tsdata[tsdata < min_value] = 0
tsdata[tsdata > max_value] = 0
</code></pre>
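<p>With concrete (made-up) bounds and data, the boolean-mask assignment looks like this; the two conditions can also be combined into a single mask with <code>|</code>:</p>

```python
import numpy as np

min_value, max_value = 0.0, 1.0          # hypothetical bounds

tsdata = np.array([[-0.5, 0.3],
                   [ 1.2, 0.8]])
# One combined mask instead of two separate assignments.
tsdata[(tsdata < min_value) | (tsdata > max_value)] = 0
print(tsdata)
```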
| 2 | 2016-09-02T08:55:41Z | [
"python",
"list"
] |
How to set proxy while using urllib3.PoolManager in python | 39,288,168 | <p>I am currently using connection pool provided by urllib3 in python like the following,</p>
<pre><code>pool = urllib3.PoolManager(maxsize = 10)
resp = pool.request('GET', 'http://example.com')
content = resp.read()
resp.release_conn()
</code></pre>
<p>However, I don't know how to set proxy while using this connection pool. I tried to set proxy in the 'request' like <code>pool.request('GET', 'http://example.com', proxies={'http': '123.123.123.123:8888'}</code> but it didn't work.</p>
<p>Can someone tell me how to set the proxy while using the connection pool?</p>
<p>Thanks~</p>
| 0 | 2016-09-02T08:50:43Z | 39,294,281 | <p>There is an example for how to use a proxy with urllib3 in the <a href="https://urllib3.readthedocs.io/en/latest/advanced-usage.html#proxies" rel="nofollow">Advanced Usage section of the documentation</a>. I adapted it to fit your example:</p>
<pre><code>import urllib3
proxy = urllib3.ProxyManager('http://123.123.123.123:8888/', maxsize=10)
resp = proxy.request('GET', 'http://example.com/')
content = resp.read()
# You don't actually need to release_conn() if you're reading the full response.
# This will be a harmless no-op:
resp.release_conn()
</code></pre>
<p>The <code>ProxyManager</code> behaves the same way as a <code>PoolManager</code> would.</p>
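<p>Without making any network calls, you can check which proxy a <code>ProxyManager</code> will use via its <code>proxy</code> attribute (a parsed URL). A small sketch with the same made-up address:</p>

```python
import urllib3

proxy = urllib3.ProxyManager('http://123.123.123.123:8888/', maxsize=10)

# The parsed proxy URL is stored on the manager; no request is sent here.
print(proxy.proxy.host, proxy.proxy.port)  # 123.123.123.123 8888
```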
| 0 | 2016-09-02T14:01:08Z | [
"python",
"proxy",
"connection-pooling",
"urllib3"
] |
How to plot data from multiple files in a loop using matplotlib in python? | 39,288,210 | <p>I have more than 1000 files which are .CSV (data_1.csv......data1000.csv), each containing X and Y values!</p>
<pre><code>x1 y1 x2 y2
5.0 60 5.5 500
6.0 70 6.5 600
7.0 80 7.5 700
8.0 90 8.5 800
9.0 100 9.5 900
</code></pre>
<hr>
<p>I have made a subplot program in python which can give two plots (plot1-X1vsY1, Plot2-X2vsY2) at a time using one file.
I need help in looping all the files, (open a file, read it, plot it, pick another file, open it, read it, plot it, then next ..... till all the files in a folder gets plotted)</p>
<h2>I have made a small code:</h2>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df1=pd.read_csv("data_csv",header=1,sep=',')
fig = plt.figure()
plt.subplot(2, 1, 1)
plt.plot(df1.iloc[:,[1]],df1.iloc[:,[2]])
plt.subplot(2, 1, 2)
plt.plot(df1.iloc[:,[3]],df1.iloc[:,[4]])
plt.show()
</code></pre>
<p>Any loop structure to do this task more efficiently ??</p>
| 0 | 2016-09-02T08:52:42Z | 39,288,386 | <p>You can generate a list of filenames using <code>glob</code> and then plot them in a for loop.</p>
<pre><code>import glob
import pandas as pd
import matplotlib.pyplot as plt

files = glob.glob('*.csv')  # file pattern, e.g. 'data_*.csv'
for file in files:
    df1 = pd.read_csv(file, header=1, sep=',')
    fig = plt.figure()
    plt.subplot(2, 1, 1)
    plt.plot(df1.iloc[:, [1]], df1.iloc[:, [2]])
    plt.subplot(2, 1, 2)
    plt.plot(df1.iloc[:, [3]], df1.iloc[:, [4]])
    plt.show()  # this will block the loop until you close the plot
</code></pre>
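<p>The same looping pattern works with the standard library alone. A self-contained sketch (temporary throwaway files stand in for your data_*.csv series) that also shows why sorting the glob result is useful, since <code>glob</code> returns files in arbitrary order:</p>

```python
import csv
import glob
import os
import tempfile

# Create two throwaway CSV files standing in for data_1.csv, data_2.csv.
tmpdir = tempfile.mkdtemp()
for i, row in enumerate([['5.0', '60'], ['6.0', '70']], start=1):
    with open(os.path.join(tmpdir, 'data_%d.csv' % i), 'w', newline='') as f:
        csv.writer(f).writerow(row)

# sorted() makes the processing order deterministic.
files = sorted(glob.glob(os.path.join(tmpdir, 'data_*.csv')))
all_rows = []
for path in files:
    with open(path, newline='') as f:
        all_rows.extend(csv.reader(f))

print(all_rows)  # [['5.0', '60'], ['6.0', '70']]
```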
| 2 | 2016-09-02T09:00:31Z | [
"python",
"loops",
"csv",
"matplotlib",
"plot"
] |
How to plot data from multiple files in a loop using matplotlib in python? | 39,288,210 | <p>I have more than 1000 files which are .CSV (data_1.csv......data1000.csv), each containing X and Y values!</p>
<pre><code>x1 y1 x2 y2
5.0 60 5.5 500
6.0 70 6.5 600
7.0 80 7.5 700
8.0 90 8.5 800
9.0 100 9.5 900
</code></pre>
<hr>
<p>I have made a subplot program in python which can give two plots (plot1-X1vsY1, Plot2-X2vsY2) at a time using one file.
I need help in looping all the files, (open a file, read it, plot it, pick another file, open it, read it, plot it, then next ..... till all the files in a folder gets plotted)</p>
<h2>I have made a small code:</h2>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df1=pd.read_csv("data_csv",header=1,sep=',')
fig = plt.figure()
plt.subplot(2, 1, 1)
plt.plot(df1.iloc[:,[1]],df1.iloc[:,[2]])
plt.subplot(2, 1, 2)
plt.plot(df1.iloc[:,[3]],df1.iloc[:,[4]])
plt.show()
</code></pre>
<p>Any loop structure to do this task more efficiently ??</p>
| 0 | 2016-09-02T08:52:42Z | 39,371,508 | <p>Here is the basic setup for what am using here at work. This code will plot the data from each file and through each file separately. This will work on any number of files as long as column names remain the same. Just direct it to the proper folder.</p>
<pre><code>import os
import csv
import numpy as np
import matplotlib.pyplot as plt
def graphWriterIRIandRut():
m = 0
List1 = []
List2 = []
List3 = []
List4 = []
fileList = []
for file in os.listdir(os.getcwd()):
fileList.append(file)
while m < len(fileList):
for col in csv.DictReader(open(fileList[m],'rU')):
List1.append(col['Col 1 Name'])
List2.append(col['Col 2 Name'])
List3.append(col['Col 3 Name'])
List4.append(col['Col 4 Name'])
plt.subplot(2, 1, 1)
plt.grid(True)
        colors = np.random.rand(3)  # random RGB color
plt.plot(List1,List2,c=colors)
plt.tick_params(axis='both', which='major', labelsize=8)
plt.subplot(2, 1, 2)
plt.grid(True)
        colors = np.random.rand(3)  # random RGB color
plt.plot(List1,List3,c=colors)
plt.tick_params(axis='both', which='major', labelsize=8)
m = m + 1
continue
plt.show()
plt.gcf().clear()
plt.close('all')
</code></pre>
| 0 | 2016-09-07T13:39:56Z | [
"python",
"loops",
"csv",
"matplotlib",
"plot"
] |
How to plot data from multiple files in a loop using matplotlib in python? | 39,288,210 | <p>I have more than 1000 files which are .CSV (data_1.csv......data1000.csv), each containing X and Y values!</p>
<pre><code>x1 y1 x2 y2
5.0 60 5.5 500
6.0 70 6.5 600
7.0 80 7.5 700
8.0 90 8.5 800
9.0 100 9.5 900
</code></pre>
<hr>
<p>I have made a subplot program in python which can give two plots (plot1-X1vsY1, Plot2-X2vsY2) at a time using one file.
I need help in looping all the files, (open a file, read it, plot it, pick another file, open it, read it, plot it, then next ..... till all the files in a folder gets plotted)</p>
<h2>I have made a small code:</h2>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df1=pd.read_csv("data_csv",header=1,sep=',')
fig = plt.figure()
plt.subplot(2, 1, 1)
plt.plot(df1.iloc[:,[1]],df1.iloc[:,[2]])
plt.subplot(2, 1, 2)
plt.plot(df1.iloc[:,[3]],df1.iloc[:,[4]])
plt.show()
</code></pre>
<p>Any loop structure to do this task more efficiently ??</p>
| 0 | 2016-09-02T08:52:42Z | 39,527,344 | <pre><code># plotting all the file data and saving the plots
import os
import csv
import matplotlib.pyplot as plt
import numpy as np
def graphWriterIRIandRut():
m = 0
List1 = []
List2 = []
List3 = []
List4 = []
fileList = []
for file in os.listdir(os.getcwd()):
fileList.append(file)
while m < len(fileList):
for col in csv.DictReader(open(fileList[m],'rU')):
List1.append(col['x1'])
List2.append(col['y1'])
List3.append(col['x2'])
List4.append(col['y2'])
plt.subplot(2, 1, 1)
plt.grid(True)
        colors = np.random.rand(3)  # random RGB color
        plt.plot(List1,List2,c=colors)
plt.tick_params(axis='both', which='major', labelsize=8)
plt.subplot(2, 1, 2)
plt.grid(True)
        colors = np.random.rand(3)  # random RGB color
        plt.plot(List1,List3,c=colors)
plt.tick_params(axis='both', which='major', labelsize=8)
m = m + 1
continue
plt.show()
plt.gcf().clear()
plt.close('all')
</code></pre>
| 0 | 2016-09-16T08:52:06Z | [
"python",
"loops",
"csv",
"matplotlib",
"plot"
] |
How to collect a record by a parameter python | 39,288,333 | <p>Python source :</p>
<pre><code>@app.route('/pret/<int:idPret>', methods=['GET'])
def descrire_un_pret(idPret):
j = 0
for j in prets:
if prets[j]['id'] == idPret:
reponse = make_response(json.dumps(prets[j],200))
reponse.headers = {"Content-Type": "application/json"}
return reponse
</code></pre>
<p>I would like to retrieve a record in the <code>prets</code> list by the <code>idPret</code> parameter. Nevertheless I get an error:</p>
<pre><code>TypeError: list indices must be integers, not dict
</code></pre>
| -2 | 2016-09-02T08:58:05Z | 39,288,394 | <p><code>j</code> is not a number. <code>j</code> is one element in the <code>prets</code> list. Python loops are foreach loops. <code>if j['id'] == idPret:</code> would work:</p>
<pre><code>for j in prets:
if j['id'] == idPret:
        reponse = make_response(json.dumps(j), 200)
reponse.headers = {"Content-Type": "application/json"}
return reponse
</code></pre>
<p>I'd use a different name here, and use the <a href="http://flask.pocoo.org/docs/0.11/api/#flask.json.jsonify" rel="nofollow"><code>flask.json.jsonify()</code> function</a> to create the response:</p>
<pre><code>from flask import jsonify
for pret in prets:
if pret['id'] == idPret:
return jsonify(pret)
</code></pre>
<p><code>jsonify()</code> takes care of converting to JSON and creating the response object with the right headers for you.</p>
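<p>The lookup itself can also be written without an explicit loop, using <code>next()</code> with a default; a sketch with made-up sample data:</p>

```python
prets = [{'id': 1, 'montant': 100}, {'id': 2, 'montant': 250}]  # made-up records

# next() returns the first match, or the default (None) when nothing matches,
# which is handy for returning a 404 from the Flask view.
pret = next((p for p in prets if p['id'] == 2), None)
print(pret)  # {'id': 2, 'montant': 250}
```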
| 0 | 2016-09-02T09:00:55Z | [
"python",
"flask"
] |
How to collect a record by a parameter python | 39,288,333 | <p>Python source :</p>
<pre><code>@app.route('/pret/<int:idPret>', methods=['GET'])
def descrire_un_pret(idPret):
j = 0
for j in prets:
if prets[j]['id'] == idPret:
reponse = make_response(json.dumps(prets[j],200))
reponse.headers = {"Content-Type": "application/json"}
return reponse
</code></pre>
<p>I would like to retrieve a record in the <code>prets</code> list by the <code>idPret</code> parameter. Nevertheless I get an error:</p>
<pre><code>TypeError: list indices must be integers, not dict
</code></pre>
| -2 | 2016-09-02T08:58:05Z | 39,288,414 | <p>You are mistaking Python's for behavior for Javascript's. When you do
<code>for j in prets:</code> in Python, the contents of <code>j</code> are already the elements in <code>prets</code>, not an index number.</p>
| 0 | 2016-09-02T09:02:11Z | [
"python",
"flask"
] |
Time series prediction with LSTM using Keras: Wrong number of dimensions: expected 3, got 2 with shape | 39,288,341 | <p>I am trying to predict the next value in the time series using the previous 20 values. Here is a sample from my code:</p>
<p><code>X_train.shape</code> is <code>(15015, 20)</code></p>
<p><code>Y_train.shape</code> is <code>(15015,)</code></p>
<pre><code>EMB_SIZE = 1
HIDDEN_RNN = 3
model = Sequential()
model.add(LSTM(input_shape = (EMB_SIZE,), input_dim=EMB_SIZE, output_dim=HIDDEN_RNN, return_sequences=True))
model.add(LSTM(input_shape = (EMB_SIZE,), input_dim=EMB_SIZE, output_dim=HIDDEN_RNN, return_sequences=False))
model.add(Dense(1))
model.add(Activation('softmax'))
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(X_train,
Y_train,
nb_epoch=5,
batch_size = 128,
verbose=1,
validation_split=0.1)
score = model.evaluate(X_test, Y_test, batch_size=128)
print score
</code></pre>
<p>Though when I ran my code I got the following error:</p>
<p><code>TypeError: ('Bad input argument to theano function with name "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py:484" at index 0(0-based)', 'Wrong number of dimensions: expected 3, got 2 with shape (32, 20).')</code></p>
<p>I was trying to replicate the results in this post: <a href="https://medium.com/@alexrachnog/neural-networks-for-algorithmic-trading-part-one-simple-time-series-forecasting-f992daa1045a#.muimc84k6" rel="nofollow">neural networks for algorithmic trading</a>. Here is a link to the git repo: <a href="https://github.com/Rachnog/Deep-Trading" rel="nofollow">link</a></p>
<p>It seems to be a conceptual error. Please post any sources where I can get a better understanding of LSTMs for time series prediction. Also, please explain how I can fix this error, so that I can reproduce the results in the article mentioned above.</p>
| 0 | 2016-09-02T08:58:19Z | 39,289,342 | <p>If I understand your problem correctly, your input data a set of 15015 1D sequences of length 20. According to Keras doc, the input is a 3D tensor with shape (nb_samples, timesteps, input_dim). In your case, the shape of <code>X</code> should then be <code>(15015, 20, 1)</code>.</p>
<p>Also, you just need to give <code>input_dim</code> to the first <code>LSTM</code> layer. <code>input_shape</code> is redundant and the second layer will infer its input shape automatically:</p>
<pre><code>model = Sequential()
model.add(LSTM(input_dim=EMB_SIZE, output_dim=HIDDEN_RNN, return_sequences=True))
model.add(LSTM(output_dim=HIDDEN_RNN, return_sequences=False))
</code></pre>
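<p>Reshaping the 2D array into the expected 3D tensor is a one-liner in NumPy; a toy-sized sketch (6 sequences instead of 15015):</p>

```python
import numpy as np

X = np.arange(6 * 20).reshape(6, 20)   # toy stand-in for X_train, shape (6, 20)

# Keras recurrent layers expect (nb_samples, timesteps, input_dim),
# so add a trailing feature axis of size 1.
X3 = X[:, :, np.newaxis]               # equivalently: X.reshape(6, 20, 1)
print(X3.shape)  # (6, 20, 1)
```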
| 0 | 2016-09-02T09:44:48Z | [
"python",
"machine-learning",
"time-series",
"keras"
] |
Celery flower doesn't work under supervisor managment | 39,288,354 | <p>I have Ubuntu 14.04.4 LTS running as a vagrant environment under virtualbox. In this box I have this configuration:</p>
<ul>
<li><p>supervisor 3.0b2</p></li>
<li><p>python 3.4 under virtualenvironment</p></li>
<li><p>celery 3.1.23</p></li>
<li><p>flower 0.9.1</p></li>
</ul>
<p>A flower configuration under supervisor is:</p>
<pre><code>[program:flower]
command=/home/vagrant/.virtualenvs/meridian/bin/python /vagrant/meridian/meridian/manage.py celery flower --loglevel=INFO -conf=/vagrant/meridian/meridian/meridian/flowerconfig.py
directory=/vagrant/meridian/meridian
user=vagrant
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/flower-stdout.log
stderr_logfile=/var/log/supervisor/flower-stderr.log
priority=997
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=5
stderr_logfile_maxbytes=10MB
stderr_logfile_backups=5
</code></pre>
<p>The flowerconfig.py is an empty file. So all the values are default. Host is localhost and port is 5555.</p>
<p>When I run flower from a command line:</p>
<blockquote>
<p>vagrant@localhost> flower</p>
</blockquote>
<p>it is run as it should and I see the tasks result in my browser, visiting an address localhost:5555.</p>
<p>netstat shows me the ports that are listened:</p>
<blockquote>
<p>vagrant@localhost> netstat -l | grep 5555</p>
</blockquote>
<pre><code>tcp 0 0 *:5555 *:* LISTEN
tcp6 0 0 [::]:5555 [::]:* LISTEN
</code></pre>
<p>So, it is OK. </p>
<p>When I run flowe under supervisor in this way:</p>
<blockquote>
<p>vagrant@localhost> sudo supervisorctl start flower</p>
</blockquote>
<p>it starts as it should. Netstat shows that port 5555 are listened. But a query from a browser just hangs. </p>
<p>Why flower doesn't reply under supervisor ?</p>
| 1 | 2016-09-02T08:59:05Z | 39,455,788 | <p>I have found a solution.
The problem was that I ran flower not under my virtual environment. I added a shell file "start_flower.sh":</p>
<pre><code>source /usr/share/virtualenvwrapper/virtualenvwrapper.sh
source /home/vagrant/.virtualenvs/meridian/bin/activate
workon meridian
exec flower --conf=/vagrant/meridian/meridian/meridian/flowerconfig.py
</code></pre>
<p>and it started working as it should.</p>
<p>Then I rewrote the supervisor configuration for flower in this way:</p>
<pre><code>[program:flower]
command=bash -c "/vagrant/meridian/meridian/start_flower.sh"
directory=/vagrant/meridian/meridian
user=vagrant
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/flower-stdout.log
stderr_logfile=/var/log/supervisor/flower-stderr.log
priority=997
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=5
stderr_logfile_maxbytes=10MB
stderr_logfile_backups=5
stopasgroup=true
killasgroup=true
</code></pre>
<p>and now everything is good.</p>
<p>Note: I want to draw your attention to fact that I run flower in shell file with "exec":</p>
<pre><code>exec flower --conf=/vagrant/meridian/meridian/meridian/flowerconfig.py
</code></pre>
<p>I use this there because when I use that construction:</p>
<pre><code>flower --conf=/vagrant/meridian/meridian/meridian/flowerconfig.py
</code></pre>
<p>I had a problew - shell file process was terminated with:</p>
<pre><code>sudo supervisorctl stop flower
</code></pre>
<p>But flower process was working anyway!</p>
<p>So if you would face with such kind of problem always use 'exec'.
This is a good additional information for such cases:</p>
<p><a href="http://veithen.github.io/2014/11/16/sigterm-propagation.html" rel="nofollow">http://veithen.github.io/2014/11/16/sigterm-propagation.html</a></p>
| 0 | 2016-09-12T17:38:06Z | [
"python",
"celery",
"supervisord",
"flower"
] |
Reading several lines in a .csv file | 39,288,464 | <p>I want to read several lines of a CSV file. I am creating a list and appending one row to the list. Then I try to print the list. But the list is empty. The CSV file looks as follows:</p>
<pre><code>`hallo;das;ist;ein;test;der;hoffentlich;funktioniert;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert1;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert2;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert3;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert4;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert5;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert6;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert7;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert8;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert9;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert10;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert11;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert12;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert13;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert14;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert15;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert16;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert17;fingerscrossed;
`
</code></pre>
<p>This is my Code:</p>
<pre><code>import csv
spamreader = csv.reader(open('test.csv'), delimiter = ';')
verbraeuche_ab_reset = []
def berechne_gemittelten_verbrauch():
anzahl_zeilen = sum(1 for row in spamreader)
for row in spamreader:
if spamreader.line_num > 9 and spamreader.line_num < anzahl_zeilen:
verbrauch_ab_reset = row[7]
verbraeuche_ab_reset.append(verbrauch_ab_reset)
print(verbraeuche_ab_reset)
print(anzahl_zeilen)
berechne_gemittelten_verbrauch()
</code></pre>
<p>Thx in advance!</p>
| 0 | 2016-09-02T09:04:48Z | 39,288,673 | <p>The following works.
Remember: iterating over the data in the line <code>anzahl_zeilen ...</code> exhausts the reader, making it impossible to iterate over your data again.</p>
<p>Second thing: <code>if spamreader.line_num > 9 and spamreader.line_num < anzahl_zeilen:</code> neither really checks the columns in the row nor whether you're at the end. The iterator handles the latter for you. To count the columns, use <code>len(row)</code> instead.</p>
<pre><code>import csv
spamreader = csv.reader(open('test.csv'), delimiter = ';')
def berechne_gemittelten_verbrauch():
#anzahl_zeilen = sum(1 for row in spamreader) # kills your data / iterator is at the end
verbraeuche_ab_reset = []
for row in spamreader:
if len(row) > 9:
verbrauch_ab_reset = row[7]
verbraeuche_ab_reset.append(verbrauch_ab_reset)
return verbraeuche_ab_reset
verb = berechne_gemittelten_verbrauch()
# subsets
print(verb[9:11])
</code></pre>
<p>Please read <a href="http://stackoverflow.com/questions/509211/explain-pythons-slice-notation">python subset notation</a></p>
<pre><code>a[start:end] # items start through end-1
a[start:] # items start through the rest of the array
a[:end] # items from the beginning through end-1
a[:] # a copy of the whole array
</code></pre>
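<p>Applied to the list of rows the function returns, the slice notation behaves like this (toy data):</p>

```python
verb = ['r0', 'r1', 'r2', 'r3', 'r4']   # stand-in for the returned rows

print(verb[9:11])  # [] -- out-of-range slices are simply empty, not an error
print(verb[1:3])   # ['r1', 'r2']
print(verb[2:])    # ['r2', 'r3', 'r4']
print(verb[:2])    # ['r0', 'r1']
```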
| 0 | 2016-09-02T09:13:45Z | [
"python",
"python-3.x",
"csv"
] |
Reading several lines in a .csv file | 39,288,464 | <p>I want to read several lines of a CSV file. I am creating a list and appending one row to the list. Then I try to print the list. But the list is empty. The CSV file looks as follows:</p>
<pre><code>`hallo;das;ist;ein;test;der;hoffentlich;funktioniert;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert1;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert2;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert3;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert4;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert5;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert6;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert7;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert8;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert9;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert10;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert11;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert12;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert13;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert14;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert15;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert16;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert17;fingerscrossed;
`
</code></pre>
<p>This is my Code:</p>
<pre><code>import csv
spamreader = csv.reader(open('test.csv'), delimiter = ';')
verbraeuche_ab_reset = []
def berechne_gemittelten_verbrauch():
anzahl_zeilen = sum(1 for row in spamreader)
for row in spamreader:
if spamreader.line_num > 9 and spamreader.line_num < anzahl_zeilen:
verbrauch_ab_reset = row[7]
verbraeuche_ab_reset.append(verbrauch_ab_reset)
print(verbraeuche_ab_reset)
print(anzahl_zeilen)
berechne_gemittelten_verbrauch()
</code></pre>
<p>Thx in advance!</p>
| 0 | 2016-09-02T09:04:48Z | 39,288,697 | <p>The problem with your code is that you are iterating over the spamreader twice. You can only do that once.</p>
<p>this statement will result in the correct answer.</p>
<pre><code>anzahl_zeilen = sum(1 for row in spamreader)
</code></pre>
<p>but when you now iterate over the same spamreader you'll get empty list, since you have already iterated over the file once</p>
<pre><code>for row in spamreader:
if spamreader.line_num > 9 and spamreader.line_num < anzahl_zeilen:
verbrauch_ab_reset = row[7]
verbraeuche_ab_reset.append(verbrauch_ab_reset)
</code></pre>
<p>to solve this use,</p>
<pre><code>spamreader = csv.reader(open('test.csv'), delimiter = ';')
anzahl_zeilen = sum(1 for row in spamreader)
spamreader = csv.reader(open('test.csv'), delimiter = ';')
for row in spamreader:
if spamreader.line_num > 9 and spamreader.line_num < anzahl_zeilen:
verbrauch_ab_reset = row[7]
verbraeuche_ab_reset.append(verbrauch_ab_reset)
</code></pre>
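<p>Re-opening the file twice works, but a simpler variant (a sketch using an in-memory file) is to materialize the reader into a list once, so you can count and iterate as often as you like:</p>

```python
import csv
from io import StringIO

data = StringIO('a;b;c\nd;e;f\n')   # stands in for open('test.csv')

rows = list(csv.reader(data, delimiter=';'))   # read everything once
anzahl_zeilen = len(rows)

print(anzahl_zeilen)  # 2
print(rows[0][1])     # b
```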
| 0 | 2016-09-02T09:15:04Z | [
"python",
"python-3.x",
"csv"
] |
Reading several lines in a .csv file | 39,288,464 | <p>I want to read several lines of a CSV file. I am creating a list and appending one row to the list. Then I try to print the list. But the list is empty. The CSV file looks as follows:</p>
<pre><code>`hallo;das;ist;ein;test;der;hoffentlich;funktioniert;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert1;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert2;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert3;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert4;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert5;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert6;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert7;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert8;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert9;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert10;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert11;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert12;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert13;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert14;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert15;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert16;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert17;fingerscrossed;
`
</code></pre>
<p>This is my Code:</p>
<pre><code>import csv
spamreader = csv.reader(open('test.csv'), delimiter = ';')
verbraeuche_ab_reset = []
def berechne_gemittelten_verbrauch():
anzahl_zeilen = sum(1 for row in spamreader)
for row in spamreader:
if spamreader.line_num > 9 and spamreader.line_num < anzahl_zeilen:
verbrauch_ab_reset = row[7]
verbraeuche_ab_reset.append(verbrauch_ab_reset)
print(verbraeuche_ab_reset)
print(anzahl_zeilen)
berechne_gemittelten_verbrauch()
</code></pre>
<p>Thx in advance!</p>
| 0 | 2016-09-02T09:04:48Z | 39,290,326 | <pre><code># try this code its very simple
input:
filename :samp1.csv
c1;c2;c3;c4;c5;c6;c7;c8;c9;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert1;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert2;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert3;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert4;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert5;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert6;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert7;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert8;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert9;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert10;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert11;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert12;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert13;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert14;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert15;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert16;fingerscrossed;
hallo;das;ist;ein;test;der;hoffentlich;funktioniert17;fingerscrossed;
#read the file
import pandas as pd
data = pd.read_csv('samp1.csv',sep=';')
df = pd.DataFrame({'c1':data.c1,'c2':data.c2,'c3':data.c3,'c4':data.c4,'c5':data.c5,'c6':data.c6,'c7':data.c7,'c8':data.c8,'c9':data.c9,})
#suppose we want to print first 6 lines
lines = df.loc[:5, ['c1','c2','c3','c4','c5','c6','c7','c8','c9']]  # .ix is deprecated; .loc works here
print(lines)
output:
c1 c2 c3 c4 c5 c6 c7 c8 c9
0 hallo das ist ein test der hoffentlich funktioniert fingerscrossed
1 hallo das ist ein test der hoffentlich funktioniert1 fingerscrossed
2 hallo das ist ein test der hoffentlich funktioniert2 fingerscrossed
3 hallo das ist ein test der hoffentlich funktioniert3 fingerscrossed
4 hallo das ist ein test der hoffentlich funktioniert4 fingerscrossed
5 hallo das ist ein test der hoffentlich funktioniert5 fingerscrossed
</code></pre>
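<p>A separate note on the question's original snippet: the <code>csv.reader</code> iterator is already exhausted by the <code>sum(1 for row in spamreader)</code> line, so the following <code>for</code> loop never runs. Reading the rows into a list first avoids that. A self-contained sketch (with made-up stand-in data; only the column/row indices follow the question):</p>

```python
import csv
import io

# Stand-in for test.csv: a header line plus 14 data rows; column 8 holds a number
data = "h1;h2;h3;h4;h5;h6;h7;h8\n" + "\n".join(
    "a;b;c;d;e;f;g;%d" % i for i in range(1, 15)
)

rows = list(csv.reader(io.StringIO(data), delimiter=';'))
anzahl_zeilen = len(rows)

# line_num > 9 and line_num < anzahl_zeilen, as in the question (line_num is 1-based)
verbraeuche_ab_reset = [row[7] for row in rows[9:anzahl_zeilen - 1]]
print(verbraeuche_ab_reset)  # ['9', '10', '11', '12', '13']
```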
| 0 | 2016-09-02T10:35:50Z | [
"python",
"python-3.x",
"csv"
] |
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: users.login | 39,288,538 | <p>I am using sqlite database and I declare the model as this:</p>
<pre><code>class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
login = db.Column(db.String(80), unique=True)
password = db.Column(db.String(64))
def is_authenticated(self):
return True
def is_active(self):
return True
def is_anonymous(self):
return False
def get_id(self):
return self.id
# Required for administrative interface
def __unicode__(self):
return self.username
</code></pre>
<p>And I have add a Model instance like this:</p>
<pre><code>admin = admin.Admin(app, 'Example: Auth', index_view=MyAdminIndexView(), base_template='my_master.html')
admin.add_view(MyModelView(User, db.session))
db.drop_all()
db.create_all()
if db.session.query(User).filter_by(login='passport').count() <= 1:
user = User(login = 'passport', password = 'password')
db.session.add(user)
db.session.commit()
</code></pre>
<p>However, if I comment out db.drop_all(), the following error occurs:</p>
<pre><code>sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: users.login [SQL: 'INSERT INTO users (login, password) VALUES (?, ?)'] [parameters: ('passport', 'password')]
</code></pre>
<p>And if I do not comment out db.drop_all(), everything is fine. Because there are other tables in this database, I do not want to drop all the tables when I run it. Are there any other solutions to fix that?</p>
<p>Thanks a lot. </p>
| 0 | 2016-09-02T09:08:47Z | 39,288,623 | <p>You can just run query, let's say:</p>
<pre><code>db.session.query(User).delete()
db.session.commit()
</code></pre>
<p>OR</p>
<pre><code>User.query.delete()
db.session.commit()
</code></pre>
<p>In that case, you will just remove everything from the User table.</p>
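<p>If you only want to remove the conflicting row instead of clearing the whole table, a filtered delete works too. A self-contained sketch (plain SQLAlchemy 1.4+ with an in-memory SQLite database rather than Flask-SQLAlchemy, so the setup differs slightly from the question):</p>

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    login = Column(String(80), unique=True)

engine = create_engine('sqlite://')   # throwaway in-memory database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(User(login='passport'))
session.commit()

# Delete just the conflicting row; re-inserting no longer violates UNIQUE
session.query(User).filter_by(login='passport').delete()
session.commit()
session.add(User(login='passport'))
session.commit()
print(session.query(User).count())  # 1
```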
| 0 | 2016-09-02T09:11:41Z | [
"python",
"sqlite",
"flask-sqlalchemy"
] |
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: users.login | 39,288,538 | <p>I am using sqlite database and I declare the model as this:</p>
<pre><code>class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
login = db.Column(db.String(80), unique=True)
password = db.Column(db.String(64))
def is_authenticated(self):
return True
def is_active(self):
return True
def is_anonymous(self):
return False
def get_id(self):
return self.id
# Required for administrative interface
def __unicode__(self):
return self.username
</code></pre>
<p>And I have add a Model instance like this:</p>
<pre><code>admin = admin.Admin(app, 'Example: Auth', index_view=MyAdminIndexView(), base_template='my_master.html')
admin.add_view(MyModelView(User, db.session))
db.drop_all()
db.create_all()
if db.session.query(User).filter_by(login='passport').count() <= 1:
user = User(login = 'passport', password = 'password')
db.session.add(user)
db.session.commit()
</code></pre>
<p>However, if I comment out db.drop_all(), the following error occurs:</p>
<pre><code>sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: users.login [SQL: 'INSERT INTO users (login, password) VALUES (?, ?)'] [parameters: ('passport', 'password')]
</code></pre>
<p>And if I do not comment out db.drop_all(), everything is fine. Because there are other tables in this database, I do not want to drop all the tables when I run it. Are there any other solutions to fix that?</p>
<p>Thanks a lot. </p>
| 0 | 2016-09-02T09:08:47Z | 39,288,652 | <p>You're comparing the count to <code><= 1</code>.
So you'll attempt to create the new user even though it already exists.
So it should probably simply be <code>< 1</code>:</p>
<pre><code>if db.session.query(User).filter_by(login='passport').count() < 1:
user = User(login = 'passport', password = 'password')
db.session.add(user)
db.session.commit()
</code></pre>
| 1 | 2016-09-02T09:13:01Z | [
"python",
"sqlite",
"flask-sqlalchemy"
] |
How can I set infinity as an element of a matrix in python(numpy)? | 39,288,580 | <p>This is the program</p>
<pre><code>import numpy as n
m = complex('inf')
z=n.empty([2,2] , dtype = complex)
z=n.array(input() , dtype = complex )
</code></pre>
<p>But in the console, when I give 'm' as an input, I get the following error message:
'NameError: name 'm' is not defined'</p>
| 0 | 2016-09-02T09:10:22Z | 39,289,147 | <p>Infinity in python can be typed as:</p>
<pre><code>float('inf')    # +inf
float('-inf')   # -inf
</code></pre>
<p>or simply as a number literal large enough to overflow to infinity:</p>
<pre><code>1e500    # +inf
-1e500   # -inf
</code></pre>
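<p>For the NumPy array in the question, a minimal sketch (assigning the values in code rather than via <code>input()</code>, which only returns strings):</p>

```python
import numpy as np

z = np.empty((2, 2), dtype=complex)
z[0, 0] = complex('inf')       # infinite real part
z[0, 1] = np.inf               # NumPy's own infinity constant
z[1, 0] = complex(0, np.inf)   # infinite imaginary part
z[1, 1] = 1 + 2j

print(np.isinf(z))             # True wherever real or imaginary part is infinite
```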
| 0 | 2016-09-02T09:35:32Z | [
"python",
"numpy",
"user-input",
"infinity"
] |
putting int/float from file into a dictionary | 39,288,627 | <p>I like to put some values from a csv file into a dict. An example is:</p>
<pre><code>TYPE, Weldline
Diam, 5.0
Num of Elems, 4
</code></pre>
<p>and I end up after reading into a dictionary (for later use in an API) in:</p>
<pre><code> {'TYPE':'Weldline' , 'Diam':'5.0','Num of Elems':'4'}
</code></pre>
<p>Since the API expects a dictionary i provided all the necessary keywords like 'TYPE', 'Diam', ... according to the API doc.</p>
<p>The problem I have is, that the API expects a float for <code>Diam</code> and an integer for <code>Num of Elems</code></p>
<p>Since I have a large number of entries: is there an simple automatic way for string conversion into float or int?
Or do I have to write something on my own?</p>
| 0 | 2016-09-02T09:11:44Z | 39,288,862 | <p>You can use <code>literal_eval</code></p>
<blockquote>
<p>This can be used for safely evaluating strings containing Python values from untrusted sources without the need to parse the values oneself. It is not capable of evaluating arbitrarily complex expressions, for example involving operators or indexing.</p>
</blockquote>
<pre><code>import ast
_dict = {}
with open('file.txt', 'r') as f:
for line in f.readlines():
if line.strip():
key, value = line.split(',')
value = value.strip()
try:
_dict[key] = ast.literal_eval(value)
except ValueError:
_dict[key] = value
print _dict
</code></pre>
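<p>To see why the <code>ValueError</code> fallback is needed, compare how <code>literal_eval</code> treats each kind of value in the sample file (a quick sketch):</p>

```python
import ast

print(ast.literal_eval('5.0'))  # 5.0 -> a float
print(ast.literal_eval('4'))    # 4   -> an int

# Bare words like 'Weldline' are not Python literals, so they raise ValueError
try:
    ast.literal_eval('Weldline')
except ValueError:
    print('not a literal -> keep it as a string')
```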
| 1 | 2016-09-02T09:22:28Z | [
"python",
"type-conversion"
] |
putting int/float from file into a dictionary | 39,288,627 | <p>I like to put some values from a csv file into a dict. An example is:</p>
<pre><code>TYPE, Weldline
Diam, 5.0
Num of Elems, 4
</code></pre>
<p>and I end up after reading into a dictionary (for later use in an API) in:</p>
<pre><code> {'TYPE':'Weldline' , 'Diam':'5.0','Num of Elems':'4'}
</code></pre>
<p>Since the API expects a dictionary i provided all the necessary keywords like 'TYPE', 'Diam', ... according to the API doc.</p>
<p>The problem I have is, that the API expects a float for <code>Diam</code> and an integer for <code>Num of Elems</code></p>
<p>Since I have a large number of entries: is there an simple automatic way for string conversion into float or int?
Or do I have to write something on my own?</p>
| 0 | 2016-09-02T09:11:44Z | 39,289,098 | <p>Also try this it seems more simple.</p>
<pre><code>import csv, ast
def parse(v):
    v = v.strip()
    try:
        return ast.literal_eval(v)       # numeric values become int/float
    except (ValueError, SyntaxError):
        return v                         # plain words like 'Weldline' stay strings
with open('file.csv', mode='r') as file:
    mydict = {r[0].strip(): parse(r[1]) for r in csv.reader(file)}
print mydict
</code></pre>
| 0 | 2016-09-02T09:33:30Z | [
"python",
"type-conversion"
] |
putting int/float from file into a dictionary | 39,288,627 | <p>I like to put some values from a csv file into a dict. An example is:</p>
<pre><code>TYPE, Weldline
Diam, 5.0
Num of Elems, 4
</code></pre>
<p>and I end up after reading into a dictionary (for later use in an API) in:</p>
<pre><code> {'TYPE':'Weldline' , 'Diam':'5.0','Num of Elems':'4'}
</code></pre>
<p>Since the API expects a dictionary i provided all the necessary keywords like 'TYPE', 'Diam', ... according to the API doc.</p>
<p>The problem I have is, that the API expects a float for <code>Diam</code> and an integer for <code>Num of Elems</code></p>
<p>Since I have a large number of entries: is there an simple automatic way for string conversion into float or int?
Or do I have to write something on my own?</p>
| 0 | 2016-09-02T09:11:44Z | 39,292,792 | <p>Now I tried this:</p>
<pre><code>key = Line[0].strip()
value = Line[1].strip()
try:
    value = int(value)
except ValueError:
    try:
        value = float(value)
    except ValueError:
        pass  # keep the stripped string as-is
Entry[key] = value
</code></pre>
<p>It seems it works fine...</p>
| 0 | 2016-09-02T12:46:22Z | [
"python",
"type-conversion"
] |
jinja2 load template from string: TypeError: no loader for this environment specified | 39,288,706 | <p>I'm using Jinja2 in Flask. I want to render a template from a string. I tried the following 2 methods:</p>
<pre><code> rtemplate = jinja2.Environment().from_string(myString)
data = rtemplate.render(**data)
</code></pre>
<p>and</p>
<pre><code> rtemplate = jinja2.Template(myString)
data = rtemplate.render(**data)
</code></pre>
<p>However both methods return:</p>
<pre><code>TypeError: no loader for this environment specified
</code></pre>
<p>I checked the manual and this url: <a href="https://gist.github.com/wrunk/1317933" rel="nofollow">https://gist.github.com/wrunk/1317933</a></p>
<p>However, nowhere is it specified how to select a loader when using a string.</p>
| 0 | 2016-09-02T09:15:29Z | 39,288,845 | <p>You can provide <code>loader</code> in <code>Environment</code> from <a href="http://jinja.pocoo.org/docs/dev/api/#loaders" rel="nofollow">that list</a></p>
<pre><code>from jinja2 import Environment, BaseLoader
rtemplate = Environment(loader=BaseLoader).from_string(myString)
data = rtemplate.render(**data)
</code></pre>
<p><strong>Edit</strong>: The problem was with <code>myString</code>: it contains <code>{% include 'test.html' %}</code>, and Jinja2 has no idea where to load that template from.</p>
<p><strong>UPDATE</strong></p>
<p>As @iver56 kindly noted, it's better to:</p>
<pre><code>rtemplate = Environment(loader=BaseLoader()).from_string(myString)
</code></pre>
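<p>For a template string without any <code>{% include %}</code>, a minimal round trip looks like this (assuming Jinja2 is installed):</p>

```python
from jinja2 import Environment, BaseLoader

env = Environment(loader=BaseLoader())
template = env.from_string("Hello {{ name }}!")
print(template.render(name="world"))  # Hello world!
```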
| 1 | 2016-09-02T09:21:46Z | [
"python",
"flask",
"jinja2"
] |
Convert bash script to python | 39,288,950 | <p>I have a bash script and i want to convert it to python.</p>
<p>This is the script :</p>
<pre><code>mv $1/positive/*.$3 $2/JPEGImages
mv $1/negative/*.$3 $2/JPEGImages
mv $1/positive/annotations/*.xml $2/Annotations
mv $1/negative/annotations/*.xml $2/Annotations
cut -d' ' -f1 $1/positive_label.txt > $4_trainval.txt
</code></pre>
<p>My problem is: I didn't find out how to write positive_label.txt into $4_trainval.txt.</p>
<p>This is my try; it's the first time I've worked with Python. Please help me make it work.
Thank you.</p>
<pre><code>import sys # Required for reading command line arguments
import os # Required for path manipulations
from os.path import expanduser # Required for expanding '~', which stands for home folder. Used just in case the command line arguments contain "~". Without this, python won't parse "~"
import glob
import shutil
def copy_dataset(arg1,arg2,arg3,arg4):
path1 = os.path.expanduser(arg1)
path2 = os.path.expanduser(arg2) #
frame_ext = arg3 # File extension of the patches
pos_files = glob.glob(os.path.join(path1,'positive/'+'*.'+frame_ext))
neg_files = glob.glob(os.path.join(path1,'negative/'+'*.'+frame_ext))
    pos_annotation = glob.glob(os.path.join(path1,'positive/annotations/'+'*.xml'))
    neg_annotation = glob.glob(os.path.join(path1,'negative/annotations/'+'*.xml'))
#mv $1/positive/*.$3 $2/JPEGImages
for x in pos_files:
shutil.copyfile(x, os.path.join(path2,'JPEGImages'))
#mv $1/negative/*.$3 $2/JPEGImages
for y in neg_files:
shutil.copyfile(y, os.path.join(path2,'JPEGImages'))
#mv $1/positive/annotations/*.xml $2/Annotations
for w in pos_annotation:
shutil.copyfile(w, os.path.join(path2,'Annotations'))
#mv $1/negative/annotations/*.xml $2/Annotations
for z in neg_annotation:
shutil.copyfile(z, os.path.join(path2,'Annotations'))
#cut -d' ' -f1 $1/positive_label.txt > $4_trainval.txt
    for line in open(path1+'/positive_label.txt'):
        line.split(' ')[0]
</code></pre>
| 1 | 2016-09-02T09:26:53Z | 39,326,308 | <p>Without testing, something like this should work:</p>
<pre><code>#cut -d' ' -f1 $1/positive_label.txt > $4_trainval.txt
positive = path1+'/positive_label.txt'
path4 = arg4 + '_trainval.txt'  # bash's $4_trainval.txt concatenates; os.path.join would create a subpath
with open(positive, 'r') as input_, open(path4, 'w') as output_:
for line in input_.readlines():
output_.write(line.split()[0] + "\n")
</code></pre>
<p>This code defines the two files we will work with and opens both. The first is opened in read mode, the second in write mode. For each line in the <em>input</em> file, we write the first space-separated field to the <em>output</em> file.</p>
<p>Check <a href="http://www.pythonforbeginners.com/files/reading-and-writing-files-in-python" rel="nofollow">Reading and Writing Files in Python</a> for more information about file handling in Python.</p>
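<p>Note also that the <code>mv</code> lines in the bash script map to <code>shutil.move</code> rather than <code>shutil.copyfile</code> — <code>copyfile</code> requires a full destination file path and fails when given a directory, while <code>move</code> accepts one. A self-contained sketch (hypothetical throwaway temp directories stand in for the real paths):</p>

```python
import glob
import os
import shutil
import tempfile

# Throwaway directories standing in for $1/positive and $2/JPEGImages
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
open(os.path.join(src_dir, 'a.jpg'), 'w').close()

# bash: mv $1/positive/*.jpg $2/JPEGImages
for src in glob.glob(os.path.join(src_dir, '*.jpg')):
    shutil.move(src, dst_dir)

print(os.listdir(dst_dir))  # ['a.jpg']
```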
| 0 | 2016-09-05T08:10:37Z | [
"python",
"bash"
] |
Create a new list from two dictionaries (case insensitive) | 39,288,989 | <p>This is a question about Python. I have the following list of dictionaries:</p>
<pre><code>listA = [
{"t": 1, "tid": 2, "gtm": "Goofy", "c1": 4, "id": "111"},
{"t": 3, "tid": 4, "gtm": "goofy", "c1": 4, "c2": 5, "id": "222"},
{"t": 1, "tid": 2, "gtm": "GooFy", "c1": 4, "c2": 5, "id": "333"},
{"t": 5, "tid": 6, "gtm": "GoOfY", "c1": 4, "c2": 5, "id": "444"},
{"t": 1, "tid": 2, "gtm": "GOOFY", "c1": 4, "c2": 5, "id": "555"}
]
</code></pre>
<p>and a dictionary I wanted to compare with:</p>
<pre><code>dictA = {"t": 1, "tid": 2, "gtm": "goofy"}
</code></pre>
<p>I wanted to create a list of dicts that match all the items in <strong>dictA</strong> from <strong>listA</strong> and to include the "id" field as well:</p>
<pre><code>listB = [
{"t": 1, "tid": 2, "gtm": "Goofy", "id": "111"},
{"t": 1, "tid": 2, "gtm": "GooFy", "id": "333"},
{"t": 1, "tid": 2, "gtm": "GOOFY", "id": "555"},
]
</code></pre>
<p>How do I compare the two dicts in a case-insensitive way?</p>
| 1 | 2016-09-02T09:28:25Z | 39,289,055 | <p>You'd have to test each dictionary value manually:</p>
<pre><code>def test(d1, d2):
"""Test if all values for d1 match case-insensitively in d2"""
def equal(v1, v2):
try:
return v1.lower() == v2.lower()
except AttributeError:
# not an object that supports .lower()
return v1 == v2
try:
return all(equal(d1[k], d2[k]) for k in d1)
except KeyError:
# d2 is missing a key, not a match
return False
listB = [d for d in listA if test(dictA, d)]
</code></pre>
<p>This produces the 3 matches you are looking for:</p>
<pre><code>>>> listA = [
... {"t": 1, "tid": 2, "gtm": "Goofy", "c1": 4, "id": "111"},
... {"t": 3, "tid": 4, "gtm": "goofy", "c1": 4, "c2": 5, "id": "222"},
... {"t": 1, "tid": 2, "gtm": "GooFy", "c1": 4, "c2": 5, "id": "333"},
... {"t": 5, "tid": 6, "gtm": "GoOfY", "c1": 4, "c2": 5, "id": "444"},
... {"t": 1, "tid": 2, "gtm": "GOOFY", "c1": 4, "c2": 5, "id": "555"}
... ]
>>> dictA = {"t": 1, "tid": 2, "gtm": "goofy"}
>>> [d for d in listA if test(dictA, d)]
[{'tid': 2, 'c1': 4, 'id': '111', 't': 1, 'gtm': 'Goofy'}, {'gtm': 'GooFy', 't': 1, 'tid': 2, 'c2': 5, 'c1': 4, 'id': '333'}, {'gtm': 'GOOFY', 't': 1, 'tid': 2, 'c2': 5, 'c1': 4, 'id': '555'}]
>>> from pprint import pprint
>>> pprint(_)
[{'c1': 4, 'gtm': 'Goofy', 'id': '111', 't': 1, 'tid': 2},
{'c1': 4, 'c2': 5, 'gtm': 'GooFy', 'id': '333', 't': 1, 'tid': 2},
{'c1': 4, 'c2': 5, 'gtm': 'GOOFY', 'id': '555', 't': 1, 'tid': 2}]
</code></pre>
<p>but these include the extra keys. If you must have only certain keys, pick those keys in a new dictionary:</p>
<pre><code>listB = [dict(dictA, id=d['id'], gtm=d['gtm']) for d in listA if test(dictA, d)]
</code></pre>
<p>This creates a copy of <code>dictA</code> and adds in the <code>id</code> and <code>gtm</code> keys from the matching dictionary:</p>
<pre><code>>>> [dict(dictA, id=d['id'], gtm=d['gtm']) for d in listA if test(dictA, d)]
[{'tid': 2, 'id': '111', 't': 1, 'gtm': 'Goofy'}, {'tid': 2, 'id': '333', 't': 1, 'gtm': 'GooFy'}, {'tid': 2, 'id': '555', 't': 1, 'gtm': 'GOOFY'}]
>>> pprint(_)
[{'gtm': 'Goofy', 'id': '111', 't': 1, 'tid': 2},
{'gtm': 'GooFy', 'id': '333', 't': 1, 'tid': 2},
{'gtm': 'GOOFY', 'id': '555', 't': 1, 'tid': 2}]
</code></pre>
| 3 | 2016-09-02T09:31:43Z | [
"python",
"dictionary"
] |
Value of settings.DEBUG changing between settings and url in Django Test | 39,289,064 | <p>I'm trying to set up test for some URLS that are set only in debug. They are not set because apparently the value of DEBUG change to False between my setting file and urls.py. I've never encountered this problem before, and I don't remember doing anything particularly fancy involving DEBUG value.</p>
<p>Here's my urls.py :</p>
<pre><code>from django.conf import settings
from my_views import dnfp
print "settings.DEBUG in url: {}".format(settings.DEBUG)
if settings.DEBUG:
urlpatterns += [url(r'^dnfp/$', dnfp, name="debug_not_found_page"...
</code></pre>
<p>Here's my setting file :</p>
<pre><code>DEBUG=True
print "DEBUG at the end of the settings: {}".format(DEBUG)
</code></pre>
<p>The content that fail in my test :</p>
<pre><code> reverse("debug_not_found_page"),
</code></pre>
<p>Here's the output of the test :</p>
<pre><code>DEBUG at the end of the settings: True
settings.DEBUG in url: False
Creating test database for alias 'default'...
.E
(...)
NoReverseMatch: Reverse for 'debug_not_found_page' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
</code></pre>
<p>If I change the value myself in urls.py the url is set again and the test works with this urls.py :</p>
<pre><code>from django.conf import settings
from my_views import dnfp
settings.DEBUG = True
if settings.DEBUG:
urlpatterns += [url(r'^dnfp/$', dnfp, name="debug_not_found_page"...
</code></pre>
<p>Any ideas when and why my value for DEBUG is changing between settings and urls ?</p>
| 0 | 2016-09-02T09:32:04Z | 39,289,472 | <p>From the <a href="https://docs.djangoproject.com/en/1.10/topics/testing/overview/#other-test-conditions" rel="nofollow">docs</a></p>
<blockquote>
<p>Regardless of the value of the DEBUG setting in your configuration file, all Django tests run with DEBUG=False. This is to ensure that the observed output of your code matches what will be seen in a production setting.</p>
</blockquote>
| 2 | 2016-09-02T09:51:15Z | [
"python",
"django",
"testing"
] |
Value of settings.DEBUG changing between settings and url in Django Test | 39,289,064 | <p>I'm trying to set up test for some URLS that are set only in debug. They are not set because apparently the value of DEBUG change to False between my setting file and urls.py. I've never encountered this problem before, and I don't remember doing anything particularly fancy involving DEBUG value.</p>
<p>Here's my urls.py :</p>
<pre><code>from django.conf import settings
from my_views import dnfp
print "settings.DEBUG in url: {}".format(settings.DEBUG)
if settings.DEBUG:
urlpatterns += [url(r'^dnfp/$', dnfp, name="debug_not_found_page"...
</code></pre>
<p>Here's my setting file :</p>
<pre><code>DEBUG=True
print "DEBUG at the end of the settings: {}".format(DEBUG)
</code></pre>
<p>The content that fail in my test :</p>
<pre><code> reverse("debug_not_found_page"),
</code></pre>
<p>Here's the output of the test :</p>
<pre><code>DEBUG at the end of the settings: True
settings.DEBUG in url: False
Creating test database for alias 'default'...
.E
(...)
NoReverseMatch: Reverse for 'debug_not_found_page' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
</code></pre>
<p>If I change the value myself in urls.py the url is set again and the test works with this urls.py :</p>
<pre><code>from django.conf import settings
from my_views import dnfp
settings.DEBUG = True
if settings.DEBUG:
urlpatterns += [url(r'^dnfp/$', dnfp, name="debug_not_found_page"...
</code></pre>
<p>Any ideas when and why my value for DEBUG is changing between settings and urls ?</p>
| 0 | 2016-09-02T09:32:04Z | 39,289,679 | <p>The problem with your code is you are setting DEBUG = True after this line</p>
<pre><code> urlpatterns += [url(r'^dnfp/$', dnfp, name="debug_not_found_page"
</code></pre>
<p>The reason is that all the URLs have already been appended to urlpatterns[] by the time you set it; while appending the URLs, Django actually transfers control to urls.py for syntax-validation purposes. That's why you are getting a different value in urls.py.</p>
<p>Set the value of DEBUG before this line.</p>
<p>Try this; hopefully it will work.</p>
<p>You can also use another approach: create a separate app for all these types of URLs and only add the app to INSTALLED_APPS on the basis of the DEBUG variable.</p>
| 1 | 2016-09-02T10:02:10Z | [
"python",
"django",
"testing"
] |
how to empty json file before dumping data into it | 39,289,070 | <p><strong>python 3.5</strong></p>
<p>Hi, I have the following code to add an element to JSON data:</p>
<pre><code>jsonFile = open("json.json", mode="r+", encoding='utf-8')
jdata = json.load(jsonFile)
jdata['chat_text'].insert(0, {'x':'x'})
json.dump(jdata, jsonFile)
jsonFile.close()
</code></pre>
<p>but it would be result:</p>
<p><strong>first data</strong></p>
<pre><code>{"chat_text": [{"a": "b", "c": "d", "e": "f"}]}
</code></pre>
<p><strong>edited data</strong></p>
<pre><code>{"chat_text": [{"a": "b", "c": "d", "e": "f"}]}{"chat_text": [{'x':'x'},{"a": "b", "c": "d", "e": "f"}]}
</code></pre>
<p>so i wrote this code :</p>
<pre><code>jsonFile = open("json.json", mode="r+", encoding='utf-8')
jdata = json.load(jsonFile)
jdata['chat_text'].insert(0, {'x':'x'})
open('json.json', mode='w').close() #deleting first data
json.dump(jdata, jsonFile)
jsonFile.close()
</code></pre>
<p>result would be this :</p>
<p><strong>first data</strong></p>
<pre><code>{"chat_text": [{"a": "b", "c": "d", "e": "f"}]}
</code></pre>
<p><strong>edited data</strong></p>
<pre><code> {"chat_text": [{"x","x"},{"a": "b", "c": "d", "e": "f"}]}
</code></pre>
<p>as you can see it replaces first data with space and i want it to be nothing...</p>
<p>any ideas?</p>
| 2 | 2016-09-02T09:32:28Z | 39,289,364 | <p>the issue is essentially that you are opening the file twice in different modes.</p>
<pre><code>jsonFile = open("json.json", mode="r")
jdata = json.load(jsonFile)
jsonFile.close()
jdata['chat_text'].insert(0, {'x':'x'})
jsonFile = open('json.json', mode='w+')
json.dump(jdata, jsonFile)
jsonFile.close()
</code></pre>
<p>So the first three lines open your file and load it into <code>jdata</code>, then close that file.
Do whatever manipulation you need.
Then open the file again, for writing this time; dump the data and close the file.</p>
| 1 | 2016-09-02T09:45:55Z | [
"python",
"json"
] |
how to get href link from onclick function in python | 39,289,206 | <p>I want to get href link of website form onclick function
Here is the HTML code in which the onclick function calls a website:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><div class="fl">
<span class="taLnk" onclick="ta.trackEventOnPage('Eatery_Listing', 'Website', 594024, 1); ta.util.cookie.setPIDCookie(15190); ta.call('ta.util.link.targetBlank', event, this, {'aHref':'LqMWJQzZYUWJQpEcYGII26XombQQoqnQQQQoqnqgoqnQQQQoqnQQQQoqnQQQQoqnqgoqnQQQQoqnQQuuuQQoqnQQQQoqnxioqnQQQQoqnQQ2EisSMVCnVcJQQoqnQQQQoqnxioqnQQQQoqnQQniaQQoqnQQQQoqnqgoqnQQQQoqnQQWJQzhYMJkH3KHVAdJJH3VVdB', 'isAsdf':true})">Website</span>
</div></code></pre>
</div>
</div>
</p>
<p>Normally I use this code to get the href link from any span or element:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>geturl = soup.findsoup("span", {"class": "taLnk"})
for link in geturl:
hreflink = link.get("href")
print(hreflink)</code></pre>
</div>
</div>
</p>
<p>But in this case there is no way to directly call href because href exist in onclick function</p>
<p>Please help me what to do now</p>
| 0 | 2016-09-02T09:38:07Z | 39,290,184 | <p>You cannot directly parse <code>aHref</code> attribute, you need to extract <code>onclick</code> first.</p>
<pre><code>>>> import re
>>> data = soup.select('.taLnk')[0].get('onclick')
>>> href = re.search(r"(?is)'aHref':'(.*?)'", str(data)).group(1)
>>> href
'LqMWJQzZYUWJQpEcYGII26XombQQoqnQQQQoqnqgoqnQQQQoqnQQQQoqnQQQQoqnqgoqnQQQQoqnQQuuuQQoqnQQQQoqnxioqnQQQQoqnQQ2EisSMVCnVcJQQoqnQQQQoqnxioqnQQQQoqnQQniaQQoqnQQQQoqnqgoqnQQQQoqnQQWJQzhYMJkH3KHVAdJJH3VVdB'
</code></pre>
| 0 | 2016-09-02T10:28:22Z | [
"python",
"html",
"python-3.x",
"beautifulsoup"
] |
how to get href link from onclick function in python | 39,289,206 | <p>I want to get href link of website form onclick function
Here is the HTML code in which the onclick function calls a website:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><div class="fl">
<span class="taLnk" onclick="ta.trackEventOnPage('Eatery_Listing', 'Website', 594024, 1); ta.util.cookie.setPIDCookie(15190); ta.call('ta.util.link.targetBlank', event, this, {'aHref':'LqMWJQzZYUWJQpEcYGII26XombQQoqnQQQQoqnqgoqnQQQQoqnQQQQoqnQQQQoqnqgoqnQQQQoqnQQuuuQQoqnQQQQoqnxioqnQQQQoqnQQ2EisSMVCnVcJQQoqnQQQQoqnxioqnQQQQoqnQQniaQQoqnQQQQoqnqgoqnQQQQoqnQQWJQzhYMJkH3KHVAdJJH3VVdB', 'isAsdf':true})">Website</span>
</div></code></pre>
</div>
</div>
</p>
<p>Normally I use this code to get the href link from any span or element:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>geturl = soup.findsoup("span", {"class": "taLnk"})
for link in geturl:
hreflink = link.get("href")
print(hreflink)</code></pre>
</div>
</div>
</p>
<p>But in this case there is no way to directly call href because href exist in onclick function</p>
<p>Please help me what to do now</p>
| 0 | 2016-09-02T09:38:07Z | 39,290,286 | <p>You can use a regex with bs4, selecting the span with the class taLnk and the <em>onclick</em> attribute starting with <em>ta.trackEventOnPage</em>:</p>
<pre><code>h = """<div class="fl">
<span class="taLnk" onclick="ta.trackEventOnPage('Eatery_Listing', 'Website', 594024, 1); ta.util.cookie.setPIDCookie(15190); ta.call('ta.util.link.targetBlank', event, this, {'aHref':'LqMWJQzZYUWJQpEcYGII26XombQQoqnQQQQoqnqgoqnQQQQoqnQQQQoqnQQQQoqnqgoqnQQQQoqnQQuuuQQoqnQQQQoqnxioqnQQQQoqnQQ2EisSMVCnVcJQQoqnQQQQoqnxioqnQQQQoqnQQniaQQoqnQQQQoqnqgoqnQQQQoqnQQWJQzhYMJkH3KHVAdJJH3VVdB', 'isAsdf':true})">Website</span>
</div>"""
from bs4 import BeautifulSoup
import re
soup = BeautifulSoup(h, "html.parser")
data = soup.select_one('span.taLnk[onclick^="ta.trackEventOnPage"]')["onclick"]
print(re.search("'aHref':'(.*?)'", data).group(1))
</code></pre>
| 0 | 2016-09-02T10:33:47Z | [
"python",
"html",
"python-3.x",
"beautifulsoup"
] |
What does this numpy code snippet do? | 39,289,241 | <pre><code>float(multiply(colVec1,colVec2).T * (matrix*matrix[i,:].T)) + c
</code></pre>
<p>I am new to Python and numpy and I am trying to understand what the code snippet above does. </p>
<p>The <code>multiply().T</code> part performs an element-by-element multiplication and then does a transpose, and so the result becomes a row vector. </p>
<p>I am trying to understand what <code>matrix[i,:]</code> does. Does it create a sub-matrix by picking just the i'th row vector or does it create a sub-matrix from the i'th row vector all the way to the end of the matrix? </p>
<p>The <code>*</code> performs a dot-product which is then converted to a float using <code>float()</code>. </p>
| 0 | 2016-09-02T09:39:45Z | 39,289,407 | <p>Yes, <code>matrix[i, :]</code> will give you the i:th <strong>row</strong> of <code>matrix</code> since the <code>:</code> means "pick all in this dimension".</p>
<p>And no, <code>A * B</code> is not the dot product between <code>A</code> and <code>B</code>, it is the <strong>element-wise product</strong> of <code>A</code> and <code>B</code>.
To get the dot product you would use any of</p>
<pre><code>A.dot(B)
np.dot(A, B)
A @ B # Python 3.5+ only
</code></pre>
<p>The above is true as long as you use the <code>np.ndarray</code> class, which you did if you created your matrices/arrays using <code>np.array</code>, <code>np.eye</code>, <code>np.zeros</code>, etc.
There is also a <code>np.matrix</code> class where the multiplication operator <code>*</code> is actually the dot product, but it is strongly advised to <em>never</em> use it since it tends to create confusion when mixed with the normal array type.</p>
<h3>So what is going on in the expression?</h3>
<p>Let's break it down into parts.</p>
<p><code>multiply(colVec1,colVec2).T</code> will create the transpose of the element-wise product of <code>colVec1</code> and <code>colVec2</code>.</p>
<p><code>matrix*matrix[i,:].T</code> is the element-wise product between <code>matrix</code> and the transpose of the i:th row of <code>matrix</code>. Due to numpys broadcasting rules this is actually the same as multiplying (elementwise) each row of <code>matrix</code> with its i:th row.</p>
<p>What we can see now is that both these expressions will create a matrix/array and not a scalar.
Therefore the call to <code>float()</code> will fail, as it expects a 1-element array or scalar.</p>
<p>My verdict is that someone has either been using the <code>np.matrix</code> class, or has interpreted the use of <code>*</code> wrong.</p>
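<p>A quick sketch making the distinction concrete (plain <code>np.ndarray</code> inputs with made-up numbers):</p>

```python
import numpy as np

matrix = np.arange(1.0, 10.0).reshape(3, 3)  # [[1,2,3],[4,5,6],[7,8,9]]
i = 1

row = matrix[i, :]           # just the i-th row, shape (3,)
elementwise = matrix * row   # broadcasting: every row multiplied by row i
dot = matrix.dot(row)        # true matrix-vector product, shape (3,)

print(row)                   # [4. 5. 6.]
print(elementwise[0])        # [ 4. 10. 18.]
print(dot)                   # [ 32.  77. 122.]
```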
| 1 | 2016-09-02T09:48:31Z | [
"python",
"numpy",
"matrix"
] |
How to create a Image Dataset just like MNIST dataset? | 39,289,285 | <p>I have 10000 BMP images of some handwritten digits. If i want to feed the datas to a neural network what do i need to do ? For MNIST dataset i just had to write</p>
<pre><code>(X_train, y_train), (X_test, y_test) = mnist.load_data()
</code></pre>
<p>I am using the Keras library in Python. How can I create such a dataset?</p>
| 0 | 2016-09-02T09:42:25Z | 39,291,033 | <p>You can either write a function that loads all your images and stack them into a numpy array if all fits in RAM or use Keras ImageDataGenerator (<a href="https://keras.io/preprocessing/image/" rel="nofollow">https://keras.io/preprocessing/image/</a>) which includes a function <code>flow_from_directory</code>. You can find an example here <a href="https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d" rel="nofollow">https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d</a>.</p>
| 2 | 2016-09-02T11:15:12Z | [
"python",
"image-processing",
"dataset",
"neural-network",
"keras"
] |
Having trouble to understand the output in a Jupyter notebook ("Learning Cython") | 39,289,340 | <p>In the O'Reilly video tutorial "Learning Cython" in chapter 5, there's a notebook called <code>strings.ipynb</code>.</p>
<p>First cell loads the Cython extension:</p>
<pre><code>%load_ext cython
</code></pre>
<p>Followed by this Cython cell:</p>
<pre><code>%%cython
# cython: language_level=3
def f(char* text):
print(text)
</code></pre>
<p>Then the following cell is used to demonstrate that a (Unicode) string cannot be used as <code>char*</code> argument:</p>
<pre><code>f('It is I, Arthur, son of Uther Pendragon')
</code></pre>
<p>The outcome here is a <code>TypeError</code> exception.</p>
<p>All of the above is what I'd expect from the author's remarks in the voice-over. However, the outcome of the next cell:</p>
<pre><code>f(b'It is I, Arthur, son of Uther Pendragon')
</code></pre>
<p>was this:</p>
<pre><code>b'It is I, Arthur, son of Uther Pendragon'
</code></pre>
<p>and that stumped me.</p>
<p>Having used a plain <code>print</code> in the function <code>f</code>, why does the output appear as if it was run through <code>repr</code> first, when inside the Cython code above it clearly was <em>not</em> run through <code>repr</code>?</p>
<p>The author doesn't even mention this somewhat (at least to me) unexpected result in the voice-over.</p>
<p>What gives? Why does the output look like it was first passed through <code>repr</code>? Are byte strings in Python 3 not "printable" (i.e. without <code>str</code> method) and therefore fall back to <code>repr</code>?</p>
<p>PS: I have to admit I'm coming from Python 2.x and haven't had too much exposure to Python 3.x, so perhaps the difference is therein.</p>
| 1 | 2016-09-02T09:44:42Z | 39,289,706 | <p>Because it was. In Python 3, <code>bytes_str</code> uses <a href="https://github.com/python/cpython/blob/master/Objects/bytesobject.c#L1370" rel="nofollow"><code>bytes_repr</code> internally</a>:</p>
<pre><code>static PyObject *
bytes_str(PyObject *op)
{
if (Py_BytesWarningFlag) {
if (PyErr_WarnEx(PyExc_BytesWarning,
"str() on a bytes instance", 1))
return NULL;
}
return bytes_repr(op); // call repr on it
}
</code></pre>
<p>As such, <code>print</code> will, in essence, call <code>repr(bytes_instance)</code>.</p>
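<p>A quick demo of that behaviour (my own illustration, not part of the original answer) - <code>str()</code> on bytes gives the <code>repr</code> form, and <code>decode()</code> is the way to get the plain text back:</p>

```python
# Python 3: str() on a bytes object falls back to repr(), so print shows b'...'.
data = b'It is I, Arthur, son of Uther Pendragon'

print(str(data))             # b'It is I, Arthur, son of Uther Pendragon'
print(repr(data))            # identical to the line above
print(data.decode('ascii'))  # It is I, Arthur, son of Uther Pendragon
```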
| 1 | 2016-09-02T10:03:36Z | [
"python",
"python-3.x",
"cython",
"jupyter-notebook"
] |
Python PyAudio, output little cracky. Maybe math | 39,289,375 | <p>Good day. I have a small problem that may be partly math.</p>
<p>The thing is I want to play a sine wave without a fixed frequency. Therefore, to keep the sound from crackling between transitions or during a fixed frequency, I need the sine wave to start and end with amplitude zero. Mathematically I understand what has to be done. </p>
<p>I chose an approach where I adapt the 'time' of the sine wave so it has time to finish all its cycles. Basically y=sin(2*pi*f*t) where f*t must be a whole number.</p>
<p>The problem is that it actually works, but not fully. All waves end up very near zero, but not exactly there. The sound is quite OK while changing frequency but not perfect. I can't figure out why the last element can't land on zero. </p>
<p>If you would go through it and check, I would be really grateful. Thanks</p>
<pre><code> import pyaudio
import numpy as np
import matplotlib.pyplot as plt
p = pyaudio.PyAudio()
volume = 0.5 # range [0.0, 1.0]
fs = 44100*4 # sampling rate, Hz, must be integer
time = 0.1 # in seconds, may be float
f = 400 # sine frequency, Hz, may be float
k = np.arange(int(time*fs))
t=np.arange(0,time,1/fs)
start=0
end=time
stream = p.open(format=pyaudio.paFloat32,
channels=1,
rate=fs,
output=True)
# generate samples, note conversion to float32 array
for i in range(1000):
start = 0
    end = 40 / f  # time for 40 whole cycles at the given frequency - f*end must be a whole number
print(len(t))
t = np.arange(start, end, 1 / fs)
samples = (np.sin(2*np.pi*f*t)).astype(np.float32)
print(samples[0],samples[-1]) # The main problem. I need first and last elements in the sample to be zero.
    # Problem is that the last element is only close to zero, which makes the sound not so smooth
#print(start+i,end+i)
#print(samples) # # # # # Shows first and last element
f+=1
# for paFloat32 sample values must be in range [-1.0, 1.0]
# play. May repeat with different volume values (if done interactively)
stream.write(volume*samples)
stream.stop_stream()
stream.close()
p.terminate()
</code></pre>
| 1 | 2016-09-02T09:46:23Z | 39,424,253 | <p>The sine function repeats itself every multiple of <code>2*pi*N</code> where N is a whole number. IOW, <code>sin(2*pi) == sin(2*pi*2) == sin(2*pi*3)</code> and so on. </p>
<p>The typical method for generating samples of a particular frequency is <code>sin(2*pi*i*freq/sampleRate)</code> where <code>i</code> is the sample number.</p>
<p>What follows is that the sine will only repeat at values of <code>i</code> such that <code>i*freq/sampleRate</code> is exactly equal to a whole number (I'm disregarding phase offsets).</p>
<p>The net result is that some frequency/sampleRate combinations may repeat after only a single cycle (1kHz @ 48kSr) whereas others may take a very long time to repeat (997Hz @ 48kSr).</p>
<p>It is not necessary that you change frequencies at exact zero crossings in order to avoid glitches. A better approach is this:</p>
<ol>
<li>Compute the phase increment for the desired frequency as <code>phaseInc = 2*pi*freq/sampleRate</code></li>
<li>For each output sample compute the output sample from the current phase. <code>y = sin(phase)</code></li>
<li>Update the phase by the phase increment: <code>phase += phaseInc</code></li>
<li>Repeat 2-3 for the desired number of samples.</li>
<li>Goto step one to change freq</li>
</ol>
<p>If you are insistent on changing at a zero crossing, just do it at the nearest sample where the phase crosses a multiple of 2*pi. </p>
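<p>A minimal pure-Python sketch of steps 1-4 (the function name and chunking are my own, not from the question's code) - carrying the phase between chunks is what keeps the waveform continuous when the frequency changes:</p>

```python
import math

def gen_samples(freq, n, sample_rate=44100, phase=0.0):
    """Generate n sine samples; return (samples, next_phase).

    Feeding next_phase into the following call continues the waveform
    seamlessly even when freq changes between calls.
    """
    phase_inc = 2 * math.pi * freq / sample_rate  # step 1
    samples = []
    for _ in range(n):
        samples.append(math.sin(phase))           # step 2
        phase += phase_inc                        # step 3
    return samples, phase % (2 * math.pi)

# 100 samples at 400 Hz, then switch to 401 Hz without a glitch
first, ph = gen_samples(400, 100)
second, ph = gen_samples(401, 100, phase=ph)
```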
| 0 | 2016-09-10T08:38:19Z | [
"python",
"audio",
"wave",
"pyaudio",
"sine"
] |
Pandas: Get the only value of a series or nan if it does not exist | 39,289,387 | <p>I'm doing a somewhat more complex operation on a dataframe, where I compare two rows which can be anywhere in the frame.</p>
<p>Here's an example:</p>
<pre><code>import pandas as pd
import numpy as np
D = {'A':['a','a','c','e','e','b','b'],'B':['c','f','a','b','d','a','e']\
,'AW':[1,2,3,4,5,6,7],'BW':[10,20,30,40,50,60,70]}
P = pd.DataFrame(D)
P = P.sort_values(['A','B'])
P['AB'] = P.A+'_'+P.B
P['AWBW'] = P.AW+P.BW
</code></pre>
<p>Now what I am doing here is that I have pairings of strings in <code>A</code> and <code>B</code>, for example <code>a_c</code> which I call <code>AB</code>. And I have the reverse pairing <code>c_a</code> as well. I sum over the numbers <code>AW</code> and <code>BW</code> for each pairing, called <code>AWBW</code>.</p>
<p>Now I want to subtract the summed value of <code>a_c</code> from the value of <code>c_a</code> and do the same thing for every string pairing where both variants exist. All other values should just be <code>NaN</code>, so my result should look like this:</p>
<pre><code> A AW B BW AB AWBW RowDelta
0 a 1 c 10 a_c 11 -22.0
1 a 2 f 20 a_f 22 NaN
5 b 6 a 60 b_a 66 NaN
6 b 7 e 70 b_e 77 33.0
2 c 3 a 30 c_a 33 22.0
3 e 4 b 40 e_b 44 -33.0
4 e 5 d 50 e_d 55 NaN
</code></pre>
<p>I have almost solved this, but there's one problem left that I'm stuck on.</p>
<p>Here's my solution so far:</p>
<pre><code>for i,row in P.iterrows():
P.ix[i,'RowDelta'] = row['AWBW']\
- P[(P['A'] == row.AB[2]) & (P['B'] == row.AB[0])]['AWBW'].get(0,np.nan)
</code></pre>
<p>The problem is that <code>P[(P['A'] == row.AB[2]) & (P['B'] == row.AB[0])]['AWBW']</code> returns a series which is either empty or has exactly one element whose index however is variable. </p>
<p>Now the <code>series.get</code> method solves the problem of returning <code>NaN</code> when the series is empty but it wants a definitive index value, in this case I use <code>0</code>, but I can not get a dynamic index there.</p>
<p>I can not do this for example</p>
<pre><code>T = P[(P['A'] == row.AB[2]) & (P['B'] == row.AB[0])]['AWBW']
T.get(T.index[0],np.nan)
</code></pre>
<p>because there is no index if the series is empty and this leads to an error when doing <code>T.index[0]</code>. Same goes for my attempts using <code>iloc</code>.</p>
<p>Is there a way to dynamically get the single unknown index of a series if it has one element (and never more than one), while at the same time handling the case of an empty series?</p>
 | 6 | 2016-09-02T09:47:03Z | 39,291,792 | <p>Credit goes to <a href="https://stackoverflow.com/users/2336654/pirsquared">piRSquared</a> for pointing me in the right direction for the solution:</p>
<pre><code>AB = P.AB.str.split('_', expand=True)
AB = AB.merge(AB, left_on=[0, 1], right_on=[1, 0],how='inner')[[0,1]]
AB = AB.merge(P,left_on=[0,1], right_on=['A','B'])[['A','AW','B','BW']]
AB = AB.merge(P,left_on=['A','B'], right_on=['B','A'])[['AW_x','BW_x','AW_y','BW_y','AB']]
AB['RowDelta'] = AB.AW_y+AB.BW_y-AB.AW_x-AB.BW_x
P = P.merge(AB[['AB','RowDelta']],on='AB',how='outer')
</code></pre>
<p>Maybe it can be made shorter or nicer, but it works for sure.</p>
| 2 | 2016-09-02T11:53:39Z | [
"python",
"pandas"
] |
python matplotlib plot table with multiple headers | 39,289,483 | <p>I can plot my single-header dataframe on a figure with this code:</p>
<pre><code>plt.table(cellText=df.round(4).values, cellLoc='center', bbox=[0.225, 1, 0.7, 0.15],
rowLabels=[' {} '.format(i) for i in df.index], rowLoc='center',
rowColours=['silver']*len(df.index), colLabels=df.columns, colLoc='center',
colColours=['lightgrey']*len(df.columns), colWidths=[0.1]*len(df.columns))
</code></pre>
<p>My question is: is it possible to <strong>plot a dataframe with multiindex columns</strong>? I want two separate "rows" for my multi-headers, so tuples in one header row is not good. If it is possible I want apply the above styles (colors) on both headers (it would be brilliant to set different colors for the multi headers).</p>
<p>Here is an example dataframe:</p>
<pre><code>df = pd.DataFrame([[11, 22], [13, 23]],
columns=pd.MultiIndex.from_tuples([('main', 'sub_1'), ('main', 'sub_2')]))
</code></pre>
<p>Result:</p>
<pre><code> main
sub_1 sub_2
0 11 22
1 13 23
</code></pre>
| 2 | 2016-09-02T09:51:57Z | 39,291,805 | <p>OK, I used a "trick" to solve this problem: plot a new table with only one cell, without column header and without row index. The only cell value is the original top header. And place this new table on the top of the old (single-header) table. (It is not the best solution but works...).</p>
<p>The code:</p>
<pre><code>df = pd.DataFrame([[11, 22], [13, 23]], columns=['sub_1', 'sub_2'])
# First plot single-header dataframe (headers = ['sub_1', 'sub_2'])
plt.table(cellText=df.round(4).values, cellLoc='center', bbox=[0.225, 1, 0.7, 0.15],
rowLabels=[' {} '.format(i) for i in df.index], rowLoc='center',
rowColours=['silver']*len(df.index), colLabels=df.columns, colLoc='center',
colColours=['lightgrey']*len(df.columns), colWidths=[0.1]*len(df.columns))
# Then plot a new table with one cell (value = 'main')
plt.table(cellText=[['main']], cellLoc='center', bbox=[0.225, 1.15, 0.7, 0.05],
cellColours=[['Lavender']])
</code></pre>
| 1 | 2016-09-02T11:54:08Z | [
"python",
"matplotlib",
"multi-index"
] |
Calling Python Eve from an Flask application leads to weird errors | 39,289,552 | <p>I've created an Eve API which is being called from a Flask application using SSL-protected traffic.
The application itself should be working; nevertheless, an error occurs when Eve tries to handle the incoming requests. </p>
<pre><code>Eve==0.6.4
Flask==0.10.1
Traceback (most recent call last):
File "/home/user/.virtualenvs/eve-oauth2/lib/python2.7/site-packages/eve/flaskapp.py", line 968, in __call__
return super(Eve, self).__call__(environ, start_response)
File "/home/user/.virtualenvs/eve-oauth2/lib/python2.7/site-packages/flask/app.py", line 2000, in __call__
return self.wsgi_app(environ, start_response)
File "/home/user/.virtualenvs/eve-oauth2/lib/python2.7/site-packages/flask/app.py", line 1991, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/home/user/.virtualenvs/eve-oauth2/lib/python2.7/site-packages/flask/app.py", line 1567, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/user/.virtualenvs/eve-oauth2/lib/python2.7/site-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/home/user/.virtualenvs/eve-oauth2/lib/python2.7/site-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/user/.virtualenvs/eve-oauth2/lib/python2.7/site-packages/flask/app.py", line 1539, in handle_user_exception
return self.handle_http_exception(e)
File "/home/user/.virtualenvs/eve-oauth2/lib/python2.7/site-packages/flask/app.py", line 1495, in handle_http_exception
handler = self._find_error_handler(e)
File "/home/user/.virtualenvs/eve-oauth2/lib/python2.7/site-packages/flask/app.py", line 1476, in _find_error_handler
.get(code))
File "/home/user/.virtualenvs/eve-oauth2/lib/python2.7/site-packages/flask/app.py", line 1465, in find_handler
handler = handler_map.get(cls)
</code></pre>
 | 0 | 2016-09-02T09:55:03Z | 39,289,553 | <p>At least in this case it was an incompatibility between the libraries, though I have no idea why.
Upgrading Eve and Flask via pip (<code>pip install --upgrade flask</code> and <code>pip install --upgrade eve</code>) makes it work again. </p>
| 0 | 2016-09-02T09:55:03Z | [
"python",
"flask",
"eve"
] |
Print Name in Text File using python Regular expression | 39,289,581 | <p>I am looking for a name in a text file using a Python regular expression.
Example: if I have a name in a format like Brad Pitt or BRAD PITT in the first 10 lines, I want to print the name.
I am using the following code.</p>
<pre><code>import re
import sys
name=[sys.argv[1]]
for line in name:
n=re.match("([A-Z]\d[A-Z]\d)", line)
print "Name = "+str(n)
</code></pre>
<p>can any one please help me with the Solution. </p>
| 0 | 2016-09-02T09:56:24Z | 39,290,417 | <p>Test this <a href="https://repl.it/DKaJ" rel="nofollow">one</a></p>
<pre><code>import re
import sys
# template content : TODO load from file or program args
name="Bratt PITT\nYour Name\nSimpleName\nWrong5Name\nThis is a Too complex Name"
for line in name.splitlines():
n = re.match("^([A-Za-z\s]+)+$", line)
if(n):
print "Name = " + str(n.group())
</code></pre>
| 0 | 2016-09-02T10:41:25Z | [
"python"
] |
How to get the column names of a DataFrame GroupBy object? | 39,289,611 | <p>How can I get the column names of a GroupBy object? The object does not supply a columns property. I can aggregate the object first or extract a DataFrame with the get_group()-method, but this is either a power-consuming hack or error-prone if there are dismissed columns (strings, for example).</p>
| 0 | 2016-09-02T09:58:38Z | 39,289,936 | <p>Looking at the source code of <code>__getitem__</code>, it seems that you can get the column names with</p>
<pre><code>g.obj.columns
</code></pre>
<p>where g is the groupby object. Apparently <code>g.obj</code> links to the DataFrame.</p>
| 2 | 2016-09-02T10:14:55Z | [
"python",
"pandas"
] |
Execute command inside loop only upon status change | 39,289,625 | <p>I'm looking for a smooth (and if possible pythonic) way of executing something inside a while True loop only once per status change, and then if something changes it should print out the new change once rather than spam the console with whatever the current value is.</p>
<p>My general code:</p>
<pre><code>def function():
    while True:
        status = check_status()  # external function that returns a new status value if it changes
        print status
        if status == 0:
            do_something()
            continue
        if status == 1:
            do_something_else()
            continue
function()
</code></pre>
| 0 | 2016-09-02T09:59:20Z | 39,289,916 | <p>I would put all the tasks into a dictionary, then you can simply track the previous status and only execute a new task when a new status has been reached, something like this:</p>
<pre><code>from time import sleep
tasks = {0: do_something,
         1: do_something_else}
prev_status = None
while True:
status = check_status()
if status != prev_status:
prev_status = status
print "status changed to: {}".format(status)
tasks[status]()
sleep(.1)
</code></pre>
| 1 | 2016-09-02T10:13:54Z | [
"python",
"python-2.7",
"loops"
] |
call python code of different git branch other than the current repository without switching branch | 39,289,867 | <p>So, basically I have 2 versions of a project and for some users, I want to use the latest version while for others, I want to use the older version. Both of them have the same file names and multiple users will use it simultaneously. To accomplish this, I want to call a function from a different git branch without actually switching the branch.
Is there a way to do so?</p>
<p>for eg., when my current branch is <code>v1</code> and the other branch is <code>v2</code>; depending on the value of variable <code>flag</code>, call the function</p>
<pre><code>if flag == 1:
# import function f1() from branch v2
return f1()
else:
# use current branch v1
</code></pre>
 | -1 | 2016-09-02T10:11:34Z | 39,290,774 | <p>Without commenting on <em>why</em> you need to do that, you can simply check out your repo twice: once for branch1, and once for branch2 (without cloning twice).<br>
See "<a href="http://stackoverflow.com/a/30186843/6309">git working on two branches simultaneously</a>".</p>
<p>You can then make your script aware of its current path (<code>/path/to/branch1</code>), and relative path to the other branch (<code>../branch2/...</code>)</p>
| 1 | 2016-09-02T10:59:57Z | [
"python",
"git",
"gitpython"
] |
call python code of different git branch other than the current repository without switching branch | 39,289,867 | <p>So, basically I have 2 versions of a project and for some users, I want to use the latest version while for others, I want to use the older version. Both of them have the same file names and multiple users will use it simultaneously. To accomplish this, I want to call a function from a different git branch without actually switching the branch.
Is there a way to do so?</p>
<p>for eg., when my current branch is <code>v1</code> and the other branch is <code>v2</code>; depending on the value of variable <code>flag</code>, call the function</p>
<pre><code>if flag == 1:
# import function f1() from branch v2
return f1()
else:
# use current branch v1
</code></pre>
| -1 | 2016-09-02T10:11:34Z | 39,291,484 | <p>You <em>must</em> have both versions of the code present / accessible in order to invoke both versions of the code dynamically.</p>
<p>The by-far-simplest way to accomplish this is to have both versions of the code present in different locations, as in <a href="http://stackoverflow.com/a/39290774/1256452">VonC's answer</a>.</p>
<p>Since Python is what it is, though, you <em>could</em> dynamically extract specific versions of specific source files, compile them on the fly (using dynamic imports and temporary files, or <code>exec</code> and internal strings), and hence run code that does not show up in casual perusal of the program source. I <em>do not</em> encourage this approach: it is difficult (though not very difficult) and error-prone, tends towards security holes, and is overall a terrible way to work unless you're writing something like a Python debugger or IDE. But if this is what you want to do, you simply decompose the problem into:</p>
<ul>
<li>examine and/or extract specific files from specific commits (<code>git show</code>, <code>git cat-file -p</code>, etc.), and</li>
<li>dynamically load or execute code from file in file system or from string in memory.</li>
</ul>
<p>The first is a Git programming exercise (and is pretty trivial, <code>git show 1234567:foo.py</code> or <code>git show branch:foo.py</code>: you can redirect the output to a file using either shell redirection or Python's <code>subprocess</code> module), and when done with files, the second is a Python programming exercise of moderate difficulty: see <a href="https://docs.python.org/3/library/modules.html" rel="nofollow">the documentation</a>, paying particularly close attention to <code>importlib</code>.</p>
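<p>A minimal sketch of the dynamic-execution half (my own illustration: the source string is hardcoded here, where a real program would capture it with <code>subprocess.check_output(['git', 'show', 'v2:mymodule.py'])</code>):</p>

```python
import types

# Stand-in for the output of `git show v2:mymodule.py`
source = "def f1():\n    return 'result from branch v2'\n"

# Compile and execute the fetched source inside a throwaway module object.
mod = types.ModuleType('mymodule_v2')
exec(compile(source, '<git show v2:mymodule.py>', 'exec'), mod.__dict__)

print(mod.f1())  # result from branch v2
```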
| 0 | 2016-09-02T11:38:40Z | [
"python",
"git",
"gitpython"
] |
Parsing SQL queries in python | 39,289,898 | <p>I need to build a mini-SQL engine in Python. So I require an SQL parser for it, and I found out about python-sqlparse, but I am unable to understand how to extract column names or the name of the table etc. from the SQL query. Can someone help me with this?</p>
 | -3 | 2016-09-02T10:12:55Z | 39,290,439 | <p>Let's check the python sqlparse documentation: <a href="https://sqlparse.readthedocs.io/en/latest/intro/#getting-started" rel="nofollow">Documentation - getting started</a></p>
<p>There you can see an example of how to parse SQL. This is what it shows:</p>
<p><strong>1. First you need to parse the sql statement with the parse method:</strong></p>
<pre><code>sql = 'select * from "someschema"."mytable" where id = 1'
parsed = sqlparse.parse(sql)
</code></pre>
<p><strong>2. Now get the Statement object from parsed:</strong></p>
<pre><code>stmt = parsed[0]
'''(<DML 'select' at 0x9b63c34>,
<Whitespace ' ' at 0x9b63e8c>,
<Operator '*' at 0x9b63e64>,
<Whitespace ' ' at 0x9b63c5c>,
<Keyword 'from' at 0x9b63c84>,
<Whitespace ' ' at 0x9b63cd4>,
<Identifier '"somes...' at 0x9b5c62c>,
<Whitespace ' ' at 0x9b63f04>,
<Where 'where ...' at 0x9b5caac>)'''
</code></pre>
<p><strong>3. Then you can read the parsed sql statement back with the str() method:</strong></p>
<pre><code>#all sql statement
str(stmt)
#only parts of sql statements
str(stmt.tokens[-1])
#so the result of last str() method is 'where id = 1'
</code></pre>
<p>The result of <code>str(stmt.tokens[-1])</code> is then <code>'where id = 1'</code></p>
<p><strong>If you want the name of the table, you just need to write:</strong></p>
<pre><code>str(stmt.tokens[-3])
#result "someschema"."mytable"
</code></pre>
<p><strong>If you need the names of the columns, you can call:</strong></p>
<pre><code>str(stmt.tokens[2])
#result is *, the operator, because there are no column names in this sql statement
</code></pre>
| 0 | 2016-09-02T10:42:39Z | [
"python",
"sql",
"parsing"
] |
Sorting a combined list of numbers and letters in Python | 39,289,996 | <p>I am trying to sort a list in Python, but one containing both letters and numbers in the same term. The problem with using sort on a string is that it doesn't sort the numbers correctly:</p>
<pre><code>2
23
3
</code></pre>
<p>etc</p>
<pre><code>list = [("a", ['8', '0']), ("a", ['7', '0b']), ("a", ['7', '0']), ("a", ['6', '0b']), ("a", ['6', '01'])]
new_list = sorted(list, key=lambda i: i[1])
# works great on ints.
</code></pre>
<p>The letters need to be sorted as their number equivalents, i.e.:</p>
<pre><code>a = 1, b = 2, c = 3, d = 4 etc
</code></pre>
<p>hence <code>4.1a</code> == <code>4.11</code></p>
<p>But I need to preserve the letter in the output and not just convert it to an int. Any ideas?</p>
| 0 | 2016-09-02T10:17:18Z | 39,290,583 | <p>Here is my solution:</p>
<pre><code>def f(s):
m = {'a': 1,'b': 2,'c': 3,'d': 4,'e': 5,
'f': 6,'g': 7,'h': 8,'i': 9,'j': 10,
'k': 11,'l': 12,'m': 13,'n': 14,'o': 15,
'p': 16,'q': 17,'r': 18,'s': 19,
't': 20,'u': 21,'v': 22,'w': 23,
'x': 24,'y': 25,'z': 26}
result = []
for l in s:
try:
result.append(int(l))
except ValueError:
result.append(m[l])
return result
list = [("a", ['8', '0']), ("a", ['7', '0b']), ("a", ['7', '0']), ("a", ['6', '0b']), ("a", ['6', '01'])]
new_list = sorted(list, key=lambda i: f(''.join(i[1])))
>>> new_list
[('a', ['6', '01']),
('a', ['6', '0b']),
('a', ['7', '0']),
('a', ['7', '0b']),
('a', ['8', '0'])]
</code></pre>
<p>The function converts something like '60b' to [6,0,2] (the letters are converted to integers following the mapping dict). The lambda then sorts based on the returned list.</p>
| 0 | 2016-09-02T10:49:27Z | [
"python",
"list",
"python-2.7",
"sorting"
] |
Sorting a combined list of numbers and letters in Python | 39,289,996 | <p>I am trying to sort a list in Python, but one containing both letters and numbers in the same term. The problem with using sort on a string is that it doesn't sort the numbers correctly:</p>
<pre><code>2
23
3
</code></pre>
<p>etc</p>
<pre><code>list = [("a", ['8', '0']), ("a", ['7', '0b']), ("a", ['7', '0']), ("a", ['6', '0b']), ("a", ['6', '01'])]
new_list = sorted(list, key=lambda i: i[1])
# works great on ints.
</code></pre>
<p>The letters need to be sorted as their number equivalents, i.e.:</p>
<pre><code>a = 1, b = 2, c = 3, d = 4 etc
</code></pre>
<p>hence <code>4.1a</code> == <code>4.11</code></p>
<p>But I need to preserve the letter in the output and not just convert it to an int. Any ideas?</p>
 | 0 | 2016-09-02T10:17:18Z | 39,291,296 | <p>It's not totally clear from your question, but I assume that you are ignoring the first item in each tuple, and only sorting on the list in the second item. I also assume that <em>only</em> letters from 'a' to 'i' can occur in that list. </p>
<p>A simple way to convert a -> 1, b -> 2, c -> 3, ... i -> 9 is to take advantage of the fact that the ASCII codes of the digit characters are contiguous and so are the lower case letters. Thus there's a constant offset of 48 between each letter and its equivalent digit.</p>
<p>So to convert a list of strings to a key we can join all the strings into a single string, then convert each letter char in the string into its equivalent digit char, join the resulting chars back into a single string, and convert that string to an integer.</p>
<pre><code>lst = [
("a", ['8', '0']),
("a", ['7', '0b']),
("a", ['7', '0']),
("a", ['6', '0b']),
("a", ['6', '01']),
]
def keyfunc(t):
a = [chr(ord(c) - 48) if 'a' <= c <= 'i' else c for c in ''.join(t[1])]
return int(''.join(a))
new_lst = sorted(lst, key=keyfunc)
for row in new_lst:
print(row)
</code></pre>
<p><strong>output</strong></p>
<pre><code>('a', ['7', '0'])
('a', ['8', '0'])
('a', ['6', '01'])
('a', ['6', '0b'])
('a', ['7', '0b'])
</code></pre>
<p>BTW, you should not use <code>list</code> as a variable name, as that shadows the built-in <code>list</code> type, and that can lead to mysterious bugs.</p>
| 0 | 2016-09-02T11:29:29Z | [
"python",
"list",
"python-2.7",
"sorting"
] |
Sorting a combined list of numbers and letters in Python | 39,289,996 | <p>I am trying to sort a list in Python, but one containing both letters and numbers in the same term. The problem with using sort on a string is that it doesn't sort the numbers correctly:</p>
<pre><code>2
23
3
</code></pre>
<p>etc</p>
<pre><code>list = [("a", ['8', '0']), ("a", ['7', '0b']), ("a", ['7', '0']), ("a", ['6', '0b']), ("a", ['6', '01'])]
new_list = sorted(list, key=lambda i: i[1])
# works great on ints.
</code></pre>
<p>The letters need to be sorted as their number equivalents, i.e.:</p>
<pre><code>a = 1, b = 2, c = 3, d = 4 etc
</code></pre>
<p>hence <code>4.1a</code> == <code>4.11</code></p>
<p>But I need to preserve the letter in the output and not just convert it to an int. Any ideas?</p>
| 0 | 2016-09-02T10:17:18Z | 39,291,329 | <p>Are you doing an incremental sort or are you only sorting by the <code>a1</code>elements?</p>
<p>If you really need to get a number value for a letter, you can probably use
<code>string.ascii_letters.index(letter)</code>
Or even better, if you only need consecutive numbers for letters, a<=b,
use <code>ord(letter)</code>.</p>
<p>But I think letters should sort properly without needing to get an integer value. I think the problem is splitting <code>['a', 'a1']</code>.</p>
<p>I'm not sure if this is what you need:</p>
<pre><code>def sort_func(item):
try:
return item[1][1][1]
except:
return item[1][1]
# sort by the alphanumeric
vals.sort(key=sort_func)
# sort again by the number
vals.sort(key=lambda i: int(i[1][0]))
# sort again by the first letter
vals.sort(key=lambda i:i[0])
</code></pre>
| 0 | 2016-09-02T11:30:58Z | [
"python",
"list",
"python-2.7",
"sorting"
] |
Solving two sets of coupled ODEs via matrix form in Python | 39,290,189 | <p>I want to solve a coupled system of ODEs in matrix form for two sets of variables (i.e. {y} and {m}) which have the following form:</p>
<blockquote>
<p>y'_n = ((m_n)**2) * y_n+(C * y)_n , m'_n=-4*m_n*y_n </p>
</blockquote>
<p>where <code>C</code> is a matrix, <code>[2 1, -1 3]</code>. </p>
<p>On the other hand I want to solve these equations:</p>
<blockquote>
<p>y'1= m1 ** 2 * y1 + 2 * y1 + y2<br>
y'2= m2 ** 2 * y2 - y1 + 3 * y2<br>
m'1= -4 * m1 * y1 ,<br>
m'2= -4 * m2 * y2<br>
y1(0)=y2(0)=-15. and m1(0)=m2(0)=0.01</p>
</blockquote>
<p>to finally be able to plot ys and ms versus time via matrix form. I wrote the following program:</p>
<pre><code>import numpy as np
from pylab import plot,show
from scipy.integrate import odeint
C=np.array([[2,1],[-1,3]])
dt=0.001
def dy_dt(Y,time):
y,m=Y
m=m+dt*(-4.*m*y)
dy=m**2*y+np.dot(C,y)
return dy
m_init=np.ones(2)*0.01
y_init=np.ones(2)*-15.
time=np.linspace(0,4,1/dt)
y0=np.hstack((y_init, m_init))
y_tot=odeint(dy_dt,y0,time)
plot(time,y_tot[0])#y_1
plot(time,y_tot[1])#y_2
plot(time,y_tot[2])#m_1
plot(time,y_tot[3])#m_2
show()
</code></pre>
<p>but I encountered the following error:</p>
<pre><code> y,m=Y
ValueError: too many values to unpack
</code></pre>
<p>Can anybody help me!</p>
| 1 | 2016-09-02T10:28:36Z | 39,291,710 | <p>Take a look at this to understand what is going on:</p>
<pre><code># we have a collection of different containers, namely list, tuple, set & dictionary
master = [[1, 2], (1, 2), {1, 2}, {1: 'a', 2: 'b'}]
for container in master:
a, b = container # python will automatically try to unpack the container to supply a & b with values
print(a, b) # all print: 1 2 since a = 1 and b = 2 after the unpacking
</code></pre>
<p>If I have a container with more values than the variables I am trying to supply, I get the "too many values to unpack" error, for example:</p>
<pre><code>container = [1, 2, 3]
a, b = container # this raises an error, the value 3 has nowhere to go
</code></pre>
<p>you can however say "dump all the rest to b" by:</p>
<pre><code>a, *b = container
print(a, b) # -> 1 [2, 3] so a = 1 and b = [2, 3]
</code></pre>
<p>Back to your code now: when you say <code>y, m = Y</code>, you have to make sure <code>Y</code> is a container with exactly 2 objects, which does not seem to be the case. Lastly, as I said in the comments, you do not seem to call your function <code>dy_dt</code> anywhere.</p>
| 1 | 2016-09-02T11:49:03Z | [
"python",
"numpy",
"scipy",
"odeint"
] |
Solving two sets of coupled ODEs via matrix form in Python | 39,290,189 | <p>I want to solve a coupled system of ODEs in matrix form for two sets of variables (i.e. {y} and {m}) which have the following form:</p>
<blockquote>
<p>y'_n = ((m_n)**2) * y_n+(C * y)_n , m'_n=-4*m_n*y_n </p>
</blockquote>
<p>where <code>C</code> is a matrix, <code>[2 1, -1 3]</code>. </p>
<p>On the other hand I want to solve these equations:</p>
<blockquote>
<p>y'1= m1 ** 2 * y1 + 2 * y1 + y2<br>
y'2= m2 ** 2 * y2 - y1 + 3 * y2<br>
m'1= -4 * m1 * y1 ,<br>
m'2= -4 * m2 * y2<br>
y1(0)=y2(0)=-15. and m1(0)=m2(0)=0.01</p>
</blockquote>
<p>to finally be able to plot ys and ms versus time via matrix form. I wrote the following program:</p>
<pre><code>import numpy as np
from pylab import plot,show
from scipy.integrate import odeint
C=np.array([[2,1],[-1,3]])
dt=0.001
def dy_dt(Y,time):
y,m=Y
m=m+dt*(-4.*m*y)
dy=m**2*y+np.dot(C,y)
return dy
m_init=np.ones(2)*0.01
y_init=np.ones(2)*-15.
time=np.linspace(0,4,1/dt)
y0=np.hstack((y_init, m_init))
y_tot=odeint(dy_dt,y0,time)
plot(time,y_tot[0])#y_1
plot(time,y_tot[1])#y_2
plot(time,y_tot[2])#m_1
plot(time,y_tot[3])#m_2
show()
</code></pre>
<p>but I encountered the following error:</p>
<pre><code> y,m=Y
ValueError: too many values to unpack
</code></pre>
<p>Can anybody help me!</p>
| 1 | 2016-09-02T10:28:36Z | 39,292,390 | <p><code>odeint</code> works with one-dimensional arrays. In your function <code>dy_dt</code>, <code>Y</code> will be passed in as a one-dimensional numpy array with length 4. To split it into <code>y</code> and <code>m</code>, you can write</p>
<pre><code>y = Y[:2]
m = Y[2:]
</code></pre>
<p>The function <code>dy_dt</code> must also return a sequence of length 4, containing [y[0]', y[1]', m[0]', m[1]'] (i.e. the derivatives of the four quantities). It looks like your function currently returns just two quantities in <code>dy</code>, so you will have to fix that, too.</p>
<p>This line</p>
<pre><code>m=m+dt*(-4.*m*y)
</code></pre>
<p>looks suspiciously like Euler's method. That's not what you want here. Based on what you wrote in the question, you could write:</p>
<pre><code>dm = -4. * m * y
</code></pre>
<p>Then you could return <code>np.hstack((dy, dm))</code>.</p>
<p>To summarize, you can write <code>dy_dt</code> like this:</p>
<pre><code>def dy_dt(Y, time):
y = Y[:2]
m = Y[2:]
dy = m**2 * y + np.dot(C, y)
dm = -4 * m * y
return np.hstack((dy, dm)) # or: return [dy[0], dy[1], dm[0], dm[1]]
</code></pre>
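<p>To see the packing scheme concretely, here is a minimal numpy-only sketch (initial values follow the question's code; this only illustrates the state-vector layout, not the full <code>odeint</code> run):</p>

```python
import numpy as np

y_init = np.ones(2) * -15.0        # y1(0) = y2(0) = -15
m_init = np.ones(2) * 0.01         # m1(0) = m2(0) = 0.01
Y0 = np.hstack((y_init, m_init))   # flat state vector [y1, y2, m1, m2]

y, m = Y0[:2], Y0[2:]              # how dy_dt should unpack it
print(Y0.shape)                    # (4,)
print(list(y), list(m))            # [-15.0, -15.0] [0.01, 0.01]
```

<p>Unpacking with <code>y,m=Y</code> fails with "too many values to unpack" precisely because <code>Y</code> has four elements, not two.</p>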
| 0 | 2016-09-02T12:25:25Z | [
"python",
"numpy",
"scipy",
"odeint"
] |
How to response different responses to the same multiple requests based on whether it has been responded? | 39,290,234 | <p>Let's say my <code>python</code> server has three different responses available. And one user send three HTTP requests at the same time. </p>
<p>How can I make sure that one requests get one unique response out of my three different responses?</p>
<p>I'm using python and mysql. </p>
<p>The problem is that even though I store already responded status in mysql, it's a bit too late by the time the next request came in.</p>
| 0 | 2016-09-02T10:31:10Z | 39,290,454 | <p>For starters, if MySQL isn't handling your performance requirements (and it rightly shouldn't, that doesn't sound like a very sane use-case),
consider using something like in-memory caching, or for more flexibility, Redis:
It's built for stuff like this, and will likely respond much, much quicker.
As an added bonus, it has an even simpler implementation than SQL.</p>
<p>Second, consider hashing some user and request details and storing that hash with the response to be able to identify it.
Upon receiving a request, store an entry with a 'pending' status, and only handle 'pending' requests - never ones that are missing entirely.</p>
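<p>A rough in-memory sketch of that "claim before responding" idea (a plain dict plus a lock standing in for Redis <code>SETNX</code>; all names here are my own, not from your code):</p>

```python
import threading

responses = ["resp-a", "resp-b", "resp-c"]   # one response per slot
claimed = {}                                 # stands in for Redis; setdefault ~ SETNX
lock = threading.Lock()                      # Redis gives you this atomicity for free

def handle(request_id):
    # The first request to claim a slot wins it; later requests move on.
    with lock:
        for i, resp in enumerate(responses):
            if claimed.setdefault(i, request_id) == request_id:
                return resp
    return "no responses left"

print([handle(r) for r in ("r1", "r2", "r3", "r4")])
# -> ['resp-a', 'resp-b', 'resp-c', 'no responses left']
```

<p>With actual Redis, <code>r.set(key, value, nx=True)</code> gives you the same first-writer-wins behaviour atomically across processes, which an ordinary MySQL read-then-write does not.</p>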
| 0 | 2016-09-02T10:43:38Z | [
"python",
"distributed-computing"
] |
how to enumerate string placeholders? | 39,290,366 | <p>I'd like to know what's the best way to enumerate placeholders from strings, I've seen there is already another post which asks <a href="http://stackoverflow.com/questions/14061724/how-can-i-find-all-placeholders-for-str-format-in-a-python-string-using-a-regex">how-can-i-find-all-placeholders-for-str-format-in-a-python-string-using-a-regex</a> but I'm not sure the answers provided are giving me exactly what I'm looking for, let's examine this little test:</p>
<pre><code>import string
tests = [
['this is my placeholder 1 {} and this is the 2 {}', 2],
['another placeholder here {} and here \"{}\"', 2]
]
for s in tests:
num_placeholders = len([
name for text, name, spec, conv in string.Formatter().parse(s[0])])
if num_placeholders != s[1]:
print("FAIL: {0} has {1} placeholders!!! excepted result {2}".format(
s[0], num_placeholders, s[1]))
</code></pre>
<p>It seems <a href="https://docs.python.org/2/library/string.html#string.Formatter" rel="nofollow">string.Formatter</a> is not giving me the expected answer I'm looking for:</p>
<pre><code>FAIL: another placeholder here {} and here "{}" has 3 placeholders!!! excepted result 2
</code></pre>
| 0 | 2016-09-02T10:38:58Z | 39,290,458 | <p>Because you are ignoring the other elements on the tuple that <code>parse(s)</code> returns:</p>
<pre><code>>>> import string
>>>
>>> tests = [
... "{} spam eggs {}",
... "{0} spam eggs {1}",
... "{0:0.2f} spam eggs {1:0.2f}",
... "{{1}} spam eggs {{2}}"
... ]
>>> for s in tests:
... print [x for x in string.Formatter().parse(s)]
...
[('', '', '', None), (' spam eggs ', '', '', None)]
[('', '0', '', None), (' spam eggs ', '1', '', None)]
[('', '0', '0.2f', None), (' spam eggs ', '1', '0.2f', None)]
[('{', None, None, None), ('1}', None, None, None), (' spam eggs {', None, None, None), ('2}', None, None, None)]
</code></pre>
<p>Edit: I see what you mean now. Yes, the interpretation of the parsing is not intuitive nor obvious. The length of the returned list is not for the count of placeholders but for the count of literal portions of strings, including an empty string at the start but not including the empty string at the end. And each element also contains the format of what follows. For example:</p>
<pre><code>>>> list(string.Formatter().parse('{}'))
[('', '', '', None)]
</code></pre>
<p>This is the base case, and there is one single empty string of literal text. There are actually two empty strings, but the parser does not include the last empty string.</p>
<pre><code>>>> list(string.Formatter().parse('a {}'))
[('a ', '', '', None)]
</code></pre>
<p>Now we have the same as before: only one literal string "a " with nothing that follows. Since there is nothing that follows the format bracket then there is no element.</p>
<pre><code>>>> list(string.Formatter().parse('{} b'))
[('', '', '', None), (' b', None, None, None)]
</code></pre>
<p>This is the interesting case: since the format bracket is at the start, the first literal string is an empty one, and the literal " b" follows the placeholder. </p>
<pre><code>>>> list(string.Formatter().parse('a {1} b {2} c'))
[('a ', '1', '', None), (' b ', '2', '', None), (' c', None, None, None)]
</code></pre>
<p>This one is a very complete example. We have three literal string pieces: <code>['a ', ' b ', ' c']</code>. The confusing part is that the specific format information for the format brackets {} is merged with the previous literal string element.</p>
<p>Edit2:</p>
<pre><code>>>> [x[0] for x in string.Formatter().parse('another placeholder here {} and here \"{}\"')]
['another placeholder here ', ' and here "', '"']
</code></pre>
<p>We follow the same logic here. The quotes are just raw literal string, we can change the quotes to something else:</p>
<pre><code>>>> [x[0] for x in string.Formatter().parse('another placeholder here {} and here qqq{}www')]
['another placeholder here ', ' and here qqq', 'www']
</code></pre>
<p>If you only consider the 'name' from each returned tuple you only get the literal string. Between each individual element lies the format placeholder. </p>
<p>You need to understand the result of the parse() from the point of view of formatting the string. This result makes it simple to produce the output formatted string. For example:</p>
<pre><code>>>> [x for x in string.Formatter().parse('a{}')]
[('a', '', '', None)]
>>> [x for x in string.Formatter().parse('a')]
[('a', None, None, None)]
</code></pre>
<p>With this logic you can count the number of placeholders in a format string like this:</p>
<pre><code>>>> def count_placeholders(fmt):
... count = 0
... L = string.Formatter().parse(fmt)
... for x in L:
... if x[1] is not None:
... count += 1
... return count
...
>>> count_placeholders('')
0
>>> count_placeholders('{}')
1
>>> count_placeholders('{}{}')
2
>>> count_placeholders('a {}{}')
2
>>> count_placeholders('a {} b {}')
2
>>> count_placeholders('a {} b {} c')
2
</code></pre>
| 2 | 2016-09-02T10:43:45Z | [
"python",
"string"
] |
unhashable type: 'numpy.ndarray' for optimization | 39,290,544 | <p>I was performing the optimization to find out the best fit line, using the scipy.optimize library, to a data set that I generated. But I am getting the error "unhashable type: 'numpy.ndarray'"</p>
<pre><code>import numpy as np
import pandas as pd
import scipy.optimize as spo
import matplotlib.pyplot as plt
def error(data, line):
error=np.sum((data[:,1]-(line[0]*data[:,0]+line[1]))**2)
return error
def fit_line(data, error_func):
l=np.float32([0, np.mean(data[:,1])])
min_result=spo.minimize(error_func, l, args={data,}, method="SLSQP", options={"disp":True})
return min_result.x
if __name__=="__main__":
l_orig=np.float32([4,2])
xorig=np.linspace(0,10,21)
yorig=l_orig[1]*xorig + l_orig[0]
np.random.seed(788)
noise=np.random.normal(0, 3.0, yorig.shape)
data=np.asarray([xorig, yorig+noise]).T
result=fit_line(data, error)
</code></pre>
| 1 | 2016-09-02T10:47:33Z | 39,290,639 | <p>The function <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize" rel="nofollow"><code>scipy.optimize.minimize</code></a> takes a <strong>tuple</strong> of extra arguments, not a set. <code>{data,}</code> is a set literal, and a set must hash its elements — a <code>numpy.ndarray</code> is not hashable, which is exactly where your error comes from. Change: </p>
<pre><code>min_result=spo.minimize(error_func, l, args={data,}, method="SLSQP", options={"disp":True})
</code></pre>
<p>to:</p>
<pre><code>min_result=spo.minimize(error_func, l, args=(data,), method="SLSQP", options={"disp":True})
</code></pre>
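<p>If it helps to see why the set was the problem: building a set hashes its elements, and numpy arrays refuse to be hashed (a minimal demonstration, separate from your script):</p>

```python
import numpy as np

a = np.zeros(3)
try:
    bad = {a}                 # building a set hashes its elements...
except TypeError as e:
    msg = str(e)
    print(msg)                # unhashable type: 'numpy.ndarray'
ok = (a,)                     # ...while a tuple just stores references
print(type(ok))
```
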
| 3 | 2016-09-02T10:52:42Z | [
"python",
"optimization",
"scipy"
] |
Uwsgi wont load app when run as service | 39,290,628 | <p>I have setup a basic flask application in Centos 6, everything is contained in one file(app.py). I can run the file with uwsgi from the folder directory with the command <code>uwsgi --ini myuwsgi.ini</code> and everything works great. The contents of the ini file are: </p>
<pre><code>[uwsgi]
http-socket = :9090
plugin = python
wsgi-file = /project/app.py
process = 3
callable = app
</code></pre>
<p>However I want to set this up so everytime the server is shutdown or whatever that it will come up on its own. When I try to run the command <code>service uwsgi start</code> the logs show that no app was found. The ini file that the service uses is <code>/etc/uwsgi.ini</code> and I replaced it with my ini file.</p>
<p>The dump of the log is as follows if it helps: </p>
<pre><code>*** Starting uWSGI 2.0.13.1 (64bit) on [Fri Sep 2 07:49:37 2016] ***
compiled with version: 4.4.7 20120313 (Red Hat 4.4.7-17) on 02 August 2016 21:07:31
os: Linux-2.6.32-042stab113.21 #1 SMP Wed Mar 23 11:05:25 MSK 2016
nodename: uwsgiHost
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /
detected binary path: /usr/sbin/uwsgi
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 14605
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :9090 fd 3
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72768 bytes (71 KB) for 1 cores
*** Operational MODE: single process ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 1315, cores: 1)
-- unavailable modifier requested: 0 --
</code></pre>
<p>If anyone could tell me why it cant find the app when i specified the location and callable in the ini? </p>
| 3 | 2016-09-02T10:51:41Z | 39,291,289 | <p>Try the following config and let me know if it works for you:</p>
<pre><code>[uwsgi]
callable = app
chdir = /path/to/your/project/
http-socket = :9090
plugins = python
processes = 4
virtualenv = /path/to/your/project/venv
wsgi-file = /path/to/your/project/app.py
</code></pre>
<p>You may also need to add <strong>uid</strong> and <strong>gid</strong> params.</p>
<p>Make sure that you have installed the package <strong>uwsgi-plugin-python</strong> (since you are on CentOS, use <code>yum</code>; <code>apt-get</code> is the Debian/Ubuntu equivalent):</p>
<pre><code>yum install uwsgi-plugin-python
</code></pre>
| 3 | 2016-09-02T11:29:22Z | [
"python",
"service",
"flask",
"centos",
"uwsgi"
] |
confused with the use of axis in Pandas (Python) | 39,290,667 | <p>From my understanding, axis=0 is running vertically downwards across rows and axis =1 is running horizontally across columns
for example:</p>
<pre><code>In [55]: df1
Out[55]:
x y z
0 1 3 8
1 2 4 NaN
2 3 5 7
3 4 6 NaN
4 5 7 6
5 NaN 1 9
6 NaN 9 5
</code></pre>
<p>so mean across column df.mean(axis=0) gives:</p>
<pre><code> x 3
y 5
z 7
</code></pre>
<p>But if I want to drop missing values by column as</p>
<pre><code> y
0 3
1 4
2 5
3 6
4 7
5 1
6 9
</code></pre>
<p>then I have to use df.dropna(axis=1) rather than df.dropna(axis=0) to get the output I want, but isn't axis=1 regarding rows, how come it mean columns in this case?</p>
| 0 | 2016-09-02T10:54:25Z | 39,290,866 | <p>from the pandas documentation:</p>
<pre><code>DataFrame.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
"Return object with labels on given axis omitted where alternately
any or all of the data are missing"
Parameters:
axis : {0 or 'index', 1 or 'columns'}, or tuple/list thereof
Pass tuple or list to drop on multiple axes
</code></pre>
<p>So the function is defined such that <code>axis=1</code> means columns: it drops every column label whose column contains missing values. If you want to drop by row instead, you just call it like this:</p>
<pre><code>df_dropped = df.dropna(how='all') # drop by row
</code></pre>
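<p>A small illustration with a frame like the one in the question (my own toy data):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1, 2, np.nan], 'y': [3, 4, 5], 'z': [8, np.nan, 7]})
kept_cols = df.dropna(axis=1).columns.tolist()   # column labels without any NaN
kept_rows = df.dropna(axis=0).index.tolist()     # row labels without any NaN
print(kept_cols)   # ['y']
print(kept_rows)   # [0]
```
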
| 0 | 2016-09-02T11:04:20Z | [
"python",
"pandas"
] |
confused with the use of axis in Pandas (Python) | 39,290,667 | <p>From my understanding, axis=0 is running vertically downwards across rows and axis =1 is running horizontally across columns
for example:</p>
<pre><code>In [55]: df1
Out[55]:
x y z
0 1 3 8
1 2 4 NaN
2 3 5 7
3 4 6 NaN
4 5 7 6
5 NaN 1 9
6 NaN 9 5
</code></pre>
<p>so mean across column df.mean(axis=0) gives:</p>
<pre><code> x 3
y 5
z 7
</code></pre>
<p>But if I want to drop missing values by column as</p>
<pre><code> y
0 3
1 4
2 5
3 6
4 7
5 1
6 9
</code></pre>
<p>then I have to use df.dropna(axis=1) rather than df.dropna(axis=0) to get the output I want, but isn't axis=1 regarding rows, how come it mean columns in this case?</p>
| 0 | 2016-09-02T10:54:25Z | 39,292,372 | <p><code>dropna()</code> drops the <strong>labels</strong> on the given axis so <code>df.dropna(axis=1)</code> means "look at the labels across axis 1 (i.e. x, y, and z) and drop that label if there are any NaN in that column"</p>
| 0 | 2016-09-02T12:24:19Z | [
"python",
"pandas"
] |
I can use ajax from /edit but not from /detail | 39,290,740 | <p>Django 1.10.</p>
<p>From DetailView I want to update the model object via ajax. Well, the model object is updated. But the ajax success function can't get data from the post method. I occur in failure function.</p>
<p>In other words, in Django in UpdateView I can stop at a breakpoint in form_valid, control that it returns an HttpResponse with code 200. And later if I refresh page with detail information, I can see that the model object has changed.</p>
<p>But in In Chrome dev tools while debugging js I occur in fail function. And jqXHR.status=0, textStatus = "error", errorThrown="".</p>
<p>I have prepared a simulation of my real situation:
<a href="https://Kifsif@bitbucket.org/Kifsif/ajax_update.git" rel="nofollow">https://Kifsif@bitbucket.org/Kifsif/ajax_update.git</a></p>
<p>There is a difference: this UpdateView renders the general_detail.html. In real life it should render partial_detail.html. Well, it is ajax, we don't need to reload the whole page.</p>
<p>So, this simulation renders the whole page. What does it mean? It means that:</p>
<p>1) If I'm in <code>http://localhost:8000/1/detail</code>, pressing AjaxEdit link leads me to the failure. Not working. In Chrome developers tools I occur in failure function.</p>
<p>2) I return to <code>http://localhost:8000/1/detail</code>, press Edit. I occur in <code>http://localhost:8000/1/edit</code>. This is ordinary editing without ajax. But the view is organized so that to render a response without redirect. So, I save the model and stay in <strong><code>http://localhost:8000/1/edit</code></strong>. And I can see the whole details as if I were looking at the result of a proper DetailView. There are two control links: Edit and AjaxEdit. <strong>And now AjaxEdit starts working.</strong> </p>
<p>In other words, at <code>http://localhost:8000/1/edit</code> ajax works, at <code>http://localhost:8000/1/detail</code> it doesn't.</p>
<p>I've just started learning ajax. I can't cope with this. I would say that a redirect may influence. But there is no redirect. </p>
<p>Via ajax I address to the get method of the view and get a proper data. What is the difference compared to post.</p>
<p>Could you comment on it and help me break through. </p>
<p><strong>views.py</strong></p>
<pre><code>class GeneralUpdate(UpdateView):
model = General
fields = ['char']
def form_valid(self, form):
self.object = form.save()
return render(self.request, 'general/general_detail.html', {"object": self.object})
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>class General(models.Model):
char = models.CharField(max_length=100)
def get_absolute_url(self):
return reverse("detail", kwargs={"pk": self.id})
</code></pre>
<p><strong>urls.py</strong></p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^create$', GeneralCreate.as_view(), name="create"),
url(r'^(?P<pk>\d+)/detail$', GeneralDetailView.as_view(), name="detail"),
url(r'^(?P<pk>\d+)/edit$', GeneralUpdate.as_view(), name="edit"),
]
</code></pre>
<p><strong>general_form.html</strong></p>
<pre><code><form id="form" action="" method="post">{% csrf_token %}
{{ form.as_p }}
<input id="submit" type="submit" value="Save" />
</form>
</code></pre>
<p><strong>general_detail.html</strong></p>
<pre><code><p id="char">{{ object.char }}</p>
<a href="{% url "edit" object.id %}">Edit</a>
<a id="ajax_edit" href="javascript:void(0)">AjaxEdit</a>
<script src="https://code.jquery.com/jquery-3.1.0.min.js" integrity="sha256-cCueBR6CsyA4/9szpPfrX3s49M9vUU5BgtiJj06wt/s=" crossorigin="anonymous"></script>
<script>
$( document ).ready(function() {
var ajax_edit = $("#ajax_edit");
var char = $("#char");
function show_get(data){
$(data).insertAfter(char);
var submit_link = $("#submit");
submit_link.click(post);
}
function show_post(data){
debugger;
}
function failure(jqXHR, textStatus, errorThrown){
debugger;
}
function post(){
$.ajax({
method: "post",
data: $("form").serialize(),
url: 'http://localhost:8000/2/edit',
success: show_post,
error: failure,
}
)
}
function get(){
$.ajax({
method: "get",
url: 'http://localhost:8000/2/edit',
success: show_get,
}
)
}
ajax_edit.click(get);
});
</script>
</code></pre>
| 1 | 2016-09-02T10:58:24Z | 39,291,941 | <p>If I understand it correctly...</p>
<ol>
<li>You click <code>AjaxEdit</code> - it fetches the response from <code>UpdateView</code> and loads it directly into your <code>/detail</code> page. Technically, you are still on the <code>detail</code> page now.</li>
<li>You check your form in chrome dev and see that form <code>action</code> attribute points to nothing.</li>
<li>You click <code>Save</code> and your ajax request updates data as planned.</li>
<li>Then default submit happens and your <code>POST</code> request goes to <code>/detail</code> url since technically you are still on <code>details page</code>.</li>
<li>Django's <code>DetailView</code> forbids <code>POST</code> requests and rejects them with a 405 HTTP status (Method Not Allowed) - check it in the Chrome inspector.</li>
</ol>
<p>So that's the main reason.</p>
<p>As a fix you can disable default submit action by adding </p>
<pre><code>$("#form").submit(function(event){
event.preventDefault();
});
</code></pre>
<p>before <code>submit_link.click(post);</code></p>
| 0 | 2016-09-02T12:00:30Z | [
"python",
"ajax",
"django"
] |
Merging matrices in Python | 39,290,823 | <p>I have some matrices with different lengths. The sizes are 200*59 , 200*1 and 200*1 and I want to make a big matrix of 200*61. How should I do it? </p>
| 0 | 2016-09-02T11:02:15Z | 39,290,951 | <p>Use <code>concatenate</code> from <code>numpy</code></p>
<pre><code>import numpy as np
a = np.random.rand(200,59)
b = np.random.rand(200,1)
c = np.random.rand(200,1)
d = np.concatenate((a,b,c),axis=1)
print d.shape #(200,61)
</code></pre>
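<p>Equivalently, <code>np.hstack</code> stacks 2-D arrays along <code>axis=1</code>, so this sketch produces the same shape:</p>

```python
import numpy as np

a = np.random.rand(200, 59)
b = np.random.rand(200, 1)
c = np.random.rand(200, 1)
d = np.hstack((a, b, c))   # for 2-D inputs, same as concatenate(..., axis=1)
print(d.shape)             # (200, 61)
```
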
| 1 | 2016-09-02T11:10:00Z | [
"python",
"matrix"
] |
cmd module - python | 39,290,830 | <p>I am trying to build a python shell using cmd module. </p>
<pre><code>from cmd import Cmd
import subprocess
import commands
import os
from subprocess import call
class Pirate(Cmd):
intro = 'Welcome to shell\n'
prompt = 'platform> '
pass
if __name__ == '__main__':
Pirate().cmdloop()
</code></pre>
<p>I am trying to build a shell using python - cmd module. I am trying to build these two functionalities. </p>
<p>Welcome to shell</p>
<p>platform> ls </p>
<p>platform> cd .. </p>
<p>like if I want to perform
ls - list all files from that directory in my python shell
or
cd .. - go back to prev directory </p>
<p>Can anyone help in this?
I tried using subprocess library.. but didn't get it working. </p>
<p>Appreciate your help !
Ref Doc: <a href="https://docs.python.org/3/library/cmd.html" rel="nofollow">https://docs.python.org/3/library/cmd.html</a> </p>
| 2 | 2016-09-02T11:02:35Z | 39,291,035 | <p>I have a hard time trying to figure out why you would need such a thing, but here's my attempt:</p>
<pre><code>import subprocess
from cmd import Cmd
class Pirate(Cmd):
intro = 'Welcome to shell\n'
prompt = 'platform> '
def default(self, line): # this method will catch all commands
subprocess.call(line, shell=True)
if __name__ == '__main__':
Pirate().cmdloop()
</code></pre>
<p>The main point is to use <code>default</code> method to catch all commands passed as input.</p>
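<p>One caveat: <code>cd</code> run through a subprocess only changes the child's working directory, which is thrown away when the child exits. If you want <code>cd</code> to affect the shell itself, handle it in-process — a sketch of that idea (<code>do_cd</code> is my naming, following the <code>do_*</code> dispatch convention of <code>cmd</code>):</p>

```python
import os
import tempfile
from cmd import Cmd

class Pirate(Cmd):
    prompt = 'platform> '

    def do_cd(self, path):
        # must run in THIS process; a child shell's cd is lost when it exits
        os.chdir(path or os.path.expanduser('~'))

d = tempfile.mkdtemp()
os.chdir(d)
Pirate().onecmd('cd ..')   # dispatches to do_cd('..')
print(os.path.realpath(os.getcwd()) == os.path.realpath(os.path.dirname(d)))  # True
```
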
| 0 | 2016-09-02T11:15:15Z | [
"python",
"shell",
"cmd",
"subprocess"
] |
Unable to fetch result of Python Script in PHP | 39,290,891 | <p>I am using Xampp and Ubuntu 16.04..</p>
<p>now for my project i need python and php integration to call python script from PHP. Actually describing my script... I am using pypi newspaper library to extract a news summary from a news web portal</p>
<pre><code>#!/usr/bin/env python
from newspaper import Article
url='http://www.abplive.in/india-news/countering-terrorism-an-important-shared-objective-by-india-united-states-manohar-parrikar-407134'
article=Article(url)
article.download()
article.parse()
article.nlp()
print article.summary
</code></pre>
<p>So just help me out with a code in php to call a python script</p>
| 1 | 2016-09-02T11:06:19Z | 39,291,245 | <p>One of the <code>passthru()</code> or <code>exec()</code> commands should do the trick. See <a href="http://php.net/manual/en/function.passthru.php" rel="nofollow">http://php.net/manual/en/function.passthru.php</a> for more info. Something like: </p>
<pre><code><?php
$full_path = '<full_path_to_python_script>'; // quotes, not backticks: in PHP, backticks execute the string as a shell command
passthru('python ' . escapeshellarg($full_path));
?>
</code></pre>
| 0 | 2016-09-02T11:27:15Z | [
"php",
"python"
] |
Find dict that has user inputted key from a list of dicts | 39,290,895 | <p><strong>python 3.5</strong></p>
<p>hi i have this simple json.json file :</p>
<p><strong>json</strong></p>
<pre><code>{"x": [
{"A": "B"},
{"C": "D"},
{"E": "F"}
]}
</code></pre>
<p>and i have this code to find the letter after A or C or E</p>
<p><strong>python</strong></p>
<pre><code>data = json.load(open('json.json'))
R = 'C' #user input
print(data['x'][1][R])
</code></pre>
<p>How can I find which dict with has the key without knowing and hard coding the index of the dict?</p>
| 1 | 2016-09-02T11:06:37Z | 39,291,001 | <p>So you want to find the value by searching without hard coding the index, what you need is a loop that checks each dict for the key:</p>
<pre><code>data = json.load(open('json.json'))
R = 'C' #user input
for d in data['x']:
if R in d:
print(d[R])
break # if there can be more that one match then remove
</code></pre>
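<p>If you only ever need the first match, <code>next</code> with a generator expression does the loop-and-break in one line (returning <code>None</code> when no dict has the key — my choice of default):</p>

```python
data = {"x": [{"A": "B"}, {"C": "D"}, {"E": "F"}]}
R = 'C'  # user input
value = next((d[R] for d in data["x"] if R in d), None)
print(value)   # D
```
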
| 3 | 2016-09-02T11:13:44Z | [
"python",
"json"
] |
Find dict that has user inputted key from a list of dicts | 39,290,895 | <p><strong>python 3.5</strong></p>
<p>hi i have this simple json.json file :</p>
<p><strong>json</strong></p>
<pre><code>{"x": [
{"A": "B"},
{"C": "D"},
{"E": "F"}
]}
</code></pre>
<p>and i have this code to find the letter after A or C or E</p>
<p><strong>python</strong></p>
<pre><code>data = json.load(open('json.json'))
R = 'C' #user input
print(data['x'][1][R])
</code></pre>
<p>How can I find which dict with has the key without knowing and hard coding the index of the dict?</p>
| 1 | 2016-09-02T11:06:37Z | 39,291,177 | <p>As Padraic Cunningham pointed out, you need to loop through your results. Your solution would look like this:</p>
<pre><code>data = json.load(open('json.json'))
R = 'C' #user input
print([x for x in data['x'] if x.keys()[0] == R][0][R])
</code></pre>
<p><code>[x for x in data['x'] if x.keys()[0] == R]</code> gives you all the dict with key R in a list. Assuming that you don't have repeated keys, pick the first element and access to its value.</p>
| 0 | 2016-09-02T11:23:32Z | [
"python",
"json"
] |
Does coverage provide its own version of a nose plugin? | 39,290,927 | <p>The documentation for <a href="http://nose.readthedocs.io/en/latest/plugins/cover.html" rel="nofollow">nose</a> Version 1.3.7 says that </p>
<blockquote>
<p>Newer versions of coverage contain their own nose plugin which is superior to the builtin plugin. It exposes more of coverageâs options and uses coverageâs native html output. Depending on the version of coverage installed, the included plugin may override the nose builtin plugin, or be available under a different name. Check <code>nosetests --help</code> or <code>nosetests --plugins</code> to find out which coverage plugin is available on your system.</p>
</blockquote>
<p>Running nosetests --plugins --verbose I can see that I have the plugin "coverage" with the description "Activate a coverage report using Ned Batchelder's coverage module."
For me it is not clear from this description what coverage plug-in I am using.</p>
<p><strong>With what version of coverage did the new nose plug-in become available?</strong></p>
<p><strong>How can I know if I am using it?</strong></p>
<p><strong>Does such a plug-in really exist?</strong></p>
<p>In May this year (2016) Ned Batchelder seems to advise the use of <code>coverage -m nose ...</code> and does not mention a new plug-in in their <a href="https://bitbucket.org/ned/coveragepy/issues/490/nosetests-with-coverage-seems-not-to-work" rel="nofollow">issue-tracker</a> and on <a href="http://stackoverflow.com/a/2845626/5069869">stackoverflow</a>.</p>
| 0 | 2016-09-02T11:08:35Z | 39,306,130 | <p>Coverage has never provided its own nose plugin. </p>
<p>Notice that nose is no longer maintained, as the <a href="http://nose.readthedocs.io/en/latest/" rel="nofollow">nose documentation states</a>:</p>
<blockquote>
<p>Nose has been in maintenance mode for the past several years and will likely cease without a new person/team to take over maintainership. New projects should consider using Nose2, py.test, or just plain unittest/unittest2.</p>
</blockquote>
<p>If you must use nose, I continue to recommend using coverage to run nose:</p>
<pre><code>coverage run -m nose ....
</code></pre>
| 1 | 2016-09-03T10:58:11Z | [
"python",
"code-coverage",
"nose"
] |
How to change python to .exe file in visual studio | 39,290,932 | <p>How to change python code to <strong>.exe</strong> file using microsoft Visual Studio 2015 without installing any package? Under "Build" button, there is no convert to <strong>.exe</strong> file.</p>
| -2 | 2016-09-02T11:08:52Z | 39,291,254 | <p>Python is a dynamic language (executed by interpreter) and cannot be compiled to binary. (Similar to javascript, php and etc.) It needs interpreter to execute python commands. It's not possible to do that without 3rd party tools which translates python to another languages and compile them to exe.</p>
| 0 | 2016-09-02T11:27:33Z | [
"python",
"visual-studio",
"visual-studio-2015",
"exe"
] |
validate loaded data in QTableView | 39,290,991 | <p>I got a MVC sytle PyQT UI program, and already got delegates binded to certain column for whatever date or regex validation, when insert manually, everything goes fine, the limits holds on </p>
<pre><code>class IPDelegate(QStyledItemDelegate):
def createEditor(self, parent, option, index):
line_edit.setValidator(regex_ip)
</code></pre>
<p>but for loaded data, which I insert by </p>
<pre><code>self.model.appendColumn(
[
QStandardItem(column_value)
for column_value in loaded_line
])
</code></pre>
<p>such validation only happens when I manually double-click inside some table cell; is there any way to check the values automatically? My idea is to loop over every cell, give it focus, and simulate the 'press Enter' operation to trigger the check, but I did not find any similar APIs</p>
<p>any suggestion?
Thanks, Jack</p>
| 1 | 2016-09-02T11:12:43Z | 39,307,932 | <p>Solved it myself: I dropped the MVC-module approach and turned all data validation into a set of regexes, looping over them both when setting the delegates and when validating the raw loaded values.</p>
| 0 | 2016-09-03T14:19:23Z | [
"python",
"delegates",
"pyqt",
"tableview"
] |
Easily editing base class variables from inherited class | 39,291,000 | <h3>How does communication between base classes and inherited classes work?</h3>
<p>I have a data class in my python code ( storing all important values, duh ), I tried inheriting new subclasses from the <em>data base class</em>, everything worked fine except the fact that the classes were not actually communicating ( when one class variable was changed in a subclass, the class attribute <strong>was not</strong> changed in the base class nor any other subclasses.</p>
<p>I guess I just failed to understand how inheritance works, <strong>my question is</strong>: Does inheritance keep any connection to the base classes, or are the values set at the time of inheritance? </p>
<p>If there is any connection, how do you easily manipulate base class variables from a subclass ( I tried it with the <em>cls</em> variable to access base class variables, didn't work out )</p>
<h3>Example</h3>
<pre><code>class Base:
x = 'baseclass var' # The value I want to edit
class Subclass(Base):
@classmethod(cls)
???edit_base_x_var_here??? # This is the part I don't know
</code></pre>
| 0 | 2016-09-02T11:13:41Z | 39,291,125 | <p>Well, you could do that in this way:</p>
<pre><code>class Base:
x = 'baseclass var' # The value I want to edit
class Subclass(Base):
@classmethod
def change_base_x(cls):
Base.x = 'nothing'
print Subclass.x
Subclass.change_base_x()
print Subclass.x
</code></pre>
<p>furthermore, you don't have to use <code>@classmethod</code>, it could be staticmethod, because you don't need current class object <code>cls</code>:</p>
<pre><code>class Base:
x = 'baseclass var' # The value I want to edit
class Subclass(Base):
@staticmethod
def change_base_x():
Base.x = 'nothing'
</code></pre>
<p>EDITED:</p>
<p>Regarding your question about another way: yes, there is one, but it is not as clean. Keep in mind that changing a variable on the base class affects it globally — every subclass that has not shadowed the attribute sees the change — so assigning to <code>Base.x</code> directly is the best way to achieve what you want.</p>
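<p>To illustrate the attribute-lookup rule behind this (a small sketch of my own):</p>

```python
class Base:
    x = 'baseclass var'

class Subclass(Base):
    @classmethod
    def edit_base_x(cls):
        Base.x = 'edited'      # assign on Base explicitly, not on cls

Subclass.edit_base_x()
print(Base.x, Subclass.x)      # edited edited  -- lookup falls through to Base
Subclass.x = 'shadow'          # assigning on the subclass creates a NEW attribute
print(Base.x, Subclass.x)      # edited shadow  -- Base is no longer consulted
```

<p>So as long as no subclass shadows <code>x</code>, all subclasses stay "connected" to the one attribute on <code>Base</code>.</p>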
| 1 | 2016-09-02T11:21:07Z | [
"python",
"class",
"object",
"inheritance",
"attributes"
] |
Flipping one digit to get all digits the same: is the code wrong? | 39,291,054 | <p>Trying to solve the following problem:</p>
<p>Given a binary D containing only digits 0's and 1's, I have to determine if all the digits could be made the same, by flipping only one digit. </p>
<p>The input is the number of Ds to test, and then the Ds, one per line. for instance:</p>
<pre><code>3
11111001111010
110
100000000000000
</code></pre>
<p>The output is "Yes" or "No", namely in this instance, the run will look like:</p>
<pre><code>$ python3 first.py
3
11111001111010
NO
110
YES
100000000000000
YES
</code></pre>
<p>However, the automatic evaluator of this problem judges the following code wrong:</p>
<pre><code>T = int(input())
for i in range(T):
line = input()
ones = zeros = 0
for c in line:
if int(c) == 1:
ones += 1
elif int(c) == 0:
zeros += 1
else:
raise ValueError
if ones > 1 and zeros > 1:
print("NO")
break
if ones == 1:
print("YES")
elif zeros == 1:
print("YES")
</code></pre>
<p>Can you suggest why?</p>
| 0 | 2016-09-02T11:16:48Z | 39,291,243 | <p>Your program doesn't output anything in case that all the digits are the same. You can fix the problem by changing the last part in following way:</p>
<pre><code>if ones == 1:
print("YES")
elif zeros == 1:
print("YES")
elif ones == 0 or zeros == 0:
print("NO") # assuming that one bit must be changed
</code></pre>
<p>Note that you could just use <a href="https://docs.python.org/3.5/library/stdtypes.html#str.count" rel="nofollow"><code>str.count</code></a> to count zeros (or ones) and then check if count is <code>1</code> or <code>len(line) - 1</code>:</p>
<pre><code>T = int(input())
for i in range(T):
line = input()
zeros = line.count('0')
print('YES' if zeros == 1 or zeros == len(line) - 1 else 'NO')
</code></pre>
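<p>Wrapping that check in a function makes it easy to verify against your sample input (a sketch of the same logic, minus the input loop):</p>

```python
def check(line):
    zeros = line.count('0')
    return 'YES' if zeros == 1 or zeros == len(line) - 1 else 'NO'

for s in ("11111001111010", "110", "100000000000000"):
    print(s, check(s))
# 11111001111010 NO
# 110 YES
# 100000000000000 YES
```
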
| 2 | 2016-09-02T11:27:06Z | [
"python",
"python-3.x"
] |
Flipping one digit to get all digits the same: is the code wrong? | 39,291,054 | <p>Trying to solve the following problem:</p>
<p>Given a binary string D containing only the digits 0 and 1, I have to determine whether all the digits can be made the same by flipping only one digit. </p>
<p>The input is the number of Ds to test, and then the Ds, one per line. For instance:</p>
<pre><code>3
11111001111010
110
100000000000000
</code></pre>
<p>The output is "Yes" or "No", namely in this instance, the run will look like:</p>
<pre><code>$ python3 first.py
3
11111001111010
NO
110
YES
100000000000000
YES
</code></pre>
<p>However, the automatic evaluator of this problem judges the following code wrong:</p>
<pre><code>T = int(input())
for i in range(T):
line = input()
ones = zeros = 0
for c in line:
if int(c) == 1:
ones += 1
elif int(c) == 0:
zeros += 1
else:
raise ValueError
if ones > 1 and zeros > 1:
print("NO")
break
if ones == 1:
print("YES")
elif zeros == 1:
print("YES")
</code></pre>
<p>Can you suggest why?</p>
| 0 | 2016-09-02T11:16:48Z | 39,297,176 | <p>The solution is rather embarrassing: I outputted the answer in all <em>uppercase</em>, whereas the expected output is <em>Title case</em>. The following code is accepted:</p>
<pre><code>T = int(input())
for i in range(T):
line = input()
ones = zeros = 0
for c in line:
if int(c) == 1:
ones += 1
elif int(c) == 0:
zeros += 1
else:
raise ValueError
if ones > 1 and zeros > 1:
print("No")
break
if ones == 1:
print("Yes")
elif zeros == 1:
print("Yes")
elif ones == 0 or zeros == 0:
print("No")
</code></pre>
| 0 | 2016-09-02T16:39:24Z | [
"python",
"python-3.x"
] |
ImportError: cannot import name Error | 39,291,077 | <p>I have a very simple test function whose execution time I need to capture using the 'timeit' module, but I get an error</p>
<p>The function:</p>
<pre><code>import timeit
def test1():
l = []
for i in range(1000):
l = l + [i]
t1 = timeit.Timer("test1()", "from __main__ import test1")
print(t1.timeit(number=1000))
</code></pre>
<blockquote>
<p>The Error: C:\Python34\lib\timeit.py:186: in timeit timing =
self.inner(it, self.timer) :3: in inner ??? E<br>
ImportError: cannot import name 'test1'
=========== 1 error in 0.03 seconds ==============</p>
</blockquote>
<p>Can you guys help me with a solution?</p>
| 0 | 2016-09-02T11:18:07Z | 39,291,401 | <p>I think there are a couple of problems with your code. First of all, make sure you can import timeit and that you have it available as a module. To check, you can just run:</p>
<pre><code> python -m timeit '"-".join(str(n) for n in range(100))'
</code></pre>
<p>If it executes fine, then you definitely have the timeit module. </p>
<p>Now, regarding your question: I took the liberty of rewriting it in a cleaner way.</p>
<pre><code>import timeit
def append_list():
num_list = []
for i in range(1000):
num_list.append(i)
print(timeit.timeit(stmt=append_list, number=1000)) # number is the number of repetitions of the operation, in this case 1000
# you can also run
print(timeit.timeit(stmt=append_list, number=1))
</code></pre>
<p>Now, the above code will do what you wanted to do, i.e. calculate the amount of time required to append the numbers 0 to 999 to the list.</p>
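<p>To see why the original loop is slow, here is a minimal sketch (function names are mine) comparing list concatenation with <code>append</code> using timeit:</p>

```python
import timeit

def concat_list():
    l = []
    for i in range(1000):
        l = l + [i]      # allocates a brand-new list every iteration: O(n^2) overall

def append_list():
    l = []
    for i in range(1000):
        l.append(i)      # amortized O(1) per element

t_concat = timeit.timeit(concat_list, number=20)
t_append = timeit.timeit(append_list, number=20)
print(t_concat > t_append)   # concatenation is expected to be noticeably slower
```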
| 0 | 2016-09-02T11:34:35Z | [
"python",
"timeit"
] |
python strange looking loop | 39,291,161 | <pre><code> char_rdic = list('helo')
char_dic = {w:i for i ,w in enumerate(char_rdic)}
</code></pre>
<p>I'm not really sure what this code actually do. what does w and i represent in this code? </p>
| -6 | 2016-09-02T11:23:04Z | 39,291,250 | <p>This code creates the <code>char_dic</code> dictionary with the characters of the string <code>'helo'</code> as keys and their occurrence indexes in that string as values.
<code>i</code> is the index of element <code>w</code> in the <code>char_rdic</code> list.</p>
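<p>A short sketch of what the comprehension produces, together with the equivalent explicit loop:</p>

```python
char_rdic = list('helo')                            # ['h', 'e', 'l', 'o']
char_dic = {w: i for i, w in enumerate(char_rdic)}
print(char_dic)                                     # {'h': 0, 'e': 1, 'l': 2, 'o': 3}

# the comprehension is shorthand for this explicit loop:
char_dic2 = {}
for i, w in enumerate(char_rdic):
    char_dic2[w] = i
assert char_dic == char_dic2
```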
| 0 | 2016-09-02T11:27:25Z | [
"python"
] |
python strange looking loop | 39,291,161 | <pre><code> char_rdic = list('helo')
char_dic = {w:i for i ,w in enumerate(char_rdic)}
</code></pre>
<p>I'm not really sure what this code actually do. what does w and i represent in this code? </p>
| -6 | 2016-09-02T11:23:04Z | 39,291,282 | <p>This is a dict comprehension. If you are familiar with list comprehension (<code>[do_stuff(a) for a in iterable]</code> construct), that works very much the same way, but it builds a dict.</p>
<p>See <a href="https://stackoverflow.com/questions/1747817/create-a-dictionary-with-list-comprehension-in-python">Create a dictionary with list comprehension in Python</a>
and <a href="https://docs.python.org/3.5/tutorial/datastructures.html#dictionaries" rel="nofollow">https://docs.python.org/3.5/tutorial/datastructures.html#dictionaries</a> for the official documentation</p>
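<p>A quick sketch of the parallel between the two constructs (the sample data is mine):</p>

```python
chars = list('helo')

as_list = [c.upper() for c in chars]        # list comprehension
as_dict = {c: c.upper() for c in chars}     # dict comprehension, same idea, builds a dict

print(as_list)   # ['H', 'E', 'L', 'O']
print(as_dict)   # {'h': 'H', 'e': 'E', 'l': 'L', 'o': 'O'}
```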
| 1 | 2016-09-02T11:28:47Z | [
"python"
] |
Unwanted quotation marks in python terminal | 39,291,204 | <p>I am using Geany to code and Ubuntu 16.04 as my OS.
When I enter this code</p>
<pre><code>print("Result:",a+b)
</code></pre>
<p>It gives its output as <code>('Result:', a+b)</code>.</p>
| -1 | 2016-09-02T11:25:14Z | 39,291,390 | <p>In <strong>python 2</strong>:</p>
<pre><code>print("Result:",a+b) --> # ('Result:', 6)
</code></pre>
<p>parens are not needed, because with them we are printing a tuple of two elements: the string <code>'Result:'</code> and the integer <code>6</code>.</p>
<p>In <strong>python 3</strong>:</p>
<pre><code>print("Result:",a+b) --> # Result: 6
</code></pre>
<p>parens are needed, because <code>print</code> is a function rather than a statement in python 3.</p>
<p>So to get the same output in python 2 as in python 3, you have to do this:</p>
<pre><code>print "Result:", a+b --> # Result: 6
</code></pre>
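<p>A short Python 3 sketch (the values of <code>a</code> and <code>b</code> are assumed so that <code>a + b == 6</code>) showing both behaviours side by side:</p>

```python
a, b = 2, 4                    # assumed example values

print("Result:", a + b)        # Result: 6

# writing an explicit tuple reproduces the unwanted quotation marks and parens:
print(("Result:", a + b))      # ('Result:', 6)
```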
| 0 | 2016-09-02T11:34:01Z | [
"python"
] |
Django: Not Found static/admin/css | 39,291,223 | <p>I just deployed my first Django app on Heroku, but I notice that it doesn't have any CSS like it does when I run the server on my local machine. I know there's something wrong with the static files, but I don't understand much about them even though I have already read <a href="https://docs.djangoproject.com/en/1.10/howto/static-files/#serving-static-files-in-development" rel="nofollow">the docs</a>. I can do</p>
<p><code>python3 manage.py collectstatic</code></p>
<p>to create a static folder, but I don't know where to put it or how to change the DIRS in settings.py. I really need some help to sort this out.</p>
<p><a href="http://i.stack.imgur.com/1NtBa.png" rel="nofollow"><img src="http://i.stack.imgur.com/1NtBa.png" alt="root directory"></a></p>
<p>settings.py:</p>
<pre><code>DEBUG = True
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'household_management',
]
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
STATIC_ROOT = 'static'
STATIC_URL = '/static/'
</code></pre>
<p>heroku logs:</p>
<pre><code>2016-09-02T10:42:09.461124+00:00 heroku[router]: at=info method=GET path="/" host=peaceful-earth-63194.herokuapp.com request_id=33fc071d-344c-47e7-8721-919ba6d5df65 fwd="14.191.217.103" dyno=web.1 connect=2ms service=53ms status=302 bytes=400
2016-09-02T10:42:09.760323+00:00 heroku[router]: at=info method=GET path="/admin/login/?next=/" host=peaceful-earth-63194.herokuapp.com request_id=c050edcd-02d9-4c39-88ba-8a16be692843 fwd="14.191.217.103" dyno=web.1 connect=1ms service=45ms status=200 bytes=2184
2016-09-02T10:42:10.037370+00:00 heroku[router]: at=info method=GET path="/static/admin/css/login.css" host=peaceful-earth-63194.herokuapp.com request_id=ec43016a-09b7-499f-a84b-b8024577b717 fwd="14.191.217.103" dyno=web.1 connect=2ms service=9ms status=404 bytes=4569
2016-09-02T10:42:10.047224+00:00 heroku[router]: at=info method=GET path="/static/admin/css/base.css" host=peaceful-earth-63194.herokuapp.com request_id=6570ee02-3b78-44f4-9ab9-0e80b706ea40 fwd="14.191.217.103" dyno=web.1 connect=1ms service=16ms status=404 bytes=4566
2016-09-02T10:42:10.030726+00:00 app[web.1]: Not Found: /static/admin/css/login.css
2016-09-02T10:42:10.043743+00:00 app[web.1]: Not Found: /static/admin/css/base.css
2016-09-02T10:48:56.593180+00:00 heroku[api]: Deploy d1d39dc by huyvohcmc@gmail.com
2016-09-02T10:48:56.593290+00:00 heroku[api]: Release v21 created by huyvohcmc@gmail.com
2016-09-02T10:48:56.803122+00:00 heroku[slug-compiler]: Slug compilation started
2016-09-02T10:48:56.803127+00:00 heroku[slug-compiler]: Slug compilation finished
2016-09-02T10:48:56.893962+00:00 heroku[web.1]: Restarting
2016-09-02T10:48:56.894722+00:00 heroku[web.1]: State changed from up to starting
2016-09-02T10:48:59.681267+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-09-02T10:49:00.418357+00:00 app[web.1]: [2016-09-02 17:49:00 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-09-02T10:49:00.418377+00:00 app[web.1]: [2016-09-02 17:49:00 +0000] [10] [INFO] Worker exiting (pid: 10)
2016-09-02T10:49:00.418393+00:00 app[web.1]: [2016-09-02 10:49:00 +0000] [3] [INFO] Handling signal: term
2016-09-02T10:49:00.477684+00:00 app[web.1]: [2016-09-02 10:49:00 +0000] [3] [INFO] Shutting down: Master
2016-09-02T10:49:00.594623+00:00 heroku[web.1]: Process exited with status 0
2016-09-02T10:49:00.607775+00:00 heroku[web.1]: Starting process with command `gunicorn assignment.wsgi --log-file -`
2016-09-02T10:49:02.911936+00:00 app[web.1]: [2016-09-02 10:49:02 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-09-02T10:49:02.912529+00:00 app[web.1]: [2016-09-02 10:49:02 +0000] [3] [INFO] Listening at: http://0.0.0.0:18162 (3)
2016-09-02T10:49:02.917427+00:00 app[web.1]: [2016-09-02 10:49:02 +0000] [9] [INFO] Booting worker with pid: 9
2016-09-02T10:49:02.912655+00:00 app[web.1]: [2016-09-02 10:49:02 +0000] [3] [INFO] Using worker: sync
2016-09-02T10:49:02.980208+00:00 app[web.1]: [2016-09-02 10:49:02 +0000] [10] [INFO] Booting worker with pid: 10
2016-09-02T10:49:04.228057+00:00 heroku[web.1]: State changed from starting to up
2016-09-02T10:53:41.572630+00:00 heroku[router]: at=info method=GET path="/" host=peaceful-earth-63194.herokuapp.com request_id=68c0b216-2084-46c8-9be5-b7e5aacaa590 fwd="14.191.217.103" dyno=web.1 connect=0ms service=42ms status=302 bytes=400
2016-09-02T10:53:41.880217+00:00 heroku[router]: at=info method=GET path="/admin/login/?next=/" host=peaceful-earth-63194.herokuapp.com request_id=17b91dc2-ba06-482c-8af0-e7b015fe2077 fwd="14.191.217.103" dyno=web.1 connect=0ms service=41ms status=200 bytes=2184
2016-09-02T10:53:42.156295+00:00 heroku[router]: at=info method=GET path="/static/admin/css/base.css" host=peaceful-earth-63194.herokuapp.com request_id=40dec62d-8c4a-4af6-8e0f-8053fe8379b9 fwd="14.191.217.103" dyno=web.1 connect=0ms service=9ms status=404 bytes=4566
2016-09-02T10:53:42.157491+00:00 heroku[router]: at=info method=GET path="/static/admin/css/login.css" host=peaceful-earth-63194.herokuapp.com request_id=3a29f200-c185-4344-a6e1-5af35e5d120e fwd="14.191.217.103" dyno=web.1 connect=0ms service=17ms status=404 bytes=4569
2016-09-02T10:53:42.164162+00:00 app[web.1]: Not Found: /static/admin/css/base.css
2016-09-02T10:53:42.177480+00:00 app[web.1]: Not Found: /static/admin/css/login.css
2016-09-02T11:01:19.031353+00:00 heroku[api]: Deploy 2beb15a by huyvohcmc@gmail.com
2016-09-02T11:01:19.031444+00:00 heroku[api]: Release v22 created by huyvohcmc@gmail.com
2016-09-02T11:01:19.262522+00:00 heroku[slug-compiler]: Slug compilation started
2016-09-02T11:01:19.262528+00:00 heroku[slug-compiler]: Slug compilation finished
2016-09-02T11:01:19.426837+00:00 heroku[web.1]: Restarting
2016-09-02T11:01:19.427455+00:00 heroku[web.1]: State changed from up to starting
2016-09-02T11:01:22.141325+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-09-02T11:01:22.545379+00:00 heroku[web.1]: Starting process with command `gunicorn assignment.wsgi --log-file -`
2016-09-02T11:01:22.754067+00:00 app[web.1]: [2016-09-02 18:01:22 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-09-02T11:01:22.754077+00:00 app[web.1]: [2016-09-02 18:01:22 +0000] [10] [INFO] Worker exiting (pid: 10)
2016-09-02T11:01:22.757599+00:00 app[web.1]: [2016-09-02 11:01:22 +0000] [3] [INFO] Handling signal: term
2016-09-02T11:01:22.763197+00:00 app[web.1]: [2016-09-02 11:01:22 +0000] [3] [INFO] Shutting down: Master
2016-09-02T11:01:22.880977+00:00 heroku[web.1]: Process exited with status 0
2016-09-02T11:01:24.628348+00:00 app[web.1]: [2016-09-02 11:01:24 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-09-02T11:01:24.628921+00:00 app[web.1]: [2016-09-02 11:01:24 +0000] [3] [INFO] Listening at: http://0.0.0.0:34235 (3)
2016-09-02T11:01:24.629075+00:00 app[web.1]: [2016-09-02 11:01:24 +0000] [3] [INFO] Using worker: sync
2016-09-02T11:01:24.636198+00:00 app[web.1]: [2016-09-02 11:01:24 +0000] [9] [INFO] Booting worker with pid: 9
2016-09-02T11:01:24.722355+00:00 app[web.1]: [2016-09-02 11:01:24 +0000] [10] [INFO] Booting worker with pid: 10
2016-09-02T11:01:26.271435+00:00 heroku[web.1]: State changed from starting to up
2016-09-02T11:01:27.930795+00:00 heroku[router]: at=info method=GET path="/" host=peaceful-earth-63194.herokuapp.com request_id=a844ef4b-a2d1-44fe-af0e-09c76cb0e034 fwd="14.191.217.103" dyno=web.1 connect=0ms service=46ms status=302 bytes=400
2016-09-02T11:01:28.363163+00:00 heroku[router]: at=info method=GET path="/admin/login/?next=/" host=peaceful-earth-63194.herokuapp.com request_id=31c0823a-466f-4363-b550-3c81681305f5 fwd="14.191.217.103" dyno=web.1 connect=0ms service=171ms status=200 bytes=2184
2016-09-02T11:01:28.716801+00:00 heroku[router]: at=info method=GET path="/static/admin/css/base.css" host=peaceful-earth-63194.herokuapp.com request_id=2d1b8bb2-9ab3-49f7-b557-a54eed996547 fwd="14.191.217.103" dyno=web.1 connect=0ms service=8ms status=404 bytes=4566
2016-09-02T11:01:28.693936+00:00 heroku[router]: at=info method=GET path="/static/admin/css/login.css" host=peaceful-earth-63194.herokuapp.com request_id=24aa1eed-aa87-4854-ab35-1604e8393b9d fwd="14.191.217.103" dyno=web.1 connect=0ms service=18ms status=404 bytes=4569
2016-09-02T11:01:28.681948+00:00 app[web.1]: Not Found: /static/admin/css/base.css
2016-09-02T11:01:28.692958+00:00 app[web.1]: Not Found: /static/admin/css/login.css
2016-09-02T11:12:43.686922+00:00 heroku[api]: Deploy 63085e6 by huyvohcmc@gmail.com
2016-09-02T11:12:43.687037+00:00 heroku[api]: Release v23 created by huyvohcmc@gmail.com
2016-09-02T11:12:43.951987+00:00 heroku[slug-compiler]: Slug compilation started
2016-09-02T11:12:43.951998+00:00 heroku[slug-compiler]: Slug compilation finished
2016-09-02T11:12:43.926959+00:00 heroku[web.1]: Restarting
2016-09-02T11:12:43.929107+00:00 heroku[web.1]: State changed from up to starting
2016-09-02T11:12:46.931285+00:00 heroku[web.1]: Starting process with command `gunicorn assignment.wsgi --log-file -`
2016-09-02T11:12:47.860591+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-09-02T11:12:48.729601+00:00 app[web.1]: [2016-09-02 18:12:48 +0000] [10] [INFO] Worker exiting (pid: 10)
2016-09-02T11:12:48.729617+00:00 app[web.1]: [2016-09-02 18:12:48 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-09-02T11:12:48.729623+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [3] [INFO] Handling signal: term
2016-09-02T11:12:48.775112+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [3] [INFO] Shutting down: Master
2016-09-02T11:12:48.890301+00:00 heroku[web.1]: Process exited with status 0
2016-09-02T11:12:48.839674+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-09-02T11:12:48.840093+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [3] [INFO] Listening at: http://0.0.0.0:20001 (3)
2016-09-02T11:12:48.840166+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [3] [INFO] Using worker: sync
2016-09-02T11:12:48.843687+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [9] [INFO] Booting worker with pid: 9
2016-09-02T11:12:48.939210+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [10] [INFO] Booting worker with pid: 10
2016-09-02T11:12:50.565750+00:00 heroku[web.1]: State changed from starting to up
2016-09-02T11:13:00.439745+00:00 heroku[router]: at=info method=GET path="/" host=peaceful-earth-63194.herokuapp.com request_id=c30b47e6-fbb8-4412-9242-5fe37217026a fwd="14.191.217.103" dyno=web.1 connect=0ms service=49ms status=400 bytes=199
2016-09-02T11:14:01.686661+00:00 heroku[api]: Deploy c149525 by huyvohcmc@gmail.com
2016-09-02T11:14:01.686965+00:00 heroku[api]: Release v24 created by huyvohcmc@gmail.com
2016-09-02T11:14:02.189063+00:00 heroku[slug-compiler]: Slug compilation started
2016-09-02T11:14:02.189073+00:00 heroku[slug-compiler]: Slug compilation finished
2016-09-02T11:14:02.466456+00:00 heroku[web.1]: Restarting
2016-09-02T11:14:02.467005+00:00 heroku[web.1]: State changed from up to starting
2016-09-02T11:14:04.713176+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-09-02T11:14:05.259388+00:00 app[web.1]: [2016-09-02 18:14:05 +0000] [10] [INFO] Worker exiting (pid: 10)
2016-09-02T11:14:05.260345+00:00 app[web.1]: [2016-09-02 11:14:05 +0000] [3] [INFO] Handling signal: term
2016-09-02T11:14:05.265937+00:00 app[web.1]: [2016-09-02 18:14:05 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-09-02T11:14:05.317647+00:00 app[web.1]: [2016-09-02 11:14:05 +0000] [3] [INFO] Shutting down: Master
2016-09-02T11:14:05.411311+00:00 heroku[web.1]: Process exited with status 0
2016-09-02T11:14:06.581314+00:00 heroku[web.1]: Starting process with command `gunicorn assignment.wsgi --log-file -`
2016-09-02T11:14:10.282506+00:00 heroku[web.1]: State changed from starting to up
2016-09-02T11:14:10.187781+00:00 app[web.1]: [2016-09-02 11:14:10 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-09-02T11:14:10.188490+00:00 app[web.1]: [2016-09-02 11:14:10 +0000] [3] [INFO] Listening at: http://0.0.0.0:27446 (3)
2016-09-02T11:14:10.188627+00:00 app[web.1]: [2016-09-02 11:14:10 +0000] [3] [INFO] Using worker: sync
2016-09-02T11:14:10.211822+00:00 app[web.1]: [2016-09-02 11:14:10 +0000] [9] [INFO] Booting worker with pid: 9
2016-09-02T11:14:10.231978+00:00 app[web.1]: [2016-09-02 11:14:10 +0000] [10] [INFO] Booting worker with pid: 10
2016-09-02T11:14:29.714607+00:00 heroku[router]: at=info method=GET path="/" host=peaceful-earth-63194.herokuapp.com request_id=947ed6b9-b48a-48b1-8860-36846248acea fwd="14.191.217.103" dyno=web.1 connect=0ms service=153ms status=302 bytes=400
2016-09-02T11:14:30.522664+00:00 heroku[router]: at=info method=GET path="/admin/login/?next=/" host=peaceful-earth-63194.herokuapp.com request_id=b74c55bf-913c-4e0d-8d16-2b1f4f0cea13 fwd="14.191.217.103" dyno=web.1 connect=0ms service=561ms status=200 bytes=2184
2016-09-02T11:14:30.879732+00:00 heroku[router]: at=info method=GET path="/static/admin/css/base.css" host=peaceful-earth-63194.herokuapp.com request_id=769f989a-f051-4a89-a079-1d6acea3c185 fwd="14.191.217.103" dyno=web.1 connect=0ms service=86ms status=404 bytes=4566
2016-09-02T11:14:30.865971+00:00 heroku[router]: at=info method=GET path="/static/admin/css/login.css" host=peaceful-earth-63194.herokuapp.com request_id=b271b831-a4fb-4bdb-9f6a-e4d66297db88 fwd="14.191.217.103" dyno=web.1 connect=0ms service=75ms status=404 bytes=4569
2016-09-02T11:14:30.865501+00:00 app[web.1]: Not Found: /static/admin/css/login.css
2016-09-02T11:14:30.871110+00:00 app[web.1]: Not Found: /static/admin/css/base.css
</code></pre>
| 0 | 2016-09-02T11:26:25Z | 39,291,292 | <p>You shouldn't change <code>BASE_DIR</code>.</p>
<p>Instead, change <code>STATIC_ROOT</code>:</p>
<pre><code>STATIC_ROOT = os.path.join(BASE_DIR, 'static')
</code></pre>
<p>And run <code>collectstatic</code> again</p>
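<p>For actually serving the collected files on Heroku, one common sketch (assuming the third-party <code>whitenoise</code> package is installed; the middleware list here is abbreviated) looks like this:</p>

```python
# settings.py -- sketch only; adapt to your project
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')   # target directory of `collectstatic`

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # serves files from STATIC_ROOT
    # ... the rest of the default middleware ...
]
```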
| 1 | 2016-09-02T11:29:26Z | [
"python",
"django",
"heroku",
"deployment"
] |
Django: Not Found static/admin/css | 39,291,223 | <p>I just deployed my first Django app on Heroku, but I notice that it doesn't have any CSS like it does when I run the server on my local machine. I know there's something wrong with the static files, but I don't understand much about them even though I have already read <a href="https://docs.djangoproject.com/en/1.10/howto/static-files/#serving-static-files-in-development" rel="nofollow">the docs</a>. I can do</p>
<p><code>python3 manage.py collectstatic</code></p>
<p>to create a static folder, but I don't know where to put it or how to change the DIRS in settings.py. I really need some help to sort this out.</p>
<p><a href="http://i.stack.imgur.com/1NtBa.png" rel="nofollow"><img src="http://i.stack.imgur.com/1NtBa.png" alt="root directory"></a></p>
<p>settings.py:</p>
<pre><code>DEBUG = True
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'household_management',
]
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
STATIC_ROOT = 'static'
STATIC_URL = '/static/'
</code></pre>
<p>heroku logs:</p>
<pre><code>2016-09-02T10:42:09.461124+00:00 heroku[router]: at=info method=GET path="/" host=peaceful-earth-63194.herokuapp.com request_id=33fc071d-344c-47e7-8721-919ba6d5df65 fwd="14.191.217.103" dyno=web.1 connect=2ms service=53ms status=302 bytes=400
2016-09-02T10:42:09.760323+00:00 heroku[router]: at=info method=GET path="/admin/login/?next=/" host=peaceful-earth-63194.herokuapp.com request_id=c050edcd-02d9-4c39-88ba-8a16be692843 fwd="14.191.217.103" dyno=web.1 connect=1ms service=45ms status=200 bytes=2184
2016-09-02T10:42:10.037370+00:00 heroku[router]: at=info method=GET path="/static/admin/css/login.css" host=peaceful-earth-63194.herokuapp.com request_id=ec43016a-09b7-499f-a84b-b8024577b717 fwd="14.191.217.103" dyno=web.1 connect=2ms service=9ms status=404 bytes=4569
2016-09-02T10:42:10.047224+00:00 heroku[router]: at=info method=GET path="/static/admin/css/base.css" host=peaceful-earth-63194.herokuapp.com request_id=6570ee02-3b78-44f4-9ab9-0e80b706ea40 fwd="14.191.217.103" dyno=web.1 connect=1ms service=16ms status=404 bytes=4566
2016-09-02T10:42:10.030726+00:00 app[web.1]: Not Found: /static/admin/css/login.css
2016-09-02T10:42:10.043743+00:00 app[web.1]: Not Found: /static/admin/css/base.css
2016-09-02T10:48:56.593180+00:00 heroku[api]: Deploy d1d39dc by huyvohcmc@gmail.com
2016-09-02T10:48:56.593290+00:00 heroku[api]: Release v21 created by huyvohcmc@gmail.com
2016-09-02T10:48:56.803122+00:00 heroku[slug-compiler]: Slug compilation started
2016-09-02T10:48:56.803127+00:00 heroku[slug-compiler]: Slug compilation finished
2016-09-02T10:48:56.893962+00:00 heroku[web.1]: Restarting
2016-09-02T10:48:56.894722+00:00 heroku[web.1]: State changed from up to starting
2016-09-02T10:48:59.681267+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-09-02T10:49:00.418357+00:00 app[web.1]: [2016-09-02 17:49:00 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-09-02T10:49:00.418377+00:00 app[web.1]: [2016-09-02 17:49:00 +0000] [10] [INFO] Worker exiting (pid: 10)
2016-09-02T10:49:00.418393+00:00 app[web.1]: [2016-09-02 10:49:00 +0000] [3] [INFO] Handling signal: term
2016-09-02T10:49:00.477684+00:00 app[web.1]: [2016-09-02 10:49:00 +0000] [3] [INFO] Shutting down: Master
2016-09-02T10:49:00.594623+00:00 heroku[web.1]: Process exited with status 0
2016-09-02T10:49:00.607775+00:00 heroku[web.1]: Starting process with command `gunicorn assignment.wsgi --log-file -`
2016-09-02T10:49:02.911936+00:00 app[web.1]: [2016-09-02 10:49:02 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-09-02T10:49:02.912529+00:00 app[web.1]: [2016-09-02 10:49:02 +0000] [3] [INFO] Listening at: http://0.0.0.0:18162 (3)
2016-09-02T10:49:02.917427+00:00 app[web.1]: [2016-09-02 10:49:02 +0000] [9] [INFO] Booting worker with pid: 9
2016-09-02T10:49:02.912655+00:00 app[web.1]: [2016-09-02 10:49:02 +0000] [3] [INFO] Using worker: sync
2016-09-02T10:49:02.980208+00:00 app[web.1]: [2016-09-02 10:49:02 +0000] [10] [INFO] Booting worker with pid: 10
2016-09-02T10:49:04.228057+00:00 heroku[web.1]: State changed from starting to up
2016-09-02T10:53:41.572630+00:00 heroku[router]: at=info method=GET path="/" host=peaceful-earth-63194.herokuapp.com request_id=68c0b216-2084-46c8-9be5-b7e5aacaa590 fwd="14.191.217.103" dyno=web.1 connect=0ms service=42ms status=302 bytes=400
2016-09-02T10:53:41.880217+00:00 heroku[router]: at=info method=GET path="/admin/login/?next=/" host=peaceful-earth-63194.herokuapp.com request_id=17b91dc2-ba06-482c-8af0-e7b015fe2077 fwd="14.191.217.103" dyno=web.1 connect=0ms service=41ms status=200 bytes=2184
2016-09-02T10:53:42.156295+00:00 heroku[router]: at=info method=GET path="/static/admin/css/base.css" host=peaceful-earth-63194.herokuapp.com request_id=40dec62d-8c4a-4af6-8e0f-8053fe8379b9 fwd="14.191.217.103" dyno=web.1 connect=0ms service=9ms status=404 bytes=4566
2016-09-02T10:53:42.157491+00:00 heroku[router]: at=info method=GET path="/static/admin/css/login.css" host=peaceful-earth-63194.herokuapp.com request_id=3a29f200-c185-4344-a6e1-5af35e5d120e fwd="14.191.217.103" dyno=web.1 connect=0ms service=17ms status=404 bytes=4569
2016-09-02T10:53:42.164162+00:00 app[web.1]: Not Found: /static/admin/css/base.css
2016-09-02T10:53:42.177480+00:00 app[web.1]: Not Found: /static/admin/css/login.css
2016-09-02T11:01:19.031353+00:00 heroku[api]: Deploy 2beb15a by huyvohcmc@gmail.com
2016-09-02T11:01:19.031444+00:00 heroku[api]: Release v22 created by huyvohcmc@gmail.com
2016-09-02T11:01:19.262522+00:00 heroku[slug-compiler]: Slug compilation started
2016-09-02T11:01:19.262528+00:00 heroku[slug-compiler]: Slug compilation finished
2016-09-02T11:01:19.426837+00:00 heroku[web.1]: Restarting
2016-09-02T11:01:19.427455+00:00 heroku[web.1]: State changed from up to starting
2016-09-02T11:01:22.141325+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-09-02T11:01:22.545379+00:00 heroku[web.1]: Starting process with command `gunicorn assignment.wsgi --log-file -`
2016-09-02T11:01:22.754067+00:00 app[web.1]: [2016-09-02 18:01:22 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-09-02T11:01:22.754077+00:00 app[web.1]: [2016-09-02 18:01:22 +0000] [10] [INFO] Worker exiting (pid: 10)
2016-09-02T11:01:22.757599+00:00 app[web.1]: [2016-09-02 11:01:22 +0000] [3] [INFO] Handling signal: term
2016-09-02T11:01:22.763197+00:00 app[web.1]: [2016-09-02 11:01:22 +0000] [3] [INFO] Shutting down: Master
2016-09-02T11:01:22.880977+00:00 heroku[web.1]: Process exited with status 0
2016-09-02T11:01:24.628348+00:00 app[web.1]: [2016-09-02 11:01:24 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-09-02T11:01:24.628921+00:00 app[web.1]: [2016-09-02 11:01:24 +0000] [3] [INFO] Listening at: http://0.0.0.0:34235 (3)
2016-09-02T11:01:24.629075+00:00 app[web.1]: [2016-09-02 11:01:24 +0000] [3] [INFO] Using worker: sync
2016-09-02T11:01:24.636198+00:00 app[web.1]: [2016-09-02 11:01:24 +0000] [9] [INFO] Booting worker with pid: 9
2016-09-02T11:01:24.722355+00:00 app[web.1]: [2016-09-02 11:01:24 +0000] [10] [INFO] Booting worker with pid: 10
2016-09-02T11:01:26.271435+00:00 heroku[web.1]: State changed from starting to up
2016-09-02T11:01:27.930795+00:00 heroku[router]: at=info method=GET path="/" host=peaceful-earth-63194.herokuapp.com request_id=a844ef4b-a2d1-44fe-af0e-09c76cb0e034 fwd="14.191.217.103" dyno=web.1 connect=0ms service=46ms status=302 bytes=400
2016-09-02T11:01:28.363163+00:00 heroku[router]: at=info method=GET path="/admin/login/?next=/" host=peaceful-earth-63194.herokuapp.com request_id=31c0823a-466f-4363-b550-3c81681305f5 fwd="14.191.217.103" dyno=web.1 connect=0ms service=171ms status=200 bytes=2184
2016-09-02T11:01:28.716801+00:00 heroku[router]: at=info method=GET path="/static/admin/css/base.css" host=peaceful-earth-63194.herokuapp.com request_id=2d1b8bb2-9ab3-49f7-b557-a54eed996547 fwd="14.191.217.103" dyno=web.1 connect=0ms service=8ms status=404 bytes=4566
2016-09-02T11:01:28.693936+00:00 heroku[router]: at=info method=GET path="/static/admin/css/login.css" host=peaceful-earth-63194.herokuapp.com request_id=24aa1eed-aa87-4854-ab35-1604e8393b9d fwd="14.191.217.103" dyno=web.1 connect=0ms service=18ms status=404 bytes=4569
2016-09-02T11:01:28.681948+00:00 app[web.1]: Not Found: /static/admin/css/base.css
2016-09-02T11:01:28.692958+00:00 app[web.1]: Not Found: /static/admin/css/login.css
2016-09-02T11:12:43.686922+00:00 heroku[api]: Deploy 63085e6 by huyvohcmc@gmail.com
2016-09-02T11:12:43.687037+00:00 heroku[api]: Release v23 created by huyvohcmc@gmail.com
2016-09-02T11:12:43.951987+00:00 heroku[slug-compiler]: Slug compilation started
2016-09-02T11:12:43.951998+00:00 heroku[slug-compiler]: Slug compilation finished
2016-09-02T11:12:43.926959+00:00 heroku[web.1]: Restarting
2016-09-02T11:12:43.929107+00:00 heroku[web.1]: State changed from up to starting
2016-09-02T11:12:46.931285+00:00 heroku[web.1]: Starting process with command `gunicorn assignment.wsgi --log-file -`
2016-09-02T11:12:47.860591+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-09-02T11:12:48.729601+00:00 app[web.1]: [2016-09-02 18:12:48 +0000] [10] [INFO] Worker exiting (pid: 10)
2016-09-02T11:12:48.729617+00:00 app[web.1]: [2016-09-02 18:12:48 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-09-02T11:12:48.729623+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [3] [INFO] Handling signal: term
2016-09-02T11:12:48.775112+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [3] [INFO] Shutting down: Master
2016-09-02T11:12:48.890301+00:00 heroku[web.1]: Process exited with status 0
2016-09-02T11:12:48.839674+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-09-02T11:12:48.840093+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [3] [INFO] Listening at: http://0.0.0.0:20001 (3)
2016-09-02T11:12:48.840166+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [3] [INFO] Using worker: sync
2016-09-02T11:12:48.843687+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [9] [INFO] Booting worker with pid: 9
2016-09-02T11:12:48.939210+00:00 app[web.1]: [2016-09-02 11:12:48 +0000] [10] [INFO] Booting worker with pid: 10
2016-09-02T11:12:50.565750+00:00 heroku[web.1]: State changed from starting to up
2016-09-02T11:13:00.439745+00:00 heroku[router]: at=info method=GET path="/" host=peaceful-earth-63194.herokuapp.com request_id=c30b47e6-fbb8-4412-9242-5fe37217026a fwd="14.191.217.103" dyno=web.1 connect=0ms service=49ms status=400 bytes=199
2016-09-02T11:14:01.686661+00:00 heroku[api]: Deploy c149525 by huyvohcmc@gmail.com
2016-09-02T11:14:01.686965+00:00 heroku[api]: Release v24 created by huyvohcmc@gmail.com
2016-09-02T11:14:02.189063+00:00 heroku[slug-compiler]: Slug compilation started
2016-09-02T11:14:02.189073+00:00 heroku[slug-compiler]: Slug compilation finished
2016-09-02T11:14:02.466456+00:00 heroku[web.1]: Restarting
2016-09-02T11:14:02.467005+00:00 heroku[web.1]: State changed from up to starting
2016-09-02T11:14:04.713176+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-09-02T11:14:05.259388+00:00 app[web.1]: [2016-09-02 18:14:05 +0000] [10] [INFO] Worker exiting (pid: 10)
2016-09-02T11:14:05.260345+00:00 app[web.1]: [2016-09-02 11:14:05 +0000] [3] [INFO] Handling signal: term
2016-09-02T11:14:05.265937+00:00 app[web.1]: [2016-09-02 18:14:05 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-09-02T11:14:05.317647+00:00 app[web.1]: [2016-09-02 11:14:05 +0000] [3] [INFO] Shutting down: Master
2016-09-02T11:14:05.411311+00:00 heroku[web.1]: Process exited with status 0
2016-09-02T11:14:06.581314+00:00 heroku[web.1]: Starting process with command `gunicorn assignment.wsgi --log-file -`
2016-09-02T11:14:10.282506+00:00 heroku[web.1]: State changed from starting to up
2016-09-02T11:14:10.187781+00:00 app[web.1]: [2016-09-02 11:14:10 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-09-02T11:14:10.188490+00:00 app[web.1]: [2016-09-02 11:14:10 +0000] [3] [INFO] Listening at: http://0.0.0.0:27446 (3)
2016-09-02T11:14:10.188627+00:00 app[web.1]: [2016-09-02 11:14:10 +0000] [3] [INFO] Using worker: sync
2016-09-02T11:14:10.211822+00:00 app[web.1]: [2016-09-02 11:14:10 +0000] [9] [INFO] Booting worker with pid: 9
2016-09-02T11:14:10.231978+00:00 app[web.1]: [2016-09-02 11:14:10 +0000] [10] [INFO] Booting worker with pid: 10
2016-09-02T11:14:29.714607+00:00 heroku[router]: at=info method=GET path="/" host=peaceful-earth-63194.herokuapp.com request_id=947ed6b9-b48a-48b1-8860-36846248acea fwd="14.191.217.103" dyno=web.1 connect=0ms service=153ms status=302 bytes=400
2016-09-02T11:14:30.522664+00:00 heroku[router]: at=info method=GET path="/admin/login/?next=/" host=peaceful-earth-63194.herokuapp.com request_id=b74c55bf-913c-4e0d-8d16-2b1f4f0cea13 fwd="14.191.217.103" dyno=web.1 connect=0ms service=561ms status=200 bytes=2184
2016-09-02T11:14:30.879732+00:00 heroku[router]: at=info method=GET path="/static/admin/css/base.css" host=peaceful-earth-63194.herokuapp.com request_id=769f989a-f051-4a89-a079-1d6acea3c185 fwd="14.191.217.103" dyno=web.1 connect=0ms service=86ms status=404 bytes=4566
2016-09-02T11:14:30.865971+00:00 heroku[router]: at=info method=GET path="/static/admin/css/login.css" host=peaceful-earth-63194.herokuapp.com request_id=b271b831-a4fb-4bdb-9f6a-e4d66297db88 fwd="14.191.217.103" dyno=web.1 connect=0ms service=75ms status=404 bytes=4569
2016-09-02T11:14:30.865501+00:00 app[web.1]: Not Found: /static/admin/css/login.css
2016-09-02T11:14:30.871110+00:00 app[web.1]: Not Found: /static/admin/css/base.css
</code></pre>
| 0 | 2016-09-02T11:26:25Z | 39,291,412 | <p>Django does not serve static files in production. Normally, to serve Django static files in production you set up a standalone web server, e.g. nginx.</p>
<p>However, the way to serve static files on Heroku is a little different. See the link below, provided by the Heroku team, for details on how to serve static files on Heroku:<br>
<a href="https://devcenter.heroku.com/articles/django-assets" rel="nofollow">https://devcenter.heroku.com/articles/django-assets</a> </p>
<p><strong>EDIT: making the answer conform to stackoverflow guidelines:</strong> </p>
<p>As per the Heroku guidelines for serving static files: </p>
<p>Add the following to settings.py:</p>
<pre><code>PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.9/howto/static-files/
STATIC_ROOT = os.path.join(PROJECT_ROOT, 'staticfiles')
STATIC_URL = '/static/'
# Extra places for collectstatic to find static files.
STATICFILES_DIRS = (
os.path.join(PROJECT_ROOT, 'static'),
)
</code></pre>
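<p>For clarity, the <code>PROJECT_ROOT</code>/<code>STATIC_ROOT</code> settings above just build absolute filesystem paths relative to the settings module. A minimal sketch of that path computation, using a hypothetical settings-file path (in a real project, <code>__file__</code> inside settings.py supplies this automatically):</p>

```python
import os

# Hypothetical stand-in for settings.py's __file__; in a real project the
# settings module's own path is used here.
settings_file = "/app/assignment/settings.py"

PROJECT_ROOT = os.path.dirname(os.path.abspath(settings_file))

# collectstatic copies all static assets (including the admin CSS that was
# 404ing in the logs above) into this directory.
STATIC_ROOT = os.path.join(PROJECT_ROOT, "staticfiles")
STATIC_URL = "/static/"

print(STATIC_ROOT)  # /app/assignment/staticfiles
```

<p>Heroku runs <code>collectstatic</code> at build time, so everything under <code>STATICFILES_DIRS</code> plus each app's static files ends up in <code>STATIC_ROOT</code>, where WhiteNoise can serve it.</p>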
<p>Then install the WhiteNoise package with the following command: </p>
<pre><code>$ pip install whitenoise
</code></pre>
<p>And in your wsgi.py: </p>
<pre><code>from django.core.wsgi import get_wsgi_application
from whitenoise.django import DjangoWhiteNoise

application = get_wsgi_application()
# Wrap the WSGI application so WhiteNoise serves the collected static files.
# (Note: DjangoWhiteNoise was removed in WhiteNoise 4.0; newer releases use
# whitenoise.middleware.WhiteNoiseMiddleware instead.)
application = DjangoWhiteNoise(application)
</code></pre>
| 1 | 2016-09-02T11:35:09Z | [
"python",
"django",
"heroku",
"deployment"
] |