title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Multiple stacked bar plot with pandas | 39,013,425 | <p>I am trying to make a multiple stacked bar plot with pandas but I'm running into issues. Here is a sample code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a':[10, 20], 'b': [15, 25], 'c': [35, 40], 'd':[45, 50]}, index=['john', 'bob'])
ax = df[['a', 'c']].plot.bar(width=0.1, stacked=True)
ax=df[['b', 'd']].plot.bar(width=0.1, stacked=True, ax=ax)
df[['a', 'd']].plot.bar(width=0.1, stacked=True, ax=ax)
</code></pre>
<p>Which produces the following plot:</p>
<p><a href="http://i.stack.imgur.com/j8k35.png" rel="nofollow"><img src="http://i.stack.imgur.com/j8k35.png" alt="enter image description here"></a></p>
<p>As you can see, the bars within each cluster are plotted on top of each other, which is not what I want to achieve. I want the bars within the same cluster to be plotted next to each other. I tried to play with the "position" argument but without much success.</p>
<p>Any idea on how to achieve this?</p>
| 1 | 2016-08-18T08:33:15Z | 39,014,192 | <p>You could do it by shifting the <code>position</code> parameter of a <code>bar-plot</code> so that they are adjacent to each other as shown:</p>
<pre><code>import matplotlib.style
import matplotlib.pyplot as plt

matplotlib.style.use('ggplot')
fig, ax = plt.subplots()
df[['a', 'c']].plot.bar(stacked=True, width=0.1, position=1.5, colormap="bwr", ax=ax, alpha=0.7)
df[['b', 'd']].plot.bar(stacked=True, width=0.1, position=-0.5, colormap="RdGy", ax=ax, alpha=0.7)
df[['a', 'd']].plot.bar(stacked=True, width=0.1, position=0.5, colormap="BrBG", ax=ax, alpha=0.7)
plt.legend(loc="upper center")
plt.show()
</code></pre>
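For reference, here is a fully self-contained version of the approach above (a sketch: it re-creates the question's <code>df</code>, and assumes the non-interactive Agg backend so it runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend (assumption: no display available)
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({'a': [10, 20], 'b': [15, 25], 'c': [35, 40], 'd': [45, 50]},
                  index=['john', 'bob'])

fig, ax = plt.subplots()
# `position` shifts each stacked pair sideways so the three stacks sit next to
# each other instead of on top of each other
df[['a', 'c']].plot.bar(stacked=True, width=0.1, position=1.5, ax=ax, alpha=0.7)
df[['b', 'd']].plot.bar(stacked=True, width=0.1, position=-0.5, ax=ax, alpha=0.7)
df[['a', 'd']].plot.bar(stacked=True, width=0.1, position=0.5, ax=ax, alpha=0.7)

# 3 calls x 2 columns x 2 index rows = 12 bar rectangles on the axes
print(len(ax.patches))
```

Each stack keeps <code>width=0.1</code>, so with three stacks per index label there is still whitespace between clusters; increase the width to make them denser.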
<p><a href="http://i.stack.imgur.com/MKwjf.png" rel="nofollow"><img src="http://i.stack.imgur.com/MKwjf.png" alt="enter image description here"></a></p>
| 2 | 2016-08-18T09:11:16Z | [
"python",
"pandas",
"matplotlib"
] |
Python 3 throwing error for exception handling | 39,013,431 | <p>I started learning Python last week and I'm unable to figure out what's wrong here:</p>
<pre><code>def add(x,y):
"""Adds 2 numbers and returns the result"""
return x+y
def sub(x,y):
"""Subtracts 2 numbers and returns the result"""
return x-y
a = int(input("Enter first number"))
b = int(input("Enter second number"))
c = int(input("Enter 1 for subtraction , 2 for addition and 3 for both"))
try:
if c>3:
raise ValueError()
except ValueError():
print ("Wrong choice")
else:
print ("Your choice is ",c)
if (c==1):
print ("Subtraction of 2 numbers=",(sub(a,b)))
if (c==2):
print ("Addition of 2 numbers = ",(add(a,b)))
if (c==3):
print ("Subtraction of 2 numbers=",(sub(a,b)))
print ("Addition of 2 numbers = ",(add(a,b)))
</code></pre>
<p>If I enter 4 it throws this error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Program Files (x86)/Python35-32/calculator.py", line 15, in <module>
raise ValueError()
ValueError
</code></pre>
<p>During handling of the above exception, another exception occurred:</p>
<pre><code>Traceback (most recent call last):
File "C:/Program Files (x86)/Python35-32/calculator.py", line 16, in <module>
except ValueError():
TypeError: catching classes that do not inherit from BaseException is not allowed
</code></pre>
| 0 | 2016-08-18T08:33:22Z | 39,013,472 | <p>You are trying to catch an <em>instance</em> of <code>ValueError()</code>, where Python expects you to filter on the <em>type</em>. Remove the call:</p>
<pre><code>except ValueError:
print ("Wrong choice")
</code></pre>
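A minimal runnable illustration of the difference (<code>check_choice</code> is a hypothetical helper, not from the question):

```python
def check_choice(c):
    """Return a message, treating choices above 3 as invalid."""
    try:
        if c > 3:
            raise ValueError("choice out of range")
    except ValueError:  # the class itself; `except ValueError():` would raise TypeError
        return "Wrong choice"
    return "Your choice is %d" % c

print(check_choice(4))  # Wrong choice
print(check_choice(2))  # Your choice is 2
```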
| 4 | 2016-08-18T08:35:27Z | [
"python",
"python-3.x",
"exception-handling"
] |
How to vary the arguments in multiprocessing.Pool | 39,013,433 | <p>I want to execute yapsy plugins with the help of multiprocessing in Python. So far I've got a worker as follows:</p>
<pre><code>def mp_worker(plugin, importer, orgadb, regiondb, ispdb):
print(" Processs " + plugin.plugin_object.getOrigin + " running.")
processPlugin(plugin, importer, orgadb, regiondb, ispdb)
print(" Process " + plugin.plugin_object.getOrigin + " done.")
</code></pre>
<p>The plugin parameter is a plugin object. The function processPlugin does the necessary work but isn't relevant to the question.</p>
<p>My handler for the multiprocessing is where I'm stuck: </p>
<pre><code>def mp_handler(plugins, importer, orgadb, regiondb, ispdb):
pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
pool.map(mp_worker(???)
</code></pre>
<p>I am giving it an array with the plugin objects called plugins, but obviously the workers need different plugins. How to achieve a pool with this? </p>
<p>Thank you in advance.</p>
| 0 | 2016-08-18T08:33:36Z | 39,014,137 | <p>After a little testing I figured out how it would work, if it weren't a yapsy plugin class:</p>
<pre><code>import multiprocessing
from functools import partial

def mp_worker(importer, orgadb, regiondb, ispdb, plugin):
processPlugin(plugin, importer, orgadb, regiondb, ispdb)
def mp_handler(plugins, importer, orgadb, regiondb, ispdb):
pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
func = partial(mp_worker, importer, orgadb, regiondb, ispdb)
pool.map(func, plugins)
pool.close()
</code></pre>
<p>Anyway, pickle doesn't work with the plugin class, so anyone who wants to multiprocess yapsy plugins should also take a look at this:
<a href="http://yapsy.sourceforge.net/MultiprocessPluginProxy.html" rel="nofollow">http://yapsy.sourceforge.net/MultiprocessPluginProxy.html</a></p>
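The partial-plus-map pattern above can be sketched with a trivial stand-in function. This is an assumption-laden demo, not the real yapsy code: <code>multiprocessing.dummy.Pool</code> (the thread-backed drop-in with the same API) is used so it runs anywhere without pickling issues:

```python
from functools import partial
from multiprocessing.dummy import Pool  # thread-backed Pool with the same map() API

def mp_worker(importer, orgadb, regiondb, ispdb, plugin):
    # stand-in for processPlugin(): combine the four fixed args with the varying plugin
    return "%s:%s" % (plugin, importer)

def mp_handler(plugins, importer, orgadb, regiondb, ispdb):
    pool = Pool(2)
    # partial() freezes the four shared arguments; map() supplies each plugin
    # from the iterable as the final positional argument
    func = partial(mp_worker, importer, orgadb, regiondb, ispdb)
    results = pool.map(func, plugins)
    pool.close()
    pool.join()
    return results

print(mp_handler(["p1", "p2"], "imp", "org", "reg", "isp"))
```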
| 0 | 2016-08-18T09:08:22Z | [
"python",
"python-3.x",
"multiprocessing"
] |
Absolute and relative import of Python modules: a matplotlib example | 39,013,453 | <p>Many questions have been asked regarding how to import Python modules, specifically on whether to use absolute or explicit relative import (<a href="http://stackoverflow.com/questions/4209641/absolute-vs-explicit-relative-import-of-python-module">here</a> for example).
The import style, as suggested by the Python Software Foundation, can be found <a href="https://www.python.org/dev/peps/pep-0008/#imports" rel="nofollow">here</a>. In short, it recommends absolute import. </p>
<p>I'm writing this question because I assume the guys who develop matplotlib know what they are doing. </p>
<p>Given this assumption and assuming I understand the major/obvious differences between these two kinds of import, I would be interested in understanding the tiny differences between them that influenced the matplotlib's developers to write something like this:</p>
<pre><code>import matplotlib
import matplotlib.cbook as cbook
from matplotlib.cbook import mplDeprecation
from matplotlib import docstring, rcParams
from .transforms import (Bbox, IdentityTransform, TransformedBbox,
TransformedPath, Transform)
from .path import Path
</code></pre>
<p>This is the beginning of <code>artist.py</code>, contained inside the <code>matplotlib</code> module (i.e. <code>matplotlib.artist</code>). I'm looking at matplotlib-1.5.1.</p>
<p>I would like to focus the attention on modules <code>matplotlib.cbook</code>, <code>matplotlib.transforms</code>, and <code>matplotlib.path</code>. All three of them are pure Python modules (i.e. <code>module_name.py</code> files).</p>
<p>Why <code>from matplotlib.cbook import mplDeprecation</code> has been chosen rather than <code>from .cbook import mplDeprecation</code> and why <code>from .path import Path</code> was preferred to <code>from matplotlib.path import Path</code>?</p>
<p>Perhaps there is no particular reason and these choices just reflect different styles of different developers; perhaps there is something I'm missing. </p>
| 0 | 2016-08-18T08:34:29Z | 39,018,696 | <p>An important thing to remember about the matplotlib code base is that it is very old (the git history goes back to 2003, and a few more years of history have been lost), large (>93k lines of Python, 17k lines of C++), and has over 450 contributors.</p>
<p>Having a look at git-blame (off of the 2.x branch but the imports are pretty stable) shows:</p>
<pre><code>08:29 $ git blame matplotlib/artist.py | head -n 18
5fca7e31 (Thomas A Caswell 2013-09-25 11:36:00 -0500 1) from __future__ import (absolute_import, division, print_function,
5fca7e31 (Thomas A Caswell 2013-09-25 11:36:00 -0500 2) unicode_literals)
f4adec7b (Michael Droettboom 2013-08-14 10:18:10 -0400 3)
07e22753 (Matthew Brett 2016-06-06 12:08:35 -0700 4) import six
0ea5fff3 (Thomas A Caswell 2015-12-01 14:40:34 -0500 5) from collections import OrderedDict
f4adec7b (Michael Droettboom 2013-08-14 10:18:10 -0400 6)
453e0ece (Nelle Varoquaux 2012-08-27 23:16:43 +0200 7) import re
453e0ece (Nelle Varoquaux 2012-08-27 23:16:43 +0200 8) import warnings
731f6c86 (Michael Droettboom 2013-09-27 09:59:48 -0400 9) import inspect
e1d30c85 (Jens Hedegaard Nielsen 2015-08-18 19:52:48 +0100 10) import numpy as np
b44e8f20 (John Hunter 2008-12-08 23:28:55 +0000 11) import matplotlib
99b89a87 (John Hunter 2008-06-03 20:28:14 +0000 12) import matplotlib.cbook as cbook
c137a718 (Thomas A Caswell 2014-11-23 00:37:28 -0500 13) from matplotlib.cbook import mplDeprecation
527b7d9a (Michael Droettboom 2010-06-11 18:17:52 +0000 14) from matplotlib import docstring, rcParams
b2408c33 (Cimarron Mittelsteadt 2014-09-12 15:58:25 -0700 15) from .transforms import (Bbox, IdentityTransform, TransformedBbox,
b2408c33 (Cimarron Mittelsteadt 2014-09-12 15:58:25 -0700 16) TransformedPath, Transform)
f4adec7b (Michael Droettboom 2013-08-14 10:18:10 -0400 17) from .path import Path
f2a0c7ae (John Hunter 2007-03-20 21:48:31 +0000 18)
</code></pre>
<p>You can see that these lines were last touched by a number of people (apparently including me) over a range of years.</p>
<p>I would not read too much into this difference, but if you want to dive deeper try looking at the commit messages on those changes.</p>
| 1 | 2016-08-18T12:48:51Z | [
"python",
"matplotlib",
"import",
"module"
] |
Calculation not storing in DataFrame but works on print | 39,013,519 | <p>I have the following calculation:<br>
<code>np.maximum(0, np.prod([perf_asset, calc_arr['val']]) - amt_payout - np.prod([exposure, calc_arr['delta_1']]))</code></p>
<p>Written out, this would be: </p>
<pre><code>MAX(0, 0.8 Ã 105.015038 - 80 - TRUE Ã 5.3135)
MAX(0, 84.0120307692 - 80 - 5.3135)
= 0
</code></pre>
<p>If I print this, the output actually works but if I want to store it in a DataFrame, it doesn't:
<code>calc_arr['added_amt'] = np.maximum(0, np.prod([perf_asset, calc_arr['val']]) - amt_payout - np.prod([exposure, calc_arr['delta_1']]))</code></p>
<p>The calculations stopped working all of a sudden. Before that I didn't even have to use <code>np.prod</code> and <code>np.sum</code>. I'm completely confused to be honest.</p>
<p>Complete loop:</p>
<pre><code>j = 1
for i in [0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,-0.053,-0.0698,-0.1011,-0.1767,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271]:
risky_return = i
risk_free_return = (-0.0125/260)
stock_val = stock_calc(j, stock_val['amt_payout'], stock_val['alloc_risky'], stock_val['alloc_risk_free'], stock_val['delta_1'], risky_return, risk_free_return)
stock_vals = stock_vals.append(stock_val)
j = j + 1
</code></pre>
<p>And the <code>calc_arr['val']</code> is retrieved: </p>
<pre><code>calc_arr['val'] = np.sum([np.prod([(1 + perf_risky), alloc_risky]), np.prod([(1 + perf_risk_free), alloc_risk_free])])
</code></pre>
| 1 | 2016-08-18T08:37:49Z | 39,013,956 | <p>I think you need to <code>append</code> the values to a <code>list</code> - <code>stock_vals</code> - and then assign it to the column:</p>
<pre><code>stock_vals = []
for i in [0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,-0.053,-0.0698,-0.1011,-0.1767,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271]:
risky_return = i
risk_free_return = (-0.0125/260)
stock_val = stock_calc(j, stock_val['amt_payout'], stock_val['alloc_risky'], stock_val['alloc_risk_free'], stock_val['delta_1'], risky_return, risk_free_return)
stock_vals.append(stock_val)
calc_arr['val'] = stock_vals
</code></pre>
<p>I tried rewriting it as a list comprehension:</p>
<pre><code>L = [0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,0.0627,-0.053,-0.0698,-0.1011,-0.1767,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271,-0.6271]
calc_arr['val'] = [stock_calc(j, stock_val['amt_payout'], stock_val['alloc_risky'], stock_val['alloc_risk_free'], stock_val['delta_1'], risky_return, (-0.0125/260)) for risky_return in L]
</code></pre>
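One pitfall to watch for when converting such a loop to a comprehension: <code>list.append()</code> returns <code>None</code>, so a comprehension built from <code>.append()</code> calls yields a list of <code>None</code>s; build the values directly instead. A minimal demonstration (<code>square()</code> is a hypothetical stand-in for <code>stock_calc()</code>):

```python
def square(x):  # hypothetical stand-in for stock_calc()
    return x * x

acc = []
wrong = [acc.append(square(v)) for v in [1, 2, 3]]  # list.append() returns None
right = [square(v) for v in [1, 2, 3]]              # build values directly

print(wrong)  # [None, None, None]
print(right)  # [1, 4, 9]
```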
| 0 | 2016-08-18T08:59:46Z | [
"python",
"pandas",
"numpy"
] |
How to set QoS level 0 for mqtt publish messages in Python Azure IoT Hub Client SDK? | 39,013,564 | <p>I am using the <a href="https://github.com/Azure/azure-iot-sdks" rel="nofollow">Azure</a> custom Python SDK to connect to Azure IoT Hub. I can successfully connect to the Hub using MQTT, but I didn't find any way to set QoS level 0 for published messages. Also, I haven't found any documentation for the Python APIs.</p>
| 1 | 2016-08-18T08:40:43Z | 39,031,300 | <p>The official azure-iot-sdk Python library only supports iothub_client. If you want to use an MQTT Python SDK, there are at least two ways:</p>
<ol>
<li>Write a Python wrapper for the azure-umqtt-c library, located <a href="https://github.com/Azure/azure-umqtt-c" rel="nofollow">here</a>, which has a set-QoS-level API available. I think it would work the same way the azure-iot-sdks Python SDK wraps the azure-iot-sdks C library.</li>
<li>If you'd rather use a third-party library, <a href="https://github.com/eclipse/paho.mqtt.python" rel="nofollow">paho.mqtt</a> is a good choice. You should be able to connect to the Azure MQTT broker, and it also has a set-QoS API available. </li>
</ol>
| 1 | 2016-08-19T04:47:34Z | [
"python",
"azure",
"mqtt",
"azure-iot-hub"
] |
How to set QoS level 0 for mqtt publish messages in Python Azure IoT Hub Client SDK? | 39,013,564 | <p>I am using the <a href="https://github.com/Azure/azure-iot-sdks" rel="nofollow">Azure</a> custom Python SDK to connect to Azure IoT Hub. I can successfully connect to the Hub using MQTT, but I didn't find any way to set QoS level 0 for published messages. Also, I haven't found any documentation for the Python APIs.</p>
| 1 | 2016-08-18T08:40:43Z | 39,031,998 | <p>In addition, you can also use IronPython as a Python runtime based on the .NET framework, and then integrate Python with the Azure IoT Hub SDK for C# if you are more familiar with C# than C; please see the <a href="http://ironpython.net/documentation/dotnet/" rel="nofollow">document</a> on IronPython .NET integration. </p>
| 0 | 2016-08-19T05:55:41Z | [
"python",
"azure",
"mqtt",
"azure-iot-hub"
] |
Efficiently find indices of all values in an array | 39,013,722 | <p>I have a very large array, consisting of integers between 0 and N, where each value occurs at least once.</p>
<p>I'd like to know, for each value <em>k</em>, all the indices in my array where the array's value equals <em>k</em>.</p>
<p>For example:</p>
<pre><code>arr = np.array([0,1,2,3,2,1,0])
desired_output = {
0: np.array([0,6]),
1: np.array([1,5]),
2: np.array([2,4]),
3: np.array([3]),
}
</code></pre>
<p>Right now I am accomplishing this with a loop over <code>range(N+1)</code>, and calling <code>np.where</code> N times.</p>
<pre><code>indices = {}
for value in range(max(arr)+1):
indices[value] = np.where(arr == value)[0]
</code></pre>
<p>This loop is by far the slowest part of my code. (Both the <code>arr==value</code> evaluation and the <code>np.where</code> call take up significant chunks of time.) Is there a more efficient way to do this?</p>
<p>I also tried playing around with <code>np.unique(arr, return_index=True)</code> but that only tells me the very first index, rather than all of them.</p>
| 4 | 2016-08-18T08:48:25Z | 39,013,789 | <p>I don't know numpy but you could definitely do this in one iteration, with a defaultdict:</p>
<pre><code>from collections import defaultdict

indices = defaultdict(list)
for i, val in enumerate(arr):
indices[val].append(i)
</code></pre>
| 1 | 2016-08-18T08:51:48Z | [
"python",
"numpy"
] |
Efficiently find indices of all values in an array | 39,013,722 | <p>I have a very large array, consisting of integers between 0 and N, where each value occurs at least once.</p>
<p>I'd like to know, for each value <em>k</em>, all the indices in my array where the array's value equals <em>k</em>.</p>
<p>For example:</p>
<pre><code>arr = np.array([0,1,2,3,2,1,0])
desired_output = {
0: np.array([0,6]),
1: np.array([1,5]),
2: np.array([2,4]),
3: np.array([3]),
}
</code></pre>
<p>Right now I am accomplishing this with a loop over <code>range(N+1)</code>, and calling <code>np.where</code> N times.</p>
<pre><code>indices = {}
for value in range(max(arr)+1):
indices[value] = np.where(arr == value)[0]
</code></pre>
<p>This loop is by far the slowest part of my code. (Both the <code>arr==value</code> evaluation and the <code>np.where</code> call take up significant chunks of time.) Is there a more efficient way to do this?</p>
<p>I also tried playing around with <code>np.unique(arr, return_index=True)</code> but that only tells me the very first index, rather than all of them.</p>
| 4 | 2016-08-18T08:48:25Z | 39,013,794 | <p>A pythonic way is using <code>collections.defaultdict()</code>:</p>
<pre><code>>>> from collections import defaultdict
>>>
>>> d = defaultdict(list)
>>>
>>> for i, j in enumerate(arr):
... d[j].append(i)
...
>>> d
defaultdict(<type 'list'>, {0: [0, 6], 1: [1, 5], 2: [2, 4], 3: [3]})
</code></pre>
<p>And here is a Numpythonic way using a dictionary comprehension and <code>numpy.where()</code>:</p>
<pre><code>>>> {i: np.where(arr == i)[0] for i in np.unique(arr)}
{0: array([0, 6]), 1: array([1, 5]), 2: array([2, 4]), 3: array([3])}
</code></pre>
<p>And here is a pure Numpythonic approach if you don't want to involve the dictionary:</p>
<pre><code>>>> uniq = np.unique(arr)
>>> args, indices = np.where((np.tile(arr, len(uniq)).reshape(len(uniq), len(arr)) == np.vstack(uniq)))
>>> np.split(indices, np.where(np.diff(args))[0] + 1)
[array([0, 6]), array([1, 5]), array([2, 4]), array([3])]
</code></pre>
| 2 | 2016-08-18T08:51:59Z | [
"python",
"numpy"
] |
Efficiently find indices of all values in an array | 39,013,722 | <p>I have a very large array, consisting of integers between 0 and N, where each value occurs at least once.</p>
<p>I'd like to know, for each value <em>k</em>, all the indices in my array where the array's value equals <em>k</em>.</p>
<p>For example:</p>
<pre><code>arr = np.array([0,1,2,3,2,1,0])
desired_output = {
0: np.array([0,6]),
1: np.array([1,5]),
2: np.array([2,4]),
3: np.array([3]),
}
</code></pre>
<p>Right now I am accomplishing this with a loop over <code>range(N+1)</code>, and calling <code>np.where</code> N times.</p>
<pre><code>indices = {}
for value in range(max(arr)+1):
indices[value] = np.where(arr == value)[0]
</code></pre>
<p>This loop is by far the slowest part of my code. (Both the <code>arr==value</code> evaluation and the <code>np.where</code> call take up significant chunks of time.) Is there a more efficient way to do this?</p>
<p>I also tried playing around with <code>np.unique(arr, return_index=True)</code> but that only tells me the very first index, rather than all of them.</p>
| 4 | 2016-08-18T08:48:25Z | 39,013,922 | <p><strong>Approach #1</strong></p>
<p>Here's a vectorized approach to get those indices as a list of arrays -</p>
<pre><code>sidx = arr.argsort()
unq, cut_idx = np.unique(arr[sidx],return_index=True)
indices = np.split(sidx,cut_idx)[1:]
</code></pre>
<p>If you want a final dictionary that maps each unique element to its indices, we can finally use a dict comprehension -</p>
<pre><code>dict_out = {unq[i]:iterID for i,iterID in enumerate(indices)}
</code></pre>
<hr>
<p><strong>Approach #2</strong></p>
<p>If you are just interested in the list of arrays, here's an alternative meant for performance -</p>
<pre><code>sidx = arr.argsort()
indices = np.split(sidx,np.flatnonzero(np.diff(arr[sidx])>0)+1)
</code></pre>
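Approach #1 put together on the question's sample array (a runnable sketch; <code>kind='stable'</code> is an addition here so equal values keep ascending index order - the default sort returns the same groups, but possibly unordered within each group):

```python
import numpy as np

arr = np.array([0, 1, 2, 3, 2, 1, 0])

sidx = arr.argsort(kind='stable')          # stable sort keeps ties in original order
unq, cut_idx = np.unique(arr[sidx], return_index=True)
indices = np.split(sidx, cut_idx)[1:]      # one index-array per unique value

dict_out = {int(unq[i]): iterID.tolist() for i, iterID in enumerate(indices)}
print(dict_out)  # {0: [0, 6], 1: [1, 5], 2: [2, 4], 3: [3]}
```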
| 7 | 2016-08-18T08:57:50Z | [
"python",
"numpy"
] |
Efficiently find indices of all values in an array | 39,013,722 | <p>I have a very large array, consisting of integers between 0 and N, where each value occurs at least once.</p>
<p>I'd like to know, for each value <em>k</em>, all the indices in my array where the array's value equals <em>k</em>.</p>
<p>For example:</p>
<pre><code>arr = np.array([0,1,2,3,2,1,0])
desired_output = {
0: np.array([0,6]),
1: np.array([1,5]),
2: np.array([2,4]),
3: np.array([3]),
}
</code></pre>
<p>Right now I am accomplishing this with a loop over <code>range(N+1)</code>, and calling <code>np.where</code> N times.</p>
<pre><code>indices = {}
for value in range(max(arr)+1):
indices[value] = np.where(arr == value)[0]
</code></pre>
<p>This loop is by far the slowest part of my code. (Both the <code>arr==value</code> evaluation and the <code>np.where</code> call take up significant chunks of time.) Is there a more efficient way to do this?</p>
<p>I also tried playing around with <code>np.unique(arr, return_index=True)</code> but that only tells me the very first index, rather than all of them.</p>
| 4 | 2016-08-18T08:48:25Z | 39,014,909 | <p>Fully vectorized solution using the <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package:</p>
<pre><code>import numpy_indexed as npi
k, idx = npi.group_by(arr, np.arange(len(arr)))
</code></pre>
<p>On a higher level: why do you need these indices? Subsequent grouped operations can usually be computed much more efficiently using the group_by functionality [e.g., npi.group_by(arr).mean(someotherarray)], without explicitly computing the indices of the keys.</p>
| 0 | 2016-08-18T09:43:19Z | [
"python",
"numpy"
] |
Google App Engine unable to resolve www.facebook.com | 39,013,932 | <h1>HACKY SOLUTION</h1>
<p>So I spent quite some time figuring out exactly WHY this went wrong, and it turns out I wasn't wrong at all. Google was. What ended up being the case was that there's a bug in the development SDK for App Engine.</p>
<p>To solve this (ridiculously annoying) issue, you can do the following:</p>
<p>Open the file called <code>/appengine/tools/devappserver2/python/sandbox.py</code>.</p>
<p>Add an import statement at the top of the file:</p>
<pre><code>import socket as socket_original
</code></pre>
<p>Then replace this:</p>
<pre><code>def load_module(self, fullname):
if fullname in sys.modules:
return sys.modules[fullname]
return self.import_stub_module(fullname)
</code></pre>
<p>With this:</p>
<pre><code>def load_module(self, fullname):
if fullname == "socket":
return socket_original
if fullname in sys.modules:
return sys.modules[fullname]
return self.import_stub_module(fullname)
</code></pre>
<p>And boom, you're now using the original sockets library.</p>
<hr>
<h1>ORIGINAL QUESTION</h1>
<p>I'm using Google App Engine to host my web api, I currently need to make some API calls to certain websites (including facebook), making calls to any other website works just fine, but when the Google App Engine tries to resolve <code>www.facebook.com</code> I get the following error:</p>
<pre><code>RuntimeError: error('illegal IP address string passed to inet_pton',)
</code></pre>
<p>Example code:</p>
<pre><code># Fetching `www.google.com`
socket.gethostbyname_ex("www.google.com")
# Returns:
[
"www.google.com",
[],
[
"173.194.65.106",
"173.194.65.104",
"173.194.65.105",
"173.194.65.103",
"173.194.65.99",
"173.194.65.147"
]
]
# Fetching `www.twitter.com`
socket.gethostbyname_ex("www.twitter.com")
# Returns:
[
"twitter.com",
[
"www.twitter.com"
],
[
"199.16.156.230",
"199.16.156.198",
"199.16.156.102",
"199.16.156.70"
]
]
# Fetching `www.facebook.com`
socket.gethostbyname_ex("www.facebook.com")
# Raises:
RuntimeError: error('illegal IP address string passed to inet_pton',)
</code></pre>
| 0 | 2016-08-18T08:58:30Z | 39,019,442 | <p>GAE supports sockets, but with some significant <a href="https://cloud.google.com/appengine/docs/python/sockets/#limitations_and_restrictions" rel="nofollow">Limitations and restrictions</a>.</p>
<p>The SDK is mimicking these limitations and restrictions, so tweaking it to use the real socket library instead of the supplied one in your local development environment might not be such a great idea - your app might not actually work when deployed on GAE.</p>
| 0 | 2016-08-18T13:25:48Z | [
"python",
"google-app-engine",
"dns"
] |
Parse a YAML with duplicate anchors in Python | 39,013,993 | <p>I'm just getting started with both YAML and Python and I'm trying to parse a YAML in Python which contains anchors and aliases.<br>
In this YAML I overwrite the anchors to make certain nodes have different values. </p>
<p>An example of my YAML:</p>
<pre><code>Some Colors: &some_colors
color_primary: &color_primary "#112233FF"
color_secondary: &color_secondary "#445566FF"
Element: &element
color: *color_primary
Overwrite some colors: &overwrite_colors
color_primary: &color_primary "#000000FF"
Another element: &another_element
color: *color_primary
</code></pre>
<p>Which has the expected outcome of (in JSON):</p>
<pre><code>{
"Some Colors": {
"color_primary": "#112233FF",
"color_secondary": "#445566FF"
},
"Element": {
"color": "#112233FF"
},
"Overwrite some colors": {
"color_primary": "#000000FF"
},
"Another element": {
"color": "#000000FF"
}
}
</code></pre>
<blockquote>
<p><em>I tested the above YAML snippet <a href="http://codebeautify.org/yaml-to-json-xml-csv" rel="nofollow">here</a></em></p>
</blockquote>
<p>From what I've read in the YAML docs, this should've been possible from version 1.1 (I think), but at least YAML version 1.2 should support it.</p>
<p>But whenever I try to parse the YAML, using PyYAML (with <code>yaml.load()</code>) or the <code>ruamel.yaml</code> package (with <code>ruamel.yaml.load()</code>), I get the 'duplicate anchor' error.</p>
<p><strong>What am I doing wrong here? And how to fix this?</strong> </p>
<p><strong>EDIT:</strong></p>
<p>With the help of <code>ruamel</code>'s owner I've found a solution to the above question. </p>
<p>As of <code>ruamel</code> v0.12.3 the above works as expected, although you will receive <code>ReusedAnchorWarning</code>s.<br>
These warnings can be suppressed with the following snippet:</p>
<pre><code>import warnings
from ruamel.yaml.error import ReusedAnchorWarning
warnings.simplefilter("ignore", ReusedAnchorWarning)
</code></pre>
<p>Giving credit where it is due: all of it goes to <code>ruamel</code>'s owner.</p>
<hr>
<p>As an added question: when I modify the above YAML to <em>(notice the change at <code>// <-- Added this</code>)</em>:</p>
<pre><code>Some Colors: &some_colors
color_primary: &color_primary "#112233FF"
color_secondary: &color_secondary "#445566FF"
Element: &element
color: *color_primary
Overwrite some colors: &overwrite_colors
<<: *some_colors // <-- Added this to include 'color_secondary' as well
color_primary: &color_primary "#000000FF"
Another element: &another_element
color: *color_primary
</code></pre>
<p>The output is:</p>
<pre><code>{
"Some Colors": {
"color_primary": "#000000FF",
"color_secondary": "#445566FF"
},
"Element": {
"color": "#112233FF"
},
"Overwrite some colors": {
"color_primary": "#000000FF",
"color_secondary": "#445566FF"
},
"Another element": {
"color": "#445566FF" // <-- Now the value is 'color_secondary' instead of 'color_primary'?
}
}
</code></pre>
<p>Why is the <code>color</code> of <code>Another element</code> looking at the value of <code>color_secondary</code> instead? </p>
<p><strong>Is there any way to fix this as well?</strong></p>
| 1 | 2016-08-18T09:01:13Z | 39,020,010 | <p>First of all, you are not doing anything wrong. PyYAML is doing something wrong here. This is most likely because dumping anchors with the same name would be an erroneous situation the the PyYAML dumper. If you have a Python structure that is self referential:</p>
<pre><code> a = dict(x=1)
a['y'] = a
</code></pre>
<p>then PyYAML (and <code>ruamel.yaml</code> will create you a unique anchor name to. If this name was not unique it would be depending on where the name was used as an alias. It therefore makes sense to be suspicious of any reused anchor names, as this might point to a bug in the YAML serialisation code, but it is not against the specification (reuse is already ok according to <a href="http://yaml.org/spec/1.0/" rel="nofollow">YAML 1.0</a> spec (section 3.2.2.2)). </p>
<p>A <a href="https://bugs.debian.org/515634" rel="nofollow">bug report</a> for the python-yaml Debian module has existed since 2009, but I haven't found whether it made it upstream.</p>
<p>As you indicated this is solved in ruamel.yaml 0.12.3</p>
<hr>
<p>To answer your second question: that is just because the "<a href="http://codebeautify.org/yaml-to-json-xml-csv" rel="nofollow">Best Online YAML Converter</a>" isn't, and parses this wrong. It even throws an error if there is a YAML comment on the merge line:</p>
<pre><code> <<: *some_colors # <-- Added this to include 'color_secondary' as well
</code></pre>
<p>This parses as expected in ruamel.yaml (0.12.3):</p>
<pre><code>import sys
import ruamel.yaml
import warnings
from ruamel.yaml.error import ReusedAnchorWarning
warnings.simplefilter("ignore", ReusedAnchorWarning)
yaml_str = """\
Some Colors: &some_colors
color_primary: &color_primary "#112233FF"
color_secondary: &color_secondary "#445566FF"
Element: &element
color: *color_primary
Overwrite some colors: &overwrite_colors
<<: *some_colors # <-- Added this to include 'color_secondary' as well
color_primary: &color_primary "#000000FF"
Another element: &another_element
color: *color_primary
"""
data = ruamel.yaml.load(yaml_str)
ruamel.yaml.round_trip_dump(data, sys.stdout)
</code></pre>
<p>gives:</p>
<pre><code>Some Colors:
color_primary: '#112233FF'
color_secondary: '#445566FF'
Overwrite some colors:
color_primary: '#000000FF'
color_secondary: '#445566FF'
Another element:
color: '#000000FF' # <- not #445566FF
Element:
color: '#112233FF'
</code></pre>
<p>(comment added by hand)</p>
| 0 | 2016-08-18T13:51:08Z | [
"python",
"yaml"
] |
SECRET_KEY setting must not be empty | 39,014,009 | <p>I am getting an error when I run the server in cmd.</p>
<p>I am using Windows 7, Python Version 3.4.3 and
Django Version 1.8. </p>
<pre><code>>
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 338, in execute_from_command_line
utility.execute()
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 190, in fetch_command
klass = load_command_class(app_name, subcommand)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 40, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "C:\Python34\lib\importlib\__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "C:\Python34\lib\site-packages\django\core\management\commands\runserver.py", line 14, in <module>
from django.db.migrations.executor import MigrationExecutor
File "C:\Python34\lib\site-packages\django\db\migrations\executor.py", line 6, in <module>
from .loader import MigrationLoader
File "C:\Python34\lib\site-packages\django\db\migrations\loader.py", line 10, in <module>
from django.db.migrations.recorder import MigrationRecorder
File "C:\Python34\lib\site-packages\django\db\migrations\recorder.py", line 9, in <module>
class MigrationRecorder(object):
File "C:\Python34\lib\site-packages\django\db\migrations\recorder.py", line 23, in MigrationRec
order
class Migration(models.Model):
File "C:\Python34\lib\site-packages\django\db\migrations\recorder.py", line 24, in Migration
app = models.CharField(max_length=255)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 1081, in __init_
_
super(CharField, self).__init__(*args, **kwargs)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 161, in __init__
self.db_tablespace = db_tablespace or settings.DEFAULT_INDEX_TABLESPACE
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 48, in __getattr__
self._setup(name)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 44, in _setup
self._wrapped = Settings(settings_module)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 92, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "C:\Python34\lib\importlib\__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\abc\Desktop\myapp\myapp\settings.py", line 16, in <module>
import django.contrib.auth
File "C:\Python34\lib\site-packages\django\contrib\auth\__init__.py", line 7, in <module>
from django.middleware.csrf import rotate_token
File "C:\Python34\lib\site-packages\django\middleware\csrf.py", line 14, in <module>
from django.utils.cache import patch_vary_headers
File "C:\Python34\lib\site-packages\django\utils\cache.py", line 26, in <module>
from django.core.cache import caches
File "C:\Python34\lib\site-packages\django\core\cache\__init__.py", line 34, in <module>
if DEFAULT_CACHE_ALIAS not in settings.CACHES:
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 48, in __getattr__
self._setup(name)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 44, in _setup
self._wrapped = Settings(settings_module)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 113, in __init__
raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.")
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
</code></pre>
| 2 | 2016-08-18T09:01:52Z | 39,014,031 | <p>I think the error message is pretty much self-explanatory:
<code>django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.</code></p>
<p>You need to set SECRET_KEY in your settings.py file.</p>
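Any sufficiently long random string will do. A stdlib-only sketch for generating one (not Django's own generator — Django itself ships `django.utils.crypto.get_random_string` for this purpose); run it once and paste the printed line into settings.py:

```python
import random
import string

# Build a 50-character random key from letters, digits and punctuation.
chars = string.ascii_lowercase + string.digits + '!@#$%^&*(-_=+)'
secret_key = ''.join(random.SystemRandom().choice(chars) for _ in range(50))
print("SECRET_KEY = '%s'" % secret_key)
```

`random.SystemRandom` draws from the OS entropy source, so the key is suitable for this use, unlike the default `random` generator.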
| 1 | 2016-08-18T09:03:01Z | [
"python",
"django"
] |
SECRET_KEY setting must not be empty | 39,014,009 | <p>I am getting an error when I run the server in cmd.</p>
<p>I am using Windows 7, Python 3.4.3 and
Django 1.8.</p>
<pre><code>>
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 338, in execute_f
rom_command_line
utility.execute()
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 190, in fetch_com
mand
klass = load_command_class(app_name, subcommand)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 40, in load_comma
nd_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "C:\Python34\lib\importlib\__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "C:\Python34\lib\site-packages\django\core\management\commands\runserver.py", line 14, in
<module>
from django.db.migrations.executor import MigrationExecutor
File "C:\Python34\lib\site-packages\django\db\migrations\executor.py", line 6, in <module>
from .loader import MigrationLoader
File "C:\Python34\lib\site-packages\django\db\migrations\loader.py", line 10, in <module>
from django.db.migrations.recorder import MigrationRecorder
File "C:\Python34\lib\site-packages\django\db\migrations\recorder.py", line 9, in <module>
class MigrationRecorder(object):
File "C:\Python34\lib\site-packages\django\db\migrations\recorder.py", line 23, in MigrationRec
order
class Migration(models.Model):
File "C:\Python34\lib\site-packages\django\db\migrations\recorder.py", line 24, in Migration
app = models.CharField(max_length=255)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 1081, in __init_
_
super(CharField, self).__init__(*args, **kwargs)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 161, in __init__
self.db_tablespace = db_tablespace or settings.DEFAULT_INDEX_TABLESPACE
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 48, in __getattr__
self._setup(name)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 44, in _setup
self._wrapped = Settings(settings_module)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 92, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "C:\Python34\lib\importlib\__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\abc\Desktop\myapp\myapp\settings.py", line 16, in <module>
import django.contrib.auth
File "C:\Python34\lib\site-packages\django\contrib\auth\__init__.py", line 7, in <module>
from django.middleware.csrf import rotate_token
File "C:\Python34\lib\site-packages\django\middleware\csrf.py", line 14, in <module>
from django.utils.cache import patch_vary_headers
File "C:\Python34\lib\site-packages\django\utils\cache.py", line 26, in <module>
from django.core.cache import caches
File "C:\Python34\lib\site-packages\django\core\cache\__init__.py", line 34, in <module>
if DEFAULT_CACHE_ALIAS not in settings.CACHES:
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 48, in __getattr__
self._setup(name)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 44, in _setup
self._wrapped = Settings(settings_module)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 113, in __init__
raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.")
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
</code></pre>
| 2 | 2016-08-18T09:01:52Z | 39,014,504 | <p>If SECRET_KEY is set in your settings.py file, then you are running a different settings file.</p>
<p>Use the command below:</p>
<pre><code>python manage.py runserver --settings project_name.settings
</code></pre>
<p><code>project_name.settings</code> is the path to your settings file.</p>
<p>You can also check it using <code>print</code> statements in the settings.py file.</p>
| 0 | 2016-08-18T09:24:31Z | [
"python",
"django"
] |
SECRET_KEY setting must not be empty | 39,014,009 | <p>I am getting an error when I run the server in cmd.</p>
<p>I am using Windows 7, Python 3.4.3 and
Django 1.8.</p>
<pre><code>>
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 338, in execute_f
rom_command_line
utility.execute()
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 190, in fetch_com
mand
klass = load_command_class(app_name, subcommand)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 40, in load_comma
nd_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "C:\Python34\lib\importlib\__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "C:\Python34\lib\site-packages\django\core\management\commands\runserver.py", line 14, in
<module>
from django.db.migrations.executor import MigrationExecutor
File "C:\Python34\lib\site-packages\django\db\migrations\executor.py", line 6, in <module>
from .loader import MigrationLoader
File "C:\Python34\lib\site-packages\django\db\migrations\loader.py", line 10, in <module>
from django.db.migrations.recorder import MigrationRecorder
File "C:\Python34\lib\site-packages\django\db\migrations\recorder.py", line 9, in <module>
class MigrationRecorder(object):
File "C:\Python34\lib\site-packages\django\db\migrations\recorder.py", line 23, in MigrationRec
order
class Migration(models.Model):
File "C:\Python34\lib\site-packages\django\db\migrations\recorder.py", line 24, in Migration
app = models.CharField(max_length=255)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 1081, in __init_
_
super(CharField, self).__init__(*args, **kwargs)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 161, in __init__
self.db_tablespace = db_tablespace or settings.DEFAULT_INDEX_TABLESPACE
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 48, in __getattr__
self._setup(name)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 44, in _setup
self._wrapped = Settings(settings_module)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 92, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "C:\Python34\lib\importlib\__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\abc\Desktop\myapp\myapp\settings.py", line 16, in <module>
import django.contrib.auth
File "C:\Python34\lib\site-packages\django\contrib\auth\__init__.py", line 7, in <module>
from django.middleware.csrf import rotate_token
File "C:\Python34\lib\site-packages\django\middleware\csrf.py", line 14, in <module>
from django.utils.cache import patch_vary_headers
File "C:\Python34\lib\site-packages\django\utils\cache.py", line 26, in <module>
from django.core.cache import caches
File "C:\Python34\lib\site-packages\django\core\cache\__init__.py", line 34, in <module>
if DEFAULT_CACHE_ALIAS not in settings.CACHES:
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 48, in __getattr__
self._setup(name)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 44, in _setup
self._wrapped = Settings(settings_module)
File "C:\Python34\lib\site-packages\django\conf\__init__.py", line 113, in __init__
raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.")
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
</code></pre>
| 2 | 2016-08-18T09:01:52Z | 39,015,530 | <p>It was a Django version incompatibility issue. When I installed version 1.5 or 1.9 of Django it worked, but not with 1.10 or 1.8.</p>
| 4 | 2016-08-18T10:12:49Z | [
"python",
"django"
] |
Particle Tracking by coordinates from txt file | 39,014,085 | <p>I have some particle track data from an OpenFoam simulation.
The data looks like this:</p>
<pre><code>0.005 0.00223546 1.52096e-09 0.00503396
0.01 0.00220894 3.92829e-09 0.0101636
0.015 0.00218103 5.37107e-09 0.0154245
.....
</code></pre>
<p>Each row holds the time, then the x, y, z coordinates.
In my folder, I have a file for every tracked particle.</p>
<p>I would like to calculate the velocity and the displacement for each particle in each timestep.</p>
<p>It would be nice to enter the position data in a way like particle[1].time[0.01].</p>
<p>Is there already a python tool for that kind of problem?
Thanks a lot </p>
| 2 | 2016-08-18T09:05:51Z | 39,014,392 | <p>Individual files would be easily loaded with something like:</p>
<pre><code>import numpy as np
t, x, y, z = np.loadtxt(filename, delimiter=' ', unpack=True)
</code></pre>
<p>Now there is an issue, as you would like to index particle positions by time, whereas Numpy will only accept integers as indices.</p>
<p>Edit: in Python you could make "position" a dictionary so that you can index it with a float, or anything else. But that comes down to the amount of data you have and what you want to do with it, because dictionaries will be less efficient than Numpy arrays for anything a little more 'advanced' than just picking the position at time t.</p>
| 0 | 2016-08-18T09:19:36Z | [
"python",
"numpy",
"matplotlib"
] |
Particle Tracking by coordinates from txt file | 39,014,085 | <p>I have some particle track data from an OpenFoam simulation.
The data looks like this:</p>
<pre><code>0.005 0.00223546 1.52096e-09 0.00503396
0.01 0.00220894 3.92829e-09 0.0101636
0.015 0.00218103 5.37107e-09 0.0154245
.....
</code></pre>
<p>Each row holds the time, then the x, y, z coordinates.
In my folder, I have a file for every tracked particle.</p>
<p>I would like to calculate the velocity and the displacement for each particle in each timestep.</p>
<p>It would be nice to enter the position data in a way like particle[1].time[0.01].</p>
<p>Is there already a python tool for that kind of problem?
Thanks a lot </p>
| 2 | 2016-08-18T09:05:51Z | 39,019,557 | <p>If you have regular time steps, you can use a pandas dataframe to find the differences:</p>
<pre><code>import pandas as pd

dt = .005  # or whatever time difference you have
df = pd.read_csv('particle1.txt', sep=' ', names=['t', 'x', 'y', 'z'])  # filename and column names per your data
df['v_x'] = df['x'].diff()
df['v_x'] = df['v_x'] / dt
</code></pre>
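The same diff trick extends to the full 3D speed per step. A sketch (column names assumed; the question's sample rows are inlined via `io.StringIO` for illustration):

```python
import io
import numpy as np
import pandas as pd

data = io.StringIO(u"""0.005 0.00223546 1.52096e-09 0.00503396
0.01 0.00220894 3.92829e-09 0.0101636
0.015 0.00218103 5.37107e-09 0.0154245""")

df = pd.read_csv(data, sep=' ', names=['t', 'x', 'y', 'z'])
dt = 0.005
disp = df[['x', 'y', 'z']].diff()  # per-step displacement vector
# skipna=False keeps the first row NaN instead of silently treating it as 0
df['speed'] = np.sqrt((disp ** 2).sum(axis=1, skipna=False)) / dt
print(df['speed'])
```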
| 2 | 2016-08-18T13:30:56Z | [
"python",
"numpy",
"matplotlib"
] |
Particle Tracking by coordinates from txt file | 39,014,085 | <p>I have some particle track data from an OpenFoam simulation.
The data looks like this:</p>
<pre><code>0.005 0.00223546 1.52096e-09 0.00503396
0.01 0.00220894 3.92829e-09 0.0101636
0.015 0.00218103 5.37107e-09 0.0154245
.....
</code></pre>
<p>Each row holds the time, then the x, y, z coordinates.
In my folder, I have a file for every tracked particle.</p>
<p>I would like to calculate the velocity and the displacement for each particle in each timestep.</p>
<p>It would be nice to enter the position data in a way like particle[1].time[0.01].</p>
<p>Is there already a python tool for that kind of problem?
Thanks a lot </p>
| 2 | 2016-08-18T09:05:51Z | 39,020,087 | <p>You "almost" don't need numpy for that. I created a simple class hierarchy with some initial methods. You could improve on that if you like the approach. Note that I am creating it from a string; you should use <code>for line in file</code> instead of the <code>string.split</code> way.</p>
<pre><code>import numpy

class Track(object):
    def __init__(self):
        self.trackpoints = []

    def AddTrackpoint(self, line):
        tpt = self.Trackpoint(line)
        if self.trackpoints and tpt.t < self.trackpoints[-1].t:
            raise ValueError("timestamps should be in ascending order")
        self.trackpoints.append(tpt)
        return tpt

    def length(self):
        pairs = zip(self.trackpoints[:-1], self.trackpoints[1:])
        dists = map(self.distance, pairs)
        return sum(dists)

    def distance(self, points):
        p1, p2 = points
        return numpy.sqrt(sum((p2.pos - p1.pos)**2))  # only convenient use of numpy so far

    class Trackpoint(object):
        def __init__(self, line):
            t, x, y, z = line.split(' ')
            self.t = float(t)
            self.pos = numpy.array((x, y, z), dtype=float)

entries = """
0.005 0.00223546 1.52096e-09 0.00503396
0.01 0.00220894 3.92829e-09 0.0101636
0.015 0.00218103 5.37107e-09 0.0154245
""".strip()

lines = entries.split("\n")
track = Track()
for line in lines:
    track.AddTrackpoint(line)

print track.length()
</code></pre>
| 0 | 2016-08-18T13:54:51Z | [
"python",
"numpy",
"matplotlib"
] |
Break doesn't seem to break the for-loop | 39,014,132 | <p>My <code>break</code> doesn't seem to break the for-loop and move on to the next d value. It takes 0.5 seconds when the range is 10000 but 0.8 when it's 100000. But the range shouldn't matter, since it should break long before y gets above 100.
Here's my code:</p>
<pre><code>import math
d = 1
l = {}
while d < 10:
for y in range(0, 10000):
x = y * math.sqrt(d)
x = round(x)
if x**2 - (d * y**2) == 1:
l[d] = x
print("x: " + str(x) + " d: " + str(d) + " y: " + str(y))
break
d += 1
m = max(l, key=l.get)
print("d", m, " :-: x", l[m])
</code></pre>
<p>And here's what it outputs:</p>
<pre><code>x: 3 d: 2 y: 2
x: 2 d: 3 y: 1
x: 9 d: 5 y: 4
x: 5 d: 6 y: 2
x: 8 d: 7 y: 3
x: 3 d: 8 y: 1
d 5 :-: x 9
>[Finished in 0.8s]
</code></pre>
| -1 | 2016-08-18T09:07:58Z | 39,014,440 | <p>The <code>break</code> is working just fine. However, you didn't find a solution for all values of <code>d</code>.</p>
<p>Add an <code>else:</code> suite to the <code>for</code> loop to show when no <code>break</code> was executed for that loop:</p>
<pre><code>while d < 10:
for y in range(0, 10000):
x = y * math.sqrt(d)
x = round(x)
if x**2 - (d * y**2) == 1:
l[d] = x
print("x: " + str(x) + " d: " + str(d) + " y: " + str(y))
break
else:
print('No solution for d:', d)
d += 1
</code></pre>
<p>which outputs:</p>
<pre><code>No solution for d: 1
x: 3 d: 2 y: 2
x: 2 d: 3 y: 1
No solution for d: 4
x: 9 d: 5 y: 4
x: 5 d: 6 y: 2
x: 8 d: 7 y: 3
x: 3 d: 8 y: 1
No solution for d: 9
</code></pre>
<p>The <code>while</code> loop could just as well be replaced by a <code>for d in range(1, 10):</code>, by the way.</p>
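With that replacement (and the <code>else:</code> suite), the whole script becomes a sketch like:

```python
import math

l = {}
for d in range(1, 10):
    for y in range(10000):
        x = round(y * math.sqrt(d))
        if x**2 - d * y**2 == 1:
            l[d] = x
            break
    else:
        print('No solution for d:', d)

m = max(l, key=l.get)
print("d", m, " :-: x", l[m])
```

The `d += 1` bookkeeping disappears, and the loop variable cannot accidentally be left unincremented.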
| 0 | 2016-08-18T09:21:34Z | [
"python",
"for-loop",
"break"
] |
Cassandra driver error in installing | 39,014,212 | <p>I am getting the following error when installing the Cassandra driver for Python.</p>
<pre><code>Command "c:\python33\python.exe -u -c "import setuptools, tokenize;__file__='c:\
\users\\vmasama\\appdata\\local\\temp\\pip-build-we10p7\\cassandra-driver\\setup
.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n
', '\n'), __file__, 'exec'))" install --record c:\users\vmasama\appdata\local\te
mp\pip-olh8j5-record\install-record.txt --single-version-externally-managed --co
mpile" failed with error code 1 in c:\users\vmasama\appdata\local\temp\pip-build
-we10p7\cassandra-driver\
</code></pre>
<p>Anyone have any idea about this? What am I missing?</p>
<p>The following errors also appeared when I ran <strong>pip install cassandra-driver</strong>:</p>
<blockquote>
<p>Failed building wheel for cassandra-driver<br>
Failed cleaning build dir
for cassandra-driver</p>
</blockquote>
<p>Python version: 3.3</p>
| 0 | 2016-08-18T09:12:17Z | 39,132,448 | <p>I tried it in an environment missing compiler paths and observed something similar. I think it's a problem in setuptools error handling when it doesn't find what it's expecting. The easiest way I've found is to use the VS Command Prompt shortcut installed with Visual Studio:</p>
<blockquote>
<p>Perhaps the easiest way to do this is to run the build/install from a Visual Studio Command Prompt (a shortcut installed with Visual Studio that sources the appropriate environment and presents a shell).</p>
</blockquote>
<p><a href="http://datastax.github.io/python-driver/installation.html#windows-installation-notes" rel="nofollow">http://datastax.github.io/python-driver/installation.html#windows-installation-notes</a></p>
| 0 | 2016-08-24T20:24:57Z | [
"python",
"python-3.x",
"cassandra",
"cassandra-2.0"
] |
Edit with IDLE not working | 39,014,407 | <p>My IDLE option in the right-click menu has disappeared. I think it's because, while using Python 3.5.2, I installed Python 2.7.12 without uninstalling Python 3.5.2. Later I uninstalled Python 2.7.12, and from then on the .py files open in Chrome, and I cannot get them to open with IDLE. The usual logo for Python scripts has also disappeared; instead it now shows the logo in the picture. What can I do? I uninstalled Python 3.5 after the problem appeared and reinstalled it, yet the problem persists.<a href="http://i.stack.imgur.com/JHeEO.png" rel="nofollow"><img src="http://i.stack.imgur.com/JHeEO.png" alt="New logo of Python files/scripts"></a></p>
| 0 | 2016-08-18T09:20:06Z | 39,014,685 | <p><strong>Option 1:</strong>
Right-click any <code>.py</code> file, click <code>Open With</code>, and click <code>Choose default program</code> or <code>Choose another app</code>.</p>
<p>From the list that appears, if you see <code>python.exe</code>, choose it; otherwise click the <code>Choose another program</code> or <code>Browse</code> option. On Windows 10, that option may appear after clicking <code>More apps</code>.</p>
<p>Next, browse to the Python installation directory and select <code>python.exe</code>.</p>
<p><strong>Option 2:</strong>
Run <code>regedit</code>, go to the key <code>HKEY_CLASSES_ROOT\.py</code>, and change the default to <code>Python.File</code>.</p>
<p>After that, log out and log back in. The context menu will appear again.</p>
| 0 | 2016-08-18T09:33:32Z | [
"python",
"python-idle"
] |
Python regular expression, using dots many times | 39,014,627 | <p>In python regular expressions, if I want to write 20 dots in a row because I want 20 characters, how can I do this in shorthand?</p>
| -4 | 2016-08-18T09:30:26Z | 39,014,729 | <p>use <code>'.{20}'</code></p>
<pre><code>In [15]: x = 'a' * 30 + '.' * 20 + 'b' * 30
In [16]: m = re.search('.{20}', x)
In [17]: m
Out[17]: <_sre.SRE_Match at 0x103ec3850>
</code></pre>
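If the intent is instead to match strings that are exactly 20 characters long (rather than any 20-character window inside a longer string), anchor the pattern:

```python
import re

# ^ and $ pin the 20-character requirement to the whole string.
pattern = re.compile(r'^.{20}$')

print(bool(pattern.match('a' * 20)))  # True
print(bool(pattern.match('a' * 21)))  # False: one character too many
```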
| 1 | 2016-08-18T09:35:42Z | [
"python",
"regex"
] |
Python regular expression, using dots many times | 39,014,627 | <p>In python regular expressions, if I want to write 20 dots in a row because I want 20 characters, how can I do this in shorthand?</p>
| -4 | 2016-08-18T09:30:26Z | 39,014,741 | <pre><code>print '.'*20
</code></pre>
<p>Searching the internet should solve your problem before posting that...</p>
| -1 | 2016-08-18T09:36:08Z | [
"python",
"regex"
] |
(centos6.6) before updating python2.7.3 ,it is python 2.6.6. When running pybot --version,errors came out | 39,014,670 | <p>(CentOS 6.6) Before updating to Python 2.7.3, it was Python 2.6.6. When running <code>pybot --version</code>, errors came out as follows.</p>
<p>I want to install a test environment with Python 2.7.3, Robot Framework 2.7.6, paramiko 1.7.4, and pycrypto 2.6.</p>
<blockquote>
<p>[root@localhost robotframework-2.7.6]# pybot --version<br>
Traceback (most recent call last):<br>
File "/usr/bin/pybot", line 4, in <br>
from robot import run_cli<br>
File "/usr/lib/python2.7/site-packages/robot/__init__.py", line 22, in <br>
from robot.rebot import rebot, rebot_cli<br>
File "/usr/lib/python2.7/site-packages/robot/rebot.py", line 268, in <br>
from robot.conf import RebotSettings<br>
File "/usr/lib/python2.7/site-packages/robot/conf/__init__.py", line 17, in <br>
from .settings import RobotSettings, RebotSettings<br>
File "/usr/lib/python2.7/site-packages/robot/conf/settings.py", line 17, in <br>
from robot import utils<br>
File "/usr/lib/python2.7/site-packages/robot/utils/__init__.py", line 23, in <br>
from .compress import compress_text<br>
File "/usr/lib/python2.7/site-packages/robot/utils/compress.py", line 25, in <br>
import zlib<br>
ImportError: No module named zlib</p>
</blockquote>
| 0 | 2016-08-18T09:33:09Z | 39,037,542 | <p>Reasons could be any of the following:</p>
<ol>
<li>At least one of the Python files has lost its formatting (indentation). Python is sensitive to formatting errors.</li>
<li>At least one installation (Python, Robot Framework) doesn't have administrative privileges.</li>
<li>Environment variables (PATH, CLASSPATH, PYTHONPATH) are not set correctly.</li>
<li>What does <code>python --version</code> print? If this throws errors, the installation has issues.</li>
</ol>
| 0 | 2016-08-19T11:08:37Z | [
"python",
"linux",
"robotframework"
] |
(centos6.6) before updating python2.7.3 ,it is python 2.6.6. When running pybot --version,errors came out | 39,014,670 | <p>(CentOS 6.6) Before updating to Python 2.7.3, it was Python 2.6.6. When running <code>pybot --version</code>, errors came out as follows.</p>
<p>I want to install a test environment with Python 2.7.3, Robot Framework 2.7.6, paramiko 1.7.4, and pycrypto 2.6.</p>
<blockquote>
<p>[root@localhost robotframework-2.7.6]# pybot --version<br>
Traceback (most recent call last):<br>
File "/usr/bin/pybot", line 4, in <br>
from robot import run_cli<br>
File "/usr/lib/python2.7/site-packages/robot/__init__.py", line 22, in <br>
from robot.rebot import rebot, rebot_cli<br>
File "/usr/lib/python2.7/site-packages/robot/rebot.py", line 268, in <br>
from robot.conf import RebotSettings<br>
File "/usr/lib/python2.7/site-packages/robot/conf/__init__.py", line 17, in <br>
from .settings import RobotSettings, RebotSettings<br>
File "/usr/lib/python2.7/site-packages/robot/conf/settings.py", line 17, in <br>
from robot import utils<br>
File "/usr/lib/python2.7/site-packages/robot/utils/__init__.py", line 23, in <br>
from .compress import compress_text<br>
File "/usr/lib/python2.7/site-packages/robot/utils/compress.py", line 25, in <br>
import zlib<br>
ImportError: No module named zlib</p>
</blockquote>
| 0 | 2016-08-18T09:33:09Z | 39,104,313 | <p>I installed zlib-devel and python-devel with yum, recompiled Python, and finally completed the test environment installation. Thank you for your answer.</p>
| 0 | 2016-08-23T14:45:41Z | [
"python",
"linux",
"robotframework"
] |
install python modules in mac osx | 39,014,749 | <p>I am new to Python. I am using Python 3.5 on Mac OS X El Capitan.
I tried using the command 'pip install requests' in the Python interpreter IDLE, but it throws a syntax error.</p>
<p>I read that installing modules is only possible from the command line,
so I moved to TERMINAL, but no command is working there either.
(I tried 'python -m pip install requests'.)</p>
<p>I read that Mac OS X comes with Python 2.7 already installed, and I ran 'easy_install pip', but it also operates on the 2.7 version.
Then there's also discussion about the PATH settings.</p>
<p>Can anybody please explain how I can use my current version in the TERMINAL window, and what the PATH situation is?</p>
<p>I am familiar with environment variable settings and adding PYTHONPATH on Windows, but not on Mac.</p>
| 0 | 2016-08-18T09:36:19Z | 39,015,203 | <p>Here is what you should do.</p>
<p>Use homebrew to install python 2.7 and 3.5 in a virtual environment.</p>
<p><code>pip install virtualenv</code>
Then make a directory called <code>virtualenvs</code> in your root folder and add local files with.</p>
<pre><code>cd virtualenvs
virtualenv venv
</code></pre>
<p>Activate the virtualenv with <code>source ~/virtualenvs/venv/bin/activate</code></p>
<p>Then use pip to install brew in this virtualenv <code>pip install brew</code></p>
<p>Then install python 2.7 as python and python 3 as python3:</p>
<pre><code>brew update
brew install python
brew install python3
</code></pre>
<p>Then you can use python and python3 and not have to worry about the local install.</p>
<p>Then to run a file <code>python3 filename.py</code></p>
| 0 | 2016-08-18T09:57:53Z | [
"python",
"osx",
"module"
] |
install python modules in mac osx | 39,014,749 | <p>I am new to Python. I am using Python 3.5 on Mac OS X El Capitan.
I tried using the command 'pip install requests' in the Python interpreter IDLE, but it throws a syntax error.</p>
<p>I read that installing modules is only possible from the command line,
so I moved to TERMINAL, but no command is working there either.
(I tried 'python -m pip install requests'.)</p>
<p>I read that Mac OS X comes with Python 2.7 already installed, and I ran 'easy_install pip', but it also operates on the 2.7 version.
Then there's also discussion about the PATH settings.</p>
<p>Can anybody please explain how I can use my current version in the TERMINAL window, and what the PATH situation is?</p>
<p>I am familiar with environment variable settings and adding PYTHONPATH on Windows, but not on Mac.</p>
| 0 | 2016-08-18T09:36:19Z | 39,015,841 | <p>I followed this guide:
<a href="https://docs.python.org/3/using/mac.html" rel="nofollow">https://docs.python.org/3/using/mac.html</a></p>
<p>I found Python 3.5 in /usr/local/bin instead of the default /usr/bin, where the default 2.7 lives.</p>
<p>The 3.5 package automatically generates an alias for itself, <code>python3.5</code>, for use in the terminal.</p>
<p>I ran the command 'python3.5 -m pip install requests' and everything went well.</p>
| 0 | 2016-08-18T10:28:36Z | [
"python",
"osx",
"module"
] |
Flask script windows subprocess and sys | 39,014,785 | <p>I have the following Python Flask script:</p>
<pre><code>from flask import Flask
import subprocess
import sys
app = Flask(__name__)
@app.route("/")
def hello():
return "index"
@app.route("/script")
def cmd():
cmd = [sys.executable, "C:\\Users\\JSm\\Project\\FlaskAutomation\\test.py"]
p = subprocess.Popen(cmd)
out = p.communicate()
return out
if __name__ == "__main__" :
app.run()
</code></pre>
<p>The cmd() command works outside of the Flask app when run independently; however, it is crashing now. Any suggestions as to why?</p>
<p>I am running this on a Windows machine, and I want to kick off a script from a Python Flask script.</p>
<p>Any help much appreciated.
Thanks,
J</p>
| 1 | 2016-08-18T09:37:54Z | 39,015,234 | <p>I'm not very sure about the Windows environment; however, if I use the same script (assuming <code>test.py</code> is present at the given location), I get a 500 Internal Server Error when I access <code>/script</code>.</p>
<p>To make it successful, I've made the following change:</p>
<pre><code>p = subprocess.Popen(cmd, stdout = subprocess.PIPE)
</code></pre>
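Note also that `communicate()` returns an `(out, err)` tuple of bytes, while a Flask view should return a string, so unpack and decode before returning. A runnable sketch of that part (with an inline `-c` one-liner standing in for the question's `test.py`):

```python
import subprocess
import sys

# Inline child script standing in for test.py.
cmd = [sys.executable, "-c", "print('hello from the child script')"]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, err = p.communicate()  # err is None because stderr was not piped
result = out.decode().strip()  # this string is what the Flask view should return
print(result)
```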
| 0 | 2016-08-18T09:59:19Z | [
"python",
"flask"
] |
Battery Historian Error for Checking Power Consumption | 39,014,791 | <p>I want to check the power consumption of my app on an android device and I just came across this link <a href="https://developer.android.com/studio/profile/battery-historian.html" rel="nofollow">Battery Historian</a></p>
<p>I followed the above tutorial and every step is fine until I reached the last one.</p>
<p>While I executed the last step </p>
<blockquote>
<p>python historian.py batterystats.txt > batterystats.html</p>
</blockquote>
<p>It gives me an error as </p>
<pre><code>File "Historian.py", Line xx
print "\nUsage: %s [OPTIONS] [FILE]\n" %sys.argv[0]
SyntaxError: Missing parenthesis in call to 'print'
</code></pre>
<p>The file of batterystats.txt is being generated but batterytstats.html is not being created</p>
<p>NOTE: This is the 1st time I have installed Python on my device just to check the battery consumption. I don't have any idea on this on how to deal with this. </p>
<p>I just checked what might be causing the error and I came to know that the Syntax of Python has changed from 2.x to 3.x. Am using Python Version 3.5.2</p>
<p>Any help is much appreciated</p>
| 1 | 2016-08-18T09:38:07Z | 39,016,115 | <p>From the docs:</p>
<blockquote>
<p>Next, make sure Python 2.7 (NOT Python 3!) is installed</p>
</blockquote>
<p>Install Python 2.7 to use the script and it should work fine.</p>
<p><a href="https://www.python.org/downloads/release/python-2712/" rel="nofollow">https://www.python.org/downloads/release/python-2712/</a></p>
| 2 | 2016-08-18T10:41:50Z | [
"android",
"python",
"python-3.x"
] |
Where does Python search for files when using open? | 39,014,865 | <p>I have the simplest file opening line in my code.</p>
<pre><code>file = open("file.txt", "r+")
</code></pre>
<p>Where does python search for files? The only location that works for me is </p>
<blockquote>
<p>C:/Users/<em>useraccount</em></p>
</blockquote>
<pre><code>file = open("file.txt", "w")
</code></pre>
<p>This also creates the file in that specific location.</p>
<p>It won't open the file if the file is in the exact same folder as the python script itself. </p>
<p>Also, if I make it</p>
<pre><code>file = open("folder/file.txt", "r+")
</code></pre>
<p>it will not open the file if the file is in <code>C:/Users/*useraccount*/</code>folder.</p>
<p>Is it possible to open files that aren't in that specific location?</p>
| 0 | 2016-08-18T09:41:30Z | 39,015,044 | <p>If you pass a relative path, like <code>file.txt</code>, Python will search for that file relative to the same directory where you are running the command from. </p>
<p>If you are in - <code>C:/Users/useraccount/</code> and you try to open <code>file.txt</code> then Python tries to open <code>C:/Users/useraccount/file.txt</code>.</p>
<p>Similarly, if it's <code>folder/file.txt</code> then Python tries to open <code>C:/Users/useraccount/folder/file.txt</code></p>
<p>You should always try to get the absolute path of a file by using the different functions in the <code>os.path</code> module. </p>
| 2 | 2016-08-18T09:50:08Z | [
"python"
] |
Where does Python search for files when using open? | 39,014,865 | <p>I have the simplest file opening line in my code.</p>
<pre><code>file = open("file.txt", "r+")
</code></pre>
<p>Where does python search for files? The only location that works for me is </p>
<blockquote>
<p>C:/Users/<em>useraccount</em></p>
</blockquote>
<pre><code>file = open("file.txt", "w")
</code></pre>
<p>This also creates the file in that specific location.</p>
<p>It won't open the file if the file is in the exact same folder as the python script itself. </p>
<p>Also, if I make it</p>
<pre><code>file = open("folder/file.txt", "r+")
</code></pre>
<p>it will not open the file if the file is in <code>C:/Users/*useraccount*/</code>folder.</p>
<p>Is it possible to open files that aren't in that specific location?</p>
| 0 | 2016-08-18T09:41:30Z | 39,015,063 | <p>If you use relative paths, they will be relative to the current working directory. To find out the current working directory, run the following code snippet from Python.</p>
<pre><code>import os
print os.getcwd()
</code></pre>
<p>To avoid this, specify the absolute path.</p>
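<p>A quick sketch of what resolution against the working directory means in practice:</p>

```python
import os

# A bare relative name like "file.txt" resolves against the current
# working directory, not against the folder containing the script.
resolved = os.path.abspath("file.txt")
assert resolved == os.path.join(os.getcwd(), "file.txt")
print(resolved)
```

<p>So to open a file that lives next to the script itself, build the path from the script's own directory instead of relying on the working directory.</p>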
| 1 | 2016-08-18T09:50:40Z | [
"python"
] |
Python: Count character in string which are following each other | 39,014,882 | <p>I have a string in which I want to count the occurrences of <code>#</code> following each other to replace them by numbers to create a increment.</p>
<p>For example:</p>
<pre><code>rawString = 'MyString1_test##_edit####'
for x in xrange(5):
output = doConvertMyString(rawString)
print output
MyString1_test01_edit0001
MyString1_test02_edit0002
MyString1_test03_edit0003
MyString1_test04_edit0004
MyString1_test05_edit0005
</code></pre>
<p>Assuming that the number of <code>#</code> is not fixed and that <code>rawString</code> is a user input containing only <code>string.ascii_letters + string.digits + '_' + '#</code>, how can I do that?</p>
<p>Here is my test so far:</p>
<pre><code>rawString = 'MyString1_test##_edit####'
incrDatas = {}
key = '#'
counter = 1
for x in xrange(len(rawString)):
if rawString[x] != key:
counter = 1
continue
else:
if x > 0:
if rawString[x - 1] == key:
counter += 1
else:
pass
# ???
</code></pre>
| 1 | 2016-08-18T09:42:08Z | 39,015,141 | <pre><code>test_string = 'MyString1_test##_edit####'
def count_hash(raw_string):
str_list = list(raw_string)
hash_count = str_list.count("#") + 1
for num in xrange(1, hash_count):
new_string = raw_string.replace("####", "000" + str(num))
new_string = new_string.replace("##", "0" + str(num))
print new_string
count_hash(test_string)
</code></pre>
<p>It's a bit clunky, and only works for # counts of less than 10, but seems to do what you want.
EDIT: By "only works" I mean that you'll get extra characters with the fixed number of # symbols inserted</p>
<p>EDIT2: amended code</p>
| -2 | 2016-08-18T09:54:52Z | [
"python",
"string"
] |
Python: Count character in string which are following each other | 39,014,882 | <p>I have a string in which I want to count the occurrences of <code>#</code> following each other to replace them by numbers to create a increment.</p>
<p>For example:</p>
<pre><code>rawString = 'MyString1_test##_edit####'
for x in xrange(5):
output = doConvertMyString(rawString)
print output
MyString1_test01_edit0001
MyString1_test02_edit0002
MyString1_test03_edit0003
MyString1_test04_edit0004
MyString1_test05_edit0005
</code></pre>
<p>Assuming that the number of <code>#</code> is not fixed and that <code>rawString</code> is a user input containing only <code>string.ascii_letters + string.digits + '_' + '#</code>, how can I do that?</p>
<p>Here is my test so far:</p>
<pre><code>rawString = 'MyString1_test##_edit####'
incrDatas = {}
key = '#'
counter = 1
for x in xrange(len(rawString)):
if rawString[x] != key:
counter = 1
continue
else:
if x > 0:
if rawString[x - 1] == key:
counter += 1
else:
pass
# ???
</code></pre>
| 1 | 2016-08-18T09:42:08Z | 39,015,232 | <p>How about this-</p>
<pre><code>rawString = 'MyString1_test##_edit####'
splitString = rawString.split('_')
for i in xrange(10): # you may put any count
print '%s_%s%02d_%s%04d' % (splitString[0], splitString[1][0:4], i, splitString[2][0:4], i, )
</code></pre>
| 0 | 2016-08-18T09:59:13Z | [
"python",
"string"
] |
Python: Count character in string which are following each other | 39,014,882 | <p>I have a string in which I want to count the occurrences of <code>#</code> following each other to replace them by numbers to create a increment.</p>
<p>For example:</p>
<pre><code>rawString = 'MyString1_test##_edit####'
for x in xrange(5):
output = doConvertMyString(rawString)
print output
MyString1_test01_edit0001
MyString1_test02_edit0002
MyString1_test03_edit0003
MyString1_test04_edit0004
MyString1_test05_edit0005
</code></pre>
<p>Assuming that the number of <code>#</code> is not fixed and that <code>rawString</code> is a user input containing only <code>string.ascii_letters + string.digits + '_' + '#</code>, how can I do that?</p>
<p>Here is my test so far:</p>
<pre><code>rawString = 'MyString1_test##_edit####'
incrDatas = {}
key = '#'
counter = 1
for x in xrange(len(rawString)):
if rawString[x] != key:
counter = 1
continue
else:
if x > 0:
if rawString[x - 1] == key:
counter += 1
else:
pass
# ???
</code></pre>
| 1 | 2016-08-18T09:42:08Z | 39,015,402 | <p>You can try this naive (and probably not most efficient) solution. It assumes that the number of <code>'#'</code> is fixed.</p>
<pre><code>rawString = 'MyString1_test##_edit####'
for i in range(1, 6):
temp = rawString.replace('####', str(i).zfill(4)).replace('##', str(i).zfill(2))
print(temp)
>> MyString1_test01_edit0001
MyString1_test02_edit0002
MyString1_test03_edit0003
MyString1_test04_edit0004
MyString1_test05_edit0005
</code></pre>
| 0 | 2016-08-18T10:06:55Z | [
"python",
"string"
] |
Python: Count character in string which are following each other | 39,014,882 | <p>I have a string in which I want to count the occurrences of <code>#</code> following each other to replace them by numbers to create a increment.</p>
<p>For example:</p>
<pre><code>rawString = 'MyString1_test##_edit####'
for x in xrange(5):
output = doConvertMyString(rawString)
print output
MyString1_test01_edit0001
MyString1_test02_edit0002
MyString1_test03_edit0003
MyString1_test04_edit0004
MyString1_test05_edit0005
</code></pre>
<p>Assuming that the number of <code>#</code> is not fixed and that <code>rawString</code> is a user input containing only <code>string.ascii_letters + string.digits + '_' + '#</code>, how can I do that?</p>
<p>Here is my test so far:</p>
<pre><code>rawString = 'MyString1_test##_edit####'
incrDatas = {}
key = '#'
counter = 1
for x in xrange(len(rawString)):
if rawString[x] != key:
counter = 1
continue
else:
if x > 0:
if rawString[x - 1] == key:
counter += 1
else:
pass
# ???
</code></pre>
| 1 | 2016-08-18T09:42:08Z | 39,015,493 | <p>You may use <code>zfill</code> in the <code>re.sub</code> replacement to pad any amount of <code>#</code> chunks. <code>#+</code> regex pattern matches 1 or more <code>#</code> symbols. The <code>m.group()</code> stands for the match the regex found, and thus, we replace all <code>#</code>s with the incremented <code>x</code> converted to string padded with the same amount of <code>0</code>s as there are <code>#</code> in the match.</p>
<pre><code>import re
rawString = 'MyString1_test##_edit####'
for x in xrange(5):
output = re.sub(r"#+", lambda m: str(x+1).zfill(len(m.group())), rawString)
print output
</code></pre>
<p>Result of <a href="https://ideone.com/2boJvI" rel="nofollow">the demo</a>:</p>
<pre><code>MyString1_test01_edit0001
MyString1_test02_edit0002
MyString1_test03_edit0003
MyString1_test04_edit0004
MyString1_test05_edit0005
</code></pre>
| 2 | 2016-08-18T10:10:52Z | [
"python",
"string"
] |
Python: Count character in string which are following each other | 39,014,882 | <p>I have a string in which I want to count the occurrences of <code>#</code> following each other to replace them by numbers to create a increment.</p>
<p>For example:</p>
<pre><code>rawString = 'MyString1_test##_edit####'
for x in xrange(5):
output = doConvertMyString(rawString)
print output
MyString1_test01_edit0001
MyString1_test02_edit0002
MyString1_test03_edit0003
MyString1_test04_edit0004
MyString1_test05_edit0005
</code></pre>
<p>Assuming that the number of <code>#</code> is not fixed and that <code>rawString</code> is a user input containing only <code>string.ascii_letters + string.digits + '_' + '#</code>, how can I do that?</p>
<p>Here is my test so far:</p>
<pre><code>rawString = 'MyString1_test##_edit####'
incrDatas = {}
key = '#'
counter = 1
for x in xrange(len(rawString)):
if rawString[x] != key:
counter = 1
continue
else:
if x > 0:
if rawString[x - 1] == key:
counter += 1
else:
pass
# ???
</code></pre>
| 1 | 2016-08-18T09:42:08Z | 39,015,857 | <p>The code below converts the <code>rawString</code> to a format string, using <code>groupby</code> in a list comprehension to find groups of hashes. Each run of hashes is converted into a format directive to print a zero-padded integer of the appropriate width, runs of non-hashes are simply joined back together.</p>
<p>This code works on Python 2.6 and later.</p>
<pre><code>from itertools import groupby
def convert(template):
return ''.join(['{{x:0{0}d}}'.format(len(list(g))) if k else ''.join(g)
for k, g in groupby(template, lambda c: c == '#')])
rawString = 'MyString1_test##_edit####'
fmt = convert(rawString)
print(repr(fmt))
for x in range(5):
print(fmt.format(x=x))
</code></pre>
<p><strong>output</strong></p>
<pre><code>'MyString1_test{x:02d}_edit{x:04d}'
MyString1_test00_edit0000
MyString1_test01_edit0001
MyString1_test02_edit0002
MyString1_test03_edit0003
MyString1_test04_edit0004
</code></pre>
| 1 | 2016-08-18T10:29:12Z | [
"python",
"string"
] |
Heatmap correlation plot half with values number and half color map in seaborn | 39,014,907 | <p>In the previous versions of seaborn (<0.7) it was present the function <em>corrplot()</em>, which allowed to plot a correlation matrix such that half of the matrix is numeric and the other half is a color map. Now, seaborn (0.7.1) has just the <em>heatmap()</em> function, that doesn't have this function directly. Is there a way to obtain the same result? </p>
| 0 | 2016-08-18T09:43:19Z | 39,014,908 | <p>I have spent some time on this; basically it requires overlapping two heatmaps, where one makes use of a mask to cover half of the matrix. A code example is shown below.</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn
from matplotlib.colors import ListedColormap
from matplotlib.pylab import *

arr_name = ['D','S','P','E','C','KW','K','EF']
data = np.random.randn(8,8)
df = pd.DataFrame(data, columns=arr_name)
labels = df.where(np.triu(np.ones(df.shape)).astype(np.bool))
labels = labels.round(2)
labels = labels.replace(np.nan,' ', regex=True)
mask = np.triu(np.ones(df.shape)).astype(np.bool)
ax = seaborn.heatmap(df, mask=mask, cmap='RdYlGn_r', fmt='', square=True, linewidths=1.5)
mask = np.ones((8, 8))-mask
ax = seaborn.heatmap(df, mask=mask, cmap=ListedColormap(['white']),annot=labels,cbar=False, fmt='', linewidths=1.5)
ax.set_xticks([])
ax.set_yticks([])
plt.show()
</code></pre>
<p>The final result is:
<a href="http://i.stack.imgur.com/WxUuv.png" rel="nofollow"><img src="http://i.stack.imgur.com/WxUuv.png" alt="enter image description here"></a></p>
| 1 | 2016-08-18T09:43:19Z | [
"python",
"heatmap",
"correlation",
"seaborn"
] |
Django: how to query relations effectively | 39,015,062 | <p>I have a model "Booking" referencing to another model "Event" with a foreign key.</p>
<pre><code>class Event(models.Model):
title = models.CharField(_("title"), max_length=100)
class Booking(models.Model):
user = models.ForeignKey('auth.User', ..., related_name='bookings')
event = models.ForeignKey('Event', ..., related_name='bookings')
</code></pre>
<p><strong>I want to get all the events a user has booked</strong> in a set, to use it in a ListView.</p>
<p>I have managed to accomplish that by overwriting ListView's get_queryset method:</p>
<pre><code>def get_queryset(self):
user_bookings = Booking.objects.filter(user=self.request.user)
events = Event.objects.filter(id__in=user_bookings.values('event_id'))
return events
</code></pre>
<p>But I am quite sure, that that is not a very efficient way of solving my problem, in terms of needed database queries. </p>
<p>I have thought about using "select_related" method, but I didn't figured out how I could benefit from that in my usecase.</p>
<p><strong>My question is: How would you solve this? What is the most efficient way to do something like this?</strong></p>
| 0 | 2016-08-18T09:50:40Z | 39,015,179 | <p>You can do this in one line: <code>Event.objects.filter(bookings__user=self.request.user)</code></p>
| 2 | 2016-08-18T09:56:28Z | [
"python",
"django",
"performance",
"django-queryset",
"django-select-related"
] |
python mixing garmin timestamps | 39,015,119 | <p>I own a Garmin watch; to report statistics they provide an SDK.
In this SDK a timestamp comes in two formats:
one is a full timestamp on 32 bits,
the other is the lower 16 bits, which must be combined with the first.</p>
<p>I don't know how to code this in Python. Can somebody help me?</p>
<p>here is their explanation and the formula</p>
<p>*timestamp_16 is a 16 bit version of the timestamp field (which is 32 bit) that represents the lower 16 bits of the timestamp.
This field is meant to be used in combination with an earlier timestamp field that is used as a reference for the upper 16 bits.
The proper way to deal with this field is summarized as follows:</p>
<p><strong>mesgTimestamp += ( timestamp_16 - ( mesgTimestamp & 0xFFFF ) ) & 0xFFFF</strong>;*</p>
<p>My problem is not obtaining the two timestamps but combining the two in Python.</p>
<p>thanks</p>
| 0 | 2016-08-18T09:53:37Z | 39,016,857 | <p>I'm not sure of the result, but I took their explanation literally:
I shifted the 32-bit timestamp left by 16 bit positions,
then shifted it back 16 places to the right and did a bitwise OR with the 16-bit timestamp.</p>
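<p>For reference, the SDK's documented rollover formula from the question translates directly to Python. This is a minimal sketch (function and variable names are assumed, not from the SDK):</p>

```python
def combine(mesg_timestamp, timestamp_16):
    # Replace the low 16 bits of the reference timestamp, adding a
    # 0x10000 carry whenever the 16-bit counter has wrapped around.
    mesg_timestamp += (timestamp_16 - (mesg_timestamp & 0xFFFF)) & 0xFFFF
    return mesg_timestamp

print(hex(combine(0x00012345, 0x2346)))  # -> 0x12346
```

<p>The masked subtraction is what handles wrap-around: if <code>timestamp_16</code> is smaller than the low 16 bits of the reference, the <code>& 0xFFFF</code> turns the negative difference into the correct positive carry.</p>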
| 0 | 2016-08-18T11:19:04Z | [
"python",
"timestamp",
"bit-shift"
] |
TKinter, Python - Creating TopLevel popups using iteration and then closing them again without killing root | 39,015,297 | <p>I am trying to create a program that allows the user to enter the real names of people from a list of obscure names. </p>
<p>I want the pop-up windows to be generated from a main root window. Once the user has entered the real name, the pop-up should close and then open the next obscure name in the list. </p>
<p>I also want the user to be able to kill the entire iteration by closing the main root window; hence the need for it and not creating an individual pop-up for each name. </p>
<p>I have tried using win.destroy(), this kills the first pop-up but the iteration is also killed and the second name is not opened. I then tried using win.quit() but this left the pop-up windows open and if the information is entered twice it causes the program to crash. </p>
<p>Is it possible to get the pop-up windows to close after assigning the real name without disrupting the iteration? </p>
<p>Here is my code (I have included both the win.quit() and win.destroy() commands I have tried) </p>
<pre><code>from Tkinter import *
name_list = ["Jimmy Bob", " Bobby Jim", "Sammy Jim Bob" ]
def assign():
print("You chose option %s" %(e1.get()))
win.destroy() # Ends the iteration
#win.quit() # Continues the iteration but does not close the window and crashes if entered twice
root = Tk()
for i in name_list:
win = Toplevel(root)
win.lift()
e1 = Entry(win)
e1.grid(row=1, column=0)
var = StringVar()
var.set(i)
Label(win, textvariable = var).grid(row=0, column=0)
Button(win, text='Enter Real Name', command=assign).grid(row=2, column=0, pady=4)
win.mainloop( )
root.mainloop()
</code></pre>
| 0 | 2016-08-18T10:02:27Z | 39,044,980 | <p>The issue is that <code>win.mainloop()</code> is blocking, so having it in the iteration means that nothing will happen until after the win has been closed.</p>
<p>however, just getting rid of <code>win.mainloop()</code> causes all of the popups to come up at once so we need a way to make it so that the next iteration only happens once the button has been pressed, this is the reason for the <code>do_next()</code> function</p>
<p>the final change I made is the addition of the <code>quit</code> function, this duplicates the behavior of closing the <code>root</code> window in your earlier version, by making each popup window have a way of exiting out, since it won't run <code>do_next()</code> unless the button is pressed.</p>
<pre><code>from Tkinter import *
name_list = ["Jimmy Bob", " Bobby Jim", "Sammy Jim Bob" ]
def assign():
global e1, win
print("You chose option %s" %(e1.get()))
win.destroy()
do_next()
def quit():
global root
root.destroy()
def do_next():
global i, e1, win, root
if i == len(name_list):
root.destroy()
return
win = Toplevel(root)
win.lift()
e1 = Entry(win)
e1.grid(row=1, column=0)
var = StringVar()
var.set(name_list[i])
Label(win, textvariable = var).grid(row=0, column=0)
Button(win, text='Enter Real Name', command=assign).grid(row=2, column=0, pady=4)
win.protocol("WM_DELETE_WINDOW",quit)
i += 1
i = 0
root = Tk()
root.withdraw()
do_next()
root.mainloop()
</code></pre>
<p>some other things to note: if you're including this as part of a larger script I would encapsulate the entire popup procedure in a specialized class in order to avoid using globals</p>
| 0 | 2016-08-19T17:45:49Z | [
"python",
"python-2.7",
"tkinter"
] |
Python Multiprocessing with Subprocess Flags fail to run | 39,015,354 | <p>When trying to run the following code I get </p>
<blockquote>
<p>OSError: [Errno 2] No such file or directory</p>
</blockquote>
<p>The strange issue is that when I try to run LS without any other flags such as -a, the subprocess run as intended with no errors. I also tried adding shell=True along with the flag -a, but still no luck either.</p>
<pre><code>from multiprocessing import *
import subprocess
class ThreadManager:
def __init__(self, tools):
self.tools = tools
self.pool_size = cpu_count()
self.p1 = Pool(processes=self.pool_size, maxtasksperchild=2, )
def initiate(self):
for self.tool in self.tools:
print self.tool
self.p1 = Pool(4)
self.p1 = Process(target=subprocess.call, args=(self.tool,))
print self.p1
self.p1.start()
th = ThreadManager("ls -a".split("/"))
th.initiate()
</code></pre>
| 1 | 2016-08-18T10:05:19Z | 39,016,770 | <p>Your problem is here:</p>
<pre><code>"ls -a".split("/")
</code></pre>
<p>This turns it into a list <code>["ls -a"]</code>, which forces subprocess.call() to look for a binary literally named "ls -a", which doesn't exist. subprocess.call() can be called in several ways:</p>
<pre><code>subprocess.call("ls -a", shell=True) # argument parsing is done by shell
subprocess.call(["ls", "-a"]) # argument parsing not needed
subprocess.call("ls -a".split()) # argument parsing is done via .split()
subprocess.call(shlex.split("ls -a")) # a more reliable approach to parse
# command line arguments in Python
</code></pre>
<p>Note that first argument of subprocess.call() is a <em>positional argument</em> and passed as <code>args</code> tuple, while <code>shell=True</code> is a <em>keyword argument</em> and should be passed as dictionary argument <code>kwargs</code>:</p>
<pre><code>Process(target=subprocess.call, args=("ls -a",), kwargs=dict(shell=True))
Process(target=subprocess.call, args=(["ls", "-a"],))
</code></pre>
| 0 | 2016-08-18T11:14:33Z | [
"python",
"subprocess",
"python-multithreading",
"python-multiprocessing"
] |
Tensorflow gradients are always zero | 39,015,376 | <p>Tensorflow gradients are always zero with respect to conv layers that are after first conv layer. I've tried different ways to check that but gradients are always zero! Here is the small reproducible code that can be run to check that.</p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
import math
import os
import random
import tflearn
batch_size = 100
start = 0
end = batch_size
learning_rate = 0.000001
num_classes = 4
time_steps = 4
embedding = 2
step = 1
_units = 500
num_of_filters = 1000
train_set_x = [[[1,2],[3,4],[5,6],[7,8]],[[1,2],[3,4],[5,6],[7,8]]]
train_set_y = [0,1]
X = tf.placeholder(tf.float32, [None,time_steps,embedding])
Y = tf.placeholder(tf.int32, [None])
x = tf.expand_dims(X,3)
filter_shape = [1, embedding, 1, num_of_filters]
conv_weights = tf.get_variable("conv_weights1" , filter_shape, tf.float32, tf.contrib.layers.xavier_initializer())
conv_biases = tf.Variable(tf.constant(0.1, shape=[num_of_filters]))
conv = tf.nn.conv2d(x, conv_weights, strides=[1,1,1,1], padding = "VALID")
normalize = conv + conv_biases
tf_normalize = tflearn.layers.normalization.batch_normalization(normalize)
relu = tf.nn.elu(tf_normalize)
pooling = tf.reduce_max(relu, reduction_indices = 3, keep_dims = True)
outputs_fed_lstm = pooling
filter_shape2 = [1, 1, 1, num_of_filters]
conv_weights2 = tf.get_variable("conv_weights2" , filter_shape2, tf.float32, tf.contrib.layers.xavier_initializer())
conv_biases2 = tf.Variable(tf.constant(0.1, shape=[num_of_filters]))
conv2 = tf.nn.conv2d(outputs_fed_lstm, conv_weights2, strides=[1,1,1,1], padding = "VALID")
normalize2 = conv2 + conv_biases2
tf_normalize2 = tflearn.layers.normalization.batch_normalization(normalize2)
relu2 = tf.nn.elu(tf_normalize2)
pooling2 = tf.reduce_max(relu2, reduction_indices = 3, keep_dims = True)
outputs_fed_lstm2 = pooling2
x = tf.squeeze(outputs_fed_lstm2, [2])
x = tf.transpose(x, [1, 0, 2])
x = tf.reshape(x, [-1, 1])
x = tf.split(0, time_steps, x)
lstm = tf.nn.rnn_cell.LSTMCell(num_units = _units)
# multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * lstm_layers, state_is_tuple = True)
outputs , state = tf.nn.rnn(lstm,x, dtype = tf.float32)
weights = tf.Variable(tf.random_normal([_units,num_classes]))
biases = tf.Variable(tf.random_normal([num_classes]))
logits = tf.matmul(outputs[-1], weights) + biases
c_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits,Y)
loss = tf.reduce_mean(c_loss)
global_step = tf.Variable(0, name="global_step", trainable=False)
# decayed_learning_rate = tf.train.exponential_decay(learning_rate,0,10000,0.9)
optimizer= tf.train.AdamOptimizer(learning_rate)
minimize_loss = optimizer.minimize(loss, global_step=global_step)
grads_and_vars = optimizer.compute_gradients(loss,[conv_weights2])
correct_predict = tf.nn.in_top_k(logits, Y, 1)
accuracy = tf.reduce_mean(tf.cast(correct_predict, tf.float32))
init = tf.initialize_all_variables()
with tf.Session() as sess:
sess.run(init)
for i in range(1):
for j in range(1):
x = train_set_x
y = train_set_y
sess.run(minimize_loss,feed_dict={X : x, Y : y})
step += 1
gr_print = sess.run([grad for grad, _ in grads_and_vars], feed_dict={X : x, Y : y})
print (gr_print)
cost = sess.run(loss,feed_dict = {X: x,Y: y})
accu = sess.run(accuracy,feed_dict = {X: x, Y: y})
print ("Loss after one Epoch(Training) = " + "{:.6f}".format(cost) + ", Training Accuracy= " + "{:.5f}".format(accu))
</code></pre>
<p>And here is the output</p>
<pre><code>[array([[[[ 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 5.21326828, 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ]]]], dtype=float32)]
</code></pre>
| 0 | 2016-08-18T10:06:07Z | 39,017,739 | <p>What you compute is kind of weird. Let's examine the shapes in your model:</p>
<ul>
<li>input <code>x</code>: <code>[batch_size, 4, 2, 1]</code></li>
<li>1st convolution <code>conv</code>: <code>[batch_size, 4, 1, 1000]</code></li>
<li>1st max pool <code>pooling</code>: <code>[batch_size, 4, 1, 1]</code></li>
<li>2nd convolution <code>conv2</code>: <code>[batch_size, 4, 1, 1000]</code></li>
<li>2nd max pool <code>polling2</code>: <code>[batch_size, 4, 1, 1]</code></li>
<li>input to LSTM: <code>[4, batch_size, 1]</code></li>
<li>output of LSTM: <code>[batch_size, 500]</code></li>
</ul>
<hr>
<p>From what I understand, you try to apply two 1D convolutions and then an LSTM. However, the first convolution is on the 3rd dimension of size <code>embedding=2</code>.</p>
<p>After that, you apply a max pooling on <strong>all the 1000-sized embedding</strong>. You should maybe apply the max pooling to the 2nd dimension of size 4:</p>
<pre><code>pooling = tf.nn.max_pool(conv, [1, 2, 1, 1], [1, 2, 1, 1], "VALID")
# pooling has shape [batch_size, 2, 1, 1000]
</code></pre>
<hr>
<p>Concerning your gradient issue, it comes from the two max pooling. Only 1 of the 1000 inputs is passed through, so the gradients for 999 of the inputs is 0.</p>
<p>This is why your first conv weights have only <strong>2 non-zero gradients</strong>, and the second conv weights have only <strong>1 non-zero gradient</strong>.</p>
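<p>A toy, framework-free illustration of why max pooling starves gradients here (my own sketch, not TensorFlow code): only the arg-max input of a pooled window receives any gradient at all.</p>

```python
# Backward pass of a max over one window: the gradient flows entirely
# to the winning input; every other input gets exactly zero.
window = [0.2, 1.5, 0.7, 0.9]
grad_out = 1.0
winner = max(range(len(window)), key=window.__getitem__)
grad_in = [grad_out if i == winner else 0.0 for i in range(len(window))]
print(grad_in)  # -> [0.0, 1.0, 0.0, 0.0]
```

<p>Pooling over the whole 1000-filter axis, as in the question, therefore zeroes the gradient for 999 of the 1000 filters at every position.</p>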
<p>All in all, the real issue here is your architecture; you should maybe write it down on a piece of paper first.</p>
| 1 | 2016-08-18T12:02:49Z | [
"python",
"tensorflow",
"mathematical-optimization",
"gradient-descent"
] |
Slicing strings pulled from HTML | 39,015,385 | <p>I pulled some data from a webpage using </p>
<pre><code>y = soup.find('td', attrs={'data'}).string
</code></pre>
<p>When I tried to use float() I got the error message that it was an invalid literal for float(). So I tried to find out what was wrong by using print(repr(y)).
That gave me the following result.</p>
<pre><code>u'\r\n 450,990\r\n '
</code></pre>
<p>I realize u' means unicode, but how can I make this into a format so I can float() it ? </p>
<p>Thanks in advance, first post here so have mercy.</p>
| 1 | 2016-08-18T10:06:23Z | 39,015,487 | <pre><code>>>> y = str(u'\r\n 450,990\r\n ')
>>> map(float, y.strip().split(','))
[450.0, 990.0]
>>> float(y.strip().replace(',',''))
450990.0
>>> float(y.strip().replace(',','.'))
450.99
</code></pre>
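<p>Note that the three variants above mean different things. Given that <code>450,990</code> is most likely a number with a thousands separator, stripping the whitespace and removing the comma is probably the variant you want (a sketch):</p>

```python
y = u'\r\n                            450,990\r\n                        '
# strip() removes the surrounding whitespace, replace() drops the thousands separator
value = float(y.strip().replace(',', ''))
print(value)  # 450990.0
```

<p>The <code>split(',')</code> and <code>replace(',', '.')</code> variants would instead treat the comma as a value separator or as a decimal point, respectively.</p>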
| 0 | 2016-08-18T10:10:42Z | [
"python",
"python-unicode"
] |
Avoiding string + integer addition when using recursive function | 39,015,523 | <p>I am trying to build a function (which uses recursion) that scans a number, n, to look for a digit d, and if found, I would like to replace d with a specified number r, as shown in the code below. This code works fine, but the output is in a string format. I have tried numerous ways to change it to output an integer, but to no avail. Thanks for the help!</p>
<pre><code>def replace_digit(n, d, r):
number = str(n)
i = 0
if len(number) == 1:
if number == str(d):
return str(r)
else:
return number
else:
if number[i] == str(d):
return number[:i] + str(r) + replace_digit(int(number[i+1:]),d,r)
else:
return number[i] + replace_digit(int(number[i+1:]),d ,r)
</code></pre>
| 1 | 2016-08-18T10:12:25Z | 39,015,794 | <p>It's quite simple but you need some type conversions between <code>str</code> and <code>int</code>:</p>
<pre><code>def replace_digit(n, d, r):
number = str(n)
rest = str(replace_digit(int(number[1:]), d, r)) if len(number) > 1 else ""
digit = number[0]
digit = str(r) if digit == str(d) else digit
return int(digit + rest)
</code></pre>
<p>There's also another possibility, using a wrapper. This limits the number of type conversions.</p>
<pre><code>def replace_digit(n, d, r):
def replace(n, d, r):
rest = replace(n[1:], d, r) if len(n) > 1 else ""
return r + rest if n[0] == d else n[0] + rest
return int(replace(str(n), str(d), str(r)))
</code></pre>
| 0 | 2016-08-18T10:26:20Z | [
"python",
"python-3.x",
"recursion"
] |
Avoiding string + integer addition when using recursive function | 39,015,523 | <p>I am trying to build a function (which uses recursion) that scans a number, n, to look for a digit d, and if found, I would like to replace d with a specified number r, as shown in the code below. This code works fine, but the output is in a string format. I have tried numerous ways to change it to output an integer, but to no avail. Thanks for the help!</p>
<pre><code>def replace_digit(n, d, r):
number = str(n)
i = 0
if len(number) == 1:
if number == str(d):
return str(r)
else:
return number
else:
if number[i] == str(d):
return number[:i] + str(r) + replace_digit(int(number[i+1:]),d,r)
else:
return number[i] + replace_digit(int(number[i+1:]),d ,r)
</code></pre>
| 1 | 2016-08-18T10:12:25Z | 39,015,997 | <p>If you already have a working function, why not just split the problem?</p>
<pre><code>def replace(n, d, r):
def replace_digit(n, d, r): # doesn't change
return ...
    return int(replace_digit(str(n), str(d), str(r)))
</code></pre>
| 0 | 2016-08-18T10:35:47Z | [
"python",
"python-3.x",
"recursion"
] |
Avoiding string + integer addition when using recursive function | 39,015,523 | <p>I am trying to build a function (which uses recursion) that scans a number, n, to look for a digit d, and if found, I would like to replace d with a specified number r, as shown in the code below. This code works fine, but the output is in a string format. I have tried numerous ways to change it to output an integer, but to no avail. Thanks for the help!</p>
<pre><code>def replace_digit(n, d, r):
number = str(n)
i = 0
if len(number) == 1:
if number == str(d):
return str(r)
else:
return number
else:
if number[i] == str(d):
return number[:i] + str(r) + replace_digit(int(number[i+1:]),d,r)
else:
return number[i] + replace_digit(int(number[i+1:]),d ,r)
</code></pre>
| 1 | 2016-08-18T10:12:25Z | 39,016,241 | <p>The solution is to wrap the return values with <code>int()</code>, as has already been stated in the comments and other answers.</p>
<p>But here's a version that doesn't use string manipulation at all. Just for fun.</p>
<pre><code>def replace_digit(n, d, r):
rest = n // 10 # all but the rightmost digit
digit = n - rest * 10 # only the rightmost digit
digit = r if digit == d else digit
if rest == 0:
return digit
return replace_digit(rest, d, r) * 10 + digit
</code></pre>
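<p>A quick check of this version (reproduced here so the snippet is self-contained) shows that it returns an <code>int</code>, with no string conversions involved:</p>

```python
def replace_digit(n, d, r):
    rest = n // 10            # all but the rightmost digit
    digit = n - rest * 10     # only the rightmost digit
    digit = r if digit == d else digit
    if rest == 0:
        return digit
    return replace_digit(rest, d, r) * 10 + digit

result = replace_digit(12321, 2, 9)
print(result)  # 19391, an int
```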
| 0 | 2016-08-18T10:47:54Z | [
"python",
"python-3.x",
"recursion"
] |
Avoiding string + integer addition when using recursive function | 39,015,523 | <p>I am trying to build a function (which uses recursion) that scans a number, n, to look for a digit d, and if found, I would like to replace d with a specified number r, as shown in the code below. This code works fine, but the output is in a string format. I have tried numerous ways to change it to output an integer, but to no avail. Thanks for the help!</p>
<pre><code>def replace_digit(n, d, r):
number = str(n)
i = 0
if len(number) == 1:
if number == str(d):
return str(r)
else:
return number
else:
if number[i] == str(d):
return number[:i] + str(r) + replace_digit(int(number[i+1:]),d,r)
else:
return number[i] + replace_digit(int(number[i+1:]),d ,r)
</code></pre>
| 1 | 2016-08-18T10:12:25Z | 39,019,046 | <p>Instead of returning the concatenation, try this:</p>
<pre><code>if number[i] == str(d):
new_number = number[:i] + str(r) + replace_digit(int(number[i+1:]),d,r)
else:
new_number = number[i] + replace_digit(int(number[i+1:]),d ,r)
return int(new_number)
</code></pre>
| 0 | 2016-08-18T13:05:46Z | [
"python",
"python-3.x",
"recursion"
] |
Avoiding string + integer addition when using recursive function | 39,015,523 | <p>I am trying to build a function (which uses recursion) that scans a number, n, to look for a digit d, and if found, I would like to replace d with a specified number r, as shown in the code below. This code works fine, but the output is in a string format. I have tried numerous ways to change it to output an integer, but to no avail. Thanks for the help!</p>
<pre><code>def replace_digit(n, d, r):
number = str(n)
i = 0
if len(number) == 1:
if number == str(d):
return str(r)
else:
return number
else:
if number[i] == str(d):
return number[:i] + str(r) + replace_digit(int(number[i+1:]),d,r)
else:
return number[i] + replace_digit(int(number[i+1:]),d ,r)
</code></pre>
| 1 | 2016-08-18T10:12:25Z | 39,028,542 | <pre><code>def replace_digit(number, digit, replacement):
if number == 0:
return number # base case
quotient, remainder = divmod(number, 10)
if remainder == digit:
remainder = replacement
return replace_digit(quotient, digit, replacement) * 10 + remainder
print(replace_digit(961748941982451653, 9, 2))
</code></pre>
<p><strong>OUTPUT</strong></p>
<pre><code>261748241282451653
</code></pre>
| 1 | 2016-08-18T22:36:13Z | [
"python",
"python-3.x",
"recursion"
] |
How to annotate a method that returns a specific type (or subtype) | 39,015,600 | <p>Please consider this snippet of python 3.5 code:</p>
<pre><code>class Foo:
pass
class Bar(Foo):
pass
class AbstractSomething:
def get_foobinator_type(self):
return Foo
</code></pre>
<p>I'd like to annotate (<a href="https://www.python.org/dev/peps/pep-0484/" rel="nofollow">using PEP-0484 annotations</a>) the return value of the <code>get_foobinator_type</code> method to say: "It returns a type that is either <code>Foo</code> or any subtype of it".</p>
<p>I didn't find any sensible way to do it in Python. Here are approaches that are obviously wrong: </p>
<ul>
<li><p>Following: <code>def get_foobinator_type(self) -> Foo</code> means that this method returns an <strong>instance</strong> of <code>Foo</code>. </p></li>
<li><p>Following: <code>def get_foobinator_type(self) -> type</code> means that this method returns a type, but sadly, there is no information that it needs to be a subtype of <code>Foo</code>. </p></li>
</ul>
<p>In Java terms I'd like to have a method with a signature like: <code>Class<Foo> getFoobinatorType()</code>. </p>
| 0 | 2016-08-18T10:16:23Z | 39,016,491 | <p>I think what you need is TypeVar from the typing module.</p>
<pre><code>from typing import TypeVar
class Foo:
pass
class Bar(Foo):
pass
T = TypeVar('T', bound=Foo)
class AbstractSomething:
def get_foobinator_type(self) -> T:
return Foo
</code></pre>
<p><a href="https://docs.python.org/3/library/typing.html#typing.TypeVar" rel="nofollow">From the documentation of typing</a>:</p>
<blockquote>
<p>Alternatively, a type variable may specify an upper bound using
bound=<type>. This means that an actual type substituted (explicitly
or implicitly) for the type variable must be a subclass of the
boundary type, see PEP 484</p>
</blockquote>
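<p>As a side note: newer versions of the <code>typing</code> module also provide <code>typing.Type</code>, which expresses "the class <code>Foo</code> or any subclass of it" directly and is the closest analogue of Java's <code>Class<Foo></code> (a sketch, assuming a <code>typing</code> version that includes <code>Type</code>):</p>

```python
from typing import Type

class Foo:
    pass

class AbstractSomething:
    def get_foobinator_type(self) -> Type[Foo]:
        # the annotation means: Foo itself, or any subclass of Foo
        return Foo

returned = AbstractSomething().get_foobinator_type()
print(returned is Foo)  # True
```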
| 0 | 2016-08-18T10:59:45Z | [
"python",
"python-3.x"
] |
How to annotate a method that returns a specific type (or subtype) | 39,015,600 | <p>Please consider this snippet of python 3.5 code:</p>
<pre><code>class Foo:
pass
class Bar(Foo):
pass
class AbstractSomething:
def get_foobinator_type(self):
return Foo
</code></pre>
<p>I'd like to annotate (<a href="https://www.python.org/dev/peps/pep-0484/" rel="nofollow">using PEP-0484 annotations</a>) the return value of the <code>get_foobinator_type</code> method to say: "It returns a type that is either <code>Foo</code> or any subtype of it".</p>
<p>I didn't find any sensible way to do it in Python. Here are approaches that are obviously wrong: </p>
<ul>
<li><p>Following: <code>def get_foobinator_type(self) -> Foo</code> means that this method returns an <strong>instance</strong> of <code>Foo</code>. </p></li>
<li><p>Following: <code>def get_foobinator_type(self) -> type</code> means that this method returns a type, but sadly, there is no information that it needs to be a subtype of <code>Foo</code>. </p></li>
</ul>
<p>In Java terms I'd like to have a method with a signature like: <code>Class<Foo> getFoobinatorType()</code>. </p>
| 0 | 2016-08-18T10:16:23Z | 39,017,178 | <p>As far as I understand, you really cannot. You're looking for a way to indicate that the return value is itself a <strong>class</strong> (a <em>type</em>); checking that would have to be based on what the type of the class is, i.e. its metaclass.</p>
<p>The problem with that is that a metaclass doesn't help a type checker evaluate what the inheritance of an object might be; as long as it's of type <code>type</code>, the checker is satisfied.</p>
<p>Apart from that (and not being sure which type checker you use), <code>mypy</code> for example <a href="https://github.com/python/mypy/wiki/Unsupported-Python-Features" rel="nofollow">doesn't have support yet</a> for custom metaclasses, which you might otherwise use to group your objects more specifically.</p>
<p>The way I see it, you either don't annotate at all, or you change the implementation and annotate with <code>Foo</code>.</p>
| 1 | 2016-08-18T11:34:45Z | [
"python",
"python-3.x"
] |
Pandas bin and count | 39,015,635 | <p>I'm new to Pandas, please don't be too harsh ;) Let's assume my initial data frame looks like this:</p>
<pre><code>#::: initialize dictionary
np.random.seed(0)
d = {}
d['size'] = 2 * np.random.randn(100) + 3
d['flag_A'] = np.random.randint(0,2,100).astype(bool)
d['flag_B'] = np.random.randint(0,2,100).astype(bool)
d['flag_C'] = np.random.randint(0,2,100).astype(bool)
#::: convert dictionary into pandas dataframe
df = pd.DataFrame(d)
</code></pre>
<p>I now bin the data frame according to 'size',</p>
<pre><code>#::: bin pandas dataframe per size
bins = np.arange(0,10,1)
groups = df.groupby( pd.cut( df['size'], bins ) )
</code></pre>
<p>which results in this output:</p>
<pre><code>---
(0, 1]
flag_A flag_B flag_C size
25 False False True 0.091269
40 True True True 0.902894
41 True True True 0.159964
46 False True True 0.494409
53 False True True 0.638736
73 True False True 0.530348
80 True False False 0.669700
88 True True True 0.858495
---
(1, 2]
flag_A flag_B flag_C size
...
</code></pre>
<p>My question is now: How can I proceed from here to get the count of True and False per flag (A,B,C) per bin? E.g. for bin=(0,1] I expect to get something like N_flag_A_true = 5, N_flag_A_false = 3, and so on. Ideally, I would like to get this information summarized by extending this data frame, or into a new data frame.</p>
| 0 | 2016-08-18T10:18:30Z | 39,016,203 | <p>It can be achieved with multi-index groupbys, concatenating the results and unstacking:</p>
<pre><code>flag_A = df.groupby( [pd.cut( df['size'], bins),'flag_A'] ).count()['size'].to_frame()
flag_B = df.groupby( [pd.cut( df['size'], bins),'flag_B'] ).count()['size'].to_frame()
flag_C = df.groupby( [pd.cut( df['size'], bins),'flag_C'] ).count()['size'].to_frame()
T = pd.concat([flag_A,flag_B],axis=1)
R = pd.concat([T,flag_C],axis=1)
R.columns = ['flag_A','flag_B','flag_C']
R.index.names = [u'Bins',u'Value']
R = R.unstack('Value')
</code></pre>
<p>The result is:</p>
<pre><code> flag_A flag_B flag_C
Value False True False True False True
Bins
(0, 1] 3.0 5.0 3.0 5.0 1.0 7.0
(1, 2] 6.0 8.0 7.0 7.0 5.0 9.0
(2, 3] 7.0 9.0 11.0 5.0 13.0 3.0
(3, 4] 15.0 12.0 12.0 15.0 17.0 10.0
(4, 5] 2.0 8.0 5.0 5.0 7.0 3.0
(5, 6] 5.0 5.0 3.0 7.0 7.0 3.0
(6, 7] 1.0 5.0 NaN 6.0 3.0 3.0
(7, 8] NaN 2.0 1.0 1.0 NaN 2.0
(8, 9] NaN NaN NaN NaN NaN NaN
</code></pre>
<p>EDIT: You can resolve the multi-index in the columns like this:</p>
<pre><code>R.columns = ['flag_A_F','flag_A_T','flag_B_F','flag_B_T','flag_C_F','flag_C_T']
</code></pre>
<p>With the result:</p>
<pre><code> flag_A_F flag_A_T flag_B_F flag_B_T flag_C_F flag_C_T
Bins
(0, 1] 3.0 5.0 3.0 5.0 1.0 7.0
(1, 2] 6.0 8.0 7.0 7.0 5.0 9.0
(2, 3] 7.0 9.0 11.0 5.0 13.0 3.0
(3, 4] 15.0 12.0 12.0 15.0 17.0 10.0
(4, 5] 2.0 8.0 5.0 5.0 7.0 3.0
(5, 6] 5.0 5.0 3.0 7.0 7.0 3.0
(6, 7] 1.0 5.0 NaN 6.0 3.0 3.0
(7, 8] NaN 2.0 1.0 1.0 NaN 2.0
(8, 9] NaN NaN NaN NaN NaN NaN
</code></pre>
| 3 | 2016-08-18T10:46:02Z | [
"python",
"pandas",
"count",
"histogram",
"bin"
] |
Pandas bin and count | 39,015,635 | <p>I'm new to Pandas, please don't be too harsh ;) Let's assume my initial data frame looks like this:</p>
<pre><code>#::: initialize dictionary
np.random.seed(0)
d = {}
d['size'] = 2 * np.random.randn(100) + 3
d['flag_A'] = np.random.randint(0,2,100).astype(bool)
d['flag_B'] = np.random.randint(0,2,100).astype(bool)
d['flag_C'] = np.random.randint(0,2,100).astype(bool)
#::: convert dictionary into pandas dataframe
df = pd.DataFrame(d)
</code></pre>
<p>I now bin the data frame according to 'size',</p>
<pre><code>#::: bin pandas dataframe per size
bins = np.arange(0,10,1)
groups = df.groupby( pd.cut( df['size'], bins ) )
</code></pre>
<p>which results in this output:</p>
<pre><code>---
(0, 1]
flag_A flag_B flag_C size
25 False False True 0.091269
40 True True True 0.902894
41 True True True 0.159964
46 False True True 0.494409
53 False True True 0.638736
73 True False True 0.530348
80 True False False 0.669700
88 True True True 0.858495
---
(1, 2]
flag_A flag_B flag_C size
...
</code></pre>
<p>My question is now: How can I proceed from here to get the count of True and False per flag (A,B,C) per bin? E.g. for bin=(0,1] I expect to get something like N_flag_A_true = 5, N_flag_A_false = 3, and so on. Ideally, I would like to get this information summarized by extending this data frame, or into a new data frame.</p>
| 0 | 2016-08-18T10:18:30Z | 39,017,407 | <p>You can apply your group to the DF then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html" rel="nofollow">pd.melt</a>:</p>
<pre><code>df['group'] = pd.cut(df['size'], bins=bins)
melted = pd.melt(df, id_vars='group', value_vars=['flag_A', 'flag_B', 'flag_C'])
</code></pre>
<p>Which'll give you:</p>
<pre><code> group variable value
0 (6, 7] flag_A False
1 (3, 4] flag_A False
2 (4, 5] flag_A True
3 (7, 8] flag_A True
4 (6, 7] flag_A True
5 (1, 2] flag_A False
[...]
</code></pre>
<p>Then group by the columns and take the size of each group:</p>
<pre><code>df2 = melted.groupby(['group', 'variable', 'value']).size()
</code></pre>
<p>Which gives you:</p>
<pre><code>group variable value
(0, 1] flag_A False 3
True 5
flag_B False 3
True 5
flag_C False 1
True 7
(1, 2] flag_A False 6
True 8
flag_B False 7
True 7
flag_C False 5
True 9
(2, 3] flag_A False 7
True 9
flag_B False 11
True 5
flag_C False 13
True 3
[...]
</code></pre>
<p>Then you'll need to reshape that according to how you want to use it.</p>
| 2 | 2016-08-18T11:46:12Z | [
"python",
"pandas",
"count",
"histogram",
"bin"
] |
Mean of sum in a DataFrame with Python | 39,015,648 | <p>I create a DataFrame df1 which contains, for each day of the week, the activation time for each machine.</p>
<pre><code>machine1 38696 non-null float64
machine3 38697 non-null float64
machine5 38695 non-null float64
machine6 38695 non-null float64
machine7 38693 non-null float64
machine8 38696 non-null float64
date 38840 non-null datetime64[ns]
day_of_week 38840 non-null object
dtypes: datetime64[ns](2), float64(6), object(1)
memory usage: 2.7+ MB
Machine1 Machine3 Machine5 Machine6 Machine7 Machine8 date day_of_week
90.0 90.0 90.0 90.0 90.0 90.0 2015-07-31 Fri
0.0 0.0 0.0 0.0 0.0 0.0 2015-07-31 Mon
0.0 0.0 0.0 0.0 0.0 0.0 2015-07-31 Tues
0.0 0.0 0.0 0.0 0.0 0.0 2015-07-31 Fri
0.0 0.0 0.0 0.0 0.0 0.0 2015-07-31 Tues
</code></pre>
<p>I am trying to create another DataFrame which extracts, for each machine, the mean activation per day. For example: </p>
<pre><code> Machine1 Machine3 Machine5 Machine6 Machine7 Machine8
Mon 0 .. .. .. .. ..
Tue 0
wed 0
thu 0
fri 45
</code></pre>
<p>Can you help me to achieve this in the smartest way?</p>
| 2 | 2016-08-18T10:19:07Z | 39,015,725 | <p>IIUC you can use:</p>
<pre><code>print (df.groupby('day_of_week').mean())
Machine1 Machine3 Machine5 Machine6 Machine7 Machine8
day_of_week
Fri 45.0 45.0 45.0 45.0 45.0 45.0
Mon 0.0 0.0 0.0 0.0 0.0 0.0
Tues 0.0 0.0 0.0 0.0 0.0 0.0
</code></pre>
<p>If need output with reseting index:</p>
<pre><code>print (df.groupby('day_of_week', as_index=False).mean())
day_of_week Machine1 Machine3 Machine5 Machine6 Machine7 Machine8
0 Fri 45.0 45.0 45.0 45.0 45.0 45.0
1 Mon 0.0 0.0 0.0 0.0 0.0 0.0
2 Tues 0.0 0.0 0.0 0.0 0.0 0.0
</code></pre>
| 3 | 2016-08-18T10:22:45Z | [
"python",
"pandas",
"dataframe",
"group-by",
"mean"
] |
Evaluate sum of step functions | 39,015,649 | <p>I have a fairly large number (around 1000) of step functions, each with only two intervals. I'd like to sum them up and then find the maximum value. What is the best way to do this? I've tried out sympy, with code as follows:</p>
<pre><code>from sympy import Piecewise, piecewise_fold, evalf
from sympy.abc import x
from sympy.plotting import *
import numpy as np
S = 20
t = np.random.random(20)
sum_piecewise = None
for s in range(S):
p = Piecewise((np.random.random(), x<t[s]), (np.random.random(), x>=t[s]))
if not sum_piecewise:
sum_piecewise = p
else:
sum_piecewise += p
print sum_piecewise.evalf(0.2)
</code></pre>
<p>However, this outputs a large symbolic expression and not an actual value, which is what I want. </p>
| 2 | 2016-08-18T10:19:07Z | 39,015,929 | <p>What about using <a href="http://docs.sympy.org/latest/tutorial/basic_operations.html#substitution" rel="nofollow">substitution</a>? Try replacing <code>sum_piecewise.evalf(0.2)</code> with <code>sum_piecewise.subs(x, 0.2)</code></p>
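<p>For example (a minimal sketch, assuming <code>sympy</code> is available; note that the first argument of <code>evalf()</code> is the precision in digits, not a value for <code>x</code>, which is why <code>evalf(0.2)</code> does not do what was intended):</p>

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Piecewise((1.0, x < 0.5), (2.0, x >= 0.5))

# subs() substitutes a concrete value for x, collapsing the Piecewise
value = p.subs(x, 0.2)
print(value)  # a plain number, not a symbolic expression
```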
| 0 | 2016-08-18T10:32:54Z | [
"python",
"numpy",
"scipy",
"sympy"
] |
Evaluate sum of step functions | 39,015,649 | <p>I have a fairly large number (around 1000) of step functions, each with only two intervals. I'd like to sum them up and then find the maximum value. What is the best way to do this? I've tried out sympy, with code as follows:</p>
<pre><code>from sympy import Piecewise, piecewise_fold, evalf
from sympy.abc import x
from sympy.plotting import *
import numpy as np
S = 20
t = np.random.random(20)
sum_piecewise = None
for s in range(S):
p = Piecewise((np.random.random(), x<t[s]), (np.random.random(), x>=t[s]))
if not sum_piecewise:
sum_piecewise = p
else:
sum_piecewise += p
print sum_piecewise.evalf(0.2)
</code></pre>
<p>However, this outputs a large symbolic expression and not an actual value, which is what I want. </p>
| 2 | 2016-08-18T10:19:07Z | 39,016,605 | <p>As it appears that you consider numerical functions, it is better (in terms of performance) to work with Numpy. Here's one approach:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
np.random.seed(10)
S = 20 # number of piecewise functions
# generate S function parameters.
# For example, the k-th function is defined as equal to
# p_values[k,0] when t<t_values[k] and equal to
# p_values[k,1] when t>= t_values[k]
t_values = np.random.random(S)
p_values = np.random.random((S,2))
# define a piecewise function given the function's parameters
def p_func(t, t0, p0):
return np.piecewise(t, [t < t0, t >= t0], p0)
# define a function that sums a set of piecewise functions corresponding to
# parameter arrays t_values and p_values
def p_sum(t, t_values, p_values):
return np.sum([p_func(t, t0, p0) for t0, p0 in zip(t_values,p_values)])
</code></pre>
<p>Here is the plot of the sum of functions:</p>
<pre><code>t_range = np.linspace(0,1,1000)
plt.plot(t_range, [p_sum(tt,t_values,p_values) for tt in t_range])
</code></pre>
<p><a href="http://i.stack.imgur.com/XMTrf.png" rel="nofollow"><img src="http://i.stack.imgur.com/XMTrf.png" alt="enter image description here"></a></p>
<p>Clearly, in order to find the maximum, it suffices to consider only the <code>S</code> time instants contained in <code>t_values</code>. For this example,</p>
<pre><code>np.max([p_sum(tt,t_values,p_values) for tt in t_values])
</code></pre>
<blockquote>
<p><code>11.945901591934897</code></p>
</blockquote>
| 2 | 2016-08-18T11:06:33Z | [
"python",
"numpy",
"scipy",
"sympy"
] |
Which selenium python package should I be using? | 39,015,690 | <p>I'm using <code>Python 3.5.1 |Anaconda 2.4.0 (x86_64) on a Mac OS X 10.11.6</code></p>
<p>Should I be using selenium-3.0.0b2 <a href="https://pypi.python.org/pypi/selenium" rel="nofollow">https://pypi.python.org/pypi/selenium</a></p>
<p>or should I be using selenium-2.53.6 <a href="https://pypi.python.org/pypi/selenium/2.53.6" rel="nofollow">https://pypi.python.org/pypi/selenium/2.53.6</a> which is the version which installs when I run <code>pip install -U selenium</code></p>
<p>Both encounter problems when running, and I don't want to spend ages fixing a package which is indeed the wrong version for my machine.</p>
| 1 | 2016-08-18T10:21:19Z | 39,019,497 | <p>I guess using pip is quite a good practice.
Also, as far as I know, your problems will more likely be about communication between the standalone Selenium server and your Selenium bindings, so you should first check which combination is stable at the moment.</p>
| 1 | 2016-08-18T13:28:50Z | [
"python",
"selenium"
] |
Django query to annotate number of foreign keys matching certain value | 39,015,704 | <p>If I had the following related models:</p>
<pre><code>class User(models.Model):
...
class Identity(models.Model):
user = models.ForeignKey(User, on_delete=models.PROTECT)
category = models.CharField(max_length=8, choices=IDENTITY_CATEGORIES)
...
</code></pre>
<p>How could I query for users with multiple <em>email</em> identities, where multiple <code>Identity</code> instances exist of category "email" which point to the same <code>User</code>.</p>
<p>I've seen that <a href="https://docs.djangoproject.com/en/1.8/ref/models/conditional-expressions/" rel="nofollow">Django 1.8 introduced Conditional Expressions</a>, but I'm not sure how they would apply to this situation.</p>
| 0 | 2016-08-18T10:21:52Z | 39,015,705 | <p>By applying <code>django.db.models.Sum</code>, here's one way of achieving it:</p>
<pre><code>from django.db.models import Case, IntegerField, Sum, Value, When
def users_with_multiple_email_identities():
"""
Return a queryset of Users who have multiple email identities.
"""
return (
User.objects
.annotate(
num_email_identities=Sum(
Case(
When(identity__category='email', then=1),
output_field=IntegerField(),
default=Value(0)
)
)
)
.filter(num_email_identities__gt=1)
)
</code></pre>
<p>So, we use <code>.annotate()</code> to create an aggregate field representing the number of email identities per user, and then apply <code>.filter()</code> to the results to return only users with multiple email identities.</p>
| 0 | 2016-08-18T10:21:52Z | [
"python",
"django",
"orm",
"django-queryset"
] |
dlib train_object_detector immense amounts of RAM usage | 39,015,782 | <p>I am using dlib's <em>train_object_detector</em> for face detection and I have roughly 6k images in a folder with which I am trying to train my model.</p>
<p>Also, I am using dlib's example python code (train_object_detector.py) for this purpose.</p>
<p>But the thing is, the program's RAM usage is insane. For roughly 300 images, it required approximately 15GB RAM, and right now with my 6k images, I'm stuck.</p>
<p>For 6k images, while training, it required <strong>more than 100GBs of RAM</strong> and eventually the program killed itself.</p>
<p>Has it always been like this, or am I doing something wrong? Is it normal to have this much RAM usage?</p>
<p>It is almost unmodified and pretty much the same as the example code from dlib.</p>
<p><strong>Note:</strong> The sizes of the images are between 10-100 KB.</p>
<p>Here is the code I'm using (remote): <a href="http://pastebin.com/WipU8qgq" rel="nofollow">http://pastebin.com/WipU8qgq</a>
Here's the code:</p>
<pre><code>import os
import sys
import glob
import dlib
from skimage import io
if len(sys.argv) != 4:
print(
"Give the path to the faces directory as the argument to this "
"program with training and test xml files in order. For example: \n"
" ./train_object_detector_modified.py ../faces ../faces/training.xml ../faces/testing.xml")
exit()
faces_folder = sys.argv[1]
training_xml_path = sys.argv[2]
testing_xml_path = sys.argv[3]
options = dlib.simple_object_detector_training_options()
options.add_left_right_image_flips = True
options.C = 5
options.num_threads = 8
options.be_verbose = True
dlib.train_simple_object_detector(training_xml_path, "detector.svm", options)
print 'training end'
print("") # Print blank line to create gap from previous output
print("Training accuracy: {}".format(
dlib.test_simple_object_detector(training_xml_path, "detector.svm")))
print("Testing accuracy: {}".format(
dlib.test_simple_object_detector(testing_xml_path, "detector.svm")))
'''
# Now let's use the detector as you would in a normal application. First we
# will load it from disk.
detector = dlib.simple_object_detector("detector.svm")
# We can look at the HOG filter we learned. It should look like a face. Neat!
win_det = dlib.image_window()
win_det.set_image(detector)
# Now let's run the detector over the images in the faces folder and display the
# results.
print("Showing detections on the images in the faces folder...")
win = dlib.image_window()
for f in glob.glob(os.path.join(faces_folder, "*.jpg")):
print("Processing file: {}".format(f))
img = io.imread(f)
dets = detector(img)
print("Number of faces detected: {}".format(len(dets)))
for k, d in enumerate(dets):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
k, d.left(), d.top(), d.right(), d.bottom()))
win.clear_overlay()
win.set_image(img)
win.add_overlay(dets)
dlib.hit_enter_to_continue()
'''
</code></pre>
| 1 | 2016-08-18T10:25:47Z | 39,016,727 | <p>This is happening because you have a combination of big images and/or small bounding boxes. By default, dlib.train_simple_object_detector uses a detection window that is 6400 pixels in size. If images contain target boxes much smaller than this then those images are upsampled to make the objects big enough. </p>
<p>All these settings are fields in the options object. </p>
| 1 | 2016-08-18T11:12:58Z | [
"python",
"face-detection",
"dlib"
] |
How to test only one function with tox? | 39,015,885 | <p>I'm learning to write tests with tox. How do I test only one function with tox? For example if I want to test only <code>test_simple_backup_generation</code> from <code>tests/test_backup_cmd.py</code> of <code>django-backup</code> <a href="https://github.com/django-backup/django-backup" rel="nofollow">extension</a></p>
| 0 | 2016-08-18T10:30:36Z | 39,016,416 | <p>If you define a parameter <code>{posargs}</code> in your <code>tox.ini</code> you can pass in arguments during execution. In the case of py.test, where</p>
<pre><code>py.test -k test_simple_backup_generation
</code></pre>
<p>would only test one function:</p>
<pre><code>[tox]
envlist = py27,py35
[testenv]
deps=pytest
commands=
pip install -e .[tests,docs]
py.test {posargs}
</code></pre>
<p>and run it like this:</p>
<pre><code>tox -- -k test_simple_backup_generation
</code></pre>
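<p>If you want a sensible default when no extra arguments are passed, <code>{posargs}</code> also accepts a fallback value (a sketch; adjust the path to your project):</p>

```ini
[testenv]
deps = pytest
commands =
    py.test {posargs:tests/}
```

<p>With this, a plain <code>tox</code> run tests the whole <code>tests/</code> directory, while <code>tox -- -k test_simple_backup_generation</code> still narrows the run down to one function.</p>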
| 2 | 2016-08-18T10:55:51Z | [
"python",
"testing",
"automated-tests",
"tox"
] |
Comprehension to instantiate a boolean 2D array? | 39,015,925 | <p>I'm new to Python, thus the question,</p>
<p>So I'm trying to instantiate a 2D array/list of all False values, with len(str)+1 rows and len(pattern)+1 cols.</p>
<p>Here's my code,</p>
<pre><code>memo = []
for i in range(0, len(str) + 1):
memo[i] = [[False] for j in range(len(pattern) + 1)]
</code></pre>
<p>Now I've two questions,</p>
<p>Is there a more pythonic way of doing this in one line?
Also, if I just create the list and don't initialize it with anything, what is there in each grid (the Java equivalent of non-initialization, meaning initialized with false)?</p>
| 2 | 2016-08-18T10:32:44Z | 39,016,049 | <p>A one-liner would be <br/> <code>memo = [[False for j in range(len(pattern) + 1)] for i in range(len(str) + 1)]</code>.</p>
<p>As a side-note, keep in mind that using <code>str</code> as a variable name should be avoided as it shadows the built-in <code>str</code> type.</p>
<blockquote>
<p>if I just create the list and dont initialize it with anything, what is there in each grid(java equivalent of non-initialization meaning initalized with false)?</p>
</blockquote>
<p>Nothing, it is simply empty.</p>
<p>Python lists store references to other objects. If you don't insert any reference to the list, the list doesn't contain anything.</p>
| 4 | 2016-08-18T10:38:24Z | [
"python",
"arrays",
"boolean"
] |
Comprehension to instantiate a boolean 2D array? | 39,015,925 | <p>I'm new to Python, thus the question,</p>
<p>So I'm trying to instantiate a 2D array/list of all False values, with len(str)+1 rows and len(pattern)+1 cols.</p>
<p>Here's my code,</p>
<pre><code>memo = []
for i in range(0, len(str) + 1):
memo[i] = [[False] for j in range(len(pattern) + 1)]
</code></pre>
<p>Now I've two questions,</p>
<p>Is there a more pythonic way of doing this in one line?
Also, if I just create the list and don't initialize it with anything, what is there in each grid (the Java equivalent of non-initialization, meaning initialized with false)?</p>
| 2 | 2016-08-18T10:32:44Z | 39,016,086 | <p>Using an <a href="https://docs.python.org/2/library/itertools.html#itertools.repeat" rel="nofollow"><code>itertools.repeat</code></a> approach, fully lazy (note that the second argument of <code>repeat</code> is an integer count, not a range):</p>
<pre><code>memo = itertools.repeat(itertools.repeat(False, len(pattern) + 1), len(str) + 1)
</code></pre>
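<p>One caveat worth noting with this lazy approach: the outer <code>repeat</code> yields the <em>same</em> inner iterator object every time, so the structure can effectively be consumed only once (a quick sketch):</p>

```python
import itertools

# a small 3x2 example of the nested-repeat idea
memo = itertools.repeat(itertools.repeat(False, 2), 3)
rows = [list(row) for row in memo]
print(rows)  # [[False, False], [], []] -- the shared inner iterator is exhausted after the first list()
```

<p>If you need independent, reusable rows, a list comprehension is the safer choice.</p>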
| 1 | 2016-08-18T10:40:37Z | [
"python",
"arrays",
"boolean"
] |
Comprehension to instantiate a boolean 2D array? | 39,015,925 | <p>I'm new to Python, thus the question,</p>
<p>So I'm trying to instantiate a 2D array/list of all False values, with len(str)+1 rows and len(pattern)+1 cols.</p>
<p>Here's my code,</p>
<pre><code>memo = []
for i in range(0, len(str) + 1):
memo[i] = [[False] for j in range(len(pattern) + 1)]
</code></pre>
<p>Now I've two questions,</p>
<p>Is there a more pythonic way of doing this in one line?
Also, if I just create the list and don't initialize it with anything, what is there in each grid (the Java equivalent of non-initialization, meaning initialized with false)?</p>
| 2 | 2016-08-18T10:32:44Z | 39,016,184 | <p>Depending on how big the 2d list is and what you plan to do with it, you might also consider using a <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html" rel="nofollow">numpy ndarray</a> to store the data:</p>
<pre><code>import numpy as np
memo = np.full((len(str) + 1, len(pattern) + 1), False, dtype=bool)
# example
> np.full((3,2), False, dtype=bool)
>
array([[False, False],
[False, False],
[False, False]], dtype=bool)
</code></pre>
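<p>As a side note (not part of the original answer): for an all-<code>False</code> array specifically, <code>np.zeros</code> is an equivalent shorthand, since the boolean zero is <code>False</code>:</p>

```python
import numpy as np

# equivalent to np.full((3, 2), False, dtype=bool): boolean zeros are False
memo = np.zeros((3, 2), dtype=bool)
print(memo.any())  # False, i.e. every cell starts out as False
```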
| 1 | 2016-08-18T10:45:20Z | [
"python",
"arrays",
"boolean"
] |
Comprehension to instantiate a boolean 2D array? | 39,015,925 | <p>I'm new to Python, thus the question,</p>
<p>So I'm trying to instantiate a 2D array/list with all false of the size str +1 rows and pattern +1 cols.</p>
<p>Here's my code,</p>
<pre><code>memo = []
for i in range(0, len(str) + 1):
memo[i] = [[False] for j in range(len(pattern) + 1)]
</code></pre>
<p>Now I've two questions,</p>
<p>Is there a more pythonic way of doing this in 1 line?
Also if I just create the list and don't initialize it with anything, what is there in each grid (Java equivalent of non-initialization, meaning initialized with false)?</p>
| 2 | 2016-08-18T10:32:44Z | 39,016,453 | <p>A short hand way would be to write:</p>
<pre><code>ncols = 3 # len(string1) + 1
nrows = 4 # len(pattern1) + 1
memo = nrows * [ncols*[False]]
>>> [[False, False, False], [False, False, False], [False, False, False], [False, False, False]]
</code></pre>
<p>In this case the second part <code>[ncols*[False]]</code> makes one of the inner lists. Beware, though: the outer <code>nrows *</code> replicates <em>references</em> to that single inner list, so assigning to one cell will appear to change every row. Use a comprehension such as <code>[[False] * ncols for _ in range(nrows)]</code> if the rows need to be independent.</p>
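<p>Note that <code>nrows * [...]</code> replicates references to a single inner list, so mutating one row mutates all of them; a quick demonstration:</p>

```python
nrows, ncols = 3, 2

memo = nrows * [ncols * [False]]   # every row is the SAME inner list object
memo[0][0] = True
print(memo)   # [[True, False], [True, False], [True, False]]

# independent rows require a comprehension:
safe = [[False] * ncols for _ in range(nrows)]
safe[0][0] = True
print(safe)   # [[True, False], [False, False], [False, False]]
```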
| 1 | 2016-08-18T10:57:45Z | [
"python",
"arrays",
"boolean"
] |
Comprehension to instantiate a boolean 2D array? | 39,015,925 | <p>I'm new to Python, thus the question,</p>
<p>So I'm trying to instantiate a 2D array/list with all false of the size str +1 rows and pattern +1 cols.</p>
<p>Here's my code,</p>
<pre><code>memo = []
for i in range(0, len(str) + 1):
memo[i] = [[False] for j in range(len(pattern) + 1)]
</code></pre>
<p>Now I've two questions,</p>
<p>Is there a more pythonic way of doing this in 1 line?
Also if I just create the list and don't initialize it with anything, what is there in each grid (Java equivalent of non-initialization, meaning initialized with false)?</p>
| 2 | 2016-08-18T10:32:44Z | 39,016,464 | <p>This is the shortest list comprehension I can think of:</p>
<pre><code>memo = [[False] * (len(pattern)+1) for i in range(len(str)+1)]
</code></pre>
<p>If you create the list and don't initialize it, it will be empty.</p>
<p>One alternative to work around having to do that would be to represent the 2D array as a <code>defaultdict</code> of <code>defaultdict</code>s. This would essentially make it be a "sparse" array in the sense that it would only have entries where they were assigned values or whose value was referenced, also known as "lazy-initialization". Even though data in it was stored this way, its contents could be indexed in a manner similar to that of a <code>list</code> of <code>list</code>s.<br>
(i.e. using <code>memo[i][j]</code>)</p>
<p>Here's what I mean:</p>
<pre><code>from collections import defaultdict
memo = defaultdict(lambda: defaultdict(bool))
memo[1][1] = True
print(memo[2][4]) # -> False
</code></pre>
<p>This would result in a dictionary containing only the values assigned or referenced so far:</p>
<pre><code>{
1: {
1: True
},
2: {
4: False
}
}
</code></pre>
| 1 | 2016-08-18T10:58:23Z | [
"python",
"arrays",
"boolean"
] |
Sort a column containing string in Pandas | 39,015,931 | <p>I am new to Pandas, and looking to sort a column containing strings and generate a numerical value to uniquely identify the string. My data frame looks something like this:</p>
<pre><code>df = pd.DataFrame({'key': range(8), 'year_week': ['2015_10', '2015_1', '2015_11', '2016_9', '2016_10','2016_3', '2016_9', '2016_10']})
</code></pre>
<p>First I'd like to sort the <code>'year_week'</code> column into ascending order <code>(2015_1, 2015_10, 2015_11, 2016_3, 2016_9, 2016_9, 2016_10, 2016_10)</code> and then generate a numerical value for each unique <code>'year_week'</code> string.</p>
| 1 | 2016-08-18T10:33:04Z | 39,016,168 | <p>You can first convert <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a> column <code>year_week</code>, then sort it by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow"><code>sort_values</code></a> and last use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="nofollow"><code>factorize</code></a>:</p>
<pre><code>df = pd.DataFrame({'key': range(8), 'year_week': ['2015_10', '2015_1', '2015_11', '2016_9', '2016_10','2016_3', '2016_9', '2016_10']})
#http://stackoverflow.com/a/17087427/2901002
df['date'] = pd.to_datetime(df.year_week + '-0', format='%Y_%W-%w')
#sort by column date
df.sort_values('date', inplace=True)
#create numerical values
df['num'] = pd.factorize(df.year_week)[0]
print (df)
key year_week date num
1 1 2015_1 2015-01-11 0
0 0 2015_10 2015-03-15 1
2 2 2015_11 2015-03-22 2
5 5 2016_3 2016-01-24 3
3 3 2016_9 2016-03-06 4
6 6 2016_9 2016-03-06 4
4 4 2016_10 2016-03-13 5
7 7 2016_10 2016-03-13 5
</code></pre>
| 3 | 2016-08-18T10:44:41Z | [
"python",
"sorting",
"pandas",
"dataframe",
"categorical-data"
] |
How to find non-zero median/mean of multiple columns in pandas? | 39,016,017 | <p>I have a long list of columns for which I want to calculate the non-zero median, mean & std in one go. I cannot just delete rows with 0 based on one column, because the value for another column in the same row may not be 0.</p>
<p>Below is the code I currently have which calculates median,mean etc. including zero.</p>
<pre><code> agg_list_oper={'ABC1':[max,np.std,np.mean,np.median],
'ABC2':[max,np.std,np.mean,np.median],
'ABC3':[max,np.std,np.mean,np.median],
'ABC4':[max,np.std,np.mean,np.median],
.....
.....
.....
}
df=df_tmp.groupby(['id']).agg(agg_list_oper).reset_index()
</code></pre>
<p>I know I can write long code with loops to process one column at a time.
Is there a way to do this in pandas groupby.agg() or some other functions elegantly?</p>
| 1 | 2016-08-18T10:36:46Z | 39,016,522 | <p>You can temporarily replace 0's with NaNs. Then, pandas will ignore the NaNs while calculating medians.</p>
<pre><code>df_tmp.replace(0, np.nan).groupby(['id']).agg(agg_list_oper).reset_index()
</code></pre>
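<p>A quick sketch of why this works (the column and id values here are made up for illustration):</p>

```python
import pandas as pd
import numpy as np

df_tmp = pd.DataFrame({'id': ['a', 'a', 'b'], 'ABC1': [0, 4, 6]})

# zeros become NaN, and mean/median/std skip NaN by default
out = df_tmp.replace(0, np.nan).groupby('id')['ABC1'].mean()
print(out['a'])  # 4.0 -- the zero was ignored rather than averaged in
```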
| 1 | 2016-08-18T11:01:06Z | [
"python",
"pandas",
"aggregate-functions",
"median",
"summarization"
] |
wxPython program - render HTML | 39,016,022 | <p>Is it possible to render HTML in a Python window, using wxPython as the GUI builder.</p>
<p>For example, I give a program the URL <a href="http://google.com" rel="nofollow">http://google.com</a> and then it loads up that domain name and renders it in a window below, but still in the confines of the Python program.</p>
<pre><code>|---------------------------------------|
| URL: google.com |
|---------------------------------------|
| |
| VIEW THAT RENDERS WEBPAGE |
| OF GOOGLE.COM |
| |
|---------------------------------------|
</code></pre>
<p>I'm thinking that I need something like this: <a href="http://stackoverflow.com/questions/29404856/how-can-i-render-javascript-html-to-html-in-python">How can I render JavaScript HTML to HTML in python?</a></p>
<p>But rendering it in the wxPython window, and not loading up Firefox.</p>
| 0 | 2016-08-18T10:37:00Z | 39,021,695 | <p>Maybe it is simpler to check the <a href="https://www.wxpython.org/download.php" rel="nofollow">wxPython demo</a>. It contains two applied examples (search for html), one simple renderer (no CSS) and an advanced one which uses the rendering engine of the underlying platform to display HTML.</p>
| 0 | 2016-08-18T15:08:12Z | [
"python",
"wxpython"
] |
python callbacks vs exceptions in control flow | 39,016,028 | <p>What is more pythonic for control flow, callbacks or exceptions? For example, if I wrote user login logic, I could write it in these ways:</p>
<p>Exceptions</p>
<pre><code> try:
valid_data = validate(form)
try:
do_login(valid_data)
return SuccessLoginTemplate()
except LoginError:
return RenderTemplate(form)
except FormError:
return RenderTemplate(form)
</code></pre>
<p>Callbacks:</p>
<pre><code>validate(form, on_form_ok, on_form_error)
def on_form_ok(valid_data):
do_login(valid_data, on_login_success, on_login_error)
def on_form_error(errors):
return RenderTemplate(form)
def on_login_success(user):
return SuccessLoginTemplate()
def on_login_error(errors):
return RenderTemplate(form)
</code></pre>
<p>It seems that most Python code is written as in case 1, via exceptions, but IMO
the callback style is more expressive from a DSL point of view. In case 2 there is no intermediate code ("glue" to prepare vars from one call to another), I mean:</p>
<pre><code>try:
valid_data = validate(form)
try:
#mess with some intermediate vars
do_login(valid_data)
#mess with some intermediate vars
return SuccessLoginTemplate()
except LoginError:
#mess with some intermediate vars
return RenderTemplate(form)
except FormError:
#mess with some intermediate vars
return RenderTemplate(form)
</code></pre>
<p>IMO this mess of intermediate vars reduces the readability of the code, because with callbacks the intermediate code goes into the callback, and it is easier to understand when it is wrapped in a function: it gets some context (the function name, which carries some DSL semantics). With exceptions that code is unbound from any context, so it is harder to read.</p>
<p>Also I don't want to check the results of functions - that also makes a mess of intermediate vars. I am searching for the cleanest way to chain computations together in a more functional style, like monads or CPS. So IMO callbacks are the more expressive way to do it, but how pythonic is it?</p>
 | 1 | 2016-08-18T10:37:15Z | 39,016,436 | <p>IMO, with exceptions you have 3 obvious contexts: one <code>try</code> and two distinct <code>except</code> blocks, with every variable manipulated and passed explicitly, whereas with callbacks you cannot tell from the call expression alone (unless you write docs or the like) what arguments each callback will receive. An <code>except</code> block with a named exception gives the reader enough context by itself.
You would also have to implement <code>validate</code> along these lines (with callbacks):</p>
<pre><code>def validate(data, on_success, on_error1, on_error2):
    try:
        do_validation_stuff(data)  # the actual validation logic
    except Exception1:
        on_error1()
    except Exception2:
        on_error2()
    else:
        on_success()
</code></pre>
<p>Thus the <code>validate</code> function not only does the validation but also handles error dispatch; in your example it would end up driving the user login as well.</p>
| 0 | 2016-08-18T10:56:54Z | [
"python",
"callback",
"control-flow"
] |
install a package with pip (or pip3) in python3 | 39,016,067 | <p>ubuntu 14.04</p>
<p>I'm new to python and I'm having trouble installing packages. I've looked at similar questions to mine, but it's hard to tell if it's relevant and up to date.</p>
<p>I'm not sure what I need to do to get this to work.</p>
<pre><code>pip --version
The program 'pip' is currently not installed. You can install it by typing:
sudo apt-get install python-pip
pip3 --version
pip 1.5.4 from /usr/lib/python3/dist-packages (python 3.4)
</code></pre>
<p>and I've got several versions of python, the latest;</p>
<pre><code>python3
Python 3.4.3 (default, Oct 14 2015, 20:28:29)
[GCC 4.8.4] on linux
</code></pre>
<p>When I try install a package</p>
<pre><code>pip3 install copy
Downloading/unpacking copy
Could not find any downloads that satisfy the requirement copy
Cleaning up...
No distributions at all found for copy
Storing debug log for failure in /home/ben/.pip/pip.log
</code></pre>
<p>I saw in similar stackoverflow questions, that people went to the log and posted that as well.</p>
<pre><code>------------------------------------------------------------
/usr/bin/pip3 run on Thu Aug 18 20:25:27 2016
Downloading/unpacking copy
Getting page https://pypi.python.org/simple/copy/
Could not fetch URL https://pypi.python.org/simple/copy/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/copy/ when looking for download links for copy
Getting page https://pypi.python.org/simple/
URLs to search for versions for copy:
* https://pypi.python.org/simple/copy/
Getting page https://pypi.python.org/simple/copy/
Could not fetch URL https://pypi.python.org/simple/copy/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/copy/ when looking for download links for copy
Could not find any downloads that satisfy the requirement copy
Cleaning up...
Removing temporary dir /tmp/pip_build_ben...
No distributions at all found for copy
Exception information:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for copy
</code></pre>
<p>Other answers say something like
"You need to fetch pypi over HTTPS, not HTTP." but I don't really understand that or what I need to do to fix that.</p>
<p>So my question is; what do I need to do to be able to install the python3 version of a package? Do I need to install pip, or is it better to use pip3? </p>
 | 1 | 2016-08-18T10:39:29Z | 39,016,181 | <p>If you have several versions of Python, make sure to run the pip that belongs to the version you want: navigate to that Python's directory and run pip from its 'Scripts' folder (on Windows), or invoke the matching <code>pip3</code>/<code>pipX.Y</code> binary on Linux. Hope this works.</p>
| 0 | 2016-08-18T10:45:16Z | [
"python",
"ubuntu",
"pip"
] |
install a package with pip (or pip3) in python3 | 39,016,067 | <p>ubuntu 14.04</p>
<p>I'm new to python and I'm having trouble installing packages. I've looked at similar questions to mine, but it's hard to tell if it's relevant and up to date.</p>
<p>I'm not sure what I need to do to get this to work.</p>
<pre><code>pip --version
The program 'pip' is currently not installed. You can install it by typing:
sudo apt-get install python-pip
pip3 --version
pip 1.5.4 from /usr/lib/python3/dist-packages (python 3.4)
</code></pre>
<p>and I've got several versions of python, the latest;</p>
<pre><code>python3
Python 3.4.3 (default, Oct 14 2015, 20:28:29)
[GCC 4.8.4] on linux
</code></pre>
<p>When I try install a package</p>
<pre><code>pip3 install copy
Downloading/unpacking copy
Could not find any downloads that satisfy the requirement copy
Cleaning up...
No distributions at all found for copy
Storing debug log for failure in /home/ben/.pip/pip.log
</code></pre>
<p>I saw in similar stackoverflow questions, that people went to the log and posted that as well.</p>
<pre><code>------------------------------------------------------------
/usr/bin/pip3 run on Thu Aug 18 20:25:27 2016
Downloading/unpacking copy
Getting page https://pypi.python.org/simple/copy/
Could not fetch URL https://pypi.python.org/simple/copy/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/copy/ when looking for download links for copy
Getting page https://pypi.python.org/simple/
URLs to search for versions for copy:
* https://pypi.python.org/simple/copy/
Getting page https://pypi.python.org/simple/copy/
Could not fetch URL https://pypi.python.org/simple/copy/: 404 Client Error: Not Found
Will skip URL https://pypi.python.org/simple/copy/ when looking for download links for copy
Could not find any downloads that satisfy the requirement copy
Cleaning up...
Removing temporary dir /tmp/pip_build_ben...
No distributions at all found for copy
Exception information:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for copy
</code></pre>
<p>Other answers say something like
"You need to fetch pypi over HTTPS, not HTTP." but I don't really understand that or what I need to do to fix that.</p>
<p>So my question is; what do I need to do to be able to install the python3 version of a package? Do I need to install pip, or is it better to use pip3? </p>
 | 1 | 2016-08-18T10:39:29Z | 39,016,437 | <p>This package is not available on PyPI, and it never will be: <code>copy</code> is a module in the Python standard library, so there is nothing to install and you can just <code>import copy</code>.
Your pip version is quite old - pip 1.5.4. <a href="https://pip.pypa.io/en/stable/" rel="nofollow">current version is 8.1.2</a>.</p>
<p>You can also use <code>pip install -v</code> to get more information while installing.</p>
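<p>In this particular case there is nothing to download at all: <code>copy</code> is a standard-library module that ships with Python, so it can be imported directly. For example:</p>

```python
import copy  # standard library -- no pip install needed

a = [[1, 2], [3, 4]]
b = copy.deepcopy(a)
b[0][0] = 99
print(a[0][0])  # 1 -- the original is untouched because deepcopy copied the nested lists
```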
| 0 | 2016-08-18T10:56:55Z | [
"python",
"ubuntu",
"pip"
] |
Service and version displayed via nmap scan for simple python socket server | 39,016,100 | <p>I've got a simple python socket server. Here's the code:</p>
<pre><code>import socket
host = "0.0.0.0" # address to bind on.
port = 8081
def listen_serv():
try:
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host,port))
s.listen(4)
...
messages back and forth between the server and client
...
if __name__ == "__main__":
while True:
listen_serv()
</code></pre>
<p>When I run the python server locally and then scan with <code>nmap localhost</code> i see the open port 8081 with the service blackice-icecap running on it. A quick google search revealed that this is a firewall service that uses the port 8081 for a service called ice-cap remote. If I change the port to 12000 for example, I get another service called cce4x.</p>
<p>A further scan with <code>nmap localhost -sV</code> returns the contents of the python script</p>
<pre><code>1 service unrecognized despite returning data. If you know the service/version,
please submit the following fingerprint at https://nmap.org/cgi-bin/submit.cgi?new-service :
SF-Port8081-TCP:V=7.25BETA1%I=7%D=8/18%Time=57B58EE7%P=x86_64-pc-linux-gn
SF:u%r(NULL,1A4,"\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\
SF:*\*\*\*\*\*\*\*\*\*\*\*\*\n\*\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x
SF:20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
SF:x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\*\n\*\x20\x20\x20\x20\x20\x
SF:20Welcome\x20to\x20ScapeX\x20Mail\x20Server\x20\x20\x20\x20\*\n\*\x20\x
SF:20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
SF:x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
SF:\x20\x20\*\n\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\
SF:*\*\*\*\*\*\*\*\*\*\*\*\nHere\x20is\x20a\x20quiz\x20to\x20test\x20your\
SF:x20knowledge\x20of\x20hacking\.\.\.\n\n\nAnswer\x20correctly\x20and\x20
SF:we\x20will\x20reward\x20you\x20with\x20a\x20shell\x20:-\)\x20\nQuestion
etc...
etc...
</code></pre>
<p>Is there a way I can customize the service and version descriptions that are displayed by nmap for my simple python server?</p>
| 0 | 2016-08-18T10:41:11Z | 39,037,482 | <p>Found a solution by sending the following line as the first message from the server</p>
<pre><code>c.send("HTTP/1.1 200 OK\r\nServer: Netscape-Enterprise/6.1\r\nDate: Fri, 19 Aug 2016 10:28:43 GMT\r\nContent-Type: text/html; charset=UTF-8\r\nConnection: close\r\nVary: Accept-Encoding\r\nContent-Length: 32092\r\n\r\n")
</code></pre>
| 1 | 2016-08-19T11:05:45Z | [
"python",
"sockets",
"banner",
"nmap"
] |
How to drop a row from Pandas dataframe? | 39,016,158 | <p>This question is similar to <a href="http://stackoverflow.com/questions/14661701/how-to-drop-a-list-of-rows-from-pandas-dataframe">this question</a> posted before. However, I want to do something different, here is my <code>df</code>:</p>
<pre><code> pos event
A 4 d5
A 2 d3
B 3 d3
B 6 u3
</code></pre>
<p>I want to get:</p>
<pre><code> pos event
A 4 d5
A 2 d3
B 6 u3
</code></pre>
<p>I wrote this code but it is not working! any suggestion? </p>
<pre><code>df.drop(df.ix[B]['event']=='d3', inplace=True)
</code></pre>
<p>My actual datafram is big and I want to drop the row that I want with index and value in event column.</p>
| 2 | 2016-08-18T10:44:08Z | 39,016,365 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> with <code>|</code> (<code>or</code>):</p>
<pre><code>print (df[(df['event']!='d3') | (df.index != 'B')])
pos event
A 4 d5
A 2 d3
B 6 u3
</code></pre>
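<p>Equivalently (by De Morgan's law), you can build the condition for the rows you want to <em>drop</em> and negate it, which reads closer to the original <code>drop</code> intent; a sketch using the question's frame:</p>

```python
import pandas as pd

df = pd.DataFrame({'pos': [4, 2, 3, 6],
                   'event': ['d5', 'd3', 'd3', 'u3']},
                  index=['A', 'A', 'B', 'B'])

# rows to drop: event 'd3' AND index label 'B'
mask = (df['event'] == 'd3') & (df.index == 'B')
result = df[~mask]
print(result)
```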
| 3 | 2016-08-18T10:53:17Z | [
"python",
"pandas",
"indexing",
"dataframe"
] |
Replace multiline existing output in python | 39,016,202 | <p>In python 3 , there have been answers about how to replace an existing output with another.
These answers suggest the use of end='\r', as in print('hello', end='\r').
This is working of course, but it works only for one line.</p>
<p>In the program that I post below, I first output 5 lines which is the representation of a table matrix. The user is asked to type one number (1-3) and then the matrix is printed again with an 'X' in the position that the user indicated. </p>
<p>But as you can see, the matrix is printed below the initial one. How can I replace the existing output ? </p>
<p>If I use the end = '\r' it will just move the cursor to the beginning of the line. But this will not work for me because I want to print 5 lines , and then move the cursor to the beginning of the first line, and not to the beginning of the fifth line (as the end='\r' does).</p>
<p>How could I achieve this in python ? </p>
<pre><code>from __future__ import print_function
list1=[' '*11,'|',' '*7,'|',' '*10]
def board():
print ('\t \t | \t \t |')
print (list1)
print ('\t \t | \t \t |')
#print '\n'
print
def playerone():
player1 = raw_input("PLAYER1: enter your position 1-3: ")
if (player1 == '1'):
list1[0]=' '*5 + 'X'+' '*5
elif (player1=='2'):
list1[2]=' '*3 + 'X'+' '*3
elif (player1=='3'):
list1[4]=' '*3 + 'X'+' '*6
else:
playerone()
print ('our board is : ')
board()
playerone()
print ('our board is :')
board()
</code></pre>
| 0 | 2016-08-18T10:45:58Z | 39,016,290 | <p>Unless you want to use curses (which is another big step), you cannot go back by several lines.</p>
<p>BUT what you can do is clear the screen and redisplay everything.</p>
<pre><code>print(chr(27) + "[2J")
</code></pre>
<p>clears the screen</p>
<p>(as stated in <a href="http://stackoverflow.com/questions/2084508/clear-terminal-in-python">clear terminal in python</a>)</p>
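<p>On ANSI-capable terminals you can also move the cursor up several lines without a full clear, using the CUU escape sequence (a sketch; whether it renders correctly depends on the terminal, and it will not work in plain Jupyter output):</p>

```python
def cursor_up(n):
    # ANSI CUU sequence: ESC [ n A moves the cursor up n lines
    return "\033[%dA" % n

# after printing a 5-line board, emit this and then reprint the board in place:
# sys.stdout.write(cursor_up(5))
print(repr(cursor_up(5)))  # '\x1b[5A'
```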
| 0 | 2016-08-18T10:49:51Z | [
"python",
"python-2.7",
"python-3.x",
"ipython",
"jupyter-notebook"
] |
Replace multiline existing output in python | 39,016,202 | <p>In python 3 , there have been answers about how to replace an existing output with another.
These answers suggest the use of end='\r', as in print('hello', end='\r').
This is working of course, but it works only for one line.</p>
<p>In the program that I post below, I first output 5 lines which is the representation of a table matrix. The user is asked to type one number (1-3) and then the matrix is printed again with an 'X' in the position that the user indicated. </p>
<p>But as you can see, the matrix is printed below the initial one. How can I replace the existing output ? </p>
<p>If I use the end = '\r' it will just move the cursor to the beginning of the line. But this will not work for me because I want to print 5 lines , and then move the cursor to the beginning of the first line, and not to the beginning of the fifth line (as the end='\r' does).</p>
<p>How could I achieve this in python ? </p>
<pre><code>from __future__ import print_function
list1=[' '*11,'|',' '*7,'|',' '*10]
def board():
print ('\t \t | \t \t |')
print (list1)
print ('\t \t | \t \t |')
#print '\n'
print
def playerone():
player1 = raw_input("PLAYER1: enter your position 1-3: ")
if (player1 == '1'):
list1[0]=' '*5 + 'X'+' '*5
elif (player1=='2'):
list1[2]=' '*3 + 'X'+' '*3
elif (player1=='3'):
list1[4]=' '*3 + 'X'+' '*6
else:
playerone()
print ('our board is : ')
board()
playerone()
print ('our board is :')
board()
</code></pre>
| 0 | 2016-08-18T10:45:58Z | 39,016,449 | <p>You can clear the screen before printing the board.</p>
<pre><code>def clearscreen(numlines=100):
"""Clear the console.
numlines is an optional argument used only as a fall-back.
"""
import os
if os.name == "posix":
# Unix/Linux/MacOS/BSD/etc
os.system('clear')
elif os.name in ("nt", "dos", "ce"):
# DOS/Windows
os.system('CLS')
else:
# Fallback for other operating systems.
print('\n' * numlines)
</code></pre>
<p>And inside the board() you can call the clearscreen() to clear the screen before printing the board.</p>
| 0 | 2016-08-18T10:57:39Z | [
"python",
"python-2.7",
"python-3.x",
"ipython",
"jupyter-notebook"
] |
Replace multiline existing output in python | 39,016,202 | <p>In python 3 , there have been answers about how to replace an existing output with another.
These answers suggest the use of end='\r', as in print('hello', end='\r').
This is working of course, but it works only for one line.</p>
<p>In the program that I post below, I first output 5 lines which is the representation of a table matrix. The user is asked to type one number (1-3) and then the matrix is printed again with an 'X' in the position that the user indicated. </p>
<p>But as you can see, the matrix is printed below the initial one. How can I replace the existing output ? </p>
<p>If I use the end = '\r' it will just move the cursor to the beginning of the line. But this will not work for me because I want to print 5 lines , and then move the cursor to the beginning of the first line, and not to the beginning of the fifth line (as the end='\r' does).</p>
<p>How could I achieve this in python ? </p>
<pre><code>from __future__ import print_function
list1=[' '*11,'|',' '*7,'|',' '*10]
def board():
print ('\t \t | \t \t |')
print (list1)
print ('\t \t | \t \t |')
#print '\n'
print
def playerone():
player1 = raw_input("PLAYER1: enter your position 1-3: ")
if (player1 == '1'):
list1[0]=' '*5 + 'X'+' '*5
elif (player1=='2'):
list1[2]=' '*3 + 'X'+' '*3
elif (player1=='3'):
list1[4]=' '*3 + 'X'+' '*6
else:
playerone()
print ('our board is : ')
board()
playerone()
print ('our board is :')
board()
</code></pre>
| 0 | 2016-08-18T10:45:58Z | 39,016,920 | <p>What about trying to use a portable solution using terminal's clear command, for example:</p>
<pre><code>from __future__ import print_function
import os
class Game:
def __init__(self):
self.running = True
self.list1 = [' ' * 11, '|', ' ' * 7, '|', ' ' * 10]
def clear_sceen(self):
os.system('cls' if os.name == 'nt' else 'clear')
def draw_board(self):
print('our board is : ')
print('\t \t | \t \t |')
print(self.list1)
print('\t \t | \t \t |')
def check_inputs(self):
player1 = raw_input("PLAYER1: enter your position 1-3: ")
if (player1 not in ['1', '2', '3']):
self.running = False
else:
print(chr(27) + "[2J")
if (player1 == '1'):
self.list1[0] = ' ' * 5 + 'X' + ' ' * 5
elif (player1 == '2'):
self.list1[2] = ' ' * 3 + 'X' + ' ' * 3
elif (player1 == '3'):
self.list1[4] = ' ' * 3 + 'X' + ' ' * 6
def run(self):
self.clear_sceen()
while self.running:
self.draw_board()
self.check_inputs()
print(
'\nGame ended! you should have pressed numbers between 1-3 :P')
if __name__ == "__main__":
g = Game()
g.run()
</code></pre>
| 0 | 2016-08-18T11:21:42Z | [
"python",
"python-2.7",
"python-3.x",
"ipython",
"jupyter-notebook"
] |
Python 2 and 3 in Powershell | 39,016,249 | <p>I have both versions on my PC because I'm working through different tutorials, (I'm still a noob). I've seen a couple of similar questions on here, but nothing specific to Powershell.</p>
<p>When I run <strong>python</strong> in Powershell it brings up 2.7, but how do I specify that I want Python 3? Are there other issues that I need to be aware of? For example when running scripts from notepad++?</p>
| 1 | 2016-08-18T10:48:08Z | 39,016,480 | <p>You can install Python3 <a href="https://chocolatey.org/packages/python3" rel="nofollow">via Chocolatey</a>, AFAIK it will bring <code>python3</code> command into your realm. Do not forget to uninstall previously installed Python3 before it.</p>
<p>Also, you may try to create alias for Python3, see Microsoft <a href="https://technet.microsoft.com/en-us/library/ee176913.aspx" rel="nofollow">doc</a>.</p>
| 0 | 2016-08-18T10:59:04Z | [
"python",
"powershell"
] |
Python 2 and 3 in Powershell | 39,016,249 | <p>I have both versions on my PC because I'm working through different tutorials, (I'm still a noob). I've seen a couple of similar questions on here, but nothing specific to Powershell.</p>
<p>When I run <strong>python</strong> in Powershell it brings up 2.7, but how do I specify that I want Python 3? Are there other issues that I need to be aware of? For example when running scripts from notepad++?</p>
| 1 | 2016-08-18T10:48:08Z | 39,017,335 | <p>The command "py" (if v2 is the default) or "py -2" should launch Python 2.7, "py -3" should launch Python 3. See <a href="http://stackoverflow.com/a/13328713/6683985">here</a> for the details.</p>
<p>I've found this <a href="http://stackoverflow.com/a/20371084/6683985">post</a> as well. It might be helpful.</p>
<p><a href="http://stackoverflow.com/a/1825807/6683985">This one</a> will help you check the current version you are using, if you desire to change that see <a href="http://stackoverflow.com/a/13328713/6683985">here</a>.</p>
<p><a href="http://stackoverflow.com/a/31237640/6683985">Should you want to pop out of the shell</a>.</p>
<p>Cheers and have fun with Python ;)</p>
| 2 | 2016-08-18T11:42:36Z | [
"python",
"powershell"
] |
Pandas dataframe If else with logical AND involving two columns | 39,016,405 | <p>How to add logical <code>AND</code> in a control statement involving two columns of a pandas dataframe i.e.</p>
<p>This works:</p>
<pre><code>def getContinent(row):
if row['Location'] in ['US','Canada']:
val = 'North America'
elif row['Location'] in['UK', 'Germany']:
val = 'Europe'
else:
val = None
return val
df.apply(getContinent, axis=1)
</code></pre>
<p>Now I want to include an additional condition with another field <code>row['Sales']</code>:</p>
<pre><code>def getContinent(row):
if row['Location'] in ['US','Canada'] & row['Sales'] >= 100:
val = 'North America'
elif row['Location'] in['UK', 'Germany'] & row['Sales'] < 100:
val = 'Europe'
else:
val = None
return val
df.apply(getContinent, axis=1)
</code></pre>
<blockquote>
<p>ValueError: ('Arrays were different lengths: 6132 vs 2', u'occurred at index 0')</p>
</blockquote>
| 1 | 2016-08-18T10:55:08Z | 39,016,467 | <p>You need use <code>and</code> instead <code>&</code>:</p>
<pre><code>df = pd.DataFrame({'Sales': {0: 400, 1: 20, 2: 300},
'Location': {0: 'US', 1: 'UK', 2: 'Slovakia'}})
print (df)
Location Sales
0 US 400
1 UK 20
2 Slovakia 300
def getContinent(row):
if row['Location'] in ['US','Canada'] and row['Sales'] >= 100:
val = 'North America'
elif row['Location'] in['UK', 'Germany'] and row['Sales'] < 100:
val = 'Europe'
else:
val = None
return val
print (df.apply(getContinent, axis=1))
0 North America
1 Europe
2 None
dtype: object
</code></pre>
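<p>As an aside, a vectorized alternative avoids <code>apply</code> entirely. Here <code>&</code> <em>is</em> the right operator, because both operands are boolean Series, and the parentheses matter since <code>&</code> binds tighter than comparisons (a sketch with a made-up default label):</p>

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'Location': ['US', 'UK', 'Slovakia'],
                   'Sales': [400, 20, 300]})

conditions = [
    df['Location'].isin(['US', 'Canada']) & (df['Sales'] >= 100),
    df['Location'].isin(['UK', 'Germany']) & (df['Sales'] < 100),
]
choices = ['North America', 'Europe']
df['continent'] = np.select(conditions, choices, default='Unknown')
print(df['continent'].tolist())  # ['North America', 'Europe', 'Unknown']
```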
| 2 | 2016-08-18T10:58:33Z | [
"python",
"pandas",
"if-statement",
"dataframe",
"apply"
] |
AnalysisException: u"cannot resolve 'name' given input columns: [ list] in sqlContext in spark | 39,016,440 | <p>I tried a simple example like:</p>
<pre><code>data = sqlContext.read.format("csv").option("header", "true").option("inferSchema", "true").load("/databricks-datasets/samples/population-vs-price/data_geo.csv")
data.cache() # Cache data for faster reuse
data = data.dropna() # drop rows with missing values
data = data.select("2014 Population estimate", "2015 median sales price").map(lambda r: LabeledPoint(r[1], [r[0]])).toDF()
</code></pre>
<p>It works well, But when i try something very similar like:</p>
<pre><code>data = sqlContext.read.format("csv").option("header", "true").option("inferSchema", "true").load('/mnt/%s/OnlineNewsTrainingAndValidation.csv' % MOUNT_NAME)
data.cache() # Cache data for faster reuse
data = data.dropna() # drop rows with missing values
data = data.select("timedelta", "shares").map(lambda r: LabeledPoint(r[1], [r[0]])).toDF()
display(data)
</code></pre>
<p>It raises an error:
AnalysisException: u"cannot resolve 'timedelta' given input columns: [ data_channel_is_tech,...</p>
<p>Of course, I imported LabeledPoint and LinearRegression.</p>
<p>What could be wrong?</p>
<p>Even the simpler case </p>
<pre><code>df_cleaned = df_cleaned.select("shares")
</code></pre>
<p>raises the same AnalysisException.</p>
<p>*please note: df_cleaned.printSchema() works well.</p>
| 0 | 2016-08-18T10:57:02Z | 39,024,551 | <p>I found the issue: some of the column names contain white space before the name itself, so</p>
<pre><code>data = data.select(" timedelta", " shares").map(lambda r: LabeledPoint(r[1], [r[0]])).toDF()
</code></pre>
<p>worked.
I could catch the white spaces using </p>
<pre><code>assert " " not in ''.join(df.columns)
</code></pre>
<p>Now I am thinking of a way to remove the white spaces. Any idea is much appreciated!</p>
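<p>A possible way to remove them (a sketch, not a tested Spark snippet): strip each column name and rebuild the DataFrame with the cleaned names via <code>toDF</code>.</p>

```python
# Strip leading/trailing whitespace from every column name.
cols = [' timedelta', ' shares', 'data_channel_is_tech']  # e.g. df.columns
clean = [c.strip() for c in cols]
print(clean)  # ['timedelta', 'shares', 'data_channel_is_tech']

# In PySpark, the rename would then be (untested sketch):
# data = data.toDF(*[c.strip() for c in data.columns])
```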
| 0 | 2016-08-18T17:48:44Z | [
"python",
"apache-spark",
"linear-regression"
] |
How to contain an object and call its method in a class? | 39,016,463 | <p>I have two classes (<code>ClassA</code> and <code>ClassB</code>) and <code>ClassA</code> contains one object, <code>b</code>, that is an instance of <code>ClassB</code>. The problem is that I can't call <code>b</code>'s method in <code>ClassA</code>.</p>
<pre><code>class ClassB(object):
def __init__(self):
print('Class B init ...')
def show(self):
print('Showing class b')
class ClassA(object):
#__classb = ClassB()
def __init__(self, classb):
print('Class A init ...')
__classb = classb
def show(self):
__classb.show() # <=== I just want to do this!
b = ClassB()
a = ClassA(b)
a.show()
</code></pre>
<p>I expect the result should be:</p>
<pre class="lang-none prettyprint-override"><code>Class B init ...
Class A init ...
Showing class b
</code></pre>
<p>But I meet the problem as <a href="http://i.stack.imgur.com/llSTk.png" rel="nofollow">this image</a> shows:</p>
<p>How can I fix it?</p>
| -2 | 2016-08-18T10:58:21Z | 39,016,508 | <p>By doing <code>__classb = classb</code> you are only defining a local <code>__classb</code> variable in the <code>__init__</code> method. </p>
<p>If you want <code>__classb</code> to be an instance attribute you will need to use <code>self</code>:</p>
<pre><code>self.__classb = classb
</code></pre>
<p>And then:</p>
<pre><code>def show(self):
self.__classb.show()
</code></pre>
| 3 | 2016-08-18T11:00:28Z | [
"python"
] |
How to contain an object and call its method in a class? | 39,016,463 | <p>I have two classes (<code>ClassA</code> and <code>ClassB</code>) and <code>ClassA</code> contains one object, <code>b</code>, that is an instance of <code>ClassB</code>. The problem is that I can't call <code>b</code>'s method in <code>ClassA</code>.</p>
<pre><code>class ClassB(object):
def __init__(self):
print('Class B init ...')
def show(self):
print('Showing class b')
class ClassA(object):
#__classb = ClassB()
def __init__(self, classb):
print('Class A init ...')
__classb = classb
def show(self):
__classb.show() # <=== I just want to do this!
b = ClassB()
a = ClassA(b)
a.show()
</code></pre>
<p>I expect the result should be:</p>
<pre class="lang-none prettyprint-override"><code>Class B init ...
Class A init ...
Showing class b
</code></pre>
<p>But I meet the problem as <a href="http://i.stack.imgur.com/llSTk.png" rel="nofollow">this image</a> shows:</p>
<p>How can I fix it?</p>
| -2 | 2016-08-18T10:58:21Z | 39,016,547 | <p>You should create an attribute on the ClassA instance that holds the ClassB instance, i.e. <code>self.__classb</code>, as in the following code:</p>
<pre><code> class ClassB(object):
def __init__(self):
print('Class B init ...')
def show(self):
print('Showing class b')
class ClassA(object):
def __init__(self, classb):
print('Class A init ...')
self.__classb = classb
def show(self):
self.__classb.show() # <=== I just want to do this!
b = ClassB()
a = ClassA(b)
a.show()
</code></pre>
| 1 | 2016-08-18T11:02:32Z | [
"python"
] |
Asserting call with lambda | 39,016,487 | <p>I have this piece of code:</p>
<pre><code>from shutil import rmtree
def ook(path):
rmtree(path, onerror=lambda x, y, z: self._logger.warn(z[1]))
</code></pre>
<p>In my unit tests, I want to mock it so check that right <code>path</code> is passed:</p>
<pre><code>from mock import patch, ANY
@patch("rmtree")
def test_rmtree(self, m_rmtree):
ook('/tmp/fubar')
m_rmtree.assert_called_once_with('/tmp/fubar', onerror=ANY)
</code></pre>
<p>What can I replace <code>ANY</code> with to check that there is a lambda there?</p>
| 0 | 2016-08-18T10:59:28Z | 39,016,617 | <p>I would do this with <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.call_args" rel="nofollow"><code>call_args</code></a> and <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.call_count" rel="nofollow"><code>call_count</code></a> rather than directly in <code>assert_called_once_with</code>, since I don't think <code>unittest.mock</code> has anything like e.g. <a href="http://jasmine.github.io/2.0/introduction.html#section-Matching_Anything_with_jasmine.any" rel="nofollow"><code>jasmine.any</code></a>:</p>
<pre><code>from collections import Callable  # on Python 3.3+: from collections.abc import Callable
...
@patch("rmtree")
def test_rmtree(self, m_rmtree):
ook('/tmp/fubar')
assert m_rmtree.call_count == 1
args, kwargs = m_rmtree.call_args
assert args[0] == '/tmp/fubar'
assert isinstance(kwargs.get('onerror'), Callable)
</code></pre>
<p>Note that it's not relevant that the argument is a <code>lambda</code> specifically, just that it is callable. </p>
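<p>If the test really did need to assert that the argument is a lambda specifically (usually unnecessary, as noted above), one option is checking <code>__name__</code>:</p>

```python
fn = lambda x, y, z: None

# Lambdas are ordinary function objects whose __name__ is always '<lambda>'.
assert callable(fn)
assert fn.__name__ == '<lambda>'
```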
| 1 | 2016-08-18T11:07:14Z | [
"python",
"mocking",
"python-mock"
] |
Pygame - when i push a button and then another | 39,016,509 | <p>First of all, here is my code:</p>
<pre><code>while not crashed:
curr_event = pygame.event.get()
if (len(curr_event) == 2):
print curr_event[0],'\n',curr_event[1],'\n\n\n'
for event in curr_event:
if event.type == pygame.KEYUP and not len(curr_event) == 1:
continue
if event.type == pygame.QUIT:
crashed = True
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
x_change = -5
elif event.key == pygame.K_RIGHT:
x_change = 5
if event.type == pygame.KEYUP:
x_change = 0
x += x_change
Display.fill((255, 255, 255))
car(x,y)
pygame.display.update()
clock.tick(100)
</code></pre>
<p>This is an attempt to move the player smoothly from right to left and back.
Everything works fine when, for example, I press the left key, wait 2 seconds, and then press the right key.
But when I press the left key, release it, and immediately press the right key, the "key up" event overrides my right press and the player stops on the screen.</p>
<p>I thought this was because a list of 2 events was created, where the first holds the "right press" and the second holds the "key up", so I tried to do:</p>
<pre><code>if event.type == pygame.KEYUP and not len(curr_event) == 1:
continue
</code></pre>
<p>As you can see in my code.
Sometimes that is indeed what happens... but sometimes it doesn't create a list with two entries, and the "right key" press still gets overridden.</p>
<p>How can I fix it and what is the issue?</p>
| 0 | 2016-08-18T11:00:33Z | 39,018,581 | <p>Moving the mouse, pressing buttons and other actions counts as events and are put in the queue, so the code <code>if event.type == pygame.KEYUP and not len(curr_event) == 1: continue</code> might be unreliable.</p>
<h2>Solution</h2>
<p>You could check the state of the button (whether it's being held down or not) rather than just reacting when it's pressed and released. <code>pygame.key.get_pressed()</code> returns a list of every key's current state: 0 if it is not being pressed and 1 if it is. A key's position in the list is its integer constant.</p>
<pre><code>key = pygame.key.get_pressed()
if key[pygame.K_LEFT]:
x_change = -5
elif key[pygame.K_RIGHT]:
x_change = 5
</code></pre>
<p>This doesn't keep track of what was pressed first or last, so if you're holding down both keys it'll go left. I've tried another solution that works pretty well. It is basically a list that keeps track of the key presses and moves the player based on the last key pressed.</p>
<pre><code>import pygame
pygame.init()
screen = pygame.display.set_mode((500, 200))
clock = pygame.time.Clock()
x, y = 100, 100
car = pygame.Surface((32, 32))
velocity_dx = []
speed = 5
while True:
clock.tick(60)
for event in pygame.event.get():
if event.type == pygame.QUIT:
quit()
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
velocity_dx.insert(0, -speed)
elif event.key == pygame.K_RIGHT:
velocity_dx.insert(0, speed)
elif event.type == pygame.KEYUP:
if event.key == pygame.K_LEFT:
velocity_dx.remove(-speed)
elif event.key == pygame.K_RIGHT:
velocity_dx.remove(speed)
try:
x += velocity_dx[0] # Take the first value from the list
except IndexError: # if list is empty, catch the IndexError.
pass
screen.fill((255, 255, 255))
screen.blit(car, (x, y))
pygame.display.update()
# print(velocity_dx) # Uncomment to see what's happening in action!
</code></pre>
| 0 | 2016-08-18T12:43:50Z | [
"python",
"pygame"
] |
SPSS - Python functions in startup script | 39,016,549 | <p>Following up on <a href="http://stackoverflow.com/questions/35936239/quickly-import-custom-spss-commands-from-python">this question and it's answer</a>, I am trying to create an SPSS custom function in Python, to use in SPSS syntax. </p>
<p>I have a program that works well in a syntax:</p>
<pre><code>begin program.
import spss,spssaux, sys
def CustomFunction ():
#function code here
CustomFunction()
end program.
</code></pre>
<p>But I want the <code>CustomFunction()</code>to be available in "normal" SPSS syntax.</p>
| 1 | 2016-08-18T11:02:41Z | 39,039,984 | <p>Standard syntax does not use Python code nor does it add Python functions to, say, the transformation system. If you have a Python module, its contents can be used directly by just importing that module. And don't worry about importing it multiple times as Python is smart about ignoring redundant imports.</p>
<p>If you want to use Python code as regular syntax, you need to create an extension command. These have standard-style syntax but are implemented in Python (or R or Java). There are many of these either installed with Statistics or installable from the Utilities menu (or the Extensions menu in V24). Information on how to write these is in the Python help doc and related topics in the Help system.</p>
<p>As an example, the SPSSINC TRANS extension command makes Python functions available as transformation code. If CustomFunction, say, creates or transforms a variable casewise and is stored in mymodule.py, it might be invoked like this. </p>
<p>SPSSINC TRANS RESULT=newvar<br>
/FORMULA "mymodule.CustomFunction(x,y,z)". </p>
<p>Extension commands are automatically loaded when Statistics starts, so there is no need for a startup script.</p>
| 1 | 2016-08-19T13:12:45Z | [
"python",
"function",
"spss"
] |
Pandas: convert column of dataframe to datetime | 39,016,627 | <p>I have df </p>
<pre><code>ID month
0 0001ee12f919a1b570658024bb59d118 2014-02
1 0001ee12f919a1b570658024bb59d118 2014-03
2 0001ee12f919a1b570658024bb59d118 2014-04
3 0001ee12f919a1b570658024bb59d118 2014-05
4 0001ee12f919a1b570658024bb59d118 2014-06
5 0001ee12f919a1b570658024bb59d118 2014-07
</code></pre>
<p>and I try to convert the <code>month</code> column to datetime.
I use <code>df1['month'] = pd.to_datetime(df1.month)</code>
but it returns <code>ValueError: Unknown string format</code>.
How can I fix that?</p>
| 1 | 2016-08-18T11:07:43Z | 39,016,651 | <p>You need to pass a format string <code>'%Y-%m'</code> as <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a> can't deduce the format from your string:</p>
<pre><code>In [42]:
df['date'] = pd.to_datetime(df['month'], format='%Y-%m')
df
Out[42]:
ID month date
0 0001ee12f919a1b570658024bb59d118 2014-02 2014-02-01
1 0001ee12f919a1b570658024bb59d118 2014-03 2014-03-01
2 0001ee12f919a1b570658024bb59d118 2014-04 2014-04-01
3 0001ee12f919a1b570658024bb59d118 2014-05 2014-05-01
4 0001ee12f919a1b570658024bb59d118 2014-06 2014-06-01
5 0001ee12f919a1b570658024bb59d118 2014-07 2014-07-01
In [43]:
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 6 entries, 0 to 5
Data columns (total 3 columns):
ID 6 non-null object
month 6 non-null object
date 6 non-null datetime64[ns]
dtypes: datetime64[ns](1), object(2)
memory usage: 192.0+ bytes
</code></pre>
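<p>The same format string works with the standard library's <code>strptime</code>, which also shows why the missing day defaults to the first of the month:</p>

```python
from datetime import datetime

# '%Y-%m' parses a year-month string; the day defaults to 1.
d = datetime.strptime('2014-02', '%Y-%m')
print(d)  # 2014-02-01 00:00:00
```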
| 1 | 2016-08-18T11:08:37Z | [
"python",
"datetime",
"pandas"
] |
Convert numpy matrix to python array | 39,016,638 | <p>Are there alternative or better ways to convert a numpy matrix to a python array than this?</p>
<pre><code>>>> import numpy
>>> import array
>>> b = numpy.matrix("1.0 2.0 3.0; 4.0 5.0 6.0", dtype="float16")
>>> print(b)
[[ 1. 2. 3.]
[ 4. 5. 6.]]
>>> a = array.array("f")
>>> a.fromlist((b.flatten().tolist())[0])
>>> print(a)
array('f', [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
</code></pre>
| 1 | 2016-08-18T11:08:03Z | 39,016,730 | <p>You could convert to a <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html" rel="nofollow"><code>NumPy array</code></a> and generate its flattened version with <code>.ravel()</code> or <code>.flatten()</code>. This could also be achieved by simply using the function <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html" rel="nofollow"><code>np.ravel</code></a> itself, as it does both these tasks under the hood. Finally, use <a href="https://docs.python.org/2/library/array.html#array.array" rel="nofollow"><code>array.array()</code></a> on it, like so:</p>
<pre><code>a = array.array('f',np.ravel(b))
</code></pre>
<p>Sample run -</p>
<pre><code>In [107]: b
Out[107]:
matrix([[ 1., 2., 3.],
[ 4., 5., 6.]], dtype=float16)
In [108]: array.array('f',np.ravel(b))
Out[108]: array('f', [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
</code></pre>
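<p>Another option (a small sketch) is the matrix's <code>A1</code> attribute, which returns the flattened 1-D <code>ndarray</code> directly:</p>

```python
import array
import numpy as np

b = np.matrix("1.0 2.0 3.0; 4.0 5.0 6.0", dtype="float16")
# .A1 is the flattened 1-D ndarray of the matrix
a = array.array('f', b.A1)
print(a)  # array('f', [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
```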
| 0 | 2016-08-18T11:13:09Z | [
"python",
"arrays",
"numpy"
] |
Convert numpy matrix to python array | 39,016,638 | <p>Are there alternative or better ways to convert a numpy matrix to a python array than this?</p>
<pre><code>>>> import numpy
>>> import array
>>> b = numpy.matrix("1.0 2.0 3.0; 4.0 5.0 6.0", dtype="float16")
>>> print(b)
[[ 1. 2. 3.]
[ 4. 5. 6.]]
>>> a = array.array("f")
>>> a.fromlist((b.flatten().tolist())[0])
>>> print(a)
array('f', [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
</code></pre>
| 1 | 2016-08-18T11:08:03Z | 39,016,751 | <p>Here is an example using <code>tolist()</code> (note that this produces a nested Python list rather than an <code>array.array</code>):</p>
<pre><code>>>> x = np.matrix(np.arange(12).reshape((3,4))); x
matrix([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> x.tolist()
[[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
</code></pre>
| -1 | 2016-08-18T11:13:50Z | [
"python",
"arrays",
"numpy"
] |
Python how to delete a line in a table | 39,016,758 | <p>I am using Python, and a function gives me this output:</p>
<pre><code> [['2', 'prod1', 'Ela - Available'], ['2', 'prod1', 'Base - Replication logs']]
</code></pre>
<p>My goal is to delete all lines containing "Available" with Python.</p>
<p>So the result should be:</p>
<pre><code>[['2', 'prod1', 'Base - Replication logs']]
</code></pre>
| 0 | 2016-08-18T11:14:08Z | 39,016,877 | <pre><code>>>> l = [['2', 'prod1', 'Ela - Available'], ['2', 'prod1', 'Base - Replication logs']]
>>> filter(lambda x: not any('Available' in y for y in x), l)
[['2', 'prod1', 'Base - Replication logs']]
</code></pre>
| 3 | 2016-08-18T11:19:57Z | [
"python"
] |
Python how to delete a line in a table | 39,016,758 | <p>I am using Python, and a function gives me this output:</p>
<pre><code> [['2', 'prod1', 'Ela - Available'], ['2', 'prod1', 'Base - Replication logs']]
</code></pre>
<p>My goal is to delete all lines containing "Available" with Python.</p>
<p>So the result should be:</p>
<pre><code>[['2', 'prod1', 'Base - Replication logs']]
</code></pre>
| 0 | 2016-08-18T11:14:08Z | 39,016,904 | <p>Try this:</p>
<pre><code>data = [['2', 'prod1', 'Ela - Available'], ['2', 'prod1', 'Base - Replication logs']]
output = [line for line in data if not 'Available' in str(line)]
print(output)
[['2', 'prod1', 'Base - Replication logs']]
</code></pre>
| 4 | 2016-08-18T11:21:05Z | [
"python"
] |
Minimize total distance between two sets of points in Python | 39,016,821 | <p>Given two sets of points in n-dimensional space, how can one map points from one set to the other, such that each point is only used once and the total euclidean distance between the pairs of points is minimized?</p>
<p>For example,</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# create six points in 2d space; the first three belong to set "A" and the
# second three belong to set "B"
x = [1, 2, 3, 1.8, 1.9, 3.4]
y = [2, 3, 1, 2.6, 3.4, 0.4]
colors = ['red'] * 3 + ['blue'] * 3
plt.scatter(x, y, c=colors)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/6q6ah.png" rel="nofollow"><img src="http://i.stack.imgur.com/6q6ah.png" alt="example of point distance minimization problem"></a></p>
<p>So in the example above, the goal would be to map each red point to a blue point such that each blue point is only used once and the sum of the distances between points is minimized.</p>
<p>I came across <a href="http://stackoverflow.com/questions/1871536/euclidean-distance-between-points-in-two-different-numpy-arrays-not-within">this question</a> which helps to solve the first part of the problem -- computing the distances between all pairs of points <em>across</em> sets using the <code>scipy.spatial.distance.cdist()</code> function.</p>
<p>From there, I could probably test every permutation of single elements from each row, and find the minimum.</p>
<p>The application I have in mind involves a fairly small number of datapoints in 3-dimensional space, so the brute force approach might be fine, but I thought I would check to see if anyone knows of a more efficient or elegant solution first. </p>
| 2 | 2016-08-18T11:16:56Z | 39,017,677 | <p>There's a known algorithm for this, <a href="http://www.math.harvard.edu/archive/20_spring_05/handouts/assignment_overheads.pdf" rel="nofollow">The Hungarian Method For Assignment</a>, which works in time <em>O(n<sup>3</sup>)</em>. </p>
<p>In SciPy, you can find an implementation in <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linear_sum_assignment.html" rel="nofollow"><code>scipy.optimize.linear_sum_assignment</code></a></p>
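<p>For context, the brute-force approach the question mentions can be sketched in pure Python. It is fine for small sets, but it enumerates all <em>n</em>! assignments, which is why the Hungarian method's <em>O(n<sup>3</sup>)</em> matters:</p>

```python
import itertools
import math

# The red/blue points from the question.
A = [(1, 2), (2, 3), (3, 1)]
B = [(1.8, 2.6), (1.9, 3.4), (3.4, 0.4)]

def total(perm):
    # Sum of Euclidean distances when A[i] is paired with B[perm[i]].
    return sum(math.hypot(a[0] - B[j][0], a[1] - B[j][1])
               for a, j in zip(A, perm))

# Try every one-to-one assignment and keep the cheapest.
best = min(itertools.permutations(range(len(B))), key=total)
print(best)  # (0, 1, 2)
```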
| 2 | 2016-08-18T11:59:32Z | [
"python",
"scipy",
"euclidean-distance"
] |
Minimize total distance between two sets of points in Python | 39,016,821 | <p>Given two sets of points in n-dimensional space, how can one map points from one set to the other, such that each point is only used once and the total euclidean distance between the pairs of points is minimized?</p>
<p>For example,</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# create six points in 2d space; the first three belong to set "A" and the
# second three belong to set "B"
x = [1, 2, 3, 1.8, 1.9, 3.4]
y = [2, 3, 1, 2.6, 3.4, 0.4]
colors = ['red'] * 3 + ['blue'] * 3
plt.scatter(x, y, c=colors)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/6q6ah.png" rel="nofollow"><img src="http://i.stack.imgur.com/6q6ah.png" alt="example of point distance minimization problem"></a></p>
<p>So in the example above, the goal would be to map each red point to a blue point such that each blue point is only used once and the sum of the distances between points is minimized.</p>
<p>I came across <a href="http://stackoverflow.com/questions/1871536/euclidean-distance-between-points-in-two-different-numpy-arrays-not-within">this question</a> which helps to solve the first part of the problem -- computing the distances between all pairs of points <em>across</em> sets using the <code>scipy.spatial.distance.cdist()</code> function.</p>
<p>From there, I could probably test every permutation of single elements from each row, and find the minimum.</p>
<p>The application I have in mind involves a fairly small number of datapoints in 3-dimensional space, so the brute force approach might be fine, but I thought I would check to see if anyone knows of a more efficient or elegant solution first. </p>
| 2 | 2016-08-18T11:16:56Z | 39,019,606 | <p>An example of assigning (mapping) the elements of one set of points to the elements of another set of points such that the summed Euclidean distance is minimized.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment
np.random.seed(100)
points1 = np.array([(x, y) for x in np.linspace(-1,1,7) for y in np.linspace(-1,1,7)])
N = points1.shape[0]
points2 = 2*np.random.rand(N,2)-1
C = cdist(points1, points2)
_, assignment = linear_sum_assignment(C)
plt.plot(points1[:,0], points1[:,1],'bo', markersize = 10)
plt.plot(points2[:,0], points2[:,1],'rs', markersize = 7)
for p in range(N):
    plt.plot([points1[p,0], points2[assignment[p],0]], [points1[p,1], points2[assignment[p],1]], 'k')
plt.xlim(-1.1,1.1)
plt.ylim(-1.1,1.1)
plt.axes().set_aspect('equal')
</code></pre>
<p><a href="http://i.stack.imgur.com/xIaOt.png" rel="nofollow"><img src="http://i.stack.imgur.com/xIaOt.png" alt="enter image description here"></a></p>
| 1 | 2016-08-18T13:32:36Z | [
"python",
"scipy",
"euclidean-distance"
] |
How to remove duplicate links in text file? | 39,016,909 | <p>So I have a text file that inside looks like this:</p>
<pre><code>http://example.pl/folder/this_same1.avi
http://example.pl/folder/this_same1.avi
http://example.pl/folder/this_same2.avi
http://example.pl/folder/this_same2.avi
http://example.pl/folder/this_same3.avi
http://example.pl/folder/this_same3.avi
</code></pre>
<p>And I want to delete all the duplicate links,
so that the output file looks like this:</p>
<pre><code>http://example.pl/folder/this_same1.avi
http://example.pl/folder/this_same2.avi
http://example.pl/folder/this_same3.avi
</code></pre>
| -3 | 2016-08-18T11:21:18Z | 39,017,003 | <p>Oh I've improved my answer: </p>
<pre><code>links = set()
with open('file.txt', 'r') as fp:
for line in fp.readlines():
links.add(line)
</code></pre>
<p>Then you can write back to the file:</p>
<pre><code>with open('file.txt', 'wb') as fp:
for line in links:
fp.write(line)
</code></pre>
<p>Test it yourself..</p>
| 1 | 2016-08-18T11:26:00Z | [
"python",
"parsing",
"hyperlink",
"duplicates",
"text-parsing"
] |
How to remove duplicate links in text file? | 39,016,909 | <p>So I have a text file that inside looks like this:</p>
<pre><code>http://example.pl/folder/this_same1.avi
http://example.pl/folder/this_same1.avi
http://example.pl/folder/this_same2.avi
http://example.pl/folder/this_same2.avi
http://example.pl/folder/this_same3.avi
http://example.pl/folder/this_same3.avi
</code></pre>
<p>And I want to delete all the duplicate links,
so that the output file looks like this:</p>
<pre><code>http://example.pl/folder/this_same1.avi
http://example.pl/folder/this_same2.avi
http://example.pl/folder/this_same3.avi
</code></pre>
| -3 | 2016-08-18T11:21:18Z | 39,017,208 | <p>If that structure is consistent and order is important:</p>
<pre><code>links = fp.readlines()[::2]
</code></pre>
<p>If structure is not consistent, and order is important:</p>
<pre><code>links = []
for line in fp.readlines():
if line not in links:
links.append(line)
</code></pre>
<p>Then write to file.</p>
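<p>On Python 3.7+ the same order-preserving deduplication can also be written with <code>dict.fromkeys</code> (a sketch using inline sample data):</p>

```python
lines = [
    'http://example.pl/folder/this_same1.avi\n',
    'http://example.pl/folder/this_same1.avi\n',
    'http://example.pl/folder/this_same2.avi\n',
]
# dict keys are unique and (since Python 3.7) keep insertion order.
links = list(dict.fromkeys(lines))
print(links)  # the two unique lines, in their original order
```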
| 0 | 2016-08-18T11:36:44Z | [
"python",
"parsing",
"hyperlink",
"duplicates",
"text-parsing"
] |
Django ORM not working for models | 39,016,998 | <p>I have a model Product, and it has a related model Variation for product variations. Now I am trying to add a Shop model on top of Products, and I wish to list the products uploaded by the shop owner who created the shop. But I am getting an error. Following is my code:</p>
<pre><code>class Product(models.Model):
title = models.CharField(max_length=120)
description = models.TextField(blank=True, null=True)
price = models.DecimalField(decimal_places=2, max_digits=20)
active = models.BooleanField(default=True)
categories = models.ManyToManyField('Category', blank=True)
default = models.ForeignKey('Category', related_name='default_category', null=True, blank=True)
hits = models.ManyToManyField(HitCount, blank=True)
hitcounts = GenericRelation(HitCount, content_type_field='content_type', object_id_field='object_pk',)
objects = ProductManager()
class Meta:
ordering = ["-title"]
class Variation(models.Model):
product = models.ForeignKey(Product)
title = models.CharField(max_length=120)
price = models.DecimalField(decimal_places=2, max_digits=20)
sale_price = models.DecimalField(decimal_places=2, max_digits=20, null=True, blank=True)
active = models.BooleanField(default=True)
inventory = models.IntegerField(null=True, blank=True) #refer none == unlimited amount
def __unicode__(self):
return self.title
class Shop(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, null=False, blank=False)
products = models.ManyToManyField(Product)
name = models.CharField(max_length=120)
image = models.ImageField(upload_to=image_upload_to_shop, null=True)
location = models.CharField(max_length=120)
def __unicode__(self):
return self.name
</code></pre>
<p>This is the error I am getting when I try to add a shop:</p>
<pre><code>Environment:
Request Method: POST
Request URL: http://localhost:8000/admin/products/shop/add/
Django Version: 1.8.5
Python Version: 2.7.9
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'blog',
'products',
'orders',
'carts',
'newsletter',
'crispy_forms',
'registration',
'colorfield',
'hitcount')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\core\handlers\base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\contrib\admin\options.py" in wrapper
616. return self.admin_site.admin_view(view)(*args, **kwargs)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\utils\decorators.py" in _wrapped_view
110. response = view_func(request, *args, **kwargs)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\views\decorators\cache.py" in _wrapped_view_func
57. response = view_func(request, *args, **kwargs)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\contrib\admin\sites.py" in inner
233. return view(request, *args, **kwargs)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\contrib\admin\options.py" in add_view
1516. return self.changeform_view(request, None, form_url, extra_context)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\utils\decorators.py" in _wrapper
34. return bound_func(*args, **kwargs)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\utils\decorators.py" in _wrapped_view
110. response = view_func(request, *args, **kwargs)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\utils\decorators.py" in bound_func
30. return func.__get__(self, type(self))(*args2, **kwargs2)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\utils\decorators.py" in inner
145. return func(*args, **kwargs)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\contrib\admin\options.py" in changeform_view
1468. self.save_related(request, form, formsets, not add)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\contrib\admin\options.py" in save_related
1100. form.save_m2m()
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\forms\models.py" in save_m2m
102. f.save_form_data(instance, cleaned_data[f.name])
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\models\fields\related.py" in save_form_data
2590. setattr(instance, self.attname, data)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\models\fields\related.py" in __set__
1261. manager.clear()
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\models\fields\related.py" in clear
998. self.through._default_manager.using(db).filter(filters).delete()
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\models\query.py" in delete
537. collector.delete()
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\models\deletion.py" in delete
292. qs._raw_delete(using=self.using)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\models\query.py" in _raw_delete
549. sql.DeleteQuery(self.model).delete_qs(self, using)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\models\sql\subqueries.py" in delete_qs
78. self.get_compiler(using).execute_sql(NO_RESULTS)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\models\sql\compiler.py" in execute_sql
840. cursor.execute(sql, params)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\backends\utils.py" in execute
79. return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\backends\utils.py" in execute
64. return self.cursor.execute(sql, params)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\utils.py" in __exit__
97. six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\backends\utils.py" in execute
64. return self.cursor.execute(sql, params)
File "C:\Users\Shazia\ecommerce\lib\site-packages\django\db\backends\sqlite3\base.py" in execute
318. return Database.Cursor.execute(self, query, params)
Exception Type: OperationalError at /admin/products/shop/add/
Exception Value: no such table: products_shop_products
</code></pre>
<p>Here is the migration file for my app:</p>
<p>001-Initial.py</p>
<pre><code>from __future__ import unicode_literals
from django.db import migrations, models
import colorfield.fields
import products.models


class Migration(migrations.Migration):

    dependencies = [
        ('hitcount', '0001_initial'),
    ]

    operations = [
        migrations.CreateModel(
            name='Category',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('title', models.CharField(unique=True, max_length=120)),
                ('slug', models.SlugField(unique=True)),
                ('description', models.TextField(null=True, blank=True)),
                ('active', models.BooleanField(default=True)),
                ('timestamp', models.DateTimeField(auto_now_add=True)),
            ],
        ),
        migrations.CreateModel(
            name='color_product',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('color', colorfield.fields.ColorField(default=b'#FF0000', max_length=10)),
            ],
        ),
        migrations.CreateModel(
            name='Product',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('title', models.CharField(max_length=120)),
                ('description', models.TextField(null=True, blank=True)),
                ('price', models.DecimalField(max_digits=20, decimal_places=2)),
                ('active', models.BooleanField(default=True)),
                ('categories', models.ManyToManyField(to='products.Category', blank=True)),
                ('default', models.ForeignKey(related_name='default_category', blank=True, to='products.Category', null=True)),
                ('hits', models.ManyToManyField(to='hitcount.HitCount', blank=True)),
            ],
            options={
                'ordering': ['-title'],
            },
        ),
        migrations.CreateModel(
            name='ProductFeatured',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('image', models.ImageField(upload_to=products.models.image_upload_to_featured)),
                ('title', models.CharField(max_length=120, null=True, blank=True)),
                ('text', models.CharField(max_length=220, null=True, blank=True)),
                ('text_right', models.BooleanField(default=False)),
                ('text_css_color', models.CharField(max_length=6, null=True, blank=True)),
                ('show_price', models.BooleanField(default=False)),
                ('make_image_background', models.BooleanField(default=False)),
                ('active', models.BooleanField(default=True)),
                ('product', models.ForeignKey(to='products.Product')),
            ],
        ),
        migrations.CreateModel(
            name='ProductImage',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('image', models.ImageField(upload_to=products.models.image_upload_to)),
                ('product', models.ForeignKey(to='products.Product')),
            ],
        ),
        migrations.CreateModel(
            name='Shop',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('name', models.CharField(max_length=120)),
                ('image', models.ImageField(null=True, upload_to=products.models.image_upload_to_shop)),
                ('location', models.CharField(max_length=120)),
                ('products', models.ManyToManyField(to='products.Product')),
            ],
        ),
        migrations.CreateModel(
            name='size_product',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('size', models.CharField(max_length=10, choices=[(b'XS', b'XS'), (b'S', b'S'), (b'SM', b'SM'), (b'M', b'M'), (b'L', b'L'), (b'XL', b'Xl'), (b'XXL', b'XXL')])),
                ('price', models.DecimalField(max_digits=20, decimal_places=2)),
                ('product', models.ForeignKey(to='products.Product')),
            ],
        ),
        migrations.CreateModel(
            name='Variation',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('title', models.CharField(max_length=120)),
                ('price', models.DecimalField(max_digits=20, decimal_places=2)),
                ('sale_price', models.DecimalField(null=True, max_digits=20, decimal_places=2, blank=True)),
                ('active', models.BooleanField(default=True)),
                ('inventory', models.IntegerField(null=True, blank=True)),
                ('product', models.ForeignKey(to='products.Product')),
            ],
        ),
        migrations.AddField(
            model_name='color_product',
            name='product',
            field=models.ForeignKey(to='products.Product'),
        ),
    ]
</code></pre>
<p>0002_shop_user.py</p>
<pre><code>from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('products', '0001_initial'),
    ]

    operations = [
        migrations.AddField(
            model_name='shop',
            name='user',
            field=models.ForeignKey(default=1, to=settings.AUTH_USER_MODEL),
            preserve_default=False,
        ),
    ]
</code></pre>
| 0 | 2016-08-18T11:25:41Z | 39,018,379 | <p>This is strange but once I deleted all my migrations and db again...it worked. Thank you for help.</p>
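<p>For reference, the "delete the migrations and the db" reset described above usually looks like the following in development. This is a sketch only: it assumes a SQLite database file at the project root and a single <code>products</code> app, and it destroys all existing data, so it is only appropriate outside production.</p>

```
# Remove the SQLite database file
rm db.sqlite3

# Delete the files in products/migrations/ except __init__.py, then rebuild:
python manage.py makemigrations products
python manage.py migrate
```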
| 0 | 2016-08-18T12:34:01Z | [
"python",
"django",
"django-models"
] |
How to join two list of tuples without duplicates | 39,017,018 | <p>I am running two queries to a database, and the result I get from each is a list of tuples, which is perfect. I would like to join these into one list of tuples. These are examples of the tuples: </p>
<pre><code>list1 = [('abc', 1 ), ('def', 2) ... ]
list2 = [(1000, 'abc'), (2000, 'def' ), (3000, 'def') ... ]
</code></pre>
<p>I want to create just one list of tuples, and I join them like this:</p>
<pre><code>q = []
for i in list1:
    for j in list2:
        if i[0] == (j[1]):
            i = i + (j[0],)
            q.append(i)
</code></pre>
<p>This returns duplicates in my new list <code>q</code> as I get something like this:</p>
<pre><code>q = [('abc', 1 , 1000) , ('def', 2, 2000), ('def', 2, 2000, 3000) ...]
</code></pre>
<p>How can I avoid getting duplicates like the second list of tuples in the q list?
I want just <code>('def', 2, 2000, 3000)</code> and not this <code>('def', 2, 2000), ('def', 2, 2000, 3000)</code></p>
<p>I've been stuck on this for a while so any help is appreciated. Thanks </p>
| 1 | 2016-08-18T11:26:34Z | 39,017,134 | <p>You don't need to append <code>i</code> inside the inner loop. Just once at the end of the outer loop.</p>
<pre><code>q = []
for i in list1:
    for j in list2:
        if i[0] == j[1]:
            i = i + (j[0],)
    q.append(i)
</code></pre>
<p>There was also a typo in the outer loop. It should be <code>list1</code> instead of just <code>list</code>.</p>
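<p>Putting the corrected loop together with the sample data from the question gives a runnable sketch; the key change is that <code>q.append(i)</code> now runs once per <code>list1</code> entry, after the inner loop has collected all matches:</p>

```python
list1 = [('abc', 1), ('def', 2)]
list2 = [(1000, 'abc'), (2000, 'def'), (3000, 'def')]

q = []
for i in list1:
    for j in list2:
        if i[0] == j[1]:
            i = i + (j[0],)   # keep extending the tuple while scanning list2
    q.append(i)               # append once, after all matches are collected

print(q)  # [('abc', 1, 1000), ('def', 2, 2000, 3000)]
```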
| 0 | 2016-08-18T11:32:47Z | [
"python",
"list",
"tuples",
"sql-server-express"
] |
How to join two list of tuples without duplicates | 39,017,018 | <p>I am running two queries to a database, and the result I get from each is a list of tuples, which is perfect. I would like to join these into one list of tuples. These are examples of the tuples: </p>
<pre><code>list1 = [('abc', 1 ), ('def', 2) ... ]
list2 = [(1000, 'abc'), (2000, 'def' ), (3000, 'def') ... ]
</code></pre>
<p>I want to create just one list of tuples, and I join them like this:</p>
<pre><code>q = []
for i in list1:
    for j in list2:
        if i[0] == (j[1]):
            i = i + (j[0],)
            q.append(i)
</code></pre>
<p>This returns duplicates in my new list <code>q</code> as I get something like this:</p>
<pre><code>q = [('abc', 1 , 1000) , ('def', 2, 2000), ('def', 2, 2000, 3000) ...]
</code></pre>
<p>How can I avoid getting duplicates like the second list of tuples in the q list?
I want just <code>('def', 2, 2000, 3000)</code> and not this <code>('def', 2, 2000), ('def', 2, 2000, 3000)</code></p>
<p>I've been stuck on this for a while so any help is appreciated. Thanks </p>
| 1 | 2016-08-18T11:26:34Z | 39,017,633 | <p>Using nested loops is OK if your lists are fairly small, but it soon becomes inefficient for larger lists. E.g., if len(list1) == 10 and len(list2) == 20, the code inside the inner loop is executed 200 times. </p>
<p>Here's an algorithm that builds the desired list of tuples via a dictionary. The dictionary stores the tuple data in lists because it's more efficient: it's possible to append to lists, whereas tuples are immutable, so each time you add an item to the end of a tuple with <code>i = i + (j[0],)</code> you're actually creating a new tuple object (as well as the temporary <code>(j[0],)</code> tuple) and discarding the old one that was bound to <code>i</code>.</p>
<pre><code>list1 = [('abc', 1 ), ('def', 2), ('ghi', 3)]
list2 = [
    (1000, 'abc'),
    (2000, 'def'),
    (2100, 'def'),
    (3000, 'ghi'),
    (3100, 'ghi'),
    (3200, 'ghi'),
]
# Insert list1 data into a dict of lists
d = {t[0]:list(t) for t in list1}
# Append list2 data to the correct list
for v, k in list2:
    d[k].append(v)
# Convert lists back into tuples, using the key order from list1
result = [tuple(d[k]) for k, _ in list1]
for t in result:
    print(t)
</code></pre>
<p><strong>output</strong></p>
<pre><code>('abc', 1, 1000)
('def', 2, 2000, 2100)
('ghi', 3, 3000, 3100, 3200)
</code></pre>
<p>With this algorithm, if len(list1) == 10 and len(list2) == 20 then we have a loop of length 10 to build the dictionary <code>d</code>, a loop of length 20 to append the list2 data to <code>d</code>'s lists, and another loop of length 10 to build the final list of tuples. The steps inside each of those loops are fairly basic, roughly on par with your <code>i = i + (j[0],)</code>, and obviously 40 steps is a lot better than 200. And of course if the input lists had 1000 items each then my code would take 3000 loops in contrast to the one million loops required with the nested loop approach. </p>
<p>I should also mention that this code will raise <code>KeyError</code> if <code>list2</code> contains a key that's not in <code>list1</code>. Presumably this isn't an issue for the data you're processing, since your code (and Sevanteri's) silently ignores such keys. If you <em>do</em> need to handle such keys it's fairly simple to do so, but it makes my <code>list2</code> loop simpler & more efficient if it doesn't have to handle missing keys. </p>
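<p>If list2 may contain keys that are absent from list1, a small variant of the same algorithm skips them instead of raising <code>KeyError</code>. The <code>merge_tuples</code> helper name below is just for illustration; it assumes the same data shapes as above:</p>

```python
def merge_tuples(list1, list2):
    # Build a dict of lists keyed on the first element of each list1 tuple
    d = {t[0]: list(t) for t in list1}
    for v, k in list2:
        if k in d:            # silently ignore keys not present in list1
            d[k].append(v)
    # Convert back to tuples, preserving list1's order
    return [tuple(d[k]) for k, _ in list1]

list1 = [('abc', 1), ('def', 2)]
list2 = [(1000, 'abc'), (2000, 'def'), (9999, 'zzz')]  # 'zzz' has no match
print(merge_tuples(list1, list2))  # [('abc', 1, 1000), ('def', 2, 2000)]
```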
| 5 | 2016-08-18T11:57:20Z | [
"python",
"list",
"tuples",
"sql-server-express"
] |
findAll doesn't work, nested tags | 39,017,243 | <p>I'm parsing through <a href="https://www.wired.com/2016/08/live-debate-whats-right-kind-intersection" rel="nofollow">this</a> page. I need to get text content - which is located in <code>p</code> tags. The general structure of the page is the following:</p>
<pre><code><html>
  <body>
    <article itemprop="articleBody">
      <div...>
        <div...>
          <figure>
            <span..></span>
            <p>THE TEXT</p>
          </figure>
        </div>
      </div>
    </article>
  </body>
</html>
</code></pre>
<p>So the <code>p</code> is not a direct child of <code>article</code> but it is still inside, and <code>findAll</code> should be able to find it. But it doesn't.</p>
<pre><code>articleBody=soupArticle.find("article", {"itemprop":"articleBody"})
textList=articleBody.findAll("p")
print(len(textList)) #gives 0
</code></pre>
<p>What am I doing wrong here?</p>
| -1 | 2016-08-18T11:38:04Z | 39,017,365 | <p>You should use something like this:</p>
<pre><code>for p in soupArticle.findAll("article", {"itemprop":"articleBody"}):
    textList = p.find_all("p")
    print(len(textList))
</code></pre>
<p>It probably would help.</p>
| 1 | 2016-08-18T11:44:34Z | [
"python",
"web-scraping",
"beautifulsoup"
] |
findAll doesn't work, nested tags | 39,017,243 | <p>I'm parsing through <a href="https://www.wired.com/2016/08/live-debate-whats-right-kind-intersection" rel="nofollow">this</a> page. I need to get text content - which is located in <code>p</code> tags. The general structure of the page is the following:</p>
<pre><code><html>
  <body>
    <article itemprop="articleBody">
      <div...>
        <div...>
          <figure>
            <span..></span>
            <p>THE TEXT</p>
          </figure>
        </div>
      </div>
    </article>
  </body>
</html>
</code></pre>
<p>So the <code>p</code> is not a direct child of <code>article</code> but it is still inside, and <code>findAll</code> should be able to find it. But it doesn't.</p>
<pre><code>articleBody=soupArticle.find("article", {"itemprop":"articleBody"})
textList=articleBody.findAll("p")
print(len(textList)) #gives 0
</code></pre>
<p>What am I doing wrong here?</p>
| -1 | 2016-08-18T11:38:04Z | 39,017,389 | <p>Check the documentation: are you sure it is findAll? <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all" rel="nofollow">https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all</a></p>
| -2 | 2016-08-18T11:45:29Z | [
"python",
"web-scraping",
"beautifulsoup"
] |
findAll doesn't work, nested tags | 39,017,243 | <p>I'm parsing through <a href="https://www.wired.com/2016/08/live-debate-whats-right-kind-intersection" rel="nofollow">this</a> page. I need to get text content - which is located in <code>p</code> tags. The general structure of the page is the following:</p>
<pre><code><html>
  <body>
    <article itemprop="articleBody">
      <div...>
        <div...>
          <figure>
            <span..></span>
            <p>THE TEXT</p>
          </figure>
        </div>
      </div>
    </article>
  </body>
</html>
</code></pre>
<p>So the <code>p</code> is not a direct child of <code>article</code> but it is still inside, and <code>findAll</code> should be able to find it. But it doesn't.</p>
<pre><code>articleBody=soupArticle.find("article", {"itemprop":"articleBody"})
textList=articleBody.findAll("p")
print(len(textList)) #gives 0
</code></pre>
<p>What am I doing wrong here?</p>
| -1 | 2016-08-18T11:38:04Z | 39,017,953 | <p>The HTML that you see in your browser is not the same as the HTML that you would get if you retrieved it with <code>urllib</code>, <code>requests</code> or other HTTP client - assuming that that's how you obtain the HTML.</p>
<p>That's because the content that you are after is inserted dynamically into the document with Javascript. You might need to use something like <a href="http://www.seleniumhq.org/projects/webdriver/" rel="nofollow">Selenium webdriver</a> to programmatically control your browser so that the content is rendered via the Javascript.</p>
<p>Take a look at the value of <code>articleBody</code> after the <code>find()</code>:</p>
<pre><code>import urllib2
from bs4 import BeautifulSoup
url = 'https://www.wired.com/2016/08/live-debate-whats-right-kind-intersection'
html = urllib2.urlopen(url).read()
soup = BeautifulSoup(html)
>>> print soup.article
<article class="content link-underline relative body-copy" data-js="content" itemprop="articleBody">
<a class="visually-hidden skip-to-text-link focusable bg-white" href="#start-of-content">Go Back to Top. Skip To: Start of Article.</a>
</article>
</code></pre>
<p>This shows that the content is not where you have presumed it to be; it is embedded in the <code>&lt;script&gt;</code> tags and dynamically inserted by Javascript when the page is loaded.</p>
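<p>As a sanity check that the nested-tag lookup itself is not the problem, here is a minimal, network-free sketch using the structure from the question. On static HTML, <code>find()</code> followed by <code>find_all()</code> locates the deeply nested <code>p</code> just fine, which confirms the issue is the Javascript-rendered page, not BeautifulSoup:</p>

```python
from bs4 import BeautifulSoup

# Static HTML mirroring the question's structure -- no Javascript involved
html = """
<article itemprop="articleBody">
  <div><div><figure><span></span><p>THE TEXT</p></figure></div></div>
</article>
"""

soup = BeautifulSoup(html, "html.parser")
article_body = soup.find("article", {"itemprop": "articleBody"})
paragraphs = article_body.find_all("p")
print(len(paragraphs), paragraphs[0].text)  # 1 THE TEXT
```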
| 1 | 2016-08-18T12:13:41Z | [
"python",
"web-scraping",
"beautifulsoup"
] |